
Issue 30 - The Big Book of Bullsh*t



Myth Busters: THE BIG BOOK OF BULLSH!T

Letter from the Editor

Ella Crisologo

Hello and welcome to a VERY special edition of Scientifica Magazine! As we come to the end of the University of Miami’s centennial year and celebrate the 10th anniversary of our magazine, I want to take this opportunity to express my appreciation for everyone – past AND present – who has helped to make Scientifica Magazine what it is today.

I would be lying if I said that stepping into this role, especially following Shirley Pandya, has been easy, and in leading the University’s only student-led publication, I have truly come to understand the meaning of the phrase, “It takes a village.” Without the organization's revealing writers and talented artists & designers, WE would not be the Scientifica that you know today.

As they approach the culmination of their time at UM, I would also like to recognize the senior members of our Executive Board: Chaunté Lewis (Art & Design), Ethan Tieu (Writing), and Nina Adrineda (Copy Editing).

With that being said, I invite you to explore Issue 30, Myth Busters: The Big Book of Bullsh*t. As you flip through the pages, take note of the beautiful artwork organized by Chaunté, and allow yourself to be debunked, for many of the things that we commonly perceive as true are not at all what they seem in reality.

Microbiology & Immunology, and Public Health, Class of 2027
Editor-in-Chief, Scientifica Magazine

Letter from the Editorial Advisor

In the scientific community, we often assume that data speaks for itself. We hope that if we present the facts, the world will listen. However, as we navigate a landscape increasingly cluttered with pseudoscience and viral misinformation, it is becoming clear that data alone is not enough. We must also be effective storytellers.

This semester, the staff of Scientifica Magazine has taken on the vital task of "myth-busting." They have curated a collection of articles that challenge the status quo and demand evidence. From the physics of the "Big Bang" to the misunderstood nature of mental illness, these students are not just reporting science; they are defending the scientific method.

I am particularly proud of how this issue balances scientific curiosity with social responsibility. The team has fearlessly tackled controversial topics, such as the efficacy of SPF labeling, the "Ozempic craze," and the grim reality of rare cancers often overlooked by major funding.

To the writers, designers, and editors—thank you for your dedication to accuracy and integrity. You have created a resource that not only educates but empowers your readers to think critically.

As you flip through these pages, I invite you to challenge your own assumptions. Science is a journey of discovery, and sometimes, the most important discovery is realizing that what you thought you knew was wrong.

Roger I. Williams Jr., M.S. Ed.
Director, Student Activities
Advisor, Microbiology & Immunology
Editorial Advisor, UMiami Scientifica

SCIENTIFICA STAFF 2025

CORE TEAM

Ella Crisologo, Editor-in-Chief
Dominique Thomas, Managing Editor
Chaunté Lewis, Art & Design Director
Ethan Tieu, Director of Writing
Nina Adrineda, Copy Chief
Jolie Abdelmalak, Director of Photography
Nina Cottone, Director of Distribution and Community Outreach
Francisco Hernandez, VP of Finance
Milana Camilleri, Director of Public Relations

Board of Advisors

Barbara Colonna Ph.D.

Senior Lecturer

Organic Chemistry

Department of Chemistry

Richard J. Cote, M.D., FRCPath, FCAP

Professor & Joseph R. Coulter Jr. Chair

Department of Pathology

Professor, Dept. of Biochemistry & Molecular Biology

Chief of Pathology, Jackson Memorial Hospital

Director, Dr. John T. Macdonald Foundation

Biochemical Nanotechnology Institute

University of Miami Miller School of Medicine

Michael S. Gaines, Ph.D.

Assistant Provost, Undergraduate Research and Community Outreach

Professor of Biology

Mathias G. Lichtenheld, M.D.

Associate Professor of Microbiology & Immunology

FBS 3 Coordinator

University of Miami Miller School of Medicine

Charles Mallery, Ph.D.

Associate Professor

Biology & Cellular and Molecular Biology

Associate Dean

April Mann

Director of the Writing Center

Catherine Newell, Ph.D.

Associate Professor of Religion

Leticia Oropesa, D.A.
Coordinator, Department of Mathematics

*Eckhard R. Podack, M.D., Ph.D.
Professor & Chair
Department of Microbiology & Immunology
University of Miami Miller School of Medicine

Adina Sanchez-Garcia
Associate Director of English Composition
Senior Lecturer

Geoff Sutcliffe, Ph.D.
Professor of Computer Science

Yunqiu (Daniel) Wang, Ph.D.
Senior Lecturer
Department of Biology

Section Editors

Ariel Shavit, Ethics
Sophia Odell, News
Veronica Richmond & Alexander Serrano, Research
Sabrina Muller & Anya Beniwal, Health
Sophia Odell, Profiles

Artists & Designers

Chaunté Lewis

Veronica Richmond

Ashleigh Morris

Mimi Fingold

Sasha Thorne

Kaitlyn Hancock

Editors

Ashleigh Morris

Kaveer Bhagwandeen

Kristina Majkic

Milana Camilleri

Ivy Chen

Hannah Ko

Sabrina Merola

Isabella Reichard

Caleb Duke

Alexander Serrano

Erich Laschinski

Audrey Zhang

Nina Cottone

Ana Widmer Witthuser

Emily Bauer

Rhea Sharma

Emma Mears

Megan Haniger

Anya Beniwal

Mahlet Kassim

Writers

Robert Jaca-Baez

Hannah Herskovic

Sabrina Muller

Isabella Reichard

Nina Cottone

Naeelah Refah

Kasey Moriarty

Janna Mawa

Kaveer Bhagwandeen

Ashleigh Morris

Roxio Ortiz

Arshom Tavakoli

Chloe Ruiseco

Semaj McCoy

Krish Patel

Antonio Blanco

Nicole Vedder

Mariana Nieto Reyes

Pearl Amromin

Jolie Abdelmalak

*Deceased

The Universe Before the Big Bang

Design: Chaunté Lewis

Have you ever wondered how the universe began? Well, many people believe that everything around us started with the Big Bang. The Big Bang Theory postulates that about 13.8 billion years ago, the universe began from an extremely small, hot and dense state which rapidly expanded into what we know of today, all in a fraction of a second. This state is known as “singularity,” and it marks the beginning of everything we can comprehend. The initial expansion that occurred is called cosmic inflation, which expanded the universe faster than light itself. Within 10⁻³² seconds, space was on track to rapidly cool and enlarge.

Many of the key building blocks of physics and chemistry formed shortly after cosmic inflation. Only one microsecond after the Big Bang took place, subatomic particles including protons, neutrons and electrons were formed. Three minutes later, helium and hydrogen became the first elements to exist. 380,000 years later, the universe cooled down enough to allow atomic nuclei to capture electrons and form the first atoms. This formation produced the oldest known light in existence, called the cosmic microwave background. The cosmic microwave background is leftover radiation from the Big Bang, and it is evidence that the universe was once in a hot, dense state.

More visible light was produced 200 million years after the Big Bang, as gas and dust from around the universe condensed into the first stars. Another 200 million years later, galaxies began to form, with dark matter surrounding it all. Dark matter is the invisible glue that holds the universe together. It makes up 27% of the universe, and is only detectable because of its gravitational effects.

Ten billion years after the Big Bang, the universe’s expansion began to accelerate. Up until 1998, astronomers mistakenly believed the opposite was happening. Nobel Prize-winning astronomers Adam Riess, Saul Perlmutter, and Brian Schmidt discovered that the prime suspect is dark energy, a mysterious form of energy that scientists know close to nothing about, except that it makes up 68% of the universe. According to researchers, dark energy appears to exert a negative pressure that pushes space outward, though they do not know the specifics of this phenomenon. Dark energy is only a name scientists gave to the unknown "something" causing the universe’s acceleration. Together, dark energy and dark matter make up 95% of the universe, whereas the things we humans can observe make up only 5%.

Evidence supports that the Big Bang took place 13.8 billion years ago, but we are still waiting to see what new discoveries could reveal the origin of the Big Bang, or whether we must change our hypothesis completely. One thing seems certain, though: the universe will continue to expand.

Now, here’s the problem: many people think The Big Bang was the beginning of time itself. However, this idea could be false. The Big Bang was merely an expansion from “singularity,” a product or result of a reaction. There was something before our perceived beginning, but nobody knows what that thing is. There are too many questions relating to the start of the universe. What caused The Big Bang? Why did The Big Bang happen? And why is it that the laws of nature—from the sizes and movements of subatomic particles to the speed of light—are so fixed that a single fluctuation would cause the universe to cease to exist? What could have driven the formation of the universe if it functions on such a rigid and precise set of rules? Although nobody knows for sure, day by day we inch closer to understanding.

Enter string theory, a complicated modern theory that physicists are working on to understand how our universe came to be and where it all started. String theory—created by Italian physicist Gabriele Veneziano—is an idea in theoretical physics that claims reality is made up of infinitesimal vibrating strings, smaller than atoms, quarks, or even electrons. According to this theory, many strings vibrate, twist, fold, interact and produce effects that humans interpret as everything from particle physics to large-scale phenomena like gravity (think of a bunch of wriggly strings making up all of reality rather than point-like spherical particles). A string vibrating in one way can end up playing the role of a photon. Another string vibrating differently could play the role of an electron. In other words, it is a “theory of everything.”

The theory has not been proven, but mathematical models have been produced that connect the dots between it and other famous theories, and it has held up under countless checks of theoretical and mathematical consistency over the last 50 years. String theory holds great value because it may be what connects the two most important theories of our universe: Albert Einstein’s theory of general relativity and quantum mechanics.

Einstein’s theory of relativity describes gravity not as a force, but as the warping of four-dimensional spacetime by massive objects, which causes other objects to move along curved paths—think of the planets orbiting the Sun. Mass and energy curve spacetime, and this curvature is what objects experience as gravity. On the other hand, quantum mechanics is the fundamental physical theory that describes matter and energy at the subatomic level, explaining behaviors like wave-particle duality and energy quantization that classical physics cannot account for completely. In the past, these two theories were considered separately because the effects of gravity on subatomic particles are so weak that their interaction cannot be observed or detected. However, string theory is a framework physicists can use to describe how forces that act on a gigantic level, like gravity, affect tiny objects like electrons and protons.

One of the largest drawbacks of string theory is that it only works outside the familiar four dimensions of space and time. The theory requires a total of 10 dimensions, with six visible only from the perspective of the infinitesimally small strings. It’s much like how a powerline looks like a 1-dimensional line to birds flying far overhead but becomes a 3-dimensional cylinder to an ant crawling on the wire. The extra dimensions may only be perceivable at a size unknown to humans.

String theory's wildest application is the possibility of other universes existing somewhere far beyond the observable universe. This claim fits a picture of the Big Bang as a stochastic process, implying it to be a random event. A multiverse would suggest that the universe’s precise laws of physics are the result of the specific conditions that uniquely occurred in our universe. It would mean that other universes are governed by their own laws of physics that may allow for life to exist, or even make life outright impossible. In the grand scheme of things, string theory is too unfinished to truly know what will come from it. Theoretical and mathematical models can portray its ideas, but a physical experiment cannot yet be performed due to our lack of knowledge of other dimensions. Despite its limitations, the discoveries made through the exploration of string theory have contributed immensely to solving problems in mathematics and other fields of theoretical physics.

Humanity has gone from questioning the existence of galaxies outside our own to questioning the creation of our very own universe. This journey all started from the curiosity dormant in the human mind, and the ability to ponder our own existence. Scientists began by studying the Big Bang, drawing on concepts such as dark energy, quantum physics, and general relativity. Now string theory is the newest hot topic waiting to unlock the answers of the cosmos. The knowledge obtained through time will only continue to grow for as long as research persists.

The Present Doesn’t Exist

Have you ever looked up at the night sky and admired how bright a particular star was twinkling? Well, what if I told you that the star you see is not in your timeline? That’s exactly the case when we look at anything around us. You see, our eyes trick us every time we open them, and it is no fault of theirs. In order for us to see, light reflected off or generated by objects has to reach our eyes. Depending on the distance between us and the object, it takes different amounts of time to do so. That timespan creates a lag in our perception, meaning everything we observe has already happened before our brain can comprehend it. So much for living in the present!

To grasp how this phenomenon works, we first need to know how our eyes function. Different parts of our eyes work in tandem to display images. First, light passes through the clear front layer of the eye called the cornea. The cornea is shaped like a dome and bends light to focus the eye. Some of this light enters the eye through the pupil, the dark center of our eye. The iris, which is the colored ring surrounding the pupil, controls how much light goes into the pupil. Next, light goes from the pupil to the lens, a clear inner part of the eye. The lens works with the cornea to focus light directly on the retina. The retina is a light-sensitive layer at the back of the eye. When light hits it, special cells called photoreceptors turn the light into electrical signals. Those electrical signals travel from the retina to the optic nerve, which leads to the brain. Once the signals reach the brain, they are transformed into the beautiful and colorful images we see on a daily basis. In other words, if there’s no light, our eyes have no use.

So how far into the past can we see? To answer that question, we have to know how light correlates with distance and time. That’s where the term “light-year" comes in. A light-year is the distance that light travels in one year. The speed of light is 186,000 miles (about 300,000 kilometers) per second, meaning that in one full Earth year, nearly 6 trillion miles (9.5 trillion kilometers) are covered! Light travels this insanely long distance to reach our eyes, and even longer when we use the most advanced telescopes to view the far ends of the universe. From the objects closest to us to the most distant star ever discovered, scientists have measured not only their distances from Earth, but also how far into the past they are from our standpoint.

It takes about 3.336 nanoseconds for light to travel one meter, so every person we come across is at least a few nanoseconds in the past. When we look at the sun, we’re seeing it as it was 8 minutes ago. The next closest star to us is named Proxima Centauri, and it is about 4 light-years away. If Proxima Centauri ever decided to explode into a supernova, the people of Earth wouldn’t know for 4 whole years. The Sirius A star system is 8.6 years behind us. The Crab Nebula, one of the most beautiful collections of cosmic clouds in the observable universe, is seen as it was 6,500 years ago. When scientists currently peer at it through telescopes, they are seeing the form it took around 4500 BCE! Nebulae like this one are the result of a supernova, during which a giant star dies and a birthing home for emerging stars is created. Yet, it is impossible to determine how many stars have been born since then. Anything beyond our home galaxy, the Milky Way, is seen at least 2.5 million years in the past. The farthest recorded star in the universe, Earendel, holds the record for being seen as it was about 12.9 billion years ago. This is because the star emitted its light within the first billion years of the universe, which began about 13.8 billion years ago. It took roughly 12.9 billion years for Earendel’s light to reach the Hubble Space Telescope.

The universe continues to expand, and more celestial objects will be discovered as it continues doing so. Along with those discoveries, more of the universe's history will come into view as ancient light continues its long journey to Earth, sharing its secrets with us. In fact, Earth might not even be around when light from some stars and solar systems reaches our galaxy. It makes you wonder how much our sense of time truly matters in the vastness of the cosmos. Or maybe, our perception of time doesn’t matter at all since time is perceived differently everywhere.
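To make the numbers above concrete, here is a minimal Python sketch of the arithmetic: lookback time is simply distance divided by the speed of light. The distances are the article's rounded, popular-science approximations rather than precise astronomical values.

```python
# How far into the past are we seeing? Lookback time = distance / speed of light.
SPEED_OF_LIGHT_KM_S = 299_792           # kilometers per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # one Julian year, in seconds

light_year_km = SPEED_OF_LIGHT_KM_S * SECONDS_PER_YEAR
print(f"One light-year is roughly {light_year_km:.2e} km")  # ~9.46e12 km

# Approximate distances mentioned in this article, in kilometers.
distances_km = {
    "a person one meter away": 0.001,
    "the Sun": 149_600_000,
    "Proxima Centauri": 4.2 * light_year_km,
    "the Crab Nebula": 6_500 * light_year_km,
}

for name, km in distances_km.items():
    seconds_ago = km / SPEED_OF_LIGHT_KM_S
    print(f"{name}: seen as it was {seconds_ago:.3g} seconds ago")
```

Running it reproduces the figures in the text: about 3.3 nanoseconds for a person a meter away, about 500 seconds (8 minutes) for the Sun, and so on up the cosmic distance ladder.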

The Truth Behind Thanksgiving Food Comas

Turkey Isn’t to Blame

For many Americans, Thanksgiving is defined by a table lined with mashed potatoes, gravy, cranberry sauce, stuffing and a roasted turkey at the center. Shortly after the feast, a heavy sensation of fatigue settles in, often referred to as a food coma. According to St. Vincent’s Medical Center, nearly 60% of Americans report taking a nap after Thanksgiving dinner, and turkey has carried the blame for decades.

The reason for this widespread assumption seems straightforward: turkey contains tryptophan, an essential amino acid involved in the production of serotonin and melatonin. These two neurotransmitters influence mood, relaxation and sleep. According to Cleveland Clinic, tryptophan cannot be produced by the human body, so it must be obtained through dietary sources, primarily animal-based proteins such as turkey, chicken, beef, pork, eggs and fish. Since turkey is well-known for carrying this amino acid, many assume it is also responsible for the drowsiness that follows a Thanksgiving meal. However, the science behind this claim tells a different story.

While turkey does have tryptophan, it is not particularly abundant. A study indexed in the National Library of Medicine found that 100 grams of chicken breast contains 404 milligrams of tryptophan, beef steak has 374 milligrams, and turkey breast only 287 milligrams. Moreover, the amount of tryptophan present in a typical serving of turkey is far below the one gram or more required to have any noticeable sedative effect.
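For a sense of scale, a quick back-of-the-envelope calculation in Python, using the figures above, shows how much of each meat you would need to eat just to take in one gram of tryptophan:

```python
# Tryptophan content per 100 g serving, from the study cited above (in mg).
TRYPTOPHAN_MG_PER_100G = {
    "chicken breast": 404,
    "beef steak": 374,
    "turkey breast": 287,
}
TARGET_MG = 1000  # the one gram or more needed for any noticeable sedative effect

for food, mg_per_100g in TRYPTOPHAN_MG_PER_100G.items():
    grams_needed = TARGET_MG / mg_per_100g * 100
    print(f"{food}: about {grams_needed:.0f} g to reach one gram of tryptophan")
# turkey breast: about 348 g -- several typical servings' worth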

Even when tryptophan is consumed, its ability to influence sleep is limited by competition with other amino acids for transport across the blood-brain barrier, which only allows certain nutrients into the brain through shared carriers. This means that tryptophan must compete with other amino acids for entry, and that simply eating turkey does not guarantee enough of it will reach the brain. In other words, for tryptophan to have a significant effect on sleep, the right conditions must be present for it to cross the barrier and trigger the production of serotonin and melatonin in sufficient quantities.

If turkey is not the culprit for the overwhelming sense of drowsiness that follows the Thanksgiving feast, then what is? The true cause lies not in a single ingredient, but rather in the meal as a whole. According to John Redden, a neurobiologist at the University of Connecticut, people usually consume over 4,000 calories on Thanksgiving, almost twice the daily requirement for the average person. Most of these calories come from carb-heavy foods such as stuffing, rolls, gravy, potatoes and yams. These dishes, rich in sugar, starch and fat, contribute to rapid energy fluctuations that can overwhelm the body’s metabolic systems. To be more specific, these fluctuations are caused by the spike in blood sugar as carbohydrates break down into glucose. Soon after, a rapid decline in blood sugar ensues: a sugar crash, which directly contributes to the lethargy many experience after the meal. As blood glucose levels drop, energy decreases, and feelings of fatigue intensify.

The body’s natural response to consuming such a large meal also contributes to post-dinner sluggishness. Digesting an unusually large quantity of food requires significant blood flow, which diverts oxygen and energy away from other systems, including the brain. Additionally, alcohol, often found at holiday gatherings, acts as a central nervous system depressant and further amplifies exhaustion. The combination of overeating at the table and increased alcohol consumption during Thanksgiving dinner creates the perfect recipe for fatigue.

Fortunately, simple dietary adjustments can minimize blood sugar fluctuations. Balancing carbohydrate-heavy dishes with adequate amounts of protein, such as turkey, and healthy fats can stabilize blood sugar levels, reducing sudden spikes and crashes. Adding fiber-rich vegetables, legumes, or whole grains can further slow sugar absorption and support steady energy levels. Eating slowly, paying attention to portion sizes, and allowing time for the body’s natural fullness signals to activate can help prevent excessive intake. In addition, moderating alcohol consumption and staying hydrated promotes efficient digestion and energy regulation throughout the evening.

Despite abundant scientific evidence, the turkey myth remains a beloved piece of holiday folklore, handed down with the same care as family recipes and rituals. After all, the simplicity of blaming a single amino acid in a holiday bird is far more appealing than confronting the complexity of human appetite and metabolism. With the holiday season upon us, families will gather to celebrate with classic dishes and old traditions. When the post-dinner fatigue inevitably sets in and the battle to keep eyes open begins, maybe turkey can finally be acquitted—and the third helping of stuffing can stand trial instead.

The Neuroscience BehiNd Forgiveness

An excerpt from Helen's testimony reads:

"Then the wild dogs arrived. They were moving the bodies around, scavenging for food, eating the people. I couldn’t bear to watch, so I climbed down from the tree and ran…From that day on I kept running…not really knowing where I was going. I just made a conscious decision to keep moving, to never stop."

Helen was a survivor of the Rwandan genocide, enduring this conflict at just 16 years old. Her gut-wrenching testimony is but one reminder of the burden that survivors of genocide will carry with them for the rest of their lives.

The Rwandan Genocide was a planned, 100-day campaign of mass murder led by Hutu extremists who targeted the minority Tutsi ethnic group. During my junior year of high school, I attended a panel on the Rwandan genocide. The speakers included three scholars, one of whom was a survivor. An American doctor, he provided humanitarian aid while on a mission trip in 1994 and lived through all 100 days. As a survivor, he shared how Rwandans grew closer together post-genocide and built stronger communities by putting beautiful practices of forgiveness into motion, even though the genocide claimed over 800,000 lives. The doctor told the story of a mother who forgave the brutal torturer and murderer of her family. Her choice reveals that forgiveness is a mystifying and incredibly powerful act. Using the Rwandan genocide as a case study of forgiveness in the face of trauma, I will examine how the neuroscience of forgiveness can aid PTSD patients and could be applied to rebuilding lives and communities.

Neuroscientists have begun to explain how forgiveness is possible through studying neuroplasticity: the brain's ability to form and destroy neural pathways throughout life, allowing it to learn new abilities or abandon unused ones. This research dives deep into what happens in the brain as people forgive, and how forgiveness can reduce the impact of PTSD. A 2013 study by Ricciardi et al. found that “granting forgiveness was associated with activations in a brain network involved in theory of mind, empathy, and the regulation of affect through cognition, which comprised the precuneus, right inferior parietal regions, and the dorsolateral prefrontal cortex.” To elaborate, the precuneus is the part of the brain that allows people to replay memories in their head and gives them the ability to imagine themselves in other people’s shoes. Similarly, the right inferior parietal regions allow individuals to actively switch into another’s perspective. The dorsolateral prefrontal cortex is the brain region that helps regulate cognitive control, which in turn helps with regulating the urge for revenge.

As we continue to explore the science behind forgiveness, there will be two elements to capture: brain chemistry and the three necessary and therapeutic actions one must take to forgive. The first of the three necessary actions is cognitive control. Fourie, Hortensius, and Decety's 2020 paper states, “To forgive, one typically needs to overcome strong negative emotions, ruminative thoughts, or even vengeful impulses to punish the transgressor, and instead cultivate more positive feelings for that person.” Cognitive control is the first step towards shaping one’s life around forgiveness rather than revenge. For trauma survivors, this step helps to extinguish the fuel for their anger and their cycles of revenge by drawing the attention away from these emotions. For the survivors of the genocide, they needed to exert cognitive control to be able to work and live in a community full of perpetrators. This control over the brain is what makes choosing forgiveness such a remarkable feat.

Fourie et al. describe perspective taking as the imagining of the offender’s intentions and situation, which fosters empathy and reduces hostility within the survivors. Contextualizing the situation the perpetrators were in and their thought process before committing their acts facilitates the victim’s forgiveness. The researchers explain that “...perspective taking seems crucial for forgiveness, because it involves temporarily suspending one’s own point-of-view and feelings in an attempt to adopt and understand those of the wrongdoer.” For victims of mass violence, reframing their offender as a human shaped by circumstance fosters the cognitive control they rely on to move forward in their healing. After the genocide, Rwanda created "gacaca" courts, a community-based justice system that emphasized confession and remorse. The perpetrators were incentivized to tell the truth as it would reduce their sentence. The survivors would sit in and hear the testimonies, from the offender's perspective, to understand their actions rather than just excuse or punish them.

Social valuation is the last step toward peace through forgiveness. Fourie et al. explain that social valuation allows us to compare the costs and benefits of forgiving rather than revenge. These benefits include personal relief, community restoration, and potential social ties that can benefit the individuals violated in the future. However, this comes with the risk of potentially being taken advantage of by the perpetrator again. Social valuation creates the opportunity for survivors to see forgiveness as a quantitative approach with measurable physical benefits. In contrast, cognitive control and perspective taking are more qualitative approaches to forgiveness as they are not objects that can be counted objectively.

After Helen recounted her experience of surviving the genocide, she unfortunately elaborated on how the trauma had claimed her life in other ways. Helen now lives with the genocide everywhere she goes, unable to move on and unable to live in peace. Victims like Helen are those whom research is trying to aid: those with PTSD so prevalent that the trauma is constantly being relived. Through the neuroscience of forgiveness, those suffering may be able to rebuild their lives and move forward. PTSD is an incredibly complex and personal issue, but we can use this neuroscience as a tool to lessen the burden of individuals' traumas. Looking forward, implementing forgiveness into clinical therapies can help individuals and communities beyond Rwanda discover new ways of reshaping survivors' lives worldwide.

FREE WILL VS CHEMICAL REACTION

The belief that all humans have free will is a widespread idea, one many even consider to be a fact. While we acknowledge that people may be coerced into actions they may not want to complete, at the end of the day, the choice is still theirs. Humans always have a choice, prompting society to place an emphasis on morality in order to keep organized civilization afloat. While this may be the case, American psychologist B.F. Skinner claimed otherwise. He believed that every action or behavior exhibited by a person was caused not by a conscious, well-thought-out intention, but instead by a few processes, mainly operant conditioning.

Skinner was a determinist, someone who believes that human behavior is completely out of one’s control, and instead due to external or internal forces. Determinism can be broken down into two categories: external determinism, the idea that environmental factors, such as observation or educational/media influences, control behavior; and internal determinism, the idea that behavior can be attributed to hormones, neurochemistry, and genetics. Since many people may be skeptical of these propositions, we can take a look into three studies that support Skinner’s philosophy.

The first study you may be familiar with is Pavlov’s Dogs, the famous experiment that demonstrated dogs could be conditioned to salivate at the ring of a bell. In this approach, Pavlov utilized classical conditioning, a form of conditioning in which a neutral stimulus (the ringing of the bell) becomes a conditioned stimulus through repeated pairing with an unconditioned stimulus (the presentation of food) that naturally produces a response (the dog salivating). Conditioning dogs to salivate at a conditioned stimulus may seem like an irrelevant undertaking, but translating this research to our own lives is trivial.

Unconsciously, humans could be conditioning themselves, or even be conditioned by others, to act in a positive or negative way. For example, if you were to study when you were tired then take a nap as a reward for studying, you may accidentally condition yourself to become drowsy every time you study.

Another study that further supports Skinner’s belief is Benjamin Libet’s 1983 study, which utilized an electroencephalogram (EEG) to record participants’ brain activity through electrical impulses in the brain. The participants were told to flick their wrist whenever they liked, and to take note of the time they felt the urge to move. They were also monitored by an EEG to record their Readiness Potential, the buildup of electrical activity in the brain that occurs immediately before a voluntary movement. Through analyzing the EEG and the times the participants recorded, Libet found that the Readiness Potential appeared about 500 milliseconds before the participants flicked their wrist, and more importantly, about 350-400 milliseconds before the time noted by the participants. This is an interesting finding: the brain decided to move before the participant was aware of the decision, even though the participants believed they made the choice to move. Some, including Libet, argue that free will can still step in and stop a predetermined action from taking place, but it is unknown whether this is truly the case.
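A toy timeline in Python, using the approximate averages reported in Libet's study, makes the ordering of events easier to see:

```python
# All times in milliseconds, relative to the moment the wrist actually flicks (0 ms).
READINESS_POTENTIAL_MS = -500  # EEG buildup begins ~500 ms before the movement
REPORTED_URGE_MS = -150        # the urge is felt ~350-400 ms after the buildup starts
MOVEMENT_MS = 0                # the wrist flick itself

lead_time = REPORTED_URGE_MS - READINESS_POTENTIAL_MS
print(f"The brain's preparation began {lead_time} ms before the conscious urge was felt")
# -> 350 ms: the decision was already underway before the participant was aware of it
```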

Lastly, the third study we will examine, one that affirms Skinner’s beliefs, was performed by Skinner himself in the 1930s. In the Skinner Box experiment, he employed operant conditioning, a form of conditioning that applies either positive reinforcement, negative reinforcement, or punishment to either strengthen or weaken a certain behavior. He used operant conditioning to condition rats to press a lever when they were hungry. While he demonstrated both types of reinforcement and punishment through the Skinner Box experiment, we will focus on just one. After a hungry rat was placed in the box, it would eventually come across a lever that, when activated, whether by accident or on purpose, would cause food to enter the box. This is an example of positive reinforcement. According to Skinner, this repetition of reward-seeking behavior is the main way in which our actions arise. Rather than from a conscious decision, our actions are the direct result of years of positive and negative reinforcement, as well as punishment. This can be seen in our everyday lives, especially when it comes to raising children. When a child behaves inappropriately, such as displaying an act of violence, the parent will typically punish the child to ensure that behavior does not happen again. In circumstances where the parent allows the child to behave violently, or worse, reinforces it, the child will never learn that it is wrong, and may continue that behavior into adulthood. Can we blame the individual for committing a violent act as an adult if they were never taught that it was wrong? Skinner would not believe it to be the moral failing of the individual, but instead an uncontrollable and inevitable action due to insufficient conditioning. While many behavioral experiments support hard determinism, we can also take a deeper look at the chemical mechanisms in the brain that take place during operant conditioning.


One such study, conducted by Nargeot and Simmers, examined differences in the central pattern generator (CPG) of Aplysia sea slugs that had undergone operant conditioning versus those that had not. They decided to look into pattern-initiating neurons, and determined the primary initiators of feeding behavior to be the B63, B30, and B65 interneurons, with B63 being able to initiate on its own. Animals that had not been trained showed impulsive, inconsistent, and fragmented bursts that could start with any of the three interneurons. In contrast, animals that had been conditioned displayed stronger, more regular bursts that typically were started by B63. This increased coordination in pattern-initiating neuron bursts is significant because stronger electrical coupling allows signals to travel through neurons more easily, resulting in stronger behavioral conditioning.

Their experiment also looked into dopamine, a chemical transmitter involved in our brain’s reward system, relating it to reinforcement and learning. Reinforcement that promotes a behavior activates dopaminergic neurons in the brain, enforcing said behavior, whereas blockage or destruction of dopamine receptors can cause deficiencies in associative learning. Nargeot and Simmers used this knowledge to directly add dopamine to isolated B51 neurons, which are involved in decision making. They observed that this resembled the effect of reinforcement, and decided to test it in the opposite direction. They found that applying methylergonovine, a dopamine receptor antagonist, had the opposite effect, interfering with the reinforcement of the behavior.

Despite such evidence, the need to assign moral responsibility continues to lead society to believe in, at the very least, soft determinism, if not complete free will.

Brought to You by Aliens: Pyramids and Other Wonders

Greetings, Earthlings. Welcome aboard. Joezorblax here (the zorblax is silent), from the Andromeda Union. Intergalactic contractor, professional abductee, and part-time cosmic barista.

I’ll be your tour guide today as we explore the greatest monuments on Earth that definitely were not built by intergalactic construction crews. Do you really think we crossed galaxies, dodged black holes, and survived wormhole traffic just to play prehistoric Jenga with rocks? Please.

Now, buckle in and make sure to keep your arms, legs, and conspiracy theories inside the vehicle at all times as I blast off. Tinfoil hats are optional.

Stop 1: The Pyramids of Giza, Egypt

The gold of the sun melts into an ocean of sand from which three colossal triangles rise. The air hums and shakes with heat. The pyramids glow like the ribs of an ancient god lost to time. To stand before them is to observe time itself stacked in layers of stone.

Ahh, yes. The celebrity of alien construction.

Do you realize how many of your documentaries open with, “But how could simple Egyptians have done this?” Every time a new Ancient Aliens episode drops, I get 47,000 messages asking if we built the pyramids “with laser beams.”

Newsflash: if we had, they wouldn’t have missing limestone casing stones; they’d actually glow in the dark and play smooth jazz.

In fact, archaeological excavations around Giza reveal extensive evidence of human labor organization, including workers’ villages with bakeries and breweries, hieroglyphic records documenting rations, and even unfinished pyramid ramps that demonstrate construction methods. The estimated workforce of 20,000–30,000 people was supported by a sophisticated logistical system involving quarrying, transport via sledges lubricated with water to reduce friction, and inclined ramp structures.

This wasn’t divine intervention; it was applied physics, engineering, and social coordination on a massive scale.

These pyramids hold great cultural value (and no, not because they worshipped us aliens).

The pyramids were deeply integrated into Egyptian cosmology. Their shape reflects solar symbolism, the sloping sides representing the rays of the sun guiding the pharaoh’s soul to the heavens. Their cardinal alignments to stars such as Orion’s Belt illustrate the Egyptians’ astronomical precision. As monumental mortuary architecture, they reinforced divine kingship, centralized authority, and the concept of eternal life. To strip them of human authorship is to deny the Egyptians their role as one of the most scientifically advanced civilizations of the ancient world.

Stop 2: Stonehenge, England

A ring of titans stands silent on a windswept plain, weathered faces catching the pale English dawn. Mist curls through the grass, swallowing centuries, while crows trace black orbits overhead. Each monolith seems to hum with the patience of rock, an ancient clock ticking on a planetary scale.

Our planet calls it the giant stone donut. Everyone’s always talking about how we used our trillion-dollar float-things-in-air-inator to levitate the bluestones across 150 miles.

Ready for some spoilers? This also…wasn’t us!

Geoarchaeological studies confirm that the sarsen stones originated locally, while the smaller bluestones were transported from the Preseli Hills in Wales. Experimental archaeology demonstrates that Neolithic communities could transport multi-ton stones using timber sledges, rope, and log rollers. Radiocarbon dating shows construction spanned over a millennium, reflecting continuous cultural investment rather than a singular, inexplicable feat. Astronomical alignments with the solstice sunrise and sunset reveal intentional orientation, suggesting advanced observational astronomy.

Stonehenge was not merely a “calendar,” but a ceremonial landscape embedded in ritual and cosmology. The alignments to celestial events reinforced seasonal cycles critical to agrarian societies, while burials and artifacts discovered nearby suggest the site was a locus of both ritual commemoration and social gathering. Far from being the product of extraterrestrial whimsy, Stonehenge reflects the intersection of astronomy, ritual, and community identity in Neolithic Britain.

Stop 3: Nazca Lines, Peru

A flat, endless parchment of ochre dust. From the sky, the Earth reveals sketches: hummingbirds, monkeys, spirals, and arrows stretching to the horizon. It’s as if the desert itself were whispering messages in intricate geometry, visible only to those who dare look down from the heavens.

Every time I fly past Peru, humans say, “Look, alien landing strips!”

Sigh.

Friends, if we need a runway, we’re doing space travel wrong.

The Nazca geoglyphs were created by removing the oxidized, iron-rich desert surface stones to expose the lighter substratum beneath, producing high-contrast designs. Archaeological dating places their construction between 500 BCE and 500 CE. Many geoglyphs align with ritual pathways, while others correspond to hydrological features and celestial events. Ethnohistorical parallels suggest these were part of ritualized practices invoking water and fertility in one of the driest regions on Earth. Remote sensing and GIS mapping confirm the geoglyphs were visible from the surrounding foothills, negating the need for aerial perspectives.

These designs functioned as ritual landscapes, cosmographic expressions integrating art, spirituality, and survival.

The Nazca demonstrated profound ecological and astronomical awareness by embedding meaning into their desert environment. To call them “airstrips” trivializes a sophisticated cultural response to extreme aridity and reduces symbolic art to a utilitarian caricature.

Human 1: Alright, Joe, I’ll buy that humans built the pyramids and those desert doodles. But seriously… those giant heads on that tiny island? That has to be alien work, right?

Stop 4: Easter Island (Rapa Nui)

Oh, for the love of cosmic entropy, you humans grind my gears.

Emerald land on endless sapphire water. On its windswept hills, colossal figures stand sentinel, stone faces etched with calm defiance, watching over the living. Their eyes, long empty, still seem to see. Twilight paints them in amber and shadow.

Welcome to our final stop: Rapa Nui. You built monumental sculptures of your ancestors, and instead of being impressed, half your species went, “Yeah, that seems like alien handiwork.” Honestly, I wish you’d give your own kind more credit.

The Moai were sacred embodiments of ancestors; repositories of mana, spiritual power, and symbols of lineage and community authority. Facing inward toward villages from their ahu platforms, they guarded the living: linking the present to the spiritual and genealogical past. Attributing them to extraterrestrials erases Rapa Nui’s ingenuity, sophisticated artistry, and spiritual philosophy – diminishing the cultural significance of ancestor veneration at the heart of their society and reducing rich cultural legacy to tabloid-level fantasy.

Let’s get our facts straight then. You don’t need starships to touch the cosmos. Your ancestors did not wait for visitors from the void. They studied the heavens, chiseled meaning from earth, and built their gods from geometry and faith. They made art with gravity as their brush.

And maybe that’s the most alien thing of all: a species so fragile, so temporary, daring to build forever. Alien conspiracies don’t just distort science; they perpetuate pseudoscience and cultural erasure. They dismiss non-Western ingenuity, reduce human heritage to “cosmic accidents,” and feed into narratives that deny your species’ creativity and resilience. So, the next time you stand before the pyramids, or under the solstice sun at Stonehenge, or trace the lines across the Nazca desert, don’t look up. Look within. Your monuments were never messages to us. They were messages to yourselves. And please, please, stop calling me every time you see a rock you can’t immediately explain.

The Blood Microbiome

A Contentious Topic in Microbiome Research

For a long time, blood was considered to be sterile, consisting only of red and white blood cells, platelets, and plasma. The presence of microbial life in the blood is often interpreted as a sign of infection, and in the worst-case scenario, the cause of life-threatening septic shock. However, advancements in DNA sequencing technology have allowed scientists to detect microbial DNA in the blood that did not cause disease, leading to the contentious hypothesis that humans may indeed have a blood microbiome, an assemblage of living microbes present in the blood.

This idea was not out of the question; there is extensive knowledge of other microbiomes in the human body, such as the gut and skin microbiomes. However, the studies done on the prospect of the blood microbiome have been inconsistent, often outright contradicting each other. Some papers use the term casually when describing microbial signatures in the blood, theorizing that the supposed blood microbiome may be used as a prognostic marker for various diseases. Other papers challenge the blood microbiome’s existence altogether, stating that bacteria in the blood are not a common feature in healthy people. Based on the latest research, the existence of a universal blood microbiome across the human population is most likely false.

The Origins of the Blood Microbiome

The presence of microbes in the blood of healthy individuals was first proposed in the 1960s and 1970s. A 1969 study reported observations of metabolically active bacteria in the blood of healthy people, and a 1977 study produced bacterial cultures from healthy blood samples. Both studies challenged the notion that microbes are only found in blood when there is an infection, suggesting that blood does not naturally exist in a sterile state.

Over the following decades, more research was done to confirm the presence of microbes in a healthy population. For instance, a 2001 study confirmed that not only are active bacteria present in the blood, but also remnants of their DNA. This was an exciting discovery for microbiologists, especially during a time when the human microbiome was becoming a highly popular subject.

As microbiome research took off, the idea of a common blood microbiome became a topic of interest. Its existence looked promising. All that was left to do was figure out which universal microbes are present.

The Not-So Common Blood Microbiome

In microbiome research, identifying a universal group of microbes in a specific anatomical site is crucial for properly defining a core microbiome. These microbes must be present in the majority of the human population, and must be crucial for a host’s biological function. Both of these criteria are contested in the case of blood, impacting the potential blood microbiome's legitimacy.

A 2023 population study by Tan et al. made a bold claim: there is no evidence of a common blood microbiome in the human population. The team characterized the DNA signatures of microbes present in the blood of 9,770 healthy individuals. They detected 117 microbial species in this population. However, most of these microbes were commensals associated with other body sites, such as the GI tract; species of Bifidobacterium and Lactobacillus were especially prominent. A core microbiome presents a universal population of microbes within large groups of people, but no core species in the blood were found in this study. Of the 9,770 study participants, 84% had no microbial species detected in their blood at all, and less than 5% of individuals shared the same species. Co-occurrence patterns were not observed between the different microbial species.

This study provided concrete evidence that questioned the existence of the blood microbiome. Tan et al. proposed that the commensal microbes present in the blood underwent sporadic translocation, a shift of locale from the microbes’ original habitat. Microbes shifting their location throughout the body is common and occurs even in healthy individuals. In the case of microbes from our gut, this phenomenon is informally called “leaky gut syndrome.” Because none of the microbial species detected originated in the blood, nor are they crucial for blood’s overall function, Tan et al. concluded that they cannot constitute a blood microbiome.

Is the Blood Microbiome Truly a Myth?

The population study done by Tan et al. became the basis for concluding that a core blood microbiome most likely does not exist. Despite the controversy surrounding its legitimacy, the term “blood microbiome” is still used in papers following Tan et al.’s publication. Is this merely scientists having different understandings of what a microbiome refers to? Is this a language barrier between scientists around the world? How will this contention be resolved? A singular, large study can be a powerful tool in questioning a popular hypothesis, but more should be done. If the term is still approved for usage in reputable journals, then is it truly a myth? Right now, it remains inconclusive.

Illustration: Ashleigh Morris | Design: Chaunté Lewis

Sharks

For decades, and largely since the release of the hit film “Jaws” (1975), sharks have had a reputation as voracious man-eaters that are a major threat to human life. In reality, vending machines kill more people than sharks do every year. So do lightning strikes, rock climbing, and champagne corks—all with annual death counts above the four to six fatal shark attacks that occur each year. Even more glaringly, the media perception of sharks fails to consider that we are far more dangerous to them. Humans kill between 70 million and 100 million sharks annually, compared to the small number of people involved in fatal shark attacks. Are sharks truly dangerous, or have we just misunderstood them?
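Those figures are rough, but even the most conservative combination of them supports the ratio quoted at the end of this article, as a quick Python calculation shows:

```python
# The article's estimates: 70-100 million sharks killed by humans per year,
# versus roughly 4-6 fatal shark attacks on humans per year.
sharks_killed_low = 70_000_000   # low end of the annual kill estimate
fatal_attacks_high = 6           # high end of the annual fatality count

ratio = sharks_killed_low / fatal_attacks_high
print(f"At least ~{ratio:,.0f} sharks killed for every human killed by a shark")
# -> ~11,666,667, consistent with the "over 11 million" figure below
```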

Jaws of Misconception

Sharks were feared long before Hollywood, thanks to stories that spread faster than actual encounters. It wasn’t until the 1970s that Hollywood truly cemented sharks’ reputation as ruthless killers, despite them being vulnerable and deeply misunderstood creatures that are fundamental to ecosystem function. It’s not just media, however—psychologically, fear of sharks is also linked to fear of the unknown and the intimidating image of the ocean. The isolated incidents, despite paling in comparison to the number of sharks we kill, are sensationalized and exploited for entertainment, with the headline always being “sharks kill,” with no mind paid to their ecological role or the circumstances of these incidents.

The reality is that most sharks are simply not dangerous. Of the over 500 species of sharks, only about 30 have ever been recorded to bite humans, and only 12 of those are considered potentially dangerous. This of course begs the question: why do sharks bite in the first place?

One theory is that many sharks cannot see well enough to distinguish a human from their actual prey—the ‘mistaken identity’ theory most commonly associated with the Great White Shark, which is believed to often mistake humans for pinnipeds (seals and sea lions—their primary food source). This theory aligns with the belief that many sharks will simply ‘test-bite’ humans and release them upon realizing they aren’t actual prey or are too large. This is a hit-and-run attack and is often not fatal.

However, not all shark attacks are hit-and-run; roughly 20% don’t fit that pattern, according to the Florida Museum of Natural History. They’re a different kind of attack termed bump-and-bite, where the shark often circles a victim, knocks into it and then attacks. This is usually associated with territorial instincts and is extremely uncommon, but it is far more dangerous. The interesting thing about these attacks is that they often involve the victim being perceived as a threat in the shark’s environment, often in feeding situations. Despite this, humans are not perceived as prey after an attack. Sharks simply aren’t interested in food sources that don’t meet their energy needs.

Now that we’ve explored why sharks occasionally attack humans—and how their feeding behavior and the low odds of an encounter make them far less dangerous than we assume—it’s clear that the real threat isn’t them; it’s us. Shark fishing is a multi-billion-dollar industry that has led to a 71% decline in the global abundance of sharks and rays since 1970. Many of these sharks are caught just for their fins, the rest of the shark discarded back into the ocean as waste because of its lower value to fishermen. Unlike most fish, sharks grow slowly and reach sexual maturity later in life. They also produce only a small number of offspring in each brood. They are severely overfished—harvested faster than they can reproduce, preventing populations from recovering. Bycatch is another significant threat, as 12 million sharks and rays are caught in bycatch every year with relatively low survival rates after hooking.

As apex predators, sharks play a vital role in maintaining balance within marine ecosystems. Their loss leads to ecological imbalances, as prey populations grow unchecked and habitats begin to collapse. Sharks generally feed on mid-level predatory fish; if sharks aren’t feeding, these fish feed uncontrollably, leading to a decline in herbivorous fish populations. The decline in herbivorous fish allows algae to grow rampantly because far fewer fish are eating it, smothering coral reefs and ultimately causing them to die. The removal of an apex predator ultimately affects the entire ecosystem.

It's on us now to change the way we view sharks. They’ve been negatively perceived for far too long, causing us to overlook the threat we pose to them—and the impact our ignorance has on the ecosystems we depend on. Remember, for every human killed by a shark, there are over 11 million sharks killed by humans. Yet it’s the former that continues to be sensationalized. Our fear of sharks is rooted in myth, not reality. The more we understand sharks, the more we realize their importance to the ocean—and to us.

Hot Topic

Denial Won’t Cool the Planet

Lewis

Nearly 14% of Americans deny that climate change is real, and less than half believe that humans are major contributors, according to a 2023 Pew Research Center study.

This denial is partly rooted in the misconception that there is insufficient evidence of Earth’s warming. Yet, as more people recognize that climate change—and the long-term environmental shifts it encompasses—is real, skepticism has shifted toward its cause. Many now argue that global warming, one of the key symptoms of climate change, results from natural environmental cycles, claiming that extreme weather has always occurred and the current pattern is just part of Earth’s rhythm.

Science paints a far clearer picture. The evidence overwhelmingly shows that the planet is warming at an unprecedented rate, and human activity is the dominant driver. Global warming is fueled by greenhouse gases such as carbon dioxide, methane, and nitrous oxide. These gases act as an atmospheric blanket to trap heat from the Sun and warm the planet. Since the Industrial Revolution, the burning of fossil fuels—coal, oil and gas—has released massive amounts of greenhouse gases into the air. According to the United Nations Environment Programme, scientists have tracked greenhouse gas concentrations over time through ice cores, tree rings, and sediment layers, revealing that current carbon dioxide levels are the highest they’ve been in at least 2 million years.

The consequences are undeniable. The greenhouse effect has warmed Earth’s atmosphere, oceans, and land. NASA data shows that the planet’s average surface temperature has risen about 2°F since the late 19th century, and 2024 marked the hottest year on record. Much of this excess heat has been absorbed by the oceans, causing thermal expansion—a process in which warm water takes up more space—leading to an 8-inch rise in global sea levels over the past century. As a coastal community, Miami faces disproportionate impacts from climate change due to its very low average elevation of approximately six feet above sea level. According to Miami Beach Rising Above, this makes Miami, like other coastal cities, extremely vulnerable to rising sea levels, frequent flooding, and saltwater intrusion.

Additionally, ice sheets in Greenland and Antarctica are rapidly losing mass; between 1993 and 2019, Greenland alone lost an average of 279 billion tons of ice per year. Polar sea ice is vanishing and glaciers across the world are retreating at alarming rates.

The added heat and moisture in the atmosphere is also fueling more frequent and intense extreme weather. Heat waves are becoming longer, hurricanes stronger, floods more destructive, and droughts more persistent. Scientists use long-term data and climate modeling to directly link these changes to rising greenhouse gas concentrations.

Climate change not only endangers ecosystems but also threatens human health, food security, housing, and safety. Air pollution, disease transmission, and displacement are increasing worldwide. Coastal communities face heightened risks from sea-level rise and hurricanes, often forcing families to abandon their homes. Ocean warming and the resulting acidification are also undermining global fisheries and marine biodiversity, threatening one of humanity's most important food sources.

A major obstacle to addressing these issues is persistent skepticism fueled by misinformation. Some media outlets amplify uncertainty about climate science, framing it as a political issue rather than a scientific reality. This narrative has fostered distrust in scientists, who are sometimes accused of financial or ideological bias. As a result, policy responses are delayed and public motivation weakens, even as the evidence becomes harder to ignore. According to Psychology Today, conservatives and Republicans are significantly more likely than liberals and Democrats to deny that human activity drives climate change, largely because fossil fuel industries hold economic importance in conservative politics.

Yet in the face of this disinformation, awareness is growing, as evidenced by increased public concern. The Peoples' Climate Vote 2024 survey found that over half of people worldwide are more worried about climate change now than last year, and four out of five want their governments to strengthen commitments to combat it. Scientists, educators, and advocacy groups have contributed to this shift by improving climate communication and integrating environmental education into schools.

Despite the urgency, many still believe it is too late to make a difference, with some damage being irreversible. However, scientists emphasize that every fraction of a degree matters. Each small reduction in global warming lessens the frequency and intensity of extreme events, according to the United Nations. Transitioning from fossil fuel-based power plants to renewable energy sources like wind, solar, and hydroelectric power would significantly cut greenhouse gas emissions and limit global temperature rise. Renewable sources like wind turbines and solar panels are great alternatives for electricity, as they do not emit air pollutants and better protect natural ecosystems.

Ultimately, the most crucial step in addressing the crisis is reaching a shared understanding of reality. As NASA summarizes it, “From global temperature rise to melting ice sheets, the evidence of a warming planet abounds.” Climate change is not a theory, but a measurable, visible, and human-driven phenomenon. The only uncertainty left is how quickly the world will choose to act.

Why don’t they just seek jobs? They are poor because they are lazy. I bet they are using up my tax dollars to pay for their heroin. This is America; you can find a job anywhere and get out of poverty. They are using SNAP? I bet it’s another one of those low-life government aid leeches.

How often have we heard or thought these words? To many Americans, being homeless, unemployed, or dependent on government aid seems almost alien. Although these remarks may appear rational at first glance, the reality is more intricate than it seems.

What if I told you that America's middle and upper classes are the biggest government leeches? We don't label it welfare; we disguise it as tax credits, deductions, and subsidies. "Big government" helps middle-class families with their mortgage, insurance, utilities, and college funds. In 2024, the U.S. Treasury estimated a $1.9 trillion loss in revenue from tax breaks and subsidies. The largest benefits weren't programs for low-income families but rather subsidies: employer-provided health insurance ($231 billion), imputed rental income ($152 billion), employer retirement contributions ($136 billion), and capital gains ($114 billion). These tax breaks mostly go to people with stable jobs, employer-provided insurance, retirement plans, and the wealth to own homes and stocks.

The upper and middle classes quietly drain our nation's coffers, hiding their welfare under the table through tax breaks and credits, while the lowest 20% of households receive less than 10% of these benefits.

The upper class receives more money in tax breaks than the lowest-income households under our current tax system. Thanks to our complex tax code, claiming tax credits can be extremely complicated, and many low-income families rely on commercial tax services to secure the refunds they rightfully deserve. These commercial tax services are often predatory, profiting from the customers who need their help the most. Numerous lawsuits have been filed against tax service companies that mislead, overcharge, and promote high-cost loans to low-income taxpayers. Liberty Tax, for instance, settled a consumer-protection lawsuit related to its "Cash in a Flash" program. In January 2024, the company offered 7,300 residents an immediate 50-dollar payment as an incentive for filing their taxes with Liberty Tax. The company claimed there was "no catch" for customers who filed their taxes with them. In reality, Liberty Tax charged $200 for its filing services under the "Cash in a Flash" program, more than customers who skipped the program paid. In May 2024, Liberty Tax settled another lawsuit over marketing high-cost loans to customers as if they were advances on their tax refunds. Predatory tax companies target low-income families seeking their rightful aid, while middle- and high-income earners benefit the most from our tax system, leaving the poor with no real support. Our perception of the welfare state is wrong. Welfare is not draining the economy; it's a business of its own.

The business of poverty management began when President Reagan shifted our welfare system toward the privatization of government spending. Before this shift, the federal government distributed aid directly to citizens. Afterward, the government allocated the funds to the states, which in turn contracted for-profit companies to administer and distribute them. Supposedly, these companies were more efficient and fair. In reality, a new industry was born, dedicated to managing poverty. The middleman emerged, responsible for determining who was eligible and for distributing the benefits.

For example, take Maximus, a 4.9-billion-dollar business that reviews applications for Medicare and SNAP, among other welfare programs. Maximus is a for-profit company that will do anything to remain profitable. Staff cuts made to hit higher profit margins have left it with a backlog of applications, leaving countless families without aid. The middlemen are not moved by elderly individuals who can barely afford food or by the plight of starving children; they are driven solely by profit, and rejecting people and avoiding their cases is more profitable than managing millions of families. Because of this, the number of uninsured children in Tennessee increased 23% in 2017, and seniors were incorrectly deemed ineligible at alarming rates. In Kansas, Maximus came under review for poor service and a backlog of Medicaid applications; it achieved only 40% accuracy in processing financial payments. Thus, a company hired to review as many applications as possible, as accurately as possible, is being paid to do the opposite. Despite all this, America still claims that it is the poor who are leeching off our hard-earned money.

Another welfare program that has become a profit scheme involves the Social Security benefits intended for foster children. Again, a middleman is hired to locate foster children whose parents have died or are disabled and who qualify for Title II Survivor Benefits or Supplemental Security Income (SSI). Maximus, the middleman, reportedly receives $1,600 for each eligible child a state identifies, and the benefit is then claimed by the state rather than given directly to the children. Forty-nine states justify this practice as reimbursement for the costs of caring for the children, claiming that the state acts as their financial representative; however, this justification lacks transparency. When a child does receive the payment directly, the amount averages $700 a month. Ultimately, bureaucracy and middlemen have reduced the amount of money that foster children receive.

The middleman also appears in the Section 8 housing system. Housing vouchers are designed to help low-income families pay rent on the property they choose; the government pays a subsidy to the landlord instead of offering public housing. Finding Section 8 housing and being approved is excruciatingly difficult. There is high demand and a long waitlist for a limited number of participating housing units. In Florida, the Section 8 waitlist has been closed since 2015. As of March 2025, 462 applicants remained waiting for assistance.

Due to the complexities of the system, some landlords specialize in housing government-assisted tenants. Recent studies have shown that these landlords increase rent above the market price for the unit, leaving tenants paying close to the original value the property would have without assistance. A Harvard study on Milwaukee residents estimated that “rent overcharging costs an extra 3.8 million per year in Milwaukee alone, the equivalent of supplying roughly 620 families with housing assistance.” In D.C., The Washington Post found that the D.C. Housing Authority “spends more than 1 million dollars a month for these units compared to the estimated neighborhood market medians.” Thus, government inefficiency is costing Americans money without providing any meaningful benefits. Rising rents trap low-income families in unaffordable apartments, increasing their risk of eviction and homelessness and perpetuating the cycle of poverty.

Low-income Americans are not only struggling to receive their tax refunds, find housing, and obtain healthcare and food but are also being exploited by corporations that profit from their vulnerability and disregard their health. The dental management company Benevis and its affiliated Kool Smiles clinics settled in 2018 for "23.9 million dollars for medically unnecessary pulpotomies (baby root canals), tooth extractions, and stainless-steel crowns, in addition to seeking payment for pulpotomies that were never performed." A company with 130 offices across 17 states exploited poor children to profit from government funds.

DaVita, the second-largest dialysis company in the United States, also profits from government subsidies. The government subsidizes dialysis clinics through Medicare's ESRD program until patients receive a kidney transplant and no longer require treatment. This dependency stems from the fact that dialysis is costly, private insurance companies often limit coverage, and people of low socioeconomic status have a "disproportionately higher incidence rate of kidney failure." These life-saving treatment clinics, like DaVita, are understaffed, have high mortality rates, and make 17% fewer kidney transplant referrals. For dialysis companies, a patient on dialysis makes more money than a patient who receives a transplant and recovers. DaVita had to pay a $450 million settlement for Medicare fraud after billing the government for discarded dialysis drugs. The company later paid another $350 million for providing kickbacks to doctors.

In America, we portray the poor as profiting from our hard work. Given the evidence, we must reconsider our participation in the poverty economy that exploits low-income families and scrutinize the true intentions of the companies fueling inequality. These are examples of how the welfare system is not simply giving free money to lazy people; it is a complex, interconnected web of for-profit companies, lobbyists, and paperwork. Low-income families not only have to navigate these systems blindly but must also endure the health issues caused by their circumstances. They become trapped in a poverty cycle, where poverty leads to stress, stigma, and trauma, decreasing their overall mental health. This decline in mental health leads to worse physical health outcomes, which can result in job loss and medical debt.

Poverty primarily affects children. Children from low socioeconomic backgrounds have worse nutritional intake and less access to high-quality education. Their parents often endure food insecurity and housing problems, diminishing their capacity to engage in positive parenting. Chronic early stress leads to epigenetic changes across generations. Poverty is engraved into the DNA of low-income families and their children, who not only inherit financial hardships but also the weight of generational trauma.

Companies and middle-class families profit from the government that funds their tax breaks and subsidies, while low-income families endure the stigma of being labeled lazy. We criticize programs like SNAP and place the burden on parents, while 13 million children in the U.S. experience hunger that pushes their futures into the poverty cycle their parents are desperately trying to escape. And yes, they are trying to escape. In 2021, low-income families received an unprecedented amount of aid under COVID-19 relief measures. There were "16 million fewer Americans in poverty in 2021 than in 2018," and child poverty was cut in half. At the time, Americans argued that unemployment benefits were keeping people comfortably unemployed instead of seeking jobs. From June to July 2021, 25 states ended their emergency benefits. Did the unemployment rate suddenly decrease? Did low-income families get off their couches to find the work they had supposedly been avoiding? No. There was no jump in employment rates. In fact, the states with the fastest job growth had retained some or all of the unemployment benefits.

Poverty in America is a business, and the American people are perpetuating the problem by stigmatizing low-income Americans as lazy welfare leeches. If you truly care about your hard-earned dollars, don’t criticize the poor with insensitive name-calling. Criticize the for-profit companies that perpetuate and worsen the poverty cycle by exploiting children needing dental care, patients struggling with chronic kidney disease, and families targeted by predatory tax services. Criticize the absurdity of the middlemen who mismanage government welfare programs and contracts. The welfare state hardly benefits the intended recipients but instead aids the for-profit companies that should be providing care. America must realize that a cruel, complex, and predatory system lies beneath the stigma. Poverty is both a cycle and a business, and Americans are paying for it—just not the way we assumed.

The Myth Behind Alkaline Water

An Industry Built on Lies

Illustration & Design: Veronica Richmond

Alkaline Water and the Elementary Science Fair

I still remember fifth grade, the year I, like millions of other kids across America, did my first science fair experiment. I still reminisce about going to the grocery store with my mom, buying a bunch of different water bottle brands, putting phenolphthalein dye into a sample of each, and lining them up next to each other from most acidic to most basic. The conclusion my friends and I—and millions of other kids—came to was that the more 'purple,' or alkaline, the water, the healthier it is, with the red (acidic) and yellow (neutral) waters being worse for our guts, circulation, acid reflux, and almost every other aspect of bodily function. But the truth is that everything we were told as kids is bullshit, propagated by the bottled water industry to justify egregious markups on what's essentially repurposed tap water.

What Makes Water ‘Alkaline’ or ‘Acidic’ Anyways?

Scientifically, any water with a pH greater than 7 is alkaline and any below 7 is acidic; perfectly purified water has a neutral pH of 7.0. Yet neither bottled nor tap water ever sits at exactly pH 7, since the water we drink isn't pure H2O. Tap water in Miami-Dade County, for example, generally hovers at a slightly alkaline pH of 7.3-8.5, averaging around 7.6 for most of the year. This mainly stems from the source of the county's water and the way that water is treated in plants: Dade County's water is drawn from the sediment-rich limestone of the Biscayne aquifer, and the chemical agents used to treat it, like soda ash (sodium carbonate), raise the pH too. Similarly, bottled water can have varying levels of acidity, becoming more or less acidic through the bottling process. Nevertheless, the main determinant of a bottled water's pH is the same as tap's—the source of the water itself—with different sources generally falling in the range of 6.5 to 8.
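For reference, pH is a logarithmic measure of hydrogen-ion concentration:

pH = -log10[H+], so [H+] = 10^(-pH)

A glass of Miami tap water at pH 7.6 therefore has roughly 10^0.6, or about 4 times, fewer hydrogen ions per liter than neutral water at pH 7.0: a measurable but chemically modest difference.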

Robert O. Young and the ‘pH Miracle’

Alkaline water, as defined in bottled water marketing, is any water with a pH greater than 9. The root of this classification, along with all the other proposed benefits of alkaline water, stems from the theory of one disgraced "scientist," Robert O. Young. Young presented his pH theory in a 2002 book he co-authored with his wife, "The pH Miracle: Balance Your Diet, Reclaim Your Health," built on the fundamental belief that human health depends solely on maintaining an "alkaline" pH level in the body. According to this so-called theory, people should only drink water with a pH greater than 9, as drinking water below that pH leads to a pH imbalance, weakening the immune system and riddling our bodies with a plethora of diseases. Before moving on to debunking Young's "theory," it's important to note that Young has no formal education past a high school diploma, let alone a medical degree, and in February of this year he was indicted on multiple felony counts related to practicing medicine without a license. In May, he was sentenced to six years in the California state penitentiary system, which he is currently serving.

How Do Our Bodies Actually Regulate pH?

In the healthy human body, blood pH naturally ranges from 7.35 to 7.45, with the lungs and kidneys constantly working to keep it there. Fluctuations in the body's pH can be harmful, leading to conditions like acidosis (too acidic) or alkalosis (too alkaline), which, left untreated, can disrupt cellular activity and cause symptoms like confusion, fatigue, and muscle cramps. That said, per EPA and FDA regulations, water within the standard recommended pH range for drinking (pH 6-9) has no effect on the body's pH balance and is considered completely safe. In fact, our kidneys are so good at regulating blood pH that the only way drinking water could affect it is if you drank to the point of water intoxication, which for a healthy 180 lb. adult male would mean more than a gallon within an hour, per the Cleveland Clinic's health guidelines. With all this in mind, how did Young's lies become so widespread in the public psyche?
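For the curious, that stability comes largely from the bicarbonate buffer system, which the lungs (by venting CO2) and the kidneys (by retaining or excreting bicarbonate) adjust continuously. Plugging in standard textbook values (pKa = 6.1, bicarbonate at 24 mmol/L, CO2 partial pressure at 40 mmHg):

CO2 + H2O <-> H2CO3 <-> H+ + HCO3-

pH = pKa + log10([HCO3-] / (0.03 x PCO2)) = 6.1 + log10(24 / 1.2) = 6.1 + log10(20) ≈ 7.4

Because pH depends only on the ratio inside that logarithm, and the lungs and kidneys hold the ratio near 20:1, no drinkable volume of pH 9 water can budge it.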

Young’s Lies, the Guilty Pleasure of the Water Industry

Regardless of the science behind the mechanisms that control our blood pH and health, Young's lies were picked up by bottled water manufacturers and marketing teams all over the world. Water companies have been smart enough not to cite any of Young's claims directly, so as to avoid lawsuits, but that hasn't stopped them from frivolously advertising the so-called benefits of alkaline water.

'Essentia' water (pH 9.5), for example, one of the largest players in the alkaline water industry, regularly pays stars and athletes like Jennifer Aniston, Travis Kelce, and most recently superstar quarterback Patrick Mahomes to advertise the so-called benefits of its water, which it sources from regular municipal water supplies, adding chemicals to artificially raise the pH and selling the result at a 2000% markup. Now, to give credit where very little credit is due, the acid reflux claim has some truth to it: alkaline water does somewhat neutralize stomach acid, giving a temporary sense of relief. However, like all the other health claims of the alkaline water industry, the effect alkaline water has on stomach acid is so small and temporary that it's basically useless as a means to reduce acid reflux. Not only are you much better off simply taking acid reflux medication; drinking overly alkaline water (pH greater than 9) can also have a slew of negative health effects, especially for people with certain conditions. Alkaline water with a pH above 9.8 in particular, a level many brands proudly advertise, can lead to high levels of potassium in the blood, making it highly dangerous for people living with kidney disease.

Ditch the Bottle, Stick to Tap, Experiment with Another Cliche and Don’t Look Back

As much fun as I had making my 5th grade experiment, if you’re looking to do an elementary science fair project anytime soon, avoid measuring bottled water pH and stick to the other cliches like the baking soda volcanoes or how different soils and lights affect a plant’s growth, because at least those experiments have some truth to their results. But all jokes aside, with the world of misinformation floating around on the internet today, always look out for cons and be wary of brands making big claims about everyday foods and drinks we’ve been consuming for thousands of years. Fads come and go, and the water industry’s unfortunate obsession with alkaline water seems like it’s here to stay, with a steady growth of profits in the industry year by year. If you really want to drink good quality, purified water, buy yourself a nice water filter and go with tap; you’ll save yourself a pretty penny in the process, and long term, it’s much healthier than buying overpriced, bottled alkaline water.

Who's Bringing Dinner?

Who really brought food to the table in pre-agricultural societies?

Design: chaUnTé leWis

When you think of pre-agricultural society, what comes to mind? Some people think of the nomadic groups that roamed the Earth 11,000 years ago. Some picture characters from popular culture like The Croods or The Flintstones. Some don't really want to think about pre-agricultural societies at all, especially now that we have DoorDash and Uber Eats. Pre-agricultural societies are those that existed before the practice of agriculture came around. They consisted of nomadic peoples who traveled from site to site to find food, using subsistence parties to hunt, fish, and forage. Another term for these groups is hunter-gatherer societies: some people in these societies were tasked with hunting while others were tasked with gathering. We typically learn that the men did the hunting while the women stayed near the base camp and gathered other sources of food. For a long time, archeologists, historians, and anthropologists believed this assertion was true, and they had evidence to back it up. However, recent archeological research is finding that women also did the hunting. This flips what we assumed about hunter-gatherer culture on its head, and it challenges society's gender paradigms. What can recent archeological findings tell us about women's role in hunter-gatherer societies, and how do these findings challenge standard gender roles? In other words, who really brings the food to the table?

Let's begin with a recent archaeological study by Anderson et al. They selected 391 societies from around the world that foraged for food and investigated them using all available records and accounts. Of these 391 societies, 63 had explicit data available, meaning that somebody had actually gone to study these groups in person since 1800 (some of these foraging groups are still active today) and had gathered first-person testimonials and evidence on who hunted. Anderson et al. focused on these 63 societies and coded the evidence for mentions that "women were hunting or women killed animals" (Anderson et al. 2023). They wanted to know how often it was mentioned that women specifically hunted to sustain their communities. When the results were tallied, 50 of the 63 societies had documented women hunting: the purposeful killing of some type of game to feed their communities. That is 79% of those 63 societies.

As a 2023 study by Osborne states, "Women had their own tool kit. They had favorite weapons. Grandmas were the best hunters of the village." Several archeological sites, mostly burials, have been found where women are buried with their favorite hunting tools. A recent example was uncovered in 2020 during an archaeological excavation in Peru: a woman buried alongside her hunting toolkit, stone projectiles, and animal processing equipment (tools used to make the catch edible). The burial site is 9,000 years old and shows that women were prolific hunters, despite what many of us believe (Anderson et al. 2023). There was also a lot of diversity in how women hunted. Some women hunted with groups, while others hunted alone, with dogs, or with their children. They used a number of different practices depending on where they were and what type of game they were trying to catch.

Now that we've established there is substantial evidence of women being hunters in pre-agricultural societies, let's talk about what that means for gender roles. "Man the Hunter, Woman the Gatherer" is a theory steeped in male scientific bias. Until very recently, most scientific study in any field was done by men, with very few exceptions, and that long-standing bias has shaped much of our fundamental understanding of the world and how our societies function. While we no longer diagnose women with "hysteria" or accuse them of being witches, some stereotypes that come from male scientific bias persist. Consider "Man the Hunter, Woman the Gatherer": this idea created the division of labor in our society. It holds that men are biologically predisposed to hunt and provide for the family, while women are meant to stay close to home and perform domestic duties like child-rearing. The theory assumes that men are always physically superior to women, and it has been used to explain why women have been relegated to being housewives for the last 1,800 years (Ocobock & Lacy 2023). This idea has been a foundation of our society for what seems like

The Great Reading Mix-Up:

The Autism-Vaccine Myth and the Dangers of False Causation

Design: Chaunté Lewis

Vaccines and autism: two words that should never have been put together, yet once they were, the damage became irreparable. Almost every person who has ever interacted with social media or news outlets has heard of a link between the two, whether they scoffed at it or believed it. This great myth of a relationship between vaccines and autism has not simply graced the hidden corners of Reddit and Twitter; it is a misconception that has swept through US policies and politics. This fabricated connection has had real-life implications for many Americans. According to the University of Minnesota, 1 in 4 US adults mistakenly believe the measles-mumps-rubella (MMR) vaccine causes autism, meaning that at least 1 in 4 children likely will not receive life-saving preventative care against measles. More than two decades after the initial uproar in the media, the belief that vaccines cause autism continues to breed echo chambers as misinformation finds fertile ground in politics and fear: a literally deadly combination. This connection is not just a scientific misunderstanding; it is a political issue about how a distorted narrative, once sent into orbit, can reshape public opinion and the health of a nation. The false link between vaccination and autism is a cautionary tale of how bad science can be weaponized in the age of social media to deteriorate trust, polarize communities, and turn health into a political battleground.

The Origin of the Myth

A great and menacing myth began in 1998, when Andrew Wakefield and twelve coauthors published a paper in The Lancet suggesting a connection between the MMR vaccine and autism spectrum disorders. The study used a small sample and was biased by several undisclosed financial conflicts of interest.

Misinformation As a Movement

Follow-up investigations revealed that the data had been manipulated. By 2010, the paper was retracted, and Wakefield lost his medical license. Nevertheless, the damage was done. Media coverage amplified emotional stories of "healthy children turned autistic," pointing the finger at vaccines while ignoring the lack of credible evidence behind these claims. As social media evolved, anti-vaccine movements found new ways to spread fear faster than fact, especially once it turned into a war between the "vaccine-loving" left and the "anti-vaxxer" right. Thus, a 12-child study spread like wildfire and became a global controversy.

Once public fear hit the ground running, the anti-vaccine movement reframed it as a fight for "medical freedom." It became part of the political identity and foundation of many conservatives and Republicans: to be one, you had to believe in the other. Being against the preventative care and protection of children became anti-establishment, anti-institutional, and deeply partisan. The consequences were measurable. Vaccine hesitancy has been named one of the top ten global health threats by the World Health Organization since at least 2019. This distrust has been linked to the resurgence of measles, whooping cough, and even polio. These are lives that suffer, all because of a myth. In the U.S., COVID-19 vaccine skepticism also fed broader vaccine doubt, with nearly 45% of Americans reporting lingering hesitancy toward the vaccine, even though it is the reason the world gradually returned to the normalcy we once knew.

One Clear Verdict

The ultimate driver of many medical myths, including this one, is the confusion of two concepts taught in the very first lessons of research: correlation and causation. Because autism symptoms tend to appear around the same age as childhood vaccinations, people assume one must cause the other. As of 2024, the CDC flatly states: "The National Academy of Medicine, reviewed the safety of 8 vaccines to children and adults. The review found that with rare exceptions, these vaccines are very safe." Nine other CDC-funded studies and independent global reviews have confirmed this, and WHO/NIH meta-analyses covering over a million children reached the same conclusion. The verdict is clear: the vaccine-autism link is neither biologically nor statistically supported. So, after every reputable source has stated clearly that vaccines do not cause autism, why do we keep believing Fox News over the nation's leading science-based, data-driven public health organizations? Why do we believe a study of 12 children written by a fraudulent man over studies of more than a million? Why believe one random guy on TikTok over Ph.D.s and M.D.s with decades of experience? If the evidence is so strong, why does the myth survive?
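Part of the answer is how easily timing alone manufactures a pattern. Here is a minimal simulation of that trap; every number in it (dose age, prevalence, onset age) is invented for illustration, and autism is assigned independently of vaccination by construction:

import random

# Toy model: autism status is independent of vaccination here, yet most
# autistic children still show their first symptoms shortly after a vaccine
# dose, because both events cluster at the same ages. All numbers invented.
random.seed(0)
children = 100_000
autism_cases = 0
onset_after_dose = 0
for _ in range(children):
    vaccine_age = random.gauss(12, 1)   # first MMR dose, in months
    autistic = random.random() < 0.02   # 2% prevalence, independent of the vaccine
    onset_age = random.gauss(15, 2)     # age when first signs are noticed, in months
    if autistic:
        autism_cases += 1
        if 0 <= onset_age - vaccine_age <= 6:
            onset_after_dose += 1

print(f"Autistic children whose symptoms began within 6 months of a dose: "
      f"{onset_after_dose / autism_cases:.0%}")
# Prints roughly 80%, even though vaccination causes nothing in this model.

In this toy world, a parent would very often observe "vaccine, then symptoms," which is exactly why anecdotes mislead where controlled comparisons do not.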

Why People Still Believe

The answer lies within human psychology. We are pattern-seekers; we connect dots even when none exist and fill in blanks even when we don't know the answers. Vivid anecdotes of children changing, suffering, and dying after shots outweigh mountains of statistical reassurance that there is no relation. Cognitive biases like the availability heuristic make rare but emotional stories shared on social media feel common.

In addition, there is mistrust of the medical and pharmaceutical system, born of decades of corporate scandals and inequities, that has left people wary. The vaccine myth didn't just exploit the fear of autism; it exploited the fear of being lied to, as so many myths have before.

Rebuilding Trust

Science alone cannot silence misinformation; communication is the key. The CDC and NIH have emphasized community-based outreach to rebuild confidence, along with transparency about data and funding, which helps shield the public against future misinformation. Equally crucial are local messengers, such as present and future pediatricians, nurses, and faith leaders, who can bridge the gap between scientific evidence and personal belief. As the WHO notes, empathy and listening are more persuasive than lecturing when confronting vaccine doubts.

Nonetheless, a single fraudulent study, echoed by the media and ideology, continues to shape public health decisions today. Exposing this myth matters because it reminds us that evidence–real, reproducible, transparent evidence–is the very foundation of medicine, and that public health and democracy can topple when that foundation goes undefended. Thus, in science and in society, separating correlation from causation isn't optional. It's survival.

Good Genes, Bad Genes

What a Jean Ad Got Wrong About Genes

Design: Chaunté Lewis

“Genes are passed down from parents to offspring, often determining traits like hair color, personality, and even eye color. My jeans are blue,” Sweeney says as the camera pans up her body, ending with the tagline: “Sydney Sweeney has great jeans.”

At first glance, American Eagle's ad campaign featuring actress Sydney Sweeney appears to be a simple play on words, pairing "good genes" with "good jeans," but the message goes beyond clothing. By associating Sweeney—a blonde-haired, blue-eyed woman—with having "great genes," the ad reinforces a longstanding Western beauty ideal rooted in Eurocentrism. Sweeney's physical features have historically been positioned as symbols of "superior" genetics, whether in fashion, media, or pseudoscientific hierarchies.

The commercial also draws an unmistakable parallel to Brooke Shields’ infamous 1980 Calvin Klein ad, which was controversial in its own way. However, while Shields’ ad referenced genetics only indirectly, its focus was primarily on the juxtaposition between her youthful innocence and the suggestive tagline: “You want to know what comes between me and my Calvins? Nothing.” In contrast, American Eagle’s version takes this connection further by explicitly linking physical beauty and personal worth to the idea of having “great genes.” Both ads feature white, conventionally attractive women as embodiments of idealized beauty, but Sweeney’s portrayal transforms that ideal into a genetic statement.

Sweeney's ad quickly sparked backlash, becoming another flashpoint in the country's cultural divide. For many viewers, the problem went beyond one advertisement. The phrase "great genes" carried a troubling echo of the late nineteenth and early twentieth centuries, when eugenicists misused genetic science to justify discrimination, forced sterilizations, and the false idea that certain races were biologically superior. During this period, the eugenics movement gained popularity in Europe and the United States. In Europe, many countries established academic eugenics societies that promoted selective breeding and racial hygiene, ideas that would become foundational to Nazi policies. In the United States, eugenics grew through state-funded programs that promoted "fitter family" contests and sterilizations. Proponents claimed that human traits such as intelligence, appearance, and morality were inherited and that society could be improved by encouraging the "fit" to reproduce while discouraging or preventing the "unfit" from doing so. Eugenic ideas influenced policies such as forced sterilizations, restrictive immigration laws, and social programs designed to marginalize people deemed genetically "inferior," often targeting racial and ethnic minorities. These theories did not remain confined to academic circles. In Nazi Germany, eugenic ideology became state policy, shaping a terrifying vision of genetic "purity." The Nazis used these pseudoscientific ideas to justify sterilizing tens of thousands of people with disabilities, institutionalizing others, and ultimately committing mass murder against Jews, Romani people, and countless others deemed "undesirable." The obsession with creating a so-called "master race" transformed eugenics from a theoretical concept into a blueprint for genocide, culminating in the Holocaust and the deaths of millions.

While American Eagle’s ad is far removed from the intentions or consequences of eugenics, the casual association of beauty and worth with “great genes” evokes echoes of this dark past. By framing genetics as a measure of desirability, the advertisement unintentionally recalls the logic that once justified discrimination, forced sterilizations, and violence. This connection illustrates how even modern marketing can carry historical baggage, making it crucial to consider the broader implications of seemingly playful language.

Beyond its troubling historical parallels, the ad also oversimplifies and misrepresents what genetics actually reveals about human traits. Genes are not deterministic blueprints for beauty, personality, or worth. Most traits, including eye color, hair color, and even behavioral tendencies, are polygenic. This means they are influenced by the interaction of many genes, often alongside environmental factors such as nutrition, upbringing, and lifestyle. For instance, eye color involves multiple genes that interact in complex ways, producing a range of shades rather than a single “superior” variant. Similarly, intelligence, temperament, and physical appearance are shaped by countless genetic and environmental variables, making it scientifically meaningless to label someone as having “great genes.”
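To make "polygenic" concrete, here is a minimal sketch; the locus count, allele frequency, and equal per-locus effects are all invented for illustration, and real traits also involve the environmental factors this toy leaves out:

import random

# Toy polygenic trait: 100 loci, each contributing one tiny, equal effect.
random.seed(1)

def trait_score(n_loci=100, allele_freq=0.5):
    # Each locus carries 0, 1, or 2 copies of a '+' allele; effects simply add.
    return sum(
        (random.random() < allele_freq) + (random.random() < allele_freq)
        for _ in range(n_loci)
    )

scores = [trait_score() for _ in range(10_000)]
print(f"mean={sum(scores) / len(scores):.1f}  min={min(scores)}  max={max(scores)}")
# Scores pile up in a smooth bell curve around 100; no single locus moves
# the needle, so there is no one 'great gene' to point to.

Summing many tiny, independent effects produces a continuum, not discrete "superior" and "inferior" types, which is exactly why the ad's framing falls apart genetically.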

By reducing genetics to a simple measure of desirability, the ad recycles the same flawed logic that once underpinned eugenics: the idea that some people are biologically superior to others. Unlike eugenicists of the past, modern science recognizes that no gene or combination of genes can determine a person’s value, beauty, or moral character. Yet, casual representations in media continue to blur this line, giving the public a misleading impression of biology. Genetics is nuanced, probabilistic, and deeply context-dependent; portraying it as a neat indicator of worth is not only scientifically inaccurate but socially irresponsible.

In the end, American Eagle’s “good genes” ad may have been intended as clever wordplay, but its implications reach far beyond fashion. By declaring “Sydney Sweeney has great jeans,” the ad blurs the line between a slogan and a claim about genetic superiority. By linking physical appearance to genetic “superiority,” the advertisement unintentionally echoes a dark history of eugenics while misrepresenting the complexity of genetics. Human traits are shaped by countless interacting genes and environmental factors, making any claim of inherently “great genes” scientifically unfounded. This ad serves as a reminder that even seemingly lighthearted marketing carries weight, and that thoughtful consideration of language, history, and science is essential when communicating ideas about human identity.

THE SILENT KILLER UNMASKING THE TRUTH ABOUT RARE CANCERS

Every year, millions of lives are claimed by cancers that the world barely acknowledges. These are not the headline diseases with pink ribbons or national campaigns. They are the less common cancers—the overlooked threats that together make up nearly a quarter of all cancer diagnoses worldwide, yet remain shrouded in the shadows. According to an American Cancer Society report, in the United States alone that translates to more than 400,000 new cases every year, a number that proves these cancers are anything but rare. They strike without warning, often misdiagnosed or brushed aside until it is too late, and they thrive in the blind spots of our healthcare systems precisely because society has chosen to see them as too small to matter.

That dismissal is not harmless. It is devastating. When a disease is labeled “rare,” funding agencies hesitate to prioritize it, and even pharmaceutical companies back away from investing in treatments. Families are left with more questions than answers, while patients face the terrifying reality of battling a disease that even the most accredited healthcare professionals struggle to treat. The outcome is predictable: delayed diagnoses, limited treatment options, and survival rates that lag far behind more common cancers.

Rare cancers are not just a medical challenge, but something that fuels the expansion of the public health gap. Nearly one in four cancer patients is battling a rare cancer, yet the funding, awareness campaigns, and research pipelines rarely reflect that. As a result, patients are often left navigating a healthcare system that hasn’t been productive in treating and understanding their needs.

The consequences are clear: while common cancers benefit from a steady stream of new drugs and clinical trials, rare cancers are frequently left out. Even when promising therapies exist, many never reach patients because there is little financial incentive to develop them. When rare diseases do gain public attention, it is usually the result of determined advocacy from patients and families, which puts pressure on the officials responsible for funding decisions. This is often how medical breakthroughs in rare diseases come about.

Patients themselves understand this better than anyone. An individual currently undergoing treatment for cholangiocarcinoma, a rare bile duct cancer affecting the liver, shared her perspective with the National Cholangiocarcinoma Foundation, saying, "The word 'rare' is already half of the battle, because there is so little attention, which means there is little variety in treatment options." That single word—rare—creates a barrier that slows progress. It leads people to assume these diseases are insignificant, even though they affect millions of lives worldwide.

The silence surrounding these cancers feeds the cycle. Because advocacy is so often absent, many people do not understand rare cancer patients' struggles and assume their cancers are less important. Because these patients are seen as insignificant to society, they don't receive necessary resources. Without resources, patients suffer, and because outcomes remain unfavorable, the world continues to view them as lost causes. This cycle of neglect must be broken.

We cannot let the word “rare” become a reason for inaction. Each life affected is significant, and cancers left unaddressed create a quiet but substantial public health burden. The suffering caused by rare cancers is not uncommon—it is widespread, often under-recognized, and in need of greater attention.

Addressing this challenge requires more than awareness; it calls for deliberate steps. Governments can help by allocating research funding that better reflects the actual impact of these diseases. Pharmaceutical companies could be offered incentives to develop treatments for smaller patient populations, not only the most common diagnoses. Community voices can make a difference by signing petitions, participating in awareness walks, and advocating publicly. This ensures that these cancers are part of the broader health and equity conversation.

Progress has already shown what is possible. Advances in treating certain rare cancers demonstrate that when resources and focus are effectively applied, results follow. With sustained commitment, these gains need not remain limited to only a few diseases. With the proper allocation of resources and effort, advancements in various treatments can be expanded to include the wider range of rare cancers that could benefit millions of patients worldwide.

Silence has already cost too much, and we can no longer afford to act as though these lives are expendable simply because their cancers don’t make headlines. To dismiss them is to perpetuate injustice; to recognize them is to affirm the basic truth that every patient deserves a fighting chance.

During her TED Talk, Tatiana Cordts emphasized, "just because it is rare does not make it impossible." The time to change course is now. Rare cancers are not footnotes in the story of human health—they are chapters written in the lives of millions of families worldwide. And those chapters deserve attention, resources, and hope. What has been hidden must be brought into the light, and acted on appropriately.

Illustration & Design: Kaitlyn Hancock

Fluoride Friend or Foe?

By Isabella Reichard
Design: Chaunté Lewis
Illustration: Ashleigh Morris

Fluoridation Under Fire

For over 75 years, community water fluoridation has been viewed as one of American public health’s greatest achievements, preventing cavities for both children and adults nationwide. Recently, though, the narrative has shifted. In early 2025, U.S. Health Secretary Robert F. Kennedy Jr. announced plans to remove the recommendation by the Centers for Disease Control and Prevention (CDC) for the fluoridation of drinking water, a dramatic change that challenged decades of public health consensus.

Today’s debate over fluoride echoes questions that have followed the mineral since its strange effects were initially discovered. The story begins over a century ago, when an unusual dental mystery in Colorado first revealed fluoride’s powerful abilities.

The Colorado Brown Stain Epidemic

In 1901, a young Frederick McKay, fresh out of dental school, traveled to Colorado Springs to open his own dental practice. He was quickly startled by the high prevalence of local residents with permanent brown stains on their teeth. McKay hit the books at once but could not find any information in the dental literature about this peculiar condition. After reading about a study that reported nearly 90% of locally born children contracting "Colorado Brown Stain," dentist and researcher Dr. G.V. Black joined McKay in his investigation. In their efforts, the two came to an important discovery: teeth plagued with Colorado Brown Stain were oddly resistant to tooth decay.

After Black's death, McKay continued to search for the origin of these unexpected markings. Over the next twenty years, he traveled to towns afflicted with the mysterious ailment, testing water supplies for clues. With the help of chemist H.V. Churchill and his photospectrographic analysis methods, McKay established that high levels of fluoride in water were the cause of Colorado Brown Stain, now known as fluorosis. As methods of measuring fluoride levels became more accurate, later research by Dr. H. Trendley Dean established that levels below 1.0 mg of fluoride per liter in drinking water would not cause discoloration in most citizens' teeth. But Dean still had a crucial question lingering from the early studies of McKay and Black: why were Colorado Brown Stain teeth resistant to decay? Could low levels of fluoride prevent cavities without causing stains?

Dean finally received his answer in 1945, when Grand Rapids, Michigan became the first city to purposely fluoridate its water supply as part of a public health study. The results were astounding: in the next 15 years, the prevalence of cavities in children born after the addition of fluoride was over 60% lower than that of previous generations. For the first time in history, widespread prevention of tooth decay seemed like a possibility for the American public. But how does this chemical ion prevent tooth decay? To understand the impacts of fluoride, we have to look at the teeth on a microscopic level.

The Science Behind the Smile

Fluoride strengthens teeth by altering the chemical structure of tooth enamel, the hard outer layer that shields teeth from daily wear and decay. Enamel is mainly composed of a mineral called hydroxyapatite. While naturally strong, this compound is susceptible to attack from acids produced by bacteria, plaque, and sugars in the mouth. When acid levels rise, hydroxyapatite begins to dissolve; this results in a cavity, or a hole in the tooth enamel. Fluoride helps to reverse this process by binding with dissolved components of hydroxyapatite to form fluorapatite, a tougher, more acid-resistant compound. This subtle chemical shift makes enamel stronger, slows down tooth decay, and ultimately helps prevent the formation of cavities.
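The underlying ion exchange can be written in simplified form as the standard remineralization reaction:

Ca10(PO4)6(OH)2 + 2F- → Ca10(PO4)6F2 + 2OH-
(hydroxyapatite + fluoride → fluorapatite + hydroxide)

Fluoride ions take the place of hydroxide ions in the enamel's crystal lattice, and the resulting fluorapatite only begins to dissolve at a lower pH than hydroxyapatite does, which is what buys the enamel its extra acid resistance.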

The effectiveness of this process has been well documented. The Cochrane Collaboration performed a meta-analysis of 155 studies, and they found that communities with fluoridated water across the globe experienced 35% less tooth decay in primary teeth and 26% less in permanent teeth. However, with fluoride now found in most toothpaste, mouthwashes, and professional dental treatments, some have begun to question whether adding it to community water supplies is still necessary. Has the daily exposure from these other sources made water fluoridation redundant, or does it still play a vital role in protecting teeth?

A Dual Defense Against Decay

Although the fluoride added to toothpaste provides a more targeted defense against tooth decay, the impact of this treatment depends heavily on individuals' daily habits. People who brush less frequently or have limited access to dental care are left vulnerable to decay. Water fluoridation offers a universal baseline of protection by continuously exposing the teeth to small amounts of fluoride throughout the day, helping to maintain enamel strength between brushings.

Empirical evidence highlights this necessity. In Calgary, Canada, fluoride was removed from the city's water supply in 2011 after public debate over its necessity. Seven years after its removal, the prevalence of cavities in Calgary was compared to that in Edmonton, a neighboring city in Alberta that still fluoridated its water. Between two cities that previously had similar rates of cavities in primary teeth, Calgary showed a sharp increase in prevalence. Even in an era of widespread toothpaste use, the absence of fluoridated water led to measurable declines in children's oral health.

These findings show that fluoridated toothpaste and water aren’t interchangeable; they’re complementary. Toothpaste and brushing offer a direct, high-dose shield and remove plaque, while water fluoridation provides consistent protection to reach all social and economic groups regardless of dental hygiene. The combination has resulted in a public health success story, one that could eventually bring us closer to a future where tooth decay is a rare occurrence, and where oral health is improved worldwide.

The Caveat

Despite clear evidence of its benefits, fluoride has recently come under rash scrutiny. Critics such as RFK Jr. argue that the risks of water fluoridation vastly outweigh the benefits. Kennedy has publicly characterized fluoride as a "dangerous neurotoxin," linking it to arthritis, increased bone fractures, thyroid disease, and neurodevelopmental harm. He argues that contemporary research demands a re-examination of fluoridation policy, especially in light of emerging studies suggesting possible cognitive impacts at lower exposure levels than previously studied. Given these alarming claims, it is of paramount importance that the public understand the risks of excessive fluoridation and what concerns, if any, are posed by the recommended levels of fluoride in public drinking water.

In 2019, a Canadian study looked at the association between typical levels of fluoride exposure in pregnant mothers and the IQ scores of their children at ages 3 to 4. The researchers examined two subsets of women: one that gave urine samples to determine fluoride exposure, and another that self-reported fluoride intake based on water and beverage consumption during pregnancy. By these measurements, higher fluoride exposure measured by urinary levels was correlated with lower IQ in boys and slightly higher IQ in girls, while higher fluoride exposure measured through self-reporting was correlated with IQ lower by 3 to 4 points in all children.

While the study garnered significant attention, experts have cautioned that its findings should be interpreted carefully. Critics note that the data show considerable variation, making it difficult to establish causation or draw firm conclusions. The method of measuring fluoride exposure, especially through self-reported water and beverage consumption, is unreliable due to response bias and may not accurately reflect total fluoride exposure. Even urinary measurements capture only short-term intake, typically only reflecting the last drink that was consumed, rather than consistent exposure throughout pregnancy.

The observed association of lower IQ in boys and higher IQ in girls raises questions as well. If fluoride were truly neurotoxic and lowered IQ, the decrease should appear in both sexes. Other confounding exposures, such as lead, between the children's birth and the time their IQ was measured could also influence the findings. Among the women exposed to fluoride during pregnancy, exposure was not uniform: many were exposed to low levels, while only a few experienced high exposure, and those few may have disproportionately influenced the results. Overall, while the study highlights a possible association, it does not establish a direct causal link between maternal fluoride exposure at typical levels and children's cognitive outcomes. As Kennedy himself recommends, more research should be conducted to establish or disprove the association between fluoride exposure and cognitive ability, but there is currently not enough evidence to call for the complete removal of fluoride from community drinking water.

In 2024, the National Toxicology Program released an analysis of 74 studies, finding a statistically significant association between higher levels of fluoride exposure and lower IQ scores in children. The CDC currently recommends that communities maintain a level of 0.7 mg/L of fluoride in their water. While localities are not required to follow this recommendation, it is still widely observed. The only binding requirement is set by the U.S. Environmental Protection Agency, which enforces a maximum contaminant level of 4.0 mg/L. Water systems are also required to notify the public when fluoride levels exceed 2.0 mg/L, due to the risk of fluorosis above that point. The NTP monograph looked mainly at communities around the world whose fluoride levels were much higher than the CDC's recommended level. It concluded that higher fluoride exposure, meaning water containing more than 1.5 mg/L, was associated with lower IQ in children, but found insufficient evidence to establish an association between the recommended level of 0.7 mg/L and any effect on IQ.

The Path Forward

While we have significant evidence that high levels of fluoridation can have negative neurological effects, we lack evidence that the current CDC recommendation poses harm to the public. However, over 2.9 million U.S. residents live in areas where average fluoride concentrations exceed the 1.5 mg/L guideline set by the World Health Organization. The next step toward safer drinking water is not eliminating fluoride completely, but ensuring that everyone receives the appropriate level.

Ensuring that every community, regardless of location or income, has access to properly fluoridated water is a matter of public health equity. By supporting rigorous, unbiased research and continuing to monitor fluoridation levels, we can protect both dental health and neurological safety for generations to come. The debate over fluoride reminds us that science and policy must work in tandem and be guided by evidence rather than apprehension. By maintaining vigilance through research and education, we can ensure that adequate public health decisions are made to uplift every community.

“Mental Illness is Just a Chemical Imbalance”

Mental illness is real, but it is not “just” a chemical imbalance; its roots extend well beyond biology. Psychiatric conditions involve complex interactions among brain chemistry, environment, trauma, genetics, and social factors. Breaking down this myth helps people better understand that mental health is real, multilayered, and deserving of comprehensive care.

origins oF the myth

The idea that mental illness is due to a chemical imbalance became mainstream between the 1960s and the 1980s. It started in the 60s, when scientists studied and tested early drugs and antidepressants that affected neurotransmitters such as serotonin, dopamine, and norepinephrine. This research led to the monoamine hypothesis, which proposed that a deficiency or underactivity of those monoamine neurotransmitters in the brain causes depression, suggesting a direct link between low monoamine levels and depressive symptoms. However, this was just one piece of a much larger puzzle. The hypothesis prompted further study of the mechanism of action of antidepressants and how these agents can raise levels of these neurotransmitters in the brain. Pharmaceutical companies then designed specific antidepressants to treat mental illnesses such as depression and mood disorders. Eli Lilly, an American multinational pharmaceutical company, created one of the first effective antidepressants of this kind, fluoxetine, which was marketed as Prozac and gained enormous popularity in the late 80s. Prozac caught on because it was more tolerable, had fewer side effects than previous antidepressants, and symbolized a new era of biological psychiatry.

The drug's rise in popularity cemented the idea that depression was an ailment stemming from a chemical imbalance in the brain, specifically a serotonin deficiency. As Prozac and other antidepressants became more accessible, ads were put on display for all to see. These ads carried misleading messages about how SSRIs “boosted serotonin” or “corrected a chemical imbalance,” and the public came to accept these misconstrued claims as literal truth.

the Bullshit

Mental illness cannot be pinned down to one single neurotransmitter; in reality, it is much more complex than that. There are billions of nerve cells and many neurotransmitters in the human brain, and no single neurotransmitter can fully explain anxiety, depression, or schizophrenia. Additionally, there is no conclusive test that shows someone has a “chemical imbalance.” The monoamine hypothesis was never proven, as some people with depression have normal levels of serotonin. Antidepressants have been shown to improve depressive symptoms and have helped many people, but not everyone. If mental illnesses were simply a serotonin deficiency, then SSRIs would work universally, but that isn’t the case. The monoamine myth was amplified by drug ads and was never fully supported by scientific evidence. The narrative created unrealistic expectations, leading people to believe that antidepressants were a surefire cure for depression or anxiety, just as insulin treats diabetes or antibiotics treat infections. With the scientific evidence we have now, the myth can be credited to a combination of poor science communication, pharma spin, and society’s desire for a simple story.

the truth

Mental health conditions are influenced by several components: genetics, brain chemistry, environment and upbringing, trauma and stress, and social and cultural factors. No single factor is solely responsible, because mental illness arises from complex interactions between several of them. Many mental disorders are caused by a combination of genetics and environment. Genes play a role in predisposing individuals to certain mental disorders, as they may influence an individual’s brain chemistry and neurotransmitter balance. How individuals interact with their environment, along with their life choices, can affect how likely a mental illness is to occur. Trauma and stress can also play a role in developing a mental disorder. When an individual is exposed to a life-altering event that causes intense trauma, it can trigger or exacerbate mental health conditions such as posttraumatic stress disorder (PTSD) or anxiety disorders. Chronic stress can likewise erode mental well-being and change how an individual responds to challenges. Societal stigma, lack of support, discrimination, and cultural beliefs around mental health can all contribute to the development and severity of mental health conditions. The treatments that work best for illnesses such as depression or anxiety combine therapy, medication, lifestyle changes, and social support. With the right balance, mental illnesses can be treated, and individuals who suffer from them can maintain an improved quality of life.

Why it matters

A more accurate understanding encourages compassion, holistic care, and better recovery outcomes. Debunking the myth supports a more nuanced view of mental health and helps reduce stigma. Many people struggle with mental illness, but it is not just a “chemical imbalance.” In reality, mental health is shaped by a complex interplay of genetics, environment, upbringing, trauma, and social support. The myth that mental illness is only the result of a neurochemical abnormality is not only misleading but can also prevent people from seeking help or understanding the root causes of their struggles.

by Antonio Blanco | Design: Mimi Fingold

TYLENOL AND AUTISM: WHAT IS THE REASON FOR CONCERN?

For years in high school, I was known as the student clinic among my peers. Given that I would often get headaches in class and stomach aches from the cafeteria lunch, I always made sure to have a personal medicine bag with me. Tylenol was one of the few trusty medicines that I often carried and gave to others if needed.

For decades, acetaminophen (Tylenol) has been a prominent medication and considered safe if used as directed. It has helped my classmates and me relieve various pains and stubborn fevers. Its easy accessibility and recommendations by health professionals have given people confidence in the product. However, on September 22, 2025, the White House issued a press release stating “Evidence Suggests Link Between Acetaminophen, Autism,” sharing that acetaminophen use during pregnancy, more specifically late-term pregnancy, “may cause long-term neurological effects in their children.” The statement was largely met with skepticism and, for some people, concern. It left many Americans with one question: “Where is the proof?”

For many, this press release may have been the first time they had heard of a study or link between prenatal exposure to acetaminophen and a neurological disorder. However, studies examining the effects of acetaminophen use on embryonic development have been around for over a decade. In 2013, researchers at the University of Oslo, Norway, published a study on prenatal paracetamol (the European name for acetaminophen) exposure and child neurodevelopment. Mothers across the country were invited to participate and reported their paracetamol use during weeks 17 and 30 of pregnancy, as well as at six months postpartum. Researchers then followed up on 2,919 same-sex sibling pairs at the age of three.

The study reported that prenatal paracetamol exposure for more than 28 days was associated with worse gross motor development and communication at three years old. It concluded that consistent prenatal paracetamol exposure may be associated with these developmental outcomes. Although this study was not designed to find a bridge between prenatal acetaminophen exposure and neurological disorders, it opened the door for additional research.

In 2014, one of the first neurological studies on the association between prenatal acetaminophen exposure and ADHD was conducted by the University of California, Los Angeles in collaboration with Aarhus University in Denmark. The study involved over 64,000 children and their mothers. Phone interviews were conducted during pregnancy and six months postpartum to determine the amount of exposure to acetaminophen. Seven years after birth, results were recorded via behavior reports from the parents, diagnoses from the Danish National Registry, or medication prescribed for the child. With over half the mothers reporting acetaminophen use, the study concluded that an association between the use of Tylenol (or a generic equivalent) and ADHD is possible.

Recently, the White House press release cited studies that supported its announcement. An article in the American Journal of Epidemiology, written by researchers from Harvard, Yale, and other universities, observed the link between acetaminophen use during pregnancy and ADHD, suggesting that “prenatal acetaminophen exposure may influence neurodevelopment.” When reading the claims of articles that support the announcement, it is important to look into the studies themselves to see how they reached their conclusions. It also helps to note what type of research was conducted. Observational research, such as a cohort study, identifies patterns and associations, while laboratory research probes cause, effect, and the mechanisms involved. Both types of research are important, but their findings may be weighed differently because of the specificity of a lab study, which is why determining the type of research matters.

A collaborative study by researchers from Johns Hopkins University and the research division of the United States Department of Health and Human Services examined cord blood biomarkers of fetal acetaminophen exposure and their correlation with ADHD and autism in the child. The roughly 20-year study enrolled 996 women at delivery and followed their children into early adulthood. Of the 996 children, 257 (25.8%) were diagnosed with only ADHD, 66 (6.6%) with autism, and 42 (4.2%) with both autism and ADHD. Other developmental diagnoses were found in 304 children (30.5%), and the remaining 327 (32.8%) were neurotypical. The study concluded that “Cord biomarkers [which are indicators of chemical compounds in cord blood] of fetal exposure to acetaminophen were associated with a significantly increased risk of childhood ADHD and ASD (autism spectrum disorders).” More studies have been conducted since, including one by the Mount Sinai Health System, which warns about the use of acetaminophen during pregnancy. To this day, however, none of these studies has been able to demonstrate a solid causal link between prenatal acetaminophen exposure and neurological disorders in the child.

While many studies have reported an association, other research attributes neurological disorders in children to other factors. A study conducted in April 2024 by researchers at the Karolinska Institute in Sweden followed a similar approach to past studies, but found that acetaminophen was not associated with neurological disorders. Instead, the researchers concluded that the apparent link was likely driven by outside factors: acetaminophen use was correlated with families of lower socioeconomic status, which had higher rates of high body mass index, smoking during pregnancy, and known neurological disorders in the family.

Given the nature of the White House announcement, many Americans and people around the world have become fearful and cautious regarding acetaminophen. In September 2025, the FDA responded to the recent studies with a label change addressing prenatal exposure to acetaminophen and neurological disorders in the fetus. That same day, FDA commissioner Marty Makary released a letter stating that “clinicians should consider minimizing the use of acetaminophen during pregnancy for routine low-grade fevers.” However, it also made clear that the decision is ultimately left to patients themselves.

Whether the use of acetaminophen during pregnancy can directly affect a fetus’s likelihood of developing neurological disorders is yet to be proven. To learn more about the development of autism, the National Institutes of Health announced a $50 million investment in future autism-targeted research. Ultimately, whether acetaminophen truly affects the development of neurological disorders is a question only future research can answer. Only time will tell whether it is a breakthrough discovery or another debunked myth.

The Ozempic Craze: Miracle Drug or Modern Myth?

Ozempic’s rise has been swift and dramatic. A once little-known medication for Type 2 diabetes, the drug scientifically known as semaglutide has exploded into the public eye as a shortcut to weight loss. On TikTok, users document their weekly injections and shrinking waistlines. In Hollywood, it’s credited with sudden and dramatic weight-loss transformations for celebrities like Lizzo, Meghan Trainor, and even the Kardashians. Meanwhile, doctors and scientists urge caution, warning that the hype may be overshadowing the facts. As demand for Ozempic skyrockets and shortages leave diabetic patients struggling to refill their necessary prescriptions, a larger question emerges: Is Ozempic a true medical breakthrough, or just the latest example of society’s fixation on fast and easy results?

How Ozempic Works

Ozempic belongs to a class of medications known as GLP-1 receptor agonists, which mimic a naturally occurring hormone called glucagon-like peptide-1. This hormone helps regulate blood sugar levels by stimulating insulin release and slowing digestion—both of which promote a feeling of fullness.

Originally approved by the FDA in 2017 to treat Type 2 diabetes, Ozempic was never intended for weight loss. Yet during clinical trials, many participants reported a striking side effect: significant weight reduction. That discovery paved the way for semaglutide’s rebranding under a new name, Wegovy, specifically for obesity management. Since then, demand for Ozempic among nondiabetic users has continued to surge, blurring the line between legitimate medical treatment and a cultural phenomenon.

The Science Behind the Hype

Amid the online frenzy, the science behind Ozempic often gets lost and misunderstood. Semaglutide, the active ingredient, is a GLP-1 receptor agonist designed to help people with Type 2 diabetes regulate blood sugar by stimulating insulin release and slowing digestion. Clinical trials have shown that, when combined with diet and exercise, the drug can aid in significant weight loss, sometimes exceeding 15% of body weight over several months. For patients managing obesity or diabetes, these results can significantly improve quality of life.

Celebrity Influence and Media Hype

Few factors have propelled Ozempic into the spotlight more than its entanglement with celebrity culture. Over the past few years, entertainment media have extensively reported on stars who discuss their weight loss journeys—often referencing Ozempic or other GLP-1 drugs in interviews, Instagram posts, and TikTok videos. Rumors about high-profile figures such as Lizzo, Meghan Trainor, and members of the Kardashian family have further fueled public fascination, making the drug a staple of tabloid headlines and lifestyle coverage. Late-night talk shows, comedy sketches, and award show commentary have capitalized on the trend, using Ozempic as a punchline and cementing the drug as a recognizable cultural reference point, far removed from its original medical purpose.

The Social Media Feedback Loop

Social media has magnified the celebrity effect. Viral before-and-after photos, anecdotal weight loss updates, and speculation about which stars are using the drug have spread rapidly on platforms like TikTok, Instagram, and X. Influencers and ordinary users alike share their own experiences, creating a feedback loop that normalizes off-label use and generates curiosity among viewers who might never have considered prescription medication otherwise.

One of the more striking aspects of this trend is the emergence of “Ozempic face”—a term describing the sunken facial appearance some users develop as a result of rapid weight loss. Memes, TikToks, and news articles dissecting this side effect have made it a viral talking point, highlighting how even cosmetic consequences have become intertwined with the conversation about health and beauty. The celebrity-driven narrative has also shaped public perception and behavior. Media coverage suggests that celebrity mentions, whether intentional endorsements or casual remarks, drive interest in the drug among the general population, with more people asking healthcare providers about Ozempic or similar weight-loss-inducing medications. According to a 2024 JAMA Network Open study conducted by researchers from the AMA and CDC, spending on GLP-1 drugs such as Ozempic and Wegovy increased by more than 500% from 2018 to 2023, reaching $71.7 billion. The resulting surge in spending and prescriptions illustrates how modern celebrity culture can transform a medical treatment into a cultural phenomenon.

Yet the reality is far more complex than viral TikToks suggest. Common side effects include nausea, vomiting, diarrhea, and fatigue. Rapid weight loss can also exacerbate other health issues, particularly in people using the drug without medical supervision. These complications have prompted at least 50 lawsuits against Novo Nordisk, with plaintiffs claiming that the company failed to adequately warn users about serious gastrointestinal issues, including gastroparesis and intestinal blockages. While the company maintains that the side effects are clearly listed on the drug’s labeling, the litigation highlights that the medication is not without real, sometimes severe, risks.

Moreover, weight often returns once the drug is discontinued, raising questions about its long-term effectiveness for non-diabetic users. The widespread narrative on social media frequently portrays Ozempic as a magical shortcut to rapid weight loss, downplaying costs, ongoing medical supervision, and potential adverse effects on health. The combination of celebrity endorsements, viral trends, and anecdotal success stories has created a cultural perception that overshadows clinical reality. While Ozempic represents a genuine scientific breakthrough for certain medical conditions, the hype has inflated its reputation, blurring the line between legitimate treatment and societal fascination with effortless transformation.

Ethical and Social Concerns

Ozempic’s popularity raises serious ethical questions. Surging demand among non-diabetic users has caused shortages, leaving patients who rely on the drug for managing Type 2 diabetes struggling to refill prescriptions. Access is also tied to cost. Those with insurance or financial means can obtain the drug, while others cannot, highlighting disparities in healthcare access. More broadly, the popularity and obsession reflect societal pressures around body image and instant results, where cosmetic desires overshadow medical need.

Conclusion

Ozempic stands at the crossroads of medicine, media, and culture. While clinical evidence supports its use for diabetes and obesity management, social media hype, celebrity influence, and viral trends have exaggerated its image as a “miracle” weight-loss solution. Coupled with potential health risks, lawsuits, and access issues, the phenomenon serves as a cautionary tale: true medical breakthroughs require careful understanding, responsible use, and an awareness that no drug can replace the value of informed, long-term health decisions.

Your SPF 50 Is Probably Working Like an SPF 15

Veronica Richmond

In the sun-drenched landscape of Miami, sunscreen is not just a beach-day accessory— it’s a daily necessity. You probably check the labels at Target, reach for the sunscreen bottle with the highest Sun Protection Factor (SPF) you can find, and trust that it will provide an adequate shield against UV radiation. But what if that SPF number is being undermined before you even step outside? Here’s the truth: for most of us, your SPF 50 is likely performing at the level of an SPF 15 (if you’re lucky), and the reason is very simple: you are not using enough. Before we break down how that SPF number is calculated, let’s cover why that even matters.

Why This Matters

Why are we even having this conversation? It’s not just about avoiding a painful sunburn. This is the long game. The why is twofold: vanity and health. First, vanity. UV radiation is the number one undisputed cause of premature aging. It is the primary driver of collagen and elastin breakdown, which leads directly to the fine lines, wrinkles, and sunspots that many people try to prevent. Second, and far more critical, is health. Skin cancer is the most common cancer in the United States, and it is largely preventable. Using sunscreen correctly is not an optional beauty step; it’s a non-negotiable health habit, just like brushing your teeth. This is why it is so crucial to know that while we might think we are protecting ourselves, we may be mistaken.

How SPF Is Calculated

Let’s get into the how. That number on your bottle is a real, scientific value. But to earn it, lab technicians have to apply a layer of sunscreen so thick, you would likely never wear it in public. We are talking about a specific, FDA-regulated amount of 2 milligrams of product for every square centimeter of skin. To translate that, it means you would need a full shot glass of sunscreen for your body and a nickel-sized amount for your face. It is this dense layer of sunscreen that is tested to determine SPF. Meanwhile, most of us are using a fraction of that, given that we have social lives and probably don’t want to look like we just took a dip in a vat of white paint.
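To see where the shot glass figure comes from, here is the back-of-the-envelope arithmetic. The 2 mg/cm² dose is the FDA test standard cited above; the total skin surface area of roughly 1.7 m² (17,000 cm²) and the lotion density near 1 g/mL are assumed averages used purely for illustration:

\[ 2\ \mathrm{mg/cm^2} \times 17{,}000\ \mathrm{cm^2} = 34{,}000\ \mathrm{mg} = 34\ \mathrm{g} \approx 34\ \mathrm{mL}, \]

or roughly the volume of a standard shot glass. Run the same dose over an assumed face-and-neck area of about 550 cm² and you get roughly 1.1 g, the nickel-sized dollop.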

A 2014 study by Petersen & Wulf confirmed that most people apply only 25% to 50% of the recommended amount. We go for a thin layer that feels good and disappears quickly. But here is where the math will really shock you. The protection you get is nonlinear. It’s a concept dermatologists call the “square root rule,” which is a clinical way of saying the penalty for under-applying is far worse than you’d think. If you use only half the right amount, you do not get half the SPF; you get roughly the square root of it. So, with half the correct amount, your SPF 50 is performing closer to an SPF 7. And if you’re in that 25% group? Your SPF 50 has dropped into the low single digits.
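One way to make the rule concrete is an exponential model. This formula is an assumption consistent with the rule as stated above (half the dose yields the square root of the labeled SPF); it is not an official FDA or AAD formula:

\[ \mathrm{SPF}_{\mathrm{effective}} = \mathrm{SPF}_{\mathrm{labeled}}^{\;q/2}, \qquad q = \text{amount applied in } \mathrm{mg/cm^2}. \]

At the full dose (q = 2), the exponent is 1 and you get the labeled SPF 50. At half the dose (q = 1), the effective protection is \(50^{1/2} \approx 7\); at a quarter of the dose (q = 0.5), it is \(50^{1/4} \approx 2.7\).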

The Miami Sun

This is not just a theoretical problem. This is a Miami problem. Here in South Florida, the sun practically never leaves. We live in a subtropical climate where the UV Index, the official measure of ultraviolet radiation strength, is consistently in the “High,” “Very High,” or “Extreme” category for most of the year. You can check it on your phone’s weather app if you scroll down and look at the stats for your area. On any given day, you will see a UV Index of 7, 8, or 9+. What does that mean? It means the sun is strong enough to cause significant skin damage in as little as 10 to 15 minutes. When the stakes are that high, walking around with what you think is an SPF 50, but is actually performing in the single digits, is a serious risk.

The Solution

Okay, so how do we fix this? Let’s start with the practical bare minimum. We cannot be perfect every day. But on the days that count, this is what you should focus on. First, check the UV Index on your weather app. Is it 6 (“High”) or above? If yes, this is a non-negotiable sunscreen day (FYI: You should be applying sunscreen every day, regardless of the UV Index, but especially when it’s above 6). That is your first filter. On those days, commit to doing the one thing that matters most, which is using the right amount.

Now, here are the actionable steps to apply the right amount and make sure you maintain the quality of your sunscreen and application:

First, use two fingers. The easiest way to remember the right amount for your face and neck is the two-finger rule. Squeeze a line of sunscreen down your index and middle finger. The American Academy of Dermatology officially recommends this amount, advising you to use “at least 1 teaspoon (about the amount needed to cover the length of your index and middle fingers)” for your face. That’s all you need. It might feel like a lot the first time, but let it sink in. This is the single most important change you can make.

Second, make sure to date your bottle. Sunscreen expires, and the active ingredients become less effective over time. Grab a Sharpie and write the date you opened the bottle on the bottle. If it’s been over a year, it’s time for a new one. This is a simple step that ensures the product you are using is actually potent.

Third, think of applying sunscreen as part of your skincare routine. You probably don’t use just a tiny drop of moisturizer. Treat your sunscreen the same way. Apply it generously as the last step of your routine, right before your makeup.

Now, for reapplying without ruining your makeup, let’s talk. This is the big one. This is where most of us fail, in addition to not applying enough. Sunscreen is not a one-and-done deal. It breaks down from sun exposure and wears off with sweat. The official rule from the American Academy of Dermatology is to reapply every two hours, or immediately after swimming. This rule is not a suggestion. But who wants to smear lotion over a full face of makeup? This is where a little planning can save you. The solution is to get a product specifically for reapplication. An SPF setting spray or a powder sunscreen (which often comes in a self-contained brush) is a game-changer. You can re-up your protection right over your makeup without messing anything up. Keep one in your bag. It makes the 2-hour rule a bit more realistic.

The Bottom Line

The SPF on your bottle represents potential, not guaranteed performance. To get what you paid for, you must apply enough—and reapply often.

Think of sunscreen as both a beauty product and a health investment. A few extra seconds and a bit more product could mean the difference between protection and exposure under the relentless Miami sun.

Can Being Squeaky Clean Make You Sneezy? The Reality of Being Germ-Free

It has been ingrained since childhood that cleanliness prevents sickness. From applying Germ-X to our hands to wiping surfaces with Clorox wipes to neutralize microbial enemies, modern living has been fixated on keeping germs away. However, research has demonstrated that excessive sanitizing and cleansing can actually make us sicker. Studies highlight the importance of having a well-trained immune system, maintaining a robust microbiome, and warding off autoimmune diseases. In essence, being clean does not always guarantee being healthy.

The “Hygiene Hypothesis” proposes that increased cleanliness and reduced exposure to microorganisms have driven an increase in autoimmune disorders and allergies. The Industrial Revolution (1760–1840) brought measures such as water purification, sanitation, pasteurization, vaccination, and antibiotic usage to Western countries to improve public health, eliminating diseases such as hepatitis A and parasitic infections. The Industrial Revolution was a period of significant progress across many areas: Great Britain's intercontinental economic success spread manufacturing industries and urbanization, and that progress eventually extended to healthcare, catering to growing populations and disease. However, in countries where these practices are not implemented, the prevalence of allergies remains low. Conversely, in countries that have eradicated these infections, there has been greater concern over the emergence of autoimmune diseases and allergies.

Animal models, increases in allergic diseases after anti-parasitic medication, prevention of autoimmune diseases by infection, and the use of probiotics all support the idea that decreases in infectious disease cases—achieved through excessive cleanliness—have led to increased cases of allergy and autoimmune disease. For instance, a 2002 study demonstrated that non-obese diabetic mice exposed to high microbial diversity were protected from diabetes, whereas mice raised in specific pathogen-free conditions showed almost a 100% incidence of diabetes.

Lynch et al. published a 1993 paper attributing a rise in dermatological skin allergies in Venezuela to the eradication of helminths, parasitic worms. Conversely, the administration of parasite eggs has been shown to improve symptoms of autoimmune diseases. For example, a 2005 study by Summers et al. showed that a regimen of parasite eggs from pigs (Trichuris suis), administered every three weeks for six months to 29 patients with Crohn’s disease, improved symptoms in 72% of patients with no adverse effects.

Probiotics, which are non-pathogenic microorganisms commonly taken as supplements, have been shown to significantly decrease cases of atopic dermatitis when expectant mothers consumed Lactobacillus GG for two to four weeks before birth and six months postnatally, according to a 2001 study by Kalliomäki et al. In 2023, Rook described the “Old Friends Hypothesis,” which refined the Hygiene Hypothesis by proposing that the specific microorganisms humans evolved with help regulate immune tolerance. One example comes from Yang & Cong's 2021 study: metabolites produced by the gut microbiome, such as short-chain fatty acids, enhance the production of immune system signaling molecules and regulatory T cells.

Modern lifestyle factors that have diminished microbial diversity in the human body, and consequently increased allergies and autoimmune disorders, include C-section deliveries, lack of breastfeeding, pollution, limited exposure to green spaces, and stressors such as insufficient sunlight, drug use, poor diet, smoking, antibiotic misuse, and vaccine mistrust, as listed in Rook's 2023 paper.

In addition to the Hygiene Hypothesis, the “Microflora Hypothesis” proposes that early-life disruptions, such as antibiotic use and poor diet, disturb the native flora of the human body, undermining microbiota-influenced immunological tolerance and leading to hypersensitivity disorders, according to a 2005 publication by Noverr & Huffnagle. A study published by Hoskin-Parr, Teyhan, Blocker, and Henderson in 2013 found that antibiotic use in the first two years of life is correlated with the development of asthma at 7.5 years of age in a dose-dependent manner, meaning the probability of developing asthma increased with the amount of antibiotics given.

Jakobsson et al.'s 2013 study found that birth through Caesarean section (C-section) is associated with lower microbial diversity and reduced colonization of Bacteroides, leading to decreased Th1 immune responses in the first two years of life. Through vaginal delivery, bacteria from the mother colonize the infant’s gut immediately after birth, primarily anaerobic Firmicutes and Bacteroidetes, which are key components of the human microbiome.

Ultimately, with new advances and understanding in modern medicine, it has become apparent that over-sterilization may hinder the human immune system. The Hygiene, Old Friends, and Microflora Hypotheses all converge on the conclusion that the immune system needs microbial exposure to develop properly and mount efficient immune responses. A stable relationship with microbes is a requirement for good health. To reduce cases of allergies and autoimmune diseases, modern countries must learn to balance eliminating harmful microbes with preserving the beneficial ones that train and strengthen our immune systems for the long term.

Can Cracking Your Knuckles Cause Arthritis?

There is a good chance that you have been warned, “Stop cracking your knuckles. It will cause arthritis.” This belief has been spread by many parents, teachers, and doctors. If you grew up hearing it, you are not alone. The pop of a knuckle is perceived as a dangerous habit that may give rise to medical risks. Despite how common the warning is, most people don’t stop to ask whether it is scientifically true. Does cracking your knuckles actually cause joint damage, or is it simply a myth passed down over generations? If this everyday habit truly causes long-term physiological damage, it would affect millions of people globally. What actually happens in people’s joints when their knuckles are cracked? To uncover the truth, we have to understand the mechanism that causes the sharp “pop” in our fingers. By exploring the anatomy of the human joints and the science behind knuckle cracking, we can determine whether this warning is a medical fact or a myth.

The myth that knuckle cracking leads to arthritis is a long-held belief passed down from generation to generation. Many people, including relatives and doctors, warned that knuckle cracking was harmful to joints in the long run, claiming it would lead to the bone deterioration observed in arthritis. For decades, the myth has persisted and spread without clear scientific evidence. Despite the concerns, no scientific data has shown that consistent knuckle cracking breaks bones, harms joints, or leads to the development of arthritis.

To understand whether knuckle cracking is harmful, it is important to first understand how the joints in our hands operate. When you crack your knuckles, the joint capsule gets stretched, which creates a vacuum inside the capsule.

According to Dr. Fackler, in the article by Shaina Huntsberry, gases dissolved in the synovial fluid come out of solution and form nitrogen bubbles in the vacuum. The bubbles then rapidly collapse, releasing energy and causing the popping sound in the form of a sound wave. Individuals tend to crack their knuckles because the slight stretch of the joint capsule releases pressure and tension. However, nothing is breaking, no cartilage is grinding, and no actual damage is happening. According to the Cleveland Clinic, a 2011 study looked at “crack years” to see if habitual knuckle cracking can lead to osteoarthritis or other forms of arthritis.

The study found that cracking knuckles, no matter how often, does not increase the risk of joint swelling or developing arthritis. If the sound comes from harmless nitrogen and carbon dioxide gas bubbles rather than bone damage, then why do so many people believe that knuckle cracking leads to arthritis?

Arthritis is a condition that leads to inflammation and pain in the joints. A common form is osteoarthritis, which involves the deterioration of cartilage and joints over time. While there are many different types of arthritis, osteoarthritis is the form most associated with knuckle cracking. People have linked knuckle cracking with osteoarthritis because of the long-held myth that the audible “pop” indicates bones being damaged and broken down. In fact, according to Dr. Fackler, age and genetic predisposition are the major factors in the development of osteoarthritis, a condition that typically does not develop until people reach their 40s or 50s. Cracking knuckles can potentially cause temporary swelling, but it is essentially harmless in the long run. Some doctors have even conducted their own experiments to refute the idea that habitual knuckle cracking leads to arthritis.

For example, Donald Unger cracked the knuckles of only one hand for fifty years to put the rumor to the test. After comparing the two hands, he observed no significant difference in arthritis development between them. Unger won the Ig Nobel Prize in Medicine in 2009 for disputing the common misconception that knuckle cracking is detrimental to the phalangeal joints of the hand.

Scientific studies have ultimately not found any association between repetitive knuckle cracking and the development and progression of arthritis. The next time someone tells you to stop cracking your knuckles because it will cause arthritis, you can confidently tell them it’s just a myth. Science has shown that the familiar popping sound comes from harmless gas bubbles in the joints, not from bones or cartilage being damaged. Although knuckle cracking may be irritating to others, it remains a safe habit rather than a harmful one. The next time someone warns you otherwise, you can politely let them know that science disagrees.

Humans Have Only 5 senses

The historical debate over the number of human sensory systems (five, six, or seven) is still alive. Human perception of the world is inherently multisensory: think of the feeling of sand beneath your feet at the beach or the warmth of the sun on your skin. Most of us were taught in school that we have five senses: touch, sight, hearing, smell, and taste. But in reality, we experience far more than just those. These additional senses include proprioception, equilibrioception, nociception, thermoreception, and sensations such as hunger and thirst.

Origins of the Myth

The idea of the five senses (sight, hearing, taste, smell, touch) originated with the ancient Greek philosopher Aristotle (384-322 BC), who associated a distinct sense organ with each perception. Perception was believed to take place in the external organs themselves: in the eye, in the ear, in the external organ of smell, independently of any central sense. In other words, each sense was tied to an external organ, one organ per perception. Few philosophers or scientists of the time attempted to investigate the internal processes involved in the sensory system. From childhood, humans are conditioned to accept just the basic five senses as an easier way to understand sensory perception. This outlook follows us into adulthood, but some facts remain little known to most.

The Truth

Our senses are not all about the external; they also involve internal mechanisms. Nerves are the pathways of our sensory systems, and with billions of nerve cells in our body, we are constantly responding to the stimuli presented to us. Close your eyes and then touch the end of your nose with your finger. It’s not difficult to do, right? But how did you do it? Which of the senses did you use? Before touching your nose with your fingertip, you were using an additional sense known as proprioception. Proprioception is just one of a host of senses beyond the standard five that you probably didn’t even realize you had. It is the ability to sense the position and movement of your body parts in space without necessarily looking at them. How do we manage to type without looking at the keyboard, or walk without looking at our feet? The answer is a constant feedback loop within your nervous system, telling your brain exactly what position your body is in and what forces are acting upon it at any given point in time. This awareness of our bodily movements in space allows us to adjust and make changes when needed.

Your inner ear knows balance long before your mind does. Equilibrioception is the sense of balance that allows us to stay upright and walk around without getting hurt. This system relies on the vestibular organs in the inner ear, which detect changes in head position and movement and work together to maintain stability. Often taken for granted, equilibrioception is constantly at work when you're walking, jumping, or running. It’s like walking on a narrow beam, carefully placing one foot in front of the other, as your body constantly adjusts before you even realize it. It is the dialogue between your body and gravity, allowing you to find your center of gravity again.

Our body also has a built-in alarm system called nociception. This is the sense of feeling and responding to pain in body tissue, a vital part of our survival mechanism. Nociceptors respond to three different kinds of stimuli (thermal, chemical, or mechanical) and send signals to the brain to indicate a deviation from normal conditions. It is like your body reminding you of danger. For example, when you touch something hot, nociceptors send a signal to your brain warning you of potential harm. This sense plays an essential role in survival, yet it is taken for granted due to its expected, automatic nature.

Have you ever wondered how your body knows it’s too hot or too cold before the thought even crosses your mind? The ability to distinguish the temperature of our environment, which keeps us alive and well, is known as thermoreception. Thermoreceptors in the human body detect the flow of heat. As the temperature around us changes, these receptors convert that information into nerve impulses, which carry it to the central nervous system to process and initiate a response. Thermoreception can be seen in action as the body fights infection: when our body detects an infection, we typically run a fever as our body temperature rises, a natural defensive mechanism to keep us well. Through these senses, our body gathers critical information about the mechanical, thermal, and chemical properties of our surroundings.

The last two are internal bodily sensations, hunger and thirst, which send signals the body must respond to. Hunger conveys the body's minimal energy requirements, while thirst promotes water and electrolyte intake to support digestion and waste filtration. Both of these sensations fulfill basic biological needs stemming from human evolution; they aid in survival by prioritizing the nutrients and water needed to ensure proper bodily function and maintain homeostasis.

Why it Matters

While we aren’t as aware of these senses as we are of the primary five, they hold the same influence over our everyday lives. These additional senses are essential in carrying out daily tasks, enabling us to navigate various environments and interact with the world and others around us. Their importance lies in providing a more aware, nuanced perspective of our surroundings and our physical state, leading to better interactions, a higher appreciation of our world, and an increased quality of life. Ultimately, how our senses interact with one another to create the rich and complex experience of the world that we know and love may be more fascinating than the question of how many senses we have. Science and philosophy are just now realizing how the senses interact in intricate and exciting ways rather than performing in isolation. Therefore, the next time you eat, travel, or attend an event or gathering, stop and consider which senses you are employing. You might be surprised by the answer.

Vicks VapoRub: Vicks & its Olfactory Comfort Blanketing Hispanic Culture

Design: Chaunté Lewis

Don’t let the mild weather & blue skies characteristic of South Florida living on our campus fool you. The Fall ‘25 semester is at its end, and the slump in health at the peak of "sick season" is unmistakable. Be it the tabling of seasonal Flu vaccines around campus becoming prominent in the corner of your periphery, or the staggered, almost rhythmic, timing of the coughs from your peers that echo around your classes.

Everyone handles this season differently: whether it’s stocking up on Emergen-C packets, the loving married couple that is DayQuil & NyQuil, being a bit more self-aware in how thoroughly you wash your hands, or even a pit stop at the aforementioned tabling to schedule a date with you, a Flu vaccine, and the healthcare professional administering said shot. Or, if you’re like me, you’ve fallen prone to illness and are swinging by the CVS conveniently located a short walk from campus after a phone call with your mami to buy... "Vicvaporu.”

While many have heard of the brand Vicks VapoRub, a cough suppressant & topical analgesic medicated ointment, many people in the Hispanic community may be more familiarly acquainted with its siren call– Vicvaporu, Vaporú, Bibaporrú, El Vic, El Bix, El Vickisito. These names come from the very own voices of their mother or abuelita’s insistence on the application of this opaque white gel salve as some sort of savior, a cure-all to any and all ailments that have befallen their hijitos preciosos (precious little children).

This little blue jar with accents of teal has many aliases, and its footing in Hispanic culture is prominent: a shorthand for nostalgia and memory, and the blueprint for creative uses that borderline stretch the capacity of this menthol-scented ointment. The product stirs up such loyalty that when I merely mentioned my pitch for this article, my mother mused that I should actually get some at the CVS or Target, since I was feeling under the weather.

How Vicks Started its Eucalyptus-Scented Empire

Lunsford Richardson graduated as valedictorian at Davidson College, class of 1875, with a degree in Latin & a passion for chemistry. He began working to become a pharmacist with his brother-in-law, Dr. Joshua Vick, at his drug-manufacturing company in Selma. Richardson even roomed at Dr. Vick's home in Selma on Massey Street, paying 10 dollars per month for rent. But after 10 years, Richardson’s entrepreneurial spirit drove him forward, and he relocated to Greensboro, NC.

In his career, Richardson retained the rights to 21 of his own medicines, such as Vick’s Turtle Oil Liniment, Vick’s Little Liver Pills and Little Laxative Pills, and Vick’s Grippe Knockers, and opened the “Vick Family Remedies Company” in 1905. However, sales struggled for nearly all of his products except the one that became the sole focus of commercial success for decades— Richardson's Croup and Pneumonia Cure Salve, created in 1894. The name was shortened to Vick's Magic Salve, but the name that we all know and love today is actually attributed to his son, H. Smith Richardson, who suggested discontinuing all of the company’s products except Vick’s Magic Salve and rebranding it as Vicks VapoRub.

According to the official Vicks website, the magic salve's creation was "inspired by [their] founder's love and concern for his sick son." At the time, Lunsford Richardson’s son “had a severe case of croup. Lunsford combined unique ingredients into a salve that when heated by the body would release soothing vapors. The boy soon recovered.” Attributing the boy's full recovery to the salve requires some suspension of disbelief; however, the marketing's effectiveness seems to hold as true today as it did back then.

Road Signs, Store Displays, Junkmail …& the Spanish Flu Outbreak

Richardson did not live to see his creation become the worldwide sensation we know today: he died in August 1919 after contracting the widespread Spanish Flu, the very same epidemic that increased demand for his product (the irony is not lost on me there). By then, however, he and his son had already helped Vick's garner a national following.

Richardson was a revolutionary in advertising as well, known as the father of junk mail. He earned the title by convincing the Postal Service to allow mass-mailed circulars addressed simply to “Boxholder,” which did not require personalization of materials. Using this strategy around 1917, Vicks drew in millions of customers, sending a myriad of samples to mailboxes so that prospective consumers could try the product before ever needing to purchase it.

However, the event that really brought Vicks into the public eye was the Flu pandemic of 1918-1919, which made Vick’s VapoRub an indispensable addition to the household. Tragically, nothing fueled sales quite like this deadly outbreak, with sales jumping from $900,000 to $2.9 million in a single year. After Richardson’s death, his son Smith took over the company. Vicks continued to grow and develop an arsenal of products– cough drops, nose drops, inhalers, cough syrup, nasal sprays, as well as DayQuil & NyQuil.

Before jumping ahead to the modern day, I want to focus on how this surge in sales also allowed for a pivot. The company began marketing internationally, and among its worldwide reach of more than 70 countries, its expansion into Latin America laid the groundwork for its widespread use in immigrant communities. Its familiarity, availability, and affordability all played a role, as it had already become a default remedy in many Hispanic households. Immigrant households, especially earlier in American history, have had to lean on familiar remedies when health systems were less accessible. A study by Ransford titled “Health care-seeking among Latino immigrants: blocked access, use of traditional medicine, and the role of religion” drew on open-ended interviews with 96 Latino immigrants, 12 hometown association leaders, and five pastors and health outreach workers. It revealed that blocked access to healthcare leads immigrants to navigate between “mainstream and traditional medicine,” a love-child blend of culturally relevant remedies and commercial options in an “interplay of culture and structure.” Those findings are paralleled in Vicks’ rise to fame.

It’s all Menthol… I mean, Mental: Testimonials among U.M. Undergraduates

I asked around about the impact Vicks has had on other people in their families. Sophia Perez, who is of Cuban and Colombian descent, had strong memories of it both in her family and that of family friends: “When I was little, anytime I would get sick or I would have a cold– my parents, or my mom mostly, would hear me cough in the middle of the night. Out of nowhere, she’s like, in my room with VapoRub, Vicks rub, whatever it's called, rubbing it on my chest, on my neck, and I'm like [pantomimes coughing]. It was really minty. I don’t know if it helped but I think if anything, the placebo effect took place.

"Even family friends, I remember this one time, I slept over at my sister's friend's place and I was sick and coughing a lot through the night and my sister's friend's mom woke me up with the VapoRub. We’ve had conversations before about Viporu! [Laughs] Viporu! Especially when I lived in Miami most of my friends were also Hispanic. I think it’s a common thing.

"I don't even know what the substance is, but it’s not medicine. But, the mintiness of it, I think decongests. And I think that's why families use it.”

So did Sophia "Fia" Fernandez, who has a Cuban background.

“Yeah, I mean like growing up anytime that I was sick, had a cold, and I was coughing a lot– my mom or my grandma, they would always put Vicks either on my chest to stop the coughing, or on my feet. And in my experience, maybe it was a placebo effect, but it always worked. My cough always went away, and I would start feeling better the next day.

"Following up on that, though– though it's always worked for me and all my family members– one time my boyfriend got sick and I tried it on him and he said it didn't help him, it didn't work. So I thought that was weird cause it always worked for me.

"But overall, my experience- this very Hispanic-adopted way of medication has always worked for me and always made me feel better so I 100% think it works.”

Fia continues the tradition with those she loves, her boyfriend Milo Greenspon now having memories associated with Vicks. Upon asking Milo to further expand on this shared experience between them, he says,

“So I got sick with the Flu sometime last year and I was trying all of these antibiotics. None of them were helping with my cough. It still felt extremely painful in my back and my chest, and my girlfriend applied the Vicks VapoRub, and it didn't feel like it helped initially, but then I felt eventually that I could breathe more properly and the pain in my back slowly went away over time after continuous application. So although it didn't necessarily make a difference initially, it eventually helped.”

What does it actually do? …& that’s (partially) the rub!

In the Spring 2025 semester, I sent a message wishing a speedy recovery to a sick friend, to which their texted reply was, “Thanks, I've been doing all the at home remedies lol soup, juice, medicine, vicks.” But what does Vicks actually do?

The beloved Hispanic cure-all dissected & discerned!

According to Vicks' manufacturer, Procter & Gamble, the active ingredients in this ointment are 1.2% eucalyptus oil, 2.6% menthol, and 4.8% camphor. The inactive ingredients include cedarleaf oil, nutmeg oil, petrolatum, thymol, and turpentine oil, all of which contribute to the familiar scent we know and love. All three active ingredients are cough suppressants, while only the camphor and menthol are topical analgesics. The official Vicks website states that these ingredients combine to help with decongestion, coughs, and minor muscle aches and pains.

As Neil Bhattacharyya, MD, a professor of otolaryngology at Harvard Medical School and a surgeon at Massachusetts Eye and Ear hospital in Boston, explains, the shared characteristics of the three active ingredients "[give] you the sense of your throat being cooled off. Just like in Bengay, by the way, or Tiger Balm, the counterstimulation substitutes for the soreness of a sore throat or the feeling of congestion, and it tricks your brain." Much like some cough drops, it “can help by creating the sensation that more air is flowing into your body when you breathe.”

The Mayo Clinic shares a similar sentiment: while Vicks VapoRub doesn’t clear up congestion in the nose, its strong odor can trick your brain into thinking you’re breathing through an unclogged nose. What Vicks offers, then, is temporary relief from the sensation of congestion and pain. The underlying illness won’t change course on its own, much as a strep throat diagnosis still requires a trip to the doctor and a few prescribed medications.

However, this is a vast departure from the colorful off-label uses the Hispanic community has taken to. “Well, it's good for coughs and colds. You put it on your chest and back and it's a good remedy for that. You put it on your nails and it's also good for nail fungus. You put Vivaporu with Vaseline and it’s also a good remedy for when people have split or cracked heels. Excellent for the flu. Also for headaches on the temples and forehead and it relieves you,” Mami Fanny, my abuela, states.

So let’s do a bit of lightning-round debunking here. A TL;DR (Too Long; Didn’t Read) of some of Vivaporu’s do’s and don’ts!

Cough, decongestion, or minor muscle aches and pains? The sensation of relief, so it doesn’t hurt to apply some of this balm!

Chapped lips? Best reach for some Vaseline instead.

Mosquito bites? Many birds with one stone here, it can soothe the itchiness and reduce the redness, but much like acne or sunburn, never use Vicks on open wounds, blisters, or inflamed skin.

Nail fungus? Probably won’t help. But it won’t hurt either.

Telenovela tears? While some telenovela stars have applied Vicks beneath the eye to help them cry, the menthol causing irritation that produces tears on cue, keep in mind that if it makes contact with the eyes themselves it can cause burning and blurred vision.

Consumption? It should go without saying, but never, ever ingest the stuff.

Broken heart? Well…

While it may not help with the intricacies of emotional heartbreak, it can, however, tap into a different sort of emotion. Vicks is more than an offer of physical relief; its trademark minty aroma creates a sensory-triggered memory of the feeling of being cared for. This is called the Proustian phenomenon, the vivid recall of autobiographical memory through smell, named after Marcel Proust, the French novelist who famously described in Swann’s Way how the smell of madeleine cookies dipped in linden tea brought back long-lost childhood memories. These triggers have been further studied by Rachel Herz, a Canadian and American psychologist and cognitive neuroscientist, who used Vicks VapoRub (among four other products: Coppertone suntan lotion, Crayola crayons, Play-Doh, and Johnson & Johnson baby powder) as a subject in her research on the Proustian phenomenon. Her research emphasizes how Vicks transcends being simply a remedy: it often brought up positive memories “not of feeling sick, but of being cared for and being soothed.”

“Sana sana, colita de rana, si no sana hoy, sanará mañana” : love in the guise of medicine

I can distinctly recall my mother singing the Spanish nursery rhyme “Sana sana, colita de rana, si no sana hoy, sanará mañana” accompanied by gentle circles on the area causing pain as a reassurance of love and care in spite of my ailment. This is exactly what I’ve been writing about here.

Healing starts well before any medicine or trip to the doctor, because it’s not about any literal healing that comes from these words. In fact, the words directly translate to "Heal, heal, little frog's tail, if you don't heal today, you'll heal tomorrow".

No, it’s about how the words themselves soothe.

Think about it. Your family making you chicken noodle soup, tucking you into bed: it's the ritual of love, the emotional connection of comfort and care, and the belief that helps heal us, mentally. That is what Vicks represents in many Hispanic homes.

“When you grow up, especially as a child, when you’re so gullible, for lack of a better term, and your parents are such prominent figures telling you that a thing works that you start believing it. & I think there’s so much power in believing something works that it starts to work.”

– Sophia Perez

A gesture of love. It’s the action of someone's mama or abuela rubbing the minty balm on their chest and neck that shows they care.

While Vicks may not be a cure-all and certainly won’t be the direct reason you overcome a cold, a virus, or even a scraped knee, the love and memories are medicine in this cultural remedy that helps one breathe a little easier.

SUPPORT STUDENT JOURNALISM

Sign this petition so the Miami Hurricane can have its budget reevaluated!

*expires in April 2026

