Chemists shine a new light on safer pigments, by accident

Humans have historically used colors to symbolize a sense of community, and of all the colors, blue is the most favored. Nearly half of Major League Baseball teams, for instance, feature blue as a signature color, and in North Carolina, shades of Carolina blue and Duke blue fill bars during basketball season. Yet for all its popularity, blue is not an easy color for human hands to create.

Humans perceive the color of an object because it reflects specific wavelengths of visible light. Color, however, is not the same thing as pigment. A pigment is a tangible material that provides color for uses like painting or cosmetics. Extracting pigments from nature can be difficult and expensive, so the quest is on to find alternatives. To tackle the challenge of making blue in particular, researchers turn to synthetic techniques.

One researcher who has succeeded at this challenge is Mas Subramanian, a materials science professor at Oregon State University in Corvallis. Subramanian and his team discovered the first new blue pigment in more than two centuries; the previous one, cobalt blue, was created in 1802 from a mix of cobalt and aluminum oxides.

This ‘Bluetiful’ color came out of the blue

The birth of the brand-new blue, known as YInMn blue, was an accidental discovery. The research team was investigating novel electronic properties in hopes of revolutionizing the computer industry. They mixed oxides of three elements: yttrium (Y), indium (In) and manganese (Mn), each chosen for its desired electronic and magnetic properties. Surprisingly, after the mixture was baked for hours at intense heat, more than 2,000 °F, a vibrant blue powder emerged from the furnace rather than the brown or black material the team had expected.

YInMn blue powder. Source: Mas Subramanian, CC BY-SA 4.0

This out-of-the-blue breakthrough has since triggered interest across industries, from Nike shoes to Chanel eye shadow, because the material is both safe and durable. In 2017, Crayola created a crayon inspired by YInMn blue, named Bluetiful, aiming to inspire children by publicly recognizing the connection between scientific discovery and the real world. Earlier this year, YInMn blue earned a chapter in the book “Chromatopia: An Illustrated History of Color.”

A broader palette: finding the perfect Ferrari red

The true breakthrough of YInMn blue lies in its crystal structure: the arrangement of atoms in the compound’s lattice. The compound not only provides blue but also serves as the basis for a whole spectrum of colors; adding titanium, zinc, copper or iron to the YInMn mix yields purple, orange and green materials. Soon, bright and commercially viable pigments may be created simply by altering the chemical formula.

Subramanian’s team is now focused on making a new non-toxic red pigment, especially desirable as a replacement for harmful cadmium-based reds. Like cobalt blue, cadmium red contains dangerous elements that can be poisonous if ingested in large quantities. Other red pigments, like Pigment Red 254, the characteristic Ferrari red, can fade to dullness unless treated with additional protection. A safe and stable red pigment is therefore in demand, and if the team succeeds, its attention-grabbing new red should be a big hit.

The search for the eye-catching red.

The perfect red, however, remains elusive. The challenge is that even with the most careful planning, it is hard to predict the color a new compound will produce. Whether or not Subramanian’s team finds the billion-dollar red, the world awaits.

Hear the story on TEDx Talks from Dr. Mas Subramanian.

Peer edited by Rita Meganck.

Molds, Mealworms, and Missed Opportunities: How We Think About Young Scientists

It’s no exaggeration to say that the discovery of antibiotics revolutionized medicine. For the first time, doctors had a powerful, reliable tool for treating deadly infections. We usually credit the discovery of the first antibiotic, penicillin, to Alexander Fleming in 1928. However, the ability of Penicillium molds to stop infections was actually discovered some 30 years earlier – by someone 24 years younger.

In 1896, a French medical student named Ernest Duchesne made a key stride toward the development of antibiotics as part of his thesis, yet it was all but forgotten until a few years after Fleming received the 1945 Nobel Prize. Though it is unlikely that the antibiotic that Duchesne observed was actually penicillin, as discussed here, his work should have received far more recognition than it did. Fleming was an excellent researcher, to be sure, but had Duchesne’s work been acknowledged instead of ignored by the Pasteur Institute, it’s hard to imagine that Fleming would have been credited with the key penicillin breakthrough.

An image of Ernest Duchesne.
Ernest Duchesne (1874-1912)

Sadly, Duchesne’s story is not unique in the scientific community. Though science has sought to value knowledge above all else, countless examples exist of findings being ignored due to biases about what makes a good scientist. In fact, even Duchesne’s work may have come from watching Arab stable boys treat saddle sores by allowing mold to grow on saddles, which raises further questions about attitudes toward non-Western knowledge – a contentious topic that I won’t delve into here. Instead, I want to consider how the scientific community views young scientists and how our assumptions hurt more than help the pursuit of knowledge. (Note: I use “our” here and throughout to acknowledge that grad students like me and other young academics can be victims of elitism while also perpetuating it against other young researchers.)

“But wait,” you might say, “surely we’ve gotten better at valuing young scientists in the past 100 years!” To some degree, you’re right – as a young researcher myself, I’ve felt generally supported and respected in academic circles. Yet, despite substantial progress, there is still a barrier between our young people and widely accepted, legislation-informing science.

For instance, a recent discovery in the battle against plastic waste was the ability of mealworms to break down styrofoam. Once the worms eat the styrofoam, explains one of the two highly impactful 2015 articles, the bacteria in their guts can completely digest it. It was a fascinating finding, but it had already been made in 2009 by a 16-year-old. Tseng I-Ching, a Taiwanese high school student, won top prizes at the Intel International Science and Engineering Fair (ISEF; typically held in the U.S.) for her isolation of a styrofoam-degrading bacterium that lives in mealworms. Yet, there is no mention of her work in the 2015 papers. Even working with scholars from local universities and excelling at the ISEF – the world’s largest pre-college science fair – is not enough for a young researcher’s work to break into the bubble of academia. Prize money and prestige are certainly helpful for kickstarting an individual career, but allowing valuable research to go unnoticed holds science back for the entire community.

A man looks out of a window into a large room filled with science fair posters.
International science fairs like the ISEF can be great sources of fresh scientific ideas.

Now, any researcher can tell you: science isn’t easy. Producing data that you can be confident in and that accurately describes an unknown phenomenon is both knowledge- and labor-intensive. We therefore tend to assume, maybe not even consciously, that young people lack the skills and expertise needed to conduct good, rigorous science. Surely much of the work at high school science fairs, for example, has already been done before, and what hasn’t been done must be too full of mistakes and oversights for professionals to bother with, right? In fact, that’s not a fair assumption for large fairs like the ISEF, where attendees have already succeeded at multiple levels of local/regional fairs and are judged by doctoral-level researchers. Had established scientists paid attention to Tseng’s work, research on breaking down styrofoam could be years ahead of where it is now. Perhaps we should be considering how these students’ projects can feed the current body of knowledge, instead of just giving them a prize, fawning over their incredible potential, and sending them on their way.

A speaker at a workshop on research careers that I recently attended left us with a memorable quote, to the effect of: “An academic’s greatest currency is their ideas.” When academia ignores young researchers, we’re essentially telling them that their ideas are not yet valuable. One group working to prove this wrong is the Institute for Research in Schools (IRIS). Founded in 2016 and operating in the UK, IRIS aims to involve school-age students in cutting-edge scientific projects. Reflecting on his work with the group, Professor Alan Barr (University of Oxford) noted that his project

“…has morphed into something that’s been taken in so many different directions by so many different students. There’s no way that we ourselves could have even come up with all of the questions that these students are asking. So we’re getting so much more value out of this data from CERN, because of the independent and creative ways in which these students are thinking about it. It’s just fantastic.” 

IRIS often encourages its students to publish in professional scientific journals, but even if a young researcher’s work is not quite worthy of publication, acting like they have nothing to contribute is foolish and elitist. In the Information Age, it’s easier than ever for curious, passionate people to gather enough knowledge to propose and pursue valid research questions. We might be surprised at the intellectual innovation spurred by browsing projects in IRIS or the ISEF and reaching out to students who share our research interests. Perhaps our reluctance to do so stems more from our egos than from fact.

Ultimately, promoting a diverse and inclusive scientific community will improve the quality of science done and help ease the strained relationship between scientists and the public. Doing so means we have to recognize and challenge our biases about who could be part of the team. These are widespread biases that can be difficult for individuals to act against; a well-intentioned researcher may struggle to convince funding agencies to back a project inspired by a high-schooler’s work. Therefore, I’m not pointing fingers, but calling on every member of the community to help change scientific culture. 

Perhaps students like Duchesne and Tseng are more the exception than the norm. Yet, when given an attentive ear, their fresh ideas are bound to be well worth any naiveté. The biggest barriers they face in contributing to science may not be their flaws, but their elders.

Peer edited by Gabrielle Dardis and Brittany Shepherd.

First the Moon, Now Mars: A Look at Rocket Science in Reaching the Red Planet

This year marks the 50th anniversary of humans becoming an extraterrestrial species. In July 1969, astronauts set foot on lunar soil, taking the first tangible step toward deep space and fueling aspirations for interplanetary travel. Indeed, lofty goals have been set in this regard since the Apollo missions – some as ambitious as starting the first Martian colony as soon as 2025. But exactly how much progress have we made in launch systems since landing on the moon, and is it sufficient to achieve a feat like colonizing Mars?

In order to travel to and colonize other planets, humanity would need a way to send massive amounts of cargo reliably and consistently to the moon, which would likely serve as a pit stop for subsequent legs of any cosmic journey. This would require repeated use of rockets powerful enough both to lift heavy payloads and to impart them with enough speed to clear Earth’s atmosphere – ideally rockets that are both extremely powerful and reusable. The Apollo program was supported by NASA’s Saturn V, the most powerful rocket brought to operational status to date. While the Saturn V provided the power necessary to carry astronauts and their cargo to the moon, every part of the rocket was consumed in a mission; none of the engines, boosters, or fuel tanks survived for reuse. These rockets were also built during a time of political urgency: NASA’s budget peaked at 4% of all federal spending during the Cold War, compared to the 0.5% it operates at now. So, while the Saturn V was an extremely powerful rocket, its disposable nature would not be economically sustainable for present-day Martian colonization efforts.
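The payload-versus-propellant trade-off can be made concrete with the ideal (Tsiolkovsky) rocket equation, delta-v = Isp × g0 × ln(m0/mf). The short sketch below is purely illustrative, using round made-up numbers rather than anything from this article or actual Saturn V specifications, but it shows why reaching orbital speed with a heavy payload demands enormous amounts of propellant, and why throwing away the whole rocket after each flight is so costly.

```python
import math

# Illustrative sketch only: the ideal (Tsiolkovsky) rocket equation explains why
# heavy payloads demand enormous amounts of propellant. The numbers below are
# round, invented values, not actual Saturn V specifications.

def delta_v(isp_s, m_initial_kg, m_final_kg):
    """Velocity change a stage can deliver: delta_v = Isp * g0 * ln(m0 / mf)."""
    g0 = 9.81  # standard gravity, m/s^2
    return isp_s * g0 * math.log(m_initial_kg / m_final_kg)

# A hypothetical stage: 300 s specific impulse, with 90% of its liftoff mass being propellant.
dv = delta_v(isp_s=300, m_initial_kg=1_000_000, m_final_kg=100_000)
print(f"{dv:,.0f} m/s")  # ~6,800 m/s, still short of the ~9,400 m/s typically quoted
                         # for reaching low Earth orbit, which is why big rockets stage.
```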

Since the Saturn V, scientists have worked to improve both the payload and the reusability of rockets. From 1981 to 2011, NASA operated the space shuttle, a partially reusable spacecraft system that included two recoverable rocket boosters flanking an external fuel tank and three clustered liquid-fuel cryogenic rocket engines in the spaceplane itself. The rocket boosters were jettisoned during ascent and parachuted down into the ocean for recovery; the rocket engines could be recovered from the spaceplane itself after it returned from a mission. Since the space shuttle era, private companies have also hopped on the rocket development bandwagon; SpaceX, for example, is working on making its multistage Falcon rockets 100% reusable. Thus far, the company can reliably land the first stage of its rockets on target and reuse these recovered boosters in subsequent flights. Complete rocket reuse would dramatically reduce the cost of colonizing Mars, making it much more feasible.

The space shuttle system: including spaceplane, fuel tank, and flanking rocket boosters.

In addition to the headway made in reusability, efforts have also been made to increase rocket power. NASA is now building the Space Launch System (SLS), a rocket family projected to be the most powerful in existence, with one variant carrying 143 tons of pure payload to low Earth orbit (surpassing the Saturn V’s 130-ton payload). Pragmatically, the first SLS variant borrows rocket engines from the space shuttle, a proven technology. Progress on the program, however, is uncertain, as the US 2020 fiscal year budget did not allot any money for SLS development. On the private side, just earlier this year SpaceX successfully launched its Falcon Heavy rocket – the highest-payload launch vehicle currently in operation – and recovered all three of its boosters on Earth.

Recovery of two rocket boosters after SpaceX Falcon Heavy launch.

Since landing on the moon, we’ve made a lot of progress in making rockets both more powerful and more reusable – two key ingredients in the recipe for colonizing a planet like Mars. But the question remains: how close are we to reaching the red planet? Nothing is certain, but mankind is making great strides in getting our rockets up to par for these unprecedented flights, and I personally feel confident that we’ll see “Man on Mars” rocking our headlines within the next couple of decades. Now, is there a future where humans terraform Mars, or where Earthly humans can pop over to visit their Martian relatives? To that I’d say, ask me again in another 50 years.

Peer Edited by Aldo Jordan

Fungi Join the Fight Against Malaria

SLAP. You’re sitting outside at a summer bonfire and you catch a mosquito feasting on your leg. Groaning, you realize this probably means a couple days of itching and a swollen welt. In North Carolina, you’re usually going to be just fine and should only seek medical attention if you experience complications. For many people, however, particularly in Sub-Saharan Africa, a mosquito bite can carry a far more dangerous possibility: infection with malaria.

Cells infected with Plasmodium falciparum possess characteristic purple dots, often connected in a “headphones” shape. Source: Steven Glenn, Laboratory and Consultation Division, USCDCP (PIXIO)

Malaria is a parasitic infection caused by Plasmodium falciparum, a single-celled organism that lives in the salivary glands of Anopheles mosquitoes and is considered the deadliest human parasite. The World Health Organization reported in 2018 that progress in reducing malaria cases globally has stalled, with nearly 435,000 people, primarily children in Sub-Saharan Africa, dying from the disease each year. Much of this lack of progress can be attributed to mosquitoes developing tolerance to commonly used chemical insecticides. Recent work from scientists at the University of Maryland, in collaboration with researchers in Burkina Faso, has hacked biology to create a new tool to combat malaria-carrying mosquitoes. The tool? A genetically modified fungus that wipes out mosquitoes with a deadly dose of spider toxin.

To develop their tool, the researchers used a type of fungus that already occurs in malaria-affected areas and targets mosquitoes: Metarhizium pingshaense. This species of fungus is entomopathogenic, meaning it parasitizes insect hosts and kills or disables them. In the wild, the fungus can kill mosquitoes, albeit quite slowly: in the lab, it takes over a week for Metarhizium to kill a mosquito, by which time the mosquito could spread malaria and reproduce. To speed up the process, the scientists used genetic engineering to give Metarhizium a deadly tool: a gene from the Australian Blue Mountains funnel-web spider that encodes a potent toxin. In the wild, the spider’s fangs deliver a neurotoxin that disrupts the way neurons send signals, causing symptoms including dangerously elevated heart rate and blood pressure and difficulty breathing; a bite from a funnel-web spider can kill a human rapidly. To ensure the toxin affects only mosquitoes, however, the scientists engineered the fungus so that the toxin is produced only at the right moment – when the fungus reaches the bloodstream of the mosquito.

The Australian Funnel Web spider is considered to be one of the deadliest spiders on Earth. Source: Brenda Clarke (flickr)
A cockroach infected with a species of Metarhizium. Source: Chengshu Wang and Yuxian Xia, PLoS Genetics Jan. 2011 (Wikimedia Creative Commons)

The fungus was put to the test in a unique facility in Burkina Faso: the MosquitoSphere, an enclosure where genetically modified organisms are permitted to be tested.

To conduct the experiment, two populations of 1,500 mosquitoes each were released in the MosquitoSphere. To expose the insects to the fungus, the investigators mixed Metarhizium spores with sesame oil and spread the mixture onto black cotton sheets; when mosquitoes landed on the sheets, they picked up the fungal spores. The population exposed to unmodified Metarhizium had no problem reproducing before the fungus proved fatal, totaling nearly 2,500 mosquitoes after 45 days. In contrast, the population exposed to the genetically modified Metarhizium shrank to a mere 13 individuals after 45 days, indicating that the mosquitoes were dying more quickly than they could breed: an unsustainable, collapsed population. The striking results of the experiments in the MosquitoSphere encouraged the investigators to conclude that the genetically modified Metarhizium can function as a potent mosquito-killing bio-pesticide, able to cause an interbreeding colony of mosquitoes to collapse in a little over a month.

Despite success in the MosquitoSphere, the fungus still needs to undergo more testing to be considered safe for humans and the environment. If it is approved, a formulation of the fungal spores could be used to protect dwellings and mosquito netting from malaria-carrying mosquitoes. This study is a promising sign for the use of genetically modified bio-pesticides to wipe out disease-causing insects, which could prove invaluable for malaria prevention. 

Peer editor: Elise Hickman

Are some of us “immune” to genome editing?

https://www.flickr.com/photos/nihgov/27669625144

Some of us may have characteristics we would like to change. We may want to be taller, skinnier, or smarter. While many traits can be changed during our lifetime, others simply cannot. There are certain ‘fates’ we cannot avoid, such as inherited diseases. For example, if you inherit two mutated copies of a gene called CFTR, you are likely to develop cystic fibrosis. Although your life choices can help you accommodate the disease, there is currently no cure for cystic fibrosis and many other genetic disorders. However, with the development of genome editing – the ability to alter our genetic information – there is hope that genetic diseases may one day be eradicated once and for all.

There are currently several genome editing methods available, but the most popular is the CRISPR-Cas9 system, which was recently and famously used to edit the germline of babies born in China. Editing the germline means that the changes made to the genome will be inherited by following generations. Although this practice is widely banned, CRISPR-Cas9 was used to change the twins’ genomes. The tool allows us to edit parts of the genome by removing, adding, or changing a given segment of DNA sequence, and it has become the go-to editing technique because it can generate changes quickly, efficiently, and accurately at a relatively low cost. The system has two major components: Cas9 and a guide RNA (gRNA). Cas9 is an enzyme that acts as ‘molecular scissors’ and cuts both strands of DNA at a specific location, allowing the removal, addition, or change of DNA sequence. Where Cas9 cuts is determined by the gRNA, which, as its name implies, is an RNA molecule that ‘guides’ Cas9 to a specific place in the genome through complementary base pairing.
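To make the “guiding” idea concrete, here is a purely illustrative sketch (not code from any study, and using an invented toy sequence that is shorter than a real ~20-nucleotide guide). It scans a stretch of DNA for a site that matches the guide sequence and sits next to the “NGG” motif (the PAM) that S. pyogenes Cas9 requires before it will cut.

```python
# Purely illustrative: scan a toy DNA string for a site that matches a guide sequence
# and is immediately followed by an "NGG" motif (the PAM required by S. pyogenes Cas9).
# Real guides are about 20 nucleotides long, and Cas9 cuts a few bases upstream of the PAM.

def find_cas9_targets(genome, guide):
    hits = []
    for i in range(len(genome) - len(guide) - 2):
        target = genome[i : i + len(guide)]
        pam = genome[i + len(guide) : i + len(guide) + 3]
        if target == guide and pam.endswith("GG"):  # "NGG": any base followed by two Gs
            hits.append(i)
    return hits

genome = "TTACGGCTAGCTAGGATCCGATCATGGTTTACG"  # invented toy sequence
guide = "CTAGCTAGGATCCGATCA"                  # invented toy guide
print(find_cas9_targets(genome, guide))       # [6] -- the match at position 6 sits next to the PAM "TGG"
```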

The CRISPR-Cas9 system was originally discovered in bacteria as a defense mechanism against viral infection. The most commonly used versions of Cas9 for genome editing come from two bacterial species – Staphylococcus aureus and Streptococcus pyogenes. Both species of bacteria frequently live on the body as harmless colonizers, but in some instances, they can cause disease. If you have been infected with either S. aureus or S. pyogenes, you likely developed adaptive immunity to the bacterium in order to protect yourself against future infections. A recent study from Stanford University published in the journal Nature Medicine examined this phenomenon, asking whether healthy humans possessed adaptive immunity against Cas9 from S. aureus and S. pyogenes.

First off, you may be wondering why adaptive immunity to Cas9 would matter at all. When we are exposed to ‘invaders’ such as bacteria, our body develops adaptive immunity in the form of antibodies and T cells that target specific components (also called antigens) of the invader. For example, if you’ve eaten raw eggs (in cookie dough, let’s say), you may have suffered an upset stomach caused by a Salmonella infection. If you are exposed to Salmonella again later in life, the antibodies and T cells that developed in response to the first infection should act quickly to suppress and clear the new one. Similarly, if you were previously infected with S. aureus and then received modified human cells producing S. aureus Cas9 for the purpose of genome editing, the antibodies and T cells generated during the initial infection might quickly destroy those cells. Our body would think it is protecting us, when in reality it would be thwarting our effort to make a beneficial change to our genome. Even worse, this unintentional stimulation of the immune system could trigger an overwhelming response that causes organ failure and, potentially, death. Unfortunately, there is a documented case of a patient death due to an immune response to gene therapy delivered with viruses. It may therefore be necessary to screen each patient before starting therapy to determine whether they possess immune components that would make this form of therapy inefficient or even deadly.

The Stanford study mentioned above examined this phenomenon by testing healthy donors for both antibodies and T cells against Cas9 from S. aureus and S. pyogenes. Analyzing blood from more than 125 adult donors, the authors found that ~75% produced antibodies against Cas9 from S. aureus and ~60% produced antibodies against Cas9 from S. pyogenes. The scientists also used several approaches to examine T cell responses: 78% and 67% of donors were positive for antigen-specific T cells against Cas9 from S. aureus or S. pyogenes, respectively.

In summary, the authors of this study provide evidence for existing adaptive immunity to Cas9 from S. aureus and S. pyogenes in a large number of healthy adults. Two additional studies examined adaptive immunity against Cas9 with varying results. However, all three studies suggest that pre-existing immunity to Cas9 needs to be carefully considered and addressed before the CRISPR-Cas9 system moves toward clinical trials as a therapy for genetic diseases. In other words, scientists need to address whether the presence of antibodies and/or T cells against Cas9 would lead to destruction of cells carrying Cas9 for the purposes of genome editing. Other potential solutions include using Cas9 derived from bacterial species that do not cause human infections and thus should not trigger an adaptive immune response, or suppressing the immune system while administering CRISPR-Cas9.

CRISPR-Cas9 genome editing has the potential to transform our lives in many ways. While this technology certainly holds promise for the treatment of genetic disease, the Stanford study highlights the need for more research before we can conclude that CRISPR-Cas9 is effective and safe. Optimization of this system may improve its performance in clinical trials and lead to future approval for the treatment of human disease.

Peer edited by Keean Braceros and Isabel Newsome

Transforming Blood Transfusions

Blood is essential. It carries the oxygen you breathe from your lungs throughout your body, keeping you alive and invigorated. However, our bodies can only produce so much blood in a day, and when we suffer serious blood loss through car accidents, genetic disorders, or surgeries, we need to replenish our blood supply. People around the world donate their blood to those in desperate need, and each year 4.5 million lives are saved thanks to blood donations.

The transfer of one person’s blood to another person’s body is called transfusion, and in the United States alone, someone needs a transfusion every two seconds1. Currently, blood transfusions are limited by the type of blood one has – and therefore needs. Blood has four major types: ‘A’, ‘B’, ‘AB’, and ‘O’, and each of these major types has two subtypes, positive or negative. This means there are a total of eight blood types: A+, A-, B+, B-, AB+, AB-, O+, O-. These types are determined by different molecules called antigens (you can think of them as tags or signs) on the outside of the blood cell (see image below; different blood types have different molecular patterns, or “tags”).

The four major blood types are defined by the antigens, or “tags”, on their surfaces. The “H” antigen of blood type “O” is the most common, and can be converted by enzymes to “A” or “B” antigens. If you don’t have “A” or “B” blood types, it is because you don’t have the enzymes that can convert antigen “H” to the other antigen types.

Blood transfusions are limited by these specific antigens (tags) because the antigens are recognized by a person’s immune system, the body’s defense system. If you have blood type “B” and receive blood type “A” in a transfusion, your immune system will recognize that “A” is not part of your body and will trigger an immune response that attacks and destroys the transfused blood.

A group from the University of British Columbia in Canada identified a bacterium in the human gut that can convert blood type ‘A’ to the universal donor type, ‘O’2. By screening thousands of microbes from our guts, the scientists identified a particular obligate anaerobe (a microbe that can only survive when oxygen is not present), Flavonifractor plautii. Inside F. plautii, two enzymes were found to work together to change the molecules on the surface of type ‘A’ red blood cells into the molecules of blood type ‘O’.

But how can one blood type be changed into another?

Below is a diagram of blood type ‘A’ and blood type ‘O’. They have some “tags” in common (the blue squares, orange circles, and grey diamonds), and some that differ (green pentagon). The enzymes in F. plautii are able to cut off the extra molecular tag of blood type ‘A’, which then leaves blood type ‘O’! First, the enzyme FpGalNAcDeAc changes the ‘A’ blood type molecules to a slightly different molecule, which can then be recognized and cut by the second enzyme, FpGalNase, leaving blood type ‘O’. Not only can these enzymes change blood type ‘A’ to ‘O’, but they can do it fairly efficiently, which could greatly increase the blood supply for transfusions. 

Blood Type “A” to Blood Type “O” Conversion Pathway. FpGalNAcDeAc enzyme recognizes the “A” antigen, and changes it to a different molecule. This new molecule is recognized and removed by FpGalNase, leaving only the “H” antigen which characterizes blood type “O”.
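To illustrate the logic of that two-step pathway, here is a toy sketch of my own (a simplification, not the actual biochemistry or any code from the study): the ‘A’ antigen is modeled as the ‘H’ antigen plus one extra terminal sugar, and the two F. plautii enzymes are modeled as sequential steps that first modify that sugar and then remove it.

```python
# Purely illustrative model of the A -> O conversion pathway described above.
# The A antigen is represented as the H antigen plus an extra terminal sugar (GalNAc);
# the two enzyme steps modify and then remove it, leaving the H antigen of type O.

A_ANTIGEN = ("H-core", "GalNAc")          # type A: H antigen plus an extra terminal sugar
H_ANTIGEN = ("H-core",)                   # type O: H antigen only

def fp_galnac_deac(antigen):
    """Step 1 (FpGalNAcDeAc): convert the terminal sugar into a form the second enzyme recognizes."""
    if antigen[-1] == "GalNAc":
        return antigen[:-1] + ("GalN",)   # modified intermediate
    return antigen

def fp_galnase(antigen):
    """Step 2 (FpGalNase): cut off the modified sugar, leaving only the H antigen."""
    if antigen[-1] == "GalN":
        return antigen[:-1]
    return antigen

converted = fp_galnase(fp_galnac_deac(A_ANTIGEN))
print(converted == H_ANTIGEN)             # True: the 'A' cell surface now reads as 'O'
```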

The group from British Columbia is not the first to try changing blood type3. A previous study converted blood type ‘B’ to blood type ‘O’, albeit very inefficiently – so inefficiently, in fact, that the conversion would never have practical use4.

Since the enzymes from F. plautii are bacterial, and only small amounts of them are needed to convert blood type ‘A’ to blood type ‘O’, it could be possible to mass-produce the enzymes and transform large quantities of blood for transfusions, increasing the blood supply for medical procedures and saving more lives.

Sources:

  1. https://www.redcrossblood.org/donate-blood/blood-types.html
  2. Rahfeld, P., Sim, L., Moon, H., Constantinescu, I., Morgan-Lang, C., Hallam, S. J., Kizhakkedathu, J. N., and Withers, S. G. (2019) An enzymatic pathway in the human gut microbiome that converts A to universal O type blood. Nat. Microbiol. 10.1038/s41564-019-0469-7
  3. Goldstein, J., Siviglia, G., Hurst, R., Lenny, L. & Reich, L. Group-B erythrocytes enzymatically converted to Group-O survive normally in A, B, and O individuals. Science 215, 168–170 (1982).
  4. Kruskall, M. S. et al. Transfusion to blood group A and O patients of group B RBCs that have been enzymatically converted to group O. Transfusion 40, 1290–1298 (2000).

Peer edited by Abigail Agoglia

Sleep Your Way to Better Health

Woman sleeping on her stomach with her hair spread out on white sheets

Are you trying to lose weight, reduce your risk of heart disease, or improve your productivity? What if there was one thing you could do to help with all of those – would you give it a try? Well, what if I told you that you needed to get more sleep? Sounds easy enough, but why is sleep so important and why is it so hard for us to get enough?

The Importance of Sleep

Getting enough quality sleep is linked to many improved health outcomes. For your overall health, sleep is as important as eating well and exercising, yet it’s often overlooked as a factor in a healthy lifestyle. A meta-analysis of 45 studies concluded that adults and children who are “short sleepers” (<10 hours per night for children or <5 hours a night for adults) have a higher body mass index (BMI) than those who regularly sleep enough. This may be because reduced sleep is also associated with reduced leptin (the hormone that tells your body that you’ve eaten enough) and increased ghrelin (the hormone that induces appetite). In other words, chronic sleep deprivation can contribute to weight gain.

Additionally, studies have linked decreased sleep with increased risk for heart disease and decreased memory function. In his popular TED Talk, Sleep is Your Superpower, Dr. Matt Walker discusses a study in which participants who slept a full 8 hours did 40% better on a cognitive task than those who were sleep deprived – that could be the difference between acing an exam or assignment and totally flunking it! Without adequate sleep, your heart and your brain cannot function optimally, so why do we so often skip out on sleep?

Why is Getting Enough Sleep So Hard?

Infographic describing the healthiest sleeping habits: keeping a routine, avoiding technology, and exercising regularly.

Poor habits and modern technology contribute to many of us not getting enough sleep. One of the most fundamental habits for improving your sleep quality is going to bed and waking up at the same time every day – even on the weekends. While sleeping in on the weekends feels amazing at the time, keeping a consistent schedule helps your body, in the long term, fall asleep faster and stay asleep. Additionally, getting natural light early in the day and avoiding blue light from devices like your phone and computer later in the day help your body maintain its natural sleep-wake cycle. Lastly, exercising regularly, but no closer than four hours before bedtime, also helps you get quality sleep.

While sleeping can feel like a luxury, it’s not – sleep is absolutely essential to a healthy lifestyle. So the next time you’re feeling guilty about leaving so many things undone on your to-do list, remind yourself how productive you’ll be after just getting a good night’s sleep.

Peer Edited by Eliza Thulson


30 Years Post Nuclear Meltdown

https://www.maxpixel.net/Pripyat-Ukraine-Chernobyl-Stalker-Travel-3711311

Pressure increasing, emergency sirens ringing, water in the cooling tanks superheating. In the early hours of April 26, 1986, a planned safety test was underway to assess whether the cooling circulation system could keep running during a power outage. Even after shutdown, a reactor continues to emit a tremendous amount of heat, roughly 6% of its total power output, and with 1,600 individual fuel channels in each core, a coolant flow rate of 28 metric tons per hour was required to avoid meltdown. The test began; the water in the cooling system started to boil. The flow rate through the reactor decreased, and the unstable reactor spiraled into a positive feedback loop. A catastrophic explosion sent a radioactive cloud as far as Canada.

The HBO miniseries Chernobyl provides an in-depth look at the events that led to the nuclear power plant meltdown, along with the devastation the meltdown caused in the surrounding areas. More importantly, we experience the tragedies and heroic acts of those who saved lives and those who saved humanity. As a biologist, I can’t help but wonder about the biological implications of the event, both at that moment in history and now, more than 30 years later.

Implications of Radiation Exposure to Humans

https://www.maxpixel.net/Chernobyl-Pripyat-1366161
The abandonment at Chernobyl

The most devastating consequence of such an incident is, of course, the countless lives lost to high levels of radioactive debris. For those who survived or lived in nearby countries, however, exposure to the radioactive isotope iodine-131 was the greatest threat immediately following the meltdown. The threat stems from the short half-life of iodine-131 (8 days) and its potential to cause thyroid cancer: iodine is normally used by the thyroid to produce hormones, but uptake of radioactive iodine-131 can mutate thyroid cells and cause cancer.

Four years after the incident, an increase in the incidence of thyroid cancer was observed. Those affected most at the time were adolescents in the countries closest to ground zero, such as Belarus and Ukraine.

After the initial meltdown, the main crisis was radioactive fallout and iodine-131; three decades later, however, the concern has shifted to caesium-137 and strontium-90. Both isotopes have half-lives of roughly 30 years, so the soil and water contamination they cause can persist for decades. Since caesium-137 can accumulate in living organisms, livestock was tested in almost every European country. Because these isotopes are not readily excreted by the body, they slowly build up in the higher levels of the food chain, and the concern was that consuming contaminated livestock could lead to chronic radiation exposure. The major consequences of such exposure are a much greater likelihood of developing lung, stomach, and colon cancers, as well as leukemia.
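As a rough worked example of what those half-lives mean (simple arithmetic only, not a site-specific measurement): the fraction of an isotope remaining after a time t is (1/2) raised to the power t divided by the half-life.

```python
# Simple half-life arithmetic: the fraction of a radioactive isotope remaining
# after time t is (1/2) ** (t / half_life). Illustrative calculation only.

def fraction_remaining(time_elapsed, half_life):
    return 0.5 ** (time_elapsed / half_life)

# Caesium-137 and strontium-90 half-lives are roughly 30 years, so about three
# decades after 1986 roughly half of the original contamination is still present.
print(round(fraction_remaining(30, 30), 2))   # 0.5
# Iodine-131 (8-day half-life) is essentially gone within a few months:
print(f"{fraction_remaining(90, 8):.6f}")     # ~0.0004 of the original amount after 90 days
```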

Implications for Wildlife

Beyond human exposure to acute radiation, the wildlife in the region was also devastated. The Red Forest died from acute radiation exposure, and DNA from animal and plant populations in regions affected by radioactive fallout showed abnormalities attributed to radiation poisoning, which in turn led to increased physical and behavioral abnormalities.

Three decades after the incident, wildlife is beginning to return to the desolate region. Many species of animals, including wolves, boars, bears, and eagles, have taken up residence in regions devoid of humans. Furthermore, there is evidence to suggest plants have developed a tolerance to the high radiation levels, allowing them to survive in radioactive environments. These survival mechanisms are based on increased activity of the DNA repair machinery, which means plants can, to some extent, reverse the damage caused by the radiation and thus become slightly more tolerant of harmful radiation exposure. That being said, life in the surrounding area still experiences the radiological effects of the disaster.


The Radioactive Climate

https://www.maxpixel.net/Chernobyl-Pripyat-1366159
Chernobyl

The disaster released 400 times more radioactive material than the atomic bombings of Hiroshima and Nagasaki, and almost 100,000 square kilometers were contaminated by radioactive fallout. The fallout mixing with cold fronts caused radioactive rainfall across much of northern Europe. Governments undertook enhanced safety measures to reduce population exposure: because most of the radioactive isotopes can bioaccumulate through the food chain, guidelines and restrictions were issued for consuming fish and other animal products. Furthermore, countries enacted stricter policies on radiation management and on workers’ radiation exposure.

https://upload.wikimedia.org/wikipedia/commons/thumb/2/23/Chernobyl_radiation_map_1996.svg/1024px-Chernobyl_radiation_map_1996.svg.png
The Exclusion Zone surrounding the site
https://www.maxpixel.net/Chernobyl-Gas-Mask-Pripyat-1366160
Gas mask hanging in Chernobyl

The Chernobyl Exclusion Zone, also called the Zone of Alienation, was established and placed under strict military control to prevent further harm to the public. This strict zone extends roughly 30 km around the remains of the reactors. Reactor four in particular is the most radioactively contaminated site on Earth, sparking interest from scientists and the public alike. Even though Chernobyl has become a tourist attraction, access is granted only to people who apply.

Three decades have passed since the meltdown of reactor four released a radioactive cloud that stretched halfway across the globe and devastated life in the immediate surroundings. Today, however, Chernobyl is finally beginning to see the return of life.

By Syed Masood

Peer edited by Caitlyn Molloy and Justine Grabiec

High-Throughput Urinary Testing: More Data, More Problems?

If a urinary tract infection (UTI) has ever ruined your day, you’re not alone. This uncomfortable condition is one of the most common infectious illnesses in the United States, responsible for more than ten million office visits and nearly $4 billion in costs each year. UTIs are usually diagnosed based on the results of three different types of test:

1.     Dipstick test: a prepared test strip provides basic information about the urine, including things like pH and the presence of blood cells or nitrites (a byproduct of some bacterial infections). The dipstick is fast and inexpensive but not very sensitive, leading to both false negative and false positive results.

2.     Microscopy: examining urine under a microscope to check for the presence of red and white blood cells or bacteria directly. This is a relatively quick and inexpensive test but is also prone to false negatives and false positives.

3.     Urine culture: currently the gold standard of UTI diagnosis. For this test, a sample of urine is plated on growth media that allow bacteria to thrive and then observed for 48 hours to see whether any bacterial colonies grow. Culturing urine samples allows the lab to identify the species of bacteria responsible for an infection and to test antibiotics to determine the best treatment. Growing cells in culture can be difficult, however, particularly if bacteria live together cooperatively in a growth called a biofilm, which is common in the urinary tract. This makes false negative results a problem for urine culture.

Staphylococcus aureus bacteria bound by a biofilm. These growths make bacteria more difficult to detect with urine culture. Photo credit Rodney M. Donlan, Ph.D and Janice Carr, 2005.

Recently, new technology has been introduced into UTI diagnosis that may eventually change the way in which infections are identified. Polymerase chain reaction (PCR) allows the lab to test urine samples for several bacterial genes and identify infections without the need to grow bacteria in culture. PCR is more sensitive than urine culture and may be able to provide evidence of bacteria that are more difficult to identify by culture, like biofilm infections. More sensitive still is next-generation sequencing (NGS). In this technique, rather than testing for a few specific bacterial genes that may be present in the urine sample, the lab can identify every bit of genetic material in the urine. This allows the lab to identify every organism living in the urinary tract and to test for the presence of genes that confer antibiotic resistance, suggesting a more targeted antibiotic therapy. These methods are poised to make UTI diagnosis and treatment more precise and less prone to false negatives.

How next generation sequencing works. Photo credit to Jenny Cham, based on a lecture by Alan Pittman, Ph.D.
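To give a flavor of the core idea behind sequence-based identification, here is a toy sketch of my own: reads are matched against reference sequences and counted toward whichever organism they resemble. Real NGS pipelines are far more sophisticated, and every sequence below is invented for illustration.

```python
# Toy sketch of sequence-based identification (not a real pipeline): each read is
# compared against reference sequences and counted toward whichever organism it matches.
# All sequences are invented for illustration.

references = {
    "E. coli (toy)":   "ATGCGTACGTTAGCCTAGGA",
    "S. aureus (toy)": "TTGACCGGTAACGTTAGCAA",
}

reads = ["CGTACGTTAG", "GGTAACGTTA", "CGTACGTTAG", "AAAAAAAAAA"]

counts = {name: 0 for name in references}
unclassified = 0
for read in reads:
    for name, ref in references.items():
        if read in ref:                 # exact substring match: a toy stand-in for alignment
            counts[name] += 1
            break
    else:
        unclassified += 1               # reads matching nothing in the reference set

print(counts, "unclassified:", unclassified)
# {'E. coli (toy)': 2, 'S. aureus (toy)': 1} unclassified: 1
```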

However, these more sensitive tests do have limitations. By identifying many more species of bacteria than urine culture, NGS may detect species that are present in urine but not responsible for urinary symptoms. In a comparison of NGS and urine culture, although the two methods identified the same bacteria in UTI patients 96% of the time, NGS also identified many species of bacteria in healthy control participants. Why were these healthy participants testing positive for bacteria? Although the urinary tract has historically been described as a sterile environment, modern evidence suggests that it has its own microbiome, where multiple species of microbes live without causing human illness. This research is still in its infancy, and until the microbiome of the bladder has been thoroughly described it will be difficult for doctors to determine whether a species identified via NGS is an infectious organism or a normal part of the patient’s urinary tract. Another disadvantage is cost: these new tests are much more expensive and often not covered by insurance, and not all doctors are willing to use the results to make treatment decisions.

So, are these new tests worth it? That’s a question only the patient and their doctor can answer. For now, these more sophisticated techniques are best used as a companion to traditional urine culture, not a replacement, until clinical standards are established. It might be helpful to think of NGS as something like “23andMe for your bladder” – the information is interesting, and some of it might be useful for your medical provider, but for now it’s mostly medical trivia. Here’s a summary of the test options:

TEST         SPEED     COST    DRUG TESTING    FALSE NEGATIVES    FALSE POSITIVES
Dipstick     Instant   $
Microscopy   Instant   $
Culture      2 days    $$                                         Uncommon
PCR          1 day     $$$                     Uncommon           Uncommon
NGS          4 days    $$$$                    Uncommon

Peer-edited by Kathryn Weatherford and Chiung-Wei Huang.



A glass of wine a day…does not keep the doctor away

https://www.flickr.com/photos/117025355@N05/12429334035
Wine glasses with different types of wine in them.

One day, science shows that coffee is good for you, but the next day, science finds that coffee is bad for you. One day, chocolate is bad for you, and the next, it is good for you. Studies show that red meat is both good and bad for you. As we read the latest news, science seems to contradict itself every day. With all this confusion about science and nutrition, how do we know which foods are good or bad for our health? A recent study has tried to clarify the answer for one such question: the link between alcohol and health.

One reason for these apparent contradictions is the nature of science itself. There is no answer sheet to check for the right answer and no textbook to see if your conclusion is correct. Each study explores something new and adds a small piece to our understanding of the world. Because each study is limited in what it investigates, occasionally the conclusions drawn from different studies may be at odds with one another. However, once enough science has been conducted, it is possible to “average out” all the information on a particular topic and come to a consensus. This consensus is often a “yes” or “no” answer to a single question, such as “Does chocolate lower blood pressure?” or “Does coffee increase the risk for cancer?” (Hint: the answers are yes, chocolate lowers blood pressure, and no, coffee does not increase the risk for cancer!). One way to reach such a consensus is by performing what is called a meta-analysis.

Meta-analyses gather the research on a specific topic (such as a possible link between processed meats and cancer) and combine it to come to a single conclusion. A recent meta-analysis studied the link between alcohol consumption and disease by combining information from 592 studies that investigated the risks and benefits of alcohol.
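Under the hood, a meta-analysis does more than take a simple average: each study’s estimate is typically weighted by how precise it is. The sketch below shows one common approach, fixed-effect inverse-variance pooling, using invented numbers; it only illustrates the “averaging out” idea and is not the method or data of the alcohol study.

```python
# Minimal fixed-effect meta-analysis sketch: pool study estimates weighted by the
# inverse of their variance, so more precise studies count for more.
# The effect sizes and standard errors below are invented for illustration.

def pooled_estimate(effects, standard_errors):
    weights = [1 / se**2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical studies reporting the same effect; the larger (more precise)
# second study pulls the pooled value toward its own estimate.
effects = [0.10, 0.02, 0.07]
ses = [0.05, 0.02, 0.04]
estimate, se = pooled_estimate(effects, ses)
print(f"pooled effect: {estimate:.3f} +/- {se:.3f}")   # pooled effect: 0.038 +/- 0.017
```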

The most important finding from the study is that even a single standard drink of alcohol per day increases a person’s risk of health problems, such as cancer, stroke, injuries, and infections. Furthermore, two drinks per day lead to a 7% higher risk of dying from alcohol-related health problems, and five drinks per day lead to a 37% higher risk of dying. Because of the study’s design, it is unclear how long these drinking levels must be sustained to increase the risk for health problems; future research should address this question. Although the meta-analysis found that moderate consumption of alcohol (1-3 standard drinks daily) reduces the risk of ischemic heart disease and diabetes, moderate alcohol consumption still increases the risk of developing over twenty other health complications and diseases. This increased risk explains the study’s finding that alcohol is the 7th leading risk factor for deaths globally, with alcohol involved in 2.2% of female deaths and 6.8% of male deaths in 2016. The disparity between male and female deaths may be due to differences in drinking rates, since in many areas of the world men drink more alcohol than women.

This meta-analysis emphasizes the importance of re-evaluating current public health recommendations. The U.S. Dietary Guidelines recommend no more than 1 drink per day for women and no more than 2 drinks per day for men, and they also suggest that people who do not drink should not start drinking. However, the meta-analysis discussed above suggests that we should reconsider these guidelines and avoid recommending alcohol consumption to anyone. This is unlikely to be a popular opinion, given the number of people who enjoy consuming alcohol regularly as well as the alcoholic beverage companies that prefer to cite science showing that moderate drinking is healthy. However, the sheer number of deaths caused by alcohol consumption (2.8 million globally in 2016) highlights the importance of a thorough review of alcohol recommendations. Until then, we should each consider how much and how often we consume alcohol, since alcohol does not keep the doctor away.

Peer-edited by Priya Stepp