Chemists shine a new light on safer pigments, by accident

Humans have long used color to symbolize a sense of community, and of all the colors, blue is the most favored. Nearly half of Major League Baseball teams, for instance, feature blue as a signature color, and in North Carolina the shades of Carolina blue and Duke blue fill bars every basketball season. Yet for all its popularity, blue is not an easy color for human hands to create.

Humans perceive the color of an object because it reflects specific wavelengths of visible light. Color, though, is not the same thing as pigment. A pigment is a tangible material that provides color for uses like painting or cosmetics. Extracting pigments from nature, however, can be difficult and expensive, so the quest is on to find alternatives. To tackle the challenge of making blue in particular, researchers have turned to synthetic chemistry.

One researcher who has met this challenge is Mas Subramanian, a materials science professor at Oregon State University in Corvallis. Subramanian and his team discovered the first new blue pigment in more than two centuries; the last, cobalt blue, was created in 1802 from a mix of cobalt and aluminum oxides.

This ‘Bluetiful’ color came out of the blue

The birth of the brand-new blue, known as YInMn blue, was an accidental discovery. The research team was investigating novel electronic properties in hopes of revolutionizing the computer industry. They mixed oxides of three elements: yttrium (Y), indium (In) and manganese (Mn), each chosen for its electronic and magnetic properties. Surprisingly, after the mixture was baked for hours at more than 2,000 °F, a vibrant blue powder came out of the furnace rather than the brown or black material the team had expected.

YInMn blue powder. Source: Mas Subramanian, CC BY-SA 4.0

This out-of-the-blue breakthrough has since attracted interest across industries, from Nike shoes to Chanel eye shadow, because the material is both safe and durable. In 2017, Crayola created a crayon based on YInMn blue, Bluetiful, aiming to inspire children by publicly recognizing the connection between scientific discovery and the real world. Earlier this year, YInMn earned a chapter in the book “Chromatopia: An Illustrated History of Color.”

A broader palette: finding the perfect Ferrari red

The true breakthrough of YInMn blue was the crystal structure of the compound—the arrangement of atoms in its lattice. The compound not only provides blue but also serves as the basis for a whole spectrum of colors: adding titanium, zinc, copper or iron to the YInMn mix yields purple, orange and green materials. Soon, there may be bright, commercially viable pigments created simply by altering the chemical formula.

Now Subramanian’s team is focused on making a new non-toxic red pigment, especially desirable as a replacement for harmful cadmium-based reds. Like cobalt blue, cadmium red contains elements that can be poisonous if ingested in large quantities. Other red pigments, like Pigment Red 254, the characteristic Ferrari red, fade to dullness unless given additional protective treatment. A safe and stable red pigment is therefore in demand, and if the team succeeds, its attention-grabbing new red should be a big hit.

The search for the eye-catching red.

The perfect red, however, remains elusive. The challenge is that even with the most careful planning, it is hard to predict the color a new compound will produce. Whether or not Subramanian’s team finds the billion-dollar red, the world awaits.

Hear the story in Dr. Mas Subramanian’s TEDx Talk.

Peer edited by Rita Meganck.

Molds, Mealworms, and Missed Opportunities: How We Think About Young Scientists

It’s no exaggeration to say that the discovery of antibiotics revolutionized medicine. For the first time, doctors had a powerful, reliable tool for treating deadly infections. We usually credit the discovery of the first antibiotic, penicillin, to Alexander Fleming in 1928. However, the ability of Penicillium molds to stop infections was actually discovered some 30 years earlier – by someone 24 years younger.

In 1896, a French medical student named Ernest Duchesne made a key stride toward the development of antibiotics as part of his thesis, yet it was all but forgotten until a few years after Fleming received the 1945 Nobel Prize. Though it is unlikely that the antibiotic that Duchesne observed was actually penicillin, as discussed here, his work should have received far more recognition than it did. Fleming was an excellent researcher, to be sure, but had Duchesne’s work been acknowledged instead of ignored by the Pasteur Institute, it’s hard to imagine that Fleming would have been credited with the key penicillin breakthrough.

An image of Ernest Duchesne.
Ernest Duchesne (1874-1912)

Sadly, Duchesne’s story is not unique in the scientific community. Though science has sought to value knowledge above all else, countless examples exist of findings being ignored due to biases about what makes a good scientist. In fact, even Duchesne’s work may have come from watching Arab stable boys treat saddle sores by allowing mold to grow on saddles, which raises further questions about attitudes toward non-Western knowledge – a contentious topic that I won’t delve into here. Instead, I want to consider how the scientific community views young scientists and how our assumptions hurt more than help the pursuit of knowledge. (Note: I use “our” here and throughout to acknowledge that grad students like me and other young academics can be victims of elitism while also perpetuating it against other young researchers.)

“But wait,” you might say, “surely we’ve gotten better at valuing young scientists in the past 100 years!” To some degree, you’re right – as a young researcher myself, I’ve felt generally supported and respected in academic circles. Yet, despite substantial progress, there is still a barrier between our young people and widely-accepted, legislation-informing science.

For instance, a recent discovery in the battle against plastic waste was the ability of mealworms to break down styrofoam. Once the worms eat the styrofoam, explains one of the two highly impactful 2015 articles, the bacteria in their guts can completely digest it. It was a fascinating finding, but it had already been made in 2009 by a 16-year-old. Tseng I-Ching, a Taiwanese high school student, won top prizes at the Intel International Science and Engineering Fair (ISEF; typically held in the U.S.) for her isolation of a styrofoam-degrading bacterium that lives in mealworms. Yet, there is no mention of her work in the 2015 papers. Even working with scholars from local universities and excelling at the ISEF – the world’s largest pre-college science fair – is not enough for a young researcher’s work to break into the bubble of academia. Prize money and prestige are certainly helpful for kickstarting an individual career, but allowing valuable research to go unnoticed holds science back for the entire community.

A man looks out of a window into a large room filled with science fair posters.
International science fairs like the ISEF can be great sources of fresh scientific ideas.

Now, any researcher can tell you: science isn’t easy. Producing data that you can be confident in and that accurately describes an unknown phenomenon is both knowledge- and labor-intensive. We therefore tend to assume, maybe not even consciously, that young people lack the skills and expertise needed to conduct good, rigorous science. Surely much of the work at high school science fairs, for example, has already been done before, and what hasn’t been done must be too full of mistakes and oversights for professionals to bother with, right? In fact, that’s not a fair assumption for large fairs like the ISEF, where attendees have already succeeded at multiple levels of local/regional fairs and are judged by doctoral-level researchers. Had established scientists paid attention to Tseng’s work, research on breaking down styrofoam could be years ahead of where it is now. Perhaps we should be considering how these students’ projects can feed the current body of knowledge, instead of just giving them a prize, fawning over their incredible potential, and sending them on their way.

A speaker at a workshop I recently attended about research careers left us with a memorable quote, to the effect of: “An academic’s greatest currency is their ideas.” When academia ignores young researchers, we’re essentially telling them that their ideas are not yet valuable. One group working to prove this wrong is the Institute for Research in Schools (IRIS). Founded in 2016 and operating in the UK, IRIS works to get school-age students working on cutting-edge scientific projects. In working with the group, Professor Alan Barr (University of Oxford) noted that his project 

“…has morphed into something that’s been taken in so many different directions by so many different students. There’s no way that we ourselves could have even come up with all of the questions that these students are asking. So we’re getting so much more value out of this data from CERN, because of the independent and creative ways in which these students are thinking about it. It’s just fantastic.” 

IRIS often encourages its students to publish in professional scientific journals, but even if a young researcher’s work is not quite worthy of publication, acting like they have nothing to contribute is foolish and elitist. In the Information Age, it’s easier than ever for curious, passionate people to gather enough knowledge to propose and pursue valid research questions. We might be surprised at the intellectual innovation spurred by browsing projects in IRIS or the ISEF and reaching out to students that share our research interests. Perhaps our reluctance to do so stems more from our egos than from fact.

Ultimately, promoting a diverse and inclusive scientific community will improve the quality of science done and help ease the strained relationship between scientists and the public. Doing so means we have to recognize and challenge our biases about who could be part of the team. These are widespread biases that can be difficult for individuals to act against; a well-intentioned researcher may struggle to convince funding agencies to back a project inspired by a high-schooler’s work. Therefore, I’m not pointing fingers, but calling on every member of the community to help change scientific culture. 

Perhaps students like Duchesne and Tseng are more the exception than the norm. Yet, when given an attentive ear, their fresh ideas are bound to be well worth any naiveté. The biggest barriers they face in contributing to science may not be their flaws, but their elders.

Peer edited by Gabrielle Dardis and Brittany Shepherd.

First the Moon, Now Mars: A Look at Rocket Science in Reaching the Red Planet

This year marks the 50th anniversary of humans becoming an extraterrestrial species. In July 1969, men set foot on lunar soil, taking the first tangible step in connecting mankind with deep space and fueling aspirations for what humans could achieve in interplanetary travel. Indeed, lofty goals have been set in this regard since the Apollo missions – some as ambitious as starting the first Martian colony as soon as 2025. But exactly how much progress have we made in launch systems since landing on the moon, and is it sufficient to achieve such a feat as colonizing Mars?

In order to travel to and colonize other planets, mankind would need a way to send massive amounts of cargo reliably and consistently to the moon, as the moon would likely serve as a pit stop for subsequent legs of any given cosmic journey. This would require repeated use of rockets powerful enough to lift heavy payloads while imparting them with enough speed to clear Earth’s atmosphere – ideally rockets that are both extremely powerful and reusable. The Apollo program was supported by NASA’s Saturn V rocket, which remains the most powerful rocket brought to operational status to date. While these rockets provided the power necessary to carry men and their cargo to the moon, every part of the rocket was consumed in a mission; none of the engines, boosters, or fuel tanks survived for reuse. Additionally, these rockets were built during a time of political urgency; NASA’s budget peaked at 4% of all federal spending during the Cold War, compared to the 0.5% it operates at now. So, while the Saturn V was an extremely powerful rocket, its disposable nature would not be economically sustainable for present-day Martian colonization efforts.

Since the Saturn V, scientists have worked to improve both the payload capacity and reusability of rockets. From 1981 to 2011, NASA operated the space shuttle, a partially reusable spacecraft system that included two recoverable rocket boosters flanking an external fuel tank and three clustered liquid-fuel cryogenic rocket engines in the spaceplane itself. The rocket boosters were jettisoned early in the ascent and parachuted down for recovery in the ocean; the rocket engines could be recovered from the spaceplane itself after it returned from a mission. Since the space shuttle era, companies have also hopped on the rocket development bandwagon; SpaceX, for example, is working on making its multistage Falcon rockets 100% reusable. Thus far, the company can reliably land the first stage of its rockets on target and reuse the recovered boosters in subsequent flights. Complete rocket reuse would dramatically reduce the cost of colonizing Mars, making it much more feasible.

The space shuttle system, including the spaceplane, external fuel tank, and flanking rocket boosters.

In addition to making headway in rocket reusability, efforts have also been made to increase rocket power. NASA is now building the Space Launch System (SLS), a rocket family projected to be the most powerful in existence, with one variant carrying 143 tons of pure payload to low Earth orbit (surpassing the Saturn V’s 130-ton payload). Pragmatically, the first SLS variant borrows rocket engines from the space shuttle, a proven technology. Progress on this program, however, is uncertain, as the US 2020 fiscal year budget did not allot any money for SLS development. On the private side, earlier this year SpaceX successfully launched its Falcon Heavy rocket – the highest-payload launch vehicle currently in operation – and recovered all three of its boosters back on Earth.

Recovery of two rocket boosters after SpaceX Falcon Heavy launch.

Since landing on the moon, we’ve made a lot of progress in making rockets both more powerful and more reusable – two key ingredients in the recipe for colonizing a planet like Mars. But the question remains: how close are we to reaching the red planet? Nothing is certain, but mankind is making great strides in getting our rockets up to par for these unprecedented flights, and I personally feel confident that we’ll see “Man on Mars” rocking our headlines within the next couple of decades. Now, is there a future where humans terraform Mars, or where Earthly humans can pop over to visit their Martian relatives? To that I’d say, ask me again in another 50 years.

Peer edited by Aldo Jordan.

The Science Behind Spitting for your At-home Genetic Test

Conjuring up two milliliters of spit after not eating or drinking for the prior 30 minutes doesn’t sound taxing, but give it a try and you’ll quickly change your mind. Four years ago, I sat in my kitchen wafting the scent of freshly baked brownies into my face in an attempt to make myself salivate. Voilà! I finally produced enough spit for my 23andMe kit and rewarded myself with a brownie. In 6-8 weeks, I would learn more about my health and ancestry, all thanks to just two milliliters of saliva. Best Christmas gift ever…thanks, mom and dad!

The popularity of these at-home genetic testing kits has soared in recent years. For example, AncestryDNA sold about 1.5 million kits in three days alone last fall. People love giving these DNA kits as holiday gifts, and they’re even one of Oprah’s favorite things, so why wouldn’t you want to purchase one? Given the general affordability of these non-invasive tests and the variety of kits to choose from, it’s easy to participate in the fun.

https://commons.wikimedia.org/wiki/File:Tongue1.png & https://pixabay.com/en/genetics-chromosomes-rna-dna-156404

After scientists receive your at-home genetic kit, they isolate cells from your saliva and unravel the DNA packaged in your chromosomes. Then they read your base pairs like a book and check for specific genetic markers that indicate risk of a certain disease. Your ancestry is determined by comparing your genetic markers to a reference database.

I’m amazed that information about my ancestors’ migration patterns can be extracted from the same fluid that helped disintegrate the brownie I just ate. In addition to the enzymes that aid in digestion, human saliva contains white blood cells and cells from the inside of your cheeks. These cells are the source of DNA, which is packaged inside your chromosomes. Humans carry two sex chromosomes (XX or XY) and 22 pairs of numbered chromosomes (autosomes). Chromosomes are responsible for your inherited traits and your unique genetic blueprint. During a DNA test, scientists unravel your chromosomes and read the letters coding your DNA (base pairs) to check for specific genetic markers.

Companies perform one or more kinds of DNA tests: mitochondrial DNA (mtDNA), Y-chromosome (Y-DNA), or autosomal DNA. An mtDNA test reveals information about your maternal lineage, since only mothers pass on mitochondrial DNA; mtDNA carries just 37 genes, compared with the 20,000-25,000 protein-coding genes in humans. A Y-DNA test is for male participants only and examines the paternal lineage; the Y chromosome accounts for about 50-60 protein-coding genes. Autosomal DNA testing is considered the most comprehensive analysis, since autosomes include the majority of your DNA sequence and the test isn’t limited to a single lineage. However, companies don’t fully sequence your genome, because the at-home kits would then no longer be affordable. Instead, the tests look at about 1 million out of 3 billion base pairs.
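
To put that fraction in perspective, here is the simple arithmetic behind the “less than one percent” figure below (a back-of-the-envelope calculation, not a number quoted by any testing company):

```latex
% Share of the genome covered by a typical ~1-million-marker test
\[
  \frac{1\,000\,000 \ \text{base pairs read}}{3\,000\,000\,000 \ \text{base pairs in the genome}}
  \approx 0.03\%
\]
```

That tiny slice is workable because the tests target sites already known to vary from person to person, rather than reading the genome end to end.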

But how reliable are these tests if less than one percent of your DNA is sequenced? When detecting genetic markers that could increase disease risk, these tests are very accurate since scientists are searching for known genes associated with a certain disease. However, the reliability of predicting your ancestry is another story.

Using DNA to determine ethnicity is difficult, since ethnicity is not a trait determined by one gene or even a combination of genes. Scientists analyze only part of your genome and compare those snippets of DNA to those of people with known origins. For example, Ancestry uses a reference panel that divides the world into 26 genetic regions, with an average of 115 samples per region. If your results show that you’re 40% Irish, that means 40% of your DNA snippets are most similar to those of a person who is completely Irish. Each company has its own reference database and algorithm for deciding your ethnic makeup, so you would likely receive different results if you sent your DNA to multiple companies.
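
To make the matching idea concrete, here is a deliberately tiny sketch in Python of how a percentage like “40% Irish” can be derived by assigning each tested snippet to its best-matching reference region. The markers, variants, regions, and matching rule are all hypothetical illustrations; this is not the actual algorithm or reference data used by Ancestry, 23andMe, or any other company.

```python
# Toy ancestry estimate by reference-panel matching (illustration only).
from collections import Counter

# Hypothetical reference panel: the variant most often seen at each marker
# among people of known origin from each region.
REFERENCE_PANEL = {
    "Ireland":     {"marker1": "A", "marker2": "G", "marker3": "T"},
    "Scandinavia": {"marker1": "A", "marker2": "C", "marker3": "C"},
    "West Africa": {"marker1": "G", "marker2": "C", "marker3": "T"},
}

def estimate_ancestry(customer_markers):
    """Assign each tested snippet (marker) to the region whose reference
    variant matches it, then report the share of snippets per region."""
    assignments = Counter()
    for marker, variant in customer_markers.items():
        best_region = max(
            REFERENCE_PANEL,
            key=lambda region: REFERENCE_PANEL[region].get(marker) == variant,
        )
        assignments[best_region] += 1
    total = sum(assignments.values())
    return {region: round(100 * count / total) for region, count in assignments.items()}

# Two of the three snippets look most "Irish", one looks "Scandinavian".
print(estimate_ancestry({"marker1": "A", "marker2": "G", "marker3": "C"}))
# -> {'Ireland': 67, 'Scandinavia': 33}
```

Real pipelines work with roughly a million markers, far larger reference panels, and statistical models rather than simple matching, which is part of why two companies can hand the same sample back with different percentages.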

At-home genetic tests are an exciting and affordable way to explore the possibilities of your genetic makeup and can paint a general picture of your identity. While the specifics of the ethnicity results may be a bit unreliable, at least it’s a good starting point if you’re interested in building your family tree!

Peer edited by Nicole Fleming.

Understanding the 2017 Climate Science Special Report

Earlier this year, the U.S. government released the Climate Science Special Report.  This document describes the state of the Earth’s climate, specifically focusing on the U.S.  If you are someone who is interested in environmental science or policy, you may have thought about reading it.  But where to start? The report contains fifteen chapters and four additional appendices, so reading it may seem daunting.  We published this summary to give a brief introduction to climate change and provide a starting point for anyone who wants to learn more.

https://www.nps.gov/articles/glaciersandclimatechange.htm

Retreat of the Lyell Glacier (Yosemite National Park) between 1883 and 2015. Park scientists study glaciers to understand the effects of climate change in parks managed by the National Park Service. 1883 photo: USGS/Israel Russell; 2015 photo: NPS/Keenan Takahashi

What is Climate Change?      

“Climate change” is a phrase that has become ubiquitous throughout many aspects of American and global society, but what exactly is climate change?

Like weather, climate takes into account temperature, precipitation, humidity, and wind patterns.  However, while weather refers to the status of these factors on any given day, climate describes the average weather for a location over a long period of time.  We can consider a climate for a specific place (for example, the Caribbean Islands have a warm, humid climate), or we can consider all of Earth’s climate systems together, which is known as the global climate.

Depending on where you live, you may have seen how weather can change from day to day.  It may be sunny one day, but cool and rainy the next.  Climate change differs from changes in weather because it describes long-term shifts in average weather.  For example, a place with a changing climate may traditionally be warm and sunny but, over many years, become cooler and wetter.  While weather fluctuates from day to day, climate change unfolds gradually and is viewed through a historical lens, comparing conditions over many years.  Though we may not notice the climate changing on a daily basis, it can have drastic effects on our everyday lives: it can impact food production, world health, and the prevalence of natural disasters.

https://commons.wikimedia.org/wiki/File:Effects_of_global_warming,_plotted_against_changes_in_global_mean_temperature.png

Summary of the potential physical, ecological, social and large-scale impacts of climate change. The plot shows the impacts of climate change versus changes in global mean temperature (above the 1980-1999 level). The arrows show that impacts tend to become more pronounced for higher magnitudes of warming. Dashed arrows indicate less certainty in the projected impact and solid arrows indicate  a high level of certainty.

What Causes Climate Change? 

The major factor determining the Earth’s climate is radiative balance.  Radiation is energy transmitted into and out of the Earth’s atmosphere, surface, and oceans.  Incoming radiation most often comes from light and heat energy from the Sun.  Earth can lose energy in several ways.  It can reflect a portion of the Sun’s radiation back into space.  It can also absorb the Sun’s energy, causing the planet to heat up and emit low-energy infrared radiation back toward space.  The amount of incoming and outgoing radiation determines the characteristics of climate, such as temperature, humidity, wind, and precipitation.  When the balance of incoming and outgoing radiation changes, the climate also changes.
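
One standard way to make “radiative balance” concrete is the textbook zero-dimensional energy-balance model (a simplification for illustration, not a formula taken from the Climate Science Special Report): the planet settles at the temperature where the sunlight it absorbs equals the infrared energy it emits back to space.

```latex
% Zero-dimensional energy balance (textbook illustration)
%   S      : incoming solar flux at Earth (~1361 W/m^2)
%   \alpha : albedo, the fraction of sunlight reflected to space (~0.3)
%   \sigma : Stefan-Boltzmann constant
%   T      : effective emission temperature of the planet
\[
  (1 - \alpha)\,\frac{S}{4} \;=\; \sigma T^{4}
  \quad\Longrightarrow\quad
  T \approx 255\ \mathrm{K} \;(\approx -18\,^{\circ}\mathrm{C})
\]
```

Greenhouse gases make it harder for that infrared energy to escape, so the surface must warm above this bare-planet value (to the observed global average of roughly 59°F, or 15°C) before energy out once again matches energy in.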

https://commons.wikimedia.org/wiki/File:Earth%27s_greenhouse_effect_(US_EPA,_2012).png

Scientists agree that it’s extremely likely that human activity (via greenhouse gas emissions) is the dominant cause of the increase in global temperature since the mid-20th century.

There are some natural factors that can influence climate.  The main ones are volcanic eruptions and the El Niño Effect.  Volcanic eruptions emit clouds of particles that block the Sun’s radiation from reaching the Earth, changing the planet’s radiative balance and causing the planet to cool. The El Niño Effect is a natural increase in ocean temperature in the Pacific Ocean that leads to other meteorological effects.  The increase in ocean temperature off the coast of South America leads to higher rates of evaporation, which can cause wind patterns to shift, influencing weather patterns worldwide. Together, these factors influence climate, so when they differ from the norm, they can contribute to climate change.

It is true that climate change can occur naturally and it is expected to happen slowly over long periods of time.  In some cases, the climate can change for a few months or years (such as after a volcanic eruption), but the effects of these events are not long-lasting.  Since the Industrial Era, however, the factor contributing most to climate change has been an anthropogenic driver, meaning one caused by human activity: the buildup of greenhouse gases in the atmosphere.  The main greenhouse gases are carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O).  These gases are problematic because they remain in the Earth’s atmosphere for a long time after they are released.  They trap much of Earth’s outgoing radiation, leading to an imbalance of incoming and outgoing radiation energy.  Because the Earth’s atmosphere is holding on to all that energy while still receiving radiation from the Sun, the planet heats up.  This is called the greenhouse effect, because it is similar to what happens in a greenhouse—the Sun’s energy can get in, but the heat cannot get out.  The greenhouse effect has intensified due to the greenhouse gases released by modern industrial processes, and this has caused the Earth’s climate to begin to change.

Who contributed to the Climate Science Special Report?

The report was written by members of the American scientific community, including (but not limited to) the National Science Foundation, the National Aeronautics and Space Administration (NASA), the US Army Corps of Engineers, and multiple universities and national labs.  They analyzed data from articles in peer-reviewed scientific journals—that is, other scientists read these articles before they were even published to check for questionable experiments, data, or conclusions—as well as government reports and statistics and other scientific assessments.  The authors provided links to each source in citation sections at the end of each chapter. They combined everything they learned into this one comprehensive document, the Climate Science Special Report.

What can we learn from this report?      

First of all, the report reveals that the Earth is getting warmer.  The average global surface temperature has increased about 1.8°F (1.0°C) since 1901.  This may seem like a small change, but this increase in temperature is enough to affect the global climate.  Sea levels have risen about eight inches since 1900, which has led to increased flooding in coastal cities.  Weather patterns have changed, with increased rainfall and heatwaves.  While the increased rainfall has been observed primarily in the Northeastern U.S., the western part of the U.S. has experienced an increase in forest fires, such as those that have devastated California this year.  Such changes in weather patterns can be dangerous for those who live in those areas.  They can even damage infrastructure and affect agriculture, which impacts public health and food production.  These changes mainly result from greenhouse gases, namely CO2, that humans have emitted into the atmosphere.

Where can I go to read the report myself?  

You can find a link to the main page of the report here.  There is also an Executive Summary, which was written for non-scientists.   While the rest of the report contains some technical language, it is generally accessible, and contains visuals to help readers understand the data.  If you are interested in gaining a better understanding of Earth’s climate and how it’s changing, we encourage you to take a look at the Climate Science Special Report to learn more.  

Peer edited by Amanda Tapia and Joanna Warren.

Physics Through the Looking Glass

On Christmas Eve 1956, a woman caught the last train to New York in the snow to report experimental results that would alter the landscape of modern physics forever. Although members of the physics community would initially dismiss her results as nonsense, the evidence would soon become incontrovertible, launching many scientific possibilities.

https://commons.wikimedia.org/wiki/File:Chien-shiung_Wu_(1912-1997)_C.jpg

C. S. Wu working in the lab.

The scientist’s name was Chien-Shiung Wu, and it took her many years of study and perseverance to make this discovery. Starting at a young age, she was quite curious about the natural world. Chien-Shiung Wu’s father, Zhongyi Wu, was an engineer who believed strongly in equal rights for women. He started the first school for girls in his region of China. Chien-Shiung Wu was one of the first girls to obtain a formal education in China. She rapidly outpaced her peers, and proceeded to an all-girls boarding school 50 miles from her home. She continued on to college in Nanjing before traveling across the globe to pursue her graduate degree at University of California, Berkeley, in the US. Not long after her arrival in California, Chien-Shiung learned of the Japanese invasion of China, which affected her family’s hometown. She would not hear from her family for eight long years. After she completed her PhD, she was considered ‘the authority‘ on nuclear fission, according to Robert Oppenheimer. Renowned physicist Enrico Fermi even consulted her for advice on how to sustain a nuclear chain reaction in the making of the atomic bomb.

For decades, physicists had assumed that there was no way to differentiate left from right according to quantum mechanics, the theoretical underpinning of modern physics that successfully describes the behavior of subatomic particles. The assumption that left and right were indistinguishable was known as ‘parity symmetry’. It was naturally appealing, much like the symmetries that exist in art, biological organisms, and other natural phenomena, like snowflakes.

https://www.flickr.com/photos/chaoticmind75/6922463361

Zoomed-in image of a snowflake. When Mme Wu awaited the train that Christmas Eve, she was surrounded by snowflakes, a reminder of how symmetry is ubiquitous in the natural world. Such symmetries stood in stark contrast to the discovery she was about to announce.

Nonetheless, this assumed symmetry was called into question at a scientific conference in 1956. Within that same year, Chien-Shiung Wu would demonstrate, in experiments at the National Bureau of Standards (now NIST), that parity was violated for particular types of decays. In other words, these decay processes did not look the same in a mirror. This discovery was far from Chien-Shiung Wu’s only claim to fame.

Wu made many advances in the study of beta decay, the disintegration of a neutron that results in the emission of an electron and another particle called a neutrino. It was with the beta decay of the isotope cobalt-60 that she eventually ran her famous parity violation experiment. Using a magnetic field and low temperatures, she was able to achieve the parity violation results that turned the world on its head.

Chien-Shiung Wu did not receive the Nobel Prize, though her two male theorist colleagues did. In spite of this oversight, she obtained recognition in other ways. She was the first female physics instructor at Princeton University and the first female president of the American Physical Society (APS). Her success can be partially attributed to her parents’ encouraging attitude towards women’s education. Fortunately, they survived the Japanese invasion during World War II, with her father engineering the famous Burma Road. C. S. Wu ushered in an entirely new era in which other assumed symmetries would be overturned, helping us to more deeply understand the state of the universe that we see today.

Peer edited by Kaylee Helfrich.

Superbug Super Problem: The Emerging Age of Untreatable Infections

You’ve heard of MRSA. You may even have heard of XDR-TB and CRE. The rise of antibiotic-resistant infections in our communities has been both swift and alarming. But how did these once easily treated infections become the scourges of the healthcare world, and what can we do to stop them?

Antibiotic-resistant bacteria pose an alarming threat to global public health and result in higher mortality, increased medical costs, and longer hospital stays. Disease surveillance shows that infections which were once easily cured, including tuberculosis, pneumonia, gonorrhea, and blood poisoning, are becoming harder and harder to treat. According to the CDC, we are entering the “post-antibiotic era”, where bacterial infections could once again mean a death sentence because no treatment is available. Methicillin-resistant Staphylococcus aureus, or MRSA, kills more Americans every year than emphysema, HIV/AIDS, Parkinson’s disease, and homicide combined. The most serious antibiotic-resistant infections arise in healthcare settings and put particularly vulnerable populations, such as immunosuppressed and elderly patients, at risk. Of the 99,000 Americans per year who die from hospital-acquired infections, the vast majority die due to antibiotic-resistant pathogens.

http://www.lab-initio.com

Cartoon by Nick D Kim, scienceandink.com. Used by permission

Bacteria become resistant to antibiotics through their inherent biology. Through natural selection and genetic adaptation, they can acquire mutations that make them less susceptible to antimicrobial intervention. For example, a bacterium might acquire a mutation that up-regulates the expression of a membrane efflux pump, a transport protein that removes toxic substances from the cell. If the gene encoding the transporter is up-regulated, or a repressor gene is down-regulated, the pump is overexpressed, allowing the bacterium to pump the antibiotic back out of the cell before the drug can kill it. Bacteria can also alter the active sites of antibiotic targets, decreasing the rate at which these drugs kill the bacteria and requiring higher and higher doses for efficacy. Much of the research on antibiotic resistance is dedicated to better understanding these mutations and developing new and better therapies that can overcome existing resistance mechanisms.

While bacteria naturally acquire mutations in their genome that allow them to evolve and survive, the rapid rise of antibiotic resistance in the last few decades has been accelerated by human actions. Antibiotic drugs are overprescribed, used incorrectly, and applied in the wrong context, all of which gives bacteria more opportunities to acquire resistance mechanisms. This starts with healthcare professionals, who often prescribe and dispense antibiotics without ensuring they are required. This could include prescribing antibiotics to someone with a viral infection, such as rhinovirus, or prescribing a broad-spectrum antibiotic without performing the appropriate tests to confirm which bacterial species they are targeting. The blame is also on patients, not only for seeking out antibiotics as a “cure-all” when they are not appropriate, but also for poor adherence and improper disposal. It’s absolutely imperative that patients follow the advice of a qualified healthcare professional and finish antibiotics as prescribed. If a patient stops dosing early, they may have cleared out only the antibiotic-susceptible bacteria, enabling the stronger, resistant bacteria to thrive in that void. Additionally, if a patient incorrectly disposes of leftover antibiotics, the drugs may end up in the water supply and present new opportunities for bacteria to develop resistance.

https://www.flickr.com/photos/diethylstilbestrol/16998887238/in/photostream/

Overuse of antibiotics in the agricultural sector also aggravates this problem, because antibiotics are often obtained without veterinary supervision and used without sufficient medical reasons in livestock, crops, and aquaculture, which can spread the drugs into the environment and food supply. These contributing factors to the rise of antibiotic resistance can be mitigated by proper prescriber and patient education and by limiting unnecessary antibiotic use. Policy makers also hold the power to control the spread of resistance by implementing surveillance of treatment failures, strengthening infection prevention, incentivizing antibiotic development in industry, and promoting proper public outreach and education.

While the pharmaceutical industry desperately needs to research and develop new antimicrobials to combat the rising number of antibiotic-resistant infections, the onus is also on every member of society to both promote appropriate use of antibiotics and ensure safe practices. The World Health Organization has issued guidelines that could help prevent the spread of infection and antibiotic resistance. In addition, World Antibiotic Awareness Week is November 13-19, 2017, and could be used as an opportunity to educate others about the risks associated with antibiotic resistance. These actions could significantly slow the spread and development of resistant infections and encourage the drug development industry to create new antibiotics, vaccines, and diagnostics that can effectively treat and reduce antibiotic-resistant bacteria.

Peer edited by Sara Musetti.

Underground Science at SNOLAB

The best models of how our world works are incomplete. Though they accurately describe much of what Mother Nature has thrown at us, models represent just the tip of the full iceberg and a deeper understanding awaits the endeavoring scientist. Peeling back the layers of the natural world is how we physicists seek a deeper understanding of the universe. This search pushes existing technology to its limits and fuels the innovation seen in modern day nuclear and particle physics experiments.

https://www.flickr.com/photos/colinjagoe/5247835812

This is a map of the SNOLAB facility. It’s 2 km (~1.2 miles) underground and is the deepest clean room facility in the world!

Today, many of these experiments search for new physics beyond the Standard Model, the accepted theory describing the behavior of particles. Some physical phenomena have proven difficult to reconcile with the Standard Model, and research seeks to improve understanding of those conundrums, particularly the properties of elusive particles known as neutrinos, which have very little mass and no electric charge, and of dark matter, a mysterious cosmic ingredient that holds galaxies together but whose form is not known. The experiments pursuing these phenomena each take a different approach to the same unknowns, resulting in an impressive diversity of techniques geared toward the same goal.

On one side of the experimental spectrum, the Large Hadron Collider smashes together high-energy protons at a rate of one billion collisions per second. These collisions have the potential to create dark matter particles or spawn interactions between particles that break expected laws of nature. On the other side of the spectrum, there is a complementary set of experiments that quietly observe their environments, patiently waiting to detect rare signals of dark matter and other new physical processes outside the realm of behavior described by the Standard Model. As the signals from the new physics are expected to be rare (~1 event per year as compared to the LHC’s billion events per second), the patient experiments must be exceedingly sensitive and avoid any imposter signals, or “background”, that would mimic or obscure the true signal.

The quest to decrease background interference has pushed experiments underground to cleanroom laboratories set up in mine caverns. While cleanrooms reduce the chances of unwanted radioactive isotopes, like radon-222, wandering into one’s experiment, mines provide a mile-thick shield from interference that would be present at the surface of Earth: particles called cosmic rays constantly pepper the Earth’s surface, but very few of them survive the long journey to an underground lab.

Figure reproduced with permission from Michel Sorel from La Rivista del Nuovo Cimento, 02/2012, Volume 35, Issue 2, “The search for neutrinoless double beta decay”, J. J. Gómez-Cadenas, J. Martin-Albo, M. Mezzetto, F. Monrabal, M. Sorel, all rights reserved, with kind permission of Società Italiana di Fisica.

The rate at which muons, a cosmic ray particle, pass through underground labs decreases with the depth of the lab. At the SNOLAB facility, shown in the lower right, approximately one muon passes through a square centimeter of the lab every 100 years.

The form and function of modern underground experiments emerged from the collective insights and discoveries of the scientific community studying rare physical processes. As in any field of science, this community has progressed through decades of experimentation with results being communicated, critiqued, and validated. Scientific conferences have played an essential role in this process by bringing the community together to take stock of progress and share new ideas. The recent conference on Topics in Astroparticle and Underground Physics (TAUP) was a forum for scientists working to detect dark matter and study the properties of neutrinos. Suitably, the conference was held in the historic mining town of Sudbury, Ontario, home to the Creighton Mine, at the bottom of which lies SNOLAB, a world-class underground physics laboratory which notably housed the 2015 Nobel Prize winning SNO experiment. SNO, along with the Super-Kamiokande experiment in Japan’s Kamioka mine, was awarded “for the discovery of neutrino oscillations, which shows that neutrinos have mass.”

There is a natural excitement upon entering an active nickel mine, donning a set of coveralls, and catching a cage ride down into the depths; this was our entrance into the Creighton Mine during the TAUP conference. After descending an ear-popping 6800 feet in four minutes, we stepped out of the cage into tunnels—known as drifts—of raw rock. From there, we followed the path taken every day by SNOLAB scientists, walking approximately one kilometer through the drifts to the SNOLAB campus. At SNOLAB, we prepared to enter the clean laboratory space by removing our coveralls, showering, and donning cleansuits. Inside, the rock walls are finished over with concrete and epoxy paint, and we walked through well-lit hallways to a number of experiments, which occupy impressively large caverns, some ~100 feet high.

Photo credit: Tom Gilliss

Physicists visiting SNOLAB get a close-up view of the DEAP-3600 and MiniClean dark matter experiments. Shown here are large tanks of water that shield sensitive liquid argon detectors located within.

Our tour of SNOLAB included visits to several dark matter experiments, including DEAP-3600 and MiniClean, which attempt to catch the faint glimmer of light produced by the potential interaction of dark matter particles with liquid argon. A stop by PICO-60 educated visitors on another captivating experiment, which monitors a volume of a super-heated chemical fluid for bubbles that would indicate the interaction of a dark matter particle and a nucleus. The tour also included the SNO+ experiment, offering glimpses of the search for a rare nuclear transformation of the isotope tellurium-130; because this transformation depends on the nature of neutrinos, its observation would further our understanding of these particles.

SNOLAB is also home to underground experiments from other fields. The HALO experiment, for instance, monitors the galaxy for supernovae by capturing neutrinos that are emitted by stellar explosions; neutrinos may provide the first warnings of supernovae as they are able to escape the confines of a dying star prior to any other species of particle. Additionally, the REPAIR experiment studies the DNA of fish kept underground, away from the natural levels of radiation experienced by all life on the surface of Earth.

The search for rare signals from new physical phenomena pushed physicists far underground and required the development of new technologies that have been adapted by other scientific disciplines. The SNOLAB facility, in particular, has played a key role in helping physics revise its best model of the universe, and it can be expected that similar underground facilities around the world will continue to help scientists of many stripes reveal new facets of the natural world.

Peer edited by JoEllen McBride and Tamara Vital.

Frog Slime: The Secret to Kicking that Awful Flu

https://pixabay.com/p-324553/?no_redirect

Frog slime, although gross, might help combat some strains of the influenza virus.

Got the flu? Time to start looking for your frog prince.

Researchers at Emory University have identified a substance that kills influenza, the virus that causes seasonal flu. The influenza-killing substance, called urumin, is produced on the skin of a South Indian frog and stops influenza virus growth by causing the virus to burst open. Think of smashing an egg with a hammer!

Researchers think urumin disrupts a structure on the outside of the virus. Influenza, like an egg, has an outer shell that protects the contents of the virus (the “yolk”), which the virus uses to grow and replicate. Unlike an egg, the outer shell of influenza is not smooth. Instead, it contains small spikes. Urumin sticks to these influenza spikes, interfering with their function and causing the virus to burst open.

The influenza virus uses the spikes to stick to human cells and cause infection. Two types of spikes are found on each influenza virus: H (hemagglutinin) and N (neuraminidase). There are multiple varieties of H’s and N’s, and each virus “wears” one H and one N on its outer shell, similar to the way we choose a shirt and a pair of pants to wear each day.
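
For a sense of how many combinations that wardrobe allows (counts drawn from the broader influenza literature, not from the frog-slime study itself), scientists have so far identified 18 H subtypes and 11 N subtypes, so in principle

```latex
\[
  18 \ \text{H types} \times 11 \ \text{N types} = 198 \ \text{possible H/N combinations}
\]
```

which is why strains carry names like H1N1 or H3N2.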

https://pixnio.com/science/microscopy-images/influenza/3d-graphical-representation-of-flu-virus

Cartoon of Influenza. The outside is covered in spikes, H in light blue and N in dark blue.  The coils in the center contain the genetic information or “yolk” that causes the virus to replicate.  From: Doug Jordan.

Surprisingly, urumin is only effective against viruses carrying one particular H spike type, H1. This is because urumin can stick only to H1 spikes, not to N spikes or to other types of H spikes. H1 is one of only three H types that commonly infect humans. Shockingly, H1 viruses are responsible for some of the worst flu outbreaks in history, such as the 1918 Spanish flu pandemic that caused 50 million deaths and, more recently, the swine flu pandemic of 2009.

Destroying influenza in a lab environment is great, but what about in a living animal? In the same study, urumin treatment resulted in a 250% increase in mouse survival after influenza infection. Urumin treatment also decreased disease severity by lessening weight loss and decreasing the amount of virus in the lungs.

Although these mouse experiments are promising, it is important to point out that the mice were given urumin 5 minutes before they were infected with influenza and also received urumin every day for the rest of the infection.  Because most of us do not know the exact moment we are exposed to influenza virus – the grocery store? the breakroom? the gym? – it is difficult to treat someone at the moment they are infected with influenza. Thus, more research is needed to look at the effectiveness of urumin when it is given days after infection, which is the typical time that an infected person might visit their doctor.

With more research, urumin could be the promising new influenza drug researchers have been looking for, to potentially reduce influenza-associated deaths and complications.

Peer edited by Kaylee Helfrich.

Fossils That Slumber in the Mountains and the Mud

Over 200 million years ago, a reptile, 11 feet long and 1500 pounds, was prowling about, likely feeling very pleased with himself. Not only did he have four crunchy creatures starting to digest in his stomach, but he had bitten another weakling in the neck and then crushed it under his left knee. Just at this moment of triumph, the reptile got stuck in the mud of ancient Jordan Lake, and slowly drowned.

Around the same time, by the seaside of what would one day become Italy, the forerunners to today’s oyster were nestling on the sea floor.

41 years ago, in 1976, Dr. Joe Carter obtained his PhD from Yale University and then drove down with his wife to start a new job at Chapel Hill’s geology department. He came as a sleuth for fossils. Ancient oysters, clams, mollusks, bivalves – Dr. Carter wanted to learn as much about them as he could.

credit: Mejs Hasan

Dr. Carter and one of his fossil replicas

For most of us, shells are just the violet-tinted, half-moon shaped spectacles that nip our feet at the beach.

But for Dr. Carter, these bivalves – and especially their fossils resurrected millions of years after they lived – are clues into evolutionary history.

In 1980, Dr. Carter took a trip to the mountains of northeastern Italy. There, he found an 80-year-old man who had been collecting Triassic fossils for decades – bivalves that lived 200 to 250 million years ago. The prospect of so many fossils was like coming upon a casket of jewels for Dr. Carter. The Italian man gave him a generous sampling of his fossil collection, and Dr. Carter fell to examining them.

“Well, this looks like an oyster,” Dr. Carter speculated, as he dwelt upon one of his fossils. Or was it? Oyster fossils dated back in time for 200 million years, beyond which they disappeared into the guarded slumber of the unwritten past. Scientists had assumed this marked the juncture at which oysters evolved, and as they cast about for a suitable ancestor, they decided upon scallops: both oysters and scallops have similar, non-pearly shells.

But perhaps the little Italian oyster told a whole new story. To investigate, Dr. Carter participated in a blatant case of disturbing the peace of the deceased. He took his Italian bivalve, sharpened his knife, and embarked on a long-delayed autopsy.

He dissected the defenseless fossil into impossibly tiny 150 micrometer slices. He examined each slice carefully under a microscope, then enlarged them on plastic drafting paper. Then, he had a “eureka” moment.

Today’s oysters are almost all calcite and non-pearly. But Dr. Carter’s ancient Triassic oyster had only a hint of calcite and it consisted mostly of mother-of-pearl. Could the mother-of-pearl oyster indicate that oysters evolved from “pearl oysters”, rather than from scallops?

credit: Mejs Hasan

Mementos from a long career

It was time to see if DNA could confirm the hint provided by the fossil record, a task given to Dr. Carter’s student, Kateri Hoekstra. She performed one of the first DNA analyses of living bivalves ever to focus on their evolutionary relationships. Just as the fossil record predicted, the DNA confirmed that the oyster from the Italian mountains, dug up after its rest of 221 million years, was a closer relative of pearl oysters than scallops.

Dr. Carter sent a letter to many natural history museums in Italy, asking them to find more of the mother-of-pearl oyster. But no one ever did. Still, Dr. Carter had fine pictures and drawings of the single known fossil. People started citing the fossil as UNC-13497b.

Such a clunky name would never do for the only mother-of-pearl oyster in the world, even if it did honor our great university. Dr. Carter finally christened it Nacrolopha carolae: Nacrolopha after the nacre (mother-of-pearl) in the Lopha-like oyster, and carolae after his wife, Carol Elizabeth.

This is the sweet side of invertebrate paleontology: a fine day in the Italian mountains, mother-of-pearl oysters, and suffusing the faint echoes of history with the name of your loved one.

But not everyone wants to give fossils their due attention.

The fossil record isn’t always perfect. For example, jellyfish rarely even leave fossils. For snails, the fossil record is misleading due to convergent evolution. The same features evolved in so many different snails that it’s hard to put things in order. You see the same shapes come up again and again.

As a result, many biologists have decided to send the fossil record packing. Since it doesn’t illuminate relationships for all groups of species, the idea that it might provide clues for a few is uncharted territory.

On the other side of the line-up, you have a handful of scientists, Dr. Carter, his former students, and his research colleagues among them. They are trying to convince the biologists that for some groups of species – especially bivalves – the fossil record is actually crucial.

It’s an uphill battle because, as Dr. Carter explains, the biologists have all the money. They are awash with government funds through the “Tree of Life” project that puts primary emphasis on DNA linkages between species.

credits: Mejs Hasan

Dr. Carter working in his lab.

Dr. Carter recognizes that DNA is a necessary tool. After all, it was Kateri’s DNA analysis that confirmed the origination of Nacrolopha carolae and modern oysters from pearl oysters. But it’s not the whole story. For example, DNA tells us that our closest relatives are the chimps. But that does not mean we evolved from them, or them from us! Fossils are the missing key that can shed light on the extinct creatures who filled in the evolutionary gaps.

Dr. Carter, along with David Campbell, his former student, now a professor at Gardner-Webb University, published a paper where they described how DNA and the fossil record can be used in symphony. Unfortunately, as Dr. Carter explains, “lots of people thought it was baloney.”

That reception is not stopping Dr. Carter. He and David Campbell are trying to publish a series of papers with examples of how DNA can give faulty evidence that the fossil record can correct. As Dr. Carter says, it will be interesting to see what the opposition says at that point.

Opposition aside, there’s one set of fossils that dazzles everyone – those of dinosaurs. Dr. Carter’s one foray into reptilian fossils happened by accident. Two of his students were studying a Durham quarry in 1994, when they came across the ankle bones of “a weird new guy”. It was the same unfortunate creature that, having filled his stomach with four prey, sank into a mudhole of ancient Jordan Lake and drowned just at its very moment of triumph. Digging it up, Dr. Carter and his students found hundreds of bones. Once cleaned and reassembled, it turned out to be a reptile shaped very much like a dinosaur, but not quite.

Dinosaurs roamed about on tiptoe, but this reptile’s foot walked on both toes and heels, like humans do. It was the best-preserved skeleton of this group of reptiles ever found. Dr. Carter toured museums in Europe and the US to make sure the reptile had not been named before.

Just thereafter, Karin Peyer walked into Dr. Carter’s office. She had an undergraduate degree in paleontology, a husband starting graduate school at UNC, and time on her hands. She asked: do you need any help?

“Boy, did you come at the right time!” Dr. Carter greeted her. Karin worked with Dr. Carter and experts from the Smithsonian Institution to formally describe and name the find.

They called it Postosuchus alisonae – alisonae a tribute to a friend of Dr. Carter’s who was dying of cancer at that time.

*******************

It was December 2015. In Dr. Carter’s large dim lab, filmy sheets of plastic drafting paper were ruffling in a soft breeze from the open window looking out on a hillside over Columbia Street. Sickle-shaped knives were stacked here and there, beside replicas of treasure from King Tut’s tomb. In between sectioning and sketching an ancient bivalve called Modiolopsis, Dr. Carter was packing.

credits: Mejs Hasan

Dr. Carter at his retirement party

He was retiring after 39 years. In practice, that merely means that Dr. Carter can now avoid going to faculty meetings. Otherwise, he can still serve on graduate student committees; he is coordinating the revisions of bivalves in the Treatise of Invertebrate Paleontology. He still has fossils to section and examine, and biologists to convince of the worth of the fossil record.

The only difference is, when Dr. Carter began his professorial work, it was just him and his wife, a young daughter and a baby boy. Now his daughter is 45 years old and his son is 39, and they both have their own families that Dr. Carter will be spending a lot of time with. It’s amazing what changes four decades can bring. But perhaps it’s easier to be philosophical and surrender to what’s ahead when you hold in your hands an oyster that lived 221 million years ago.

Peer edited by Lindsay Walton and Alison Earley.
