Do Blood Sugar Levels Affect the Development of Sleeping Sickness?

You have probably never met anyone suffering from sleeping sickness, a potentially fatal condition. This is because the disease, also called African Trypanosomiasis, is only present in certain regions of sub-Saharan Africa. While the number of human cases dropped to fewer than 3,000 in 2015, Trypanosoma parasites also cause disease in cattle, greatly affecting economic development in these rural areas.

Sleeping sickness is transmitted by tsetse flies that carry the parasite Trypanosoma brucei. A recent study published by scientists from Clemson University aimed to better understand the switch between different life stages of T. brucei. While inside the fly, the parasite grows rapidly. Following a fly bite, many T. brucei cells are transferred to the bloodstream of a mammalian host. There, the parasite remains ‘dormant’ and no longer replicates. The research group, led by Dr. James C. Morris, was interested in characterizing the mechanisms T. brucei uses to decide when to grow and when to remain dormant.

https://commons.wikimedia.org/wiki/File:OSC_Microbio_05_01_tryplife.jpg

Life cycle of T. brucei

Because the parasite lives in two very distinct hosts, flies and mammals, it must be able to adapt and use cues from its environment to ensure proper development and survival. Trypanosomes use the sugar glucose as a critical source of carbon – one of the building blocks for biological molecules. While glucose levels are quite high in mammalian blood, they drop rapidly inside the tsetse fly after a blood meal. This prompted Dr. Morris’ lab to investigate glucose as a possible signal that controls the switch between the form of T. brucei found in flies (dividing) and in mammals (non-dividing).

https://en.wikipedia.org/wiki/File:Alpha-D-Glucopyranose.svg

Structure of glucose

Interestingly, Dr. Morris and his group found that if they grew T. brucei in laboratory media without any glucose, the parasite was able to survive, but it changed into a form adapted for survival in the fly. When this fly-adapted form was injected into mammals (mice), it was rapidly cleared by the immune system. This suggested that to avoid immune clearance, the parasite must sense its environment, including glucose levels, and change into a form infectious to mammals prior to or during transmission. Therefore, this sugar-induced switch could potentially be exploited for development of new therapeutics, which would mimic glucose depletion and should lead to improved clearance of the parasite by the immune system.

It is currently unclear how the parasite senses changes in sugar levels. There are several possibilities, including a glucose-responsive receptor on the cell surface and the involvement of glucose metabolism. The authors found that a glucose-resembling molecule, which could not be metabolized by the parasite, elicited similar results to glucose itself. Because the analog triggered the response without being metabolized, this points toward a glucose-responsive receptor rather than metabolism. Nevertheless, further study is needed to establish the precise mechanism.

While sleeping sickness is fatal if left untreated, the last major epidemic ended in the late 1990s, and the World Health Organization aims to eliminate sleeping sickness as a public health threat by 2020. Even so, this study could inform not only the development of vaccines or treatments for humans, but also of protective agents for the cattle still often affected by African Trypanosomiasis.

Peer edited by Joanna Warren and Jack Sundberg.


Science Fail Monday: How a dead salmon taught us about statistics

Any scientist knows the importance of a good negative control. A negative control in an experiment is a group of samples or subjects in which no response is expected to an experimental treatment. The experimental group can then be compared to the control group. Such negative controls are a gold standard in science and are supposed to provide confidence in experimental results. However, occasionally, a negative control gives unexpected and hilarious results worthy of an Ig-Nobel Prize, the highest honor for scientists who publish the silliest research. Such was the case in an experiment involving fMRI, human emotions, and an Atlantic salmon.

https://commons.wikimedia.org/wiki/File:FMRI_scan_during_working_memory_tasks.jpg

An example of an fMRI scan in a human. The red spots have higher brain activity when subjects are performing a memory task.

fMRI stands for functional magnetic resonance imaging. If you’ve ever had a knee injury or a concussion, you have likely experienced a normal MRI scan, which uses radio waves and a magnet to take a structural picture of the organ of interest. The “functional” in fMRI means that researchers can use MRI images to measure brain activity and take a snapshot of changes over time. Inside the scanner’s strong magnetic field, the hydrogen atoms in the water molecules of the blood line up in the same direction, like compass needles next to a refrigerator magnet. A brief pulse of radio waves knocks them out of alignment, and as they relax back, they release a signal. This signal changes based on how much oxygen is in the blood, so the end result is a picture of the brain with information about which regions have more oxygenated blood. Regions needing more oxygen are generally assumed to be more active. Researchers can even have study participants perform a task during an fMRI scan, such as viewing particular images or listening to music, and use the fMRI data to determine which areas of the brain are active during the task. These types of studies can tell us a lot about which brain regions are involved in everything from social situations to processing fear.

In the Ig-Nobel Prize-worthy experiment, researchers wanted to use fMRI to determine which parts of the brain were active in response to seeing human faces displaying different emotions. However, they needed a negative control for their human subjects just to make sure that any brain activity they saw in response to the faces wasn’t just due to chance. The ideal candidate for such a negative control? A four-pound Atlantic salmon, purchased by one of the researchers at the local fish market.

https://commons.wikimedia.org/wiki/File:Salmo_salar-Atlantic_Salmon-Atlanterhavsparken_Norway_(cropped).JPG

The authors of the Ig-Nobel Prize study used an Atlantic salmon like this one as their negative control.

The researchers put the dead salmon in their fMRI scanner and, for the sake of science, asked it what emotions it thought the humans were displaying in pictures flashed up on the screen in the scanner. The authors do not comment on the salmon’s responses, but it can be assumed that the salmon was not a model experimental participant and did not comply with the study directions. Expecting to see nothing, the authors analyzed the fMRI signal in the salmon’s brain before and after the salmon “saw” the photos of the faces. Imagine the shock in the room when a few spots in the salmon’s itty-bitty brain lit up like a Christmas tree, suggesting that it was thinking about the faces it saw. Duh duh duuuuuhh….zombie salmon?

Obviously, the salmon was not alive, nor was it thinking about the emotional state of humans. Luckily for the field of fMRI, instead of publishing a paper telling everyone they should use dead salmon to study human responses to the emotions of others, the authors of this study delved deeper into why they were seeing “brain activity” in this very dead fish. In their original data, the researchers had failed to correct for multiple comparisons: because an fMRI analysis runs a separate statistical test on each of many thousands of tiny brain regions (voxels), some of those tests will come out “significant” purely by chance. The authors applied the appropriate statistical corrections to their data, and voilà, no more zombie salmon. And then, because scientists have a funny sense of humor, they wrote up and published these results as a lesson to all on the importance of having a good statistician.
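To see how easily this happens, here is a minimal simulation sketch – not the authors’ actual analysis; the voxel count, scan count, and thresholds are made up for illustration – showing that running thousands of uncorrected tests on pure noise reliably produces “active” spots, and that a simple Bonferroni correction removes them:

```python
# A minimal sketch: uncorrected voxel-wise tests on pure noise produce
# "activity"; a Bonferroni correction removes it. All numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_voxels = 8000          # pretend each voxel in the salmon's brain is one test
n_scans = 20             # "before" and "after" measurements per voxel
noise_before = rng.normal(size=(n_voxels, n_scans))
noise_after = rng.normal(size=(n_voxels, n_scans))    # same distribution: no real signal

# One t-test per voxel comparing before vs. after
t_stats, p_vals = stats.ttest_ind(noise_after, noise_before, axis=1)

alpha = 0.05
uncorrected_hits = np.sum(p_vals < alpha)              # expect ~5% false positives
bonferroni_hits = np.sum(p_vals < alpha / n_voxels)    # corrected threshold

print(f"'Active' voxels, uncorrected: {uncorrected_hits}")
print(f"'Active' voxels, Bonferroni-corrected: {bonferroni_hits}")
```

Running this prints a few hundred falsely “active” voxels before correction and essentially none afterward – the dead-salmon effect in miniature.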

Peer edited by Claire Gyorke.


Forest Fire Flames and Smoke: Double the Trouble

As of Friday, November 16, 2018, California was home to the three most polluted cities in the world. These three cities – San Francisco, Stockton, and Sacramento – topped the world’s chart of polluted cities as a result of the infiltrating smoke produced by the nearby, devastating Camp Fire. To date, the Camp Fire is the deadliest fire in California history and has burned over 170,000 acres of land, roughly the size of New York City. Over its rampage it has destroyed immense areas of California’s wildlands and burned down over 17,000 man-made structures. Unfortunately, the Camp Fire’s destruction isn’t limited to the damage inflicted by its flames. The mass burning of a variety of natural and man-made materials has produced smoke containing a myriad of small particles that can be hazardous when inhaled. Thus, the smoke from the Camp Fire, which has spread more than 150 miles from the fire and polluted California’s air, is a matter of great health importance.

https://en.wikipedia.org/wiki/List_of_California_wildfires

Rim Fire, Yosemite National Park, 2013

The link between air pollution and adverse health effects has long been established, whether the pollution comes from forest fire smoke or from other sources such as diesel engines and industrial factories. Specifically, exposure to these air pollutants is linked with respiratory effects such as bronchitis and increased asthma attacks, as well as cardiovascular effects such as elevated blood pressure, atherosclerosis, and, for more susceptible individuals, heart attack or stroke.

https://commons.wikimedia.org/wiki/File:2309_The_Respiratory_Zone.jpg

Alveolar sac lined with capillary bed. Anatomical view of Air Blood Barrier (ABB).

These adverse health effects are largely driven by the small solid and liquid particles – roughly 1–10 µm in diameter, collectively called particulate matter (PM) – contained in smoke or emitted from a specific source. Because of their small size, these particles are able to travel throughout the lung after inhalation and negatively affect both the conducting and gas exchange regions of the lung. Once the particles have “landed” in a region of the lung, they can persist for days and begin to elicit a pro-inflammatory and oxidative stress response, which can exacerbate asthma symptoms and damage integral components of the lung, leading to various respiratory effects.

How these particles cause adverse cardiovascular effects is an active area of research. There are three proposed mechanisms: 1) particles smaller than 2.5 µm in diameter travel through the lung, pass the air-blood barrier (ABB), and enter the bloodstream, leading to direct vascular damage; 2) particles that reach the ABB, but do not pass it, can indirectly induce oxidative stress in the underlying vasculature; and 3) particles can interfere with the autonomic and central nervous systems, leading to irregular signaling and irregular heart rate.

Overall, while some of the mechanisms behind the variety of adverse health effects induced by PM exposure are still unknown, it is clear that PM exposure can be detrimental. When forest fires are near, it is extremely important to listen to local officials’ recommendations for staying safe – even if the flames are 100+ miles away! As the incidence of forest fires continues to rise, likely due to factors such as climate change, we need to be mindful of both the destruction the flames create and the hazardous air the fires produce.

Peer Edited by Rita Meganck and Jacob Pawlik.


It helps to be flexible: disordered proteins in biological stress response

Imagine you are working on a project with a large group of people, all with different personalities and responsibilities. Your group was just informed that something important to the progress of the project went terribly wrong. Some people in the group start to panic, which causes other people to panic. There is no defined leader for this group project but you tend to take the lead during stressful times, so you quickly step up to the plate. You know that to get this project back on track, you first need to calm everyone down so that they can refocus on the tasks at hand.

Now try to imagine that instead of people, you and your group are large molecules composed of long chains of amino acids, a.k.a. proteins, and the group project is maintaining the life of your cell.

Much like a dollar bill must undergo many intricate folds to become an origami elephant, chains of amino acids must go through several steps to form a well-folded protein. Top image created by Chris Pielak

Proteins make up many important biological structures (such as hair, nails, and connective tissues) and carry out most chemical reactions in cells (such as converting food into energy or light into sight). For a long time it was thought  that proteins only function once they have “folded” into a highly-ordered shape, similar to how a flat sheet of paper folds into a smile-inducing origami elephant. The unique shape of a protein is dictated by chemical interactions between the amino acids that make up the protein as well as interactions between the protein and water. When drastic changes take place in the environment of the protein (i.e. during cellular stresses such as extreme heat, dehydration, or acidification), these important interactions are disrupted, which can cause proteins that are usually well-folded to temporarily unfold and become inactive. If such a protein remains unfolded for too long, temporary inactivity can become permanent as the protein becomes tangled up with other unfolded proteins in a process known as irreversible aggregation. Under extremely stressful conditions, a significant portion of the proteins in a cell can unfold and irreversibly aggregate, ultimately leading to cell death. So let’s keep all our proteins nicely folded, shall we?

Not so fast! In the past twenty years, the idea that a protein must be folded to function has been challenged by an up-and-coming group of proteins known as intrinsically disordered proteins (IDPs). As the name suggests, IDPs are defined by a distinct lack of a stable, well-folded structure, much like a single strand of spaghetti in a pot of water.

This cat knows spaghetti makes you feel better when you’re stressed out.

Interestingly, organisms across all domains of life have been shown to use IDPs to deal with environmental stresses. Many of these stress-response IDPs are “conditionally disordered”, meaning they can transition into or out of a more ordered state in response to an environmental cue. Given that IDPs are used to being in an unfolded-like state, it kind of makes sense that they can “survive” many of the environmental stresses that typically well-folded proteins can’t. But besides persisting through stressful times, how do IDPs help cells survive extreme environmental stresses? One emerging hypothesis is that stress-response IDPs work by morphing into a shape that can stick to partially unfolded proteins before irreversible aggregation can occur, thus making it possible for stress-sensitive proteins to refold after the stress goes away. In support of this idea, recent studies showed that the bacterial acid-sensing protein HdeA becomes disordered in acidic conditions, and it is in this disordered state that it can stick to partially unfolded proteins and prevent aggregation. Similar modes of action have been proposed for IDPs involved in heat- and dehydration-response as well.

So, just like you in the hypothetical scenario described at the beginning of this post, some IDPs keep the group project (the life of the cell) on track by pulling aside the easily stressed out group members (highly-ordered, stress-sensitive proteins) and calming them down a bit so that once the stress has subsided, everyone in the group can refold and get back to work.

Peer edited by Giehae Choi.


A Southwest Turn

Hurricanes are well-known for how unpredictable their paths can be. As wild as they can get, we can usually count on two things for storms that live primarily in the ocean in the Northern hemisphere: their general hook shape, and sharp bends.

source: https://en.wikipedia.org/wiki/File:2018_Atlantic_hurricane_season_summary_map.png

2018 Atlantic Hurricane Season

As has been frequently reported in the news, Hurricane Florence took a particularly strange turn when it headed southwest. To see how that came about, we can look at the Cauchy momentum equation, one of the Navier-Stokes equations that are central to fluid dynamics:

The Cauchy momentum equation
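Since the equation appears in the original post only as an image, here is one standard textbook form (the exact notation in the original figure may differ slightly):

\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} \;=\; -\frac{1}{\rho}\nabla p \;+\; \frac{1}{\rho}\nabla\cdot\boldsymbol{\tau} \;+\; \mathbf{f}
\]

Here u is the wind velocity, ρ the air density, p the pressure, τ the viscous stress, and f the body force per unit mass – which, in the Earth’s rotating frame, lumps together gravity and the Coriolis acceleration −2Ω×u.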

The equation itself can take several forms, depending on what is most useful for a particular application. For our purposes, however, the form above is sufficient, since it contains the relevant terms. In determining Florence’s path, two of the three terms on the right side of the equal sign are important.

The first is the Coriolis force, usually lumped in with other forces. This is the force most famous for causing the spiral pattern of the storms, but, at the largest scale, is also why we have the trade winds and westerlies. This gives us the hook.

The second term is the change in pressure over space. The negative sign simply means that air and liquid prefer to move from areas of high pressure to areas of low pressure. If the pressure is high enough, as it was over New England and the Maritimes during Florence’s landfall, the path can acquire a sharp bend.

Put these two competing terms together, and we get Florence’s odd path.

map source: https://www.flickr.com/photos/internetarchivebookimages/19794489843/

Pressure field during Florence’s landfall.

Peer edited by Gabby Budziewski.


Improving Science Literacy: How to Read Scientific Papers

 

Science literacy is the ability to read familiar English words in a new language – the language of science.  One measure of science literacy is how conversant you are in a given field, and each sub-field in science has its own language.  For example, being literate in cancer biology does not mean you are literate in neuroscience (I am highly literate in only one of these). Luckily, many terms from one field of science help us understand other areas of research.  It’s all about your focus:

Resolution vs. Field of View in Biology

Here’s an analogy from one of the most commonly used tools in biology: microscopy. Scientists use microscopes to image tissues, cells, or even pieces of cells.  The smallest distance between two objects that a microscope can distinguish, or resolve, is called its resolution.  In science, it’s often very helpful to look at things under a high-resolution microscope, which lets us see super small structures.

Left: a high-resolution capture lets us see structural detail. Right: a low-resolution image with a large field of view lets us see the big picture.

 For example, a super high-resolution technique, electron microscopy, helps us see the structure of mitochondria, the “powerhouse of the cell,” quite easily.

But how do we know mitochondria are the powerhouse of the cell?  Just because we can see something as small as a mitochondrion under a microscope doesn’t mean we know what it is or does, nor does it help us understand how the cell works – a “missing the forest for the trees” kind of problem. To understand how one piece fits into the bigger puzzle, we first have to zoom out to see the whole cell.  This expands the field of view. Having a large field of view helps us understand complex systems like cell biology, the brain, or even species within an ecosystem.

Review Articles vs. Primary Literature vs. Textbooks

Google and Wikipedia are your friends.  Most scientific papers will drop key terms without much explanation.  It’s okay if you have no clue what those words mean – a quick search will fix that!  Building your own key terms glossary for science helps you get a handle on something as large as an entire field of research.

Review articles summarize the recent key findings for a question within a particular research field.  These articles are particularly useful if you need a basic understanding of something with a large field of view. Once you understand the basics of the system, you can delve deeper. Make sure to choose an article that is fairly recent if you’re reading anything life-sciences related.

Primary literature is science straight from the horse’s mouth. In other words, the same lab that performs the research and experiments writes about them in great detail.  This gives the highest resolution, but a limited field of view. Scientists have only so many hours in the day, and only so many dollars of funding. Neuroscientists would love to ask endless questions about the brain, for example, but alas, we are few and mortal. As such, primary literature is typically narrow and targeted, but delivers a cutting-edge understanding of the science within.  Most people go to PubMed to find these articles.

Textbooks are a great resource for the highly curious, but they are dense, expensive, and hard to get if you can’t access a library or if they are in high demand.  They also take far longer to write, publish, and print than a review article or primary research publication, meaning the information could be obsolete by the time you read it.

Finding research articles can often be the hardest part of all of this.  Some tips for finding scientific papers:

  1. If you attend a research institution, your library probably has access to most journals.
  2. Researchers are allowed to share their publications with those who request them: I recommend ResearchGate for contacting them on this subject, but you can always reach out to them directly (many labs and scientists are on Twitter!).
  3. Google Scholar is your next best bet, or simply searching [Article title] + .pdf using your search engine.
  4. For finding new articles of interest and organizing old ones, I recommend Meta, Mendeley, or Endnote.  For health-related questions, Examine.com has a great database on various nutritional supplements.

*All images sourced from Pixabay.com, and modified by Connor Wander.

Peer edited by Justine Grabiec.


 

Leaves Are Falling From the Trees

When the days suddenly seem shorter and the nights are colder, you know fall has arrived. You especially know it’s here when the leaves gain a reddish hue and soon after fall from the trees by the truckload. In fact, the name ‘fall’ has been used in America since the 17th century, shortened from the English phrase “the fall of the leaf.” Fall technically begins with the autumnal equinox in late September, when the hours of daylight and night are the same. At this point, perennial trees, or trees that grow year after year, have two tasks: 1) to stop growing leaves and instead prepare flower buds for next spring, and 2) to protect those buds from winter’s cold. But first, how do trees sense changes in daylight in the first place?

Hybrid aspen trees (which are often used in plant science to study genetics and growth regulation) have light- and color-sensing detectors that respond to levels of light. One response is the activation of a protein called CONSTANS, which controls growth genes but gets broken down in the absence of light. As the days grow shorter and there is more darkness, CONSTANS becomes less stable and leaf growth slows. The slowing of leaf growth marks the beginning of a process in which the plant uses much of its remaining energy to form a bud, or an immature flower. This bud will bloom the following spring, but it needs protection from the cold and often dry climate brought on by winter. A multi-layered, extremely hardy outer shell therefore forms to protect the bud from the cold and from drying out. The process of forming a bud is highly variable between plant species, but typically lasts 6-10 weeks. After the bud and its protective shell have formed, almost all plants in the Northern hemisphere enter the ‘dormancy’ phase to survive the winter.

Created by Clare Gyorke

Transition into Dormancy

To protect against drying out and cold temperatures, it is important that a bud receives no growth signals during dormancy. As buds are formed in trees, a plant hormone called abscisic acid, or ABA, encourages the production of callose, a starchy sugar that blocks signaling between plant cells. During dormancy, ABA maintains callose production, but becomes less effective as daylight hours increase. Even after ABA loses its effect (once the days reach a certain number of daylight hours), buds are still protected by callose, which is broken down slowly by prolonged low temperatures. Callose protects the plant by preventing it from growing during random hot spells in January and February, when daylight hours are increasing but temperatures will likely drop again. When there have been enough hours of cold (~30-55 °F), all the callose will have been broken down and the bud will slowly start receiving signals to grow. The accumulation of growth signals and the right temperature (usually above 60°F) will tell the tree to open its buds and start growing. One potential problem with this system is that if a frost comes after the buds have opened, they might be damaged by the cold and won’t produce flowers or fruit. However, as temperatures warm and the days get longer, the risk of frost decreases.
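For readers who like to think in code, here is a toy sketch of the logic described above – entirely my own simplification, with invented thresholds and hour counts, not a model from the article – in which chilling hours must accumulate (breaking down callose) before warm hours can push the bud toward opening:

```python
# A toy model of the bud's "decision": chilling hours break down callose first,
# then warm hours accumulate toward bud break. All numbers are hypothetical.
CHILL_MIN_F, CHILL_MAX_F = 30, 55    # temperatures that count toward chilling
GROWTH_MIN_F = 60                    # warmth that counts toward bud break
CHILL_HOURS_NEEDED = 800             # hypothetical callose-breakdown requirement
WARM_HOURS_NEEDED = 150              # hypothetical growth-signal requirement

def bud_breaks(hourly_temps_f):
    """Return True if the bud opens by the end of the temperature record."""
    chill_hours = 0
    warm_hours = 0
    for temp in hourly_temps_f:
        if chill_hours < CHILL_HOURS_NEEDED:
            # Callose is still present: only cold counts; a January hot spell does nothing.
            if CHILL_MIN_F <= temp <= CHILL_MAX_F:
                chill_hours += 1
        elif temp >= GROWTH_MIN_F:
            # Callose is gone: warm hours now accumulate toward bud break.
            warm_hours += 1
            if warm_hours >= WARM_HOURS_NEEDED:
                return True
    return False

# A midwinter heat wave alone does nothing, because chilling is not yet complete.
print(bud_breaks([70] * 500))                    # False
# A long cold stretch followed by sustained warmth opens the bud.
print(bud_breaks([40] * 900 + [65] * 200))       # True
```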

The spring equinox marks the shift into longer days and often coincides with trees exiting dormancy in an explosion of growth from the buds they formed the previous fall. These buds will rapidly cover the tree with beautiful green leaves, highlighting the beginning of the transition to the long, languid days of summer.

Peer Edited by Keean Braceros.


A Scientist’s View of Animal Research

One of the most controversial aspects of biomedical research is the use of animals to benefit humans. Scientists use animals to test new treatments for human diseases and to understand human biology. Many groups have protested the use of animals for research. The most well-known and influential of these groups has been People for the Ethical Treatment of Animals (PETA). These groups have successfully raised concerns about using animals for research, and they have brought about changes such as closing down some research labs and decreasing the number of airlines that will transport animals destined for research. People perceive the benefits and detriments of these actions differently depending on whether they support or condemn animal use in research.

I am not writing this article from an entirely unbiased position because I work with animals to understand basic human biology and to discover treatments for human diseases. Since many articles about the negative aspects of animal research have been published, I intend to provide a more positive perspective on animal research from a scientist’s point of view.

The goal of using animals for research is to save human lives and improve human health.  Scientists do not use animals because it is fun, and they do not use animals when there are better alternatives (e.g. using humans, cell culture, or computer models). Scientists use animals for research because animal research can provide information to eliminate human diseases, improve health, and ultimately save human lives. Animal research has saved millions of human lives and has improved the health of billions more. Animals have played an important role in developing vaccines and cures for deadly diseases such as polio, smallpox, and hepatitis C. Animal research has also led to treatments for Type 1 diabetes, malaria, cystic fibrosis, and thousands of other diseases.

Animal research improves animal health and finds cures for animal diseases. Animals contract many of the same diseases as humans do, such as heart failure and diabetes. Research in animals has saved the lives of millions of pets by providing vaccines, pacemakers, artificial joints, and chemotherapy for pets. Animal research has also improved our understanding of endangered species so that we can prevent their extinction.

Research in humans has limitations that can be overcome by using animals. Scientists and animal activists may ask why we cannot conduct all research in humans so that we can avoid the ethical dilemma of animal research. First, many studies are conducted in humans (over 100,000 people participate in clinical trials every year, and this number does not include the thousands more people involved in studies that are not considered clinical trials). However, many studies are not feasible to perform in humans. For example, studies involving diets or food components require subjects to be very compliant (follow the diet exactly) so that scientists can definitively answer their research questions (such as whether a vitamin or mineral is necessary for health). However, people are not usually very compliant with their diets, leading to confusing data and sometimes wrong answers to research questions. In animal studies, diets can be carefully controlled, which ensures that the data obtained are accurate. This allows scientists to answer very specific research questions about diet effects. Using animals for research also optimizes research funds by ensuring that research does not need to be repeated due to non-compliant human research subjects. Furthermore, research in humans is substantially more expensive than animal research, due to compensation for the research subjects and the extra costs of research monitoring. Finally, humans have much longer lifespans than most animals, meaning that a single study could take 1-50 times longer to complete in humans than in animals. This both raises research costs and increases the time required to make scientific discoveries.

Scientists prioritize animal health and minimize animal pain. When alternative methods of study, such as those in humans, are not an option, scientists use animals. Scientists undergo substantial training so that they know how to conduct research with animals in an ethical manner. Furthermore, before any animal research takes place, scientists must get approval for their planned study from the Institutional Animal Care and Use Committee (IACUC). This committee includes at least one veterinarian, who ensures that the animals in the study are healthy and well. The committee also includes at least one person from the community who is not associated with the research institution. This ensures that animals used in experiments receive the maximum amount of care without interfering with the experiment. Every scientist must consider three words before they start working with animals: Replacement, Reduction, and Refinement. First, can the scientist replace animals with some other model (for example, cells isolated from humans or animals, or computer models)? Second, can the scientist reduce the number of animals so that as few as possible are harmed? And third, can the scientist refine their experiments so that animals suffer as little as possible? All three questions must be addressed before research can begin.

While scientists may enjoy working with animals, they do not like causing pain for animals. Researchers ensure that the animals in their care are healthy and well for the research study. Many scientists are animal activists and whole-heartedly care for the animals they work with.

Science has a small impact on animals in comparison to animals harmed by other factors. Scientists in the United States used 12-27 million animals in 2010. Although this sounds like a large number, 99% of these animals are rats, mice, birds, or fish. People in the U.S. consume more than 340 chickens for every 1 animal that is studied in a research facility. Furthermore, for every animal involved in research, another 14 animals are killed on roads.

A monument to the laboratory mouse in Novosibirsk, Russia

Scientists and those who benefit from the science appreciate what animal research has accomplished. Scientists appreciate all that animals have done to benefit scientific advances and human health. A city in Russia raised enough money to erect a statue paying tribute to all of the sacrifices that animals, namely the laboratory mouse, have made to save human lives (see picture). This statue reflects the gratitude that scientists have for their laboratory animals, thanking them for what they have done to save millions of human lives.

Peer-reviewed by Caitlyn Molloy and Elise Hickman.


What a 1.5°C increase can bring

Did you find the past few years’ climate peculiar, with extremely hot days or intense rainfall occurring more often? Have you ever heard of shrinking ice sheets or seen the famous photo of a polar bear clinging to an iceberg? What about the massive bleaching of coral reefs? Unfortunately, you’ve spotted evidence of climate change.

Temperature trends by NASA

By 2017, human-induced global warming had resulted in a 1°C increase in global mean surface temperature compared to the period before large-scale industrial activity (1850-1900, hereafter referred to as pre-industrial levels). Further, a recent report released by the Intergovernmental Panel on Climate Change (IPCC) says the temperature increase could reach 1.5°C (2.7°F) above pre-industrial levels as early as 2030 and as late as 2052. We are already noticing the effects of 1°C of warming, and the extra 0.5°C on top of that is projected to amplify risks for all inhabitants of Earth.

The IPCC report explains that many regions will be impacted by global warming of 1.5°C above pre-industrial levels, with high-temperature days, heavy precipitation, and intense droughts occurring more often. Such changes will affect our lives in the foreseeable future, not only by causing the disturbances we notice nowadays, but by increasing food insecurity, water stress, the risk of vector-borne diseases (e.g., malaria and dengue fever), and heat-related deaths. Many ecosystems and the biodiversity within them are at risk too; some species will face extinction and some will lose their habitats due to sea-level rise, sea-ice loss in the Arctic, coral bleaching, and shifts in biomes.

So what is being done on the global scale to address the issues that come with climate change? A landmark global effort to fight climate change, the Paris Agreement, was adopted in Paris in 2015 under the United Nations Framework Convention on Climate Change (UNFCCC). The 195 participating countries agreed to focus their efforts on limiting the global temperature rise to well below 2°C above pre-industrial levels. Long-term climate change adaptation goals were established through the Paris Agreement, to be met by participating countries through nationally determined contributions: each country is requested to outline and communicate its country-level efforts to reduce greenhouse gas emissions and adapt to climate change. To facilitate reductions in greenhouse gas emissions through these nationally determined contributions, the Talanoa Dialogue was launched in January 2018, with the next meeting taking place in December 2018. More recently, in October 2018, the IPCC released its special report assessing global warming of 1.5°C above pre-industrial levels in South Korea.

This special report by the IPCC covers the impacts of warming of 1.5°C above pre-industrial levels and assessments of mitigation and adaptation strategies. To halt global warming, the amount of CO2 emitted into the atmosphere must equal the amount that is removed from the atmosphere (net zero). The report warns that net zero CO2 emissions must be reached around 2050 to limit global warming to 1.5°C above pre-industrial levels. The good news is that we could slow the progression of climate change using currently available means, by changing our individual and industrial behaviors to reduce energy consumption and by switching to more energy-efficient fuel options. Additionally, energy consumers can implement carbon capture options, energy providers must switch from coal-based to renewable energy sources, and the agricultural sector can shift to producing non-CO2-emitting crops and limit expansion into carbon-rich ecosystems such as tropical forests. Luckily, some of these strategies are already underway in many countries, but the report warns us that more drastic measures need to be implemented in order to keep the temperature rise below 1.5°C. Given that the U.S., the world’s second largest greenhouse gas emitter, has announced its intention to withdraw from the Paris Agreement, it is particularly important to continue to raise awareness and to actively pursue adaptation and mitigation strategies in response to climate change across the nation.

Want to learn more and get involved? Visit the IPCC website to read recent reports and learn about activities that help reduce climate change, and follow their social media accounts to stay up to date on the global effort to fight climate change (Facebook, Twitter, Instagram, or LinkedIn)!

Peer edited by Eliza Thulson.  


Superior Syntheses: Sustainable Routes to Life-Saving Drugs

While HIV treatment has come a long way over the past few decades, there is still a discrepancy between the total number of HIV patients and the number with access to life-saving antiretroviral therapies (ART). The inability to access medications is often directly linked to their cost, demonstrating the need for ways to make these medicines cheaper. In October 2018, Dr. B. Frank Gupton and Dr. Tyler McQuade of Virginia Commonwealth University were awarded a 2018 Green Chemistry Challenge Award for their innovative work on the affordable synthesis of nevirapine, an essential component of some HIV combination drug therapies.

https://www.flickr.com/photos/blyth/1074446532

Nevirapine, a component of some HIV therapies.

For the past 22 years, the American Chemical Society (ACS), in partnership with the U.S. Environmental Protection Agency (EPA), has recognized scientists who have contributed to the development of processes that protect public health and the environment. Awardees have made significant contributions toward reducing the hazards linked to designing, manufacturing, and using chemicals. As of 2018, the prize-winning technologies have eliminated 826 million pounds of dangerous chemicals and solvents, enough to fill a train 47 miles long. The nominated technologies are judged on the level of science and innovation, the benefits to human health and the environment, and the impact of the discovery.

https://www.flickr.com/photos/37873897@N06/8277000022

Green Chemistry protects public health and the environment.

Gupton and McQuade were awarded the Green Chemistry Challenge Award for developing a sustainable and efficient synthesis of nevirapine. The chemists argue that the process used to produce a drug often remains unchanged over time and is not updated to reflect new innovations and technologies in the field of chemistry that could make syntheses easier, cheaper, and more environmentally friendly. Synthesizing a drug molecule is not unlike building a Lego tower; the tower starts with a single Lego brick and bricks are added one-by-one until it resembles a building. Researchers start with a simple chemical and add “chemical blocks” one-by-one until they arrive at the desired drug molecule.  Gupton and McQuade demonstrated that by employing state-of-the-art chemical methods, they could significantly decrease the cost of synthesizing nevirapine.

https://www.flickr.com/photos/billward/5818794375

Producing pharmaceutical molecules is like building a Lego house.

Before this discovery, there were two known routes toward the synthesis of nevirapine. Researchers used cost projections to determine which steps were the costliest. With this knowledge, they were able to improve the most expensive step of the synthesis by developing a new reaction that used cheap reagents (“chemical blocks”) and proceeded in high yield. A chemical yield is the amount of product obtained relative to the maximum amount theoretically possible from the starting material. The higher the yield, the more efficient the reaction. Reactions may have a poor yield because of side reactions that produce unexpected, undesired products (byproducts), which end up as impurities. Pharmaceutical companies often quantify chemical efficiency using the Process Mass Intensity (PMI), the mass of all materials used to produce 1 kg of product. Solvent, the medium in which the reaction takes place, is a big contributor to PMI because it is necessary for the reaction but not incorporated into the final product. Gupton and McQuade were able to decrease the amount of solvent used because their streamlined reactions produced fewer impurities, allowing them to recycle and reuse solvent. These improvements reduced the PMI to 11, compared with the industry-standard PMI of 46.
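As a back-of-the-envelope illustration of what that PMI drop means, here is a small sketch (the input masses below are hypothetical round numbers chosen only to give PMIs of roughly 46 and 11; they are not figures from the award-winning work):

```python
# Process Mass Intensity (PMI) = total mass of all materials used (reagents,
# solvents, everything else) divided by the mass of product obtained (kg/kg).
def process_mass_intensity(reagents_kg, solvent_kg, other_kg, product_kg):
    return (reagents_kg + solvent_kg + other_kg) / product_kg

# Hypothetical conventional route: solvent dominates the material input.
print(process_mass_intensity(reagents_kg=6, solvent_kg=38, other_kg=2, product_kg=1))  # 46.0

# Hypothetical streamlined route with recycled solvent and fewer inputs.
print(process_mass_intensity(reagents_kg=4, solvent_kg=6, other_kg=1, product_kg=1))   # 11.0
```

In other words, dropping the PMI from 46 to 11 means roughly a quarter as much material, most of the savings coming from solvent, is consumed for every kilogram of drug produced.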

https://commons.wikimedia.org/wiki/File:Nevirapine.svg

Molecular structure of nevirapine 

In addition to their synthesis of nevirapine, Gupton and McQuade also developed a series of core principles to improve drug access and affordability for all medications. The general principles include implementing novel and innovative chemical technologies, decreasing the total number of synthetic steps and solvent changes, and using cheap starting materials. Oftentimes, the pharmaceutical industry starts with very complex molecules in order to decrease the number of steps needed to reach the target molecule. Unfortunately, starting with complex “chemical blocks” is often the most expensive part of producing a medication; Gupton and McQuade believe that by starting with simpler chemicals, production costs can be significantly decreased. Virginia Commonwealth University recently established the Medicines for All Institute in collaboration with the Bill & Melinda Gates Foundation, and Gupton and McQuade hope that by employing these process development principles, they will be able to more efficiently and affordably synthesize many life-saving medications.

Peer edited by Dominika Trzilova and Connor Wander.
