Superior Syntheses: Sustainable Routes to Life-Saving Drugs

While HIV treatment has come a long way over the past few decades, there is still a gap between the total number of people living with HIV and those with access to life-saving antiretroviral therapy (ART). Lack of access is often directly linked to the cost of the medication, demonstrating the need for ways to make these medicines cheaper. In October 2018, Dr. B. Frank Gupton and Dr. Tyler McQuade of Virginia Commonwealth University were awarded a 2018 Green Chemistry Challenge Award for their innovative work on the affordable synthesis of nevirapine, an essential component of some HIV combination drug therapies.

https://www.flickr.com/photos/blyth/1074446532

Nevirapine, a component of some HIV therapies.

For the past 22 years, the American Chemical Society (ACS), in partnership with the U.S. Environmental Protection Agency (EPA), has honored scientists who have contributed to the development of processes that protect public health and the environment. Awardees have made significant contributions to reducing the hazards linked to designing, manufacturing, and using chemicals. As of 2018, the prize-winning technologies had eliminated 826 million pounds of dangerous chemicals and solvents, enough to fill a train 47 miles long. The nominated technologies are judged on the level of science and innovation, the benefits to human health and the environment, and the impact of the discovery.

https://www.flickr.com/photos/37873897@N06/8277000022

Green Chemistry protects public health and the environment.

Gupton and McQuade were awarded the Green Chemistry Challenge Award for the development of a sustainable and efficient synthesis of nevirapine. The chemists argue that the process used to produce a drug often remains unchanged over time, and is not updated to reflect new innovations and technologies in the field of chemistry that could make syntheses easier, cheaper, and more environmentally friendly. Synthesizing a drug molecule is not unlike building a Lego tower: the tower starts with a single Lego, and bricks are added one by one until it resembles a building. Researchers start with a simple chemical and add “chemical blocks” one by one until it becomes the desired drug molecule. Gupton and McQuade demonstrated that by employing state-of-the-art chemical methods, they could significantly decrease the cost of synthesizing nevirapine.

https://www.flickr.com/photos/billward/5818794375

Producing pharmaceutical molecules is like building a Lego house.

Before this discovery, there were two known routes for the synthesis of nevirapine. Researchers used cost projections to determine which steps were the costliest. With this knowledge, they were able to improve the most expensive step of the synthesis by developing a new reaction that used cheap reagents (“chemical blocks”) and proceeded in high yield. A chemical yield is the amount of product obtained relative to the amount of starting material used; the higher the yield, the more efficient the reaction. Reactions may have a poor yield because of alternative reactions that result in impurities, or unexpected, undesired products (byproducts). Pharmaceutical companies often quantify chemical efficiency using the Process Mass Intensity (PMI), the mass of all materials used to produce 1 kg of product. Solvent, the medium in which the reaction takes place, is a big contributor to PMI because it is necessary for the reaction but not incorporated into the final product. Gupton and McQuade were able to decrease the amount of solvent used because their streamlined reactions produced fewer impurities, allowing them to recycle and reuse solvent. These improvements reduced the PMI to 11, compared with the industry-standard PMI of 46.
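Both efficiency metrics described above are simple ratios. A minimal sketch in Python, using the PMI figures from the article and otherwise hypothetical masses for illustration:

```python
# Percent yield: product obtained relative to the theoretical maximum.
def percent_yield(actual_g: float, theoretical_g: float) -> float:
    return 100.0 * actual_g / theoretical_g

# Process Mass Intensity: total mass of all inputs (reagents, solvents,
# water, etc.) used per kilogram of final product. Lower is better.
def pmi(total_input_kg: float, product_kg: float) -> float:
    return total_input_kg / product_kg

# 46 kg of inputs per 1 kg of drug is the industry-standard PMI cited
# in the article; 11 kg per kg is the improved nevirapine route.
print(pmi(46.0, 1.0))        # 46.0
print(pmi(11.0, 1.0))        # 11.0
print(percent_yield(8.0, 10.0))  # 80.0 (hypothetical reaction)
```

Because solvent dominates the input mass, recycling it is the quickest way to shrink the numerator of the PMI ratio.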

https://commons.wikimedia.org/wiki/File:Nevirapine.svg

Molecular structure of nevirapine 

In addition to their synthesis of nevirapine, Gupton and McQuade also developed a series of core principles to improve drug access and affordability for all medications. The general principles include implementation of novel and innovative chemical technologies, a decrease in the total number of synthetic steps and solvent changes, and use of cheap starting materials. Oftentimes, the pharmaceutical industry starts with very complex molecules in order to decrease the number of steps needed to reach the target molecule. Unfortunately, these complex “chemical blocks” are often the most expensive part of producing a medication. By starting with simpler chemicals, they believe production costs can be significantly decreased. Virginia Commonwealth University recently established the Medicines for All Institute in collaboration with the Bill & Melinda Gates Foundation, and Gupton and McQuade hope that by employing these process development principles, they will be able to more efficiently and affordably synthesize many life-saving medications.

Peer edited by Dominika Trzilova and Connor Wander.

Follow us on social media and never miss an article:

 

 

Nobel in Chemistry for an Evolutionary Revolution

https://phys.org/news/2017-10-nobel-chemistry-prize-major-award.html

The Nobel Prize awarded to Dr. Arnold, Dr. Smith, and Dr. Winter

Evolution, the process of gradual changes to genetic information across generations over millions of years, proposed by Charles Darwin in the 19th century, is being revolutionized by modern science. Unexpectedly, the revolution is driven not by evolutionary biologists or ecologists, but by the methodologies of chemists and enzymologists.

Accumulation of mutations is a slow and random process. However, scientists have been able to harness the power of evolution to identify and select for specific mutations that improve the function of molecules of interest. This year’s Nobel Prize in Chemistry was awarded to Dr. Frances Arnold “for the directed evolution of enzymes” and to Dr. George Smith and Dr. Gregory Winter “for the phage display of peptides and antibodies.”

Dr. Arnold developed a methodology to enhance enzyme activity and give existing enzymes new functionality, a feat realized in an unprecedentedly short amount of time. She accomplished this by using error-prone PCR to introduce mutations into enzymes and then selecting, from the many variants produced, the ones with the most favorable activity. Repeating this cycle hundreds of times, enhancing the functionality of the enzyme each round, allowed her to develop cytochrome P450 enzymes with novel functions.
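The mutate-and-select cycle can be caricatured in a few lines of code. This toy sketch “evolves” a string toward a target by random mutation plus selection; it is a stand-in for error-prone PCR and activity screening, not a model of real enzymes (the target, alphabet, and library size are all invented):

```python
import random

random.seed(0)  # deterministic for reproducibility

TARGET = "ENZYME"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(seq: str) -> int:
    # "Activity screen": count positions matching the desired function.
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq: str) -> str:
    # "Error-prone PCR": copy the sequence with one random change.
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(ALPHABET) + seq[i + 1:]

parent = "AAAAAA"
for generation in range(1000):
    # Build a library of variants, then keep the fittest (selection).
    # Including the parent guarantees fitness never decreases.
    library = [mutate(parent) for _ in range(20)] + [parent]
    parent = max(library, key=fitness)
    if parent == TARGET:
        break

print(generation, parent)
```

Each round mirrors one directed-evolution cycle: diversify, screen, carry the winner forward. Real campaigns differ in scale and in how "fitness" is measured, but the loop structure is the same.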

Dr. Smith also applied the concepts of evolution, using bacteriophages, viruses that normally infect bacteria. He developed the process of phage display, a means of studying protein-protein interactions by encoding genetic information into bacteriophages. By infecting E. coli cells with bacteriophages carrying the gene for a protein of interest, Dr. Smith was able to have that protein displayed on the viral coat, where it could be recognized by an antibody.

Dr. Winter then incorporated an antibody gene, rather than that of a specific protein, into the phage. This allowed him to screen for antibodies with a specific binding site via their interactions with different antigens. Using a method similar to Dr. Arnold’s, he introduced mutations into the antibody and selected for the variant with the highest affinity for the antigen. The results were highly efficient antibodies and a mechanism that is now used to produce 11 of the 15 best-selling drugs on the planet.

Peer edited by Rachel Battaglia.

Golden Medicine: Use of Plasmons for Cancer Therapy

Cancer is an immensely complex disease to treat. The number of mutations, and combinations of mutations, that can lead to its development makes each “cure” more of a patch for a few specific cases. Couple that with the high rate of mutation within cancer cells, and the disease becomes difficult even to diagnose. Plasmon therapy offers the potential for a broadly applicable treatment, and because it couples well with the body’s immune response, it could decrease the chance of metastatic tumor development.

Before we discuss this topic in greater detail, a few terms should be defined. Plasmons, from the word plasma, are waves of electrons that flow back and forth across a material when light shines on it. Plasmas are just gaseous ions, like lightning or neon signs, and in the case of a plasmon, this plasma is confined to the surface of a nanoparticle. You can read more about plasmon theory here.

Nanoparticles abound in modern technologies and are defined by one dimension, the so-called “critical dimension,” which is around two hundred nanometers. For reference, that’s roughly one hundred thousand times smaller than a human hair. This size can give a material a variety of unique properties: distinct colors, uncharacteristic electronic activities, and even the ability to move through a cellular membrane. All of these attributes come into play in how these particles interact with cancer cells, so they’re important to keep in mind. Plasmonic nanoparticles are so small that the plasma on their surface can be manipulated by light. The rapid movement of the plasma generates heat as it collides with surface particles, just as your hands generate heat when rubbed together. The light that drives this can be visible or even radio waves, meaning that very low-energy, harmless beams can be used to generate this rapid heating.

The second bit of background knowledge necessary for this discussion is how cancer is treated in the first place. Many current cancer therapies are small molecules roughly the size of glucose. Whether they contain metals or strictly carbon, small-molecule cancer therapies usually rely on interrupting one or a few cellular pathways, like DNA replication or a checkpoint before mitosis (cell splitting). Among the first nanoparticles approved for cancer therapy were gold nanorods, which are thousands of times larger than a small molecule and work through physical rather than chemical mechanisms. Instead of changing some pathway in a cell, these nanorods can selectively heat cancer cells until the cells die. In terms of pest control, nanoparticle therapy is like burning a nest of cockroaches, while using small molecules like cisplatin is like spraying the cockroaches with the latest bug killer.

Extending this analogy, it’s fairly obvious that setting a fire inside someone’s body is not good medicinal practice, so it is fair to question how plasmon therapy might be helpful. There are two strategies for plasmon cancer therapy: precision lasers, and radio waves that can pass through the body. The earliest use of plasmon cancer therapy relied on a fiber optic inserted under the skin near the tumor, so that beams of light would hit only the tumor. This has the advantage of targeted dosing, but can still be considered fairly invasive. Others have begun using plasmons that generate intense heat under radio waves, so that no procedure is necessary: simply an injection or ingestion of nanoparticles, followed by stepping into a radio transmitter. This can be impractical if the tumor is not in a confined space; common gold plasmonic nanoparticles enter all cells, so healthy cells would be damaged just as easily as cancerous ones. Recent work shows that the surface of the nanoparticle can be modified so that the majority of uptake occurs in cancer cells. Cancer cell metabolism makes the charge of cancer cell membranes different from that of normal cell membranes, and these nanoparticles can exploit that difference to target only cancer cells.

With this targeted dosing, plasmons show promise as a noninvasive form of therapy that does not harm the patient and would be applicable to most forms of cancer. Even though the safest and most effective nanoparticles use gold, treatment costs are currently around $1,000, promising a treatment that will not be prohibitively expensive in the future.

Peer edited by Kasey Skinner.

To Dye or Not To Dye

Dyeing hair is a common, but not recent, beauty practice. Hair dyeing (coloring) has been around for thousands of years, using plant-based dyes such as indigo and turmeric before synthetic dyes were invented. I recently wondered, “What kind of science goes into modern hair coloring? What is the process?” I have had my hair colored before but never wondered how the coloring actually occurs. I reached out to my very good friend and cosmetologist, Erin Rausch, and asked her to explain the process.

Like most working professionals, cosmetologists attend school before they begin working with actual clients. While each school has a different curriculum, Erin attended the Aveda Institute in Madison, where she spent six months in a classroom learning about anatomy, chemistry, color theory, and the study of the hair and scalp – trichology. In cosmetology, it is exceedingly important to understand the biology of hair, particularly in the case of hair coloring. Like the skin, hair has pores and layers. The hair layers open with heat and close with cold, kind of like opening an umbrella when it’s sunny and closing it when it’s dark. A good cosmetologist understands how chemicals affect and interact with the hair, and studies the existing melanin (pigment) of the hair in order to later replace it with new color. Also like skin, hair contains different kinds of melanin, and different melanins result in different hair colors: pheomelanin gives hair a pink to red hue (pheomelanin is also in lips), while eumelanin determines the darkness of the hair – more eumelanin means darker hair, less means lighter hair.

Microscopic image of human hair. Hair has layers, like skin, onions, and Shrek.

Cosmetologists need to consider every aspect of the customer’s hair before starting to color. First, hair color is sorted into levels from 1 to 10, with 1 being black and 10 the whitest blonde. Aside from natural hair color, it is important to consider the type (thickness of the hair strand), texture (straight or wavy), and density (number of strands of hair per square inch) of the hair, whether the hair has had previous treatment, and the distance from the scalp that you’re treating. The scalp produces heat, which causes the color to react a little differently near the scalp than away from it. For this reason, density matters: a higher density of hair results in more heat near the scalp, changing how color interacts with the hair there compared to hair farther away.

Hair levels determined by hair color.

Once all of the variables of the client’s hair have been evaluated, cosmetologists then formulate how to color. There are several steps in the hair coloring process. First, they always formulate according to the existing natural level of hair color (levels 1-10). Dyeing hair is essentially a melanin exchange, where natural melanin is swapped for a different, synthetic one. Natural melanin is removed with ammonia or bleach partnered with peroxide, causing a chemical reaction. There are different strengths of peroxide, and the one used depends on type, texture, density, and the final desired hair color. It’s extremely important to be careful with peroxide strength, as certain volumes can cause chemical burns and adverse chemical reactions. A common analogy is that peroxide is like the length of time or heat used to cook spaghetti: if you don’t have enough, your spaghetti isn’t cooked, but if you have too much, it is overdone. Too much peroxide will severely damage your hair, like overcooking pasta. Depending on the natural hair color or previous hair treatment, it might take more than one peroxide treatment to remove the melanin.

Color theory. To neutralize the Natural Remaining Pigment, the complementary color is used. For example, Level 10 hair (lightest blonde) has yellow natural remaining pigment, so violet would neutralize the NRP.

After the peroxide treatment, there is still pigment remaining in the hair, called Natural Remaining Pigment (NRP). It is important to neutralize the NRP so it doesn’t affect the desired final hair color. NRP is neutralized by adding the opposite, or complementary, color. If Erin wanted to neutralize level 10 (lightest blonde), she would use violet. All of these variables are part of color theory, and the treatment depends on the look you want to create (I should have paid more attention in art class).

A client’s hair is sectioned as color(dye) is being applied.

Once the natural melanin has been removed and the NRP has been neutralized, it’s time to color the hair! The amount of hair color product needed depends on hair type and the desired color(s). The bigger the hair section (as can be seen in the image above), the less evenly the color will permeate the hair strands. Thus, more product is needed so the hair will be properly colored and the color will last a long time.

Erin strongly recommends against at-home hair dye kits. While they may seem cost effective, their methods can be damaging to your hair. These kits are made to achieve one color and are designed only for people who have no color on their natural hair. Since people have many different kinds of hair and hair color, there is a large chance the at-home dye color you pick will be very different from what you want. The peroxide in the kit is also much stronger than the peroxide used in salons, which makes the color much harder to remove or correct later on and leaves the hair badly damaged.

To see Erin’s portfolio, check out her Instagram!

Peer edited by Mikayla Armstrong.

Pharmacies of the Future: Chemical Lego Towers

Chemists and engineers are in the process of making on-demand production of pharmaceuticals less of an idea from a movie, and potentially a viable option for situations where medicines may not be easily accessible.

Imagine taking a vacation to an isolated rainforest resort. You explored your adventurous side, hiking through the lush vegetation with a knowledgeable tour guide. Less than 10 minutes after arriving back at the hotel, an uncontrollable itch began on your forearms. It traveled up your arms, across your chest, and began rising up your neck. Was it from a bug or a plant you encountered during the hike? At this point, you are unconcerned about the cause and just want a solution. The closest drug store is hours away; when booking the trip, it seemed like a great idea to pick the most isolated resort for your dream vacation. Even if the drug store were closer, there would be no guarantee that it would have anything to help you. In the US, there were over 200 instances of drug shortages from 2011 to 2014. There was no telling how difficult it would be to get medicine to this remote location.

You head to the front desk of the hotel, hoping they have something to give you for relief. They lead you down the hall and into a small room. There are a few chairs and an appliance similar in size and shape to a refrigerator. The employee enters a few commands into a keyboard and the machine starts working. In fifteen minutes, the employee hands you two tablets – diphenhydramine hydrochloride, more commonly known as Benadryl®.

https://www.flickr.com/photos/mindonfire/3249070405

Diphenhydramine, better known by the brand name Benadryl, is one of the four medications that can be synthesized by the original compact, reconfigurable pharmaceutical production system.

While this scenario is not plausible today, it may be in the near future. In a 2016 Science article, researchers from around the world introduced a refrigerator-sized machine that could make four common medicines. More recently, a second-generation prototype was released; the new model is 25% smaller and contains enhanced features necessary for the synthesis of four additional drugs that meet US Pharmacopeia standards. This is made possible by a technology known as flow chemistry, in which chemicals are pumped through tiny tubes. When two tubes merge, a reaction between the two chemicals occurs, resulting in a new molecule. Compared to traditional batch reactions (stirring two chemicals together in a flask), flow reactions are generally safer and faster.

In this new machine, there are different “synthesis modules,” small boxes that each contain the equipment to do a single chemical reaction. Much like an assembly line that builds a car, pharmaceutical molecules are made by starting with something very simple, then adding and manipulating pieces until the product is something useful; here, the assembly line consists of molecules and reactions. To make a different medicine, the modules simply have to be rearranged so the reactions occur in the order needed. Researchers can use the original prototype to make Benadryl, lidocaine (a local anesthetic), Valium (anti-anxiety), and Prozac (an antidepressant) using different combinations of the exact same modules. As of July 2018, the FDA reported that both diazepam (Valium) and lidocaine were in shortage due to manufacturing delays.
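The rearrangeable-module idea is essentially function composition: the same steps, run in different orders, yield different products. A hypothetical sketch (module names, "recipes," and string intermediates are all invented for illustration, not the machine's real chemistry):

```python
# Each "module" performs one transformation on an intermediate.
# Intermediates are just strings tracking what has been done so far.
def add_amine(x: str) -> str: return x + "+amine"
def add_ether(x: str) -> str: return x + "+ether"
def heat(x: str) -> str:      return x + "+heated"
def purify(x: str) -> str:    return x + "+purified"

# Recipes: the exact same modules, rearranged into different orders.
RECIPES = {
    "drug_A": [add_amine, heat, purify],
    "drug_B": [add_ether, add_amine, purify],
}

def synthesize(drug: str, start: str = "feedstock") -> str:
    product = start
    for module in RECIPES[drug]:  # run the assembly line in order
        product = module(product)
    return product

print(synthesize("drug_A"))  # feedstock+amine+heated+purified
print(synthesize("drug_B"))  # feedstock+ether+amine+purified
```

Swapping recipes requires no new hardware in this picture, only a new ordering of existing modules, which is exactly the machine's selling point.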

http://www.columbus.af.mil/News/Photos/igphoto/2000125986/

On demand pharmaceutical production would allow access to medicines in rural locations and war zones.

The future of this technology would allow anyone to use it. A user could simply input the medicine they want; computers would rearrange the modules and select the correct starting chemicals, and in about 15 minutes the desired medicine would be ready. This technology has vast applications. It could help alleviate the aforementioned drug shortages. Additionally, it could allow access to medicine in locations that are difficult to ship to, including rural areas and war zones – often the places that need medicines most. In these places, delivery may be difficult, and some medicines degrade quickly. With this technology, it would not be necessary to store medicines that could go bad; a medicine could simply be made as soon as it is needed. This could also prevent waste from medicines that expire before they are used. These developments could revolutionize the pharmaceutical industry, and I look forward to seeing the good that these advances can lead to.

Peer edited by Nicholas Martinez

Diet Soda: Providing Insight into a Rare Metabolic Disorder

https://pixabay.com/en/beach-soda-diet-coke-ocean-sand-2752753/

Diet Coke is advertised as a sugar-free alternative to regular Coca-Cola, using aspartame as a sweetener.

Have you ever read the Nutrition Facts on a diet soda or sugar-free gum? If so, you might have noticed a bolded sentence that reads: PHENYLKETONURICS: CONTAINS PHENYLALANINE. In the U.S., this sentence is present on every commercially available medicine, food, or beverage that contains the artificial sweetener aspartame. Often, this warning goes unnoticed, partly because it is nestled quietly at the foot of the list of ingredients and partly because it only applies to 1 in every 10,000 individuals in the United States. Nevertheless, this subtle message is essential for this subset of consumers.

 

Typical nutrition facts label for a product that contains phenylalanine

Phenylketonuria (Phe·nyl·ke·ton·uria)

Those people are “phenylketonurics,” people with a rare metabolic disorder called phenylketonuria (PKU). PKU is a genetic disorder that results in the inability to convert the amino acid phenylalanine (Phe) into the amino acid tyrosine (Tyr), both of which are found in most proteins. Phe’s most important biological function is its role as a precursor to Tyr, which is involved in many processes such as the synthesis of neurotransmitters. Without the ability to convert Phe to Tyr, there is less Tyr available for the synthesis of dopamine, norepinephrine, and epinephrine (all important neurotransmitters), and Phe can accumulate in the brain, resulting in local metabolic dysfunction. If left untreated, PKU can result in severe developmental complications and intellectual disability. Fortunately, PKU can be successfully managed by abiding by a strict low-protein diet, avoiding foods that contain sources of phenylalanine such as meats, fish, dairy, and nuts.

Now, you might ask, “How does PKU have anything to do with the Nutrition Facts on my Diet Coke?” Good question. The answer lies in the chemical structure of the aforementioned artificial sweetener, aspartame.

Image Credit: Blaide Woodburn

Aspartame, an artificial sweetener in diet colas, is broken down into phenylalanine, methanol, and aspartate

Aspartame, sold under the brand name NutraSweet, is a dipeptide that contains the amino acids aspartic acid and phenylalanine. Aspartame is rapidly hydrolyzed (i.e. split in two by water) into its respective amino acids in the small intestine, serving as a source of aspartic acid and phenylalanine. Thus, it is important to communicate the presence of aspartame in all artificially sweetened products, especially since the products that usually contain aspartame (soft drinks, candies, etc.) are not typical sources of phenylalanine.

Yet nutrition labels aren’t just important for phenylketonurics. With the increasing incidence of heart disease, it’s more important now than ever to understand the nutritional value of the foods you’re eating. But do not fear! Stay up to date on some of my upcoming articles for The Pipettepen, Nutrition at UNC, and Translational Science, where I’ll outline the basics of healthy eating and overall wellness!

Peer edited by Amanda Tapia.

The Curious Life of Plants: Exploring the Science within the Garden Walls

Cortney Cavanaugh

The science that plants rely on goes far beyond photosynthesis, the familiar process where plants use sunlight to help make food. In honor of spring’s pending arrival, it’s time to delve into three unique cases where science influences the plants that color our lives.

The Power of pH

A bitter cup of burnt coffee or a sour lemon wedge can stimulate an intense response from a diner. Just as with humans, plants can respond in curious ways to the acidity (sourness) or basicity (bitterness) of the soil they live in. The acidity and basicity of soil is directly related to its pH. This characteristic is measured in pH units, on a scale that runs from 0 to 14, where 7 is neutral. Values below 7 represent an acidic environment and those above 7 indicate that the soil is basic. While these values can vary based on rainfall and the components of the soil, the typical range falls between 3 and 10, with most plants preferring a pH of 6.5.
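Underlying the scale is a logarithm of hydrogen-ion concentration. A quick sketch of the relationship (the formula pH = −log₁₀[H⁺] is standard chemistry rather than something stated in this article, and the soil concentrations below are illustrative):

```python
import math

def ph(h_ion_molar: float) -> float:
    # pH is the negative base-10 logarithm of the hydrogen-ion
    # concentration in moles per liter.
    return -math.log10(h_ion_molar)

print(ph(1e-7))  # 7.0  (neutral)
print(ph(1e-4))  # 4.0  (acidic, below 7)
print(ph(1e-8))  # 8.0  (basic, above 7)
```

Because each pH unit is a factor of ten in concentration, soil at pH 4 is a thousand times more acidic than soil at the plant-preferred 6.5-ish range, which is why small pH shifts matter so much to roots.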

https://commons.wikimedia.org/wiki/File:Paper_indicador_.jpg

pH indicator paper

In the plant world, soil pH can have huge consequences. The acidic or basic nature of the soil has a strong influence on whether a plant can absorb the nutrients that surround it. Plants require many elements to survive, including nitrogen, calcium, sulfur, and magnesium. While these nutrients are generally abundant in the earth, a plant may be unable to take them in if the pH is not sufficiently acidic or may take in too much if it is too acidic. A plant suffering from an iron deficiency may exhibit a yellow coloring of its leaves. In this case, there is likely enough iron in the soil but the pH is too high (basic) and the iron does not exist in a form that the plant is capable of absorbing. When the pH is too low, plants may begin to absorb too much of their required nutrients, at poisonous levels, or may even begin to absorb some components of soil that are toxic to plants. As with humans, nutrients are necessary for a plant’s survival, but an overdose, or an insufficient intake, can be problematic. In the case of plants, symptoms of this imbalance usually include the discoloration and death of leaves.

It is apparent that maintaining an appropriate soil pH is necessary for a plant’s health, but given that pH levels can vary based on location and yearly rainfall, it is critical to monitor these levels. While there are commercially available digital monitors, one of the easiest methods to check the pH is through the use of litmus paper. These strips of paper, treated with dyes, change colors based on the pH of the environment. Interestingly, a plant-based version of this litmus paper exists in many gardens. It turns out that the blooms of the very popular Hydrangea macrophylla, or the bigleaf hydrangea, will change colors in response to the pH of the soil. It has long been known that the blooms will turn blue in the presence of acidic soil and shift to a red/pink color when the soil is basic. A variety of home remedies can be used to change the pH of the soil. Adding vinegar or citrus peels can make the soil acidic and give the blooms a blue color, while powdered limestone results in basic soil and a red hydrangea.

https://www.publicdomainpictures.net/view-image.php?image=1254

When the pH of the soil is low, aluminum ions become available, and the hydrangea blooms are blue

https://www.publicdomainpictures.net/view-image.php?image=22790&picture=pink-hydrangea

When the pH of the soil is high, and aluminum ions are unavailable, hydrangea blooms display a pink-red color

While many experienced gardeners are familiar with this age-old practice, research over the past decade has begun to reveal that the chemistry behind the colors of the hydrangea goes beyond a simple change in pH and is directly related to the amount of positively charged aluminum ions that the plant can absorb from the soil. When there are no aluminum ions in the hydrangea, the flowers will maintain a red color. This is the result of one type of anthocyanin pigment, a colored molecule, which is present in the bloom. When the pH is lower, and the soil is acidic, the aluminum ions exist in a form that allows them to be absorbed by the roots and carried up to the bloom where they can react with the colored anthocyanin molecule to turn it blue. Basic soil, such as that treated with limestone, causes the aluminum ions to get tied up and prevents them from entering the bloom so the color change doesn’t occur. As a result, the best way to get and maintain blue hydrangeas is to treat the soil with an appropriate amount of aluminum sulfate. This will create an environment that is sufficiently acidic with enough aluminum ions to cause a color change, though this process will take multiple growing seasons. The color change, along with the levels of aluminum ions found within a blue bloom, demonstrate that the color-changing capability of the hydrangea plant is actually a measure of available aluminum ions in the soil, which is directly related to the soil’s pH.

“Life Finds a Way”

When the soil does not provide plants the nutrients they need to survive, certain species will adjust to fit their environment. Commonly, plants will develop specialized leaves that help them survive despite poor soil quality. One of the liveliest examples of these leaves belongs to carnivorous plants, which rely on trap-style leaves to catch their prey and special chemicals, called enzymes, to digest them. The ability to absorb nutrients from prey gives carnivorous plants the chance to exist in wetlands, where the average plant would die, unable to access essential nutrients from the soil.

https://commons.wikimedia.org/wiki/File:Venus_Flytrap_showing_trigger_hairs.jpg

Hair-like triggers sense when prey enters the Venus flytrap

The Venus flytrap is likely the best known among carnivorous plant species. While most would associate these bug-eating plants with exotic, swampy jungle locations, their only native habitat is actually southeastern North Carolina and northeastern South Carolina! Their bizarre ability to trap insects and small amphibians comes at a price: a significant amount of energy is required to trap and digest prey. To prevent unproductive closing events, the Venus flytrap has a method to differentiate between a brush with prey and a false alarm. The trap is lined with hair-like structures that can sense when they are being touched. Research has shown that the plant is able to keep track of how many of these sensors have been touched within a short window of time, and can recognize whether there is a struggling bug in its grasp. After two sensors have been touched, a hormone called jasmonic acid is released and an electrical signal is sent through the leaf, causing the trap to snap shut and catch the prey. After the prey has touched five sensors, the plant quickly releases its digestive enzymes, which help the flytrap absorb the nutrients from its meal. While the small creature likely stands no chance of surviving the interaction, the Venus flytrap’s survival is also at risk due to the large amount of energy it uses to obtain its meal. Ominously, the flytrap is only capable of closing its trap about four or five times without catching prey before the plant dies.
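The flytrap's counting behavior is essentially a threshold counter with a forgetting window. Here is a toy sketch of that idea; the two- and five-touch thresholds come from the description above, while the 30-second window is an arbitrary illustrative value, not a measured one:

```python
# Toy model of the Venus flytrap's touch counting: two touches within a
# short window close the trap; five trigger digestion. The WINDOW length
# is an illustrative assumption.

class FlytrapModel:
    SNAP_THRESHOLD = 2     # touches needed to close the trap
    DIGEST_THRESHOLD = 5   # touches needed to release digestive enzymes
    WINDOW = 30.0          # seconds a touch "counts" for (assumed value)

    def __init__(self):
        self.touch_times = []
        self.closed = False
        self.digesting = False

    def touch(self, t):
        """Register a trigger-hair touch at time t (in seconds)."""
        # Forget touches that fall outside the counting window.
        self.touch_times = [s for s in self.touch_times if t - s <= self.WINDOW]
        self.touch_times.append(t)
        count = len(self.touch_times)
        if count >= self.SNAP_THRESHOLD:
            self.closed = True       # jasmonic acid + electrical signal
        if count >= self.DIGEST_THRESHOLD:
            self.digesting = True    # digestive enzymes released

trap = FlytrapModel()
for t in [0, 1, 2, 3, 4]:  # a struggling insect hits five hairs in a row
    trap.touch(t)
print(trap.closed, trap.digesting)  # both True after five rapid touches
```

Note how a lone touch, or two touches far apart in time, never closes the trap: that is the false-alarm filter that saves the plant its precious energy.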

Exploring a Friendship at its Roots

https://commons.wikimedia.org/wiki/File:Phaedriel%27s-orchid.jpeg

Orchids depend on a relationship with the fungi around their roots for survival

Though they are highly desired for their vibrant blooms, orchids are notorious for being one of the most difficult plants to maintain. While they are often considered one of the easiest domestic plants to kill, orchids are also considered fragile in the wild. Their sensitivity to change makes orchids excellent indicators of environmental change, but sadly, it has also caused the plants to become endangered. This fragility results from the reliance of orchids on their ecosystem, or the entire community of interacting organisms around them. Research from the Smithsonian Environmental Research Center has shown that orchids rely on the presence of fungi at their roots. The teamwork exhibited by the plant and fungus is an example of a symbiotic relationship in which both members benefit. While the fungi are provided with a moist, nutrient-rich home at the plant’s roots, the orchid receives help from the fungi to take in nourishment when photosynthesis is difficult. Orchids take advantage of the fact that the fungus is able to take the organic matter in soil and break it down into simple molecules, like sugars, that the orchid can then absorb for energy. It has been discovered that all orchids depend on this very relationship. In fact, an orchid will not begin to grow until the fungus enters the plant’s seed, as it is the only source of energy for the developing plant. As the plants mature, they are believed to still rely on this relationship as an alternative food source when sunlight is sparse. There is still a lot to learn about the reliant relationship between orchids and fungi, but what is certainly known about these fragile plants is that their ability to survive is a direct result of a healthy and flourishing ecosystem.

The life of plants is much more complicated than most people realize. In order to survive, plant species must develop creative ways to adapt to changes in their environment over time. Seeing these ingenious beauties firsthand may be the best way to appreciate them, and fortunately, a wide variety of unique plants can be found on display at local spaces including the North Carolina and Duke botanical gardens.

Peer edited by Richard Hodge and Ann Marie Brasacchio.

Follow us on social media and never miss an article:

Epigenetics: The Software of the DNA Hardware

https://pixabay.com/en/dna-string-biology-3d-1811955/

Scientists identified the genes of the human genome to understand how genes influence the function and physical characteristics of human beings.

The Human Genome Project (HGP) was an amazing endeavor to map the full human genome, and so intense an effort that it required an international collaborative research team. One of the ultimate goals of this project was to shed light on human diseases and find the underlying genes causing these health issues. However, the HGP ended up creating more questions than it answered. One thing we found out is that most diseases are complex diseases, meaning that more than one gene contributes to the disease. Obesity is one such example of a complex disease. This is in contrast to cystic fibrosis, which is caused by a mutation in a single gene. To further complicate matters, there are gene-environment interactions to consider. A gene-environment interaction is a situation in which environmental factors affect individuals differently, depending on their genotype or genetic information. The possible number of gene-environment interactions involved in complex diseases is daunting, but the HGP has given us the information necessary to begin to better understand these interactions.

Although the HGP did not end up giving us the answers we were looking for, it pointed us in the direction we needed to take: considering the role of environmental factors in human health and disease. Not only are complex diseases not fully explained by genetics alone, but another aspect of these diseases also remains unexplained by genetics: the health disparities seen within diseases like obesity, diabetes, and cancer. The HGP showed us that not all diseases are caused by single mutations and that genetic diversity does not explain the differences in health outcomes. The environment plays a big part.

https://www.flickr.com/photos/redisdead/5309060705/in/photolist-969jMp-8KGU3A-9AmjDU-9Fb6QL-8z8v47-9rToP4-bjisT4-efsHVX-9CwCuC-bWVpnR-efyk59-cv8CSY-2nZbn-8Eo8Qf-AZcxv-avzbwe-ocrr7n-8AmXzh-92UJLA-9x7ERW-52rPZ-53pNfn-awTCTx-9opxuf-5Lb4wd-at19g5-e9uy2K-efsFrD-bzABzz-8PTiNV-9niuDm-aWuyfD-2cWce6-efsMHP-bmFJMy-nMfvHV-efsrfv-9CtGyg-8z5qvi-2bhXQ-ob7nAK-acATxr-2cWbvK-bmFJLA-8W32Lj-5XmFk5-8z5qb8-87XSzM-efsMtF-vFXb8

The epigenome is the software that controls gene expression to make the different cells that make up the human body.

Gene-environment interactions begin to answer why genetics alone cannot explain varying health outcomes by considering that the environment may have varying effects on our genetic data. However, there is another dimension to our genetic background that could better answer why genes often can’t be mapped directly to a disease. Imagine that our genome is our computer hardware, with all the information necessary to create the cells we are composed of. However, something needs to configure the genome to differentially express genes so as to make skin and heart cells from the same DNA. Skin and heart cells have the same information (DNA) in their nucleus but only express what’s necessary to function as a skin or heart cell. In other words, software is needed for the hardware. For us, that software is the epigenome. The epigenome consists of a collection of chemical compounds, or marks, that tell the genome what to do: how to make skin and heart cells from the same information. The epigenome, unlike the genome, is flexible. It can change at key points in development and even during the course of one’s lifetime. This flexibility makes the epigenome susceptible to environmental factors and could explain: (1) why our genome alone cannot explain the incidence of diseases such as obesity, (2) the health disparities within these complex diseases, and (3) the transgenerational inheritance of complex diseases like metabolic syndrome, defined as a cluster of conditions such as high blood pressure and high blood sugar that increase your risk for heart disease and diabetes.
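The hardware/software analogy can be made concrete with a toy sketch. The gene names and on/off marks below are hypothetical placeholders chosen for illustration, not real epigenetic data:

```python
# Toy illustration of the genome-as-hardware, epigenome-as-software idea:
# every cell carries the same genome, but cell-type-specific epigenetic
# marks determine which genes are actually expressed.
# Gene names and marks here are hypothetical, for illustration only.

GENOME = ["keratin", "myosin", "insulin_receptor", "housekeeping"]

EPIGENOME = {  # hypothetical on/off marks per cell type
    "skin_cell":  {"keratin": "on",  "myosin": "off",
                   "insulin_receptor": "on", "housekeeping": "on"},
    "heart_cell": {"keratin": "off", "myosin": "on",
                   "insulin_receptor": "on", "housekeeping": "on"},
}

def expressed_genes(cell_type):
    """Return the genes this cell type actually expresses."""
    marks = EPIGENOME[cell_type]
    return [gene for gene in GENOME if marks[gene] == "on"]

print(expressed_genes("skin_cell"))   # same genome (hardware)...
print(expressed_genes("heart_cell"))  # ...different output per cell type
```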

Now of course, the more we find out, the more questions are left unanswered. As stated before, the epigenome can change due to lifestyle and environmental factors, which can prompt chemical responses. However, the mechanisms by which things like diet and smoking induce these chemical responses are unclear. But researchers have started to fill in the gap. For example, certain types of fats, like polyunsaturated fatty acids (corn oil is high in these), can generate highly reactive molecules and oxidative stress, which can cause epigenetic alterations. Tobacco smoke contains a mixture of chemicals whose epigenetic effects have been investigated independently, with mixed results. Psychological stress, more specifically child abuse, has been seen to cause increased methylation (a sort of mark on the genome) of a receptor for hormones involved in metabolism (the glucocorticoid receptor) in suicide victims. This has also been seen in mouse models, where higher maternal care of pups decreased methylation of the glucocorticoid receptor gene. Increased methylation usually decreases the expression of the glucocorticoid receptor, and decreased methylation increases its expression.

The HGP was an amazing endeavor of science and has given us remarkable insight into the structure, organization, and function of the complete set of human genes. It has also helped point us in a new direction to better understand chronic diseases and to seek solutions that address the burden of disease.

Peer edited by Mejs Hasan and Emma Hinkle.


Why is the Flu such a Big Deal?

With each flu season comes a bombardment of new advertisements reminding people to get a flu vaccine. The vaccine is free to most and widely available, yet almost half of the United States chooses to forgo the vaccine.

When Ebola emerged, there was 24-hour news coverage and widespread panic, but the influenza virus (the flu) feels more familiar and much less fear-inducing. This familiarity with the flu makes its threat easy to brush aside. Yet, every flu season is met with stern resolve from the medical community. What’s the big deal with the flu?

What makes the flu such a threat?

Influenza is a globetrotting virus: flu season runs from October to March in the northern hemisphere and from April to September in the southern hemisphere. This seasonality makes the flu a year-round battle. The virus also evolves at a blistering pace, making it difficult to handle.

To understand why the flu is able to evolve so rapidly, one must first look at its structure. The graphic to the right shows an illustration of the ball-shaped flu virus.

https://www.cdc.gov/flu/images/virus/fluvirus-antigentic-characterization-large.jpg

Illustration of flu structure

On the outside of the ball are molecules that let the virus slip into a person’s cells. These molecules are called hemagglutinin and neuraminidase, simply referred to as HA and NA. HA and NA are also used by our body’s immune system to identify and attack the virus, similar to how a license plate identifies a car.

These HA and NA molecules on the surface can change through two processes. One such process is like changing one license plate number; this is known as antigenic drift. When the flu makes more of itself inside a person’s cells, the instructions for making the HA and NA molecules slightly change over time due to random mutations. When the instructions change, the way the molecules are constructed also changes. This allows the flu to sneak past our immune systems more easily by mixing up its license plate over time.

Another way the virus can evolve is known as antigenic shift. This type of evolution would be more like the virus license plate changing the state it’s from in addition to a majority of its numbers and letters, making the virus completely unidentifiable to our immune systems. Unlike antigenic drift, antigenic shift requires a few improbable factors to coalesce.  

Antigenic shift happens more regularly in the flu than in other viruses. For instance, one type of flu virus is able to jump from birds, to pigs, and then to people without the need for substantial change. This ability to jump between different animals enables antigenic shift to occur.

https://commons.wikimedia.org/wiki/File:AntigenicShift_HiRes_vector.svg

How antigenic shift occurs in the flu

This cross-species jumping raises the odds that two types of the virus will infect the same animal and then the same cell. When both types of flu virus are in that cell, they mix and match parts, as can be seen in the picture to the right. When the new mixed-up flu virus bursts out of the cell, it has completely scrambled its HA and NA molecules, generating a new strain of flu.
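The mix-and-match process can be sketched as a toy model. The influenza A genome really does consist of eight RNA segments (including the ones encoding HA and NA), but the strain labels below and the uniform random choice per segment are simplifying assumptions for illustration:

```python
import random

# Toy sketch of antigenic shift by reassortment: when two flu strains
# infect the same cell, progeny can carry any mix of the eight genome
# segments from either parent. Strain labels are illustrative placeholders.

SEGMENTS = ["PB2", "PB1", "PA", "HA", "NP", "NA", "M", "NS"]

def reassort(strain_a, strain_b, rng=random):
    """Build a progeny virus by picking each segment from either parent."""
    return {seg: rng.choice([strain_a[seg], strain_b[seg]]) for seg in SEGMENTS}

human_strain = {seg: f"human-{seg}" for seg in SEGMENTS}
avian_strain = {seg: f"avian-{seg}" for seg in SEGMENTS}

progeny = reassort(human_strain, avian_strain)
# A progeny carrying, say, avian HA and NA on an otherwise human-adapted
# backbone would look brand new to human immune systems -- a completely
# changed "license plate".
print(progeny["HA"], progeny["NA"])
```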

Antigenic shift is rare, but in the case of the swine flu outbreak in 2009, this mixing and matching occurred within a pig and gave rise to a new flu virus strain.

This rapid evolution means that many different types of the flu circulate at the same time and that they are all constantly changing. This persistent evolution results in the previous year’s flu vaccine losing efficacy against the current viruses in circulation, which is why new flu vaccines are needed yearly. Sometimes the flu changes and becomes particularly tough to prevent, as was the case with swine flu. At its peak, the swine flu was classified by the World Health Organization (WHO) as a class 6 pandemic, which refers to how far it had spread rather than its severity. Swine flu was able to easily infect people; fortunately, it was not deadly. The constant concern of what the next flu mutation may hold keeps public health officials vigilant.

Why is there a flu season?

A paper by Eric Lofgren and colleagues from Tufts University grapples with the question “Why does a flu season happen?”. The authors highlight several prevailing theories that are believed to contribute to the ebb and flow of the flu.

One contributing factor to the existence of flu seasons is our reliance on air travel. When flu season in Australia is coming to an end in September, an infected person can fly to Canada and infect several people there, kickstarting the flu season in Canada. This raises the question: why is flu season tied to winter?

The authors touch on this question. During the winter months, people tend to gather in close proximity, allowing the flu access to many potential targets and limiting the distance the virus needs to cover before infecting another person. This gathering in confined areas likely contributes to the spread of flu during the winter, but another theory proposed in the paper is less obvious and centers on the impact of indoor heating.

Heating and recirculating dry air in homes and workplaces creates an ideal environment for viruses. The air is circulated throughout a building without the virus particles being removed, improving the chances of the virus infecting someone; the flu virus is so minuscule that air filters are unable to effectively remove it from the air. The authors conclude that the seasonality of the flu depends on many factors, and no single cause explains the complete picture.

What are people doing to fight the flu?

The flu is a global fight; fortunately, the WHO tracks the active versions of the flu across the world. This monitoring system relies on coordination from physicians worldwide. When a patient with the flu visits a health clinic, a medical provider performs a panel of tests to detect the type and subtype of flu present. This data is then submitted to the WHO flu database, which is publicly accessible.

This worldwide collaboration and data is invaluable to the WHO; it allows for flu tracking and informed decision making when formulating a vaccine. Factor in the rapidly evolving nature of the flu, and making an effective vaccine seems like a monumental task. Yet, because of this worldwide collaboration, the WHO is able to issue changes to the formulation of the vaccine twice a year in an effort to best defend people from that year’s flu.

Peer edited by Rachel Cherney and Blaide Woodburn.


Cambridge Researchers use Mouse Embryonic Stem Cells to Grow Artificial Mouse “Embryos”

Let’s start at the very beginning. When a mammalian egg is successfully fertilized by a single sperm, the result is a single cell called a zygote. A zygote has the potential to grow into a full-bodied organism. It is mind-boggling that this single cell, containing the genetic material from both parents, can divide itself to make two cells, then four cells, then eight cells, and so on, until it becomes a tiny ball of 300-400 stem cells.

https://upload.wikimedia.org/wikipedia/commons/3/3c/Stem_cells_diagram.png

Early Development and Stem Cell Diagram, modified by author to include ESC and TSC labels.

At these early stages, these stem cells are totipotent, meaning that they have the potential to become either embryonic stem cells (ESCs), which will eventually become the fetus itself, or extraembryonic trophoblast cells (TSCs), which go on to help form the placenta. That ball of ESCs and TSCs develops into a blastocyst with a tight ball of ESCs on the inside, and a layer of TSCs on the outside (See Figure 1).

You might imagine the blastocyst as a pomegranate, with the seeds representing the ESCs and the outer skin representing the TSCs. The ESCs have the potential to transform, or differentiate, into any type of cell in the entire body, including heart cells, brain cells, skin cells, etc., which will ultimately become a complete organism. The outer layer TSCs have the ability to differentiate into another type of cell that will ultimately attach itself to the wall of the uterus of the mother to become the placenta, which will provide the embryo with proper nutrients for growth. 

Scientists in the field of developmental biology are absolutely bonkers over this early stage of embryogenesis, or the process of embryo formation and development. How do the cells know to become ESCs or TSCs? What tells the ESCs to then differentiate into heart cells, or brain cells, or skin cells? What signals provide a blueprint for the embryos to continue growing into fully-fledged organisms? The questions are endless.

The challenge with studying embryogenesis is that it is incredibly difficult to find ways to visualize and research the development of mammalian embryos, as they generally do all of their growing, dividing, and differentiating inside the uterus of the mother. In recent years, there have been multiple attempts to grow artificial embryos in a dish from a single cell in order to study the early stages of development. However, previous attempts at growing artificial embryos from stem cells face the challenge that embryonic cells are exquisitely sensitive and require the right environment to properly coordinate with each other to form a functional embryo.

Enter, stage right: several teams of researchers at the University of Cambridge are successfully conducting groundbreaking research on how to grow artificial mouse embryos, often called embryoids, in a dish.

In a paper published in Development last month, David Turner and colleagues in the Martinez-Arias laboratory report a unique step-by-step protocol developed in their lab that uses 300 mouse ESCs to form tiny balls that mimic early development.

https://www.biorxiv.org/content/early/2016/05/13/051722

Mouse embryonic stem cell aggregates with polarized gene expression in a dish (4 days in culture). Image courtesy of authors. doi.org/10.1101/051722.

These tiny balls of mouse ESCs are collectively termed “Gastruloids” and are able to self-organize and establish a coordinate system that allows the cells to go from a ball-shape to an early-embryo shape with a head-to-tail axis. The formation of an axis is a crucial step in the earliest stages of embryo development, and it is exciting that this new model system may allow scientists to better study the genes that are turned on and off in these early stages.

In a paper published in Science this past April, Sarah Harrison and her team in the Zernicka-Goetz laboratory (also at Cambridge) report another technique in which mouse ESCs and TSCs are grown together in a 3D scaffold instead of simply in a liquid media. The 3D scaffold appears to give the cells a support system that mimics the environment in the uterus and allows the cells to assemble properly and form a blastocyst-like structure. Using this artificial mouse embryo, the researchers are attempting to simulate the growth of a blastocyst and use genetic markers to confirm that the artificial embryo is expressing the same genes as a real embryo at any given stage.

The researchers found that when the two types of stem cells, ESCs and TSCs, were put together in the scaffold, the cells appear to communicate with each other and go through choreographed movement and growth that mimics the developmental stages of a normal developing embryo. This is enormously exciting, as models like this artificial embryo and the Gastruloid have the potential to be used as simplified models to study the earliest stages of embryo development, including how ESCs self-organize, how the ESCs and TSCs communicate with each other to pattern embryonic tissues, and when different genetic markers of development are expressed.

It is important to note that this artificial embryo is missing a third tissue type, called the endoderm, that would eventually form the yolk sac, which is important for providing blood supply to the fetus. Therefore, the artificial embryo does not have the potential to develop into a fetus if it is allowed to continue growing in the dish. The fact that these artificial embryos cannot develop into fully-fledged organisms relieves some of the controversial ethical issues of growing organisms in a dish, and will allow researchers to study critical stages of development in an artificial system.   

These techniques and discoveries developed by these teams of researchers have the potential to be applied to studies of early human development. These models may prove especially useful in studying how the maternal environment around the embryo may contribute to fetal health, birth defects, or loss of pregnancy. In the future, artificial embryos, coupled with the not-so-futuristic gene editing techniques that are currently in development to fix disease genes, may prove key in the quest to ensure healthy offspring. 

Peer Edited by Nicole Smiddy and Megan Justice.
