A picture is worth a thousand words…or more

Image source: https://commons.wikimedia.org/wiki/File:20090211_thousand_words-01_cropped.jpg

“Use a picture. It’s worth a thousand words.” This timeless expression first appeared in a 1911 Syracuse Post Standard newspaper article. If you ask Mohamad Elgendi, he’ll say it’s more like 10,000 words, based on how fast our minds process words versus images. Although true for almost everything, this phrase is becoming even more important in the sciences, where data visualization is a necessity for clearly communicating large and complex data sets.

The concept of data visualization is simple: it is the representation of data in a graphical or pictorial format. Creating effective data visualizations, however, is quite difficult. Scientists face this challenge every day, whether presenting their work to peers or to the general public through written and oral forms of communication. Data visualizations play a huge role in all of these outputs, so scientists should be pretty good at it, right?

Most scientists would probably say they are decent at preparing figures and graphics for someone within their field of study. Beyond the typical representations of data like bar plots, scatter plots, pie charts, and line graphs, different fields within the life sciences have created various types of plots for representing certain data. For example, protein sequence conservation is sometimes depicted in “sequence logo plots”. But these field-specific data representations may not be appropriate for all audiences, and branching out to create something that is both visually appealing and effective at conveying the proper message to the right audience is tough.

There are multiple possible explanations for the gap in scientists’ ability to make effective data visualizations. The first is that we simply are not trained in art or graphic design. Additionally, scientists do not always have access to someone, such as a graphic designer, to collaborate with when making figures, although there are efforts, such as this one at the University of Washington, to forge collaborations between science and design students. Another hurdle is time. A good figure requires a complete and thorough understanding of the data, which can take a tremendous amount of time, particularly in the days of big data, where data sets are extremely vast and complex. Finally, it also takes time to create a figure. A beautiful data visualization requires hours of training and working with unfamiliar software, such as Adobe Illustrator, that takes patience and persistence to master.

So scientists need to improve their data visualization skills, but it is often difficult to find the time to practice them. Some helpful beginner tips for data visualization are shown below, because the goal is always the same.

Goal of data visualization: To create a story from a set of data in a clear manner

How to get there:

  1.   Figure out your narrative, or the story that you want to tell with the data. This requires a comprehensive understanding of the dataset you aim to represent along with an understanding of your audience.
  2.   Determine the best way to represent the data. This sounds easier than it actually is and could require making and comparing multiple types of figures. Again, remember the story and the audience.
  3.   Learn a little bit about how the brain perceives images, color, and depth. Learning the core principles of design, such as color choice, negative space, and typography, can have an immediate impact on the visual appearance of the graphic. This document highlights data visualization specifically for the life sciences, and Nature has compiled a collection of articles related to design.
  4.   Get feedback from everybody. Before finalizing a data visualization, make sure to get feedback from multiple people with different backgrounds. Ensure they all interpret the data as you aimed to present it. And, as most things are not perfect the first time, refine and remake until you create your ideal data visualization.

Nearly every scientist hopes to turn the ideas in their head into a beautiful work of art, similar to this process of going from sketch to infographic. It takes time, patience, and practice to develop these skills. If you are a scientist looking to enhance your data visualization skills, consider taking an online course, reading up on data visualization, practicing making figures from freely accessible datasets or for your colleagues, entering a contest such as the NSF Vizzies Challenge, or attending a conference or workshop.

Peer edited by Alex Mullins.

Follow us on social media and never miss an article:

Why the Ocean Needs the Desert

What do you imagine when you think of the desert? I grew up in the desert, and I think of dry hot days and clear cool nights. I think of my hometown, where mountains and a vast sky surround us. I think of sand and dust that permeate all crevices and openings. I think of low-lying scrub bushes and an ecosystem that, while not as diverse as the Amazon rainforest or the oceans, is certainly unique.

Source: https://www.panoramio.com/photo/96124662

What do you imagine when you think of the ocean? I think of the ocean as the exact opposite of the desert. I have seen the ocean maybe five times in my life, so my personal experiences are limited, but my first impression of the ocean was that it was overwhelmingly vast and intimidating. However, when I look beyond my own fears, I think of the ocean as a biodiverse ecosystem that is dependent on its diversity.

Courtesy: National Science Foundation

It’s hard to imagine two more different places on earth in terms of environment and animals. So what is the connection between the two, between the desert and the oceans? Dust.

Around 450 million tons of dust enter the world’s oceans every year, and two-thirds of this dust comes from the Sahara desert. Dust from deserts can stay in the air for weeks and travel thousands of miles. Dust from the Sahara crosses the Atlantic Ocean and deposits into the Caribbean Sea or even the Amazon rainforest. This dust carries vital nutrients such as phosphorus, nitrogen, and iron, which are especially important to ocean ecosystems.

Visualization of aerosols produced around the world, from dust to sea spray. The tan and red colors denote traveling dust aerosols.

Source: NASA/Goddard Space Flight Center

 

The importance of dust deposits in the ocean is apparent in marine phytoplankton. Marine phytoplankton are microscopic plant-like organisms that sit at the bottom of the ocean food chain and are critical to the survival of other ocean creatures. They not only serve as a food source for larger creatures but also consume carbon dioxide and produce oxygen; in fact, over half of the world’s oxygen is produced by marine phytoplankton. The rate-limiting step for marine phytoplankton growth in the ocean is iron availability, which for a majority of the ocean comes from the dust produced by the great deserts of the world.

However, as fantastic as I find dust being carried on the wind to feed the world’s ecosystems, there can be downsides. For example, when a lot of dust enters the ocean in a short amount of time due to dust storms in the Sahara desert, phytoplankton can form overgrown blooms. These blooms can deplete the oxygen in the surrounding water and cause a dead zone in the ocean.

Despite the previously stated downside to dust, the ocean needs the desert. We, as a species, need the desert. So the next time you are thinking of the desert, think of the dust and where it’s going.

Peer edited by Kaylee Helfrich.


The Science of Survivorship

As a cancer researcher, I often wonder about patients after their ordeal with cancer. How does the body change after facing a life-threatening illness? Do cells in our body hold the memory of disease in some way? Survivorship is a word that describes life after a traumatic event, a life in which many aspects of health, from the psychosocial to the physical, are changed. In this blog post, I hope to delve into the cellular level of survivorship and explore how surviving cancer and cancer therapy can alter our biology fundamentally.

All cells in our body contain DNA, which is an instruction manual for the day-to-day duties that the cell must perform to sustain life. DNA is what we inherit from our ancestors, our parents, and what we pass on to our children. Therefore, the cells in our body are very careful about keeping DNA intact and unchanged, a property biologists call “fidelity.” Interestingly, most cancer patients carry permanent changes to their DNA after treatment. One striking example is patients who receive bone marrow transplants: these patients will always have two different types of DNA in their bodies, their own DNA and their donor’s! In other cases, the patient’s own DNA has minor or major alterations, changing the writing of the instruction manual, which in turn can affect how the manual is read and interpreted.

If you were to look at many different cancer survivors, even long after they have stopped cancer therapy and have been cancer-free, DNA from various parts of their bodies would show low levels of damage. What does damage mean? Think of a china vase: if you dropped it so that it didn’t shatter but merely cracked on the surface, that would be the closest metaphor for what’s happening here. If the china vase is the DNA, you can imagine that the DNA is damaged but not completely deteriorated. How does this damage occur? During the course of cancer therapy, patients are exposed to drugs or radiation that directly damage DNA. Most of the time, the cancer cells are the ones impacted; however, normal cells can also be affected. The major consequence of this damage is accelerated aging in most survivors: their tissues and cells look as though they are from a much older individual.

Prolonged stress and damage to the DNA from cancer treatment can change the way a cell reads its DNA. Epigenetics is the study of how DNA is read and interpreted in the cell, and epigenetic marks on the DNA help the cell figure out which parts of the DNA to read. In cancer patients, epigenetic marks are globally changed over the course of therapy, and remarkably, these changes remain long after exposure to chemotherapy is stopped. Patients with cancer have more epigenetic marks signifying “do not read” added to their DNA. In normal individuals, an increase in these marks has been associated with routine aging. In cancer patients, irrespective of chronological age, these marks can be present post-therapy, signifying profound molecular aging. What remains to be investigated is whether these marks and their accumulation in survivors are a direct result of the toxicity of therapy or a by-product of DNA damage leading to changes in epigenetics.

With roughly 15.5 million cancer survivors currently alive in the US alone, it becomes absolutely critical to understand the biology of survivorship. The science of survivorship helps us understand the biological burden of going through cancer therapy and, in turn, this valuable knowledge allows us to develop less burdensome therapies as a result. From a physical and now a molecular standpoint as well, survivorship is a monumental feat of resilience that comes with an unwanted side-effect: aging.

Peer edited by Denise St. Jean and Eliza Thulson.


To Dye or Not To Dye

Dyeing hair is a common, but not recent, beauty practice. Hair dyeing (coloring) has been around for thousands of years, using plant-based dyes such as indigo and turmeric before synthetic dyes were invented. I recently wondered, “What kind of science goes into modern hair coloring? What is the process?” I have had my hair colored before but never once wondered how the coloring occurs. I reached out to my very good friend and cosmetologist, Erin Rausch, and asked her to explain the process.

Like most working professionals, cosmetologists attend school before they begin working with actual clients. While each school has a different curriculum, Erin attended Aveda Institute in Madison, where she spent six months in a classroom learning about anatomy, chemistry, color theory, and the study of the hair and scalp – trichology. In cosmetology, it is exceedingly important to understand the biology of hair, particularly in the case of hair coloring. Like the skin, hair has pores and layers. The hair layers open with heat and close with cold, kind of like opening an umbrella when it’s sunny and closing it when it’s dark. A good cosmetologist understands how chemicals affect and interact with the hair and studies the existing melanin (pigment) of the hair in order to later replace it with new color. Also like skin, hair contains different kinds of melanin, and different melanins result in different hair colors: pheomelanin gives hair a pink to red hue (pheomelanin is also in lips), while eumelanin determines the darkness of the hair. More eumelanin means darker hair; less means lighter hair.

Microscopic image of human hair. Hair has layers, like skin, onions, and Shrek.

Cosmetologists need to consider every aspect of the customer’s hair before coloring. First, hair color is sorted into levels 1-10, with 1 being black and 10 being the whitest blonde. Aside from natural hair color, it is important to consider the type (thickness of the hair strand), texture (straight or wavy), and density (number of strands of hair per square inch) of the hair, whether the hair has had previous treatment, and the distance from the scalp that you’re treating. The scalp produces heat, which causes the color to react a little differently near the scalp than away from it. For this reason, it is important to consider the density of the hair: a higher density of hair results in more heat near the scalp, changing how color will interact with the hair near the scalp compared to hair away from the scalp.

Hair levels determined by hair color.

Once all of the variables of the client’s hair have been evaluated, cosmetologists then formulate how to color. There are several steps in the hair coloring process. First, they always formulate according to the existing natural level of hair color (levels 1-10). Dyeing hair is essentially a melanin exchange, where natural melanin is swapped for a different, synthetic one. Removing natural melanin is done with ammonia or bleach partnered with peroxide, causing a chemical reaction. There are different strengths of peroxide, and the one used depends on type, texture, density, and final desired hair color. It’s extremely important to be careful with peroxide strength, as certain volumes can cause chemical burns and adverse chemical reactions. A common analogy: peroxide is like the length of time or heat used to cook spaghetti. If you don’t have enough, your spaghetti isn’t cooked, but if you have too much, your spaghetti is overdone. Too much peroxide will severely damage your hair, like overcooking pasta. Depending on the natural hair color or previous hair treatment, it might take more than one peroxide treatment step to remove melanin.

Color theory. To neutralize Natural Remaining Pigment, the complementary color is used. For example, Level 10 hair (lightest blonde) has yellow natural remaining pigment, so violet would neutralize the NRP.

After the peroxide treatment, there is still pigment remaining in the hair, called Natural Remaining Pigment (NRP). It is important to neutralize the NRP so it doesn’t affect the future desired hair color. NRP is neutralized by adding the opposite, or complementary, color. If Erin wanted to neutralize level 10 (lightest blonde), she would use violet. All of these variables are part of color theory, and the treatment depends on the look you want to create (I should have paid more attention in art class).
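The neutralizing rule can be sketched as a tiny lookup over the classic complementary pairs of the artistic red-yellow-blue color wheel. This is an illustrative toy based on standard color theory, not Erin’s actual formulation chart:

```python
# Classic complementary pairs on the artistic (red-yellow-blue) color wheel.
# A toy lookup for illustration, not a professional formulation chart.
COMPLEMENTS = {
    "yellow": "violet", "violet": "yellow",
    "red": "green", "green": "red",
    "blue": "orange", "orange": "blue",
}

def neutralizer(natural_remaining_pigment):
    """Return the color that neutralizes the given NRP."""
    return COMPLEMENTS[natural_remaining_pigment.lower()]

# Level 10 (lightest blonde) leaves yellow NRP, so:
print(neutralizer("yellow"))  # violet
```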

A client’s hair is sectioned as color(dye) is being applied.

Once the natural melanin has been removed and the NRP has been neutralized, it’s time to color the hair! The amount of hair color product needed depends on hair type and desired color(s). The bigger the hair section (as can be seen in the image above), the less evenly the color will permeate the hair strands. Thus, more product is needed so the hair will be properly colored and the color will last a long time.

Erin strongly recommends against at-home hair dye kits. While they may seem cost-effective, their methods for hair coloring can be damaging to your hair. These kits are made to achieve one color and are only designed for people who have no color on their natural hair. Since people have many different kinds of hair and hair color, there is a large chance the at-home dye color you pick will be very different from what you want. The peroxide in these kits is also much stronger than the peroxide used in salons, which makes the color much harder to remove or correct later on. In short, at-home hair dye kits are very damaging to hair.

To see Erin’s portfolio, check out her Instagram!

Peer edited by Mikayla Armstrong.


Why Pharmaceutical Companies Care About Your Saliva

Earwax stickiness. Neanderthal ancestry. Caffeine metabolism. Tasting soap when you eat cilantro. Direct-to-consumer genotyping companies like 23andme boast this kind of information in exchange for a tube of your finest spit and a chunk of change roughly equivalent to a week’s worth of groceries. In the process of unlocking the whimsy of the human genome, however, companies that deal in providing genetic information to consumers have amassed an enormous collection of data.

Unique genetic variants are responsible for many of our traits — including disease.

Nearly 80% of 23andme’s consumers consent to have their information used for research purposes. One of the most valuable applications for information about millions of genomes and their owners is to uncover genes that can be targeted for therapies to treat diseases. A new multi-million-dollar collaboration between pharmaceutical company GlaxoSmithKline and 23andme promises to make identification of pharmaceutical targets quicker and aid in the design of therapies that are more likely to succeed in clinical trials on an unprecedentedly large scale.

But what does this have to do with whether you like the taste of cilantro? To find out, we first need to dive into what kind of data direct-to-consumer genotyping companies have and how they get it.

In order to keep costs lower for the company and the consumer, most direct-to-consumer genetic testing companies use a simple method called a DNA microarray to test for specific snippets of the genome that are correlated with certain characteristics.

DNA microarray chip. Spots glow when a perfect match is made between query DNA and probe DNA that is linked to the chip.
Source: NASA

To generate the DNA microarray, microscopic “spots” of DNA are linked to the surface of a tiny chip. These “probe” DNA sequences pair with specific sequences in the human genome that signify certain traits. Often, these specific sequences are portions of the genome where one individual may differ from another by a single nucleotide. The substitution of one base pair for another at a specific site in the genome is called a single nucleotide polymorphism, or a SNP. Many SNPs are commonly associated with specific traits, and are therefore a mainstay for direct-to-consumer genotyping companies. Only a fraction of existing SNPs are known to correlate with certain traits or diseases, limiting the number of probe sequences needed to test a single individual.

The company typically requests a saliva sample, which contains both white blood cells and skin cells from within the mouth that are suitable for purifying large quantities of DNA. Once the saliva sample is received, DNA is extracted and subjected to an amplification process that generates more copies of the DNA in the sample. The DNA is then heated so that it no longer binds to its original pairing strand. Proteins that can cut DNA are then used to chop the genome into smaller pieces. From here, the DNA fragments are tagged with a unique fluorescent molecule that can track the position of the query DNA on the microchip. The processed sample is then applied to the chip, where query DNA will pair with probe DNA if it is complementary in sequence. To obtain test results, the microchip is analyzed for bright spots, which represent a positive result for the genetic variant at that position on the microarray. Direct-to-consumer genetic testing companies typically include a catalog of probe DNA sequences that are strongly associated with specific traits, common ancestral backgrounds, and predisposition to a small subset of diseases.
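The pairing step can be illustrated in a few lines of code: a fluorescently tagged query strand makes a spot glow only if it is the exact reverse complement of the probe fixed there. The sequences below are invented examples; real probes are much longer, and the matching happens chemically, not by string comparison:

```python
# Toy sketch of microarray hybridization: a spot "glows" only when the
# tagged query strand is the exact reverse complement of the probe.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def spot_glows(probe, query):
    """A perfect probe-query match produces a bright spot."""
    return query == reverse_complement(probe)

probe = "ACGTTAGC"                     # probe DNA linked to the chip
query = reverse_complement(probe)      # complementary fragment from the sample
print(spot_glows(probe, query))        # True
print(spot_glows(probe, "ACGTTAGC"))   # False: identical, not complementary
```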

So how can this information be useful to pharmaceutical companies? The answer is in the numbers. Associating features of the genome with traits or disease requires a large sample size to give us more confidence that the association is real and doesn’t simply occur because of random chance. Take the example of a SNP in the genome. Each human genome has about 10 million SNPs, or about 1 SNP per 300 base pairs. Some of these SNPs are insignificant and will be averaged out in the context of 2 million genomes. However, if a SNP is significantly associated with a disease, it will be found proportionally more often in the genomes of people who have or are predisposed to a disease compared to unaffected individuals. Pharmaceutical companies increasingly rely on finding new drug targets to develop new therapies, and are looking deeper into the genome to find new gene targets. Where to start? Genes with variations that correlate with certain traits or diseases – the exact information you can purchase from a direct-to-consumer genotyping service.
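As a toy illustration of that frequency argument (with invented counts; real studies use formal statistics, such as chi-square tests, across millions of genomes rather than a simple ratio):

```python
# Compare how often one SNP appears in "case" genomes (people with the
# disease) versus "control" genomes (unaffected people). Counts are invented.
def variant_frequency(genomes_with_variant, total_genomes):
    return genomes_with_variant / total_genomes

cases_freq = variant_frequency(620, 1_000)     # genomes from people with the disease
controls_freq = variant_frequency(410, 1_000)  # genomes from unaffected people

# A SNP found proportionally more often in cases hints at an association.
enrichment = cases_freq / controls_freq
print(cases_freq > controls_freq)  # True
print(round(enrichment, 2))        # 1.51
```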

An approach to parsing these millions of genomes is referred to as PheWAS (phenome-wide association study). This study begins by looking at a genetic variant such as a SNP, and seeks to determine whether this variant is associated with a particular trait or set of traits in the individuals who have the variant. The data collected by direct-to-consumer genotyping companies are well-suited to be analyzed by PheWAS, as the companies often collect self-reported survey information on their customers. Myriad symptoms can be linked back to a single genetic variant with the power of huge genomic datasets to jumpstart the therapies of the future.

Old approaches allowed scientists to start with a trait and find common genetic variants in individuals with this trait. Here, scientists trace facial features back to common genetic variants. Approaches like PheWAS, however, start with the genetic variant and analyze the range of traits associated with the variant in many individuals.
Source: Kaustubh Adhikari, T. F., et al. Nature Communications. (2016) (Wikimedia Commons)

Access to large databases of genotyping information may also give pharmaceutical companies further insight into more complex traits that are determined by a wide array of genes. An example of a complex trait is human height. There are a massive number of SNPs that contribute to the height of an individual. If you divide the genome up into ~30,000 equivalent-sized sections, nearly every one of these sections contributes to a person’s height. Many of the genetic variants that affect someone’s height are seemingly random and can only be identified by using large amounts of data. Similar to height, human disease is also connected to a combination of many genetic variants. It’s likely that pharmaceutical companies will begin looking not only at the genes that we think contribute to disease, but also throughout the rest of the genome for previously unknown variants that might be to blame. This information can be used to find new genes to target and help to avoid clinical pitfalls of previous drug strategies which were largely inefficient.

Despite the promises of big data in big pharma, it is as yet unclear whether partnerships between direct-to-consumer genotyping companies and pharmaceutical companies will be fruitful. Undoubtedly, developing drugs to treat diseases that have ineffective or nonexistent therapies is an attractive possibility. Furthermore, projects like these could promote a personalized medicine approach, where treatments are tailored to one’s specific therapeutic needs, as signified by their genome. An in-house effort within 23andme uses customer data to peg new drug targets, and the company claims to have uncovered 10 new targets for drug development. However, this collaboration raises the issue of consumers paying for the privilege of having their genomes used to garner profits for both their genotyping service and a pharmaceutical giant. A caveat to using such datasets is that the sample genomes mostly represent middle-to-upper-class individuals who live in Western countries, limiting the scope of the potential therapies developed as a result of the collaboration. Nevertheless, the marriage of genotype big data and drug development holds exciting promise for the future of targeted therapies.

Peer edited by Jacob Pawlik and Giehae Choi.


Dealing with automatic sinks

It’s time to wash your hands and help prevent the spread of flu, but when you put your hands under the automatic sink, nothing happens. 

You give your hands a shake, praying the sink actually turns on. 

You get frustrated, take a step back, and change your angle of attack. 

The sink finally turns on, but how exactly does this sink work and what is a way to make this whole process easier? 

I had this exact series of events happen and decided I needed to investigate and make a video about it (you can check that out here). If you can’t watch that video I’ll hit the high points here. 

First thing: what is the sink “looking” for? 

My first thought was maybe the sink is looking for visible light, but I turned out all the lights, and the sink still turned on! 


IR spectrum. Source: http://dew.globalsystemsscience.org/key-messages/near-infrared-and-the-electromagnetic-spectrum/irdiagram.png?attredirects=0

That means it has to be looking for something else: maybe another type of light. Maybe it was looking for infrared (IR) light, which is lower energy than the light we can see. This IR light could be in the form of thermal IR (heat) coming off people’s hands. That would mean something at room temperature that doesn’t give off heat, like a notepad, shouldn’t trigger it, but when I put a notepad in front of the sink, it turned on! Okay, well maybe it was looking for a different type of IR light.

I knew of a trick: digital cameras can “see” near-IR light as a white-purple color. You can test this out by pointing a remote at a camera and pressing buttons. I took out my smartphone and pointed it at the sink to look for near-IR light, and to my surprise, the sink turned on from several feet away! This was very strange; I’ve never seen a sink turn on from that far away. I did some digging and found out that my phone camera uses a near-IR laser for focus. When I looked at my phone camera with another camera, lo and behold, there was the tell-tale white-purple color. This means that when my phone camera (with the camera app open) is pointed at the sink, the sink sees the IR laser as a signal to turn on.

Well if this is what the sink is looking for, where does the near-IR light come from? Maybe the sink has its own near-IR light and then is looking for the reflection. To test this I used my hand at just the right angle. As I adjusted my hand to reflect more or less light, I could turn the sink on or off at will! That means that there was some threshold of light the sink needed to turn on.

At this point we know the sink is looking for near-IR light that bounces off whatever is in front of it, but that doesn’t help with actually using these sinks. If the sink needs a certain amount of IR light to bounce back, then your hands should be placed right in front of the sensor to bounce back the most IR light. When I tested putting my hands in different locations in the sink, it turned on most reliably when my hands were right in front of the sensor.

That doesn’t explain why the sink doesn’t seem to turn on right away sometimes. Maybe the sink needs the IR light to be bounced back for a certain length of time. I tested this by moving my hand in front of the sensor for different amounts of time. Any time my hands were in front of the sensor for less than 1 second, it didn’t turn on. Once my hand was there for about 1 second the water would come on.

Conclusion time: the sink that I use most often looks for near-IR light bounced back to the sensor and needs a specific amount of near-IR light to be bounced back for at least 1 second. To avoid the frustration of automatic sinks here’s my recommendation: make your hand flat, hold it still in front of the sensor for about 1 second, and keep your hands in front of the sensor as you wash your hands.
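The behavior worked out above can be sketched as simple sensor logic: open the valve only when the reflected near-IR signal stays above some threshold for a full second. The threshold, units, and sample rate here are invented for illustration; this is a guess at the logic, not the sink’s actual firmware:

```python
# Minimal sketch of the inferred sensor logic: the valve opens once the
# reflected near-IR reading stays above a threshold for one full second.
# Threshold value, units, and sample rate are invented for illustration.
def valve_opens(readings, threshold=50, dwell_s=1.0, sample_interval_s=0.1):
    """Return True once enough consecutive above-threshold samples arrive."""
    needed = round(dwell_s / sample_interval_s)
    consecutive = 0
    for reflected_ir in readings:
        consecutive = consecutive + 1 if reflected_ir >= threshold else 0
        if consecutive >= needed:
            return True
    return False

# A hand waved past quickly (0.5 s above threshold) fails...
print(valve_opens([80] * 5 + [10] * 10))   # False
# ...but a hand held steady in front of the sensor for 1 s succeeds.
print(valve_opens([80] * 12))              # True
```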

Hopefully this flu season you will get a little less annoyed with that automatic sink!

Peer edited by Shaye Hagler.


A World Without Cheese

Imagine a world without pizza, nachos, cheeseburgers, mozzarella sticks, macaroni and cheese: a world without cheese.

Source: https://www.flickr.com/photos/acme/464922464/

The significance of cheese goes well beyond its delicious taste.
From: Leon Brocard.

Living in a world with thousands of cheeses from many countries and cultures, it is difficult to imagine there was a time in human history when cheese did not exist. Some of us cheese lovers might even feel a little faint thinking about this so-called cheese-less time. Thankfully, new research suggests the dark time before cheese may be shorter than originally thought. Remnants of cheese fats were found in pottery in Croatia dating back to between 6000 and 5000 B.C. Previous studies suggested cheese did not arrive in the Mediterranean until 3000-1200 B.C., about 3000 years later! The development of cheese making in these ancient villages may have decreased mortality rates in children and aided in the evolution of dairy tolerance, so we can eat our favorite cheeses today.

You might be wondering: how do you identify cheese remnants that are over 7000 years old? The secret lies in fat. Most foods (whether from plants or animals) contain fat, and anyone who has tried to lose weight can tell you fats are difficult to make disappear, allowing them to withstand the test of time. Fats are made up of chains of carbon, an element found in all life on Earth. There are three naturally occurring forms of carbon, called carbon isotopes. In fats, carbons are attached in chains that can vary in length. By looking at variations in chain length and the carbon isotopes found in the fat, scientists can determine where the fat came from: meat, vegetables/plants, or dairy. Furthermore, scientists can even determine if the fats are from fermented dairy products, such as cheese. Using this technology, the scientists identified fats in ancient pottery discovered in Croatia that are likely from cheese. A similar technology that looks at what makes up proteins, instead of fat, was used to identify the oldest solid cheese, which was found in an Egyptian tomb earlier this year.

If eating cheese increases an individual’s chance of surviving to adulthood, then over multiple generations the population could come to contain more and more people who are able to eat cheese. This diagram assumes everyone who lives to adulthood has three children with the same “cheese-eating status” as their parent.

While many of us might think the significance of cheese lies in its ability to enhance our food with melty, cheesy goodness, the impact of cheese-making on ancient peoples was likely far greater. Cheese would have provided vital protein, calories, and fat for people in ancient villages, especially during winter or times of famine and disease. Scientists also think the high protein and calorie content of cheese would have been a great source of nutrition for children and likely aided in their survival to adulthood. Furthermore, the advent of cheese-making in these ancient villages probably aided in the evolution of humanity’s ability to digest dairy. Most people living in ancient times were lactose intolerant. Because cheese contains less lactose than milk or other dairy products, it is likely people in 5000 B.C. could tolerate small amounts of cheese. However, if children who could eat more cheese were more likely to survive to adulthood and then have children who could also consume larger amounts of cheese, over time the people in these villages would become able to eat more and more cheese (see diagram above). Over the course of thousands of years, the human population would evolve to be able to eat foods with more lactose, such as milk or ice cream.
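The generational feedback loop described above can be sketched as a toy simulation. The starting counts and survival rates below are illustrative assumptions for the sake of the sketch, not figures from the study; only the "three children per surviving adult, same cheese-eating status as the parent" rule comes from the diagram.

```python
# Toy model of how a small survival advantage for cheese-tolerant
# children could spread tolerance through a population.
# All numbers here are illustrative assumptions, not data.

def simulate(generations, start_tolerant=1, start_intolerant=99,
             tolerant_survival=0.9, intolerant_survival=0.7, children=3):
    """Return the fraction of cheese-tolerant people after each generation."""
    tolerant, intolerant = float(start_tolerant), float(start_intolerant)
    fractions = []
    for _ in range(generations):
        # Survivors to adulthood each have `children` offspring
        # with the same cheese-eating status as the parent.
        tolerant = tolerant * tolerant_survival * children
        intolerant = intolerant * intolerant_survival * children
        fractions.append(tolerant / (tolerant + intolerant))
    return fractions

if __name__ == "__main__":
    for gen, frac in enumerate(simulate(20), start=1):
        print(f"generation {gen:2d}: {frac:.1%} cheese-tolerant")
```

Even a modest survival edge compounds: starting from 1% tolerant, the tolerant share rises every generation and eventually dominates the village, which is the intuition behind the diagram.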

A world without cheese would surely be a less tasty one, but the more we learn about the origins of cheese, the more it seems that a world without cheese would not be the world we know today.

Peer edited by Rita Meganck.

Follow us on social media and never miss an article:

In the Wake of Hurricane Florence: How Genetic Tools Can Prevent Ecosystem Damage

https://www.acc.af.mil/News/Article-Display/Article/1638448/afcent-command-and-control-operations-weather-the-storm/

Hurricane Florence approaches the Carolinas

Hurricane Florence was devastating to much of the Carolinas. The flooding that ensued not only destroyed homes and livelihoods, but also took a heavy toll on animal agriculture. Florence caused the deaths of millions of animals and led to massive overflows, or “overtopping,” of animal waste, particularly from swine production facilities. North Carolina is the nation’s second-largest producer of swine, and the 9.7 million pigs who call North Carolina home produce nearly 10 billion gallons of waste annually. The damage from Florence is disastrous not only from an economic perspective, but also from an environmental one.
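To put the waste figures quoted above in perspective, a quick back-of-the-envelope calculation (using only the two numbers from the paragraph) gives the per-animal scale:

```python
# Rough per-pig waste estimate from the statewide figures cited above.
pigs = 9.7e6              # pigs in North Carolina
gallons_per_year = 10e9   # nearly 10 billion gallons of waste annually

per_pig_year = gallons_per_year / pigs   # ~1,000 gallons per pig per year
per_pig_day = per_pig_year / 365         # a few gallons per pig per day

print(f"{per_pig_year:,.0f} gallons per pig per year")
print(f"{per_pig_day:.1f} gallons per pig per day")
```

That works out to roughly a thousand gallons per pig per year, which makes clear why storage and treatment of this waste is such a large-scale engineering problem.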

In livestock production, lagoons are large basins primarily used for the treatment of animal waste. Rather than simply being a collection site for waste, proper care of a lagoon involves “feeding” it with microbes that help degrade the waste into usable fertilizer. However, during massive flooding events the water level in these lagoons can quickly rise until they overflow. This disrupts the waste treatment process and exposes the environment to large quantities of nitrogen, phosphorus, and even pathogenic material derived from animal waste. According to the North Carolina Department of Environmental Quality, at least 32 lagoons have overtopped due to Florence, and another 61 lagoons have sustained structural damage or are close to overflowing.

An imbalance of nitrogen and phosphorus in any body of water contributes to a condition called eutrophication, in which excess nutrients allow blooms of algae or plant matter. As these photosynthetic organisms flourish, they cause drastic changes to their environment. Algal blooms can be especially damaging: many algae produce compounds that quickly reach toxic levels, algal overabundance clouds the water, and their rapid growth consumes carbon, raising the pH of the surrounding water. Eventually, the algae change their own environment so extensively that they quickly die off, and the subsequent microbial breakdown of the dead algae depletes oxygen. This rapid removal of oxygen produces a “dead zone” that is unsustainable for most life.

While nitrogen and phosphorus can be detrimental to the environment in large quantities, they are still necessary components of any animal’s diet. Nitrogen and phosphorus are used as building blocks to produce DNA, proteins, and other compounds necessary for growth. Therefore, agricultural production will need to find ways to provide these necessary building blocks while also preventing an excess of them in the environment. Thankfully, animal science research efforts have produced a unique potential fix!

Several years ago, researchers at the University of Guelph in Canada developed an animal designed with the world in mind: the Enviropig! The Enviropig is a transgenic animal, meaning its genome has been edited to contain a gene from a different species. In the case of the Enviropig, a bacteria-derived gene for the enzyme phytase is expressed in the pigs’ salivary glands, allowing the animal to break down more of the phosphorus in its diet. With more phosphorus broken down into a form the animal can digest, less phosphorus winds up in the pigs’ waste, meaning less phosphorus contributes to eutrophication. A win for the pigs, now able to get better nutrition from their diet, and a win for humans and the environment, as we are able to produce and enjoy a more environmentally sustainable product. While the initial work on this project was put on hold in 2012 due to public scrutiny of genetically modified organisms (GMOs), a team of researchers in China developed their own version of an environmentally-conscious pig earlier this year. This new animal is able to break down greater amounts of both phosphorus and nitrogen from its diet, reducing their abundance in animal waste.

Neither animal has been approved for use in the United States yet, but both hold enormous potential to reduce the environmental impacts of animal agriculture. While the Enviropig and similar transgenic animals cannot yet solve every environmental issue, had they been mainstream prior to Hurricane Florence, perhaps there would be less concern about ecosystem health after the flooding. Natural disasters are as unpredictable as they are unfortunate and dangerous; however, genetic tools such as these could provide one additional safeguard to protect the environment and public safety in the future.

https://www.publicdomainpictures.net/en/view-image.php?image=215663&picture=pigs

Livestock, such as pigs, can be genetically designed to help protect the environment!

Peer edited by Joanna Warren.
