Cambridge Researchers use Mouse Embryonic Stem Cells to Grow Artificial Mouse “Embryos”

Let’s start at the very beginning. When a mammalian egg is successfully fertilized by a single sperm, the result is a single cell called a zygote. A zygote has the potential to grow into a full-bodied organism. It is mind-boggling that this single cell, containing the genetic material from both parents, can divide to make two cells, then four cells, then eight cells, and so on, until it becomes a tiny ball of 300-400 stem cells.

Early Development and Stem Cell Diagram, modified by author to include ESC and TSC labels.

At these early stages, these stem cells are totipotent, meaning that they have the potential to become either embryonic stem cells (ESCs), which will eventually become the fetus itself, or extraembryonic trophoblast stem cells (TSCs), which go on to help form the placenta. That ball of ESCs and TSCs develops into a blastocyst, with a tight ball of ESCs on the inside and a layer of TSCs on the outside (see Figure 1).

You might imagine the blastocyst as a pomegranate, with the seeds representing the ESCs and the outer skin representing the TSCs. The ESCs have the potential to transform, or differentiate, into any type of cell in the entire body, including heart cells, brain cells, skin cells, etc., which will ultimately become a complete organism. The outer layer TSCs have the ability to differentiate into another type of cell that will ultimately attach itself to the wall of the uterus of the mother to become the placenta, which will provide the embryo with proper nutrients for growth. 

Scientists in the field of developmental biology are absolutely bonkers over this early stage of embryogenesis, or the process of embryo formation and development. How do the cells know to become ESCs or TSCs? What tells the ESCs to then differentiate into heart cells, or brain cells, or skin cells? What signals provide a blueprint for the embryos to continue growing into fully-fledged organisms? The questions are endless.

The challenge with studying embryogenesis is that it is incredibly difficult to visualize and research the development of mammalian embryos, as they generally do all of their growing, dividing, and differentiating inside the uterus of the mother. In recent years, there have been multiple attempts to grow artificial embryos in a dish from a single cell in order to study the early stages of development. However, previous attempts at growing artificial embryos from stem cells have faced the challenge that embryonic cells are exquisitely sensitive and require the right environment to properly coordinate with each other to form a functional embryo.

Enter, stage right: several teams of researchers at the University of Cambridge, who are successfully conducting groundbreaking research on how to grow artificial mouse embryos, often called embryoids, in a dish.

In a paper published in Development last month, David Turner and colleagues in the Martinez-Arias laboratory report a unique step-by-step protocol developed in their lab that uses 300 mouse ESCs to form tiny balls that mimic early development.

Mouse embryonic stem cell aggregates with polarized gene expression in a dish (4 days in culture). Image courtesy of authors.

These tiny balls of mouse ESCs are collectively termed “Gastruloids” and are able to self-organize and establish a coordinate system that allows the cells to go from a ball-shape to an early-embryo shape with a head-to-tail axis. The formation of an axis is a crucial step in the earliest stages of embryo development, and it is exciting that this new model system may allow scientists to better study the genes that are turned on and off in these early stages.

In a paper published in Science this past April, Sarah Harrison and her team in the Zernicka-Goetz laboratory (also at Cambridge) report another technique in which mouse ESCs and TSCs are grown together in a 3D scaffold instead of simply in a liquid media. The 3D scaffold appears to give the cells a support system that mimics the environment in the uterus and allows the cells to assemble properly and form a blastocyst-like structure. Using this artificial mouse embryo, the researchers are attempting to simulate the growth of a blastocyst and use genetic markers to confirm that the artificial embryo is expressing the same genes as a real embryo at any given stage.

The researchers found that when the two types of stem cells, ESCs and TSCs, were put together in the scaffold, the cells appear to communicate with each other and go through choreographed movement and growth that mimics the developmental stages of a normal developing embryo. This is enormously exciting, as models like this artificial embryo and the Gastruloid have the potential to be used as simplified models to study the earliest stages of embryo development, including how ESCs self-organize, how the ESCs and TSCs communicate with each other to pattern embryonic tissues, and when different genetic markers of development are expressed.

It is important to note that this artificial embryo is missing a third tissue type, called the endoderm, that would eventually form the yolk sac, which is important for providing blood supply to the fetus. Therefore, the artificial embryo does not have the potential to develop into a fetus if it is allowed to continue growing in the dish. The fact that these artificial embryos cannot develop into fully-fledged organisms relieves some of the controversial ethical issues of growing organisms in a dish, and will allow researchers to study critical stages of development in an artificial system.   

These techniques and discoveries developed by these teams of researchers have the potential to be applied to studies of early human development. These models may prove especially useful in studying how the maternal environment around the embryo may contribute to fetal health, birth defects, or loss of pregnancy. In the future, artificial embryos, coupled with the not-so-futuristic gene editing techniques that are currently in development to fix disease genes, may prove key in the quest to ensure healthy offspring. 

Peer Edited by Nicole Smiddy and Megan Justice.

Follow us on social media and never miss an article:


Cinnamon, Bam!

Many of us associate the holiday season with the smell of cinnamon.

Well, the holiday season is upon us. Our calendars and days are now filled with shopping, travel, and social gatherings with friends, family, and loved ones. As the temperature outside turns cold, we turn to many of our favorite treats to fill our bellies and help keep us warm. Our mouths water as we think about all of the delectable items that line our kitchens and tables. I can picture it now… a warm fire keeping the room nice and toasty, glass of wine in hand, friends and relatives conversing and catching up, and, of course, avoiding awkward conversations with Uncle Gary. All while hovering around various piles of unknown cheeses, meats, and delicious stacks of sweets. And if you’re lucky, you may even find a warm, sticky stack of homemade cinnamon buns. As it turns out, these may be just the thing to reach for to help burn off some of that unwanted extra “padding” that comes with all of those holiday favorites.

What’s that you say? Cinnamon buns burn fat? Well, before you go eating the whole tray, it’s not really the cinnamon buns themselves that may help burn fat, but the cinnamon for which they are named. It tastes great, you can use it in all sorts of dishes, and it may accelerate fat loss. I’m a fan of all of those things. Now, you probably find yourself asking, where can I learn more about this awesome spice? Well, look no further, my friend. I am about to lay enough cinnamon-spiced knowledge on you to guarantee that you can bore your friends and family to tears with your cinnamon information stream at your holiday gathering. You’ll be less popular than Uncle Gary.

Cinnamon contains a compound known as cinnamaldehyde. Cinnamaldehyde is a naturally occurring chemical found in the bark of cinnamon trees that gives cinnamon both its characteristic flavor and odor. A recent study shows that cinnamaldehyde may even help burn fat by increasing metabolism and your body’s ability to break down fat! I know, it’s pretty magical. Now, before you go running around stabbing cinnamon trees with a spout, there are a few things you should know. Primarily that you have to fly to Sri Lanka, which is expensive but totally worth it since it’s a beautiful tropical island in the Indian Ocean. And you can even stay at a place called Cinnamon Bey, which looks like this picture I found of it on the interweb. Pretty sweet, huh? (See what I did there!)

Sri Lanka is located off the southeast coast of India.

Anyway, the purest source of cinnamon-derived cinnamaldehyde is the Ceylon cinnamon tree (say that several times fast while jamming a sticky bun in your face!). Also known as the “True” cinnamon tree, it is named after the historical moniker of its native country, Sri Lanka (formerly Ceylon). The country still produces and exports up to 90% of the world’s true cinnamon. The other 10% comes from the Seychelles and Madagascar, which are equally far and equally awesome as travel destinations. However, there are six species of cinnamon sold commercially around the world. So if you prefer the regular stuff found cheaply at most grocery stores, then you will have to head to China or Southeast Asia for the most common variant, cassia, which is considered to be less, um, “top-shelf.”

The cassia variant is cultivated on a larger scale and is coarser than Ceylon cinnamon. It also has a higher oil content and contains more cinnamaldehyde, which gives it a harsher, stronger, spicier flavor than Ceylon cinnamon. Huh? Wait, you thought more cinnamaldehyde might equal more fat loss? You are correct, my friend, but before you book that ticket to Guangdong and attempt the cinnamon challenge for the thirtieth time, you should know that the cassia variety also contains coumarin, which is found only in trace amounts in the Ceylon variety. Coumarin is a naturally occurring compound, related to blood-thinning drugs like warfarin, that can cause damage to the liver in high doses. So, take your pick, though if you really want that good, pure cinnamaldehyde, the “True” kind, then you had better hustle it to Sri Lanka.

However, getting there is only part of the story. Isolating cinnamaldehyde from the bark of the cinnamon tree is a slightly tricky process that involves some rather unsavory chemicals, the potential for explosions, and a few fancy science machines (namely a mass spectrometer, to confirm what you’ve extracted) in order to pull the oil out of the bark and leave you with that tasty, cinnamoney goodness. What? You thought you could just grab a tree and squeeze really hard? No, no, no. That might work for your lemongrasses, aloes and coconuts, but not cinnamon.

Actually, I’m guessing from your weird tree-squeezing thoughts that you take cinnamon for granted. I mean…your cinnamon disrespect is understandable, since you can buy it pretty much everywhere and it’s almost as prolific as pumpkin spice, but this wasn’t always the case. In fact, for most of history true cinnamon was extremely rare, since there were no planes or cars…or Amazon, well the internet really…and it only came from one relatively small island in the Indian Ocean. As such, until the 1500s cinnamon was highly valued and was given as gifts to kings and as tribute to gods. Eventually, during the colonial period, the East India Company (the original Amazon) began distributing the spice to the rest of the world and cultivating it on a large scale.

So, cinnamon has been around forever, you say, since remote antiquity and what-not. Great. But what about this cinnamon-burns-fat thing? First off, settle down. We have arrived, so here are the details. A recent study from Jun Wu at the University of Michigan Life Sciences Institute showed that cinnamaldehyde increases thermogenesis, which is the process the body uses to create heat. Thermogenesis can burn a lot of calories and accelerate metabolism, which results in the breakdown of fat. In addition, cinnamaldehyde can decrease and stabilize fasting blood sugar. What’s even more interesting is that chronic treatment with cinnamaldehyde can reprogram your body’s metabolism, which may serve as protection from diet-induced obesity.

Cinnamon is used in a variety of holiday treats including cinnamon rolls and apple pies.

So, cinnamon can burn fat and protect you from gaining it back! Now that is a magical spice. Well, there you go. I’m pretty sure that should be just enough information to cause awkward emotional discomfort to those within earshot at your holiday festivities. Your shining personality may keep you from being the next Uncle Gary, but at least your cinnamon tales will have him running for the eggnog, which contains cinnamon. Bam! Take that, Uncle Gary. No one cares about the length of your ear hair!

And while you’re enjoying your holidays, eating those cinnamon packed delicacies, remember the reason for the season! Be good to each other and have some fun, safe, and cinnamon filled holidays! Cheers!


Peer edited by David Abraham.


Tardigrades! The Super-animal of the Animal Kingdom

Tardigrade (aka waterbear or moss piglet)

Tardigrades, also known as waterbears or moss piglets, are microscopic invertebrates that “resemble a cross between a caterpillar and a naked mole rat,” according to science writer Jason Bittel. First discovered almost 250 years ago, there are now over 1,000 known species of tardigrade that can be found in almost every habitat throughout the world – from the depths of the ocean, to the tops of mountains, to your own backyard. As long as there is a little bit of moisture, you can find them. They are small and chubby, with most species being less than one millimeter in length. Their unique, usually transparent bodies have no specialized respiratory or circulatory organs and four pairs of legs with claws at the end. Tardigrades can reproduce sexually or asexually via self-fertilization. Like regular bears, tardigrades eat a variety of foods, such as plant cells, animal cells and bacteria.

Despite being small, adorable microorganisms, tardigrades are fascinating creatures that have recently garnered the attention of scientists around the world due to their adaptability and resilience in the most extreme environmental conditions. They have been observed to survive in a vacuum (an environment devoid of air and matter) for up to eight days, without water for years to decades, at temperatures ranging from under -200˚C to almost 100˚C, and under heavy ionizing radiation. Tardigrades survive these conditions through a reversible process triggered by desiccation (extreme drying), in which an organism loses most of the water in its body; in tardigrades, this can be as much as 97%. This is especially important in freezing temperatures, where water frozen into ice crystals can pierce and rupture the cells in the tardigrade’s body. During desiccation, the metabolic rate slows down to as low as 0.01% of normal function, allowing survival under the harshest of conditions for years.

Scanning Electron Microscopy image of a Tardigrade (Hypsibius dujardini)

In a 2016 Nature paper, scientists sought to answer the question of how a certain species of tardigrade, Ramazzottius varieornatus, is so tolerant of extreme environmental conditions. They found an expansion of several stress-related gene families, such as superoxide dismutases (SODs). Most multicellular animals have fewer than ten SODs; the study identified sixteen in this tardigrade species. They also found an increased copy number of MRE11, a gene known to play an important role in repairing DNA double-strand breaks. R. varieornatus had four copies of MRE11, while most other animals have only one. Aside from these improved mechanisms for handling stress and DNA damage, the scientists identified waterbear-specific genes that seemed to explain tardigrades’ radiotolerance, or resistance to radiation. They were curious whether one such tardigrade-specific gene had any effect on DNA protection and radiotolerance in human cells. To their surprise, this gene, called Dsup (for DNA damage suppressor), reduced DNA damage in cultured human cells by about 40%, decreasing both double- and single-stranded DNA breaks.

At the University of North Carolina at Chapel Hill, Dr. Bob Goldstein studies animal development and cellular mapping during development in C. elegans and, recently, in tardigrades as well. He is also focusing on developing tardigrades into a new model system while studying their body development! His lab website has a section dedicated to tardigrades, with resources about them along with pictures and videos of tardigrades in motion.

The environmental resilience of tardigrades is incredible, making the tardigrade the super-animal of the animal kingdom (in my opinion). Who knows what other fascinating creatures we have yet to discover that may have characteristics as interesting and unbelievable as those of the tardigrade?

Peer edited by Nick Martinez.



Superbug Super Problem: The Emerging Age of Untreatable Infections

You’ve heard of MRSA. You may even have heard of XDR-TB and CRE. The rise of antibiotic-resistant infections in our communities has been both swift and alarming. But how did these once easily treated infections become the scourges of the healthcare world, and what can we do to stop them?

Antibiotic-resistant bacteria pose an alarming threat to global public health and result in higher mortality, increased medical costs, and longer hospital stays. Disease surveillance shows that infections which were once easily cured, including tuberculosis, pneumonia, gonorrhea, and blood poisoning, are becoming harder and harder to treat. According to the CDC, we are entering the “post-antibiotic era”, where bacterial infections could once again mean a death sentence because no treatment is available. Methicillin-resistant Staphylococcus aureus, or MRSA, kills more Americans every year than emphysema, HIV/AIDS, Parkinson’s disease, and homicide combined. The most serious antibiotic-resistant infections arise in healthcare settings and put particularly vulnerable populations, such as immunosuppressed and elderly patients, at risk. Of the 99,000 Americans per year who die from hospital-acquired infections, the vast majority die due to antibiotic-resistant pathogens.

Cartoon by Nick D Kim, Used by permission

Bacteria become resistant to antibiotics through their inherent biology. Through natural selection and genetic adaptation, they can acquire mutations that make them less susceptible to antimicrobial intervention. One example is a bacterium acquiring a mutation that up-regulates the expression of a membrane efflux pump, a transport protein that removes toxic substances from the cell. If the gene encoding the transporter is up-regulated, or a repressor gene is down-regulated, the pump is overexpressed, allowing the bacterium to pump the antibiotic back out of the cell before it can kill the organism. Bacteria can also alter the active sites of antibacterial targets, decreasing the rate at which these drugs can effectively kill the bacteria and requiring higher and higher doses for efficacy. Much of the research on antibiotic resistance is dedicated to better understanding these mutations and developing new and better therapies that can overcome existing resistance mechanisms.


While bacteria naturally acquire mutations in their genome that allow them to evolve and survive, the rapid rise of antibiotic resistance in the last few decades has been accelerated by human actions. Antibiotic drugs are overprescribed, used incorrectly, and applied in the wrong context, exposing bacteria to more opportunities to acquire resistance mechanisms. This starts with healthcare professionals, who often prescribe and dispense antibiotics without ensuring they are required: prescribing antibiotics to someone with a viral infection, such as rhinovirus, or prescribing a broad-spectrum antibiotic without performing the appropriate tests to confirm which bacterial species is being targeted. The blame also falls on patients, not only for seeking out antibiotics as a “cure-all” when they are not necessarily appropriate, but also for poor adherence and improper disposal. It is absolutely imperative that patients follow the advice of a qualified healthcare professional and finish antibiotics as prescribed. If a patient stops dosing early, they may have cleared out only the antibiotic-susceptible bacteria, enabling the stronger, resistant bacteria to thrive in that void. Additionally, if a patient incorrectly disposes of leftover antibiotics, the drugs may end up in the water supply and present new opportunities for bacteria to develop resistance.

Overuse of antibiotics in the agricultural sector also aggravates this problem, because antibiotics are often obtained without veterinary supervision and used without sufficient medical reasons in livestock, crops, and aquaculture, which can spread the drugs into the environment and food supply. These contributing factors to the rise of antibiotic resistance can be mitigated by proper prescriber and patient education and by limiting unnecessary antibiotic use. Policy makers also hold the power to control the spread of resistance by implementing surveillance of treatment failures, strengthening infection prevention, incentivizing antibiotic development in industry, and promoting proper public outreach and education.


While the pharmaceutical industry desperately needs to research and develop new antimicrobials to combat the rising number of antibiotic-resistant infections, the onus is also on every member of society to both promote appropriate use of antibiotics as well as ensure safe practices. The World Health Organization has issued guidelines that could help prevent the spread of infection and antibiotic resistance. In addition, World Antibiotic Awareness Week is November 13-19, 2017, and could be used as an opportunity to educate others about the risks associated with antibiotic resistance. These actions could significantly slow the spread and development of resistant infections and encourage the drug development industry to develop new antibiotics, vaccines, and diagnostics that can effectively treat and reduce antibiotic-resistant bacteria.

Peer edited by Sara Musetti 


Little Farmers in the Animal Kingdom

Think of a farmer. Chances are, an image of an overall-wearing, pitchfork-wielding man just popped into your head. But humans are only one of a surprisingly large group of animals that cultivate their own food.

You might already know about leaf-cutter ants–some 47 species of ants in the New World that meticulously cut fresh vegetation into fragments that look far too big for them to hold. But they somehow manage to carry those leaf and flower cuttings back to their nests. This plant material is then used to feed the fungus that these ants depend on for food. Just like human farmers, the ants regularly plant, cultivate, and harvest their crop. However, rather than wheat or soybeans, the crop is a specific species of fungus. In fact, the relationship between the ant farmers and the fungus is so complete that neither can survive without the other: the fungus can no longer propagate itself without help from the ants, and the ants need the fungus for nutrition. Leaf-cutter ants are an extreme group of farming ants because they are so dependent on their fungal crop for survival, but about 240 ant species (collectively known as the attine ants) practice some form of fungus farming.

A leaf-cutter ant carrying a leaf back to its nest, where the leaf will be used to grow fungi. Image from Wikimedia.

Farming isn’t limited to ants: some species of termites and ambrosia beetles (a type of weevil) are also known to grow fungus for food. These groups demonstrate some of what we think of as the most ‘advanced’ farming. They’re ‘advanced’ because they have evolved many adaptations specific to farming, such as specialized organs or behaviors, and they often can’t survive without farming. Because of this, and because scientists have long known about the farming practices of these animals, these three groups are the most heavily studied non-human farmers. But focusing on just ants, termites, and beetles overlooks the fact that farming is likely evolutionarily beneficial for many organisms: when food is in short supply, being able to generate your own can be life saving.

Unsurprisingly then, once scientists started looking for evidence of farming in different organisms, they found it in snails, amoebas, and fish, among others. For example, the dusky farmerfish cultivates a specific species of algae. They do so in little ‘gardens,’ which they aggressively defend from other fish. When the farmerfish are experimentally removed from their gardens, all the algae are quickly eaten by other fish. The algae don’t seem to grow outside of these gardens, and the fish rely on this algae as a staple food, making this another relationship where both players need each other to survive.

But not all farming works this way: a different type of farming relationship was described in 2011 between an amoeba and a bacterium. The social amoeba, Dictyostelium discoideum, lives as a single-celled organism that spends its time eating bacteria. When environmental conditions get tough, the individual cells aggregate to form a ‘slug’ that crawls elsewhere more rapidly than the individual amoeba cells could have. Once in a better environment, the slug changes shape again. This time, it turns into a stalked fruiting body that releases spores. Each spore becomes a new single-celled amoeba. Some strains of this amoeba farm their bacteria: instead of eating all the available bacteria, they take some up and incorporate them into their fruiting bodies. When spores are released, the new amoebas are already carrying the bacteria with them, which they then use to seed their new environment with food–just like humans sowing their fields.

D. discoideum in its stalked form, before releasing spores. These spores may or may not contain bacteria for farming, depending on the D. discoideum strain. Image from Wikimedia.

Not all Dictyostelium discoideum individuals demonstrate this farming behavior, which suggests that there could be downsides to farming. In this case, farming may be disadvantageous if the amoebas find themselves in a new environment that is already full of food. If this happens, the farming amoebas would have paid a cost by not eating all of the available food (and growing and reproducing) in their prior environment. In comparison, the non-farming amoebas wouldn’t have paid this same cost because they always eat all the food available to them. Because research on non-human farming has often focused on species that must farm to survive, the costly aspects of this behavior have not been extensively considered.

As scientists continue to explore the diversity of life on Earth, finding and characterizing new farming relationships can continue to give us insight into what this unique behavior can look like, and how it might vary in its evolution.


Additional readings:

Ants, termites, beetles: Mueller et al. 2005. The evolution of agriculture in insects. Annual Review of Ecology, Evolution, and Systematics 36:563-95.

Fish and algae: Hata H, Kato M. 2006. A novel obligate cultivation mutualism between damselfish and Polysiphonia algae. Biology Letters 2:593-6.

Amoebas: Brock et al. 2011. Primitive agriculture in a social amoeba. Nature 469:393-8. Brock et al. 2013. Social amoeba farmers carry defensive symbionts to protect and privatize their crops. Nature Communications 4:2385.

Peer edited by Paige Bommarito.


One in a Million: The Importance of Cellular Heterogeneity and the Power of Single Cell Sequencing

One of the most overwhelming aspects of modern-day biomedical research is the overarching heterogeneity that pervades all realms of biology. From cell to cell and from human to human, we have become increasingly aware of the important differences that drive divergent responses to therapeutics and biological stimuli. The complexity of cancer is one such example.

A landmark paper published by Gerlinger et al. in The New England Journal of Medicine demonstrated that analyzing multiple biopsies from a single patient’s tumor gives a much different picture of the biology driving that tumor than examining a single biopsy alone. In addition, many studies characterizing the cellular heterogeneity of cancer have revealed that a tumor is much more than a mass of identical cells all growing out of control. Rather, a tumor is composed of cancer cells at different stages of the cell cycle, engaged in different cell signaling pathways, as well as other cell types, including immune cells and endothelial cells (the cells lining blood and lymphatic vessels).

Figure 1: A schematic showing the multiple biopsy sites in a single patient tumor. The study found that the number of private mutations (i.e., mutations found at only a single biopsy site and not at any of the other sites) is immense, suggesting that examining a single biopsy alone greatly underestimates the mutational diversity of a given patient’s tumor.
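To make the idea of private mutations concrete, here is a minimal sketch in Python. The biopsy site names and gene names are invented for illustration, not data from the Gerlinger et al. study; the logic simply classifies each mutation as shared (detected at every site) or private (detected at exactly one site).

```python
# Hypothetical mutation calls at three biopsy sites of one tumor.
biopsies = {
    "site_1": {"TP53", "VHL", "SETD2"},
    "site_2": {"TP53", "VHL", "KDM5C"},
    "site_3": {"TP53", "VHL", "MTOR"},
}

# Shared mutations: the ones any single biopsy would reliably capture.
shared = set.intersection(*biopsies.values())

# Private mutations: present at one site and absent from all the others,
# so a single biopsy would miss every other site's private mutations.
private = {
    site: {m for m in muts
           if not any(m in other
                      for s, other in biopsies.items() if s != site)}
    for site, muts in biopsies.items()
}

print(sorted(shared))             # ['TP53', 'VHL']
print(sorted(private["site_1"]))  # ['SETD2']
```

In this toy example, a single biopsy of site_1 would report three mutations but miss the private mutations at the other two sites entirely, which is the paper's central point.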

New technologies are emerging that enable us to better dissect the heterogeneity of disease and biology. One such technique is single cell sequencing. Sequencing is a technique with which we are able to get a readout of the entire genetic composition of a cell. In its most basic form, sequencing can be performed to get a readout of DNA or RNA, two types of molecules involved in different levels of genetic regulation. Sequencing has traditionally been performed on bulk samples comprised of hundreds to millions of cells in aggregate. While we have made remarkable advances in our understanding of biology using bulk sequencing, the emergence of single cell sequencing has now allowed us to begin to probe the genetic profile of a single cell at varying levels of complexity. The technology often takes advantage of molecular indexing, a technique whereby individual mRNA or DNA molecules are labeled with a molecular tag associated with a single cell; the cells are then pooled together into one tube and sequenced. During the post-sequencing analysis, the molecular tags are re-associated with each single cell, and the profiles of the single cells are compared to one another.
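The molecular indexing step described above can be sketched as a toy demultiplexing example. The cell barcodes, unique molecular identifiers (UMIs), and gene names below are invented; the point is only the logic of regrouping pooled reads by cell tag and collapsing duplicate tags so each original molecule is counted once.

```python
from collections import defaultdict

# Each sequenced read carries a cell barcode, a UMI, and the gene it maps to.
reads = [
    ("CELL_A", "UMI_01", "Actb"),
    ("CELL_A", "UMI_01", "Actb"),   # PCR duplicate: same molecule, same UMI
    ("CELL_A", "UMI_02", "Actb"),   # a second, distinct Actb molecule
    ("CELL_B", "UMI_07", "Gapdh"),
]

# Re-associate tags with cells: collect the distinct UMIs per (cell, gene).
molecules = defaultdict(set)
for cell, umi, gene in reads:
    molecules[(cell, gene)].add(umi)

# Counting distinct UMIs gives per-cell molecule counts, duplicates removed.
counts = {key: len(umis) for key, umis in molecules.items()}
print(counts)  # {('CELL_A', 'Actb'): 2, ('CELL_B', 'Gapdh'): 1}
```

Real pipelines additionally correct sequencing errors in the barcodes themselves, but the core idea is this regrouping step.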

This novel technique has allowed us to begin investigating and understanding biology with a higher degree of resolution than we ever could have imagined before, and will undoubtedly lead to the discovery of many new exciting realms of biological regulation. For example, the biomedical company Becton Dickinson has performed a research study analyzing single cells from tumors of mouse models of cancer. They found that within each tumor sample, there were distinct populations of cells with unique gene expression profiles, and these profiles were associated with vastly different biological functions. Understanding how these different populations work together as a community to promote tumor growth may help us better understand how to develop new treatments for cancer.

However, like all cutting-edge technologies, there are still limitations that need to be overcome. One concern is inefficient sample collection, because the amount of genetic material in a single cell is much smaller than that from a large group of cells. An additional confounding variable is uncertainty over whether a low yield of genetic information from a single cell is the result of technical error, inefficiency of small-sample collection, or simply the lack of expression of a gene in that particular cell. Discerning the difference between a true biological negative result and a mere technical deficiency is often difficult.

Especially in fields such as cancer biology, we have increasingly begun to realize that heterogeneity has largely been an obstacle in our ability to develop effective therapies and to truly understand the mechanisms of regulation of biological processes. Advances in single cell sequencing research have allowed us to further realize that there is not one function or process driving disease progression, but rather a network of cells with distinct roles. Uncoupling the forces dictating the progression of this heterogeneity is what will help us to make the next great advances in therapeutic development in cancer and many other diseases.

Peer edited by Chiungwei Huang and Richard Hodge.

Follow us on social media and never miss an article:

Nanotechnology in Your Sunscreen: Doing More Harm than Good?

While soaking in the sunshine may feel good, and you may have heard about solar ultraviolet (UV) radiation harm, you may not be aware of what’s in your sunscreen. Lee Hong explored the benefits of sunscreen in his post on The Pipettepen, and today, we dive deeper into a smaller world – the nanotechnology in our sunscreen.

The two minerals available in sunscreen in the form of nanosized particles (NPs) are zinc oxide (ZnO) and titanium dioxide (TiO2). These particles are less than 1/1000th the width of a human hair. The bulkier mineral particles in traditional sunscreen reflect visible light, making it opaque and cakey on your skin. NPs, on the other hand, scatter light instead of reflecting it, resulting in a sunscreen that seems to disappear and feels lighter.

Traditional sunscreens block out UV rays, but many leave a ghostly white cast. Nanotechnology makes them disappear on your face!

While the resulting nanoproduct can be a big help, people have raised concerns over the safety of NP-based cosmetic sunscreens. Because of their smaller size, NPs could in theory be absorbed into the skin at a higher level than their bulkier counterparts. The real question is whether these tiny particles, if absorbed, do more harm than the good they do in protecting us from UV rays.

Studies are divided about whether NPs can pass through the skin. Paul Wright, a toxicology researcher at RMIT University, offered a few reassuring words to The Guardian: "There's a negligible penetration of sunscreen particles. They don't get past the outermost dead layer of human skin cells." In 2017, the Australian Therapeutic Goods Administration (TGA) published a review concluding that NP absorption is unlikely, based on both in-vitro (studies using isolated skin cells) and in-vivo (studies on live skin tissue) evidence. It appears that we are in a safe zone!

Other scientists have tested the toxicity of these tiny metal oxides when exposed to UV light, simulating the real-life scenario of sunscreen use. Their results indicated that the metal oxides may generate reactive free-radical species, which can cause DNA damage and lead to cancer. However, this alarming impact on human health depends on whether the NPs in sunscreen are actually absorbed into our skin. Providing some comfort, Simon James, a research associate at the Australian Synchrotron, told The Guardian that "Our study demonstrates that the human immune system has the right equipment to remove any nanoparticles that somehow make it through the skin, assuming some do at all." Their work showed that the body's natural defenses can gather and destroy ZnO nanoparticles. Moreover, sunscreen manufacturers apply surface coatings to the particles to improve transparency, and these coatings can also reduce toxicity by lessening reactivity to UV light.

It’s not a bad idea to shield your skin from the burning sun with an umbrella’s shade whenever you are up for outdoor activities.

With the increased popularity of nanotech-based products, another concern is noxious effects caused by inhalation of NPs. The Environmental Working Group (EWG), based in Washington, D.C., has warned consumers to refrain from using spray sunscreens and loose powder cosmetics containing ZnO or TiO2 particles. The lungs have difficulty removing small particles, which may end up causing organ damage, possibly in the same way that air pollution is linked to lung cancer.

Evidence suggests there is more harm from skipping the sunscreen than exposing your skin to nanoparticles, but, if you’re not comfortable with these tiny oxides, UV protection umbrellas are another option!

Peer edited by Bailey DeBarmore.


H What N What? A Designer Protein Hits the Science Runway


TEM Image of Influenza Virion. Content Providers: CDC/ Erskine. L. Palmer, Ph.D.; M. L. Martin, 1981.  Photo Credit: Frederick Murphy

Influenza is a virus that straddles two worlds: that of the past and that of the future. Responsible for more deaths than HIV/AIDS in the past century, the flu is one of the world’s most dangerous infectious diseases, though it may not seem so, especially in the United States. The flu is responsible for millions of cases of severe illness and approximately 250,000 to 500,000 deaths worldwide each year.

Influenza Pandemics
Influenza A and B circulate each flu season, but it is the emergence of new influenza A strains that has been responsible for worldwide epidemics, or pandemics, such as the 1918 ‘Spanish Flu’ pandemic and the 2009 H1N1 pandemic. There are two ways a new influenza virus can emerge. Every time the virus replicates, small genetic changes occur that result in non-identical but similar flu viruses: this is called “antigenic drift”. If you get infected with a certain flu virus, or get a vaccine targeting a certain flu virus, your body develops antibodies to that virus. As changes accumulate, these antibodies no longer work against the changed virus, and the person can be infected again. The other source of change is “antigenic shift”, which results in a virus with a different type of hemagglutinin and/or neuraminidase, such as a switch from H3N2 to H1N1. The 2009 H1N1 virus is cited by some as a result of antigenic shift because it was so different from previous H1N1 subtypes; however, since there was no change in the actual hemagglutinin or neuraminidase proteins, it was technically a case of antigenic drift.


This diagram depicts how the human cases of swine-origin H3N2 Influenza virus resulted from the reassortment of two different Influenza viruses. The diagram shows three Influenza viruses placed side by side, with eight color-coded RNA segments inside of each virus. The virus from the 2009 pandemic (right) has HA/NA proteins and RNA from Eurasian and North American swine instead of from humans like in previous years (left two viruses).
Content Provider: CDC/Douglas Jordan, M.A. 2011.


Challenges in Studying the Flu
Scientists and policymakers face many challenges when studying the influenza virus. For instance, the virus can be transmitted by people not yet showing symptoms: a cough, sneeze, or handshake can spread infectious droplets from someone who doesn’t know they’re sick. Scientific study is further complicated by the virus itself: there are three antigenic types of influenza virus that infect humans (A, B, and C), with various subtypes and strains. Each year, government agencies work with scientists to decide which strains to target in that year’s vaccine manufacturing. The lag time between production in the spring and the flu season in the winter leaves time for unexpected strains to emerge.


This is a 3D illustration of a generic Influenza virion’s fine structure. The panel on the right identifies the virion’s surface protein constituents. Content Provider: CDC/ Douglas Jordan, Dr. Ruben Donis, Dr. James Stevens, Dr. Jerry Tokars, Influenza Division. 2014. Illustrator: Dan Higgins.

The H#N# nomenclature for influenza A subtypes refers to the hemagglutinin (H) and neuraminidase (N) proteins that sit on the surface of the virus. There are 18 types of hemagglutinin and 11 types of neuraminidase. Hemagglutinin helps the virus fuse with host cells and empty its contents inside. Neuraminidase is an enzyme embedded in the virus membrane that helps newly synthesized viruses release from host cells, spreading the infection from one cell to the next.
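The naming scheme is purely combinatorial, so it can be enumerated directly. A small sketch, using only the subtype counts given above (note that only some of these pairings have actually been observed circulating in nature):

```python
# 18 hemagglutinin (H) types and 11 neuraminidase (N) types define
# the space of possible influenza A subtype names.
hemagglutinins = [f"H{i}" for i in range(1, 19)]   # H1 .. H18
neuraminidases = [f"N{j}" for j in range(1, 12)]   # N1 .. N11

# Every H paired with every N gives the full naming space.
subtypes = [h + n for h in hemagglutinins for n in neuraminidases]

print(len(subtypes))          # 18 * 11 = 198 possible names
print("H1N1" in subtypes)     # the 2009 pandemic subtype is one of them
```

The point of the sketch is simply that a name like H3N2 or H1N1 is a coordinate in this H-by-N grid, not a unique identifier for a single unchanging virus.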

Targeted Therapy
In studying ways to prevent and battle influenza, research scientists have focused their efforts on blocking the actions of neuraminidase and hemagglutinin. Antiviral drugs such as oseltamivir (Tamiflu®) and zanamivir (Relenza®) bind neuraminidase at sites crucial for its activity, rendering the virus incapable of self-propagating. David Baker, a computational biologist at the University of Washington in Seattle, and his team know the hemagglutinin protein well. In 2011, they drew on nature’s design by studying antibodies that bind hemagglutinin in order to design a protein that targets the glycoprotein’s stem in H1 subtype flu viruses and prevents the virion from infecting the host cell. However, antiviral resistance, driven in part by antigenic drift, is a serious issue: researchers must constantly develop new drugs to keep up with changes in the virus.

David Baker and his team now focus their research on the hemagglutinin protein. Utilizing a computational biology approach, they designed a protein that fits snugly into hemagglutinin’s binding sites. They tested their designer protein on 10 mice and found that in mice exposed to the H3N2 influenza virus, their protein worked both as a preventative measure and as a treatment.  Though there is a long road to human testing, this binding protein shows promise for bedside influenza diagnosis as well as a model for possible treatments.

Want to know more? 


This photograph depicts a microbiologist in what had been the Influenza Branch at the Centers for Disease Control and Prevention (CDC) while she was conducting an experiment.  Content Provider: CDC/Taronna Maines. 2006. Photo Credit: Greg Knobloch.

Learn how scientists monitor circulating influenza types and create new vaccines each year.

See flu activity and surveillance efforts with the CDC’s FluView and vaccination trends for the United States using the FluVaxView.


Peer edited by Richard Hodge and Tyler Farnsworth.


Spice is Nice (for Birds)

My labmate was having a problem one morning – a fuzzy, gluttonous problem. To help keep her indoor cat entertained during her time at work, she thought it would be a great idea to set up a bird feeder. With her home so close to the woods, surely this would bring all of the birds to the yard. Unfortunately, the local squirrels soon flocked to the free food source like a group of grad students and left not one seed for their colorful avian counterparts. My advice to her? Grab the hot sauce!

Capsaicin is the compound in peppers that makes them hot and spicy

Making the bird feeder a literal hot spot by sprinkling high-Scoville sauces on seeds is not meant to suggest that birds engage in the same machismo food challenges as humans. They simply are not irritated by the chemical in peppers that is “hot” to humans and other mammals: capsaicin.


There are biological differences between certain pain perception receptors in birds and mammals. The TRPV1 (transient receptor potential vanilloid subfamily, member 1) receptor, also known as the capsaicin receptor, is involved in the perception of unpleasant or harmful chemical, physical, and thermal stimuli. The molecular sequence of this receptor is only 68% similar between birds and mammals, compared to the >95% similarity seen for most other central nervous system receptors. What this translates to is an unusually high avian threshold for tolerating the spice in hot peppers. While mammals are put off by concentrations of 10-100 ppm (up to about the heat level of a jalapeño pepper), birds are not even fazed by capsaicin levels >20,000 ppm (habanero pepper territory).
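A figure like "68% similar" comes from comparing aligned sequences position by position and counting matches. Here is a minimal sketch of that kind of pairwise identity calculation; the two short "sequences" below are invented for illustration and are not real TRPV1 data:

```python
def percent_identity(seq1, seq2):
    """Percentage of matching positions between two equal-length,
    pre-aligned sequences."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq1, seq2))
    return 100.0 * matches / len(seq1)

# Toy aligned fragments (hypothetical, not real receptor sequence):
bird   = "MKLSAVTRDQ"
mammal = "MKLSGVTKDE"
print(percent_identity(bird, mammal))  # 7 of 10 positions match -> 70.0
```

Real comparisons first align the full protein sequences (handling insertions and deletions) before counting matches, but the principle is the same: the lower the fraction of shared positions, the more the receptors can differ in behavior – here, in how strongly they respond to capsaicin.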

Picture by Lindsay Walton

Birds are insensitive to the burn of capsaicin from hot peppers, making spicy bird food less appealing to squirrels.

Chili peppers take advantage of this natural disparity in receptor sensitivity, according to the directed deterrence hypothesis, which states that fruits produce noxious or toxic chemicals that make them more appealing to organisms that will disperse their seeds and less appealing to those that would destroy the seeds. This is especially handy because the regions in which hot peppers grow (for example, throughout Central and South America) are favorable places for seed-eating rodents to thrive. It is common knowledge to chefs and laymen alike that the seeds of chili peppers are much hotter than the surrounding fruit, which serves as extra insurance against seed destruction by mammals that may be more desensitized to capsaicin than the average bear, so to speak.


While sprinkling hot sauce on your bird feed or suet cakes stands a good chance at repelling squirrels, anybody with That One Friend Who Puts Sriracha on Everything can tell you that a taste for spicy food can be acquired easily enough. Indeed, while the initial sensitivity to capsaicin differs from individual to individual, mammals can become desensitized to the unpleasant sensations associated with TRPV1 receptor activation, and can actually develop a preference for pungency.


Becoming acclimated to capsaicin via desensitizing TRPV1 receptors can open up a new world of chili pepper-related culinary possibilities, but quieting these receptors can have other effects. These receptors are activated by higher temperatures, and studies have shown that capsaicin-desensitized animals have impaired body temperature-regulating behaviors, which can make them more prone to accidental overheating. Because these receptors also convey information regarding painful physical or chemical stimuli, exposure to capsaicin has been used to better understand different types of pain in both rodents and humans alike.


So as we go from summer into what Chapel Hill calls “winter,” and if you feel like feeding the birds instead of the squirrels, there are a number of spicy bird foods available or you could make your own. (For those of you interested in feeding squirrels exclusively, birds are repelled by the compound methyl anthranilate, which is strangely enough a component of artificial grape flavoring.) Just keep in mind that the bolder squirrels might end up taking a liking to the spicy birdseed, but you can probably identify them easily enough by the little bottles of Sriracha that they will bring.



Peer edited by Amanda Tapia.



How Evolution Gave Us Dragons

Hiccup and his friendly dragon, Toothless, from How to Train Your Dragon. Credit: Brett Jordan

Whether our favorite characters are trying to train them, ride them, or simply escape from them, there is no denying the prevalence of dragons in popular culture. Dragon myths have existed for centuries in every civilization. In medieval Europe, uncharted parts of maps were supposedly marked with “Here be dragons” to designate danger and the unknown. In contrast, Chinese culture sees dragons as symbols of wisdom and benevolence. It is remarkable that such similar looking mythical creatures popped up separately in various cultures – and there may be an evolutionary explanation.

Dragons are depicted as large, reptilian creatures that can sometimes fly and breathe fire. But why reptiles, and why do they have to be so huge? It has often been speculated that some of the first discoveries of dinosaur or whale bones helped spark dragon mythology. People who discovered these huge bones, which resembled nothing they were familiar with, could have come up with a mythical creature like the dragon to explain them. However, it may not have been the curious mind that spawned dragons, but the fearful one.

In the book An Instinct for Dragons, anthropologist David E. Jones makes the case that a primal fear of predators, such as big cats and snakes, generated the dragon myth. Studies of vervet monkeys demonstrate that they are especially fearful of three particular predators – lions, eagles, and snakes – and that they make specific cries when they spot each one. Jones argues that this primal fear could have been passed along to humans through evolution. It is not hard to imagine our fearful ancestors combining the body of a snake with something as ferocious as a lion, and tacking on the ability to fly like an eagle, to get a dragon.

A primal fear of snakes, lions, and eagles may have inspired the creation of dragons in Western (left) and Eastern culture (right).

The hypothesis proposed by Jones is an interesting one, but it has received criticism. It is nearly impossible to test, and as powerful as evolution is, there is also the possibility that the myth was passed from culture to culture through storytelling. With cultures so isolated for much of history, that would have been difficult, but not impossible. And while their huge size would make it impossible for real dragons to fly, it is easy to imagine that dragon myths will continue to mesmerize people for centuries to come.

Peer edited by Tom Gilliss.
