Musings is an informal newsletter mainly highlighting recent science. It is intended as both fun and instructive. Items are posted a few times each week. See the Introduction, listed below, for more information.
If you got here from a search engine... Do a simple text search of this page to find your topic. Searches for a single word (or root) are most likely to work.
If you would like to get an e-mail announcement of the new posts each week, you can sign up at e-mail announcements.
Introduction (separate page).
Current posts -- 2019 (May - ??)
New items: posted since the most recent e-mail; they will be announced in the next e-mail, but feel free... !
July 17 (Current e-mail)
July 10 July 3 June 26 June 18 June 12 June 5 May 29 May 22 May 15 May 8
Older items are on the archive pages, listed below.
2019 current posts. This page, see detail above.
2012 (September - December)
2011 (September - December)
Links to external sites will open in a new window.
July 21, 2019
Waterways near petrochemical plants often have high levels of pollution. That certainly applies to the Houston Ship Channel, an artificial waterway between the inland port city of Houston, Texas, and Galveston Bay in the Gulf of Mexico. It's oil country, and the Houston Ship Channel is lined with petrochemical plants -- as is obvious to anyone driving in the area.
A recent article reports some interesting analysis of one type of fish in the Channel, and how it has responded to one type of pollutant: halogenated aromatic hydrocarbons.
The fish the scientists study is the Gulf killifish (Fundulus grandis). The specific pollutant they use for lab work is a polychlorinated biphenyl, PCB126.
Here are the results from one experiment...
In this experiment, fish were tested in the lab. The fish had been collected from waters with various levels of pollution. They were tested here to see how one particular pollutant affected the development of embryos.
The figure shows the effect of the PCB on various groups of fish. The effect is shown as the amount of cardiac deformity (y-axis) in the resulting embryos. The PCB concentration is shown on the x-axis.
Look at the blue data (top, for the most part). For these fish, the cardiac deformity started very low, then rose dramatically as the PCB concentration increased.
At the other extreme, the fish with black data (bottom) showed very little cardiac deformity even at the highest level of PCB tested.
Now look at the "Pollution level" key above the graph. The blue is for fish from water with minimum pollution (out in the Bay); the black is for fish from the most polluted water (very near the petrochemical plants in the Channel).
The conclusion... the fish from the polluted water have adapted to the pollutant, so that they are much less affected by it.
The other two data colors are for fish from intermediate levels of pollution. The results for these fish follow the same trend. The more polluted the environment the fish are from, the more resistant they are to the pollutant.
The scoring of cardiac deformity is not described in the article. As is common nowadays, the Methods section is in the Supplement; in this case, that doesn't help much: it refers to an earlier article for this procedure. Briefly, a score of "2" indicates serious deformity.
This is trimmed from parts of Figure 1 from the article. The graph itself is Part B. The "key" is from Part A, but serves well with B.
One more fact... For a while, the population of this fish in the Channel was declining. Then it turned around. The timing suggests that the fish adapted to the pollution relatively recently, perhaps in the 1970s.
The resistance is genetic. The resistance trait is stable even in the absence of the pollutant, and resistance behaves like a genetic trait in crosses.
In the next part of the work, the scientists identified the enzyme responsible for the resistance. It is an enzyme that metabolizes the PCB. It's often said that such enzymes are responsible for degrading the pollutant, leading to its elimination. But the enzyme actually creates toxic intermediates, which can be the main concern. The resistant mutants have less of the enzyme to metabolize PCB. In fact, there is a general trend: the fish from more polluted waters make less enzyme, which correlates with greater resistance.
The final part of the work is an attempt to explain how the fish became resistant. It's possible that simple mutation is the answer, but the authors suggest that something more interesting happened. The specific form of the defective gene in the Houston fish is very similar to that in a population of a related fish a thousand miles away. The authors suggest that occasional inter-breeding between the two populations transferred the resistance gene into the Houston population -- where it spread rapidly in the polluted waters. A problem with that suggestion is that the two populations are so far apart that such inter-breeding seems unlikely. The authors suggest that it occurred as a result of human intervention -- probably inadvertently.
The details are more complex than suggested above. The mutations are not in the gene for the enzyme itself, but in parts of the system for inducing the enzyme. That is, resistant mutants have less enzyme, as we said above, but the reason is more complex than suggested: the induction system fails. Further, for two different genes, the scientists have evidence that the resistance seems to have originated in the other fish species.
Of course, the scientists have no direct evidence for such human intervention, but it is an interesting and provocative idea. The evidence that the genes that confer resistance come from the distant population is fairly strong, and stands in any case.
* Killifish Survive Polluted Waters Thanks to Genes from Another Fish -- Gulf killifish have made a stunning comeback in Houston with the help of genetic mutations imported from interspecies mating with Atlantic killifish. (E Yasinski, The Scientist, May 6, 2019.)
* An evolutionary rescue in polluted waters -- How genetics, resources and a long-distant relative helped one lucky fish species adapt to extreme pollution. (Science Daily (University of California - Davis), May 2, 2019.)
* Evolution 2019: Evolutionary Rescue from Extreme Environmental Pollution Enabled by Recent Adaptive Introgression of Highly Advantageous Haplotypes. (Urban Evolution, June 27, 2019.) This is about a talk on the work at a scientific meeting by one of the senior authors of the article. A video of the talk is included (15 minutes).
* News story accompanying the article: Ecology: How to survive in a human-dominated world -- Mating between species can yield adaptive genes that facilitate species survival. (K S Pfennig, Science 364:433, May 3, 2019.)
* The article: Adaptive introgression enables evolutionary rescue from extreme environmental pollution. (E M Oziolor et al, Science 364:455, May 3, 2019.)
A post about a related group of pollutants, the polycyclic aromatic hydrocarbons: Does using printer toner lead to carcinogens? (October 31, 2017).
A post about an unusual event that may relate to species invasion: What if a fishing dock fell into the ocean off the east coast of Japan? (October 29, 2017). We tend to think of invasive species as being bad. The current post may be an example of one having a good effect. This older post illustrates the diversity of invasion events themselves. Big message? Be careful about generalizing.
This post is listed on my page Introduction to Organic and Biochemistry -- Internet resources in the section on Aromatic compounds. That section includes a list of related Musings posts.
July 19, 2019
When one thinks of stick insects, camouflage comes to mind. That's the point, isn't it?
The stick insects in the two parts on the left, below (Parts A & C), fit the pattern. Especially the one in part C, which could easily pass as Twiggy.
However, the two right-hand frames (B & D) contain rather prominent blue stick insects.
This is part of Figure 4 from the article.
In a recent article, a team of scientists carried out a comprehensive analysis of the stick insects of Madagascar. The amount of color in these sticks is striking. That in itself is not a new finding; the current article provides modern analysis, including DNA work, to improve the characterization. That includes defining two new species of colorful sticks.
The green stick (part A) is colorful but also reasonably camouflaged. However, the blue sticks seem to have missed a lesson on camouflage. Interestingly, the blue sticks are all males. That is, some species are dimorphic for color, with camouflaged females and bright blue males. The coloration of the males develops only as they mature into adults. One might wonder if the color is about being noticed by females. The colors may also serve as a warning to predators that the animals are toxic. However, there is little direct evidence on these points for now.
Another species described in the article has males that are yellow and black.
It's all part of the diversity of nature.
The title of this post refers not just to blue sticks, but big sticks. There are no scale bars on the figures, but the authors note that some of the sticks they studied are among the largest known insects, nearly 25 centimeters (10 inches) long.
As an extra challenge... The figure legend says that part B contains a mating pair. Can you find them both?
* Malagasy Giant Stick Insects Play with Colors. (E de Lazaro, Sci-News.com, April 4, 2019.)
* Colorful display of newly described stick insects confounds scientists. (M Vyawahare, Mongabay, April 16, 2019.)
* The article, which is freely available: When Giant Stick Insects Play With Colors: Molecular Phylogeny of the Achriopterini and Description of Two New Splendid Species (Phasmatodea: Achrioptera) From Madagascar. (F Glaw et al, Frontiers in Ecology and Evolution 7:105, April 2019.)
More blue animals...
* Why do many tarantulas have blue hair? (March 7, 2016).
* A newly described monkey species (October 22, 2012). See the news stories.
Other things blue...
* A better way to make (the dye for) blue jeans, using bacteria? (March 5, 2018).
* Electrons... The explosive reaction of sodium metal with water (April 20, 2015).
July 17, 2019
Baloxavir update: activity against diverse flu viruses. Last year we noted a new flu drug -- a new type of flu drug. The drug is baloxavir; it inhibits an enzyme needed to replicate the viral genome. A new article reports that it is widely active against not only the common influenza A virus, but also flu viruses of types B, C, and D; this is based on lab testing in cell culture. The article also discusses the details of the target protein from various strains, with some consideration of the implications for drug resistance. Overall, the article extends our knowledge about this new type of flu drug, and is generally encouraging.
* News story: Study says baloxavir fights all 4 flu types, many animal flu viruses. (R Roos, CIDRAP, July 9, 2019.) Links to the article, which is freely available. (The article is currently in press, scheduled for the October 2019 issue of EID. Available only as a web page until then; no pdf.)
* Background post about baloxavir: Baloxavir marboxil: a new type of anti-influenza drug (September 14, 2018). I have added this new information to that post.
July 16, 2019
Brain-computer interface (BCI)? That's the use of captured brain waves to control an action. It bypasses the natural pathway from brain to action, replacing it with brain to computer to action. It has the potential to benefit those with disabilities in the normal transmission of brain signals. For example, it might allow a paralyzed person to control their limbs with their thoughts. Musings has noted examples of such work [link at the end].
A recent article extends the use of BCI to speech, a process that requires extremely complex muscular control. The following figure gives an idea of the plan...
Starting at the left... A person thinks of a sentence to say. More specifically, they think about saying it. That is, the analysis is not just of the sentence, but of the brain signals for saying the sentence -- for moving the muscles involved in speech. The neural activity is recorded (part a), using electrodes implanted in the brain.
Then the computer analyzes the brain signals (parts b and c). This depends on training from real samples, shown in the lower frames.
Finally, the computer speaks (part d) -- what it has decided the person wanted to say.
This is part of Figure 1 from the article.
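The overall plan can be caricatured in code. This is only a structural sketch: the article's decoder is a trained neural network, and the two linear maps, the function names, and all the dimensions below are hypothetical stand-ins.

```python
import numpy as np

def decode_features(neural, w_decode):
    """Stage 1: map recorded neural activity to intermediate speech
    features. A linear map here is a toy stand-in for a trained decoder."""
    return neural @ w_decode

def synthesize(features, w_synth):
    """Stage 2: map decoded features to audio samples. Again, a linear
    placeholder for a trained synthesizer."""
    return features @ w_synth

# Toy dimensions (made up): 16 electrode channels, 8 speech features,
# 100 audio samples per time step.
rng = np.random.default_rng(0)
neural = rng.normal(size=(50, 16))     # 50 time steps of recordings
w_decode = rng.normal(size=(16, 8))
w_synth = rng.normal(size=(8, 100))

audio = synthesize(decode_features(neural, w_decode), w_synth)
print(audio.shape)  # (50, 100)
```

The point is only the two-stage shape: recorded activity is first mapped to intermediate speech features, and those features are then mapped to sound.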
Does it work? How do you tell if it works?
The best way (for most of us) to tell if it works is to listen. There is a movie file with the article, with several examples of the computer output. I encourage you to listen to that movie.
A direct link to the movie file is: movie, with examples of the synthetic speech. (2 minutes.) If that link doesn't work, you can get to the movie from the article web page, listed below; choose Supplementary information. The movie should be accessible regardless of your subscription status for the article itself.
For lab work, the scientists look at the waveforms. (Cartoon waveforms are shown in the figure above.) They compare the computer-generated speech with authentic speech.
They have also transcribed some of the results, and summarized them in a table...
You can see that error rates vary widely. And the types of error vary widely. Remember, the source is a thought along with the attempt to convert it to speech.
This is Table 1 from the article.
Whether you judge the results yourself by listening to the movie, or just read the summary, I think most will agree that the system is providing useful, if imperfect, speech. If a person cannot speak directly, surely the current system is a big improvement. And this is still an early implementation.
* Scientists translate brain signals into speech sounds. (Neuroscience News (NIH), April 24, 2019.)
* Computer Program Converts Brain Signals to a Synthetic Voice. (D Adam (The Scientist), April 24, 2019.)
* Speech synthesis from neural decoding of spoken sentences. (BioNews Central (University of California - San Francisco), April 24, 2019.)
* News story accompanying the article: Neuroscience: Brain implants that let you speak your mind. (C Pandarinath & Y Ali, Nature 568:466, April 25, 2019.)
* The article: Speech synthesis from neural decoding of spoken sentences. (G K Anumanchipalli et al, Nature 568:493, April 25, 2019.)
A background post on BCI: Brain-computer interface -- without invasive electrodes (December 28, 2016). Links to more.
A post, from nearly a decade ago, that can be thought of as an early stage of exploring the use of BCI for speech... Reading the brain waves from speech (October 17, 2010).
More about brains is on my page Biotechnology in the News (BITN) -- Other topics under Brain. It includes a list of related Musings posts.
July 14, 2019
It's a small marsupial -- and an endangered species.
Adults are typically 1-2 kilograms.
This is reduced and trimmed from the figure in the news story.
Among the threats to the bilby are cats. Cats are not native to the environment of the bilbies, but feral cats are now a significant threat to them. It's a particular problem with bilbies raised in captivity, then re-introduced to the wild.
Is it possible to train bilbies to fear cats before they are returned to the wild? That's the question addressed by a recent article. There are some encouraging results, as shown by the following figure...
The graph shows survival curves for two groups of bilbies. One group had received "predator training". The other group was a control, without that training.
The upper curve is for "trained" bilbies. The lower curve is for the control group: untrained bilbies.
You can see that the survival of the animals that received "predator training" was considerably higher than for the naive animals. The training reduced the death rate to about half. There is a particularly big effect in the first few days.
The authors think that all deaths observed were due to predation by cats.
This is Figure 3 from the article.
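Survival curves of this general kind plot the fraction of a group still alive at each time. The article's exact statistical treatment isn't described here; the following is a minimal sketch with made-up numbers, chosen only to echo the pattern in the figure (roughly half the death rate for trained animals).

```python
def survival_curve(death_days, n_total, horizon):
    """Fraction of the group still alive on each day, from a list of
    observed death days (animals that survive simply don't appear)."""
    curve = []
    for day in range(horizon + 1):
        deaths_so_far = sum(1 for d in death_days if d <= day)
        curve.append((n_total - deaths_so_far) / n_total)
    return curve

# Made-up numbers, for illustration only:
trained_deaths = [5, 18, 30]              # 3 of 10 animals die
untrained_deaths = [1, 2, 3, 7, 12, 25]   # 6 of 10 animals die
print(survival_curve(trained_deaths, 10, 40)[-1])    # 0.7
print(survival_curve(untrained_deaths, 10, 40)[-1])  # 0.4
```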
The training works -- at least in the general sense that it led to reduced deaths.
What was this training? The logic is simple: expose the bilbies to cats under controlled conditions, ones that promote exposure and learning but minimize the actual danger. This is achieved by maintaining a colony of bilbies along with a small number of cats. The bilbies observe the cats, and learn to recognize them as a threat; the actual damage is small because there are so few cats. That's what is behind the results shown above. The authors call the training in situ predator exposure.
How well will this work over the long term? Can the approach be generalized to other prey and their predators? Those are questions for further work.
Has no one really tried this before? In fact, there have been attempts to overcome prey naivete, often by using models and surrogate signals. The authors discuss such attempts, and note that they have generally failed -- if they were even evaluated. What's new here is an attempt to create a real, but low-level, predator-prey interaction. And they evaluated what they tried.
We should also note that the training may lead to selection. The less wary members of the population are more likely to be caught. The authors note this possibility, but have no information about its importance.
* News story: Predator exposure can help vulnerable species survive in the wild. (Phys.org (I Dubach, University of New South Wales), May 15, 2019.)
* The article: Reversing the effects of evolutionary prey naivete through controlled predator exposure. (A K Ross et al, Journal of Applied Ecology 56:1761, July 2019.)
Previous posts that mention bilbies: none.
Among posts on conservation issues:
* Is Harry Potter responsible for the increased owl trade in Indonesia? (August 6, 2017).
* Human-wildlife conflict -- what is the proper way to get rid of a pest? (July 12, 2017).
July 12, 2019
A new article shows that highly processed foods aren't good for you. You already knew that? What makes this article of interest is the experimental system the scientists use. It's a direct, well-controlled test.
The key design point is that this is an in-patient test. That is, the test takes some people and isolates them for the duration of the test. Everything is controlled. (In contrast, much nutritional work is done by collecting information on people out in the ordinary world. It's well known that there are pitfalls in collecting such information.)
The main test variable for the current work is the type of diet. Two diets were prepared: one based on highly processed foods, and one based on unprocessed foods. The nutritional contents of the two diets were matched as much as possible for major nutrient classes, such as fat. Participants were given access to the foods at regular meal times plus snacks. The amounts of food available were "plenty" -- about twice the expected consumption. The participants were allowed to eat ad libitum -- freely, as much as they wanted.
Each participant spent two weeks on each diet. Half of them were given the unprocessed-foods diet first, then switched to the processed-foods diet. The other half of the participants used the two diets in the other order.
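The two-period crossover design described above can be sketched as a simple assignment step. This is an illustration only; the article's actual randomization procedure is not described in this post, and the function name is mine.

```python
import random

def crossover_assign(participants, seed=0):
    """Randomly split participants into the two diet orders of a
    two-period crossover design (a hypothetical sketch)."""
    people = list(participants)
    random.Random(seed).shuffle(people)
    half = len(people) // 2
    return {
        "unprocessed_first": people[:half],
        "processed_first": people[half:],
    }

arms = crossover_assign(range(20))   # 20 participants, as in the study
print(len(arms["unprocessed_first"]), len(arms["processed_first"]))  # 10 10
```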
Here are some results...
Part A (top) shows the major finding of the work: the energy consumed per day over the test period. The upper curve (blue) is for processed food; the lower curve (red) is for unprocessed food. The difference is about 500 kcal per day -- about a 20% increase for the processed-food diet.
Part C (bottom) is an example of the kind of detailed information that the work also provided. This graph shows the energy consumed by meal. You can see that the increase is significant for breakfast and lunch; it is small but not significant for dinner. Perhaps interestingly, there is no apparent difference for snacks.
For those who are used to thinking about daily food consumption as being about 2000 Cal per day... That's the big-C Calorie, which is actually a kilocalorie (kcal).
The test panel consisted of 20 adults, with stable weight and generally good health.
This is part of Figure 2 from the article.
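As a quick check on the arithmetic above: a 500 kcal/day difference is about 20% of a baseline near 2,500 kcal/day. The baseline here is my inference from the stated numbers, not a value given in the post.

```python
# The stated ~20% figure implies a baseline near 2,500 kcal/day on the
# unprocessed diet (inferred from the numbers in the post).
baseline_kcal = 2500
extra_kcal = 500
increase = extra_kcal / baseline_kcal
print(f"{increase:.0%}")  # 20%
```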
Graph "A" above shows the increased energy consumption on the diet of processed food. The difference is clear even on the first day with a diet. The scientists also measured body weight; it correlated very well with the energy consumption. Changes in body weight were clear by about four days on each diet.
Among the other "miscellaneous" findings...
- The increased consumption is due to eating more carbohydrate and fat; there is no change in protein consumption.
- When asked questions about aspects of the diets such as attractiveness or satiety, the participants' responses were not significantly different for the two diets. That is, food "preferences" did not seem to be an issue.
- People ate faster on the processed-food diet. An interesting clue?
So? One might say that most of what is reported here was already known. What strikes me is the quality of the test system. By using in-patients, the scientists have full control over the test. And that means they could do more -- to help understand what is behind the observed effects, and to test other kinds of foods. What, specifically, is it about processed food that leads to increased consumption? Surely, there is an answer to that, and knowing it could lead to healthier processed food.
* Controlled study links processed food to increased calorie consumption. (EurekAlert! (Cell Press), May 16, 2019.)
* Eating ultra-processed foods will make you gain weight. Here's the scientific proof. (Medical Xpress (E Baumgaertner, Los Angeles Times), May 18, 2019.)
* Expert reaction to study looking at processed food, calorie consumption and weight gain. (Science Media Centre, May 16, 2019.) A collection of comments from experts. A big theme in the comments is caution about what the term "processed" means.
The article: Ultra-Processed Diets Cause Excess Calorie Intake and Weight Gain: An Inpatient Randomized Controlled Trial of Ad Libitum Food Intake. (K D Hall et al, Cell Metabolism 30:67, July 2, 2019.)
My page Internet resources: Biology - Miscellaneous contains a section on Nutrition; Food safety. It includes a list of related Musings posts.
July 10, 2019
A robot that moves like a plant. Very slowly, of course. But the point is that it moves based on osmotic changes, which is how plant tendrils wrap around a host structure, for example. Why? It offers potential advantages in terms of delicate handling. The new work makes progress in showing that such a system can operate reversibly under practical conditions.
* News story: Researchers design the first soft robot that moves like a plant. (A Micu, ZME Science, January 29, 2019.) Links to the article, which is freely available. The article includes three movie files (about 1 minute each, no sound). The first two show how the device works; the third is on how it is made.
* I have listed this item on my page Biotechnology in the News (BITN) under Bio-inspiration (biomimetics).
July 8, 2019
One way to deal with a pest is to make use of a natural enemy. A fungus that naturally infects mosquitoes, for example.
That's the starting point for some recent work. The next step was to modify the fungus so that it made an additional insecticidal toxin; that was reported in 2017. In a new article, we now have a test of the effectiveness of the modified fungus, under "semi-field" trial conditions.
In this test, three groups of mosquitoes were used. Two were treated with fungi; one group served as a control.
The fungal treatment was only for one evening. The graph shows the survival of the mosquitoes (y-axis) vs time (x-axis) following that treatment.
The top (blue) curve is for the control mosquitoes, not treated. The middle (red) curve is for treatment with a fungus that is essentially wild-type; it is called RFP. The bottom (green) curve is for the modified fungal strain, called hybrid.
The pattern is clear... The wild type fungus reduces survival of the mosquitoes. The modified fungus does so -- better.
The trial is called "semi-field". That means it was done in a screened enclosure in an area where the mosquitoes -- and malaria -- are endemic.
RFP = red fluorescent protein. The "wild-type" fungus has been "marked" for ease of tracking. The "hybrid" fungus has also been marked, with a different color fluorescent protein. It is assumed that the markers do not affect the biological activities of the two fungal strains.
This is Figure 1 from the article.
In a longer experiment, with continuous availability of the fungal spores, the mosquito population was reduced by about 99% over 45 days (two mosquito generations).
What is this new toxin? It's a toxin that spiders make to kill insects. That is, the gene for the toxin has been transferred from spider to fungus.
How does the system work? Fungal spores are applied to a dark surface -- the kind mosquitoes like to rest on after a meal. Mosquitoes landing on the surface pick up the fungus, which infects them. And the infection process activates the toxin gene.
The authors note some merits of the system. For example, it is easier to make genetic modifications of the fungus than of mosquitoes. This point will facilitate development of variants as people gain experience with the system. The fungus used here is specific for certain mosquitoes, thus limiting off-target effects.
With good data on the effectiveness, as sampled above, and other merits, the authors suggest that this system deserves serious consideration as a tool against malaria-carrying mosquitoes.
* Genetically modified fungus kills malaria-spreading mosquitos in landmark West African trial. (New Atlas, May 30, 2019.)
* GM fungi to kill malaria mosquitoes. (Naked Scientists, May 31, 2019.) Interview with authors R St Leger and B Lovett, by C Smith. (Audio file available.)
* News story accompanying the article: Malaria: Fungus with a venom gene could be new mosquito killer. (G Vogel, Science 364:817, May 31, 2019.)
* The article: Transgenic Metarhizium rapidly kills mosquitoes in a malaria-endemic region of Burkina Faso. (B Lovett et al, Science 364:894, May 31, 2019.)
A recent post about another novel approach to dealing with mosquitoes: What if we gave mosquitoes anti-malarial drugs? (April 7, 2019).
There is a section of my page Biotechnology in the News (BITN) -- Other topics on Malaria. It includes a list of related Musings posts.
July 6, 2019
It's a common type of experiment in modern biology... Take a gene from one organism, insert it into another, and see what the effect is. But when it comes to brain genes, we may get uneasy.
A recent article offers a new example of such an experiment. Let's focus on the science for now.
A simple summary... The scientists took a human gene thought to be important for development of the distinctively human brain, and added it to the genome of rhesus monkeys. The resulting monkeys showed aspects of brain development that were more human-like. Further, they were "smarter", at least as judged by one test.
The gene is called MCPH1. It is known to be a rapidly-evolving gene within the primates. In previous work, the gene had been added to mice, with some "humanizing" effect. But mice are not primates; monkeys are.
What was done was to add the gene, along with some of its local regulatory sequence, to the monkeys as a transgene; it did not replace the monkey version, but was added into the monkey genome. There are a number of technical details about such experiments, but the general design is standard.
The first graph shows how the transgene affected the development of one type of brain cell: the glia.
The test uses a marker (called FABP) that is considered characteristic of immature glia cells. Tissue samples at two monkey ages are stained for the marker.
The two ages are E136 (day 136 of embryonic development; left side) and P76 (day 76 post-birth; right side).
Each bar is for one monkey. The label for the monkey shows whether it is wild-type (WT; blue) or transgenic (TG; red).
The bar height shows the percentage of cells with the FABP marker; that is taken as the percentage of immature glia cells.
The big picture...
- There were more immature glia cells in the transgenic monkeys than in the wild type monkeys. That was true at both time points.
- The percentage of immature glia cells declined for the TG monkeys. That is, the glia cells continued to mature for the TG monkeys.
(For the wild-type, the percentage of immature glia cells is not very different; it is some kind of background level.)
This is part of Figure 2E from the article.
The testing here is on brain tissue. The monkeys were sacrificed for the analysis. The identifying numbers show that different monkeys were used at the two time points.
The full analysis included several measurements of this type. The general observation... The brains of the TG monkeys showed delayed development -- a characteristic of human brains.
How do the TG monkeys "perform"? The following graph shows some results from a test of short-term memory...
In this test, monkeys were asked to identify something they had seen earlier. The particular test shown here is recognition following an 8 second delay. (The test is called delayed-matching-to-sample (DMS).)
The y-axis shows the percentage of correct responses. Red is for TG monkeys, blue for WT -- as above.
The big picture... The TG monkeys scored better on this test. Similar results were obtained with a range of delay times (from 4 to 32 seconds).
This is part of Figure 5B from the article.
As we noted at the top, the work suggests that adding this particular human brain gene to rhesus monkeys leads to monkeys with more human-like brains, as judged by cellular development and performance.
These are difficult and long experiments. The first of the TG monkeys used here were from 2011.
And that all leads to... Should scientists do experiments such as this? Or, better... How do we decide which experiments involving human brain function in other animals are acceptable and which are not? The authors of the article note the question, but pretty much dismiss it. They do note that the work received review and approval from the appropriate regulatory agencies. The relevant regulations vary by country, and such work would not be allowed in many countries.
* Transgenic monkeys carrying human gene show human-like brain development. (Xinhua, April 2, 2019.)
* Chinese scientists have put human brain genes in monkeys -- and yes, they may be smarter. (A Regalado, MIT Technology Review, April 10, 2019.) Considerable discussion of the ethical issues.
The article, which is freely available: Transgenic rhesus monkeys carrying the human MCPH1 gene copies show human-like neoteny of brain development. (L Shi et al, National Science Review 6:480, May 2019.)
* A possible genetic cause for the large human brain (March 25, 2017).
* Do human genes function in yeast? Yeast-human hybrids. (August 21, 2015).
* As we add human cells to the mouse brain, at what point ... (August 3, 2015).
Two sections of my page Biotechnology in the News (BITN) -- Other topics are relevant here:
* Brain (autism, schizophrenia).
* Ethical and social issues; the nature of science.
Each includes a list of related Musings posts.
July 3, 2019
What if there was a dog in the MRI machine before you? Or a man with a beard? Is there a risk of acquiring a bacterial infection from the previous occupant? A new article addresses the question. The short answer is probably not, especially if the previous occupant was a dog; MRI operators are good at cleaning the machines after a veterinary guest. There is more to the article, which is amusing in places; be careful about generalizing from it. There are some serious issues, too, not entirely answered by this article.
* News story: A Dog's Fur Contains Fewer Harmful Germs than a Man's Beard -- Dogs shed fewer microbes during medical scanning than do bearded men. (S Coren, Psychology Today, April 16, 2019.) It gives the reference to the article, but not a link. The article is: Would it be safe to have a dog in the MRI scanner before your own examination? A multicenter study to establish hygiene facts related to dogs and men. (A Gutzeit et al, European Radiology 29:527, February 2019.)
July 2, 2019
Some sports, such as American football, lead to a high level of head injuries. The long term neurological consequences of such injuries are becoming increasingly apparent. Musings has discussed this topic before [link at the end].
What's less clear is whether the head injuries lead to earlier death. A new article offers some evidence on this question. Up front... The answer may be less clear than it seems.
The graph shows the survival curves for two groups of athletes: those who played professional-level football (NFL; dark line) or baseball (MLB; yellow line).
The baseball players show higher survival than the football players. The difference tests as statistically significant; see the p value at the lower left.
This is Figure 2A from the article.
That is followed by analyses for deaths due to certain types of conditions. The following graph is the most striking of those. It compares the death rates for football and baseball players due to neurodegenerative disorders.
Both figures, and indeed all the survival curves in the article, follow the same basic layout. The color coding for the two sports is the same, as shown on the graph above. The x-axis scale (age) is the same. However, the scaling for the y-axis (survival) varies.
The two curves are very different. Look at the numbers, and you will see that the death rate is about three times higher for the football players (9%) than for the baseball players (3%) by the last time point.
That's the rate for cases where neurodegenerative diseases contributed to the death.
The number of deaths considered here was 39 (NFL) and 16 (MLB). Those small numbers illustrate one of the difficulties in doing such analyses, especially as one tries to break the large group down into sub-groups.
This is Figure 3C from the article.
Taken together, the two graphs show that football players survive more poorly than baseball players. In particular, neurodegenerative diseases are a more prevalent factor in the deaths of football players. Other data in the article shows, for example, that football players also have a higher death rate from cardiovascular disease, but that the death rates from cancer are about the same for the two groups.
But perhaps we have not yet gotten to the interesting issues. What are these data sets, and do they address the intended question?
The data sets themselves are logical enough, though it was surprisingly difficult to get them.
The original question was whether head injuries in football lead to a higher death rate. Compared to what? Logically, we might ask, compared to similar people who did not play football. But how could we collect a data set for such people? So, instead of analyzing what we really want, the scientists analyzed a data set that is available. The problem? Baseball and football are different; although both may involve extensive physical activity and training, they make different demands on the body. And it is likely that baseball players and football players are different types of people. We can't say that the differences in death rate observed here are due to the cause of interest (head injuries).
In fact, others have looked at the death rate for football players, using one or another control set, and found various things.
These comments are not a criticism of the work or of the article. The authors are clear about what they did, and they discuss the limitations. Further, the accompanying commentary focuses on them. The caution is that simple summaries of the work may lose the nuances. The article is a useful contribution to the story of the aging of athletes; it provides one more piece of a big puzzle.
* Pro Football Players Die at a Higher Rate than Pro Baseball Players. (J Akst, The Scientist, May 28, 2019.)
* NFL Players Have a Higher Rate of Mortality than Major League Baseball Players. (R Dillard, DocWire News, May 28, 2019.)
* Commentary accompanying the article, freely available: Considerations for Present and Future Research on Former Athlete Health and Well-being. (Z Y Kerr et al, JAMA Network Open 2:e194222, May 24, 2019.) Reading this first could be good.
* The article, which is freely available: Mortality Among Professional American-Style Football Players and Professional American Baseball Players. (V T Nguyen et al, JAMA Network Open 2:e194223, May 24, 2019.)
A background post on football injuries: Evidence for brain damage in players of (American) football at the high school level (August 23, 2017).
A reminder about interpreting statistics... What does a p value mean? Statisticians make a statement (August 6, 2016).
More athletes... High-performing athletes: might they have performance-enhancing microbes in their gut? (June 28, 2019). (Just a couple posts below.)
Among posts about baseball... The origins of baseball -- two million years ago? (August 18, 2013).
My page for Biotechnology in the News (BITN) -- Other topics includes sections on Aging and Brain (autism, schizophrenia). Each of those includes a list of related posts.
June 30, 2019
We get a lot of earthquakes in California.
A recent article re-examines the seismological records, and finds a million more quakes than we had thought. That's for a ten-year period (2008-17), and is for Southern California. The count rose about 10-fold, to about 1.8 million.
The approach was to use a more sensitive analysis, thus increasing the detection of small quakes. The quakes added to the catalog were below magnitude (M) about 2 -- mostly below M 1.
No surprise. It's well known that the number of quakes is much larger for low magnitudes. The important question is whether knowing about the smaller quakes matters. These are quakes that would probably not be noticed by anyone, but which are part of the story of what the ground is doing. The graph below suggests that they might matter.
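The point that small quakes vastly outnumber large ones is usually summarized by the Gutenberg-Richter relation. A minimal sketch; the a and b values below are illustrative assumptions, not fits to the Southern California catalog:

```python
# Gutenberg-Richter relation: log10(N) = a - b*M, where N is the number
# of quakes with magnitude >= M. The a and b values here are illustrative.
def expected_count(magnitude, a=5.0, b=1.0):
    """Expected number of quakes at or above the given magnitude."""
    return 10 ** (a - b * magnitude)

# With b near 1 (a typical value), lowering the detection threshold by
# one magnitude unit raises the expected count about 10-fold -- roughly
# the factor by which the new catalog grew.
print(expected_count(1.0) / expected_count(2.0))  # 10.0
```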
In August 2012, there was a significant earthquake (M > 5) near the Southern California town of Brawley. The graph shows the time course of quakes recorded in the area over a three day period.
Look at the upper graph. It shows the quakes vs time. Each point is for one quake: magnitude (left-hand y-axis) and time (x-axis; the tic marks are 12 hours apart).
The first events shown are three quakes with M just below 2. There is then a gap of about 10 hours during which no quakes were recorded. The gap is followed by a swarm of quakes, including the "main event": two quakes with M > 5. (And there were eight more quakes with M > 4 over two days.)
That's the analysis based on the "old" catalog, labeled here as the SCSN Catalog.
Now look at the lower graph. Same time, same place, but using the larger catalog that was developed with the lower threshold (QTM Catalog). The gap seen in the top graph is now full of many -- tiny -- quakes. The accompanying map shows that most of these tiny quakes were fairly close to the site of the main (M 5) quakes that followed soon.
The red curve? It shows the cumulative count vs time, for quakes of any magnitude -- from that catalog. Log scale. It's not particularly important here, but you might note that the final total is about 10-fold higher with the new catalog. See the right-hand y-axis scale.
The two catalogs? SCSN = Southern California Seismic Network. QTM = quake template matching.
The x-axis is almost certainly mislabeled. The first two digits are for the month (MM; August), not the year (YY).
This is the left side of Figure 3 from the article.
The significance of the three early quakes (M ~ 2) changes when we see the more complete record. Instead of being an isolated cluster, they are now connected to the main event. The tiny quakes during the gap, newly revealed here, seem to point toward the main event that is coming. Are such tiny quakes, in general, useful information? The only way we can find out is to see the more complete record for many events, and learn from the experience.
In any case, the new work is an advance in analyzing seismological data. It should lead to a better understanding of what is underground.
How does one go back and find tiny quakes in the records? It's a signal-noise problem. The key to uncovering more signals among the noise is more detailed analysis, which allows the scientists to recognize features in the data that are typical of a particular location, and also to correlate data from nearby locations. It's computationally expensive!
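The core of template matching can be sketched in a few lines: slide the waveform of a known quake along the continuous record, and flag windows where the correlation is high. A toy version of the general idea; the synthetic signals and the 0.8 threshold are my own assumptions, and the article's actual pipeline is far more elaborate:

```python
import numpy as np

def match_template(record, template, threshold=0.8):
    """Indices where the Pearson correlation between the template and a
    window of the record exceeds the threshold."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    hits = []
    for i in range(len(record) - n + 1):
        w = record[i:i + n]
        if w.std() == 0:
            continue
        corr = np.sum(t * (w - w.mean()) / w.std())
        if corr > threshold:
            hits.append(i)
    return hits

# A synthetic "record": noise, with a scaled copy of the template buried
# at index 200 -- a small event that simple amplitude thresholds might miss.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 4 * np.pi, 50))
record = rng.normal(0, 0.1, 500)
record[200:250] += 0.3 * template

print(match_template(record, template))  # hits cluster around index 200
```

Doing this for every template against years of multi-station data is what makes the approach computationally expensive.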
* Scientists Have Identified Almost 2 Million 'Hidden' Earthquakes Shaking California. (P Dockrill, Science Alert, April 23, 2019.)
* Scientists identify almost 2 million previously 'hidden' earthquakes -- A closer look at seismic data from 2008-17 expands Southern California's earthquake catalog by a factor of 10. (EurekAlert! (Caltech), April 23, 2019.)
* News story accompanying the article: Geophysics: The importance of studying small earthquakes. (E E Brodsky et al, Science 364:736, May 24, 2019.)
* The article: Searching for hidden earthquakes in Southern California. (Z E Ross et al, Science 364:767, May 24, 2019.)
Among posts about California earthquakes...
* Earthquakes induced by human activity: oil drilling in Los Angeles (February 12, 2019).
* A significant local earthquake: identifying a contributing "cause"? (July 31, 2018).
* How PBRs survive major earthquakes; why being near two faults may be safer than being near just one (September 22, 2015).
June 28, 2019
An intriguing article!
Let's start with a summary of the findings and interpretation, as a framework...
1. The scientists find higher levels of a certain type of bacteria in the guts of people following a marathon.
2. Inoculating those bacteria into mice improves their endurance.
3. From knowing the biochemistry of these bacteria, there is a plausible explanation for how this might work.
The following figure shows some of the results for the runners...
In this figure, each column is for one runner from the 2015 Boston Marathon. Each row is for one day, from 5 days before the race (-5) to 5 days after the race (+5). The days are shown on the right-hand y-axis.
On each day, fecal samples from each runner were tested for the abundance of various bacteria, using analysis of the ribosomal RNA.
Each bar shows the relative abundance of Veillonella bacteria. That relative abundance is shown on the left-hand y-axis. (I think that the scaling is the same for all graphs. Therefore, your visual impression of bar height is meaningful and can be compared throughout the figure.)
That is, each column shows how the abundance of Veillonella bacteria changed over time for a specific runner. Each row shows the abundance in the set of runners at a given time.
For example... The first runner (SG01) had a level of Veillonella at day -5 that was below the limit shown here, but the amount increased in the days leading up to the race (presumably due to training). (Day -1 doesn't quite fit; samples vary, for reasons we don't understand.) The amount of Veillonella increased further on days 2 and 3 following the race, then declined.
One big observation is that people vary! But it also seems that there is a tendency for the amount of Veillonella bacteria to be higher after the race, at least for some people.
Veillonella is the only bacterium examined for which there is any suggestion of a change associated with the race.
This is Figure 1b from the article.
The runners were chosen as middle-of-the-pack. That is, these are not the elite of marathoners, but they are serious exercisers.
Results such as those pointed to Veillonella as possibly being of interest. That led to a controlled test -- with mice.
The general nature of the test is that the mice were asked to run for as long as possible; it is an endurance test. There were two treatments: the mice were treated (intrarectally) with either Veillonella bacteria (a strain of Veillonella atypica isolated from the marathoners) or control bacteria (Lactobacillus bulgaricus).
The y-axis shows the endurance result: how long each mouse ran until it stopped, presumably from exhaustion. (Actually, each mouse had three trials, and the result reported here was the longest time it achieved in the three trials.)
The results show that the mice ran longer with the Veillonella bacteria (right-hand data set) than with the control. The effect, while seemingly small, tests here as statistically significant. That is, the results suggest that giving the mice Veillonella bacteria improved their endurance.
This is Figure 2a from the article.
As we noted at the top, there is a plausible biochemical explanation for the effect. Veillonella bacteria can metabolize the lactate that builds up during exercise. The bacteria convert lactate to propionate, which is useful.
The control bacteria, a Lactobacillus, may well be making lactate; that complicates the interpretation. This point needs to be sorted out in further work.
As you read about this work, be sure to distinguish two issues. One is... Do athletes make use of a natural shift in their microbiome, involving Veillonella bacteria and lactate metabolism, to achieve higher performance? That's the focus of the article itself, and there is at least suggestive evidence, as well as a plausible mechanism, to support the claim.
Beyond that, it's easy to imagine how this might be used -- and the questions it could lead to. In the context of competitive athletics, even small improvements can be important. Interestingly, in comments beyond the article itself, the authors even suggest that their findings might be of use for those who simply don't exercise much, though there is no particular evidence to support that suggestion.
In terms of the work leading to a "useful" treatment for enhancing exercise, we should note one further result from the article. Giving the mice propionate led to a performance gain similar to that seen with the Veillonella bacteria. It is possible that the main benefit of the bacteria is making propionate, a useful energy source (rather than reducing lactate).
* Microbiomes of Elite Athletes Contain Performance-Enhancing Bacteria. (GEN, June 25, 2019.)
* Could a Gut Bacteria Supplement Make Us Run Faster? Running a marathon ramps up levels of a gut bacteria that made mice run faster, but it's unclear whether it would work in people. (G Reynolds, New York Times, June 26, 2019.) Good discussion of the limitations of the work so far.
* Discovery of performance-enhancing bacteria in the human microbiome -- A single microbe accumulating in the microbiome of elite athletes can enhance exercise performance in mice, paving the way to highly-validated performance-enhancing probiotics. (B Boettner, Wyss Institute (Harvard), June 24, 2019.) From the lead institution.
* News story in another member of the journal family: Microbiome: Working out the bugs: microbial modulation of athletic performance -- A multi-faceted translational study provides the first evidence that gut microbial conversion of lactate to propionate may enhance athletic performance during high-intensity endurance exercise. (R N Carmody & A L Baggish, Nature Metabolism 1:658, July 2019.)
* The article: Meta-omics analysis of elite athletes identifies a performance-enhancing microbe that functions via lactate metabolism. (J Scheiman et al, Nature Medicine 25:1104, July 2019.)
The article discloses that some of the authors have formed a company, presumably to develop the findings into a commercial product.
* * * * *
More about improving the endurance of mice: Sparing glucose for athletic endurance (August 21, 2017).
A recent post about the human gut microbiome: How a "low-gluten" diet may benefit those who are not gluten-sensitive (January 27, 2019).
More about running: Should you run barefoot? (February 22, 2010). Links to more.
Added July 2, 2019. Next post about athletes... Comparing the death rates of American football and baseball players (July 2, 2019). Just a couple posts above.
June 26, 2019
A database of women scientists. A resource for making connections. One aspect of that is promoting the involvement of women in activities such as conferences and news resources. All levels of involvement, all fields of science, everywhere. Seems like a good idea; please read and share.
* News story: Scientists create international database of women scientists. (Phys.org (University of Colorado), April 23, 2019.) Links to the article, which is freely available, and directly to the database web site. The article describes the purpose, the first year of experience, and plans. The database is called 500 Women Scientists; as of the article, it included over 7,500 scientists.
* Also see... Women in science: How about at the highest level, the national academies? (April 12, 2016).
June 25, 2019
Growing maize results in about 4300 deaths in the United States each year. The main reason is the ammonia released from fertilizer.
That's the gist of a recent article. It is based on modeling the total corn-growing process.
The following figure summarizes the main findings, and also gives an idea of the nature of the analysis.
Start with the bar at the left. It is for total PM2.5 -- fine particulate matter. (The 2.5 refers to the particle size: smaller than 2.5 micrometers.) The total height of that bar shows that PM2.5 from growing corn results in a total of about 13 deaths per million tons of corn per year.
The parts of that bar show the contributions of various process steps to the total. The biggest contribution, by far, is "yellow", for on-field corn production. (The other steps that contribute are listed in the key at the upper right; many are related to fertilizer production.)
The other bars are for types of chemicals that contribute to the total load of PM2.5. The largest, by far, is ammonia, NH3, again related to on-field corn production.
"Primary" PM is that produced directly from combustion. "Secondary" PM is that made by atmospheric chemistry from the indicated source.
You may wonder... synthetic fertilizer vs animal manure? Both release NH3. The latter is particularly bad.
This is Figure 5a from the article.
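The two headline numbers here -- about 4300 deaths per year, and about 13 deaths per million tons of corn per year -- can be cross-checked with one division, giving the corn harvest implied by the pair:

```python
# Two numbers quoted in this post: total annual US deaths attributed to
# corn-related PM2.5, and deaths per million tons of corn per year.
deaths_per_year = 4300
deaths_per_million_tons = 13
implied_harvest = deaths_per_year / deaths_per_million_tons  # million tons/yr
print(round(implied_harvest))  # 331
```

The implied harvest, roughly 330 million tons per year, is in the general range of US corn production, so the two figures hang together.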
The problem, of course, is that corn needs nitrogen -- usually considerably more than is present in the soil. Therefore, farmers fertilize the crop with an N-containing fertilizer. However, much of that N ends up in the air, largely as ammonia. In addition to the direct effect of NH3 (the odor), much of it ends up as particulate matter, the small kind that may be the worst, as it embeds deep in the lungs.
It's all modeling. That's about all one can do; there is usually no way to trace an individual death to a specific pollutant source. The quality of the conclusions depends on the input numbers and the modeling assumptions. The authors have made a start, and published what they did. Others are free to challenge numbers and assumptions; over time that process leads to a better understanding.
But for now, the message is that fertilizer use in growing maize is leading to more air pollution. Can we learn to fertilize better?
Perspective... It's hard to get a big picture from the article. How big is this problem compared to other pollution problems and causes of death? One comment suggested that, in regions with a lot of corn, the ammonia from growing the corn may be the major source of air pollution. The article does contain estimates of the damage in economic terms. The numbers are big, but it is hard to put them in perspective. The important point here is to develop our understanding of growing corn, and to ask if we can do better.
* Air pollution from corn kills thousands of people each year. (M Andrei, ZME Science, April 17, 2019.)
* Corn Pollution Kills Thousands of Americans a Year, Study Finds. (Y Funes, Gizmodo, April 5, 2019.)
The article: Air-quality-related health damages of maize. (J Hill et al, Nature Sustainability 2:397, May 2019.)
A recent post about ammonia pollution: Global map of ammonia emissions, as measured from space (January 22, 2019).
A post about agricultural efficiency: Implementing improved agriculture (January 6, 2017).
More about corn: What can we learn from a five thousand year old corn cob? (March 21, 2017).
June 23, 2019
A recent article reports observations of whales from space -- using the latest in high resolution satellite technology.
Here are two of the photos...
Not very clear? You do see the fluke?
This is the bottom row of Figure 2 from the article.
How are the scientists so sure they can identify whales? Of course, they have the original digital photos, which they can subject to various analyses.
A big problem is distinguishing whales from objects of similar size over the ocean. The article presents some typical photos of boats and planes. The plane wings are clear. And the complexity of the boat seems different from the generally uniform whale.
The satellite images provide additional data. For example...
The graph shows the radiance measured by the satellite sensors for four spectral bands. (NIR1 is a near-infrared band.)
The radiance is the reflected light as measured at the satellite.
Radiance is shown here for three ocean water locations, and for one type of whale -- the fin whale in this case (black line; solid circles).
You can see that the spectral information is somewhat helpful in distinguishing animal from water.
The spatial resolution is 1.24 meters for these spectral bands. (It is about 30 cm for black and white images.)
This is slightly modified from the upper right frame of Figure 4 from the article. I have restored the axis labels.
The full figure shows similar results for other types of whales; each type has a distinctive spectrum. The authors note that the fin whale, data for which is shown above, is one of the easier types to detect.
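The claim that "each type has a distinctive spectrum" lends itself to a nearest-spectrum classifier. A toy sketch; all the band values below are invented for illustration, not taken from the article's radiance data:

```python
import math

# Toy 4-band "radiance spectra" (three visible bands plus NIR1).
# All values are invented for illustration.
reference = {
    "water":     [120, 110, 90, 20],
    "fin whale": [150, 140, 125, 60],
}

def classify(pixel):
    """Assign a pixel's 4-band radiance to the nearest reference
    spectrum, by Euclidean distance."""
    return min(reference, key=lambda name: math.dist(reference[name], pixel))

print(classify([148, 138, 120, 55]))  # fin whale
```

A real pipeline would also have to use spatial shape and context, as the boat-and-plane comparison in the article makes clear.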
The analysis is complex; I have only hinted at some of the issues above. The goal, of course, is that the images will be analyzed automatically by computer. Complex analysis is fine, if it works. The exploratory work, such as above, provides the basis for the analysis.
Overall, the authors think their analyses are rather good. Certainly, they are better than what had been available previously. But there are still significant limitations. For example, the ability to detect calves -- and smaller species -- is limited.
The point of all this? Monitoring whale populations is still a big issue. Satellites have the potential to provide comprehensive coverage in both time and space.
* Watching Whales from Space. (ECO Magazine (British Antarctic Survey), November 1, 2018.)
* Whale-Watching from Space: Why Satellites Are Monitoring Wildlife. (M Prosser, Singularity Hub, December 13, 2018.)
The article, which is freely available: Whales from space: Four mysticete species described using new VHR satellite imagery. (H C Cubaynes et al, Marine Mammal Science 35:466, April 2019.) VHR = very high resolution.
Previous post about whales: A better way to collect a sample of whale blow (November 28, 2017).
More on aerial monitoring...
* An ion-drive engine for an airplane? (February 15, 2019).
* Global map of ammonia emissions, as measured from space (January 22, 2019).
* Improved high altitude weather monitoring (July 18, 2016).
June 21, 2019
What's a sticky pesticide? Think about how a pesticide is used. You spray it onto a plant. Then the rain comes, and your pesticide ends up in the river. A sticky pesticide would stay on the plant when it rains.
A recent article reports making a sticky pesticide. The new pesticide contains two protein domains. One domain sticks to the waxy layer of the leaf; it is hydrophobic. The other domain is the active agent.
The first figure shows that the leaf-binding domain works. The test here is with a simple model system, using leaves in the lab. The test pesticide was applied to the leaves, and then the leaves were rinsed.
In this test, green fluorescent protein (GFP) was used as the "active" domain -- something one can see. (eGFP? The e means enhanced.) It was attached (or not) to another domain, which may (or may not) promote rain-resistant binding to the leaf.
The top row of pictures shows the results before the leaves were rinsed; the bottom row shows the results after rinsing (lab rain).
Start with the right-hand column. That is for GFP alone. It bound to the leaves before rinsing, but was easily rinsed off.
The first two columns show the results for two different two-domain proteins, with the GFP attached to a sticky domain. The sticky domain candidates are called THA and LCI. Both stuck and survived the "rain": substantial amounts of GFP were seen on the leaves after the rinsing.
The scale bar is 0.25 millimeters. So, each image is about a millimeter across.
This is Figure 1B from the article.
So it binds. Does it do real biology? The next figure shows the results for a real biological test, though still with a simple lab system. In this test, a two-domain protein was used, containing the sticky THA domain shown above and a domain, called DS01, known to inhibit the fungus that causes Asian soybean rust.
In this test, soybean leaves were infected in the lab with the fungus.
The figure shows the severity of infection following two treatments.
The left-hand treatment was just water; the average severity for this control was set to 100%.
The right-hand treatment was with DS01-THA -- the two-domain protein with one domain each for sticking and for killing the pest. The pesticide was applied, and the leaves were rinsed. You can see that the severity of infection was significantly lower with the new pesticide. Again, this is with rinsing.
This is slightly modified from Figure 7b of the article. I added a label on the y-axis.
How good is it? It's hard to tell. There is no reference data for other pesticides that might have been used here (without rinsing).
So let's just take this as proof of principle. They designed and made a pesticide with a new property. It worked. The goal seems good, and the approach seems sound. I am surprised that this has not been done before; showing that it can work in a model system may open the door to further development.
I suggested earlier that the pesticide binds to the waxy layer on the leaf surface. Evidence? Aside from the hydrophobic nature of the binding domain... It binds less well to leaves lacking the waxy layer (because of either mutation or chemical treatment).
News story: Rainproof pesticide uses sticky peptides to defend against Asian soybean rust. (A Shearer, Chemistry World, May 13, 2019.)
The article, which is freely available: A bifunctional dermaseptin-thanatin dipeptide functionalizes the crop surface for sustainable pest management. (P Schwinges et al, Green Chemistry 21:2316, May 7, 2019.)
Among posts about pesticides: Largest field trials yet... Neonicotinoid pesticides may harm bees -- except in Germany; role of fungicide (August 20, 2017). This may lead to the question of the environmental impact of the new type of pesticide. Of course, the purpose is to reduce the amount "in the river" -- pollution beyond the plant. The effect on the amount of pesticide on the plant, to a first approximation, might seem zero. I can think of reasons why it might be otherwise, in either direction. It probably depends on the specifics. The question remains on the table, and needs to be addressed for any specific pesticide and application.
Posts about soybeans include the following consecutive posts...
* Improving soybean oil by using high voltage plasma (January 9, 2017).
* Improving soybean oil by gene editing (January 8, 2017).
June 18, 2019
Small nucleons. Some neutrons and protons, especially in heavy atoms, are smaller than usual. It happens when two nucleons, most often one neutron and one proton, momentarily pair up. Measuring the size of a nucleon, using electron scattering, is not a trivial matter.
* News story: Correlated nucleons may solve 35-year-old mystery. (Phys.org (Thomas Jefferson National Accelerator Facility), February 20, 2019.) Links to the article.
June 17, 2019
CCR5 is a human gene best known for its relevance to HIV. The CCR5 protein is the major receptor for HIV (more specifically, for the common HIV-1). Some people -- a few percent of the population -- carry a CCR5 mutation, with 32 bases of the gene missing. That is called CCR5-Δ32, where the Δ indicates a deletion. The mutation appears to lead to loss of any active protein for the gene. People with two copies of the mutant gene are substantially resistant to HIV, and, at least superficially, appear otherwise normal.
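For perspective on how the genotype fractions relate to the frequency of the mutant allele, one can apply the standard Hardy-Weinberg proportions. The post doesn't do this calculation, and the allele frequency below is purely an illustrative assumption:

```python
# Hardy-Weinberg proportions for a two-allele locus.
# q = frequency of the mutant (Δ32) allele; the value is illustrative only.
q = 0.10
p = 1 - q
homozygous_mutant = q * q        # two mutant copies: HIV-resistant
carriers = 2 * p * q             # one mutant copy
homozygous_normal = p * p        # two normal copies
print(round(homozygous_mutant, 3), round(carriers, 3))  # 0.01 0.18
```

Note how much rarer the two-copy (resistant) genotype is than the carrier genotype at any modest allele frequency.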
A case where an HIV+ person received the CCR5 mutation as a result of a bone marrow transplant received attention several years ago. The person became HIV-; residual virus could not get into any new cells. A second such case was noted recently.
There is another reason CCR5 has been in the news recently. We'll leave that for the moment.
How good is the story that people with the CCR5 mutation are otherwise normal (in addition to being HIV-resistant)? A new article examines a database of 400,000 individuals, and looks at survival as a function of the CCR5 genotype.
The graph shows survival (y-axis) vs age (x-axis) for three groups of people in the database.
The three groups are classified by a single criterion: their genotype for CCR5. For simplicity, we refer to the Δ32 mutation as "-". Each person is either +/+, -/+, or -/-.
Survival is shown relative to the start age for this graph, which is 41. (That is the youngest age for which the database provides useful data.)
You can see that people carrying two copies of the mutated form of the gene (-/-; black curve) have lower survival than the other two groups. (There seems to be no difference between the people with one or two copies of the normal form of the gene.)
The database used here is the UK Biobank. Analysis is restricted to those of British ancestry.
This is Figure 1a from the article.
The graph shows survival. Another way the results are presented is with death rate, which is (1 - survival rate). For example, if the survival is 85%, then 15% have died. The analysis for age 76 (not quite the end of the graph, but the highest age with enough numbers for good statistics) shows that the -/- group has a 20% higher death rate over the age range shown.
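The conversion between the two presentations -- death rate = 1 - survival -- and what a "20% higher death rate" does to a survival curve, in one short sketch (the 85% survival figure is illustrative, not the article's):

```python
# Death rate is 1 - survival. The starting survival value is illustrative.
survival_reference = 0.85                  # e.g. 85% of one group alive at 76
death_reference = 1 - survival_reference   # death rate: 0.15
death_mutant = death_reference * 1.20      # a 20% higher death RATE
survival_mutant = 1 - death_mutant
print(round(death_mutant, 2), round(survival_mutant, 2))  # 0.18 0.82
```

Note that a 20% higher death rate shows up as a much smaller-looking gap between survival curves (85% vs 82% here), which is one reason such differences are easy to underestimate visually.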
That is, the CCR5 mutation, which leads to resistance to HIV, overall leads to lower survival.
Why? There is nothing in the current work to address that point. However, there is other work suggesting that the CCR5 mutation leads to increased death from influenza, and adversely affects some other diseases. Since CCR5 is a normal part of our immune system, even if we don't know exactly what it does, it shouldn't be surprising that it is beneficial.
How solid is the conclusion here? Well, it is a single study. It is based on a single database, focusing on people of one type of genetic background. It does not address people of any other group. The way the database is maintained, it is subject to some biases. However, it seems likely that the biases would not affect the general conclusion here.
That is, the study has limitations. Hopefully, other analyses will be done. In the meantime, the analysis here suggests that the effects of CCR5, whose biological role is poorly understood, are complex.
There is one more point to be noted about the CCR5 mutation. There was a big news story in recent months about two babies being born after having their CCR5 genes inactivated (using CRISPR for gene editing). The work was met with outrage for a number of reasons. We can now add to those reasons... The genetic mutation they received may well do the kids harm.
* Genetic Mutation that Prevents HIV Infection Tied to Earlier Death. (E Yasinski, The Scientist, June 3, 2019.)
* Expert reaction to mutated CCR5 gene and mortality. (Science Media Centre, June 3, 2019.) Comments from experts.
* News story accompanying the article: HIV infections: The hidden cost of genetic resistance to HIV-1 -- Assessment of more than 400,000 people over the age of 40 demonstrates that homozygosity for a CCR5 variant that prevents HIV-1 infection comes at the cost of increased rates of mortality. (J Luban, Nature Medicine 25:878, June 2019.)
* The article: CCR5-Δ32 is deleterious in the homozygous state in humans. (X Wei & R Nielsen, Nature Medicine 25:909, June 2019.)
A recent post about CCR5: Role of a receptor for HIV in stroke recovery (March 23, 2019). This post suggests that CCR5 may be relevant in recovery from stroke. More specifically, it suggests that the wild type CCR5 inhibits stroke recovery. The big point for now is to emphasize how little we understand about CCR5. (The article discussed in this earlier post is reference 8 of the current article.)
My page for Biotechnology in the News (BITN) -- Other topics has a section on HIV. It includes a list of related posts.
June 16, 2019
How do planetary moons form? There are various possible ways. How did Earth's Moon form? We're still trying to figure that out.
In recent decades, the main view has been that the Moon was formed following a collision of another substantial object (Mars-size?) with the early Earth. The Moon formed from the ejecta of that collision. The invading object, known as the impactor, has gained the name Theia.
It's an interesting and appealing idea. However, as we learn more about the Moon and Earth, the new data provide constraints on what happened. So far, the picture is confusing. In particular, some aspects of the composition of the Moon are surprisingly similar to those of Earth. That includes the content of specific isotopes. Common understanding of the collision suggests that the Moon should be more like the impactor. Further, most large bodies in the Solar System have distinctive isotopic compositions. We would guess -- but cannot know -- that the composition of Theia was distinct from that of Earth. Something doesn't fit. Musings has discussed the Theia problem before [links at the end].
A recent article offers a new solution to this dilemma. It's just a model based on simulations, but that's a useful step.
The key point of the new model is that it suggests that the surface of the Earth was liquid at the time of impact. With that change, the scientists now predict that the Moon would be formed largely of Earth material following the collision. The following graph summarizes the results of their simulations on this point...
The graph shows the composition of the ejected disk material over 70 hours following the collision. The y-axis is the mass of the disk -- in units of lunar masses.
You can see that the collision is a complex event, but that the ejecta disk is mostly blue and red material, with a composition that changes over time. It is mostly red by the end.
Blue and red material? Blue is from the impactor; red is from the magma ocean (MO) on the Earth surface. That is, the collision ultimately produces a cloud of material that is mostly Earth. And that is how, according to this model, the Moon is so similar to the Earth.
The figure also shows a small amount of gray material. That is material from the cores of the two objects. It's not important in the long run.
This is slightly modified from Figure 1c of the article. I have added text labels to identify the two main materials.
The scientists explicitly show in their modeling that the difference between liquid and solid silicate minerals matters. They have very different heating characteristics.
The model also explains one additional feature of the Moon. It has a surprisingly high content of FeO (iron(II) oxide, or ferrous oxide). The model here offers an explanation: the proposed MO probably would have been enriched in FeO.
The main novel feature here is the proposal of the magma ocean at the Earth surface. Is this a reasonable idea? There have been other proposals to explain the similarity of Moon and Earth compositions, but they have made assumptions not considered reasonable. In this case, the key assumption is indeed reasonable, even likely for at least some time during the early history of Earth.
That's it. A new model, and computer simulations to see what would happen if Theia collided with an Earth with a liquid surface. There are a lot of details in the modeling, but the big message is that such a collision could account for what we know about Earth and Moon. The new model is now on the table, for critique and further development.
* Ocean of magma blasted into space may explain how the moon formed. (T Puiu, ZME Science, April 30, 2019.)
* Behind the paper: Terrestrial magma ocean origin of the Moon. (N Hosono, Nature Research Astronomy Community, April 29, 2019.) By the lead author of the article.
* News story accompanying the article: Planetary science: Why the Moon is so like the Earth -- The Moon's isotopic composition is uncannily similar to Earth's. This may be the signature of a magma ocean on Earth at the time of the Moon-forming giant impact, according to numerical simulations. (H J Melosh, Nature Geoscience 12:402, June 2019.)
* The article: Terrestrial magma ocean origin of the Moon. (N Hosono et al, Nature Geoscience 12:418, June 2019.)
More about the Earth's moons: How many moons hath Earth? In: Briefly noted... (September 5, 2018).
June 12, 2019
Jackass and fish. If you're one of those who thinks it is cute to put a live fish down your throat, at least you should know about fish that erect barbed spines on their body to defend themselves when distressed.
* News story: This Is What Happens When You Drunkenly Swallow a Live Catfish -- A hard lesson in a very strange party tradition. (H Weiss, Atlantic, January 26, 2019.) Links to the article, in the journal Acta Oto-Laryngologica Case Reports; it is freely available. The news story itself perhaps provides some useful perspective on the broader issues.
June 11, 2019
Even very simple organisms modify their behavior based on experience. For example, the slime mold Physarum polycephalum is repelled by sodium ions. However, if exposed to Na+ over time, the organism learns to tolerate it. The process is commonly called habituation. In some way, the slime mold must "know" and/or "remember" that sodium is ok. However, slime molds have no nervous system.
A recent article explores how they do it. Here are some results, establishing the basic phenomenon...
In this test, the slime molds need to cross a bridge that has a high concentration of Na+ in order to get to some food. The aversion index shown on the graph in part a (y-axis) is related to the time it takes them to do so.
Two groups of slime molds were tested. One group was habituated to sodium ions; the other group was a control, untrained group. Testing was done after 1 and 6 days of habituation.
You can see that both groups scored similarly at day 1. However, by day 6 of training, the habituated group showed very little aversion to the sodium ions.
Part b of the figure (right side) shows the amount of Na+ in the two types of cells (day 6). You can see that there is much more Na+ in the habituated cells, which were exposed to high levels of it. For now, just take this as an observation, one that is perhaps not surprising.
This is part of Figure 2 from the article.
Part a above shows habituation, which we might consider a form of learning. It says nothing about memory, at least long-term memory.
The following test was done with the habituated culture a month later. Actually, it's a little more complicated than that. Under normal conditions, the slime molds would lose their stored sodium -- and their habituation -- within a few days. What was done here was to store the slime molds in a dormant state. The results...
Two different measures were used here, but they are closely related. By either measure, the habituated cells showed low aversion, compared to the controls.
This is Figure 3a from the article.
The habituated slime molds remembered that Na+ is ok, even after a month of storage under conditions of physiological dormancy.
The question, then, is how these little things store their memories. The authors make a case that the stored Na+ is the memory. In one experiment, they injected Na+ into the cells, and showed that they then behaved as if they were habituated.
Perhaps what is most important is that the scientists are studying the nature of memory in such an organism, and have a model that is a start toward describing a mechanism of how this "liquid brain" works.
The article: Memory inception and preservation in slime moulds: the quest for a common mechanism. (A Boussard et al, Philosophical Transactions of the Royal Society B 374:20180368, April 22, 2019.) The journal issue has the theme of exploring the differences between "liquid" and "solid" brains.
A post about cellular slime molds: Farming by amoebae (February 15, 2011). This also serves as a reminder that the term slime mold is used for two unrelated types of organisms: the true slime molds of the current post, and the cellular slime molds of this earlier post.
More about memory in simple organisms: Can memories survive if head is lost? (November 23, 2013).
My page for Biotechnology in the News (BITN) -- Other topics includes a section on Brain (autism, schizophrenia). It includes an extensive list of brain-related Musings posts.
June 10, 2019
Mix some chemicals together, under the right conditions, and life will emerge. Something like that must have happened long long ago. But what, how, when, where -- pretty much all the questions a journalist would ask -- are unknown, and perhaps unknowable.
The classic Miller-Urey experiment of 1953 showed that some biochemicals can be generated abiotically. A recent article reports an unusually intriguing example: a simple set-up and a rich mixture of life's chemicals.
What did the scientists do? Mix (in water) two simple organic molecules: pyruvate and glyoxylate. Add some Fe2+ (ferrous) ions. Check back in a while. The two organics are likely to be formed abiotically; ferrous ions would have been abundant in the ancient prebiotic world, with no oxygen in the atmosphere.
Here is part of what they found...
The figure shows the citric acid cycle of modern organisms. (It is also known as the TCA cycle or Krebs cycle.) The format of the cycle shown here is a little odd, but that doesn't matter for us.
The chemicals of the cycle that were made in their reaction mix are shown in dark type.
The chemicals that were not detected in the analysis are shown in light type (and marked with ***): citrate and oxalosuccinate.
That is, of the 11 common biochemicals shown here, 9 were made in their reaction system.
The two organic molecules used as initial reactants contain 2 and 3 C atoms. (Both are alpha-keto acids, present mainly as the anions. Glyoxylic acid is 2-oxoethanoic acid; pyruvic acid has one more C in the chain, as -CH3.) Molecules of various sizes, up to 6 C atoms, were made. That is, the system makes C-C bonds. (Both missing molecules had 6 C atoms.)
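The carbon bookkeeping above can be sketched briefly. The carbon counts below are standard biochemistry; the detection status encoded here is only what the post itself states (citrate and oxalosuccinate not detected, the rest made).

```python
# Carbon bookkeeping for the reaction system described above.
# Carbon counts are standard biochemistry; the two compounds marked as
# not detected are the ones the article reports (citrate, oxalosuccinate).

reactants = {"glyoxylate": 2, "pyruvate": 3}  # C atoms per molecule

# Carbon counts for some citric acid cycle intermediates:
carbons = {
    "citrate": 6,
    "oxalosuccinate": 6,
    "isocitrate": 6,
    "alpha-ketoglutarate": 5,
    "succinate": 4,
    "fumarate": 4,
    "malate": 4,
    "oxaloacetate": 4,
}

not_detected = {"citrate", "oxalosuccinate"}
made = {name: n for name, n in carbons.items() if name not in not_detected}

# Any product with more carbons than either reactant implies C-C bond formation:
largest_reactant = max(reactants.values())  # 3
print(max(made.values()) > largest_reactant)  # True: 6-carbon products formed
```

Even with the two 6-carbon compounds excluded, 6-carbon products (such as isocitrate) remain among those made, so C-C bond formation is clear.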
The reaction system was run at 70 °C, well within the growth range of modern thermophilic microbes. The main distribution of chemicals was evident within a few hours.
This is slightly modified from Figure 2a in the article. I added some *** for the two chemicals not found.
In another part of their testing, the scientists added a nitrogen source (hydroxylamine) and metallic iron. The products then included four of the modern amino acids.
The authors discuss some of the chemical transformations observed. For example, they found both reductive and oxidative reactions. The initial Fe2+ can serve as a reducing agent. But that leads to Fe3+ appearing; it can then serve as an oxidizing agent.
What does one make of this? We have no idea how this might be relevant to the origin of life. It's a fascinating demonstration of what abiotic (prebiotic?) chemistry can do. It offers an example of how a plausible set of conditions could have led to an interesting set of chemicals. But life is much more than the citric acid pathway. On the other hand, there was lots of time available for many such pieces to come together.
* Iron can catalyze metabolic reactions without enzymes -- Findings suggest that the abundant metal might have played a key role in early biochemistry before enzymes evolved. (A Katsnelson, C&EN, May 1, 2019.)
* Life's biochemical networks could have formed spontaneously on Earth. (Phys.org (University of Strasbourg), May 3, 2019.)
* News story accompanying the article: Origins of life: A possible prebiotic basis for metabolism -- Early life forms established a network of reactions for converting carbon dioxide into organic compounds. A non-biological system of reactions that could have formed the network's core on ancient Earth has been reported. (R Pascal, Nature 569:47, May 2, 2019.)
* The article: Synthesis and breakdown of universal metabolic precursors promoted by iron. (K B Muchowska et al, Nature 569:104, May 2, 2019.)
* Can we pinpoint a specific molecular explanation for tissue damage following a heart attack? (March 24, 2015).
* Did life start in a geothermal pond? (February 28, 2012).
* I think I created life (May 21, 2009).
June 8, 2019
Four years ago, Musings noted the prediction and experimental finding that sulfur hydride is a superconductor [link at the end]. In fact, the work set the high temperature (T) record for superconductivity.
Ordinary sulfur hydride is H2S (hydrogen sulfide). The actual superconducting species is likely to be a "superhydride", such as H3S. The result is seen only at ultra-high pressure, which makes possible the superhydride configuration.
A year or so ago, some new predictions appeared. We now have an experimental test of one of those new predictions -- and a new record.
Here are some of the key results, from a new article...
The graph is simple. Resistance (the inverse of conductance) on the y-axis, T on the x-axis.
The results, too, are simple. It helps to read the graph "backwards", from right to left. Start at the high T. There is substantial resistance. At about 250 K, the resistance starts to drop -- and it is zero by about 230 K. The drop to zero resistance is the distinguishing feature of superconductivity.
Scientists use a single number from the graph to describe where the superconductivity starts. The critical temperature (Tc) is where the curve begins its precipitous drop. Here, Tc is 249 K. It's a new record for high-T superconductivity.
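A minimal sketch of how one might read a Tc off such a curve, as the onset of the steep drop. The data here are synthetic, chosen only to mimic the shape described in the text; they are not from the article, and the onset criterion is a deliberately crude one.

```python
# Sketch: reading a critical temperature off a resistance-vs-T curve.
# The data below are synthetic, shaped to resemble the description in the
# text (normal resistance at high T, zero resistance below ~230 K).

temps = [300, 280, 260, 250, 249, 245, 240, 235, 230, 220, 210]            # K
resistance = [1.00, 0.95, 0.90, 0.88, 0.87, 0.50, 0.20, 0.05, 0.0, 0.0, 0.0]

def critical_temperature(temps, resistance):
    """Estimate Tc as the temperature where the steepest drop in
    resistance per kelvin begins (crude onset criterion for this sketch)."""
    drops = [(resistance[i] - resistance[i + 1]) / (temps[i] - temps[i + 1])
             for i in range(len(temps) - 1)]
    onset = max(range(len(drops)), key=lambda i: drops[i])
    return temps[onset]

print(critical_temperature(temps, resistance))  # 249
```

Real analyses use more careful criteria (e.g., a fixed fraction of the normal-state resistance), but the idea is the same: Tc marks where the precipitous drop begins.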
What is the substance? It is a hydride of lanthanum; a superhydride. LaH10. (Don't try to make sense of the chemical composition. Odd things happen at ultra-high pressure. And take the H subscript 10 as approximate.)
It's about what was predicted. (The prediction was 270-290 K. The actual result here is not quite as high, but the general agreement is encouraging, given the difficulty of both the theoretical and experimental work.)
The curve on the left? LaD10, where all of the H has been replaced by the heavy isotope D (deuterium). It, too, superconducts, but Tc is about 70 K lower. That shift, too, is about what the theory predicts. The low mass of the H plays a key role in the superconductivity; the higher mass of the D lowers Tc -- by both prediction and measurement.
That may all seem simple, but the work is not. Synthesis and measurement take place in a diamond anvil cell. The Tc is measured here at 151 gigapascals. (1 GPa = 10^9 Pa ~ 10,000 atmospheres.)
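A quick unit check on that pressure, using the standard atmosphere (101,325 Pa):

```python
# Unit check for the pressure quoted above: 151 GPa expressed in atmospheres.

GPA_IN_PA = 1e9       # pascals per gigapascal
ATM_IN_PA = 101_325   # pascals per standard atmosphere

pressure_atm = 151 * GPA_IN_PA / ATM_IN_PA
print(f"{pressure_atm:.2e}")  # 1.49e+06 -- about 1.5 million atmospheres
```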
This is Figure 4 from the article.
There are two important parts to this story. First, there is a new record. We like records. But second, this result was predicted, as was the previous one for sulfur hydride. Much of the history of superconductivity has been trial and error. There are theories, which sometimes work, and sometimes don't. Now scientists are predicting high-T superconductivity -- and then verifying it.
The superconducting hydrides, here with lanthanum and previously with sulfur, are metallic materials. As metallic superconductors, they follow the major theory for how superconductivity works. For many years the record holders for high-T superconductivity were non-metallic materials, for which there is no clear theory.
What's the next prediction? In fact, there is another prediction, and it is quite exciting. The theory predicts that a hydride of yttrium, YH10, will be superconducting at T around 320 K (47 °C), once again at ultra-high pressure. That's not just room T; it is well above room T. (In the field of superconductivity, "room T" is often taken to be 273 K, or 0 °C.) It's a non-trivial project to try to make it, but perhaps superconductivity at -- or above -- room T is almost here, with a theoretical underpinning.
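The kelvin-to-Celsius conversions quoted in this post are easy to verify:

```python
# Converting the temperatures discussed above from kelvin to Celsius.

def kelvin_to_celsius(t_k):
    return t_k - 273.15

print(round(kelvin_to_celsius(320)))  # 47: the predicted Tc for YH10
print(round(kelvin_to_celsius(250)))  # -23: the current record, for LaH10
```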
There are actually two articles on superconductivity in lanthanum superhydride. They substantially agree on the big points, though the exact numbers are different for the two groups. The article discussed above, which was published last month, is from the same lab as the sulfur hydride work discussed in a previous post. The other article came out earlier this year. (It reports Tc "above 260 K".) Some news stories refer (and link) to both articles.
* Another major step towards room-temperature superconductivity -- A hydrogen-rich material becomes superconductive under high pressure and at minus 23 degrees Celsius. (Max Planck Institute for Chemistry (Mainz), May 24, 2019.) From the lead institution for the current article.
* Viewpoint: Pushing Towards Room-Temperature Superconductivity. (E Zurek, Physics 12:1, January 14, 2019.) Refers to both articles. If you want optimistic predictions, see the final paragraph of this item.
* News story accompanying the article: Condensed-matter physics: Superconductivity near room temperature. (J J Hamlin, Nature 569:491, May 23, 2019.)
* The article: Superconductivity at 250 K in lanthanum hydride under high pressures. (A P Drozdov et al, Nature 569:528, May 23, 2019.)
Background post: What's the connection: rotten eggs and high-temperature superconductivity? (June 8, 2015). Note that this post links to two update posts on the story; the basic message remains the same. Links to more, on both superconductivity and unusual chemistry at high pressure. This post was made four years ago -- to the day.
Previous post about yttrium: Y-Y: the first (May 5, 2019).
June 5, 2019
Google Scholar (GS) and citation searching. Two recent articles suggest that GS may now be the largest database for scholarly or academic articles. That leads to... It may really be as good as any for doing a citation search. If you have an article and want to know what followed from it (that is, what articles cited it), find the item in GS and look at the bottom line, for citations.
* News story: Revisiting Google Scholar. (Swansea University, November 21, 2018.) Very brief, but it links to both articles. The first one they list is freely available. The news story also links to a short guide to using GS, which is available in both English and Welsh. This guide, and other library guides linked there, are partially customized for their university, but much of the information is general.
* My page on Library matters includes a section on Citation searches. It provides a brief introduction to the why and how of doing citation searches. (Much of that page has information about the UC Berkeley library system, but this section is general.) I have added the information here to that section. (Hm... Some of the info for Web of Science is out of date. Gotta fix that.)
June 4, 2019
Artemisinin is a first-line drug for treating malaria. However, there is a serious problem of supplying enough of it. The drug is isolated from a particular plant, Artemisia annua -- from special glands on some leaf hairs.
A new article offers the prospect of an increased supply, from variants of the plant.
The basic finding is that, under some conditions, plants make the drug throughout leaf tissue, not just in the glands.
The following figure shows some data on that point...
The figure shows the analysis of artemisinin in four samples, by a combination of chromatography and mass spectrometry.
More specifically, the graphs are for material detected at mass 305; that corresponds to artemisinin (presumably as an adduct ion, since artemisinin itself has a molecular weight of about 282). Each graph shows how material of that mass came off the chromatography column.
The simple result is that the same material was found in all the samples. What are those samples? Briefly, from the top down...
- reference material (pure artemisinin);
- material extracted from trichomes (leaf hairs);
- material from total leaf;
- material from leaves with the trichomes removed.
When the results were expressed as concentration, the levels of the drug in the various tissues were similar.
EIC? Extracted ion chromatograms.
This is Figure S5A from the article supplement.
That is, the finding is that artemisinin is made in non-gland tissue, too. Studies of the enzymes needed to make the drug show that the enzymes, too, are distributed throughout the leaf tissue.
What are the conditions that lead to the wider distribution of drug synthesis? The scientists have two conditions. One is from their own work... They find that inbreeding leads to plants with non-gland drug synthesis. The second is based on a gland-less mutant. They find that it, too, makes the drug.
The genetic basis of the effect is not clear in either case. Further, the drug levels are low in the current work. What makes the work of interest is that they have opened up the possibility of getting drug from more of the plant. Further work can build on what is shown here; it might lead to plants with a greater overall production of artemisinin.
News story: Study upends 'dogma' on malaria drug component. (Phys.org (M Kulikowski, North Carolina State University), April 9, 2019.)
The article, which may be freely available: Artemisinin Biosynthesis in Non-glandular Trichome Cells of Artemisia annua. (R Judd et al, Molecular Plant 12:704, May 2019.)
I found parts of the article rather murky, including some aspects of the experiment I described above. As I read it, my sense was that I had confidence in what they claimed; the problem is with the writing of the article. The article mainly provides the basis for some further work; it does not itself claim a useful product. Therefore, I'm comfortable presenting the main ideas from it.
* * * * *
You may wonder... What about the process for making artemisinin using engineered yeast? They note it, and say that it has not yet proven itself economically.
* * * * *
A recent post about anti-malarial drugs: What if we gave mosquitoes anti-malarial drugs? (April 7, 2019).
Some of the work on developing a process for making artemisinin in yeast is noted on my page Internet Resources for Organic and Biochemistry under Alkenes.
There is a section of my page Biotechnology in the News (BITN) -- Other topics on Malaria. It includes a list of related Musings posts.
June 3, 2019
The mammalian brain decays rapidly after death; that's the dogma.
A new article suggests that it decays less rapidly than we thought.
It's a fascinating article, for the approach and for the findings. It's also important to understand the limitations of what was shown.
Here is the general approach... Pigs. Dead pigs. Four hours after death, the brains were hooked up to a device the scientists had developed; it perfused the brains with a specially designed fluid, which effectively served as a blood substitute. The scientists then made observations and measurements on the brains, over six hours.
The device is called BrainEx (or BEx). The term refers to it supporting the brain ex vivo.
The pig brains were obtained by arrangement with a local slaughterhouse. Thus the scientists had access to a large ongoing supply of brains, under fairly standardized conditions. However, these were not lab-grown pigs.
A simple summary is that treatment of brains with BEx perfusion resulted in improvements. The comparison was with brains not treated at all, or treated with a control perfusion fluid. Some aspects of BEx-treated brains appeared near normal, even ten hours after death.
At the outset... There is no evidence for any type of global brain function.
Here are examples of the measurements...
Part c (left side) shows the number of cells carrying a particular brain cell marker, called IBA1 (a marker for microglia).
The four conditions, from left to right, are...
- 1 hour after death. PMI means post-mortem interval. This measurement is the earliest they can get; it is effectively a reference (baseline) value.
- 10 hours PMI. No treatment.
- 10 hours PMI. BEx device, with control perfusion solution (or "perfusate").
- 10 hours PMI. The treatment... BEx with the special perfusate they developed.
The BEx treatments started at 4 h PMI. That is, the scientists measured the effect of six hours of treatment, started four hours after death.
The results seem clear. The BEx treatment with the special perfusate restored the count of this type of cell to about the reference value. Without the treatment (middle two bars), that cell number dropped drastically.
Microscopic examination of tissue samples showed that the cells were falling apart in the cases without full treatment. However, the full BEx treatment substantially reduced cell degradation.
Part d (right side) is similar, for a second marker (GFAP, a marker for astrocytes). The results are similar.
This is part of Figure 5 from the article.
That's the idea. The article contains lots of data. Measurements. Pictures of tissues. The example shown above is representative.
What's the take home lesson? Well, the scientists have developed a new method for studying dead brains. That in itself is a big deal. And some of the findings from this early work with the method suggest that brains survive better after death than we had thought. That's about where we should stop. We emphasize again, as they do in the article, that they have not restored "brain function" in any big sense.
The work will continue.
* Restoration of Brain Circulation and Cellular Function after Death. (D Joye, BrainPost, April 23, 2019.)
* The pigs were dead. But four hours later, scientists restored cellular functions in their brains. (S Begley, STAT, April 17, 2019.)
* Expert reaction to study on restoring cellular functions in the pig brain after death. (Science Media Centre (SMC), April 17, 2019.) As usual, the SMC presents the views of scientists in the field -- several of them in this case. They are in general agreement about what this article does and does not do. I encourage people to read at least part of this page for some professional perspectives on the work.
The work has, not surprisingly, raised ethical questions. The following pair of items, published together in Nature in the same issue as the article, are examples of discussions of the ethical implications. They are both freely available.
* Part-revived pig brains raise slew of ethical quandaries. (N A Farahany, Nature, April 17, 2019.) In print: Nature 568:299, April 18, 2019.
* Pig experiment challenges assumptions around brain damage in people. (S Youngner & I Hyun, Nature, April 17, 2019.) In print: Nature 568:302, April 18, 2019.
* News story accompanying the article, freely available: Neuroscience: Pig brains kept alive for hours outside body -- A system that revives pig brains after death raises a slew of ethical and legal questions. (S Reardon, Nature 568:283, April 18, 2019.)
* The article: Restoration of brain circulation and cellular functions hours post-mortem. (Z Vrselja et al, Nature 568:336, April 18, 2019.)
More about brains is on my page Biotechnology in the News (BITN) -- Other topics under Brain (autism, schizophrenia). It includes a list of related Musings posts.
That page also includes a section on Ethical and social issues; the nature of science. I have listed this post there.
May 31, 2019
Analysis of an unusual type of glass, found in the northern Sahara desert and used in artefacts found in the tomb of Pharaoh Tutankhamen (commonly called King Tut), suggests that scientists may be over-estimating the hazards from meteor strikes.
Here is an example of such an artefact...
It is a piece of armor, called a breastplate.
That yellowish piece near the center (maybe a bit greenish, too)... that's the part of interest. It is a sculpture of a scarab -- what we commonly call a dung beetle. (In ancient Egyptian culture, the scarab was considered responsible for rolling out the morning sun.)
It's made from Libyan desert glass -- or LDG. (The major site for LDG is in modern Egypt.)
This is (reduced and trimmed from) the second figure in the news story from ZME Science.
What is the connection to meteors?
It is likely that LDG was made as a result of a meteor strike about 29 million years ago. However, there has been disagreement over what kind of an event was involved. One possibility is that the LDG resulted from a direct impact. But it is also possible that it resulted from the airburst around such an event. If the latter possibility is correct, it implies a huge event, with more than a hundred times the energy of the 2013 meteor strike in Siberia. On the other hand, if direct impact was involved, much lower energy events could have been sufficient.
Can we tell what kind of event caused the formation of LDG? Previous work on LDG samples had not yielded any evidence for direct impact. That lack of evidence made the airburst explanation plausible; indeed, it became indirect evidence for high energy airbursts, which require much larger -- much more dangerous -- events.
And that leads to the current article. In this new work, the authors do new analyses of LDG samples. Some of the details they find can only be explained by the conditions of pressure and temperature that would occur with a direct impact.
If LDG was made by direct impact, it eliminates the need to invoke high energy airburst events to explain it. It doesn't mean such high energy events don't happen, but one type of evidence -- indirect evidence -- for them has been removed. Perhaps such events are not as common as we might have thought.
It's important to distinguish between the experimental work and the discussion that follows. The experimental work establishes facts. The analysis of the glass structure is complex, and the facts could be challenged by further work. Beyond the facts, there is interpretation and even speculation.
In any case, it's a fun story with many aspects. Maybe that is what makes it fun. Unusual rocks in the Libyan desert... rocks that make an appearance in the legendary tomb of King Tut, rocks that are slowly revealing their story, a story that may have implications for our future.
* Scientists solve 100-year-old mystery of yellow desert glass prized by Egyptian pharaohs. (T Puiu, ZME Science, May 16, 2019.)
* Planetary scientists unravel mystery of Egyptian desert glass. (Phys.org (Curtin University), May 15, 2019.)
The article: Overestimation of threat from 100 Mt-class airbursts? High-pressure evidence from zircon in Libyan Desert Glass. (A J Cavosie & C Koeberl, Geology 47:609, July 2019.)
More from King Tut's tomb: The Most Remarkable Funeral Treasures (September 1, 2010).
Previous post about scarabs: Dung beetles follow the Milky Way (February 24, 2013).
The 2013 Siberian meteor strike was among the events discussed in the post Of disasters, asteroids and meteors (February 19, 2013).
Another artefact with a meteorite connection: An extraterrestrial god (October 9, 2012). (Be sure to note the update at the end of the post. One of the original claims has been contested.)
More from the Libyan desert area: Hottest temperature ever recorded on Earth? Libya or Death Valley (California)? (June 30, 2013).
May 29, 2019
Trees, land use, food -- and more. A recent post was about pollution from trees -- a complicating factor in considering the role of trees as a weapon against greenhouse gases. We now have two more "Comment" stories from Nature on aspects of the broad issue. The first emphasizes the difference between natural and "plantation" forests. The second deals with the land use implications of forests vs food production; it pleads for an integrated view. Each of these stories (the two here plus the earlier one) is an interesting view of part of a big complex topic.
* "Comment" stories:
1) Restoring natural forests is the best way to remove atmospheric carbon. (S L Lewis, Nature, April 2, 2019.) In print, with a different title: Nature 568:25, April 2, 2019.
2) Fix the broken food system in three steps. (G Schmidt-Traub, Nature, May 8, 2019.) In print: Nature 569:181, May 9, 2019.
* Background post: Interaction of pollution sources: Can the whole be less than the sum of the parts? (March 9, 2019). It links to a "briefly noted" item about a related recent news story.
May 28, 2019
The use of hydrogen as a fuel has some appeal. It has a very high energy density (energy/mass), and it burns cleanly. It also presents challenges. One challenge is the source of the hydrogen. Making it using fossil fuel is not consistent with the grand plan.
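To get a rough sense of what "very high energy density" means, here is a back-of-the-envelope sketch in Python. The heating values are standard textbook figures, not numbers from the article:

```python
# Approximate lower heating values (LHV), by mass, in MJ/kg.
# These are common textbook figures, for rough comparison only.
lhv = {
    "hydrogen": 120.0,    # highest of any chemical fuel
    "gasoline": 44.0,
    "natural gas": 50.0,  # mostly methane
}

ratio = lhv["hydrogen"] / lhv["gasoline"]
print(f"Hydrogen carries about {ratio:.1f}x the energy of gasoline per kilogram")
```

Of course, energy per unit volume is a different story: hydrogen gas is very light, which is one of the practical challenges of using it as a fuel.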
One possibility is to make hydrogen fuel by electrolysis of water, using solar energy to drive the process. After all, both water and solar energy are abundant and cheap. Aren't they?
Not according to the authors of a recent article. Fresh water is becoming an increasingly problematic resource; the amounts of water needed to make hydrogen fuel at large scale would significantly impact the water supply.
A response to that might be... use seawater (salt water), not fresh water. But electrolysis of seawater to make hydrogen has its own problems. First, it may produce chlorine gas as a by-product. And second, salt (more specifically, the chloride ion in salt) is corrosive; ordinary electrodes don't last very long in seawater.
What's wrong with making chlorine gas, Cl2? Nothing, in itself: it is a commercial product, and it is made by electrolysis of salt water. The problem is the numbers. If we are talking about large-scale production of hydrogen, enough to become a significant part of the energy budget, the amount of chlorine made as a by-product would far exceed the demand.
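A stoichiometric sketch shows why. The chlor-alkali reaction itself is standard chemistry; the production scale used below is a hypothetical illustration, not a figure from the article:

```python
# Chlor-alkali electrolysis: 2 NaCl + 2 H2O -> Cl2 + H2 + 2 NaOH.
# One mole of Cl2 is made per mole of H2, so compare by mass:
M_H2 = 2.016    # g/mol
M_Cl2 = 70.90   # g/mol

cl2_per_kg_h2 = M_Cl2 / M_H2
print(f"{cl2_per_kg_h2:.1f} kg of Cl2 per kg of H2")

# Hypothetical scale, for illustration only: if hydrogen production
# reached 100 million tonnes/year, the Cl2 by-product would be...
h2_tonnes = 100e6
cl2_tonnes = h2_tonnes * cl2_per_kg_h2
print(f"Cl2 by-product: {cl2_tonnes / 1e9:.1f} billion tonnes/year")
# ...roughly 40x current world chlorine demand (order of
# 0.1 billion tonnes/year, a rough figure).
```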
The new article presents a process for making hydrogen from seawater. Here are some key results...
The top (black) line is the one of main interest. It is for the electrolysis of real seawater.
The voltage required to maintain a constant current (y-axis) stays stable for a thousand hours. (With ordinary electrodes, performance would have degraded badly within about ten hours.)
The lower two curves are for water with about three times the salt content of seawater. The processes here, too, are stable.
This is Figure 2D from the article.
How did the scientists achieve this?
We noted two problems earlier; they have addressed both of them.
First, they used potassium hydroxide, KOH, so that the electrolysis is run under basic conditions. This suppresses the formation of Cl2; instead, O2 is made at the anode. This step is well known.
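For reference, the half-reactions for alkaline water electrolysis (textbook chemistry, not specific to this article) are:

```latex
% Cathode (hydrogen evolution):
4\,\mathrm{H_2O} + 4e^- \rightarrow 2\,\mathrm{H_2} + 4\,\mathrm{OH^-}
% Anode (oxygen evolution, favored over Cl2 formation at high pH):
4\,\mathrm{OH^-} \rightarrow \mathrm{O_2} + 2\,\mathrm{H_2O} + 4e^-
% Net: 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
```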
The second step was their new development: making an electrode that is not corroded by chloride ion. The following figure is a diagram of its design...
The main point you can see here is that the electrode is multi-layered.
What you can't tell from this diagram, or the labeling here, is that the layering is designed so that chloride ions are repelled.
This is trimmed from part of Figure 1A of the article.
Why are chloride ions, Cl-, repelled? Because the outer parts of the electrode have a high density of negative charges.
The authors refer to the three-layer nickel-based electrode as Ni3. The outer layer, which is NiFe hydroxide, is the catalyst. The inner layer, which is metallic nickel, serves as the conductor. The middle layer, which gets oxidized to sulfate, plays the key role of protecting the metallic nickel.
The bottom line? The article has some interesting ideas, and the scientists demonstrate at lab level a process that allows electrolysis in otherwise-corrosive seawater. As so often, we don't take this as a practical process at this point, but as a useful developmental step.
Most of the work here was done inside the lab. However, one experiment was done outside -- using water from San Francisco Bay and authentic California sunshine.
* New process can make hydrogen fuel out of seawater without destroying the devices. (A Micu, ZME Science, March 19, 2019.)
* Clean hydrogen not dead end yet, as new green method creates fuel from seawater. (P Dzikiy, Electrek, March 18, 2019.) The comments section below the news story contains a lively discussion of the pros and cons of hydrogen.
The article, which is freely available: Solar-driven, highly sustained splitting of seawater into hydrogen and oxygen fuels. (Y Kuang et al, PNAS 116:6624, April 2, 2019.)
A post about water resources: Evaluating the world's water resources (August 11, 2015).
There is more about energy issues on my page Internet Resources for Organic and Biochemistry under Energy resources. It includes a list of some related Musings posts.
May 25, 2019
Some results, from a recent article...
In the left-hand graph, the voracious predator Pristionchus entomophagus was offered four species of food. Both predator and prey are labeled across the bottom (and are shown with consistent colors). The y-axis is labeled corpses (and the assay is called the corpse assay). You can see that the test predator consumed many members of other species, but not its own species. The right-hand graph shows a similar experiment for one of the other species. (The full figure in the article includes data for all four species.)
The results are consistent... The worms are indeed voracious, but none of them ate members of their own species.
To be more specific... The worms here are nematodes. The test is done with adult worms doing the eating, and larval worms as the food.
This is the lower half of Figure 1B from the article. The upper half shows similar results using the other two species as predators.
It turns out that the worms will eat some members of their own species, but not their own offspring. How do the worms recognize their kids? The scientists identified a key protein, called SELF-1. That protein has a region that is hypervariable. It varies so much that only very close relatives share the same version of the protein.
The following graph shows the results of an experiment involving different versions of the SELF-1 protein...
The general experimental design is the same as in the top figure. What is different here is that the prey vary in the form of the recognition protein SELF-1.
In this data set, the predator is PS312. And they eat all the different kinds of prey except PS312. That is, they avoid eating their own kind -- defined here by the form of the self-recognition protein SELF-1.
This is the first (left-hand) part of Figure 2 from the article. The other parts show similar results for two more types.
After reading about both of these experiments, one might wonder what was actually being tested in the first experiment. A single strain was used for each species. Thus the larvae were closely related within a species; in fact, they were essentially "self" to the predator adult.
How does this self-recognition system work? The authors observed that the feeding worm touches the candidate larval prey before deciding whether to eat it. SELF-1 is found on the animal surface, in both larval and adult stages. It seems likely that this direct contact is the basis of the recognition event. Further molecular and neurological details are not known at this point.
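As a toy model of the recognition logic... Everything below except the strain name PS312 is invented for illustration; the real mechanism involves surface contact between worms, not string comparison:

```python
# Toy model of SELF-1-based self-recognition.
# The sequences and the strain names other than PS312 are invented;
# this only illustrates the logic of "eat unless the variant matches".
def will_eat(predator_self1, prey_self1):
    """Predator eats prey unless the hypervariable regions match."""
    return predator_self1 != prey_self1

# Hypothetical SELF-1 hypervariable-region variants, one per strain.
strains = {"PS312": "QRSTV", "strainB": "KLMNP", "strainC": "ADEFG"}

predator = strains["PS312"]
for name, variant in strains.items():
    verdict = "eats" if will_eat(predator, variant) else "spares"
    print(f"PS312 vs {name}: {verdict}")
```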
Self-recognition is a big topic in biology. It pops up in a wide range of situations. Among them... worms that avoid eating their kids; vertebrates that keep their immune system from attacking their own tissues. We also note that hypervariable proteins, used here as the basis of self-recognition in worms, are also a key part of the vertebrate immune system.
News story: A peptide against cannibalism -- A small molecule safeguards roundworm larvae against parental attacks. (Max-Planck-Gesellschaft, April 4, 2019.) Includes an electron micrograph looking into the mouth of one of these worms. You can see the two teeth.
The article: Small peptide-mediated self-recognition prevents cannibalism in predatory nematodes. (J W Lightfoot et al, Science 364:86, April 5, 2019.)
Previous post about cannibalism... Cannibalism in the uterus (May 31, 2013).
A post about a nematode that is a workhorse of lab research, Caenorhabditis elegans: Extending lifespan by dietary restriction: can we fake it? (August 10, 2016).
Another nematode... How does worm "fur" divide? (January 4, 2015).
Previous use of the word corpse in a Musings post: none.
May 22, 2019
Transmissible cancers. These are cancers that can be transmitted from one animal to another. Not common, but there are now examples in diverse organisms, and considerable study of what is going on for at least one case. A recent news feature in The Scientist is a nice overview and update.
* News feature: Some Cancers Become Contagious -- So far, six animal species are known to carry transmissible, "parasitic" forms of cancer, but researchers are still mystified as to how cancer can become infectious. (K Zimmer, The Scientist, April 1, 2019.) In print, with a slightly different title... p 36 of April 2019 issue.
* My page for Biotechnology in the News (BITN) -- Other topics includes a section on Cancer. It includes an extensive list of relevant Musings posts. You can scan/search that list for 'devil' or 'clam' to get posts on transmissible cancers. I have noted this new item there.
May 21, 2019
Adult neurogenesis? Making new neurons as an adult. Humans. It is a controversial topic. The traditional view was that we didn't do it. But modern technology has allowed the question to be re-opened. Work in recent years has provided good evidence on both sides. There is no consensus. Evidence "for" is probably more important than evidence "against" at this point, so long as the methodology is accepted -- a non-trivial problem. The level of adult neurogenesis, if real, may be low. But even a low level could be of great interest. After all, this is our brain we are talking about.
A recent article makes an interesting contribution to the field. The following figure provides a simple summary of a complex story...
The graph shows the density of a particular type of cell in the brain (y-axis) as a function of age (x-axis), for several groups of people. Each point is for one person.
The cell type measured here is considered a measure of new neurons.
The main observations...
- People are making new neurons, out to the oldest ages examined.
- The number of new neurons tends to decrease with age.
- The redder the person's symbol, the fewer new neurons they have.
Red symbols? They mean the person had Alzheimer's disease (AD); the redder the symbol, the more advanced the AD. Clear symbols are for people without AD.
The asterisk at the end of the line for the controls? It is for statistical significance, but it is not clear what is being tested.
This is Figure 3l from the article. (3l? That 2nd character is an "el". It's a complicated figure!)
What are the scientists measuring here? First, they are looking at the hippocampus, an area of the brain involved in memory. That is, they are looking at adult hippocampal neurogenesis -- AHN, as they say in the paper. They look for a specific protein marker, called doublecortin (DC). The label DCX+ on the y-axis means doublecortin-expressing. It is considered a marker for immature neurons (neuroblasts); that is, it is taken as a marker of neurons being made. There is no way to follow the process of neuron formation in living human brains. The samples are autopsy samples, stored in brain banks. In some of the work, the scientists examine other markers that are considered markers for various stages of neuron development. A lot of the effort is about building the case that the inference is correct.
The big messages of the article are very clear...
- Adults make new (hippocampal) neurons.
- People with AD make fewer of them -- judged by direct comparison within a single study.
As so often, be cautious. We have already noted the controversy around the question of adult neurogenesis. The article claims important methodological developments that have made for improved measurements of neurogenesis. It's common to read such claims, and they get debated in subsequent work. The methodology of the article will undergo great scrutiny.
The AD result is of particular interest. It's important that we have a direct comparison of AD and control samples by the same procedures. But even if the basic comparison stands, we don't know what it means. The result is a correlation. The work tells us nothing about the role of neurogenesis in the disease -- though we certainly can come up with interesting possibilities.
Overall, the article appears to be an interesting step in studying the formation of new neurons in adult human brains. And it shows an interesting connection between neurogenesis and Alzheimer's disease. There is plenty here to drive further work.
* New neurons are formed in the brain well into old age - but this stops in Alzheimer's. (M Andrei, ZME Science, March 25, 2019.)
* More Evidence that Humans Do Appear to Create New Neurons in Old Age -- Despite doubts last year about human adult neurogenesis, a study shows even 80-year-olds develop new cells in the hippocampus, but such growth is diminished in patients with Alzheimer's disease. (A Yeager, The Scientist, March 25, 2019.) Includes a comment from one scientist who is skeptical of the interpretation. Nevertheless, he finds the AD result of interest, since it is side-by-side with non-AD controls. That is, the article seems to show something about AD, even if we are not sure what.
* News story accompanying the article: Neurodegeneration: A fresh look at adult neurogenesis -- Improved protocols for the visualization of immature neurons in the human brain provide evidence for generation of neurons in the adult hippocampus and uncover reduced neurogenesis in Alzheimer's disease. (E Steiner et al, Nature Medicine 25:542, April 2019.)
* The article: Adult hippocampal neurogenesis is abundant in neurologically healthy subjects and drops sharply in patients with Alzheimer's disease. (E P Moreno-Jiménez et al, Nature Medicine 25:554, April 2019.)
A previous post on the question: Atomic bombs and growing new brain cells (November 1, 2013). The article of this earlier post is reference 6 of the current article.
Previous post about AD: Games genes play -- Alzheimer genes, in your brain (January 4, 2019).
My page for Biotechnology in the News (BITN) -- Other topics includes a section on Alzheimer's disease. It includes a list of related Musings posts.
May 18, 2019
Well, here are some results, from a recent article...
This test measured how long it took for mosquitoes to make their first bite, under controlled lab conditions.
The variable was whether or not music was playing: Audio player status, OFF or ON.
The victims were hamsters. (The mosquitoes were Aedes aegypti. Females were used for the test; only females bite for blood meals.)
It is clear that the mosquitoes were much slower to bite when the music was playing.
This is Figure 3 from the article.
Here is the music... Music video: Skrillex - Scary Monsters And Nice Sprites. (YouTube, 4 minutes.)
The article contains other data from such tests. The music reduced the number of bites over a set time period, and also reduced the frequency of matings. That is, there is a general pattern that the activity of the mosquitoes is disrupted by the music.
Is there some reason to do such tests? Yes -- and it is something that has been noted in Musings [link at the end]. Mosquitoes communicate with each other during mating rituals with sound -- from their wings. Further, the scientific literature contains many studies of the effects of extraneous sounds on insect behaviors.
What else can we say about this? Not much. There are no other variables in the work. Just OFF/ON. The article includes a vibragram of the song, which shows that it contains "strong sound pressure/vibration with constantly rising pitches" (Section 2.2). The authors conclude that the song is "noisy".
There are reasons to find this article amusing -- starting with its title. However, the broad issue of how sound affects insect behavior is interesting. Can we learn to use sound as a weapon against insects? Perhaps we should be open to the possibility.
* Blasting This Skrillex Track Will Reduce Mosquitoes' Desire to Bite, Study Finds. (J Bowler, Science Alert, April 1, 2019.)
* Here's how Skrillex's music could help fight Zika and dengue fever. (M Sanicas, ZME Science, April 4, 2019.)
The article: The electronic song "Scary Monsters and Nice Sprites" reduces host attack and mating success in the dengue vector Aedes aegypti. (H Dieng et al, Acta Tropica 194:93, June 2019.)
Background post on how mosquitoes sing: Science: Love songs (March 26, 2009). The article discussed in this post is among many references in the current article on insects and sound.
Among other posts on repelling mosquitoes: Can chickens prevent malaria? (August 12, 2016). The synergy between the current post and this older one needs to be tested.
There is a section of my page Biotechnology in the News (BITN) -- Other topics on Dengue virus (and miscellaneous flaviviruses). It includes a list of related Musings posts.
There is more about music on my page Internet resources: Miscellaneous in the section Art & Music. It includes a list of related Musings posts.
May 17, 2019
Wood contains lignin and cellulose. The lignin presents a special problem for those wanting to make useful products from wood. Lignin contains multiple types of subunits, and the chemical linkages between subunits are not easily attacked. Musings has noted the problem before [link at the end].
A recent article develops another approach to using lignin. Briefly, the products from a general treatment of three kinds of lignin are fed to a specially-developed bacterial strain, which converts all of them to the same final -- and useful -- product.
The following figure shows the plan. For now, just follow the general flow; don't worry about the details of structure (which are hard to read at this scale).
The top row shows the structures of the three types of lignin, and gives each one a letter, which is from one of the key chemicals involved.
The second row (thin box) shows some general processing, which leads to the three specific chemicals at the top of the main (bottom) box.
That big bottom box shows how a particular strain of bacteria metabolizes those three chemicals. In particular, note two red "X", showing steps that the scientists "knocked out" in the new strain they developed. As a result of those two knock-out changes, the metabolism of all three starting materials is diverted to a single final product: PDC (near the lower right, just above the red X there).
This is Figure 1 from the article.
If you want details of the chemical structures, check the web site for the article, which includes a high-res version of the figure.
Briefly, the three types of lignin units differ by the number of -OCH3 (methoxy) groups on the ring: 2, 1, 0 from left to right. Those groups are difficult to modify, but are important for the properties of the ultimate product.
With the original strain, before the red-X knockouts, all three starting chemicals end up being converted to pyruvate + oxaloacetate, as shown at the lower right. Those chemicals are part of general metabolism.
The following figure gives an idea of how it works, though this experiment only tests two of the three types of lignin.
The left and right sides of this figure are for two of the three types of lignin. In each case, the modified bacterial strain is grown on glucose, and given the lignin product: vanillic acid (left) or p-hydroxybenzoic acid (right). Growth of the bacteria is measured (top graphs), as are the concentrations of some metabolites (bottom graphs).
The top row shows the growth of the bacteria over time. The general result is that the bacteria grew in both cases (even if one of the growth curves looks odd).
In the bottom graphs, the red curve rises in both cases. That is for PDC, the desired product.
Some curves decline. They are the curves for glucose (yellow) and for the lignin material that was added at the start in each case (green or blue). That is, the things that were fed were used up, and the desired product accumulated. Checking the numbers on the y-axis, it appears that about 2/3 of the lignin material was converted to PDC in each case. (Actual conversion, average from multiple tests: 81% and 73%.)
(One curve is very low all the time; that is for one of the intermediates, but it need not concern us here.)
The bacteria used in this experiment, on two lignin types, had only one of the red-X blocks (the one at the lower right). The second block is needed only for the third lignin type.
This is Figure 3 from the article.
In another test, the scientists used a mixture of lignin types with the final doubly-blocked bacteria. PDC was made at about 60% efficiency.
That the conversion to PDC is, reproducibly, less than 100% suggests that some of the lignin material is being consumed for growth. Thus there may be other pathways involved. Further work to reveal and block those pathways could be worthwhile.
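The back-calculation behind such conversion figures is simple. Here is a sketch with hypothetical concentrations, chosen only to show the arithmetic; the real values must be read from the article's figures:

```python
# Conversion efficiency from concentration curves.
# The numbers below are hypothetical, for illustration only;
# the actual data are in the article's Figure 3.
substrate_initial = 3.0   # mM of lignin-derived aromatic fed at t = 0
substrate_final   = 0.0   # mM remaining at the end (fully consumed)
pdc_final         = 2.4   # mM of PDC accumulated at the end

conversion = pdc_final / (substrate_initial - substrate_final)
print(f"Conversion to PDC: {conversion:.0%}")   # -> 80%
```

Anything short of 100% means some of the substrate's carbon went elsewhere -- into cell growth, or into other pathways.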
Why make PDC (2-pyrone-4,6-dicarboxylic acid)? It is a dicarboxylic acid, and may be useful in making polyester plastics. Again, an important issue here is moving toward making one single (major) product.
It's a new type of development. As usual with articles of this type, there is no economic analysis -- and no claim that they have achieved a useful process.
* Engineered microbe may be key to producing plastic from plants. (Science Daily (C Barncard, University of Wisconsin-Madison), March 6, 2019.)
* Biological funneling of aromatics from chemically depolymerized lignin produces a desirable chemical product. (Great Lakes Bioenergy Research Center, March 8, 2019.)
The article, which is freely available: Funneling aromatic products of chemically depolymerized lignin into 2-pyrone-4,6-dicarboxylic acid with Novosphingobium aromaticivorans. (J M Perez et al, Green Chemistry 21:1340, March 21, 2019.)
A background post about processing lignin: Turning lignin into a useful product (April 11, 2015).
There is more about energy issues on my page Internet Resources for Organic and Biochemistry under Energy resources. It includes a list of some related Musings posts. Why is this post about energy resources? Indirectly... Utilization of lignin is coupled to that of cellulose; the latter is often used for biofuel.
* Also see the section on that page for Aromatic compounds.
May 15, 2019
Ebola vaccine. The Ebola news from the current outbreak in the Democratic Republic of the Congo (DRC) is mostly depressing. However, results from vaccination, announced recently, are encouraging. The vaccination work was done by the ring strategy discussed in earlier posts, administering vaccine to contacts of known cases. Analysis suggests that the vaccine is 97% effective. Further, the death rate of those who do get the disease after vaccination is very low. The announcement, from the DRC and WHO, is preliminary; a proper scientific article is promised.
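For reference, vaccine efficacy in such studies is usually computed as one minus the ratio of attack rates. The counts below are hypothetical, chosen only to illustrate the arithmetic; the actual DRC figures are in the report:

```python
# Vaccine efficacy from attack rates (hypothetical counts, for
# illustration only; real figures are in the WHO/INRB report).
cases_vaccinated, n_vaccinated     = 3, 10000
cases_unvaccinated, n_unvaccinated = 100, 10000

attack_rate_vax   = cases_vaccinated / n_vaccinated
attack_rate_unvax = cases_unvaccinated / n_unvaccinated

efficacy = 1 - attack_rate_vax / attack_rate_unvax
print(f"Vaccine efficacy: {efficacy:.0%}")   # -> 97%
```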
* News story: Ebola cases climb by 44 as vaccine trial affirms high efficacy. (L Schnirring, CIDRAP, April 15, 2019.) Links to the report, which is freely available; see the item near the end "Apr 13 INRB WHO preliminary VSV-EBOV results".
* For more about Ebola... Ebola and Marburg (and Lassa) (a section of a BITN page). This news is noted there. The section also has a list of Musings post about Ebola, including the vaccine and the ring-vaccination approach.
May 14, 2019
Regeneration of heart muscle, to repair a damaged heart, is an important topic. At least, humans think so, especially as more of us reach advanced ages. It's not so clear that Nature thinks the topic is high priority.
Why don't humans do heart regeneration? One level of answer has become clear over recent years. Most of the muscle cells in human hearts are not ordinary diploid cells. They are mostly polyploid (with multiple chromosome sets). The advantage of that is not particularly clear, but the disadvantage is clear: being polyploid makes ordinary cell division difficult.
If you look at a wide range of vertebrates, there is a general correlation: the higher the percentage of diploid cells in heart muscle, the more likely that heart regeneration will succeed. But that just pushes the question back... Why do we have so few diploid heart muscle cells?
The following figure, from a new article, offers some clues...
Three graphs. For all of them, the y-axis is the percentage of cardiomyocytes (CM; heart muscle cells) that are diploid, for various animals. (Caution... the scales are not the same.)
That percentage of diploid CM is plotted against the standard metabolic rate (SMR; part A), body temperature (part B), and level of T4, a thyroid hormone (part C). (The standard metabolic rate is normalized to body weight. That may sound complicated, but it is a known factor: small animals have faster metabolism than big ones for the same amount of mass. They simply correct for that known factor.)
In each case, there is a good trend. The % diploid CM declines as the other plotted parameters increase. We don't need any more detail than that from this figure.
This is Figure 2 from the article.
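The weight normalization in part A reflects a well-known allometric pattern (Kleiber's law: whole-body metabolic rate scales roughly as body mass to the 3/4 power, so the per-mass rate falls off as mass to the -1/4 power). A sketch, using rough textbook masses rather than values from the article:

```python
# Mass-specific metabolic rate under Kleiber's law: B ~ M^(3/4),
# so metabolic rate per unit mass scales as M^(-1/4).
# Masses are rough textbook values (kg), for illustration only.
animals = {"mouse": 0.02, "human": 70.0, "elephant": 4000.0}

for name, mass_kg in animals.items():
    relative_rate = mass_kg ** -0.25   # per-mass rate, arbitrary units
    print(f"{name:>8}: relative per-mass metabolic rate {relative_rate:.2f}")
```

The point of the correction is simply that a mouse's tissues run much "hotter," metabolically, than an elephant's; normalizing removes that known size effect so the remaining trend can be seen.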
Those graphs show correlations. Might any of these factors be causally related? In fact, it is known that thyroid hormone is involved in the control of body temperature and metabolic rate.
Is it possible that thyroid hormone controls the nature of cardiomyocytes -- and therefore heart regeneration? That question is subject to experimental testing; the article goes on to do some tests.
A simple test would be to elevate thyroid hormone levels in an animal that normally shows good heart regeneration. Zebrafish, for example; it is a common subject for studying heart regeneration in the lab. Doing that led to a marked reduction in heart regeneration in a standard test.
A more interesting test would be to see if we could stimulate heart regeneration in an animal where it usually fails -- by blocking the action of thyroid hormone. This is a technically complicated test, but the basic logic is straightforward. Mice were genetically engineered so that thyroid activity in the heart was blocked. Such mice were given an artificial heart attack, and their recovery was followed.
Here are some results...
The graph shows ejection fraction (EF) percentage vs time. The EF% is a measure of heart function.
Before you get lost in a blur of data points (and asterisks), look at the final data set, to the right, for day 28 following the heart attack. It's clear that the red-square mice are doing much better than the black-circle mice.
The red squares are for the engineered mice, where the thyroid hormone doesn't act on the heart. The black circles are for ordinary (control) mice.
It's quite clear: the mice recovered from their heart attack much better if the action of thyroid hormone in the heart was blocked.
Let's fill out what the data shows. The first data set is labeled baseline; this is before the heart attack. The two types of mice gave similar results, with EF above 80%. (The vertical line by that data set says NS, for not significantly different.)
At the first measurement following the injury (7 d), the EF is lower. Visual inspection suggests a small difference between the two types of mice, but statistically it is NS.
What happens after that is interesting. For the engineered mice, the heart function gradually improved. By the end of the experiment, it was about what it was at baseline. The control mice showed steadily declining heart function.
This is the right-hand part of Figure 4G from the article. The full Figure 4 shows a variety of data consistent with the small part shown here.
Consider this mouse experiment along with the zebrafish one mentioned briefly before that... The evidence supports a role for thyroid hormone in controlling the ability to regenerate heart tissue.
It's interesting for two reasons. First, there is a story of how warm-blooded animals developed. We know that thyroid hormone is a key player in that story; we now associate that with loss of ability to regenerate heart tissue.
Second, we must wonder about the implications for human health, including possible therapeutic intervention. Some comments...
- We have no direct evidence about what is going on in humans. We might, of course, suspect that humans follow the general picture developed here, but we have no details. For example, we do not know whether the mouse experiment discussed above would work in humans -- even if we could do it.
- That mouse experiment is not possible with humans. Further, we don't know what kind of intervention would be needed. We might imagine having a drug that inhibits thyroid action in the heart. However, it seems unlikely that giving such a drug at the time of heart injury would be helpful. When would it have to be given? We don't know.
It's a fascinating article. It should stimulate a range of work. But it's important to realize that any application to human health is speculative at this point.
* Warm-Blooded Animals Lost Ability to Heal the Heart. (C Intagliata, Scientific American, March 7, 2019.) Podcast, with transcript.
* Hormone Made Our Ancestors Warm-Blooded but Left Us Susceptible to Heart Damage. (J Alvarez, University of California San Francisco, March 7, 2019.) From the lead institution.
* News story accompanying the article: Evolution: Lost in the fire -- Thyroid hormones tip the balance between regeneration and temperature regulation. (S Marchiano & C E Murry, Science 364:123, April 12, 2019.)
* The article: Evidence for hormonal control of heart regenerative capacity during endothermy acquisition. (K Hirose et al, Science 364:184, April 12, 2019.)
A post about the importance of diploid cells for regeneration: Heart regeneration? Role of MNDCMs (November 10, 2017).
A post about heart regeneration in zebrafish: Zebrafish reveal another clue about how to regenerate heart muscle (December 11, 2016).
Among posts about thyroid function...
* Bigger spleens for a bigger oxygen supply in Sea Nomad people with unusual ability to hold their breath (July 2, 2018).
* How the giant panda survives on a poor diet (August 2, 2015).
Among posts about the complexity of warm-bloodedness... Facultative endothermy: a lizard that is warm-blooded in October (February 1, 2016). Links to more.
There is more about regeneration on my page Biotechnology in the News (BITN) for Cloning and stem cells. It includes an extensive list of related Musings posts.
May 13, 2019
A recent post was about how caffeine improves the health of premature babies [link at the end].
We now have an article about how caffeine improves the health of solar cells.
Let's start with some bottom-line data, so you can see that there is a significant effect...
The graph shows the power conversion efficiency (PCE) of the solar cells over time. PCE is shown normalized to the initial value, set as 1.0 (for each device). That is, this is a test of the stability of the solar cells. (The actual value for the initial PCE was about 20%.)
The red curve is for the regular solar cells (controls). The black curve is for the solar cells with caffeine.
The control solar cells lose efficiency from the start. They are down by about 1/3 over the first 100 hours (4 days). The caffeinated solar cells are still operating at about 85% of initial efficiency at the end of the test (1300 hours = 54 days).
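To make the comparison concrete, here is a small sketch of the normalization used in the figure. The numbers are approximate values read off the graph, not the authors' raw data.

```python
# Sketch of the normalized-PCE comparison from Figure 4A.
# Values are approximate readings from the graph, for illustration only.

def retained_fraction(pce_now, pce_initial):
    """PCE normalized to its initial value, as plotted in the figure."""
    return pce_now / pce_initial

initial_pce = 0.20  # both devices start near 20% efficiency

# Control: loses about 1/3 of its efficiency over the first 100 hours.
control = retained_fraction(initial_pce * (2 / 3), initial_pce)

# Caffeinated: still about 85% of initial efficiency after 1300 hours.
caffeinated = retained_fraction(initial_pce * 0.85, initial_pce)

print(round(control, 2), "after", round(100 / 24, 1), "days")
print(round(caffeinated, 2), "after", round(1300 / 24, 1), "days")
```

The point of normalizing is that it separates the stability question (how fast the efficiency decays) from the efficiency question (what the starting value is).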
This is Figure 4A from the article.
That is, caffeine improves the health of the solar cells. By a lot.
What's going on?
First, this work is about a specific type of solar cell, called perovskite. That term refers to a type of crystal structure. Perovskite solar cells are a recent development. There has been considerable progress, with the efficiency of perovskite cells now approaching that of traditional silicon-based solar cells. (As noted earlier, the cells in the current work were operating with about 20% efficiency, which is quite good. The caffeine led to a slightly higher efficiency.)
Perovskite cells have the potential to become a major type of solar cell; they are cheaper and easier to make than traditional cells. However, they have one major limitation: they are unstable. The current work addresses that limitation -- with some success, as the figure above indicates.
Is this all a joke? Well, it may have started as one, according to the news coverage. But then someone did the experiment -- and looked at the details of the chemical structures. Not only does a little caffeine (1% by weight) improve the performance of these solar cells, but the scientists have a good idea why. The caffeine fits into the molecular structure and stabilizes it.
Is this a practical improvement? Probably. Caffeine is an inexpensive chemical, available in large quantities. To be fair, conventional silicon-based solar cells are even more stable than what is shown above for the caffeine-perovskite cells. There is more to be done, but the article is an encouraging development.
* Researchers figure out how coffee can boost (some) solar cells. (A Micu, ZME Science, April 26, 2019.)
* Science: Caffeine improved the performance of perovskite solar cells. (Solar Builder, April 29, 2019.)
* Caffeine boosts perovskite solar cells. (B Dumé, Physics World, April 30, 2019.) Excellent overview.
The article: Caffeine Improves the Performance and Thermal Stability of Perovskite Solar Cells. (R Wang et al, Joule, in press; scheduled for the June 19, 2019, issue.)
Background post on caffeine... Using caffeine to treat premature babies: risk of neurological effects? (April 27, 2019).
Among recent posts on solar energy... Is solar energy a good idea, given the energy cost of making solar cells? (March 24, 2017).
There is more about energy issues on my page Internet Resources for Organic and Biochemistry under Energy resources. It includes a list of some related Musings posts.
May 11, 2019
Ice is complicated. Not just the various Roman-numbered forms you hear about from exotic lab work, but natural ice -- the stuff you find in Antarctica.
An Antarctic iceberg.
At the far left, it is a "bubbly blue-white". But some of it is quite green.
This is Figure 3 from the article.
Why is part of the iceberg green?
The bluish ice at the left is glacier ice, made by snow consolidating into large rigid blocks of ice. Glacier ice is pretty much pure water, just as the snow is. The color is the normal color of large amounts of water.
The green ice is marine ice, formed as sea water freezes out. This happens at the bottom surface of ice, where sea meets ice. Marine ice contains things from the sea.
Marine ice can be various colors, presumably due to different "contaminants" from the sea. So, why is some of the marine ice green? The short answer is that we don't know. A recent article explores the question, but it may be more interesting for the exploration -- and the pictures -- than for actually finding an answer.
The figure also shows snow, which is presumably white -- though you couldn't tell that from this picture. Snow, too, is a form of ice, so we have three kinds of ice there.
Another figure in the article shows five kinds of ice in a natural Antarctic scene. Included is an ice cloud.
There are two general ideas for what causes the green icebergs. One is dissolved organic matter. We're not talking about algae growing on the surface. The green color is within the ice, and distributed rather uniformly. Maybe it is cellular debris, which might be within the sea. Green? It's not that the organic matter is green, but that it shifts the spectrum so that the ice appears green rather than blue. The other suggested cause of green ice is iron. The color of iron (more specifically, its oxides) is a complicated topic, but it is certainly plausible that it could make for green icebergs.
Here are some results...
The figure shows spectra for several kinds of ice -- all from one Antarctic iceberg.
Spectra. Albedo spectra. Albedo refers to the fraction of light reflected. An object with high albedo reflects a lot of light. (Albedo = 1 means total reflection. Albedo = 0 means no reflection; the object is dark.) The scientists are measuring the spectra for light reflected from various kinds of iceberg ice.
You can see that the two ices labeled as blue (top two on the left, where they are labeled) reflect mainly light with short wavelength: bluish light. For the two ices labeled as green, much of that blue light has also been removed. The green ices reflect less light, and it is more spread over the entire spectrum, with a slight peak in the middle.
This is Figure 7 from the article.
That figure takes us from a qualitative observation of how we describe the ice color to a quantitative analysis of the nature of the reflected light.
Beyond that... There is much more analysis. The authors measure the amounts of dissolved organic matter and iron in iceberg samples. They end up arguing that iron, in the form of iron oxides, is the more likely cause of the green color. But the arguments are complex and incomplete. The spectral properties of ice with iron oxides are not well understood. The article has considerable discussion of the limitations of the work, and concludes with a plea for more data.
News stories. Both of the following stories provide excellent overviews, with more spectacular pictures.
* Why Are Some Icebergs Green? (C Prend, Oceanbites, February 1, 2019.)
* Mystery of green icebergs may soon be solved. (American Geophysical Union (AGU), March 4, 2019.)
The article, which is freely available: Green Icebergs Revisited. (S G Warren et al, Journal of Geophysical Research: Oceans 124:925, February 2019.)
Among posts about the Antarctic... IceCube finds 28 neutrinos -- from beyond the solar system (June 8, 2014). Links to more. But this one includes a picture.
Among posts about ice... Why is ice slippery? (September 9, 2018).
A post about iron in the oceans... Fertilizing the ocean may lead to reducing atmospheric CO2 (August 24, 2012). In the current work, the authors suggest that the iron in the icebergs is effectively being transported from the Antarctic continent to the iron-deficient oceans. If so, the high-iron icebergs could be playing an important role in determining the biological productivity of the Southern Ocean. That issue is addressed in the post linked here.
May 8, 2019
Where should a self-driving car park during the day after it takes you to work? Parking is now so expensive in dense urban areas that it may be in the car's self-interest to not park at all. Instead, it should just cruise around the streets all day, probably at about 2 miles per hour -- thus increasing congestion on the streets. That's the conclusion of a recent analysis. It's a plea for reconsideration of policies.
* News story: Mean streets: Self-driving cars will "cruise" to avoid paying to park -- Autonomous vehicles "have every incentive to create havoc," transportation planner says. (J McNulty, University of California Santa Cruz, January 31, 2019.) Has an invalid link to the article, so...
* The article: The autonomous vehicle parking problem. (A Millard-Ball et al, Transport Policy 75:99, March 2019.)
* A post about self-driving (autonomous) cars: The moral car: when is it ok for your car to kill you? (July 23, 2016).
May 7, 2019
The story of Denisovan man is one of the great science stories of the decade. It is about a newly-discovered type of human, with the first publication on the topic in 2010. It started with a finger bone and a few teeth found in Denisova Cave, in Siberia. Those early samples of Denisovan man yielded enough DNA that we have a Denisovan genome, and can now track Denisovan genes in modern humans throughout the world. But it is important to realize that those few samples have constituted the only direct physical evidence about Denisovan man. It is a story with much mystery, a scientific story very much in progress. Musings has noted several parts of the story [links at the end].
Now there is something new. Look...
A jaw bone (mandible).
- The left side (brownish) is real. The right side (gray) is a digitally constructed mirror image of the left side to make a picture of an entire symmetrical jaw.
- The image has been processed to remove extraneous mineral material on the outside.
That is, the picture here is based on a real bone, but with some processing.
This is Figure 1b from the article.
This is the Xiahe mandible. A new article reports the characterization of the Xiahe mandible as being from a Denisovan. (Xiahe, where the bone was found, is a county in Gansu province, China.)
Why is this bone so exciting?
- First, it is now the largest sample of Denisovan man we have.
- Second, it is not from Denisova Cave. It is from China, from the Tibetan plateau.
The big question is, what is the evidence this is a sample of Denisovan man? The main results to support that claim are summarized in the following figure...
That's a genealogy chart of several hominids. You can see that the Xiahe specimen (thick line) clusters close to the Denisova Cave sample.
What's the basis of that grouping? The Xiahe sample has not yielded any usable DNA. However, it has yielded some protein (collagen). Sequencing of ancient proteins is another recent development -- and that is the basis of the grouping shown here.
This is Figure 2 from the article.
How good is the story that the Xiahe specimen is Denisovan (or closely related)? Perhaps the biggest uncertainty is simply the limited amount of data at this point. "Denisovan" and "Xiahe" are each defined by one sample. The progress with Denisovans over the current decade has been remarkable. This is one more step. We'll see how it holds up.
The Xiahe specimen is dated to about 160,000 years ago. It is older than the Denisova Cave specimens. It is also the oldest known human sample from the Tibetan Plateau.
The specimen was found in 1980. What's new here is the analysis.
Although the only physical specimens of Denisovans were from Siberia, the genetic evidence has pointed to a widespread distribution, especially through east Asia. It has been hoped for some time that analyses of specimens from China would turn up Denisovans there. Xiahe would appear to be step 1 in that direction. Surely, there are more Denisovans to be found in China. We also note that knowing this one jaw may bring attention to other samples that look similar in existing collections.
Previous genetic work had indicated that modern Tibetans got genes for survival at high altitude from Denisovans. Finding that the Denisovans were the first people in the Tibetan highlands complements that nicely.
* Denisovan Fossil Identified in Tibetan Cave -- A mandible dating to 160,000 years ago is the first evidence of Denisovan hominins outside the Russian cave where they were first discovered in 2010. (S Williams, The Scientist, May 1, 2019.)
* Scientists found that the Tibetan Plateau was first occupied by Middle Pleistocene Denisovans. (Institute of Tibetan Plateau Research, Chinese Academy of Sciences, May 2, 2019.) From one of the lead institutions involved.
* How We Found an Elusive Hominin in China -- An ancient jawbone collected by a monk has been identified as the first Denisovan discovered outside of Siberia. (J-J Hublin, SAPIENS, May 1, 2019.) By one of the authors of the article. Excellent overview of the work, with good context.
The article: A late Middle Pleistocene Denisovan mandible from the Tibetan Plateau. (F Chen et al, Nature 569:409, May 16, 2019.)
Among posts on Denisovans...
* Contributions of Neandertals and Denisovans to the genomes of modern humans (July 6, 2016).
* The Siberian finger: a new human species? (April 27, 2010). The original post, about the first article.
I usually don't refer back to "Briefly noted" items, but there are two that deserve mention here...
* Briefly noted... Denisova Cave (April 10, 2019).
* Briefly noted... A Neanderthal-Denisovan hybrid (August 29, 2018).
Among posts about ancient proteins:
* Reconstructing an ancient enzyme (February 26, 2019).
* Did the Neandertals make jewelry? Evidence from ancient proteins (February 26, 2017).
May 5, 2019
What's Y-Y? Two atoms of yttrium joined by a single covalent bond.
The rare earth metal yttrium is a useful catalyst, but its chemistry is not well understood.
A new article reports the first observation of Y-Y bonds. They are a bit unruly; it helps to keep them caged.
The first figure is a drawing showing the structure of a cage with an yttrium dimer inside...
Y2@C82. The @ sign means "inside of". That is, this is Y2 inside a C82 cage. It is a well-defined molecule; the Y2 can't get out.
The Y-Y is shown in blue.
This is part of Figure S10a from the article supplement.
The "cage" is a fullerene. This one is somewhat larger than the classical C60; many sizes of fullerenes are known. It wasn't long after fullerenes were first characterized that people started finding things inside them. The term endohedral was coined for such things, and the @ sign introduced to denote the unusual relationship.
The following figure shows some detail of that compound -- and more...
Part a (top) shows the same chemical as above. This drawing shows a cut-away to make it easier to see the bonding inside the cage.
You can see that there are two Y atoms, with a bond of about 3.6 Å (Ångstroms) between them. That's just about the bond length predicted.
The structure shown here is based on X-ray crystallographic measurements.
That is the evidence for covalent Y-Y single bonds. Such bonds are found in this chemical, and a closely related one in the study.
The study also showed something different in some cases. This is illustrated in part d (the bottom structure of this set). It's another cage, similar but a little bigger: C88. And it has 2 Y atoms. But in this case, there is also a C2 unit inside the cage, with a Y on each side of it. That is, this is Y2C2@C88.
The distance between Y atoms here is nearly 4.3 Å, considerably more than the directly-bonded Y-Y distance of the previous case.
The dihedral angle shown on the figure is the angle between the planes of the two triangles formed by the C2 and one of the Y. (At least, I think that is what it refers to. It doesn't seem to say.)
This is part of Figure S8 from the article supplement.
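As an aside, a dihedral angle of the kind described above can be computed directly from atomic coordinates: take the normal of each triangle's plane (a cross product) and find the angle between the normals. The coordinates below are invented for illustration; they are not the article's crystallographic data.

```python
import math

# Sketch: dihedral angle between two triangles sharing a C2 edge,
# each completed by one Y atom. Coordinates are made up for illustration.

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three points (via cross product)."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

def dihedral_angle_deg(tri_a, tri_b):
    """Angle between the planes of two triangles, in degrees."""
    na, nb = plane_normal(*tri_a), plane_normal(*tri_b)
    cos_theta = sum(a * b for a, b in zip(na, nb))
    cos_theta = max(-1.0, min(1.0, cos_theta))  # guard against rounding
    return math.degrees(math.acos(cos_theta))

# The shared C2 unit, plus one Y atom on each side (hypothetical positions, Å):
c1, c2 = (0.0, 0.0, 0.0), (1.2, 0.0, 0.0)
y1 = (0.6, 2.0, 0.0)   # in the xy-plane
y2 = (0.6, 1.4, 1.4)   # tilted out of that plane

angle = dihedral_angle_deg((c1, c2, y1), (c1, c2, y2))
print(round(angle, 1))  # 45.0 for these made-up coordinates
```

With real crystallographic coordinates, the same calculation would give the angle shown in the article's figure.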
So the study reveals two kinds of structures: cages with Y-Y and cages with a non-linear Y-C2-Y. The scientists found Y-Y in C82 cages, and the more complex structure in larger cages.
One should remember how these compounds are made. One does not set out to make a specific fullerene chemical. Instead, carbon material is burned at high temperature. The resulting soot is studied for its content of fullerenes, the cage chemicals. In the current work, the burning was done in the presence of yttrium oxide, so some of the resulting cages contained Y.
The work involved purifying and characterizing individual components from the soot. That experimental work was accompanied by theoretical work, predicting the properties of such Y-containing cages.
Overall, the work here enhances our understanding of an unfamiliar element.
News story: Fullerene cage stabilises first yttrium-yttrium single bond. (T Easton, Chemistry World, April 16, 2019.)
The article, which is freely available: Crystallographic characterization of Y2C2n (2n = 82, 88-94): direct Y-Y bonding and cage-dependent cluster evolution. (C Pan et al, Chemical Science 10:4707, May 7, 2019.)
Other posts that mention yttrium:
* Added June 8, 2019. Superconductivity in lanthanum hydride: a new temperature record (June 8, 2019).
* Penidiella and dysprosium (September 11, 2015). Notes the terminology of rare earth elements and lanthanoids.
* Lead-rich stars (August 30, 2013).
Among posts about novel chemical bonds... A chemical bond to an atom that isn't there (October 31, 2018).
Don't confuse the Y-Y of the current post with the YY of a previous post... YY in the mouth? (April 4, 2014).
May 3, 2019
Do you have separate jackets for "cool" and "cold" weather? What if you could just use a single jacket, and throw a switch on it to change it from being a cool-weather jacket to a cold-weather jacket? Better yet, what if the jacket knew how cold you were, and just made the switch by itself?
A recent article offers a step toward the development of such an intelligent jacket.
To start, we need to understand how a jacket works to keep you warm. It's actually simple... Your body gives off heat -- as infrared (IR) radiation. The jacket traps the IR. As a result, you benefit from that heat you gave off.
If you get too warm, you take off the jacket. It would be easier if you could just tell the jacket to let some of the IR through. And easier still, if the jacket took action on its own. How could the jacket tell if you got too warm? You start to sweat. The humidity goes up. So, if the jacket responded to higher humidity by allowing IR to go through, it would serve the purpose.
Here's some data...
The graphs show the IR transmittance of two materials as the humidity changes.
It would have been simple if the authors had plotted IR vs humidity, but they did it differently. The graphs show both IR and humidity over time. IR is shown with the dark curve (and left-hand y-axis scale); humidity is shown with the light curve (right-hand y-axis scale).
The big picture... In one case the two curves are similar; as humidity increases, so does IR transmittance. In the other case, they are not; the IR transmittance remains fairly constant -- and low -- as the humidity changes.
This is part of Figure 4 from the article.
The upper graph is for the new material that the scientists have designed. They call it a metatextile. The lower graph is for the control material. Accompanying thermal analyses (by IR imaging!) show that the new material becomes cooler as the humidity rises.
What is this new material? It's based on a common textile yarn, but modified so that it responds to humidity by changing structure and IR transmission. A key part of the modification involves carbon nanotubes.
The following figure shows the idea...
The squiggly lines show the IR (as labeled in one case, lower left). The IR at the bottom is what the person gives off; the IR at the top is what passes through the fabric. On the left side, the squiggly lines at top and bottom match. The material lets IR pass through. On the right side, the squiggly lines at the top are small, showing that IR loss from the material is low.
Look at the arrows showing the transition. "Cold/dry" shifts the material to the right, where it blocks IR loss and keeps you warm. "Hot/wet" shifts the material to the left, where it lets the IR radiate through and keeps you cool.
The terms open and closed may be confusing. The authors use the terms to refer to transmission of IR. But we also note... What's shown here are the individual yarns. The tighter an individual yarn, the more open the overall fabric.
This is part of Figure 1 from the article.
Overall... When the humidity changes, the fabric structure changes. That happens because the fabric has a mixture of hydrophobic and hydrophilic regions. Therefore, it is distorted when the humidity changes. That changes the bulk porosity of the fabric. It also changes the organization of the carbon nanotubes -- and that changes the IR transmission. Together, the two effects (on bulk porosity and IR transmission) help the material retain heat when cool, but lose heat when warm.
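The gating logic described above can be caricatured in a few lines of code. The threshold and transmittance values below are invented; the real material responds continuously, as the earlier graph shows.

```python
# Toy model of humidity-gated IR transmission, following the article's
# description: dry conditions close the gate (trap heat), sweat opens it.
# The threshold and transmittance values are made up for illustration.

def ir_transmittance(relative_humidity):
    """Toy gate: low IR transmittance when dry, higher when humid."""
    if relative_humidity < 0.4:  # dry skin: yarn "closed", body heat trapped
        return 0.1
    return 0.4                   # sweating: yarn "open", IR escapes

print(ir_transmittance(0.2))  # cold/dry -> low transmittance, wearer stays warm
print(ir_transmittance(0.8))  # hot/wet  -> higher transmittance, wearer cools
```

The real fabric, of course, implements this "if" statement mechanically, through the humidity-driven distortion of the yarn and the resulting change in the carbon nanotube arrangement.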
So that's the idea... If you get too hot, you sweat. Your jacket responds by letting heat out. The current article shows, at least in principle, how such a jacket could work.
What if it rained? The authors acknowledge (in one of the news stories) that could be a problem.
* 'Cool' Textile Automatically Regulates Amount of Heat that Passes through It. (Sci-News.com, February 11, 2019.)
* Smart textile uses sweat as switch to keep wearer cool or warm. (J Urquhart, Chemistry World, February 8, 2019.) (They mix up the water binding properties of the fabric components. Whoops. And this is a chemistry site!)
The article: Dynamic gating of infrared radiation in a textile. (X A Zhang et al, Science 363:619, February 8, 2019.)
A post about controlling IR transmission by windows... Windows: independent control of light and heat transmission (February 3, 2014).
Among posts about sweat: What if your house could sweat when it got hot? (November 30, 2012).
Posts about carbon nanotubes (and related structures) are listed on my page Introduction to Organic and Biochemistry -- Internet resources in the section on Aromatic compounds.
Older items are on the archive pages, starting with 2019 (January-April).
E-mail announcement of the new posts each week -- information and sign-up: e-mail announcements.
Last update: July 21, 2019