Musings is an informal newsletter mainly highlighting recent science. It is intended as both fun and instructive. Items are posted a few times each week. See the Introduction, listed below, for more information.
If you got here from a search engine... Do a simple text search of this page to find your topic. Searches for a single word (or root) are most likely to work.
If you would like to get an e-mail announcement of the new posts each week, you can sign up at e-mail announcements.
Introduction (separate page).
Current posts -- 2017 (January - ??)
New items (posted since the most recent e-mail; they will be announced in the next e-mail, but feel free...)!
February 22 (Current e-mail)
February 15 February 8 February 1 January 25 January 18 January 11 January 4
Older items are on the archive pages, listed below.
2017 Current posts. This page, see detail above.
2012 (September - December)
2011 (September - December)
Links to external sites will open in a new window.
February 25, 2017
A few days ago a Musings post discussed making functional mouse pancreas tissue in rats [link at the end]. It showed the possibility of interspecies organogenesis.
About the same time, there was another article, from another group, on another aspect of the problem. This article had only preliminary results, and perhaps was not even very encouraging about the development of interspecies organogenesis. But the article got far more attention. Why? It dealt with the possibility of making human organs in other animals. Here we look at that article.
The following figure summarizes one key experiment. The basic plan of the experiment... Human induced pluripotent stem cells (iPSC) were injected into early pig embryos (blastocysts). The injected embryos were then implanted into surrogate sows for development. At a later stage, the developing embryos were checked.
The two frames here go together; they are two parts of the analysis of the same work.
For simplicity, let's focus on the first (left-hand) bar of each frame (labeled 2iLD).
Two features of the embryos were scored. First, was the growth normal or retarded? Second, were human genes expressed, as judged by a fluorescence marker?
So, there were four types of embryos: all combinations of growth retarded or not, fluorescent or not. Those four types are shown in the two frames of the figure, using four different colors, as shown in the key at the bottom.
Frame D is for embryos that were fluorescence-positive (FO+), reflecting the presence of human cells. It shows the percentage of normal (blue) and growth-retarded (yellow) embryos among those that showed fluorescence. Frame E is similar for the embryos that were fluorescence-negative. The bar heights show percentage, but each bar segment has a number on it showing the actual number of embryos.
Two key observations...
* Qualitatively, all four types were seen. This means that some normal-sized pig embryos with human cells were found.
* The frequency of retarded growth was higher when the human cells were present. That's the size of the yellow part of each bar compared to the blue part. This finding suggests that the human cells are interfering with pig development.
This is Figure 5 parts D and E from the article.
What about all the other bars? The different bars are for different types of stem cell preparations. The big picture is that the results were qualitatively similar for all. (The differences between the types of stem cells will be important for follow-up work.)
There were signs that the human cells were differentiating into specific cell types as the embryo developed.
The scientists also showed that the human stem cells could form chimeras with cattle embryos. The efficiency of human cell incorporation was actually higher in the cattle system, but there are disadvantages to working with cattle.
Overall, the experiment shows that human stem cells can be incorporated into pig embryos. The resulting embryos are called chimeras, since they contain cells of two distinct origins. Human-pig chimeras. The experiment also shows that the process isn't very efficient, and isn't very good for the pigs.
It's step one. It's not the first time that human stem cells have been shown to function during development in another type of embryo, but it is the first involving large animals, where the work might lead to organ farming.
* Scientists Create First Human-Pig Chimeric Embryos. (D Kwon, The Scientist, January 26, 2017.)
* New findings highlight promise of chimeric organisms for science and medicine. (Phys.org, January 26, 2017.)
The article: Interspecies Chimerism with Mammalian Pluripotent Stem Cells. (J Wu et al, Cell 168:473, January 26, 2017.) There is much more in the article. This post focuses on one leading-edge line of work. The article also includes work with rodents; in general, the work here agrees with the work in the background post.
Background post: Making a functional mouse pancreas in a rat (February 17, 2017).
A post on the possibility of use of bona fide pig organs in humans: Organ transplantation: from pig to human -- a status report (November 23, 2015).
Posts on chimeras include:
* As we add human cells to the mouse brain, at what point ... (August 3, 2015).
* The first chimeric monkeys (February 5, 2012).
There is more about stem cells, and other issues of replacement body parts, on my page Biotechnology in the News (BITN) for Cloning and stem cells. It includes an extensive list of related Musings posts.
February 22, 2017
The nearest star outside our Solar System is Proxima Centauri, part of the Alpha Centauri system, about 4.2 light-years away. An article last year suggested that a planet there, Proxima b, may be in the habitable zone.
In fact, a program to develop a mission to visit the Alpha Centauri system and the planet Proxima b is in progress. It's Breakthrough Starshot, with start-up funding of a hundred million dollars (US) from Russian investor Yuri Milner. That's enough to bring together key people and develop a serious plan, with milestones. An important part of the project at this point is to define what needs to be done to make it possible. For example, the idea of using lasers to accelerate the craft to its intended speed, about 20% of the speed of light, is fine -- except that suitable lasers do not yet exist.
Nature recently ran a news feature on the project. It almost reads like a science-fiction fantasy. But it's not. It's a serious effort to try to visit a star within about 50 years. Hopefully, some of those involved in the early development of the project will live to see data from the encounter.
If this works, the tiny spacecraft -- the size of a small coin -- will send back pictures during the 2060s. Of course, it will be a while until we see them. The pictures will take 4.2 years to return, traveling at the speed of light. However, the project should yield interesting technical developments in the coming decades. For now, that's the point.
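The timeline is simple arithmetic, given the figures above. Here is a back-of-envelope sketch; the launch year used is a placeholder, since the project has set no firm date:

```python
# Back-of-envelope timeline, using the figures quoted above:
# 4.2 light-years to the Alpha Centauri system, cruising at 20% of light speed.
DISTANCE_LY = 4.2
CRUISE_FRACTION_OF_C = 0.20

travel_years = DISTANCE_LY / CRUISE_FRACTION_OF_C  # about 21 years in transit
signal_years = DISTANCE_LY                         # data returns at light speed

launch_year = 2040  # hypothetical launch date; the project has set no firm date
arrival_of_pictures = launch_year + travel_years + signal_years
print(f"transit: {travel_years:.0f} yr; pictures back around {arrival_of_pictures:.0f}")
```

A mid-2040s launch at that speed is what puts the returning pictures in the 2060s.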
It's a delightful little article.
News feature: First trip to the stars -- A wild plan is taking shape to visit the nearest planet outside our Solar System. Here's how we could get to Proxima b. (G Popkin, Nature 542:20, February 2, 2017.) (Online version has a different title.) Outlines the plan, with a suggested timeline. Includes extensive discussion of the hurdles.
A post about our most distant explorations so far: At the edge of the solar system (September 28, 2012).
A post about planning space missions: Quiz: NASA's boat (June 29, 2011).
A recent post about habitable planets... Habitable planets very close to a star (June 19, 2016).
February 21, 2017
Placebos are fascinating, maybe even important. Sometimes people respond to a fake drug (a "sugar pill") as if it were real. For example, if you test a drug intended to reduce pain, the standard procedure is to use a fake pill as a control. Some people experience pain relief from this control or "placebo". There is increasing recognition that the placebo effect is real -- worthy of being understood in its own right. Musings has noted some aspects of the placebo effect before [links at the end].
A new article explores the basis of the placebo effect. The following figure shows the final claim...
In this work, patients with painful osteoarthritis of the knee were given a placebo pain killer. Their response, in terms of pain relief, was recorded.
Prior to the "drug" administration, the patients were given an fMRI test of the brain. On the basis of that, their response to the placebo was predicted.
The graph shows the analgesic response that was observed (y-axis) vs what was predicted (x-axis).
This is part of Figure 5B from the article.
You can see that there is a correlation between observed and predicted responses. The correlation coefficient is about 0.6; r² is 0.36, indicating that the prediction accounts for about 36% of the observed variation in response.
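The relation used here (r of about 0.6, so r-squared of about 0.36, meaning about 36% of the variance) follows directly from the definition of the Pearson correlation coefficient. A minimal sketch, using made-up numbers rather than the article's data:

```python
import math

# Hypothetical (predicted, observed) pain-relief scores -- illustrative only,
# not the article's data.
predicted = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
observed = [1.5, 1.8, 3.6, 3.2, 5.9, 4.9]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(predicted, observed)
print(f"r = {r:.2f}, r^2 = {r * r:.2f}")  # r^2 = fraction of variance explained
```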
What is this prediction? In the earlier parts of the work, the scientists did fMRI brain scans of patients, and looked for differences between those who responded to the placebo and those who did not. They found differences that appeared to be significant. In particular, the signal for certain brain regions was stronger in those who responded to the placebo. That is the basis of the work shown above. Based on what they had learned from the earlier tests, the scientists did the fMRI measurement, and predicted the response to the placebo. The figure above shows that their prediction was correct.
Correct, with reservations. The first reservation, of course, is that this is a single, fairly small test. Time will tell if the results seen here can be replicated by others. Further, the correlation is partial. It's an impressive correlation if you didn't think a prediction would be possible. But it is limited. Will it get better as we gain experience? Would the prediction be improved by combining measurements from multiple brain regions, as well as other information? Again, time will tell.
It's an intriguing result.
The authors suggest that their test might be useful during testing of new drugs. It can help distinguish true drug responses from placebo responses. Perhaps subjects who are more susceptible to placebo responses should be excluded from clinical trials of new drugs. At least, it is likely that they should be identified. More speculatively, I wonder if the new finding could lead to manipulation of the placebo response. After all, placebos are cheaper, and perhaps safer, than drugs.
* Placebo sweet spot for pain relief identified in brain. (Science Daily, October 28, 2016.)
* The placebo effect: is there something in it after all? A study of the sometimes positive effects of taking drug-free pills suggests a biological factor at work. (S Connor, Guardian, November 6, 2016.) Excellent, with some review of related work on the basis of placebo responses.
The article, which is freely available: Brain Connectivity Predicts Placebo Response across Chronic Pain Clinical Trials. (P Tétreault et al, PLoS Biology 14:e1002570, October 27, 2016.) It's a long and complex article. I've pulled out one result to give the article some attention.
Background posts about placebos include...
* Would a placebo work even if you knew? (January 31, 2014).
* The placebo effect: a mutation that makes some people more likely to respond (October 30, 2012).
More fMRI... Dog fMRI (June 8, 2012).
Instead of just treating the pain... Using your nose to fix knee damage (January 28, 2017).
More about brains is on my page Biotechnology in the News (BITN) -- Other topics under Brain (autism, schizophrenia). It includes a list of brain-related Musings posts.
February 19, 2017
Imagine two people together; call them A and B. They have a ball, and a couple of pots, one red and one blue. They put the ball in the red pot. Person A leaves the room. While A is away, the ball is moved to the blue pot. A returns, and looks for the ball. Where will A look: in the red pot or the blue one? More importantly for our purpose here, where does person B think A will look?
You probably expect that A will look for the ball in the red pot, where it was when he left the room. You know what A knows -- and so does B. B knows that the ball is in the blue pot, but also knows that A thinks it is in the red pot. A has a false belief, but B understands that A has that false belief -- and that he will act on it.
A young child wouldn't make that prediction. If B was a young child, the child would predict that A would look in the blue pot. After all, the child knows the ball is in the blue pot; why wouldn't A know? The child does not understand that A has a false belief.
The ability to make that distinction requires what is called a theory of mind. An adult B understands what A thinks, even though it is wrong. Children acquire the ability to make the distinction around age 4 (though it varies with the details of the test).
That's a long introduction to set the stage for a recent article -- which reports such a test with three species of apes.
The challenge is to figure out how to do the test. The general logic of the test is similar to what was described above. The problem is, how do you tell what ape B predicts for where A will look?
The scientists addressed that problem with a method used for children too young to tell the investigator their choice. It relies on eye-tracking. The investigators watched the eyes of the ape (just as they would when testing an infant). The eyes reveal what the ape (or infant) expects.
The following figure shows the results for the individual apes, each of which was given two tests.
29 apes participated in two tests. They are listed in the figure, grouped by species.
In each test, each ape either correctly predicted the target (red dot), predicted the wrong response (blue dot), or did not make a prediction (clear dot).
Of the 29 apes, eight made two correct predictions (two red dots). None made two wrong predictions (two blue dots), but four made no predictions at all.
This is Figure 3B from the article.
Overall, the eye tracking indicated that the apes made the correct prediction about 2/3 of the time -- significantly greater than chance. That is, the apes knew what A was thinking, even though they knew it was wrong. The apes appear to have a theory of mind, as judged by such a test.
The results here, suggesting that the apes have a theory of mind, disagree with previous work. The authors argue that they have a better test, one that finally shows what the apes really think. Is that correct, or is the current test, for some reason, giving a wrong answer? We can only await further testing, presumably with a range of tests.
News story: Apes understand that some things are all in your head. (Max Planck Institute, October 6, 2016.) From one of the institutions involved.
Videos. There are two videos posted with the article as Supplementary Materials. (2 minutes each, no sound.) They should be freely available. The videos show sequences from the testing. If you want to get into how the testing was done, the videos may be a good place to start; it's a little confusing from the article itself.
The article: Great apes anticipate that other individuals will act according to false beliefs. (C Krupenye et al, Science 354:110, October 7, 2016.)
A mind post: The animal mind (July 23, 2009).
A recent post about apes: Age-related development of far-sightedness in bonobos (January 10, 2017).
My page Biotechnology in the News (BITN) -- Other topics includes a section on Brain (autism, schizophrenia). It includes a list of brain-related Musings posts.
February 17, 2017
A new article reports an interesting development in organ transplantation.
Let's start with the bottom line. Here is what they accomplished...
The figure shows glucose tolerance tests with diabetic mice. The graph shows the initial level of blood glucose (time zero), and then the level at various times after giving a high dose of glucose. It's a standard test for diabetes.
The animals were divided into six groups. We won't go through all of them, but the results show two clear types of results.
- The three high curves have a high initial level of blood glucose (above 300 mg/dL).
- The three low curves have a low initial level of blood glucose (below 200 mg/dL).
The initial glucose levels are sufficient for our purposes here. The high blood glucose level reflects the diabetic state. The low level shows that the animals have been successfully treated.
The three groups that gave the high result were negatives. They included negative controls, which received treatments that were not expected to work. For example, one received a sham treatment.
One of the low groups received normal pancreas islets from healthy mice. This, of course, should -- and did -- work. A positive control.
The other two low groups, reflecting success, received mouse pancreas islets that the scientists had grown in rats.
The tests shown above were done 60 days after islet transplantation to the mice. Other data showed that low glucose level was maintained for over a year.
This is Figure 4d from the article.
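The grouping in the graph boils down to the initial-glucose cutoffs described above. A toy sketch; the group names and numbers here are hypothetical, not the article's labels:

```python
# Illustrative classification using the initial blood-glucose cutoffs
# described above (values in mg/dL; names and numbers are hypothetical).
initial_glucose = {
    "sham": 340, "negative control A": 330, "negative control B": 315,
    "healthy-mouse islets": 150, "rat-grown islets A": 145, "rat-grown islets B": 140,
}

def classify(mg_dl):
    if mg_dl > 300:
        return "diabetic (treatment failed)"
    if mg_dl < 200:
        return "treated successfully"
    return "intermediate"

for group, level in initial_glucose.items():
    print(f"{group}: {level} mg/dL -> {classify(level)}")
```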
The work is a demonstration that one can grow organs of one species in another, and then transplant them back to "where they belong". They work. The diabetic mice were successfully treated using mouse islets that had been grown in rats.
That's easy to say. But it is a tremendous technical accomplishment, encompassing years of work. We'll just outline the process. The following figure summarizes it.
Start with a rat embryo, at the left. It's a very special rat embryo; we'll come back to that.
Inject mouse stem cells into the rat embryo. The stem cells were induced pluripotent stem cells, commonly called iPSC.
Grow the rats.
Isolate the pancreas. More specifically, isolate the islets, which are the insulin-producing tissue of the pancreas.
Transplant the islets into the diabetic mice. The rat-grown mouse islets.
This is Figure 1 from the news story in the journal, by Zhou.
We've glossed over one step -- a key step. Why would the rats make a mouse pancreas? The mouse stem cells are pluripotent, capable of making anything. We noted above that the rat embryos were special. Why? They were unable to make a rat pancreas. The rats carried a mutation in a key gene for pancreas development. They were dependent on the mouse stem cells to provide the pancreas. That is, the system was designed to promote growth of a mouse pancreas in the rat.
The rat mutation to prevent normal pancreas formation was introduced by gene editing, using the TALEN method.
One of the failures -- high glucose curves -- in the graph above was for mice that received islets grown in a rat carrying one copy of the mutation. That is, the rat was heterozygous for the gene, carrying one good copy and one bad copy. One good copy was enough to promote development of a rat pancreas, and the transplant to the mice failed.
It's a proof of principle that we can grow organs intended for transplantation in another species. "Interspecies organogenesis", as the article title says. Or organ farming.
Of course, what we would really like to do ...
Let's leave that for another post. In a few days. Added February 25, 2017. [This post is now listed below.]
* Lab-Grown Pancreas Reverse Diabetes In Mice. (Asian Scientist, February 8, 2017.) Good overview.
* Expert reaction to study reporting functional mouse pancreatic islets grown in rats and transplanted into mice. (Science Media Centre, January 25, 2017.) Comments from two expert scientists.
* News story accompanying the article: Regenerative medicine: Interspecies pancreas transplants. (Q Zhou, Nature 542:168, February 9, 2017.)
* The article: Interspecies organogenesis generates autologous functional islets. (T Yamaguchi et al, Nature 542:191, February 9, 2017.)
Added February 25, 2017. A (small) step toward making human organs by such a procedure: Using human stem cells to make chimeras in pig embryos (February 25, 2017).
A recent post on another approach to restoring pancreatic function: Treatment of Type 1 diabetes with encapsulated insulin-producing cells derived from stem cells (March 11, 2016).
More on diabetes is on my page Biotechnology in the News (BITN) -- Other topics under Diabetes. That includes a list of related Musings posts.
A post that includes a complete list of posts on gene editing (including using TALENs): CRISPR: an overview (February 15, 2015).
There is more about stem cells, and other issues of replacement body parts, on my page Biotechnology in the News (BITN) for Cloning and stem cells. It includes an extensive list of related Musings posts.
February 14, 2017
A new article reports the major chemicals responsible for the odor of an interesting fruit. In order, starting with the most important, the odors are described as: fruity; rotten onion; roasted onion; rotten, cabbage; sulfury, durian; fruity; fruity; fruity; rotten, durian; fresh, fruity; roasted onion; roasted onion; skunky. The list goes on, but that's enough for now. Rotten egg is a few places further down the list.
What do we mean by order of importance? It's based on the odor activity value (OAV). That is the ratio of the concentration (in the fruit) to the odor threshold (the amount we can detect). For example, the odor ingredient at the end of the list above, described as skunky, is present at about 0.9 µg per kg of fruit. We can detect that chemical at 0.00076 µg/kg (in water). Therefore, its OAV, the ratio of those two numbers, is about 1200.
The information above is from Tables 2 and 3 of the article. Table 2 presents their results for the concentrations; the results are then summarized and presented in order by OAV in Table 3. The names of the chemicals are given in both tables. That skunky odor, for example, is due to 3-methylbut-2-ene-1-thiol.
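The OAV arithmetic for that skunky compound is a one-liner; the concentration and threshold figures are the ones quoted above:

```python
# Odor activity value (OAV) = concentration in the fruit / odor threshold.
# Figures quoted above for the "skunky" compound, 3-methylbut-2-ene-1-thiol.
concentration_ug_per_kg = 0.9      # in the fruit
threshold_ug_per_kg = 0.00076      # detection threshold (in water)

oav = concentration_ug_per_kg / threshold_ug_per_kg
print(f"OAV = {oav:.0f}")  # about 1184, i.e. the "about 1200" quoted above
```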
Now that you know what this fruit smells like, would you like to try some? It's apparently a quite tasty fruit -- if you can get by its odor. It's the durian, more specifically the Monthong durian (Durio zibethinus L. 'Monthong').
One type of follow-up test is to make artificial mixtures of selected compounds, and see how people respond. In this case, the scientists found that a mixture of two of the major odor chemicals was identified by a panel of human testers as durian about as often as the real thing. The two chemicals used here were the first and third from the list above: one that is very fruity and one that smells of roasted onion.
* Compounds responsible for world's stinkiest fruit revealed. (E Stoye, Chemistry World, January 25, 2017.)
* Chemists Identify Key Compounds Responsible for Durian's Pungent Odor. (N Anderson, Sci-News.com, January 19, 2017.)
The article: Insights into the Key Compounds of Durian (Durio zibethinus L. 'Monthong') Pulp Odor by Odorant Quantitation and Aroma Simulation Experiments. (J-X Li et al, Journal of Agricultural and Food Chemistry 65:639, January 25, 2017.)
A recent post about sulfur odors... Copper ions in your nose: a key to smelling sulfur compounds (October 10, 2016). Links to more about odors.
A post about the chemical responsible for the rotten egg part of the durian odor: What's the connection: rotten eggs and high-temperature superconductivity? (June 8, 2015).
More about fruit ingredients...
* Is the lychee (litchi) a toxic food? (May 11, 2015).
* Grapefruit and medicine (March 26, 2012).
February 13, 2017
Fracking involves injecting liquids underground, at substantial pressures. There is concern that the process might cause slippage of fragile structures within the Earth -- that is, earthquakes.
There are actually two injection processes. The first is the injection used during extraction. The second is the injection of the waste water for disposal. The latter is the more substantial process.
Early experience with fracking led to anecdotal reports of earthquakes, but it was hard to make any sense of the reports. More recently, systematic reports have made clear that earthquakes can be associated with waste water disposal from fracking -- at some sites. This was discussed in an earlier post [link at the end].
We now have an article reporting earthquakes caused by the fracking injection itself. Interestingly, two different processes may be involved.
The new article is based on observations in an oil field in Alberta. Here is an example of the observations...
It's a complicated graph, showing several things (various y-axes) over the time period of December 2014 through March 2015 (x-axis). Here are the major things to see...
In the lower frame, the blue bars in December and January show injections at a particular site. The bar heights show the pressure of each injection (scale at the left).
In the upper frame, the red dots show earthquakes, very near that injection site. The vertical position of the dot gives the magnitude (scale at the left).
You can see that there was a swarm of earthquakes during the January injections. Most are small, detected only because the field is carefully monitored. However, the biggest quakes in the initial swarm are about magnitude 3, with some up to 4 later.
Now look at another curve: the red line in the lower frame. That shows the cumulative volume of the injections (scale at the right). One might suggest that the quakes began when the cumulative injection volume rose past a certain point.
This is part of Figure 3 from the article. The full figure shows data from other sites.
Our discussion of the graph above makes a connection between the fracking injection and earthquakes. However, it is important to emphasize that all that analysis can possibly show is a correlation. A single such data set cannot show a causal connection. That is, the graph above is presented to show the nature of the results and how they are described, not to prove that there is a connection.
Importantly, the article has multiple data sets of the type shown above, and more. From the analysis of all the data at hand, for various sites, the authors argue that there is a causal connection. Actually, two causal connections.
First, the effect noted above, that injection may lead to an immediate swarm of quakes near the injection site, seems to hold for various sites in the field. This is a direct effect of the injection.
Second, there is another, delayed effect... There may be small quakes, at greater distances from the original injection, a few weeks later. In the figure above, there are bursts of earthquakes at various times. The arrows across the top of the figure help to group those bursts of quakes. These quakes occur at considerable depth, and are clearly distinct from the first group of quakes, which were a direct local effect of the injection.
Overall, the article makes a strong case for connections between fracking injections and earthquakes, and explains how the connections work.
This is the best-documented case of earthquakes being directly associated with fracking itself -- as distinct from waste water disposal. But in some ways, the story is similar. Both activities may cause earthquakes, depending on the local ground structure. Monitoring is important, and warning signs should be heeded.
News story: Study reveals two seismic processes by which hydraulic fracturing induces tremors. (Phys.org, November 18, 2016.)
* News story accompanying the article: Geophysics: Understanding induced seismicity. (D Elsworth et al, Science 354:1380, December 16, 2016.)
* The article: Fault activation by hydraulic fracturing in western Canada. (X Bao & D W Eaton, Science 354:1406, December 16, 2016.)
Background post: Fracking: the earthquake connection (June 19, 2015). This is about quakes associated with waste water disposal. Links to more, both about fracking and about earthquakes.
There is more about energy issues on my page Internet Resources for Organic and Biochemistry under Energy resources. It includes a list of some related Musings posts.
February 11, 2017
What if we had some bacteria that had not seen "modern civilization" for four million years? Perhaps they have not even seen animals during that time. Is it possible that they might be resistant to some modern antibiotics?
Of course, the answer is yes. After all, many of our antibiotics are natural chemicals, made by other microbes. These antibiotics undoubtedly play a role in competition between microbes in nature. It is natural that bacteria would develop resistance to them.
A team of scientists has been studying the bacteria in Lechuguilla Cave in New Mexico. (The cave is part of Carlsbad Caverns National Park.) From the nature of the situation, it is likely that these bacteria have been isolated from the surface world for four million years. Interestingly, one of the bacterial strains is resistant to most of our modern antibiotics.
The bacteria studied here are living modern bacteria. What's unusual is that, so far as we know, they have been isolated from the "outside world" for millions of years. They have not undergone genetic exchange with surface organisms, and have not been exposed to modern sources of pathogens or antibiotics. In that sense, they are "ancient".
In a new article, they explore the antibiotic resistance of this organism, called Paenibacillus sp. LC231. The study is based on analyzing the genome, and then doing follow-up biochemistry of genes of interest. For example, they studied the resistance genes they found in the cave Paenibacillus after transferring them to common Escherichia coli.
The basic finding is that numerous resistance genes were found. Some of them are similar to known modern genes; others appear to be novel. All of the resistance genes they found were on the main chromosome of the bacterium; none were on plasmids.
The scientists also found that most of these resistance genes from the cave bacteria were in some strains of the same type of bacteria from the surface. This raises a question about their origin.
It's an interesting view of an unusual situation. Finding resistance genes in ancient organisms is not a surprise. Importantly, it does not reduce our responsibility to monitor and limit modern antibiotic resistance. Antibiotics may have been around in ancient times, but human activity has increased their prevalence, and thus increased the pressure to develop resistance.
Nowadays, spread of antibiotic resistance is often via plasmids; it's easier to acquire a ready-made resistance from a neighbor than to make your own from scratch. The lack of plasmid-borne resistance genes in these cave bacteria is interesting.
News story: Multi-Drug Resistant Bacteria Found Deep in New Mexico's Lechuguilla Cave. (Sci-News.com, December 13, 2016.)
The article, which is freely available: A diverse intrinsic antibiotic resistome from a cave bacterium. (A C Pawlowski et al, Nature Communications 7:13803, December 8, 2016.)
Previous post about antibiotics: Staph in your nose -- making antibiotics (October 9, 2016).
More on antibiotics is on my page Biotechnology in the News (BITN) -- Other topics under Antibiotics. It includes an extensive list of related Musings posts.
February 10, 2017
In an earlier post, we noted a detailed calibration curve that allows carbon-14 (C-14) dating to an accuracy of about one year for recent samples [link at the end]. This high-resolution C-14 dating of essentially "current" materials is made possible by the burst of C-14 released into the atmosphere by atomic bomb testing in the 1950s.
One application of such dating proposed there was tracking the ivory trade. We now have an article on that application.
The work is based on 231 samples of ivory that were seized between 2002 and 2014. The samples are from 14 seizures, which are listed in the article. Samples of the newest ivory from each tusk were dated, using the C-14 calibration curve. That gave an estimate of the date of death. The time from the estimated date of death to the time of seizure is called the lag. The following graph shows the lags that were obtained for 230 samples.
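Computing the lag is simple date arithmetic. A minimal sketch, using hypothetical dates rather than any particular sample's:

```python
from datetime import date

def lag_months(death: date, seizure: date) -> int:
    """Lag from estimated date of death (by C-14) to ivory seizure, in whole months."""
    return (seizure.year - death.year) * 12 + (seizure.month - death.month)

# Hypothetical example, not a particular sample from the article:
# death estimated as June 2009, ivory seized January 2011.
print(lag_months(date(2009, 6, 1), date(2011, 1, 1)))  # 19 (months)
```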
The results are shown here for ivory from elephants from four regions of Africa. (The source of the ivory had previously been determined by DNA analysis.) For our purposes, the similarities of these four curves are more important than the differences. It is fine to focus on just one of the curves, at least at the start.
Lag time -- the time from death of the animal to the ivory seizure -- for 230 samples.
The actual counts (y-axis) are plotted. Note that the y-axis scales vary. Each red line shows a normal curve fitted to the data.
You can see that the peak counts are for lags of 12-24 months.
Only three samples shown have a lag greater than 60 months. One sample had a lag beyond the range on the graph: 231 months (19 years).
A few samples are shown with negative values for the lag. The largest such value is 10 months. That's within the uncertainty of the measurement, which is about +/- one year. A value of zero would mean that the ivory was seized at the time it was made; that is, the sample was fresh.
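The lag itself is simple arithmetic once C-14 dating has given an estimated date of death. Here is a minimal sketch; the function and the example dates are hypothetical illustrations, not taken from the article:

```python
from datetime import date

def lag_months(death: date, seizure: date) -> int:
    """Months from the estimated date of death (from C-14 dating of the
    newest ivory in a tusk) to the date of seizure. Small negative values
    can occur, within the roughly one-year uncertainty of the dating."""
    return (seizure.year - death.year) * 12 + (seizure.month - death.month)

# Hypothetical example: a tusk dated to mid-2008, seized in early 2010
print(lag_months(date(2008, 6, 1), date(2010, 3, 1)))  # → 21
```

Most samples in the article would come out in that same general range: lags of a year or two.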
Tridom? That's the Tri-National Dja-Odzala-Minkébé. It's a transborder forest, officially recognized as a single protected area by Cameroon, Gabon and the Republic of Congo.
This is Figure 3B from the article.
The big picture? Almost all of the ivory samples were from animals killed within the five years prior to the ivory seizure.
That's important. One claim made by some in the trade is that much of the ivory is from stockpiles accumulated before trading was made illegal. That ban was instituted in 1989, 28 years ago. The data here are quite inconsistent with that claim. Only one sample here, out of 231, was even close to being old enough to have come from such stockpiles.
* Clues in poached elephant ivory reveal ages and locations of origin. (myScience, November 8, 2016.)
* Most Ivory for Sale Comes From Recently Killed Elephants -- Suggesting Poaching Is Taking Its Toll. (R Nuwer, Smithsonian, November 7, 2016.)
* Most illegal ivory comes from recently killed elephants: new study. (S Dasgupta, Mongabay, November 8, 2016.)
The article, which is freely available: Radiocarbon dating of seized ivory confirms rapid decline in African elephant populations and provides insight into illegal trade. (T E Cerling et al, PNAS 113:13330, November 22, 2016.) Very readable, with good discussion of the measurements, and their implications for monitoring elephant populations.
Background post: Atomic bombs and elephant poaching (October 25, 2013). The article discussed there is reference 25 of the current article, and includes some of the same authors.
Recent post about elephants... Why do elephants have a low incidence of cancer? (March 20, 2016).
My page of Introductory Chemistry Internet resources includes a section on Nucleosynthesis; astrochemistry; nuclear energy; radioactivity. That section links to Musings posts on related topics, including the use of radioactive isotopes.
February 7, 2017
The cost of genome sequencing has plummeted in recent years. That has allowed massive levels of sequencing, beyond what we might have dreamed of just a few years ago.
Sometimes, such massive sequencing may tell us more than we wanted to know.
In the early days of genome sequencing, getting one complete genome sequence for an organism was a significant achievement. We understood that genomes varied, but having one gave us a "reference genome" for the species.
However, cheap sequencing has allowed us to accumulate many genomes for a species, and sometimes there is a surprise. It's illustrated by the following cartoon.
The graph plots the number of genes found (y-axis) vs the number of samples of the species sequenced (x-axis).
There are two curves!
After sequencing one genome, we see that there are about 4000 genes. Traditionally, that genome -- that set of genes -- would be considered the reference genome for the species.
We now sequence a second genome for the same species. Let's say it also has about 4000 genes. However, it lacks 500 of the genes from the first genome, and has 500 genes not seen there. Now, a third genome. It, too, lacks a few hundred genes from before, and has a few hundred genes not seen before. And so on.
The upper curve shows the total number of genes found, even if in only one case; that number keeps rising. (It may eventually level off.) The lower curve shows the number of genes found in every case; that number keeps getting smaller.
This Figure is from the news feature listed below.
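The two curves in the cartoon are easy to reproduce with a toy simulation: each simulated genome shares a fixed core set of genes, plus a random draw of accessory genes from a larger pool. All the sizes here are made-up illustrative numbers, not from the news feature or any real data set:

```python
import random

def pan_core_curves(n_genomes=20, core=1000, accessory_pool=6000,
                    accessory_per_genome=3000, seed=0):
    """Simulate the pangenome (union of all genes seen so far) and the
    core genome (genes present in every genome so far) as more genomes
    of a species are sequenced. Illustrative numbers only."""
    rng = random.Random(seed)
    core_genes = set(range(core))
    pool = list(range(core, core + accessory_pool))
    pan, cor = set(), None
    pan_curve, core_curve = [], []
    for _ in range(n_genomes):
        genome = core_genes | set(rng.sample(pool, accessory_per_genome))
        pan |= genome                                # upper curve: keeps rising
        cor = genome if cor is None else cor & genome  # lower curve: keeps falling
        pan_curve.append(len(pan))
        core_curve.append(len(cor))
    return pan_curve, core_curve

pan, cor = pan_core_curves()
# pan_curve starts at ~4000 and rises toward 7000;
# core_curve starts at ~4000 and falls toward the 1000 true core genes
```

With these toy parameters, the first genome has about 4000 genes, just as in the story above; only with more genomes does the pan/core distinction appear.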
What's shown there is what is actually found, especially for genomes of microbes. The details (such as how fast the upper curve levels off -- if at all) vary.
The lower curve is now considered to be the core genome for the species. The upper curve reflects a new idea, the pangenome: the collection of all genes associated with the species.
In one case, the bacterial genus Prochlorococcus, analysis of 45 strains has revealed about 80,000 genes -- the pangenome. The core genome is about a thousand genes.
The Scientist recently ran a news feature on the emerging idea of the pangenome. It's an interesting -- and incomplete -- story. It challenges our notions of what a species is. In particular, we see that getting one genome may tell us very little about the species.
Feature story, which is freely available: The Pangenome: Are Single Reference Genomes Dead? -- Researchers are abandoning the concept of a list of genes sequenced from a single individual, instead aiming for a way to describe all the genetic variation within a species. (C Offord, The Scientist, December 2016, p 31.)
Most recent post on the wonders from genome sequencing: The Asgard superphylum: More progress toward understanding the origin of the eukaryotic cell (February 6, 2017). That's the post immediately below.
A post on the cost of genome sequencing... The $1000 genome: we are there (maybe) (January 27, 2014).
A post about the possibility of getting too much genome information, though for a different reason: Are there genetic issues that we don't want to know about? (October 22, 2013).
There is more about genomes and sequencing on my page Biotechnology in the News (BITN) - DNA and the genome. It includes an extensive list of Musings posts on the topics.
One reason members of a species may have different genes is horizontal gene transfer (HGT), the direct transfer of genes between organisms. It's considered common in prokaryotes, but increasingly recognized to have some role in higher organisms. Here is an example: An extremist alga -- and how it got that way (May 3, 2013).
February 6, 2017
Less than two years ago, Musings presented a newly described group of microbes, the Lokiarchaeota. Genome evidence suggested that these archaea might be more closely related to eukaryotes than any prokaryote known so far [link at the end].
A new article extends the story. Scientists around the world, led by those who found the Lokiarchaeota, have looked for related organisms -- and found them. The catalog of these eukaryotic-like archaea now includes four phyla -- all named after figures in Norse mythology. They are now collected together in a superphylum called Asgard -- the home of the Norse gods.
Here is the current picture...
The figure shows a family tree for several groups of archaea.
Focus on the four groups highlighted in color. One of those (green) is the Lokiarchaeota, discussed previously. These four groups form the cluster that the scientists call Asgard.
And within the Asgard are the Eukarya.
The four groups highlighted are considered phylum level; Asgard is a superphylum.
If you compare this family tree with that shown in the background post, you will see that it is rather similar. The main point is that the eukaryotes are now associated with a superphylum of several groups of eukaryotic-like archaea, rather than just the Lokiarchaeota.
This is Figure 1b from the article.
The key point about the Asgards is that they contain genes for proteins usually considered characteristic of eukaryotes. The authors call them eukaryotic signature proteins (ESPs). The following figure summarizes some of the findings on this point. A caution... don't get bogged down with detail in this figure.
The figure lists several Asgards along the side. Across the bottom are several proteins considered to be characteristic of eukaryotes -- the ESPs. (These are shown in groups, labeled at the top.) It's not important that you pay attention to any of the labels.
The filled dots, whether gray or black, mean that there is evidence for the gene (shown at the bottom) in the particular organism (shown at the side).
You can see that the organisms listed, all Asgards, contain many of these genes. It is quite uncommon for prokaryotes, whether bacteria or archaea, to have any of these genes.
This is Figure 1d from the article.
The pattern above shows the distinctive feature of the Asgards: archaea containing genes considered characteristic of eukaryotes. That was a key feature that caught our attention earlier for the Lokiarchaeota. It is now extended to a group -- a superphylum -- of archaea.
The archaea were first recognized as a distinct group by Carl Woese in 1977. That was a landmark in microbiology, both for the discovery and for the molecular method used. Woese also suggested that the bacteria, the archaea, and the eukaryotes were three equally distinct groups -- the three domains of life. That was a bold proposal, based on very limited evidence. Since Woese's pioneering work, that three-domain model of life has been challenged by the suggestion that the eukaryotes emerged from within the archaea. The current article is the latest along that line. Not only does it now seem that the eukaryotes emerged from the archaea, but the type of archaea involved is becoming increasingly well specified. In his early work, Woese underestimated the diversity -- and importance -- of the archaea.
Despite this development, we must remember some limitations of the work and the conclusions. So far, none of these Asgard organisms have ever been seen. They have been inferred from metagenomic work: analyzing DNA found in the environment and inferring its source, without ever having the organism at hand. Scientists will now be highly motivated to try to find -- and hopefully even grow -- actual Asgard cells. It will not be easy.

It's also important to emphasize that the relationship between Asgard and eukaryote is a proposal, a model. The evidence supports it; they seem to share some key genes. But there is a huge gap between the model and having any real description of what happened. We may never know what happened, but we can continue to seek clues.
* 'Marvel microbes' illuminate how cells became complex. (Science Daily, January 11, 2017.) This is based on the press release from Uppsala University, the lead institution for the work.
* A Break in the Search for the Origin of Complex Life. (E Yong, The Atlantic, January 11, 2017.) Good overview from an independent science journalist. (Science writer Ed Yong's work has often been noted in Musings, and I list him as a generally good source on my page Science on the Internet: an introduction; see item 6 there. He now has a new home, at The Atlantic.)
* Discovery of New Microbes Sheds Light on How Complex Life Arose. (M G Airhart, University of Texas College of Natural Sciences, January 11, 2017.) From one of the universities.
* News story accompanying the article: Microbiology: Mind the gaps in cellular evolution. (J O McInerney & M J O'Connell, Nature 541:297, January 19, 2017.)
* The article: Asgard archaea illuminate the origin of eukaryotic cellular complexity. (K Zaremba-Niedzwiedzka et al, Nature 541:353, January 19, 2017.)
Background post: Our Loki ancestor? A possible missing link between prokaryotic and eukaryotic cells? (July 6, 2015). Links to more.
Discovery of the archaea... Carl Woese and the archaea (January 12, 2013).
Added February 7, 2017. Next post about genomes... Pangenomes and reference genomes: insight into the nature of species (February 7, 2017). Immediately above.
There is more about genomes and sequencing on my page Biotechnology in the News (BITN) - DNA and the genome. It includes an extensive list of Musings posts on the topics.
This post is noted on my page Unusual microbes.
February 5, 2017
Not much, according to a new article.
The article deals with energy usage for houses in the State of California (CA), where a series of building regulations intended to reduce energy consumption was introduced starting in the late 1970s.
The following graph summarizes the basic data...
The graph shows annual energy usage (y-axis) as a function of the year the house was built (x-axis).
Energy usage is shown for both natural gas and electricity. Those are the two major energy sources used in CA homes.
The energy usage for each source is given in energy units: MBTU -- thousands of BTU. (Apparently, MBTU means thousands to some people, and millions to others. It is an amusing ambiguity of units.)
The vertical red line marks the year the state started introducing the energy-saving regulations that affected how new homes were built.
Importantly, the energy usage is for two specific recent years. For example, the energy used in 2009 for houses of various ages is part of the data set plotted above. Analyzing energy usage for the same year provides a control, though an imperfect one as we shall see.
The graph is based on data collected during surveys by a California state agency.
This is Figure 1 from the article.
What does the graph show? Well, if you wanted to argue that energy usage declined for houses built after the red line (the start of regulations), you might note that gas usage declines after that time, and electricity usage levels off. (It may be questionable whether these trends really correlate with the regulations, but let's not worry about that.)
Overall, there seems to be a decline in energy usage for newer houses. Perhaps 15% for houses built since the regulations were introduced.
Here is how I estimated that number of a 15% decline. It is a rough estimate, but what follows does not critically depend on the exact value.
I estimated the values for gas and electricity from the graph, and added them together to get the total energy usage. For the oldest houses (extreme left data), I estimate 67 MBTU. For the houses just before the red line, I get 72 MBTU. For the newest houses (extreme right data), I get 62 MBTU.
Using those numbers, it appears that the newest houses use about 15% less energy than the houses built just before the regulations were introduced.
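For the record, here is that arithmetic, using my rough by-eye readings from the graph (the exact values are not critical):

```python
# Total annual usage (MBTU), estimated by eye from Figure 1 of the article
oldest_houses = 67   # extreme left of the graph
pre_regulation = 72  # houses built just before the red line
newest_houses = 62   # extreme right of the graph

decline = (pre_regulation - newest_houses) / pre_regulation
print(f"{decline:.0%}")  # → 14%
```

That comes out at about 14%, which I round loosely to the 15% figure used in the discussion.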
So, it would seem that the regulations for making houses more energy-efficient led to a 15% decline in energy usage.
When the regulations were introduced, it was predicted that they would lead to an 80% reduction in energy usage. And therein lies the problem. The energy reduction that resulted from the regulations is -- according to the new analysis -- far less than promised or expected.
Why the discrepancy? Is there something wrong with the analysis above, or have the savings really been far less than expected?
The author examines some possible problems with the analysis. For example... Maybe there is some other difference between new houses and old ones that affects their energy usage. Maybe new houses are in areas of the state that need more energy. Maybe people who live in newer houses are more affluent, and tend to use more energy. The author looks at these and some other possibilities, and does not find any major flaw in the analysis.
One might wonder whether there is a political motivation here. Perhaps the author is anti-regulation, and trying to show that regulations don't work. I have not examined the author's background. But importantly, in the end, it is data that matters. If the analysis is flawed, someone needs to show why.
The author has done his analysis, and published it. If others have objections, or alternative analyses, they should do the same. I did not find much comment on the article at this point, but it is early.
What's the point of this post? The article caught my attention as being about an interesting issue -- and it is about my home state. It's interesting, and fairly readable -- and it challenges us. In science, we do not take a single article as "the answer"; let's see what comes from this article. Perhaps this article along with its follow-up will lead to a better understanding of what works well and what does not.
* Have green building codes succeeded in saving energy? Energy savings from efficiency requirements prove hard to find. (T Hyde, American Economic Association, October 19, 2016.) Good discussion.
* I'm Not Really Down with Most Top Down Evaluations. (M Auffhammer, blog at Energy Institute, Haas School of Business, University of California, Berkeley, September 19, 2016.) A discussion of the problem of evaluating energy efficiency in the real world. It is not specifically about the current article, though the article is noted in the comments that follow.
The article, which may be freely available: How Much Energy Do Building Energy Codes Save? Evidence from California Houses. (A Levinson, American Economic Review 106:2867, October 2016.)
A post about reducing energy usage of a house: What if your house could sweat when it got hot? (November 30, 2012).
There is a section of my page Internet Resources for Organic and Biochemistry on Energy resources. It includes a list of some related Musings posts.
February 3, 2017
The first step in eating is to open your mouth, as shown in the following figure...
The mouth is shown in gray. As you can see, it is open.
The open mouth allows food to flow into the digestive organ. In this case, that is a microbial fuel cell (MFC).
This is Figure 5b from the article.
That's a diagram, of course. A diagram used to guide making a robot. A robot that trawls the waters, finds food, and burns it in a fuel cell. A microbial fuel cell -- a bacterial culture that generates electricity while metabolizing. Overall, the robot, with its microbial stomach, makes electricity to run itself.
The figure shows the power output (y-axis) for two MFCs over time (x-axis). The two are similar, except that one is set up as a robot stomach with the new mouth, and one serves as a standard lab control.
The solid line is for the stomach MFC; for the most part, it is the lower line. Results are shown here for two cycles of testing.
You can see that both MFCs work. However, the power output for the stomach is less than for the control. The latter is under ideal lab conditions.
The y-axis scale is in microwatts; that is not clear on the label (even in the original pdf file).
This is Figure 12 from the article.
The emphasis in the current work is on development of the mouth, so that the MFC operates with isolated electrodes, as is necessary, but can also feed.
The primary motivation for this type of development is to get robots that are more autonomous. Common robots require a tether to provide electricity, or use batteries, which run down. Of course, a robot of the type developed here would require a food supply. Under bad conditions, such a robot could starve.
There is also some talk of how these robots could supply information about the environment, or even be useful in cleaning up algae. True, but those functions do not require the level of integration shown here.
News story: This Algae-Eating Robot Could Solve Water Contamination. (Nature World News, November 4, 2016.)
The article: Toward Energetically Autonomous Foraging Soft Robots. (H Philamore et al, Soft Robotics 3:186, December 1, 2016.)
Previous robot post... An artificial hand that can tell if a tomato is ripe (January 3, 2017).
Microbial fuel cells are noted in the post A new source of electricity (November 10, 2009).
More fuel cells... Hydrogen fuel cell cars (June 8, 2010).
More about what robots eat... A robot that eats flies -- and more (August 4, 2008). There's not much here, but this seems to be about some earlier work from the same lab.
Previous post on the gut microbiome... How intestinal worms benefit the host immune system (February 27, 2016).
January 31, 2017
Neopalpa donaldtrumpi sp. n.
It's a newly discovered moth, announced in an article published on January 17 -- three days before the inauguration of the new US President.
Scale bar is 2 millimeters.
This is Figure 1g from the article.
We note two sentences from the article, in the section where the author explains the name (page 89, bottom). The sentences are in the opposite order in the article.
* "The specific epithet is selected because of the resemblance of the scales on the frons (head) of the moth to Mr. Trump's hairstyle." You may want to check a close-up of the (moth's) head, in Figure 2cd of the article (or the news story).
* "The reason for this choice of name is to bring wider public attention to the need to continue protecting fragile habitats in the US that still contain many undescribed species."
The specimen is from a collection at the University of California, Davis. It was originally collected in Imperial County, at the southern end of California. The range of the moth is the southeast of California and the north of Baja California (Mexico), based on the few specimens found so far.
News story: New species of moth named in honor of Donald Trump ahead of his swearing-in as president. (Phys.org, January 17, 2017.)
The article, which is freely available: Review of Neopalpa Povolný, 1998 with description of a new species from California and Baja California, Mexico (Lepidoptera, Gelechiidae). (V Nazari, ZooKeys 646:79, January 17, 2017.)
A previous moth post: The story of the peppered moth (July 9, 2012).
Also see: The Obama lizard (March 20, 2013).
January 30, 2017
For over a century, tungsten (or wolfram, as many say; symbol W) was the key ingredient of light bulbs. That use is being phased out, due to the inefficiency of ordinary incandescent bulbs.
A new article suggests that we might make paper out of tungsten. More specifically, that we make rewritable paper out of tungsten oxide, WO3.
Here's the idea...
The first frame of the figure shows how to write on the material. A mask for the desired image is placed over the membrane (the "paper"). UV irradiation leads to color development where the membrane is exposed.
The following frames show various images that were printed on a single membrane, in succession.
In each case, there are two steps. Step 1 is to erase the old image. This is done with either ozone or heat (shown as Δ). Step 2 is to write the new image.
You can see that the image quality is quite good, even with successive printings on the same membrane. (The authors report doing 40 cycles with some membranes, with minimal loss of quality.)
This is Figure 5d from the article.
The writing process takes about two minutes. The erasing processes take several minutes. This is not rewritable paper for routine note-taking. The authors suggest thinking in terms of posters and billboards as examples where the technology, with these parameters, might be useful.
They also say... "Such reprintable clothes can be printed with temporary marks or advertisements for athletic purposes of sports meets and games." (Last sentence of Results and Discussion section, p 29718.)
The chemical nature of both writing and erasure is known. Briefly, it involves the oxidation state of the W. The regular W(VI), in WO3, is colorless. W(V) is blue. The UV step shown above leads to reduction of the W from the 6+ state to the 5+ state, which is blue.
Erasure, then, involves oxidation of the blue W(V) back to colorless W(VI). Two methods of erasure are shown above. But common O2 works -- and so does air. The images are not all that stable under ambient conditions. The following figure shows an example.
The figure shows an image after various times, stored at ambient conditions.
This is Figure 4d from the article.
What is this "paper"? As noted above, the active ingredient is WO3, a material known to be photochromic -- to change color with light. The current article develops ways to build on this basic effect and make it practical for rewritable paper. The WO3 is formed here into a paper-like material, with common polymers. The polymers electronically couple with the WO3; details of the composition affect properties such as ease of writing and how rapidly the image fades in air.
The authors suggest that their rewritable paper is easily manufactured, inexpensive and non-toxic. We'll leave it to others to compare alternatives, but it's an interesting type of development.
News story: Rewritable material could help reduce paper waste. (Phys.org, November 2, 2016.)
The article: Electrospun Photochromic Hybrid Membranes for Flexible Rewritable Media. (J Wei et al, ACS Applied Materials & Interfaces 8:29713, November 2, 2016.)
Related... Windows: independent control of light and heat transmission (February 3, 2014). There are similarities between the work in this post and the current one. In both cases, energy (electrical or light) is used to -- reversibly -- change the optical properties of a material, to our benefit. In fact, WO3 has been used in smart windows, though not in this post.
Light bulbs... Light bulbs (July 1, 2009). Links to more.
Previous panda post... How the giant panda survives on a poor diet (August 2, 2015).
Previous posts about paper? Beats me. "Paper" is a horrible search term. But there is no mention of tungsten in Musings prior to this post.
January 28, 2017
Cartilage damage in the knee is a significant health problem. There are various approaches to treating knee cartilage damage, but none are particularly satisfactory.
A recent article offers a new approach: use your nose. The structural material in the nasal septum is actually the same type of material as knee cartilage. Interestingly, the nasal chondrocytes (cartilage-forming cells) seem to have better regeneration capability than those from the joints. And it's much easier to get a sample from the nose than from the knee.
You might wonder about the size of the sample from the donor site. What the scientists do is to excise a small piece of the nasal septum, and then expand it in lab culture. The lab-grown material, derived from the nasal septum, is what is implanted into the knee.
The present study, a Phase I trial, is the first trial of the method in humans. It involved ten patients, with two years of follow-up. Analysis included MRI of the knee, as well as the patients' subjective evaluation of their status. Most of the results are encouraging. The article here is a preliminary report; the current trial will continue. A Phase 2 trial, comparing cell sources and procedures, is in progress. Nose-to-knee cartilage transfer seems a promising approach.
Pictures? There are many in the article, some with plenty of blood. Help yourself.
* Engineered Autologous Nasal Chondrocytes Repair Articular Cartilage. (Rheumatology Advisor, October 24, 2016.)
* Nose cells could help repair damaged knee cartilage. (C Paddock, Medical News Today, October 21, 2016.)
* "Comment" accompanying the article: Cartilage repair across germ layer origins. (N Rotter & R E Brenner, Lancet 388:1957, October 22, 2016.)
* The article: Nasal chondrocyte-based engineered autologous cartilage tissue for repair of articular cartilage defects: an observational first-in-human trial. (M Mumme et al, Lancet 388:1985, October 22, 2016.)
* Added February 21, 2017. Can we predict whether a person will respond to a placebo by looking at the brain? (February 21, 2017).
* Jumping -- flea-style (February 21, 2011).
* Should you run barefoot? (February 22, 2010).
A recent nose post... Copper ions in your nose: a key to smelling sulfur compounds (October 10, 2016). Links to more.
January 27, 2017
Let's look at the reaction that is the focus of the work in a new article...
Start with the product, chemical #3 at the bottom. It contains a carbon-silicon bond. Further, the C of that C-Si bond is a stereocenter; the mirror image of the product shown is a distinct chemical.
That product is made from the two chemicals shown at the top. #1 has a silane group, with an Si-H bond. #2 is a diazo compound, with =N2 activating a C. The activated C attacks the Si of the silane group, leading to the product.
The figure shows "heme protein" acting as a catalyst. More about that as we continue.
This is Figure 1A from the article.
In the first part of the work, the authors explore a little. They find that heme itself catalyzes the reaction, though poorly. They then try some proteins that contain heme; the results vary. One heme protein stands out: not only does it enhance the rate of the reaction, but it makes mainly one of the two possible stereoisomers, a hint that this is an enzyme-catalyzed reaction. That successful protein is the cytochrome c from the bacterium Rhodothermus marinus.
They used that protein as the base for further work. They knew the structure of the protein; that let them focus on amino acids likely to be important. Trial and error work focused on those parts of the protein led to improvements. The following figure shows some results...
The figure shows the rate of the catalyzed reaction for four versions of the enzyme.
The rate (y-axis) is shown as the TOF (turnover frequency), in reactions per minute.
The four enzymes are shown across the bottom. WT = wild type, the original enzyme. The others have 1 to 3 mutations, as listed. (The details of the mutations are in the article but we'll skip them.)
You can see that the rate increases as the improved versions of the enzyme are developed. The version with three mutations is about 7-fold faster than the wild type.
This is Figure 1E from the article.
We noted that the original enzyme was stereospecific, making mainly one of the two possible stereoisomer products. This feature was retained, and even improved a little, during the further development. The final enzyme produced more than 99% of one stereoisomer.
In summary, the scientists have developed an enzyme that catalyzes the formation of C-Si bonds. They found the activity in a natural protein, and then developed it further in the lab.
So far as they know, this is the first known case of the enzymatic formation of C-Si bonds. (There is no reason to think that the natural protein makes such bonds in nature.) The ability to make C-Si bonds in a stereospecific manner is of industrial interest; there may be a future for this enzyme. It offers the possibility of inexpensive C-Si bond formation under environmentally friendly conditions. Further, the work should serve as a model: if you want an enzyme to do something unusual, go look. There may be such an enzyme activity already in nature, even if not obvious.
There is one further experiment in the article that is of interest. They clone the enzyme into common E coli bacteria. The bacteria are now able to carry out the reaction. That is, the scientists not only made an enzyme that can make C-Si bonds, but they now have a living organism that can do so.
* Engineered enzyme first to forge carbon-silicon bond. (J Durrani, Chemistry World, November 25, 2016.)
* Caltech scientists use bacterial protein to merge silicon and carbon and create new organosilicon compounds. (Kurzweil, November 25, 2016.) Interesting picture (at the top).
* News story accompanying the article: Biochemistry: Teaching nature the unnatural -- A reengineered enzyme catalyzes C-Si bond formation. (H F T Klare & M Oestreich, Science 354:970, November 25, 2016.)
* The article: Directed evolution of cytochrome c for carbon-silicon bond formation: Bringing silicon to life. (S B J Kan et al, Science 354:1048, November 25, 2016.)
Posts about silicon include... Black silicon and dragonfly wings kill bacteria by punching holes in them (January 28, 2014).
Enzyme development... Better enzymes through nanoflowers (July 7, 2012).
More cytochromes... On sharing electrons (May 3, 2011).
January 24, 2017
Original post: An Ebola vaccine: 100% effective? (August 7, 2015).
As the recent Ebola outbreak in West Africa waned, there was an important trial of an Ebola vaccine. It was a clever trial, focusing on those who were likely to be contacts of known cases, thus most likely to have been exposed. That strategy is called ring vaccination. Musings noted the preliminary report of the trial's results: the vaccine was 100% effective, though with small numbers.
The final report from the trial is now in. The short message is that the initial findings hold. In fact, the numbers are not much larger than in the preliminary report.
The original Musings post, which discussed the trial in some detail, still holds, too. I encourage you to read that for the general nature of the trial and its results. More, on the update, is below, but there is no big news in the update.
Perhaps the key limitation is that we do not know how well the current vaccine will do against other strains of Ebola, including those that develop during an outbreak. With that inevitable reservation, it appears that we now have an effective Ebola vaccine and a strategy for using it during an outbreak. Perhaps it will help to stop a new outbreak before it becomes serious.
Whether an Ebola vaccine should be used as a general preventive is another question. Ebola is still an uncommon disease. On the other hand, most Ebola has occurred within a fairly small area, and we now know that it can affect thousands of people in an outbreak. Public health authorities will need to consider whether general vaccination against Ebola is warranted in the affected areas.
* Bye Bye Ebola? (J Bloom, American Council on Science and Health, December 23, 2016.) The question mark in the title is to emphasize an important point: the vaccine was tested in a particular situation, with a particular virus. We do not know how the virus will evolve, or what it will take to keep the vaccine effective.
* Ebola Vaccines Update. (Center for Vaccine Ethics & Policy, January 2, 2017.) A compilation of various things about the vaccine, including the announcement from WHO.
Both of the following are freely available.
* "Comment" accompanying the article: First Ebola virus vaccine to protect human beings? (T W Geisbert, Lancet 389:479, February 4, 2017.)
* The article: Efficacy and effectiveness of an rVSV-vectored vaccine in preventing Ebola virus disease: final results from the Guinea ring vaccination, open-label, cluster-randomised trial (Ebola Ça Suffit!). (A M Henao-Restrepo et al, Lancet 389:505, February 4, 2017.)
The background post for this topic is listed at the top of the post.
Most recent Ebola post: Ebola survivors: are they a risk to others? (June 5, 2016).
There is more about Ebola on my page Biotechnology in the News (BITN) -- Other topics in the section Ebola and Marburg (and Lassa). That section links to related Musings posts, and to good sources of information and news.
January 23, 2017
You may hear that the Internet is an equalizer, increasing access to information. On the other hand, in some countries Internet access is controlled primarily by the government. Is it possible, then, that disenfranchised groups may have less access than favored groups, because of political discrimination?
It's an interesting question -- and a complex one. A recent article addresses it, and it is interesting for that reason. The article is itself complex, and sometimes hard to follow. I suggest you emphasize the nature of the work, and not necessarily try to reach a conclusion from it.
The first figure shows Internet access in democracies and non-democracies. It is based on data for a substantial part of the world's population.
The y-axis is a measure of Internet access. (It's based on the number of active networks; the detail of the scale is not clear.)
You can see that Internet access, as defined here, has been increasing over recent years. Access is higher in democracies than in non-democracies.
This is Figure 2A from the article.
The next figure shows the gap in Internet access between favored and disfavored groups. The focus is on groups that are disfavored on an ethnic basis.
The access gap, too, has been increasing.
However, that is on an absolute basis. The previous figure, above, showed that access is increasing. When the gap is re-calculated on a percentage basis, it is nearly constant over the time period shown.
But there is a gap.
This is Figure 2C from the article.
The message from the figures above is that there is a gap in Internet access between favored and disfavored groups. On a relative basis, the gap may be stable. However, the finding argues against the notion that the Internet is necessarily liberating for disfavored groups.
What does this mean? Is there deliberate bias that limits Internet access for disfavored groups? Or, is this bias simply the result of other known biases, such as that lower economic classes have less access? That's an important question; as you can imagine, it is hard to get at. The authors argue that there is specific bias against disfavored ethnic groups, independent of the other factors. They do this using statistical analyses. I suspect that others will examine such analyses carefully.
For example... The authors' analysis shows that democracies are just as bad in limiting Internet access to excluded groups as are non-democracies. However, democracies usually have fewer people in such excluded groups.
It's hard to know what to make of this article. As noted, some of it is not very clear (and depends on data detailed in the literature, but not clear within this article). Perhaps we can agree that the question is worthwhile, and that the authors deserve credit for tackling it, and laying out what they did.
News stories -- several of them, with various strengths. If you read through some of these, I think you will get a good sense of what the article is about -- probably better than you will get from the article itself.
* Marginalized ethnic groups have less internet access. (Deutsche Welle, November 3, 2016.) A good overview, including an interview with the lead author.
* Study finds politically marginalized groups around the world are being systematically cut off from internet access. (A Sankin, Daily Dot, September 11, 2016.)
* Political Power Could Determine Certain Groups' Internet Access. (E Harfenist, Vocativ, September 10, 2016.)
* Study: Ethnic groups' government influence and internet access go hand in hand. (A Khan, Phys.org, September 9, 2016.)
* Statement about the publication by Science of the article on "Digital Discrimination". (S Baleato, Reflective Journal: Notes on Political Technology since 1997, September 9, 2016.) An interesting statement about the article by one of the authors (on his own web site).
The article: Digital discrimination: Political bias in Internet service provision across ethnic groups. (N B Weidmann et al, Science 353:1151, September 9, 2016.)
Related posts include:
* Immigration and asylum-seeking (December 14, 2016).
* Using a smartphone as your extended brain (November 17, 2015).
January 22, 2017
The red ball in the middle is a hydride ion, H-.
It is surrounded by 12 molecules of H2. The "inner" H atoms of the 12 H2 molecules sit at the 12 vertices of an icosahedron (a 20-faced solid), which serves as a cage for the hydride ion.
That's a proposed structure based on work reported in a recent article.
Overall, it is H25-.
This is from the news article in Physics.
Although the structure shown above is theoretical, there is evidence for the H25- ion.
The scientists made hydride ions from H2 dissolved in liquid helium, very near absolute zero. They measured the masses of what formed, using mass spectrometry. Here is what they found...
The graph shows how much material of each mass they found. The y-axis shows the amount, as "ion count". Under the assumption that everything they found has one H- ion plus some additional H atoms, each species can be described as Hn-. The x-axis shows n, the number of H in the ion.
The first finding is that only species with an odd number of H are found. (You can't see that clearly from this figure. But you can see that there are about five peaks in each n-interval of 10.) That is, each ion has one H-, plus an even number of additional H. It is likely that those n-1 additional H atoms are present as (n-1)/2 H2 molecules.
What you can see clearly from this figure is that a very wide range of species is found. And in particular, there is a peak at n = 25. That's H- with 12 H2.
This is the top part of Figure 2 from the article. I have added labels for the axes.
(The bottom part of the figure shows results for the same experiment but using deuterium, the heavy isotope of H with mass 2; the figure is very similar.)
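The counting rule discussed above -- each cluster is one H- plus a whole number of H2 molecules, so n must be odd and the ion carries (n-1)/2 H2 -- can be checked with a short calculation. The function below is purely illustrative (the name is mine, not from the article).

```python
def h2_count(n):
    """Number of H2 molecules in an Hn- cluster, assuming one H- core
    plus whole H2 molecules; n must therefore be odd."""
    if n % 2 == 0:
        raise ValueError("even n cannot be H- plus whole H2 molecules")
    return (n - 1) // 2

print(h2_count(25))   # the "magic" cluster: 12 H2 around the H- core
print(h2_count(65))   # proposed second shell: 32 H2
print(h2_count(129))  # largest ion detected: 64 H2
```

Running it for n = 25, 65, and 129 reproduces the H2 counts mentioned in this post.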
That is, the measurement of the mass distribution suggests that H25- ion is a particularly stable species. That leads them to propose the structure shown above, which provides a highly symmetrical cage for the hydride ion.
There's more in the figure above. There seem to be smaller points of stability at n = 65 and 89. There aren't exactly peaks at those n values, but there is a noticeable drop off to the next value. The authors suggest that those n values define second and third levels of shells of H2 around the central ion.
There are no obvious points of stability beyond that, but there are certainly lots of bigger ions. The authors suggest this means there are no longer rigid structures, but that the ion is more fluid-like.
The graph provides evidence for ions as big as H129-. That would represent a species with a hydride ion plus 64 H2 molecules.
How stable are these cluster ions? As the authors note, all they know is that they are stable enough to measure in the mass spec. That takes a few microseconds.
The point of all this? It's basic chemistry. H+ ions have been studied a lot, but H- ions are harder to deal with. There has been no agreement on the structure of even fairly simple H- ions. The current work is the first to provide evidence for much larger ions. There is also speculation that such structures might occur in outer space.
News story: New form of hydrogen created. (E Conover, Science News, January 9, 2017.)
* News story from the publisher's news magazine. Freely available: Synopsis: Hydrogen Clusters Go Negative. (M Schirber, Physics, December 27, 2016.)
* The article: Anionic Hydrogen Cluster Ions as a New Form of Condensed Hydrogen. (M Renzler et al, Physical Review Letters 117:273001, December 30, 2016.)
Among many posts involving mass spectrometry:
* Blood vessels from dinosaurs? (April 22, 2016). The mass spec results are not in the post, but the idea of doing mass spec on dinosaurs is intriguing.
* Close-up view of an unwashed human (July 29, 2015).
* Iridium(IX): the highest oxidation state (December 14, 2014).
January 20, 2017
One way to combat global warming is to do something that will cause cooling. An example is to add reflective particles to the atmosphere so that less solar energy reaches the Earth's surface. We know this works: volcanic emissions of sulfate aerosols cool the Earth. Perhaps we could add sulfate aerosols to the atmosphere intentionally, and reduce CO2-induced warming.
It is actually a serious proposal, but one drawback is well understood: sulfates in the atmosphere lead to loss of ozone.
A new article proposes that we might add carbonates to the atmosphere, instead of sulfates. Specifically, calcium carbonate, or limestone. The particles would be effective at reducing solar radiation at the surface, but would not cause ozone loss.
The effect on ozone is well understood. Sulfates are acidic; it is the acidity that promotes ozone loss. CaCO3 is not acidic.
The following graph shows the effect of some aerosol materials, liquid or solid, on atmospheric ozone, according to calculations...
The graph shows the ozone effect (y-axis) vs the cooling effect (x-axis). Calculated results are shown for various aerosol materials.
A good place to start might be with the two curves for sulfur-based aerosols; these are the purple and yellow lines, and are labeled at the right. For both of these, as the cooling effect increases, the loss of ozone becomes greater. The ozone loss is about 10% by the right hand side of the graph.
In contrast, look at the curves at the top. These are for CaCO3. You can see that the effect on ozone is positive... There is an ozone gain of about 5% by the right hand side of the graph.
What's γ? It is a parameter for how reactive the CaCO3 is. The graph shows that the main result holds regardless of the value of γ. So we won't worry about it.
The x-axis goes to 2 watts per square meter. That's about the current amount of CO2-induced warming.
(There are also some results for diamond and aluminum oxide aerosols. As with the S-materials, they cause ozone loss.)
This is Figure 3 from the article. I have added labels at the right to identify some of the lines.
That graph is the basis for suggesting that CaCO3 (limestone) should be considered as a cooling material for the Earth's atmosphere, with the advantage that it would avoid ozone depletion.
Why does CaCO3 lead to some increase in ozone? CaCO3 is actually basic, and thus consumes some of the acidity otherwise in the atmosphere.
The article also discusses other issues relating to the atmospheric additions, including direct effects on heating and the implications of the additions falling back to Earth.
The authors suggest that testing is appropriate. They note that any large scale geoengineering should be tried cautiously. The article here is theoretical, and makes a prediction. Some parameters used in the prediction are uncertain, and it is possible that important factors have been omitted. That's why testing is needed. Aerosol additions are a good candidate for cautious testing. It is easy to test small amounts, and the additions are short-lasting.
* Atmospheric limestone dust injection could halt global warming. (A King, Chemistry World, December 16, 2016.)
* Mitigating the risk of geoengineering -- Aerosols could cool the planet without ozone damage. (L Burrows, Harvard, December 12, 2016.) From the university.
The article, which is freely available: Stratospheric solar geoengineering without ozone loss. (D W Keith et al, PNAS 113:14910, December 27, 2016.)
Posts on geoengineering include:
* Capturing CO2 -- and converting it to stone (July 11, 2016).
* Climate engineering: How should we proceed? (March 4, 2015).
* Geoengineering: a sunscreen for the earth? (February 20, 2010).
A post about natural aerosols, and their effect on climate... SO2 reduces global warming; where does it come from? (April 9, 2013). Links to more.
More calcium carbonate: Underwater "lost city" explained (July 25, 2016).
January 18, 2017
A recent article explores a bit of the history of malaria. It is perhaps most interesting for how it was done.
By comparing genome sequences of related organisms, we can reconstruct their history. Sometimes we are even able to get genome sequences for extinct organisms. The development of genome sequences for Neandertal and Denisovan humans over recent years is an example; it has made a major contribution to our understanding of the history of our species.
We now have some genome sequences for extinct lines of the malaria parasite. The source of the samples is shown in the following figure...
A couple of slides of malaria parasites.
This is Figure 1A from the article.
Those slides date from 1942-4, and are from the collection of a leading Spanish physician, Dr. Ildefonso Canicio. The samples are of special interest, because there has not been any malaria native to Europe in several decades. Attempts, for example, to explain how malaria might have spread to the Americas from Europe have been hampered by lack of knowledge of what the European malaria actually was.
The scientists were able to recover parasite DNA from the slides. They found both of the common malaria species, Plasmodium vivax and P falciparum, and were able to relate them to other known strains.
We'll skip most of the detail; there are some complicated genealogy charts in the article. But we note that one of the P vivax strains the scientists found here is almost identical to one now found in the Americas. That supports the model that malaria was taken from Europe to the Americas, presumably sometime post-Columbus. (To be cautious... That is not a proof, only the simplest interpretation of the data at hand. Further, it is possible, even likely, that there may have been multiple introductions of malaria into the Americas.)
* Missing Link in Malaria Evolution Discovered in Historical Specimens -- A family's collection of antique microscope slides became a trove of genetic information about the eradicated European malaria pathogen. (B A Henry, The Scientist, December 1, 2016.)
* Light shed on what European malaria was like, 50 years after its eradication. (Universitat Pompeu Fabra, September 27, 2016.) From the lead institution.
The article, which is freely available: Mitochondrial DNA from the eradicated European Plasmodium vivax and P. falciparum from 70-year-old slides from the Ebro Delta in Spain. (P Gelabert et al, PNAS 113:11495, October 11, 2016.) The first paragraph of the Materials and Methods section tells the history of the samples. The news stories tell more of that history. At the end of the Discussion, the authors issue a plea for more slides of European malaria. We also note that the Introduction is a nice overview of what is known of the history of malaria.
A recent post about malaria: Can chickens prevent malaria? (August 12, 2016).
More on malaria is on my page Biotechnology in the News (BITN) -- Other topics under Malaria. It includes a listing of related Musings posts, including posts about mosquitoes.
There is more about genomes and sequencing on my page Biotechnology in the News (BITN) - DNA and the genome. It includes an extensive list of related Musings posts.
January 17, 2017
PCl3 + Cl2? They can react to form PCl5. You may have heard about that reaction in your beginning chemistry course.
NF3 + F2? N and F are in the same families as P and Cl (respectively). And F2 is a stronger oxidizing agent than Cl2. However, there is no reaction. Why not? Well, as a second-row element, N normally forms only three bonds. By using its lone pair of electrons it can form a 4th bond, as in NH4+. The P atom can use its empty d orbitals to form the additional bonds; N has no such d orbitals available.
Since I've brought it up, you must wonder if there is a catch. Is there something wrong with that argument? Maybe, according to a new article.
A pair of theoreticians have considered the reaction. They have calculated what might happen. There are actually several possibilities, and the authors explore them. The following graph shows their results. Caution, it's a complicated graph; we'll look at small pieces of it.
Start with the key, at the lower right; it shows the possible products they considered. The first one listed is the original reactants, as if there was no reaction. The second one is the simple product NF5 that we hinted at above. And then there are some other possibilities, involving ionic species.
The scientists calculated the energy of the various possible products. The energy is shown on the y-axis. It's shown in a rather complex way, but what matters for now is to see which is the lowest energy state. Lower energy means greater stability.
What's novel -- and important -- is the x-axis. Pressure (in GPa, or gigapascals). They explore what happens to this reaction as the pressure increases. A quick glance shows that a lot happens!
At low pressure (left side), the black line is the lowest. That's for the reactants. That this is low reinforces what we said at the start: no reaction.
As the pressure increases, the black line shoots up. Other lines do various things. Eventually, the green and orange lines show the lowest energy. Look at the key; those are the last two entries. Those forms share a couple of ions. One of the ions they share is NF6-: a N with bonds to 6 F, forming an ion with charge -1.
This is Figure 2 from the article.
That is, based on their calculations, the scientists predict that NF3 + F2 will react -- at high pressure. The expected product isn't the simple NF5 we might expect, but something more complex.
The following figure shows the calculated structure for one of those products -- the simpler one...
You can see that the compound consists of an array of the two ions, NF4+ and NF6-.
And you can see that the NF6- ion has a central N with bonds to 6 F. That octahedral structure is what one would expect for a species of that form. It's just that we didn't think N would do that.
This is part of Figure 1 from the article.
There is N with 6 bonds. But it hasn't been made. (The NF4+ ion accompanying it is a known species.) What's above is a prediction. They say, do the reaction under pressure, and it may work. How much pressure? The gray bar in the first figure above goes out to about 40 GPa. Get above that, and it begins to look promising for making the NF6- ion. 40 GPa is about 400,000 atmospheres. That sounds like a lot, but it is well within the range of pressures that chemists use regularly, in diamond anvil cells. The scientists make a prediction, based on their theoretical models, and they say it could be easily tested.
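The unit conversion just quoted is easy to verify. Using the standard definition 1 atm = 101325 Pa, 40 GPa works out to just under 400,000 atmospheres:

```python
# Convert the predicted pressure from gigapascals to atmospheres.
# 1 atm = 101325 Pa (standard atmosphere), so 1 GPa ~ 9869 atm.
GPA_IN_ATM = 1e9 / 101325

p_gpa = 40
p_atm = p_gpa * GPA_IN_ATM
print(round(p_atm))  # about 394,769 atm -- roughly 400,000 atmospheres
```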
We await the test.
News story: Going against the grain -- nitrogen turns out to be hypersociable. (Phys.org, December 1, 2016.)
The article, which is freely available: Hexacoordinated nitrogen(V) stabilized by high pressure. (D Kurzydłowski & P Zaleski-Ejgierd, Scientific Reports 6:36049, November 3, 2016.)
A little more about the first graph... The energy scale on the y-axis is in electron volts (eV). More specifically, eV per molecule, as shown at the top. To put it in units more familiar to chemistry people, 1 eV per molecule is about 100 kJ/mole. That's about the energy of common covalent bonds.
What makes the graph a little confusing is that everything is shown relative to one of the possible products. The line for that product is zero at all pressures, because it is set that way. For our purposes, that's ok; all we really want is to see which structure is the lowest energy at any given P, and that's easy enough. What you can't tell from the graph is the energy of the reaction at any P.
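The "1 eV per molecule is about 100 kJ/mole" statement above follows directly from the charge of the electron and Avogadro's number; a quick check:

```python
# 1 eV per molecule, expressed per mole of molecules.
E_CHARGE = 1.602176634e-19   # joules per eV (elementary charge)
AVOGADRO = 6.02214076e23     # molecules per mole

kj_per_mol = E_CHARGE * AVOGADRO / 1000  # kJ/mol for 1 eV per molecule
print(round(kj_per_mol, 1))  # about 96.5 kJ/mol -- roughly 100, as stated
```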
The post immediately below has a title similar to this post, and perhaps a similar answer. But remember, the C post is about a measured structure for a chemical they made. The N post, here, is about a prediction. (Recall... we noted in the C post that two theoretical models gave different predictions.) How many atoms can one carbon atom bond to? (January 14, 2017).
More high pressure chemistry...
* What's the connection: rotten eggs and high-temperature superconductivity? (June 8, 2015).
* Novel forms of sodium chloride, such as NaCl3 (January 17, 2014). Includes a discussion of pressure units.
** Both of those involve pressures greater than needed to test the new prediction.
January 14, 2017
You've answered the title question, and wonder why we would bring it up? Look at the following structure, which was reported in a recent article.
The atoms are all C and H.
The C atoms are big and gray. The H atoms are small and white. For example, you'll see a regular methyl group, -CH3, at the left.
This is Figure 1 from the article.
The C at the top of the pyramid is bonded to six other C: five in the base of the pyramid, and the methyl C at the top.
So what is this thing? Is this a real chemical?
The second part first... It's real. The structure shown above comes from an X-ray analysis of a chemical the authors made.
What is it? That's where this gets complicated. A short answer is that it is the hexamethylbenzene di-cation. We'll walk through that in a moment, but first note that this is not a neutral molecule. It is an ion. It's an unusual ion, a double cation (2+ charge) of a hydrocarbon, a type of molecule that's usually not very good at making ions. And it is shown here free of its counter-ion, the anion it is paired with. That's fine; it simplifies the picture, but remember that this unusual structure exists within the context of a more complex structure.
Hexamethylbenzene. A common benzene ring, with a methyl group at each position on the ring. Nothing unusual about that. Now imagine removing two electrons from that starting chemical. That gives the di-cation we noted above. It's not easy to do, but it is easy enough to imagine. The resulting di-cation is stable -- stable enough to determine its structure, which is shown above.
You can see that one of the C of the original benzene ring has "popped out", to form the apex of a pentagonal pyramid. Up there, it is now bonded to six other C.
Let's count electrons. Focus on how the C atoms in the ring bond to each other; we will assume that the other bonding is normal. In an ordinary benzene ring, there are 3 single bonds and 3 double bonds between the C atoms. That's 9 bonds, or 18 electrons. The di-cation has lost 2 of those; it has only 16 electrons bonding the ring carbons. There are 5 single bonds in the new ring, using 10 electrons. That leaves 6 electrons -- for those 5 bonds between the apex C and the base.
Those 5 bonds (apex to base) are not ordinary C-C bonds. In fact, each is, in a sense, only 3/5 of a bond -- where a "bond" has 2 electrons. The apex C actually has the equivalent of 4 bonds... It has 5 of those 3/5 bonds to the base; that totals 3 bonds. Add in the regular bond to the methyl group above, and you get 4 bonds total. Each of the C on the base has three ordinary bonds, plus a 3/5 bond to the apex. Each of the ring C, then, has a charge of +2/5. Five C each with +2/5 charge... that's 2+ total charge. That is, the 2+ charge of this di-cation is spread around the base of the ring.
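The electron bookkeeping above can be written out as a short calculation (purely a restatement of the counting, not anything from the article itself):

```python
# Benzene ring: 3 single + 3 double C-C bonds = 9 bond pairs = 18 electrons.
ring_electrons = (3 * 1 + 3 * 2) * 2

dication_electrons = ring_electrons - 2   # two electrons removed -> 2+ charge
base_ring_electrons = 5 * 2               # 5 ordinary single bonds in the new 5-ring
apex_electrons = dication_electrons - base_ring_electrons

bond_order_apex = apex_electrons / 2 / 5  # electron pairs per apex-to-base link

print(dication_electrons)  # 16 electrons bonding the ring carbons
print(apex_electrons)      # 6 electrons shared over the 5 apex-to-base links
print(bond_order_apex)     # 0.6 -- each link is "3/5 of a bond"
```

And the charge check works out the same way: five base carbons, each +2/5, gives the 2+ total.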
Would one have predicted this structure? That's an interesting question. The scientists ran some theoretical calculations on the structure of the cation under the relevant conditions. They used two different, well-respected models for calculating the structure. And they got different answers from the two models. One predicts the unusual structure they found, but one does not. If nothing else, that's a reminder of the limitations of computational chemistry. I'm sure people will be looking at this in detail, to see what they can learn about how the models are predicting the structure.
Scientists have caught C doing unusual things before. However, this does appear to be the first reported case of it forming bonds to six other C atoms. We also note that the "3/5" bonds may remind some of the bonds found in compounds of boron with hydrogen.
Making the di-cation was a piece of rather exotic chemistry in itself. The scientists didn't actually start with hexamethylbenzene, but rather an isomer of it. If you're intrigued by Dewar benzene and magic acid, look at how the di-cation was made. But the result doesn't depend on that.
* Carbon can exceed four-bond limit -- Chemists confirm links to six other atoms in unusual molecule. (L Hamers, Science News, January 4, 2017.)
* Carbon seen bonding with six other atoms for the first time. (R Boyle, New Scientist, January 11, 2017.)
The article: Crystal Structure Determination of the Pentagonal-Pyramidal Hexamethylbenzene Dication C6(CH3)6^2+. (M Malischewski & K Seppelt, Angewandte Chemie International Edition 56:368, January 2, 2017.)
Added January 17, 2017. The post immediately above raises a similar issue for nitrogen. Caution, it is entirely a theoretical article -- for now. How many atoms can one nitrogen atom bond to? (January 17, 2017).
Another example of unusual chemical bonds, not involving the usual two electrons: An unusual hydrogen bond, involving boron (March 26, 2016). Also involves benzene.
There are many posts about carbon. Among them...
* The mass of an electron (March 23, 2014). The C5+ ion -- a C atom that has lost 5 electrons.
* Image of a carbon atom that isn't there (August 17, 2008). A C atom that has lost everything.
This post is listed on my page Introduction to Organic and Biochemistry -- Internet resources in the section on Aromatic compounds.
January 13, 2017
CO (carbon monoxide) is deadly. It binds tightly to the hemoglobin in your blood, preventing it from carrying oxygen. Current treatments are not fully satisfactory.
The ideal treatment? Maybe something that binds CO even more tightly, with no ill effect. And that's what a new article claims.
Let's jump to the bottom line. At least, the mouse bottom line. Take some mice, give them a lethal dose of CO. Treat some with the new antidote, and measure the survival of treated and untreated mice. Here are the results...
The experimental plan is shown at the top.
CO was given for the first 4.5 minutes (3% CO in the air), followed by "clean" air. Immediately after the CO, the drug was given -- during the time shown as "infusion".
Survival of the mice was followed for 40 days, for the drug-treated group and two control groups.
One group survived well; two did not. The group with high survival is the drug-treated group. Survival was 7 of 8 mice even at 40 days. Both control groups (one treated with PBS buffer and one with a control protein, albumin) showed poor survival, with 16 of 17 mice dying by day 30.
This is Figure 5D from the article.
The results are impressive. Other measurements, such as heart rate, blood pressure and lactic acid, support the survival observations.
What is this drug? Well, it says: Ngb-H64Q-CCC. Ngb stands for neuroglobin, another natural globin protein. It's been modified a little (genetically); that's what the rest of the name is about.
How does it work? As with other globin proteins, it has a heme that binds CO. But this one binds CO about 500 times more tightly than does hemoglobin (Hb). Biochemical experiments show that it effectively "pulls" CO off of the Hb -- which is the idea. What happens to the CO? Apparently, the Ngb is rapidly excreted, via the kidneys, with its CO attached.
The treatment with the modified Ngb is more effective in rapidly removing CO than currently available treatments. It is probably also at least as easy to administer "in the field".
The test shown above, even along with the other work in the article, leaves questions. (The article has no long term follow-up about the cognitive ability of the surviving mice.) But the authors think that the results are sufficiently encouraging that the drug should be studied further.
* A Possible Antidote for Carbon Monoxide Poisoning. (J Bloom, American Council on Science and Health, December 8, 2016.)
* Team designs molecule that could be first antidote for carbon monoxide poisoning. (Medical Xpress, December 7, 2016.)
The article: Five-coordinate H64Q neuroglobin as a ligand-trap antidote for carbon monoxide poisoning. (I Azarov et al, Science Translational Medicine 8:368ra173, December 7, 2016.)
Previous post about carbon monoxide: Cooperation: a key to separating gases? (March 28, 2014). Links to more.
Other posts about hemoglobin include...
* Pop goes the hemozoin: the bubble test for malaria (January 24, 2014).
* Mammoth hemoglobin (February 1, 2011).
January 10, 2017
As people age, they lose the ability to focus at short distances. For example, they will tend to hold a newspaper further from their eyes as they age. It's rather reproducible; one can estimate a person's age from how they hold the newspaper (assuming that their vision is otherwise normal).
Bonobos don't read newspapers, but they do close-up work: grooming others. Scientists have now measured the distance a bonobo maintains from its grooming target -- as a function of age.
Here are some results...
The graph shows the focal distance (y-axis) vs age (x-axis).
The red points are for bonobos doing grooming, as observed in the new work.
The blue lines show the expected range for humans.
This is Figure 1B from the article.
It's rather clear: the results for bonobos are essentially the same as would be expected for humans.
The simple interpretation is that both human and bonobo inherited this vision-aging characteristic from a common ancestor.
For one of the bonobos, the authors happen to have a video from six years earlier. Analysis of that video, as best they can, suggests that this individual bonobo has developed farsightedness consistent with the curve shown above.
The authors note anecdotal evidence for similar farsightedness in older chimpanzees. They also note evidence for farsightedness in older rhesus monkeys, though these animals have a quite different scale for lifespan.
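For the human side of this relationship, there is a classic textbook approximation. The sketch below uses Hofstetter's formula for the average accommodative amplitude, A = 18.5 - 0.3 x age (in diopters), and converts it to a near point (closest focus) of about 100/A centimeters. This is a human population average, offered only as an illustration; it is not from the bonobo article.

```python
# Rough human near point (closest focus) vs age, using Hofstetter's
# formula for average accommodative amplitude: A = 18.5 - 0.3*age
# (diopters). A population-average illustration only.

def near_point_cm(age_years):
    amplitude = 18.5 - 0.3 * age_years   # diopters of accommodation
    if amplitude <= 0:
        return float("inf")              # no accommodation left
    return 100.0 / amplitude             # diopters -> centimeters

for age in (20, 35, 45, 55):
    print(age, round(near_point_cm(age), 1))
```

In this approximation, the near point roughly doubles between ages 20 and 45, and more than doubles again by 55 -- consistent with the everyday observation that reading material drifts outward with age.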
* Just like humans, old bonobos suffer from long-sightedness. (M Andrei, ZME Science, November 9, 2016.)
* Aging bonobos in the wild could use reading glasses too. (Phys.org, November 7, 2016.)
Video. (0.4 minutes; no meaningful sound.) Two examples of bonobo grooming events. The bonobo at the far right is 45 years old. The one in the middle is 27. You can see the difference in the distances they maintain for grooming. A still from this video is in the article as Figure 1A; I found it hard to sort out what was in that figure. The video is clear.
The article: Long-sightedness in old wild bonobos during grooming. (H Ryu et al, Current Biology 26:R1131, November 7, 2016.)
Another post comparing bonobos and humans: The metabolic rate of humans vs the great apes: some data (August 1, 2016).
My page for Biotechnology in the News (BITN) -- Other topics includes a section on Aging. It includes a list of related Musings posts.
Added February 19, 2017. More apes... Do apes have a "theory of mind"? (February 19, 2017).
January 9, 2017
In the previous post (immediately below) we discussed why there is interest in reducing the content of polyunsaturated fatty acids in soybean oil, and presented one approach for doing so. We now present a second approach, also published recently. I suggest you read that post for the background, but otherwise this post substantially stands on its own.
The following figure diagrams the apparatus used in the new work to reduce the content of polyunsaturated fatty acids.
That's right, a high-voltage plasma chamber.
What's not shown there is the gas content of the chamber. It is 5% H2, 95% N2. The H2 is the "active ingredient"; it is heavily diluted in N2 so that the mixture is nonflammable and safe to handle.
It is a novel way to partially hydrogenate the oil.
This is Figure 1a from the article.
Here are some results...
The graph shows the percent of each of several fatty acids (y-axis) over treatment time in the plasma chamber (x-axis).
The fatty acids listed here are the five major ones in soybean oil; their structures are shown in the previous post.
Two of the curves decline over time. These are the curves for the polyunsaturated fatty acids (18:2 and 18:3).
The amounts of the other fatty acids increase a little over time. Most of those are the fatty acids with fewer double bonds, as expected.
At the very bottom of the graph is a curve mysteriously labeled "a", which also increases. We'll discuss it below.
This is Figure 2 from the article. I have added labels for three of the lines; in each case, my label is just above the line.
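One way to summarize the figure's trend in a single number is an unsaturation index: the average number of double bonds per fatty-acid chain, weighted by composition. The sketch below uses rough, illustrative compositions for soybean oil before and after partial hydrogenation; they are not the article's measured values.

```python
# Average number of double bonds per fatty-acid chain, weighted by
# composition. Compositions are rough illustrations of soybean oil
# before and after partial hydrogenation -- not the article's data.

def avg_double_bonds(composition):
    """composition maps 'C:D' shorthand (carbons:double bonds) to percent."""
    total = sum(composition.values())
    weighted = sum(int(fa.split(":")[1]) * pct
                   for fa, pct in composition.items())
    return weighted / total

before = {"16:0": 11, "18:0": 4, "18:1": 23, "18:2": 54, "18:3": 8}  # typical-ish
after  = {"16:0": 11, "18:0": 8, "18:1": 45, "18:2": 32, "18:3": 4}  # hypothetical

print(round(avg_double_bonds(before), 2))   # 1.55
print(round(avg_double_bonds(after), 2))    # 1.21
```

The index drops as the polyunsaturated fractions (18:2, 18:3) decline, even though the total fatty-acid content is unchanged.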
Analysis shows no detectable levels of the trans fatty acids, which are commonly made during the traditional hydrogenation process. The low temperature of the new process probably explains why no trans fats are made.
The general conclusion, then, is that the primary goal is being met: the plasma treatment reduces the polyunsaturated fatty acid content, with no production of trans fats. There is a corresponding increase in the less unsaturated fatty acids. It's promising.
What about "a"? The scientists don't know what it is. They have considered some possibilities, but so far "a" doesn't match any of them. It's probably fairly routine chemistry to figure out what "a" is. Until then, it is a question mark, one that may or may not be important.
Let's make a few points comparing the work of this and the previous post.
* One method develops a new plant, whereas the other modifies the oil once collected. The latter is more flexible, and corresponds to the older practice of partial hydrogenation of the oil.
* The older practice of partial hydrogenation was abandoned (for food use) when it was realized that one of its products (the trans fats) was undesirable. In that context, the mystery component "a" above is of potential concern. (Of course, a genetic modification of the plant could produce an undesirable component.)
* At face value, the genetic manipulation is more effective in reducing the content of polyunsaturated fatty acids. However, the plasma method is new, and subject to further development.
* We have little information on the economics of either process at this point. The authors note that their plasma process uses less energy than the traditional hydrogenation process, but that would be only one part of the analysis.
For now, it is perhaps best just to note that the two approaches are tools, and are under development.
News story: Plasma-zapping process could yield trans fat-free soybean oil product. (J Merzdorf, Purdue University, December 1, 2016.) From the lead institution.
The article: High-voltage Atmospheric Cold Plasma (HVACP) hydrogenation of soybean oil without trans-fatty acids. (X V Yepez & K M Keener, Innovative Food Science and Emerging Technologies 38:169, December 2016.)
Related post, immediately below: Improving soybean oil by gene editing (January 8, 2017).
Another application of plasma: Using a plasma to kill norovirus (June 5, 2015).
For more about lipids, see the section of my page Organic/Biochemistry Internet resources on Lipids. It includes a list of related Musings posts. It also includes some links to items about trans fats.
January 8, 2017
Fatty acids differ in their chain length and the number of double bonds. (They may also differ in the position of the double bonds, but that is not an issue here.) These features affect various properties -- physical, chemical, and biological.
Plant "oils" are typically high in fats with double bonds -- called unsaturated fatty acids. In fact, some have a high content of fatty acids with more than one double bond -- the polyunsaturated fatty acids.
The polyunsaturated fatty acids are more easily oxidized, becoming rancid. Related to that, they are not suitable for baking. In the old days, scientists developed ways to reduce the content of polyunsaturated fatty acids, by partial hydrogenation: converting some of the double bonds to single bonds by reaction with hydrogen. It worked, but, as a side effect, also produced a novel type of fatty acid, called trans fatty acids (or trans fats). These have a double bond, but it is oriented the wrong way. It turns out that trans fatty acids are bad for people, and they have been substantially eliminated from foods.
That leaves a question: How can we use highly polyunsaturated plant oils, such as soybean oil, for applications where that feature is undesirable? Two recent articles address this, with very different approaches. We'll present one of them in this post, and one in the following post.
The first approach is a variation of an old standby: genetics. Develop a soybean plant that makes a lower level of the polyunsaturated fats. What makes the new work novel is that the scientists do it with the relatively new tool of gene editing. They use TALENs, not the newer CRISPR, but the basic idea is the same. Gene editing allows targeted knock-out of a specific gene. In this work, the scientists knock out three genes, sequentially, to achieve the desired strain.
The following figure shows the relevant fatty acids, and gives an idea of the results...
The first fatty acid shown, palmitic acid, has a 16-C chain, with no double bonds (C=C). In shorthand, it is 16:0; the two numbers give the number of C atoms and the number of double bonds. The other fatty acids all have 18 C, with 0-3 double bonds; thus they are 18:0 through 18:3.
The columns at the right show the composition of the oil from two soybean strains. One is the original (wild type; WT) strain. The other is a strain the scientists made in which the enzyme FAD2 has been removed by gene editing (fad2-1). FAD2 is the major enzyme that introduces the second double bond, as shown in the left side with the fatty acid structures.
The new strain has a greatly reduced content of linoleic acid (18:2); there is a corresponding increased content of oleic acid (18:1). This is what you would expect.
FAD stands for fatty acid desaturase.
Soybean contains two major genes for FAD2. Making the strain deficient in this enzyme required knocking out both of them. (Actually, soy contains a third gene for this step, but it has little effect on oil production.)
This is Figure 1a from the article.
That's the idea. That shows how gene editing can reduce the content of polyunsaturated fatty acids in a common plant oil.
Interestingly, all that was done prior to the current work. The same team of scientists now goes further... Using an additional round of gene editing, they remove the enzyme FAD3, shown as the enzyme between linoleic and linolenic acids (18:2 and 18:3). This reduces the level of polyunsaturated fatty acids even further, to about half the level shown above for the fad2-1 strain. (I chose to use the figure above because it is so clear, and it fully illustrates the idea, though not the final result.)
The work shows that it is practical to edit multiple genes to develop the intended characteristics. In this case, the early work edited two genes, and the current work edits a third gene.
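The logic of these knockouts can be captured in a toy model of the desaturation pathway (18:0 -> 18:1 -> 18:2 -> 18:3), in which each enzyme converts some fraction of its substrate pool to the next product. The conversion fractions below are invented for illustration; the real pathway is far more regulated than this.

```python
# Toy model of the soybean desaturation pathway: each enzyme converts a
# fraction of its substrate pool to the next product. Setting an
# enzyme's rate to 0 (a knockout) shows where the pathway "piles up".
# The rates are made up for illustration, not taken from the article.

def desaturation_profile(rates, start=100.0):
    """rates: conversion fraction for each step 18:0->18:1->18:2->18:3.
    Returns the final pools for (18:0, 18:1, 18:2, 18:3)."""
    pools = [start, 0.0, 0.0, 0.0]
    for i, r in enumerate(rates):
        converted = pools[i] * r
        pools[i] -= converted
        pools[i + 1] += converted
    return pools

wild_type = desaturation_profile([0.9, 0.8, 0.1])   # all enzymes active
fad2_ko   = desaturation_profile([0.9, 0.0, 0.1])   # FAD2 knocked out

print([round(p, 1) for p in wild_type])  # 18:2 dominates, as in soybean
print([round(p, 1) for p in fad2_ko])    # 18:1 piles up; no 18:2 or 18:3
```

Knocking out the middle enzyme (FAD2) both removes the polyunsaturated products and raises the pool just upstream -- the same pattern seen in the fad2-1 strain's oil composition above.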
We'll discuss another approach to reducing the content of polyunsaturated fatty acids in the next post, and then comment on the two articles together.
News story: Gene editing used to produce soybean oil with less trans fats. (Genetic Literacy Project, October 20, 2016.) This seems to consist entirely of selected excerpts from the article. That's ok, but it is misleadingly labeled, claiming more than that. The Genetic Literacy Project should do better than this. (To be clear, the content is ok, but the integrity is not.)
The article, which is freely available: Direct stacking of sequence-specific nuclease-induced mutations to produce high oleic and low linolenic soybean oil. (Z L Demorest et al, BMC Plant Biology 16:225, October 13, 2016.)
Related post, immediately above: Improving soybean oil by using high voltage plasma (January 9, 2017).
A previous post about work using TALENs: Polled cattle -- by gene editing (July 8, 2016).
A previous post about editing multiple genes. This used CRISPR, which is commonly regarded as easier to use. But it also targeted closely related genes, so a single guide could target many genes. In fact, editing the entire set of genes required only two guides. How to do 62 things at once -- and take a step towards making a pig that is better suited as an organ donor for humans (January 17, 2016).
A post that includes a complete list of posts on gene editing (including using TALENs): CRISPR: an overview (February 15, 2015).
For more about lipids, see the section of my page Organic/Biochemistry Internet resources on Lipids. It includes a list of related Musings posts. It also includes some links to items about trans fats.
Previous post about soybeans: Effect of food crops on the environment (November 20, 2015).
January 6, 2017
Scientists figure out ways to make things better, such as improving agricultural productivity. Do the improvements actually get implemented out in "the real world"?
A recent article shows how one university facility, in cooperation with the government, made a special effort to develop and implement improvements at the local level. The work here is about small farmers in rural China. The agricultural scientists lived in the community, and worked closely with the farmers to determine the local issues and help guide improvements.
The following graph summarizes the results...
The graph shows the agricultural productivity for three groups over several years.
The productivity is shown (y-axis) as the total yield of corn and wheat, in Mg/ha; that is megagrams (or tonnes) per hectare.
For each year, there are three bars. The left (red) bar is for the experimental station, the university "lab" where innovations were developed. The other two bars are for the local farmers. The middle (green) bar is for the best ("elite") farmers; the right (yellow) bar is the county average.
For each year, the productivity of the experimental station is set to 100%. That is not shown, but the bars for the local farmers are labeled with their percentage relative to the experimental station for that year.
For the first year shown (2008-9), the local farmers got about 65% of the productivity of the experimental station.
After that baseline year, the experimental station began a program to work with local farmers. You can see that both the absolute production (the bar height) and the percentage relative to the station were higher in the following years.
This is Figure 3 from the article.
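The figure's bookkeeping is simple: each group's yield is expressed as a percentage of the experimental station's yield for the same season. The Mg/ha values below are invented for illustration (chosen to roughly echo the percentages discussed above), not read from the article.

```python
# Yield-gap bookkeeping as used in the figure: each group's yield is
# expressed as a percentage of the experimental station's yield for
# that season. The Mg/ha values are hypothetical, for illustration.

def pct_of_station(yield_mg_ha, station_mg_ha):
    return 100.0 * yield_mg_ha / station_mg_ha

seasons = {
    # season: (station, elite farmers, county average), in Mg/ha
    "2008-9":  (14.0, 10.5, 9.1),
    "2011-12": (15.0, 13.2, 11.7),
}
for season, (station, elite, county) in seasons.items():
    print(season,
          round(pct_of_station(elite, station)),    # elite farmers, %
          round(pct_of_station(county, station)))   # county average, %
```

Note that normalizing each season to that season's station yield removes some of the year-to-year weather variation -- but only if the station and the farms experience the same weather, which is part of why the single baseline year matters.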
Agricultural productivity varies from year to year, for many reasons, including weather. You can see some fluctuations in the graph above. Unfortunately, we have only one year used here as baseline. If that just happened to be a poor year, it could lead to a bias in the conclusions here. We have no way to address that, and will just accept the story at face value from the given data.
The article also includes data beyond the yield. Issues include fertilizer and water use, and labor requirements.
The work here can also be examined as part of the "big picture". Are the short term gains reported here due to changes that are wise in the long run? Is tweaking production of the current crops even the right question? Those are good questions. However, the goal of the current work was precisely to make short term improvements, so don't change the rules during the game. What such criticisms do is to remind us that the food supply issue is a big problem, with many aspects.
The nature of the results should not be a surprise. Nevertheless, it is good to see active effort to implement improvements, and data to support the effort. The article notes that such collaborative efforts are now active in 71 provinces. It is also good to see the broader issues being raised.
News story: Transferring innovation from universities to farms. (SciDev.Net, September 14, 2016.) Includes some discussion of the limitations of the work.
* News story accompanying the article: Food security: A collaboration worth its weight in grain. (L H Samberg, Nature 537:624, September 29, 2016.) The author of this item is quoted in the news story listed above, with some emphasis on her being skeptical of the significance of the work. Her own story here is more balanced and more thorough. For those who want to delve into this further, this story could be a good place to start.
* The article: Closing yield gaps in China by empowering smallholder farmers. (W Zhang et al, Nature 537:671, September 29, 2016.)
* Doggy bags and the food waste problem (January 4, 2017). That's the post immediately below.
* Can growing rice help keep you from becoming WEIRD? (July 22, 2014). More about agriculture in China.
* What is the proper use of crop land? (August 23, 2013).
January 4, 2017
It is generally recognized that there is a food shortage, which will only get worse as the population continues to increase. Interestingly, a lot of food gets wasted -- at various stages, from losses on the farm to the home -- or restaurant. One solution to the food shortage is to waste less food.
A new article addresses one part of the food waste problem, with some intriguing observations.
We like it when a restaurant serves generous portions. But those generous portions may encourage both over-eating and waste. What do you do with the food left after a generous meal? One option is to take it home; after all, you have paid for it. Most restaurants will provide a container; it's often called a doggy bag. (It sounds better to say that we'll take it home for the dog. But it doesn't matter who eats it; we all share substantially the same food supply.)
The article explores attitudes toward the use of doggy bags in two European countries. Twenty consumers in each country were asked a series of questions. The article is quite informal. Everything is presented as narrative, with no tables summarizing the results. But the major findings were interesting.
So what did the scientists find? On the one hand, most people were opposed to wasting food, and they liked the concept of the doggy bag. On the other hand, those same people thought it was not socially acceptable to ask for their left-over food to be packaged for them -- especially in higher class restaurants. Interestingly, many thought that doggy bags were not appropriate for the food of their country, though it might be for others.
The authors summarize the findings as indicating that people's personal values favor the use of doggy bags, but that they perceive it as against social norms. A paradox, as the authors note. And that is what makes the article interesting. It's interesting sociology.
Neither country involved in the study has a tradition of using doggy bags. One is starting a program to reduce food waste. The results here suggest that there will be barriers to reducing the wastage of food that has been served.
Perhaps it should be restaurant policy to offer customers doggy bags. (The authors suggest this.) Now, should that be mandated by law?
The study here is small. It is reasonable to think of it as raising some issues, rather than providing final answers.
Editorial, which is freely available: Researchers serve up suggestions to reduce food waste. A change in cultural and social factors -- such as overcoming a distaste for doggy bags -- will be required to shift people's behaviour. (Nature 540:8, December 1, 2016.) It was posted at the Nature News site the previous day.
The article: Understanding the antecedents of consumers' attitudes towards doggy bags in restaurants: Concern about food waste, culture, norms and emotions. (L Sirieix et al, Journal of Retailing and Consumer Services 34:153, January 2017.)
More about the food supply:
* Added January 6, 2017. Implementing improved agriculture (January 6, 2017). Immediately above.
* What is the proper use of crop land? (August 23, 2013).
More about sharing with the dog:
* Sharing microbes within the family: kids and dogs (May 14, 2013).
* It's a dog-eat-starch world (April 23, 2013).
More about bags: How to fold a bag (May 13, 2011).
January 3, 2017
A new article, in the first issue of a new journal, reports interesting progress in the development of an artificial hand.
A Cornell University graduate student, and co-author of the article, shakes hands with a robot.
This is trimmed from the figure in the Cornell news story. It may be the same as Figure 4B of the article. The full Fig 4 shows several pictures of the hand in use.
Their figure legend: "Doctoral student Shuo Li shakes hands with an optoelectronically innervated prosthesis."
Hands are much more complicated than feet. Hands have a sophisticated ability to hold (grip) things, and that is coupled with a complex sensory ability. Designing artificial hands that can mimic those abilities has been a continuing challenge.
The key step in the new work is to make use of optical systems in detecting sensory signals.
Here's the idea...
A diagram of a finger.
Note the three lights (LEDs, red) at the lower right, and the three detectors (photodiodes, yellow) just to their left.
Look carefully, and you will see there is an inverted-U structure from the front LED to the front photodiode. That is a waveguide for the light. In fact, there is such a waveguide between each light and its detector at the other end.
The design of the waveguides is such that these fingers can rapidly and sensitively detect small changes in the shape of the finger. Such changes would reflect, for example, touching something, causing deformation of the finger.
The waveguide at the finger bottom (or "back", in the figure) has a special role: it is on the palm side, in position to sense touch at the finger tip.
Ignore "plane A" for our purposes. The full figure shows a cross-section of the finger at this plane.
This is part of Figure 1E from the article.
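The sensing principle reduces to a power-loss measurement: pressing or bending a waveguide scatters light out of it, so the photodiode sees less power, and the drop (usually expressed in decibels) maps to the deformation. The calibration constant below is invented for illustration; the article characterizes its own waveguides empirically.

```python
import math

# Waveguide touch sensing, reduced to its core idea: deformation causes
# optical power loss, measured in dB relative to an undeformed baseline.
# The linear dB-per-mm calibration is invented for illustration.

def loss_db(p_out, p_in):
    """Optical power loss in dB (positive means light was lost)."""
    return 10.0 * math.log10(p_in / p_out)

def estimate_deformation_mm(p_out, p_in, db_per_mm=2.0):
    """Invert a (hypothetical) linear loss-vs-deformation calibration."""
    return loss_db(p_out, p_in) / db_per_mm

baseline = 1.0                                            # power, a.u.
print(round(estimate_deformation_mm(1.0, baseline), 2))   # 0.0: no touch
print(round(estimate_deformation_mm(0.5, baseline), 2))   # ~1.51 mm press
```

Because photodiode readout is fast and the dB scale is monotonic in deformation, a hand full of such waveguides can report touch continuously -- which is the responsiveness the authors emphasize.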
There are several movie files posted with the article, as Supplementary Materials. Most involve technical specifications, and they are not well-labeled. But do check out Movie S7, Object recognition (0.5 minute; no sound). It shows the robotic arm distinguishing three tomatoes and choosing the ripest one, by softness.
Measuring changes in the waveguide -- in the light path -- is a way to measure changes in the shape of the hand. That effectively makes it a way to measure touch. Recent developments in fabrication technology have made the waveguides practical. The authors argue that the optical system, with its rapid response, is an improved way to measure touch.
The hand is shown above as part of a robot. However, it could also be part of a human. It is being developed with both goals in mind: robotic hands, and prosthetic hands for humans.
* A robotic hand with a human's delicate sense of touch. (Kurzweil, December 16, 2016.)
* Engineers get under robot's skin to heighten senses. (T Fleischman, Cornell Chronicle, December 8, 2016.) From the university.
The article: Optoelectronically innervated soft prosthetic hand via stretchable optical waveguides. (H Zhao et al, Science Robotics 1:eaai7529, December 6, 2016.)
The first post on prosthetic arms and hands: Prosthetic arms (September 16, 2009).
A post that focused on the issue of touch: eSkin: Developing better sense of touch for artificial skin (November 29, 2010).
There is more about replacement body parts on my page Biotechnology in the News (BITN) for Cloning and stem cells. It includes an extensive list of related Musings posts.
There is more to a good tomato than just softness... The chemistry of a tasty tomato (June 18, 2012).
Added February 3, 2017. Next post about robots... A robot that can feed itself (February 3, 2017).
Older items are on the archive pages, starting with 2016 (September-December).
Top of page
E-mail announcement of the new posts each week -- information and sign-up: e-mail announcements.
Contact information Site home page
Last update: February 25, 2017