When patients see doctors late in the day, the rate of orders for cancer screenings declines significantly compared to that of earlier appointments, according to a new study.
Researchers point to “decision fatigue”—which results from the cumulative burden of earlier screening discussions—and to doctors falling behind in their busy schedules as the probable causes.
“Our findings suggest that future interventions targeting improvements in cancer screening might focus on time of day as an important factor in influencing behaviors,” says Esther Hsiang, a student and researcher at the Wharton School who works with the Penn Medicine Nudge Unit.
“We believe that the downward trend of ordering may be the result of ‘decision fatigue,’ where people may be less inclined to consider a new decision after they’ve been making them all day. It may also stem from overloaded clinicians getting behind as the day progresses.”
Among eligible patients, primary care doctors ordered breast cancer screening more often for patients seen in the 8 a.m. hour (64 percent) as compared to those with appointments at 5 p.m. (48 percent). Similarly, doctors ordered colon cancer screening tests more frequently for 8 a.m. patients (37 percent) compared to those coming in later in the day (23 percent).
Examining data from 2014 through 2016 across 33 Pennsylvania and New Jersey primary care practices, the researchers found that these time-of-day differences in ordering rates had far-reaching effects.
When looking at the entire population eligible for screenings at these practices (roughly 19,000 for breast cancer and 33,000 for colorectal cancer), the researchers tracked whether the patients completed a screening within a year of their appointment. The data showed that the downward trend associated with the timing of the appointments carried over.
Breast cancer screening—which included mammograms—stood at a 33 percent one-year completion rate for the entire eligible population who had their appointment in the 8 a.m. hour. But for those who had clinic visits at 5 p.m. or later, just 18 percent completed screenings.
For colorectal cancer, 28 percent of the patients with appointments in the 8 a.m. hour completed screenings such as colonoscopies, sigmoidoscopies, and fecal occult blood tests. That number dropped to 18 percent for patients who saw the doctor at 5 p.m. or later.
The one-year completion results cast long shadows. Doctors may simply defer discussions about screening to future appointments, but that deferral assumes the decision will actually be made at the next visit. Additionally, these types of cancer screenings require coordination with a different department and another visit on the patient’s part, which provides several opportunities for further lapses in screening.
Researchers also observed that although order rates fell as the day progressed, there was a brief spike in screening orders for breast and colon cancers when patients saw their clinician around noon.
For example, breast cancer screening orders dropped to 48.7 percent at 11 a.m. but increased to 56.2 percent around noon, before gradually falling off again. This trend held true for one-year completion rates, as well. The study team suggests that this may be due to lunch breaks that give clinicians an opportunity to catch up and start fresh.
A 2018 study that examined flu vaccination rates by the time of day patients saw a clinician noted the same downward trend in outcomes. In that study, a “nudge” built into the system prompted doctors to accept or decline an influenza vaccine order, which helped increase vaccinations by nearly 20 percent compared with patients whose doctors didn’t get a nudge.
“Our new study adds to the growing evidence that time of day and decision fatigue impact patient care,” says Mitesh Patel, director of the Penn Medicine Nudge Unit and an assistant professor of medicine.
“In past work, we’ve found that nudges in the electronic health record can be used to address these types of gaps in care, which we suspect will be the case here. Future research could evaluate how nudges may be implemented in order to improve cancer screening.”
Additional researchers from Penn contributed to the study, which appears in JAMA Network Open.
Getting more exercise or spending more time sedentary—even for just one day—can change a teenager’s sleep, a new study shows.
In a one-week, micro-longitudinal study, researchers found that when teens got more physical activity than they usually did, they fell asleep earlier, slept longer, and slept better that night.
Specifically, for every extra hour of moderate-to-vigorous physical activity, the teens fell asleep 18 minutes earlier, slept 10 minutes longer, and had about one percent greater sleep maintenance efficiency that night.
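Because the reported associations are linear per hour of activity, the implied change for any given activity bump can be sketched directly. This is a hypothetical helper for illustration, not code or a model from the study:

```python
# Sketch of the reported per-day associations: each extra hour of
# moderate-to-vigorous physical activity (MVPA) was linked to falling asleep
# ~18 minutes earlier, sleeping ~10 minutes longer, and ~1 percentage point
# higher sleep maintenance efficiency that night.

def predicted_sleep_change(extra_mvpa_hours):
    """Return (onset shift in minutes, duration change in minutes,
    efficiency change in percentage points) implied by the study's
    reported linear associations."""
    return (
        -18.0 * extra_mvpa_hours,  # sleep onset: negative means earlier
        10.0 * extra_mvpa_hours,   # sleep duration, minutes
        1.0 * extra_mvpa_hours,    # sleep maintenance efficiency, points
    )

onset, duration, efficiency = predicted_sleep_change(1.5)
print(onset, duration, efficiency)  # -27.0 15.0 1.5
```

These are population-level associations from one week of data, so the helper describes an average tendency, not a guarantee for any individual night.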
“Adolescence is a critical period to obtain adequate sleep, as sleep can affect cognitive and classroom performance, stress, and eating behaviors,” says Lindsay Master, a data scientist at Penn State. “Our research suggests that encouraging adolescents to spend more time exercising during the day may help their sleep health later that night.”
In contrast, the researchers found that more sedentary time during the day was associated with worse sleep health. When participants were sedentary for more minutes during the day, they fell asleep and woke up later but slept for a shorter amount of time overall.
The findings, which appear in Scientific Reports, help illuminate the complex relationship between physical activity and sleep, says Orfeu Buxton, professor of biobehavioral health.
“You can think of these relationships between physical activity and sleep almost like a teeter totter,” Buxton says. “When you’re getting more steps, essentially, your sleep begins earlier, expands in duration, and is more efficient. Whereas if you’re spending more time sedentary, it’s like sitting on your sleep health: Sleep length and quality goes down.”
Tossing and turning
While previous research suggests that adolescents need 8 to 10 hours of sleep a night, recent estimates suggest that as many as 73 percent of adolescents are getting less than eight.
Previous research also shows that people who are generally more physically active tend to sleep longer and have better sleep quality. But less is known about whether day-to-day changes in physical activity and sedentary behavior affect sleep length and quality, the researchers say.
For the new study, researchers used data from 417 participants in the Fragile Families and Child Well-being study, a national cohort from 20 United States cities. When they were 15, participants wore accelerometers on their wrists and hips to measure sleep and physical activity for one week.
“One of the strengths of this study was using the devices to get precise measurements about sleep and activity instead of asking participants about their own behavior, which can sometimes be skewed,” Master says.
“The hip device measured activity during the day, and the wrist device measured what time the participants fell asleep and woke up, and also how efficiently they slept, which means how often they were sleeping versus tossing and turning.”
In addition to finding that physical activity affects sleep later that night, the researchers found connections between sleep and activity the following day. When participants slept longer and woke up later, they engaged in less moderate-to-vigorous physical activity and less sedentary behavior the next day.
“This finding might be related to a lack of time and opportunity the following day,” Master says. “We can’t know for sure, but it’s possible that if you’re sleeping later into the day, you won’t have as much time to spend exercising or even being sedentary.”
Becoming your best self
Improving health is something that can, and should, take place over time, Buxton says.
“Becoming our best selves means being more like our best selves more often. We were able to show that the beneficial effects of exercise and sleep go together, and that health risk behaviors like sedentary time affect sleep that same night. So if we can encourage people to engage in more physical activity and better sleep health behaviors on a more regular basis, it could improve their health over time.”
The researchers will continue to follow up with the participants to see how health and health risk behaviors continue to interact, and how sleep health influences thriving in early adulthood.
Additional researchers are from Penn State, Stony Brook University, the University of South Florida, and Harvard Medical School. The Eunice Kennedy Shriver National Institute of Child Health and Human Development of the National Institutes of Health and several private foundations funded the work.
Futurity, by Abby Simmons, Carnegie Mellon University
New research sheds light on how specific circuits in the brain can simultaneously make decisions and learn from their outcomes.
Consider eating brunch at your favorite restaurant: How do you know whether the eggs benedict will be a better choice than the waffles? Usually, you accumulate evidence over time. At first, you randomly pick one, and then you try the other on your next visit. Perhaps you find one varies in quality every time you try it or that you consistently prefer the taste of the same dish over the other.
“For decades neuroscience and cognitive science have tried to understand how we make decisions and how we learn from their consequences, but they have done so independently,” says Timothy Verstynen, an associate professor in the psychology department at Carnegie Mellon University and the university’s Neuroscience Institute.
“We have learned a lot about the what—the brain systems—and the how—the computational algorithms—of adaptive decision-making, but we lacked a bridge that linked the two together.”
Building the bridge
In a recent paper in PLOS Computational Biology, Verstynen and colleagues attempted to build such a theoretical bridge using a series of increasingly complex computational models.
“We started by modeling microscopic synapses of cells in an area of the brain called the basal ganglia,” says Kyle Dunovan, a former postdoctoral fellow. “We built models of small sets of neurons that made decisions between two actions. By modeling the dopamine response to whether an action had a good or bad outcome, we were able to see how dopamine shapes these synapses over time as they learn.”
The researchers then took this information and built a much larger model of the cortical and subcortical brain networks that regulate both decision-making and learning, called the cortico-basal ganglia-thalamic loops. Using large simulations of different brain areas, they altered a few of the critical synapses to reflect different degrees of learning and had the network make a series of selections between two targets. They took this behavior from the simulated brain and analyzed it as if it were a human participant, using a cognitive model that captures the process of how the brain accumulates information during decision-making.
This nested set of models, from individual synapses, to whole brain networks, and finally to behavior, allowed the researchers to identify two novel ways that learning impacts the way that the brain makes decisions.
When the network was configured so that competition increased between a pathway that promotes behavior and a pathway that suppresses behavior (the direct and indirect pathways, respectively), the rate at which the simulated brain accumulated information slowed. In contrast, when the network was configured to increase the sensitivity of the suppressive pathway alone, the artificial brain relied on less overall information before making a decision.
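The two mechanisms map onto the two knobs of a standard evidence-accumulation (drift-diffusion) model: the accumulation rate corresponds to the drift, and the amount of information required corresponds to the decision boundary. The sketch below is an illustrative textbook accumulator, not the authors’ model; all names and parameter values are hypothetical:

```python
import random

def simulate_decision(drift, boundary, noise=1.0, dt=0.001, seed=0,
                      max_steps=100_000):
    """Accumulate noisy evidence until it crosses +boundary or -boundary.

    Returns (choice, decision_time): choice is +1 or -1 on a crossing,
    0 if no boundary is reached within max_steps.
    """
    rng = random.Random(seed)
    evidence, t = 0.0, 0.0
    for _ in range(max_steps):
        # Euler step of a drift-diffusion process: deterministic drift
        # plus Gaussian noise scaled by sqrt(dt).
        evidence += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
        if abs(evidence) >= boundary:
            return (1 if evidence > 0 else -1), t
    return 0, t

# Lowering the drift mimics increased direct/indirect pathway competition
# (slower accumulation); lowering the boundary mimics a more sensitive
# suppressive pathway (less evidence needed before responding).
print(simulate_decision(drift=0.5, boundary=1.0, seed=7))
print(simulate_decision(drift=2.0, boundary=1.0, seed=7))
```

Analyzing the simulated choices and response times with such a model is what let the researchers treat the network’s behavior “as if it were a human participant.”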
“This was a somewhat unexpected discovery,” Dunovan says. “Going into this project, we expected that there should be one mechanism in the circuit that relates to how fast the agent accrues information. But the biological models revealed this second path that changes the criterion for how much information the system needs before making a response.
“This showed us how the same circuits in the brain can impact our decisions in different ways. It also suggested to us that these circuits might rely on different forms of feedback to alter the different parts of the decision process.”
Not so separate, after all
To this end, Verstynen and Dunovan published a second paper in the Journal of Neuroscience that analyzes human behavior in the same way that they analyzed the artificial brain’s behavior in the first paper. They tasked research participants with either pressing a button at a specific time or withholding their response when researchers presented them with a cue called a “stop-signal.”
The researchers found that humans rely on different feedback to change different parts of the decision process. For example, hitting a button just a few milliseconds too fast or too slow led to a quick correction of the rate of information accumulation in the brain. In contrast, errors in selecting the correct response, like pressing the button when presented with the stop-signal, changed the level of information that the brain needed to make a decision.
“Despite testing a number of possible alternative explanations of the behavior, the results kept coming back to the same two processes that simulated brain networks identified,” Verstynen says.
“For a long time, we not only looked at decision-making and learning as two separate problems, but we also studied the neural circuits and cognitive algorithms separately. What we have shown is that the architecture of the circuits in the brain gives us specific clues as to what the algorithms of behavior should look like and, in this case, that clue is telling us not to think of decision-making without thinking of learning as well.”
Additional researchers from Universitat de les Illes Balears, the University of South Carolina, and the University of Pittsburgh contributed to the work.
Years of homeschooling don’t appear to influence the general health of children, according to a new report.
The report puts forth evidence that the amount of time a student spends in homeschool is “weakly or not at all related to multiple aspects of youth physical health.”
“Although there may be differences in the health of elementary through high school homeschoolers, those differences don’t seem to change with additional time spent in homeschool,” says Laura Kabiri, a kinesiology lecturer at Rice University and corresponding author of the report in Health Promotion International. “In other words, staying in homeschool longer isn’t related to increased health benefits or deficits.”
“The relationship between their health and the time they spend in homeschool seems to be irrelevant.”
Earlier this year Kabiri and her team reported that homeschooled students who depended on maintaining physical fitness through outside activities were often falling short.
The flip side presented in the new report should come as good news to parents and students.
The results come from studies of more than 140 children in kindergarten through grade 5, whom researchers tested against statistically normal data for children of their age and gender. The analysis accounted for prior published research showing that homeschooled children have less upper-body and abdominal muscle strength and more abdominal fat than public school students. Additional studies have also shown that homeschooling benefits sleep patterns, overall body composition, and diet.
However, to the researchers’ surprise, increased time in homeschool did not appear to affect these differences in homeschooler health either way.
“Body composition can relate to sleep as well as diet,” Kabiri says. “And as far as muscular health goes, these kids are still active. We’re not saying there’s not an upfront benefit or detriment to their health, but after an initial gain or loss, there aren’t additional gains or losses over time if you’re going to homeschool your children for one year or their entire careers. The relationship between their health and the time they spend in homeschool seems to be irrelevant.”
Additional coauthors are from Texas Woman’s University and the University of Texas Health Science Center at San Antonio, where the researchers conducted their work. Support for the research came, in part, from the Texas Physical Therapy Foundation.
Researchers have charted the distribution of phytoplankton in the world’s oceans for the first time and investigated the environmental factors that explain it.
They conclude that plankton diversity is only partially consistent with previous theories of biodiversity in the seas between the equator and the poles.
With some 10,000 to 20,000 different species in the world’s oceans, the diversity of phytoplankton (“phyto” from the Greek for plant) species is extremely rich. These phytoplankton form a key element of ocean ecosystems and life on this planet, producing more oxygen than all the world’s rainforests combined. They also serve as the fundamental basis of the marine food chain.
Although researchers have identified many species of phytoplankton, the question of when and where they occur is largely unexplored; in light of the current biodiversity crisis, this represents a serious knowledge gap.
Now, in Science Advances, a team of researchers models the spatial and temporal distribution of over 530 different species of phytoplankton. As the basis for their distribution charts, they used around 700,000 water samples from across the world’s oceans.
The global distribution of phytoplankton in January. Dark areas indicate a high biodiversity, light areas a low one. The number of species was not determined for the white areas. (Credit: Righetti et al. Science Advances, 2019)
The work reveals that tropical waters hold the richest diversity of species at all times of the year. Phytoplankton diversity is particularly high in the seas of the Indonesian-Australian archipelago, in parts of the Indian Ocean, and in the equatorial Pacific Ocean. In the subtropics, biodiversity drops off markedly beyond 30 degrees latitude North and South, reaching its lowest values around a latitude of 55 degrees. Diversity then picks up again slightly towards the poles.
“We were surprised to find that on a monthly basis, the polar seas present greater diversity than the mid-latitudes,” says Damiano Righetti, the lead author of the study. He is a PhD student with ETH Zurich professor Nicolas Gruber and senior scientist Meike Vogt. “It’s remarkable because global species distribution and diversity are normally closely linked to environmental temperature trends.”
Species diversity typically decreases continuously towards the poles, where it is normally lowest. Temperature could plausibly be the direct driver of this decline. According to metabolic theory, higher temperatures accelerate metabolism, mutations of genetic material, and speciation. This explains why the tropics are richer in species than the mid-latitudes and the polar regions.
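Metabolic theory’s temperature dependence is usually written as a Boltzmann–Arrhenius factor. This is the standard formulation from the ecology literature, not an equation given in the study:

```latex
B(T) \propto e^{-E / kT}
```

Here $B$ is a biological rate (metabolism, mutation, or speciation rate), $E$ is an activation energy, $k$ is Boltzmann’s constant, and $T$ is absolute temperature. Because the rate rises steeply with $T$, the theory predicts faster speciation, and hence richer diversity, in warm tropical waters than in cold polar seas.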
The study reveals that phytoplankton do not always behave in line with this theory. “Evidently, there are factors other than temperature affecting plankton diversity,” Righetti says. Two of these might be the strong currents and turbulence, which are prevalent in the mid-latitudes, but less so in polar or tropical seas. “The seasonal fluctuations and ocean turbulence in these latitudes might suppress the development of biodiversity, even though the temperatures here are higher than in the polar oceans,” he says.
Righetti and colleagues also found that phytoplankton diversity in the mid-latitudes, unlike in the tropics, varies greatly from season to season. Righetti explains that although the number of species in the mid-latitudes is constant over time, the species composition changes over the course of the year: “In contrast to tropical seas, the diversity here is dynamic throughout the year, but hardly any research has been done on this.”
Working with ETH adjunct professor Niklaus Zimmermann and colleagues from the Swiss Federal Institute for Forest, Snow, and Landscape Research WSL, Righetti developed a computer model to map the diversity distribution of phytoplankton. They fed this model with observational data and used it to project where each species occurs with a temporal resolution of one month.
The observational data came from water samples collected during research trips as well as from normal shipping routes. Phytoplankton specialists subsequently studied the samples under the microscope to determine which species they contained. Over time, the research cruises amassed huge amounts of observational data on several thousand different species. Righetti and colleagues then gathered the available data into a database and analyzed it.
It must be noted, however, that sampling has not been evenly distributed across the oceans and, in many regions, has not spanned all seasons. Thanks to British researchers, the North Atlantic is very well represented, but very little data exist for large parts of the other oceans. The researchers compensated for this distortion in their models.
The distribution maps are the first of their kind for phytoplankton, and the underlying models can also help predict how phytoplankton diversity could develop under changing temperature conditions. Warmer waters as a result of climate change could alter the distribution of phytoplankton. “In turn, this could have a serious impact on the entire marine food chain,” Righetti says.
Minuscule glassy beads formed from debris of the atomic bomb blast that devastated Hiroshima nearly 75 years ago litter nearby beaches, according to a new study.
The beads, which no one seems to have noticed until now, apparently formed in the atomic cloud from melted or vaporized concrete, marble, stainless steel, and rubber, among other materials of daily life in Hiroshima. The researchers estimate that a square kilometer (0.4 square mile) of beach sand collected from a depth of about 4 inches would contain about 2,200 to 3,100 tons of the particles.
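The reported tonnage can be turned into an intuitive concentration with simple arithmetic. This is a back-of-envelope sketch; the study’s unit is assumed here to be metric tons:

```python
# Back-of-envelope check of the reported estimate: 2,200 to 3,100 tons of
# glassy particles per square kilometer of beach sand, sampled to a depth
# of about 4 inches. (Assumes metric tons; the unit isn't specified here.)

AREA_M2 = 1_000_000            # 1 square kilometer in square meters
DEPTH_M = 4 * 0.0254           # 4 inches in meters (~0.10 m)
volume_m3 = AREA_M2 * DEPTH_M  # sampled sand volume, cubic meters

low_kg, high_kg = 2_200 * 1_000, 3_100 * 1_000  # metric tons -> kilograms

# Implied mass of particles per cubic meter of sampled sand
low_conc = low_kg / volume_m3
high_conc = high_kg / volume_m3
print(round(low_conc, 1), round(high_conc, 1))  # 21.7 30.5
```

That works out to roughly 22 to 31 kilograms of bomb-derived particles in every cubic meter of beach sand, which conveys how dense the fallout material is in these deposits.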
“This was the worst man-made event ever, by far,” says Mario Wannier, a retired geologist from Berkeley Lab who led the study. “In the surprise of finding these particles, the big question for me was, you have a city, and a minute later you have no city… Where is the city? Where is the material? It is a trove to have discovered these particles. It is an incredible story.”
Examples of the broad range of particles that researchers collected from beach sands in Japan’s Motoujina Peninsula. They range from clear glass (A-E) and glass-covered debris (I-J) to rubber-like (L) and iron (M) particles. White scale bar is 1 mm; red scale bar is 0.5 mm; yellow scale bar is 0.2 mm. (Credit: Berkeley Lab)
The fission bomb an American bomber dropped on Hiroshima on August 6, 1945, instantly killed more than 70,000 people, while an equal number died afterward from radiation effects. The bomb and resulting firestorms mostly leveled an area measuring more than 4 square miles and destroyed or damaged an estimated 90 percent of the structures in the city. The US dropped a second bomb on the city of Nagasaki three days later, bringing a tragic end to World War II.
Wannier first noticed the irregularly shaped glass beads in 2015, while combing through beach sand his colleague Marc de Urreiztieta collected from Japan’s Motoujina Peninsula, about four miles from Hiroshima. Wannier studies sand around the world to monitor the health of local marine environments.
He thought the unusual particles, measuring 0.5 to 1 millimeter across, resembled glassy beads resulting from meteor impacts, such as the one that killed off the dinosaurs 66 million years ago. So, he teamed up with mineralogist Rudy Wenk to analyze the beads using electron microscopy and X-ray microdiffraction at the Berkeley Lab’s Advanced Light Source.
Wenk found a wide variety of chemical compositions in the samples, including aluminum, silicon, and calcium; microscopic globules of chromium-rich iron; and microscopic branching of crystalline structures. Other beads were composed mostly of carbon and oxygen.
“Some of these look similar to what we have from meteorite impacts, but the composition is quite different,” says Wenk, a professor of the graduate school in the earth and planetary science department at UC Berkeley and a Berkeley Lab affiliate. “There were quite unusual shapes. There was some pure iron and steel. Some of these had the composition of building materials.”
The experiments and related analyses determined that the particles had formed in extreme conditions, with temperatures exceeding 3,300 degrees Fahrenheit (1,800 Celsius). The composition and formation led the researchers to conclude that they were formed in the bomb blast.
“It was quite fascinating to look at all of these materials,” Wenk says, noting that the beads may be radioactive. “What we hope is to get other people interested in looking at this in more detail and in looking for examples around the Nagasaki A-bomb site.”
Futurity, by the National University of Singapore
The leaves of a variety of medicinal plants can stop the growth of breast, cervical, colon, leukemia, liver, ovarian, and uterine cancer, a new study shows.
Researchers found the effects in leaves of the bandicoot berry (Leea indica), South African leaf (Vernonia amygdalina), and simpleleaf chastetree (Vitex trifolia). Three other medicinal plants also demonstrated anti-cancer properties.
“Medicinal plants have been used for the treatment of diverse ailments since ancient times, but their anti-cancer properties have not been well studied,” says Koh Hwee Ling, an associate professor of pharmacy at the National University of Singapore.
“Our findings provide new scientific evidence for the use of traditional herbs for cancer treatment, and pave the way for the development of new therapeutic agents.”
The findings, which appear in the Journal of Ethnopharmacology, highlight the importance of conserving and better understanding these indigenous plants as resources for drug discovery.
While modern medicine is the primary form of healthcare in Southeast Asian countries such as Singapore and Malaysia, there remains a tradition of using local medicinal plants for health promotion and the treatment of diseases.
“Given the scarcity of land due to rapid urbanization and the dearth of records on herbal knowledge, there is a pressing need to document and investigate how indigenous medicinal plants were used before the knowledge is lost,” says Siew Yin Yin, who did the research as part of her doctoral thesis under Koh’s supervision.
For the study, conducted between 2010 and 2013, researchers documented the different types of medicinal plants that grow in Singapore and the region. They found that the top three reasons for using medicinal plants were general health promotion, detoxification, and boosting the immune system. Among the medicinal plants documented, people also used some to treat cancer.
The researchers reviewed the pharmacological properties of the tropical plants reportedly used for cancer, and selected seven promising plant species for further investigation: bandicoot berry, sabah snake grass, fool’s curry leaf, seven star needle, black face general, South African leaf, and simpleleaf chastetree.
Out of the leaf extracts of the seven plants researchers tested, they found the sabah snake grass (bottom right) had weak effects or no effect against almost all the cell lines they tested. (Credit: NUS)
The experiments involved preparing extracts of fresh, healthy and mature leaves of the seven plants, and testing the extracts with the cell lines of seven different types of cancers—breast, cervical, colon, leukemia, liver, ovarian, and uterine. The team opted to examine leaves as they can regrow without harming the plants—making it a sustainable choice, unlike using the bark or roots.
Among the seven plants, the researchers found the leaf extracts of the bandicoot berry, South African leaf, and simpleleaf chastetree promising against all seven types of cancers. The leaf extracts of the seven star needle performed well against cervical, colon, liver, ovarian, and uterine cancer cells. The leaf extracts of two other plants—fool’s curry leaf and black face general—demonstrated efficacy against some cancer cell lines, too.
“What we did not expect is that the leaf extract of the sabah snake grass was not very effective in inhibiting growth of cancer cells. In our earlier study, this plant was frequently reported to be used by cancer patients in the region. One possibility could be that it may be helping cancer patients in other ways, rather than killing the cancer cells directly,” Koh says.
While the results of this study provide a scientific basis for the traditional practice of using tropical medicinal plants to fight cancer, the researchers stress that people should not self-medicate without consulting qualified practitioners.
“More research is required to identify the active components responsible for the anti-cancer effects. Meanwhile, conservation of these medicinal plants is highly crucial so that there is a rich and sustainable source that could be tapped [into] for the discovery of anti-cancer drugs,” Koh says.
Misguided nutrition advice can be more than ineffective: It can be dangerous.
Over 20,000 people end up in the emergency room because of supplements each year, and a quarter of those cases are due to weight-loss related supplements.
So how can you steer clear of bad or unhelpful information? Michelle Cardel, who has a doctorate in nutrition science, shares three signs that a fit-fluencer’s advice might be less than scientific.
Audio: “What a registered dietitian thinks of your latest diet — and what really works” (95 seconds, SoundCloud)
Red flag #1: Huge claims
They make blanket statements such as “sugar causes cancer,” “carbs are bad for you,” or “no one should eat dairy” without peer-reviewed references to back them up.
“There is no one right way to eat,” says Cardel, a nutrition scientist and registered dietitian at the University of Florida. “Your dietary pattern should be determined by your needs, likes, dislikes, and medical history. A one-size-fits-all approach doesn’t work. Just because somebody is fit and young and attractive doesn’t mean that if you do what they do, you’re going to look like them.”
It’s not just influencers: News stories mess up nutritional science, too. A 2014 study in the American Journal of Public Health found that news and media sites published lower-quality weight-loss information than medical, government, or university sites and blogs. And in a 2016 study, Cardel and colleagues at the University of Florida’s College of Medicine’s department of health outcomes and policy found the accuracy of Spanish-language weight-loss content to be even lower than English-language content.
Red flag #2: Shilling a supplement, detox, or tea
“Not everyone selling you something is a quack, but all quacks are selling you something,” Cardel says. “When people are struggling with their weight, they’re in a really vulnerable place. It makes me sad to see people putting their trust in people who are in the business of offering a quick fix rather than evidence-based medicine.”
Red flag #3: Missing credentials
Registered dietitians and registered dietitian nutritionists hold a bachelor’s, master’s, or doctoral science degree from an accredited university, have completed at least 1,200 hours of supervised practice, and have passed a board-certifying exam. They also take continuing-education credits to maintain their credentials. Any other “nutritionist” isn’t held to these requirements, Cardel says.
In general, says Cardel, “fit-fluencers” who offer tips on healthy eating might not have the scientific background required to give sound advice.
Even the title “nutritionist” means different things in different states, with some requiring no qualifications at all to dispense guidance.
“Do I believe most of these people are doing this to help others? Absolutely,” Cardel says. “But do good intentions make someone exempt from peddling misinformation? Unfortunately, no.”
Some in the fitness community dispute the value of those qualifications: CrossFit recently sparked a social-media uproar by tweeting an article arguing that licensing did more for dietitians’ paychecks than patients’ health. Cardel rejects that notion.
“We go to lawyers for legal advice, doctors for medical advice, and physical therapists for physical therapy advice,” she says. “Training and credibility are important.”
Ashfall from ancient volcanic explosions is the likely source of a strange mineral deposit near the landing site for NASA’s next Mars rover, a new study finds.
The research could help scientists assemble a timeline of volcanic activity and environmental conditions on early Mars.
“This is one of the most tangible pieces of evidence yet for the idea that explosive volcanism was more common on early Mars,” says Christopher Kremer, a graduate student at Brown University who led the work.
“Understanding how important explosive volcanism was on early Mars is ultimately important for understanding the water budget in Martian magma, groundwater abundance, and the thickness of the atmosphere.”
The Mars 2020 rover is headed for Jezero crater. In addition to a stunning river delta, Jezero also has exposures of the potential ashfall deposit. The rover could confirm these new findings, which will be one of the rover’s “top 10 discoveries,” says Brown professor Jack Mustard. (Credit: NASA)
Volcanic explosions happen when gases like water vapor dissolve in underground magma. When the pressure of that dissolved gas is more than the rock above can hold, it explodes, sending a fiery cloud of ash and lava into the air. Scientists think that these kinds of eruptions should have happened very early in Martian history, when there was more water available to get mixed with magma.
As the planet dried out, the volcanic explosions would have died down and given way to more effusive volcanism—a gentler oozing of lava onto the surface. There’s plenty of evidence of an effusive phase on the Martian surface, but evidence of the early explosive phase hasn’t been easy to spot with orbital instruments, Kremer says.
Ashfall origin theory
This new study looked at a deposit located in a region called Nili Fossae that’s long been of interest to scientists. The deposit is rich in the mineral olivine, which is common in planetary interiors. That suggests that the deposit is derived from deep underground, but it hasn’t been clear how the material got to the surface.
Some researchers have suggested that it’s yet another example of an effusive lava flow. Others have suggested that a large asteroid impact dredged up the material—the impact that formed the giant Isidis Basin in which the deposit sits.
For this study, Kremer and colleagues used high-resolution images from NASA’s Mars Reconnaissance Orbiter to look at the geology of the deposit in fine detail.
“This work departed methodologically from what other folks have done by looking at the physical shape of the terrains that are composed of this bedrock,” Kremer says.
“What’s the geometry, thickness, and orientation of the layers that make it up? We found that the explosive volcanism and ashfall explanation ticks all the right boxes, while all of the alternative ideas for what this deposit might be disagree in several important respects with what we observe from orbit.”
The work showed the deposit extends across the surface evenly in long continuous layers that drape evenly across hills, valleys, craters, and other features. That even distribution, Kremer says, is much more consistent with ashfall than lava flow. Researchers would expect a lava flow to pool in low-lying areas and leave thin or nonexistent traces in highlands.
And the stratigraphic relationships in the area rule out an origin associated with the Isidis impact, the researchers say. They showed that the deposit sits on top of features that are known to have come after the Isidis event, suggesting that the deposit itself came after as well.
The ashfall explanation also helps to account for the deposit’s unusual mineral signatures, the researchers say. The olivine shows signs of widespread alteration through contact with water—far more alteration than other olivine deposits on Mars. That makes sense if this were ashfall, which is porous and therefore susceptible to alteration by small amounts of water, the researchers say.
Red Planet rover
All told, the researchers say, these orbital data strongly lean toward an ashfall origin. But the team won’t have to rely only on orbital data for long. NASA’s Mars 2020 rover is scheduled to land in Jezero crater, which sits within the olivine deposit. And there are exposures of the deposit within the crater. The olivine-rich unit will almost certainly be one of the rover’s exploration targets, and it might have the final say on what this deposit is.
“What’s exciting is that we’ll see very soon if I’m right or wrong,” Kremer says. “So that’s a little nerve-wracking, but if it’s not an ashfall, it’s probably going to be something much stranger. That’s just as fun, if not more so.”
If it does turn out to be ashfall, Kremer says, it validates the methodology used in this study as a means of looking at potential ashfall deposits elsewhere on Mars.
But whatever the rover finds will be important in understanding the evolution of the Red Planet.
“One of Mars 2020’s top 10 discoveries is going to be figuring out what this olivine-bearing unit is,” says Jack Mustard, Kremer’s advisor and professor in the department of earth, environmental, and planetary sciences. “That’s something people will be writing and talking about for a long time.”
A new discovery could improve the nutritional value and crop yields of corn, report researchers.
The research could benefit millions of people who rely on corn for nutrition in South America, Africa, and elsewhere.
The world’s corn supply depends on improvements in yield and quality, which in turn rely on the accumulation of starch and proteins in the grain’s endosperm, the study says. Endosperm, an important source of human nutrition that contains starch, oils, and proteins, is the seed tissue that surrounds the embryo.
“We found a novel approach to discover new regulators in the synthesis of starch and protein, which determine grain yield and quality,” says study lead author Zhiyong Zhang, a postdoctoral fellow at the Waksman Institute of Microbiology at Rutgers University-New Brunswick.
The scientists discovered how corn starch and protein are simultaneously synthesized in the endosperm, which could allow them to find a good balance between nutrient quality and yield, Zhang says. Corn domestication and modern breeding have gradually increased starch content but decreased protein accumulation in endosperms.
The researchers looked at key proteins in corn kernels known as zeins, which are devoid of lysine, an essential amino acid (a building block of proteins), resulting in poor nutrient quality. During corn breeding over decades, people increased lysine content by cultivating corn with lower levels of zeins. Still, today’s lysine levels are too low to meet the needs of the world’s rapidly growing population.
So molecular geneticists and corn breeders are trying to improve corn’s nutrient quality by dramatically reducing zein levels, focusing on blocking zeins and the so-called transcription factors that control their production. Transcription is the process by which the information in a gene’s DNA is copied into RNA, which is then used to make proteins that play key roles in the body’s tissues, organs, structure, and functions.
The research team found that two transcription factors play key roles in regulating the synthesis of starch and protein, paving the way for further research to fully understand the balance between nutrient quality and yield at a molecular level.
The study appears in the journal Proceedings of the National Academy of Sciences. Additional researchers from Rutgers, the Shanghai Institutes for Biological Sciences, the Institute of Plant Physiology & Ecology, and the Chinese Academy of Sciences contributed to the study.