Physics Buzz includes articles about physics research, videos, posters, comic books, and more. We're here to share the exciting world of physics with the wider community. Physics Central represents one part of The American Physical Society's (APS Physics) outreach efforts. APS Physics represents over 50,000 physicists.
Ramin Golestanian, a researcher at the Max Planck Institute for Dynamics and Self-Organization, occupies himself with the big questions: How is the thing we call life possible? In particular, he wonders, how can complex subcellular structures so critical for life as we know it form from a soup of enzymes?
“This is basically the Lego-like ingredients [of life],” he says, referring to the fundamental nature of these structures.
As these particles travel, chemical gradients propel certain particles to aggregate together. Over time, these different species of particles can form a singular conglomeration. Image credit: Jaime Agudo-Canalejo and Ramin Golestanian.
He and his University of Oxford postdoctoral scholar, Jaime Agudo-Canalejo, attempt to at least partially answer this question in a recent Physical Review paper. Their study takes the theoretical framework originally developed to describe the movement of so-called Janus particles and extends it to catalytically active particles and enzymes more generally.
“[Janus particles are] primarily a way of synthesizing a colloidal system that can integrate two different tendencies in one particle,” Golestanian explains. In other words, Janus particles—like the Roman god for which they are named—exhibit two-faced behavior, reacting differently when approached from various angles. More concretely, a spherical Janus particle has distinct physical and chemical properties on each hemisphere. In the case of a catalytically active particle, one end might facilitate chemical reactions in the surrounding environment while the other is inert.
Particles in this experiment act similarly to an Alka-Seltzer in water. The tablet dissolves, creating a force pushing the tablet down to the bottom of the glass. (via Giphy)
This lopsided behavior can create an asymmetric force that propels the particle forward, just as the chemical reaction between an Alka-Seltzer tablet and water can shoot a tiny rocket high into the air. Because of their asymmetric interactions, these particles are also highly sensitive to a gradient, an indication of how abruptly the chemical composition of their surroundings changes. In fact, Oxford graduate student Tunrayo Adeleke-Larodo recently worked with Golestanian and Agudo-Canalejo to show that certain enzymes can align themselves using this trick—a tantalizing hint that gradient sensitivity could have something to do with enzymes’ ability to self-assemble.
But where it gets really interesting is when researchers introduce two or more different types of catalytically active particles, especially ones that share reactants or products. This produces a complex system of give-and-take in which each particle interacts with the chemical gradients left by another and changes the local environment in turn.
As two or more particles influence each other via chemical gradients, they can exhibit a sort of pseudo-interaction that causes them to move around. What’s particularly fascinating about this, says Golestanian, is the fact that these interactions are often non-reciprocal—Particle A’s influence on Particle B may not be the same as Particle B’s influence on Particle A. Golestanian compares this complex behavior to human relationships, where mutual attraction is never a guarantee. This can lead to some very interesting behavior patterns where different types of particles chase each other around in a fluid or even join together to create a single large cluster that pushes itself along like a little motorboat.
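The chasing behavior can be illustrated with a toy model. The sketch below is not the authors' actual framework, just a minimal 1D illustration with made-up parameters: each particle drifts in response to a 1/r chemical field produced by the other, with response coefficients that are deliberately non-reciprocal.

```python
# Toy 1D sketch (illustrative only, not the paper's model) of non-reciprocal,
# gradient-mediated motion between two catalytically active particles.

def gradient_at(x, source):
    """Gradient at x of a 1/|r| chemical field centered on `source`."""
    r = x - source
    return -r / abs(r) ** 3  # d/dx of 1/|x - source|

def simulate(mu_a=1.0, mu_b=-1.0, steps=2000, dt=0.001):
    """mu_a and mu_b set how each particle drifts along the other's
    gradient; opposite signs make the interaction non-reciprocal."""
    xa, xb = 0.0, 1.0
    for _ in range(steps):
        va = mu_a * gradient_at(xa, xb)  # A climbs up B's gradient
        vb = mu_b * gradient_at(xb, xa)  # B flees A's gradient
        xa += va * dt
        xb += vb * dt
    return xa, xb
```

With these assumed coefficients, A is attracted toward B while B retreats from A at the same rate, so the pair translates forward at a fixed separation: A chases B indefinitely, with no external force required.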
This simulation shows how over time a large number of two different catalytically active particle species can aggregate thanks to gradient sensitivity. Image credit: Jaime Agudo-Canalejo and Ramin Golestanian.
If this feels wrong somehow, it could be because this behavior seems to act in direct contradiction to Newton’s Third Law, which states that for every action there is an equal and opposite reaction. According to that law, the forces that two objects exert on each other are always equal in magnitude and opposite in direction—the Sun’s gravitational influence on the Earth is precisely equivalent to the Earth’s pull on the Sun, keeping the relative motion of each body stable. From our everyday experience, it just doesn’t seem right that one particle can chase another around indefinitely.
This gif depicts Newton’s Third Law using a Newton’s Cradle. The red light shows how the force moves through the system. (via Giphy)
However, there is no real contradiction with Newtonian physics; this effect isn’t occurring in a vacuum, and the particles’ interactions with the fluid itself also produce tiny forces. The integrated effect of these forces across the entire system nets to zero, in keeping with the principle of equal and opposite reactions.
Golestanian firmly identifies as a theoretical physicist, but that doesn’t mean he is uninterested in the practical implications of his work. He says that the models the team has developed can apply to a multitude of situations, just as the equations for projectile motion apply equally to a shot put or a volleyball. “This is the beauty of theoretical physics,” he says. “When you build a theoretical framework to explain a phenomenon, there could be many realizations of the type of system that would follow those rules.”
Many bacteria use chemical gradients and catalysis to signal each other and hunt for food, for example. At the same time, Golestanian expects the same equations to (at least partially) describe how subcellular structures can form from individual enzymes, or how man-made Janus colloids aggregate in a Petri dish. Although he intends to stick with theory, he is excited to see where experimentalists take his framework. “The ultimate reward is when something comes out of the theoretical investigation which can later on be found in labs,” he says. And who knows? Maybe one of them will help answer Golestanian’s questions.
Eleanor Hook is a freelance science writer based in Chapel Hill, NC. She contributes regularly to Physics Buzz, where she writes about everything from dead fish to lasers in space.
You could say that Tim Larson, Seth Shafer, and Elaine diFalco were brought together by the sounds of the Sun. Now the three of them are sharing those sounds with scientists, musicians, and the general public through a unique effort called the Sonification of Solar Harmonics (SoSH) Project.
Credit: Mike Giles (via Unsplash)
In the early 1960s, scientists realized that the surface of the Sun oscillates, expanding and contracting in regular cycles of about five minutes. Eventually, they identified the source of this motion: as sound waves beneath the Sun’s surface tried to escape, they instead bounced back inside the Sun. That might sound a little creepy, but don’t worry—it’s science.
The hottest part of the Sun is its core, where intense pressure fuses hydrogen into helium and releases energy in the process (thanks to E=mc²). This energy travels outward, first via radiation and then through convection as it nears the surface of the Sun. The convection zone is a rough place, full of turbulence. Just as turbulent skies make for a bumpy airplane ride, turbulence in the convection zone makes for a bumpy trip to the surface of the Sun. Hot plasma rises to the surface and radiates its energy into space. The cooled plasma then descends back into the interior. This turnover at the surface creates sound waves.
Most of the time we think of sounds as noises that are audible to the human ear, but that’s actually a narrow definition. Sound waves are disturbances in pressure that travel away from their source, regardless of whether there is an ear around to hear them.
When a lady down the street looks up and calls “hello” to you, she creates disturbances in air pressure (vibrations) that travel away from her mouth and are picked up by your ears. In the convective layer of the Sun, turbulence plays the role of your neighbor and plasma plays the role of the air. Rather than hear these sound waves, we can see their visible impact as they reach the surface of the Sun and are reflected back inward.
Sound waves are characterized by frequency—the number of cycles that pass by a stationary point in one second. Our ears perceive frequency as pitch. The five-minute cycle of the Sun’s surface corresponds to a frequency of about 0.0033 hertz. You might think that cycle reflects when or how the sound waves are created, but it actually reflects something much bigger—the physical properties of the Sun.
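The conversion from the five-minute cycle to that frequency is simple arithmetic, since frequency is just the reciprocal of the period:

```python
# A five-minute oscillation cycle expressed as a frequency in hertz.
period_s = 5 * 60              # 300 seconds per cycle
frequency_hz = 1 / period_s    # cycles per second
print(round(frequency_hz, 4))  # 0.0033 Hz
```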
Guitar strings, drum membranes, and flute bodies all have what are called resonant frequencies, frequencies at which they naturally vibrate. When you pluck a string, hit a membrane, or blow into a tube, you create sound waves with various frequencies. However, the frequencies that match the instrument’s resonant frequencies are amplified. They dominate the sound. The Sun is also a resonator, and its strongest resonant frequency is at 0.0033 hertz.
The resonant frequencies of an object depend on the object’s physical properties. For example, the resonant frequencies of guitar strings depend on mass, length, and tension. For each such frequency, the string has a corresponding shape. This shape, together with its frequency, is called a harmonic.
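For an ideal string fixed at both ends, this relationship has a simple closed form: the nth harmonic has frequency f_n = (n/2L)·√(T/μ), where T is the tension, L the length, and μ the mass per unit length. The sketch below uses made-up string parameters (roughly guitar-like), not measurements of any particular instrument:

```python
import math

def string_harmonics(tension_n, mass_kg, length_m, n_modes=4):
    """Resonant frequencies f_n = (n / 2L) * sqrt(T / mu) of an ideal
    string fixed at both ends, where mu is mass per unit length."""
    mu = mass_kg / length_m        # linear mass density (kg/m)
    v = math.sqrt(tension_n / mu)  # transverse wave speed (m/s)
    return [n * v / (2 * length_m) for n in range(1, n_modes + 1)]

# Illustrative values only: a 0.65 m string weighing 5 g under 100 N.
# The fundamental comes out near 88 Hz; higher harmonics are integer
# multiples of it.
print(string_harmonics(100.0, 0.005, 0.65))
```

Change the tension, mass, or length and all of the harmonics shift together, which is exactly why measured frequencies reveal an object's physical properties.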
Different strings on a guitar oscillate at different frequencies, creating the various shapes seen here. Credit: Kinja
The Sun has many harmonics that each vibrate at a characteristic frequency. The harmonics influence the motion of the Sun’s surface in small but measurable ways. Furthermore, different regions of the Sun have different harmonics. By measuring the motion of the solar surface, you can separate these harmonics from one another and make inferences about the structure and composition of the Sun’s interior. That was the focus of Tim Larson’s research as a physics PhD student at Stanford University. “I spent all day every day studying sound waves and never listened to them,” says Larson, who now teaches at Moberly Area Community College.
It’s not that he didn’t want to. Human ears are sensitive to sounds in the frequency range of about 20-20,000 hertz. At 0.0033 hertz, the Sun’s dominant resonance is way too low for us to hear. Translating the data into an audible range while preserving its integrity isn’t a simple task. Although he tried, Larson didn’t have much luck convincing his colleagues that it was a worthwhile effort.
Range of audible frequencies for different species. Even elephants can't hear a frequency below 0.0033 Hz. Credit: Codeelectron
But then Elaine diFalco reached out. As a graduate student in music composition at the University of North Texas (UNT), she was interested in using data from the Michelson Doppler Imager (MDI) and the Helioseismic and Magnetic Imager (HMI)—the instruments whose data on the Sun Larson was studying—for musical purposes.
“I became an astronomy enthusiast a few years back, and the topic became so pervasive in my mind, that when I began working on my masters, I drew inspiration from it for my compositions,” she explains.
Seth Shafer, a recent UNT graduate (now a professor at the University of Nebraska at Omaha), joined Larson and diFalco, and the SoSH Project was born. The team has now built an interactive tool that takes real data from the Sun and maps it to clips that you can hear. It does this by shifting the frequencies into the audible range while preserving the relationships between the harmonics.
Three of the Sun’s harmonics are represented in this audio file; they first sound in sequence and then simultaneously. The data was taken by the Michelson Doppler Imager. The frequencies are transposed up by a factor of 90,000, and the file plays in about 35.6 seconds. The difference in volume for each mode corresponds to its amplitude as measured on the Sun. Credit: SoSH Project.

To hear more sounds of the Sun, visit the SoSH Project website, where you can also listen to several sample clips. The SoSH tool is interactive and free, so composers, scientists, and even you can download the interface and create your own Sun songs. You can mess around with different parameters, such as playback rate and frequency range, and hear their influence. Looking forward, the group is thinking about planetarium shows, virtual reality experiences, smartphone apps, and maybe someday even sonifying a solar flare.
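Because every mode is multiplied by the same factor, the ratios between mode frequencies (the musical intervals) survive the shift into the audible range. A minimal sketch of that mapping, using made-up mode frequencies rather than actual MDI data:

```python
FACTOR = 90_000  # the transposition factor quoted above

def transpose(freqs_hz, factor=FACTOR):
    """Scale every frequency by the same factor: the pitches shift,
    but the ratios (intervals) between modes are unchanged."""
    return [f * factor for f in freqs_hz]

solar_modes = [0.0033, 0.0034, 0.0036]  # illustrative nearby modes
audible = transpose(solar_modes)        # ~297, 306, 324 Hz

# The same factor compresses time, too: a 35.6-second clip corresponds
# to roughly 35.6 * 90_000 seconds, i.e. about 37 days, of solar data.
```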
Solar flare. Credit: NASA (via Giphy)
Experiencing the sounds of the Sun is definitely a cool experience, but it’s one that may also have scientific merit. “Because scientists never listen to their (already acoustic!) data, they simply have no idea what they might learn from a sonification that they can't find in plots and charts,” says Larson. SoSH is also great for composers, according to diFalco. She plans to compose “a sort of solar concerto” based on the musical intervals of the Sun.
“Music is an art and the great challenge is to create something that is aesthetically intriguing while remaining true to whatever system you're using,” she says.
Kendra Redmond is a freelance science writer and editor. After earning a master’s degree in physics, she worked for years in science education and communication. She contributes regularly to Physics Buzz and other science news outlets, and you can find her on Facebook and LinkedIn. Kendra lives in Bloomington, MN with her husband and three kids.
Friday, July 5th, 2019 was the 2nd International LGBT STEM Day*, an observance designed to celebrate the contributions that LGBTQ+ people have made in STEM and raise awareness of the issues that LGBTQ+ scientists still face in their daily lives. While not always visible, LGBTQ+ scientists have existed throughout history, from computing pioneer Alan Turing to astronaut Sally Ride. While real progress has been made toward equality, significant barriers remain.
To conclude pride month and celebrate the second annual LGBT STEM Day, we spoke with LGBTQ+ scientists to highlight the personal experiences of LGBTQ+ people in STEM, put a spotlight on the issues that scientists still face today, and share resources for the benefit of the LGBTQ+ community and allies.
This information was gathered through social media, e-mails, phone interviews, and in-person conversations with scientists from a variety of career stages and professional areas with diverse experiences to share. While many experiences are shared across the LGBTQ+ community, it is important to acknowledge that no one person’s experience is all-encompassing, and that the acronym “LGBTQ+” purposefully encompasses a large and diverse group of identities.
How do you see the LGBTQ+ community and the STEM community come together? Visibility contributes significantly to how STEM can become a safer, more inclusive space, but it looks different depending on the context. It can mean having more LGBTQ+ science icons, but it is also seeing other LGBTQ+ scientists in your lab, classroom, or workplace. Even if you don’t identify as being a part of the LGBTQ+ community, including your pronouns in email signatures and saying your pronouns when you introduce yourself creates a space for LGBTQ+ people to enter the conversation with less chance of being misidentified (Murphree). Visibility can also include having LGBTQ+ groups or clubs at your institution. Creating spaces for people to feel free to be who they are is really powerful. Academic institutions generally have a more robust toolkit for educating and supporting LGBTQ+ students.
People have also combined their LGBTQ+ and STEM identities through grassroots efforts. Barthelemy, for example, has worked to influence policy changes by forming groups of passionate LGBTQ+ scientists who want to help make a change.
What can institutions do to foster a climate of inclusivity? Two themes emerged from the responses we received. Institutions can 1) offer training and resources on how to foster an inclusive environment, and 2) display better support for their students and employees.
Training helps employees and staff learn about their implicit biases, inclusive language, and methods for creating a more supportive environment for those around them. It also signals that individuals have been equipped with tools for inclusivity. Even without any verbal interaction, seeing stickers or signs that the area you’re in is a Safe Zone can be really beneficial. Supporting students and/or employees might seem pretty obvious, but comprehensive efforts to support students and employees mean that those people will in turn support the institution and organization (Roti Roti). Examples include: adding statements of diversity and inclusion; consulting Best Practice documents for creating inclusive spaces; encouraging friendly discussions on a continual basis; showing dedication to anti-discrimination policies even after the hiring process; and providing resources for continued education.
“A climate of inclusivity can only be achieved when those who exclude, harass, and assault are held fully accountable. Women, LGBT people, disabled people, people of color, and especially those whose identities intersect are made more vulnerable by institutions that passively or actively allow misconduct to continue.”- Marinelli
“Encourage individuals to get Safe Zone training (available on many university campuses) and be visible allies by putting up a rainbow flag sticker or other indication of support for the LGBT+ community.”- Plisch
From your perspective, what are the biggest challenges LGBTQ+ scientists face?
In STEM, “there tends to be a bias against discussing issues in the personal domain, including things that can affect the participation of individuals in the discipline. This disproportionately affects those in marginalized groups, which experience exclusionary behavior more frequently, and impedes efforts to advance inclusion. As physicists, we need to remember that people do physics, and the well-being of all physicists is paramount to advancing knowledge in the discipline,” said Plisch.
There is a lack of the representation required to ensure that marginalized populations in STEM are well supported and to create a more diverse community of scientists (Murphree).
Better representation would also mean that LGBTQ+ people would not have to “come out” in every interaction they have in their professional community, an act which is exhausting and risky in new settings (Barthelemy).
Additionally, discrimination on the basis of orientation or gender identity is not legally prohibited in many states across the country (Marinelli).
What resources have been the most beneficial for you that we can share with others? While there are plenty of difficulties faced by LGBTQ+ scientists, there is also a growing number of resources out there for LGBTQ+ scientists, and those who want to be a better ally in the community.
We can make a greater impact when we stand together, which is why groups like oSTEM and NOGLSTP, professional societies specifically for LGBTQ+ people, exist. These organizations offer scholarships, conferences, and professional development opportunities specifically for people in the LGBTQ+ community. Connecting with other LGBTQ+ scientists can be very helpful, whether it’s through LGBT-specific organizations or through events like APS’s LGBTQ discussions and receptions at the March and April Meetings.
Even social media platforms like Twitter or Instagram can be a great way to connect with other LGBTQ+ scientists. Hashtags like #LGBTSTEM, #QueerInSTEM, or #BiInSci are a great way to get to know other LGBTQ+ scientists across the globe. There are also many accounts that discuss these issues to follow, like @Also_AScientist, @outforundergrad, @HouseofSTEM, @LGBT_Physics, @LGBTSTEM, @500QueerSci, @TigersInSTEM, and more. Join the conversation!
By educating yourself about issues in the LGBTQ+ community, you can be a better ally towards your peers. There are lots of ways that you can show your support. Get certified for Safe Zone training. Show that you are mindful of LGBTQ+ spaces through “safe zone” stickers. Include your pronouns when introducing yourself and in your emails. Ask other people’s pronouns.
We are really grateful to the generous scientists who contributed their thoughts, opinions, stories, and advice for this article. STEM is meant to help advance our society, but we have to recognize the scientists behind this progress as well. If you would like to share thoughts, questions, or feedback, please email email@example.com.
*This article was intended to be published on July 5; however, the APS office was closed and we wanted to do our due diligence before publishing to allow our contributors the time for their approval. Additionally, while LGBT STEM Day serves as an important observance, we’d like to continue this conversation beyond the annual celebration.
Background on contributors:
Ramón Barthelemy is an assistant professor of physics education research at the University of Utah. He got his Ph.D. from Western Michigan University, was a Fulbright Scholar in Finland, and was an AAAS fellow sponsored by APS before working in the private sector. He continues to work on supporting spaces of inclusion for LGBTQ+ students.
Mariarosa Marinelli is an undergraduate student studying physics at Virginia Commonwealth University. She researches galaxy evolution using spectroscopic data from the MaNGA Survey, and is also an educator at the Science Museum of Virginia’s Planetarium, delivering live astronomy shows to museum visitors.
Anna Murphree is a rising junior at Rhodes College in Memphis, TN, earning a BS in physics. She researches the spectroscopy of active galactic nuclei in Dr. Rupke’s group and at her REU at the University of Wyoming this summer, and she hopes to pursue a PhD in astrophysics.
Monica Plisch is the Director of the Education and Diversity Department at the American Physical Society. Since earning her Ph.D. from Cornell, she has served in many roles working to make physics a more diverse and inclusive community.
Annelise Roti Roti recently transferred out of graduate studies at Lehigh University and is currently a Project Development Intern with APS’s STEP UP project. They intend to pursue doctoral studies at the University of Maryland in the near future and maintain significant involvement in community social justice efforts.
Nine million. That’s how many icy-cold, sugary-sweet Slurpees the convenience store 7-Eleven will give away to US customers today. The annual Free Slurpee Day tradition began in 2002, a brilliant marketing move that has made July 11th (7-11 in US notation) the store’s busiest day of the year. In honor of the brain-freezing drink, today we’re highlighting some of the science behind this treat.
The likely speed at which 7-Eleven Slurpees will be distributed during Free Slurpee Day.
As the story goes, Slurpees are so named because of the slurping sound consumers make while eagerly sucking up the drink through a straw. But its name is surprisingly similar to the technical name for a mixture of water and ice crystals—ice slurry. So, let’s start there.
Ice slurry, also known as slurry ice, is the subject of a lot of research. It comes up frequently in the context of refrigeration, forming the basis of energy efficient technologies found in grocery store meat displays and air conditioners. It also has medical applications. Scientists at Argonne National Laboratory developed a machine that can rapidly deliver ice slurry to targeted organs in the human body, protectively cooling them in response to brain swelling, heart attacks, or severe trauma. In a study more closely connected to the drink at hand, scientists recently found that drinking ice slurry may cool the body and brain more efficiently than drinking water alone.
What’s so special about a slurry? As any Slurpee enthusiast will tell you, they are cold! If you replace the straw in a newly poured Slurpee with a thermometer, you’ll see that the temperature is several degrees below the freezing point of water (32°F). But how is that possible? Shouldn’t water that cold just be ice? To fully answer that question, it helps to go back in time to the Slurpee’s humble beginnings.
It was the late 1950s and the soda machine in Omar Knedlik’s Dairy Queen franchise was broken. As a quick fix, he stashed some bottles of soda in the freezer. When he took them out and popped the tops for his customers, the liquid soda instantly transformed into a delicious slushy drink that everyone loved. It was a lucky, supercooled accident.
Supercooled Water - Explained! - YouTube
When a liquid reaches its freezing temperature, it doesn’t instantaneously turn into a block of ice. Ice crystals form first at nucleation sites—impurities, irregularities, or disturbances in the liquid—and they grow outward from there. In a pure liquid with few nucleation sites, it’s possible (although tricky) to supercool the liquid below its freezing point. In this case, once a nucleation site is finally introduced (for example, by popping the top of the container or slamming the bottle down), the liquid turns to slush or ice, depending on the conditions, right before your eyes. Don’t believe it? Try it yourself.
To get a similar result but in a controlled, consistent way, Knedlik invented a machine that mixed and chilled water, crushed ice, cola-flavored syrup, and carbon dioxide under pressure. The result was the Icee—a frosty treat still sold today in fast food restaurants, theaters, and stores. A few years later, 7-Eleven bought some Icee machines, introduced their own flavors, and rebranded the drink as today’s honored Slurpee.
A human enjoying a Slurpee during a hot supernatural day
Achieving the arctic temperatures of Icees and Slurpees without supercooling requires a bit of chemistry and physics. There are two key components.
The sugar: Sugar is a tasty form of anti-freeze. When dissolved in water, sugar molecules block the growth of ice crystals that would otherwise become ice chunks and, eventually, a solid block of ice. In technical terms, sugar causes “freezing point depression,” lowering the temperature at which water turns to ice.
Constant mixing: Slurpee ingredients are mixed—churned, really—inside of a chilled barrel. Continual motion and side-scraping prevent ice crystals from building up, keeping the mixture light and frosty right up until you fill your cup.
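The anti-freeze effect of sugar can be estimated with the standard dilute-solution formula ΔT = K_f · m, where K_f ≈ 1.86 °C per molal for water and m is the molality of dissolved sugar (sucrose doesn't dissociate in water, so no van 't Hoff correction is needed). The recipe numbers below are invented for illustration, not an actual Slurpee formula:

```python
KF_WATER = 1.86             # cryoscopic constant of water, °C per molal
SUCROSE_MOLAR_MASS = 342.3  # g/mol

def freezing_point_c(grams_sugar, kg_water):
    """Freezing point of sugar water via dT = Kf * molality."""
    molality = (grams_sugar / SUCROSE_MOLAR_MASS) / kg_water
    return 0.0 - KF_WATER * molality  # pure water freezes at 0 °C

# e.g. 200 g of sucrose in 1 kg of water: freezing point ≈ -1.1 °C
print(freezing_point_c(200.0, 1.0))
```

A real slush machine uses far more concentrated syrup, where this dilute-solution approximation starts to break down, but the direction of the effect is the same: more sugar, lower freezing point.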
To fully experience the science behind the Slurpee, you really need to taste one. You can get yours for free today between 11am and 7pm at any US 7-Eleven. I hear they also have hotdogs for $1, but I’ll let someone else write that “science of” story.
Road damage after the Ridgecrest earthquake. via California Geological Survey and USGS
Especially if you live under a rock, you’ve probably heard of the series of earthquakes that hit Southern California near the town of Ridgecrest this past week. So far no deaths or major injuries have been reported, but there has been extensive damage. The quakes were felt over a wide area, with residents as far away as Sacramento and Phoenix reporting tremors. It's a fantastic example that our planet is dynamic, but a sobering reminder that our infrastructure is not. With any large-scale natural event, the press rushes to get the news out to the public. Most outlets present factual, responsible, and useful information...others miss the mark. While ludicrous headlines can be funny, they can also be dangerous. Let's quickly debunk these myths and get down to the facts:
Myth: The Ridgecrest earthquake is a precursor to “The Big One”
The “Big One” refers to a high-magnitude earthquake (far larger than the Ridgecrest quake) that will eventually occur along the San Andreas Fault. The San Andreas is a large fault extending ~800 miles along the coastline of California, and as brittle tectonic plates try to slide past one another in this area, stress accumulates. Communities are preparing for the eventuality that this fault will rupture in a massive earthquake. The Big One is sometimes referred to as being “overdue”, but this is perhaps a mischaracterization. Earthquakes don’t have due dates.
The Ridgecrest earthquake in comparison to the San Andreas fault. Shades of red denote earthquake risk. Credit: BuzzFeed & USGS Earthquakes
Here’s what’s actually going on: This earthquake did not originate along the San Andreas Fault system. The San Andreas is not the only fault in California; the western US is covered with small faults. The Ridgecrest earthquake occurred on a different network of tiny faults east of the San Andreas called the Eastern California Shear Zone. Seismologists are still studying the complex inner workings of this fault system, and it is possible that motion along one fault can trigger motion along others. Here’s the bottom line, though: Ridgecrest is 150 miles away from the San Andreas Fault (at its closest point!). Based on what scientists know about the fault system, a subsequent quake along the San Andreas is a possibility, but it is very, very unlikely.
Myth: California is going to fall into the ocean
An entire state can’t really “fall” into anything.
As the Pacific plate moves northwest and the North American Plate moves southeast, stress accumulates along the San Andreas Fault. via USGS
Here's what’s actually going on: While California has a large number of faults, the state is securely attached to the miles-thick crust of the Earth. Also, the Ridgecrest earthquake resulted in horizontal motion along the fault, so rather than moving up or down, the ground was simply offset to the side. Eventually, motion along the larger San Andreas Fault will bring San Francisco and LA closer together. More faults will rupture in the future, but they will not send California into the sea. Rising sea levels, however, are not a myth, and will certainly threaten residents along the coast of California in the future. So while seaside cliffs can be swept away by the ocean, entire states cannot.
Myth: Scientists can predict earthquakes
As much as we’d like to, no one can predict earthquakes. These natural processes are far too complex to precisely calculate their timing.
Here’s what we can do: Scientists can assess the probability of an earthquake occurring over a certain period of time and ensure that communities are ready for earthquakes when they do occur. The forecast is based on a model that uses the region's tectonic history and statistical probabilities to estimate future events. As of July 9th, the USGS Aftershock Forecast states that a quake above magnitude 7.1 has a 1% chance of occurring, and that the area will likely experience 220 to 339 aftershocks of magnitude 3 or higher. So while we know for certain that future earthquakes will happen, we can’t move our schedules around for them.
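One way to read forecast numbers like these: if you treat event arrivals as a Poisson process (a common simplifying assumption; the actual USGS forecast model is more sophisticated), a small probability of "at least one event" corresponds almost directly to the expected number of events:

```python
import math

def prob_at_least_one(expected_events):
    """P(N >= 1) = 1 - e^(-lambda) for a Poisson-distributed count."""
    return 1 - math.exp(-expected_events)

# A 1% chance of a M7.1+ quake is consistent with an expected count of ~0.01:
print(prob_at_least_one(0.01))  # ~0.00995, i.e. about 1%

# With hundreds of expected M3+ aftershocks, at least one is effectively certain:
print(prob_at_least_one(220))   # ~1.0
```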
Myth: Mercury in retrograde is causing earthquakes
The positions of celestial objects are unrelated to the motion of tectonic plates. Next.
Myth: The Ridgecrest earthquake is going to trigger a cataclysmic supereruption at Yellowstone Caldera
Last but not least, this ridiculous idea was posted in a headline by both the Daily Express and Fox News. To be crystal clear, earthquakes in California are not related to activity in Yellowstone whatsoever.
The epicenter of the Ridgecrest earthquake is very far from Yellowstone, as indicated by this map.
Here’s what’s actually going on: There are so many reasons why this can’t happen. An earthquake can’t impact a volcano over 1150 miles away. Yes, earthquakes can occur before, during, and after volcanic eruptions, and in rare cases, earthquakes can trigger eruptions. For this to occur, however, the volcano must already be poised to erupt. The Yellowstone Caldera is not. Because there are no other signs of an eruption (i.e. ground deformation, volcanic gas emissions, unusual seismic activity), we can be confident that this earthquake had no effect on the Yellowstone Supervolcano.
Additionally, larger and more proximal earthquakes have occurred near Yellowstone, and none have triggered any eruptions. Earthquakes at Yellowstone are common because there are lots of faults in the area: they are not harbingers of impending volcanic armageddon! Research scientists are closely monitoring the area, and have not observed any other signs of an impending eruption. No cause for alarm here.
How can I combat misinformation?The internet is full of information, which is great! It is, however, important to be aware of where your information is coming from. When natural disasters strike, there are many ways you can inoculate yourself against misinformation. First, if something looks like clickbait, it probably is. Tabloids like The Daily Mail and Daily Express tend to be the main offenders in this category, but even mainstream news outlets can write misleading headlines. For the most accurate info, follow news sources you can trust; for earthquakes, you can follow the USGS, or expert scientists on social media, or a trusted and transparent news outlet. If you suspect something is misinformation, you can personally debunk it too!
Lissie Connors (@LissieOfficial) covers social media and writes about science for Physics Central and the American Physical Society. When she’s not internally combusting from bad science headlines, she enjoys cycling.
Each year, nearly one million visitors are left breathless by the sand dunes of Death Valley in California, stunning structures that curve gracefully, rippling upwards to an impossibly crisp ridge winding its way down the length of each dune. To a distant observer, they could be a single solid mass that morphs and grows imperceptibly over the course of time. To a physicist, though, they could be a model of spacetime itself.
The sweeping dunes of Death Valley appear uniform but are actually made of countless sand particles. Image credit: Brocken Inaglory (via Wikimedia Creative Commons)
Ralph Bagnold was one of the first physicists to acknowledge what any family of tourists can see clearly: these massive structures are made of countless tiny grains of sand. He spent his life working to perfect a mathematical description of dunes based on their granular, not fluid, structure and correctly accounted for the rippling features that characterize many dune surfaces.
Spacetime is generally regarded in Einstein’s famous general relativity as perfectly smooth, if curved here and there. But some physicists think that it may actually be granular on the smallest of scales. Like Bagnold, these researchers look beyond the smooth big-scale structures and analyze the effect of each tiny grain. Although this idea is not yet mainstream in the physics community, a recent Physical Review Letter hints that granular spacetime could—just maybe—solve two of the most pressing problems in astronomy today.
The first is the inconsistency between two otherwise robust mathematical frameworks: general relativity and quantum mechanics. General relativity describes the behavior of mass and gravity through the introduction of a warped spacetime, while quantum mechanics focuses instead on the behavior of minuscule particles. Each works exceedingly well within the confines of its own regime, but the problem arises where a system has a very large mass in a very small space—for example, at the time of the Big Bang or at the center of a black hole. There, physicists find, the theories break down into mathematical gibberish, often flatly contradicting each other. The search is on for a so-called “Grand Unification Theory” to unite general relativity and quantum mechanics, but although theories abound, none has been satisfactorily proven.
The second problem is the expansion of the Universe. We have known for nearly a century that the space between galaxies is rapidly growing, but it was only a few decades ago that astronomers realized that this growth is actually accelerating, throwing scientists into great consternation. “A Universe that is expanding should be slowing down because gravity is attractive,” says Alejandro Perez of Aix-Marseille University. In the same way that an apple tossed in the air slows down before reversing its course, astronomers expected the expansion of the Universe to decelerate, not speed up. In response to this conundrum, physicists did the only logical thing they could think of: they added a mathematical term to counteract the effect of gravity and gave it a placeholder name, dark energy. Although it makes up 70% of the Universe, no one knows whatdark energy is or why it exists, a challenge known as the dark energy problem.
So here we have two of the biggest outstanding problems in physics. Many brilliant minds have set themselves to the task of developing candidate theories to explain one or both of these mysteries, though none has been fully accepted in the scientific community. As Perez explains, however, among these theories certain concepts tend to pop up over and over again. “These theories are candidate theories, they are tentative theories, many questions remain open,” he cautions. “But there is a common idea from these theories, that is that spacetime might be discrete.”
According to some varieties of quantum gravity—one possibility for the Grand Unifying Theory—space is made up of a mind-boggling number of tiny particle-like entities, each on the order of 10-35 meters (technically speaking, the Planck length). As matter moves through spacetime, it hops from one of these particles to another—there is no such thing as “in-between”. We are so large in comparison with the granular structure that we only see the large-scale, apparently smooth curvature of spacetime, but that’s only part of the picture—it’s like studying a sand dune without considering the effect of each grain.
Spacetime is traditionally considered by physicists to be a continuous entity that distorts in the presence of matter; this image shows a two-dimensional visualization of this effect. However, it may actually be that spacetime is granular, not smooth. Image credit: Johnstone (via Wikimedia Creative Commons)
And these effects could be paradigm changing. Imagine riding a bicycle along the sandy base of a dune. If at any point you decide to stop pedaling, you will soon come to a stop as the kinetic energy of the bicycle is slowly lost, transformed into heat and sound energy and transferred to the surrounding air and sand. In a similar way, if spacetime is granular, the math suggests that small amounts of energy would be transformed away from matter—and that it would begin to behave exactly like dark energy.
Of course, there is an obvious problem with this theory: if energy were being “lost” into spacetime, wouldn’t we have noticed? Instead, all of physics is built upon the notion that energy cannot disappear, an idea known as conservation of energy. Technically speaking, this theory doesn’t do away with the conservation of energy, since the energy is merely transformed, but the fact remains that it would disappear from our measurements. In their recent paper, Perez and his colleague, Daniel Sudarsky of the Universidad Nacional Autónoma de México, tried to shed some light on this question with a series of order-of-magnitude calculations.
To begin with, they reasoned, spacetime would be granular only on the very smallest of scales, far smaller than we could hope to measure. The energy transferred away from matter as a result of this granular structure must be correspondingly minuscule. They also calculated that the amount of energy lost is proportional to density squared; since the modern universe is relatively rarified, current energy losses would be tiny. In fact, the entire planet Earth would take 10 million years to lose the energetic equivalent of a single electron mass through this process! Current technology is nowhere near the capabilities that would be required to measure such tiny effects, and so it would be impossible for researchers to measure in the lab.
But, tiny effects can accumulate into larger ones. Starting at a 10-11 seconds after the Big Bang*, Perez and Sudarsky added together all of the energy expected to have been lost in the Universe to date. Very little is known about the theoretically granular nature of spacetime; “We have only hints about it,” Perez comments. Nevertheless, using these hints they were able to produce an order of magnitude estimate of the result by introducing only a single parameter encoding the theoretical uncertainties.
The exciting thing about their result is that the energy lost through this mechanism corresponds to the dark energy observed in the Universe today for this free constant of order unity! “If this is correct, it would be the first observable manifestation of quantum gravity,” Perez says. Not only that, it would solve the mystery of dark energy—effectively killing two very elusive birds with one beautiful theory.
Of course, this is all still speculation. Perez is the first to admit that the theory needs more work, which could lead to testable predictions. He is particularly interested in what the implications are for black holes, which formally have infinite density at their singularities. Does this mean that dark energy is being produced at infinitely high rates in these regions? He shakes his head. “I don’t know the answers to these questions,” he says. However, if astronomers discover higher concentrations of dark energy surrounding black holes, that could be a point in favor of this theory.
While this idea is still a long way from becoming an accepted part of the cosmological model, it is awfully intriguing. Perez thinks of it in terms of Planck’s original hypothesis of energy quantization, the implications of which were not fully understood until years later. “We don’t really understand the physics at the Planck scale,” he says “but I would say that nobody understands the physics at the Planck scale.” Maybe it’s just a matter of time.
*The Big Bang presents an interesting nuance to this theory. Recall that the energy lost is proportional to matter density squared, so when the Universe began with an infinite density one would expect an infinite amount of energy to be transferred into spacetime—not a very helpful (or accurate) prediction. Fortunately, according to their ideas, the equations only involve the density of massive particles with spin, which did not emerge until after the density of the Universe had become finite.
A modern French take on a classic tragedy: You see a beautiful crêpe in a restaurant, soft, thin, perhaps full of Nutella. You think to yourself “Oh! It shouldn’t be too hard to make this at home, what’s the worst that could happen?” You go to the store, pick out your ingredients, and set out to make those crêpes. The result? It's okay, but it's just not perfect.
In search of the perfect crêpe, researchers in New Zealand and France teamed up to unmask the secrets to these ultrathin pancakes. The answer, they found, lies in fluid dynamics.
Crêpe batter is not unique among liquids, but the process of cooking crêpesv is a rather complicated display of thermodynamics. Immediately after the batter is poured onto the hot pan, it begins to cook, and thus begins to solidify and become more viscous. The end result? One part of the crêpe is overcooked, the other undercooked.
See, as the batter cooks, it gets thicker and thicker, making it harder and harder for the batter to flow on top of the pan. By developing a computer model that takes the pan’s orientation, temperature, and thickness into account, the researchers were able to come up with a method that maximizes a uniform thickness on the pan.
Based on their calculations, the best way to evenly spread the batter on the pan is to immediately tilt the pan to the side after the batter hits the hot pan. Then, while the pan is still inclined, rotate it in a circle so the batter is distributed across the entire area of the pan. After all the holes are filled, you can put down the pan and let it continue cooking before flipping to the other side.
Now you're (Nu)tell(a)ing me, there's a scientific way to make great crepes? - YouTube
In the video above, we attempted to replicate their technique (using this recipe from BuzzFeed). While we aren't master crêpe chefs just yet, their tips certainly helped us step up our crêpe game!
While we can confirm crepes are a tasty test subject, the applications for this research span far beyond the culinary industry. In the future, their idea may be used to improve chocolate manufacturing, the coating of surfaces, or the production of thin elastic shells. For now though, we certainly appreciate their delicious results.
–Lissie Connors & Phoebe Sharp
Lissie Connors (@LissieOfficial) covers social media and writes about science in a slightly snarky manner for APS and PhysicsCentral. This fall she's moving to the Pacific Northwest to study volcano geochemistry and finally find Bigfoot.
Phoebe Sharp (@phee_sharp) is the editor of the Physics Buzz blog, curating goofy puns and overachieving science topics. She is starting her Ph.D. in Physics Education Research in the fall and aims to finally crochet an appropriately sized sweater before next summer.
Microscopes are powerful tools for examining biological cells. Under the right conditions and magnification, cell components and activity become visible and diseases can be exposed. Microscopes can inform treatment and saves lives—if they are within reach. With an inexpensive, versatile, and portable new microscope design, researchers at the University of Connecticut (UConn) and the University of Memphis (U of M) are hoping to increase access to high-resolution microscopes.
A 3D animation generated from the 3D printed microscope. Video via Bahram Javidi/OSA
As the researchers report in the OSA journal Optical Letters, their new design can resolve subcellular features in three-dimensional images and capture tiny fluctuations in biological cells over time. It’s research quality yet robust enough to carry to a patient’s bedside. The team was led by Bahram Javidi (UConn) and included Timothy O’Connor (UConn) and Ana Doblas (U of M).
Objects can be magnified by many different arrangements of mirrors, lenses, and optical components. For this project, the team wanted an arrangement that didn’t require extensive sample preparation (like staining or labeling) or laboratory-style control of environmental conditions like noise, but that produced useful 3D images. So, they turned to a modified version of a technique called digital holographic microscopy (DHM).
The high-resolution, low-cost microscope in use. Image via Bahram Javidi/OSA.
The “digital” in DHM refers to the fact that instead of looking through an eyepiece directly at a sample, the microscope feeds data into a computer program and an image of the sample is generated on a screen. “Holographic” refers to the fact that the image is produced using the technique for making holograms. If you’re thinking Star Trek, Red Dwarf, or Avatar, you’re kind of on the right track. Holograms are 3D images of objects that you don’t need special glasses to see. Put simply, a hologram is created by illuminating an object with a laser beam and then recording how the light around the object is changed by the object.
In traditional DHM, you first illuminate a sample with a laser. Then, using a digital camera, you record the interference pattern created when light transmitted or reflected by the object interferes with an uninterrupted reference beam. That data is processed by a computer program and output as a 3D image of the sample. DHM is a great way to study cells, but here’s the problem—it requires a complicated setup and is really sensitive to vibrations. For these reasons, it’s usually done on a special optical table in the lab.
To make this technique viable outside of the lab, the researchers modified a traditional DHM setup. Instead of illuminating a sample with a traditional laser beam, their design uses a patterned or “structured” beam. The structure affects the interference pattern in such a way that you can double the resolution of the 3D image.
DHM and structured illumination microscopy (SIM) have been combined before, but in a way that required two separate light sources. The problem is that the more sources you have, the more sensitive the microscope is to vibrations. To get around this problem, the new design takes advantage of geometry in a way that requires only one light source. The researchers add structure to the illuminating beam by passing it through a diffraction grating (a clear cd) before it reached the object, which created an alternating light and dark pattern.
The researchers estimate that the microscope costs around several hundred dollars in parts, less if it could be manufactured in bulk. All of the optical parts are commercially available and the rest of the microscope can be printed with a 3D printer. The capabilities of 3D printing are key to the success of this design.
“3D printing the microscope allowed us to precisely and permanently align the optical components necessary to provide the resolution improvement while also making the system very compact,” said Javidi in a press release.
In addition to working out the design specifics, the team wrote a computer algorithm to process the data and reconstruct the images. Tests of their system showed that the SIM modification doubled the resolution of DHM, enabling microscope to resolve features as small as 0.775 micrometers (for comparison, the diameter of a red blood cell is about 5 micrometers). In addition, the microscope can expose changes in biological cells over time that happen on a scale as small as a few tens of nanometers.
Javidi and his team are exploring ways to further reduce the impact of environmental noise on the images and collaborating with international partners to see how well the microscope can diagnose conditions like diabetes, sickle cell disease, and malaria.
“This new microscope doesn't require any special staining or labels and could help increase access to low-cost medical diagnostic testing," says Javidi. "This would be especially beneficial in developing parts of the world where there is limited access to health care and few high-tech diagnostic facilities."
How does your right arm know to be as long as your left? What tells your body how tall you are? Why does a giraffe’s neck grow tall, as its body stays the same size? As much as we have learned in biology, we still don’t know how organisms are aware of their size–at least not on a cellular level. New research from a team of physicists suggests that subtle chemical frequencies tell organisms how large they are. A frequency is a pattern, and nature, whether we notice it or not, is full of patterns. We see them in plants and animals, in chemicals, in rocks, and even in outer space. These patterns count for much more than just aesthetics; we can use them to answer big questions about how our world works.
Artists rendering of a network of neurons firing signals back and forth between one another. Credit: Tom Morris (via Tumblr)
Sizing up the problem
Physics grad student at the Universität Saarland, Frederic Folz, didn't plan on doing biology research when he started his Bachelor’s thesis in 2016, but after talking with physics professors Giovanna Morigi and Karsten Kruse, they came up with an interesting idea, one that integrated electrodynamics with biology.
A few years ago, Nobel Prize-winning quantum physicist Robert B. Laughlin, wrote a unique paper on a topic he called “the length problem”. He wrote, “On the matter of length determination, per se, very little progress has been made beyond Thompson’s 1917 treatise on biological form.” Laughlin goes on to suggest living things can essentially “measure themselves” through a process commonly seen in electrodynamic systems.
See, for an organism to regulate their growth, their body must first somehow be aware of their own size–they must have some biological way of measuring themselves. Then, they must have some way to respond to this information, alerting their appendages to grow, shrink, or maintain the status quo. Perhaps, thought Laughlin, biological organisms are measuring their size using the frequency of oscillating chemical signals in their cells.
“For example” explained Folz, “if you have a bacteria, you can consider it as a cavity where chemicals are traveling. When certain resonances occur, the cell could stop its growth”
Information is carried between axons by waves of chemicals that oscillate back and forth, creating a resonating signal. Credit: LucasVB (via Tumblr)
Folz and his collaborators sought to test Laughlin’s idea using axons, the long fibers connected to neurons, as a model. These spindly filaments are perfect to study, because unlike other parts of the body, they have a relatively simple geometry. When an axon grows, it’s really only growing lengthwise, unlike most body parts with complex and irregular growth cycles.
Your axons and most other cells are part of a highway system of chemicals racing back and forth, delivering messages throughout your organs. In the nervous system, motor proteins called kinesin and dynein run back and forth from the root to the end of the axon in a loop.
Inside the axon, as chemicals travel to the growth cone (the end of the axon) they jumpstart the motion of other chemicals to the soma (the root of the axon). If the axon lengthens, then this cycle would slow, and we’d see what’s called a “negative feedback loop”. Researchers have studied this process in the lab, finding that if the concentrations of these motor proteins are changed, then the length of the axon itself will change.
Together with fellow physicist Lukas Wettmann, the researchers recreated the axon’s chemical system in their model, breaking it down into a mechanics problem. Simply put, If chemical “I” moves back and forth at a constant velocity “X”, completing “N” oscillations over a certain time, then how far did it travel?
Because it will take the chemical more time to travel through a longer axon, a lower frequency means a longer axon. Just as you can measure the length of an axon, their equations can also be used to measure the frequency and shape of the oscillations themselves.
An artists rendering of a neuron. The axion is the long spindly filament at the end of the cell.
So what does this all mean? Perhaps our bodies are finely tuned in to these oscillations, using the information to make key decisions about how our bodies grow, decisions made without us even realizing it.
In the future, the team is looking to expand its model beyond axons to other, more complex organ systems. They think this model could be applied not only to the length problem but to a myriad of biologic processes, including how tiny cells can measure their internal pressure.
“We can also cast our equations in the physical form as an electronic circuit. We can build a circuit that emulates the behavior we see here [in the axon]...You can use this mechanism to regulate different physical quantities that, at first glance, have nothing to do with one another” said Folz
Even though circuits and brain cells don’t seem like they have much in common, the nature of their signals allows researchers to extract information about how physical elements move inside them. While it started as a biology project, this research could grow into much more than nerves.
Weather permitting, a SpaceX Falcon Heavy rocket will blast off into the dark Florida sky late tonight (live stream here). In each of its 27 first-stage engines, liquid propellant RP-1 will mix with liquid oxygen, igniting chemical reactions that will thrust the 3-million-pound system into the night sky. After its first-stage engines are spent and the boosters fall away, a second-stage engine will kick in, powered by the same reaction. This engine will provide the final pushes required to deliver the 24 satellites on board to their appropriate orbits.
Artist's concept of LightSail 2 above Earth. Image credit: Josh Spradling / The Planetary Society.
Folded up neatly within one of these satellites is a small, crowdfunded package that will be set free a week after launch. The spacecraft inside that package—LightSail 2—will be powered by a completely different, much gentler propulsion system than the one that carried it to space. If all goes as planned, LightSail 2 will be the first satellite to go around the Earth in a controlled orbit propelled solely by sunlight. (In case you’re wondering, LightSail 1 was a proof-of-concept spacecraft successfully deployed for testing in 2015.)
LightSail 2 weighs just over 10 pounds and, one unfolded, will be about the size of a boxing ring. It consists of a 32-square-meter sail composed of extremely thin mylar and supported by four rigid, metallic booms. The spacecraft also has electronic systems for communication, orientation, and navigation.
A solar-powered sail has no resemblance to the solar-powered systems on Earth that convert energy from the sun into electricity. Instead, solar sails harness the momentum of photons emitted by the sun.
Shiny objects, like mirrors and pieces of mylar, reflect incoming photons. When this happens, some of the photon’s momentum is transferred to the object. In other words, the photon gives the object a tiny shove forward before bouncing backwards. One shove might not do much, but continuous small shoves can add up to a force strong enough to propel the object forward. That’s the idea behind solar sails—large, lightweight structures that harness the nonstop flow of photons traveling out from the sun for propulsion.
By carefully orienting its sail with respect to the sun, LightSail 2 should be able to use this momentum to reach a higher orbital altitude than where it starts. Simulations suggest that it should be able to increase its orbit altitude by about 0.5 km per day.
Bill Nye holding the CubeSat for the LightSail 1 project. Credit: The Planetary Society
Solar sailing isn’t a new idea, but because of its nature, there are a limited number of possible applications. LightSail 2 is designed to test how well solar sails can propel CubeSats, mini-research satellites with a standard size and weight. CubeSats have become increasingly popular ways for scientists, students, and companies to explore space. More than 1,000 have been launched since 2003 and at least that many are in development now. Solar sailing is potentially an inexpensive, small, lightweight propulsion option that could expand the research capacity of CubeSats.
A lot of people are invested in this project, and not just for professional reasons. LightSail 2 was entirely funded by individuals, in part through a wildly successful Kickstarter campaign. That’s really uncommon for such a big research effort! The project is housed within The Planetary Society, a nonprofit foundation established in 1980 to “empower the world’s citizens to advance space science and exploration.” You might be familiar with the CEO: Bill Nye the Science Guy.
The Planetary Society has been promoting and developing solar sail technology since its early days but has no plan for a LightSail 3. That’s because the technology is now mature enough for space agencies, companies, and other researchers to put it to good use, and because the society’s membership has lots of other ideas worth exploring, according to COO Jennifer Vaughn. The society plans to hold a wide-open call for science and technology project proposals next year. “[W]e're hoping to really democratize this process of getting the best ideas into the hands of the people that could potentially make these dreams into reality,” Vaughn said at a press conference on Thursday.
In the meantime, all eyes will be on tonight’s launch. “It's really exciting to be flying this thing at last. It's almost 2020 and we've been talking about for, well, for 40 years. It's very, very cool,” says Nye.