SHRIMP is a new DARPA program to develop insect-scale robots for disaster recovery and high-risk environments
The DARPA Robotics Challenge was a showcase for how very large, very expensive robots could potentially be useful in disaster recovery and high-risk environments. Humanoids are particularly capable in some very specific situations, but the rest of the time, they’re probably overkill, and using smaller, cheaper, more specialized robots is much more efficient. This is especially true when you’re concerned with data collection as opposed to manipulation—for the “search” part of “search and rescue,” for example, you’re better off with lots of very small robots covering as much ground as possible.
Yesterday, DARPA announced a new program called SHRIMP: SHort-Range Independent Microrobotic Platforms. The goal is “to develop and demonstrate multi-functional micro-to-milli robotic platforms for use in natural and critical disaster scenarios.” To enable robots that are both tiny and useful, SHRIMP will support fundamental research in the component parts that are the most difficult to engineer, including actuators, mobility systems, and power storage.
From the DARPA program announcement:
Imagine a natural disaster scenario, such as an earthquake, that inflicts widespread damage to buildings and structures, critical utilities and infrastructure, and threatens human safety. Having the ability to navigate the rubble and enter highly unstable areas could prove invaluable to saving lives or detecting additional hazards among the wreckage. Partnering rescue personnel with robots to evaluate high-risk scenarios and environments can help increase the likelihood of successful search and recovery efforts, or other critical tasks while minimizing the threat to human teams.
Technological advances in microelectromechanical systems (MEMS), additive manufacturing, piezoelectric actuators, and low-power sensors have allowed researchers to expand into the realm of micro-to-milli robotics. However, due to the technical obstacles experienced as the technology shrinks, these platforms lack the power, navigation, and control to accomplish complex tasks proficiently.
To help overcome the challenges of creating extremely SWaP-constrained microrobotics, DARPA is launching a new program called SHort-Range Independent Microrobotic Platforms (SHRIMP). The goal of SHRIMP is to develop and demonstrate multi-functional micro-to-milli robotic platforms for use in natural and critical disaster scenarios. To achieve this mission, SHRIMP will explore fundamental research in actuator materials and mechanisms as well as power storage components, both of which are necessary to create the strength, dexterity, and independence of functional microrobotics platforms.
That term “SWaP” translates into “size, weight, and power,” which are just some of the constraints that very small robots operate under. Power is probably the biggest one—tiny robots that aren’t tethered either run out of power within just a minute or two, or rely on some kind of nifty and exotic source, like lasers or magnets. There’s also control to consider, with truly tiny robots almost always using off-board processors. These sorts of things substantially limit the real-world usefulness of microrobots, which is why DARPA is tackling them directly with SHRIMP.
One of our favorite things about DARPA programs like these is their competitive nature, and SHRIMP is no exception. Both components and integrated robots will compete in “a series of Olympic-themed competitions [for] multi-functional mm-to-cm scale robotic platforms,” performing tasks “associated with maneuverability, dexterity, [and] manipulation.” DARPA will be splitting the competition into two parts: one for actuators and power sources, and the other for complete robots.
Here are the tentative events for the actuator and power source competition; DARPA expects that teams will develop systems that weigh less than one gram and fit into one cubic centimeter.
High Jump: The microrobotic actuator-power system must propel itself vertically from a stationary starting position, with distance measured only in the vertical direction and survivability as the judging criteria. Expected result: >5cm.
Long Jump: The microrobotic actuator-power system must propel itself horizontally from a stationary starting position, with the distance measured only in the horizontal direction and survivability as the judging criteria. Expected result: >5cm.
Weightlifting: The microrobotic actuator-power system must lift a mass, with progressively larger masses until the actuator system fails to lift the weight. Expected result: >10g.
Shotput: The microrobotic actuator-power system must propel a mass horizontally, with the distance measured only in the horizontal direction as the judging criteria. Both 1-gram and 5-gram masses must be attempted. Expected result: >10cm @ 1g, >5cm @ 5g.
Tug of War: The microrobotic actuator-power system will be connected to a load cell to measure the blocking force of the actuator mechanism. Expected result: > 25mN.
Teams competing with entire robots will have a separate set of events, and DARPA is looking for a lot of capability in a very, very small package—in a volume of less than one cubic centimeter and a weight of less than one gram, DARPA wants to see “a micro power source, power converters, actuation mechanism and mechanical transmission and structural elements, computational control, sensors for stability and control, and any necessary sensors and actuators required to improve the maneuverability and dexterity of the platforms.” The robots should be able to move for 3 minutes, with a cost of transport of less than 50. Teams are allowed to develop different robots for different events, but DARPA is hoping that the winning design will be able to compete in at least four events.
Rock Piling: For each attempt, the microrobot must travel to, lift, and stack weights (varying from 0.5 to 2.0 g) in a minimum of two layers without human interaction. Expected result: 2g, 2 layers.
Steeplechase: Competing teams will be given precise locations and types of obstacles (e.g. hurdle, gap, step, etc.) relative to the starting location. For each attempt, the microrobot must traverse the course without human interaction or recharge between each obstacle. The number of cleared obstacles and total distance will be used as the judging criteria. Expected result: 2 obstacles, 5m.
Biathlon: Competing teams will be given the choice between three beacon types (temperature, light, or sound) or they may choose to use all 3 types of beacons. For each attempt, the microrobot must traverse to a series of beacon waypoints to create an open circuit without human interaction or recharge between each waypoint. Expected result: 2 beacons, 5m.
Vertical Ascent: Microrobots will traverse up two surfaces, one with a shallow incline (10°) and the other with a sharp incline (80°). The total vertical distance traveled will be the judging criteria. Expected result: 10m at 10°, 1m at 80°.
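The cost-of-transport target mentioned above (less than 50) is a dimensionless measure of locomotion efficiency, commonly defined as COT = P/(m·g·v). A minimal sketch of what that budget implies for a gram-scale robot; the mass and speed here are illustrative assumptions, not figures from DARPA's announcement:

```python
# Dimensionless cost of transport: COT = P / (m * g * v), where P is
# power draw (W), m is mass (kg), g is gravity (m/s^2), and v is
# velocity (m/s). Lower values mean more efficient locomotion.

G = 9.81  # gravitational acceleration, m/s^2

def cost_of_transport(power_w, mass_kg, speed_m_s):
    return power_w / (mass_kg * G * speed_m_s)

# Hypothetical 1-gram microrobot crawling at 1 cm/s: to stay under a
# COT of 50, its power budget must be below m * g * v * 50.
max_power = 1e-3 * G * 0.01 * 50  # watts
print(f"max power budget: {max_power * 1e3:.1f} mW")
```

Run the other way, the same relation shows why untethered microrobots run out of power so quickly: at milliwatt budgets, even small batteries are exhausted in minutes.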
DARPA has US $32 million of funding to spread around across multiple projects for SHRIMP. Abstracts are due August 10, proposals are due September 26, and the competition could happen as early as March of next year.
The Ground X Vehicle Technologies program explores the advantages of powering vehicles with multiple electric motors
In late June, shortly after IEEE Spectrum’s recent feature on in-wheel motors for electric vehicles went to press, the Defense Advanced Research Projects Agency (DARPA) issued a press release describing various novel concepts for military vehicles, some of which make use of distributed electric motors. These prototypes were developed as part of DARPA’s Ground X Vehicle Technologies (GXV-T) program, which is intended to find ways to make military vehicles less vulnerable to attack. The usual technique for doing that is, of course, to add heavy armor. The GXV-T program explored another approach—to make vehicles nimbler and thus “improve survivability without up-armoring the vehicle,” according to program manager Amber Walker.
One of the technologies developed under the GXV-T program is an in-wheel motor developed by U.K.-based QinetiQ. Like the commercial in-wheel motors described in Spectrum’s recent feature article, QinetiQ’s motor can improve the handling of a vehicle by allowing each wheel to be powered separately. It differs from the commercial offering described in Spectrum's July issue, though, in that it has a built-in transmission. (The commercial unit makes do with no gearing at all.)
For a military vehicle, a key advantage of using in-wheel motors rather than a centrally mounted electric motor and transmission is that it eliminates the drive shafts or axles that you’d otherwise need under a vehicle—components that can become deadly projectiles in a military vehicle should it run over an explosive.
Demonstrations of DARPA's Ground X-Vehicle Technologies - YouTube
Another thrust of the GXV-T program was to develop technology that would allow vehicles to negotiate extremely rough terrain. Engineers from Pratt & Miller achieved that goal by constructing an odd-looking vehicle with a suspension system that allowed each wheel to be moved up and down over a range of almost 2 meters. This Multi-mode Extreme Travel Suspension (METS) system enabled the prototype to ride smoothly over enormous bumps and to keep its cab level while traversing steep slopes.
While the METS vehicle doesn’t appear to have true in-wheel motors, images of it suggest that it uses motors positioned near each drive wheel. GXV-T researchers have not yet been cleared by DARPA to share technical details of their designs, so I can't confirm that surmise, but that's certainly what it looks like. And it makes sense: Distributed electric motors offer vehicle designers more latitude than they are normally used to. And I bet they really needed that here.
The use of distributed electric motors for these military prototypes is reminiscent of a program sponsored by DARPA and the Office of Naval Research in the early 2000s that resulted in the development of the Shadow Reconnaissance, Surveillance, Targeting (RST) Vehicle. That hybrid electric vehicle used in-wheel motors in part for the flexibility they offered—literal flexibility. You see, one aim of that earlier program (which, incidentally, was managed at DARPA by Stephen Welby, currently IEEE’s Executive Director and COO) was to create a vehicle that could fit into the cargo hold of a V-22 Osprey tilt-rotor aircraft. To do so, the vehicle, which in normal operations had substantial ground clearance, needed to crouch to within 10 centimeters of the ground and pull its wheels inward. Using in-wheel electric motors helped make such contortions possible.
Although some once saw it as a possible replacement for the military’s High Mobility Multipurpose Wheeled Vehicle (better known as the Humvee), the Shadow never went beyond the demonstrator phase. The technologies recently developed in the GXV-T program could, of course, suffer the same fate. But even if the U.S. military doesn’t choose to follow through and put them into widespread use, I wouldn’t be surprised to see some of these technologies—the METS system in particular—commercialized, perhaps for next-generation electrically powered all-terrain vehicles. I suppose that could help save the lives of many young people, not on the battlefield, but by making notoriously dangerous ATVs that much less likely to roll over.
The U.S. government requests information on high-demand skillsets, hiring difficulties, apprenticeship programs, and pipeline challenges
The U.S. National Institute of Standards and Technology (NIST) wants to know about efforts to educate and train the future semiconductor workforce, and how the U.S. government can help.
NIST has asked for input on this topic from semiconductor companies and their suppliers, trade associations, equipment manufacturers, educational institutions, and other related organizations, for a new report. The deadline to submit ideas is 15 August 2018.
Government assistance could include “enhanced support for K-12, undergraduate, and graduate STEM education (with a particular focus on semiconductor technology), targeted technical training, internship and apprenticeship programs, and cooperative education programs,” according to NIST’s request for information.
The outreach effort was sparked by President Donald Trump’s 2017 National Security Strategy, and its concerns about the impact of semiconductor-dependent technologies, like encryption, advanced computing, and artificial intelligence, on U.S. economic growth and security.
Among the specific issues that those submitting information are asked to consider:
The types of technical positions for which hiring is most difficult
Educational levels at which hiring is most difficult
Expected changes in staffing levels over the next five to 15 years
Skillsets that are likely to grow in importance
Things your company is doing to bring people into the tech workforce
Ideas for ways of stimulating semiconductor workforce growth
But a new study suggests that the color white can also be a social cue that results in a perception of race, especially if it’s presented in an anthropomorphic context, such as being the color of the outer shell of a humanoid robot. In addition, the same issue applies to robots that are black in color, according to the study. The findings suggest that people perceive robots with anthropomorphic features to have race, and as a result, the same race-related prejudices that humans experience extend to robots.
“We hope that our study encourages robot designers to create robots that represent the diversity of their communities,” Bartneck told me. “There is no need for all robots to be white.”
Bartneck suspected the research could prove controversial, but he and his collaborators—from Guizhou University of Engineering Science, China; Monash University, Australia; and University of Bielefeld, Germany—were determined to pursue the issue. “The discussion on this topic was like walking through a minefield,” he said, adding that their paper received extensive scrutiny from reviewers, some of whom accused the authors of sensationalism.
To learn more about the project, and the controversy surrounding it, we spoke with Bartneck via email. If you’d like more details on the methods used, statistical analyses applied, and numerical results, the full paper is available for download here.
IEEE Spectrum: Why hasn’t this topic been studied before, and what made you decide to study it? Why is this an important thing to study?
Christoph Bartneck: Many engineers are busy working on implementing the basic functions of robots, such as enabling them to walk and to navigate their environment. This occupies much of their attention, and the social consequences of their work are not particularly high on their priority list. Often robots are designed from the inside out, meaning that first all the functional parts of the robot are built and tested. Only at the end is some sort of cover added. How this cover affects human users, or, more broadly, how the robot as a whole is perceived by its users, is more often than not an afterthought.
Therefore, racism has not been on the radar for almost all robot creators. The members of the Human-Robot Interaction community have worked for many years to better understand the interaction between humans and robots, and we try to inform robot creators on how to design robots so that they integrate into our society. Racism causes considerable damage to people and to our society as a whole. Today racism is still part of our reality, and the Black Lives Matter movement demonstrates this with utmost urgency. At the same time, we are about to introduce social robots, that is, robots that are designed to interact with humans, into our society. These robots will take on the roles of caretakers, educators, and companions.
“If robots are supposed to function as teachers, friends, or caretakers, then it will be a serious problem if all of these roles are only ever occupied by robots that are racialized as white”
A Google image search result for “humanoid robots” shows predominantly robots with gleaming white surfaces or that have a metallic appearance. There are currently very few humanoid robots that might plausibly be identified as anything other than white or Asian. Most of the main research platforms for social robotics, including Nao, Pepper, and PR2, are stylized with white materials and are presumably white. There are some exceptions to this rule, including some of the robots produced by Hiroshi Ishiguro’s team, which are modeled on the faces of particular Japanese individuals and are thereby clearly—if they have race at all—Asian. Another exception is the Bina 48 robot that is racialized as black (although it is again worth noting that this robot was created to replicate the appearance and mannerisms of a particular individual rather than to serve a more general role).
This lack of racial diversity among social robots may be anticipated to produce all of the problematic outcomes associated with a lack of racial diversity in other fields. We judge people according to societal stereotypes that are associated with these social categories. Social stereotypes do, for example, play out at times in the form of discrimination. If robots are supposed to function as teachers, friends, or caretakers, for instance, then it will be a serious problem if all of these roles are only ever occupied by robots that are racialized as white. We hope that our study might serve as a prompt for reflection on the social and historical forces that have brought what is now quite a racially diverse community of engineers to, almost entirely and seemingly without recognizing it, design and manufacture robots that, our research suggests, are easily identified by those outside this community as being white.
What does racism mean in the context of robotics? How can a robot have a race if robots aren’t people?
A golden rule of communication theory is that you cannot not communicate. Even if robot creators did not intend to racialize their robot, people will still perceive it as having a race. When asked directly what race the robots in our study have, only 11 percent of the people selected the “Does Not Apply” option. But our implicit measures demonstrate that people do racialize the robots and that they adapt their behavior accordingly. The participants in our studies showed a racial bias towards robots.
If robots can be perceived to have a race, what are the implications for HRI?
We believe our findings make a case for more diversity in the design of social robots so that the impact of this promising technology is not blighted by a racial bias. The development of an Arabic-looking robot, as well as the significant tradition of designing Asian robots in Japan, are encouraging steps in this direction, especially since these robots were not intentionally designed to increase diversity but were instead the result of a natural design process.
What specific questions are you answering in this study?
Do people ascribe race to robots and if so, does the ascription of race to robots affect people’s behavior towards them? More specifically, using the shooter bias framework, such racial bias would be evidenced by participants being faster to shoot armed agents when they are black (versus white), faster to not shoot unarmed agents when they are white (versus black), and more accurate in their discernment of white (versus black) aggressors.
Image: University of Canterbury
Results of a Google image search for the term “robot.”
Can you describe the method you used to study these questions, and why you chose this particular method?
The present research examined the effect of racialized robots on participants’ responses in the shooter bias task, a task widely used in social psychological intergroup research to uncover automatic prejudice towards black men relative to white men. We conducted two online experiments to replicate and extend the classic study on shooter bias towards black agents. To do so, we adapted the original research materials by Correll et al. and sought to explore the shooter bias effect in the context of social robots that were racialized as either black or white agents.
Similar to previous work, we explored the shooter bias using different response windows and focused both on error rates and latencies as indicators of an automatic bias. In shooter bias studies, participants are put into the role of a police officer who has to decide whether or not to shoot when confronted with images in which people hold either a gun or a benign object in their hand. The image is shown for only a split second, and participants in the study do not have the option to rationalize their choices. They have to act within less than a second.
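As a rough illustration of how shooter bias is typically quantified from reaction latencies, here is a minimal sketch. The four-condition structure follows the standard shooter-task paradigm, but all latency values and the simple interaction contrast below are hypothetical, not data from the study:

```python
# Toy shooter-bias tally: mean reaction times (ms) per condition, and
# an interaction contrast indexing the bias. All numbers are made up
# for illustration only.
from statistics import mean

# Latency samples (ms) keyed by (agent_race, armed?).
latencies = {
    ("black", True):  [520, 540, 510],   # decided to shoot armed black agent
    ("white", True):  [560, 580, 555],   # decided to shoot armed white agent
    ("black", False): [610, 640, 620],   # withheld fire, unarmed black agent
    ("white", False): [570, 590, 575],   # withheld fire, unarmed white agent
}

means = {cond: mean(samples) for cond, samples in latencies.items()}

# Bias pattern: faster to shoot armed black targets, faster to withhold
# fire for unarmed white targets. A positive contrast in this toy data
# indicates the bias is present.
bias = ((means[("white", True)] - means[("black", True)]) +
        (means[("black", False)] - means[("white", False)]))
print(f"interaction contrast: {bias:.1f} ms")
```

Real analyses in this literature also model error rates and use signal-detection measures, but the core comparison is this race-by-armed-status interaction in latencies.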
“This bias is both a clear indication of racism towards black people, as well as the automaticity of its extension to robots racialized as black”
What were the results of your study?
Our study revealed that participants were quicker to shoot an armed black agent than an armed white agent, and simultaneously faster to refrain from shooting an unarmed white agent than an unarmed black agent regardless of whether it was a human or robot. These findings illustrate the shooter bias towards both human and robot agents. This bias is both a clear indication of racism towards black people, as well as the automaticity of its extension to robots racialized as black.
Were the results what you expected?
Given the novelty of our research questions, we did not have a clear prediction of the results. We really did not know whether people would ascribe a race to a robot and whether this would affect their behavior towards the robots. We were certainly surprised by how clearly people ascribed a race to the robots when asked directly, in particular since the “Does Not Apply” option was the first option. In studies of racism, implicit measurements are normally preferred over explicit measures, since people tend to give socially acceptable responses. Barely anybody would admit to being racist when asked directly, while many studies using implicit measures have shown that even people who do not consider themselves racist exhibit racial biases.
Are you concerned that asking people questions about robots in a racial context makes people more likely to ascribe race to them?
This would hold true for explicit measures. Asking what race a robot might have suggests that there is at least a possibility for a robot to have a race, regardless of whether a “Does Not Apply” option is offered. The implicit measurements allow us to study racial biases without leading the participants on. During the main part of the study, race was never brought up as the topic of investigation.
Can you discuss what the limitations of your current research are? How would you improve research in this area in the future?
We may speculate that different levels of anthropomorphism might result in different outcomes. If the robot were indistinguishable from humans, we would expect to find the same results as the original study, while a far more machine-like robot might have yet-to-be-determined effects. One may also speculate about the racialization approach we used. To best replicate the original shooter bias stimuli, we opted to utilize human-calibrated racialization of the Nao robot rather than employ the Nao’s default appearance (white plastic), comparing it against the robot stylized with black materials.
It is important to note that the Nao robot did not wear any clothes while the people in the original study did. Strangely, the people in the original study did not cast a shadow. Given the powerful functions of Adobe Photoshop, we were able to include a more realistic montage of the Nao robot in the background by casting shadows. Future studies should include multiple postures of the Nao robot holding the gun and the objects.
It sounded like there was some hesitation about accepting the paper into the HRI conference. Can you elaborate on that?
The paper submitted to the HRI conference went through an unparalleled review process. Our paper was around 5,000 words, and the reviews we received added up to around 6,000 words. The paper was discussed at length during the conference program committee meeting, and a total of nine reviewers were asked to evaluate the study. The paper was conditionally accepted, and we were assigned a dedicated editor who worked with us to address all the issues raised by the reviewers. It pushed the authors and the editor to their personal boundaries to address all the arguments and to find appropriate responses.
“The method and statistics of our paper were never in doubt. Most of the reviewers were caught up in terminology and language. We were even accused of sensationalism and tokenism”
The method and statistics of our paper were never in doubt. Most of the reviewers were caught up in terminology and language. We were even accused of sensationalism and tokenism. These comments stem from discussions at the ideological level. For example, the term “Caucasian” was considered inappropriate. From a European perspective this makes little sense. Also, the use of the term “color” was deemed inappropriate, which made it difficult to talk about the light-absorbing properties of the robots’ shells. We were instructed to use the term “melanation” instead. While this might be a more scientific term, it makes it difficult to talk about the study with the general public.
Before the conference, I contacted the program chairs and suggested having a panel discussion at the conference to allow for a public debate on the topic. Although the chairs were initially enthusiastic, the proposal was eventually turned down. I then proposed using the presentation slot assigned to the study to include a small panel discussion. After an initial okay, and after I had solicited two experts who agreed to participate, the conference organizers forbade this panel discussion. The day before the presentation slot, I was instructed to present the paper without any commentary or panel discussion.
What this shows is that our academic community is struggling with addressing controversial social issues. All attempts to have an open discussion at the conference about the results of our paper were turned down. The problem that I have with this is that it inhibits further studies in this area. Why would you expose yourself to such harsh and ideology-driven criticism? I think we need to have a supportive and encouraging culture of conducting studies of problematic topics.
What was the reaction to your presentation at HRI and afterwards?
The presentation of the study at the conference was extremely well attended, and the international media widely reported on the results, with the notable exception of the U.S. This is particularly problematic since our study was conducted with participants only from the U.S.
How would you like to see the results of your research applied?
We hope that our study encourages robot designers to create robots that represent the diversity of their communities. There is no need for all robots to be white.
What are you working on next?
We are currently conducting a study in which we expand the gradients of the robots’ surface color to include several shades of brown. In addition, we are investigating to what degree the anthropomorphism of the robot may influence the perception of race.
“Robots and Racism,” by Christoph Bartneck, Kumar Yogeeswaran, Qi Min Ser, Graeme Woodward, Robert Sparrow, Siheng Wang, and Friederike Eyssel, from the University of Canterbury, Monash University, Guizhou University, and University of Bielefeld, was presented at HRI 2018 in Chicago.
Nanoparticle contacts open the way to better sensors
Last week, researchers at IBM Research-Zurich in Rüschlikon, Switzerland, and the Universities of Basel and Zürich announced in a Letter published in Nature a new method for creating electrical contacts to individual molecules on a silicon chip. The advance could open up a promising new way to develop sensors and possibly other electronic or photonic applications of manipulating single molecules.
When, in the mid-1970s, researchers discovered single molecules with interesting electronic properties, such as that of a diode, hopes were high that this would spur the development of a new semiconductor technology that might compete with silicon-based electronics. However, establishing electrical contacts to such molecules remained essentially an activity confined to the laboratory. While it is possible to make contact with these molecules using the tips of scanning tunneling microscopes (STMs), those experiments require vacuum and low-temperature conditions. Moreover, single electrical junctions remained difficult to reproduce because they varied widely in the current they admitted to the molecule. These problems are the main reason why, up to now, no molecular-electronic devices have been available.
“We needed to fabricate devices that are more or less identical, ambient stable, and that can be placed on a robust platform, such as a silicon chip, in numbers of several billions, to compete with CMOS technology,” says Emanuel Lörtscher, of IBM Research, who is a coauthor of the Nature paper.
To accomplish that, the researchers first turned to a sandwich-on-silicon approach attempted in the past. But that did not work. On a silicon wafer, they created platinum electrodes, which they covered with a dielectric, a thin layer of non-conducting material. Then they created nanopores in this layer, using conventional etching techniques. They filled these pores with a solution of alkane-dithiol molecules and allowed the molecules to form a self-assembled monolayer of densely packed, parallel-oriented molecules in the pores.
Like the bottoms of wine bottles in a crate, one end of the molecules made contact with the exposed platinum layer at the bottom of the pores. Up to now, researchers had attempted to cover these nanopores with another thin platinum layer to form the upper contact. But the electrical contacts obtained in this way showed a wide variation in contact resistance caused by variations in the distances between the molecules and the contact layer. The resulting device was unusable. They also tried graphene, with the same disappointing results, remembers Lörtscher.
The researchers finally hit upon a solution that was ingenious by its simplicity. Their golden idea: After the pores were filled with the self-assembled monolayer (SAM) material, they covered the SAMs in the pores with gold nanoparticles. These nanoparticles, big enough not to fall in between the self-assembled molecules, made contact with the molecules without destroying them or altering their properties.
“The nanoparticles auto-adjust to the size of the molecules,” says coauthor Marcel Mayor of the University of Basel. “This now looks so simple, and we did so much work to get there.”
Image: IBM Research-Zurich
The arrow and the dotted circle show where the 10-micrometer-diameter pore sits beneath the bulk metal contact that seals the molecular monolayer film to provide ambient stability.
The researchers created around 3000 nanopores on the wafer, each housing self-assembled molecules. When they tested the molecules’ response to applied voltages, they found that for same-size pores, the spread in responses was very small. Although the contact resistance for individual molecules in the pores may differ because of defects, there is an effective all-sample averaging caused by their SAM approach, explains Lörtscher.
Mayor is not sure whether the SAM devices will be able to compete with silicon devices for data storage or switching. Because the electrical properties of SAM molecules are affected by the presence of other molecules, they could be useful in sensing applications, he says. “There are many structures in which this behavior is known,” says Mayor. For example, the SAM molecules are pH sensitive, and they rearrange their structure, or swell, when exposed to certain vapors or solvents. “This is where all the interest from industry in these devices is coming from; they are interested in much more precise analytical devices,” says Mayor. Zhenan Bao, a materials scientist at Stanford University, agrees. “Contacting to single molecules reliably has always been a major challenge. It is impressive that they got very reproducible results and how beautifully the electrical conduction scaled with the length of the molecule. Their approach could be very promising for making molecular memory and circuits in the future,” she says.
However, Youngkyoo Kim, a researcher at Kyungpook National University, in South Korea, expresses some reservations about the SAM devices as sensors: “I feel that the present nanoparticle and self-assembly approach sounds good in terms of large-scale fabrication of electrical contacts in molecular devices but the performance reproducibility and stability may remain a big hurdle to overcome. In the case of the present device structure, both metal electrodes (including metal nanoparticles) and SAM layers need to be well encapsulated for stable operation.”
The new iBeat Heart Watch raises the alert if your heart rate falters
Consumers have proven fickle about wearables: Many a well-intentioned person has started out enthusiastic about a Fitbit, only to dump it in a drawer after a few months. But iBeat, the startup behind the iBeat Heart Watch, is betting that people will be more faithful to their wearable devices if their lives are on the line.
If you saw someone wearing this new smartwatch, which launched last week, you’d assume they were using it for the quaint purpose of telling time. Its true purpose would be apparent only if the watch’s sensors detected the telltale signs of cardiac arrest.
Then the dial hands on the watch face would be replaced by a stark question: “Are you okay?” Two big touch-screen buttons allow the user to either respond “yes,” in which case the watch goes back to being a watch and life goes on, or “no,” in which case the watch sends an alert to the user’s emergency contacts and also notifies the iBeat dispatch office, which can place a call to emergency responders.
The watch is designed for people who have heart problems and know that they’re at risk of sudden cardiac arrest—which occurs when the heart’s electrical system malfunctions and it stops beating. (A heart attack is a different thing that happens when a blockage in a blood vessel halts blood flow to the heart.) These at-risk people will have to wear the watch 24/7 so it can monitor them continuously.
Because of that around-the-clock requirement, company founder and CEO Ryan Howard says his team worked hard to design a watch that would look normal on users’ wrists, whether they’re wearing business suits or jogging suits. Otherwise the people who need it simply won’t wear it. “My dad still wears purple polo shirts—he still thinks he’s stylish,” Howard said during a product demo at the IEEE Spectrum office.
The iBeat watch first gained attention in 2016 with a campaign on Indiegogo and an investment announcement by TV’s own Dr. Oz. Like many a company with a crowdfunded project, iBeat has missed a number of deadlines: It originally promised to start shipping watches to backers in summer of 2017.
The delays resulted from a major change in the company’s business plan, Howard says. He initially intended to put his software on an existing smartwatch such as the Apple Watch or one of its competitors, which have built-in sensors that measure heartbeat. But during testing, he became convinced that the sensors weren’t good enough. “The Apple Watch sensor—I don’t want to say it’s a toy, but it’s optimized for fitness,” he says. “We think we would have had three false positives a day.”
Instead the company decided to design a bespoke smartwatch with sensors that met its own standards, and then get a production line up and running in China. And, well, that took a little while.
The watch doesn’t use particularly fancy sensors, but Howard says they had to be optimized to use minimal power yet provide reliable results for people with all sorts of different body and skin types. It uses optical sensors to measure pulse rate and blood oxygenation levels, shining light through the skin and measuring how much of it the flowing blood absorbs at each wavelength. Howard says his team worked with researchers at the University of California, San Francisco to validate the sensors’ readings.
To ensure good contact with the skin and therefore reliable readings, the watch has multiple redundant sensors. And to keep the power requirements down and the battery life long (it works for about three days before needing a recharge, which can be done with a small snap-on gadget), the watch automatically shifts between low-power and high-fidelity modes based on sensor data. For example, if the user is walking around with a steady heartbeat, the watch can decrease the frequency of its readings. But if it detects the irregular heartbeats that can be a precursor to cardiac arrest, it can immediately up its sampling rate.
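That duty-cycling behavior can be sketched in a few lines. This is a hypothetical illustration: the rates, the regularity metric, and the threshold are our assumptions, not iBeat's implementation.

```python
# Assumed mode rates, for illustration only
LOW_POWER_HZ = 0.2      # one reading every 5 seconds
HIGH_FIDELITY_HZ = 25.0  # continuous monitoring

def choose_sampling_rate(intervals_ms):
    """Pick a sampling rate from recent beat-to-beat intervals.

    A steady rhythm allows infrequent, low-power readings; irregular
    intervals (a possible precursor to cardiac arrest) trigger
    high-fidelity sampling. The threshold is illustrative."""
    mean = sum(intervals_ms) / len(intervals_ms)
    var = sum((x - mean) ** 2 for x in intervals_ms) / len(intervals_ms)
    cv = (var ** 0.5) / mean  # coefficient of variation of the intervals
    return HIGH_FIDELITY_HZ if cv > 0.15 else LOW_POWER_HZ

steady = [800, 810, 795, 805, 800, 790]     # ~75 bpm, regular rhythm
erratic = [800, 450, 1200, 600, 1000, 500]  # irregular rhythm
print(choose_sampling_rate(steady))   # low-power mode
print(choose_sampling_rate(erratic))  # high-fidelity mode
```

The design trade is the one Howard describes: the cheap check runs almost all the time, and the expensive one only when the cheap one sees something worth a closer look.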
There’s a huge market for medical alert systems that’s only getting bigger as Baby Boomers age: One recent research report estimates that the market will grow to $11 billion by 2025.
Some of the existing systems seem to harken back to an earlier technological era (anyone who watched TV in the late 1980s may remember the “I’ve fallen and I can’t get up!” commercial), with the alert button on a pendant that communicates with a base station in the house. Howard argues that such systems take away the autonomy of seniors with heart problems. “If it only works within 200 or 300 feet of the house, it becomes an end-of-life prison sentence,” he says. “Whereas our watch is really driving independence.”
The iBeat watch works directly with two cellular networks—AT&T and T-Mobile—and so works wherever those networks provide coverage.
For an outside perspective on this new gadget, IEEE Spectrum asked the opinion of Brandon Ballinger, cofounder of the Cardiogram app that performs continuous heart monitoring via the Apple Watch or Android-based watches. Ballinger says he sees a market for the iBeat Watch and other stand-alone heart monitoring wearables, but he’s somewhat skeptical that it will provide more reliable results based on its sensors.
“I think the Apple Watch's heart rate sensor is quite good,” Ballinger says. “A peer-reviewed study from Stanford measured the accuracy of multiple wearables, and concluded Apple Watch had the highest accuracy. It’d surprise me if a startup with $10 million in funding could beat Apple on sensor hardware. However, I’d suspect making their own watch probably gave them more control over things like sampling rate.”
Ballinger says he’d need to see peer-reviewed studies of the iBeat watch before trusting it. “In general, it makes me uneasy when healthcare startups take patients’ money with the promise of detecting a serious health condition, without doing the legwork of a published clinical study,” he says. “Enduring businesses are built on stable foundations, and in healthcare, that foundation is clinical proof.”
Tesla batteries and behind-the-meter Li-ion batteries could deliver electricity to meet peak Silicon Valley demand
Energy storage could get a big boost if California officials green light plans by utility Pacific Gas and Electric Co. to move forward with some 567 megawatts (MW) of capacity.
Included in the mix is more than 180 MW of lithium-ion battery storage from Elon Musk’s Tesla Inc. The Tesla-supplied battery array would be owned by PG&E and offer a four-hour discharge duration. The other projects would be owned by third parties and operated on behalf of the utility under long-term contracts. All of the projects would be in and around Silicon Valley in the South Bay area.
Once deployed, the storage would sideline three gas-fired power plants that lack long-term energy supply contracts with utilities: the 605-MW Metcalf Energy Center, the 47-MW Feather River Energy Center, and the 47-MW Yuba City Energy Center. Even without the contracts, the state’s grid operator identified the units as needed for local grid reliability. The grid operator and independent power producer Calpine, which owns the plants, asked federal regulators to label the plants as “must run.” That designation would let them generate electricity and be paid for it even without firm utility contracts.
Both PG&E and California’s utility regulators object to that idea. They argue that the must-run designation without firm contracts would distort the state’s power market and lead to unfair prices. Backing up its objection, regulators earlier this year directed the utility to seek offers to replace the gas-fired power plants with energy storage.
The utility says that its search prompted more than two dozen storage proposals with 100 variations. PG&E narrowed the list to four, which it presented to state regulators in late June.
One of the projects, the Vistra Energy Moss Landing storage project, would be owned by Dynegy Marketing and Trade, LLC, a unit of Vistra Energy Corp. The holding company manages more than 40 gigawatts of generating capacity across 12 states. The project would be a transmission-connected, stand-alone lithium-ion battery energy storage resource in Monterey County. The facility, which would feature a 300-MW, four-hour duration battery array, could enter service in December 2020 under a 20-year contract.
A second project, Hummingbird Energy Storage, would be owned by a unit of esVolta, a new company that is partnering with Oregon-based Powin Energy Corp. and Australia-based Blue Sky Alternative Investments. The Santa Clara County–sited resource would include a 75-MW, four-hour-duration Li-ion battery array. It also could enter service in December 2020 and would operate under a 15-year contract.
One so-called “behind-the-meter” proposal also was accepted by PG&E. It came from Micronoc Inc. and would total 10 MW of four-hour-duration storage. In practice, the project would bundle the discharge capacity of lithium-ion batteries located at multiple customer sites. That’s in step with Micronoc’s business model of developing distributed energy storage projects, most of them so far in South Korea. A 10-year service contract with PG&E could start in October 2019.
PowerPack lithium-ion batteries from Tesla would form the backbone of a 182.5-MW array with a four-hour discharge duration. The batteries would be located at a PG&E substation in Monterey County. The array could enter service by the end of 2020 and include a 20-year performance guarantee from Tesla.
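Since power (MW) multiplied by discharge duration (hours) gives stored energy (MWh), the four accepted projects can be tallied against the headline figure. The capacities below come from the article; the arithmetic itself is our own check.

```python
# The four accepted projects: (capacity in MW, discharge duration in hours)
projects = {
    "Vistra Moss Landing":       (300.0, 4),
    "Hummingbird (esVolta)":     (75.0, 4),
    "Micronoc behind-the-meter": (10.0, 4),
    "Tesla / PG&E substation":   (182.5, 4),
}

for name, (mw, hours) in projects.items():
    print(f"{name:28s} {mw:6.1f} MW x {hours} h = {mw * hours:7.1f} MWh")

total_mw = sum(mw for mw, _ in projects.values())
print(f"Total power: {total_mw} MW")  # 567.5 MW, matching the ~567 MW figure
```

The Tesla array alone works out to 730 MWh of deliverable energy per full discharge.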
The regulatory mandate directing PG&E to seek energy storage proposals was not the first time California regulators acted to boost storage.
In February 2013, regulators told utility Southern California Edison to secure energy storage and other resources to meet an expected shortfall stemming from the closure of the San Onofre nuclear power plant. In that instance, the utility’s energy storage target was 50 MW. It ultimately procured more than 260 MW of storage capacity.
Then, in May 2016, regulators again directed SoCalEd to procure storage to ease electric supply shortages that were feared as a result of a leak at a natural gas storage facility. As a result, more than 100 MW of grid-level energy storage was placed into service.
In announcing the Silicon Valley projects, PG&E sought to play up storage’s role in helping to integrate increasing amounts of renewable energy onto California’s grid. It also cited recent decreases in battery prices as enabling energy storage to compete with “traditional solutions” such as fossil-fueled power plants.
No cost details were provided by the utility in making its announcement. And a nearly 250-page supporting document justifying the four projects was scrubbed of cost details before being released to the public.
Evidence is growing, however, that battery energy storage can beat natural gas on price when it comes to a specific type of power generation known as “peaking capacity.” Known as peakers, the fast-start power plants typically are called on to generate power on days when consumer demand for electricity is highest. For most places, that’s a hot summer day when air conditioners are cranked up.
One milestone came earlier this year when Arizona Public Service signed a 15-year deal for peaking power from a solar-powered battery. The 50 MW of storage beat out other forms of peaking generation, including natural gas. A field of solar PV panels from FirstSolar will energize the storage array when the sun is high in the sky, allowing electricity to be delivered to customers during times of peak demand.
Over the next 15 years, Arizona Public Service says it plans to put more than 500 MW of additional battery storage capacity in place. In January 2018, Arizona utility regulator Andy Tobin proposed that the state’s utilities deploy 3,000 MW of energy storage by 2030.
Efforts to increase the amount of energy storage in places like Arizona and California received a boost in February when the U.S. Federal Energy Regulatory Commission (FERC) voted 5-0 to remove what it said were nationwide barriers that kept storage sources from taking part in various markets that are run by Regional Transmission Organizations and Independent System Operators.
In a November 2016 proposal, the Obama-era FERC said that market rules designed for traditional generation resources created barriers to entry for emerging technologies such as electric storage resources. The February action was the next step in that effort to reduce or remove barriers to entry.
If it all works out, the effect could be to make small groups of engineers capable of feats that would take 100 engineers to achieve today. “We envision a much more specialized, secure, and heavily automated electronics community, which will change how everything is done in electronics, top to bottom,” says Darpa’s ERI director Bill Chappell. And that means your job is probably going to feel the effects.
The agency will kick off the initiative and reveal some of the winning proposals at a summit in San Francisco from 23 to 25 July, headlined by bigwigs like Nvidia’s Bill Dally and Intel’s Mike Mayberry. Chappell spoke to IEEE Spectrum ahead of the conference about the initiative’s aims and potential impacts.
IEEE Spectrum: What are the problems with the U.S. electronics industry that prompted this massive effort?
Bill Chappell: I think it’s a unique point in time. We’ve got underlying trends where the physics is already hard and getting harder. And that’s expressing itself in the cost across the board, whether that’s design, manufacturing, or even writing the software on top of a system-on-chip. Most aspects of electronics are getting more expensive, and larger design teams are needed to manage the underlying complexity. That has consequences across commercial industry and across the defense industry.
IEEE Spectrum: What is it about the problem that prevents industry from solving it on its own?
Chappell: Industry is very good at solving immediate problems. Where the government has stepped in in the past, and is trying to step in now, is at moments where there’s a larger leap ahead required. We’re aiming for 2025 to 2030 timelines. And oftentimes, industry isn’t looking out across those timelines as they have more immediate pressures and concerns.
They also don’t always do what’s best for the collective industry. One thing the government has done well in the past is build communities to tackle big problems as an aligned group, as opposed to just having individual entities tackle smaller problems.
IEEE Spectrum: Has this community building become more important as the original version of the semiconductor roadmap ended?
Chappell: That’s true. When it was quite clear what the roadmap was, everybody in the electronics industry could pull in the same direction and know that it would be best for the collective if they kept the roadmap going. That was true for Darpa as well. We were sponsors of the Sematech consortium, and when it was clear what the goals were, it was an easier time in terms of building the collective.
IEEE Spectrum: Why make this big push now, and why organize it as a high-level summit?
Chappell: Typically, we run individual projects. Darpa’s been doing projects in the electronics space since its inception. In this case, we felt that an initiative was important, first because we’re concentrating on the electronics sector more than ever, and second because of the connectivity of many different projects. The teaming that can happen between projects, we think, is where a lot of innovation can happen.
We kicked six programs off simultaneously last summer. So it was a good opportunity to pull the entire electronics community together, to be able to see what we’ve invested in, and then to help brainstorm what the next round of investments should look like.
IEEE Spectrum: What’s the best possible outcome from the summit?
Chappell: There are two aspects that we’re hoping for. First, that we realize the synergy between the individual projects, so that new teams form from universities, companies, and federal labs that might otherwise not partner. And second, that we get a basis for new and exciting ideas for a next round of funding that we hope to announce in the fall.
IEEE Spectrum: One of the three major efforts Darpa is backing centers around chip architecture. Why is that, and what do you hope it will accomplish?
Chappell: In architectures, we believe that aggressive specialization is a part of the answer to what happens next. That’s mapping applications to the specific architectural choices. And you already see that in machine learning, where there’s a really hot field in terms of deep neural nets and other implementations. [See “IBM’s Do-It-All Deep Learning Chip” for an example of this.]
But a lot of our applications are much broader than that. We’re looking to collect the different applications where it makes sense to commit specific specialized resources.
IEEE Spectrum: Can you give an example?
Chappell: We started a year ago in a program called Hive, where we took a look at sparse graph parsing; that’s making associations across data sets which aren’t densely connected. An example application would just be logistics, where you’ve got lots of connections that didn’t map to the way computing architectures were laid out. In that program, Intel and Qualcomm are doing base-level designs; doing things like updating memory access patterns, updating the type of cores that would be doing the processing, and working across the software stack to do a hardware/software co-design for a variety of applications.
That was step one. Step two, to be kicked off at the Summit, is something we call “software-defined hardware.” That’s where the hardware is smart enough to reconfigure itself to be the type of hardware you want, based on an analysis of the data type that you’re working on.
In that case, the very hard thing is to figure out how to do that data introspection, how to reconfigure the chip on a microsecond or millisecond timescale to be what you need it to be. And more importantly, it has to monitor whether you’re right or not, so that you can iterate and be constantly evolving toward the ideal solution.
IEEE Spectrum: Is any of that possible now?
Chappell: It ultimately is a coarse-grained reconfigurable architecture; those have existed. What hasn’t existed is how aggressively we’re asking the timescales to do the reconfiguration.
It’s very hard. That’s why we step in to ask a question like that. It’s too early to be starting companies in that space. It’s a very basic question we’re asking.
IEEE Spectrum: What’s the third part of the architecture push?
Chappell: The third wing of the architecture piece is the “domain specific system-on-chip.” We were able to hire the former lead of GNU Radio Foundation, who is an expert in software-defined radio. He had continual frustration trying to lead a community that had difficulty in utilizing specialized hardware. If an SoC was designed, say, for 4G, it was fairly difficult to repurpose that for the more general needs that the software-defined radio community wanted.
So what we’re asking is: If you built a new radio from the ground up, what would it need to include to be able to have specialized resources like accelerators and yet also have the ability for a broad community to build on top of it?
We’re starting with the software-defined radio domain but then extending that to machine vision and machine learning and other domains to see if you can still have simplified programming models running on top of hyper-specialized hardware. That’s an architecture play, but it’s just as much a software play.
IEEE Spectrum: So in a sense, you’re starting with software and then asking what kind of hardware you’d need to execute it?
Chappell: It’s a bit broader than that. We’re starting with all the algorithms that exist in the software-defined radio community, and then collecting the ones where it would be appropriate to do specialization of the architecture. Right now, it’s pretty easy to make an ASIC if you know what your application is. If you have a broader domain, it can be more difficult to get that specialization right, and then it can be even more difficult to write code with an entire community doing very different things on top of that specialization.
IEEE Spectrum: What do you have planned in the design space?
Chappell: The first program we did is called Craft. Craft is looking at mechanisms to empower small design teams to do much larger designs. For example, we’ve been working with University of California, Berkeley on the Chisel design flow. It allows you to write high-level programs to create a system-on-chip. But equally as important to us is to create variants of that chip very quickly. Chisel can port to a new technology node with about 20 percent of the effort of the initial design. That would give us flexibility in manufacturing.
IEEE Spectrum: How’s that compare to what can be done now?
Chappell: It’s not atypical for a large SoC design team to be 100 people or more. What we’ve challenged the university community to do is to come up with design methodologies that empower two or three graduate students to automate a lot of that design flow. The Chisel team showed that two to three student designers can do full system-on-chip designs by abstraction and automation.
IEEE Spectrum: That sounds like the kind of thing big design automation companies would want to do.
Chappell: Synopsys, Cadence, and Mentor are involved in that program. So yes, they’re interested, but it’s not something that’s critical on their roadmap initially.
One of the things we’re trading off is the ultimate efficiency of the design. If one percent of area matters because you’re making millions of parts, then this level of abstraction may not be the right one for you. In our case, we’re not as cost-driven. So, if we give away 10 percent of the area, but maximize the capabilities of the smaller design team, that’s the kind of tradeoff that would work for the Defense Department, but it also might work for startups and other smaller players like universities.
IEEE Spectrum: What are the semiconductor design programs you’re kicking off this month?
Chappell: Those are Idea and Posh. Idea is really the intersection of machine learning and electronic design automation (EDA). What we’re trying to do is to be able to capture the capabilities of the designer inside the EDA itself. So that every time you use an EDA package, it gets smarter and your next design is that much easier.
IEEE Spectrum: Does this intelligence get shared so that everyone’s EDA tools get smarter?
Chappell: Eventually. First, we need to demonstrate that the concept is possible. Right now, it’s a vision, and it’s very hard. We have some inklings that show that it should be possible, but it needs a pretty big push from the research community to show what is and isn’t possible in that space.
Once it works, you can envision a cloud resource where multiple people are sharing and can all get better simultaneously. We’d definitely envision that for the university community. How that breaks down for corporate and proprietary lines is yet to be determined.
Chappell: Posh stands for posh open source hardware. It is an effort to create a foundation of building blocks where we have full understanding and analysis capability as deep as we want to go to understand how these blocks are going to work.
IEEE Spectrum: How’s that different from today?
Chappell: It’s pretty rare to go to GitHub, find high-quality hardware blocks that are available, and have the verification tools and everything you would need to trust that, even though that block has been altered by many different designers, it is in a state that is useable for your design.
Posh is as much about the verification tools as it is about the IP blocks that will be freely available.
IEEE Spectrum: How will this change hardware design?
Chappell: Our vision is that if you’re starting a new design, you have a foundation to build off of. Because so many people have had eyes on a block, you’re not guessing at its functionality or its origin. A lot of the time when we use third-party [intellectual property], you don’t have full custody of it from the beginning to when it actually gets utilized.
This alternative approach looks at how you, hopefully, enhance security of chip design because an open source community has the opportunity to inspect the blocks before you use them.
The emerging example is the RISC-V community, which we sponsored over the last decade. That’s maturing into something that’s interesting, but that’s just one example of what could be contributed to an open source pool of capabilities.
Ultimately, the parallel is to the software community. The hardware community really hasn’t figured out that ethos of sharing. We’re trying to pull some of that excitement and methodology into hardware design.
IEEE Spectrum: What’s held back sharing in hardware?
Chappell: Some of it is culture, in that it’s historically been a proprietary business, because you’ve had to make really large bets. That risk has been rewarded specifically through these proprietary capabilities. As the abstraction of the hardware design gets higher and higher, it gets closer to the software community’s mentality. So that’s why I think you see an emergence of this concept.
Ultimately the reason I don’t think it has taken root in the hardware community is verification. You’re not going to bet $100 to $200 million on a block that was maybe built by a university or, even if it was from another industrial location, you don’t really know the quality of it. So you have to have a methodology to understand how good something is at a deep level before it’s used.
IEEE Spectrum: What are you planning in the third part of the ERI effort—materials and integration?
Chappell: The existing program is called Chips, which looks at really dense integration methodologies to take the monolithic design and split it into many smaller pieces called chiplets. That way we could create, in essence, a “pseudolithic” design—something that would function as though it was monolithic but was in fact composed of smaller parts which can be designed independent of each other. In Chips, we’re working with Intel on some of their integration strategies and on some of their I/O standards and working with the broader community—a mix of startups, universities, and defense contractors—to explore how to stand up the interfaces so that you can do that composable design.
With chiplets, if you’re a smaller team, you don’t have to recreate the rest of the chip. If 10 percent of the chip is where you’re going to be innovative and creative, you can focus there and add that into the broader ecosystem. It’s another way of lowering the barrier toward a new hardware capability.
But it depends on really lowering the energy of moving a bit between different chiplets to one picojoule per bit or even lower. You’ll never get to monolithic capabilities, but you really want to minimize the barrier between different chips. What we’ve shown is that, if you’re subpicojoule per bit, at least in many cases, then you can do the composable design.
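To see why the picojoule-per-bit figure matters, note that interconnect power is simply data rate times energy per bit. A back-of-the-envelope sketch (our numbers, chosen for illustration, not from the program):

```python
def link_power_watts(bandwidth_gbps, energy_pj_per_bit):
    """Power spent moving data between chiplets:
    P = (bits per second) * (joules per bit)."""
    bits_per_s = bandwidth_gbps * 1e9
    return bits_per_s * energy_pj_per_bit * 1e-12

# A hypothetical 1-Tb/s chiplet-to-chiplet link:
print(f"{link_power_watts(1000, 1.0):.3f} W at 1.0 pJ/bit")
print(f"{link_power_watts(1000, 0.5):.3f} W at 0.5 pJ/bit")
print(f"{link_power_watts(1000, 10.0):.3f} W at 10 pJ/bit")
```

At roughly a picojoule per bit, a terabit-per-second link costs on the order of a watt; at the tens-of-picojoules typical of older off-chip interfaces, the same link would burn tens of watts, which is why the target makes or breaks composable design.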
IEEE Spectrum: What are the two new programs you’ll be revealing at the ERI Summit?
Chappell: One is called 3D SoC. It’s looking at a system-on-chip and asking: What can you do with older manufacturing nodes (like 90 nanometers) to make them competitive with 5-nm or 7-nm design?
We did a series of seedling projects at different universities to show that if you can truly mix dense memory and logic in a monolithic 3D stack, for many applications it would be better in absolute terms than a 7-nm processor design sitting next to a memory block. [Monolithic 3D chips have two or more layers of transistors built on the same piece of silicon.] But it’s ultimately very hard, because if you’re going to do the monolithic 3D, then your processing needs to be fully compatible, including low-temperature deposition of materials on top of the silicon base. [For an extreme example, see “3D Electronic Nose Demonstrates Advantages of Carbon Nanotubes”.]
IEEE Spectrum: Would reviving older, trusted fabs or keeping them going longer by using this 3D stacking provide the Defense Department some security against potential foreign hardware trojans and hidden back doors?
Chappell: It does. There are many smaller facilities across our country that, if you could do 3D monolithic stacking, could contribute high-quality electronics.
IEEE Spectrum: What’s the last program you’ll be introducing?
Chappell: The last one is Franc [Framework for Novel Computing], and it looks at new materials for processing-in-memory. We look at where memory is stored and ask: Can those same materials be used for processing information?
What you’d like to do is to be able to explore a variety of different materials and use some of their eccentricities to map to the problems that you’re trying to solve. In the past, we’ve been able to show deep neural nets that use nonvolatile memory as part of the computation in both storing weights and doing the summing of, for example, a dot product. That can be more efficient than just using the memory to store information and then having to transfer it all the way back to the processor.
IEEE Spectrum: That sounds similar to what deep learning chip startups Mythic and Syntiant are doing using flash memory.
Chappell: Most of those cases are looking to use today’s existing technology. Franc would be new devices that would amplify that type of capability. Next-generation nonvolatile memory, for example, that switches very fast and with a low write energy.
IEEE Spectrum: Can you give us an example or three?
Chappell: That should be announced at the Summit, so I think I’m going to hold off on that one. There’s one big bet we’re making that will be announced once it’s on contract. And we’ll make lots of smaller bets at universities.
IEEE Spectrum: Taken together, this stuff sounds like it could lead to a huge change not just in the electronics industry, but in the engineering profession. Was that a goal?
Chappell: A goal is to alter the electronics industry and shape it in a way that works best for our country including the Department of Defense, and you do that by working hand in hand with industry, not around industry.
We’ve had a really long collaboration working with our mainstream industry partners. Way back with the FCRP program, 25 years or more, we’ve done foundational investments in universities across the..