Robohub by Robots In Depth - 1w ago

In this episode of Robots in Depth, Per Sjöborg speaks with Andreas Bihlmaier about modular robotics and starting a robotics company.

Andreas shares how he started out in computers and later felt that robotics, through its combination of software and hardware that interacts with the world, was what he found most interesting.

Andreas is one of the founders of RoboDev, a company that aims to make automation more accessible through modular robotics. He explains how modular systems are especially well suited to automating low-volume production runs, and how the company works with customers to simplify automation.

He also discusses how a system that can easily be assembled into many different robots creates an advantage both in education and in industrial automation, by providing efficiency, flexibility and speed.

We get a personal, behind the scenes account of how the company has evolved as well as insights into the reasoning behind strategic choices made in product development.

Andreas Bihlmaier in Robots in Depth #38. Sponsors: Carbon Robotics and Aptomica - YouTube

Robohub by Australian Centre for Robotic Vision - 1w ago

In a world first, Australian Centre for Robotic Vision researchers are pushing the boundaries of evolution to create bespoke, miniaturised surgical robots, uniquely matched to individual patient anatomy.

The cutting-edge research project is the brainchild of Centre PhD researcher Andrew Razjigaev, who last November impressed HRH The Duke of York with the Centre’s first SnakeBot prototype, designed for knee arthroscopy.

Now, the young researcher, backed by the Centre’s world-leading Medical and Healthcare Robotics Group, is taking the next step in surgical SnakeBot’s design.

In place of a single robot, the new plan envisages multiple snake-like robots attached to a RAVEN II surgical robotic research platform, all working together to improve patient outcomes.

The novelty of the project extends to development of an evolutionary computational design algorithm that creates one-of-a-kind, patient-specific SnakeBots in a ‘survival-of-the-fittest’ battle.

Only the fittest design survives: one specifically suited to fit, flexibly manoeuvre and see inside a patient’s knee, doubling as the surgeon’s eyes and tools, with the added bonus of being low-cost (3D printed) and disposable.

Leading the QUT-based Medical and Healthcare Robotics Group, Centre Chief Investigator Jonathan Roberts and Associate Investigator Ross Crawford (who is also an orthopaedic surgeon) said the semi-autonomous surgical system could revolutionise keyhole surgery in ways not before imagined.

Professor Crawford stressed the aim of the robotic system – expected to incorporate surgical dual-arm telemanipulation and autonomous vision-based control – was to assist, not replace surgeons, ultimately improving patient outcomes.

“At the moment surgeons use what are best described as rigid ‘one-size-fits-all’ tools for knee arthroscopy procedures, even though patients and their anatomy can vary significantly,” Professor Crawford said.

He said the surgical system being explored had the potential to vastly surpass capabilities of current state-of-the-art surgical tools.

“The research project aims to design snake-like robots as miniaturised and highly dexterous surgical tools, fitted with computer vision capabilities and the ability to navigate around obstacles in confined spaces such as the anatomy of the human body,” Professor Crawford said.

“Dexterity is incredibly important as the robots are not only required to reach surgical sites but perform complicated surgical procedures via telemanipulation.”

Professor Roberts said the research project was a world-first for surgical robotics targeting knee arthroscopy and would not be possible without the multi-disciplinary expertise of researchers at the Australian Centre for Robotic Vision.

“One of the most exciting things about this project is that it is bringing many ideas from the robotics community together to form a practical solution to a real-world problem,” he said.

“The project has been proceeding at a rapid pace, mainly due to the hard work and brilliance of Andrew, supported by a team of advisors with backgrounds in mechanical engineering, mechatronics, aerospace, medicine, biology, physics and chemistry.”

Due to complete his PhD research project by early 2021, Andrew Razjigaev graduated as a mechatronics engineer at QUT in 2017 and has been a part of the Centre’s Medical and Healthcare Robotics Group since 2016.

The 23-year-old said: “Robotics is all about helping people in some way and what I’m most excited about is that this project may lead to improved health outcomes, fewer complications and faster patient recovery.

“That’s what really drives my research – being able to help people and make a positive difference. Knee arthroscopy is one of the most common orthopaedic procedures in the world, with around four million procedures a year, so this project could have a huge impact.”

Andrew said he hoped his work would lead to real-world development of new surgical tools.

“Surgeons want to do the best they can and face a lot of challenges,” he said. “Our objective is to provide surgeons with new tools to be able to perform existing surgery, like knee arthroscopy, more efficiently and safely and to perhaps perform surgery that is simply too difficult to attempt with today’s tools.

“It’s also incredibly cool to use evolution in my work! There’s no question we’re witnessing the age-old process – the only difference being it’s happening inside a computer instead of nature.”

  • The process starts with a scan of a patient’s knee. With the supervision of a doctor, the computer classifies the regions for the SnakeBots to reach in the knee (green area) and regions to avoid (red area).
  • The resulting geometry makes a 3D environment for the SnakeBots to compete in the simulated evolution. It enables a number of standard SnakeBot designs to be tested and scored on how well they perform – namely how well they manoeuvre to sites inside a patient’s knee. The black lines in the test show some of the trajectories a SnakeBot took to manoeuvre to those sites.
  • The evolutionary computational design algorithm kicks in, continually creating new generations of SnakeBots, re-testing and killing off weaker variants until one survives, uniquely matched to an individual patient’s anatomy. The SnakeBot that can safely reach those targets with the greatest dexterity wins the battle of evolution and claims the optimal design (a minimal sketch of this loop follows the list).
  • The optimal SnakeBots are generated into 3D models to be 3D printed as low-cost, disposable surgical tools unique to each patient.
  • They are now ready to be deployed for surgery! The micro SnakeBots are attached to a larger, table-top robotic platform (like the RAVEN II) that positions them for entry into surgical incision sites.
  • It is expected that two SnakeBots are fitted with surgical instruments at their tips to enable a surgeon to perform dual-arm teleoperated surgical procedures.
  • A third SnakeBot in the multi-bot system will have a camera installed at its tip. This camera system will be used by a robotic vision system to map a patient’s body cavity so that the robot can be steered towards the areas of interest and away from delicate areas that should be avoided. It will track the two arms and surgical area simultaneously, working as the eyes of the surgeon.
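For readers who want to picture the “survival-of-the-fittest” loop in the third step, here is a minimal, purely illustrative Python sketch. It assumes a hypothetical `score_dexterity(design, knee_model)` function standing in for the Centre’s simulator (which scores how safely and dexterously a candidate reaches the target sites); the design encoding and mutation scheme are also assumptions, not the actual algorithm.

```python
import random

def evolve_snakebot(knee_model, score_dexterity, pop_size=50, generations=200):
    """Toy evolutionary design loop: keep the fittest half, mutate it to refill
    the population, and return the single best patient-specific design."""
    # Candidate design: six segment lengths in millimetres (illustrative encoding).
    population = [[random.uniform(5.0, 20.0) for _ in range(6)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda d: score_dexterity(d, knee_model), reverse=True)
        survivors = ranked[: pop_size // 2]                      # weaker variants are killed off
        children = [[max(1.0, s + random.gauss(0.0, 1.0)) for s in parent]
                    for parent in survivors]                     # mutated offspring
        population = survivors + children
    # The winner is the design sent on to 3D modelling and printing.
    return max(population, key=lambda d: score_dexterity(d, knee_model))
```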

Find out more about the work of the Centre’s Medical and Healthcare Robotics Group in our latest annual report.

This walking microrobot was built by the MIT team from a set of just five basic parts, including a coil, a magnet, and stiff and flexible structural pieces.
Photo by Will Langford

By David L. Chandler

Years ago, MIT Professor Neil Gershenfeld had an audacious thought. Struck by the fact that all the world’s living things are built out of combinations of just 20 amino acids, he wondered: Might it be possible to create a kit of just 20 fundamental parts that could be used to assemble all of the different technological products in the world?

Gershenfeld and his students have been making steady progress in that direction ever since. Their latest achievement, presented this week at an international robotics conference, consists of a set of five tiny fundamental parts that can be assembled into a wide variety of functional devices, including a tiny “walking” motor that can move back and forth across a surface or turn the gears of a machine.

Previously, Gershenfeld and his students showed that structures assembled from many small, identical subunits can have numerous mechanical properties. Next, they demonstrated that a combination of rigid and flexible part types can be used to create morphing airplane wings, a longstanding goal in aerospace engineering. Their latest work adds components for movement and logic, and will be presented at the International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS) in Helsinki, Finland, in a paper by Gershenfeld and MIT graduate student Will Langford.

Their work offers an alternative to today’s approaches to constructing robots, which largely fall into one of two types: custom machines that work well but are relatively expensive and inflexible, and reconfigurable ones that sacrifice performance for versatility. In the new approach, Langford came up with a set of five millimeter-scale components, all of which can be attached to each other by a standard connector. These parts include the previous rigid and flexible types, along with electromagnetic parts, a coil, and a magnet. In the future, the team plans to make these out of still smaller basic part types.

Using this simple kit of tiny parts, Langford assembled a novel kind of motor that moves an appendage in discrete mechanical steps, which can be used to turn a gear wheel, as well as a mobile form of the motor that turns those steps into locomotion, allowing it to “walk” across a surface in a way that is reminiscent of the molecular motors that move muscles. These parts could also be assembled into hands for gripping, or legs for walking, as needed for a particular task, and then later reassembled as those needs change. Gershenfeld refers to them as “digital materials”: discrete parts that can be reversibly joined, forming a kind of functional micro-LEGO.
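For those who think in code, the “digital materials” idea can be pictured as a small vocabulary of discrete part types plus a graph of reversible connections made through the one standard connector. The sketch below is purely conceptual — the class names and the toy assembly are illustrative, not the MIT team’s software — and lists only the four part types named above.

```python
from enum import Enum, auto

class PartType(Enum):
    RIGID = auto()     # stiff structural piece
    FLEXIBLE = auto()  # compliant structural piece
    COIL = auto()      # electromagnetic coil
    MAGNET = auto()    # permanent magnet

class Assembly:
    """Parts joined reversibly through a single standard connector."""
    def __init__(self):
        self.parts, self.joints = [], []

    def add(self, part: PartType) -> int:
        self.parts.append(part)
        return len(self.parts) - 1          # index of the new part

    def connect(self, i: int, j: int) -> None:
        self.joints.append((i, j))          # reversible: removing the pair disassembles it

# Toy "walking motor": a coil and magnet bracketed by a rigid frame and a flexible hinge.
walker = Assembly()
frame, hinge = walker.add(PartType.RIGID), walker.add(PartType.FLEXIBLE)
coil, magnet = walker.add(PartType.COIL), walker.add(PartType.MAGNET)
for a, b in [(frame, hinge), (hinge, coil), (coil, magnet)]:
    walker.connect(a, b)
```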

The new system is a significant step toward creating a standardized kit of parts that could be used to assemble robots with specific capabilities adapted to a particular task or set of tasks. Such purpose-built robots could then be disassembled and reassembled as needed in a variety of forms, without the need to design and manufacture new robots from scratch for each application.

Langford’s initial motor has an ant-like ability to lift seven times its own weight. But if greater forces are required, many of these parts can be added to provide more oomph. Or if the robot needs to move in more complex ways, these parts could be distributed throughout the structure. The size of the building blocks can be chosen to match their application; the team has made nanometer-sized parts to make nanorobots, and meter-sized parts to make megarobots. Previously, specialized techniques were needed at each of these length scale extremes.

“One emerging application is to make tiny robots that can work in confined spaces,” Gershenfeld says. Some of the devices assembled in this project, for example, are smaller than a penny yet can carry out useful tasks.

To build in the “brains,” Langford has added part types that contain millimeter-sized integrated circuits, along with a few other part types to take care of connecting electrical signals in three dimensions.

The simplicity and regularity of these structures makes it relatively easy for their assembly to be automated. To do that, Langford has developed a novel machine that’s like a cross between a 3-D printer and the pick-and-place machines that manufacture electronic circuits, but unlike either of those, this one can produce complete robotic systems directly from digital designs. Gershenfeld says this machine is a first step toward the project’s ultimate goal of “making an assembler that can assemble itself out of the parts that it’s assembling.”



A team of EPFL researchers has developed tiny 10-gram robots that are inspired by ants: they can communicate with each other, assign roles among themselves and complete complex tasks together. These reconfigurable robots are simple in structure, yet they can jump and crawl to explore uneven surfaces. The researchers have just published their work in Nature.

Individually, ants have only so much strength and intelligence. As a colony, however, they can use complex strategies to accomplish sophisticated tasks and evade larger predators.

At EPFL, researchers in NCCR Robotics Professor Jamie Paik’s laboratory have reproduced this phenomenon, developing tiny robots that display minimal physical intelligence individually but are able to communicate and act collectively. Despite being simple in design and weighing only 10 grams, each robot has multiple locomotion modes to navigate any type of surface. Collectively, they can quickly detect and overcome obstacles and move objects much larger and heavier than themselves. The related research has been published in Nature.

Robots modeled on trap-jaw ants
These three-legged, T-shaped origami robots are called Tribots. They can be assembled in only a few minutes by folding a stack of thin, multi-material sheets, making them suitable for mass production. Completely autonomous and untethered, Tribots are equipped with infrared and proximity sensors for detection and communication purposes. They could accommodate even more sensors depending on the application.

“Their movements are modeled on those of Odontomachus ants. These insects normally crawl, but to escape a predator, they snap their powerful jaws together to jump from leaf to leaf”, says Zhenishbek Zhakypov, the first author. The Tribots replicate this catapult mechanism through an elegant origami robot design that combines multiple shape-memory alloy actuators. As a result, a single robot can produce three distinctive locomotive motions – crawling, rolling and jumping both vertically and horizontally – just like these creatively resilient ants.

Roles: leader, worker and explorer
Despite having the same “anatomy”, each robot is assigned a specific role depending on the situation. “Explorers” detect physical obstacles in their path, such as objects, valleys and mountains. After detecting an obstacle, they inform the rest of the group. Then the “leader” gives the instructions. The “workers”, meanwhile, pool their strength to move objects. “Each Tribot, just like Odontomachus ants, can have different roles. However, they can also take on new roles instantaneously when faced with a new mission or an unknown environment, or even when other members get lost. This goes beyond what the real ants can do,” says Paik.
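As a rough illustration of how such on-the-fly role reassignment might be expressed in software — a conceptual sketch only, not the EPFL controller — every Tribot can run the same small state machine and react to messages from its peers:

```python
from enum import Enum, auto

class Role(Enum):
    EXPLORER = auto()
    LEADER = auto()
    WORKER = auto()

def swarm_has(swarm, role):
    return any(robot.role is role for robot in swarm)

class Tribot:
    def __init__(self, robot_id, role):
        self.id, self.role = robot_id, role
        self.inbox = []                                   # messages received over the IR links

    def broadcast(self, swarm, message):
        for peer in swarm:
            if peer is not self:
                peer.inbox.append((self.id, message))

    def step(self, swarm):
        if self.role is Role.EXPLORER and self.sense_obstacle():
            self.broadcast(swarm, "obstacle")             # explorers report what they find
        elif self.role is Role.LEADER and any(m == "obstacle" for _, m in self.inbox):
            self.broadcast(swarm, "jump")                 # the leader issues instructions
        elif self.role is Role.WORKER and any(m == "jump" for _, m in self.inbox):
            self.jump()                                   # workers act on the instructions
        if not swarm_has(swarm, Role.LEADER):
            self.role = Role.LEADER                       # take over a missing role
        self.inbox.clear()

    # Hardware-dependent stubs standing in for the real sensing and locomotion modes.
    def sense_obstacle(self): return False
    def jump(self): pass

swarm = [Tribot(0, Role.EXPLORER), Tribot(1, Role.LEADER), Tribot(2, Role.WORKER)]
for robot in swarm:
    robot.step(swarm)
```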

Future applications
In practical situations, such as in an emergency search mission, Tribots could be deployed en masse. And thanks to their multi-locomotive and multi-agent communication capabilities, they could locate a target quickly over a large surface without relying on GPS or visual feedback. “Since they can be manufactured and deployed in large numbers, having some ‘casualties’ would not affect the success of the mission,” adds Paik. “With their unique collective intelligence, our tiny robots are better equipped to adapt to unknown environments. Therefore, for certain missions, they would outperform larger, more powerful robots.” The development of robots for search-and-rescue applications and the study of collective robotics are key research areas within the NCCR Robotics consortium, of which Jamie Paik’s lab is part.
In April, Jamie Paik presented her reconfigurable robots at the TED2019 conference in Vancouver. Her talk is available here.

Literature
Zhenishbek Zhakypov, Kazuaki Mori, Koh Hosoda and Jamie Paik, Designing minimal and scalable insect-inspired multi-locomotion millirobots, Nature
DOI: 10.1038/s41586-019-1388-8


AI Powered Robotic Picking at Promat 2019 - SoundCloud

In this episode, join our interviewer Andrew Vaziri at Promat 2019, the largest expo for manufacturing and supply chain professionals in North and South America. Andrew interviews a handful of companies that provide warehouse fulfillment robots capable of autonomously picking and placing items. Our guests explain how advances in AI have made autonomous picking possible. They also talk about the unique technologies they use to stand out in a crowded field of competing products.

The guests in the order they appear are:
Vince Martinelli, Head of Product and Marketing, RightHand Robotics
Jim Liefer, CEO, Kindred
Pete Blair, VP of Marketing, Berkshire Grey
Sean Davis, Technical Product Manager, Osaro
Erik Nieves, CEO, Plus One Robotics

See video of each robot picking items on our YouTube page:

AI Powered Robotic Picking Demos Live at Promat 2019 - YouTube



Pipeline inspection robot

I was on the phone recently with a large multinational corporate investor discussing the applications for robotics in the energy market. He expressed his frustration about the lack of products to inspect and repair active oil and gas pipelines, citing too many catastrophic accidents. His point was further endorsed by a Huffington Post article reporting that, over a twenty-year period, such tragedies have led to 534 deaths, more than 2,400 injuries, and more than $7.5 billion in damages. The study concluded that an incident occurs every 30 hours across America’s vast transcontinental pipelines.

The global market for pipeline inspection robots is estimated to exceed $2 billion in the next six years, more than tripling today’s $600 million in sales. The Zion Market Research report states: “Robots are being used increasingly in various verticals in order to reduce human intervention from work environments that are dangerous … Pipeline networks are laid down for the transportation of oil and gas, drinking waters, etc. These pipelines face the problem of corrosion, aging, cracks, and various other types of damages…. As the demand for oil and gas is increasing across the globe, it is expected that the pipeline network will increase in length in the near future thereby increasing the popularity of the in-pipe inspection robots market.”
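As a quick back-of-envelope check on those figures (my arithmetic, not Zion’s): growing from roughly $600 million today to $2 billion in six years implies an annual growth rate of about 22 percent.

```python
# Implied compound annual growth rate (CAGR) from the market figures quoted above.
start, end, years = 600e6, 2e9, 6
cagr = (end / start) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.1%}")   # ~22.2%
```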

Industry consolidation plays key role

Another big indicator of this burgeoning industry is the growth of consolidation. In December 2017, Pure Technologies was purchased by New York-based Xylem for more than $500 million. Xylem was already a leader in smart technology solutions for water and waste management pump facilities. Its acquisition of Pure enabled the industrial company to expand its footprint into the oil and gas market. Combining Pure’s digital inspection expertise with mechatronics, the two companies are able to take a leading position in pipeline diagnostics.

Patrick Decker, Xylem president and chief executive, explained, “Pure’s solutions strongly complement the broader Xylem portfolio, particularly our recently acquired Visenti and Sensus solutions, creating a unique and disruptive platform of diagnostic, analytics and optimization solutions for clean and wastewater networks. Pure will also bring greater scale to our growing data analytics and software-as-a-service capabilities.”

America's Dangerous Pipelines, 1986-2014 - YouTube

According to estimates at the time of the merger, almost 25% of Pure’s business was in the oil and gas industry. Today, Pure offers a suite of products for above ground and inline inspections, as well as data management software. In addition to selling its machines, sensors and analytics to the energy sector, it has successfully deployed units in thousands of waterways globally.

This past February, Eddyfi (a leading provider of testing equipment) acquired Inuktun, a robot manufacturer of semi-autonomous crawling systems. This was the sixth acquisition by fast growing Eddyfi in less than three years. As Martin Thériault, Eddyfi’s CEO, elaborates: “We are making a significant bet that the combination of Inuktun robots with our sensors and instruments will meet the increasing needs from asset owners. Customers can now select from a range of standard Inuktun crawlers, cameras and controllers to create their own off-the-shelf, yet customized, solutions.”

Colin Dobell, president of Inuktun, echoed Thériault’s sentiments: “This transaction links us with one of the best! Our systems and technology are suitable to many of Eddyfi Technologies’ current customers and the combination of the two companies will strengthen our position as an industry leader and allow us to offer truly unique solutions by combining some of the industry’s best NDT [Non Destructive Testing] products with our mobile robotic solutions. The future opportunities are seemingly endless. It’s very exciting.” In addition to Xylem and Eddyfi, other entrants into this space include: CUES, Envirosight, GE Inspection Robotics, IBAK Helmut Hunger, Medit (Fiberscope), RedZone Robotics, MISTRAS Group, RIEZLER Inspektions Systeme, and Honeybee Robotics.

Repairing lines with micro-robots

While most of the current technologies focus on inspection, the bigger opportunity could be in actively repairing pipelines with micro-bots. Last year, the government of the United Kingdom began a $35 million study with six universities to develop mechanical insect-like robots to automatically fix its large underground network. According to the government’s press release, the goal is to develop robots of one centimeter in size that will crawl, swim and quite possibly fly through water, gas and sewage pipes. The government estimates that underground infrastructure accounts for $6 billion annually in labor and business disruption costs.

Professor Simon Tait and Professor Kirill Horoshenkov from The University of Sheffield - YouTube

One of the institutions charged with this endeavor is the University of Sheffield’s Department of Mechanical Engineering, led by Professor Kirill Horoshenkov. Dr. Horoshenkov stresses that his mission is more than commercial: “Maintaining a safe and secure water and energy supply is fundamental for society but faces many challenges such as increased customer demand and climate change.”

Horoshenkov, a leader in acoustical technology, expands further on the research objectives of his team: “Our new research programme will help utility companies monitor hidden pipe infrastructure and solve problems quickly and efficiently when they arise. This will mean less disruption for traffic and the general public. This innovation will be the first of its kind to deploy swarms of miniaturised robots in buried pipes together with other emerging in-pipe sensor, navigation and communication solutions with long-term autonomy.”

England is becoming a hotbed for robotic insects; last summer Rolls-Royce shared with reporters its efforts in developing mechanical bugs to repair airplane engines. The engineers at the British aerospace giant were inspired by the research of Harvard professor Robert Wood and his ambulatory microrobots for search and rescue missions. James Kell of Rolls-Royce proclaims this could be a game changer: “They could go off scuttling around reaching all different parts of the combustion chamber. If we did it conventionally it would take us five hours; with these little robots, who knows, it might take five minutes.”

Currently the Harvard robot is too large to buzz through jet engines, but Rolls-Royce is not waiting for the Harvard team: it has established, with the University of Nottingham, a Centre for Manufacturing and On-Wing Technologies “to design and build a range of bespoke prototype robots capable of performing jet engine repairs remotely.” The project lead, Dragos Axinte, is optimistic about the spillover effect of this work into the energy market: “The emergence of robots capable of replicating human interventions on industrial equipment can be coupled with remote control strategies to reduce the response time from several days to a few hours. As well as with any Rolls-Royce engine, our robots could one day be used in other industries such as oil, gas and nuclear.”

Robohub by Wyss Institute - 2w ago
Changes to the Robobee — including an additional pair of wings and improvements to the actuators and transmission ratio — made the vehicle more efficient and allowed the addition of solar cells and an electronics panel. This Robobee is the first to fly without a power cord and is the lightest untethered vehicle to achieve sustained flight. Credit: Harvard Microrobotics Lab/Harvard SEAS

By Leah Burrows

In the Harvard Microrobotics Lab, on a late afternoon in August, decades of research culminated in a moment of stress as the tiny, groundbreaking Robobee made its first solo flight.

Graduate student Elizabeth Farrell Helbling, Ph.D.’19, and postdoctoral fellow Noah T. Jafferis, Ph.D. from Harvard’s Wyss Institute for Biologically Inspired Engineering, the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), and the Graduate School of Arts and Sciences caught the moment on camera.

Helbling, who has worked on the project for six years, counted down: “Three, two, one, go.”

The bright halogens switched on and the solar-powered Robobee launched into the air. For a terrifying second, the tiny robot, still without on-board steering and control, careened towards the lights.

Off camera, Helbling exclaimed and cut the power. The Robobee fell dead out of the air, caught by its Kevlar safety harness.

“That went really close to me,” Helbling said, with a nervous laugh.

“It went up,” Jafferis, who has also worked on the project for about six years, responded excitedly from the high-speed camera monitor where he was recording the test.

And with that, Harvard University’s Robobee reached its latest major milestone — becoming the lightest vehicle ever to achieve sustained untethered flight.

“This is a result several decades in the making,” said Robert Wood, Ph.D., Core Faculty member of the Wyss Institute, the Charles River Professor of Engineering and Applied Sciences at SEAS, and principal investigator of the Robobee project. “Powering flight is something of a Catch-22 as the tradeoff between mass and power becomes extremely problematic at small scales where flight is inherently inefficient. It doesn’t help that even the smallest commercially available batteries weigh much more than the robot. We have developed strategies to address this challenge by increasing vehicle efficiency, creating extremely lightweight power circuits, and integrating high efficiency solar cells.”

The milestone is described in Nature.

To achieve untethered flight, this latest iteration of the Robobee underwent several important changes, including the addition of a second pair of wings. “The change from two to four wings, along with less visible changes to the actuator and transmission ratio, made the vehicle more efficient, gave it more lift, and allowed us to put everything we need on board without using more power,” said Jafferis. (The addition of the wings also earned this Robobee the nickname X-Wing, after the four-winged starfighters from Star Wars.)

That extra lift, with no additional power requirements, allowed the researchers to cut the power cord — which has kept the Robobee tethered for nearly a decade — and attach solar cells and an electronics panel to the vehicle.

The solar cells, the smallest commercially available, weigh 10 milligrams each and deliver 0.76 milliwatts of power per milligram when the sun is at full intensity. The Robobee X-Wing needs the power of about three Earth suns to fly, making outdoor flight out of reach for now. Instead, the researchers simulate that level of sunlight in the lab with halogen lights. The solar cells are connected to an electronics panel under the bee, which converts the low voltage signals of the solar array into the high voltage drive signals needed to control the actuators. The solar cells sit about three centimeters above the wings, to avoid interference.

In all, the final vehicle, with the solar cells and electronics, weighs 259 milligrams (about a quarter of a paper clip) and uses about 120 milliwatts of power, which is less than it would take to light a single bulb on a string of LED Christmas lights.
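Those numbers invite a quick back-of-envelope check (my arithmetic, not a figure from the paper): at one-sun intensity, 0.76 mW per milligram means roughly 158 mg of solar cells would be needed to supply ~120 mW — well over half the vehicle’s 259 mg total mass — whereas at about three suns the same power requires only around 53 mg of cells, which is why the bright halogen rig is needed for now.

```python
# Back-of-envelope solar power budget for the RoboBee X-Wing, using the figures above.
power_needed_mw = 120.0           # electrical power the vehicle draws
cell_output_mw_per_mg = 0.76      # solar-cell output per milligram at one-sun intensity

for suns in (1, 3):
    cell_mass_mg = power_needed_mw / (cell_output_mw_per_mg * suns)
    print(f"{suns} sun(s): ~{cell_mass_mg:.0f} mg of solar cells required")
# 1 sun(s): ~158 mg  -> impractically heavy for a 259 mg vehicle
# 3 sun(s): ~53 mg   -> consistent with flying under bright halogen lamps
```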

The Untethered RoboBee - YouTube

“When you see engineering in movies, if something doesn’t work, people hack at it once or twice and suddenly it works. Real science isn’t like that,” said Helbling. “We hacked at this problem in every which way to finally achieve what we did. In the end, it’s pretty thrilling.” The researchers will continue to hack away, aiming to bring down the power and add on-board control to enable the Robobee to fly outside.

To achieve untethered flight, the latest iteration of the Robobee underwent several important changes, including the addition of a second pair of wings. (Video courtesy of the Harvard Microrobotics Lab/Harvard SEAS)

“Over the life of this project we have sequentially developed solutions to challenging problems, like how to build complex devices at millimeter scales, how to create high-performance millimeter-scale artificial muscles, bioinspired designs, and novel sensors, and flight control strategies,” said Wood. “Now that power solutions are emerging, the next step is onboard control. Beyond these robots, we are excited that these underlying technologies are finding applications in other areas such as minimally-invasive surgical devices, wearable sensors, assistive robots, and haptic communication devices – to name just a few.”

Harvard has developed a portfolio of intellectual property (IP) related to the fabrication process for millimeter-scale devices. This IP, as well as related technologies, can be applied to microrobotics, medical devices, consumer electronics and a wide range of complex electromechanical systems. Harvard’s Office of Technology Development is exploring opportunities for commercial impact in these fields.

This research was co-authored by Michael Karpelson, Ph.D., Staff Electrical Engineer on the Institute’s Advanced Technology Team. It was supported by the National Science Foundation and the Office of Naval Research.

A new study by researchers from MIT, Boston Children’s Hospital, and elsewhere shows that a “social robot,” named Huggable (pictured), can be used in support sessions to boost positive emotions in hospitalized children.
Image: Courtesy of the Personal Robots Group, MIT Media Lab

A new study demonstrates, for the first time, that “social robots” used in support sessions held in pediatric units at hospitals can lead to more positive emotions in sick children.

Many hospitals host interventions in pediatric units, where child life specialists provide clinical interventions to hospitalized children for developmental and coping support. This involves play, preparation, education, and behavioral distraction for routine medical care, as well as before, during, and after difficult procedures. Traditional interventions include therapeutic medical play and normalizing the environment through activities such as arts and crafts, games, and celebrations.

For the study, published today in the journal Pediatrics, researchers from the MIT Media Lab, Boston Children’s Hospital, and Northeastern University deployed a robotic teddy bear, “Huggable,” across several pediatric units at Boston Children’s Hospital. More than 50 hospitalized children were randomly split into three intervention groups, involving either Huggable, a tablet-based virtual Huggable, or a traditional plush teddy bear. In general, Huggable improved various patient outcomes over the other two options.

The study primarily demonstrated the feasibility of integrating Huggable into the interventions. But results also indicated that children playing with Huggable experienced more positive emotions overall. They also got out of bed and moved around more, and emotionally connected with the robot, asking it personal questions and inviting it to come back later to meet their families. “Such improved emotional, physical, and verbal outcomes are all positive factors that could contribute to better and faster recovery in hospitalized children,” the researchers write in their study.

Although it is a small study, it is the first to explore social robotics in a real-world inpatient pediatric setting with ill children, the researchers say. Other studies have been conducted in labs, have studied very few children, or were conducted in public settings without any patient identification.

But Huggable is designed only to assist health care specialists — not replace them, the researchers stress. “It’s a companion,” says co-author Cynthia Breazeal, an associate professor of media arts and sciences and founding director of the Personal Robots group. “Our group designs technologies with the mindset that they’re teammates. We don’t just look at the child-robot interaction. It’s about [helping] specialists and parents, because we want technology to support everyone who’s invested in the quality care of a child.”

“Child life staff provide a lot of human interaction to help normalize the hospital experience, but they can’t be with every kid, all the time. Social robots create a more consistent presence throughout the day,” adds first author Deirdre Logan, a pediatric psychologist at Boston Children’s Hospital. “There may also be kids who don’t always want to talk to people, and respond better to having a robotic stuffed animal with them. It’s exciting knowing what types of support we can provide kids who may feel isolated or scared about what they’re going through.”

Joining Breazeal and Logan on the paper are: Sooyeon Jeong, a PhD student in the Personal Robots group; Brianna O’Connell, Duncan Smith-Freedman, and Peter Weinstock, all of Boston Children’s Hospital; and Matthew Goodwin and James Heathers, both of Northeastern University.

Boosting mood

First prototyped in 2006, Huggable is a plush teddy bear with a screen depicting animated eyes. While the eventual goal is to make the robot fully autonomous, it is currently operated remotely by a specialist in the hall outside a child’s room. Through custom software, a specialist can control the robot’s facial expressions and body actions, and direct its gaze. The specialists could also talk through a speaker — with their voice automatically shifted to a higher pitch to sound more childlike — and monitor the participants via camera feed. The tablet-based avatar of the bear had identical gestures and was also remotely operated.
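The childlike voice effect is, at heart, a real-time pitch shift. Offline, the same idea can be sketched in a few lines of Python — this uses librosa as an example library and a placeholder filename, not the study’s actual teleoperation software:

```python
import librosa
import soundfile as sf

# Load the specialist's recorded speech and shift it up four semitones to sound more childlike.
voice, sr = librosa.load("specialist_voice.wav", sr=None)   # placeholder filename
childlike = librosa.effects.pitch_shift(voice, sr=sr, n_steps=4)
sf.write("childlike_voice.wav", childlike, sr)
```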

During the interventions involving Huggable — involving kids ages 3 to 10 years — a specialist would sing nursery rhymes to younger children through the robot and move its arms during the song. Older kids would play the I Spy game, where they have to guess an object in the room described by the specialist through Huggable.

Through self-reports and questionnaires, the researchers recorded how much the patients and families liked interacting with Huggable. Additional questionnaires assessed patients’ positive moods, as well as anxiety and perceived pain levels. The researchers also used cameras mounted in the child’s room to capture and analyze speech patterns, characterizing them as joyful or sad, using software.

A greater percentage of children and their parents reported that the children enjoyed playing with Huggable more than with the avatar or traditional teddy bear. Speech analysis backed up that result, detecting significantly more joyful expressions among the children during robotic interventions. Additionally, parents noted lower levels of perceived pain among their children.

The researchers noted that 93 percent of patients completed the Huggable-based interventions, and found few barriers to practical implementation, as determined by comments from the specialists.

A previous paper based on the same study found that the robot also seemed to facilitate greater family involvement in the interventions, compared to the other two methods, which improved the intervention overall. “Those are findings we didn’t necessarily expect in the beginning,” says Jeong, also a co-author on the previous paper. “We didn’t tell family to join any of the play sessions — it just happened naturally. When the robot came in, the child and robot and parents all interacted more, playing games or in introducing the robot.”

An automated, take-home bot

The study also generated valuable insights for developing a fully autonomous Huggable robot, which is the researchers’ ultimate goal. They were able to determine which physical gestures are used most and least often, and which features specialists may want for future iterations. Huggable, for instance, could introduce doctors before they enter a child’s room or learn a child’s interests and share that information with specialists. The researchers may also equip the robot with computer vision, so it can detect certain objects in a room to talk about those with children.

“In these early studies, we capture data … to wrap our heads around an authentic use-case scenario where, if the bear was automated, what does it need to do to provide high-quality standard of care,” Breazeal says.

In the future, that automated robot could be used to improve continuity of care. A child would take home a robot after a hospital visit to further support engagement, adherence to care regimens, and monitoring well-being.

“We want to continue thinking about how robots can become part of the whole clinical team and help everyone,” Jeong says. “When the robot goes home, we want to see the robot monitor a child’s progress. … If there’s something clinicians need to know earlier, the robot can let the clinicians know, so [they’re not] surprised at the next appointment that the child hasn’t been doing well.”

Next, the researchers are hoping to zero in on which specific patient populations may benefit the most from the Huggable interventions. “We want to find the sweet spot for the children who need this type of extra support,” Logan says.



On Design in Human-Robot Interaction - SoundCloud

In this episode, Audrow Nash interviews Bilge Mutlu, Associate Professor at the University of Wisconsin–Madison, about design thinking in human-robot interaction. Professor Mutlu discusses design thinking at a high level and how design relates to science, and he speaks about the main areas of his work: the design space, the evaluation space, and how features are used within a context. He also gives advice on how to apply a design-oriented mindset.

Bilge Mutlu

Bilge Mutlu is an Associate Professor of Computer Science, Psychology, and Industrial Engineering at the University of Wisconsin–Madison. He directs the Wisconsin HCI Laboratory and organizes the WHCI+D Group. He received his PhD degree from Carnegie Mellon University‘s Human-Computer Interaction Institute.



Racing team 2018-2019: Christophe De Wagter, Guido de Croon, Shuo Li, Phillipp Dürnay, Jiahao Lin, Simon Spronk

Autonomous drone racing
Drone racing is becoming a major e-sport. Enthusiasts – and now also professionals – transform drones into seriously fast racing platforms. Expert drone racers can reach speeds of up to 190 km/h. They fly by looking at a first-person view (FPV) from their drone, which has a forward-mounted camera transmitting images back to the pilot.

In recent years, advances in areas such as artificial intelligence, computer vision, and control have raised the question of whether drones could eventually fly faster than human pilots. The advantage for the drone could be that it can sense much more than the human pilot (such as accelerations and rotation rates from its inertial sensors) and process all image data more quickly on board. Moreover, its intelligence could be shaped for a single goal: racing as fast as possible.

In the quest for a fast-flying, autonomous racing drone, multiple autonomous drone racing competitions have been organized in the academic community. These “IROS” drone races (IROS being one of the most well-known worldwide robotics conferences) have been held since 2016. Over these years, the speed of the drones has been gradually improving, with the fastest drones in the competition now moving at roughly 2 m/s.

Computationally Efficient Autonomous Racing of a 72-gram Drone - YouTube

Smaller
Most of the autonomous racing drones are equipped with high-performance processors, with multiple, high-quality cameras and sometimes even with laser scanners. This allows these drones to use state-of-the-art solutions to visual perception, like building maps of the environment or tracking accurately how the drone is moving over time. However, it also makes the drones relatively heavy and expensive.

At the Micro Air Vehicle laboratory (MAVLab) of TU Delft, our aim is to make lightweight, inexpensive autonomous racing drones. Such drones could be used by many drone racing enthusiasts to train with or fly against. If the drone becomes small enough, it could even be used for racing at home. Aiming for “small” means serious limitations on the sensors and processing that can be carried on board. This is why, in the IROS drone races, we have always focused on monocular vision (a single camera) and on software algorithms for vision, state estimation, and control that are computationally highly efficient.


With its 72 grams and 10 cm diameter, the modified “Trashcan” drone is currently the smallest autonomous racing drone in the world. In the background, Shuo Li, a PhD student working on autonomous drone racing at the MAVLab.

A 72-gram autonomous racing drone
Here, we report on how we made a tiny autonomous racing drone fly through a racing track at an average speed of 2 m/s, which is competitive with other, larger state-of-the-art autonomous racing drones.

The drone, which is a modified Eachine “Trashcan”, is 10 cm in diameter and weighs 72 grams. This weight includes a 17-gram JeVois smart camera, which consists of a single rolling-shutter CMOS camera, a 4-core ARM v7 1.34 GHz processor with 256 MB RAM, and a 2-core Mali GPU. Although limited compared to the processors used on other drones, we consider it more than powerful enough: with the algorithms explained below, the drone actually uses only a single CPU core. The JeVois camera communicates with a 4.2-gram Crazybee F4 Pro flight controller running the Paparazzi autopilot, via the MAVLink communication protocol. Both the JeVois code and the Paparazzi code are open source and available to the community.

An important characteristic of our approach to drone racing is that we do not rely on accurate but computationally expensive methods for visual Simultaneous Localization And Mapping (SLAM) or Visual Inertial Odometry (VIO). Instead, we focus on having the drone predict its motion as well as possible with an efficient prediction model, and on correcting any drift of that model with vision-based gate detections.

Prediction
A typical prediction model would involve the integration of the accelerometer readings. However, on small drones the Inertial Measurement Unit (IMU) is subject to a lot of vibration, leading to noisier accelerometer readings. Integrating such noisy measurements quickly leads to an enormous drift in both the velocity and position estimates of the drone. Therefore, we have opted for a simpler solution, in which the IMU is only used to determine the attitude of the drone. This attitude can then be used to predict the forward acceleration, as illustrated in the figure below. If one assumes the drone to fly at a constant height, the force in the z-direction has to equal the gravity force. Given a specific pitch angle, this relation leads to a specific forward force due to the thrust. The prediction model then updates the velocity based on this predicted forward force and the expected drag force given the estimated velocity.

Prediction model for the tiny drone. The drone has an estimate of its attitude, including the pitch angle (θ). Assuming the drone to fly at a constant height, the force straight up (Tz) should equal gravity (g). Together, these two pieces allow us to calculate the thrust force that should be delivered by the drone’s four propellers (T), and, consequently, also the force that is exerted forwards (Tx). The model uses this forward force, and the resulting (backward) drag force (Dx), to update the velocity (vx) and position of the drone when not seeing gates.
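In discrete time, that prediction model boils down to a few lines. The sketch below is illustrative (the drag coefficient is a made-up value; the real one is identified per airframe, and the MAVLab’s Paparazzi implementation differs in detail): with constant height, the vertical thrust component equals gravity, so the forward thrust component is g·tan(θ), opposed by a drag term proportional to velocity.

```python
import math

G = 9.81       # gravitational acceleration [m/s^2]
K_DRAG = 0.5   # linear drag coefficient [1/s] -- illustrative value only

def predict(x, vx, pitch, dt):
    """Propagate forward position and velocity from the estimated pitch angle,
    assuming constant height (vertical thrust balances gravity)."""
    thrust_forward = G * math.tan(pitch)   # Tx: forward component of the thrust
    drag = K_DRAG * vx                     # Dx: drag opposing the current velocity
    ax = thrust_forward - drag             # net forward acceleration (per unit mass)
    return x + vx * dt, vx + ax * dt

# Example: pitched 10 degrees forward from hover, integrated at 500 Hz for one second.
x, vx = 0.0, 0.0
for _ in range(500):
    x, vx = predict(x, vx, math.radians(10.0), dt=1.0 / 500)
print(f"after 1 s: x = {x:.2f} m, vx = {vx:.2f} m/s")
```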

Vision-based corrections
The prediction model is corrected with the help of vision-based position measurements. First, a snake-gate algorithm is used to detect the colored gate in the image. This algorithm is extremely efficient, as it only processes a small portion of the image’s pixels. It samples random image locations and when it finds the right color, it starts following it around to determine the shape. After a detection, the known size of the gate is used to determine the drone’s relative position to the gate (see the figure below). This is a standard perspective-N-point problem. The output of this process is a relative position to a gate. Subsequently, we figure out which gate on the racing track is most likely in our view, and transform the relative position to the gate to a global position measurement. Since our vision process often outputs quite precise position estimates but sometimes also produces significant outliers, we do not use a Kalman filter but a Moving Horizon Estimator for the state estimation. This leads to much more robust position and velocity estimates in the presence of outliers.

The gates and their sizes are known. When the drone detects a gate, it can use this knowledge to calculate its relative position to that gate. The global layout of the track and the drone’s current position estimate are used to determine which gate the drone is most likely looking at. This way, the relative position can be transformed into a global position estimate.
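The relative-position step is a standard perspective-n-point (PnP) computation; with OpenCV it can be sketched roughly as follows. The gate size, camera intrinsics and corner ordering below are placeholders, not the actual race configuration or the MAVLab code.

```python
import numpy as np
import cv2

GATE_SIZE = 1.0  # side length of the square gate in metres (placeholder)
# Gate corners expressed in the gate's own frame, matching the detector's corner order.
GATE_CORNERS_3D = np.array([
    [-GATE_SIZE / 2, -GATE_SIZE / 2, 0.0],
    [ GATE_SIZE / 2, -GATE_SIZE / 2, 0.0],
    [ GATE_SIZE / 2,  GATE_SIZE / 2, 0.0],
    [-GATE_SIZE / 2,  GATE_SIZE / 2, 0.0],
])

# Placeholder pinhole intrinsics; fx, fy, cx, cy would come from calibrating the camera.
K = np.array([[400.0,   0.0, 160.0],
              [  0.0, 400.0, 120.0],
              [  0.0,   0.0,   1.0]])
DIST = np.zeros(5)

def gate_relative_position(corners_px):
    """corners_px: 4x2 pixel coordinates of the gate corners from the snake-gate detector.
    Returns the gate's position in the camera frame, or None if PnP fails."""
    ok, rvec, tvec = cv2.solvePnP(GATE_CORNERS_3D,
                                  np.asarray(corners_px, dtype=np.float64),
                                  K, DIST)
    if not ok:
        return None
    return tvec.ravel()  # combine with the known track layout to get a global position
```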

Racing performance and future steps
The drone used the newly developed algorithms to race along a 4-gate race track in TU Delft’s Cyberzoo. It can fly multiple laps at an average speed of 2 m/s, which is competitive with larger, state-of-the-art autonomous racing drones (see the video at the top). Thanks to the central role of gate detections in the drone’s algorithms, the drone can cope with moderate displacements of the gates.
Possible future directions of research are to make the drone smaller and fly faster. In principle, being small is an advantage, since the gates are relatively bigger. This allows the drone to choose its trajectory more freely than a big drone, which may allow for faster trajectories. In order to better exploit this characteristic, we would have to fit optimal control algorithms into the onboard processing. Moreover, we want to make the vision algorithms more robust – as the current color-based snake gate algorithm is quite dependent on lighting conditions. An obvious option here is to start using deep neural networks, which would have to fit within the dual-core Mali GPU on the JeVois.

arXiv article: Visual Model-predictive Localization for Computationally Efficient Autonomous Racing of a 72-gram Drone, Shuo Li, Erik van der Horst, Philipp Duernay, Christophe De Wagter, Guido C.H.E. de Croon.
