By automating assessments of road conditions, RoadBotics could help cities save time and money

‘Tis the season for potholes. When temperatures drop, the expansion and contraction of water that seeps into cracks in asphalt can create giant tire-chompers. But these problem spots are too often the final result of damage that has been brewing for a while.

“There are things you can do five or even ten years before that happens to push the lifespan of a road,” says Benjamin Schmidt, CTO of RoadBotics in Pittsburgh, Pennsylvania.

RoadBotics is using state-of-the-art computer vision techniques to help local governments better manage roads. The company’s machine-learning algorithms process images of the road collected via smartphone, then use those images to produce an in-depth online map of road conditions that officials can use to make maintenance and repair decisions.

Since launching in December 2016, the Carnegie Mellon University spinoff has assessed roads for more than 90 cities, towns, and counties around the United States. Detroit will soon become the latest city to use RoadBotics’ technology to inspect its 4,200-kilometer (2,600-mile) network.

Roads are expensive to maintain. Asphalt takes a pounding from weather and traffic and cracks naturally over time. These cracks allow salts, fuels, and water to infiltrate and damage roads from underneath, and can lead to larger problems that are more expensive to repair.

Cities and towns regularly assess their roads by sending out inspectors who drive around looking for signs of damage and manually recording type, extent, and location. This takes time, Schmidt says. “People get tired, or if you have multiple people assessing big road networks, they may not agree on conditions. We solve a lot of those problems with objective, quantitative data.”

Photo: RoadBotics RoadBotics uses a standard Android phone mounted on the windshield to capture videos and corresponding GPS data for their software to analyze.

RoadBotics offers road assessment as a service. The company sends out a vehicle (or the city can use its own service vehicles, such as garbage trucks or street sweepers) with a dashboard-mounted smartphone to drive every mile of a city’s network. Videos, along with their GPS locations, are uploaded to a cloud server.

Then, the company’s AI algorithm uses deep-learning techniques to analyze each frame in the video, pixel by pixel. The company trains the neural networks by feeding them marked-up images of road surfaces in which different colors correspond to different types of damage. RoadBotics says its AI can identify every useful aspect of the road surface as well as a human can, from giant potholes to more nuanced features like bumps, depressions, and cracks.
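
RoadBotics hasn’t published its model, but training on marked-up images in which colors label damage types is what’s generally called semantic segmentation. Below is a minimal, hedged PyTorch sketch of one training step under that assumption; the class list and tiny network are illustrative only.

```python
# Hypothetical semantic-segmentation training step (not RoadBotics' actual model).
import torch
import torch.nn as nn

NUM_CLASSES = 5  # e.g., background, crack, bump, depression, pothole (assumed)

# A deliberately tiny fully convolutional network; real systems use much
# deeper encoder-decoder architectures.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, kernel_size=1),  # per-pixel class scores
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # compares per-pixel predictions to labels

# Dummy batch: images are (B, 3, H, W); labels are (B, H, W) integer class
# maps derived from the marked-up training images.
images = torch.randn(4, 3, 128, 128)
labels = torch.randint(0, NUM_CLASSES, (4, 128, 128))

optimizer.zero_grad()
logits = model(images)         # (B, NUM_CLASSES, H, W)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
```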

The software creates a map of the road network with an overlay that shows each 3-meter stretch of road on a color-coded scale, from 1 (green), meaning excellent, to 5 (red), meaning terrible and in need of repair. City officials can use the map to decide, based on conditions and usage, where best to spend their road-repair dollars.
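
As a small illustrative sketch of that overlay (the 1-to-5 scale is from the article; the data layout and color values are assumptions):

```python
# Map a 1-5 segment rating to a display color for the road-condition overlay.
RATING_COLORS = {1: "#2ecc40", 2: "#9ace3b", 3: "#ffdc00",
                 4: "#ff851b", 5: "#ff4136"}  # green ... red (assumed hexes)

def segment_style(segment):
    """Return style properties for one rated 3-meter road segment."""
    rating = segment["rating"]  # integer: 1 (excellent) .. 5 (needs repair)
    return {"color": RATING_COLORS[rating], "rating": rating}

print(segment_style({"id": "elm-st-012", "rating": 4}))
# {'color': '#ff851b', 'rating': 4}
```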

This automated assessment saves time and money. The city of Savannah, Georgia, for instance, once used 18 part-time interns to complete an assessment of its 1,100 kilometers of roads. The process took three years, cost US $130,000, and produced unreliable, poorly organized data. RoadBotics gave the city an objective road rating report in three months for a third of the cost. That affordability means the city should be able to scan its network more regularly and identify small problems before they become large ones, according to Joe Shearouse in the Savannah City Manager’s Office.

Roads might be just the start, Schmidt says. RoadBotics plans to use its AI technology to evaluate the condition of other transportation assets. “Power lines, signage, vegetation overgrowth, streetlights—all these things that need to be maintained are where we want to move our technology,” he says.

Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2019 – March 11-14, 2019 – Daegu, South Korea
Nîmes Robotics Festival – May 17-19, 2019 – Nîmes, France

Let us know if you have suggestions for next week, and enjoy today’s videos.

AMBIDEX is a robot arm resulting from collaborative R&D on human-robot coexistence. The arm uses innovative cable-driven mechanisms that make any interaction with humans safe.

NAVER LABS - AMBIDEX - YouTube

[ NAVER Labs ]

Somehow we missed this video from November but here’s Cassie Blue equipped with a torso full of lidars (!) and wearing sneakers (!!). The robot was taken to a construction site to map out the building that will be the new home of the University of Michigan’s robotics program. On the way there Cassie blew a capacitor in its left leg and needed a little help from its human friends. But this kind of experiment is a good example of the potential of legged robots to do useful work in real-world environments.

Cassie Blue maps out the Robotics Building! - YouTube

[ Michigan Robotics ]

Using the fossil and fossilized footprints of a 300-million-year-old animal, an interdisciplinary team that includes scientists from EPFL and Humboldt-Universität zu Berlin has developed a method for identifying the most likely gaits of extinct animals and designed a robot that can recreate their walk. This innovative study of animal biomechanics using robots can help researchers better understand how vertebrate locomotion evolved over time.

A robot recreates the walk of a 300-million-year-old animal - YouTube

EPFL put together a nifty interactive website that lets you compare the gaits of real animals, which you can check out at the link below.

[ Nature ] via [ EPFL ]

Using a computer system wired similarly to animal brains, a four-legged, dog-like robot successfully “learned” to run faster and recover from falls in various positions, a skill not previously observed in other four-legged robots of its kind, a new study finds. The advancement may pave the way to real-world applications such as climbing stairs to carry heavy loads in construction sites, inspecting unstructured underground tunnels and exploring other planets.

Learning Agile and Dynamic Motor Skills for Legged Robots - YouTube

Jemin Hwangbo and colleagues trained a neural network through multiple reinforcement-learning simulations, which they then transferred into an existing medium-dog-sized robot named ANYmal. The training sessions ran nearly 1,000 times faster than real time on a personal computer with a single processor, operating more efficiently and at lower cost than comparable setups. Importantly, ANYmal broke its previous speed record by 25 percent and followed velocity commands more accurately than the existing technologies that have been used to control ANYmals, the authors say. The robot was also able to flip over after falls, a feat that requires a high level of coordination and control of momentum.
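
The paper’s exact training pipeline isn’t spelled out here, so the following is only a toy sketch of the overall sim-to-real pattern the summary describes: optimize a control policy inside a fast simulation, then transfer the learned parameters to the physical robot. The dynamics, reward, and random-search optimizer below are stand-ins, not the authors’ method.

```python
# Toy sim-to-real loop: train in simulation, deploy learned parameters.
import numpy as np

def simulate_episode(policy_params, sim_steps=1000):
    """Roll out a policy in a stand-in simulator and return total reward.
    A real setup would use a physics engine running ~1,000x real time."""
    reward, state = 0.0, np.zeros(24)          # toy joint-state vector
    for _ in range(sim_steps):
        action = np.tanh(policy_params @ state)            # toy linear policy
        state = 0.99 * state + 0.01 * np.random.randn(24)  # toy dynamics
        reward -= 1e-3 * np.sum(action ** 2)               # toy reward term
    return reward

# Simple random-search optimizer (a stand-in for the RL algorithm used).
params = 0.1 * np.random.randn(12, 24)
for _ in range(100):
    candidate = params + 0.02 * np.random.randn(*params.shape)
    if simulate_episode(candidate) > simulate_episode(params):
        params = candidate
# 'params' would then be loaded onto the real robot's controller.
```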

[ Science ]

KUKA partner Life Science Robotics has developed a revolutionary rehabilitation therapy system that uses a KUKA LBR Med robot. The robot is named ROBERT®, and it was developed for the rehabilitation of bedridden patients. The purpose of ROBERT® is to identify needs and to make a difference for patients, healthcare workers, and society. With our LBR Med as a main component, ROBERT® is the first robot in the world custom-made for the rehabilitation of bedridden patients.

ROBERT the Robot Helps Patients Get Out of Bed Faster - YouTube

This seems like a great application, since the human does the clever things, and the robot can do the boring repetition very, very well.

[ Life Science Robotics ]

The World Is Not Enough (WINE) is a concept for a new generation of spacecraft that takes advantage of In-Situ Resource Utilization (ISRU) to explore space. WINE mines planetary regolith to extract water, capturing the water as ice in a cold trap and heating it to create steam for propulsion. By propulsively "hopping" from location to location, WINE can explore the Solar System as well as individual bodies (e.g., WINE could cover much greater distances on Europa or the Moon than a rover, and can reach otherwise inaccessible regions). And by refueling itself as it goes, WINE’s range is not limited by consumables. This makes WINE particularly well suited to prospecting and reconnaissance missions.

W.I.N.E. The World Is Not Enough - YouTube

This video shows a series of tests performed on a proof-of-concept WINE prototype vehicle at Honeybee Robotics. The vehicle demonstrates several of the primary operations that would be required of the WINE spacecraft including: mining and heating regolith to extract water; capturing water as ice in a cold trap; reorienting the vehicle to allow for further mining; pushing captured water into a propulsion tank; and heating propellant to create steam for thrust. All systems demonstrated are fully functional. All tests are conducted with regolith simulant in a vacuum chamber.

[ Honeybee Robotics ]

NO 26 vs High Jump robot Level 3//Street Fighters - YouTube

Okay, I’m impressed. Even more than I’m normally impressed with Hinamitetu’s robots. Wow.

[ Hinamitetu ]

Team BlackSheep is doing something practical for once, by sending airmail by drone over the Swiss Alps, a distance of 100km.

100km DRONE AIR MAIL ACROSS THE ALPS - YouTube

[ Team BlackSheep ]

A half-scale version of the ExoMars rover, called ExoMars Testing Rover (ExoTeR), manoeuvred itself carefully through the red rocks and sand of 9x9 m Planetary Utilisation Testbed, part of ESA’s Planetary Robotics Laboratory in its ESTEC technical centre in the Netherlands. This was a test of autonomous navigation software destined for ESA’s ExoMars 2020 mission to the red planet.

ExoMars rover self-driving software test - YouTube

[ ESA ]

We wrote about these clever little robotic toys a while back, but this new demo is worth watching.

toio™ 「工作生物 ゲズンロイド」紹介動画|toio™ "Papercraft Creatures - Gesundroid" Trailer - YouTube

[ Sony Toio ]

The Digital Farmhand comprises a small mobile platform that can be remotely or autonomously controlled. The platform carries a smartphone, sensors, and computing. The robot also has a three-point-hitch system that allows the use of farming implements for activities such as precision seeding, spraying, and weeding; and, through its ability to monitor individual plants, the data it produces has the potential to support better on-farm decision making, helping growers increase yield and productivity, reduce input costs, and maximise nutrition security. In this video, we travelled to Samoa to trial the robot on three different farms and conducted a workshop with local farmers to get feedback on how a system like Digital Farmhand could be used in the region.

Digital Farmhand Trials - Samoa - YouTube

[ Digital Farmhand ]

Misty Robotics has always said that they’re going to be relying on developers to come up with useful applications for Misty, but they’re doing some work themselves, too.

Developer Security Skill Demo 1 - YouTube

Developer Temp Alarm Demo 2 - YouTube

[ Misty Robotics ]

Safe autonomous navigation of micro air vehicles in cluttered dynamic environments is challenging due to the uncertainties arising from robot localization, sensing and motion disturbances. This paper presents a probabilistic collision avoidance method for navigation among other robots and moving obstacles, such as humans.

Chance-constrained Collision Avoidance for MAVs - YouTube

I was hoping that by the end, we’d have seen at least one collision. Oh well.

[ TU Delft ]

We aim to enable a mobile robot to navigate through environments with dense crowds, e.g., shopping malls, canteens, train stations, or airport terminals. In these challenging environments, existing approaches suffer from two common problems: the robot may get frozen and cannot make any progress toward its goal; or it may get lost due to severe occlusions inside a crowd. Here we propose a navigation framework that handles the robot freezing and the navigation lost problems simultaneously.

Getting Robots Unfrozen and Unlost in Dense Pedestrian Crowds - YouTube

[ Paper ] via [ RL_SLAM ]

Here’s some stuff that ROBOTIS has been experimenting with lately:

OpenManipulator 09 PLANAR - YouTube

OpenManipulator 11 Pen Holder - YouTube

[ Robotis ]

UBTECH brought plenty of robots to CES, and here’s some footage of the biggest ones.

CES 2019: Walker - YouTube

CES 2019: Cruzr - YouTube

[ UBTECH ]

In this work, we present the integration of multiple components of a full-size humanoid robot system, including control, planning, and perception methods for manipulation tasks. In particular, we introduce a set of modules based on visual object localization, whole-body control, and real-time compliant stabilization on the robot. The introduced methodologies are demonstrated on a box lifting task, performed by our newly developed humanoid bipedal robot COMAN+.

Whole-Body Stabilization for Visual-based Box Lifting with the COMAN+ Robot - YouTube

[ Dimitrios Kanoulas ]

Watch students in CMU’s Introduction to Robotics course do a Lego Mindstorms search and rescue scenario.

Intro to Robotics : Urban Search and Rescue 2018 - YouTube

[ CMU ]

With artificial intelligence and voice control, smart microwaves, cameras, and doorbells could finally catch on
Illustration: James Steinberg

Over the holidays, droves of consumers bought video doorbells, connected lights, and smart outlets that work with Amazon’s Alexa, Apple’s HomeKit, or Google Home. Plenty of people unwrapped connected speakers and image-processing cameras on Christmas morning.

Many of these purchases will get returned. Or they’ll be thrown away after one too many updates or a security scare. Perhaps luckier devices will find homes with tech-savvy friends. But most will be abandoned, in one way or another, because most of the smart devices on the market are stupid.

Over the six years I’ve covered smart home devices, they’ve presented their owners with four real problems: First, the devices were expensive. They also didn’t offer much functionality beyond remote control from an app. Even more frustrating, getting devices from different vendors to play nice together was tough. But perhaps the biggest problem is that consumers had no idea what to do with these devices.

Thankfully, that’s changing: Now there are more meaningful uses for smart devices because smart devices are finally living up to their name. Companies are now designing products that use artificial intelligence. Alongside that intelligence, the growth of voice as a user interface can now provide effortless interactions.

To see how important intelligence is, consider a camera. There’s a big difference between a camera that can tell you it saw something and one that can tell you what it saw. Adding face recognition and computer vision to that camera turns a product that pesters you with useless notifications into something actually helpful.
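
As a hedged illustration of that difference (the event fields and classes here are assumptions, not any particular product’s API), a smart camera’s notification logic might look something like this:

```python
# Decide whether a camera event deserves a push notification.
KNOWN_FACES = {"alice", "bob"}  # household members (assumed)

def should_notify(event):
    if event["type"] == "motion":
        return False                    # raw motion alone: noise, not news
    if event["type"] == "person":
        name = event.get("identity")    # from on-device face recognition
        return name not in KNOWN_FACES  # alert only for unknown people
    return event["type"] in {"package", "vehicle"}

print(should_notify({"type": "person", "identity": "alice"}))  # False
print(should_notify({"type": "person", "identity": None}))     # True
```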

The kitchen is a great place to see the growing usefulness of smart devices. My connected June oven has a camera and a graphics chip inside, so it can track what food is in the oven and recognize how it needs to be cooked. But true intelligence goes beyond just computer vision. With a connected device, manufacturers can embed intelligence into its accompanying app so that the user doesn’t have to think about it.

For example, the Joule Sous Vide cooker doesn’t have an interface: Everything is embedded in the app to help cooks take the guesswork out of cooking. The cook tells the app what meat or vegetable is in the bag and its approximate thickness, and from there the Joule sets the temp and timer on the user’s behalf.

This abdication of thought to the device is why voice has been so essential in making products smarter—and more useful—even if at first glance it seems superfluous. Take the new Alexa-enabled US $60 microwave launched last year by Amazon as an example. When the company launched the oven, people reacted with confusion: Why give a microwave voice control if you have to put the food inside it anyway?

In this case, the voice control offers an intuitive way to interact with the artificial intelligence that provides the cook times and settings for various foodstuffs. At the end of the day, the microwave can offer a better result than if you just punched in 60 seconds—even if you still have to put the food inside the oven yourself.

That’s when the high price of a connected gadget becomes justifiable. Now the challenge is to explain why these devices are worthwhile. Given how many people mocked the Alexa microwave as silly, it seems manufacturers haven’t succeeded in that last bit yet.

This article appears in the February 2019 print issue as “Do You Need a Smart Microwave?”

The implant restored leg movement in rats and could be scaled to fit humans

3D bioprinting—building tissues by putting down layers of cells and other materials—has led to the manufacturing of human tissues including corneas, skin, and blood vessels.

Now, a team at the University of California, San Diego, is raising the bar. In a paper published this week in the journal Nature Medicine, they describe a 3D-printed spinal cord implant that restored function in the hind limbs of rats with spinal cord injuries.

It is the first 3D printing of a complex central nervous system structure, according to the authors. It was made possible by an advanced bioprinting method called microscale continuous projection printing (μCPP), which prints biological materials roughly 1,000 times faster and at higher resolutions than traditional 3D printing methods.

The ultimate goal is to create a personalized implant that helps repair an injured spinal cord and then degrades away, says study co-author Shaochen Chen, vice chairman of the nanoengineering department at UCSD and co-founder of Allegro 3D, a startup commercializing the bioprinting tech. “At the end of the work, we hope that you have exactly a functional spinal cord there—and nothing else,” he says.

The spine is not an easy thing to repair, which is why spinal cord injuries are so devastating and often lead to permanent paralysis. Part of the challenge is that the slender, bundled axons of spinal cord nerves are delicate and finicky. These axons are highly sensitive to the materials used in implants or other repair systems. In past studies, axons would even avoid a material by growing around it or turning back the way they came.

Chen, neuroscientist Mark Tuszynski, and colleagues tested numerous materials to see which would be the most compatible with spinal axons, and settled on a hydrogel called polyethylene glycol–gelatin methacrylate. Using that hydrogel, they designed and printed a rat spinal cord scaffold that looks like a piece of oval Swiss cheese about the thickness of a penny.

3D Printer Makes a Spinal Cord Implant - YouTube
Video: Shaochen Chen lab/UC San Diego

To mimic axon-free gray matter, the inner, H-shaped part of the scaffold is solid. To mimic white matter, through which the axons run, the outer part of the scaffold contains 200 μm microchannels, like tunnels, to guide the nerves.

Such channels simply can’t be achieved with traditional inkjet or extrusion bioprinting methods, says Chen, as those methods only print down to a resolution of 200 μm. “When you lay down a droplet, that’s already 200 microns. You won’t be able to form a 200-micron pore,” he says. μCPP, on the other hand, prints down to the resolution of a single micron.

After the printing process, the team filled the microchannels with neural stem cells to encourage axon growth. 

And remember how I mentioned the speed? The μCPP method is fast. A traditional nozzle printer might take several hours to produce a personalized two-millimeter implant. μCPP does it in 1.6 seconds.

Here’s how: Instead of laying down individual droplets of material like an inkjet 3D printer, μCPP involves shining UV light across an entire plane of material—in this case, a mix of hydrogel and neural stem cells. Guided by digital images, the light fuses together the liquid material to create a solid shape of one’s choice. So instead of forming a structure drop by drop or bit by bit, μCPP creates a complex scaffold of hydrogel and cells in one continuous print.
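
As a hedged sketch of what one of those guiding digital images might look like (the geometry below is illustrative, not the study’s actual scaffold design), a cross-section mask at roughly 1 μm pixels can carve out 200 μm channels, something a printer whose smallest droplet is itself 200 μm cannot do:

```python
# Build one illustrative projection mask: True = cure hydrogel, False = leave open.
import numpy as np

UM_PER_PX = 1               # μCPP resolves ~1 μm per pixel
W = H = 2000                # a 2 mm x 2 mm field of view (toy dimensions)

mask = np.ones((H, W), dtype=bool)

# Punch a grid of 200-μm-square open microchannels into the cross section.
channel_px = 200 // UM_PER_PX
for cx in range(200, W - 200, 2 * channel_px):
    for cy in range(200, H - 200, 2 * channel_px):
        mask[cy:cy + channel_px, cx:cx + channel_px] = False

# Each mask guides one UV exposure; the scaffold is built as a continuous
# sequence of such projections rather than droplet by droplet.
print(f"cured fraction of this layer: {mask.mean():.2f}")
```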

Photo: David Baillot/UC San Diego Researchers at the University of California, San Diego made a four-centimeter implant modeled to fit an actual human spinal cord injury.

To prove the technique is scalable to humans, Chen’s team also printed scaffolds to match the size and shape of human spinal cord lesions using real MRI scans. The human implants, about 4 centimeters tall, took just 10 minutes to print.

But back to the rats. With the implants, the rat spines slowly repaired over four weeks as axons grew through the microchannels and into the lower spinal cord. The scaffolds retained their shape at the four-week mark, says Chen, then began to degrade over the following five months, with no sign of inflammation.

The rats with the stem cell-rich implants regained significant function in their hind limbs compared to animals with an empty scaffold or no scaffold at all.

The team has also used the μCPP method to make human liver tissue, which has a sophisticated microarchitecture, and blood vessels.

As for the spinal implants, “we’d like to do more fundamental studies of the materials and structures, then we’ll move onto larger animal experiments before planning clinical trials,” says Chen. He estimates human tests could begin in five years.

Learn about current trends in wearables and take a deep dive into breakthrough technology for emerging applications.

Download this free 34-page Trend Paper to learn about current trends in wearables and take a deep dive into breakthrough technology for emerging applications in B2B and B2C markets.


Discover a select set of case studies showing how researchers and industrial partners are revolutionizing transportation.
Download this free handbook to discover a select set of case studies showing how researchers and industrial partners are revolutionizing transportation in areas such as vehicle electrification, HIL simulation, autonomous driving and V2X communication.
This latest example of a government restricting Internet access to control its citizenry had unintended consequences

The government of Zimbabwe cut its citizens’ access to the Internet for 24 hours beginning mid-morning Tuesday, in a bid to quell violent protests that had erupted on Monday over the government-ordered doubling of both petrol and diesel fuel prices the previous Saturday.

According to the Irish Times, the government ordered the Postal and Telecommunications Regulatory Authority of Zimbabwe to block Internet access provided by Zimbabwe’s two largest ISPs, Econet and TelOne. The intention was to prevent citizens from using social media to organize protests similar to those that erupted on Monday.

What happened next is a case study in unintended consequences and humanity’s ever-increasing reliance on the Internet. A Bloomberg story reports that because Zimbabweans use the Internet to pay for their electricity on a daily basis, many homes lost their electricity along with Internet access. The story states that most Zimbabweans use Econet Wireless Zimbabwe’s Ecocash mobile-phone payment system to purchase “electricity in units of $5 or less and almost all domestic users are on prepaid meters, so many buy for $1 at a time.”

The United States Embassy in Zimbabwe condemned the government’s action; the government did lift some Internet restrictions on Wednesday evening. However, access to social media platforms like Facebook, WhatsApp, and Twitter apparently is still restricted. Many Zimbabweans have been using virtual private networks (VPNs) to circumvent the restrictions, but reports say that the government has blocked that workaround as well.

The South African news site Fin24 reported that Zimbabwe’s deputy minister for information, publicity and broadcasting services, Energy Mutodi, claimed on national television that the government had not blocked the Internet, but that it had gone down because of increased traffic. However, the same report noted that Econet founder Strive Masiyiwa acknowledged on his Facebook page that the government had indeed ordered the Internet blockage; to refuse would have meant jail for Econet’s management, he said.

The Zimbabwean government’s action follows in the wake of the Democratic Republic of the Congo government’s decision to cut access to the Internet and text messaging for two days at the beginning of the year, to quell expected protests over the preliminary results of its highly contested presidential election. The election, which was the first in 18 years, has been dogged by accusations of widespread voting fraud.

A Financial Times analysis of leaked voting data showed that second place finisher, Martin Fayulu, had actually won the election with a clear majority over last week’s declared winner, Felix Tshisekedi. The DRC’s electoral commission denied that the election results were fraudulent. Unless something drastic happens, Tshisekedi will be inaugurated as president this coming Tuesday.

The UN General Assembly and the UN Human Rights Council have adopted multiple resolutions, including this one [PDF] passed in 2016, unequivocally condemning actions that “intentionally prevent or disrupt access to or dissemination of information online in violation of international human rights law, and calls upon all States to refrain from and cease such measures.” Those resolutions, notes the international digital rights group Access Now, are routinely ignored.

Access Now claims its data show that incidents of governments cutting or restricting access to the Internet under the guise of maintaining public safety, national security, and the like have risen sharply, from 75 in 2016 to 188 in 2018.

Some disruptions are long-lived. For example, the government of Cameroon blocked the Internet for over 230 days in the country’s Southwest and Northwest regions between January 2017 and March 2018, with one incident lasting 93 days, in attempts to control violent anti-government protests. China restricted Internet access for 10 months in the western region of Xinjiang after ethnic violence between Muslim Uighurs and Han Chinese in 2009.

Of course, blocking access to the Internet is a blunt instrument. Actively censoring the Internet and social media can be just as effective, and has the added “benefit” of identifying those individuals or groups who might be seen as troublemakers. China, with its censorship factories and Great Firewall, is leading the way and showing other governments how to exert control over their populace if they so wish.

For instance, at the beginning of this year, Vietnam introduced a cybersecurity law that parallels many of China’s Internet and social media restrictions. The law makes it a criminal offense to criticize the government, and requires ISPs to provide the government user data on request. Not surprisingly, a study [PDF] released late last year by the democracy watchdog organization Freedom House reports that Internet freedom has declined for eight consecutive years.

As demonstrated in Zimbabwe, shutting down ISPs also can have unexpected side effects that go beyond inhibiting communications. It will be interesting to watch how IoT device manufacturers address the increasing number of government cut-offs and restrictions of the Internet, not only in terms of the reliability of IoT operation, but also in terms of giving governments even more weapons to control their populations. IoT security may currently be a big concern, but dealing with deliberate outages may turn out to be a more important one in the future.

Columbus is piloting a fleet of autonomous electric shuttles as part of a multimillion-dollar smart transportation initiative

Visitors to Columbus, Ohio, have a new way to see the city’s downtown attractions. A fleet of electric, self-driving vehicles now shuttle passengers around a cluster of museums and parks, using sensors and software in lieu of engines and auto parts. The pilot project, which began in mid-December, belongs to a larger statewide effort to improve road safety and mobility in this car-dependent capital.

“What we’re looking at is, how do we apply technology to improve people’s lives in a transportation context?” says Jordan Davis, director of Smart Columbus, which spearheads the fleet project. “We want to keep stretching the technology of self-driving vehicles to solve real use cases in our communities.”

Smart Columbus launched in 2016 after the city bested 77 other midsize U.S. cities for a pool of “smart transportation” funding. The U.S. Department of Transportation granted Columbus up to $40 million, while Vulcan Inc., the private company of the late Microsoft co-founder Paul Allen, will chip in up to $10 million. The program aims to develop advanced data and intelligent-systems technologies to solve problems plaguing many urban transportation networks: traffic congestion, accidents, tailpipe pollution, commuter delays, and inaccessibility.

“The human is actually one of the most expensive pieces of the equation….Once we’re truly able to pull drivers out of these vehicles, it will be much more economical than other types of solutions that [Columbus] is using today.” —Zafar Razzacki, May Mobility

On a January afternoon in downtown Columbus, three green-and-white shuttles loop around a 2.4-kilometer circuit, past a handful of skyscrapers and along the snaking Scioto River. Three more shuttles sit in a parking facility, charging their batteries or waiting their turn in the rotation. Michigan startup May Mobility operates the vehicles, which can seat six people and travel up to 40 kilometers per hour.

Jhane Gaines, a shuttle attendant, sits up front behind a digital dashboard. A T-bar for manual steering rests near her lap, along with a control panel of push buttons and an emergency hand brake. The sky is clear, but the sidewalks are still buried beneath the previous day’s snowfall. As a precaution, none of the shuttles are operating autonomously when I visit. Should the snow return, the sensors could interpret the flakes as obstructions and stop the vehicle, Gaines says as she drives.

“It’s still early, and we’re still learning and observing,” Zafar Razzacki, head of product at May Mobility, tells me earlier by phone. “With any self-driving system, how the system manages environmental changes, changes in precipitation, et cetera, those are things we’re watching very closely.”

Still, Gaines says “it’s the coolest thing” when she can sit back and let the shuttle drive itself. She especially likes when the vehicle turns down a side street, performs a U-turn, then turns left and continues along the circuit. “It does everything. I don’t have to do anything,” she marvels, though she admits it was “nerve-racking” at first to experience.

Gif: Maria Gallucci

Razzacki says the “secret sauce” behind May Mobility’s self-driving shuttles is the software, which is built on a proprietary set of algorithms that the company calls “Multi-Policy Decision Making.” The work began in the lab of CEO and founder Edwin Olson, a robotics professor at the University of Michigan in Ann Arbor.

“The common approach to artificial intelligence is to train the system by feeding it tons and tons of data that represents many different scenarios, and then try to teach the system to react to those scenarios,” Razzacki says. By contrast, May Mobility’s software “is designed to understand situations at a much more granular level and actually be able to predict what’s happening on an agent-by-agent basis. Instead of ‘recognize and react’ we like to ‘understand and plan.’”
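
May Mobility hasn’t published implementation details, so the following is only a hedged sketch of the general multi-policy idea Razzacki describes: forward-simulate a few candidate high-level policies against per-agent predictions and pick the one with the best predicted outcome. All names, costs, and dynamics here are illustrative.

```python
# Toy multi-policy selection: simulate candidate policies, pick the cheapest.
POLICIES = ["follow_lane", "nudge_forward", "yield", "stop"]
POLICY_SPEED = {"follow_lane": 1.0, "nudge_forward": 0.4,
                "yield": 0.2, "stop": 0.0}

def rollout_cost(policy, pedestrians, horizon=20):
    """Forward-simulate one policy against per-agent predictions (toy model)."""
    speed, cost = POLICY_SPEED[policy], 0.0
    for t in range(horizon):
        ego_x = speed * t
        for ped in pedestrians:
            ped_x = ped["x"] + ped["vx"] * t   # constant-velocity prediction
            if abs(ped_x - ego_x) < 2.0:       # predicted near-miss
                cost += 100.0
        cost += 1.0 - speed                    # prefer making progress
    return cost

pedestrians = [{"x": 15.0, "vx": -0.5}]        # one approaching pedestrian
best = min(POLICIES, key=lambda p: rollout_cost(p, pedestrians))
print(best)  # the policy with the lowest predicted cost
```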

Photo: Maria Gallucci The custom-designed sensor housing protects an array of radar and short-range LIDAR units.

May Mobility equips its shuttles with a combination of cameras, radar and multiple LIDAR (light detection and ranging) modalities, which gives the system 360-degree vision at the high-resolution, mid-range level and up close, for rapid response. “We’ve tried to come up with the hardware stack that’s reasonable to put out on the road but not prohibitively expensive,” Razzacki says.

The startup operates another self-driving shuttle service in Detroit, which launched last summer but isn’t open to the public. May Mobility has plans for a third service in Grand Rapids, Michigan, and several yet-to-be-announced locations.

Separately, Smart Columbus is planning a second self-driving shuttle route in Linden, a low-income neighborhood with a dearth of public transportation options. Davis says the city will begin soliciting proposals from developers later this month. The goal is to connect daily commuters to bus stops and other transit hubs, closing what's known as the “first-mile/last-mile” access gap.

Asked why Columbus is opting to test self-driving shuttles, rather than add more people-driven buses in Linden, Davis says the city’s ultimate objective is to help pioneer technologies that—through improved connectivity, awareness and sensitivity—can drastically reduce traffic-related fatalities and serious injuries. “Safety is a primary long-term hope for the technology,” she says.

Razzacki says there’s another potential benefit to replacing humans with software, though bus drivers won’t like the sound of it. “The human is actually one of the most expensive pieces of the equation,” he says. “Once we’re truly able to pull drivers out of these vehicles, it will be much more economical than other types of solutions that [Columbus] is using today.”

Passerine's fixed-wing drones can take off (and land) using a pair of legs

Drones have a fundamental design problem. The kind of drone that can carry large payloads at high speeds over long distances is fundamentally different from the kind of drone that can take off and land from a small area. In very simple terms, for the former, you want fixed wings, and for the latter, you want rotors.

This problem has resulted in a bunch of weird drones that try to do both of these things at once, usually by combining desired features from fixed-wing drones and rotorcraft. We’ve seen tail-sitter drones that can transition from vertical takeoff to horizontal flight; we’ve seen drones with propeller systems that swivel; and we’ve seen a variety of airframes that are essentially quadrotors stapled to fixed-wing aircraft to give them vertical take-off and landing capability. These sorts of compromises do work, more or less, but being compromises, they inevitably add weight, cost, and complexity in order to do everything they need to do.

A South African startup called Passerine has a better idea, which is to do what birds do: Use wings to fly efficiently, while relying on legs and feet for takeoff and landing.

Image: Passerine A computer rendering of Sparrow, one of Passerine’s drones.

This is a rendering of Passerine’s drone, called Sparrow. The first thing to notice is of course those legs, which we’ll certainly get into. But the fixed-wing design of the airframe might be a better place to start. Those over-wing engines create what’s called a blown wing, where the engine exhaust passes over the top of the wing and over a portion of the wing flaps. The forced high-speed air passing over the wings and flaps generates a substantial amount of lift (two or three times the lift of a conventional wing), and since the air is coming directly from the engines, you get that lift even if the aircraft isn’t moving very much. This is in contrast to most conventional wings and flaps, whose performance depends on the aircraft’s forward velocity. The upshot is that aircraft with blown wings or blown flaps can take off and land over a much shorter distance, and can fly much more slowly before they stall.
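
A quick look at the standard lift equation shows why this works. Here ρ is air density, S is wing area, C_L is the lift coefficient, and v is the speed of the air over the wing, which for a blown wing is set largely by the engine exhaust rather than the aircraft’s forward speed:

```latex
L = \tfrac{1}{2}\,\rho\,v^{2}\,S\,C_{L}
```

Because lift grows with the square of v, exhaust-driven flow over the wing keeps lift high even when the aircraft itself is barely moving.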

To be clear, blown wings aren’t Passerine’s idea, and they’ve been around for a while. Ukraine’s Antonov currently produces a freight aircraft with a similar over-wing engine arrangement, and NASA tested this Quiet Short-Haul Research Aircraft (QSRA) in the 1970s, showing that it could take off and land on an aircraft carrier without catapults or arresting gear, with room to spare.

Photos: Evan Ackerman/IEEE Spectrum NASA’s Quiet Short-Haul Research Aircraft on display at NASA Ames. Photos show blown wing design, with engines exhausting over the wings and flaps.

There are several reasons why over-wing engines never really caught on. The first is that they’re more difficult and expensive to maintain, because you can’t easily reach them from the ground. They’re also potentially riskier to use—since the engine itself produces so much lift, losing an engine during takeoff or landing has much more immediate consequences than with a standard engine arrangement. But the biggest reason why blown wings aren’t used in more aircraft seems to be simply that they’re not really necessary—runways are long enough that the extra lift they offer just isn’t worth the downsides.

Passerine has its eye on drone delivery at the moment, where payload, range, and speed are all critical, and being able to take off and land without requiring infrastructure like runways opens up many more options.

For drones, however, these downsides are minimal. At a smaller scale, the over-wing engines are actually easier to maintain. There’s still some risk with engine loss on takeoff or landing, but since you’re only hauling cargo, it’s not nearly as serious. And for many drone use cases, you have little or no infrastructure to rely on, making short takeoffs and landings far more important.

The blown wings that Passerine’s Sparrow drone uses definitely can’t lift the aircraft off of the ground by themselves, which is where the legs come in. You can think of the legs sort of like a self-contained and reusable catapult system that the drone carries along with it. They’re spring-loaded, and provide the majority (80 percent) of the energy required for takeoff.

Sparrow Jumper — March 2018 Showcase - YouTube

After takeoff, the legs retract into fairings alongside the fuselage to make sure that they don’t cause undue amounts of drag, and Sparrow flies just like any other fixed-wing aircraft. Once it gets to its destination, the legs can be used in reverse: The drone slows down as much as possible (using the blown wing to maintain lift), extends its legs, and then uses them as shock absorbers.

What this system means is that you can have all of the advantages of a fixed-wing drone (payload, speed, range, and efficiency) along with the pinpoint landing capabilities of a rotorcraft, without having to compromise with some kind of hybrid design. Sparrow can’t hover like a rotorcraft does, which will be a slight limitation on the kinds of missions it’s able to perform. It might not be ideal for camera work, for example. But that’s okay, since Passerine has its eye on deliveries at the moment, where payload, range, and speed are all critical, and being able to take off and land without requiring infrastructure like runways opens up many more options.

For more details, we spoke with Passerine founder and CEO Matthew Whalley. But first, here’s a quick clip to illustrate that the combination of legs and a blown wing can in fact get Sparrow into the air:

Sparrow Jumper Take-Off Test - YouTube

IEEE Spectrum: Can you put the takeoff test video into context for us?

Matthew Whalley: The video is with our test airframe: It shows that on launch we got past our stall speed and had some control, although we didn’t have our full control system onboard. The transition from what you see in the video to actual flight is basically control: The drone retracts the legs and keeps accelerating, climbing out at about 30 degrees. The flaps get raised, and it goes into cruise configuration. The real takeoff won’t look too much more exciting than the part you see in the video.

Can you describe what happens when the aircraft launches?

When it launches, it essentially jumps into the air. The launch is very similar to a bird. What a lot of people don’t realize is that when a bird takes off from the ground, it’s not generating the lift with its wings. Most of that initial takeoff velocity comes from a jump. Many small birds will do about a 5g jump to get them up to speed before they start flapping their wings. Our aircraft does essentially the exact same thing. When it jumps, it’s not about gaining height, it’s about launching the drone forward to get it up past its minimum flight speed, and at that point it’s flying like a conventional aircraft. 
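
As a rough worked example (the 5g figure is Whalley’s; the 25-centimeter leg stroke is an assumed illustrative value), constant acceleration over the leg stroke gives a launch speed of:

```latex
v = \sqrt{2ad} = \sqrt{2 \times (5 \times 9.81\ \mathrm{m/s^2}) \times 0.25\ \mathrm{m}} \approx 5\ \mathrm{m/s}
```

The jump’s job is simply to get the aircraft past its minimum flight speed, as Whalley describes; the wings take over from there.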

Where did this idea come from?

The idea of having a drone that could do long range and carry a fairly large payload originated back when I was at university, where it was this need for something in Africa to basically bridge the infrastructure gap that we have present in a lot of countries here. Interestingly enough, South Africa was the first place to do drone delivery—about 15 years ago we were trying to do medical deliveries. So that was the basis for the drone. I knew the capability it needed to have. 

Also, knowing that specifically in Africa, but also generally in the developing world, there is potential for this massive improvement by using drones, but there’s not a lot of infrastructure, not a lot of places where you could use a conventional fixed-wing airplane or drone. So you need something that can operate from very low infrastructure, and the legs came about as being a more efficient way of getting a long-range airplane into the air rather than trying to strap a quadcopter to it.

How closely are the legs modeled on the legs of birds?

It’s an interesting story. We started off with something that looked nothing like a bird’s leg. We knew that we needed a certain acceleration to get to flight speed, and every time we did an iteration that gave us more efficiency, it started to look more and more like a bird’s leg. The current design that we have now, which resembles a bird’s leg very closely, is actually the result of an iterative design process. It wasn’t our intention to make a bird’s leg, it just turned out that that was quite an efficient way of doing it.

What are the advantages of Passerine’s drones relative to hybrid drones, tail sitters, and other VTOL designs?

There are several different things. One is, hover is the least energy efficient point of flight by quite some margin. What we’ve essentially done is looked at most of the missions people are trying to do with drones and said, well actually, none of these require hover, they just need zero infrastructure takeoff and landing. So, by avoiding hover, we avoid the least efficient point of flight, which means we don’t need to carry as many batteries, we don’t need as powerful motors, and the result of that is that everything can be made a bit lighter, which means either you can fly farther, or you can carry additional payload. 

How much of an advantage is this for your design over other systems?

That depends on the mission, and on the specific system, obviously. But generally, we’re looking at 10 percent better performance than a hybrid drone. It depends on what the mission is: If you’re just doing a low-speed survey mission, it’s more in the 10 percent range. Where we really shine is on delivery or high speed missions, because we’re able to cruise at about 50 percent higher speed, and still have the same sort of energy usage as any of these other drones. We can do deliveries a lot faster, we can respond to emergencies a lot faster, and that’s where there’s a really big advantage over a hybrid drone.

Where does that higher speed come from?

Compared to most fixed-wing drones that are out there, our big advantage comes from our engine layout. We can design a drone that is optimal for high speed flight, but because we can use our blown wing to fly very slowly, we’re still able to bring it in at a low speed so that the autopilot can land it easily. That’s one of the major advantages that we have—our ability to be optimal for high speeds, but still able to perform low speed maneuvers, due to the wing and engine layout.

This system can also be used for landing, right?

If you can imagine how a bird lands, or how a hang glider lands, essentially you flare the aircraft very sharply, stalling the wings and using the entire aircraft as an air brake. You do an approach at a reasonably low speed and altitude, and then you do the flare maneuver which essentially stops the aircraft and it starts dropping towards the ground. One of the advantages of our engine layout and airframe is that in that flared, stalled configuration, we still generate a fair amount of lift.  Not enough to fly, but enough that you’re only getting a very small acceleration as you descend. So, even though we’re sinking, we sink quite slowly towards the ground. And then we deploy the legs below the aircraft to essentially act as shock absorbers for the last remaining energy that wasn’t dissipated by the flare maneuver.

Sparrow Jumper Take-off and Landing Animation - YouTube

How important is being able to land?

My personal belief is that yes, the landing is critical. Particularly when what you’re delivering is either sensitive or critical. In my opinion, things like blood really shouldn’t be being dropped by parachute. You want to be able to deliver it very precisely, and to the correct person, possibly into a secure environment.

Is this kind of landing really achievable with a useful payload?

Yes. Absolutely. We’re very confident on this approach to landing. 

Are there significant downsides to your design? For example, do the legs add a significant amount of weight, is the autopilot particularly complex, that sort of thing?

The one obvious disadvantage that our design does have is that it’s not able to hover. There are some missions, not many, but some missions where you want to be able to hover. We’re not designing for that. The other things are, yes, having legs on this airframe, they do weigh something. But the weight is not that significant—they’re more comparable to a retractable undercarriage that you’d have on a large remote controlled aircraft. You do need a reasonably complex flight controller, but nothing beyond what’s available off the shelf.

Can you give me a sense of what your targets are for speed, range, and payload?

For the first aircraft that we’re building, which we’re hoping will be the smallest of several versions, we’re looking at a cruise speed of 120 km/h, carrying 2 kg of payload, and a flight time of an hour. We plan to scale up to 100 kg payload with a 6-meter wingspan; that’s the largest size that we’re doing on-paper designs of, and that’s where we want to get to in the fairly near future.

Photo: Passerine Passerine founder and CEO Matthew Whalley.

Whalley tells us that Passerine will be starting pilot programs in the second quarter of 2019, deploying in several places across Africa doing real-world missions. They’re still working on a complete flight cycle (takeoff and landing), but Whalley expects that they’ll achieve that within the next month or two. 

Drones have the potential to be a valuable logistics tool in Africa in the near future, and from what we’ve seen, the ability to take off and land in areas without infrastructure will be critical to their effectiveness, especially when it comes to serving the areas that need them most. Passerine is certainly not the only drone company targeting this space, but they’re one of the most innovative, and we’re very much looking forward to seeing how Sparrow performs.

[ Passerine ]

The number of new wireless devices continues to escalate and the amount of data consumed continues to grow at an exponential rate

Overview

Wireless technology is everywhere. Every day, more new wireless devices are created and connect to today’s wireless networks, and the amount of data they consume continues to grow at an exponential rate. To address this demand, new wireless technologies are being investigated to evolve the existing wireless infrastructure. To that end, the world’s wireless standardization bodies have begun the arduous task of defining the next generation of wireless systems, commonly known as 5G. The 5G charter includes three specific use cases: Enhanced Mobile Broadband (eMBB), Massive Machine Type Communication (mMTC), and Ultra Reliable Machine Type Communication (uRMTC).

These three use cases map to different requirements, such as an emphasis on peak data rate for eMBB and on latency for uRMTC. Because of these differing requirements, no single technology will be able to address them all; rather, 5G will be a combination of new technologies. For the eMBB case specifically, researchers must increase peak data rates by 100x over 4G with very limited “available” spectrum below 6 GHz. Data rates are linked to spectrum availability according to the Shannon-Hartley theorem, which states that capacity is a function of bandwidth (i.e., spectrum) and channel noise. Because spectrum below 6 GHz is almost fully allocated, researchers must explore spectrum above 6 GHz and into the mmWave range to address the eMBB use case.
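
For reference, the Shannon-Hartley capacity formula makes the bandwidth dependence explicit; C is channel capacity in bits per second, B is bandwidth in hertz, and S/N is the signal-to-noise ratio:

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

At a fixed signal-to-noise ratio, capacity scales linearly with bandwidth but only logarithmically with signal power, which is why a 100x jump in peak data rate is far easier to reach with the gigahertz of contiguous spectrum available at mmWave frequencies than with incremental SNR gains below 6 GHz.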

The Need for mmWave Software Defined Radios (SDRs)

Service operators around the world have paid billions of dollars for spectrum to serve their customers. The exorbitant auction prices for spectrum below 6 GHz highlight the strength of competitive market forces, but also the scarcity of this precious resource. As stated above, enhanced data rates and increased capacity are constrained by spectrum, according to Shannon. More spectrum yields higher data rates, which enables service operators to accommodate more users while also delivering a consistent mobile broadband data experience. In contrast, mmWave spectrum is plentiful and lightly licensed, meaning it is accessible to service operators around the world. The challenges impacting mmWave adoption lie primarily in the unanswered technical questions regarding this unexplored and largely uninvestigated spectrum.

To capitalize on the promise of mmWave for 5G, researchers must develop new technologies, algorithms, and communications protocols, as the fundamental properties of the mmWave channel are different from current cellular models and are relatively unknown. The importance of building mmWave prototypes cannot be overstated, especially in this early time frame. Building mmWave system prototypes demonstrates the viability and feasibility of a technology or concept in ways that simulations alone cannot. mmWave prototypes communicating in real time and over the air in a variety of scenarios will unlock the secrets of the mmWave channel and enable technology adoption and proliferation.

Figure 1: Three high-level 5G use cases as defined by 3GPP and IMT-2020

To that end, NI offers the world’s first real-time mmWave prototyping system, which enables engineers and researchers to quickly prototype mmWave systems. Combining modular, flexible hardware with powerful application software, the NI mmWave Transceiver System is an SDR for mmWave applications. Because the mmWave Transceiver System is a full SDR, researchers can deploy their designs quickly, as both the hardware and software are fully featured and modular, and then quickly iterate to optimize the design in software.

mmWave Communications System Prototyping

There are numerous challenges to building a complete mmWave communications prototype. Consider a baseband subsystem capable of processing a multi-GHz channel. Most of today’s LTE implementations use 10 MHz channels (20 MHz at most), and the computational load increases linearly with bandwidth. In other words, computational capacity must increase by a factor of 100 or more to meet the 5G data rate requirements. The algorithms used in today’s LTE turbo decoders are computationally intensive and require high-performance computing hardware to process data in real time. FPGAs provide an ideal hardware solution for these computations, and for ultra-wide-bandwidth turbo decoding, FPGAs are essential.
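
To make that factor concrete, compare a 2 GHz mmWave channel (the bandwidth the system described below supports) with LTE’s widest 20 MHz carrier:

```latex
\frac{2\ \mathrm{GHz}}{20\ \mathrm{MHz}} = 100
```

Since the processing load grows roughly linearly with bandwidth, the baseband must do on the order of 100 times the work of an LTE baseband just to keep up.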

Although FPGAs must be considered a core component of an mmWave prototyping system, programming a multi-FPGA system capable of processing multi-GHz channels adds considerable system complexity. To address that complexity and the software challenge, NI provides an mmWave physical layer in source code that accounts for the fundamental aspects of an mmWave baseband and also provides abstractions for data movement and processing across multiple FPGAs to simplify the task.

FPGAs are only one piece of an mmWave prototyping system. Data needs to flow back and forth between the digital domain, where it is processed, and the analog domain, where signals are sent and received over the air. Improvements in DAC and ADC technologies make it possible to capture between 1 and 2 GHz of bandwidth. There are some mmWave-frequency ICs on the market today. These ICs can be connected to the ADCs and DACs in the mmWave Transceiver System for evaluation and prototyping work. However, RFICs do not offer the high power output or the RF quality needed for channel sounding and communications prototyping. To integrate higher power and better-quality RF, an IF stage is used to convert signals to 12 GHz. Finally, mmWave radio heads are connected to the IF module. Each piece of this prototyping system would require a different set of design expertise and significant engineering resources to develop if one wanted to build a prototyping system from scratch. The hardware design of each piece is nontrivial, and the software needed to control and synchronize all of the stages adds an extra layer of complexity to custom system designs. The mmWave Transceiver System offers a complete prototyping solution to help engineers go from concept and algorithm design to prototyping faster.

NI offers four off-the-shelf mmWave prototyping system configurations, which are discussed in more detail below. Based on the PXIe platform, the mmWave Transceiver Systems are composed of a 2 GHz bandwidth baseband processing subsystem; a 2 GHz bandwidth filtered intermediate frequency (IF) stage and LO module; and modular mmWave radio heads that reside external to the chassis. A system diagram can be seen below in Figure 2.

Figure 2: mmWave system diagram

This modular approach creates a flexible hardware platform that can be modified by adding or removing modules to accommodate a wide variety of channels and configurations. Users may choose to use the full NI mmWave solution or integrate their own RF into the NI IF and baseband system. Users can also prototype different bands with the same IF and baseband hardware and software. The system can scale from a unidirectional SISO system, for applications such as channel sounding, to a bidirectional MIMO system capable of transmitting and receiving in parallel for a full two-channel, two-way communications link. The various system components and configurations are discussed in detail in this paper. Application-specific software is not discussed in this white paper.

mmWave Transceiver System Hardware

The mmWave Transceiver System is an SDR platform for building mmWave applications, including system prototyping. It gives users access to a flexible hardware platform and application software that enable real-time over-the-air mmWave communications research. The software is open to the user and can be modified as research needs change, so designs can be iterated and optimized to meet specific goals.

The NI mmWave Transceiver System is composed of PXIe chassis, controllers, a clock distribution module, FlexRIO FPGA modules, high-speed DACs, high-speed ADCs, LO and IF modules, and mmWave radio heads. The modules can be assembled in various configurations to address a large number of mmWave applications, ranging from channel sounding to MIMO communications link prototyping. This document provides a detailed overview of the hardware used in the mmWave Transceiver System and how the modules interact with each other. Detailed performance specifications can be found in the mmWave Transceiver System specifications sheet (http://www.ni.com/pdf/manuals/375847b.pdf) and model page (http://sine.ni.com/nips/cds/view/p/lang/en/nid/213722).

PXI Express Chassis

The mmWave prototyping system is based on the PXIe-1085 chassis. The chassis houses the different processing modules and provides the power, interconnectivity, and timing and synchronization infrastructure. This 18-slot chassis features PCI Express (PCIe) Generation 3 technology in every slot for high-throughput, low-latency applications and is capable of 4 GB/s of per-slot bandwidth and 24 GB/s of system bandwidth. The PXIe-1085 uses a dual-switch backplane architecture, shown in the system diagram of Figure 3. Because of the flexible design of PXI, multiple chassis can be daisy-chained together or put in a star configuration when building higher-channel-count systems.
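
To see why these bandwidth figures matter, consider the raw data rate of a single 2 GHz I/Q stream. The sample rate and sample width below are assumptions for illustration only.

    # Can a single PXIe slot carry one 2 GHz I/Q stream? (illustrative values)
    sample_rate = 3.072e9      # assumed complex samples per second
    bytes_per_sample = 4       # assumed 16-bit I + 16-bit Q

    stream_GBps = sample_rate * bytes_per_sample / 1e9
    print(f"{stream_GBps:.1f} GB/s needed vs 4 GB/s available per slot")
    # ~12.3 GB/s -- more than any one slot can move, which is one reason the
    # processing is distributed across multiple FPGA modules and links.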

Figure 3: 18-Slot PXIe-1085 Chassis (a) and System Diagram (b)

High-Performance Reconfigurable FPGA Processing Module

Central to every SDR are the software and computational elements that compose the physical layer. The mmWave prototyping system uses single-slot FlexRIO modules to add flexible, high-performance FPGA processing, programmable with LabVIEW, within the PXIe form factor. The PXIe-7976R FlexRIO FPGA module can be used standalone, providing a large, customizable Xilinx Kintex-7 410T with PCI Express Generation 2 x8 connectivity to the PXI Express backplane. The mmWave Transceiver System maps the different processing tasks onto multiple FPGAs in a software-configurable manner, depending on the particular configuration.

Figure 4: PXIe-7976R FlexRIO Module (a) and System Diagram (b)

High-Performance FPGA for High Data Throughput Applications

The NI PXIe-7902 FPGA module is a powerful processing module built around a Xilinx Virtex-7 485T. The large FPGA makes it ideal for processing-heavy applications such as the mmWave physical layer. The module transfers data across the backplane of a PXIe chassis at PCIe Gen 2 x8 speeds. For applications needing faster data rates, the PXIe-7902 also features six miniSAS HD front-panel connectors carrying 24 multigigabit transceivers (MGTs). The MGTs can be connected to other PXIe-7902 modules, or to modules such as a DAC or ADC, to enable up to 2 GHz of real-time bandwidth on multi-channel baseband signals.
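
As a rough check that the front-panel links can carry such wideband streams, the sketch below aggregates the 24 lanes. The per-lane line rate and encoding efficiency are assumptions; actual rates depend on the FPGA speed grade and the serial protocol used.

    # Aggregate payload capacity of 24 MGT lanes (assumed figures).
    lanes = 24                  # MGTs across the 6 miniSAS HD connectors
    line_rate_gbps = 10.0       # assumed line rate per lane
    encoding_efficiency = 0.8   # assumed 8b/10b-style coding overhead

    payload_GBps = lanes * line_rate_gbps * encoding_efficiency / 8
    print(f"~{payload_GBps:.0f} GB/s aggregate payload")   # ~24 GB/s
    # Comfortably above the ~12 GB/s that a single 2 GHz I/Q stream
    # requires in the earlier per-slot estimate.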

Figure 5: PXIe-7902 FPGA Module (a) and System Diagram (b)

Ultra-Wideband DAC and ADC

The PXIe-3610 DAC is shown below in Figure 6, and the PXIe-3630 ADC is shown in Figure 7. Both provide access to analog baseband differential I/Q pairs through four MCX front-panel connectors. These modules can be connected to each other to create a baseband loopback test system, to the PXIe-3620 IF module, or to third-party baseband hardware. Basic performance information is shown below in Table 1; detailed performance information can be found in the mmWave Transceiver System data sheet.

Table 1: Basic performance specifications of the PXIe-3610 and PXIe-3630

Figure 6: DAC module and block diagram

Figure 7: ADC module and block diagram

IF module

The PXIe-3620 LO/IF module processes one transmit and one receive chain, each with up to 2 GHz of bandwidth. On transmit, the PXIe-3620 mixes the input signals with the integrated LO to upconvert the baseband signal to a software-programmable IF between 8.5 and 13 GHz. On receive, it takes an 8.5 to 13 GHz IF input and converts it to baseband. The module contains internal gain control and is capable of transmitting up to 7 dBm and receiving a 20 dBm signal. The PXIe-3620 also provides the LO reference signal for the NI mmWave radio heads (mmRH modules), and it can optionally accept an external LO signal or drive the LO signal for other IF modules to synchronize multiple transmit/receive streams in a MIMO topology. The differential I/Q pairs are accessible through MCX connections on the device front panel.
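
A minimal frequency-planning sketch follows, assuming a hypothetical single-mix radio head for which RF = head LO + IF; the head LO value is a placeholder, since the conversion scheme inside the heads is not detailed here.

    # Choose an IF inside the PXIe-3620's programmable range for a target RF,
    # under the simplifying assumption RF = head_LO + IF (hypothetical).
    IF_MIN_HZ, IF_MAX_HZ = 8.5e9, 13e9

    def choose_if(target_rf_hz, head_lo_hz):
        if_hz = target_rf_hz - head_lo_hz
        if not IF_MIN_HZ <= if_hz <= IF_MAX_HZ:
            raise ValueError(f"IF {if_hz / 1e9:.2f} GHz is outside 8.5-13 GHz")
        return if_hz

    # Example: a 28 GHz target with an assumed 16 GHz head LO -> 12 GHz IF.
    print(choose_if(28e9, 16e9) / 1e9, "GHz")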

Figure 8: PXIe-3620 IF module

mmWave heads

The modular mmWave radio heads provide a high-quality RF signal for the mmWave Transceiver System and support the following frequency bands:

  • 24.25 – 33.4 GHz
  • 37 – 43.5 GHz
  • 71 – 76 GHz

All of the radio heads support 2 GHz of bandwidth, and each is available as a transmitter, a receiver, or a transceiver; for details on the products, please refer to Table 2 below. The mmWave heads contain attenuators and amplifiers to provide gain control while managing noise figure. Detailed RF characteristics are listed in Table 3. The radio heads can be connected to a user-provided antenna, such as a horn antenna or a phased-array antenna.

Table 2: Product names of the mmWave radio heads and part numbers for required digital cables for each model.

Table 3: Characteristics of each radio head.

Notes for Table 3: (1) Near minimum gain; for lower gain settings, 1 dB compression is higher than full scale. (2) At maximum gain.
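
The trade-off between attenuator settings and achievable noise figure follows the standard Friis cascade formula; below is a minimal sketch with hypothetical stage values (see Table 3 for the heads' actual specifications).

    import math

    # Friis formula for the noise figure of a cascade of gain stages.
    def cascade_nf_db(stages):
        """stages: list of (gain_dB, noise_figure_dB) tuples, input first."""
        f_total, gain_linear = 0.0, 1.0
        for gain_db, nf_db in stages:
            f = 10 ** (nf_db / 10)
            f_total += f if f_total == 0.0 else (f - 1) / gain_linear
            gain_linear *= 10 ** (gain_db / 10)
        return 10 * math.log10(f_total)

    # Hypothetical chain: LNA (20 dB gain, 3 dB NF), 10 dB attenuator,
    # second amplifier (15 dB gain, 6 dB NF).
    print(cascade_nf_db([(20, 3), (-10, 10), (15, 6)]))  # ~3.8 dB
    print(cascade_nf_db([(-10, 10), (20, 3), (15, 6)]))  # ~13.1 dB, pad first

Placing gain ahead of attenuation keeps the cascade noise figure close to that of the first amplifier, which is why the internal ordering of amplifiers and attenuators in the heads matters.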

Figure 9: Left: 24.25–33.4 GHz and 37–43.5 GHz radio head. Right: 71–76 GHz mmWave radio head.

System Configuration Options

The NI mmWave Transceiver System is meant to be a flexible hardware platform capable of servicing a wide variety of communications needs. While hardware can be added or removed to create other types of systems, four base configurations are offered to meet the most common use cases:

  • Unidirectional SISO
  • Unidirectional 2x2 MIMO
  • Bidirectional SISO
  • Bidirectional 2x2 MIMO

Unidirectional Systems

The two unidirectional options consist of two PXIe chassis, with the transmitter(s) in one chassis and the receiver(s) in the other. This configuration is well suited to channel sounding measurements: the transmit and receive subsystems can be physically separated to take measurements in a wide variety of environments. Because of the modular nature of this PXIe-based system, extra hardware can easily be added to accommodate different research needs, such as adding receive channels for more accurate angle-of-arrival measurements. As an alternative to adding extra receive channels for a parallel receive, or parallel transmit and receive, implementation, an external switch can be added to the SISO system. This flexibility allows researchers to choose the hardware configuration that best meets their measurement-speed and configuration needs. Because the mmWave Transceiver System was designed for MIMO architectures, it is easy to share an LO between channels for phase coherence. Diagrams of the SISO and 2x2 MIMO versions of this system can be seen below in Figures 10 and 11.
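
One common measurement approach on this kind of hardware is correlation-based sounding: the transmitter repeats a known probe sequence and the receiver correlates against it to recover the power delay profile. Below is a minimal NumPy sketch under assumed, synthetic channel conditions.

    import numpy as np

    # Correlation-based channel sounding on a synthetic multipath channel.
    rng = np.random.default_rng(0)
    probe = rng.choice([-1.0, 1.0], size=1024)   # known PN-like probe

    paths = [(0, 1.0), (23, 0.4), (57, 0.15)]    # (delay in samples, gain)
    rx = np.zeros(2048)
    for delay, gain in paths:
        rx[delay:delay + probe.size] += gain * probe
    rx += 0.01 * rng.standard_normal(rx.size)    # receiver noise

    # Sliding correlation against the known probe gives the delay profile.
    pdp = np.correlate(rx, probe, mode="valid") ** 2
    print(np.sort(pdp.argsort()[-3:]))           # -> [ 0 23 57]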

Figure 10: Unidirectional SISO configuration

Figure 11: Unidirectional MIMO configuration

Bidirectional Systems

Each of the bidirectional system configurations consists of two PXIe chassis, with one transmit chain and one receive chain per channel in each chassis. These systems are designed for communications prototyping, giving researchers the hardware they need to create a real-time two-way communications link. There are many unknowns in mmWave communications research.

Determining how signals behave in an mmWave channel is important, and a well-defined channel model will help algorithm developers, but ultimately real-time communications links need to be prototyped to validate their performance in these new frequency bands. Whether validating a new physical layer and air interface or exploring how the existing LTE physical layer can be adapted to ultra-wide bandwidths like 2 GHz, the NI mmWave Transceiver System can be used to prototype performance in real time. Combining NI's mmWave system with additional FPGA processing and LabVIEW, it is possible to perform modulation, demodulation, coding, and turbo decoding on up to 2 GHz of bandwidth in real time. These systems are designed as a platform for researchers to develop and test communication protocols. Unlike sub-6 GHz communications, mmWave signals are highly directional, and the protocol needs to ensure that two or more nodes are able to locate each other. Nodes must be able to exchange control and measurement information, for example as part of a beam steering or random access protocol. Figures 12 and 13 show diagrams of these bidirectional system configurations.
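
To make the node-discovery problem concrete, here is a toy sketch of the exhaustive beam sweep such a protocol might begin with; the codebook sizes and the SNR measurement are placeholders, not part of the NI software.

    from itertools import product

    TX_BEAMS = range(16)   # assumed transmit codebook size
    RX_BEAMS = range(16)   # assumed receive codebook size

    def beam_sweep(measure_snr):
        """Try every TX/RX beam pair and keep the one with the best SNR."""
        return max(product(TX_BEAMS, RX_BEAMS),
                   key=lambda pair: measure_snr(*pair))

    # Synthetic stand-in for an over-the-air SNR probe, peaking at (5, 11).
    snr = lambda tx, rx: -((tx - 5) ** 2 + (rx - 11) ** 2)
    print(beam_sweep(snr))   # -> (5, 11)

An exhaustive sweep scales as the product of the two codebook sizes, which is one reason faster hierarchical and random-access discovery schemes are active research topics.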

Figure 12: Bidirectional SISO configuration

Figure 13: Bidirectional MIMO configuration

Summary

NI's mmWave Transceiver System is a modular set of hardware for applications ranging from channel sounding to prototyping real-time two-way communications systems. The system is built on the PXI platform and provides a flexible set of modules that can be combined in a number of different configurations to meet ever-changing research needs. The mmWave radio heads themselves are also modular and can be replaced with other RF front ends to investigate multiple frequency bands with the same base set of hardware and software, saving engineering design time and maximizing system reuse. This hardware, combined with the power of LabVIEW, provides an excellent platform for mmWave communications prototyping and helps engineers innovate faster.

Additional Resources

  • See the mmWave manuals (http://search.ni.com/nisearch/app/main/p/ap/tech/lang/en/pg/1/sn/catnav:pm/fil/AND(nilanguage:en,phwebnt:21476,nicontent
