The future of open source augmented reality just got easier to build. Since our last major release, we’ve streamlined Project North Star even further, including improvements to the calibration system and a simplified optics assembly that 3D prints in half the time. Thanks to feedback from the developer community, we’ve focused on lowering part counts, minimizing support material, and reducing the barriers to entry as much as possible. Here’s what’s new with version 3.1.

Introducing the Calibration Rig

As we discussed in our post on the North Star calibration system, small variations in the headset’s optical components affect the alignment of the left- and right-eye images. We have to compensate for this in software to produce a convergent image that minimizes eye strain.

Before we designed the calibration stand, each headset’s screen positions and orientations had to be compensated for manually in software. With the North Star calibrator, we’ve automated this step using two visible-light stereo cameras. The optimization algorithm finds the best distortion parameters automatically by comparing images inside the headset with a known reference, so auto-calibration can reach the best possible image quality within a few minutes. Check out our GitHub project for instructions on the calibration process.

Mechanical Updates

Building on feedback from the developer community, we’ve made the assembly easier and faster to put together. Mechanical Update 3.1 introduces a simplified optics assembly, designated #130-000, that cuts print time in half (as well as being much sturdier).

The biggest cut in print time comes from the fact that we no longer need support material on the lateral overhangs. In addition, two parts were combined into one. This compounding effect saves an entire workday’s worth of print time!

Left: 1 part, 95 g, 7-hour print, no supports. Right: 2 parts, 87 g, 15-hour print, supports needed.

The new assembly, #130-000, is backwards compatible with Release 3. Its components replace #110-000 and #120-000, the optics assembly and electronics module respectively. Check out the assembly drawings in the GitHub repo for the four parts you need!

Cutout for Power Pins

Last but not least, we’ve made a small cutout for the power pins on the driver board mount. When we received our NOA Labs driver board, we quickly noticed the interference and made the change to all the assemblies.

This change makes it easy to route power whether you’re using header pins or soldered wires, on either the top or bottom of the board.

Want to stay in the loop on the latest North Star updates? Join the discussion on Discord!

The post Project North Star: Mechanical and Calibration Update 3.1 appeared first on Leap Motion Blog.


 Over the past few months we’ve hit several major milestones in the development of Project North Star. At the same time, hardware hackers have built their own versions of the AR headset, with new prototypes appearing in Tokyo and New York. But the most surprising developments come from North Carolina, where a 19-year-old AR enthusiast has built multiple North Star headsets and several new demos.

Graham Atlee is a sophomore at High Point University in North Carolina, majoring in entrepreneurship with a minor in computer science. In just a few months, he went from concept sketches and tutorials to building his own headsets. Building augmented reality demos in Unity with North Star is Graham’s first time programming.

Graham records his North Star videos through a hacked Logitech webcam. (As this lacks a heatsink, it’s not recommended for use by anyone.)

“You have to go in and click around, and see what breaks this and that.” For Graham, it’s been a mix of experimentation with computer science textbooks and (naturally) Stack Overflow. Coding “was kind of daunting at first, but it’s like learning a language. Once you pick it up it becomes part of you.”

On the hardware side, Graham is entirely self-taught. He was able to follow build tutorials from Japanese dev group exiii, which include links to all the parts. “Assembling the headset itself is pretty stressful. Be careful with the screws you use, because the plastic is kind of fragile and can crack.”

Graham built his first North Star headset using reflectors from exiii, and later upgraded to higher-quality injection-molded lenses from Wearpoint founder Noah Zerkin.

“Augmented reality is going to change the Internet and surpass the World Wide Web. I think it’s going to be bigger than that. It might sound ridiculous or idealistic, but I truly believe that’s where it’s going.” The real impact of AR, however, won’t be felt until the latter half of the 2020s. “People in the AR industry like to argue from analogy – ‘this is where the iPhone was.’ The more cynical people say it’s closer to Alan Turing’s machine.”

 

By starting a new sharing site – Pumori.io, named after a Himalayan mountain – Graham hopes to collaborate with the open source AR community to explore and create new ways of manipulating information.

“Ideally, we want a situation where anyone can build an AR headset and run spatial computing applications on it. We need help to figure out what we call the TUI (Tangible User Interface). I want to explore rich new interactions, provide stable 3D interfaces, and open-source them for people to use. With North Star I’ve realized how important hands are going to be to the future of AR.”

The post How a Self-Taught Teen Built His Own North Star Headset appeared first on Leap Motion Blog.


Bringing new worlds to life doesn’t end with bleeding-edge software – it’s also a battle with the laws of physics. With new community-created headsets appearing in Tokyo and New York, Project North Star is a compelling glimpse into the future of AR interaction. It’s also an exciting engineering challenge, with wide-FOV displays and optics that demanded a whole new calibration and distortion system.

Just as a quick primer: the North Star headset has two screens, one on each side. These screens face towards the reflectors in front of the wearer. As their name suggests, the reflectors reflect the light coming from the screens into the wearer’s eyes.

As you can imagine, this requires a high degree of calibration and alignment, especially in AR. In VR, our brains often gloss over mismatches in time and space, because we have nothing to visually compare them to. In AR, we can see the virtual and real worlds simultaneously – an unforgiving standard that requires a high degree of accuracy.

North Star sets an even higher bar for accuracy and performance, since it must be maintained across a much wider field of view than any previous AR headset. To top it all off, North Star’s optics create a stereo-divergent off-axis distortion that can’t be modelled accurately with conventional radial polynomials.

How can we achieve this high standard? Only with a distortion model that faithfully represents the physical geometry of the optical system. The best way to model any optical system is by raytracing – the process of tracing the path rays of light travel from the light source, through the optical system, to the eye.[1] Raytracing makes it possible to simulate where a given ray of light entering the eye came from on the display, so we can precisely map the distortion between the eye and the screen.[2]
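
To make the idea concrete, here is a minimal raytracing sketch: one view ray leaves the eye, reflects off a combiner, and is followed to the display plane, giving one sample of the eye-to-screen distortion map. It uses a flat 45° combiner purely for illustration (North Star’s real reflectors are curved and off-axis, which is exactly why a full raytraced model is needed), and every position below is an assumed value rather than real headset geometry.

    import numpy as np

    def reflect(d, n):
        """Reflect a ray direction d about a unit surface normal n."""
        return d - 2.0 * np.dot(d, n) * n

    def intersect_plane(origin, direction, point, normal):
        """Ray parameter t where the ray meets a plane, or None if it misses."""
        denom = np.dot(direction, normal)
        if abs(denom) < 1e-9:
            return None
        t = np.dot(point - origin, normal) / denom
        return t if t > 0 else None

    # Assumed geometry (metres): eye at the origin, looking straight ahead along -z.
    eye = np.array([0.0, 0.0, 0.0])
    view_ray = np.array([0.0, 0.0, -1.0])

    # Flat stand-in combiner 5 cm in front of the eye, tilted 45 degrees to bounce light upward.
    combiner_point = np.array([0.0, 0.0, -0.05])
    combiner_normal = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)

    # Display panel 6 cm above the eye, facing downward.
    display_point = np.array([0.0, 0.06, -0.05])
    display_normal = np.array([0.0, -1.0, 0.0])

    t_hit = intersect_plane(eye, view_ray, combiner_point, combiner_normal)
    hit = eye + t_hit * view_ray                     # where the view ray meets the combiner
    bounced = reflect(view_ray, combiner_normal)     # reflected up toward the display
    t_display = intersect_plane(hit, bounced, display_point, display_normal)
    print("this view ray samples the display at", hit + t_display * bounced)

Repeating this for a grid of view rays per eye builds up the distortion mapping described above.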

But wait! This only works properly if we know the geometry of the optical system. This is hard with modern small-scale prototyping techniques, which achieve price effectiveness at the cost of poor mechanical tolerancing (relative to the requirements of near-eye optical systems). In developing North Star, we needed a way to measure these mechanical deviations to create a valid distortion mapping.

One of the best ways to understand an optical system is… looking through it! By comparing what we see against some real-world reference, we can measure the aggregate deviation of the components in the system. A special class of algorithms called “numerical optimizers” lets us solve for the configuration of optical components that minimizes the distortion mismatch between the real-world reference and the virtual image.

Leap Motion North Star calibration combines a foundational principle of Newtonian optics with virtual jiggling.

For convenience, we found it was possible to construct our calibration system entirely in the same base 3D environment that handles optical raytracing and 3D rendering. We begin by setting up one of our newer 64mm modules inside the headset and pointing it towards a large flat-screen LCD monitor. A pattern on the monitor lets us triangulate its position and orientation relative to the headset rig.
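
As a rough sketch of recovering a pose from a known pattern, OpenCV’s solvePnP can estimate where the monitor sits from the physical locations of a few pattern features and the pixels where a camera detects them. This is a single-camera illustration (the actual rig uses two stereo cameras and its own pattern), and the pattern layout, pixel coordinates, and intrinsics below are placeholder values.

    import numpy as np
    import cv2

    # Hypothetical pattern corners in the monitor's own coordinate frame (metres),
    # and where a calibrated camera on the rig detected them in the image (pixels).
    object_points = np.array([[0.0, 0.0, 0.0],
                              [0.5, 0.0, 0.0],
                              [0.5, 0.3, 0.0],
                              [0.0, 0.3, 0.0]], dtype=np.float32)
    image_points = np.array([[440.0, 240.0],
                             [840.0, 250.0],
                             [830.0, 480.0],
                             [445.0, 475.0]], dtype=np.float32)

    # Hypothetical camera intrinsics (fx, fy, cx, cy), no lens distortion.
    K = np.array([[800.0, 0.0, 640.0],
                  [0.0, 800.0, 360.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix: monitor frame -> camera frame
    print("monitor position relative to the camera (m):", tvec.ravel())

The recovered rotation and translation give the monitor’s pose in the camera’s frame, which is what anchors the virtual monitor in the next step.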

With this, we can render an inverted virtual monitor on the headset in the same position as the real monitor in the world. If the two versions of the monitor matched up perfectly, they would additively cancel out to uniform white.[3] (Thanks Newton!) The module can now measure this “deviation from perfect white” as the distortion error caused by the mechanical discrepancy between the physical optical system and the CAD model the raytracer is based on.
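
The cancellation itself is just Newtonian colour addition. A toy sketch, with a random image standing in for the camera’s actual view of the real monitor through the optics:

    import numpy as np

    # Toy stand-in for what the calibration camera sees of the real monitor through the optics.
    real_view = np.random.default_rng(1).random((4, 4))

    # The headset renders the photometric inverse of the same pattern at the same place.
    virtual_view = 1.0 - real_view

    # With a perfect distortion model the two add to uniform white; any residual
    # "deviation from perfect white" is the error the calibration tries to remove.
    combined = real_view + virtual_view
    print("max deviation from white:", np.abs(1.0 - combined).max())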

This “one-shot” photometric cost metric allows for a speedy enough evaluation to run a gradientless simplex Nelder-Mead optimizer in-the-loop. (Basically, it jiggles the optical elements around until the deviation is below an acceptable level.) While this might sound inefficient, in practice it lets us converge on the correct configuration with a very high degree of precision.[4]
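
Here is a toy version of that loop using SciPy’s gradient-free Nelder-Mead simplex method. The render_and_capture function below is a synthetic stand-in for the real headset-in-the-loop step (render the inverted monitor through the candidate optical configuration, then grab a camera frame), and the parameter count and scales are assumptions for illustration.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    TRUE_DEVIATIONS = rng.normal(scale=0.5, size=6)   # the unknown mechanical deviations

    def render_and_capture(params):
        """Toy stand-in for rendering the inverted monitor through the candidate
        optical configuration and capturing a camera frame: the further params are
        from the true deviations, the further the frame falls from uniform white."""
        misalignment = np.linalg.norm(params - TRUE_DEVIATIONS)
        return np.clip(1.0 - 0.2 * misalignment, 0.0, 1.0) * np.ones((8, 8))

    def cost(params):
        frame = render_and_capture(params)
        return np.mean((1.0 - frame) ** 2)            # photometric "deviation from white"

    result = minimize(cost, x0=np.zeros(6), method="Nelder-Mead",
                      options={"xatol": 1e-6, "fatol": 1e-12, "maxiter": 5000})
    print("recovered deviations:", np.round(result.x, 3))
    print("true deviations:     ", np.round(TRUE_DEVIATIONS, 3))

In the real rig each cost evaluation is a single render-and-capture, which is why the one-shot photometric metric keeps the loop fast.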
 

 
This might be where the story ends – but there are two subtle ways that the optimizer can reach a wrong conclusion. The first kind of local minimum rarely arises in practice.[5] The more devious kind comes from the fact that there are multiple optical configurations that can yield the same geometric distortion when viewed from a single perspective. The equally devious solution is to film each eye’s optics from two cameras simultaneously. This lets us solve for a truly accurate optical system for each headset that can be raytraced from any perspective.

In static optical systems, it usually isn’t worth going through the trouble of determining per-headset optical models for distortion correction. However, near-eye displays are anything but static. Eye positions change for lots of reasons – different people’s interpupillary distances (IPDs), headset ergonomics, even the gradual shift of the headset on the head over a session. Any one of these factors alone can hamper the illusion of augmented reality.

Fortunately, by combining the raytracing model with eye tracking, we can compensate for these inconsistencies in real-time for free![6] We’ll cover the North Star headset’s eye tracking capabilities in a future blog post.

The post Bending Reality: North Star’s Calibration System appeared first on Leap Motion Blog.


Today we’re excited to share the latest major design update for the Leap Motion North Star headset. North Star Release 3 consolidates several months of research and insight into a new set of 3D files and drawings. Our goal with this release is to make Project North Star more inviting, less hacked together, and more reliable. The design includes more adjustments and mechanisms for a greater variety of head and facial geometries – lighter, more balanced, stiffer, and more inclusive.

With each design improvement and new prototype, we’ve been guided by the experiences of our test participants. One of our biggest challenges was the facial interface, providing stability without getting in the way of emoting.

Now, the headset only touches the user’s forehead, and the optics simply “float” in front of you. The breakthrough was allowing the headgear and optics to self-align between face and forehead independently. As a bonus, for the first time, it’s usable with glasses!

Release 3 has a lot packed into it. Here are a few more problems we tackled:

New forehead piece. While we enjoyed the flexibility of the welder’s headgear, it interfered with the optics bracket, preventing the optics from getting close enough. Because the forehead band sat so low, the welder’s headgear also required a top strap.

Our new headgear design sits higher and wider, taking on the role of the top strap while dispersing more weight. Choosing against a top strap was important to make it self-evident how the device is worn, making it more inviting and a more seamless experience. New users shouldn’t need help to put on the device.

Another problem with the previous designs was slide-away optics. The optics bracket would slide away from the face occasionally, especially if the user tried to look downward.

Now, in addition to the new forehead, brakes are mounted to each side of the headgear. The one-way brake mechanism allows the user to slide the headset towards their face, but not outwards without holding the brake release. The spring is strong enough to resist slipping – even when looking straight down – but can be easily defeated by simply pulling with medium force in case of emergency.

Weight, balance, and stiffness come as a package. Most of the North Star headset’s weight comes from the cables. Counterbalancing the weight of the optics by guiding the cables to the back is crucial for comfort, even if no weight is removed. Routing the cables evenly between the left and right sides ensures the headset isn’t imbalanced.

By thickening certain areas and interlocking all the components, we stiffened the design so the whole structure acts cohesively. Now there is much less flexure throughout. Earlier prototypes included aluminum rods to stiffen the structure, but clever geometry and better print settings offered similar performance (with a few grams of weight saved)! Finally, instead of thread-forming screws, brass inserts were added for a more reliable and repeatable connection.

Interchangeable focal distances. Fixed focal distances are one of the leading limiting factors in current VR technology. Our eyes naturally change focus to accommodate the real world, while current VR tech renders everything at the same fixed focus. We spent considerable time determining where North Star’s focal distance should be set, and found that it depends on the application. Today we’re releasing two pairs of display mounts – one at 25cm (the same as previous releases) and the other at arm’s length, 75cm. Naturally 75cm is much more comfortable for content that sits further away.

Finally, a little trick we developed for this headgear design: bending 3D prints. An ideal VR/AR headset is light yet strong, but 3D prints are anisotropic – strong in one direction, brittle in another. This means that printing large thin curves will likely result in breaks.

Instead, we printed most of the parts flat. While the plastic is still warm from the print bed, we drape the plastic over a mannequin head. A few seconds later, the plastic cools enough to retain the curved shape. The end result is very strong while using very little plastic.

While the bleeding edge of Project North Star development is in our San Francisco tech hub, the work of the open source community is a constant source of inspiration. With so many people independently 3D printing, adapting, and sharing our AR headset design, we can’t wait to see what you do next with Project North Star. You can download the latest designs from the Project North Star GitHub.

The post Project North Star: Mechanical Update 3 appeared first on Leap Motion Blog.


Earlier this week, we shared an experimental build of our LeapUVC API, which gives you a new level of access to the Leap Motion Controller cameras. Today we’re excited to share a second experimental build – multiple device support.

With this build, you can now run more than one Leap Motion Controller on the same Windows 64-bit computer. To get started, make sure you have sufficient CPU power and enough USB bandwidth to support both devices running at full speed.

The package includes an experimental installer and example code in Unity. The devices are not synchronized but are timestamped, and there’s example code to help you manually calibrate their relative offsets.
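
As a sketch of what that relative-offset calibration can look like (in Python rather than the bundled Unity example, and with synthetic data standing in for real tracking samples): collect matching 3D points reported by both devices, such as the same fingertip position over time, and solve for the rigid transform between the two device frames with the Kabsch algorithm.

    import numpy as np

    def rigid_transform(points_a, points_b):
        """Kabsch: find rotation R and translation t so that R @ a + t ~= b
        for corresponding Nx3 point sets from device A and device B."""
        ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)
        H = (points_a - ca).T @ (points_b - cb)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
        R = Vt.T @ D @ U.T
        t = cb - R @ ca
        return R, t

    # Toy data: the same fingertip positions as reported by two hypothetical devices
    # whose frames differ by a 30-degree yaw and a 20 cm offset.
    rng = np.random.default_rng(2)
    pts_a = rng.uniform(-0.2, 0.2, size=(50, 3))
    theta = np.radians(30.0)
    R_true = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                       [0.0, 1.0, 0.0],
                       [-np.sin(theta), 0.0, np.cos(theta)]])
    pts_b = pts_a @ R_true.T + np.array([0.20, 0.0, 0.0])

    R, t = rigid_transform(pts_a, pts_b)
    print("recovered offset between devices (m):", np.round(t, 3))

Applying the solved transform to one device’s output expresses both devices’ hands in a single shared coordinate frame.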

Multiple device support has been a longstanding feature request in our developer community, and we’re excited to share this experimental release with everyone. Multiple interactive spaces can be used for multiuser AR/VR, art installations, location-based entertainment, and more.

While there’s no out-of-the-box support for adjacent spaces (where a tracked hand retains the same ID when moving from one device to another) or overlapping spaces (where the same hand could be tracked from multiple angles), today’s build puts these possibilities into reach. To get started, download the experimental installer and multidevice Unity Modules, and create your project.

The post Experimental Release #2: Multiple Device Support appeared first on Leap Motion Blog.


In 2014 we released the Leap Motion Image API, to unlock the possibilities of using the Leap Motion Controller’s twin infrared cameras. Today we’re releasing an experimental expansion of our Image API called LeapUVC.

LeapUVC gives you access to the Leap Motion Controller through the industry standard UVC (Universal Video Class) interface. This gives you low level controls such as LED brightness, gamma, exposure, gain, resolution, and more.
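
As a generic illustration of what UVC-level control looks like from OpenCV (this is not the LeapUVC example code itself; property support and value ranges vary by platform, the device-specific controls such as LED brightness are documented in the LeapUVC repo, and the camera index below is an assumption):

    import cv2

    # Assumes the controller enumerates as a standard UVC camera at index 0.
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)   # switch to manual exposure on many UVC backends
    cap.set(cv2.CAP_PROP_EXPOSURE, -6)          # backend-specific units
    cap.set(cv2.CAP_PROP_GAIN, 4)
    cap.set(cv2.CAP_PROP_GAMMA, 100)

    ok, frame = cap.read()
    if ok:
        cv2.imwrite("leap_frame.png", frame)
    cap.release()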

All of this data access works no matter how many Leap Motion Controllers you have plugged into your PC.

Discover the network of veins under your skin, revealed in infrared.

Track a physical object like this ArUco-markered cube.

The LeapUVC release features examples in C, Python, and Matlab, as well as OpenCV bindings that show how to stream from multiple devices, track ArUco markers, change camera settings, grab lens distortion parameters, and compute stereo depth maps from the images. Use the Leap Motion Controller to track physical objects, capture high-speed infrared footage, or see the world in new ways.
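
And a minimal sketch of the stereo depth idea, assuming you have already saved a rectified left/right infrared pair as grayscale images; the focal length and baseline below are placeholder values, and the repo’s OpenCV examples show how to capture frames and read the real lens distortion parameters from the device.

    import cv2
    import numpy as np

    # Hypothetical rectified stereo pair from the two infrared cameras (8-bit grayscale).
    left = cv2.imread("left_ir.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right_ir.png", cv2.IMREAD_GRAYSCALE)

    # Block-matching disparity: larger disparity means closer to the cameras.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

    # With focal length (pixels) and baseline (metres), depth = f * B / disparity.
    f_px, baseline_m = 400.0, 0.040                                     # assumed values
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = f_px * baseline_m / disparity[valid]
    preview = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imwrite("depth_preview.png", preview)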

Control exposure time and capture images within 1/2000th of a second.

Play with different variables to accentuate different parts of the environment.

We hope this experimental release will open up entirely new use cases for the Leap Motion Controller in education, robotics, art, academic research, and more.

The post Introducing LeapUVC: A New API for Education, Robotics and More appeared first on Leap Motion Blog.


Earlier this summer, we open sourced the design for Project North Star, the world’s most advanced augmented reality R&D platform. Now, like the first chocolate waterfall outside of Willy Wonka’s factory, the first North Star-style headsets outside our lab have been born – in Japan.

Several creative developers and open hardware studios are propelling open source efforts, working together to create a simplified headset based on the North Star design. Developer group exiii shared their experience on their blog along with a build guide, which uses off-the-shelf components. Psychic VR Lab, the developers of VR creative platform STYLY, took charge of the software.

Masahiro Yamaguchi (CEO, Psychic VR Lab), God Scorpion (Media artist, Psychic VR Lab), Keiichi Matsuda (VP Design, Leap Motion), Oda Yuda (Designer), Akihiro Fujii (CTO, Psychic VR Lab). Not pictured: Yamato Kaneko (COO, Product Lead, exiii), Hiroshi Yamaura (CEO, exiii).

Together, Psychic VR Lab and exiii have been showcasing their work at developer events in Tokyo. Recently we caught up with them.

Alex Colgan (Leap Motion): What inspired you to build a North Star headset?

Yamaura: Our company originally started from 3D printing a bionic hand, and we open-sourced that project. So we’re generally very passionate about open-source projects. Our main focus currently is to create really touchable virtual reality experiences, but of course what we see in the future is augmented reality in the world, where virtual objects and physical objects coexist together. We want to make everything touchable, just like real objects.

God Scorpion: I have a mixed reality team that is developing some ideas combining fashion, retail, performance and art. We’ve been working with Vive and Hololens, and wanted to see what else would be possible with North Star.

Akihiro Fujii: The official North Star demos and Keiichi Matsuda’s HYPER-REALITY film were a huge inspiration about the future of AR. I’ve been using the Leap Motion hand tracker for six years and know its precision. I was excited when I saw the news about the AR headset with the hand tracker. We visited the exiii team, who had already started building North Star, and shared our excitement about the open source project.

Three weeks after the visit, we held the first North Star meetup with 50 XR enthusiasts in Tokyo with our very first North Star headset. I guess most of the participants were convinced the future is right before our eyes.

Alex: What changes do you think AR will have on people’s lives?

Kaneko: We really like the idea of mirrorworlds. That’s the world we are trying to achieve on our side of development as well. If that kind of environment is possible, that’s where we want to touch virtual reality as well.

Yamaura: One of the biggest advantages of being in Japan is working with car manufacturers, who are very eager to introduce new technology to their design and engineering process. They’ve invested a lot of money and effort in prototyping and making mockups in virtual reality, even before the Oculus/Vive era. The next step for them is to be able to touch the model they designed in virtual reality. So naturally it will be mixed reality; it’s more seamless between the virtual and the physical world.

God Scorpion: It is said that long-used tools acquire spirit, then become alive and self-aware in Japanese folklore. The concept, Tsukumogami, may be realized with AR in our everyday lives. The relationship between objects and users will be changed. Objects may afford us actions as objects have self-awareness.

We also may use functions in a very different way with AR devices. Ninjutsu can be used with hand seals, like in the Japanese manga Naruto. Functions would be implemented based on the actual coordinate space of reality or based on actions, so your ordinary behavior may trigger different functions in different layers. We will live in many overlapping layers even at a single moment. You could send 100 emails during a 5-meter walk from your desk to the resting sofa.

Alex: What was the most challenging part of putting the headset together?

Yamaura: The reflector took a lot of time. After CNC milling, we polished it by hand and added a half-mirror film for the window just to control the reflection and transmission.

Kaneko: Although it’s not close to the teaser video you guys released, we tried to emulate it. We really felt the potential of the device – immediately the reaction was “alright, this is the future.” That was our first reaction.

Fujii: Calibration was the difficult part and required a lot of patience with the current SDK. It took a whole two days before we were satisfied with the calibration. Handmade North Stars have individual differences, and our setup differs from the official North Star in areas such as LCD resolution, so customized settings were needed. I posted the steps for the calibration on our blog, so that others don’t need to have the same patience. Besides, it’s an open source project, so it’s our great pleasure to contribute to the North Star project. I hope the next version of the SDK gets improved calibration functionality.

Alex: What’s next for your teams?

Yamaguchi: We’re interested in applications which can be used outside of the room. Some experience which can be used for shopping and communicating with other people.

Kaneko: The natural next step for us is to include positional tracking so it can be used to see the world, and also interact with virtual objects in an AR environment. To me the wearable UI thing is something we want to try. It’s definitely the future of the user interface I think.

God Scorpion: We have a mixed reality lab that is researching and developing user interfaces, what is the best operating system, what is the best experience. The possibility of MR is equivalent to real rebuilding, re-recognition. The world makes an affordance to us and we will live in many layers. The augmented reality will change our perception of the world greatly.

If your team is looking to build the augmented future with North Star, get in touch! You can contact us here.

The post Japan Joins Project North Star appeared first on Leap Motion Blog.

By Keiichi Matsuda

Virtual reality. Augmented reality. Mixed, hyper, modulated, mediated, diminished reality. All of these flavours are really just entry points into a vast world of possibilities where we can navigate between our physical world and limitless virtual spaces. These technologies contain immense raw potential, and have advanced to the stage where they are pretty good, and pretty accessible. But in terms of what they enable, we’ve only seen a sliver of what’s possible.

The cultural expectations for the technology, popularised by Hollywood, present our future as cities filled with holographic signage and characters, escapist VR sex-pods, or specially equipped ‘holodeck’ rooms that are used for entertainment or simulation. We can have relationships with virtual beings, or might give ourselves over completely to virtuality and upload our souls to the network.

Our actual future will be much stranger, subtler and more complex. These technologies will have a more profound impact on the way we interact with our environments and each other. We will spend our days effortlessly slipping between realities, connecting with others who may be physically or virtually present, dialling up and down our level of immersion. Creation will be fast, collaborative and inexpensive. Where it used to take years of hard labour and valuable resources to build a cathedral, we will be able to build and share environments in moments, giving birth to impossible new forms of architecture. We will warp time and space, bending or breaking the rules of physical reality to suit our needs. All of this will be normal and obvious to us.

The current generation of VR is great for transporting you to far-off lands, trading up your physical environment for separate virtual worlds. Some let you exist in other times and places. Some put you in a blank canvas and encourage you to create. The latest VR devices to be announced have a host of new features: high resolution screens, untethered capabilities, inside-out tracking. Headsets are getting more comfortable, costs are coming down. These make VR more accessible and portable, which will undoubtedly help reach larger audiences. But VR isn’t about features, it’s about the kinds of experiences those features enable. We’re on a course toward a new set of possibilities that are tantalisingly close, and will open up huge new areas of VR for exploration – a whole new category of experience. It’s a space we’ve been exploring, that we call…

M I R R O R W O R L D S

Mirrorworlds are alternative dimensions of reality, layered over the physical world. Rather than completely removing you from your environment, they are parallel to reality, transforming your surroundings into refracted versions of themselves. Think Frodo when he puts on the One Ring, or the Upside Down in Stranger Things. These realities are linked to our own, and have strange new properties. Limited in some ways. Superpowered in others.

Physical obstacles like cabinets and chairs become mountains that can be moved to affect a weather system. Mirrorworlds transform aspects of the physical world into a new experience, rather than augmenting or replacing it.

Mirrorworlds immerse you without removing you from the space. You are still present, but on a different plane of reality. You will be able to see and engage with other people in your environment, walk around, sit down on a chair. But you can also shoot fireballs, summon complex 3D models, or tear down your walls to look out on a Martian sunrise. Mirrorworlds re-contextualise your space. They change its meaning and purpose, integrating with our daily lives while radically increasing the possibilities for a space.

Social Context

From command-lines to mobile interfaces, tech companies have made huge advances in making complex technology accessible to the masses. However, our relationship with technology is still largely based around an interaction between a human and a computer. When a person looks down at their smartphone, they are immediately disconnected from their social context – a phenomenon widely complained about by parents, friends and romantic partners around the world.

VR is perhaps the ultimate example of technological isolation, where our link with the physical environment is almost totally severed. This is fine if you’re alone in your bedroom, but can be a big limiting factor for its adoption in almost any other situation. People feel embarrassed, insecure or just unwilling to put themselves in such a profoundly vulnerable situation.

Mirrorworlds don’t break social convention in the same way that conventional VR (or even mobile) does. Rather than cutting you off from the world, they form a new connection to it. At a basic level, we may just be aware of other people’s presence by making out their shape. In time, devices will be able to recognise these shapes as people, and replace them with avatars. In both cases, your social context is preserved.  You will be able to stay engaged with the people and environment around you. In fact, we could say that Mirrorworlds move us away from human-computer interfaces, and towards human-environment interfaces, with technology as a mediating filter on our perception.

Truly Mobile

This awareness also allows VR to be not just portable, but truly mobile. Currently, both tethered and untethered devices still require you to stay in a clear, relatively small area. Even then people often move around tentatively, worried about stubbing their toes, walking into a wall, or stepping on a cat. In Mirrorworlds, we will be able to walk out of the door, down the stairs, get on the train, all in VR.

Truly mobile VR: two friends battle it out in a mirrorworld, immersed but not removed from their physical environment.

This requires a big shift in thinking about how virtual environments are designed. In today’s VR, developers and designers build 3D models of rooms, landscapes and dungeons, and drop us into them. We then have to find ways to move around them. This is fine with small environments, but to be able to move around larger spaces, we either have to climb into a virtual vehicle, or invent new ways like flying or teleporting. VR is intuitive and compelling because it matches our physical movement to the virtual world; asking users to learn an additional set of controls just to be able to move around could be confusing and alienating to many.

These kinds of environments give the developer a lot of control, but they can also feel isolated and self-contained. Mirrorworld environments are not predefined 3D models, they are procedural. They incorporate elements from your physical environment and transform them. Developers building Mirrorworlds will think in a different way. Turn the floor to water. Remove the ceiling. Change furniture into mountains. Make them snow capped if over 6ft tall. Apply these rules, and the whole world is reinvented.

As well as transforming our environments, Mirrorworlds can also transform the physical objects within it. We can pick up a pencil, and use it as a magic wand. We can turn our tables into touchscreens. We can access virtual control panels for our connected IoT devices. We will obviously want to use our hands, but we will also use our bodies, our voices. In some cases we might want specialist controllers.

Over time, more and more of the physical world becomes available to us. But unlike AR, the creators of Mirrorworlds can choose how much they bring into their experiences. Mirrorworlds aren’t additive, they’re transformative. They will be able to selectively draw from the physical world – to simplify, focus, or completely restructure reality. It will be up to developers and users to decide how deep they want to go.

A Design Framework for New Realities

The emergence of Mirrorworlds will give rise to new types of spatial and social relationships. We will need to figure out how to present spaces that can be shared by physically and virtually present people, invisible audiences, and virtual assistants. We will collide worlds, meshing together boardrooms that are separated by thousands of miles into a continuous space. We will need to establish a design language to understand who is visible to who, which objects are shared, and which are private.

A consultant uses overlaid scan data to advise a surgeon in a remote operating theatre. The virtual and physical scenes intersect, and physical tools and objects can be used in the virtual world.

We may also need to consider what an application is. Should we continue combining virtual tools, environments, and objects together into isolated worlds? Or should we allow users to bring tools and objects with them between worlds? Should we combine tools made by different developers?

We are being faced with what feels like limitless possibility, but over time rules and standards will emerge. A shared set of principles that start to feel intuitive, maybe even inevitable. Some of these rules might be migrated from mobile/desktop. Some might be drawn from the physical world. And some might be entirely new, native to the medium of immersive media. Conversely, these new limitations will allow more to happen. The structure of rules and conventions will act like a scaffold, allowing us to reach further in our colonisation of virtual space.

But we must be careful that our structure is built on the right principles. As with pioneering any new territory, the opportunities for exploitation are rife, and there are many interested parties with different priorities. Should we be locked into a single ecosystem? Do we have to sacrifice privacy for convenience? Can we turn consumers of these experiences into producers? How can we elevate people without compromising them?

AR and VR are often presented to the public as separate, even competing technologies. Ultimately though, devices will be able to span the entire continuum, from AR to VR and all of the rich shades of reality in between. In this future, we will be constantly moving between worlds, shifting between perspectives, changing the rules of reality to suit our purposes. We will be able to fluidly and intuitively navigate, build and modify our environments, creating spaces where physically present people and objects intersect seamlessly with their virtual counterparts. We will look back on the current era and try to remember what it was like being trapped in one place, in one body, obsessed with devices and squinting at our tiny screens.

This future is closer than you might think. It’s largely possible on today’s hardware, and now the limitations are less about technical constraints, and more in our ability to conceptualise, structure and prioritise the aspects of the world we want to build. That’s the brief we’ve been working on at Leap Motion Design Research. As we continue to build this framework, we’ll be exploring all facets of virtuality, from its materials to its grammar and spatial logic. We are working to carve out a robust, believable and honest vision of a world elevated by technology, with people (and their hands) at the centre.

The post Mirrorworlds appeared first on Leap Motion Blog.


This week we’re excited to share a new engine integration with the professional animation community – Leap Motion and iClone 7 Motion LIVE. A full body motion capture platform designed for performance animation, Motion LIVE aggregates motion data streams from industry-leading mocap devices, and drives 3D characters’ faces, hands, and bodies simultaneously. Its easy workflow opens up extraordinary possibilities for virtual production, performance capture, live television, and web broadcasting.

With their new Leap Motion integration, iClone now gives you access to the following features:

Add Realistic Hand Motions to Body Mocap

 Most professional motion capture systems can capture perfect body movement; however, hand animation is always a separate challenging task. Now adding delicate hand animation is affordably streamlined with the Leap Motion Controller.

Enhance Communication with Hand Gestures

People use lots of hand gestures when talking. Adding appropriate hand and finger movement can instantly upgrade your talking animation with enhanced motions to convey the performance.

Animate with Detailed Hand Performance

Grab a bottle, open the lid, and have a drink. Even such a simple movement already causes sleepless nights for animators. With the Leap Motion Controller, playing a musical instrument is just a few moments of performance and motion layer tweaks.

Animate from Forearms to Fingers

Motion LIVE supports three hand capture options, from forearm (elbow twist and bend), to wrist rotation, all the way to detailed finger movements.

Desktop and Head Mount Modes

 Desktop mode (sensor view upward) gives you setup convenience, while the Head Mount VR mode (sensor view same as your eye level) gives you the best view coverage and freedom of movement.

One-Hand Capture

Besides using two hands for performance capture, set one hand free for mouse operation. Choose data from one hand to drive two-handed animation, or use the left hand to capture the right hand animation.

Gesture Mirror

A quick way to switch left and right hand data. This function is useful especially when you wish the virtual character to mirror motion data from screen view.

Free Mocap-ready Templates

Install the trial or full version of Leap Motion Profile and gain access to two pre-aligned pose templates calibrated for forearm, hand, and finger motion capture.

For a limited time you can get a full iClone 7 package on our web store. (Note that the engine uses features which may not work properly with the V4 beta software; for now we recommend using the V3 software.)

The post Leap Motion + iClone 7 for Professional Animation appeared first on Leap Motion Blog.


This morning, we released an update to the North Star headset assembly. The project CAD files now fit the Leap Motion Controller and add support for alternate headgear and torsion spring hinges.

With these incremental additions, we want to broaden the ability to put together a North Star headset of your own. These are still works in progress as we grow more confident with what works and what doesn’t in augmented reality – both in terms of industrial design and core user experience.

Leap Motion Controller Support

If you’re reading this, odds are good that you already own a Leap Motion Controller. (If you don’t, it’s available today on our web store.) Featuring the speed and responsiveness of our V4 hand tracking, its 135° field of view extends beyond the North Star headset’s reflectors. The device can be easily taken in and out of the headset for rapid prototyping across a range of projects.

This alternate 3D printed bracket is a drop-in replacement for Project North Star. Since parts had to move to fit the Leap Motion Controller at the same origin point, we took the opportunity to cover the display driver board and thicken certain areas. Overall these updates make the assembly stiffer and more rugged to handle.

Alternate Headgear Option

When we first started developing North Star prototypes, we used 3M’s Speedglas Utility headgear. At the time, the optics would bounce around, causing the reflected image to shake wildly as we moved our heads. We minimized this by switching to the stiffer Miller headgear and continued other improvements for several months.

However, the 3M headgear was sorely missed, as it was simple to put on and less intimidating for demos. Since then we added cheek alignment features, which solved the image bounce. As a result we’ve brought back the earlier design as a supplement to the release headgear. The headgear and optics are interchangeable – only the hinges need to match the headgear. Hopefully this enables more choices in building North Star prototypes.

Torsion Spring Hinges

One of the best features of the old headgear was torsion hinges, which we’ve introduced with the latest release. Torsion hinges lighten the clamping load needed to keep the optics from pressing on users’ faces. (Think of a heavy VR HMD – the weight resting on the nose becomes uncomfortable quickly.)

Two torsion springs constantly apply twisting force on the aluminum legs, fighting the pull of gravity on the optics. The end result is that the user can suspend the optics just above the nose, and even flip the optics up completely with little effort. After dusting off the original hinge prototypes, we added rotation limits and other simple modifications (e.g. using the same screws as the rest of the assembly). Check out the build guide for details.

We can’t wait to share more of our progress in the upcoming weeks – gathering feedback, making improvements, and seeing what you’ve been doing with Leap Motion and AR. The community’s progress so far has been inspirational. Given the barriers to producing reflectors, we’re currently exploring automating the calibration process as well as a few DIY low-cost reflector options.

You can catch up on the updated parts on the Project North Star documentation page, and print the latest files from our GitHub project. Come back soon for the latest updates!

The post Project North Star: Mechanical Update 1 appeared first on Leap Motion Blog.
