
Yesterday, Elon Musk finally detailed in a public event the work that his secretive BCI startup Neuralink has done in its two years of stealth mode. What was announced there is astonishing, and I want to discuss with you what has been revealed, what I think about it, and the possible future implications of this work for AR and VR.

What does BCI mean?

“BCI” stands for Brain-Computer Interface: a hi-tech solution (hardware + software) that lets you read information directly from your brain or insert information directly into it.

So, imagine thinking about a movie you would really love to watch and having your smartphone automatically launch it on Netflix just because you thought about it. Imagine turning on the lights of your house just by thinking about it, when you come home at night with your hands full of groceries. All of this will be possible in the future thanks to Brain-Computer Interfaces, or Brain-Machine Interfaces (BMI) as some call them.

BCIs are a fascinating topic that I really love. If you are interested in them, you can read the long deep dive (related to AR and VR) that I wrote some years ago.

The problems of BCI

BCIs are still in the early stages of development, and there are lots of problems to solve.

The first one is that we don’t fully know how the brain works: we have understood only a small part of all its processes. And creating an interface with something you don’t understand isn’t easy.

To interface with the brain, you may use invasive or non-invasive methods. The most famous example of a non-invasive BCI in VR is Neurable, a Vive X startup that lets you use your brain to select objects in VR applications. This is possible thanks to a Vive headset with EEG sensors installed in the back. It feels like black magic and it is cool, but the problem with EEG is that it can only read the averaged signal of millions of neurons together, and only after that signal has been distorted by passing through the skull. So the applications of EEG are pretty limited and can’t lead us to the Matrix.

Neurable setup: as you can see there is a game that you play with a modified version of the Vive including an EEG (Image by Christie Hemm Klok for The New York Times)
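To build intuition for that limitation, here is a toy Python model (my own illustration, not Neurable’s pipeline): average the activity of thousands of simulated neurons, then low-pass the result to mimic the smearing caused by skull and tissue. All the fine per-neuron detail disappears.

```python
# Toy model of scalp EEG (illustrative only, not any product's real pipeline):
# the electrode sees the *average* of a huge neural population, and the skull
# acts roughly like a low-pass filter.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000                  # sampling rate, Hz
n_neurons = 5_000          # real EEG averages millions; 5k keeps memory low
rng = np.random.default_rng(0)

# Each row is one "neuron"; the electrode only sees the population mean.
population = rng.normal(0.0, 1.0, size=(n_neurons, fs))  # 1 second of data
electrode = population.mean(axis=0)

# Approximate skull/tissue smearing with a ~40 Hz low-pass filter.
b, a = butter(4, 40, btype="low", fs=fs)
scalp_eeg = filtfilt(b, a, electrode)

# Averaging crushes per-neuron detail by a factor of ~sqrt(n_neurons).
print(f"single-neuron std: 1.00, electrode std: {electrode.std():.4f}")
```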

Regarding the insertion of data into the brain with non-invasive methods, there have been some experiments with TMS (Transcranial Magnetic Stimulation), which has already let some people play a simple 2D game without actually seeing it with their eyes.

To overcome the limitations of non-invasive methods, there are invasive methods that actually put sensors inside your brain. This direct contact allows for easier reading and insertion of data into the brain (think of the cochlear implant, which injects audio data into the brain), but at the same time it creates other problems, like the fact that the patient must have their skull drilled, or the risk of infections. Furthermore, these sensors can only access the very specific part of the brain they are installed on, and so lose the general view of what happens across the brain, where multiple areas actually activate at every moment to perform every action.

Researchers are investing a lot in BCI, with both types of methods. Regarding non-invasive ones, that genius who answers to the name of Mary Lou Jepsen is investigating how to use infrared light to read brain data even from neurons deep inside the skull.

As for invasive methods, after yesterday’s talk, Neuralink appears to be one of the most innovative companies on the market.

When the technology is ready, we will have other, non-technical problems haunting BCIs: various social concerns that we will have to address. One of them is privacy. As my friend Amir Bozorgzadeh of Virtuleap told me: “In BCI, there’s no privacy. The computer is you”. Companies like Facebook may be able to harvest all the data from your brain. And regarding injecting data, governments could put propaganda directly into your brain. That’s why, when I asked prof. Greenberg about BCIs, he told me: “I’m not interested in it. I hope that this doesn’t happen. It will happen, and that’s the problem.”

As Mister President Alvin Wang Graylin loves to say, the problems that a technology can carry are directly proportional to its benefits. BCI can give us enormous benefits, but it can also lead to dystopian scenarios.

Neuralink’s vision

Before discussing what was announced yesterday, it is worth understanding the vision of Neuralink and of its CEO, Elon Musk.

Elon Musk is convinced that the long-term progress of AI (and robotics) is a risk for humanity. Far down the road, AIs will become more intelligent than humans and so could take possession of the whole planet, dominating humans or even eliminating them.

He thinks that a smart way to avoid this Terminator scenario is to stop seeing ourselves as an alternative to AI and instead blend with it. If we are able to make AI an additional part of the brain, AI can’t dominate us, because it will be part of us: an additional layer of our brain.

His vision is that we will become superhumans: when we have to think about how to play chess, we will automatically trigger the AI layer and see the best move, while still being able to use the other parts of the brain to love other people or enjoy watching cat videos on YouTube.

I’ve detailed this vision in the article on BCIs that I linked above, if you want to know more.

Neuralink announcements

In a very long livestream, Neuralink detailed the work it has done in its two years of history.

Neuralink Launch Event - YouTube

If you want to read everything that has been announced, I advise you to read the just-released Neuralink whitepaper (thanks Eloi Gerard for sending me the link!) or this cool article on VentureBeat.

In short, Neuralink has created an innovative invasive BCI technology. Instead of relying on rigid electrodes like the ones mostly used until now, it uses flexible threads full of sensors. These threads are safer to insert into the brain because, being flexible, they cope better with brain movements, causing less bleeding and fewer scars. This means less inflammation of the brain and better readings from it (because the brain doesn’t build scar tissue around the sensors). The threads are covered with electrodes that are able to detect the activity of nearby neurons.

The threads full of electrodes that Neuralink uses to read your brain data (Image by Neuralink)

To insert these threads into the brain, you of course need to drill through the skull and attach the threads to the brain so that they remain fixed in position. For this purpose, Neuralink has created a “sewing machine”: a robot with a very tiny needle that pushes the threads into the brain surface. It can insert the threads pretty fast and without causing bleeding, because it avoids the blood vessels of the brain. Its nominal speed is up to 6 threads per minute. The robot operates automatically, but under the supervision of a surgeon, who can adjust the process if he sees any little problem. Drilling the skull is still necessary, but in the future, Neuralink plans to use lasers. The idea is to have a process that is fast and painless… like LASIK surgery for the eyes.

Zoom on the part of the robot that installs the threads into the brain of the patient (Image by Neuralink)
Zoom on the needle of the sewing machine robot. A coin is used for scale (Image by Neuralink)
Process of inserting the threads into your brain (Image by Neuralink)

These flexible threads have a diameter of 4 to 6 micrometers and a length of around 2 cm, and each one contains 32 electrodes. The threads are connected to a little chip, called the N1, that contains a small ASIC which reads the data from the threads, amplifies it, filters it, and then transmits it to an external processing unit.

Neuralink’s current prototype chip for experimentation on rats. It is composed of various parts, as you can see in the caption. The parts tagged with B are the actual threads we discussed before. D is the USB-C connector, which you can use for scale (Image by Neuralink)
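To make “reads, amplifies, filters, and transmits” a bit more concrete, here is a deliberately simplified Python sketch of a classic threshold spike detector, the textbook kind of processing a chip like this performs: band-pass the raw trace to the spike band, estimate the noise floor robustly, and flag threshold crossings. This is a generic illustration, not Neuralink’s actual on-chip algorithm.

```python
# Textbook spike detection on one electrode channel (illustrative sketch,
# NOT Neuralink's firmware): band-pass filter, robust noise estimate,
# negative threshold crossing.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 20_000                          # extracellular data is sampled at tens of kHz
rng = np.random.default_rng(1)
raw = rng.normal(0, 10e-6, fs)       # 1 s of ~10 uV RMS noise
raw[5000:5030] -= 80e-6              # inject one fake spike

# Keep only the spike band (roughly 300 Hz to 5 kHz).
b, a = butter(3, [300, 5000], btype="band", fs=fs)
filtered = filtfilt(b, a, raw)

# Quiroga's median rule gives a noise estimate robust to the spikes themselves.
sigma = np.median(np.abs(filtered)) / 0.6745
crossings = np.flatnonzero(filtered < -4.5 * sigma)
print(f"{crossings.size} suprathreshold samples, around index 5000")
```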

Currently, the transmission is wired (via USB-C), but the plan is to make everything completely wireless in the future, so you won’t have cables coming out of your head. Every N1 chip installed inside the skull can connect to up to 96 flexible threads (96 threads × 32 electrodes = 3,072 channels per chip) and so get a clear reading of a tiny, specific region of the brain.

To read different brain areas, multiple N1 chips, each with its attached threads, must be installed in the skull. Each one will have a diameter of 8 mm. Neuralink says that currently up to 10 can be installed, and that the first human patients will probably have 4: 3 in the motor areas and 1 in the somatosensory area of the brain.

Neuralink technology operates at a very small scale (Image by Neuralink)

The chips also already have the logic to write data into the brain, even if in current tests Neuralink is only experimenting with reading data.

These N1 chips will connect (at the beginning through wires inside the head, and later wirelessly) to a pod installed behind the ear. This pod contains a battery to power the whole system and a wireless module to communicate with external software. This communication can be used to configure the system through a smartphone app or to let doctors analyze the brain data.

The vision of Neuralink..

Virtuleap is for sure one of the most promising startups I have talked with in 2019. I had the pleasure of speaking with CEO Amir Bozorgzadeh some weeks ago (sorry for the delay in publishing the article, Amir!) and I was astonished by his ambitious project: understanding the emotions of the user while he/she is just using a commercial headset. This means that you may just wear your plain Oculus Go, without any additional sensor, and the Virtuleap software may be able to “read your mind”.

I can read in your mind that you are intrigued, so keep reading…

How Virtuleap was born

Amir Bozorgzadeh, the CEO and cofounder of Virtuleap (Image by Amir Bozorgzadeh)

Amir Bozorgzadeh told me that he comes from a background in market research and mobile gaming. Thanks to an accelerator program he attended in 2015, he came into contact with virtual reality and was fascinated by it. At the same time, he started writing technical articles as a freelancer for big outlets like VentureBeat and TechCrunch (which may be why his name sounds familiar).

Given his background, he started thinking about analytics and market analysis in virtual reality. He was quite bored by the solutions of the moment since, as he says, companies were just “copying-and-pasting Google Analytics into virtual reality”, while he thought virtual reality had much more potential for the analytics sector. Copying traditional tools to VR felt wrong, so he asked himself, “What is the new type of data that can make analytics and consumer insights interesting in AR and VR?”, and founded Virtuleap to answer this question.

Analysis of emotions in VR

AR and VR are great technologies for studying user behavior. Virtual reality can emulate reality, which is great for studying how users behave in realistic situations. If you add devices like eye tracking, skin conductance sensors, heart rate sensors, or even EEG/EMG to the VR experience, it is possible to detect important information about the psychological state of the user inside VR.

This is of fundamental importance for psychologists, for instance, to be able to detect and treat anxious states in patients. Measuring stress levels during VR experiences may help a psychologist check whether a patient’s arachnophobia is still present by showing him a virtual spider, for example. Or it can be used in training, to see if the trainee adapts well to stressful situations. Or in marketing, to see if the user is interested in the product he/she is being shown.

As you can see, there are many applications, but there is a problem: to get reliable results, the user needs a complicated setup featuring some or all of the devices cited above. And they are expensive and cumbersome… no consumer actually wants to wear all of them.

Virtuleap and the analysis of psychosomatic insights
Amir Bozorgzadeh talks about Virtuleap, the company that wants to read your mind in VR - YouTube
Amir details the vision of Virtuleap

Amir noticed that consumer VR headsets have almost none of the above sensors embedded: this means that creators have no access to biometric data. So he asked himself, “How can I offer biometric analytics data on the bottom layer of mainstream AR/VR devices? How can I offer biometric data on a cheap device like the Oculus Go?”.

Thanks to his passion for psychology, he started studying what he calls “psychosomatic insights”, that is, what can be inferred about the user’s state from body language alone. We have years of scientific studies on animals and human beings that relate body language to emotional states, so he and the other Virtuleap founders started working on porting all this knowledge from the research environment into a viable business product.

The Oculus Go is really a simple 3-DOF VR device, but it can actually give us 16 channels of body capture. To be exact, this is the data it can provide:

  • Headset Data
    • Angular Acceleration
    • Accelerometry
    • Angular Velocity
    • Rotation
    • Camera Rotation
    • Node Pose Position
    • Node Pose Velocity
    • Node Pose Orientation
    • Line of Sight
  • Hand Controller Data
    • Rotation
    • Local Rotation
    • Local Position
    • Local Velocity
    • Local Acceleration
    • Local Angular Acceleration
    • Local Angular Velocity
    • Camera Controller Rotation

All this data, obtained from the sensors installed on the Go and its controller, may be used to track the user’s nervous system. By tracking all of the user’s movements, including subliminal ones like micro-gestures and micro-motions that he/she is not even aware of, it could be possible to infer something about his/her emotional state.

Using neuroscience research, it may thus be possible to just let a user play with a plain Oculus Go and infer whether he’s angry, bored, stressed, or interested during all the stages of the game. This is massive. By collecting all this data over time while you perform specific VR tasks, it is also possible to profile your brain, and so understand how you handle stress, how attentive you are, how good you are at problem-solving, and so on.
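To make this less abstract, here is a small Python sketch of the kind of “micro-motion” descriptors one could compute from the headset’s angular velocity stream: overall motion energy, jerk (how abrupt the movement is), and power in the physiological tremor band. This is my speculation about the general approach, not Virtuleap’s actual code.

```python
# Speculative micro-motion features from a 3-DOF headset stream
# (my illustration of the idea, not Virtuleap's implementation).
import numpy as np
from scipy.signal import welch

FS = 60  # the Oculus Go updates tracking at roughly its 60-72 Hz display rate

def micro_motion_features(angular_velocity: np.ndarray) -> dict:
    """angular_velocity: array of shape (n_samples, 3), in rad/s."""
    speed = np.linalg.norm(angular_velocity, axis=1)
    # Jerk of the speed signal: how abrupt the head movements are.
    jerk = np.gradient(np.gradient(speed, 1 / FS), 1 / FS)
    # Spectral power in the 4-12 Hz physiological tremor band.
    freqs, psd = welch(speed, fs=FS, nperseg=min(256, speed.size))
    band = (freqs >= 4) & (freqs <= 12)
    return {
        "motion_energy": float(np.mean(speed ** 2)),
        "mean_abs_jerk": float(np.mean(np.abs(jerk))),
        "tremor_power": float(np.trapz(psd[band], freqs[band])),
    }

# Usage with 12 seconds of fake data:
rng = np.random.default_rng(2)
print(micro_motion_features(rng.normal(0, 0.05, size=(720, 3))))
```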

Virtuleap’s idea is to offer a whole framework for emotion detection and analysis in AR and VR. In the long run, the framework will adapt to the sensors the XR system is equipped with: if the user has a plain Oculus Go, the system will just analyze the micro-gestures described above; if the VR headset has eye tracking, the system will use micro-gestures + eye data; if there is a brainwave-reading device, it will be used as well, and so on. The more sensors are used, the more reliable the detection will be. But the system should work with all the most common commercial XR headsets, so that the analysis can run with consumers and not only in selected enterprise environments.

The various characteristics that Virtuleap aims to track in your brain (Image by Virtuleap)

The company also plans to adapt the algorithms to the kind of application the user is running: detecting stress levels in a horror game is different from detecting them in a creative application, for instance, because different brain areas are involved, or the same areas are involved but with different activity levels. So, the company will tailor the detection to various scenarios in order to offer better emotion analysis in the experiences.

Current status of the project

Virtuleap’s plan is incredibly ambitious, and I think that if Amir actually manages to realize it, his startup has the potential to become incredibly successful and profitable.

But the road ahead is still long and complicated. The theory that it is possible to go from micro-movement data to reliable detection of all the user’s emotions has yet to be proven, also because current commercial devices like the Oculus Go supply noisy data for the analysis. The company has figured out how to clean the data it has to analyze and which mathematical models are most promising, but it has only just started moving out of research mode and into production mode. So, it is just at the beginning and nothing has been properly validated yet.

What it can do now is detect “arousal” states in users. And no, we are not talking about THAT KIND of “arousal” (you pervert!), but about a triggered state of the brain, a spike in emotional activity of any kind. So, Virtuleap is currently able to detect when you’re getting emotional for some reason while you’re using VR. But it is not yet able to detect why you’re getting this spike: maybe you’re scared, overly happy, stressed, or aroused in-that-sense. I have to say that in my opinion this is already exciting (my arousal level rose when I heard it!), but of course, it is not enough.
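At its simplest, such arousal detection could look like the rolling z-score detector sketched below: flag the moments where a composite motion signal jumps far above its recent baseline. This is just a toy baseline of my own; Virtuleap’s real system relies on trained machine learning models.

```python
# Toy "arousal spike" detector (illustrative; not Virtuleap's model):
# flag samples that exceed the rolling baseline by z_thresh sigmas.
import numpy as np

def arousal_spikes(signal: np.ndarray, window: int = 120,
                   z_thresh: float = 4.0) -> list:
    spikes = []
    for i in range(window, signal.size):
        baseline = signal[i - window:i]
        mu, sigma = baseline.mean(), baseline.std() + 1e-9
        if (signal[i] - mu) / sigma > z_thresh:
            spikes.append(i)
    return spikes

rng = np.random.default_rng(3)
motion_energy = rng.normal(1.0, 0.1, 600)   # calm user, stable signal
motion_energy[400] += 2.0                   # sudden emotional reaction
print(arousal_spikes(motion_energy))        # flags the injected spike at 400
```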

That’s why the company is currently looking for partners in different sectors, to gather more data that will make the machine learning system more reliable and able to distinguish the different emotions. These partners are chosen so that “arousal detection” is already useful for them, because the experience already has a defined context. For instance, if the user is playing a horror game and a spike in arousal is detected, it is almost certain that it happened because the user is very scared… it is not necessary to identify the emotion, because it is obvious. Currently, Virtuleap is partnering with companies in sports streaming, e-commerce, security training, and many other fields.

Neuroscientists know how to correlate behaviors with the parts of the brain that get activated. So, by mixing the knowledge of what the user is supposed to be doing with the data detected from the headset, it is easier to understand what is happening in the user’s mind.

The plan is to gather data, train the machine learning system, and then validate the results against those from other devices like eye tracking and EEG.

The startup has an internal test app called The Attention Lab. You can enter this virtual lab environment and play little games inside it. One game currently in it, for example, shows you a pirate ship attacking you, and you have to fight back by launching bombs at it with precise timing. This game aims to analyze how you cope with stress: how easily you get stressed, how you behave in stressful situations, etc.

The Attention Lab Teaser #2 - YouTube

After you have played the games, you can return to the virtual laboratory and see the data that has been collected about your brain. You can look at your brain profile and better understand your personality. You can also share it with other people and challenge your friends to see who is better at stress management, for instance.

At the moment, the app is for internal use, but the company is planning to make it public in the future, also to gather data from more users and train its ML models better. I hope I will be able to try it soon, because I’m curious to experiment with this black magic of reading my emotions without dedicated sensors!

In fall 2019, Virtuleap plans to make its API, reporting, and dashboard systems available in beta, so interested companies will already be able to experiment with emotional analysis in VR. The prices won’t be high, says Amir, and are mostly needed to cover cloud costs.

But when I asked Amir how much time is needed for the framework to become complete, including all the possible sensors and giving fully reliable data, he told me that 3 years is a probable timeframe. So, the road ahead of this startup is still very long.

Emotional analysis and privacy
Amir Bozorgzadeh talks about privacy and brain computer interfaces - YouTube
Amir talks about privacy and brain computer interfaces

When talking about the collection of emotional data, there is the obvious concern of privacy. Amir told me that his purpose is to help people with psychological or neurological problems, but he agrees that the technology may be misused by marketers and Zuckerbergs of all kinds.

“With BCI, privacy doesn’t exist. The computer is you.” I think this quote of his sums up how BCIs have great potential but also create great risks for our lives.

In his opinion, to mitigate this issue, first of all, every piece of gathered data should be anonymized by default. Then, every user should be informed with a clear prompt (not long documents in legalese) about what data will be gathered and from which sensors, letting him/her choose what can be harvested from his/her brain. This should help, even if the industry is moving forward very fast and privacy protection is lagging a bit behind.

The Attention Lab Teaser - YouTube
A final word for VR entrepreneurs

I could really feel Amir’s passion for AR and VR. I asked him for a piece of advice for all the people wanting to enter the industry. He told me that he doesn’t like cowboys: people who entered the field just to make money during the hype.

He also doesn’t like people who are only partially committed to the tech. “You can’t keep one hand attached to a safe branch and the other on AR/VR.” He advises that if people want to enter this field, they have to burn the bridges behind them and fully commit to immersive realities. This is what the industry needs, and this is the only way to earn his respect.

He added that in his opinion all this focus on VR and AR for gaming is wrong: companies should invest more money in enterprise products, useful solutions, standards, and WebVR technology… that is, in making the technology move forward. Gaming is just a small part of VR, he said.

I really want to thank Amir for the time he spent with me (he’s really a great guy) and I wish him good luck with Virtuleap. I think the idea is very ambitious and extremely interesting, but everything will depend on how well it gets validated by scientific verification in the coming months. If he manages to obtain that, boom, his technology can be disruptive.

If you want to collaborate with Virtuleap, get in touch with Amir, or feel free to write to me and ask for an introduction. And if your arousal level has gone above the required threshold, don’t forget to share this post with your peers, to make my arousal level go up as well!

(Header image by Virtuleap)

The post Virtuleap wants to read your mind while you play in VR appeared first on The Ghost Howls.


I hope you’re having a wonderful Sunday. Here is my recap of the most interesting XR news of the week!

Top news of the week (Image by Martin Hajek, from iDropNews)

Apple has killed its AR glasses project… maybe

A piece of unexpected news shook the XR communities this week: according to a report on Digitimes Taiwan, Apple has abandoned its AR/VR glasses project, and all the members of the team have been reallocated to other teams. This would have happened in May 2019, in the same period when Apple’s senior manager of prototype development, Avi Bar-Zeev, left the company. The reason for such a decision would be that it was impossible to create glasses thin enough to contain all the desired technology (e.g. 5G).
 
If the report were true, it would mean that Apple doesn’t believe in AR anymore, and that would be terrible news for the whole immersive realities ecosystem.
 
But before anyone starts saying that “AR is dead”, we should sit down and think about it a bit. Apple is clearly committed to augmented reality, and the recent release of ARKit 3 proves that. Tim Cook keeps saying that AR is the future, and this is further proof of the company’s commitment. All the other major companies (Samsung, Huawei, etc.) are working on AR, and Apple surely can’t stop experimenting with it if it doesn’t want to miss the next communication platform, as Microsoft did with Windows Phone. Robert Scoble confirms that there are still lots of people working on AR at Apple (and there are also many open job positions). The same Scoble reminds us that before the iPhone was launched, there was a similar rumor about the iPhone project being abandoned.
 
So, what happened? Rumors usually have some foundation in truth, so most likely one of these things happened:

  • Apple is working on various AR/VR glasses designs and one of them proved to be a failure, so only THAT team, working on that one version, has been dismantled (this is Robert Scoble’s theory);
  • Apple is switching from one design to another, like from pass-through to screen-through MR, and this required a reorganization of resources (Charlie Fink’s theory);
  • Apple is simply delayed: it is waiting for the technology to be ready to create a device that is thin and fashionable enough, a device that is ready for consumers;
  • Apple is just reorganizing all its internal teams after the departures of Bar-Zeev, Ive, and other key figures;
  • Apple is reorganizing its suppliers (especially the Chinese ones) to take into account the problems caused by the trade wars.

So, don’t panic. AR is still alive, and Apple will still release its AR glasses. Probably not this year, but next year or in 2021… it will happen.

More info (News on VentureBeat)
More info (News on Road To VR)
More info (Article by Robert Scoble with his point of view)
More info (Article by Fink & Mikenseer with their point of view)

Other relevant news

Oculus is working on 2nd gen VR

Mark Zuckerberg said that the Oculus Quest closed the first generation of VR, and so Oculus is now working on its second generation. Oculus has already pre-announced great things for Oculus Connect 6 (involving AR and VR), and that event will surely be the occasion to understand what this new VR that Oculus will give us is about.
 
This week, Jon Lax, a Facebook executive, tweeted that “Quest is the end of our first chapter of VR. What’s next is where things really get interesting”. We don’t know what he means, but he surely wanted to build hype for OC6. And he’s doing that pretty well, since all the communities are intrigued by this sentence. But on the other side, as Jeri Ellsworth of Tilt Five points out, this can lead to the Osborne effect: people will not buy the Rift S or the Quest because they know Oculus will release something better.
 
We don’t know what Oculus will present, but we have some hints. A new job listing points to the fact that Oculus is planning to integrate eye tracking into its next devices. And strangely, Upload VR published an old video this week of Abrash showcasing an experiment to create a VR environment with the same shape as the real environment (including the real props) the user is in. This may mean nothing, but since Upload is close to Oculus and that video is completely out of context… the conspiracy theorist in me thinks that Oculus will show something that blends the real and the virtual.
 
And about content… what if I told you that Oculus will probably take Assassin’s Creed and Splinter Cell to VR? :O

More info (Jon Lax’s quote)
More info (Osborne Effect)
More info (Eye tracking job listing)
More info (Oculus mixing real and virtual)
More info (Assassin’s Creed and Splinter Cell)

Minecraft Earth is coming in private beta

Minecraft Earth is, in my opinion, the game with the potential to disrupt the AR market in a way similar to what Pokémon Go did some years ago. And after seeing the new video announcing the beta, I am even more convinced of it.
 
This week Microsoft released a new video showing how the game will work. The game will encourage you to play every day so you can earn blocks with which to build your virtual stuff. Once you have enough blocks, you can create your buildings on a flat surface (as with many ARCore or ARKit games), alone or with friends. And when you’re ready, you can also see your creations at real-world scale wherever you want, and even enter them, as if they were your house! That’s amazing: geolocation, real/virtual blending, multiplayer, and a strong brand like Minecraft, all in one game!
 
Microsoft is launching a private beta for iOS users in two weeks. It is also coming to Android, but some time later. Interested users may apply on a dedicated website. I can’t wait to read the feedback from the first real users!

More info
More info

Some updates on VR content

This week we had various updates on VR content that I want to tell you about:

  • Defector, one of the most awaited exclusive games for the Oculus Rift, has been released. I’m sorry to say that reviews have been pretty mild;
  • We will learn the release date of Stormland at OC6, but it will probably be released during the 2019 holiday season;
  • CCP Games, the studio that abandoned VR, has actually updated its game Sparc. In an interview with Upload, they confirmed they are still convinced that VR is the future of gaming… it’s just that the market is not profitable at the moment;
  • Road To VR has published a long gameplay video of Lone Echo II;
  • Until You Fall has been judged one of the games with the best melee systems out there;
  • We have some new info on “A Kite’s Tale”, the short VR film that Disney will present at SIGGRAPH.

More info (Defector review)
More info (Stormland release window)
More info (CCP Games updates Sparc)
More info (CCP Games on VR)
More info (Lone Echo II gameplay)
More info (Until You Fall)
More info (A Kite’s Tale)

Elon Musk will detail Neuralink soon

After some years of silence, Elon Musk will give us an update on Neuralink on July 18th, 2019. Some people will be invited to the event, while everyone else will be able to watch the livestream video after the event itself.
 
Neuralink is the mysterious Brain-Computer Interface startup that Elon Musk set up to revolutionize the BCI sector, with the short-term goal of helping disabled people and the long-term goal of merging our brains with artificial intelligence. If you want to discover more about BCI and Neuralink’s goals, you can read this long post of mine on the topic.
 
I am incredibly intrigued. Can’t wait to discover more.

More info

News worth a mention (Image by Valve Corporation)

The debate on the Valve Index continues

As I highlighted last week, now that the Index is in the wild, people are discovering its little problems. One is that the device becomes warm during use, and in summer this is not a good thing. That’s why people are hacking it, adding fans to stay cool while using it! To be fair, it is not the only device being hacked with fans: some Redditors have hacked the Rift S as well, and for the Vive there is the Vive ’n Chill add-on.
 
But what sparks the biggest debates are the controllers, whose thumbsticks are not perfect and don’t click in all directions. People have also not appreciated that Valve hasn’t admitted the error, claiming instead that this is by design. Regarding the controllers in general, reviews are mixed, in the sense that some people are not sure they are worth all that money when, in the end, they turn out to be just an evolution of Oculus Touch. Everyone agrees that they are great controllers, but not everyone agrees that they are truly revolutionary.
 
In all of this, someone has created the design for a Leap Motion mount for the Index that exploits the frunk.

More info (Valve Index fans mod)
More info (Rift S fans mod)
More info (Rant on the thumbsticks of the Index)
More info (A review of Index controllers)
More info (Irony on Knuckles’s thumbsticks)
More info (Leap Motion mount for Index)

Companies are studying foveated rendering for AR glasses

NVIDIA is going to showcase at SIGGRAPH a technology it has developed to implement foveated rendering in augmented reality glasses. It reminds me a bit of Varjo, but for AR. Very interesting.

More info

How to use Vive Wireless Adapter with a laptop

The great VR innovator Steven Sato has explained to everyone in a blog post how to use the Vive Wireless Adapter with a laptop. It requires a bit of hacking and creativity, but it works!

More info

Steam is releasing Steam Labs

Every day lots of games get released on Steam, so it is hard for gamers to find an interesting new game in this sea of releases. That’s why Steam is going to experiment with various technologies to help players discover new content that is relevant to them. This set of experiments is called Steam Labs and will roll out soon.

More info

HTC has released some interesting statistics on the VR market

Every year, HTC conducts a big survey on the Chinese VR market and shares the results with the community. These stats contain very interesting data, like VR trends, the differences in VR tastes between males and females, and the differences between US and Chinese users. Yes, there is some biased data claiming that HTC is the best brand ever… but I think that is part of the game.

More info

MediView lets surgeons see the internal parts of patients in AR

This week, Robert Scoble highlighted that the most interesting medical XR startup he has seen in recent years is MediView, a company that lets surgeons see a patient’s organs, bones, and veins in augmented reality while performing surgery. After watching a video, I can confirm that this company looks impressive.

More info

Hands-on with HaptX gloves

My latest article was a long deep dive on the Dexmo gloves. Curiously, at the same time, TESTED published a hands-on with the HaptX gloves, which are Dexmo’s main competitors.
 
The video is very interesting, and if you pair it with my article, you can see the pros and cons of both solutions. HaptX conveys better haptics, while Dexmo offers better force feedback and a lighter form factor.
 
Both solutions are very cool, but they are also expensive and can’t offer haptic sensations identical to reality. This is still gen 1 of haptics, so that’s normal.

More info

5G is coming to Italy

As an Italian, let me say that I’m proud to discover that we’re getting 5G as well, and my city, Turin, is one of the first that will get it. By 2021, 120 cities (22% of the population) will be covered by 5G. But the telecom companies do not plan to offer a flat-rate plan for it.

More info

A good story for Reddit

A disabled guy (missing one hand) showed on Reddit how he plays with Rift S controllers, and in the comments people started organizing to design and 3D print a prosthetic add-on he can use to play in VR! This story restored my faith in humanity…

More info

Some XR fun

A good use of Valve Index’s frunk

Funny link

The generalist press has a confused idea of the Quest…

Funny link

Support Me on Patreon!

This newsletter has been possible thanks to the support of my fantastic Patrons (I love you all):

  • Ilias Kapouranis
  • Jennifer Granger
  • Jason Moore
  • Matias Nassi
  • Caroline

These weekly roundups require a lot of time and effort, since I read all the news during the week… so, if you’ve found this useful, support me on Patreon to keep this column alive!
 
 Happy VR

(Header image by Apple)

The post The Ghost Howls’s VR Week Peek (2019.07.15): Apple kills its AR glasses, Oculus full-steam ahead on 2nd gen VR and much more! appeared first on The Ghost Howls.


One of the most interesting encounters of my trip to Beijing was with Dexta Robotics, the company behind the Dexmo force-feedback gloves.

They were so kind as to set up a demo just for me in Beijing (the company is based in Shenzhen) to let me try their product and have a chat about it, and I really want to thank them for their kindness and the time they spent with me.

So, here is a very long deep dive into the Dexmo gloves and my experience with them.

Making a Chinese heart with my hands while wearing Dexta Robotics’ Dexmo gloves. Just look at the gloves and not at my face: I look like an idiot in this photo…

Dexta Robotics

If you have been into VR for a while, the name Dexta Robotics surely isn’t new to you. Some years ago, they announced a glove to let you feel objects in VR and attracted some interest from the VR community. At a certain point, information became confusing: some people received devkits, but then we had no more news about the company or the product. Honestly, I thought they had shut down.

That was until I was contacted by one of their PR people (Miss Zhang Zizi) about the launch of an enterprise edition of the gloves. Dexta CEO Aler Gu (Xiaochi Gu) explained to me that the period of silence was due to the fact that they decided to focus less on marketing and fully on improving the product in stealth mode. They were going from the prototype stage to the actual shipping-ready production stage. “The world had seen enough fancy demos people can’t have access to. A product cannot make a difference unless it actually ships,” he said.

Production stages of the Dexmo gloves: the idea was conceived in 2014, a working prototype was realized in 2016, the product was ready in late 2017, and shipping is ready now in 2019. The company claims it can now ship even tens of thousands of units if needed (Image by Dexta Robotics)

This required a lot of effort, reorganization, fundraising, and other things that made the status of the company appear confusing from the outside. In fact, they never stopped working on the project. They were working on it even more than before, just giving less priority to marketing.

Now Dexta has managed to make the jump from devkit to actual product, and it is ready to show it to the world.

The Manufacturing of the Dexmo Enterprise Edition | Dexta Robotics - YouTube
Dexmo

The Dexmo force-feedback glove is an exoskeleton that you wear to have true finger presence in any virtual environment. It can be used with either a standard screen or an AR/VR headset, but I think virtual reality is where this device really shines. It comes in three flavors:

  1. Low tier: 11-DoF finger tracking and 7-point tactile feedback on the palm and fingertips;
  2. Medium tier: 11-DoF finger tracking and 5-point force feedback on all fingers;
  3. High tier: 11-DoF finger tracking with 7-point tactile + 5-point force feedback on all fingers.

Force feedback is the true specialty of this company, and that is why Dexta had me try versions 2 and 3 of the gloves.

Features of Dexmo gloves (Image by Dexta Robotics)

All three versions of Dexmo are intended for enterprise usage for now (e.g. for advanced training), and so the price is very enterprise-oriented too. Interested companies from all over the world may contact Dexta directly to discuss a sale.

I specified “for now” because people from Dexta told me that they have a plan to march firmly towards delivering Dexmo to general consumers once they gain enough economies of scale. I really hope this happens, because VR with haptic feedback is extremely cool!

Force-feedback

You may wonder what “force feedback” means. Basically, the glove is able to simulate the forces that objects apply to your hands in the real world. Here are some examples:

  • If you close your hand around a real object, your fingers can’t penetrate it: the object exerts a force against your fingers that prevents them from entering its surface;
  • If something falls on your fingers, it applies a force to them, and they move as a consequence;
  • If you push a button, the button applies a counter-force to your fingertips.

These are all examples of things that happen in real life and that currently don’t happen in VR. In current commercial VR setups, your hands pass through objects, and no object can apply a force to you… that’s why all action titles feel so fake. But with force-feedback gloves, all of this changes.

Force-feedback gloves are able to apply forces to your fingers to simulate forces happening in real life. For instance, if you hold a virtual object in your hand and try to close it, the force-feedback glove will prevent your fingers from entering the surface of the virtual object, exactly as happens in reality. This greatly increases the realism of the experience. This kind of glove can also simulate something actively touching your fingers: e.g. if someone pushes against your fingers in VR, you would feel that sensation applied to your real fingers in real life. That’s like magic.
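To make the idea concrete, here is a minimal sketch of a spring-like force-feedback law (all names and constants are mine for illustration; Dexta’s real controller is certainly more sophisticated): when the virtual fingertip penetrates an object’s surface, the servo is commanded with a torque proportional to the penetration depth, clamped to the motor’s limit.

```python
# Minimal spring-model force-feedback law (illustrative sketch, not
# Dexta's implementation): resist finger closure in proportion to how
# deep the virtual fingertip has penetrated the virtual surface.
def finger_torque(finger_pos: float, surface_pos: float,
                  stiffness: float = 40.0, max_torque: float = 0.5) -> float:
    """Positions are measured along the finger's closing axis (meters),
    with larger values meaning "deeper into the object". Returns N*m."""
    penetration = finger_pos - surface_pos
    if penetration <= 0.0:
        return 0.0                                   # finger outside: free motion
    return min(stiffness * penetration, max_torque)  # clamp to servo limit

# A soft and a stiff material are just different stiffness constants, which
# matches how Dexta's engineer changed the rubber's stiffness on the fly:
print(finger_torque(0.012, 0.010, stiffness=40.0))   # soft rubber -> 0.08
print(finger_torque(0.012, 0.010, stiffness=400.0))  # rigid object -> 0.5 (clamped)
```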

This is one of the demos I tried myself: you hold a beating heart in your hand and can feel it applying rhythmic pressure to your fingers. In this GIF, the user is not moving his fingers voluntarily… it is the force-feedback system that moves them.

Dexmo force-feedback gloves

Aler told me that creating this force-feedback effect is very difficult. He explained that once you have found the right way of implementing force feedback, adding all the other features to the glove is an easier task. That’s why he and his team experimented a lot with various methods until they found the one that, in their opinion, was the best. And now that they have set up this main part of the device, they can focus on adding features over time, until they have the perfect glove for VR.

The technology they are using for force feedback is servo motors. This gives Dexmo various advantages over other methods, like string pulling. The most important of them is the compact form factor: Dexmo exoskeletons are small and light. Some of its competitors, like HaptX (formerly named AxonVR), are really bulky and heavy, while others require a haptic system that extends along the whole arm.

HaptX Gloves Development Kit - Launch Trailer - YouTube
Astonishing video of the HaptX gloves in action. As you can see, they are pretty bulky, much bigger than the ones by Dexta Robotics (also because of the spatial haptic system that they have)

The other big advantage is that with servo motors you can apply a variable force: some other methods act as an on/off mechanism, where you can either let the fingers move or stop them to simulate an external force, while with servo motors you can also apply just a resistance force. Your fingers will still be able to move, but the movements will be harder to perform. Think of what happens when your hands move through a fluid or squeeze an object made of rubber.

Squeezing a rubber chicken in Virtual Reality! - YouTube
Me squeezing a rubber chicken in VR! It was really funny, and it was great that Dexta’s engineer could change the stiffness of the rubber on the fly, so I could perceive it immediately in VR

The third advantage is that the gloves apply pressure only in the places where you expect it, like the fingertips, thanks to an exoskeleton design that isolates part of the glove from the back of the fingers. With other methods, this is not guaranteed: for instance, the continuous pulling of strings may cause downward pressure on the back of the hand and create confusing haptic sensations.

Dexmo finger tracking

Dexmo gloves track 11 degrees of freedom for each hand. To be exact, a single glove is able to detect:

  • The bending factor of all fingers (how much the fingers are closed);
  • The horizontal angle (splay) of all fingers (how much the fingers are spread);
  • 3 DoF for the thumb, to better approximate its movements.

You may recall that every hand actually has 27 degrees of freedom, so it is clear that this glove is not able to recreate the exact pose of all the fingers, only a smart approximation. Anyway, this is enough to simulate many of the actions and gestures needed in VR (e.g. grabbing objects, throwing them, pressing a button, etc.).
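For illustration, here is one hypothetical way to lay out those 11 values in code. The names and the exact split are my guesses, not Dexta SDK identifiers; one reading that sums to 11 is bend + splay for the four non-thumb fingers, plus the dedicated 3-DoF thumb joint.

```python
# Hypothetical data layout for the 11 tracked angles of one Dexmo glove
# (my naming, not Dexta's SDK): 4 x bend + 4 x splay + 3 thumb DoF = 11.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class DexmoHandPose:
    bend: Tuple[float, float, float, float]   # flexion, index..pinky
    splay: Tuple[float, float, float, float]  # horizontal spread, index..pinky
    thumb: Tuple[float, float, float]         # the dedicated 3-DoF thumb joint
```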

For finger tracking, Dexmo chose an original approach and went for mechanical tracking: there is a mechanical sensor that measures each tracked angle for each finger. Aler gave me a very long and interesting explanation of the difficulties of performing finger tracking in a glove and of the various methods they tried before arriving at this solution (which, in his opinion, is one of the best on the market). Maybe I will explain it in detail if we ever drink a bubble tea together in Shenzhen, but for now, let me just briefly explain the tracking alternatives on the market and their pros and cons:

  • Optical tracking solutions like Leap Motion are great because they can provide all 27 degrees of freedom of the hand, but the tracking is still not fully reliable, and in case of occlusion it just doesn’t work. You can’t use a method that sometimes doesn’t work in industrial environments;
  • Commercial solutions like Oculus Touch or Valve Knuckles are good and cheap, but only provide 3-5 DoF per hand (with flexion detection that usually doesn’t go all the way… it is mostly on/off detection) and no real force feedback;
  • IMU tracking systems may seem the best approach, since they can offer up to 10 DoF per hand, but IMU sensors tend to drift over time and do not perform well in a system full of vibrations, metallic elements, and electromagnetic fields like a force-feedback glove;
  • Flex-sensor tracking is another common approach, and while it offers good measurements, the continuous bending of the sensors reduces their lifespan, so they tend to break quickly;
  • Rotational sensors are very reliable, but this comes at a cost: they take up more space.
Of course, one of the first things I did was give the middle finger. I think “time to middle finger” is the “time to penis” of finger tracking systems

Dexta went for the last one, the mechanical tracking approach, which is more reliable and durable. Regarding the larger occupied space: since Dexmo has force-feedback units that take up space anyway, this is not a big deal (it would have been if Dexta had wanted to produce thin tracking gloves).

Aler proudly highlighted that while 11 DoF is not perfect, it is still better than most gloves and controllers on the market, which usually offer around 7 DoF.

He put much emphasis on thumb tracking. In Dexmo gloves, the thumb tracking alone takes 3 DoF, and this is needed to make grasping and all the other virtual hand interactions more realistic. He explained that they put extra emphasis on the thumb because their physics engine relies on an accurate thumb position to generate realistic forces, while most data gloves don’t have feedback, so an accurate representation is less relevant for them.

All the movements of the thumb are tracked thanks to a special joint

Dexmo haptic feedback

Currently, Dexmo is able to offer some haptic feedback thanks to tactile vibrations applied to the user’s fingertips and palm. These vibrations are made possible by LRAs (Linear Resonant Actuators) installed in the exoskeleton.

Their vibration propagates to the fingertips to simulate, for instance, the sensation of touching a very rough surface or a smooth one. Since LRAs allow a specific waveform to be generated with adjustable amplitude, the haptic system is very flexible and able to simulate different haptic sensations.
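As a sketch of what “a specific waveform with adjustable amplitude” can mean in practice (my illustration; the actual Dexmo drive electronics are not public): an LRA resonates at one fixed frequency, so different textures come from modulating the drive amplitude over time.

```python
# Synthesizing two LRA drive waveforms (illustrative sketch): the carrier
# sits at the actuator's resonant frequency; the "texture" is in the envelope.
import numpy as np

FS = 8000              # drive sample rate (assumed)
F_RESONANT = 175       # a typical LRA resonant frequency, in Hz
t = np.arange(0, 0.25, 1 / FS)

carrier = np.sin(2 * np.pi * F_RESONANT * t)
rough = carrier * (0.5 + 0.5 * np.sin(2 * np.pi * 20 * t))  # 20 Hz "bumps"
smooth = carrier * 0.2                                      # faint, constant buzz

print(rough.shape, smooth.shape)   # two drive signals ready for the actuator
```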

Dexmo positional tracking

The glove is not actually able to detect its own position in space, and that’s why in VR it has to be used with a Vive Tracker attached to it. The Vive Tracker must be positioned next to the thumb joint. This choice was not made by chance: in that position, the weight of the Tracker is less perceivable by the elbow joint, so it interferes less with arm movements and tires the user less.

What is incredible is that the glove can work with ANY tracking system you choose… Dexta even made a demo using the glove with Oculus Quest controllers! I couldn’t believe it until I saw the video: Quest with force feedback must be extremely cool.

Whatever tracking technology you choose, you have to make sure the system is well calibrated: in some demos, there was a little mismatch between the real and virtual positions of my hands, so the force feedback was not coherent with what I was seeing in VR. This gave me some weird sensations.

Connectivity

The gloves are completely wireless: they communicate with the PC via a dedicated, customized 2.4 GHz network, established through USB wireless modules plugged into the PC.

Theoretically, they could also work via Bluetooth, but Bluetooth communication would add extra latency to the haptics system.

Battery

The nominal battery duration for the gloves is 6 hours.

Price

Dexta actually revealed the price of the device to me, but I can’t disclose it to you. What I can tell you is that the price is enterprise-oriented, and most VR enthusiasts won’t be able to afford it for now. But companies interested in having it will be able to afford it and use it, for instance, in their training programs, actually saving money thanks to more effective training procedures.

What else can I say? Dexta’s main competitor is probably the HaptX haptic system, and Dexmo is far cheaper.

Hands-on

I tried two models of Dexmo: an older one applying only variable force feedback through direct-drive motors, and a new one that gave me haptic sensations on top of the force feedback.

Setup

There are three different wearing methods for Dexmo gloves:

  1. Swappable rubber finger caps. This is particularly comfortable if the glove is a personal one. Once you pick the right caps and swap them in for the first time (like you would with a pair of earbud tips), you can wear the gloves very easily afterwards, on your own;
  2. Glove solution. This is much easier to wear, and it comes in different sizes. When the glove ships in large quantities, people can just pick the one that fits. In this case too, you can wear the gloves on your own;
  3. Straps method. This is the most complicated yet most universal method. It is guaranteed to fit most people’s hands, so it is ideal for exhibitions and events. But it is also the most uncomfortable solution, and it usually requires external assistance to put on.

This is the last post about my experience at the Sandbox Immersive Festival! (If you missed the other ones, you can find them at these links: episode 1, episode 2, episode 3, episode 4, episode 5.)

Today I will talk to you about 4 more experiences: Mandala, which was one of my favorite installations; Andy’s World, which finally let me try redirected walking in VR; and The Tide and The Emergence, which were quite innovative. Are you interested? Then keep reading!

Mandala

Tibetan Mandala (Image by Kosi Gramatikoff)

Mandala was one of the best VR experiences I tried at the Sandbox Festival. It was also the last one I tried, and I entered it so late that Chinese policemen even came to the booth wanting to cut the power to close the venue! :O

Mandala is an experience mixing interactive theater with VR, developed by Sandman Studios, the company that actually organizes the Sandbox Festival. It reminded me a bit of the MetaMovie project that I described here on this blog, but instead of happening entirely in virtual reality in High Fidelity, it is organized as an installation in real reality. I’m going to tell you what happens inside, so be warned that this article contains HUGE SPOILERS: read it only if you’re sure you will never experience Mandala at an exhibition.

It is an experience for 3 people at a time. I entered the booth with two guys I had never met before, and a beautiful girl told us that it was a special experience, a journey into esotericism and things like that. After that, she invited us to leave all our belongings in some boxes, and then we were ready to go. She asked who was going to lead the group and, after some seconds in which we all looked into each other’s eyes hoping someone else would volunteer, I decided to be the leader. For this, I was awarded the color red and was the first to enter the experience. Just so you know: this “leadership” thing actually had no practical meaning in the experience… but it had a psychological effect. Because I had said that I was the leader, throughout the whole experience I felt I had to lead the group, take initiative, and protect my team.

When we entered the main stage, we just found a space with some curtains and some weird props. I was given a backpack PC for my shoulders, a VR headset for my face, and OptiTrack gloves for my hands. Once we were all ready for the adventure, the VR experience started.

The three of us found ourselves in something resembling an old temple. Each of us looked like a simple silhouette avatar, a ghost, of a specific color: red, blue, or green. Each of us could see the others, use our hands thanks to the tracked gloves, and communicate via voice (because we were very close to each other). We started out pretty confused about what we had to do: in this experience, no one gives you instructions. So we started discussing what we were supposed to find in this old temple, and we began exploring it with our eyes and our hands.

It was interesting how Mandala mixed the virtual and the real world. In the virtual world, we could clearly see a bowl full of sand at the center of the room, and when we got there, we actually found a real bowl full of sand. It was fun playing with the sand both in VR and in real life.

After some investigation, I understood that each of us had to push a symbol written in his own color (e.g. I was red, so I had to touch a red symbol next to the sand). After we all performed this operation, an animation played in the VR world and the visuals changed. Then we got stuck again. Until, suddenly, we heard a voice behind us.

Looking there, we saw a monkey-man. Was he real? Was he only virtual? Was he another participant? We had no idea. The monkey man started saying that he was very afraid of us because we were hungry ghosts (oh, the irony of telling the SkarredGhost that he’s a ghost!) wanting to eat him. At that point, we were confused about what to do. Until I thought, “hey, come on, this is my VR adventure, I am the leader… let’s have fun!”, and started howling like a ghost (the (skarred) ghost howls while experiencing something to review for The Ghost Howls! This moment was like when Peter Griffin hears the title of a movie said in the movie…)

Family Guy - say the title of a movie in the movie - YouTube

The monkey man started acting scared, so my buddies started pretending to be scary ghosts as well. The monkey man reacted perfectly to our behavior, acting more and more scared. Then he proposed that we spare his life for a moment to talk about something very important. I answered that I was not sure, because I really wanted to eat him (I am an evil ghost!)… but in the end we agreed, and so the adventure could go on.

What happened was cool: the monkey was actually one of Sandman Studios’ actors, and he was improvising in the classic style of interactive theater. He reacted to what we were doing (acting like scary ghosts) but then smartly put us back on the rails of the story. And since I had set a funny mood, he continued in that style, constantly being funny with us as well. For instance, at a certain point the monkey man introduced himself to us all and shook the hands of my two pals, but when it came to me, he refused, since I was the bad ghost who wanted to eat him. It was really funny.

After a while, he teleported us all to another world up in the clouds, and our virtual surroundings changed completely. Where the bowl full of sand had been, there was now a tree stump. It was great because when I went to touch it, there was no bowl and no sand anymore: someone, exploiting the fact that the VR headsets were occluding our vision, had swapped that prop for another one. At that point, there was a stump in the center of the room, both in the real and the virtual world. This was one of the things that made me love this experience: without you noticing, it keeps the immersion strong by keeping the relationship between the real and the virtual world coherent.

A bit later on, the monkey hid in the stump and an evil giant came to find him. He asked us to reveal where the monkey was, or he would kill us. Of course, I answered him "he went that way", pointing in a direction completely different from that of the stump. The evil big creature understood I was mocking him, so he threatened to kill me with a long virtual spear… which was actually also physical, because when he pointed the spear towards me, I could really feel it on my body. Even if I knew that everything was fake, this gave me some discomfort.

As a team, we discovered that we could defeat the creature by grabbing the eyes that were floating all around him. When he was defeated, the monkey came out again, thanking us for not having betrayed him. Then he sat down with us and we talked about some philosophical aspects of life. After that, we all hugged, virtually and physically, the monkey disappeared and the experience ended.

The Monkey God, Sun Wukong (Image by Shaolin.org)

I loved Mandala for these reasons:

  • It mixes two different media like virtual reality and interactive theater;
  • It is an experience that you live with other people, and with these people you develop a bond because you have lived a magical experience together. You can make new friends;
  • You are the one that sets the mood of the experience. We started behaving in a funny way, so the actor playing the monkey acted in a funny way as well, and we had a great time together. I laughed a lot while living Mandala;
  • Since it is improvised, every time you live it, it may be different from the previous times;
  • The experience feels more realistic because it doesn't run on fixed rails: yes, there is a set story that you live, but you can modify it a bit with your behavior, so it feels more real;
  • It mixes real and virtual props in a great way, and this increases the realism of the experience;
  • It has a very positive mood, with the monkey wanting you to think about the important things in life.

The only two downsides of the experience were:

  • It mixes various elements of Asian culture in an incoherent way;
  • The tracking technology was not working that well: the green ghost disappeared at a certain point of the story! The creators highlighted that the installation was still a prototype, so this kind of problem was to be expected.

I liked Mandala a lot: it was a very positive experience, and when I think about it, a smile always appears on my face.

Andy’s World

Andy's World is an explorative experience that you live with another person. There is a story that takes you to various places… and I would really like to tell you what it was about, but it was all in Chinese, so I actually have no idea! For this reason, I will just tell you about the technical stuff.

Andy's World is an experience that lets you navigate lots of different environments: spaceships, environments full of lava or ice, and other cool stuff of this kind. The graphics are good, but lack the kind of "coolness factor" that would make me define them as fantastic. They seem a bit generic; I don't know how to explain it better.

You live Andy's World wearing a Vive headset connected to a backpack PC that you carry on your shoulders. Thanks to Leap Motion tracking, you can actually see your hands in VR, and you can use them to activate some objects that make you go further in the adventure. There are no hand controllers, so the only way of navigating the environments is walking.

The experience is all about showing how it is possible to explore a complex environment while staying inside a 5m x 5m booth. So, you have the impression of moving forward a lot, going through curved corridors, moving up and down on moving platforms, entering spaceships, etc… while actually, you are always moving inside a small booth.

One of the mechanisms that makes this possible is redirected walking. Basically, the VR system slowly rotates your visuals over time, so that you think you are going straight while you are actually walking in a circle around an imaginary point. This way, you can live VR experiences that use big virtual environments while staying in a little physical space. The company implementing redirected walking in Andy's World is a startup from Shenzhen's Vive X program that I met when I attended the VEC.
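
To give a feeling of why this trick works, here is a minimal sketch of curvature-gain redirected walking. It is just an illustration of the general technique, not the implementation actually used in Andy's World: the radius value and all the names are my own assumptions.

```python
import math

# Minimal sketch of curvature-gain redirected walking (illustrative only).
# Every frame, we inject a tiny extra yaw rotation into the virtual camera
# while the user walks, so that a straight virtual path maps onto a circle
# in the physical space.

PHYSICAL_RADIUS_M = 7.5  # assumed radius of the physical circle; research
                         # suggests large radii are hard for users to notice

def redirect_yaw_offset(walk_speed_mps: float, dt: float) -> float:
    """Extra virtual yaw (radians) to inject this frame.

    Walking a distance d along a circle of radius r turns you by d / r
    radians, so the world is counter-rotated by the same amount."""
    distance = walk_speed_mps * dt
    return distance / PHYSICAL_RADIUS_M

# Example: walking at 1 m/s with a 90 fps headset...
per_frame = redirect_yaw_offset(1.0, 1.0 / 90.0)
print(math.degrees(per_frame))  # ~0.085 degrees per frame: imperceptible
```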

Walking in Virtual Reality Redirected Walking Unlimited Corridor Trailer - YouTube

I had always wanted to try redirected walking because it has always seemed like black magic to me: something that tricks your brain into believing that you're walking down an infinite corridor, while you're actually moving in a circle like an idiot, is surely fascinating. And some friends who tried it told me that it is an unbelievable experience.

Sadly, after trying it, I didn't come out very excited by it. Maybe it was because the volunteers following us during the experience made us walk too fast, maybe it was because the heavy graphics made the program run sloppily, maybe it was because the corridor we had to follow virtually was not straight but curved; in any case, my body realized very well that something strange was happening.

I can't explain well what I felt, but I'll do my best: I felt my body very unbalanced… as if I was walking with one leg not working very well, as if I was limping a bit, with my body posture not coherent with what I was doing. My balance was bad, and once I even had the sensation that I could fall to the floor because of it.

The reason is this: redirected walking may easily trick your visuals, but what about your body posture? If I walk straight, my body is in a perfectly symmetrical, balanced position, while if I walk in a circle, I put more weight on the leg that is closer to the center of the circle. If I see myself going straight but am actually walking in a circle, my body will have the asymmetric balance of circular walking, but the visuals of straight walking. This sensory mismatch makes my brain and my balancing system go crazy.

Surely there are tricks to reduce these bad sensations (like making users walk slowly, or walk along a circle with a big radius), but this proved to me that redirected walking is not a perfect solution for all VR experiences requiring a lot of space. Sometimes it can be used, other times not.

The Tide

The Tide is a storytelling experience about some big evil fish that start populating the seas and the coasts of Japan. These fish are as big as whales, they sink ships and they eat people, so they are not very kind.

One of the big evil fish that eat humans in The Tide

The experience follows the adventure of a young fisherman who is on a ship that gets sunk by these evil whales. He almost gets eaten. Mysteriously, he finds himself alive on the coast, and then he starts trying to understand how to continue his life after this unlucky episode. The experience ends with a man inviting the fisherman to follow him to save his life, and then a "to be continued" appears.

I admit that I found the story a bit weird and also very short, considering that it is just the beginning (it was probably only episode 1). What I found intriguing, instead, was the graphical style. The Tide was rendered in real time, but it was all drawn in a comic style. It felt like being inside a Japanese manga.

And this analogy held true even for how you lived the experience. You could see just a tiny bit of action, and then you had to press a button to make the experience go on, with a new point of view and another little shot of action… exactly as if you were reading comic frames one by one. That was cool.

The Emergence
Emergence VR - Sundance trailer - Vimeo

The Emergence was quite an original experience. It is very complicated to define, so I will copy-paste the authors' definition:

Emergence is an open-world environment, expressing the primal desire to maintain your individual identity whilst being part of a crowd.

As you immerse yourself in a crowd of thousands, shafts of light beckon you closer. As you touch the light, the environment – its atmosphere, its gravity and the choreography of the crowd – transform in powerful ways, continually challenging your perception.

Showing 5000+ intelligent human behaviours, Emergence offers a powerful, unique experience of a crowd, that is only possible with the latest graphics technology.

It was interesting seeing this scene in third person, with this big crowd of human beings behaving in weird ways, all following me (an enlightened character) or other humans. I could move with the thumbsticks of my controllers. I could see some light rays, and if I touched them, everything changed: all the colors around me changed, and all the people started behaving in different ways. With the last ray, we all even started flying! It was cool. A bit weird, but cool.

It is not something that I would do for hours, but the experience was quite short, so the duration was perfect to keep me interested in it the whole time.

And that's it! This was my last post about the Sandbox Immersive Festival! It has been a great event, and I really hope to attend it next year too! I especially want to thank Eloi, Eddie and Coco for making my participation possible this year… I feel really enriched by it.

I also hope that you loved taking this journey into VR storytelling with me… and if that's the case, subscribe to my newsletter so you don't miss my next posts!

The post SIF: Mandala mixes interactive theater with VR, my experience with redirected walking and much more! appeared first on The Ghost Howls.


It is something that I have been saying since the beginning of 2019: nReal is one of the most interesting AR startups out there. That's why, when I traveled to Beijing, I really did everything I could to visit its HQ.

Entrance of nReal offices in Beijing, China

There, I was able to discuss the status of the HMD with people from the company, and I also had a try-on session with the latest version of the nReal glasses. Here are my hands-on impressions of the device.

What is nReal?

If you still have no idea what nReal is, I'll give you a short recap. nReal is a Chinese company that is developing augmented reality glasses that are cheap, light and fashionable.

nReal's idea is to produce the first consumer AR headset that, when you wear it, really makes you look like you are wearing a pair of normal glasses. Since the device must be cheap and light, it can't contain any computational power or battery; that's why it must be attached to a computational unit that you keep in your pocket. In the current devkits, this computational unit is a mini-PC called the Oreo, while in the production phase, thanks to the Qualcomm Snapdragon reference design, it will just be your phone.

Me trying the nReal glasses for the review. As you can see, they really look like fashionable sunglasses, and they make everyone look cool

nReal has been sued by Epic Games because its name is too similar to Unreal, and by Magic Leap, which claims that nReal CEO Chi Xu stole the American company's technology while working there. No one from Magic Leap or nReal is authorized to talk about this topic, so I won't address it here.

Design

nReal glasses are designed to be as close as possible to regular sunglasses.

Their design is very light, since all the processing power is in a separate computational unit. Of course, they appear a bit bigger than regular sunglasses, since they have to contain all the electronics and the lenses of the optics. This is especially noticeable in the front part of the device, the one containing the displays, which is far thicker than the corresponding part of a regular pair of glasses. The lenses themselves are actually transparent only in the lower half: the top half is black and opaque, because it has to hide the circuitry of the cameras and the other sensors, plus part of the optics. And of course, there is the cable coming out of one of the temples, which reveals that this is an electronic device.

This means that at first glance they look like sunglasses, but if someone comes closer, they clearly realize that they are something different.

Upper view of the nReal AR glasses. As you can see, the front region is quite thick and not at all similar to that of a pair of regular glasses

nReal glasses look stylish, and this is what I love about them. They are all black, but the front part of the frames exhibits very bright colors, like blue or red, which makes them very fashionable. The temples are connected to the front part through hinges, exactly like normal glasses, and this means that you can actually fold them to carry them in your pocket (even if the cable may prevent you from actually doing this).

Comfort

nReal glasses are very light, but not as light as a pair of standard glasses, not even as light as the glasses associated with zSpace tablets. Anyway, they are a big step forward compared with HoloLens or the Magic Leap One.

Regarding comfort, the current devkits (which do not represent the final form of the device) left me with mixed feelings. I mean, the headset is light and quite comfortable, but there are some regions of my head where I actually felt discomfort. For instance, the design of the temple tips was absolutely wrong for the shape of my head and my ears, so my ears had to stay folded most of the time. Also, my big Western nose wasn't that happy wearing the glasses.

The cable that comes out of the device can actually be a nuisance at times… and the fact that it is attached to only one of the temples can create a perception of asymmetric weight on your head, or even of a wrong balance of the device. This is why Magic Leap attached two cables, one on each side of the device: to give the Magic Leap One a more balanced fit.

Bottom view of the glasses

nReal answered that it is aware of these issues and is working on them for the actual release of the device.

Visuals

The visuals of the device are pretty cool. nReal features two 1080p displays and a diagonal field of view of 52 degrees… far better than the original HoloLens, for instance. The visuals are not delivered through waveguides, but through images that are rendered on a screen and then reflected off the surface of the semi-transparent lenses, to give you the impression of augmented reality (an approach commonly called "birdbath" optics). This mechanism is far easier and cheaper than the waveguides employed by HoloLens or Magic Leap, but it gets the job done anyway.

This is the optical system that takes the images of the holograms rendered on a screen and reflects them onto the lenses, so that you can see them while looking through the lenses

When trying the nReal glasses, I was surprised by the vivid colors and by the consistency of the augmentations. I mean, while looking at holograms produced by HoloLens or Magic Leap, I have the impression that these elements are semi-transparent (especially with ML), but with nReal this is almost not the case. Virtual objects appear more vivid, more realistic, more solid. I think that the high resolution contributes to this sensation as well.

If you shake your head fast while looking at objects, they become blurred, though.

The field of view is a mixed bag. The horizontal field of view is great, much wider than the original HoloLens. I never had the impression that something was cut out of the horizontal FOV; it was really impressive. The vertical FOV, instead, was pretty small. At the top, it is limited by the black opaque region of the lens, while at the bottom, the images abruptly get cut off at a certain point. And while at the top your brain accepts that the augmentations stop because there is the frame of the glasses (similar to what happens with ML), at the bottom you just see the virtual elements stopping well before the lower limit of the frames, at a seemingly random point. So, the horizontal FOV is great, but the vertical one presents all the problems that we already know from AR glasses. Someone from nReal told me that they wanted to give the glasses the same wide horizontal form factor of current smartphone screens, but I don't know if that was a great idea.
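
Out of curiosity, here is a quick back-of-the-envelope split of that 52-degree diagonal into horizontal and vertical components. The 16:9 aspect ratio is my assumption (suggested by the smartphone-like form factor mentioned above), not an official nReal spec:

```python
import math

# Rough split of a diagonal FOV into horizontal and vertical components,
# using the flat-screen approximation d^2 = h^2 + v^2 and an assumed
# 16:9 aspect ratio (not an official nReal specification).

diag_deg = 52.0
aspect_w, aspect_h = 16.0, 9.0

k = diag_deg / math.hypot(aspect_w, aspect_h)
h_fov = k * aspect_w  # ~45 degrees horizontal
v_fov = k * aspect_h  # ~25 degrees vertical

print(f"horizontal = {h_fov:.0f} deg, vertical = {v_fov:.0f} deg")
```

If that assumption holds, it matches my impressions quite well: a wide, comfortable horizontal FOV paired with a noticeably cramped vertical one.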

A little cute kitten seen through nReal lenses. As you can see, its colors make it pretty believable

The lenses are a bit tinted, like in all other augmented reality devices. So, when you wear nReal, you lose a bit of the light of the real world. This dimming is not that extreme, though, and was very acceptable to me. Anyway, because of this and the vivid colors of the holograms, the virtual elements completely occlude the real ones. For instance, if you put your real hand where there is a virtual element, to touch it, you can't see your hand anymore, and this breaks the magic a bit.

Audio

Even though they are very small, nReal glasses feature integrated audio: there are little speakers embedded into the frame, close to the user's ears.

Little speaker embedded into the temple of nReal glasses

I'm not a great audio expert, but I did not come away impressed by the speakers: the audio never seemed clear enough, especially because the volume always seemed too low.

Tracking

nReal glasses feature 6DOF tracking, and this is impressive considering the low price of the headset. Most cheap AR headsets are just 3DOF, while this one offers you the possibility of moving through your space and seeing the virtual elements react accordingly.

The tracking is solid, in the sense that in my little tests the headset never lost tracking. I tried moving across the office, shaking my head fast and other things, and the tracking always performed well.

The big problem is jitter: virtual elements appear fixed in a position in space, but actually, they are not perfectly stable. Especially while you move, you see them oscillating a lot.

Having studied computer vision and having tried to develop a rudimentary AR engine myself, I can tell you that the stability of virtual elements is one of the most difficult things to achieve. Microsoft has surely spent a lot of money to make sure that HoloLens's holograms look absolutely fixed in space. nReal is just a startup with a short history, so the fact that it hasn't been able to obtain the same results was quite predictable.

nReal's tracking is not comparable to that of HoloLens or Magic Leap, because holograms do not appear stable: they noticeably move while you move your head. This is another problem that nReal is working hard on.
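
For the technically curious: a common mitigation for this kind of jitter is to low-pass filter the estimated pose before rendering. This is a generic trick, not necessarily what nReal is doing; the sketch below uses a simple exponential filter, and all the names and values are my own illustrations.

```python
# Simple exponential low-pass filter over the tracked position, a common
# way to trade jitter for a bit of latency (adaptive filters like the
# "1-euro filter" tune the smoothing with the movement speed instead).

class ExponentialPoseFilter:
    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha   # 0 < alpha <= 1: lower = smoother but laggier
        self.state = None    # last filtered position (x, y, z)

    def update(self, raw_position):
        """Feed the tracker's raw position every frame; returns the smoothed one."""
        if self.state is None:
            self.state = raw_position
        else:
            self.state = tuple(
                self.alpha * raw + (1 - self.alpha) * prev
                for raw, prev in zip(raw_position, self.state)
            )
        return self.state

# Usage with some noisy position estimates:
f = ExponentialPoseFilter(alpha=0.2)
for noisy in [(0.00, 1.50, 2.00), (0.01, 1.49, 2.02), (-0.01, 1.51, 1.99)]:
    print(f.update(noisy))
```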

Controller

nReal glasses work with a 3DOF circular controller. If the headset is connected to a phone, it is the phone that acts as the controller. You can use the phone to point at virtual objects, and then press the screen to interact with them. On the screen, you can currently tap and swipe to give different commands.

In some demos, I was able to point at virtual objects, press the screen of the phone and then move them in space by just pointing the phone where I wanted them to be.

The phone to which you connect the nReal glasses can also be used as their controller. The big red area is the one you can use to interact with the device by tapping and swiping, while the top area is used to launch apps on the device.

Having the phone as the controller is a smart idea, since it removes the need to carry yet another device with you just to interact.
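
As a side note on how this kind of 3DOF pointing usually works under the hood: the phone reports only its orientation, so a pointer ray is typically anchored at a fixed virtual hand position and rotated by the phone's IMU quaternion. The sketch below is a generic illustration with conventions and names I made up, not nReal's actual SDK.

```python
import numpy as np

# Generic 3DOF pointing: rotate a forward vector by the phone's orientation
# quaternion and anchor the resulting ray near the user's hand. All the
# conventions here (units, -Z forward, anchor offset) are assumptions.

HAND_ANCHOR = np.array([0.2, -0.3, 0.0])  # assumed offset from the head, meters

def quat_rotate(q, v):
    """Rotate vector v by the unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def pointer_ray(phone_quat):
    """Return the (origin, direction) of the pointing ray."""
    forward = np.array([0.0, 0.0, -1.0])  # -Z forward, a common convention
    return HAND_ANCHOR, quat_rotate(phone_quat, forward)

# Identity orientation: the ray points straight ahead from the hand anchor.
origin, direction = pointer_ray((1.0, 0.0, 0.0, 0.0))
print(origin, direction)
```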


I'm in Italy again, and these days it is damn hot and humid here. I'm hoping to attend a VR event at the North Pole, so that I can feel some cold again.
 
This week in XR has been quite cold instead, and there has not been much interesting news. Anyway, it has not been that useless either…

Top news of the week (Image by Valve)
The Valve Index is out in the wild

After the shipment of the first units, the Valve Index is finally out in the wild. So we now have some honest feedback from people who own it.
 
And not surprisingly, we are also reading about the first problems. The Index has been over-hyped and, as always happens in these cases, when people actually try the device, they discover that it is not perfect and has its drawbacks. One of the major complaints concerns the controllers, whose thumbsticks do not "click" when the stick is not in the upright position (and some support guy from Valve even defined this as a feature, lol). Other people with big hands are finding the whole Knuckles uncomfortable. Some others are finding little defects in the production quality. Others can't get most games to run on their PC because the 120/144Hz mode is too demanding. Some people are even already returning the Index!
 
Don't misunderstand me: the Index is still to be considered a great device, a high-quality device that is trying to innovate VR. Most people who have it love it. But as with all the other devices: the time is not ripe for a perfect VR headset, so if you want to buy it, expect some compromises in this one as well.
 
The good news is that Valve has released the CAD files of the headset as open source, together with all the instructions to create aftermarket accessories. Being a device for an elite niche, I don't believe much in a big accessories market, but I think that this may be very important for creating custom accessories in research centers and the like. I love this idea by Valve.
 
Youtuber Mike of VR Oasis has also published a video of the passthrough vision from the Index. It resembles our experience Beat Reality!

More info (Unsatisfied user review n.1)
More info (Unsatisfied user review n.2)
More info (Unsatisfied user review n.3)
More info (Unsatisfied user review n.4)
More info (Valve releasing Index CAD files)
More info (Valve Index CAD repository)
More info (Valve Index passthrough vision)

Other relevant news (Image by Upload VR)
HTC Vive Cosmos will work with Vive Wireless Adapter

This week, HTC has not made another big reveal about the Vive Cosmos, so we'll still have to wait to discover more about its price and features. But it has revealed one little detail that is nonetheless very interesting: the Vive Cosmos will work with the Vive Wireless Adapter, the same one that works with the Vive Pro.
 
This means that every Vive Pro user can buy a Vive Cosmos and enjoy wireless PC VR experiences without having to pay for another accessory. As Upload points out, this means that the Cosmos will be the only major new headset of 2019 launching with a wireless solution already on the market.
 
At the same time, HTC has announced the creation of Vive Enterprise Solutions, a new business unit dedicated to managing enterprise VR. It's more of a logistical matter internal to HTC, so I think most of us don't need to care about it :D.

More info (Vive Cosmos wireless adapter)
More info (Vive Enterprise Solutions)

WeChat now lets you create AR mini apps

You know that I'm very interested in the Chinese market… and in China, with WeChat you can do almost everything. Recently, Tencent (the company behind WeChat) launched Mini Apps, that is, the possibility for developers to create little applications that run inside WeChat. These can be games, applications, marketing experiences or whatever the developer wants.
 
Now, inside these applications, it is possible to add AR features. The first company to showcase these features has been the popular luxury fashion brand Emporio Armani (yay! Italian people!), which lets the user try on different Armani lipsticks in AR. If you have a recent phone, you can select the lipstick that you wish and see the image from your front camera updated automatically, so you can preview how you look with that lipstick on (of course, I looked fantabulous!).
 
You may wonder why you should care. Well, if you work for a company that wants to do marketing in China, WeChat is maybe the most important marketing channel that you can exploit. In the West, it has been proven that AR marketing is very powerful and increases conversions: L'Oreal claims 2x engagement on its website and 3x conversions since it implemented an AR try-on. If you can create AR marketing experiences on WeChat, this means that you can increase your conversions in the enormous Chinese market!

More info (AR on WeChat)
More info (AR conversions for L’Oreal)

Microsoft is shipping Azure Kinect

Microsoft is finally shipping its Azure Kinect devkits. Unluckily for us Europeans, it will currently ship only in the US and China, for $399.
 
Interested developers can already have a look at its SDK. As promised, Microsoft has released the documentation on its website. Previous Kinect developers will notice some similarities with Kinect 2 development, but also some differences.
 
Since my first experiments in mixing Kinects with VR headsets to offer full-body VR, I have seen huge potential for the Azure Kinect in the VR ecosystem.

More info (All you need to know on Azure Kinect)
More info (Tony Rogers using the new Azure Kinect)
More info (Azure Kinect’s SDK)

SIGGRAPH and Gamescom are coming

Notwithstanding the heat, the summer will include some interesting big tech events, and they will feature AR/VR for sure.
 
The SIGGRAPH showcase experiences have been unveiled, and there are many high-quality titles. Four of them are even première experiences by big, important studios: A Kite's Tale by Walt Disney Animation Studios (VR Theatre), Undersea by Magic Leap (Immersive Pavilion), Il Divino: Michelangelo's Sistine Ceiling in VR by Epic Games, and Mary and the Monster: Chapter One by Parallux and New York University's Future Reality Lab (Immersive Pavilion). If you are in Los Angeles at the end of July, SIGGRAPH seems the place to go.
 
If you are in Germany in August, instead, you could attend Gamescom. It is more of a game-oriented event, but you can find some interesting VR experiences and hardware there as well. I was there last year, and I found a lot of cool stuff to try. This year I will be there again, so it could be the right occasion to meet!
At Gamescom, Fast Travel Games is organizing a VR showcase, a bit similar to the one that UploadVR organized for E3. Some indie developers have partnered to make a bigger impact inside the event. Of course, I will support this effort as much as I can!

More info (AR and VR at SIGGRAPH)
More info (AR and VR at Gamescom)

News worth a mention (Image by Screengrab/Motherboard)
Researchers are studying hacking in VR

Two cybersecurity researchers have found some vulnerabilities in popular social VR experiences like SteamVR, VRChat and High Fidelity, and have exploited them to hack the experience perceived by the users.
 
The researchers have warned the creators of these experiences (which have since been patched), but this makes me wonder what the consequences of hacking will be in the VR age. Crackers could take complete control of the world a user is in, causing them psychological harm or injecting subliminal information into their mind.

More info

TestHMD helps you detect the features of your HMD

A journalist from Real O Virtual has created an application that helps you detect the basic features of a headset, like FOV, sweet spot, resolution, etc… I tested it, and I can tell you that it is cool, and it is extremely useful if you are a hardware reviewer or the like.
 
It costs $4, and in my opinion it is worth every penny.

More info

IDC has released another positive report about XR

This short report highlights that shipments of VR headsets have increased by 27.2% compared with the same period of the previous year. The top VR headset makers have been Sony, Facebook, HTC, Pico, and 3Glasses.

More info

Reality Rebels wants to turn your house into an arcade

The little indie studio Reality Rebels is creating a framework that lets you play LBVR games in your house using an Oculus Quest. The system is able to procedurally generate levels that follow the floor plan of your house, and then you can play them in VR using your Oculus Quest. Of course, the system supports multiplayer… and this means that you and your friends could play games in your house that are a bit similar to arcade ones!
 
The project is really cool, but I have only one question: what about the furniture?

More info

Researchers are studying if VR can hurt your brain

There are some people who think that VR may lead to dementia. This may seem a very stupid claim but, actually, some research on mice has highlighted how some parts of the brain (the hippocampus) go crazy while the user is in VR, because of the sensory mismatch between what is seen and what the rest of the body perceives.
 
The long-term effects of this are unknown: considering that our brain evolves over time, adapting to its conditions… could it adapt to the wrong behavior induced by VR, becoming permanently damaged? Currently there is no evidence of this but, of course, some research is needed.
 
The article is very interesting, and it especially makes us wonder why the mice in labs have better VR than we do!

More info

XR4All is a grant for EU XR projects

The XR4ALL program, which is part of the Horizon 2020 EU grants, will award €10,000 each to 50 European projects that try to push the boundaries of XR further. 25 of these projects will then be selected to receive additional funding of €40,000.
 
If you are interested, just follow the link!

More info

Blood & Truth developers explain some design choices of the game

In an interesting article on Road To VR, the developers of Blood & Truth, one of the most successful games for PSVR, detail some design choices of the game. If you are a game developer, I advise you to give it a read.
 
Ah, and if you are a game developer, you may also be interested to know that they are hiring.

More info (game design)
More info (hiring)

Oculus Summer Sale has arrived!

If you want to buy some Oculus games, this is the right time!

More info

NASA has a long history of using VR for training

Training astronauts for missions in outer space is crucial to make sure that they won't make errors while they are outside the spaceship performing operations. That's why NASA has been using VR to train them for many years, even before the advent of the Oculus Rift.
 
This long article details NASA's use of VR and how important it has been over all these years.

More info

Palmer Luckey got married!!!

My best wishes to him and his new wife. I really hope they will have a long and prosperous life together. In sickness and in health. In reality and virtuality. 

More info

Some XR fun

I asked the community to caption a terrible photo of me that my friend (and VR director) Gianluigi Perrone took in China.

The results have been fun… even Rony Abovitz took part in the game!

Funny link

Surfing is fun with the Oculus Quest!

Funny link

Playing Rec Room as an adult is not always fun…

Funny link

Support me on Patreon!

This newsletter has been possible thanks to the support of my Patrons:

  • Ilias Kapouranis
  • Jason Moore
  • Matias Nassi
  • Caroline

These weekly roundups require a lot of time and effort, since I read all the news during the week… so, if you find them useful, support me on Patreon to keep this column alive!
 
 Happy VR

The post The Ghost Howls’s VR Week Peek (2019.07.07): Valve Index shows thumbstick issues, Cosmos works with Wireless Adapter, WeChat introduces AR and much more! appeared first on The Ghost Howls.


In today's recap of my favorite experiences of the Sandbox Immersive Festival, I will tell you about Baobab Studios' latest creation Bonfire, the cute Ello Echo, the harsh Home After War and the trippy Ayahuasca! If you are into immersive storytelling, I'm sure you will love these experiences…

Bonfire

Bonfire is a funny story by Baobab Studios. Baobab Studios is well-known in the VR ecosystem, since it is the studio behind amazing VR experiences like "Invasion!", "Crow: The Legend" and many others. Every one of its creations has always been very interesting, and Bonfire is no exception to this rule.

Bonfire | Official Reveal Trailer [HD] | Starring Ali Wong | 2019 Tribeca World Premiere - YouTube

In Bonfire, you are a space scout visiting a planet to see if it is suitable for human life. We are in the future, and people will have to evacuate Earth, so they are looking for another planet in outer space to move to.

Your spaceship has some problems, so you crash on this planet. During the crash, your robo-assistant Debbie loses a wheel, but she's still functional. It's night, so you light a bonfire in the forest and the two of you settle in there, while an alien approaches you. It's all very dark, so at the beginning it may seem scary, but in the end you discover that it is a very cute creature that just wants to have fun and eat.

After some funny moments with Debbie and Pork Bun (this is the name you give the alien), you will have to decide (SPOILER ALERT!) whether you want humankind to occupy this planet, killing all the existing creatures (including Pork Bun), or to tell HQ that this planet is not suitable, so that mankind will have to look for another planet (if there is one). The decision you take changes the ending. Personally, I decided to save Pork Bun… he was too cute!!

Debbie and Pork Bun of Bonfire VR. They are two lovely characters (Image by Baobab Studios)

There have been mainly four reasons why I really loved Bonfire.

The first one is the high production quality. The graphical assets are great, the animations are great, the story is great, the voices are great (e.g. Debbie is voiced by the popular actress Ali Wong). Everything is great.

The second one is the sense of "community" it fostered in me. I was sitting on a chair in real life, and sitting next to the fire in VR. Around me, there were my robot Debbie and my new alien friend Pork Bun. Being there, eating with them, talking with them, playing with them around the virtual heat of the fire really felt like an intimate experience. Especially because I loved them both: Debbie was nice to me, and Pork Bun was extremely cute, like a puppy. For me, it was like being around a fire on the beach with some friends. This helped me a lot in creating a sense of commonality with the virtual characters; I really felt a bond with them.

Me, my two virtual friends and a cute bad alien that approached us

The third one has been the amazing interactions. The experience features some interactions that, of course, make you feel more a part of the story, since you become an agent, not only a viewer of what happens. I loved the interactions because they were well inserted into the story: they were coherent and didn't feel artificial.

They felt incredibly natural: at a certain point, the cute alien wanted to play fetch with me, so I started throwing him some cylinders similar to marshmallows. He ran after them and gave them back to me; I had fun playing with him, and this also strengthened my bond with him. But the best moment happened when I faked a throw (as I sometimes do with dogs), and the alien reacted exactly like a real-life dog, looking in the direction of the pretend throw and then acting confused. I was amazed by it. When, at a certain point, a big alien approached me and ate Debbie, my instinctive reaction was to defend her, so I did what I would have done in real life: I grabbed a big piece of wood from the bonfire and threw it at the alien. And it worked! All these interactions, so similar to real life, greatly increased my sense of presence.

The fourth and last reason why I loved this experience is the positive atmosphere. Bonfire is very relaxing, it is funny, it is lovely. Even if there are themes like mankind's extinction and defending yourself from evil aliens, everything is always treated with irony and fun. There's never tension, so it is good for both kids and adults to watch for some minutes of fun.

In my opinion, Bonfire is the best experience created by Baobab Studios so far. I sincerely advise you to watch it.

Ello Echo
Ello Echo: these are the lights that the cute character has lit to greet all the passing-by aliens

Ello Echo is one of the cutest VR experiences I have tried at Sandbox Festival.

It is the story of a cute creature that lives on a little asteroid traveling through outer space. Of course, he feels alone, so he tries to contact other creatures in all possible ways: with his voice, with some messages in bottles, and with some big lights saying "Ello" to everyone who could see them. He never succeeds in his efforts, so he feels very lonely and sad. His struggle to find new friends leads him to put even his life at risk, and then… well, I won't tell you how it ends.

The story reminded me a lot of Oculus Story Studio's Henry. The similarities are many, especially the common theme of the lonely, sad creature looking for friends. So, I won't tell you that the story is original, but it is enjoyable nonetheless.

The cute character inside his house, hinting that you should help him find a stick

Ello Echo is a real-time rendered experience, and it features cartoon graphics that are quite cute to watch. Everything seems to be made of play dough. The little creature is very lovely as well… and this is why you will have a cuteness overload while watching it.

There are also some interactions to perform: I don't think they are fundamental for the story (it would have been the same even with no interactions), but I appreciated the fact that they tried to do them in an original way. For instance, at a certain point, you have to wake the creature by screaming with your voice, and this is something quite original in a VR experience (even if a bit embarrassing to perform at an exhibition…)

Ello Echo has not been my favorite experience at SIF, but I surely loved the time I spent in it.

Home After War

If Ello Echo was all about cuteness, Home After War was all about sadness. It is an experience that tells you about the problem of the "booby traps" that have become common in war zones.

Home After War | Oculus VR for Good Creators Lab - YouTube

Home After War narrates the story of Ahmaied, an Iraqi man who escaped from his home when ISIS conquered Fallujah. When his city was freed, he returned home, to discover that his house and those of all the other people had been filled with "booby traps". These are rudimentary explosive traps, made with all possible materials, that IS has put in the houses of the people who fled the area.

This means that when Ahmaied returns to his house, he doesn't know what is safe to touch. These traps can be everywhere: in the kitchen, under the beds, in the kids' toys… and they can kill him or his family members at any moment of their daily life. Ahmaied can finally return to his house with his family, but his home is not a safe place anymore. And then, at a certain point, his sons go to another house to do a job, and they die because of one of these traps. Ahmaied dies inside because of this, and at the end of the experience, he tells you his hopes for a world without war.

From a technical standpoint, Home After War has some interesting features. The whole of Ahmaied's house has been reconstructed with photogrammetry, so you can really feel like you are there. You can move inside it… it is not just a set of 360 photos. Sometimes there are also 360 videos showing some typical moments of the life of an Iraqi family. Ahmaied has been recorded on video, and he stays there, in front of you, narrating these sad moments of his life. For technical reasons, he can only be a 2D silhouette in front of you, and this breaks the immersion a bit but, apart from that, you have the sensation of being hosted in his house, of really being there, with him telling you what living there is like. It is a bit like flying to Fallujah and seeing what it is like to live in a war zone.

For most of the experience, I wondered why it was in VR: this story could have simply been told with a 2D video and would have been just as interesting. Then, towards the end, I could see and hear the explosion of the booby trap that killed Ahmaied's sons. A scent emitter even made me smell the explosion. After a flash, all the visuals became darker, and the Iraqi man started to cry in front of me. In that moment, I understood why this experience was in VR. That moment wouldn't have been so powerful on a 2D screen. VR makes you feel what it is like to be a victim of one of these evil IEDs (Improvised Explosive Devices).

Making of Returning to Fear In Fallujah | Home After War - YouTube

Apart from the technical features, Home After War is astonishing for the effort its creators put into realizing it. They flew to Fallujah, with all the logistical difficulties of getting there. They listened to stories that no one was interested in telling, because usually no one goes there. They had to select one among these many sad and interesting stories. They had to record it, in a place still destroyed by the recent war. All just to tell us about this problem of rudimentary explosive traps in war zones. Really, kudos to them. It is an experience that deserves to be praised just for this.

Ayahuasca

Ayahuasca is a trippy VR experience. It has the same name as the drug ayahuasca, which is used by some Amazonian tribes as a traditional spiritual medicine.

AYAHUASCA VR WORLD TRAILER - YouTube

I had to experience Ayahuasca while sitting on the floor. At the beginning of the VR experience, I saw a shaman in front of me singing some ceremonial chants; then my vision started becoming distorted, and I started seeing a lot of hallucinations. I saw snakes, body parts, birds, fractals, skeletons and a lot of other weird stuff. It lasted around 20 minutes, and it was a trip inside the hallucinations that this drug causes in the people who try it. After that, I saw the shaman from the beginning again, and the adventure ended.

The visuals were well made and quite creepy. I also liked the transitions between the various hallucinations… so maybe at a certain point you see skeletons, and various animations concerning them, then these slowly turn into snakes, and so you start hallucinating about snakes. It is really like being inside a disturbing dream. A guy who had actually tried ayahuasca told me that the VR experience was very close to the effect that you have in real life.

Experience AYAHUASCA in VR without the Need to go in the Jungle - YouTube

The problem for me was that these virtual hallucinations were not associated with an altered state of mind, and the experience was not able to induce one via some kind of hypnotic technique. This means that I watched 20 minutes of weird stuff without being drunk, high, hypnotized or anything else, so I found no meaning in them. For this reason, I found Ayahuasca quite boring. Maybe the volunteers at SIF should have given us some bottles of beer before watching it, to improve the experience.

And that’s it! Have you tried these experiences? What is your opinion on them? Let me know here in the comments or on my social media channels!

…and don’t forget to subscribe to my newsletter!

(Header Image by Baobab Studios)

The post SIF: My reviews of Bonfire, Ello Echo, Home After War and Ayahuasca! appeared first on The Ghost Howls.


In Beijing, I visited my friends at 7Invensun again; they are one of the worldwide leaders in eye tracking technology. During my first visit, they showed me some cool prototypes: a prototype eye tracking add-on for HoloLens and a prototype Vive Focus with embedded eye tracking.

This time, they showed me a full roster of eye tracking solutions for both standalone and PC headsets.

Me, with Richard and Lee of 7Invensun. Great guys to talk with…
Pico G2 4K with eye tracking

The product that surprised me the most has been a Pico G2 4K with embedded eye tracking. I thought that this was another prototype made to showcase their expertise but, actually, Richard of 7Invensun told me that it is a product that the company is already selling in China.

It looks exactly like a standard Pico G2 4K, but there are two noticeable differences: the first one is a cable that comes out of the facemask and gets plugged into the USB port of the device; the second one is the pair of eye-tracking inserts around the lenses.

Pico G2 4K with embedded eye tracking. Seen from this side, it just looks like a standard Pico device

I asked why that cable coming out of the headset was needed: since this product has been made in partnership with Pico, I thought that everything would be embedded into the device. The guys at 7Invensun answered that this product was developed only after the Pico G2 had already been engineered. This means that the PCB of the device had already been produced, and it didn't feature the connectors to attach the eye tracking hardware to, so the eye tracking system could only be connected to the USB port of the headset. In future versions, this hack should disappear, and you will be able to buy standalone headsets with completely integrated eye tracking.

The cable coming out of the side of the Pico, whose connector is plugged into the USB port of the headset

I went on to try the device, and I was pleasantly surprised by it. The eye tracking addition didn't make the device more uncomfortable; for me, it was exactly like a standard Pico G2. After I wore it, I started a calibration procedure, during which I had to follow some points on the screen by just moving my eyes. The calibration for eye tracking is always quite short (it lasts around 30 seconds), but it is still, in my opinion, a step that has to be removed to make eye tracking more usable and widespread. In some contexts, like exhibitions, it is really a big nuisance.
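
If you are curious about what such a calibration typically does behind the scenes, here is a minimal sketch of the classic point-based approach: while you fixate known targets, the system fits a polynomial mapping from raw pupil coordinates to screen coordinates. This is the generic textbook technique, not 7Invensun's actual (unpublished) algorithm, and all the names are mine.

```python
import numpy as np

# Generic point-based gaze calibration: fit a 2nd-order polynomial mapping
# from raw pupil coordinates to screen coordinates via least squares.

def features(p):
    x, y = p
    return [1, x, y, x * y, x * x, y * y]  # polynomial terms

def calibrate(pupil_points, screen_points):
    """Fit the mapping raw pupil (x, y) -> screen (x, y)."""
    A = np.array([features(p) for p in pupil_points])
    B = np.array(screen_points)
    coeffs, *_ = np.linalg.lstsq(A, B, rcond=None)
    return coeffs  # shape (6, 2)

def gaze_to_screen(coeffs, pupil_point):
    return np.array(features(pupil_point)) @ coeffs

# Toy usage with made-up samples (a real system uses ~5-9 targets and
# averages many samples per target):
pupils  = [(0.1, 0.1), (0.9, 0.1), (0.5, 0.5), (0.1, 0.9), (0.9, 0.9), (0.5, 0.1)]
targets = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.5), (0.0, 1.0), (1.0, 1.0), (0.5, 0.0)]
c = calibrate(pupils, targets)
print(gaze_to_screen(c, (0.5, 0.5)))  # lands near (0.5, 0.5)
```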

After the calibration, I have been able to try some experiences:

  • In the first one, I was looking into a mirror, seeing myself as a female avatar. I could move my eyes, and the avatar in the mirror followed those movements. Eye blinking was also tracked: not only the open/closed status of the eyelids, but also various intermediate states;
  • In the second one, I could play a whack-a-mole game by just looking at the moles (a classic gaze-selection mechanic; see the sketch after this list);
  • In the third one, I could see some bullseyes in front of me and, by looking at their centers, I could check whether the eye tracking was precise enough.

From these tests, I could see that the eye tracking performed really well. In last year's tests, the tracking on the standalone headset was already quite good, but this year it was definitely faster and more precise, especially when I looked towards the periphery of my vision. I was impressed.

7Invensun claims only 1 degree of error in the estimation of the gaze direction, and I think that if your eyes are looking straight ahead, this claim is right. If you are looking towards the periphery of your vision, the precision seems lower… but it is anyway far better than it was one year ago. I think that standalone headsets with eye tracking are now finally usable.
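
To put that figure in perspective, here is a quick calculation (my own back-of-the-envelope numbers, not 7Invensun data) of how big a 1-degree error is on targets at various distances:

```python
import math

# A 1-degree angular error grows linearly with the distance of the target.
for distance_m in (0.5, 1.0, 2.0):
    error_m = math.tan(math.radians(1.0)) * distance_m
    print(f"at {distance_m} m: ~{error_m * 100:.1f} cm of error")
# ~0.9 cm at 0.5 m, ~1.7 cm at 1 m, ~3.5 cm at 2 m: fine for gaze-driven
# UI buttons, too coarse for picking tiny faraway targets.
```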

The only problem that I found in my demo was a slight delay in blink detection: the avatar seemed to blink a few instants after my actual blink. The 7Invensun guys answered that they have reduced the tracking framerate to 30Hz to spare the mobile device's battery, so the tracking can have a slight delay and may also miss some fast eyelid movements. This makes sense: at 30Hz, samples arrive about 33ms apart, while a blink typically lasts only a few hundred milliseconds, so even a couple of frames of delay is quite noticeable on such a fast movement.

Eye tracking inserts inside the 3DOF Pico headset

So I asked them what the power consumption of eye tracking on a standalone headset is. They answered that, since all the algorithms run on a dedicated DSP, the consumption is not that high. The headset can run for more or less 2 hours, a duration that is not that far from the battery life of a Pico G2 without eye tracking.

The Pico G2 4K with integrated eye tracking is on sale in China for 10,000 yuan (more or less $1,500). The typical customers of this device are companies interested in training, education and rehabilitation. If you are interested in buying it, you should know that 7Invensun is open to shipping the device worldwide to interested customers. Just contact them directly to purchase your device, or contact me and I will be very happy to help you with the introductions.

Vive Pro with eye tracking

I have already described to you my experience with the Vive Pro Eye, which is a Vive Pro with embedded eye tracking by Tobii. 7Invensun wanted to let me try a Vive Pro with its own eye tracking instead.

Me doing a hands-on session with the Vive Pro Eye

From the short demos that I tried on both, I can tell you that the performances are really comparable. On PC, I did not even experience the lag in blink detection that I found on the standalone headset. This is because, thanks to the high computational power of a desktop PC, the eye tracking can run at 120Hz and so be more precise and reactive.

For software developers, the good news is that all applications developed with HTC's SRanipal SDK (the one used to implement lip and eye tracking in applications for Vive devices) work in the same way with Tobii's and 7Invensun's eye tracking accessories. This means that an app developed for the Vive Pro Eye can work without a single change with the Vive Pro + 7Invensun eye tracking add-on.
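
This portability comes from a classic driver-abstraction pattern: applications code against one interface, and each vendor ships a runtime that implements it. The sketch below illustrates the idea in Python with names entirely of my own invention; SRanipal itself is a native C/C# SDK, and its real API differs.

```python
from abc import ABC, abstractmethod

# Illustration of why one SDK can serve two different eye trackers: the app
# only sees the abstract interface, and the vendor backend is swappable.
# All class and method names here are invented for the example.

class EyeTrackerDriver(ABC):
    @abstractmethod
    def gaze_direction(self) -> tuple:
        """Unit gaze vector in headset space."""

class TobiiDriver(EyeTrackerDriver):          # hypothetical backend
    def gaze_direction(self):
        return (0.0, 0.0, -1.0)               # would call Tobii's runtime

class SevenInvensunDriver(EyeTrackerDriver):  # hypothetical backend
    def gaze_direction(self):
        return (0.0, 0.0, -1.0)               # would call 7Invensun's runtime

def app_update(tracker: EyeTrackerDriver):
    # Application code never mentions the vendor: swap the driver and the
    # app runs unchanged, which is the point made above.
    print("gaze:", tracker.gaze_direction())

for driver in (TobiiDriver(), SevenInvensunDriver()):
    app_update(driver)
```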

Standard glasses with eye tracking

What about attaching an eye-tracking device to a pair of standard glasses? 7Invensun is experimenting with this as well.

It has developed an add-on that you can attach to certain glasses frames to see what the wearer is looking at. This special eye-tracking frame can be connected to a PC or a phone to analyze the gaze data of the user.

Standard glasses with eye tracking add-on mounted inside

I tried it, and it worked quite well. I followed the finger of one of 7Invensun's employees for some seconds; then we looked at the data collected by the device. I could clearly see a video recording of what I had been looking at, with a heatmap indicating the parts of the image I was focusing on, and in what order.
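
A gaze heatmap like this one is typically built by splatting a small Gaussian blob onto an accumulation grid for every gaze sample, and then normalizing the grid for display. Here is a minimal sketch of the general idea (not 7Invensun's code; the grid size and sigma are my assumptions):

```python
import numpy as np

# Accumulate normalized gaze samples into a Gaussian-splatted heatmap
# that can then be colorized and overlaid on the recorded video.

GRID_W, GRID_H, SIGMA = 64, 36, 2.0  # assumed grid resolution and blob size

def gaze_heatmap(samples):
    """samples: list of normalized gaze points (x, y), each in [0, 1]."""
    heat = np.zeros((GRID_H, GRID_W))
    ys, xs = np.mgrid[0:GRID_H, 0:GRID_W]
    for gx, gy in samples:
        cx, cy = gx * (GRID_W - 1), gy * (GRID_H - 1)
        heat += np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * SIGMA ** 2))
    return heat / heat.max()  # normalized to 0..1 for display

# Fake fixation cluster in the upper-left area of the image:
hm = gaze_heatmap([(0.2, 0.3), (0.22, 0.31), (0.21, 0.29)])
print(hm.shape, hm.max())  # (36, 64) 1.0
```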

This may be useful for jealous women who want to check if their boyfriends are looking at other women. Jokes aside, it can be useful for companies wanting to analyze user behavior. For instance, some people in a test group can be sent into a supermarket to buy some stuff while wearing these special glasses. After that, some experts can analyze their gaze data to understand how users behave inside the shop, and modify the placement of the items on sale so as to increase sales.

Closer look at the eye-tracking frame

As with every visit to 7Invensun, I came away very satisfied, both from a professional and a personal standpoint. I really thank Lee and Richard for the time that we spent together.

And even if I shouldn’t tell you, the company showed me some of its future products that will be revealed soon and that are great! Stay tuned for future updates on this company…

The post 7Invensun shows that eye-tracking on a standalone headset is now possible appeared first on The Ghost Howls.


I’ve returned to Italy, but there is still so much I have to tell you about the Sandbox Immersive Festival in Qingdao! Are you ready for some new reviews?

Beautiful view of Qingdao at night
Spheres

I have known the name "Spheres" for a long time. This experience created some buzz because it was the first virtual reality storytelling experience to be acquired for a seven-figure amount. Some months ago, it was showcased in an exclusive location in New York, and people even paid $50 to try this 3-episode piece of content set in space.

That's why, when I discovered that Spheres was available at SIF, I did everything I could to experience it, begging people in Chinese to let me see what this fantastic experience was like. I put the headset on with much excitement, and then my impression was…

I mean, Spheres is a good experience, but it has not been my favorite VR experience at the Sandbox festival; it wouldn't even make my top three. I would never pay $50 to watch it, but maybe that's just because it is not my genre. Let me explain why.

Spheres is an experience by Eliza McNitt that tells you about the solar system and the whole universe. The name "Spheres" clearly alludes to the shape of the planets, which are just spheres in outer space. The first episode is about the solar system and lets you see all its planets. The second is about the universe and black holes. The third one… well, it was not available at SIF.

SPHERES (2018) - Trailer (International) - YouTube

It could be defined as a mix of art and education. It is a real-time rendered experience (I guess they used Unreal Engine), in which you can navigate the cosmos and see the planets, the stars, etc… You can hear the sounds of the universe, you can see things that you are not able to see in real life (e.g. the electromagnetic fields of the Earth, represented as abstract colored waves surrounding it), and you can try things that you will never be able to try (like diving into a black hole).

While you explore the universe, a soothing voice narrates what you are seeing and what you should learn about it. For instance, in the first episode, the voice describes all the planets of the solar system: their names, their features, their orbits, etc… The voice narrating the experience is different in every episode: the first one is narrated by Millie Bobby Brown, the second one by Jessica Chastain and the third one by the popular singer Patti Smith. The voices are very calm and relaxing, perfect for an experience set in such a relaxing environment as outer space.

The experience also features various interactions: for instance, when the voice tells you about the various planets of the Solar System, you are able to take your Oculus Touch controllers and spin the planets in one direction or the other. In the second episode, you can make a star grow by moving your hands. The interactions all happen on rails, in the sense that you can't interact with the visual elements as you wish, but just as instructed by the voice. Their purpose is to make you feel less like a passive viewer of the experience.

What I loved about Spheres was the quality of the multimedia assets: the graphics are top-notch, and the visual effects are crafted with astonishing particle systems. Waves, clouds and rays are all created with stunning quality, a quality you probably won't find in other experiences about the cosmos. The part that I loved the most was when I was able to enter a black hole: it is something that I will never be able to do in my life, and it was depicted with amazing colors and visual effects. I started entering the orbit of the black hole and then I got sucked in slowly, completely, until I entered the hole. I was able to see the rays emanating from my hands being attracted towards the hole, from which nothing could escape. It was a memorable experience.

SPHERES - Songs Of Spacetime -VR- Oculus Rift - VR EXPERIENCE - YouTube

Also, the audio quality is awesome, and the voice makes the experience more complete: it is always relaxing, and it explains what you are seeing with elegance and clarity.

After all these highlights of mine about the high quality of the experience, you may wonder why I did not like it. The answer is: I think that it is a high-quality experience… but it is not an experience I take pleasure in being in. First of all, it is too slow. Everything happens on a very slow time-scale: exactly as the Universe moves very slowly, the experience also moves very slowly. Even the initial credits shown before every episode take a lot of time to watch. This slow pace bored me incredibly. It is a matter of taste, of course: probably the creator wanted to make the experience more relaxing, or wanted it to be slow like the rhythm of the universe, or wanted it to be slow so that people could learn a few concepts in an efficient way. In any case, the pace felt really artificial to me: there was no reason to go THAT slow, and this made me feel bored and irritated.

Spheres VR - Chapter 1 : Chorus of the cosmos - YouTube

Furthermore, I couldn't understand the point of the experience. I mean, it contains some educational notions, but what it teaches about the universe is absolutely not enough: just a few concepts here and there, without offering a full astronomy lesson. It could be considered an artistic experience, but in that case I don't get the need for the interactions and for a voice explaining scientific topics. I have not even fully understood the reason for many of the included interactions, which sometimes felt not completely blended into the experience. For instance, when I had to make the planets rotate around the sun with my hands, rotating them actively gave me no advantage over watching them rotate automatically. It didn't give me a greater sense of presence.

I think that if you like the idea of traveling through outer space, Spheres can be seen as a very good relaxing experience. Thanks to its slow rhythm, the soothing voices, and the very high-quality visuals, it is ideal for relaxing. In fact, many videos about Spheres on YouTube are tagged as ASMR (a kind of relaxation technique). If this is your genre, then Spheres is something you will love. If instead you are not into slow, relaxing experiences, then it is not for you. And it is not for me, either.

Buddy VR

Buddy VR is an experience by the Korean director Sooeung “Chuck” Chae that has been awarded “Best VR Experience” at the 75th Venice Film Festival.

Chuck Chae BUDDY VR Best VR Experience @ 75th Venice Film Festival - YouTube

It tells the story of a little mouse, Buddy, who lives in a small concession stand in a theme park in 1962. You save his life by removing the cheese from a trap, letting him enjoy the food in peace without dying. After that, you become as little as him and can start having some little adventures with him.

What you do with him won't be incredible: you won't fly on a unicorn shooting laser rays at zombies (even if that would be cool), you will just live some funny and less funny moments together, as all friends do in real life. When you become little, Buddy introduces himself by writing his name on a little piece of plastic, then hands the same object to you so that you can write your name with your controllers. After you have introduced yourselves, you start having fun with Buddy: eating stuff, playing the drums on the objects around you, and fighting against an angry, fat, annoying little girl who enters the stand and wants to eat the candies.

BUDDY VR - The Virtual Relationship Experience Trailer - YouTube

The experience is about friendship: Buddy is a nice mouse, and by spending time with him you become his friend. As I've said, there's nothing particularly special you do together, but thanks to the fact that he is very friendly, that you have fun together and that at the end you fight to save his life from the evil girl, you start feeling a bond with this little creature. And the final moment (spoiler alert!), when Buddy makes a drawing of the two of you together, using the name you wrote for him at the beginning, is genuinely moving.

Regarding the technical features, the experience has a cartoon graphical style, and the visuals are very good (I especially loved the lighting), as is the sound. Personally, I might have modeled Buddy a bit differently, but this is a matter of taste. I appreciated the interactions because they felt really well integrated into the story: it is an experience about doing things with your virtual friend, so it is natural that you have to interact with various objects. Even the first interaction, which requires you to write your name and feels pointless in that moment, reveals itself to be incredibly powerful at the end with the final drawing.

The final moment of Buddy VR, when the little mouse gave me a postcard with a drawing of the two of us together. I had written my name as “Hi! :)”, which is why that text appears below the stick figure that represents me

I loved being as big as in real life at the beginning and then becoming very little. It was magical to see normal-sized objects (e.g. a chocolate bar, or a fork) suddenly appear enormous. It was scary to see the hands of the nasty girl moving towards me, because they looked like the dangerous hands of a giant. The magic of VR let me see what it is like to live as a little mouse, with all the fun and all the dangers.

In the end, I liked Buddy VR, especially for the bond it made me form with this lovely little mouse.

Kobold

Kobold is an experience that I've been able to try only in part. It is a horror thriller adventure: a boy has disappeared, and you have to investigate to find him.

Of course, it won't be easy and, above all, it will be quite creepy: I started the experience in a house that looked abandoned, in the middle of nowhere. There was no electricity, and I had to find out how to turn it on. Since there was no light, I had to use a flashlight, and strange noises from all around me kept scaring me. The objects that I found during my adventure were all creepy, and this increased my discomfort a lot.

KOBOLD VR EXPERIENCE TRAILER - YouTube

Unluckily, I was able to try it for only 15 minutes, because the SIF venue had to close. But in that time, I realized how perfectly the horror atmosphere has been crafted: I was scared the whole time, thanks to a wise combination of lighting, creepy sounds, scary visuals, etc… During my time with the experience I encountered no real danger (e.g. a killer), but I was always uncomfortable. I also loved the fact that you can find a lot of material (videos, photos, etc…) inside the experience that makes you feel more immersed in the story.

Speaking with other people at Sandbox, I discovered that no one there had been able to finish it: the experience is long, and it is very easy to die, because every action in this game has consequences. And some of them are really bad.

Movie Realistic VR Horror Game - KOBOLD Chapter 1 - YouTube

I love horror, so I loved the horror vibes of Kobold. Unluckily, I have not had a full hands-on with it, so consider this just a first impression of the experience.

Common Ground
Common Ground Trailer - YouTube

Common Ground is a long journey inside the crisis of the UK's housing system. As the author describes it:

Common Ground explores the notorious Aylesbury Estate, home to thousands of South Londoners, and a concrete monument to the history and legacy of social housing in the UK. The Aylesbury Estate is undergoing a massive regeneration scheme that will see big changes to the community of thousands that live there and call it home. Common Ground mixes 360 video and real-time environments to allow people access to areas of the estate itself and personal spaces of residents, in order to examine how design, planning, dreams of utopian living and the political will of the day has affected the ordinary people caught in its midst. Utilising stereoscopic 360 video, photogrammetry, 3D modeling, and archive the viewer enters the world of the estate from its birth in 1960’s, through its decline and up to its controversial regeneration today. This multifaceted documentary questions notions of community, examines the dis-enfranchisement and demonization of the working class, and ask whether current housing policy today is destined to repeat the mistakes of the past.

While I loved the idea of giving a voice to people who usually go unheard, I found the experience mostly boring. It was very slow and without an actual climax. I haven't even fully understood why it should be in VR: “VR is an empathy machine”, but I think that a 2D video would have been enough to explore this topic. And 30 minutes is too long for it, IMHO.

That’s it for today! Have you tried these experiences? If yes, let me know your impressions in the comments or on my social media channels!

The post SIF: Hands-on with Spheres, Buddy VR and much more! appeared first on The Ghost Howls.
