This is an excerpt of my new book CONVERGENCE, HOW THE WORLD WILL BE PAINTED WITH DATA contributed by Pattie Maes.
No one can deny that smartphones play a central role in our lives. Children get their first phone around the age of 13, and forever after they will keep that phone close to them, consulting it hundreds of times a day. Many people even sleep with their phones, and one in three use their devices in the middle of the night. Of course, it is wonderful that these devices give us access to the world’s knowledge, but increasingly we are becoming aware of the negative impact they have on our physical, mental, and social well-being.
Life with Smartphones
The BlackBerry, the first mass-market Internet-capable phone, was introduced in 1999 and quickly became a staple of the financial, digital, and political elite. The iPhone brought this technology to the general consumer market in 2007 and today, 12 years later, smartphones are in the hands of almost a third of the world’s population. Only in recent years have researchers started to report the effects of ubiquitous smartphone use, and the news is not good.
Multiple studies report negative effects on people’s posture and respiration. Using these devices before going to sleep is correlated with poor sleep quality and duration. The devices also have negative effects on our mental well-being. While they give us access to the world’s information, they actually make it harder for us to demonstrate some of the qualities that are key to a successful life.
To thrive, a person does not just need access to information, but also to other people. They also need skills like attention, memory, emotional regulation, and creativity, skills that are negatively impacted by smartphone use. Professor Betsy Sparrow of Columbia University has shown that smartphone use has a negative effect on memory. People don’t exercise their own mental capabilities because it is much easier just to ask Google. Over time, unaided problem solving becomes more difficult.
Other studies have looked at the smartphone’s effects on attention and task performance. The mere presence of a phone on the table where a person is doing a task results in that individual doing worse at the task, even if he or she never checks the phone. Just the sight of the phone is a significant distraction, possibly because it reminds the person of the online world and what else is out there.
Also worrisome is the impact of smartphone use on mental well-being. A study in England showed that smartphone use is directly correlated with anxiety. Currently, there is an epidemic of anxiety among young adults in high school and college. While researchers have not definitively proven that there is a causal relationship, many experts hypothesize that smartphone use is contributing to the problem.
BioEssence releases calming or stimulating scents based on the user’s context and state.
Three Fundamental Problems With Smartphone Use
We live in two worlds, time shifting between the physical world around us and the digital world in our palm. More often than not, activities in these separate worlds have no relation to one another. With the exception of navigation apps, most apps and services are unrelated to what someone is physically doing in that moment. Beyond our location, the phone is not aware of our situation or what we are doing. It interrupts before it augments.
A second problem with smartphones is that their use is cumbersome. Professor Albrecht Schmidt of the University of Munich, Germany, compares today’s phone interfaces with the 18th-century monocle: both take a while to locate and make use of. Of course, since the 18th century we have invented glasses that can be worn all day, and contact lenses that can be put in once and forgotten about for 30 days. These lenses fit so well that some wearers even forget they’re wearing them.
A third problem is bad user experience. We have gone from using 10 fingers to just one or two fingers for input. Admittedly, the relatively recent introduction of speech-based interaction is making it a bit more convenient to ask a device to do something, but that interaction is still a long way from being as efficient and natural as human-to-human verbal communication.
A New, Better Form
It’s time to rethink smartphones and come up with a radically different, superior type of interface, not just because there is an urgent need for an alternative that has fewer negative consequences, but also because three emerging technologies are making it possible to do so.
We are working to develop systems that, like contact lenses, practically disappear, are always with us, and are always on. In contrast with today’s smartphones, these cognitive enhancement systems will be highly aware of the user, their current state, typical behavior, and their current context. These systems will mediate the user’s perception and experience of reality not just in the visual realm, but also using other modalities such as audio, scent, and haptics.
These new interfaces are designed using the principle of minimal distraction. They make use of stimuli or outputs that put minimal demands on the user’s attention, but nevertheless still affect the user’s behavior and thinking. These new systems will be personalized: they’ll learn about and adapt to the user. By adopting this trifecta of approaches, cognitive enhancement devices provide timely assistance with a minimum of disruption.
Augmentation technologies or systems can seamlessly integrate virtual information into our perception of our physical surroundings. This can be done for all of our senses, not just our visual one. We can augment our auditory perception using some of the new headphones becoming available from Bose and Sony. Bose already has a product called Hearphones, developed for people with hearing impairments. For example, they can increase the volume of the person you are talking to at a busy party while reducing background noise.
Audio AR interfaces may well reach success before their visual counterparts, as they can be more socially acceptable and realized in existing form factors such as headphones or regular glasses. But there are also opportunities for other modalities such as scent and touch. Scent is an under-appreciated, powerful modality that plays a key role in memory, mood, and other cognitive functions. Similarly, the sense of touch has the potential to affect people’s cognitive and emotional functioning in significant ways.
Sensors That Sense Us
A second technology that has made major advances recently is cheap, small sensors, paired with algorithms that can make sense of the data they collect. There are now tiny, powerful cameras that can create an awareness of the context the user finds themselves in, what activity they are engaged in, and what objects and people are around them. Deep learning software systems are now powerful enough to interpret the images these cameras produce, making sense of the user’s context. Similarly, microphones can listen in and recognize the sounds surrounding the user, such as a conversation, the siren of an ambulance, or a dog barking.
In addition to outward-facing sensors, we now have a wealth of small, wearable technologies that can sense our internal well-being. For example, EEG (electroencephalography, which measures brain-wave activity) or EOG (electrooculography, which tracks eye movement) sensors, in the form of glasses or headphones, can read our mental state and track how our eyes are moving. Wristbands like the Empatica can read heart rate and skin conductance, which is a measure of a person’s level of excitement. These sensors can give a system insight into the emotional state of the user. Cognitive enhancement systems will have access to a wealth of information about the user’s current internal state as well as their external context.
A Stands For AI
The third technology that will change the future of smartphones and AR is, of course, artificial intelligence, and specifically machine learning algorithms that can make sense of all that data. Outside of voice recognition, today’s AI sees limited consumer use. However, computer vision, driven by technology developed for self-driving cars, is rapidly making its way into consumer devices. These devices will be smarter and more personalized; they will learn to predict the user’s behavior and, because they know the user, anticipate what he or she is about to do or need, making them more efficient and effective.
Cognitive enhancement interfaces will also play a key role in learning. Today, children and adults still primarily learn in a classroom, while sitting down, listening to a teacher. But increasingly learning will happen in situ, in the real world. Kids tossing around a ball may see the physics concepts and formulas that are associated with the ball’s movement. They will see the ball’s position, speed, and acceleration over time — all right there in 3D, overlaid on the real world. They can learn while engaged in a playful activity with visualizations and explanations customized to their specific level of understanding.
Seymour Papert, one of the founders of the MIT Media Lab, once said that the best way to learn French is to grow up in France. Similarly, we will have more natural and effortless ways of learning a variety of skills and knowledge simply by being immersed in the relevant information in our daily experience. If you want to learn another language you will have your device describe the world in front of you in that language, using both auditory and visual AR. The augmentations could start with simple words and progress to word phrases and gradually complete sentences. User studies of such immersive language learning systems have shown that people recall the vocabulary they learned better than if they were using a traditional book or video-based study method.
Enhancement systems also have the advantage of teaching or training a person when the need arises. When your car breaks down in a remote area, the system can walk you through a process of diagnosis and a possible fix, even if you have never looked under the hood of a car before. The only thing left to decide will be which knowledge and skills to master.
Our Work At The MIT Media Lab
The NeverMind system designed by my group at the Media Lab uses AR technology to help a user learn and memorize a set of facts. It does so by turning a fact-learning task into an episodic memory learning task, which people happen to be better at.
NeverMind uses augmented reality to support memorization
In a small experiment, we compared visual learning with AR and traditional methods. In this case, a person wants to memorize the list of Super Bowl winners. The system visualized the winners along a physical path the user frequents, like the walk from the subway to their office. By seeing the list of Super Bowl winners in specific consecutive locations along their walk, they memorized the list without effort. Afterward, they easily recalled and recreated the list of winners simply by retracing their path in their mind and remembering which team they saw at each point along the path.
We compared people memorizing the list this way with a paper-based method. Immediately after the experiment, both subject groups performed similarly. However, one day later, participants who attempted to memorize the sequence on paper had already forgotten half of what they learned, while participants using the AR model were able to recall the list perfectly.
Cognitive enhancement technologies can go beyond just giving us access to information. Because of their close integration with the user and their access to data about the user’s internal state, they can help with cognitive functions such as attention and emotional regulation.
One of our projects, called AttentivU, consists of a pair of glasses with built-in EEG and EOG sensors that can help the user stay attentive and engaged. For example, when the user is listening to a lecture, the system uses its sensors to monitor the user’s engagement level in real time and gives them a gentle nudge in the form of a short audio sound or vibration. In user studies, people who received such biofeedback were not just more attentive, but also performed better on quizzes about the material.
AttentivU helps a user stay attentive using biofeedback
The biggest potential for enhancement technologies may be helping us regulate our emotions. In our lab, we have built and tested several devices such as BioEssence, HeartBit, and PsychicVR that collect information about the affective state of the user and use methods such as biofeedback and mindfulness techniques, as well as scent-based, unconscious interventions, to help the user alter their mental state.
For example, if someone is anxious right before a big presentation, the BioEssence system can release a calming scent like vanilla. If that person is driving late at night and feeling a bit drowsy, the device will notice and can release a stimulating peppermint scent to wake them up. Going further, enhancement devices can even help a person get into the right state of mind for a particular task or activity. Sometimes we need more focus and detail orientation, for example when doing our taxes or calculations, while at other times we need more divergent or creative thinking, and enhancement devices can help set the mood for these tasks.
Dormio is a system that uses EEG, heart rate, and EDA (skin conductance) sensors to detect when you are on the cusp of falling asleep. It helps you think more creatively about a problem by engaging you in a conversation on that topic at the moment you enter hypnagogia, a state of drowsiness between waking and sleep in which one can still hold a conversation but is less inhibited, making divergent thinking much easier. Using voice, Dormio engages you at the right moment and records anything you say in these micro-dreams for later recall and review.
Dormio supports creative thinking at the edge of sleep
AR To The Rescue?
While AR and AI-enabled personal devices are inevitable, it is critical that we are mindful of the potential dangers and problems these systems could produce. We can design solutions that minimize or prevent these negative outcomes.
First, it is key that agency and control remain completely with the individual human user. Users should not only be completely aware of how and when the system augments them, but they should be the only ones who make conscious decisions regarding which enhancements they want to activate.
Second, we should be aware that we may become dependent on the augmentation and should make conscious and informed decisions regarding which skills and tasks we want to delegate versus which ones we want to develop and strengthen internally.
Third, we should avoid enhancement interfaces that result in people living in their own separate realities. We do not want to recreate the filter bubbles of the online world in our physical lives. We can avoid this by adopting a principle of minimal interventions, as opposed to entirely overwriting our physical experience with a virtual one.
Last but not least, enhancement technologies represent a potential invasion of privacy, as they collect and make use of highly personal data. It is key that the data, and decisions about how it can be shared or used by others, belong to the user exclusively. To the extent possible, designers of these systems should store and process data locally, so that there is less risk of personal data being compromised.
Overall, there is a reason to be optimistic about the potential for enhancement technologies like AR to redefine and improve our relationship with our smartphones. Cognitive enhancement systems may well become the fourth major era of computing, after mainframes, desktops, and mobile devices. They will take the form of compact devices in acceptable form factors, such as glasses and earbuds, that we carry with us all the time, are always on and aware of our current state and context, and offer real-time assistance with a range of cognitive functions in a minimally disruptive way.
Scope AR, one of the pioneers of enterprise-class augmented reality (AR) solutions, today announced it has secured a $9.7 million round of Series A funding. The round was led by Boston-based Romulus Capital, which is focused on enterprise software, with follow-on investment participation from existing investors.
Scope AR provides tools that make knowledge-sharing easy and just-in-time, enabling companies to link remote workers in the field with specialists in the office, turning low-skilled workers into high-skilled ones. Their AR software supports employee training; product and equipment assembly, maintenance, and repair; and field and customer support. The company’s device-agnostic technology supports smartphones, tablets, and wearables, making it easy for large organizations like Boeing, Toyota, Lockheed Martin, Honeywell, GE, and others to quickly scale their use of AR. The company was founded by Graham Melley, Scott Montgomerie, and David Nedohin in 2011 and is based in San Francisco with offices in Edmonton, Canada.
Using AR to reduce mistakes at Lockheed saves millions. SCOPE AR
Companies serving the enterprise market are growing in number and size, including public companies like Microsoft and PTC, large consultancies like Deloitte, Accenture, and McKinsey, and startups with their own AR development and delivery platforms like Scope AR. They work as both consultants and vendors, guiding clients in the implementation of AR and leaving them with proprietary tools (a source of recurring revenue) so they can update their AR apps themselves without having to code. “AR is becoming an important tool for how knowledge is shared within heavy industry, allowing workers to get the information they need, when they need it, in an intuitive way,” said Scott Montgomerie, CEO and co-founder of Scope AR, in our conversation preceding the announcement. Montgomerie explained that Scope was delivering noteworthy ROI for Fortune 500 companies around the world, citing use cases across the aerospace, consumer packaged goods, and manufacturing industries. Using the company’s products — WorkLink and Remote AR — industry leaders such as Lockheed Martin and Unilever have achieved impactful results around improving worker efficiency, reducing equipment downtime, and more accurately diagnosing repair issues.
Scope AR will use the new capital to grow its sales and marketing teams and speed the continuous work of optimizing its products for real-world use cases and customer needs. One area of focus is the integration of client companies’ existing data in Oracle and SAP data management applications. “Mobile computing tools need to fit into clients’ information architecture,” Montgomerie explained. He cited remote experts and work instructions as two popular Scope AR applications needed by companies in every industry. “WorkLink is enterprise-ready, scalable, secure, and easy to use.”
Scope AR founders from L: Graham Melley, Scott Montgomerie, and David Nedohin. Scope AR
Also participating in this round are SignalFire, Susa Ventures, Haystack, New Stack Ventures, North American Corporation, and AngelList. Krishna Gupta of Romulus Capital and Wayne Hu of SignalFire will join Scope AR’s Board of Directors.
Mention the name Jeremy Nickel to any long-standing user of AltspaceVR, Microsoft’s social VR platform, and you’ll likely get a response of both recognition and praise. On a largely unrestricted platform that revolves around social interaction, Jeremy holds a weekly meditation session for free.
As the founder and CEO of EvolVR (pronounced Evolver), Jeremy is a pioneer in promoting mental well-being through virtual reality, and is experimenting with the potential of combining advanced technology with spirituality.
EvolVR offers yoga as well as meditation classes in purpose-built VR spaces, designed to allow users to relax and take a break from the stresses of the modern world. Jeremy was introduced to meditation by his father, who taught him the body scan technique from a young age to counteract his hyperactivity.
After high school, Jeremy wasn’t ready to go to college; instead he followed the famous jam bands the Grateful Dead and Phish around the US, having a blast. His parents managed to persuade him to apply to college, and he was allowed to defer a year if he promised to engage in something less frivolous than the lifestyle of a Deadhead.
Jeremy went to Nepal for six months and stayed with a Tibetan family, an adventure that proved significant in cracking open previously uncharted spiritual and religious possibilities. “In Nepal, everyone assumes that you believe in God; you can’t walk ten feet without going past a sacred area where you should stop and say a blessing.”
He left Nepal not as a Buddhist monk, but as someone who had been exposed to a different perspective on the deeper questions of our existence. Ever since then he’s been on what he describes as “an adventure of ideas”: a quest to listen to and understand different viewpoints on life’s purpose and meaning without becoming restricted by any particular belief system.
Jeremy’s quest led him to seminary, and he became ordained as a Unitarian Universalist minister. For the uninitiated like myself, Jeremy explained what a Unitarian Universalist is: “They’re not Christian, Jewish, or Buddhist, although some people bring understandings from those traditions with them. We’re instead an interfaith and no-faith denomination that allows you to believe in a God that makes sense to you, or in no God at all. Instead of fighting over differences, we choose to celebrate and rejoice in our individual truths and our shared community life.” Jeremy led a congregation just south of Oakland, CA, for seven years before deciding to change paths and find a new space for his spiritual leadership.
Growing up in a house where both his parents ran a computer and video company, Jeremy was surrounded by technology and became fascinated from a young age by its power to tell stories. When VR started to show signs of its capabilities, he wanted to make use of its unique ability to break down barriers and connect the world through a new medium.
Jeremy saw in VR a way to reach people who would otherwise struggle to connect to a community: those who are homebound due to disabilities or anxieties and unable to engage with a meditation or yoga group. Many people don’t think they belong in a sacred space, or have no interest in seeking one out. EvolVR gives people in those circumstances a chance to connect to a larger community in the comfort and safety of their own homes, knowing they can leave at any time by taking the headset off and be back in a familiar environment.
There’s already an ecosystem of virtual reality content that is continuously seeing new developments and applications, especially in the fields of mental and physical well-being. What Jeremy and the others at EvolVR have done is separate themselves from the crowd by offering an experience that allows real-time group meditation and yoga for people all over the world, free of charge.
Jeremy explains the benefit of meditating within a group setting, “I previously went to a lot of Quaker meetings, where they would just sit in silence and I felt it was very similar to meditation. When I meditated or got into that quiet state with other people who were also really being conscious about being in a quiet focused state, I could feel an energy crackling between our bodies, a shared energy, and it was powerful”.
In virtual reality you sit with your headset on alone, but when you are able to see and hear others in the same virtual environment and you are all experiencing the same thing, the geographical distance no longer plays a part. People from the United States, Ukraine, Germany, the UK, South Korea, Japan all connect in hyperspace and it really does feel as if they are all there, meditating along with you. A new form of crackling is made possible, but this time extending around the globe.
The future is bright for EvolVR, and Jeremy has big plans to continue expanding the company into new realms. Right now, the company offers four weekly sessions on Altspace, and Jeremy also holds a regular discussion meet-up about the mysteries of life called “The Spot.” EvolVR also offers paid one-to-one sessions in either yoga or meditation at EvolVR.us.
Follow EvolVR on Facebook for their latest updates.
There’s a crate in the corner of Magic Leap founder and CEO Rony Abovitz’s office. “Inside that box is Hiro’s samurai sword from Snow Crash.” [Snow Crash is the seminal 1992 novel by Neal Stephenson which introduced the idea of the “metaverse” into popular culture.] “That is the one of one, actual sword, officially spec’d out by Neal,” said Abovitz, 48. Stephenson is now Magic Leap’s resident futurist. Abovitz promised to make him the sword when he joined the company in 2014, and it’s finally arrived. The gifting ceremony is pending. “That’s the nerdiest thing I think I’ve ever done,” Abovitz proudly says.
“It’s taken almost five years for a master swordsman to make this sword,” Abovitz continued. “It’s not a prop. It’s been folded over 20,000 times. You could go to war with that thing. Isn’t that awesome?” Abovitz beams beneath a mop of thinning, curly gray hair and a yarmulke. Behind him are other props, drawings, and a full-size ray gun model created for the first seminal Magic Leap experience, “Dr. Grordbort’s Invaders,” made by Magic Leap and Weta Workshop in New Zealand.
According to his college classmates, Abovitz was part semi-serious alternative artist (he was a cartoonist for the U. Miami paper, “The Hurricane”) and part serious engineer. He got his B.S. in Mechanical Engineering and his M.S. in Biomedical Engineering. Abovitz left the Ph.D. program at U. Miami (he continues to have a close relationship with the school) to start Z-Kat, an R&D think tank he created in his now-wife’s grad student apartment. Perhaps most unexpectedly, Abovitz was also a college athlete at Miami, a highly competitive Division I school. He walked onto the track team as a freshman and was eventually invited to join as a javelin thrower, one of his proudest accomplishments. In many ways, Abovitz is a prototypical nerd, but in other ways, not. There is always an unexpected twist.
“I used to haunt [computer graphics convention] SIGGRAPH when I was a grad student in the early 90s,” he says. VR was a hot topic then. Jaron Lanier, creator of VPL, famous for creating the “data glove,” and Bran Ferren, Disney’s then-VR guru, were pushing the edge of what could be accomplished with the bulky technology of the time, and they showed it off at SIGGRAPH. “The Imagineers were a huge inspiration. I was super inspired by books like Neal Stephenson’s Snow Crash and movies like The Matrix.” Years later, both Stephenson and John Gaeta, who won an Oscar for the mind-bending visuals of The Matrix, would join Magic Leap’s staff.
In the late 90s and early 2000s, Abovitz explored every VR system he could as a possible robotic surgery visualization solution. Doctors have long complained of “screen pollution,” where needed information requires them to look away from the patient to check a monitor. Surgical visualization of robotic instruments on another two-dimensional screen aggravated an existing problem. “Magic Leap would have been an amazing solution to have then, and through the work of our amazing partners in the medical field, it soon will be.”
One of the things I find most interesting about Abovitz is that he’s never worked for anyone else, other than his board of directors. He did odd jobs as a kid. He worked for his dad. He had an internship, but never a boss. The rise of Mako, the surgical-robotics company that grew out of Z-Kat, to a public company, and its subsequent acquisition by Stryker for $1.65 billion in 2013, was a sixteen-year marathon, punctuated by moments of terror when things were almost derailed, first by the terrorist attacks of 9/11, then by the financial crisis in the fall of 2008.
For Abovitz, Stryker’s acquisition of Mako was a watershed event in many ways. First, it gave him long-awaited financial security, because CEOs of public companies like Mako can only sell shares slowly and in certain windows. Following the acquisition, Abovitz was liquid. He used the money to pay off student loans and to buy a comfortable house, where he built a music studio. Abovitz spent long hours there, playing music, and thinking about what he really wanted to do. He painted Magic Leap on the wall when it was nothing more than a compelling idea.
“When I was at Mako, we did robotic surgery, really serious stuff. FDA-level. But to me, it was like I was doing Star Wars droids. So my mindset was really different from all the other people in the field,” Abovitz explained. “I’m making Star Wars droids, and I’m inside the med-tech world, trying to mentally integrate where I’m coming from. I would go from SIGGRAPH, to bone and joint meetings, and neuroscience meetings. But my brain wasn’t there, it was in the movie world… the medical device world doesn’t really blend creativity and technology in the same way.”
[Author’s note: in 2010, Dr. Arthur Kobrine used Mako’s robotic microsurgery tools at Sibley Hospital in Washington, DC, to save my life when three discs in my neck collapsed. I never thought to ask Dr. Kobrine about the tools he used until I heard of Mako.]
Abovitz characterizes the period of 2011–2013 as “serious garage days,” but emphasized he was not alone in his explorations. He had contacted Weta Workshop in 2011, which he describes as having “adopted” him. With Richard Taylor (who later became a Magic Leap board member) and Greg Broadmore, Abovitz began to develop what would become “Dr. Grordbort’s Invaders,” which had a great influence on the development of the Magic Leap One from the other side of the world.
Meet Gimble, your robot companion in “Dr. Grordbort’s Invaders.”
After the acquisition of Mako, things began to happen quickly. “No matter who you are, when you start something, including Z-Kat, you can’t do it alone. Entrepreneurs have to have partners. They have teams of people. The early people are so important.” Even though the company is made in his image, Abovitz insists the credit for creating Magic Leap lies with his collaborators more than himself.
As the Magic Leap idea began to coalesce, Abovitz undertook a quixotic journey, iterating his ideas about spatial computing with artists, filmmakers, authors, engineers, and geeky friends who spent hours with him on the phone and in the divey restaurants he prefers for their anonymity. As he refined his vision, Abovitz carefully broke the problems into pieces, and looked for solutions and workarounds, and the people who could help figure it out. When pressed about this… well, magic leap… to a broader conception of AR, Abovitz demurs. “I just wanted to make something really cool, and thought it would be awesome to really do it.”
Panel presentation of “Dr. Grordbort’s Invaders” at LeapCon 2018. From L to R, Rony Abovitz, Greg Broadmore, and Richard Taylor.
The Four Wise Men
Abovitz credits four people with helping him refine the Magic Leap vision into reality in those early days: Richard Taylor of Weta Workshop; Sam Miller, formerly of NASA; his friend Graham Macnamara, a physicist who went to Caltech; and Scott Hassan, who has been characterized as “Google’s unknown third creator.” Hassan introduced the companies in 2014.
“I got to give credit to my cynical friend [Graham Macnamara],” said Abovitz. “We’d get into these eight-hour debates about things… And he’s like, well, you got to do the real thing if you’re going to do it.” Macnamara, now Magic Leap’s Chief Creative Scientist, has been friends with Abovitz since ninth grade. He went on to study theoretical physics at the California Institute of Technology and the University of Miami but became disillusioned with academia. He wanted to do something more practical than physics but resisted Abovitz’s entreaties to join Mako. Macnamara was exploring an alternative career in architecture when Abovitz started puzzling through with him how Magic Leap might work.
Flying Guy, Magic Leap Inspirational Artwork
“The brilliant thing with Graham was, at some point, we stopped arguing about, is it physics or biology? We decided it was somehow both. Which I don’t think our industry still understands. And that’s probably the most important thing about Magic Leap… our number one science obsession in the company,” Abovitz says.
Macnamara and Abovitz set out “to properly fool the brain” in order to create multi-planar spatial computing, breaking the challenge down into discrete problems to be solved. “We tackled every problem, laid them all out,” Macnamara told me. Unsurprisingly, hardware presents a number of challenges. “Optics are the last hurdle. Only a true multi-planar light field display can merge information and abstraction with the real world convincingly,” said Macnamara. At the time their ideas seemed strange to him, almost psychedelic. “Rony’s brain defaults to the future and then he works backward.”
Macnamara was already working on Magic Leap’s first patents when Abovitz connected with a NASA scientist, Sam Miller. Out of that collaboration emerged a white paper that “pulled together all the ideas about spatial computing that have been science fiction stuff for years and turned it into a product,” said Macnamara. After a year of working on the initial technical patents as a contractor, Macnamara became one of Magic Leap’s first twenty-five employees.
Abovitz describes an apocryphal visit to a friend in Hollywood who ran a music label and guided him to meet “the right kind of people, who wouldn’t eat me alive,” not those who would smell his fanboy enthusiasm and pick his pocket. Abovitz bemusedly relates how he prepared for the trip by binging Entourage and watching Swimming with Sharks. Storytelling and movies are deeply embedded in Abovitz’s psyche, even though his own system is not yet fully capable of telling the kinds of stories that inspire him. In addition to John Gaeta, many of the senior creative people at Magic Leap have a background in entertainment and movies.
Richard Taylor, President of Weta Workshop, and member of Magic Leap’s Board of Directors.
No relationship compares in depth and influence to the one with Richard Taylor of Weta. “I had found kindred souls on the other side of the world,” Abovitz told me. Weta is the design and special effects shop co-founded by director Peter Jackson to create the award-winning special effects in movies like The Hobbit. Based in Wellington, New Zealand, Weta now hosts a team of embedded Magic Leap employees, mostly technical, who work shoulder to shoulder with the creatives there. “Here at Magic Leap, we have a lot of ultra high-end tech and software people,” said Abovitz. “But they need to be combined with creative minds. They are in New Zealand, far away — off the grid in a way — and we want that. They don’t limit what they are attempting for financial reasons, necessarily. When they go for things, they do it because they want the best,” he said.
Abovitz met Sam Miller at the Conference on the Future of Engineering Software, which is presented by Miller’s grandfather, the legendary software designer Joel Orr. Miller was a ten-year NASA veteran, having started as an intern and worked his way up to Creative Scientist. A Ph.D. with a knack for hacking the government from the inside, Miller worked on a range of projects, from cybersecurity to rockets and robots, which made him a perfect collaborator for Abovitz, who seems to have a talent for finding talent exactly when he needs it. When I talked to Miller in December 2018, he told me our conversation was taking place exactly seven years to the day since he met Abovitz and found a mission big enough, hard enough, and compelling enough to entice him away from NASA.
Miller was dragged into the Hollywood phase of ideation as well. “Some of those people are crazy. I mean really crazy. Crazier than we are,” said Miller, describing his conversations with Abovitz’s non-technical creative advisors.
Like Macnamara, like Abovitz, Miller has a romantic side and a practical, engineering side. It took a year of talking about spatial computing with Abovitz, of breaking it down into “can I do this?” “Talking is exciting, but you don’t leave the best job in the world, drag your family to Florida, literally change your life for an idea. You have to get there technologically. It’s world-class talent, inspired by vision, and execution. There are hundreds of difficult little pieces, but by breaking it down we make the impossible merely complicated,” he said. I hear that phrase, “make the impossible merely complicated,” from a lot of Leapers.
Visualization of Layers, a concept Magic Leap introduced at Leap Con.
Magic Leap’s vision of how devices will work and interact with the world around them is far beyond the capabilities of the company’s current developer version, the Magic Leap One Creator Edition. It’s beyond even what most forward-looking reviewers could imagine. To understand Magic Leap, you have to look past the device to the coming convergence of technologies that will make wearable invisible computing commonplace.
The device is the least important part. It’s an AR enabler. Abovitz prefers the term “spatial computing” to “augmented” or “mixed” reality, because AR is associated with smartphones, and MR has been appropriated by Microsoft, obfuscating its original meaning. The company says the Magic Leap headset is a “location-based” device, because it is constantly scanning the world around it, partly to map it, but also to detect an invisible, digital “layer,” sometimes called “the spatial web” or “AR Cloud,” and to “register” (or “anchor,” in the parlance of mobile AR) the content of that layer onto the real world. The vision consists of three distinct elements: ubiquitous 5G networks; layers, or geolocated cloud content; and more advanced AI-enabled devices, which would access the layers when needed. The company calls this promised land “The Magicverse,” which is part tech and part Abovitz’s meticulous yet fantastical vision of the future.
Much has been said about Magic Leap’s years of secrecy, always catnip to the press, especially after the company secured half a billion dollars of investment from Google and attracted elite tech, financial, and entertainment investors. In a 2014 profile, The South Florida Business Journal quoted Abovitz as saying, “In our industry, there are so many competing companies and games, and they have people constantly out spying on the competition.” Abovitz says that entire story is bull. Not what he said, and not what he meant.
“We wanted to create the kind of anticipation that special event movies like Star Wars or Game of Thrones have, where people wait in line overnight to be the first to see it. I thought that would be really cool,” he said. Unfortunately, this was taken another way by the press, which even now doesn’t seem to fully comprehend Abovitz and Magic Leap. I’m not going to lie. I’ve talked to him and John Gaeta for hours. What they are doing and thinking is not as obvious to the rest of us as they think.
For the next four and a half years, only an eclectic group of people were offered the opportunity to taste spatial computing using a bulky machine that Leapers call “The Beast,” at Magic Leap HQ in Plantation, Florida. Guests included celebrities, filmmakers, pop stars, athletes, investors, potential investors, politicians, and strategic partners like AT&T. To this day, an invitation to Plantation confers great status on the recipient. Abovitz is surprised to hear me say this. I am not surprised he is surprised.
I have always been sympathetic to Magic Leap. Building a new operating system, a new optical system, and a custom chipset in four years is unbelievable. Hardware is hard, complicated, and mad expensive. Pulling that off for even $2.3 billion seems remarkable, given the time and money Microsoft, Apple, Facebook, and Google have spent solving similar problems. I said so in several columns in Forbes. That, probably more than anything, is why I was sitting with Abovitz and Gaeta in Plantation, FL, in November of 2018, interviewing them for this book.
Companies like Apple and Microsoft are also good at keeping secrets. Apple’s secret plans for its AR glasses are the subject of relentless rumors. Apple and Microsoft are massive organizations with tens of thousands of employees in locations distributed around the world. Projects are broken into pieces, and one group often doesn’t know what another is doing. Projects can be more easily hidden this way, even from the people working on them.
However, these giants are not simultaneously raising $2.3 billion, which requires a company to put on a show. The press, which had been kept at arm’s length, was quick to pounce when it was revealed that a Magic Leap promotional video had been made by Weta. At the same time, Microsoft was releasing cinematic science fiction shorts about future use cases for the HoloLens, featuring an intelligent digital assistant embodied as a flying eyeball and latency-free 3D telepresence with remote participants on different continents. No one cared much about that. But when Magic Leap showed a special-effects-laden concept video made by Weta, it was buried in a tidal wave of snark and derision that took the company by surprise. The tech press has not been kind to Magic Leap.
From L to R. Magic Leap futurist, novelist Neal Stephenson, CEO Rony Abovitz, and Magic Leap’s SVP strategy, John Gaeta at Leap Con, October 2018.
The Midas Touch
“Who else would do this? No one would do this,” Abovitz said of Magic Leap’s first three years, which he funded mostly with his own money. “More than I probably should have,” he said. A $20 million Series A round, led by Hassan and several other angels, took the onus off him personally. Even before the first prototype was fully functional, Abovitz was preparing to raise venture capital, just as he had done with Mako. On the strength of his vision and the success of his robotics company, Abovitz has been wildly successful at fundraising, assembling a world-class roster of investors and strategic partners rarely aligned behind the same startup.
Hassan introduced Magic Leap to Google. In May 2014, Abovitz got a call from Alan Eustace, one of Google’s most senior engineers, asking for his GPS coordinates. Eustace was in Florida testing a homemade space suit he would later wear to jump out of a weather balloon 135,000 feet above Earth, breaking the record for the highest free fall in history. He wanted to stop by.
Soon thereafter, Eustace jumped out of a plane and landed near Magic Leap’s headquarters. He must have been as impressed as he was impressive, because, in October 2014, Google invested $542 million in the company. “I still wake up in the night going, ‘Holy crap!’” Abovitz told The South Florida Business Journal.
By the end of 2018, Magic Leap had raised $2.3 billion at a $6 billion valuation from some of the biggest names in technology (Google and Qualcomm), media (Warner Bros., Disney), venture capital (Andreessen Horowitz and Kleiner Perkins), finance (JP Morgan and Fidelity), advanced telecom (AT&T), and e-commerce (Alibaba). According to Crunchbase, of the nearly $6.3 billion raised by all Augmented Reality startups in the past ten years, 37% went to Magic Leap. Though $2.3 billion is an astonishing amount of money, Google and Facebook both raised billions more before going public. Uber, which like Magic Leap is a private company, has raised more than $24 billion.
In May 2016, Abovitz appeared on the cover of Wired Magazine, with an accompanying feature. The story was sparse on detail but full of Abovitz’s soaring poetry about spatial computing. “Ours is a journey of inner space. We are building the internet of presence and experience,” he said cryptically.
Magic Leap One
Magic Leap gave its first preview of a working lightweight prototype to Brian Crecente of Rolling Stone Magazine in December 2017. He remarked that the headset had a “steampunk aesthetic,” and the description stuck. It was comfortable and light, and could accommodate prescription lens inserts.
Crecente described the experience favorably. The field of view seemed bigger than the HoloLens’s. The Magic Leap One offloads processing to a pocket-sized “Lightpack,” while the HoloLens is self-contained. And where the HoloLens leaves lots of room for the physical world, the Magic Leap One occludes it. Abovitz called the Magic Leap One “an artisanal computer,” a poetic way to describe a developer version of a computer for which there is little content.
Right before the launch of the Magic Leap One in August 2018, Magic Leap announced a strategic alliance with a new investor, AT&T, which also owns Warner Bros. and HBO. Magic Leap’s spatial computing devices will be optimized to show off the telecom’s new latency-free 5G networks. Magic Leap announced plans to demo the device at select AT&T stores in Chicago, Atlanta, Boston, Los Angeles, and San Francisco in 2019. Already in the fall of 2018, AT&T ran a “Crimes of Grindelwald” movie promotion that used the Magic Leap One in Chicago. Other locations may feature the Fantastic Beasts movie tie-in. AT&T had a similar exclusive deal with Apple from 2007–2009. At CES, in January 2019, AT&T and Magic Leap announced they were expanding their partnership to target the enterprise market together.
The Magic Leap Home Screen.
The Magic Leap One was released on August 8, 2018, priced at $2,295, and only available in select areas at first. People who did not understand the meaning of “developer version” were inevitably disappointed. However, developers, the primary audience for this first edition, were for the most part enthusiastic.
Magic Leap’s AR glasses are similar to the HoloLens, but much more fun and fashionable, in a cartoony, steampunk-like way. Unlike the HoloLens, which has enough room around the eyes to permit users to retain their regular eyewear and their peripheral vision, the Magic Leap One hugs the face and requires a prescription insert. At 40 degrees, the field of view of the Magic Leap One is noticeably larger than the 30 degrees of the HoloLens. It seems larger still because the HMD’s design occludes much of your peripheral vision, so the AR images rarely leave the field of view. This greatly enhances the suspension of disbelief.
A developer version of a device like the Magic Leap One, with only a few games and demos in its app store, isn’t easy to review. It’s not fair to the product or the reviewer. There’s no infrastructure to support it. The headset shipped with “Dr. Grordbort’s Invaders,” a robot wave shooter; “Tonandi,” an interactive sound experience made in collaboration with the Icelandic band Sigur Rós; “Helio,” the company’s web browser; and “Project Create,” Magic Leap’s answer to Tilt Brush. A spatial version of Rovio’s popular…
Making a VR Game With Followers: Part1 — Need Ideas
Take two ingredients: a VR dev (me) and 3 to 5 months of free time. Mix them. Publish the result on Steam. Aaaaand you’ve got a VR game. This is how it often goes.
However, I want to take a different approach from most game devs. I want to add a third ingredient. And it’s you. Yes you, my dear reader. I’d like to make you part of this game.
“Will I be a game character or what? What the hell are you talking about?” you might ask. Well, I ask you to be my partner (in crime?). Make a game with me. Be the developer you always wanted to be (I will do all the hard work, don’t worry).
It’ll go like this: I will be coding the game, and you will have a direct impact on everything through comments, polls, Patreon tiers, playtests and other interactive ways we’ll find through the internet.
Become a co-author of this game. No kidding. For your contributions, your name (or photo?) might appear in the main menu, in the credits, or even within the game environment. You’ll also get personal exclusive in-game stuff. But most importantly, you’ll be able to influence the development; I’ll build the features, and even the art, that you want.
Why am I doing this? Three reasons:
To make something you will like to play
To form a team with you, my dear reader (listening to different points of view is soooo helpful)
To help other devs avoid the mistakes I’ll make
I promise to write weekly, in detail, about all the struggles, pearls, and pitfalls of making a VR game and selling it. Like a reality TV show, but without Kardashians.
Let’s start now!
Every game must have an idea: some new fancy mechanics, story or style of art. Let’s assume I can make any game, like literally everything. Multiplayer? Yes! MMORPG? Oh, yeah!
So, got an idea? Share it with me using the form below.
But, there’s just one condition:
NO MORE WAVE SHOOTERS. You know what I mean.
What will happen to your ideas? I’ll post them in the next part of this blog series. The point is to let you choose the most interesting one (I hope there will be more than one contributed, lol).
Obviously, I’ll start making the one which gets the most votes. Will it be monetized somehow? What about legal stuff? Honestly, I don’t know yet, but I’ll figure it out with the winner.
So, bets are made. Let’s make the bestest and the coolest VR game ever TOGETHER!
New York’s Tribeca Film Festival announced its Immersive Arcade line-up this afternoon. The Arcade is the XR (meaning VR, AR, and everything else) companion to the film festival founded by Robert De Niro, Jane Rosenthal, and Craig Hatkoff in 2001 to spur the economic and cultural revitalization of lower Manhattan following the attacks on the World Trade Center. The Festival added interactive programming in 2012. Under the guidance of curators Loren Hammonds and Ingrid Kopp, the Immersive Arcade is one of the four big international XR festivals, along with Sundance, the Venice Film Festival, and South by Southwest (SXSW). Many of the experiences offered in the Arcade, such as the highly anticipated debut of Fable Studios’ “Wolves In The Walls: It’s All Over” and “Ayahuasca,” from producers Atlas V and director Jan Kounen, are world premieres. Many of the exhibitions are location-specific, created just for Tribeca, and may never be seen again.
“Cave” from an NYU team led by Ken Perlin. Kris Layng
The competitive category, Storyscapes, includes four world premieres and one North American premiere. Hammonds told me his goal is to reveal “where we are in the development of storytelling with new technology. The broad diversity of experiences here suggests it’s not just technology. It’s storytelling, it’s creativity, it’s the biggest toolset artists have ever had. There’s performance. There are sets. There’s physical action. You do things.” Indeed, the competitive category features innovative work across immersive mediums that combines various forms of the technology with audience participation. The five projects come from Australia, Egypt, Iraq, the Netherlands, the UK, and the USA. This year, four of the five experiences up for the award were made by female creators.
The Collider is a location-specific VR experience that will never be duplicated anywhere else. Nichon Gleron
The Collider (North American Premiere) from UK’s Anagram, is a blend of mixed reality and immersive theater in which two participants, one in a headset, and one with controllers, have to work together. It explores the notions of memory and dependency. Future Dreaming (World Premiere) from renowned Australian XR artist Sutu was created in collaboration with native Aboriginal youth, who learned VR content creation tools in order to express their aspirations for the future. Hammonds described it as both “beautiful and funny.” Traitor (World Premiere) also uses mixed reality elements, including actors from the UK’s Pilot Theatre Company, to create an escape room experience whose goal is to find a missing teen.
Starring Lucy, the first AI character to star in a VR experience. She remembers what you’ve told her. CARLOS LEON
Out of competition, some of the best-known creators of VR content are also featured in the Tribeca Immersive Arcade, many returning for a second and third time. This is the fourth year Baobab Studios has been part of the festival. Director Eric Darnell is expanding interactivity in Bonfire, a semi-comic predicament of lost spacefarers on an alien planet.
Baobab studios (“Invasion,” “Jack”) is back again with “Bonfire.” Baobab Studios
Clyde Henry and Felix & Paul Studios present their stop-motion animation Gymnasia, which takes place in a decrepit school gymnasium populated by abandoned puppets. Edward Saatchi’s Fable Studios is presenting the much-anticipated Wolves In The Walls: It’s All Over (World Premiere), which features the first AI character to star in a VR experience, Lucy, who needs to talk to you.
You’ll see the World War One trench battle from the air and underground in this VR experience. War Remains
The Festival will present a one-of-a-kind collaboration with popular historian/podcaster Dan Carlin and MWM Immersive that transports attendees to a WWI battlefield with stunning realism in War Remains. Using free-roam (untethered) VR, haptics, and sensations like heat and wind, users will experience World War I from above, in a balloon, and below, in the trenches. A similar wartime sensory experience, HERO, won the Storyscapes prize last year. Jessica Brillhart, formerly Principal Filmmaker for VR at Google, is having the world premiere of her Vrai Pictures production Into the Light at the Festival. It’s a one-of-a-kind geolocated immersive audio experience that will move participants throughout the building to hear pieces by Yo-Yo Ma. Conservation International is returning to the Festival, following their powerful debut last year with the dual 360 (My Africa) and free-roam VR (Elephant Keeper) experiences. This year they’re presenting A Drop In the Ocean, which shrinks the user down to the size of a microbe on the back of a jellyfish (apparently you feel like you are indeed standing on a jellyfish) to experience pollution on a molecular, and personal, level.
“A Drop In The Ocean” from Conservation International shrinks the participant down to the size of a microbe. They then ride atop a jellyfish. Only in VR. Conservation International
“It has been a wild ride seeing how artists and filmmakers have responded to new technologies over the years, constantly pushing and prodding and exploring the edges of things in order to make beautiful and vital work,” said Kopp, co-curator of Tribeca Immersive, in the Festival’s press release.
A second Civil War has broken out in America. In this voice-activated VR experience, you will drop into an Insurgent Hot Zone, interrogate the inner circle of the insurgency, and decide: which side are you on?
7 Lives (World Premiere) — Luxembourg, France, Belgium
Project Creator: Charles Ayats, Sabrina Calvo, Jan Kounen
Key Collaborators: Franck Weber, Céline Tricart, Marie Blondiaux, Adrien Oumhani
An afternoon in June. Tokyo. 5 p.m. A girl jumps in front of the subway. Her soul rises from the rails. On the platform, the six people who witnessed the scene are in shock. It revives traumas in them, painful memories they never overcame…
Ayahuasca (World Premiere) — France, Luxembourg, Belgium
Participants are immersed in visions triggered by a dose of ayahuasca. The spectator lives this through director Jan Kounen’s eyes as he travels on a spiritual voyage.
Bonfire (World Premiere) — USA
Project Creator: Eric Darnell, Baobab Studios
Key Collaborators: Maureen Fan, Larry Cutler, Kane Lee, Shannon Ryan, Ali Wong
Nice job. You’ve crashed your spaceship into an alien jungle. Your instincts, nourishment cylinders, and a wary robot sidekick are all you have for survival… or so you think. With Ali Wong.
Cave (US Premiere) — USA
Project Creator: Ken Perlin, Kris Layng, Sebastian Herscher
Key Collaborators: Jess Bass
Cave is a coming-of-age story told through cutting-edge Parallux technology, featuring a fully immersive holographic VR experience that can be shared by many audience members at once.
Children Do Not Play War (World Premiere) — Uganda, Brazil, USA
Project Creator: Fabiano Mixo
Children Do Not Play War is a cinematic VR tale of the war in Uganda told through the eyes of a young girl. Also playing in the program Cinema360: Her Truth, Her Power
Common Ground (World Premiere) — UK
Project Creator: Darren Emerson
Key Collaborators: Ashley Cowan, Conan Roberts
Explore the notorious Aylesbury Estate, concrete monument to the history and legacy of social housing in the UK, and home to a community affected by forces beyond their control.
Doctor Who: The Runaway (World Premiere) — UK
Project Creator: BBC / Passion Animation Studios
Key Collaborators: Mathias Chelebourg
Step inside the TARDIS with the Doctor in this beautiful, animated, interactive story from the Doctor Who team. With Jodie Whittaker, Richard Elfyn.
A Drop in the Ocean (World Premiere) — UK, France, USA
Project Creator: Adam May, Chris Campkin, Chris Parks
Key Collaborators: Philippe Cousteau, Ashlan Cousteau, Vision3, Conservation International
Discover a miniature universe at the heart of our survival. Become part of a Social VR adventure through the ocean food chain, to reveal a crisis of our making.
Ello (World Premiere) — China
Project Creator: Haodan Su
Key Collaborators: Zhiyuan Ma, Hao Luo
Ello is a sweet story about loneliness and friendship. When people expect friendship or love, proactively pursuing rather than passively waiting might lead to a surprising end.
Gymnasia (World Premiere) — Canada
Project Creator: Clyde Henry Productions
Key Collaborators: Félix Lajeunesse and Paul Raphaël
Step into a dream, where the ghostly ephemera of a lost childhood await you.
Into the Light (New York Premiere) — USA
Project Creator: Jessica Brillhart, Igal Nassima
Key Collaborator: Yo-Yo Ma
Ascend Spring Studios and move through the movements of Johann Sebastian Bach’s “Unaccompanied Cello Suite №2 in D Minor,” performed by the legendary Yo-Yo Ma.
Stealing Ur Feelings (World Premiere) — USA
Project Creator: Noah Levenson
Key Collaborators: Brett Gaylor
Stealing Ur Feelings is an AR experience about the power of facial emotion recognition AI that exploits your reaction to its own content in horrifying ways.
Unceded Territories (World Premiere) — Canada/USA
Project Creator: Paisley Smith and Lawrence Paul Yuxweluptun
Key Collaborators: Ketsia Vedrine, Peter Denny, Patrick Weekes, Jason Legge
Through infectious interaction, build a natural world made up of the colorful, surrealist art of acclaimed First Nations painter Lawrence Paul Yuxweluptun, until you are confronted by Colonialist Snake who forces you to see the truth behind your actions.
Podcast legend Dan Carlin employs the unique power of virtual reality to transport audiences into the most extreme battlefield in history — the Western Front of The First World War.
Where There’s Smoke (World Premiere) — USA
Project Creator: Lance Weiler
Key Collaborators: Peter English, Julia Pontecorvo, Dale Worstall
Where There’s Smoke mixes documentary, immersive theater, and an escape room to explore memory and loss. Set within the aftermath of a blaze, participants race to determine the cause of a tragic fire by sifting through the charred remains.
Wolves in the Walls: It’s All Over (World Premiere) — USA
Project Creator: Pete Billington, Jessica Yaffa Shamash
Key Collaborators: Fable, Facebook Sound+Design, Third Rail Projects, Oculus Studios, Edward Saatchi, Chris Hanson
Transport into the magic of VR cinema, where only you can help Lucy discover what’s truly hiding inside the walls of her house. With Jeffrey Wright, Noah Schnapp, Elizabeth Carena, Cadence Goblirsch
Sponsored by AT&T
Another Dream (حلم آخر) (World Premiere) — Netherlands, USA, Egypt
Project Creator: Tamara Shogaolu, Ado Ato Pictures
Another Dream brings to life the gripping, true story of an Egyptian lesbian couple. Faced with a post-Revolution backlash against their community, they must choose between love and home.
The Collider (North American Premiere) — UK
Project Creator: Anagram
Enter The Collider, a machine built to decipher the mysteries of human relationships. The Collider is an immersive virtual and theatrical experience exploring power, dependency, and the space between people.
Future Dreaming (World Premiere) — Australia
Project Creator: Sutu
Key Collaborators: Charles Henden, Alison Lockyer, Maverick Eaton, Maxie Coppin, Nelson Coppin
Step into a time-warping dream bubble as four young Aboriginal Australians approach their futures. Be ready for an intergalactic adventure. Look out for the space emus!
The Key (World Premiere) — USA, Iraq
Project Creator: Celine Tricart
Key Collaborators: Gloria Bradbury
An interactive VR experience taking the viewer on a journey through memories. Will they be able to unlock the secret behind the mysterious Key without sacrificing too much?
Traitor (World Premiere) — UK
Project Creator: Pilot Theatre, Lucy Hammond
Key Collaborators: Matt Stuttard Parker, Richard Hurford, Rebecca Saw, Lydia Denno, Jonathan Eato
Eight hours ago, teenager Emma McCoy vanished. All she left behind was a game. Now it’s the viewer’s job to find her.
Change is Gonna Come
12 Seconds of Gunfire: The True Story of a School Shooting (World Premiere) — USA
Project Creator: Suzette Moyer, Seth Blanchard
Key Collaborators: John Woodrow Cox
After a gunman shoots her best friend on the playground, a first-grade girl confronts a journey of trauma and loss after the Townville, South Carolina, school shooting.
Ashe ’68 (New York Premiere) — USA
Project Creator: Brad Lichtenstein
Key Collaborators: Beth Hubbard, Rex Miller, Jeff Fitzsimmons
Arthur Ashe emerged as an elite athlete who parlayed his fame as the first black man to win the US Open tennis championship into a lifetime devoted to fighting injustice.
Accused №2: Walter Sisulu (North American Premiere) — France
Project Creator: Nicolas Champeaux & Gilles Porte
Key Collaborators: Oerd Van Cuijlenborg
A trove of 256 hours of sound archives of the Rivonia trial bring back to life the political battle waged by Nelson Mandela and his seven co-defendants against apartheid. This film looks at one of them in particular: Accused №2, Walter Sisulu.
11.11.18 (World Premiere) — Belgium, France
Project Creator: Sébastien Tixador and Django Schreven
Key Collaborators: Boris Baum, Sébastien Plazeneix, Antoine Sauwen
A few minutes before the suspension of fighting in World War I on November 11, 1918, soldiers have to face a series of decisions.
Space Buddies (World Premiere) — USA
Project Creator: Matt Jenkins, Ethan Shaftel
Key Collaborators: Piotr Karwas
It takes a special team of astronauts to survive the voyage to Mars without going insane. This crew will be lucky to make it past launch.
Mr. Buddha (董仔的人) (International Premiere) — Taiwan R.O.C.
Project Creator: HTC Corporation, Lee Chung
Key Collaborators: HTC VIVE, Taipei Golden Horse Film Festival Executive Committee
The short crime story takes place in a car, following Dong-Tzu, Ching-Tsai, and Ni-Sang, who get their hands on a valuable antique only to have the fruits of their labor shared with the newcomer, A-Che.
Her Truth, Her Power
Mercy (New York Premiere) — USA, Cameroon
Project Creator: Armando Kirwin
Key Collaborators: Sutu, Ruben Plomp, Emma Debany, AMK LTD, Oculus VR for Good
Edith, a 14-year-old from Cameroon, journeys through the jungle seeking life-transforming surgery to remove a tumor on her jaw.
Girl Icon (New York Premiere) — USA, India
Project Creator: Sadah Espii Proctor
Key Collaborators: Amy Seidenwurm, Skye Von, Paula Cuneo, Lauren Burmaster, Espii Studios, little GIANT Wolf, Oculus VR for Good
Globally, over 130 million girls do not go to school. Step into the life of one girl from India who is inspired by Malala Yousafzai to change her course.
Children Do Not Play War (World Premiere) — Uganda, Brazil, USA
Project Creator: Fabiano Mixo
Key Collaborators: Amy Seidenwurm, VILD Studio, Oculus VR for Good
Children Do Not Play War is a cinematic VR tale of the war in Uganda told through the eyes of a young girl. Also playing in the Virtual Arcade.
Such Sweet Sorrow
Armonia (World Premiere) — USA
Project Creator: Bracey Smith
Key Collaborators: Anja Moreno-Smith, Neil Dvorak, Sara K White, Josh Bernhard, Jacques Lalo, Daniel Coletta
Armonia rides the dynamic original piano concerto “Armonia Degli Uccelli” and marries it to a universally accessible animation, producing a uniquely layered spectacle of spatial storytelling.
Dreams of The Jaguar’s Daughter (World Premiere) — USA, Mexico
A blue-hearted rom-com about people who are going to lose someone they love. Nobody knows what to say, so they bicker, laugh/cry, get married. It’s a romantic comedy, after all.
Tribeca Immersive takes place in the Tribeca Festival Hub located at Spring Studios — 50 Varick Street. Admission to presentations of the Virtual Arcade featuring Storyscapes is $40. Screening tickets for Tribeca Cinema360 screenings are $15. Tickets can be purchased online at tribecafilm.com/immersive beginning March 26 or by telephone at (646) 502–5296 or toll-free at (866) 941-FEST (3378).
A deep dive into Virtual Reality entertainment design
This post is a brief revisit to the topic of my earlier essay on rhythm games for Virtual Reality. It was triggered by the ‘early access’ release of Audica, a new VR rhythm game from Harmonix, the studio that basically reinvented and owned the rhythm game genre on video game consoles for years. They rode the tremendous success of Guitar Hero and Rock Band, which perfected the formula the studio had previously explored with titles like Frequency and Amplitude.
Harmonix had done a music-themed title for mobile VR devices, SingSpace, which was a karaoke game, but Audica was going to be the real deal. Expectations for Audica were therefore high. Beat Games, with their hugely successful Beat Saber, had conquered the throne in the genre, and now Harmonix would take it back with Audica, or at least challenge for it. I think it’s reasonable to assume this was the minimum goal for the project, so the comparison is justifiable.
Well, Audica will not surpass Beat Saber. It’s not that Audica is bad; it’s actually quite good, and there is a lot of polish that speaks to the great craft(wo)manship residing at Harmonix. However, the designers at Harmonix have made a number of choices that I believe will keep Audica a niche title.
Audica is a combination of:
game mechanics mixed in from another rhythm game franchise, plus
some of Harmonix’ proven rhythm game formula from the past,
yet, missing a significant other feature from the formula.
My take is that the folks at Harmonix have not quite identified the details that make Beat Saber’s gameplay design shine, or else the stakeholders at the studio have insisted on keeping some of the proven design from the past, which does not work to their advantage in VR as well as it did for “flat screen” games.
Let’s look at these choices in detail, because they can be generalised, to an extent, into design learnings for the ‘immersive’ domain. I argue that the following five design details explain why Audica is not able to win over many Beat Saber players’ hearts:
1. Some interaction mechanics from previous media adapt better to spatial design than others
The simplest task Audica asks of its players is to aim your hands (as laser pistols) at a target appearing in space, roughly in your field of view in front of you, and time your shot to a contracting circle closing on the bullseye of the target, according to the beat of the song. A gif I made to illustrate this:
This is a proven rhythm game mechanic lifted from a game series for Nintendo DS handhelds that originated with Osu! Tatakae! Ouendan and its sequels, known in the west as Elite Beat Agents. In order to bring it to VR, it had to be adapted from the flat screen and its stylus interface to physical 3D space, i.e. ‘spatialised’ for VR. As a result, playing Audica slightly resembles skeet (the sport): observing the space before you and reacting to things appearing in it by shooting at them.
Harmonix has done this adequately, embellishing it with impressive visual effects and adding some variety through a few variations on the basic mechanic, as evident from the clip above. The question is: was adapting this mechanic the correct design choice in a market dominated by Beat Saber?
The answer is that the EBA mechanic is not the root of the problem, but it does mean that Audica can go only so far in truly spatialising the design, i.e. making the core mechanic make use of the 3D space around the player. Visually and spatially, it’s hard to see how those targets could ‘come at you’ in a way that makes the game fully embrace the space around the player. The mechanic has introduced a constraint that influences a number of other gameplay aspects, and not always in a positive fashion.
The more advanced targets in Audica do ask the player to make, for example, a diagonal hand movement when hitting them, but this hand movement follows (literally) a dotted line on a curved plane in front of you. It’s like using a lightsaber’s tip to point at a 2D plane in front of you, rather than slicing freely at objects at varying distances from you.
Summary: An ‘Objects Coming at You’ design pattern seems to be a better design solution for embodied VR experiences, which also aim at leveraging the performative aspects of VR.
More on performative aspects later with my third point.
2. Multiple margins of error across multiple motor skills add to the learning curve and cognitive load
In Audica, your ‘tools’ for executing the mechanics are pistols and their ‘bullets’. Therefore, the margin of error regarding timing, and the targets’ physical relation to your body, are different from Beat Saber, where the lightsabers function as extensions of your limbs.
In Beat Saber, you can still manage a hit at the very last fraction of a second, very close to your body, because the cubes are coming at you, and you can even make a correction to the direction and angle at the very last moment by a slight adjustment of your wrist. Furthermore, you do not need to look at the cube to do this.
In Audica, when you press the trigger, that’s it. The margin of error exists as a combination of your initial aim before pressing the trigger, and then the timing of pressing the trigger.
These two margins of error combine in a way where one can rule out the other: miss the timing or the aim, and you miss the target. No last-millisecond correction is possible once you’ve sent your shot away, and you always have to look at the target. The result: fewer of those satisfying moments where a last-moment correction keeps your combo going, and more cognitive load in juggling both goals, aim and timing.
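The compounding effect of requiring both aim and timing can be illustrated with a simple probability sketch. This is my own illustrative model, not anything from either game’s actual scoring code: if aim and timing are treated as independent checks that must both succeed, the combined hit rate is their product, which falls off much faster than either margin alone.

```python
def combined_hit_rate(p_aim: float, p_timing: float) -> float:
    """Probability of a hit when the aim check AND the timing check
    must both land (assuming the two are independent)."""
    return p_aim * p_timing


# A player who lands each individual check 90% of the time hits only
# about 81% of combined aim-and-timing targets, so a second margin of
# error meaningfully steepens the difficulty curve.
single_check = 0.9
print(combined_hit_rate(single_check, single_check))
```

Independence is of course a simplification (a rushed shot probably degrades both aim and timing together), but it captures why stacking a second motor-skill requirement makes misses feel so much more frequent.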
Furthermore, the height and direction of the targets have more variables; at least for a newbie, predicting where they appear feels rather random, whereas in Beat Saber you learn the possible areas of appearance, constrained by the track, quite quickly. Hence there are fewer of these potential points of attention: less cognitive energy is spent in anticipation.
For me at least, these aspects add up to a considerably steeper learning curve. With Beat Saber, I only really fell in love with the game once I started to get to grips with the hard level, and that took some effort. With Audica, the perceived effort of reaching a similar skill level feels overwhelming. At times I get a taste of it when I hit a good sequence, but then one miss sets me back and makes me feel bad as the song sounds worse (see point 5 below for why Beat Saber does not do this).
Summary: In immersive design, the adages of usability, such as ‘don’t make me think’, expand to the physical realm: to the design of everyday things.
3. Shooting has less performative potential than wielding
In my rhythm games essay, I mention research regarding the ‘energising’ potential of body postures and poses, and how Beat Saber and other VR games that emphasise embodiment tap into this. Beat Saber does this particularly well with the performative aspects of wielding a lightsaber, and the associated cultural meanings such wielding carries with it.
While academics still debate whether e.g. so-called power posing has emotional effects and behavioral consequences, the notion of embodied cognition does imply that what we do with our bodies affects our mind; we think with our body as well.
Consequently, performative poses in rhythm games can be argued to amplify the already positive emotions we gain from successful sequences of slicing cubes or shooting targets to the beat. Wielding lightsabers as the only ‘verb’ in a game has a good chance of drawing on that energising potential, especially when tied to the cool factor of make-believe Jedi Knight-esque activity. Audica has tried to make skeet as cool as possible, with solid audio and visual effects, but it’s no lightsaber.
(I am not discussing the so-called melee objects in Audica here, because they seem just an afterthought in addressing the space-embodiment problem. Maybe this is just a personal thing, but to me, they are just annoying, like swatting away flies while you try to focus on something important.)
Summary: What postures you ask your user to do, physically, in your immersive application, has consequences for the experience, and is a question of design.
4. Track design needs to be also about the transitions between poses, not just sequences of poses
In Beat Saber, the objects are always the same; it’s the angle and posture with which you are meant to hit them that change. This means that embodied gameplay becomes not just about hitting the beats, as in Audica, but also about the transitions from one target to another.
In terms of human motor skills, Audica asks us to make discrete movements: sudden shifts of attention from a target on one side of the space to a target on the other. Beat Saber asks us to perform continuous movements, implied by the lightsabers and the cubes coming at us, so the transitions between slicing one cube and the next feel continuous, rather than points in space we have to dart to with our gaze.
The lightsabers as extensions of one’s body, and consequently our direct contact with the cubes through them, make gameplay more about proprioception, i.e. knowing where our limbs are even with our eyes closed, enabling us to hit without looking. In games like Audica, by contrast, one needs to constantly target something at a distance via perception.
Consequently, the craft of ‘level’ or track design in Beat Saber, in my view, is at its best in enabling coordinated transitions from one pose to another in a satisfying way, in line with not just the beat, but the melody of the song. This also feeds into why Beat Saber has a natural feel of exercise.
I find that Audica’s shooting pose is not only less varied but also less energising and performative; even though they try to bring variety to it with the chained targets and the like, the system does not enable designing for the joy of transitions in the way Beat Saber does.
Hence, in Audica the marriage of gameplay and audio feels less organic. (Maybe this changes at the higher levels of difficulty and skill; please let me know!)
Summary: Design for proprioception and they will come.
5. Avoid diminishing embodied engagement with how you design negative feedback
One aspect of the Harmonix rhythm game formula that Audica retains is the ‘deterioration’ of the song if you miss a beat. A track in the song takes a ‘hit’ by being decreased in volume or having a filter applied, which makes the song sound wrong, until you manage to ‘restore’ it through subsequent successful hits.
This became a de facto design feature across the Harmonix portfolio for giving negative feedback to the player and an incentive to get back on track. It was fine when the game was a twitch in your brain and your thumb, but when the active agent is your whole body, I find it becomes too much of a negative. It takes away from the fun of the game; it reduces the cues for you to dance along while playing, effectively implying you shouldn’t dance: “no fun for a while, loser”. This might just be me sucking at playing Audica, and therefore constantly being affected by the feature, but the important takeaway here is:
Beat Saber does not do it. It throws a punishing sound effect, which I have been habituated to physically flinch upon hearing, but the song is not affected by your mistakes. Fun and dance go on, without interruption.
Whether skipping this was a conscious decision by the folks at Beat Games, or just something they did not end up implementing, doesn’t really matter: it just works. The design acknowledges that when your user’s whole body is in play, you should not reduce the player’s embodied engagement by distorting the flow through manipulating the song.
Summary: In immersive design, negative and positive feedback need to take embodied cognition into account.
Afterword: The Disruptors and the Disrupted
In summary, what is going on here reminds me of Clayton Christensen’s classic formulations of how disruptive innovation works: Beat Games, as an entrant to the field, has not had to unlearn the conventions a studio like Harmonix carries in its console-based rhythm game DNA.
Therefore, Beat Games can be characterised as a disruptor in the genre, at least in its VR manifestations. Harmonix is giving the VR rhythm genre a good shot with Audica, but the end result is a mash-up of old truths and experiments with a new platform.
I guess this post partially arose from frustration: I really wanted to like Audica, and still do, but with Beat Saber’s DLC out and being an absolute joy, I probably won’t give Audica many more chances, which is a shame.
Will the CCPA Reshape Online Privacy for Kids Across America?
The California Consumer Privacy Act (CCPA) of 2018 was signed into law by California Governor Jerry Brown on June 28, 2018. Currently, penalties under the law can include up to $7,500 per incident.
This new law brings stronger data privacy protections for residents of California, especially minors under the age of 16.
What Data Privacy Rights Does CCPA Give Consumers?
The CCPA gives “consumers” (defined as California residents) four basic rights concerning data collection and consent regarding their personal information:
Right to Know: Under CCPA consumers have the right to know what personal information a business has collected about them, where it was sourced from, what it is being used for, whether it is being disclosed or sold, and to whom it is being disclosed or sold;
Right to Opt Out: CCPA gives consumers control over who may collect and sell their data. Consumers can “opt out” of allowing a business to sell their personal information to third parties (and minors under 16 years old have the right not to have their personal information sold, unless their parents opt in to allow data collection);
Right to Delete: CCPA gives consumers the right to have a business delete their personal information, with some exceptions;
Right to Equal Service: The right to receive equal service and pricing from a business, even if they exercise their privacy rights under CCPA.
How Will CCPA Impact Kids’ Privacy?
The primary benefit for kids, teens, and families is that the CCPA will give parents and teens more control over what personal data companies can collect from minors.
Increased Age Gate: The CCPA raises the age for data consent for minors from 13, under the Children’s Online Privacy Protection Act (COPPA), to 16 for residents of California.
Data Consent: Under CCPA, minors under 16 years of age must authorize the sale of their personal information. For children under 13, the opt-in must be collected from a parent or guardian.
Personal Data: Under CCPA, the definition of “personal data” has been expanded beyond names, addresses, SSNs, and email addresses.
The law defines personal information to include the collection of geolocation, IP addresses, shopping or browsing history, psychological profiles, consumption behaviors, and consumer preferences.
Due to the broad definition of “personal information” under CCPA, the law will also impact privacy and data collection on devices ranging from smartphones, VR/AR, gaming consoles, apps and more.
Businesses catering to kids often don’t ask new users if they are 13 years old or younger, so that they can avoid gaining actual knowledge of the users’ age and thereby avoid triggering data collection regulations under COPPA, such as obtaining Verifiable Parental Consent (VPC).
Under CCPA, businesses will need to ask consumers who reside in California to verify whether they are 16 years of age or older before they can begin selling any data obtained from a minor.
The CCPA explicitly provides that a business which willfully disregards the consumer’s age has actual knowledge of the consumer’s age.
The primary challenge for many companies is that to be CCPA compliant, they will now be required to acquire actual knowledge of a user’s age, and doing so may open them to COPPA liability.
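The consent rules described above amount to a simple age-based decision tree. The sketch below is purely illustrative: the age thresholds (13 from COPPA, 16 from CCPA) come from the statutes discussed here, but the function name, return labels, and overall structure are my own hypothetical shorthand, not anything prescribed by either law.

```python
def required_consent_for_sale(age: int, is_california_resident: bool) -> str:
    """Illustrative: who must opt in before a business may sell
    this user's personal information?"""
    if age < 13:
        # COPPA territory: verifiable parental consent, nationwide.
        return "parent_or_guardian"
    if is_california_resident and age < 16:
        # CCPA: the minor must affirmatively opt in themselves.
        return "minor_opt_in"
    # Adults (and 16- to 17-year-olds in California) instead get
    # a right to opt OUT of the sale of their data.
    return "none_opt_out_only"
```

The catch the article describes is visible in the sketch: to route a user down the right branch at all, the business must first learn the user’s age, which is exactly the “actual knowledge” that triggers COPPA obligations.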
When Does CCPA Become Law?
The California Consumer Privacy Act (CCPA) becomes law effective January 1, 2020.
The Future of Kids’ Online Privacy
The stricter age regulations required under the new California law, similar to those under the European Union’s GDPR-K provisions, will force businesses to take more responsibility for verifying the age of the person whose data they are collecting and/or selling.
Either way, the CCPA is a bold step forward in making the internet a safer place for kids in California. Moreover, as the saying goes, as California goes, so goes the nation. And for the sake of our children’s privacy online, that may be a step in the right direction.
VR in 2019 — It’s all about content
Content!? Don’t we need to figure out hardware first?
I didn’t get a chance to post a review of the Oculus Go in 2018, but suffice to say it’s what the VR market needed. No cables, no mobile phone, no PC required, and it’s an affordable US$199. The standalone headset allows users to just pop it on and try a wide variety of VR experiences, some with interactivity using the controller. I love it because I see it as a major step towards a truly consumer-friendly VR headset, and I’ll be referring to it quite a bit on this blog.
Speaking of VR headsets, 2019 appears to be a promising year, with the Oculus Quest coming out in spring and HTC announcing two new headsets at CES 2019. Personally, I find the Oculus Quest the more exciting consumer VR headset, being the next advancement after the Oculus Go: standalone and wireless, but this time offering six degrees of freedom. Plus, it’s still reasonably priced (US$399). I think it’s safe to say that VR hardware and user experience issues are steadily being resolved. We’re on the right track.
The Oculus Go and upcoming Oculus Quest
The Year of VR Content
Which is why I think 2019 will be the year that the focus shifts to VR content. I can’t count the number of times I’ve looked at Oculus Go communities on Facebook to see comments such as, “I just got my Oculus Go! What’s a good app to start with!?”
This brings me back to a point I’ve made in previous posts — what are you going to do with all these great headsets if there’s no content to watch? What’s needed now is mass-market entertainment that can pull audiences towards VR and, in my opinion, we have to offer more than just games. There are two general categories of non-gaming VR content — passive or interactive. Passive does not require users to do anything during the experience; they can simply sit back and enjoy it — like watching a TV show or movie — except it’s happening all around them and they can look around. Interactivity, on the other hand, requires users to get involved. A recent non-VR example of this was when Netflix released Bandersnatch, an episode of the Black Mirror series with a branching storyline. Similar to a “choose your own adventure” experience, users make decisions for the protagonist at fixed points in the episode. If you’ve seen the episode, ask yourself whether this type of interactive storytelling is something you would like to see more of, and whether you would like to see it in VR.
What did you think of Bandersnatch?
What do audiences want?
Having said that, VR is still in its infancy and nobody really knows what VR audiences want, including the audiences themselves. In my conversations with VR companies, the tendency has been to focus more on interactive experiences because technologists want to maximise the possibilities presented by the tech. But would interactivity actually serve a story? And are people ready for it?
There have been some attempts at interactive storytelling in VR over the years. The U.S.-based Penrose Studios, for example, has developed a unique diorama-style storytelling technique that lets users move around the story environment and absorb the narrative from different angles, leaning closer into characters and scenes.
Other approaches are similar to Bandersnatch. Here are a few examples (Note: I’m not personally recommending any of these, just listing out some of the ones I’ve come across):
Sequenced VR — Released in 2014, this was a post-apocalyptic episodic animated VR experience with branching storylines using gaze functionality. I say “was” because I can’t find it online anywhere anymore. It was meant to be a 10-part series scheduled for release in Q4 of 2016.
Speak of the Devil VR — A horror-based “choose your own adventure” style 360° live-action experience with 13 different endings.
Broken Night — Premiered in 2017 at the Tribeca Film Festival, this film stars Emily Mortimer on a psychological journey to uncover the truth of what transpired over one harrowing night. Unfortunately, it doesn’t seem to be available on the usual VR distribution channels at the moment.
Currently, passive content dominates the offerings available if you were to buy a headset today. Just look at the Oculus Go website, it appears to focus primarily on passive entertainment:
Looks very passive to me!
So, I ask again: What works? And what do audiences want? These questions are yet to be answered with certainty by VR content creators and distributors. As it is, the VR entertainment content play is hard and, outside of a handful of games, there haven’t been any runaway successes. But with improvements in hardware, I’m confident we will start seeing more VR content and a bigger variety of experiences on offer in 2019 that will appeal to more mass audiences.
What are your thoughts on VR content? Do you prefer an interactive or passive experience? Comment below!
Till next time. #LoveVR
Abhi Kumar is Chief Creative Director at Warrior9 VR. He believes that VR is the next frontier in the content revolution and is currently working on an animated sci-fi series in VR, The PhoenIX.