For over 100 years entrepreneurs have come to Hollywood to try their luck in the dream factory and build an empire in the business of storytelling.
Propelled by new technologies, new businessmen have been landing in Los Angeles since the invention of the nickelodeon to create a studio that would dominate popular entertainment. Over the past five years, virtual reality was the latest new thing to make or break fortunes, and the founding team behind the Korean company AmazeVR are the latest would-be dream-makers to take their turn spinning the wheel for Hollywood fortunes.
Despite billions of dollars in investment, and a sustained marketing push from some of the biggest names in the technology industry, virtual reality still doesn’t register with most regular consumers.
But technology companies keep pushing it, driven in part by a belief that maybe this time the next advancement in hardware and services will convince consumers to strap a headset onto their face and stay for a while in a virtual world.
There are significant economic reasons for companies to persist. Sales of headsets in the fourth quarter of 2018 topped 1 million for the first time, and new, low-cost all-in-one models may further move the needle on adoption. Hardware makers have invested billions to improve the technology, and they'd like that money to not go to waste. At the same time, networking companies are spending billions to roll out new, high-speed data networks, and they need new data-hungry features (like virtual reality) to make a compelling case for consumers to upgrade to the newer, more expensive networking plans.
Sitting at the intersection of these two market forces are companies like AmazeVR, which is hoping to beat the odds.
Founded by a team of ace Korean technologists who won fame and fortune as early executives of the multi-billion dollar messaging service Kakao (it’s the Korean equivalent of WhatsApp or WeChat), AmazeVR is hoping it can succeed in a marketplace littered with production studios like Baobab Studios, Here Be Dragons, The Virtual Reality Company, and others.
The company was formed and financed with $6.3 million from its founding team: Kakao co-founder and co-chief executive JB Lee, who serves as Amaze's chief product officer; Kakao's former head of strategy Steve Lee, AmazeVR's chief executive; Jeremy Nam, AmazeVR's chief technology officer and a former senior software engineer at Kakao; and finally, Steve Koo, who led KakaoTalk's messaging team and is now head of engineering at AmazeVR.
“What we saw as the problem is the content creation itself,” says Lee.
Encouraged by the potential uptake of the Oculus Go and spurred on by $7 million in funding led by Mirae Asset Group with participation from strategic investors including LG Technology Ventures, Timewise Investment, and Smilegate Investment, AmazeVR is looking to plant a flag in Hollywood to encourage producers and content creators to use its platform and get a significant library of content up and running.
For LG, it's strategically important to get some applications up on its newly launched 5G subscription network back in Korea, and AmazeVR is already rolling out new content for its VR platform.
In fact, AmazeVR has already partnered with LG U+, the telecommunications network arm of LG, to produce virtual reality content. LG U+ will host AmazeVR content on its service and use the company's proprietary content-generation tools to make VR production easier as it looks to roll out 1,500 new virtual reality "experiences."
AmazeVR sells its content as a $7 per-month subscription, with 3 month bundles for $18 and 6 month bundles for $24. So far, they’ve got more than 1,000 subscribers and expect to add more as consumers start opening their wallets to pick up more devices. The company already has 20 different interactive virtual reality experiences available and is in Los Angeles to connect with top talent for additional productions, the company said.
“We believe cloud-based VR is the future, and AmazeVR has developed elegant technology that enables users to create and share interactive content very easily,” said Dong-Su Kim, CEO of LG Technology Ventures, in a statement. “We are incredibly excited about how the AmazeVR platform will enable innovative, quality content to be generated at unprecedented scale and speed.”
AmazeVR uses a proprietary backend to stitch 360-degree video and provide editing and production tools for content creators in addition to building its own cameras for video capture, the company said.
As it builds out its library, AmazeVR is giving video creators a cut of the sales from the company’s subscriptions and individual downloads of their virtual reality experiences.
“We see no reason that VR content shouldn’t be compelling enough to support a Netflix model. To get there, we must devise mechanisms to inspire, assist, and reward content creators,” said Steve Lee, CEO of AmazeVR. “Our approach, commitment to quality, industry-leading technology, and strategic investors provide a path forward to make VR/AR the next great frontier for entertainment and personal displays.”
Consumer VR might not have taken off in the mainstream, but it's still fun to use, and it's even more fun to use in groups. A VR arcade renaissance is underway right now, alongside location-based multi-user VR experiences.
That's the premise behind Munich-based HolodeckVR, which is using proprietary tech that blends radio frequency, IR tracking and on-device IMUs to bring multi-user, positionally tracked VR to mobile headsets.
How would you like to do VR in a big group, and on fairground dodgems/bumper cars? That's the kind of thing this startup is cooking up.
As a spin-off from the prestigious Fraunhofer Institute for Integrated Circuits IIS, the company uses its own technology, which allows visitors to experience virtual reality in groups of up to 20 people and move around an empty 10x20m space equipped with nothing more than VR goggles.
Holodeck says its setup can be used for different types of events (entertainment, birthday parties and corporate team building) and can handle several thousand guests per day.
It’s now raised €3 million from strategic partner ProSiebenSat.1, the leading German entertainment player. This will allow Holodeck to expand its open content platform and extend its network of locations.
The Munich-based media company owns a potential distribution channel for scaling Holodeck VR locations at leisure and activity parks, while other synergies with ProSiebenSat.1 include live broadcasting and VR content generation.
With 7Sports, the sports business unit of ProSiebenSat.1, Holodeck VR plans eSports events leveraging the Holodeck VR platform.
Jonathan Nowak Delgado says: “With this investment, we’ll aim to become the VR touchpoint for the next generation by offering exciting new experiences that are simple, social, and fun.”
Holodeck VR’s experiences combine the real world and digital world so that you can take a ride in bumper cars or on a rollercoaster.
I hope they will have plenty of sick bags at the ready.
Facebook-owned Oculus is shipping its latest VR headgear from today. Preorders for the PC-free Oculus Quest and the higher-end Oculus Rift S opened up three weeks ago.
In a launch blog Oculus touts the new hardware’s “all-in-one, fully immersive 6DOF VR” — writing: “We’re bringing the magic of presence to more people than ever before — and we’re doing it with the freedom of fully untethered movement”.
For a less varnished view on what it’s like to stick a face-computer on your head you can check out our reviews by clicking on the links below…
TC: “It still doesn’t feel like a proper upgrade to a flagship headset that’s already three years old, but it is a more fine-tuned system that feels more evolved and dependable”
The Oculus blog contains no details on pre-order sales for the headsets — beyond a few fine-sounding words.
Meanwhile Facebook has, for months, been running native ads for Oculus via its eponymous and omnipresent social network — although there’s no explicit mention of the Oculus brand unless you click through to “learn more”.
Instead it’s pushing the generic notion of “all-in-one VR”, shrinking the Oculus brand stamp on the headset to an indecipherable micro-scribble.
Here's one of Facebook's ads that targeted me in Europe, back in March:
For those wanting to partake of Facebook flavored face gaming (and/or immersive movie watching), the Oculus Quest and Rift S are available to buy via oculus.com and retail partners including Amazon, Best Buy, Newegg, Walmart, and GameStop in the US; Currys PC World, FNAC, MediaMarkt, and more in the EU and UK; and Amazon in Japan.
Facebook’s VP of AR/VR product Hugo Barra is out after some leadership changes at the top of the Oculus organization. After initially being hired to lead the whole VR division, Barra will now be leading global AR/VR partnerships, while Erick Tseng, Facebook’s director of product management, will be replacing Barra in his most recent role leading AR/VR product management.
Barra came on in early 2017 after the ouster of Oculus’s existing leadership structure, when then-CEO Brendan Iribe was demoted alongside much of the founding team to lead product-specific verticals. Later that year Oculus founder Palmer Luckey was ousted.
Taking a new role @facebook building a global AR/VR partner ecosystem based in NYC, after 2+ amazing years leading the @oculus team. With Quest shipping 5/21, our first-gen VR lineup is now complete. Time for me to take on the next big challenge—bringing AR and VR to more people!
Barra's proximity to CEO Mark Zuckerberg's inner circle was soon diminished after longtime executive Andrew Bosworth was placed ahead of him in the org chart, leading AR/VR at Facebook in a role that also included other consumer hardware efforts like Portal. Barra's transition comes as the company prepares to release two of its latest virtual reality products, the Rift S and Quest.
Late last year, Oculus had an internal reorganization that shifted the team to more specialization-focused groups as opposed to product-focused.
It’s unclear what the full scope of Barra’s new role is. Facebook partnered with Xiaomi — where Barra previously led international efforts — to build the Oculus Go and Xiaomi’s Mi VR headset. Facebook’s recent partnership with Lenovo to build the Rift S showcases just how important these hardware partnerships are to the company.
On Tseng's promotion, a Facebook spokesperson said, "He is the right person to step into this role because of his experience leading product teams at Facebook, and leading the Android product team at Google."
Alongside this news, Facebook noted that longtime content exec Jason Rubin has seen his role expand as well and has received a new title, VP of Special Gaming Initiatives.
The martial arts actor Jet Li turned down a role in the Matrix and has been invisible on our screens because he does not want his fighting moves 3D-captured and owned by someone else. Soon everyone will be wearing 3D-capable cameras to support augmented reality (often referred to as mixed reality) applications. Everyone will have to deal with the sorts of digital-capture issues across every part of our life that Jet Li avoided in key roles and musicians have struggled to deal with since Napster. AR means anyone can rip, mix and burn reality itself.
Tim Cook has warned the industry about “the data industrial complex” and advocated for privacy as a human right. It doesn’t take too much thinking about where some parts of the tech industry are headed to see AR ushering in a dystopian future where we are bombarded with unwelcome visual distractions, and our every eye movement and emotional reaction is tracked for ad targeting. But as Tim Cook also said, “it doesn’t have to be creepy.” The industry has made data-capture mistakes while building today’s tech platforms, and it shouldn’t repeat them.
Dystopia is easy for us to imagine, as humans are hard-wired for loss aversion. This hard-wiring refers to people’s tendency to prefer avoiding a loss versus an equal win. It’s better to avoid losing $5 than to find $5. It’s an evolutionary survival mechanism that made us hyper-alert for threats. The loss of being eaten by a tiger was more impactful than the gain of finding some food to eat. When it comes to thinking about the future, we instinctively overreact to the downside risk and underappreciate the upside benefits.
How can we get a sense of what AR will mean in our everyday lives, that is (ironically) based in reality?
When we look at the tech stack enabling AR, it’s important to note there is now a new type of data being captured, unique to AR. It’s the computer vision-generated, machine-readable 3D map of the world. AR systems use it to synchronize or localize themselves in 3D space (and with each other). The operating system services based on this data are referred to as the “AR Cloud.” This data has never been captured at scale before, and the AR Cloud is 100 percent necessary for AR experiences to work at all, at scale.
Fundamental capabilities such as persistence, multi-user support and outdoor occlusion all need it. Imagine a super version of Google Earth, but one that machines, rather than people, use. This data set is entirely separate from the content and user data used by AR apps (e.g. login account details, user analytics, 3D assets, etc.).
The AR Cloud services are often thought of as just a "point cloud," which leads people to imagine simplistic solutions for managing this data. The data actually has many potential layers, each providing varying degrees of usefulness to different use cases. The term "point" is just a shorthand way of referring to a concept: a 3D point in space. The data format for how that point is selected and described is unique to every state-of-the-art AR system.
The critical thing to note is that for an AR system to work best, the computer vision algorithms are tied so tightly to the data that they effectively become the same thing. Apple's ARKit algorithms wouldn't work with Google's ARCore data even if Google gave them access. The same goes for HoloLens, Magic Leap and all the startups in the space. The performance of open-source mapping solutions is generations behind leading commercial systems.
So we’ve established that these “AR Clouds” will remain proprietary for some time, but exactly what data is in there, and should I be worried that it is being collected?
AR makes it possible to capture everything…
The list of data that could be saved is long. At a minimum, it’s the computer vision (SLAM) map data, but it could also include a wireframe 3D model, a photo-realistic 3D model and even real-time updates of your “pose” (exactly where you are and what you are looking at), plus much more. Just with pose alone, think about the implications on retail given the ability to track foot traffic to provide data on the best merchandise placement or best locations for ads in store (and at home).
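To make the stack above concrete, here is a minimal sketch of those layers as a data model. The names and fields are purely illustrative (no AR vendor publishes its actual schema); the point is that the bottom layer is opaque to humans, while the upper layers are exactly the ones that become personal.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Illustrative only: layers run from machine-readable-only (bottom)
# to increasingly personal (top), as described in the text.

@dataclass
class ARCloudRecord:
    slam_map: bytes  # opaque computer-vision (SLAM) map; useless to a human reader
    wireframe: Optional[List[Tuple[float, float, float]]] = None  # rough room geometry
    photoreal_model: Optional[bytes] = None  # textured 3D scan: highly personal
    pose_log: List[Tuple[float, float, float, float]] = field(default_factory=list)
    # pose_log entries: (timestamp, x, y, z) -- where a user stood and looked

    def privacy_sensitive_layers(self) -> List[str]:
        """Name the layers that could identify a person or place."""
        layers = []
        if self.photoreal_model is not None:
            layers.append("photoreal_model")
        if self.pose_log:
            layers.append("pose_log")
        return layers
```

Seen this way, "should this be collected?" becomes a per-layer question rather than a yes/no for the whole record.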
The lower layers of this stack are only useful to machines, but as you add more layers on top, it quickly starts to become very private. Take, for example, a photo-realistic 3D model of my kid’s bedroom captured just by a visitor walking down the hall and glancing in while wearing AR glasses.
There’s no single silver bullet to solving these problems. Not only are there many challenges, but there are also many types of challenges to be solved.
Tech problems that are solved and need to be applied
Much of the AR Cloud data is just regular data. It should be managed the way all cloud data should be managed. Good passwords, good security, backups, etc. GDPR should be implemented. In fact, regulation might be the only way to force good behavior, as major platforms have shown little willingness to regulate themselves. Europe is leading the way here; China is a whole different story.
A couple of interesting aspects to AR data are:
Similar to Maps or Street View: how "fresh" should the data be, and how much historical data should be saved? Do we need to save a map of where your couch was positioned last week? What scale or resolution should be saved? There's little value in a cm-scale model of the whole world, except for a map of the area right around you.
The biggest aspect, difficult but doable, is ensuring that no personally identifying information leaves the phone. This is equivalent to the image data your phone processes before you press the shutter and upload the photo. Users should know what is being uploaded and why it is OK to capture it. Anything that is personally identifying (e.g. the color texture of a 3D scan) should always be opt-in, with a careful explanation of how it will be used. Homomorphic transformations should be applied to all data that leaves the device, removing anything human-readable or identifiable while still leaving the data in a state that algorithms can interpret for very specific relocalization functionality (when run on the device).
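As a toy illustration of the "nothing human-readable leaves the device" principle: derive opaque descriptors on-device and upload only those. Real AR systems use learned computer-vision feature descriptors, not cryptographic hashes, and these function names are hypothetical; the sketch only shows the data flow.

```python
import hashlib

def to_opaque_descriptor(image_patch: bytes, device_salt: bytes) -> bytes:
    """Toy one-way transform: the output is useless to a human reader,
    yet deterministic, so the same patch relocalizes to the same key.
    (Real systems use learned feature descriptors, not hashes.)"""
    return hashlib.sha256(device_salt + image_patch).digest()

def upload_payload(patches, device_salt):
    # Only opaque descriptors leave the device; raw pixels never do.
    return [to_opaque_descriptor(p, device_salt) for p in patches]
```

The property worth noting is one-wayness: the server can match descriptors against each other for relocalization, but cannot reconstruct the image the user saw.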
There’s also the problem of “private clouds” in that a corporate campus might want a private and accurate AR cloud for its employees. This can easily be hosted on a private server. The tricky part is if a member of the public walks around the site wearing AR glasses, a new model (possibly saved on another vendor’s platform) will be captured.
Tech challenges the AR industry still needs to solve
There are some problems we know about, but we don’t know how to solve yet. Examples are:
Segmenting rooms: You could capture a model of your house, but one side of an inner apartment wall is your apartment while the other side is someone else’s apartment. Most privacy methods to date have relied on something like a private radius around your GPS location, but AR will need more precise ways to detect what is “your space.”
Identifying rights to a space is a massive challenge. Fortunately, social contracts and existing laws are in place for most of these problems, as AR Cloud data is pretty much the same as recording video. There are public spaces, semi-public (a building lobby), semi-private (my living room) and private (my bedroom). The trick is getting the AR devices to know who you are and what it should capture (e.g. my glasses can capture my house, but yours can’t capture my house).
Managing the capture of a place by multiple people, stitching the results into a single model, and discarding overlapping and redundant data makes ownership of the final model tricky.
The Web has the concept of a robots.txt file, which a website owner can host on their site, and the web data collection engines (e.g. Google, etc.) agree to only collect the data that the robots.txt file asks them to. Unsurprisingly this can be hard to enforce on the web, where each site has a pretty clear owner. Some agreed type of “robots.txt” for real-world places would be a great (but maybe unrealistic) solution. Like web crawlers, it will be hard to force this on devices, but like with cookies and many ad-tracking technologies, people should at least be able to tell devices what they want and hopefully market forces or future innovations can require platforms to respect it. The really hard aspect of this attractive idea is “whose robots.txt is authoritative for a place.” I shouldn’t be able to create a robots.txt for Central Park in NYC, but I should for my house. How is this to be verified and enforced?
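A hypothetical "places.txt" could look and parse much like its web cousin. The format below is invented purely for illustration (no such standard exists, and the question of whose file is authoritative for a place remains open), but it shows how small the mechanism itself would be:

```python
# Hypothetical "places.txt" -- a robots.txt analogue for physical spaces.
# The format and semantics are illustrative only.

def parse_places_txt(text: str) -> dict:
    """Parse lines like 'disallow: bedroom' / 'allow: lobby' into a policy."""
    policy = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # strip comments and whitespace
        if not line:
            continue
        rule, _, zone = line.partition(":")
        policy[zone.strip().lower()] = (rule.strip().lower() == "allow")
    return policy

def may_capture(policy: dict, zone: str, default: bool = False) -> bool:
    # Default-deny mirrors the privacy-first stance argued above.
    return policy.get(zone.lower(), default)

policy = parse_places_txt("""
allow: lobby        # semi-public
disallow: bedroom   # private
""")
```

As with robots.txt, the hard part isn't parsing: it's getting devices to honor the file and verifying who has the right to publish it for a given place.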
Social contracts need to emerge and be adopted
A big part of solving AR privacy problems will come from developing a social contract that identifies when and where it's appropriate to use a device. When camera phones were introduced in the early 2000s, there was a mild panic about how they could be misused; for example, cameras used secretly in bathrooms, or photos taken in public without a person's permission. The OEMs tried to head off that public fear by having the cameras make a "click" sound. Adding that feature helped society adopt the new technology and become accustomed to it pretty quickly. As a result of having the technology in consumers' hands, society adopted a social contract — learning when and where it is OK to hold up your phone for a picture and when it is not.
… [but ] the platform doesn’t need to capture everything in order to deliver a great AR UX.
Companies added to this social contract, as well. Sites like Flickr developed policies to manage images of private places and things and how to present them (if at all). Similar social learning took place with Google Glass versus Snap Spectacles. Snap took the learnings from Glass and solved many of those social problems (e.g. they are sunglasses, so we naturally take them off indoors, and they show a clear indicator when recording). This is where the product designers need to be involved to solve the problems for broad adoption.
Challenges the industry cannot predict
AR is a new medium. New mediums come along only every 15 years or so, and no one can predict how they will be used. SMS experts never predicted Twitter, and mobile mapping experts never predicted Uber. Platform companies, even the best-intentioned, *will* make mistakes.
These are not tomorrow’s challenges for future generations or science fiction-based theories. The product development decisions the AR industry is making over the next 12-24 months will play out in the next five years.
This is where AR platform companies are going to have to rely on doing a great job of:
Ensuring their business model incentives are aligned with doing the right thing by the people whose data they capture; and
Communicating their values and earning the trust of the people whose data they capture. Values need to become an even more explicit dimension of product design. Apple has always done a great job of this. Everyone needs to take it more seriously as tech products become more and more personal.
What should the AR players be doing today to not be creepy?
Here’s what needs to be done at a high level, which pioneers in AR believe is the minimum:
Personal Data Never Leaves Device, Opt In Only: No personally identifying data required for the service to work leaves the device. Give users the option to opt in to sharing additional personal data if they choose, in exchange for better app feedback. Personal data does NOT have to leave the device in order for the tech to work; anyone arguing otherwise doesn't have the technical skills and shouldn't be building AR platforms.
Encrypted IDs: Coarse Location IDs (e.g. Wi-Fi network name) are encrypted on the device, and it’s not possible to tell a location from the GPS coordinates of a specific SLAM map file, beyond generalities.
Data Describing Locations Only Accessible When Physically at Location: An app can’t access the data describing a physical location unless you are physically in that location. That helps by relying on the social contract of having physical permission to be there, and if you can physically see the scene with your eyes, then the platform can be confident that it’s OK to let you access the computer vision data describing what a scene looks like.
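One way to encode both of the last two rules (encrypted coarse IDs, access only when physically present) is to index map data by a one-way hash of signals you can only observe on site, such as a Wi-Fi network name. This is an illustrative sketch, not any platform's actual design:

```python
import hashlib

def location_key(ssid: str) -> bytes:
    """Coarse location ID (e.g. a Wi-Fi network name) is never uploaded
    in the clear -- only a one-way hash of it. You can only derive the
    key if you can observe the network in person."""
    return hashlib.sha256(b"arcloud-demo-v1:" + ssid.encode()).digest()

class MapStore:
    """Toy server: map blobs are indexed by hashed location ID, so a
    client can only fetch a map if it observes the same coarse signals
    on site and recomputes the same key."""
    def __init__(self):
        self._maps = {}

    def publish(self, key: bytes, map_blob: bytes):
        self._maps[key] = map_blob

    def fetch(self, key: bytes):
        return self._maps.get(key)  # wrong or absent key -> nothing
```

The server never learns which place a key describes; it only matches opaque keys, which is exactly the property the text argues for.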
Machine-Readable Data Only: The data that does leave the phone is only able to be interpreted by proprietary homomorphic algorithms. No known science should be able to reverse engineer this data into anything human readable.
App Developers Host User Data On Their Servers, Not The Platforms: App developers, not the AR platform company, host the application and end user-specific data re: usernames, logins, application state, etc. on their servers. The AR Cloud platform should only manage a digital replica of reality. The AR Cloud platform can’t abuse an app user’s data because they never touch or see it.
Business Models Pay for Use Versus Selling Data: A business model based on developers or end users paying for what they use ensures the platform won’t be tempted to collect more than necessary and on-sell it. Don’t create financial incentives to collect extra data to sell to third parties.
Privacy Values on Day One: Publish your values around privacy, not just your policies, and ask to be held accountable to them. There are many unknowns, and people need to trust the platform to do the right thing when mistakes are made. Values-driven companies like Mozilla or Apple will have a trust advantage over other platforms whose values we don’t know.
User and Developer Ownership and Control: Figure out how to give end users and app developers appropriate levels of ownership and control over data that originates from their device. This is complicated. The goal (we’re not there yet) should be to support GDPR standards globally.
Constant Transparency and Education: Work to educate the market and be as transparent as possible about policies and what is known and unknown, and seek feedback on where people feel “the line” should be in all the new gray areas. Be clear on all aspects of the bargain that users enter into when trading some data for a benefit.
Informed Consent, Always: Make a sincere attempt at informed consent with regard to data capture (triply so if the company has an ad-based business model). This goes beyond an EULA, and IMO should be in plain English and include diagrams. Even then, it’s impossible for end users to understand the full potential.
Even apart from the creep factor, remember there's always the chance that a hack, or a government agency acting legally, accesses the data captured by the platform. You can't expose what you don't collect, and it doesn't need to be collected. That way, people accessing any exposed data can't tell precisely which location an individual map file refers to (the end user encrypts it; the platform doesn't need the keys), and even if they could, the data describing the location in detail can't be interpreted.
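The "end user holds the keys" flow can be sketched as follows. This is NOT real cryptography — a production system would use a vetted authenticated cipher such as AES-GCM, which Python's standard library lacks, so this demo derives a toy keystream from SHA-256 purely to show the data flow in which the platform stores an opaque blob it cannot read:

```python
import hashlib, secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream (SHA-256 in counter mode). Illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_map(map_blob: bytes, user_key: bytes):
    """Runs on the user's device; only (nonce, ciphertext) is uploaded."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(map_blob, keystream(user_key, nonce, len(map_blob))))
    return nonce, ct

def decrypt_map(nonce: bytes, ct: bytes, user_key: bytes) -> bytes:
    """Also runs on-device: the platform never sees user_key."""
    return bytes(a ^ b for a, b in zip(ct, keystream(user_key, nonce, len(ct))))
```

Because the platform stores only the nonce and ciphertext, a breach or subpoena of its servers yields nothing a human (or rival algorithm) can interpret.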
Blockchain is not a panacea for these problems — specifically as applied to the foundational AR Cloud SLAM data sets. The data is proprietary and centralized, and if managed professionally, the data is secure and the right people have the access they need. There’s no value to the end user from blockchain that we can find. However, I believe there is value to AR content creators, in the same way that blockchain brings value to any content created for mobile and/or web. There’s nothing inherently special about AR content (apart from a more precise location ID) that makes it different.
For anyone interested, the Immersive Web working group at W3C and Mozilla are starting to dig further into the various risks and mitigations.
Where should we put our hope?
This is a tough question. AR startups need to make money to survive, and as Facebook has shown, it was a good business model to persuade consumers to click OK and let the platform collect everything. Advertising as a business model creates inherently misaligned incentives with regard to data capture. On the other hand, there are plenty of examples where capturing data makes the product better (e.g. Waze or Google search).
Education and market pressure will help, as will (possibly necessary) privacy regulation. Beyond that we will act in accordance with the social contracts we adopt with each other re: appropriate use.
The two key takeaways are that AR makes it possible to capture everything, and that the platform doesn’t need to capture everything in order to deliver a great AR UX.
To draw a parallel with Google: web crawling forced us to figure out what computers should be allowed to read. AR is widely distributing computer vision, and we need to figure out what computers should be allowed to see.
The good news is that the AR industry can avoid the creepy aspects of today’s data collection methods without hindering innovation. The public is aware of the impact of these decisions and they are choosing which applications they will use based on these issues. Companies like Apple are taking a stand on privacy. And most encouragingly, every AR industry leader I know is enthusiastically engaged in public and private discussions to try to understand and address the realities of meeting the challenge.
Virtual reality is the ultimate money pit. Tech giants are selling low-margin hardware on which users play software that was funded by those same companies. This strategy seems to continue year after year without a hockey stick chart in sight.
While VR may still be waiting for its breakout hardware hit, there has already been a clear software standout. Beat Games released Beat Saber one year ago. The popular game is part Guitar Hero, part Fruit Ninja, and it utilizes the benefits of VR to let players slice their way through an EDM soundtrack, which the company’s music-mixing CEO produced.
While few VR studios have surpassed lifetime revenues in the low millions, Beat Games sold more than 1 million copies of the $20 game after just nine months on the market. The title's addictive gameplay has left it endlessly showcased across the web by game streamers, enabling a mainstream success that the VR market really hadn't seen before.
Beat Games CEO Jaroslav Beck says the title’s success is “dope,” but he’s more concerned with ensuring he doesn’t leave anything on the table, whether that’s expansion into esports or arcades or the fitness market. I chatted with Beck about what it means to actually finish a game in the era of freemium, why he doesn’t want to become a manager and how Beat Games has never raised VC funding.
‘Unlocked my head’
Beck didn’t come up with the idea for Beat Saber. Like a lot of people who have bought the game, his first experience with it was watching an early demo on the web built by a couple of tinkering Czech programmers.
Ján Ilavský and Vladimír Hrinčár had been building games together since high school. The pair had published a few games; their biggest success was Chameleon Run, a mobile game that won an Apple Design Award in 2016. Following that success, the duo set its sights on building something for virtual reality.
Beat Games co-founders (left to right) Hrinčár, Beck and Ilavský
They were already a year into development when Beck saw a demo on Facebook. At the time, Beck had been living in L.A. building a client base for his own studio, Epic Music Productions, which had already done work for clients like EA, Blizzard and Disney. Despite thinking the VR market might be a passing fad, the Beat Saber demo piqued his interest and led him to more online investigating.
“I was so skeptical, but then I saw the teaser that these two had put out and it was like something had unlocked my head,” Beck told TechCrunch.
After discovering the pair shared his Czech background, he contacted them and flew out to Prague to convince them to let him build the soundtrack for their new game. He also planted the idea of starting a new company around the title if it achieved the success that he thought it would.
Convincing the “strict programmers” to build a venture around their demo was more challenging than expected, he admits, but Beck eventually got them on board to finish the game. The team of perfectionists had a tougher time hitting their deadlines than expected, and even after releasing an “early access version” of the title a year ago, Beck still talks about finishing the game as though it’s a far-flung dream. This, despite the fact that Beat Saber is one of the most-downloaded games in VR’s early history.
How the team reached this achievement requires dissecting the qualities of a viral game, which is no small task. Beck thinks the game was successful simply because Ján and Vladimír built the type of game they wanted to play without ever worrying about building a commercial hit.
But for a medium that’s been so difficult to showcase without putting on a headset, the Beat Saber team set out early on to ensure the game was highly visible. Because many of the songs were Beck’s, it was much simpler to ensure that YouTubers could freely post footage of the gameplay without concerns for takedowns or reduced monetization. It worked; it wasn’t long after the game’s release that top streamers like PewDiePie and Jacksepticeye were playing the game for their massive subscriber bases.
Beck says that videos showcasing the game on YouTube have now received more than 2 billion views. That popularity has gradually expanded offline, with plenty of gamers that I’ve chatted with saying the viral Beat Saber videos led them to get VR systems in the first place.
[Embedded video: Beat Saber with Brie Larson]
But Beat Saber isn’t just helping sell VR headsets, it’s changing how the systems are built.
Oculus has been using Beat Saber as a way to benchmark the quality of its new inside-out tracking system, ensuring that the new headsets can handle the game’s most advanced modes.
“Our tracking team continued to improve their technology until we could play Expert+ songs and achieve the sorts of scores we see on Rift,” Oculus director of ecosystem Chris Pruett told TechCrunch. “Beat Saber proved to be a very valuable bar against which we have been able to measure our overall tracking quality.”
Beck similarly says that one of Valve’s recent code updates for its SteamVR tracking system was made specifically to account for the tracking needs of Beat Saber’s fastest players.
‘Capturing the full potential’
After smashing through VR sales records, one of Beat Games’ biggest challenges might be ensuring that the platform’s limited reach doesn’t end up stifling its own growth.
The company is already working on some of its own hardware after partnering with South Korea’s SKonec to build custom VR arcade machines, so that players in Asia can try out the title without having to fully buy in to virtual reality.
Some in the VR industry see Beat Saber’s success as a sign that the VR industry’s days of lackluster growth are behind it.
“…The most interesting data point for me is that Beat Saber sold over one million copies in a year and is making over $20 million in revenue,” Tipatat Chennavasin, a co-founder of The VR Fund and an advisor to Beat Games, told TechCrunch in an email interview. “That makes it not just the biggest VR games success story but also the biggest indie games success story on any platform of the past year. Angry Birds’ success on iOS helped legitimize smartphones as a gaming platform when it made $6 million in one year.”
Finding the widest audience has led the Beat Games team to ensure that their code is lean and that the game “could run on a washing machine” if it needed to, Beck says. Even optimizing the game has been intensive work for the small team. Despite having such a hot title, Beck and his co-founders aren’t racing to turn their endeavor into an empire. In the past year, they’ve grown to just eight full-time employees.
“It’s complicated because we don’t really want to scale that much because then we’ll just become managers of a huge team,” he says. “We created the game and we want to be the ones to finish it.”
In an industry seemingly filled with investments that didn’t live up to the hype, Beat Games has never received outside funding.
“I’m really proud that we were able to build the company with this mindset of making decisions based on what is good for the game and not what is the most profitable thing,” he says. “I think the worst thing for a developer is that you get this awesome idea that you’re super excited about and then investors tell you, ‘Yeah that’s cool, but we’ll be broke.’ Then it’s like, what’s the point? You didn’t start making these games just to become crazy rich, right?”
“I’m not saying we will never raise funding, though,” he quickly adds.
Beck says that the team has some ideas for follow-on titles for Beat Games, but that completing their first effort is the studio’s central priority. Finishing Beat Saber seems like it should be well within the team’s reach, but Beck seems to see “finishing” the game as a nebulous task better defined by what’s left on the table rather than what they actually ship.
“Finishing the game isn’t about just bringing in all of these new features, but it’s about capturing the full potential of the game, whether that’s in esports, fitness or in just exploring music,” Beck says. “That’s a lot of work… the question is if we will survive until then (laughs), but I sure hope that we do.”
Lora DiCarlo, a startup coupling robotics and sexual health, has $2 million to shove in the Consumer Electronics Show’s face.
The same day the company was set to announce its fundraise, the Consumer Technology Association, the event producer behind CES, decided to re-award the Bend, Oregon-based Lora DiCarlo the innovation award it had revoked from the company ahead of this year’s big event.
“We appreciate this gesture from the CTA, who have taken an important step in the right direction to remove the stigma and embarrassment around female sexuality,” Lora DiCarlo founder and chief executive officer Lora Haddock (pictured) told TechCrunch. “We hope we can continue to be a catalyst for meaningful changes that makes CES and the consumer tech industry inclusive for all.”
In January, the CTA nullified the award it had granted the business, which is building a hands-free device that uses biomimicry and robotics to help people achieve a blended orgasm by simultaneously stimulating the G spot and the clitoris. Called Osé, the device uses micro-robotic technology to mimic the sensation of a human mouth, tongue and fingers in order to produce a blended orgasm for people with vaginas.
Lora DiCarlo’s debut product, Osé, set to release this fall. The company says the device is currently undergoing changes and may look different upon release.
“CTA did not handle this award properly,” CTA senior vice president of marketing and communications Jean Foster said in a statement released today. “This prompted some important conversations internally and with external advisors and we look forward to taking these learnings to continue to improve the show.”
Lora DiCarlo had applied for the CES Innovation Award back in September. In early October, the CTA notified the company of its award. Fast-forward to October 31, 2018 and CES Projects senior manager Brandon Moffett informed the company they had been disqualified. The press storm that followed only boosted Lora DiCarlo’s reputation, put Haddock at the top of the speakers’ circuit and proved, once again, that sexuality is still taboo at CES and that the gadget show has failed to adapt to the times.
In its original letter to Lora DiCarlo, obtained by TechCrunch, the CTA called the startup’s product “immoral, obscene, indecent, profane or not in keeping with the CTA’s image” and said it did “not fit into any of [its] existing product categories and should not have been accepted” to the awards program. The CTA later apologized for the mishap before ultimately re-awarding the prize.
At the request of the CTA, Haddock and her team have been working with the organization to create a more inclusive show and better incorporate both sextech companies and women’s health businesses.
“We were a catalyst to a huge, resounding amount of support from a very large community of people who have been quietly thinking this is something that needs to happen,” Haddock told TechCrunch. “For us, it was all about timing.”
Lora DiCarlo plans to use its infusion of funding, provided by new and existing investors led by the Oregon Opportunity Zone Limited Partnership, to hire ahead of the release of its first product. Pre-orders for the Osé, which will retail for $290, will open this summer with an expected official release this fall.
Haddock said four other devices are in the pipeline, one specifically for clitoral stimulation, another for clitoral and vaginal stimulation, one for anywhere on the body and the other, she said, is a different approach to the way people with vulvas masturbate.
“We are aiming for that hands-free, human experience,” Haddock said. “We wanted to make something really interesting and very different and beautiful.”
Next year, Haddock says they plan to integrate their products with virtual reality, a step that will require a larger boost of capital.
Haddock and her employees don’t plan to quiet down any time soon. With their newfound fame, the team will continue supporting the expanding sextech industry and gender equity within tech generally.
“We’ve realized our social mission is so important,” Haddock said. “Gender equality, at its source, is about sex. We absolutely demonize sex and sexuality … When you talk about removing sexual stigmas, you are also talking about removing gender stigmas and creating gender equity.”
More than three years after Facebook released one of the biggest gambles in its existence, a virtual reality headset it paid billions to launch as its own, the company has grown more embattled while its moonshot VR flagship has grown more conservative.
Facebook’s sequel to the Oculus Rift is not the Rift 2; it is the Rift S. Just as Apple’s “S” iPhones denote the tock to a tick rather than a full revolution, the latest product is a hardware update, and a pretty minor one at that: more of a 1.2 than a 2.0.
The hardware design isn’t really the story that Facebook and Oculus are pushing with the Rift S. The hardware is a product of sacrifices that give the entire headset a lower-end feel than the Quest or Go (it was largely built by Lenovo, after all); as such, this release is mostly about the software advancements.
The built-in “Insight” tracking is the highlight and lowlight of the release. It’s undoubtedly a downgrade from the full capabilities of its predecessor; while the headset tracks itself very well, I would frequently lose tracking on the controllers at the cameras’ periphery. But it also makes the system considerably easier to set up and maintain. Gone are the external sensors and their damn driver updates, gone is the constant need for recalibration when a tracker was bumped out of place: this is a headset that learns from your space.
The external sensor system on the original Rift required three spaced-out sensors to fully capture the total range of body motion. The headset only came with two, which limited these capabilities and forced game developers to build titles that would constantly lead users to reorient themselves toward the sensors. No such reorienting is needed with the Rift S, which brings the tracking cameras into the headset itself.
Features like the updated “Guardian” boundary setup and the passthrough camera that activates once you leave the safe space showcase some of the key learnings of the past few years and some of the added benefits of the onboard tracking. All of this feels very natural and mature.
While the headset’s onboard tracking feels great, the experience for controller tracking is more hit-or-miss. Common in-game motions like examining an object and turning the top of the controller away from you can lead to briefly obscuring the tracking rings and thus temporarily losing tracking. It always seems to recover, but there are moments in games where you are being chased and need to shoot behind you or grab something offscreen while maintaining focus on an object. These are moments where you see some short-sightedness in Oculus’s choice.
My first impressions of the Rift S were less than stellar, but after getting it in my home for some lengthy VR sessions, I’ve warmed up to it a little bit. It still doesn’t feel like a proper upgrade to a flagship headset that’s already three years old, but it is a more fine-tuned system that feels more evolved and dependable. There are still certainly some things I don’t like…
Things that are subpar:
The controller tracking system isn’t robust enough for the demands of VR movements; you will notice, and frequently feel, the outside edge of the tracking boundary in intense gaming sessions.
The lack of adjustable displays to account for the differing distances between people’s eyes means the one-size-fits-all headset gives a less optimal experience for certain users. I am one of those users who falls just outside that sweet spot, which made the headset a bit less comfortable on the eyes than its predecessor for me.
The original Touch controllers were better and more ergonomic; with the headset blinding me, I put these controllers in the wrong hand roughly 50% of the time.
The lack of built-in headphones was an annoying choice; the near-ear speakers are better than expected but don’t isolate you from the world, a concept the entire headset is otherwise built around.
The face padding is far less comfortable and durable than the Oculus Go’s or Oculus Quest’s, which feels a bit odd for the top-of-the-line experience.
Things that aren’t as subpar as I had feared:
The switch from OLED to LCD panels feels like a downgrade on paper, but the way the sub-pixels are laid out on the higher-resolution Rift S display gives a much crisper image, even if the LCD screen’s blacks feel a bit grayer. Running at 80Hz versus 90Hz isn’t a great choice, though.
I like the flexible head straps of the rest of Oculus’s product line, but the halo design that Lenovo lifted from Sony feels quite comfortable here and given that it’s a PC-tethered system there aren’t unfortunate portability trade-offs like there were with the Mirage Solo headset.
I still prefer the build on the previous generation Rift, but the headset feels less clanky than I detailed in my hands-on.
It’s a headset that will offer more pleasant on-boarding and usage to new customers, but for what is supposed to be the industry’s leading high-end headset, Oculus has taken a small step back in performance.
For new customers looking for something more high-end than the Quest, the device still offers a lot of advantages. You’ll have some time to make a choice: pre-orders are live for the headset today, but it doesn’t ship until May 21.
Facebook’s new VR headsets, the Oculus Rift S and Quest, are both now live for pre-orders today at $399 and will ship May 21.
Oculus released the original Rift just over three years ago. Fast forward to the present and the Facebook VR product line has gotten more robust. The $199 Oculus Go offers a cheap arena to watch media content, Rift S offers a PC experience that can showcase complex gaming experiences while the Quest aims to be a good fit for novices looking for a portable VR experience.
Facebook hopes the Quest will bring in a new class of users into VR while the Rift S allows it to expand its concurrent reach on PC.
New machine learning technologies, user interfaces and automated content creation techniques are going to expand the personalization of storytelling beyond algorithmically generated news feeds and content recommendation.
The next wave will be software-generated narratives that are tailored to the tastes and sentiments of a consumer.
Concretely, it means that your digital footprint, personal preferences and context unlock alternative features in the content itself, be it a news article, live video or a hit series on your streaming service.
The same title contains different experiences for different people.
When you use YouTube, Facebook, Google, Amazon, Twitter, Netflix or Spotify, algorithms select what gets recommended to you. The current mainstream services, with their user interfaces and recommendation engines, have been optimized to serve you content you might be interested in.
Your data, other people’s data, content-related data and machine learning methods are used to match people and content, thus improving the relevance of content recommendations and efficiency of content distribution.
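The matching described above can be sketched in miniature: represent a user and each content item as feature vectors, and rank items by similarity. This is an illustrative toy only; the vectors, titles and field names are invented, and real recommendation systems are vastly more sophisticated.

```python
# Toy sketch of embedding-based content matching (illustrative only).
# A user profile and each catalog item are feature vectors; items are
# ranked by cosine similarity to the user's vector. All data here is
# hypothetical, not any real service's model.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(user_vec, catalog):
    """Rank catalog items by similarity to the user's taste vector."""
    return sorted(catalog, key=lambda item: cosine(user_vec, item["vec"]), reverse=True)

user = [0.9, 0.1, 0.4]  # e.g. interest in music, politics, gaming
catalog = [
    {"title": "Synthwave mix", "vec": [0.8, 0.0, 0.2]},
    {"title": "Election recap", "vec": [0.1, 0.9, 0.0]},
]
ranked = recommend(user, catalog)  # music-heavy item ranks first
```

The same similarity machinery that ranks whole items is what smart content would push one level deeper, into the pieces of the content itself.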
However, so far the content experience itself has mostly been the same for everyone. If the same news article, live video or TV series episode gets recommended to you and me, we both read and watch the same thing, experiencing the same content.
That’s about to change. Soon we’ll be seeing new forms of smart content, in which user interface, machine learning technologies and content itself are combined in a seamless manner to create a personalized content experience.
What is smart content?
Smart content means that the content experience itself is affected by who is seeing, watching, reading or listening to the content. The content itself changes based on who you are.
Netflix has recently started testing new forms of interactive content (TV series episodes, e.g. Black Mirror: Bandersnatch) in which the user’s own choices directly affect the content experience, including dialogue and storyline. And more is on its way. With the series Love, Death & Robots, Netflix is experimenting with episode order, serving the episodes in a different order to different users.
Now imagine that TikTok’s individual short videos were automatically personalized, with effects chosen by an AI system, so that the whole video would be customized for you. Or that the choices in Netflix’s interactive content, affecting the plot twists, dialogue and even soundtrack, were made automatically by algorithms based on your profile.
[Embedded video: The Next Leap: How A.I. will change the 3D industry, by Andrew Price]
Say that a news article you read or listen to covers a political topic that is unfamiliar to you. Your version of the story might use different concepts and offer a different angle than the version served to a friend who’s really deep into politics. A beginner’s smart content news experience would differ from that of a topic enthusiast.
Content itself will become a software-like fluid and personalized experience, where your digital footprint and preferences affect not just how the content is recommended and served to you, but what the content actually contains.
How is it possible to create smart content that contains different experiences for different people?
Content needs to be thought of and treated as an iterative, configurable process rather than a ready-made static whole that is finished once it has been published into the distribution pipeline.
Importantly, the core building blocks of the content experience change: smart content consists of atomized modular elements that can be modified, updated, remixed, replaced, omitted and activated based on varying rules. In addition, content modules made in the past can be reused where applicable. Content is designed and developed more like software.
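The idea of atomized modules activated by rules can be sketched concretely. In this toy model, an "article" is a list of modules, each carrying a rule that decides whether it activates for a given reader profile; the delivered content is just the modules that match. The module bodies, profile fields and rules are all invented for illustration, not a description of any real system.

```python
# Toy sketch of "atomized" content modules assembled per reader.
# Each module carries a rule (a predicate over the reader's profile)
# deciding whether it activates; the delivered variant is the list of
# activated module bodies. All names here are hypothetical.

def assemble(modules, profile):
    """Return the content variant activated for this reader profile."""
    return [m["body"] for m in modules if m["rule"](profile)]

article = [
    {"body": "Plain-language intro to the topic.",
     "rule": lambda p: p["expertise"] == "beginner"},
    {"body": "In-depth analysis with sources.",
     "rule": lambda p: p["expertise"] == "expert"},
    {"body": "Shared conclusion for all readers.",
     "rule": lambda p: True},  # always activated
]

beginner_view = assemble(article, {"expertise": "beginner"})
expert_view = assemble(article, {"expertise": "expert"})
```

Two readers request "the same article" and receive different module sets, while shared modules keep the core story common to both, which is the essay’s point about content behaving like configurable software.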
Currently a significant amount of human effort and computing resources are used to prepare content for machine-powered content distribution and recommendation systems, varying from smart news apps to on-demand streaming services. With smart content, the content creation and its preparation for publication and distribution channels wouldn’t be separate processes. Instead, metadata and other invisible features that describe and define the content are an integral part of the content creation process from the very beginning.
Turning Donald Glover into Jay Gatsby
With smart content, the narrative or image itself becomes an integral part of an iterative feedback loop, in which the user’s actions, emotions and other signals as well as the visible and invisible features of the content itself affect the whole content consumption cycle from the content creation and recommendation to the content experience. With smart content features, a news article or a movie activates different elements of the content for different people.
It’s very likely that smart content for entertainment purposes will have different features and functions than news media content. Moreover, people expect a frictionless, effortless content experience, so smart content differs from games: it doesn’t necessarily require direct actions from the user. If the person so chooses, personalization happens proactively and automatically, without explicit user interaction.
Creating smart content requires both human curation and machine intelligence. Humans focus on things that require creativity and deep analysis while AI systems generate, assemble and iterate the content that becomes dynamic and adaptive just like software.
Sustainable smart content
Smart content has different configurations and representations for different users, user interfaces, devices, languages and environments. The same piece of content contains elements that can be accessed through a voice user interface or presented in augmented reality applications. Or the whole content expands into a fully immersive virtual reality experience.
[Embedded video: BBC Click 360: The world's first entirely 360 TV episode]
In the same way as with personalized user interfaces and smart devices, smart content can be used for good and bad. It can be used to enlighten and empower, as well as to trick and mislead. Thus it’s critical that a human-centered approach and sustainable values are built into the very core of smart content creation. Personalization needs to be transparent, and the user needs to be able to choose whether she wants the content to be personalized or not. And of course, not all content will be smart in the same way, if at all.
If used in a sustainable manner, smart content can break filter bubbles and echo chambers, as it can be used to make a wide variety of information more accessible to diverse audiences. Through personalization, challenging topics can be presented to people according to their abilities and preferences, regardless of their background or level of education. For example, a beginner’s version of vaccination content or a digital media literacy article might use gamification elements, while the more experienced user gets a thorough, fact-packed account of recent developments and research results.
Smart content also aligns with efforts against today’s information operations, such as fake news in its different forms, including “deepfakes” (http://www.niemanlab.org/2018/11/how-the-wall-street-journal-is-preparing-its-journalists-to-detect-deepfakes). If content is like software, legitimate software runs on your devices and interfaces without a problem. On the other hand, machine-generated, realistic-looking but suspicious content, like a deepfake, can be detected and filtered out based on its signature and other machine-readable qualities.
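The signature idea hinted at above can be sketched with standard cryptographic tools: a publisher attaches a signature to its media, and clients verify it before rendering, so unsigned or tampered media fails the check. This is a minimal HMAC-based illustration under a shared secret; real content-provenance systems would use public-key signatures, but the verification logic is analogous. The key and byte strings are placeholders.

```python
# Minimal sketch: verifying a publisher's content signature so that
# tampered or unsigned media (e.g. a deepfake) can be filtered out.
# Uses an HMAC with a shared secret for simplicity; real provenance
# schemes would use public-key signatures. All values are placeholders.
import hashlib
import hmac

PUBLISHER_KEY = b"shared-secret-for-illustration"  # hypothetical key

def sign(content: bytes) -> str:
    """Produce a hex signature for a piece of content."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def is_authentic(content: bytes, signature: str) -> bool:
    """Constant-time check that the content matches its signature."""
    return hmac.compare_digest(sign(content), signature)

video = b"original newsroom footage"
tag = sign(video)

is_authentic(video, tag)                 # True: untampered content passes
is_authentic(b"doctored footage", tag)   # False: altered content is rejected
```

A client applying such a check treats unverifiable media the way an OS treats unsigned executables: it can still exist, but it gets flagged or filtered rather than trusted by default.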
Smart content is the ultimate combination of user experience design, AI technologies and storytelling.
The first players to master smart content will be among tomorrow’s reigning digital giants. And that’s one of the main reasons why today’s tech titans are moving seriously into the content game. Smart content is coming.