After meeting industrial designer Cobus Bothma, you’d be forgiven for assuming he works in the gaming industry or Hollywood.

He’s a big proponent of VR, AR and GPU computing. But Bothma is the director of applied research at Kohn Pedersen Fox, a New York-based global architecture firm whose work includes the city’s Museum of Modern Art.

Bothma discussed KPF’s latest GPU-powered rendering projects and use of NVIDIA Holodeck, our virtual reality collaboration environment, at the recent GPU Technology Conference.

Rendering with RTX

To create computer-generated images for architectural designs, KPF is exploring real-time ray-traced rendering on workstations powered by the NVIDIA Quadro RTX 6000 GPU.

Bothma showed the GTC audience a rendering that had been generated using a test version of the nascent Project Lavina rendering software, which was designed to harness the dedicated ray tracing silicon of NVIDIA Turing GPUs.

It was a detailed scene that had been rendered in real time on a single RTX 6000, showing a KPF-designed building, along with a landscaped park with flowers, grass and trees that took on a real sense of depth from the shading.

The software was developed by Chaos Group, an early adopter of NVIDIA’s Turing architecture-based GPUs. KPF’s rendering work with Project Lavina was an effort to test the limits of the hardware and software, shading 9 billion instanced triangles and 19 million unique triangles in real time.

Design Iterations

GPUs are accelerating design iterations at KPF. Architectural projects come with an extensive list of requirements from clients, city codes, local governments and others. For example, a client might require a certain amount of green space and sunlight for a site. A project might have high-density requirements, such as tall buildings, or low-density configurations that allow more open space.

In the past, that required a designer to manually model, analyze and adjust work — and then continue to adjust and analyze to find the best solution. This was labor and time intensive, and would typically require multiple iterations.

With KPF’s workflow, when designers leave work for the night, they can run the project parameters and let the workstations not only model, analyze and iterate designs, but also produce the rendered results in real time.

This can produce thousands of design iterations, rendered overnight on GPUs against the brief’s requirements, and lets the architects filter the best solutions based on parameters and results to move the early stages of the design forward.
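
To make the idea concrete, here is a minimal sketch of what such an overnight parametric sweep could look like. The parameters, scoring functions and thresholds are illustrative assumptions, not KPF’s actual tools or brief.

```python
# Illustrative sketch of an overnight parametric design sweep.
# The parameters, scoring functions and thresholds are hypothetical;
# KPF's actual pipeline and brief requirements are not public.
from itertools import product

def evaluate(height_m, footprint_m2, spacing_m):
    """Toy proxy metrics for a single massing option."""
    floors = int(height_m // 4)                      # ~4 m floor-to-floor (assumed)
    floor_area = floors * footprint_m2               # gross floor area
    site_area = 10_000                               # fixed 1-hectare site (assumed)
    green_space = site_area - footprint_m2           # open space left on the site
    # Crude daylight proxy: wider spacing and a smaller footprint score higher.
    daylight = spacing_m / (1 + footprint_m2 / site_area * 10)
    return {"height": height_m, "footprint": footprint_m2, "spacing": spacing_m,
            "floor_area": floor_area, "green_space": green_space, "daylight": daylight}

# Sweep the design space (a real run would cover thousands of combinations).
options = [evaluate(h, f, s)
           for h, f, s in product(range(40, 201, 10),     # tower heights in meters
                                  range(500, 3001, 250),  # footprints in m^2
                                  range(10, 41, 5))]      # tower spacing in meters

# Filter against the brief, then rank the survivors.
brief = [o for o in options if o["green_space"] >= 7_000 and o["floor_area"] >= 30_000]
best = sorted(brief, key=lambda o: o["daylight"], reverse=True)[:10]
for o in best:
    print(o)
```

In practice the evaluation step would call the firm’s modeling, analysis and GPU-rendering tools rather than a toy formula, but the filter-and-rank pattern is the same.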

Holodeck Teamwork

KPF uses NVIDIA Holodeck for photorealistic VR collaboration and internal communications on projects. This allows team members to review multiple design options before presenting the best options to clients.

The Holodeck VR environment includes sensations of real-world presence through sight and sound. It enables remote team members to “walk around and through” architectural models and communicate and collaborate in VR sessions with other people in a much more interactive fashion than traditional video conferencing.

“Holodeck allows us to look at designs in detail,” Bothma said. “We can move components apart and understand the structure underneath it. It will transform the way we collaborate on international projects over the next years.”

The post KPF Pushes Limits of Building-Design Rendering Using NVIDIA RTX appeared first on The Official NVIDIA Blog.

Nefertari’s tomb is hailed as one of the finest in all of Egypt. And now visitors can explore it in exquisite detail without hopping on a transoceanic flight.

Nefertari was known as the most beautiful of five wives of Ramses II, a pharaoh renowned for his colossal monuments. The tomb he built for his favorite queen is a shrine to her beauty — every centimeter of the walls in the tomb’s three chambers and connecting corridors is adorned with colorful scenes.

Like most of the tombs in the Valley of the Queens, this one had been plundered by the time it was discovered by archaeologists in 1904. And while preservation efforts have been made, the site remains extremely fragile, not to mention remote to most of the world’s population.

Simon Che de Boer and his New Zealand-based VFX R&D company, realityvirtual.co, have found a way to digitally preserve Nefertari’s tomb and give countless individuals the chance to see inside it.

Nefertari: A Journey to Eternity is a VR experience that uses high-end photogrammetry, visual effects techniques and AI to create an amazingly detailed experience that returns Queen Nefertari’s tomb to its original glory. Visitors can digitally walk around, view the scene from different angles and zoom in for a closer look.

It’s an amazingly realistic substitute for those who might otherwise have to travel to the other side of the Earth to experience it.

Powerful Data Crunching with NVIDIA Quadro GPUs

To replicate the tomb’s elaborate details, Che de Boer captured nearly 4,000 42-megapixel photographs of the site, then combined photogrammetry (the science of making measurements from photographs) with deep learning methods for processing and visualization.

NVIDIA GPUs played a critical role in processing the many hours of photogrammetric data collected onsite, crunching it many times faster than would be possible on CPUs.

GPUs were also integral to performing 3D reconstruction and presenting detailed textures. Working on powerful HP workstations equipped with high-end NVIDIA Quadro GPUs, realityvirtual.co converted the data into a dense 3D point cloud of 24 billion points using Capturing Reality for the initial reconstruction. Autodesk MeshMixer and Maya were used for initial clean-up. An in-house, proprietary pipeline then handled refinements and optimizations — filling holes, extrapolating material characteristics, removing noise and cleaning up artifacts.
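
As an illustration of one of those refinement steps, the sketch below performs simple statistical outlier removal on a small synthetic point cloud. realityvirtual.co’s pipeline is proprietary; this only shows the general idea, and the brute-force distance computation would never scale to a 24-billion-point cloud.

```python
# Minimal sketch of point-cloud noise removal on a small synthetic cloud.
import numpy as np

rng = np.random.default_rng(0)
cloud = rng.normal(size=(2_000, 3))                  # stand-in for scanned points
cloud[:20] += 10 * rng.normal(size=(20, 3))          # inject some noisy outliers

def remove_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors is
    more than std_ratio standard deviations above the global mean."""
    # Brute-force pairwise distances; fine for a demo, not for 24B points.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)
    keep = knn_mean < knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]

cleaned = remove_outliers(cloud)
print(f"kept {len(cleaned)} of {len(cloud)} points")
```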

Che de Boer capturing imagery inside the tomb.

These very large datasets were then optimized for real-time rendering in Unreal Engine at a stable 90 frames per second, retaining all 24 billion points of detail by using Granite texture streaming. The experience features full dynamic lighting, volumetric fog, reflections, effects and 3D spatial audio.

“With these large datasets, speed of processing and playback is key,” said Che de Boer. “NVIDIA’s new architecture combined with Unreal Engine adds a level of speed and power that’s unbeatable with this enormous amount of data.”

AI: Creating More Realistic VR

No visit to an ancient tomb would be believable without removing the signs of recent modernization. To accomplish this, realityvirtual.co took all the data captured at the location and used the programmable Tensor Cores and 24GB of VRAM of a single high-end NVIDIA Quadro GPU to train its super-sampling model.

By teaching the computer to understand what it was looking at, the team could then modify each image to show how the space would have appeared with the modern artifacts removed. For instance, exit signs, plaques, handrails, floorboards and halogen lighting were painted out via in-painting methods and replaced with contextually aware content drawn from the spaces around them.

To cover gaps, remove unwanted elements or fix overlap areas in the source photogrammetry images, realityvirtual.co infilled these regions with elements from the surrounding environment, leveraging a new AI-based method for image in-painting developed by NVIDIA Research and available to software developers soon through the NVIDIA NGX technology stack. (Learn more about AI InPainting.)
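
The snippet below illustrates the masked fill-from-surroundings idea with a classical OpenCV in-painting call on a synthetic image. It is only a stand-in for demonstration; NVIDIA’s AI InPainting uses a learned partial-convolution model rather than this classical method.

```python
# Conceptual illustration of mask-based in-painting (classical stand-in,
# not NVIDIA's AI InPainting model).
import numpy as np
import cv2

# Synthetic "wall" texture with a bright rectangle standing in for an exit sign.
wall = np.full((256, 256, 3), 180, np.uint8)
noise = np.random.default_rng(1).integers(0, 40, (256, 256, 1), dtype=np.uint8)
wall = cv2.add(wall, np.repeat(noise, 3, axis=2))
wall[100:140, 80:200] = (40, 200, 40)                 # the modern artifact

# The mask marks the pixels to be replaced with content from their surroundings.
mask = np.zeros((256, 256), np.uint8)
mask[100:140, 80:200] = 255

restored = cv2.inpaint(wall, mask, 5, cv2.INPAINT_TELEA)   # radius 5, Telea method
cv2.imwrite("restored.png", restored)
```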

“Without the kind of memory the high-end NVIDIA Quadro provides, processing the data from our 42-megapixel images would not have been possible,” said Che de Boer. “We use NVIDIA CUDA cuDNN extensively in both our photogrammetry and AI processes and throughout all aspects of our creation pipeline to achieve the most realism. It looks absolutely amazing. You get a real sense of being there and it’s only going to get better once we integrate NVIDIA RTX real time raytracing into our future releases.”

More recent in-house releases of the “Tomb” have been run through realityvirtual.co’s own super-sampling methods. This essentially trains their super-sampling on their own datasets, adding another level of detail to the final texture maps.

At that point, a viewer can’t make out individual pixels no matter how close they get to the tomb’s artifacts. In addition, more recent projects are using realityvirtual.co’s deepPBR methods to extrapolate contextually aware normal, de-lit diffuse, roughness and displacement maps, which are invaluable when working with physically based rendering engines such as Unreal Engine.

All of this was trained on the project’s own data, a great example of AI using its own data to improve itself. The result is an educational simulation that’s available for free on the Steam gaming platform; it requires a Vive, Rift or Windows VR headset.

To continue documenting heritage sites and digitally preserving them for years to come, Che de Boer recently formed a strategic relationship with Professor Sarah Kenderdine at EPFL, a prestigious research university in Lausanne, Switzerland. Together they’re looking to virtually re-create New Zealand’s Christchurch Cathedral as it existed before it was damaged by a 2011 earthquake, as well as other high-profile locations that cannot yet be disclosed.

“These are locations that everyone knows about but only a few get access to,” said Che de Boer. “Our goal is to make these sites accessible to people around the world who wouldn’t otherwise get an opportunity to experience them in their lifetime.”

The post How You Can Explore Nefertari’s Tomb in Hyper-Realistic VR appeared first on The Official NVIDIA Blog.

Creating high-quality, 360-degree stereo video experiences can be complex and computationally intensive work, especially for real-time streaming. With the latest release of NVIDIA’s VRWorks 360 Video SDK, it just got easier.

Version 2.0 of the SDK adds support for NVIDIA CUDA 10 and our latest Turing-based Quadro and GeForce RTX GPUs. It accelerates stitching of 360-degree video on RTX-powered GPUs by up to 2x relative to Pascal-based GPUs. And it offers a host of new features that make it easier to stitch and stream 360-degree mono- and stereoscopic video. These include:

  • Ambisonic Audio – increases the immersiveness of 360-degree videos by enabling 3D, omnidirectional audio, such that the perceived direction of sound sources changes when viewers change their orientation.
  • Custom Region of Interest Stitch – enables adaptive stitching by defining the desired field of view rather than stitching a complete panorama. This enables new use cases such as 180-degree video while reducing execution time and improving performance.
  • Improved Mono Stitch – increases robustness and improves image quality for equatorial camera rigs. Multi-GPU setups are now supported for up to 2x scaling.
  • Moveable Seams – manually adjusts the seam location in the region of overlap between two cameras to preserve visual fidelity, particularly when objects are close to the camera.
  • New Depth-Based Mono Stitch – uses depth-based alignment to improve the stitching quality in scenes with objects close to the camera rig and improves the quality across the region of overlap between two cameras. This option is more computationally intensive than moving the seams, but provides a more robust result with complex content.
  • Warp 360 – provides highly optimized image warping and distortion removal by converting images between a number of 360 projection formats, including perspective, fisheye and equirectangular. It can transform equirectangular stitched output into a projection format such as cubemap to reduce streaming bandwidth, leading to increased performance.
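
As a rough illustration of the projection conversion Warp 360 handles, the NumPy sketch below extracts a perspective view from an equirectangular panorama. It shows only the underlying math, not the VRWorks API, and the resolutions and angles are arbitrary.

```python
# Conceptual sketch of equirectangular -> perspective projection conversion.
import numpy as np

def equirect_to_perspective(equi, out_w, out_h, fov_deg, yaw_deg, pitch_deg):
    H, W = equi.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)          # focal length in pixels
    x, y = np.meshgrid(np.arange(out_w) - out_w / 2,
                       np.arange(out_h) - out_h / 2)
    dirs = np.stack([x, y, np.full_like(x, f, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch),  np.cos(pitch)]])
    Ry = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    dirs = dirs @ (Ry @ Rx).T                                   # rotate the view direction

    lon = np.arctan2(dirs[..., 0], dirs[..., 2])                # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))               # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
    return equi[v, u]                                            # nearest-neighbor sample

pano = np.random.default_rng(2).integers(0, 255, (512, 1024, 3), dtype=np.uint8)
view = equirect_to_perspective(pano, 640, 480, fov_deg=90, yaw_deg=30, pitch_deg=0)
print(view.shape)  # (480, 640, 3)
```
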
Support for VRWorks 360 Video SDK 2.0

We offered a select group of partners an early look at the new SDK, and they agree: these new features bring a much higher level of performance, immersiveness and responsiveness to stitching and streaming 360 video.

Pixvana SPIN Studio is an end-to-end system for editing, creating and sharing amazing VR interactive content.

“With VRWorks 360, our new stitcher in SPIN Studio scales dramatically,” said Sean Safreed, co-founder of Pixvana. “Editing and creating masters for a full day’s shoot doesn’t get faster than this.”

STRIVR is a leader in using VR to train individuals and improve performance.

“When you experience a situation as if you are actually there, learning retention rates can soar,” said Brian Meek, chief technology officer at STRIVR. “The new Warp 360 will help ensure our customers stay fully immersed, and the functionality and performance that Turing brings to VRWorks can’t be beat.”

Professional VR camera and software maker Z CAM is dedicated to developing high-performance imaging products and solutions.

“We’ve integrated the NVIDIA VRWorks 360 Video SDK with our Z CAM WonderLive and WonderStitch software solution. This makes it easy for VR developers and content creators to live capture and stitch multiple sensor images on our VR cameras,” said Kinson Loo, CEO of Z CAM. “New features like Moveable Seams and Custom Region of Interest dramatically improve the quality and speed, opening new use cases and opportunities that give our customers greater flexibility. We expect this to take live event production to the next level.”

Mobile Viewpoint offers easy-to-operate, lightweight devices for live video capturing, including IQ SportProducer, which incorporates the NVIDIA VRWorks 360 Video SDK.

“In today’s world of changing content distribution and consumption models, the ability to deliver live video is more important than ever before,” said Michel Bais, managing director at Mobile Viewpoint. “We were excited to have an early look at the latest VRWorks 360 Video SDK release. The combination of features will make it easy for broadcasters to produce live sports productions using a single 360 camera faster than ever.”

Learn more about VRWorks. Download the VRWorks 360 Video SDK 2.0 and the latest drivers.

The post NVIDIA RTX-Powered VRWorks 360 Video SDK Brings Big Speedups and New Features appeared first on The Official NVIDIA Blog.

Over the last few decades, VR experiences have gone from science fiction to research labs to inside homes and offices. But even today’s best VR experiences have yet to achieve full immersion.

NVIDIA’s new Turing GPUs are poised to take VR a big step closer to that level. Announced at SIGGRAPH last week and Gamescom today, Turing’s combination of real-time ray tracing, AI and new rendering technologies will propel VR to a new level of immersion and realism.

Real-Time Ray Tracing

Turing enables true-to-life visual fidelity through the introduction of RT Cores. These processors are dedicated to accelerating the computation of where rays of light intersect objects in the environment, enabling — for the first time — real-time ray tracing in games and applications.

These optical calculations replicate the way light behaves to create stunningly realistic imagery, and allow VR developers to better simulate real-world environments.

Turing’s RT Cores can also simulate sound, using the NVIDIA VRWorks Audio SDK. Today’s VR experiences provide audio that’s accurate in terms of location, but they can’t meet the computational demands of adequately modeling an environment’s size, shape and material properties, especially when those change dynamically.

VRWorks Audio is accelerated by 6x with our RTX platform compared with prior generations. Its ray-traced audio technology creates a physically realistic acoustic image of the virtual environment in real time.
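
For a sense of what ray-traced audio computes, the sketch below uses the classic image-source method to find first-order wall reflections in a rectangular room, yielding a delay and gain per sound path. It is a minimal illustration only; VRWorks Audio traces far more paths, handles arbitrary geometry and materials, and runs on the GPU. The room dimensions and absorption value are assumptions.

```python
# Image-source method for first-order reflections in a rectangular room.
import numpy as np

ROOM = np.array([8.0, 5.0, 3.0])        # room dimensions in meters (assumed)
SRC = np.array([2.0, 2.5, 1.5])         # sound source position
LISTENER = np.array([6.0, 2.0, 1.5])    # listener position
ABSORPTION = 0.3                        # wall absorption coefficient (assumed)
SPEED_OF_SOUND = 343.0                  # m/s

def first_order_reflections(src, listener, room):
    """Return (delay_s, gain) for the direct path and six wall reflections."""
    direct = np.linalg.norm(listener - src)
    paths = [(direct, 1.0 / direct)]                  # direct path: spreading loss only
    for axis in range(3):
        for wall in (0.0, room[axis]):                # two walls per axis
            image = src.copy()
            image[axis] = 2 * wall - src[axis]        # mirror the source across the wall
            dist = np.linalg.norm(listener - image)
            paths.append((dist, (1 - ABSORPTION) / dist))   # one bounce + spreading
    return [(dist / SPEED_OF_SOUND, gain) for dist, gain in paths]

for delay, gain in first_order_reflections(SRC, LISTENER, ROOM):
    print(f"delay {delay * 1000:6.2f} ms, gain {gain:.3f}")
```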

At SIGGRAPH, we demonstrated the integration of VRWorks Audio into NVIDIA Holodeck showing how the technology can create more realistic audio and speed up audio workflows when developing complex virtual environments.

AI for More Realistic VR Environments

Deep learning, a method of GPU-accelerated AI, has the potential to address some of VR’s biggest visual and perceptual challenges. Graphics can be further enhanced, positional and eye tracking can be improved and character animations can be more true to life.

The Turing architecture’s Tensor Cores deliver up to 500 trillion tensor operations per second, accelerating inferencing and enabling the use of AI in advanced rendering techniques to make virtual environments more realistic.

Advanced VR Rendering Technologies

Turing also boasts a range of new rendering techniques that increase performance and visual quality in VR.

Variable Rate Shading (VRS) optimizes rendering by applying more shading horsepower to detailed areas of the scene and throttling back in areas with less perceptible detail. This enables foveated rendering, which reduces the shading rate at the periphery of the scene, where users are less likely to focus, and it becomes even more effective when combined with emerging eye-tracking technology.
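
The sketch below builds a per-tile shading-rate map of the kind VRS-based foveated rendering uses, coarsening the rate with distance from the gaze point. The 16x16-pixel tile size matches VRS granularity, but the eccentricity thresholds and rates are arbitrary choices for illustration.

```python
# Illustrative foveated shading-rate map: 1 = full rate, 2 = half, 4 = quarter.
import numpy as np

def shading_rate_map(width, height, gaze_x, gaze_y, tile=16):
    """Return one shading rate per 16x16 tile based on distance from the gaze point."""
    tx = (np.arange(width // tile) + 0.5) * tile      # tile-center x coordinates
    ty = (np.arange(height // tile) + 0.5) * tile     # tile-center y coordinates
    gx, gy = np.meshgrid(tx, ty)
    # Eccentricity: distance from the gaze point, normalized by screen width.
    ecc = np.hypot(gx - gaze_x, gy - gaze_y) / width
    rates = np.ones_like(ecc)
    rates[ecc > 0.15] = 2          # mid-periphery: shade at half rate
    rates[ecc > 0.35] = 4          # far periphery: shade at quarter rate
    return rates.astype(int)

rates = shading_rate_map(2160, 1200, gaze_x=1080, gaze_y=600)  # example per-eye resolution
unique, counts = np.unique(rates, return_counts=True)
print(dict(zip(unique.tolist(), counts.tolist())))
```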

Multi-View Rendering enables next-gen headsets that offer ultra-wide fields of view and canted displays, so users see only the virtual world without a bezel. A next-generation version of Single Pass Stereo, Multi-View Rendering doubles to four the number of projection views for a single rendering pass. And all four are now position-independent and able to shift along any axis. By rendering four projection views, it can accelerate canted (non-coplanar) head-mounted displays with extremely wide fields of view.

Turing’s Multi-View Rendering can accelerate geometry processing for up to four views.

VR Connectivity Made Easy

Turing is NVIDIA’s first GPU designed with hardware support for USB Type-C and VirtualLink*, a new open industry standard that powers next-generation headsets through a single, lightweight USB-C cable.

Today’s VR headsets can be complex to set up, with multiple, bulky cables. VirtualLink simplifies the VR setup process by providing power, display and data via one cable, while packing plenty of bandwidth to meet the demands of future headsets. A single connector also brings VR to smaller devices, such as thin-and-light notebooks, that provide only a single, small footprint USB-C connector.

Availability

VRWorks Variable Rate Shading, Multi-View Rendering and Audio SDKs will be available to developers through an update to the VRWorks SDK in September.

NVIDIA Turing-based Quadro RTX and GeForce RTX GPUs will be available starting this fall on nvidia.com and from leading manufacturers and add-in card partners.

* In preparation for the emerging VirtualLink standard, Turing GPUs have implemented hardware support according to the “VirtualLink Advance Overview”. To learn more about VirtualLink, see www.virtuallink.org.

The post NVIDIA Turing Propels VR Toward Full Immersion appeared first on The Official NVIDIA Blog.

For anyone whose life’s work involves computer graphics, there’s no event in the world like the annual SIGGRAPH conference.

This year, NVIDIA founder and CEO Jensen Huang will take the stage at the show in Vancouver, Canada, to share how AI, real-time ray tracing and VR are transforming the computer graphics industry.

His talk will begin at 4pm PT on Monday, Aug. 13, at the Vancouver Convention Center, unofficially kicking off the show’s five-day immersion into the latest innovations in CG, animation, VR, games, mixed reality and emerging technologies.

The SIGGRAPH show floor opens the next day — and NVIDIA green will be everywhere. Find us in NVIDIA booths 801 and 501. And see our technology power workflows in partner booths across the show floor, where we’re demonstrating the latest technologies incorporating AI, real-time ray tracing and virtual reality.

If you can’t make it to Vancouver, be sure to catch our livestream of Jensen’s keynote.

Move to the Head of the Class

Our courses, talks and tutorials throughout the week at SIGGRAPH (mostly in room 220-222) will showcase how AI, real-time ray tracing and VR can make your work easier. A few highlights:

Tuesday, Aug. 14, 2-5:30pm — NVIDIA Presents: GPU Ray Tracing for Film and Design
Explore recent developments in GPU-accelerated, high-quality, interactive ray tracing to support the visual quality and scene complexity required for visual effects, animation and design. NVIDIA, Autodesk, Chaos Group, Isotropix, Pixar and Weta Digital will be among those presenting.

Wednesday, Aug. 15, 9:30am-12:30pm — NVIDIA Presents: Real-Time Ray Tracing
Researchers and engineers from NVIDIA, joined by leading game studios, Epic Games and EA/SEED, will present state-of-the-art techniques for ray tracing, sampling and reconstruction in real time. This includes recent advances that promise to dramatically advance the state of ray tracing in games, simulation and VR applications.

Wednesday, Aug. 15, 4-4:25pm — Tackling the Realities of Virtual Reality
David Luebke, vice president of graphics research at NVIDIA, will describe the company’s vision for the future of virtual and augmented reality. He’ll review some of the “realities of virtual reality” including challenges presented by Moore’s law, battery technology, optics, and wired and wireless connections. He’ll discuss their implications and opportunities, such as foveation and specialization. He’ll conclude with a deep dive into how rendering technology, such as ray tracing, can evolve to solve the realities of VR. (Note: This talk takes place at NVIDIA booth 801.)

Thursday, Aug. 16, 9:30am-12:30pm — NVIDIA Presents: Deep Learning for Content Creation
Join NVIDIA’s top researchers for an examination of the novel ways deep learning and machine learning can supercharge content creation for films, games and advertising.

See our full schedule of SIGGRAPH talks and courses.

The post NVIDIA’s Jensen Huang Takes Center Stage at SIGGRAPH 2018 appeared first on The Official NVIDIA Blog.

To see the transformative power of virtual reality, step into NVIDIA Holodeck, which has newly added navigation tools expressly designed for architecture, engineering and construction professionals.

These productivity-enhancing capabilities, which will be shown off next week at the annual SIGGRAPH conference in Vancouver, Canada, enable designers and architects to extend VR throughout their design pipelines, enhancing traditional processes with creative new possibilities in the virtual world.

The new features include:

  • Holotable – Enables tabletop viewing of scale models on a podium, with tools to review, rotate and section them. For models created in Autodesk 3ds Max and Maya with materials, layers can be toggled on and off, adding greater flexibility to explore floor plans within and across levels. Lighting can be changed dynamically on the scale model to see how light and shadow interact with the design.
  • Workflow enhancements – Allows import of Dassault Systemes SOLIDWORKS Visualize (2019 beta required) models as well as larger-footprint AEC models. Material palettes have been improved so that a Holodeck library material can be edited, saved and re-assigned, expediting model setup and reducing prep time for future Holodeck sessions. Creating and joining private sessions has been streamlined with passcode protection.
  • Navigation improvements – Teleport to different floors and elevations to fully explore buildings of any size. Place beacons as points of interest to quickly navigate around large, multi-level models.
  • New and improved design tools – Share concepts and feedback with stakeholders using enhanced communication tools. Whiteboards, drawing and measurement tools have all been improved for greater accuracy and flexibility; video and 360-degree image capture tools have been added for live recording; and users can now launch a fully functional web browser within Holodeck to display reference images, play videos and even participate in Google Hangouts.

CannonDesign Reinvents Design Workflow with Holodeck

In the Immersive Pavilion at SIGGRAPH, we’ll be showing how CannonDesign, voted one of the 10 most innovative architecture firms in the world in 2017 by Fast Company, uses Holodeck to place designers and clients in a digital model to interact at a 1:1 scale.

Holodeck allows architects and their clients to see and experience every part of their building. This improves their understanding of the design and gives them a chance to make important decisions earlier in the design phase.

“NVIDIA Holodeck breaks down distance barriers, bringing people together from anywhere in the world in a single virtual space to talk, sketch and visualize together without ever boarding a plane or even leaving the office. Holodeck significantly enhances virtual collaboration and strengthens both the design process and the outcomes it generates.” — Hilda Espinal, chief technology officer at CannonDesign.

To get started with these new features, Early Access users can update Holodeck in Steam. Find out more information and apply for Holodeck Early Access at www.nvidia.com/holodeck.

The post NVIDIA Holodeck Gets New VR Tools for Architecture, Engineering and Construction appeared first on The Official NVIDIA Blog.

When the Taliban blew up two 1,700-year-old statues of the Buddha in Bamiyan, Afghanistan, in 2001, preservationists around the world were shocked, perhaps none more so than Ben Kacyra.

Before-and-after view of a Bamiyan Buddha.

A software developer who’d just sold his industrial mapping company, Kacyra wanted to do something good with his technology skills. The Taliban’s appalling actions gave him a perfect topic on which to focus.

The result was CyArk, a nonprofit that has spent the past 15 years capturing high-resolution imagery of World Heritage Sites.

“There was no record, no videos, no detailed photos,” said John Ristevski, CEO of CyArk, who helped Kacyra with the organization’s initial projects. “Ben was aghast and thought we needed 3D records of these things.”

Kacyra and his wife Barbara, who together founded CyArk, wanted to ensure that if other sites met the same fate as those Afghan monuments, there would at least be a visual record.

Today, CyArk not only has detailed photogrammetric data on 200 sites across all seven continents, it has also started to deliver on the second part of its vision: opening up that data to the public in the hope that developers will create 3D virtual experiences.

Taste of What’s to Come

To jumpstart things, CyArk, in conjunction with Google — which has provided cloud computing resources, funding and technical support — has released a virtual tour of the ancient city of Bagan, in central Myanmar, that shows off what’s possible.

The tour lets visitors digitally walk through temples, look at things from different angles and zoom in for closer looks, providing an amazingly detailed substitute for those who might otherwise have to travel to the other side of the Earth to see it.

The potential for using the approach to provide education about the world’s ancient historical sites, enable preservationists to study sites more readily, and allow tourists to visit places they could never travel to is seemingly limitless. As such, Ristevski hopes the Bagan tour is just the tip of an iceberg far larger than anything CyArk could build on its own.

“We’re probably more interested in what other people can do with the data,” said Ristevski. “Through the open heritage program, anyone can take the data and build educational experiences. And they can get a non-commercial license to use the data, too.”

In addition to the Bagan tour, CyArk recently released MasterWorks VR, a free app for the Oculus Rift VR headset. It lets people explore multiple heritage sites on three continents, jumping from one location to another.

A Premium on Processing

NVIDIA GPUs have played a critical role in processing the hours and hours of photogrammetric data CyArk collects on each site, as well as performing 3D reconstruction. Working on powerful workstations equipped with NVIDIA Quadro P6000 cards, CyArk technicians convert the data to 3D imagery using a software package called Capturing Reality.

Ristevski said the P6000 GPUs enable CyArk to crunch its data many times faster than would be possible on CPUs, and he’s also seen significant speed gains compared with previous generations of GPUs.

Every pixel counts in CyArk’s 3D reconstructions of World Heritage Sites.

More important than speed, Ristevski said, is the improved ability to present detailed textures. CyArk has seen the resolution of those textures improve from centimeters down to fractions of a millimeter, a huge consideration for the heritage community.

“Every square inch of surface is unique,” he said. “We can’t make up textures or replicate textures. We have to preserve every little pixel we capture to the highest degree.”

For Now, More Data

While Ristevski sees a lot of potential for deep learning to help CyArk as it gets more into classification of its data, the company hasn’t delved far into that technology to date. Once it can move on from its current focus on 3D reconstruction, deep learning figures to play a bigger role.

In the meantime, CyArk plans to continue collecting data on more World Heritage Sites. Among those it’s currently documenting or planning to document soon are ancient temples in Vietnam and the Jefferson Memorial in Washington, D.C.

As CyArk collects more data, it will also continue to make that data publicly available, as well as packaging it in future VR applications and generating its own VR experiences.

And Ristevski maintains that CyArk has no goals to monetize its data, and will instead remain a nonprofit for the foreseeable future: “We have no intention of forming a business model.”

The post Temple Run: CyArk Taps GPUs to Capture Visual Records of World Heritage Sites appeared first on The Official NVIDIA Blog.

San Francisco’s Museum of Modern Art wants to bring you closer to paintings. How close? Literally inside.

The iconic museum is running an exhibit of surrealist Rene Magritte that also offers visitors an augmented reality trip into his provocative, often jarring, art.

People can step in front of AR pieces inspired by the artist to enter and interact with interpretations of famous works such as “Shéhérazade,” which depicts an exotic woman behind pearls in the sky — or a viewer taking her place.

AR inspired by Magritte’s “Shéhérazade,” 1950

The monitors are designed as window frames reminiscent of one of Magritte’s most famous paintings, “Where Euclid Walked,” which shows an easel painting in front of a window. The original painting invites contemplation of what is real versus what is perceived — much like AR.

Created with frog design, a global design firm based in San Francisco, the interactive portion of the show uses traditional-looking window frames, packed with NVIDIA GPU computing power and front-facing cameras, that visitors peer into to interact with the artist’s world.

AR inspired by “Le Blanc-Seing,” 1965

Play in the Artwork

The first AR stop, inspired by “Le Blanc-Seing,” displays a children’s-storybook-like forest scene in which visitors have to position themselves slowly and carefully for the cameras and sensors to place them visibly within the forest image. The intent is to force visitors to slow down and really study the interaction so they fit in between the trees.

Some windows present Magritte-like puzzles for visitors to solve. In one, people’s images were fed not into the monitor they were facing but into an adjacent one, causing confusion and prompting interactions with fellow museum visitors to figure it out. Sometimes people would ask the stranger on the other end for a photo. Voila: just the type of interactivity the designers were aiming for.

Images of visitors appear in unexpected locations

“The paintings are actually looking back and seeing the people — they are turned from visitors to participants,” said Charles Yust, a principal design technologist at frog.

The exhibit itself spills across eight rooms in the recently expanded downtown museum. Among the more than 70 works of art are some of the Belgian artist’s best-known pieces, which show his highly idiosyncratic perspective, such as the 1952 painting “Personal Values,” which depicts ordinary objects — like a comb and a glass — claustrophobically oversized inside a bedroom. Also on display are examples of his famous bowler-hat paintings, such as “The Happy Donor” from 1966 and “The Son of Man” from 1964.

While the show is a big one for the museum, what sets it apart is its use of AR, which harnesses Stereolabs ZED Mini cameras for video and depth perception and the NVIDIA Jetson TX2 for processing the real-time interactions.

Jetson for Museum Pieces

Frog design set out to build the interactive monitors as pieces of artwork themselves. The global design firm worked closely with the museum’s staff to conceptualize these AR displays.

The AR exhibit was designed to deepen understanding of Magritte’s work and get visitors to interact with one another in moments of confusion — even consternation —  and reflection at times, in the spirit of Magritte’s art.

“We know that playful interactive experiences can create a strong connection between visitors and the subject matter,” said Chad Coerver, chief content officer at the SFMOMA. “We have been working really hard to think about how tech can advance the museum’s mission and connect with the Bay Area around us.”

The show, which runs at the SFMOMA through Oct. 28, includes more than 20 pieces by the Belgian artist that have never previously been shown in the U.S.

The post Augmented Reality Lets You Slip Inside Magritte’s Surreal Scenes at SFMOMA appeared first on The Official NVIDIA Blog.

360-degree video is a stunning way for publishers, production houses and content creators to create and share stories, places and experiences.

At the GPU Technology Conference in San Jose this week, VR industry leaders are rallying around NVIDIA VRWorks 360 Video SDK, paving the way for a wide range of industries — from travel and sports to entertainment and journalism — to create high-quality, 360-degree stereo video experiences.

Creating and delivering this kind of video is complex and computationally intensive. It involves capturing, processing and stitching together footage from up to 32 cameras and, in the case of live events, streaming it in real time with minimal latency.

GPU acceleration, coupled with the VRWorks 360 Video SDK, provides real-time capture, seamless stitching and streaming of 360-degree mono- and stereoscopic video that can be easily integrated into video workflows.
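
To ground the stitching step, the sketch below feather-blends two overlapping camera images with a linear alpha ramp, the simplest form of seam blending. A real 360 pipeline also calibrates the rig, warps each image onto a sphere and chooses seam locations; the images and overlap width here are synthetic stand-ins.

```python
# Feather-blending two camera images across their region of overlap.
import numpy as np

rng = np.random.default_rng(3)
left = rng.integers(0, 255, (720, 640, 3)).astype(np.float32)   # stand-in camera images
right = rng.integers(0, 255, (720, 640, 3)).astype(np.float32)
overlap = 128                                                    # overlap width in pixels

# Linear alpha ramp across the overlap: left weight 1 -> 0, right weight 0 -> 1.
alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
blended = left[:, -overlap:] * alpha + right[:, :overlap] * (1 - alpha)

panorama = np.concatenate(
    [left[:, :-overlap], blended, right[:, overlap:]], axis=1).astype(np.uint8)
print(panorama.shape)   # (720, 1152, 3): 640 + 640 - 128 columns
```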

New Z CAM V1 Camera Enables 360-Degree Live Stereo Video

Z CAM, one of the earliest companies to bring a professional live VR camera to the masses, was the first commercial camera manufacturer to integrate the NVIDIA VRWorks 360 Video SDK into their WonderStitch and WonderLive applications.

Video: “VRWorks 360 Video SDK - Stereo 360 Times Square” on YouTube

Monday at GTC, the camera maker unveiled the Z CAM V1 Professional VR Camera, capable of delivering 6K, 60 fps, 360-degree stereo video. The V1’s 10 cameras and small footprint allow content creators to get close to the action, solving one of the key issues with traditional 360 rigs.

“Integrating the VRWorks 360 Video SDK made it easy for us to enable live streaming of high-quality, 360-degree stereo video, and to support live streaming of both mono and stereo 360 VR, so our customers can really push the boundaries of live storytelling,” said Kinson Loo​, CEO of Z CAM.

Pixvana Accelerates Video Creation and Delivery

One of the biggest challenges of 360-degree content creation is the sheer size of the files produced, which makes delivering high-quality content to a VR headset difficult. Pixvana’s SPIN Studio platform integrates the VRWorks 360 Video SDK for faster and better results, optimized for any delivery method, be it Steam, PlayStation, YouTube or other supported platforms.

Because the VRWorks 360 Video SDK can be accessed via the AWS and Microsoft Azure cloud platforms, Pixvana customers can easily upload source videos from a multitude of camera rigs and let the application calibrate and stitch the footage up to 11 times faster than the previous CPU-based stitching technique — all while creating stunning 360-degree videos at up to 8K resolution.

“Because NVIDIA VRWorks 360 Video SDK shared the same API between Windows and Linux, it was super fast and easy to integrate into our Linux cloud platform,” said Sean Safreed, product director and co-founder at Pixvana. “The ability to access the VRWorks SDK through our powerful GPU-accelerated cloud backend simplifies the workflow and massively speeds the process from shot to review to final distribution, which our customers love.”

STRIVR Advances 360-Degree Video Training

Virtual reality provides immersion and puts the user in the center of the action, which is a perfect way to train for any situation, whether trying to fix an engine on a cargo ship, dealing with a Black Friday shopping mob or practicing step-by-step assembly procedures on a factory floor.

STRIVR helps clients ranging from Walmart to the New York Jets step up their game with immersive VR training. 360-degree situational video has been shown to drastically improve retention, boost productivity and provide crucial crisis-based “on-the-job training” without putting employees at risk.

STRIVR’s immersive training platform is designed to improve situational awareness, operational procedures training, safety and sales training.

“Integrating VRWorks 360 Video SDK accelerated the STRIVR stitching process from 15 fps to between 45 and 60 fps, a 3-4x performance gain, which translates into much faster turnaround time from filming to delivery,” said Brian Meek, CTO at STRIVR.

VRWorks 360-Degree Video 1.5 Release

Today we released VRWorks 360 Video SDK version 1.5 with support for Linux, making it easier to integrate into a wider range of uses, including embedded devices and Linux-based image processing pipelines.

If you’re attending GTC this year, you can learn about 8K video stitching in the cloud with Pixvana on Monday, March 26. You can also see the new Z CAM V1 in action in the VR Village and attend an informative session on how the Z CAM VR camera benefits from VRWorks integration on Wednesday, March 28.

For more information, check out our “VRWorks at GTC” guide and read more about VRWorks 360 Video SDK.

The post Industry Leaders Adopt NVIDIA VRWorks to Push Boundaries of 360-Degree Video Storytelling appeared first on The Official NVIDIA Blog.

NVIDIA CEO Jensen Huang just used VR to shrink one of our colleagues and teleport him into a miniature car, which he then drove around a miniature city.

The on-stage demonstration Wednesday was a rousing finale to Huang’s keynote at our GPU Technology Conference in Taiwan.

With such technology, humans will be able to use VR to become backups for AI machines, Huang explained. Wherever these machines are. Whatever their size.

“In the future you will be able to merge with the robot,” Huang told a stunned audience of more than 2,000 developers, researchers, government officials and media in Taipei. “You can have telepresence, you can go anywhere you want.”

The magic: taking real-time sensor feeds and using them to create a VR environment where a remote driver can take control of the car in real time. The result: an experience where the driver feels like they’ve been shrunk down and put in the driver’s seat of a miniature car.
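
A highly simplified sketch of that control loop appears below: sensor frames flow from the vehicle to the operator’s VR view, and steering and pedal inputs flow back as drive-by-wire commands. Every class and function is a hypothetical stand-in used for illustration, not NVIDIA’s implementation.

```python
# Sketch of a VR teleoperation loop; all names here are hypothetical stand-ins.
import time
from dataclasses import dataclass

@dataclass
class SensorFrame:
    timestamp: float
    camera_jpeg: bytes          # compressed camera image from the vehicle

@dataclass
class DriveCommand:
    steering: float             # -1.0 (full left) .. 1.0 (full right)
    throttle: float             # 0.0 .. 1.0
    brake: float                # 0.0 .. 1.0

def read_sensor_frame() -> SensorFrame:          # stand-in for the vehicle's live feed
    return SensorFrame(time.time(), b"")

def render_to_headset(frame: SensorFrame):       # stand-in for the VR renderer
    pass

def read_operator_input() -> DriveCommand:       # stand-in for wheel/pedal input
    return DriveCommand(steering=0.1, throttle=0.3, brake=0.0)

def send_to_vehicle(cmd: DriveCommand):          # stand-in for the drive-by-wire link
    print(f"steer={cmd.steering:+.2f} throttle={cmd.throttle:.2f} brake={cmd.brake:.2f}")

def teleop_loop(duration_s=1.0, hz=30):
    """Run the operator-in-the-loop cycle at a fixed rate."""
    end = time.time() + duration_s
    while time.time() < end:
        frame = read_sensor_frame()              # vehicle -> operator
        render_to_headset(frame)                 # show it in VR
        send_to_vehicle(read_operator_input())   # operator -> vehicle
        time.sleep(1 / hz)

teleop_loop()
```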

The Man in the Machine

The on-stage demo virtually shrank down Justin, our lead engineer for the project — who was present beside the stage — and teleported him into a quarter-scale car inside a tiny simulated city set up in a ballroom upstairs.

To make the trip, Justin donned a VR headset and entered a high-def simulation that includes the environment around the car updated by a live feed from the sensors embedded in the vehicle.

Audience members were able to observe a demo that teleported an NVIDIAN into a miniature car.

He was then able to turn the wheel and step on the pedals in front of him to manipulate the car’s drive-by-wire system remotely and careen around the streets of the tiny city.

As he drove, Justin saw a live feed of the environment around his vehicle. While the AI remains active — preventing the car from doing anything dangerous, such as running into a wall — the human driver can take control of the vehicle to get around obstacles.

Which is just what the demo showed: the human driver teleported, through VR, into the scaled-down vehicle as Huang spoke.

“Right now Justin is upstairs, but he’s right here, but he’s enjoying upstairs,” Huang said, as a camera feed showed the audience what Justin was seeing as he drove.

The demonstration points to a future where, aided by sophisticated sensors and immersive VR, humans and machines are able to work together to navigate mines deep underground, tend crops, repair satellites and space stations, or engage in rescue operations in hazardous areas, such as earthquake zones.

“In the future, we’re going to have a bunch of little pizza delivery bots, but sometimes they will get stuck so we’ll be able to go into virtual reality and help the robot get unstuck,” Huang said.

The post Man Teleports Into Miniature Vehicle in Stunning On-Stage Demo at GTC Taiwan appeared first on The Official NVIDIA Blog.
