
Rebirth of Medical Imaging by Daniel Sodickson, Vice-Chair for Research, Dept of Radiology, Director, Bernard & Irene Schwartz Center for Biomedical Imaging. Principal Investigator, Center for Advanced Imaging Innovation & Research at New York University Langone Medical Center ©Robert Wright/LDV Vision Summit 2018

We launched the first annual LDV Vision Summit five years ago, in 2014, with the goal of bringing together our visual technology ecosystem to explore how visual technology is empowering business and society.

Our gathering is built from the ground up, by and for our diverse visual tech ecosystem - from entrepreneurs to researchers, professors and investors, media and tech execs, as well as content creators and anyone in between. We put our Summit together for us all to find inspiration, build community, recruit, find co-founders, raise capital, find customers and help each other succeed.

Every year we highlight experts working on cutting edge technologies and trends across every sector of business and society.  We do not repeat speakers and we are honored that many attendees join every year.

Below are many of the themes that will be showcased at our 6th LDV Vision Summit, May 22 & 23 in NYC. Register here; we hope to see you this year.


Visual Technologies Revolutionizing Medicine

Visual assessment is critical to healthcare — whether that is a doctor peering down your throat as you say “ahhh” or an MRI of your brain. Over the next ten years, healthcare workflows will become mostly digitized, more personal data will be captured and computer vision, along with artificial intelligence, will automate the analysis of that data for precision care. Our speakers will showcase how they are deploying visual technologies to revolutionize medicine:

  • CIONIC is superpowering the human body.  

  • Ezra is the new way to screen for prostate cancer.  

  • MGH/Harvard Medical School has developed AI that is better than most experts at diagnosing a childhood blindness disease.

  • Teladoc Health provides on-demand remote medical care.  

  • and more...


Computational Imaging Will Power Business ROI

Computational imaging refers to digital image capture and processing techniques that use digital computation instead of optical processes. Entrepreneurs and research scientists from Facebook, Sea Machines, Cornell Tech, and University College London will enlighten us on how their research is delivering valuable results.

  • GM and Cruise Automation are using state-of-the-art software and hardware to create the world's first scalable AV fleet.

  • The inference of 3D information from the video acquired from a single moving camera.

  • Deep convolutional neural network (ConvNet) for multi-view stereo reconstruction.

  • Image Quality and Trust in Peer-to-Peer Marketplaces.

  • and more...
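To make that definition concrete, here is a toy, purely illustrative sketch (not any speaker's pipeline) of computation standing in for optics: averaging a burst of short, noisy exposures in software instead of taking one long optical exposure.

```python
# Toy illustration of computational imaging: instead of a longer optical
# exposure, average a burst of short, noisy exposures in software.
# Purely illustrative; the scene and noise model are invented.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a "true" scene and an 8-frame noisy burst of it.
scene = rng.uniform(0.0, 1.0, size=(64, 64))
burst = scene + rng.normal(0.0, 0.2, size=(8, 64, 64))  # noise stand-in

# Computational step: align (trivially here) and average the frames.
denoised = burst.mean(axis=0)

# Noise drops roughly by the square root of the number of frames.
print("single-frame RMSE:", np.sqrt(((burst[0] - scene) ** 2).mean()))
print("burst-average RMSE:", np.sqrt(((denoised - scene) ** 2).mean()))
```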

Synthetic Data Is Disrupting Legacy Businesses

Synthetic data is computer-generated data that mimics real data; in other words, data that is created by a computer, not a human. Software algorithms can be designed to create realistic simulated, or “synthetic,” data. This computer generated data is disrupting legacy businesses including Media Production, E-Commerce, Virtual Reality & Augmented Reality, and Entertainment. Experts speaking on this topic include:

  • Synthesia delivers AI-driven video production.

  • Forma Technologies is building photorealistic avatars that are a dynamic form for people’s online identity.

  • and more...
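For a concrete sense of the idea described above, the following minimal sketch (illustrative only, not any company's system) generates a tiny synthetic dataset whose labels are known by construction rather than annotated by humans.

```python
# Minimal sketch of synthetic training data: images rendered by code,
# with labels known exactly because the code placed the content.
import numpy as np

rng = np.random.default_rng(1)

def make_example(size=32):
    """Render a blank image with one bright square at a random spot;
    the label is the square's top-left corner (known by construction)."""
    img = np.zeros((size, size), dtype=np.float32)
    x, y = rng.integers(0, size - 8, size=2)
    img[y:y + 8, x:x + 8] = 1.0
    return img, (int(x), int(y))

# A "dataset" of 100 perfectly labeled examples, created without any
# human annotation.
dataset = [make_example() for _ in range(100)]
print(len(dataset), "synthetic examples; first label:", dataset[0][1])
```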


Where Are The Next Visual Tech Unicorns?

A large number of visual technology businesses have already broken the $1B ceiling: Pinterest, DJI, Magic Leap, Snap, Zoom, Zoox, etc. With applications of computer vision and machine learning on an exponential rise, top investors in visual technologies will discuss the sectors and trends they see with the most potential for unicorn status in the near future:

  • Nabeel Hyatt, Spark Capital

  • Rachel Lam, Imagination Capital

  • Matt Turck, FirstMark Capital

  • Laura Smoliar, Berkeley Catalyst Fund

  • Hadley Harris, Eniac

  • Zavain Dar, Lux Capital

  • and more...

Experiential Media is the Future

The Internet and digital media have built reputations for nameless, faceless actors and disconnection, but advances in tech and new approaches are changing that. Whether through interactive video or a live music video game, visual technologies are creating experiences that connect people to content & each other.

  • FTW Studios is creating experiences designed to bring people together — to be a part of live, shared moments.

  • Section4 is reinventing professional operational media, making it succinct, discoverable, provocative and actionable.

  • Eko is an interactive storytelling platform that lets you control the story. (Creators of Netflix hit, Bandersnatch).

  • and more...

Nanophotonics are Pushing the Envelope

Nanophotonics can provide high bandwidth, high speed and ultra-small optoelectronic components. These technologies have the potential to revolutionize telecommunications, computation and sensing.

  • Voyant is creating the next generation of chip-scale LIDAR

  • MacArthur Fellow Michal Lipson is a physicist known for her work on silicon photonics. She is working on many projects, such as drastically lowering the cost and energy consumption of high-power computing for artificial intelligence.

  • and more...


Farm to Factory to Front Door, Visual Tech is Improving Logistics & Agriculture

Breakthrough visual technologies will transform the speed, safety and efficiency of agriculture, manufacturing, supply chain and logistics. Legacy actors and startups alike are finding fascinating use cases to implement computer vision and machine learning to improve their processes:

  • Plus One’s software & computer vision tackles the challenges of material handling for logistics.

  • Non-invasive, real time food quality information delivered via hyperspectral imaging.

  • Level 4 Autonomous Vehicles for Urban Logistics

  • and more...


Hadley Harris, Founding Partner at Eniac, Captured by DepthKit

Hadley Harris is the Founding General Partner at Eniac. He has done a little bit of everything on the path to co-founding Eniac, starting out as an engineer at Pegasystems, a product manager at Microsoft and a strategist at Samsung. He ran a few aspects of the business across product and marketing at Vlingo prior to its sale to Nuance. He also served as CMO at Thumb until it was acquired.

Hadley will be sharing his knowledge on trends and investment opportunities in visual technologies as a panelist and startup competition judge at the 6th Annual LDV Vision Summit, May 22 & 23. Early Bird tickets are available through April 16; get yours now to come see Hadley and 60+ other top speakers discuss the cutting edge in visual tech.

In the lead-up to our Summit, Evan Nisselson, General Partner at LDV Capital, asked Hadley some questions about his experience investing in visual tech and what he is looking forward to at our Vision Summit...

Evan: You gained extensive entrepreneurial, technical and business operations experience before you co-founded Eniac Ventures. Which aspects of your expertise do you believe help you empower entrepreneurs to succeed, and why?

Hadley: I was very fortunate to be a senior leader at two startups that were acquired. I worked with a bunch of super talented entrepreneurs and executives that I learned a lot from. I was also lucky to have some great investors and board members who helped me see how VCs could help empower teams to thrive. Interestingly, what stuck with me the most is some of the terrible behavior I witnessed by VCs during fundraising -- spending the whole meeting on their phones, leaving the room several times during the pitch, eating 3-course meals without looking up at what we were presenting. I told myself that if I ever became a VC I’d focus on helping entrepreneurs with empathy and respect for the amazingly difficult task they had chosen.

I consider the most interesting technological theme right now to be the way data and machine intelligence are changing every industry and our daily lives. A strong argument could be made that visual technology is the most important input to data+machine intelligence systems

-Hadley Harris

Evan: Eniac has invested in many visual technology businesses that leverage computer vision. Please give a couple of examples and how they are uniquely analyzing visual data.

Hadley: By my count, 30% of the investments we’ve made over the last few years have a visual technology component. A handful are in autonomy and robotics. For example, iSee is an autonomous transportation company that has developed a humanistic AI that is able to flourish in dynamic and unpredictable situations where other solutions fail. Obviously, they can only do that by leveraging CV as an input to understand the vehicle’s surroundings. Another one that is really interesting is Esports One. They use computer vision to understand what’s going on in esports matches and surface real-time stats and analytics to viewers. It’s like the first down marker on steroids.

Evan: In the next 10 years, which business sectors will be the most disrupted by computer vision and why?

Hadley: Over the next 10 years there are a number of trillion-dollar sectors we’re exploring at Eniac that will be disrupted by visual technology – food & agriculture, construction, manufacturing, transportation, logistics, defense – but if I were to pick one it would be healthcare. We’re already seeing some really interesting use cases taking place in hospitals but that’s just the very tip of the iceberg. When these technologies move into the home so that individuals are being monitored on a daily basis, the way we think about health and wellness will dramatically change.

Evan:   We agree that visual technologies will have a tremendous impact on the healthcare industry. Actually, our annual LDV Insights deep dive report last summer analyzed Nine Sectors Where Visual Technologies Will Improve Healthcare by 2028.

Eniac & Sea Machines - [L to R] Hadley Harris (Eniac), Jim Daly (COO of Sea Machines), Michael Johnson (CEO of Sea Machines), Vic Singh (Eniac)

Evan: Eniac and LDV Capital are co-investors in Sea Machines, which captures and analyzes many different types of visual data to deliver autonomous workboats and commercial vessels. What inspired you guys to invest in Sea Machines?

Hadley: We’ve had a broad thesis over the last 4 years that everything that moves will be autonomous. When investment in the best autonomous car and truck companies became prohibitively competitive and expensive, we started looking for underserved areas where autonomy could drive significant value. This drove us to look at the autonomous boat space. We found a few teams working on this problem; by far the best of them was Sea Machines. They stood out because they married strong AI and CV abilities with a very deep understanding of the naval space based on decades in the boating ecosystem.

Evan: LDV Capital started in 2012 with the thesis of investing in people building visual technology businesses and some said it was “cute, niche and science fiction.” How would you characterize visual technologies today and tomorrow?

Hadley: I consider the most interesting technological theme right now to be the way data and machine intelligence are changing every industry and our daily lives. A strong argument could be made that visual technology is the most important input to data+machine intelligence systems. So no, I don’t think visual technologies are cute, niche or science fiction; they are one of the primary drivers of the biggest technological theme of our time.

Hadley Harris, Eniac

Evan: You frequently experience startup pitches. What is your one sentence advice to help entrepreneurs improve their odds for success?

Hadley: Know the ecosystem your startup is playing in absolutely cold.

Evan: What are you most looking forward to at our 6th LDV Vision Summit?

Hadley: I’m excited to be at such a focused event where I can hear from amazing entrepreneurs and scientists about the cutting edge projects they’re working on.

Get your Early Bird tickets by April 16th for our 6th Annual LDV Vision Summit which is featuring fantastic speakers like Hadley. Register now before ticket prices go up!

Join our monthly newsletter to learn more about our LDV Vision Summit.

Matt Uyttendaele, LDV Vision Summit 2018 ©Robert Wright

Early Bird tickets now available for our LDV Vision Summit 2019 - May 22 & 23 in NYC at the SVA Theatre.  80 speakers in 40 sessions discuss the cutting edge in visual tech. Register now!

Matt Uyttendaele is the Director of Core AI at Facebook Camera. At our 2018 LDV Vision Summit, Matt spoke about enabling persistent Augmented Reality experiences across the spectrum of mobile devices. He shared how, at Facebook Camera, they are solving this and giving creators the ability to author these experiences on their platform. He showcased specific examples and highlighted future challenges and opportunities for mobile augmented reality to succeed.

Good morning LDV. I am Matt Uyttendaele. I work on the Facebook camera and today I'm going to talk about our AR efforts on smartphones.

We at Facebook and Oculus believe that AR wearables are going to happen someday, but we're not waiting for that. We want to build AR experiences into the apps that our community uses every day, those being Messenger, Facebook and Instagram. And I'm going to do a deep dive into some of those efforts.

How do we bring AR to mobile? There are three major investments that we're making at Facebook. The first is just getting computer vision to run on the smartphone. We take the latest state-of-the-art computer vision technology and get it to run at scale on smartphones.

Second, we're building a creator platform. That means that we want to democratize the creation of AR into our apps. We want to make it super simple for designers to create AR experiences on our apps.

And then we're constantly adding new capabilities. The Facebook app updates every two weeks. And in those cycles, we're adding new capabilities and I'll dive into some of those in the talk.

One of our challenges in bringing AR to mobile devices at this scale is that there's a huge variety of hardware out there, right? Some of these are obvious, like camera hardware. We need to get computer vision to run on a huge variety of phones. So that means we have to characterize exactly the cameras and lenses on all these phones. Inertial sensors are super important for determining how the phone moves. That works pretty well on the iPhone, not so much on Android. It was telling that on the Pixel 2, one of the top marketing bullet items was an IMU synchronized with the camera, because that enables AR. But that's a challenge that we face in bringing these experiences to scale. All told, we support 10,000 different SKUs of cameras with our apps.

So let's dive a little bit into the computer vision that's running in our AR platform. On the left, we take a video frame in, take in IMU data, and a user may select a point to track within that video. First we have a tracker selector that analyzes the incoming frame and is also aware of the capabilities of the device we're operating on.

©Robert Wright/LDV Vision Summit 2018

Then we've built several state-of-the-art computer vision algorithms. I think our face tracker is probably one of the best monocular face trackers, or maybe the best, out there running on a mobile phone. But we also have a simple inertial tracker that's just using the IMU. And we've implemented a really strong simultaneous localization and mapping algorithm, also known as SLAM. At any given time one of these algorithms is running while we're doing an AR experience. And we can seamlessly transition between these algorithms.

For example, if we're using SLAM and we turn the phone toward a white wall where there are no visual features to track, we can seamlessly transition back to the IMU tracker. That lets us deliver a consistent camera pose across the AR experience, so that your AR object doesn't move within the frame. Okay, so that's the dive into our computer vision.
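The passage above describes per-frame tracker selection with a graceful IMU fallback. Below is a heavily simplified, hypothetical sketch of that control flow; it is not Facebook's implementation, and the classes and thresholds are invented for illustration.

```python
# Hedged sketch: pick a tracker per frame based on what the scene and
# device allow, and fall back to IMU-only tracking when there are too
# few visual features (e.g. a white wall). Illustrative only.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float = 0.0
    y: float = 0.0
    yaw: float = 0.0

class SlamTracker:
    def update(self, pose, num_features, imu_delta):
        # Pretend visual features let us correct the IMU motion estimate.
        return Pose(pose.x + imu_delta[0], pose.y + imu_delta[1], pose.yaw)

class ImuTracker:
    def update(self, pose, num_features, imu_delta):
        # Integrate IMU motion only; it drifts, but never needs features.
        return Pose(pose.x + imu_delta[0], pose.y + imu_delta[1],
                    pose.yaw + imu_delta[2])

def select_tracker(num_features, device_has_synced_imu):
    # Too few features to localize visually: use the IMU-only tracker.
    if num_features < 20:
        return "imu"
    return "slam" if device_has_synced_imu else "imu"

trackers = {"slam": SlamTracker(), "imu": ImuTracker()}
pose = Pose()
frames = [{"features": 150, "imu": (0.01, 0.0, 0.00)},
          {"features": 3,   "imu": (0.01, 0.0, 0.02)}]  # white wall frame
for frame in frames:
    name = select_tracker(frame["features"], device_has_synced_imu=True)
    pose = trackers[name].update(pose, frame["features"], frame["imu"])
    print(name, pose)
```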

Here's a look at our creator platform. Here's somebody wiring up our face tracker to an experience that he has designed; these arrows were designed by this creator. Similarly, here's somebody else taking our face tracker and wiring it up to a custom mask that they have developed. So this is our designer experience in something called AR Studio that we deliver.

AR Studio is cross-platform, obviously, because our apps run cross-platform, so you can build an AR experience and deliver it to both iOS and Android. It's delivered through our Facebook camera stack, which means it runs across the variety of apps: Messenger, Facebook and Instagram. And we've enabled these AR experiences to be delivered to the 1.5 billion people that run our apps. So if you build an experience inside AR Studio, you can have a reach of 1.5 billion people.

“We've enabled these AR experiences to be delivered to 1.5 billion people that run our apps.”

Okay, let me now look at a new capability that we recently delivered. This is called Target AR. Here this user is taking his phone out and pointing it at a target that's been registered in our AR Studio. So this is a custom target, and they've built a custom experience, an overlay on that target. When we recognize that target, their experience pops up and is displayed there.

And we didn't build this as the one-off experience that's shown here; we built it as a core capability of the platform. So here, our partners at Warner Brothers deployed these posters across Austin at South by Southwest around the time of the Ready Player One launch, and they used AR Studio to build a custom experience so that when we recognize that poster, their effect pops up in the app. And here's one of my partners on the camera team doing a demo, and that Warner Brothers experience popped up as it recognized the poster.
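Target recognition like this is commonly built from local image features plus a geometric check. The sketch below is a generic OpenCV recipe (ORB features and a RANSAC homography), not Facebook's Target AR code; the image paths are placeholders.

```python
# Generic sketch of recognizing a registered image target in a camera
# frame with ORB features and a RANSAC homography check. File names are
# placeholders; this is a standard recipe, not Facebook's implementation.
import cv2
import numpy as np

target = cv2.imread("poster_target.png", cv2.IMREAD_GRAYSCALE)  # registered target
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)    # live camera frame

orb = cv2.ORB_create(nfeatures=1000)
kp_t, des_t = orb.detectAndCompute(target, None)
kp_f, des_f = orb.detectAndCompute(frame, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_t, des_f)
matches = sorted(matches, key=lambda m: m.distance)[:100]

if len(matches) >= 10:
    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is not None and inliers.sum() >= 10:
        print("Target recognized: anchor the AR effect using H")
```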

What I want to leave you with is that we at Facebook want to deliver value to users in AR, and that's something we think about every day on the Facebook camera team. I think I've shown you some novel experiences, but what we really strive to do is deliver real user value through these things. Please look at what we're doing over the next year in our Facebook camera apps across Messenger, Facebook and Instagram, because that's what we hope to achieve.

Thank you.

Watch Matt’s keynote at our LDV Vision Summit 2018 below and check out other keynotes on our videos page.

Early Bird tickets are now available for our LDV Vision Summit May 22 & 23, 2019 in NYC to hear from other amazing visual tech researchers, entrepreneurs and investors.


Early Bird tickets are available through April 16 for our LDV Vision Summit 2019 - May 22 & 23 in NYC at the SVA Theatre.  80 speakers in 40 sessions discuss the cutting edge in visual tech. Register now!

At the LDV Vision Summit 2018, Joshua Brustein of Bloomberg Businessweek spoke with Serge Belongie (Cornell Tech), Ryan Measel (Fantasmo), Mandy Mandelstein (Luxloop) and Jeff McConathy (6D.ai) about how the digital and physical world will converge in order to deliver the future of augmented reality.

They spoke about how the technology stack for augmented and mixed reality will need several new layers of different technologies to work well together, from spatial computing with hyper-precision accuracy in multiple dimensions to filtering the contextually relevant data to display to the user based on location, activity, or time of day. What are the technical challenges and opportunities in delivering the AR Cloud?

Watch and find out:

Our LDV Vision Summit 2019 will have many more great discussions by top experts in visual technologies. Don’t miss out, check out our growing speaker list and register now!


Assaf Glazer, LDV Vision Summit 2018 ©Robert Wright

Early Bird tickets now available for our LDV Vision Summit 2019 - May 22 & 23 in NYC at the SVA Theatre.  80 speakers in 40 sessions discuss the cutting edge in visual tech. Register now!

Assaf Glazer is the founder & CEO of Nanit. At our 2018 LDV Vision Summit he shared how Nanit's unique computer vision technology and product help children and their parents sleep better. Merging advanced computer vision with proven sleep science, Nanit provides in-depth data to help babies, and parents, sleep well in those crucial early months and years of a baby’s development. He spoke about how this technology is expandable to the greater population as well, leading to early detection of other conditions like sleep apnea, seizures, autism and more. He shared the state of the art today and how he envisions sleep tech helping society in 10 and 20 years.

Hello, everyone. I'm Assaf, the CEO and founder of Nanit. Our business is human analytics. If you are not familiar with Nanit, we sell baby monitors that measure sleep and parental interaction. We use computer vision for this purpose. And there are thousands of parents today across the U.S. that are using Nanit to sleep better.

Nanit is a camera that is mounted above the crib. The experience with Nanit is very different from any other baby monitor that you can think of. I won't go much into reviews but I would say that on BGR, they wrote, "This baby monitor is so impressive, we want to have a baby just to try it."

The camera costs $279 and there is a subscription, $10 a month or $100 a year. If you look at how we do it, actually, we do it in different levels. First, we give you the best view of your child and then we give you some real-time information of what happened in the crib. It helps you to manage nap time, check your child remotely and know, “my baby woke up one hour ago, he slept for 20 minutes.” We give you daily and weekly updates and every week, we'll also send you sleep tips and recommendations on how to improve sleep based on the personal data that we saw during the last week. And finally, we celebrate achievements and give you rewards for sleep milestones and accomplishments from the last week.

“When you measure sleep with a camera, you can also measure the environment, the behavior and build a picture around the sleep architecture.”

We measure sleep. We actually measure sleep better than the state-of-the-art medical device. There are different ways to measure sleep but when you do it with a camera, you can also measure the environment, the behavior, and build a picture around the sleep architecture. In this context of babies, we also measure the parent: we know when the parent is approaching the crib, when he's touching the baby, when he's taking the baby out of the crib, or we differentiate those from other kinds of moments that we would like to ignore. Then, by combining all these behaviors together, along with other behaviors of the child, we can have a very precise diagnosis of sleep issues and beyond.
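As a rough illustration of how camera-derived signals can become sleep metrics, the sketch below thresholds an invented per-minute motion score into sleep/wake labels. This is not Nanit's algorithm, just a minimal example of the kind of analysis described.

```python
# Illustrative-only sketch: turn a per-minute motion score from an
# overhead camera into sleep/wake labels with a simple threshold, then
# report total sleep and the number of night wake-ups. Data is fake.
import numpy as np

rng = np.random.default_rng(2)
motion = rng.normal(0.05, 0.02, 600)   # 600 quiet minutes overnight
motion[120:140] += 0.5                 # two noisy "wake" episodes
motion[400:415] += 0.5

asleep = motion < 0.2                              # low motion -> asleep
wakeups = int(np.sum(np.diff(asleep.astype(int)) == -1))
print("total sleep (min):", int(asleep.sum()))
print("night wake-ups:", wakeups)
```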

This is deeply anchored in research. We were part of the Runway program at Cornell Tech - they help people looking to commercialize science - and they really helped us build collaboration between different verticals: sleep experts and psychologists, cognitive development, model development, etc. Today, we have plenty of studies in the works, and in collaboration with different types of institutes we are publishing papers.

Just last month, we published a paper at the IBSA conference. We took three months of data - 175,000 nights’ sleep - and analyzed and tracked parental intervention patterns for babies between zero and 24 months of age.

Assaf Glazer, LDV Vision Summit 2018 ©Robert Wright

So Nanit is also a research tool that can tell you about behaviors and sleep. Here you see data from across the US. For instance, you can see that babies in Denver tend to wake up one more time than in the rest of the states. I don't know the reason, but it is just a fact. We have very precise data on babies’ sleep, so we can tell you every day if the sleep pattern changes. It's interesting to see, for instance, that at Thanksgiving, parents put their babies to sleep earlier. Maybe so they will have quality time during their dinner?

Nanit is a very powerful tool. The ability to record the night and then analyze it will serve the needs of people in the medical field as well as parents. Looking at Nanit as a research tool, it gives you so much information. By having Nanit in the house and monitoring thousands of normal children, we can learn more about what is normal. And if we know what is normal, then we can know what's not normal, and whether those are the early signs of a future disease.

There are constant efforts to identify children who are at risk for autism earlier and earlier. With this technology, we could certainly develop some normative data and be able to identify otherwise unrecognized signs. This technology could also be used in the adult population, in a hospital setting, a hospice setting, or perhaps a nursing care setting.

It can look at restless leg movement, it can look at the breathing, and of course, sleep apnea is much more common in adults than in children. Then it can really open our eyes to things we didn't know as researchers, that we couldn't study in our own labs and can change the way we treat children and adults as well.


Nanit is the future of consumer-facing health. When we are looking at the future, you can think about application in, of course, pediatrics, but also adult sleep, elder care, big data analysis. Thank you.

Watch Assaf Glazer’s keynote at our LDV Vision Summit 2018 below and check out other keynotes on our videos page.

Early Bird tickets are now available for our LDV Vision Summit May 22 & 23, 2019 in NYC to hear from other amazing visual tech researchers, entrepreneurs and investors.

We are accepting applications to our Vision Summit Entrepreneurial Computer Vision Challenge for computer vision research projects and our Startup Competition for visual technology companies with <$2M in funding. Apply now & spread the word.


Early Bird tickets are available through March 28 for our LDV Vision Summit 2019 - May 22 & 23 in NYC at the SVA Theatre.  80 speakers in 40 sessions discuss the cutting edge in visual tech. Register now!

At the LDV Vision Summit 2018, Sarah Fay of Glasswing Ventures, Michael Yang of Comcast Ventures (now at OMERS) and Nihal Mehta of ENIAC spoke with Jessi Hempel of Wired (now LinkedIn) about the industries they think carry the most opportunity for visual technology.

Watch what they have to say about the future of transport, VR, cyber security, drones and much more…

Our LDV Vision Summit 2019 will have many more great discussions by top investors in visual technologies. Don’t miss out, check out our growing speaker list and register now!


As we gear up for our 6th annual LDV Vision Summit, we’ll be highlighting some  speakers with in-depth interviews. Check out our full roster of our speakers here. Early Bird tickets now available for our LDV Vision Summit on May 22 & 23, 2019 in NYC at the SVA Theatre.  80 speakers in 40 sessions discuss how visual technologies are empowering and disrupting businesses. Register for early bird tickets before March 29!

Ezra CEO and Co-founder Emi Gal (courtesy Emi Gal)

When Emi Gal was searching for the cofounder for his New York-based cancer detection startup Ezra, he pored over 2,000+ online profiles of individuals with expertise in medical imaging and deep learning. From there, he reached out to 300 individuals, conducted 90 interviews, and walked four finalists through a four-month project. The entire process took nine months.

All while still at his “day job.” For Emi, that day job was his European startup that had just been acquired by one of its largest American clients. Not your typical founder’s story, but as his meticulous cofounder search shows, Emi is not one to take a blind leap.

It’s this willingness to go methodically down the rabbit hole and to leave no stone unturned that’s defined Emi’s approach, from one venture to the next.

To date, Ezra has raised $4 million in its first round, led by Accomplice with participation from Founders Future, Seedcamp, Credo Ventures, Esther Dyson, Taavet Hinrikus, Alex Ljung, Daniel Dines and many others. Emi has since brought on a head of talent to help expand the team from two to 12 in four months.

In this article, Emi talks about how Ezra is changing the game for cancer detection through visual technology. Read on to learn about his scientific research-driven approach, the long-term possibilities for Ezra, and what to expect when he takes the stage at our LDV Vision Summit this May.

Photo courtesy Ezra

MISSION-DRIVEN: MAKING CANCER SCREENING MORE AFFORDABLE, ACCURATE AND NON-INVASIVE

The name Ezra means “help” in Hebrew.

It’s a spot-on moniker for a company that, although powered by artificial intelligence and visual technology, Emi says is more mission-driven than technology-driven.

Multiple components in cancer detection rely on the painstaking study and analysis of visual inputs, making visual technology ripe for leveraging.

In its current stage, Ezra is focused on detecting prostate cancer through full-body MRIs that are then analyzed by artificial intelligence. Ezra’s subscription-based model offers an MRI and access to medical staff and support at $999 for one year.

The full-body MRI is a huge change compared to the most prevalent detection method for a cancer that kills 1 in 41 men: getting a biopsy, which is painful, uncomfortable, and can have unpleasant side effects.

Magnetic resonance imaging, on the other hand, eliminates the discomfort and is more accurate than biopsies or blood tests. MRIs, however, are not without their costs: about $1,500 if an individual books one himself, sans Ezra. And then there’s the time a radiologist needs to scan it.

Radiologist vs AI detection of cancer in MRI scans (Courtesy Ezra)

“We’re trying to make MRI-based cancer screening affordable,” says Emi. “The cost of getting screened with MRI has two parts: the scanning and the analysis. You need to get a scan, and a radiologist needs to analyze the scan and provide a report based on the scan. Those two things each have a cost associated with them. We’re using AI to help drive the costs down by assisting the radiologist, and by accelerating the MRI scanning time.”

The first thing a radiologist does is make a bunch of manual measurements of the organ in question — in Ezra’s case, the size of the prostate. If you have an enlarged prostate, you have a higher likelihood of having cancer. If there’s a lesion like a tumor in the prostate, radiologists need to measure the size and location of the tumor. They need to segment the tumor so they can make a 3D model so the urologist knows what to focus on. All of those measurements and annotations are currently done manually, which makes up about half of a radiologist’s workload.

“What we’re focusing on is using the Ezra AI to automate all of those trivial, manual tasks, so the radiologist can spend less time per scan doing manual things, and instead focus on analysis and reporting. That will potentially make them more accurate, as well.”
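One example of such an automated measurement is estimating organ volume from a predicted segmentation. The sketch below is a generic illustration with made-up numbers and a fake mask; it is not Ezra's pipeline.

```python
# Hedged sketch of one "trivial, manual task" done automatically:
# estimating organ volume from a 3D segmentation mask. The mask and
# voxel spacing are invented for illustration.
import numpy as np

# Pretend segmentation output: a 3D binary mask (slices x rows x cols).
mask = np.zeros((32, 128, 128), dtype=bool)
mask[10:22, 40:90, 45:95] = True              # stand-in "prostate" region

# MRI voxel spacing in millimetres (slice thickness, row, column).
spacing_mm = (3.0, 0.5, 0.5)
voxel_volume_ml = np.prod(spacing_mm) / 1000.0   # mm^3 -> millilitres

volume_ml = mask.sum() * voxel_volume_ml
print(f"estimated organ volume: {volume_ml:.1f} mL")
# A radiologist could review this number instead of measuring by hand.
```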

For the future, says Emi, the team is already considering how to use AI to accelerate the scanning process as well.

“The reason this is possible now and it wasn’t possible before is that deep learning has given us the ability to be as good or better than humans at these things, which means it’s now feasible to create these types of technologies and implement them into the clinical workflow.”

THE SEED OF A NEW IDEA: PLANT IT BEFORE YOU’RE READY

While it looks like Emi has seamlessly gone from one successful venture to the next, the reality is a lot more nuanced.

It was while he was still running Brainient, before it was acquired, that he started plotting his next move. In 2015, Emi was introduced to Hospices of Hope in Romania, which builds and operates hospices that care for terminally ill cancer patients. During his visits with doctors and patients, the seed of Ezra was born.

Cancer struck a personal chord. As a child, Emi had developed hundreds of moles on his body, which put him at very high risk of melanoma. He started getting screened and going to dermatologists regularly from the age of 10 onwards to make sure they weren’t cancerous. While he hasn't yet had any malignant lesions, he’s experienced the discomfort of biopsies firsthand, and he’s always been very conscious about the importance of screening.

“I realized while speaking to doctors that one of the biggest problems is that people get detected late. I started looking into that and realized that [this is] because there’s no fast, affordable, accurate way to screen cancer anywhere in the body,” says Emi. From there, he began researching different ways in which you can screen for cancer. A computer scientist by education, he spent two years on what he calls “learning and doing an accelerated undergrad in oncology, healthcare, genetics and medical imaging.”

That accelerated education is supplemented with an incredibly impressive team. It’s no surprise that Ezra cofounder Diego Cantor is equally curious and skilled, and brings an enormous technical repertoire to the table: an undergraduate education in Electronic Design and Automation Engineering, a master’s degree in Computer and Systems Engineering, a PhD in Biomedical Engineering (application of machine learning to study epilepsy with MRI), and post-doctoral work in the application of deep learning to solve medical imaging problems. The scientific team is rounded out with deep technical experts: Dr. Oguz Akin (professor of radiology at Weill Cornell Medicine and a radiologist at Memorial Sloan-Kettering Cancer Center), Dr. Terry Peters (director of Western University’s Biomedical Imaging Research Centre), Dr. Lawrence Tanenbaum (VP and Medical Director Eastern Division, Director of MRI, CT and Advanced Imaging at RadNet), and Dr. Paul Grewal (best-selling author of Genius Foods).

Ezra’s deeply technical team trained its AI with data sets from the National Institutes of Health marked up by radiology experts. On new data sets, the Ezra AI agreed with the experts 90% of the time.
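A headline figure like that is typically a per-case agreement rate against expert reference labels. The sketch below shows one simple way such a number could be computed; the labels are invented and this is not Ezra's evaluation code.

```python
# Hedged sketch: compare per-case model calls against expert labels and
# report agreement. All values below are made up for illustration.
import numpy as np

expert = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 0])  # 1 = suspicious lesion
model  = np.array([1, 0, 0, 1, 0, 0, 0, 0, 1, 0])  # model's calls

agreement = (expert == model).mean()                # overall agreement
sensitivity = (model[expert == 1] == 1).mean()      # lesions the model caught
print(f"agreement: {agreement:.0%}, sensitivity: {sensitivity:.0%}")
```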

LEARNING (AND STARTUP SUCCESS) IS ALL ABOUT COURSE CORRECTING

As a lifelong learner who actively chronicles his year-long attempts to gain new skills and habits, Emi has picked up a thing or two about doing new things for the first time.

“Learning anything of material value is really, really hard,” says Emi, who’s done everything from training his memory with a world memory champion to hitting the gym every single day for one year.

This focus on the process and being comfortable with being uncomfortable came in handy when Emi, who studied computer science and applied mathematics in college, pivoted into cancer detection.

Emi cycled through twelve potential ideas before deciding on Ezra’s current technology. At every turn, he researched methodically and consulted with experts.

One of the promising ideas that Emi considered — shelved for now — is DNA-based liquid biopsies. “We’re at least a decade away from DNA-based liquid biopsies being feasible and affordable,” says Emi, who was searching to make an immediate impact.

Emi had just sold Brainient and was on his honeymoon when he came up with the winning idea in November 2016. “I had this idea: what if you could do a full-body MRI and then use AI to lower the cost of the full-body MRI, both in terms of scanning time as well as analysis in order to make a full-body MRI affordable for everybody? My wife loved the idea, and that’s always a good sign.”

In January 2017, Emi discovered a research paper that was comparing the current way to screen for prostate cancer — a Prostate-Specific Antigen (PSA) blood test followed by a biopsy — with MRIs as an alternative. “An MRI was by far more accurate and a much better patient experience, and so I was like, this is it. It can work. And that’s how Ezra was born.”

THE FUTURE

Emi has big plans for Ezra going forward, and this year’s LDV Vision Summit is one step in that direction. He hopes to meet people working in the vision tech space, particularly within healthcare.

Although Ezra has been live just since January 7th of this year — 20 people were scanned in the first three weeks — its early results are very promising.

“The first person we scanned had early-stage prostate lesions. That really makes you wake up in the morning and go at it,” says Emi.

Out of the first 20 scanned, three had early-stage prostate lesions they were unaware of. Two early users came in with elevated PSA levels, but the MRIs showed no lesions, obviating the need to do prostate biopsies.

The long-term potential for Ezra — going beyond prostate screenings — is also clear.

“We helped one man who thought he was dying from cancer find out that he had no cancer but that he was likely an undiagnosed diabetic,” says Emi. “We gave him the information for his urologist and physician to make that diagnosis. We checked with him a month later and he’s over the moon happy. We helped him get peace of mind that he doesn’t have cancer, as well as diagnose a disease he wasn’t aware he had.”

Even though these results have been powered by AI and MRIs, Emi is emphatic that Ezra is not an AI company.  “We want to help people detect cancer early...and we will build any technology necessary. We think of ourselves as a healthcare company leveraging AI, not the other way around,” he says.

While Ezra’s current focus is prostate cancer, expanding to other cancers that affect women is on the horizon. After all, as Emi points out, “Women are the most preventative-focused type of individual, for themselves and their families.” To underscore that point, Emi says that many of the early adopters for prostate screenings have come at the encouragement of men’s female partners.

“The way we are approaching expansion is based on cancer incidence. We’re starting with the cancers that are most prevalent across society, with prostate being one of them, breast, lungs, stomach, and our ultimate goal is to be able to do one scan a year and find cancer in any of those organs.

“Our ultimate goal is to do a full body MRI analyzed by AI and find any and all cancer. In a decade, I hope we will have gotten there.”

If you’re building a unique visual tech company, we would love for you to join us. At LDV Capital, we’re focused on investing in deep technical people building visual technology businesses. Through our annual LDV Vision Summit and monthly community dinners, we bring together top technologists, researchers, startups, media/brand executives, creators and investors with the purpose of exploring how visual technologies leveraging computer vision, machine learning and artificial intelligence are revolutionizing how humans communicate and do business.


Viorica Patraucean, LDV Vision Summit 2018 ©Robert Wright

Early Bird tickets now available for our LDV Vision Summit 2019 - May 22 & 23 in NYC at the SVA Theatre.  80 speakers in 40 sessions discuss the cutting edge in visual tech. Register now!

Viorica Patraucean is a Research Scientist at Google DeepMind. At our Vision Summit 2018 she enlightened us with her recent work on massively parallel video nets and how it’s especially relevant for real world low-latency/low-power applications. Previously she worked on 3D shape processing in the Machine Intelligence group of the Engineering Department in Cambridge, after completing a PhD in image processing at ENSEEIHT–INP Toulouse. Irrespective of the modality - image, 3D shape, or video - her goal has always been the same: design a system that comes closer to human perception capabilities.

Like most of you here, I'm interested in making machines see the world the way we see it. When I say machines, I'm thinking of autonomous cars, robots or systems for augmented reality. These are all very different applications, of course, but in many cases they have one thing in common: they require low-latency processing of the visual input. In our work, we use deep artificial neural networks, which consist of a series of layers. We feed in an image, this image is processed by each layer of the network, and then we obtain a prediction; assuming that this is an object detector, there's a cat there. We care about cats and all.

Just to make it clear what I mean by latency – I mean the time that passes between the moment when we feed in the image and the moment when we get the prediction. Here, obviously, the latency is just the sum of the computational times of all the layers in the network.
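With toy numbers (invented here, not taken from the talk), that summation looks like this:

```python
# Toy numbers: in a frame-at-a-time pipeline, each frame waits for every
# layer, so latency is the sum of the per-layer compute times.
layer_ms = [5, 10, 15, 20]          # hypothetical per-layer times

latency_ms = sum(layer_ms)          # 50 ms from input frame to prediction
max_fps = 1000 / latency_ms         # ~20 frames per second at best
print(latency_ms, "ms latency ->", round(max_fps), "fps")

# Adding layers for accuracy adds directly to this sum.
print(sum(layer_ms + [25, 25]), "ms latency for a deeper network")
```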

Now, it is common practice that, if we are not quite happy with the accuracy of the system, we can make the system deeper by adding more layers. Because this increases the capacity of the network, the expressivity of the network, we get better accuracy. But this doesn't come for free, of course. This will lead to increasing the processing time and the overall latency of the system. Current object detectors run at around five frames per second, which is great, of course, but what does five frames per second mean in the real world?

I hope you can see the difference between the two videos here. On the left, you see the normal video at 25 frames per second and, on the right, you see the five frames per second video obtained by keeping every fifth frame in the video. I can tell you, on the right, the tennis ball appears in two frames, so, if your detector is not perfect, it might fail to detect it. Then you're left to play tennis without seeing the ball, which is probably not ideal.

The question then becomes, how can we do autonomous driving at five frames per second, for example? One answer could be like this, turtle power. We all move at turtle speed, but probably that's not what we are after, so then we need to get some speed from somewhere.

One option, of course, is to rely on hardware. Hardware has been getting faster and faster in the past decade. However, the faster hardware normally comes with higher energy consumption and, without a doubt, on embedded devices, this is a critical constraint. So, what would be more sustainable alternatives to get our models faster?

Let's look at what the brain does to process a visual input. There are lots of numbers there. Don't worry. I'll walk you through them. I'm just giving a comparison between a generic artificial neural network and the human brain. Let's start by looking at the number of basic units, which in the brain are called neurons, and their connections, which are called synapses.

Here, the brain is clearly superior to any model that we have so far by several orders of magnitude, and this could explain, for example, the fact that the brain is able to process so many things in parallel and to achieve such high accuracy. However, when we look at speed of basic operation, here we can see that actually our electronic devices are much faster than the brain, and the same goes for precision of computation. Here, again, the electronic devices are much more precise.

“Current systems consider the video as a collection of independent frames. And, actually, this is no wonder since the current video systems were initially designed as image models and then we just run them repeatedly on the frames of a video.”

However, as I said, speed and precision of computation normally come with high power consumption, to the point where a current GPU will consume about 10 times more power than the entire human brain. Yet, with all these advantages on the side of the electronic devices, we are still running at five frames per second when the human brain can actually process more than 100 frames per second, so this points to something suboptimal in how we use that hardware.

I'm going to argue here that the reason for this suboptimal behavior comes from the fact that current systems consider the video as a collection of independent frames. And, actually, this is no wonder since the current video systems were initially designed as image models and then we just run them repeatedly on the frames of a video. By running them in this way, the processing is completely sequential, except for the processing that happens on the GPU, where we can parallelize things. Overall, it still remains sequential, and all the layers in the network work at the same pace, which is the opposite of what the brain does.

©Robert Wright/LDV Vision Summit

There is strong evidence that the brain exhibits a massively parallel processing mode and that neurons fire at different rates. All this because the brain rightfully considers the visual stream as a continuous stream that exhibits high correlations and redundancy across time.

Just to go back to the initial sketch, this is how our current systems work. You get an image. This goes through every layer in the network. You get a prediction. The next image comes in. It goes again through every layer and so on. What you should observe is that, at any point in time, only one of these layers is working and all the others are just waiting around for their turn to come.

This is clearly not useful. It's just wasting resources, and the other thing is that every layer works at the same pace, which, again, is not needed if we take into account, for example, the slowness principle. I'm just trying to depict here what that means: this principle informally states that fast-varying observations are explained by slow-varying factors.

If you look at the top of the figure on the left - those are the frames of a video depicting a monkey. If you look at the pixel values in the pixel space, you will see high variations because of some light changes or the camera moves a bit or maybe the monkey moves a bit. However, if we look at more abstract features of the scene, for example, the identity of the object or the position of the object, these will change much more slowly.

Now, how is this relevant for artificial neural networks? It is quite well-understood now that deeper layers in an artificial neural network extract more and more abstract features, so, if we agree with the slowness principle, then it means that the deeper layers can work at a slower pace than the layers at the input of the network.

Now, if we put all these observations together, we obtain something like this. We obtain like a Christmas tree, as shown, where all the layers work all the time, but they work at different rates, so we are pipelining operations, and this generates more parallel processing. We can now update our layers at different rates.

Initially, I said that the latency of a network is given by the sum of the computation times of all the layers in the network. Now, very importantly, with our design, the latency is given by the slowest of the layers in the network. In practice, we obtain up to four times faster response. I know it's not 10 times, but four is actually enough because, in perception, once you are past 16 frames per second, then you are quite fine, I think.

We obtain this faster response with 50% less computation, so I think this is not negligible and, again, very important, we can now make our networks even deeper to improve their accuracy without affecting the latency of the network.
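With invented per-layer timings, the contrast between the sequential and pipelined designs described above can be sketched as follows; the numbers are illustrative, not DeepMind's measurements.

```python
# Toy sketch of the pipelined design: when every layer works
# concurrently on a different frame, steady-state latency is set by the
# slowest layer, and deep layers that update less often (the slowness
# principle) cut total computation. Numbers are invented.
layer_ms = [5, 10, 15, 20]
update_every = [1, 1, 2, 4]   # deeper layers refresh every 2nd / 4th frame

sequential_latency = sum(layer_ms)        # 50 ms per frame
pipelined_latency = max(layer_ms)         # 20 ms per frame (slowest layer)

full_cost = sum(layer_ms)                                  # every layer, every frame
pipelined_cost = sum(t / k for t, k in zip(layer_ms, update_every))

print("latency:", sequential_latency, "->", pipelined_latency, "ms")
print("compute per frame:", full_cost, "->", round(pipelined_cost, 1), "ms")
```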

I hope I convinced you that this is a more sustainable way of creating low-latency video models, and I'm looking forward to the day when our models will be able to process everything that the camera can provide. I'm just showing here a beautiful video captured at 1,000 frames per second; I think this is the future.

Thank you.

Watch Viorica Patraucean’s keynote at our LDV Vision Summit 2018 below and check out other keynotes on our videos page.

Early Bird tickets are now available for the LDV Vision Summit May 22 & 23, 2019 in NYC to hear from other amazing visual tech researchers, entrepreneurs and investors.

We are accepting applications to our Vision Summit Entrepreneurial Computer Vision Challenge for computer vision research projects and our Startup Competition for visual technology companies with <$2M in funding. Apply now & spread the word.


We at LDV Capital are currently recruiting our analyst interns for Summer 2019 in NYC.

LDV Capital is a thesis-driven early stage venture fund investing in people building visual technology businesses that leverage computer vision, machine learning and artificial intelligence to analyze visual data.

We are looking for two entrepreneurially minded, visual tech savvy individuals to join our team from May to August 2019:

  • Analyst intern who has experience with startups or venture capital. Interested in learning more about venture capital, market mapping, investment research, due diligence and how to build a successful startup.

  • Technical Analyst intern with a deep-seated interest in computer vision, entrepreneurship and venture capital. Looking to learn more about venture capital, business operations, due diligence and how to run a successful startup.

We are a small team and work closely with our summer interns on many aspects of building startups and investing. Our goals are to introduce our interns to the evaluation process of teams, technology and businesses, to help them connect and collaborate with entrepreneurs across the globe, and to lead them on a deep dive into sectors being disrupted by visual technologies. We work to make our internships unique in three ways:


1. We expose our interns to the leading edge visual technologies that will disrupt all industries and society in the next 20 years.

We invest in companies that are working to improve the world we live in with visual technology. As such, our horizontal thesis drives us to look at and invest in any and all industries where deep technical teams are using computer vision, machine learning and artificial intelligence to solve a critical problem.

Since our interns have a seat at the table with our deal flow - you are pushed to educate yourself on numerous industries and applications of visual tech in order to understand and evaluate the value proposition of the early stage startups coming through the pipeline.

At LDV Capital you could be reviewing the pitch deck of a visual tech company in agriculture in the morning and sitting in on a call with a visual tech healthcare startup in the afternoon.

You will be asked to develop and present market trends, competitive landscapes, business opportunities and more for cutting edge, early stage technology companies. You will be consulted for your valuable opinions on those companies and technologies during weekly deal flow meetings.

While it is challenging work, the versatile knowledge of visual technologies and their applications to many industries that you develop over the course of the summer is applicable to almost any opportunity you pursue after your internship at LDV.

“My summer internship with LDV Capital provided a unique opportunity to interact with countless visual tech entrepreneurs while experiencing the excitement of early-stage investing. Most of all, the experience was a front row seat to the newest sensing, display, and automation trends which underpin my life goals and which will revolutionize the world as we know it.”
- Max Henkart, Summer Analyst 2018

Max is currently exploring multiple robotics spin-outs, consulting with camera development/supplier management teams, considering full time roles in VC/CVC/Self-Driving firms, and graduating from CMU’s MBA program in May 2019. ©Robert Wright

2. We empower our interns to own their own projects.

Whether you are conducting a market landscape review, investigating a unique application of computer vision, or doing a trend analysis, we want you to own it. We are here to help guide you on planning, setting milestones, creating materials, presenting your work, but we believe in “teaching you to fish.”

“I really enjoyed working with the LDV team, and I learned a lot from the experience. LDV gave me a lot of responsibility, and I was able to learn what it is like to work as a venture capitalist. There is no better way to learn about entrepreneurship and venture capital.”
- Ahmed Abdelsalam, Summer & Fall Analyst 2018

Ahmed is currently completing the final semester of his MBA at the University of Chicago, Booth School of Business ©Robert Wright

There is no bigger example than our annual LDV Insights research project, where we deep dive into a sector or trend with prime opportunities for visual technology businesses. Our interns contribute to the project plan, conduct the research, interview experts, analyze the data, write the slides, and are named authors in the publication.

In 2017, our research found that “45 Billion Cameras by 2022 Fuel Business Opportunities” and it was published by Interesting Engineering and others.

In 2018, we identified “Nine Trends Where Visual Technologies Will Improve Healthcare by 2028” and published it on TechCrunch.

As one facet of your internship, 2019 Summer Analysts will be working on our third annual LDV Insights report on a very exciting, immensely growing sector of the economy. In your application, let us know what you think the sector for our 2019 Insights report will be.

“Interning for LDV was truly one of the most rewarding experiences in my career thus far. Working in the smaller environment allowed me to work closely with the GP and gain insight into the VC process. Unlike some other busy-work dominated internships, LDV provided an opportunity to own my own project, develop a research report that was ultimately published by the firm.”
- Sadhana Shah, Summer Analyst 2017

Sadhana is currently finishing her final semester at NYU Stern School of Business, with a double major in Management and Finance with a minor in Social Entrepreneurship. After graduation, she will be joining the KPMG Innovation Lab as an Associate. ©Robert Wright

3. We provide opportunities to network with startups, technologists & other investors.

At LDV Capital, you don’t get stuck behind a desk all day, every day. Our interns kick off their summer with our sixth annual LDV Vision Summit, which has about 600 attendees, 80 speakers, 40 sessions, and 2 competitions over 2 days. Interns also help with and attend our LDV Annual General Partners Meeting. Your second week of the internship looks like this:

  • Monday - Help facilitate the subfinals for our Startup Competition and Entrepreneurial Computer Vision Challenge, watching the pitches of 40+ competitors and hearing the feedback and evaluation by expert judges.

  • Tuesday - Assemble aspects of our annual report and attend our Annual General Meeting for investors as well as a dinner for our investors, portfolio companies & expert network.

  • Wednesday - Attend our LDV Vision Summit, listening to keynotes about state-of-the-art visual technologies. Attend our VIP Reception for all our speakers & sponsors.

  • Thursday - Attend the second day of our LDV Vision Summit.

The rest of the summer is filled with opportunities to attend our gender-balanced LDV Community dinners, meet with startups, go to industry events, watch pitch competitions and more.

“Spending time at LDV Capital was an unforgettable experience. I’m thankful for access to A+ investors and entrepreneurs, collaborating with a world class team and a front row introduction to VC."
-
Danilo Vicioso, Summer Analyst 2018

Danilo is currently an EIR at Prehype, a corporate innovation and venture studio behind startups like Ro, ManagedByQ, BarkBox and AndCo. ©Robert Wright

Apply before Feb 28, 2019 for consideration.

If you believe you have the skills, experience and motivation to join our team, and would like to gain more knowledge over the summer in the following areas, we would love to hear from you:

  • Computer vision, machine learning and artificial intelligence

  • Market mapping

  • Investment research

  • Startup due diligence

  • Startup operations

  • Technical research

  • Trend analysis

  • Data analysis

  • Networking with entrepreneurs, other investors & technologists

Read carefully through everything you can find out about us online and then submit a concise application showcasing why you are a great fit for the opportunity by February 28.

We carefully consider all applications and will get back to you ASAP. Thanks!

Apply now.



The power of object recognition and the transformative effect of deep learning for analyzing scenes and parsing content can have a major impact on advertising. At the 2016 Annual LDV Vision Summit, Ken Weiner, CTO at GumGum, told us about the impact of image recognition and computer vision in online advertising.

The 2017 Annual Vision Summit is this week, May 24 & 25, in NYC. Come see new speakers discuss the intersection of business and visual tech.

I’m going to talk a little bit about advertising and computer vision and how they go together for us at GumGum. Digital images are basically showing up everywhere you look. You see them when you're reading editorial content. You see them when you're looking at your social feeds. They just can't be avoided these days. GumGum has basically built a platform with computer vision engineers that tries to identify a lot of information about the images that we come across online. We try to do object detection. We look for logos. We detect brand safety, sentiment analysis, all those types of things. We basically want to learn as much as we can about digital photos and images for the benefit of advertisers and marketers.

The question is: what value do marketers get from having this information? Well, for one thing, if you're a brand, you really want to know: how are users out there engaging with your brand? We look at the fire hose of social feeds. We look, for example, for brand logos. In this example, Monster Energy drink wants to find all the images out there where their drink appears in the photo. You have to remember that about 80% of the photos out there might have no textual information that’s going to identify the fact that Monster is involved in this photo, but they are. You really need computer vision in order to understand that.
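To make the idea concrete, here is a minimal sketch of spotting a known logo in a photo that carries no helpful text, using classical ORB keypoint matching in OpenCV. This is not GumGum's pipeline (which is not public and would in practice use trained detection models at fire-hose scale); the file names and thresholds below are assumptions for illustration only.

```python
# Hypothetical logo-spotting sketch (not GumGum's actual system): match a
# reference logo against a photo with ORB keypoints and Lowe's ratio test.
import cv2

def logo_likely_present(logo_path, photo_path, min_good_matches=25):
    logo = cv2.imread(logo_path, cv2.IMREAD_GRAYSCALE)
    photo = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    if logo is None or photo is None:
        return False

    orb = cv2.ORB_create(nfeatures=2000)
    _, logo_desc = orb.detectAndCompute(logo, None)
    _, photo_desc = orb.detectAndCompute(photo, None)
    if logo_desc is None or photo_desc is None:
        return False

    # Hamming distance suits ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(logo_desc, photo_desc, k=2)

    # Lowe's ratio test: keep matches clearly better than the runner-up.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) >= min_good_matches

# e.g. logo_likely_present("monster_logo.png", "social_photo.jpg")
```

A production system would replace this with a detector trained on many logo variants, but the core point stands: the signal lives in the pixels, not the caption.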

Why do they do that? They want to look at how people engage with them. They want to look at how people are engaging with their competitors. They may want to just understand what is changing over time. What are maybe some associations with their brand that they didn't know about that might come up. For example, what if they start finding out that Monster Energy drinks are appearing in all these mountain biking photos or something? That might give them a clue that they should go out and sponsor a cycling competition. The other thing they can find out with this is who are their main brand ambassadors and influencers out there. Tools like this give them a chance to connect with those people.

What makes [in-image] even more powerful is if you can connect the brand message with that image in a very contextual way and tap into the emotion that somebody’s experiencing when they’re looking at a photo.

-Ken Weiner

Another product that’s been very successful for us is something we call in-image advertising. We came up with this kind of unit about eight years ago. It was really invented to combat what people call banner blindness, which is the notion that, out on a web page, you start to learn to ignore the ads that are showing at the top and the side of the page. If you were to place brand messages right in line with content that people are actively engaged with, you have a much better chance of reaching the consumer. What makes it even more powerful is if you can connect the brand message with that image in a very contextual way and tap into the emotion that somebody’s experiencing when they’re looking at a photo. Just the placement alone for an ad like this receives 10x the performance of traditional advertising because it’s something that a user pays attention to.

Obviously, we can build a big database of information about images and be able to contextually place ads like this, but sometimes requests will come from advertisers that we won't be able to serve with our existing knowledge. We’ll have to go out and develop custom technology for them. For example, L’Oréal wanted to advertise a product for hair coloring. They asked us if we could identify every image out on different websites and identify the color of the hair of the people in the images so that they could strategically target the products that go along with those hair colors. We ran this campaign for them. They were really, really happy with it.
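For a rough sense of how hair-color tagging could work, here is a hedged sketch: detect a frontal face with OpenCV, sample the region just above it, and bin the dominant color. The cascade choice, the sampling strip, and the color bins are all illustrative assumptions, not L'Oréal's or GumGum's actual method.

```python
# Hypothetical hair-color tagger: crude heuristics standing in for whatever
# custom model was actually built for the campaign.
import cv2
import numpy as np

_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def dominant_hair_color(image_path):
    img = cv2.imread(image_path)
    if img is None:
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None

    x, y, w, h = faces[0]
    # Sample a strip above and around the top of the face box as "hair".
    top = max(0, y - h // 2)
    hair = img[top:y + h // 4, x:x + w]
    if hair.size == 0:
        return None

    # Dominant color via k-means over the sampled pixels (BGR).
    pixels = np.float32(hair.reshape(-1, 3))
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 5,
                                    cv2.KMEANS_RANDOM_CENTERS)
    b, g, r = centers[np.bincount(labels.flatten()).argmax()]

    # Very rough bins; a real system would use a trained classifier.
    if r > 140 and g > 100 and b < 120:
        return "blonde"
    if r > 120 and g < 90 and b < 90:
        return "red"
    if max(r, g, b) < 80:
        return "black"
    return "brown"
```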

They liked it so much that they came back to us, and they said, “We had such a good experience with that. Now we want you to go out and find people that have bold lips,” which was a rather strange notion for us. Our computer vision engineers came up with a way to segment the lips and figure out, “What does boldness mean?” L’Oréal was very happy. They ran a lipstick campaign on these types of images.
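Purely as an illustration, a crude "bold lips" score could be approximated by cropping the mouth region of a detected face and measuring color saturation; the team described above actually segmented the lips, which this sketch does not attempt, so treat the region crop and the saturation proxy as assumptions.

```python
# Hypothetical "bold lips" proxy: mean HSV saturation over a rough mouth crop.
import cv2
import numpy as np

_faces = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def bold_lips_score(image_path):
    img = cv2.imread(image_path)
    if img is None:
        return 0.0
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    boxes = _faces.detectMultiScale(gray, 1.1, 5)
    if len(boxes) == 0:
        return 0.0
    x, y, w, h = boxes[0]
    # Lower-middle portion of the face box as a stand-in for the lips.
    mouth = img[y + 2 * h // 3:y + h, x + w // 4:x + 3 * w // 4]
    hsv = cv2.cvtColor(mouth, cv2.COLOR_BGR2HSV)
    # Saturation runs 0-255; highly saturated reds and pinks score near 1.0.
    return float(np.mean(hsv[:, :, 1]) / 255.0)
```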

A couple years ago, we had a very interesting in-image campaign that I think might be the first time that the actual content you're viewing became part of the advertising creative. Lifetime TV wanted to advertise the TV series Witches of East End. We looked for photos where people were facing forward. When we encountered those photos, we dynamically overlaid green witch eyes onto these people. It gives people the notion that they become a little witchy for a few seconds. Then that collapses and becomes a traditional in-image ad where somebody, after being intrigued by the eyes, can go ahead and click on it to watch a video lightbox and see the preview for the show.
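A simplified version of that effect can be sketched with off-the-shelf OpenCV cascades: treat a frontal-face detection as "facing forward," locate the eyes, and alpha-blend green discs over them. The production ad unit's landmarking and animation are not public, so everything below is an assumption for illustration.

```python
# Hypothetical "witch eyes" overlay: frontal face + eye cascades, then a
# translucent green disc blended over each detected eye.
import cv2

_face = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
_eyes = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def overlay_witch_eyes(image_path, out_path):
    img = cv2.imread(image_path)
    if img is None:
        return False
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = _face.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:  # frontal cascade as a proxy for "facing forward"
        return False

    overlay = img.copy()
    x, y, w, h = faces[0]
    roi = gray[y:y + h, x:x + w]
    for ex, ey, ew, eh in _eyes.detectMultiScale(roi, 1.1, 5)[:2]:
        center = (x + ex + ew // 2, y + ey + eh // 2)
        cv2.circle(overlay, center, ew // 3, (0, 255, 0), -1)  # BGR green

    # Blend the discs at 60% opacity so the eyes glow rather than vanish.
    blended = cv2.addWeighted(overlay, 0.6, img, 0.4, 0)
    cv2.imwrite(out_path, blended)
    return True
```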

I just thought this was one of the most interesting ad campaigns I’ve ever seen because it mixes the notion of content and creative into one. What’s coming after this? Naturally, this will extend into video. TV networks are already training you to look at information in the lower third of the screen. It’s only natural that this will get replaced by contextual advertising the same way we’ve done it for images online.

Another thing that I think is coming soon is the ability to really annotate specific products and items inside images at scale. People have tried to do this using crowdsourcing in the past, but it’s just too expensive. When you're looking at millions of images a day like we do, you really need information to come in a more automated way. There’s been a lot of talk about AR. Obviously, advertising’s going to have to fit into this in some way or another. It may be a local direct response advertiser. You're walking down the street. Someone gives you a coupon for McDonald’s. Maybe it’ll be a brand advertiser. You see a car accident, and they’re going to remind you that you need to get car insurance.

Lastly, I wanted to pose the idea of in-hologram ads that I think could come in the future if these things like Siri and Alexa … Now they’re voice, but in the future, who knows? They might be 3D images living in your living room, and advertisers are going to want a way to basically put their name on those holograms. Thank you very much.

Get your tickets now to the next Annual LDV Vision Summit.
