Follow Zbigatron - Zbigniew Zdziarski on Feedspot



In this post I would like to show you some results from another interesting paper I came across recently, published last year. It’s on the topic of non-line-of-sight (NLOS) imaging or, in other words, research that helps you see around corners. NLOS could prove particularly useful for use cases such as autonomous cars in the future.

I’ll break this post up into the following sections:

  • LIDAR and NLOS
  • Current Research into NLOS

Let’s get cracking, then.


You may have heard of LIDAR (a term which combines “light” and “radar”). It is used very frequently as a tool to scan surroundings in 3D. It works similarly to radar but instead of emitting radio waves, it sends out pulses of infrared laser light and then calculates the time it takes for this light to return to the emitter. Closer objects reflect this laser light back sooner than distant objects. In this way, a 3D representation of the scene can be acquired, like this one which shows a home damaged by the 2011 Christchurch Earthquake:

(image obtained from here)

LIDAR has been around for decades and I came across it very frequently in my past research work in computer vision, especially in the field of robotics. More recently, LIDAR has been experimented with in autonomous vehicles for obstacle detection and avoidance. It really is a great tool to acquire depth information of the scene.
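The distance calculation behind LIDAR is simple time-of-flight arithmetic: the light travels to the object and back, so the range is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name and sample timing are my own, for illustration):

```python
C = 299_792_458  # speed of light in m/s

def lidar_range(round_trip_seconds: float) -> float:
    """Distance to a reflecting surface from a pulse's round-trip time.

    The pulse covers the distance twice (out and back), hence the
    division by two.
    """
    return C * round_trip_seconds / 2

# A pulse that returns after 200 nanoseconds hit something ~30 m away:
print(lidar_range(200e-9))  # → 29.9792458
```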

NLOS Imaging

But what if where you want to see is obscured by an object? What if you want to see what’s behind a wall or what’s in front of the car in front of you? LIDAR does not, by default, allow you to do this:

The rabbit object is not reachable by the LIDAR system (image adapted from this video)

This is where the field of NLOS imaging comes in.

The idea behind NLOS is to use sensors like LIDAR to bounce laser light off walls and then read back any reflected light.

The laser is bounced off the wall to reach the object hidden behind the occluder (image adapted from this video)

This process is repeated around a particular point (p in the image above) to capture as much reflected light as possible. The reflected light is then analysed and the system attempts to reconstruct any objects hidden behind the occluder.

This is still an open area of research with many assumptions (e.g. that light is not reflected multiple times by the occluded object but bounces straight back to the wall and then the sensors) but the work on this done so far is quite intriguing.

Current Research into NLOS

The paper that I came across is entitled “Confocal non-line-of-sight imaging based on the light-cone transform“. It was published in March of last year in the journal Nature (555, no. 7696, p. 338). Nature is one of the world’s most prestigious academic journals, so anything published there represents truly world-class research.

The experiment setup from this paper was as shown here:

The setup of the experiment for NLOS. The laser light is bounced off the white wall to hit and reflect off the hidden object (image taken from original publication)

The idea, then, was to try and reconstruct anything placed behind the occluder by bouncing laser light off the white wall. In the paper, two objects were scrutinised: an “S” (as shown in the image above) and a road sign. With a novel method of reconstruction, the authors were able to obtain the following reconstructed 3D images of the two objects:

(image adapted from original publication)

Remember, these results are obtained by bouncing light off a wall. Very interesting, isn’t it? What’s even more interesting is that the text on the road sign has been detected as well. Talk about precision! You can clearly see how, one day, this could come in handy with autonomous cars, which could use information such as this to increase safety on the roads.

A computer simulation was also created to accurately quantify the error rates involved in the reconstruction process. The simulated setup was as shown in the above images with the bunny rabbit. The results of the simulation were as follows:

(image adapted from original publication)

The green in the image is the reconstructed parts of the bunny superimposed on the original object. You can clearly see how well the 3D shape and structure of the object are preserved. Obviously, the parts of the bunny not visible to the laser could not be reconstructed.


This post introduced the field of non-line-of-sight imaging, which is, in a nutshell, research that helps you see around corners. The idea behind NLOS is to use sensors like LIDAR to bounce laser light off walls and then read back any reflected light. The scene behind an occlusion is then reconstructed from this reflected light.

Recent results from state-of-the-art research in NLOS published in the Nature journal were also presented in this post. Although much more work is needed in this field, the results are quite impressive and show that NLOS could one day be very useful for applications such as autonomous cars, which could use this information to increase safety on the roads.

To be informed when new content like this is posted, subscribe to the mailing list:


This post is another that has been inspired by a forum question: “What are some lesser known use cases for computer vision?” I jumped at the opportunity to answer this question because if there’s one thing I’m proud of with respect to this blog, it is the weird and wacky use cases that I have documented here.

Some things I’ve talked about include:

In this post I would like to add to the list above and discuss another lesser known use case for computer vision: heart rate estimation from colour cameras.

Vital Signal Estimation

Heart rate estimation belongs in the field called “Vital Signal Estimation” (VSE). In computer vision, VSE has been around for a while. One of the more famous attempts at this comes from 2012 from a paper entitled “Eulerian Video Magnification for Revealing Subtle Changes in the World” that was published at SIGGRAPH by MIT.

(Note: as I’ve mentioned in the past, SIGGRAPH, which stands for “Special Interest Group on Computer GRAPHics and Interactive Techniques”, is a world-renowned annual conference held for computer graphics researchers. But you do sometimes get papers from the world of computer vision being published there as is the case with this one.)

Basically, the way that VSE was implemented by MIT was to analyse images captured from a camera for small illumination changes on a person’s face produced by varying amounts of blood flow to it. These changes were then magnified to make them easier to scrutinise. See, for example, this image from their paper:

(image source)

Amazing that these illumination changes can be extracted like this, isn’t it?

This video describes the Eulerian Video Magnification technique developed by these researchers (colour amplification begins at 1:25):

Eulerian Video Magnification - YouTube

Interestingly, most research in VSE has focused around this idea of magnifying minute changes to estimate heart rates.
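The core of this magnification idea can be sketched in a few lines: isolate the tiny deviations of a pixel’s intensity from its mean over time and scale them up. Note that this is only a toy illustration of the principle, not the authors’ method; the actual Eulerian technique applies temporal band-pass filtering over a spatial pyramid of video frames, and all names here are my own:

```python
def magnify(signal, alpha):
    """Amplify small deviations of a time series around its mean.

    `signal` is a list of one pixel's intensity over successive video
    frames; `alpha` is the amplification factor. The real Eulerian
    method band-pass filters the signal first so that only the
    frequencies of interest (e.g. a heartbeat) are amplified.
    """
    mean = sum(signal) / len(signal)
    return [mean + alpha * (x - mean) for x in signal]

# A pixel whose brightness varies imperceptibly with each heartbeat:
intensities = [100.0, 100.2, 100.4, 100.2, 100.0]
print(magnify(intensities, alpha=50))  # the variation is now obvious
```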

Uses of Heart Rate Estimation

What could heart rate estimation by computer vision be used for? Well, medical scenarios automatically come to mind because of the non-invasive (i.e. non-contact) feature of this technique. The video above (from 3:30) suggests using this technology for SIDS detection. Stress detection is also another use case. And what follows from this is lie detection. I’ve already written about lie detection using thermal imaging – here is one more way for us to be monitored unknowingly.

On the topic of being monitored, this paper suggests (using a slightly different technique of magnifying minute changes in blood flow to the face) detecting emotional reactions to TV programs and advertisements.

Ugh! It’s just what we need, isn’t it? More ways of being watched.

Making it Proprietary

From what I can see, VSE in computer vision is still mostly in the research domain. However, this research from Utah State University has recently been patented and a company called Photorithm Inc. formed around it. The company produces baby monitoring systems to detect abnormal breathing in sleeping infants. In fact, Forbes wrote an interesting article about this company this year. Among other things, the article talks about the reasons behind the push by the authors for this research and how the technology behind the application works. It’s a good read.

Here’s a video demonstrating how Photorithm’s product works:

Smartbeat baby monitor on iPhone - YouTube


This post talked about another lesser known use case of computer vision: heart rate estimation. MIT’s famous research from 2012 was briefly presented. This research used a technique of magnifying small changes in an input video for subsequent analysis. Magnifying small changes like this is how most VSE technologies in computer vision work today.

After this, a discussion of what heart rate estimation by computer vision could be used for followed. Finally, it was mentioned that VSE is still predominantly something in the research domain, although one company has recently appeared on the scene that sells baby monitoring systems to detect abnormal breathing in sleeping infants. The product being sold by this company uses computer vision techniques presented in this post.


In my last post I introduced the field of image steganography, which is the practice of concealing secret messages in digital images. I looked at the history of steganography and presented some recently reported real-life cases (including one from the FBI) where digital steganography was used for malicious purposes.

In this post I would like to present to you the following two very simple ways messages can be hidden in digital images:

  • JPEG Concealing
  • Least Significant Bit Technique

These techniques, although trivial and easy to detect, will give you an idea of how simple (and therefore potentially dangerous) digital image steganography can be.

JPEG Concealing

Image files in general are composed of two sections: header data + image data. The header data section can contain metadata information pertaining to the image such as date of creation, author, image resolution, and compression algorithm used if the image is compressed. This is the standard for JPEGs, BMPs, TIFFs, GIFs, etc.

Knowing this, one can work around these file structures to conceal messages. 

Let’s take JPEGs as an example. The file structure for this format is as follows:

Notice that every single JPEG file starts and ends with the SOI (FF D8) and EOI (FF D9) markers, respectively.

What this means is that any image interpreting application (e.g. Photoshop or GIMP, any internet browser, the standard photo viewing software that comes with your operating system, etc.) looks for these markers inside the file and knows that it should interpret and display whatever comes between them. Everything else is automatically ignored. 

Hence, you can insert absolutely anything after the EOI marker like this:

And even though the hidden message will be part of the JPEG file and travel with this file wherever it goes, no standard application will see anything out of the ordinary. It will just read whatever comes before EOI.

Of course, if you put a lot of data after EOI, your file size will increase significantly and might, therefore, arouse suspicion – so you have to be wary of that. In this case, it might be an idea to use a high resolution JPEG file (that naturally has a large file size) to turn attention away from your hidden message.

If you would like to try this steganography technique out yourself, download a hex editor for your machine (if you use Windows, WinHex is a good program), search for FF D9 (which is the hex version of EOI), paste anything you want after this marker, and save your changes. You will notice that the file opens like any other JPEG file. The hidden message simply piggybacks on top of the image file. Quite neat!
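The same trick can be scripted rather than done by hand in a hex editor. Here is a minimal sketch in Python (the function names and the toy byte strings are mine, for illustration only):

```python
EOI = b"\xff\xd9"  # the JPEG end-of-image marker

def hide_message(jpeg_bytes: bytes, secret: bytes) -> bytes:
    """Append `secret` after a JPEG's end-of-image marker.

    Image viewers stop reading at EOI, so the payload never shows up
    on screen even though it travels inside the file.
    """
    if EOI not in jpeg_bytes:
        raise ValueError("not a complete JPEG stream")
    return jpeg_bytes + secret

def extract_message(stego_bytes: bytes) -> bytes:
    """Return everything after the last EOI marker."""
    return stego_bytes[stego_bytes.rindex(EOI) + len(EOI):]

# Toy demonstration with a fake, minimal JPEG byte string:
fake_jpeg = b"\xff\xd8...image data...\xff\xd9"
stego = hide_message(fake_jpeg, b"meet at dawn")
print(extract_message(stego))  # → b'meet at dawn'
```

In practice you would read the cover image with `open(path, "rb")`, but the byte-level logic is exactly the same.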

(Note: hexadecimal is a number system made up of 16 symbols. Our decimal system uses 10 digits: 0-9. The hex system uses those 10 digits plus the first 6 letters of the alphabet. To cut a long story short, hexadecimal is a shorthand for binary digits, i.e. 1s and 0s, that is much easier to read and write. Most file formats do not save data in human-readable form, so we need help if we want to view the raw data of these files – this is why hex is sometimes used.)
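To see why hex is such a convenient shorthand, note that each hex digit corresponds to exactly four binary digits, so a byte is always two hex digits. Taking the EOI marker FF D9 as an example:

```python
# Each byte is two hex digits; each hex digit is exactly four bits.
eoi = bytes([0xFF, 0xD9])  # the JPEG end-of-image marker
for byte in eoi:
    print(f"{byte:02X} = {byte:3d} = {byte:08b}")
# → FF = 255 = 11111111
# → D9 = 217 = 11011001
```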

The Least Significant Bit Technique

Although easy to detect (if you know what you’re looking for), the Least Significant Bit (LSB) technique is a very sly way of hiding data in images. The way that it works is by taking advantage of the fact that small changes in pixel colour are invisible to the naked eye.

Let’s say we’re encoding images in the RGB colour space – i.e. each pixel’s colour is represented by a combination of a certain amount of red (R), a certain amount of green (G), and a certain amount of blue (B). The amount of red, green, and blue is given in the range of 0 to 255. So, pure red would be represented as (255, 0, 0) in this colour space – i.e. the maximum amount of red, no green, and no blue.

Now, in this scenario (and abstracting over a few things), a machine would represent each pixel in 3 bytes – one byte for each of red, green and blue. Since a byte is 8 bits (i.e. 8 ones and zeros) each colour would be stored as something like this:

That would be the colour red (11111111 in binary is 255 in our number system).

What if we were to change the 255 into 254 – i.e. change 11111111 into 11111110? Would we notice the difference in the colour red? Absolutely not. How about changing 11111111 to 11111100 (255 to 252)? We still would not notice the difference – especially if this change is happening to single pixels!

The idea behind the LSB technique, then, is to use this fact that slightly changing the colour of each pixel would be imperceptible to the naked eye.

Since the last few bits in a byte are insignificant, this is where the technique gets its name: the Least Significant Bit technique.

We know, then, that the last few bits in each byte can be manipulated. So, we can use this knowledge to set aside these bits of each pixel to store a hidden message.

Let’s look at an example. Suppose we want to hide a message like “SOS“. We choose to use the ASCII format to encode our letters. In this format each character has its own binary representation. The binary for our message would be:

What we do now is split each character into two-bit pairs (e.g. S has the following four pairs: 01, 01, 00, 11) and spread these pairs successively along multiple pixels. So, if our image had four pixels, our message would be encoded like this:

Notice that each letter is spread across two pixels: one pixel encodes the first 3 pairs and the next pixel takes the last pair. Very neat, isn’t it? You can choose to use more than 2 bits per colour channel to store your message, but remember that by using more bits you risk the changes to each pixel becoming perceptible.
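The whole scheme fits in a few lines of Python. This is a toy sketch of the technique described above (the helper names are my own), hiding 2 bits of the message in each colour-channel byte:

```python
def encode_lsb(channels, message):
    """Hide `message` in the 2 least significant bits of each byte.

    `channels` is a flat list of 0-255 colour values (R, G, B, R, G,
    B, ...). Each ASCII character is 8 bits, i.e. four two-bit pairs,
    so it occupies four channel bytes.
    """
    bits = "".join(f"{ord(c):08b}" for c in message)
    pairs = [bits[i:i + 2] for i in range(0, len(bits), 2)]
    out = list(channels)
    for i, pair in enumerate(pairs):
        # Clear the low 2 bits, then drop the message pair in.
        out[i] = (out[i] & 0b11111100) | int(pair, 2)
    return out

def decode_lsb(channels, length):
    """Read `length` characters back out of the low 2 bits."""
    bits = "".join(f"{c & 0b11:02b}" for c in channels[:length * 4])
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

# 'SOS' needs 12 channel bytes, i.e. four pixels' worth of R, G, B:
cover = [255] * 12
stego = encode_lsb(cover, "SOS")
print(stego)                 # every value is still within 3 of 255
print(decode_lsb(stego, 3))  # → SOS
```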

Also, the larger the image, the more you can encode. And, since images can be represented in binary, you can store an image inside an image using the exact same technique. In this respect, I would highly recommend that you take a look at this little website that will allow you to do just that using the method described here.

I would also recommend going back to the first post of this series and looking at the image-inside-image steganography examples there provided by the FBI. It shows brilliantly how sneaky image steganography can be.


In this post I looked at two simple techniques of image steganography. The first technique takes advantage of the fact that image files such as JPEGs have an end-of-image (EOI) marker. Any program opening these images will read everything up to and including this marker, so anything placed after the EOI marker is hidden from view. The second technique takes advantage of the fact that slightly changing the colour of a pixel is imperceptible to the naked eye. In this sense, the least significant bits of each pixel can be used to spread a message (e.g. text or an image) across the pixels in an image. A program would then be used to extract this message.


I got another great academic publication to present to you – and this publication also comes with an online interactive website for you to use to your heart’s content. The paper is from the field of image colourisation. 

Image colourisation (or ‘colorization’ for our US readers :P) is the act of taking a black and white photo and converting it to colour. Currently, this is a tedious manual process, usually performed in Photoshop, that can take up to a month for a single black and white photo. But the results can be astounding. Just take a look at the following video illustrating this process to give you an idea of how laborious but amazing image colourisation can be:

How obsessive artists colorize old photos - YouTube

Up to a month to do that for each image!? That’s a long time, right?

But then along came some researchers from the University of California, Berkeley who decided to throw some deep learning and computer vision at the task. Their work, published at the European Conference on Computer Vision in 2016, produced a fully automatic image colourisation algorithm that creates vibrant and realistic colourisations in seconds.

Their results truly are astounding. Here are some examples:

Not bad, hey? Remember, this is a fully automatic solution that is only given a black and white photo as input.

How about really old monochrome photographs? Here is one from 1936:

And here’s an old one of Marilyn Monroe:

Quite remarkable. For more example images, see the official project page (where you can also download the code).

How did the authors manage to get such good results? It’s obvious that deep learning (DL) was used as part of the solution. Why is it obvious? Because DL is ubiquitous nowadays – and considering the difficulty of the task, no other solution is going to come near. Indeed, the authors report that their results are significantly better than previous solutions.

What is interesting is how they implemented their solution. One might choose to go down the standard route of designing a neural network that maps a black and white image directly to a colour image (see my previous post for an example of this). But this idea will not work here. The reason is that similar objects can have very different colours.

Let’s take apples as an example to explain this. Consider an image dataset that has four pictures of an apple: 2 pictures showing a yellow apple and 2 showing a red one. A standard neural network solution that just maps black and white apples to colour apples will calculate the average colour of apples in the dataset and colour the black and white photo this way. So, 2 yellow + 2 red apples will give you an average colour of orange. Hence, all apples will be coloured orange because this is the way the dataset is being interpreted. The authors report that going down this path will produce very desaturated (bland) results.

So, their idea was to instead calculate what the probability is of each pixel being a particular colour. In other words, each pixel in a black and white image has a list of percentages calculated that represent the probability of that particular pixel being each specific colour. That’s a long list of colour percentages for every pixel! The final colour of the pixel is then chosen from the top candidates on this list.

Going back to our apples example, the neural network would tell us that pixels belonging to the apple in the image would have a 50% probability of being yellow and 50% probability of being red (because our dataset consists of only red and yellow apples). We would then choose either of these two colours – orange would never make an appearance.
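A toy calculation makes the difference between the two approaches concrete. The colour values below are stand-ins I chose for the yellow and red apples in our hypothetical dataset:

```python
from collections import Counter

# Stand-in dataset: the colours our four training apples actually have.
apple_colours = [(255, 255, 0), (255, 255, 0),   # two yellow apples
                 (255, 0, 0), (255, 0, 0)]       # two red apples

# Naive regression: average the training colours. This yields an
# orange that no real apple in the dataset has.
average = tuple(sum(c[i] for c in apple_colours) // len(apple_colours)
                for i in range(3))
print(average)  # → (255, 127, 0)

# Classification instead: estimate a probability for each candidate
# colour and pick from the top of that list.
counts = Counter(apple_colours)
probabilities = {colour: n / len(apple_colours) for colour, n in counts.items()}
best = max(probabilities, key=probabilities.get)
print(probabilities)  # 50% yellow, 50% red
print(best)           # one of the two real colours, never orange
```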

As is usually the case, ImageNet with its 1.3 million images (cf. this previous blog post that describes ImageNet) is used to train the neural network. Because of the large array of objects in ImageNet, the neural network can hence learn to colour many, many scenes in the amazing way that it does.

What is quite neat is that the authors have also set up a website where you can upload your own black and white photos to be converted by their algorithm into colour. Try it out yourself – especially if you have old photos that you have always wanted to colourise.

Ah, computer vision wins again. What a great area in which to be working and researching.


Oh, I love stumbling upon fascinating publications from the academic world! This post will present to you yet another one of those little gems that has recently fallen into my lap. It’s on the topic of image completion and comes from a paper published in SIGGRAPH 2017 entitled “Globally and Locally Consistent Image Completion” (project page can be found here).

(Note: SIGGRAPH, which stands for “Special Interest Group on Computer GRAPHics and Interactive Techniques”, is a world renowned annual conference held for computer graphics researchers. But you do sometimes get papers from the world of computer vision being published there as is the case with the one I’m presenting here.)

This post will be divided into the following sections:

  1. What is image completion and some of its prior weaknesses
  2. An outline of the solution proposed by the above mentioned SIGGRAPH publication
  3. A presentation of results

If anything, please scroll down to the results section and take a look at the video published by the authors of the paper. There’s some amazing stuff to be seen there!

1. What is image completion?

Image completion is a technique for filling-in target regions with alternative content. A major use for image completion is in the task of object removal where an object from a photo is erased and the remaining hole is automatically substituted with content that hopefully maintains the contextual integrity of the image.

Image completion has been around for a while. Perhaps the most famous algorithm in this area is called PatchMatch which is used by Photoshop in its Content Aware Fill feature. Take a look at this example image generated by PatchMatch after the flowers in the bottom right corner were removed from the left image:

An image completion example on a natural scene generated by PatchMatch

Not bad, hey? But the problem with existing solutions such as PatchMatch is that images can only be completed with textures that solely come from the input image. That is, calculations for what should be plugged into the hole are done using information obtained just from the input image. So, for images like the flower picture above, PatchMatch works great because it can work out that green leaves is the dominant texture and make do with that.

But what about more complex images… and faces as well? You can’t work out what should go into a gap in an image of a face just from its input image. This is an image completion example done on a face by PatchMatch:

An image completion example on a face generated by PatchMatch

Yeah, not so good now, is it? You can see how trying to work out what should go into a gap from other areas of the input image is not going to work for a lot of cases like this.

2. Proposed solution

This is where the paper “Globally and Locally Consistent Image Completion” comes in. The idea behind it, in a nutshell, is to use a massive database of images of natural scenes to train a single deep learning network for image completion. The Places2 dataset is used for this, which contains over 8 million images of diverse natural scenes – a massive database from which the network basically learns the consistency inherent in natural scenes. This means that information to fill in missing gaps in images is obtained from these 8 million images rather than just one single image!

Once this deep neural network is trained for image completion, a GAN (Generative Adversarial Network) approach is utilised to further improve this network.

GAN (Generative Adversarial Network) training is an unsupervised technique in which neural networks are used to mutually improve each other: a generator network tries to produce outputs that fool a discriminator network, while the discriminator learns to tell real data from generated data. Both networks are updated according to the results of this contest, so each one forces the other to improve for as long as training runs.

The GAN technique is very common in computer vision nowadays in scenarios where one needs to artificially produce images that appear realistic. 

Two additional networks are used in order to improve the image completion network: a global and a local context discriminator network. The former discriminator looks at the entire image to assess whether it is coherent as a whole. The latter looks only at the small area centered on the completed region to ensure local consistency of the generated patch. In other words, you get two additional networks assisting in the training: one for global consistency and one for local consistency. These two auxiliary networks return a result stating whether the generated image is realistic-looking or artificial. The image completion network then tries to generate completed images that fool the auxiliary networks into thinking they’re real.

In total, it took 2 months for the entire training stage to complete on a machine with four high-end GPUs. Crazy!

The following image shows the solution’s training architecture:

Overview of architecture for training for image completion (image taken from original publication)

Typically, to complete an image of 1024 x 1024 resolution that has one gap takes about 8 seconds on a machine with a single CPU or 0.5 seconds on one with a decent GPU. That’s not bad at all considering how good the generated results are – see the next section for this.

3. Results

The first thing you need to do is view the results video released by the authors of the publication. Visit their project page for this and scroll down a little. I can provide a shorter version of this video from YouTube here:

Globally and Locally Consistent Image Completion (SIGGRAPH 2017) Fast Forward - YouTube

As for concrete examples, let’s take a look at some faces first. One of these faces is the same from the PatchMatch example above.

Examples of image completion on faces (image adapted from original publication)

How impressive is this?

My favourite examples are of object removal. Check this out:

Examples of image completion (image taken from original publication)

Look how the consistency of the image is maintained with the new patch in the image. It’s quite incredible!

My all-time favourite example is this one:

Another example of image completion (taken from original publication)

Absolutely amazing. More results can be viewed in supplementary material released by the authors of the paper. It’s well-worth a look!


In this post I presented a paper on image completion from SIGGRAPH 2017 entitled “Globally and Locally Consistent Image Completion”. I first introduced the topic of image completion, which is a technique for filling-in target regions with alternative content, and described some weaknesses of previous solutions – mainly that calculations for what should be generated for a target region are done using information obtained just from the input image. I then presented the more technical aspect of the proposed solution as presented in the paper. I showed that the image completion deep learning network learnt about global and local consistency of natural scenes from a database of over 8 million images. Then, a GAN approach was used to further train this network. In the final section of the post I showed some examples of image completion as generated by the presented solution.


In previous posts of mine I have discussed how image datasets have become crucial in the deep learning (DL) boom of computer vision of the past few years. In deep learning, neural networks are told to (more or less) autonomously discover the underlying patterns in classes of images (e.g. that bicycles are composed of two wheels, a handlebar, and a seat). Since images are visual representations of our reality, they contain the inherent complex intricacies of our world. Hence, to train good DL models that are capable of extracting the underlying patterns in classes of images, deep learning needs lots of data, i.e. big data. And it’s crucial that this big data that feeds the deep learning machine be of top quality.

In light of Google’s recent announcement of an update to its image dataset as well as its new challenge, in this post I would like to present to you the top 3 image datasets that are currently being used by the computer vision community, along with their associated challenges:

  1. ImageNet and ILSVRC
  2. Open Images and the Open Images Challenge
  3. COCO Dataset and the four COCO challenges of 2018

I wish to talk about the challenges associated with these datasets because challenges are a great way for researchers to compete against each other and in the process to push the boundary of computer vision further each year!


ImageNet and ILSVRC

This is the most famous image dataset by a country mile. But confusion often accompanies what ImageNet actually is, because the name is frequently used to describe two things: the ImageNet project itself and its visual recognition challenge.

The former is a project whose aim is to label and categorise images according to the WordNet hierarchy. WordNet is an open-source lexical database in which words are organised hierarchically into sets of synonyms. For example, words like “dog” and “cat” can be found in the following knowledge structure:

An example of a WordNet synset graph (image taken from here)

Each node in the hierarchy is called a “synonym set” or “synset”. This is a great way to categorise words because whatever noun you may have, you can easily extract its context (e.g. that a dog is a carnivore) – something very useful for artificial intelligence.
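As a toy illustration of how such a hierarchy yields context, here is a miniature stand-in for WordNet’s “is a” relations (the dictionary is my own invention, not real WordNet data):

```python
# A miniature stand-in for WordNet's hypernym ("is a") hierarchy:
hypernyms = {
    "dog": "canine",
    "cat": "feline",
    "canine": "carnivore",
    "feline": "carnivore",
    "carnivore": "mammal",
    "mammal": "animal",
}

def context_of(word):
    """Walk up the hierarchy collecting a word's ancestors."""
    ancestors = []
    while word in hypernyms:
        word = hypernyms[word]
        ancestors.append(word)
    return ancestors

print(context_of("dog"))  # → ['canine', 'carnivore', 'mammal', 'animal']
```

Given any noun, a single walk up the graph tells you that a dog is a carnivore, a mammal, and an animal, which is exactly the kind of context an AI system can exploit.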

The idea with the ImageNet project, then, is to have 1000+ images for each and every synset in order to also have a visual hierarchy to accompany WordNet. Currently there are over 14 million images in ImageNet for nearly 22,000 synsets (WordNet has ~100,000 synsets). Over 1 million images also have hand-annotated bounding boxes around the dominant object in the image.

Example image of a kit fox from ImageNet showing hand-annotated bounding boxes

You can explore the ImageNet and WordNet dataset interactively here. I highly recommend you do this!

Note: by default only URLs to images in ImageNet are provided because ImageNet does not own the copyright to them. However, a download link can be obtained to the entire dataset if certain terms and conditions are accepted (e.g. that the images will be used for non-commercial research). 

Having said this, when the term “ImageNet” is used in CV literature, it usually refers to the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), an annual competition for object detection and image classification. This competition is very famous. In fact, the deep learning revolution of the 2010s is widely considered to have originated from this challenge, after a deep convolutional neural network blitzed the competition in 2012.

The motivation behind ILSVRC, as the website says, is:

… to allow researchers to compare progress in detection across a wider variety of objects — taking advantage of the quite expensive labeling effort. Another motivation is to measure the progress of computer vision for large scale image indexing for retrieval and annotation.

The ILSVRC competition has its own image dataset that is actually a subset of the ImageNet dataset. This meticulously hand-annotated dataset has 1,000 object categories (the full list of these synsets can be found here) spread over ~1.2 million images. Half of these images also have bounding boxes around the class category object.

The ILSVRC dataset is most frequently used to train object classification neural network architectures such as VGG16, InceptionV3, ResNet, etc., that are publicly available for use. If you ever download one of these pre-trained models (e.g. Inception V3) and it says that it can detect 1000 different classes of objects, then it most certainly was trained on this dataset.

Google’s Open Images

Google is a new player in the field of datasets but you know that when Google does something it will do it with a bang. And it has not disappointed here either.

Open Images is a new dataset first released in 2016 that contains ~9 million images – which is fewer than ImageNet. What makes it stand out is that these images are mostly of complex scenes that span thousands of classes of objects. Moreover, ~2 million of these images are hand-annotated with bounding boxes making Open Images by far the largest existing dataset with object location annotations. In this subset of images, there are ~15.4 million bounding boxes of 600 classes of object. These objects are also part of a hierarchy (see here for a nice image of this hierarchy) but one that is nowhere near as complex as WordNet.

Open Images example image with bounding box annotation

As of a few months ago, there is also a challenge associated with Open Images called the “Open Images Challenge”. It is an object detection challenge and, what’s more interesting, there is also a visual relationship detection challenge (e.g. “woman playing a guitar” rather than just “guitar” and “woman”). The inaugural challenge will be held at this year’s European Conference on Computer Vision. It looks like this will be a super interesting event considering the complexity of the images in the dataset and, as a result, I foresee this challenge becoming the de facto object detection challenge in the near future. I am certainly looking forward to seeing the results of the challenge posted around the time of the conference (September 2018).

Microsoft’s COCO Dataset

Microsoft is in this game also with their Common Objects in Context (COCO) dataset. Containing ~200K images, it’s relatively small, but what makes it stand out are the additional annotations it provides for each image and the challenges built around them, for example:

  • object segmentation information rather than just bounding boxes of objects (see image below)
  • five textual captions per image such as “the a380 air bus ascends into the clouds” and “a plane flying through a cloudy blue sky”.

The first of these points is worth providing an example image of:

Notice how each object is segmented rather than outlined by a bounding box, as is the case with the ImageNet and Open Images examples? This object segmentation feature of the dataset makes for very interesting challenges because segmenting an object like this is many times more difficult than just drawing a rectangular box around it.
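One way to see why a mask carries strictly more information than a box: the tight bounding box can always be recovered from the segmentation mask, but not the other way around. Here is a small NumPy sketch with a hand-made mask (COCO itself stores masks as polygons or run-length encodings):

```python
import numpy as np

# Hand-made 6x6 binary segmentation mask (1 = object pixel). COCO stores
# real masks as polygons or run-length encodings; this array just
# illustrates the idea.
mask = np.array([
    [0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
])

def mask_to_bbox(mask):
    """Tight axis-aligned bounding box (x_min, y_min, x_max, y_max)."""
    ys, xs = np.nonzero(mask)  # row/column indices of all object pixels
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

print(mask_to_bbox(mask))  # (1, 1, 4, 4)
```

Going from mask to box is a two-liner; going from a box back to a pixel-accurate outline is the hard problem these segmentation challenges pose.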

COCO challenges are also held annually. But each year’s challenge is slightly different. This year the challenge has four tracks:

  1. Object segmentation (as in the example image above)
  2. Panoptic segmentation task, which requires object and background scene segmentation, i.e. a task to segment the entire image rather than just the dominant objects in an image:
  3. Keypoint detection task, which involves simultaneously detecting people and localising their keypoints:
  4. DensePose task, which involves simultaneously detecting people and localising their dense keypoints (i.e. mapping all human pixels to a 3D surface of the human body):

Very interesting, isn’t it? There is always something engaging taking place in the world of computer vision!

To be informed when new content like this is posted, subscribe to the mailing list:


As I’ve discussed in previous posts of mine, computer vision is growing faster than ever. Between 2014 and 2016, investments into US-based computer vision companies more than tripled – from $100 million to $300 million. And it looks like this upward trend is going to continue worldwide, especially if you consider all the major acquisitions made in the field in 2017, the biggest being Intel’s purchase in March of Mobileye (a company specialising in computer vision-based collision prevention systems for autonomous cars) for a whopping $15 billion.

But today I want to take a look back rather than forward. I want to devote some time to present the early historical milestones that have led us to where we are now in computer vision.

This post, therefore, will focus on seminal developments in computer vision between the 60s and early 80s.

Larry Roberts – The Father of Computer Vision

Let’s start first with Lawrence Roberts. Ever heard of him? He calls himself the founder of the Internet. The case for giving him this title is strong considering that he was instrumental in the design and development of the ARPANET, which was the technical foundation of what you are surfing on now.

What is not well known is that he is also dubbed the father of computer vision in our community. In 1963 he published “Machine Perception Of Three-Dimensional Solids”, which started it all for us. There he discusses extracting 3D information about solid objects from 2D photographs of line drawings. He mentions things such as camera transformations, perspective effects, and “the rules and assumptions of depth perception” – things that we discuss to this very day.

Just take a look at this diagram from his original publication (which can be found here – and a more readable form can be found here):

Open up any book on image processing and you will see a similar diagram in there discussing the relationship between a camera and an object’s projection on a 2D plane.
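The core of such a diagram boils down to a few lines of arithmetic. Below is a minimal pinhole-camera projection sketch in Python; the points and focal length are made-up values purely for illustration:

```python
# Minimal pinhole-camera projection: a 3D point (X, Y, Z) in camera
# coordinates maps onto the image plane at (f*X/Z, f*Y/Z), where f is
# the focal length.
def project(point, f=1.0):
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)

# Two points at the same (X, Y) but different depths: the farther one
# projects closer to the image centre. This is the perspective effect
# Roberts had to model when recovering 3D structure from 2D drawings.
near = project((2.0, 1.0, 4.0))   # (0.5, 0.25)
far  = project((2.0, 1.0, 8.0))   # (0.25, 0.125)
print(near, far)
```

The division by depth Z is the whole story of perspective: doubling an object’s distance halves its projected size.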

Funnily enough, Lawrence’s Wikipedia page does not make a single mention of his work in computer vision, which is all the more surprising considering that the publication I mentioned above was his PhD thesis! Crazy, isn’t it? If I find the time, I’ll go over and edit that article to give him the additional credit he deserves.

The Summer Vision Project

Lawrence Roberts’ thesis was about analysing line drawings rather than images taken of the real world. Work in line drawings was to continue for a long time, especially after the following important incident of 1966.

You’ve probably all heard the stories from the 50s and 60s of scientists predicting a bright future within a generation for artificial intelligence. AI became an academic discipline and millions were pumped into research with the intention of developing a machine as intelligent as a human being within 25 years. But it didn’t take long before people realised just how hard creating a “thinking” machine was going to be.

Well, computer vision has its own place in this ambitious time of AI as well. People then thought that constructing a machine to mimic the human visual system was going to be an easy task on the road to finally building a robot with human-like intelligent behaviour.

In 1966, Seymour Papert organised “The Summer Vision Project” at MIT. He assigned this project to Gerald Sussman, who was to co-ordinate a small group of students working on background/foreground segmentation of real-world images, with the final goal of extracting non-overlapping objects from them.

Only in the past decade or so have we been able to obtain good results in a task such as this. So, those poor students really did not know what they were in for. (Note: I write about why image processing is such a hard task in another post of mine). I couldn’t find any information on exactly how much they were able to achieve over that summer but this will definitely be something I will try to find out if I ever get to visit MIT in the United States.

Luckily, the original memo of this project is still available to us – which is quite neat. It’s definitely a piece of history for us computer vision scientists. Take a look at the abstract (summary) of the project from the 1966 memo (the full version can be found here):

Continued Work with Line Drawings

In the 1970s work continued with line drawings because real-world images were just too hard to handle at the time. To this end, people were regularly looking into extracting 3D information about blocks from 2D images.

Line labelling was an interesting concept being looked at in this respect. The idea was to try to discern a shape in a line drawing by first attempting to annotate all the lines it was composed of accordingly. Line labels would include convex, concave, and occluded (boundary) lines. An example of a result from a line labelling algorithm can be seen below:

Line labelling example with convex (+), concave (-), and occluded (<–) labels. (Image taken from here)

Two important people in the field of line labelling were David Huffman (“Impossible objects as nonsense sentences”. Machine Intelligence, 8:475-492, 1971) and Max Clowes (“On seeing things”. Artificial Intelligence, 2:79-116, 1971) who both published their line labelling algorithms independently in 1971.

In the area of line labelling, interesting problems such as the one below were also looked at:

The image above was taken from a seminal book written by David Marr at MIT entitled “Vision: A computational investigation into the human representation and processing of visual information”. It was finished around 1979 but posthumously published in 1982. In this book Marr proposes an important framework for image understanding that is used to this very day: the bottom-up approach. The bottom-up approach, as Marr suggests, uses low-level image processing algorithms as stepping-stones towards attaining high-level information.

(Now, for clarification, when we say “low-level” image processing, we mean tasks such as edge detection, corner detection, and motion detection (i.e. optical flow) that don’t directly give us any high-level information such as scene understanding and object detection/recognition.)

From that moment on, “low-level” image processing was given a prominent place in computer vision. It’s important to also note that Marr’s bottom-up framework is central to today’s deep learning systems (more on this in a future post).

Computer Vision Gathers Speed

So, with the bottom-up model approach to image understanding, important advances in low-level image processing began to be made. For example, the famous Lucas-Kanade optical flow algorithm, first published in 1981 (original paper available here), was developed. It is still so prominent today that it is a standard optical flow algorithm included in the OpenCV library. Likewise, the Canny edge detector, first published in 1986, is again widely used today and is also available in the OpenCV library.
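As a small taste of what “low-level” processing looks like, here is a minimal Sobel-style gradient sketch in Python with NumPy. To be clear, this is only the gradient step that detectors like Canny build on (Canny adds smoothing, non-maximum suppression, and hysteresis); it is not an implementation of Canny or Lucas-Kanade themselves:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel kernels (no padding)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            gx = (patch * kx).sum()   # horizontal intensity change
            gy = (patch * ky).sum()   # vertical intensity change
            out[y, x] = np.hypot(gx, gy)
    return out

# Synthetic image: dark left half, bright right half -> one vertical edge.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_magnitude(img)
print(edges)  # strongest responses sit on the columns where intensity jumps
```

Notice that no “understanding” happens here at all: the output is just a map of intensity changes, which a bottom-up pipeline then hands to higher-level stages.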

The bottom line is, computer vision started to really gather speed in the late 80s. Mathematics and statistics began playing a more and more significant role and the increase in speed and memory capacity of machines helped things immensely also. Many more seminal algorithms followed this upward trend, including some famous face detection algorithms. But I won’t go into this here because I would like to talk about breakthrough CV algorithms in a future post.


In this post I looked at the early history of computer vision. I mentioned Lawrence Roberts’ PhD on things such as camera transformations, perspective effects, and “the rules and assumptions of depth perception”. He got the ball rolling for us all and is commonly regarded as the father of computer vision. The Summer Vision Project of 1966 was also an important event that taught us that computer vision, along with AI in general, is not an easy task at all. People, therefore, focused on line drawings until the 80s when Marr published his idea for a bottom-up framework for image understanding. Low-level image processing took off, spurred on by advancements in the speed and memory capacity of machines and a stronger mathematical and statistical rigour. The late 80s and onwards saw tremendous developments in CV algorithms but I will talk more about this in a future post.

