
Namita Pradhan walks into iMerit’s Bhubaneshwar office on a sunny and humid day. She begins her work of identifying specific pathological abnormalities in a series of high-resolution endoscopy slides. She spends her day dealing with dozens of anatomical images, marking and classifying precise abnormalities with intense concentration.

Namita is one of several medical data labeling experts at iMerit who have gained significant experience in reviewing medical images, slides, and videos, and in identifying and classifying pathologies. This work, done at high accuracy, is then used to train Machine Learning algorithms to pursue the same goal. It is a specialised, rare, and hard-won expertise, particularly as Namita and her teammates do not have a background in medicine.

(Image annotation at the cell level)

Machine Learning algorithms operate by learning from thousands of training samples which have been already annotated by humans – in essence, flash cards for AI. This is fine when the task involves generalised situations like identifying everyday objects in images or text. It becomes a bottleneck when the task is a specialised one, such as a medical pathology or a legal contract. The prohibitive cost of labeling at scale by medical experts can deter companies from initiating ML projects.
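
To make the “flash cards” analogy concrete, here is a minimal sketch of a model learning from human-provided labels. The feature vectors and labels are synthetic stand-ins for real annotated images, and scikit-learn is assumed; this is an illustration, not a description of any production pipeline.

    # A minimal sketch of supervised learning from human-labeled examples.
    # The feature vectors and labels below are synthetic stand-ins for real annotated data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    features = rng.normal(size=(1000, 64))                       # 1,000 "annotated" samples
    labels = (features[:, 0] + features[:, 1] > 0).astype(int)   # toy binary label a human might assign

    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)                                  # the model "studies the flash cards"
    print("held-out accuracy:", model.score(X_test, y_test))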

We can eliminate this bottleneck if there is a scalable process for transforming data labeling generalists into specialists. This can be achieved in an enterprise setting such as iMerit’s, with the right mix of talent, skilling, and knowledge transfer.

The first point of contact is a Solution Architect (SA) who is a medical domain expert. The SA bridges the exotic medical world with the focused but deep approach of the expert data labeler. This work includes:

  • Unpacking the use case to be generalist-friendly
  • Challenging and distilling definitions to ensure clarity
  • Building adequate but not excessive context
  • Questioning edge cases and anticipating areas of confusion

It’s no coincidence that these steps exactly mirror what the Machine Learning algorithm needs.

Next, we have the data labeler. Not everyone can be an expert data labeler. Professionals like Namita have an endless appetite for learning, craftsman-like attention to detail, and a knack for detecting patterns and outliers.

Finally, this is backed by a robust and agile skilling structure that focuses on learning by example and learning by exception.

When generalists become specialists 

The team members are onboarded in a three-week phase. The “focused and deep” curriculum covers medical lexicon, pathology, spatial orientation, and data manipulation. With medical data, pattern recognition and memorization techniques enhance the conceptual knowledge. We test a specific aptitude for tasks which involve medical images devoid of real-world context. It’s a bit like sifting through abstract art all day long. Namita says it took some time to settle into this aspect of the work, but it has now become second nature.

When a project begins, the specialist is custom-trained using live-demos, videos, models, and instruction guides dealing with the specific pathologies of the project. She is also trained in using the annotation tools and custom software, some of which is familiar from past projects. Labeling and annotation are iterative processes, so a contributor becomes progressively conversant with the scope of the work. For instance, a contributor is instructed not to mark a pathology unless she is completely certain about its location. While marking boundaries, she is taught to mark only the sections that are clearly seen. Her confidence grows with repeated exposure to a similar set of images.
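
As a rough illustration of those rules, the sketch below shows an annotation record that is only created when the contributor is certain, with a boundary limited to clearly visible tissue. The field names, label, and file name are hypothetical, not iMerit’s actual schema.

    # Hypothetical annotation record illustrating the "only mark what you are certain of" rule.
    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class PathologyAnnotation:
        image_id: str
        label: str                        # e.g. "polyp", drawn from the project's label set
        boundary: List[Tuple[int, int]]   # polygon vertices around clearly visible tissue only

    def make_annotation(image_id: str, label: str,
                        boundary: List[Tuple[int, int]],
                        certain: bool) -> Optional[PathologyAnnotation]:
        """Only produce an annotation when the labeler is completely certain."""
        if not certain:
            return None                   # skip or escalate rather than guess at a location
        return PathologyAnnotation(image_id, label, boundary)

    ann = make_annotation("slide_042.png", "polyp",
                          [(120, 88), (135, 90), (133, 110)], certain=True)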

The process is not without challenges, especially in the early days of the project. The stakes are quite high. Tricky edge cases need to be referred to the internal medical expert and possibly escalated to the customer. A culture of being able to say “I don’t know for sure”, without fear or judgment, is valuable.

Edge cases build a disproportionately deeper understanding of the problem space and make the job interesting. They simultaneously boost the judgment and instincts of the labeling expert. These finely honed instincts become crucial when looking at a series of images that may not follow the correct sequence, which makes it hard to zero in on the exact location of the pathology. Solving this puzzle gets easier with experience.

A new breed of micro-learners 

It is challenging but rewarding work. Namita is strongly motivated by the real-world applications of her work, and the positive impact she will have on a patient’s health. She even reads medical material on her own time to expand her conceptual understanding. Her teammate, Chinmayee Swain, engages with the projects on a more personal level. The technology she is helping build could someday benefit her own community. This level of investment and interest motivates a team that tackles dense information with high rates of accuracy.

Namita and Chinmayee represent a new breed of agile micro-learners. They learn by patterns and by doing, and have a sharp instinct for edge cases. They look for ways to break the instructions provided by the subject expert. They can do this because they have no preconceived assumptions or biases; they look at the problem in a pristine context.

As the day winds down, the team signs out with a sense of accomplishment. Medicine is a respected field globally. Being associated with it as niche medical data experts is motivational and aspirational. In return, the team’s work enables the medical gurus to push the boundaries of their profession.

How iMerit helps create healthcare data labeling experts

  • Data labelers are hired for the medical domain based on certain key qualities.
  • They go through a three-week training process. The curriculum covers medical lexicon, pathology, spatial orientation, and data manipulation.
  • They are staffed on a specific medical project.
  • A Solution Architect unpacks the use case with the customer and bridges the domain knowledge.
  • The SA and customer impart project-specific training. This includes live demos, videos, models, and instruction guides.
  • The team tests processes and validates instructions in the first phase.
  • Tricky edge cases are referred to the SA and possibly escalated to the customer.
  • The Project Manager and QC leads track and evolve the quality through all iterations.

Watch iMerit Solutions Architect Dr Sina Bari discuss the role of expertise in healthcare data during the AIMed conference.
https://www.youtube.com/watch?v=sBZyhcxm1_s&feature=youtu.be

The post The making of a medical data labeling expert appeared first on iMerit.


India is fast emerging as the hub of high-quality data labeling for the world’s most advanced algorithms, and iMerit is a part of this rocket ship. In a recent article in Factor Daily, journalist Anand Murali explores the growth of India’s small towns as hubs of data annotation excellence.

Some excerpts from the article:

With global enterprise giants embracing AI, and the datasets that feed the AI algorithms increasingly becoming proprietary, companies need a higher degree of engagement with data labelling teams in terms of requirements, quality control, feedback, and deliverables. Because of the business process outsourcing boom around the turn of the century, Indians are no strangers to such jargon and demands. Data annotation and labelling, too, is process-driven, requiring precision work and skills that even people with a high-school education can be trained on.

iMerit’s strategy is centered on its employees. About 80% of its 2,000-strong workforce come from families with incomes less than $100 (Rs 7,000) a month; about half of them are women. “We have a social mission to create technology employment among underprivileged communities and in territories where there are fewer companies or industry. We operate in cities slightly lesser known for tech and with less technology employment available,” says Jai Natarajan, vice president of technology and marketing at iMerit.

Companies are beginning to develop automated tools for annotation but with a lot of jobs requiring nuanced and custom annotation or labelling work, it would be some time before automated tools can achieve a high level of accuracy.

Natarajan says that unlike five years ago, when AI was about differentiating a cat from a dog, present-day AI handles more advanced work. “Machine-learning has moved forward, so nobody is asking us to mark for a dog versus cat. Those days are long gone. Today, every company has customized needs and very nuanced requirements, so it is not possible to automate this or automatically just throw the data and get it labelled by an anonymous set of people.”

Read the full article here: How India’s data labellers are powering the global AI race

The post The untold story of AI: Data Labeling appeared first on iMerit.


Businesses are using Artificial Intelligence to solve a variety of organizational challenges. These solutions are highly reliant on human intelligence. In a recent issue of Mediaplanet’s Future of Business & Tech ‘Business AI’ initiative, our CEO Radha Basu discusses the human element as a key part of successful AI solutions for businesses.

Read the full article below and here: Finding the Human Element in AI

The human world is complex and nuanced, and AI is still extremely reliant on the human element. Problem definition, data annotation, and output validation all incorporate human skill and subtlety. Humans also need to be involved with the ethics of AI, and that means involving a diverse set of people.

What should businesses do before implementing AI?

Evaluate where you have access to plentiful, diverse and relevant data. The better and more plentiful the data, the more likely you are to succeed at the problem you select.

Why should humans be in the loop?

Human truth is used to train AI. Multiple diverse human viewpoints can create an information-rich environment for AI. Humans can also look at edge cases where the AI is not very sure of its output.
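
A minimal sketch of that last point: predictions below a confidence threshold go to a human reviewer. The threshold and the plain list standing in for a real review queue are assumptions made for illustration.

    # Routing low-confidence predictions to human review (illustrative values only).
    CONFIDENCE_THRESHOLD = 0.85

    def route_prediction(label, confidence, human_review_queue):
        """Accept confident predictions; send uncertain ones to a person."""
        if confidence >= CONFIDENCE_THRESHOLD:
            return label                                 # machine handles the clear cases
        human_review_queue.append((label, confidence))   # edge case: a human decides
        return None

    queue = []
    print(route_prediction("cat", 0.97, queue))   # -> "cat"
    print(route_prediction("cat", 0.55, queue))   # -> None, queued for review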

Dispel a common myth about AI in customer experience and business.

People believe that AI will eliminate the need for people, but AI and skilled people will need to work together. Tech-enabled services and human expertise will both be needed.

How can AI be used for good?

It’s a tool with far-reaching societal implications and can be applied to societal challenges. It can provide health diagnostic capabilities in remote villages. A drone image can reveal crop disease.

AI can also vastly reduce the cost of personalizing a solution. A student can have her essay evaluated by an intelligent system. Social organizations are often data-rich, with decades of field data; AI can consume this data and build insights from it.

The post iMerit CEO on leveraging the human element in AI appeared first on iMerit.


When we look up at the sky today to see the stars and the moon, there are around 4987 orbital satellites looking right back down at us, capturing images of vast tracts of the earth. These eyes in the sky provide image data that can tell us a number of things about life on earth, including how fast our building developments are progressing, how weather conditions are likely to affect our daily lives, and how well our crops are growing.

One crop’s spread in particular is being watched closely, and that is the oil palm tree (Elaeis guineensis), whose oil was first used 5000 years ago in Egypt. Today, it joins us at the breakfast table in the form of cereal and bread, and stays with us until we (hopefully) brush our teeth and call it a night. We each consume over 15 pounds of palm oil annually. This oil has seen a meteoric rise as a versatile resource. It can handle heat and changes of form. It is low cost, and is the highest-yielding vegetable oil crop. Processed versions of this oil go into everything from candy and ice cream to biodiesel for cars. Today, 85% of the world’s supply comes from Malaysia and Indonesia, who have grown their economies on the oil palm’s popularity.

So what’s the problem?

The glisten wears off when we become aware of the steep cost of meeting global demand for this commodity. Hundreds of thousands of acres of biodiverse forest in regions like Borneo were destroyed to expand the land available for palm cultivation. This rampant deforestation has pushed species like the Sumatran tiger and the orangutan towards extinction.

Local populations suffered as the method of clearing land by starting huge fires set off a health crisis in Indonesia. The process released unprecedented levels of carbon into the atmosphere. NASA research cited the destruction in Borneo as a contributor to the largest single-year global increase in carbon emissions in two millennia.

Awareness about these hazards has now spread. The Roundtable on Sustainable Palm Oil (RSPO) was set up in 2004 to promote the adoption of more responsible cultivation practices. In 2010, around 400 companies also signed a Consumer Goods Forum pledge to achieve zero net deforestation in their supply chains by 2020. The pledge covers other commodities like soy, beef, and timber, but the palm oil industry accounts for about 59% of the commitments.

Enter Machine Learning and Computer Vision

With this deadline coming up, companies like Unilever and Nestle are using satellite images to study deforestation and keep a closer check on their palm oil supply chain. Satellites and drones can capture the oil palm’s whereabouts and rate of proliferation as well as the areas where it is replacing natural forests.

Geospatial datasets can be expensive, and companies need to extract the most actionable insights possible from them. Highly annotated datasets are used to train deep learning algorithms and unlock these insights. Companies must make strategic decisions about what they want from the data they are working with. They must choose between smaller quantities of accurate, precise, and specific data (like individual trees) and more plentiful but less precise broad datasets. They must also design for edge cases, whether these come from a corrupted image or an unusual-looking plantation or tree.

Once the scope of the project has been defined and the data assembled, a professional image annotation team trained in pattern recognition is required to make sense of the hundreds of satellite images. The most basic requirement is discerning a palm tree from other trees, and distinguishing a plantation from other vegetation, particularly natural forests. To be successful at this seemingly simple task, a data labeler needs to be trained on and exposed to various oil palm plantations across different satellite imagery resolutions, lighting conditions, and other variables. Marking a tree is easier with high-resolution imagery. It becomes more challenging at medium resolution, where one pixel covers between 3 and 6 square meters, and it is in these cases that a labeler has to draw upon their specialised training and experience.

It takes a practiced eye to spot a palm tree when canopies overlap in a crowded plantation, and to mark each tree’s crown limits accurately. A labeler can also use polygon marking to indicate the entire plantation’s limits. Another common method is point annotation on each tree center. Marking patterns like color, shape, and size helps in understanding the health of the plantation. Malaysian plantations, for example, show a great variety in plant density and spatial arrangement. A data labeler studying the datasets closely over time can also observe the areas where growth takes place, adding even more useful insights.
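
To make the two annotation styles concrete, here is a hedged sketch of how a plantation polygon and per-tree points might be stored as GeoJSON-like records. The coordinates, class names, and properties are invented for illustration and do not describe any real project’s format.

    # Illustrative GeoJSON-style records for the two annotation methods described above.
    # Coordinates (lon, lat) and property names are invented for the example.
    import json

    plantation_boundary = {
        "type": "Feature",
        "geometry": {"type": "Polygon",
                     "coordinates": [[[101.50, 2.10], [101.52, 2.10],
                                      [101.52, 2.12], [101.50, 2.12], [101.50, 2.10]]]},
        "properties": {"class": "oil_palm_plantation"},
    }

    tree_centers = {
        "type": "FeatureCollection",
        "features": [
            {"type": "Feature",
             "geometry": {"type": "Point", "coordinates": [101.5012, 2.1008]},
             "properties": {"class": "oil_palm_tree", "crown_diameter_m": 11.0}},
            # ...one point feature per visible tree crown
        ],
    }

    print(json.dumps(plantation_boundary, indent=2))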

Tracking palm plantations to prevent rampant deforestation is not the only use case for this technology. Imaging technology is increasingly being leveraged to tackle global crises like climate change. A satellite is being developed that can map methane emissions anywhere on the globe with precision. The State of California announced that it is building a satellite to locate and regulate sources of pollution. Initiatives like Microsoft’s AI for Earth equip those fighting the good fight with the best in cutting-edge tools. Companies like Planet scan the surface of the earth at regular intervals and make datasets and tools available for analysis. Specialists like Orbital Insight work with carefully labeled data to deliver these insights at scale.

So look out of the window at the sky when you brush your teeth tonight and think of palm oil!

The post Palm Oil and Machine Learning appeared first on iMerit.


A few weeks ago, we announced a major new partnership with Amazon Web Services to provide secure enterprise-grade data labeling services on SageMaker Ground Truth. Amazon SageMaker Ground Truth is a new capability of Amazon SageMaker that makes it easy for customers to efficiently and accurately label the large datasets required to train Machine Learning systems. iMerit is one of only two such partner companies for AWS anywhere in the world.

Today, we are excited to announce that iMerit USA has also become a SageMaker Ground Truth partner, which means that AWS customers can now partner with us for US-based data labeling services, conducted by our in-house, full-time expert workforce. This is especially useful for customers who need their data work completed in US-based facilities. One unique advantage of our US-based teams is their bilingual text labeling capabilities: customers can rely on iMerit for entity extraction and classification both in English and in Spanish.

You can read more about this new announcement and our US-based data services in this blog post on the AWS Machine Learning Blog.

Amazon SageMaker Ground Truth was created to address the dataset labeling ‘bottleneck’ and help Machine Learning scale beyond the R&D lab. Until now, customers had to rely on their in-house teams or on third party vendors to accurately label their data, which presented huge challenges in terms of quality control, cost and time, and made the launch of commercial-scale ML systems costly and time-consuming. Amazon SageMaker Ground Truth solves this problem with a combination of automated workflows and human intelligence.

The service supports different types of content (images, audio, and text) across the following task types: text classification, image classification, object detection, and semantic segmentation.
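
For readers curious what starting such a job looks like in code, below is a rough boto3 sketch of the create_labeling_job call. Every name, bucket, role, workteam, and Lambda ARN is a placeholder, and the real pre- and post-processing Lambda ARNs are region- and task-specific, so consult the AWS documentation rather than copying these values.

    # A rough sketch of starting a Ground Truth labeling job with boto3.
    # All names, buckets, and ARNs below are placeholders, not real resources.
    import boto3

    sm = boto3.client("sagemaker", region_name="us-east-1")

    sm.create_labeling_job(
        LabelingJobName="example-image-classification-job",
        LabelAttributeName="label",
        InputConfig={
            "DataSource": {"S3DataSource": {
                "ManifestS3Uri": "s3://example-bucket/input.manifest"}},
        },
        OutputConfig={"S3OutputPath": "s3://example-bucket/output/"},
        RoleArn="arn:aws:iam::111122223333:role/ExampleSageMakerRole",
        LabelCategoryConfigS3Uri="s3://example-bucket/label-categories.json",
        HumanTaskConfig={
            "WorkteamArn": "arn:aws:sagemaker:us-east-1:111122223333:workteam/vendor-crowd/example-vendor-team",
            "UiConfig": {"UiTemplateS3Uri": "s3://example-bucket/template.liquid"},
            # Placeholder pre/post-processing Lambdas; real ARNs depend on region and task type.
            "PreHumanTaskLambdaArn": "arn:aws:lambda:us-east-1:111122223333:function:PRE-ImageMultiClass",
            "TaskTitle": "Classify images",
            "TaskDescription": "Choose the correct class for each image",
            "NumberOfHumanWorkersPerDataObject": 1,
            "TaskTimeLimitInSeconds": 300,
            "AnnotationConsolidationConfig": {
                "AnnotationConsolidationLambdaArn":
                    "arn:aws:lambda:us-east-1:111122223333:function:ACS-ImageMultiClass"},
        },
    )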

Swami Sivasubramanian, Vice President of Machine Learning at Amazon Web Services, tells us: “Many companies and organizations need to train Machine Learning models using their own data, but preparing the datasets is time consuming and expensive. Amazon SageMaker Ground Truth is a managed service that provides a simpler and faster method to get labeled data. iMerit is a trusted APN Partner offering a team of trained specialists who can help customers accurately and securely label the datasets required for training Machine Learning systems. With the integration of iMerit, we are excited about making the process of preparing their training datasets even faster and easier.”

Some early-access customers have already been working with Amazon SageMaker Ground Truth and iMerit to label image data, including leading recruitment firm ZipRecruiter. Their CTO, Craig Ogg, also shared his thoughts on the benefits of this new partnership between iMerit and AWS.

“Training a Machine Learning model to be able to identify the most important information requires a sizable dataset to start. The process to create this data is often expensive, manual, and time-consuming. Amazon SageMaker Ground Truth will significantly help us reduce the time and effort required to create datasets for training. Due to the confidential nature of the data, we initially considered using one of our teams but it would take time away from their regular tasks and it would take months to collect the data we needed. Using Amazon SageMaker Ground Truth, we engaged iMerit, a professional labeling company that has been pre-screened by Amazon, to assist with the custom annotation project. With their assistance we were able to collect thousands of annotations in a fraction of the time it would have taken using our own team.”

The post iMerit partners with Amazon SageMaker Ground Truth appeared first on iMerit.


We had the honor of participating in a special event last week on the theme of digital transformation for non-profits. Joining us at NetHope’s Global Summit in Dublin were speakers from Microsoft, Facebook and AWS.

NetHope is an organization that links the world’s largest nonprofits to its most important technology innovators. It was founded in 2001 and works with over 50 global NGOs in 180 countries to improve connectivity and access to IT services, and to leverage data for impact. Our CEO Radha Basu was invited to talk about what digital transformation looks like in practice, sharing learnings from our work labeling and enriching the training datasets that power some of the world’s most advanced algorithms.

Radha started by reminding the audience that AI is powered not by code, but by models built from training datasets, which are themselves constructed by humans. Therefore, the quality and effectiveness of an AI tool is hugely dependent on the quality of the human nuance in the training sets.

This is an important insight when thinking about the future of the workforce in an AI-driven world, a theme with broad implications for the social sector. We see it every day with our clients: the ‘AI Workforce’ is key to accelerating the digital transformation of businesses. Humans are needed to accomplish the human-judgment tasks that power the next generation of computing services. To stay one step ahead of machines, the workforce of the future will have to be diverse, agile and scalable. From that point of view, AI is not a threat for future generations of workers, but rather an opportunity to create more jobs and digital inclusion.

The second part of the talk was dedicated to sharing examples of organisations using AI ‘for good’ in fields like the environment, medicine, and crime prevention. The MAAP project (Monitoring of the Andean Amazon Project) uses high-resolution satellite imagery to detect illegal deforestation in near real-time. Computer Vision is a technology that teaches computers to ‘see’ by feeding them large volumes of images annotated by humans. This technology is used to build driverless cars, but can also be put to work to increase the precision and speed of satellite imagery analysis, and therefore improve the effectiveness of preservation efforts.

In the field of medicine, computer vision is already used to automate the detection of cancer cells. This is an extremely cost-effective way of democratising access to advanced diagnostics, for example in lower-income areas.

For non-profits looking to harness the power of AI to scale their impact, Radha charted the following roadmap: start by identifying some of the problems you can solve with AI, then look at your existing datasets. How can you derive insights and action points from their digitisation and labelling? If needed, accelerate data acquisition from the field. When manually labelling or enriching data, look for a large, diverse workforce: this will ensure that your data is more relevant by bringing in different cultural points of view. Finally, partner with universities or researchers looking for use cases at scale: they can help you define and grow your AI workflow with expert guidance.

You can watch some of the talks from the Summit at this link or learn more about NetHope’s remarkable work here: nethope.org.

The post ‘AI for Good’: iMerit at NetHope Global Summit appeared first on iMerit.


We are proud to announce that iMerit has won not one, but two awards in the last few days. Both awards are delivered by technology thought leaders and recognize iMerit’s cutting-edge, ‘human-in-the-loop’ approach to solving AI’s challenges.

iMerit was first listed as part of this year’s Red Herring Global Top 100, a selection of the most exciting technology companies worldwide, chosen from hundreds of ventures on each continent. The Red Herring Top 100 list has existed since 1996 and is widely used as an instrument for discovering and advocating for the most promising ventures from around the world. To learn more about Red Herring, visit this link.

iMerit was also a winner of Deloitte’s Technology Fast 50, the award that recognizes India’s fastest-growing technology companies, based on their percentage revenue growth over the last three financial years. iMerit ranks #21 in the list of India’s fastest growing companies. The Deloitte Fast 50 have been awarded since 2005 and are also part of a global program that recognizes excellence in technology. To learn more about the Deloitte Fast 50, click here.

We are grateful to both Red Herring and Deloitte for these two awards that recognize our teams’ hard work and risk-taking and look forward to more kudos this season!

The post Awards Week for iMerit appeared first on iMerit.


September was a busy month for the teams at iMerit, and we are glad to share some news with all of you. The first piece of news is that iMerit has won the Asia finals of the MIT IDE Inclusive Innovation challenge. The award recognizes global companies using technology to drive economic opportunity for workers. We will represent Asia in the global finals in Boston on November 8th, along with three other promising companies selected from 165 applicants from 25 countries in the region.

To read more about the MIT IDE Inclusive Innovation challenge and the other finalists, follow this link: https://www.mitinclusiveinnovation.com/

In this video, you can see Jai Natarajan, our VP of Technology and Marketing, as he presents iMerit to the judges at the Asia finals in Bangkok.

Mr. Jai Natarajan, Vice-President, iMerit at MIT IDE Inclusive Innovation Challenge Asia 2018 - YouTube

iMerit was also in the news in a special edition of ‘BBC Click’, the popular technology show, which was shot in Delhi. In front of an enthusiastic young audience at Bikaner House, our CEO Radha Basu talked about AI, the future of work, and the positive social and economic change iMerit helps create in underprivileged communities. You can watch her interview below or see the full programme at this link.

Ms. Radha Basu, CEO iMerit on BBC Click Live - YouTube

The post iMerit on the BBC and at the MIT IDE Inclusive Innovation Challenge appeared first on iMerit.


While you were hard at work watching the FIFA World Cup, we were busy compiling the latest trends in the application of Machine Learning and Computer Vision to sports technology. This is a topic of interest to us because of our work with KinaTrax, a pioneering sports analytics company that helps Major League Baseball teams improve their performance through computer vision technology.

From predictive analysis to the automation of sports journalism and the use of performance-enhancing computer vision technology, here are some ways in which machines are transforming how sports are played by athletes and enjoyed by fans.

Better than the referee?

Predictive analysis has come a long way since the 2010 days of Paul the Octopus. Machine Learning models are now used routinely to predict the results of games.

This year, a model built by start-up Kickoff.ai used advanced mathematical techniques to ‘encode’ the current strength of teams and to project outcomes. Another statistical model was built by Goldman Sachs to simulate one million possible evolutions of the tournament. The model-based prediction did prove accurate to some extent, for example by projecting the high probability of France reaching the semi-finals.
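
As a toy illustration of the simulation idea, the sketch below runs a made-up two-round bracket many times and tallies how often each team wins. The teams, “strengths”, and bracket are invented and bear no relation to the Goldman Sachs model.

    # A toy Monte Carlo sketch of tournament simulation; team strengths are invented.
    import random

    teams = {"France": 0.80, "Belgium": 0.75, "Croatia": 0.70, "England": 0.65}

    def play(a, b):
        """Pick a winner with probability proportional to relative strength."""
        return a if random.random() < teams[a] / (teams[a] + teams[b]) else b

    wins = {t: 0 for t in teams}
    for _ in range(100_000):                           # simulate the mini-bracket many times
        final_winner = play(play("France", "Belgium"), play("Croatia", "England"))
        wins[final_winner] += 1

    print({t: round(n / 100_000, 3) for t, n in wins.items()})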

Another way to leverage machine learning and large volumes of statistical data is to build an index of individual players’ performance potential. SciSports is a Dutch start-up that builds a ‘SciSkill’ score for upcoming players and helps clubs assemble the best teams.

Connecting fans to their teams

A key challenge in sports broadcasting is the sheer volume of video content generated each day across sports, leagues and geographies. This used to translate into editors manually going through all the footage to produce clips of highlights each day. The solution has come in the form of AI-driven editing platforms that use ball-tracking technology and audio patterns to automate the recognition of key moments in each game. This lets journalists create clips in minutes, when it used to take hours to select shots and edit them manually. WSC Sports is one of the leaders in this field and works with a number of organisations, including the NBA. For fans, this new technology means faster and better access to video coverage of every sport and league, however niche.

The athlete and the machine

Cameras and sensors are cheap and plentiful. It is now easy to create datasets of images that record every single moment of a match from different angles. This year, for the first time in the history of the World Cup, VAR (Video Assistant Referee) systems by Hawk-Eye reviewed decisions made by the head referee using audio-visual footage. Cricket fans have known Hawk-Eye’s ball-tracking and trajectory-prediction technology for the past several years.

The combined capabilities of computer vision and motion capture technology give sports analysts powerful insight into what is invisible to the human eye. These statistics let clubs improve the training of their teams, and therefore their performance: the analysis of baseball pitchers provided by KinaTrax for example helped the Chicago Cubs win their first World Series in over a century.

Wearables are bringing another level of access to performance data. Sensors worn in athletes’ gear or equipment can provide in-game movement insights. The U.S. Soccer Federation has hired Irish wearables company STATSports to provide monitoring devices for its four million registered soccer players in the United States. Valued at over $1.5 billion, according to a statement released by the companies, the deal aims to create the world’s largest player data monitoring program and act as a first step towards outfitting youth and amateur soccer players with professional-grade performance technology. 

The use of AI-powered technology has spread to every sport, beyond the team sports where it first appeared. In race car driving, machines can detect small areas of damage that signal the need for a tyre change or other mechanical problems. Biomechanical analysis of players can predict and prevent potentially career-threatening injuries. In tennis, machine learning models can even detect movement patterns and learn to group similar data into different types of shots, which ultimately leads to finer analysis.
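
As a hedged sketch of that last idea about grouping shots: the “racket speed / spin / contact height” features and the two synthetic shot types below are assumptions for illustration, not output from a real tracking system.

    # Synthetic sketch: grouping tennis shots into types by clustering movement features.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    # Each row: [racket speed, ball spin, contact height] -- invented features for illustration.
    shots = np.vstack([
        rng.normal([30, 2000, 1.0], [3, 200, 0.1], size=(50, 3)),   # e.g. topspin forehands
        rng.normal([18, -1500, 0.8], [3, 200, 0.1], size=(50, 3)),  # e.g. slice backhands
    ])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(shots)
    print(np.bincount(kmeans.labels_))   # roughly 50 shots per discovered "type"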

A recent report by leading data analytics company DataRobot mentioned some of the challenges involved in building these models, for example the talent and time required to process large quantities of data. By the time effective models are built, the season might be over. “Speed-to-insight” is therefore a key component of helping teams maintain their edge. This shows how a partner like iMerit, which can assemble large teams of data experts and produce actionable insights in time, is also a critical player in the sports tech ecosystem.

Inside and outside the stadium, technology is playing an exciting role in delivering predictions and actionable insights that enrich the experience of playing and viewing sports.

The post The Athlete and the Machine: New Trends in AI and Sports Technology appeared first on iMerit.


This post is by Emanuel Ott, a Solutions Architect at iMerit and an expert in machine learning and computer vision. It summarizes a talk given at the Machine Intelligence in Autonomous Vehicles Summit in Amsterdam.

To create an algorithm that learns to ‘see’ a typical road the way humans do, data experts first need to classify and then label the different components of the road: for example, “this is a tree, this is another car, this is the curb of the road”. A process that is natural to the human eye and brain needs to be entirely dissected in order to build the data that feeds the algorithm that powers image recognition for a self-driving car. This is not without challenges, the chief one for data experts being: how do I ‘tell what I see’ in a particular image of a road, in words that are predefined and common to all the data experts working on one dataset? And what happens when two autonomous vehicles that are trained on different datasets meet? How do they agree on whose rules to use, the way humans automatically agree on following standard driving rules? Taxonomy is a challenge common to all fields in machine learning, yet the issue is especially critical in the emerging reality of autonomous driving because of its obvious implications for public safety.

To complicate matters further, taxonomy is only one part of an equation that includes other variables such as the time available and the level of accuracy required on a project. For example, can we adopt a wider ‘flat’ taxonomy that allows for greater speed in execution, but does not include crucial variations within categories? In the talk, I took the example of roadside ‘vegetation’ as a class of data that the algorithm would be trained to recognize. However, it could be important for the car to be able to distinguish between ‘grass’ (which it can drive on) and ‘agricultural land’ (which is hazardous to navigate). To be efficient, however, most companies choose to adopt a flat taxonomy that does not allow for subclasses within a category of data. The complexity of the scene itself is another element that influences taxonomy: it is not the same to label the components of an urban scene versus a rural road, or a road at night versus a road during the day. The challenge becomes broader when you include non-passenger vehicles, like autonomous farm vehicles or trucks.
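
As a toy illustration of the trade-off (the class names are examples, not any project’s actual label set), a flat taxonomy and a hierarchical one might be expressed like this, with a helper that collapses subclasses back to their parents:

    # Toy example of a flat taxonomy versus a hierarchical one; classes are illustrative only.
    FLAT_TAXONOMY = ["road", "vehicle", "pedestrian", "vegetation", "building"]

    HIERARCHICAL_TAXONOMY = {
        "vegetation": ["grass", "agricultural_land", "tree"],   # subclasses matter to the planner
        "vehicle": ["car", "truck", "bicycle"],
        "road": ["paved", "unpaved", "curb"],
    }

    def flatten(taxonomy):
        """Collapse subclasses back to their parent class, as a flat scheme would."""
        return {sub: parent for parent, subs in taxonomy.items() for sub in subs}

    print(flatten(HIERARCHICAL_TAXONOMY)["grass"])   # -> "vegetation"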

Lastly, the taxonomy problem is also compounded by the succession of development cycles: it often happens that one labeling effort is focused on creating bounding boxes around “Bicycles” while the next round needs “Bicyclists” labeled. Such conflicting taxonomies introduce the possibility of biases and errors in the labeled data.

One possible solution to the problem of semantic classification is to take an empirical approach to category-naming. Many of the issues linked to data classification arise from the fact that categories are ‘abstract’ (e.g. ‘vegetation’ is a high-level concept; in real life, people are more likely to use words like ‘grass’ or ‘nature’). Before you start your data labeling project, take a survey of the people responsible for annotating the data and agree to use the word that most people have intuitively selected to describe the category. If two words are commonly used by the people in your group, you can already forecast issues with the taxonomy of data on your project.
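
A small sketch of that empirical approach, with invented survey answers: tally what the annotators call each category and flag it when no single term clearly dominates.

    # Tally survey answers for a proposed category; the survey data is invented for illustration.
    from collections import Counter

    survey_answers = ["grass", "grass", "nature", "vegetation", "grass", "nature"]

    counts = Counter(survey_answers)
    (top_word, top_n), (second_word, second_n) = counts.most_common(2)

    if top_n > 2 * second_n:
        print(f"Use '{top_word}' as the category name")
    else:
        print(f"Warning: '{top_word}' and '{second_word}' compete -- expect taxonomy confusion")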

Several companies are working on self-driving cars at the moment, but there is no unified standard on how to teach these cars to ‘see’. I believe now would be the right time to question the assumptions around the data that powers these vehicles, and work towards creating unified standards for labeling this training data. One way to put it is: “self driving cars are safer when they talk to each other”. What’s more, having a unified standard for road data annotation would free up resources to focus on other challenges such as improving the tools to annotate the data. My final prediction is that a movement towards unification will indeed start to take shape in the near future. This will happen either organically through companies opening their datasets or through regulators enforcing industry-wide rules on data labeling.

You can watch Emanuel’s full talk and learn more about the topic at this link.

The post Do we need a standardized taxonomy behind the image training data for self-driving vehicles? appeared first on iMerit.
