The AWS DeepRacer League is the world’s first global autonomous racing league, open to anyone. Developers of all skill levels can compete in person at 22 AWS events globally, or online via the AWS DeepRacer console, for a chance to win an expenses-paid trip to re:Invent 2019, where they will race to win the Championship Cup 2019.

AWS Summit Chicago – winners

On May 30th, the AWS DeepRacer league visited the AWS Summit in Chicago, which was the 11th live race of the 2019 season. The top three there were as enthusiastic as ever and eager to put their models to the test on the track.

The Chicago race came extremely close to seeing all of the top three participants break the 10-second barrier. Scott from A Cloud Guru topped the board with 9.35 seconds, closely followed by RoboCalvin at 10.23 seconds and szecsei at 10.79 seconds.

Before Chicago, the winner Scott from A Cloud Guru had competed in the very first race in Santa Clara, where he was knocked from the top spot in the last hour of racing and ended up 4th, with a time of 11.75 seconds. He tried again in Atlanta, but couldn’t do better than 8th, recording a time of 12.69 seconds. It was third time lucky for him in Chicago, where he was finally crowned champion and scored his winning ticket to the Championship Cup at re:Invent 2019!

Winners from Chicago: RoboCalvin (2nd – 10.23 seconds), Scott (winner – 9.35 seconds), szecsei (3rd – 10.79 seconds).

On to Amazon re:MARS, for lightning fast times and multiple world records!

On June 4th, the AWS DeepRacer League moved on to Las Vegas, Nevada, where the inaugural re:MARS conference took place. re:MARS is a new global AI event focused on Machine Learning, Automation, Robotics, and Space.

Over 2.5 days, AI enthusiasts visited the DeepRacer track to compete for the top prize. It was a competitive race; the world record was broken twice (the previous record was set in Seoul in April and was 7.998 seconds). John (who eventually came second), was first to break it and was in the lead with a time of 7.84 seconds for most of the afternoon before astronav (Anthony Navarro) knocked him off the top spot in the final few minutes of racing, with a winning time of 7.62 seconds. Competition was strong, and developers returned to the tracks multiple times after iterating on their model. Although the times were competitive, they were all cheering for each other and even sharing strategies. It was the fastest race we have seen yet – the top 10 were all under 10 seconds!

The winners from re:MARS John (2nd – 7.84 seconds), Anthony (1st – 7.62 seconds), Gustav (3rd – 8.23 seconds).

Developers of all skill levels can participate in the League

Participants in the league vary in their ability and experience in machine learning. re:MARS, not surprisingly, brought some speedy times, but developers there were still able to learn something new and build on their existing skills. Similarly, our winner from Chicago had some background in the field, but our 3rd place winner had absolutely none. The league is open to all and can help you reach your machine learning goals. The pre-trained models provided at the track make it possible for you to enter the league without building a model, or you can create your own from scratch in one of the workshops held at the event. And new this week is the racing tips page, providing developers with the most up-to-date tools to improve lap times, tips from AWS experts, and opportunities to connect with the DeepRacer community. Check it out today and start sharing your DeepRacer story!

Machine learning developers, with some or no experience before entering the league.

Another triple coming up!

The 2019 season is in the home stretch and during the week of June 10th, 3 more races are taking place. There will be a full round up on all the action next week, as we approach the last few chances on the summit circuit for developers to advance to the finals at re:Invent 2019. Start building today for your chance to win!


This is a guest blog post by Phil Basford, lead AWS solutions architect, Inawisdom.

At re:Invent 2018, AWS announced Amazon Personalize, which allows you to get your first recommendation engine running quickly, to deliver immediate value to your end user or business. As your understanding increases (or if you are already familiar with data science), you can take advantage of the deep capabilities of Amazon Personalize to improve your recommendations.

Working at Inawisdom, I’ve noticed increasing diversity in the application of machine learning (ML) and deep learning. It seems that nearly every day I work on a new exciting use case, which is great!

The most well-known and successful ML use cases have been retail websites, music streaming apps, and social media platforms. For years, they’ve been embedding ML technologies into the heart of their user experience. They commonly provide each user with an individual personalized recommendation, based on both historic data points and real-time activity (such as click data).

Inawisdom was lucky enough to be given early access to try out Amazon Personalize while it was in preview release. Instead of giving it to data scientists or data engineers, the company gave it to me, an AWS solutions architect. With no prior knowledge, I was able to get a recommendation from Amazon Personalize in just a few hours. This post describes how I did so.

Overview

The most daunting aspect of building a recommendation engine is knowing where to start. This is even more difficult when you have limited or little experience with ML. However, you may be lucky enough to know what you don’t know (and what you should figure out), such as:

  • What data to use.
  • How to structure it.
  • What framework/recipe is needed.
  • How to train it with data.
  • How to know if it’s accurate.
  • How to use it within a real-time application.

Basically, Amazon Personalize provides a structure and supports you as it guides you through these topics. Or, if you’re a data scientist, it can act as an accelerator for your own implementation.

Creating an Amazon Personalize recommendation solution

You can create your own custom Amazon Personalize recommendation solution in a few hours. Work through the process in the following diagram.

Creating dataset groups and datasets

When you open Amazon Personalize, the first step is to create a dataset group, which can be created from loading historic data or from data gathered from real-time events. In my evaluation of Amazon Personalize at Inawisdom, I used only historic data.

When using historic data, each dataset is imported from a .csv file located in Amazon S3, and each dataset group can contain three datasets:

  • Users
  • Items
  • Interactions

For the purpose of this quick example, I only prepared the Interactions data file, because it’s required and the most important.

The Interactions dataset contains a many-to-many relationship (in old relational database terms) that maps USER_ID to ITEM_ID. Interactions can be enriched with optional User and Item datasets that contain additional data linked by their IDs. For example, for a film-streaming website, it can be valuable to know the age classification of a film and the age of the viewer and understand which films they watch.

When you have all your data files ready on S3, import them into your dataset group as datasets. To do this, define a schema in the Apache Avro format for each dataset, which allows Amazon Personalize to understand the format of your data. Here is an example of a schema for Interactions:

{
    "type": "record",
    "name": "Interactions",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        {
            "name": "USER_ID",
            "type": "string"
        },
        {
            "name": "ITEM_ID",
            "type": "string"
        },
        {
            "name": "TIMESTAMP",
            "type": "long"
        }
    ],
    "version": "1.0"
}

In evaluating Amazon Personalize, you may find that you spend more time at this stage than the other stages. This is important and reflects that the quality of your data is the biggest factor in producing a usable and accurate model. This is where Amazon Personalize has an immediate effect—it’s both helping you and accelerating your progress.

Don’t worry about the format of the data beyond identifying the key fields, and don’t get caught up in worrying about what model to use or the data it needs. Your focus is just on making your data accessible. If you’re just starting out in ML, you can get a basic dataset group working quickly with minimal data. If you’re a data scientist, you will probably come back to this stage again to improve and add more data points (data features).
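
If you’d rather script this step than click through the console, the same flow is available in the AWS SDK. Here is a minimal sketch using boto3; the dataset group ARN, bucket path, and IAM role are placeholders you would swap for your own:

import boto3

personalize = boto3.client("personalize")

# Register the Avro schema shown above so Amazon Personalize
# understands the layout of the Interactions .csv file.
schema_response = personalize.create_schema(
    name="interactions-schema",
    schema=open("interactions_schema.json").read()  # the JSON shown above
)

# Create the Interactions dataset inside an existing dataset group.
dataset_response = personalize.create_dataset(
    name="interactions-dataset",
    datasetType="INTERACTIONS",
    datasetGroupArn="arn:aws:personalize:...:dataset-group/demo",  # placeholder
    schemaArn=schema_response["schemaArn"]
)

# Import the historic data from S3. The IAM role must allow
# Amazon Personalize to read from the bucket.
personalize.create_dataset_import_job(
    jobName="interactions-import",
    datasetArn=dataset_response["datasetArn"],
    dataSource={"dataLocation": "s3://your-bucket/interactions.csv"},  # placeholder
    roleArn="arn:aws:iam::123456789012:role/PersonalizeS3Role"  # placeholder
)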

Creating a solution

When you have your dataset group with data in it, the next step is to create a solution. A solution covers two areas—selecting the model (recipe) and then using your data to train it. You have recipes and a popularity baseline from which to choose. Some of the recipes on offer include the following:

  • Personalized reranking (search)
  • SIMS—related items
  • HRNN (Coldstart, Popularity-Baseline, and Metadata)—user personalization

If you’re not a data scientist, don’t worry. You can use AutoML, which runs your data against each of the available recipes. Amazon Personalize then judges the best recipe based on the accuracy results produced, and it can also tune some of the settings (hyperparameters) to get better results. The following image shows a solution with the metric section at the bottom showing accuracy:

Amazon Personalize allows you to get something up and running quickly, even if you’re not a data scientist. This includes not just model selection and training, but restructuring the data into what each recipe requires and hiding the hassle of spinning up servers to run training jobs. If you are a data scientist, this is also good news, because you can take full control of the process.
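
For those who prefer the SDK, here is a rough boto3 sketch of the same step; the names and ARNs are placeholders:

import boto3

personalize = boto3.client("personalize")

# Create a solution and let AutoML run your data against the
# applicable recipes, keeping the one with the best accuracy.
solution = personalize.create_solution(
    name="demo-solution",
    datasetGroupArn="arn:aws:personalize:...:dataset-group/demo",  # placeholder
    performAutoML=True
)

# Train a solution version (this starts the actual training job).
version = personalize.create_solution_version(
    solutionArn=solution["solutionArn"]
)

# Once training completes, inspect the accuracy metrics.
metrics = personalize.get_solution_metrics(
    solutionVersionArn=version["solutionVersionArn"]
)
print(metrics["metrics"])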

Creating a campaign

After you have a solution version (a confirmed recipe and trained artifacts), it’s time to put it into action. This isn’t easy, and there is a lot to consider in running ML at scale.

To get you started, Amazon Personalize allows you to deploy a campaign (an inference engine for your recipe and the trained artifacts) as a PaaS. The campaign returns a REST API that you can use to produce recommendations. Here is an example of calling your API from Python:

import boto3

# Runtime client for querying a deployed Amazon Personalize campaign.
personalize_runtime = boto3.client('personalize-runtime')

get_recommendations_response = personalize_runtime.get_recommendations(
    campaignArn = campaign_arn,
    userId = str(user_id),
    itemId = str(item_id)
)

# The ranked list of recommended items for this user.
item_list = get_recommendations_response['itemList']

The results:

Recommendations: [
  "Full Monty, The (1997)",
  "Chasing Amy (1997)",
  "Fifth Element, The (1997)",
  "Apt Pupil (1998)",
  "Grosse Pointe Blank (1997)",
  "My Best Friend's Wedding (1997)",
  "Leaving Las Vegas (1995)",
  "Contact (1997)",
  "Waiting for Guffman (1996)",
  "Donnie Brasco (1997)",
  "Fargo (1996)",
  "Liar (1997)",
  "Titanic (1997)",
  "English Patient, The (1996)",
  "Willy Wonka and the Chocolate Factory (1971)",
  "Chasing Amy (1997)",
  "Star Trek: First Contact (1996)",
  "Jerry Maguire (1996)",
  "Last Supper, The (1995)",
  "Hercules (1997)",
  "Kolya (1996)",
  "Toy Story (1995)",
  "Private Parts (1997)",
  "Citizen Ruth (1996)",
  "Boogie Nights (1997)"
]
Conclusion

Amazon Personalize is a great addition to the AWS set of machine learning services. Its two-track approach allows you to quickly and efficiently get your first recommendation engine running and deliver immediate value to your end user or business. Then you can harness the depth and raw power of Amazon Personalize, which will keep you coming back to improve your recommendations.

Amazon Personalize puts a recommendation engine in the hands of every company and is now available in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Singapore), and EU (Ireland). Well done, AWS!


Just imagine—you say something in one language, and a tool immediately translates it to another language. Wouldn’t it be even cooler to build your own real-time voice translator application using AWS services? It would be similar to the Babel fish in The Hitchhiker’s Guide to the Galaxy:

“The Babel fish is small, yellow, leech-like—and probably the oddest thing in the universe… If you stick one in your ear, you can instantly understand anything said to you in any form of language.”

Douglas Adams, The Hitchhiker’s Guide to the Galaxy

In this post, I show how you can connect multiple services in AWS to build your own application that works a bit like the Babel fish.

About this blog post
Time to read: 15 minutes
Time to complete: 30 minutes
Cost to complete: Under $1
Learning level: Intermediate (200)
AWS services: Amazon Polly, Amazon Transcribe, Amazon Translate, AWS Lambda, Amazon CloudFront, Amazon S3
Overview

The heart of this application consists of an AWS Lambda function that connects the following three AI language services:

  • Amazon Transcribe — This fully managed and continuously trained automatic speech recognition (ASR) service takes in audio and automatically generates accurate transcripts. Amazon Transcribe supports real-time transcriptions, which help achieve near real-time conversion.
  • Amazon Translate — This neural machine-translation service delivers fast, high-quality, and affordable language translation.
  • Amazon Polly — This text-to-speech service uses advanced deep learning technologies to synthesize speech that sounds like a human voice.

A diagrammatic representation of how these three services relate is shown in the following illustration.

To make this process a bit easier, you can use an AWS CloudFormation template, which initiates the application. The following diagram shows all the components of this process, which I later describe in detail.

Here’s the flow of service interactions (a simplified code sketch of steps 4–8 follows the list):

  1. Allow access to your site with Amazon CloudFront, which allows you to get an HTTPS link to your page and which is required by some browsers to record audio.
  2. Host your page on Amazon S3, which simplifies the whole solution. This is also the place to save the input audio file recorded in the browser.
  3. Gain secure access to S3 and Lambda from the browser with Amazon Cognito.
  4. Save the input audio file on S3 and invoke a Lambda function. In the input of the function, provide the name of audio file (that you saved earlier in Amazon S3), and pass the source and target language parameters.
  5. Convert audio into text with Amazon Transcribe.
  6. Translate the transcribed text from one language to another with Amazon Translate.
  7. Convert the new translated text into speech with Amazon Polly.
  8. Save the output audio file back to S3 with the Lambda function, and then return the file name to your page (JavaScript invocation). You could return the audio file itself, but for simplicity, save it on S3 and just return its name.
  9. Automatically play the translated audio to the user.
  10. Accelerate the speed of delivering the file with CloudFront.
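
To make steps 4–8 concrete, here is a heavily simplified Python sketch of what such a Lambda function might look like. It is not the code from the actual repo: the event fields, bucket layout, voice choice, and the synchronous polling of Amazon Transcribe are all simplifying assumptions.

import json
import time
import urllib.request
import boto3

transcribe = boto3.client("transcribe")
translate = boto3.client("translate")
polly = boto3.client("polly")
s3 = boto3.client("s3")

def lambda_handler(event, context):
    bucket = event["bucket"]                 # assumption: passed in by the page
    key = event["audioKey"]
    source_lang = event["sourceLanguage"]    # e.g. "en-US"
    target_lang = event["targetLanguage"]    # e.g. "es-US"

    # 1. Speech to text with Amazon Transcribe (polled for brevity; a
    # production app would use the streaming API or an async callback).
    job_name = "translate-" + key.replace("/", "-")
    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": f"s3://{bucket}/{key}"},
        MediaFormat="wav",
        LanguageCode=source_lang,
    )
    while True:
        job = transcribe.get_transcription_job(TranscriptionJobName=job_name)
        if job["TranscriptionJob"]["TranscriptionJobStatus"] in ("COMPLETED", "FAILED"):
            break
        time.sleep(1)
    transcript_uri = job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"]
    with urllib.request.urlopen(transcript_uri) as f:
        text = json.load(f)["results"]["transcripts"][0]["transcript"]

    # 2. Text to text with Amazon Translate.
    translated = translate.translate_text(
        Text=text,
        SourceLanguageCode=source_lang.split("-")[0],
        TargetLanguageCode=target_lang.split("-")[0],
    )["TranslatedText"]

    # 3. Text to speech with Amazon Polly ("Penelope" is a US Spanish
    # voice; pick one that matches your target language).
    speech = polly.synthesize_speech(
        Text=translated, OutputFormat="mp3", VoiceId="Penelope"
    )

    # 4. Save the output audio back to S3 and return its key.
    out_key = "output/" + job_name + ".mp3"
    s3.put_object(Bucket=bucket, Key=out_key, Body=speech["AudioStream"].read())
    return {"outputKey": out_key}
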
Getting started

As I mentioned earlier, I created an AWS CloudFormation template to create all the necessary resources.

  1. Sign into the console, and then choose Launch Stack, which launches a CloudFormation stack in your AWS account. The stack launches in the US-East-1 (N. Virginia) Region.
  2. Go through the wizard and create the stack by accepting the default values. On the last step of the wizard, acknowledge that CloudFormation creates IAM resources. After 10–15 minutes, the stack has been created.
  3. In the Outputs section of the stack shown in the following screenshot, you find the following four parameters:
    • VoiceTranslatorLink—The link to your webpage.
    • VoiceTranslatorLambda—The name of the Lambda function to be invoked from your web application.
    • VoiceTranslatorBucket—The S3 bucket where you host your application, and where audio files are stored.
    • IdentityPoolIdOutput—The identity pool ID, which allows you to securely connect to S3 and Lambda.
  4. Download the following zip file and then unzip it. There are three files inside.
  5. Open the downloaded file named voice-translator-config.js, and edit it based on the four output values in your stack (Step 3). It should then look similar to the following.
    var bucketName = 'voicetranslatorapp-voicetranslat……';
    var IdentityPoolId = 'us-east-1:535…….';
    var lambdaFunction = 'VoiceTranslatorApp-VoiceTranslatorLambda-….';
  6. In the S3 console, open the S3 bucket (created by the CloudFormation template). Upload all three files, including the modified version of voice-translator-config.js.
Testing

Open your application from the link provided in Step 3. In the Voice Translator App interface, perform the following steps to test the process:

  1. Choose a source language.
  2. Choose a target language.
  3. Think of something to say, choose START RECORDING, and start speaking.
  4. When you finish speaking, choose STOP RECORDING and wait a couple of seconds.

If everything worked fine, the application should automatically play the audio in the target language.

Conclusion

As you can see, it takes less than an hour to create your own unique voice translation application, based on the existing, integrated AI language services in AWS. Plus, the whole process is done without a server.

This application currently supports two input languages: US English and US Spanish. However, Amazon Transcribe recently started supporting real-time speech-to-text in British English, French, and Canadian French. Feel free to try to extend your application by using those languages.

The source code of the app (including the Lambda function, written in JavaScript) is available in the voice-translator-app GitHub repo. To record your voice in the browser, I used the recorder.js script by Matt Diamond.

About the Author

Tomasz Stachlewski is a Solutions Architect at AWS, where he helps companies of all sizes (from startups to enterprises) in their cloud journey. He is a big believer in innovative technology, such as serverless architecture, which allows companies to accelerate their digital transformation.


At re:Invent 2017, we launched the world’s first machine learning (ML)–enabled video camera, AWS DeepLens. This put ML in the hands of developers, literally, with a fully programmable video camera, tutorials, code, and pre-trained models designed to expand ML skills. With AWS DeepLens, it is possible to create useful ML projects without a PhD in computer science or math, and anyone with a decent development background can start using it.

Today, I’m pleased to announce that AWS DeepLens (2019 edition) is now available for pre-order for developers in Canada, Europe, and Japan on the following websites:

  • Amazon.ca
  • Amazon.de
  • Amazon.es
  • Amazon.fr
  • Amazon.it
  • Amazon.co.jp
  • Amazon.co.uk

We have made significant enhancements to the device to further improve your experience:

  • An optimized onboarding process that allows you to get started with ML quickly.
  • Support for the Intel RealSense depth sensor, which allows you to build advanced ML models with higher accuracy. You can use depth data in addition to 2-D image inputs.
  • Support for the Intel Movidius Neural Compute Stick for those who want to achieve additional AI performance using external Intel accelerators.

The 2019 edition comes integrated with SageMaker Neo, which lets customers train models one time and run them with up to 2X improvement in performance.

In addition to device improvements, we have invested significantly in the content development as well. We included guided instructions for building ML for interesting applications such as worker safety, sentiment analysis, who drinks the most coffee, and so on. We’re making ML available to all who want to learn and develop their skills while building fun applications.

Over the last year, we have had many requests from customers in Canada, Europe, and Japan, asking when we would launch AWS DeepLens in their Region, so we are happy to announce today’s news.

“We welcome the general availability of AWS DeepLens in Japan market. It will excite our developer community and developers in Japan to accelerate the adoption of deep learning technologies” said Daisuke Nagao and Ryo Nakamaru, co-leads for Japan AWS User Group AI branch (JAWS-UG AI).

ML in the hands of everybody

Amazon and AWS have a long history with ML and DL tools around the world. In Europe, we opened an ML Development Center in Berlin back in 2013, where developers and engineers support our global ML and DL services such as Amazon SageMaker. This is in addition to the many customers, from startups to enterprises to the public sector, who are using our ML and DL tools in their Regions.

ML and DL have been a big part of our heritage over the last 20 years, and the work we do around the world is helping to democratize these technologies, making them accessible to everyone.

After we announced the general availability of AWS DeepLens in the US in June last year, thousands of devices shipped.  We have seen many interesting and inspirational applications. Two that we’re excited to highlight are the DeepLens Educating Entertainer, or “Dee” for short, and SafeHaven.

Dee—DeepLens Educating Entertainer

Created by Matthew Clark from Manchester, Dee is an example of how image recognition can be used to make a fun, interactive, and educational game for young or less able children.

The AWS DeepLens device asks children to answer questions by showing the device a picture of the answer. For example, when the device asks, “What has wheels?”, the child is expected to show it an appropriate picture, such as a bicycle or bus. Right answers are praised and incorrect ones are given hints on how to get it right. Experiences like these help children learn through interaction and positive reinforcement.

Young children, and some older ones with special learning needs, can struggle to interact with electronic devices. They may not be able to read a tablet screen, use a computer keyboard, or speak clearly enough for voice recognition. With video recognition, this can change. Technology can now better understand the child’s world and observe when they do something, such as picking up an object or performing an action. This leads to many new ways of interaction.

AWS DeepLens is particularly appealing for children’s interactions because it can run its deep learning (DL) models offline. This means that the device can work anywhere, with no additional costs.

Before building Dee, Matthew had no experience working with ML technologies. However, after receiving an AWS DeepLens device at AWS re:Invent 2017, he soon got up to speed with DL concepts.  For more details, see Second Place Winner: Dee—DeepLens Educating Entertainer.

SafeHaven

SafeHaven is another AWS DeepLens application that came from developers getting an AWS DeepLens device at re:Invent 2017.

Built by Nathan Stone and Paul Miller from Ipswich, UK, SafeHaven is designed to protect vulnerable people by enabling them to identify “who is at the door?” using an Alexa Skill. AWS DeepLens acts as a sentry on the doorstep, storing the faces of every visitor. When a visitor is “recognized,” their name is stored in a DynamoDB table, ready to be retrieved by an Alexa Skill. Unknown visitors trigger SMS or email alerts to relatives or carers via an SNS subscription.

This has huge potential as an application for private homes, hospitals, and care facilities, where the door should only be opened to recognized visitors. For more details, see Third Place Winner: SafeHaven: Real-Time Reassurance. Re:invented.

Other applications

In Canada, a large Canadian discount retailer used AWS DeepLens as part of a complex loss-prevention test pilot for its LATAM operations. A Calgary-based oil company tested out augmenting the sign-in process in its warehouse facilities, adding in facial recognition.

One of the world’s largest automotive manufacturers, headquartered in Canada, is building a use case at one of its plants to use AWS DeepLens for predictive maintenance as well as image classification. Additionally, an internal PoC for manufacturing has been built to show how AWS DeepLens could be used to track who takes and returns tools from a shop, and when.

The Northwestern University School of Professional Studies is developing a computer vision course for their data science graduate students, using AWS DeepLens provided by Amazon. Other universities have expressed interest in developing courses to use AWS DeepLens in the curriculum, such as artificial intelligence, information systems, and health analytics.

Summary

These are just a few examples, and we expect to see many more when we start shipping devices around the world. If you have an AWS DeepLens project that you think is cool and you would like us to check out, submit it to the AWS DeepLens Project Outline.

We look forward to seeing even more creative applications come from the launch in Europe, so check the AWS DeepLens Community Projects page often.

About the Authors

Rick Mitchell is a Senior Product Marketing Manager with AWS AI. His goal is to help aspiring developers to get started with Artificial Intelligence. For fun outside of work, Rick likes to travel with his wife and two children, barbecue, and run outdoors.


Nomura Research Institute (NRI) is a leading global provider of system solutions and consulting services in Japan and an APN Premium Consulting Partner. NRI is increasingly getting requests to help customers optimize inventory and production plans, reduce costs, and create better customer experiences. To address these demands, NRI is turning to new sources of data, specifically videos and photos, to help customers better run their businesses.

For example, NRI is helping Japanese convenience stores use data from in-store cameras to monitor inventory. And, NRI is helping Japanese airports to optimize people flow based on traffic patterns observed inside the airport.

In these scenarios, NRI needed to create machine learning models that detect objects. For retailers, NRI needed to detect goods (drinks, snacks, paper products, and so on) and people leaving stores; for airports, commuters.

NRI turned to Acer and AWS to meet their goals. Acer aiSage is an edge computing device that uses computer vision and AI to provide real-time insights. Acer aiSage makes use of Amazon SageMaker Neo, a service that lets you train models that detect objects and classify images once and run them anywhere, and AWS IoT Greengrass, a service that brings local compute, messaging, data caching, sync, and machine learning inference capabilities to edge devices.

“One of our customers, Yamaha Motor Co., Ltd., is evaluating AI-based store analysis and smart store experience.” said Shigekazu Ohmoto, Senior Managing Director, NRI. “We knew that we had to build several computer vision models for such a solution. We built our models using MXNet GluonCV, compiled the models with Amazon SageMaker Neo, and then deployed the models on Acer’s aiSage through AWS IoT Greengrass.  Amazon SageMaker Neo reduced the footprint of the model by abstracting out the ML framework and optimized it to run faster on our edge devices. We leverage full AWS technology stacks including edge side for our AI solutions.”

Here is how object detection and image classification works at NRI.

Amazon SageMaker is used to train, build, and deploy the machine learning model. Amazon SageMaker Neo makes it possible to train machine learning models once and run them anywhere in the cloud and at the edge.

Amazon SageMaker Neo optimizes models to run up to twice as fast, with less than a tenth of the memory footprint, with no loss in accuracy. You start with a machine learning model built using MXNet, TensorFlow, PyTorch, or XGBoost and trained using Amazon SageMaker. Then, choose your target hardware platform. With a single click, Amazon SageMaker Neo compiles the trained model into an executable.

The compiler uses a neural network to discover and apply all of the specific performance optimizations to make your model run most efficiently on the target hardware platform. You can deploy the model to start making predictions in the cloud or at the edge.
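
That "single click" corresponds to one API call. Here is a rough boto3 sketch; the job name, S3 paths, role, input shape, and target device are placeholders to adapt to your own model and hardware:

import boto3

sagemaker = boto3.client("sagemaker")

# Compile a trained MXNet model for a specific target platform.
sagemaker.create_compilation_job(
    CompilationJobName="detector-neo-compile",                # placeholder
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder
    InputConfig={
        "S3Uri": "s3://your-bucket/model/model.tar.gz",       # placeholder
        "DataInputConfig": '{"data": [1, 3, 512, 512]}',      # your model's input shape
        "Framework": "MXNET",
    },
    OutputConfig={
        "S3OutputLocation": "s3://your-bucket/compiled/",     # placeholder
        "TargetDevice": "ml_c5",  # or an edge target such as "jetson_tx2"
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)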

At launch, Amazon SageMaker Neo was available in four AWS Regions: US East (N. Virginia), US West (Oregon), EU (Ireland), Asia Pacific (Seoul). As of May 2019, SageMaker Neo is now available in Asia Pacific (Tokyo), Japan.

To learn more about Amazon SageMaker Neo, see the Amazon SageMaker Neo webpage.

About the Authors

Satadal Bhattacharjee is Principal Product Manager with AWS AI. He leads the Machine Learning Engine PM team working on projects such as SageMaker Neo, AWS Deep Learning AMIs, and AWS Elastic Inference. For fun outside work, Satadal loves to hike, coach robotics teams, and spend time with his family and friends.

Kimberly Madia is a Principal Product Marketing Manager with AWS Machine Learning. Her goal is to make it easy for customers to build, train, and deploy machine learning models using Amazon SageMaker. For fun outside work, Kimberly likes to cook, read, and run on the San Francisco Bay Trail.


Pioneer Corp is a Japanese multinational corporation specializing in digital entertainment products. Pioneer wanted to help their customers check road and traffic conditions through in-car navigation systems. They developed a real-time, image-sharing service to help drivers navigate. The solution analyzes photos, diverts traffic, and sends alerts based on the observed conditions.  Because the pictures are of public roadways, they also had to ensure privacy by blurring out faces and license plate numbers.

Pioneer built their image-sharing service using Amazon SageMaker Neo. Amazon SageMaker is a fully managed service that enables developers to build, train, and deploy machine learning models with much less effort and at lower cost. Amazon SageMaker Neo is a service that allows developers to train machine learning models once and run them anywhere in the cloud and at the edge. Amazon SageMaker Neo optimizes models to run up to twice as fast, with less than a tenth of the memory footprint, with no loss in accuracy.

You start with an ML model built using MXNet, TensorFlow, PyTorch, or XGBoost and trained using Amazon SageMaker. Then, choose your target hardware platform such as M4/M5/C4/C5 instances or edge devices. With a single click, Amazon SageMaker Neo compiles the trained model into an executable.

The compiler uses a neural network to discover and apply all of the specific performance optimizations to make your model run most efficiently on the target hardware platform. You can deploy the model to start making predictions in the cloud or at the edge.

At launch, Amazon SageMaker Neo was available in four AWS Regions: US East (N. Virginia), US West (Oregon), EU (Ireland), Asia Pacific (Seoul). As of May 2019, SageMaker Neo is now available in Asia Pacific (Tokyo), Japan.

Pioneer developed a machine learning model for real-time image detection and classification using data from cameras in cars. They detect many different kinds of images, such as license plates, people, street traffic, and road signs. The in-car cameras upload data to the cloud and run inference using Amazon SageMaker Neo. The results are sent back to the cars so drivers can be informed on the road.

Here’s how it works.

“We decided to use Amazon SageMaker, a fully managed service for machine learning,” said Ryunosuke Yamauchi, an AI Engineer at Pioneer. “We needed a fully managed service because we didn’t want to spend time managing GPU instances or integrating different applications. In addition, Amazon SageMaker offers hyperparameter optimization, which eliminates the need for time-consuming, manual hyperparameter tuning. Also, we chose Amazon SageMaker because it supports all leading frameworks such as MXNet GluonCV. That’s our preferred framework because it provides state-of-the-art pre-trained object detection models such as Yolo V3.”

To learn more about Amazon SageMaker Neo, see the Amazon SageMaker Neo webpage.

About the Authors

Satadal Bhattacharjee is Principal Product Manager with AWS AI. He leads the Machine Learning Engine PM team working on projects such as SageMaker Neo, AWS Deep Learning AMIs, and AWS Elastic Inference. For fun outside work, Satadal loves to hike, coach robotics teams, and spend time with his family and friends.

Kimberly Madia is a Principal Product Marketing Manager with AWS Machine Learning. Her goal is to make it easy for customers to build, train, and deploy machine learning models using Amazon SageMaker. For fun outside work, Kimberly likes to cook, read, and run on the San Francisco Bay Trail.


Abbott Laboratories has more data than its field team can decipher while on-site with clients. Their solution? Working with Smart Bots (smartbots.ai) to build Maya: an enterprise-grade, reliable, and stable chatbot powered by AWS machine learning services like Amazon Lex, AWS Lambda, Amazon Comprehend, and Amazon SageMaker.

For context, Abbott Laboratories is a multinational healthcare company and a forerunner in India in its deployment of AI.  Maya serves Abbott’s 3000+ person field force in India, providing sales operations support and providing access to contextual information at employees’ fingertips.

The chatbot proves especially helpful while employees are in the field meeting doctors. Maya can handle the nitty-gritty of querying and fetching information from enterprise applications so that employees can focus on higher-order tasks.

Maya is integrated with the customer relationship management (CRM) system at Abbott. For each query, the bot gets authenticated on behalf of the user and retrieves the required information.

Amazon Lex enables the language model

Amazon Lex is core to the Maya solution, having been chosen after long discussions regarding the conversation flows and data access protocol from the backend system.

The team identified intents from the conversation flows. Maya today has more than 50 intents—including a “small talk” intent to make the bot more human-like—and close to 250 slots. Most of the intents revolve around data-related actions (for example, filter, compute, and so on). The small talk intent handles phrases like “thank you for your help.”

Lambda determines the response

All 50 intents are linked to a single Lambda function. The following steps are performed on every request that calls the function (a simplified handler sketch follows the list).

  • Validate the slots based on business rules.
  • Call all the subscribed methods related to the newly filled slots.
  • Identify the next state.
  • Construct the response object.
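
As an illustration only (not Abbott’s production code), a single fulfillment handler that dispatches on the Lex intent name might be structured like this in Python; the validator table and messages are hypothetical:

# Hypothetical per-intent business-rule validators; in the real bot these
# would be populated for each of the 50 intents.
VALIDATORS = {}  # intent name -> list of slot-validation functions

def lambda_handler(event, context):
    intent = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]
    session_attributes = event.get("sessionAttributes") or {}

    # 1. Validate the slots based on business rules.
    for validate in VALIDATORS.get(intent, []):
        error = validate(slots)  # returns None or {"slot": ..., "message": ...}
        if error:
            # Re-elicit the offending slot.
            return {
                "sessionAttributes": session_attributes,
                "dialogAction": {
                    "type": "ElicitSlot",
                    "intentName": intent,
                    "slots": slots,
                    "slotToElicit": error["slot"],
                    "message": {"contentType": "PlainText",
                                "content": error["message"]},
                },
            }

    # 2./3. Call the methods subscribed to newly filled slots and
    # identify the next state (details elided in this sketch).

    # 4. Construct the response object for a fulfilled intent.
    return {
        "sessionAttributes": session_attributes,
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText",
                        "content": "Here is the data you asked for."},
        },
    }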

Lambda was the right fit for implementing the validation and state-flow logic described above.

Session attributes handle context

The team used intent chaining to enhance the conversation flow, which they laud because it makes the bot smarter and streamlines bot management. For those less familiar with this concept, intent chaining facilitates shifting between multiple intents without losing the context. In Maya, context is stored as JSON in the session attributes. The Context object is structured as follows:

sessionAttributes: {
  "context": {
    "previous-context": {
      "primary-context": true,
      "intent-name": "intent-A",
      "slots": {
        "slot-name": "slot-value",
        ...
      },
      "context-variable-1": "value",
      "context-variable-2": "value"
    },
    "current-context": {
      "intent-name": "intent-B",
      "context-variable-3": "value",
      "context-variable-4": "value"
    }
  }
}

Note: Values in session attributes can only be strings, so the Context JSON object has to be stringified and then assigned.
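
In Python, for example, that stringification is a one-liner:

import json

context = {"current-context": {"intent-name": "intent-B"}}  # example value

# Session attribute values must be strings, so serialize the context...
session_attributes = {"context": json.dumps(context)}

# ...and parse it back into a dict on the next invocation.
context = json.loads(session_attributes["context"])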

In the above example, the flow was shifted from intent A to intent B (leaving intent A pending fulfillment). After the current intent (intent B) is fulfilled, the dialogue state goes back to intent A, retaining the previous state.

In real-world terms, this example is applicable in the healthcare space when a user wants to toggle between analysis of a large dataset and individual patient health records. For example, users may want to view the analysis for the causes, symptoms, and likelihood of various diseases.

Results and next steps

With the Maya chatbot deployed in the field, about a third of the queries that medical representatives raise are now answered by Maya rather than a human.

In the coming months, the team looks to further the use of the chatbot and also make it smarter. In particular, they’re looking at using Amazon SageMaker Reinforcement Learning with the Gym interface to facilitate ongoing training while engaging users. The thinking is to prompt a user with what it expects is the next set of useful interactions, then reward or penalize the bot based on the relevance of its recommendations.

Amazon SageMaker is also core to a mother-bot architectural approach that is currently being tested. This mother bot is effectively the coordination point that can query the correct child bot to get an answer to the user. This ensemble of bots is expected to perform even better than a single bot handling all the intents. From a technical perspective, the mother bot is a classification algorithm implemented in Amazon SageMaker—a relatively easy task thanks to the streamlined workflow that Amazon SageMaker enables.

About the Author

Marisa Messina is on the AWS AI marketing team, where her job includes identifying the most innovative AWS-using customers and showcasing their inspiring stories. Prior to AWS, she worked on consumer-facing hardware and then university-facing cloud offerings at Microsoft. Outside of work, she enjoys exploring the Pacific Northwest hiking trails, cooking without recipes, and dancing in the rain.


The AWS DeepRacer League is the world’s first global autonomous racing league, open to anyone. Developers of all skill levels can get hands-on with machine learning in a fun and exciting way, racing for prizes and glory at 21 events globally and online via the DeepRacer console. The Virtual Circuit launched at the end of April, allowing developers to compete from anywhere in the world via the console – no car or track required – for a chance to top the leaderboard and score points in one of the 6 monthly competitions.

The rubber hits the road for the June race!

On June 3rd, the Kumo Torakku challenge opened; it is open for racing until June 30th at midnight PST. Inspired by the Suzuka circuit in Japan, this track will help developers of all skill levels put their models to the test and advance their knowledge and practice of machine learning. All you need to do is log into the console, where you will be taken through a few quick and easy steps to get your model up and running and ready to race. With the AWS Free Tier, you are covered for up to 10 hours of training (in your first 30 days of usage), so you can enter the AWS DeepRacer League at no cost to you.

Once you have learned the basics you will be able to immerse yourself inside the AWS DeepRacer online simulator and watch your model train, until it is ready for submission to the leaderboard. Will it make it round the hairpin, to get views of Mt Fuji? Will you optimize for speed or direction to get the model through the curves? Can you tune your model to take pole position? Get racing today, and don’t forget, if you compete in multiple online races you will score more points, and increase your chances to be eligible for one of the overall Virtual Circuit prizes!

The AWS DeepRacer League is open to all, and you don’t need the AWS DeepRacer car or an in-person race for a chance to compete; with the Virtual Circuit, you can participate from the comfort of the console. Start your engines, the June race is on!

Watch a successful full lap of the Kumo Torakku, from the AWS DeepRacer 3D online simulator

Kumo Torakku Virtual League Race - June 2019

The Suzuka circuit and the new Kumo Torakku virtual race track

What’s new in the Kumo Torakku?

Aside from enjoying the scenery, you now have the ability to train your model at a maximum speed of 8 meters per second. But beware: the Kumo Torakku has tight corners, and a car travelling at that speed may not be able to take the turns well. It may take time for your model to converge, and training time could increase with more throttle, so you will have to experiment with speed in your reward function to succeed. Get started today for your chance to win an expenses-paid ticket and join the best of the best at re:Invent 2019.
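
As one illustration of that trade-off (not an official racing line!), a reward function could damp the incentive for speed whenever the car is steering hard. The params keys used below are part of the standard AWS DeepRacer input; the thresholds are assumptions to experiment with:

def reward_function(params):
    # Standard AWS DeepRacer input parameters.
    speed = params["speed"]                   # current speed in m/s
    steering = abs(params["steering_angle"])  # degrees, ignore direction
    on_track = params["all_wheels_on_track"]

    if not on_track:
        return 1e-3  # near-zero reward for leaving the track

    reward = speed  # baseline: reward faster driving
    if steering > 15.0 and speed > 4.0:
        # Penalize carrying high speed into tight corners,
        # like the Kumo Torakku hairpin.
        reward *= 0.5

    return float(reward)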

Cheers to the London Loop winner!

And if that doesn’t inspire you, here’s a quick spotlight on, and celebration of, the May race winner. After a month-long race, the London Loop closed on Friday, May 31st, and the first champion of the virtual tournament was crowned. Karl, who works for the National Australia Bank (NAB), took home the top prize and will now be heading to re:Invent 2019 to join the race for the Championship Cup. At NAB, teams are encouraged to experiment with new concepts and technologies, and the team there has been on its machine learning journey with DeepRacer since it launched at re:Invent 2018. They have created their own DeepRacer community, hosted their own competition, and even saw a team member take third place at the AWS Summit in Sydney.

Karl was joined on the London Loop podium by his teammate Paul, who came third in the May race. Paul recently posted about their experience with the AWS DeepRacer League and you can check it out here. Also be on the lookout for part two where they will share more tips on how to compete to win. Karl, Paul and the rest of the NAB team made a combined 533 attempts to conquer the London Loop challenge. They worked hard on their models, tuning them over time and ultimately clinching the win, and they even said “the virtual league was much more fun than the real race!”

Congratulations to the team and here’s to more AWS DeepRacer success!

About the Author

Alexandra Bush is a Senior Product Marketing Manager for AWS AI. She is passionate about how technology impacts the world around us and enjoys being able to help make it accessible to all. Out of the office she loves to run, travel and stay active in the outdoors with family and friends.


Bewgle is an SAP.iO and Techstars-funded company that uses AWS services to surface insights from user-generated text and audio streams. These insights help product managers increase customer satisfaction and engagement with their various products—beauty, electronics, or anything in between. By listening to the voices of their customers with the help of Bewgle, powered by AWS, these product managers are able to drive increased sales for their products.

An average human can read only about 250 words per minute. To synthesize 1000 customer reviews would therefore take upwards of 8 hours. Analyzing the information from all those reviews—plus other text like forum posts and blog posts, as well as unstructured content like survey verbatims and audio streams—quickly becomes untenable.
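
That estimate is easy to sanity-check (assuming a typical review runs about 120 words):

words_per_review = 120   # assumption: a typical review length
reviews = 1000
reading_speed = 250      # words per minute, per the figure above

minutes = reviews * words_per_review / reading_speed
print(minutes / 60)      # => 8.0 hours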

This is exactly the kind of problem where AI can excel, specifically, the subset of machine learning (ML) called natural language processing (NLP). At the heart of Bewgle’s solution is an AI platform developed completely on AWS that analyzes millions of pieces of content, then extracts key topics and the sentiment behind them. What would otherwise take years can now be done in minutes with AWS machine learning services and the AWS tech stack as a whole.

Indeed, the Bewgle solution makes use of a breadth of AWS services. Bewgle’s data processing pipeline relies on AWS Lambda and Amazon DynamoDB, which form the core of the ML tasks involved:

  • Storing data for analysis at scale.
  • Cleaning up data.
  • Firing various processing functions dynamically to generate the analysis.

The team developed an innovative serverless ML workflow to scale the system and orchestrate various workflows in a loosely coupled way. This gave them tremendous agility and flexibility in evaluating and choosing various approaches independently, facilitating speedy innovation.

A typical workflow for Bewgle starts with Amazon SageMaker Ground Truth, which they use to collect and tag data at scale and on demand. The team lauds the high accuracy of the data tagging that Amazon SageMaker Ground Truth delivers. Bewgle co-founder Shantanu Shah explains, “It [Amazon SageMaker Ground Truth] enables efficiency for Bewgle as we no longer have to look for and manage human taggers, and it’s affordable too.”

Once the data tagging is complete, the Bewgle team turns to Amazon SageMaker to reason over it. They appreciate using the familiar Jupyter Notebook interface to work with the data; they quickly and easily build and test multiple models. The automatic hyperparameter tuning within Amazon SageMaker greatly speeds up what would otherwise be a significant effort for the Bewgle team and makes it possible to achieve a high level of accuracy and confidence.
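
Bewgle’s exact setup isn’t public, but automatic model tuning with the Amazon SageMaker Python SDK generally looks like the following sketch; the estimator, objective metric, and hyperparameter ranges are stand-ins:

from sagemaker.tuner import HyperparameterTuner, ContinuousParameter

# `estimator` is a previously configured sagemaker.estimator.Estimator;
# the objective metric and range below are illustrative stand-ins.
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:accuracy",
    hyperparameter_ranges={
        "learning_rate": ContinuousParameter(1e-5, 1e-1),
    },
    metric_definitions=[{"Name": "validation:accuracy",
                         "Regex": "accuracy=([0-9\\.]+)"}],
    max_jobs=20,          # total training jobs to run
    max_parallel_jobs=3,  # jobs to run concurrently
)

# Launch the tuning job against training data in S3.
tuner.fit({"train": "s3://your-bucket/train/"})  # placeholder path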

The next step is model deployment, and Amazon SageMaker once again is the solution.  Deploying with Amazon SageMaker is helpful because, in Shah’s words, “Traffic bursts are not an issue as the scalability and redundancy are automatically taken care of.”   He adds, “Overall, [Amazon] SageMaker helps in every step of model building, tuning and serving and saves countless hours of effort for Bewgle.”

This end-to-end workflow is depicted in the following diagram.

To make the insights available to customers, they built an API using AWS Elastic Beanstalk. The API allows customers to consume the data in any format. A UI layer built on top of the API also allows the customers to view the data as a digest and a dashboard.  With this implementation, listening to user insights at scale becomes easy.  Bewgle users from R&D teams can be smarter in designing new products; product design teams can consider many factors that might otherwise be overlooked; and business development teams can analyze and compare competitor data when determining new features.

Customer support teams are another key user group for Bewgle. Traditional approaches to customer support center mostly or strictly on answering queries related to structured data that they already have (e.g., templatized emails).  Because verbatims (such as comments left by hotel guests) are unstructured data, they cannot contribute to answering customer support queries. Bewgle believes that converting this unstructured text data into structured data is a key to continuously enhancing customer service. Bewgle’s NLP algorithms continuously learn as the data increases, and their output is structured data that is usable by customer service teams. As a tangible example, consider a customer who notes in a feedback form for a product that they could not open the container to access it. The customer service team is able to take that insight and realize that the glue had hardened on a certain batch, making them impossible to open. As such, the company can avoid creating more disgruntled customers (and potentially losing revenue as a result) by removing that batch from the customer-ready pile.

The team is composed of ex-Googlers who founded Bewgle to solve the information overload problem.  The Bewgle crew finds that the AWS AI and ML services enable their workflow to include “less headache” and more impact. The ease of use, documentation, and broad popularity of the AWS tech stack makes it appealing, and the reason for Bewgle’s choice to use AWS as its primary AI/ML platform.

In particular, Shah notes, “Amazon SageMaker allows us to add tremendous flexibility. [Now] we can rapidly iterate on our models as a result and this directly impacts the strength of our company.”

As the awareness of unstructured data analysis, NLP, and AI techniques has grown, Bewgle has seen rapid growth in its business over the last year. Going forward, the team plans to further scale the technology to other verticals and expand to other geographies.

About the Author

Marisa Messina is on the AWS AI marketing team, where her job includes identifying the most innovative AWS-using customers and showcasing their inspiring stories. Prior to AWS, she worked on consumer-facing hardware and then university-facing cloud offerings at Microsoft. Outside of work, she enjoys exploring the Pacific Northwest hiking trails, cooking without recipes, and dancing in the rain.


We are excited to announce the open source release of Gluon Time Series (GluonTS), a Python toolkit developed by Amazon scientists for building, evaluating, and comparing deep learning–based time series models. GluonTS is based on the Gluon interface to Apache MXNet and provides components that make building time series models simple and efficient.

In this post, I describe the key functionality of the toolkit and demonstrate how to apply GluonTS to a time series forecasting problem.

Time series modeling use cases

Time series, as the name suggests, are collections of data points that are indexed by time. Time series arise naturally in many different applications, typically by measuring the value of some underlying process at a fixed time interval.

For example, a retailer might calculate and store the number of units sold for each product at the end of each business day. For each product, this leads to a time series of daily sales. An electricity company might measure the amount of electricity consumed by each household in a fixed interval, such as every hour. This leads to a collection of time series of electricity consumption. AWS customers might use Amazon CloudWatch to record various metrics relating to their resources and services, leading to a collection of metrics time series.

A typical time series may look like the following, where the measured amount is shown on the vertical axis and the horizontal axis is time:

Given a set of time series, you might ask various kinds of questions:

  • How will the time series evolve in the future? Forecasting
  • Is the behavior of the time series in a given period abnormal? Anomaly detection
  • Which group does a given time series belong to? Time series classification
  • Some measurements are missing, what were their values? Imputation

GluonTS allows you to address these questions by simplifying the process of building time series models, that is, mathematical descriptions of the process underlying the time series data. Numerous kinds of time series models have been proposed, and GluonTS focuses on a particular subset of these techniques based on deep learning.

GluonTS key functionality and components

GluonTS provides various components that make building deep learning–based time series models simple and efficient. These models use many of the same building blocks as models that are used in other domains, such as natural language processing or computer vision.

Deep learning models for time series modeling commonly include components such as recurrent neural networks based on Long Short-Term Memory (LSTM) cells, convolutions, and attention mechanisms. This makes using a modern deep-learning framework, such as Apache MXNet, a convenient basis for developing and experimenting with such models.

However, time series modeling also often requires components that are specific to this application domain. GluonTS provides these time series modeling-specific components on top of the Gluon interface to MXNet. In particular, GluonTS contains:

  • Higher-level components for building new models, including generic neural network structures like sequence-to-sequence models and components for modeling and transforming probability distributions
  • Data loading and iterators for time series data, including a mechanism for transforming the data before it is supplied to the model
  • Reference implementations of several state-of-the-art neural forecasting models
  • Tooling for evaluating and comparing forecasting models

Most of the building blocks in GluonTS can be used for any of the time series modeling use cases mentioned earlier, while the model implementations and some of the surrounding tooling are currently focused on the forecasting use case.

GluonTS for time series forecasting

To make things more concrete, look at how to use one of the time series models that come bundled with GluonTS to make forecasts on a real-world time series dataset.

For this example, use the DeepAREstimator, which implements the DeepAR model proposed in the DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks paper. Given one or more time series, the model is trained to predict the next prediction_length values given the preceding context_length values. Instead of predicting a single best value for each position in the prediction range, the model outputs the parameters of a probability distribution for each output position.

To encapsulate models and trained model artifacts, GluonTS uses an Estimator/Predictor pair of abstractions that should be familiar to users of other machine learning frameworks. An Estimator represents a model that can be trained on a dataset to yield a Predictor, which can later be used to make predictions on unseen data.

Instantiate a DeepAREstimator object by providing a few hyperparameters:

  • The time series frequency (for this example, I use 5 minutes, so freq="5min")
  • The prediction length (36 time points, which makes it span 3 hours)

You can also provide a Trainer object that can be used to configure the details of the training process. You could configure more aspects of the model by providing more hyperparameters as arguments, but stick with the default values for now, which usually provide a good starting point.

from gluonts.model.deepar import DeepAREstimator
from gluonts.trainer import Trainer

estimator = DeepAREstimator(freq="5min", 
                            prediction_length=36, 
                            trainer=Trainer(epochs=10))
Model training on a real dataset

Having specified the Estimator, you are now ready to train the model on some data. Use a freely available dataset on the volume of tweets mentioning the AMZN ticker symbol. This can be obtained and displayed using Pandas, as follows:

import pandas as pd
import matplotlib.pyplot as plt

url = "https://raw.githubusercontent.com/numenta/NAB/master/data/realTweets/Twitter_volume_AMZN.csv"
df = pd.read_csv(url, header=0, index_col=0)

df[:200].plot(figsize=(12, 5), linewidth=2)
plt.grid()
plt.legend(["observations"])
plt.show()

GluonTS provides a Dataset abstraction for providing uniform access to data across different input formats. Here, use ListDataset to access data stored in memory as a list of dictionaries. In GluonTS, any Dataset is just an Iterable over dictionaries mapping string keys to arbitrary values.

To train your model, truncate the data up to April 5, 2015. Data past this date is used later for testing the model.

from gluonts.dataset.common import ListDataset

training_data = ListDataset(
    [{"start": df.index[0], "target": df.value[:"2015-04-05 00:00:00"]}],
    freq = "5min"
)

With the dataset in hand, you can now use your estimator and call its train method. When the training process is finished, you have a Predictor that can be used for making forecasts.

predictor = estimator.train(training_data=training_data)

Model evaluation

Now use the predictor to plot the model’s forecasts on a few time ranges that start after the last time point seen during training. This gives a qualitative feel for the outputs the model produces.

Using the same base dataset as before, create a few test instances by taking data past the time range previously used for training.

test_data = ListDataset(
    [
        {"start": df.index[0], "target": df.value[:"2015-04-10 03:00:00"]},
        {"start": df.index[0], "target": df.value[:"2015-04-15 18:00:00"]},
        {"start": df.index[0], "target": df.value[:"2015-04-20 12:00:00"]}
    ],
    freq = "5min"
)

As you can see from the following plots, the model produces probabilistic predictions. This is important because it provides an estimate of how confident the model is, and allows downstream decisions based on these forecasts to account for this uncertainty.

from itertools import islice
from gluonts.evaluation.backtest import make_evaluation_predictions

def plot_forecasts(tss, forecasts, past_length, num_plots):
    for target, forecast in islice(zip(tss, forecasts), num_plots):
        ax = target[-past_length:].plot(figsize=(12, 5), linewidth=2)
        forecast.plot(color='g')
        plt.grid(which='both')
        plt.legend(["observations", "median prediction", "90% confidence interval", "50% confidence interval"])
        plt.show()

forecast_it, ts_it = make_evaluation_predictions(test_data, predictor=predictor, num_eval_samples=100)
forecasts = list(forecast_it)
tss = list(ts_it)
plot_forecasts(tss, forecasts, past_length=150, num_plots=3)
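Each element of forecasts is a sample-based forecast object that can also be queried directly, beyond plotting. A small sketch, assuming the Forecast interface of the GluonTS version used here:

forecast = forecasts[0]
print(forecast.start_date)          # first timestamp of the forecast window
print(forecast.mean[:5])            # mean of the sample paths, first five steps
print(forecast.quantile(0.9)[:5])   # 90th percentile of the samples, first five steps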

Now that you are satisfied that the forecasts look reasonable, you can compute a quantitative evaluation of the forecasts for all the time series in the test set using a variety of metrics. GluonTS provides an Evaluator component, which performs this model evaluation. It produces some commonly used error metrics such as MSE, MASE, symmetric MAPE, RMSE, and (weighted) quantile losses.

from gluonts.evaluation import Evaluator

evaluator = Evaluator(quantiles=[0.5], seasonality=2016)

agg_metrics, item_metrics = evaluator(iter(tss), iter(forecasts), num_series=len(test_data))
agg_metrics

{'MSE': 163.59102376302084,
 'abs_error': 1090.9220886230469,
 'abs_target_sum': 5658.0,
 'abs_target_mean': 52.38888888888889,
 'seasonal_error': 18.833625618877182,
 'MASE': 0.5361500323952336,
 'sMAPE': 0.21201368270827592,
 'MSIS': 21.446000940010823,
 'QuantileLoss[0.5]': 1090.9221000671387,
 'Coverage[0.5]': 0.34259259259259256,
 'RMSE': 12.790270668090681,
 'NRMSE': 0.24414090352665138,
 'ND': 0.19281054942082837,
 'wQuantileLoss[0.5]': 0.19281055144346743,
 'mean_wQuantileLoss': 0.19281055144346743,
 'MAE_Coverage': 0.15740740740740744}
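In addition to the aggregate metrics, the evaluator returns item_metrics, a Pandas DataFrame with one row of metrics per test time series, which is handy for spotting individual series on which the model underperforms:

item_metrics.head()   # per-series MSE, MASE, sMAPE, quantile losses, ...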

You can now compare these metrics against those produced by other models, or against the business requirements for your forecasting application. For example, you can produce forecasts using the seasonal naive method. This model assumes that the data has a fixed seasonality (here, a season of 2016 time steps: one week of 5-minute intervals, 7 × 24 × 12 = 2016) and produces forecasts by copying the observations from one season earlier.

from gluonts.model.seasonal_naive import SeasonalNaivePredictor

seasonal_predictor_1W = SeasonalNaivePredictor(freq="5min", prediction_length=36, season_length=2016)

forecast_it, ts_it = make_evaluation_predictions(test_data, predictor=seasonal_predictor_1W, num_eval_samples=100)
forecasts = list(forecast_it)
tss = list(ts_it)

agg_metrics_seasonal, item_metrics_seasonal = evaluator(iter(tss), iter(forecasts), num_series=len(test_data))

df_metrics = (
    pd.DataFrame.from_dict(agg_metrics, orient='index')
    .rename(columns={0: "DeepAR"})
    .join(
        pd.DataFrame.from_dict(agg_metrics_seasonal, orient='index')
        .rename(columns={0: "Seasonal naive"})
    )
)
df_metrics.loc[["MASE", "sMAPE", "RMSE"]]

By looking at these metrics, you can get an idea of how your model compares to baselines or other advanced models. To improve the results, tweak the architecture or the hyperparameters.
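For instance, here is a sketch of one possible adjustment; context_length and num_layers are hyperparameters exposed by DeepAREstimator, and the specific values are illustrative assumptions rather than tuned settings:

# Illustrative values only; treat these as a starting point for experimentation.
estimator_tuned = DeepAREstimator(
    freq="5min",
    prediction_length=36,
    context_length=72,      # condition on 6 hours of history
    num_layers=3,           # a deeper RNN
    trainer=Trainer(epochs=20, learning_rate=1e-3)
)
predictor_tuned = estimator_tuned.train(training_data=training_data)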

Help make GluonTS better!

In this post, I only touched on a small subset of the functionality provided by GluonTS. If you would like to dive deeper, I encourage you to check out the tutorials and further examples.

GluonTS is open source under the Apache license. We welcome and encourage contributions from the community in the form of bug reports and pull requests. Head over to the GluonTS GitHub repo now!

About the Authors

Jan Gasthaus is a Senior Machine Learning Scientist with AWS AI Labs where his passion is designing machine learning models, algorithms, and systems, and deploying them at scale.

Lorenzo Stella is an Applied Scientist on the AWS AI Labs team. His research interests are in machine learning and optimization. He has worked on probabilistic and deep models for forecasting.

Tim Januschowski is a Machine Learning Science Manager at AWS AI Labs. He has worked on forecasting and has produced end-to-end solutions for a wide variety of forecasting problems, from demand forecasting to server capacity forecasting over the course of his tenure at Amazon.

Richard Lee is a Product Manager at AWS AI Labs. He is passionate about how Artificial Intelligence impacts the world around us, and is on a mission to make it accessible to all. He is also a pilot, science and nature admirer, and beginner cook.

Syama Sundar Rangapuram is a Machine Learning Scientist at AWS AI Labs. His research interests are in machine learning and optimization. In forecasting, he has worked on probabilistic models and data-driven models in particular for the cold-start problem.

Konstantinos Benidis is an Applied Scientist at AWS AI Labs. His research interests are in machine learning, optimization and financial engineering. He has worked on probabilistic and deep models for forecasting.

Alexander Alexandrov is a Post-Doc with the AWS AI Labs team and TU Berlin. He is passionate about scalable data management, data analytics applications, and optimizing DSLs.

David Salinas is a Senior Applied Scientist on the AWS AI Labs team. He works on applying deep learning to various applications such as forecasting and NLP.

Danielle Robinson is an Applied Scientist on the AWS AI Labs team. She works on combining deep learning methods with classical statistical methods for forecasting. Her interests also include numerical linear algebra, numerical optimization, and numerical PDEs.

Yuyang (Bernie) Wang is a Senior Machine Learning Scientist in Amazon AI Labs, working mainly on large-scale probabilistic machine learning with applications in forecasting. His research interests span statistical machine learning, numerical linear algebra, and random matrix theory. In forecasting, Yuyang has worked on all aspects ranging from practical applications to theoretical foundations.

Valentin Flunkert is a Senior Machine Learning Scientist at AWS AI Labs. He is passionate about building machine learning systems for solving business problems. He has worked on a variety of machine learning and forecasting problems at Amazon. 

Michael Bohlke-Schneider is a Data Scientist in AWS AI Labs/Fulfillment Technology, researching and developing forecasting algorithms in SageMaker and applying forecasting to business problems.

Jasper Schulz is a Software Development Engineer on the AWS AI Labs team.
