Cost-effective document classification using the Amazon Titan Multimodal Embeddings Model
AWS » Machine Learning
by Sumit Bhati
2d ago
Organizations across industries want to categorize and extract insights from high volumes of documents of different formats. Manually processing these documents to classify and extract information remains expensive, error-prone, and difficult to scale. Advances in generative artificial intelligence (AI) have given rise to intelligent document processing (IDP) solutions that can automate document classification and create a cost-effective classification layer capable of handling diverse, unstructured enterprise documents. Categorizing documents is an important first step in IDP systems. …
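As a rough illustration of the approach, the sketch below embeds a document page image with the Titan Multimodal Embeddings model through the Amazon Bedrock Runtime API and assigns the nearest class centroid. The model ID follows Bedrock's published naming; the file paths, labels, and nearest-centroid classifier are hypothetical stand-ins for whatever classification layer the post builds.

```python
import base64
import json

import boto3
import numpy as np

# Minimal sketch: embed a scanned document page with Titan Multimodal
# Embeddings via the Bedrock Runtime API, then assign the closest class
# centroid. Centroids would be precomputed from labeled example pages.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")


def embed_page(image_path: str) -> np.ndarray:
    """Return the Titan Multimodal embedding for a document page image."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-image-v1",
        body=json.dumps({"inputImage": image_b64}),
        contentType="application/json",
        accept="application/json",
    )
    return np.array(json.loads(response["body"].read())["embedding"])


def classify(embedding: np.ndarray, centroids: dict) -> str:
    """Nearest-centroid classification by cosine similarity."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(centroids, key=lambda label: cosine(embedding, centroids[label]))


# Usage (hypothetical paths and labels):
# centroids = {"invoice": embed_page("samples/invoice.png"),
#              "id_card": embed_page("samples/id_card.png")}
# print(classify(embed_page("incoming/doc_001.png"), centroids))
```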
AWS at NVIDIA GTC 2024: Accelerate innovation with generative AI on AWS
AWS » Machine Learning
by Julie Tang
2d ago
AWS was delighted to present to and connect with over 18,000 in-person and 267,000 virtual attendees at NVIDIA GTC, a global artificial intelligence (AI) conference that took place in March 2024 in San Jose, California, returning to a hybrid, in-person experience for the first time since 2019. AWS has had a long-standing collaboration with NVIDIA for over 13 years. AWS was the first Cloud Service Provider (CSP) to offer NVIDIA GPUs in the public cloud, and remains among the first to deploy NVIDIA’s latest technologies. Looking back at AWS re:Invent 2023, Jensen Huang, founder and CEO of NVIDIA, …
Build an active learning pipeline for automatic annotation of images with AWS services
AWS » Machine Learning
by Yanxiang Yu
3d ago
This blog post is co-written with Caroline Chung from Veoneer. Veoneer is a global automotive electronics company and a world leader in automotive electronic safety systems. They offer best-in-class restraint control systems and have delivered over 1 billion electronic control units and crash sensors to car manufacturers globally. The company continues to build on a 70-year history of automotive safety development, specializing in cutting-edge hardware and systems that prevent traffic incidents and mitigate accidents. Automotive in-cabin sensing (ICS) is an emerging space that uses a combination of …
Knowledge Bases for Amazon Bedrock now supports custom prompts for the RetrieveAndGenerate API and configuration of the maximum number of retrieved results
AWS » Machine Learning
by Sandeep Singh
4d ago
With Knowledge Bases for Amazon Bedrock, you can securely connect foundation models (FMs) in Amazon Bedrock to your company data for Retrieval Augmented Generation (RAG). Access to additional data helps the model generate more relevant, context-specific, and accurate responses without retraining the FMs. In this post, we discuss two new features of Knowledge Bases for Amazon Bedrock specific to the RetrieveAndGenerate API: configuring the maximum number of results and creating custom prompts with a knowledge base prompt template. You can now choose these as query options alongside the search type …
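A minimal sketch of what a RetrieveAndGenerate call using both options might look like, assuming the bedrock-agent-runtime client from boto3; the knowledge base ID, model ARN, query, and template wording are placeholders rather than values from the post.

```python
import boto3

# Sketch: RetrieveAndGenerate with the two query options the post describes,
# a maximum number of retrieved results and a custom prompt template.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

prompt_template = (
    "You are a support assistant. Answer only from the search results.\n"
    "Search results: $search_results$\n"
    "If the answer is not in the results, say you don't know."
)

response = client.retrieve_and_generate(
    input={"text": "What is our refund policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
            # Cap the number of retrieved chunks passed to the model.
            "retrievalConfiguration": {
                "vectorSearchConfiguration": {"numberOfResults": 10}
            },
            # Override the default knowledge base prompt template.
            "generationConfiguration": {
                "promptTemplate": {"textPromptTemplate": prompt_template}
            },
        },
    },
)
print(response["output"]["text"])
```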
Knowledge Bases for Amazon Bedrock now supports metadata filtering to improve retrieval accuracy
AWS » Machine Learning
by Corvus Lee
5d ago
At AWS re:Invent 2023, we announced the general availability of Knowledge Bases for Amazon Bedrock. With Knowledge Bases for Amazon Bedrock, you can securely connect foundation models (FMs) in Amazon Bedrock to your company data using a fully managed Retrieval Augmented Generation (RAG) model. For RAG-based applications, the accuracy of the generated responses from FMs depends on the context provided to the model. Contexts are retrieved from vector stores based on user queries. With the recently released hybrid search feature for Knowledge Bases for Amazon Bedrock, you can combine semantic search …
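A hedged sketch of a Retrieve call with a metadata filter is shown below; the knowledge base ID, metadata keys, and filter values are illustrative, and the filter operators follow the shapes documented for the vector search configuration.

```python
import boto3

# Sketch: Retrieve with a metadata filter, so only chunks whose source
# documents carry matching metadata are considered during vector search.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve(
    knowledgeBaseId="KB1234567890",  # placeholder
    retrievalQuery={"text": "Q3 travel expense policy"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {
            "numberOfResults": 5,
            # Both conditions must hold for a chunk to be returned.
            "filter": {
                "andAll": [
                    {"equals": {"key": "department", "value": "finance"}},
                    {"greaterThanOrEquals": {"key": "year", "value": 2023}},
                ]
            },
        }
    },
)
for result in response["retrievalResults"]:
    print(result["score"], result["content"]["text"])
```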
Build knowledge-powered conversational applications using LlamaIndex and Llama 2-Chat
AWS » Machine Learning
by Romina Sharifpour
5d ago
Unlocking accurate and insightful answers from vast amounts of text is an exciting capability enabled by large language models (LLMs). When building LLM applications, it is often necessary to connect and query external data sources to provide relevant context to the model. One popular approach is using Retrieval Augmented Generation (RAG) to create Q&A systems that comprehend complex information and provide natural responses to queries. RAG allows models to tap into vast knowledge bases and deliver human-like dialogue for applications like chatbots and enterprise search assistants. In this post, …
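For orientation, here is a minimal RAG sketch using LlamaIndex's high-level API (import paths for llama-index 0.10+). It relies on the library's default LLM and embedding settings; the post instead wires in Llama 2-Chat and an embedding model hosted on SageMaker endpoints, which is omitted here.

```python
# Minimal LlamaIndex RAG sketch. By default this uses the library's default
# (OpenAI) models; swapping in SageMaker-hosted Llama 2-Chat and embeddings,
# as the post does, is configured through LlamaIndex settings and not shown.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()   # local folder of docs
index = VectorStoreIndex.from_documents(documents)      # chunk, embed, index
query_engine = index.as_query_engine(similarity_top_k=3)

answer = query_engine.query("What does the onboarding guide say about VPN access?")
print(answer)
```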
Use everyday language to search and retrieve data with Mixtral 8x7B on Amazon SageMaker JumpStart
AWS » Machine Learning
by Jose Navarro
5d ago
With the widespread adoption of generative artificial intelligence (AI) solutions, organizations are trying to use these technologies to make their teams more productive. One exciting use case is enabling natural language interactions with relational databases. Rather than writing complex SQL queries, you can describe in plain language what data you want to retrieve or manipulate. The large language model (LLM) can understand the intent behind your natural language input and the data topography, and automatically generate the appropriate SQL code. This allows analysts to be more productive by not having …
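A rough sketch of the pattern, assuming the SageMaker Python SDK's JumpStart interface: deploy Mixtral 8x7B Instruct to an endpoint and prompt it with a table schema plus a plain-language request. The JumpStart model ID, schema, and prompt are illustrative and should be checked against the current JumpStart catalog.

```python
# Sketch of deploying Mixtral 8x7B Instruct from SageMaker JumpStart and
# prompting it to translate a natural language request into SQL.
from sagemaker.jumpstart.model import JumpStartModel

# Model ID is an assumption; verify the exact ID in the JumpStart catalog.
model = JumpStartModel(model_id="huggingface-llm-mixtral-8x7b-instruct")
predictor = model.deploy()  # provisions a real-time endpoint (incurs cost)

schema = "CREATE TABLE orders (order_id INT, customer VARCHAR, total DECIMAL, order_date DATE);"
prompt = (
    f"<s>[INST] Given this schema:\n{schema}\n"
    "Write a SQL query that returns total revenue per customer in 2023. "
    "Return only the SQL. [/INST]"
)

response = predictor.predict(
    {"inputs": prompt, "parameters": {"max_new_tokens": 200, "temperature": 0.1}}
)
print(response)
```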
Boost inference performance for Mixtral and Llama 2 models with new Amazon SageMaker containers
AWS » Machine Learning
by Joao Moura
5d ago
In January 2024, Amazon SageMaker launched a new version (0.26.0) of Large Model Inference (LMI) Deep Learning Containers (DLCs). This version offers support for new models (including Mixture of Experts), performance and usability improvements across inference backends, and new generation details for increased control and prediction explainability (such as the reason for generation completion and token-level log probabilities). LMI DLCs offer a low-code interface that simplifies using state-of-the-art inference optimization techniques and hardware. LMI allows you to apply tensor parallelism …
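As a sketch only, the snippet below writes the kind of serving.properties configuration an LMI DLC reads at startup; the keys follow the LMI convention (engine, option.model_id, option.tensor_parallel_degree, option.rolling_batch) and the values are illustrative, not tuned recommendations from the post.

```python
# Sketch of an LMI container configuration for serving Mixtral with tensor
# parallelism and continuous batching. Values are illustrative placeholders.
from pathlib import Path

serving_properties = """\
engine=MPI
option.model_id=mistralai/Mixtral-8x7B-Instruct-v0.1
option.tensor_parallel_degree=8
option.rolling_batch=lmi-dist
option.max_rolling_batch_size=32
"""

# The file is packaged with the model artifacts (for example, in a model
# tarball uploaded to S3) and picked up by the LMI container at startup.
Path("mymodel").mkdir(parents=True, exist_ok=True)
Path("mymodel/serving.properties").write_text(serving_properties)
```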
Improving Content Moderation with Amazon Rekognition Bulk Analysis and Custom Moderation
AWS » Machine Learning
by Mehdy Haghy
1w ago
Amazon Rekognition makes it easy to add image and video analysis to your applications. It’s based on the same proven, highly scalable, deep learning technology developed by Amazon’s computer vision scientists to analyze billions of images and videos daily. It requires no machine learning (ML) expertise to use, and we’re continually adding new computer vision features to the service. Amazon Rekognition includes a simple, easy-to-use API that can quickly analyze any image or video file that’s stored in Amazon Simple Storage Service (Amazon S3). Customers across industries such as advertising and …
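For context, the snippet below runs per-image moderation with Rekognition's DetectModerationLabels on an object in Amazon S3, the same detection that bulk analysis jobs and custom moderation adapters build on; bucket and key names are placeholders.

```python
import boto3

# Sketch: moderate a single image stored in S3 and print any unsafe-content
# labels above the confidence threshold.
rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "my-media-bucket", "Name": "uploads/photo-001.jpg"}},
    MinConfidence=60,
)
for label in response["ModerationLabels"]:
    print(f"{label['Name']} ({label['ParentName']}): {label['Confidence']:.1f}%")
```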
Understanding and predicting urban heat islands at Gramener using Amazon SageMaker geospatial capabilities
AWS » Machine Learning
by Abhishek Mittal
1w ago
This is a guest post co-authored by Shravan Kumar and Avirat S from Gramener. Gramener, a Straive company, contributes to sustainable development by focusing on agriculture, forestry, water management, and renewable energy. By providing authorities with the tools and insights they need to make informed decisions about environmental and social impact, Gramener is playing a vital role in building a more sustainable future. Urban heat islands (UHIs) are areas within cities that experience significantly higher temperatures than their surrounding rural areas. UHIs are a growing concern because they …
