Generate customized, compliant application IaC scripts for AWS Landing Zone using Amazon Bedrock
Amazon Web Services AI Blog
by Ebbey Thomas
16h ago
Migrating to the cloud is an essential step for modern organizations aiming to capitalize on the flexibility and scale of cloud resources. Tools like Terraform and AWS CloudFormation are pivotal for such transitions, offering infrastructure as code (IaC) capabilities that define and manage complex cloud environments with precision. However, despite these benefits, IaC's learning curve and the complexity of adhering to your organization's and industry-specific compliance and security standards can slow down your cloud adoption journey. Organizations typically counter these hurdles by investing…
Live Meeting Assistant with Amazon Transcribe, Amazon Bedrock, and Knowledge Bases for Amazon Bedrock
by Bob Strahan
16h ago
See CHANGELOG for latest features and fixes. You’ve likely experienced the challenge of taking notes during a meeting while trying to pay attention to the conversation. You’ve probably also experienced the need to quickly fact-check something that’s been said, or to look up information to answer a question that’s just been asked in the call. Or maybe you have a team member who always joins meetings late and expects you to send them a quick summary over chat to catch them up. Then there are the times when others are talking in a language that’s not your first language, and you’d love to have a l…
Meta Llama 3 models are now available in Amazon SageMaker JumpStart
by Kyle Ulrich
16h ago
Today, we are excited to announce that Meta Llama 3 foundation models are available through Amazon SageMaker JumpStart to deploy and run inference. The Llama 3 models are a collection of pre-trained and fine-tuned generative text models. In this post, we walk through how to discover and deploy Llama 3 models via SageMaker JumpStart. What is Meta Llama 3? Llama 3 comes in two parameter sizes, 8B and 70B, each with an 8K context length, and can support a broad range of use cases with improvements in reasoning, code generation, and instruction following. Llama 3 uses a decoder-only transformer architecture…
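The deploy-and-invoke flow the post walks through can be sketched roughly as follows. This is a hedged illustration, not the post's exact code: the model ID `meta-textgeneration-llama-3-8b` and the payload shape are assumptions based on the usual SageMaker JumpStart text-generation convention, and should be checked against the JumpStart model catalog.

```python
# Sketch of a JumpStart Llama 3 deployment. Model ID, payload shape,
# and deploy arguments are assumptions; verify against the JumpStart
# catalog before use.

def build_payload(prompt, max_new_tokens=256, temperature=0.6, top_p=0.9):
    """Assemble a JumpStart-style text-generation request body."""
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
            "top_p": top_p,
        },
    }

def deploy_and_invoke(prompt):
    """Deploy Llama 3 8B from JumpStart and run one inference.
    Requires AWS credentials and a SageMaker execution role, so it is
    defined here but not called."""
    from sagemaker.jumpstart.model import JumpStartModel

    model = JumpStartModel(model_id="meta-textgeneration-llama-3-8b")  # assumed ID
    predictor = model.deploy(accept_eula=True)  # Llama models require EULA acceptance
    try:
        return predictor.predict(build_payload(prompt))
    finally:
        predictor.delete_endpoint()  # avoid idle endpoint charges
```

Deleting the endpoint in a `finally` block keeps an experimental deployment from accruing charges if inference fails.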
Slack delivers native and secure generative AI powered by Amazon SageMaker JumpStart
by Jackie Rocca
1d ago
This post is co-authored by Jackie Rocca, VP of Product, AI at Slack. Slack is where work happens. It’s the AI-powered platform for work that connects people, conversations, apps, and systems together in one place. With the newly launched Slack AI, a trusted, native, generative artificial intelligence (AI) experience available directly in Slack, users can surface and prioritize information so they can find their focus and do their most productive work. We are excited to announce that Slack, a Salesforce company, has collaborated with AWS, using Amazon SageMaker JumpStart to power Slack AI’s initial search a…
Explore data with ease: Use SQL and Text-to-SQL in Amazon SageMaker Studio JupyterLab notebooks
by Pranav Murthy
3d ago
Amazon SageMaker Studio provides a fully managed solution for data scientists to interactively build, train, and deploy machine learning (ML) models. In the process of working on their ML tasks, data scientists typically start their workflow by discovering relevant data sources and connecting to them. They then use SQL to explore, analyze, visualize, and integrate data from various sources before using it in their ML training and inference. Previously, data scientists often found themselves juggling multiple tools to support SQL in their workflow, which hindered productivity. We’re excited to…
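The SQL-then-ML handoff described above can be illustrated generically. The sketch below uses stdlib `sqlite3` with made-up data purely for illustration; Studio's built-in SQL support targets managed data sources (e.g., data warehouses reached through configured connections), not an in-memory database.

```python
import sqlite3

# Illustrative only: explore and aggregate with SQL inside a notebook,
# then hand the result off to ML code. Table and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clicks (user_id INTEGER, page TEXT, dwell_ms INTEGER)")
conn.executemany(
    "INSERT INTO clicks VALUES (?, ?, ?)",
    [(1, "home", 1200), (1, "pricing", 5400), (2, "home", 800)],
)

# SQL does the exploration/aggregation step...
rows = conn.execute(
    "SELECT user_id, AVG(dwell_ms) AS avg_dwell "
    "FROM clicks GROUP BY user_id ORDER BY user_id"
).fetchall()

# ...and the result becomes a plain Python structure for feature engineering.
features = {user_id: avg_dwell for user_id, avg_dwell in rows}
```

Keeping aggregation in SQL and only materializing the summarized result in Python is the productivity point the post makes: one notebook, no tool-switching.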
Distributed training and efficient scaling with the Amazon SageMaker Model Parallel and Data Parallel Libraries
by Xinle Sheila Liu
3d ago
There has been tremendous progress in the field of distributed deep learning for large language models (LLMs), especially after the release of ChatGPT in November 2022. LLMs continue to grow in size, with billions or even trillions of parameters, and they often won’t fit into a single accelerator device such as a GPU, or even a single node such as an ml.p5.48xlarge instance, because of memory limitations. Customers training LLMs often must distribute their workload across hundreds or even thousands of GPUs. Enabling training at such scale remains a challenge in distributed training, and training efficiently in…
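A back-of-envelope calculation shows why such models exceed a single device. The figure below assumes the common rule of thumb of roughly 16 bytes per parameter for mixed-precision Adam training (fp16 weights and gradients plus fp32 master weights and optimizer moments), and ignores activation memory, so it is a lower bound rather than an exact number.

```python
import math

# Rule-of-thumb memory per parameter for mixed-precision Adam:
# ~2 B fp16 weights + ~2 B fp16 grads + ~12 B fp32 master/optimizer state.
# Activations are excluded, so real footprints are larger.
BYTES_PER_PARAM = 16

def min_gpus(params_billion, gpu_mem_gb=80):
    """Minimum GPU count (by memory alone) to hold training state."""
    total_gb = params_billion * 1e9 * BYTES_PER_PARAM / 1024**3
    return math.ceil(total_gb / gpu_mem_gb), total_gb

gpus_70b, gb_70b = min_gpus(70)  # ~1,043 GB of training state
gpus_8b, _ = min_gpus(8)         # comfortably beyond one 80 GB GPU
```

Even before counting activations, a 70B-parameter model needs on the order of a terabyte of training state, which is why sharding it across many accelerators with model- and data-parallel libraries is unavoidable.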
Manage your Amazon Lex bot via AWS CloudFormation templates
by Thomas Rindfuss
3d ago
Amazon Lex is a fully managed artificial intelligence (AI) service with advanced natural language models to design, build, test, and deploy conversational interfaces in applications. It employs advanced deep learning technologies to understand user input, enabling developers to create chatbots, virtual assistants, and other applications that can interact with users in natural language. Managing your Amazon Lex bots using AWS CloudFormation allows you to create templates defining the bot and all the AWS resources it depends on. AWS CloudFormation provisions and configures those resources on your…
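A template of the kind described might begin with a fragment like the one below. This is an illustrative sketch, not a complete or verified template: the property names follow the `AWS::Lex::Bot` resource schema as I understand it, and `LexRuntimeRole` is a hypothetical IAM role assumed to be defined elsewhere in the same template.

```yaml
# Illustrative fragment only; a real template also needs the IAM role,
# intents, and slot types the bot depends on.
Resources:
  OrderFlowersBot:                        # hypothetical bot name
    Type: AWS::Lex::Bot
    Properties:
      Name: OrderFlowersBot
      RoleArn: !GetAtt LexRuntimeRole.Arn # role defined elsewhere in the template
      DataPrivacy:
        ChildDirected: false
      IdleSessionTTLInSeconds: 300
      BotLocales:
        - LocaleId: en_US
          NluConfidenceThreshold: 0.4
```

Declaring the bot alongside its role and other dependencies in one template is what lets CloudFormation create, update, or roll back the whole conversational stack as a unit.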
A secure approach to generative AI with AWS
by Anthony Liguori
3d ago
Generative artificial intelligence (AI) is transforming the customer experience in industries across the globe. Customers are building generative AI applications using large language models (LLMs) and other foundation models (FMs), which enhance customer experiences, transform operations, improve employee productivity, and create new revenue channels. FMs and the applications built around them represent extremely valuable investments for our customers. They’re often used with highly sensitive business data, like personal data, compliance data, operational data, and financial information, to o…
Cost-effective document classification using the Amazon Titan Multimodal Embeddings Model
by Sumit Bhati
1w ago
Organizations across industries want to categorize and extract insights from high volumes of documents of different formats. Manually processing these documents to classify and extract information remains expensive, error prone, and difficult to scale. Advances in generative artificial intelligence (AI) have given rise to intelligent document processing (IDP) solutions that can automate document classification and create a cost-effective classification layer capable of handling diverse, unstructured enterprise documents. Categorizing documents is an important first step in IDP systems. It…
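One common embedding-based classification scheme assigns a document to the class whose centroid embedding is most similar by cosine similarity. The sketch below uses tiny made-up vectors in place of real Amazon Titan Multimodal Embeddings output (which would come from a Bedrock API call), so it only illustrates the similarity step, not the embedding step.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def classify(doc_vec, centroids):
    """Return the class label whose centroid is nearest by cosine similarity."""
    return max(centroids, key=lambda label: cosine(doc_vec, centroids[label]))

# Toy 3-dimensional stand-ins; real Titan embeddings are much higher-dimensional.
centroids = {"invoice": [0.9, 0.1, 0.0], "contract": [0.1, 0.9, 0.2]}
label = classify([0.8, 0.2, 0.1], centroids)
```

Because the per-class centroids are computed once from a few labeled examples, classification at inference time is a handful of dot products per document, which is where the cost-effectiveness claim comes from.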
AWS at NVIDIA GTC 2024: Accelerate innovation with generative AI on AWS
by Julie Tang
1w ago
AWS was delighted to present to and connect with over 18,000 in-person and 267,000 virtual attendees at NVIDIA GTC, a global artificial intelligence (AI) conference that took place in March 2024 in San Jose, California, returning to a hybrid, in-person experience for the first time since 2019. AWS has had a long-standing collaboration with NVIDIA for over 13 years. AWS was the first Cloud Service Provider (CSP) to offer NVIDIA GPUs in the public cloud, and remains among the first to deploy NVIDIA’s latest technologies. Looking back at AWS re:Invent 2023, Jensen Huang, founder and CEO of NVIDIA, c…