DREAM: Distributed RAG Experimentation Framework
MLOps Community Blog
by Aishwarya Prabhat
6M ago
A blueprint for distributed RAG experimentation using Ray, LlamaIndex, Ragas, MLFlow & MinIO on Kubernetes Image created using DALL.E 2Contents 1. What is DREAM? a. What is it, really? b. Architecture c. Take me to the code! 2. Code Walkthrough a. Preparing Unstructured Data b. Distributed Generation of Golden Dataset c. Distributed Experimentation & Evaluation d. Experiment Tracking 3. Conclusion a. In a nutshell b. What’s next? 1. What is DREAM? a. What is it, really? Given the myriad of options for LLMs, embedding models, retrieval methods, re-ranking methods and ..read more
Visit website
Why Do We Need A Purpose-Built Database For Multimodal Data?
MLOps Community Blog
by Vishakha Gupta
6M ago
Recently, data engineering and management have grown difficult for companies building modern applications. There is one leading reason—lack of multimodal data support. Today, application data—especially for AI-driven applications—includes text data, image data, audio data, video data, and sometimes complex hierarchical data. While each of these data types can be efficiently processed individually, together they create an architectural cobweb that any spider would be ashamed of: This mess is the consequence of a few problems with multimodal data. The first is the lack of a unified data store ..read more
Visit website
Which Artificial Intelligence Conferences are Popular in 2024?
MLOps Community Blog
by Demetrios Brinkmann
7M ago
AI is here to stay! Here’s how you pick the conference of your choice. Having too many options complicates things, right? Should you take the tried-and-true route of going to where the big names are?  Or is a more niche approach your cup of tea? We researched dozens of Artificial intelligence events happening across the US this year and found six picks to help ML/AI Engineers find the best options. We’ll reveal who they are for, what they offer and how they stack up against each other. (Note, we are purposefully excluding the gigantic conferences such as AWS Re:invent, or Google Cloud Ne ..read more
Visit website
Budget Instruction Fine-tuning of Llama 3 8B Instruct(on Medical Data) with Hugging Face, Google Colab and Unsloth
MLOps Community Blog
by Bojan Jakimovski
7M ago
Many contemporary LLMs are showing impressive overall performance but often stumble when confronted with specific task-oriented challenges. Fine-tuning provides significant advantages, such as reduced computational costs and the opportunity to harness cutting-edge models without starting from scratch. Fine-tuning is a process of taking a pre-trained model and further training it on a domain-specific dataset. This process enhances the model’s performance for specific tasks, rendering it more adept and adaptable in real-world scenarios. It is an indispensable step for customizing existing model ..read more
Visit website
How Tecton Helps ML Teams Build Smarter Models, Faster
MLOps Community Blog
by Julia Brouillette
7M ago
In the race to infuse intelligence into every product and application, the speed at which machine learning (ML) teams can innovate is not just a metric of efficiency. It’s what sets industry leaders apart, empowering them to constantly improve and deliver models that provide timely, accurate predictions that adapt to evolving data and user needs. Moving quickly in ML isn’t easy, though. Model development is a fundamentally iterative process that involves engineering and experimenting with new data inputs (features) for models to use. The quality and relevance of features have a direct impact ..read more
Visit website
7 Methods to Secure LLM Apps from Prompt Injections and Jailbreaks
MLOps Community Blog
by Sahar Mor
8M ago
Practical strategies to protect language models apps (or at least doing your best) I started my career in the cybersecurity space. Dancing the endless dance of deploying defense mechanisms only to be hijacked by a more brilliant attacker a few months later. Hacking language models and language-powered applications are no different. As more high-stake applications move to use LLMs, there are more incentives for folks to cultivate new attack vectors. Every developer who has launched an app using language models faced this concern – preventing users from jailbreaking it to obey their will, may it ..read more
Visit website
Basics of Instruction Tuning with OLMo 1B
MLOps Community Blog
by Daniel Liden
8M ago
Large Language Models (LLMs) are trained on vast corpora of text, giving them impressive language comprehension and generation capabilities. However, this training does not inherently provide them with the ability to directly answer questions or follow instructions. To achieve this, we need to fine-tune these models for the specific task of instruction following. This article explores the basics of instruction tuning using AI2’s OLMo-1B model as an example. It will provide a before and after comparison, showing that the pre-trained model (before fine-tuning) is unable to follow instructions ..read more
Visit website
Make your MLOps code base SOLID with Pydantic and Python’s ABC
MLOps Community Blog
by Médéric Hurier (Fmind)
8M ago
MLOps projects are straightforward to initiate, but challenging to perfect. While AI/ML projects often start with a notebook for prototyping, deploying them directly in production is often considered poor practice by the MLOps community. Transitioning to a dedicated Python code base is essential for industrializing the project, yet this move presents several challenges: 1) How can we maintain a code base that is robust yet flexible for agile development? 2) Is it feasible to implement proven design patterns while keeping the code base accessible to all developers? 3) How can we leverage P ..read more
Visit website
PIXART-α: A Diffusion Transformer Model for Text-to-Image Generation
MLOps Community Blog
by Soumik Rakshit
8M ago
This article provides a short tutorial on how to run experiments with Pixart-α — the new transformer-based Diffusion model for generating photorealistic images from text. The popularity of text-conditional image generation models like DALL·E 3, Midjourney, and Stable Diffusion can largely be attributed to their ease of use for producing stunning images by simply using meaningful text-based prompts. However, such models require significant training costs (e.g., millions of GPU hours) which seriously hinders the course of fundamental innovation in the field of AI-generated content while increasi ..read more
Visit website
Audio Generation with Mamba using Determined AI
MLOps Community Blog
by Isha Ghodgaonkar
9M ago
Training the new Mamba architecture on speech + music data! As you might have noticed from my past blogs, most of my experience is in computer vision. But, recently, for obvious reasons (read: ChatGPT, LLaMas, Alpacas, etc…), I realized it’s about time I learn a thing or two about how transformers work. About 2 seconds deep into transformers literature, boom! Another foundational architecture was released (who’s surprised though?). It’s called Mamba, and I decided I wanted to learn about it through practice. In this blog, we’ll go through what Mamba is, what makes it different from transforme ..read more
Visit website

Follow MLOps Community Blog on FeedSpot

Continue with Google
Continue with Apple
OR