Which Artificial Intelligence Conferences are Popular in 2024?
MLOps Community Blog
by Demetrios Brinkmann
1w ago
AI is here to stay! Here’s how you pick the conference of your choice. Having too many options complicates things, right? Should you take the tried-and-true route of going to where the big names are?  Or is a more niche approach your cup of tea? We researched dozens of Artificial intelligence events happening across the US this year and found six picks to help ML/AI Engineers find the best options. We’ll reveal who they are for, what they offer and how they stack up against each other. (Note, we are purposefully excluding the gigantic conferences such as AWS Re:invent, or Google Cloud Ne ..read more
Visit website
Budget Instruction Fine-tuning of Llama 3 8B Instruct(on Medical Data) with Hugging Face, Google Colab and Unsloth
MLOps Community Blog
by Bojan Jakimovski
1w ago
Many contemporary LLMs are showing impressive overall performance but often stumble when confronted with specific task-oriented challenges. Fine-tuning provides significant advantages, such as reduced computational costs and the opportunity to harness cutting-edge models without starting from scratch. Fine-tuning is a process of taking a pre-trained model and further training it on a domain-specific dataset. This process enhances the model’s performance for specific tasks, rendering it more adept and adaptable in real-world scenarios. It is an indispensable step for customizing existing model ..read more
Visit website
How Tecton Helps ML Teams Build Smarter Models, Faster
MLOps Community Blog
by Julia Brouillette
3w ago
In the race to infuse intelligence into every product and application, the speed at which machine learning (ML) teams can innovate is not just a metric of efficiency. It’s what sets industry leaders apart, empowering them to constantly improve and deliver models that provide timely, accurate predictions that adapt to evolving data and user needs. Moving quickly in ML isn’t easy, though. Model development is a fundamentally iterative process that involves engineering and experimenting with new data inputs (features) for models to use. The quality and relevance of features have a direct impact ..read more
Visit website
7 Methods to Secure LLM Apps from Prompt Injections and Jailbreaks
MLOps Community Blog
by Sahar Mor
1M ago
Practical strategies to protect language models apps (or at least doing your best) I started my career in the cybersecurity space. Dancing the endless dance of deploying defense mechanisms only to be hijacked by a more brilliant attacker a few months later. Hacking language models and language-powered applications are no different. As more high-stake applications move to use LLMs, there are more incentives for folks to cultivate new attack vectors. Every developer who has launched an app using language models faced this concern – preventing users from jailbreaking it to obey their will, may it ..read more
Visit website
Basics of Instruction Tuning with OLMo 1B
MLOps Community Blog
by Daniel Liden
1M ago
Large Language Models (LLMs) are trained on vast corpora of text, giving them impressive language comprehension and generation capabilities. However, this training does not inherently provide them with the ability to directly answer questions or follow instructions. To achieve this, we need to fine-tune these models for the specific task of instruction following. This article explores the basics of instruction tuning using AI2’s OLMo-1B model as an example. It will provide a before and after comparison, showing that the pre-trained model (before fine-tuning) is unable to follow instructions ..read more
Visit website
Make your MLOps code base SOLID with Pydantic and Python’s ABC
MLOps Community Blog
by Médéric Hurier (Fmind)
1M ago
MLOps projects are straightforward to initiate, but challenging to perfect. While AI/ML projects often start with a notebook for prototyping, deploying them directly in production is often considered poor practice by the MLOps community. Transitioning to a dedicated Python code base is essential for industrializing the project, yet this move presents several challenges: 1) How can we maintain a code base that is robust yet flexible for agile development? 2) Is it feasible to implement proven design patterns while keeping the code base accessible to all developers? 3) How can we leverage P ..read more
Visit website
PIXART-α: A Diffusion Transformer Model for Text-to-Image Generation
MLOps Community Blog
by Soumik Rakshit
2M ago
This article provides a short tutorial on how to run experiments with Pixart-α — the new transformer-based Diffusion model for generating photorealistic images from text. The popularity of text-conditional image generation models like DALL·E 3, Midjourney, and Stable Diffusion can largely be attributed to their ease of use for producing stunning images by simply using meaningful text-based prompts. However, such models require significant training costs (e.g., millions of GPU hours) which seriously hinders the course of fundamental innovation in the field of AI-generated content while increasi ..read more
Visit website
Audio Generation with Mamba using Determined AI
MLOps Community Blog
by Isha Ghodgaonkar
2M ago
Training the new Mamba architecture on speech + music data! As you might have noticed from my past blogs, most of my experience is in computer vision. But, recently, for obvious reasons (read: ChatGPT, LLaMas, Alpacas, etc…), I realized it’s about time I learn a thing or two about how transformers work. About 2 seconds deep into transformers literature, boom! Another foundational architecture was released (who’s surprised though?). It’s called Mamba, and I decided I wanted to learn about it through practice. In this blog, we’ll go through what Mamba is, what makes it different from transforme ..read more
Visit website
The Role of AI Safety Standards in Modern MLOps
MLOps Community Blog
by Ritee Rouf
2M ago
With the recent explosive growth of AI, particularly in Generative AI, the importance of safety and reliability has surged as a paramount concern for businesses, consumers, and regulatory bodies.  Recent safety standards and regulations as outlined in the EU AI Act and Biden’s executive order underscore the imperative to ensuring safe and trustworthy AI. Furthermore, the existence of over 20 ISO standards dedicated to AI safety presents a formidable challenge in integrating them effectively into operational frameworks. Beyond considerations of safety and adherence to regulations, the issu ..read more
Visit website
How to Adapt your LLM for Question Answering with Prompt-Tuning using NVIDIA NeMo and Weights & Biases
MLOps Community Blog
by Anish Shah
2M ago
A tutorial on prompt-tuning and p-tuning using NeMo alongside W&B, complete with an experiment and executable code. Why Prompt-Tune Instead of Fine-Tune? Let’s start with a thought experiment: Imagine you’re the owner of a vast library that contains millions of books. Over the years, you’ve meticulously organized this library, placing each book on its designated shelf, in its specific corner. This library is akin to a pre-trained language model like GPT, and the books represent the knowledge and intricacies learned during its training. Now, let’s say you have a regular visitor, Alice, who ..read more
Visit website

Follow MLOps Community Blog on FeedSpot

Continue with Google
Continue with Apple
OR