Deep Learning Optimization Algorithms
Neptune.ai Blog
by Alessandro Lamberti
4d ago
TL;DR Training deep learning models means solving an optimization problem: the model is incrementally adapted to minimize an objective function. The optimizers used for training deep learning models are based on gradient descent, iteratively shifting the model’s weights toward the objective function’s minimum. A range of optimization algorithms is used to train deep learning models, each aiming to address a particular shortcoming of the basic gradient descent approach. Stochastic Gradient Descent (SGD) and Mini-batch Gradient Descent speed up training and are suitable for larger datasets. …
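As a rough illustration of the mini-batch gradient descent update the post describes, here is a minimal NumPy sketch; the least-squares objective, synthetic data, learning rate, and batch size are illustrative assumptions, not taken from the article.

```python
# Minimal mini-batch SGD sketch (objective, data, and hyperparameters
# are illustrative assumptions, not from the article).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # synthetic features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=1000)

w = np.zeros(3)                                # model weights
lr, batch_size = 0.1, 32

for epoch in range(20):
    indices = rng.permutation(len(X))          # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        batch = indices[start:start + batch_size]
        Xb, yb = X[batch], y[batch]
        grad = 2 * Xb.T @ (Xb @ w - yb) / len(batch)   # gradient of the MSE on the mini-batch
        w -= lr * grad                                  # gradient-descent step

print(w)  # should approach [2.0, -1.0, 0.5]
```

Each step uses only a small batch of examples, which is what makes the approach cheap enough for large datasets.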
Product Updates September ’23: Scatter Plots, Airflow Integration, and More
Neptune.ai Blog
by Patrycja Jenkner
1w ago
Here’s a quarterly product newsletter to keep you up to date with all the changes in Neptune. Check what happened in the last 3 months. New: 1. Scatter plots: If you have two metrics or parameters that you wish to compare or see how they relate to each other throughout the runs, you can now create a scatter plot. See an example of a scatter plot in the Neptune app. 2. Distraction-free mode: You can now view run dashboards and compare views in distraction-free mode. 3. Integrations & supported tools: We have 4 new integrations: Airflow, Modelbit, …
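As a hedged sketch of how a parameter and a metric might be logged per run with the neptune client, so that they can later be compared in a scatter plot across runs; the project name and field names below are placeholders, not from the post.

```python
# Hypothetical sketch: log one parameter and one final metric per run.
# Project name and field names are placeholders.
import neptune

run = neptune.init_run(project="my-workspace/my-project")  # placeholder project
run["parameters/learning_rate"] = 0.01
run["metrics/final_accuracy"] = 0.92
run.stop()
```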
Train, Track, and Deploy Your Models: Neptune + Modelbit Integration
Neptune.ai Blog
by Patrycja Jenkner
1w ago
We are excited to announce that Neptune and Modelbit have partnered to release an integration to enable better ML model deployment and experiment tracking. Data scientists and machine learning engineers can use the integration to train and deploy machine learning models in Modelbit while logging and visualizing training progress in Neptune. If you are not already familiar, Neptune is a lightweight experiment tracker for MLOps. It offers a single place to track, compare, store, and collaborate on experiments and models. Modelbit is a machine learning platform that makes deploying …
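A hedged sketch of the workflow the post describes: train a model, log training metadata to Neptune, and deploy the inference function with Modelbit. The project name, field names, and toy model are placeholders, and the Modelbit login/deploy calls follow its documented pattern rather than anything shown in the excerpt.

```python
# Hedged sketch: log training metadata to Neptune, deploy with Modelbit.
# Project, field names, and the toy model are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
import neptune
import modelbit

run = neptune.init_run(project="my-workspace/my-project")  # placeholder project
X = np.arange(10, dtype=float).reshape(-1, 1)
y = 2.0 * X.ravel() + 1.0
model = LinearRegression().fit(X, y)
run["train/r2_score"] = model.score(X, y)                  # example logged value
run.stop()

mb = modelbit.login()                                      # documented login pattern

def predict(x: float) -> float:
    return float(model.predict([[x]])[0])

mb.deploy(predict)                                         # ships the function as an endpoint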
Product Updates March ’24: MosaicML Composer integration, Neptune Query Language, and More
Neptune.ai Blog
by Patrycja Jenkner
1w ago
I always look forward to sharing these updates with you! Hope you’ll find something here that can enhance your workflow. Here’s what we’ve released in the last quarter. New: 1. MosaicML Composer integration: New integration alert! With the Neptune-Composer integration, you can automatically log your Composer training metadata to Neptune. See the example project here. 2. Support for Seaborn figures: We’re expanding the list of supported visualizations. Now, you can log and display Seaborn figures as PNG images. 3. Documentation updates: We documented the Neptune Query Language …
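A hedged sketch of logging a Seaborn figure to Neptune as an image via the client’s File.as_image helper; the project and field names are placeholders, not from the post.

```python
# Hedged sketch: upload a Seaborn plot to Neptune as an image.
# Project and field names are placeholders.
import neptune
from neptune.types import File
import seaborn as sns

run = neptune.init_run(project="my-workspace/my-project")
ax = sns.histplot([1, 2, 2, 3, 3, 3])                      # any Seaborn plot
run["visuals/histogram"].upload(File.as_image(ax.get_figure()))
run.stop()
```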
Product Updates December ’23: MLflow Plugin, New Docs Tutorials, and More
Neptune.ai Blog
by Patrycja Jenkner
1w ago
Before you dive into 2024, have a look at what we released in Neptune in the last 3 months. New: 1. MLflow plugin: The new Neptune-MLflow integration allows you to send your metadata to Neptune while using the MLflow logging code. It should be especially handy when you already use MLflow in some of your projects but you’d like to enhance the tracking with Neptune’s functionality. 2. Documentation updates: To make your life easier, we’re constantly improving our documentation. There’s a new API index available. This page lists all functions, parameters, and constants exposed by …
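For context, this is what standard MLflow logging code looks like; per the post, the plugin forwards this metadata to Neptune. The plugin’s own setup step is described in Neptune’s documentation and is not guessed at here.

```python
# Standard MLflow logging code; per the post, the Neptune-MLflow plugin
# forwards this metadata to Neptune. Values are illustrative.
import mlflow

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    for step in range(5):
        mlflow.log_metric("loss", 1.0 / (step + 1), step=step)
```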
How to Optimize GPU Usage During Model Training With neptune.ai
Neptune.ai Blog
by Mirza Mujtaba
3w ago
TL;DR GPUs can greatly accelerate deep learning model training, as they are specialized for performing the tensor operations at the heart of neural networks. Since GPUs are expensive resources, using them to their full capacity is paramount. Metrics like GPU usage, memory utilization, and power consumption provide insight into resource utilization and potential for improvement. Strategies for improving GPU usage include mixed-precision training, optimizing data transfer and processing, and appropriately dividing workloads between CPU and GPU. GPU and CPU metrics can be monitored …
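A minimal mixed-precision training sketch with PyTorch’s AMP utilities, one of the strategies the excerpt names; the model, data, and hyperparameters are illustrative assumptions.

```python
# Minimal mixed-precision training sketch with PyTorch AMP.
# Model, data, and sizes are illustrative.
import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 128, device=device)
y = torch.randint(0, 10, (64,), device=device)

for _ in range(10):
    optimizer.zero_grad()
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = loss_fn(model(x), y)       # forward pass in reduced precision
    scaler.scale(loss).backward()         # scaled backward pass avoids gradient underflow
    scaler.step(optimizer)
    scaler.update()
```

Autocast keeps the forward pass in lower precision, which reduces memory traffic and lets the GPU’s tensor cores do more work per second.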
Zero-Shot and Few-Shot Learning with LLMs
Neptune.ai Blog
by Michał Oleszak
1M ago
TL;DR Chatbots based on LLMs can solve tasks they were not trained to solve, either out of the box (zero-shot prompting) or when prompted with a couple of input-output pairs demonstrating how to solve the task (few-shot prompting). Zero-shot prompting is well-suited for simple tasks, exploratory queries, or tasks that only require general knowledge. It doesn’t work well for complex tasks that require context or when a very specific output form is needed. Few-shot prompting is useful when we need the model to “learn” a new concept or when a precise output form is required. It’s also a …
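A small illustration of the difference between the two prompting styles, using a made-up sentiment-classification task; the message format shown is the generic chat-style structure most LLM APIs accept.

```python
# Illustrative zero-shot vs. few-shot prompts for a made-up sentiment task.
zero_shot = [
    {"role": "user", "content": "Classify the sentiment of: 'The battery died after an hour.'"},
]

few_shot = [
    # Demonstration pairs show the model the task and the expected output form.
    {"role": "user", "content": "Classify the sentiment of: 'Great screen, love it.'"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Classify the sentiment of: 'Arrived broken and late.'"},
    {"role": "assistant", "content": "negative"},
    # The actual query comes last.
    {"role": "user", "content": "Classify the sentiment of: 'The battery died after an hour.'"},
]
```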
LLMOps: What It Is, Why It Matters, and How to Implement It
Neptune.ai Blog
by Stephen Oladele
1M ago
TL;DR LLMOps involves managing the entire lifecycle of Large Language Models (LLMs), including data and prompt management, model fine-tuning and evaluation, pipeline orchestration, and LLM deployment. While there are many similarities with MLOps, LLMOps is unique because it requires specialized handling of natural-language data, prompt-response management, and complex ethical considerations. Retrieval Augmented Generation (RAG) enables LLMs to extract and synthesize information like an advanced search engine. However, transforming raw LLMs into production-ready applications presents …
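A toy sketch of the retrieval step behind RAG: score candidate documents against a query and ground the prompt in the best match. Simple word overlap stands in for real embedding similarity, and the documents are made up.

```python
# Toy RAG retrieval sketch: rank documents against the query, then build
# a prompt that grounds the LLM in the best-matching document.
docs = [
    "Neptune is a lightweight experiment tracker for MLOps.",
    "Retrieval Augmented Generation combines search with text generation.",
    "GPUs accelerate the tensor operations at the heart of neural networks.",
]

def overlap(query: str, doc: str) -> int:
    # Word overlap stands in for vector similarity from an embedding model.
    return len(set(query.lower().split()) & set(doc.lower().split()))

query = "What is retrieval augmented generation?"
best_doc = max(docs, key=lambda d: overlap(query, d))

prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {query}"
print(prompt)
```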
The Real Cost of Self-Hosting MLflow
Neptune.ai Blog
by Aurimas Griciunas
1M ago
TL;DR MLflow is a popular experiment-tracking and end-to-end ML platform. Since MLflow is open source, it’s free to download, and hosting an instance does not incur license fees. Hosting MLflow requires multiple infrastructure components and comes with maintenance responsibilities, the cost of which can be difficult to estimate. On AWS, which offers various options for hosting MLflow, a medium-sized instance comes in at about $200 per month, plus storage and data transfer costs. MLflow is well-regarded as an experiment-tracking platform. Since it’s open source, you can download it …
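A back-of-the-envelope sketch of the kind of estimate the post refers to; the roughly $200/month compute figure is the one quoted in the excerpt, while the storage and data-transfer rates and volumes below are rough assumptions for illustration only.

```python
# Back-of-the-envelope monthly cost sketch for self-hosting MLflow on AWS.
# The ~$200/month compute figure comes from the post; storage and transfer
# rates and volumes are rough assumptions for illustration only.
compute_per_month = 200.0                  # medium-sized instance (per the post)
storage_gb, storage_rate = 500, 0.023      # assumed object-storage $/GB-month
transfer_gb, transfer_rate = 100, 0.09     # assumed data-egress $/GB

total = compute_per_month + storage_gb * storage_rate + transfer_gb * transfer_rate
print(f"Estimated monthly cost: ${total:.2f}")
```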
Deep Learning Model Optimization Methods
Neptune.ai Blog
by Alessandro Lamberti
1M ago
TL;DR Deep learning models exhibit excellent performance but require high computational resources. Optimization techniques like pruning, quantization, and knowledge distillation are vital for improving computational efficiency: Pruning reduces model size by removing less important neurons, involving identification, elimination, and optional fine-tuning. Quantization decreases memory usage and computation time by using lower numeric precision for model weights. Knowledge distillation transfers insights from a complex “teacher” model to a simpler “student” model, maintaining performance …
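A minimal sketch of one of the three techniques, pruning, using PyTorch’s built-in utilities; the model architecture and pruning amount are illustrative assumptions.

```python
# Minimal pruning sketch with PyTorch's built-in utilities.
# Model and pruning amount are illustrative.
import torch
from torch import nn
from torch.nn.utils import prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Zero out the 30% of weights with the smallest L1 magnitude in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")     # make the pruning permanent

sparsity = (model[0].weight == 0).float().mean().item()
print(f"Sparsity of first layer: {sparsity:.0%}")
```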
