Hyperfast Contextual Custom LLM with Agents, Multitokens, Explainable AI, and Distillation
Machine Learning Techniques
by Vincent Granville
1w ago
I discuss version 2.0 of my enterprise multi-LLM system, xLLM. Version 1.0 was presented in my recent article “Custom Enterprise LLM/RAG with Real-Time Fine-Tuning”, posted here. Since version 2.0 is backward-compatible and consists of several important additions, I have included all the relevant material from the previous article in this paper. New additions include […] The post “Hyperfast Contextual Custom LLM with Agents, Multitokens, Explainable AI, and Distillation” first appeared on Machine Learning Techniques.
Fast Random Generators with Infinite Period for Large-Scale Reproducible AI and Cryptography
Machine Learning Techniques
by Vincent Granville
1M ago
Modern GenAI apps rely on billions, if not trillions, of pseudo-random numbers. You find them in the construction of latent variables in nearly all deep neural networks and almost all applications: computer vision, synthetization, and LLMs. Yet few AI systems offer reproducibility, though those described in my recent book do. When producing so many random […]
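The reproducibility point can be illustrated with a minimal sketch: seeding a dedicated generator instance makes an entire stream repeatable, independent of other code drawing numbers. This uses Python's standard Mersenne Twister (which has a finite period), not the infinite-period generators described in the book; the helper name is mine, for illustration only.

```python
import random

def reproducible_stream(seed: int, n: int) -> list[float]:
    """Return n pseudo-random floats from an explicitly seeded generator.

    Using an isolated random.Random instance (rather than the shared
    global state) is what makes large-scale runs reproducible: the same
    seed always yields the same stream.
    """
    rng = random.Random(seed)  # dedicated generator, fixed seed
    return [rng.random() for _ in range(n)]

# Same seed -> identical stream; different seed -> different stream.
a = reproducible_stream(42, 5)
b = reproducible_stream(42, 5)
c = reproducible_stream(7, 5)
```

The same pattern applies to any PRNG-backed pipeline: pass generator objects explicitly instead of relying on hidden global state.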
Custom Enterprise LLM/RAG with Real-Time Fine-Tuning
Machine Learning Techniques
by Vincent Granville
1M ago
This article features an application of xLLM to extract information from a corporate corpus, using prompts referred to as “queries”. The goal is to serve the business user (typically an employee of the company, or someone allowed access) with condensed, relevant pieces of information including links, examples, PDFs, tables, charts, definitions, and so […]
Podcast: Creating Custom LLMs
Machine Learning Techniques
by Vincent Granville
2M ago
Despite GPT, Claude, Gemini, Llama, and the host of other LLMs we have access to, a variety of organizations are still exploring their options when it comes to custom LLMs. Logging in to ChatGPT is easy enough, and so is creating a “custom” OpenAI GPT, but what does it take to create a truly custom LLM? When and why might this be useful, and will it be worth the effort? Vincent Granville is a pioneer in the AI and machine learning space: co-founder of Data Science Central, founder of MLTechniques.com, former VC-funded executive, author, and patent owner. Vincent’s corporate e […]
Synthesizing Multi-Table Databases: Model Evaluation & Vendor Comparison
Machine Learning Techniques
by Vincent Granville
3M ago
Synthesizing multi-table tabular data presents its own challenges compared to the single-table case. When the database contains date columns such as transaction or admission dates, a frequent occurrence in real-world datasets, generating high-quality synthetizations and evaluating models are even more complicated. In this article, we focus on this type of problem, comparing generated observations produced by three vendors and an open-source alternative. We look at preservation of data integrity across the multiple tables, run time, and correct replication of the joint multivariate distribution present in the real data, in par […]
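One concrete example of a cross-table integrity check is verifying that every foreign-key value in a synthetic child table resolves to a row in the synthetic parent table. The sketch below is a hypothetical illustration of such a check; the table names and the `foreign_key_coverage` helper are mine, not from the article or any vendor's API.

```python
def foreign_key_coverage(child_keys, parent_keys):
    """Fraction of child foreign-key values that resolve in the parent table.

    A score of 1.0 means referential integrity is fully preserved in the
    synthetic data; lower scores flag orphaned child rows.
    """
    parents = set(parent_keys)
    if not child_keys:
        return 1.0
    hits = sum(1 for k in child_keys if k in parents)
    return hits / len(child_keys)

# Hypothetical synthetic tables: patients (parent) and admissions (child).
synthetic_patients = ["p1", "p2", "p3"]
synthetic_admissions = ["p1", "p1", "p3", "p9"]  # "p9" is an orphan
score = foreign_key_coverage(synthetic_admissions, synthetic_patients)
# score == 0.75 (3 of 4 admissions resolve to a patient)
```

In a full evaluation, checks like this sit alongside distributional comparisons between real and synthetic joins.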
New Trends in LLM: Overview with Focus on xLLM
Machine Learning Techniques
by Vincent Granville
3M ago
If you ever wondered how xLLM differs from other LLM and RAG architectures, which foundational changes make it appealing to Fortune 100 companies, and which of its innovations are being copied by competitors, read on. In this article, I share the latest trends and provide a high-level summary of xLLM, describing the ground-breaking technologies that make it unique, faster, and better for professional users and experts. In particular, I share my PowerPoint presentation on the topic. Search is becoming hot again, this time powered by RAG and LLMs rather than PageRank. New LLMs […]
New Book: State of the Art in GenAI & LLMs — Creative Projects, with Solutions
Machine Learning Techniques
by Vincent Granville
4M ago
With 23 top projects, 96 subprojects, and 6,000 lines of Python code, this vendor-neutral coursebook is a goldmine for any analytics professional or AI/ML engineer interested in developing superior GenAI or LLM enterprise apps using ground-breaking technology. This is not another book discussing the same topics you learn in bootcamps, college classes, Coursera, or at work. Instead, the focus is on implementing solutions that address and fix the main problems encountered in current applications. Using foundational redesign rather than patches such as prompt engineering to fix backend design […]
GenAI Evaluation Metrics: Your Best Loss Functions to Boost Quality
Machine Learning Techniques
by Vincent Granville
4M ago
Whether dealing with LLMs, computer vision, clustering, predictive analytics, synthetization, or any other AI problem, the goal is to deliver high-quality results in as little time as possible. Typically, you assess the output quality after producing the results, using model evaluation metrics. These metrics are also used to compare various models, or to measure improvement over a baseline. In unsupervised learning such as LLMs or clustering, evaluation is not trivial. But in many cases, the task is straightforward. Yet you need to choose the best possible metric for quality assessment […]
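As a minimal illustration of measuring improvement over a baseline, the sketch below uses RMSE (one common regression metric among many; the article does not prescribe a specific one) and computes the relative error reduction. The numbers and helper names are invented for the example.

```python
def rmse(y_true, y_pred):
    """Root mean squared error: a standard regression evaluation metric."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def improvement_over_baseline(metric, y_true, y_model, y_baseline):
    """Relative reduction in error versus a baseline (higher is better)."""
    base = metric(y_true, y_baseline)
    return (base - metric(y_true, y_model)) / base

y_true = [1.0, 2.0, 3.0, 4.0]
y_baseline = [2.5, 2.5, 2.5, 2.5]   # naive baseline: always predict the mean
y_model = [1.1, 2.1, 2.9, 3.8]      # candidate model's predictions
lift = improvement_over_baseline(rmse, y_true, y_model, y_baseline)
```

Swapping in a different `metric` function is all it takes to compare models under another quality criterion.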
Breakthrough: Zero-Weight LLM for Accurate Predictions and High-Performance Clustering
Machine Learning Techniques
by Vincent Granville
4M ago
While most AI companies keep building LLMs with ever more weights and tokens (one trillion is now a standard number), I went in the opposite direction. Of course, zero weights means there is no neural network behind the scenes. More specifically, it means there is no lengthy black-box process to find the “best” weights optimizing a loss function. In reality, weights are still present, very much as in a neural network, but they are explicitly specified. Indeed, I use parametric weights, governed by a few explainable parameters. The optimization focuses on these few parameters, and re […]
Build and Evaluate High Performance Taxonomy-Based LLMs From Scratch
Machine Learning Techniques
by Vincent Granville
5M ago
One obvious way to dramatically improve the quality of LLM and RAG systems is to use high-quality input sources, as opposed to raw text from crawled or parsed content. Combine this with specialization: one LLM per top domain, allowing the user to customize parameters and specify the domain in addition to standard concise prompts. You then end up with very fast, lightweight, self-tuned, hallucination-free implementations, suitable for enterprise needs and inexpensive (far fewer tokens, no GPU, no neural networks, no training). Also, you can deploy these multi-LLMs locally, even on a mode […]
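A minimal sketch of the one-LLM-per-domain idea: dispatch each query to a domain by keyword overlap, with no neural network involved, so routing stays fast and explainable. The taxonomy, keyword sets, and `route_query` helper below are hypothetical illustrations, not the xLLM implementation.

```python
def route_query(query: str, domain_index: dict[str, set[str]]) -> str:
    """Pick the domain whose keyword set best overlaps the query tokens.

    Each top-level domain would back its own lightweight sub-LLM; routing
    on simple set overlap keeps dispatch cheap and fully auditable.
    """
    tokens = set(query.lower().split())
    return max(domain_index, key=lambda d: len(tokens & domain_index[d]))

# Hypothetical two-domain taxonomy with illustrative keyword sets.
index = {
    "statistics": {"distribution", "variance", "sampling"},
    "nlp": {"token", "embedding", "corpus"},
}
domain = route_query("how to build an embedding from a corpus", index)
# domain == "nlp"
```

A production router would use the taxonomy's full keyword tables rather than hand-picked sets, but the dispatch logic stays this simple.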