Andrejus Baranovskis's Blog
A blog about ADF, JDeveloper, JET, Cloud, NetBeans, BPM, Mobile, MAF, SOA, WebCenter.
4d ago
In this tutorial, I walk through the code and demonstrate how to implement a RAG pipeline with Unstructured, LangChain, and Pydantic to process invoice data and extract structured JSON.
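The pipeline above can be sketched as three stages: partition the PDF into clean text, prompt the LLM for JSON, and check the structure of the reply. This is a minimal illustration only; the Unstructured partitioning and the LangChain LLM call are stubbed out, and the invoice fields are hypothetical, not the tutorial's actual schema.

```python
import json

def partition_invoice(path: str) -> str:
    """Stub for unstructured's PDF partitioning: returns cleaned text."""
    return "Invoice No: INV-42\nTotal: 118.00 EUR"

def call_llm(prompt: str) -> str:
    """Stub for the LangChain LLM call: returns a JSON string."""
    return '{"invoice_number": "INV-42", "total": 118.0}'

def extract(path: str) -> dict:
    text = partition_invoice(path)
    prompt = f"Extract invoice_number and total as JSON:\n{text}"
    raw = call_llm(prompt)
    data = json.loads(raw)
    # Minimal structural check; the tutorial uses a Pydantic model here.
    assert {"invoice_number", "total"} <= data.keys()
    return data

print(extract("invoice.pdf"))
```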
1w ago
I use the unstructured library to pre-process PDF document content into a cleaner format, which helps the LLM produce more accurate responses. The JSON response is generated by the Nous Hermes 2 PRO LLM, without any additional post-processing, and validated with a dynamic Pydantic class to make sure it matches the request.
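The dynamic-class idea can be sketched with Pydantic's `create_model`: build a model from the fields the caller requested, then use it to validate the LLM's JSON reply. The field names below are illustrative, not the post's actual schema.

```python
from pydantic import ValidationError, create_model

# Build a Pydantic class dynamically from the requested fields
# (name -> (type, default); ... marks a required field).
requested_fields = {"invoice_number": (str, ...), "total": (float, ...)}
DynamicInvoice = create_model("DynamicInvoice", **requested_fields)

# A well-formed LLM reply validates cleanly.
llm_reply = {"invoice_number": "INV-42", "total": 118.0}
validated = DynamicInvoice(**llm_reply)

# A reply that does not match the request raises ValidationError.
try:
    DynamicInvoice(**{"invoice_number": "INV-42"})  # "total" missing
except ValidationError:
    print("reply does not match the requested structure")
```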
3w ago
I explain key points you should keep in mind when upgrading to LlamaIndex 0.10.x.
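One of the key points in that upgrade is the import reorganization: in 0.10.x the core moves under `llama_index.core`, and integrations become separate pip packages. A before/after sketch (illustrative imports, not an exhaustive migration list):

```python
# Before (0.9.x):
#   from llama_index import VectorStoreIndex, ServiceContext
#
# After (0.10.x): core APIs move under llama_index.core, and each
# integration is its own package (e.g. pip install llama-index-llms-ollama):
#   from llama_index.core import VectorStoreIndex, Settings
#   from llama_index.llms.ollama import Ollama
```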
1M ago
I explain how function calling works with an LLM. This concept is often misunderstood: the LLM doesn't call a function itself - it returns a JSON response with the values to be used for a function call from your environment. In this example, I'm using a Sparrow agent to call a function.
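The flow can be sketched in a few lines: the LLM's reply only names a function and its arguments, and your code performs the actual call. The function, registry, and reply below are hypothetical stand-ins, not Sparrow's actual code.

```python
import json

def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# Functions the environment is willing to execute on the LLM's behalf.
AVAILABLE_FUNCTIONS = {"get_weather": get_weather}

# What an LLM tuned for function calling might return as its reply:
llm_reply = '{"function": "get_weather", "arguments": {"city": "Vilnius"}}'

# The call happens here, in your environment - never inside the LLM.
call = json.loads(llm_reply)
result = AVAILABLE_FUNCTIONS[call["function"]](**call["arguments"])
print(result)
```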
1M ago
I explain how to handle file uploads with FastAPI and how to process the uploaded file using a Python temporary directory. Files placed in the temporary directory are automatically removed once the request completes, which is very convenient for a stateless API.
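The cleanup behaviour this relies on is standard-library `tempfile.TemporaryDirectory`: everything written under it disappears when the context exits. A minimal sketch, with the FastAPI `UploadFile` handling omitted and a hypothetical `process_upload` standing in for the request handler:

```python
import os
import tempfile

def process_upload(filename: str, content: bytes) -> int:
    with tempfile.TemporaryDirectory() as tmp_dir:
        path = os.path.join(tmp_dir, filename)
        with open(path, "wb") as f:
            f.write(content)
        size = os.path.getsize(path)  # process the file while it exists
    # tmp_dir and the file are gone here - no manual cleanup needed,
    # which is what keeps the API handler stateless.
    assert not os.path.exists(path)
    return size

print(process_upload("doc.pdf", b"dummy bytes"))
```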
2M ago
I explain new functionality in Sparrow - support for LLM agents. This means you can implement independently running agents and invoke them from the CLI or API, which makes it easier to run various LLM-related processing within Sparrow.
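The idea of independently running agents invoked by name can be sketched with a small registry, the way a CLI or API entry point might dispatch them. Agent names and the registry below are hypothetical, not Sparrow's actual code.

```python
AGENTS = {}

def register(name):
    """Decorator that adds an agent function to the registry."""
    def wrap(fn):
        AGENTS[name] = fn
        return fn
    return wrap

@register("extract")
def extract_agent(payload: str) -> str:
    return f"extracted fields from {payload}"

@register("classify")
def classify_agent(payload: str) -> str:
    return f"classified {payload}"

def run_agent(name: str, payload: str) -> str:
    # A CLI would take `name` from argv; an API would take it from the route.
    return AGENTS[name](payload)

print(run_agent("extract", "invoice.pdf"))
```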
2M ago
There are many tools and frameworks around LLMs, evolving and improving daily. I added plugin support in Sparrow to run different pipelines through the same Sparrow interface. Each pipeline can be implemented with a different tech stack (LlamaIndex, Haystack, etc.) and run independently. The main advantage is that you can test various RAG functionalities from a single app with a unified API and choose the one that works best for a specific use case.
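The plugin pattern can be sketched as one interface with interchangeable backends selected by name. Class and method names here are illustrative; the real Sparrow plugins wrap LlamaIndex, Haystack, and others.

```python
from abc import ABC, abstractmethod

class Pipeline(ABC):
    """One interface every pipeline plugin implements."""
    @abstractmethod
    def run(self, query: str) -> str: ...

class LlamaIndexPipeline(Pipeline):
    def run(self, query: str) -> str:
        return f"llamaindex answer to: {query}"

class HaystackPipeline(Pipeline):
    def run(self, query: str) -> str:
        return f"haystack answer to: {query}"

PIPELINES = {"llamaindex": LlamaIndexPipeline(), "haystack": HaystackPipeline()}

def run(pipeline: str, query: str) -> str:
    # The unified API picks the backend by name; callers never change.
    return PIPELINES[pipeline].run(query)

print(run("haystack", "total amount?"))
```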
3M ago
Haystack 2.0 provides functionality to process LLM output and ensure a proper JSON structure, based on a predefined Pydantic class. I show how you can run this on your local machine with Ollama, thanks to the OllamaGenerator class available from Haystack.
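The underlying loop is: generate, check the JSON structure, and re-prompt on failure. A stdlib-only sketch of that loop, with a stub standing in for `OllamaGenerator` and a key check standing in for the post's Pydantic validation; the replies and field names are invented for illustration.

```python
import json

# Scripted replies: the first is broken, the second is valid JSON.
REPLIES = iter(['not json at all',
                '{"city": "Berlin", "population": 3600000}'])

def generate(prompt: str) -> str:
    return next(REPLIES)  # stand-in for OllamaGenerator.run()

def generate_valid_json(prompt: str, required: set, max_tries: int = 3) -> dict:
    for _ in range(max_tries):
        reply = generate(prompt)
        try:
            data = json.loads(reply)
            if required <= data.keys():  # Pydantic validation in the post
                return data
        except json.JSONDecodeError:
            prompt += "\nReturn valid JSON only."  # feed the error back
    raise ValueError("no valid JSON after retries")

result = generate_valid_json("Describe Berlin as JSON", {"city", "population"})
print(result)
```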
3M ago
In this video, I show how to get JSON output from the Notus LLM running locally with Ollama. The JSON output is generated with LlamaIndex using the dynamic Pydantic class approach.
3M ago
FastAPI works great with LlamaIndex RAG. In this video, I show how to build a POST endpoint to execute inference requests with LlamaIndex. The RAG implementation is part of the Sparrow data extraction solution. I show how FastAPI handles multiple concurrent requests that initiate the RAG pipeline. I'm using Ollama to execute LLM calls as part of the pipeline. Ollama processes requests sequentially, which means it serves API requests in queue order. Hopefully, Ollama will support concurrent requests in the future.