Question-Answering Systems: Overview of Main Architectures
Towards Data Science
by Vyacheslav Efimov
21m ago
Discover design approaches for building a scalable information retrieval system.

Introduction. Question-answering applications have surged in recent years. They can be found everywhere: in modern search engines, in chatbots, and in applications that simply retrieve relevant information from large volumes of thematic data. As the name indicates, the objective of QA applications is to retrieve the most suitable answer to a given question from a text passage. Some of the first methods consisted of naive search by keywords or regular expressions. Obviously, such approaches are not optimal: a q…
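The naive keyword search the teaser mentions can be sketched in a few lines. This is a minimal, illustrative example (the function and passage names are hypothetical, not from the article): each passage is scored by how many of the question's words it contains, which shows why the approach is fragile — punctuation and synonyms defeat it.

```python
def keyword_score(question: str, passage: str) -> int:
    """Count how many whitespace-split query words appear in the passage.
    A deliberately naive baseline: no stemming, no punctuation handling."""
    query_words = set(question.lower().split())
    return sum(1 for word in passage.lower().split() if word in query_words)

def retrieve(question, passages):
    """Return the passage with the highest keyword overlap."""
    return max(passages, key=lambda p: keyword_score(question, p))

passages = [
    "The Eiffel Tower is in Paris.",
    "Python is a programming language.",
]
print(retrieve("Where is the Eiffel Tower?", passages))
# → "The Eiffel Tower is in Paris."
```

Note that "Tower?" fails to match "Tower" because of the question mark — exactly the kind of brittleness that motivates the more robust retrieval architectures the article surveys.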
iPhone Creator Suggests Opinions Drive Innovation, Not Data
Towards Data Science
by Daniel Kang
6h ago
Escaping the Data Trap: How Opinions Unlock True Innovation. (Image source: DALL-E.)

“We need to be data-driven,” says everyone. And yes, I agree 90% of the time, but it shouldn’t be taken as a blanket statement. Like everything else in life, recognizing where it does and doesn’t apply is important. In a world obsessed with data, it’s the bold, opinionated decisions that break through to revolutionary innovation.

Data is behind the curve of innovation. The Economist wrote about the rumoured, critical blunders of McKinsey in the 1980s, during the early days of the mobile phone era. AT&T asked McKinsey to…
A High-Level Guide to LLM Evaluation Metrics
Towards Data Science
by David Hundley
14h ago
Developing an understanding of a variety of LLM benchmarks and scores, including an intuition for when they may be of value for your purpose. (Title card created by the author.)

It seems that almost on a weekly basis, a new large language model (LLM) is launched to the public. With each announcement of an LLM, these providers will tout performance numbers that can sound pretty impressive. The challenge I’ve found is that there is a wide breadth of performance metrics referenced across these press releases. While there are a few that show up more often than the others, ther…
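Many of the benchmark scores the teaser refers to reduce to comparing model answers against references. As a hedged illustration (the function and data below are hypothetical, not taken from any specific benchmark), here is the simplest such metric, exact-match accuracy with light normalization:

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that exactly match the reference answer
    after light normalization (strip whitespace, lowercase)."""
    normalize = lambda s: s.strip().lower()
    matches = sum(normalize(p) == normalize(r)
                  for p, r in zip(predictions, references))
    return matches / len(references)

preds = ["Paris", " paris ", "Lyon"]
refs = ["Paris", "Paris", "Paris"]
print(exact_match_accuracy(preds, refs))  # 2 of 3 match after normalization
```

Real leaderboard metrics (F1, pass@k, Elo-style preference scores) are more elaborate, but they share this structure: a scoring rule applied over a fixed evaluation set, which is why the choice of set and rule matters so much when comparing press-release numbers.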
Intro to DSPy: Goodbye Prompting, Hello Programming!
Towards Data Science
by Leonie Monigatti
14h ago
How the DSPy framework solves the fragility problem in LLM-based applications by replacing prompting with programming and compiling. (Image hand-drawn by the author.)

Currently, building applications using large language models (LLMs) can be not only complex but also fragile. Typical pipelines are often implemented with prompts that are hand-crafted through trial and error, because LLMs are sensitive to how they are prompted. Thus, when you change a piece of your pipeline, such as the LLM or your data, you will likely weaken its performance — unless you adapt the prompt (or fine-tuni…
Get More Out of XAI: 10 Tips
Towards Data Science
by Conor O'Sullivan
15h ago
Explainable AI is about more than applying algorithms. (Photo by Marten Newhall on Unsplash.)

I remember the first time I used SHAP. Well, tried to use it. I wanted to understand an XGBoost model trained on over 40 features, many of which were highly correlated. The plots looked cool! But that was pretty much it. It wasn’t at all clear how the model was making predictions. And it wasn’t the XAI method’s fault: the underlying data was a mess. This was my first realisation that XAI methods are not a silver bullet. You can’t fire them at complex models and expect reasonable exp…
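The lesson in the teaser — check your data before trusting attribution plots — can be acted on cheaply. Below is a minimal sketch (function name and threshold are illustrative assumptions, not from the article) that flags highly correlated feature pairs before running SHAP or any other XAI method:

```python
import numpy as np

def highly_correlated_pairs(X, feature_names, threshold=0.9):
    """Return feature pairs whose absolute Pearson correlation exceeds the
    threshold -- a cheap sanity check before interpreting attributions,
    since correlated features can split or swap credit arbitrarily."""
    corr = np.corrcoef(X, rowvar=False)
    pairs = []
    n = corr.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) > threshold:
                pairs.append((feature_names[i], feature_names[j], corr[i, j]))
    return pairs

rng = np.random.default_rng(0)
a = rng.normal(size=200)
X = np.column_stack([a, a + rng.normal(scale=0.01, size=200),
                     rng.normal(size=200)])
pairs = highly_correlated_pairs(X, ["f0", "f1", "f2"])
print(pairs)  # f0 and f1 are near-duplicates and get flagged
```

If such pairs exist, drop or combine them (or use a correlation-aware explainer) before reading too much into any single feature's importance.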
Altair and the Powerful Vega-Lite ‘Grammar of Graphics’
Towards Data Science
by Alan Jones
20h ago
Using the Altair library for Python, we can develop compelling data visualizations based on a grammar of graphics and implement them in Streamlit. (The grammar of graphics is like a set of building blocks; photo by Nik Shuliahin on Unsplash.)

It was way back in 1999 that the late Leland Wilkinson wrote his seminal book, The Grammar of Graphics [1], in which he explained the notion that charts could be built from building blocks analogous to the grammar of a written language. According to H2O.ai, in their splendid tribute to Wilkinson (where he served as Chief Scientist), “The Gram…
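To make the "building blocks" idea concrete: Altair compiles to Vega-Lite JSON, where a chart is declared as data plus a mark plus encodings. The sketch below assembles such a spec as a plain Python dict (values and field names are made up for illustration; this is the shape of a Vega-Lite spec, not output copied from Altair):

```python
# A minimal Vega-Lite-style chart spec as a plain dict, showing the
# grammar-of-graphics building blocks: data, a mark, and encodings
# that map data fields to visual channels.
spec = {
    "data": {"values": [
        {"year": 2020, "sales": 10},
        {"year": 2021, "sales": 14},
        {"year": 2022, "sales": 9},
    ]},
    "mark": "line",
    "encoding": {
        "x": {"field": "year", "type": "ordinal"},
        "y": {"field": "sales", "type": "quantitative"},
    },
}
```

Swapping the mark ("line" to "bar") changes the chart type while the data and encodings stay untouched — that composability is what "grammar" means here.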
ChatGPT Is Not a Doctor
Towards Data Science
by Rachel Draelos, MD, PhD
1d ago
Hidden dangers in seeking medical advice from LLMs. (Image by author; two sub-images generated by DALL-E 2.)

Last year, ChatGPT passed the US Medical Licensing Exam and was reported to be “more empathetic” than real doctors. ChatGPT currently has around 180 million users; if a mere 10% of them have asked ChatGPT a medical question, that’s already a population twice the size of New York City using ChatGPT like a doctor. There’s an ongoing explosion of medical chatbot startups building thin wrappers around ChatGPT to dole out medical advice. But ChatGPT is not a doctor, and using ChatGPT…
Tokens-to-Token Vision Transformers, Explained
Towards Data Science
by Skylar Jean Callis
1d ago
Vision Transformers Explained Series: A Full Walk-Through of the Tokens-to-Token Vision Transformer, and Why It’s Better than the Original.

Since their introduction in 2017 with Attention Is All You Need¹, transformers have established themselves as the state of the art for natural language processing (NLP). In 2021, An Image Is Worth 16x16 Words² successfully adapted transformers for computer vision tasks. Since then, numerous transformer-based architectures have been proposed for computer vision. In 2021, Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet³ out…
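A core idea in Tokens-to-Token ViT is re-tokenizing with overlapping windows (its "soft split") so that neighboring tokens share pixels, unlike the original ViT's non-overlapping patches. A minimal NumPy sketch of that sliding-window tokenization, with illustrative sizes chosen here rather than taken from the paper:

```python
import numpy as np

def soft_split(img, window=4, stride=2):
    """Extract overlapping window x window patches with the given stride,
    flattening each into a token -- a minimal stand-in for the overlapping
    'soft split' used in Tokens-to-Token ViT."""
    h, w = img.shape
    tokens = []
    for i in range(0, h - window + 1, stride):
        for j in range(0, w - window + 1, stride):
            tokens.append(img[i:i + window, j:j + window].ravel())
    return np.stack(tokens)

img = np.arange(64, dtype=float).reshape(8, 8)
tokens = soft_split(img)
print(tokens.shape)  # (9, 16): a 3x3 grid of overlapping 4x4 patches
```

Because stride < window, adjacent tokens overlap, letting the model aggregate local structure across tokenization stages — the property the article's walk-through builds on.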
Position Embeddings for Vision Transformers, Explained
Towards Data Science
by Skylar Jean Callis
1d ago
Vision Transformers Explained Series: The Math and the Code Behind Position Embeddings in Vision Transformers.

Since their introduction in 2017 with Attention Is All You Need¹, transformers have established themselves as the state of the art for natural language processing (NLP). In 2021, An Image Is Worth 16x16 Words² successfully adapted transformers for computer vision tasks. Since then, numerous transformer-based architectures have been proposed for computer vision. This article examines why position embeddings are a necessary component of vision transformers, and how different papers i…
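Since attention is permutation-invariant, position embeddings are what tell a transformer where each patch sits. A common fixed scheme (from Attention Is All You Need; the token and dimension counts below are illustrative, not the article's) interleaves sines and cosines at geometrically spaced frequencies:

```python
import numpy as np

def sinusoidal_position_embeddings(num_tokens, dim):
    """Fixed sine/cosine position embeddings in the style of
    'Attention Is All You Need': even dims get sin, odd dims get cos,
    with frequencies decaying geometrically across the embedding."""
    positions = np.arange(num_tokens)[:, None]                      # (num_tokens, 1)
    freqs = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)   # (dim/2,)
    emb = np.zeros((num_tokens, dim))
    emb[:, 0::2] = np.sin(positions * freqs)
    emb[:, 1::2] = np.cos(positions * freqs)
    return emb

# e.g. 196 patch tokens plus a class token, embedded in 64 dimensions
pe = sinusoidal_position_embeddings(num_tokens=197, dim=64)
print(pe.shape)  # (197, 64)
```

Many vision transformers instead learn the embedding table from scratch; comparing the two choices is the kind of question the article examines.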
Vision Transformers, Explained
Towards Data Science
by Skylar Jean Callis
1d ago
Vision Transformers Explained Series: A Full Walk-Through of Vision Transformers in PyTorch.

Since their introduction in 2017 with Attention Is All You Need¹, transformers have established themselves as the state of the art for natural language processing (NLP). In 2021, An Image Is Worth 16x16 Words² successfully adapted transformers for computer vision tasks. Since then, numerous transformer-based architectures have been proposed for computer vision. This article walks through the Vision Transformer (ViT) as laid out in An Image Is Worth 16x16 Words². It includes open-source code for…
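The "16x16 words" in the cited title refers to splitting an image into fixed-size patches that become the transformer's input tokens. The article's own walk-through is in PyTorch; as a dependency-free sketch of just the patchification step, here is the same reshape in NumPy (sizes match the standard 224x224 ViT setup):

```python
import numpy as np

def patchify(img, patch=16):
    """Split an H x W x C image into flattened non-overlapping patch tokens,
    as in 'An Image Is Worth 16x16 Words'. Assumes H and W divide by patch."""
    h, w, c = img.shape
    img = img.reshape(h // patch, patch, w // patch, patch, c)
    img = img.transpose(0, 2, 1, 3, 4)         # gather the patch grid first
    return img.reshape(-1, patch * patch * c)  # (num_patches, patch_dim)

img = np.zeros((224, 224, 3))
tokens = patchify(img)
print(tokens.shape)  # (196, 768): 14 x 14 patches, each of dimension 16*16*3
```

In the full ViT, each 768-dimensional patch vector is then linearly projected, a class token is prepended, and position embeddings are added before the transformer encoder.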
