
Connect GPT with your data: Retrieval-augmented Generation

Formal Metadata

Title
Connect GPT with your data: Retrieval-augmented Generation
Title of Series
Number of Parts
60
Author
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
2023
Language
English

Content Metadata

Subject Area
Genre
Abstract
Large Language Models (LLMs), like ChatGPT, became the poster child of AI overnight. They changed how people search the web, how they write content, and how they code. These models have billions of parameters that effectively store some of the information they saw during pre-training, which lets them show deep knowledge of a subject even if they weren't explicitly trained on it. Yet it is not straightforward to use LLMs in enterprise use cases and embed them successfully in your product. The most common challenges are: 1) they don't know anything about YOUR data; 2) their knowledge is not up to date; 3) they hallucinate, and it is hard to tell which sources their answers are based on; 4) it is hard to assess their performance. In this talk, you will learn how to deal with all of these challenges. We will demonstrate how to connect LLMs to your data and keep them up to date using retrieval-augmented generation, how to design prompts that minimize hallucination, and how to evaluate the performance of your NLP application by collecting end-user feedback. We will share best practices for development workflows and typical traps along the way. Each step will be accompanied by practical code examples using the open source framework Haystack. By the end of the talk, you will not only know methods to overcome these challenges but also have code examples at hand to kickstart the development of your own NLP features.
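To give a flavour of the retrieval-augmented generation approach the abstract describes, here is a minimal sketch written against Haystack 1.x (around v1.17). It is not the talk's actual code: the sample documents, prompt wording, and placeholder API key are illustrative, and the exact class names and signatures differ between Haystack versions.

```python
# Minimal retrieval-augmented generation sketch, assuming Haystack 1.x (~v1.17).
# Sample documents, prompt text, and the API key placeholder are illustrative only.
from haystack import Document, Pipeline
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import AnswerParser, BM25Retriever, PromptNode, PromptTemplate

# 1) Index your own documents so answers can be grounded in them.
document_store = InMemoryDocumentStore(use_bm25=True)
document_store.write_documents([
    Document(content="Our product ships with a REST API and a Python SDK."),
    Document(content="Support plans include community, standard, and enterprise tiers."),
])

# 2) Retrieve the passages most relevant to the user's question.
retriever = BM25Retriever(document_store=document_store, top_k=3)

# 3) Instruct the model to answer only from the retrieved documents,
#    which reduces (but does not eliminate) hallucination.
grounded_prompt = PromptTemplate(
    prompt=(
        "Answer the question using only the provided documents. "
        "If the documents do not contain the answer, say you don't know.\n"
        "Documents: {join(documents)}\nQuestion: {query}\nAnswer:"
    ),
    output_parser=AnswerParser(),
)
prompt_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",
    api_key="YOUR_OPENAI_API_KEY",  # placeholder
    default_prompt_template=grounded_prompt,
)

# 4) Wire retriever and generator into a single query pipeline.
pipeline = Pipeline()
pipeline.add_node(component=retriever, name="Retriever", inputs=["Query"])
pipeline.add_node(component=prompt_node, name="PromptNode", inputs=["Retriever"])

result = pipeline.run(query="Which support plans are available?")
print(result["answers"][0].answer)
```

Because the model only sees whatever the retriever returns at query time, keeping the answers up to date is a matter of re-indexing documents rather than retraining the model; in production, the in-memory store would typically be swapped for a persistent document store and the BM25 retriever combined with or replaced by an embedding-based one.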