This course contains 27 interactive scrims
27 lessons · 1 hour 35 min
Splitting with a LangChain text splitter tool
Vectorising text chunks
Using embeddings models
Supabase vector store
Templates with input_variables
Prompts from templates
LangChain Expression Language
Basic chains with the .pipe() method
Retrieval from a vector store
Complex chains with RunnableSequence()
The StringOutputParser() class
Troubleshooting performance issues
This is not a general knowledge chatbot. This bot can have logical, contextual conversations about a specific knowledge source that we provide it. In this case, it will be able to answer questions about Scrimba.
I’m a tutor at Scrimba and I’ve been messing around with websites since 2004. I’m aiming to take the pain out of learning to code.
In this course, you'll be using LangChain.js to build a chatbot that can answer questions on a specific text you give it. This is one of the holy grails of AI - a true superpower.
In the first part of the project, we learn about using LangChain to split text into chunks, convert the chunks to vectors using an OpenAI embeddings model, and store them together in a Supabase vector store.
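LangChain's text splitters handle the chunking step for you, but the core idea - fixed-size chunks with a small overlap so meaning isn't lost at chunk boundaries - can be sketched in a few lines of plain JavaScript. This is an illustration of the concept, not the LangChain API (LangChain's splitters are smarter and respect separators like paragraphs):

```javascript
// Illustrative sketch of what a text splitter does: break a long string
// into fixed-size chunks, with each chunk overlapping the previous one
// so context that straddles a boundary appears in both chunks.
function splitText(text, chunkSize = 500, chunkOverlap = 50) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    start += chunkSize - chunkOverlap; // step forward, keeping the overlap
  }
  return chunks;
}

const chunks = splitText("a".repeat(1200), 500, 50);
console.log(chunks.length);    // 3
console.log(chunks[0].length); // 500
```

Each chunk is then what gets converted to a vector and stored, so chunk size is a trade-off: smaller chunks give more precise matches, larger chunks give more context per match.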
Next, we study chains - the building blocks of LangChain - using LangChain Expression Language (LCEL), which makes the process of coding in LangChain much smoother and easier to grasp.
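The heart of LCEL is composition: each step is a "runnable" with an invoke method, and piping connects one step's output to the next step's input. Here's a minimal sketch of that pattern in plain JavaScript - a toy stand-in for the idea, not LangChain's actual Runnable classes:

```javascript
// Toy "runnable": an object with invoke() and pipe(). pipe() returns a
// new runnable that feeds the output of this step into the next step,
// which is the composition idea behind LCEL chains.
const runnable = (fn) => ({
  invoke: (input) => fn(input),
  pipe(next) {
    return runnable(async (input) => next.invoke(await this.invoke(input)));
  },
});

// Two toy steps: a "prompt template" and a stand-in "model".
const promptStep = runnable((topic) => `Tell me about ${topic}`);
const upperStep = runnable((prompt) => prompt.toUpperCase());

const chain = promptStep.pipe(upperStep);
chain.invoke("Scrimba").then(console.log); // TELL ME ABOUT SCRIMBA
```

In the real library you pipe a prompt template into a chat model and then into an output parser in exactly this shape.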
Finally, we tackle retrieval: using vector matching to select the text chunks from our vector store that are most likely to hold the answer to a user’s query. This enables the chatbot to answer questions specific to your data - a critical skill when working with AI and one of the most common use cases for AI in web dev.
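In the project, the vector store does the matching for you, but the computation underneath is worth seeing once: score every stored chunk's embedding against the query embedding with cosine similarity, then keep the top results. This sketch uses tiny made-up 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```javascript
// Cosine similarity: 1 means the vectors point the same way, 0 means
// they are unrelated. This is the standard scoring for embeddings.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Retrieval: score every chunk, sort by score, return the top k.
function retrieve(queryVector, store, k = 2) {
  return store
    .map(({ text, vector }) => ({ text, score: cosineSimilarity(queryVector, vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

// Toy store with hand-made "embeddings".
const store = [
  { text: "Scrimba is a code-learning platform", vector: [0.9, 0.1, 0.0] },
  { text: "Bananas are yellow",                  vector: [0.0, 0.2, 0.9] },
  { text: "Scrimba courses are interactive",     vector: [0.8, 0.3, 0.1] },
];

console.log(retrieve([1, 0, 0], store, 2).map((r) => r.text));
// → the two Scrimba chunks, most similar first
```

The retrieved chunks are then injected into the prompt as context, which is what lets the model answer questions about your data.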
By the end of this course, you'll be able to use LangChain to build real-world, scalable applications. And as this is a Scrimba course, there will be challenges for you to solve throughout the course, allowing you to put your new skills to the test and gain the muscle memory you need to become a rock star developer.
LangChain is an AI-first framework that helps developers build context-aware reasoning applications. The goal of LangChain is to link powerful Large Language Models, such as OpenAI's GPT-4, to external data sources, so developers can build natural language processing applications and reap their benefits.
Embeddings refer to the encoded forms of language elements like sentences, paragraphs, or documents, represented in a multi-dimensional vector space. Each dimension in this space corresponds to a learned linguistic feature or characteristic. Embeddings serve as a means for the model to grasp, retain, and represent the meaning and connections within language, enabling it to compare and differentiate between various linguistic elements or units. But don’t worry about the theoretical stuff too much. We will build this project step by step and make the learning curve gentle.
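To make that concrete: an embedding is just an array of numbers. The toy vectors below are hand-made with invented "features" so the effect is visible (a real model learns thousands of opaque dimensions), but the key property is the same - texts with similar meanings end up with nearby vectors, so measuring distance measures relatedness:

```javascript
// Hand-made toy "embeddings" - each number is an invented feature.
// Real embeddings come from a model and aren't interpretable like this.
const embeddings = {
  kitten: [0.9, 0.8, 0.1],
  cat:    [1.0, 0.7, 0.1],
  car:    [0.1, 0.1, 0.9],
};

// Euclidean distance between two vectors: smaller means more similar.
function distance(a, b) {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

console.log(distance(embeddings.kitten, embeddings.cat) <
            distance(embeddings.kitten, embeddings.car)); // true
```

That one comparison - "kitten" is closer to "cat" than to "car" - is the whole trick the chatbot's retrieval step relies on.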