
LangChain Integration

Overview

LangChain is a framework for developing applications powered by language models. Its modular architecture provides composable building blocks for assembling complex LLM workflows.

Key Features

  • Chains: Combine multiple components for complex workflows
  • Agents: Build autonomous systems that can use tools and make decisions
  • Memory: Implement stateful conversations and context management
  • Retrievers: Connect to external data sources for knowledge retrieval
  • Tools and Integrations: Extensive library of integrations with third-party services
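
The "Chains" idea above is essentially function composition: each component's output feeds the next. Here is a toy, LangChain-free sketch of that pattern; the `prompt` and `fake_llm` steps are made-up stand-ins for illustration, not LangChain APIs:

```python
# Minimal illustration of chaining: each step's output is the next step's input.
def prompt(topic: str) -> str:
    # Hypothetical prompt-template step
    return f"Write one sentence about {topic}."

def fake_llm(text: str) -> str:
    # Stand-in for a model call; a real chain would invoke an LLM here
    return f"[model answer to: {text}]"

def chain(*steps):
    # Compose the steps left to right into a single callable
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

pipeline = chain(prompt, fake_llm)
print(pipeline("quantum computers"))
```

In real LangChain code the same composition is written declaratively (e.g. chaining a prompt template into a model), but the data flow is the same.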

Use Cases

  • Building sophisticated chatbots with multi-step reasoning
  • Creating document analysis systems with complex data processing
  • Developing agents that can perform tasks and use external tools
  • Building data analytics applications with natural language interfaces

Setup Instructions

1. Install LangChain:

pip install langchain langchain-openai

2. Configure the OpenAI-compatible chat model with the relaxAI base URL:

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

# Initialize with relaxAI parameters
llm = ChatOpenAI(
    model_name="Llama-4-Maverick-17B-128E",
    openai_api_key="RELAX_API_KEY",  # replace with your actual key
    openai_api_base="https://api.relax.ai/v1/",
    max_tokens=200,
)

# Example usage
messages = [HumanMessage(content="Write a poem about quantum computers.")]
response = llm.invoke(messages)
print(response.content)
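
The snippets on this page use the literal string "RELAX_API_KEY" as a placeholder. In practice you would read the real key from the environment rather than hardcoding it; a small sketch (the RELAX_API_KEY variable name is just a suggestion):

```python
import os

# Read the key from the environment; fall back to an empty string so the
# missing-key case can be detected explicitly rather than silently.
api_key = os.environ.get("RELAX_API_KEY", "")
if not api_key:
    print("RELAX_API_KEY is not set; API calls will fail.")
```

You would then pass `api_key` to the model constructor instead of the placeholder string.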

Advanced Example: Retrieval-Augmented Generation (RAG)

Architecture diagram of RAG

Code Example

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import CharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA

# Load and split documents into chunks
loader = TextLoader("./data/my_document.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

# Create vector store from the embedded chunks
embeddings = OpenAIEmbeddings(
    openai_api_key="RELAX_API_KEY",
    openai_api_base="https://api.relax.ai/v1/",
)
vectorstore = Chroma.from_documents(texts, embeddings)

# Create LLM
llm = ChatOpenAI(
    model_name="DeepSeek-R1",  # custom model name
    openai_api_key="RELAX_API_KEY",
    openai_api_base="https://api.relax.ai/v1/",
    max_tokens=400,
)

# Create RAG chain
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
)

# Query the system
query = "What are the key points in this document?"
response = qa_chain.invoke({"query": query})
print(response["result"])
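
Under the hood, the retriever returned by `as_retriever()` finds the chunks whose embeddings are most similar to the query embedding. A toy, dependency-free sketch of that top-k cosine-similarity step (the vectors and chunk texts here are made up; real embeddings come from the embedding model above):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embedded" chunks: (text, vector)
chunks = [
    ("chunk about pricing", [1.0, 0.0, 0.2]),
    ("chunk about setup",   [0.1, 1.0, 0.0]),
    ("chunk about limits",  [0.0, 0.2, 1.0]),
]

def retrieve(query_vec, k=2):
    # Rank chunks by similarity to the query vector, keep the top k
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

print(retrieve([0.9, 0.1, 0.1]))
```

The retrieved chunks are then "stuffed" into the LLM prompt as context, which is what `chain_type="stuff"` refers to.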

Resources