LlamaIndex Integration
Overview
LlamaIndex (formerly GPT Index) is a powerful data framework designed for building LLM applications with custom data sources. It provides the tools necessary to connect large language models to external data and build knowledge-enhanced applications.
Key Features
- Data Connectors: Connect to various data sources including files, databases, and APIs
- Indexing Strategies: Multiple indexing methods optimized for different retrieval patterns
- Query Engines: Run natural language queries against your data
- Agent Frameworks: Build sophisticated agents that can reason over your data
- Multi-Modal Support: Process text, images, and other data types
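The indexing and query-engine features above boil down to similarity search: documents are embedded as vectors, and a query vector is matched against them. The following is an illustrative sketch with hand-made toy vectors, not LlamaIndex's actual implementation (which uses a real embedding model and optimized vector stores):

```python
from math import sqrt

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": in a real index these come from an embedding model
documents = {
    "paris.txt": [0.9, 0.1, 0.0],
    "biology.txt": [0.1, 0.8, 0.3],
    "cooking.txt": [0.0, 0.2, 0.9],
}

def retrieve(query_vector, top_k=1):
    # Rank documents by cosine similarity to the query vector
    ranked = sorted(
        documents.items(),
        key=lambda item: cosine_similarity(query_vector, item[1]),
        reverse=True,
    )
    return [name for name, _ in ranked[:top_k]]

# A query vector close to paris.txt's embedding retrieves it first
print(retrieve([0.8, 0.2, 0.1]))  # ['paris.txt']
```

This is the core idea behind a vector index: retrieval quality depends entirely on how well the embeddings place related texts near each other.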
Use Cases
- Building RAG (Retrieval-Augmented Generation) applications
- Creating knowledge bases with natural language querying
- Developing document Q&A systems with private data
- Building semantic search applications
Setup Instructions
1. Set up a Python Environment:
- Open your terminal or command prompt.
- Create a new directory for your project and navigate to it.
- Create and activate a virtual environment:

```bash
# Create virtual environment
python -m venv llamaindex-env

# Activate on macOS/Linux
source llamaindex-env/bin/activate

# Activate on Windows
llamaindex-env\Scripts\activate
```
2. Install Required Packages:
- With your virtual environment activated, install the necessary packages:

```bash
# Install the core LlamaIndex package
pip install llama-index

# Install the OpenAI-compatible integration
pip install llama-index-llms-openai-like

# Install additional dependencies for document processing
pip install llama-index-readers-file python-dotenv
```
3. Set up Environment Variables:
- Create a new file named `.env` in your project directory and open it in your text editor.
- Add your relaxAI API credentials:

```
API_KEY=<RELAX_API_KEY>
API_BASE=https://api.relax.ai/v1/
```
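For reference, python-dotenv reads this file as simple `KEY=VALUE` lines. The toy parser below only illustrates the format; it is not python-dotenv itself, which additionally handles quoting, comments, and variable expansion:

```python
def parse_env(text):
    """Parse simple KEY=VALUE lines into a dict (illustration only)."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines and comment lines
        if not line or line.startswith("#"):
            continue
        # Split on the first '=' only, so values may contain '='
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

sample = """
# relaxAI credentials
API_KEY=<RELAX_API_KEY>
API_BASE=https://api.relax.ai/v1/
"""

config = parse_env(sample)
print(config["API_BASE"])  # https://api.relax.ai/v1/
```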
4. Create Configuration Module:
- Create a new file named `config.py` in your project directory.
- Add the following sample code to configure LlamaIndex with relaxAI:

```python
import os

from dotenv import load_dotenv
from llama_index.llms.openai_like import OpenAILike

# Load environment variables
load_dotenv()

# Configure the LLM
def get_llm():
    return OpenAILike(
        model="Llama-4-Maverick-17B-128E",
        api_key=os.getenv("API_KEY"),
        api_base=os.getenv("API_BASE"),
        is_chat_model=True,
        temperature=0.7,
        max_tokens=500,
        api_version="v1",
    )
```
5. Create a Basic Query Script:
- Create a new file named `query_example.py`.
- Add the following code to test your relaxAI integration:

```python
from config import get_llm

# Get configured LLM
llm = get_llm()

# Test with a simple query
response = llm.complete("What is the capital of France?")
print(str(response))
```
6. Run the Test Script:
- In your terminal, run the test script:

```bash
python query_example.py
```

Expected output:

```
The capital of France is Paris.
```
Code Example
```python
from llama_index.llms.openai_like import OpenAILike

# Configure with relaxAI API
llm = OpenAILike(
    model="Llama-4-Maverick-17B-128E",
    api_key="RELAX_API_KEY",
    api_base="https://api.relax.ai/v1/",
)

# Use in your application
response = llm.complete("Hello World!")
print(str(response))
```
Advanced Usage
Here’s an example of using LlamaIndex with relaxAI for document retrieval:
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.openai_like import OpenAILike

# Load documents
documents = SimpleDirectoryReader("./data").load_data()

# Configure LLM
llm = OpenAILike(
    model="Llama-4-Maverick-17B-128E",
    api_key="RELAX_API_KEY",
    api_base="https://api.relax.ai/v1/",
)

# Create index (vectorization uses the configured embedding model,
# which defaults to OpenAI embeddings unless overridden)
index = VectorStoreIndex.from_documents(documents)

# Create a query engine that answers with the relaxAI LLM
query_engine = index.as_query_engine(llm=llm)

# Query your data
response = query_engine.query("What information is in these documents?")
print(str(response))
```
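Conceptually, the query engine retrieves the most relevant chunks and packs them into a prompt before calling the LLM. The sketch below shows that assembly step in simplified form; the template wording and the `build_rag_prompt` helper are illustrative, not LlamaIndex internals:

```python
def build_rag_prompt(question, retrieved_chunks):
    # Join retrieved chunks into a context section, then append the question
    context = "\n\n".join(retrieved_chunks)
    return (
        "Context information is below.\n"
        "---------------------\n"
        f"{context}\n"
        "---------------------\n"
        "Given the context information, answer the query.\n"
        f"Query: {question}\n"
        "Answer: "
    )

chunks = [
    "Paris is the capital of France.",
    "France is a country in Western Europe.",
]
prompt = build_rag_prompt("What is the capital of France?", chunks)
print(prompt)
```

Because the retrieved context is inlined into the prompt, the LLM can answer questions about private documents it was never trained on, which is the essence of RAG.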
Resources
- LlamaIndex Documentation
- LlamaIndex GitHub Repository
- OpenAI-Like LLM Integration
- LlamaIndex Quickstart
- Custom LLM Integration Guide
- Vector Store Setup Guide
- LlamaIndex Discord Community