LlamaIndex Integration

Overview

LlamaIndex (formerly GPT Index) is a powerful data framework designed for building LLM applications with custom data sources. It provides the tools necessary to connect large language models to external data and build knowledge-enhanced applications.

Key Features

  • Data Connectors: Connect to various data sources including files, databases, and APIs
  • Indexing Strategies: Multiple indexing methods optimized for different retrieval patterns
  • Query Engines: Run natural language queries against your data
  • Agent Frameworks: Build sophisticated agents that can reason over your data
  • Multi-Modal Support: Process text, images, and other data types

Use Cases

  • Building RAG (Retrieval-Augmented Generation) applications
  • Creating knowledge bases with natural language querying
  • Developing document Q&A systems with private data
  • Building semantic search applications

Setup Instructions

1. Set up a Python Environment:

  • Open your terminal or command prompt.
  • Create a new directory for your project and navigate to it.
  • Create and activate a virtual environment:
Terminal window
# Create virtual environment
python -m venv llamaindex-env
# Activate on macOS/Linux
source llamaindex-env/bin/activate
# Activate on Windows
llamaindex-env\Scripts\activate
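With the environment activated, you can optionally confirm that `python` now resolves to the interpreter inside the virtual environment:

```shell
# The printed path should point inside llamaindex-env once the
# environment is activated
python -c "import sys; print(sys.prefix)"
```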

2. Install Required Packages:

  • With your virtual environment activated, install the necessary packages:
Terminal window
# Install the core LlamaIndex package
pip install llama-index
# Install the OpenAI-compatible integration
pip install llama-index-llms-openai-like
# Install additional dependencies for document processing
pip install llama-index-readers-file python-dotenv

3. Set up Environment Variables:

  • Create a new file named .env in your project directory
  • Open it in your text editor
  • Add your relaxAI API credentials:
API_KEY=<RELAX_API_KEY>
API_BASE=https://api.relax.ai/v1/
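To catch typos in the `.env` file early, you can sanity-check that both keys are present before wiring up LlamaIndex. The snippet below is a stdlib-only sketch (the application itself loads the file with python-dotenv); the file contents are inlined here for illustration:

```python
# Minimal .env-style parser, standard library only; python-dotenv does
# this (and more) for the application itself.
def parse_env_text(text):
    values = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks, comments, and malformed lines
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

# The .env contents described above (placeholder key)
env = parse_env_text("API_KEY=<RELAX_API_KEY>\nAPI_BASE=https://api.relax.ai/v1/\n")
missing = [k for k in ("API_KEY", "API_BASE") if not env.get(k)]
print("missing keys:", missing)  # -> missing keys: []
```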

4. Create Configuration Module:

  • Create a new file named config.py in your project directory.
  • Add the following sample code to configure LlamaIndex with relaxAI:
import os
from dotenv import load_dotenv
from llama_index.llms.openai_like import OpenAILike

# Load environment variables
load_dotenv()

# Configure the LLM
def get_llm():
    return OpenAILike(
        model="Llama-4-Maverick-17B-128E",
        api_key=os.getenv("API_KEY"),
        api_base=os.getenv("API_BASE"),
        is_chat_model=True,
        temperature=0.7,
        max_tokens=500,
        api_version="v1",
    )

5. Create a Basic Query Script:

  • Create a new file named query_example.py

  • Add the following code to test your relaxAI integration:

from config import get_llm
# Get configured LLM
llm = get_llm()
# Test with a simple query
response = llm.complete("What is the capital of France?")
print(str(response))

6. Run the Test Script:

  • In your terminal, run the test script:
Terminal window
python query_example.py

Expected output:

The capital of France is Paris.

Code Example

from llama_index.llms.openai_like import OpenAILike

# Configure with relaxAI API
llm = OpenAILike(
    model="Llama-4-Maverick-17B-128E",
    api_key="RELAX_API_KEY",
    api_base="https://api.relax.ai/v1/",
    is_chat_model=True,
)

# Use in your application
response = llm.complete("Hello World!")
print(str(response))

Advanced Usage

Here’s an example of using LlamaIndex with relaxAI for document retrieval:

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.openai_like import OpenAILike

# Configure the LLM globally
Settings.llm = OpenAILike(
    model="Llama-4-Maverick-17B-128E",
    api_key="RELAX_API_KEY",
    api_base="https://api.relax.ai/v1/",
    is_chat_model=True,
)
# Note: indexing also needs an embedding model. LlamaIndex defaults to
# OpenAI embeddings, so set Settings.embed_model (e.g. a local
# HuggingFace embedding) if you are not using an OpenAI key.

# Load documents
documents = SimpleDirectoryReader("./data").load_data()

# Create index
index = VectorStoreIndex.from_documents(documents)

# Create query engine (uses Settings.llm)
query_engine = index.as_query_engine()

# Query your data
response = query_engine.query("What information is in these documents?")
print(str(response))
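Under the hood, the query engine embeds the question, retrieves the document chunks most similar to it, and passes those chunks to the LLM as context. The toy sketch below illustrates just the retrieval step, using a stand-in bag-of-words "embedding" and cosine similarity; real pipelines use a neural embedding model, and these function names are illustrative only:

```python
# Toy illustration of the retrieval step a query engine performs:
# embed the query and each chunk, rank chunks by cosine similarity,
# and keep the top k as context for the LLM.
import math
from collections import Counter

def embed(text):
    # Stand-in embedding: word-count vector (real systems use a model)
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query, chunks, k=2):
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Paris is the capital of France.",
    "LlamaIndex builds RAG pipelines.",
    "The Seine flows through Paris.",
]
# The chunk about France's capital ranks first for this query
print(top_k("What is the capital of France?", chunks, k=1))
```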

Resources