Rasa Integration

Overview

Rasa is a comprehensive open-source framework for building contextual AI assistants and chatbots. It provides the infrastructure for developing sophisticated conversational experiences that can be deployed across multiple channels.

Key Features

  • Contextual Understanding: Maintains conversation state and user context
  • Intent Classification: Accurately identifies user intents
  • Entity Extraction: Recognizes and extracts key information
  • Custom Actions: Execute backend operations based on user inputs
  • Multi-Channel Support: Deploy across websites, messaging apps, and voice interfaces

Use Cases

  • Customer service automation with contextual understanding
  • Internal enterprise assistants with secure data access
  • Multi-turn conversational interfaces for complex tasks
  • Voice-enabled assistants for hands-free operation

Note: Custom LLM integration is available only in Rasa Pro.

Setup Instructions

1. Install Rasa and Create Project:

# Install Rasa
pip install rasa
# Create a new Rasa project
rasa init

For more detailed setup instructions, refer to the Rasa documentation.

2. Install Rasa Pro (required for custom LLM integration):

Visit the Rasa Pro page to obtain a license. You will be prompted to fill out a form and will receive license information and installation instructions via email.

Add the Rasa Pro package repository to your pip configuration (the license email includes the repository details), then install:

pip install rasa-pro

3. Configure config.yml:

Navigate to your project directory and locate the config.yml file (for more details, refer to the Rasa configuration documentation).

Edit the config.yml to include the LLM configuration:

recipe: default.v1
language: en
pipeline:
  - name: WhitespaceTokenizer
  - name: LexicalSyntacticFeaturizer
  - name: CountVectorsFeaturizer
  - name: CountVectorsFeaturizer
    analyzer: char_wb
    min_ngram: 1
    max_ngram: 4
  - name: RasaLLM
    model_name: "Deepseek" # Custom relaxAI model name
    base_url: "https://api.relax.ai/v1/"
    llm_parameters:
      model: "DeepSeek-R1-0528"
    # Add your API key in credentials.yml, not here
policies:
  - name: MemoizationPolicy
  - name: RulePolicy
  - name: TEDPolicy
    max_history: 5
    epochs: 100
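The second CountVectorsFeaturizer entry builds character n-grams (analyzer: char_wb with n-gram lengths 1 to 4), which makes intent classification more robust to typos and inflections than word-level features alone. A simplified sketch of char_wb-style n-gram extraction (illustrative only, not Rasa's actual implementation):

```python
def char_wb_ngrams(text, min_n=1, max_n=4):
    """Extract character n-grams from each word, padded with spaces
    on both sides (roughly what analyzer: char_wb does)."""
    grams = []
    for word in text.lower().split():
        padded = f" {word} "
        for n in range(min_n, max_n + 1):
            for i in range(len(padded) - n + 1):
                grams.append(padded[i:i + n])
    return grams
```

For instance, the word "Hi" yields n-grams such as " h", "hi", and " hi ", so a misspelled input still shares many features with the training examples.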

4. Configure credentials.yml file:

  • In the same project directory, locate or create the credentials.yml file.
  • Add your relaxAI API key to this file:
custom_llm:
  url: "https://api.relax.ai/v1/chat/completions"
  api_key: "RELAX_API_KEY"

5. Create Custom Action Handler (optional):

Navigate to the actions directory in your project and edit the actions.py file.

Add a custom action class that uses the relaxAI API for specific tasks:

from rasa_sdk import Action
from rasa_sdk.events import SlotSet
import requests


class ActionQueryrelaxAI(Action):
    def name(self):
        return "action_query_relaxai"

    def run(self, dispatcher, tracker, domain):
        # Custom code to call relaxAI API
        # ...
        return []
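The stub above can delegate to a small helper that posts an OpenAI-compatible request to the endpoint configured in credentials.yml. A minimal sketch using only the Python standard library (build_payload, query_relaxai, and the RELAX_API_KEY environment variable are illustrative names, not part of Rasa's SDK):

```python
import json
import os
import urllib.request

RELAX_URL = "https://api.relax.ai/v1/chat/completions"


def build_payload(user_message, model="DeepSeek-R1-0528"):
    """Build an OpenAI-compatible chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }


def query_relaxai(user_message):
    """POST the payload to the relaxAI endpoint and return the reply text.
    Expects the API key in the RELAX_API_KEY environment variable."""
    request = urllib.request.Request(
        RELAX_URL,
        data=json.dumps(build_payload(user_message)).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + os.environ["RELAX_API_KEY"],
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read())
    return body["choices"][0]["message"]["content"]
```

Inside run(), you could call query_relaxai(tracker.latest_message.get("text")) and pass the result to dispatcher.utter_message.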

6. Train Your Model:

Return to your terminal in the project directory and run the training command:

rasa train
  • Training progress will be displayed in the terminal.

7. Start Action Server (if using custom actions):

Open a new terminal window and navigate to your project directory.

  • Run the action server:
rasa run actions

8. Test Your Setup:

  • In your original terminal window, run:
rasa shell
  • Test your bot with sample conversations.

You can switch between available relaxAI models by changing the model parameter in config.yml:

  • DeepSeek-R1-0528
  • Llama-4-Maverick-17B-128E
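For example, a sketch of the change needed to switch to the Llama model (the model_name label is illustrative; only the llm_parameters value must match one of the models listed above):

```yaml
# config.yml (excerpt)
- name: RasaLLM
  model_name: "Llama" # Custom relaxAI model name
  base_url: "https://api.relax.ai/v1/"
  llm_parameters:
    model: "Llama-4-Maverick-17B-128E"
```

After changing the model, run rasa train again so the new configuration takes effect.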