
FlowiseAI Integration

FlowiseAI is a powerful low-code platform for building custom AI workflows and agents with an intuitive visual interface. It enables you to create sophisticated AI applications without extensive programming knowledge.

Key features:

  • Visual Node-Based Editor: Drag-and-drop interface for creating AI workflows
  • Pre-built Components: Extensive library of ready-to-use AI components
  • Multi-Modal Support: Text, image, and audio processing capabilities
  • Custom Function Support: Extend functionality with JavaScript functions

Common use cases:

  • Building conversational AI agents for customer service
  • Creating data processing and analysis pipelines
  • Developing content generation and summarization tools
  • Integrating multiple AI services into cohesive workflows

Visit the FlowiseAI GitHub repo for installation options, either via Docker or NPM:

Docker:

docker pull flowiseai/flowise
docker run -d --name flowise -p 3000:3000 flowiseai/flowise

NPM:

npm install -g flowise
npx flowise start

Open your browser and go to http://localhost:3000. You’ll see the FlowiseAI dashboard with the canvas area.

FlowiseAI Dashboard Home

Click on the + Add New button to create a new flow. Give your flow a name like relaxAI Integration.

In the components panel on the left, navigate to the LangChain section. Find the ChatModels category.

Drag the ChatOpenAI Custom ChatModel node to your canvas. This will be your connection point to relaxAI.

In the node configuration panel on the right, select Custom OpenAI API Compatible from the Model Name dropdown.

Adding ChatModel Node

Enter the following details in the configuration panel:

Base URL: https://api.relax.ai/v1/
API Key: your relaxAI API key (shown here as the placeholder RELAX_API_KEY)
Model Name: <model name> # for example: Llama-4-Maverick-17B-128E
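
Behind these three fields, the node sends a standard OpenAI-compatible chat completions request to relaxAI. A minimal TypeScript sketch of how the settings map onto that request; the `buildChatRequest` helper is illustrative, not part of the FlowiseAI or relaxAI APIs:

```typescript
// The Base URL configured above; requests go to <base url>/chat/completions.
const BASE_URL = "https://api.relax.ai/v1/";

// Build the request an OpenAI-compatible client would send with these settings.
function buildChatRequest(apiKey: string, model: string, userMessage: string) {
  return {
    url: `${BASE_URL}chat/completions`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // the API Key field
      },
      body: JSON.stringify({
        model, // the Model Name field, e.g. "Llama-4-Maverick-17B-128E"
        messages: [{ role: "user", content: userMessage }],
      }),
    },
  };
}
```

With a real key, `fetch(req.url, req.options)` on the returned object would exercise the same endpoint the FlowiseAI node uses.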

Configuration Parameters

  • Configure optional parameters like Temperature, Top P, and Max Tokens
  • Recommended starting values:
    • Temperature: 0.7
    • Top P: 0.95
  • Add additional components like ChatPrompt, TextInput, and TextOutput

  • Connect the components to create a complete flow:

    TextInput → ChatPrompt
    ChatPrompt → ChatModel (relaxAI)
    ChatModel → TextOutput
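
Conceptually, the wired flow is a small pipeline: the input text is substituted into the prompt template, the filled prompt goes to the model, and the model's reply becomes the output. A rough sketch of that data flow; the function names and template are hypothetical, since the real work happens inside FlowiseAI's nodes:

```typescript
// Stand-in for the ChatModel node: any function from prompt to reply.
type ChatModelFn = (prompt: string) => string;

// ChatPrompt node: substitute the input text into a prompt template.
function chatPrompt(template: string, input: string): string {
  return template.replace("{input}", input);
}

// TextInput → ChatPrompt → ChatModel → TextOutput, end to end.
function runFlow(input: string, model: ChatModelFn): string {
  const prompt = chatPrompt("You are a helpful assistant. {input}", input);
  return model(prompt); // the model's reply is the TextOutput value
}

// A stub model stands in for the real relaxAI call here:
const output = runFlow("Hello", (p) => `model saw: ${p}`);
```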

Complete Flow Example

  • Click the “Save” button in the top-right corner
  • Your relaxAI integration is now ready to use
  • Click the “Prediction” tab at the top
  • Enter a test message and check if your relaxAI model responds correctly
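
Flowise also serves each saved flow over its prediction REST endpoint (POST /api/v1/prediction/<flow id>), so you can run the same test from code. A sketch of the request, assuming the default local install; the flow ID is a placeholder you copy from the Flowise UI:

```typescript
// Default address of a local FlowiseAI install.
const FLOWISE_URL = "http://localhost:3000";

// Prediction endpoint for a saved flow.
function predictionEndpoint(flowId: string): string {
  return `${FLOWISE_URL}/api/v1/prediction/${flowId}`;
}

// Request body: the prediction API takes the user message as "question".
function predictionPayload(question: string) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
  };
}

// Usage (Node 18+), with "your-flow-id" replaced by the real ID:
// const res = await fetch(predictionEndpoint("your-flow-id"), predictionPayload("Hello"));
// const answer = await res.json();
```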
Alternatively, you can connect to relaxAI directly from LangChain code:

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  modelName: "Llama-4-Maverick-17B-128E",
  openAIApiKey: process.env.RELAX_API_KEY, // read the key from the environment rather than hard-coding it
  configuration: {
    baseURL: "https://api.relax.ai/v1/",
  },
});

// e.g. const response = await llm.invoke("Hello!");