Semantic Kernel Integration

Overview

Semantic Kernel is a lightweight SDK from Microsoft that integrates Large Language Models (LLMs) with conventional programming languages. It allows developers to create AI applications that combine semantic and symbolic AI techniques.

Key Features

  • Plugin Architecture: Create, share, and compose AI capabilities
  • Multi-Language Support: Available for C#, Python, Java, and more
  • Memory Management: Built-in state and context handling
  • Planning Capabilities: Enable LLMs to solve complex problems
  • Hybrid AI Approach: Combine traditional code with AI components

Use Cases

  • Building intelligent applications with natural language interfaces
  • Creating domain-specific assistants with specialized knowledge
  • Developing autonomous agents that can plan and execute tasks
  • Enhancing existing applications with AI capabilities
  • Implementing complex workflows combining code and AI reasoning

Setup Instructions

  1. Install Semantic Kernel:

    • For .NET:
      dotnet add package Microsoft.SemanticKernel
    • For Python:
      pip install semantic-kernel
    • For Java, add the Maven dependency to your pom.xml:
      <!-- Semantic Kernel core dependency -->
      <dependency>
        <groupId>com.microsoft.semantic-kernel</groupId>
        <artifactId>semantickernel-core</artifactId>
        <version>0.3.6</version>
      </dependency>

    For more information, check the Semantic Kernel Installation Guide

  2. Initialize Semantic Kernel:

    • Create a new project in your preferred language
    • Import the Semantic Kernel package
    • Set up a basic kernel instance
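
    For example, a minimal kernel setup in Python (a sketch only; model services are attached in the next step):

    import semantic_kernel as sk

    # Create a bare kernel instance; services are added in step 3
    kernel = sk.Kernel()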
  3. Configure Custom LLM:

    • Semantic Kernel has no official support for custom LLMs, but as a workaround you can point its existing OpenAI connectors at relaxAI’s OpenAI-compatible API.

    C# Example:

    using Microsoft.SemanticKernel;
    using Microsoft.SemanticKernel.Connectors.OpenAI;

    // Create a new kernel builder
    var builder = Kernel.CreateBuilder();

    // Add an OpenAI-compatible chat completion service with a custom endpoint
    builder.AddOpenAIChatCompletion(
        modelId: "your-model-name",
        apiKey: "your-api-key",
        endpoint: new Uri("https://api.relax.ai/v1")
    );

    // Build the kernel
    var kernel = builder.Build();

    Python Example:

    import semantic_kernel as sk
    from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

    # Initialize the kernel
    kernel = sk.Kernel()

    # Configure an OpenAI-compatible chat completion service
    service = OpenAIChatCompletion(ai_model_id="DeepSeek-R1", api_key="relax_api_key")
    # Point the underlying client at relaxAI's endpoint
    service.client.base_url = "https://api.relax.ai/v1"

    # Register the service
    kernel.add_service(service, overwrite=True)
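
    Alternatively, instead of mutating the client after construction, the connector can take a preconfigured OpenAI client. A minimal sketch, assuming the semantic-kernel 1.x and openai Python packages:

    from openai import AsyncOpenAI
    from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

    # Build an OpenAI client pointed at relaxAI, then hand it to the connector
    client = AsyncOpenAI(api_key="relax_api_key", base_url="https://api.relax.ai/v1")
    service = OpenAIChatCompletion(ai_model_id="DeepSeek-R1", async_client=client)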
  4. Create Semantic Functions:

    • Define natural language prompts as semantic functions
    • Organize related functions into plugins
    • Use semantic functions to interact with the LLM

    Python Example:

    import asyncio

    import semantic_kernel as sk
    from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
    from semantic_kernel.contents import ChatHistory
    from semantic_kernel.functions import KernelArguments

    # Configure the relaxAI-backed chat completion service
    service = OpenAIChatCompletion(ai_model_id="DeepSeek-R1", api_key="relax_api_key")
    # Set the base URL for the service
    service.client.base_url = "https://api.relax.ai/v1"

    # Initialize the kernel and register the service
    kernel = sk.Kernel()
    kernel.add_service(service, overwrite=True)

    # Define a system message for the chat bot.
    # This message sets the context for the conversation.
    system_message = """
    You are a chat bot. Your name is Mosscap and
    you have one goal: figure out what people need.
    Your full name, should you need to know it, is
    Splendid Speckled Mosscap. You communicate
    effectively, but you tend to answer with long
    flowery prose.
    """

    # Create a chat history object with the system message.
    chat_history = ChatHistory(system_message=system_message)

    # Add the chat function to the kernel.
    chat_function = kernel.add_function(
        plugin_name="ChatBot",
        function_name="Chat",
        prompt="{{$chat_history}}{{$user_input}}",
        template_format="semantic-kernel",
    )

    # Run one round of conversation; returns False when the user exits.
    async def chat() -> bool:
        try:
            user_input = input("User:> ")
        except (KeyboardInterrupt, EOFError):
            print("\n\nExiting chat...")
            return False
        if user_input == "exit":
            print("\n\nExiting chat...")
            return False

        # Get the chat message content from the chat completion service.
        kernel_arguments = KernelArguments(
            chat_history=chat_history,
            user_input=user_input,
        )
        answer = await kernel.invoke(plugin_name="ChatBot", function_name="Chat", arguments=kernel_arguments)
        if answer:
            print(f"Mosscap:> {answer}")
            # Add both sides of the exchange to the chat history.
            chat_history.add_user_message(user_input)
            chat_history.add_message(answer.value[0])
        return True

    # Define the main function to run the chat loop.
    async def main() -> None:
        # The chat loop continues until the user types "exit".
        chatting = True
        while chatting:
            chatting = await chat()

    if __name__ == "__main__":
        asyncio.run(main())

    Running the script starts an interactive chat loop; each model reply is printed after the Mosscap:> prompt.

  5. Create and Register Plugins:

    • Organize related semantic functions into plugins (formerly called skills)
    • Register plugins with the kernel

    Python Example:

    from semantic_kernel.functions import KernelArguments

    # Import a plugin from a directory of prompt templates
    text_plugin = kernel.add_plugin(parent_directory="./skills", plugin_name="TextSkill")

    # Use a function from the plugin
    result = await kernel.invoke(function=text_plugin["Summarize"], arguments=KernelArguments(input="Some input text"))
    print(result)

Advanced Configuration

  1. Custom AI Service Configuration:

    • Fine-tune request parameters for your specific model
    • Set up fallback mechanisms for reliability
    • Configure request middleware for monitoring
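
    For example, request parameters can be tuned through execution settings. A minimal sketch, assuming the semantic-kernel 1.x Python package; the values are illustrative:

    from semantic_kernel.connectors.ai.open_ai import OpenAIChatPromptExecutionSettings
    from semantic_kernel.functions import KernelArguments

    # Per-request settings for the relaxAI-backed service
    settings = OpenAIChatPromptExecutionSettings(max_tokens=512, temperature=0.2, top_p=0.9)
    arguments = KernelArguments(settings=settings, user_input="Hello")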
  2. Chaining and Pipelines:

    • Create complex workflows by chaining functions
    • Build pipelines for multi-step processing
    • Handle intermediate results between steps
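
    For example, two functions can be chained by feeding one result into the next. A hedged sketch that reuses the Summarize and Translate functions from the full example below; it must run inside an async function, and long_text stands for any input string:

    from semantic_kernel.functions import KernelArguments

    # Step 1: summarize the input; Step 2: translate the summary
    summary = await kernel.invoke(plugin_name="TextSkill", function_name="Summarize",
                                  arguments=KernelArguments(input=long_text))
    translation = await kernel.invoke(plugin_name="LanguageSkill", function_name="Translate",
                                      arguments=KernelArguments(input=str(summary), language="Spanish"))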
  3. Integration with Native Code:

    • Combine semantic functions with traditional code
    • Implement custom plugins with programming logic
    • Create hybrid AI systems leveraging both approaches
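
    For example, plain Python methods can be exposed as a native plugin alongside semantic functions. A minimal sketch, assuming the semantic-kernel 1.x Python package; the plugin and function names are hypothetical:

    from semantic_kernel.functions import kernel_function

    class TextUtilsPlugin:
        """Native (non-LLM) functions callable from the kernel."""

        @kernel_function(name="word_count", description="Count the words in a block of text.")
        def word_count(self, text: str) -> str:
            return str(len(text.split()))

    # Register the native plugin with an existing kernel instance
    kernel.add_plugin(TextUtilsPlugin(), plugin_name="TextUtils")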

Full Code Example (C#)

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Microsoft.SemanticKernel.Planning;
using Microsoft.SemanticKernel.Memory;

// Create a new kernel builder
var builder = Kernel.CreateBuilder();

// Add the relaxAI-backed chat completion service
builder.AddOpenAIChatCompletion(
    modelId: "DeepSeek-R1",
    apiKey: "relax_api_key",
    endpoint: new Uri("https://api.relax.ai/v1")
);

// Add embedding service for memory
builder.AddOpenAITextEmbeddingGenerationService(
    modelId: "text-embedding-ada-002",
    apiKey: "your-api-key"
);

// Add memory storage
builder.AddMemoryStorage(new VolatileMemoryStore());

// Build the kernel
var kernel = builder.Build();

// Create a translation function (registered with the kernel on creation)
var translateFunction = kernel.CreateSemanticFunction(
    "Translate the following text to {{$language}}: {{$input}}",
    functionName: "Translate",
    skillName: "LanguageSkill",
    maxTokens: 1000,
    temperature: 0.2
);

// Create a summarization function
var summarizeFunction = kernel.CreateSemanticFunction(
    "Summarize the following text in 3 sentences or less: {{$input}}",
    functionName: "Summarize",
    skillName: "TextSkill",
    maxTokens: 500,
    temperature: 0.1
);

// Save information to memory
await kernel.Memory.SaveInformationAsync(
    collection: "documents",
    id: "doc1",
    text: "This is a lengthy document about artificial intelligence and its applications in various industries..."
);

// Create a planner
var planner = new SequentialPlanner(kernel);

// Generate and execute a plan
var plan = await planner.CreatePlanAsync(
    "Retrieve document doc1, summarize it, and translate the summary to Spanish."
);
var result = await kernel.RunAsync(plan);
Console.WriteLine(result);

Full Code Example (Python)

import asyncio

import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.memory import VolatileMemoryStore
from semantic_kernel.planning import SequentialPlanner

async def main() -> None:
    # Initialize kernel
    kernel = sk.Kernel()

    # Add the relaxAI-backed chat completion service
    service = OpenAIChatCompletion(service_id="relaxAI", ai_model_id="DeepSeek-R1", api_key="relax_api_key")
    # Set the base URL for the service
    service.client.base_url = "https://api.relax.ai/v1"
    kernel.add_service(service, overwrite=True)

    # Configure memory
    memory_store = VolatileMemoryStore()
    kernel.register_memory_store(memory_store)

    # Create semantic functions (registered with the kernel on creation)
    translate_function = kernel.create_semantic_function(
        "Translate the following text to {{$language}}: {{$input}}",
        function_name="Translate",
        skill_name="LanguageSkill",
        max_tokens=1000,
        temperature=0.2,
        service_id="relaxAI"
    )
    summarize_function = kernel.create_semantic_function(
        "Summarize the following text in 3 sentences or less: {{$input}}",
        function_name="Summarize",
        skill_name="TextSkill",
        max_tokens=500,
        temperature=0.1,
        service_id="relaxAI"
    )

    # Save information to memory
    await kernel.memory.save_information_async(
        collection="documents",
        id="doc1",
        text="This is a lengthy document about artificial intelligence and its applications in various industries..."
    )

    # Create a planner
    planner = SequentialPlanner(kernel)

    # Generate and execute a plan
    plan = await planner.create_plan_async(
        "Retrieve document doc1, summarize it, and translate the summary to Spanish."
    )
    result = await kernel.run_async(plan)
    print(result)

if __name__ == "__main__":
    asyncio.run(main())

Resources