OpenSlice also offers management support of *multiple Kubernetes Clusters* simultaneously.
For this, you will have to replicate the steps in [Standalone CRIDGE deployment](#standalone-cridge-deployment) for every Cluster. Each CRIDGE instance will be in charge of managing one Kubernetes Cluster.
### MCP Backend Service
The MCP Backend Service provides AI-powered assistance using Ollama and connects to the OpenSlice MCP server.
To configure the MCP Backend Service, update the following fields in the `values.yaml` file:
```yaml
mcpbackend:
  enabled: true
  logLevelRoot: INFO
  logLevelOSL: INFO
  spring:
    logLevel: INFO
    ai:
      ollama:
        model: "gpt-oss:20b" # Change the used model here
        temperature: 0.5
        apiUrl: "http://ollama:11434" # Change the Ollama API URL here
      chat:
        systemPrompt: "You are an OpenSlice AI Assistant." # Customize your initial Assistant prompt
        maxMessages: 100 # Maximum number of messages to keep in context
```
**Key Configuration Fields:**
- `mcpbackend.enabled`: Set to `true` to deploy the MCP Backend Service, `false` to disable it.
- `mcpbackend.spring.ai.ollama.apiUrl`: URL of your Ollama server. Update this if using an external Ollama instance.
- `mcpbackend.spring.ai.ollama.model`: The AI model to use (e.g., `gpt-oss:20b`, `llama2`, `mistral`).
- `mcpbackend.spring.ai.chat.systemPrompt`: Customize your initial Assistant prompt.
- `mcpbackend.spring.ai.ollama.temperature`: Controls randomness (0.0–1.0). Lower values are more deterministic.
- `mcpbackend.spring.ai.chat.maxMessages`: Maximum number of messages to keep in the conversation context.
> **Note:** Ensure the Ollama server is running and the specified model is available before starting the service.
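To check that prerequisite before deploying, a minimal sketch could query Ollama's `/api/tags` endpoint, which lists the models pulled on the server. The URL and model name below simply mirror the example values above; adjust them to your configuration:

```python
import json
from urllib.request import urlopen


def model_available(tags: dict, model: str) -> bool:
    """Check an Ollama /api/tags response for a pulled model.

    /api/tags responds with {"models": [{"name": "gpt-oss:20b", ...}, ...]}.
    """
    return any(m.get("name") == model for m in tags.get("models", []))


def ollama_has_model(api_url: str, model: str) -> bool:
    """Query a running Ollama server (e.g. the apiUrl from values.yaml)."""
    with urlopen(f"{api_url}/api/tags") as resp:
        return model_available(json.load(resp), model)


# Example (requires a reachable Ollama server):
# ollama_has_model("http://ollama:11434", "gpt-oss:20b")
```

If the model is missing, pull it on the Ollama server (`ollama pull gpt-oss:20b`) before starting the MCP Backend Service.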