Saturn: Zero-Configuration AI Service Discovery
Saturn automatically connects your apps to AI services on your network. No configuration is required: connect your device to the same network as your AI server and your apps find it automatically.
What is Saturn?
Saturn makes AI services available to all your devices without any setup. Run a Saturn server once on your network, and every compatible app automatically finds and connects to it.
Have you ever noticed how easy it is to find printers or smart TVs on your local network? Saturn brings that same convenience and technology to AI services. Your apps discover available AI backends on their own and seamlessly switch between them if one goes offline.
Architecture Overview
Saturn uses network discovery (the same technology that lets your devices find printers and speakers) to locate AI services automatically. Your apps find and connect to available servers without you entering any addresses or ports.
The Discovery Process
When a Saturn server is started, it announces itself on your local network. Clients automatically discover these services and connect to the best available option.
1. **Server Announces Itself**: When you start a Saturn server, it broadcasts its presence on your network. This announcement includes everything apps need to connect: address, port, and priority level.
2. **App Discovers Services**: Your app looks for Saturn services on the network. Within moments, it finds all available servers. No configuration files or addresses needed.
3. **Automatic Connection**: The app connects to the best available server based on priority. You control which server is preferred by setting priority values when you start each server.
4. **Seamless Failover**: If a server goes offline, the app automatically switches to the next available one. Your work continues without interruption.
Traditional setups require you to configure connection addresses in each application. With Saturn, you deploy servers once and every client on the network automatically finds them. Add a new server or remove an old one, and clients adapt instantly. This makes it easy to:
- Switch between local and cloud AI providers without changing client code
- Run multiple servers for redundancy and load distribution
- Deploy new services without reconfiguring existing applications
Discovery Methods
Saturn's service discovery is based on mDNS (multicast DNS) and DNS-SD (DNS Service Discovery). This approach has several advantages:
- Universal compatibility: Available on all major operating systems without additional software
- Language agnostic: Can be implemented in any programming language through simple subprocess calls
- Zero configuration: Works automatically on local networks without DNS servers or service registries
- Real-time updates: Clients are notified immediately when services appear or disappear
How It Works for Users
When you start a Saturn server, it broadcasts its presence on your local network using mDNS. Any device on the same network can discover this service without knowing the server's IP address or port. This is similar to how your phone finds wireless printers or how smart home devices appear in your apps.
You can manually query for Saturn services on your network using built-in system tools:
```bash
# Browse for all Saturn services
dns-sd -B _saturn._tcp local

# Get details about a specific service
dns-sd -L "ServiceName" _saturn._tcp local
```
```bash
# Browse for all Saturn services with details
avahi-browse -r _saturn._tcp

# One-shot listing: dump the current list, then exit
avahi-browse -r _saturn._tcp --terminate
```
The `dns-sd` command is available on macOS by default and on Windows when Bonjour Print Services is installed (included with iTunes or available separately). On Linux, install the `avahi-utils` package to get `avahi-browse`.
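A script that works on all three platforms has to pick the right tool at runtime. A small illustrative helper (not part of Saturn itself) that selects the browse command described above:

```python
import shutil

def discovery_command():
    """Return the mDNS browse command available on this system, or None."""
    if shutil.which("dns-sd"):
        # macOS, or Windows with Bonjour Print Services installed
        return ["dns-sd", "-B", "_saturn._tcp", "local"]
    if shutil.which("avahi-browse"):
        # Linux with avahi-utils installed
        return ["avahi-browse", "-r", "_saturn._tcp"]
    return None  # neither tool is installed

cmd = discovery_command()
```

The returned list can be passed straight to `subprocess.Popen`, as the implementation examples later in this document do.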
Quick Start
Running a Server
Deploy at least one Saturn server on your network. Select the appropriate server type based on your AI backend:
Example: Starting an OpenRouter Server
```bash
# Set up your credentials in a .env file
echo "OPENROUTER_API_KEY=your-key-here" > .env
echo "OPENROUTER_BASE_URL=https://openrouter.ai/api/v1/chat/completions" >> .env

# Start the server
python servers/openrouter_server.py
```
Example: Starting an Ollama Server
```bash
# Make sure Ollama is running, then start the Saturn server
python servers/ollama_server.py
```
Setting Priority
Lower numbers mean higher priority. If you run multiple servers, set priorities to control which one apps prefer:
```bash
# Prefer local Ollama (priority 10)
python servers/ollama_server.py --priority 10

# Fall back to cloud (priority 50)
python servers/openrouter_server.py --priority 50
```
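The selection rule is simple enough to sketch in a few lines of Python. The dict fields mirror what Saturn advertises; the data here is illustrative, not actual client code:

```python
def pick_server(services):
    """Pick the preferred service: the lowest priority number wins."""
    if not services:
        return None
    return min(services, key=lambda s: s["priority"])

# With the two servers above, the local Ollama instance is preferred:
servers = [
    {"name": "OpenRouter", "priority": 50},
    {"name": "Ollama", "priority": 10},
]
best = pick_server(servers)  # → the Ollama entry
```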
Simple Chat Client
A minimal client implementation demonstrating Saturn service discovery. Suitable for testing deployments and basic interactive use.
How to Use
```bash
python clients/simple_chat_client.py
```
The client discovers available Saturn services, connects to the highest-priority service, and initiates an interactive chat session.
Commands
- Type your message and press Enter to chat
- Type `clear` to reset the conversation
- Type `quit` to exit
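Under the hood, a chat client sends OpenAI-style requests to the discovered server. A minimal, stdlib-only sketch of building such a request (the address is a placeholder; the `/v1/chat/completions` path is the endpoint Saturn services expose):

```python
import json
import urllib.request

def build_chat_request(base_url, model, messages):
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: a request to a discovered server at a placeholder address
req = build_chat_request(
    "http://192.168.1.100:8080",
    "openrouter/auto",
    [{"role": "user", "content": "Hello"}],
)
```

Sending it with `urllib.request.urlopen(req)` would return the JSON completion; a real client would also handle errors and keep the conversation history.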
File Upload Client
An extended client supporting multimodal interactions with file context. Processes text files, images, and documents for inclusion in AI requests.
How to Use
```bash
python clients/file_upload_client.py
```
Commands
| Command | Description |
|---|---|
| `/upload <filepath>` | Share a file with the AI |
| `/list` | See all uploaded files |
| `/remove <filename>` | Remove a specific file |
| `/clear-files` | Remove all files |
| `/clear` | Reset the conversation |
| `/info` | Check token usage and costs |
Supported Files
- Text files: .py, .js, .ts, .java, .cpp, .c, .h, .rs, .go, .rb, .php, .swift, .kt, .scala, .sh, .bash, .md, .txt, .json, .xml, .yaml, .yml, .toml, .ini, .conf, .log, .sql, .html, .css, .scss, .lua
- Images: .png, .jpg, .jpeg, .gif, .webp
Use `/info` to view current usage statistics including input tokens, output tokens, and total cost. A warning appears when costs exceed 25 cents.
Local Proxy
A bridge that finds Saturn services and makes them available to apps that don't have built-in Saturn support. Run this tool when you want to use Saturn with third-party AI applications; we recommend using Jan.
Use Cases
- Integrating third-party applications (Jan, Continue, etc.) with Saturn services
- Consolidating multiple AI backends behind a single address
- Providing automatic failover for applications without native Saturn support
How to Use
```bash
# Start with default settings (auto-finds available port)
python clients/local_proxy_client.py

# Or specify host and port
python clients/local_proxy_client.py --host 127.0.0.1 --port 8080
```
Connecting Other Applications
Once the proxy is running, it will display the connection address. Configure your AI application (like Jan) to connect to that address. For example:
```
http://127.0.0.1:8080/v1
```
The proxy provides automatic failover when services go offline, combines models from all discovered backends, and routes requests to the best available service based on priority.
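One way the proxy's model merging could work, sketched with illustrative data (this is not the actual proxy code): each model id maps to the most preferred backend that offers it.

```python
def merge_model_catalogs(backends):
    """Merge model lists from several backends into one catalog.

    Iterate from least to most preferred so that the most preferred
    backend (lowest priority number) overwrites duplicate model ids.
    """
    catalog = {}
    for backend in sorted(backends, key=lambda b: b["priority"], reverse=True):
        for model_id in backend["models"]:
            catalog[model_id] = backend["name"]
    return catalog

backends = [
    {"name": "Ollama", "priority": 10, "models": ["llama3"]},
    {"name": "OpenRouter", "priority": 50, "models": ["llama3", "openrouter/auto"]},
]
catalog = merge_model_catalogs(backends)
# "llama3" routes to Ollama (priority 10); "openrouter/auto" exists only on OpenRouter
```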
Troubleshooting
No Saturn services found
- Verify at least one Saturn server is running and has completed initialization
- Confirm client and server are on the same network subnet
- Check firewall rules allow network discovery traffic (UDP port 5353)
- On Linux, ensure avahi-daemon is running
Connection timeouts
- Verify the target service process is running and responsive
- Check if the service is healthy by visiting `/v1/health` in your browser
- For Ollama servers, confirm your system has enough memory for the model
- Review service logs for error messages
Model not found
- Check available models by visiting `/v1/models` in your browser
- Verify model name matches exactly (names are case-sensitive)
- Confirm the service hosting the model is running
- For Ollama, ensure the model is downloaded: `ollama pull <model>`
Frequently Asked Questions
Can I run both cloud and local AI services?
Yes. Multiple Saturn servers can operate simultaneously with different priorities. A typical configuration runs Ollama locally (priority 10) for cost-free inference, with OpenRouter (priority 50) as a fallback. Clients automatically select based on availability and priority order.
What are the security considerations?
Saturn operates on your local network for service discovery. By default, services are accessible to anyone on your network without requiring a password. Security depends on your network setup—if your network is secure, Saturn is secure. For more protection, consider separating your network into segments or adding authentication.
What are the cost implications?
Ollama servers have no usage costs since they run locally on your hardware. OpenRouter servers charge based on your OpenRouter account usage. Network-wide service sharing means your whole network uses one account rather than requiring separate credentials for each application.
How does Ollama handle concurrent requests?
Concurrent request handling depends on your Ollama configuration. By default, requests are queued. For higher throughput, deploy additional servers at different priority levels for load distribution.
Integrator Reference
This documentation covers the technical specification for implementing Saturn-compatible services and clients. It covers the DNS-SD service discovery protocol, OpenAI-compatible REST API, and integration patterns.
Overview
Saturn uses a two-layer architecture for service discovery and communication:
-
Service Discovery LayerServices announce themselves using DNS-based Service Discovery (DNS-SD) over multicast DNS (mDNS). This allows zero-configuration discovery on local networks.
-
API LayerServices expose HTTP endpoints following OpenAI API conventions. All Saturn services speak the same API format, making them interchangeable.
Service Discovery Protocol
Saturn services register under the following mDNS service type:
```
_saturn._tcp.local.
```
Implementation Approaches
There are two primary methods for implementing Saturn service discovery in your application:
- **DNS-SD Command-Based Discovery (Recommended)**: Use native system commands (`dns-sd` or `avahi-browse`) via subprocess calls. This approach works in any programming language and requires no additional libraries. Used by most Saturn servers and clients, including `servers/ollama_server.py`, `servers/openrouter_server.py`, and `clients/local_proxy_client.py`.
- **Zeroconf Python Library**: Use the Python `zeroconf` library for a pure-Python implementation. Provides a cleaner API but requires installing a dependency. Used by `servers/fallback_server.py` and `clients/file_upload_client.py`.
This section focuses on the DNS-SD command-based approach. For the zeroconf library approach, visit the Zeroconf Library section of these docs.
DNS-SD Command Reference
Cross-Platform Commands
Different operating systems provide different tools for DNS-SD operations:
| Platform | Command | Installation |
|---|---|---|
| macOS | `dns-sd` | Built-in (no installation needed) |
| Windows | `dns-sd` | Install Bonjour Print Services (included with iTunes) |
| Linux | `avahi-browse` | `apt install avahi-utils` or `yum install avahi-tools` |
Discovery Process
Step 1: Browse for Services
Send an mDNS query for PTR records to find available services:
```bash
# Browse for services (continues until interrupted)
dns-sd -B _saturn._tcp local
```

```bash
# Browse with full resolution
avahi-browse -r _saturn._tcp

# Parseable output
avahi-browse -rpt _saturn._tcp
```
Example response (dns-sd):
```
Browsing for _saturn._tcp.local.
DATE: ---Sat 18 Nov 2024---
Timestamp     A/R Flags if Domain  Service Type   Instance Name
10:30:00.000  Add     2  4 local.  _saturn._tcp.  OpenRouter
10:30:00.100  Add     2  4 local.  _saturn._tcp.  Ollama
```
Step 2: Resolve Service Details
For each discovered service, query for SRV and TXT records to get connection details:
```bash
dns-sd -L "OpenRouter" _saturn._tcp local
```
Example response:
```
Lookup OpenRouter._saturn._tcp.local.
DATE: ---Sat 18 Nov 2024---
OpenRouter._saturn._tcp.local. can be reached at DESKTOP-ABC.local.:8080
priority=10 version=1.0 api=OpenRouter
```
Step 3: Resolve Host Address
The hostname from Step 2 (e.g., DESKTOP-ABC.local.) needs to be resolved to an IP address. This is typically handled automatically by your system's DNS resolver (for example, `socket.gethostbyname()` in Python).
```
DESKTOP-ABC.local. has address 192.168.1.100
```
Implementing Discovery in Your Application
Below are basic examples showing how to implement Saturn service discovery using subprocess calls to DNS-SD commands. These snippets demonstrate the core pattern used in Saturn's own clients and servers.
Python Implementation
This example shows the subprocess-based approach used by clients/local_proxy_client.py:
```python
import subprocess
import time
import re
import socket

def discover_saturn_services():
    """Discover all Saturn services on the local network."""
    services = []

    # Start browsing for services
    browse_proc = subprocess.Popen(
        ['dns-sd', '-B', '_saturn._tcp', 'local'],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True
    )

    # Let it run for 2 seconds to collect responses
    time.sleep(2.0)
    browse_proc.terminate()
    stdout, _ = browse_proc.communicate(timeout=1)

    # Parse service names from output
    service_names = []
    for line in stdout.split('\n'):
        if 'Add' in line and '_saturn._tcp' in line:
            parts = line.split()
            if len(parts) > 6:
                service_names.append(parts[6])

    # Resolve each service to get details
    for service_name in service_names:
        lookup_proc = subprocess.Popen(
            ['dns-sd', '-L', service_name, '_saturn._tcp', 'local'],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            text=True
        )
        time.sleep(1.5)
        lookup_proc.terminate()
        stdout, _ = lookup_proc.communicate(timeout=2)

        hostname = None
        port = None
        priority = 50  # default

        for line in stdout.split('\n'):
            # Parse hostname and port
            if 'can be reached at' in line:
                match = re.search(r'can be reached at (.+):(\d+)', line)
                if match:
                    hostname = match.group(1).rstrip('.')
                    port = int(match.group(2))
            # Parse priority from TXT record
            if 'priority=' in line:
                parts = line.split('priority=')
                if len(parts) > 1:
                    priority_str = parts[1].split()[0]
                    priority = int(priority_str)

        if hostname and port:
            # Resolve hostname to IP address
            try:
                ip_address = socket.gethostbyname(hostname)
            except socket.gaierror:
                ip_address = hostname
            services.append({
                'name': service_name,
                'address': ip_address,
                'port': port,
                'priority': priority,
                'url': f"http://{ip_address}:{port}"
            })

    # Sort by priority (lowest first)
    services.sort(key=lambda s: s['priority'])
    return services

# Example usage
if __name__ == "__main__":
    services = discover_saturn_services()
    for service in services:
        print(f"{service['name']}: {service['url']} (priority: {service['priority']})")
```
TypeScript/Node.js Implementation
Basic service discovery using Node.js child processes:
```typescript
import { spawn } from 'child_process';

interface SaturnService {
  name: string;
  address: string;
  port: number;
  priority: number;
  url: string;
}

async function discoverSaturnServices(): Promise<SaturnService[]> {
  const services: SaturnService[] = [];

  // Browse for services
  const browseProc = spawn('dns-sd', ['-B', '_saturn._tcp', 'local']);
  let browseOutput = '';
  browseProc.stdout.on('data', (data) => {
    browseOutput += data.toString();
  });

  // Wait 2 seconds for responses
  await new Promise(resolve => setTimeout(resolve, 2000));
  browseProc.kill();

  // Parse service names
  const serviceNames: string[] = [];
  const lines = browseOutput.split('\n');
  for (const line of lines) {
    if (line.includes('Add') && line.includes('_saturn._tcp')) {
      const parts = line.split(/\s+/);
      if (parts.length > 6) {
        serviceNames.push(parts[6]);
      }
    }
  }

  // Resolve each service
  for (const serviceName of serviceNames) {
    const lookupProc = spawn('dns-sd', [
      '-L', serviceName, '_saturn._tcp', 'local'
    ]);
    let lookupOutput = '';
    lookupProc.stdout.on('data', (data) => {
      lookupOutput += data.toString();
    });

    await new Promise(resolve => setTimeout(resolve, 1500));
    lookupProc.kill();

    // Parse connection details
    let hostname: string | null = null;
    let port: number | null = null;
    let priority = 50;

    const lookupLines = lookupOutput.split('\n');
    for (const line of lookupLines) {
      const reachMatch = line.match(/can be reached at (.+):(\d+)/);
      if (reachMatch) {
        hostname = reachMatch[1].replace(/\.$/, '');
        port = parseInt(reachMatch[2]);
      }
      if (line.includes('priority=')) {
        const priorityMatch = line.match(/priority=(\d+)/);
        if (priorityMatch) {
          priority = parseInt(priorityMatch[1]);
        }
      }
    }

    if (hostname && port) {
      services.push({
        name: serviceName,
        address: hostname,
        port,
        priority,
        url: `http://${hostname}:${port}`
      });
    }
  }

  // Sort by priority
  services.sort((a, b) => a.priority - b.priority);
  return services;
}

// Example usage
discoverSaturnServices().then(services => {
  services.forEach(service => {
    console.log(`${service.name}: ${service.url} (priority: ${service.priority})`);
  });
});
```
Go Implementation
Basic service discovery using Go's exec package:
```go
package main

import (
	"bufio"
	"context"
	"fmt"
	"os/exec"
	"regexp"
	"sort"
	"strconv"
	"strings"
	"time"
)

type SaturnService struct {
	Name     string
	Address  string
	Port     int
	Priority int
	URL      string
}

func discoverSaturnServices() ([]SaturnService, error) {
	var services []SaturnService

	// Create context with timeout for the browse command
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// Browse for services
	cmd := exec.CommandContext(ctx, "dns-sd", "-B", "_saturn._tcp", "local")
	output, _ := cmd.Output()

	// Parse service names
	serviceNames := []string{}
	scanner := bufio.NewScanner(strings.NewReader(string(output)))
	for scanner.Scan() {
		line := scanner.Text()
		if strings.Contains(line, "Add") && strings.Contains(line, "_saturn._tcp") {
			parts := strings.Fields(line)
			if len(parts) > 6 {
				serviceNames = append(serviceNames, parts[6])
			}
		}
	}

	// Compile the parsing patterns once, outside the loop
	reachRegex := regexp.MustCompile(`can be reached at (.+):(\d+)`)
	priorityRegex := regexp.MustCompile(`priority=(\d+)`)

	// Resolve each service
	for _, serviceName := range serviceNames {
		ctx2, cancel2 := context.WithTimeout(context.Background(), 2*time.Second)
		cmd2 := exec.CommandContext(ctx2, "dns-sd", "-L", serviceName, "_saturn._tcp", "local")
		output2, _ := cmd2.Output()
		cancel2() // release the context as soon as the command finishes

		var hostname string
		var port int
		priority := 50

		scanner2 := bufio.NewScanner(strings.NewReader(string(output2)))
		for scanner2.Scan() {
			line := scanner2.Text()
			// Parse hostname and port
			if matches := reachRegex.FindStringSubmatch(line); matches != nil {
				hostname = strings.TrimSuffix(matches[1], ".")
				port, _ = strconv.Atoi(matches[2])
			}
			// Parse priority
			if matches := priorityRegex.FindStringSubmatch(line); matches != nil {
				priority, _ = strconv.Atoi(matches[1])
			}
		}

		if hostname != "" && port > 0 {
			services = append(services, SaturnService{
				Name:     serviceName,
				Address:  hostname,
				Port:     port,
				Priority: priority,
				URL:      fmt.Sprintf("http://%s:%d", hostname, port),
			})
		}
	}

	// Sort by priority
	sort.Slice(services, func(i, j int) bool {
		return services[i].Priority < services[j].Priority
	})
	return services, nil
}

func main() {
	services, err := discoverSaturnServices()
	if err != nil {
		fmt.Printf("Error: %v\n", err)
		return
	}
	for _, service := range services {
		fmt.Printf("%s: %s (priority: %d)\n", service.Name, service.URL, service.Priority)
	}
}
```
- These examples demonstrate the core discovery pattern: spawn the command, wait for output, parse results
- Production implementations should handle errors more gracefully and support continuous monitoring
- On Linux, replace `dns-sd` commands with `avahi-browse` and adjust parsing accordingly
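For the Linux case, `avahi-browse -rpt` emits semicolon-separated lines, with resolved entries starting with `=`. A hedged sketch of parsing one such line (the field layout follows avahi's parseable output format; the sample line is illustrative, and the parser ignores escaping edge cases):

```python
def parse_avahi_line(line):
    """Parse one resolved (=) line of `avahi-browse -rpt` output."""
    fields = line.strip().split(";")
    if len(fields) < 10 or fields[0] != "=":
        return None  # ignore unresolved (+) and malformed lines
    name, address, port = fields[3], fields[7], int(fields[8])
    # TXT records arrive as space-separated, quoted key=value pairs
    priority = 50  # default
    for part in fields[9].split(" "):
        part = part.strip('"')
        if part.startswith("priority="):
            priority = int(part.split("=", 1)[1])
    return {"name": name, "address": address, "port": port, "priority": priority}

sample = '=;eth0;IPv4;Ollama;_saturn._tcp;local;host.local;192.168.1.100;8080;"priority=10" "version=1.0"'
service = parse_avahi_line(sample)
```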
Service Registration
To make your AI service discoverable on the network, you need to register it using DNS-SD. This announces your service to all devices on the local network so clients can automatically find and connect to it.
Registration with DNS-SD Commands
The simplest way to register a Saturn service is using the `dns-sd -R` command. This works on macOS and Windows (with Bonjour installed).
Basic Registration
Register a service with a single command:
```bash
# Register a Saturn service
dns-sd -R "OpenRouter" "_saturn._tcp" "local" 8081 "version=1.0" "api=OpenRouter" "priority=50"
```
This command does the following:
- `-R`: Register a service
- `"OpenRouter"`: Service name (appears to clients)
- `"_saturn._tcp"`: Service type (must be exactly this for Saturn)
- `"local"`: Domain (always "local" for local network)
- `8081`: Port number where your service listens
- `"version=1.0"`: Version property in TXT record
- `"api=OpenRouter"`: API type property
- `"priority=50"`: Priority for service selection
The `dns-sd -R` command must stay running for your service to remain advertised. If you terminate it, the service disappears from the network. In practice, you'll typically run it as a background process alongside your HTTP server.
Registration on Linux (Avahi)
On Linux systems, use the avahi-publish command instead:
```bash
# Register with avahi-publish
avahi-publish -s "OpenRouter" _saturn._tcp 8081 "version=1.0" "api=OpenRouter" "priority=50"
```
Registration in Application Code
For production servers, you'll want to register the service programmatically. Here's how Saturn's servers do it:
Python Implementation (subprocess)
This example shows the pattern used in servers/openrouter_server.py and servers/ollama_server.py:
```python
import subprocess

def register_saturn_service(
    service_name: str,
    port: int,
    priority: int = 50,
    api_type: str = "Generic",
    version: str = "1.0",
    features: str = None
) -> subprocess.Popen:
    """
    Register a Saturn service using dns-sd.
    Returns the subprocess Popen object - keep it running!
    """
    # Build the registration command
    cmd = [
        'dns-sd', '-R', service_name, '_saturn._tcp', 'local', str(port),
        f'version={version}',
        f'api={api_type}',
        f'priority={priority}'
    ]

    # Add optional features
    if features:
        cmd.append(f'features={features}')

    try:
        # Start the registration process
        proc = subprocess.Popen(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            text=True
        )
        print(f"Registered '{service_name}' on port {port} with priority {priority}")
        return proc
    except FileNotFoundError:
        print("ERROR: dns-sd not found. Install Bonjour services.")
        return None

# Example usage
if __name__ == "__main__":
    # Register the service
    registration_proc = register_saturn_service(
        service_name="MyAIService",
        port=8080,
        priority=10,
        api_type="OpenRouter",
        features="multimodal,streaming"
    )

    try:
        # Your HTTP server runs here...
        # uvicorn.run(app, host="0.0.0.0", port=8080)
        input("Press Enter to stop...")
    finally:
        # Clean shutdown
        if registration_proc:
            print("Unregistering service...")
            registration_proc.terminate()
            registration_proc.wait(timeout=2)
```
Shell Script Registration
For quick testing or scripting, you can register directly from the command line:
```bash
#!/bin/bash

# Register Saturn service in background
dns-sd -R "MyService" "_saturn._tcp" "local" 8080 \
    "version=1.0" \
    "api=Custom" \
    "priority=50" &

# Store the PID so we can kill it later
DNS_SD_PID=$!

# Cleanup function
cleanup() {
    echo "Unregistering service..."
    kill $DNS_SD_PID
    exit 0
}

# Register cleanup on script exit
trap cleanup SIGINT SIGTERM

# Start your HTTP server
echo "Service registered. Starting server..."
python -m http.server 8080
```
Registration Properties
When registering your service, include these TXT record properties:
| Property | Required | Description | Example |
|---|---|---|---|
| `version` | Yes | Your server version | "1.0", "2.0" |
| `api` | Yes | Backend API type | "OpenRouter", "Ollama", "Custom" |
| `priority` | Yes | Selection priority (lower = preferred) | "10", "50", "100" |
| `features` | No | Comma-separated capabilities | "multimodal,streaming" |
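On the client side, these TXT properties arrive as key=value strings. A small illustrative helper for turning them into typed fields (the priority default of 50 follows the spec; the `version` and `api` fallbacks are assumptions for this sketch, not part of Saturn):

```python
def parse_txt_properties(pairs):
    """Convert raw TXT key=value strings into typed fields with defaults."""
    record = dict(pair.split("=", 1) for pair in pairs)
    features = record.get("features", "")
    return {
        "version": record.get("version", "1.0"),      # assumed fallback
        "api": record.get("api", "Generic"),           # assumed fallback
        "priority": int(record.get("priority", "50")),  # spec default: 50
        "features": features.split(",") if features else [],
    }

props = parse_txt_properties(
    ["version=1.0", "api=OpenRouter", "priority=50", "features=multimodal,streaming"]
)
```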
Testing Your Registration
After registering your service, verify it's discoverable:
```bash
# Browse for all Saturn services
dns-sd -B _saturn._tcp local

# You should see your service appear in the output. Example:
#   Browsing for _saturn._tcp.local.
#   Timestamp     A/R Flags if Domain  Service Type   Instance Name
#   10:30:00.000  Add     2  4 local.  _saturn._tcp.  MyService
```
Then resolve the service to see its details:
```bash
# Look up your specific service
dns-sd -L "MyService" _saturn._tcp local

# You should see the connection details and TXT records:
#   MyService._saturn._tcp.local. can be reached at hostname.local.:8080
#   version=1.0 api=Custom priority=50
```
Client Discovery with Zeroconf Library
As an alternative to subprocess-based DNS-SD commands, clients can use the Python zeroconf library for a pure-Python implementation. This approach provides a cleaner API and better event handling but requires installing an additional dependency.
When to Use This Approach
- Python-only projects: If you're building in Python and don't mind the dependency
- Cleaner API: The library provides a more Pythonic interface than parsing subprocess output
- Event-driven discovery: Built-in support for continuous monitoring with callbacks
- Cross-platform without external commands: Works consistently across platforms without requiring `dns-sd` or `avahi-browse`
Installation
```bash
pip install zeroconf
```
Service Discovery with Zeroconf
This example is based on clients/file_upload_client.py:
```python
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf
import socket
import time

class SaturnServiceListener(ServiceListener):
    def __init__(self):
        self.services = {}

    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        """Called when a new service is discovered"""
        info = zc.get_service_info(type_, name)
        if not info:
            return

        # Extract service details
        address = socket.inet_ntoa(info.addresses[0])
        port = info.port

        # Parse priority from TXT record
        priority = 50  # default
        if info.properties:
            priority_bytes = info.properties.get(b'priority')
            if priority_bytes:
                try:
                    priority = int(priority_bytes.decode('utf-8'))
                except (ValueError, UnicodeDecodeError):
                    pass

        # Clean up service name
        clean_name = name.replace('._saturn._tcp.local.', '')
        self.services[clean_name] = {
            'name': clean_name,
            'address': address,
            'port': port,
            'priority': priority,
            'url': f"http://{address}:{port}"
        }
        print(f"Discovered: {clean_name} at {address}:{port} (priority: {priority})")

    def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        """Called when a service disappears"""
        clean_name = name.replace('._saturn._tcp.local.', '')
        if clean_name in self.services:
            del self.services[clean_name]
            print(f"Service removed: {clean_name}")

    def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        """Called when service information changes"""
        self.add_service(zc, type_, name)

    def get_best_service(self):
        """Get the service with lowest priority"""
        if not self.services:
            return None
        return min(self.services.values(), key=lambda s: s['priority'])

# Example usage
if __name__ == "__main__":
    zeroconf = Zeroconf()
    listener = SaturnServiceListener()

    # Start browsing for services
    browser = ServiceBrowser(zeroconf, "_saturn._tcp.local.", listener)

    try:
        # Wait for services to be discovered
        print("Searching for Saturn services...")
        time.sleep(3)

        # Get the best service
        best = listener.get_best_service()
        if best:
            print(f"\nBest service: {best['name']} at {best['url']}")
        else:
            print("No services found")

        # Keep running to monitor for changes
        input("Press Enter to stop...\n")
    finally:
        browser.cancel()
        zeroconf.close()
```
Comparison: DNS-SD Commands vs Zeroconf Library (Discovery)
| Aspect | DNS-SD Commands | Zeroconf Library |
|---|---|---|
| Language Support | Any language with subprocess support | Python only |
| Dependencies | System-level (dns-sd/avahi-browse) | Python package (pip install) |
| API Complexity | Parse text output from subprocess | Clean object-oriented API |
| Event Handling | Manual polling or parsing streaming output | Built-in callback system |
| Cross-Platform | Different commands per OS | Consistent across all platforms |
| Used In | ollama_server.py, openrouter_server.py, local_proxy_client.py | fallback_server.py, file_upload_client.py |
Use DNS-SD commands if:
- You're not using Python, or want to minimize dependencies
- You need maximum portability across languages
- You want to match Saturn's reference client implementations
Use the zeroconf library if:
- You're building a Python-only client application
- You prefer a cleaner API over subprocess management
- You need sophisticated event-driven service monitoring with callbacks
Server Registration with Zeroconf Library
As an alternative to subprocess-based DNS-SD commands, servers can use the Python zeroconf library to register services. This provides a cleaner API but requires installing an additional dependency.
When to Use This Approach
- Python-only servers: If you're building a server in Python and don't mind the dependency
- Cleaner API: The library provides a more Pythonic interface than managing subprocess lifetimes
- Cross-platform without external commands: Works consistently across platforms without requiring `dns-sd` or `avahi-publish`
- Programmatic control: Easier to start/stop registration dynamically
Installation
```bash
pip install zeroconf
```
Service Registration with Zeroconf
This example shows how to announce a Saturn service, based on servers/fallback_server.py:
```python
from zeroconf import ServiceInfo, Zeroconf
import socket

def register_saturn_service(port: int, priority: int = 50, service_name: str = "MyService"):
    """Register a Saturn service on the network"""
    zeroconf = Zeroconf()

    # Get local IP address
    hostname = socket.gethostname()
    host_ip = socket.gethostbyname(hostname)

    # Create service info
    service_type = "_saturn._tcp.local."
    full_name = f"{service_name}.{service_type}"

    info = ServiceInfo(
        type_=service_type,
        name=full_name,
        port=port,
        addresses=[socket.inet_aton(host_ip)],
        server=f"{hostname}.local.",
        properties={
            'version': '1.0',
            'api': 'MyAPI',
            'priority': str(priority)
        },
        priority=priority
    )

    # Register the service
    zeroconf.register_service(info)
    print(f"Registered {service_name} on port {port} with priority {priority}")
    return zeroconf, info

# Example usage
if __name__ == "__main__":
    zc, info = register_saturn_service(port=8080, priority=10, service_name="TestServer")
    try:
        input("Service registered. Press Enter to stop...\n")
    finally:
        # Use the Zeroconf instance returned by the function
        zc.unregister_service(info)
        zc.close()
        print("Service unregistered")
```
Integration with FastAPI/Flask
For production servers using FastAPI or Flask, integrate registration into the application lifecycle:
```python
from fastapi import FastAPI
from contextlib import asynccontextmanager
from zeroconf import ServiceInfo, Zeroconf
import socket

# Global variables for cleanup
zeroconf_instance = None
service_info = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    """Manage service registration during app lifecycle"""
    global zeroconf_instance, service_info

    # Startup: Register service
    print("Registering Saturn service...")
    zeroconf_instance = Zeroconf()
    hostname = socket.gethostname()
    host_ip = socket.gethostbyname(hostname)

    service_info = ServiceInfo(
        type_="_saturn._tcp.local.",
        name="MyAIService._saturn._tcp.local.",
        port=8080,
        addresses=[socket.inet_aton(host_ip)],
        server=f"{hostname}.local.",
        properties={
            'version': '1.0',
            'api': 'Custom',
            'priority': '10'
        }
    )
    zeroconf_instance.register_service(service_info)
    print("Service registered!")

    yield

    # Shutdown: Unregister service
    print("Unregistering service...")
    if zeroconf_instance and service_info:
        zeroconf_instance.unregister_service(service_info)
        zeroconf_instance.close()
    print("Service unregistered")

app = FastAPI(lifespan=lifespan)

@app.get("/v1/health")
async def health():
    return {"status": "ok"}
```
Comparison: DNS-SD Commands vs Zeroconf Library (Registration)
| Aspect | DNS-SD Commands | Zeroconf Library |
|---|---|---|
| Language Support | Any language with subprocess support | Python only |
| Dependencies | System-level (dns-sd/avahi-publish) | Python package (pip install) |
| Process Management | Must keep subprocess running | Managed internally by library |
| Cleanup | Terminate subprocess on exit | Call unregister_service() |
| Cross-Platform | Different commands per OS | Consistent across all platforms |
| Used In | ollama_server.py, openrouter_server.py | fallback_server.py |
Use DNS-SD commands if:
- You're not using Python, or want to minimize dependencies
- You need maximum portability across languages
- You want to match Saturn's reference server implementations
- You're comfortable managing subprocess lifetimes
Use the zeroconf library if:
- You're building a Python-only server
- You prefer a cleaner API integrated with your application lifecycle
- You want consistent behavior across all platforms without system commands
- You need programmatic control over registration
Service Properties
Each service advertises these properties in its TXT record:
| Property | Type | Description |
|---|---|---|
| `version` | string | Server version (e.g., "1.0", "2.0") |
| `api` | string | API type identifier (e.g., "OpenRouter", "Ollama") |
| `priority` | string | Numeric priority (lower = preferred) |
| `features` | string | Comma-separated feature list (optional) |
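mDNS client libraries typically hand these TXT properties back as raw bytes; a small decoding sketch (an illustrative helper, not part of Saturn itself):

```python
def parse_txt_properties(raw):
    """Decode raw mDNS TXT key/value pairs (bytes) into a string dict.

    Keys with no value (None) become empty strings.
    """
    props = {}
    for key, value in raw.items():
        k = key.decode() if isinstance(key, bytes) else key
        if isinstance(value, bytes):
            v = value.decode()
        else:
            v = value or ""
        props[k] = v
    return props

raw = {b"version": b"1.0", b"api": b"Ollama", b"priority": b"10", b"features": b"chat,models"}
props = parse_txt_properties(raw)
print(props["priority"])             # 10
print(props["features"].split(","))  # ['chat', 'models']
```

The optional `features` value splits on commas, matching the comma-separated list format above.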
Service Selection Algorithm
When multiple services are discovered:
1. Parse the `priority` property from each service's TXT record
2. Sort services by priority in ascending order (lower values first)
3. Connect to the service with the lowest priority number
4. On connection failure, try the next service in priority order
Priority Guidelines
The default priority is 50. Lower numbers indicate higher preference. Clients select the service with the lowest priority number.
| Range | Usage | Example |
|---|---|---|
| 1-20 | Primary/preferred services | Local Ollama (free, private) |
| 21-100 | Standard services (default: 50) | Cloud providers (OpenRouter) |
| 101+ | Fallback services | Emergency backup (999) |
API Endpoints
All Saturn services implement these required endpoints:
| Method | Path | Description |
|---|---|---|
| GET | `/v1/health` | Health check |
| GET | `/v1/models` | List available models |
| POST | `/v1/chat/completions` | Chat completions |
Health Check
curl http://192.168.1.100:8080/v1/health
{
"status": "ok",
"provider": "OpenRouter",
"models_cached": 344,
"features": ["multimodal", "auto-routing", "full-catalog"]
}
List Models
curl http://192.168.1.100:8080/v1/models
{
"models": [
{
"id": "openrouter/auto",
"object": "model",
"owned_by": "openrouter"
},
{
"id": "anthropic/claude-3-opus",
"object": "model",
"owned_by": "anthropic",
"context_length": 200000
}
]
}
Chat Completions
Send messages and receive AI responses.
Request Format
curl -X POST http://192.168.1.100:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-3-opus",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ],
    "max_tokens": 1024,
    "stream": false
  }'
Request Parameters
| Field | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model identifier from `/v1/models` |
| `messages` | array | Yes | Conversation history |
| `max_tokens` | integer | No | Maximum tokens in response |
| `stream` | boolean | No | Enable streaming (default: false) |
Message Roles
| Role | Description |
|---|---|
| `system` | System instructions (context, personality, constraints) |
| `user` | Messages from the human user |
| `assistant` | Previous responses from the AI |
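A conversation is just a list of these role-tagged messages; a minimal sketch of assembling one turn by turn (the helper names are illustrative, not part of Saturn):

```python
def start_conversation(system_prompt=None):
    """Begin a message history, optionally with a system instruction."""
    return [{"role": "system", "content": system_prompt}] if system_prompt else []

def add_turn(history, role, content):
    """Append one message; valid roles are 'system', 'user', and 'assistant'."""
    if role not in ("system", "user", "assistant"):
        raise ValueError(f"unknown role: {role}")
    history.append({"role": role, "content": content})
    return history

msgs = start_conversation("You are a helpful assistant.")
add_turn(msgs, "user", "My name is Alice.")
add_turn(msgs, "assistant", "Nice to meet you, Alice!")
add_turn(msgs, "user", "What is my name?")
print(len(msgs))  # 4
```

Sending the full list on every request is what gives the model conversational memory; the server itself is stateless.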
Response Format
{
"id": "chatcmpl-1699900000",
"object": "chat.completion",
"created": 1699900000,
"model": "anthropic/claude-3-opus",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I help you today?"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 25,
"completion_tokens": 18,
"total_tokens": 43
}
}
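Extracting the assistant's reply and token usage from this structure takes two lookups; a sketch using the sample body above:

```python
import json

# Sample chat.completion body from the section above
raw = '''
{
  "id": "chatcmpl-1699900000",
  "object": "chat.completion",
  "created": 1699900000,
  "model": "anthropic/claude-3-opus",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello! How can I help you today?"},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 25, "completion_tokens": 18, "total_tokens": 43}
}
'''

def extract_reply(body):
    """Pull the assistant text and total token count out of a chat.completion body."""
    data = json.loads(body)
    return data["choices"][0]["message"]["content"], data["usage"]["total_tokens"]

text, total = extract_reply(raw)
print(text)   # Hello! How can I help you today?
print(total)  # 43
```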
Streaming Responses
For real-time responses, set stream: true in the request. The response uses Server-Sent Events (SSE).
curl -X POST http://192.168.1.100:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -N \
  -d '{
    "model": "anthropic/claude-3-opus",
    "messages": [
      {"role": "user", "content": "Count to 5"}
    ],
    "stream": true
  }'
Streaming Response Format
Each chunk is prefixed with data: and followed by two newlines:
# First chunk (role announcement)
data: {"id":"chatcmpl-1699900000","object":"chat.completion.chunk","created":1699900000,"model":"anthropic/claude-3-opus","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}]}

# Content chunks
data: {"id":"chatcmpl-1699900000","object":"chat.completion.chunk","created":1699900000,"model":"anthropic/claude-3-opus","choices":[{"index":0,"delta":{"content":"1, "},"finish_reason":null}]}

data: {"id":"chatcmpl-1699900000","object":"chat.completion.chunk","created":1699900000,"model":"anthropic/claude-3-opus","choices":[{"index":0,"delta":{"content":"2, "},"finish_reason":null}]}

# Final chunk
data: {"id":"chatcmpl-1699900000","object":"chat.completion.chunk","created":1699900000,"model":"anthropic/claude-3-opus","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
Required Headers for Streaming
Content-Type: text/event-stream
Cache-Control: no-cache
Connection: keep-alive
X-Accel-Buffering: no
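On the client side, each `data:` line is JSON until the `[DONE]` sentinel arrives; a minimal parser sketch over chunk lines like those shown above:

```python
import json

def parse_sse_chunks(lines):
    """Yield content deltas from Saturn streaming lines ('data: {...}' / 'data: [DONE]')."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

sample = [
    'data: {"choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{"content":"1, "},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{"content":"2"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}',
    'data: [DONE]',
]
print("".join(parse_sse_chunks(sample)))  # 1, 2
```

The role-announcement and final chunks carry no `content`, so the parser naturally skips them.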
Error Handling
All errors return a JSON response with a detail field:
{
"detail": "Error description here"
}
HTTP Status Codes
| Code | Meaning |
|---|---|
| 200 | Success |
| 400 | Bad request (invalid model, malformed request) |
| 404 | Model not found in any available service |
| 500 | Internal server error (connection error to backend) |
| 502 | Bad gateway (upstream API error, invalid JSON response) |
| 503 | Service unavailable (backend not reachable, no models available) |
| 504 | Gateway timeout (request to backend timed out) |
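A client can fold these codes into its failover logic; the grouping below is one reasonable interpretation of the table, not a Saturn-defined policy:

```python
# Backend-side failures: another Saturn service may succeed
RETRYABLE = {500, 502, 503, 504}

def next_action(status):
    """Map an HTTP status to a client decision: 'ok', 'failover', or 'surface'."""
    if status == 200:
        return "ok"
    if status in RETRYABLE:
        return "failover"  # try the next service in priority order
    return "surface"       # 400/404 and anything else: report to the caller

print(next_action(503))  # failover
print(next_action(404))  # surface
```

Client errors like 400 and 404 are surfaced rather than retried, since a malformed request or unknown model will fail the same way on every service.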
Shell Script Examples
Service Discovery Script
Discover Saturn services on the network and extract connection details:
#!/bin/bash
# Find Saturn services (runs for 3 seconds)
echo "Discovering Saturn services..."
timeout 3 dns-sd -B _saturn._tcp local. 2>/dev/null | grep Add | while read line; do
    SERVICE=$(echo "$line" | awk '{print $NF}')
    echo "Found: $SERVICE"
    # Resolve service details
    dns-sd -L "$SERVICE" _saturn._tcp local. 2>/dev/null | head -2 | tail -1
done
#!/bin/bash
# Find Saturn services and resolve their addresses
echo "Discovering Saturn services..."
avahi-browse -rpt _saturn._tcp 2>/dev/null | grep "^=" | \
while IFS=';' read _ _ _ name _ _ host ip port txt; do
    echo "Service: $name"
    echo "  Address: $ip:$port"
    echo "  Properties: $txt"
done
Complete Discovery and Chat Script
#!/bin/bash

# Configuration
SERVER="http://127.0.0.1:8080"
MODEL="openrouter/auto"

# Check health
echo "Checking server health..."
curl -s "$SERVER/v1/health" | python3 -m json.tool

# List available models (echo -e so \n prints as a newline)
echo -e "\nAvailable models:"
curl -s "$SERVER/v1/models" | python3 -c "
import sys, json
data = json.load(sys.stdin)
for model in data['models']:
    print(f\" - {model['id']}\")
"

# Make a chat request
echo -e "\nSending chat request..."
curl -s -X POST "$SERVER/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "'"$MODEL"'",
    "messages": [
      {"role": "user", "content": "What is 2+2?"}
    ]
  }' | python3 -c "
import sys, json
data = json.load(sys.stdin)
print(data['choices'][0]['message']['content'])
"
Multi-turn Conversation
#!/bin/bash

SERVER="http://127.0.0.1:8080/v1"
MODEL="openrouter/auto"

# First turn
RESPONSE=$(curl -s -X POST "$SERVER/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "'"$MODEL"'",
    "messages": [
      {"role": "user", "content": "My name is Alice."}
    ]
  }')

ASSISTANT_MSG=$(echo "$RESPONSE" | python3 -c "
import sys, json
print(json.load(sys.stdin)['choices'][0]['message']['content'])
")
echo "Assistant: $ASSISTANT_MSG"

# Second turn with history
curl -s -X POST "$SERVER/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "'"$MODEL"'",
    "messages": [
      {"role": "user", "content": "My name is Alice."},
      {"role": "assistant", "content": "'"$ASSISTANT_MSG"'"},
      {"role": "user", "content": "What is my name?"}
    ]
  }' | python3 -c "
import sys, json
print('Assistant:', json.load(sys.stdin)['choices'][0]['message']['content'])
"
Streaming Output
#!/bin/bash

SERVER="http://127.0.0.1:8080/v1"
MESSAGE="Tell me a short story"

curl -s -N -X POST "$SERVER/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openrouter/auto",
    "messages": [
      {"role": "user", "content": "'"$MESSAGE"'"}
    ],
    "stream": true
  }' | while read -r line; do
    if [[ "$line" == data:* ]] && [[ "$line" != "data: [DONE]" ]]; then
        content=$(echo "${line#data: }" | python3 -c "
import sys, json
try:
    data = json.load(sys.stdin)
    content = data.get('choices', [{}])[0].get('delta', {}).get('content', '')
    print(content, end='')
except:
    pass
")
        echo -n "$content"
    fi
done
echo
Saturn Integrations
Integrate Saturn with your favorite applications and workflows. From VLC Media Player extensions that provide context-aware AI interactions to desktop AI clients like Jan, Saturn's integration-friendly architecture makes it easy to bring AI capabilities to the tools you already use.
Integration Philosophy
Saturn is designed to be an integration-friendly platform that works seamlessly with existing tools and workflows. Rather than forcing you to adopt new interfaces, Saturn brings AI capabilities to the applications you already use.
Why Integrate with Saturn?
- Automatic Service Discovery: No manual configuration of endpoints or API keys for each service
- Unified Access: Access all Saturn network services through a single integration point
- Automatic Failover: Requests automatically route to healthy services if one becomes unavailable
- Priority-Based Routing: Services can advertise priority levels for intelligent request routing
- Context-Aware: Integrations can leverage application-specific context (like media metadata) for more relevant AI responses
- Zero-Configuration: mDNS-based discovery means services are found automatically on your network
Available Integrations
Saturn currently provides the following integrations, with more planned for the future:
VLC Extensions: Shared Architecture
Both Saturn VLC extensions (Roast and Chat) share the same underlying architecture, making them reliable, portable, and zero-configuration solutions for bringing AI capabilities to VLC Media Player.
How It Works
1. **Extension Activation**: When you activate either extension in VLC (View → Extensions), the Lua script automatically detects your operating system and launches the appropriate bundled bridge executable.
2. **Bridge Initialization**: The bridge starts an HTTP server and begins discovering Saturn services via mDNS. It writes connection information to a temporary port file, allowing the Lua extension to connect.
3. **Service Discovery**: The bridge continuously discovers Saturn AI services broadcasting on `_saturn._tcp.local`, monitors their health, and maintains a list of available models.
4. **Extension Connection**: The Lua extension connects to the bridge using retry logic (up to 7 attempts with exponential backoff), retrieves available services, and presents them in the UI.
Key Components
- **Lua Extensions**: `saturn_roast.lua` and `saturn_chat.lua` provide the user interface and media context extraction
- **Discovery Bridge**: `vlc_discovery_bridge.py` (bundled as an executable) handles mDNS service discovery and HTTP routing
- **Bridge Executables**: Pre-built for Windows (.exe), macOS, and Linux; no Python installation required
- **Port File System**: Temporary file-based communication for dynamic port discovery
Note: Both extensions are included in the same `vlc_extension` folder. Installing one gives you access to both. The shared bridge executable supports both extensions simultaneously, so you can use them interchangeably or even run them at the same time.
VLC Roast Extension
The Saturn Roast extension brings entertainment to VLC Media Player by providing AI-generated roasts based on your media taste. It analyzes what you're currently watching or listening to and delivers comments about your media choices.
How to Use
- Play any media in VLC (music, video, podcast, etc.)
- Activate the extension: View → Extensions → Saturn Roast Extension
- Wait for the bridge to connect and discover Saturn services
- Select a service and model (or use Auto mode)
- Click the 🔥 Roast Me! 🔥 button
Example Roasts
Installation - Roast Extension
The Roast extension shares the same installation process as the Chat extension. Both extensions are included in the vlc_extension folder and use the same discovery bridge.
Quick Install
1. **Download**: Get the `vlc_extension` folder from the Saturn repository: VLC Extension Repository
2. **Copy to VLC**: Copy the entire folder to your VLC extensions directory:
   - Windows: `%APPDATA%\vlc\lua\extensions\`
   - macOS: `~/Library/Application Support/org.videolan.vlc/lua/extensions/`
   - Linux: `~/.local/share/vlc/lua/extensions/`
3. **Restart VLC**: Restart VLC to load the extension
4. **Activate**: Go to View → Extensions → Saturn Roast Extension
Using the Roast Extension
Getting Roasted
- Make sure media is playing in VLC
- Select a service and model (or leave on Auto)
- Click 🔥 Roast Me! 🔥
- Watch the "Thinking..." message appear
- Enjoy your roast in the styled verdict display
Tips for Better Roasts
- Metadata Matters: Media with complete metadata (artist, album, genre) generates more personalized roasts
- Try Different Models: Different AI models have different comedy styles - experiment to find your favorite
- Context is Key: The AI sees what you're playing, so the more interesting your media choice, the funnier the roast
- Embrace the Chaos: Play something unusual or embarrassing for the best results
VLC Chat Extension
The Saturn Chat extension provides an interactive conversational AI assistant directly within VLC Media Player. Unlike the entertainment-focused Roast extension, Chat is designed for productive, informative interactions about your media content. The AI assistant is aware of what you're watching or listening to and can answer questions, provide context, and help you understand your media better.
What Makes It Special
- Context-Aware Conversations: The AI knows what you're playing and can reference it in responses
- Full Chat History: Maintains conversation history with color-coded messages for easy reading
- Educational & Informative: Perfect for learning about music, films, podcasts, or educational content
- Multi-turn Discussions: Build on previous messages to have natural, flowing conversations
- Playback Position Tracking: Records when conversations started for temporal context
- Flexible System Prompt: Configured to be helpful and focused on media intelligence
How to Use
- Play any media in VLC (music, video, podcast, educational content, etc.)
- Activate the extension: View → Extensions → Saturn VLC Extension
- Wait for the "Saturn Chat - Media Intelligence" window to appear and connect
- Select a service and model (or use Auto mode for automatic routing)
- Type your question or comment in the message input
- Click Send and watch the AI respond with context-aware information
Installation - Chat Extension
What's Included
The VLC extension comes bundled with everything it needs; no additional software installation is required:

- `saturn_chat.lua`: The VLC Lua extension (UI and logic)
- `bridge/vlc_discovery_bridge.exe`: Windows bridge executable
- `bridge/vlc_discovery_bridge`: Linux/macOS bridge executable
Installation Steps
1. **Download the Extension**: Download the `vlc_extension` folder from the Saturn repository.
2. **Copy to VLC Extensions Directory**: Copy the entire folder to your VLC extensions location:
   - Windows: `%APPDATA%\vlc\lua\extensions\`
   - macOS: `~/Library/Application Support/org.videolan.vlc/lua/extensions/`
   - Linux: `~/.local/share/vlc/lua/extensions/`
3. **Restart VLC**: If VLC was already running, restart it to load the extension.
Manual Copy
xcopy /E /I vlc_extension "%APPDATA%\vlc\lua\extensions\vlc_extension"
Verifying Installation
- Open VLC (restart if already running)
- Go to View → Extensions
- You should see Saturn VLC Extension in the list
Download
Download the Saturn VLC extension from the repository:
Using the Extension
Activating Saturn Chat
- Open VLC and play some media (music, video, podcast, etc.)
- Go to View → Extensions → Saturn VLC Extension
- The "Saturn Chat - Media Intelligence" window will open and automatically:
- Launch the bundled discovery bridge executable
- Wait for the bridge to initialize (uses port file for discovery)
- Search for Saturn services via mDNS
- Connect with retry logic (up to 7 attempts with exponential backoff)
Starting a Conversation
- Wait for the status to show "Connected - X service(s) available"
- Select a service from the dropdown and click Select (or leave on Auto)
- Select a model from the model dropdown and click Select
- Type your message in the input box
- Click Send
The AI response incorporates media context including the current file name, playback position, duration, and any available metadata (artist, album, genre). The system prompt tells the AI it's integrated into VLC Media Player and can help with questions about the user's media.
Clearing the Chat
Click Clear Chat to reset conversation history. This clears all messages and resets the stored playback position context.
Deactivating
Close the extension window or select View → Extensions → Saturn VLC Extension again. The extension sends a POST request to the bridge's /shutdown endpoint and cleans up the port file from the temp directory.
Troubleshooting
Cannot connect to bridge
Symptom: Status displays "Cannot connect to bridge at http://127.0.0.1:9876"
Resolution:
- The extension has built-in retry logic (7 attempts with exponential backoff up to 2.5s)
- Allow up to 10 seconds for bridge initialization
- Close and reopen the extension to restart the bridge
- Verify port 9876 is not in use: `netstat -ano | findstr 9876` (Windows) or `lsof -i :9876` (macOS/Linux)
- Check if a firewall or antivirus is blocking the bridge executable
No AI services available
Symptom: Bridge connects but reports "No healthy AI services available" or "Bridge connected but no AI services found"
Causes:
- No Saturn servers running on the network
- Saturn servers running but not broadcasting on `_saturn._tcp.local.`
- Firewall blocking mDNS traffic (UDP port 5353)
- Services unhealthy (failing `/v1/health` checks)
Resolution:
- Start a Saturn server (OpenRouter or Ollama)
- Verify server health: `curl http://<server>:<port>/v1/health`
- Check that the firewall allows mDNS traffic on UDP port 5353
- Click Refresh in the extension to re-query services
Bridge executable not found
Symptoms: Extension loads but bridge doesn't start. VLC logs show "Bridge executable not found at..."
Solutions:
- Verify the `bridge/` folder exists in your extension directory
- Check that it contains the correct executable:
  - Windows: `%APPDATA%\vlc\lua\extensions\vlc_extension\bridge\vlc_discovery_bridge.exe`
  - macOS: `~/Library/Application Support/org.videolan.vlc/lua/extensions/vlc_extension/bridge/vlc_discovery_bridge`
  - Linux: `~/.local/share/vlc/lua/extensions/vlc_extension/bridge/vlc_discovery_bridge`
- On macOS/Linux, ensure the executable has run permissions: `chmod +x bridge/vlc_discovery_bridge`
Port file timeout
Symptom: VLC logs show "Timeout waiting for bridge to start"
Resolution:
- The bridge may be crashing on startup. Run it manually to see error messages:

  # Windows
  cd %APPDATA%\vlc\lua\extensions\vlc_extension\bridge\
  vlc_discovery_bridge.exe --port-file test.txt

  # Linux/macOS
  cd ~/.local/share/vlc/lua/extensions/vlc_extension/bridge/
  ./vlc_discovery_bridge --port-file test.txt
- Check if port 9876 is already in use
- Verify the bridge executable has correct permissions
Viewing debug information
- In VLC, go to Tools → Messages
- Set verbosity to 2 (Debug)
- Look for messages starting with `[Saturn]`
A successful startup sequence looks like:
[Saturn] Extension activated
[Saturn] OS detected: windows
[Saturn] Extension dir: C:\...\vlc_extension\
[Saturn] Launching bridge: C:\...\bridge\vlc_discovery_bridge.exe
[Saturn] Port file: C:\Users\...\AppData\Local\Temp\vlc_bridge_port.txt
[Saturn] Bridge process launched
[Saturn] Port file found: http://127.0.0.1:9876
[Saturn] Waiting for server to fully initialize...
[Saturn] Bridge should be ready
[Saturn] Health check attempt 1/7
[Saturn] Bridge connection successful!
If the sequence breaks, the last message indicates where the failure occurred. The debug label at the bottom of the extension window also shows the current operation status.
Jan AI Client Integration
Jan is an open-source desktop application that provides a privacy-focused alternative to ChatGPT. It allows users to download and run large language models entirely offline on their local machines, while also supporting cloud integrations with providers like OpenAI and Anthropic. Saturn integrates with Jan through the local proxy client, turning Jan's polished interface into a front-end for all Saturn services discovered on your network.
Why Use Jan with Saturn?
- Unified Interface: Access all Saturn network services through Jan's clean, user-friendly chat interface
- Automatic Discovery: No need to manually configure connection endpoints - Saturn handles service discovery
- Flexible Routing: Saturn automatically routes requests to the best available service based on priority
- Failover Support: If one Saturn service goes down, requests seamlessly route to backup services
- Model Aggregation: See all models from all Saturn services in one unified list
- Privacy First: Keep your AI conversations on your local network while enjoying Jan's excellent UX
Setting Up Jan with Saturn
Prerequisites
- At least one Saturn server running on your network (OpenRouter or Ollama)
- Jan desktop application installed on your computer
- Python 3.7+ installed for running the local proxy client
Installation Steps
1. **Download Jan**: Visit jan.ai and download the desktop application for your platform (Windows, macOS, or Linux). Install and launch Jan.

2. **Start a Saturn Server**: If you haven't already, start at least one Saturn server on your network. For example:

       cd servers/
       python openrouter_server.py

3. **Start the Local Proxy Client**: The local proxy client discovers Saturn services and exposes them via an OpenAI-compatible API:

       cd clients/
       python local_proxy_client.py

   The proxy starts on `http://127.0.0.1:8080` by default and begins discovering Saturn services.

4. **Configure Jan**: In Jan, add Saturn as a remote endpoint:
   - Open Jan and go to Settings (gear icon in the top-right)
   - Navigate to the Model Providers or Remote Models section
   - Add a new OpenAI-Compatible endpoint
   - Set the Base URL to `http://127.0.0.1:8080/v1`
   - Leave the API Key field empty or enter any placeholder text (not required)
   - Save the configuration
Using Jan with Saturn
Selecting Models
Once configured, Jan will show all models available from all Saturn services on your network. The local proxy aggregates models from every discovered Saturn server, presenting them as a unified list.
- In Jan's main interface, look for the model selector (usually in the top bar or sidebar)
- You should see models from all your Saturn services listed together
- Select any model to start chatting
- Jan will send requests through the Saturn local proxy, which routes them to the appropriate service
How Routing Works
When you send a message in Jan:
1. **Request Sent to Proxy**: Jan sends your chat request to the local proxy at `http://127.0.0.1:8080/v1`
2. **Service Discovery**: The proxy identifies which Saturn service hosts the selected model
3. **Request Forwarding**: The proxy forwards your request to the appropriate Saturn service
4. **Response Return**: The AI response flows back through the proxy to Jan's interface
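Conceptually, the routing step is a model-to-service lookup built from discovery results; a simplified sketch with hypothetical data (the real proxy also tracks health and priority):

```python
def build_routing_table(services):
    """Map each model ID to the URL of the service that hosts it (first wins on duplicates)."""
    table = {}
    for svc in services:
        for model in svc["models"]:
            table.setdefault(model, svc["url"])
    return table

# Illustrative discovery results, not real endpoints
services = [
    {"url": "http://192.168.1.100:8000", "models": ["gpt-4", "claude-3-opus"]},
    {"url": "http://192.168.1.101:8001", "models": ["llama-3-70b", "gpt-4"]},
]

table = build_routing_table(services)
print(table["llama-3-70b"])  # http://192.168.1.101:8001
```

When two services offer the same model, this sketch keeps the first; the actual proxy breaks such ties by priority.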
Verifying the Integration
To confirm Jan is successfully connected to Saturn:
- Check Models: You should see models from your Saturn services in Jan's model list
- Check Proxy Logs: The local proxy client logs all incoming requests and service routing
- Send Test Message: Try chatting with a Saturn-provided model and verify you get responses
A healthy proxy session logs something like:

[INFO] Local proxy starting on http://127.0.0.1:8080
[INFO] Discovered service: OpenRouter at http://192.168.1.100:8000 (priority=50)
[INFO] Available models: gpt-4, claude-3-opus, llama-3-70b
[INFO] Routing request for model 'gpt-4' to OpenRouter service
[INFO] Request completed successfully
Benefits of This Integration
Open WebUI Integration
Open WebUI is a self-hosted, feature-rich web interface for AI chat that supports multiple model providers through extensible functions (similar to plugins). Saturn integrates with Open WebUI through a custom function that uses DNS-SD service discovery to automatically find and connect to AI services on your local network.
Why Use Open WebUI with Saturn?
- Modern Web Interface: Access Saturn services through Open WebUI's responsive, browser-based chat interface
- Zero-Configuration Discovery: The Saturn function automatically discovers all network services without manual endpoint configuration
- Model Aggregation: See all models from all Saturn services in a single unified list
- Automatic Failover: Built-in support for failing over to backup services when primary services are unavailable
- Flexible Deployment: Run as a desktop app or self-hosted web server
- Streaming Support: Full support for streaming responses with real-time token generation
Installation Options
Open WebUI can be installed in two ways. Both support the Saturn function integration - choose based on your deployment preference.
Option 1: Desktop Application (Recommended for Personal Use)
The desktop app provides a standalone application with built-in server, ideal for individual users.
Option 2: Server Installation (Recommended for Network Sharing)
The server installation runs as a web application, allowing multiple users to access Open WebUI from their browsers.
Choose Desktop App if:
- You want the simplest installation experience
- You're the only user who will access Open WebUI
- You prefer a native application over a web interface
Choose Server Installation if:
- You want multiple users to access the same instance
- You have a home server or NAS for hosting applications
- You prefer Docker-based deployments
- You want to access Open WebUI from multiple devices
Installing the Saturn Function
Once you have Open WebUI installed (either desktop or server), you need to add the Saturn function to enable service discovery.
Prerequisites
- Open WebUI desktop app or server installed and running
- At least one Saturn server running on your network
- Administrator access to Open WebUI settings
Method 1: Discover from Community (Recommended)
1. **Open Function Settings**: Click your Name/Avatar in the top-right corner, then navigate to Settings → Admin Settings → Functions.
2. **Discover Function**: Select Discover a Function at the bottom of the page.
3. **Search for Saturn**: In the search box, type `Saturn`. The Saturn function should appear in the results.
4. **Install Function**: Click on the Saturn function result and follow the prompts to install it.
Method 2: Manual Installation (If Discovery Doesn't Work)
If the function doesn't appear in search results, you can manually install it by copying the source code:
1. **Copy Source Code**: Open `owui_saturn.py` from the repository root directory and copy all of its contents.
2. **Create New Function**: In Open WebUI, go to Settings → Admin Settings → Functions, click the + (plus icon) button, and select Create New Function.
3. **Paste Code**: Delete any template code and paste the entire contents of `owui_saturn.py` into the code editor.
4. **Save Function**: Click Save to create the function. Open WebUI will validate the code and add it to your functions list.
Note: The Saturn function uses the `dns-sd` command for service discovery. On Windows, this requires Bonjour Print Services to be installed. If you don't have it, you can download it from Apple's website or install iTunes (which includes Bonjour). On macOS and Linux, the necessary tools are typically pre-installed.
Using Saturn with Open WebUI
Enabling the Function
- Ensure at least one Saturn server is running on your network
- In Open WebUI, go to Settings → Admin Settings → Functions
- Find the Saturn function in the list
- Toggle it to Enabled
- Refresh Open WebUI (F5 or reload the page)
Selecting Saturn Models
Once enabled, Saturn models will appear in Open WebUI's model selector:
1. **Open Model Selector**: In the chat interface, click the model dropdown (usually at the top of the page).
2. **Find Saturn Models**: Saturn models are prefixed with `SATURN/` followed by the service name and model ID. For example: `SATURN/OpenRouter:anthropic/claude-3-opus`.
3. **Select and Chat**: Select any Saturn model and start chatting. The function automatically routes your requests to the appropriate Saturn service.
Configuring Function Settings
The Saturn function includes configurable "valves" (settings) that you can adjust:
| Setting | Default | Description |
|---|---|---|
| `NAME_PREFIX` | `"SATURN/"` | Prefix added to model names for identification |
| `DISCOVERY_TIMEOUT` | 2.0 seconds | How long to wait when discovering services |
| `ENABLE_FAILOVER` | `true` | Automatically fail over to backup services if the primary fails |
| `CACHE_TTL` | 60 seconds | How long to cache discovered services before re-scanning |
| `REQUEST_TIMEOUT` | 120 seconds | Maximum time to wait for AI service responses |
To adjust these settings:
- Go to Settings → Admin Settings → Functions
- Click on the Saturn function
- Look for the "Valves" section
- Adjust values as needed and save
How Failover Works
If you have multiple Saturn services offering the same model, the function provides automatic failover:
1. **Primary Service Selection**: The function selects the service with the lowest priority number for each model.
2. **Request Attempt**: Your chat request is sent to the primary service.
3. **Automatic Failover**: If the primary service fails, the function automatically tries the next service offering that model.
4. **Transparent Recovery**: Failover happens automatically; you'll receive your response without knowing a backup was used.
For discovery to work, verify that:

- At least one Saturn server is running
- The Saturn server has completed initialization
- Your firewall allows mDNS traffic (UDP port 5353)
- You're on the same network as the Saturn server
Note: Saturn models follow the naming format `SATURN/[ServiceName]:[ModelID]`. This format helps identify which service provides each model and enables proper request routing.
Integration Troubleshooting
Jan can't see any models from Saturn
Symptoms: Jan shows no models or only local models
Solutions:
- Verify the local proxy is running: Check for the "Local proxy starting" log message
- Confirm Saturn services are discovered: Check proxy logs for "Discovered service" messages
- Verify Jan's configuration: Ensure the Base URL is exactly `http://127.0.0.1:8080/v1`
- Test the proxy directly: Visit `http://127.0.0.1:8080/v1/models` in a browser to see the model list
- Restart Jan after changing configuration
Messages fail or timeout in Jan
Symptoms: Chat requests hang, timeout, or return errors
Solutions:
- Check proxy logs for error messages about service routing
- Verify the Saturn service is healthy: `curl http://<service>:<port>/v1/health`
- Ensure the selected model is actually available on the Saturn service
- Check that firewall settings aren't blocking communication between Jan, the proxy, and Saturn services
- Try a different model to isolate whether the issue is model-specific or service-wide
Models appear multiple times
Symptoms: Same model name appears multiple times in Jan's model list
Explanation: This is expected behavior when multiple Saturn services offer the same model
Details: If you have both an OpenRouter server and an Ollama server, and both offer "llama-3-8b", you'll see it twice. The proxy distinguishes them internally by service, but Jan may show them with the same display name. Choose either - Saturn will route to the appropriate service.
Open WebUI shows "No Saturn services discovered"
Symptoms: The only model available is named "No Saturn services discovered..."
Solutions:
- Verify at least one Saturn server is running: Check the server terminal for "Service registered" messages
- Ensure the Saturn function is enabled in Open WebUI
- Check that the `dns-sd` command is available (Windows requires Bonjour Print Services)
- Verify the firewall allows mDNS traffic on UDP port 5353
- Refresh Open WebUI (F5) to trigger a new service discovery scan
- Check function settings and increase `DISCOVERY_TIMEOUT` if on a slow network
Open WebUI function fails with DNS-SD error
Symptoms: Error messages about dns-sd command not found or failing to execute
Solutions:
- Windows: Install Bonjour Print Services from Apple or install iTunes (includes Bonjour)
- macOS: `dns-sd` is built-in; no installation needed. If it's missing, reinstall macOS
- Linux: Install Avahi tools: `sudo apt install avahi-utils` (Ubuntu/Debian) or `sudo yum install avahi-tools` (RHEL/CentOS)
- Test the command manually in a terminal: `dns-sd -B _saturn._tcp local`
Saturn models not appearing in Open WebUI
Symptoms: Function is enabled but no SATURN/ models appear in model selector
Solutions:
- Check that the function is actually enabled: Go to Admin Settings → Functions and verify the toggle is on
- Refresh the page after enabling the function (F5 or hard refresh)
- Check function logs in Open WebUI for error messages
- Verify Saturn services have models available: Visit `http://[saturn-service]:port/v1/models` in a browser
- Wait up to 60 seconds for the service cache to refresh
- Try disabling and re-enabling the function
Chat requests fail in Open WebUI
Symptoms: Models appear but chat requests timeout or return errors
Solutions:
- Check Open WebUI function logs for detailed error messages
- Verify the Saturn service is healthy: `curl http://[service]:[port]/v1/health`
- Increase `REQUEST_TIMEOUT` in the function's valves settings if using slow models
- Ensure network connectivity between the Open WebUI server and Saturn services
- Check that the firewall isn't blocking HTTP traffic between Open WebUI and Saturn services
- Try a different model to isolate whether the issue is model-specific or service-wide