# Providers

Providers connect synapses to LLM APIs.
## Provider Interface

All providers implement:

```go
type Provider interface {
	Call(ctx context.Context, messages []Message, temperature float32) (*ProviderResponse, error)
	Name() string // Returns the provider identifier (e.g., "openai", "anthropic")
}
```
## OpenAI Provider

### Basic Configuration

```go
import "github.com/zoobz-io/zyn/openai"

provider := openai.New(openai.Config{
	APIKey: os.Getenv("OPENAI_API_KEY"),
})
```
### Full Configuration

```go
provider := openai.New(openai.Config{
	APIKey:  os.Getenv("OPENAI_API_KEY"),
	Model:   "gpt-4o",      // Model selection
	BaseURL: "https://...", // Custom endpoint (Azure, proxies)
	OrgID:   "org-...",     // Organization ID
})
```
### Model Selection

| Model | Speed | Cost | Best For |
|---|---|---|---|
| `gpt-4o` | Medium | High | Complex reasoning |
| `gpt-4o-mini` | Fast | Low | Most use cases |
| `gpt-4-turbo` | Medium | Medium | Large context |
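The table above can be encoded as a small selection helper. This is an illustrative sketch — the workload categories are assumptions, not part of the zyn API:

```go
package main

import "fmt"

// pickOpenAIModel maps a workload category to the model the table
// recommends; the category names are illustrative only.
func pickOpenAIModel(workload string) string {
	switch workload {
	case "complex-reasoning":
		return "gpt-4o"
	case "large-context":
		return "gpt-4-turbo"
	default:
		return "gpt-4o-mini" // sensible default for most use cases
	}
}

func main() {
	fmt.Println(pickOpenAIModel("complex-reasoning")) // gpt-4o
}
```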
## Anthropic Provider

```go
import "github.com/zoobz-io/zyn/anthropic"

provider := anthropic.New(anthropic.Config{
	APIKey: os.Getenv("ANTHROPIC_API_KEY"),
	Model:  "claude-3-5-sonnet-20241022",
})
```
### Model Selection

| Model | Speed | Cost | Best For |
|---|---|---|---|
| `claude-3-5-sonnet-20241022` | Fast | Medium | Balanced performance |
| `claude-3-opus-20240229` | Slow | High | Complex analysis |
| `claude-3-haiku-20240307` | Very Fast | Low | Simple tasks |
## Gemini Provider

```go
import "github.com/zoobz-io/zyn/gemini"

provider := gemini.New(gemini.Config{
	APIKey: os.Getenv("GEMINI_API_KEY"),
	Model:  "gemini-1.5-flash",
})
```
## Temperature Control

Temperature affects response randomness. Each synapse type has a default temperature, but you can override it per request via the input struct:

```go
// Set temperature via the input struct
input := zyn.BinaryInput{
	Subject:     "test@example.com",
	Temperature: 0.1, // Override the default
}
result, err := synapse.FireWithInput(ctx, session, input)
```

Default temperatures by synapse type:

- Binary, Extraction, Convert: 0.1 (`DefaultTemperatureDeterministic`)
- Sentiment, Ranking, Analyze: 0.2 (`DefaultTemperatureAnalytical`)
- Classification, Transform: 0.3 (`DefaultTemperatureCreative`)
Guidelines:
- Precision tasks (extraction, conversion) → 0.1
- Analytical tasks (sentiment, ranking) → 0.2
- Creative tasks (transform, classification) → 0.3
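The defaults and guidelines above can be sketched as a lookup helper. The constant values mirror the ones listed; the helper function itself is illustrative, not part of the zyn API:

```go
package main

import "fmt"

// Default temperatures by synapse type, mirroring the constants named
// in the documentation above (values as listed there).
const (
	DefaultTemperatureDeterministic float32 = 0.1
	DefaultTemperatureAnalytical    float32 = 0.2
	DefaultTemperatureCreative      float32 = 0.3
)

// defaultTemperature maps a synapse type to its default; the string
// keys are illustrative.
func defaultTemperature(synapseType string) float32 {
	switch synapseType {
	case "binary", "extraction", "convert":
		return DefaultTemperatureDeterministic
	case "sentiment", "ranking", "analyze":
		return DefaultTemperatureAnalytical
	default: // classification, transform
		return DefaultTemperatureCreative
	}
}

func main() {
	fmt.Println(defaultTemperature("binary")) // 0.1
}
```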
## Mock Provider

For testing without API calls:

```go
// Fixed response
provider := zyn.NewMockProviderWithResponse(`{
	"decision": true,
	"confidence": 0.95,
	"reasoning": ["Valid"]
}`)
```

```go
// Dynamic response
provider := zyn.NewMockProviderWithCallback(func(prompt string, temp float32) (string, error) {
	return `{"decision": true, "confidence": 0.9, "reasoning": ["ok"]}`, nil
})
```

```go
// Error simulation
provider := zyn.NewMockProviderWithError("rate limit exceeded")
```
## Custom Providers

Implement the `Provider` interface:

```go
type CustomProvider struct {
	client *http.Client
}

func (p *CustomProvider) Call(ctx context.Context, messages []zyn.Message, temperature float32) (*zyn.ProviderResponse, error) {
	// Build the request from messages, call your LLM API,
	// then return the response content along with token usage.
	return &zyn.ProviderResponse{
		Content: responseJSON,
		Usage: zyn.TokenUsage{
			Prompt:     promptTokens,
			Completion: completionTokens,
			Total:      totalTokens,
		},
	}, nil
}

func (p *CustomProvider) Name() string {
	return "custom"
}
```
## Provider Selection Strategy

```go
// Primary + fallback pattern
primaryProvider := openai.New(openai.Config{APIKey: key1})
fallbackProvider := anthropic.New(anthropic.Config{APIKey: key2})

fallbackSynapse, _ := zyn.Binary("question", fallbackProvider)
synapse, _ := zyn.Binary("question", primaryProvider,
	zyn.WithFallback(fallbackSynapse),
)
```
## Next Steps

- Sessions Guide - Managing conversation context
- Reliability Guide - Retry, timeout, circuit breaker