# Core Concepts
zyn is built around three core concepts: Synapses, Sessions, and Providers.
## Synapses

A synapse is a typed unit of LLM interaction. Each synapse type handles a specific pattern:
### Decision Synapses

| Synapse | Purpose | Input | Output |
|---|---|---|---|
| Binary | Yes/no decisions | string | bool |
| Classification | Categorize into classes | string | string |
| Ranking | Order items by criteria | string | string |
| Sentiment | Analyze emotional tone | string | SentimentResult |
### Transformation Synapses

| Synapse | Purpose | Input | Output |
|---|---|---|---|
| Extract[T] | Extract structured data | string | T |
| Transform | Transform text | string | string |
| Analyze[T] | Analyze structured data | T | string |
| Convert[T,U] | Convert between types | T | U |
### Creating Synapses

All synapses follow the same pattern:

```go
// Constructor: task description + provider + options
synapse, err := zyn.Binary("Is this valid?", provider, options...)

// Fire: context + session + input
result, err := synapse.Fire(ctx, session, "input text")
```
### The Validator Interface

Custom types used with Extract, Analyze, or Convert must implement Validator:

```go
type Validator interface {
    Validate() error
}
```
This ensures LLM outputs are validated before being returned:

```go
type Order struct {
    ID     string  `json:"id"`
    Amount float64 `json:"amount"`
}

func (o Order) Validate() error {
    if o.ID == "" {
        return fmt.Errorf("order ID required")
    }
    if o.Amount <= 0 {
        return fmt.Errorf("amount must be positive")
    }
    return nil
}
```
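To see the contract in isolation, the `Validate` method can be exercised directly. This standalone sketch repeats the `Order` type from above so it compiles on its own; it is not part of the zyn library itself:

```go
package main

import "fmt"

// Order mirrors the type defined above.
type Order struct {
	ID     string  `json:"id"`
	Amount float64 `json:"amount"`
}

func (o Order) Validate() error {
	if o.ID == "" {
		return fmt.Errorf("order ID required")
	}
	if o.Amount <= 0 {
		return fmt.Errorf("amount must be positive")
	}
	return nil
}

func main() {
	good := Order{ID: "A-1", Amount: 19.99}
	bad := Order{ID: "A-2", Amount: -5}

	fmt.Println(good.Validate()) // <nil>
	fmt.Println(bad.Validate())  // amount must be positive
}
```

A failed `Validate` signals that the LLM's output cannot be trusted, so the caller never receives a half-formed value.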
## Sessions

Sessions manage conversation context across multiple synapse calls.
### Why Sessions?

LLMs are stateless: each call is independent. Sessions maintain history so later calls can reference earlier context:

```go
session := zyn.NewSession()

// First call
classifier.Fire(ctx, session, "I love this product!")
// Session now contains: user message + assistant response

// Second call sees the history
followup.Fire(ctx, session, "Was my previous statement positive?")
// The LLM can answer "Yes" because it sees the previous exchange
```
### Session Lifecycle

```
┌─────────────────────────────────────────────────────────────┐
│ Session                                                     │
├─────────────────────────────────────────────────────────────┤
│ Messages: []Message                                         │
│  ┌─────────────┐ ┌─────────────┐ ┌─────────────┐            │
│  │ User: ...   │ │ Assistant:  │ │ User: ...   │  ...       │
│  │             │ │ ...         │ │             │            │
│  └─────────────┘ └─────────────┘ └─────────────┘            │
├─────────────────────────────────────────────────────────────┤
│ LastUsage: *TokenUsage (from most recent call)              │
└─────────────────────────────────────────────────────────────┘
```
### Session Operations

```go
session := zyn.NewSession()

// Basic operations
messages := session.Messages() // Get all messages
count := session.Len()         // Number of messages
session.Clear()                // Remove all messages

// Message manipulation
msg, _ := session.At(0) // Get message at index
session.Remove(0)       // Remove message at index
session.Insert(0, msg)  // Insert at index
session.Replace(0, msg) // Replace at index

// Bulk operations
session.Prune(5)          // Remove last 5 message pairs (10 messages)
session.Truncate(2, 2)    // Keep first 2 and last 2 messages
session.SetMessages(msgs) // Replace entire history

// Token tracking
usage := session.LastUsage() // Token usage from last call
```
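As a mental model for `Truncate(head, tail)`, the same keep-first-and-last operation on a plain slice looks like this. This is an illustrative sketch, not zyn's implementation, and it uses strings in place of real `Message` values:

```go
package main

import "fmt"

// truncate keeps the first `head` and last `tail` elements of msgs,
// dropping everything in between - a model of what Truncate(2, 2) does.
func truncate(msgs []string, head, tail int) []string {
	if head+tail >= len(msgs) {
		return msgs // nothing to drop
	}
	out := make([]string, 0, head+tail)
	out = append(out, msgs[:head]...)
	out = append(out, msgs[len(msgs)-tail:]...)
	return out
}

func main() {
	msgs := []string{"m1", "m2", "m3", "m4", "m5", "m6"}
	fmt.Println(truncate(msgs, 2, 2)) // [m1 m2 m5 m6]
}
```

Keeping the head preserves any scene-setting early messages while the tail preserves recent context, which is why this shape is a common history-trimming policy.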
### Transactional Updates

Sessions update atomically: messages are only added after a successful call.

```go
session := zyn.NewSession()

synapse.Fire(ctx, session, "input") // Fails after retries

// Session is unchanged - no partial state
// session.Len() == 0
```
## Providers

Providers connect synapses to LLM APIs.

### Provider Interface

```go
type Provider interface {
    Call(ctx context.Context, messages []Message, temperature float32) (*ProviderResponse, error)
    Name() string
}
```
### Available Providers

OpenAI (production):

```go
import "github.com/zoobz-io/zyn/openai"

provider := openai.New(openai.Config{
    APIKey: os.Getenv("OPENAI_API_KEY"),
    Model:  "gpt-4o", // Optional, defaults to gpt-3.5-turbo
})
```
Mock (testing):

```go
provider := zyn.NewMockProviderWithResponse(`{
    "decision": true,
    "confidence": 0.95,
    "reasoning": ["Valid format"]
}`)
```
### Temperature

Temperature controls response randomness. Each synapse has a default temperature, but you can override it per request via the input struct:

```go
// Set temperature via input struct
input := zyn.BinaryInput{
    Subject:     "test@example.com",
    Temperature: 0.1, // Override the default
}
result, _ := synapse.FireWithInput(ctx, session, input)

// Default temperatures by synapse type:
//   Binary, Extract, Convert:    0.1 (deterministic)
//   Sentiment, Ranking, Analyze: 0.2 (analytical)
//   Classification, Transform:   0.3 (creative)
```
## Putting It Together

```go
// Provider connects to the LLM
provider := openai.New(openai.Config{APIKey: key})

// Synapse defines the task
classifier, _ := zyn.Classification(
    "Classify support ticket priority",
    []string{"low", "medium", "high", "critical"},
    provider,
    zyn.WithRetry(3),
)

// Session maintains context
session := zyn.NewSession()

// Fire executes the task
priority, _ := classifier.Fire(ctx, session, "Server is down!")
// priority = "critical"
// Session now has context for follow-up calls
```
## Next Steps
- Architecture - How zyn works internally
- Providers Guide - Provider configuration
- Sessions Guide - Advanced session patterns