## Quick Start

### Prerequisites

### Run with Docker

```bash
docker run -d --name bloop \
  -p 5332:5332 \
  -v bloop_data:/data \
  -e BLOOP__AUTH__HMAC_SECRET=your-secret-here \
  ghcr.io/jaikoo/bloop:latest
```

### Build from Source

```bash
git clone https://github.com/jaikoo/bloop.git
cd bloop
cargo build --release
./target/release/bloop --config config.toml
```

To include the optional features (DuckDB-powered analytics and/or LLM tracing):

```bash
# Analytics only
cargo build --release --features analytics

# LLM tracing only
cargo build --release --features llm-tracing

# Both
cargo build --release --features "analytics,llm-tracing"
```

### Send Your First Error

```bash
# Your project API key (from Settings → Projects)
API_KEY="bloop_abc123..."

# Compute the HMAC-SHA256 signature of the body using the project key
BODY='{"timestamp":1700000000,"source":"api","environment":"production","release":"1.0.0","error_type":"RuntimeError","message":"Something went wrong"}'
SIG=$(echo -n "$BODY" | openssl dgst -sha256 -hmac "$API_KEY" | awk '{print $2}')

# Send to bloop
curl -X POST http://localhost:5332/v1/ingest \
  -H "Content-Type: application/json" \
  -H "X-Project-Key: $API_KEY" \
  -H "X-Signature: $SIG" \
  -d "$BODY"
```

Open http://localhost:5332 in your browser, register a passkey, and view your error on the dashboard.

## Configuration

Bloop reads from config.toml in the working directory. Every value can be overridden via environment variables using double-underscore separators: BLOOP__SECTION__KEY.
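The TOML-key-to-env-var mapping can be sketched in shell (the `to_env` helper below is illustrative, not part of Bloop):

```shell
# Derive the environment-variable name for a dotted config key:
# prefix with BLOOP__, replace each "." with "__", then uppercase.
to_env() {
  printf 'BLOOP__%s\n' "$1" | sed 's/\./__/g' | tr 'a-z' 'A-Z'
}

to_env server.port        # → BLOOP__SERVER__PORT
to_env auth.hmac_secret   # → BLOOP__AUTH__HMAC_SECRET
```

Note that underscores already inside a key (such as `hmac_secret`) stay single; only the section/key separator becomes a double underscore.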

### Full Reference

```toml
# ── Server ──
[server]
host = "0.0.0.0"
port = 5332

# ── Database ──
[database]
path = "bloop.db"           # SQLite file path
pool_size = 4               # deadpool-sqlite connections

# ── Ingestion ──
[ingest]
max_payload_bytes = 32768   # Max single request body
max_stack_bytes = 8192      # Max stack trace length
max_metadata_bytes = 4096   # Max metadata JSON size
max_message_bytes = 2048    # Max error message length
max_batch_size = 50         # Max events per batch request
channel_capacity = 8192     # MPSC channel buffer size

# ── Pipeline ──
[pipeline]
flush_interval_secs = 2     # Flush timer
flush_batch_size = 500      # Events per batch write
sample_reservoir_size = 5   # Sample occurrences kept per fingerprint

# ── Retention ──
[retention]
raw_events_days = 7         # Raw event TTL
prune_interval_secs = 3600  # How often to run cleanup

# ── Auth ──
[auth]
hmac_secret = "change-me-in-production"
rp_id = "localhost"                 # WebAuthn relying party ID
rp_origin = "http://localhost:5332" # WebAuthn origin
session_ttl_secs = 604800           # Session lifetime (7 days)

# ── Rate Limiting ──
[rate_limit]
per_second = 100
burst_size = 200

# ── Alerting ──
[alerting]
cooldown_secs = 900         # Min seconds between re-fires

# ── SMTP (for email alerts) ──
[smtp]
enabled = false
host = "smtp.example.com"
port = 587
username = ""
password = ""
from = "bloop@example.com"
starttls = true

# ── Analytics (optional, requires --features analytics) ──
[analytics]
enabled = true
cache_ttl_secs = 60
zscore_threshold = 2.5

# ── LLM Tracing (optional, requires --features llm-tracing) ──
[llm_tracing]
enabled = true                    # Runtime toggle
channel_capacity = 4096           # Bounded channel size
flush_interval_secs = 2           # Time-based flush trigger
flush_batch_size = 200            # Count-based flush trigger
max_spans_per_trace = 100         # Validation limit
max_batch_size = 50               # Max traces per batch POST
default_content_storage = "none"  # none | metadata_only | full
cache_ttl_secs = 30               # Query result cache TTL

# ── LLM Proxy (optional, requires --features llm-tracing) ──
# The proxy enables zero-instrumentation tracing by acting as a reverse proxy:
# your app points its LLM client at bloop instead of the provider directly.
[llm_tracing.proxy]
enabled = true                       # Enable proxy endpoints
providers = ["openai", "anthropic"]  # Supported providers
openai_base_url = "https://api.openai.com/v1"        # OpenAI API base
anthropic_base_url = "https://api.anthropic.com/v1"  # Anthropic API base
capture_prompts = true               # Store full prompts
capture_completions = true           # Store completion text
capture_streaming = true             # Capture streaming responses
```

### Environment Variables

| Variable | Overrides | Example |
|----------|-----------|---------|
| `BLOOP__SERVER__PORT` | `server.port` | `8080` |
| `BLOOP__DATABASE__PATH` | `database.path` | `/data/bloop.db` |
| `BLOOP__AUTH__HMAC_SECRET` | `auth.hmac_secret` | `my-production-secret` |
| `BLOOP__AUTH__RP_ID` | `auth.rp_id` | `errors.myapp.com` |
| `BLOOP__AUTH__RP_ORIGIN` | `auth.rp_origin` | `https://errors.myapp.com` |
| `BLOOP__LLM_TRACING__ENABLED` | `llm_tracing.enabled` | `true` |
| `BLOOP__LLM_TRACING__DEFAULT_CONTENT_STORAGE` | `llm_tracing.default_content_storage` | `full` |
| `BLOOP_SLACK_WEBHOOK_URL` | (direct) | Slack incoming webhook URL |
| `BLOOP_WEBHOOK_URL` | (direct) | Generic webhook URL |

Note: `BLOOP_SLACK_WEBHOOK_URL` and `BLOOP_WEBHOOK_URL` are read directly from the environment (not through the config system), so they use single underscores.
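For example, a Docker deployment might combine both styles; the webhook URL below is a placeholder:

```shell
# Config-system overrides use BLOOP__SECTION__KEY; direct variables
# like the Slack webhook use a single underscore.
docker run -d --name bloop \
  -p 8080:8080 \
  -e BLOOP__SERVER__PORT=8080 \
  -e BLOOP__AUTH__HMAC_SECRET=my-production-secret \
  -e BLOOP_SLACK_WEBHOOK_URL=https://hooks.slack.com/services/T000/B000/XXXX \
  -v bloop_data:/data \
  ghcr.io/jaikoo/bloop:latest
```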

## Architecture

Bloop is a single async Rust process. All components run as Tokio tasks within one binary.

```
Client (SDK / curl)
  → Middleware (HMAC auth)
  → Validate (payload checks)
  → Fingerprint (xxhash3)
  → Buffer (MPSC, capacity 8192)
  → Flush (batch writer)
  → Store (SQLite, WAL)
```

### Storage Layers

| Layer | Retention | Purpose |
|-------|-----------|---------|
| Raw events | 7 days (configurable) | Full event payloads for debugging |
| Aggregates | Indefinite | Error counts, first/last seen, status |
| Sample reservoir | Indefinite | 5 sample occurrences per fingerprint |

### Fingerprinting

Every ingested error gets a deterministic fingerprint. The algorithm:

1. **Normalize the message:** strip UUIDs → strip IPs → strip all numbers → lowercase
2. **Extract the top stack frame:** skip framework frames (`UIKitCore`, `node_modules`, etc.) and strip line numbers
3. **Hash:** `xxhash3(source + error_type + route + normalized_message + top_frame)`

This means "Connection refused at 10.0.0.1:5432" and "Connection refused at 192.168.1.2:3306" produce the same fingerprint. You can also supply your own `fingerprint` field in the payload to override the computed value.
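The normalization step can be sketched in shell; the regexes and the `<uuid>`/`<ip>`/`<n>` placeholders are assumptions for illustration, not Bloop's exact patterns:

```shell
# Rough sketch: replace UUIDs, then IPv4 addresses, then any remaining
# digit runs with placeholders, and lowercase the result.
normalize() {
  echo "$1" \
    | sed -E 's/[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}/<uuid>/g' \
    | sed -E 's/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/<ip>/g' \
    | sed -E 's/[0-9]+/<n>/g' \
    | tr 'A-Z' 'a-z'
}

normalize "Connection refused at 10.0.0.1:5432"
# → connection refused at <ip>:<n>
```

Both example messages above normalize to the same string, which is why they hash to the same fingerprint.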

### Backpressure

The ingestion handler pushes events into a bounded MPSC channel (default capacity: 8192). If the channel is full, the event is dropped rather than queued, and the request still succeeds.

Bloop never returns 429 to your clients. Mobile apps and APIs should not retry error reports; if the buffer is full, the event wasn't critical enough to block on.