Documentation
Get started with Epok in under 5 minutes. Send your first logs and let the intelligence engine do the rest.
Authentication
Epok uses API keys for log ingestion. You'll get a default API key when you sign up. Find it in Settings.
Include your API key in every request using any of these methods:
Authorization: Bearer epk_your_api_key
or
Authorization: Basic base64(epk_your_api_key:x)
or
X-API-Key: epk_your_api_key
Basic Auth is used by Loki-native shippers (FluentBit, Promtail, Grafana Alloy). Set the username to your API key and the password to any value.
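From application code, the Bearer and Basic variants can be built like this (a minimal Python sketch; the `:x` password in the Basic form follows the convention described above):

```python
import base64

def auth_headers(api_key: str) -> dict:
    """Build the three equivalent auth headers Epok accepts."""
    # Basic auth: username is the API key, password is any value ("x" here)
    basic = base64.b64encode(f"{api_key}:x".encode()).decode()
    return {
        "bearer": {"Authorization": f"Bearer {api_key}"},
        "basic": {"Authorization": f"Basic {basic}"},
        "header": {"X-API-Key": api_key},
    }
```

Any one of the three dictionaries can be passed as the headers of an ingestion request.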
Quick Start
Send your first log entry. Replace YOUR_API_KEY with your key from Settings.
curl -X POST https://app.getepok.dev/insert/elasticsearch/_bulk \
-H 'Authorization: Bearer YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '
{"create":{}}
{"_msg":"Application started","level":"info","service":"api","_time":"2026-02-21T00:00:00Z"}
'
That's it. Your logs appear in real time, and anomaly detection activates automatically.
Supported Integrations
Epok accepts logs from any source. Pick the integration that fits your stack.
| Protocol | Endpoint | Use With |
|---|---|---|
| Elasticsearch Bulk | /_bulk | curl, Logstash, Vector, Filebeat |
| Loki Push | /loki/api/v1/push | FluentBit, Promtail, Grafana Alloy, any Loki client |
| OTLP HTTP | /v1/logs | OpenTelemetry Collector, any OTEL SDK |
| FluentBit Native | /api/v1/fluent | FluentBit (with http output, alternative to Loki) |
| Fluentd | /api/v1/fluentd | Fluentd (out_http plugin) |
| Syslog (HTTP) | /api/v1/syslog | rsyslog, syslog-ng (via omhttp) |
| CloudWatch | /api/v1/cloudwatch | AWS Lambda subscription filter |
| GCP Cloud Logging | /api/v1/ingest | Cloud Function via Pub/Sub sink |
| Generic JSON | /api/v1/ingest | Any HTTP client, custom apps |
Configuration Examples
Copy-paste configs for every supported shipper. Replace YOUR_API_KEY with your key.
curl
POST /insert/elasticsearch/_bulk
The fastest way to test. Send a log line from your terminal.
curl -X POST https://app.getepok.dev/insert/elasticsearch/_bulk \
-H 'Authorization: Bearer YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '
{"create":{}}
{"_msg":"Application started successfully","level":"info","service":"api","_time":"2026-02-21T00:00:00Z"}
{"create":{}}
{"_msg":"GET /api/users 200 42ms","level":"info","service":"api","status_code":200,"duration_ms":42}
'
FluentBit
POST /loki/api/v1/push
Lightweight log shipper. Ideal for Docker, Kubernetes, and edge devices. Uses native Loki output with Basic Auth.
# /etc/fluent-bit/fluent-bit.conf
[INPUT]
Name tail
Path /var/log/app/*.log
Tag app
[OUTPUT]
Name loki
Match *
Host app.getepok.dev
Port 443
TLS On
HTTP_User YOUR_API_KEY
HTTP_Passwd x
Labels job=fluentbit, host=my-server
drop_single_key on
Vector
POST /_bulk
High-performance observability pipeline by Datadog. Ships logs through its Elasticsearch sink.
# vector.toml
[sources.app_logs]
type = "file"
include = ["/var/log/app/*.log"]
[sinks.epok]
type = "elasticsearch"
inputs = ["app_logs"]
endpoints = ["https://app.getepok.dev"]
bulk.action = "create"
[sinks.epok.request.headers]
Authorization = "Bearer YOUR_API_KEY"
Promtail / Grafana Alloy
POST /loki/api/v1/push
If you already run Promtail or Grafana Alloy, point them at Epok. Native Loki protocol support.
# promtail-config.yml
clients:
- url: https://app.getepok.dev/loki/api/v1/push
basic_auth:
username: YOUR_API_KEY
password: x
scrape_configs:
- job_name: app
static_configs:
- targets: [localhost]
labels:
app: api
__path__: /var/log/app/*.log
Python
POST /loki/api/v1/push
Send logs directly from your application code.
import time, httpx
resp = httpx.post(
"https://app.getepok.dev/loki/api/v1/push",
headers={"Authorization": "Bearer YOUR_API_KEY"},
json={
"streams": [{
"stream": {"app": "myapp", "env": "production"},
"values": [[
str(int(time.time())) + "000000000",
"User signup completed for user_id=4821"
]]
}]
}
)
OpenTelemetry (OTLP)
POST /v1/logs
Native OTLP HTTP support. Works with any OpenTelemetry SDK or Collector.
# otel-collector-config.yml
exporters:
otlphttp:
endpoint: https://app.getepok.dev
headers:
Authorization: "Bearer YOUR_API_KEY"
service:
pipelines:
logs:
receivers: [otlp]
exporters: [otlphttp]
Loki API
POST /loki/api/v1/push
Direct Loki push API. Works with any Loki-compatible client.
curl -X POST https://app.getepok.dev/loki/api/v1/push \
-H 'Authorization: Bearer YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{
"streams": [{
"stream": {"app": "api", "env": "production"},
"values": [
["1771632000000000000", "Application started successfully"],
["1771632001000000000", "GET /api/users 200 42ms"]
]
}]
}'
Fluentd
POST /api/v1/fluentd
Native Fluentd HTTP output. Tag-based service routing.
# /etc/fluentd/fluent.conf
<source>
@type tail
path /var/log/app/*.log
tag app.logs
</source>
<match app.**>
@type http
endpoint https://app.getepok.dev/api/v1/fluentd
headers {"Authorization": "Bearer YOUR_API_KEY"}
json_array false
<format>
@type json
</format>
</match>
Syslog (via FluentBit relay)
UDP/TCP 514 → HTTP
For network appliances (Cisco, Fortinet, Palo Alto) and legacy systems. Use FluentBit as a local relay to convert syslog to HTTP.
# /etc/fluent-bit/syslog-relay.conf
[INPUT]
Name syslog
Listen 0.0.0.0
Port 514
Mode udp
[OUTPUT]
Name loki
Match *
Host app.getepok.dev
Port 443
TLS On
HTTP_User YOUR_API_KEY
HTTP_Passwd x
Labels job=syslog-relay
AWS CloudWatch
POST /api/v1/cloudwatch
Forward CloudWatch Logs via subscription filter. Native gzip decompression.
# Create a Lambda subscription filter that POSTs to Epok.
# CloudWatch → Lambda → Epok
import base64, urllib3
EPOK_URL = "https://app.getepok.dev/api/v1/cloudwatch"
API_KEY = "YOUR_API_KEY"
http = urllib3.PoolManager()
def handler(event, context):
# CloudWatch payload is base64-encoded gzip, send as-is
compressed = base64.b64decode(event["awslogs"]["data"])
http.request("POST", EPOK_URL,
body=compressed,
headers={
"Authorization": f"Bearer {API_KEY}",
"Content-Encoding": "gzip",
"Content-Type": "application/json"
})
GCP Cloud Logging
POST /api/v1/ingest
Forward Google Cloud Logging via Pub/Sub sink and a Cloud Function.
# GCP Cloud Logging → Pub/Sub → Cloud Function → Epok
# 1. Create a log sink that routes to a Pub/Sub topic
# 2. Deploy this Cloud Function as a Pub/Sub subscriber
import base64, json, requests
EPOK_URL = "https://app.getepok.dev/api/v1/ingest"
API_KEY = "YOUR_API_KEY"
def handle_log(event, context):
data = json.loads(base64.b64decode(event["data"]))
entry = {
"_msg": data.get("textPayload", json.dumps(data.get("jsonPayload", {}))),
"level": data.get("severity", "info").lower(),
"service": data.get("resource", {}).get("type", "gcp"),
"_time": data.get("timestamp"),
}
requests.post(EPOK_URL,
headers={"Authorization": f"Bearer {API_KEY}"},
json=[entry])
Generic JSON
POST /api/v1/ingest
Simplest format for custom applications. Send a JSON array or newline-delimited JSON.
curl -X POST https://app.getepok.dev/api/v1/ingest \
-H 'Authorization: Bearer YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '[
{"_msg": "User signed up", "level": "info", "service": "auth", "user_id": 4821},
{"_msg": "Payment processed", "level": "info", "service": "billing", "amount": 29.99}
]'
What Happens Next
Once your logs start flowing, Epok's intelligence engine activates automatically. No configuration needed.
Search and live tail work immediately
As soon as your first log arrives, you can search it and stream it live. No indexing delay.
New errors are detected from the first log
Epok fingerprints every error-level log message. The instant a never-before-seen error appears, it shows up in the New Errors feed.
Silence detection activates within 1 hour
Epok learns each service's expected log cadence. If a service that was sending logs every 30 seconds goes quiet for 5 minutes, you'll get an alert.
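The cadence rule above can be sketched roughly as follows (the 10× factor mirrors the 30-seconds-to-5-minutes example; Epok's actual learned thresholds may differ):

```python
import time

def is_silent(last_seen, expected_interval_s, now=None, factor=10.0):
    """Flag a service as silent when the gap since its last log is much
    longer than its learned cadence. `factor` is an illustrative constant:
    a 30s cadence with factor 10 alerts after 5 minutes of silence."""
    now = time.time() if now is None else now
    return (now - last_seen) > factor * expected_interval_s
```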
Volume baselines build over 7 days
Log rate anomaly detection learns your normal patterns per service, per hour, per day of week. Early detection is active from day one with wider thresholds. Full precision by day seven.
Detectors
Epok runs 16 detection algorithms automatically on every connected log stream. Core detectors are available on every tier, including free; advanced detectors require Pro or above.
Core detectors (Free tier)
Volume anomaly detection
Detects spikes, drops, and flatlines in log volume per service. Learns hourly and daily seasonality over 7 days. Uses z-score against per-hour-of-day, per-day-of-week baselines.
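As an illustration of the approach (not Epok's implementation), a per-bucket z-score check against the 3.0 default threshold looks like:

```python
from statistics import mean, stdev

def volume_zscore(count, history):
    """Z-score of the current minute's log count against the history for
    the same hour-of-day / day-of-week bucket."""
    mu, sigma = mean(history), stdev(history)
    return (count - mu) / sigma if sigma else 0.0

def classify(count, history, threshold=3.0):
    """Default thresholds from the Configuration section: 3.0 / -3.0."""
    z = volume_zscore(count, history)
    if z > threshold:
        return "spike"
    if z < -threshold:
        return "drop"
    return "normal"
```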
New error detection
Fingerprints every error-level log message by normalizing numbers, IPs, UUIDs, and hex strings. When a never-before-seen error pattern appears, it fires immediately. Resurfaced errors (gone >24h) are flagged at lower severity.
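The normalization step can be sketched with a few regex passes (illustrative patterns only; Epok's actual fingerprinting rules may differ):

```python
import re

# Order matters: mask UUIDs before generic hex, hex before bare numbers.
_PATTERNS = [
    (re.compile(r"\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\b", re.I), "<uuid>"),
    (re.compile(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"), "<ip>"),
    (re.compile(r"\b0x[0-9a-f]+\b|\b[0-9a-f]{8,}\b", re.I), "<hex>"),
    (re.compile(r"\b\d+\b"), "<num>"),
]

def fingerprint(message: str) -> str:
    """Normalize the variable parts of an error message so that
    structurally identical errors collapse to one fingerprint."""
    for pattern, token in _PATTERNS:
        message = pattern.sub(token, message)
    return message
```

Two errors that differ only in IDs, addresses, or counters produce the same fingerprint, so only the first occurrence of a pattern counts as "new".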
Pattern clustering
Groups log messages into patterns using the Drain algorithm. Detects when new patterns emerge or existing patterns change frequency. Useful for spotting behavioral shifts after deploys.
Forecast + changepoint detection (EWMA/CUSUM)
Exponentially weighted moving average and cumulative sum algorithms detect gradual drifts and sudden shifts in log volume that z-score anomaly detection might miss.
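For intuition, here are minimal versions of both algorithms (`alpha`, `k`, and `h` are illustrative tuning constants, not Epok's values):

```python
def ewma(values, alpha=0.3):
    """Exponentially weighted moving average: tracks gradual drift."""
    avg = values[0]
    out = [avg]
    for v in values[1:]:
        avg = alpha * v + (1 - alpha) * avg
        out.append(avg)
    return out

def cusum_shift(values, target, k=0.5, h=5.0):
    """One-sided CUSUM: returns the index where the cumulative upward
    deviation from `target` first exceeds decision threshold `h`, else -1.
    Small sustained shifts accumulate even when no single point is extreme."""
    s = 0.0
    for i, v in enumerate(values):
        s = max(0.0, s + (v - target - k))
        if s > h:
            return i
    return -1
```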
Advanced detectors (Pro+)
Silence detection
Alerts when a service that should be logging goes quiet. Learns each service's expected cadence within 1 hour. Catches OOM kills, crashed workers, and deleted cron jobs.
Golden signal monitoring
Watches the four golden signals (latency, traffic, errors, saturation) extracted from log fields like duration_ms, status_code, and level.
Kubernetes intelligence
Detects 20+ K8s failure modes: CrashLoopBackOff, OOMKilled, ImagePullBackOff, FailedScheduling, Node NotReady, DNS failures, probe failures, HPA limits, and more.
Database intelligence
Detects slow queries, connection pool exhaustion, replication lag, lock contention, and deadlocks from database log patterns.
Security intelligence
Detects brute-force login attempts, privilege escalation, unusual access patterns, and authentication anomalies.
Web intelligence
Monitors HTTP status code distributions, response time degradation, and bot traffic anomalies from web server logs.
Infrastructure intelligence
Detects disk pressure, memory pressure, CPU saturation, and network errors from system-level logs.
AWS intelligence
Parses and detects anomalies in AWS service logs (CloudTrail, ELB, Lambda, RDS).
Serverless intelligence
Detects cold start spikes, timeout patterns, and concurrency throttling in serverless function logs.
SLO monitoring
Track error budgets and burn rates. Get alerted when an SLO is on track to breach before the window ends.
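Burn rate is the observed error rate divided by the error budget; a quick sketch:

```python
def burn_rate(errors, total, slo_target):
    """Burn rate = observed error rate / error budget.
    1.0 exhausts the budget exactly at the window's end; >1.0 means the
    SLO is on track to breach early."""
    budget = 1.0 - slo_target          # e.g. 0.001 for a 99.9% SLO
    observed = errors / total if total else 0.0
    return observed / budget
```

For example, 20 errors out of 10,000 requests against a 99.9% SLO is a burn rate of 2: the budget would be gone halfway through the window.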
Threshold rules
Define custom static thresholds for hard business constraints (e.g., payment success rate below 99.9%). Use these for known SLAs alongside automatic anomaly detection.
Composite rules
Combine multiple conditions with AND/OR logic. Fire only when two or more signals align (e.g., error spike AND latency increase on the same service).
Alert Management
Epok handles alert deduplication, grouping, escalation, and lifecycle automatically.
Deduplication
If the same anomaly (same detector, same service, same type) fires again within the suppression window, Epok updates the existing alert instead of creating a new one. The suppression window starts at 30 minutes and escalates dynamically to 60 then 120 minutes for persistent issues.
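In pseudo-implementation terms (the step values come from the paragraph above; the key shape is an assumption for illustration):

```python
SUPPRESSION_STEPS_MIN = [30, 60, 120]  # minutes, escalating for persistent issues

def suppression_window(repeat_count):
    """Suppression window after the Nth re-fire of the same alert;
    caps at the last step."""
    idx = min(repeat_count, len(SUPPRESSION_STEPS_MIN) - 1)
    return SUPPRESSION_STEPS_MIN[idx]

def dedup_key(detector, service, anomaly_type):
    """Identity used to decide whether a firing updates an existing alert."""
    return f"{detector}:{service}:{anomaly_type}"
```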
Severity escalation
Alerts that keep re-firing automatically escalate in severity. An INFO that fires 5 times becomes a WARNING. A WARNING that persists becomes CRITICAL. This ensures persistent issues get the attention they deserve.
Incident grouping
Multiple alerts from the same tenant within a 5-minute window are grouped into a single incident. Epok uses Jaccard similarity to correlate related alerts across services. One Slack message instead of fifteen.
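Jaccard similarity and the grouping decision can be sketched as follows (the 0.5 threshold and the label-set input are assumptions, not Epok's actual parameters):

```python
def jaccard(a, b):
    """Jaccard similarity of two sets: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def should_group(alert_a, alert_b, threshold=0.5):
    """Group two alerts into one incident when their label/service token
    sets overlap enough."""
    return jaccard(set(alert_a["labels"]), set(alert_b["labels"])) >= threshold
```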
Auto-resolve
Alerts automatically resolve when the detector stops producing anomalies for that service for 15 minutes. You can also manually resolve alerts from the dashboard.
Snooze and mute
Snooze an alert for a set duration during maintenance windows. Mute specific services or detector types to suppress known noisy patterns. Feedback from snooze/mute actions trains the self-tuning system.
Analysis Tools
When an alert fires, Epok automatically runs analysis to help you understand what happened, why, and what to do next. Available on Pro and Business tiers.
Root Cause Analysis
Ranks potential root causes by scoring error patterns, causal language signals, timing correlation, and cross-service propagation. Outputs a ranked list of hypotheses with confidence scores.
Error categorization
Classifies errors into 8 categories: infrastructure, dependency, configuration, resource, application, security, data, and timeout. Categories drive different investigation paths and RCA scoring.
What Changed (9 methods)
Compares the anomaly window against a baseline period across 9 dimensions: new error patterns, volume shifts, field distribution changes, new log streams, disappeared streams, latency changes, status code shifts, new field values, and pattern frequency changes.
Blast Radius
Determines which services, endpoints, and users are affected by an incident. Shows the scope of impact to help you prioritize response.
Cascade Timeline
Reconstructs the sequence of failures across services. Shows which service failed first and how the failure propagated through dependencies.
Dimension Lift
Identifies which field values are disproportionately represented in the anomaly. If 90% of errors come from region=us-east-1, Dimension Lift surfaces that automatically.
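Lift compares a value's share of the anomaly window with its share of the baseline; a sketch:

```python
from collections import Counter

def dimension_lift(anomaly_values, baseline_values):
    """Lift of each field value: its share in the anomaly window divided
    by its share in the baseline. Lift >> 1 means over-represented."""
    a, b = Counter(anomaly_values), Counter(baseline_values)
    lifts = {}
    for value, count in a.items():
        share_a = count / len(anomaly_values)
        share_b = b.get(value, 0) / len(baseline_values) if baseline_values else 0
        lifts[value] = share_a / share_b if share_b else float("inf")
    return lifts
```

A region that produces 90% of anomalous errors but only 50% of baseline traffic gets a lift of 1.8 and is surfaced first.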
Cross-service error matching
Matches related errors across different services. When your API returns 500s and your database logs connection timeouts at the same time, Epok links them.
Service dependency graph
Infers service-to-service dependencies from log patterns and error propagation. Visualizes which services depend on what.
Deploy correlation
Detects recent deploys from log patterns (version strings, restart markers, config changes) and correlates anomalies with deploy timing.
Notifications
Configure where Epok sends alerts. Free tier includes 2 channels. Pro includes 10. Business is unlimited.
Slack
Incoming webhook integration. Alerts include severity, affected service, description, and a link to the investigation view. On Business tier, AI-generated incident narratives are included inline.
Add a Slack incoming webhook URL in Settings > Notification Channels.
PagerDuty
Native Events API v2 integration. Alerts map to PagerDuty incidents with severity, dedup key, and custom details. Resolved alerts auto-resolve in PagerDuty.
Add your PagerDuty integration key (Events API v2) in Settings > Notification Channels.
Webhook
Send alert JSON to any HTTP endpoint. Use this to integrate with OpsGenie, Microsoft Teams, Discord, or custom systems.
Add a webhook URL in Settings > Notification Channels. Epok sends a POST with the alert payload as JSON.
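A minimal consumer might parse the POST body like this (the field names are illustrative — inspect a real delivery for the exact payload schema):

```python
import json

def parse_alert(body: bytes) -> dict:
    """Parse an incoming webhook POST body. `severity`, `service`, and
    `description` are assumed field names, not a documented schema."""
    alert = json.loads(body)
    return {
        "severity": alert.get("severity", "unknown"),
        "service": alert.get("service", "unknown"),
        "description": alert.get("description", ""),
    }
```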
Email
Email notifications for alerts. Includes a summary with links to the dashboard for investigation.
Add email addresses in Settings > Notification Channels.
Team Management
Epok supports team collaboration with role-based access control.
Roles
Three roles: Owner (full access, can manage billing and delete tenant), Admin (manage members, API keys, settings), and Member (view alerts, search logs, investigate incidents).
Inviting team members
Owners and admins can create invite links in Settings. New members sign in with Google and are automatically added to your tenant with the role you specify.
Tier limits
Free: 1 user. Pro: 5 users. Business: 25 users. API keys: Free 3, Pro 10, Business unlimited.
Configuration
Epok works with zero configuration out of the box. All settings below are optional and can be adjusted in the dashboard.
Detection sensitivity
Volume anomaly detection uses a z-score threshold (default 3.0 for spikes, -3.0 for drops). You can adjust sensitivity per service if needed. Flatline detection triggers after 3 consecutive minutes of zero logs when the baseline expects activity.
Threshold rules
Create custom rules for hard business constraints. Example: "Alert CRITICAL when payment_success_rate drops below 99.9% for 5 minutes." Pro: 20 rules. Business: unlimited.
SLO monitoring
Define Service Level Objectives with error budget tracking. Epok monitors burn rate and predicts when your SLO will breach. Free: 2 SLOs. Pro: 5 SLOs. Business: 25 SLOs.
Self-tuning thresholds (Pro+)
Epok learns from your feedback. When you snooze, mute, or resolve alerts, the system adjusts sensitivity to reduce noise over time. No manual threshold tuning needed.
API Reference
Key endpoints for programmatic access. All endpoints require authentication via API key.
| Method | Endpoint | Description |
|---|---|---|
| GET | /health | Health check |
| GET | /api/v1/alerts | List alerts (active + recent resolved) |
| GET | /api/v1/alerts/:id | Get alert detail with analysis |
| POST | /api/v1/alerts/:id/resolve | Manually resolve an alert |
| GET | /api/v1/streams | List monitored log streams |
| GET | /api/v1/new-errors | List new error patterns |
| GET | /api/v1/patterns | List detected log patterns |
| GET | /api/v1/search | Full-text log search |
| GET | /api/v1/facets | Field facets for filtering |
| GET | /api/v1/hits | Log volume histogram |
| WS | /api/v1/tail | WebSocket live tail |
| GET | /api/v1/detectors | List registered detectors |
| POST | /api/v1/channels | Add notification channel |
| GET | /api/v1/channels | List notification channels |
| GET | /metrics | Prometheus metrics |
Example: List active alerts
curl https://app.getepok.dev/api/v1/alerts?state=firing \
-H 'Authorization: Bearer YOUR_API_KEY'
Example: Search logs
curl 'https://app.getepok.dev/api/v1/search?query=level%3Aerror&start=-1h&limit=100' \
-H 'Authorization: Bearer YOUR_API_KEY'
Further Reading
20 Kubernetes Failures You Should Be Alerting On
CrashLoopBackOff, OOMKilled, ImagePullBackOff, and 17 more failure modes with automatic detection.
Catch New Errors Before Users Report Them
How error fingerprinting detects never-before-seen errors automatically.
Stop Writing Alert Rules by Hand
Why anomaly detection catches things that static thresholds miss.
What Log Management Actually Costs in 2026
Real pricing at 50 GB, 600 GB, and 3 TB across CloudWatch, Datadog, Grafana, Splunk, Elastic, and Epok.
Ready to get started?
Open Epok Dashboard
Free tier includes 1 GB/day with all intelligence features. No credit card required.