Why We Built Epok
We kept building dashboards nobody watched and alert rules that missed the real problems. So we built the log monitoring we actually wanted.
We didn't start by deciding to build a company. We started by being annoyed.
We were running a handful of services. Standard setup: API server, a few workers, a database, deployed on a small cloud instance. We had logs going to CloudWatch because it was the default. And we had no idea when things broke until someone told us.
Not because we were lazy. We tried the right things. We built Grafana dashboards. We wrote alert rules. We set up Slack notifications. And every time, the same thing happened: the monitoring decayed within weeks. Services changed, thresholds drifted, and the dashboards became wallpaper.
The realization
At some point we asked a simple question: why do we have to tell the tool what to look for? The tool has all our logs. It knows what normal looks like. Why can't it just tell us when something is different?
That question turned out to be the whole product.
We wanted a log monitoring system with zero configuration. Send your logs and get intelligence back. New errors detected automatically. Volume anomalies flagged without writing queries. Silent services caught before they become outages. And all of this working from the moment the first log arrives.
What we decided not to build
We made some deliberate choices early on:
- No proprietary agents. Your existing log shipper (FluentBit, Vector, Promtail, OTEL Collector) works as-is. We accept logs over every standard protocol.
- No dashboard builder. Instead of making you build panels, we built automatic detection that tells you what's wrong without you asking.
- No complex pricing. Flat monthly pricing with included volume. No per-query fees, no cardinality charges, no surprise bills.
- No feature gating on intelligence. Every detector — anomaly, error, silence, K8s, golden signal, security — runs on every tier, including free.
Who Epok is for
We built Epok for teams that look like us. Small, fast-moving, wearing too many hats. Teams where nobody has time to be a full-time SRE. Teams that want to know when something breaks but don't want to maintain monitoring infrastructure.
If you're a team of five deploying three times a day, Epok is for you. If you're an indie developer running a SaaS product and you want to sleep knowing that if your service dies at 3am, your phone will buzz, Epok is for you. If you're spending more on Datadog than on your actual infrastructure, Epok is very much for you.
Where we are now
Epok runs 16 detection algorithms across every connected log stream. New error fingerprinting, volume anomaly detection (spikes, drops, flatlines, forecast breaches), silence detection, pattern clustering, golden signal monitoring, Kubernetes-specific rules, and more. The AI layer explains what the detectors find: root cause analysis, suggested actions, and incident narratives.
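To make the volume-anomaly idea concrete, here is a minimal sketch of spike/drop/flatline detection using a rolling z-score. This is an illustration of the general technique, not Epok's actual detector; the window size and threshold are arbitrary assumptions.

```python
import statistics

def detect_volume_anomaly(counts, window=12, z_threshold=3.0):
    """Classify the newest per-interval log count as a spike, drop,
    flatline, or nothing unusual.

    `counts` is a list of log-line counts per interval, newest last.
    Simplified z-score sketch; real detectors also handle seasonality,
    trends, and forecast breaches.
    """
    if len(counts) < window + 1:
        return None  # not enough history to judge
    history, latest = counts[-(window + 1):-1], counts[-1]
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if latest == 0 and mean > 0:
        return "flatline"  # traffic stopped entirely
    if stdev == 0:
        return None        # perfectly flat history, nothing to compare against
    z = (latest - mean) / stdev
    if z > z_threshold:
        return "spike"
    if z < -z_threshold:
        return "drop"
    return None

print(detect_volume_anomaly(
    [100, 98, 103, 99, 101, 100, 97, 102, 100, 99, 101, 100, 450]))  # → spike
```

The flatline branch is the interesting one: a count of zero against a healthy baseline is treated as its own failure class rather than just a large negative z-score, which is the same distinction the silence detector makes.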
We accept logs over Elasticsearch bulk, Loki push, OTLP, FluentBit, Fluentd, syslog (six variants including Cisco, Fortinet, and Palo Alto), CloudWatch subscription filters, and raw JSON. If you can send HTTP, you can send to Epok.
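As a sketch of the "if you can send HTTP" claim, here is what shipping a single structured log line looks like with nothing but the Python standard library. The endpoint URL and bearer-token header are illustrative placeholders, not Epok's documented ingestion API.

```python
import json
import urllib.request

# Placeholder endpoint; substitute your real ingestion URL and credentials.
INGEST_URL = "https://ingest.example.com/v1/logs"

def build_payload(service, level, message, **fields):
    """Serialize one structured log record as a JSON request body."""
    record = {"service": service, "level": level, "message": message, **fields}
    return json.dumps(record).encode("utf-8")

def send_log(payload, api_key):
    """POST a single JSON log line over plain HTTP."""
    req = urllib.request.Request(
        INGEST_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

payload = build_payload("api-server", "error", "db connection refused", attempt=3)
print(payload.decode())
```

In practice you would rarely write this by hand; the point is that the wire format is ordinary JSON over ordinary HTTP, so any shipper, curl one-liner, or ten-line script can produce it.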
The free tier gives you 1 GB/day with every detection feature we have. No credit card, no expiration, no catch. We want you to use it, find value in it, and upgrade when your needs grow. That's the whole business model.
We built the log monitoring we wanted. We hope you find it useful too.