Alerts are signals that something needs attention. Unlike incidents (which are user-facing), alerts are internal notifications for your team. Kodo helps you manage alerts from multiple sources, reduce noise, and ensure nothing falls through the cracks.

Alert Lifecycle

┌─────────┐    ┌──────────────┐    ┌──────────┐
│  Fired  │ →  │ Acknowledged │ →  │ Resolved │
└─────────┘    └──────────────┘    └──────────┘
     ↓              ↓
  Notifies      Stops
  on-call      notifications
  1. Fired: Alert is triggered, notifications sent to on-call
  2. Acknowledged: Someone is looking at it, notifications stop
  3. Resolved: Issue is fixed, alert is closed
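
The lifecycle above can be sketched as a small state machine. This is a minimal illustration of the transitions in the diagram, not Kodo's actual implementation:

```python
# Sketch of the alert lifecycle: fired -> acknowledged -> resolved.
# Both acknowledging and resolving stop notifications.

VALID_TRANSITIONS = {
    "fired": {"acknowledged", "resolved"},   # can ack, or resolve directly
    "acknowledged": {"resolved"},            # once acked, only resolution remains
    "resolved": set(),                       # terminal state
}

class Alert:
    def __init__(self, title):
        self.title = title
        self.state = "fired"      # firing notifies on-call
        self.notifying = True

    def transition(self, new_state):
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
        self.notifying = False    # ack and resolve both stop notifications

alert = Alert("High Error Rate on API")
alert.transition("acknowledged")  # someone is looking at it
alert.transition("resolved")      # issue fixed, alert closed
```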

Alert Sources

Alerts can come from:
  • Uptime Monitors: HTTP endpoint failures
  • Heartbeat Monitors: Missed cron job heartbeats
  • SSL/Domain Monitors: Certificate or domain expiration
  • Beacon SDK: Error thresholds exceeded
  • Metrics: Custom metric thresholds
  • External: Webhooks from your monitoring tools

Connecting External Alert Sources

Receive alerts from tools like Datadog, New Relic, or custom systems:
  1. Go to Dashboard → Alerts → Sources → New Source
  2. Choose the source type (Datadog, New Relic, custom webhook, etc.)
  3. Name the source and create it
  4. Copy the API key provided — this authenticates your webhook requests
Configure your monitoring tool to POST to the webhook endpoint with the API key:
# Send alerts to the webhook endpoint
curl -X POST "https://kodostatus.com/api/alerts/webhook" \
  -H "x-api-key: your_source_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "High Error Rate on API",
    "severity": "critical"
  }'

Firing Alerts

From External Systems

curl -X POST "https://kodostatus.com/api/alerts/webhook" \
  -H "x-api-key: your_source_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "High Error Rate on API",
    "message": "Error rate exceeded 5% for the last 5 minutes",
    "severity": "critical",
    "service_id": "svc_api",
    "dedup_key": "api-error-rate-high",
    "metadata": {
      "error_rate": 7.2,
      "threshold": 5
    }
  }'
The dedup_key prevents duplicate alerts—if an alert with the same key is already active, it updates the existing alert instead of creating a new one.
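
The deduplication behavior can be sketched as follows. This is an illustrative model of the rule described above, not Kodo's server-side code:

```python
# Sketch of dedup_key handling: an incoming alert with the same key as an
# active (unresolved) alert updates it in place instead of creating a duplicate.

active_alerts = {}  # dedup_key -> alert payload

def ingest(payload):
    """Create a new alert, or update the active one with the same dedup_key."""
    key = payload["dedup_key"]
    existing = active_alerts.get(key)
    if existing and existing.get("status") != "resolved":
        existing.update(payload)          # same ongoing issue: update, don't duplicate
        return "updated"
    payload.setdefault("status", "active")
    active_alerts[key] = payload
    return "created"

print(ingest({"dedup_key": "api-error-rate-high", "severity": "critical"}))  # created
print(ingest({"dedup_key": "api-error-rate-high", "severity": "major"}))     # updated
```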

From Workflows

Alerts can also be fired by workflows when conditions are met:
{
  "trigger": "metrics.threshold_exceeded",
  "actions": [
    {
      "type": "fire_alert",
      "title": "Memory usage critical",
      "severity": "critical",
      "service_id": "{{service.id}}"
    }
  ]
}
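
A workflow engine evaluating a rule like the one above might work roughly like this sketch. The trigger name and the {{...}} template syntax come from the example; everything else here is a hypothetical illustration:

```python
import re

def render(template, context):
    """Substitute {{path.to.value}} placeholders from a nested context dict."""
    def lookup(match):
        value = context
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)

def run_workflow(workflow, event, context, fire_alert):
    """If the event matches the trigger, run each fire_alert action."""
    if event != workflow["trigger"]:
        return
    for action in workflow["actions"]:
        if action["type"] == "fire_alert":
            fire_alert(
                title=action["title"],
                severity=action["severity"],
                service_id=render(action["service_id"], context),
            )

fired = []
workflow = {
    "trigger": "metrics.threshold_exceeded",
    "actions": [{
        "type": "fire_alert",
        "title": "Memory usage critical",
        "severity": "critical",
        "service_id": "{{service.id}}",
    }],
}
run_workflow(workflow, "metrics.threshold_exceeded",
             {"service": {"id": "svc_api"}}, lambda **a: fired.append(a))
print(fired[0]["service_id"])  # svc_api
```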

Acknowledging Alerts

Acknowledge an alert to indicate someone is investigating:
  1. Go to Dashboard → Alerts
  2. Find the alert
  3. Click Acknowledge
  4. Optionally add a note: “Looking into this now”

Resolving Alerts

When the issue is fixed:
# Resolve via API
curl -X POST "https://kodostatus.com/api/alerts/alert_abc123/resolve" \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"message": "Scaled up API servers, error rate back to normal"}'

# Or via webhook (for auto-resolution from monitoring tools)
curl -X POST "https://kodostatus.com/api/alerts/webhook" \
  -H "x-api-key: your_source_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "dedup_key": "api-error-rate-high",
    "status": "resolved"
  }'

Alert Suppressions

Temporarily suppress alerts during known events. Create suppressions in Dashboard → Alerts → Suppressions by specifying:
  • Matcher conditions — which alerts to suppress (by service, source, or severity)
  • Time window — when the suppression is active (start and end time)
  • Reason — why alerts are being suppressed
Suppressed alerts are still recorded but don’t trigger notifications. Use suppressions during planned maintenance or while a known fix is deploying.
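
Evaluating a suppression comes down to two checks: is the current time inside the window, and do the matcher conditions fit the alert? A minimal sketch, with field names assumed for illustration:

```python
from datetime import datetime

def is_suppressed(alert, suppression, now):
    """True if the alert matches the suppression's conditions inside its time window."""
    if not (suppression["start"] <= now <= suppression["end"]):
        return False
    # Every matcher field on the suppression must equal the alert's value.
    return all(alert.get(field) == value
               for field, value in suppression["match"].items())

suppression = {
    "match": {"service_id": "svc_api", "severity": "minor"},
    "start": datetime(2024, 6, 1, 2, 0),
    "end": datetime(2024, 6, 1, 4, 0),
    "reason": "Planned database maintenance",
}

alert = {"service_id": "svc_api", "severity": "minor", "title": "Slow queries"}
print(is_suppressed(alert, suppression, datetime(2024, 6, 1, 3, 0)))  # True
print(is_suppressed(alert, suppression, datetime(2024, 6, 1, 5, 0)))  # False
```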

Alert Routing

Route alerts to different teams based on service or severity. Configure routing rules in Dashboard → Alerts → Rules:
  • Conditions — match by service, source, severity, or custom metadata
  • Actions — route to notification channels, escalation policies, or specific on-call schedules
  • Priority — rules are evaluated in priority order; first match wins
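
The "first match wins" evaluation can be sketched like this. Rule shapes and channel names are illustrative assumptions:

```python
def route(alert, rules):
    """Return the action of the first rule whose conditions all match, in priority order."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if all(alert.get(field) == value
               for field, value in rule["conditions"].items()):
            return rule["action"]       # first match wins; stop here
    return "default-channel"            # fallback when no rule matches

rules = [
    {"priority": 1, "conditions": {"severity": "critical"}, "action": "page-oncall"},
    {"priority": 2, "conditions": {"service_id": "svc_api"}, "action": "api-team-slack"},
]

# A critical alert on svc_api matches both rules, but rule 1 wins on priority.
print(route({"severity": "critical", "service_id": "svc_api"}, rules))  # page-oncall
print(route({"severity": "major", "service_id": "svc_api"}, rules))     # api-team-slack
print(route({"severity": "minor", "service_id": "svc_web"}, rules))     # default-channel
```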

Best Practices

  • Deduplicate related alerts to prevent alert fatigue. An ongoing issue should be one alert, not many.
  • Use severity levels consistently:
      • Critical: Immediate action required, pages on-call
      • Major: Needs attention soon, Slack notification
      • Minor: Informational, logged for review
  • Acknowledge promptly. Even if you can’t fix it immediately, acknowledging stops repeated notifications.
  • Resolve alerts when issues are fixed or determined to be non-issues. Unresolved alerts create noise.
  • Regularly audit active suppressions to ensure they’re still needed.