Slack Integration

Monitor Alerts — Instantly Delivered to Slack

When a node goes offline, a smart contract event fires, or an incident is declared, your team should know in seconds — not minutes. BlackTide routes alerts to the exact Slack channel for each monitor, @mentions on-call engineers by name, and attaches all the context needed to act without switching tools.

  • Alert delivery time: <10s
  • Channel routing: per-monitor
  • On-call targeting: @mentions
  • Setup time: 2 min

Why generic alert emails are failing your team

  • Alert emails get buried in inboxes — by the time someone reads them, an outage has already impacted users for 10+ minutes
  • Generic notifications lack context: no monitor name, no incident severity, no link to the relevant dashboard — engineers waste time hunting for information
  • On-call rotations are invisible to monitoring tools: alerts flood everyone on the team instead of targeting the engineer actually responsible at that moment (see PagerDuty's on-call research)
  • No per-service routing means a minor testnet blip fires into the same channel as a production P0 incident, eroding team trust in alerts
  • Alert fatigue sets in when every notification looks identical regardless of severity — critical alerts start getting ignored

How BlackTide Slack alerts keep your team responsive

  • Alerts arrive in the exact Slack channel you configure per monitor — production incidents go to #incidents, testnet noise stays in #testnet-alerts
  • Rich Slack message cards include monitor name, current status, severity level, affected chain, and a direct link to the incident timeline
  • @mention specific team members or on-call groups using Slack user IDs or group handles configured per alert rule
  • Incident lifecycle messages update the original thread — no duplicate noise, just a single thread that evolves from "down" to "investigating" to "resolved"

Capabilities

Everything you need from a Slack monitoring integration

Purpose-built for engineering teams who live in Slack and cannot afford to miss a critical alert.

Per-monitor channel routing

Configure a different Slack channel for each monitor or alert rule. Route Ethereum mainnet alerts to #prod-alerts, Cosmos validator alerts to #validators, and DeFi protocol health to #defi-ops. Teams with complex infrastructure use routing to create dedicated Slack channels per blockchain or per service, keeping alerts actionable and noise-free.
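To make the routing model concrete, here is a minimal sketch of what per-monitor rules could look like as data. The type and field names below are illustrative assumptions for this page, not BlackTide's actual API:

```ts
// Hypothetical data model for per-monitor routing rules.
type Severity = "P1" | "P2" | "P3" | "P4";

interface RoutingRule {
  monitorPattern: string; // glob-style match on monitor names (assumption)
  channel: string;        // destination Slack channel
  minSeverity?: Severity; // optional per-channel threshold
}

const routing: RoutingRule[] = [
  { monitorPattern: "eth-mainnet-*",      channel: "#prod-alerts" },
  { monitorPattern: "cosmos-validator-*", channel: "#validators" },
  { monitorPattern: "defi-health-*",      channel: "#defi-ops" },
];
```

Whether you express rules in the dashboard or via an API, the shape is the same: one destination per monitor or monitor group, so noise never lands next to production incidents.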

Rich contextual message cards

Each alert includes monitor name, status change (UP → DOWN), severity, affected chain or endpoint, last successful check timestamp, and a direct link to the BlackTide incident page. Engineers can assess severity and start investigating without opening any other tool — every piece of information needed for an initial triage is in the Slack message.
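For illustration, a card like this can be built from Slack's standard Block Kit blocks. The monitor values, timestamp, and incident URL below are placeholders, and the exact layout BlackTide sends may differ:

```ts
// Illustrative Block Kit payload for an alert card; all values are placeholders.
const alertBlocks = [
  {
    type: "header",
    text: { type: "plain_text", text: "🔴 eth-mainnet-rpc is DOWN", emoji: true },
  },
  {
    type: "section",
    fields: [
      { type: "mrkdwn", text: "*Severity:*\nP1" },
      { type: "mrkdwn", text: "*Chain:*\nEthereum mainnet" },
      { type: "mrkdwn", text: "*Status:*\nUP → DOWN" },
      { type: "mrkdwn", text: "*Last successful check:*\n2025-01-01 03:12 UTC" },
    ],
  },
  {
    type: "section",
    text: {
      type: "mrkdwn",
      // Hypothetical incident URL, rendered as a Slack-formatted link.
      text: "<https://blacktide.example/incidents/1234|Open incident timeline>",
    },
  },
];
```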

@mention on-call engineers

Configure @mention targets per alert rule using Slack user IDs or group handles like @oncall-web3. When a P1 incident fires at 3 AM, the exact engineer on rotation gets paged in Slack — not the entire team. Use Slack's native user groups to sync your on-call schedule and BlackTide will target the right person automatically.
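Under the hood, reliable mentions use Slack's ID-based syntax rather than display names. A short sketch of that syntax (the IDs below are placeholders):

```ts
// Slack resolves mentions from IDs, not display names:
//   <@U…>          mentions a single user
//   <!subteam^S…>  mentions a user group, e.g. your on-call group
// The IDs here are placeholders for illustration.
const text =
  "🔴 P1: eth-mainnet-rpc is DOWN " +
  "<@U02ABCDE12> <!subteam^S0123ABCD|@oncall-web3>";
```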

Incident thread updates

BlackTide posts the initial alert as a new Slack message and then threads all subsequent status changes below it — acknowledged, investigating, resolved. Your #incidents channel stays clean with one thread per incident instead of a wall of disconnected messages. Resolution time is visible at a glance from the thread timestamps.
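Threading like this maps directly onto Slack's standard Web API: post the first message, capture its ts, and pass it as thread_ts on every update. A minimal sketch using the official @slack/web-api client (channel name and statuses are illustrative):

```ts
import { WebClient } from "@slack/web-api";

const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

async function postIncidentThread() {
  // Initial alert: one new message per incident.
  const initial = await slack.chat.postMessage({
    channel: "#incidents",
    text: "🔴 DOWN: eth-mainnet-rpc (P1)",
  });

  // Every later status change replies in the same thread, so the
  // channel shows a single evolving thread per incident.
  for (const status of ["acknowledged", "investigating", "resolved"]) {
    await slack.chat.postMessage({
      channel: "#incidents",
      thread_ts: initial.ts, // anchors the reply to the original message
      text: `Status: ${status}`,
    });
  }
}
```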

Flexible alert filtering

Define minimum severity thresholds per Slack channel — send only P1 and P2 alerts to your #critical-alerts channel and route everything else to #monitoring-logs. Combine severity filters with monitor groups to build a notification topology that eliminates noise without missing anything important.
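A severity threshold boils down to a simple rank comparison. A sketch, under the assumption that P1 is the most severe level:

```ts
// Assumption: P1 is the most severe level; lower rank = more urgent.
const SEVERITY_RANK = { P1: 1, P2: 2, P3: 3, P4: 4 } as const;
type Severity = keyof typeof SEVERITY_RANK;

// An alert passes a channel's threshold when it is at least as severe
// as the channel's configured minimum.
function passesThreshold(alert: Severity, min: Severity): boolean {
  return SEVERITY_RANK[alert] <= SEVERITY_RANK[min];
}

passesThreshold("P1", "P2"); // true  → deliver to #critical-alerts
passesThreshold("P3", "P2"); // false → keep it in #monitoring-logs
```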

Use Cases

Who uses BlackTide Slack alerts

Web3 engineering team with on-call rotation

Your on-call engineer is paged in Slack the moment a node goes offline, with full context in the message. No separate PagerDuty login required for initial triage — the Slack card has everything needed to assess the incident.

DeFi protocol with multi-chain production monitors

Mainnet alerts go to #prod-incidents, testnet noise to #testnet-ops. Your team stops ignoring alerts because the routing is tight enough that every message in #prod-incidents is a real production issue.

Small team without a dedicated NOC

Without a dedicated operations team, Slack is your operations center. BlackTide routes the right alerts to the right people so a two-person engineering team can cover 24/7 monitoring without a full NOC setup.

Frequently asked questions

How do I connect BlackTide to Slack?
In BlackTide Settings → Alert Channels, click "Add Channel" and select Slack. You will be redirected to Slack's OAuth flow to authorize BlackTide for your workspace. Select the default channel, then configure per-monitor routing in each monitor's alert settings. Total setup time is under 2 minutes.
Can I send different monitors to different Slack channels?
Yes. Each monitor and each alert rule can have its own Slack channel destination. You can send Ethereum mainnet monitor alerts to #prod-alerts, Cosmos validator alerts to #validators, and all DeFi protocol health checks to #defi-ops. Channel routing is configured individually per alert rule.
Can BlackTide @mention specific people when an alert fires?
Yes. In the alert rule configuration, add one or more Slack user IDs or Slack group handles (such as @oncall-web3) as mention targets. When the alert fires, the Slack message will include those mentions so the right people are notified immediately.
Will I receive duplicate messages for the same incident?
No. BlackTide posts one Slack message per incident and threads all status updates below it — acknowledged, investigating, resolved. Your channel receives a single thread per incident rather than a flood of disconnected notifications.
Can I filter alert severity so only critical alerts go to Slack?
Yes. Each alert channel configuration supports minimum severity thresholds. For example, you can configure one Slack channel to receive only P1 alerts and a second channel to receive all severities for logging purposes. This lets you tune signal-to-noise without losing visibility.

Ready to get BlackTide alerts in Slack?

Connect your Slack workspace in 2 minutes. No agents, no config files.