Stop Wasting Money on Noisy Logs: A Q&A on Adaptive Logs Drop Rules

From Moocchen, the free encyclopedia of technology

If your platform or observability team is drowning in useless log lines—health checks, forgotten debug statements, or verbose info from rarely used services—you know the pain of paying for noise. Adaptive Logs introduces drop rules (now in public preview) that let you define custom logic to ditch low-value logs before they ever hit Grafana Cloud Logs. This Q&A covers how they work, why they matter, and how to use them to save money fast.

What exactly are Adaptive Logs drop rules and how do they work?

Drop rules are a new feature in Adaptive Logs that let you create your own filters to eliminate specific log lines before they are stored in Grafana Cloud Logs. When a log line arrives, the system evaluates it against your rules in priority order. The first matching rule applies its drop rate—either 100% to block all matching logs or a percentage to sample them. You can combine criteria like log labels, detected log levels, or line content to precisely target the noise. For example, you could drop all DEBUG logs from a specific service or sample repetitive health check logs by 90%. This gives you direct control over what gets ingested, without needing to modify application code or wait for infrastructure changes.

Why should I care about dropping logs? What benefits do drop rules provide?

Every log line you send to Grafana Cloud costs money—for storage, indexing, and processing. Most teams know they have logs that add zero value: throwaway health checks, forgotten debug output, or repetitive batch job messages. By dropping these logs with drop rules, you immediately reduce your log volume, which lowers your bill and decreases noise for anyone troubleshooting. Plus, your centralized platform team can enforce standards across all services without bothering individual developers to change their logging config. It’s a fast, self-service way to eliminate waste that previously required toilsome change management. The result? You keep the logs that matter and save money right away.

Can you give me some real-world examples of when to use drop rules?

Absolutely. Here are three common scenarios:

  • Drop logs by level: Your development teams left DEBUG logging enabled in production, eating your budget. Create a rule that drops all DEBUG logs (100% drop rate).
  • Sample chatty, repetitive logs: A batch processing job generates thousands of nearly identical log lines per minute. Use a rule with a 90% drop rate to keep only a representative sample, preserving visibility while slashing volume.
  • Target a specific noisy producer: A particular microservice suddenly starts emitting high-volume, low-value error logs. Combine a label selector (e.g., service="foo") with a log level filter and drop 100% of those lines.

These examples show how flexible drop rules are—you can be as broad or as granular as needed. Check out the documentation for more details on crafting rules.
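
The three scenarios above can be written down as rule definitions. The field names below (`selector`, `level`, `drop_percent`) are hypothetical placeholders, not the exact Grafana Cloud schema:

```python
# The three example scenarios as hypothetical rule definitions.
rules = [
    {   # 1. Drop all DEBUG logs left enabled in production
        "selector": {},                        # any stream
        "level": "DEBUG",
        "drop_percent": 100,
    },
    {   # 2. Sample a chatty batch job: keep ~10% of its lines
        "selector": {"job": "batch-worker"},   # illustrative label
        "level": None,
        "drop_percent": 90,
    },
    {   # 3. Silence a specific noisy producer entirely
        "selector": {"service": "foo"},
        "level": "ERROR",
        "drop_percent": 100,
    },
]

for r in rules:
    kept = 100 - r["drop_percent"]
    print(f"{r['selector'] or 'all streams'} level={r['level']}: keep {kept}%")
```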

How do drop rules fit with other Adaptive Logs features like exemptions and patterns?

Drop rules are one of three mechanisms in Adaptive Logs for managing log volume, and they work together in a defined order. When a log line arrives:

  1. Exemptions are checked first: protected logs pass through untouched, no sampling applied.
  2. Drop rules are evaluated next in priority order: the first matching rule applies its drop rate (100% or a percentage).
  3. Patterns (optimization recommendations) are applied to any remaining log lines that weren’t exempted or dropped.
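
The three stages above can be sketched as a single decision function. Everything here — function names, rule shapes, and the exemption matching — is an illustrative assumption, not Grafana Cloud's actual pipeline code:

```python
import random

def process_line(labels, level, exemptions, drop_rules, rng=random.random):
    """Hypothetical sketch of the exemptions -> drop rules -> patterns order."""
    # 1. Exemptions first: protected logs pass through untouched.
    for ex in exemptions:
        if all(labels.get(k) == v for k, v in ex.items()):
            return "kept (exempt)"
    # 2. Drop rules next, in priority order; first match wins.
    for rule in sorted(drop_rules, key=lambda r: r["priority"]):
        selector_ok = all(labels.get(k) == v
                          for k, v in rule["selector"].items())
        level_ok = rule.get("level") in (None, level)
        if selector_ok and level_ok:
            return "dropped" if rng() < rule["drop_rate"] else "kept (sampled)"
    # 3. Anything left is subject to pattern-based recommendations.
    return "kept (patterns apply)"

exemptions = [{"namespace": "payments"}]
drop_rules = [{"priority": 1, "selector": {"service": "foo"},
               "level": "DEBUG", "drop_rate": 1.0}]

print(process_line({"namespace": "payments"}, "DEBUG", exemptions, drop_rules))
print(process_line({"service": "foo"}, "DEBUG", exemptions, drop_rules))
print(process_line({"service": "bar"}, "INFO", exemptions, drop_rules))
```

The payments line survives even though it is DEBUG, because exemptions are checked before any drop rule runs.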

This tiered system lets you do it all: protect critical logs from ever being sampled, drop known noise with custom rules, and rely on intelligent recommendations for the rest. Drop rules give you the manual override to eliminate waste that automated patterns might miss. Learn more in our drop rules docs.

How do I create a drop rule? What criteria can I use?

Creating a drop rule in Grafana Cloud is straightforward via the Adaptive Logs interface. You define a stream selector (labels like service, namespace, job) and optionally add filters for log level (e.g., DEBUG, INFO, ERROR) or line content (text matching). Then set a drop percentage—100% to block all matching logs, or a lower percentage to sample. For example: {service="batch-worker"} | level=DEBUG with a 100% drop rate. Rules are evaluated in priority order, so you can create multiple rules with different priorities to handle complex logic. The system processes each log line against the rule list until a match is found. This gives you precise control without needing to touch your application’s logging configuration.
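
The example rule from the text — `{service="batch-worker"} | level=DEBUG` — combines a stream selector, a level filter, and (optionally) a content filter. A minimal sketch of how such criteria combine, with the function and parameter names being illustrative assumptions rather than Grafana's API:

```python
def rule_matches(labels, level, line, selector, want_level=None, contains=None):
    """All configured criteria must hold for the rule to apply."""
    return (all(labels.get(k) == v for k, v in selector.items())  # stream selector
            and (want_level is None or level == want_level)       # level filter
            and (contains is None or contains in line))           # content filter

selector = {"service": "batch-worker"}

print(rule_matches({"service": "batch-worker"}, "DEBUG", "cleanup tick",
                   selector, want_level="DEBUG"))   # prints True  -> drop at 100%
print(rule_matches({"service": "api"}, "DEBUG", "cleanup tick",
                   selector, want_level="DEBUG"))   # prints False -> kept
```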

Are there any limitations or things I should be aware of when using drop rules?

Drop rules are powerful, but a few things to keep in mind:

  • They act before ingestion – once a log is dropped, it’s gone permanently. Make sure your rule is correct, especially if you use a 100% drop rate.
  • Priority matters – rules are evaluated in the order you define, so a higher-priority rule takes precedence over lower ones. Plan carefully if you have overlapping criteria.
  • Exemptions take precedence – if a log line matches an exemption, drop rules won’t apply. Use exemptions for logs you must keep.
  • Drop rules are for known noise – for unknown patterns, rely on Adaptive Logs’ optimization recommendations instead.
  • The feature is in public preview – behavior may change before general availability, so watch for updates.

Despite these considerations, drop rules are a safe and effective way to cut costs when used thoughtfully. Test with a low drop percentage first to verify your criteria.

How do drop rules help reduce costs compared to simply ignoring logs?

Ignoring logs doesn’t save you money—you still pay for ingestion and storage of every line, even if no one ever reads it. Drop rules prevent logs from ever being written to Grafana Cloud Logs, so you pay nothing for them: no storage, no indexing, no query-time overhead. This is a direct cost reduction, especially for high-volume noise like DEBUG logs or health checks. Reducing log volume also means your team spends less time sifting through irrelevant lines, boosting productivity. Combined with exemptions and patterns, drop rules give you a complete cost management toolkit. By actively dropping what you know is worthless, you can often cut log costs by 30–50% or more, depending on your logging profile.
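
A back-of-the-envelope estimate shows why this adds up. The unit price and volumes below are hypothetical placeholders for illustration, not Grafana Cloud pricing:

```python
# Hypothetical inputs -- substitute your own contract price and volumes.
PRICE_PER_GB = 0.50        # assumed $/GB ingested (placeholder)
daily_gb = 200             # total daily log volume
noise_fraction = 0.40      # share identified as droppable noise

dropped_gb_month = daily_gb * noise_fraction * 30   # GB removed per month
savings = dropped_gb_month * PRICE_PER_GB           # monthly dollars saved

print(f"Dropping {noise_fraction:.0%} of volume removes "
      f"{dropped_gb_month:.0f} GB/month, about ${savings:,.2f}/month saved")
```

Even at modest per-GB rates, dropping a large noise fraction compounds into meaningful monthly savings.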