
Moving the transformation point of data


[Image: NGC 7331 galaxy]

There’s a pattern that has become common knowledge, perhaps on its way to received wisdom. Endpoints pass their raw data off to storage as quickly as possible. Analysts then do their work against that storage using map-reduce-style processing, automated and/or ad hoc.

This pattern has many benefits and is correct for many use cases. Ephemeral endpoints, such as elastically scaling containers, are able to emit data before it is lost. Machines under attack are able to emit data before the attacker deletes or corrupts it. Best of all, the analyst can explore the data, learn to ask smarter questions, and gradually improve the quality of their work.

The pattern also has a downside: staggering cost in network load, storage, compute resources, license fees, service fees, and analyst time. It can all be worthwhile during the process of discovering a new root cause or isolating a security threat, but it is not an ideal way to perform operationalized tasks.

What if more of the analysis work were pushed to the endpoint where the raw data is generated?

That does not work for use cases where raw data must be saved, but those cases do not have to be as prevalent as assumed. They only seem to be because we work with data systems that fight against producing information.

We haul dozens of low-fidelity lines around for every actual event, generating gigabytes of noise from every device to sort through. The distance from a given unit of event or metric data to the business impact it represents is huge; just ask anyone who’s collected data for a SIEM.

Bulk raw data collection is similar to running all your applications at debug log level: it’s the right move when you need it, but a waste on the average day.

Alternatives are out there. The most obvious is metrics: it’s hard to argue that a single raw measurement is valuable. Generating a periodic histogram is higher signal, and a smaller data set to boot. Performing metrics analysis at the point of collection produces a better result for less effort.

“checking… is more efficient to do on the host as opposed to querying against a central metrics store… there was no need to store host metrics with that retention and resolution.” — Uber engineering team
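As a minimal sketch of what point-of-collection aggregation can look like, the loop below folds raw samples into a histogram and emits only the summary. The sample_latency_ms() source, the bucket bounds, and the ten-second window are all hypothetical stand-ins, not any particular agent’s behavior:

```python
import random
import time

BUCKET_BOUNDS = [5, 10, 25, 50, 100, 250, 500]  # upper bounds in ms; last slot is overflow
WINDOW_SECONDS = 10  # assumed reporting interval

def sample_latency_ms():
    """Hypothetical stand-in for a raw measurement source on the endpoint."""
    return random.lognormvariate(3, 0.5)

def collect_window(seconds=WINDOW_SECONDS):
    """Fold raw samples into a histogram; the raw values never leave the host."""
    buckets = [0] * (len(BUCKET_BOUNDS) + 1)
    count, total = 0, 0.0
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        value = sample_latency_ms()
        count += 1
        total += value
        for i, bound in enumerate(BUCKET_BOUNDS):
            if value <= bound:
                buckets[i] += 1
                break
        else:
            buckets[-1] += 1  # value exceeded every bound: overflow bucket
        time.sleep(0.01)
    # One compact summary is emitted in place of hundreds of raw samples.
    return {"buckets": buckets, "count": count, "mean": total / max(count, 1)}

if __name__ == "__main__":
    print(collect_window())
```

Only the bucket counts cross the network, which is why the result is both higher signal and a smaller data set.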

What about events? It’s true that a single event could be the needle in a haystack that explains a critical situation. It’s also true that these events can be lost to system failure or malicious activity. Unfortunately, it’s also true that these events are often nearly impossible to derive state from. That stream of debug events is great for forensics, but poor for analytics.

What if the endpoint were able to regularly report state in business-friendly terms? Less DURSLEy, more CAPS? What would that require?

Administrators would need to describe the transformations that they need. Ideally, that’s writing configuration for an engine, just like they’re doing in analysis tools today. However, it would also work to write scripts; this could even be the optimal approach when native tools are faster than a cross-platform engine.
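As one hypothetical shape for that configuration, written here as plain Python data rather than any real product’s schema, every field name and function (match, linear_trend, and so on) is an illustrative assumption:

```python
# Illustrative transformation rules an administrator might declare; an
# endpoint engine would evaluate them locally and ship only the results.
RULES = [
    {
        "name": "checkout_error_rate",        # business-friendly output metric
        "source": "app.log",                  # raw input on the endpoint
        "match": r"POST /checkout .* 5\d\d",  # which raw lines count
        "window": "5m",
        "emit": "count(match) / count(all)",  # derived value, not raw lines
    },
    {
        "name": "disk_will_fill_within_24h",  # a state, not a measurement
        "source": "metric:disk.used_pct",
        "window": "1h",
        "emit": "linear_trend(value, '24h') >= 100",
    },
]
```

The point is the shape of the work, not the syntax: the administrator declares what business question each endpoint should answer, and the raw lines behind the answer stay on the host.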

However the work is done, endpoints would need to be able to perform calculations to turn raw data into information. That almost certainly requires a data cache on the endpoint, so cache management becomes a probable requirement.
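A minimal sketch of such a cache, assuming a fixed retention of recent samples (the 6000-sample cap is an arbitrary knob), using Python’s collections.deque so old data ages out automatically:

```python
from collections import deque
from statistics import fmean

class EndpointCache:
    """Bounded cache of recent raw samples; old entries age out automatically."""

    def __init__(self, max_samples=6000):  # assumed retention knob
        self.samples = deque(maxlen=max_samples)

    def add(self, value):
        self.samples.append(value)

    def summarize(self):
        """Turn cached raw data into information before it leaves the host."""
        if not self.samples:
            return None
        ordered = sorted(self.samples)
        return {
            "count": len(ordered),
            "mean": fmean(ordered),
            "p95": ordered[int(0.95 * (len(ordered) - 1))],
            "max": ordered[-1],
        }

cache = EndpointCache()
for v in (12.0, 48.5, 9.3, 102.7):
    cache.add(v)
print(cache.summarize())  # one summary dict, not four raw points
```

Bounding the cache is what makes cache management tractable: retention and resolution become local tuning decisions instead of a central storage bill.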

Of course, bidirectional data distribution, an execution shell, and configuration management are required. Those are baseline requirements for any data collection mechanism, though, even if some shortcut them by baking them into a golden image. Ditto for system identification and disambiguation.

So there are some gnarly problems in the infrastructure required to do endpoint-based business analysis, but nothing new, and no true showstoppers. The biggest challenge is at the point of highest value: converting data to business knowledge instead of gathering data for its own sake.

Update: Helios Hyperscaler

