AI & ML products are harder than they look
AI tech is obviously overhyped, and conflated with ideas from science fiction and religion. I prefer using terms like Machine Intelligence and Cognitive Computing, just to avoid the noise. But if we strip away the most unrealistic stuff, there are some interesting paths forward.
The biggest problem is in defining strong semantic paths from the available data to valid use cases. Many approaches founder on assumptions that the data contains value, that the use case can be solved with the data, or that producer and consumer of data use terms the same way.
Given a strong data system, there is a near-term opportunity to build AI-powered toolsets that help customers learn and use the data systems that are available. This is a services-heavy business with tight integration to data collection and storage.
This has to be driven by human intelligence, and is therefore services-heavy, because the data and use cases are not similar between budget-owning organizations. There is data-system similarity on low-value use cases, but the high-value stuff is specific to each organization.
That services work should lead to the real opportunity for cognitive computing, which is augmenting human intelligence in narrow fields. If there is room to abstract the data system, there’s room to normalize customers to a tool. Then you’ve got a product plan, similar to SIEMs.
Put products into fields where the data exists, use cases are clear, the past predicts the future, pattern matching and cohort grouping are effective, the problem has enough value in it to justify effort, and outside context problems don’t completely derail your model. Simple!
If you can describe the world in numbers without losing important context, then I can express complex relationships between the numbers.
There’s an open question though… given a data system that successfully models the domain, how much did the advanced system improve over a simpler approach? Is the DNN just figuring out that 95%-ile outliers are interesting?
If a problem can be solved with machine intelligence, great. If the same problem could be solved with basic statistics, that’s cheaper to build, operate, and maintain. It’ll be interesting to see how this all shakes out.
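To make the "basic statistics" baseline concrete, here is a minimal sketch of the kind of simple approach the DNN would need to beat: flag anything above the 95th percentile as interesting. All names and the simulated data are hypothetical, purely for illustration.

```python
import random

def percentile(values, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(values)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[rank]

def outliers(values, pct=95):
    """Return the values above the given percentile threshold."""
    threshold = percentile(values, pct)
    return [v for v in values if v > threshold]

# Simulated "normal" signal plus two injected anomalies.
random.seed(0)
baseline = [random.gauss(100, 10) for _ in range(1000)]
spikes = [200, 250]
flagged = outliers(baseline + spikes)
print(all(s in flagged for s in spikes))  # the injected spikes are flagged
```

If a deep model's alerts largely overlap with what this threshold catches, the percentile check wins on build, operating, and maintenance cost; the model has to earn its keep on the cases the threshold misses.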
Update: an interesting take on this from Benedict Evans: https://www.ben-evans.com/benedictevans/2018/06/22/ways-to-think-about-machine-learning-8nefy
Update: and another from Raffael Marty: https://raffy.ch/blog/2018/08/07/ai-ml-in-cybersecurity-why-algorithms-are-dangerous/
Update: and another from Nyotron: https://www.nyotron.com/collateral/Nyotron-AI-White-Paper_12-13-2017.pdf