By Joan Ross, Chief Intelligence Officer. Part 1 of a 2-part series
Now that the latest MITRE ATT&CK framework (v8) addresses tactics in the areas of reconnaissance and resource development, it’s a perfect time to reexamine your machine learning (ML) and artificial intelligence (AI) security strategy. Doing so will help you get ahead and visualize attacks before they gain a significant foothold within your ecosystem.
A new method, currently the focus of doctoral dissertation research, will become widely available to the market in the coming months. It enables a number of roles within the organization to aid security professionals in detecting malicious activity, surfacing behavioral analysis to those roles through targeted visual analytics.
Several notable vendors have attempted ML/AI for cybersecurity, but when their products are examined closely and implemented in production ecosystems, the ML turns out to focus on a limited set of vectors. Any advanced security operations center (SOC) analyst will tell you this is of limited use if the AI is restricted in what it can deliver with any real assurance.
Specifically, what hasn’t been solved to date is the flood of false positives generated by both ML and AI systems. This is the same challenge faced by SOC supervisors who have invested heavily in every technology from security information and event management (SIEM) platforms to network access control (NAC), network access protection (NAP), and various monitoring and alerting platforms.
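To see why false positives overwhelm even well-tuned tooling, consider the base-rate arithmetic. A minimal sketch, using entirely hypothetical figures (event volume, prevalence, and detector rates are assumptions for illustration, not drawn from any vendor's data):

```python
# Hypothetical illustration of the base-rate problem behind SOC alert fatigue.
# All numbers below are assumptions chosen for illustration only.

daily_events = 10_000_000      # events a SIEM might ingest per day (assumed)
malicious_rate = 1e-6          # assume 1 in a million events is truly malicious
true_positive_rate = 0.99      # assume the detector catches 99% of malicious events
false_positive_rate = 0.001    # assume it misfires on 0.1% of benign events

malicious = daily_events * malicious_rate
benign = daily_events - malicious

true_alerts = malicious * true_positive_rate
false_alerts = benign * false_positive_rate

# Precision: what fraction of the alerts an analyst sees are real?
precision = true_alerts / (true_alerts + false_alerts)
print(f"True alerts per day:  {true_alerts:.0f}")    # ~10
print(f"False alerts per day: {false_alerts:.0f}")   # ~10,000
print(f"Alert precision:      {precision:.2%}")      # under 0.1%
```

Under these assumed numbers, a detector that misfires on only 0.1% of benign traffic still buries roughly ten real alerts under ten thousand false ones each day, which is exactly the triage burden SOC teams describe.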
Even more important are the elusive false negatives, which advanced persistent threats (APTs) and nation-state attackers strive to achieve. Detection still falls short against low-and-slow attacks, evasive lateral movement, malware that has already gained persistence, unmonitored areas of application networks, cloud ecosystems, entry vectors into cyber-physical systems, and, of course, the easy ways in: stagnant accounts, phishing, zero-day malware, misrepresentation, social engineering, and other online lures.
Several years ago, a significant breakthrough occurred when the National Security Agency compared what its considerable investment in security analysts provided against its equally considerable investment in security tools and technology development.
They found their analysts’ human brains were far superior to their computer technology at assessing qualitative risk. Their tools and technology excelled at calculating numbers at scale, i.e., quantitative calculations. But their analysts could determine where advanced persistent threats originated by virtue of certain pattern combinations that the technology wasn’t detecting. The patterns were unique to geographical regions, and humans spotted other tendencies the computers were missing.
Where computers detected nation-state attacks, the analysts could best refine the attack’s origin from visual representations of those patterns. Programmatic code is written by humans in different geographical regions, and those humans have evolving tendencies, much like the modus operandi of other crime suspects.
This problem became clear to the experts who came together here at InsightCyber: several aspects were missing from vendor security technology, and until those gaps were addressed, security teams were not going to get ahead of the constant barrage of daily cyber-attacks.
In our next post, we’ll drill down into the four most critical missing aspects.