A new artificial intelligence (AI) system developed by MIT researchers promises increased threat detection capabilities and reduced false positive rates, boosting incident response and productivity in the security world.

The team, based at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), detailed in the paper AI2: Training a big data machine to defend [PDF] how the new platform achieves attack detection rates three times higher than current analytics models, while producing significantly fewer false positives.

The team showcased the AI2 platform last week at the IEEE International Conference on Big Data Security and released the study to the public earlier today.

The paper explains how the tool combines AI with ‘analyst intuition’ to create a hybrid learning model, in which intermittent feedback from human analysts is layered on top of a continuously running unsupervised machine learning system.
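
That loop is easier to picture in code. Below is a minimal sketch of the described workflow, assuming scikit-learn-style components: an unsupervised detector scores incoming events, an analyst labels only the top-ranked handful, and a supervised model is retrained on the accumulated feedback. The feature matrix, the analyst_labels stand-in, and the choice of models are illustrative assumptions, not details of AI2 itself.

```python
# Hypothetical sketch of a human-in-the-loop detection cycle of the kind the
# paper describes. All data and model choices here are placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
labeled_X, labeled_y = [], []   # grows as the analyst provides feedback
supervised = None

def analyst_labels(events):
    """Stand-in for the human analyst; in practice a security expert
    would label these events (random labels used here for illustration)."""
    return rng.integers(0, 2, size=len(events))

for day in range(3):                             # one iteration per "day" of logs
    X = rng.normal(size=(10_000, 20))            # placeholder feature matrix

    # 1. Unsupervised pass: score every event for abnormality.
    outlier_model = IsolationForest(random_state=0).fit(X)
    scores = -outlier_model.score_samples(X)     # higher = more anomalous

    # 2. Blend in the supervised model's opinion once analyst labels exist.
    if supervised is not None:
        scores += supervised.predict_proba(X)[:, 1]

    # 3. Show only the top-ranked events to the analyst -- the
    #    "intermittent feedback" the paper refers to.
    top_k = np.argsort(scores)[-200:]
    labeled_X.append(X[top_k])
    labeled_y.append(analyst_labels(X[top_k]))

    # 4. Retrain the supervised model on all labels gathered so far.
    supervised = RandomForestClassifier(random_state=0).fit(
        np.vstack(labeled_X), np.concatenate(labeled_y)
    )
```

The attraction of ranking is workload: the analyst reviews a few hundred events per iteration rather than millions, and each round of labels sharpens the next round’s ranking – the rapid refinement cycle Veeramachaneni describes below.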

“You can think about the system as a virtual analyst,” commented CSAIL research scientist Kalyan Veeramachaneni, who designed AI2 alongside PatternEx chief data scientist and former CSAIL researcher, Ignacio Arnaldo. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly,” he added.

The team argues that this hybrid model will offer huge benefits to today’s security analytics industry. Existing detection methods typically rely either on analyst-driven solutions built around human-created rules, or lean too heavily on unsupervised machine learning for anomaly detection, which results in high and potentially harmful false positive rates.

Veeramachaneni pointed to the difficulty of merging human and computer-based approaches in cybersecurity, in part because of the challenge of manually labeling threat data for the algorithms. A typical user on a crowdsourcing site such as Amazon Mechanical Turk lacks the skills to apply labels like “DDoS” or “exfiltration attack,” said the researcher – “You need security experts.”

However, in today’s paper the scientists show how the new platform obviates the need for constant expert oversight. When tested on 3.6 billion pieces of data, the system detected 85% of attacks – roughly three times the rate of the current benchmark – while reducing false positives by a factor of five.

In response to the study, Nitesh Chawla, professor of computer science at the University of Notre Dame, commented: “This paper brings together the strengths of analyst intuition and machine learning, and ultimately drives down both false positives and false negatives… This research has the potential to become a line of defence against attacks such as fraud, service abuse and account takeover, which are major challenges faced by consumer-facing systems.”