Impressive: MIT researchers have built an AI system that can detect 85 percent of cyber attacks
04/26/2016 / By usafeaturesmedia

(Cyberwar.news) Leave it to the geniuses at the Massachusetts Institute of Technology to get it right when it comes to developing software that helps defend the nation’s information systems from hacking.

As reported by The Hacker News, MIT researchers have developed an artificial intelligence system capable of detecting 85 percent of cyber attacks with high accuracy.

Working from the premise that it would be a revolution in cybersecurity if a system could predict cyber incidents before they occurred, researchers at “MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are working with machine-learning startup PatternEx to develop a line of defense against such cyber threats,” the site reported.

The system also relies on human input, which the researchers call Analyst Intuition (AI); that is why the concept has been named Artificial Intelligence Squared, or AI2.

“You can think about the system as a virtual analyst,” said CSAIL research scientist Kalyan Veeramachaneni, who developed AI2 with Ignacio Arnaldo, a chief data scientist at PatternEx and a former CSAIL postdoc. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly.”

Creating cybersecurity systems that merge human analytical abilities with computer-based approaches can be tricky, MIT said in a press release, in part because of the challenge of manually labeling cybersecurity data for algorithms.


As the press release noted further:

For example, let’s say you want to develop a computer-vision algorithm that can identify objects with high accuracy. Labeling data for that is simple: Just enlist a few human volunteers to label photos as either “objects” or “non-objects,” and feed that data into the algorithm.

But for a cybersecurity task, the average person on a crowdsourcing site like Amazon Mechanical Turk simply doesn’t have the skillset to apply labels like “DDoS” or “exfiltration attacks,” says Veeramachaneni. “You need security experts.”

However, that presents another problem: experts are busy and cannot spend all day examining volumes of data that have been flagged as suspicious. Companies have been known to give up on platforms that require too much extra work, so a machine-learning system has to be able to improve without overwhelming its human partners.

That’s what makes AI2 so promising.

“This paper brings together the strengths of analyst intuition and machine learning, and ultimately drives down both false positives and false negatives,” Nitesh Chawla, the Frank M. Freimann Professor of Computer Science at the University of Notre Dame, told MIT News. “This research has the potential to become a line of defense against attacks such as fraud, service abuse and account takeover, which are major challenges faced by consumer-facing systems.”

What’s more, the system is forever evolving.

“The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions,” Veeramachaneni says. “That human-machine interaction creates a beautiful, cascading effect.”
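To make that feedback cycle concrete, here is a minimal sketch of a human-in-the-loop detection loop of the kind described above: an unsupervised detector ranks events, analysts label only a small top-ranked batch each day, and a supervised classifier is retrained on the growing pool of labels. The feature layout, the 200-event review budget, and the scikit-learn models are illustrative assumptions, not details of AI2 itself.

# Minimal sketch (not AI2's actual code) of a human-in-the-loop detection cycle:
# an unsupervised detector ranks events, analysts label a small top-ranked batch,
# and a supervised classifier is retrained on the accumulated labels.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
baseline_traffic = rng.normal(size=(5000, 10))        # stand-in for per-entity log features
detector = IsolationForest(random_state=0).fit(baseline_traffic)

labeled_X, labeled_y = [], []                         # grows with each round of analyst labels
for day in range(3):                                  # a few simulated feedback rounds
    events = rng.normal(size=(5000, 10))
    scores = -detector.score_samples(events)          # higher score = more anomalous
    top_k = np.argsort(scores)[-200:]                 # only 200 events reach the analysts

    # Simulated analyst verdicts; in a real deployment a security expert labels each event.
    verdicts = (events[top_k, 0] > 1.5).astype(int)
    labeled_X.extend(events[top_k])
    labeled_y.extend(verdicts)

    if len(set(labeled_y)) > 1:                       # need both classes before training
        clf = RandomForestClassifier(random_state=0).fit(labeled_X, labeled_y)
        print(f"Day {day + 1}: retrained on {len(labeled_y)} analyst-labeled events")

The point of the loop is simply that each round of analyst labels enlarges the training set, so the supervised model keeps improving while the experts never review more than a small, fixed slice of the flagged data.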


Cyberwar.news is part of the USA Features Media network.
