4/30/2018 1:10:00 PM
Cybersecurity operations have always been somewhat like finding a needle in a haystack.
As businesses and enterprises collect and process increasing amounts of data, the risk of malicious activity only rises. The processing and analytical power required to verify the average enterprise's data throughput is staggering.
Traditional approaches to cybersecurity no longer produce results. According to the 2016 Verizon Data Breach Investigations Report, more than half of all data breaches go undiscovered for months.
Machine learning and predictive analytics have given security operations centers (SOCs) new tools in the arms race to cybersecurity dominance. But these tools are nearing the end of their lifetimes, paving the way for the next step forward in cybersecurity evolution – artificial intelligence.
However, at the same time, cybercriminals are using increasingly sophisticated toolkits to break through victims' digital defenses. The first side to properly implement an artificially intelligent solution will have a powerful advantage in the post-AI cybersecurity landscape.
Today's Cybersecurity Failures Inspire Tomorrow's Successes
In a traditional cybersecurity environment, security technicians gather data on previous data breaches, phishing campaigns, and well-known malware examples. They extract data from those examples, turn them into digital signatures, and then compare those signatures against network traffic and emails flowing into and out of the servers they protect.
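The signature comparison described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the sample payloads and the choice of SHA-256 as the digest are assumptions for demonstration purposes.

```python
import hashlib

def signature(payload: bytes) -> str:
    """Digest used as the payload's signature (SHA-256 here)."""
    return hashlib.sha256(payload).hexdigest()

# Hypothetical signature store built from previously analyzed malicious samples.
known_bad = {signature(b"malicious-sample-1"), signature(b"malicious-sample-2")}

def is_flagged(payload: bytes) -> bool:
    """Compare an incoming payload against the stored signatures."""
    return signature(payload) in known_bad

print(is_flagged(b"malicious-sample-1"))  # True
print(is_flagged(b"benign request"))      # False
```

Exact-match digests like these only catch payloads already seen in the wild, which is precisely why the approach struggles against the slow, novel attacks discussed next.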
This process can protect victims from large-scale data exfiltration but is not very effective against more sophisticated attacks such as Advanced Persistent Threats (APTs), which leak small amounts of data out over long periods of time.
To combat these types of attacks, SOCs rely on machine learning and predictive algorithms to flag potentially malicious activity. In turn, this generates a large daily log of false positives and alerts that a human security analyst needs to verify.
While verifying thousands of automatically flagged security logs every day is less burdensome than verifying millions, it is still an exhausting task. The tipping point comes with the advent of AI aggregating and learning from human analysts' actions to generate better flags and alerts.
This is the premise of MIT's CSAIL technology, which incorporates AI to reduce log line items from thousands to mere hundreds. The more feedback the system gets, the better it becomes.
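A toy version of that feedback loop can be sketched as follows. The scoring formula, feature names, and threshold are all assumptions for illustration; MIT CSAIL's actual system uses far more sophisticated models. The core idea is simply that features an analyst repeatedly marks benign contribute less to future alert scores.

```python
from collections import Counter

benign_feature_counts = Counter()  # learned from analyst feedback

def score(event_features):
    """Features the analyst has repeatedly marked benign count for less."""
    return sum(1.0 / (1 + benign_feature_counts[f]) for f in event_features)

def analyst_feedback(event_features, is_malicious):
    """Record the analyst's verdict; benign features are down-weighted."""
    if not is_malicious:
        benign_feature_counts.update(event_features)

events = [["night_login", "new_ip"], ["night_login"], ["night_login"]]
print([score(e) for e in events])

# The analyst reviews the first flagged event and marks it benign...
analyst_feedback(events[0], is_malicious=False)
print([score(e) for e in events])  # repeated benign features now score lower
```

Each round of feedback shrinks the next day's alert queue, which is the mechanism behind reducing log line items from thousands to hundreds.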
On the proactive threat detection and mitigation front, data deception technologies use AI to create user-specific predictive behavior models and then look for deviant use patterns within those models.
Instead of immediately locking the malicious user out (which may incentivize them to re-mount the attack), new approaches trick attackers into thinking their approaches are working. The goal is to draw out and expose as many attacker resources as possible while misdirecting them away from actual core databases and processes.
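A per-user behavior model of the kind described above can be sketched with a simple statistical baseline. The metric (bytes transferred per session), the sample history, and the z-score cutoff are assumptions for illustration; production data-deception platforms model many signals at once.

```python
import statistics

# Hypothetical baseline: one user's bytes transferred in recent sessions.
history = [1200, 1100, 1350, 1280, 1150, 1225]

def is_deviant(observed, baseline, z_cutoff=3.0):
    """Flag sessions more than z_cutoff standard deviations from the mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(observed - mean) / stdev > z_cutoff

print(is_deviant(1300, history))     # False: within the user's normal range
print(is_deviant(250_000, history))  # True: exfiltration-like spike
```

A deviant session need not trigger a lockout; as described above, it can instead route the session toward decoy data while the attacker's tools and techniques are observed.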
How Cybercriminals Are Using AI to Thwart Cybersecurity
The problem with AI is that its fundamental benefits are available to anyone who knows how to code it. Mark Gazit of ThetaRay recounts a sophisticated attack on a large international bank that his company audited.
In this case, an attacker took control of 250 bank accounts and, for months, moved money between them under harmless-sounding transaction names like "present for my dad," "tuition for my son," or "buying a car." When the attacker's system finally began pulling money out of the accounts and into offshore accounts, nobody could identify the true source of the transactions.
In the hands of sophisticated cybercriminals, AI allows a single attacker to multiply an attack vector almost without limit. With criminal AI, there is little difference in difficulty between hacking into a single account and hacking into hundreds of accounts simultaneously.
Another instance of attack vector multiplication and advanced persistence in the financial industry can be found in ATM hacks. Instead of breaking into a single ATM, today's cybercriminals can leverage AI to hack into multiple ATMs across an entire city and have them dispense torrents of cash simultaneously.
This was the strategy of international crime syndicate Carbanak, which targeted banks across the world and stole more than $1 billion by "jackpotting" ATMs.
The Future of Cybersecurity Is Collaborative
AI cybersecurity solutions don't operate in a vacuum, nor do they automate security event verification entirely without human intervention. Instead, artificially intelligent processes reduce human workloads to manageable levels – a strategic necessity in an industry plagued by constant and growing labor shortages.
Considering that even a modestly sized e-commerce website can generate up to 40 million log lines every day, security professionals need solutions that automate low-impact verifications and free their time for high-impact, strategic decision-making.
Artificial intelligence is the key to building predictive models that keep pace with constant cybercriminal innovation. Rather than replacing security analysts with supposedly infallible automated processes, AI augments analysts and empowers them to make valuable strategic decisions.