Artificial Intelligence (AI) has been heralded by many as a revolutionary technology that will eventually replace humans. While that may still be some way off, many in the cybersecurity industry recognize its ability to help repel cyberattacks. The problem is that, like so many weapons before it, the technology has not gone unnoticed by the criminal fraternity.
While there is obviously a lot of good that could be done, AI also has a very dark side.
AI has no allegiance
Dating back to 1956, the promise of AI is only now coming to fruition thanks to significant advances in computing power, storage, data volumes and advanced algorithms. While we might be a few years away from realistic robots performing everyday tasks in our homes, AI is already delivering benefits in a number of areas – particularly those that require extensive correlation between data sets, such as healthcare (especially medical research), manufacturing and retail. A category within AI is Machine Learning (ML). Using algorithms, it can query vast amounts of data to discover patterns and generate insights, which fundamentally means the machine can learn from these relationships.
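To make that idea concrete, here is a minimal sketch of "learning from relationships" – a toy text classifier built with scikit-learn. The data, the library choice and the phishing framing are illustrative assumptions, not anyone's production system:

```python
# A toy sketch of the "learning from relationships" idea: given labelled
# examples, an algorithm learns the pattern linking inputs to outcomes and can
# then score data it has never seen. Everything here is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: email subject lines labelled 1 (phishing) or 0 (legitimate)
subjects = [
    "Urgent: verify your account password now",
    "Invoice attached, please wire payment today",
    "Team lunch moved to 1pm on Thursday",
    "Quarterly report draft for your review",
]
labels = [1, 1, 0, 0]

# Turn the text into numeric features and fit a simple classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(subjects, labels)

# The trained model can now estimate how phishing-like an unseen message is
print(model.predict_proba(["Please verify your password immediately"])[0][1])
```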
The problem is that data is too freely available.
Criminals have been given too much information already
Headlines appear daily of organization after organization falling victim to hackers and having their data breached – Yahoo, TalkTalk, Vodafone, Equifax, the list goes on and on. Add to that the organizations that simply messed up and exposed the data themselves – the most recent being FedEx, which stored sensitive customer data on a publicly accessible Amazon S3 bucket. The final element is the data that we, as individuals, hand out on a daily basis: the ‘check in’ from our local coffee shop, Instagram updates, tweets and retweets, and even our use of GPS-powered apps. All these digital footprints can be harvested to help piece together the jigsaw of our lives.
You’d be naïve to believe that criminals aren’t already trawling the internet and the dark web for these data repositories. Harnessing AI, they can piece together each scrap of information to create a complete dossier that can be used for identity theft, social engineering attacks and even to formulate spear-phishing campaigns.
This leads into another area where AI could be harnessed by scammers – writing phishing messages that humans can’t identify. In 2016, a Japanese AI program co-authored a short novel that made it through the first round of a national literary prize. In the same year, Gartner predicted that by 2018, 20 percent of business content – such as shareholder reports, legal documents and so on – would be written by machines. While that might not be a reality quite yet, things are certainly moving in this direction, with The Economist publishing an article written using AI at the end of last year.
Eventually, AI will be harnessed to produce communications that are indiscernible from those written by humans. If it is used to create phishing messages personalized from our digital personas, I really struggle to see how the human eye will be able to distinguish between real and sham correspondence.
Slashing a hole in the phisher's net
While AI will be used by evil geniuses to craft their devilishly devious plans, it doesn’t mean it’s game over. In fact, far from it – for now!
IRONSCALES is harnessing machine learning to inject machine intelligence into our phishing defenses today.
Using machine learning algorithms, IronSights continuously studies every employee's inbox to detect anomalies in communication habits based on sophisticated user behavioral analysis. Suspicious emails are visually flagged the second they hit the inbox, and a quick button link inside the Outlook and Gmail toolbars enables instant SOC team notification while prompting security tools for further investigation and immediate remediation.
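As a rough illustration of this kind of behavioral baselining – IronSights' actual algorithms are proprietary, so every name, rule and example below is a hypothetical assumption – a sketch might look like this:

```python
# Purely illustrative sketch of baselining a user's communication habits and
# flagging messages that deviate from them; not IronSights' real logic.
from collections import Counter

def build_baseline(past_senders):
    """Count how often each sender address has appeared in this mailbox."""
    return Counter(s.lower() for s in past_senders)

def is_suspicious(sender, baseline):
    """Flag an unknown sender that reuses the local part of a known contact
    with a different domain (a common impersonation pattern)."""
    sender = sender.lower()
    if sender in baseline:
        return False  # an established correspondent
    local = sender.split("@")[0]
    return any(known.split("@")[0] == local for known in baseline)

baseline = build_baseline(["ceo@example.com"] * 5 + ["hr@example.com"] * 2)
print(is_suspicious("ceo@examp1e.net", baseline))        # True: impersonates a known contact
print(is_suspicious("newsletter@vendor.com", baseline))  # False: unknown, but no impersonation signal
```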
IronTraps then automatically executes a comprehensive phishing forensic examination of the suspicious email using our integrated Multi-AV and Sandbox Scan. Working in conjunction with IRONSCALES' advanced technology, IronTraps analyzes the number of reporters and their skill rankings, in addition to other proprietary analytics, to determine the most appropriate mitigation or remediation response. Once an attack is verified, an automatic remediation response is initiated, resulting in the enterprise-wide removal of all malicious emails.
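Again purely as an illustration – the real IronTraps analytics, weights and thresholds are proprietary – combining scan verdicts with reporter signals to choose a response could be sketched like this:

```python
# Hypothetical sketch only: every weight, name and threshold is an assumption.
# It shows the general pattern of blending scan results with the number and
# skill ranking of reporters to pick a response.
from dataclasses import dataclass
from typing import List

@dataclass
class Report:
    reporter_skill: float  # 0.0 (novice reporter) .. 1.0 (rarely wrong)

def decide_response(av_hits: int, sandbox_malicious: bool, reports: List[Report]) -> str:
    score = 2.0 * av_hits                            # each AV engine that flags the email
    score += 3.0 if sandbox_malicious else 0.0       # sandbox detonation verdict
    score += sum(r.reporter_skill for r in reports)  # crowd signal, weighted by skill
    if score >= 5.0:
        return "auto-remediate: remove the email enterprise-wide"
    if score >= 2.0:
        return "escalate to the SOC for review"
    return "monitor"

print(decide_response(av_hits=1, sandbox_malicious=True, reports=[Report(0.9), Report(0.4)]))
```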
Phishing email attacks are already bypassing spam filters and other gateway solutions to land in end users’ mailboxes daily. Imagine how much worse the situation will be when they’re AI-powered. Humans can’t be expected to identify every phishing email that lands in inboxes across the workforce, so organizations must fight fire with fire and deploy machine-powered phishing solutions that help the workforce spot attacks, whether written by man or machine.
Learn more about our award-winning products here and schedule a demo today