What the Twitter hack reveals about spear phishing – and how to prevent it
Dan Fein, Director of Email Security Products, Darktrace
Twitter has now confirmed that it was a “phone spear phishing attack” targeting a small number of its employees that allowed hackers to access 130 high-profile user accounts and fool thousands of people into giving away money via bitcoin.
Spear phishing involves targeted texts or emails aimed at individuals in an attempt to ‘hook’ them into opening an attachment or clicking a malicious link. This attack highlights the limitations of the security controls adopted by even some of the largest and most tech-savvy organisations, which continue to fall victim to this well-known attack technique.
The incident has been described by Twitter as a “coordinated social engineering attack” that “successfully targeted employees with access to internal systems and tools.”
Though the specific nature of the attack remains unclear, it likely followed a similar pattern to the series of threat finds detailed elsewhere on the Darktrace Blog: impersonating trusted colleagues or platforms, such as WeTransfer, Microsoft Teams or even Twitter itself, with an urgent message coaxing an employee into clicking on a disguised URL and inputting their credentials on a fake login page.
When an employee inputs their credentials, that data is recorded and beaconed back to the attacker, who will then use these login details to access internal systems — which, in this case, allowed them to subsequently take control of celebrities’ Twitter accounts and send out the damaging Tweets that left thousands out of pocket.
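The disguised-URL step described above can be illustrated with a toy lookalike-domain check. This is a minimal sketch, not Twitter's or Darktrace's actual defence; the brand list and similarity threshold are assumptions chosen for illustration:

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Illustrative allow-list of legitimate brand domains (an assumption, not a real config).
KNOWN_BRANDS = ["twitter.com", "wetransfer.com", "microsoft.com"]

def looks_like_lookalike(url: str, threshold: float = 0.8) -> bool:
    """Flag hosts that closely resemble, but do not match, a known brand domain."""
    host = urlparse(url).hostname or ""
    for brand in KNOWN_BRANDS:
        if host == brand or host.endswith("." + brand):
            return False  # exact match or a legitimate subdomain of the brand
        if SequenceMatcher(None, host, brand).ratio() >= threshold:
            return True   # near-miss spelling, e.g. a doubled or swapped letter
    return False

print(looks_like_lookalike("https://twiitter.com/login"))  # True — near-miss of twitter.com
print(looks_like_lookalike("https://twitter.com/home"))    # False — the genuine domain
```

Real attackers defeat naive checks like this with homoglyphs and freshly registered domains, which is exactly why the article argues static rules are not enough.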
Training the workforce is not enough
Twitter said in a statement that this incident has forced them to “accelerate several of [their] pre-existing security workstreams.” But the suggestion that they will continue to organise “ongoing company-wide phishing exercises throughout the year” indicates an over-reliance on the ability of humans to identify these malicious email attacks, which are growing ever more advanced and harder to distinguish from genuine communication.
Cyber-criminals are now using AI to create fake profiles, personalise messages and replicate communication patterns, at a speed and scale that no human ever could. In this threat landscape, there can no longer be a reliance solely on educating the workforce, as the difference between a malicious email and legitimate communication becomes almost imperceptible. This has led to an acceptance that we must rely on technology to help us catch the subtle signs of attack, when humans alone fail to do so.
The legacy approach: no playbook for new attacks
The majority of communications security systems are not where they need to be, and this is particularly true for the email realm. Most tools in use today rely on static blacklists of rules and signatures that analyse emails in isolation, against known ‘bads’. Methods like looking for IP addresses or file hashes associated with phishing have had limited success in stopping attackers, who have devised simple techniques to bypass them.
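The static blacklist approach described above can be caricatured in a few lines. The IPs and hashes here are placeholders (a documentation-range address and the SHA-256 of empty input), not real threat indicators:

```python
import hashlib

# Illustrative static blacklists — placeholder values, not real indicators of compromise.
BLOCKED_IPS = {"203.0.113.7"}  # documentation-range IP
BLOCKED_SHA256 = {
    # SHA-256 of empty input, standing in for a known-bad attachment hash
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_known_bad(sender_ip: str, attachment: bytes) -> bool:
    """The legacy model: an email is blocked only if it matches a known 'bad'."""
    digest = hashlib.sha256(attachment).hexdigest()
    return sender_ip in BLOCKED_IPS or digest in BLOCKED_SHA256

# Fresh infrastructure and a repacked payload sail straight through:
print(is_known_bad("198.51.100.9", b"novel malware variant"))  # False — no signature yet
```

The limitation is structural: the function can only answer “have we seen this exact thing before?”, which is precisely the gap attackers exploit by rotating domains and payloads.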
As we have explored previously, attackers are constantly changing their approach, purchasing new domains en masse, experimenting with novel strains of malware, and manipulating headers to get around common validation checks. It is due to these developments that Secure Email Gateways (SEGs) become antiquated almost the moment they are updated.
The mean lifetime of an attack has reduced from 2.1 days in 2018 to 0.5 days in 2020. As soon as an SEG identifies a domain or a file hash as malicious, cyber-criminals change their attack infrastructure and launch a new wave of fresh attacks. This mode of operation renders legacy security tools incapable of evolving with the threat landscape, and it is for this reason that over 94% of cyber-attacks today start with an email.
How Cyber AI catches the threats others miss
However, one area where email security has seen great progress even in the last two years is the application of AI to spot the subtle features of advanced email attacks, even those that leverage novel malware. This approach allows security tools to move away from the binary decision-making that comes with asking “Is this email ‘bad’?” and toward the far more useful question of “does this belong?”
This form of what we’re calling ‘layered AI’ combines supervised and unsupervised machine learning, enabling it to spot the subtle deviations from learned ‘patterns of life’ that are indicative of a cyber-threat.
Supervised machine learning models can be trained on millions of emails to find subtle patterns undetectable by humans and detect new variations of known threat types. These models are able to find the real-world intentions behind an email: by training on millions of spear phishing emails, for example, a system can find patterns associated with this type of email attack and accurately classify a future email as spear phishing.
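The supervised side can be sketched with a toy Naive Bayes text classifier. The four-message “corpus” below stands in for the millions of labelled emails the article mentions; everything here is an illustrative assumption, not Darktrace's model:

```python
import math
from collections import Counter

# Tiny labelled corpus standing in for "millions of emails" — purely illustrative.
PHISH = ["urgent verify your account now", "password reset required click link"]
HAM = ["meeting notes attached for review", "lunch on friday works for me"]

def _log_likelihood(text: str, counts: Counter, total: int, vocab_size: int) -> float:
    # Laplace-smoothed log-probability of the text's words under one class
    return sum(math.log((counts[w] + 1) / (total + vocab_size)) for w in text.split())

def classify(text: str) -> str:
    vocab = {w for doc in PHISH + HAM for w in doc.split()}
    phish_counts = Counter(w for doc in PHISH for w in doc.split())
    ham_counts = Counter(w for doc in HAM for w in doc.split())
    lp = _log_likelihood(text, phish_counts, sum(phish_counts.values()), len(vocab))
    lh = _log_likelihood(text, ham_counts, sum(ham_counts.values()), len(vocab))
    return "phishing" if lp > lh else "legitimate"

print(classify("urgent password verify now"))  # → phishing
```

A production system would use far richer features (headers, link structure, sender history), but the principle is the same: patterns learned from labelled examples generalise to new variations of a known threat type.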
In addition, unsupervised machine learning models can be trained on all available email data for an organisation to find unknown variations of unknown threat types — that is, the ‘unknown unknowns,’ the combinations never before seen. Ultimately this is what enables a system to ask that critical question “does this belong?” and spot genuine anomalies that fall outside of the norm.
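The unsupervised “does this belong?” question can be sketched as a per-sender pattern-of-life check. The arrival-hour history and z-score threshold below are invented for illustration, not a real deployment:

```python
from statistics import mean, stdev

# Hours (0-23) at which a colleague's past emails arrived — assumed historical data.
observed_hours = [9, 10, 9, 11, 10, 9, 10, 14, 9, 10]

def is_anomalous(hour: int, history: list, z_threshold: float = 3.0) -> bool:
    """Flag an arrival time that deviates sharply from this sender's learned pattern of life."""
    mu, sigma = mean(history), stdev(history)
    return abs(hour - mu) / sigma > z_threshold

print(is_anomalous(10, observed_hours))  # False — within the usual pattern
print(is_anomalous(3, observed_hours))   # True — 3 a.m. falls outside the norm
```

No label is needed: the baseline is learned from the organisation's own data, which is what lets this layer surface the ‘unknown unknowns’ a signature can never describe.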
Layering both of these applications of AI allows us to make determinations such as: ‘this is a phishing email and it doesn’t belong’, dramatically improving the system’s accuracy and allowing it to interrupt only the malicious emails – since there could be phishy-looking emails that are legitimate! It also enables us to act in proportion to the threat identified: locking links and attachments in some cases, or holding back emails entirely in others.
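The layered decision and proportional response described above might look like the following toy policy. The scores, thresholds and action names are assumptions made for illustration:

```python
def triage(phishing_score: float, anomaly_score: float) -> str:
    """Toy layered policy: combine the supervised and unsupervised verdicts.

    phishing_score: supervised classifier confidence that the email is phishing (0-1).
    anomaly_score: unsupervised 'does this belong?' deviation score (0-1).
    Thresholds and actions are illustrative assumptions.
    """
    if phishing_score > 0.8 and anomaly_score > 0.8:
        return "hold email"  # both layers agree: withhold the email entirely
    if phishing_score > 0.8 or anomaly_score > 0.8:
        return "lock links and attachments"  # act in proportion to the threat
    return "deliver"  # e.g. a phishy-looking but normal email scores low on both

print(triage(0.95, 0.9))  # → hold email
print(triage(0.9, 0.2))   # → lock links and attachments
print(triage(0.1, 0.1))   # → deliver
```

Requiring agreement between the layers before the harshest action is what keeps the false-positive rate down: an email that merely looks phishy but fits the sender's normal pattern is not withheld outright.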
This form of ‘layered AI’ requires an advanced understanding of mathematics and machine learning that takes years of research and development. With that experience, Cyber AI has proven itself capable of catching the full range of advanced attacks targeting the inbox, from spear phishing and impersonation attempts to account takeovers and supply chain attacks. Once implemented, it takes only a week before any new organisation can derive value, and thousands of customers now rely on Cyber AI to protect both their email realm and wider network.
Plenty more phish in the sea
This will not be the last time this year that a cyber-attack caused by spear phishing makes the headlines. Just this week, it was revealed that Russian-backed cyber-criminals stole sensitive documents on US-UK trade talks after a successful spear phishing campaign, and the technique may well have played a part in the ongoing vaccine research espionage that surfaced in July.
With the US presidential race heating up, it was recently revealed that fewer than 3 out of 10 election administrators have basic controls to prevent phishing. This attack method may come to not only damage organisations and their reputation, but also to undermine the trust that serves as the bedrock of democracy. Now is the time to start recognising the very real threat that email attackers represent, and to prepare our defences accordingly.