Phishing attacks have become far more targeted and customized than in the past. The goal of a phishing attack is to steal sensitive data, such as credit card numbers or login credentials, or to install malware on the victim's machine. Phishing has evolved considerably over the past dozen or so years, and we now have many subtypes, including spear phishing (targeting specific users), whaling (targeting high-profile users with considerable resources or access privileges), smishing (phishing via SMS messages), quishing (phishing using QR codes) and vishing (telephone-based phishing).

Leveraging AI for more convincing phishing lures
Contemporary phishing lures tend to fall into two basic categories. Many phishing messages attempt to replicate transactional messages, for example, an invoice or a receipt for a purchase. Others target victims who are eager for news about specific topics, such as current events, natural disasters, or the latest celebrity gossip. With readily accessible open-source information and publicly available breach data, a substantial trove of sensitive information about individuals and groups is freely available online to attackers motivated enough to search for it. Attackers can use this information to customize phishing emails and make the content even more relevant to their victims. To aid in that customization, attackers are increasingly turning to AI applications such as ChatGPT, which can generate phishing content that sounds quite convincing. Fortunately, the designers of ChatGPT have built in some guardrails, so attackers cannot simply ask it to generate a phishing lure. By phrasing the request a little differently, however, attackers can still get ChatGPT to produce convincing content for use in phishing attacks.
Over the past few years, we have seen how attackers, such as those behind Emotet, have leveraged existing email threads to compel targets to open attachments or click links. Using AI applications similar to ChatGPT, those malicious emails could be customized based on the context of the prior threads. Voice cloning is another piece of AI technology expected to play a role in future phishing attacks. Deepfake technology has already progressed to the point where a familiar voice over the telephone can fool users. Once deepfake tools become more widely available, we expect attackers to deploy them as an additional mechanism to phish users.
Chetan Raghuprasad is a cybersecurity researcher with Cisco Talos, focusing on hunting and researching the latest threats in the cyber threat landscape and generating actionable intelligence. He seeks to uncover the tactics, techniques, and procedures used by threat actors by reversing and analyzing threats to identify the actors' motives and origins. Chetan also publicly represents Cisco Talos by writing Talos blog posts and speaking at cybersecurity conferences worldwide. He has 14 years of experience in the information security sector, having worked within incident response and forensic analysis teams analyzing attacks against institutions in Singapore and across Southeast Asia. He is CISSP certified and a SANS-certified malware reverse engineer.