What strategies should companies adopt to mitigate the growing number of AI-based phishing attacks? Here are seven guidelines to keep in mind.

Recognize the threat
Cybersecurity leaders can move forward by understanding machine learning (ML) from the attacker's perspective, as a hacking tool. Currently, the most consequential area of AI in cybersecurity is content creation. It is also where ML is progressing fastest, and it is a natural fit for hackers who use phishing and malicious chatbots as attack vectors. Anyone with an Internet connection can use ChatGPT to produce attractive, well-structured text.
“Finding bad grammar or spelling mistakes is a thing of the past,” says Conal Gallagher, CISO of IT management firm Flexera. “Even before ChatGPT, phishing emails had grown more sophisticated than they once were.” The same questions still apply: Is the sender address correct? Is the email trying to get you to click a link? Security awareness training remains as important as ever.
Gallagher cited research from cybersecurity firm WithSecure demonstrating that effective phishing emails can be generated by interacting with ChatGPT. Studies also confirm that safeguards against using AI tools for illegal purposes are ineffective, and that customized tools for such purposes are already being built.
It is important to recognize that AI can produce effective content today and that the technology will only improve. LLM tools will get better and more accessible, and custom tools will be built specifically for attackers. Now is the time to start thinking about, and implementing, measures to strengthen your security policy.
You should also expect phishing content to become not only more persuasive but also more targeted, reflecting the specifics of time, place, and event. You can no longer rely solely on the telltale signals of malicious email. Images, and even audio and video, can be faked with content creation techniques. It bears repeating: be wary of any email you were not expecting to receive.
Recognize that mindset and culture are our main defenses
Scott Ozenbaum, a former Supervisory Special Agent in the FBI's Cyber Division, told CSO, “Ninety percent of cybercriminal damage can be prevented if the end user has some basic knowledge. That is the place to start. All the other methods cost money and, unfortunately, don't seem to work. I hope someone tells me I'm wrong so I can retire.”
“The first line of defense is to be your own human firewall,” Ozenbaum emphasized. In other words, the human mindset is at the heart of cybersecurity, and it is essential to cultivate that mindset throughout the company.
“Culture trumps strategy, and it's always top-down,” says Stu Sjouwerman, CEO of security awareness training company KnowBe4. The daily thoughts and actions of employees form the company's basic immune system, so ongoing training to foster security awareness matters. The key point with AI-based phishing is that, when evaluating emails and other communications, employees should not judge by polish and sophistication alone. Phishers no longer send sloppy text, so even well-written messages warrant high alert.
Highlight appropriate actions
The built-in security features of email and other software infrastructure generally keep users safe unless the user takes an action. This is where you can set tripwires in your thinking: become acutely aware of the moments that trigger an action. Sensitive information is not at risk until an employee sends a reply, runs an attachment, or fills out a form. The first ring of mental defense is the question, “Is this content legitimate in its full context, not just on its face?” The second ring is, “Wait! I'm being asked to do something now.”
If an employee who encounters a phishing attempt takes the next step, that is a big win for the attacker, because an attack can only continue when the victim makes that one step of progress. Security professionals need to educate themselves, their employees, and everyone around them so that an alarm goes off automatically in their heads whenever they are asked to enter information or launch an unknown application.
Vigilance needs to be raised further for operations such as money transfers. In one case, a deepfake was used to trick an employee into believing the boss had issued legitimate transfer instructions. High-priority communications should be verified through a secondary, phishing-proof channel.
“Everyone's first reaction should be to go directly to the organization to view the message, not to click a link,” said Bob Kelly, product manager at Flexera.
Run phishing simulations
Testing is the only way to know how well a company responds to phishing, and using AI-generated content in those tests is an important part of threat response. How to run effective phishing simulations is a topic in its own right; it starts with defining specific, measurable metrics to guide testing. A good example is measuring how often phishing emails are reported and running campaigns to improve that metric.
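As an illustration, campaign metrics like report rate and click rate can be computed from per-recipient outcomes. This is a minimal sketch; the user names and results are hypothetical:

```python
# Hypothetical per-recipient outcomes from one simulated phishing campaign;
# the names and results are made up for illustration.
results = [
    {"user": "alice", "clicked": False, "reported": True},
    {"user": "bob",   "clicked": True,  "reported": False},
    {"user": "carol", "clicked": False, "reported": False},
    {"user": "dave",  "clicked": False, "reported": True},
]

report_rate = sum(r["reported"] for r in results) / len(results)
click_rate = sum(r["clicked"] for r in results) / len(results)

# Track these per campaign: report rate should trend up, click rate down.
print(f"report rate: {report_rate:.0%}, click rate: {click_rate:.0%}")
```

Comparing these numbers across successive campaigns shows whether awareness training is actually moving behavior.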
When designing an anti-phishing campaign, it is worth emphasizing how useful AI tools are for effective content creation; this drives home the need to take the issue seriously. Trevor Duncan, security engineer at JumpCloud, told CSO, “AI will continue to be used. By reinforcing and testing best practices frequently, you can build resilient security. If you aren't currently running social engineering exercises involving employees, consider adding them to your 2023 plans. It can improve your security posture and make your security program resilient.”
Use AI detection and automation tools
Several companies, including ChatGPT developer OpenAI, have released tools to detect AI-generated text. Integrating and automating these tools, which will keep improving alongside NLP generators, can help detect malicious content. Email analytics vendors are also starting to use AI to weigh contextual details, such as metadata and location, when judging whether content is legitimate. Fighting back, using AI against AI, is an important strategy for the future of cybersecurity.
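One of the contextual signals such tools weigh is whether links in a message actually point at the purported sender's domain. Below is a deliberately simple sketch of that single check (the function name and sample domains are hypothetical; real products combine many signals with ML models):

```python
import re
from urllib.parse import urlparse

def mismatched_links(sender_domain: str, body: str) -> list[str]:
    """Return URLs in the body whose host is outside the sender's domain.

    A toy contextual signal; real email-analysis products combine many
    such signals (metadata, location, history) with ML models.
    """
    urls = re.findall(r'https?://[^\s"\'<>]+', body)
    flagged = []
    for url in urls:
        host = urlparse(url).hostname or ""
        if host != sender_domain and not host.endswith("." + sender_domain):
            flagged.append(url)
    return flagged

# A look-alike domain (digit 1 in place of the letter l) fails the check.
body = "Reset your password at https://login.examp1e-corp.com/reset today."
print(mismatched_links("example-corp.com", body))
```

Note this heuristic alone is easy to evade; its value comes from being combined with the other contextual details mentioned above.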
Phishing detection is an important part of your overall network and infrastructure strategy, and it is particularly effective when AI-driven reconnaissance and infiltration is met by AI-driven detection and blocking. Many major security vendors, including Okta and Darktrace, integrate such tools into their products.
“Bots that leverage AI and ML to quickly adapt to changing security postures are effective tools of attack,” said Jameeka Green Aaron, CISO of Customer Identity at Okta. “Moving forward, you need to take advantage of automation designed to gather real-time threat intelligence, and of adaptive authentication, a method that validates a user's identity based on factors such as location, device state, and end-user behavior.”
AI detection is an actively researched area of ML. It will continue to advance as a countermeasure to AI-based phishing, so keep an eye on it.
Provide an easy-to-use phishing reporting system
Reporting phishing to the security team is essential to combating AI-based attacks. Because AI campaigns can be mass-produced efficiently, it is important to know about phishing as it unfolds: prompt reports let you alert employees quickly and feed valuable information to anti-phishing tools and AI detection models.
Beyond making reporting easy, the reporting system should collect as much information as possible, both to increase the value of each report and to enable action on it. Forwarding an email to a reporting address is useful for capturing all of the headers and metadata in the message, while a portal with a simple form works well for reporting things like phishing websites. Government agencies around the world, including CISA, also encourage the adoption of Domain-based Message Authentication, Reporting, and Conformance (DMARC) policies.
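A reporting pipeline can pull the triage-relevant headers out of a forwarded message with Python's standard-library email parser. This is a minimal sketch, and the raw message below is a made-up example:

```python
from email import message_from_string
from email.policy import default

# A made-up raw message as it might arrive at a phishing-report mailbox.
raw = """\
From: "IT Support" <helpdesk@examp1e-corp.com>
Reply-To: attacker@evil.example
Subject: Urgent: password expires today
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=evil.example

Click here to keep your account active.
"""

msg = message_from_string(raw, policy=default)

# Headers most useful for triage: who the mail claims to be from, where
# replies actually go, and what the receiving server's checks concluded.
triage = {h: str(msg[h]) for h in ("From", "Reply-To", "Authentication-Results")}
for header, value in triage.items():
    print(f"{header}: {value}")
```

The mismatch between the From domain and the Reply-To address, plus the failed SPF check, is exactly the kind of detail that forwarding (rather than screenshotting) a message preserves for analysts.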
Phishing reporting is an essential part of a strong security infrastructure, and effective reporting matters especially in the context of AI campaigns, because automation plus information gathering makes attackers better able to deploy spear phishing attacks that use specific information from inside an organization.
Implement a phishing-resistant authentication process
Password authentication is inherently vulnerable to phishing, and methods like CAPTCHA are vulnerable to AI. There are, however, authentication methods that resist phishing. Passkeys are considered the most phishing-resistant option; still maturing, they are slowly entering the mainstream. Once implemented, phishing for credentials becomes virtually impossible.
Multi-factor authentication (MFA) is also useful. When a second factor is required, hackers cannot access resources simply by capturing a username and password combination through a phishing site or other interaction. CISA has likewise published guidance on phishing-resistant MFA, highlighting its importance.
editor@itworld.co.kr


