Phishing: Stopping Socially Engineered ChatGPT Threats

Published July 3, 2023
Author: Ash Khan

ChatGPT and other generative AI tools can be used to create believable phishing emails and social engineering schemes. Here's how to recognise and combat them.

What is the distinction between a tool and a weapon? It all comes down to intent: what one person uses to create, another can use to harm.

Consider generative AI, which encompasses well-known technologies such as ChatGPT and Google Bard. Most people are excited about these tools because they can swiftly generate text, graphics, and code on request. They synthesise knowledge already present in their training data to produce intriguing and even useful replies.

However, fraudsters can use generative AI to launch phishing attacks and social engineering schemes. It is important to understand the negative aspects of generative AI so that your organisation can defend itself against unscrupulous actors.

The Negative Aspects of Generative AI

Email is a critical business communication channel. It is also, however, a favourite route for hackers, who use it to persuade targets to hand over sensitive information.

Phishing attacks are not new. To acquire valuable information from their targets, hackers disguise their attacks as legitimate emails. Employees are trained to spot red flags such as spelling and grammatical mistakes. However, identifying fraudulent emails becomes far more difficult when attackers use tools like ChatGPT or Bard to produce error-free email copy.

To be sure, some generative AI companies have built in precautions against this behaviour. ChatGPT will not cooperate with users who openly ask it to "generate a phishing email." Hackers, however, can have the tool generate a professional email in the name of a real-world company asking the recipient to reactivate their account by clicking a link. An unlucky employee looking for the traditional red flags may fall victim to such an AI-assisted attack.

Cybercriminals can also use generative AI to research their targets, making email-based attacks more convincing. Social engineering attacks centre on persuading targets to do things like share sensitive information or transfer money. By gathering pertinent details about a company's organisational structure and other publicly available information, attackers can tailor their messages to match target expectations.

What Does a ChatGPT Attack Look Like?

During a recent live demonstration, an ethical hacker showed how attackers can use ChatGPT to create a plausible phishing email.

He began by showing several dead ends in the tool: when asked outright, ChatGPT will not write malicious emails. The hacker circumvented this safeguard by instructing the tool to compose an email from the perspective of a vendor whose bank account has been closed, asking the target to send invoices to a new account number. He then had the tool make the phishing email urgent and short, the kind that would readily fool an ordinary end user.

This is a force multiplier. A hacker could write the same email by hand from the same information, but ChatGPT produces it in five seconds. It is considerably more effective still if you have examples of your victim's writing, because you can emulate them more accurately.

By including a sample email from the company's CEO, an attacker can refine the message further and replicate the voice of a real sender. Surprisingly, ChatGPT can even point out the red flags in its own email copy.

In this way, hackers can go beyond basic email production and sharpen their attacks through an iterative process with generative AI: making red flags less evident, anticipating victim responses, and improving the tone.
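
The same red-flag awareness can be turned toward defence. Below is a minimal sketch of asking a generative model to screen an inbound email for social-engineering cues. It assumes the `openai` Python client's ChatCompletion interface as it existed in mid-2023; the model name, prompt wording, and sample message are illustrative assumptions, not a vetted screening pipeline.

```python
# A minimal sketch: ask a generative model to list social-engineering
# cues in an inbound email. Assumes the mid-2023 `openai` client
# (ChatCompletion interface); model, prompt, and sample are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load from a secret store in practice

def screen_email(body: str) -> str:
    """Ask the model to list social-engineering red flags in an email body."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep the screening output deterministic
        messages=[
            {"role": "system",
             "content": ("You are an email security assistant. List any "
                         "social-engineering red flags in the user's email: "
                         "urgency, changed payment details, link-click "
                         "requests, or sender-identity mismatches.")},
            {"role": "user", "content": body},
        ],
    )
    return response.choices[0].message.content

print(screen_email(
    "URGENT: our bank account has changed. Please resend all outstanding "
    "invoices to the new account number below within 24 hours."
))
```

In practice such a screen would sit alongside, not replace, conventional email security controls.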

Generative AI Capabilities

Along the same lines, generative AI tools let hackers research targets quickly. Asking a tool to list the management team at an organisation is far faster than combing LinkedIn for the same information. Because generative AI can do this in seconds, each email can be more personalised than the last.

These are just a few of the ways hackers might employ generative AI. We are still in the early phases of this technology and cannot completely predict its effects. But that doesn't mean the battle is lost. There are steps businesses can take to improve email security.

Preventing Generative AI Phishing Email Attacks

It is critical to understand that these attacks are not new; it is only how cybercriminals create them that is progressing. The volume and scale of the campaigns are changing, but not the mechanisms themselves. Recognising this, there are a few steps to take.

Increase your company's email security: As we enter this new phase of attack, you will need AI to fight AI. Email security tools that use artificial intelligence and machine learning can detect anomalous behaviours and prevent threats from reaching your personnel.
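
The article names no specific implementation, but the idea of machine learning spotting unusual email behaviour can be sketched. The hypothetical example below trains scikit-learn's IsolationForest on simple per-message features; the feature choices (send hour, recipient count, link count, new-sender flag) and sample values are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-message features: [hour sent, recipient count, links in body,
# 1 if the sender is new to this mailbox else 0]. Illustrative values.
normal_traffic = np.array([
    [9, 1, 0, 0], [10, 2, 1, 0], [14, 1, 0, 0],
    [11, 3, 1, 0], [16, 1, 2, 0], [13, 2, 0, 0],
])

# Fit an anomaly detector on a mailbox's normal traffic.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(normal_traffic)

# A 3 a.m. message from an unseen sender with several links stands out.
suspicious = np.array([[3, 1, 4, 1]])
print(detector.predict(suspicious))  # -1 flags an anomaly, 1 means normal
```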

Integrate your solutions for improved security: Look for providers whose products combine with your current email security solutions to improve overall security in your environment. Integrated solutions can better detect compromised accounts because they search for aberrant activity across email accounts.
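
At its simplest, that kind of detection compares each account's recent activity against its own baseline. The sketch below is hypothetical: the input format, the five-times-volume threshold, and the new-country signal are illustrative assumptions, not any product's actual logic.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    user: str
    baseline_daily_sends: float  # historical average messages per day
    sends_today: int
    new_countries_today: int     # sign-ins from countries not seen before

def flag_compromised(accounts: list[AccountActivity],
                     volume_factor: float = 5.0) -> list[str]:
    """Flag accounts whose behaviour deviates sharply from their own baseline."""
    flagged = []
    for acct in accounts:
        spike = acct.sends_today > volume_factor * acct.baseline_daily_sends
        if spike or acct.new_countries_today > 0:
            flagged.append(acct.user)
    return flagged

accounts = [
    AccountActivity("alice@example.com", 12.0, 14, 0),  # normal day
    AccountActivity("bob@example.com", 8.0, 95, 1),     # spike plus new location
]
print(flag_compromised(accounts))  # ['bob@example.com']
```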

Continue to train watchful personnel: Employees must sharpen their ability to spot questionable requests as social engineering and phishing attacks become increasingly convincing. Make sure your staff are aware of generative AI and how it makes malicious messages more difficult to identify.

As the AI landscape evolves, security leaders must be prepared. With the continued rise of ChatGPT and comparable technologies, that need will only grow.

