FBI Sounds Alarm On Rising Cyber Threats Driven By AI Technology

The agency says that AI is being used to generate more sophisticated phishing attacks, create more effective malware, and even produce deepfakes.

The emergence of generative artificial intelligence (AI) has brought significant benefits, aiding and enhancing many aspects of people's work.

From creative endeavours like art and music generation to improving productivity in industries like healthcare, finance, and research, generative AI has demonstrated immense potential to streamline processes and solve complex problems.

However, like any powerful technology, generative AI also has the potential to be misused for criminal purposes.

According to PCMag, the Federal Bureau of Investigation (FBI), a law enforcement agency in the United States, said that criminals are actively utilising open-source generative AI programs to carry out various nefarious activities, such as developing malware and conducting phishing attacks.

The agency highlights that the popularity of generative AI programs within the tech industry is also fuelling cybercrime, with malicious actors employing these programs to refine and execute scams. The implications are far-reaching, with even terrorists seeking to leverage the technology to enhance chemical attacks.

A senior FBI official warned that as the adoption and accessibility of AI models continue to grow, these criminal trends are likely to escalate. While the FBI refrained from disclosing the specific AI models used by criminals, it observed that hackers are gravitating towards freely available, customisable open-source models.

Additionally, private AI programs developed by hackers are circulating in the cybercriminal underworld for a fee.


Seasoned cybercriminals are exploiting AI technology to craft new malware attacks and refine delivery methods, such as using AI-generated websites to conceal malicious code and employing polymorphic malware to evade antivirus software more effectively.

Polymorphic malware is a type of malware that constantly changes its identifiable features, such as its file names and code signatures, in order to evade signature-based detection.

The FBI also cautioned about scammers employing AI image generators to create sexually explicit deepfakes of victims in extortion attempts. The exact scale of these AI-powered schemes remains uncertain, but the FBI stressed that a significant portion of cases they encounter involve criminals bolstering their traditional schemes with AI models.

This includes attempts to defraud individuals, especially the elderly, through scam phone calls using AI voice-cloning technology.
