
riversedge

(81,393 posts)
Mon May 4, 2026, 09:38 PM 11 hrs ago

AI fails to make inroads with cybercriminals, study finds

Source: techxplore.com


May 4, 2026 edited by Stephanie Baum, reviewed by Robert Egan

Cybercriminals have been struggling to adopt AI in their work, according to a first-of-its-kind study that analyzed a dataset of 100 million posts from underground cybercrime communities. The study is published on the arXiv preprint server.

In reality, most cybercriminals—often referred to as hackers—lack the skills or resources to support real innovation within their criminal activities, experts say.

The research found that AI was used most successfully for hiding patterns that are often detectable by cybersecurity defenders, and for running social media bots that conduct misogynistic harassment and make money from fraud.

The team of researchers from the Universities of Edinburgh, Cambridge and Strathclyde analyzed discussions from the CrimeBB database, which contains over 100 million posts scraped from underground and dark web cybercrime forums. They analyzed these conversations using a combination of machine learning tools and manual sampling techniques, searching for posts that discussed how cybercrime actors were experimenting with AI technologies beginning in November 2022, which marked the release of ChatGPT.

Through their analysis, the researchers found that AI coding assistants are mostly proving useful for already skilled actors rather than reducing the skill barrier to committing cybercrime, as the AI tools still require significant skills and knowledge to use effectively. The team also found some evidence of the use of AI tools in more advanced forms of automation, especially in social engineering and bot farming.
...



The findings have been peer reviewed and will be presented at the Workshop on the Economics of Information Security in Berkeley, U.S., in June 2026.

Dr. Ben Collier, Senior Lecturer in Digital Methods at University of Edinburgh's School of Social and Political Science, said, "Cybercriminals are experimenting with these tools, but as far as we can tell, they're not delivering them real benefits in their own work. Our message to industry is: Don't panic yet. The immediate danger comes from companies and members of the public adopting poorly secured AI systems themselves, opening them up to catastrophic new attacks that can be performed by cybercriminals with little effort or skill."

Read more: https://techxplore.com/news/2026-05-ai-inroads-cybercriminals.html?utm_source=twitter.com&utm_medium=social&utm_campaign=v2#google_vignette



Very interesting study to say the least.

Scott Horton
‪@robertscotthorton.bsky.social‬
It's really unclear at this point what AI can do for ordinary citizens, says this Edinburgh study, but what it can do for organized crime groups is staggering. Across 100 mn underground forum posts, AI helps cybercriminals most with hiding detectable patterns and running harassment/fraud bots.


Scott Horton (@robertscotthorton.bsky.social) 2026-05-05T01:06:02.326Z

jfz9580m

(17,698 posts)
1. I suspect it is that criminals actually want to make money
Tue May 5, 2026, 01:29 AM
7 hrs ago

And AI is useless, mainly a tool for bubble-pushing swindlers.
The CS people have been attacking medical research as needing stringent oversight, with asinine people like Ioannidis parroting it from Stanford, which of course had no motivations except the purest. Scams in medical research are far easier to detect; e.g., Elizabeth Holmes vs. the Tech Bros, though even in her case her mistake was scamming the investor class, which is where these guys are headed, hence the desperation to take over society.
Maybe it is time to apply actual scientific standards to AI and CS in general.

A nuisance field with dreary and unwanted products, toys, and services, outside of cybersecurity and some essential stuff we pay for, as opposed to all the stuff we don't pay for since we don't want it. Outside of Woebot, Lybrate, etc., that is rare in medicine.

And their use of metaphors is brain destroying.

That Diplomacy game with human oversight vaguely seems minimally annoying in its dealings and communications with those guys.

Tired of my wallet being attacked day after day. My govt (the real one not a metaphor) has seriously pissed me off.
