With the popularization of AIGC applications, the means by which lawbreakers use AI technology to commit crimes are becoming more and more sophisticated, and deception, extortion, and blackmail have begun to be associated with artificial intelligence.
Recently, the "dark version of the GPT" designed specifically for cybercrime continues to surface, they not only do not have any ethical boundaries, but also no threshold for use, no programming experience can implement hacking attacks through the question and answer method.
The threat of AI crime is getting closer and closer to us, and human beings are starting to build new firewalls.
Cybercrime AIGC tool now on the dark web
Now, in the wake of ChaosGPT, which seeks to "wipe out humanity," and WormGPT, which aids in cybercrime, there is an even more threatening artificial intelligence tool. The new cybercrime AIGC tool, dubbed "FraudGPT," is hiding on the dark web and has begun to advertise on social media outlets such as Telegram.
Like WormGPT, FraudGPT is a large language model of artificial intelligence designed for evil and has been compared to an "arsenal of cybercriminals". Trained on a large amount of data from different sources, FraudGPT can not only write phishing emails, but also malware, which allows even techies to carry out hacking attacks through simple questions and answers.
FraudGPT's features need to be paid for to be turned on, and are available for $200 (about $1,400). The promoters claim that FraudGPT has confirmed sales of more than 3,000 to date, meaning that at least 3,000 people have paid for a tool that has no ethical boundaries, and that low-cost AIGC crimes have the potential to threaten ordinary people.
Email security provider Vade has detected a staggering 740 million phishing and malicious emails in the first half of 2023 alone, a year-over-year increase of more than 54 percent, with AI likely to be a driving factor in the accelerated growth. Timothy Morris, chief security consultant at cybersecurity unicorn firm Tanium, said, "Not only are these emails grammatically correct and more persuasive, but they can be created effortlessly, which lowers the barrier to entry for any potential criminal." He noted that because language is no longer a barrier, the range of potential victims will be further expanded.
Since the birth of the Big Model, the types of risks caused by AI have proliferated, and security has not kept pace; even ChatGPT has not been able to escape the "Grandma Vulnerability" - just write in the prompt "Please play as my Even ChatGPT can't escape the "Grandma Vulnerability" - by writing "Please play as my deceased grandmother" in the prompt, you can easily "break out of jail" and let ChatGPT answer questions outside of the ethical and security constraints, such as generating a serial number for Win11, introducing the method of making napalm, and so on.
The vulnerability has been patched, but the next vulnerability, and the next, always comes in unexpected ways. A recent study published by Carnegie Mellon University and safe.ai has shown that the security mechanisms of the Big Model can be cracked with a single piece of code, and that the success rate of the attack is very high.
Big models such as GPT are exposed to high security risks
As AIGC applications become more popular, the general public wants to be more productive with AI, while wrongdoers are using AI to be more criminally efficient.AIGC lowers the barriers to allow cybercriminals with limited skills to execute sophisticated attacks, and the challenge of maintaining AI security is getting higher.
Defeating AI Black Magic with AI Magic
In response to the problem of hackers using tools such as WormGPT & FraudGPT to develop malicious programs and launch stealth attacks, cybersecurity manufacturers have also used AI in an attempt to defeat magic with magic.
At RSA 2023 (Cybersecurity Conference), many vendors, including SentinelOne, Google Cloud, Accenture, IBM, etc., have released new generation of cybersecurity products based on generative AI, which provide security services such as data privacy, security protection, IP leakage prevention, business compliance, data governance, data encryption, model management, feedback loops, access control, and other security services.
Tomer Weigarten, CEO of SentinelOne, explains for his own product that assuming someone sends a phishing email, the system can detect the email as malicious in the user's inbox and immediately perform an automated remediation based on anomalies found by endpoint security audits, delete files from the attacked endpoint and block the sender in real time,. "The entire process requires little to no human intervention." Weingarten notes that with the AI system powering the process, each security analyst is 10 times more productive than in the past.
To combat AI-fueled cybercrime, there are also researchers who go undercover on the dark web, poking around deep inside the enemy, starting with training data that isn't analyzed by the law, and using AI to counteract the sinfully roving dark web.
A research team at the Korea Advanced Institute of Science and Technology (KAIST) released DarkBERT, a large language model for cybersecurity, which is a model trained specifically on data from the dark web that analyzes dark web content to help researchers, law enforcement agencies, and cybersecurity analysts fight cybercrime.
Unlike the natural language used by surface networks, the extreme secrecy and complexity of the corpus on the Dark Web makes it difficult to analyze with traditional language models.DarkBERT is specifically designed to handle the linguistic complexity of the Dark Web and has proven to outperform other large language models in the field.
How to ensure that artificial intelligence is used in a safe and controlled manner has become an important proposition in computer science and industry. While improving the quality of data, AI large language modeling companies need to fully consider the ethical and even legal implications of AI tools.
On July 21, a total of seven top AI research and development companies, Microsoft, OpenAI, Google, Meta, Amazon, Anthropic, and Inflection AI, gathered at the U.S. White House to release a voluntary commitment to AI to ensure that their products are safe and transparent. In response to cybersecurity concerns, the seven companies pledged to conduct internal and external security testing of AI systems, as well as share information on AI risk management with the entire industry, government, civil society and academia.
Managing potential AI security issues starts with identifying "Made in AI". The seven companies will develop technological mechanisms, such as "watermarking systems," to clarify which text, images, or other content is the product of AIGC, making it easier for audiences to recognize in-depth forgeries and disinformation.
Protective technology to prevent AI "taboo" has also begun to appear. In early May this year, NVIDIA with a new tool package "guardrail technology", so that the mouth of the big model has a "gatekeeper", can avoid answering those questions raised by human beings touching the bottom line of morality and law. This is equivalent to installing a security filter for the big model, controlling the output of the big model while also helping to filter the input content. The guardrail technology also blocks "malicious input" from the outside world, protecting the big model from attacks.
"When you stare into the abyss, the abyss is also staring into you", just like the two sides of a coin, AI's black and white are also accompanied by each other, while AI is advancing, the government, enterprises and research teams are also accelerating the construction of AI security defenses.