- ChatGPT can be manipulated to create viruses and malicious code
- Report shows some hackers are openly using it to target Macs
- But ChatGPT might not be the devastating weapon some fear
The question of whether Macs can get viruses (and if they do, whether you should install antivirus software) is a contentious one among Apple fans.
On one side are those who believe that antivirus applications are more trouble than they're worth, slowing down your computer to guard against a minimal threat. On the other are those who urge caution in a changing world of hackers and virus writers. Now, a new report from Mac security firm Moonlock suggests that the threat of AI-powered malware is increasing.
It's all a bit confusing, and it can often be difficult to know which side to believe. But with this new report shedding light on some of the tactics hackers are using against Mac users, could that be about to change? Here's our verdict.
The myth: Macs don't get viruses
There's a long-held belief that Macs don't get viruses, and supporters say that a mix of common sense (don't download torrents and pirated software, for example) and built-in macOS tools like Gatekeeper are enough to keep you protected from anything that comes your way.
In reality, though, we've seen reports that Mac virus threats have been increasing at a rapid rate in recent years, with malware creators honing their skills to target Apple users. Even North Korean hackers are getting in on the action, such is the growing importance of macOS to threat actors.
With the simultaneous rise of artificial intelligence (AI) chatbots, there has been notable concern among some that tools like ChatGPT allow even novice hackers to create devastating strains of malware that can bypass the most robust Mac defenses.
Now, Moonlock's report appears to confirm some of those fears. It cites cases of hackers creating functional malware simply by asking an AI chatbot to write the code.
For example, the Moonlock report includes messages posted by a hacker known as 'barboris', who shared code produced by ChatGPT on a malware forum. There, barboris explained that they had little coding experience, but were still able to get ChatGPT to do their bidding with a bit of creative prompting.
However, before we panic, ChatGPT is not the all-powerful malware creation tool it might seem. As with any AI chatbot, it can be prone to errors and confused nonsense, which has the potential to ruin an aspiring hacker's day. If someone with no malware experience were to use ChatGPT to create a virus, they might have a hard time fixing it and turning it into something viable.
I previously spoke to a variety of security experts about this very topic, and they were skeptical about ChatGPT's ability to create effective malware. Chatbots have built-in guardrails to prevent people from creating malicious code, and for Martin Zugec, director of technical solutions at Bitdefender, if a person relies on ChatGPT to write code for them, they probably don't have the skills needed to get around those guardrails.
Because of this, Zugec says, “the risk posed by chatbot-generated malware remains relatively low at this time.” What's more, Zugec adds that “the quality of malware code produced by chatbots tends to be low, making it a less attractive option for experienced malware writers who can find better examples in public code repositories.”
In other words, while barboris may have created a virus using ChatGPT despite their limited hacking knowledge, a more experienced coder would likely get better results and more effective malware from public repositories and their own honed skills.
Still, it is clearly possible for inexperienced hackers to code working viruses with little more than ChatGPT, a handful of effective prompts, and a lot of patience. This is something we will have to monitor closely in the coming years.