Happy Friday, my friend,
Just when you thought the AI hype was cooling off, now this. ChatGPT has received a lot of press over the past year for everything from publishing a children's book to writing graduate papers and code, and even for threatening creative professions such as copywriting and law.
Now a new threat has emerged: ChatGPT's evil twin, FraudGPT. As the name implies, it's a generative AI engine trained for malicious intent. According to claims made by its creator, FraudGPT can be used to "write malicious code, develop undetectable malware, and identify leaks and vulnerabilities."
The Takeaway
So what? Why does this matter to me?
FraudGPT is a sharp instrument aimed squarely at businesses: it makes it a whole lot easier for the bad guys to craft more convincing phishing emails and malicious code, and it lets even inexperienced criminals carry out large-scale, successful attacks.
My 3 tips:
1) Ensure that when updates try to install on your office computers, servers, tablets and phones, they actually get installed. Manufacturers patch critical vulnerabilities quickly, but if those patches never get rolled out across your company's infrastructure, you're a sitting duck.
2) I can't stress enough how important password hygiene is to protecting your company's data and reputation. In short, every website you use should have a unique, complex password and two-factor authentication enabled, preferably with a mobile authenticator app. I know it's hard. That's why a password manager can make it easier! Check out Keeper or Bitwarden. (If you're curious what a strong random password looks like under the hood, there's a short sketch after these tips.)
3) Be sure that your staff are trained on the latest and most convincing tactics being used in phishing emails. Yes, these tactics evolve - your staff should be prepared.
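For the technically curious, here is a minimal sketch in plain Python (nothing specific to Keeper, Bitwarden, or any other product) of how a strong random password gets generated. A password manager does this kind of thing for you, for every site, automatically:

    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        # Draw each character from letters, digits, and punctuation using a
        # cryptographically secure random source (Python's "secrets" module).
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())  # prints something different every run

The point isn't that you should run this yourself; it's that a 20-character random password is trivial for software to create and remember, and nearly impossible for an attacker to guess.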
Need help implementing these ideas? Feel free to reach out; we can help.
Stay safe out there.
-Attila
New Friday Funnies!
ChatGPT's joke for today on the topic of how scary AI can be:
Why don't AI systems tell ghost stories?
They're afraid they'll start debugging themselves!