Happy Friday my friend!
The bad guys are at it again, and this time they're sneaking malware into AI chatbots. I know, it's Friday and you have AI fatigue, but this is new, and it's a problem you can only avoid by knowing about it.
Malvertising is a well-known strategy that pops up now and again: bad actors buy Google and Bing ads to fool you into downloading what looks like legitimate, popular software but is actually saddled with malware. Security researchers at Malwarebytes discovered this week that malicious ads were being delivered inside Bing's AI chatbot.
Bing Chat, powered by OpenAI's GPT-4, made its debut in February, and since then Microsoft has connected it to the Internet so it can search the web. We're just at the beginning of this thing.
The problem is that Microsoft is now attempting to monetize its AI chatbot by inserting paid ads into the AI's search results. Here's what you need to look out for:
Incorporating ads into Bing Chat has opened the door to threat actors, who increasingly take out search advertisements to distribute malware.
AI-powered chat tools can instill unwarranted trust, making users more likely to click on ads than they would be while skimming through an impersonal list of search results.
This conversation-like interaction can imbue AI-provided URLs with a misplaced sense of authority and trustworthiness, so the existing problem of malvertising on search platforms is amplified by the introduction of AI assistants.
Your best defense is awareness: make sure you and your staff know to watch for these types of attacks.
If you know someone who would find these updates useful, please consider forwarding this email - it might just save them from disaster.
Stay safe out there.
New Friday Funnies
Why did the AI break up with the computer? It found someone byte-ter