ChatGPT and DALL-E are forms of generative AI, capable of creating content and offering advice on practically anything. Every professional knowledge worker should consider using these tools or risk getting sidelined by them.
How are cybercriminals using ChatGPT? According to new research published yesterday by BlackBerry:

▪ 51% of security leaders (a majority) expect ChatGPT to be at the heart of a successful cyber-attack within the next 12 months

▪ 71% believe nation-states are likely already using the technology for malicious purposes against businesses and other countries
Despite its enormous potential, information security experts have raised concerns that threat actors could use it to launch attacks, including malware development and convincing social engineering scams.
There are also fears it will be used to spread misinformation online more quickly and convincingly. For example, the survey found that security professionals believed:

▪ AI can craft more believable and legitimate-sounding phishing emails (53%)

▪ Less experienced cybercriminals can use AI to improve their technical knowledge and develop more specialized skills (49%)

▪ AI can spread misinformation faster than humans can (49%)

The Takeaway

While the study acknowledged that ChatGPT can be used for 'good,' 74% of respondents viewed it as a potential cybersecurity threat. 82% of IT decision-makers plan to invest in AI-driven cybersecurity in the next two years, with almost half (48%) planning to invest before the end of 2023. Sidenote: our Total Security service already includes AI-driven protection.

I recommend using common sense when reading online blogs, social media posts, or convincing emails. If it sounds just too good to be true, it may well have been written by a robot. Be especially vigilant over the next 12 months, as reliable methods of detecting AI-generated content have yet to reach the market.

And by the way, no AI was used in the writing of this email! Stay safe out there.

-A