OpenAI, Microsoft Block ChatGPT Hackers in China, North Korea

OpenAI and Microsoft have recently disrupted efforts by state-affiliated threat actors from China, Iran, North Korea, and Russia who were attempting to exploit ChatGPT for malicious purposes. These groups were leveraging the AI technology for activities such as phishing attacks, spreading disinformation, and other deceptive practices, and also attempted to use the chatbot to support cyber espionage and influence operations.

Microsoft and OpenAI have been collaborating to combat these threats, implementing strict usage policies and moderating system outputs to prevent misuse. They have further strengthened the system's defenses through active human moderation, machine learning models that detect abuse, and user feedback that improves overall resilience. The vigilance of these firms underlines the importance of cybersecurity measures in the ever-evolving landscape of artificial intelligence.

