Cybersecurity researchers have disclosed a new attack technique showing how artificial intelligence (AI) assistants with web browsing or URL-fetching capabilities can be abused as covert command-and-control (C2) relays. The method could allow attackers to hide malicious communications within legitimate enterprise traffic, making detection significantly more difficult.
The technique, called “AI as a C2 proxy,” demonstrates how attackers can combine anonymous web access with browsing and summarization prompts to send instructions and retrieve data through AI platforms. Researchers found that this approach could enable AI-assisted malware operations, including reconnaissance planning, automated scripting, and dynamic decision-making during an intrusion.
This development marks another evolution in AI-driven cyber threats. Instead of simply speeding up attacks, adversaries may use AI APIs to generate code in real time, allowing malware to adapt its behavior based on information collected from compromised systems and evade traditional security defenses.
AI tools already act as a force multiplier for threat actors, helping with phishing campaigns, vulnerability scanning, malware development, and synthetic identity creation. Using AI directly as a command-and-control proxy, however, represents a major escalation: it turns trusted AI services into potential components of attack infrastructure.
As organizations continue adopting AI assistants, security teams are being urged to monitor AI usage closely and strengthen safeguards against misuse.
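One practical starting point for the monitoring the researchers recommend is flagging outbound connections to known AI API endpoints that originate from unapproved processes. The sketch below illustrates the idea; the domain list, log-entry format, and process allowlist are illustrative assumptions, not details from the disclosure.

```python
# Illustrative sketch: flag network-log events where an unapproved process
# contacts a known AI API endpoint. All names below are hypothetical examples.

AI_API_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

# Hypothetical allowlist of processes sanctioned to reach AI services.
APPROVED_PROCESSES = {"chrome.exe", "slack.exe"}


def flag_suspicious(events):
    """Return events where an unapproved process reaches an AI API domain."""
    return [
        e
        for e in events
        if e["domain"] in AI_API_DOMAINS and e["process"] not in APPROVED_PROCESSES
    ]


if __name__ == "__main__":
    sample = [
        {"process": "chrome.exe", "domain": "api.openai.com"},     # approved
        {"process": "svchost.exe", "domain": "api.anthropic.com"}, # suspicious
        {"process": "python.exe", "domain": "example.com"},        # not an AI domain
    ]
    for event in flag_suspicious(sample):
        print(f"ALERT: {event['process']} -> {event['domain']}")
```

In production this logic would sit on proxy or EDR telemetry rather than a static list, but even a simple allowlist-plus-domain check surfaces the anomaly at the heart of this technique: malware, not a user, talking to an AI service.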

