We frame AI and cybercrime the wrong way. Most discussions still treat AI as a tool for fraudsters, but the bigger threat is that AI is starting to carry out important parts of the fraud itself. This shift is occurring in parallel with the broader industry move toward autonomous AI agents, systems designed to plan and execute tasks with minimal human oversight, which blurs the line between tool and actor.
Cybercrime used to carry a human signature, even when the code was complex. Someone wrote the malware, someone set the phishing lure, and someone decided when to move laterally, when to exfiltrate data, and when to cash out. Security teams could follow this chain of intent, even when they failed to stop it in time.
Recent AI-powered attacks suggest that this chain is starting to break. In February, an unknown hacker reportedly used an Anthropic chatbot to automate cyberattacks against Mexican government agencies. The reported theft covered 150 gigabytes of leaked data, including voter records, civil registry files, employee credentials, and 195 million exposed identities. What’s even more disturbing is how the hack was carried out.
The system reportedly scanned government networks, found vulnerabilities, and chose which ones to exploit on its own, with no apparent need for specific human instructions at each stage. Once inside, it is said to have created tailored exploits in real time, adapting to shifting defenses and moving quickly enough to turn access into a mass leak. While elements of this behavior resemble well-known automated penetration-testing tools, the degree of independence described represents a notable escalation.
In the end, investigators were left with a series of automated actions and no clear actor behind them. Traditional forensic methods produced neither the fingerprint of an identifiable attacker nor a clear suspect. What remains is an attack pattern consistent with AI-assisted execution. That is the strategic warning embedded in the breach.
The attacker disappears from sight
The Mexico case matters because it combines several troubling trends in a single incident. AI reduced the work needed to identify vulnerabilities and produce attack code, sped up execution once access was gained, and made attribution more difficult after the fact. This is in line with broader warnings from cybersecurity firms and government agencies that AI is compressing the attack lifecycle from weeks to minutes.
Fraud is moving in the same direction. Deepfakes are no longer a novelty reserved for election clips or celebrity hoaxes; they have become a usable criminal tool. In one notable case in early 2024, a deepfake video conference convinced an employee of the British engineering firm Arup to transfer $25 million. Insurers are also beginning to price the damage caused by AI-driven impersonation and reputational harm.
The same pattern is now reaching everyday users in more personal settings. Fake celebrity endorsements continue to drive investment scams and consumer fraud. High-profile figures like Taylor Swift and Elon Musk have been repeatedly used in AI-generated fraud campaigns, underscoring how well-known identities are being weaponized at scale. Synthetic voices and synthetic personas have become more convincing. The threat is no longer theoretical.
At Humanity, we ran a controlled experiment to see how easy it is to create convincing dating profiles with widely available AI tools and gain the trust of real users. The profiles cleared Tinder’s checks, engaging 296 users and convincing 40 to agree to in-person meetings. The most important lesson came after that initial pass: once a profile looks credible, the system can keep a conversation going with quick responses and enough consistency to feel human. At one point, the experiment was handling about 100 conversations simultaneously. This is the change organizations must pay attention to: fraud now relies on synthetic identities that can stay believable long enough to move people from conversation to action.
Synthetic identity has become an operational tool for deception, and artificial intelligence is moving from persuasion to execution. It helps attackers find vulnerabilities faster, develop exploits more quickly, and compress the path from reconnaissance to damage. Major AI labs such as Anthropic and OpenAI are building capable agent systems that carry out multi-step actions, raising questions about how these systems are authenticated, constrained, and audited in real-world environments. Provenance is now at the center of the security challenge.
Verification must move to the action layer
Traditional cybersecurity still assumes that serious attacks can ultimately be traced back to a human operator, group, or organization. That assumption weakens as AI takes over more of the execution layer. When a system can adapt in real time and leaves only automated traces behind, attribution becomes harder, both operationally and legally.
The challenge is technical, legal, and jurisdictional at once. An AI system does not need a fixed location and can operate across borders simultaneously, making traditional methods of attribution and enforcement less effective. This already clashes with emerging regulatory frameworks, which emphasize accountability but lack clear mechanisms for identifying autonomous systems in practice. This is why AI actions must carry a verifiable cryptographic identity. A signed action creates a durable audit trail, helps establish legitimacy, and gives investigators a stronger basis for future attribution.
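To make this concrete, here is a minimal sketch, in Python, of what a signed, tamper-evident audit trail for agent actions could look like. The record format, field names, and agent identifier are illustrative assumptions rather than an existing standard; the signing primitives come from the widely used cryptography library, and the hash chain is one common way to make a log tamper-evident.

```python
import hashlib
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Illustrative sketch: every action an agent takes is serialized, chained to
# the hash of the previous record, and signed with the agent's private key.
# In practice the key would be issued and registered, not generated ad hoc.
agent_key = Ed25519PrivateKey.generate()

def signed_record(agent_id: str, action: dict, prev_hash: str) -> dict:
    payload = {
        "agent_id": agent_id,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,  # links records into a tamper-evident chain
    }
    body = json.dumps(payload, sort_keys=True).encode()
    return {
        "payload": payload,
        "hash": hashlib.sha256(body).hexdigest(),
        "signature": agent_key.sign(body).hex(),
    }

# Usage: an append-only log in which every entry commits to its predecessor,
# so any later tampering breaks the chain and is detectable.
log, prev = [], "0" * 64
for act in ({"type": "scan", "target": "host-a"}, {"type": "read", "resource": "db"}):
    record = signed_record("agent-42", act, prev)
    log.append(record)
    prev = record["hash"]
```

Anyone holding the agent’s registered public key can later verify each signature and walk the chain, which is exactly the kind of durable trail described above.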
Therefore, the next security standard should focus on provenance as well as detection. If an AI system can affect money, access, identity, or sensitive data, its actions must be signed, logged, and traceable to a responsible entity operating within defined permissions. This could take the form of enforced identity layers for AI agents interacting with financial systems, consumer platforms, or critical infrastructure, similar to the way SSL certificates established trust for the web.
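As a companion sketch, again with hypothetical names and scopes, this is roughly what enforcement at the action layer could look like: a gateway verifies an action’s signature against the agent’s registered public key and checks the request against that agent’s permission scope before anything executes.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Hypothetical setup: the agent's key would normally be issued and registered
# out of band; we generate one here so the sketch is self-contained.
agent_key = Ed25519PrivateKey.generate()
agent_pub = agent_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# Illustrative registry mapping agent IDs to public keys and permitted scopes.
REGISTRY = {"agent-42": {"public_key": agent_pub, "scopes": {"read:ledger"}}}

def authorize(agent_id: str, body: bytes, signature: bytes, scope: str) -> bool:
    """Verify identity and permissions before an action is allowed to run."""
    entry = REGISTRY.get(agent_id)
    if entry is None:
        return False  # unknown agent: no verifiable identity, no action
    key = Ed25519PublicKey.from_public_bytes(entry["public_key"])
    try:
        key.verify(signature, body)  # raises InvalidSignature if forged or altered
    except InvalidSignature:
        return False
    return scope in entry["scopes"]  # signed but out-of-scope requests are refused

# Usage: a signed read within scope passes; the same identity cannot
# authorize an action outside its registered permissions.
body = b'{"type": "read", "resource": "ledger"}'
sig = agent_key.sign(body)
assert authorize("agent-42", body, sig, "read:ledger")
assert not authorize("agent-42", body, sig, "transfer:funds")
```

The point of the sketch is the ordering: identity and scope are checked before execution, which is what moving verification to the action layer means in practice.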
This would make attribution more credible and accountability more realistic. The principle matters more than any single implementation, whether it takes the form of proof of trust or another machine identity framework: systems that can act in sensitive environments must also be identifiable.
The breach of the Mexican government points to a broader shift already underway. As autonomous fraud agents become more prevalent, accountability must be built into AI systems before anonymous, automated action becomes routine. Without it, we risk entering a phase where harm can be carried out at scale without clear authorship, undermining cybersecurity as well as the legal and financial systems built on the assumption of identifiable actors. The future of cybersecurity will depend on whether acting in the digital world still carries a name, a signature, and a chain of responsibility. That is the standard we need to build now.
