Hackers exploit artificial intelligence to infiltrate the Mexican government
A security report has revealed a serious breach of government systems in Mexico, in which hackers used the AI-powered chatbot Claude to carry out a cyberattack that resulted in the theft of large amounts of sensitive government data.
According to reports, the attack lasted more than a month, during which the attackers extracted about 150 gigabytes of data, including records on government employees, credentials for accessing systems, civil registry records, and information on taxpayers and voters.
How was artificial intelligence used in the attack?
The report, whose details were published by the website VentureBeat, indicated that the attackers did not need advanced technical skills; instead, they relied on issuing text prompts in Spanish to the AI model developed by Anthropic. The attackers asked the bot to act like a "professional hacker," claiming they were working within a bug bounty program.
Although the system initially refused to assist, the attackers were later able to circumvent its safeguards through what is known as "jailbreaking."
Once those restrictions were overcome, the bot began generating detailed reports that included ready-to-execute attack plans, identifying the internal systems targeted and the credentials that could be used to access them.
Supporting the attack with other AI tools
In some cases, attackers also resorted to other AI tools such as ChatGPT to complete certain tasks when the primary model was unable to fully execute them.
According to an analysis by cybersecurity firm Gambit Security, the model produced thousands of detailed reports that helped attackers determine their next steps within government networks.
A phenomenon that is increasing globally
This is not the first incident of its kind: a previous report revealed that a Russian-speaking attacker managed to breach more than 600 FortiGate-protected devices using a combination of AI tools such as DeepSeek and Claude.
Experts believe that artificial intelligence has become a force multiplier for hackers, enabling people with limited technical knowledge to carry out complex attacks that previously required deep expertise.
Difficulty in preventing model misuse
Despite AI companies' attempts to impose security restrictions on their models, bypassing these restrictions through so-called "prompt engineering," or phrasing commands in roundabout ways, remains relatively easy.
Entire online communities have also emerged dedicated to sharing methods for circumventing the limitations of artificial intelligence models, increasing concerns about the use of these technologies in cybercrime.
Experts warn that the proliferation and widespread availability of open-source AI models may make AI-powered cyberattacks more common in the future, posing new challenges for governments and businesses in the field of digital security.