Anthropic's announcement is perhaps the most high-profile example of a company claiming bad actors are using AI tools to carry out automated hacks.
It is the kind of danger many have been worried about, and other AI companies have also claimed that nation-state hackers have used their products.
In February 2024, OpenAI published a blog post in collaboration with cyber experts from Microsoft saying it had disrupted five state-affiliated actors, including some from China.
"These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks," the firm said at the time.
Anthropic has not said how it concluded the hackers in this latest campaign were linked to the Chinese government.
It comes as some cyber security companies have been criticised for over-hyping cases where AI was used by hackers.
Critics say the technology is still too unwieldy to be used for automated cyber attacks.
In November, cyber experts at Google released a research paper highlighting growing concerns about hackers using AI to create brand-new forms of malicious software.
But the paper concluded the tools were not all that successful, and were still only in a testing phase.
The cyber security industry, like the AI business, has an incentive to say hackers are using the tech to target companies, as such claims can boost interest in its own products.
In its blog post, Anthropic argued that the answer to stopping AI attackers is to use AI defenders.
"The very abilities that allow Claude to be used in these attacks also make it crucial for cyber defence," the company claimed.
And Anthropic admitted its chatbot made mistakes. For example, it fabricated login usernames and passwords, and claimed to have extracted secret information that was in fact publicly available.
"This remains an obstacle to fully autonomous cyberattacks," Anthropic said.