April 28 (Reuters) – Goldman Sachs has removed access to Anthropic’s AI models for its bankers in Hong Kong, a source with direct knowledge said, as global banks step up scrutiny of AI tools amid growing sensitivity over data security.
Employees of the U.S. bank in the Chinese territory were previously able to interact with Anthropic’s Claude through an internal AI platform, but in recent weeks they have lost that access, according to the source, who declined to be named because they were not authorised to speak to the media.
The Financial Times first reported the removal of access to Claude on Tuesday, citing people familiar with the matter.
Goldman’s move came after the U.S. bank adopted a strict interpretation of its contract with Anthropic following a consultation with the company, concluding that the bank’s employees in Hong Kong should not be able to use any Anthropic products, the FT said.
Goldman Sachs declined to comment. Anthropic did not immediately respond to Reuters’ requests for comment.
Other mainstream models such as Gemini and ChatGPT were still available on the internal platform, the source told Reuters.
Anthropic’s latest Claude AI models have drawn scrutiny from global banks and financial regulators over their potential risks to banking systems.
The Hong Kong Monetary Authority (HKMA) told Reuters last week some banks were assessing additional mitigation measures in response to the evolving cybersecurity threats brought by advanced AI models.
Reuters could not confirm whether other banks or companies had also moved to limit access in Hong Kong.
Hong Kong is currently not listed as a market where Anthropic’s API and Claude.ai are officially accessible, according to the company’s disclosure.
The Hong Kong government and the HKMA did not immediately respond to a Reuters request for comment on Goldman Sachs’ move.
The restriction of the AI model in Hong Kong by one of the world’s largest banks comes amid rising tension between the U.S. and China over AI technology, data security and access to advanced computing tools.
While AI models like ChatGPT and Claude, built by U.S. firms, are prohibited in mainland China, Hong Kong has mostly remained outside these controls, with usage limits set by U.S. companies themselves.
Anthropic’s spokesperson told the FT that its Claude models had never been officially “supported” in Hong Kong but declined to comment further.
Goldman Sachs’ Chief Information Officer Marco Argenti said in February that the bank was working with Anthropic to develop AI-powered agents aimed at automating a widening range of internal functions.
Some of the world’s leading AI companies have expressed concern their models are being used by Chinese rivals for training.
In 2024, OpenAI moved to restrict China traffic to its application programming interface (API), a platform that allows developers of other products to integrate its AI models.
(Reporting by Fabiola Arámburo in Mexico City, Selena Li and Anne Marie Roantree in Hong Kong, additional reporting by Yantoultra Ngui and Rae Wee in Singapore and Scott Murdoch in Sydney; Editing by Tasim Zahid, Stephen Coates and Kate Mayberry)