Goldman Sachs Prohibits Hong Kong Bankers from Using Anthropic AI

Synopsis

Goldman Sachs has barred its bankers in Hong Kong from using Anthropic's AI models, the Financial Times reported on Tuesday, citing people familiar with the matter.

Employees of the US bank were unable to access Claude models as of a few weeks ago, the newspaper added, citing four sources.

While AI models like ChatGPT and Claude, built by US firms, are prohibited in mainland China, Hong Kong has mostly remained outside these controls, with usage limits set by US companies themselves.

Anthropic's spokesperson told the FT that its Claude models had never been officially "supported" in Hong Kong but declined to comment further.

Goldman's move came after the US bank, following a consultation with Anthropic, took a strict interpretation of its contract with the company and concluded that its employees in Hong Kong should not be able to use any Anthropic products, the report said.

The decision did not extend to contracts with other AI vendors such as OpenAI, the newspaper added.

Goldman Sachs and Anthropic did not immediately respond to Reuters' requests for comment.

Goldman Sachs' chief information officer Marco Argenti said in February that the bank was working with Anthropic to develop AI-powered agents aimed at automating a widening range of internal functions.

This editorial summary reflects ET Tech and other public reporting on Goldman Sachs' restriction of Anthropic AI use by its Hong Kong bankers.

Reviewed by WTGuru editorial team.