In response to growing questions from Microsoft employees about using ChatGPT for work tasks, a senior engineer in the office of Microsoft's CTO announced that employees may use ChatGPT, provided they do not share "sensitive data" with it. Amazon staff received a similar warning last week.

Events at both companies followed the same pattern: employees expressed interest in ChatGPT in internal work chats and asked how they could use it in their jobs; management responded with a reminder about liability for disclosing confidential information and a recommendation not to share code or other NDA-covered data with ChatGPT. There is one significant difference between the two situations: Microsoft is investing $10 billion in OpenAI, the creator of ChatGPT.
As Insider notes, the software giant plans to incorporate OpenAI's technology into some of its products, including the Bing search engine and Office applications.
In a comment to the publication, Vincent Conitzer, professor and director of the Artificial Intelligence Lab at Carnegie Mellon University, said the close relationship between Microsoft and OpenAI could create a potential conflict of interest, because Microsoft stands to benefit from OpenAI obtaining more training data. That doesn't mean Microsoft will behave irresponsibly, but "it's always good to be aware of the incentives," he added.
Another question experts are now asking: who is at fault if sensitive information leaks? According to Conitzer, responsibility could fall on the employee who disclosed the data, on OpenAI for handling the information carelessly, or on all parties at once.
OpenAI's terms of service allow the company to use all data created by users and ChatGPT, with the caveat that personally identifiable information (PII) is removed. However, given ChatGPT's rapid growth, it is practically impossible to identify and delete all personal information in that data. And corporate intellectual property likely does not fall under the definition of PII at all.