Plugging the loopholes in the AI era

ChatGPT's development should not mean having to choose between privacy and technology, and the solution should not come at the cost of either.

Zhang Zhouxiang

China Daily

The OpenAI logo is seen in this illustration taken February 3, 2023. [Photo/Agencies]

April 6, 2023

BEIJING – Italy's privacy protection watchdog has temporarily blocked OpenAI from accessing Italian users' data and ordered a probe into the company's suspected breaches of the European Union's strict privacy regulations.

In the early 1980s, four years before Windows 1.0 was launched, the European Parliament passed the world's first personal data protection regulation, which has been updated repeatedly since. So when it comes to personal privacy protection, it is natural that the EU is taking the lead.

People are in the dark about what personal data ChatGPT collects from their communications, where that data is stored and how it is used. It is therefore reasonable to probe the software and block its access until there is some clarity.

Some argue that such moves, although temporary, might hinder ChatGPT's development, and that as the world's leading natural language model, its progress is of utmost importance to the sector.

However, its development should not mean having to choose between privacy and technology, and the solution should not come at the cost of either. A better approach is for OpenAI to make ChatGPT more transparent so that it meets regulatory demands for personal privacy protection, allowing people to enjoy its advantages while ensuring their data remains protected.

That’s how progress has been made throughout history, by constantly finding loopholes and filling them. The age of AI will be no different.
