ChatGPT: after the Privacy Guarantor's block, what happens to the other algorithms?

After the block on ChatGPT and OpenAI's suspension of the service, the Italian Privacy Guarantor is weighing its next moves in the artificial intelligence sector and on the use of personal data by algorithms. The premise is that nothing has been decided yet. What the Authority for the protection of personal data is evaluating is whether other generative artificial intelligences present the same privacy issues that triggered the measure against ChatGPT. One example, sportsgaming.win has learned from sources inside the Authority, is the deepfake images of Pope Francis wearing a long white trapper-style puffer jacket, created with the Midjourney algorithm. Can that processing of personal data be considered inaccurate and carried out without the consent of the person concerned, as was alleged in the case of ChatGPT?

These are the questions circulating in the corridors of Piazzale Venezia, besieged by criticism after the provision against OpenAI, which requires the US startup to stop processing the personal data of Italian citizens obtained without their consent and used to train the algorithm, or collected during conversations in the software's chat. OpenAI responded by blocking the service in Italy and stating that it intended to reply to the Guarantor.

The situation:

- The other algorithms
- That signature on the appeal to stop AI
- The next steps

The other algorithms

Because other artificial intelligences do a job similar to ChatGPT's. Like Bard, the competitor in Google's stable meant to keep up with Microsoft, which has enlisted ChatGPT to pair with its Bing search engine. Or the generative artificial intelligences that produce images from text instructions, such as Midjourney and DALL-E. “We are evaluating it,” Agostino Ghiglia, a member of the board of the Guarantor Authority, confirms to sportsgaming.win. “We will broaden the investigation wherever we find the same shortcomings and failings as in ChatGPT. It is a question of compliance with the law.”

The Privacy Guarantor imposed a temporary block on data processing on OpenAI (which resulted in the company's decision to block the service) on the basis of four main grounds: the lack of information on data processing; the absence of consent for training the algorithm; inaccurate results; and the absence of a filter preventing anyone under 13 from accessing ChatGPT. To these conditions, other more practical ones must be added. First: it must be seen whether the other artificial intelligences have a stable establishment in the European Union. In that case, the privacy authority of the country where the company is based must act. In the case of ChatGPT, the Italian Guarantor was able to open its investigation because OpenAI has only a legal representative in Ireland.

That signature on the appeal to stop AI

Then there is the question of resources. 161 people work in Piazzale Venezia, and they also have to deal with telemarketing and unwanted calls from call centres, the non-consensual dissemination of intimate images, and cyberbullying. In short, the order of priorities will also have to be weighed. Finally, there is the European question. The Guarantor is the first authority in the world to issue such a provision against ChatGPT. At the end of April there will be a meeting with counterparts from other EU countries, where the other authorities' sensitivities will emerge. “We cannot expect the laws to keep up with such a fast technology,” says Ghiglia. “The AI Act [the package of EU rules on artificial intelligence, ed.] is still in draft.”

During the hours in which the provision against ChatGPT was being prepared, Ghiglia himself, whose political cursus honorum runs from the Movimento Sociale Italiano to Fratelli d'Italia, signed the appeal of the Future of Life Institute, an organization that brings together various figures around protection from the risks associated with artificial intelligence, biotech, nuclear weapons and the climate crisis. A thousand signatories, including names such as Elon Musk, have called for the development of generative artificial intelligences such as GPT-4 to be halted for six months. Ghiglia announced his signature on Twitter, mentioning Prime Minister Giorgia Meloni and the Undersecretary for Innovation Alessio Butti, and confirmed to sportsgaming.win that he had signed the appeal. “I ran many empirical tests on ChatGPT and other chatbots and I could see directly the distortion of the GDPR and the privacy code,” says Ghiglia. “It is a conscious appeal.” And he adds: “Today, anyone who tries to enforce the laws is taken for a brontosaurus.”

Why a pause in the development of artificial intelligence would be harmful: Francesca Rossi, the IBM manager who deals with the ethics of algorithms and heads the field's international association, explained to sportsgaming.win why she did not sign the appeal.

The next steps

The Italian Guarantor has adopted an emergency provision, a choice which, for lawyers Giulio Coraggio and Tommaso Ricci of the DLA Piper law firm, “appears unjustified. The Guarantor's position seems to reflect a prejudice against generative artificial intelligence systems, unfortunately in line with the European Commission's current position in drafting the AI Act”. A petition to revoke the block has been published on Change.org. In a note, the lawyer Massimiliano Masnada, partner at the Hogan Lovells law firm, writes: “Data, whether personal or not, is the fuel necessary for the development of AI mechanisms such as ChatGPT. Access to data allows more precise algorithms, better suited to improving people's lives. This must be done safely and ethically. Prohibitions are not enough to achieve it. A first step, in this sense, will be the correct implementation of the rules on the reuse of data that underpin the Data Governance Act, soon to come into force, and the subsequent Data Act [European provisions, ed.].”

The ball is now in OpenAI's court, which has 20 days to respond. It risks a fine of up to €20 million or up to 4% of its annual global turnover. But its answers could also reverse the situation, softening the provision or rendering it void, if the startup founded by Sam Altman manages to demonstrate that it has not violated the GDPR. According to Coraggio and Ricci, “there are rather wide margins of defence for some of the conduct contested with respect to ChatGPT; for example, the data generated by the AI does not correspond to real personal data”. The game is open.