Italian privacy authority is examining AI training with personal data
The Italian privacy regulator GPDP will investigate the training of algorithms and AI models with personal data. The fact-finding investigation is intended to determine whether websites take adequate security measures against the scraping of personal data.
The investigation covers public and private websites "acting as data controllers based in Italy or offering services in Italy," the Garante per la Protezione dei Dati Personali announced on Thursday.
"AI platforms are known to collect vast amounts of data through scraping, including personal data. This is then used by the platforms for various purposes, while the sites from which the data comes have published it for specific purposes, including news and administrative transparency," the regulator explained.
Furthermore, the GPDP is inviting organizations and companies to submit their views on how to combat "mass data collection" by AI companies. These comments can be submitted for up to sixty days after the publication of the investigation announcement. The privacy regulator has not indicated how long the investigation will take.
The announcement of this investigation comes as no surprise. Earlier this year, the GPDP criticized ChatGPT developer OpenAI, accusing it of a "lack of information for users and interested parties about whose data is collected." According to the regulator, OpenAI also lacked a legal basis for collecting and storing data en masse to train its language model. That is why Italy banned ChatGPT at the end of March.