TNO wants AI testing ground for the risks of government use of algorithms


TNO wants to create an experimental environment for artificial intelligence to test unintended adverse effects of algorithmic public decision-making. Citizens should also be involved in this, according to the institute.

The living lab should become a protected environment in which government applications that use algorithms can be tested. This should make it possible to monitor the use of AI systems and to report on their risks and impact. The focus is mainly on high-risk applications, such as those identified in the AI regulation proposed by the EU. The EU cited, among others, the use of algorithms in employment, software to manage migration, asylum applications or border control, and artificial intelligence used by public authorities.

As an example, TNO mentions an AI system that the CJIB uses to keep people out of debt by making predictions based on historical payment behaviour. This system was developed according to ethics-by-design principles, but turned out to have unexpected long-term effects, such as unfairly flagging certain groups and creating extra work for civil servants.

The intention is for multiple parties to be involved in the AI environment so that all interests are taken into account. This includes developers, policy makers, government officials and citizens. TNO proposes developing a leaflet that helps citizens understand how a system works and what impact it can have on their lives.

The proposal for the AI living lab arises from the paper 'In search of humans in AI' published by TNO. The institute estimates that setting up this environment will take eighteen months and aims to do so together with several universities. In the paper, TNO describes a methodology for responsible algorithmic decision-making, of which the living lab is one part. According to this methodology, questions should be asked prior to the design and implementation of AI systems, such as which social issues the AI application is aimed at and what the legitimate interest is in using data and AI.

According to previous research by TNO, government use of AI has increased sharply in the past two years. The problems with the System Risk Indication showed that things can go wrong: after a retrospective review, the system was declared contrary to the European Convention on Human Rights and is no longer used to analyze fraud.
