The UK Parliament has published a briefing summarising the main features of artificial intelligence (AI) and machine learning (ML): see POSTNOTE no. 633, October 2020, INTERPRETABLE MACHINE LEARNING.
It is written with the clarity and precision that typically distinguish this kind of outreach communication in the Anglo-Saxon tradition.
I reproduce here only the definitions of AI and ML (see Box 1):
<< Artificial intelligence (AI) – There is no universally agreed definition of AI. It is defined in the Industrial Strategy as “technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation”. AI is useful for identifying patterns in large sets of data and making predictions.
Machine learning (ML) – ML is a branch of AI that allows a system to learn and improve from examples without all its instructions being explicitly programmed. An ML system is trained to carry out a task by analysing large amounts of training data and building a model that it can use to process future data, extrapolating its knowledge to unfamiliar situations. Applications of ML include virtual assistants (such as Alexa), product recommendation systems, and facial recognition. There is a range of ML techniques, but many experts attribute recent advances to developments in deep learning:
1) artificial neural networks (ANNs). Type of ML that have a design inspired by the way neurons transmit information in the human brain. Multiple data processing units (nodes) are connected in layers, with the outputs of a previous layer used as inputs for the next.
2) deep learning (DL). Variation of ANNs. Uses a greater number of layers of artificial neurons to solve more difficult problems. DL advances have improved areas such as voice and image recognition >>.
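To make the quoted description of ANNs and deep learning more concrete, here is a minimal sketch (mine, not taken from the POSTnote) of a feedforward network in Python: data-processing nodes are arranged in layers, each layer's outputs become the next layer's inputs, and a "deep" network simply stacks more such layers. The sizes and random weights are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple non-linearity applied at each node.
    return np.maximum(0.0, x)

# Three layers of weights and biases: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
# A "deeper" network would simply add more hidden layers to this list.
layers = [
    (rng.normal(size=(4, 8)), np.zeros(8)),
    (rng.normal(size=(8, 8)), np.zeros(8)),
    (rng.normal(size=(8, 2)), np.zeros(2)),
]

def forward(x, layers):
    """Pass an input vector through each layer in turn:
    the output of one layer is the input of the next."""
    for weights, bias in layers:
        x = relu(x @ weights + bias)
    return x

print(forward(rng.normal(size=4), layers))
```

In a real system the weights would be learned from training data rather than drawn at random, which is precisely the "learning from examples without explicit programming" described in the box.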
The briefing dwells at some length on "interpretability". This is an important issue whenever a decision is taken on the basis of AI/ML (and it will therefore become ever more important): to assess the decision's correctness and evaluate whether it can be challenged, the person affected must be able to understand its reasoning without excessive effort.
One reads, for example: << Some stakeholders have said that ML that is not inherently interpretable should not be used in applications that could have a significant impact on an individual’s life (for example, in criminal justice decisions). The ICO and Alan Turing Institute have recommended that organisations prioritise using systems that use interpretable ML methods if possible, particularly for applications that have a potentially high impact on a person or are safety critical >> (p. 3).
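As an aside on what "inherently interpretable ML methods" can mean in practice, here is a minimal sketch of one such method: a shallow decision tree whose complete decision rules can be printed and read directly. The use of scikit-learn and of the iris dataset is purely illustrative and is not taken from the POSTnote.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# A deliberately shallow tree: few rules, easy to inspect.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# The fitted model is its own explanation: the if/else rules printed below
# are the full "motivation" behind any prediction it makes.
print(export_text(tree, feature_names=list(data.feature_names)))
```

A deep neural network of the kind sketched earlier offers no comparably readable account of its decisions, which is exactly why the POSTnote treats interpretability as a distinct requirement.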
It is not clear, however, why interpretability should be pursued only for the most important decisions and, a contrario, why the person affected may be left entirely in the dark for the less important ones (and how, moreover, are the former to be distinguished from the latter?).