Abstract
In striving for explainable AI, it is not necessarily technical understanding that will maximise perceived transparency and trust. Most of us board planes with little understanding of how the plane works, and without knowing the pilot, because we put trust in the regulatory and authoritative systems that govern the people and processes. By providing knowledge of this governing ecosystem, industries like aviation and engineering have built stable trust with everyday people. This is known as “social explainability.” We extend this concept to AI systems using a series of “social” explanations designed with users (based on external certification of the system, data security and privacy). Our core research questions are: Do social explanations, purely technical explanations, or a combination of the two predict the greatest trust from users? Does this depend on the digital literacy of the user? An interaction between explanation type and digital literacy reveals that more technical information predicts higher trust among those with higher digital literacy, but those with lower digital literacy who are given purely technical explanations show the lowest trust overall. For this group, social explainability works best. Overall, combined socio-technical explanations appear more successful in building trust than purely social explanations. As in other areas, social explainability may be a useful tool for building stable trust in AI systems among non-experts.
Original language | English |
---|---|
Pages | 150-156 |
Number of pages | 7 |
Publication status | Published - 23 Jul 2022 |
Event | IJCAI 2022 Workshop on Explainable Artificial Intelligence (XAI), Vienna, Austria. Duration: 23 Jul 2022 → 23 Jul 2022. https://ijcai-22.org/workshop/ |
Conference
Conference | IJCAI 2022 Workshop on Explainable Artificial Intelligence (XAI) |
---|---|
Country/Territory | Austria |
City | Vienna |
Period | 23/07/22 → 23/07/22 |
Internet address | https://ijcai-22.org/workshop/ |