Are non-diverse AI research and development teams risking bias in innovated algorithms and artefacts?

    Activity: Talk or presentation › Invited talk

    Description

    Nascent AI technologies have exposed bias issues stemming from the limited range of the data sets used to train algorithms and teach machines. News reports convey the results of inadequately trained algorithms, for example: “Black man is stunned after passport photo checker mistakes his lips for an open mouth as Home Office facial recognition system is accused of bias” (MSN, 2019).

    MSN: https://www.msn.com/en-gb/money/technology/black-man-is-stunned-after-passport-photo-checker-mistakes-his-lips-for-an-open-mouth-as-home-office-facial-recognition-system-is-accused-of-bias/ar-AAHx5on?li=AABMOzg
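    The mechanism the description points to, an algorithm inheriting bias from a training set with a limited demographic range, can be illustrated with a minimal, hypothetical sketch. The example below is not part of the talk: the data are synthetic and the group labels ("A" and "B") are assumptions for illustration only. It trains a simple classifier on a set where one group is under-represented and then compares error rates per group, which is one common way such bias surfaces.

        # Minimal illustrative sketch (synthetic data, hypothetical groups A and B):
        # a classifier trained where group B is under-represented tends to show a
        # higher error rate on group B at evaluation time.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        def make_group(n, shift):
            """Generate synthetic 2-D features whose class boundary is offset by `shift`."""
            X = rng.normal(size=(n, 2)) + shift
            y = (X[:, 0] + X[:, 1] > 2 * shift.mean()).astype(int)
            return X, y

        # Training set: group A dominates (limited range of examples from group B).
        Xa_tr, ya_tr = make_group(2000, shift=np.array([0.0, 0.0]))
        Xb_tr, yb_tr = make_group(100,  shift=np.array([1.5, -1.0]))
        X_train = np.vstack([Xa_tr, Xb_tr])
        y_train = np.concatenate([ya_tr, yb_tr])

        model = LogisticRegression().fit(X_train, y_train)

        # Balanced evaluation sets, one per group.
        Xa_te, ya_te = make_group(1000, shift=np.array([0.0, 0.0]))
        Xb_te, yb_te = make_group(1000, shift=np.array([1.5, -1.0]))

        err_a = 1 - model.score(Xa_te, ya_te)
        err_b = 1 - model.score(Xb_te, yb_te)
        print(f"Error on group A: {err_a:.3f}")
        print(f"Error on group B: {err_b:.3f}  (under-represented in training)")

    Running the sketch typically shows a markedly higher error rate for the under-represented group, mirroring, in toy form, the kind of disparity reported for the facial recognition system in the headline above.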
    Period: 5 Feb 2020
    Held at: AI4EU, Belgium
    Degree of Recognition: International

    Keywords

    • AI
    • AI ethics
    • Bias
    • Algorithms
    • Diversity
    • Data science
    • Machine learning
    • Facial recognition

    ASJC Scopus subject areas

    • Computer Science (all)