Abstract
When judging the capabilities of technology, different humans can hold very different perspectives and reach quite diverse conclusions about the same data set. In this paper we consider the capability of humans to judge conversational ability, specifically to decide whether they are conversing with a human or a machine. The particular issue in question is the importance of the interrogation carried out by human judges in practical Turing tests. As supporting evidence we draw on transcripts from a series of practical Turing tests held on 6–7 June 2014 at the Royal Society, London. Each test involved a three-participant, simultaneous comparison by a judge of two hidden entities, one being a human and the other a machine. Thirty different judges took part in total. Each of the transcripts considered in the paper resulted in the judge being unable to say for certain which entity was the machine and which was the human. The main point we consider here is the fallibility of humans in making that decision; hence we are concerned specifically with the judges' decision-making process.
| Original language | English |
| --- | --- |
| Pages (from-to) | 207–221 |
| Journal | AI and Society |
| Volume | 31 |
| Issue number | 2 |
| Early online date | 16 Jun 2015 |
| DOIs | |
| Publication status | Published - Jun 2016 |
Keywords
- Artificial Intelligence (incl. Robotics)
- Computer Science, general
- Engineering Economics, Organization, Logistics, Marketing
- Control, Robotics, Mechatronics
- Performing Arts
- Methodology of the Social Sciences