When judging the capabilities of technology, different humans can have very different perspectives and come to quite different conclusions over the same data set. In this paper we consider the capabilities of humans as judges of conversational ability: specifically, whether they are conversing with a human or a machine. The issue in question is the importance of the human judge's role as interrogator in practical Turing tests. As supporting evidence we make use of transcripts that originated from a series of practical Turing tests held on 6–7 June 2014 at the Royal Society, London. Each test involved a three-participant simultaneous comparison, by a judge, of two hidden entities, one being a human and the other a machine. Thirty different judges took part in total. Each of the transcripts considered in the paper resulted in the judge being unable to say for certain which entity was the machine and which was the human. The main point we consider here is the fallibility of humans in deciding whether they are conversing with a machine or a human; hence we are concerned specifically with the decision-making process.
- Artificial Intelligence (incl. Robotics)
- Computer Science
- Engineering Economics
- Performing Arts
- Methodology of the Social Sciences
Paper: 'The importance of a human viewpoint on computer natural language capabilities: a Turing test perspective'
Affiliation: Research Centre for Computational Science and Mathematical Modelling