The importance of a human viewpoint on computer natural language capabilities: a Turing test perspective

Kevin Warwick, Huma Shah

Research output: Contribution to journal › Article › peer-review



When judging the capabilities of technology, different humans can hold very different perspectives and reach quite diverse conclusions from the same data set. In this paper we consider the capabilities of humans in judging conversational ability, specifically whether they are conversing with a human or a machine. The issue in question is the importance of the human judges who interrogate in practical Turing tests. As supporting evidence we draw on transcripts from a series of practical Turing tests held 6–7 June 2014 at the Royal Society, London. Each test involved a three-participant simultaneous comparison, in which a judge questioned two hidden entities, one human and one machine. Thirty different judges took part in total. Each transcript considered in the paper resulted in a judge being unable to say with certainty which entity was the machine and which was the human. Our main concern is the fallibility of humans in deciding whether they are conversing with a machine or a human; hence we focus specifically on the decision-making process.
Original language: English
Pages (from-to): 207-221
Journal: AI and Society
Issue number: 2
Early online date: 16 Jun 2015
Publication status: Published - Jun 2016


  • Artificial Intelligence (incl. Robotics)
  • Computer Science
  • general
  • Engineering Economics
  • Organization
  • Logistics
  • Marketing
  • Control
  • Robotics
  • Mechatronics
  • Performing Arts
  • Methodology of the Social Sciences

