
AI: Thinking Outside the Chinese Room

What Searle’s famous thought experiment proves, and what it doesn’t.

Dustin Arand
3 min read · Dec 1, 2021
Image credit: Kaori Kubota (Unsplash)

In 1980, the philosopher John Searle published a now-famous thought experiment called the Chinese Room, and more than forty years later it is still considered by some to provide an insuperable refutation of the idea of strong artificial intelligence. Here is how Searle restated the thought experiment in 1999:

“Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a database) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.”
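To make the setup concrete, the room can be pictured as nothing more than a lookup program: it matches incoming symbol strings to outgoing ones by rote, with no representation of what either means. The sketch below is purely illustrative; the question and answer pairs are invented placeholders, not anything Searle specifies.

```python
# A toy "Chinese Room": the program follows its rulebook by lookup alone.
# The phrase pairings are invented for illustration; nothing here models meaning.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",     # "How's the weather?" -> "It's nice today."
}

def chinese_room(input_symbols: str) -> str:
    """Consult the rulebook and pass back whatever symbols it prescribes."""
    # The operator just matches shapes; no step involves understanding Chinese.
    return RULEBOOK.get(input_symbols, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # a fluent-looking reply produced by pure symbol matching
```

Nothing in that lookup attaches meaning to the symbols it shuffles, and that is precisely the intuition Searle's argument trades on.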

Searle and his followers contend that this argument shows machines can never have any conscious understanding of the computations they perform, no matter how sophisticated those computations may be. But does it really establish that?
