The question of whether machines will ever be capable of human intelligence is ultimately a matter for philosophers rather than scientists to answer, an inventor and a computer scientist agreed during a debate Thursday night at the Massachusetts Institute of Technology. Inventor Ray Kurzweil and Yale University professor David Gelernter spent much of the session debating the definition of consciousness as they addressed the question, "Are we limited to building super-intelligent, robotic 'zombies,' or will it be possible for us to build conscious, creative, even 'spiritual' machines?" Although they disagreed, at times sharply, on various points, they did agree that the question is philosophical rather than scientific.
The debate and a lecture that followed were part of MIT's celebration of the 70th anniversary of Alan Turing's paper "On Computable Numbers," which is widely regarded as the theoretical foundation for the development of computers. In a 1950 paper, Turing proposed a test for machine intelligence. In the Turing Test, a human judge converses with both another human and a machine without being told which is which. If the judge cannot reliably tell the machine's responses from the human's, the machine is said to pass the test and to exhibit intelligence. Of course, this being at least in part a philosophical matter, the Turing Test itself remains the source of ongoing dispute.
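For readers who want the structure of the test made concrete, here is a minimal sketch in Python, not drawn from the debate itself; the function names and the pass criterion (the judge failing to identify the machine in a single session) are illustrative assumptions, not Turing's formal specification.

```python
import random

def turing_test(judge_questions, human_reply, machine_reply, judge_guess):
    """Toy sketch of one Turing Test session (illustrative only).

    human_reply / machine_reply: functions mapping a question string to an answer.
    judge_guess: function that, given the anonymized transcripts,
    returns the label ('A' or 'B') it believes belongs to the machine.
    """
    # Anonymize the two respondents so the judge cannot tell which is which.
    labels = ['A', 'B']
    random.shuffle(labels)
    respondents = {labels[0]: human_reply, labels[1]: machine_reply}
    machine_label = labels[1]

    # The judge interacts with both respondents through text alone.
    transcripts = {
        label: [(q, reply(q)) for q in judge_questions]
        for label, reply in respondents.items()
    }

    # The machine "passes" if the judge fails to identify it.
    guess = judge_guess(transcripts)
    return guess != machine_label
```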