Does Artificial Intelligence Undermine Religion?

Artificial intelligence (AI) has developed rapidly over the past few years. We now have computers, phones and other hardware that can display abilities and intelligence that make humans look primitive. Given this fast-moving area of technology, many postulate that AI can become conscious, and the implication is that this undermines religious narratives: if AI can be conscious, then there is a physicalist explanation for what makes us human. The concept of the soul in Islam, referred to as the rūḥ in Arabic, is something we have little revealed knowledge about. What can be affirmed, however, is that it is of the “unseen”, coming from a transcendent reality. From this perspective, if the soul, the immaterial thing that animates the body, can be replaced with a physical, materialistic explanation, then religion is undermined.

The physicalist may argue that consciousness and the ability to experience subjective conscious states (also referred to as phenomenal states) can be explained by artificial intelligence: consciousness becomes analogous to a computer programme. However, there is a difference between weak AI and strong AI. Weak AI is a computer system’s ability to display intelligence; this can include solving complex mathematical equations or beating multiple opponents at chess in under an hour. Strong AI refers to computer systems actually being conscious: in other words, having the ability to experience subjective conscious states, which includes attaching meaning to things. Weak AI is possible and has already been developed. Strong AI is impossible. What follows are the reasons why.
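Weak AI in this sense is mundane. A minimal sketch (the function and values here are illustrative, not taken from any particular system) shows a programme “displaying intelligence” at a task many humans find difficult, while containing nothing that could plausibly be called experience:

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of ax^2 + bx + c = 0, sorted ascending."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real roots
    root = math.sqrt(disc)
    return sorted({(-b - root) / (2 * a), (-b + root) / (2 * a)})

# The programme answers instantly and never errs, so it "displays
# intelligence" in the weak-AI sense. Yet nothing in it experiences
# anything or knows what an equation, or a root, is.
print(solve_quadratic(1, -3, 2))  # [1.0, 2.0]
```

The point of the sketch is only that displayed competence and conscious understanding come apart, which is exactly the weak/strong distinction above.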

The first reason, which is a general point, is that computers are not independent systems with the ability to engage in reasoning. Characterising a thing as conscious implies that it is an independent source of rational thought. However, computers (and computer programmes) were designed, developed and made by human beings who are independently rational. Computers are therefore just an extension of our ability to be intelligent. William Hasker explains:

“Computers function as they do because they have been constructed by human beings endowed with rational insight. A computer, in other words, is merely an extension of the rationality of its designers and users; it is no more an independent source of rational thought than a television set is an independent source of news and entertainment.”3

The second reason is that humans are not only intelligent; their reasoning has intentionality. This means that our reasoning is about or of something, and that it is associated with meaning. Conversely, computer programmes are not characterised as having meaning. Computer systems just manipulate symbols. For the system, the symbols are not about or of something; all computers can “see” are the symbols they are manipulating, irrespective of what we may think the symbols are about or of. Computer programmes are based on syntactical rules (the manipulation of symbols), not semantics (meaning).

To understand the difference between semantics and syntax, consider the following sentences:

  • I love my family.
  • αγαπώ την οικογένειά μου.
  • আমি আমার পরিবারকে ভালবাসি.

These three sentences mean the same thing: I love my family. This refers to semantics, the meaning of the sentences. But the syntax is different; in other words, the symbols used are different. The first sentence uses English ‘symbols’, the second Greek, and the third Bangla. From this the following argument can be developed:

  1. Computer programmes are syntactical (based on syntax);
  2. Minds have semantics;
  3. Syntax by itself is neither sufficient for nor constitutive of semantics;
  4. Therefore computer programmes by themselves are not minds.5

Imagine that an avalanche somehow arranges mountain rocks into the words ‘I love my family’. To claim that the mountain knows what the arrangement of rocks (symbols) means would be untenable. This indicates that the mere manipulation of symbols (syntax) does not give rise to meaning (semantics).

Computer programmes are based on the manipulation of symbols, not meaning. Likewise, I cannot know the meaning of the Bangla sentence just by manipulating its letters (symbols). No matter how many times I manipulate the Bangla letters, I will not come to understand the meaning of the words. This is why semantics requires more than the correct syntax. Computer programmes work on syntax, not semantics. Computers do not know the meaning of anything.
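The three-sentences example can be made concrete with a short sketch (the sentences are the article’s own; everything else is illustrative). A programme can compare and transform these strings indefinitely without any access to what they mean:

```python
sentences = {
    "English": "I love my family.",
    "Greek": "αγαπώ την οικογένειά μου.",
    "Bangla": "আমি আমার পরিবারকে ভালবাসি.",
}

# All the machine ever handles is syntax: for each sentence, a distinct
# sequence of Unicode code points.
for language, text in sentences.items():
    print(language, [hex(ord(ch)) for ch in text[:4]])

# The three strings are pairwise different as symbol sequences, yet a
# reader who knows the languages grasps one and the same meaning. The
# meaning is in the reader, not in the code points being manipulated.
assert len(set(sentences.values())) == 3
```

Nothing in the loop distinguishes a loving sentence from an avalanche’s rock arrangement; both are just shapes to the system.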

John Searle’s Chinese Room thought-experiment is a powerful way of showing that the mere manipulation of symbols does not lead to an understanding of what they mean:

“Imagine that you are locked in a room, and in this room are several baskets full of Chinese symbols. Imagine that you (like me) do not understand a word of Chinese, but that you are given a rule book in English for manipulating the Chinese symbols.  The rules specify the manipulation of symbols purely formally, in terms of their syntax, not their semantics.  So the rule might say: ‘Take a squiggle-squiggle out of basket number one and put it next to a squiggle-squiggle sign from basket number two.’ Now suppose that some other Chinese symbols are passed into the room and that you are given further rules for passing back Chinese symbols out of the room.  Suppose that unknown to you the symbols passed into the room are called ‘questions’ by the people outside the room, and the symbols you pass back out of the room are called ‘answers to questions.’ Suppose furthermore, that the programmers are so good at designing the programs and that you are so good at manipulating the symbols, that very soon your answers are indistinguishable from those of a native Chinese speaker.  There you are locked in your room shuffling your Chinese symbols and passing out Chinese symbols in response to incoming Chinese symbols… Now the point of the story is simply this: by virtue of implementing a formal computer program from the point of view of an outside observer, you behave exactly as if you understood Chinese, but all the same you do not understand a word of Chinese.”6

In the Chinese Room thought-experiment, the person inside the room is simulating a computer. The rule book plays the role of the computer programme, enabling the person to manipulate the symbols so that, from the outside, they appear to understand Chinese. However, the person inside the room does not understand the language; they merely simulate that state. Searle concludes:

“Having the symbols by themselves—just having the syntax—is not sufficient for having the semantics. Merely manipulating symbols is not enough to guarantee knowledge of what they mean.”7
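As a caricature of the room (the question-and-answer pairs below are invented placeholders for illustration, not Searle’s examples), the whole setup is nothing more than a lookup over uninterpreted strings:

```python
# A toy "Chinese Room": the rule book is a table from incoming symbol
# strings to outgoing ones. The rules are purely formal: shape in, shape out.
RULE_BOOK = {
    "你好吗": "我很好",        # hypothetical question/answer pairs,
    "你会说中文吗": "会",      # invented purely for illustration
}

def room(symbols: str) -> str:
    """Pass symbols back out of the room according to the rule book."""
    # The fallback is just another uninterpreted string to the system.
    return RULE_BOOK.get(symbols, "请再说一遍")

# From outside, the room appears to answer questions in Chinese. Inside
# there is only lookup: nothing assigns meaning to any symbol.
print(room("你好吗"))
```

However large the table grows, the function’s behaviour remains formal symbol manipulation, which is the article’s point about syntax never amounting to semantics.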

The objector might respond by arguing that although the computer programme does not know the meaning, the whole system does. Searle calls this objection “the systems reply”.8 However, why does the programme not know the meaning? The answer is simple: it has no way of assigning meaning to the symbols. Since a computer programme cannot assign meaning to symbols, how can a computer system—which relies on the programme—understand the meaning? You cannot produce understanding just by having the right programme. Searle presents an extended version of the Chinese Room thought-experiment to show that the system as a whole does not understand the meaning: “Imagine that I memorize the contents of the baskets and the rule book, and I do all the calculations in my head. You can even imagine that I work out in the open. There is nothing in the ‘system’ that is not in me, and since I don’t understand Chinese, neither does the system.”9

Lawrence Carleton argues that Searle’s Chinese Room argument is invalid because it commits the fallacy of denying the antecedent. Carleton maintains that Searle commits this fallacy because “we are given no evidence that there is only one way to produce intentionality”.10 He claims that Searle assumes that only brains have the processes needed to manipulate and understand symbols (intentionality), and that computers do not. Carleton presents the fallacy in the following way:

“To say, ‘Certain brain-process equivalents produce intentionality’ and ‘X does not have these equivalents’, therefore ‘X does not have intentionality’, is to commit the formal fallacy, ‘Denial of the antecedent.’”11

However, Dale Jacquette maintains that Searle does not commit the formal fallacy if his argument is interpreted as:

“If X is (intrinsically) intentional, then X has certain brain-process equivalents.”12

Jacquette believes that, on this interpretation, Searle’s argument makes a concession to functionalism. He notes that functionalists “maintain that there is nothing special about protoplasm, so that any properly organized matter instantiating the right input-output program duplicates the intentionality of the mind.”13 Searle also seems to admit that machines could have the ability to understand Chinese. However, he states that “I do see very strong arguments for saying that we could not give such a thing to a machine where the operation of the machine is defined solely in terms of computational processes over formally defined elements….”14

If computers cannot attach meaning to symbols, then what kind of conscious machine is Searle referring to? Even if one were to postulate a robot (something that Searle rejects), it would still face insurmountable problems. Machines are based on “computational processes over formally defined elements”. The mere possibility of a machine having understanding (attaching meaning to symbols) would require something other than these processes and elements. Does such a machine exist? The answer is no. Could one exist? If it could, it probably would not be described as a machine, since something other than “computational processes over formally defined elements” would be required.

According to Rocco Gennaro, many philosophers agree with Searle’s view that robots could not have phenomenal consciousness.15 Some philosophers argue that to build a conscious robot “qualitative experience must be present”,16 something they are pessimistic about. Others explain this pessimism:

“To explain consciousness is to explain how this subjective internal appearance of information can arise in the brain, and so to create a conscious robot would be to create subjective internal appearance of information inside the robot… no matter how advanced, will likely not make the robot conscious since the phenomenal internal appearances must be present as well.”17

AI cannot attach meaning to symbols; it just manipulates them in very complex ways. Therefore there will never be a strong version of AI. Religion is not undermined.



Physicalism is the view that consciousness can be reduced to, explained by, or identical to physical processes in some way.

In the philosophy of mind, physicalism and materialism are synonymous terms, even though they have different histories and meanings when used in other domains of knowledge.

Hasker, William. Metaphysics (Downers Grove, IL: InterVarsity, 1983), 49; also see “The Transcendental Refutation of Determinism,” Southern Journal of Philosophy 11 (1973): 175–83.

Searle, John, Intentionality: An Essay in the Philosophy of Mind. (Cambridge: Cambridge University Press, 1983), p. 160.

Searle, John. (1989). Reply to Jacquette. Philosophy and Phenomenological Research, 49(4), 703.

Searle, John. (1984) Minds, Brains and Science. Cambridge, Mass: Harvard University Press, pp. 32–33.

Searle, John. (1990) Is the Brain’s Mind a Computer Program? Scientific American 262: 27.

Ibid., 30.


10 Carleton, Lawrence (1984). Programs, Language Understanding, and Searle. Synthese, 59, 221.

11 Ibid.

12 Jacquette, Dale. “Searle’s Intentionality Thesis.” Synthese 80, no. 2 (1989): 267.

13 Ibid., 268.

14 Searle, John. (1980b) Minds, Brains, and Programs. Behavioral and Brain Sciences 3, 422.

15 Gennaro, Rocco. Consciousness. (London: Routledge, 2017), p. 176.

16 Ibid.

17 Ibid.