
 

I Don’t Think So:

Pinker on the Thinker:

Mentalese Monopoly in Thought Not Vindicated

 

 

David Cole

Philosophy Department

University of Minnesota, Duluth

 

The received view in cognitive science is that thought takes place in mentalese, an innate inner propositional representation language analogous to machine language in computers. Jerry Fodor’s Language of Thought (1975) is a philosopher’s classic argument for mentalese, and against natural language, as a medium of thought. Nearly twenty years later, psychologist Steven Pinker has reinforced and popularized this view with a bevy of arguments against the idea that natural language can be the language of thought. In Chapter III of his popular work The Language Instinct (1994), Pinker discusses the nature of the representational system for thought in conjunction with a discussion of the Whorf hypothesis - a hypothesis Pinker roundly rejects. Part of the brief is that Whorf moves rather too quickly from scanty and sometimes flawed evidence to large-scale conclusions. Given the structure of the discussion, there appears also to be an unexpressed argument to the effect that, if thought were in natural language, then the Whorf hypothesis would be true; since it isn't true, thought isn't in natural language. My interest is in Pinker's several express arguments that, in addition to overt natural language, we must hypothesize the covert innate language, mentalese, and I will be especially concerned with the further, stronger, arguments that natural language is unsuitable for thinking -- that the covert language has a natural monopoly as the medium of thought. I will also point to some problems with the alleged analogy between mentalese and machine language in computers.

Some of Pinker's arguments against thinking in natural language are the same as Fodor's, but Pinker has many additional arguments - ten in all. Fodor observes in passing that animals think (and so, if thought requires language, animals, lacking natural language, must have an internal language of thought). But Fodor's main argument concerns humans and is transcendental: learning involves hypothesis testing, and so we must posit mentalese as the representation system in which to frame the hypotheses that are needed for learning natural language. Pinker doesn't discuss this argument, and instead relies first on a group of arguments based on common experience, and second on a group of arguments based on scientific evidence from ethnography and psychology. Pinker's positive position also appears to differ somewhat from Fodor’s: while Fodor and Pinker are both fans of mentalese, Pinker is decidedly friendly to the role of mental images in thought, whereas Fodor is not. However, the core position on propositional thought is the same: natural language must be translated into mentalese before thought can take place.

 

But the additional arguments Pinker gives us do not increase the plausibility of the case that humans do not use natural language as a medium of thought. Since my main concern is with the arguments that purport to show that natural language is totally unsuited as a medium of thought, I’ll not discuss the Whorf hypothesis here, beyond asserting that a defense of language as a medium of thought is not ipso facto a defense of the relativism, incommensurability, ineffability, horrible holism, etc. that are the alleged implications of holding that natural language and thought are related more closely than as translations. But I will have something to say about some of the arguments that purport to support the weaker claim that natural language is not the only medium of thought, but must at least be supplemented by mentalese, even though this weaker position may be correct. Even correct views need to be well founded, and it is still worth noting arguments for mentalese that aren’t very helpful in establishing its role in thought.

 

Five Initial Arguments That Not All Thought Can Be in Natural Language

 

In the early part of his discussion, Pinker suggests five non-technical reasons for thinking that not all thought is in natural language:

 

1. "We have all had the experience of uttering or writing a sentence, then stopping and realizing that it wasn't exactly what we meant to say. To have that feeling, there has to be a what we meant to say that is different from what we said." (57)

 

2. We remember the gist of something heard, not the exact words.

 

3. If thoughts depended on words, new words could not be coined.

 

4. Language could not be learned.

 

5. Translation from one language to another would be impossible.

 

After a reasonable but very critical discussion of Whorf and his methods, Pinker offers less mundane and more scientifically kosher experimental evidence for mentalese. The least rigorous of this evidence, according to Pinker, concerns thought in the languageless deaf. The more rigorous concerns four clear and distinct areas where evidence for thought without natural language has been observed: human babies, monkeys, self-reports by famous scientists and others, and the image rotation experiments of Shepard and Cooper.

 

All of the arguments mentioned so far are directed against the hypothesis that _all_ thought is conducted in the medium of natural language. Even if they were successful against that claim, these considerations don't count against the thesis that _some_ thought is normally in, or requires, natural language, nor do they directly support mentalese. Showing that _some_ thought is possible without language does not rule out a very large role for natural language in much of distinctively human thought, especially abstract thought. The closest that the arguments mentioned so far come to calling this into question is the specific claim, made in the course of discussing a few observed cases of the languageless deaf, and of some reports by scientists on how they came up with their discoveries without using language, that abstract thought does not require natural language.

 

The discussion of the deaf and of the scientists’ reports of scientific discovery cannot succeed in showing that thought does not require natural language. As I have argued elsewhere (On Hearing Yourself Think, 1997), the latter do not show that natural language is not playing a decisive role in creativity and problem solving, for it is possible that the role is subconscious. Anyone possessed of natural language, including the signing or lip-reading deaf, may use natural language to solve problems even if he or she is not aware of the use of natural language representations. Surely some creative thought, as in Kekule’s snakes biting their own tails, involves something other than language, but it is imagery, not a propositional rival to natural language, which is what mentalese purports to be.

 

Abstract thought and problem solving in persons lacking natural language altogether would be a decisive challenge, but there is no clear evidence of any abstract thinking capabilities similar to those evinced by the scientists. Pinker cites languageless persons rebuilding broken locks - this is evidence, perhaps, of visual imagery, but not of mentalese (at least not without quite a bit more detail and argument than we are given). Spiders, e.g., build marvelous things, but no inference to spiderese appears to be warranted. There simply is much we don’t understand about how even unintelligent organisms accomplish what they do, and while there must be some physical basis underlying the complex behaviors, there is no evidence that propositional representation systems are involved. The same considerations would apply to similar manifestations in humans. It could even turn out that some mathematical ability has its basis in a facility closer to image processing or a mechanical calculator than to a proposition manipulator.

 

Even if they do not threaten the possible role in thought of natural language that I wish to defend, the scientific results that Pinker cites have other deficiencies as a defense of mentalese. Shepard and Cooper’s results support the view that there are mental images - not the view that there is an internal representation in a propositional medium, mentalese. The abilities of infants to expect two things to appear when two have gone behind a curtain are certainly interesting but not especially impressive as math abilities go. It is not clear that they cannot be accounted for by an image recognition system, without invoking a propositional medium like mentalese.

 

More evidence presented by Pinker concerns monkeys who attack the kin of a monkey who has assaulted them (rather than retaliating directly against the assaulter). This seems to me to fall short of "reasoning by analogy", which is how Pinker describes it. It doesn’t involve an analogy. It is not clear to me that it requires any more explanation than when a monkey, struck on the head, responds by whacking his attacker on the back as he turns to leave. At most, the vindictive monkey must associate the back as connected to the attacker - and in the former case, the sister as connected with the attacker. This requires that the monkey make connections, true enough, and perhaps via connections between mental representations, but it does not show that the representations must be in a propositional language of thought. An imagistic representation may serve to representationally connect the parts of the other monkey, and in the second case, the parts of the family. This imagistic representation may fall well short of a propositional representation system (a system admitting of negation, modal operators, and the like), while still serving the purposes evident in the monkeyshines cited.

 

It is also hard to see how mentalese is going to help us coin new words, or how the lack of mentalese could prevent us from coining new words. Take "Kleenex" and "slimeball". I don’t know the, no doubt interesting, history of these neologisms in detail, but I hope it is plausible to speculate that Kleenex, a product of Kimberly-Clark, was coined because it had a K, and also because it was hoped it would be associated by sound with "clean". ("Kodak" is a well-known example of a minted word which, Eastman tells us, was selected specifically for its K endowment.) Now, when it comes to these attractive features, mentalese is totally hopeless as an aid to word coining. Mentalese doesn’t have Roman letters, and it lacks rhyme, which is a feature only of phonology. So here mentalese would be an impediment rather than an aid to coining new words as compelling as "Kleenex".

 

With "slimeball", I suspect the vividness of the visual image (not a proposition in mentalese), plus perhaps the sound of "slime" and the associations with that English word, endeared it to its creator and subsequent echoers. The original creator may have been looking for a way to convert "slime" to a count noun, and accomplished the transformation of the English "ball of slime" into "slimeball". "Byte" was coined to be phonologically similar to "bit", yet mean something bigger, which is accomplished by homonymy with "bite"; in doing so, it relies on the similar phonological properties that "i" and "y" can have. "Minivan", rather than "miniature van" or "small van", is clearly chosen because it is easy to say and assimilates to a host of other "mini-" things -- surface features again. "Caravan", a brand name for a minivan, is truly a stroke of genius. "Boomer" is a convenient shortening of "baby boomer", which is a transformation of "product of the baby boom", which presumably endeared itself by alliteration. "TV", "PC", "SCSI", and "WASP" have joined a host of acronyms. Again and again we see the central role played in neologism by various surface properties: phonology (including phonological economy), orthography, homonymy, alliteration, and acronymy. Mentalese is not much help here.

 

The most expensive coining that I know of is the saga of how Standard Oil of New Jersey became Esso and then finally Exxon. The latter transformation is alleged to have involved computers and millions of dollars. It allowed the company to get away from the gas-leak sound of "Esso". And surely part of the appeal of the new name lies in the associations with the letter "x", and the exciting visual opportunity to have two x’s in a row, sharing a single crossbar. Who could resist? And how would mentalese have helped with the discovery of these phonological and orthographic virtues? Word coining seems to point away from mentalese and instead to natural language as a medium of at least some thought.

Pinker’s very first reason for supposing mentalese exists is the common experience of realizing that something we said was not what we meant to say. This, he says, requires that there be something that we meant to say. Presumably, the something we meant to say must be something that was represented in mentalese, and then something went wrong with the translation from mentalese into natural language. But no matter how intuitively compelling this may seem, I don’t think it withstands scrutiny. Stan Laurel picks a thread off Oliver Hardy’s sweater. The sweater unravels into a pile of yarn. Stan didn’t mean to do that. I go to remove a book from the pile next to my chair, and the pile falls over. I didn’t mean to do that. You say that it’s very nice, and I disagree, saying "it’s not very nice". I see how that will be taken, and I didn’t mean that. The thing is that in all these ways of spilling ink there is no reason to suppose that there is an explicit representation in the head of an intended consequence - of a thread coming off a sweater and the sweater NOT unravelling, of my removing the book and the books remaining upright, of your not supposing that it is not nice at all. And even if there were, there is no basis, in the case of the speech act, for an inference to something that is not represented in natural language. I can have an image in my mind’s eye, and my execution in my drawing may fall well short of what I intended. The same goes for speech: I might have an image of what I am about to say and fail in the execution. If that were the case, what I said would not be what I meant to say - but there is no call to suppose the archetype is framed in mentalese. (See Gauker 1992, p. 309, for several other ways to explain the feeling that we lack the words to express our thought, without invoking mentalese.)

 

Pinker’s claim that language could not be learned unless there were mentalese is one that I will not address in detail here. This argument is not developed by Pinker - and I think for good reason. It is always rather more difficult to show what is impossible than what is possible. Language learning _might_ consist in translating into an innate mentalese. But that does not show that it could not be anything else. Language learning is learning how to use words, putting them in logical space as it were. Real spatial locations are also learned. Learning where things are in space might consist in mapping their locations onto an innate inner space map, and it seems plausible to hold that that is how bodily locations are represented - but is it really plausible to hold that the brain holds an innate location representation for every possible position in the universe? A gigantic atlas, mostly full of pages carefully ruled with latitude and longitude but otherwise blank? I don’t think so. There may be several possible ways to learn positions, and likewise several possible ways to learn natural languages. A complete innate internal representation system may be a coherent logical possibility, but ultimately it is not the most plausible one. Ruling out as-yet-unarticulated alternative explanations a priori does not seem possible.

 

The issues here are enormously complicated, and the argument is much more specific in Fodor (language learning requires hypothesis testing which requires hypothesis framing which requires a medium other than the language being learned). So apart from the general remarks above, the topic is best dealt with in considering Fodor’s rather than Pinker’s arguments.

 

Finally, the claim that translation between natural languages would be impossible without mentalese seems very difficult to sustain. It’s not clear exactly what the problem is supposed to be. If translation requires a metalanguage, natural languages are striking in their capacity to serve as metalanguages. So I can say to myself "'Es regnet' means 'It’s raining'". If it’s indeterminacy that’s the worry, the problem may be at least partially addressed below in the context of looking at alleged deficiencies of ambiguity and context dependence of meaning in natural language. But perhaps this claim is just another appearance of the regressive "if A and B mean the same, there must be a third thing C that they both mean". No. Nein. Nyet.

 

 

 

Arguments for a Mentalese Monopoly

 

In the last pages of the chapter, after his discussion of Whorf and of the workings of a computational reasoning system, Pinker presents five arguments against the view that natural language is even minimally suitable as a medium of any thought. If these arguments succeed, then _no_ thought could be in natural language -- even if natural language sentences sometimes _appear_ to be the medium of thought, the real action (thought, understanding, inference) must be in mentalese sentences behind the scenes, unavailable to consciousness.

The five arguments are:

 

1. Ambiguity: natural language is (often) ambiguous. Our understanding is not. So the medium of our understanding is distinct from natural language.

 

2. Inexplicitness: Natural language lacks "logical explicitness". This appears related to the Frame Problem - the example given is of a computational system that infers from "Ralph is an elephant", "Elephants live in Africa" and "Elephants have tusks" to "Ralph lives in Africa" and "Ralph has tusks" - but doesn’t know that all elephants live in the same Africa, while not sharing the same tusks.

 

3. The "Co-reference problem": pronouns and their antecedents co-refer, but this is not explicit in natural language.

 

4. Deixis: context determines meaning (often? always?). Example: "a" and "the" have no meaning apart from a particular conversation or text. Pinker’s example concerns the difference between killing a policeman and killing the policeman. A similar point is made by McDermott (pp. 152-3 as reprinted in Haugeland).

 

5. Synonymy: distinct arrangements of words in natural language mean the same thing, so there must be "something else that is not one of these arrangements of words" (80).

 

Conclusion: "People do not think in English or Chinese or Apache; they think in a language of thought" (81).

 

 

But these arguments, individually or collectively, don't show this. At most they can show that the occurrence of natural language strings in the head cannot be all there is to thought -- but no one should deny that. Even if one thinks in natural language, the mental occurrences must have functional roles, that is, causal connections with other strings, as in inference, as well as connections with sensory input and motor output. No language can think. The transitions characteristic of thought must be effected by the system employing the representations, no matter what language is used (for those representations that are propositional). I’ll develop this point below, in discussion of each of the five arguments.

 

The Argument from Co-reference

 

Consider first the third problem Pinker cites: co-reference. Pronouns perforce do not explicitly co-refer with their antecedent nouns. So a new language is invoked to solve a problem in natural language. The suggestion is that the co-reference must be provided by _another_, non-natural, language: an ideal language for thought that lacks the deficiencies of natural language, a logically superior language. And how is that ideal mental language going to solve this problem?

 

By not having pronouns, presumably. But that does not solve the problem! (Beware, I feel a nuevo-Wittgensteinian mood coming on.) Suppose we translate "Jim bit himself" into mentalese, which has no pronouns. So we have something like "Jim bit Jim". Does that solve the co-reference problem? No! Just moving to identical syntactic entities (instead of syntactically differing antecedent and pronoun) does not of itself solve anything - for clearly we are left with the problem of what makes the first occurrence of "Jim" co-refer with the second occurrence of "Jim". It is not essential to language that they co-refer, nor can their co-reference owe simply to the fact that they are syntactically identical (they are not physically identical, in that they occupy different positions and have different contexts); it is certainly coherent to have a language in which the second occurrence of a particular symbol in a sentence always refers to something _other_ than the first occurrence (as in "You and you and you, follow me", or, more famously, twice in "Tomorrow, and tomorrow, and tomorrow creeps in this petty pace from day to day"). And we can't just _stipulate_ that they co-refer - co-reference can’t depend on anything we do consciously; it needs to be intrinsic to the operations of the language.

 

Co-reference of terms in _any_ medium of thought will be provided by the inference roles that the terms play. Inference roles are a special case of causal role. A system can be built -- and indeed, is most naturally built -- so that each occurrence of the same term permits inferences that presuppose co-reference. But, as argued above, co-reference is not an intrinsic feature of recurrence. Co-reference is often a feature of artificial languages, and we can see how it is solved there. A computer language may have more than one expression for the same item -- and even a machine language, the innate language of computers, can be built with multiple names for the same operation, or location in memory, etc. What will make them co-refer is that the machine reacts to them the same way.
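
The point about artificial languages can be made concrete with a toy example. What follows is a minimal sketch in Python, with invented mnemonics and addresses that belong to no real instruction set: two names co-refer in the toy machine only in the sense that the machine's causal reactions to them coincide.

```python
# A toy "machine" in which two names co-refer simply because the machine
# reacts to them identically. The mnemonics and addresses are invented for
# illustration; they belong to no real instruction set.

memory = {0x10: 0}

# Two symbolic names alias the same memory address...
COUNTER = 0x10
TALLY = 0x10          # ...so anything done via TALLY shows up via COUNTER.

def increment(addr):
    memory[addr] += 1

# Two mnemonics dispatch to the very same operation.
OPS = {"INC": increment, "BUMP": increment}

OPS["INC"](COUNTER)
OPS["BUMP"](TALLY)

print(memory[COUNTER])   # 2 -- the names "co-refer" only in the sense that
                         # the machine's causal reactions to them coincide.
```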

 

But then, there is no problem of co-reference uniquely faced by natural language. For thought to take place in natural language, warts, pronouns, and all, it is only necessary that the relevant (inference) causal roles of the pronouns be the same as that of their antecedents. So that from "Jim bit himself", I can come to think "Quite possibly, Jim was alone when bitten" and "Jim was not bit, on that occasion, on the back of his neck" and even "Jim doesn't have to worry about getting rabies from that bite".

It is true that something must give the pronoun the proper inferential role, tying it to its antecedent, and that something is not internal to the sentence. But that must be true for any language, natural or innate. And that which determines co-reference will not itself be a language, natural or innate. It will be the role the symbols play in an inference engine, a mind.

 

The problem of co-reference we are concerned with here is one of how a system takes terms. There is another problem, which has to do with terms that co-refer whether the system takes them to co-refer or not - Morning Star and Evening Star, for example. That distinct problem is not at issue here, and its solution presumably takes us on a causal foray outside the system. Here we are just concerned with conditions of understanding, and a system that doesn’t know that "Morning Star" and "Evening Star" co-refer can understand sentences in which both terms appear (although, perhaps, that understanding will be eclipsed by an otherwise identical system that is aware of the co-reference).

Figuring out that symbols co-refer will be a problem for any language system. Often co-reference is a difficult discovery -- "I'm so glad to hear you are engaged to Dr. Jekyll, dear; I was frankly worried by the rumors you had been seen in the company of Mr. Hyde." On the other hand, to think that Dr. Jekyll is Mr. Hyde is just to apply the indiscernibility of identicals rule to them. That application will be required for the language of thought as well as for natural language. Since in either case it will turn out that symbols that are syntactically distinct nevertheless co-refer (if discovery of identities is to be possible), there is no problem of co-reference that occurs uniquely in natural language.

 

There is another way of approaching the considerations advanced above. Suppose, again, that we attempt to solve the problem of co-reference in Language A by translating into Language B, a language in which co-reference is allegedly not a problem. How are we to translate? Pursuing the example used above, letting Language A be English, the language with the alleged co-reference problem, we must somehow know that we should translate "himself", in "Jim bit himself", as the language B translation of "Jim". But the translation process, involving this tacit knowledge, must be a process that is in neither language A nor language B. So the crucial problem is, how do we know to translate "himself" the same as we translate "Jim"? The fact that language B allegedly has no co-reference problem does not help us at all - the problem of co-reference just reappears as a problem of translation. The only solution that is plausible is to hold that we can solve the co-reference problem by the way we determine the inference role that the sentence in A is to play, and that not only does not require a distinct language B, but is not helped by the presence of language B. We can just as helpfully produce the transformations in English, transforming "Jim bit himself" into "Jim bit Jim"; this will do just as much and as little good as translation into, e.g., something akin to "Bjj" of predicate logic.

 

Finally, it is worth saying something about the analogy between machine language and mentalese. The problem is that machine language in computers is not a representation system. Simply put, actual machine language is unsuitable for thought about the world. Machine language has no causal connections with the world; its elements do not refer to or represent anything in the world. It is a command language, specifying operations only. And it is tiny: the language ("instruction set") for what is probably the most widely used cpu family in the world, the Intel 80x86 family, has at its core only a few dozen basic operations. About half of these specify direct operations on data, such as adding, shifting bits, or moving data to a register or memory address. The rest control program flow, including several forms of conditional jump to an instruction at a specified location, one of the most powerful commands. What _can_ represent in a machine is a _data structure_, information that can reflect what’s going on in the world. Unlike the machine language, the data are not "innate" to the machine. It strikes me as interesting (at least) that the computer analogy, stressed by Fodor and Pinker, most closely supports a traditional empiricist model of mind: a few basic but non-representational operations are innate; the vast majority of mental content, and everything that can represent anything in the world, must come from outside. The empiricist, such as Hume, has the mind "associating" one impressed idea with another; the computer can innately make a conditional jump from one piece of impressed data to another. If there is much analogy between mentalese and machine language, mentalese is far too impoverished to be a medium of thought.

Data structures consist of numbers, including encoded words or sentences - or images. The operations on the data structures, ultimately produced by machine language code, are computational, whether the data is propositional or imagistic. The question then is, would an image, say of acoustic spoken words and sentences in natural language, _have_ to be translated into a non-acoustic language, mentalese, in order for it to be the subject of logical operations, association, and other computational processes characteristic of thought? We haven’t encountered an argument yet that shows that the answer must be "yes".
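
For what it is worth, here is a minimal sketch (in Python, with invented data) of the division of labor just described: the only "innate" endowment is a pair of primitive, non-representational operations, and everything that purports to reflect the world is impressed data, whether propositional or imagistic.

```python
# A sketch of the point that what represents in a machine is the data, not the
# primitive operations. The "facts" and the crude two-pixel "image" below are
# invented examples of impressed data; the operations (lookup and a conditional
# branch) are the machine's only "innate" endowment and represent nothing by
# themselves.

def fetch(store, key):
    """Innate primitive: retrieve a stored item (represents nothing by itself)."""
    return store.get(key)

def branch_if_equal(a, b, then, otherwise):
    """Innate primitive: a conditional jump between pieces of impressed data."""
    return then if a == b else otherwise

# Impressed, "non-innate" data. Only this purports to reflect the world.
facts = {"Ralph": "elephant", "elephant habitat": "Africa"}
image = {"pixel (1,1)": 1, "pixel (0,0)": 0}

print(fetch(facts, "Ralph"))                                   # elephant
print(branch_if_equal(fetch(image, "pixel (1,1)"), 1,
                      "bright spot seen", "nothing seen"))     # bright spot seen
```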

 

The Arguments Regarding Context and Ambiguity

 

Some of the other problems Pinker points to turn on issues closely related to the preceding, such as the problems of context and of ambiguity. Again, computer analogies may be helpful (we can see that similar problems are encountered in a simpler and well-understood system). The same number in a computer means many different things depending on where it occurs - context. This is a feature, an economy in a system that has only two digits to work with. It is not a problem. All that is necessary is that the processing be sensitive to context. If a given bit sequence occurs in one context, it specifies a command; in another, a color; in another, a letter of the Roman alphabet. This systematic ambiguity is not a problem; it is a feature of the system. There is no requirement on a language, especially a language of thought, that it not display ambiguity when removed from context. If all you know about the operations inside a computer is that, say, a given 10-byte sequence occurred, you are in no position to know what role it played in the system. To resolve that, you would need to know its context inside the system.
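
A minimal sketch can make the point vivid; the particular byte values below are arbitrary, chosen only for illustration. Nothing in the bits themselves settles whether they are letters, a number, or color components - the surrounding processing does.

```python
# The same bit pattern, read in different contexts: nothing intrinsic to the
# bits settles what they "mean"; the surrounding processing does.

import struct

raw = bytes([0x41, 0x42, 0x43, 0x44])

as_text   = raw.decode("ascii")          # 'ABCD' -- read as letters
as_int    = int.from_bytes(raw, "big")   # 1094861636 -- read as a number
as_float  = struct.unpack(">f", raw)[0]  # ~12.14 -- read as a floating-point value
as_colour = tuple(raw[:3])               # (65, 66, 67) -- read as RGB components

print(as_text, as_int, as_float, as_colour)
```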

 

Similarly in natural language. If I think "Jim went to the bank", it might - from what you can tell from the English sentence here displayed outside my head -- be the thought that he is going to a river bank or the thought that he is going to a financial institution. But it is a holdover from days gone by to think that this difference is solved by an ideal language of thought that uses different symbols for river bank and money bank. For again we can ask the crucial question: what makes the different symbols refer to different things? Nothing _intrinsic_ to the symbols, and nothing intrinsic to the fact that they are different -- nothing internal to language keeps different symbols from co-referring. A system might use a certain symbol to mean one thing on Tuesdays and another on Wednesdays, or one thing in context A and another in context B.

 

Again, inference role is crucial to determining how the system understands its symbols. If I think "Jim went to the bank, and since the river is running high, he might slip and get wet, or worse", it is much clearer how I am interpreting "bank". (But not absolutely conclusive, as you can see by reading the sentence with both interpretations of "bank".) What is relevant is that I am letting my beliefs about river banks determine my inferences from the sentence. Suppose further, as is not manifest in the publicly reported sentence, that I am not letting my beliefs about financial institutions play any role. Then I am interpreting it one way and not the other. But doing that does not require that I translate the natural language sentence into some other, non-natural, language. What is required is what will be required for any medium of thought: a particular inference role for the sentences.
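
The idea that interpretation is constituted by which beliefs get to drive inference can be sketched in a few lines; the belief lists below are invented for illustration, and nothing in the sketch ever translates the English sentence into another notation.

```python
# A toy sketch of "interpretation as inference role": the system never
# translates the English sentence out of English; it simply lets one body of
# beliefs, rather than the other, drive its inferences. The belief lists are
# invented for illustration.

river_bank_beliefs = {
    "the river is running high": "he might slip and get wet",
}
money_bank_beliefs = {
    "the loan was approved": "he might come home pleased",
}

def infer(sentence, context_cue):
    # Which beliefs get to fire is what constitutes taking "bank" one way
    # rather than the other.
    active = river_bank_beliefs if context_cue in river_bank_beliefs else money_bank_beliefs
    return f"{sentence}; {context_cue}, so {active.get(context_cue, 'no conclusion drawn')}"

print(infer("Jim went to the bank", "the river is running high"))
# Jim went to the bank; the river is running high, so he might slip and get wet
```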

 

The Argument from Inexplicitness

 

The inference role will make the sentence, as thought by the thinker, logically explicit. A string of words on paper or in the air is not logically explicit. But in the head, the same words tell a different story. In the course of thought, the internal representation in natural language leads this way and not that, as a result of the causal influence of other things in my head. I think "Better bring my raincoat". On paper, this string of words may be logically inexplicit, ambiguous, contain a pronoun with no antecedent, etc. But in my head, it leads straight to my attempt to retrieve a particular raincoat. In my head it is no handicap to the representation that it is in English words - and it would be no advantage to a representation in a non-public language.

 

Pinker says in this connection (p. 79):

 

"Intelligent you, the reader, knows that the Africa that Ralph lives in is the same Africa that all the other elephants live in, but that Ralph’s tusks are his own." [Unlike a computational model of you.]

 

He concludes: "English sentences do not embody the information that a process needs to carry out common sense."

 

What makes this conclusion especially odd is that just one sentence separates this negative conclusion, about the inability of English to represent common sense about Africa and tusks, from the English sentence, quoted above, that represents that very common sense about Africa and tusks.

 

It is also odd because the example appears much earlier, for example in Drew McDermott’s 1976 paper "Artificial Intelligence Meets Natural Stupidity", in the course of a critique of AI practitioners’ habit of labeling their networks with English words. The networks are the representational systems in AI computational models of intelligent inference and understanding - thought. They are the mentalese of the models, the medium in which the inferences are carried out. The point there was that the English labels were misleading because the internal representation systems were missing the import of the English - the mentalese lacked properties had by the English. The critique (at this point in McDermott’s article) was not of English as a medium of thought, but of illegitimately reading the content of English into the impoverished representational system being created by the programmers. So it's ironic to see it all turned topsy-turvy, with the example being used purportedly to show how English is impoverished compared to the hypothesized computational language.
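
For concreteness, here is a minimal sketch of the kind of labeled inheritance network at issue; the node labels and properties are invented for illustration. Notice that "lives_in Africa" and "has tusks" are inherited in exactly the same way, so nothing in the network itself marks that all elephants share one Africa while each elephant has its own tusks - the English labels merely make the structure look as if it knew that.

```python
# A naive inheritance network of the kind McDermott criticized, with English
# words as node labels (the labels and properties are invented for
# illustration).

network = {
    "elephant": {"isa": "mammal", "lives_in": "Africa", "has": "tusks"},
    "Ralph":    {"isa": "elephant"},
}

def inherited_properties(node):
    # Walk up the "isa" chain, collecting every non-"isa" property on the way.
    # Shared and individual properties are treated identically.
    props = {}
    while node in network:
        for key, value in network[node].items():
            if key != "isa":
                props.setdefault(key, value)
        node = network[node].get("isa")
    return props

print(inherited_properties("Ralph"))   # {'lives_in': 'Africa', 'has': 'tusks'}
```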

In any case, there cannot be a valid argument from the claim that some English sentences lack the information needed for inferences that go beyond those sentences directly to a conclusion that no English sentences embody that information, or that those sentences cannot be part of the medium of thought in humans.

 

McDermott does go on in the paper cited to argue that AI workers should not be bewitched by English as the model for thought. The argument (p. 151) seems to be something like this: English and other natural languages have many uses and are only sometimes communication systems, and even then they labor under constraints quite different from those on a medium of thought. In particular, communication systems are helped above all by brevity, and by leaving much up to inference by the hearer. But within an information processing system, "packaging and encoding of information is usually already done", ambiguity is avoidable, and brevity is typically not a desideratum. So don’t look to natural language as your model for the internal representation system in AI.

 

Of course, even if all this is true, it doesn’t tell us much about how nature works (which wasn’t McDermott’s concern). Nature doesn’t work in silicon when producing information-processing systems, nor does nature work with programming languages, and nature may or may not be concerned with features of representational media appropriate to systems that do work with sequential binary-state processors made from silicon. It may be more efficient for a brain to use the same medium for internal processing and external communication, avoiding the overhead of translation, unlike a machine, where the internal language is literally etched in silicon.

The Argument from Synonymy

 

The (alleged) problem of synonymy is solved in a way similar to the problem of co-reference, but even more simply. Pinker argues that distinct arrangements of words in natural language mean the same thing, so there must be "something else that is not one of these arrangements of words".

This move is Platonic, in the pejorative sense. Compare "Different physical objects have the same color, so there must be something else that is not a physical thing (but rather, the color itself)". Even without the questionable reification of meaning, it is certainly hard to see how invoking mentalese will solve this problem. This invocation presumably leaves us with the following: the distinct arrangements of words in natural language AND the representation in mentalese ALL mean the same thing - so there must be "something else" that is neither one of these arrangements of words NOR the representational token in mentalese. But I regress.

 

Summary

 

In summary, none of the problems of ambiguity, synonymy, context (deixis), co-reference, or logical explicitness is solved merely by introducing another medium of thought. Something external to the representations themselves must still determine meaning and reference in that language. And there is no reason to believe that the same mechanisms can’t solve the problems for representations in natural language - with the obvious advantage of cutting out the overhead of translating into and out of a second language. It is clear that organisms must deal with images, and some of those images are of sentences in natural language. There has not yet been a cogent argument showing that our manifest competence with sentences in natural language requires hypothesizing a covert innate language.

 

 

Bibliography

 

Carroll, J.B. (ed) 1956 Language, Thought, and Reality: Selected Writings of Benjamin Lee Whorf MIT Press

 

Cole, David 1997 "On Hearing Yourself Think" http://www.d.umn.edu/~dcole/hearthot.htm

 

Dennett, Daniel 1978 "A Cure for the Common Code" in Brainstorms Bradford Books/MIT Press

 

Fodor, Jerry 1975 The Language of Thought Thomas Y. Crowell Company

 

Gauker, Christopher 1992 "The Lockean Theory of Communication" Nous xxvi 3 (September) pp. 303-324.

Haugeland, John (ed.) 1981 Mind Design Bradford Books/MIT Press.

 

McDermott, Drew 1976 "Artificial Intelligence Meets Natural Stupidity" reprinted in Haugeland 1981, pp. 143-160.

 

Pinker, Steven 1994 The Language Instinct William Morrow and Company (esp. Chapter III, "Mentalese", pp. 55-82)