Luna McNulty

One Brain, Two Minds

January 15, 2022

This is a post I made in the forum of my Spring 2021 class on Language Processing in Humans and Machines. It was written late at night, while I felt enlightened after finishing an assigned reading – as a result, it should be taken with a grain of salt.

The paper in question was Connectionism and cognitive architecture: A critical analysis, a 1988 critique of connectionism by Jerry Fodor and Zenon Pylyshyn. Connectionism was then a popular theory that viewed human cognition as fundamentally similar to computation in neural networks. Fodor and Pylyshyn argued that some of the brain’s capabilities could only be explained if it were doing symbol processing in the manner of a classical Turing machine.


I just finished the Fodor and Pylyshyn reading, and throughout I was preoccupied with one idea that is only addressed briefly at the end: the possibility that connectionist and classical models of cognitive processing could co-exist in the same brain. F&P mention throughout the paper that connectionist networks could serve as an implementation of a classical architecture / Turing machine. Only at the very end do they suggest that

A good bet might be that networks sustain such cognitive processes as can be analyzed as the drawing of statistical inferences; as far as we can tell, what network models really are is just analog machines for computing such inferences. Since we doubt that much of cognitive processing does consist of analyzing statistical relations, this would be quite a modest estimate of the prospects for network theory compared to what the connectionists themselves have been offering.

This possibility seems much more significant today, now that we can see that neural nets can do all sorts of tasks and the question is whether they can do everything the brain can do. Supposing that they can’t, it still seems plausible that brains use a network-like architecture for many tasks, in addition to a neurally-implemented Turing machine that handles what networks can’t. The answer to the question of “how do you combine them?” raised in Monday’s lecture might then simply be “you don’t.” The network system doesn’t have access to symbols, and the symbolic system doesn’t have access to nodes. There would be, essentially, two largely-independent minds in one brain.

Indeed, a lot of people have the intuition that the brain has two systems, one precise and methodical, the other fuzzier and more intuitive. In pop psych these are called the left and right sides of the brain. We know that the brain isn’t actually divided this way, but it’s conceivable the mythology reflects some real aspect of how it works. In Thinking, Fast and Slow, Daniel Kahneman talks in similar terms about how human behavior can be understood in terms of a fast, automatic “system one” and a slow, logical “system two.” The existence of two systems explains why almost everyone has intuitions like “if 2 chickens can lay 2 eggs in 2 days, 4 chickens can lay 4 eggs in 4 days,” even though we can pretty easily work out that it’s wrong. The fast, associationist system gives the wrong answer before the slow, methodical system arrives at the right one. That would make perfect sense if system one were a massively-parallel neural network and system two were a serial Turing machine (at some level of representation).
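(To spell out the arithmetic the slow system has to do: 2 chickens laying 2 eggs in 2 days means each chicken lays one egg every 2 days, so 4 chickens lay 4 eggs in 2 days, which is 8 eggs in 4 days, not 4.)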

This would also seem to explain certain observations about language processing, if we suppose that both systems are involved in overlapping but disconnected language-related tasks. For instance, Rumelhart and McClelland gave the example of “The man the boy the girl hit kissed moved” to show that we have trouble understanding the deep recursive structures that the classical model says are so important. Most people fail to comprehend this sentence on a first pass, yet we can understand it if we think about it a little longer (the girl hit the boy, the boy kissed the man, and the man moved). Again, this makes sense if we suppose that the first pass is carried out by a fast, connectionist-like neural architecture and the second pass by a slow, neurally-implemented Turing machine. This would also explain why the different levels of language processing seem to happen in parallel even though we have trouble formulating rules that work on a partial input: first-pass processing is done on a parallel architecture, while the rules are carried out on a serial one.

The presence in the brain of a slow, rule-based system for processing language might also enable the network-based system to do more with less data. The two systems could be independent in their representations, yet each could take the other’s output as input. The rule-based system could then generate valid sentences to feed the network system, letting it get really good at quickly performing most, but not all, language-related tasks. (I’m not sure if a two-system model of cognition could explain the data problem on other tasks, though.)
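As a toy illustration of that arrangement (mine, not F&P’s), here is a minimal Python sketch: a symbolic, rule-based generator expands a made-up context-free grammar into well-formed sentences, and only its surface output, never the rules themselves, gets handed to a network-style learner as training data.

```python
import random

# A toy "slow, rule-based system": a context-free grammar whose
# rules generate well-formed sentences by recursive expansion.
# The grammar, symbols, and vocabulary are invented for illustration.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "RC"]],
    "RC": [["that", "VP"]],            # relative clauses make it recursive
    "VP": [["V", "NP"], ["V"]],
    "N":  [["man"], ["boy"], ["girl"]],
    "V":  [["kissed"], ["hit"], ["moved"]],
}

def generate(symbol="S"):
    """Expand a symbol into a list of words by random rule choices."""
    if symbol not in GRAMMAR:          # a terminal: an actual word
        return [symbol]
    rule = random.choice(GRAMMAR[symbol])
    return [word for part in rule for word in generate(part)]

# The symbolic system's surface output becomes training data for the
# network system, which never sees the rules themselves.
training_data = [" ".join(generate()) for _ in range(5)]
for sentence in training_data:
    print(sentence)
```

The point of the sketch is just the division of labor: all the structural knowledge lives in the rules, while the learner would only ever see strings.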

This also seems consistent with biological evolution. A connectionist network can do some useful tasks with just a few neurons, so it’s easy to imagine a mutation creating a proto-brain of just a few neurons that grew over many generations, to the point where it was large enough to implement a Turing machine (in addition to its other functions). By contrast, if the brain were just a Turing machine, it’s hard to imagine how it could have evolved from more rudimentary structures, since the intermediate stages wouldn’t retain any useful function along the way.

If the human brain in particular evolved to implement a Turing machine, that would also explain the leap between what human brains can do and what other animals can do. A chimp’s brain doesn’t seem all that different from a human brain, but no matter how hard we’ve tried, we can’t teach a chimp to use recursive syntactic structure. That would make sense if human brains have, and chimp brains lack, a system that fundamentally works by processing symbols according to recursive constituent relations.

Likewise, my understanding is that humans started doing all the things we associate with our cognitive sophistication relative to other animals (language, religion, art, technological development) at around the same time, on an evolutionary scale. That too would make sense if all of those things result from the ability to process symbols, which appeared all at once when the human brain evolved to implement a Turing machine.

Fodor and Pylyshyn claimed that it’s implausible for human cognition to be systematic while the cognition of other animals is not. However, I wasn’t convinced by their argument. They state:

It is not, however, plausible that only the minds of verbal organisms are systematic. Think what it would mean for this to be the case. It would have to be quite usual to find, for example, animals capable of representing the state of affairs aRb but incapable of representing the state of affairs bRa.

I don’t find it at all obvious that this isn’t usual among non-human animals, at least if I’m understanding what’s being said correctly. I’ve heard that Galapagos tortoises don’t fear larger animals (like humans) because they don’t have any natural predators. It seems reasonable to suppose, then, that they can’t conceive of something eating a tortoise. However, presumably they can conceive of a tortoise eating something else, seeing as they haven’t all starved by now.


I’ve since learned that the idea I proposed here is known as “Dual Process Theory.” A TA referred me to this paper, which makes similar arguments more formally, and with reference to cognition in general rather than just language.