
Heidegger and Cognitive Science

Kiverstein (HCS:1-3) – limits of the original AI model

Sunday, 7 November 2021

KIVERSTEIN, Julian & WHEELER, Michael (ed.). Heidegger and Cognitive Science. London: Palgrave Macmillan, 2012, p. 1-3.

Cognitive science was founded on the idea that intelligent human behaviour is caused by internal psychological processes that work in much the same way as a digital computer. Digital computers are physical devices that construct interpretable and combinable elements or symbols, and carry out operations on those elements, such as copying, combining, creating, erasing, storing, retrieving etc. Early cognitive scientists hypothesised that something similar takes place in us whenever we think and reason. Psychological processes like expert reasoning, language production and understanding and logical problem solving were explained in terms of the construction, storage, retrieval and manipulation of symbolic representations. Some cognitive scientists even made the strong claim that there was no other way our minds could work. No system could possess the type of psychology required for intelligent behaviour that didn’t also produce, store and manipulate symbolic representations. Newell and Simon (1976) captured the spirit of this strong claim succinctly with their physical symbol system hypothesis, according to which digital computation is both necessary and sufficient for "general intelligent action" (p. 87). Once we conceptualise cognition in this way, it makes eminent sense to study cognition by building and programming computers that can think. Suppose, as classical cognitive scientists thought, that all we are doing when we engage in the types of psychological processes that generate intelligent human behaviour is accessing stored symbolic representations, building new representations and carrying out rule-governed operations on these representations. Computers can build, store, retrieve, transform and manipulate representations. Hence they have all the necessary ingredients cognitive scientists took to be required for cognising in ways that lead to intelligent behaviour.
Moreover, by building machines that [2] think and reason as we do, cognitive science could make intelligible how reason-respecting behaviour was in fact the outcome of perfectly mechanical processes. So it was that the project of engineering artificial intelligence was born.
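The symbol-manipulation picture the passage describes can be made concrete with a toy example. The sketch below (an editorial illustration, not from Kiverstein and Wheeler's text; all names are hypothetical) models cognition GOFAI-style as forward chaining: facts and rules are symbolic structures, and "thinking" is rule-governed copying and combining of those structures.

```python
# Toy "physical symbol system": cognition modeled as rule-governed
# manipulation of symbolic structures (simple forward chaining).
# Hypothetical illustration of the GOFAI picture described above.

# Stored symbolic representations: predicate-argument pairs.
facts = {("bird", "tweety")}

# Rules map a premise predicate to a conclusion predicate.
rules = [
    ("bird", "can_fly"),
    ("can_fly", "has_wings"),
]

def apply_rules(facts, rules):
    """Derive new facts by matching rule premises against stored facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, arg in list(derived):
                if pred == premise and (conclusion, arg) not in derived:
                    derived.add((conclusion, arg))  # build a new symbol structure
                    changed = True
    return derived

print(apply_rules(facts, rules))
# derives ("can_fly", "tweety") and ("has_wings", "tweety")
```

On the strong reading of the physical symbol system hypothesis, nothing more than this kind of storage, retrieval and rule application is needed, in principle, for general intelligent action.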

The idea that minds are computational engines remains at the core of thinking in cognitive science today, [1] but much has changed in recent years. John Haugeland (1985) labelled symbolic approaches to Artificial Intelligence of the kind just described "Good Old Fashioned AI" (GOFAI). The new fashion in AI circles, and in many other areas of cognitive science, is to emphasise real-time, dynamic couplings between brain, body and world, and the multiple ways in which cognisers exploit bodily and environmental structures to simplify or enhance the computational operations carried out by brains. GOFAI was premised on a thoroughly disembodied understanding of our psychology that entirely ignored the ways in which cognition takes place in organisms that are geared into cultural worlds. This view of cognition as taking place entirely inside of the heads of individuals that lack a history and a culture is undergoing gradual replacement by a view that takes brain, body and world to be equal partners in the production of cognitive behaviour.

Part of the impetus for this shift in thinking came from (what some thinkers considered to be) insuperable problems and failures that ultimately led the GOFAI research program to degenerate. Hubert Dreyfus has long argued that the reasons for the failure of the GOFAI research program can be accurately diagnosed using the writings of Martin Heidegger. [2] Dreyfus looked in particular to Heidegger’s phenomenological description of being-in-the-world. Human beings always find themselves in familiar situations that matter to them in determinate ways, and that they know how to deal with in such a way as to meet their concerns. What we encounter in those situations are meaningful things we know how to put to work to meet our interests and needs. We know how to find our way about in the world not because we have knowledge of a vast body of facts and rules that tell us what to do in each of an open-ended number of different situations we might encounter. Throughout our lives we acquire practical skills and habits, and it is because of this know-how that situations show up as offering possibilities for action that are keyed into our interests. Computers, argues Dreyfus, have to be programmed to deal with real-world situations using rules and representations on the basis of which they must somehow reconstruct the meaning the world always already has for us. They have to first construct a model of a situation or context of activity, and then form a [3] plan of action based on a model of a situation and knowledge of a vast body of rules and facts. Dreyfus argued that no computer that works in this way is likely to be capable of flexible and adaptive responses to the open-ended variety of situations we deal with as humans. The computer must work out which of the many rules and facts it knows are of relevance to its current situation.
This problem is multiplied once we build in the fact that the world is constantly changing in all sorts of unexpected ways. In order to deal with these changes the computer must know what to keep constant and what to change in its assessment of what is relevant and what is not. Perhaps the computer could be programmed with representations of lots of different contexts in which it will be required to act, and heuristics that tell it what to do in each of these contexts. However, it is hard to see how this is going to help, since the machine will still need to work out which of these rules and representations it is appropriate to bring to bear in its current situation. We get the same problem again but at a higher level of rules and representations, and there is no reason to think that the regress should end here.
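The regress Dreyfus points to can be sketched in code. In this editorial illustration (hypothetical names, not from the text), a system picks a context from cues using first-order heuristics; as soon as cues conflict or are missing, it needs further rules to decide which heuristic is relevant, and those meta-rules face the same question in turn.

```python
# Sketch of Dreyfus's relevance regress: rules that select a context
# need further rules to decide when they themselves apply.
# Hypothetical illustration, not from the quoted text.

contexts = {
    "restaurant": ["order food", "pay bill"],
    "library": ["whisper", "return books"],
}

# First-order heuristics: perceptual cue -> context.
context_rules = {"menu": "restaurant", "bookshelf": "library"}

def select_context(cues):
    """Pick a context from cues; fail when the heuristics underdetermine it."""
    matches = [context_rules[c] for c in cues if c in context_rules]
    if len(matches) == 1:
        return matches[0]
    # Conflicting or absent cues: deciding which first-order rule is
    # relevant now requires *meta*-rules -- and the same question
    # recurs for them. The regress has begun.
    return None

print(select_context(["menu"]))               # "restaurant"
print(select_context(["menu", "bookshelf"]))  # None: meta-rules needed
```

A human diner in a cafe with a bookshelf faces no such puzzle; on Dreyfus's diagnosis, that is because relevance is settled by embodied know-how, not by a further layer of rules.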


See online: Heidegger and Cognitive Science


[1] See for instance Jose Bermudez’s (2010) recently published textbook on cognitive science, which is organised around this idea.

[2] Dreyfus has also drawn extensively on ideas from Merleau-Ponty and the later Wittgenstein in developing his critique of GOFAI, but for obvious reasons we will concentrate on the aspects of his critique that are Heideggerian in provenance in what follows.