Some thoughts on “rationality”

Meta: I recently stumbled upon this post, which I wrote in Summer 2021 and apparently never published (for reasons I can’t recall). My ideas have definitely evolved since then, but it still felt good enough to post; in particular, I think the two bolded bits towards the end are still highly relevant today.

***

A vast number of related bodies of literature are interested in rationality and decision-making. The term “rationality” is used in subtly different ways in different fields and contexts, which is worth keeping in mind when exploring these bodies of literature. 

The “classical” variant of rationality is what I will refer to as axiomatic rationality: a way of making decisions that conforms to a set of abstract axioms (also see here). Departing from any of these axioms makes the decision-making agent “exploitable” (e.g. via a money pump) and is thus considered “irrational”.
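To make “exploitable” concrete, here is a minimal money-pump sketch (my own illustration; the preference cycle, fee, and numbers are all made up): an agent whose preferences are cyclic (A over B, B over C, C over A) will pay a small fee for every swap it prefers, and so can be led in circles, ending with the same holding but less money.

```python
# Toy money pump: cyclic preferences A > B > C > A are exploitable.
# All names and numbers here are hypothetical.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y): agent prefers x to y

holding, wealth, fee = "C", 100.0, 1.0
for offer in ["B", "A", "C", "B", "A", "C"]:    # a trader offers a series of swaps
    if (offer, holding) in prefers:             # agent strictly prefers the offer...
        holding, wealth = offer, wealth - fee   # ...and pays a fee to swap

print(holding, wealth)  # -> 'C' 94.0: same holding as at the start, 6 units poorer
```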

In this post I do not want to focus on axiomatic rationality, however. Instead, I will introduce two related conceptions of rationality and explore some possible implications for how we might understand the world. 

Bounded rationality  

Bounded rationality is a term that has been thrown around a lot recently, so I expect most readers to have come across it. 

Instead of categorizing humans as either rational or irrational, this school of thought takes seriously the fact that human decision-makers face resource constraints which affect what “a good decision strategy” looks like. Resources relevant to decision-making include time, attention, cognitive capacity, memory, etc. Because humans are bounded, they fall back on so-called “fast and frugal heuristics”. In contrast to how the “heuristics and biases” literature interprets this behaviour, relying on heuristics doesn’t necessarily represent human irrationality; given the agent’s constraints, relying on such heuristics is often the more robust and ultimately more successful strategy. (To better understand this landscape of thought, I recommend reading “Bounded rationality: the two cultures” and “Mind, rationality, and cognition: An interdisciplinary debate.“)

A subtlety within this framework which I believe is often glossed over is that an agent’s boundedness is not inherent to the agent but relative to their decision problem and environment. To illustrate: the same agent can be described as rational when playing tic-tac-toe, but as boundedly rational when making decisions within the economy, since the complexity exhibited by the economy vastly overshadows the cognitive capacities of humans. 

If an agent cannot process all relevant information - and in complex systems all relevant information can be a lot of information - the agent tends to be better off relying on heuristics. Biases and heuristics thus lose their common negative connotation; they become a necessity. Instead of asking how to make agents rational and unbiased, we should be asking how we can differentiate better heuristics from worse ones. 
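As a concrete example of a fast and frugal heuristic, here is a minimal sketch of Gigerenzer-style “take-the-best” (the cue names and values are made up for illustration): to compare two options, go through binary cues in order of validity and decide on the first cue that discriminates, ignoring all remaining information.

```python
def take_the_best(option_a, option_b, cues_by_validity):
    """Pick between two options using the first discriminating cue."""
    for cue in cues_by_validity:               # most valid cue first
        a, b = option_a[cue], option_b[cue]
        if a != b:                             # first cue that discriminates decides
            return "a" if a > b else "b"
    return "guess"                             # no cue discriminates

# Hypothetical task: which of two cities is larger?
cues = ["has_intl_airport", "is_capital", "has_university"]  # ordered by validity
city_a = {"has_intl_airport": 1, "is_capital": 0, "has_university": 1}
city_b = {"has_intl_airport": 1, "is_capital": 1, "has_university": 0}

print(take_the_best(city_a, city_b, cues))  # -> 'b' (decided by 'is_capital')
```

Despite discarding most of the available information, heuristics of this family have been reported in the bounded-rationality literature to match or beat full linear models in many sparse-data environments, which is exactly the “better versus worse heuristics” question raised above.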

Ecological rationality  

This is where ecological rationality has things to say. 

Ecological rationality asks: “Given a problem and decision environment, which strategies should an agent rely on when optimization is not feasible?” It is thus the normative study of which heuristic processes succeed in a given environment, where the environment is such that optimization is not feasible. According to ecological rationality, the yardstick determining the success of a decision is to be found in the external world, as opposed to in its internal consistency with the principles of rational choice theory, as most classical theories of rationality purport. 

This conception is rooted in the realization, first popularized by Herbert A. Simon, that most decision environments do not in fact fulfil the assumptions necessary to apply as-if models of optimal decision-making, meaning that there is no guarantee that using optimization processes will lead to optimal outcomes. 

Instead, for many problems, optimization is not feasible because the decision situation exhibits one or more of the following three characteristics: it is (1) ill-defined, meaning that not all alternatives, consequences, and probabilities are or can be known; (2) underpowered, meaning that parameter values must be estimated from limited samples, which can lead to greater error than the “bias” of a simple heuristic introduces; or (3) intractable, meaning that while the problem is well-specified, it is computationally intractable and thus no optimal strategy can be found within the imposed time and resource constraints. In fact, most decision problems, including the decision problems we are concerned with in our research (GCR preparedness), fall into the category of decision problems which cannot be solved by optimization (see below). 
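To give a feel for point (3), here is a toy back-of-the-envelope sketch (the branching factor and evaluation speed are made-up assumptions) of how brute-force optimization over action sequences explodes with planning depth:

```python
# Toy illustration of intractability: exhaustive search over action
# sequences grows exponentially in planning depth. The branching factor
# and the evaluation speed below are hypothetical.
branching_factor = 10          # choices available at each step
evals_per_second = 1e9         # generous: a billion evaluations per second

for depth in (5, 10, 20):
    sequences = branching_factor ** depth
    seconds = sequences / evals_per_second
    print(f"depth {depth:2d}: {sequences:.0e} sequences, ~{seconds:.0e} s to enumerate")
```

Already at depth 20 this comes to roughly 1e11 seconds, i.e. on the order of thousands of years, for a decision tree that is tiny compared to, say, the economy.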

The conditions for the infeasibility of optimization lead us to recognise the central role of the information structure of the environment in which the agent makes a decision. Decision-making mechanisms, according to ecological rationality, exploit this information structure of the environment. The goal of the study of ecological rationality is thus to map individual and collective decision-making heuristics (“adaptive toolbox”) onto a set of environmental structures. 

In other words, ecologically rational decision-making is a measure of fit between the mental structures of the decision-maker and the informational structures of the environment. This allows for comparative questions such as: given a decision problem and environment, which of a set of decision-making strategies will perform best? Similarly, it follows that, when aiming to improve decision-making in real-world scenarios, it is central to match the strategy and information to the mental structures of the decision-maker(s). 

Even if an agent is able to identify the “smartest” decision strategy, the additional cost of finding that strategy might outweigh the added benefit of having identified the marginally better decision. What constitutes the “best strategy” therefore depends inherently on the environment, including on the strategies adopted by other agents. Furthermore, an agent might make a sequence of decisions in an evolving environment; the strategy that is best in one decision situation might not be the strategy that is most robust over the course of many decisions. 

Some implications for thinking about the behaviour of possible (artificial) intelligences

I am interested in the space of possible intelligences, and in how we should expect them to act and make decisions in the world. The discussion here will thus explore some ways the above might shed light on how we should expect strong AI (“superintelligent” / ”general” / ”transformative” / ...) to behave. 

Economists sometimes differentiate between “normal” and inherent/Knightian uncertainty (the former is also sometimes referred to as “risk”) (see here). “Normal” uncertainty, or risk, refers to a decision under conditions of known probabilities. (For example, a fair coin flip: you know that there is a 50/50 chance the coin will land heads-up.) Inherent or Knightian uncertainty refers to a decision under conditions of unknown probabilities; you don’t even know what your probability distribution should look like, potentially because you are not even sure what the space of possible future states is. 
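A minimal sketch of the difference (the payoff numbers are made up, and maximin is just one possible decision rule under unknown probabilities, not the only one): under risk, we can maximize expected value; under Knightian uncertainty, there are no probabilities to average over, so the agent must fall back on some other rule, e.g. picking the action with the best worst case.

```python
# Payoff of each action in each possible world state (hypothetical numbers).
payoffs = {
    "act_1": {"state_a": 10, "state_b": -5},
    "act_2": {"state_a": 3,  "state_b": 1},
}

# Decision under risk: state probabilities are known, so maximize expected value.
probs = {"state_a": 0.5, "state_b": 0.5}
best_under_risk = max(
    payoffs, key=lambda act: sum(probs[s] * v for s, v in payoffs[act].items())
)

# Decision under Knightian uncertainty: no probabilities to average over.
# One robust fallback is maximin: pick the action with the best worst case.
best_under_uncertainty = max(payoffs, key=lambda act: min(payoffs[act].values()))

print(best_under_risk)         # -> 'act_1' (expected value 2.5 vs 2.0)
print(best_under_uncertainty)  # -> 'act_2' (worst case 1 vs -5)
```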

If you, like me, believe in a deterministic universe, you might struggle with the concept of Knightian uncertainty. What does it mean, you might ask, to be “inherently unpredictable”? For me, this confusion persists until we define inherent uncertainty as relative to an agent’s epistemic situation (i.e. as subjective). 

There is an obvious link between (subjective) inherent uncertainty and bounded rationality. The subjective boundedness of a decision-maker imposes additional constraints on decision-making, beyond the axioms of rational choice, turning decisions under risk into decisions under inherent subjective uncertainty. 

One might ask: will a superintelligent AI experience inherent uncertainty, or will it be able to treat any decision as a decision under risk? It seems to be quite common to expect that strong AI will be rational in the sense of being able to treat all decisions as decisions under risk. However, that doesn’t seem right to me. 

Conceptualizing bounded rationality as relative to a decision environment informs how to think about the decision-making of intelligent agents. We want to understand how AI makes decisions. Instead of asking whether or not some AI system makes decisions conforming with the axioms of rational choice, a better starting point might be to ask: relative to what problem and environment is the AI making its decisions? 

This can add nuance to how we think about the promise of strong AI solving all our problems for us, and doing so in a way that is, unlike with human decision-makers, free of irrationalities and biases. 

Many of the decision environments with the most real-world consequences (e.g. the economy, policy-making and governance) are complex social systems. These systems exhibit non-linearity, meaning that the slightest differences in initial conditions can result in large differences later on. What sort of epistemic position would an intelligent system need to be in to treat decisions within such complex systems as decisions under risk? This reflection makes me sceptical of some of the more simplistic scenarios of how strong AI will navigate the real world. 
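A standard toy demonstration of this kind of sensitivity to initial conditions is the logistic map with r = 4, a textbook chaotic system; it is not a model of the economy, just about the smallest deterministic system that makes the point:

```python
# Two trajectories of the chaotic logistic map x -> 4x(1-x), started
# a mere 1e-6 apart. The deterministic rule is identical for both.
x1, x2 = 0.400000, 0.400001
for step in range(1, 41):
    x1, x2 = 4 * x1 * (1 - x1), 4 * x2 * (1 - x2)
    if step % 10 == 0:
        print(f"step {step:2d}: x1={x1:.6f}  x2={x2:.6f}  gap={abs(x1 - x2):.6f}")
# Within a few dozen steps the gap is of order 1: the trajectories have
# become effectively unrelated despite the identical deterministic rule.
```

An agent wanting to treat such a system as a decision under risk would need, among other things, measurement precision that grows exponentially with its planning horizon.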

One, albeit speculative, caveat seems worth mentioning: this inherent epistemic challenge could compel a powerful AI to try to reduce the complexity of the real world, thereby making it more controllable. (This is, in part, what we can already see with things like the YouTube recommender algorithm.) Social complexity is a primary mark of modern society, so the prospect of a strong system trying to radically reduce complexity in the world seems worrisome. 

