Rational use of cognitive resources: levels of analysis between the computational and the algorithmic
Griffiths, T. L., Lieder, F., & Goodman, N. D. (2015). Rational Use of Cognitive Resources: Levels of Analysis Between the Computational and the Algorithmic. Topics in Cognitive Science, 7(2), 217–229. doi:10.1111/tops.12142
Summary
In this paper, Griffiths et al. propose focusing on levels of analysis that lie between the computational and the algorithmic. They outline a method for working at these intermediate levels, called resource-rational analysis. The key idea in resource-rational analysis is to begin with a computational-level analysis, assume an idealized abstract computational architecture that solves the problem posed at the computational level, and then examine how resources should be used optimally within that architecture. It is specifically not a proposal to modify the computational level:
Rather than blurring these lines and building constraints into computational-level theories, we suggest a different approach: define the computational-level theory without considering limitations on its execution, and then explore the consequences of those limitations as a further analysis that brings us closer to an algorithmic-level theory… Various proposals about limitations—or alternatively abstract computational architectures—provide us with levels of analysis between the computational and the algorithmic, and the principle of rationality provides us with a methodology for developing models at those intermediate levels. (pg. 220)
Griffiths et al. outline four steps in performing a resource-rational analysis:
- Function. Perform a computational-level analysis to determine the function of the system.
- Model of mental computation. Pick a class of algorithms that approximate the optimal solution (e.g., particle filters), and define the computational costs within that model (e.g., the number of samples).
- Optimal resource allocation. Determine which algorithm in that class optimally trades off accuracy against computational cost (i.e., is “resource-rational”); a toy sketch of steps 2 and 3 follows this list.
- Evaluate and refine. Compare the model’s predictions to human behavior, and revise the assumptions made in steps 1, 2, and 3 as necessary.
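To make steps 2 and 3 concrete, here is a minimal sketch (my own illustration, not from the paper) of a sample-based model of computation: the agent faces a two-alternative decision, draws $k$ samples from its posterior over which action is better, goes with the majority, and pays a fixed cost per sample. The numbers (`p`, `reward`, `cost_per_sample`) are made up for illustration.

```python
import numpy as np
from scipy.stats import binom

# Toy "sampling to decide" model (illustrative assumptions, not the paper's):
# the posterior probability that action 1 is correct is p; the agent draws k
# posterior samples, picks the majority action, and pays cost_per_sample per draw.

def p_correct(k, p):
    """Probability that the majority of k posterior samples picks the right action."""
    counts = np.arange(k + 1)
    pmf = binom.pmf(counts, k, p)
    wins = pmf[counts > k / 2].sum()
    ties = pmf[counts == k / 2].sum()  # only possible for even k; break by coin flip
    return wins + 0.5 * ties

def net_utility(k, p, reward=1.0, cost_per_sample=0.01):
    """Expected reward of acting on k samples, minus the cost of drawing them."""
    return reward * p_correct(k, p) - cost_per_sample * k

if __name__ == "__main__":
    p = 0.7  # posterior probability that action 1 is correct
    ks = np.arange(1, 51)
    best_k = ks[int(np.argmax([net_utility(k, p) for k in ks]))]
    print(f"Resource-rational number of samples: {best_k}")
    # The optimum balances diminishing accuracy gains against the linear sampling cost.
```

Step 3 is just the `argmax` at the end; step 4 would then ask whether the number of samples people actually appear to use looks anything like this optimum.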
They posit a very specific definition of resource rationality, based on the notion of the value of computation (VOC) of a candidate computation $c$ in the current state $s$, given the agent’s belief $B$ about $s$ and about the $Q$ function $Q(s,a)$ (the expected value of taking action $a$ in state $s$).
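In the standard rational-metareasoning form, the definition reads roughly as follows (this is my reconstruction, so the paper’s exact notation may differ; $B'$ is my label for the agent’s belief after performing $c$, and $\mathrm{cost}(c)$ for the cost of that computation):

$$\mathrm{VOC}(c, s, B) = \mathbb{E}\Big[\max_a \mathbb{E}\big[Q(s,a) \mid B'\big] \,\Big|\, B, c\Big] - \max_a \mathbb{E}\big[Q(s,a) \mid B\big] - \mathrm{cost}(c)$$

In words: the expected improvement in decision quality that the computation brings about, minus what the computation costs. Being resource-rational then (as I read it) means performing whichever computation currently has the highest VOC, and acting as soon as no computation has positive VOC.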
Griffiths et al. note that in many cases the initial resource-rational analysis may itself make assumptions of unbounded optimality (e.g., the ability to draw perfect posterior samples). Further iterations of the analysis can identify these assumptions and apply a resource-rational analysis to them as well (e.g., accounting for the correlation between successive MCMC samples).
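Concretely (again my own toy numbers, reusing the sampling-to-decide setup from the sketch above): if the $k$ samples come from an MCMC chain whose autocorrelation behaves roughly like an AR(1) process with lag-1 correlation $\rho$, they carry about as much information as $k(1-\rho)/(1+\rho)$ independent samples, and redoing the step-3 optimization with that effective sample size changes where the resource-rational optimum lands.

```python
import numpy as np
from scipy.stats import binom

# Second pass of the toy analysis (my sketch, not the paper's): the agent's k
# samples now come from an MCMC chain with AR(1)-like lag-1 autocorrelation rho,
# so they are roughly as informative as k_eff = k * (1 - rho) / (1 + rho)
# independent samples.

def p_correct(k_eff, p):
    """Majority-vote accuracy from (approximately) k_eff independent samples."""
    k = max(1, int(round(k_eff)))
    counts = np.arange(k + 1)
    pmf = binom.pmf(counts, k, p)
    return pmf[counts > k / 2].sum() + 0.5 * pmf[counts == k / 2].sum()

def net_utility(k, p, rho, reward=1.0, cost_per_sample=0.01):
    """Expected reward from k correlated samples, minus the cost of drawing them."""
    k_eff = k * (1 - rho) / (1 + rho)  # effective sample size under AR(1) correlation
    return reward * p_correct(k_eff, p) - cost_per_sample * k

if __name__ == "__main__":
    p = 0.7
    ks = np.arange(1, 101)
    for rho in (0.0, 0.5):
        best_k = ks[int(np.argmax([net_utility(k, p, rho) for k in ks]))]
        print(f"rho = {rho}: resource-rational k = {best_k}")
    # Correlated samples are individually less informative but cost just as much,
    # so the optimum shifts and the best achievable net utility drops relative to
    # the perfect-sampling assumption.
```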
Takeaways
I like this approach a lot, and I think that starting with approximation algorithms from computer science is likely to be incredibly fruitful. However, many of those algorithms come with proven bounds only for large $n$. I wonder to what extent there are other algorithms, ones computer scientists aren’t necessarily investigating, that have provably better bounds (or other relevant properties) for small $n$. For example, are there inference algorithms that do poorly at producing perfect posterior samples as $n$ goes to infinity, but which give somewhat better samples (e.g., less correlated ones) in the short run than more popular algorithms like Metropolis-Hastings (MH)?
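One rough way to poke at that question empirically (entirely my own sketch, with a made-up toy posterior): compare how well the first few samples from a random-walk MH chain estimate a posterior mean, versus the same number of perfect i.i.d. samples.

```python
import numpy as np

# Toy comparison (illustrative assumptions only): target posterior is N(0, 1);
# compare the MSE of the posterior-mean estimate from the first n samples of a
# random-walk Metropolis-Hastings chain vs. n perfect (i.i.d.) samples.

rng = np.random.default_rng(0)

def log_post(x):
    return -0.5 * x ** 2  # unnormalized log density of N(0, 1)

def mh_samples(n, step=0.5, x0=2.0):
    """First n samples of a random-walk MH chain started away from the mode."""
    xs, x = [], x0
    for _ in range(n):
        prop = x + step * rng.normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(x):
            x = prop
        xs.append(x)
    return np.array(xs)

def mse_of_mean_estimate(sampler, n, reps=2000):
    """MSE of the posterior-mean estimate (true mean is 0) over many short runs."""
    return float(np.mean([sampler(n).mean() ** 2 for _ in range(reps)]))

if __name__ == "__main__":
    for n in (5, 20, 100):
        mse_mh = mse_of_mean_estimate(mh_samples, n)
        mse_iid = mse_of_mean_estimate(lambda m: rng.normal(size=m), n)
        print(f"n = {n:3d}: MSE (MH) = {mse_mh:.3f}   MSE (i.i.d.) = {mse_iid:.3f}")
    # In the small-n regime the MH estimates suffer from correlation and the
    # initial transient; the absolute gap shrinks as n grows, which is exactly
    # the small-n vs. asymptotic contrast the question is about.
```

Swapping different samplers (or different short-run quality metrics) into the two functions above would be the obvious way to hunt for the kind of algorithm the question has in mind.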