Retrospective: Towards a Clarion Model of Raven's Matrices

I presented a progress report about my dissertation research at my department’s weekly Issues in Cognitive Science Talk Series on 2019-12-11. You can watch the full talk here.

Here are some corrections/comments:

  • At several points, I mention a publication defining the format, but there is no reference to it in the slides, so here it is: Penrose, L. S. & Raven, J. C. (1936). A new series of perceptual tests: preliminary communication. British Journal of Medical Psychology, 16(2), 97-104. DOI: 10.1111/j.2044-8341.1936.tb00690.x
  • Starting at 40:50, I talk about whether RPM items have formally validated correct answers. I mention that the creators of RPM stipulate that each matrix should have only one valid answer, and that’s true (Penrose & Raven, 1936). However, I later say that ‘they’ claim the answer set could be selected so as to enforce this constraint. That’s not correct. What I should have said is that it’s possible to select the alternative set so as to resolve ambiguities. This has come up in discussion at least a few times, and I seem to recall reading about it, but I haven’t tracked down those references.

Why computational psychological modeling?

I think computational modeling is important for the advancement of psychology. Here is why. The arguments below readily generalize to other social and behavioural sciences, but I will focus on psychology since it is the field I am most familiar with.

I’ll start with a review of the typical research cycle in experimental psychology. Figure 1 illustrates an idealized workflow. The starting point is some background theory informed by existing literature and intuition. In practice, the background theory is sometimes, but not always, explicitly formulated and it is almost always an informal theory. The theory drives the formulation of operational (i.e., testable in principle) hypotheses. These hypotheses are then tested by behavioural experiments. Finally, experimental results are analysed and interpreted in light of the background theory and technical considerations about the experiments.

Figure 1: Illustration of a typical research cycle in experimental psychology. This is an idealized picture: the actual research process is far messier, often involving several epicycles among the various steps.

In my opinion, it is better to have formal background theories and, currently, the best way to formulate formal psychological theories is to construct computational psychological models. By a computational psychological model, I mean a piece of software whose logic is designed to capture the hypothesized structure and behaviour of psychological mechanisms and processes.1 The fundamental reason is that, in addition to informal analysis, computational models are amenable to deep logical/mathematical analysis and can be run in simulation. These properties may be leveraged to enhance the research cycle in the following ways:

  • Enhanced Hypothesis Generation. Computational models present two additional ways to generate hypotheses as compared to informal theory: (i) formal analysis and (ii) simulation of untested experimental designs. These methods may lead to the formulation of hypotheses that may otherwise be missed or not formulated at all; they may also lead to more specific hypotheses (e.g., in terms of expected measurements).
  • More Efficient Experimental Workflow. Computational models, if done well, distil existing knowledge into formal systems, so they may aid theoretical analysis and speed up formulation of hypotheses. Furthermore, they present a platform for quickly prototyping (i.e., piloting) experimental designs through simulation.
  • Deeper Analysis and Interpretation of Results. Computational models may inform more nuanced and precise analysis and interpretation of results. More specific hypotheses may allow the use of more powerful statistical tests, and unexpected findings may be addressed through post-hoc simulations exploring possible revisions or adjustments to theory. Furthermore, broadly-scoped computational models, such as cognitive architectures, may reveal possible links between seemingly unrelated findings.
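To make the simulation point concrete, here is a minimal sketch (my own toy example, not from the talk; all function names and parameter values are illustrative) of a classic style of computational model of two-choice decision making: a random-walk evidence accumulator in the spirit of diffusion models. Simulating it turns an informal intuition (“easier items are answered faster and more accurately”) into specific expected measurements:

```python
import random

def simulate_trial(drift=0.1, noise=1.0, threshold=10.0, rng=random):
    """Accumulate noisy evidence until it crosses a decision boundary.

    Returns (choice, rt): choice is 1 if the upper (correct) boundary
    was reached, 0 for the lower boundary; rt is the number of steps.
    """
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        # Each step adds a constant drift (evidence quality) plus noise.
        evidence += drift + rng.gauss(0.0, noise)
        steps += 1
    return (1 if evidence > 0 else 0), steps

def simulate_experiment(n_trials=1000, seed=42, **params):
    """Simulate many trials and summarize accuracy and mean response time."""
    rng = random.Random(seed)
    trials = [simulate_trial(rng=rng, **params) for _ in range(n_trials)]
    accuracy = sum(choice for choice, _ in trials) / n_trials
    mean_rt = sum(rt for _, rt in trials) / n_trials
    return accuracy, mean_rt
```

Running the experiment at two drift settings yields a quantitative prediction — higher drift should raise accuracy while shortening response times — which a behavioural experiment could then test directly.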

These advantages come with the technical, material, time, and training costs associated with developing, documenting, maintaining, and supporting complex software systems. I believe these costs can be managed with prudent planning and strategy. I am optimistic that, over time, computational models may be improved incrementally to account for broader and broader ranges of findings and become capable of accurate prediction. Furthermore, in the long run, I think the costs are more than worth it because sufficiently powerful models have great potential for application. For instance, they may help in the development of new psychological interventions or inform human factors design in computational systems.

I’ll end this with some pointers to the literature:

The first two articles are classic papers that advocate computational modeling in psychological research. The last is a fascinating example of how formal models may be applied in behavioural domains.


1. Computational models are distinct from statistical and mathematical models, though the distinctions are not crisp. It is often possible to implement statistical and mathematical models as computational models, and it is also often possible to analyse computational models from mathematical or statistical points of view. I specifically advocate computational models because they force a procedural/constructive encoding of theoretical concepts.
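To illustrate the footnote’s distinction with a toy example of my own (not from the post; all names and parameters are illustrative): an exponential forgetting curve is a mathematical model that maps time directly to recall probability, whereas a computational counterpart encodes a procedure — say, a memory trace that decays step by step unless rehearsed — from which the forgetting behaviour emerges:

```python
import math
import random

# Mathematical model: a closed-form, Ebbinghaus-style forgetting curve.
def recall_probability(t, s=20.0):
    """Probability of recall after t time steps, with stability s."""
    return math.exp(-t / s)

# Computational model: a procedural counterpart in which a memory trace
# decays each step and may be refreshed by rehearsal.
def simulate_recall(t, decay=0.05, rehearse_p=0.0, threshold=0.5, seed=0):
    """Simulate one memory trace over t steps; return True if it is
    still above the retrieval threshold at the end."""
    rng = random.Random(seed)
    strength = 1.0
    for _ in range(t):
        if rng.random() < rehearse_p:
            strength = 1.0               # rehearsal restores the trace
        else:
            strength *= (1.0 - decay)    # otherwise the trace decays
    return strength >= threshold
```

Both produce forgetting, but only the computational version commits to a mechanism — decay and rehearsal events — whose parameters can be probed, manipulated, and extended in ways the bare equation cannot.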