Last week I spoke to Alex Gomez-Marin, a behavioural neuroscientist with a passing interest in the theory of Causal Entropic Forces, about determining the Causal Entropic Force on a dimensionless particle contained in a 2-D heat reservoir. I promised to try to work out an approximation of the Causal Entropic Force on the particle. Meanwhile, it has been almost a year since I last wrote an article on the matter, and since then I have developed a better understanding of this theory, which Dr. Wissner-Gross calls ‘an equation for intelligence’.
You might guess, from my slightly reticent tone, that I’m no longer the biggest fan of this theory. While I won’t lambast it as Gary Marcus did in the following New Yorker article, I now think that, on balance, his criticism was spot on. To understand why, I shall present a constructive dissection of the theory by going through its principles and simulating the toy problem of a particle in a heat reservoir (code here).
Causal entropic forces:
In the following summary of Wissner-Gross’s meta-heuristic, it’s assumed that the agent has access to an approximate or exact simulator of its environment. A close reading of the original paper [1] will show that this assumption is actually necessary.
For any open thermodynamic system, we treat the phase-space paths x(t) taken by the system over the time interval 0 ≤ t ≤ τ as microstates and partition them into macrostates using the equivalence relation:

x(t) ~ x'(t) if and only if x(0) = x'(0)   (1)
As a result, we can identify each macrostate X with a unique present system state x(0). This defines a notion of causality over the time interval.
Causal path entropy:
We can define the causal path entropy S_c of a macrostate X with the associated present system state x(0) as the path integral:

S_c(X, τ) = -k_B ∫ Pr(x(t) | x(0)) ln Pr(x(t) | x(0)) Dx(t)   (2)

where we have:

Pr(x(t) | x(0)) = ∫ Pr(x(t), x_env(t) | x(0)) Dx_env(t)   (3)
In (3) we basically integrate over all possible paths taken by the system’s environment. In practice, this integral is intractable and we must resort to approximations, which we shall discuss shortly.
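As a concrete illustration, here is a minimal Monte Carlo sketch of one such approximation, under assumptions of my own (the paper prescribes no particular scheme): sample paths from the present state under hypothetical Gaussian random-walk dynamics, bin their terminal states into coarse macrostates, and use the discrete entropy of that histogram as a crude stand-in for the path integral:

```python
import numpy as np

def sample_paths(x0, n_paths, n_steps, step_std, rng):
    """Sample Gaussian random-walk paths from x0 (hypothetical dynamics)."""
    steps = rng.normal(0.0, step_std, size=(n_paths, n_steps, len(x0)))
    return x0 + np.cumsum(steps, axis=1)

def path_entropy_proxy(x0, n_paths=500, n_steps=50, step_std=0.1, seed=0):
    """Crude proxy for causal path entropy: discrete entropy of the
    terminal-state distribution, estimated via a coarse histogram."""
    rng = np.random.default_rng(seed)
    paths = sample_paths(np.asarray(x0, dtype=float), n_paths, n_steps, step_std, rng)
    terminal = paths[:, -1, :]              # one terminal state per sampled path
    hist, _ = np.histogramdd(terminal, bins=10)
    p = hist.ravel() / hist.sum()           # empirical macrostate probabilities
    p = p[p > 0]
    return -np.sum(p * np.log(p))
```

Note that this collapses each path to its terminal state, which already discards information relative to (2); it is a sketch of the flavour of approximation needed, not a faithful estimator.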
Causal entropic force:
A path-based causal entropic force may be expressed as:

F(X_0, τ) = T_c ∇_X S_c(X, τ) |_{X = X_0}   (4)

where T_c (a causal path temperature) and τ (the time horizon) are two free parameters. This force basically brings us closer to macrostates that maximise S_c. In essence, the combination of equations (2), (3) and (4) maximises the number of future options of our agent. This isn’t very different from what many people try to do in life, but this meta-heuristic does have very important limitations.
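To make the gradient in (4) concrete, here is a small sketch of a numerical estimate: a central finite-difference gradient of some entropy estimate, scaled by T_c. The quadratic “entropy” field below is purely illustrative (my own stand-in, peaked at the centre of a hypothetical 10x10 room); in practice S_c would itself be a noisy Monte Carlo estimate.

```python
import numpy as np

def entropic_force(entropy_fn, x0, T_c=1.0, eps=0.1):
    """Estimate F = T_c * grad S at x0 by central finite differences.
    entropy_fn maps a state to a (possibly estimated) entropy value."""
    x0 = np.asarray(x0, dtype=float)
    grad = np.zeros_like(x0)
    for i in range(len(x0)):
        dx = np.zeros_like(x0)
        dx[i] = eps
        grad[i] = (entropy_fn(x0 + dx) - entropy_fn(x0 - dx)) / (2 * eps)
    return T_c * grad

# Illustrative entropy field only: highest at the centre (5, 5) of the room.
toy_entropy = lambda x: -np.sum((np.asarray(x) - 5.0) ** 2)

f = entropic_force(toy_entropy, [2.0, 8.0])  # points towards the centre
```

With a noisy entropy estimate, this finite-difference gradient becomes unreliable, which foreshadows the difficulty discussed below.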
The main limitation is that the agent actually needs access to the true state-transition probabilities of its environment, and if such a model is to be learned, the authors of the original paper don’t say how.
A toy problem:
When simulating the toy problem of a dimensionless particle in a square heat reservoir, I made the following assumptions:
- The room is a 10x10 square and the walls are inelastic.
- Given that the state is represented by the particle’s position and the room is convex, the Euclidean distance is a good metric for measuring the difference between states.
- Assuming that the Causal Path Entropy varies continuously over states, we have a second argument for discretisation and may use the max operator rather than the nabla operator to discover local maxima.
- Assuming that the Causal Path Entropy is proportional to a propensity for mixing, we may approximate variations in Causal Path Entropy with Euclidean proxy measures for diffusion such as average nearest neighbours and the radius of gyration.
- The particle isn’t quite dimensionless, though it’s relatively small with respect to the room, which allows us to approximate the Causal Path Entropy with the Boltzmann Entropy.
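Under these assumptions, the dynamics can be sketched as a clamped Brownian walk. The wall handling below (clamping positions to the room) is my own crude reading of the inelastic walls, and all parameter values are illustrative:

```python
import numpy as np

ROOM = 10.0  # side length of the 10x10 square room

def step_particle(pos, step_std, rng):
    """One Brownian step; inelastic walls are modelled by clamping
    the new position to the boundaries of the room."""
    new = pos + rng.normal(0.0, step_std, size=2)
    return np.clip(new, 0.0, ROOM)

def rollout(pos, n_steps=100, step_std=0.2, seed=0):
    """Simulate one path from `pos` and return its terminal state."""
    rng = np.random.default_rng(seed)
    p = np.asarray(pos, dtype=float)
    for _ in range(n_steps):
        p = step_particle(p, step_std, rng)
    return p
```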
Considering these five assumptions, I tried two proxy measures. I first used the average nearest-neighbour measure as a proxy for dispersion, though this wasn’t as reliable as I had hoped, so I experimented with the radius of gyration of an ensemble of terminal states as a proxy for diffusion, as suggested in [2]. Below is a figure demonstrating convergence to the centre of the room using the radius of gyration as a proxy measure:
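For reference, the two proxy measures can be computed for an ensemble of terminal states as follows (minimal implementations of my own; the linked code may differ in detail):

```python
import numpy as np

def radius_of_gyration(points):
    """RMS distance of the ensemble from its centre of mass
    (the proxy for diffusion that worked best)."""
    pts = np.asarray(points, dtype=float)
    com = pts.mean(axis=0)
    return np.sqrt(np.mean(np.sum((pts - com) ** 2, axis=1)))

def mean_nearest_neighbour(points):
    """Average distance from each point to its nearest neighbour
    (the proxy for dispersion I tried first)."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # ignore each point's distance to itself
    return d.min(axis=1).mean()
```

An ensemble whose terminal states spread out more, such as one started near the centre of the room rather than hemmed in by a corner, scores higher on both measures.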
Interestingly, the second measure performed much better than the first, and I suspect that this is because the radius of gyration implicitly exploits the fact that the square is convex, so the centre of the square may be identified with the centre of the largest inscribed circle. This raises the question of how general these proxy measures actually are, and whether we can hope to efficiently calculate path entropy for non-trivial systems even if we assume that a simulator is in fact available.
An equation for intelligence?
To be fair to the theory of Causal Entropic Forces, I think it’s necessary to compare it with other prominent single-motivation theories, such as the Free Energy Principle, which aims to minimise prediction error, and the theory of Empowerment, which encourages agents to maximise their number of intrinsic options [3,4]. Unlike these other theories, which are frameworks for learning, inference and decision-making, the theory of Causal Entropic Forces is mainly a framework for decision-making and simulation, assuming that a simulator of the environment is known to the agent. Moreover, given that an Empowerment-maximising agent maximises its number of intrinsic options, the Causal Entropic Force is merely a third-rate Empowerment variant.
Finally, even in the event that such a simulator is available (e.g. chess or Go), you would actually need to design a clever search algorithm for that particular environment. In non-trivial environments, you can’t actually use the nabla operator as proposed by Wissner-Gross to move the agent towards more promising states. For these reasons, I think it’s completely silly to compare this five-page theory of ‘intelligence’ with Einstein’s labours on the theory of relativity.
1. A. D. Wissner-Gross & C. E. Freer. Causal Entropic Forces. Physical Review Letters. 2013.
2. Hannes Hornischer. Causal Entropic Forces: Intelligent Behaviour, Dynamics and Pattern Formation. Master’s Thesis. 2015.
3. K. Friston. The free-energy principle: a rough guide to the brain? Trends in Cognitive Sciences. 2009.
4. C. Salge et al. Empowerment — An Introduction. 2013.