I can launch into a lecture at the drop of a hat. One of my friends responded to my notes about Stephen Wolfram’s lecture with an e-mail. This is my response (in places, quoting his e-mail, as one does). This is not exactly a review, and I’m afraid it might be a bit opaque to someone who has read neither Wolfram nor Egan, but I hope the general topic may still be of interest to some readers.
As you know, I didn’t attend the lecture, and the only exposure I’ve had to the book was the chat in your living room. I wonder, however, if you’ve read the novel Permutation City by Greg Egan.
Yes! Actually I’ve read it something like 3 times. It’s one of the SF books I keep not throwing out when I clean out my SF, along with Stanislaw Lem, The Book of the New Sun, and some others.
One of the big issues in the book, as far as realism goes, and I think we talked about this once, was whether it was conceivable that you could have a CA-like model in which the updates did not proceed chronologically in a straightforward way, i.e., whether you could still run a simulation in which you scrambled the order in which you updated certain regions until it was effectively random. Would there still be any sensible causality going on in the simulated universe?
Intuitively, the answer is obviously no… how could a person living in the simulation experience causality and the normal passage of time when the bleeding from cutting himself shaving was proceeding before he even went into the bathroom and turned on the light? And how would it even be sensible to model the outcome of a process before modeling the start of that same process? If this were possible we would not need algorithms. It would introduce an “oracle.”
The Oracle has pondered your question deeply. Your question is: “What is the outcome of applying rule 25923 to row 94857 of your simulation 19985790389393757?”
The answer is: “Who the fuck knows? You have to model the outcome of applying rule 25923 to row 94856 first.”
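To make the dependency concrete, here is a minimal sketch (mine, with Rule 30 standing in for the made-up rule numbers above) of an elementary cellular automaton. Each row is a pure function of the row before it, so the only way to get at row 94857 is to grind through every row before it.

```python
# A minimal sketch: in an elementary cellular automaton, row t is a pure
# function of row t-1, so there is no way to produce a given row without
# first producing every row before it, short of the "oracle."

def step(row, rule=30):
    """Compute the next row of an elementary CA from the current one."""
    n = len(row)
    return [
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    ]

def row_at(initial, t, rule=30):
    """The only way to get row t is to walk the whole chain from row 0."""
    row = list(initial)
    for _ in range(t):
        row = step(row, rule)
    return row

if __name__ == "__main__":
    start = [0] * 31
    start[15] = 1  # a single live cell in the middle
    for t in range(8):
        print("".join("#" if c else "." for c in row_at(start, t)))
```

(Caching rows you have already computed helps, of course, but only by remembering work that was done in order; it never lets you skip ahead.)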
And yes, in Permutation City the “universes” were very CA-like in the way they were modeled. Traditional CA, actually, on a grid, much like the hexagonal grids used for turbulence simulations, where the state of each cell is updated based on all its neighbors. I’m not so sure this complexity in the model is really necessary: in other words, you don’t gain any expressive power in what you can model once you’ve got a “universal” CA that can express complexity, and those can be disturbingly simple, like a Turing machine.
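For what it’s worth, a grid CA with neighbor-driven, lockstep updates looks roughly like this. I’m using Conway’s Game of Life as a stand-in rule, since it’s the textbook example and also happens to be computation-universal; Egan’s automaton and the hexagonal lattice-gas models use different rules, but the structure is the same.

```python
# A rough sketch of a grid CA: every cell looks at its neighbors and all
# cells update in lockstep. The rule is Conway's Game of Life, used here
# purely as a stand-in for the grander automata discussed above.

def life_step(grid):
    """One synchronous update of a toroidal Game of Life grid (lists of 0/1)."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            live = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # Each cell's next state depends only on itself and its neighbors.
            new[r][c] = 1 if live == 3 or (live == 2 and grid[r][c]) else 0
    return new

if __name__ == "__main__":
    # A glider on an 8x8 torus: five live cells that crawl diagonally.
    grid = [[0] * 8 for _ in range(8)]
    for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        grid[r][c] = 1
    for _ in range(4):
        print("\n".join("".join("#" if x else "." for x in row) for row in grid), "\n")
        grid = life_step(grid)
```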
Wolfram doesn’t ask us to consider losing the causality of his simulations, but he does make us think about just what it might take to model causality. For example, in his causal network model, he likes to consider that at the smallest time scales, the choice of which nodes in the network to update next could be essentially random. This would correspond to the Heisenberg uncertainty (a sea of probability-generating wave functions) going on at the smallest time scales. He also asks us to consider that there may be no universal clock (we’re already kind of used to thinking that way, given relativity), but that there might be only one update going on in the universe at once. That is pretty damned weird. Causality then becomes implicit in the network of interconnects and in how the updater (finger of God?) can proceed from one node to the next.
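Here is a toy sketch of that idea, my own construction rather than Wolfram’s actual rewrite rules: a made-up network of nodes in which exactly one randomly chosen node updates per tick, and the “causal network” is just the bookkeeping of which earlier updates each new update actually read.

```python
# A toy sketch (my own construction, not Wolfram's actual model): one node
# is rewritten per "tick," chosen at random, and the causal network is the
# record of which earlier updates each new update depended on.

import random

def run(neighbors, values, steps, rule, seed=0):
    """neighbors: node -> list of adjacent nodes; values: node -> int.
    Returns causal edges as (step, node, steps_this_update_depends_on)."""
    rng = random.Random(seed)
    last_write = {node: None for node in values}  # which step last set each node
    causal_log = []
    for t in range(steps):
        node = rng.choice(sorted(values))         # the "finger of God" picks one node
        inputs = [node] + neighbors[node]
        values[node] = rule([values[n] for n in inputs])
        depends = sorted({last_write[n] for n in inputs if last_write[n] is not None})
        causal_log.append((t, node, depends))
        last_write[node] = t
    return causal_log

if __name__ == "__main__":
    nbrs = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
    vals = {"a": 1, "b": 0, "c": 1}
    for t, node, deps in run(nbrs, vals, 6, rule=lambda xs: sum(xs) % 2):
        print(f"update {t} at {node!r} causally follows updates {deps}")
```

Run it with different seeds and the absolute order of updates scrambles, but the dependency edges still form a perfectly sensible partial order, which, if I understand him, is roughly the sense in which causality is supposed to survive.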
I’m still just barely reading this part of his book so I may be misrepresenting it a bit.
(Less relevantly, IMO the author pushes the premise too far and makes the book collapse into a smoldering heap of caca by the end.)
I’m not quite sure I agree, although I will admit it gets more and more confusing, and every time I read it I have to figure out all over again what really happens to the main character. It gets weird when we find out he’s been a human simulating being a model simulating being a human, etc. Once he’s had himself “scanned” and launched in the simulation, he decides his human life can just end and that he will essentially “wake up” in the simulation. This gets very Star Trek transporter-ish. This book also seems to come out of the “trans-humanist” thinking in a lot of SF of the period… the idea that we can be downloaded, uploaded, extended, and freed from the human scale of time and space. That seems to have died down, and SF people are thinking more along the lines of the Viridian movement (i.e., let’s figure out what to do with what we really have, which is a lot, come to think of it).
One thing I am curious about (again, without having read the book) is whether the “the universe is a CA” approach has the benefit of what I will (probably improperly) refer to as abstraction. What I mean is, conventional physics lets us model the universe in ways that abstract away inconvenient details, so we can (attempt to) predict the outcomes of physical processes without actually having to carry out the processes. Think of it as a higher-level abstraction on top of the (stipulated) CA “machine code” of the universe. Biology and chemistry (hand waving wildly) are still higher-level abstractions, which we need because it’s intractable to predict (say) the behavior of even the simplest biological systems using only the tools of physics.
I’m not sure. His writing is kind of short on testable hypotheses. One of the features of CA models, I think, is that they can model complexity where the randomness is intrinsic, arising from the process itself rather than from the initial conditions or from noise injected by the environment. But the very nature of that means you can’t jump right to the solution. If the solution evolves in a complex way, you can’t even estimate it.
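A small illustration of the contrast between the last two paragraphs (mine, not from the book): for a falling rock, physics hands us a closed form we can jump straight to, while Rule 30, a trivial deterministic rule started from a trivial initial condition, produces a center column that looks statistically random and, as far as anyone knows, can only be obtained by running every step.

```python
# The contrast: classical physics as a shortcut, versus a CA you can only
# "solve" by running it.

G = 9.81  # m/s^2

def height_at(h0, t):
    """Physics as abstraction: jump straight to the answer in one step."""
    return h0 - 0.5 * G * t * t

def rule30_center_column(steps):
    """Intrinsic randomness: every bit requires grinding out another full row."""
    width = 2 * steps + 1            # wide enough that the edges never matter
    row = [0] * width
    row[width // 2] = 1              # a single live cell; no noise injected
    bits = []
    for _ in range(steps):
        bits.append(row[width // 2])
        row = [
            (30 >> (row[i - 1] * 4 + row[i] * 2 + row[(i + 1) % width])) & 1
            for i in range(width)
        ]
    return bits

print(height_at(100.0, 3.0))                        # closed form: about 55.855 meters
print("".join(map(str, rule30_center_column(64))))  # 64 rows of work for 64 bits
```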
It seems to me as though if you were to discover a set of fundamental CA rule(s) that the universe uses to run, you would have a fascinating scientific and philosophical curiosity, but not necessarily something with direct predictive power, because it seems axiomatic that a computer cannot run a complete simulation of itself faster than it, itself, runs.
Yep, one would think that. And I can’t figure out how you could possibly decide on the “fitness” of the model if you couldn’t start it with the same initial conditions as the universe and run it up to “now” and see if everything matches.
(In some cases of course a simulation which is slower and more expensive than the system being simulated is valuable, for example if the system being simulated is apt to be very dangerous. But I think in most cases this is not true.)
I dunno. I’m not sure how detailed a simulation has to be in order to model something useful about a phenomenon we care about.
It’s worth noting that Egan himself weighed in, a few years later, on some of the issues my friend and I discussed, most notably the “dust” theory. You can find Egan’s notes here. I’ll not try to summarize Egan’s thinking here, except to say that he admits that the idea of computing the states in the evolution of a cellular automaton out of order is problematic. He points out, though, that it might not be quite so problematic, or even noticeable, from the perspective of those being so modeled, who might not even notice fnord that it was happening. See also: the Urban Dictionary definition of “glitch in the matrix”.
Egan’s novel, Permutation City, remains one of the most interesting and thought-provoking works of the nineties, and I highly recommend it to anyone who loves to read books about ideas.
Ann Arbor, Michigan
November 13, 2002