Stephen Wolfram: in Print and in Person

Paul R. Potts

Wolfram is the creator of the Mathematica computer program and the author of A New Kind of Science. As a fan of science, old and new, a fan of Mathematica, and someone interested in cellular automata, physics, chemistry, and related subjects, I went to hear Wolfram speak about his new book. I can’t claim to have completed his book, but I read portions of it, and didn’t feel the need to complete it. It is still on my shelf.

I don’t think Wolfram has managed to come up with a simple cellular automaton that replaces most of the mathematics of modern physics. He, or someone else working with automata, still may, but I am a bit doubtful.

Oddly enough, I originally posted this to the “Stickwire” mailing list, an online community for Chapman Stick players. I played Stick for a time, but sadly it was one of the pieces of music gear that I had to sell during one of my periods of unemployment. I hope to play Stick again someday. This is not Stick-related, but it was not unusual for members to speak their minds on other topics.

Well, the lecture was better than I expected. I wound up taking a number of pages of notes. In particular, after reading much of his book, I had found his “principle of computational equivalence” to be completely incomprehensible. I couldn’t decide if he had discovered some interesting idea, convinced himself that he had discovered some interesting idea, or had basically taken someone else’s earlier idea, revised it, and was now presenting it as his big idea. After hearing him explain it from several angles I felt like I understood it much better. I think it is a fairly big idea, but I’m not sure it is all his, and I’m not sure it is true.

He has discovered a lot of neat stuff, and thought about it more than anyone else. I think there will even be applications, and not just in cryptography.

The Q&A part was interesting. A lot of people are quite peeved at him. I think physicists in particular feel there is no “meat” to his ideas, because there is no math behind their presentation, and he has not produced testable hypotheses with equations to test them. But that misunderstands, a little bit, what he claims to have done.

He really claims to have spent years and years developing an “intellectual framework” around “the world of simple programs” — that is, a wide array of small programs including L-systems, Turing machines, other rule-based rewriting systems, etc. Half of the people he was arguing with in the Q&A were having a hard time understanding what it means to make a model of something, as opposed to coming up with the ultimate cellular automata rule that the universe supposedly uses to calculate its own unfolding.

And this is the part where physicists’ hackles were going up: he claims he’s come up with ways in which some of these simple programs, in this case simple multivalent causal networks, could be used to model, say, spacetime. Some extremely simple properties of these networks can stand in as a much, much simpler way of modeling spacetime than, say, relativity’s modeling of gravity as curved spacetime, and complex, hard-to-understand pieces of traditional math, like Ricci tensors and Riemann tensors, can essentially be replaced by much simpler and easier-to-intuitively-understand structures.

Now, whether one’s mind can more easily grasp a discrete model or a continuous model rendered in equations: that may be a product of what kind of math you’ve been exposed to, but we are children of a world where computation rules. The mathematicians Hardy and Ramanujan weren’t. When they were working on some of their bizarre proofs in number theory and needed to actually solve the equations, it could take them weeks or months to generate results. We’re living in a time of much better tools for computation. But one of his more radical claims is that the rules that produce complex, realistic behavior in a model just don’t need to be complex themselves, and I think that’s true. It isn’t original, certainly, but he’s thought about it a lot.

And, speaking of better tools: yes, there exist tools other than Mathematica that can be used to model interesting results with cellular automata, L-systems, and other things. They’re called programming languages, and toolkits written on top of programming languages.

I myself have even written some: back in college, one summer I wrote a bunch of Think Pascal programs that iterated cellular automata. I produced a picture of “his” rule 30 cellular automaton and stared at it with some bafflement. I tried to prove the results were not due to bugs in my program, and generated more of them, running them until I was out of memory, and the strangeness did not go away. So did, I’m sure, a lot of other people. Years later I wrote a program in Dylan that generated L-systems based on rules that I could vary at will, to sort of explore the space of L-systems. A lot of people have worked on these kinds of programs, and even created some fairly subtle and sophisticated models with them. What I didn’t do, and many of them apparently didn’t do either, is exhaustively categorize so many of them and come up with proofs showing that these were not just buggy programs but were producing genuinely weird results: truly random in some cases, and at least complex in others.
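The kind of program I wrote that summer is short enough to sketch from memory. Here is a minimal version in Python (my own names and layout, obviously not the original Pascal): each new cell is determined by its old left, center, and right neighbors, treated as a three-bit index into the bits of the rule number.

```python
# A minimal elementary cellular automaton, here running Wolfram's
# rule 30 from a single "on" cell. Function names are my own.

def step(row, rule=30):
    """Apply an elementary CA rule to one row (a list of 0s and 1s).

    The (left, center, right) neighborhood forms a 3-bit index into
    the rule number's binary digits. Edges wrap around.
    """
    n = len(row)
    new = []
    for i in range(n):
        left, center, right = row[i - 1], row[i], row[(i + 1) % n]
        index = (left << 2) | (center << 1) | right
        new.append((rule >> index) & 1)
    return new

def run(width=31, generations=15, rule=30):
    """Print successive generations, starting from a single center cell."""
    row = [0] * width
    row[width // 2] = 1
    for _ in range(generations):
        print("".join("#" if cell else "." for cell in row))
        row = step(row, rule)

run()
```

A dozen generations of this is enough to see the strangeness I stared at: an orderly left edge, and an apparently random churn down the middle.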

Wolfram seemed to imply that before Mathematica, it was impossible to study these systems effectively. I’ll give him this: Mathematica is an effective tool for studying them. But it’s also a commercial product that costs over a thousand dollars for non-academic users! And he also should understand that writing programs in Mathematica to explore all these things and prove results about them is easy for him because he knows Mathematica really well. But it is a big program, and its language is idiosyncratic and highly irregular. For a beginner interested in studying these systems, another language might be far simpler and more effective. (A toolkit on top of a free GUI on Scheme, anyone? I could write it, anyone interested?)
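To illustrate the point that a general-purpose language is enough: a bare-bones L-system rewriter fits in a dozen lines. This sketch (in Python, with Lindenmayer’s classic “algae” rules, A → AB and B → A, as the sample rule table) is the core of the kind of program I wrote in Dylan; any rule table can be substituted to explore the space.

```python
# A minimal L-system rewriter. The rule table is Lindenmayer's
# original "algae" system; swap in any other rules to experiment.

def rewrite(axiom, rules, generations):
    """Rewrite every symbol of the string in parallel, repeatedly.

    Symbols with no rule are left unchanged, as in a standard
    deterministic context-free (D0L) system.
    """
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

algae = {"A": "AB", "B": "A"}
for n in range(6):
    print(n, rewrite("A", algae, n))
```

The string lengths grow as Fibonacci numbers, which is exactly the kind of simple-rules-to-structured-output behavior under discussion.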

Now, he’s not claiming that he’s come up with a set of cellular automata rules that can be run on a computer to model all physical phenomena. What he’s done is just what I said: come up with an intellectual framework for thinking about these simple programs and chosen some, and tuned them, so they could effectively be made to model the properties we understand spacetime to have. This is not as small a feat as his detractors are claiming, but also not as big an achievement as, say, popular science writers now claim he’s working on.

And, yeah, there was a lot about his presentation that peeved me a great deal. He presented mollusc shell patterns and leaf patterns as if he had discovered that there are simple rules that can create these patterns. He didn’t. There are two great books that predate his work considerably: The Algorithmic Beauty of Plants, by Prusinkiewicz and Lindenmayer, and The Algorithmic Beauty of Sea Shells, by Meinhardt. In them, the authors discover some amazing simple programs that can generate pretty much the whole range of plant growth patterns and seashell patterns and (this, I think, is pretty damned important) proceed to show how the real mechanisms, cell growth, propagation of activator/inhibitor hormones during growth, and so on, make the real plants and seashells actually form the patterns that these models describe. And there are no doubt many earlier publications that explore these ideas. This is not my field at all, but I’m aware of at least one classic: On Growth and Form, by D’Arcy Wentworth Thompson, first published in 1917.

Yet the press in particular pretty much parrots his statements about “his discoveries,” and when interviewed at his home office he showed an interviewer what the interviewer called “a smoking gun”: a mollusc shell bearing a Sierpinski-triangle cellular automaton pattern. As if Wolfram discovered this. He didn’t. In fact, I’ve got one, too. There are lots of them for sale at seashell stands all over the country. After reading those books I started looking for them when I went to tourist shops.

In general, he doesn’t acknowledge at all that he is working in a sea of intellectual endeavor in this area. He is perhaps doing the most theoretical work, and maybe the most important, but it really, really irritates me that he does not give proper credit to all the little people whose work he wraps up, and sometimes dismissively summarizes as unimportant compared to his own, to build up his “intellectual framework.”

Wolfram has only just occasionally begun to do what these people did. He talked briefly about how a cellular automaton model of snowflakes could capture latent heat. One audience member became quite agitated about this during the questions. Wolfram responded, correctly, that when making a model you should pick the particular aspects of a thing that you are interested in modeling. But what he didn’t understand was the audience member’s anger that Wolfram could appear to be claiming to understand snowflake formation because some of his little cellular rules could make little snowflake pictures. And I think that audience member’s frustration is very valid.

His claims to be able to explain the 2nd law of thermodynamics in action, irreversibility, time’s arrow, free will, and determinism… I’m not even going to touch on that.

Ann Arbor, Michigan
November 13, 2002

This work by Paul R. Potts is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. The CSS framework is stylize.css, Copyright © 2014 by Jack Crawford.