Announcing the S Combinator Challenge

Has something been hiding in plain sight for a century?

On December 7, 1920, Moses Schönfinkel introduced the S and K combinators—and in doing so provided the first explicit example of a system capable of what we now call universal computation. A hundred years later—as I prepared to celebrate the centenary of combinators—I decided it was time to try using modern computational methods to see what we could now learn about combinators. And in doing so, I found something I didn’t expect.

It’s already remarkable that S and K together yield universal computation. But my explorations led me to the conjecture that S alone may be sufficient. Or, to put it another way, that just applying the rule

S[f][g][x] → f[x][g[x]]

over and over again might be all that’s needed to do any computation that can be done.
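
To make this concrete, here’s a minimal sketch of what “applying the rule over and over again” can look like, written in plain Wolfram Language with s as an inert symbol. (Note that ReplaceAll rewrites every outermost match at each step, which is just one of several possible evaluation schemes, a subtlety I’ll come back to later.)

(* one step: apply the S rule wherever an outermost match is found *)
SStep[expr_] := expr /. s[f_][g_][x_] :> f[x][g[x]]

(* the first few steps, starting from a small S combinator expression *)
NestList[SStep, s[s][s][s[s]][s][s], 4]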

I don’t know for sure that this is true, though I’ve amassed empirical evidence that seems to point in that direction. And today I’m announcing a prize of $20,000 (yes, the “20” goes with the 1920 invention of combinators, and the 2020 making of my conjecture) for proving—or disproving—that the S combinator alone can support universal computation.

Why is this important? Obviously it’ll be neat if it turns out that an even simpler basis for universal computation than we ever knew has been hiding in plain sight for a century. But what’s more important is that determining whether the S combinator is universal will help us map out just where the threshold for universal computation lies.

Practical computers achieve universal computation with CPUs of considerable, carefully engineered complexity. But is that level of complexity really necessary? What I’ve found from exploring the computational universe over the past 40 years is that it is not: universal computation is actually common even in systems with very simple rules.

I’ve developed a very general principle that I call the Principle of Computational Equivalence, which implies that pretty much whenever we see behavior that isn’t in some sense obviously simple, it will actually correspond to a computation that’s equivalent in sophistication to any other computation. And one of the many consequences of this principle is that computation universality should be possible even in systems with very simple rules.

For more than 30 years I’ve been pushing to get explicit evidence about this, by proving (or disproving) that particular systems are computation universal. Two notable examples, both of which helped to validate the Principle of Computational Equivalence, are the rule 110 cellular automaton:

RulePlot[CellularAutomaton[110]]

and the 2,3 Turing machine:

RulePlot[TuringMachine[{596440, 2, 3}]]

Both these systems I identified as the simplest of their type that could conceivably be computation universal—and both I expected would actually be computation universal.

In both cases it took a lot of work to produce an explicit proof. In the first case, the proof appeared in my book A New Kind of Science (with much of the heavy lifting done by a then-research assistant of mine). In the second case, I’d already made the conjecture in A New Kind of Science, but in 2007 I decided to put up a prize for actually resolving it. And I was thrilled when, after just a few months, a proof was found and the prize was won.

So now I’m hoping something similar will happen in the S combinator case. I’m expecting that it will turn out to be computation universal, but it would perhaps be even more interesting if it was proved not to be.

In the past one might have viewed proving that a simple system is (or is not) computation universal as a curiosity—something like solving a specific mathematical puzzle. But the Principle of Computational Equivalence—and all the science I’ve done as a result of exploring the computational universe—now places such a question in a broader context, and shows that far from being a curiosity, it’s actually quite central.

For example, along with computation universality come computational irreducibility and undecidability, concepts with both fundamental and practical consequences for our ability to predict what a system will do and to answer questions about it.

Identifying the threshold for computation universality has direct implications for making molecular-scale computers.

The Basic Setup

The usual setup with the S, K combinators is as follows. One constructs an initial combinator expression that encodes the computation one wants to do, then applies the combinator rules over and over again until a fixed point is reached, which can be taken to mean that the computation has completed. Of course, not all computations halt, and for computations that don’t halt, no fixed point will ever be reached.
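
As a concrete (if minimal) illustration of this setup, here is a sketch using inert symbols s and k and ordinary pattern-matching rules. It assumes the evolution in question actually halts; if it doesn’t, FixedPoint will simply never return.

(* the standard S and K rules, as replacements on inert symbols s and k *)
SKRules = {s[x_][y_][z_] :> x[z][y[z]], k[x_][y_] :> x};

(* apply the rules over and over until nothing changes, i.e. a fixed point *)
SKFixedPoint[expr_] := FixedPoint[ReplaceAll[SKRules], expr]

(* for example, s[k][k] acts as the identity: s[k][k][a] reduces to a *)
SKFixedPoint[s[k][k][a]]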

With the S combinator alone a direct input → output representation of computations won’t work. This is because there is an elaborate, but finite, characterization of every S combinator expression that halts, and the existence of this characterization implies that halting S combinator evolutions don’t have enough richness to represent arbitrary computations.

There are, though, many non-halting S combinator evolutions. Here’s the very simplest of them:

ResourceFunction["CombinatorEvolutionPlot"][
 Text[ResourceFunction["CombinatorPlot"][
     ReplaceAll[#, {s -> CombinatorS, k -> CombinatorK}], 
     "CharactersLeftAssociative", "SKGlyphs" -> {s, k}, 
     "ApplicationGlyph" -> "\[NegativeVeryThinSpace]"]] & /@ 
ResourceFunction["CombinatorEvolveList"][s[s][s][s[s]][s][s], 8, 
   "SKGlyphs" -> {s, k}], "StatesDisplay"]

And here are a few examples of how the sizes of expressions change in various such evolutions:

Grid[Partition[
  Labeled[ListStepPlot[
      Differences[
       ResourceFunction["SKCombinatorLeftmostOutermostLeafCounts"][#, 
        1000, "SKGlyphs" -> {s, k}]], ImageSize -> 130, 
      PlotRange -> All, Frame -> True, FrameTicks -> None, 
      Filling -> Axis, FillingStyle -> Hue[0.56, 0.02, 0.95], 
      PlotStyle -> Hue[0.56, 0.68, 0.71]], 
     Text[Style[ResourceFunction["CombinatorTraditionalForm"][#], 
       12]]] & /@ {s[s[s[s[s]][s]]][s][s], s[s][s][s[s[s]]][s[s]], 
    s[s][s][s[s[s[s][s]]]][s], s[s[s[s]][s[s[s]][s]]][s][s], 
    s[s][s][s][s][s][s[s[s]]][s], s[s][s][s[s[s[s][s]][s]]][s], 
    s[s[s][s][s[s]]][s[s][s]][s], s[s][s][s[s[s]]][s[s]][s[s]], 
    s[s][s][s][s[s][s]][s][s][s][s], 
    s[s][s][s[s[s[s[s]][s]][s]]][s], 
    s[s][s][s[s][s[s[s[s][s]]]]][s], 
    s[s[s[s]][s][s]][s[s[s]][s]][s]}, 4]]

And potentially it’s possible to do computations by “piggy-backing” on these non-halting evolutions. The idea is to encode the computation one wants to perform in the initial condition of one of the infinitely many non-halting combinator evolutions, then to run the combinator evolution until some particular feature is detected, and finally to decode the output of the computation from what one sees.

To prove that the S combinator can support universal computation, one must show that the S combinator system can emulate some other system (say the S, K combinator system) that is already known to be universal. And by “emulate”, what we mean here is that a suitable “encoder”, “detector” and “decoder” can be found that allow any evolution in the system being emulated to be mapped to a corresponding evolution in the S combinator system.

Crucially, the encoder, detector and decoder must not themselves be capable of universal computation. If one’s lucky, they will operate in sufficiently straightforward ways that it’ll be obvious they’re not universal. But in general it’ll be nontrivial to demonstrate this; depending on the setup, one approach might be to show that for any given input they’re guaranteed to halt within a particular time.
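
Schematically, and purely as a hypothetical sketch, the overall emulation pipeline might look something like the following, where encode, detect and decode are placeholder functions that an actual submission would have to supply (they are not part of any existing solution), and where each of them would itself have to be shown not to be doing universal computation:

(* hypothetical pipeline: run the S rule until the supplied detector fires *)
EmulateWithS[encode_, detect_, decode_][program_, input_, maxSteps_] :=
 Module[{state = encode[program, input], t = 0},
  While[t < maxSteps && detect[state] =!= True,
   state = state /. s[f_][g_][x_] :> f[x][g[x]]; t++];
  If[detect[state] === True, decode[state], Missing["NotDetected"]]]

The substantive content of a proof would then lie both in showing that such a pipeline faithfully reproduces the system being emulated, and in showing that the encoder, detector and decoder are computationally weak enough not to be doing the real work themselves.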

Okay, but what if the S combinator system isn’t computation universal? How could one prove that? One way would be to show that all evolutions, even infinite ones, can take on only a finite, or essentially finite, set of distinct forms. Another would be to establish that any “modulation” of such an evolution must always die out in finite time, so that it in effect corresponds to a computation that halts.

A slightly different approach would be to show that S combinator evolution can be “solved” to the point where its properties after any number of steps can be determined by some bounded computation.

We’ve been talking about “doing computations” with combinators. But there’s a subtlety: there are generally many different possible evaluation orders for a combinator evolution. If the evolution halts, it’s guaranteed to reach the same fixed point, regardless of the order in which the evaluation was done. But if it doesn’t halt, there’s in general a whole multiway graph of possible sequences of results.

So what does computation universality mean then? As I’ve discussed elsewhere, one strong definition is to ask whether the multiway system generated by the combinator can effectively emulate the multiway system generated, for example, by a multiway or nondeterministic Turing machine. Another definition is to ask whether there is some path (i.e. some evaluation order) that can perform any given (deterministic) computation. Or one might, for example, have detector and decoder functions that can work on some—or all—branches, somehow always being able to reproduce the computation one wants.
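
To see the branching concretely, here is a minimal sketch, again using only ordinary pattern matching, that finds every position at which the S rule can currently be applied (including positions inside heads) and performs one reduction at each, giving the possible successor expressions on different branches:

(* all single-step reductions of an S combinator expression *)
SOneStepReductions[expr_] :=
 DeleteDuplicates[
  ReplacePart[expr, 
     # -> (Extract[expr, #] /. s[f_][g_][x_] :> f[x][g[x]])] & /@ 
   Position[expr, s[_][_][_]]]

(* an expression with two reducible subexpressions yields two branches *)
SOneStepReductions[s[s[s][s][s]][s[s][s][s]]]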

Operation of the S Combinator Challenge

I don’t know how long it’s going to take, or how hard it’s going to be, but I’m hoping that someday a message will arrive that successfully resolves the S Combinator Challenge. What will the message contain?

My favorite case is specific Wolfram Language code that implements a solution. If the S combinator is universal, this might be a “compiler” that takes, say, a Turing machine or cellular automaton specification, and generates an S combinator initial condition, together with code for the “detector” and “decoder”. But how will we validate that this code is “correct”?

It might conceivably be simple enough to be amenable to direct human analysis. But more likely it’s going to require some level of automated analysis. At first one might just do explicit testing. But then one might try to generate an automated proof that, for example, the detector and decoder always halt, perhaps even determining a maximum time this can take.

If the S combinator is not universal, one might show this by having code that can “jump ahead” by any number of steps and explicitly determine the outcome, but that itself always halts. Or maybe one might have code that effectively implements what happens to any “modulation” of S combinator evolution—and then prove that this code leads to behavior that, say, always halts, or otherwise ultimately shows complete regularity.

But maybe one won’t have explicit code that implements a solution, and instead one will only have an abstract proof of what’s possible. This could be in the form of a standard, human-oriented mathematical proof. But I think it’s more likely it will be complicated enough that it has to be presented in a systematic computational way. And in the end, if it’s a proof it’ll have to start from certain axioms.

What axioms might be needed? One would hope that the “standard axioms of mathematics” will be enough, perhaps even just the Peano axioms. But I wouldn’t be surprised if transfinite induction were needed, requiring the axioms of set theory. And there could be stranger cases still, such as a proof being possible with one assumption about the Continuum Hypothesis but not with the other.

Other strange things could happen too. It might, for example, be possible to show that a certain structure can be created by running the combinator evolution for an infinite time, without knowing whether any finite evolution can produce it. Or it could be that a construction only works if the initial condition itself contains an infinite tree (characterized, say, by a transfinite number).

It’s also conceivable that the behavior of the S combinator could correspond to an “intermediate degree” of computational capability—and have undecidable features, but not be universal. I don’t think this will happen, but if it does, I would consider it a resolution of the S Combinator Challenge.

And then, of course, there’s the unexpected. In my experience of exploring the computational universe over the past 40 years, that’s something one encounters with remarkable frequency. A corner case one didn’t anticipate. An intermediate situation that didn’t seem possible. Unimaginable behavior. These are often the most fascinating things you can find. But inevitably they make giving a cut-and-dried definition of what’s needed for the S Combinator Challenge difficult.

It’s worth remembering, though, that the S Combinator Challenge is about answering the human-stated question “Is the S combinator computation universal?”. It may take humans to decide if a particular solution answers that question.

The S Combinator Challenge is in a sense already a 100-year-old story. But I’m certainly hoping that it’ll be resolved in far less than 100 more years. And that in doing so we’ll learn important things about the remarkable world of combinators, and the computational universe in general.
