34-byte Universal Machine
N. Megill writes: "Computer scientist and obfuscated code aficionado John Tromp has devised what may be the world's most compact Universal Machine (Postscript research paper) to date. Written in the 'S-K combinatory logic' language, which has only 2 commands (S and K), his UM can be encoded with only 272 bits (34 bytes), compared to 5495 bits for the Universal Turing Machine given in Roger Penrose's book The Emperor's New Mind."
Real computer scientists don't write code. (Score:2)
Ummm.... Plain English translation? (Score:1)
Re:Ummm.... Plain English translation? (Score:3, Interesting)
Re:Ummm.... Plain English translation? (Score:5, Interesting)
And a Turing machine is a state machine whose only storage (beyond "what state am I?") and I/O is a sequential tape. The machine can read a symbol from the tape and then act based on its current state -- actions that include overwriting the symbol, moving forwards or backwards on the tape, and changing state.
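To make that concrete, here's a rough sketch of a single step of such a machine in Haskell (the types and the one example rule are made up for illustration, not taken from the article):

    import qualified Data.Map as Map

    -- Hypothetical toy types, just to make "read symbol, act on state" concrete.
    data Move = L | R | Stay deriving Show
    type State  = Int
    type Symbol = Char
    type Tape   = Map.Map Integer Symbol                        -- unwritten cells read as '_'
    type Rules  = Map.Map (State, Symbol) (State, Symbol, Move)

    step :: Rules -> (State, Integer, Tape) -> Maybe (State, Integer, Tape)
    step rules (q, pos, tape) = do
        let sym = Map.findWithDefault '_' pos tape              -- read the tape
        (q', sym', mv) <- Map.lookup (q, sym) rules             -- act based on current state
        let pos' = case mv of L -> pos - 1; R -> pos + 1; Stay -> pos
        return (q', pos', Map.insert pos sym' tape)             -- overwrite, move, change state

    main :: IO ()
    main = print (step (Map.fromList [((0, '_'), (1, '1', R))]) (0, 0, Map.empty))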
Re:Ummm.... Plain English translation? (Score:2, Funny)
Funny, i'm sure they said ``plain english translation''
Re:Ummm.... Plain English translation? (Score:5, Informative)
Layman's example - It's the concept that you could theoretically rework the Quake3 engine to run using the 8088's capabilities. (I say theoretically because some Slashdot nitpicker is inevitably going to try to karma whore by pointing out that 8088s could only use 16 bits of memory addresses, and you'd need way more.) The point is that given the 8088's instruction set, it's theoretically possible to emulate a 32-bit protected mode, and graphics, and so on and so on until you had one frame every hour of Quake 3. Now, try a Z80. Back on down the chain... what is the most minimal machine possible that can still run a translated version of today's software?
One of the major ideas in Computer Science is what sorts of questions and problems are difficult, or absolutely downright impossible, for a given type of machine to solve. In particular we like to talk about Turing machines, which are damn minimal.
(There are problems that Turing machines can't solve - the classic one is the halting problem. Can you write a program that can look at any other program (not just most programs, ANY program in the infinite universe, including itself) and say whether it'll lock up or not? With a Turing-equivalent machine, it's fundamentally impossible to do that - not just improbable or hard-to-implement, but impossible. The solution is to create a machine that's more capable than a Turing-equivalent machine, but that's rather tricky with macroscopic physics.)
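(For the curious, the impossibility comes from a self-reference trick. Here is a non-runnable sketch in Haskell, purely illustrative and not from the parent post: pretend someone hands you a perfect halting checker, then build a program that contradicts it.)

    -- Purely hypothetical: suppose a perfect halting checker existed.
    halts :: (Integer -> Integer) -> Integer -> Bool
    halts = undefined   -- no such function can actually exist; that's the point

    -- Then this program contradicts it: it loops exactly when 'halts' says it halts.
    troll :: Integer -> Integer
    troll n = if halts troll n then troll n else 0
    -- Whatever 'halts troll n' answers is wrong, so a total, correct 'halts' is impossible.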
This is just a very rough intro - the point of the article is that this guy came up with a CFG (Context-Free Grammar) with only two rules that apparently is capable of arbitrary computation of functions, which is damn cool. The site is slashdotted, so I've only read the top HTML page, not the Postscript paper.
Re:Ummm.... Plain English translation? (Score:2)
When you say,
(There are problems that Turing machines can't solve - the classic one is the halting problem. Can you write a program that can look at any other program (not just most programs, ANY program in the infinite universe, including itself) and say whether it'll lock up or not? With a Turing-equivalent machine, it's fundamentally impossible to do that - not just improbable or hard-to-implement, but impossible. The solution is to create a machine that's more capable than a Turing-equivalent machine, but that's rather tricky with macroscopic physics.)
it seems as if you're implying that a quantum computer would be more than Turing complete, and/or might be able to solve the Halting problem. This is wrong. It *may* be the case (AFAIK this hasn't been proven yet) that a quantum computer of arbitrary size could be used to solve all problems in NP in polynomial time/space in the length of the inputs (at least, if you don't believe the many-worlds interpretation of quantum theory) even if NP != P. In other words, a quantum computer of arbitrary size may be able to serve as the oracle that you need for the naive way to solve any problem in NP in polynomial time, even if a Turing machine could never solve NP-Complete problems in poly time in the length of the inputs.
This is *not* the same as being able to solve an incomputable problem--i.e. those which a Turing machine may take infinite time to solve (the original example being the Halting problem)--even though it would represent the first greater-than-polynomial improvement on the Turing machine yet. (Assuming that a quantum computer is, in fact, equivalent to a nondeterministic Turing machine.)
Re:Ummm.... Plain English translation? (Score:2)
I haven't given it much thought at this juncture, but it seems to me that one could reduce a Turing machine program to a unitary matrix, and then supply the Hadamard operator as input (i.e. a superposition of all possible inputs, equally likely) - thus evaluating the program on all possible inputs at once. Along these lines, if the resulting quantum state wasn't representable as a tensor product of basis states, the program would halt.
The above probably wouldn't work, but at least I'm thinking it over.
Re:Ummm.... Plain English translation? (Score:2)
The Turing machine is not just a state machine. One thing you neglect is the dependency on input data.
A Turing machine is an abstract CPU together with a program which reads a finite number of strings as input and has a string as return value: String f(String s1, String s2, ..., String s_k)
What you describe might or might not decide if f("hi") halts, but certainly not f(s) (s some arbitrary String) in general, where you have to do your simulation for every possible s, of which there are infinitely many different ones.
Re:Ummm.... Plain English translation? (Score:2)
Well, since you brought it up...
The 8088 could actually address 20 bits of memory (1MB), a relatively impressive feat at the time, though you had to use that goofy addressing technique with segment:offset registers and 64K pages. Of course, if 1MB was not enough you could always swap stuff out to slower storage (e.g. disk) -- a technique that I became altogether too familiar with in the Z80 days when machines had 64K of *total* RAM that was expected to accommodate your OS, code, and data all at once.
Anyway, I am just nitpicking because it's the Slashdot way. I think your explanation was actually quite informative. (And I'm not whoring for karma either -- hit the max a long time ago.)
Re:Ummm.... Plain English translation? (Score:2)
How? The machine returns its answer when it stops -- it returns one value if the program ends normally or another value if it crashes. If you feed it a program that runs forever (doesn't crash, doesn't end), it'll never be able to determine a correct answer. How does using multiple machines change this?
Re:Ummm.... Plain English translation? (Score:2)
There's an interesting example that one of my professors pointed out in class - a kind of problem for which being multithreaded (not necessarily multiprocessor, just an environment in which you can spawn threads, are guaranteed bounded waiting for your next turn at the processor, and can kill off threads) actually buys you the ability to solve more problems.
Consider the problem of reducing Boolean expressions to their truth values. Now add a twist: in addition to the values T and F, you introduce a value B (bottom). Trying to compute the truth value of B results in an infinite loop, so, for instance, a program that tries to compute the truth value of the expression "T AND B" never returns a result. This is the correct behavior for such a program; neither true nor false is the right answer, so the program goes into an infinite loop.
The interesting thing is when you consider boolean short-circuiting. What's the value of "F AND B"? If you're using a short-circuiting AND operator, you see the F, decide not to bother with the right hand side, and return F as the value of the expression.
But now order matters: "F AND B" returns F, but "B AND F" never returns. This doesn't seem to make sense; in both cases you'd like to return F. Boolean operators should be commutative, even in this new world where you might randomly blow up upon encountering the value B.
The solution is to implement your AND reducer as a function that spawns two threads which are handed each side of the expression. It returns T if both threads return T; if either thread returns F, the other thread is killed off and F is returned. A similar trick is performed for the OR reducer.
The result is that you can now compute the truth values for all expressions that have Bs only in parts of the expression which are not necessary to compute due to short-circuiting. A straightforward, single-threaded approach will go into infinite loops for some inputs that this method will find a result for.
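Roughly, the AND reducer described above looks like this (a Haskell sketch using plain forkIO and MVars; the names bottom and parAnd are mine, not from the post):

    import Control.Concurrent
    import Control.Monad (forever)

    -- Stands in for the value B: a computation that never returns.
    bottom :: IO Bool
    bottom = forever (threadDelay 1000000)

    -- Spawn both sides; answer F as soon as either side says F,
    -- answer T only once both sides have said T.
    parAnd :: IO Bool -> IO Bool -> IO Bool
    parAnd left right = do
        result <- newEmptyMVar
        trues  <- newMVar (0 :: Int)
        let worker side = forkIO $ do
                v <- side
                if v
                  then modifyMVar_ trues $ \n -> do
                         let n' = n + 1
                         if n' == 2 then putMVar result True else return ()
                         return n'
                  else putMVar result False       -- short-circuit: one F decides the answer
        t1 <- worker left
        t2 <- worker right
        answer <- takeMVar result                 -- first decisive answer wins
        mapM_ killThread [t1, t2]                 -- stop whatever is still running
        return answer

    main :: IO ()
    main = do
        r <- parAnd (return False) bottom         -- "F AND B" now answers F instead of hanging
        print r

Swap the arguments (parAnd bottom (return False)) and it still answers F, which is exactly the commutativity the parent wants.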
Re:The Mind of the Machine. (Score:2)
Re:Ummm.... Plain English translation? (Score:2, Informative)
The link seems to be slashdotted, so I can't see whether he has come up with a way to make a UTM using fewer states than previous ones, or just a way to encode one in fewer bits.
I think I heard somewhere that there is a correlation between the optimal number of states in a UTM and the number of states a human can actually keep track of unaided by a machine. That's food for thought as to whether humans can do anything that is "extra-computational." That is, whether we are more powerful than Turing machines.
Re:Ummm.... Plain English translation? (Score:2)
Re:Ummm.... Plain English translation? (Score:2)
Re:Ummm.... Plain English translation? (Score:2)
The equation that I wish to solve has a trillion arguments: f(x1, x2, x3, ...)
The Turing machine sets up the arguments on the tape.
It is quite clear that a Sinclair is not a Turing machine. Turing machines are infinite, Sinclairs are finite.
Re:In simple terms yes.. (Score:2)
Re:In simple terms yes.. (Score:2)
Then you're not talking about a Sinclair. You're effectively saying that the combination of a tape feeder monkey AND a Sinclair is a Turing machine. That's quite a bit different than just a Sinclair.
A Turing machine doesn't need to hold an initial set, that's on the tape
The tape is defined as part of the Turing machine. A Turing machine without a tape is just an automaton.
I'm not sure that I'm getting this across.
Yes, you are. You're adding all sorts of hardware to the Sinclair to make it more capable, but remember that even a Sinclair with a googolplex bytes of memory is infinitely smaller than the tape on a Turing machine.
Let me help you out:
Given that you have a Sinclair programmed to emulate JUST the read/write head, you also need to have an INFINITELY long mag tape to store the initial program/dataset and hold the result of the computation.
Only that will make a machine equal to a Turing machine, nothing else. A Sinclair by itself is not equivalent. That infinite tape is very important.
To give you an insight into the difference, let's consider the halting problem. It's impossible to write a program that can determine if a Turing machine will ever stop running. BUT, I can easily imagine a program that can tell you if a Sinclair will ever stop running.
Step by step:
1) A Sinclair has a finite memory (1K?) and a finite number of processor registers and states. If one desired, one could enumerate them all, from 1 to N.
2) Now, we know how many states a Sinclair has. We also know how quickly the machine changes states (the clock speed).
3) The only other thing that we need is a computer larger than the Sinclair, large enough to hold all possible states of the Sinclair. That larger computer has access to the complete state of the Sinclair computer.
4) We load the Sinclair with our mystery program. Does it stop or not? We will soon know the answer.
5) At every clock cycle, the larger computer stores away the complete state of the Sinclair computer.
6) The larger computer also checks the just stored state of the Sinclair computer against all the past states that have already been observed. If there is a match, that means that the Sinclair is in exactly the same state that it was in sometime during the past. Therefore, the Sinclair is in an infinite loop and will never stop.
7) We know how many possible states the Sinclair could be in, and we also know the clock speed of the Sinclair. Therefore, we know the maximum time the Sinclair can run without ever duplicating a state of the machine.
8) So, with the information from #7, we can positively say that if a duplicate state is not detected (as in #6) then the Sinclair's program will eventually halt.
So, there's the proof. The halting problem is impossible to solve on a Turing machine. I have just solved it for a Sinclair, therefore, a Sinclair is not a Turing machine.
Add the infinite tape to the Sinclair and it WILL be a Turing machine. (If you can find an infinite tape I will buy it from you for a million dollars.)
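The argument in steps 1-8 is basically a cycle detector over the machine's (finite) state space. A rough Haskell sketch, with a made-up toy machine standing in for the Sinclair:

    import qualified Data.Set as Set

    -- 'step' advances the complete machine state one clock tick; 'halted' says
    -- whether the machine has stopped.  Both are stand-ins, not a real emulator.
    data Outcome = Halts | LoopsForever deriving Show

    decideHalting :: Ord s => (s -> s) -> (s -> Bool) -> s -> Outcome
    decideHalting step halted = go Set.empty
      where
        go seen s
          | halted s            = Halts          -- the program stopped
          | s `Set.member` seen = LoopsForever   -- exact same state seen before: infinite loop
          | otherwise           = go (Set.insert s seen) (step s)

    main :: IO ()
    main = do
        -- toy machine with states 0..9: add 3 mod 10 each tick, halt on state 5
        print (decideHalting (\n -> (n + 3) `mod` 10) (== (5 :: Int)) 0)

Because the state space is finite, the search always terminates, which is exactly why this works for a Sinclair and not for a machine with an infinite tape.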
The image of the code is a nice touch... (Score:1)
Right... (Score:5, Funny)
of size 46
head reduces in 53 steps to S(S(K(S(SKK)))KK)(K(SKK(S(K(S(*K)))K)(SKK(S(K(*K)
outputs 16 bits "0000000000000000"
pfffffttt... Well duh. Anybody could see that.
(</sarcasm>)
Re:Crash the VM..... (Score:2, Funny)
Enter a combination or a definition.
Names are single characters; whitespace is ignored
An empty definition like "P=" undefines a name
Predefined names are
Y ? U S $ P K I 1 0
Example inputs are: Sxyz, U"I110", 2fx=f(fx), Z=222(P0), U"Z"
Sxyz
of size 4
head reduces in 1 steps to xz(yz) of size 4
xy=x
defines x as SSK(S(K(*))K)K of size 13
222(P0)$
of size 25
head reduces in 0 steps to 222(S(*)(KK)K)(KK) of size 25
"Z"
java.lang.RuntimeException: Variable can't toBinaryString()
Re:Crash the VM..... (Score:3, Informative)
Re:Crash the VM..... (Score:2)
Yes, you're probably right- that does make a lot of sense. Mozilla doesn't have this problem on either OS. Except this version of IE was only updated last year and I downloaded Sun's new 1.4 J2SE SDK last week.
I suspect it's a combination of Microsoft playing games with the rendering components, and Sun being too busy coming up with new APIs instead of fixing the ones people are already using.
That's what leadership is about (Score:1)
This proves it: emperors and other leaders prefer using short commands.
I've been doing the same thing for years..... (Score:2)
From the article....
I=SKK
Y=SSK(S(K(SS(S(SSK))))K)
Pxyz=zxy
0xy=x
1xy=y
?xy=1
$x=0
Maybe I'm a bit tired but doesn't this sound a little Orwellian to all of you? "2+2=5"
Hey (Score:5, Funny)
Universal Machine == Universal Turing Machine? (Score:1, Informative)
Maybe someone can brush me up on my theory: is a Universal Machine a Turing Machine?
Re:Universal Machine == Universal Turing Machine? (Score:4, Informative)
The idea is that a universal machine is one that any algorithm can be executed on. Of course, this depends on what you call an algorithm, but most reasonable definitions are equivalent here.
I think of a Universal Turing Machine as a specific implementation that meets this requirement. Specifically, a state machine coupled to an infinite tape. There are plenty of other viable machines: infinite register machines, C code with infinite memory, etc.
Re:Universal Machine == Universal Turing Machine? (Score:2)
34 byte microkernel operating system? (Score:2, Funny)
Can this Universal Machine be used as the ULTIMATE microkernel for an operating system? Imagine an implementation of Linux running on a 34 byte picokernel!
Re:34 byte microkernel operating system? (Score:2)
For those who are interested, the actual link to the Smart Dust software project is Tiny OS: An operating system for Networked Sensors [berkeley.edu].
Re:34 byte microkernel operating system? (Score:3, Funny)
Imagine a BEOWULF CLUSTER of these babies!
Ok, I take that back: Actually, you could implement this thing with, what, maybe 1000 transistors? It's the ultimate RISC CPU! You could put one of them on every column in your DRAMs, right on the silicon, so that latency to memory would be nigh zero. The resulting machine would be just like a CM-1, with no router. Add a router, and *bang* you've got the ultimate fine-grained MPP.
Re:34 byte microkernel operating system? (Score:2)
ObYouKnowWhat (Score:1, Redundant)
Nice result (Score:4, Insightful)
Re:Nice result (Score:2)
In the words of a good friend, "It's all just ones and zeros."
-Puk, half-joking
Re:Nice result (Score:2)
And in the words of somebody undoubtedly very wise, "I ain't payin' that kind of money for a bunch o' ones and zeros, no matter how much time y'all spent puttin' 'em in just the right order."
Re:Nice result (Score:2)
Well, then we're all in trouble [theonion.com].
Re:Nice result (Score:2)
Google Cache (Score:2, Informative)
Google Cached copy (Score:1)
The Null Command... (Score:5, Funny)
ms
I can beat 34 bytes. (Score:4, Funny)
Hah. 14 bytes.
15 counting the null terminator, but that is neither here nor there!
Re:I can beat 34 bytes. (Score:2)
Heheheh, you forgot the word "universal", so it's 24 or 25 bytes depending on whether or not you are counting the null terminator (since a universal turing machine is a specific type of turing machine).
You can try this out on your own (Score:3, Informative)
Re:You can try this out on your own (Score:2)
Hehe, call/cc (call-with-current-continuation, for those who like long names, or don't know what it is; it's like setjmp/longjmp in C, but a continuation can be "jumped to" several times, from anywhere, even after the function in which it was created (the setjmp, in C terms) has returned (once or several times)) is a really nice feature, with which you can implement both exceptions and threading...
Perhaps I should take a look at that unlambda language. I read about it some time ago, but I don't really have time for an intensive night of hacking/reading atm...
SK reducing hardware (Score:3, Interesting)
It was (as I recall) built out of around 100 TTL chips on two cards all using verowire technology (yuk). My responsibility was to write the microcode that directly executed the various combinators. We ended up supporting around 20 operators, starting out with S, K, I, and then progressing through B, Y, and some simple numeric and comparison operators. The garbage collector was written in one (long) night as the result of a bet!
It worked remarkably well considering the date (1979-1980). Unfortunately, I couldn't find a copy of the paper that describes it online anywhere.
One of the cool things about SKIM was that you could enter infinite programs, and since they were evaluated lazily, things just worked. For example, you could define a function that returned the infinite list of prime numbers. Actually what it returned was a code fragment that evaluated the list, and as the caller needed those values, the list would be evaluated, and the code fragment pushed backwards down the list.
We never thought of building a UTM - now that it has been done, it seems obvious!
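(For the curious: roughly what such a lazily-evaluated infinite prime list looks like in a modern lazy language. This is the classic toy sieve for illustration, not SKIM's actual code.)

    -- The whole infinite list is never computed, only the prefix the caller demands.
    primes :: [Integer]
    primes = sieve [2 ..]
      where
        sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]
        sieve []     = []

    main :: IO ()
    main = print (take 10 primes)   -- forces only the first 10 primes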
Re:SK reducing hardware (Score:2)
arch/design/impl? I'm really getting hot about the idea of hanging one of these off of each column of a DRAM. Mail me at alk at pobox dot com, if you wish.
Re:SK reducing hardware (Score:2)
I don't know about the physical structure, but have a read of Simon Peyton-Jones' book The Implementation of Functional Programming Languages. There will be some references in the chapter on S-K combinators.
I can beat that. (Score:4, Funny)
--- Begin program
1
--- End program
See, wasn't that easy? I should note, though, that this is the second version of my program. The first version is still at large in the greater Manhattan area.
Re:I can beat that. (Score:2)
"Don't make me call out the flying monkeys"
Re:I can beat that. (Score:3, Funny)
Ok well here's my contribution:
--- Begin program
0 0 0 0 0 0 0 0 0 0 0
--- End program
Fly my pretties! Fly!
Re:I can beat that. (Score:2)
--- Begin program
1
--- End program
but can you prove that your program terminates?
Re:I can beat that. (Score:2)
That's what the rabid flying monkeys are for. Duh.
Re:I can beat that. (Score:2)
Hey, I can beat that. I'll name that language in 1 BIT.
It's in a language I call "WolfWithoutAClausium." In this language, a 1 is interpreted as a universal machine. A 0 is a rabid flying monkey. OK, here's my code:
--- Begin program
1
--- End program
Re:I can beat that. (Score:2)
The other seven bits, incidentally, were my secret backdoor for any programs written in "MWoodylazium." I mean... Uh... Look! Monkey!
Re:I can beat that. (Score:2)
I can see at least sqrt(-1) problems with that.
Re:I can beat that. (Score:2)
I admit I'm a little dim, though. Could you show me, say, the traditional palindrome detector in your language?
(Note to moderators: This is sarcasm. Note to MWoody: Pay more attention in your theory class. If you've even taken any.)
Re:I can beat that. (Score:2)
And I switched majors from CSC to English the quarter before I was supposed to take Theory of Computing. Maybe it was for the best. ^_^
Re:I can beat that. (Score:4, Insightful)
This also has interesting implications for DeCSS. Is the bit 1 a circumvention device? Then 0 must be too, since we can just invert the rule.
Re:I can beat that. (Score:2)
No, he's incorrect. The encoded UTM must accept other Turing Machines in the same encoding. Since the specified language has only one string, which is the claimed UTM, it can accept precisely zero other TMs as input to run; it's absolutely incapable because they can't even be expressed in that language.
Therefore, it's not a UTM. It's a Nothing-TM. It can't do anything; it's too constrained by the language.
Even if you extend the language, you'll find it does odd (and provably odd) things to the rest of the language, rendering it impossible to work with using traditional techniques. (Is that 1 a "1", or is it the UTM? Who can tell? In a proof, you need to know.)
Come on, think about it for a moment. If it were that easy, why would this news article even exist? The guy who put together the 34-byte version isn't stupid; if it were this easy to get down to 1, why would he have bothered?
Re:Er (Score:2)
Oh, I'm sorry, am I dissing other people's opinion on this mathematical issue? How non-post-modern and non-PC of me. If all these people think it, despite clearly not understanding what's going on at all, then it can't help but be mathematical proof, no?
Re:Er (Score:2)
Secondly, I thought SK was functional and the Turing machine is imperative? Non-trivial paradigm mismatch here?
Re:Er (Score:2)
(Heck, a non-trivial number of the grad students find this stuff extremely challenging, and curse those of us who find it comprehensible. (Not easy; I don't know anyone who finds this stuff "easy". But comprehensible.))
TM's are really neither functional nor imperative, nor OO or anything else. It's the bottom, the foundation upon which those paradigms are built. (Remember, there's really no such thing as a "functional" language in the sense that the language requires functional constructs, and rejects all else. It's just that there are languages that encourage functional programming techniques. As long as the language is Turing Complete, you could always implement a TM on top of the base language, then program in whatever style you want, albeit with potentially huge losses in efficiency.) The traditional TM may look imperative, and the SK machine may look functional, from certain points of view, but they are the same thing. The exact same thing. Isomorphic. Identical. Synonymous (that's probably the best way of thinking about it). Again, that's the point.
Combinators and lambdas and stuff..? (Score:2)
Now, though, it would appear that we can run a universal Turing machine in Unlambda! I think! I haven't read the paper closely enough yet to work out how exactly the block of encoded binary at the end functions.
In the meanwhile, though, I'm posting for this reason: I'm really, really weirdly interested by lambda calculus and combinatorial functions and church numerals and all these other bizarre functional-programming offshoots. I can't, however, seem to find any really good resources explaining how they work. I mean, you look around on google or whatever, and occasionally you'll find something explaining the basics of what an anonymous function is, and maybe explaining how to construct a church numeral or putting the old ((lambda (x) (x x)) (lambda (x) (x x))) infinite loop thing up. But I've yet to find something that actually, you know, goes on and explains to you how to use all of this. None of them reach the point of beginning to explain how you go from knowing what a church numeral and "successor to 1" means to expressing foreach (1..10) {} using only anonymous functions.
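(For what it's worth, that last step is smaller than it looks. A sketch in Haskell, with made-up names: a church numeral n just *is* the act of applying a function n times, so "foreach (1..10)" becomes "apply the loop body ten times".)

    -- A church numeral takes a function and a starting value and iterates.
    type Church a = (a -> a) -> a -> a

    zero :: Church a
    zero _ x = x

    suc :: Church a -> Church a
    suc n f x = f (n f x)

    ten :: Church a
    ten = suc (suc (suc (suc (suc (suc (suc (suc (suc (suc zero)))))))))

    main :: IO ()
    main = print (ten (+ 1) (0 :: Int))   -- "loop" the successor ten times: prints 10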
The site with the universal Turing machine links to what claims to be an overview of how to use combinatorial functions, which makes me really happy. However, I was wondering: does anyone have any good links, examples of books, etc, that explain how to construct programs using lambdas and/or combinatorial S/K functions and all that? I hear some college courses cover this stuff (although near as I can gather, unfortunately, no courses at my current college do), so there has to be some kind of information online somewhere. I'll probably find something on my own eventually, but just as long as I have your attention: is there any reading material anyone *recommends* for someone who just finds lambda calculus and such interesting?
For now, though, I'm going to read this intro to combinators thing linked from Tromp's site. Maybe it'll teach me enough that I'll be able to finally realize my crack-addled dream of porting the lambda calculus DeCSS program [cmu.edu] to Unlambda. Though more likely it'll just make me really confused.
John Tromp reads slashdot? (Score:4, Funny)
Don't laugh... (Score:3, Interesting)
Anyway, the point is, if you're writing the back end for a functional language... wouldn't it be real swell if all you had to implement was two combinators? No dealing with messy variables either. This is exactly the approach taken in the implementation of the functional language Miranda. (Although for reasons of efficiency, there are more than 2 combinators... you *could* use just S and K, but then you get some REALLY HUGE results for even relatively simple lambda-expressions).
So combinatory logic isn't just the domain of Schönfinkel, Haskell Curry and assorted logic fetishists... it can and has been used for `real life' applications too.
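(To make the "no messy variables" point concrete, here is a toy sketch of the compilation step in Haskell; the Term type and function names are mine, not Miranda's. It's the naive bracket abstraction into S, K and I = SKK, which is also exactly why the results blow up in size.)

    data Term = Var String | App Term Term | Lam String Term | S | K | I
      deriving Show

    -- Remove all lambdas, innermost first.
    compile :: Term -> Term
    compile (App a b) = App (compile a) (compile b)
    compile (Lam x b) = abstract x (compile b)
    compile t         = t

    -- abstract x t builds a variable-free equivalent of (\x -> t),
    -- assuming t is already lambda-free.
    abstract :: String -> Term -> Term
    abstract x (Var y) | x == y = I
    abstract x (App a b)        = App (App S (abstract x a)) (abstract x b)
    abstract _ t                = App K t

    main :: IO ()
    main = print (compile (Lam "x" (Lam "y" (Var "x"))))
    -- plain old K comes out as the (already bigger) App (App S (App K K)) I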
Re:Don't laugh... (Score:2, Informative)
Sl a b c => (a c) b
Sr a b c => a (b c)
(I think Turner calls these B and C, corrections pls)
It's pretty easy to see that S/Sl/Sr are the optimal 1-ary combinator set. You hardly need K, and you don't need to generate I during compiles (but it's handy during eval to preserve sharing).
John
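(For reference, a sketch of the parent's Sl and Sr written as ordinary Haskell functions; the names are mine. They come out as flip and function composition, which matches the usual definitions of Turner's C and B.)

    sl :: (a -> b -> c) -> b -> a -> c
    sl f b a = f a b                      -- Sl a b c => (a c) b

    sr :: (b -> c) -> (a -> b) -> a -> c
    sr f g x = f (g x)                    -- Sr a b c => a (b c)

    main :: IO ()
    main = print (sl (-) 1 10, sr (* 2) (+ 3) 4)   -- (9, 14)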
I LOVE my universal machine (Score:2, Funny)
I remember like it was yesterday how my hands trembled, as I tore the wrapping off the box. Sweat dripped from my face and stung my eyes in the mid-afternoon heat, as they had so many dusty days of mowing and painting, but it was all worth it!
My Universal Machine does EVERYTHING! I set it in my pet hamster's cage, and it exercised Herbie (my hamster) till he was tired and sleepy. I then took it out and used it to clean the dishes and take out the trash (my chores) which it did AUTOMATICALLY!
Then, as the sun was setting, I used my Universal Machine to walk Sparky (my dog), which it did all by itself. It even cleaned up after him. I was amazed!
When school started, I used my Universal Machine to do my homework, which it did with no intervention from me! Then, my friend Bobby and me used it to make two fake ID's and disguises to get us into an R rated movie... there's nothing it can't do!
I used to love using it to scare my older sister and her stupid high-school friends, or chase the neighbor's cat up and down the street. Now, mostly I let my Universal Machine do everything for me, since it's universal, it can do anything and everything, and since it's a machine, it never gets tired or bored or complains.
In short, I wholeheartedly recommend the Universal Machine to anyone who wants a machine which is universal. It even processes S and K commands. It does absolutely EVERYTHING.
Sincerely,
Timmy B. Stevenson
Atlanta, Georgia
Yeah, but what about the universal program? (Score:2)
one-combinator basis for Lambda-terms (Score:3, Interesting)
Ix = SKzx = Kx(zx) = x
But did you know that you can reduce the language to one-combinator?!
X = lambda f.fS(lambda xyz.x)
K => XX
S => X(XX)
The proof [cs.uu.nl] of this particular variation of a one-combinator basis for Lambda-terms was first published by Fokker [cs.uu.nl], University of Utrecht, The Netherlands, and shows that among several variations of a one-combinator basis for Lambda terms his is the shortest.
Re:one-combinator basis for Lambda-terms (Score:3, Informative)
The trouble with the one-combinator model is that it's not a supercombinator. The reduction rule of X introduces lambdas, which is of no practical use, since the whole point of using combinators to begin with is to remove the lambdas.
As far as I know, it has not (yet) been proven that you need at least two supercombinators to implement all of lambda calculus, nor has it been proven that you can do it with only one.
SSSCA (Score:2)
Another example of how unrealistic the SSSCA is.
ARGHGHH! (Score:3, Funny)
SKK this.
Re:ARGHGHH! (Score:2)
Lambda Calculus wins again! (Score:2)
Now, which one is your favorite programming language closer to?
Re:Lambda Calculus wins again! (Score:2)
I do like combinatorial calculus, though; especially in non-applicative form. You know, the form where functions don't get applied to their parameters; instead, functions get applied to the stack, which contains the result of previous functions. (That is, function composition becomes the fundamental operation.) Lots of interesting results.
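(A loose Haskell sketch of that style, with everything below made up for illustration: each "word" is a stack-to-stack function and programs are glued together by composition rather than by applying functions to named parameters.)

    type Stack = [Int]

    push :: Int -> Stack -> Stack
    push n s = n : s

    dup, add :: Stack -> Stack
    dup (a:s)   = a : a : s
    dup s       = s
    add (a:b:s) = (a + b) : s
    add s       = s

    -- "3 dup add", i.e. 3 + 3; composition is the only glue.
    program :: Stack -> Stack
    program = add . dup . push 3

    main :: IO ()
    main = print (program [])   -- [6]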
Currying is also interesting, but is more studied.
-Billy
Re:Lambda Calculus wins again! (Score:2)
I don't know what you mean. All I'm saying is that the lambda calculus is a simpler and more foundational model of computation than turing machines. The fact that we can distill the essence of computation down to two closed lambda terms is a positive result!
For all the braintrusts posting smaller solutions (Score:5, Funny)
The posted "single bit" solution doesn't work. The only machine encodable in that language is the claimed UTM... but that means that the UTM is far from a UTM, and is in fact a Nothing-TM. Don't hold your breath waiting for your Nobel.
The others have similar problems. The string "Turing Machine" isn't a specified encoding.
Joke all you want, I guess, but pay more attention in theory class, please.
Re:For all the braintrusts posting smaller solutio (Score:2)
Re:For all the braintrusts posting smaller solutio (Score:2)
Note that it is provably impossible to know we have the smallest possible UTM... but 1-bit it ain't.
Re:For all the braintrusts posting smaller solutio (Score:2)
And besides, you're leaning on Slashdot moderators for support in a mathematical argument? I get victory by default. (Oddly, Slashdot seems better with legal arguments than real Computer Science...)
Re:For all the braintrusts posting smaller solutio (Score:2)
I Could Have Sworn... (Score:2)
S-K++ (Score:3, Funny)
Re:S-K combinatory logic?!? (Score:2)
Re:52.8KB widening (Score:1)
Re:slashdotted (Score:1)
Sounds like a good idea to me...
Re:slashdotted (Score:4, Funny)
The server was probably running at a bandwidth of 34 bytes.
Re:Wrong department (Score:1)
Re:Pi (Score:2)
Re:Pi (Score:2)
Just because a number is nonrepeating and infinite does not mean it contains every possible sequence of numbers.
It's like saying that if the universe were infinite, then somewhere there is a planet just like this one, but where my laptop is green instead of blue. It just isn't so.
Re:Pi (Score:2)
There are also multiple-universe theories, in some of which there DOES exist a planet where your laptop is green, since some of them postulate an infinity of universes for every possibility.
Tim
No.. you missed the point. (Score:2)
Whether you want to call it an infinite universe, or an infinite number of finite universes, it still does not mean that every possible thing exists.
Take the set {1,2,3,4,...}. We can say that every whole number exists in this set.
However, an infinite set of the same size, {1, 3, 5, 7, 9, ...}, does not contain every whole number.
Re:Pi (Score:2)
Simple. If the theory is true then "only a single universe" isn't a possibility.
You are confusing "imagining a possibility" with "physics allowing a possibility".
-
Re:Pi (Score:2)
Nope. It's true or it's false.
The problem isn't in the nature of the universe. It's in the nature of possibility.
As I said, the problem is in how you are interpreting "possibility". You are confusing "imagining a possibility" with "physics allowing a possibility".
The "universe for every possibility" theories say that everything that is possible actually exists. Saying "how about the possibility of X" does not mean X is possible.
-
Re:Pi (Score:2)
If you can figure out the higher level laws governing the creation of the universe then you could figure out the extent and limits of what else is possible.
-
Re:Pi (Score:2)
No. The universe is finite, and there are many planet variations that are possible but never happened to occur. We have some general idea what is possible and what isn't.
Back to the multi-universe theory, there would be two variations on the theory. One where, like planets, some random collection of possible universes happens to occur. In another variation of the theory, every possibility occurs.
They are certainly interesting theories, but there is very little evidence yet whether they are good theories or bad theories. This stuff goes pretty far beyond the hard evidence to back it up. It's partly just a game of making stuff up, then seeing if it is logically self-consistent.
A lot of this stuff traces back to the particle/wave duality of light. If light can take two paths, and you look, it always takes exactly one path like a particle would. If you don't check which path it takes, you get proof that it takes both, like a wave would. One of the only explanations for this that seems to make any sense is that everything that can happen does happen. The universe splits. In one it goes left, and in the other it goes right. The two basically identical universes curve a little bit apart, but then come back together at the same "spot". The two different versions of the same particle interfere with each other. The math of it works out exactly right. In physics, if the math works then you've probably got the right explanation.
It's a pretty far-fetched thing to imagine and most scientists are quite sceptical. The thing is, quantum mechanics has some *truly* bizarre features and so far this is about the only theory that makes any sense. It's a choice of "far-fetched theory" or "we have absolutely no clue".
-
Re:Pi (Score:2)
What you say is true.
However, pi goes beyond that. It is conjectured (as far as I know, not yet proven) to be a "normal number", which means that it does contain every possible (finite) sequence of digits.