Ask Chuck Moore About 25X, Forth And So On
Chuck Moore is, among other things, a chip designer. His latest design, the 25x, is based on a 5x5 array of X18 microprocessor cores and could provide 60,000 MIPS at a production cost of about one dollar. And Moore has the chops to back that up: he's been designing tiny, efficient processors for many years. He's also the inventor of the programming language Forth, which has evolved from a minuscule but radically fast language "difficult for a human to read" (according to The Secret Guide) into the even more radical colorForth. How radical? Try "includes its own operating system; has its own 27-key Dvorak keyboard layout; meaningful color syntax." How's that for starters? Post your questions for Chuck about processors and programming below (ask all you'd like, but one per post, please). We'll pass the best ones on to him; answers soon to follow.
5x5 grid of procs (Score:2, Interesting)
And what led you to design/implement this array?
Re:5x5 grid of procs (Score:2, Informative)
There are a number of different mechanisms that have been discussed to signal the software; I'm not sure what Chuck has experimented with so far.
There is a group at MIT with a similar architecture only much less efficient.
http://www.cag.lcs.mit.edu/raw/
John L. Sokol
Forth as intermediate language (Score:4, Interesting)
Re:Forth as intermediate language (Score:1)
The intermediate code created by GCC (RTL) is stack-based, and later interpreted or compiled for a register machine. It isn't really a problem; sometimes it's actually easier that way.
Re:Forth as intermediate language (Score:3, Interesting)
In a register-based architecture, you don't know which register is going to be used next, and which ones are seldom used; in a stack, you know that stuff near the top is coming sooner than stuff near the bottom. Because all of the data is ordered in terms of urgency, a caching system can make very intelligent decisions.
This is even more powerful when the programmer (not the compiler) is the one who arranged the data in order on the stack -- the programmer has an optimiser in his head which by FAR defeats all currently possible software optimisers.
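A tiny illustration of that (my own toy example, nothing Chuck wrote): the same computation written with the soonest-needed operands on top of the stack, versus the operand needed last on top.
\ computing a*b + c; c is needed last, so push it first
: f1 ( c a b -- n )  * + ;
\ with c arriving last (on top), it has to be juggled out of the way first
: f2 ( a b c -- n )  >r * r> + ;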
-Billy
Re:Forth as intermediate language (Score:2)
When humans optimize, some can do a better job than most compilers for short sequences of code, but they spend too much time optimizing the wrong thing, at the expense of bugs, development time, and maintenance.
Re:Forth as intermediate language (Score:2)
The compiler should understand machine limitations, data and code cache sizes, and all the other things that vary from machine to machine and which can be optimised locally; the programmer should understand what data is going to be needed when, and he should be able to tell that to the compiler. THAT is the big advantage of dataflow languages like Forth, APL, and so on (although I'm only aware of optimizing compilers for Forth, not for the other dataflow languages).
-Billy
Re:Forth as intermediate language (Score:2)
The Java VM is a stack-based system which is approximately equivalent to C's speed (a bit faster in some benchmarks, a bit slower in others). If you were to examine the machine code HotSpot produces, you'd see that it was indeed using registers even though the Java bytecode is entirely stack-based.
Re:Forth as intermediate language (Score:2)
where to start? Java has not actually RUN on a virtual machine for many years. Compiled to it, yes, but JITted to native code in pretty much all cases.
So you can see the JVM as a convenient intermediate stack format.
Translating to registers is not so hard, if you have a few registers handy. You can view the register file as an array implementing a circular list (the stack). When you catch up to the tail, start flushing to memory. Using this technique, caller-saves is perfectly implementable (i.e., the caller guarantees that the callee will not overwrite the caller's data; it is up to the callee not to overwrite its own).
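A rough Forth-flavored sketch of that circular-buffer idea (my own illustration, with made-up names and sizes, not anyone's actual JIT):
8 constant #regs
create regs #regs cells allot        \ the simulated register file
create spill 256 cells allot         \ fixed-size overflow area, just for illustration
variable depth  0 depth !
: slot ( i -- addr )  #regs mod cells regs + ;
: vpush ( n -- )
   depth @ #regs >= if               \ is this slot about to be reused?
      depth @ slot @                 \ fetch the value it still holds
      depth @ #regs - cells spill + !   \ spill it to memory
   then
   depth @ slot !  1 depth +! ;
\ the matching vpop would reload from the spill area when the buffer underflows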
$0.00/MIP (Score:2)
A 7 sq mm die, packaged, will cost about $1 in quantity 1,000,000. Cost per Mip is 0.
At that price, I'll take a few billion MIPs, please!
Comment removed (Score:3, Interesting)
Direction of Forth/25x (Score:3, Interesting)
Do you have a direction in mind as to where Forth/colorForth and the 25x could go? e.g. do you see them in handhelds, set-top boxes, etc?
Market Niche (Score:2, Interesting)
And an aside:
My ignorance of Forth might be showing (one of the few I haven't had kicked into me over the years) - but wouldn't "meaningful colour syntax" represent quite a nasty disadvantage for those who are either entirely or partially (red-green) colour blind?
And speaking of Dvorak... anyone know where I can get an ergonomic, full sized, keyboard with a Dvorak key layout. I can probably remap the keys on the existing MS keyboard, but the idea of switching the keycaps is nasty. It'd be better to have a keyboard that sent the right scancodes.
Re:Market Niche (Score:1)
has some great (though expensive) keyboards. They take some getting used to, but they're great once you do.
I don't think you really want such a keyboard (Score:1)
You MIGHT want one that sent the same scancodes but where someone had already switched the keycaps...
and I could see some usefulness in one that switched between the two modes in hardware, except it's probably exorbitantly expensive, due to the low production run. That alone is reason enough not to get one...
But there are a bunch of things which map to a keyboard geometry, the one foremost in my mind being IJKM as arrows... and there are others, some I've programmed for testing reaction time. There's no reason I can think of to swap the keycodes around, and several not to, unless you're trying to fool everything, which I don't think you are. At least, if I were going to do that, I'd want an A/B switch.
- Arete
Re:Market Niche (Score:2)
Seriously, though -- it's (very) well worth knowing how to remap any keyboard quickly (and being able to type without keycaps), because much of the time you won't be at your own box, and you can't very well drag your own keyboard along. Software key remapping works well almost everywhere (it gets covered as part of "internationalization" initiatives, even if nobody had it done beforehand), so a keyboard that sends remapped scancodes is really redundant.
Re:Market Niche (Score:2, Informative)
From the web site: Max power 500 mW @ 1.8 V, with 25 computers running
Or is the only way to get this high a projected performance by clocking the chip like a six-year-old on chocolate frosted sugar bombs?
Again, from the website: asynchronous microcomputer core, meaning you don't count clocks like you do in synchronous logic.
My ignorance of Forth might be showing (one of the few I haven't had kicked into me over the years) - but wouldn't "meaningful colour syntax" represent quite a nasty disadvantage for those who are either entirely or partially (red-green) colour blind?
Chuck uses color, but you could change the colors to different fonts and/or font styles, if you want. Just as Python source uses indentation for telling the compiler about nesting levels, colorForth uses color tokens (think of it as a trivial markup language) to tell the compiler about word (aka function) definition starts, literal numeric values, etc.
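For the curious, roughly how I remember the tagging working (colors from memory of Chuck's pages, so treat this as illustrative, not gospel):
\ red     square     - the name being defined (what ANS Forth spells ":")
\ green   dup *      - compiled into the definition
\ yellow  3 square   - executed immediately, at edit/compile time
\ white              - comment text
\ the ANS Forth spelling of the red-plus-green part:
: square ( n -- n*n )  dup * ;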
revolutionary (Score:4, Interesting)
What is the most revolutionary (i.e., it is scoffed at by those in control/power) idea in the software industry today? Explain how this idea will eventually win out and revolutionize software as we know it.
Re:revolutionary (Score:2)
How did it get so late so soon?
It's night before it's afternoon.
December is here before it's June.
My goodness how the time has flewn.
How did it get so late so soon?
uh, forth post? (Score:2, Interesting)
Forth is a very cool language: I first used it running on an Apple ][ a couple of decades ago, to write programs to control lasers for laser shows at a planetarium. The combination of interactiveness and performance was great - it allowed details of a show to be reliably tweaked right before and even during a show. This was one of those situations where the tool really made a difference to the end result. Other languages available on the Apple at the time couldn't really compete.
I don't have a question for Chuck, but I'll come back when I think of one.
What's the next Big Computational Hurdle? (Score:4, Interesting)
3D, rendered-on-the-fly games get well over 30 frames per second at insanely high resolutions and levels of detail. The most bloated and poorly-written office software scrolls through huge documents and recalculates massive spreadsheets in a snap. Compiling the Linux kernel can be done in less than 5 minutes. And so on.
It seems that the limiting factor in modern computers is now off the processor, in I/O.
What, then, do you foresee coming down the pike that requires more processor power than we have today? What's the underlying problem you intend to solve with your work?
Re:What's the next Big Computational Hurdle? (Score:2)
A computer which, when asked whether there is a God, would simply answer, "present."
The possibilities are boundless. Seriously.
-Billy
Chuck makes neat stuff, but... (Score:2, Flamebait)
His processor designs are poorly specified and buggy as hell, and he just kind of glosses over that (and remember to mix in 3 or 4 NOPs between each useful operation to avoid overheating the chip...). His MIPS counts are inflated, because his instruction sets approach Turing-tarpit level. For example, if you wanted to do 64-bit floating point operations, you'd probably need to consume 40 or 50 machine ops for each addition, never mind more complex operations.
This new chip is going to be completely I/O crippled and I doubt it will ever get past prototype stage.
OKAD is to the VLSI CAD stuff used by people like Intel and AMD as ed is to MS-Word. Sure, it's smaller, cleaner, and in some ways much more powerful, but it's also strange, hard to use, and it doesn't do the same stuff. OKAD couldn't be used to design something like the Athlon, or even something like a 486. It's made to design tiny chips, like Chuck wants to make.
He dismisses a lot of real problems. He claims that software is easy, but never writes anything hard. Everything is non-standard and isolated from the distressfully complicated real world. Basically, he makes it easy by making things nobody but he would want to use.
The machine on my desk can read, interpret, and process thousands of standard data formats, connect to other computers using dozens or hundreds of standard protocols, recompile and run many thousands of legacy programs, and emulate almost every machine more than a few years old. When I want to do high-speed graphics processing, all the slow crappy code gets out of the way and doesn't matter any more. The machine he would replace it with would do none of these things, it would require all new software and would probably cost about the same anyway, after buying the RAM, hard disk (both for the vast amounts of data I want handy, which takes up far more space than code bloat), input devices, and monitor. That is, assuming it wouldn't need costly specialized versions of these.
He's really designing specialized embedded chips, without bothering to specify (or specifying wrongly) what they are good for. A quirky Forth chip with 486-level performance, support for up to 2 megs of DRAM, and video out? What on Earth for? The toy computer in a mouse doesn't do it for me.
And the new one: basically an embedded chip design, with a Turing-complete but minuscule set of primitive integer operations, copied 25 times and laid out in a square array. Why, oh, why?!
Do you know what they're talking about using it for? A replacement for PC server clusters! On the grounds that you can fit as many MIPs in one small box! Wow! 60,000 bogoMIPs! Never mind that the chief assets of a server cluster are the hard-drives and RAM...
The I/O specs? Well, you could put an SRAM controller on the edge...
Cache? 256 KWords (18-bit, of all things) off-chip, which must be managed manually. Sold separately, of course. Each processor has 384 words of on-chip DRAM, into which you must cram your whole program, or stop to load in new instructions whenever you want to do anything else. All "cache" must be managed by code in this tiny space too, since there's no hardware support.
Speed? Multiplication (18-bit) takes 125 operations. Realistically, we're talking under 500 MIPs, before taking into account the I/O problems and the difficulty of writing good parallel code. I'd be utterly shocked if you got more than 50 MFLOPs out of it, after some very careful optimization. Yeah, it's real supercomputer material.
Now it's starting to look like a $1 chip, non?
His Color Forth is very much like the BASICs on early home computers. They also served as both OS and interface. They were about as small, too, and provided essentially the same functionality. They were also tied inextricably to one platform. Hell, even MS got its start doing this stuff.
And yeah, it would suck to be color blind. Whatever he says about using different typefaces, I wouldn't want to distinguish between 8 different kinds of text with "italic", "underlined", "italic-underlined", etc. If it worked decently, he would have used that instead of color in the first place. We're talking about a system designed around his own poor eyesight, which doesn't account for other vision problems, and doesn't provide real advantages for those with good vision. That's typical of his work: exactly what he wants, what nobody else wants.
Yes, almost everything out there is bloated and ugly. The industry could stand to improve a lot. But Chuck Moore doesn't have the answers, in many ways he's just a smart-ass infatuated with some easy answers that don't work in the real world.
On the other hand, Forth is a very nice language. I agree that it should be taught to children while they are learning arithmetic. It's as simple as languages get, and gives you an accurate model of what's going on in the computer (execute the instructions of this word, then this word, then that word...). I see it as just another language, not language/OS/universal interface. It's very, very good for small, isolated systems. Being an extensible language that relies heavily on globals, it's very, very bad for large team-effort software projects.
Basically, read his work, but take everything with a grain of salt.
Re:Chuck makes neat stuff, but... (Score:2)
How will this 25X be programmed? It seems to be like the CM-1. You have all these tiny processors communicating quickly, but with almost no local instructions or data.
I tend to think that the compactness of Forth and the ~1 KiloInstruction available to each processor will be enough to store a useful program in each core, while still leaving room for housekeeping chores. I'll be willing to accept that IO won't be a complete killer.
But! I'm damned if I'm going to sit and hand-optimize the communication between 25 cores. What if I decide that my application really needs 25x25 cores? I mean, Occam wasn't that fun, nor was *Lisp. Ask the Transputer folks or Danny Hillis.
So for the question:
Do you have any programming tools where I can express my algorithm in a communication neutral way, and then have it tuned for the architecture at hand? Or is it not that hard to make this architecture fly?
I've read /everything/ on that site... (Score:2)
His claims about software are unrealistic for most other people, but accurately describe the software that matches well to his chips.
The problem is that you guys don't admit that this is a limited problem domain, specifically: easy problems that are neither memory nor computation intensive. You pretend that it has to do with the approach to creating the software, not the problem you need to solve. And for these problems, you don't need much power, so your painstakingly optimized software performs decently on these low-power chips.
You people don't have much of a clue, really, about the systems you're comparing yours to.
Our multitasker and memory manager with garbage collection and device management fit in 1K. The jpg file read, decode, and display routine fit in 1K. The GUI library fit in a couple of K.
Ooo... A handful of trivial operations in a few K! I'm impressed!
Your "multitasker" is a cooperative multitasker without any real load management. Your "memory manager" is trivial garbage collection on a small, single page of RAM. Neither have any sort of protection against poorly written or hostile code. Crack open Knuth's TAoCP and see these functions implemented in a few dozen assembly instructions. Nothing new at all. I could write them with my eyes closed.
GUI library? Yeah, just like the toy ones commonly written into game engines. Ooo, but this one is skinned like Windows, so it must be functionally equivalent to Windows! I've written little GUI libraries that manage sprites and text, windows, focus, and mouse-clicks, in a day or two. They're toys, and they're utterly trivial once you figure out how to lay down pixels efficiently. Making one with a rich supply of widgets, support for multiple languages, and a component model is much harder.
Wow, a JPEG decoder! I'm impressed that you reimplemented a standard and hand-compressed the code instead of just using one of several completely free C implementations that have been tested and debugged for years. With how many thousands of test files from different sources did you test your implementation to be sure that it is real-world ready? And you managed to reduce the memory needs of decompressing a 600x400 JPEG file from about a megabyte of image data (pre- and post-compression taken together) + 20k of code to about a megabyte of image data + 1k of code! Very sound software-engineering practice, I'm sure. You really know how to set your priorities.
And all this code was used in... what product did you put to market again? Since you're so sure of real-world applicability, you must be making an absolute fortune from your vastly superior development methods...
Most importantly, these things aren't portable or flexible in the least. They're hacks for one monolithic system, written all crammed together so that if you lose the original implementor, almost nobody will be able to read them. A thing like Linux or Windows is written to be very flexible and support a wide variety of commodity hardware, so you don't have to rewrite every piece of software when you upgrade one part.
Upgradability is very important. You can't access the web from a static platform, because the web is always changing, not just the content, but the specifications. This is why "information appliances" fail in the marketplace. If the architecture isn't tolerant of faults in these components, then the system doesn't work. That is why all the "unnecessary bloat" of defensive coding is used.
Don't get me wrong, you've made a very pretty little imitation airfield out of rocks and coconuts, I just haven't seen any cargo planes dropping off supplies.
Real programs, not bogomips.
Another example of how you people don't understand the performance numbers you use. BogoMIPS are a measure of unproductive operations (NOPs). The MIPS Chuck gives are essentially the bogoMIPS count. A technically and ethically sound MIPS rating would average over standard, common operations such as multiplication of numbers stored in main memory. Just because you're running programs on the system doesn't make your numbers anything but bogoMIPS.
He's suggesting supercomputer use, advertising 60000 MIPS, when the standard supercomputer reference, GFLOPS, probably comes in somewhere around 0.02 (seriously, think about doing a 64-bit floating-point multiply on these things, and don't you dare wave your hands and say that people shopping for supercomputers shouldn't be using 64-bit floats), compared to around 1 for a fairly standard desktop chip.
It's fun to play around with seeing how much you can cram into tiny programs, and sometimes it's useful. But most of the time, it's more sensible to write portable, readable code. And it would be loads of fun to play around with chip designs, but you can't just optimize for bogoMIPS and then claim effectively infinite performance by hand-waving over the I/O and programming for the freakish architecture. If you make bizarre, specialized chips you either have a realistic market or you're playing around. Chuck's just having fun, then mistaking fun for good product.
Worse, you're all mistaking an engineer at play, duplicating decades-old work, for someone doing cutting-edge research. (Ultratechnology indeed!)
There is nothing new about the idea of just putting multiple processors on one die, on a simple network. Nobody does it because it's too much of a pain to use. Parallel programming isn't easy. But it sure is an easy way to inflate your MIPS rating.
There's nothing new about tiny MISC chips. They're too hard to program and require too much cache to execute large programs efficiently. Go back in time a bit, and you'll see similar things all over the place. Look around the rest of the embedded industry, and you'll see equally small, cheap, efficient chips, with adequate performance and all sorts of different nifty specialized features, that don't require you to code everything from scratch.
There's nothing new about tiny programs. You can only do so much with tiny programs! The real world is messy, and dealing with everyone else's standards and bugs makes programs necessarily big (and lazy programmers make programs unnecessarily huge, but that's another issue).
With a 2400 MIPS CPU you can route gigabit datastreams on separate I/O pins and do megahertz analog signals on other I/O pins at the same time.
Yes, it will make a very lovely piece of wire, once you build up a whole supercomputing architecture around it to feed it these gigabit datastreams, though there's no room for routing tables or anything like that.
Where is forth going? (Score:5, Interesting)
Forth has (in my eyes) always been about small and efficient. Today, though, embedded apps are more likely to be written in C than in forth, and the "OS as part to the language" thing isn't as compelling today as it was in the eighties. Where is forth being used today, and where do you see it going in the future?
/Janne
Well, it's staying in the UltraSPARCs... (Score:2)
Forth is small and efficient enough that the UltraSPARC PROMs contain a small interpreter. You can write Forth code and store it in non-volatile RAM, to be executed at powerup.
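Something like this at the ok prompt (from memory of OpenBoot, so check Sun's docs; the word HELLO is just a made-up example):
ok nvedit                      \ edit the nvramrc script
  : hello ." hello from the PROM" cr ;
  hello
  ( press Ctrl-C to leave the editor )
ok nvstore                     \ save the edit buffer into NVRAM
ok setenv use-nvramrc? true    \ evaluate nvramrc at the next reset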
Re:Well, it's staying in the UltraSPARCs... (Score:2)
Sure there's documentation, on Sun's doc site [sun.com] for one. That site is basically just a full install of the AnswerBook2 software that comes with Solaris, plus all the docs for all the products. (Usually you run your own AB2 server locally, and it only displays the doc collections for stuff you have installed.)
Anyhow, just look for "openboot" on the web.
Re:Where is forth going? (Score:2)
Re:Where is forth going? (Score:2)
If you're programming a microprocessor controlled toaster, and will sell a million toasters, then using forth might let you use a $0.50 microcontroller instead of an $0.75 one, saving $250,000. That's certainly worth some extra programming effort. This is called "embedded programming". And, believe it or not, there are several guys coding the smallest scale embedded programs for every one coding PC and server operating systems and applications. (I suspect embedded programmers are grossly underrepresented in surveys, because most of them are EE's and also do circuit design, and so show up as design engineers rather than programmers. Likewise, few of the many people customizing databases to fit the needs of each and every corporation are called "programmers", although they do sort of code, and there are a heck of a lot of them.)
For bigger jobs, things are a bit different. I don't disagree with Moore's comments about bloatware, it's just that I think there's a middle ground between Win ME (a bloated program with a 20 year history during which nothing was ever taken out, including bugs), and tiny OS-less applications where the programmer sweated for each byte saved. And I think that for less than million-unit quantities, that middle ground is more cost-effective -- otherwise we'd all still be using assembly language, except for the Forth programmers. Apparently Forth can have a smaller memory footprint than good assembly; it's not going to run as fast, but there never have been many good assembly programmers, and Forth might beat poorly written assembly. At any rate, for toasters, washing machines, ATM's, and about 90% of the other 8-bit embedded systems, it doesn't matter because the thing being controlled is thousands of times slower than any reasonably written software.
I don't have experience working with Forth, but it looks like it would be much harder to work in than C, mainly because algebraic notation is much easier for humans to comprehend than reverse polish notation. (RPN is easier for computers to comprehend -- that's why the first scientific calculators used it, and why Forth is so compact.) In some respects, they have similar capabilities: they let you work on an extremely low level when you have to, and they let you make horrendous mistakes. (I can't imagine a compiler that would work for direct hardware control that didn't have those possibilities.) They also let you work at a higher level. It's claimed that Forth can go to a higher and more abstract level than C or even C++, and this might be true, but I don't see how I could ever program Forth without continually thinking about the stack, and I can shove the details into C functions and work at a pretty high level in C. C produces much larger machine code than Forth, but at the present prices of memory, working harder on the code to save memory takes something like 100,000 units sold to pay off. Forth is sort of interpreted, very quickly, while C is compiled into pretty fast machine code. C should be faster, but there's a lot more overhead to C so Forth would win sometimes. And of course, a good algorithm in a slow language beats a poor algorithm in expertly hand-tuned assembly -- will Forth's general weirdness make it harder to find and apply good algorithms?
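For what it's worth, the usual Forth answer to the stack-juggling worry is to keep every word a line or two long and annotate it with a stack comment, so you only ever think about two or three items at a time. A trivial example of mine:
: f>c ( fahrenheit -- celsius )  32 - 5 9 */ ;
: c>f ( celsius -- fahrenheit )  9 5 */ 32 + ;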
Am I biased? Too many people merely like the first language they met, but I think I'm about as unbiased as it is possible to be while still having some experience: I first learned programming in FORTRAN and COBOL long before these languages were "structured", then BASIC, APL, assembly, many more dialects of BASIC and assembly, a little Pascal and LISP, C, and Labview. I'm not wedded to any particular paradigm of programming languages, not even to using the Roman alphabet, but I do have to admit that C resembles FORTRAN in several ways (algebraic notation, printf formatting, the simpler variable declarations), and Forth doesn't resemble anything at all.
One final note: Forth does seem like a great language for p-code, that is, compiler output that is not machine code. P-code can be machine- and OS-independent, and protection against rogue code can be built into the interpreters. There's a performance loss, but since MS's bloated code has pushed everyone into buying ridiculously overpowered desktop boxes, does it matter?
Re:Where is forth going? (Score:2)
For almost any microcontroller, what you will get, development-wise, is a C compiler (or an assembler in the case of signal processors, as you _want_ to get that close to it to realise its benefits). About the only advantage forth seems to have in the embedded space today is that the code can be even smaller than assembler, and that advantage is being eroded.
At the same time, forth is Way Cool(tm), and it would be a crying shame to see the ideas slip away. The only thing I believe forth really has against itself is that the choice of keywords is... non-intuitive, let's say. From a readability standpoint, a language that allows you to define a useful, non-trivial function using only punctuation is not optimal. Now and again, I even get this urge to write a forth-like shell before I sober up and come to my senses.
/Janne
Re:Where is forth going? (Score:2)
I guess you haven't taught or tutored any algebra classes? So many programmers imagine that the way they've learned to think so well is the only way to think. Both forms are hard to work with; however, the Forthlike form is easier to express action in (since it's chronologically ordered, with no execution occurring out of order) and easier to refactor (since most refactorings require only cut'n'paste, with no possibility of code breakage); the Algol- or Lisp-like form makes certain other transformations easier (since the arguments of a function are 'tied' or applied to the function by hard syntax).
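A toy example of mine of that cut-and-paste refactoring:
: volume ( w h d -- n )  * * ;
\ the phrase "* *" can be lifted out verbatim and named, with no argument
\ renaming or re-plumbing, and the original rewritten to use it:
: product3 ( a b c -- n )  * * ;
: volume'  ( w h d -- n )  product3 ;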
The theory of Forthlike languages is brand new, in spite of Forth's age and Postscript's overwhelming success; it's discussed at the Joy page [latrobe.edu.au].
will Forth's general weirdness make it harder to find and apply good algorithms?
Forth's weirdness is explicitly tailored to help the programmer find and apply good algorithms. Let me list some ways:
Am I biased?
You have an awesome list of languages, but all of them operate on the same basic system: functions syntactically take parameters. Forth, together with Postscript and Joy, is different.
Read the Joy page -- I found it mind-stretching. It's good for a programmer to know some truly _different_ languages, which encourage truly different thinking.
-Billy
Re:Where is forth going? (Score:2)
Interesting post.
Every high-level language beats poorly written assembly. Believe me, I know. I've seen too much of it.
Apart from that, though, Forth in its original P-Code form is tighter than assembly. At least 33% tighter, and often as high as 90% tighter, particularly in inefficient assembly codes like pre-386 Intel.
Wow, you admit to learning in COBOL. I am in awe of your bravery. :)
Horrible languages designed for appeasing the needs of suits aside, FORTRAN, COBOL, BASIC, Pascal and C are all Algol-family languages. Assembly of course is its own breed. LISP is also. I don't know what Labview is.
It is true that Forth is unique. It's not an Algol-family language, like most languages are, outside of assembly. This is both a strength and a weakness. It's a strength because the way Forth does things is, well, better; it's a weakness because there are no commonly-used operating systems written in it, and it sometimes can be a real pain in the butt calling OS routines from Forth.
The weirdness you reference does not make it harder to do algorithm design in Forth. It makes it easier. What it makes harder is learning Forth in the first place. If it wasn't for Leo Brodie (some of whose books [amazon.com] are still in print; unfortunately Starting Forth is not) I doubt there would be nearly as many Forth programmers as there are.
Forth is the original language for p-code, as far as I know. I am not sure what you mean by rogue code. Normally, Forth compiles to tokens. Each token is a numerical referent to a routine. For example, here's a sample (rather goofy) Forth program:
: additiondemo
4 5 + .
;
the delimiter between commands is either a space or a CR. so that's actually four commands compiled there (six if you count the compiler control commands). : means compile a new command, and takes an argument following (there are a few commands in Forth which do that) of the name of the function. therefore, this is a new command named "additiondemo."
and by the way, that's what all Forth programs are. new commands. so as you write programs, your Forth just gets bigger. and bigger. there are tools available to reduce your Forth code to the minimum necessary to run a particular command, and then run that command immediately on startup. this is referred to as turnkeying. but you don't have to do that. you can just start Forth on bootup and keep all your programs in RAM all the time.
so back to our program. the command "4" puts the number 4 on the top of the stack. the command "5" does a similar operation. the command + adds the top two numbers on the stack, and places their total back on the top of the stack. (Therefore, at this point there's a 9 on the stack.) the command . pops the number on the top of the stack, and outputs it to the screen (well stdout really).
finally, the command ; says time to stop compiling, write the end of the command, and return to interpreter mode.
anyway, there ya go. that's basically how it all works. that command above should output a single 9, and leave the stack in the same condition it found it.
with most Forth implementations, the 4 and 5 commands would compile to assembly. actually, probably so would the + - but it could compile to a P-code reference to the + command. the . command usually compiles to a P-code reference. result? this additiondemo should come in under 10 bytes on most architectures.
this is also why Forth only looks like it's using RPN. in reality, the + command is just a command like any other: it takes its arguments from the stack. it just so happens that it looks like RPN, most of the time. but you can do some rather strange things with +, especially if you have pointers on the stack. some of these things can be useful, although their implementations are usually very hairy.
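for example (my own snippet), + with an address on the stack is just address arithmetic:
create buffer 10 cells allot
: nth ( n -- addr )  cells buffer + ;   \ address of cell n of the buffer
42 3 nth !    \ store 42 into slot 3
3 nth @ .     \ prints 42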
Comment removed (Score:4, Interesting)
A practical use for Forth (Score:1)
Basically it's a Forth interpreter with a stack, and a device tree. You can literally 'cd' and 'ls' around the PCI and other busses on Sun workstations.
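Something like this at the ok prompt (the device path below is just an example; the word names are standard OpenBoot as far as I recall):
ok show-devs        \ dump the whole device tree
ok dev /pci@1f,0    \ "cd" into a node
ok ls               \ list its children
ok .properties      \ show the node's properties
ok device-end       \ "cd" back out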
x.25? (Score:2)
Programming languages... (Score:5, Interesting)
This one would probably require a bit more time to answer than you probably have available, but a quick rundown would be cool: Where do you see programming languages headed -vs- where do you think they SHOULD be headed? Java, C#, and some of the other 'newer' languages seem to be a far cry from Forth, but are languages headed (in your opinion) in the proper direction?
Re:Programming languages... (Score:2)
I'm willing to be proved wrong however.
Newbie / Future MISC Chip Designers (Score:1, Interesting)
I've read everything on your site & also Jeff Fox's Ultratechnology.com site about your Minimal Instruction Set Chips, their design, performance etc.
What advice and tools would you recommend to anyone today starting out and wanting to follow and build upon the path that you've set out?
Very Interested?
What is Forth? (Score:2, Interesting)
I am looking for the simplicity, control, and elegance of ASM. But I also would like to enjoy some degree of abstraction and features that reduce the drudgery of programming. I have looked at HLA and Terse but they are platform-dependent, unless I write my own compiler. Do you think Forth meets these criteria?
Another thing. Just from peeking at the FAQ I see Forth uses postfix expressions (among other things), which seems a little dated. I assume this was done to simplify compiling on resource-constrained machines? Do you plan on giving Forth a minor face-lift?
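For the record, postfix mostly just trades parentheses for an order of evaluation. A small example of mine: (a + b) * (c - d) becomes
: f ( a b c d -- n )  - >r + r> * ;
2 3 10 4 f .   \ prints 30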
Re:What is Forth? (Score:2)
-Billy
Quick question (Score:4, Interesting)
(If you could microcode the "instruction set", all the better. A parallel processor array can become an entire Object Oriented program, with each instance stored as a "thread" on a given processor. You could then run a program without ever touching main memory at all.)
I'm sure there are neater solutions, though, to the problems of how to make a parallel array useful, have it communicate efficiently, and yet not die from boredom with a hundred wait-states until RAM catches up.
What approach did you take, to solve these problems, and how do you see that approach changing as your parallel system & Forth language evolve?
The direction of 25x Microcomputer... (Score:4, Interesting)
The 25x concept looks like it could really be a damned interesting idea. But one of the questions in my mind is where you want to head with it. Is this something that is to be used for very specialized research and scientific applications, or is this something that you envision for a general 'desktop' computer for normal people eventually?
Secondly, if you are considering the 25x for a desktop machine that would be accessible to people who aren't full-time geeks, what about software? Forth is a lost development art for many people (it's probably been 10 years since I even looked at any Forth code) and porting current C and C++ applications would be impossible - or would it? Is there a potential way to minimize the 'pain' of completely re-writing a C++ app in colorForth for the 25x machines, which could help to speed adoption of the platform?
Where did you get the name "ShBOOM" (Score:2)
Only question I have is about the choice of ShBoom for a microprocessor - any story behind that??
I sure wish I could get a uP like an NC4000, RTX2000 or PSC1000 - inexpensively.
Re:Where did you get the name "ShBOOM" (Score:2)
Yes [bmrc.co.uk] - at least one company marketed the ShBoom as a Java processor, the PSC1000. (Link is from 1996).
I was just wondering if, and it's a long shot, Chuck knew about the old Republic Pictures serial "Captain Marvel", which featured the keyword SHAZAM (for Solomon (wisdom), Hercules (strength), A... (can't remember), Zeus (something), Atlas (...) and Mercury (speed, I guess)). Then Firesign Theatre came along and made a (really obscure) film, "J Men Forever", which includes clips from Captain Marvel, except they changed the magic word to SHBOOM.
Two books on FORTH, TILs. (Score:1)
Two excellent books worth finding, both of which are probably long out of print:
"Starting FORTH", Leo Brody, Prentice Hall, 1981.
A very well written book aimed at the absolute beginning programmer. Brodie uses cartoon drawings to illustrate the operation of the forth operators, and over the course of some 350 pages, explains not only how to program in forth, but how the language works under the covers and how to extend the compiler. Highly recommended and extremely novice-friendly.
"Threaded Interpretive Languages", R.G. Loeliger, Byte Books, 1981.
A more technical work, Loeliger describes and explains the implementation of an almost-but-not-quite-FORTH language. The book contains (and explains) the full source code, assembly as well as high-level, for the interpreter.
What is Forth (Score:3, Interesting)
What is Forth? Why is it useful? How fast is it in terms of useful computations? X MIPS, when comparing minuscule Forth instructions to CISC Intel instructions, isn't really a good comparison. So how many *useful* computations can it perform compared to modern processors? What has it been used for in the "real world"?
I recall a company creating a transputer -- basically an array of FPGA's, all doing 4-bit add operations, and claimed X thousand MIPS, where X is large. How are Forth machines different?
When will we see your products in the market? (Score:1)
Questions for Chuck. (Score:2, Interesting)
And why isn't Forth used more as a platform? Is it speed, security, advertising, what? I've never understood why the Forth community will take an excellent implementation right up to the point of being useful, then leave it without developing any applications. I can see an efficient, user configurable web cruiser built on any one of a number of Forths. But nobody has done it. Ditto for httpd servers. Why?
And to the rest of the world, please stop parroting the old line about Forth being hard to read. It isn't. You can pick up most of what you need to know in an afternoon, then begin to enjoy some very elegantly stated code.
Re:Questions for Chuck. (Score:2)
Other reasons: It isn't advertised. It isn't standardized. (I doubt this matters much since it's pretty easy to add new words to any available Forth as needed to support a program from another dialect, but this sort of thing scares away managers who don't have time to listen to the details.) It allows you to do horrible things like writing a subroutine that removes too many or too few items from the stack. (Sort of like the horrible mistakes C programmers make with pointers, malloc and free, == vs =, etc, but we're talking about managerial perceptions again...) Some versions of Forth store your tokenized source code right in the executable program, so you can't protect your "trade secrets." RPN really is hard to work with, at least for me (with the early HP scientific calculators). But the basic reason is that it looks weird and this inclines people to find fault...
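For instance (a deliberately broken example of mine):
\ the stack comment claims ( a b -- sum ), but OVER copies instead of
\ consuming, so an extra item is left behind to bite you later
: bad-add ( a b -- sum )  over + ;
1 2 bad-add    \ the stack now holds 1 3, not just 3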
Re:Questions for Chuck. (Score:2)
On old HP calculators, you could only ever see the bottom item on the stack. Using these calculators was difficult because you had to think about "registers" that you couldn't see.
A good RPN calculator on a computer (or an HP graphing calculator) lets you see almost everything that goes on with the stack. This is when RPN begins to make sense.
And if you're _programming_ in an RPN language like Forth, then the stack is what you make it. If you switch things around on the stack at bizarre times, it will be hard to work with. If you think of stack slots like function inputs and outputs, it's easy to work with.
But you did hit on what are probably the main reasons Forth isn't used: tradition, and the fact that Forth is closer to assembly than C, and as such it would not be obfuscated enough in binary form.
Re:Questions for Chuck. (Score:2)
Re:Questions for Chuck. (Score:2)
1994 was way too late for a Forth standard. ANSI C has at least a decade's lead, and I think C was kept reasonably standard ever since its creation in the early 70's.
Somewhere in Moore's pages on colorForth it does say that the executable is tokenized source. Unless I misunderstood? Glad to hear that other compilers don't.
You might be right about programming traps and debugging -- most C and C++ projects never do get adequately debugged, so how could Forth be worse? But the big problem is that I can show a C program to an engineer that doesn't program, and he can read parts of it. Show him Forth, and it might as well be hexadecimal.
OpenSource Software (Score:1)
How do you foresee such a synergy affecting the popularity of both parties?
-Marvin
Information theory (Score:2, Interesting)
...
* Max power 500 mW @ 1.8 V, with 25 computers running
500 milliWatts is
It could have been C, couldn't it? (Score:2)
I was truly amazed when I first found a FORTH compiler for the Apple II. It was so alien to everything else available, yet so advanced, so ahead of the pack.
So, as for a question, do you think the growth of the appliance and handheld markets can give FORTH a chance to achieve a mainstream status? What steps are being taken to bring FORTH compilers to Palm OS, Windows CE and such?
Open forum to the X25 guys? I wanna OPT OUT (Score:1)
.
.
*Yes, I know that is X10. It is a joke. Live a little.
Ahem (Score:2, Funny)
Re:Ahem (Score:2, Interesting)
Massively Parallel Computing (Score:5, Interesting)
The biggest problem in dealing with a large number of small cores lies in the programming. I.e. how do you design and code a program that can utilize a thousand cores efficiently for some kind of operation? This goes beyond multi-threading into an entirely different kind of program organization and execution.
Do you see Forth (or future extensions to Forth) as a solution to this kind of problem? Does 25X dream of scaling to the magnitude that IBM envisions for Blue Gene? Do you think massively parallel computing with inexpensive, expendable cores clustered on cheap dies will hit the desktop or power-user market, or forever be constrained to research...
Re:Massively Parallel Computing (Score:2)
I wonder if the limits of the programming languages available had any impact on the decline of thinking machines and massively parallel computing. Perhaps it was some other factor entirely (like synchronization, resource contention, etc)...
Re:Massively Parallel Computing (Score:2)
You can compress this into binary with each base pair represented by 2 bits (00, 01, 10, 11) which reduces the amount of data by a factor of 4.
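A rough sketch of that packing (my own, assuming an A=0 C=1 G=2 T=3 encoding):
\ pack four bases, already converted to 0..3, into one byte
: pack4 ( b3 b2 b1 b0 -- byte )
   swap 2 lshift or
   swap 4 lshift or
   swap 6 lshift or ;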
There are some additional differences between a compact binary representation for a single genome, and the data used by the HGP, some of which is used for correlating sequences, etc.
In short, your DNA sequence would fit on a CD-R.
While it may be closer to 700M, it would still fit.
As a side note, if you removed the excess filler in the genome, you would end up with substantially less than 600-700M, perhaps in the neighborhood of 200-300M. But no one is sure if that filler is truly filler, or if it plays an indirect part in gene expression within cells...
Delta compression? (Score:2)
But given that some large percentage of our genome is identical from person to person, I imagine we could be stored as a diff from a "reference human" in far less space than that.
Re:Massively Parallel Computing (Score:2)
700M only contains the genetic information required for the process of life to occur.
The execution of this genetic program is what you call life, and being human. To store the 'output' from this genetic program would be impossible.
There is infinite variety and always will be because the environment in which this 700M genetic program executes is always dynamic, and is a nonlinear dynamic system.
I dont want to get into the details here, but suffice it to say that there is a GIGANTIC difference between the genetic code, which is small, and the end result, which is life.
That the wonders of life can arise from such simple programs is a mystery to me as well... I am by no means trying to trivialize life or individuality.
See recent research on complex adaptive systems if you are curious...
Background on Mr. Moore (Score:2)
He did create Forth, yes, but that was thirty years ago. And while Forth has been relatively unchanged for the last twenty years, Chuck has kept evolving the language in a quest for the minimum interface between a human and a computer. The "OS" talked about in the intro is only a couple of kilobytes (yes, kilobytes).
He works not just on software, but does true systems work: a combination of software and hardware. And that is what he is trying to minimize. The system as a whole, not just a programming language. He has been designing processors hand-in-hand with stack-based languages. So he can do things like write a compiler for his language in a hundred lines of code. And he has a chip that uses _milliwatts_ of power and only 15,000 or so transistors.
If nothing else, realize that Chuck is one of the few people single-handedly creating microprocessors. And he's way, way out there. Remember the recent Slashdot post about asynchronous logic? Chuck has been designing chips without proper clocks for ten years now.
My question to Mr. Moore: Linux is seen as a more stable and reliable alternative to Windows, but at the same time I wonder if it's real progress or just a similar incarnation of a traditional operating system. Is the concept of "operating system" outdated?
Re:Background on Mr. Moore (Score:2)
Async is being touted as the next step in chip design. Very few people have any experience with it, and many people don't think it is possible (even though the result is higher speed and lower power consumption). But Chuck is doing it now.
Can Forth/ColorForth really bridge the gap? (Score:2)
Thanks for delivering a language that has proven to be great on memory constrained systems for years.
Re:Can Forth/ColorForth really bridge the gap? (Score:2)
Chuck's CAD system used to design his processors is written in colorForth.
Eating your own dogfood (Score:2)
Note that it wasn't used to design something equivalent to a Pentium. I think it was used to design a much simpler but not too slow CPU, and then replicate that 25 times with interconnections. And the user interface is often the hardest part of a program; Moore may have left that as simply a text console or something that takes a lot of work for anyone but the programmer to master. But still, I would think that to write a CAD program to do even that much well would take a large programming team, and various routing algorithms that are held as trade secrets or patents by large corporations.
Mr. Moore, are the specs or a demo on the web somewhere?
Re:Can Forth/ColorForth really bridge the gap? (Score:2)
Enough to build a high-end CPU.
Re:Can Forth/ColorForth really bridge the gap? (Score:2)
As long as you consider CMOS chips with a few thousand surface-elements "high-end".
Re:Can Forth/ColorForth really bridge the gap? (Score:2)
Re:Can Forth/ColorForth really bridge the gap? (Score:2)
You're missing the point. Which is more powerful? A word processor that can handle a 1,000,000-word novel, or one that craps out around 1,000 words?
Re:Can Forth/ColorForth really bridge the gap? (Score:2)
And no, nothing Chuck Moore has done compares remotely to the raw hardware performance of a typical desktop machine, with hardware cache management, pipelining, and goodies like floating point support. Let alone supercomputing hardware. His real-world performance is not all that exceptional for embedded chips.
I don't care how many MIPs he claims. He doesn't even provide hardware support for multiplication! As soon as you try to do anything that requires number-crunching performance, such as graphics beyond a simple bit-blt (as in his "Windows" demo, which is a slight step up from the NES sprite engine, and even with 1 toy app pushes the limits of his chip's RAM capacity), you'll find out just how ridiculously inflated his claims are. He's regularly out by at least 2 orders of magnitude, and with 25X probably 3 or 4.
And it's not like his stuff is rock-solid reliable. Think "thermal bug." And then think about what else would show up after someone orders a run of 1,000,000 chips.
Re:Can Forth/ColorForth really bridge the gap? (Score:2)
Show me the bugless production processors and I'll believe that other CAD systems are too conservative, and that this dilettante has shown up the entire industry. As it is, OKAD is an experimental system which has seen a mere handful of real silicon prototypes.
Have you ever heard a single impartial evaluation of this work? Or have you just been reading around at ultratechnology.com?
Object-Oriented Programming (Score:3, Interesting)
What are your views on Object-Oriented programming and how it would relate to forth?
X25 communication (Score:2)
Re: (Score:2)
Java VM in Forth? (Score:2)
Have you looked at Java as a high-level language for these systems or at Java bytecodes as a way to make common software available to users?
Re: (Score:2)
Re: (Score:2)
MIPS, But Not much I/O - What apps work well on it (Score:2)
Gap between claims and reality (Score:2)
I used Forth a couple of times in my younger days, for a PC data collection board, and STOIC, a VAX/VMS forth that an excellent editor was written in. So I'm familiar with some of Forth's strengths.
But Forth hasn't taken off, either in the general market or in its target market. In the same time period, numerous other languages have either become quite popular or have become well established in niches: C++, Java, Perl, Python. Companies like Cygnus have made mucho $$ supporting C in embedded environments, supposedly a natural niche for Forth. And research projects which involve, say, downloading codelets into an operating system to filter network packets tend to use Java or interpreted C instead of Forth.
If you could somehow wave a magic wand and create projects to make Forth popular, what would you do? What vendors would begin to offer Forth as an alternative, what killer open source projects could be done far more efficiently with Forth, and what great benefits could firewall vendors create by letting admins add little arbitrary packet filters written in Forth?
Re: (Score:2)
Re: (Score:2)
Limits of minimalism (Score:2)
Against complexity (Score:2)
Moore rejects most of the innovations in computer architecture of the past 20-30 years. No superscalar execution units. No pipelines. No caches. No floating point. No huge memories. Just simple little stack machines with high clock rates.
Nobody seems interested. Not even the digital signal processing people, who should like the repeatable timing and be willing to put up with the tiny memories.
So the real question is, what is this for?
Tiny web browsers (Score:2)
Forgotten interview? (Score:2)
What happened to the "Ask FCC Chief Technologist David J. Farber" interview questions (http://slashdot.org/interviews/01/01/22/1349237.
maru
www.mp3.com/pixal
Synchronization overhead? (Score:2)
Will this restrict the set of applications for which this chip is useful, or have you come up with a clever solution to the problem?
Extreme Programming and the Forth philosophy (Score:2)
How, if at all, does Forth help you to do things like refactoring and unit testing in ways that other languages don't?
-Billy
Language types (Score:2)
Why?
Many people complain about the RPN, and many others argue that that factor can't be enough of a reason to spurn a language of the quality of Forth, but is it actually that maths as taught in schools the world over really does give languages like C a huge "familiarity bonus"?
Re:Language types (Score:2)
Another factor is that Algol and its descendants visually break up the code more than Lisp and Forth usually do, making it easier to see the structure. (This depends on the programmer, of course -- anyone with a smidgen of artistic ability and any concern for those who must follow him can add white space to make any language look good. But the average hacker seems to be lacking in either artistry or concern for maintainability. I've even seen Pascal code run together until it was unreadable. But the varied grouping elements in C (parentheses and curly brackets) give it some visible structure even when the indentation is snafu'd. Lisp's all-parentheses style is harder to parse visually, and I've seen Forth programs presented as a single string with no breaks at all.)
Who is saying "what's Algol?" It was a structured language created by a committee of mathematicians around 1960, before anyone thought to call it "structured programming". It had most of the ideas you find in Pascal, C, Modula 2, and filtering back into the originally unstructured Fortran, cobol, and basic languages. It also had a lot of ideas which turned out to be either unimplementable or just plain bad. If you want to see just the good parts of Algol, learn Pascal.
It had blocks separated by begin and end, which allowed you to replace the single statement controlled by an if with a whole group of statements (C turned that into {} for faster typing), plus the whole nine yards of "structured" languages (while, for, etc.). For wasn't new, FORTRAN and COBOL already had equivalent loop statements, but using blocks instead of some hokey syntax to delimit the loop was new. Algol might have been the first to make the variable names in a subroutine independent of the names in the main and other subroutines. It had rules for variable scope that only a mathematician could love -- e.g., you could define a function inside another function inside main, and it would have access to the outer function's and main's variables, besides having its own local variables. (C simplified this to each function having its own distinct set of variables, and blocks being allowed to have local variables plus inheriting the next level's.) It also introduced call by name.
Call by Name seems to have been both hard to implement and a basically bad idea. Since a function could access the variables in the calling program, all the way up to the main program, the original language definition seems to say that a function could actually call a subordinate function that would invisibly change the value of the first function's argument, and if it was by name the first function should use the new value henceforth. Uhhg! The second edition of the standard (which I long ago discovered marked down to about 10 cents in hardcover and still have packed somewhere...) discussed this and various other language features that were impractical with the compiler technology of the time, but didn't actually say what to do about it. So I guess each compiler writer implemented a different subset of Algol. Fortran and Cobol maintained fairly good compatibility; Algol started out incompatible, so it died except as an inspiration to others.
In the 1970's, Wirth finally handled it right: he named his favorite Algol subset "Pascal" and pushed it as a standard language. But it didn't compete too well with the more hacker-friendly C. It told you when you f*d up and didn't let the program compile, while C just assumed you knew what you were doing... Borland gave a version of Pascal a new lease on life by writing remarkably fast compilers for it (after, I presume, changing whatever part of Wirth's standard made it slow to compile but mathematically correct) and selling them cheap, but it was still treated mainly as a toy language -- good for learning to program, but to do real work you used C which didn't complain when you did something odd with a pointer. Even though most of the time it was a mistake... But I don't think it's possible to write a hardware driver in Pascal, it will think your reads and writes to hardware are a mistake.
Marginalizing of the blind (Score:3, Interesting)
Now, with some computer experts estimating that over 50% of the Internet is incomprehensible to braille interfaces, and most computer operating systems devolving to caveman interfaces ("point at the pretty pictures and grunt") we seem to be ready to take the next step - disenfranchising the merely color-blind.
I realize that colorforth is not inherently discriminatory, in that there are a great many other languages that can be used to do the same work. The web is also not inherently discriminatory, because it does not force site designers to design pages as stupidly as, for example, Hewlett-Packard.
Would you care to comment on the situation, speaking as a tool designer? How would you feel if a talented programmer were unable to get a job due to a requirement for colored sight?
--Charlie
Small systems for small systems (Score:2)
Wouldn't this make Forth and similar small-footprint environments a natural choice for devices such as sub-$100 PDA's, and why does it seem that line of development is completely unexplored?
-jhp
Re:Small systems for small systems (Score:2)
-jhp
FFP, Combinator Calculus and Parallel Forth (Score:3, Interesting)
Re:FFP, Combinator Calculus and Parallel Forth (Score:2)
Re:Did you know... (Score:1)
You're offtopic, and surely will be modded down. My reply to you will not be modded up. Nevertheless: you (along with millions of others) have mistakenly identified Slashdot (and millions of other sites) as public services. Sorry, tax dollars didn't pay for Slashdot. It's privately owned. It belongs to its owners, who may do whatever they please with it, enact whatever rules they wish, and so forth. If you don't like it, write to the owners with suggestions. They'll ignore you if they like. Your only other option is to go away. Period.
-Leperflesh
Postscript ~= type checked Forth... (Score:3, Interesting)