Programming IT Technology

Chuck Moore Holds Forth

A little while ago you asked Forth (and now colorForth) originator Chuck Moore about his languages, the multi-core chips he's been designing, and the future of computer languages -- now he's gotten back with answers well worth reading, from how to allocate computing resources on chips and in programs, to what sort of (color) vision it takes to program effectively. Thanks, Chuck!

FFP, Combinator Calculus and Parallel Forth
by Baldrson

In his 1977 Turing Lecture, John Backus challenged computists to break free of what he called "the von Neumann bottleneck". One of the offshoots of that challenge was work on massive parallelism based on combinator calculus, a branch of mathematics that is far closer to Forth's formalism than parameter-list systems (which are more or less lambda calculus derivatives).

The prolific Forth aficionado Philip Koopman did some work on combinator reduction related to Forth, but seems not to have followed through with implementations that realize the potential for massive parallelism pursued in the early 1980s by adherents of Backus's Formal Functional Programming paradigm. Given recent advances in hierarchical grammar compression algorithms, such as SEQUITUR, that are one step away from producing combinator programs as their output, and your own statements that Forth programming consists largely of compressing idiomatic sequences, it seems Backus's original challenge to create massively parallel Formal Functional Programming machines in hardware is near realization with your new chips -- lacking only some mapping of the early work on combinator reduction machines.

It is almost certainly the case you are aware of the relationship between combinator reduction machines and Forth machines -- and of Backus's challenge. What have you been doing toward the end of unifying these two branches of endeavor so that the software engineering advantages sought by Backus are actualized by Forth machines of your recent designs?

Chuck Moore: What can I say? Backus did not mention Forth in his lecture. He probably didn't know of it then. Yet Forth addresses many of his criticisms of conventional languages.

He thinks a language needs or benefits from a formal specification. I grew up worshiping Principia Mathematica until I learned how Gödel refuted it. The result is that I distrust formal representations. For example, the ANSI Forth standard does not describe Forth, but a language with the same name.

Yes, I am struck by the duality between Lisp and Lambda Calculus vs. Forth and postfix. But I am not impressed by the productivity of functional languages. Even as research tools, they have failed to live up to their promise. By that I mean to do something with computers that I couldn't do more easily in Forth.
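
A minimal illustration of that duality, not taken from the interview: the same arithmetic written as a Lisp-style prefix expression and as postfix Forth, where each word acts directly on the stack.

\ Forth sketch: 1 + (2 * 3), written postfix -- no parentheses, no parameter lists
\ (the Lisp equivalent would be (+ 1 (* 2 3)))
1 2 3 * + .   \ * multiplies the top two stack items, + adds, . prints 7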

I designed the memory for the c18 to occupy the same area as the processor. This means small, fast and smart. c18 can respond to a bus request by fetching from its memory, accessing off-chip or performing a calculation. The 25x avoids the von Neumann bottleneck by making up to 27 memory accesses at the same time (2 off-chip). And its multiple buses do not substitute a network bottleneck for a memory one.

Standard code will be in the ROM of each computer. How this is customized in RAM and the computers assigned tasks is left to the ingenuity of the programmer, not a compiler. Automatically generated or factored code has never impressed me. Nor has automatic place and route for circuit boards or silicon. They are both an order of magnitude from human performance. Because humans understand the problem, judge the results and cheat as required.

Marginalizing of the blind
by Medievalist

When I built my first Internet node, the web did not yet exist, and one of the amazing things about the Internet was how friendly it was to the blind.

Now, with some computer experts estimating that over 50% of the Internet is incomprehensible to braille interfaces, and most computer operating systems devolving to caveman interfaces ("point at the pretty pictures and grunt") we seem to be ready to take the next step - disenfranchising the merely color-blind.

I realize that colorForth is not inherently discriminatory, in that there are a great many other languages that can be used to do the same work. The web is also not inherently discriminatory, because it does not force site designers to design pages as stupidly as, for example, Hewlett-Packard.

Would you care to comment on the situation, speaking as a tool designer? How would you feel if a talented programmer were unable to get a job due to a requirement for colored sight?

CM: I'm amazed at how effective blind programmers can be. I rely so strongly upon seeing the code that it's hard to imagine listening to it. Yet I know it can be done. Not being color-blind, it's hard to appreciate the degree of information loss. But it's less than being blind.

My goal is to develop tools that augment my abilities. If others can use them, fine. It would be foolish to lose an opportunity to explore or excel just to conform to some equalitarian philosophy. Too often our culture seeks the lowest common denominator.

20/20 vision is required for fighter pilots. I have no qualms about requiring color vision for programmers. Everyone does not need to be a programmer.

But in fact, color is merely a property of words that helps to distinguish them. As is intensity, size, font, volume and tone. I'm sure colorForth will be translated into these other representations. I, myself, will be exploring spoken colorForth. (As soon as I can decipher PC sound cards.)

Massively Parallel Computing
by PureFiction

The 25X system reminded me of IBM's Blue Gene computer, where a large number of inexpensive CPU cores are placed on a single chip.

The biggest problem in dealing with a large number of small cores lies in the programming, i.e., how do you design and code a program that can utilize a thousand cores efficiently for some kind of operation? This goes beyond multi-threading into an entirely different kind of program organization and execution.

Do you see Forth (or future extensions to Forth) as a solution to this kind of problem? Does 25X dream of scaling to the magnitude that IBM envisions for Blue Gene? Do you think massively parallel computing with inexpensive, expendable cores clustered on cheap dies will hit the desktop or power-user market, or forever be constrained to research?

CM: Forth is a massively pragmatic language: do whatever you can to solve a problem. Its strength is in the ease of violating whatever rules it has. The 25x is similarly pragmatic. I don't know how to program it yet, but I'm confident I can. It's just another level of factoring.

The parallelism provided by the 25x has a different slant from other parallel architectures. The computers are not identical. I expect many will have different ROM and different interface to the real world. This asymmetry is a powerful clue as to how applications will be factored.

A 10x10 array of 25x chips is an easy board to build. At 50 Watts, it needs as much power as a notebook. That's 2500 computers providing 6M Mips. I can't imagine programming them any other way than Forth.

The advantage of Forth in this kind of context is that it scales. Forth is the machine language, Forth is the high-level language, Forth is the task-control language, Forth is the supervisory language. Each of these has a different vocabulary, but they share syntax, compiler and programmer skills.
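
One way to read that claim, sketched here as an assumption rather than anything taken from the 25x ROM: the same colon-definition syntax serves at every level, with higher-level words built from lower-level ones (sensor@ is a made-up word standing in for whatever supplies the data).

\ hypothetical sketch: one syntax from arithmetic words up to a task-level loop
: celsius>f  ( c -- f )   9 5 */ 32 + ;                \ low-level arithmetic word
: report     ( c -- )     celsius>f . ;                \ higher-level word reusing it
: monitor    ( -- )       begin sensor@ report again ; \ task-level loop; sensor@ is assumed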

Back to the array of 25x chips. Each chip could be on a vertical and horizontal serial bus with 10 others. A half-duplex bus requires a computer to manage, so that accounts for 200 computers. Now whatever the application, data must be provided. Say 1GHz Ethernet. Data (and program) is received, distributed and crunched. The assignment and coding of computers follows the data flow. Results are routed back to Ethernet, or displayed or whatever. It's a nice programming problem, well within the ability of a human to organize.

Will this ever reach the mass market? I don't know.

The direction of 25x Microcomputer...
by Midnight Ryder

The 25x concept looks like it could really be a damned interesting idea. But one of the questions in my mind is where you want to head with it. Is this something that is to be used for very specialized research and scientific applications, or is this something that you envision for a general 'desktop' computer for normal people eventually?

Secondly, if you are considering the 25x for a desktop machine that would be accessible to people who aren't full-time geeks, what about software? Forth is a lost development art for many people (it's probably been 10 years since I even looked at any Forth code) and porting current C and C++ applications would be impossible -- or would it? Is there a potential way to minimize the 'pain' of completely re-writing a C++ app to colorForth for the 25x machines, which could help to speed adoption of a platform?

CM: At this stage the 25x is a solution looking for a problem. It's an infinite supply of free Mips. There's no obligation to use them all, or even very many. But they can effectively be used to eliminate hardware. To bit-bang what would otherwise need a controller. So if you want video or audio or radio or ...

The first applications will doubtless be embedded. These offer greater volume, less software and less market resistance than a general-purpose computer. I see 25x reaching the desktop as dedicated appliances rather than universal golems.

I'm not interested in recoding C applications. My experience indicates that most applications are hardware-dependent. The 25x is as large a change in the hardware environment as I can imagine. This changes the program so much it might as well be rethought and recoded. The most efficient way to do that is Forth.

Forth is a simple, interactive language. Its learning curve is steep with a long tail. You can be productive in a day/week. This depends only on how long it takes to memorize pre-existing words. Good documentation and management helps mightily. I'd rather train programmers than fight code translators.

That said, there are those who look at the mountain of existing applications and want to mine it. C to Forth translators exist and with some pre/post editing could produce code for the c18 core. How to distribute the application among 25 tiny computers would be a good thesis.

Quick question
by jd

I have often conjectured that multi-threaded processors (i.e., processors that can store multiple sets of internal state and switch between them) could be useful, as the bottleneck moves from the processor core to communications and dragging stuff out of memory.

(If you could microcode the "instruction set", all the better. A parallel processor array can become an entire Object Oriented program, with each instance stored as a "thread" on a given processor. You could then run a program without ever touching main memory at all.)

I'm sure there are neater solutions, though, to the problems of how to make a parallel array useful, have it communicate efficiently, and yet not die from boredom with a hundred wait-states until RAM catches up.

What approach did you take, to solve these problems, and how do you see that approach changing as your parallel system & Forth language evolve?

CM: The 25x could implement a multi-thread application nicely indeed. Except that most applications expect more memory than a c18 core has. Whereupon memory remains the bottleneck.

It's important to choose problems and solutions that avoid using off-chip memory. Even so, with 25 computers to support, I expect that every memory cycle will be utilized. The computer controlling memory can be smart about priorities and about anticipating requirements. For example, it could guarantee enough access to support display computers.

And the nice thing about memory-mapped communication is that a computer need not be aware of its environment. It's an ordinary Forth program accessing data asynchronously. Delays are invisible, as is synchronization. Of course, due care is required to avoid lock-up loops.
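
A rough sketch of what such an ordinary Forth program might look like; the addresses and the one-bit ready flag here are assumptions for illustration, not part of the 25x design.

hex
8000 constant status    \ assumed address of a memory-mapped ready flag
8004 constant data      \ assumed address of the data word
: read-sample ( -- n )
  begin status @ 1 and until   \ spin until ready -- exactly the kind of loop that needs care
  data @ ;
decimal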

These conjectures are fun. But in a year we'll have real applications to review. And a much better appreciation of the advantages and drawbacks of so many tiny computers.

Programming languages...
by Midnight Ryder

This one would probably require a bit more time to answer than you have available, but a quick rundown would be cool: Where do you see programming languages headed vs. where do you think they SHOULD be headed?

Java, C#, and some of the other 'newer' languages seem to be a far cry from Forth, but are languages headed (in your opinion) in the proper direction?

CM: I've been bemused with the preoccupation of new languages with text processing. I've been accused of not providing string operators in both Forth and colorForth. Indeed, I haven't because I don't use them. Editing a file to pass on to another program never struck me as productive. That's one reason I chose pre-parsed source, to break the dependence upon strings and text processors.

Languages are evolving, as evidenced by the new ones that arise. But as with natural evolution, the process is not directed. There is no goal to approach nor any reward for approaching it. But whatever progress you might perceive, I don't. New languages seem only to propose new syntax for tired semantics.

These languages are all infix. Which is extraordinarily clumsy for anything but arithmetic expressions. And even those are comfortable only because we learned them in Algebra 101. Do you remember the learning curve?

Does everyone really think that 50 years into the computer age we have hit upon the ultimate language? As more and more C code accumulates, will it ever be replaced? Are we doomed to stumble over increasingly bloated code forever? Are we expecting computers to program themselves and thus save civilization?

I'm locked in the Forth paradigm. I see it as the ideal programming language. If it had a flaw, I'd correct it. colorForth uses pre-parsed source to speed and simplify compilation. This solves a non-problem, but it's neat and worth exploring. At least it proves I haven't gone to sleep.

What about memory protection?
by jcr

From the web pages, I don't see any mention of access control.

Can this processor be used in a multi-user, general-purpose mode?

CM: If you had a chip, you'd physically control access to it. It doesn't make sense for another person to share your chip. He can get his own. Certainly an individual c18 has too little memory to multi-task. And I doubt 25 computers could run 25 tasks.

But the 25 computers can certainly perform more than one task. They have to share resources: communication buses, off-chip memory and interfaces. Access is negotiated by the computer in charge of the resource. There is no hardware protection. Memory protection can be provided by the access computer. But I prefer software that is correct by design.

Communication with other computers, via internal or external buses, is subject to the usual problems of scheduling, routing and authentication. Internally, at least, my goal is to minimize delay rather than attempt protection. I anticipate spectacular crashes while software is developed. (Have you ever crashed 2500 computers?)

Where is forth going?
by JanneM

I learned forth early on in my programming career; it was very memory and CPU efficient, something that was important on early microcomputers. It was also a great deal of fun (though far less fun to try and understand what you wrote a week earlier...). Today, even small, cheap microcontrollers are able to run fairly sophisticated programs, and it is far easier to cross-compile stuff on a 'big' machine and just drop the compiled code onto the development board.

Forth has (in my eyes) always been about small and efficient. Today, though, embedded apps are more likely to be written in C than in forth, and the "OS as part of the language" thing isn't as compelling today as it was in the eighties. Where is forth being used today, and where do you see it going in the future?

CM: Forth is being used today as it always has been. In resource-constrained applications. I think they will always exist. I'm creating some with the tiny c18 computers in the 25x. I imagine molecular computers will be limited when they first appear.

Personally, I don't mind losing a mature market that can afford abundant resources. Such applications aren't as much fun. But Forth isn't restricted to small applications. Even with huge memories and fast processors, small, reliable programs have an advantage.

The major project cost has become software, to the dismay of managers everywhere. On-time, bug-free software is the grail. Forth doesn't guarantee it, but sure makes it easier. Will this ever be convincingly demonstrated? Will management ever value results over procedures?

The currently popular language is selected by uninformed users. The only thing in favor of such democratic choice is that it's better than any other. But why would anyone want to debug 1M lines of code instead of 10K?

What's the next Big Computational Hurdle?
by DG

Now that sub-$1k computers are running in the GHz range, it seems that all the computational tasks on a common desktop system are not processor-bound.

3D, rendered-on-the-fly games get well over 30 frames per second at insanely high resolutions and levels of detail. The most bloated and poorly-written office software scrolls through huge documents and recalculates massive spreadsheets in a snap. Compiling the Linux kernel can be done in less than 5 minutes. And so on.

It seems that the limiting speed of modern computers is off the processor, in IO. What, then, do you foresee coming down the pike that requires more processor power than we have today? What's the underlying problem you intend to solve with your work?

CM: Memory is cheap. I don't mind wasting memory as long as it's not full of code that has to be debugged.

Likewise, Mips are cheap. The trick is to find productive ways to waste them. A Pentium waiting for a keystroke isn't very clever.

So here's a huge pool of Mips. What can you do with them? Voice recognition comes instantly to mind. Image recognition close behind. The brain deploys substantial resources to these tasks, so I suspect a computer must.

IO is indeed a bottleneck, but not in principle. If you can't get data from the camera to the computer, combine them. Put the image recognition algorithms in the camera. Analyse, reduce, compress data at the source. Meanwhile, it helps to have multiple paths off-chip.

revolutionary
by rnd

What is the most revolutionary (i.e., it is scoffed at by those in control/power) idea in the software industry today? Explain how this idea will eventually win out and revolutionize software as we know it.

CM: Forth! But then I haven't been out looking for revolutionary ideas. I like the phrase Baldrson used above: compressing idiomatic sequences. If you do this recursively, you obtain an optimal representation. I see no way to get a more compact, clear, reliable statement of a problem/solution.

Forth clearly revolutionizes software as most know it. It could lead to efficient, reliable applications. But that won't happen. A mainstay of our economy is the employment of programmers. A winnowing by a factor of 100 is in no one's interest. Not the programmers, the companies, the government. To keep those programmers busy requires clumsy languages and bugs to chase.

I don't have to be glib or cynical. Those are facts of life. Society must cope with them. But I don't have to. Nor you. There are niches in which you can be creative, productive, inspired. Not everyone can be so lucky.

Forth as intermediate language
by Ed Avis

Many high-level languages compile into C code, which is then compiled with gcc or whatever. Do any use Forth instead? I understand Forth is a stack-based language: doesn't that present problems when compiling for CPUs that mostly work using registers?

CM: I remember my shock at learning that Fortran compiled into Assembler, that then had to be assembled. A language that can be translated into another is clearly unnecessary. Truly different languages cannot be translated: C into Lisp.

Forth would make a fine intermediate language. But why have an intermediate language? It introduces another layer of confusion and inefficiency between the programmer and her computer. Macros were invented to support compiling directly to machine code.

Stacks are a compiler-friendly construct. Every compiler has to use one to translate infix notation to postfix. If this stack solution has to be assigned to registers, it's an extra step. Forth uses stacks explicitly and avoids the whole subject.
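
A small sketch of what using stacks explicitly looks like in practice (the four variables are assumed for illustration): the programmer writes the postfix order that a compiler would otherwise have to derive, and no registers are ever assigned.

variable a   variable b   variable c   variable d
: f ( -- n )   a @ b @ +   c @ d @ -   * ;   \ computes (a+b)*(c-d) directly on the stack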

Register-based CPUs have more problems than just the complexity of their compilers. Their instructions must contain register addresses, which makes them longer and programs bigger. And it is rare that every register can be used in every instruction.

Moreover registers need to be optimized. After assigning system registers, performance depends on how well the remaining registers are handled. Compilers can optimize infix expressions better than humans. But such expressions are no longer the preferred means of doing arithmetic. DSPs and super-computers integrate difference equations.

Design guidelines encourage code with many subroutine calls each with only a few arguments. This is the style Forth employs. But it plays havoc with optimization, since register usage must be resolved before each call. So apart from being unnecessary and difficult, optimization has no effect on good code.

Comments
  • The Blind (Score:3, Interesting)

    by Kallahar ( 227430 ) <kallahar@quickwired.com> on Friday September 14, 2001 @01:22PM (#2299612) Homepage
    "Now, with some computer experts estimating that over 50% of the Internet is incomprehensible to braille interfaces, and most computer operating systems devolving to caveman interfaces ("point at the pretty pictures and grunt") we seem to be ready to take the next step - disenfranchising the merely color-blind."

    This brings me back to the BBS days. One of the best people in the area was a blind judge. He used a text->speech program which allowed him to do everything on the BBS that everyone else did. Since the BBS's were all text-only anyway, the interface was easy. Nowadays we have so many sites that are design-centric that I can't see how people with disabilities get around.

    I always strive to keep my sites simple and clean (like slashdot) so that the site can be more easily used by anyone, anywhere.

    This isn't to say that flash, etc. shouldn't exist, but I don't think that they belong on a business-oriented site.

    Travis

  • by heatsink ( 17798 ) on Friday September 14, 2001 @01:29PM (#2299647)
    "Now, with some computer experts estimating that over 50% of the Internet is incomprehensible to braille interfaces"

    Isn't this because over 50% of the Internet is porno?
    • It has to be way more than that! It's everywhere, in every single part of the internet!
    • That, and probably because half of them also don't look right in links. I can't imagine how difficult it must be for a blind person to try to navigate sites with multiple frames and poor info ordering. I mean, there are no big candy-coated indicators for braille users to look for when they're scanning a website via a braille interface. But web designers probably don't take this into account because their managers usually just want to see cool designs that they can see and that are easy to navigate for them (aka usually people who can see).

      F-bacher
    • Probably right. In which case, it's incomprehensible to any device based on logic.
      • by Anonymous Coward
        ?

        Why can't logical devices comprehend porn? The reason people look at porn is because it turns them on. We'll define this as "good." Then it's only logical to conclude that if a logical device understands what's good, then it can understand porn.
  • The language that Chuck Moore created for the control of telescopes in astronomy became so valuable for use in personal robotics that Mind.Forth for robots has resulted at the http://sourceforge.net/projects/mind/ [sourceforge.net] website, where more than three hundred Open Source AI projects are rushing to introduce real artificial intelligence. Mind.Forth has a companion version in MSIE JavaScript at http://mind.sourceforge.net/ [sourceforge.net] -- with Tutorial and hard-copy Troubleshoot print-out options.
  • by watanabe ( 27967 ) on Friday September 14, 2001 @01:41PM (#2299707)
    Chuck is an unabashed Forth zealot. I guess this is fine; I'm sure Forth is great. But, I get the feeling he is so focused on the details (Commenting that programs don't need to read text to do something useful? Hello? How was this slashdot interview pulled from a database and sent to my web browser?) that I feel like reading stuff from him will only be interesting if I want to know something about Forth. This, compared to Neal Stephenson, say.


    I even get the impression that Chuck would be happy that I noticed this about him. It just makes me think that there will be no revolution coming from the Forth camp. Which, I hadn't really expected, anyway. But, I'll cross it off my list of possible revolution starters, anyway.

    • Linus is an unabashed Linux zealot. I guess this is fine; I'm sure Linux is great. But, I get the feeling he is so focused on the details that I feel like reading stuff from him will only be interesting if I want to know something about Linux.

      Yes, the comparison applies. Chuck Moore wrote Forth in the beginning. Of course he loves it: writing a programming language (at least a useful one) is work, and you wouldn't do it except because you really, really wanted to.

      Put it another way: If Dennis Ritchie were to be interviewed on /. would you be shocked if the C programming language were mentioned from time to time?

  • Compression (Score:4, Insightful)

    by donglekey ( 124433 ) on Friday September 14, 2001 @01:43PM (#2299720) Homepage
    One of the things that can never get enough power is compression. Right now the next generation of image and video compression looks like JPEG 2000 and Motion JPEG 2000. I have tested it and it seems like a miracle compression, it consistently works at least 4x better than jpeg compression. Apply that to video and you have something incredible but very VERY expensive. He said it was a solution looking for a problem, and there it is. I think that HDTV video disc standards are leaning toward mpeg4 which would be a mistake in my opinion. Motion JPEG 2000 would be much more forward thinking if someone could pull it off. I say go for it Chuck.

    More immediately, using a 25x chip in a digital camera to compress large pictures to JPEG2k could save lots of space and, more importantly, quality. Seems like a perfect marriage to me!
    • Mpeg4 is not Divx!!

      MPEG4 is an incredibly complicated scene graph compression scheme. Currently people are merely using it as a transport stream format with mpeg1 video compression.

      A real mpeg4 encoder has to be content aware, and right now that's beyond our abilities (and hardware).

    • One of the things that can never get enough power is compression. Right now the next generation of image and video compression looks like JPEG 2000 and Motion JPEG 2000. I have tested it and it seems like a miracle compression, it consistently works at least 4x better than jpeg compression. Apply that to video and you have something incredible but very VERY expensive. He said it was a solution looking for a problem, and there it is.

      Actually, you'd almost certainly be better off just building dedicated hardware optimized for JPEG/MPEG compression, like you already see in the hardware decoder cards for DVD. These either implement common computation-intensive parts of the CODEC algorithms in hardware, or use DSPs to implement them in firmware using hardware that's geared towards signal processing.

      Running on a general-purpose device, even a parallel one, won't get you these benefits.
      • Actually, you'd almost certainly be better off just building dedicated hardware optimized for JPEG/MPEG compression, like you already see in the hardware decoder cards for DVD. These either implement common computation-intensive parts of the CODEC algorithms in hardware, or use DSPs to implement them in firmware using hardware that's geared towards signal processing.

        Running on a general-purpose device, even a parallel one, won't get you these benefits.

        Uhm, a DSP is a general-purpose parallelized device.

        It's not like their adders and multipliers are any faster than those in modern processors. The only real difference between a DSP and a CPU is that DSPs have typically embraced parallelism in hardware (VLIW, multiple cores on a chip, pipelining) to a greater extent than contemporary CPUs, at the necessary expense of backwards compatibility with earlier models.

        25x looks to me like it could be a GREAT chip to do common DSP tasks like filtering or block discrete cosine transforms on.

        • Running on a general-purpose device, even a parallel one, won't get you these benefits.

          Uhm, a DSP is a general-purpose parallelized device. It's not like their adders and multipliers are any faster than those in modern processors. The only real difference between a DSP and a CPU is that DSPs have typically embraced parallelism in hardware (VLIW, multiple cores on a chip, pipelining) to a greater extent than contemporary CPUs, at the necessary expense of backwards compatibility with earlier models.

          We seem to be using the term "general-purpose" differently.

          The 25x chip is an array of more-or-less independent cores, optimized for a general-purpose instruction set and geared towards SISD integer instructions, with each processor running its own instruction stream with its own control flow.

          A generic DSP chip is a single-instruction-stream core optimized to do things like dot products, multiply-accumulates, FFTs, and/or various other signal-processing-specific operations in parallel (SIMD-style) in hardware.

          Ask a DSP processor to play chess, and it'll crawl.

          Ask it to perform frequency-domain feature extraction or to do geometry transformations for rendering, and it'll work at blinding speed, because much of the control flow and register shuffling and memory shuffling that you'd have for these tasks with a general-purpose chip doesn't have to be performed.

          A DSP's instruction set and chip hardware are geared towards a narrow class of applications (signal processing), and as a result it does these tasks (and pretty much only these tasks) extremely well.

          (Note to purists: I'm considering a stream of VLIW instructions to be a stream of "single" instructions for purposes of this thread, because you don't have independent control flow in the multiple instructions per clock being executed. This is an arbitrary terminology distinction on my part.)
          • The 25x chip is an array of more-or-less independent cores,

            You mean like Texas Instrument's TMS320C80 [ti.com] DSP?

            optimized for a general-purpose instruction set and geared towards SISD integer instructions, with each processor running its own instruction stream with its own control flow.

            So it seems like it can do everything a DSP can vis a vis hardware-level parallelism, plus some things that are hard for SIMD-style DSPs. Which is faster, 25 SISD processors sharing an address space, or one SIMD processor which can operate on 25-ary vectors? Multiple SISD cores seem to win, I think, though they may be harder to program for.

            Ask a DSP processor to play chess, and it'll crawl.

            The main bottleneck in a chess program is game tree search. Tree searches are inherently parallelizable, and there are, in fact, very good algorithms [nec.com] for doing them on SIMD processors.

            • So it seems like it can do everything a DSP can vis a vis hardware-level parallelism, plus some things that are hard for SIMD-style DSPs. Which is faster, 25 SISD processors sharing an address space, or one SIMD processor which can operate on 25-ary vectors? Multiple SISD cores seem to win, I think, though they may be harder to program for.

              I don't see how you support this conclusion.

              I agree that multiple SISD cores are more _flexible_. That isn't the issue.

              If you're doing, say, a dot product on multiple SISD cores, you have to deal with several instructions dedicated to control flow.

              If you're doing the same dot product on a SIMD core, you don't have any loop overhead.

              Heck, even if you're just doing a MAC operation, the example holds. Which would make best use of processor capabilities - issuing a multiply and then an add, or issuing one operation with data flow built into the hardware?

              I'll place my speed bets on SIMD and hardwired data flow, thank you.
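
              To make the loop-overhead point concrete, here is a rough Forth sketch of a scalar dot product (not from this thread); the DO...LOOP bookkeeping and the fetch/shuffle words are exactly the work a hardwired multiply-accumulate does not spend instructions on.

              variable acc   \ running sum
              : dot ( addr1 addr2 n -- sum )
                0 acc !
                0 ?do
                  over i cells + @        \ fetch addr1[i]
                  over i cells + @ *      \ fetch addr2[i], multiply
                  acc +!                  \ accumulate
                loop
                2drop acc @ ;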
  • by Angst Badger ( 8636 ) on Friday September 14, 2001 @01:47PM (#2299735)
    The thing I have always liked best about Chuck Moore is that, whether you agree with him or not on a particular point, his ideas are always interesting and original. He's not afraid to follow his own judgment wherever it leads, and while he may perhaps end up following more blind alleys than a conventional thinker, it's people like him who will also make the most breakthroughs. In this day of C++/Java/XML/insert-other-orthodoxy-here, it's good to have someone like Chuck Moore around to remind us that computing can still be exploratory and experimental, and that you can still make a living without following the herd.
    • In this day of C++/Java/XML/insert-other-orthodoxy-here, it's good to have someone like Chuck Moore around

      I fail to see how replacing today's orthodoxy with Forth orthodoxy is any great step forward. Different languages have different expressive purposes. Would you rather code a web-log report tool in Perl or Forth? Moore sidesteps the problem by saying that text-processing problems don't appeal to him. That doesn't invalidate the existence of such problems.

      I think Forth has its place. I've used it and found it effective for certain problems. But to say
      I remember my shock at learning that Fortran compiled into Assembler, that then had to be assembled. A language that can be translated into another is clearly unnecessary
      borders on stupidity. An assembler simply translates into binary; why not skip both steps and bring back the panel switches a la the MITS Altair? Basically, Moore has simply found an assembly language for a virtual machine that works for him. He called it Forth, and is now spending his time building hardware that matches that virtual machine.

  • I'll tell you one thing: he sure has Forth-colored glasses on....


    I'm not bashing Forth. I think zeal can be a healthy thing. But, you know, zeal about a concept, not about one isolated tool/approach.

    Nate

  • Hmmm (Score:5, Insightful)

    by Reality Master 101 ( 179095 ) <`moc.liamg' `ta' `101retsaMytilaeR'> on Friday September 14, 2001 @01:54PM (#2299781) Homepage Journal

    Forth clearly revolutionizes software as most know it. It could lead to efficient, reliable applications. But that won't happen. A mainstay of our economy is the employment of programmers. A winnowing by factor 100 is in no one's interest. Not the programmers, the companies, the government. To keep those programmers busy requires clumsy languages and bugs to chase.

    To be honest, to me this invalidates everything else he said. If you have to depend on a conspiracy to figure out why your pet language is not universally adopted, then you are not living in reality.

    I used Forth a long time ago. In fact, I advocated using Forth for the game company I worked at because I liked its simplicity and compactness. But I realize now that the practical measure of a language is how easy it is to maintain it... and Forth is not that language.

    It kind of reminds me of APL zealots (yes, there used to be those, and they're probably still around in hiding). They claimed much of the same things... that APL should be the language that everyone uses (I remember someone trying to convince me that APL would be a great language for an accounting system). They would NEVER admit that APL was hard to maintain.

    I think this guy needs to pull his head out of the clouds and realize that there just might be reasons other than conspiracy that Forth is not more widely used. Forth had its time in the sun, and it was eventually rejected.

    • I agree with you to an extent, but Forth hasn't been rejected at all!

      Forth is in broad use for very low-level embedded stuff - doesn't every Sun computer have Forth embedded in it? FreeBSD uses Forth in its boot process, and I'm certain that there are others...
      • I agree with you to an extent, but Forth hasn't been rejected at all!

        I should have said that it's been rejected as a "general purpose language". I think Forth has a lot of value in embedded applications.

    • Re:Hmmm (Score:3, Funny)

      by Viadd ( 173388 )

      They would NEVER admit that APL was hard to maintain.

      But you don't have to maintain an APL program. If, for instance, there was a bug in the operating system kernel, and it were written in APL, you would just re-write it. How hard is it to rewrite one line of code?
    • Re:Hmmm (Score:4, Insightful)

      by William Tanksley ( 1752 ) on Friday September 14, 2001 @02:42PM (#2300046)
      To be honest, to me this invalidates everything else he said. If you have to depend on a conspiracy to figure out why your pet language is not universally adopted, then you are not living in reality.

      Read again. He did NOT claim any form of conspiracy; he simply identified common interest. There IS a common interest in all those sectors in staying employed.

      At the same time, there's a more powerful common interest at stake: making money. It's better to make money than to stay employed (in a corporation which refuses to change to a more efficient concept, and which will therefore soon lose business).

      So it's clear to me that Chuck's analysis is simplistic. But so is yours -- at least he's correctly identified a problem.

      The real problem with Forth is that it achieves its successes by being fundamentally different. The theory behind Forth is totally at odds with the theory behind all other languages, with the exception of Postscript and Joy. And until the last couple of years, nobody had done ANY work on understanding that theory, and from that understanding coming up with simple ways to explain what makes a Forth program "good" or "bad".

      Thankfully, we now have some work done on that, and I believe that the clear result is that all modern Forths are screwy. They encourage writing code which is bad, and therefore hard to maintain. This isn't the fault of Forth; it's the fault of the vocabulary Forth now has. The success of C wasn't due to the language (although it was good, it was only a minor improvement over previous languages); it was the teaming of the language with the library and runtime (AKA Unix).

      Work is ongoing... See the concatenative languages group [yahoo.com] for more discussion and a very informative link.

      As for APL... We have one APL "zealot" (your word) on the concatenative languages list. He's an excellent help; he can see right through many of the trivial differences in syntax to the problem actually being solved. There's no doubt that APL is a great language, and provides a marvelous "transform", converting a problem from one domain (its native domain) to another domain (the array programming domain) in which it can sometimes more easily be solved. It's like a Laplace transform -- a wonderful tool for problem solving. One doesn't maintain a Laplace transform when the problem changes; one simply reworks the math.

      Forth is different; in Forth you don't transform the problem into a form which fits the language; instead, you transform the language into a form that fits the solution domain.

      -Billy
      • He effectively said there was a conspiracy to perpetrate bugs, which there isn't. Beyond that, what the other poster said makes a lot of sense. I like Forth too, and have tried to do some personal project in it. But I find that when I inevitably get distracted, and return to the projects, trying to re-read the code and figure out where exactly I was is damn near impossible.

        This is not to say that you can't write code in other languages that is just as incomprehensible, but I find that I can pick up a perl script I wrote 2 years ago and read it right away. I can't do that with Forth. It would take a lot of effort, and continuous programming (which I don't do in perl either) to remain fluent enough in Forth to do that. And since I only have these intermittent projects written in it....it's not worth the hassle.

        Like it or not, Forth is not the most easily maintained language out there, and there are languages that do lend themselves more easily to long term maintainability.

        • Re:Hmmm (Score:3, Informative)

          He effectively said there was a conspiracy to perpetrate bugs, which there isn't.

          He most certainly did NOT say that. He said that there is a common interest in keeping to the current ways of doing things, and further that the current way is less efficient than his way. His first statement is obviously true; he doesn't infer from this that there's any conspiracy, nor does he need to. His second statement is not obviously true, and in fact I believe it misses the point. It may be true that his way is actually more efficient, but the costs of converting everything to work his way would be very high indeed. When you add in the fact that the theory behind Forth isn't completely understood and is only beginning to be explored, it's very clear that now isn't the time to switch to Forth, and even more clear that 1980 was worse.

          This is not to say that you can't write code in other languages that is just as incomprehensible, but I find that I can pick up a perl script I wrote 2 years ago and read it right away.

          What can I say? You know Perl, and you know how to write clear code in it. Try reading a hardware engineer's Perl code (i.e. someone who didn't grow up writing software), and see how far you get.

          I can't do that with Forth.

          Odds are that you can -- I certainly can, and I don't have an immense amount of experience with Forth. You simply have to realise that Forth is different, and you have to learn how those differences affect the ideal coding style.

          For example, you're used to a vertical code layout:

          function1
          function2
          function3

          In Forth, the ideal code layout is horizontal:

          function1 function2 function3

          You're also used to keeping your routine size down to a page or two -- any bigger, and it gets too hard to maintain, and any smaller and the parameters take up too much space to type and too much time to pass to the function. In Forth the ideal routine size is about 50 characters.

          In both Perl and Forth, you can write unit tests to check your code. In Forth, you're encouraged to keep them in the same source file, and run them as part of compilation. There are tools to help -- one of my favorites defines the words "testing", "{", "--", and "}".

          testing addition
          { 3 4 + -- 7 }
          testing associative property of addition
          { 3 4 5 + + -- 3 4 + 5 + }

          Include a section like that after every word definition, and update it regularly whenever you try to use the word in a different manner, and you'll have a much easier time maintaining the word when it has to be changed.
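
          For instance, such a section might look like this; square and its tests are made up here, but they use the same test words described above.

          : square ( n -- n*n )   dup * ;
          testing square
          { 3 square -- 9 }
          { -4 square -- 16 }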

          (I'm not claiming that unit testing is new to Forth, although Forth was one of the first languages to use it heavily as part of the language. Perl and Python in particular can be used very effectively for unit tests, since they're interactive.)

          Anyhow, my point is that the rules for programming in Forth are very different than those for other languages.

          -Billy
      • Re:Hmmm (Score:2, Interesting)

        by fusiongyro ( 55524 )
        Read again. He did NOT claim any form of conspiracy; he simply identified common interest. There IS a common interest in all those sectors in staying employed.

        According to this logic, we would never have moved to C from assembly language.

        Daniel
        • According to this logic, we would never have moved to C from assembly language.

          Yes, taken strictly and applied to all cases, it means that. But only a great fool would do so. You are clearly not a great fool; therefore, I cannot take the glass in front of you. Sorry. I mean that I can't assume that Chuck meant that generality.

          C is not a huge step up from assembly language in bug-freeness. It's an improvement in consistency of code, and it makes structure easier to discern, but there's a reason it's called "portable assembly".

          -Billy
      • Forth is different; in Forth you don't transform the problem into a form which fits the language; instead, you transform the language into a form that fits the solution domain.

        And how is this different from OO?

        I realise that Forth has little in common with C++ or Java, but the whole point of OO is that you can define new types that model the solution domain. So in Forth you define new 'words' that model the solution domain, and this is so very different?

        • Very good question. This is fundamentally different from OO. For example, Forth is not an OO language, but if your problem fits OO, you can easily transform Forth into OO. Forth isn't aspect-oriented, either, but again, it can be made so.

          In Forth you don't merely define new types that model the solution domain; rather, you define a new _language_ in which the solution domain can be expressed naturally.

          A common and desired result is that the solution portion of your program (as opposed to the part in which you're defining your language) can be read by an expert in the solution domain who knows NOTHING about computers. Nothing; not even how to read pseudocode. The final result may look like English; more often it looks like the formal terms appropriate to the solution.

          Forth people are proud of their almost total lack of syntax; even so, sometimes the problem requires syntax. In those cases too Forth has been extended; there are at least two Fortran/BASIC style parsers, and one general-purpose parser generator.

          -Billy
    • I've seen a complete accounting system written in APL. I've also seen a complete banking system, also written in APL, deployed across tens of small financial companies and hundreds of sites. Both worked well.
    • It kind of reminds me of APL zealots (yes, there used to be those, and they're probably still around in hiding). They claimed much of the same things... that APL should be the language that everyone uses (I remember someone trying to convince me that APL would be a great language for an accounting system). They would NEVER admit that APL was hard to maintain.

      Nowadays most of us APL zealots have moved on to J, K, or A+, all of which have most of the traditional procedural primitives: for, while, do, if, etc. There are many programming tasks where maintenance is not an issue but speed of development is, for example, rapid prototyping. Structured and commented APL is not hard to maintain if written by a competent programmer.
    • If you have to depend on a conspiracy to figure out why your pet language is not universally adopted, then you are not living in reality.

      Good point. Forth has been around for a long time, and it's never really caught on. I've written a few thousand lines of it myself, for an embedded application. It's not that there's any big opposition to it, or that people can't understand it; it's that it just isn't all that great.

      Big arrays of little machines aren't that useful either. That approach has been tried; the hardware can be built, but few problems fit that model well. The ILLIAC IV, the Hypercube, the BBN Butterfly, and the Connection Machine all fit that model, and they were all dead ends. The two extremes of that spectrum, shared memory multiprocessors and clusters of networked machines, are both useful. Nothing has been found in the middle that's equally useful.

      The trouble with stack machines is that they have a worse Von Neumann bottleneck than register machines. Everything has to go through the top of the stack. It's probably possible to get around that with a superscalar architecture. (The FPU in x86 machines is a stack architecture, yet all modern implementations are able to get a few instructions going simultaneously.) But Moore would rather have lots of dumb processors than superscalar ones.

      • The trouble with stack machines is that they have a worse Von Neumann bottleneck than register machines. Everything has to go through the top of the stack.

        Hmmmm.... I wonder about that. If you designed a CPU for Forth (or another stack-based language), you could have something where the first, say, 10 elements of the stack are in registers, and the rest in memory. As the stack grows, the bottom elements are moved off into main memory. Vice versa for shrinking. If done cleverly, there wouldn't be much need for copying data around.

        Of course, I'm sure someone's already thought of this...

        • Oh? Really? You think you could do that?
          Learn about CRISP by Ditzel et al. from Bell Labs. Damn near 20 years ago, amazingly efficient stack based RISC processor.
        • by wjw ( 261380 )
          have something where the first, say, 10 elements of the stack are in registers, and the rest in memory.

          Usually in Forth you won't put more than a few elements on the stack. In fact one of Chuck's processors had a stack only 5 cells deep.

          While some software Forth systems give you 64 or more stack cells, it is advocated that a good Forth program won't use more than 10 or 15.
        • Of course, I'm sure someone's already thought of this...

          Yup. Burroughs B5000, circa 1960, a stack-machine mainframe. An elegant design, and far, far ahead of its time. A stack-machine shared memory symmetrical multiprocessor with paging and tagged memory in 1960. Reasonable performance for its day, and a successful product; banks used them heavily for years. The B5000 was followed by the B5xxx, B6xxx, and B8xxx product lines, with compatible architecture, and the product line continued until well after the merger with Sperry to form Unisys. Definitely the most successful of the stack machines.

      • The trouble with stack machines is that they have a worse Von Neumann bottleneck than register machines. Everything has to go through the top of the stack.

        At the same time, though, because you don't have a register file, you don't have to have an instruction decode stage. As a result, your instruction cycle time goes down.

        The cycle time needed for these machines is truly amazing. I built one back in college, and outperformed every other chip in the class.

        -Billy
    • I am not speaking directly about APL, but more about its successors: J and mostly K. I used to think it was hard to stare at a page of K, but after programming in it for a few months, I find it very hard to look at Java code. Code compaction allows the programmer to get the big picture in one page of code. I do not need to constantly flip between multiple pages of code to determine what is happening. I have found myself not using bookmarks or tags in code. Just as you became trained to stare at C-like statements, you also become trained to look at APL/J/K sentences.

      There are some very powerful ideas in the APL family. The ability to read code like prose, where each symbol on the screen has an English equivalent (e.g., the "," operator is pronounced join). I was very sceptical when my roommate showed me one of these languages, K, but soon we were debugging incredibly complex functions just by speaking the phrases aloud. For example, in K:

      x is the default first argument to a function

      x is used here as a vector
      x is pronounced x

      + is pronounced plus
      / is pronounced over
      # is pronounced count
      % is pronounced divide

      "(+/x)%#x" is pronounced "plus over x, divide count x"

      Even more powerful is how idioms are built from this. This is what is referred to as building your vocabulary:

      "+/" is also pronounced sum

      "(+/x)%#x" is then pronounced "sum x divide count x"

      the whole construct is really the average.

      : is pronounced is or gets
      * is pronounced times

      "weighted_average:{(+/x*y)%+/x}"
      is pronounced "weighted_average is sum x times y divide sum x".

      Yes, you can abstract it away as a function in this very simple example, but the point is more of how powerful idioms can be and how you build your vocabulary, especially, when interacting with people across the room. Compare this to the C equivalent:

      [not shown; censored by lameness filter; wtf?]

      I am very new to this whole APL/J/K thing, so things are probably a bit off.

      In the C-like languages idiomatic expressions are not nearly as clear. I only have a 19" monitor, so I use K to make my monitor a JumboTron.

      -j
    • There is a need and desire for more than 100 times the software currently produced per annum. So, if you did make programming 100 times more efficient, you wouldn't be in danger of putting any programmers out of work except for those who could not adapt to the more efficient methods.
  • by jd ( 1658 )
    The replies are fascinating insights. It's a pity he didn't spend more time on each one, but he DOES have a "paying" job & real life, too. I guess that kind of limits people on the time they can spend on an interview.


    This is NOT a criticism, though, more wishful thinking. What was said was fascinating, and there is a lot there to think over. Doubly so, since it IS just the tip of the iceberg.

    • If you look at his web page, you will see that Chuck is always very terse. I think this is a side effect of working with Forth so much; he carries his style over into English.

      In the documentation for ColorForth, for example, he often describes a feature with a single sentence and moves on.
      • Then I guess I'll thank God he doesn't speak with a LISP. The parentheses would drive me nuts. (Hmmm. I'm ALREADY nuts. Maybe I wouldn't be too badly affected, then.)
  • by skroz ( 7870 ) on Friday September 14, 2001 @01:56PM (#2299791) Homepage
    I have no qualms about requiring color vision for programmers. Everyone does not need to be a programmer.
    Well thanks, Chuck. I'm not completely blind, but close enough that your colorForth is inaccessible, as is most web content. Thanks for telling me that I don't have to be a programmer. Guess what? I want to be, buddy. And while I don't think I'm the best coder that ever walked the earth, I think I can get the job done. But thanks to people with dismissive views such as yours, it's becoming harder and harder every day to do what I enjoy: coding. It's bad enough that most developers don't consider the needs of people with disabilities, but to hear someone who has considered then DISMISSED those needs is truly disheartening.
    • If he ever refused to hire somebody based on their inability to see color, I bet he'd lose in Court, lose bad too.

      Chuck would have to prove that the ability to perceive colors is MANDATORY for coding, which it is not. It's understandable that people in wheelchairs don't run marathons, because a prerequisite to running is having legs. The only prerequisite for programming is a brain that contains knowledge of the language and some way to relay thoughts. There are braille keyboards for the second, and I'm assuming the previous poster has a brain.

      If guys are successfully suing Hooters to be able to work there (actually I haven't heard about this in a while, does anyone have an update?), then blind programmers could definitely win this case.

      F-bacher
      • If he ever refused to hire somebody based on their inability to see color, I bet he'd lose in Court, lose bad too.

        If it was for a generic programming job, I'd agree. For a generic job in *any* field, I'd agree.

        If he was hiring for a specific job, which involved programming in a language where color was a key element, it could be a stated requirement that the applicant be able to see color.

        In that case, I think he'd prevail -- simply because it's a stated requirement.

        The same as if there was a manual assembly line job that involved separating piles of red and green items that were otherwise identical - I think that the job would, in part, require the applicant to be able to distinguish red from green.

        In a generic setting, the ability to see color has no relevance to programming -- but in certain niche fields or jobs, it might.

        As for the original poster - I'd be put off too, if I was in your situation. But it would only be a minor setback. Perl, C, and a myriad of other great languages don't require color vision (or any vision at all - although it might be hard to produce GUI apps without sight)

      • If he ever refused to hire somebody based on their inability to see color, I bet he'd lose in Court, lose bad too.

        I applied for a job at Motorola back in '93. They had two open positions, one in the wafer fab and another in the test department. The fab meant wearing bunny suits and dealing with etch chemicals; the test department meant endless boredom and $2/hr less. I wanted the money, but I could not get the job because I am severely color blind (11/13 color plates failed) and the fab job requires reading acid green scales to determine chemical grades.

        The moral of the story? Yes, you can "discriminate" between your employees based on their aptitudes, skills, and physical abilities if those come in direct conflict with their ability to perform the specified job. There's nothing illegal about it, and in most cases it is completely sensible.

        I was still miffed about the pay differential, though, and my inability to do the job that paid more. Such is life.

    • Not everything can be made accessible to everyone, and that is his point. He isn't going to rethink his design because some people can't see well. He didn't mean that blind people shouldn't program, he meant blind people don't need to program Forth. He said that he knew lots of good blind programmers.

      I do a lot of 3D animation. Why don't you go after Alias|Wavefront or Newtek for not making their products accessible to the blind? Your lack of sight is an obstacle that you will have to overcome, and it will limit you in some things. Just accept it. You can't use colorForth; it's not a big deal. You probably shouldn't become a photographer, animator, cinematographer or airline pilot. Deaf people won't become sound engineers. That's just the way it is, and people aren't going to limit themselves because someone somewhere might be offended. That's how we get politicians, not progress.
      • Just accept this: he's gratuitously adding the requirement for color vision to programming. Would you be so accepting if someone added an unnecessary and mostly worthless requirement to your job description, one you physically can't acquire, and then said, "Oh well, not everyone needs to be able to do that job, anyway."?

        No, the inability to use colorForth is not a big deal. In fact, given the total irrelevance of any Forth to the world, it's no deal at all. My objection is not to colorForth, but to your and Chuck's "fuck 'em" attitude. That's far more offensive than an unused, needlessly colorized computer language. But, I'll take your advice and just accept it.

        --unofficial spokesman, Colorblind Computer Programmers' Association
        • I agree with you that it's his attitude and the offensive "well, not everyone needs to be a computer programmer" comment that's the problem.

          His actions, on the other hand, I have no problem with. He's trying to come up with a better system for himself: encoding meaning in the color of words. He pointed out himself that this same meaning could be given to things like typeface, volume, etc. It would be a problem if he came up with inaccessibleForth, which gave him a slight productivity gain, then started advocating its use to the exclusion of people perfectly capable of computer science, just not of using his language. It would be possibly justifiable if he (and everyone using his system) got an order of magnitude productivity increase, IMO.

      • A deaf guy produced Rahsaan Roland Kirk's excellent album "The Inflated Tear."

        (Kirk, btw, was blind.)
    • Please observe what he said:
      I'm sure colorForth will be translated into these other representations. I, myself, will be exploring spoken colorForth. (As soon as I can decipher PC sound cards.)



      This man is simply saying that he wrote colorForth for his needs. He would not write code that he would not use. Why should he? It is refreshingly frank and honest. Someone else who is color blind can write code for themselves.
      He is not against people with disability, but he is not your shepherd, and you are not a sheep.

    • You're taking him badly out of context. His colorForth is indeed inaccessible to you, but as he's repeatedly stated, color isn't the important thing -- he mentions tone, font, and emphasis as alternatives here.

      He's only working with color right now, because that's what he needs; why should he do extra work to solve a problem nobody has? Hire him (or have him hire you), and he'll find a way for you to use ColorForth.

      -Billy
    • He said shortly afterward that color is just his own method for distinguishing different types of code, and that you can use font or typeface changes to make the same distinctions instead. But yeah, saying that blind people don't have to be programmers is mean :)

    • Others have defended Moore based on other statements he made on the subject. But I, too, noticed the statement "everyone does not need to be a programmer", and think it is unforgivable. The only thing that could mitigate it is an apology from Moore.
  • I get very uncomfortable around fanatics like Chuck - I've been an engineer for a long time and I usually find that anyone with only one tool is going to try and redefine all problems to be fixable with that tool. I've used Forth for years, but I also use C/C++, Perl, Lisp, etc. Each tool for a different problem. Forth has a well-defined and useful niche (its longevity proves that), but it's far from something that 99% of us will ever use.

    That said, his processor is really cool looking - wish I had some to play with. I can think of a lot of problems that could be solved with these and Forth. But I think that the fanaticism here will put off many backers...
    • I guess we get a Forth entry in the "Programming languages as hammers" list...
    • The most important part of Forth isn't a tool -- it's a concept. And most of you HAVE used it, although you deny it; most of you use the concept which powers Forth every time you print something. The same concept that powers Forth also powers Postscript.

      -Billy
  • cm said " 20-20 vision is required for fighter pilots. I have no qualms about requiring color vision for programmers. Everyone does not need to be a programmer."

    I would disagree. While the Free Software Foundation, for instance, does not explicitly condone "programming regardless of skill" (contributing to gcc does require some aptitude), free software allows anyone to program, without regard to financial means or the willingness to sign NDAs.

    Before colorForth, color vision was not a prerequisite for programmers. Why should it be now? (Why are boldface, italic, and roman not appropriate analogues for red, green and yellow, anyway?)

    • There are tons of other Forth dialects that actually meet his criteria of being a true Forth, and they run under every OS out there, from DOS all the way to Linux.

      ColorForth is his implementation of his idea of what Forth should be for him. If you can use it, fine. If not, find another Forth; I'm sure there will be other implementations that code for the x25 CPU at some point. People aren't using Stroustrup's implementation of C++ or K&R's implementation of C either, for that very reason.

      Go hit Taygeta Scientific [taygeta.com]'s website for implementations of Forth that you can try out. For Linux users, I suggest BigForth from Bernd Paysan, BTW - it's a native-code-generating implementation with some GUI support that shows some promise for making usable apps, etc.
    • Boldface, Italic, and Roman *ARE* appropriate analogues, he already made reference to that.

      Also.. I think what he's saying is, why should everything on earth cater to the lowest common denominator? It shouldn't.

      You don't need 20/20 vision to fly a plane. But if you want to work for the Air Force in particular, to fly their jets, you have to have perfect vision. Period.

      So.. if you want to work with Color Forth, as he implemented it, you need to be able to see colors. I fail to see how this is bad.

      Everything relating to computers does not need to be built for the lowest common denominator.

      • Hmm.
        I don't really think that designing a language to be accessible to the blind or the color blind is catering to the "lowest common denominator".

        Theoretically, the main requirement for a programmer is a brain. Everything else is secondary. One can argue that a fast computer is nice, and good compilers are also helpful.
        Vision really doesn't enter into it.

        ColorForth is a personal language designed to fit the personal quirks of its creator. That personal fit also makes it less useful as a universal language.

        As for the jet fighter analogy -- it is much harder to design a jet fighter that can be flown both effectively and safely by a nearsighted pilot than it is to design a computer language accessible to the color blind.

    • (Why are boldface, italic, and roman not appropriate analogues for red green and yellow, anyway?)

      They are, of course -- he said they were. He's using color now because that's what he started with; you can't deny that it's easier to work with than multiple typefaces.

      -Billy
  • This guy designing the language and the chip is interesting to me.

    It's mentioned in the technical manual for the Enterprise 1701-D that the software was designed long before the hardware. This seems unnatural considering the ease with which you can change software compared to hardware, but there are advantages.

    EROS, for example, is an OS that is struggling to apply some really cool ideas because there is not enough hardware support for its permission paradigm. Conversely, it took MS over 10 years to implement all the hardware features built into the 486 for OSes to use (not just the 32-bit bus).

    3D libraries are being implemented in hardware now, after the attempt to do it in software couldn't handle the massively parallel nature and speed requirements. Now there are crypto cards that similarly add hardware-designed functions to prop up where software is slow.

    So what I wonder is, is it really so infeasible or unreasonable to design the software first and then the hardware?
    • EROS (Score:2, Interesting)

      by kip3f ( 1210 )
      I don't know where your comment about EROS [eros-os.org] comes from. EROS stands for Extremely Reliable Operating System, and has cool stuff like transparent persistence for all programs and a pure capability security system. EROS was built from the ground up to run on commodity Intel boxes. The OS is not ready for prime time because it is being re-written in C (from C++). It is GPL'ed, and it has mucho potential.
      • You are very right. One of my favorite features is that it doesn't use a "file system" per se. I assure you that although I'm not active on the mailing list, I've been monitoring it since 1997.

        My comment comes from Jonathan Shapiro's comment about his own OS sometime about when it was GPL'ed. In short he mentioned that hardware had a lot of catching up to do with his OS.

        I believe it came in response to a thread on how to deal with the "capabilities" (I'm assuming you know how they use that term) of modems and other peripherals. I remember the quote going something like "It will take years for hardware to see a need and implement it."

        As another note, if you look at the v2.0 logs you will notice a long essay on why he is finally implementing a malloc. Essentially because of, [...drumroll...] braindead hardware design [eros-os.org].
  • Trinary (Score:3, Interesting)

    by Water Paradox ( 231902 ) on Friday September 14, 2001 @02:05PM (#2299838) Homepage
    Looks like my question was too far out there to get moderated up. Perhaps it's a lame question. I believe it's a solid question, so I'll pose it again, maybe some of you have comments:

    A trinary [trinary.cc] computer system is based on architecture which is much more efficient than binary, especially for moving large numbers around. Since you are designing your own processors, have you considered the possibility of building (and coding on) a trinary system? It seems like trinary eclipses the revolutionariness of even colorForth, by taking us into a whole nuther dimension of architecture...

    -WP
    • A trinary [trinary.cc] computer system is based on architecture which is much more efficient than binary, especially for moving large numbers around.

      Actually, to the best of my knowledge, trinary systems (and other systems with a radix other than 2) are not vastly more efficient than binary.

      Trinary logic is more complex to design than binary and requires more transistors. There's no substantial design time or area saving.

      If you're doing math, you'll also have larger roundoff errors using a radix larger than 2.

      Back in the olden days, computers were built to work in base 10 or base 16, or to work with multiple logic levels per line, but for these and other reasons, they finally converged on base 2 with two-level signalling.
    • I get the impression that this is 0, 1, 2 trinary. I think it would be interesting with -1, 0, 1 trinary numbers. I.e., if A=+1, Z=-1, and 0=0:

      1=A (1)
      2=AZ (3+-1)
      3=A0 (3+0)
      4=AA (3+1)
      5=AZZ (9+-3+-1)
      6=AZ0 (9+-3+0)
      7=AZA (9+-3+1)
      8=A0Z (9+0+-1)
      9=A00 (9+0+0)

      I don't know if this would be helpful, but it would certainly be interesting. And +1, 0, -1 seems less arbitrary than a normal base-3 number system. It's intrinsically signed, too, since, for instance, ZZA=-11. (A small conversion sketch follows this comment.)
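
      A small, hypothetical C sketch of the conversion described above (A = +1, 0 = 0, Z = -1, most significant digit first, so 8 prints as A0Z and -11 as ZZA); the function name and letter choices are just illustrative:

      #include <stdio.h>

      static void print_bt(long n)
      {
          char buf[64];
          int len = 0;

          if (n == 0) { putchar('0'); return; }

          while (n != 0) {
              long r = n % 3;
              if (r < 0) r += 3;          /* normalize C's remainder to 0..2 */
              if (r == 2) {               /* digit -1, carry one upward      */
                  buf[len++] = 'Z';
                  n = (n + 1) / 3;
              } else {
                  buf[len++] = (r == 1) ? 'A' : '0';
                  n = (n - r) / 3;
              }
          }
          while (len > 0)                 /* digits were collected LSB first */
              putchar(buf[--len]);
      }

      int main(void)
      {
          for (long i = 1; i <= 9; i++) { print_bt(i); putchar(' '); }
          print_bt(-11); putchar('\n');
          return 0;
      }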

  • by jasno ( 124830 )
    "There is no hardware protection. Memory protection can be provided by the access computer. But I prefer software that is correct by design."

    That statement alone should point out that this guy has no clue about real world software design. People make mistakes, big ones, and they're not always caught in the debug cycle.

    He sounds like a real smart guy, who's written a lot of cool things ON HIS OWN. Once you break out of the individual 'hacker' environment and have to teach and share with others, a lot of this stuff falls apart...

    • I agree. His comment sounds similar to a comment I heard an OS/2 zealot make years ago. He was talking about the single-threaded nature of Presentation Manager and said that if everyone wrote perfect programs PM would never lock-up. I asked (rhetorically) how likely he thought that was. I'm still waiting for an answer.


      Honestly, have you ever seen anything but the most trivial program that was "correct by design"?

    • He's worked with a lot of teams as well. Memory protection is NOT a way to get bugs out of your system (that's STUPID; there's no protection inside a process); rather, it's a way to emulate an air gap between programs (in other words, for security). When you have multiple processors for that cheap, it's far more secure to just set up a separate processor.

      -Billy
      • When you have multiple processors for that cheap, it's far more secure to just set up a separate processor.
        Let's see, how many processes do I have running?

        $ ps ax | wc -l
        82

        So I'm supposed to have 82 processors on account of having 82 processes that might become runnable at any moment? What about shell servers with hundreds of simultaneous users? Lack of memory protection is just ludicrous, it's so not-real-world it isn't even funny.

        And as far as being more secure, that's ludicrous too. A functioning MMU is just as secure.

        • Your incredulity shows your almost unspeakable insularity. Your use of a Unix-based, system-specific command to provide evidence against me is solid proof of that insularity.

          Not all the world is a desktop or workstation. Not all systems run Unix. This will ALWAYS be true, because Unix isn't appropriate for most systems.

          Yes, lack of memory protection is entirely inappropriate for systems with a variable number of multiple users. But it's entirely appropriate for users with multiple systems. Chuck's chip is supposed to cost $1 in production quantities. One dollar for 25 processors. I tell you one thing for sure: I'm not sharing mine with anyone else. I may even buy the x36 or x49 or x64. Chuck thinks he might even be able to stretch it up to 100 processors on a side, for ten thousand processors on one chip (although the resulting chip will be as big as a Pentium III).

          What would you run on this? I see you're wanting to run Apache and Beowulf. I think that's stupid; those are designed for desktops. I'd want to run neural networks and simulated annealing. I'd want to build a wristwatch which can transcribe to ASCII all speech which it hears, with identities for the separate speakers. I'd build a PDA which recognises commands and takes dictation subvocally (in other words, it reads lips).

          The applications for this kind of power and speed are astounding -- even with only 256K of memory and 18-bit processors.

          Ah, I bet you didn't notice that, did you. 18 bit! 256K address space! Total. ALL of the processors share that 256K. That's tons of elbow room for a fixed-purpose system (although not enough for many algorithms; reimplementing Deep Blue will have to wait until Chuck simulates at LEAST a 24-bit chip).

          Seriously, can you see any point in memory-protecting a system with a total of 256K of memory?

          -Billy
          • Your incredulity shows your almost unspeakable insularity. Your use of a Unix-based, system-specific command to provide evidence against me is solid proof of that insularity.
            I also design logic for FPGAs and design systems that incorporate microcontrollers. I am fully aware that there is a place for simple chips that pack a lot of bang for the buck, and that the software/firmware/logic for them is narrowly tailored for specific jobs.

            So it is valid under some circumstances to say "an MMU is not good". However, Chuck Moore makes all sorts of statements like that one *as general laws*, which really squicks me. E.g., this [colorforth.com] page, which says "The word */ multiplies by a ratio, with a double-length intermediate product. It eliminates the need for floating-point." Eliminates. Not "eliminates fp when you have a known and small dynamic range", but "eliminates". This has got to be one of the stupidest things I've ever seen. (A sketch of what */ actually does appears after this comment.) Another example: This [colorforth.com] is Chuck "Yes, that's all it takes." Moore's IDE driver. Puh-leeze. It's the "Hello World" of IDE drivers. It does *nothing*. No DMA. No locking for running on SMP machines. No autodetection and adaptation to drive capabilities. No workarounds for chipset bugs. No blacklisting of drives that are buggy. If you want to play in the real world, you need code like this [bitmover.com] (the Linux IDE driver).

            Look at this page [colorforth.com] where he says "With the huge RAM of modern computers, an operating system is no longer necessary, if it ever was." Idiocy, written by a person who has never designed a system of significant complexity. RAM has *nothing* to do with whether an operating system is needed: it has everything to do with complexity. E.g., the only sane way to use floppy, IDE, SCSI, flash, CD-ROM, and network-mounted drives in the same system is to have a generic drive-access layer. Then you'll add a generic removable-drive layer to support things like CD-ROMs and flash drives that might suddenly disappear. If you want to support tons of hardware and software, you have to have an operating system.

            This page [colorforth.com] is *full* of idiocy. "Don't try for platform portability. Most platform differences concern hardware interfaces. These are intrinsically different. Any attempt to make them appear the same achieves the lowest common denominator. That is, ignores the features that made the hardware attractive in the first place." Except, of course, for the usual case where the hardware has an upgrade that is easy to turn on and use, and where which hardware is available is not known until runtime.
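
            For reference, a rough C sketch (not Chuck Moore's code) of what the */ word on that page does: scale by a ratio using a double-length intermediate product. The 355/113 approximation of pi is only an illustrative constant, and, as argued above, this stands in for floating point only when the dynamic range is known and small.

            #include <stdint.h>
            #include <stdio.h>

            /* star_slash(n, num, den) computes n*num/den */
            static int32_t star_slash(int32_t n, int32_t num, int32_t den)
            {
                int64_t t = (int64_t)n * num;   /* double-length intermediate product */
                return (int32_t)(t / den);      /* single-cell result                 */
            }

            int main(void)
            {
                int32_t r = 10000000;           /* radius */
                /* 2*r*355 = 7.1e9 would overflow a 32-bit cell, but the wide
                   intermediate keeps the scaled result exact. */
                printf("circumference ~ %d\n", star_slash(2 * r, 355, 113));
                return 0;
            }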

  • by Midnight Ryder ( 116189 ) <midryder@NOSpAm.midnightryder.com> on Friday September 14, 2001 @02:44PM (#2300057) Homepage

    After reading the results of the interview, I really like Chuck Moore. Why? Simple - he's got a language he likes and develops further for his needs when necessary, and when it comes to what everyone else thinks, he doesn't care!


    That's not necessarily a bad thing, in some ways. How different would colorForth be if, for instance, he stopped to consider the effect on color blind or blind people trying to use the language? What about if he stopped to concern himself deeply with how to get colorForth to become accepted as a mainstream language?


    Instead, he concentrated on creating something he felt was the perfect language for him - not really for anyone else. There's something very admirable about that. Seems like projects these days (I mean Open Source projects in particular - commercial projects obviously tailor to as many people as possible) end up giving up part of their original focus to instead appeal to a much broader audience within their application style grouping. He doesn't care about how (x) implemented (y) - if it doesn't fit the applications he's been working on, then he ain't adding it in.


    On the flip side, that means that colorForth, for instance, isn't going to get a whole lotta acceptance. His comment about blind programmers struck me as callous, but, what the hell - it pretty much comes down to being 'his personal language'. If that's how he treats it, then yeah, to program in colorForth (Chuck's personal language) you have to adapt it yourself (font changes for color blind people, or possibly tonal changes for those who have the source read to them by text-to-speech programs).


    But the one comment that struck me as wrong was his thinking that the reason more people don't use it is a matter of conspiracy. *SIGH* No, Chuck - if you build a language and tailor it pretty much completely for yourself, well... who the heck is gonna really care that much, since you don't?


  • The lambda calculus is not inherently sequential, and languages based on the lambda calculus can be massively parallel in execution, just by using simple tricks like evaluating the arguments in parallel. The ideas have been floating around for decades - they boil down to the following (a small sketch follows this comment):

    Functional, side-effect-less programs and primitives

    Parallel evaluation of arguments

    Parallel evaluation of mapping primitives

    See here [nec.com] for a recent reference.
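
    As a deliberately naive sketch of the last point, here is a hypothetical C program that evaluates a side-effect-free "map" in parallel with POSIX threads, one thread per element. The names and the thread-per-element scheme are illustrative assumptions, not anything from the cited reference.

    #include <pthread.h>
    #include <stdio.h>

    #define N 8

    static double square(double x) { return x * x; }    /* pure: no shared state */

    struct slot { double in, out; };

    static void *worker(void *arg)
    {
        struct slot *s = arg;
        s->out = square(s->in);     /* safe to run concurrently: no side effects */
        return NULL;
    }

    int main(void)
    {
        struct slot v[N];
        pthread_t tid[N];

        for (int i = 0; i < N; i++)
            v[i].in = (double)i;

        /* "map square v": every element evaluated in its own thread */
        for (int i = 0; i < N; i++)
            pthread_create(&tid[i], NULL, worker, &v[i]);
        for (int i = 0; i < N; i++)
            pthread_join(tid[i], NULL);

        for (int i = 0; i < N; i++)
            printf("%g ", v[i].out);
        printf("\n");
        return 0;
    }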

  • 20-20 vision is required for fighter pilots. I have no qualms about requiring color vision for programmers. Everyone does not need to be a programmer

    Yes, but the set of people who want to be programmers and the set of colorblind people are NOT the same.

    I am a colorblind programmer. I program in C, PHP, and Perl regularly. I have no qualms with users that, endowed with color vision, enhance their programming abilities by using colorized editors. My coworkers, who all use syntax highlighting, come to my black and white display and squint at the screen, almost unable to read the code displayed.

    However, the sentiment embodied here is truly ugly. I mean, does this guy have any idea how frustrated potential fighter pilots feel when they are turned away because of their imperfect vision? If you want to fly a plane to defend your country and ideals, the inability to do so caused by an imperfection beyond your control would be devastating.

    There are a myriad of alternatives. Even within Forth it would be possible to represent the structures currently shown as color with other types of highlighting. Moore, get a clue.

    Justin

  • I admire Forth, it's a fascinating language. But would I consider it for a project, even an embedded one? No way!

    "Languages are evolving, as evidenced by the new ones that arise. But as with natural evolution, the process is not directed." Mr. Moore notes. He missed the fact that evolution *is* directed -- anything that survives, flourishes. LISP, Forth, and C have a very long history, but the C-style syntax has completely dominated the development of new languages, including C# and Java. Why? Because it's easier read, easer to write, easier to develop with.

    When I have C questions, I have gazillions of code samples to borrow and learn from. (cf: http://www.codecatalog.com/ ) There are multiple sites devoted to Perl. PHP. When developing with Forth I get a "not invented here" kind of thing: each of the few, small libraries available is customized to a specific home-grown flavor of the language. Yes, I can write my own this or that, but *why*?

    I'm interested in amateur robotics, for which Forth might be perfect. But what do I see? Nearly 100% of robots are programmed in 1) assembly, 2) BASIC, or 3) C. Assembly is of course specific to each chip, and totally nonportable. BASIC is readable, but only somewhat reusable: each flavor of BASIC is incompatible. C immediately rises to the top -- even if I have to write all the libraries myself, the *language* doesn't change from underneath me.

    Mr. Moore's inventions deserve attention for their audacity in completely upsetting the status quo. If his approach is superior, and if he is uninterested in whether the rest of the software world follows, fine. But where are all of Moore's beautiful chips? Applications? Where are people using Forth on other machines?

    Show me!

    - j
    • > Forth is relevant? prove it!

      Forth is everywhere.

      Look in the Solaris kernel repository; there is even Forth source code in there.

      The Java virtual machine is, in essence, a Forth-like stack machine.

      Even Postscript looks enough like Forth to claim its legacy.

      You should rather demonstrate to us that this obviously useful language does *not* exist, instead of trolling...
    • An international airport in Saudi Arabia...
      The collision avoidance system on the Space Shuttle...
      The first pocket language translator...
      Numerous Atari console games...

      Some of these used a Forth implementation for a given CPU, one of them uses a special chip that was built by Harris Semiconductor, the RTX2000, one of the first Forth chips.

      Forth goes unused not because it's worse or less useful, but because it's so different from just about everything else developed for programming computers.
  • I recently downloaded GNU Forth out of curiosity. I hadn't played around with it in over a decade, but I always thought it was a lot of fun to program in Forth.

    I soon noticed what looked like a cool new feature: named function parameters. You can now access stack slots by name instead of always juggling the parameters with operations like DUP, ROT, SWAP, etc.

    After using this new capability, though, I realized that much of the fun of writing Forth code is figuring out clever ways to juggle the stack. Using named parameters makes the language kind of boring, combining C's explicit memory management headaches with the performance questions of an interpreted language.

    I guess I just never got deep enough into the Forth ways to take much advantage of the magical "extend the language with itself" capabilities. OTOH, Lisp can do some of these tricks and provides automatic memory management.

    Oh well, my favorite language this year is Ruby, anyway. It brings together a lot of good concepts from other languages in a nice way that's easy to understand; it even has a little bit of the extensibility that Forth exhibits.

  • PDP-11 BASIC+2, VAX FORTRAN (and probably other languages from DEC) were compiled to threads of the language run-time system. VAX/Alpha object language is a stack-oriented language.

    Threads are an enormously powerful tool for compiler writers. They allow one to emit consistent sequences (idioms) from a "ridiculously" easy front end.

    At run-time, the loader transfers control to the familiar indirect loop, which gnaws through the "object" code.

    One of the pure charms of programming was to see the PDP-11/VAX/Alpha FORTH inner interpreter.
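
    For anyone who never saw one of those, here is a toy, hypothetical C sketch of the idea (closer to direct threading through a table of routine addresses than to DEC's or Moore's actual indirect-threaded code): the "object" code is just an array of addresses, and the inner loop gnaws through it.

    #include <stdio.h>

    typedef void (*prim)(void);             /* a primitive routine           */

    static int stack[16], sp = 0;           /* tiny data stack               */
    static void push(int v) { stack[sp++] = v; }
    static int  pop(void)   { return stack[--sp]; }

    static void lit5(void) { push(5); }     /* push a literal                */
    static void lit7(void) { push(7); }
    static void add(void)  { push(pop() + pop()); }
    static void dot(void)  { printf("%d\n", pop()); }

    int main(void)
    {
        /* the "object" code: a thread of addresses, terminated by NULL */
        prim thread[] = { lit5, lit7, add, dot, NULL };

        for (prim *ip = thread; *ip; ip++)  /* the familiar inner loop       */
            (*ip)();
        return 0;
    }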
  • by Anonymous Coward
    no, but give me a week to learn visual basic and i could try.
  • My understanding is that the bytecode language of Python is very stack-based/RPN-like... It'd be a curious exercise to compile Python to Forth instead (Fython?)

    Also, for more Forth-like fun, check out MUCKs, MUDs with an internal Forth language. (I spent far too much time hacking the internals of what would later become DaemonMUCK, almost ruining my college career...:)
  • Many of the more negative comments I'm seeing seem to be missing one or two points about the Forth programming environment (we call it that since it's more than just a language).

    Since he lacks feature xxx his ideas aren't relevant: Many of the comments in this regard seem to stem from an assumption that unless a programming environment includes support for multiple protected users, protected memory, protected devices, protective APIs, and so on, it cannot be a relevant environment. What I'd like some of you to consider is that, on the contrary, there are many more programming applications which simply don't require those mechanisms. Sure, they're great to have, when you need them. Your microwave doesn't require multiuser support; your watch doesn't require protected memory; set-top boxes don't require CORBA bindings; CCD firmware doesn't; engine management, FedEx barcode scanners, and so on. The list is nearly infinite, and can extend all the way up to your desktop, if you want. Certainly, we all know many many environments where those tools help us get our work done, but that doesn't invalidate the environments where they aren't needed. Think beyond your desktop; every CPU in the world doesn't have to be running Netscape. Implement what you need or want, throw the rest away.

    Programming isn't for everyone: Some of you are turning this into a real strawman. Come on, you don't really believe Chuck is dissing someone who wants to program but has a challenge (such as blindness), do you? Re-reading the interview should show you that he has an interest in other representations of programming environments (other than text-based ones). Furthermore, Chuck himself has poor eyesight. Thus, he's created his own programming environment that uses very large characters and uses color to replace punctuation, thus saving him precious screen real estate. If anything, you'd see he embodies the attitude, "Change the system to match your wants." I believe with a little thought it should be obvious that he isn't seeking to exclude people with different abilities.

    // boba
