Programming

Mr. Pike, Tear Down This ASCII Wall!

theodp writes "To move forward with programming languages, argues Poul-Henning Kamp, we need to break free from the tyranny of ASCII. While Kamp admires programming language designers like the Father-of-Go Rob Pike, he simply can't forgive Pike for 'trying to cram an expressive syntax into the straitjacket of the 95 glyphs of ASCII when Unicode has been the new black for most of the past decade.' Kamp adds: 'For some reason computer people are so conservative that we still find it more uncompromisingly important for our source code to be compatible with a Teletype ASR-33 terminal and its 1963-vintage ASCII table than it is for us to be able to express our intentions clearly.' So, should the new Hello World look more like this?"
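For readers who skip the link, here is one hedged guess at what such a post-ASCII Hello World could look like, written in Python 3 (which already accepts Unicode identifiers); the Greek names are invented for illustration and are not taken from Kamp's article.

    # Illustrative only: Python 3 permits Unicode identifiers, so this runs as-is.
    def γράψε(μήνυμα):          # "write" in Greek
        print(μήνυμα)

    γράψε("Καλημέρα, κόσμε")    # "Hello, world"
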
  • by enec ( 1922548 ) <jho@hajotus.net> on Sunday October 31, 2010 @07:58PM (#34083854) Homepage
    The thing with ASCII is that it's easy to write on standard keyboards, and does not require a specialized layout. Once someone can cram the necessary unicode symbols into a keyboard so that I don't have to remember arcane meta-codes or fiddle with pressing five different dead keys to get one symbol, I'm all for it.
    • by angus77 ( 1520151 ) on Sunday October 31, 2010 @08:10PM (#34083960)
      Japanese is typed using a more-or-less standard QWERTY keyboard.
      • by MichaelSmith ( 789609 ) on Sunday October 31, 2010 @08:22PM (#34084050) Homepage Journal

        Japanese is typed using a more-or-less standard QWERTY keyboard.

        Tediously.

        • by Sycraft-fu ( 314770 ) on Sunday October 31, 2010 @08:57PM (#34084312)

          Because that's what you find in JIS X 0213:2000. Even if you simplify it to just what is needed for basic literacy, you are talking about 2,000 characters. If you have that many characters, your choices are either a lot of keys, a lot of modifier keys, or some kind of transliteration, which is what is done now. There is just no way around this. You cannot have a language that is composed of a ton of glyphs and yet also have an extremely simple, small entry system.

          You can have a simple system with few characters, like we do now, but you have to enter multiple ones to specify the glyph you want. You could have a direct entry system where one keypress is one glyph, but you'd need a massive number of keys. You could have a system with a small number of keys and a ton of modifier keys, but then you have to remember which modifier, or modifier combination, gives what. There is no easy, small, direct system; there cannot be one.

          Also, is it any more tedious than any Latin/Germanic language that only uses a small character set? While you may enter more characters than final glyphs, do you enter more characters than you would to express the same idea in French or English?
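
          A minimal sketch of that transliteration step, with a tiny invented romaji-to-kana table (a real IME uses a complete kana table plus a dictionary-backed kana-to-kanji conversion on top of it):

              # Toy romaji-to-hiragana transliterator; the table and names are invented.
              ROMAJI_TO_HIRAGANA = {"wa": "わ", "ta": "た", "shi": "し", "ka": "か"}

              def to_kana(romaji):
                  out, i = [], 0
                  while i < len(romaji):
                      # Greedy match: try the longest romaji chunk first (3, then 2, then 1 letters).
                      for size in (3, 2, 1):
                          chunk = romaji[i:i + size]
                          if chunk in ROMAJI_TO_HIRAGANA:
                              out.append(ROMAJI_TO_HIRAGANA[chunk])
                              i += size
                              break
                      else:
                          out.append(romaji[i])  # pass unknown characters through unchanged
                          i += 1
                  return "".join(out)

              print(to_kana("watashi"))  # わたし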

      • by Ernesto Alvarez ( 750678 ) on Sunday October 31, 2010 @08:30PM (#34084142) Homepage Journal

        Japanese is typed using a more-or-less standard QWERTY keyboard.

        ...then requiring the input to pass through what amounts to a tokenizer to get the phonetic spelling, and into another program, which needs a database of words and has to prompt you for each one in order to select the proper one from a list. [wikipedia.org]

        Not something as simple as writing ASCII by a long shot.

        • by Z34107 ( 925136 ) on Sunday October 31, 2010 @10:22PM (#34084936)

          Typing Japanese is exactly like typing in English - you press the "space" key between words. The IMEs are pretty smart, and usually the first kanji is the one you want. If it's not you might have to press "space" a second or third time, but it's rare to have to dig through a giant list of kanji to get what you want.

          So, you might have to hit the space key more often if you're typing Japanese. Or, you might not - you can space-to-kanji entire sentences at once, whilst the romance languages are stuck hitting space between every word like shmucks. Except for the Germans. I don't think their language uses spaces.

          The Japanese keyboard layout [wikipedia.org] can also produce kana directly (most of which are romanized with two Latin characters) rather than individual letters. Instead of typing w-a-t-a-s-h-i-space, you type wa-ta-shi-space.

          So, it's really not that bad. What's worse is the irony of seeing an article on slashdot complain about the persistence of ASCII. I mean, really now, slashdot.jp [slashdot.jp] manages to display non-ASCII characters.

          • by rainer_d ( 115765 ) on Sunday October 31, 2010 @10:39PM (#34085024) Homepage

            Typing Japanese is exactly like typing in English - you press the "space" key between words. The IMEs are pretty smart, and usually the first kanji is the one you want. If it's not you might have to press "space" a second or third time, but it's rare to have to dig through a giant list of kanji to get what you want.

            So, you might have to hit the space key more often if you're typing Japanese. Or, you might not - you can space-to-kanji entire sentences at once, whilst the romance languages are stuck hitting space between every word like shmucks. Except for the Germans. I don't think their language uses spaces.

            NatürlichhabenwirLeerzeichen! (OfCourseWeHaveSpaces!)

          • Re: (Score:3, Insightful)

            by spongman ( 182339 )

            Typing Japanese is exactly like typing in English

            Hardly. When you type in English, you think of the word and you type the letters of that word.

            When you type in Japanese, you think of the word, then you have to translate it at least once (maybe twice) in your head before you have a list of Roman letters to type. Then you have to assist the computer in guessing the reverse of the translations you just did. Certainly, much of this is simple for the typist, and for the computer, but it's fundamentally different.

            • Re: (Score:3, Insightful)

              by HonIsCool ( 720634 )
              When I think of, por ejemplo, the word pronounced as 'hait' [*], I don't have to "translate" that at all. No, sir! Just type it straight in, exactly as it is pronounced: "height" of course! =)

              [*] IPA doesn't work on /.
    • by arth1 ( 260657 ) on Sunday October 31, 2010 @08:22PM (#34084052) Homepage Journal

      Once you've had to do an ad-hoc codefix through a serial console or telnet, you appreciate that you can write the code in 7-bit ASCII.

      It's not about being conservative. It's about being compatible. Compatibility is not a bad thing, even if it means you have to run your unicode text through a filter to embed it, or store it in external files or databases.

      It'd also be hell to do code review on unicode programs. You can't tell many of the symbols apart. Is that a hyphen or a soft hyphen at the end of that line? Or perhaps a minus? And is that a diameter sign, a zero, or the Danish/Norwegian letter "Ø" over there? Why doesn't that multiplication work? Oh, someone used an asterisk instead of the multiplication symbol, which looks the same in this font.

      No, thanks, keep it compatible, and parseable by humans, please.
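
      In that spirit, here is a rough, purely illustrative sketch (not an existing tool) of the check this argument implies: report every character outside printable ASCII in a source file along with its Unicode name, so a soft hyphen or a multiplication sign cannot hide during review.

          # Flag non-ASCII (and stray control) characters in source files by Unicode name.
          import sys
          import unicodedata

          def report_suspicious(path):
              with open(path, encoding="utf-8") as f:
                  for lineno, line in enumerate(f, start=1):
                      for col, ch in enumerate(line.rstrip("\n"), start=1):
                          if ord(ch) > 126 or (ord(ch) < 32 and ch != "\t"):
                              name = unicodedata.name(ch, "UNNAMED CONTROL/FORMAT CHARACTER")
                              print(f"{path}:{lineno}:{col}: U+{ord(ch):04X} {name}")

          if __name__ == "__main__":
              for source_file in sys.argv[1:]:
                  report_suspicious(source_file)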

      • Re: (Score:3, Informative)

        by Tablizer ( 95088 )

        It'd also be hell to do code review on unicode programs. You can't tell many of the symbols apart. Is that a hyphen or a soft hyphen at the end of that line?

        If you want to test and/or frustrate a newbie, replace one of those in their program and see how long it takes them to fix it.

        The first time I ran into something like that it took me a good while. I ended up comparing hex dumps to find it. I should have just retyped the suspect code sections from scratch instead, but I was determined to get to the bottom of it.

        • by TheRaven64 ( 641858 ) on Monday November 01, 2010 @06:28AM (#34087008) Journal
          Apple's documentation in HTML form has a few of the standard ASCII characters replaced with other Unicode characters. If you copy and paste into a text editor, you get compiler errors that appear to complain about the very character that looks like it's already there, because the lookalike isn't the character the compiler expects. The pages also sometimes contain ligatures, which you don't notice unless you look one character at a time. One of the most irritating problems I found was on the Nouveau wiki: a load of constants have 0x prefixes where the x is actually a Unicode multiplication symbol. Copy them into the code and it looks right, but the compiler rejects it as an invalid constant.
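
          The 0x trap is easy to reproduce. In the sketch below, the second constant uses U+00D7 (the multiplication sign) in place of the letter x; it renders almost identically but is rejected:

              good = "0x1F40"
              bad = "0\u00d71F40"           # displays as 0×1F40

              print(int(good, 16))          # 8000
              try:
                  int(bad, 16)
              except ValueError as exc:
                  print("rejected:", exc)   # the lookalike prefix is not valid hex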
      • by goombah99 ( 560566 )

        Grep on ASCII is more than 100x faster for complex string expressions. There are a lot of good reasons not to use Unicode.

  • "Syntactic sugar causes cancer of the semicolon" - Alan Perlis.
  • Project Gutenberg (Score:5, Insightful)

    by symbolset ( 646467 ) on Sunday October 31, 2010 @07:59PM (#34083864) Journal

    Michael decided to use this huge amount of computer time to search the public domain books that were stored in our libraries, and to digitize these books. He also decided to store the electronic texts (eTexts) in the simplest way, using the plain text format called Plain Vanilla ASCII, so they can be read easily by any machine, operating system or software.

    - Marie Lebert [etudes-francaises.net]

    Since its humble beginnings in 1971 Project Gutenberg has reproduced and distributed thousands of works [gutenbergnews.org] to millions of people in - ultimately - billions of copies. They support ePub now and simple HTML, as well as robo-read audio files, but the one format that has been stable this whole time has been ASCII. It's also the format that is likely to survive the longest without change. Project Gutenberg texts can now be read on every e-reader, smartphone, tablet and PC.

    If you want to use Rich Text format, or XML, or PostScript or something else then fine - please do. But don't go trying to deprecate ASCII.

    • by shutdown -p now ( 807394 ) on Sunday October 31, 2010 @08:04PM (#34083916) Journal

      If you want to use Rich Text format, or XML, or PostScript or something else then fine - please do. But don't go trying to deprecate ASCII.

      This is a false dichotomy. Plain text can be non-ASCII, and ASCII doesn't necessarily imply plain text. All the formats you've listed let you add either visual or semantic markup to text, whereas ASCII is simply a way to encode individual characters from a certain specific set. The proposal is not to move to rich text for coding, but to move away from ASCII.

      There are still many reasonable arguments against it, but this isn't one of them.

    • by pz ( 113803 ) on Sunday October 31, 2010 @09:58PM (#34084742) Journal

      When I was a young graduate student building my first experimental setup, a professor who was older and wiser than me suggested that data should be saved in ASCII whenever possible because space was relatively inexpensive and time is always scarce. Although I thought that a bit odd, I did follow his advice.

      The result? I can use almost any editor to read my data files from the very start of my career, closing in on 30 years ago. Just this past week, that was an important factor in salvaging some recently-collected data. In contrast, I can't always read the MS Word files -- an example of an extended character set -- from even a few years ago, and I sure as hell can't view them in almost any editor. Sure, with enough time, I can or could, figure out how to read them, but, as the wise professor rightly pointed out, time is scarce.

      Thus, compatibility is important, and the most compatible data and document format is human-readable plain ASCII.

  • huh (Score:4, Insightful)

    by stoolpigeon ( 454276 ) * <bittercode@gmail> on Sunday October 31, 2010 @07:59PM (#34083866) Homepage Journal

    so we should start coding in Chinese?

    Seems easier to spell words with a small set of symbols than to learn a new symbol for every item in a huge set of terms.

    • Re:huh (Score:5, Insightful)

      by MightyYar ( 622222 ) on Sunday October 31, 2010 @08:17PM (#34084010)

      so we should start coding in Chinese?

      Exactly! Keep the "alphabet" small, but the possible combination of "words" infinite.

      You don't need a glyph for "=>" for instance. Anyone who knows what = and > mean individually can discern the meaning.

      And further (I know, why RTFA?):

      But programs are still decisively vertical, to the point of being horizontally challenged. Why can't we pull minor scopes and subroutines out in that right-hand space and thus make them supportive to the understanding of the main body of code?

      This is easily done with a split screen, and sounds like an editor feature to me. Not sure why you'd want a programming language that was tied to monitor size and aspect ratio.

      Why not make color part of the syntax? Why not tell the compiler about protected code regions by putting them on a framed light gray background? Or provide hints about likely and unlikely code paths with a green or red background tint?

      Again, if you want this, do it in the editor. Doesn't he know anyone who is colorblind? And even a normally sighted user can only differentiate so many color choices, which would limit the language. And forget looking up things on Google: "Meaning of green highlighted code"... no wait "Meaning of hunter-green highlighted code" hmmmm... "Meaning of light-green highlighted code"... you get the idea.

    • Re: (Score:3, Interesting)

      by jonbryce ( 703250 )

      No, but I think the idea of being able to draw flowcharts on the screen and attach code to each of the boxes could be an idea that has mileage.

      • Re:huh (Score:5, Interesting)

        by CensorshipDonkey ( 1108755 ) on Sunday October 31, 2010 @09:07PM (#34084366)
        Have you ever used a visual diagrammatic code language before, such as LabView? Every scientist I've ever met that had any experience writing code vastly prefers the C based LabWindows to the diagrammatic LabView - diagrammatic is simply a fucking pain in the ass. Reading someone else's program is an exercise in pain, and they are impossible to debug. Black and white, unambiguous plain text coding may not be pretty to look at but it is damn functional. Coding requires expressing yourself in an explicitly clear fashion, and that's what the current languages offer.
        • Re:huh (Score:5, Insightful)

          by ScrewMaster ( 602015 ) * on Sunday October 31, 2010 @09:29PM (#34084528)

          diagrammatic is simply a fucking pain in the ass.

          Amen.

          Every scientist I've ever met that had any experience writing code vastly prefers the C based LabWindows to the diagrammatic LabView

          Well, I'm not a scientist, just a humble software engineer, and back in my contract coding days I was always faced with managers who would try to push me to use LabView. They had this mistaken belief that because it was "visual", (a) they could understand it, (b) it was simpler, and (c) I should charge less if I used it.

          I told them that (a) it's still programming, and beyond a certain level of complexity understanding still requires sufficient knowledge; (b) refer to (a); and (c) if they were going to force me to waste time fighting such an environment up 'til the point where I found something critical that it couldn't do (such as run fast enough) and would end up re-coding the right way anyway, they damn well weren't going to pay me less.

        • Re:huh (Score:4, Insightful)

          by bh_doc ( 930270 ) <brendonNO@SPAMquantumfurball.net> on Sunday October 31, 2010 @11:14PM (#34085252) Homepage

          As a scientist who has a fair bit of coding experience, including LabVIEW, ++ this.

          What particularly annoys me about visual code like LabVIEW is that you can't diff. So change tracking is a pain in the arse, and forget distributed development.

          LabVIEW itself is good for setting up a quick UI and connecting things to it, but any serious processing? ...No, thanks. If I could get my hands on something else that had the UI prototyping ease, connectivity to experimental devices (motion controllers, for example), but based on a textual language, I'd be a happy camper. (There are some things that come close, I'm sure, though I've not had the time to properly search. Busy scientist is busy...)

  • Learn2code (Score:5, Insightful)

    by santax ( 1541065 ) on Sunday October 31, 2010 @08:00PM (#34083870)
    I can express my intentions just fine with ASCII. They have cunningly invented a system for that. It's called language and it comes in very handy. The only thing I would consider missing is a pile-of-shit character. I could use that one right now.
    • by MightyYar ( 622222 ) on Sunday October 31, 2010 @08:20PM (#34084028)

      You mean "@"? Looks like a pile of shit to me.

  • Yes, it's the next fad that just _everyone_ has to wear this season. Within 5 years, it will be something else, and given the ability of major vendors like Microsoft to get Unicode _wrong_, it's not stable for mission-critical applications. If you want your code to remain parseable and cross-platform compatible and stable in both large and small tools, write it in flat, 7-bit ASCII. You also get a significant performance benefit from avoiding the testing and decoding and localization and most especially the _testing_ costs for multiple regions.

    • Re: (Score:3, Insightful)

      Yes, it's the next fad that just _everyone_ has to wear this season. Within 5 years, it will be something else

      Unicode has been around for, what, over 15 years now? It's part of countless specifications from W3C and ISO. All modern OSes and DEs (Windows, OS X, KDE, Gnome) use one or another encoding of Unicode as the default representation for strings. No, it's not going away anytime soon.

      If you want your code to remain parseable and cross-platform compatible and stable in both large and small tools, write it in flat, 7-bit ASCII.

      This may be a piece of good advice. Even for languages where Unicode in the source is officially allowed by the spec (e.g. Java or C#), many third-party tools are broken in that regard.

      You also get a significant performance benefit from avoiding the testing and decoding and localization and most especially the _testing_ costs for multiple regions.

      I don't see how this has any relevance to your

      • by scdeimos ( 632778 ) on Sunday October 31, 2010 @09:11PM (#34084382)

        Unicode has been around for, what, over 15 years now? It's part of countless specifications from W3C and ISO. All modern OSes and DEs (Windows, OS X, KDE, Gnome) use one or another encoding of Unicode as the default representation for strings. No, it's not going away anytime soon.

        And yet major vendors like Microsoft still get Unicode wrong. A couple of examples:

        • Windows Find/Search cannot find matches in Unicode text files, surely one of the simplest file formats of all, even though the command line FIND tool can (unless you install/enable Windows Indexing Service which then cripples the system with its stupid default indexing policies). This has been broken since Windows NT 4.0.
        • Microsoft Excel cannot open Unicode CSV and tab-delimited files automatically (i.e.: by drag-and-drop or double-click from Explorer) - you have to go through Excel's File/Open menu and go through the stupid import wizard.
        • Abuse of Unicode code points by various Office apps, causing interoperability issues even amongst themselves.
  • by FeatherBoa ( 469218 ) on Sunday October 31, 2010 @08:02PM (#34083894)

    Everyone who tried to do something useful in APL, put up your hand.

    • by SimonInOz ( 579741 ) on Sunday October 31, 2010 @09:08PM (#34084368)

      Incredibly, I worked for a major investment company who had, indeed, done something useful in APL. In fact, they had written their entire set of analysis routines in it, and deeply interwoven it with SQL. I had to untangle it all. (Would you believe they had 6-page SQL stored procedures? No, nor did I - but they did.)
      APL is great sometimes - especially if you happen to be a maths whizz and good at weird scripts. Not exactly easy to debug, though. Sort of a write-only language.

      For the last ten-plus years, we have been steadily moving in the direction of more human-readable data - the move to XML was supposed to be a huge improvement. It meant you could - sort of - read what was going on at every level. It also meant we had a common interchange between multiple platforms.

      So you want to chuck all that away to get better symbols for programming? No, I don't think so.
      I must point out that the entire canon of English literature is written in - surprise - English, and that's definitely ASCII text. I don't think it has suffered due to lack of expressive capability.

      What does surprise me, though, is how fundamentally weak our editors are. Programs, to me, are a collection of parts - objects, methods, etc., all with internal structure. We seem very poor at further abstracting that - why, oh tell me why, when I write a simple - trivial - bit of Java code, do I need to write functions for getters and setters all over the place? Dammit, just declare them as gettable and settable - or (to keep full source code compatibility) the editor could do it. Simply, easily, transparently. And why can't the editor hide everything except what I am concerned with?
      Microsoft does a better job of this in C#, but we could go much, much further. We seem stuck in the third generation language paradigm.

      • Re: (Score:3, Insightful)

        by cgenman ( 325138 )

        Things I would love to see standard in all new editors:

        1. Little triangles that hide blocks of code unless you explicitly open and investigate them.
        2. Dynamic error detection. Give me a little underline when I write out a variable that hasn't been defined yet. Give a soft red background to lines of code that wouldn't compile. That sort of thing.
        3. While we're at it, "warning" colors. When "=" is used in a conditional, for example, that's an unusual situation that should be underlined in yellow.
        4. Hard a

        • Re: (Score:3, Interesting)

          by Yetihehe ( 971185 )

          1. Little triangles that hide blocks of code unless you explicitly open and investigate them.

          Netbeans. (view > code folds > collapse all)

          2. Dynamic error detection. Give me a little underline when I write out a variable that hasn't been defined yet. Give a soft red background to lines of code that wouldn't compile. That sort of thing.

          Netbeans.

          3. While we're at it, "warning" colors. When "=" is used in a conditional, for example, that's an unusual situation that should be underlined in yellow.

          Netbeans.

  • by MaggieL ( 10193 ) on Sunday October 31, 2010 @08:04PM (#34083914)

    ...the character set isn't the problem.

    And I say this as an old APL coder.

    (There aren't many new APL coders.)

  • And more than 10 years ago, in Bjarne Stroustrup's "Generalizing Overloading for C++2000". The PDF can be downloaded here:

    www2.research.att.com/~bs/whitespace98.pdf

    Pages 4-5 delve into this.

    It was also a joke paper. Like I hope this article is.

  • How silly of us to be compiling to binary all this time!
    We've been relegating ourselves to only two different options for decades!

    I reckon that a memory cell and single bit of a processor opcode should have --at least-- 7000 different possibilities. Think of everything a computer could accomplish *then*!

    Seriously, someone tell this guy you're allowed to use more than one character to represent a concept or action, and that these groups of characters represent things rather well.
  • It ain't broke! (Score:5, Insightful)

    by webbiedave ( 1631473 ) on Sunday October 31, 2010 @08:08PM (#34083944)
    Let's take our precious time on this planet to fix what's broken, not break what has clearly worked.
  • by Anonymous Coward on Sunday October 31, 2010 @08:10PM (#34083962)

    but fuck no.
    I eagerly await comments saying how anglo-centric, racist, bigoted, culturally-imperialist the insistence of using ASCII is.
    The nuanced indignation is salve for my frantic masturbation.
    (If my post is the only one that mentions this, all the better)

    • Re: (Score:3, Insightful)

      by sznupi ( 719324 )

      Also: Slashdot would never, ever, ever be able to display code snippets of such a thing.

  • limiting? (Score:3, Insightful)

    by Tei ( 520358 ) on Sunday October 31, 2010 @08:11PM (#34083968) Journal

    the chinese have problems to learn his own language, because have all that signs, it make it unncesary complex.

    26 letter lets you write anything, you dont need more letters, really. ask any novelist.

    also, programming languages are something international, and not all keyboards have all keys, even keys like { or } are not on all keyboards, so tryiing to use funny characters like ñ would make programming for some people really hard.

    all in all, this is not a very smart idea , imho

    • Re:limiting? (Score:4, Interesting)

      by Sycraft-fu ( 314770 ) on Sunday October 31, 2010 @08:29PM (#34084116)

      For that matter, we could probably even get away with fewer letters. Some of them are redundant when you get down to it. What you need are enough letters that you can easily denote all the different sounds that are valid in a language. You don't have to have a dedicated letter for all of them either; it can be through combination (for example the oo in soothe) or through context sensitivity (such as the o in some in context with the e on the end). We could probably knock off a few characters if we tried. Whether that is worth it or not I don't know, but we sure as hell shouldn't be looking at adding MORE.

      Also, in terms of programming, a big problem is that of ambiguity. Compilers can't handle it; their syntax and grammar are rigidly defined, as they must be. That's the reason we have programming languages rather than simply programming in a natural language: natural language is too imprecise, a computer cannot parse it. We need a more rigidly defined language.

      Well as applied to unicode programming that means that languages are going to get way more complex if you want to provide an "English" version of C and then a "Chinese" version and a "French" version and so on where the commands, and possibly the grammar, differ slightly. It would get complex probably to the point of impossibility if you then want them to be able to be blended, where you could use different ones in the same function, or maybe on the same line.

    • Re: (Score:3, Informative)

      by yuje ( 1892616 )
      China has greater than 90% literacy, and the more advanced Chinese speaking societies (Hong Kong, Taiwan, Macau, Singapore) basically have full Chinese literacy. While Japan uses a smaller subset of those characters, the Japanese have full literacy and seemed to have functioned perfectly well while retaining those characters in their writing system. The Chinese people hardly have problems learning, reading, or writing their own language.

      the chinese have problems to learn his own language, because have all that signs, it make it unncesary complex.

      26 letter lets you write anything, you dont need more letters, really. ask any novelist.

      also, programming languages are something international, and not all keyboards have all keys, even keys like { or } are not on all keyboards, so tryiing to use funny characters like ñ would make programming for some people really hard.

      all in all, this is not a very smart idea , imho

      Judging by your post, it appears that you have problems learning your own language. It certainly appears that simple spelling, capitalization, punctuation and correct grammar in the English language are apparently beyond your abilities.

      • Re: (Score:3, Insightful)

        by pipatron ( 966506 )

        Judging by your post, it appears that you have problems learning your own language. It certainly appears that simple spelling, capitalization, punctuation and correct grammar in the English language are apparently beyond your abilities.

        Did it ever occur to you that the person you replied to isn't a native English speaker?

  • This is nonsense (Score:5, Insightful)

    by Kohath ( 38547 ) on Sunday October 31, 2010 @08:14PM (#34083988)

    Programming languages usually have too much syntax and too much expressiveness, not too little. We don't need them to be even more cryptic and even more laden with hidden pitfalls for someone who is new, or imperfectly vigilant, or just makes a mistake.

    If anything, programming needs to be less specific. Tell the system what you're trying to do and let the tools write the code and optimize it for your architecture.

    We don't need larger character sets. We don't need more programming languages or more language features. We need more productive tools, software that adapts to multithreaded operation and GPU-like processors, tools that prevent mistakes and security bugs, and ways to express software behavior that are straightforward enough to actually be self-documenting or easily explained fully with short comments.

    Focusing on improving programming languages is rearranging the deck chairs.

    • Re: (Score:3, Interesting)

      by Twinbee ( 767046 )
      One day, I think we'll have a universal language that everyone uses (yeah English would suit me, but I don't care as long as whatever language it is, everyone uses it). Efficiency would rocket through the roof, and hence we'll save billions or trillions of pounds.

      In the same way, we'll all be using a single programming language too (even if that language combines more than one paradigm). Yes competition is good in the mean time, but I mean ultimately. It'll be as fast as C or machine code, but as readabl
  • No we don't (Score:5, Informative)

    by Sycraft-fu ( 314770 ) on Sunday October 31, 2010 @08:19PM (#34084022)

    Because I don't want to have to own a 2000 key keyboard, or alternatively learn a shitload of special key combos to produce all sorts of symbols. The usefulness of ASCII, and just of the English/Germanic/Latin character set and Arabic numerals in general is that it is fairly small. You don't need many individual glyphs to represent what you are talking about. A normal 101 key keyboard is enough to type it out and have enough extra keys for controls that we need.

    To see the real absurdity of it, apply the same logic to the numerals of the character set. Let's stop using Arabic numerals, let's use something more. Let's have special symbols to denote commonly used values (like 20, 25, 100, 1000). Let's have different number sets for different bases so that a 3 can be told what base it's in just by the way it looks! ...

    Or maybe not. Maybe we should stick with the Arabic numerals. There's a reason they are so widely used: The Indians/Arabs got it right. It is simple, direct, and we can represent any number we need easily. Combining them with simple character indicators like H to indicate hex works just fine for base as well.
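
    That last point is roughly how today's languages already spell it; a quick Python illustration (a prefix rather than an H suffix, but the same idea):

        print(0x1f)           # 31, hexadecimal
        print(0o17)           # 15, octal
        print(0b1010)         # 10, binary
        print(int("ff", 16))  # 255, parsing text in an explicitly given base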

    You might notice that even languages that don't use the English/ASCII character set tend to use keyboards that use it. Japanese and Chinese enter transliterated expressions that the computer then interprets as glyphs. It doesn't have to be that way - they could use different keyboards, some of them rather large depending on the character set being used - but they don't. It is easy and convenient to just use the smaller, widely used character set.

    Now none of this means that you can't use Unicode in code, that strings can't be stored using it, that programs can't display it. Indeed most programs these days can handle it, just fine. However to start coding in it? To try and design languages to interpret it? To make things more complex for their own sake? Why?

    I am just trying to figure out what he thinks would be gained here. Remember also that the programming languages and the compilers would need to be changed at a low level. Compilers do not tolerate ambiguity: if a command is going to change from a string of ASCII characters to a single Unicode one, that has to be changed in the compiler, made clear in the language specs, and so on.

  • ASCII art is cool! (Score:5, Insightful)

    by Joe The Dragon ( 967727 ) on Sunday October 31, 2010 @08:20PM (#34084038)

    ASCII art is cool!

  • by philgross ( 23409 ) on Sunday October 31, 2010 @08:22PM (#34084046) Homepage
    Sun's Fortress language allowed you to use real, LaTeX-formatted math as source code. They reasoned, correctly I think, that for the mathematically literate, this would make the programs far clearer. Google for Fortress Programming Language Tutorial.
  • by thisisauniqueid ( 825395 ) on Sunday October 31, 2010 @08:24PM (#34084074)
    Fortress [wikipedia.org] allows you to code in UTF-8. However it has a multi-char ASCII equivalent for every Unicode mathematical symbol that you can use, so there is a bijective map between the Unicode and ASCII versions of the source, and you can view/edit in either. That is the only acceptable way to advocate using Unicode anywhere in programming source other than string constants. Programming languages that use ASCII have done well over those that don't, for the same reason that Unicode has done well over binary formats.
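
    A toy sketch of that round-trip property follows; the operator table is invented for illustration (it is not Fortress's actual mapping), and a real converter would work on tokens rather than raw string replacement.

        # Bijective ASCII <-> Unicode operator table (illustrative only).
        ASCII_TO_UNICODE = {"<=": "≤", ">=": "≥", "!=": "≠", "->": "→", "SUM": "∑"}
        UNICODE_TO_ASCII = {u: a for a, u in ASCII_TO_UNICODE.items()}
        assert len(UNICODE_TO_ASCII) == len(ASCII_TO_UNICODE)  # stays bijective

        def to_unicode(src):
            for a, u in ASCII_TO_UNICODE.items():
                src = src.replace(a, u)
            return src

        def to_ascii(src):
            for u, a in UNICODE_TO_ASCII.items():
                src = src.replace(u, a)
            return src

        line = "if x <= limit -> total = SUM(values)"
        assert to_ascii(to_unicode(line)) == line   # edits in either view can round-trip
        print(to_unicode(line))                     # if x ≤ limit → total = ∑(values)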
  • Haskell (Score:3, Interesting)

    by kshade ( 914666 ) * on Sunday October 31, 2010 @08:28PM (#34084114)
    Haskell supports various Unicode characters as operators and it makes me want to puke. http://hackage.haskell.org/trac/haskell-prime/wiki/UnicodeInHaskellSource [haskell.org] IMO one of the great things about programming nowadays is that you can use descriptive names without feeling bad. Single-character identifiers from different alphabets already rub me the wrong way in mathematics. Keep 'em out of my programming languages!

    Bullshit from the article:

    Unicode has the entire gamut of Greek letters, mathematical and technical symbols, brackets, brockets, sprockets, and weird and wonderful glyphs such as "Dentistry symbol light down and horizontal with wave" (0x23c7). Why do we still have to name variables OmegaZero when our computers now know how to render 0x03a9+0x2080 properly?

    OmegaZero is at least something everybody will recognize. And why would you name a variable like that anyway? It's programming, not math, use descriptive names.

    But programs are still decisively vertical, to the point of being horizontally challenged. Why can't we pull minor scopes and subroutines out in that right-hand space and thus make them supportive to the understanding of the main body of code?

    Because we're not using the same IDE?

    And need I remind anybody that you cannot buy a monochrome screen anymore? Syntax-coloring editors are the default. Why not make color part of the syntax? Why not tell the compiler about protected code regions by putting them on a framed light gray background? Or provide hints about likely and unlikely code paths with a green or red background tint?

    ... what?

    For some reason computer people are so conservative that we still find it more uncompromisingly important for our source code to be compatible with a Teletype ASR-33 terminal and its 1963-vintage ASCII table than it is for us to be able to express our intentions clearly.

    ... WHAT? If you don't express your intentions clearly in a program it won't work!

    And, yes, me too: I wrote this in vi(1), which is why the article does not have all the fancy Unicode glyphs in the first place.

    vim does Unicode just fine. And from the Wikipedia entry on the author (http://en.wikipedia.org/wiki/Poul-Henning_Kamp):

    A post by Poul-Henning is responsible for the widespread use of the term bikeshed colour to describe contentious but otherwise meaningless technical debates over trivialities in open source projects.

    Irony? Why does this guy come off as an idiot who got annoyed by VB in this article when he clearly should know better?

  • by theodp ( 442580 ) on Sunday October 31, 2010 @08:36PM (#34084182)

    From Idiocracy: Keyboard for hospital admissions [flickr.com]

  • by Tridus ( 79566 ) on Sunday October 31, 2010 @08:40PM (#34084210) Homepage
    He comes up with a bunch of ideas at the end that are out to lunch. Let's take a look:

    Unicode has the entire gamut of Greek letters, mathematical and technical symbols, brackets, brockets, sprockets, and weird and wonderful glyphs such as "Dentistry symbol light down and horizontal with wave" (0x23c7). Why do we still have to name variables OmegaZero when our computers now know how to render 0x03a9+0x2080 properly?

    Well, let's think. Possibly because nobody knows what 0x03a9+0x2080 does without looking it up, and nobody seeing the character it produces would know how to type said character again without looking it up? I know consulting a wall-sized "how to type X" chart is the first thing I want to do every 3 lines of code.

    While we are at it, have you noticed that screens are getting wider and wider these days, and that today's text processing programs have absolutely no problem with multiple columns, insert displays, and hanging enclosures being placed in that space? But programs are still decisively vertical, to the point of being horizontally challenged. Why can't we pull minor scopes and subroutines out in that right-hand space and thus make them supportive to the understanding of the main body of code?

    If you actually look at word processing programs, the document is also highly vertical. The horizontal stuff is stuff like notes, comments, revisions, and so on. Putting source code comments on the side might be a useful idea, but putting the code over there won't be unless the goal is to make it harder to read. (That said, widescreen monitors suck for programming.)

    And need I remind anybody that you cannot buy a monochrome screen anymore? Syntax-coloring editors are the default. Why not make color part of the syntax? Why not tell the compiler about protected code regions by putting them on a framed light gray background? Or provide hints about likely and unlikely code paths with a green or red background tint?

    So anybody who has some color-blindness (which is not a small number of people) can't understand your program? Or maybe we should make a red + do something different than a blue +? That's great once you do it six times, then it's just a mess. (Now if you want to have the code editor put protected regions on a framed light gray background, sure. But there's nothing wrong with sticking "protected" in front of it to define what it is.) It seems like he's trying to solve a problem that doesn't really exist by doing something that's a whole lot worse.

  • by lkcl ( 517947 ) <lkcl@lkcl.net> on Sunday October 31, 2010 @09:21PM (#34084478) Homepage

    the point has been entirely missed, and blame placed on ASCII [correlation is not causation]. when you look at the early languages - FORTH, LISP, APL, and later even Awk and Perl, you have to remember that these languages were living in an era of vastly less memory. FORTH interpreters fit into 1k with room to spare for goodness sake! these languages tried desperately to save as much space and resources as possible, at the expense of readability.

    it's therefore easy to place blame onto ASCII itself.

    then you have compiled languages like c, c++, and interpreted ones like Python. these languages happily support unicode - but you look at free software applications written in those languages and they're still by and large kept to under 80 chars in length per line - why is that? it's because the simplest tools are not those moronic IDEs; the simplest programming tools for editing are straightforward ASCII text editors: vi and (god help us) emacs. so by declaring that "Thou Shalt Use A Unicode Editor For This Language" you've just shot the chances of success of any such language stone dead: no self-respecting systems programmer is going to touch it.

    not only that, but you also have the issue of international communication and collaboration. if the editor allows Kanji, Cyrillic, Chinese and Greek, contributors are quite likely to type comments in Kanji, Cyrillic, Chinese and Greek. the end-result is that every single damn programmer who wants to contribute must not only install Kanji, Cyrillic, Chinese and Greek unicode fonts, but also they must be able to read and understand Kanji, Cyrillic, Chinese and Greek. again: you've just destroyed the possibility of collaboration by terminating communication and understanding.

    then, also, you have the issue of revision control, diffs and patches. by moving to unicode, git, svn, bazaar, mercurial and cvs all have to be updated to understand how to treat unicode files - which they can't (they'll treat it as binary) - in order to identify lines that are added or removed, rather than store the entire file on each revision. bear in mind that you've just doubled (or quadrupled, for UCS-4) the amount of space required to store the revisions in the revision control systems' back-end database, and bear in mind that git repositories such as linux-2.6 are 650mb if you're lucky (and webkit 1gb); you have enough of a problem with space for big repositories as it is!

    but before that, you have to update the unix diff command and the unix patch command to do likewise. then, you also have to update git-format-patch and the git-am commands to be able to create and mail patches in unicode format (not straight SMTP ASCII). then you also have to stop using standard xterm and standard console for development, and move to a Unicode-capable terminal, but you also have to update the unix commands "more" and "less" to be able to display unicode diffs.

    there are good reasons why ASCII - the lowest common denominator - is used in programming languages: the development tools revolve around ASCII, the editors revolve around ASCII, the internationally-recognised language of choice (english) fits into ASCII. and, as said right at the beginning, the only reason why stupid obtuse symbols instead of straightforward words were picked was to cram as much into as little memory as possible. well, to some extent, as you can see with the development tools nightmare described above, it's still necessary to save space, making UNICODE a pretty stupid choice.

    lastly it's worth mentioning python's easy readability and its bang-per-buck ratio. by designing the language properly, you can still get vast amounts of work done in a very compact space. unlike, for example java, which doesn't even have multiple inheritance for god's sake, and the usual development paradigm is through an IDE not a text editor. more space is wasted through fundamental limitations in the language and the "de-facto" GUI development environment than through any "blame" attached to ASCII.

  • by melted ( 227442 ) on Sunday October 31, 2010 @11:39PM (#34085446) Homepage

    I wouldn't consider Mr. Pike an authority on programming language design. At Google, he's known for designing Sawzall (described here: http://static.googleusercontent.com/externIal_content/untrusted_dlcp/research.google.com/en/us/archive/sawzall-sciprog.pdf [googleusercontent.com]) - a language that's so feature poor, esoteric, and ass-backwards, that Google engineers curse at length every time they have to use it. And use it they have, since it's darn near impossible, for various reasons, to do certain things without it. Try as I may, I don't see anything in Go that would make it better than half a dozen existing alternatives. It's like reinventing the bicycle again, but this time with square wheels and without the saddle. Yes, you guessed it right, that's where that pipe goes on this particular bicycle.

  • pros? (Score:3, Insightful)

    by Charliemopps ( 1157495 ) on Sunday October 31, 2010 @11:57PM (#34085568)
    Ok, so everyone agrees this is a stupid idea... but are there ANY pros? I just don't understand the premise at all...
  • by Animats ( 122034 ) on Monday November 01, 2010 @12:21AM (#34085700) Homepage

    This has come up in the context of domain names, where a long, painful set of rules has been devised to try to prevent having two domain names which look similar but are different to DNS. If exact equality of text matters, it's helpful to have a limited character set for identifiers.

    There's currently a debate underway on Wikipedia over whether user names with unusual characters should be allowed. This isn't a language question; the issue is willful obfuscation by users who choose names with hard-to-type characters.

    As for having more operators, it's probably not worth it. It's been tried; both MIT and Stanford had, at one time, custom character sets, with most of the standard mathematical operators on the keys. This never caught on. In fact, operator overloading is usually a lose. Python ran into this. "+" was overloaded for concatenation. Then somebody decided that "*" should be overloaded, so that "a" + "a" was equivalent to 2*"a". The result is thus "aa". This leads to results like 2*"10" being "1010". The big mistake was defining a mixed-mode overload.
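
    The string overload being described, run directly (any recent Python behaves this way):

        print("a" + "a")   # aa   -- "+" on strings is concatenation
        print(2 * "a")     # aa   -- "*" with an int is repetition
        print(2 * "10")    # 1010 -- looks like doubling, is actually repetition
        print(2 * 10)      # 20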

    In C++, mixed-mode overloads are fully supported by the template system and a nightmare when reading code.

    In Mathematica, the standard representation for math uses long names for functions, completely avoiding the macho terseness the math community has historically embraced.

  • by glassware ( 195317 ) on Monday November 01, 2010 @01:33AM (#34085954) Homepage Journal

    I'm truly saddened to see so many people took this article summary so literally. If you read TFA, it's actually a very bright, intelligent, humorous example of programming insight. I found it a very delightful read and I wholeheartedly felt that the article presented its thoughts lightheartedly and without expectation of seriousness. To hear all the commenters here, it's as if the article ran puppies over with a steamroller.

    Please guys - I'm all for silly commentary. But read the article if you're going to pretend to write something clever. It's thoroughly tongue-in-cheek.
