Programming

Mr. Pike, Tear Down This ASCII Wall! 728

theodp writes "To move forward with programming languages, argues Poul-Henning Kamp, we need to break free from the tyranny of ASCII. While Kamp admires programming language designers like the Father-of-Go Rob Pike, he simply can't forgive Pike for 'trying to cram an expressive syntax into the straitjacket of the 95 glyphs of ASCII when Unicode has been the new black for most of the past decade.' Kamp adds: 'For some reason computer people are so conservative that we still find it more uncompromisingly important for our source code to be compatible with a Teletype ASR-33 terminal and its 1963-vintage ASCII table than it is for us to be able to express our intentions clearly.' So, should the new Hello World look more like this?"
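
As a rough illustration of the idea (the linked example is not reproduced here), here is a hedged sketch of what a "post-ASCII" Hello World can already look like in Python 3, which accepts non-ASCII identifiers per PEP 3131 even though its keywords and operators stay within ASCII:

    # Sketch only: Unicode identifiers and string literals, ASCII keywords.
    def приветствие(名前):                 # Cyrillic function name, Kanji parameter
        return "こんにちは, " + 名前 + "!"   # string literals have always taken Unicode

    print(приветствие("world"))            # prints: こんにちは, world!
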
This discussion has been archived. No new comments can be posted.

  • by enec ( 1922548 ) <jho@hajotus.net> on Sunday October 31, 2010 @07:58PM (#34083854) Homepage
    The thing with ASCII is that it's easy to write on standard keyboards, and does not require a specialized layout. Once someone can cram the necessary unicode symbols into a keyboard so that I don't have to remember arcane meta-codes or fiddle with pressing five different dead keys to get one symbol, I'm all for it.
  • by AsmCoder8088 ( 745645 ) on Sunday October 31, 2010 @07:58PM (#34083860)
    "Syntactic sugar causes cancer of the semicolon" - Alan Perlis.
  • Project Gutenberg (Score:5, Insightful)

    by symbolset ( 646467 ) on Sunday October 31, 2010 @07:59PM (#34083864) Journal

    Michael decided to use this huge amount of computer time to search the public domain books that were stored in our libraries, and to digitize these books. He also decided to store the electronic texts (eTexts) in the simplest way, using the plain text format called Plain Vanilla ASCII, so they can be read easily by any machine, operating system or software.

    - Marie Lebert [etudes-francaises.net]

    Since its humble beginnings in 1971 Project Gutenberg has reproduced and distributed thousands of works [gutenbergnews.org] to millions of people in - ultimately - billions of copies. They support ePub now and simple HTML, as well as robo-read audio files, but the one format that has been stable this whole time has been ASCII. It's also the format that is likely to survive the longest without change. Project Gutenberg texts can now be read on every e-reader, smartphone, tablet and PC.

    If you want to use Rich Text format, or XML, or PostScript or something else then fine - please do. But don't go trying to deprecate ASCII.

  • huh (Score:4, Insightful)

    by stoolpigeon ( 454276 ) * <bittercode@gmail> on Sunday October 31, 2010 @07:59PM (#34083866) Homepage Journal

    so we should start coding in Chinese?

    Seems easier to spell words with a small set of symbols than to learn a new symbol for every item in a huge set of terms.

  • Learn2code (Score:5, Insightful)

    by santax ( 1541065 ) on Sunday October 31, 2010 @08:00PM (#34083870)
    I can express my intentions just fine with ASCII. They have cunningly invented a system for that. It's called language and it comes in very handy. The only thing I'd consider missing is a pile-of-shit character. I could use that one right now.
  • by shutdown -p now ( 807394 ) on Sunday October 31, 2010 @08:04PM (#34083916) Journal

    If you want to use Rich Text format, or XML, or PostScript or something else then fine - please do. But don't go trying to deprecate ASCII.

    This is a false dichotomy. Plain text can be non-ASCII, and ASCII doesn't necessarily imply plain text. All the formats you've listed allow you to add either visual or semantic markup to text, whereas ASCII is simply a way to encode individual characters from a certain specific set. They do not propose to move to rich text for coding, but to move away from ASCII.
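
    A minimal sketch of that distinction (plain Python 3, standard library only): "plain text" says nothing about the byte encoding, and ASCII is just one very small encoding of one very small character set.

    # The same plain text encodes fine as UTF-8 but not as ASCII.
    source = "π = 3.14159  # no markup here, just not ASCII"

    utf8_bytes = source.encode("utf-8")   # works: UTF-8 covers all of Unicode
    try:
        source.encode("ascii")            # fails: 'π' has no ASCII code point
    except UnicodeEncodeError as err:
        print("not representable in ASCII:", err)
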

    There are still many reasonable arguments against it, but this isn't one of them.

  • It ain't broke! (Score:5, Insightful)

    by webbiedave ( 1631473 ) on Sunday October 31, 2010 @08:08PM (#34083944)
    Let's take our precious time on this planet to fix what's broken, not break what has clearly worked.
  • by shutdown -p now ( 807394 ) on Sunday October 31, 2010 @08:10PM (#34083958) Journal

    Yes, it's the next fad that just _everyone_ has to wear this season. Within 5 years, it will be something else

    Unicode has been around for, what, over 15 years now? It's part of countless specifications from W3C and ISO. All modern OSes and DEs (Windows, OS X, KDE, Gnome) use one or another encoding of Unicode as the default representation for strings. No, it's not going away anytime soon.

    If you want your code to remain parseable and cross-platform compatible and stable in both large and small tools, write it in flat, 7-bit ASCII.

    This may be a piece of good advice. Even for languages where Unicode in the source is officially allowed by the spec (e.g. Java or C#), many third-party tools are broken in that regard.

    You also get a significant performance benefit from avoiding the testing and decoding and localization and most especially the _testing_ costs for multiple regions.

    I don't see how this has any relevance to your previous point (writing the source code in ASCII). If your app source is in Unicode, it will still compile (or not compile) the same in any locale. And what would you be testing? The compiler?

    I've no idea what "decoding and localization" means in this context, either.

    Well, unless you're also advocating for the use of ASCII as the default runtime string encoding in apps, and completely forgoing localization. Which is fine if you only intend your app to be used in the USA, I guess (and even then, considering take-up of Spanish, it may not be such a wise idea).

  • limiting? (Score:3, Insightful)

    by Tei ( 520358 ) on Sunday October 31, 2010 @08:11PM (#34083968) Journal

    the chinese have problems learning their own language because of all those signs; it makes things unnecessarily complex.

    26 letters let you write anything, you don't need more letters, really. ask any novelist.

    also, programming languages are something international, and not all keyboards have all keys. even keys like { or } are not on all keyboards, so trying to use funny characters like ñ would make programming really hard for some people.

    all in all, this is not a very smart idea, imho

  • This is nonsense (Score:5, Insightful)

    by Kohath ( 38547 ) on Sunday October 31, 2010 @08:14PM (#34083988)

    Programming languages usually have too much syntax and too much expressiveness, not too little. We don't need them to be even more cryptic and even more laden with hidden pitfalls for someone who is new, or imperfectly vigilant, or just makes a mistake.

    If anything, programming needs to be less specific. Tell the system what you're trying to do and let the tools write the code and optimize it for your architecture.

    We don't need larger character sets. We don't need more programming languages or more language features. We need more productive tools, software that adapts to multithreaded operation and GPU-like processors, tools that prevent mistakes and security bugs, and ways to express software behavior that are straightforward enough to actually be self-documenting or easily explained fully with short comments.

    Focusing on improving programming languages is rearranging the deck chairs.

  • Re:huh (Score:5, Insightful)

    by MightyYar ( 622222 ) on Sunday October 31, 2010 @08:17PM (#34084010)

    so we should start coding in Chinese?

    Exactly! Keep the "alphabet" small, but the possible combination of "words" infinite.

    You don't need a glyph for "=>" for instance. Anyone who knows what = and > mean individually can discern the meaning.

    And further (I know, why RTFA?):

    But programs are still decisively vertical, to the point of being horizontally challenged. Why can't we pull minor scopes and subroutines out in that right-hand space and thus make them supportive to the understanding of the main body of code?

    This is easily done with a split screen, and sounds like an editor feature to me. Not sure why you'd want a programming language that was tied to monitor size and aspect ratio.

    Why not make color part of the syntax? Why not tell the compiler about protected code regions by putting them on a framed light gray background? Or provide hints about likely and unlikely code paths with a green or red background tint?

    Again, if you want this, do it in the editor. Doesn't he know anyone who is colorblind? And even a normally sighted user can only differentiate so many color choices, which would limit the language. And forget looking up things on Google: "Meaning of green highlighted code"... no wait "Meaning of hunter-green highlighted code" hmmmm... "Meaning of light-green highlighted code"... you get the idea.

  • ASCII art is cool! (Score:5, Insightful)

    by Joe The Dragon ( 967727 ) on Sunday October 31, 2010 @08:20PM (#34084038)

    ASCII art is cool!

  • by arth1 ( 260657 ) on Sunday October 31, 2010 @08:22PM (#34084052) Homepage Journal

    Once you've had to do an ad-hoc codefix through a serial console or telnet, you appreciate that you can write the code in 7-bit ASCII.

    It's not about being conservative. It's about being compatible. Compatibility is not a bad thing, even if it means you have to run your unicode text through a filter to embed it, or store it in external files or databases.

    It'd also be hell to do code review on unicode programs. You can't tell many of the symbols apart. Is that a hyphen or a soft hyphen at the end of that line? Or perhaps a minus? And is that a diameter sign, a zero, or the Danish/Norwegian letter "Ø" over there? Why doesn't that multiplication work? Oh, someone used an asterisk instead of the multiplication symbol, which looks the same in this font.
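
    A small sketch of how hard that gets (plain Python 3, unicodedata from the standard library): these characters are easy to confuse on screen, yet every one is a distinct code point.

    # Lookalike characters a code review would have to tell apart.
    import unicodedata

    for ch in ["-", "\u00ad", "\u2212", "0", "\u00d8", "*", "\u00d7"]:
        print("U+%04X %s" % (ord(ch), unicodedata.name(ch)))

    # U+002D HYPHEN-MINUS
    # U+00AD SOFT HYPHEN
    # U+2212 MINUS SIGN
    # U+0030 DIGIT ZERO
    # U+00D8 LATIN CAPITAL LETTER O WITH STROKE
    # U+002A ASTERISK
    # U+00D7 MULTIPLICATION SIGN
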

    No, thanks, keep it compatible, and parseable by humans, please.

  • by Anonymous Coward on Sunday October 31, 2010 @08:27PM (#34084104)

    "Yes, it's the next fad that just _everyone_ has to wear. this season."

    Like the Metric System.

  • by Tridus ( 79566 ) on Sunday October 31, 2010 @08:40PM (#34084210) Homepage
    He comes up with a bunch of ideas at the end that are out to lunch. Let's take a look:

    Unicode has the entire gamut of Greek letters, mathematical and technical symbols, brackets, brockets, sprockets, and weird and wonderful glyphs such as "Dentistry symbol light down and horizontal with wave" (0x23c7). Why do we still have to name variables OmegaZero when our computers now know how to render 0x03a9+0x2080 properly?

    Well, let's think. Possibly because nobody knows what 0x03a9+0x2080 does without looking it up, and nobody seeing the character it produces would know how to type said character again without looking it up? I know consulting a wall-sized "how to type X" chart is the first thing I want to do every 3 lines of code.
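
    (For reference, a quick sketch of what that pair of code points actually is, using Python 3's \N{...} named escapes -- which rather makes the "look it up" point:)

    omega_zero = "\N{GREEK CAPITAL LETTER OMEGA}\N{SUBSCRIPT ZERO}"
    print(omega_zero)                           # Ω₀
    print([hex(ord(c)) for c in omega_zero])    # ['0x3a9', '0x2080']
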

    While we are at it, have you noticed that screens are getting wider and wider these days, and that today's text processing programs have absolutely no problem with multiple columns, insert displays, and hanging enclosures being placed in that space? But programs are still decisively vertical, to the point of being horizontally challenged. Why can't we pull minor scopes and subroutines out in that right-hand space and thus make them supportive to the understanding of the main body of code?

    If you actually look at word processing programs, the document is also highly vertical. The horizontal stuff is stuff like notes, comments, revisions, and so on. Putting source code comments on the side might be a useful idea, but putting the code over there won't be unless the goal is to make it harder to read. (That said, widescreen monitors suck for programming.)

    And need I remind anybody that you cannot buy a monochrome screen anymore? Syntax-coloring editors are the default. Why not make color part of the syntax? Why not tell the compiler about protected code regions by putting them on a framed light gray background? Or provide hints about likely and unlikely code paths with a green or red background tint?

    So anybody who has some color-blindness (which is not a small number) can't understand your program? Or maybe we should make a red + do something different than a blue +? That's great until you've done it six times; then it's just a mess. (Now if you want to have the code editor put protected regions on a framed light gray background, sure. But there's nothing wrong with sticking "protected" in front of it to define what it is.) It seems like he's trying to solve a problem that doesn't really exist by doing something that's a whole lot worse.

  • by Sycraft-fu ( 314770 ) on Sunday October 31, 2010 @08:57PM (#34084312)

    Because that's what you find in JIS X 0213:2000. Even if you simplify it to just what is needed for basic literacy, you are talking 2000 characters. If you have that many characters, your choices are either a lot of keys, a lot of modifier keys, or some kind of transliteration, which is what is done now. There is just no way around this. You cannot have a language that is composed of a ton of glyphs and yet also have an extremely simple, small entry system.

    You can have a simple system with few characters, like we do now, but you have to enter multiple ones to specify the glyph you want. You could have a direct entry system where one keypress is one glyph, but you'd need a massive amount of keys. You could have a system with a small number of keys and a ton of modifier keys, but then you have to remember what modifier, or modifier combination, gives what. There is no easy, small, direct system, there cannot be.
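
    A toy sketch of the transliteration option in Python (the mapping below is a tiny, hypothetical fragment; a real IME does far more): a small key set in, a larger glyph set out, several keystrokes per glyph.

    ROMAJI_TO_KANA = {"ko": "こ", "ni": "に", "ka": "か", "n": "ん"}

    def transliterate(keys):
        out, i = "", 0
        while i < len(keys):
            for length in (2, 1):          # prefer two-letter syllables first
                chunk = keys[i:i + length]
                if chunk in ROMAJI_TO_KANA:
                    out += ROMAJI_TO_KANA[chunk]
                    i += length
                    break
            else:
                out += keys[i]             # pass through anything unknown
                i += 1
        return out

    print(transliterate("kokoni"))         # ここに -- three glyphs, six keystrokes
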

    Also, is it any more tedious than any Latin/Germanic language that only uses a small character set? While you may enter more characters than final glyphs, do you enter more characters than you would to express the same idea in French or English?

  • by scdeimos ( 632778 ) on Sunday October 31, 2010 @09:11PM (#34084382)

    Unicode has been around for, what, over 15 years now? It's part of countless specifications from W3C and ISO. All modern OSes and DEs (Windows, OS X, KDE, Gnome) use one or another encoding of Unicode as the default representation for strings. No, it's not going away anytime soon.

    And yet major vendors like Microsoft still get Unicode wrong. A couple of examples:

    • Windows Find/Search cannot find matches in Unicode text files, surely one of the simplest file formats of all, even though the command line FIND tool can (unless you install/enable Windows Indexing Service which then cripples the system with its stupid default indexing policies). This has been broken since Windows NT 4.0.
    • Microsoft Excel cannot open Unicode CSV and tab-delimited files automatically (i.e.: by drag-and-drop or double-click from Explorer) - you have to go through Excel's File/Open menu and go through the stupid import wizard.
    • Abuse of Unicode code points by various Office apps, causing interoperability issues even amongst themselves.
  • Re:Not only no, (Score:3, Insightful)

    by sznupi ( 719324 ) on Sunday October 31, 2010 @09:12PM (#34084394) Homepage

    Also: Slashdot would never, ever, ever be able to display code snippets of such a thing.

  • by lkcl ( 517947 ) <lkcl@lkcl.net> on Sunday October 31, 2010 @09:21PM (#34084478) Homepage

    the point has been entirely missed, and blame placed on ASCII [correlation is not causation]. when you look at the early languages - FORTH, LISP, APL, and later even Awk and Perl, you have to remember that these languages were living in an era of vastly less memory. FORTH interpreters fit into 1k with room to spare for goodness sake! these languages tried desperately to save as much space and resources as possible, at the expense of readability.

    it's therefore easy to place blame onto ASCII itself.

    then you have compiled languages like c, c++, and interpreted ones like Python. these languages happily support unicode - but you look at free software applications written in those languages and they're still by and large kept to under 80 chars in length per line - why is that? it's because the simplest tools are not those moronic IDEs; the simplest programming tools for editing are straightforward ASCII text editors: vi and (god help us) emacs. so by declaring that "Thou Shalt Use A Unicode Editor For This Language" you've just shot the chances of success of any such language stone dead: no self-respecting systems programmer is going to touch it.

    not only that, but you also have the issue of international communication and collaboration. if the editor allows Kanji, Cyrillic, Chinese and Greek, contributors are quite likely to type comments in Kanji, Cyrillic, Chinese and Greek. the end-result is that every single damn programmer who wants to contribute must not only install Kanji, Cyrillic, Chinese and Greek unicode fonts, but also they must be able to read and understand Kanji, Cyrillic, Chinese and Greek. again: you've just destroyed the possibility of collaboration by terminating communication and understanding.

    then, also, you have the issue of revision control, diffs and patches. by moving to unicode, git, svn, bazaar, mercurial and cvs all have to be updated to understand how to treat unicode files - which they can't (they'll treat it as binary) - in order to identify lines that are added or removed, rather than store the entire file on each revision. bear in mind that you've just doubled (or quadrupled, for UCS-4) the amount of space required to store the revisions in the revision control systems' back-end database, and bear in mind that git repositories such as linux-2.6 are 650mb if you're lucky (and webkit 1gb) - you have enough of a problem with space for big repositories as it is!

    but before that, you have to update the unix diff command and the unix patch command to do likewise. then, you also have to update git-format-patch and the git-am commands to be able to create and mail patches in unicode format (not straight SMTP ASCII). then you also have to stop using standard xterm and standard console for development, and move to a Unicode-capable terminal, but you also have to update the unix commands "more" and "less" to be able to display unicode diffs.

    there are good reasons why ASCII - the lowest common denominator - is used in programming languages: the development tools revolve around ASCII, the editors revolve around ASCII, the internationally-recognised language of choice (english) fits into ASCII. and, as said right at the beginning, the only reason why stupid obtuse symbols instead of straightforward words were picked was to cram as much into as little memory as possible. well, to some extent, as you can see with the development tools nightmare described above, it's still necessary to save space, making UNICODE a pretty stupid choice.

    lastly it's worth mentioning python's easy readability and its bang-per-buck ratio. by designing the language properly, you can still get vast amounts of work done in a very compact space. unlike, for example java, which doesn't even have multiple inheritance for god's sake, and the usual development paradigm is through an IDE not a text editor. more space is wasted through fundamental limitations in the language and the "de-facto" GUI development environment than through any "blame" attached to ASCII.

  • Re:huh (Score:5, Insightful)

    by ScrewMaster ( 602015 ) * on Sunday October 31, 2010 @09:29PM (#34084528)

    diagrammatic is simply a fucking pain in the ass.

    Amen.

    Every scientist I've ever met that had any experience writing code vastly prefers the C based LabWindows to the diagrammatic LabView

    Well, I'm not a scientist, just a humble software engineer, and back in my contract coding days I was always faced with managers who would try to push me to use LabView. They had this mistaken belief that because it was "visual", a. they could understand it, b. it was simpler, and c. I should charge less if I used it.

    I told them that a. it's still programming, and beyond a certain level of complexity understanding still requires sufficient knowledge; b. refer to a.; and c. if they were going to force me to waste time fighting such an environment up 'til the point where I found something critical that it couldn't do (such as run fast enough), and I'd end up re-coding it the right way anyway, they damn well weren't going to pay me less.

  • by goombah99 ( 560566 ) on Sunday October 31, 2010 @09:38PM (#34084626)

    Grep on ASCII is more than 100x faster for complex string expressions. There are a lot of good reasons not to use Unicode.

  • Re:huh (Score:4, Insightful)

    by mr_mischief ( 456295 ) on Sunday October 31, 2010 @09:44PM (#34084676) Journal

    Let someone who reads and writes Chinese develop a programming language with Chinese keywords and syntax, then. Programming in English-like languages has largely been a waste of time, remember. English keywords are great, but using English syntax for a programming language is a nightmare. Everyone uses a syntax that's simpler than English. Even Perl's grammar is simpler than English, and that grammar is massive compared to most programming languages.

  • by Z34107 ( 925136 ) on Sunday October 31, 2010 @09:45PM (#34084678)

    I find the act of reading your descriptions laborious, and have decided to never bother learning Japanese just so I don't have to put up with that kind of thing EVER.

    "That kind of thing" is quite literally hitting the "space" key between words. I'm surprised you managed to put up with it long enough to finish your post.

  • by 644bd346996 ( 1012333 ) on Sunday October 31, 2010 @09:55PM (#34084732)

    Try reading an EULA and then come back and tell me that English is sufficiently expressive as-is.

  • by pz ( 113803 ) on Sunday October 31, 2010 @09:58PM (#34084742) Journal

    When I was a young graduate student building my first experimental setup, a professor who was older and wiser than me suggested that data should be saved in ASCII whenever possible because space was relatively inexpensive and time is always scarce. Although I thought that a bit odd, I did follow his advice.

    The result? I can use almost any editor to read my data files from the very start of my career, closing in on 30 years ago. Just this past week, that was an important factor in salvaging some recently-collected data. In contrast, I can't always read the MS Word files -- an example of an extended character set -- from even a few years ago, and I sure as hell can't view them in almost any editor. Sure, with enough time, I can, or could, figure out how to read them, but, as the wise professor rightly pointed out, time is scarce.

    Thus, compatibility is important, and the most compatible data and document format is human-readable plain ASCII.

  • Re:huh (Score:5, Insightful)

    by tftp ( 111690 ) on Sunday October 31, 2010 @10:08PM (#34084826) Homepage

    it might be fully resonable to have classes related to financial years (finansår), close of year (årsavslutning), the tax report (årsoppgave) and so on.

    And one day the code is sold to China or India, and then people there can't even find a way to enter the glyph. Same if a visiting programmer has to work on the code, or if you need to send a class to another country for some reason.

    How far would Linux have gotten if Linus had decided to use Finnish (or Swedish) words, written with all the proper Unicode characters, for all the variables and types?

  • by Angst Badger ( 8636 ) on Sunday October 31, 2010 @10:18PM (#34084916)

    Funny you mention it, but the first thing I thought of was Japanese text entry, followed by the autocorrect/text-expansion facility that most word processors have, which is much the same thing applied to western languages. I've also thought it would be good to be able to make use of mathematical symbols for, you know, mathematics. The same could be said of word processor-like formatting for comments. I'm dubious about using it for actual code, but I'm open to having my mind changed about that.

    (Color-as-syntax has already been done [colorforth.com] in Chuck Moore's latest implementation of Forth. It's not a bad idea, though I suspect it works better with low-level languages like Forth than it would with a higher level language.)

    The second thing I thought of was what I always think when someone starts complaining about what languages should and shouldn't have, which is this: Quit bitching and go implement it, smart boy. Come up with something good, and I'll use it, but I am not about to run out and implement someone else's ideas. I have a day job where I get to do that all fucking day long, and they actually pay me. And contrary to popular belief, ideas are cheap and plentiful, including good ideas. The time, effort, and dedication that it takes to actually implement them are what's in short supply.

  • by icebraining ( 1313345 ) on Sunday October 31, 2010 @10:37PM (#34085012) Homepage

    I'm Portuguese and our language uses accents, but if I ever get a source code file with accents in variable names I'll insult the person. Writing with accents in programming serves absolutely no purpose and it only causes problems. It's slower (two key presses instead of one), it's less compatible, it can be troublesome if I need to send the code to someone without accented characters on their keyboard, etc.

    In fact, not only do I disagree with accents in programming, but I prefer writing all the names in English. Where would OSS be if all the Gnome devs had to learn Spanish to contribute to De Icaza's code, or Finnish to contribute to Linux?

  • by MadMaverick9 ( 1470565 ) on Sunday October 31, 2010 @10:56PM (#34085124)
    From TFA:

    And, yes, me too: I wrote this in vi(1), which is why the article does not have all the fancy Unicode glyphs in the first place.

    Excuse me - vim can handle UTF-8 just fine: UTF-8 file names and UTF-8 content, on vanilla Slackware 13.1.
    http://www.cl.cam.ac.uk/~mgk25/unicode.html#apps [cam.ac.uk]
    # Vim (the popular clone of the classic vi editor) supports UTF-8 with wide characters and up to two combining characters starting from version 6.0.
    # Emacs has quite good basic UTF-8 support starting from version 21.3. Emacs 23 changed the internal encoding to UTF-8.
    And svn can handle utf-8 as well - http://svnbook.red-bean.com/en/1.4/svn.advanced.l10n.html [red-bean.com].

    The repository stores all paths, filenames, and log messages in Unicode, encoded as UTF-8.

    All it requires is ... set your locale and lang. "export LANG=en_DK.utf8" in "/etc/profile.d/lang.sh" (Slackware 13.1) and add some better fonts maybe.

    I apologize for repeating myself. I've written the same thing further down already in reply to another user's post. But I just read tfa and felt the need to reply to the author of tfa.

  • Re:huh (Score:4, Insightful)

    by bh_doc ( 930270 ) <brendon@quantumf ... l.net minus city> on Sunday October 31, 2010 @11:14PM (#34085252) Homepage

    As a scientist who has a fair bit of coding experience, including LabVIEW, ++ this.

    What particularly annoys me about visual code like LabVIEW is that you can't diff. So change tracking is a pain in the arse, and forget distributed development.

    LabVIEW itself is good for setting up a quick UI and connecting things to it, but any serious processing? ...No, thanks. If I could get my hands on something else that had the UI prototyping ease, connectivity to experimental devices (motion controllers, for example), but based on a textual language, I'd be a happy camper. (There are some things that come close, I'm sure, though I've not had the time to properly search. Busy scientist is busy...)

  • by MadMaverick9 ( 1470565 ) on Sunday October 31, 2010 @11:14PM (#34085260)

    I blame the cult of Unix/Linux to some degree. The whole OS and all its tools and standards are based on ASCII text

    you ever heard of the nls_utf8 kernel module? ever seen the "LANG" environment variable? set it to "en_DK.utf8" for example and you're ready to go.
    vim, svn, rm, mv, cp can handle utf8 just fine. this being on slackware 13.1.

  • by santax ( 1541065 ) on Sunday October 31, 2010 @11:20PM (#34085308)
    Visual programming isn't big for the same reason people talk rather than draw pictures to communicate in day-to-day life. A decent, well-explained and understood language is faster, more universal and more convenient. Drawings are used in situations where you can't communicate through a spoken or written language - as a replacement tool. It's very basic, since with a spoken or written language you can have a much more precise, uniform interpretation of your intentions. The same goes for visual programming at this moment in time. I won't say there isn't a future for it, but as a replacement for the tried and tested programming environments it has a long way to go. Come up with a visual programming system for writing actually sophisticated code and you might have yourself a winner. The only party that comes to mind is LabVIEW from NI.
  • very bad idea (Score:2, Insightful)

    by t2t10 ( 1909766 ) on Sunday October 31, 2010 @11:22PM (#34085322)

    Using full Unicode for programming causes lots of problems; even string equality is a tricky proposition for Unicode, let alone precise parsing. Most people don't even know how to enter Unicode characters not found in their own language. And once you allow Unicode, people will do things like they did in APL.
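
    A concrete sketch of the string-equality trap (plain Python 3, standard library only): two strings that render identically compare unequal until they are normalized.

    import unicodedata

    composed   = "caf\u00e9"      # 'é' as one precomposed code point (U+00E9)
    decomposed = "cafe\u0301"     # 'e' followed by COMBINING ACUTE ACCENT (U+0301)

    print(composed == decomposed)                       # False
    print(unicodedata.normalize("NFC", composed) ==
          unicodedata.normalize("NFC", decomposed))     # True
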

    The only place Unicode should be allowed--if at all--is in comments. Everything else should be in ASCII.

  • by The Mighty Buzzard ( 878441 ) on Sunday October 31, 2010 @11:43PM (#34085476)

    Hunt and peck? I don't even want to have to remember that many glyphs exist, much less where to find them. If it can't be expressed with a standard qwerty keyboard and one (shift) modifier key, it's too fucking complicated to bother with as general text entry.

  • by Dahamma ( 304068 ) on Sunday October 31, 2010 @11:54PM (#34085544)

    This proposal isn't about giving programmers more power to code, it's about making it easier for non-english speakers who aren't coders to read the code that their programmers write.

    No, actually, it's not. Java already allows Unicode variable and function names. This is about using Unicode in basic syntax of the language, which is IMO idiotic if you ever want your language to be adopted. I mean, he says it himself in the last paragraph - he didn't use any Unicode in his article because he was using vi, which makes it difficult - not to mention even if it was doable, it would be tedious as hell with a standard keyboard.

  • by angus77 ( 1520151 ) on Sunday October 31, 2010 @11:55PM (#34085554)
    And we only use the Roman alphabet for English because it was a widespread standard, even though we already had a functioning writing system (runes) that suited Englisc better and had worked for us for centuries. We mangle the system with digraphs and multiple sounds for many of the characters (especially the vowels). It's a hack. We've made do.
  • pros? (Score:3, Insightful)

    by Charliemopps ( 1157495 ) on Sunday October 31, 2010 @11:57PM (#34085568)
    OK, so everyone agrees this is a stupid idea... but are there ANY pros? I just don't understand the premise at all...
  • by Jurily ( 900488 ) <jurily&gmail,com> on Monday November 01, 2010 @12:29AM (#34085738)

    If he really wants to go into creative writing, we might remind him that the 26 letters of the alphabet were good enough for Shakespeare.

    Exactly. Completely Missing The Point at its best.

    1. The idea behind modern programming is reducing complexity. That can't really be done by using symbols no other programmer has ever seen before.
    2. Most programming fonts go out of their way to make those symbols look distinct. You simply have to know if that's a zero or an upper-case O. Imagine trying to figure out if that there is a Greek upper-case Omega or a "Dentistry symbol light down and horizontal with wave" (taken from TFA).
    3. APL [wikipedia.org] died for a reason.
    4. Author cites C++ operator overloading [yosefk.com] as a good thing. 'Nuff said.

  • by robbak ( 775424 ) on Monday November 01, 2010 @12:34AM (#34085762) Homepage

    So this means that you cannot touch-type in Japanese?

    (clarification: touch-typing here not just meaning not looking at the keys as you type, but not looking at the output either. If you have to check the screen to see that it has entered the right 'kanji', then surely transcription is slow.)

    I'm sure the hacks to enter [whatever the correct word to describe these 'symbolic' alphabets is] languages are very well resolved. It is just a pity that they have to exist. But I have no idea what the perfect Japanese-entry device would be. Maybe a 'chord' keyboard, where two keys are pressed simultaneously - but the learning curve!

  • by spongman ( 182339 ) on Monday November 01, 2010 @01:10AM (#34085904)

    Typing Japanese is exactly like typing in English

    hardly. when you type in english, you think of the word and you type in the letters of that word.

    when you type in japanese, you think of the word, then you have to translate it at least once (maybe twice) in your head before you have a list of roman letters to type. Then you have to assist the computer in guessing the reverse of the translations you just did. certainly, much of this is simple for the typist, and for the computer, but it's fundamentally different from typing a roman language.

  • by cgenman ( 325138 ) on Monday November 01, 2010 @01:11AM (#34085908) Homepage

    Things I would love to see standard in all new editors:

    1. Little triangles that hide blocks of code unless you explicitly open and investigate them.
    2. Dynamic error detection. Give me a little underline when I write out a variable that hasn't been defined yet. Give a soft red background to lines of code that wouldn't compile. That sort of thing.
    3. While we're at it, "warning" colors. When "=" is used in a conditional, for example, that's an unusual situation that should be underlined in yellow.
    4. Hard auto-indent. It may be two spaces in the source code, but accidentally copying the indentation, and putting it in the wrong places, etc, should just be taken care of. That shouldn't even be an issue any more.
    5. Code-hint hover. When you hover over a function name, bring up a window with the first few lines of that function. Maybe open it in a "related code" pane?
    6. Right-click to jump to anything. Right-click a variable to jump to the declaration, or goto other places it is used. Right-click a class name to bring up that class definition.
    7. Start typing out a function, and get a menu of variable-specific functions that can be called. Flash actually does this surprisingly well, or did before CS5.

  • by glassware ( 195317 ) on Monday November 01, 2010 @01:33AM (#34085954) Homepage Journal

    I'm truly saddened to see so many people took this article summary so literally. If you read TFA, it's actually a very bright, intelligent, humorous example of programming insight. I found it a very delightful read and I wholeheartedly felt that the article presented its thoughts lightheartedly and without expectation of seriousness. To hear all the commenters here, it's as if the article ran puppies over with a steamroller.

    Please guys - I'm all for silly commentary. But read the article if you're going to pretend to write something clever. It's thoroughly tongue-in-cheek.

  • by Anonymous Coward on Monday November 01, 2010 @02:07AM (#34086066)

    Iverson thought the character set was a problem. That is why, at the end of his career, he invented 'J' to put APL on a standard keyboard and get past the many issues the custom glyphset creates.

  • by Anonymous Coward on Monday November 01, 2010 @02:40AM (#34086220)

    I _will_ say there isn't a future for visual programming, except perhaps in very limited domains, and even then, with a text language backup that people drop into for nontrivial applications.

    There have been hundreds of commercial and academic attempts at this, including the horrible CASE tool fad of the 90's, of which I was a victim in my first job out of school. I took a grad course in visual programming in 1989 and I knew then that it would never work. Editing text is simple and almost one-dimensional (the layout is 2D but to insert things you add new lines). This makes revising programs easy - add and delete lines, sometimes move things between lines, and the editor just pushes the surrounding code out of your way. Editing 2D diagrams (such as CASE tool diagrams in the 90's) is a nightmare, because you have to think about both the logic of your program and the layout of the diagram (moving things around so your changes will fit, then moving them for an hour more to make it pretty). The latter is a really annoying distraction.

    Even if the tool fixed that problem, you'd run into the next problem, which is that diagrams are not very expressive of complex relationships between components, especially implicit ones such as types or templates. There's a reason ASIC and FPGA designers moved _away_ from drawing logic diagrams in a circuit diagramming tool to Verilog and VHDL (text programming languages) for hardware design in the 90's - as chip and FPGA designs became more complex, a textual design language with abstraction that is hard to do graphically became necessary. Circuit board designers still use diagramming-based CAD because they still place components, holes, and layers by hand - when laying out a board you usually want some of the components to have specific locations (like the connectors), and there are few enough components that placing them by hand still works. But for hundreds of thousands or millions of components (lines of source code, or logic gates), language-based design (as opposed to graphical) provides the necessary abstraction tools and editability to get the job done.

    And also, how the hell would you diff two code-diagrams to determine what changed between the last working version and the current one?
    There are just too many working tools for text-based programming to start using something different.

    The submitter wasn't really talking about diagrammatic programming so much as expanding the symbol set from ASCII, but the subject was close enough to detonate the above rant. On the symbol-set question, I agree with everyone else that it will be an incompatible waste of time. People complain that English is a poorly designed language, with obtuse spelling rules and whatnot - but it has over a million words and everyone is learning it. A language can have a lot of flaws that make it hard to learn, but people will learn it anyway if they have a reason, and won't even see the flaws once they know it. You spend a lot more time using a language than learning it, so it's not worth optimizing ease of learning if doing so will cause a bunch of other problems with usability. I wish Microsoft would learn that about their Office UI - revising something everyone knows in a mature product is a waste of resources that would be better spent fixing bugs. But that's another rant...

  • by shutdown -p now ( 807394 ) on Monday November 01, 2010 @03:08AM (#34086338) Journal

    Good point. Maybe one day Unicode will win out

    It's not a question anymore. Unicode has already won. The sheer amount of other specifications and standards that reference various versions of Unicode spec is such that it's going to stick around for decades to come.

    Yes, we still don't have 100% support in software (but we do have 99%). Time will fix that.

    or perhaps EBCDIC will have a resurgence. 'twixt now and then it's best to write the text in ascii, perhaps with a well-documented human-readable escape table for symbols that aren't represented - perhaps even a complete Unicode escape table current to the document.

    For programming languages, we already have that - \u1234 or \U12345678 are used as escape sequences in C++, Java and C# for just this purpose. There's nothing stopping an IDE from rendering them as if they were actual symbols and not escape sequences, too, though I haven't seen that in practice.

    But this is purely an encoding issue, not a character set issue, which is what TFA is about. They are asking why we still design languages with syntax that is restricted to characters only present in the ASCII character set, even though Unicode has many handy symbols that can represent the same things better and/or shorter. Quote:

    Unicode has the entire gamut of Greek letters, mathematical and technical symbols, brackets, brockets, sprockets, and weird and wonderful glyphs such as "Dentistry symbol light down and horizontal with wave" (0x23c7). Why do we still have to name variables OmegaZero when our computers now know how to render 0x03a9+0x2080 properly?

    For a good example of what is possible there, have a look at Fortress [PDF] [sun.com] programming language, which uses various traditional math symbols heavily.

    Unicode? How many revisions will Unicode see between now and then? Thousands?

    Unicode has been there for 18 years now (the second volume of Unicode 1.0 spec was published in 1992), and we've seen 5 revisions, so the rate is roughly 1 per 3.5 years. Assuming it stays the same, we're looking at Unicode 35.0 by 2100. But it won't, because in practice it will slow down eventually as we add most (and, eventually, all) scripts that we know and care about. In fact, if you look at the recent additions to the standard, they do not affect the vast majority of texts ever created in any way.

    On the other hand, it doesn't really matter in the slightest, since Unicode versions are all backwards-compatible (characters get added, but never removed or moved around). Assuming that trait persists, they'll just use the most recent version of the spec available to them.

    But then why would things be any different for ASCII-encoded text with escapes for Unicode characters? You'd still need a Unicode character table to make sense of those escapes.

    It would seem that you're arguing that any character set other than basic Latin is not future-proof. This implies that any text written in any language other than English is also not future-proof. I think this assertion is rather Anglo-centric, and not very realistic.

  • by the_womble ( 580291 ) on Monday November 01, 2010 @03:15AM (#34086370) Homepage Journal

    The article is talking about using unicode, not a proprietary format. Do you think it likely that future text editors will be able to handle ASCII but not UTF-8?

  • by HonIsCool ( 720634 ) on Monday November 01, 2010 @03:23AM (#34086392)
    When I think of, por ejemplo, the word pronounced as 'hait' [*], I don't have to "translate" that at all. No, sir! Just type it straight in, exactly as it is pronounced: "height" of course! =)

    [*] IPA doesn't work on /.
  • by TheLink ( 130905 ) on Monday November 01, 2010 @05:36AM (#34086860) Journal

    So how are you going to tell the difference between:
    a) a hyphen
    b) a dash
    c) a minus sign

    And worse the different unicode versions of hyphens and dashes:

    http://en.wikipedia.org/wiki/Hyphen#Unicode [wikipedia.org]
    http://en.wikipedia.org/wiki/Dash#Common_dashes [wikipedia.org]

    Yes, there's more than one unicode hyphen and dash! There are plenty of confusing characters like that too.

    So for programming you're still going to have to stick to a subset for keywords and symbols, and not use the full "tons of glyphs". Or at least you're going to need an entry system that allows you to switch.

    Maybe that Poul guy just wants a few extra symbols for some stuff. Good luck with that, many already complain about perl :).

  • by NickFortune ( 613926 ) on Monday November 01, 2010 @06:28AM (#34087010) Homepage Journal

    I thought it sucked. You thought it sucked. A load of guys from the maths department that wanted to do quick mathematical computations loved it. APL was not meaningless symbols to everyone.

    Right. It's a niche language, very useful for a fairly narrow subset of programmers, but something of an impediment for the rest of us.

    The point is that using an expanded set of glyphs didn't, of itself, make a language that was widely useful, let alone better. At the same time, it brought considerable drawbacks, many of which have already been mentioned in this thread.

    Of course, that doesn't mean you couldn't leverage unicode to create a more expressive syntax. But TFA doesn't really have any ideas on how this is to be done apart from "obviously, more glyphs would be better", which I think APL disproves, at least in the general case.

  • by Anonymous Coward on Monday November 01, 2010 @07:10AM (#34087140)

    Color-as-syntax has already been done [colorforth.com] in Chuck Moore's latest implementation of Forth. It's not a bad idea,

    As a color-blind person, I'd like to say... yes. Yes, it is.

  • by arth1 ( 260657 ) on Monday November 01, 2010 @07:58AM (#34087344) Homepage Journal
    One problem is that word-for-word translations don't work. Other languages have both cases and genders applied to words, and often a different sentence structure too. Should "LET A=10" become "A=10 LASSEN"? What about Russian, where the gender is significant? Or Japanese, where the status between speaker and listener determines the word? And what about right-to-left languages? Or top-to-bottom ones?

    But the biggest problems are, of course, compatibility and maintainability. You can't hire consultants who don't speak the language. And what if you branch out from Iceland to Sweden? Will you hire Swedes who speak Icelandic, or port all your apps to Swedish and maintain two different versions and prohibit unported e-mail attachments?

    Ask yourself why Microsoft doesn't have localized Office Basic anymore.
  • Re:limiting? (Score:3, Insightful)

    by pipatron ( 966506 ) <pipatron@gmail.com> on Monday November 01, 2010 @08:39AM (#34087638) Homepage

    Judging by your post, it appears that you have problems learning your own language. It certainly appears that simple spelling, capitalization, punctuation and correct grammar in the English language are apparently beyond your abilities.

    Did it ever occur to you that the person you replied to isn't a native English speaker?

  • by fyngyrz ( 762201 ) on Monday November 01, 2010 @09:33AM (#34088150) Homepage Journal

    As a martial artist of many decades, I have learned to read Chinese. Both traditional characters and the nasty simplified ones. So I'm well aware of the upside - the power, and even beauty, of high-speed recognition from a large symbol set.

    But writing Chinese through a keyboard or a GUI has many cautionary lessons for us here that transfer directly to the idea of a many-symbol programming language. Take Python, for instance. A beautiful language in almost every way; visually well structured, minimalist in its core tools, yet so well thought out that it is almost unlimited in what can be done with it.

    If you were, say, to create a symbol for each Python grammar atom, you'd soon have a symbol set equal to or surpassing that required for college in China... thousands of them. This takes your average Chinese person many years to learn, by the way -- and it's non-technical.

    Now, assuming you've learned these in the first place, and stipulating that somehow, you've made them as beautiful and intuitive as the language itself, how do you select these symbols when programming? Therein lies the rub, and as no one yet has come up with a good answer for Chinese, I suspect the idea desert is just as dry for Python, or any other language one might like to turn into a concise symbolic tool.

    Now, speech maps very quickly to Chinese symbols (although you get into context a lot... for instance, "ma" can mean quite a few different things), and so one could reasonably assume that it could also map reasonably quickly to my hypothetical Python symbols, but speech recognition isn't ready for this yet; and a programmer speaking "Pythonese" into a microphone isn't going to be a very good cube-mate, either.

    In the meantime, I'm quite convinced that ASCII is an excellent character set for programming, and that UNICODE belongs inside quotes for use in input and output parsing, no more, no less.

    APL suffered from all of this. You needed a special keyboard, or a GUI or other mechanism to input the "simple" symbol. You had to learn the symbolic mapping. It really represents a huge extra load in aim of simplification. All of which is completely unnecessary if you simply use ASCII. And frankly... the time it takes me to type sin(x) is going to beat your mapped keyboard input time until you've been doing it for 50 years. In which time I will have leveraged my ASCII toolkit into innumerable languages, and your APL toolkit is still only enabling you to work in APL.

    So like I said... ASCII.

  • by ZmeiGorynych ( 1229722 ) on Monday November 01, 2010 @09:41AM (#34088226)
    I really, really don't think so. Different tools for different jobs - a language for writing reliable infrastructure should look very, very different from a language for exploring datasets, for example; the first must place the emphasis on reliability and performance, the second on flexibility. E.g., adding members to data structures on the fly is a great idea in the second case, but not in the first.

    Sure you can try to sweep that under 'different paradigms', and indeed you could mix two arbitrary languages in the same file using some delimited blocks for example, and call it 'one language with different paradigms', but why would you want to? The convoluted multi-paradigm monstrosity that is C++ is a terrible example to us all there, in my opinion.

    I think instead the shape of the future will be more like all those different languages that compile on the JVM - jython, Scala, Lua, and whatnot. They compile into interoperable modules without extra hassle, so in each module you can use the right tool for the job at hand.

