Upgrades

Migrating Large Scale Applications from ASCII to Unicode? 202

bobm asks: "We've been asked to migrate our newer applications to Unicode. My biggest issue is that if we start storing user data in Unicode we will no longer be able to provide complete updates to the legacy (pure ASCII) systems. This is important in that we are currently updating > 25k customers a day and management does not want that to be affected. I also haven't found a clean way to provide multilanguage data mining that can return a single-language output. This doesn't even begin to address issues like data validation and display. (Note: we currently handle the web pages in multiple language sets but require the data to be in ASCII form.) I've spent some time on Unicode.org but I really haven't found any real-world discussions of people doing this on a large scale (>1TB databases)."
This discussion has been archived. No new comments can be posted.

Migrating Large Scale Applications from ASCII to Unicode?

  • by Kingpin ( 40003 ) on Thursday October 11, 2001 @07:00AM (#2414601) Homepage

    You don't mention any specifics, so it's hard to give details in response. What databases? How free a hand do you have?

    I'd suggest a message-oriented, XML-based system. You can model to your heart's content in XML: languages, charsets, etc. You can design nearly anything around that, and have various backends convert the XML messages (SOAP possibly) to the kind of data that's useful for the given backend.
  • I don't get it... (Score:2, Informative)

    by jonr ( 1130 )
    All decent databases have Unicode support and allow you to convert the data on the fly. What's the problem here? And if you use UTF-8 encoding you have ASCII compatibility...
    J.
    • Re:I don't get it... (Score:4, Informative)

      by Twylite ( 234238 ) <twylite&crypt,co,za> on Thursday October 11, 2001 @07:51AM (#2414678) Homepage

      What's with people assuming that UTF-8 is ASCII? It's not. UTF-8 is a multibyte representation that just happens to coincide with ASCII for characters 0 through 127. After that it takes two bytes to encode a character, possibly more when you get to "big" characters.

      UTF-8 is an encoding for unicode characters.

      • After that it takes two bytes to encode a character, possibly more when you get to "big" characters.

        UTF-8 takes:

        • 1 byte from 0 to 0x7f
        • 2 bytes from 0x80 to 0x7ff
        • 3 bytes from 0x800 to 0xffff
        • 4 bytes from 0x10000 to 0x1fffff

        That's why it's only popular in Europe and the Middle East. Characters in scripts from India, South-East Asia and the native American languages take up more space in UTF-8 than in UTF-16.
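        Those boundaries are easy to check empirically; a quick Python sketch comparing the encoded size of single code points in UTF-8 and UTF-16:

        ```python
        # Encoded size, in bytes, of code points at the boundaries listed above
        for cp in (0x41, 0x7F, 0x80, 0x7FF, 0x800, 0xFFFF, 0x10000):
            ch = chr(cp)
            print(f"U+{cp:04X}: UTF-8 = {len(ch.encode('utf-8'))}, "
                  f"UTF-16 = {len(ch.encode('utf-16-le'))}")
        ```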

        • Space tradeoffs (Score:4, Informative)

          by d5w ( 513456 ) on Thursday October 11, 2001 @09:25AM (#2415010)
          But if your database is currently dominated by ASCII or even typical Latin-1 text, that's a reasonable tradeoff: no increase for ASCII text, a slight increase for Latin-1 text (100% on a minority of the characters in actual text; anyone have actual stats handy?), a 50% increase for the rest of the 16-bit range, and the same maximum character size (U+10000 through U+10FFFF take 4 bytes in both UTF-8 and UTF-16). And then you have the other advantages already mentioned: compatibility with 7-bit ASCII, NUL-terminated C strings, and ordinary 8-bit-clean text channels. If you're currently in the ASCII or Latin-1 domain the question isn't even what you expect to store in the future, so much as how much cheaper disk space will be when you finally need to store it.
      • Actually UTF-8 takes one byte for characters 0 to 255. From there the rest of the unicode charset takes between two and three bytes.
        • Please explain how this would work. If you store 0 .. 255 in one byte, how do you indicate a multibyte sequence?
          • Re:I don't get it... (Score:2, Informative)

            by cwiegand ( 200162 )
            Simple. Characters 0 - 127 have the high (8th) bit OFF (i.e. a space (32) = 0b00100000). Now, character 129 would be 0b10000001. In ASCII (or so-called "8-bit ASCII"), that would be fine. In UTF-8, though, a set high bit indicates it's a multi-byte character, and the next byte ALSO has to have that high bit turned on.

            So, for chars 0-127, UTF-8 is a great way to use Unicode. For European languages, they just take an extra byte. But for Unicode chars above 127 you lose the one-byte-per-character property, and it takes more bytes to encode them.

            Basically, UTF-8 is a great way to move to Unicode, but don't consider it the destination. Use UTF-16, if you can.
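            The bit patterns described above can be verified directly; a small Python sketch, using "é" (U+00E9) as the example character:

            ```python
            # "é" (U+00E9) needs two bytes in UTF-8:
            # a lead byte 110xxxxx followed by a continuation byte 10xxxxxx
            b = "é".encode("utf-8")
            assert b == b"\xc3\xa9"
            assert format(b[0], "08b").startswith("110")  # lead byte: 11000011
            assert format(b[1], "08b").startswith("10")   # continuation: 10101001
            # plain ASCII keeps the high bit off and passes through unchanged
            assert " ".encode("utf-8") == b"\x20"
            print("bit patterns check out")
            ```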
      • What's with people assuming that UTF-8 is ASCII? Its not. UTF-8 is a multibyte representation, that just happens to coincide with ASCII for characters 0 through 127.

        The original poster talked not about "the same as ASCII" but about "ASCII compatible". And if you have text that's in ASCII, then it's automatically in UTF-8 as well since, as you said, for characters 0 to 127 the ASCII bytes are the same as UTF-8 bytes.

        (Of course, this breaks if you have text in a superset of ASCII such as ISO-8859-1, but if you only have characters from "real" 7-bit ASCII, then UTF-8 has the same representation as ASCII.)

        Cheers,
        Philip.

  • Suggestion. (Score:3, Insightful)

    by Domini ( 103836 ) on Thursday October 11, 2001 @07:01AM (#2414604) Journal
    Why not encode the data using XML... that way most of your data already maps to the real data.

    This would be without the XML tags, of course. Just the encoding of the data...

    Thus, you will be using UNICODE, and encoding it in XML text.

    Hmm... at some places you may need an XML to unicode translator.

    The fact that you are still storing and transferring your data in ASCII does not mean it's an ASCII system... it's only your communication medium. This way a systematic migration may become more possible.
    • by Domini ( 103836 )
      Hehe... seems like someone else had the same idea already.

      And, I must also insist that more domain specific information be given to aid in giving a solution.

      PS: By no means do I think XML is the be-all and end-all of things... just that it may actually be useful here...
      ;)

    • XML isn't a character set encoding.
  • by caolan ( 2716 ) on Thursday October 11, 2001 @07:02AM (#2414607) Homepage
    What might be useful is to read how StarOffice did its Unicode and internationalization changes to an existing large code base, at sun.com [sun.com]
    C.
  • Clients should be capable of telling their capability level, and servers should be able to use this to determine the data format they receive. If your clients can't return a capability level, new clients should have the feature added, and the lack of the feature should be considered capability level 0. Capability level 1 would be unicode display.

    For older clients, simply send a question mark or similar for any character not in the ASCII character set; this is extremely trivial to add to your back end. New clients get unicode and all the trappings that go with it. Be sure your support people are trained to explain that updating the client provides the new multinational functionality and eliminates the question mark placeholders.

    Regarding your question about different languages/encodings - you may need to include the language per record all the way through to the client end. Without knowing more about your output system, it's difficult to say what the display issues are, but it's difficult to believe many display libraries would limit you to a language per session.
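    The question-mark fallback for capability-level-0 clients is a one-liner in most languages; a Python sketch (the function name here is made up):

    ```python
    def to_legacy_ascii(text: str) -> bytes:
        # Capability level 0: anything outside 7-bit ASCII becomes "?"
        return text.encode("ascii", errors="replace")

    print(to_legacy_ascii("Müller"))  # b'M?ller'
    ```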

  • ebXML (Score:2, Informative)

    by Anonymous Coward
  • by mir ( 106753 ) <mirod@xmltwig.com> on Thursday October 11, 2001 @07:18AM (#2414630) Homepage

    If your application returns results in XML you can always safely encode parts of the text using numeric character references (&#nn;). Another solution is to return not one but several results, in various encodings (you would either have to store the native encoding of a text or figure out what it could be).

    And I hope this kind of practical discussion can help to raise the level of interest in Unicode amongst application coders.

    Although a lot of "core" coders (as in people who write languages and tools) are really into Unicode and trying to get their code to process it properly, I found that most "application programmers", people who use those tools, are not at all interested. They tend to think that all software should support their favorite encoding natively. They also tend to curse a lot when they get data in a different encoding ;--) Usually they view Unicode as yet another curse thrown upon them by irresponsible buzzword-worshipping management.

    In fact Unicode is certainly hard and painful to implement, but it is a standard, and at least it was written by people who know what they're doing. It solves problems that most of us either have had to deal with (oh, the agony of dealing with odd characters in SGML data) or will have to deal with. Face it, people: there are more and more people whose names include funny characters, even in the US; that market is too big to leave untapped.

    So please view Unicode as a chance, and if the poster can do it on a terabyte of data, you can certainly do it on much less, especially as the tools are coming (yes, even Perl!)

    • by infiniti99 ( 219973 ) <justin@affinix.com> on Thursday October 11, 2001 @09:14AM (#2414961) Homepage
      In fact Unicode is certainly hard an painful to implement

      Maybe for library programmers. I have been extremely impressed with the Qt library's handling of Unicode characters. The QString class is used across the board and supports full Unicode. My project, Psi [jabbercentral.com] can handle unicode everywhere (chat, nicknames), thanks to Qt. Heck, I didn't even know about this for the longest time. In fact, getting unicode chat over Jabber took just one extra function call:

      QString::toUtf8();

      I just use that before sending content or attributes to the Jabber XML stream. Qt's parser already converts incoming UTF-8 to Unicode. This was so amazingly easy to use from an "application coder"'s standpoint it's not even funny.

      Of course, I can't speak any language other than English, so I personally won't be taking advantage of this. I know other people will though, and thankfully it was easy enough to put in.

      -Justin
      • Of course, I can't speak any language other than English, so I personally won't be taking advantage of this. I know other people will though, and thankfully it was easy enough to put in.

        So you think Unicode is just for non-English text? Well, neither ASCII nor Latin 1 is really sufficient for English. There are plenty of characters above 255 in Unicode that are needed or useful for writing English. And then we have foreign names that tend to pop up in English texts with all sorts of funny characters that you need to write even if you only speak English.

    • I'm an application programmer, and I can't say that I've found Unicode particularly hard. It's a blessed relief, especially after working with multi-byte character sets. Note, I'm not talking about UTF-8 or UTF-7, which are multi-byte representations, and are a pain in the arse. In C++, Unicode characters have a dedicated type (wchar_t), and you can index directly into strings, which you can't do with a multibyte char string (see: isleadbyte). The other big advantage of Unicode is being able to share stuff with systems in different localities... there are no "code pages" to worry about. On top of this, some OSes (Windows NT) have been Unicode-only for some time, so switching applications to Unicode is a more natural way of working.
  • by GeLeTo ( 527660 ) on Thursday October 11, 2001 @07:30AM (#2414646)
    Check this standard for Unicode compression [unicode.org].
    It compresses 16-bit Unicode chars to 8 bits, using some reserved tags to switch the character windows. A sample Java implementation is available. The best thing is that most of the standard ASCII chars will still be encoded as 8-bit ASCII after the compression. So you can still store all your data in 8-bit ASCII and convert it to Unicode before displaying it. And you don't have to modify your old data!!!
  • The title of this article alone was enough to give me horrible flashbacks to working with EBCDIC/ASCII [natural-innovations.com] conversions and IBM's weirdly proprietary and immutable standards. Thank GOD for Larry Wall, is all I have to say about that.
    • Re:Urk (Score:2, Informative)

      Oh, and speaking of Unicode and Perl, I'd have to say that once again O'Reilly [oreillynet.com] is probably a great place to start, and sending the dev team in charge of the Unicode conversions to ORA Unicode boot camps/geek cruises is probably not a half bad approach.


      There is also this fascinating title [oreilly.com], which I've been meaning to read, merely because the page layout and typography within is a work of art. If you're in the bookstore and see this one, check it out. It's impressive.


      • > [...], check it out

        Direct link to the online sample pdf of Chapter 1 [oreilly.com]

        ... and whilst I'd not go overboard on the beautiful typography angle, it certainly looks an interesting read.
        [Note to self: get a life]
    • ASCII/EBCDIC conversions are probably not as bad as EBCDIC/EBCDIC conversions ... It took me a long time to realize that IBM has a number of EBCDIC encodings -- and you often don't know which one you're getting unless you know what kind of device you got it from.
  • by sjmurdoch ( 193425 ) on Thursday October 11, 2001 @07:34AM (#2414654) Homepage
    A very useful resource on Unicode is this page [cam.ac.uk], written by Markus Kuhn. In particular you may be interested in How do I have to modify my software? [cam.ac.uk]; while it does concentrate on Unix, the general principles should be the same on any OS.
  • UTF-8 (Score:3, Informative)

    by bertilow ( 218923 ) on Thursday October 11, 2001 @07:34AM (#2414655) Homepage
    What's the problem? If you use the UTF-8 encoding
    for Unicode, all your data will be ASCII compatible.
    • Re:UTF-8 (Score:4, Informative)

      by pubjames ( 468013 ) on Thursday October 11, 2001 @07:50AM (#2414675)
      I'm finding it depressing seeing how things get modded here. This has been modded as funny??

      The guy is absolutely right - using UTF-8 solves lots of problems when having to use legacy software with Unicode. I did one project working with twelve languages, including arabic, japanese, hindi and welsh, and we just used SED to search and replace marker tags in hundreds of UTF-8 files. Worked a treat.
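      The reason the sed approach is safe: no byte of a multi-byte UTF-8 sequence falls in the ASCII range, so ASCII marker tags can be replaced at the byte level without ever corrupting surrounding text. A small Python sketch of the same idea (the marker and sample strings are made up):

      ```python
      # Byte-level replace on UTF-8 data is safe for ASCII marker tags,
      # because UTF-8 continuation bytes never fall in the ASCII range
      data = "<<NAME>> さん、ようこそ".encode("utf-8")
      out = data.replace(b"<<NAME>>", "山田".encode("utf-8"))
      print(out.decode("utf-8"))
      ```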
      • Re:UTF-8 (Score:1, Insightful)

        by Anonymous Coward
        I'm finding it depressing seeing how things get modded here. This has been modded as funny??

        Just remember, this is Slashdot, not some fancy-pants two-year community college.
    • by teg ( 97890 )

      What's the problem? If you use the UTF-8 encoding
      for Unicode, all your data will be ASCII compatible.


      ASCII is 7 bit while UTF-8 is 8 bit. You would want UTF-7 to remain ASCII-"compatible" (UTF-7 is defined in RFC 2152 [faqs.org]).

      • The normal meaning of ASCII compatible is an ASCII stream converted into that encoding doesn't change, with occasionally the further restriction being added that bytes in the range 00-7F are equal to ASCII characters (i.e. are not parts of multibyte characters.)

        In this sense, UTF-8 is ASCII compatible. UTF-7, on the other hand, munges certain ASCII characters, and uses bytes in the range 00-7F to stand for non-ASCII characters. If you have to deal with a 7 bit channel, UTF-7 may be the way to go, but otherwise you want to avoid it.
  • by sql*kitten ( 1359 ) on Thursday October 11, 2001 @07:38AM (#2414661)
    Oracle 8i, UTF8 character set. Compatibility with both Unicode and ASCII character sets. What are the problems? Well, clients that think that Unicode is UCS-2 are one thing to watch out for, or forgetting that there's more to life than Western European ISO.

    Basically, 90% of the problems you will encounter are in converting between character sets to integrate with other things. If you can use Java (Unicode native) and PL/SQL for as much as possible, you'll have fewer problems. If your client is Excel (don't ask) that complicates matters. If you can assume that everything in the database is US7ASCII you're all set, because you won't need to do any data cleansing. If you have to convert stuff that's already there, then you will run into problems. What happened to me is that we had a Western European encoding, but people were entering Cyrillic data. It all came out fine on their desktops, which were configured for that character set, but the actual data in the database was gibberish as far as the queries were concerned. Non-trivial to fix.
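    That failure mode is the classic double-encoding trap: the bytes only look right on a desktop configured the same wrong way. A Python sketch of the effect (sample text is made up):

    ```python
    # Cyrillic text entered through a connection declared as Latin-1:
    original = "Привет"
    stored = original.encode("utf-8").decode("latin-1")  # what the DB "sees"
    print(stored)  # gibberish to any query, "fine" on a misconfigured desktop
    # Recovery is only possible while you know the exact mis-conversion:
    recovered = stored.encode("latin-1").decode("utf-8")
    assert recovered == original
    ```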

    Good luck!
    • If your client is Excel (don't ask) that complicates matters.

      Do you mean Microsoft Excel? Do you mind expanding on this a bit, because I am doing a project at the moment that involves a translation agency giving us translated files in Excel in lots of different languages.
      • by sql*kitten ( 1359 ) on Thursday October 11, 2001 @08:07AM (#2414737)
        We were using Excel as the data entry client, then using Perl (with the Excel module, very good BTW) and/or VBA to extract the data and send it to Oracle, and ODBC to query from Oracle into Excel. This wasn't a decision we made, it was the client's (i.e. the customer, not the software) legacy way of doing things, and they weren't up for paying us to rebuild it and retrain all their staff.

        You can use Perl to extract the data from Oracle and write SQL INSERT or SQL*Loader scripts, but this is a real pain. Windows is pretty good for Unicode, actually, even Notepad is a Unicode text editor, but the actual encoding is (off the top of my head) fixed width (16 bit) UCS2. The locale of the Oracle client was UTF8 (variable width), and it was verifying that the translating worked that sucked up a lot of resource (we naively first assumed that it would just work). UTF8 is great because if you're only using a subset of it, it doesn't waste storage space. The Oracle server was Windows 2000, the client terminals were a variety of different versions of Windows, running Excel for some bits of the app, MSIE4 for others. On the web side, there was some rather crap ASP/COM based middleware, in the end we dumped it and redid it in Java just for the Unicode-nativeness of it.

        Around that time (this was just over 6 months ago) I woulda killed for a Java API to Excel with access to all the objects exposed to VBA, which would have made things a breeze; maybe that exists now.
    • by Keick ( 252453 )
      Ditto. I was in charge of converting a legacy library system into supporting unicode, and it was easier than you might think. It was no small system either, with the main windows user interface weighing in at over 200K lines of code, and the server at over 500K lines... You get the gist.

      UTF8 is about the only way to go. Windows provides some decent conversions between local character sets and Unicode (UTF8). Also, you may want to look at the Mozilla code; that has a decent UTF8 conversion set as well.

      The details are this: On the server we used Oracle 8i, and converted all the tables to UTF8. Importing old data was fairly straightforward, especially the English since it maps 1 to 1. We used Fulcrum to index with. Fulcrum was our biggest scare, but the easiest to fix. Fulcrum was only capable of ASCII, and even worse it used a lot of special control characters, which prevented us from using UTF8 with it. The trick was we wrote our own UTF7 layer that encoded UTF8 into our homegrown UTF7 to avoid using the control chars. Beautiful.

      The client side was our biggest hurdle, but Delphi and the Windows API saved our butts. Since all the code was based on a common library, i.e. the VCL, we simply rewrote the VCL to handle Unicode. All internal data was in UTF8, so only minor changes were needed for most of the controls. We wrote wrappers for the entire Windows API. Depending on which Windows you were using, we switched out layers. On English-only boxen, the layer simply converted UTF8 to ASCII and vice versa when dealing with the API. For boxen that supported Unicode, we used a different layer to convert between UTF8 and Unicode. For foreign-language boxen, it was the same ASCII layer, but using local code page conversions, so the user would always at minimum see their own language.

      If you want more details, feel free to email me at bfleming@rjktech.com
  • by Argon ( 6783 ) on Thursday October 11, 2001 @07:40AM (#2414667) Homepage
    Consider using UTF-8 for export instead of direct Unicode. As long as the legacy systems are 8-bit clean, you can feed UTF-8 back to them without too many problems. There will be no issues at all for ASCII data since 7-bit ASCII is the same in UTF-8. You just need to convert front end applications to be UTF-8 aware. You need not convert legacy backends to understand Unicode; they will just store UTF-8 as some weird 8-bit characters. The beauty is you'll be able to convert them in phases and ASCII never stops working.
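    The pass-through property is easy to demonstrate; a quick Python sketch (sample strings are made up):

    ```python
    # UTF-8 survives any 8-bit-clean byte store untouched
    s = "naïve café, résumé"
    stored = s.encode("utf-8")          # the legacy backend keeps "weird" bytes
    assert stored.decode("utf-8") == s  # a new front end recovers the text exactly
    # pure ASCII is byte-for-byte identical in UTF-8
    assert "plain ascii".encode("utf-8") == b"plain ascii"
    print("round-trip OK")
    ```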
    • Considering using UTF-8 for export instead of direct Unicode.

      UTF-8 is Unicode. It is one way of representing Unicode on disk. It is as much Unicode as UTF-16, which is probably what you mean by "direct Unicode". They are just two different representations, like one's-complement or two's-complement integers. Both are integers!

  • mySQL & PHP (Score:3, Informative)

    by mnordstr ( 472213 ) on Thursday October 11, 2001 @07:56AM (#2414696) Journal
    In the development todo [mysql.com] for mySQL 4, they have a list of "Things that must be done in the real near future". Quite far down on that list I found:

    "* Add support for UNICODE."

    That's great, because mySQL 4 is about to be released any day now.
    As a PHP developer I wanted to know if php supports unicode. This is what I found:

    Strings [php.net]:
    "A string is series of characters. In PHP, a character is the same as a byte, that is, there are exactly 256 different characters possible. This also implies that PHP has no native support of Unicode."

    • Yes, it's a shame that PHP doesn't have native Unicode support.
      But if you use UTF-8 and don't touch the strings and just pass them from the web browser to the (Unicode-capable) DB (or the reverse), it seems to work (at least for me, using Latin-1 and Japanese characters).

      And there is an experimental multi-byte string module [php.net]
    • Re:mySQL & PHP (Score:2, Informative)

      by Hooya ( 518216 )
      I've been involved in designing and implementing a site to support Arabic, Thai, Japanese, Chinese, Russian, Korean, Hindi and some 15 other languages (the European ones) using, you guessed it, MySQL and PHP. PHP apparently supports Unicode strings (we're using version 3.x even). In MySQL, we set the field to binary. I'm sure that adds some overhead but it works. We've used Java to 'convert' strings from x encoding to UTF-8. iconv works too. Now users can switch the language of the site purely by selecting an appropriate radio button for the desired language, and the languages are 'translated' gettext() style but through the database instead of files.

      This is a survey-type site so the hittage is quite high, and the site along with the database shows no signs of slowing down. I'm not sure if that's what you wanted to know, but since our client (the browser) is multiple-encoding compatible we have no problem. You might want to look into the String class in Java as it provides some neat encoding conversion in a roundabout sort of way. You possibly could get the Unicode string and then convert it to ASCII, but I'm not sure what it does with the non-ASCII characters. As for the MySQL database, set the field to binary. I don't know about Oracle etc. as I haven't found the need to use it.
    • There is no need for anything other than "bytes". Nothing requires each "character" to be a "byte". Just use UTF-8.

      If you think it is a problem that the characters are different sizes, please realize that UTF-16 uses prefix codes and thus it also has characters of different sizes. Even storing 32-bit Unicode would result in the need to treat multiple words as a "character", depending on how you think about prefix accent codes. Also, try to get your coding out of the 1960's; modern software thinks about "words", which are of varying size.

      All this I18N and Unicode stuff would be a no-brainer (every single interface would use UTF-8) if it were not for this illusion by so many idiots that "characters" need to be equal in size. They aren't, it is impossible for them to be so. Deal with it.

  • by Twylite ( 234238 ) <twylite&crypt,co,za> on Thursday October 11, 2001 @08:00AM (#2414712) Homepage

    The way I understand this, you have old clients, new clients, and a server that must handle both. And the server and new clients should support Unicode.

    First, although this is probably obvious, I should note that if your data is primarily text, then you're looking at a 2Tb database when you start using Unicode (depending on the encoding).

    My biggest issue is that if we start storing user data in unicode we will no longer be able to provide complete updates the legacy (pure ASCII) systems

    This is sort of like supporting German language entry and wanting to display it on English clients. It's not easy, but it can be done, to some extent. Most Unicode you encounter will have an equivalent ASCII representation; there are acceptable conversions for almost all non-Eastern character sets. You can serve up a converted representation to your ASCII clients.

    DO NOT listen to the bullshit about serving up UTF-8 to ASCII clients. They can't understand it any more than I can understand German; it will seem to work only for low-ASCII characters, but break for all others.

    As for data validation, you are going to have to have two rulesets. One will be client-side ASCII; the other a unicode ruleset used by both the new client and the server. Incoming ASCII from the old client should be converted to equivalent Unicode (that's the easy part) before being validated.

    Sorry, no real-world information here either; certainly not on a database that size.
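    The two-ruleset idea might look roughly like this in Python (validate_name and its limits are invented for illustration; the real rules would come from the application):

    ```python
    def validate_name(s: str) -> bool:
        # Unicode ruleset shared by the server and new clients (made-up rules)
        return 0 < len(s) <= 64 and s.isprintable()

    # Legacy path: convert incoming ASCII to Unicode first, then validate
    legacy_input = b"Smith"
    assert validate_name(legacy_input.decode("ascii"))
    # New-client path: already Unicode, same ruleset
    assert validate_name("Müller")
    print("both paths validated")
    ```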

  • If you store it using UTF-8 (there are lots of options for storing Unicode) your problem may not be that bad. I'm assuming your system is in C or a derivative. UTF-8 avoids the obvious breakage of embedded null bytes. You might need to add an output filter to make sure you don't ship out any characters numbered higher than 127 to non-Unicode-savvy customers.

    On the other hand, if you've got deep assumptions that strlen(whatever) == numberOfCharsIn(whatever) then you're pretty well hosed.
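    The strlen trap in one example (Python here, but the same arithmetic bites C's strlen over UTF-8 bytes):

    ```python
    s = "naïve"
    b = s.encode("utf-8")
    assert len(s) == 5   # characters
    assert len(b) == 6   # bytes: "ï" takes two bytes in UTF-8
    # any code equating byte length with character count breaks right here
    print(len(s), len(b))
    ```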
  • by Alan Cox ( 27532 ) on Thursday October 11, 2001 @08:49AM (#2414824) Homepage
    Make sure you use UTF-8. Firstly, because unlike UCS-2 (16-bit) it can encode all the characters, not a subset of them; eventually 16 bits won't be enough for you. Secondly, it's 7-bit-ASCII compatible, so there is no real problem with migration over time.
    Thirdly, since 7-bit ASCII is already valid UTF-8, there isn't any data migration to be done to set this up.
  • I may have missed the point, but Unicode is a character set. Once you have converted the characters to Unicode, you still have to store them. Instead of using UCS-2 (two bytes per character), you may store them in UTF-8, where the codes 0-127 are represented exactly as in ASCII.
  • Read Usenet and c.i.w.a.h [news]. You'll get flamed to a crisp by them (they're a little dysfunctional, to put it mildly), but there are a couple of people thereabouts who know how to do this right.
  • Use UTF-8 encoding (Score:1, Insightful)

    by Anonymous Coward
    If you use the UTF-8 encoding, of which ASCII is a subset, then you minimize the amount of code and text that has to change -- only the text that isn't expressible in ASCII changes, using multiple bytes per character, and ASCII string manipulations still "just work".
  • by Florian Weimer ( 88405 ) <fw@deneb.enyo.de> on Thursday October 11, 2001 @09:04AM (#2414913) Homepage
    You first have to examine carefully the character set your current application can deal with. Is it ASCII? Or just the printable range? Or do most routines treat everything as sequences of 8-bit characters? Is the null character permitted in data? And so on.

    After that, you have to identify the operations which are character set specific. This can be quite a bit of work. Character set specific operations include case conversion, collating, normalizing, measuring string length and character width (for formatting plain text), text rendering in general, and so on.

    Now you look at your tools. Do they prefer some kind of Unicode encoding? For example, with Java or Windows, using UTF-16 is most convenient (some would say: mandated).

    Now you put the pieces together and look for a suitable internal representation (not necessarily "Unicode", i.e. UTF-8, UTF-16, or UTF-32), identify points at which data has to be converted (usually, it is a good idea to minimize this, but if you want to fit everything together, there is sometimes no other choice), and modules and external tools which have to be replaced because adjusting them or adapting to them is too much work.

    Your web page generation tools probably need a complete overhaul, so that they are able to minimize the charset being used (for example, German text is sent as ISO-8859-1, but Russian text as KOI8-R or something like that), since client-side Unicode support is mostly ready, but many people don't have the necessary fonts.
  • char ascii;
    int unicode;
    unicode = (int)ascii;
    • char ascii;
      int unicode;
      unicode = (int)ascii;

      Unfortunately, that only works in-memory since files are sequences of octets (bytes), which only have 8 bits. So you have to convert your ints to octets somehow when saving. So you have to pick a Unicode Transformation Format... such as UTF-8 or UTF-16.

      Cheers,
      Philip

  • We have a database of around 300MB that fits nicely on a CD-ROM.

    I'm assuming converting to Unicode would double the size and we would have to introduce some sort of compression to fit it on a CD-ROM?

    • Well, one important thing I forgot to mention is that everything else included on the CD is another 150 MB or so!

    • Your best bet would be to use UTF-8 to encode the information rather than UTF-16. If your data is all ASCII right now, then you shouldn't see an increase in size (unless you use an overabundance of high (0x80+) characters). The increase for high Unicode characters later on is incremental. A lot of Unicode's bad name comes from the bloated, oversimplistic nature of the UTF-16 and UTF-32 formats. They are useful as internal representations for small buffers, but not for large amounts of data.
  • by rjstanford ( 69735 ) on Thursday October 11, 2001 @10:56AM (#2415393) Homepage Journal
    The hardest problem to solve is the business one. Storing the data is easy -- scaling from 1TB to 2TB (or more) is a solved problem. The hard part is deciding what to do when an ASCII client requests information that you only have in Unicode.

    Does your application support multiple languages now? If it does, it probably has a default language for everything that should be present in case the specific language asked for is missing. Rather than have that be "en_us" (or whatever), make that "US English ASCII-friendly". You can then add a new language "US English Unicode". Then alter your mandate so that everything has at least that language. I'd add Unicode and ASCII flavors for all other languages too, although anything that doesn't use high chars can just be stored as ASCII with the Unicode encoding generated (if space is that much of an issue).

    If your application database is not multi-lingual already, then you have some serious architecture work to do. I'd look at it from that standpoint though -- there is a wealth of reference material describing how to add language support to existing data and apps. Think of Unicode as another language.

    Concentrate on these issues, and let the technical issues (such as encoding scheme) be decided after you know what you want to do. As far as that specific one goes (it seems to have the most interest on this page so far), just use whatever your DBMS supports most natively.

    -Richard

  • Use UTF8 (Score:2, Insightful)

    While I know XML is a favored silver bullet by the popular press and developers, I still haven't decided if the infatuation with a complicated packaging scheme is really worthwhile. It's nice in a sense that there are off the shelf readers that can interpret the data for you, sure, but ultimately it's still up to your code to pull out the data in a meaningful way. A good XML reader will do two things for you: 1) provide a regular format for all data, and 2) handle string conversions to and from various encoding schemes.

    It seems to me quite silly to bother dealing with all sorts of encoding schemes if you can control the data from the get-go. Convert from whatever your input data is to UTF8 as early as possible. With that, you immediately have support as if you wrote everything as wide characters, but don't have to change much, if any, of your code. UTF8 is narrow, with reserved codes for multi-byte encoding. UTF8 doesn't require changing your string functions* that depend on a single terminating null, and you never really have to think about the encoding again. We've migrated from ASCII to UTF8 and now support whatever languages come in as an XML input format, but we immediately convert to UTF8 and forget the XML once we hit our database.

    * Caveat: Poorly encoded UTF8 can represent the same wide character in many ways. For this reason, a straight byte comparison of UTF8 strings is sometimes incorrect. Either you should test all strings at conversion time to see if they are minimally encoded, or convert to UCS2 and back again, just so all strings go through the same manipulative process, and give you the same byte stream. I learned this the hard way. With that out of the way, it's just like using normal ASCII.

    • All implementations reading UTF-8 should treat characters coded using more bytes than necessary as errors. Otherwise serious security vulnerabilities are possible due to disagreement between various pieces of software about the equality of characters.

      Normally these errors are turned into the single Unicode replacement character (U+FFFD). However I favor an implementation where the error is turned into the same number of characters as there are bytes in the error, with each character equal to the original byte. Due to the design of UTF-8 the resulting characters will be in the 0x80-0xFF range. The reason for this is to allow recovery of ISO-8859-1 text that is mistakenly put into a UTF-8 stream.

  • by mughi ( 32874 ) on Thursday October 11, 2001 @11:51AM (#2415639)

    Just in case any of this work is being done on Microsoft Windows, you should avoid "#define UNICODE", TCHAR, and _T(). These are mainly legacy tricks used to help Windows 3.1 developers cross-compile their code for NT. Microsoft itself doesn't use them, and instead goes with pure Unicode throughout the app. Even COM in Win32 since the first release of Windows 95 is all Unicode (BSTRs).

    Of course, this would preclude you from using MFC, but then again, many think that avoiding it is a good thing (again, Microsoft is among those who avoid using it). But aside from other benefits, you'd avoid having to build two separate binaries: one for Windows NT/2K and one for Win9X.

    Oh, and one other thing. If you are doing any portable code, remember that the Microsoft documentation lies and that wchar_t is not always 16-bit like they say. In fact, the spec recommends that it be 32-bit, and most other platforms (Linux included) define it thus.

  • EBCDIC! [slashdot.org]
  • by Anonymous Coward on Thursday October 11, 2001 @12:31PM (#2415880)
    There seem to be a lot of posts advocating the use of UTF-8 without explaining what the advantages and disadvantages are. Also, some of the posts are simply incorrect.

    Here are some of the advantages and disadvantages of UTF-8:

    • UTF-8 allows you to encode any character in the entire ISO-10646 character set (which is potentially much larger than Unicode since it is a 31-bit code, rather than Unicode which is only a little over 20 bits, or 17 * 65,536 code points). This is probably not of great interest since it is not expected that the ISO character set will ever need to define any characters outside the Unicode range.
    • Strings encoded in UTF-8 can be processed by standard C language routines. A binary 0 embedded in the string can be used as a string terminator just as in 1-byte character sets. Note that routines like strlen() will return the number of bytes rather than the number of characters in a string.
    • UTF-8 preserves the Unicode sorting order so that string comparisons work the way you'd expect without having to convert to Unicode to do the comparison (but note that the Unicode sorting order is not likely to be a useful "language sensitive" sorting order if that matters for your application, so you may still need some way to perform that kind of sort).
    • If you have an arbitrary byte in a string, it is possible to determine unambiguously whether it is the starting byte for a character, and if not you can probe backwards for the starting byte. This is not true of all multibyte character set encodings. This can be very useful for some applications and not at all for others of course.
    • Characters within the ASCII range (00-7f) are transmitted unchanged.
    • Most alphabetic characters (including Hebrew and Arabic characters) are transmitted with only 2 bytes - the same as if you'd stored them as UCS-2 or UTF-16, but not as compact as if you'd stored them with their corresponding ISO 8859-x character set.
    • Ideographic characters and the remaining rare alphabetics within Unicode Plane 0 are transmitted with 3 bytes, which is 50% larger than if they'd been stored with UCS-2 or UTF-16 or (often) with their native computer character set like Shift-JIS.
    • All other Unicode characters (mostly historical Chinese and Japanese characters and character sets for dead languages) can be transmitted in at most 4 bytes.
    • Depending on your display systems, you may need transformation routines to convert to and from other formats used by those systems. For example, many printers or computer fonts that support large character sets might be arranged for use as Shift-JIS or Big5 rather than for Unicode.
    • Because it preserves a certain degree of compatibility with 1-byte character streams, many existing programs and subsystems can coexist with UTF-8 with little or no modification. That does not mean you can count on UTF-8 being safe anywhere that ASCII is safe; you need to evaluate each system on its own merits. However it is quite likely to make your conversion easier.
    Even if you don't use UTF-8 for the external storage format, many projects have found that its advantages make it ideal for processing data in memory. Other times using a fixed-width (16 or 32-bit) format is desirable; fortunately the conversion between UTF-8 and the fixed-width Unicode formats is quite easy and quick.
  • It sounds like part of your system uses code pages to communicate in various languages, like a web-based application, while the data portion is not linguistic text but just items that can be represented in ASCII. Some of your applications can only support ASCII, and all the data in your database is ASCII. If it is truly ASCII 0 - 127 (0x7F), i.e. 7-bit clean, then you can often just redefine the database to declare that it contains UTF-8 (Unicode) data. But you must be sure that it is 7-bit clean first.

    One of the best Unicode support packages for C/C++ code (I assume that this is C) is ICU: http://oss.software.ibm.com/icu/ ICU uses UTF-16, but there is xIUA http://www.xnetinc.com/xiua/ which is also free open source software that adds UTF-8 support to ICU. Even better, it will allow you to add support and still run in code page mode first, and then use the same code to support Unicode. It makes it easy to develop hybrid applications that may use Unicode in one part of the application and not in another. It will also let you use UTF-8 for database access, UTF-32 to interface with Linux Unicode wchar_t, and a mix of code page and UTF-8 requests to a browser.
  • Encodings (Score:3, Informative)

    by osolemirnix ( 107029 ) on Thursday October 11, 2001 @01:04PM (#2416054) Homepage Journal
    There is an additional problem with Unicode: you can convert to and from any encoding via Unicode, but the encodings themselves are not necessarily compatible with each other.

    E.g. we hit this with two different Japanese kanji encodings (on Sun workstations and Windowze boxes). Both encodings converted to Unicode and back, but each had characters not present in the other encoding. So if you created, say, a filename on one system, converted the string to Unicode and then back to the other encoding on the other system, all you got was a lot of gibberish.

    So storing your data in unicode alone doesn't solve all your problems. All the clients that access that data need to support the same encodings used. (e.g. your american windowze box cannot handle unicode with kanji stuff unless you have the right language pack installed)

    Essentially it boils down to: all your clients and servers must support the same encodings, whether you use Unicode or something else.



  • We converted Bridge.com to Unicode a couple of years ago. I don't remember all the specifics. We had to change encoding on a few characters. It wasn't that big of a deal. The only catch I remember is that for one of the Chinese translations we couldn't use Unicode for some weird reason.
  • Apple's CoreFoundation [apple.com] does a great job of dealing with Unicode and XML. It's an OO library written in C, and as such it has string objects and an xml parser/generator that works with its array and dictionary objects. It does an excellent job of abstracting Unicode messiness when working with XML.

    I've found CF a bit cumbersome to use by itself. A wrapper in an OO language like C++ or Objective-C is very convenient. The Objective-C wrapper is commonly called the Cocoa Foundation framework :)

    It's been ported to Linux and FreeBSD, and I'd recommend it to anyone doing Unicode or XML work. The parser is currently non-validating, but there are so many other 'gifts' that come with CF that makes it worthwhile.

    Hey, it was good enough to build an OS on.

  • Don't (Score:3, Informative)

    by Alex Belits ( 437 ) on Thursday October 11, 2001 @10:40PM (#2418268) Homepage

    Unicode does not solve any problems with multilingual text processing -- what it solves is not a problem (having a non-ISO-8859-1 native language, I am qualified to testify that displaying and representing data in various languages hasn't been a problem for at least 30 years already), and the real problems -- rules, matching, hyphenation, spell checking, etc. -- remain problems with Unicode just as they are without it.

    To make it possible to process, transfer and store data in multiple languages one does not need Unicode -- in fact Unicode usually only adds an additional step that requires some knowledge of language context that may be unknown, unavailable for some kinds of processing, or simply not disclosed by end-users. What is necessary is byte-value transparency, so that text in multiple languages at least will not be distorted by "too smart" procedures that cut the upper bits or make some other ASCII-centric assumptions. If/when users care about marking languages in a way more advanced than ISO 2022, they will probably find byte-value transparent channels suitable for whatever they use.

    However, if/when a real, usable language-handling infrastructure that solves those problems is created, it won't need Unicode, because it will have language metadata attached to the text already; and without metadata, text -- in Unicode or in native charsets -- is not usable for most applications if it isn't somehow already known what language it is supposed to be in.
