This discussion has been archived. No new comments can be posted.

1st International Longest Tweet Results

  • by SpazmodeusG (1334705) on Friday April 30, 2010 @06:41AM (#32042374)
    I don't get it.
    If they're asking what can be arbitrarily stored in the 4339 bits available, then the answer is exactly 4339 arbitrary bits; that's a basic fact of information theory. If they're asking for an English-language compression program, there are plenty of better ones out there. And if the goal is compression of English text and they aren't counting the program size against the tweet, the competition can easily be cheated by building a dictionary into the program and just looking things up.

    As for the winner, it's not a particularly good compression algorithm. It doesn't even seem to take the Bayesian probability of characters into account, and I can't see any arithmetic coding (mathematically the perfect entropy encoder) either.
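For anyone wondering what "perfect entropy encoder" means in practice: an arithmetic coder approaches the Shannon entropy of its source model. A minimal sketch of that bound, using a made-up frequency table rather than real English statistics:

```python
# Sketch: the Shannon entropy bound that an ideal entropy coder (e.g. an
# arithmetic coder) approaches. The frequency table is illustrative only,
# not a real English-language model.
from math import log2

def entropy_bits_per_symbol(freqs):
    """Shannon entropy H = -sum(p * log2(p)) over a frequency table."""
    total = sum(freqs.values())
    return -sum((n / total) * log2(n / total) for n in freqs.values())

# Toy counts for a handful of common symbols.
sample = {'e': 12, 't': 9, 'a': 8, 'o': 8, ' ': 18, 'n': 7}
h = entropy_bits_per_symbol(sample)
print(round(h, 3))  # bits per symbol; an ideal coder approaches this average
```

No real coder beats this average; a good arithmetic coder gets within a fraction of a bit per symbol of it.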
    • by Volguus Zildrohar (1618657) on Friday April 30, 2010 @07:03AM (#32042460)

      You don't get it?

      A weak, inexplicable imitation of earlier, better tech?

      That's Twitter in a nutshell.

    • by Zocalo (252965) on Friday April 30, 2010 @07:26AM (#32042576) Homepage
      It does what is required of the competition. There are 2^4339 available bits in a valid tweet so the first algorithm takes any 2^4339 bit sequence and converts it into a valid tweet, the second converts it back again. What is missing is the means for generating that 2^4339 bit value and converting it back into the original content.

      4339 bits is 542 bytes plus three spare bits, so if you wanted to actually use this for something you could use those three bits to define your data format from one of eight types, then "attach" your data payload to the header to generate the sequence of 4339 bits. Some ideas for the payload would be:
      • A sequence of 542 8bit characters
      • A sequence of 619 7bit characters + 3 padding bits
      • A sequence of 722 6bit characters + 4 padding bits
      • A Zip file equal to, or smaller than, 542 bytes
      • A GZip file equal to, or smaller than, 542 bytes
      • etc.
      • If using compressed files, you'd also need some way of dealing with spare bytes in the payload; either a decompressor that can ignore extra characters at the end of the file or a compressor that can manipulate the file to equal 542 bytes - using the comments field of the archive, perhaps.
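The header-plus-payload idea above can be sketched in a few lines: pack a 3-bit format tag and a 542-byte payload into one 4339-bit integer, then unpack it. The tag values here are invented for illustration:

```python
# Pack a 3-bit format header plus a 542-byte payload (3 + 4336 = 4339 bits)
# into a single integer, and unpack it again. Format tags are hypothetical.
FMT_RAW_8BIT = 0   # e.g. a sequence of 542 8-bit characters
FMT_ZIP = 3        # e.g. a zip file of at most 542 bytes

def pack(fmt, payload):
    assert 0 <= fmt < 8 and len(payload) <= 542
    payload = payload.ljust(542, b'\x00')        # pad short payloads with zeros
    return (fmt << 4336) | int.from_bytes(payload, 'big')

def unpack(n):
    fmt = n >> 4336
    payload = (n & ((1 << 4336) - 1)).to_bytes(542, 'big')
    return fmt, payload

n = pack(FMT_RAW_8BIT, b'hello tweet')
assert n < 2 ** 4339                             # fits the tweet's bit budget
fmt, data = unpack(n)
print(fmt, data.rstrip(b'\x00'))
```

The resulting integer is what you'd feed to the contest's bits-to-tweet encoder.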
      • by binkzz (779594)

        There are 2^4339 available bits in a valid tweet so the first algorithm takes any 2^4339 bit sequence and converts it into a valid tweet, the second converts it back again.

        That's one heck of a compression algorithm! You could fit the entire internet in a single tweet! I think you're on to something, where can I invest my money?

        • Re: (Score:3, Informative)

          by Zocalo (252965)
          Doh! That should be just "4339 bits" and not "2^4339 bits", which is a somewhat larger value, to put it mildly... I think you could theoretically describe a snapshot of the state of the entire universe in 2^4339 bits, and probably do so several times over, let alone the entire Internet. :)
        • Is 2^4339 bits actually 1337 code granules? c00lz!

      • by tlhIngan (30335)

        Or, 512 bytes plus pointers leading to next/previous "sectors" of data as metadata.

        Now you're able to store an arbitrary file, and all you really need to know is the ID of the first piece, or indeed of any one piece, from which you can then recover the whole file.

        Sounds like a great way to store and spread files - TwitterShare! Like Rapidshare, but without the suck. And let the MPAA/RIAA battle it out with all the users.
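A toy, purely local sketch of that chained-sector idea: 512-byte sectors linked by previous/next IDs, leaving room for metadata in a 542-byte payload. The 16-hex-digit sector IDs are invented for illustration; a real "TwitterShare" would presumably use tweet IDs:

```python
# Toy sketch of chained 512-byte "sectors" with prev/next pointers, stored in
# a local dict standing in for a tweet service. Sector IDs are hypothetical.
import hashlib

SECTOR = 512  # data bytes per sector, leaving room for pointer metadata

def split_file(data):
    """Return (first_id, {id: (prev_id, next_id, chunk)}) for 512-byte sectors."""
    chunks = [data[i:i + SECTOR] for i in range(0, len(data), SECTOR)]
    # Salt each hash with its index so identical chunks get distinct IDs.
    ids = [hashlib.sha256(bytes([i % 256]) + c).hexdigest()[:16]
           for i, c in enumerate(chunks)]
    store = {}
    for i, (cid, chunk) in enumerate(zip(ids, chunks)):
        prev_id = ids[i - 1] if i > 0 else None
        next_id = ids[i + 1] if i < len(ids) - 1 else None
        store[cid] = (prev_id, next_id, chunk)
    return ids[0], store

def reassemble(first_id, store):
    out, cid = b'', first_id
    while cid is not None:
        _prev, cid, chunk = store[cid][0], store[cid][1], store[cid][2]
        out += chunk
    return out

first, store = split_file(b'x' * 1300)
assert reassemble(first, store) == b'x' * 1300
```

Because each sector also carries its predecessor's ID, recovery can start from any piece by walking backwards first, as the comment suggests.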

      • Or, you could display your 4339 bit number in base 36 to encode non-beautiful alphanumeric messages. (about 839 characters, but no capitalization, punctuation, nor whitespace)

        ITA2 [wikipedia.org] (5 bit) does even better for some messages. 867 characters, but you lose some when you shift between modes (letters vs numbers/punctuation).

        • I should note that yes, I know that doesn't leverage compression, and compression algorithms will do better.
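A minimal base-36 renderer for such a number. Note that floor(4339 / log2(36)) = 839, so 839 base-36 characters are fully usable, and the largest 4339-bit value needs at most 840 digits:

```python
# Render an arbitrary non-negative integer in base 36 (digits then lowercase
# letters), as a way to display a 4339-bit number as plain alphanumerics.
import string

DIGITS = string.digits + string.ascii_lowercase  # 36 symbols

def to_base36(n):
    if n == 0:
        return '0'
    out = []
    while n:
        n, r = divmod(n, 36)
        out.append(DIGITS[r])
    return ''.join(reversed(out))

n = 2 ** 4339 - 1            # largest 4339-bit value
print(len(to_base36(n)))     # digits needed for the worst case
```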

    • by AEton (654737)

      The contest is to figure out a way to make more bits available.

      It is not obvious that Twitter messages are always guaranteed to carry 4339 bits of information (which is why the original post announcing the contest offers only 4200 bits).

      Any attempt to use "compression" as we usually understand it would be pointless because you can't always fit x bits of arbitrary data in an x-1 bit channel.

      If it makes you feel any better, a lot of commenters didn't get it, either.

    • No, they're asking how many bits you can get out of a 140 character tweet, regardless of what you encode into those bits. If you just type ASCII, that's 140 * 7 bits = 980 bits of info (ignoring nonprintables for the sake of argument). Yeah, you could build a compression scheme on top of those 980 bits, but that's not what the competition is about. Through the use of Unicode characters and other tricks, someone managed to get 4339 bits, averaging about 31 bits per "character".
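The general trick can be sketched with a toy alphabet: treat the tweet as a 140-digit base-N numeral, where N is the number of usable characters, giving floor(140 * log2(N)) bits of capacity. The 1024-codepoint CJK range below is illustrative only, not the contest's actual character set:

```python
# Base-conversion sketch: pack an integer into a fixed 140-character string
# over an alphabet of N usable characters, and unpack it again. The alphabet
# (1024 CJK ideographs) is a stand-in, not the contest's real character set.
from math import floor, log2

ALPHABET = [chr(c) for c in range(0x4E00, 0x4E00 + 1024)]
N = len(ALPHABET)
CAPACITY = floor(140 * log2(N))     # 140 * 10 = 1400 bits for N = 1024

def encode(n):
    assert 0 <= n < 2 ** CAPACITY
    chars = []
    for _ in range(140):            # emit least-significant digit first
        n, r = divmod(n, N)
        chars.append(ALPHABET[r])
    return ''.join(chars)

def decode(tweet):
    n = 0
    for ch in reversed(tweet):      # rebuild from most-significant digit
        n = n * N + ALPHABET.index(ch)
    return n

msg = 2 ** 1399 + 12345
assert decode(encode(msg)) == msg and len(encode(msg)) == 140
```

Getting to ~31 bits per character means finding a much larger set of characters that Twitter will pass through unmangled, which is exactly what the contest entries fought over.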

  • Erm ... (Score:5, Insightful)

    by daveime (1253762) on Friday April 30, 2010 @07:06AM (#32042472)

    Except for the fact that the algorithms he has submitted have NOTHING to do with compression; they are just a method of mapping the 4339 bits into the allowable Unicode character set over 140 32-bit character "slots", i.e. encoding / decoding only.

    With 4339 bits, hell, in theory the longest actual tweet you could make is 2^4339 repetitions of any single character you choose, using the 4339 bits just as a (very large) counter of how many times to repeat the character.

    Considering that 2^4339 is approximately 10^1306, and there are probably only 10^82 atoms in the whole universe, that's one bloody long tweet.
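The magnitude is easy to check exactly, since Python integers are arbitrary precision:

```python
# Verify the order of magnitude of 2^4339 by counting its decimal digits.
n = 2 ** 4339
print(len(str(n)))   # decimal digits in 2^4339
```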

    • by pjt33 (739471)

      Nah, you can do far better than that. "The character 'a' repeated Graham's number of times" is just a start...

      • by dkf (304284)

        Nah, you can do far better than that. "The character 'a' repeated Graham's number of times" is just a start...

        But the Kolmogorov Complexity [wikipedia.org] of that is rather smaller. It's that which is limited by Twitter, not the eventual expanded size of the message.

        • Re: (Score:3, Insightful)

          by pjt33 (739471)

          Yes, but that's not what GPP was talking about. Why on Earth would you assume that comments on /. would be on-topic, when that would require reading TFS? ;)

  • Limits (Score:3, Funny)

    by kiehlster (844523) on Friday April 30, 2010 @08:18AM (#32042832) Homepage
    Ah, so someone has finally determined the absolute breadth of the Twittersphere. If the world ran on tweets, maybe we wouldn't ever need more than 64kB of memory.
  • 16000bits (Score:4, Funny)

    by M8e (1008767) on Friday April 30, 2010 @08:26AM (#32042890)

    Solution - Just tweet the following picture of a swimming fish:

    ".`.`..`.>"

    Given that 1 word is 16 bits, and a picture is equal to 1,000 words,
    that makes my above tweet 16,000 bits of information (fitting
    several pictures in a tweet may extend this further) :-)

    (.)(.)

    (.Y.)

    d^_^b

    48000bits!

  • Long tweet is looooooooooong.
  • I eventually gave in and read TFA; they actually describe the winning algorithm...in the contest description. The contest was just to implement it (sort of). And apparently no one attempted to use the valid Unicode characters as well; they just avoided them (like the contest bloggers) because they weren't sure that there wasn't some arbitrary string of characters that would mess up the message.

    I suppose that the contest could continue on that basis alone: how many more bits can you encode by using t
