Programming Software

Things That Turbo Pascal Is Smaller Than

Posted by timothy
from the celestial-emporium-of-benevolent-knowledge dept.
theodp writes "James Hague has compiled a short list of things that the circa-1986 Turbo Pascal 3 for MS-DOS is smaller than (chart). For starters, at 39,731 bytes, the entire Turbo Pascal 3.02 executable (compiler and IDE) is less than 1/4th the size of the image of the white iPhone 4S at apple.com (190,157 bytes), and less than 1/5th the size of the yahoo.com home page (219,583 bytes). Speaking of slim-and-trim software, VisiCalc, the granddaddy of all spreadsheet software, which celebrated its 32nd birthday this year, weighed in at a mere 29K."
This discussion has been archived. No new comments can be posted.


  • by fragfoo (2018548) on Tuesday November 01, 2011 @10:13AM (#37907078)

    I think back to playing vast adventure games, like Below the Root, that amazingly fit on two sides of a 5.25" floppy; the same game now would probably be written to take up a CD-ROM, even using the same graphics. Programmers have lost the ability to optimize.

    I think they have the ability (some of them, at least) but don't have the need or the time.

  • 39K ? Luxury! (Score:4, Insightful)

    by goombah99 (560566) on Tuesday November 01, 2011 @10:14AM (#37907096)

    Back in my day we had BASIC running on a 2K Altair. Kids these days don't know the meaning of a kilobyte.

  • So what? (Score:2, Insightful)

    by Jeff Hornby (211519) <{ac.ocitapmys} {ta} {ybnrohtj}> on Tuesday November 01, 2011 @10:16AM (#37907128) Homepage

    First thing that comes to mind is: so what? This whole argument that smaller is better is crap. The reason software is bigger these days is that it does more for you. How productive was the GUI for Turbo Pascal? (It sucked.) How good were the other tools that came with it? (Nonexistent.) How fast were the release cycles? (About the same as today.) So with what people call bloat, we get better tools that make us more productive, thereby driving down the cost of software development.

    Or to put it another way: if you really don't like bloat, when are you going to trade in your car and start driving to work in a Hot Wheels?

  • This just in (Score:5, Insightful)

    by Opportunist (166417) on Tuesday November 01, 2011 @10:18AM (#37907152)

    Software grows to fill the available ram.

    Code is always a tradeoff between code size, development time, and the RAM needed for execution. I'm fairly sure you could optimize code today to a point that would put those old programs (which were optimized 'til they squeaked, to squeeze out that last bit of performance) to shame, but why? What for? 30 years ago, needing a kilobyte less of RAM was the make-or-break question. When drivers weighed in at the 10 KB range, you still calculated which ones you absolutely needed to load for the programs you planned to run, and you turned off anything and everything to free those extra 2 KB and make the program run. Today, needing a few megabytes more of RAM is no serious issue, mostly because it just doesn't matter anymore. Do you care whether that driver, that program, that tool needs a megabyte more to run? Do you cancel it because it does? No. Because it just doesn't matter.

    We passed the point where "normal" people care about execution speed a while ago. Does it matter whether your spreadsheet takes 2 milliseconds longer to calculate the results? Instead of 0.2 seconds you now need 0.202: do you notice? Do you care? Today you can waste processing time on animated cursors and colorful file-copy animations. Why bother with optimization?

    Because, and that's the key here, optimizing code takes time. And that costs money. Why should anyone optimize code if there's really no need for it anymore? And it's not the "lazy programmers" or the studios that don't care about the customers. The customers don't care! And with good reason. They do care about the program being delivered on time and at a reasonable price, but they don't care whether it needs a meg more of RAM. Because it just friggin' doesn't matter anymore!

    So yes, programs back in the good ol' days were much better organized, they used RAM much more carefully, and they had niftier tricks to conserve RAM and processing time. But in the end, it just doesn't matter anymore today. You have more RAM and processing power than your ordinary office program could ever use. Why bother with optimization?

  • by plover (150551) * on Tuesday November 01, 2011 @10:23AM (#37907234) Homepage Journal

    How easily we overlook the difference between "bloated" and "quantity of useful information".

    Just the words on this page (no markup, no graphics, and after a few comments) would have exceeded the capacity of your beloved 5-1/4 floppy. That's only the raw information, without bloat.

    My first screen (a DECscope) had 12 lines x 80 columns (I couldn't afford the 2K of RAM that would have given me 24 x 80). The screen I'm reading this on can display over 2 million RGB pixels. Calling things "bloat" is like telling me I should honor a display smaller than the "close [X]" icon, because 12x80 isn't "bloat".

    By the same twisted logic, Turbo Pascal itself was bloatware, and I thought it produced horribly slow and big code. Assemblers were where the real efficiency lay, and they were a lot smaller than 39K.

    Nostalgia is fine. But leave it in the past.

  • Re:39K ? Luxury! (Score:4, Insightful)

    by Ferzerp (83619) on Tuesday November 01, 2011 @10:44AM (#37907636)

    No, it most certainly should not. That forced nomenclature is worse than what it ostensibly tries to solve.

  • by Xest (935314) on Tuesday November 01, 2011 @11:17AM (#37908140)

    "It's almost as if the existence of faster CPUs and larger memory has enticed some to be extremely lazy"

    Or just made them focus on getting stuff done rather than implementing optimisations no one will ever notice.

    Oh, and your MacBook startup vs. your Windows startup? That's because Windows supports an ever-changing set of hardware configurations and retains support for legacy software. Your MacBook has the luxury of supporting a relatively small set of hardware configurations, with Apple happy to chuck backwards compatibility out the window.

    Sure, Windows is slower to boot up, but it works on more hardware and has superior backwards compatibility. Sure, your MacBook has poor backwards compatibility for older Mac software and won't ever support some hardware configurations, but it's got a better startup time. Those are the tradeoffs you face with this sort of thing.

    Surely you understand this, though, if you're an optimisation guru: that it's all about tradeoffs? Or perhaps, if you're one of those who's all about optimisation whatever the cost in man-hours and however negligible the benefits, then you don't understand that it's all about picking the right balance.

    So no, don't "MOD PARENT DOWN!!!". You have a rose-tinted view of an era when all software was ultra-optimised by super non-lazy ninja programmers. I remember it more as an era when software still took longer to load and performed far worse than it does now, crashed far more often and in far more fatal ways, had far more dangerous security flaws (root-access exploits rather than mere SQL injection), and where usability was out the window, since you had to spend hours configuring your system just to get it to run a game or whatever.

    I don't think the past was really as rosy as you think.

  • by zach_the_lizard (1317619) on Tuesday November 01, 2011 @12:25PM (#37908906)

    Then again, today's programmers would rather import the whole STL just to be able to use one String, rather than take 15 minutes to write their own class. (oops, they couldn't write one in 15 minutes ... oh well ...)

    Now everyone is writing their own String class, and you have to pay them for that effort. That 15 minutes may not seem like a lot, but if everyone is doing shit like that all the time, the costs add up. Also, at some point you will want to interoperate with some third-party library. Wouldn't it be great if there were some sort of standardized String class, so you didn't have to convert from your String to their (inevitably screwy) String class? Repeat this for many data structures and third-party tools and libraries.

    Higher-level languages didn't arise for the hell of it; if we still needed to worry about 128K of RAM, we'd still be writing code like we did in the old days. Now we don't have to (outside certain domains), so why not trade space for time and money? We make all kinds of optimization tradeoffs already; ease of maintenance tends not to be one we often think of.
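    The interop point above can be sketched in a few lines of C++. This is a minimal illustration, not anyone's real API: the functions lib_a_fetch and lib_b_measure are invented stand-ins for two third-party libraries. Because both speak std::string, no conversion shim is needed between them.

    ```cpp
    #include <cassert>
    #include <string>

    // Hypothetical third-party APIs (names invented for illustration).
    // Both use the standard string type, so they compose directly.
    std::string lib_a_fetch() { return "hello"; }
    std::size_t lib_b_measure(const std::string& s) { return s.size(); }

    int main() {
        std::string data = lib_a_fetch();   // library A hands us a std::string
        assert(lib_b_measure(data) == 5);   // library B consumes it unchanged
        return 0;
    }
    ```

    With two home-grown String classes instead, the middle line would need a conversion in each direction, written and debugged by you.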

  • by zach_the_lizard (1317619) on Tuesday November 01, 2011 @01:24PM (#37909624)

    First, you only have to have ONE person in your org write that custom string class that does exactly what you want, no unpredictable side effects, no bloat.

    That 15 minutes pays itself back almost immediately, both in easier debugging (less code to debug) and quicker compile times,

    I wouldn't say it's less code to debug, but more, because now you have to maintain your string class.

    Second, third-party libraries are always going to be a problem, but usually you just give them a pointer to (a copy of) the data structure, never to your class. No big deal. A null-terminated string (a C-style string) is a null-terminated string. A string with the first n bytes giving the actual size (a Pascal-style string, which can also be used for BLOBs) is a string with the first n bytes giving the actual size. These are the two standard ways of modeling string data.
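    The two layouts can be shown side by side; a small C++ sketch, using a one-byte length prefix for the Pascal-style case:

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <cstring>

    int main() {
        // 1. C-style: the bytes, then a NUL terminator; finding the
        //    length means scanning for the '\0'.
        const char c_str[] = "hello";        // 'h','e','l','l','o','\0'
        assert(std::strlen(c_str) == 5);

        // 2. Pascal-style: length prefix first, then the bytes; the length
        //    is read in O(1), and the payload may itself contain NUL bytes
        //    (which is why it also works for BLOBs).
        std::uint8_t p_str[] = {5, 'h', 'e', 'l', 'l', 'o'};
        assert(p_str[0] == 5);               // length without any scan
        assert(std::memcmp(p_str + 1, c_str, p_str[0]) == 0);
        return 0;
    }
    ```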

    These are the standards, but Crazy Ass Corp's Super Deluxe Hyper-whatever Library is going to do whatever you're not doing, and in a way where you can't just point to the internal C string. Never underestimate vendors: instead of your nice, 1960s-style null-terminated array of 1-byte characters, they're going to use an array of 64-bit integers into which they've packed multiple 8-bit characters, except they've decided to leave the last byte of each 64-bit integer as 0xFF for future use, and they use EBCDIC. Yes, this example is highly contrived and nonsensical, but never underestimate the inability of your colleagues to write software.

    This doesn't even touch on the STL's various algorithms to, e.g., loop over all characters in a string and apply a function. Again, it's easy to write it yourself, but the STL is written in a nice, general fashion that makes it easier to interoperate, makes it easier to understand what is going on, and doesn't require you to continuously reinvent the wheel. Yes, once you write your BetterString class you can reuse it, but over time you will keep adding functionality until it becomes std::string, and your office on the other side of the country may not know and may have written their own, etc.
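    For instance, the generic algorithms in the standard <algorithm> header work on std::string exactly as they do on any other container, so a character-by-character loop never needs to be hand-rolled; a small sketch:

    ```cpp
    #include <algorithm>
    #include <cassert>
    #include <cctype>
    #include <string>

    int main() {
        std::string s = "Turbo Pascal";

        // std::transform applies a function to every character in place;
        // the same algorithm works on vectors, lists, arrays, etc.
        std::transform(s.begin(), s.end(), s.begin(),
                       [](unsigned char c) { return std::toupper(c); });
        assert(s == "TURBO PASCAL");

        // Another generic algorithm instead of a hand-written counting loop.
        auto spaces = std::count(s.begin(), s.end(), ' ');
        assert(spaces == 1);
        return 0;
    }
    ```

    A home-grown BetterString would need its own versions of each of these, or adapters before the standard ones apply.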

    ... because you can either pay for that optimization ONCE, at coding time, or forever at run-time. And that "forever" gets propagated to any other code that exercises that code, either as a separate routine, or compiled in as part of the program.

    But you don't pay for it over and over again: your target audience is running machines with at least 512MB of RAM, very likely 1GB, and many will have at least 2GB. Saving, say, 100K of RAM by not using the well-defined library string class is not a useful optimization outside of more specialized problem domains. Outside of your underpowered embedded systems, or your extreme high-performance game with custom memory allocation, or your massively parallel real-time trading application where each 0.000001 nanosecond of delay costs you trillions of dollars, that 100K is dwarfed by every other facet of your program for anything nontrivial. Grandpa's PDP-11 can't even run your target OS(es), so why do we care whether our program is RAM-lean enough to fit in its memory?

