Databases

Diagramming Tool For SQL Select Statements

alxtoth writes "Snowflake is a new BSD-licensed tool that parses SQL Select statements and generates a diagram. It shows parts of the underlying SQL directly in the diagram, for example: x=30, GROUP BY (year), SUM (sales), HAVING MIN (age) > 18. The main motivation for the tool was catching Cartesian joins and loops in hand-written SQL with many joined tables. The database will happily execute such a statement as long as it is syntactically correct, and the resulting runaway queries can bring the database down. If you sit close to the DBAs, you can hear them screaming..."
  • by Shados ( 741919 ) on Sunday August 03, 2008 @08:55PM (#24461071)

    No single query will ever bring a (real) RDBMS down. Even on a terrabyte of data or more, doing a crazy multi-hundred-table cross join, you're not going to bring it down.

    Now, it could seriously slow down a production server, but... you're not pushing untested SQL on a production server now, are you? Right? Riiiiiiiiiiight?

    So at worst, you're slowing down your own localhost development database engine for everyone else trying to access it (read: no one).

    Not much for the DBA to scream about...

    • by Anonymous Coward on Sunday August 03, 2008 @09:12PM (#24461201)

      No single query will ever bring a (real) RDBMS down. Even on a terrabyte of data or more, doing a crazy multi-hundred-table cross join, you're not going to bring it down.

      A real ACID-compliant database, no. MySQL, maybe.

      Now, it could seriously slow down a production server, but... you're not pushing untested SQL on a production server now, are you? Right? Riiiiiiiiiiight?

      Unfortunately sometimes you do need to run new queries against production servers. Of course, with a real database like MSSQL or Oracle, you can see how a query will execute, what path the optimizer will follow, and what the cost of the query will be.
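
      For example, roughly (table and column names here are made up for illustration; exact syntax depends on product and version):

      -- Oracle: show the plan the optimizer would choose, without running the query
      EXPLAIN PLAN FOR
        SELECT o.order_id, c.name
          FROM orders o JOIN customers c ON c.customer_id = o.customer_id
         WHERE o.order_date > DATE '2008-01-01';
      SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

      -- SQL Server: return the estimated plan instead of executing the statement
      SET SHOWPLAN_ALL ON;
      GO
      SELECT o.order_id, c.name
        FROM orders o JOIN customers c ON c.customer_id = o.customer_id
       WHERE o.order_date > '2008-01-01';
      GO
      SET SHOWPLAN_ALL OFF;
      GO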

      • by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Sunday August 03, 2008 @09:39PM (#24461369) Homepage

        Explain/describe exists in MySQL; it's just very hard to get useful information out of it.

        Is it possible to bring Oracle down? I would think so; it would just take a lot (note: assuming normal hardware, not a large high-powered cluster). Is it possible to take MySQL down? Easily. It can be surprisingly easy to lock the server completely: even when you select from one set of tables (A) and insert into another set (B, possibly in a different schema/DB), it is possible to have things locked. We haven't seen a crashing bug in MySQL in a while (fun: a query that formatted dates with the date format function could reliably crash MySQL 4.0 or 4.1, I don't remember which).

        Does explain help? Not really. On Oracle it may help; in Postgres it seems to help. I have no experience with MSSQL. In MySQL you have to watch out: while it can be useful, it is very limited.

        Its row counts can be horribly misleading. It can list 1.2 million rows when in fact it takes a fraction of a second to get the data, because it's all in an index in memory.

        Worse: it will run the query for you. Under some circumstances (using a subquery can do it; using more than one level of subquery is almost guaranteed to do it) it will just run the inner query and then use that to produce results. This means that describe/explain can lock the database and take hours to return (if the query is bad enough and you don't kill the describe/explain). It's all the fun of running the real query, without the results actually being presented to you.
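
        A rough illustration of the kind of statement that can do this (names invented); in older MySQL versions the derived table in the FROM clause may be fully materialized just to produce the EXPLAIN output:

        EXPLAIN
        SELECT t.customer_id, t.total
          FROM (SELECT customer_id, SUM(amount) AS total
                  FROM orders
                 GROUP BY customer_id) AS t
         WHERE t.total > 1000;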

        Note: We're using 5.0 (since 5.1 isn't production ready yet). Some of this may be fixed.

        • by russotto ( 537200 ) on Sunday August 03, 2008 @10:33PM (#24461649) Journal

          Is it possible to bring Oracle down? I would think so, it would just take a lot (note: assuming normal hardware, not a large high-power cluster).

          Oh yes, Oracle can be brought to a grinding halt (even on substantial hardware) by a big nasty query. It may not be crashed, but it's nonresponsive. Especially annoying when there is no need for the Cartesian product; Oracle's pessimizer just chose to do one when something else was MUCH more appropriate. Alas, this tool would not catch that situation (but EXPLAIN PLAN does).

          • You can use the Database Resource Manager to control maximum execution time for queries in Oracle; it's designed to protect against exactly such scenarios.
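
            Roughly like this, as a sketch only (plan and group names are invented, and a real setup also needs the plan activated and users mapped to the consumer group):

            BEGIN
              DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
              DBMS_RESOURCE_MANAGER.CREATE_PLAN(plan => 'LIMIT_PLAN', comment => 'limit runaway queries');
              DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(consumer_group => 'AD_HOC', comment => 'ad hoc users');
              DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
                plan              => 'LIMIT_PLAN',
                group_or_subplan  => 'AD_HOC',
                comment           => 'refuse statements estimated to run longer than 10 minutes',
                max_est_exec_time => 600);
              DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
                plan              => 'LIMIT_PLAN',
                group_or_subplan  => 'OTHER_GROUPS',
                comment           => 'mandatory catch-all directive');
              DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
            END;
            /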

            However, the "pessimizer", as you call it, is not a magic wand - it's just decent heuristics for selecting the optimal path for data retrieval. It's pretty complex and has taken decades to get to this level. It's damn good if you understand it and configure your instance correctly. Where it doesn't work, you have always had other options.

            And no, I don't work for Oracle.

          • by hey! ( 33014 )

            The question is: is there a semantically equivalent query that DOESN'T overload the system?

            A Cartesian product isn't necessarily a bad choice if the product fits in memory. I personally haven't had this problem with Oracle, so I'd be interested in an example.

            • The question is: is there a semantically equivalent query that DOESN'T overload the system?

              Yes, in the particular cases I ran into, I was able to reformulate the queries to avoid the problem. They were SELECT DISTINCTs on a big hairy set of tables; by turning that query into a subquery with SELECT ALL and then doing a SELECT DISTINCT on the subquery, the problem was resolved.
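
              Schematically, it was something like this (made-up table names):

              -- original form: DISTINCT applied across the whole big join
              SELECT DISTINCT a.id, b.name, c.region
                FROM a JOIN b ON b.a_id = a.id
                       JOIN c ON c.b_id = b.id;

              -- reformulated: do the join as a plain (ALL) subquery, then de-duplicate its result
              SELECT DISTINCT id, name, region
                FROM (SELECT ALL a.id, b.name, c.region
                        FROM a JOIN b ON b.a_id = a.id
                               JOIN c ON c.b_id = b.id) sub;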

          • The statement about there being 'no need for a Cartesian product' reminds me of a guy who said inner joins should never be used because they can bring a system down - because he once wrote a poor query that did just that.

            Just because a person doesn't know when to use something properly, or uses it improperly from time to time, doesn't mean it has no use.

            I've used Cartesian joins before. Not very often, but I do recall using them in the past for very specific requirements. If memory serves me, it was

        • Comment removed (Score:5, Interesting)

          by account_deleted ( 4530225 ) on Sunday August 03, 2008 @11:15PM (#24461925)
          Comment removed based on user account deletion
          • by Tim C ( 15259 )

            Oracle... we use it for some backend stuff at work, and frankly I think a monkey with a pen and paper could be faster.

            Someone's done something wrong (probably stupidly wrong), as I've used Oracle on a system running stupidly complex queries against a table containing ~60 million rows, and while some queries can take a while, most are surprisingly fast (sub-second, or a second or two). Yes, it's been designed and optimised to be fast - but that's what I meant about someone at your place having done something wrong...

        • Is it possible to bring Oracle down?

          Yes. Run Oracle 10.2.0.1 on SLES10 and wait for 49 days until the value returned by the times() syscall wraps around. Oracle shits itself. It can take up to 247 days on other distros.

          I'm sure they've fixed it since, but holy shit that's a stupid bug. What was that other software that crashed after 49 days? Windows 95. Unbreakable indeed.

    • Re: (Score:3, Funny)

      by Anonymous Coward

      Terrabyte? A planet byte?

    • by NerveGas ( 168686 ) on Sunday August 03, 2008 @10:24PM (#24461597)

      So, you don't put an untested query on a production server. Great. What happens when someone changes data in such a way that your query now explodes? :D

      The last time I had to deal with that, one boneheaded programmer had his code set to send him an email if it couldn't find a good match in the DB. Someone changed the data, and with the amount of traffic, his code, spread across our web serving farm, had injected almost a million messages into the email queues. Programmers are awesome.

      • by Shados ( 741919 )

        In that case, your web farm and/or email server dies, while the database server is still purring along :)

        • Actually, no. SQL Server 2000's mail integration is synchronous, so if the mail server is down it hangs the database. Good one, eh?
    • Re: (Score:3, Informative)

      by killjoe ( 766577 )

      Depends on your database. I know I have been able to bring SQL Server down with a query.

      Try this...

      BEGIN TRANSACTION
      UPDATE rows SET a = b WHERE x = y
      COMMIT TRANSACTION

      On your workstation this could run really fast because you only have ten records. On the production database server it could crush the server if you had a few million records affected.

      • by Shados ( 741919 )

        Not from where I'm standing. The query itself will be slow if it affects a lot of rows (if I use exactly the query you're giving, and X is indexed, it will be near instant no matter how many rows I have =P), but the server will purr along just fine with other queries (assuming it's configured to use snapshot mode, else it could lock out other queries... but who doesn't use snapshot mode since it's been available in SQL Server anyway? Especially after people had been bitching about it from the day it was av

        • by Forbman ( 794277 )

          Hmm... but what if X is a clustered index (or some other index type which physically orders the data on the physical storage)? If you're updating the clustered index column, you could wildly, inadvertently crush your server due to the disk I/O as data has to get reordered on the drive...

          Not the biggest fan of clustered indexes in SQL Server/Sybase... Oh well, I suppose in one of the next few releases of SQL Server MS will figure out how Oracle does some more of their low-level black magic, and they will q

          • by ahmusch ( 777177 )

            Oracle's "low-level black magic" is multi-version concurrency control, where it's segregated redo and undo operations, eliminating locks as a scarce resource in the system.

            And Oracle tracks the clustering factor of every index. If you really want data to be clustered around an index in Oracle, you create the table with the clause ORGANIZATION INDEX, and the table is physically created as a B-tree index structure.
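
            For example, a minimal sketch (invented names); an index-organized table must have a primary key, and the rows are then stored in that key's B-tree:

            CREATE TABLE order_lines (
              order_id  NUMBER,
              line_no   NUMBER,
              product   VARCHAR2(40),
              qty       NUMBER,
              CONSTRAINT order_lines_pk PRIMARY KEY (order_id, line_no)
            ) ORGANIZATION INDEX;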

      • Hm-m-m-m. I do this on a daily basis with one of our instances. We use SQL Server as a data preparation engine for a predictive modeling operation. This typically involves mass updates of the type you're specifying. I'm working on a set of data right now, with table sizes up to 45M rows. Some of the longer updates will run for 3 to 5 minutes, but that's hardly locking up the box. If you poorly construct the query, it will run for a couple of hours, but I don't call that bringing the database down, it'

    • Re: (Score:3, Interesting)

      by mbourgon ( 186257 )

      AHAHAHAHAHAHAHAHAHA. Since I do run terabyte-sized databases, I'll contradict you - poor queries _can_ tank a server, even with small tables, if the query is poor enough. While it technically may be running, if nobody else can access it, then for practical purposes the server is down. And never underestimate the ability of one user with enough knowledge to be dangerous, to spread that selfsame query across as many people as possible.

      • Re: (Score:3, Informative)

        by Shados ( 741919 )

        Maybe my poor-query-writing skills are bad :) Because I've seriously -tried- before... cross joins on 100+ tables, all of which contained several dozen gigs of data, totalling multiple terabytes... the scheduling was good enough to give the query very low priority, leaving the server OK.

        If you use (in SQL Server at least) the default settings, that will basically render your database useless... but if you use the newer locking strategies from 2005 (which have been available in Oracle for ages), the tables

      • Since I do run terabyte-sized databases, I'll contradict you - poor queries _can_ tank a server, even with small tables, if the query is poor enough. While it technically may be running, if nobody else can access it, then for practical purposes the server is down.

        This sounds like the server is doing a potentially unbounded amount of I/O or processing with a lock held. Otherwise the other queries should still run, just slightly slower due to the increased load on the server. A query, no matter how poor, shouldn't b

    • by Tablizer ( 95088 )

      One feature that I dearly wish RDBMS vendors would provide is a time-limit clause. Example:

          SELECT * FROM foo TIMELIMIT 30

      This would ensure that the statement does not run for more than 30 seconds.
           

      • by Craig Ringer ( 302899 ) on Monday August 04, 2008 @03:08AM (#24463221) Homepage Journal

        craig:~$ psql
        Welcome to psql 8.3.3, the PostgreSQL interactive terminal.
        craig=> set statement_timeout=1000;
        SET
        craig=> SELECT generate_series(0,100000000000000000);
        ERROR: canceling statement due to statement timeout

      • by leenks ( 906881 )

        Err, there are ways of configuring a decent RDBMS to have this kind of behaviour, albeit typically on a per-user basis - just like you can limit the number of parallel queries a particular user can execute.
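
        In Oracle, for instance, a sketch along these lines (names invented; resource limits in profiles only take effect when RESOURCE_LIMIT is enabled):

        CREATE PROFILE ad_hoc_users LIMIT
          SESSIONS_PER_USER      4
          CPU_PER_CALL           30000      -- hundredths of a second per call
          LOGICAL_READS_PER_CALL 1000000;   -- data blocks read per call

        ALTER USER report_user PROFILE ad_hoc_users;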

    • Re: (Score:3, Insightful)

      by Venik ( 915777 )
      SQL code is usually developed on some small server or the DBA's own workstation. The dev database is representative of the prod version only in structure, not in size. So this type of error sometimes goes unnoticed until the code is migrated to the prod environment. The effect of such errors varies depending on server architecture. The most sensitive are HA cluster environments, where the clustering engine overreacts and starts failing things over, exacerbating the problem.
      • by Shados ( 741919 )

        This type of error would go unnoticed? Err? If the query returns -the wrong data- (as a cross join by mistake would), and it goes unnoticed, something worse than your database crashing on you is waiting.

        • by Venik ( 915777 )
          The point is, buggy SQL code running on a powerful production server with a huge database may manifest itself in a completely different way, as opposed to when running on a slow dev box in a minimal db environment. With growing volumes of data and shrinking IT budgets sometimes even the largest companies find themselves with inadequate testing and staging hardware.
      • by leenks ( 906881 )

        This is why there are "reference" systems - typically the same size and spec as the live system (or within the same ballpark), containing the same data (or a representative sample of it) - which are used for staging / integration / pre-release testing.

      • by dindi ( 78034 )

        Dev, Testing, Staging, Prod ...

        Technically, Staging is identical to the production environment, with the same amount of data. Testing has less powerful hardware, but often just as much data to deal with.

        BTW, with our roughly 100GB database we run tests on WAY slower machines than the actual DB server. We do not have staging or testing, but one environment where we mess around. Development is pretty much the same, but with smaller DBs...

    • It's not that you bring it "down"; you just slow it down to the point of a noticeable performance hit for everyone else. I've seen queries written by people just smart enough to be very dangerous, where the query consumes 100% of resources for an hour (when, written correctly, it would take 2 seconds). That's as good as "down" in some cases.

    • by hey! ( 33014 )

      Well, a modern RDBMS is practically an operating system. This means that the way you bring it down doesn't involve the kinds of things a tool like this tells you. You probably need to do something procedural, involving a mix of tasks the RDBMS can't handle efficiently.

      Of course, it is always possible to tune things in a disastrous way. Oracle is an RDBMS that is highly tunable, which means that a lot of people make really bad choices, for example tuning things in a way that requires greater memory witho

    • Now, it could seriously slow down a production server, but... you're not pushing untested SQL on a production server now, are you? Right? Riiiiiiiiiiight?
      Not as part of a system, no, but if I just need answers, I run untested queries on production servers all the time.

    • by mgblst ( 80109 )

      You clearly don't use SQL Server, it has a great propensity to run out of memory and crash (OK, not great, but every now and again).

  • by hkz ( 1266066 ) on Sunday August 03, 2008 @08:59PM (#24461105)

    A link to an alpha project on Sourceforge that was created three days ago and doesn't even have its own website? That apparently outputs LaTeX tables instead of something readable without having to compile it first, like HTML, SVG, or even indented text? I know it's silly to expect every story to be about a cure for cancer, but come on...

    • an alpha project on Sourceforge that was created three days ago
       
      ... and won't go much further, because even a casual reading of the explain plan can plainly show what the project is trying to track down. It sounds like a complete duplication of effort.

    • by LSD-OBS ( 183415 ) on Sunday August 03, 2008 @09:23PM (#24461267)

      Yup, not cool.

      Word to the wise: if you're going to actually start advertising a project, please make sure you have some binaries built for some common relevant platforms, and make sure you have some decent information online even if it's just an ugly page with screenshots or examples of what it does.

      In this case, we're talking about some scripts written in Python. At least let people know this on the front page, and list the project dependencies! i.e., Graphviz, or whatever.

      This way, your potential users won't immediately discard it due to a lack of compelling information, and your potential (future) developers can see how far you've got and maybe get inspiration to chip in and help!

      That said, this sounds like it should be a great tool for beginner or intermediate SQL users, and I look forward to throwing a few of our mammoth 12-table-join queries at it for much fun.

      • Existing tools (Score:3, Interesting)

        Most PostgreSQL users don't seem to use the existing, and superior, tools like EXPLAIN, EXPLAIN ANALYZE, PgAdmin-III's graphical explain, etc. I'm sure the same is true for users of many other databases.

        It's not like these tools are particularly difficult to use or understand. No training is required, though being willing to think and read a little documentation helps if you want to get the most out of them. Understanding at least vaguely how databases execute queries is handy for any database user anyway.
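
        For example, in PostgreSQL (made-up tables; EXPLAIN shows the planner's estimates, while EXPLAIN ANALYZE actually runs the query and shows real row counts and timings):

        EXPLAIN
        SELECT c.name, SUM(o.amount)
          FROM customers c JOIN orders o ON o.customer_id = c.id
         GROUP BY c.name;

        EXPLAIN ANALYZE
        SELECT c.name, SUM(o.amount)
          FROM customers c JOIN orders o ON o.customer_id = c.id
         GROUP BY c.name;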

    • Re: (Score:3, Informative)

      by Blakey Rat ( 99501 )

      Ditto. I downloaded it to take a look and see how good it was at parsing T-SQL, since we have a few saved T-SQL queries with WHILE loops in them. I gave up after seeing it's... nothing. Just a Python script. It requires Graphviz, Python, and Pyparsing (even though it comes with pyparsing!? WTF!), and even more damning is that you can't use it for ad-hoc queries; the query has to be saved into a file first.

      Someone slap a GUI on this that lets you paste in a query, and bundle all the requirements along with t

  • You can do what I've done and seen done a number of times, and write a hunk of middleware that parses SQL statements for runaways and sends back a warning to the user. That, and not using medium and low duty databases like MSSQL and MySQL, can go a very long way to keeping users happy.
    • Re: (Score:2, Informative)

      by mino ( 180832 )

      That, and not using medium and low duty databases like MSSQL and MySQL can go a very long way to keeping users happy.

      Honestly, to describe MSSQL as "medium and low duty" is pretty rich. You'd best believe I'm happy to bash MS as much as the next guy but SQL Server is a high-performing, highly maintainable, high-availability database and doesn't deserve to be mentioned in the same sentence as MySQL.

      Hell, MSSQL might actually be the only truly good product MS makes -- in fact, it probably is. It's not a toy, and people who assume it is, just because it comes from MS (I'm not saying this is what you're doing, but people DO do

  • by Mycroft_514 ( 701676 ) on Sunday August 03, 2008 @09:16PM (#24461231) Journal

    Doesn't name WHICH RDBMS, and then you throw SQL at it? So what? For DB2 we have a thing called "Visual Explain" which NOT ONLY does this, and is free (provided by IBM), but also shows you other things, like which index is being used for each step, etc.

    This is news? This isn't even worth a second look!

    • Re: (Score:2, Informative)

      by Hackerlish ( 1308763 )

      Watch out! Anyone pointing out how a kdawson story isn't news gets moderated down as a troll. I can't even work out how this got out of the firehose.

      • I have karma to burn, and I don't take kindly to anyone (even the powers that be) modding down when what I say is based upon fact!

        Look to my credentials, only been a DBA for 20+ years!

        • Look to my credentials, only been a DBA for 20+ years!

          Yes, but a DB2 developer.

          And remember, you don't look "to" credentials, you look "at" them.

          • And remember, you don't look "to" credentials, you look "at" them.

            PHLEBAS the Phoenician, a fortnight dead,
            Forgot the cry of gulls, and the deep seas swell
            And the profit and loss.
            A current under sea
            Picked his bones in whispers. As he rose and fell
            He passed the stages of his age and youth
            Entering the whirlpool.

  • posted by the admin of the project? the spam tag is accurate... "yes this is an open source clearing house, no we will not all rapidly sign up to your cute little project." though, i would be willing to bet this is a Masters Thesis project and alxtoth is hoping to get some fast-tracking going on...
  • all your apps should only be able to access the DB as unprivileged users with resource limits to prevent crashing, and they should only be able to run stored functions which someone qualified at sql creates for the application guys.

    this way the programmers are prevented from infecting the database with their crapness
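
    a rough sketch of what that looks like (PostgreSQL-flavoured syntax; role, table and function names invented, the function itself written and vetted by the DBA):

    -- application role gets no direct table access, only vetted stored functions
    REVOKE ALL ON orders FROM app_role;
    REVOKE ALL ON customers FROM app_role;
    GRANT EXECUTE ON FUNCTION get_open_orders(integer) TO app_role;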

    • Re: (Score:3, Insightful)

      Quis custodiet ipsos custodes?

    • For things like reports, your developers have to write complex SQL. You can argue that it shouldn't be a developer, instead it should be done by a "development DBA" or whatever, but essentially whoever writes the SQL IS the developer for reports. Even experienced DBAs can leave out a join in a complex (10 or more table) query, and it often isn't found if it's only run against a development and/or QA database with limited data and no real load. Cartesian products should be found if anyone actually reads th
    • Sure, things that actually use the database for production shouldn't be trying to do dirty things with it, but developers, whether dedicated "DBA's", or the poor shop with only one tech guy of any kind, need to be able to "play" with the database to be able to tweak it and ... well... do anything of meaning other than retrieve data. Sometimes this can be dangerous, but this is why they are testing on a development server... right?
    • Re: (Score:3, Interesting)

      by Samah ( 729132 )
      Generally what happens on my project is that the team (headed by an analyst) decides on the best design for the task, then subtasks are delegated to developers based on their level of skill with PL/SQL and/or Java.
      Business logic (for the most part) is done on the server-side with PL/SQL packages, while the application itself is a Java fat client running on a Citrix cluster.
      Before you make statements about keeping business logic separate from the database, this situation works well for this application, as
    • And these types of edicts from on high tend to really bite you in the behind over time. You wind up with hundreds upon hundreds of stored procedures, and nobody knows which ones are even in use any longer. One project will wind up requesting a change that affects another project, and it basically excludes any O/R mapping tools. It's just one huge mess.

      Your best bet is to insist that your developers are just a little clued in. How hard is it to say, "As long as your queries always have an indexed field

  • EXPLAIN (Score:5, Insightful)

    by Craig Ringer ( 302899 ) on Sunday August 03, 2008 @09:22PM (#24461257) Homepage Journal

    I don't see what this has over EXPLAIN [postgresql.org] and an appropriate graphical display tool like PgAdmin-III [pgadmin.org]. There are large numbers of tools that display graphical query plans [postgresonline.com] - and unlike this simple SQL parser, they know how the database will actually execute the query once the query optimiser [wikipedia.org] is done with it.

    Furthermore, a simple SQL parser has no idea about what indexes [wikipedia.org] are present, available working memory for sorts and joins, etc. It can't know how the DB will really execute the query, without which it's hard to tell what performance issues may or may not arise.

    See comment 24461217 [slashdot.org] for a more detailed explanation of why this whole idea makes very little sense.

    • That's right. Mod parent up. Any tool that only looks at the SELECT statement, without knowing about the indices or what the optimizer is doing, is nearly useless.

      • > Any tool that only looks at the SELECT statement, without knowing about the indices or what the optimizer is doing, is nearly useless.

        This is wrong. The indices or optimizer have very little to do with the SELECT; by the time the SELECT clause is processed, most database engines are already done with the FROM, JOIN, WHERE, GROUP and HAVING clauses. At that point there is little to gain from adding/dropping indices in the query plan, unless the platform supports included fields. As such, SELECT is like

        • Re: (Score:3, Informative)

          I think you might've missed the point.

          The term SELECT statement generally refers to the whole statement, including FROM, WHERE, HAVING, etc clauses.

          This is pretty clear in context, as it'd be nonsensical to produce a graphical explain tool for the result field list in the SELECT clause itself.

          That's why the parent said SELECT statement, not SELECT clause.

          As it happens, the same issues regarding the need for planner knowledge etc. are true for DML like INSERT, UPDATE and DELETE. It's not about SELECT at al

          • by lucm ( 889690 )

            > I think you might've missed the point.
            > The term SELECT statement generally refers to the whole statement, including FROM, WHERE, HAVING, etc clauses.

            I did not miss the point:
            >> For long queries with complex joins (like recursion), a diagram tool for SELECT can be very helpful

            I simply disagree that the product is useless without knowing about the indices. Before one starts reviewing query plans and figuring out which index is important, one must make sure the query makes sense. Logical be

            • Yes, it could be useful for examining the output of the query in non-performance terms. For complex queries I can easily see how that could be useful. That may, in fact, be the whole idea behind the tool - to help reduce or eliminate execution of grossly incorrect queries that don't do what the user wants. Tools like EXPLAIN aren't as useful for that, either, as the query looks quite different after the query optimiser is done with it. Additionally EXPLAIN output usually drops detail about specific fields
  • The day of the python? There were 2 different tools written in Python.

    I'm not flaming, I just thought it was interesting as I loved programming in python when in college!

    • by cduffy ( 652 )

      I'm not flaming, I just thought it was interesting as I loved programming in python when in college!

      "When in college"? Why not now?

      Python jobs aren't hard to find.

  • by fahrbot-bot ( 874524 ) on Sunday August 03, 2008 @09:44PM (#24461401)

    If you sit close to the DBAs, you can hear them screaming...

    I've noticed that when things go horribly wrong, you don't actually have to sit that close. To be fair, as a Unix SA who has to deal with Windoze systems, I've done my fair share of screaming. :-)

  • Are you serious? (Score:2, Informative)

    So SQL Server has had a graphical execution plan view forever, and it's better than this lameness. But of course it's not free, and we all know that free software is better, even when it sucks. Seriously, compare this to the real tools included with a serious RDBMS, and I have to question why this was even posted. It's almost farcical.
    • by Shados ( 741919 )

      Not free as in Freedom, but definitely free as in free beer. The SQL Server Management Studio Express version has the execution plan stuff, and it works quite peachy, for free.

  • by sootman ( 158191 ) on Sunday August 03, 2008 @10:15PM (#24461555) Homepage Journal

    Can we have that in English please? Possibly with a diagram?

  • by scdeimos ( 632778 ) on Monday August 04, 2008 @01:46AM (#24462819)

    No single query will ever bring a (real) RDBMS down. Even on a terrabyte of data or more, doing a crazy multi-hundred-table cross join, you're not going to bring it down.

    You've obviously not tried anything simple on MS-SQL, like expanding a varchar(4) column to nvarchar(10) on a table with a few million rows. MS-SQL spins its wheels filling up the transaction log until it overflows, then rolls it all back again. A 4GB log file, filled up by a 250MB table (and no indexes, because they were already dropped)?

    In the end we had to drop all the FK refs, select * into another table, drop the original table, then select * (with conversions) into a new table with the original's name, and reset all the FKs. *shakes head*
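
    Schematically, the workaround looked something like this (names invented, and only one FK shown):

    -- 1. drop foreign keys that reference the table
    ALTER TABLE order_lines DROP CONSTRAINT fk_order_lines_orders;

    -- 2. copy the data out (SELECT INTO is minimally logged under simple/bulk-logged recovery)
    SELECT * INTO orders_tmp FROM orders;

    -- 3. drop the original and rebuild it under the old name with the widened column
    DROP TABLE orders;
    SELECT order_id, CAST(code AS nvarchar(10)) AS code, amount
      INTO orders
      FROM orders_tmp;

    -- 4. recreate keys, indexes and the foreign keys
    ALTER TABLE orders ADD CONSTRAINT pk_orders PRIMARY KEY (order_id);
    ALTER TABLE order_lines ADD CONSTRAINT fk_order_lines_orders
      FOREIGN KEY (order_id) REFERENCES orders (order_id);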

    • ...and that is why you should switch to DB2.
      What the heck are you doing with MS-SQL Server? Don't you know it's for developers and kids?
      Trying to use SQL Server in production is like trying to cut your toenails with a straight razor: You may end up cutting your toenails ultimately, but you are likely to bleed yourself to death before that.

      • by lucm ( 889690 )

        > What the heck are you doing with MS-SQL Server? Don't you know it's for developers and kids?

        You should call Myspace.com and let them know that they made a stupid database choice.

        Seriously, DB2 is so 1992.

        • myspace.com has been supplanted by Facebook.
          And DB2 is the granddaddy that is trusted by ALL banks.
          Give me a bank which does not store its data on DB2, and I will concede this.
          And banks are the most thorough corporate IT customers.

          • Re: (Score:3, Insightful)

            by lucm ( 889690 )

            > myspace.com has been supplanted by Facebook.

            Facebook being more popular than MySpace has nothing to do with the database back-end. If you need more big customers for SQL Server 2005, they are easy to find: Barnes & Noble, the HMV online music store, NASDAQ (over 5000 transactions/sec).

            So basically your statement that SQL Server is a toy database might have attracted a few claps 6 or 7 years ago on Slashdot, but the reality is that SQL Server is a robust product finding its way in many markets. As one c

            • For the bigger banks

              How about HSBC, State Street & Barclays? All of them live on IBM.
              I lived with many large banks for over a decade, especially HSBC.
              IBM is something they swear by. Not just because it's well known, but because it is so good and thorough.
              Mid-level banks, your figures are correct: Oracle and then SQL Server.
              I worked for a mid-level bank in CT which migrated from mainframe to a services-based architecture and uses Oracle as a back-end.
              Plus, security of data and privacy lawsuits terrify large banks more. I had to go throug

      • MS SQL Server is good and relatively cheap. The type of problem the grandparent mentioned exists in ALL DBMSes from ALL vendors. But there's nothing that DB2 can do that MS SQL can't, and MS SQL has great data-flow tools that come along with it to make actual use of the data.

        Now, Microsoft's graphical DB tool (SQL Server Management Studio) has gone backwards in many ways from the older Query Analyzer/Enterprise Manager toolset, but oh well.

    • You missed the parenthesized word in "No single query will ever bring a (real) RDBMS down."
