Transferring Data 'Tween Databases

Sysbotz writes "A common request our company gets is how to get data from Access, Paradox, or some other database format and transfer it to a MySQL database. Well, we have written an article on how to do this. We accomplish this task by writing a PHP script to read a database file through ODBC and then construct an SQL file of the data that can then be read into MySQL. I think some slashdotters would like this."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Standardization. (Score:2, Insightful)

    by noselasd ( 594905 )
    Wouldn't it be nice if the protocol to talk to RDBMSes were standardized? Every DB server would speak the same standard protocol to its clients, sort of like LDAP, which is a protocol that vendors implement.
  • by herrlich_98 ( 267669 ) on Monday April 14, 2003 @06:28AM (#5726760)
    Summary: to get data from a db to MySQL use PHP to read the db and print out a MySQL script that loads all the data.

    It is nice to highlight that you can read lots of different databases using odbc in PHP, but still.

    This basic concept is obvious to anyone familiar with MySQL. I mean, come on: "pick a language that can read the database in question and use it to dump the data into a format that can be read by MySQL".

    This program could have been written in Visual Basic or C# or anything that can read the database you want to convert.

    A more interesting PHP program would have taken *any* two arbitrary ODBC databases (MySQL can be accessed through ODBC) and dumped table definitions and data from one db to the other.
    • No, it is not worthy of a /. article in my opinion (but my opinion doesn't matter as I'm not in @slashdot.org.corp), this is just a lame script.

      No new techniques, and code that almost any person with a day of PHP experience would have already written. The only reason I checked it out was that I wanted to see if they figured out how to grab column names or table names via ODBC. I'd love to be able to grab meta-data about the databases themselves. If this isn't part of the ANSI SQL standard...
    • The article covers a dumb trainee-job and doesn't even describe an elegant way to do this (e.g. via meta data).

      If this is all it takes to get a dev-article to /., I'll just dump my everyday work here and become the uber-geek ;-).

    • This *really* sucks.
      It's a lame braindead extremely ordinary task every near-decent php/db programmer had to do at some point, when he finally realized mysql sucks and he's gotta move everything into postgresql :p
      Clearly this was meant as publicity for the company in question, and it has now turned into bad publicity.
      I know I am taking the dead horse round the block for a quick flog, but doing this in general is a non-trivial task: data types in different databases have different names, and what Access calls 'autonumber' fields are handled quite differently in, say, Oracle.
      In fact, since the autonumbers are generated - duh - automatically, copying between two Access databases can be distinctly non-trivial (indeed, if you have relations defined AND autonumber fields determining a way to put the d
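The generated-key problem described above can be sketched in a few lines. This is only a sketch: Python's sqlite3 stands in for the two databases, and the `parent`/`child` table names and `copy_with_new_ids` helper are hypothetical. The idea is to let the target assign fresh autonumbers and rewrite the child rows' foreign keys through an old-to-new id map.

```python
import sqlite3

def copy_with_new_ids(src, dst):
    """Copy a parent/child table pair when the parent key is auto-generated:
    let the target assign fresh ids, remember the old->new mapping, and
    rewrite the child rows' foreign keys through it."""
    id_map = {}
    for old_id, name in src.execute("SELECT id, name FROM parent"):
        cur = dst.execute("INSERT INTO parent (name) VALUES (?)", (name,))
        id_map[old_id] = cur.lastrowid          # key assigned by the target
    for pid, note in src.execute("SELECT parent_id, note FROM child"):
        dst.execute("INSERT INTO child (parent_id, note) VALUES (?, ?)",
                    (id_map[pid], note))
    dst.commit()
```

With more than one level of relations you have to copy the tables in dependency order, which is exactly why the general case is non-trivial.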
  • Why wouldn't you just add a parameter to your script that could do the inserts directly to the MySQL database right from PHP, instead of going to a file first? A simple type switch would be good. Then you could say if($type == "file") write data to file, else if($type == "db") write directly to the MySQL DB... It's a simple solution, and it could be easily configured for clients as a turnkey solution to getting their data.
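The file-vs-direct switch suggested above might look like this. A minimal Python sketch, with sqlite3 as a stand-in target and a hypothetical `export_rows` helper; note the file branch uses naive `repr()` quoting, which is fine for a sketch but not for production data.

```python
def export_rows(rows, columns, table, mode="file", conn=None, path=None):
    """Dump rows either as INSERT statements into a file, or directly
    into a target DB-API connection, depending on `mode`."""
    collist = ", ".join(columns)
    if mode == "file":
        with open(path, "w") as f:
            for row in rows:
                # naive quoting via repr(); a real dump needs proper escaping
                vals = ", ".join(repr(v) for v in row)
                f.write(f"INSERT INTO {table} ({collist}) VALUES ({vals});\n")
    elif mode == "db":
        placeholders = ", ".join("?" for _ in columns)
        conn.executemany(
            f"INSERT INTO {table} ({collist}) VALUES ({placeholders})", rows)
        conn.commit()
```

Same extraction code either way; only the sink changes, which is the point of the type switch.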
  • Been there.. (Score:2, Informative)

    by JohnFluxx ( 413620 )
    I found a script on freshmeat that could read an access file and produce the appropriate sql commands.

    This was a few years back, and it had some problems with special characters - like spaces, IIRC - since Access is more lenient.

    I kept meaning to write a script to turn the forms it created into glade xml.
  • Perl DBI? (Score:5, Informative)

    by merlyn ( 9918 ) on Monday April 14, 2003 @06:34AM (#5726779) Homepage Journal
    The Perl DBI can talk to all of those listed databases and more. It'd be trivial to fetch everything from one database and store it in another, without worrying about local quoting conventions, as long as you use the DBI placeholders.
    • by fizbin ( 2046 ) <martin@snoFORTRA ... g minus language> on Monday April 14, 2003 @08:31AM (#5727311) Homepage
      In fact just recently I wrote a one-off script here that did essentially what this PHP script does - it takes data out of a local sybase db and reformats it as a bunch of SQL statements. (We don't have direct access to the database into which this needs to be loaded, so there needs to be an intermediate form anyway)

      I suppose this _might_ be worth a post on perlmonks, as an example of using the DBI, (and of working around the fact that DBD::Sybase doesn't really implement column_info) but not much more than that.

      This code generates an SQL load file for each table that has a column named "DataSrcId" where that column has the value "35". It also substitutes the value 'guy' for any column named 'AudUsrId' and does not include any column named 'AudTmst' in the load output. As I said, it's a one-off hack.

      #!perl
      use DBI;
      use DBD::Sybase;

      my ($dbh);

      sub dumpstatement {
          my ($tablename, $statement) = @_;
          my $sth = $dbh->prepare($statement);
          $sth->execute();
          while ( my (@row) = $sth->fetchrow_array ) {
              my @names = @{ $sth->{NAME} };
              # substitute 'guy' for AudUsrId values, drop AudTmst entirely
              @row = map { $names[$_] eq 'AudUsrId' ? 'guy' : $row[$_] } (0 .. $#row);
              @row = map { $names[$_] eq 'AudTmst' ? qw() : $row[$_] } (0 .. $#row);
              @names = grep( !/^AudTmst$/, @names );
              print "INSERT $tablename (", join( ',', @names ), ")\n";
              print "VALUES (",
                  join( ",", map { $dbh->quote( $row[$_], $sth->{TYPE}->[$_] ) } (0 .. $#row) ),
                  ")\n";
          }
          print "\n";
      }

      my ( $user, $password ) = qw[sa confusion];
      $dbh = DBI->connect( "dbi:Sybase:server=njdscope;database=TEST_ATRB", $user, $password );

      my ($sth) = $dbh->table_info( '%', '%', '%', '%' );

      my (@tables);
      while ( my $hashr = $sth->fetchrow_hashref("NAME_uc") ) {
          push @tables, $hashr->{TABLE_NAME};
      }
      $sth = undef;

      foreach my $table (@tables) {
          $dbh->{PrintError} = 0;
          my $teststatement = $dbh->prepare("SELECT max(DataSrcId) FROM $table WHERE DataSrcId = 35");
          $teststatement->execute;
          if ( $teststatement->err ) { next; }

          $dbh->{PrintError} = 1;
          print "-- for $table \n";
          dumpstatement( $table, "SELECT * FROM $table WHERE DataSrcId = 35" );
      }

      By the way - slashdot inserts an extra ";" in this code, even though it is NOT there in what I copy/paste in. Go figure.
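The placeholder approach merlyn recommends (and which the Perl script above hand-rolls with `$dbh->quote`) looks much the same in any DB-API-style binding. A Python sketch, with sqlite3 standing in for both source and target; the placeholder token is an assumption that varies by driver (`?` here, `%s` in others).

```python
def copy_table(src, dst, table):
    """Copy every row of `table` between two DB-API connections, letting
    bound parameters handle the quoting instead of string-building SQL."""
    cur = src.execute(f"SELECT * FROM {table}")
    cols = [d[0] for d in cur.description]   # column names from the driver
    placeholders = ", ".join("?" for _ in cols)
    dst.executemany(
        f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})",
        cur.fetchall(),
    )
    dst.commit()
```

Because values travel as bound parameters, quotes, tabs, and other special characters in the data never touch the SQL text.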
  • It would be quite nice to have a C-based converter, that could be used to replace databases within an hour... for apps that support both the BEFORE and AFTER database types.. without using ODBC.

    What I have in mind is mysql, pgsql, oracle 9i, msaccess, db2, firebird, minisql.

    Wonder if it's possible.
    • msaccess (Score:2, Interesting)

      You could give me a hand with this [sourceforge.net] if you want. (a C++ library for reading access databases).

      The project's been dormant for a while (work makes my head hurt too much for real development).
  • by Sembiance ( 124190 ) on Monday April 14, 2003 @08:43AM (#5727389) Homepage
    This is totally worthless.
    Do mods just let anything with the words 'PHP' and/or 'MySQL' make it on the website?

    The article is less than 2 screen pages long, it's not much more than a code dump, and it's totally hardcoded for a specific and individual database table.

    It also only covers Windows installations of PHP, and any person who knows that they need to move from one database to another, and what PHP is, is smart enough to do what this author wrote.

    I don't diss the author for this; it looks as if he is just new to computers and doesn't know any better.
    But geez, if this is the crap that we allow on slashdot now, I'm just gonna start submitting articles on 'How cool Google is'
  • Java (Score:3, Interesting)

    by FortKnox ( 169099 ) on Monday April 14, 2003 @10:26AM (#5728248) Homepage Journal
    Java would have been a much better language if you wanted the project to be reusable. JDBC means you have the same code for every type of DB. So you could have a 'read all from DB' set of code and a 'write all to DB' set of code, then simply plug the two DBs into an XML config file, and voila, you have exactly what is needed for any DB with JDBC drivers (which is everything except the extremely rare and obscure).

    That is something worth writing an article about. Not just one very specific case.
    • Re:Java (Score:3, Funny)

      by yintercept ( 517362 )
      Java would have been a much better language if you wanted the project to be reusable.

      Gosh, with the Java/XML combo...I am surprised that there still is such a thang as a database. Dang, if you designed the blasted thang in UML with Java/XML there wouldn't even be a question of reusability...cause you would be in computer nirvana.

      PS: I don't code...just read the trade journals and play foosball.
      Exactly. And I think you should have pointed out, for those unfortunate programmers who have never used JDBC, that you can query the schema in a vendor-independent way, then create the schema in another database that same way, so that from JDBC the two datasources would be indistinguishable except for constraints, views, and stored procedures.
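The schema-copying step in this thread can be sketched outside JDBC too, with caveats: Python's DB-API has no standard metadata call (JDBC's `DatabaseMetaData` does this portably), so in this sketch sqlite's `sqlite_master` catalog table stands in for that metadata API.

```python
def clone_schema(src, dst):
    """Replay the source's table DDL on the target. JDBC would use
    DatabaseMetaData; here sqlite's catalog table stands in for it."""
    for (ddl,) in src.execute(
            "SELECT sql FROM sqlite_master WHERE type = 'table' AND sql IS NOT NULL"):
        dst.execute(ddl)
    dst.commit()
```

Once the tables exist on both sides, a generic row-copy loop finishes the transfer.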
  • Testing and Whatnot (Score:4, Informative)

    by Inexile2002 ( 540368 ) on Monday April 14, 2003 @10:38AM (#5728359) Homepage Journal
    Despite some of the criticisms above it's nice to see stuff like this. As part of my job I have to occasionally go into companies and review database conversions after the fact to confirm that they did everything correctly.

    As obvious as the technique used above is to some /.ers, DB conversions are not always obvious to the people who actually do them IRL. I've seen some of the most horrific improvisations involving a third database as a data warehouse or worse, the process done manually with SQL dumping data into Notepad which is then copy/pasted into new SQL.

    The one thing though - testing. Post conversion testing is essential unless you were doing all this for shits and giggles. If you can't show someone through rigorous testing that your conversion worked, no responsible person out there should rely on the new DB. (Assuming they were relying on the old one.)
  • Moo (Score:3, Informative)

    by Chacham ( 981 ) on Monday April 14, 2003 @10:43AM (#5728400) Homepage Journal
    If this is something you do a lot, get SQL Server DTS. It does this beautifully, as well as many other tasks.

      If this is something you do a lot, get SQL Server DTS. It does this beautifully, as well as many other tasks.

      Not to mention that it's pretty much free. The SQL Server Tools install is free if you have SQL Server installed anywhere. There's no licensing anyway, and the DTS stuff is redistributable whether you have one or not...
  • Octopus (Score:4, Informative)

    by grugruto ( 530141 ) on Monday April 14, 2003 @11:10AM (#5728625) Homepage
    The Enhydra Octopus project [enhydra.org] seems to be the right tool to do this and you can specify data transformations in an XML file.

    You should check it out, it's open source.

    • I would note that XML is not a very compact format because it repeats the tags or field names over and over again for each record. More compact formats state the column list (and order) once as a table header, and the data ordering indicates which column it belongs with.

      On another note, one thing about writing xfer routines is that SQL lacks a hybrid statement that says something like, "if this record is already there, then update it, else insert a new one". Instead you have to check for each record. It n
  • by bwt ( 68845 ) on Monday April 14, 2003 @11:50AM (#5728946)
    Here's another example from real world use that shows that MySQL is a toy compared to a real database like Oracle. You shouldn't have to write that much code to freaking load data. For industrial strength uses the method given will be horribly slow because it doesn't use bind variables. This results in each INSERT statement being different and having to be parsed separately by the RDBMS. SLOW!

    Frankly, even the overhead of having to construct the INSERT sql string is waste. You also don't want to maintain the indexes in the target table for each row update. MySQL doesn't have transactions, so you don't have to worry about commit-frequency, but if your load stops in the middle somewhere, I'm not sure what you do.

    Oracle provides a loader utility called sql*loader that eliminates the overhead of the per-row maintenance. It has a mode called "Direct Load" which can bypass trigger processing and directly write binary datablock output. This is the fastest way to load data. Of course bypassing triggers is of no interest to MySQL users because MySQL doesn't have triggers, but if it did you'd have another thing to worry about with loading data into MySQL.

    As an alternative to sql*loader, you could use external tables or Oracle Generic Connectivity to create an Oracle table whose data is supplied by a flat file or an ODBC connection. Then you would type
    INSERT INTO target_table ([[field_list]])
    SELECT [[field_list]] FROM external_table;
    or (faster)
    CREATE TABLE target_table AS SELECT * FROM external_table;
    Both of which would blow away the proposed method speed-wise.
    • Not that you're on the wrong track -- Oracle is more powerful than MySQL, there's no questioning that assertion -- but the functionality you describe exists in MySQL as well. For an equivalent to Oracle's sql*loader, MySQL offers the command line tool mysqlimport, or the SQL command LOAD DATA INFILE. I believe there are also equivalents to the CREATE/INSERT ... SELECT. For all the advantages that Oracle has over MySQL, these aren't among them :-)

      • Well, mysqlimport or LOAD DATA INFILE seems like it would be a MUCH better MySQL solution than building an INSERT for each row. Why the author of the original article did not use this method is rather perplexing.

        However, I don't see how the CREATE TABLE AS SELECT or INSERT AS SELECT could possibly work here unless MySQL has a way to link a table from an external source.
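The bulk-load path this thread recommends amounts to writing a delimited file instead of a stream of per-row INSERTs. A Python sketch, with sqlite3 standing in for the source database; the output is the tab-separated format `LOAD DATA INFILE` and `mysqlimport` expect by default.

```python
import csv

def dump_tsv(conn, table, path):
    """Write a table as tab-separated text - the kind of file MySQL's
    LOAD DATA INFILE (or mysqlimport) can bulk-load far faster than
    parsing one INSERT statement per row."""
    cur = conn.execute(f"SELECT * FROM {table}")
    with open(path, "w", newline="") as f:
        csv.writer(f, delimiter="\t").writerows(cur)
```

On the MySQL side the load is then a single statement per table rather than one parse per row, which is where the speedup comes from.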
  • access2sql (Score:1, Informative)

    by Anonymous Coward
  • Delphi! (Score:2, Insightful)

    by laa ( 457196 )
    I wrote a small app for that once. It has basically two comboboxes containing all ODBC DSNs found on the system. Then you choose from which DSN to which DSN and click copy - regardless of database vendor (as long as they have ODBC drivers, that is). Just to show off, it gives the user a list of all available tables, so that he/she may copy only a subset.

    Coding it was a piece of cake - the Borland Database Engine has its upsides every once in a while (but I never thought I'd admit that)!
  • I'm surprised nobody's mentioned MySQL-Front [sstienemann.de].

    I've used it before on importing an MS Access db to MySQL with 3000 rows and 50+ columns. Worked like a charm.

    It can also import any ODBC connection. I've never had any problems with it.
