Sending errors from psql to error file

Lennart Sorensen lsorense-1wCw9BSqJbv44Nm34jS7GywD8/FfD2ys at public.gmane.org
Thu Aug 12 18:33:42 UTC 2004


On Thu, Aug 12, 2004 at 10:24:06AM -0400, Devin Whalen wrote:
> I am trying to migrate a client from one database to another.  Basically
> we designed a web application for them using PostgreSQL but we have made
> many changes to the design of our application since version 1.  Now they
> want to upgrade to our new version.  So basically I have to pg_dump
> their current data and then import it into our new schema.  Now, of
> course I realize that there are going to be errors.  But they have a lot
> and I mean a LOT of data.  I don't want to have to sit there and watch
> the import go by and monitor every insert, I want to run a command and
> then look in a file for any errors after the import is complete.  I
> tried this command but it didn't work:
> 
> gunzip -c dataInserts.sql.gz | psql dbname -U username | grep "ERROR:*"
> > import_errors
> 
> (obviously dbname and username have been changed to protect the innocent
> ;))
> 
> Any help is appreciated.

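For what it's worth, the pipe above most likely "didn't work" because psql
writes error messages to stderr, not stdout, so grep on the other end of the
pipe never sees them.  Redirecting stderr into the pipe should fix it.  A
sketch (dbname and username are placeholders; fake_psql is just a stand-in
so the principle can be shown without a live database):

```shell
# psql prints ERROR/NOTICE messages on stderr, not stdout, so a plain
# pipe into grep sees nothing.  Merge stderr into the pipe first:
#
#   gunzip -c dataInserts.sql.gz | psql dbname -U username 2>&1 \
#       | grep 'ERROR' > import_errors
#
# The same principle, demonstrated with a stand-in for psql:
fake_psql() {
    echo 'INSERT 0 1'                  # normal output goes to stdout
    echo 'ERROR: duplicate key' >&2    # error messages go to stderr
}

# 2>&1 merges stderr into stdout before grep filters it
fake_psql 2>&1 | grep 'ERROR' > import_errors
cat import_errors
```

If you only want the errors and don't care about the normal output, plain
`2> import_errors` on the psql command (no grep at all) also works.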
Not that it helps you here directly, but here is what I have done in the
past on a system with a PostgreSQL backend that had new features added
fairly often:

Have a version field in the database (in this case a table named Version
with a single column named Version holding one integer row).  The
developers had a Perl wrapper around psql which was used to make changes
to the development test DB.  It logged any command that modified the
database (inserts, deletes, table alterations, etc.) unless explicitly
told not to, and could also be explicitly told to log something.  With a
log of every psql command used to modify the development database, and a
new version stamp separating each release in that log, an upgrade could
look at the current db version, skip to that version tag in the log, and
apply the changes from there up to the desired version.  That updated the
tables automatically, adding new fields, setting defaults, and so on.  It
worked great, and I suspect the system is still in use.
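The scheme above can be sketched roughly as follows.  All names here are
hypothetical, and a plain file stands in for the Version table so the idea
can be shown without a live database; in real use each numbered section of
the log would be applied with psql and the Version row updated afterwards:

```shell
# Hypothetical sketch: each logged batch of schema changes is a numbered
# .sql file, and the script applies every file newer than the version
# currently recorded (here in a file; really in the Version table).
mkdir -p migrations
echo 'ALTER TABLE t ADD COLUMN a integer;' > migrations/002.sql
echo 'ALTER TABLE t ADD COLUMN b integer;' > migrations/003.sql
echo 1 > current_version                # version the client DB is at

cur=$(cat current_version)
for f in migrations/*.sql; do
    v=$(basename "$f" .sql)
    v=$((10#$v))                        # treat 002 as decimal 2
    if [ "$v" -gt "$cur" ]; then
        echo "applying $f"              # real life: psql dbname -f "$f"
        cur=$v
    fi
done
echo "$cur" > current_version           # real life: UPDATE Version ...
```

Running this against a database at version 1 applies 002.sql and 003.sql
and records version 3, so re-running it is a no-op.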

This of course only worked because I insisted we needed a way to upgrade
the database whenever changes were made.  I hate having to reformat data
to insert it into a new database.  The logging wrapper around psql turned
out to be well worth the few hours it took to write.
The upgrade of the database even got integrated into the postinst script
of the debian package of the website, which made upgrading different
client systems much simpler when they wanted new features.

Lennart Sorensen
--
The Toronto Linux Users Group.      Meetings: http://tlug.ss.org
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://tlug.ss.org/subscribe.shtml

More information about the Legacy mailing list