---

Both InnoDB and PostgreSQL - as well as many other databases - use a technique called multi-version concurrency control (MVCC) to provide transaction isolation: transactions should not see the work of other, uncommitted transactions. MVCC means that, when a row is updated, the database stores both the old and new versions of the row.

---

Saturday, November 27, 2010

PostgreSQL Backup Script - Bash

If you are using PostgreSQL as the back-end database server for your work, then you obviously need a backup solution, so here is my attempt at a small automated backup job as a Bash shell script:

#!/bin/bash
# Where backups are stored; each run writes into a dated sub-directory
DIR=/data/backups/db_backup
DT=$(date +%Y-%m-%d)
mkdir -p "$DIR/$DT"
# All databases except the templates; the grep strips psql's header,
# separator and footer lines
LIST=$(psql -l | awk '{ print $1 }' | grep -vE '^-|^List|^Name|template[01]|^\(')
for d in $LIST
do
  pg_dump "$d" | gzip -c > "$DIR/$DT/$d.gz"
done
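If parsing the `psql -l` listing ever breaks (its layout varies a little between server versions), an alternative sketch is to ask the catalog directly. This is not what the script above does; it assumes the same passwordless access the script already relies on:

```shell
# Alternative to the awk/grep pipeline: list databases from pg_database,
# excluding the templates. -A = unaligned output, -t = tuples only.
LIST=$(psql -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate;")
```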


Hope this works for you.

Note: If you have compiled your PostgreSQL from source and installed it in a non-standard directory, you may need to add PostgreSQL's "bin" directory to the PATH environment variable.
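As a sketch of how to schedule the script and apply the PATH note above (the script location /path/to/pg_backup.sh is hypothetical -- substitute wherever you saved it):

```shell
# Inside the script, for a source-built install (the prefix is an
# assumption, matching the common /usr/local/pgsql layout):
export PATH=/usr/local/pgsql/bin:$PATH

# crontab -e entry: run the backup every night at 01:30
# 30 1 * * * /path/to/pg_backup.sh >> /var/log/pg_backup.log 2>&1
```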

7 comments:

  1. I would recommend this change:

    DT=`date +%Y-%m-%d`

    which will ensure that the backup directories are automatically sorted in date order by ls.

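The suggestion above works because YYYY-MM-DD strings sort lexicographically in the same order as chronologically, so plain `ls` (which sorts alphabetically) lists the backup directories by date. A quick illustration with hypothetical dates:

```shell
# YYYY-MM-DD names sort the same alphabetically as chronologically.
printf '%s\n' 2010-11-27 2010-02-03 2009-12-31 | sort
# -> 2009-12-31
#    2010-02-03
#    2010-11-27
```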
  2. Yes, that's right... it's a small change that needed doing... thanks for your input.

  3. This is what I use for my backup - but it requires manually specifying the username, password and database:

    #!/bin/bash

    # Credentials are hard-coded; pg_dump picks them up from the environment
    export PGUSER=root
    export PGPASSWORD=pass

    tar_zip()
    {
        # Rotate the previous schema dump: drop the .old copy and keep
        # the current one as .old
        if [[ -e /var/db_arhiv/$1_s.old ]]
        then
            rm -f /var/db_arhiv/$1_s.old
        fi
        if [[ -e /var/db_arhiv/$1_s ]]
        then
            mv /var/db_arhiv/$1_s /var/db_arhiv/$1_s.old
        fi

        # Same rotation for the data dump
        if [[ -e /var/db_arhiv/$1_d.old ]]
        then
            rm -f /var/db_arhiv/$1_d.old
        fi
        if [[ -e /var/db_arhiv/$1_d ]]
        then
            mv /var/db_arhiv/$1_d /var/db_arhiv/$1_d.old
        fi

        # First dump SCHEMA only, commenting out the DROP/CREATE SCHEMA lines
        /usr/local/pgsql/bin/pg_dump -c -s -F p -S root $1 | sed -e 's/DROP SCHEMA public;/-- DROP SCHEMA public;/' -e 's/CREATE SCHEMA public;/-- CREATE SCHEMA public;/' > /var/db_arhiv/$1_s
        # Then dump DATA only, with triggers disabled during restore
        /usr/local/pgsql/bin/pg_dump -a -F p -S root --disable-triggers $1 > /var/db_arhiv/$1_d

        # Pack both dumps into a 7z-compressed tar archive
        if [[ -e /var/db_arhiv/$1.tar.7z ]]
        then
            rm -f /var/db_arhiv/$1.tar.7z
        fi
        tar cf - /var/db_arhiv/$1_s /var/db_arhiv/$1_d | 7za a -si$1.tar -bd /var/db_arhiv/$1.tar.7z > /dev/null
    }

    tar_zip 'climbing'

  4. Good one... I just wanted to avoid putting if/else or any extra logic in mine...

  5. Is none of you doing a "hot backup" of the data files and WAL files?

  6. @Odila: How do you know that? I am rsyncing the data directory of PostgreSQL to another server (a hot standby at a different geographical location). The setup uses streaming replication, so the above script is my third approach to backup, for an instant remedy.

  7. @Odile:
    Check this out:
    http://linuxhow-tos.blogspot.in/2010/10/postgresql-90-replication-step-by-step_14.html
