Setup PgBouncer in GCP Ubuntu for Scaling Your Django App


Prerequisites

You have a server with the following configuration:

  1. 16GB RAM
  2. 12 vCPUs
  3. Ubuntu >= 20.04
  4. Basic familiarity with the nano editor

Step 1: Installing PgBouncer

First things first, we'll start by installing PgBouncer on the server. Run the following commands in your Ubuntu console.

sudo apt update
sudo apt install pgbouncer

Step 2: Configure PgBouncer

The default configuration file is located at /etc/pgbouncer/pgbouncer.ini. We'll need to update some settings in this file.

sudo nano /etc/pgbouncer/pgbouncer.ini

This will open the .ini file in the nano editor. Find and update these settings (adjust them to your requirements):

[databases]
# Point to your Postgres database(s)
* = host=localhost                          # which database(s) to pool; * enables pooling for every database on this host

[pgbouncer]
listen_addr = 127.0.0.1                     # use * to accept connections from any IP address
listen_port = 6432                          # we'll connect our Django app to this port
auth_type = md5                             # Important: for production
auth_file = /etc/pgbouncer/userlist.txt     # we'll put auth credentials in this file
pool_mode = transaction                     # Important: recommended for Django
max_client_conn = 100                       # max client connections PgBouncer will accept
default_pool_size = 20                      # max active DB connections per database/user pair
reserve_pool_size = 5                       # extra connections if the pool is exhausted
reserve_pool_timeout = 5                    # seconds a client waits before the reserve pool is used
server_idle_timeout = 300                   # idle server connections are closed after 5 minutes
admin_users = postgres                      # to allow other PG users, list them here, comma separated
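
When picking these numbers, two constraints matter: max_client_conn should cover every connection your app processes can open at once, and the pool sizes must stay below Postgres's max_connections. Here's a rough sanity-check sketch; the Gunicorn worker/thread counts are illustrative assumptions, not values from this guide:

# Rough PgBouncer pool-sizing sanity check (illustrative assumptions)
GUNICORN_WORKERS = 12        # assumption: one worker per vCPU
THREADS_PER_WORKER = 4       # assumption
PG_MAX_CONNECTIONS = 100     # max_connections in postgresql.conf

max_client_conn = 100
default_pool_size = 20       # note: this limit is per database/user pair
reserve_pool_size = 5

# Each busy worker thread holds one client connection to PgBouncer.
assert GUNICORN_WORKERS * THREADS_PER_WORKER <= max_client_conn, "raise max_client_conn"

# PgBouncer should never need more server connections than Postgres allows.
assert default_pool_size + reserve_pool_size < PG_MAX_CONNECTIONS, "lower the pool sizes or raise max_connections"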

Step 3: Configure PgBouncer Users

PgBouncer uses a separate auth file for users. Create /etc/pgbouncer/userlist.txt:

sudo nano /etc/pgbouncer/userlist.txt

In this file, we'll keep our credentials in this format:

"youruser" "md5<md5_password_hash>"

Getting MD5 Hash

You can get the MD5 hash from the Postgres pg_authid catalog:

sudo -u postgres psql
select rolname, rolpassword from pg_authid where rolname='yourusername';

The result will show your user's MD5 password hash. Copy it.
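
Alternatively, you can compute the hash yourself: a Postgres-style md5 auth string is the MD5 of the password concatenated with the username, prefixed with the literal text md5. A minimal Python sketch, using placeholder credentials:

import hashlib

# Placeholder credentials, replace with your real Postgres username and password
username = "postgres"
password = "your_password"

# Postgres md5 format: "md5" + md5(password + username)
md5_hash = "md5" + hashlib.md5((password + username).encode()).hexdigest()
print(md5_hash)  # paste this value into /etc/pgbouncer/userlist.txt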

Now open the userlist file again.

sudo nano /etc/pgbouncer/userlist.txt

If your username is postgres and your password hash is md50104bc91d3201d165u4cxbf9s7eb3f9d, then the file content will be as follows:

"postgres" "md50104bc91d3201d165u4cxbf9s7eb3f9d"

Step 4: Enable PgBouncer Service

sudo systemctl enable pgbouncer
sudo systemctl start pgbouncer
sudo systemctl restart pgbouncer

To check the status of PgBouncer:

sudo systemctl status pgbouncer

Step 5: Test PgBouncer

psql -h 127.0.0.1 -p 6432 -U postgres pgbouncer

It'll ask for your password; enter it. If it logs you in, run the following commands one by one:

SHOW POOLS;
SHOW CLIENTS;
SHOW SERVERS;
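
You can also test the pooled connection from Python. A minimal sketch, assuming psycopg2 is installed and using placeholder database details:

import psycopg2

# Placeholder connection details, replace with your own
conn = psycopg2.connect(
    host="127.0.0.1",
    port=6432,               # PgBouncer's port, not Postgres's 5432
    dbname="your_db",
    user="postgres",
    password="your_password",
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())    # (1,) means the pooled connection works
conn.close()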

Step 6: Update Django Connection

In your Django database settings, change the port to 6432, because PgBouncer listens there instead of Postgres's default 5432. Also set CONN_MAX_AGE to 0: with transaction pooling, PgBouncer manages connection reuse, so Django should not hold persistent connections.

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'your_db',
        'USER': 'your_user',
        'PASSWORD': 'your_password',
        'HOST': '127.0.0.1',   # PgBouncer host/port
        'PORT': '6432',        # Important: PgBouncer default port
        'CONN_MAX_AGE': 0,     # Important: let PgBouncer manage connection reuse
    }
}
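
To confirm Django actually reaches Postgres through PgBouncer, a quick check from the Django shell (python manage.py shell) is enough. A minimal sketch, assuming the settings above are in place:

# Run inside `python manage.py shell`
from django.db import connection

with connection.cursor() as cursor:
    cursor.execute("SELECT 1")
    print(cursor.fetchone())  # (1,) confirms the app connects via PgBouncer on port 6432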

(Bonus) Proposed PostgreSQL Settings

Suggested postgresql.conf settings

sudo nano /etc/postgresql/12/main/postgresql.conf
# Memory
shared_buffers = 4GB                    # (25% of 16 GB RAM)
effective_cache_size = 10GB             # (PG uses this for planner estimates; roughly OS cache + shared_buffers)
work_mem = 64MB                         # (per sort/hash per operation; conservative default that avoids exploding memory; increase if queries need big sorts and concurrency is low)
maintenance_work_mem = 512MB            # (for VACUUM/CREATE INDEX)
max_connections = 100

# WAL & checkpoints
wal_buffers = -1
wal_compression = on                    # Compresses data before writing to the WAL files.
checkpoint_timeout = 15min              # How often Postgres forces a checkpoint (i.e., writes all dirty data in memory to disk).
max_wal_size = 4GB                      # (For heavy write)
min_wal_size = 1GB
checkpoint_completion_target = 0.9      # How gradually Postgres performs a checkpoint within its time window.

# Autovacuum (important for write-heavy tables)
autovacuum = on                         # (keep it enabled)
autovacuum_max_workers = 6              # How many autovacuum processes can run at once.
autovacuum_naptime = 1min               # How often autovacuum checks for tables that need vacuuming.
autovacuum_vacuum_cost_limit = 4000     # Controls how “aggressively” autovacuum runs.

## Parallelism:
max_parallel_workers_per_gather = 2
max_worker_processes = 8
max_parallel_workers = 8
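
After saving, restart PostgreSQL (for example, sudo systemctl restart postgresql) so the new values take effect. You can then confirm what the server is actually using; a minimal sketch, assuming psycopg2 and placeholder credentials, connecting directly to Postgres on 5432:

import psycopg2

# Connect directly to Postgres (not PgBouncer) with placeholder credentials
conn = psycopg2.connect(host="127.0.0.1", port=5432, dbname="your_db",
                        user="postgres", password="your_password")
with conn.cursor() as cur:
    for setting in ("shared_buffers", "work_mem", "max_connections", "autovacuum"):
        cur.execute("SHOW " + setting)   # SHOW takes a setting name, not a bind parameter
        print(setting, "=", cur.fetchone()[0])
conn.close()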