Hi,
We have a DigitalOcean managed Postgres database with a large number of tables. Running a normal pg_dump crashes because it takes a lock on every table it dumps, exceeding whatever max_locks_per_transaction the managed cluster is configured with, which I do not believe we can change ourselves:
pg_dump: WARNING: out of shared memory
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
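One workaround I have been considering is dumping table-by-table, so each pg_dump run only needs a handful of locks. A rough sketch only (DB_URL is a placeholder for our connection string, and note that per-table dumps give up the single consistent snapshot a one-shot pg_dump provides):

#!/usr/bin/env bash
# Sketch: dump each user table in its own pg_dump run so no single
# transaction needs more than a few locks. DB_URL is a placeholder.
# Caveat: this forfeits the consistent snapshot of a single pg_dump.
DB_URL="postgres://doadmin:PASSWORD@host:25060/defaultdb?sslmode=require"

psql "$DB_URL" -At -c \
  "SELECT schemaname || '.' || tablename
     FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema');" |
while IFS= read -r tbl; do
  pg_dump --table="$tbl" --format=custom \
    --file="dump_${tbl//./_}.pgdump" "$DB_URL"
done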
We cannot use pg_basebackup for backups instead, because the configured database user/host does not have permission to connect as a replication connection.
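(For anyone who wants to verify this on their own cluster, something like the query below should show whether the role has the REPLICATION attribute that pg_basebackup needs; DB_URL as above.)

# Check whether the current role may open replication connections;
# pg_basebackup requires the REPLICATION attribute (or a superuser).
psql "$DB_URL" -c \
  "SELECT rolname, rolreplication FROM pg_roles WHERE rolname = current_user;"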
So is it literally impossible, using the standard Postgres tools, to back up an entire DigitalOcean managed database once you exceed a certain number of tables?
Thanks
Hi @nigelgott,
You can open a support ticket with us to increase max_locks_per_transaction, and we will be able to help you with this.
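As a rough way to estimate the value to request (a heuristic only, since the shared lock table holds roughly max_locks_per_transaction * max_connections entries and pg_dump wants about one lock per table), you can compare the current setting against your table count; DB_URL stands for your connection string:

# Rough sizing heuristic: compare the current lock budget
# against the number of user tables pg_dump will need to lock.
psql "$DB_URL" -c "SHOW max_locks_per_transaction;"
psql "$DB_URL" -c "SHOW max_connections;"
psql "$DB_URL" -c \
  "SELECT count(*) AS user_tables
     FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema');"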
Regards, Rajkishore