By nigelgott
Hi,
We have a managed Postgres database on DigitalOcean with a large number of tables. Running a normal pg_dump crashes because it acquires a lock per table and exceeds whatever max_locks_per_transaction the managed cluster is configured with, a setting I do not believe we can change ourselves:
```
pg_dump: WARNING: out of shared memory
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
```
We cannot use pg_basebackup for backups instead, because the configured database user/host does not have permission to make a replication connection.
So is it literally impossible, via normal Postgres commands, to back up an entire DigitalOcean managed database once you go over a certain number of tables?
Thanks
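In case it helps others hitting the same limit: since pg_dump takes an ACCESS SHARE lock on every table it dumps, splitting the dump into smaller runs keeps each run's lock count under the limit. Below is a minimal sketch, assuming a `DATABASE_URL` connection string (a placeholder for your DigitalOcean connection details) and dumping one schema per run. Note that each run takes its own snapshot, so the pieces are not mutually consistent if writes happen in between.

```bash
#!/usr/bin/env bash
# Sketch: dump one schema per pg_dump run so each run acquires
# far fewer ACCESS SHARE locks than a single full-database dump.
# DATABASE_URL is a placeholder for your managed-cluster connection string.
set -euo pipefail

# List all user schemas (skip system schemas).
schemas=$(psql "$DATABASE_URL" -At -c \
  "SELECT nspname FROM pg_namespace
   WHERE nspname NOT LIKE 'pg\_%' AND nspname <> 'information_schema';")

for schema in $schemas; do
  # Custom format (--format=custom) dumps restore with pg_restore.
  pg_dump "$DATABASE_URL" --schema="$schema" \
    --format=custom --file="backup_${schema}.dump"
done
```

If a single schema still holds too many tables, the same idea applies with `--table` batches instead of `--schema`.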
Accepted Answer
Hi @nigelgott,
You can open a support ticket with us to increase `max_locks_per_transaction`, and we will be able to help you with this.
Regards, Rajkishore
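As a side note for verifying the change once support applies it: the shared lock table can hold roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) locks in total, and all three settings are visible from any ordinary session. The connection string below is again a placeholder.

```bash
# Placeholder connection string; substitute your managed-database URL.
# Total lock-table capacity is approximately:
#   max_locks_per_transaction * (max_connections + max_prepared_transactions)
psql "$DATABASE_URL" -c "SHOW max_locks_per_transaction;"
psql "$DATABASE_URL" -c "SHOW max_connections;"
psql "$DATABASE_URL" -c "SHOW max_prepared_transactions;"
```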