I am trying to load test by sending 3,000 requests all at the same time; for each request, the App Platform app increments a Redis key by 1.
I understand that 3,000 simultaneous requests is a lot, but looking at Insights for both the database and the App Platform, neither RAM nor CPU ever gets high enough for things to throttle or bog down. I would love support with this!
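For what it's worth, Redis INCR is atomic on the server side, so 3,000 concurrent requests should produce exactly 3,000 increments on the key. The same property can be sketched locally in Python, with a lock standing in for the atomicity Redis provides (illustrative only, not the App Platform code):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

REQUESTS = 3000          # simulated concurrent requests
counter = 0              # stand-in for the Redis key
lock = threading.Lock()  # Redis provides this atomicity server-side for INCR

def handle_request():
    """Stand-in for the request handler that runs INCR on the key."""
    global counter
    with lock:
        counter += 1

with ThreadPoolExecutor(max_workers=100) as pool:
    for _ in range(REQUESTS):
        pool.submit(handle_request)

print(counter)  # 3000: every increment is counted exactly once
```

If the key ends up lower than the request count in a real test, the missing requests were most likely dropped or timed out before reaching Redis rather than lost by the increment itself.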
I am trying to test failover behavior with the FAILOVER command. I am connecting to the master with redli, but unfortunately neither FAILOVER nor other commands requiring the @admin ACL are recognized:
> FAILOVER
(error) ERR unknown command 'FAILOVER', with args beginning with:
This behavior leads me to conclude that the default user does not have enough permissions to execute such commands.
I tried to fiddle with CLIENT PAUSE, ACL, and even doctl database user, but no luck. Am I missing something? What is the recommended way to simulate a failover?
I’m using a hosted MySQL 8 DB and a hosted Redis 7 DB, but I’m wondering if I’ve “over-scaled” them for the need I have.
Looking at the Insights, they’re sitting comfortably around 60% memory usage but rarely go above 50% CPU usage.
Database Instance (4 GB RAM / 2 vCPUs):
Redis Instance (4 GB RAM / 2 vCPUs):
Would it be fine to scale them down, or should I keep them where they are? Both time intervals are set to 7 days to show an average of the load I’m getting on the instances.
I’m finding it difficult to settle on an approach that won’t incur too much cost and is also flexible enough for scaling.
I have a few questions that may shed more light on my problem and help with finding a solution.
PS: I’m building a multi-database, multi-tenant application, looking to start small and scale over time.
I have looked at the Logs panel, but it is not useful. Thanks.
This thread is an update to my old one, found here: https://www.digitalocean.com/community/questions/woocommerce-cdn-cache-advice-spaces-cdn-for-lon1-droplet-uk-only-target-audience-or-no-cdn-just-cache
This is my current setup:
This new setup has been running for nearly a month, but the other day it crashed.
I would like to prevent this from happening again!
Can someone please advise? What more can I do?
The server crashed while I was editing some products on a WooCommerce site. The site doesn’t get much traffic, so I’m not sure why it did.
Here’s the error logs from when the server crashed:
/var/log/apache2/error.log
[Fri Jul 28 14:51:47.238660 2023] [mpm_event:error] [pid 720372:tid 140080325052288] AH10159: server is within MinSpareThreads of MaxRequestWorkers, consider raising the MaxRequestWorkers setting
[Fri Jul 28 14:51:58.512767 2023] [mpm_event:error] [pid 720372:tid 140080325052288] AH00484: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting
[Fri Jul 28 14:52:08.708321 2023] [proxy_fcgi:error] [pid 928099:tid 140080063239744] [client 35.241.215.102:0] AH01071: Got error 'PHP message: RedisException: read error on connection to 127.0.0.1:6379 in /var/www/html/wp-content/object-cache.php:2055\nStack trace:\n#0 /var/www/html/wp-content/object-cache.php(2055): Redis->mget()\n#1 /var/www/html/wp-content/object-cache.php(208): WP_Object_Cache->get_multiple()\n#2 /var/www/html/wp-includes/functions.php(7009): wp_cache_get_multiple()\n#3 /var/www/html/wp-includes/taxonomy.php(4015): _get_non_cached_ids()\n#4 /var/www/html/wp-includes/taxonomy.php(3685): _prime_term_caches()\n#5 /var/www/html/wp-includes/category-template.php(1295): get_object_term_cache()\n#6 /var/www/html/wp-content/plugins/woocommerce/includes/wc-term-functions.php(133): get_the_terms()\n#7 /var/www/html/wp-content/plugins/woocommerce/includes/data-stores/class-wc-product-data-store-cpt.php(465): wc_get_object_terms()\n#8 /var/www/html/wp-content/plugins/woocommerce/includes/data-stores/class-wc-product-data-store-cpt.php(186): WC_Product_Data_Store_CPT->read_attributes()\n#9 /var/www/html/wp-content/plugins/wooco...'
/var/log/php8.0-fpm.log
[27-Jul-2023 13:03:23] WARNING: [pool www] server reached pm.max_children setting (45), consider raising it
[27-Jul-2023 13:38:20] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 13 idle, and 35 total children
[28-Jul-2023 11:06:26] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 12 idle, and 32 total children
[28-Jul-2023 11:06:27] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 9 idle, and 35 total children
[28-Jul-2023 14:50:26] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 8 idle, and 32 total children
[28-Jul-2023 14:50:40] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 14 idle, and 44 total children
[28-Jul-2023 14:50:41] WARNING: [pool www] server reached pm.max_children setting (45), consider raising it
[28-Jul-2023 14:58:47] WARNING: [pool www] server reached pm.max_children setting (45), consider raising it
I’ve made some adjustments to the server since the crash. I’ll share them here, but if someone has a better idea than I do, please advise. Are the new settings good enough to prevent future crashes, or do I need to make the server more resilient?
Current settings:
/etc/apache2/mods-enabled/mpm_event.conf
<IfModule mpm_event_module>
StartServers 2
MinSpareThreads 25
MaxSpareThreads 75
ThreadLimit 64
ThreadsPerChild 25
MaxRequestWorkers 150
MaxConnectionsPerChild 0
</IfModule>
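As a sanity check on those numbers: with mpm_event, the number of child processes Apache runs at full load is MaxRequestWorkers divided by ThreadsPerChild, rounded up. A quick sketch of that arithmetic (not Apache’s own code):

```python
def apache_children_at_full_load(max_request_workers: int,
                                 threads_per_child: int) -> int:
    """mpm_event child processes needed to serve MaxRequestWorkers connections."""
    # Ceiling division: a partially used child still counts as a whole process.
    return -(-max_request_workers // threads_per_child)

# The settings above: MaxRequestWorkers 150, ThreadsPerChild 25
print(apache_children_at_full_load(150, 25))  # 6 child processes
```

So with these settings Apache tops out at 6 children serving 150 concurrent connections, which is the limit the AH00484 log line above says was reached before the change.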
I’ve tried applying similar max_children settings found here: https://gist.github.com/josuamarcelc/24b279ee3c80612bf0316c03c379de71
/etc/php/8.0/fpm/pool.d/www.conf
pm = dynamic
; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI. The below defaults are based on a server without much resources. Don't
; forget to tweak pm.* to fit your needs.
; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand'
; Note: This value is mandatory.
pm.max_children = 45
; The number of child processes created on startup.
; Note: Used only when pm is set to 'dynamic'
; Default Value: (min_spare_servers + max_spare_servers) / 2
pm.start_servers = 15
; The desired minimum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.min_spare_servers = 15
; The desired maximum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.max_spare_servers = 25
; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
;pm.process_idle_timeout = 10s;
; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries. For
; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
; Default Value: 0
pm.max_requests = 500
I’ve also raised the PHP memory_limit to 256 MB.
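One rule of thumb (not an official formula) for pm.max_children is the RAM left over after the OS and other services, divided by the average PHP-FPM worker size you see in ps or top. A small calculator with made-up numbers for a 4 GB Droplet; your measured figures will differ:

```python
def fpm_max_children(total_ram_mb: int, reserved_mb: int,
                     avg_worker_mb: int) -> int:
    """RAM left after OS/Apache/Redis/MySQL, divided by a typical worker size."""
    return (total_ram_mb - reserved_mb) // avg_worker_mb

# e.g. 4 GB Droplet, ~1 GB reserved for other services, ~60 MB per FPM worker
print(fpm_max_children(4096, 1024, 60))  # 51
```

The point of the calculation is to ensure pm.max_children workers can all be busy without pushing the server into swap; raising the limit past what RAM supports trades "max_children reached" warnings for a much worse out-of-memory crash.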
I’m not sure if this is an issue, but I will list it anyway. The shop archive pages load 16 products by default, each with approximately 5 thumbnails. The images are lazy-loaded, so the server isn’t handling 16×5 requests at once, but if several users are browsing the archives at the same time, could loading them all be too much for the server?
Thanks for any help!!
All advice is appreciated.
Is there any way to change it? Can I change it using redli?
I need to create new user(s) for a Redis cluster, but the API documentation states that “User management is not supported for Redis clusters.”
It is also not possible to create user via doctl.
test@testubuntu:~$ doctl databases user create ***** testuser
Error: POST https://api.digitalocean.com/v2/databases/***/users: 422 (request "***") operation is not supported for this cluster type
Why is it not possible to create a new user in Redis when it is possible in PostgreSQL, MySQL, etc.?
Is it possible to add one of these modules, for example RedisJSON, to a DigitalOcean Managed Redis database? If so, how?
Errors:
- Operation timed out
- Error while reading line from the server
Laravel Config:
'default' => [
'scheme' => env('REDIS_SCHEME', 'tls'),
'host' => env('REDIS_HOST', '127.0.0.1'),
'password' => env('REDIS_PASSWORD', null),
'port' => env('REDIS_PORT', 6379),
'database' => env('REDIS_DB', 1),
// 'read_write_timeout' => 0,
],
I’ve been looking at community Q&As, and they say that phpredis should work with DO Redis.
I keep failing to set up a proper connection to a Redis cluster from Laravel 9 on an Ubuntu 20.04 Droplet. I can get it to work from the App Platform, so I know the Redis server itself is working correctly.
I found multiple tutorials, videos, and examples of how to do this on DO, but all of them are for older versions of Laravel, and there seem to be some differences.
If I use predis as the redis client I end up with the following error.
Predis\Connection\ConnectionException Connection timed out [tls://db-redis-***-db.ondigitalocean.com:25061]
If I use phpredis I end up with a 504 Gateway Time-out.
Any feedback is highly appreciated!
Edit: Solved!
It turns out it was a stupid little mistake on my side!
After adding my Droplet to the Redis database’s Trusted Sources, I can use the predis client without issues.
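If anyone hits the same timeout, a quick way to tell a Trusted Sources/firewall problem apart from a Redis problem is a plain TCP reachability check from the Droplet. A sketch (the hostname shown is a placeholder, not a real cluster):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """True if a TCP connection to host:port succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # timeout, refused, or unresolvable host
        return False

# e.g. can_reach("db-redis-example.ondigitalocean.com", 25061)
# False here usually means the Droplet is not in Trusted Sources,
# since the managed firewall drops packets from unknown sources.
```

A dropped packet produces exactly the "Connection timed out" symptom above, whereas an application-level problem (wrong password, wrong TLS scheme) would fail with a different error after the socket opens.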
I had the same error on the Rails app that goes with it, but adding the following to production.rb solved it:
# Digital Ocean Apps fix:
config.active_record.cache_versioning = false
Both the Rails and Sidekiq components are cloned from the same repository, so I assume they should take on the same configuration. Could it be the command I’m using on the Sidekiq side?
bundle exec sidekiq -C config/sidekiq.yml
The error I get is:
[2022-07-13 11:52:52] 2022-07-13T11:52:52.255Z pid=1 tid=349 WARN: RuntimeError:
[2022-07-13 11:52:52] You're using a cache store that doesn't support native cache versioning.
[2022-07-13 11:52:52] Your best option is to upgrade to a newer version of ActiveSupport::Cache::RedisStore
[2022-07-13 11:52:52] that supports cache versioning (ActiveSupport::Cache::RedisStore.supports_cache_versioning? #=> true).
[2022-07-13 11:52:52]
[2022-07-13 11:52:52] Next best, switch to a different cache store that does support cache versioning:
[2022-07-13 11:52:52] https://guides.rubyonrails.org/caching_with_rails.html#cache-stores.
[2022-07-13 11:52:52]
[2022-07-13 11:52:52] To keep using the current cache store, you can turn off cache versioning entirely:
[2022-07-13 11:52:52]
[2022-07-13 11:52:52] config.active_record.cache_versioning = false
[2022-07-13 11:52:52]
[2022-07-13 11:52:52]
[2022-07-13 11:52:52] 2022-07-13T11:52:52.255Z pid=1 tid=349 WARN: /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/activerecord-7.0.1/lib/active_record/railtie.rb:118:in `block (3 levels) in <class:Railtie>'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/activesupport-7.0.1/lib/active_support/lazy_load_hooks.rb:71:in `class_eval'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/activesupport-7.0.1/lib/active_support/lazy_load_hooks.rb:71:in `block in execute_hook'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/activesupport-7.0.1/lib/active_support/lazy_load_hooks.rb:61:in `with_execution_control'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/activesupport-7.0.1/lib/active_support/lazy_load_hooks.rb:66:in `execute_hook'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/activesupport-7.0.1/lib/active_support/lazy_load_hooks.rb:43:in `block in on_load'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/activesupport-7.0.1/lib/active_support/lazy_load_hooks.rb:42:in `each'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/activesupport-7.0.1/lib/active_support/lazy_load_hooks.rb:42:in `on_load'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/activerecord-7.0.1/lib/active_record/railtie.rb:115:in `block (2 levels) in <class:Railtie>'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/activesupport-7.0.1/lib/active_support/lazy_load_hooks.rb:68:in `block in execute_hook'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/activesupport-7.0.1/lib/active_support/lazy_load_hooks.rb:61:in `with_execution_control'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/activesupport-7.0.1/lib/active_support/lazy_load_hooks.rb:66:in `execute_hook'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/activesupport-7.0.1/lib/active_support/lazy_load_hooks.rb:52:in `block in run_load_hooks'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/activesupport-7.0.1/lib/active_support/lazy_load_hooks.rb:51:in `each'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/activesupport-7.0.1/lib/active_support/lazy_load_hooks.rb:51:in `run_load_hooks'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/railties-7.0.1/lib/rails/application/finisher.rb:85:in `block in <module:Finisher>'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/railties-7.0.1/lib/rails/initializable.rb:32:in `instance_exec'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/railties-7.0.1/lib/rails/initializable.rb:32:in `run'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/railties-7.0.1/lib/rails/initializable.rb:61:in `block in run_initializers'
[2022-07-13 11:52:52] /layers/heroku_ruby/ruby/vendor/ruby-3.1.0/lib/ruby/3.1.0/tsort.rb:228:in `block in tsort_each'
[2022-07-13 11:52:52] /layers/heroku_ruby/ruby/vendor/ruby-3.1.0/lib/ruby/3.1.0/tsort.rb:350:in `block (2 levels) in each_strongly_connected_component'
[2022-07-13 11:52:52] /layers/heroku_ruby/ruby/vendor/ruby-3.1.0/lib/ruby/3.1.0/tsort.rb:431:in `each_strongly_connected_component_from'
[2022-07-13 11:52:52] /layers/heroku_ruby/ruby/vendor/ruby-3.1.0/lib/ruby/3.1.0/tsort.rb:349:in `block in each_strongly_connected_component'
[2022-07-13 11:52:52] /layers/heroku_ruby/ruby/vendor/ruby-3.1.0/lib/ruby/3.1.0/tsort.rb:347:in `each'
[2022-07-13 11:52:52] /layers/heroku_ruby/ruby/vendor/ruby-3.1.0/lib/ruby/3.1.0/tsort.rb:347:in `call'
[2022-07-13 11:52:52] /layers/heroku_ruby/ruby/vendor/ruby-3.1.0/lib/ruby/3.1.0/tsort.rb:347:in `each_strongly_connected_component'
[2022-07-13 11:52:52] /layers/heroku_ruby/ruby/vendor/ruby-3.1.0/lib/ruby/3.1.0/tsort.rb:226:in `tsort_each'
[2022-07-13 11:52:52] /layers/heroku_ruby/ruby/vendor/ruby-3.1.0/lib/ruby/3.1.0/tsort.rb:205:in `tsort_each'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/railties-7.0.1/lib/rails/initializable.rb:60:in `run_initializers'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/railties-7.0.1/lib/rails/application.rb:372:in `initialize!'
[2022-07-13 11:52:52] /workspace/config/environment.rb:5:in `<top (required)>'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.4.2/lib/sidekiq/cli.rb:273:in `require'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.4.2/lib/sidekiq/cli.rb:273:in `boot_application'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.4.2/lib/sidekiq/cli.rb:37:in `run'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.4.2/bin/sidekiq:31:in `<top (required)>'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/bin/sidekiq:25:in `load'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/bin/sidekiq:25:in `<top (required)>'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/bundler-2.3.10/lib/bundler/cli/exec.rb:58:in `load'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/bundler-2.3.10/lib/bundler/cli/exec.rb:58:in `kernel_load'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/bundler-2.3.10/lib/bundler/cli/exec.rb:23:in `run'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/bundler-2.3.10/lib/bundler/cli.rb:483:in `exec'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/bundler-2.3.10/lib/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/bundler-2.3.10/lib/bundler/vendor/thor/lib/thor/invocation.rb:127:in `invoke_command'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/bundler-2.3.10/lib/bundler/vendor/thor/lib/thor.rb:392:in `dispatch'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/bundler-2.3.10/lib/bundler/cli.rb:31:in `dispatch'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/bundler-2.3.10/lib/bundler/vendor/thor/lib/thor/base.rb:485:in `start'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/bundler-2.3.10/lib/bundler/cli.rb:25:in `start'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/bundler-2.3.10/exe/bundle:48:in `block in <top (required)>'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/bundler-2.3.10/lib/bundler/friendly_errors.rb:103:in `with_friendly_errors'
[2022-07-13 11:52:52] /layers/heroku_ruby/gems/vendor/bundle/ruby/3.1.0/gems/bundler-2.3.10/exe/bundle:36:in `<top (required)>'
[2022-07-13 11:52:52] /workspace/bin/bundle:113:in `load'
[2022-07-13 11:52:52] /workspace/bin/bundle:113:in `<main>'
Is it possible to make configuration changes to the Redis Database service persistent if I made the changes through redis-cli?
P.S. I mean the DO managed service, not a stand-alone server.
config get save
When accessing Redis via redli, there are no config variables available, e.g.:
> config get *
(error) ERR unknown command `config`, with args beginning with: `get`, `*`,
Here’s the code on my Parse Server:
const redis = require("redis")
const redisClient = redis.createClient({
url: "rediss://username:password@private-db-redis-fra1-...db.ondigitalocean.com:25061"
})
The connection string is generated by DigitalOcean. The Droplet’s outbound traffic rules are open.
Here are my error logs:
2|index | AbortError: Ready check failed: Redis connection lost and command aborted. It might have been processed.
2|index | at RedisClient.flush_and_error (/root/parse-server-example/node_modules/redis/index.js:362:23)
2|index | at RedisClient.connection_gone (/root/parse-server-example/node_modules/redis/index.js:664:14)
2|index | at Socket.<anonymous> (/root/parse-server-example/node_modules/redis/index.js:293:14)
2|index | at Object.onceWrapper (events.js:313:30)
2|index | at emitNone (events.js:111:20)
2|index | at Socket.emit (events.js:208:7)
2|index | at endReadableNT (_stream_readable.js:1064:12)
2|index | at args.(anonymous function) (/usr/lib/node_modules/pm2/node_modules/event-loop-inspector/index.js:138:29)
2|index | at _combinedTickCallback (internal/process/next_tick.js:139:11)
2|index | at process._tickDomainCallback (internal/process/next_tick.js:219:9)
Any idea what I’m doing wrong?
Created by one of the founders of StackOverflow, Discourse is an open-source discussion platform. Discourse can power an online forum, a mailing list, a chat room, and more.
Discourse works well on a single Droplet using DigitalOcean’s one-click install, but as your community grows, it could outgrow a single Droplet. Using multiple Droplets provides your community with resiliency if one of your Droplets goes offline. Each Droplet also increases your bandwidth allowance. When using multiple Droplets, a Load Balancer can help scale your deployment and bring high-availability to your web app. Finally, a Managed Database instance ensures a consistent user experience across multiple Droplets.
After completing this tutorial, you will have a highly available, easily scalable Discourse deployment running on DigitalOcean. You’ll start with a fresh Ubuntu 20.04 Droplet and you’ll finish with a horizontally scalable Discourse installation that includes a Load Balancer, Managed PostgreSQL cluster, Redis instance (optional), and additional Droplets.
To follow this tutorial, you will need:
- One Ubuntu 20.04 Droplet set up with a non-root user with sudo privileges and a firewall, which you can do by following the guide Initial Server Setup with Ubuntu 20.04.
- Discourse installed at the root of your domain (yoursite.com rather than discourse.yoursite.com).
).In this step, you will add a DigitalOcean Load Balancer to the Discourse server you created in the prerequisites. Access your DigitalOcean control panel, click Networking, then Load Balancers, and then click Create Load Balancer.
You’ll need to choose a datacenter region for your load balancer. Be sure to choose the same region you chose for your Discourse Droplet. Load Balancers communicate with your Droplet using its private network, so both your Droplet and Load Balancer need to be in the same region.
Next, add your Droplet to your Load Balancer by typing your Droplet’s name into the text field.
Discourse typically handles HTTPS for you by issuing a free Let’s Encrypt certificate during the installation process. However, when your Droplet is behind a Load Balancer, Let’s Encrypt won’t be able to renew your certificate because your domain’s IP address won’t match your Droplet’s IP.
Luckily, DigitalOcean Load Balancers can manage certificates for you. You just need to add a forwarding rule for HTTPS to your Load Balancer. A forwarding rule tells the Load Balancer to forward a specific kind of traffic to your Droplet.
Note: For more about forwarding rules, see the product documentation for Load Balancers.
You need two forwarding rules: one for HTTP and one for HTTPS. By adding a forwarding rule for HTTPS, DigitalOcean can automatically generate and renew a certificate for you. HTTPS support is also important for security and your site’s SEO.
Since you added your domain to DigitalOcean as part of the prerequisites, adding HTTPS support takes just a few clicks. In the Forwarding rules section, add a new rule for HTTPS, which you can select from the dropdown list. Now click Certificate, then + New certificate.
Type your domain name into the text field, which should auto-fill for you. Then give your certificate a name and click Generate Certificate to have DigitalOcean request and manage a certificate for you.
You’re almost done setting up your Load Balancer. Click the Edit Advanced Settings button and tick the box labeled Enable Proxy Protocol. When the PROXY protocol is enabled, the Load Balancer will forward client information like the user’s IP address to the Droplets behind the Load Balancer. Without the PROXY protocol, Discourse would think all its traffic came from a single user (the Load Balancer), and all the logs would show the Load Balancer’s IP address rather than the real IP addresses of your users.
Finally, choose a name for your Load Balancer and click Create Load Balancer.
Now that you’ve got a Load Balancer set up, you’ll need to modify your Discourse configuration files.
In this step, you will modify the default configuration files included with Discourse. SSH into your Droplet and move to the /var/discourse
directory using this command:
- cd /var/discourse
Using nano or your favorite text editor, create and open a new file in the templates
directory called loadbalancer.template.yml
:
- sudo nano templates/loadbalancer.template.yml
The templates folder holds files for your Discourse configuration. Here, you’re adding a new custom template to enable PROXY protocol support for your Discourse installation.
In loadbalancer.template.yml
, insert the following lines:
run:
- exec: "sed -i 's:listen 80: listen 80 proxy_protocol:g' /etc/nginx/conf.d/discourse.conf"
- exec: "sed -i 's:$remote_addr:$proxy_protocol_addr:g' /etc/nginx/conf.d/discourse.conf"
- exec: "sed -i 's:X-Forwarded-For $proxy_add_x_forwarded_for:X-Forwarded-For $proxy_protocol_addr:g' /etc/nginx/conf.d/discourse.conf"
The first line modifies the Discourse Nginx configuration to enable support for the PROXY protocol. Discourse will return a server error if PROXY protocol is enabled on the Load Balancer but not in Nginx.
The second line performs a find and replace, replacing all occurrences of $remote_addr
with $proxy_protocol_addr
. $proxy_protocol_addr
contains the IP address of the client, forwarded from the Load Balancer. $remote_addr
normally shows the user’s IP address but because Discourse is behind a Load Balancer, it will show the Load Balancer’s IP address.
The final line replaces all occurrences of X-Forwarded-For $proxy_add_x_forwarded_for
with X-Forwarded-For $proxy_protocol_addr
, ensuring the X-Forwarded-For
header correctly records the client IP address and not the IP address of the Load Balancer.
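To see exactly what those substitutions do, here is the same find-and-replace mirrored in Python against an illustrative Nginx snippet (not the real discourse.conf):

```python
# Mirror of the three sed substitutions in the template above,
# applied to an illustrative nginx snippet.
conf = (
    "listen 80;\n"
    "proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n"
    "log_format custom '$remote_addr';\n"
)

conf = conf.replace("listen 80", "listen 80 proxy_protocol")
conf = conf.replace("$remote_addr", "$proxy_protocol_addr")
conf = conf.replace("X-Forwarded-For $proxy_add_x_forwarded_for",
                    "X-Forwarded-For $proxy_protocol_addr")

print(conf)
# listen 80 proxy_protocol;
# proxy_set_header X-Forwarded-For $proxy_protocol_addr;
# log_format custom '$proxy_protocol_addr';
```

After the substitutions, every place Nginx would have used the Load Balancer’s address now uses the client address carried by the PROXY protocol.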
Save and close the file by pressing CTRL+X
followed by y
.
The last thing you need to do is load the template into Discourse and rebuild Discourse.
Using nano or your favorite text editor, edit the file called app.yml
in the containers
directory:
- sudo nano containers/app.yml
First, you’ll add the template you just created. Under templates
, add the highlighted line as shown:
templates:
- "templates/postgres.template.yml"
- "templates/redis.template.yml"
- "templates/web.template.yml"
- "templates/web.ratelimited.template.yml"
- "templates/loadbalancer.template.yml"
Be sure to add the new template on a line below templates/web.template.yml
.
Next, make sure the two lines web.ssl.template.yml
and templates/web.letsencrypt.ssl.template.yml
are both commented out by adding a pound sign [#
] as shown:
templates:
- "templates/postgres.template.yml"
- "templates/redis.template.yml"
- "templates/web.template.yml"
- "templates/web.ratelimited.template.yml"
- "templates/loadbalancer.template.yml"
## Uncomment these two lines if you wish to add Lets Encrypt (https)
# - "templates/web.ssl.template.yml"
# - "templates/web.letsencrypt.ssl.template.yml"
Finally, go down a few lines until you see expose
. Comment out the line - "443:443" # https
by adding a pound sign [#
] as shown:
...
expose:
- "80:80" # http
# - "443:443" # https
HTTPS will now be disabled on Discourse when you rebuild Discourse. The HTTPS connection terminates at the Load Balancer and the Load Balancer communicates with your Droplet over DigitalOcean’s secure private network, so you don’t need HTTPS set up on your Discourse server directly.
Save and close the file by pressing CTRL+X
followed by y
.
To apply the configuration and rebuild Discourse, run the following command:
- sudo ./launcher rebuild app
This command requires super-user privileges, which is why it’s prepended with sudo
.
At this point, you’ve finished configuring Discourse and your Load Balancer. Next, you’ll point your domain name to the IP address of the Load Balancer.
In this step, you’ll point your domain to the Load Balancer’s IP address instead of the Droplet’s IP address.
If you’re using DigitalOcean’s DNS hosting, go to the Control Panel and click Networking. Click on your domain name, then look for the A record that points to your Droplet. Select the record’s More menu to modify the record. Change this record’s value from your Droplet’s IP address to your Load Balancer’s IP address.
Your Discourse server will be running behind a DigitalOcean Load Balancer, and you won’t have to manage your own SSL certificates because DigitalOcean will do it for you.
Test that your Discourse installation is working by visiting your domain name.
Now that your domain points to the Load Balancer, you’ll add a managed database instance to create a consistent experience for your users.
In this step, you’ll create a DigitalOcean Managed PostgreSQL instance and add it to your Discourse deployment.
The main advantage of a Load Balancer is to divide your traffic between multiple Droplets. Until now, your database, Redis server, and web server have all been running on a single Droplet. If you add a second Discourse instance, it will have its own database, Redis server, and web server, and will act like a completely different website. Your visitors may feel like they’re going to other websites with different posts and users because they’ll be connected to different databases. You can solve this problem by using one of DigitalOcean’s Managed PostgreSQL instances, rather than having a separate database run on each Discourse Droplet.
Set up a DigitalOcean Managed PostgreSQL instance by going to the DigitalOcean control panel. Click Databases, then Create Database Cluster, then choose PostgreSQL, set the region to match your Droplet’s location, and click Create a Database Cluster.
While your database is provisioning, add your Droplet as a trusted source to your database. This will allow your Droplet to communicate with your database using DigitalOcean’s secure private network.
On the next screen choose VPC Network and note the values for database host, username, password, and port, as you’ll need to add these to your containers/app.yml
file.
Once your database cluster is created, you’ll need to update the configuration for Discourse. Open containers/app.yml
:
- sudo nano containers/app.yml
In the templates
section, comment out the line - templates/postgres.template.yml
as shown:
templates:
# - "templates/postgres.template.yml"
- "templates/redis.template.yml"
- "templates/web.template.yml"
- "templates/web.ratelimited.template.yml"
- "templates/loadbalancer.template.yml"
This prevents Discourse from provisioning its own Postgres server when it’s rebuilt.
Next, look for the env
section of containers/app.yml
and add the lines below, replacing the values with your own values for database username, password, host, name, and port.
env:
DISCOURSE_DB_USERNAME: your_db_username
DISCOURSE_DB_PASSWORD: your_db_password
DISCOURSE_DB_HOST: your_db_host
DISCOURSE_DB_NAME: your_db_name
DISCOURSE_DB_PORT: your_db_port
These extra variables allow your Droplet to connect to your external database.
Save the file by pressing CTRL+X
followed by y
.
If you have a pre-existing Discourse installation, you should export it as a backup. You can do that by going to your site’s admin area, clicking Backups, and then clicking Backup.
Note: For more information about backing up and restoring your site, please see the Discourse product documentation, Move Your Discourse Instance to a Different Server.
To apply the configuration and rebuild Discourse, run the following command:
- sudo ./launcher rebuild app
Even if you exported your site, you’ll have to go through the Discourse web installation again to create a new admin account. You can then use this account to restore Discourse now that you’re connected to your new database.
Finally, you’ll change one setting to ensure users stay on the same Droplet while visiting the site.
Go back to your Load Balancer’s settings in the DigitalOcean control panel. Go to Networking, then Load Balancers, then Settings, and look for Sticky Sessions. Click Edit.
Change Sticky Sessions from none to Cookie and click Save.
Sticky Sessions ensure each user who visits your website through your Load Balancer stays connected to the same Droplet for the duration of their visit. This is important because each Droplet still has its own Redis instance and Redis is how Discourse keeps track of a user when they’re logged in. Without Sticky Sessions, a user could log in on one Droplet, then visit a new page and (unknown to them) be swapped to the other Droplet they aren’t logged in on. Sticky Sessions prevents this by keeping the user on the same Droplet.
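Cookie-based stickiness works by recording the load balancer’s first routing decision in a cookie and honoring it on later requests. The sketch below is only a toy illustration of that idea, not DigitalOcean’s actual implementation:

```python
import random

BACKENDS = ["droplet-1", "droplet-2"]

def route(cookies):
    """Pick a backend for this request, pinning the choice in a cookie."""
    backend = cookies.get("lb_sticky")
    if backend not in BACKENDS:          # first visit, or the pinned backend is gone
        backend = random.choice(BACKENDS)
        cookies["lb_sticky"] = backend   # remember the choice for future requests
    return backend

jar = {}                                 # one visitor's cookie jar
first = route(jar)
# Every subsequent request from the same visitor lands on the same Droplet:
print(all(route(jar) == first for _ in range(10)))  # -> True
```

Note the fallback: if the pinned backend disappears, the visitor is silently re-routed, which is exactly the moment a Droplet-local Redis session would be lost.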
As we did with the database, you could move Redis to its own dedicated instance, and then you wouldn’t need Sticky Sessions. You can try this out in the next step.
This step is optional and requires a DigitalOcean Managed Redis instance. Until this point, the Redis instance was baked into the Droplet behind the Load Balancer, which requires Sticky Sessions to make sure users stay logged in. If a Droplet goes down, the Load Balancer will share users among your remaining Droplets. However, in the previous setup, users would be logged out once they switched Droplets because their session information is stored on the Droplet.
Adding a Redis server external to the Droplets can address this problem. This would effectively move Discourse’s state away from your Droplets. If a Droplet goes offline, your users won’t be logged out — in fact, they won’t even notice.
You can set up a DigitalOcean managed Redis server the same way you created your managed PostgreSQL database. Go to the DigitalOcean control panel. Click Databases, then Create, then Database, and choose Redis for your database engine. Select the same region as your other resources to ensure they will be on the same VPC network. Give your server a name and click Create a database cluster.
Just like when creating your Postgres database, you’ll see a welcome screen with some Getting Started steps. Click Secure this database cluster, enter the name of your Droplet, and then click Allow these inbound sources only. This ensures only your Droplet can connect to Redis.
Now you need to choose the eviction policy for your Redis server. Choose allkeys-lru. This policy means if your Redis server fills up, it will start deleting its oldest entries. This will log out users who haven’t used your website in a while, but it’s better than having Redis return errors and stop working.
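To see why allkeys-lru is a reasonable choice for session data, it helps to picture the policy itself: the key that has gone unused longest is the one evicted. The sketch below illustrates that behavior with Python’s OrderedDict; it is not how Redis implements the policy internally (Redis uses an approximated LRU based on sampling):

```python
from collections import OrderedDict

class LRUCache:
    """Toy cache illustrating allkeys-lru: when full, evict the least recently used key."""
    def __init__(self, max_keys):
        self.max_keys = max_keys
        self.data = OrderedDict()

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)          # a write counts as "recently used"
        self.data[key] = value
        if len(self.data) > self.max_keys:
            self.data.popitem(last=False)       # evict the entry idle longest

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)              # a read refreshes recency too
        return self.data[key]

cache = LRUCache(max_keys=2)
cache.set("alice", "session-1")
cache.set("bob", "session-2")
cache.get("alice")                 # touch alice so she is "recent"
cache.set("carol", "session-3")    # cache is full: bob, idle longest, is evicted
print(cache.get("bob"))            # -> None
```

In session terms: the users logged out first are the ones who have not visited in the longest time, which is usually the least disruptive possible choice.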
Save your eviction policy. Like your PostgreSQL database, on the Connection details screen, choose VPC network.
Once the Redis instance has been created, you’ll need to update your Discourse configuration. Open containers/app.yml using nano:
- sudo nano containers/app.yml
Add the Redis connection details into the env section as shown here. Be sure to replace the highlighted text with your own information. (You don’t need to include the username field.)
env:
DISCOURSE_REDIS_HOST: your_redis_host
DISCOURSE_REDIS_PASSWORD: your_redis_password
DISCOURSE_REDIS_PORT: your_redis_port
DISCOURSE_REDIS_USE_SSL: true
Next, in the templates section, comment out the templates/redis.template.yml line as shown:
templates:
# - "templates/postgres.template.yml"
# - "templates/redis.template.yml"
- "templates/web.template.yml"
- "templates/web.ratelimited.template.yml"
- "templates/loadbalancer.template.yml"
This prevents Discourse from creating its own Redis instance.
Save and close the file by pressing CTRL+X followed by y.
To apply the configuration and rebuild Discourse, run the following command:
- sudo ./launcher rebuild app
With a managed Redis instance, you don’t need Sticky Sessions anymore. You can turn off that option if you’d like, but your users won’t notice a difference either way.
Now that you’ve added managed databases to your configuration, you’ll create snapshots of your Discourse server to add more Droplets.
In this step, you’ll create a snapshot of your Discourse Droplet, which makes creating new servers easier to do. A snapshot provides a full copy of an existing Droplet, and you can use them to create new Droplets with the same contents.
Create a new snapshot by going to the DigitalOcean control panel. Click Droplets, find your Droplet, and click Snapshots. Give your snapshot a name and click Take live snapshot.
You can use this snapshot to add extra Droplets behind your Load Balancer and increase the capacity of your website.
Each time you create a new Droplet, you’ll need to update the trusted sources on your Postgres and Redis servers, which you can do through the Control Panel. As you did in a previous step, select the database instance you’d like to modify, go to Overview, and then Secure this database cluster. Add your new Droplet IP addresses to the list of trusted sources.
In this step, you used a Snapshot to create additional Droplets, and added them as trusted sources to your PostgreSQL and Redis instances.
In this tutorial, you set up a Discourse server behind a DigitalOcean Load Balancer. To help scale your deployment, you also added a Managed PostgreSQL Database and Managed Redis instance. Finally, using snapshots and the Control Panel, you added more Droplets. Using the Control Panel, you can add more resources as your community grows.
Now that you’ve set up a Load Balancer, you can explore other use cases for DigitalOcean Load Balancers, like Canary Deployments. You could also try connecting to your Redis instance from the command line by following the tutorial, How To Connect to a Managed Redis Instance over TLS with Stunnel and redis-cli. Finally, you can learn more about DigitalOcean’s databases and their performance with the Managed Databases Documentation and the tutorials, Managed Databases Connection Pools and PostgreSQL Benchmarking Using pgbench.
I would like to know what versions of Redis are supported/available on DO’s Managed Redis.
Thx
There’s nothing that I see in my code that disconnects and reconnects every five minutes, so I’m wondering if this is something DigitalOcean does. Nothing is inherently broken with my application; I’m just curious as to why this happens.
I noticed that Heroku managed Redis does exactly this (at the same time interval too) to avoid running out of connections due to bad clients not disconnecting properly (see https://devcenter.heroku.com/articles/heroku-redis#timeout).
Any verification or documentation would be appreciated.
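Whatever the server-side cause turns out to be, clients are generally expected to treat a dropped connection as routine and reconnect. Below is a minimal, library-agnostic retry sketch; the names and numbers are illustrative, not part of any official API:

```python
import time

def with_retry(op, retries=3, base_delay=0.5, retriable=(ConnectionError,)):
    """Run op(); on a dropped connection, back off exponentially and retry."""
    for attempt in range(retries):
        try:
            return op()
        except retriable:
            if attempt == retries - 1:
                raise                                # out of attempts: re-raise
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Demo with a fake operation that fails twice before succeeding:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("server closed the connection")
    return "PONG"

print(with_retry(flaky, base_delay=0.01))  # -> PONG after two retries
```

If you use redis-py, recent versions also accept a health_check_interval argument when constructing the client, which checks idle connections before reusing them.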
I’m wondering if DigitalOcean’s Redis has this limitation, as I’m looking to switch over here.
Thanks!
I would like to add managed Redis just as I can create a PostgreSQL database which is then updated to managed and immediately remains in my application components.
Is it possible to do this? How could it be done?
Thanks!
Using:
import redis
r = redis.StrictRedis(host=redis_host, port=int(redis_port), password=redis_password, ssl=True)
I have a droplet for which I recently introduced a firewall, and I have a hosted Redis server on DigitalOcean.
Lately I started getting an error: No connection is available to service this operation: EVAL; It was not possible to connect to the redis server(s)
My first thought was that it’s the firewall, but to be honest, I actually had the firewall in place for a while before this error occurred, so it might only be related. I did, however, try setting the outbound port to that of my Redis server, without it changing anything.
After I started seeing this error, I set the eviction policy to allkeys-lru on my redis server and also added security so only my droplet and home(dev) pc can connect.
Locally in my dev env I am not getting any errors, only from my two .NET apps on my droplet.
I am not sure what might be causing this?
Do I need to create these with a client (there seem to be no options to manage this through the UI)?
Is there a limit to the number of DBs for a DO managed Redis cluster? The default on a regular install is 16, but that can be raised. How does that work here?
In /etc/php/7.2/fpm/php.ini:
session.save_handler = redis
; tried either :
session.save_path = "rediss://default:mypassword@db-redis-ams3-preprod-do-user-xxx-0.a.db.ondigitalocean.com:25061"
session.save_path = "tcp://default:mypassword@db-redis-ams3-preprod-do-user-xxx-0.a.db.ondigitalocean.com:25061"
session.save_path = "tls://default:mypassword@db-redis-ams3-preprod-do-user-xxx-0.a.db.ondigitalocean.com:25061"
session.save_path = "tls://db-redis-ams3-preprod-do-user-xxx-0.a.db.ondigitalocean.com:25061?auth=mypassword"
Each time, the same kind of issue:
RedisException
read error on connection
A managed database-as-a-service (DBaaS) is an easy way to eliminate the overhead of database management and maintenance, giving developers more time to focus on building great products.
Any developer or organization that wants to free themselves from managing their own database.
Basic knowledge of databases.
DigitalOcean Managed Databases DigitalOcean Managed Databases Docs
Have ideas for DigitalOcean’s managed database-as-a-service? Tell us!
And getting this error: “Server closed the connection”.
And for trusted sources, there are none.
doctl apps create/update
but when I add a redis instance as well:
...
databases:
- name: hackathon-runner-redis
engine: REDIS
production: true
cluster_name: hackathon-runner-db-cluster
I can’t seem to find any error details and no logs are visible for the deployment. Am I doing something wrong or does this just not work?
Thanks in advance!
I have created a droplet using the OpenLiteSpeed WordPress 1-Click image. I have also created a Managed Redis DB to use for object cache; however, I am unable to connect the bundled LSCache WordPress plugin to the Redis DB.
I add the host, port, user and password and neither the public nor private networks seem to work. All I get from the plugin is Connection Test: Failed.
Any ideas what might be going wrong?
Thank you!
I’ll soon start an e-commerce website for digital files based on WordPress and EDD and thought the ideal setup would be:
- OpenLiteSpeed WordPress (from the marketplace)
- Managed MySQL
- Managed Redis
- Spaces for the files (~500 MB to 1 GB each on average)
I had some doubts, however, on how recovery would work if there’s a problem with the database. From what I could find, if I interpret it correctly, Managed MySQL backs up the database every 5 minutes (and a full backup daily), so I should be able to go back to any point in time in 5-minute intervals should there be a problem?
Also, Redis doesn’t seem to have that, but I’m not sure if that’s a problem since it should only be used for cache. Are there any steps I should follow if I recover the MySQL to a previous point in time to avoid any problems?
Thank you!
Download the Complete eBook!
How To Manage a Redis Database eBook in EPUB format
How To Manage a Redis Database eBook in PDF format
This book aims to provide an approachable introduction to Redis concepts by outlining many of the key-value store’s commands so readers can learn their patterns and syntax, thus building up readers’ understanding gradually. The goal for this book is to serve as an introduction to Redis for those interested in getting started with it, or key-value stores in general. For more experienced users, this book can function as a collection of helpful cheat sheets and in-depth reference.
This book is based on the How To Manage a Redis Database tutorial series found on DigitalOcean Community. The topics that it covers include how to:
Connect to a Redis database
Create and use a variety of Redis data types, including strings, sets, hashes, and lists
Manage Redis clients and replicas
Run transactions in Redis
Troubleshoot issues in a Redis installation
Each chapter is self-contained and can be followed independently of the others. By reading through this book, you’ll become acquainted with many of Redis’s most widely used commands, which will help you as you begin to build applications that take advantage of its power and speed.
You can download the eBook in either the EPUB or PDF format by following the links below.
Download the Complete eBook!
If you’d like to learn more about how to use Redis, visit the DigitalOcean Community’s Redis section. Alternatively, if you want to learn about other open-source database management systems, like MySQL, PostgreSQL, MariaDB, or MongoDB, we encourage you to check out our full library of database-related content.
I am trying to set up a First In, First Out (FIFO) queue using DigitalOcean’s Managed Redis hosting instead of AWS SQS. Now that I’ve set up and successfully connected to my Redis cluster, I am wondering:
Any recommendations or pointers to other reference material would be appreciated!
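The usual Redis FIFO pattern is LPUSH to enqueue and RPOP (or the blocking BRPOP) to dequeue. Below is a sketch of those semantics using only the standard library, with a tiny in-memory stand-in where real code would pass a redis-py client exposing the same lpush/rpop methods:

```python
from collections import defaultdict, deque

class FakeRedis:
    """Minimal in-memory stand-in for the two list commands a FIFO queue needs."""
    def __init__(self):
        self._lists = defaultdict(deque)

    def lpush(self, key, value):
        self._lists[key].appendleft(value)   # LPUSH: add at the head of the list
        return len(self._lists[key])

    def rpop(self, key):
        try:
            return self._lists[key].pop()    # RPOP: remove from the tail
        except IndexError:
            return None                      # empty queue

def enqueue(client, queue, item):
    client.lpush(queue, item)

def dequeue(client, queue):
    # Popping from the opposite end of where we push yields oldest-first (FIFO).
    return client.rpop(queue)

client = FakeRedis()
for job in ("a", "b", "c"):
    enqueue(client, "jobs", job)
print(dequeue(client, "jobs"))  # -> a
```

In production you would likely prefer BRPOP so workers block waiting for items instead of polling an empty queue.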
With redis-cli, setting the timeout to 0 is as simple as running the command config set timeout 0. However, trying to do the same in Redli gives an error. Is it possible to manually set this timeout in a DO Managed Redis Database?
Best regards,
Problem:
When: Trusted Sources are empty (open for everyone)
And: using connection strings
Then: I can connect successfully from both my computer and the droplets
When: Trusted Sources are added,
I’m using the same testing script, I just change Trusted Sources and run again and it fails immediately.
“What’s my IP” gives the same result as the automatically suggested IP under Suggested sources.
I’m using redli, the recommended CLI for Redis.
Whenever I try the connection string as provided in “Connection details - flags”, I keep seeing the following error
2019/10/24 15:17:15 Dial dial tcp: lookup db-redis-nyc1-xxxxxxxx.db.ondigitalocean.com: no such host
If I do an nslookup on the database host, it always returns no records. In fact, if I do nslookup db.ondigitalocean.com, it returns
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
*** Can't find db.ondigitalocean.com: No answer
Any ideas? Seems like for some reason I can’t get any DNS records connecting the DB host to the server.
We are implementing the Redis database offered by DigitalOcean and are currently troubleshooting the database configuration. We are aware that redis-cli does not support SSL/TLS, so DigitalOcean recommends connecting to the Redis database with redli, which does support TLS, rather than redis-cli.
Our team used the following configuration for Redis: config set notify-keyspace-events Ex
With redli we are not able to set this feature. We can connect to the database, but our project requires this setting, and unfortunately redli does not support setting this config.
What we tried: we could not connect to the server where the database is hosted. We thought stunnel might work, but it was not a solution for this issue.
Is there any possibility of connecting to the Redis database with redis-cli, or any solution for setting this config on the server side?
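For reference, each character in a notify-keyspace-events value is a flag; Ex means “publish keyevent notifications (E) for expired keys (x)”. Here is a small lookup table, with meanings paraphrased from the Redis documentation (newer Redis versions add a few flags not listed here):

```python
# Flag characters accepted by notify-keyspace-events (paraphrased from Redis docs).
FLAGS = {
    "K": "keyspace events, published to __keyspace@<db>__ channels",
    "E": "keyevent events, published to __keyevent@<db>__ channels",
    "g": "generic commands (DEL, EXPIRE, RENAME, ...)",
    "$": "string commands",
    "l": "list commands",
    "s": "set commands",
    "h": "hash commands",
    "z": "sorted-set commands",
    "x": "expired events (fired when a key expires)",
    "e": "evicted events (fired when a key is evicted under maxmemory)",
    "A": "alias enabling every class flag above",
}

def explain(value):
    """Expand a notify-keyspace-events value into human-readable meanings."""
    return [FLAGS.get(ch, f"unknown flag {ch!r}") for ch in value]

for meaning in explain("Ex"):
    print(meaning)
```

On a managed cluster where CONFIG SET is blocked, the supported route is usually the provider’s own configuration options (DigitalOcean exposes some Redis settings through the control panel and API) rather than setting it from a client.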