I went through the steps of changing the DNS records and nameservers on name.com for my domain poetrynear.me, copying and pasting everything from the instructions. I am using name.com for the domain and SendGrid for SMTP email. At some point I was able to access the console and received the “successful install” message. I closed it, typed www.poetrynear.me into my browser, and got nothing. I cannot access the website, the droplet, or the Discourse admin page.
This is my first time using this technology and it’s just a little frustrating.
Can someone please let me know what I am doing wrong, if anything? Should I just delete the droplet altogether and start over?
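For anyone debugging the same symptom, a first sanity check is whether DNS has propagated at all; a minimal Python sketch (standard library only, using the domain above):

import socket

# If this raises socket.gaierror, the nameserver change simply hasn't
# propagated yet -- that alone can take hours and would explain "nothing"
# in the browser even after a successful install.
for host in ("poetrynear.me", "www.poetrynear.me"):
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror as exc:
        print(host, "-> not resolving yet:", exc)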
---

I want to set up a self-hosted Bitwarden solution. Following a recommendation in the official documentation, I looked at DigitalOcean’s Droplets.
As a password manager is a massively important service, I would like to know whether any backups and/or recovery mechanisms exist on DigitalOcean’s side in case anything happens to the data center. We’ve seen what happened to the OVH servers in Strasbourg, so it can happen to anyone.
So, will I have to reset all the passwords saved in Bitwarden if my Droplet’s data center has an issue? Where are backups stored when choosing this option?
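One hedged option, independent of whatever DigitalOcean does internally: trigger your own droplet snapshots through the DigitalOcean API (the endpoint below is documented; the token and droplet ID are placeholders):

import requests

DROPLET_ID = 12345678  # placeholder -- your droplet's ID
resp = requests.post(
    f"https://api.digitalocean.com/v2/droplets/{DROPLET_ID}/actions",
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    # Ask DO to snapshot the droplet; snapshots persist even if the
    # droplet itself is later destroyed.
    json={"type": "snapshot", "name": "bitwarden-safety-snapshot"},
)
resp.raise_for_status()
print(resp.json()["action"]["status"])

Note that a snapshot still lives inside DigitalOcean’s infrastructure, so for true disaster recovery you would also want an export stored elsewhere.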
Thanks! Louis
---

Eventually we are going to transfer that company’s domains over to their own domain provider account as well, and I’m wondering what will happen.
Both the current DigitalOcean account and the new one will have DNS configurations for these domains, and the actual nameserver settings for those domains won’t change (they will still be ns1.digitalocean.com, etc.). How will DigitalOcean know which account’s DNS settings to use?
I want to believe that by some magic it will just work; otherwise you could just take over other DigitalOcean users’ DNS settings, which would be a huge security hole.
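A hedged way to test this empirically once both accounts hold a zone for the same domain: query DigitalOcean’s nameservers directly and see which records they actually serve (Python with dnspython; example.com stands in for the real domain):

import dns.resolver  # pip install dnspython

# Resolve ns1.digitalocean.com, then ask it directly, bypassing any caches.
ns_ip = list(dns.resolver.resolve("ns1.digitalocean.com", "A"))[0].address
r = dns.resolver.Resolver(configure=False)
r.nameservers = [ns_ip]
for rr in r.resolve("example.com", "A"):  # hypothetical domain
    print(rr.address)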
---

For #reasons I have a PHP 5.6 codebase I wish to deploy on a DO Droplet, connecting to a MySQL cluster.
Obviously, PHP 5.6 is bad and wrong, and really we should upgrade to 7.3 or higher. However, we have been caught on the hop and need a tactical solution to buy time to finish the PHP 8 compatibility work.
When attempting to connect with the following sample code:
$conn = mysqli_init();
if (!$conn) {
    die('init failed');
}
// Ask the client library to use utf8 for the connection.
mysqli_options($conn, MYSQLI_SET_CHARSET_NAME, 'utf8');
// Host and port must be passed as separate arguments here --
// mysqli_real_connect() does not parse a 'host:port' string.
$res = mysqli_real_connect($conn, 'dbclusteraddress', 'username',
                           'password', 'dbname', 25060); // DO managed MySQL port
We get connect_error 2054 with the description “Server sent charset unknown to the client. Please, report to the developers”.
I believe this is because the server sends back utf8mb4, which is the default in MySQL 8 and is not understood by the MySQLi client in PHP 5.6.
Is there a way we can change the PHP 5.6 code/configuration to support or handle utf8mb4, or a way to change the startup configuration of the server, i.e. this:
[mysqld]
character-set-server=utf8
collation-server=utf8_general_ci
I can’t see that we have any access to the server (for obvious reasons) or any way to set that value through the control panel.
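For what it’s worth, the server-side defaults can at least be confirmed from any client new enough to handle the handshake; a hedged sketch with PyMySQL (connection details are the same placeholders as above):

import pymysql  # pip install pymysql

conn = pymysql.connect(host="dbclusteraddress", port=25060,
                       user="username", password="password",
                       database="dbname")
with conn.cursor() as cur:
    # These are the values the server announces at connect time.
    cur.execute("SHOW VARIABLES LIKE 'character_set_server'")
    print(cur.fetchone())
    cur.execute("SHOW VARIABLES LIKE 'collation_server'")
    print(cur.fetchone())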
Cheers,
---

The query is REFRESH MATERIALIZED VIEW xyz.
It kind of looks like there is a limit of 10 minutes on how long a query may run. Can anyone tell me which particular limit might come into play here?
The instance is 1 GB RAM / 1 vCPU / 10 GB disk / primary only / FRA1.
The application runs Spring Boot with the HikariCP connection pool.
The first error message is 'SQLSTATE(57P01), ErrorCode(0)'.
The second: FATAL: terminating connection due to administrator command; nested exception is org.postgresql.util.PSQLException: FATAL: terminating connection due to administrator command.
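A hedged diagnostic sketch (psycopg2; the connection string is a placeholder): it is worth checking which timeouts the server enforces, though note that a plain statement_timeout usually cancels with a different code (57014, query_canceled), so SQLSTATE 57P01 (admin_shutdown) may instead point at a restart, a failover, or pg_terminate_backend:

import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect("postgresql://user:pass@host:25060/db?sslmode=require")
cur = conn.cursor()
for setting in ("statement_timeout", "idle_in_transaction_session_timeout"):
    # SHOW reports the value the server is actually enforcing.
    cur.execute("SHOW " + setting)
    print(setting, "=", cur.fetchone()[0])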
---

However, these plans are greyed out, with the text “Disk must be at least the same size” superimposed over the items.
Is there a way to force-migrate the database back to a lower plan without destroying my data?
---

so the data itself will be exactly the same in real time.
And what if the data center is down? Will the standby database go down too?
---

psql: error: SSL SYSCALL error: Connection reset by peer (0x00002746/10054)
FATAL: no pg_hba.conf entry for host "123.231.108.177", user "bconic_survey_admin", database "bconic_survey", SSL off
Have I made any mistake up to here?
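One hedged guess based on the “SSL off” in that log line: the client fell back to a non-SSL connection, and DO managed Postgres only accepts SSL connections. Forcing it explicitly, sketched with psycopg2 (the host and password are placeholders):

import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(
    host="your-cluster.db.ondigitalocean.com",  # placeholder host
    port=25060,
    dbname="bconic_survey",
    user="bconic_survey_admin",
    password="...",
    sslmode="require",  # DO managed databases reject non-SSL connections
)

With psql itself, the equivalent is appending sslmode=require to the connection string.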
---

There is no way to check what is causing this in a Managed Database. What is going on?
---

I’m really excited about the managed Postgres offering, especially with PostGIS enabled by default.
Unfortunately, it was not compiled with support for protocol buffers (protobuf). I found out at runtime when I got this error message:
django.db.utils.InternalError: Missing libprotobuf-c

This makes functions like ST_AsMVT() unavailable. They’re at the core of my product (which is a map-based product using dynamic vector tiles).
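For anyone checking their own cluster, a hedged sketch (psycopg2; the connection string is a placeholder) that asks PostGIS how it was compiled — a protobuf-enabled build lists LIBPROTOBUF in the output:

import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect("postgresql://user:pass@host:25060/db?sslmode=require")
cur = conn.cursor()
cur.execute("SELECT postgis_full_version();")
print(cur.fetchone()[0])  # look for LIBPROTOBUF="..." in this string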
Are there short-term plans for inclusion of this library? If not, I’m going to have to stick to Droplets instead of Apps.
Thanks!
---

Currently, when an App specification contains a hostname managed by a DigitalOcean domain zone, a CNAME entry is added pointing to the dynamically generated digitalocean.app hostname.
On removal of the App, the CNAME is left behind as a record in the managed zone. I have implemented automatic deletion of the record as part of appfile in PR https://github.com/renehernandez/appfile/pull/8, but I think this is something that should be done natively by the API.
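Until then, a hedged sketch of the cleanup (the two API endpoints are documented; the domain and app hostname are placeholders, and pagination is ignored for brevity):

import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}
domain = "example.com"                        # placeholder
app_host = "my-app-abcde.ondigitalocean.app"  # placeholder

# Find and delete any CNAME still pointing at the removed App's hostname.
records = requests.get(f"{API}/domains/{domain}/records", headers=HEADERS).json()
for rec in records["domain_records"]:
    if rec["type"] == "CNAME" and rec["data"].rstrip(".") == app_host:
        requests.delete(f"{API}/domains/{domain}/records/{rec['id']}",
                        headers=HEADERS)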
---

My company is using DigitalOcean for a multitude of services, and Kubernetes is one of them. For us, the cap of 7 volumes per node is very restrictive. Is it planned to increase this hard limit within the next couple of years?
---

We are happy users of the DO Managed Database offering. Nevertheless, backups are still not 100% clear to me. I understand you make WAL backups every 5 minutes or so and a full backup every 24 hours, so I get that you can do point-in-time restores, etc.
But what happens if something “catastrophic” happens to the underlying cluster my database is running on? In other words, what is the worst that can happen to my database if we ourselves are not the ones causing it to be destroyed, corrupted, or made unavailable? In that case, would I still have a backup to restore, or would I need active standby nodes for that database?
Is there any recommendation to have a separate process back up my database to an external cloud provider, for example? (I guess this is always good practice, but I want to be able to properly assess the urgency.)
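In case it helps others asking the same thing, a hedged sketch of such an external backup job (assumes pg_dump on PATH and boto3; the bucket, endpoint, and credentials are placeholders):

import datetime
import subprocess

import boto3  # pip install boto3

# Dump the database and compress it...
dump = f"backup-{datetime.date.today()}.sql.gz"
subprocess.run(
    f"pg_dump 'postgresql://user:pass@host:25060/db?sslmode=require'"
    f" | gzip > {dump}",
    shell=True, check=True,
)

# ...then ship it to object storage outside the cluster (here: Spaces,
# but any S3-compatible or external provider works the same way).
s3 = boto3.client(
    "s3",
    endpoint_url="https://ams3.digitaloceanspaces.com",
    aws_access_key_id="SPACES_KEY",
    aws_secret_access_key="SPACES_SECRET",
)
s3.upload_file(dump, "my-backup-bucket", dump)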
Thanks!
---

I know that DigitalOcean guarantees restoring data to any point within the previous seven days. It would be great to get more explanation regarding the following questions:
We are using a managed MySQL database for production.
Regards
---

Unfortunately, I don’t have access to our DO stack at work to check what exactly happened, but this question is more about curiosity than fixing the bug, since it only happened once under very strange conditions.
---

Currently a user connects to app -> app connects to api -> api connects to db.
Thanks
---

I have two containers: one is the worker, the other is plain nginx with a proxy_pass configuration pointing to the worker. I am referring to service names while using Docker Compose. What should I use here?
Docker compose file:

version: "3"
services:
  tileserver:
    build: .
  frontend:
    build: frontend
    ports:
      - "8080:80"
    depends_on:
      - tileserver
Inside frontend there is an nginx.conf file with the following:
...
proxy_pass http://tileserver;
...
Here is how it looks in my DigitalOcean App Platform spec:

name: maps
region: ams
services:
  - dockerfile_path: frontend/Dockerfile
    source_dir: frontend/
    github:
      branch: m/mbtiles
      deploy_on_push: true
      repo: digitaz/maps
    http_port: 80
    instance_count: 1
    instance_size_slug: basic-xxs
    name: frontend
    routes:
      - path: /
workers:
  - dockerfile_path: Dockerfile
    github:
      branch: m/mbtiles
      deploy_on_push: true
      repo: digitaz/maps
    instance_count: 1
    instance_size_slug: basic-xxs
    name: tileserver
How should the tileserver and frontend containers reach each other? Specifically, how should frontend reach tileserver?
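A hedged sketch of one possible fix, assuming App Platform’s internal routing (components address each other by component name) and that internal_ports behaves as documented: workers do not accept inbound traffic at all, so tileserver would have to become a service with an internal-only port, and nginx’s proxy_pass would point at http://tileserver:8080 instead of the bare name:

services:
  - name: tileserver
    dockerfile_path: Dockerfile
    github:
      branch: m/mbtiles
      deploy_on_push: true
      repo: digitaz/maps
    instance_count: 1
    instance_size_slug: basic-xxs
    # internal-only: reachable by other components as tileserver:8080,
    # never exposed to the public internet
    internal_ports:
      - 8080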
---

Right now, during business hours, I have an App that is deploying, and the application has been down for about 20-45 minutes (there is now a Ruby bug causing it to continuously redeploy).
At this point I’m about to switch the application back to Heroku because we simply can’t run a business with a crashing server.
Has anyone else had this experience, and if so, is there any solution?
---

Having a preconfigured system image containing some sensitive data, I would like to upload it from Spaces without making it publicly available, even for a moment. I found out that this is possible from this tutorial: https://www.digitalocean.com/community/tutorials/how-to-create-an-image-of-your-linux-environment-and-launch-it-on-digitalocean#step-3-—-uploading-image-to-spaces-and-custom-images However, there are no details on how to do that in the tutorial; it just mentions that it may be done with the DO API.

So I started exploring both the DO API and the AWS S3 API, since the DO Spaces API is interoperable with the latter. After some tests, I figured out that the way to achieve my goal is presigned links: a presigned link gives time-limited access to a private file in DO Spaces. I created such a link using boto3, the AWS SDK for Python (sketched at the end of this post), and did some tests with it.

Unfortunately, I was not able to upload a custom image from Spaces using the presigned link. Every time I tried to use it, I got this response:
Output{"id":"bad_request","message":"Your source URL is unreachable"}
I did the same test with the Spaces file permissions changed to public; it did not help at all. Then I tried to use the link to import a custom image via URL in the DO control panel form, but that ended with an error as well. I got this message:
The URL you have entered ends in an extension that we do not support. We only accept image files that end in gz, bz2, vmdk, vhdx, qcow, qcow2, vdi, raw, img, xz. (.zip is not supported).
I want to emphasize that the same presigned link worked perfectly when entered into a browser’s address bar.
Concluding, I believe there is a bug in the DO API, in the code that checks the file type of the entered URL. It works well when the URL ends with the file extension, but my presigned link looked like this:
https://ams3.digitaloceanspaces.com/mlb123/ci-ubuntu-20-04.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=3600&X-Amz-Credential=C0V33DN33N3T33NY2020%2F20201009%2Fams3%2Fs3%2Faws4_request&X-Amz-SignedHeaders=host&X-Amz-Date=20201009T071649Z&X-Amz-Signature=aaabbbcccdddeeeaaabbbcccdddeee0000111122223333444455556666777788
However, the more important thing for me is to find another way to upload a custom image from Spaces without making it publicly available. Does anybody know another way to achieve this?
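For reference, here is roughly how the presigned link itself was generated (boto3; the bucket and key match the URL above, the credentials are placeholders):

import boto3  # pip install boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ams3.digitaloceanspaces.com",
    aws_access_key_id="SPACES_KEY",
    aws_secret_access_key="SPACES_SECRET",
)
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "mlb123", "Key": "ci-ubuntu-20-04.gz"},
    ExpiresIn=3600,  # matches X-Amz-Expires=3600 in the link above
)
print(url)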