This is a great question. The short answer is that this is not possible to calculate precisely. The long answer is that it's complicated, but you can at least approach it from an informed perspective. I'll explain. The numbers I'm about to give are not accurate; they're only meant as an illustration.
Let's say that your website is static HTML served by Nginx. You could probably sustain 1,000 simultaneous visitors on a 1GB droplet with no issue.
Now let's say that your website is WordPress with no caching plugin, 60 poorly coded plugins, and an Ajax-heavy theme, and you're running Apache with the prefork MPM. You probably couldn't sustain 500 simultaneous visitors on an 8GB droplet.
The point of the illustration is that one website requires next to nothing to run, while another consumes everything you can possibly feed it. Both the web server and the weight of the web application shape the answer, and these factors are so relative that no one can reasonably guess your capacity, even knowing the seemingly most relevant variables.
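To make the contrast concrete, here's a rough back-of-the-envelope sketch of why the same hardware goes so much further for one site than another. Every per-worker size here is an assumption for illustration, not a measurement from any real server:

```python
# Illustrative capacity math only -- all sizes below are assumed, not measured.
droplet_ram_mb = 8 * 1024    # an 8GB droplet
os_overhead_mb = 1024        # reserve ~1GB for the OS, database, etc. (assumed)
apache_child_mb = 80         # one heavy WordPress + prefork child (assumed)

# Each simultaneous PHP request occupies a whole prefork child, so RAM caps
# your concurrency long before CPU does:
apache_workers = (droplet_ram_mb - os_overhead_mb) // apache_child_mb
print(apache_workers)  # -> 89 concurrent requests before swapping

# An event-driven server handing out static files spends kilobytes per
# connection instead of tens of megabytes, which is why the 1GB static
# example above can plausibly hold 1,000 visitors.
```

Change `apache_child_mb` to match what your own processes actually use (visible in htop's RES column) and the estimate shifts dramatically, which is exactly why no generic answer exists.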
To answer from an informed perspective, you can run your website, simulate traffic to it, and gauge physical resource usage at various levels. If you can say "at 100 simultaneous visitors this site uses X CPU and Y memory," then that calculation *may* scale (web server optimizations might be required to maintain it, and resource usage can still vary from page to page). Running htop on the server while driving traffic with something like loadimpact.com is one recipe for putting this together.
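As a minimal local sketch of that "simulate traffic, then measure" idea, this script load-tests a throwaway HTTP server on localhost with a pool of concurrent workers. Against a real site you would point the workers at your own URL instead (and watch htop on the server while it runs); the worker count and request total here are arbitrary assumptions:

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class QuietHandler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # suppress per-request logging to keep the output readable

# Stand-in target: a local server on a random free port. Replace `url`
# with your real site's URL to test actual capacity.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), QuietHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def hit(_):
    with urllib.request.urlopen(url) as resp:
        return resp.status

start = time.time()
with ThreadPoolExecutor(max_workers=20) as pool:  # 20 "simultaneous visitors"
    statuses = list(pool.map(hit, range(200)))    # 200 total requests
elapsed = time.time() - start

print(f"{len(statuses)} requests in {elapsed:.2f}s "
      f"({len(statuses) / elapsed:.0f} req/s)")
server.shutdown()
```

Run it at 20 workers, then 50, then 100, noting CPU and memory in htop at each level; the point where requests/sec stops climbing (or memory climbs into swap) is your practical ceiling on that droplet.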
Hope that helps :)