It all depends. I have a project where there are currently about 2 million records across 17 tables with 124 columns. Most columns are varchar, but there are a few text columns. Roughly 1,000 new records come in every day.
All columns are indexed, so there are no “joins without an index”, and there is no cache in the system, so everything is “real time”.
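As a rough sketch of what “no joins without an index” means in practice, assuming a MySQL/MariaDB setup (the post doesn't actually name the engine) and hypothetical `orders`/`customers` tables:

```sql
-- Index the foreign key column used in the join (hypothetical names).
CREATE INDEX idx_orders_customer_id ON orders (customer_id);

-- Verify the join actually uses it: EXPLAIN should show type=ref with
-- key=idx_orders_customer_id on orders, not type=ALL (a full table scan).
EXPLAIN
SELECT c.name, o.total
FROM customers AS c
JOIN orders AS o ON o.customer_id = c.id
WHERE c.id = 42;
```

If EXPLAIN reports a full scan on the joined table, the index is missing or unusable, and reads will degrade fast as the table grows.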
The system does about 1 million queries per day, of which 98% are reads and 2% are writes.
This is being hosted on a single $10 droplet. My current estimate is that I can get to 4-5 million records before I need more RAM to avoid swapping.
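If you want to make a similar estimate for your own database, one way (again assuming MySQL/MariaDB; `your_database` is a placeholder) is to compare the on-disk size of data plus indexes against the droplet's RAM:

```sql
-- Per-table data and index size in MB; the total should fit comfortably
-- in memory (buffer pool / page cache) to avoid hitting disk or swap.
SELECT table_name,
       ROUND(data_length  / 1024 / 1024, 1) AS data_mb,
       ROUND(index_length / 1024 / 1024, 1) AS index_mb
FROM information_schema.tables
WHERE table_schema = 'your_database'
ORDER BY (data_length + index_length) DESC;
```

Extrapolating that total linearly with record count gives a rough idea of when the dataset will outgrow the machine's memory.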
Without knowing the exact data structure and how everything is tuned, it's difficult to estimate the precise resource demands of your project.
I did a test a few months ago where I created a droplet without changing any configuration (meaning no tuning done at all) and removed all the indexes from the database (besides the primary keys). I had to scale that droplet to $80, and it was still slower than the properly configured and tuned $10 droplet. At $160 it was a bit faster.
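For a concrete example of the kind of tuning that makes this difference, assuming MySQL/MariaDB with InnoDB: the single biggest lever on a small droplet is usually the buffer pool size, whose stock default (128 MB on recent MySQL) is far below what even a $10 machine can hold.

```sql
-- Check the current buffer pool size (in bytes).
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Raise it so the hot data and indexes fit in RAM. On a ~1 GB droplet,
-- somewhere around 512 MB is a common starting point; the right value
-- depends on what else runs on the box. Resizing at runtime works on
-- MySQL 5.7+ / MariaDB 10.2+; on older versions, set this in my.cnf
-- and restart instead.
SET GLOBAL innodb_buffer_pool_size = 512 * 1024 * 1024;
```

Settings like this, combined with keeping the right indexes, are what let a tuned $10 droplet outperform an untuned $80 one.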