This is a bit of a tough call. There are a couple of points that I feel should be the foundation of the thought process:
A small application can have a big footprint. It can also have a small one; how you code the app and configure the stack decides this, and there's no "one size fits all" recipe for keeping the footprint small.
An application with a small footprint can also grow to a larger one over time, for a couple of reasons. One is a lack of cleanup, causing the overhead of its data to grow slowly. Another is automated web traffic (spiders, malicious servers scanning for vulnerabilities, etc.).
The idea is to future-proof yourself just a bit, but not so much that you're overpaying. With that in mind, I'll give you my personal estimates. Someone else may disagree with them, and my estimates hold no more value than theirs. Mine might even hold less.
So Elasticsearch seems to recommend 16-64GB of RAM. I think they might be looking at higher workloads than yours, but they could also be spot on. Let's start you way smaller and give you the leverage to scale up later. I would put Elasticsearch on a droplet with 4GB of memory, enable private networking, and make sure ES is not listening on the public interface (you can reach it over the private network from the other droplet I'm about to recommend).
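Keeping ES off the public interface comes down to what address it binds to. A minimal sketch of the relevant `elasticsearch.yml` lines (the private IP shown is hypothetical; use your droplet's own private address):

```yaml
# elasticsearch.yml
# Bind only to the droplet's private-network interface,
# so ES is unreachable from the public internet.
network.host: 10.132.0.5   # hypothetical private IP
http.port: 9200            # the default HTTP port
```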
After that, go with a 2GB memory droplet for the Flask app.
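From the Flask droplet, the app then talks to ES by its private address. Here's a minimal stdlib-only sketch; the `ES_PRIVATE_HOST` environment variable and the `10.132.0.5` fallback IP are hypothetical names I'm using for illustration, not anything your stack defines:

```python
import json
import os
from urllib.request import urlopen

# Hypothetical env var; point it at the ES droplet's private IP.
ES_HOST = os.environ.get("ES_PRIVATE_HOST", "10.132.0.5")
ES_URL = f"http://{ES_HOST}:9200"

def cluster_health(base_url: str = ES_URL) -> dict:
    """Fetch Elasticsearch cluster health over the private network."""
    with urlopen(f"{base_url}/_cluster/health", timeout=2) as resp:
        return json.load(resp)
```

Your Flask views would call something like `cluster_health()` (or use an ES client library the same way), and nothing ever crosses the public interface.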
Now, I really like to separate out pieces like ES from the rest of the stack. You can observe it alone in its own environment, and scale appropriately by resizing that droplet. On top of that, check out the pricing page:
For what I'm recommending, you could go with a single 8GB droplet for $40/mo, or the 4GB + 2GB pair for $35/mo, saving $5/mo. I know it's not a lot of savings, but it's what I'd do.
Hope that at least helps you get started! If you want some credit to try it out first, email me at firstname.lastname@example.org and tell me more about what you’re building. I’d love to hear about it and share in your success :)