My work is a molecular dynamics simulation of a large protein. When I ran 1 nanosecond (ns) of simulation on a CPU-Optimized droplet with 16 CPUs, it took about 15 hours and produced 44 GB of output. Now I want to run 50 ns, so which plan will give a good result in a shorter time?
In order to run NAMD in a distributed manner, you will need to use MPI, which is managed by Charm++.
You can’t simply run NAMD on separate droplets, as that would not orchestrate the data; it would just run the same process independently on each machine, with no clustering benefit.
Unfortunately I haven’t used NAMD myself, so I can’t be of more help, but this resource should help get you started:
https://scitas-data.epfl.ch/confluence/display/DOC/How+to+launch+NAMD
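For reference, here is a minimal sketch of how NAMD is typically launched on one machine versus across a cluster, assuming a multicore build for the single-droplet case and an MPI-linked build (Open MPI) for the multi-droplet case. The configuration file name `protein_50ns.namd`, the hostfile `hosts.txt`, and the core counts are placeholders you would replace with your own:

```bash
# Single droplet, multicore NAMD build: +p sets the number of CPU cores.
# protein_50ns.namd is a hypothetical configuration file name.
namd2 +p16 protein_50ns.namd > protein_50ns.log

# Several droplets, MPI-linked NAMD build (Open MPI assumed):
# hosts.txt lists the private IPs of the droplets, -np the total ranks.
mpirun -np 32 --hostfile hosts.txt namd2 protein_50ns.namd > protein_50ns.log
```

Either way, the key point is that a single `mpirun`/`charmrun` invocation launches and coordinates all of the processes; starting `namd2` separately on each droplet would give exactly the uncoordinated behaviour described above.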