I have a job that uses libpostal to clean up around 380k addresses in my database. When I first deployed the job, it stopped after around one hour with an error but no description. Is there a limit on how long a job can run, or any other limitations I should be aware of? The job produces a log message for every address it handles. Could the volume of log messages be a problem?
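For context, the core of the job looks roughly like this sketch (assuming the Python pypostal bindings; the function names are simplified and the database access is left out):

```python
# Minimal sketch of what the job does, assuming the Python "pypostal" bindings
# (https://github.com/openvenues/pypostal); the real job reads from and writes
# back to my database.
import logging

from postal.expand import expand_address
from postal.parser import parse_address

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("address-cleanup")


def clean_address(raw: str) -> dict:
    """Normalize one raw address string and split it into labeled components."""
    expansions = expand_address(raw)        # list of normalized variants
    normalized = expansions[0] if expansions else raw
    components = parse_address(normalized)  # list of (value, label) tuples
    return {label: value for value, label in components}


def run_job(addresses):
    """Process every address; logs one line per address (~380k lines in total)."""
    for i, raw in enumerate(addresses, start=1):
        cleaned = clean_address(raw)
        log.info("processed %d: %r -> %r", i, raw, cleaned)
```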


1 answer

Sorry for the trouble. Jobs are limited to 30 minutes, and are currently limited to pre-deploy and post-deploy triggers. I’ve filed an internal ticket to document the time limit and improve the error message. This batch-processing workload is an interesting use case we hadn’t considered; we’ll keep it in mind as we plan future functionality.
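In the meantime, one way to keep a large batch within the limit is to process a bounded chunk per job run and record progress in the database so the next run resumes where the previous one stopped. A rough sketch (the callables and the time budget are placeholders, not a documented pattern):

```python
# Rough sketch of a possible workaround until longer-running jobs are supported:
# do a bounded amount of work per job run and let the next run pick up the rest.
# fetch_uncleaned / clean_one / mark_cleaned are hypothetical hooks into your
# own database and cleanup logic.
import time

TIME_BUDGET_SECONDS = 25 * 60   # stay safely under the 30-minute job limit


def run_chunk(fetch_uncleaned, clean_one, mark_cleaned):
    """Process rows until the time budget runs out, then exit cleanly."""
    deadline = time.monotonic() + TIME_BUDGET_SECONDS
    processed = 0
    for row_id, raw_address in fetch_uncleaned():
        if time.monotonic() >= deadline:
            break                       # unprocessed rows are handled next run
        mark_cleaned(row_id, clean_one(raw_address))
        processed += 1
    return processed
```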

  • I came across several projects where the web app could run on very small instances because there was very little load to handle, but the data import jobs needed a lot of power and could run for up to 24h. So I totally see this as a big feature on your platform.

    Do you know if it will be possible to start/stop workers via the API, and whether DropletKit is going to support App Platform any time soon?
