How to enable TTLAfterFinished in the API server?

Posted July 27, 2019 2.1k views

I’m trying to auto-clean my jobs after they finish, and to do that I need to set the TTLAfterFinished feature gate to true in the API server (documentation).

Since DigitalOcean creates the API server on its own, I’m not sure how to modify it. How can I do it?
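For context, once the TTLAfterFinished feature gate is enabled, a Job can clean itself up via spec.ttlSecondsAfterFinished. A minimal sketch (the name and image are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job            # illustrative name
spec:
  ttlSecondsAfterFinished: 100 # delete this Job 100s after it finishes
  template:
    spec:
      containers:
      - name: main
        image: busybox
        command: ["echo", "done"]
      restartPolicy: Never
```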



The master API arguments and settings are managed by DO and unfortunately cannot be changed. See our documentation here for the currently enabled feature gates.

We are always open to customer feedback on master and kubelet settings.

There are other ways that you can achieve this sort of behavior without modifying the master API. If your jobs are created via Kubernetes CronJobs, you can set history limits to keep only a certain number of successful or failed jobs.
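As a sketch, the history limits live on the CronJob spec (the name, schedule, and image here are assumptions; batch/v1beta1 was the CronJob API version at the time):

```yaml
apiVersion: batch/v1beta1        # batch/v1 on Kubernetes 1.21+
kind: CronJob
metadata:
  name: example-cron             # illustrative name
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 1  # keep only the most recent successful Job
  failedJobsHistoryLimit: 1      # keep only the most recent failed Job
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: main
            image: busybox
            command: ["echo", "done"]
          restartPolicy: Never
```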


John Kwiatkoski
Senior Developer Support Engineer

  • Actually my problem is not with CronJobs but with vanilla Jobs. My cron jobs are running and cleaning themselves up fine. My problem is that for every deploy I run 2 Jobs which I wish to remove after they have finished. Otherwise, the next round of deployment fails because Kubernetes says I’m trying to recreate an existing Job.

    • Hi,

      Thank you for your clarification. To accomplish this on our platform, you would unfortunately need a more “creative” workaround. Off the top of my head I would consider the following options:

      1. Create a third job along with the other ones that runs a one-liner to cleanup all the jobs. I have validated cleanup could work with this command:
        kubectl delete pod --field-selector=status.phase=Succeeded
        For redundancy you can also add additional labels to your job’s pods to ensure you’re not deleting any other pods like so:
        kubectl delete pod -l job=job1 --field-selector=status.phase=Succeeded
        That ensures the pods being deleted carry the job=job1 label before they are even considered for deletion.

      2. Create a Kubernetes CronJob to clean up your other Jobs’ pods; you can then use the history limits mentioned above to have the CronJob clean itself up.

      3. Create a local Linux cron job that runs the command from #1 on your own machine.

      4. Depending on the job, you could instead include those tasks in your deployment’s init containers.
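      As a sketch of option 2, a CronJob could run the cleanup command from option 1 on a schedule. The name, schedule, image, and ServiceAccount below are assumptions, and the ServiceAccount would need RBAC permission to delete pods:

```yaml
apiVersion: batch/v1beta1         # batch/v1 on Kubernetes 1.21+
kind: CronJob
metadata:
  name: job-cleanup               # illustrative name
spec:
  schedule: "0 * * * *"           # hourly
  successfulJobsHistoryLimit: 1   # the cleanup CronJob also cleans itself up
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: job-cleanup   # assumed SA with pod delete rights
          containers:
          - name: cleanup
            image: bitnami/kubectl          # any image providing kubectl works
            command:
            - kubectl
            - delete
            - pod
            - -l
            - job=job1
            - --field-selector=status.phase=Succeeded
          restartPolicy: Never
```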

      I would recommend you add this topic to our customer feedback page. That is reviewed from time to time to gauge feature request priority.

      Let me know if you have any additional questions.

