Customise OpenAI GPT OSS 120b on Ubuntu 24.04 GPU Droplet

Posted on August 11, 2025

How can I customise the AMD GPU Droplet?

When I run the command ps -ef | grep vllm, it returns:

root        1890    1844  0 02:26 ?        00:00:00 /bin/sh -c vllm serve openai/gpt-oss-120b --port 8000 --tensor-parallel 1 --no-enable-prefix-caching --compilation-config '{"full_cuda_graph": true}'

I want to find a way to modify the vLLM configuration to include other flags, for example --enable-auto-tool-choice from https://docs.vllm.ai/en/latest/features/tool_calling.html.

How can I do this?
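
For context, the end result I am after is a serve command along these lines (just a sketch: the existing flags are kept as they appear in the process listing above, and the --tool-call-parser value is a placeholder that would have to be chosen from the tool calling docs linked above):

vllm serve openai/gpt-oss-120b --port 8000 --tensor-parallel 1 --no-enable-prefix-caching --compilation-config '{"full_cuda_graph": true}' --enable-auto-tool-choice --tool-call-parser <parser_name>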


