OpenAI’s first open-source GPT models (20b and 120b) are now available on the Gradient AI Platform. This launch brings even more flexibility and choice to developers building AI-powered applications, whether you’re starting with a quick prototype or scaling a production agent.
Open-source GPT models: Access gpt-oss 20b and 120b directly on the Gradient AI Platform.
Code + UI support: Call the models through our Serverless Inference API, select them in the Gradient dashboard when creating agents, or try them out in the model playground.
Integrated experience: Unified billing, observability, and traceability built into the platform—no need to stitch together multiple vendors, billing, or monitoring tools.
With code: Call the models directly through our Serverless Inference API.
```python
import os
import sys

from gradient import Gradient

# Read the access key from the environment rather than hard-coding it.
model_access_key = os.environ.get("GRADIENT_MODEL_ACCESS_KEY")
if not model_access_key:
    sys.stderr.write("Error: GRADIENT_MODEL_ACCESS_KEY environment variable is not set.\n")
    sys.exit(1)

inference_client = Gradient(model_access_key=model_access_key)

inference_response = inference_client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Write a product description for an eco-friendly water bottle.",
        }
    ],
    model="openai-gpt-oss-20b",
)

print(inference_response.choices[0].message.content)
```
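In production you may want to try the larger model first and fall back to the smaller one if the call fails. A minimal sketch of that pattern is below; the client is injected as a parameter so the helper is easy to test, and the `openai-gpt-oss-120b` model ID is an assumption inferred from the 20b ID shown above.

```python
# Sketch: try each model in order, falling back to the next on error.
# Model IDs follow the pattern above; "openai-gpt-oss-120b" is assumed.
PREFERRED_MODELS = ["openai-gpt-oss-120b", "openai-gpt-oss-20b"]

def chat_with_fallback(client, prompt, models=PREFERRED_MODELS):
    """Try each model in order; return (model_used, reply_text)."""
    last_error = None
    for model in models:
        try:
            response = client.chat.completions.create(
                messages=[{"role": "user", "content": prompt}],
                model=model,
            )
            # Same response shape as the example above.
            return model, response.choices[0].message.content
        except Exception as exc:  # a real caller would catch narrower errors
            last_error = exc
    raise RuntimeError(f"All models failed; last error: {last_error}")
```

Any client exposing the `chat.completions.create` interface works here, so the same helper can be exercised with a stub in tests before pointing it at the real `Gradient` client.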
In the UI: Head to Agent Creation and select gpt-oss 20b or 120b from the model dropdown.
For API users:
👉 Deploy gpt-oss via API: Spin up the 20b or 120b models directly through the Gradient Serverless Inference API and start building in just a few lines of code.
For UI users:
👉 Build with gpt-oss in console: Create an agent in the Gradient AI Platform, select gpt-oss from the model dropdown, and deploy instantly—no code required.