LangChain Meets Gradient: Open-Source, Serverless, and Fast

Published on September 12, 2025
By Narasimha Badrinath

When you’re building AI applications, the right tooling makes all the difference. LangChain has been a go-to framework for years, and its rich ecosystem of integrations helps developers move from idea to production very quickly.

With langchain-gradient, the official DigitalOcean integration for LangChain, you can pair the Gradient AI Platform’s serverless inference with LangChain’s agents, tools, and chaining.

In this guide, you’ll learn why langchain-gradient helps developers improve their agent workflow, how to connect Gradient’s serverless inference to LangChain in minutes, and how to use invoke and stream with concise examples.

What is LangChain-Gradient?

The new langchain-gradient integration can improve your workflows in a number of ways:

  • Simple drop-in for existing LangChain code: Swap in Gradient endpoints with a few lines—no rewrites, no refactors, just plug-and-play.
  • Familiar LangChain abstractions (Chains, Tools, Agents): Build with the primitives you already know—compose chains, plug in tools, and spin up agents without changing your workflow.
  • Choose from multiple model options: Instantly access multiple AI models with GPU-accelerated, serverless inference on DigitalOcean.
  • Stay open and flexible: The package is fully open-source and designed to work with the latest versions of LangChain and Gradient AI Platform.

LangChain maintains its own documentation on the integration, and the PyPI package page helps make setup seamless.

You can also watch a short walkthrough with code examples that shows the integration in action.

Getting a DigitalOcean API Key

To run langchain-gradient, you’ll need to get your key from the DigitalOcean Cloud console first:

  1. Log in to the DigitalOcean Cloud console.
  2. Open Agent Platform → Serverless Inference.
  3. Click “Create model access key,” name it, and create the key.
  4. Use the generated key as your DIGITALOCEAN_INFERENCE_KEY.

Export your key as an environment variable:

export DIGITALOCEAN_INFERENCE_KEY="your_key_here"
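If you’d rather fail fast when the key is missing, a small helper can check the environment before any requests are made. This is a minimal sketch using only the standard library; the helper name require_key is illustrative, not part of the package:

```python
import os

def require_key(name: str = "DIGITALOCEAN_INFERENCE_KEY") -> str:
    """Return the named environment variable, or raise a clear error."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set. Create a model access key in the "
            "DigitalOcean Cloud console and export it first."
        )
    return value

# key = require_key()  # raises immediately if the key was never exported
```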

Installing LangChain-Gradient

To install the package, run the following command:

pip install langchain-gradient

Available Functions

  • invoke: simple, single-shot calls
    • Use this when you want a one-off completion and are okay waiting for the full response before handling it. It returns a complete string/message object after the model finishes generating. Ideal for synchronous scripts, batch jobs, or server endpoints that respond only once per request.
  • stream: token streaming for responsive UIs/logging
    • Use this when you want partial output as it’s generated. It yields chunks/tokens incrementally, enabling real-time display in terminals, notebooks, or chat apps, and is helpful for progress logging or early-cancel scenarios.

Using Invoke

import os

from langchain_gradient import ChatGradient

llm = ChatGradient(
    model="llama3.3-70b-instruct",
    api_key=os.getenv("DIGITALOCEAN_INFERENCE_KEY"),
)

result = llm.invoke(
    "Summarize the plot of the movie 'Inception' in two sentences, and then explain its ending."
)

# invoke returns a message object; .content holds the generated text
print(result.content)
  • Imports: ChatGradient is the LangChain-compatible LLM client for Gradient.
  • llm = ChatGradient(…): Creates an LLM instance.
    • model: Set to “llama3.3-70b-instruct”. Can be any available model from the Gradient AI Platform.
    • api_key: Reads your DigitalOcean Inference API key from env.
  • result = llm.invoke(“…”): Sends the prompt to the selected model and gets the generated response.
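Beyond a single string, LangChain chat models also accept a list of (role, content) pairs, which is handy for adding a system prompt. A quick sketch (the prompt text here is illustrative):

```python
# Role/content pairs: LangChain converts these into chat messages.
messages = [
    ("system", "You are a concise movie critic."),
    ("human", "Summarize the plot of 'Inception' in two sentences."),
]

# Pass the list instead of a plain string:
# result = llm.invoke(messages)
# print(result.content)
```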

Using Streaming

import os

from langchain_gradient import ChatGradient

llm = ChatGradient(
    model="llama3.3-70b-instruct",
    api_key=os.getenv("DIGITALOCEAN_INFERENCE_KEY"),
)

# Each chunk is a message chunk; .content holds the incremental text.
for chunk in llm.stream("Give me three fun facts about octopuses."):
    print(chunk.content, end="", flush=True)
  • llm.stream(“…”): Requests a streamed response for the prompt.
  • for chunk in …: Iterates over incremental tokens/chunks and prints them in real time.

This prints tokens as they arrive, which is perfect for CLIs, notebooks, or chat UIs.
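The early-cancel scenario mentioned earlier is just a matter of breaking out of the loop. The pattern is the same for any token iterator, so this sketch uses a stand-in generator in place of llm.stream(...):

```python
def fake_stream():
    # Stand-in for llm.stream(...): yields tokens one at a time.
    for token in ["Octopuses ", "have ", "three ", "hearts ", "and ", "blue ", "blood."]:
        yield token

collected = []
for token in fake_stream():
    collected.append(token)
    if len(collected) >= 4:  # stop early, e.g. after a user cancels
        break

print("".join(collected))  # prints only the tokens received before the break
```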

FAQs

What is LangChain?

LangChain is a framework for building applications powered by large language models. It provides standard abstractions (chains, tools, agents) and a large ecosystem of integrations to help developers compose end-to-end LLM apps quickly.

What is langchain-gradient?

It’s the official DigitalOcean integration that lets LangChain apps call Gradient AI’s serverless inference endpoints using a LangChain-compatible ChatGradient client.

Which models can I use?

You can select from multiple Gradient-hosted models (e.g., Llama variants). Choose a model ID from Gradient’s documentation and pass it to ChatGradient via the model parameter.

How do I authenticate?

Create a model access key in the DigitalOcean Cloud console (Agent Platform → Serverless Inference), then export it as DIGITALOCEAN_INFERENCE_KEY and pass it to ChatGradient.

Does it support streaming?

Yes. Use llm.stream(…) to receive tokens incrementally—ideal for CLIs, notebooks, and chat UIs. Use llm.invoke(…) for simple single-shot calls.

Conclusion

langchain-gradient makes it fast and practical to go from idea to production. With a drop-in client, familiar LangChain abstractions, and GPU-accelerated serverless inference on DigitalOcean, you can prototype quickly, stream results in real time, and scale without refactoring. The integration is open source and flexible, and it keeps pace with the latest LangChain and Gradient AI Platform releases.
