How To Transform JSON Data with jq

Updated on September 23, 2025
By Anish Singh Walia

Sr Technical Writer


Introduction

When working with large JSON files, it can be hard to find and manipulate the information you need. You could copy and paste every relevant snippet and calculate totals by hand, but that process is time-consuming and prone to human error. Another option is to use general-purpose tools for finding and manipulating information. Most modern Linux systems come with three established text-processing utilities installed: sed, awk, and grep. While these commands are helpful when working with loosely structured data, other options exist for machine-readable data formats like JSON.

jq, a command-line JSON processing tool, is a powerful solution for dealing with machine-readable data formats and is especially useful in shell scripts, AI workflows, and modern DevOps pipelines. Using jq can aid you when you need to manipulate data efficiently. For example, if you run a curl call to a JSON API, jq can extract specific information from the server’s response. You could also incorporate jq into your data ingestion process as a data engineer. If you manage a Kubernetes cluster, you could use the JSON output of kubectl as an input source for jq to extract the number of available replicas for a specific deployment.

Modern AI Integration: In today’s AI-driven world, jq plays a crucial role in data preprocessing for machine learning models. It can clean, filter, and transform JSON datasets before feeding them into AI algorithms, making it an essential tool for data scientists and ML engineers working with large-scale JSON data.

Performance at Scale: jq is written in C and optimized for performance, capable of processing multi-gigabyte JSON files efficiently. With streaming capabilities and memory-efficient processing, it handles enterprise-level data processing tasks that would overwhelm traditional text processing tools.

In this comprehensive article, you will use jq to transform a sample JSON file about ocean animals, then progress to advanced techniques including AI integration, performance optimization, and real-world production scenarios. You’ll apply data transformations using filters, merge pieces of transformed data into new data structures, and learn how to integrate jq into modern AI and DevOps workflows. By the end of the tutorial, you will be able to use jq scripts to answer complex questions about data and integrate them into production systems.

Key Takeaways

Before diving into the comprehensive tutorial, here are the essential concepts you’ll learn:

  • Core jq Operations: Learn fundamental filters, mapping, and transformation techniques for JSON data manipulation
  • Performance Optimization: Discover advanced techniques for handling large JSON files (5GB+) with streaming and memory-efficient processing
  • AI Integration: Explore how jq integrates with machine learning pipelines, data preprocessing, and real-time API processing
  • Modern Workflows: Master jq in Kubernetes environments, CI/CD pipelines, and microservices architectures
  • Advanced Techniques: Understand complex nested JSON processing, conditional logic, and error handling for production systems

Prerequisites

To complete this tutorial, you will need the following:

  • jq, a JSON parsing and transformation tool. It is available from the repositories for all major Linux distributions. If you are using Ubuntu, run sudo apt install jq to install it.
  • An understanding of JSON syntax, which you can refresh in An Introduction to JSON.

Step 1 — Executing Your First jq Command

In this step, you will set up your sample input file and test the setup by running a jq command to generate an output of the sample file’s data. jq can take input from either a file or a pipe. You will use the former.

You’ll begin by generating the sample file. Create and open a new file named seaCreatures.json using your preferred editor (this tutorial uses nano):

  1. nano seaCreatures.json

Copy the following contents into the file:

seaCreatures.json
[
    { "name": "Sammy", "type": "shark", "clams": 5 },
    { "name": "Bubbles", "type": "orca", "clams": 3 },
    { "name": "Splish", "type": "dolphin", "clams": 2 },
    { "name": "Splash", "type": "dolphin", "clams": 2 }
]

You’ll work with this data for the rest of the tutorial. By the end of the tutorial, you will have written a one-line jq command that answers the following questions about this data:

  • What are the names of the sea creatures in list form?
  • How many clams do the creatures own in total?
  • How many of those clams are owned by dolphins?

Save and close the file.

In addition to an input file, you will need a filter that describes the exact transformation you’d like to do. The . (period) filter, also known as the identity operator, passes the JSON input unchanged as output.

You can use the identity operator to test whether your setup works. If you see any parse errors, check that seaCreatures.json contains valid JSON.

Apply the identity operator to the JSON file with the following command:

  1. jq '.' seaCreatures.json

When using jq with files, you always pass a filter followed by the input file. Since filters may contain spacing and other characters that hold a special meaning to your shell, it is a good practice to wrap your filter in single quotation marks. Doing so tells your shell that the filter is a command parameter. Rest assured that running jq will not modify your original file.

You’ll receive the following output:

Output
[
  {
    "name": "Sammy",
    "type": "shark",
    "clams": 5
  },
  {
    "name": "Bubbles",
    "type": "orca",
    "clams": 3
  },
  {
    "name": "Splish",
    "type": "dolphin",
    "clams": 2
  },
  {
    "name": "Splash",
    "type": "dolphin",
    "clams": 2
  }
]

By default, jq will pretty print its output. It will automatically apply indentation, add new lines after every value, and color its output when possible. Coloring may improve readability, which can help many developers as they examine JSON data produced by other tools. For example, when sending a curl request to a JSON API, you may want to pipe the JSON response into jq '.' to pretty print it.

You now have jq up and running. With your input file set up, you’ll manipulate the data using a few different filters in order to compute the values of all three attributes: creatures, totalClams, and totalDolphinClams. In the next step, you’ll find the information from the creatures value.

Step 2 — Retrieving the creatures Value

In this step, you will generate a list of all sea creatures, using the creatures value to find their names. At the end of this step, you will have generated the following list of names:

Output
[
  "Sammy",
  "Bubbles",
  "Splish",
  "Splash"
]

Generating this list requires extracting the names of the creatures and then merging them into an array.

You’ll have to refine your filter to get the names of all creatures and discard everything else. Since you’re working on an array, you’ll need to tell jq you want to operate on the values of that array instead of the array itself. The array value iterator, written as .[], serves this purpose.

Run jq with the modified filter:

  1. jq '.[]' seaCreatures.json

Every array value is now output separately:

Output
{
  "name": "Sammy",
  "type": "shark",
  "clams": 5
}
{
  "name": "Bubbles",
  "type": "orca",
  "clams": 3
}
{
  "name": "Splish",
  "type": "dolphin",
  "clams": 2
}
{
  "name": "Splash",
  "type": "dolphin",
  "clams": 2
}

Instead of outputting every array item in full, you’ll want to output the value of the name attribute and discard the rest. The pipe operator | will allow you to apply a filter to each output. If you have used find | xargs on the command line to apply a command to every search result, this pattern will feel familiar.

A JSON object’s name property can be accessed by writing .name. Combine the pipe with the filter and run this command on seaCreatures.json:

  1. jq '.[] | .name' seaCreatures.json

You’ll notice that the other attributes have disappeared from the output:

Output
"Sammy"
"Bubbles"
"Splish"
"Splash"

By default, jq outputs valid JSON, so strings will appear in double quotation marks (""). If you need the string without double quotes, add the -r flag to enable raw output:

  1. jq -r '.[] | .name' seaCreatures.json

The quotation marks have disappeared:

Output
Sammy
Bubbles
Splish
Splash

You now know how to extract specific information from the JSON input. You’ll use this technique to find other specific information in the next step and then to generate the creatures value in the final step.
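If you later need the same extraction inside a larger program, this filter maps directly onto a Python list comprehension. Here is a minimal sketch using only the standard json module, with the sample data inlined so the snippet is self-contained:

```python
import json

# The same data as seaCreatures.json, inlined to keep the example self-contained.
creatures = json.loads('''[
    { "name": "Sammy", "type": "shark", "clams": 5 },
    { "name": "Bubbles", "type": "orca", "clams": 3 },
    { "name": "Splish", "type": "dolphin", "clams": 2 },
    { "name": "Splash", "type": "dolphin", "clams": 2 }
]''')

# Equivalent of jq's '.[] | .name': visit every array element, keep only the name.
names = [creature["name"] for creature in creatures]
print(names)  # ['Sammy', 'Bubbles', 'Splish', 'Splash']
```

jq remains the better fit on the command line; the comparison is only meant to show how the filter's shape carries over.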

Step 3 — Computing the totalClams Value with map and add

In this step, you’ll compute how many clams the creatures own in total. You can calculate the answer by aggregating a few pieces of data. Once you’re familiar with jq, this will be faster than manual calculation and less prone to human error. The expected value at the end of this step is 12.

In Step 2, you extracted specific bits of information from a list of items. You can reuse this technique to extract the values of the clams attribute. Adjust the filter for this new attribute and run the command:

  1. jq '.[] | .clams' seaCreatures.json

The individual values of the clams attribute will be output:

Output
5
3
2
2

To find the sum of individual values, you will need the add filter. The add filter works on arrays. However, you are currently outputting array values, so you must wrap them in an array first.

Surround your existing filter with [] as follows:

  1. jq '[.[] | .clams]' seaCreatures.json

The values will appear in a list:

Output
[
  5,
  3,
  2,
  2
]

Before applying the add filter, you can improve the readability of your command with the map function, which also makes it easier to maintain. Iterating over an array, applying a filter to each of those items, and then wrapping the results in an array can be achieved with one map invocation. Given an array of items, map will apply its argument as a filter to each item. For example, if you apply the filter map(.name) to [{"name": "Sammy"}, {"name": "Bubbles"}], the resulting JSON object will be ["Sammy", "Bubbles"].

Rewrite the array-generating filter to use the map function instead, then run it:

  1. jq 'map(.clams)' seaCreatures.json

You will receive the same output as before:

Output
[
  5,
  3,
  2,
  2
]

Since you have an array now, you can pipe it into the add filter:

  1. jq 'map(.clams) | add' seaCreatures.json

You’ll receive a sum of the array:

Output
12

With this filter, you have calculated the total number of clams, which you’ll use to generate the totalClams value later. You’ve written filters for two out of three questions. You have one more filter to create, after which you can generate the final output.
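The map(.clams) | add pipeline also has a direct counterpart in Python: map becomes a generator expression and add becomes the built-in sum. A minimal sketch with the clams values inlined:

```python
import json

creatures = json.loads(
    '[{"clams": 5}, {"clams": 3}, {"clams": 2}, {"clams": 2}]'
)

# Equivalent of jq's 'map(.clams) | add': extract each clams value, then sum.
total_clams = sum(creature["clams"] for creature in creatures)
print(total_clams)  # 12
```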

Step 4 — Computing the totalDolphinClams Value with the add Filter

Now that you know how many clams the creatures own, you can identify how many of those clams the dolphins have. You can generate the answer by adding only the values of array elements that satisfy a specific condition. The expected value at the end of this step is 4, which is the total number of clams the dolphins have. In the final step, the resulting value will be used by the totalDolphinClams attribute.

Instead of adding all clams values as you did in Step 3, you’ll count only clams held by creatures with the "dolphin" type. You’ll use the select function to select a specific condition: select(condition). Any input for which the condition evaluates to true is passed on. All other input is discarded. If, for example, your JSON input is "dolphin" and your filter is select(. == "dolphin"), the output would be "dolphin". For the input "Sammy", the same filter would output nothing.

To apply select to every value in an array, you can pair it with map. In doing so, array values that don’t satisfy the condition will be discarded.

In your case, you only want to retain array values whose type value equals "dolphin". The resulting filter is:

  1. jq 'map(select(.type == "dolphin"))' seaCreatures.json

Your filter will not match Sammy the shark and Bubbles the orca, but it will match the two dolphins:

Output
[
  {
    "name": "Splish",
    "type": "dolphin",
    "clams": 2
  },
  {
    "name": "Splash",
    "type": "dolphin",
    "clams": 2
  }
]

This output contains the number of clams per creature, as well as some information that isn’t relevant. To retain only the clams value, you can append the name of the field to the end of map’s parameter:

  1. jq 'map(select(.type == "dolphin").clams)' seaCreatures.json

The map function receives an array as input and will apply map’s filter (passed as an argument) to each array element. As a result, select gets called four times, once per creature. The select function will produce output for the two dolphins (as they match the condition) and omit the rest.

Your output will be an array containing only the clams values of the two matching creatures:

Output
[ 2, 2 ]

Pipe the array values into add:

  1. jq 'map(select(.type == "dolphin").clams) | add' seaCreatures.json

Your output will return the sum of the clams values from creatures of the "dolphin" type:

Output
4

You’ve successfully combined map and select to access an array, select array items matching a condition, transform them, and sum the result of that transformation. You can use this strategy to calculate totalDolphinClams in the final output, which you will do in the next step.
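For comparison, the select condition corresponds to an if clause in a Python comprehension. A sketch of the same computation, with the sample data inlined:

```python
import json

creatures = json.loads('''[
    {"name": "Sammy", "type": "shark", "clams": 5},
    {"name": "Bubbles", "type": "orca", "clams": 3},
    {"name": "Splish", "type": "dolphin", "clams": 2},
    {"name": "Splash", "type": "dolphin", "clams": 2}
]''')

# Equivalent of 'map(select(.type == "dolphin").clams) | add':
# the if clause plays the role of select(), keeping only dolphins.
total_dolphin_clams = sum(
    creature["clams"] for creature in creatures if creature["type"] == "dolphin"
)
print(total_dolphin_clams)  # 4
```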

Step 5 — Transforming Data to a New Data Structure

In the previous steps, you wrote filters to extract and manipulate the sample data. Now, you can combine these filters to generate an output that answers your questions about the data:

  • What are the names of the sea creatures in list form?
  • How many clams do the creatures own in total?
  • How many of those clams are owned by dolphins?

To find the names of the sea creatures in list form, you used the map function: map(.name). To find how many clams the creatures own in total, you piped all clams values into the add filter: map(.clams) | add. To find how many of those clams are owned by dolphins, you used the select function with the .type == "dolphin" condition: map(select(.type == "dolphin").clams) | add.

You’ll combine these filters into one jq command that does all of the work. You will create a new JSON object that merges the three filters in order to create a new data structure that displays the information you desire.

As a reminder, your starting JSON file matches the following:

seaCreatures.json
[
    { "name": "Sammy", "type": "shark", "clams": 5 },
    { "name": "Bubbles", "type": "orca", "clams": 3 },
    { "name": "Splish", "type": "dolphin", "clams": 2 },
    { "name": "Splash", "type": "dolphin", "clams": 2 }
]

Your transformed JSON output will generate the following:

Final Output
{
  "creatures": [
    "Sammy",
    "Bubbles",
    "Splish",
    "Splash"
  ],
  "totalClams": 12,
  "totalDolphinClams": 4
}

Here is a demonstration of the syntax for the full jq command with empty input values:

  1. jq '{ creatures: [], totalClams: 0, totalDolphinClams: 0 }' seaCreatures.json

With this filter, you create a JSON object containing three attributes:

Output
{
  "creatures": [],
  "totalClams": 0,
  "totalDolphinClams": 0
}

That’s starting to look like the final output, but the input values are not correct because they have not been pulled from your seaCreatures.json file.

Replace the hard-coded attribute values with the filters you created in each prior step:

  1. jq '{ creatures: map(.name), totalClams: map(.clams) | add, totalDolphinClams: map(select(.type == "dolphin").clams) | add }' seaCreatures.json

The above filter tells jq to create a JSON object containing:

  • A creatures attribute containing a list of every creature’s name value.
  • A totalClams attribute containing a sum of every creature’s clams value.
  • A totalDolphinClams attribute containing a sum of every creature’s clams value for which type equals "dolphin".

Run the command, and the output of this filter should be:

Output
{
  "creatures": [
    "Sammy",
    "Bubbles",
    "Splish",
    "Splash"
  ],
  "totalClams": 12,
  "totalDolphinClams": 4
}

You now have a single JSON object providing relevant data for all three questions. Should the dataset change, the jq filter you wrote will allow you to re-apply the transformations at any time.
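As a point of reference, many teams prototype a transformation like this in jq and later embed it in application code. The whole one-liner corresponds to just a few lines of Python with the standard json module (a sketch with the sample data inlined):

```python
import json

creatures = json.loads('''[
    {"name": "Sammy", "type": "shark", "clams": 5},
    {"name": "Bubbles", "type": "orca", "clams": 3},
    {"name": "Splish", "type": "dolphin", "clams": 2},
    {"name": "Splash", "type": "dolphin", "clams": 2}
]''')

# Build the same object as the jq filter: creatures, totalClams, totalDolphinClams.
summary = {
    "creatures": [c["name"] for c in creatures],
    "totalClams": sum(c["clams"] for c in creatures),
    "totalDolphinClams": sum(c["clams"] for c in creatures if c["type"] == "dolphin"),
}
print(json.dumps(summary, indent=2))
```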

Advanced jq Techniques for Production Systems

How to optimize jq performance for large files

When working with large JSON files (5GB+), performance becomes critical. Here are advanced techniques to optimize jq performance:

1. Streaming Processing

For very large files, use the --stream option to process JSON incrementally:

  1. jq --stream 'select(.[0] | length == 2) | .[1]' large-file.json

This approach processes JSON in a streaming fashion, reducing memory usage significantly.

2. Memory-Efficient Filtering

When processing large datasets, combine filters efficiently:

  1. jq -c '.[] | select(.type == "dolphin") | {name: .name, clams: .clams}' seaCreatures.json

The -c flag outputs compact JSON, reducing memory overhead.

How to integrate jq with AI and Machine Learning

1. Data Preprocessing for ML Models

jq excels at preparing JSON data for machine learning pipelines. Here’s how to transform API responses for ML training:

# Extract features from API response
curl -s "https://api.example.com/data" | jq -c '.[] | {
  feature1: .value1,
  feature2: .value2,
  target: .outcome
}' > training_data.jsonl

2. Real-time Data Processing

For real-time AI applications, combine jq with streaming tools:

# Process streaming JSON data
tail -f /var/log/api-responses.json | jq -c 'select(.status == "success") | {timestamp: .time, data: .payload}'

Kubernetes and DevOps Integration

1. Processing kubectl Output

Extract specific information from Kubernetes resources:

  1. kubectl get pods -o json | jq '.items[] | select(.status.phase == "Running") | {name: .metadata.name, node: .spec.nodeName}'

2. CI/CD Pipeline Integration

Use jq in GitHub Actions or GitLab CI:

# GitHub Actions example
- name: Extract version
  run: |
    VERSION=$(jq -r '.version' package.json)
    echo "VERSION=$VERSION" >> $GITHUB_ENV

Error Handling and Validation

1. JSON Validation

Validate JSON structure before processing:

  1. jq '. as $data | if $data | type == "object" then $data else error("Invalid JSON structure") end' input.json

2. Graceful Error Handling

Handle missing fields gracefully:

  1. jq '.[]? | {name: .name, clams: (.clams // 0)}' seaCreatures.json

The []? variant suppresses the error jq would otherwise raise if the input were not an array or object, and the // operator supplies a default value (here 0) when .clams is missing or null. Note that seaCreatures.json is a top-level array, so the iterator is .[]? rather than .creatures[]?.
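In Python terms, supplying a default with // behaves much like dict.get with a fallback value (with the difference that jq's // also replaces null and false, not only missing keys). A sketch with a record that lacks the clams field entirely:

```python
import json

# The second record is missing its clams field.
records = json.loads('[{"name": "Sammy", "clams": 5}, {"name": "Newcomer"}]')

# Equivalent of '{name: .name, clams: (.clams // 0)}': fall back to 0 when absent.
cleaned = [{"name": r["name"], "clams": r.get("clams", 0)} for r in records]
print(cleaned)  # [{'name': 'Sammy', 'clams': 5}, {'name': 'Newcomer', 'clams': 0}]
```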

Modern Use Cases and Real-World Examples of jq

1. API Response Processing

Process complex API responses efficiently:

  1. # Extract issue summaries from the GitHub API (the endpoint returns a top-level array)
  2. curl -s "https://api.github.com/repos/jqlang/jq/issues" | jq '.[] | {title: .title, state: .state, created: .created_at}'

2. Data Pipeline Integration

Transform data between different system formats:

  1. # Convert JSON to CSV for database import
  2. jq -r '.[] | [.name, .type, .clams] | @csv' seaCreatures.json > creatures.csv
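
If the conversion ever needs to move into application code, the @csv step can be reproduced with Python's standard csv module. A sketch that writes to an in-memory buffer instead of creatures.csv (note that csv.writer only quotes fields when necessary, while jq's @csv quotes every string):

```python
import csv
import io
import json

creatures = json.loads('''[
    {"name": "Sammy", "type": "shark", "clams": 5},
    {"name": "Bubbles", "type": "orca", "clams": 3}
]''')

# Equivalent of jq -r '.[] | [.name, .type, .clams] | @csv'.
buffer = io.StringIO()
writer = csv.writer(buffer)
for creature in creatures:
    writer.writerow([creature["name"], creature["type"], creature["clams"]])

print(buffer.getvalue())
```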

3. Monitoring and Alerting

Create monitoring dashboards with jq:

  1. # Convert kubectl's plain-text metrics into JSON (columns are separated by runs of spaces)
  2. kubectl top pods --no-headers | jq -R '[splits(" +")] | {name: .[0], cpu: .[1], memory: .[2]}'

FAQs

1. What is jq and why should I use it for JSON processing?

jq is a lightweight, high-performance command-line JSON processor written in C. It’s designed specifically for JSON data manipulation and offers several advantages over general-purpose text processing tools:

  • Performance: Written in C, jq processes large JSON files (5GB+) efficiently with minimal memory usage
  • JSON-Aware: Unlike sed or awk, jq understands JSON structure and handles nested objects and arrays correctly
  • Streaming Support: Can process JSON data incrementally using the --stream option for very large files
  • Cross-Platform: Available on Linux, macOS, Windows, and most Unix-like systems
  • AI Integration: Essential for data preprocessing in machine learning pipelines and real-time data processing

Example Use Case: Processing API responses, transforming data for ML models, or extracting specific information from Kubernetes resources.

2. How do I install jq on Ubuntu, macOS, or Windows?

Ubuntu/Debian:

sudo apt update && sudo apt install jq

macOS (using Homebrew):

brew install jq

Windows:

  1. Download the Windows binary from the official jq website
  2. Add the executable to your PATH
  3. Or use Windows Subsystem for Linux (WSL) and install via apt

Verify Installation:

jq --version

3. What are the most common jq commands for beginners?

Here are the essential jq commands every developer should know:

1. Pretty Print JSON:

jq '.' file.json

2. Extract Specific Fields:

jq '.name' file.json
jq '.users[].email' file.json

3. Filter Arrays:

jq '.[] | select(.age > 18)' file.json

4. Transform Data:

jq '.[] | {name: .name, id: .id}' file.json

5. Combine Operations:

jq '.[] | select(.active == true) | {name: .name, email: .email}' file.json

4. Can jq handle large JSON files efficiently?

Yes, jq is specifically designed for efficient processing of large JSON files. Here are the key performance features:

Memory Efficiency:

  • Uses streaming processing with --stream option
  • Processes data incrementally rather than loading entire file into memory
  • Optimized C implementation for speed

Performance Tips:

# For very large files, use streaming
jq --stream 'select(.[0] | length == 2) | .[1]' large-file.json

# Use compact output to reduce memory
jq -c '.[] | select(.type == "target")' file.json

# Process line-by-line for JSONL files
cat file.jsonl | jq -c 'select(.status == "success")'

Benchmarks: jq can process multi-gigabyte JSON files efficiently, often outperforming Python-based JSON processors for large datasets.
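The JSONL pattern in the last command has a natural streaming counterpart in Python: read one line at a time so the whole file never has to fit in memory. A sketch using an in-memory stream in place of file.jsonl:

```python
import io
import json

# Stand-in for file.jsonl: one JSON object per line.
stream = io.StringIO(
    '{"id": 1, "status": "success"}\n'
    '{"id": 2, "status": "error"}\n'
    '{"id": 3, "status": "success"}\n'
)

# Equivalent of jq -c 'select(.status == "success")' over a JSONL stream.
successful_ids = []
for line in stream:
    record = json.loads(line)
    if record["status"] == "success":
        successful_ids.append(record["id"])

print(successful_ids)  # [1, 3]
```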

5. How does jq compare to using Python or Node.js for JSON transformation?

Feature          jq                    Python                  Node.js
Performance      Excellent (C-based)   Good                    Good
Memory Usage     Very Low              Higher                  Higher
Command Line     Native                Requires scripts        Requires scripts
Streaming        Built-in              Manual implementation   Manual implementation
Learning Curve   Moderate              Easy                    Easy
AI Integration   Excellent             Excellent               Good

When to Use jq:

  • Command-line data processing
  • Large file processing
  • Shell script integration
  • Real-time data streaming
  • CI/CD pipeline automation

When to Use Python/Node.js:

  • Complex data transformations requiring custom logic
  • Integration with existing codebases
  • When you need extensive libraries and frameworks
  • Complex error handling and validation

Hybrid Approach: Many developers use jq for initial data extraction and filtering, then pass the results to Python/Node.js for complex processing.
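
A minimal sketch of that hybrid pattern: jq filters on the command line and streams compact JSON lines into a Python script, for example jq -c '.[] | select(.type == "dolphin")' seaCreatures.json | python process.py (the script name is illustrative). The snippet below simulates stdin with an in-memory stream so it is self-contained:

```python
import io
import json

# In a real pipeline this would be sys.stdin, fed one JSON object per line by jq -c.
stdin = io.StringIO(
    '{"name": "Splish", "type": "dolphin", "clams": 2}\n'
    '{"name": "Splash", "type": "dolphin", "clams": 2}\n'
)

# jq already did the filtering; Python handles the custom logic.
total = 0
for line in stdin:
    creature = json.loads(line)
    total += creature["clams"]

print(f"Total dolphin clams: {total}")  # Total dolphin clams: 4
```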

6. How can I integrate jq into AI and machine learning workflows?

jq is becoming increasingly important in AI/ML workflows for several reasons:

Data Preprocessing:

# Clean and structure training data
cat raw_data.json | jq -c 'select(.quality_score > 0.8) | {
  features: [.feature1, .feature2, .feature3],
  target: .outcome
}' > clean_training_data.jsonl

Real-time Data Processing:

# Process streaming data for ML inference
tail -f /var/log/api-data.json | jq -c 'select(.prediction_ready == true) | {
  input: .features,
  timestamp: .time
}' | python ml_inference.py

API Response Processing:

# Extract features from API responses
curl -s "https://api.ml-service.com/predict" | jq '.predictions[] | {
  id: .id,
  confidence: .confidence,
  prediction: .result
}'

Kubernetes ML Pipeline Integration:

# Extract model metrics from Kubernetes
kubectl get pods -l app=ml-model -o json | jq '.items[] | {
  name: .metadata.name,
  status: .status.phase,
  resources: .spec.containers[0].resources
}'

Benefits for AI/ML:

  • Fast data preprocessing for large datasets
  • Real-time data filtering and transformation
  • Integration with containerized ML workflows
  • Efficient handling of streaming data for online learning systems

Conclusion

When working with JSON input, jq can help you perform a wide variety of data transformations that would be difficult with text manipulation tools like sed. In this comprehensive tutorial, you learned fundamental operations like filtering data with the select function, transforming array elements with map, summing arrays of numbers with the add filter, and merging transformations into new data structures.

Advanced Capabilities: You also discovered how jq integrates with modern AI workflows, handles large-scale data processing, and fits into production DevOps pipelines. The tool’s performance optimization features make it suitable for enterprise-level data processing tasks.

AI Integration: jq has become an essential tool for data scientists and ML engineers, providing efficient JSON preprocessing capabilities that integrate seamlessly with machine learning pipelines and real-time data processing systems.

Production Ready: With its streaming capabilities, error handling, and Kubernetes integration, jq is ready for production environments where reliability and performance are critical.

To learn about jq's advanced features, dive into the jq reference documentation. If you often work with non-JSON command output, you can explore our guides on sed, awk, or grep for text-processing techniques that work on any format.

For related JSON processing techniques, refer to our tutorials:

  1. Python JSON processing: Learn how to pretty-print and manipulate JSON data using Python, making it more readable and easier to work with in scripts.
  2. JSONPath examples: Explore practical examples of using JSONPath expressions to query and extract specific data elements from complex JSON structures, often used in conjunction with Python.
  3. JSON fundamentals: Get a comprehensive introduction to JSON (JavaScript Object Notation), covering its basic syntax, data types, and common use cases for data interchange.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.