Java Memory Management Explained

Updated on August 7, 2025

Introduction

Efficient memory management is critical to building reliable, high-performance Java applications. The Java Virtual Machine (JVM) handles memory allocation and garbage collection automatically, but understanding how memory is structured and managed under the hood is essential for diagnosing issues, tuning performance, and writing optimized code. From heap and stack memory to the intricacies of native memory and garbage collection algorithms, the JVM’s memory architecture plays a central role in every Java program’s execution.

This article provides a deep dive into the Java memory model and JVM memory management. It covers the structure and purpose of various memory areas, including the heap, stack, Metaspace, and native method stack. You’ll explore how garbage collection works, compare modern collectors like G1, ZGC, and Shenandoah, and learn how to monitor memory usage using tools such as VisualVM, Java Flight Recorder (JFR), and DJXPerf. The article also discusses common memory errors, memory leak patterns, tuning strategies, and best practices to help developers maintain memory-efficient and scalable Java applications.

Key Takeaways:

  • JVM memory is divided into distinct areas, including the heap, stack, Metaspace, PC register, and native method stack, each serving specific roles in program execution.

  • Heap memory stores all dynamically allocated objects and is managed by the garbage collector. It is further divided into Young and Old Generations to optimize object lifecycle management.

  • Stack memory is thread-specific and holds method frames, including local variables and object references. The actual object data referenced in the stack resides in the heap.

  • Metaspace, introduced in Java 8, replaces PermGen and stores class metadata in native memory. While it grows automatically, it can still lead to OutOfMemoryError if not monitored.

  • Garbage Collection (GC) is automatic in Java but can be tuned with JVM flags. Understanding GC phases (mark, sweep, compact) and the difference between Minor, Major, and Full GC is key to effective memory management.

  • The JVM offers several GC algorithms: Serial GC, Parallel GC, CMS (deprecated), G1 GC, ZGC, and Shenandoah, each with different trade-offs for latency, throughput, and scalability.

  • Modern collectors like ZGC and Shenandoah are designed for low-latency, large-heap applications and offer near-pause-less behavior, but require understanding of their tuning and platform requirements.

  • Tools like jstat, jmap, VisualVM, JFR, and DJXPerf are essential for monitoring memory usage, analyzing heap dumps, and diagnosing performance bottlenecks and memory leaks.

  • Common memory issues include memory leaks, OutOfMemoryErrors, and native memory exhaustion. These often result from unbounded data structures, classloader retention, or improper use of JNI/direct buffers.

What is the Java Memory Model (JMM)?

The Java Memory Model (JMM) defines how threads interact with memory in the Java Virtual Machine, particularly in multithreaded environments. It establishes rules around visibility, ordering, and atomicity of variable access, ensuring that concurrent programs behave consistently across different hardware and JVM implementations.

At its core, the JMM addresses a fundamental challenge: modern CPUs and compilers may reorder instructions or cache values for performance, which can cause one thread to see stale or inconsistent data written by another. Without the JMM, reasoning about the correctness of multithreaded code would be nearly impossible. The model ensures that developers can write predictable and thread-safe programs without needing to understand platform-specific behaviors.

A central concept in the JMM is memory visibility. For example, if one thread sets a flag to true, another thread may not immediately see that change unless synchronization is used. This issue occurs because the write may not be flushed from the first thread’s cache or may not be read from main memory by the second thread. The JMM provides mechanisms, such as volatile variables and synchronized blocks, to explicitly control this visibility.
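The flag scenario above can be sketched in a few lines. This is an illustrative example (class and field names are not from the article): marking the flag volatile guarantees that the worker thread observes the write made by the main thread.

```java
// Sketch of the visibility problem described above: without volatile,
// the worker thread might never observe the updated flag.
public class VolatileFlag {
    // volatile guarantees that a write by one thread is visible
    // to subsequent reads by other threads
    private volatile boolean running = true;

    public void stop() {
        running = false; // volatile write: made visible to other threads
    }

    public void spinUntilStopped() {
        while (running) {
            // volatile read: always sees the latest write
        }
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileFlag flag = new VolatileFlag();
        Thread worker = new Thread(flag::spinUntilStopped);
        worker.start();
        Thread.sleep(50);   // let the worker spin briefly
        flag.stop();        // without volatile, the worker could spin forever
        worker.join(2000);  // terminates promptly because the write is visible
        System.out.println("worker alive: " + worker.isAlive());
    }
}
```

Removing volatile here may cause the loop to spin indefinitely on some JVMs, because the JIT compiler is free to hoist the read of a non-volatile field out of the loop.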

Another key component of the JMM is the happens-before relationship, which defines a partial ordering of operations between threads. If action A happens-before action B, then all effects of A are guaranteed to be visible to B. This relationship is established through common constructs like Thread.start(), Thread.join(), synchronized blocks, and volatile reads/writes. These guarantees are what allow developers to write correct concurrent code without race conditions or subtle timing bugs.
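Two of the happens-before edges listed above, Thread.start() and Thread.join(), can be demonstrated directly. In this sketch the shared field is deliberately not volatile: no extra synchronization is needed because join() orders the child thread's write before the parent's read.

```java
// Illustrative sketch of happens-before via Thread.start()/Thread.join().
public class HappensBefore {
    static int result = 0; // deliberately NOT volatile

    public static int compute() throws InterruptedException {
        Thread t = new Thread(() -> {
            result = 42; // write inside the child thread
        });
        t.start(); // everything before start() happens-before the thread body
        t.join();  // join() happens-after everything the thread did,
                   // so the write to `result` is guaranteed visible here
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(compute()); // prints 42
    }
}
```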

Understanding the JMM is essential for anyone working with Java concurrency. It underpins the behavior of the java.util.concurrent package, low-level synchronization primitives, and lock-free algorithms. By adhering to its rules, developers can ensure that shared data remains consistent and applications behave reliably under load.

Java Memory Structure Overview

The Java Virtual Machine (JVM) divides its memory into several logical runtime data areas, each serving a specific role during program execution. This structured memory model ensures isolation between different types of data, supports multithreading, and enables features like automatic garbage collection.

At runtime, the JVM creates a set of runtime data areas. These include:

  • Heap Memory
  • Stack Memory
  • Method Area (Implemented as Metaspace in Java 8+)
  • Program Counter (PC) Register
  • Native Method Stack

Each area plays a unique role in executing Java programs and managing resources. Understanding these memory areas is crucial when tuning applications for performance, especially under conditions such as high object creation rates, frequent class loading, or memory-intensive workloads.

Heap Memory

In the Java Virtual Machine (JVM), heap memory is the primary area for dynamic memory allocation. All class instances, arrays, and objects created during program execution are stored here. Unlike stack memory, which is isolated to individual threads, the heap is a shared memory space, accessible by all threads in the JVM. This shared access allows objects to be passed between threads and persist beyond the execution of any single method.

Heap memory is automatically managed by the garbage collector, which reclaims memory occupied by objects that are no longer in use. To optimize this process and reduce application pause times, the JVM divides the heap into two regions: the Young Generation and the Old Generation. This layout, known as the Generational Heap Model, is based on the principle that most objects in Java applications are short-lived, and those that survive are likely to live much longer.

The Young Generation

The Young Generation is where all newly created objects begin their lifecycle. It is optimized for fast allocation and frequent garbage collection. Because the majority of objects are short-lived (e.g., method-local objects, temporary buffers), this region is collected often using Minor Garbage Collections (Minor GCs), which are typically fast and efficient.

Internally, the Young Generation is divided into three spaces: an Eden space and two Survivor spaces.

  • Eden Space: This is the starting point for new object allocations. As soon as an object is created, the JVM attempts to place it in Eden. When Eden fills up, a Minor GC is triggered.

  • Survivor Spaces: The two survivor spaces, commonly referred to as S0 and S1, act as staging areas for objects that survive a garbage collection. After each Minor GC, reachable objects from Eden are moved into one of the survivor spaces. Objects that continue to survive are moved between the two survivor spaces across collection cycles.

As objects survive more garbage collection cycles, their age increases. Once an object’s age exceeds a threshold (controlled by the JVM flag -XX:MaxTenuringThreshold), it is promoted to the Old Generation. Promotion is also forced when the survivor spaces do not have enough room to hold surviving objects.

Minor GCs are stop-the-world events, meaning all application threads are paused briefly while the garbage collection is performed. However, because the volume of live data in the Young Generation is usually small, these pauses are minimal. Most modern garbage collectors perform Minor GCs using multiple threads, allowing the operation to complete quickly even in high-throughput environments.

Proper tuning of the Young Generation can help reduce promotion rates and delay costly collections in the Old Generation. For example, you can adjust:

  • The size of the Young Generation with -Xmn
  • The Eden-to-Survivor space ratio using -XX:SurvivorRatio
  • The promotion age threshold with -XX:MaxTenuringThreshold

These parameters are particularly useful for applications with high allocation rates, such as REST APIs, streaming pipelines, or real-time event processing systems.
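Putting the three flags together, a launch command might look like the following. The heap values are illustrative examples, not recommendations, and Main is a placeholder class name.

```shell
# Illustrative launch for a high-allocation service.
# -Xmn512m                     sets the Young Generation to 512 MB
# -XX:SurvivorRatio=8          makes Eden 8x the size of each survivor space
# -XX:MaxTenuringThreshold=10  promotes objects after surviving 10 Minor GCs
java -Xms1g -Xmx1g -Xmn512m -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=10 Main
```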

The Old Generation

The Old Generation, also known as the Tenured Generation, is designed to hold long-lived objects, those that have survived multiple Minor GCs. While the Young Generation handles the frequent allocation and disposal of short-lived data, the Old Generation is where objects that remain in use for extended periods are eventually stored. Examples include:

  • Persistent application-level caches
  • Large object graphs such as sessions or user data
  • Static or shared data structures that are retained across requests

Because the Old Generation contains objects that are more likely to still be in use, it is collected less frequently than the Young Generation. Garbage collection in this region is referred to as a Major GC, and when both generations are collected together, the process is known as a Full GC.

Note:

  • A Major GC typically targets only the Old Generation.
  • A Full GC collects the entire heap (Young and Old Generations), and depending on the garbage collector and memory pressure, it may also include Metaspace, though this is not guaranteed.

Collections in the Old Generation are more expensive and typically involve:

  • A full stop-the-world pause
  • Tracing all reachable objects starting from the GC roots
  • Compacting memory to eliminate fragmentation

In single-threaded collectors like the Serial GC, this process can take a significant amount of time. In contrast, parallel or concurrent collectors such as Parallel GC, CMS, or G1 reduce pause durations by performing some phases concurrently or using multiple threads.

If the Old Generation becomes full and the garbage collector is unable to reclaim sufficient memory, the JVM will throw a java.lang.OutOfMemoryError: Java heap space. This is often a sign of excessive object retention, memory leaks, or incorrect heap sizing.

To control the size and behavior of the Old Generation, you can adjust:

  • The total heap size using -Xmx (maximum) and -Xms (initial)
  • The size of the Young Generation using -Xmn, which affects how much memory is left for the Old Generation
  • The ratio between the two using -XX:NewRatio

For example:

-Xms512m -Xmx2g -Xmn512m -XX:NewRatio=3

This configuration sets an initial heap size of 512 MB, a maximum heap size of 2 GB, and allocates 512 MB to the Young Generation, leaving up to 1.5 GB for the Old Generation. Note that when -Xmn is specified explicitly, it takes precedence over -XX:NewRatio, so the ratio flag in this example is effectively redundant; in practice you would use one or the other.

In production environments, the Old Generation often needs close attention. If your application maintains large in-memory datasets or performs long-lived object caching, analyzing heap usage using tools like jstat, VisualVM, or Java Flight Recorder can help identify inefficiencies and tune the GC strategy accordingly. For applications with strict latency requirements, consider switching to low-pause collectors such as G1, ZGC, or Shenandoah, which are better suited for managing large Old Generations without introducing long GC pauses.

Stack Memory

In the JVM, stack memory plays a crucial role in method execution. Unlike heap memory, which stores objects for longer-term use and is shared across threads, the stack is thread-local and exists independently for each thread. It is a structured, fast-access memory region that tracks the flow of method calls and stores data relevant to individual method invocations.

When a thread is created, the JVM allocates a new Java stack for that thread. This stack consists of a series of stack frames, each representing a single method invocation. As methods are called and return, frames are pushed to and popped from the top of the stack in a strict last-in, first-out (LIFO) order.

Role of Stack Frames

Every time a method is invoked, the JVM allocates a new stack frame on the thread’s stack. This frame is a self-contained unit of memory that holds all the necessary data for executing that method. The frame remains on the stack until the method either completes normally (returns) or exits abruptly (e.g., due to an exception). Once the method returns, its frame is removed from the stack, and the memory is automatically reclaimed; no garbage collection is required for stack memory.

Each stack frame contains:

  • Local variable array: Holds all method parameters and local variables declared within the method.
  • Operand stack: Used internally by the JVM to evaluate expressions and store intermediate computations.
  • Return value slot: Stores the result of the method call, if any, before passing it back to the calling method.
  • Reference to the runtime constant pool: Allows the method to resolve field names, method names, and literals.

Stack frames are lightweight and quick to allocate. Because they are thread-local and do not require synchronization, operations on the stack (method calls and returns) are extremely fast. This makes stack memory ideal for handling short-lived, method-scoped data.

Local Variables, Object References, and Return Values

Inside a stack frame, the local variable array stores all variables used during method execution. This includes:

  • Primitive values (e.g., int, double, boolean)
  • Object references: pointers to objects that live in the heap
  • Method parameters: values passed in when the method is called

It is important to understand that only references to objects are stored on the stack. The actual objects, including their fields and internal state, reside in the heap. For example, when a method receives a String or a custom object as a parameter, the stack stores a reference to the heap location where that object lives.
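The reference-versus-object distinction shows up whenever a method mutates its arguments. In this sketch (names are illustrative), the callee's stack frame receives a copy of the reference, so following it reaches the same heap object, while a primitive parameter is just a local stack slot.

```java
// Sketch: references are copied into the callee's frame, but both
// references point at the same heap object, so mutations are visible
// to the caller; primitives are independent stack slots.
public class StackVsHeap {
    static class Box { int value; }

    static void mutate(Box box, int n) {
        box.value = 99; // follows the reference into the heap: caller sees this
        n = 99;         // reassigns a local stack slot: caller is unaffected
    }

    public static void main(String[] args) {
        Box box = new Box();   // object lives in the heap
        box.value = 1;
        int n = 1;             // primitive lives in the current stack frame
        mutate(box, n);
        System.out.println(box.value + " " + n); // prints "99 1"
    }
}
```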

As the method executes, the JVM may also use the operand stack within the frame to temporarily hold values during expression evaluation. These are not programmer-visible variables but are used behind the scenes to carry out arithmetic operations, method calls, or control flow.

Once the method completes execution, any return value is placed into a designated slot in the stack frame. That value is then passed back to the caller, and the stack frame is discarded. This automatic, efficient memory handling ensures that temporary data used during method calls does not linger in memory unnecessarily.

Stack Size and Overflow

Each thread’s stack is limited in size, which can be configured using the -Xss JVM option. For example:

-Xss1m

This sets the maximum stack size per thread to 1 megabyte. If a thread calls too many nested methods or recurses too deeply, the stack may exceed this limit and trigger a StackOverflowError.

This type of error typically occurs in scenarios like:

  • Deep recursion with no base case
  • Improper termination conditions in recursive algorithms
  • Method chaining or complex functional-style programming without tail-call optimization
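The first failure mode above, recursion with no base case, is easy to reproduce. This minimal sketch catches the StackOverflowError only to illustrate the mechanism; production code should not rely on catching it.

```java
// Minimal demonstration: unbounded recursion exhausts the thread's stack.
public class Overflow {
    static long depth = 0;

    static void recurse() {
        depth++;       // each call pushes a new frame onto the thread's stack
        recurse();     // no base case: guaranteed to overflow eventually
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("overflowed after " + depth + " frames");
        }
    }
}
```

The depth reached before the error varies with the -Xss setting and the size of each frame.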

When to Consider Stack Memory in Application Design

In most applications, developers don’t need to manually manage stack memory; the JVM handles it efficiently and automatically. However, there are situations where understanding stack behavior is important:

  • Debugging runtime errors: Stack traces printed during exceptions show the call stack at the point of failure, which directly maps to the frames on the JVM stack.
  • Avoiding StackOverflowError: Writing recursive algorithms requires awareness of stack depth, especially when processing large data sets or graphs.
  • Configuring high-concurrency systems: In applications that create thousands of threads (e.g., web servers, microservices), tuning stack size with -Xss may be necessary to avoid exhausting physical memory.

Method Area (Implemented as Metaspace in Java 8+)

The Method Area is a critical part of the Java memory model that stores per-class metadata necessary for the JVM to execute Java programs. Unlike the heap, which holds individual object instances, the method area is used to store class-level information that is shared across all instances of a class and all threads within the application.

This area is logically defined by the Java Virtual Machine Specification and is shared among all threads. Although it is conceptually distinct from the heap, its actual implementation has evolved over different versions of the JVM, most notably with the transition from Permanent Generation (PermGen) to Metaspace in Java 8.

What the Method Area Stores

When a class or interface is loaded by the JVM, its definition is parsed and the following information is stored in the method area:

  • Class structure metadata, including class names, superclasses, implemented interfaces, and modifiers (public, final, etc.)
  • Runtime constant pool, which contains literal values and symbolic references used by the class
  • Static variables, which are class-level variables shared across all instances
  • Field and method information, including method signatures, access modifiers, and bytecode instructions
  • Constructor code, including initialization routines for object creation
  • Type information used for method resolution and dispatching

All of this information is essential for enabling runtime features such as dynamic method invocation, polymorphism, and class reflection.

Because this data does not change after the class is loaded (with few exceptions, such as dynamic class redefinition), it is considered immutable for the duration of the class’s life cycle.
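The metadata listed above (superclass, interfaces, modifiers, method signatures) is queryable at runtime through the reflection API, which reads from the class data the JVM keeps in the method area:

```java
import java.lang.reflect.Method;

// Sketch: reflection exposes the per-class metadata stored in the
// method area (Metaspace in Java 8+).
public class MetadataDemo {
    public static void main(String[] args) throws Exception {
        Class<?> c = String.class;
        System.out.println(c.getSuperclass().getName());  // java.lang.Object
        System.out.println(c.getInterfaces().length > 0); // true (CharSequence, ...)
        Method m = c.getMethod("length");
        System.out.println(m.getReturnType());            // int
    }
}
```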

PermGen: The Original Implementation (Before Java 8)

In JVM versions prior to Java 8, the method area was physically implemented in a fixed-size memory region called the Permanent Generation (PermGen). PermGen resided in the heap and had to be explicitly sized using JVM flags like:

-XX:PermSize=128m
-XX:MaxPermSize=256m

The fixed nature of PermGen introduced several challenges:

  • Class metadata could exceed PermGen space, leading to java.lang.OutOfMemoryError: PermGen space, especially in applications that dynamically load many classes (e.g., application servers, modular frameworks).
  • Memory management for PermGen was limited and difficult to tune correctly.
  • It could not grow beyond its configured maximum, regardless of available system memory.

Because of these limitations, managing PermGen became a frequent pain point for developers and system administrators, particularly in large-scale Java EE applications.

Metaspace: Modern Implementation (Java 8 and Later)

With the release of Java 8, the JVM replaced PermGen with a new memory region called Metaspace. Unlike PermGen, Metaspace is not part of the Java heap; it is allocated from native memory, which allows it to grow dynamically as needed (subject to system limitations).

This change brought several improvements:

  • Developers no longer need to worry about sizing PermGen manually.
  • The risk of OutOfMemoryError: PermGen space is greatly reduced.
  • Class metadata is handled more flexibly and efficiently.

However, while Metaspace can grow automatically, it is still constrained by available system memory. If too many classes are loaded (or not properly unloaded), Metaspace can still overflow, leading to a new error:

java.lang.OutOfMemoryError: Metaspace

To control Metaspace usage, the JVM provides the following flags:

-XX:MetaspaceSize=128m
-XX:MaxMetaspaceSize=512m

  • MetaspaceSize defines the initial size, which affects when the first GC of class metadata will be triggered.
  • MaxMetaspaceSize limits how large Metaspace can grow. If omitted, Metaspace will grow until it exhausts native memory.

Applications that dynamically load and unload classes — such as servlet containers (Tomcat, Jetty), OSGi platforms, or scripting engines — should monitor Metaspace usage closely. Tools like VisualVM, JFR (Java Flight Recorder), or jcmd can be used to track Metaspace growth and identify memory leaks related to class loaders.

Class Unloading and Garbage Collection

Metaspace memory is eligible for collection when classes are unloaded, but this only happens if:

  1. The class loader that loaded the class is no longer reachable.
  2. The garbage collector is invoked and identifies the class loader as unreachable.
  3. The JVM supports class unloading (some collectors may restrict this).

Uncollected class loaders can lead to Metaspace leaks, especially in environments where classes are reloaded frequently (e.g., hot deployment in development servers). To mitigate this, JVM options like -XX:+ClassUnloading (enabled by default in modern JVMs) can help enable or improve class unloading behavior.

For older versions of the JVM that use the CMS garbage collector, the flag -XX:+CMSClassUnloadingEnabled was used to enable class unloading. However, CMS was deprecated in Java 9 and removed in Java 14, so this flag is only applicable to legacy JVMs.

Program Counter (PC) Register

The Program Counter (PC) register is a small but essential component of the JVM memory model. It plays a critical role in tracking the flow of execution for Java applications. While other memory areas like the heap and stack handle data and objects, the PC register is concerned with instruction-level control: keeping track of which bytecode instruction a thread should execute next.

Each thread in the JVM has its own private Program Counter register. This isolation is necessary because the JVM supports multithreaded execution, where multiple threads can run independently and simultaneously. By maintaining a separate PC for each thread, the JVM ensures that threads do not interfere with each other’s execution flow.

What the PC Register Stores

The Program Counter register contains the address of the next instruction to be executed in the current thread’s method. This allows the JVM to resume execution from the correct point after:

  • A method call
  • A branch (e.g., if/else, loop)
  • An exception handler
  • A thread context switch

In the case of native methods, those implemented in a language like C and accessed via the Java Native Interface (JNI), the PC register’s value is undefined, as the JVM no longer directly controls the flow of execution.

Internally, the PC register helps drive the JVM execution engine, which fetches the bytecode instruction pointed to by the PC, decodes it, and then executes it. After execution, the PC is updated to point to the next instruction, ensuring continuous and correct program flow.

Importance in Multithreaded Environments

Because each thread has its own PC register, threads can execute concurrently and independently without clashing over the current instruction pointer. This design is essential for Java’s thread model and underpins the JVM’s ability to implement concurrency using lightweight threads.

For example, if two threads are executing the same method on different objects, each thread will have its own stack and its own PC register pointing to the appropriate instruction for its current state. When the operating system switches between threads, the JVM can restore the appropriate PC value and resume execution seamlessly.

Practical Implications

The PC register is not something Java developers interact with directly, but it becomes visible in several situations:

  • Stack traces: When an exception occurs, the JVM prints a stack trace showing the method calls and line numbers where the exception occurred. These line numbers are derived from the PC register at the time of the crash.
  • Debugging tools: Java debuggers and profilers internally rely on the PC register to determine where execution is paused, especially during step-through debugging or when setting breakpoints.
  • Thread state inspection: Tools like jstack use the PC register to show what each thread was doing at a given moment.

Because it is so low-level and lightweight, the PC register does not require garbage collection or tuning, and it has no configurable size or visibility at the language level. However, it is essential for enabling Java’s method execution model, thread isolation, and exception reporting.
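The stack-trace connection can be seen directly: each StackTraceElement carries the method name and source line reconstructed from the bytecode position recorded for that frame. A minimal sketch:

```java
// Sketch: walking the current thread's stack trace. The line number in
// each element is derived from the bytecode position (the PC value)
// recorded for that frame.
public class TraceDemo {
    static void inner() {
        StackTraceElement[] frames = Thread.currentThread().getStackTrace();
        for (StackTraceElement f : frames) {
            // declaring class, method name, and source line per frame
            System.out.println(f.getClassName() + "." + f.getMethodName()
                    + ":" + f.getLineNumber());
        }
    }

    public static void main(String[] args) {
        inner(); // the printed frames include inner() and main()
    }
}
```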

Native Method Stack

The Native Method Stack is a dedicated memory region in the JVM that supports the execution of native methods, methods written in languages other than Java, such as C or C++. These methods are typically called through the Java Native Interface (JNI), which acts as a bridge between the JVM and native libraries.

While most Java programs are written entirely in Java, native methods are sometimes necessary to access low-level system resources, interact with legacy code, or perform operations that require high performance or hardware-specific capabilities. When a native method is invoked, the JVM hands control over to the host operating system, which executes the method using the machine’s native call stack rather than the Java call stack.

Role and Behavior of the Native Method Stack

Each thread in the JVM has its own native method stack, separate from the standard Java stack used for executing bytecode. When a thread calls a native method, the call frame for that method is pushed onto the native method stack instead of the Java stack. This native stack follows the calling conventions of the platform’s operating system and is managed outside the JVM’s direct control.

Unlike the Java stack, which stores local variables, object references, and intermediate results for Java methods, the native method stack deals with:

  • Native language function calls
  • Operating system-level data structures
  • Registers and pointers specific to compiled native code

The JVM doesn’t manage the internals of this stack in the same way it manages Java method execution. Instead, it delegates the execution entirely to the native system runtime, allowing native code to execute as if it were part of a regular C/C++ program.

Native Memory Management and JNI Leaks

While Java handles heap and stack memory automatically, native methods executed via JNI operate in native memory, which is outside the JVM’s control. This unmanaged memory is used for thread stacks, DirectByteBuffers, native arrays, and memory allocations made with malloc() or new in C/C++ code.

Since the garbage collector does not track or free native memory, leaks in this space can go undetected, potentially exhausting system memory and causing crashes.

Common Causes of Native Memory Leaks

  • Missing free() or delete calls in JNI code
  • Improper use of DirectByteBuffer without cleanup
  • Unreleased file handles or sockets
  • Repeated class loading without proper unloading
  • Native libraries with poor memory hygiene
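The DirectByteBuffer case above is the most common way pure-Java code touches native memory. This sketch shows the difference between a direct (off-heap) buffer and an ordinary heap buffer:

```java
import java.nio.ByteBuffer;

// Sketch: allocateDirect() reserves off-heap (native) memory that the
// garbage collector does not compact and only releases once the buffer
// object itself is collected; allocate() uses an ordinary heap array.
public class DirectBufferDemo {
    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(1024); // native memory
        direct.putInt(0, 42);
        System.out.println(direct.isDirect()); // true
        System.out.println(direct.getInt(0));  // 42

        ByteBuffer heap = ByteBuffer.allocate(1024); // backed by a heap array
        System.out.println(heap.isDirect());   // false
    }
}
```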

Monitoring Native Memory

Detecting native memory issues often requires dedicated tools and JVM flags. Some methods include:

Native Memory Tracking (NMT):

The JVM provides a built-in feature to monitor native memory consumption.

You can enable this with:

-XX:NativeMemoryTracking=summary -XX:+UnlockDiagnosticVMOptions

To view statistics at runtime:

jcmd <pid> VM.native_memory summary

This shows memory usage by categories like thread stacks, Metaspace, code cache, direct memory, and internal JVM allocations.

OS-Level Tools:

  • Use top, ps, or htop to monitor total process memory
  • Use numastat, smem, or /proc/<pid>/smaps on Linux to see detailed mappings
  • Use valgrind or AddressSanitizer for low-level native memory debugging (when source code access is available)

Best Practices to Prevent Native Memory Leaks

Because native memory management is manual, developers must follow strict discipline when working with JNI or off-heap resources.

  • Always release native memory: For every native malloc(), ensure a matching free() in your JNI implementation.

  • Use try-finally blocks around any JNI calls that allocate native resources.

  • Leverage Cleaner or PhantomReference in modern Java (post-Java 9) to clean up native buffers when objects are collected.

  • Use direct memory sparingly: ByteBuffer.allocateDirect() allocates off-heap memory. Avoid unbounded allocation.

  • Limit classloader churn: Avoid dynamic class loading unless necessary; stale classloaders can accumulate native resources.

  • Avoid loading unused native libraries, especially in containerized environments where memory is limited.

  • Profile regularly: Integrate native memory tracking into your monitoring pipeline—especially for long-running or memory-sensitive services.

Although often overlooked, native memory leaks are one of the hardest issues to detect in Java applications. Proactively managing and monitoring JNI-related allocations is key to ensuring application stability, especially in high-performance or long-running systems.
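The Cleaner approach recommended above (Java 9+) can be sketched as follows. The "native handle" here is simulated with a flag; in real JNI code the cleaning action would call free() or a native release function. Note the key design constraint: the cleanup state must not reference the owner object, or the owner would never become unreachable.

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: a Cleaner releases a (simulated) native resource either when
// the owner becomes unreachable or eagerly via close().
public class NativeResource implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    // Static nested class: must NOT hold a reference back to the owner.
    static final class Handle implements Runnable {
        final AtomicBoolean released = new AtomicBoolean(false);
        @Override public void run() {
            released.set(true); // stand-in for free()/munmap() in JNI code
        }
    }

    final Handle handle = new Handle();
    private final Cleaner.Cleanable cleanable = CLEANER.register(this, handle);

    @Override public void close() {
        cleanable.clean(); // eager, deterministic release (runs at most once)
    }

    public static void main(String[] args) {
        try (NativeResource r = new NativeResource()) {
            // use the resource
        }
    }
}
```

Pairing the Cleaner with AutoCloseable, as here, gives deterministic release in the common case while keeping the Cleaner as a safety net for leaked instances.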

Native Libraries and JNI

Native methods in Java are typically declared using the native keyword and are implemented in external shared libraries (e.g., .so on Linux or .dll on Windows). These libraries are loaded at runtime using System.loadLibrary() or similar mechanisms.

Example:

public class NativeExample {
    static {
        System.loadLibrary("native-lib");
    }

    public native void performNativeTask();
}

In this case, the method performNativeTask() is declared in Java but implemented in a separate native library. When the method is invoked, the JVM uses the native method stack to transfer control to the compiled C or C++ code.

JNI provides the necessary glue code for:

  • Converting between Java types and native types
  • Accessing Java fields and methods from native code
  • Handling exceptions and object references across the Java-native boundary

Error Handling and Stack Limitations

Just like the Java stack, the native method stack has finite memory, and excessive native method invocations or deep native recursion can lead to a StackOverflowError or segmentation fault, depending on how the native code is written.

Since the JVM does not manage the native stack directly, errors in native code can be more severe and harder to debug. For example:

  • A bug in native code can cause memory corruption or crash the JVM entirely.
  • Native memory leaks are not visible to the JVM garbage collector.
  • Exception handling in native code must be explicitly integrated with the Java exception model.

These risks make it essential to use JNI and native code sparingly, and only when absolutely necessary.

Native Stack vs. Java Stack

The following table compares the Java stack and native method stack in the JVM.

Feature             | Java Stack                   | Native Method Stack
--------------------|------------------------------|-------------------------------------
Used for            | Java methods                 | Native (C/C++) methods via JNI
Managed by          | JVM                          | Operating system
Thread scope        | One stack per thread         | One stack per thread
Error on overflow   | StackOverflowError           | OS-specific failure or crash
Garbage collected?  | No (automatically reclaimed) | No (manually managed in native code)

Garbage Collection in Java

Garbage Collection (GC) is a cornerstone of Java’s automatic memory management system. It eliminates the need for developers to manually free memory, which reduces the risk of memory leaks, dangling pointers, and other low-level memory errors common in languages like C or C++. Instead, the JVM tracks object lifecycles and automatically reclaims memory occupied by objects that are no longer reachable by the application.

This process enables Java applications to manage memory more safely and efficiently, but it doesn’t come without trade-offs. Garbage collection is a background process that periodically pauses application threads, and its efficiency has a direct impact on application performance, latency, and scalability. Understanding how garbage collection works and how to tune it is essential for writing high-performance Java applications.

Automatic Memory Management and the Role of Garbage Collection

The JVM automatically allocates memory for new objects in the heap, and as long as an object remains reachable through local variables, static references, or other live objects, it stays alive. Once an object becomes unreachable, meaning it is no longer accessible from any GC root, it becomes eligible for garbage collection.

Garbage collection:

  • Frees up memory occupied by unreachable objects
  • Prevents memory leaks and heap exhaustion
  • Ensures long-running applications continue to operate without manual intervention

Without GC, developers would have to manually deallocate memory, making the language far more error-prone and complex.
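
Reachability can be observed directly. In this sketch, a WeakReference tracks an object whose last strong reference is dropped; once unreachable, a garbage collection cycle clears the weak reference. Note that System.gc() is only a hint to the JVM, so the example retries rather than relying on a single call:

```java
import java.lang.ref.WeakReference;

class ReachabilityDemo {
    public static void main(String[] args) throws InterruptedException {
        Object obj = new Object();                     // strongly reachable
        WeakReference<Object> ref = new WeakReference<>(obj);

        obj = null;                                    // drop the last strong reference
        // System.gc() is a hint, not a command, so retry a few times
        for (int i = 0; i < 50 && ref.get() != null; i++) {
            System.gc();
            Thread.sleep(10);
        }
        System.out.println("collected: " + (ref.get() == null));
    }
}
```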

Garbage Collection Algorithms in the JVM

Over the years, the JVM has introduced several garbage collection algorithms to meet the diverse needs of Java applications, from simple single-threaded programs to large-scale, low-latency systems.

Serial GC

The Serial Garbage Collector uses a single thread to perform all garbage collection tasks, pausing the application during each collection. Its simplicity and low overhead make it ideal for single-threaded or small applications where pause times are acceptable.

This collector is best suited for small heaps, embedded systems, and development environments where predictable behavior matters more than performance. However, it does not scale well to multi-core CPUs or large memory heaps due to its long pause times.

Enable with: -XX:+UseSerialGC

Parallel GC (Throughput Collector)

The Parallel GC, also known as the Throughput Collector, performs garbage collection using multiple threads, which improves overall throughput by reducing the total time spent in GC.

It is designed for applications where maximum CPU efficiency and throughput are more important than pause time, such as batch processing systems or compute-heavy services. However, since it still pauses the application during collection, it may not be ideal for latency-sensitive applications.

Enable with: -XX:+UseParallelGC

Concurrent Mark-Sweep (CMS): Deprecated

The CMS Collector reduces pause times by performing most of its GC work concurrently with the application. It was widely used in latency-sensitive systems prior to Java 9.

However, CMS was deprecated in Java 9 and removed in Java 14, due to limitations like heap fragmentation, increased CPU overhead, and maintenance complexity. Use CMS only in legacy systems that run on Java 8 or earlier.

Enable with: -XX:+UseConcMarkSweepGC (Java 8 only)

G1 GC (Garbage First)

G1 GC is a region-based, low-pause collector designed to balance throughput and pause time predictability. It incrementally collects small regions of the heap to avoid full GC pauses and provides tunable pause-time goals.

G1 became the default collector starting in Java 9 and is well-suited for server applications, microservices, and systems with moderate to large heaps. It provides a good balance between performance and manageability, although tuning can be more complex than older collectors.

Enable with: -XX:+UseG1GC (default in Java 9+)

ZGC (Z Garbage Collector)

ZGC is a low-latency GC introduced in Java 11, designed to keep pause times consistently below 10 milliseconds, even with heap sizes in the multi-terabyte range. It achieves this by performing all heavy GC phases concurrently with the application.

ZGC is ideal for latency-sensitive, memory-intensive workloads such as real-time analytics, financial systems, and large-scale services. Rather than performing stop-the-world compaction, it relocates objects concurrently with the application, using colored pointers and load barriers to keep the impact on application threads minimal.

Platform support includes Linux, Windows, and macOS on x86_64 and AArch64. Enable with: -XX:+UseZGC (stable since Java 15)

Shenandoah

Shenandoah is another modern low-pause collector, developed by Red Hat, that also performs concurrent compaction and keeps pause times largely independent of heap size. This makes it suitable for systems that require both low pause times and heap defragmentation.

Shenandoah is best suited for low-latency services such as databases, real-time systems, or interactive applications with strict response time requirements. It can introduce slightly higher CPU overhead due to more frequent write barriers and increased GC activity.

Platform support is currently limited to Linux and Windows on x86_64 and AArch64. Enable with: -XX:+UseShenandoahGC (production-ready in Java 15+)

Here’s a quick summary of the key JVM garbage collectors, highlighting their focus, execution model, and ideal use cases:

| Collector | Pause Time | Concurrency | Compaction | Best Use Cases | Platform & JDK Support |
| --- | --- | --- | --- | --- | --- |
| Serial GC | High (stop-the-world) | None | Yes | Small apps, single-threaded or embedded systems | All platforms, all JDKs (-XX:+UseSerialGC) |
| Parallel GC | Moderate–High | Parallel stop-the-world | Yes | Batch jobs, compute-heavy services, large heaps with loose latency requirements | All platforms, all JDKs (-XX:+UseParallelGC) |
| CMS (Deprecated) | Low (some phases concurrent) | Mark & sweep are concurrent | No (leads to fragmentation) | Legacy low-latency systems (Java 8 only) | Removed in Java 14 (-XX:+UseConcMarkSweepGC) |
| G1 GC | Low–Moderate | Concurrent marking, mixed mode | Partial (region-based) | General-purpose apps, moderate-latency SLAs | Default in Java 9+ (-XX:+UseG1GC) |
| ZGC | Very low (<10ms) | Fully concurrent | Yes (concurrent relocation via colored pointers) | Real-time, large-heap, low-latency systems | Linux, Windows, macOS (x86_64, AArch64), Java 15+ (-XX:+UseZGC) |
| Shenandoah | Very low (heap size independent) | Fully concurrent | Yes (concurrent compaction) | Interactive, real-time systems needing low pause and compaction | Linux, Windows (x86_64, AArch64), Java 15+ (-XX:+UseShenandoahGC) |

Garbage Collection Phases: Mark, Sweep, and Compact

Regardless of the algorithm used, most GC implementations follow a variation of the Mark-Sweep-Compact process:

  1. Mark: The collector scans through live references and marks all reachable objects by tracing from the GC roots.
  2. Sweep: Once marking is complete, the collector reclaims memory occupied by objects that were not marked (i.e., unreachable).
  3. Compact (optional): To reduce fragmentation, some collectors move live objects into contiguous memory regions and update references.

This process ensures that memory is efficiently reused, and fragmentation is minimized (especially important in long-running applications).
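
To make the mark and sweep phases concrete, here is a toy simulation (not JVM code): objects form a graph of references, everything reachable from the root set is marked by tracing, and anything left unmarked is treated as garbage. The object ids and graph are invented for illustration:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

class MarkDemo {
    // Each "object" is an id with its outgoing references
    static final Map<String, List<String>> refs = Map.of(
            "root", List.of("a"),
            "a", List.of("b"),
            "b", List.of(),
            "orphan", List.of("c"),   // not reachable from any root
            "c", List.of());

    public static void main(String[] args) {
        // Mark: trace everything reachable from the GC roots
        Set<String> marked = new HashSet<>();
        Deque<String> stack = new ArrayDeque<>(List.of("root"));
        while (!stack.isEmpty()) {
            String id = stack.pop();
            if (marked.add(id)) stack.addAll(refs.get(id));
        }
        // Sweep: everything unmarked is reclaimable
        Set<String> garbage = new TreeSet<>(refs.keySet());
        garbage.removeAll(marked);
        System.out.println("live: " + new TreeSet<>(marked));
        System.out.println("garbage: " + garbage);
    }
}
```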

Minor GC vs. Major GC vs. Full GC

  • Minor GC: A Minor GC collects the Young Generation only. It is frequent, fast, and typically involves copying live objects to survivor spaces or promoting them to the Old Generation. Minor GCs are stop-the-world events but generally have short pause times.

  • Major GC: A Major GC collects the Old Generation. It is less frequent and more expensive in terms of time and CPU usage. These events are also stop-the-world and can lead to noticeable pauses if not managed well.

  • Full GC: A Full GC collects the entire heap, including both Young and Old Generations. Depending on the collector and JVM version, it may also include Metaspace cleanup. Full GCs are the most disruptive and should be avoided in latency-sensitive applications.

Tuning Garbage Collection

The performance and behavior of the JVM’s garbage collector can be tuned with a range of JVM options, allowing developers to optimize for throughput, pause time, or memory footprint depending on their application’s needs.

Some commonly used GC tuning flags include:

# Heap size
-Xms512m -Xmx2g

# Young Generation size
-Xmn512m

# GC collector selection
-XX:+UseG1GC
-XX:+UseParallelGC
-XX:+UseZGC
-XX:+UseShenandoahGC

# GC logging
-Xlog:gc*:file=gc.log:time,uptime,level,tags

GC tuning is not one-size-fits-all. The ideal configuration depends on:

  • Application type (batch job vs real-time service)
  • Heap size
  • Allocation rate
  • Pause time sensitivity
  • Number of CPU cores

For example:

  • A microservice handling short-lived HTTP requests might prioritize low pause times and benefit from G1 or ZGC.
  • A batch processing application may favor throughput and perform well with Parallel GC.
  • An interactive desktop GUI might suffer from GC pauses and require careful heap sizing and collector selection.

Monitoring tools like JFR, VisualVM, and jstat can help observe GC behavior and identify tuning opportunities.

Monitoring and Profiling Tools

Effective memory management in Java doesn’t end with code. To maintain healthy application performance in production environments, developers and operations teams must actively monitor, profile, and analyze the JVM’s memory usage over time. Fortunately, the Java ecosystem offers a robust set of tools, both built-in and external, that help diagnose issues, fine-tune memory allocation, and verify garbage collection behavior.

These tools range from simple command-line utilities to full-featured graphical profilers and low-overhead production monitors.

jstat: JVM Statistics Monitoring Tool

jstat (Java Virtual Machine Statistics Monitoring Tool) provides real-time data about garbage collection behavior and memory usage. It is a lightweight command-line utility included in the JDK.

Example usage:

jstat -gc <pid> 1000

This command prints garbage collection statistics every 1 second for the process with the given PID.

What it shows:

  • Size of Eden, Survivor, and Old generations
  • Number and duration of GC events
  • Heap occupancy before/after collection

jstat is useful for observing GC activity over time and identifying trends such as frequent minor GCs or long major GC pauses.

jmap: Memory Maps and Heap Dumps

jmap is another command-line tool that provides detailed information about memory usage, including the ability to generate heap dumps for offline analysis.

Key use cases:

  • Print memory summary: jmap -heap <pid> (JDK 8; on JDK 9+ use jhsdb jmap --heap --pid <pid>)
  • Dump live object histogram: jmap -histo:live <pid>
  • Generate heap dump file: jmap -dump:format=b,file=heapdump.hprof <pid>

Heap dumps created by jmap can be analyzed with tools like Eclipse MAT or VisualVM to investigate memory leaks, object retention, or excessive allocation.

jconsole: Graphical Monitoring in Real Time

jconsole is a GUI-based monitoring tool that connects to local or remote Java processes using JMX (Java Management Extensions). While less advanced than VisualVM or JFR, it provides a simple interface for tracking:

  • Heap and non-heap memory usage
  • Thread count and state
  • GC activity and frequency
  • Loaded classes and CPU usage

It is especially useful for quick diagnostics or development-stage performance monitoring.

VisualVM: Integrated Monitoring and Profiling

VisualVM is a powerful, standalone GUI tool that offers live monitoring and advanced profiling capabilities. It can attach to running JVMs and provide real-time insights without requiring code changes.

Features include:

  • Real-time heap and CPU usage charts
  • Garbage collection monitoring
  • Heap dumps and thread dumps
  • Object allocation tracking
  • Memory leak detection
  • Plugin support for extended analysis

VisualVM is ideal for diagnosing memory issues in development or staging environments, where overhead is acceptable. It supports local and remote profiling and integrates well with applications running on the standard JVM.

Java Flight Recorder (JFR) and Mission Control

Java Flight Recorder (JFR) is a high-performance, low-overhead profiling tool built into the JVM. It’s suitable for use in production environments, as it records JVM events with minimal performance impact.

JFR can capture:

  • GC activity and pause times
  • Object allocation profiles
  • Class loading/unloading
  • Thread activity and CPU sampling
  • Lock contention and I/O events

You can enable JFR with command-line flags or start/stop it dynamically using jcmd:

# Start a 60-second recording
jcmd <pid> JFR.start duration=60s filename=recording.jfr

To analyze the output, use Java Mission Control (JMC), a graphical tool for browsing and interpreting JFR recordings. JMC provides:

  • GC event timelines
  • Heap allocation hotspots
  • Thread and lock analysis
  • Interactive charts and filters

Because of its low overhead, JFR is particularly valuable for investigating intermittent performance issues and understanding long-term memory trends in production workloads.

DJXPerf

DJXPerf is a powerful profiling tool built for analyzing low-level JVM performance characteristics, including object-level inefficiencies, false sharing, cache line contention, and TLB (Translation Lookaside Buffer) misses. Unlike traditional profilers, DJXPerf digs into CPU hardware events and object memory layout to surface issues that impact performance at the microarchitectural level.

It is especially useful in applications where performance bottlenecks are subtle and not GC-related, such as in high-performance trading platforms, real-time systems, or workloads with tight memory access patterns.

Key Capabilities:

  • Detects false sharing and memory contention
  • Analyzes object layout and cache line alignment
  • Profiles memory access latency and CPU stall cycles
  • Integrates with Linux perf and hardware performance counters

Use Cases:

  • Identifying inefficiencies in heavily multithreaded code
  • Pinpointing low-level memory access bottlenecks
  • Optimizing JVM object alignment and field layout

Limitations:

  • Requires deep JVM internals knowledge and hardware-level context
  • Best used in staging or performance labs due to its invasive nature

Choosing the Right Tool

| Tool | Use Case | Environment |
| --- | --- | --- |
| jstat | GC statistics and trends | Production, Dev |
| jmap | Heap dumps, memory histograms | Dev, Staging |
| jconsole | Simple real-time monitoring via JMX | Dev, Testing |
| VisualVM | Heap analysis, profiling, leak detection | Dev, QA |
| JFR + JMC | Low-overhead, in-depth profiling | Production |
| DJXPerf | Low-level memory profiling, cache/TLB analysis | Performance Labs, Staging |

Common Memory Errors and Troubleshooting in Java

Despite Java’s automatic memory management, developers still need to be aware of memory-related pitfalls that can cause applications to crash, slow down, or behave unpredictably. The most critical of these are runtime memory errors, such as OutOfMemoryError and StackOverflowError. Understanding the causes and available diagnostic tools can help developers detect memory issues early and resolve them efficiently.

Memory Leak

A memory leak occurs when objects that are no longer needed remain reachable and are not collected by the garbage collector. This usually results in gradual heap growth and eventual OutOfMemoryError.

Types of Java Memory Leaks

Even in managed environments like the JVM, memory leaks can occur when objects remain unintentionally reachable, often due to subtle design or lifecycle issues. Below are common patterns that lead to memory leaks in Java applications:

| Leak Type | Description |
| --- | --- |
| Static Field Leaks | Objects stored in static fields remain alive for the lifetime of the application. If these objects reference large structures or grow over time (e.g., lists, caches), they prevent GC from reclaiming memory. |
| Listener/Callback Leaks | Event listeners, callbacks, or observers registered with UI elements, services, or frameworks but never removed continue to reference target objects even after those objects are no longer in use. |
| Thread Leaks | Threads that are never properly shut down, such as long-running background threads or improperly configured thread pools, retain references to their Runnable tasks or context, leading to accumulation over time. |
| Collection Growth Leaks | Collections like HashMap, List, or Set that grow indefinitely without bounds, typically due to missing eviction logic or keys that are never removed, gradually consume memory. |
| ClassLoader Leaks | In environments like web servers or OSGi containers, classes may be loaded and unloaded dynamically. If a class holds references to its classloader or external resources (e.g., threads, static fields), it can prevent class unloading and cause memory growth with each redeploy. |

Real-World Example: Static Cache Leak

public class ConfigRegistry {
    private static final List<String> cache = new ArrayList<>();

    public static void add(String entry) {
        cache.add(entry); // grows indefinitely
    }
}

Fix: Use bounded caches or reference-based data structures:

private static final Map<String, Object> cache = new WeakHashMap<>();
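
Another common fix is a size-bounded LRU cache. The sketch below (capacity and names chosen arbitrarily for illustration) uses LinkedHashMap in access order and overrides removeEldestEntry so the least-recently-used entry is evicted once the cap is exceeded:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    BoundedCache(int maxEntries) {
        super(16, 0.75f, true);           // accessOrder = true -> LRU ordering
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;       // evict the least-recently-used entry
    }

    public static void main(String[] args) {
        BoundedCache<String, String> cache = new BoundedCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");              // evicts "a", the LRU entry
        System.out.println("size: " + cache.size() + ", contains a: " + cache.containsKey("a"));
    }
}
```

Unlike a WeakHashMap, whose entries disappear whenever the GC runs, a bounded cache gives deterministic memory usage, which is often easier to reason about in production.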

OutOfMemoryError: Java Heap Space

This error occurs when the JVM cannot allocate an object because the heap memory is full, and garbage collection is unable to reclaim enough space. It often signals a memory leak or excessively large object allocations.

Common causes include:

  • Accidental retention of objects (e.g., storing data in static fields or collections that grow unbounded)
  • Loading large files or large numbers of objects into memory
  • Inefficient caching mechanisms
  • Failure to properly release resources like database connections

Example:

java.lang.OutOfMemoryError: Java heap space

Troubleshooting tips:

  • Analyze heap dumps using tools like Eclipse MAT, VisualVM, or JProfiler
  • Monitor memory usage with jstat, jcmd, or Java Flight Recorder (JFR)
  • Increase heap size using -Xmx and optimize object allocation patterns

OutOfMemoryError: Metaspace

Introduced with Java 8, Metaspace holds class metadata. If the application loads too many classes or fails to unload them properly (often due to classloader leaks), Metaspace may grow until it exhausts native memory.

Example:

java.lang.OutOfMemoryError: Metaspace

Common causes:

  • Excessive class loading (e.g., repeated redeployment in application servers)
  • Classloader memory leaks in frameworks or custom classloaders
  • Lack of a defined Metaspace limit

Troubleshooting tips:

  • Set a maximum limit using -XX:MaxMetaspaceSize
  • Monitor class loading with jcmd <pid> GC.class_histogram or jmap -clstats <pid>
  • Use -XX:+ClassUnloading to enable class unloading (enabled by default in modern JVMs)

OutOfMemoryError: GC Overhead Limit Exceeded

This error indicates that the JVM is spending too much time in garbage collection with too little memory actually being reclaimed, typically more than 98% of CPU time with less than 2% heap recovery.

Example:

java.lang.OutOfMemoryError: GC overhead limit exceeded

Causes:

  • High object allocation rate with insufficient heap
  • Memory leaks causing the heap to remain full
  • Inefficient GC configuration

Troubleshooting tips:

  • Use profiling tools to identify allocation hotspots
  • Increase heap size and tune GC parameters
  • Investigate potential leaks and optimize object lifecycles

StackOverflowError

This occurs when a thread’s call stack exceeds its configured size, often due to deep or infinite recursion.

Example:

java.lang.StackOverflowError

Common causes:

  • Recursive functions without a proper base case
  • Large or unbounded method call chains
  • Excessively deep expression trees in frameworks or template engines

Troubleshooting tips:

  • Refactor recursive code to be iterative if possible
  • Review method call structure and stack trace
  • Increase stack size with -Xss, if needed (e.g., -Xss1m)
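
The refactoring advice above can be illustrated with a small sketch (the sum function is invented for this example): the recursive version allocates one stack frame per call and overflows at depth, while the iterative version uses constant stack space:

```java
class SumDemo {
    // Recursive version: one stack frame per call, overflows for large n
    static long sumRecursive(long n) {
        return n == 0 ? 0 : n + sumRecursive(n - 1);
    }

    // Iterative version: constant stack usage, works for any n
    static long sumIterative(long n) {
        long total = 0;
        for (long i = 1; i <= n; i++) total += i;
        return total;
    }

    public static void main(String[] args) {
        try {
            sumRecursive(10_000_000);      // far deeper than the default stack allows
        } catch (StackOverflowError e) {
            System.out.println("recursive: StackOverflowError");
        }
        System.out.println("iterative: " + sumIterative(10_000_000));
    }
}
```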

Best Practices for Java Memory Management

Efficient memory management is essential for building fast, scalable, and reliable Java applications. While the JVM handles much of the complexity behind the scenes through automatic garbage collection and dynamic memory allocation, developers still play a critical role in ensuring that applications use memory responsibly.

The following best practices help reduce memory waste, improve performance, and avoid common pitfalls such as memory leaks and excessive garbage collection.

Minimize Unnecessary Object Creation

Frequent object creation, especially of short-lived or redundant objects, increases pressure on the garbage collector. This can lead to more frequent minor GCs and higher CPU usage.

Recommendations:

  • Reuse objects where possible (e.g., use StringBuilder instead of concatenating immutable strings).
  • Prefer primitive types over boxed types (e.g., int instead of Integer) when object semantics such as nullability or use in generics are not required.
  • Use object pools only when profiling shows significant performance gain.
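
A quick sketch of the StringBuilder recommendation: each += on a String allocates a new String and copies all previous characters, while StringBuilder appends into a single resizable buffer (here pre-sized, with an arbitrary iteration count for illustration):

```java
class ConcatDemo {
    public static void main(String[] args) {
        int n = 10_000;

        // Each += allocates a new String: O(n^2) copying, heavy GC pressure
        String s = "";
        for (int i = 0; i < n; i++) s += "x";

        // StringBuilder reuses one buffer; pre-sizing avoids internal resizing
        StringBuilder sb = new StringBuilder(n);
        for (int i = 0; i < n; i++) sb.append("x");

        System.out.println("equal: " + s.equals(sb.toString()));
    }
}
```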

Avoid Memory Leaks

Even with automatic garbage collection, memory leaks can occur when objects are unintentionally kept reachable and cannot be collected.

Common causes:

  • Static references holding on to large data structures
  • Improperly managed caches
  • Unclosed resources (streams, connections)
  • Event listeners and callbacks that are never deregistered

Best practices:

  • Use weak references (WeakReference, WeakHashMap) for caches or listeners
  • Deregister event listeners and callbacks explicitly
  • Close all resources in finally blocks or use try-with-resources
  • Monitor heap usage over time to detect unexpected growth
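
The try-with-resources recommendation can be sketched with a hypothetical AutoCloseable (real code would wrap a stream, socket, or connection): close() runs automatically when the block exits, even if an exception is thrown inside it:

```java
class ResourceDemo {
    // Hypothetical resource standing in for a stream or connection
    static class Resource implements AutoCloseable {
        void use() { System.out.println("using resource"); }
        @Override public void close() { System.out.println("closed"); }
    }

    public static void main(String[] args) {
        // close() is invoked automatically at the end of the block
        try (Resource r = new Resource()) {
            r.use();
        }
    }
}
```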

Choose the Right Data Structures

Using the wrong data structures can increase memory consumption and GC pressure.

Tips:

  • Prefer ArrayList over LinkedList unless insert/remove operations dominate
  • Use EnumSet or EnumMap instead of general-purpose collections for enums
  • Avoid using synchronized collections when not needed
  • Pre-size collections if the final size is known to avoid resizing overhead
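
Two of the tips above in a short sketch (sizes and enum constants are invented for illustration): pre-sizing an ArrayList avoids repeated internal array copies as it grows, and EnumMap stores values in a plain array indexed by enum ordinal, which is far more compact than a general-purpose HashMap:

```java
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

class DataStructureDemo {
    enum Level { LOW, MEDIUM, HIGH }

    public static void main(String[] args) {
        // Pre-sizing avoids repeated resize-and-copy while the list grows
        int expected = 1_000;
        List<Integer> list = new ArrayList<>(expected);
        for (int i = 0; i < expected; i++) list.add(i);

        // EnumMap: array-backed, ordinal-indexed, compact and fast
        Map<Level, Integer> thresholds = new EnumMap<>(Level.class);
        thresholds.put(Level.LOW, 10);
        thresholds.put(Level.HIGH, 90);

        System.out.println("list size: " + list.size());
        System.out.println("thresholds: " + thresholds);
    }
}
```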

Be Mindful of Object Retention in Collections

Collections like HashMap, ArrayList, or ConcurrentHashMap can retain references to objects that are no longer needed, preventing them from being garbage collected.

Avoid:

List<byte[]> memoryLeakList = new ArrayList<>();
while (true) {
    memoryLeakList.add(new byte[1024 * 1024]); // 1MB chunks, all kept reachable
}
// eventually fails with java.lang.OutOfMemoryError: Java heap space

Do:

  • Clear collections when they are no longer needed
  • Limit the scope of long-lived caches or queues
  • Use bounded collections where applicable (e.g., LinkedBlockingQueue with capacity)
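
The bounded-collection tip in a minimal sketch (the capacity of 2 is arbitrary): a LinkedBlockingQueue created with a capacity rejects non-blocking offers once full instead of growing without limit:

```java
import java.util.concurrent.LinkedBlockingQueue;

class BoundedQueueDemo {
    public static void main(String[] args) {
        // A capacity-bounded queue caps memory usage by construction
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(2);
        System.out.println("offer a: " + queue.offer("a"));
        System.out.println("offer b: " + queue.offer("b"));
        System.out.println("offer c: " + queue.offer("c")); // rejected: queue is full
    }
}
```

In producer/consumer designs, put() can be used instead of offer() to apply backpressure by blocking the producer rather than dropping items.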

Tune the JVM for Your Workload

JVM memory settings have a significant impact on performance. Use flags like:

-Xms512m -Xmx2g -Xss1m
-XX:+UseG1GC

Tuning advice:

  • Set heap size (-Xms, -Xmx) based on expected workload
  • Choose a GC algorithm that aligns with your latency/throughput goals
  • Profile with real-world traffic and fine-tune GC behavior
  • Monitor GC logs or use JFR to guide tuning decisions

Monitor and Profile Regularly

Proactive monitoring is essential for identifying memory issues before they cause production outages.

Recommended tools:

  • VisualVM, JFR + Mission Control for real-time profiling
  • GarbageCat for offline GC log analysis
  • jstat, jmap, jcmd for command-line diagnostics

Establish regular performance profiling as part of your CI/CD or release workflow.

Understand Application-Specific Memory Patterns

Different types of Java applications have different memory footprints:

  • Web servers may have many short-lived objects (benefits from efficient young generation tuning)
  • Stream processors often maintain stateful objects (heap size and compaction become critical)
  • GUI applications are sensitive to GC pauses (low-latency GC collectors may be required)

Know your application’s behavior and design accordingly.

Incorporating these best practices can help Java developers build memory-efficient, stable applications that scale well and remain responsive under load. Good memory hygiene, thoughtful design, and ongoing monitoring are the pillars of reliable memory management in any Java-based system.

FAQs

1. What is the JVM Memory Model in Java?

The JVM Memory Model defines how the Java Virtual Machine organizes and manages memory at runtime. It splits memory into distinct areas: the Heap (for objects), the Stack (for method frames and local variables), the Method Area (for class metadata), the PC Register, and the Native Method Stack. This model ensures proper allocation, isolation, and execution of Java code across threads and platforms, while enabling features like garbage collection and thread safety.

2. How does Garbage Collection work in Java?

Garbage Collection (GC) is an automatic memory management process that reclaims memory used by objects no longer referenced by the application. The JVM uses different GC algorithms (like G1, ZGC, and Shenandoah) to perform collection in phases such as mark, sweep, and compact. GC can target different regions of the heap (e.g., Young and Old generations), and depending on the collector, it may run concurrently with application threads or cause short stop-the-world pauses.

3. What is the difference between stack and heap memory in Java?

In Java, stack memory stores method frames, including local variables and references, and operates in a LIFO (last-in, first-out) manner. It’s thread-specific and automatically cleaned up when methods return.

Heap memory, on the other hand, holds all objects created via new and is shared across all threads. The heap is managed by the garbage collector, while the stack is managed directly by the JVM runtime.

4. What is Metaspace in Java and how is it different from PermGen?

Metaspace is the memory area that stores class metadata in Java 8 and later, replacing the older PermGen space. Unlike PermGen, which had a fixed size, Metaspace resides in native memory and can dynamically grow, limited only by available system RAM. However, excessive class loading or classloader leaks can still cause OutOfMemoryError: Metaspace.

5. What causes a java.lang.OutOfMemoryError: GC overhead limit exceeded?

This error means the JVM is spending too much time performing garbage collection but recovering very little memory. It’s typically caused by:

  • A memory leak that retains unreachable objects
  • An overly small heap size
  • Large data structures that grow uncontrollably

To fix it, inspect memory usage with tools like VisualVM or MAT, and consider tuning heap size (-Xmx) or identifying objects that should be released sooner.

6. How can I tune garbage collection in Java?

GC tuning involves selecting the right collector and adjusting heap and GC-related flags to optimize performance. Common tuning options include:

  • -Xms / -Xmx: set initial and max heap size
  • -XX:+UseG1GC, -XX:+UseZGC, etc.: choose the collector
  • -XX:MaxGCPauseMillis: target pause time
  • -XX:+PrintGCDetails, -Xlog:gc*: enable GC logging

Tuning is workload-specific, so always monitor GC logs, pause times, and throughput under realistic traffic to validate changes.

7. Why is my Java app running out of memory?

Java applications run out of memory when they allocate more objects than the heap or other memory regions can accommodate. Common causes include:

  • Memory leaks: Objects remain referenced unnecessarily (e.g., in static fields or collections).
  • Unbounded data growth: Caches or lists that grow indefinitely.
  • Excessive class loading: Especially in long-running or redeployable applications.
  • Native memory exhaustion: Leaks in JNI code or too many direct buffers.

Use tools like jstat, jmap, or VisualVM to monitor memory usage and analyze heap activity.

8. How to analyze heap dumps in Java?

To analyze heap dumps:

  1. Capture the dump using:

    jmap -dump:format=b,file=heap.hprof <PID>
    
  2. Open the dump with tools like:

    • Eclipse MAT (Memory Analyzer Tool)
    • VisualVM
    • JFR + JMC (for related performance data)

Look for signs of memory leaks, such as retained size, dominators, and unreachable objects with live references. Heap dump analysis helps pinpoint which objects are consuming the most memory and why they’re not being garbage collected.

Conclusion

In this article, we explored the structure and behavior of the Java memory model, including the heap, stack, Metaspace, native memory, and garbage collection mechanisms. We examined how different garbage collectors work, how to monitor and tune memory usage using JVM tools, and how to diagnose common memory issues such as memory leaks and OutOfMemoryError. We also touched on native memory pitfalls, the Java Memory Model’s concurrency guarantees, and advanced profiling techniques.

Whether you’re debugging performance issues, tuning for low-latency, or building scalable multithreaded systems, a solid understanding of JVM memory internals equips you to make informed decisions and write more efficient, reliable Java applications.


Thanks for learning with the DigitalOcean Community. Check out our offerings for compute, storage, networking, and managed databases.


About the author(s)

Pankaj Kumar
Pankaj Kumar
Author
See author profile

Java and Python Developer for 20+ years, Open Source Enthusiast, Founder of https://www.askpython.com/, https://www.linuxfordevices.com/, and JournalDev.com (acquired by DigitalOcean). Passionate about writing technical articles and sharing knowledge with others. Love Java, Python, Unix and related technologies. Follow my X @PankajWebDev

Manikandan Kurup
Manikandan Kurup
Editor
Senior Technical Content Engineer I
See author profile

With over 6 years of experience in tech publishing, Mani has edited and published more than 75 books covering a wide range of data science topics. Known for his strong attention to detail and technical knowledge, Mani specializes in creating clear, concise, and easy-to-understand content tailored for developers.



This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.