Java Concurrency Fundamentals

When to Study Fundamentals

Thread safety patterns, synchronization primitives, and the Java Memory Model form the foundation for all concurrent programming in Java. Virtual threads (covered in Java Concurrency Advanced) make scaling I/O-bound workloads easier, but they don't eliminate the need for proper synchronization. Master these fundamentals first.

Overview

This guide covers the foundational concepts of Java concurrency: thread safety patterns, synchronization primitives, atomic operations, concurrent collections, and the Java Memory Model. These concepts apply regardless of whether you use platform threads or virtual threads. For modern concurrency features like virtual threads and structured concurrency, see Java Concurrency Advanced.


Core Principles

  1. Immutability First: Immutable objects are inherently thread-safe
  2. Minimize Shared State: Reduce contention through thread confinement and message passing
  3. Use Concurrent Collections: Leverage java.util.concurrent for thread-safe data structures
  4. Understand Happens-Before: Know when writes become visible to other threads
  5. Test for Concurrency: Use stress tests and tools like JCStress to verify thread safety

Thread Safety Patterns

1. Immutability (Preferred)

Immutable objects cannot change after construction, making them inherently thread-safe. Multiple threads can safely read immutable data without synchronization.

// GOOD: Immutable value object (inherently thread-safe)
public record Payment(
        String id,
        Money amount,
        String vendorId,
        Instant createdAt,
        PaymentStatus status
) {
    // All fields final and immutable
    // No setters
    // Thread-safe by design
}

// BAD: Mutable object requiring synchronization
public class Payment {
    private String id;
    private Money amount;
    private PaymentStatus status; // Can change - requires synchronization!

    public synchronized void setStatus(PaymentStatus status) {
        this.status = status;
    }
}

Why Immutability Works: Final fields have special guarantees in the Java Memory Model. If an object is safely published (its reference is not made visible to other threads until the constructor completes), all threads see the final field values initialized in the constructor.

2. Thread Confinement

Restrict data to a single thread, eliminating the need for synchronization.

// GOOD: Each thread has its own instance
public class PaymentProcessor {

    public List<PaymentResult> processPayments(List<Payment> payments) {
        return payments.parallelStream()
                .map(payment -> {
                    // Each thread creates its own processor instance
                    var processor = new SinglePaymentProcessor();
                    return processor.process(payment);
                })
                .toList();
    }
}

// GOOD: ThreadLocal for thread-confined state (use sparingly)
public class RequestContext {
    private static final ThreadLocal<String> correlationId = new ThreadLocal<>();

    public static void setCorrelationId(String id) {
        correlationId.set(id);
    }

    public static String getCorrelationId() {
        return correlationId.get();
    }

    public static void clear() {
        correlationId.remove(); // Always clean up!
    }
}

3. Atomic Variables

Atomic variables provide lock-free thread-safe operations on single variables using CPU-level compare-and-swap (CAS) instructions. For simple updates like counters they're typically faster than locking because a failed update simply retries instead of blocking the thread.

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReference;

@Service
public class PaymentMetrics {

    private final AtomicLong totalPayments = new AtomicLong(0);
    private final AtomicLong successfulPayments = new AtomicLong(0);
    private final AtomicReference<Money> totalAmount = new AtomicReference<>(Money.ZERO);

    public void recordPayment(PaymentResult result) {
        totalPayments.incrementAndGet(); // Thread-safe increment using CAS

        if (result.isSuccessful()) {
            successfulPayments.incrementAndGet();

            // Atomic update with retry loop (CAS-based)
            totalAmount.updateAndGet(current ->
                    current.add(result.getAmount())
            );
        }
    }

    public long getTotalPayments() {
        return totalPayments.get();
    }

    public double getSuccessRate() {
        long total = totalPayments.get();
        if (total == 0) return 0.0;
        return (double) successfulPayments.get() / total;
    }
}

How CAS Works: Compare-And-Swap atomically reads a value, compares it to an expected value, and if equal, writes a new value. If the comparison fails (another thread modified it), the operation retries. This avoids locks but can spin under high contention.
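The compare step is easy to see in isolation with compareAndSet: the write lands only if the current value still equals the expected one. A minimal single-threaded sketch (class name is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(10);

        // Expected value matches the current value -> swap succeeds
        System.out.println(value.compareAndSet(10, 20)); // true
        System.out.println(value.get());                 // 20

        // Expected value is stale (as if another thread changed it) -> swap fails, no write
        System.out.println(value.compareAndSet(10, 30)); // false
        System.out.println(value.get());                 // still 20
    }
}
```

Methods like incrementAndGet and updateAndGet wrap exactly this primitive in a read-compute-CAS retry loop.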

4. Concurrent Collections

The java.util.concurrent package provides thread-safe collections optimized for concurrent access.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CopyOnWriteArrayList;

@Service
public class PaymentCache {

    // Thread-safe map - since Java 8, updates use CAS plus fine-grained
    // per-bin locking, so threads touching different keys rarely contend
    private final ConcurrentHashMap<String, Payment> cache = new ConcurrentHashMap<>();

    // Lock-free thread-safe queue - uses CAS operations
    // Ideal for producer-consumer patterns
    private final ConcurrentLinkedQueue<Payment> pendingQueue = new ConcurrentLinkedQueue<>();

    // Copy-on-write list for observers (read-heavy, write-rarely)
    // Every write creates a new copy - expensive for frequent writes
    private final CopyOnWriteArrayList<PaymentObserver> observers = new CopyOnWriteArrayList<>();

    private final PaymentRepository paymentRepository;

    public PaymentCache(PaymentRepository paymentRepository) {
        this.paymentRepository = paymentRepository;
    }

    public Payment getPayment(String id) {
        // computeIfAbsent is atomic - only one thread loads the value
        return cache.computeIfAbsent(id, this::loadFromDatabase);
    }

    public void enqueue(Payment payment) {
        pendingQueue.offer(payment);
    }

    public Payment dequeue() {
        return pendingQueue.poll();
    }

    public void addObserver(PaymentObserver observer) {
        observers.add(observer);
    }

    public void notifyObservers(Payment payment) {
        // Safe to iterate while another thread modifies the list
        for (PaymentObserver observer : observers) {
            observer.onPayment(payment);
        }
    }

    private Payment loadFromDatabase(String id) {
        return paymentRepository.findById(id)
                .orElseThrow(() -> new PaymentNotFoundException(id));
    }
}

Collection Selection Guide:

  • ConcurrentHashMap: High-throughput read/write operations
  • CopyOnWriteArrayList: Frequent iteration, rare modification (event listeners)
  • ConcurrentLinkedQueue: Producer-consumer patterns, non-blocking
  • BlockingQueue implementations: When you need blocking operations (wait for items)
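BlockingQueue is listed above but not demonstrated. A minimal producer-consumer sketch using ArrayBlockingQueue (item names are illustrative): put() blocks when the queue is full, take() blocks when it is empty, so neither side needs explicit locks or conditions.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue with capacity 2
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put("payment-" + i); // blocks while the queue already holds 2 items
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        for (int i = 1; i <= 5; i++) {
            System.out.println(queue.take()); // blocks until an item is available
        }
        producer.join();
    }
}
```

The bounded capacity also provides backpressure: a fast producer is automatically throttled to the consumer's pace.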

5. Synchronized Blocks

The synchronized keyword provides mutual exclusion - only one thread at a time can hold a given monitor, so blocks guarded by the same lock never run concurrently. However, it's relatively heavyweight under contention and can become a bottleneck.

@Service
public class AccountBalanceService {

    private final Map<String, Money> balances = new HashMap<>();
    private final Object lock = new Object();

    public void transfer(String fromAccount, String toAccount, Money amount) {
        // Hold the lock only for the duration of the critical section
        synchronized (lock) {
            Money fromBalance = balances.get(fromAccount);
            Money toBalance = balances.get(toAccount);

            if (fromBalance.isLessThan(amount)) {
                throw new InsufficientBalanceException(fromAccount);
            }

            balances.put(fromAccount, fromBalance.subtract(amount));
            balances.put(toAccount, toBalance.add(amount));
        }
    }

    // BAD: A synchronized method locks 'this' - a different monitor than 'lock' -
    // so it doesn't exclude transfer() at all and this read can race with writes
    public synchronized Money getBalance(String account) {
        return balances.get(account);
    }

    // GOOD: Use a concurrent map instead - no synchronization needed
    private final ConcurrentHashMap<String, Money> concurrentBalances = new ConcurrentHashMap<>();

    public Money getBalanceConcurrent(String account) {
        return concurrentBalances.get(account); // Lock-free read
    }
}

When to Use synchronized:

  • Multiple variables that must be updated atomically together
  • When concurrent collections don't provide the needed semantics
  • Legacy code integration

When to Avoid:

  • Simple counters (use AtomicLong)
  • Independent read/write operations (use ConcurrentHashMap)
  • Long-running operations inside the synchronized block
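As a quick sanity check of the "simple counters" advice, a sketch (class name is illustrative) in which ten threads each perform 100,000 lock-free increments on one AtomicLong; the final count is exact, whereas a plain long counter++ would typically lose updates:

```java
import java.util.concurrent.atomic.AtomicLong;

public class AtomicCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicLong counter = new AtomicLong();

        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    counter.incrementAndGet(); // lock-free CAS increment
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();

        System.out.println(counter.get()); // 1000000 - no increments lost
    }
}
```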

Thread Synchronization Primitives

Beyond simple locks, Java provides sophisticated synchronization primitives for coordinating thread execution and limiting resource access.

Semaphores for Resource Limiting

Semaphores control how many threads can access a resource concurrently by maintaining a count of available "permits."

import java.time.Duration;
import java.util.Optional;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

@Service
public class RateLimitedPaymentService {

    // Limit to 10 concurrent external API calls
    private final Semaphore apiCallSemaphore = new Semaphore(10);
    private final ExternalPaymentApi paymentApi;

    public RateLimitedPaymentService(ExternalPaymentApi paymentApi) {
        this.paymentApi = paymentApi;
    }

    public PaymentResult processPayment(Payment payment) throws InterruptedException {
        // Acquire permit (blocks if all 10 permits in use)
        apiCallSemaphore.acquire();
        try {
            // Only 10 threads can execute this block concurrently
            return paymentApi.submitPayment(payment);
        } finally {
            // Always release permit, even on exception
            apiCallSemaphore.release();
        }
    }

    // Timed variant - gives up if no permit becomes available within the timeout
    public Optional<PaymentResult> processPaymentWithTimeout(
            Payment payment,
            Duration timeout) {
        try {
            if (apiCallSemaphore.tryAcquire(timeout.toMillis(), TimeUnit.MILLISECONDS)) {
                try {
                    return Optional.of(paymentApi.submitPayment(payment));
                } finally {
                    apiCallSemaphore.release();
                }
            }
            log.warn("Could not acquire API call permit within {}", timeout);
            return Optional.empty();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new PaymentProcessingException("Interrupted waiting for API permit", e);
        }
    }
}

CountDownLatch for Thread Coordination

CountDownLatch makes threads wait until a set of operations complete. Threads call await() to block until the count reaches zero.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

@Service
public class PaymentBatchProcessor {

    public List<PaymentResult> processBatch(List<Payment> payments) throws InterruptedException {
        int threadCount = Math.min(payments.size(), 20);
        CountDownLatch latch = new CountDownLatch(threadCount);

        ConcurrentHashMap<String, PaymentResult> results = new ConcurrentHashMap<>();

        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            var partitions = partition(payments, threadCount); // split into threadCount chunks

            for (var partition : partitions) {
                executor.submit(() -> {
                    try {
                        for (var payment : partition) {
                            var result = processPayment(payment);
                            results.put(payment.getId(), result);
                        }
                    } finally {
                        latch.countDown(); // Signal this partition complete
                    }
                });
            }

            // Wait for all partitions to complete (with timeout)
            boolean completed = latch.await(30, TimeUnit.SECONDS);
            if (!completed) {
                throw new PaymentBatchTimeoutException("Batch processing exceeded 30 seconds");
            }

            return new ArrayList<>(results.values());
        }
    }
}

CyclicBarrier for Multi-Phase Coordination

CyclicBarrier makes threads wait until all have reached a common barrier point, then releases them all at once. Unlike CountDownLatch, barriers are reusable.

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.Executors;

public class ParallelReconciliationService {

    public ReconciliationReport reconcile(
            List<Transaction> internalTransactions,
            List<Transaction> externalTransactions) {

        int numThreads = 4;
        CyclicBarrier barrier = new CyclicBarrier(numThreads, () -> {
            // Barrier action: runs once after all threads reach barrier
            log.info("Phase completed, proceeding to next phase");
        });

        try (var executor = Executors.newFixedThreadPool(numThreads)) {
            for (int i = 0; i < numThreads; i++) {
                int threadId = i;
                executor.submit(() -> {
                    loadAndNormalizeData(threadId);
                    awaitBarrier(barrier); // Wait for all to finish phase 1

                    matchTransactions(threadId);
                    awaitBarrier(barrier); // Wait for all to finish phase 2

                    generateReports(threadId);
                });
            }
        }

        return aggregateResults();
    }

    private void awaitBarrier(CyclicBarrier barrier) {
        try {
            barrier.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore interrupt status
            throw new ReconciliationException("Interrupted at barrier", e);
        } catch (BrokenBarrierException e) {
            throw new ReconciliationException("Barrier broken by another party", e);
        }
    }
}

ReentrantLock with Conditions

ReentrantLock provides more flexibility than synchronized: tryLock (non-blocking), lockInterruptibly, and multiple condition variables.

import java.time.Duration;
import java.util.LinkedList;
import java.util.Optional;
import java.util.Queue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedPaymentQueue {

    private final Queue<Payment> queue = new LinkedList<>();
    private final int capacity;

    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition(); // Signaled when queue has space
    private final Condition notEmpty = lock.newCondition(); // Signaled when queue has items

    public BoundedPaymentQueue(int capacity) {
        this.capacity = capacity;
    }

    public void enqueue(Payment payment) throws InterruptedException {
        lock.lock();
        try {
            // Wait while queue is full
            while (queue.size() == capacity) {
                notFull.await(); // Releases lock and waits
            }

            queue.add(payment);
            notEmpty.signal(); // Wake up one waiting dequeue thread

        } finally {
            lock.unlock();
        }
    }

    public Payment dequeue() throws InterruptedException {
        lock.lock();
        try {
            // Wait while queue is empty
            while (queue.isEmpty()) {
                notEmpty.await(); // Releases lock and waits
            }

            Payment payment = queue.remove();
            notFull.signal(); // Wake up one waiting enqueue thread
            return payment;

        } finally {
            lock.unlock();
        }
    }

    // Timed variant - waits up to 'timeout' for the lock, but doesn't wait for items
    public Optional<Payment> tryDequeue(Duration timeout) {
        try {
            if (lock.tryLock(timeout.toMillis(), TimeUnit.MILLISECONDS)) {
                try {
                    return Optional.ofNullable(queue.poll());
                } finally {
                    lock.unlock();
                }
            }
            return Optional.empty();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return Optional.empty();
        }
    }
}

Phaser for Dynamic Multi-Phase Coordination

Phaser is a more flexible version of CyclicBarrier that supports dynamic registration/deregistration of parties.

import java.util.concurrent.Executors;
import java.util.concurrent.Phaser;

public class AdaptivePaymentProcessor {

    public void processPaymentsWithDynamicParallelism(List<Payment> payments) {
        // Initialize phaser with 1 party (main thread)
        Phaser phaser = new Phaser(1);

        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (Payment payment : payments) {
                // Register new party for each payment
                phaser.register();

                executor.submit(() -> {
                    try {
                        processPayment(payment);
                    } finally {
                        // Deregister when done
                        phaser.arriveAndDeregister();
                    }
                });
            }

            // Main thread waits for all payments to complete
            phaser.arriveAndAwaitAdvance();

            log.info("All payments processed in phase {}", phaser.getPhase());
        }
    }
}

Java Memory Model and Happens-Before

The Java Memory Model (JMM) defines when writes by one thread become visible to reads by another thread. Without synchronization, compilers and CPUs reorder operations aggressively for performance, causing surprising behaviors.

Volatile Variables

volatile provides visibility guarantees without locks: a write to a volatile variable establishes a happens-before relationship with every subsequent read of that variable. Individual volatile reads and writes are atomic (even for long and double), but compound operations like counter++ are not.

import java.util.concurrent.atomic.AtomicInteger;

public class VolatileExample {

    // BAD: Non-volatile flag can be cached, leading to infinite loops
    private boolean stopRequested = false;

    public void backgroundTask() {
        int i = 0;
        while (!stopRequested) { // May loop forever: the JIT may hoist the read, effectively "while (true)"
            i++;
        }
    }

    public void stop() {
        stopRequested = true; // May never become visible to backgroundTask()
    }

    // GOOD: Volatile ensures visibility across threads
    private volatile boolean stopRequestedVolatile = false;

    public void backgroundTaskVolatile() {
        int i = 0;
        while (!stopRequestedVolatile) { // Guaranteed to see updates
            i++;
        }
    }

    public void stopVolatile() {
        stopRequestedVolatile = true; // Visible to subsequent reads in any thread
    }

    // BAD: Volatile doesn't make increment atomic
    private volatile int counter = 0;

    public void increment() {
        counter++; // NOT ATOMIC: read-modify-write can interleave
    }

    // GOOD: Use AtomicInteger for atomic read-modify-write
    private final AtomicInteger atomicCounter = new AtomicInteger(0);

    public void incrementAtomic() {
        atomicCounter.incrementAndGet(); // Atomic
    }
}

Synchronized and Happens-Before

Entering a synchronized block establishes happens-before with all previous exits of that same monitor. This means writes inside synchronized blocks are visible to subsequent synchronized blocks on the same lock.

public class SynchronizedExample {

    private int value = 0; // Not volatile!
    private final Object lock = new Object();

    public void writer() {
        synchronized (lock) {
            value = 42;
            // All writes here happen-before the unlock
        } // Unlock establishes happens-before with later locks of 'lock'
    }

    public int reader() {
        synchronized (lock) { // Acquiring the lock establishes happens-before with prior unlocks
            // Sees all writes from previous synchronized blocks on same lock
            return value; // Guaranteed to see 42 if writer() executed first
        }
    }

    // BAD: Reading without synchronization loses visibility guarantee
    public int unsafeReader() {
        return value; // May see stale value (0) even after writer() completes
    }
}

Final Fields and Safe Publication

Final fields have special guarantees: if an object is safely published, all threads see final field values initialized in the constructor. This is why immutable objects are thread-safe without synchronization.

// GOOD: Immutable object with final fields
public final class ImmutablePayment {
    private final String id;
    private final Money amount;
    private final Instant createdAt;

    public ImmutablePayment(String id, Money amount, Instant createdAt) {
        this.id = id;
        this.amount = amount;
        this.createdAt = createdAt;
        // All final fields initialized before constructor returns
    }
    // JMM guarantees all threads see fully initialized final fields
}

// BAD: This escapes during construction
public class UnsafePublication {
    private final int value;

    public UnsafePublication(EventBus eventBus) {
        eventBus.register(this); // THIS ESCAPES - another thread may see it!
        this.value = 42; // May not be visible yet
    }
}

// GOOD: Static factory method ensures safe publication
public class SafePublication {
    private final int value;

    private SafePublication(int value) {
        this.value = value;
    }

    public static SafePublication create(EventBus eventBus, int value) {
        SafePublication obj = new SafePublication(value);
        // Object fully constructed before sharing
        eventBus.register(obj);
        return obj;
    }
}

Happens-Before Rules Summary

Key happens-before orderings:

  1. Program order: Within a single thread, statements happen-before subsequent statements
  2. Monitor lock: Unlocking a monitor happens-before every subsequent lock of that monitor
  3. Volatile: Write to volatile variable happens-before subsequent reads of that variable
  4. Thread start: thread.start() happens-before any action in the started thread
  5. Thread termination: Actions in a thread happen-before another thread detects termination (via join())
  6. Transitivity: If A happens-before B and B happens-before C, then A happens-before C
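Rules 4 and 5 are what make the everyday fork/join idiom safe: a plain non-volatile field written before start() is visible to the new thread, and that thread's writes are visible after join() returns. A minimal sketch (class name is illustrative):

```java
public class StartJoinVisibility {
    static int input;  // deliberately NOT volatile
    static int result;

    public static void main(String[] args) throws InterruptedException {
        input = 21; // happens-before the child's actions, via Thread.start()

        Thread worker = new Thread(() -> result = input * 2);
        worker.start();
        worker.join(); // the child's writes happen-before join() returning

        System.out.println(result); // 42 - guaranteed visible, no volatile or locks needed
    }
}
```

Without the join(), reading result would be a data race and could print a stale 0.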

Common Concurrency Pitfalls

1. Unhandled Exceptions in Threads

Exceptions thrown inside tasks don't propagate to the submitting thread. With execute(), they go to the thread's uncaught-exception handler; with submit(), they're captured in the returned Future and silently lost if you never call get().

// BAD: Exception lost
executor.submit(() -> {
    throw new RuntimeException("Payment failed");
}); // submit() stores the exception in the Future; discarding the Future loses it

// GOOD: Handle exceptions inside the task
executor.submit(() -> {
    try {
        processPayment(payment);
    } catch (Exception e) {
        log.error("Payment processing failed", e);
        // Handle or rethrow as appropriate
    }
});

// GOOD: Use the Future to detect exceptions
Future<?> future = executor.submit(() -> processPayment(payment));
try {
    future.get(); // Throws ExecutionException if task failed
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
} catch (ExecutionException e) {
    log.error("Task failed", e.getCause());
}

2. Deadlocks

Deadlocks occur when threads wait for each other's locks in a cycle.

// BAD: Deadlock prone - inconsistent lock ordering
public void transfer(Account from, Account to, Money amount) {
    synchronized (from) {
        synchronized (to) {
            // Transfer...
        }
    }
}
// Thread 1: transfer(A, B, 100) - locks A, waits for B
// Thread 2: transfer(B, A, 50) - locks B, waits for A → DEADLOCK

// GOOD: Consistent lock ordering
public void transfer(Account from, Account to, Money amount) {
    Account first = from.getId().compareTo(to.getId()) < 0 ? from : to;
    Account second = first == from ? to : from;

    synchronized (first) {
        synchronized (second) {
            // Transfer...
        }
    }
}

3. Check-Then-Act Race Conditions

// BAD: Race condition between check and act
if (!cache.containsKey(key)) {
    // Another thread may insert here!
    cache.put(key, expensiveComputation(key));
}

// GOOD: Atomic check-then-act
cache.computeIfAbsent(key, k -> expensiveComputation(k));

Testing Concurrent Code

Testing concurrent code requires stress tests and specialized tools because bugs may only appear under specific timing conditions.

Stress Testing

@Test
void processPayments_concurrent_noDataRaces() throws Exception {
    var payments = IntStream.range(0, 1000)
            .mapToObj(i -> createPayment("PAY-" + i))
            .toList();

    var executor = Executors.newVirtualThreadPerTaskExecutor();

    try {
        List<Future<PaymentResult>> futures = payments.stream()
                .map(payment -> executor.submit(() -> paymentService.process(payment)))
                .toList();

        List<PaymentResult> results = futures.stream()
                .map(future -> {
                    try {
                        return future.get();
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                })
                .toList();

        // Verify all processed successfully
        assertThat(results).hasSize(1000);
        assertThat(results).allMatch(PaymentResult::isSuccessful);

        // Verify no duplicate transaction IDs (data race indicator)
        var transactionIds = results.stream()
                .map(PaymentResult::getTransactionId)
                .collect(Collectors.toSet());
        assertThat(transactionIds).hasSize(1000);

    } finally {
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
    }
}

Summary

Key Takeaways

  1. Immutability is the best thread safety - prefer records and final fields over synchronization
  2. Atomic variables and concurrent collections eliminate need for explicit synchronization
  3. Understand happens-before to reason about visibility across threads
  4. Volatile provides visibility, not atomicity for compound operations
  5. Avoid synchronized blocks when possible - they're heavyweight; prefer concurrent collections
  6. Use semaphores for resource limiting, latches/barriers for coordination
  7. Test concurrent code with stress tests to expose race conditions
  8. Always handle exceptions in threads - they're silently swallowed otherwise

Next Steps: Review Java Concurrency Advanced for virtual threads, structured concurrency, and reactive patterns.