
AWS SDK Integration

AWS SDKs enable programmatic interaction with AWS services from application code. Instead of managing infrastructure through web consoles or CLI tools, applications use SDKs to directly call AWS APIs: uploading files to S3, publishing messages to SQS, querying DynamoDB, or invoking Lambda functions. The SDK handles authentication, request signing, retry logic, and response parsing.

Modern AWS SDKs (Java SDK v2, JavaScript SDK v3) are designed for performance, modularity, and cloud-native patterns. They support async operations, connection pooling, and fine-grained control over HTTP clients. Proper SDK integration requires understanding credential management (IAM roles, not hardcoded keys), error handling (retries with exponential backoff), and testing strategies (LocalStack for local development).

This guide covers AWS SDK integration in Spring Boot (Java SDK v2) and TypeScript/Node.js (JavaScript SDK v3) - the primary backend and Lambda/frontend technologies. For mobile-specific SDK usage, see framework-specific guides: Android SDK covers AWS SDK for Android, iOS SDK covers AWS SDK for Swift.


Core Principles

  1. Use IAM roles, never hardcoded credentials - In AWS (EC2, ECS, Lambda), use instance/task roles; locally, use named profiles
  2. Handle errors and retries properly - AWS services throttle requests; implement exponential backoff and respect retry-after headers
  3. Reuse SDK clients - Client initialization is expensive; create once, reuse across requests
  4. Use async operations - Non-blocking calls improve throughput (especially for Lambda and high-concurrency applications)
  5. Test with LocalStack - Emulate AWS services locally for fast, cost-free development and testing
  6. Monitor SDK metrics - Track API call failures, throttling, latency to identify issues

AWS SDK Versions

Java SDK v2

Current version: AWS SDK for Java 2.x (released 2018, actively maintained)

Key improvements over v1:

  • Async client support (non-blocking operations)
  • Better performance (reduced memory footprint, faster startup)
  • Modular dependencies (include only services you use)
  • Improved credential provider chain
  • Built on Java 8+ features (CompletableFuture, streams)

When to use: All new Java/Spring Boot projects. Migrate from v1 when possible (v1 is in maintenance mode).

Documentation: https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/

JavaScript SDK v3

Current version: AWS SDK for JavaScript 3.x (released 2020, actively maintained)

Key improvements over v2:

  • Modular architecture (import only the clients you need → smaller bundle sizes)
  • First-class TypeScript support
  • Middleware stack for customization
  • Better tree-shaking for frontend applications

When to use: All new Node.js, TypeScript, and browser applications. Migrate from v2 (smaller bundles, better TypeScript support).

Documentation: https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/


Spring Boot + AWS SDK v2

Dependencies

// build.gradle
dependencies {
    implementation platform('software.amazon.awssdk:bom:2.21.0') // BOM for version management

    // Include only services you use
    implementation 'software.amazon.awssdk:s3'
    implementation 'software.amazon.awssdk:sqs'
    implementation 'software.amazon.awssdk:sns'
    implementation 'software.amazon.awssdk:dynamodb'
    implementation 'software.amazon.awssdk:secretsmanager'

    // Optional: DynamoDB Enhanced Client (ORM-like)
    implementation 'software.amazon.awssdk:dynamodb-enhanced'

    // Optional: Apache HTTP Client (recommended for production)
    implementation 'software.amazon.awssdk:apache-client'
}

BOM (Bill of Materials) manages versions for all AWS SDK modules, preventing version conflicts. Import the BOM, then declare service dependencies without version numbers.

Configuration

Centralize AWS SDK configuration in a Spring @Configuration class:

package com.example.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.http.SdkHttpClient;
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

import java.time.Duration;

@Configuration
public class AwsConfig {

    @Value("${aws.region:us-east-1}")
    private String awsRegion;

    /**
     * Default credentials provider chain:
     * 1. Java system properties (aws.accessKeyId, aws.secretAccessKey)
     * 2. Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
     * 3. Web Identity Token (for EKS with IRSA)
     * 4. Shared credentials file (~/.aws/credentials)
     * 5. ECS container credentials
     * 6. EC2 instance profile credentials
     */
    @Bean
    public DefaultCredentialsProvider credentialsProvider() {
        return DefaultCredentialsProvider.create();
    }

    /**
     * Apache HTTP client with connection pooling and timeouts.
     * Reuse across all AWS service clients for efficiency.
     * Note: the builder returns the SdkHttpClient interface.
     */
    @Bean
    public SdkHttpClient httpClient() {
        return ApacheHttpClient.builder()
                .maxConnections(100)                       // Connection pool size
                .connectionTimeout(Duration.ofSeconds(10)) // Connect timeout
                .socketTimeout(Duration.ofSeconds(30))     // Read timeout
                .build();
    }

    @Bean
    public S3Client s3Client(DefaultCredentialsProvider credentialsProvider,
                             SdkHttpClient httpClient) {
        return S3Client.builder()
                .region(Region.of(awsRegion))
                .credentialsProvider(credentialsProvider)
                .httpClient(httpClient)
                .build();
    }

    @Bean
    public SqsClient sqsClient(DefaultCredentialsProvider credentialsProvider,
                               SdkHttpClient httpClient) {
        return SqsClient.builder()
                .region(Region.of(awsRegion))
                .credentialsProvider(credentialsProvider)
                .httpClient(httpClient)
                .build();
    }

    @Bean
    public SnsClient snsClient(DefaultCredentialsProvider credentialsProvider,
                               SdkHttpClient httpClient) {
        return SnsClient.builder()
                .region(Region.of(awsRegion))
                .credentialsProvider(credentialsProvider)
                .httpClient(httpClient)
                .build();
    }

    @Bean
    public DynamoDbClient dynamoDbClient(DefaultCredentialsProvider credentialsProvider,
                                         SdkHttpClient httpClient) {
        return DynamoDbClient.builder()
                .region(Region.of(awsRegion))
                .credentialsProvider(credentialsProvider)
                .httpClient(httpClient)
                .build();
    }
}

Why separate HTTP client bean: Sharing a single HTTP client across all SDK clients enables connection pooling. Without this, each SDK client creates its own connection pool, wasting resources.

Why ApacheHttpClient over default: Default UrlConnectionHttpClient is sufficient for simple use cases, but ApacheHttpClient provides better connection pooling, timeout configuration, and performance under load.

See Spring Boot General Guidelines for dependency injection patterns and configuration best practices.

S3 Client Usage

Upload and download files from S3. For comprehensive S3 patterns including presigned URLs, multipart uploads, and lifecycle management, see File Storage (S3).

package com.example.storage;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.core.sync.ResponseTransformer;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;

import java.io.InputStream;

@Service
public class S3StorageService {

    private final S3Client s3Client;
    private final String bucketName;

    public S3StorageService(S3Client s3Client,
                            @Value("${aws.s3.bucket}") String bucketName) {
        this.s3Client = s3Client;
        this.bucketName = bucketName;
    }

    /**
     * Upload file to S3.
     */
    public String uploadFile(String key, InputStream inputStream, long contentLength, String contentType) {
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(key)
                .contentType(contentType)
                .contentLength(contentLength)
                .serverSideEncryption(ServerSideEncryption.AES256) // Encrypt at rest
                .build();

        s3Client.putObject(request, RequestBody.fromInputStream(inputStream, contentLength));

        return String.format("s3://%s/%s", bucketName, key);
    }

    /**
     * Download file from S3.
     */
    public InputStream downloadFile(String key) {
        GetObjectRequest request = GetObjectRequest.builder()
                .bucket(bucketName)
                .key(key)
                .build();

        return s3Client.getObject(request, ResponseTransformer.toInputStream());
    }

    /**
     * Check if file exists.
     */
    public boolean fileExists(String key) {
        try {
            HeadObjectRequest request = HeadObjectRequest.builder()
                    .bucket(bucketName)
                    .key(key)
                    .build();

            s3Client.headObject(request);
            return true;
        } catch (NoSuchKeyException e) {
            return false;
        }
    }

    /**
     * Delete file from S3.
     */
    public void deleteFile(String key) {
        DeleteObjectRequest request = DeleteObjectRequest.builder()
                .bucket(bucketName)
                .key(key)
                .build();

        s3Client.deleteObject(request);
    }
}

Key points:

  • Always specify serverSideEncryption for sensitive data
  • Use headObject to check existence (cheaper than getObject)
  • Handle NoSuchKeyException for missing files

SQS Client Usage

Send and receive messages from SQS queues. For comprehensive messaging patterns, see AWS Messaging (SQS/SNS).

package com.example.messaging;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.*;

import java.util.List;

@Service
public class SqsMessageService {

    private static final Logger logger = LoggerFactory.getLogger(SqsMessageService.class);

    private final SqsClient sqsClient;
    private final ObjectMapper objectMapper;
    private final String queueUrl;

    public SqsMessageService(SqsClient sqsClient,
                             ObjectMapper objectMapper,
                             @Value("${aws.sqs.queue-url}") String queueUrl) {
        this.sqsClient = sqsClient;
        this.objectMapper = objectMapper;
        this.queueUrl = queueUrl;
    }

    /**
     * Send message to SQS queue.
     */
    public void sendMessage(Object messageBody) {
        try {
            String messageJson = objectMapper.writeValueAsString(messageBody);

            SendMessageRequest request = SendMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .messageBody(messageJson)
                    .build();

            SendMessageResponse response = sqsClient.sendMessage(request);
            logger.info("Message sent with ID: {}", response.messageId());
        } catch (Exception e) {
            logger.error("Failed to send message to SQS", e);
            // MessagingException: application-specific runtime exception
            throw new MessagingException("Failed to send message", e);
        }
    }

    /**
     * Receive messages from SQS queue.
     */
    public List<Message> receiveMessages(int maxMessages, int waitTimeSeconds) {
        ReceiveMessageRequest request = ReceiveMessageRequest.builder()
                .queueUrl(queueUrl)
                .maxNumberOfMessages(maxMessages)    // 1-10
                .waitTimeSeconds(waitTimeSeconds)    // Long polling (0-20)
                .visibilityTimeout(30)               // 30 seconds to process
                .messageAttributeNames("All")        // Include all attributes
                .build();

        ReceiveMessageResponse response = sqsClient.receiveMessage(request);
        return response.messages();
    }

    /**
     * Delete message after successful processing.
     */
    public void deleteMessage(String receiptHandle) {
        DeleteMessageRequest request = DeleteMessageRequest.builder()
                .queueUrl(queueUrl)
                .receiptHandle(receiptHandle)
                .build();

        sqsClient.deleteMessage(request);
    }

    /**
     * Process messages with automatic deletion on success.
     */
    public <T> void processMessages(Class<T> messageType, MessageProcessor<T> processor) {
        List<Message> messages = receiveMessages(10, 20); // Long polling

        for (Message message : messages) {
            try {
                T payload = objectMapper.readValue(message.body(), messageType);
                processor.process(payload);

                // Delete message after successful processing
                deleteMessage(message.receiptHandle());
            } catch (Exception e) {
                logger.error("Failed to process message: {}", message.messageId(), e);
                // Message will become visible again after visibility timeout
            }
        }
    }

    @FunctionalInterface
    public interface MessageProcessor<T> {
        void process(T message) throws Exception;
    }
}

Long polling (waitTimeSeconds): Reduces empty responses and costs. Queue waits up to 20 seconds for messages before returning, instead of immediately returning empty.

Visibility timeout: After receiving a message, it's hidden from other consumers for this duration. If not deleted, message becomes visible again (reprocessed). Set based on expected processing time.
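The interplay of receive, visibility timeout, and delete can be illustrated with a tiny in-memory model. This is purely illustrative (names like `MiniQueue` are invented here, not SDK APIs), but it shows why an unprocessed message reappears:

```typescript
// In-memory sketch of SQS visibility-timeout semantics (illustrative only).
type QueuedMessage = { body: string; visibleAt: number };

class MiniQueue {
  private messages: QueuedMessage[] = [];

  send(body: string): void {
    this.messages.push({ body, visibleAt: 0 });
  }

  // Receiving hides the message from other consumers until the timeout elapses.
  receive(nowMs: number, visibilityTimeoutMs: number): QueuedMessage | undefined {
    const msg = this.messages.find((m) => m.visibleAt <= nowMs);
    if (msg) msg.visibleAt = nowMs + visibilityTimeoutMs;
    return msg;
  }

  // Deleting acknowledges successful processing; otherwise the message returns.
  delete(msg: QueuedMessage): void {
    this.messages = this.messages.filter((m) => m !== msg);
  }
}

const q = new MiniQueue();
q.send("order-created");

const first = q.receive(0, 30000);      // consumer A receives; message hidden
const hidden = q.receive(1000, 30000);  // consumer B sees nothing (still hidden)
const redelivered = q.receive(31000, 30000); // timeout elapsed: redelivered
```

If `delete` is never called, the message cycles forever (or until a DLQ policy catches it), which is why the processing loops above delete only after the handler succeeds.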

DynamoDB Client Usage

Query and write data to DynamoDB tables. For DynamoDB data modeling, see AWS Databases.

package com.example.data;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Repository;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.*;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

@Repository
public class UserRepository {

    private final DynamoDbClient dynamoDbClient;
    private final String tableName;

    public UserRepository(DynamoDbClient dynamoDbClient,
                          @Value("${aws.dynamodb.users-table}") String tableName) {
        this.dynamoDbClient = dynamoDbClient;
        this.tableName = tableName;
    }

    /**
     * Get user by ID (partition key).
     */
    public Map<String, AttributeValue> getUser(String userId) {
        GetItemRequest request = GetItemRequest.builder()
                .tableName(tableName)
                .key(Map.of("userId", AttributeValue.builder().s(userId).build()))
                .build();

        GetItemResponse response = dynamoDbClient.getItem(request);
        return response.item();
    }

    /**
     * Save user to DynamoDB.
     */
    public void saveUser(String userId, String email, String name) {
        Map<String, AttributeValue> item = new HashMap<>();
        item.put("userId", AttributeValue.builder().s(userId).build());
        item.put("email", AttributeValue.builder().s(email).build());
        item.put("name", AttributeValue.builder().s(name).build());
        item.put("createdAt", AttributeValue.builder().n(String.valueOf(System.currentTimeMillis())).build());

        PutItemRequest request = PutItemRequest.builder()
                .tableName(tableName)
                .item(item)
                .build();

        dynamoDbClient.putItem(request);
    }

    /**
     * Query users by email (GSI).
     */
    public List<Map<String, AttributeValue>> findUsersByEmail(String email) {
        QueryRequest request = QueryRequest.builder()
                .tableName(tableName)
                .indexName("email-index") // Global Secondary Index
                .keyConditionExpression("email = :email")
                .expressionAttributeValues(Map.of(
                        ":email", AttributeValue.builder().s(email).build()
                ))
                .build();

        QueryResponse response = dynamoDbClient.query(request);
        return response.items();
    }
}

DynamoDB Enhanced Client (alternative): For object mapping (like JPA):

@DynamoDbBean
public class User {
    private String userId;
    private String email;
    private String name;

    @DynamoDbPartitionKey
    public String getUserId() { return userId; }
    public void setUserId(String userId) { this.userId = userId; }

    // ... other getters/setters
}

// Repository with Enhanced Client
@Repository
public class UserRepository {
    private final DynamoDbEnhancedClient enhancedClient;
    private final DynamoDbTable<User> userTable;

    public UserRepository(DynamoDbClient dynamoDbClient) {
        this.enhancedClient = DynamoDbEnhancedClient.builder()
                .dynamoDbClient(dynamoDbClient)
                .build();
        this.userTable = enhancedClient.table("users", TableSchema.fromBean(User.class));
    }

    public User getUser(String userId) {
        return userTable.getItem(Key.builder().partitionValue(userId).build());
    }

    public void saveUser(User user) {
        userTable.putItem(user);
    }
}

Enhanced Client reduces boilerplate for CRUD operations.


TypeScript/Node.js + AWS SDK v3

Dependencies

{
  "dependencies": {
    "@aws-sdk/client-s3": "^3.450.0",
    "@aws-sdk/client-sqs": "^3.450.0",
    "@aws-sdk/client-sns": "^3.450.0",
    "@aws-sdk/client-dynamodb": "^3.450.0",
    "@aws-sdk/lib-dynamodb": "^3.450.0",
    "@aws-sdk/s3-request-presigner": "^3.450.0"
  },
  "devDependencies": {
    "@types/node": "^20.0.0",
    "typescript": "^5.0.0"
  }
}

Modular imports: Only install clients for services you use. This keeps bundle sizes small (critical for Lambda and frontend applications).

S3 Client

// src/services/s3.service.ts
import { S3Client, PutObjectCommand, GetObjectCommand, HeadObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { Readable } from 'stream';

// Create client once, reuse across requests
const s3Client = new S3Client({
  region: process.env.AWS_REGION || 'us-east-1',
  // Credentials automatically loaded from:
  // 1. Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
  // 2. Shared credentials file (~/.aws/credentials)
  // 3. ECS task role (if running in ECS)
  // 4. Lambda execution role (if running in Lambda)
});

export class S3Service {
  private bucketName: string;

  constructor(bucketName: string) {
    this.bucketName = bucketName;
  }

  /**
   * Upload file to S3.
   */
  async uploadFile(key: string, body: Buffer | Readable, contentType: string): Promise<string> {
    const command = new PutObjectCommand({
      Bucket: this.bucketName,
      Key: key,
      Body: body,
      ContentType: contentType,
      ServerSideEncryption: 'AES256',
    });

    await s3Client.send(command);
    return `s3://${this.bucketName}/${key}`;
  }

  /**
   * Download file from S3.
   */
  async downloadFile(key: string): Promise<Readable> {
    const command = new GetObjectCommand({
      Bucket: this.bucketName,
      Key: key,
    });

    const response = await s3Client.send(command);
    return response.Body as Readable;
  }

  /**
   * Generate presigned URL for temporary access.
   * Client can upload directly to S3 without server proxy.
   */
  async generatePresignedUploadUrl(key: string, expiresIn: number = 3600): Promise<string> {
    const command = new PutObjectCommand({
      Bucket: this.bucketName,
      Key: key,
      ServerSideEncryption: 'AES256',
    });

    return getSignedUrl(s3Client, command, { expiresIn });
  }

  /**
   * Check if file exists.
   */
  async fileExists(key: string): Promise<boolean> {
    try {
      const command = new HeadObjectCommand({
        Bucket: this.bucketName,
        Key: key,
      });

      await s3Client.send(command);
      return true;
    } catch (error: any) {
      if (error.name === 'NotFound') {
        return false;
      }
      throw error;
    }
  }
}

Presigned URLs: Allow clients to upload directly to S3 without proxying through your server. Reduces server load and data transfer costs. See File Storage (S3) for detailed patterns.

SQS Client

// src/services/sqs.service.ts
import { SQSClient, SendMessageCommand, ReceiveMessageCommand, DeleteMessageCommand, Message } from '@aws-sdk/client-sqs';

const sqsClient = new SQSClient({
  region: process.env.AWS_REGION || 'us-east-1',
});

export class SqsService {
  private queueUrl: string;

  constructor(queueUrl: string) {
    this.queueUrl = queueUrl;
  }

  /**
   * Send message to SQS queue.
   */
  async sendMessage(messageBody: object): Promise<string> {
    const command = new SendMessageCommand({
      QueueUrl: this.queueUrl,
      MessageBody: JSON.stringify(messageBody),
    });

    const response = await sqsClient.send(command);
    return response.MessageId!;
  }

  /**
   * Receive messages from queue with long polling.
   */
  async receiveMessages(maxMessages: number = 10, waitTimeSeconds: number = 20): Promise<Message[]> {
    const command = new ReceiveMessageCommand({
      QueueUrl: this.queueUrl,
      MaxNumberOfMessages: maxMessages,
      WaitTimeSeconds: waitTimeSeconds,
      VisibilityTimeout: 30,
      MessageAttributeNames: ['All'],
    });

    const response = await sqsClient.send(command);
    return response.Messages || [];
  }

  /**
   * Delete message after processing.
   */
  async deleteMessage(receiptHandle: string): Promise<void> {
    const command = new DeleteMessageCommand({
      QueueUrl: this.queueUrl,
      ReceiptHandle: receiptHandle,
    });

    await sqsClient.send(command);
  }

  /**
   * Process messages with automatic retry and deletion.
   */
  async processMessages<T>(
    handler: (message: T) => Promise<void>,
    maxMessages: number = 10
  ): Promise<void> {
    const messages = await this.receiveMessages(maxMessages);

    for (const message of messages) {
      try {
        const payload = JSON.parse(message.Body!) as T;
        await handler(payload);

        // Delete message on success
        await this.deleteMessage(message.ReceiptHandle!);
      } catch (error) {
        console.error('Failed to process message:', message.MessageId, error);
        // Message will become visible again after visibility timeout
      }
    }
  }
}

Lambda integration:

// lambda/sqs-processor.ts
import { SQSEvent, SQSHandler } from 'aws-lambda';

export const handler: SQSHandler = async (event: SQSEvent) => {
  for (const record of event.Records) {
    const message = JSON.parse(record.body);

    try {
      await processMessage(message);
      // No need to delete - Lambda automatically deletes on success
    } catch (error) {
      console.error('Processing failed:', error);
      throw error; // Lambda retries or sends to DLQ
    }
  }
};

async function processMessage(message: any): Promise<void> {
  // Business logic here
  console.log('Processing:', message);
}

Lambda automatically polls SQS, handles batching, and manages message deletion. No manual polling loop needed.

DynamoDB Client

// src/services/dynamodb.service.ts
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import {
  DynamoDBDocumentClient,
  GetCommand,
  PutCommand,
  QueryCommand,
  UpdateCommand,
  DeleteCommand,
} from '@aws-sdk/lib-dynamodb';

const client = new DynamoDBClient({
  region: process.env.AWS_REGION || 'us-east-1',
});

// DocumentClient provides simpler API (no AttributeValue wrapping)
const docClient = DynamoDBDocumentClient.from(client);

export class UserRepository {
  private tableName: string;

  constructor(tableName: string) {
    this.tableName = tableName;
  }

  /**
   * Get user by ID.
   */
  async getUser(userId: string): Promise<any | null> {
    const command = new GetCommand({
      TableName: this.tableName,
      Key: { userId },
    });

    const response = await docClient.send(command);
    return response.Item || null;
  }

  /**
   * Save user.
   */
  async saveUser(user: { userId: string; email: string; name: string }): Promise<void> {
    const command = new PutCommand({
      TableName: this.tableName,
      Item: {
        ...user,
        createdAt: Date.now(),
      },
    });

    await docClient.send(command);
  }

  /**
   * Query users by email (using GSI).
   */
  async findUsersByEmail(email: string): Promise<any[]> {
    const command = new QueryCommand({
      TableName: this.tableName,
      IndexName: 'email-index',
      KeyConditionExpression: 'email = :email',
      ExpressionAttributeValues: {
        ':email': email,
      },
    });

    const response = await docClient.send(command);
    return response.Items || [];
  }

  /**
   * Update user fields.
   */
  async updateUser(userId: string, updates: Partial<{ email: string; name: string }>): Promise<void> {
    const updateExpressions: string[] = [];
    const expressionAttributeValues: Record<string, any> = {};

    Object.entries(updates).forEach(([key, value]) => {
      updateExpressions.push(`${key} = :${key}`);
      expressionAttributeValues[`:${key}`] = value;
    });

    const command = new UpdateCommand({
      TableName: this.tableName,
      Key: { userId },
      UpdateExpression: `SET ${updateExpressions.join(', ')}`,
      ExpressionAttributeValues: expressionAttributeValues,
    });

    await docClient.send(command);
  }
}

DynamoDBDocumentClient: Simplifies DynamoDB operations by automatically marshalling/unmarshalling JavaScript objects to DynamoDB AttributeValues. Use this instead of raw DynamoDBClient for cleaner code.
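The marshalling the DocumentClient hides can be sketched as a small pure function. This is an illustration of the AttributeValue shape, not the real implementation (which lives in @aws-sdk/util-dynamodb and handles many more types):

```typescript
// Simplified sketch of DynamoDB AttributeValue marshalling (illustrative only).
type AttributeValue = { S: string } | { N: string } | { BOOL: boolean };

function marshall(obj: Record<string, string | number | boolean>): Record<string, AttributeValue> {
  const out: Record<string, AttributeValue> = {};
  for (const [key, value] of Object.entries(obj)) {
    if (typeof value === "string") out[key] = { S: value };       // strings -> { S: ... }
    else if (typeof value === "number") out[key] = { N: String(value) }; // numbers are sent as strings
    else out[key] = { BOOL: value };                              // booleans -> { BOOL: ... }
  }
  return out;
}

// { userId: "u1", age: 30 } becomes { userId: { S: "u1" }, age: { N: "30" } }
const wire = marshall({ userId: "u1", age: 30, active: true });
```

With the raw `DynamoDBClient` you would write the `wire` shape by hand for every item; the DocumentClient lets you pass plain objects instead.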


Credential Management

Credential Provider Chain

AWS SDKs use a credential provider chain to discover credentials automatically. The default chain checks, in order:

  1. Java system properties / SDK client configuration (aws.accessKeyId, aws.secretAccessKey)
  2. Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
  3. Web Identity Token (for EKS with IRSA)
  4. Shared credentials file (~/.aws/credentials)
  5. ECS container credentials
  6. EC2 instance profile credentials

Best practices:

  1. Production (AWS): Use IAM roles (EC2 instance profile, ECS task role, Lambda execution role). Never hardcode credentials.
  2. Local development: Use ~/.aws/credentials with named profiles.
  3. CI/CD: Use OIDC federation (GitLab → AWS STS AssumeRoleWithWebIdentity). See GitLab CI/CD Pipelines for patterns.
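Conceptually, the chain is "try each provider in order; the first that yields credentials wins". A minimal sketch of that resolution logic (names like `resolveCredentials` are illustrative, not SDK APIs):

```typescript
// Illustrative sketch of credential-chain resolution (not the real SDK code).
type Credentials = { accessKeyId: string; secretAccessKey: string };
type Provider = () => Credentials | undefined;

function resolveCredentials(chain: Provider[]): Credentials {
  for (const provider of chain) {
    const creds = provider(); // each provider either yields credentials or passes
    if (creds) return creds;
  }
  throw new Error("Unable to load credentials from any provider in the chain");
}

// Example: environment provider finds nothing, profile provider supplies test creds.
const chain: Provider[] = [
  () => undefined,                                           // env vars not set
  () => ({ accessKeyId: "test", secretAccessKey: "test" }),  // from profile file
];
const creds = resolveCredentials(chain);
```

This ordering is why the same application code works unchanged locally (profile file) and in AWS (instance/task/execution role): only the provider that succeeds differs.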

IAM Roles in AWS

EC2 instance profile:

# Terraform: Attach IAM role to EC2 instance
resource "aws_iam_role" "app_server" {
  name = "app-server-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
      Action = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "s3_access" {
  role       = aws_iam_role.app_server.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}

resource "aws_iam_instance_profile" "app_server" {
  name = "app-server-profile"
  role = aws_iam_role.app_server.name
}

resource "aws_instance" "app" {
  ami                  = var.ami_id
  instance_type        = "t3.medium"
  iam_instance_profile = aws_iam_instance_profile.app_server.name
}

Application running on EC2 automatically gets S3 read access. SDK retrieves credentials from instance metadata (http://169.254.169.254).

ECS task role:

resource "aws_iam_role" "ecs_task" {
  name = "ecs-task-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Service = "ecs-tasks.amazonaws.com"
      }
      Action = "sts:AssumeRole"
    }]
  })
}

resource "aws_ecs_task_definition" "app" {
  family             = "app"
  task_role_arn      = aws_iam_role.ecs_task.arn      # Application permissions
  execution_role_arn = aws_iam_role.ecs_execution.arn # ECS agent permissions

  container_definitions = jsonencode([{
    name  = "app"
    image = "myapp:latest"
    # ... other config
  }])
}

See AWS IAM for comprehensive IAM role patterns.

Local Development with Profiles

~/.aws/credentials:

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[dev]
aws_access_key_id = AKIAI44QH8DHBEXAMPLE
aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY

[prod]
aws_access_key_id = AKIAI44QH8DHBPRODKEY
aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/h3yCo8nvbPRODKEY

Use profile:

# Works identically for the Java and Node.js SDKs
export AWS_PROFILE=dev

SDK automatically uses credentials from specified profile.


Error Handling and Retry Strategies

AWS services throttle requests to prevent abuse. Applications must handle throttling gracefully with retries and exponential backoff.

SDK Default Retry Behavior

AWS SDKs retry failed requests automatically:

Default retry policy:

  • Retries on network errors, 500/503/504 errors, throttling (400 ThrottlingException)
  • Exponential backoff: 100ms, 200ms, 400ms, 800ms...
  • A bounded number of attempts (3 retries by default in Java SDK v2; 3 total attempts by default in JS SDK v3)
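The backoff schedule above reduces to a small formula: the base delay doubles each attempt up to a cap, usually with random jitter so many clients don't retry in lockstep. A sketch (illustrative helper names, not SDK internals):

```typescript
// Deterministic exponential schedule, for illustration: 100ms, 200ms, 400ms, 800ms...
function backoffSchedule(maxRetries: number, baseMs: number = 100): number[] {
  return Array.from({ length: maxRetries }, (_, attempt) => baseMs * 2 ** attempt);
}

// Full-jitter variant (a common production strategy): pick a random delay
// in [0, min(cap, base * 2^attempt)) so concurrent clients spread out.
function backoffDelayMs(attempt: number, baseMs: number = 100, capMs: number = 20000): number {
  const exponential = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * exponential);
}
```

The SDK applies this internally; you only tune it (see the custom retry configuration below) when defaults are too aggressive or too lax for your workload.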

Custom Retry Configuration

Java SDK v2:

import software.amazon.awssdk.core.retry.RetryPolicy;
import software.amazon.awssdk.core.retry.backoff.BackoffStrategy;

@Bean
public S3Client s3ClientWithRetry() {
    RetryPolicy retryPolicy = RetryPolicy.builder()
            .numRetries(5) // Increase retries
            .backoffStrategy(BackoffStrategy.defaultStrategy()) // Exponential backoff for regular errors
            .throttlingBackoffStrategy(BackoffStrategy.defaultThrottlingStrategy()) // Longer backoff when throttled
            .build();

    return S3Client.builder()
            .region(Region.US_EAST_1)
            .overrideConfiguration(config -> config.retryPolicy(retryPolicy))
            .build();
}

JavaScript SDK v3:

import { S3Client } from '@aws-sdk/client-s3';
import { ConfiguredRetryStrategy } from '@aws-sdk/util-retry';

const s3Client = new S3Client({
  region: 'us-east-1',
  retryStrategy: new ConfiguredRetryStrategy(
    5, // Max attempts
    (attempt: number) => 100 * Math.pow(2, attempt) // Exponential backoff delay in ms
  ),
});

Application-Level Error Handling

Wrap SDK calls with try-catch for graceful degradation:

public Optional<InputStream> getFileWithFallback(String key) {
    try {
        return Optional.of(s3StorageService.downloadFile(key));
    } catch (S3Exception e) {
        if (e.statusCode() == 404) {
            logger.warn("File not found: {}", key);
            return Optional.empty();
        } else if (e.statusCode() == 403) {
            logger.error("Access denied to file: {}", key);
            throw new UnauthorizedException("Cannot access file");
        } else {
            logger.error("S3 error retrieving file: {}", key, e);
            throw new StorageException("Failed to retrieve file", e);
        }
    }
}

See Spring Boot Resilience for circuit breaker patterns around AWS SDK calls.


Local Development with LocalStack

LocalStack emulates AWS services locally, enabling offline development and fast integration testing without AWS costs.

Supported services: S3, SQS, SNS, DynamoDB, Lambda, API Gateway, CloudWatch, Secrets Manager, and more.

Docker Compose Setup

# docker-compose.yml
version: '3.8'

services:
  localstack:
    image: localstack/localstack:3.0
    ports:
      - "4566:4566" # LocalStack edge port
    environment:
      - SERVICES=s3,sqs,sns,dynamodb,secretsmanager
      - DEBUG=1
      - DATA_DIR=/tmp/localstack/data
    volumes:
      - "./localstack-data:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"

Start LocalStack:

docker-compose up -d

Configuration for LocalStack

Java (Spring Boot):

@Configuration
@Profile("local") // Only for local development
public class LocalStackConfig {

    @Bean
    public S3Client s3Client() {
        return S3Client.builder()
                .region(Region.US_EAST_1)
                .endpointOverride(URI.create("http://localhost:4566")) // LocalStack endpoint
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("test", "test") // Dummy credentials
                ))
                .build();
    }

    @Bean
    public SqsClient sqsClient() {
        return SqsClient.builder()
                .region(Region.US_EAST_1)
                .endpointOverride(URI.create("http://localhost:4566"))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("test", "test")
                ))
                .build();
    }
}

TypeScript:

const s3Client = new S3Client({
  region: 'us-east-1',
  endpoint: process.env.LOCALSTACK_ENDPOINT || 'http://localhost:4566',
  credentials: {
    accessKeyId: 'test',
    secretAccessKey: 'test',
  },
  forcePathStyle: true, // Required for LocalStack S3
});

Integration Testing with LocalStack

TestContainers (Java):

@SpringBootTest
@Testcontainers
class S3IntegrationTest {

    @Container
    static LocalStackContainer localstack = new LocalStackContainer(
            DockerImageName.parse("localstack/localstack:3.0")
    ).withServices(LocalStackContainer.Service.S3);

    @DynamicPropertySource
    static void properties(DynamicPropertyRegistry registry) {
        registry.add("aws.s3.endpoint", () -> localstack.getEndpointOverride(LocalStackContainer.Service.S3));
    }

    @Autowired
    private S3StorageService s3StorageService;

    @Test
    void shouldUploadAndDownloadFile() throws Exception {
        // Create bucket
        S3Client s3Client = S3Client.builder()
                .endpointOverride(localstack.getEndpointOverride(LocalStackContainer.Service.S3))
                .region(Region.US_EAST_1)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("test", "test")
                ))
                .build();

        s3Client.createBucket(r -> r.bucket("test-bucket"));

        // Test upload
        String key = "test-file.txt";
        byte[] content = "Hello, LocalStack!".getBytes();
        s3StorageService.uploadFile(key, new ByteArrayInputStream(content), content.length, "text/plain");

        // Test download
        InputStream downloaded = s3StorageService.downloadFile(key);
        String result = new String(downloaded.readAllBytes());

        assertEquals("Hello, LocalStack!", result);
    }
}

See Integration Testing for comprehensive testing strategies.


Performance Optimization

Connection Reuse

Problem: Creating new SDK clients for each request is expensive (connection pool initialization, credential lookup).

Solution: Create clients once, reuse across requests (shown in configuration examples above).

Async Operations

Java SDK v2 async client:

@Bean
public S3AsyncClient s3AsyncClient() {
    return S3AsyncClient.builder()
            .region(Region.US_EAST_1)
            .build();
}

// Usage
public CompletableFuture<String> uploadFileAsync(String key, byte[] data) {
    PutObjectRequest request = PutObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .build();

    return s3AsyncClient.putObject(request, AsyncRequestBody.fromBytes(data))
            .thenApply(response -> String.format("s3://%s/%s", bucketName, key));
}

Async clients use non-blocking I/O, improving throughput for high-concurrency applications.

Batching

SQS batch operations:

// Send messages in batches (up to 10 per request)
public void sendMessagesBatch(List<String> messages) {
    List<SendMessageBatchRequestEntry> entries = new ArrayList<>();

    for (int i = 0; i < messages.size(); i++) {
        entries.add(SendMessageBatchRequestEntry.builder()
                .id(String.valueOf(i))
                .messageBody(messages.get(i))
                .build());

        // Send batch when reaching 10 messages
        if (entries.size() == 10 || i == messages.size() - 1) {
            SendMessageBatchRequest request = SendMessageBatchRequest.builder()
                    .queueUrl(queueUrl)
                    .entries(entries)
                    .build();

            sqsClient.sendMessageBatch(request);
            entries.clear();
        }
    }
}

Batching reduces API calls (cost) and improves throughput.
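The same pattern in TypeScript reduces to a generic chunking helper; each chunk would then become one `SendMessageBatchCommand`. A sketch (the commented-out send loop is illustrative):

```typescript
// Split an array into chunks of at most `size` items (SQS allows up to 10 per batch).
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Usage sketch: one batch request per chunk instead of one request per message.
// for (const batch of chunk(messages, 10)) {
//   await sqsClient.send(new SendMessageBatchCommand({
//     QueueUrl: queueUrl,
//     Entries: batch.map((body, i) => ({ Id: String(i), MessageBody: body })),
//   }));
// }
```

Eleven messages become two API calls instead of eleven, which directly cuts SQS request costs.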


Common Anti-Patterns

Hardcoded Credentials

Bad:

AwsBasicCredentials credentials = AwsBasicCredentials.create("AKIAIOSFODNN7EXAMPLE", "wJalrXUtnFEMI...");
S3Client s3Client = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(credentials))
        .build();

Why: Credentials in code end up in version control, logs, memory dumps. High security risk.

Good: Use IAM roles (automatic credential rotation, no hardcoded secrets).

Synchronous Blocking in Lambda

Bad:

// Lambda creating a client per invocation and fetching sequentially
export const handler = async (event: any) => {
  const s3Client = new S3Client({}); // Created every invocation (slow!)

  const command = new GetObjectCommand({ Bucket: 'my-bucket', Key: 'file.txt' });
  const response = await s3Client.send(command); // One call at a time

  // Process file...
};

Why: Per-invocation client creation slows every cold and warm start, and strictly sequential calls waste billed execution time.

Good: Use async operations, create clients outside handler (warm start optimization):

// Client created once (warm start optimization)
const s3Client = new S3Client({});

export const handler = async (event: any) => {
  // Parallel async operations
  const [file1, file2] = await Promise.all([
    s3Client.send(new GetObjectCommand({ Bucket: 'my-bucket', Key: 'file1.txt' })),
    s3Client.send(new GetObjectCommand({ Bucket: 'my-bucket', Key: 'file2.txt' })),
  ]);

  // Process files in parallel
};

Not Handling Throttling

Bad:

try {
    s3Client.putObject(request, body);
} catch (S3Exception e) {
    logger.error("S3 upload failed", e);
    throw e; // Propagate error without retry
}

Why: AWS throttles requests. Application fails unnecessarily instead of retrying.

Good: Use SDK retry policies (automatic retries) or implement circuit breaker (see Spring Boot Resilience).

Creating Clients Per Request

Bad:

public void uploadFile(String key, byte[] data) {
    // New client every request!
    S3Client s3Client = S3Client.builder().build();
    s3Client.putObject(...);
}

Why: Client initialization is expensive. Connection pools not reused.

Good: Inject singleton client (Spring @Bean), reuse across requests.


Summary

AWS SDK integration enables applications to interact with AWS services programmatically. Key principles:

  1. Use IAM roles in production - EC2 instance profiles, ECS task roles, Lambda execution roles. Never hardcode credentials.
  2. Reuse SDK clients - Create once (Spring @Bean, global variable), reuse across requests for connection pooling.
  3. Handle errors gracefully - Leverage SDK retry policies, implement circuit breakers for resilience.
  4. Test with LocalStack - Fast, offline integration testing without AWS costs.
  5. Optimize for performance - Use async clients for high concurrency, batch operations to reduce API calls.
  6. Monitor SDK operations - Track throttling, errors, latency with CloudWatch metrics.

Java SDK v2: Modular, async-capable, production-ready for Spring Boot applications.

JavaScript SDK v3: Modular imports, TypeScript support, optimized for Lambda and Node.js.


Further Reading