GraphQL Best Practices
GraphQL is a query language and runtime for APIs that provides clients with the ability to request exactly the data they need. Unlike REST, where clients are constrained by fixed endpoint responses, GraphQL allows clients to specify the shape and depth of data they require in a single request. This flexibility comes with architectural and performance considerations that require careful design.
Core Principles
- Schema-first design - Define your GraphQL schema as the contract between client and server, treating it as a product API
- Performance optimization - Prevent N+1 queries through batching and caching strategies like DataLoader
- Type safety - Leverage GraphQL's strong type system to provide compile-time guarantees and excellent developer experience
- Security boundaries - Implement query complexity analysis, depth limiting, and rate limiting to prevent abuse
- Evolutionary design - Design schemas for evolution using deprecation and versioning strategies rather than breaking changes
Schema Design Principles
Type System Foundation
GraphQL's type system is the foundation of your API contract. Every GraphQL service defines a set of types that describe the data available. The type system includes scalar types (Int, Float, String, Boolean, ID), object types, enums, interfaces, and unions.
Scalar types represent primitive values. GraphQL provides five built-in scalars, but you can define custom scalars for domain-specific types like DateTime, Email, or Currency:
# Custom scalar definitions
scalar DateTime
scalar Email
scalar Currency
type User {
id: ID!
email: Email!
createdAt: DateTime!
balance: Currency
}
The ! symbol denotes a non-nullable field. A field marked with ! guarantees that the field will always return a value and never null. This is a critical design decision - making a field non-nullable is a strong contract that should only be applied when you can absolutely guarantee the value will always be present.
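Nullability also shapes error behavior: when a non-nullable field fails to resolve, GraphQL cannot return null for it, so the error propagates to the nearest nullable ancestor. Assuming a nullable user root field, a response where createdAt (non-nullable above) threw during resolution might look like:

```json
{
  "data": { "user": null },
  "errors": [
    {
      "message": "Failed to resolve createdAt",
      "path": ["user", "createdAt"]
    }
  ]
}
```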
Object types represent entities in your domain. Each field can return a scalar, another object type, or a list:
type Account {
id: ID!
accountNumber: String!
balance: Currency!
owner: User!
transactions(limit: Int = 10): [Transaction!]!
}
type Transaction {
id: ID!
amount: Currency!
timestamp: DateTime!
fromAccount: Account
toAccount: Account
status: TransactionStatus!
}
enum TransactionStatus {
PENDING
COMPLETED
FAILED
REVERSED
}
In this example, transactions(limit: Int = 10): [Transaction!]! demonstrates several concepts:
- Field arguments allow parameterization of queries
- Default values (= 10) provide sensible defaults when the client omits an argument
- [Transaction!]! means a non-null list of non-null Transaction objects (the list itself will never be null, and no element in the list will be null)
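To illustrate the field argument, a client query can override the default limit (assuming a hypothetical root field that returns an Account):

```graphql
query {
  account(id: "acc-1") {
    accountNumber
    transactions(limit: 5) {
      id
      amount
    }
  }
}
```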
Naming Conventions
Consistent naming improves API discoverability and reduces confusion. Follow these conventions:
Field names use camelCase to match JavaScript/TypeScript conventions:
type User {
firstName: String!
lastName: String!
emailAddress: Email!
# NOT: first_name, email_address
}
Type names use PascalCase:
type UserAccount { }
type TransactionHistory { }
# NOT: userAccount, transaction_history
Enum values use SCREAMING_SNAKE_CASE to distinguish them from types:
enum AccountStatus {
ACTIVE
SUSPENDED
CLOSED
}
Query fields should be noun-based, while mutation fields should be verb-based:
type Query {
user(id: ID!): User
accounts(userId: ID!): [Account!]!
transactionHistory(accountId: ID!, limit: Int): [Transaction!]!
}
type Mutation {
createAccount(input: CreateAccountInput!): CreateAccountPayload!
transferFunds(input: TransferFundsInput!): TransferFundsPayload!
closeAccount(id: ID!): CloseAccountPayload!
}
Input Types and Payload Patterns
For mutations, use dedicated input types and payload types. Input types define the shape of mutation arguments, while payload types define the shape of mutation responses. This pattern provides flexibility for evolution:
input CreateAccountInput {
userId: ID!
accountType: AccountType!
initialDeposit: Currency
}
type CreateAccountPayload {
account: Account
userErrors: [UserError!]!
}
type UserError {
message: String!
field: [String!]
code: ErrorCode!
}
enum ErrorCode {
INVALID_INPUT
INSUFFICIENT_FUNDS
DUPLICATE_ACCOUNT
AUTHORIZATION_FAILED
}
The payload pattern allows you to return both success data and errors in a structured way. The userErrors field contains client-facing errors that describe validation failures or business rule violations. This separates user-actionable errors from system errors, which are returned via GraphQL's standard error mechanism.
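A failed createAccount mutation then returns its errors inside data rather than in the top-level errors array. A hypothetical response:

```json
{
  "data": {
    "createAccount": {
      "account": null,
      "userErrors": [
        {
          "message": "An account of this type already exists",
          "field": ["accountType"],
          "code": "DUPLICATE_ACCOUNT"
        }
      ]
    }
  }
}
```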
Interfaces and Unions
Interfaces define a set of fields that multiple types must implement. They're useful for polymorphic queries:
interface Node {
id: ID!
}
type User implements Node {
id: ID!
email: Email!
name: String!
}
type Account implements Node {
id: ID!
accountNumber: String!
balance: Currency!
}
type Query {
node(id: ID!): Node
}
When querying an interface, clients use inline fragments to request fields specific to concrete types:
query {
node(id: "user-123") {
id
... on User {
email
name
}
... on Account {
accountNumber
balance
}
}
}
Unions represent a value that could be one of several types. Unlike interfaces, union members don't need to share common fields:
union SearchResult = User | Account | Transaction
type Query {
search(query: String!): [SearchResult!]!
}
Unions are particularly useful for search results or polymorphic return types where the types don't share a common structure.
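Clients query unions the same way as interfaces, via inline fragments; __typename is especially useful here because union members share no common fields to select otherwise:

```graphql
query {
  search(query: "acme") {
    __typename
    ... on User { name }
    ... on Account { accountNumber }
    ... on Transaction { amount }
  }
}
```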
Resolver Optimization
Resolvers are functions that populate the data for fields in your schema. The naive approach to writing resolvers can lead to severe performance problems, particularly the N+1 query problem.
The N+1 Problem
The N+1 problem occurs when you make one query to fetch a list of items (N items), then make N additional queries to fetch related data for each item. Consider this schema and query:
type Query {
users: [User!]!
}
type User {
id: ID!
name: String!
account: Account
}
type Account {
id: ID!
balance: Currency!
}
query {
users {
id
name
account {
balance
}
}
}
A naive resolver implementation would:
- Execute 1 query to fetch all users
- For each user, execute 1 query to fetch their account (N queries)
If there are 100 users, this results in 101 database queries. This is the N+1 problem.
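The effect is easy to demonstrate with a toy in-memory "database" that counts queries. The tables and the resolver-style loop below are illustrative, not a real driver:

```javascript
// Hypothetical in-memory tables; every lookup increments queryCount
let queryCount = 0;
const usersTable = [
  { id: 'u1', accountId: 'a1' },
  { id: 'u2', accountId: 'a2' },
  { id: 'u3', accountId: 'a3' },
];
const accountsTable = {
  a1: { balance: 10 },
  a2: { balance: 20 },
  a3: { balance: 30 },
};

function findAllUsers() { queryCount++; return usersTable; }
function findAccount(id) { queryCount++; return accountsTable[id]; }

// Naive resolution: one query for the list, then one per user
const users = findAllUsers();
const withAccounts = users.map(u => ({ ...u, account: findAccount(u.accountId) }));
// queryCount is now 1 + N = 4 for 3 users
```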
DataLoader for Batching and Caching
DataLoader is a utility that provides batching and caching for data fetching. It collects all individual loads requested during a single execution frame and batches them into a single request to the backend.
How DataLoader works: When you call dataLoader.load(key), DataLoader doesn't immediately execute a query. Instead, it waits for the current execution frame to complete, collects all requested keys, and then calls your batch function once with all the keys.
import DataLoader from 'dataloader';
// Batch function receives array of IDs and returns array of results
const accountLoader = new DataLoader<string, Account>(async (accountIds) => {
// Single database query for all accounts
const accounts = await db.accounts.findMany({
where: { id: { in: accountIds } }
});
// Return results in the same order as the input IDs
// DataLoader requires this ordering
return accountIds.map(id =>
accounts.find(account => account.id === id)
);
});
// Resolver using DataLoader
const resolvers = {
User: {
account: (user, args, context) => {
// Each call is batched automatically
return context.loaders.account.load(user.accountId);
}
}
};
DataLoader also provides per-request caching. If you load the same key multiple times within a single request, DataLoader returns the cached result. In GraphQL queries, the same entity is often referenced through different paths in the query tree (e.g., a User might be loaded as post.author and also as comment.author). Without caching, each reference would trigger a separate load even though they resolve to the same entity.
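The batching-plus-caching behavior can be sketched in a few lines. This TinyLoader is an illustrative stand-in, not the real dataloader API: load() caches a promise per key and flushes all queued keys in a single batch on the next microtask:

```javascript
// Minimal DataLoader-style loader: batches all load() calls made in the
// same tick into one batchFn call, and caches promises per key
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn;
    this.cache = new Map();
    this.queue = [];
  }

  load(key) {
    if (this.cache.has(key)) return this.cache.get(key); // per-request cache
    const promise = new Promise((resolve, reject) => {
      this.queue.push({ key, resolve, reject });
      // First queued key schedules one dispatch for the whole batch
      if (this.queue.length === 1) queueMicrotask(() => this.dispatch());
    });
    this.cache.set(key, promise);
    return promise;
  }

  async dispatch() {
    const batch = this.queue.splice(0);
    try {
      const results = await this.batchFn(batch.map(item => item.key));
      batch.forEach((item, i) => item.resolve(results[i]));
    } catch (err) {
      batch.forEach(item => item.reject(err));
    }
  }
}

// Usage: three loads in the same tick, but only two distinct keys,
// become a single batch call with ['a1', 'a2']
const batches = [];
const loader = new TinyLoader(async (keys) => {
  batches.push(keys);
  return keys.map(k => ({ id: k }));
});
```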
Per-request loader pattern: Create new DataLoader instances per request, not globally. This ensures caching doesn't leak between requests and allows request-specific context (like authentication) to be used:
// In your GraphQL context factory
function createContext({ req }) {
return {
user: getAuthenticatedUser(req),
loaders: {
account: new DataLoader(async (ids) => batchLoadAccounts(ids)),
transaction: new DataLoader(async (ids) => batchLoadTransactions(ids)),
user: new DataLoader(async (ids) => batchLoadUsers(ids)),
}
};
}
For more details on caching strategies and patterns, see Caching.
Resolver Chains and Parent Data
GraphQL resolvers receive four arguments: (parent, args, context, info). The parent argument contains the result of the parent resolver. When resolving nested fields, GraphQL first resolves the parent field, then passes its result to child field resolvers. This enables efficient data loading - parent resolvers can fetch minimal data from the database, and child resolvers only load related data when fields are actually requested:
const resolvers = {
Query: {
user: async (parent, { id }, context) => {
// Parent is undefined at root level
return context.loaders.user.load(id);
}
},
User: {
// Parent is the User object from the Query.user resolver
account: (user, args, context) => {
// The user object here is exactly what Query.user returned
// We can access user.accountId directly
return context.loaders.account.load(user.accountId);
},
fullName: (user) => {
// Simple computed field - no data loading required
return `${user.firstName} ${user.lastName}`;
}
}
};
This pattern allows you to fetch only the data needed from the database in parent resolvers, then compute or fetch related data in child resolvers only when requested.
Pagination Strategies
Pagination is essential for handling large datasets. GraphQL supports multiple pagination approaches, each with different trade-offs.
Offset-Based Pagination
Offset-based pagination is familiar from SQL's LIMIT and OFFSET. It's simple but has performance limitations for large offsets:
type Query {
transactions(offset: Int = 0, limit: Int = 20): TransactionConnection!
}
type TransactionConnection {
nodes: [Transaction!]!
totalCount: Int!
hasMore: Boolean!
}
const resolvers = {
Query: {
transactions: async (parent, { offset, limit }, context) => {
const [transactions, totalCount] = await Promise.all([
db.transactions.findMany({ skip: offset, take: limit }),
db.transactions.count()
]);
return {
nodes: transactions,
totalCount,
hasMore: offset + limit < totalCount
};
}
}
};
Performance consideration: OFFSET 10000 requires the database to scan and skip 10,000 rows. As offset increases, query performance degrades. For datasets where users might paginate deeply, cursor-based pagination is more efficient.
Cursor-Based Pagination
Cursor-based pagination uses opaque cursors to mark positions in the dataset. The Relay Connection Specification standardizes this pattern:
type Query {
transactions(
first: Int
after: String
last: Int
before: String
): TransactionConnection!
}
type TransactionConnection {
edges: [TransactionEdge!]!
pageInfo: PageInfo!
totalCount: Int
}
type TransactionEdge {
node: Transaction!
cursor: String!
}
type PageInfo {
hasNextPage: Boolean!
hasPreviousPage: Boolean!
startCursor: String
endCursor: String
}
Cursors are typically base64-encoded representations of the sort key. For chronologically sorted transactions, the cursor might encode the timestamp:
function encodeCursor(timestamp: Date): string {
return Buffer.from(timestamp.toISOString()).toString('base64');
}
function decodeCursor(cursor: string): Date {
return new Date(Buffer.from(cursor, 'base64').toString('utf-8'));
}
const resolvers = {
Query: {
transactions: async (parent, { first, after }, context) => {
const limit = first || 20;
const cursor = after ? decodeCursor(after) : null;
const transactions = await db.transactions.findMany({
where: cursor ? { timestamp: { gt: cursor } } : {},
orderBy: { timestamp: 'asc' },
take: limit + 1 // Fetch one extra to determine hasNextPage
});
const hasNextPage = transactions.length > limit;
const nodes = hasNextPage ? transactions.slice(0, -1) : transactions;
return {
edges: nodes.map(node => ({
node,
cursor: encodeCursor(node.timestamp)
})),
pageInfo: {
hasNextPage,
hasPreviousPage: !!cursor,
startCursor: nodes[0] ? encodeCursor(nodes[0].timestamp) : null,
endCursor: nodes[nodes.length - 1] ?
encodeCursor(nodes[nodes.length - 1].timestamp) : null
}
};
}
}
};
Cursor-based pagination offers consistent performance regardless of position in the dataset because it uses indexed fields (WHERE timestamp > ?) rather than OFFSET.
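One caveat with a timestamp-only cursor: if multiple rows share the same timestamp, WHERE timestamp > ? can skip or repeat rows across pages. A common remedy is a compound cursor that also encodes a unique tiebreaker such as the id. A hypothetical sketch using Node's Buffer:

```javascript
// Compound cursor: encode (timestamp, id) so rows tied on timestamp
// still have a total order. Field names are illustrative.
function encodeCompoundCursor(timestamp, id) {
  return Buffer.from(`${timestamp.toISOString()}|${id}`).toString('base64');
}

function decodeCompoundCursor(cursor) {
  const [iso, id] = Buffer.from(cursor, 'base64').toString('utf-8').split('|');
  return { timestamp: new Date(iso), id };
}

// The pagination predicate then becomes:
//   (timestamp > :ts) OR (timestamp = :ts AND id > :id)
const cursor = encodeCompoundCursor(new Date('2024-01-01T00:00:00.000Z'), 'txn-42');
const decoded = decodeCompoundCursor(cursor);
```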
Choosing pagination strategy: Use offset-based pagination for simple admin interfaces or when total count and random access are required. Use cursor-based pagination for infinite scroll, activity feeds, or any scenario where users paginate sequentially through large datasets.
Error Handling
GraphQL has two error mechanisms: field-level errors returned in the errors array, and user errors returned as part of the data payload.
Field-Level Errors
When a resolver throws an error, GraphQL includes it in the errors array in the response. The field that threw the error returns null (unless it's non-nullable, which causes parent field errors to bubble up):
const resolvers = {
Query: {
account: async (parent, { id }, context) => {
const account = await db.accounts.findUnique({ where: { id } });
if (!account) {
throw new Error(`Account ${id} not found`);
}
if (!hasPermission(context.user, account)) {
throw new Error('Unauthorized');
}
return account;
}
}
};
Response for unauthorized access:
{
"data": {
"account": null
},
"errors": [
{
"message": "Unauthorized",
"locations": [{ "line": 2, "column": 3 }],
"path": ["account"]
}
]
}
Error Extensions
Enhance errors with structured metadata using extensions. Extensions allow clients to programmatically handle different error types:
import { GraphQLError } from 'graphql';
// Extend GraphQLError rather than Error - extensions set on a plain Error
// subclass are not included in the response
class AuthorizationError extends GraphQLError {
constructor(message: string) {
super(message, {
extensions: {
code: 'UNAUTHORIZED',
timestamp: new Date().toISOString()
}
});
}
}
class NotFoundError extends GraphQLError {
constructor(resource: string, id: string) {
super(`${resource} ${id} not found`, {
extensions: {
code: 'NOT_FOUND',
resource,
id
}
});
}
}
Response with extensions:
{
"errors": [
{
"message": "Account acc-123 not found",
"extensions": {
"code": "NOT_FOUND",
"resource": "Account",
"id": "acc-123"
}
}
]
}
User Errors in Payload
For mutations, return validation errors and business rule violations as part of the payload rather than throwing errors. This allows mutations to partially succeed and provide structured error information:
type TransferFundsPayload {
transfer: Transfer
userErrors: [UserError!]!
}
type UserError {
message: String!
field: [String!]
code: ErrorCode!
}
const resolvers = {
Mutation: {
transferFunds: async (parent, { input }, context) => {
const errors: UserError[] = [];
// Validation
if (input.amount <= 0) {
errors.push({
message: 'Amount must be positive',
field: ['amount'],
code: 'INVALID_INPUT'
});
}
const fromAccount = await db.accounts.findUnique({
where: { id: input.fromAccountId }
});
if (!fromAccount) {
errors.push({
message: 'Source account not found',
field: ['fromAccountId'],
code: 'NOT_FOUND'
});
}
if (fromAccount && fromAccount.balance < input.amount) {
errors.push({
message: 'Insufficient funds',
field: ['amount'],
code: 'INSUFFICIENT_FUNDS'
});
}
// Return errors without executing transfer
if (errors.length > 0) {
return { transfer: null, userErrors: errors };
}
// Execute transfer
const transfer = await executeTransfer(input);
return { transfer, userErrors: [] };
}
}
};
This pattern allows clients to display specific field errors in forms and handle different error conditions appropriately. For more on input validation patterns, see Input Validation.
Authentication and Authorization
GraphQL authentication typically occurs before query execution, while authorization happens within resolvers.
Authentication
Authenticate users in the context factory, before GraphQL execution begins. Extract authentication tokens from headers and validate them:
import { verify } from 'jsonwebtoken';
async function createContext({ req }) {
const token = req.headers.authorization?.replace('Bearer ', '');
if (!token) {
return { user: null, loaders: createLoaders() };
}
try {
const decoded = verify(token, process.env.JWT_SECRET);
const user = await db.users.findUnique({ where: { id: decoded.userId } });
return { user, loaders: createLoaders(user) };
} catch (error) {
return { user: null, loaders: createLoaders() };
}
}
For more on authentication strategies, see Authentication.
Resolver-Level Authorization
Check authorization in individual resolvers. This provides fine-grained control but requires consistent implementation:
const resolvers = {
Query: {
account: async (parent, { id }, context) => {
if (!context.user) {
throw new AuthorizationError('Authentication required');
}
const account = await context.loaders.account.load(id);
if (!account || account.userId !== context.user.id) {
throw new AuthorizationError('Access denied');
}
return account;
}
}
};
Schema Directives for Authorization
Schema directives provide declarative authorization. Define custom directives to enforce authentication and authorization rules:
directive @auth(requires: Role = USER) on FIELD_DEFINITION | OBJECT
enum Role {
USER
ADMIN
SUPER_ADMIN
}
type Query {
me: User @auth
users: [User!]! @auth(requires: ADMIN)
account(id: ID!): Account @auth
}
Implement directive logic in your GraphQL server. The exact implementation depends on your GraphQL server library, but conceptually:
class AuthDirective extends SchemaDirectiveVisitor {
visitFieldDefinition(field) {
const requiredRole = this.args.requires;
const originalResolve = field.resolve;
field.resolve = async function(parent, args, context, info) {
if (!context.user) {
throw new AuthorizationError('Authentication required');
}
if (!hasRole(context.user, requiredRole)) {
throw new AuthorizationError('Insufficient permissions');
}
return originalResolve.call(this, parent, args, context, info);
};
}
}
For comprehensive authorization patterns, see Authorization.
Field-Level Permissions
Some fields should only be visible to certain users. Implement field-level authorization by checking permissions in field resolvers:
const resolvers = {
User: {
email: (user, args, context) => {
// Users can see their own email, admins can see all emails
if (context.user && (context.user.id === user.id || hasRole(context.user, 'ADMIN'))) {
return user.email;
}
return null;
},
ssn: (user, args, context) => {
// Only admins can see SSN
if (!hasRole(context.user, 'ADMIN')) {
throw new AuthorizationError('Insufficient permissions');
}
return user.ssn;
}
}
};
Rate Limiting and Security
GraphQL's flexibility allows clients to construct expensive queries. Without protection, malicious or poorly designed queries can overwhelm your server.
Query Complexity Analysis
Assign complexity scores to fields based on their computational cost. Sum field complexity across the entire query and reject queries exceeding a threshold:
import { getComplexity, simpleEstimator, fieldExtensionsEstimator } from 'graphql-query-complexity';
// With fieldExtensionsEstimator, per-field cost is declared on each
// field's extensions.complexity - a number, or a function of the arguments:
//   Query.users        extensions: { complexity: 10 }
//   Query.account      extensions: { complexity: 5 }
//   User.transactions  extensions: { complexity: ({ args }) => args.limit || 20 }
// Validate complexity before execution
const complexity = getComplexity({
schema,
query: documentAST,
variables,
estimators: [
fieldExtensionsEstimator(),
simpleEstimator({ defaultComplexity: 1 })
]
});
if (complexity > MAX_COMPLEXITY) {
throw new Error(`Query too complex: ${complexity}. Maximum allowed: ${MAX_COMPLEXITY}`);
}
This prevents queries like:
{
users {
accounts {
transactions(limit: 1000) {
id
}
}
}
}
If there are 100 users with 10 accounts each and each account requests 1000 transactions, this query attempts to fetch 1,000,000 transactions.
Query Depth Limiting
Limit the depth of nested queries to prevent circular or deeply nested queries:
import depthLimit from 'graphql-depth-limit';
const server = new ApolloServer({
schema,
validationRules: [depthLimit(5)]
});
This rejects queries with more than 5 levels of nesting:
{
user { # depth 1
account { # depth 2
owner { # depth 3
account { # depth 4
owner { # depth 5
account { # depth 6 - REJECTED
id
}
}
}
}
}
}
}
Rate Limiting
Apply rate limiting at multiple levels:
IP-based rate limiting prevents abuse from specific clients:
import rateLimit from 'express-rate-limit';
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100 // 100 requests per window
});
app.use('/graphql', limiter);
User-based rate limiting applies limits per authenticated user:
const userRateLimits = new Map();
function checkUserRateLimit(userId: string) {
const now = Date.now();
const userLimit = userRateLimits.get(userId) || { count: 0, resetAt: now + 60000 };
if (now > userLimit.resetAt) {
userLimit.count = 0;
userLimit.resetAt = now + 60000;
}
userLimit.count++;
userRateLimits.set(userId, userLimit);
if (userLimit.count > 60) {
throw new Error('Rate limit exceeded');
}
}
For more comprehensive rate limiting patterns, see Rate Limiting.
Persisted Queries
Persisted queries allow only pre-approved queries to execute. Clients send a query hash instead of the full query text:
const persistedQueries = {
'abc123': 'query GetUser($id: ID!) { user(id: $id) { id name } }',
'def456': 'mutation CreateAccount($input: CreateAccountInput!) { ... }'
};
function executePersistedQuery(queryHash: string, variables: any) {
const query = persistedQueries[queryHash];
if (!query) {
throw new Error('Query not found');
}
return execute({ schema, document: parse(query), variableValues: variables });
}
This prevents arbitrary queries and provides performance benefits (no parsing required) and security benefits (only approved queries can execute).
Subscriptions and Real-Time Data
GraphQL subscriptions enable servers to push data to clients when events occur. Subscriptions use WebSockets for bidirectional communication.
Subscription Definition
Define subscriptions in your schema:
type Subscription {
transactionCreated(accountId: ID!): Transaction!
balanceUpdated(accountId: ID!): BalanceUpdate!
}
type BalanceUpdate {
accountId: ID!
newBalance: Currency!
timestamp: DateTime!
}
Implementing Subscriptions
Subscriptions require a pub/sub mechanism. In development, an in-memory pub/sub works. In production, use Redis for distributed systems:
import { RedisPubSub } from 'graphql-redis-subscriptions';
import Redis from 'ioredis';
const pubsub = new RedisPubSub({
publisher: new Redis(process.env.REDIS_URL),
subscriber: new Redis(process.env.REDIS_URL)
});
const resolvers = {
Subscription: {
transactionCreated: {
subscribe: (parent, { accountId }, context) => {
if (!context.user) {
throw new AuthorizationError('Authentication required');
}
// Verify user has access to this account
const hasAccess = context.user.accounts.includes(accountId);
if (!hasAccess) {
throw new AuthorizationError('Access denied');
}
return pubsub.asyncIterator([`TRANSACTION_CREATED_${accountId}`]);
}
}
},
Mutation: {
createTransaction: async (parent, { input }, context) => {
const transaction = await db.transactions.create({ data: input });
// Publish event
await pubsub.publish(`TRANSACTION_CREATED_${input.accountId}`, {
transactionCreated: transaction
});
return { transaction, userErrors: [] };
}
}
};
Subscription Filtering
Filter subscription events based on user permissions or event properties:
const resolvers = {
Subscription: {
transactionCreated: {
subscribe: withFilter(
() => pubsub.asyncIterator('TRANSACTION_CREATED'),
(payload, variables, context) => {
// Only send to users who own the account
return payload.transactionCreated.accountId === variables.accountId &&
context.user.accounts.includes(variables.accountId);
}
)
}
}
};
Scaling Subscriptions
Subscriptions require persistent connections, which creates scaling challenges:
Sticky sessions: Ensure WebSocket connections consistently route to the same server instance:
// In load balancer configuration
{
"loadBalancer": {
"stickySession": {
"enabled": true,
"cookieName": "graphql-ws"
}
}
}
Redis pub/sub: Use Redis to broadcast events across all server instances. When any instance publishes an event, all subscribed instances receive it and can forward to their connected clients.
For more on WebSocket scaling and real-time communication patterns, see Real-Time Communication.
GraphQL vs REST Trade-Offs
GraphQL and REST serve different use cases. Understanding when to use each prevents architectural mismatches.
When GraphQL Excels
Complex data requirements: When clients need to aggregate data from multiple resources in a single request:
# Single request fetches user, accounts, and recent transactions
query {
user(id: "123") {
name
accounts {
balance
transactions(limit: 5) {
amount
timestamp
}
}
}
}
The equivalent REST API requires multiple requests:
GET /users/123
GET /users/123/accounts
GET /accounts/acc1/transactions?limit=5
GET /accounts/acc2/transactions?limit=5
...
Diverse clients with different needs: Mobile apps might need minimal data while web dashboards need comprehensive details. GraphQL allows each client to request exactly what it needs without requiring multiple API endpoints.
Rapid frontend iteration: Frontend teams can add new fields to queries without waiting for backend API changes, as long as the data exists in the schema.
When REST is Better
Simple CRUD operations: For straightforward create, read, update, delete operations, REST's simplicity often outweighs GraphQL's flexibility:
POST /accounts
GET /accounts/123
PUT /accounts/123
DELETE /accounts/123
File uploads: REST handles multipart form data more naturally than GraphQL. While GraphQL supports file uploads, the specification is less standardized.
Caching: HTTP caching (ETag, Cache-Control, CDN caching) works seamlessly with REST. GraphQL requires custom caching logic since POST requests aren't cached by default.
Public APIs: REST's simplicity and widespread understanding make it more accessible for public API consumers who may not be familiar with GraphQL.
Hybrid Approaches
Many systems benefit from both:
- REST for public APIs, simple CRUD, and file uploads
- GraphQL for complex aggregations, mobile clients, and rapid frontend iteration
For more on API design patterns, see REST Fundamentals and API Patterns.
Testing GraphQL APIs
Schema Testing
Test that your schema compiles and matches expectations:
import { printSchema } from 'graphql';
import { schema } from './schema';
describe('GraphQL Schema', () => {
it('should match snapshot', () => {
expect(printSchema(schema)).toMatchSnapshot();
});
it('should have Query type', () => {
const queryType = schema.getQueryType();
expect(queryType).toBeDefined();
expect(queryType.name).toBe('Query');
});
});
Resolver Testing
Test resolvers in isolation by mocking dependencies:
import { resolvers } from './resolvers';
describe('User Resolvers', () => {
it('should fetch user by id', async () => {
const mockContext = {
loaders: {
user: {
load: jest.fn().mockResolvedValue({ id: '123', name: 'John' })
}
}
};
const result = await resolvers.Query.user(
null,
{ id: '123' },
mockContext,
{}
);
expect(result).toEqual({ id: '123', name: 'John' });
expect(mockContext.loaders.user.load).toHaveBeenCalledWith('123');
});
});
Integration Testing
Test complete queries against a test database:
import { ApolloServer } from '@apollo/server';
import { schema } from './schema';
import { createContext } from './context';
describe('GraphQL Integration', () => {
let server: ApolloServer;
beforeAll(async () => {
server = new ApolloServer({ schema });
await setupTestDatabase();
});
it('should fetch user with accounts', async () => {
const response = await server.executeOperation({
query: `
query GetUser($id: ID!) {
user(id: $id) {
id
name
accounts {
balance
}
}
}
`,
variables: { id: 'test-user-1' }
}, {
contextValue: await createContext({ user: testUser })
});
expect(response.body.kind).toBe('single');
expect(response.body.singleResult.data.user).toMatchObject({
id: 'test-user-1',
name: 'Test User'
});
});
});
For comprehensive testing strategies, see API Testing and Integration Testing.
Performance Monitoring
Monitor GraphQL-specific metrics to identify performance bottlenecks:
Query execution time: Track how long queries take to execute:
const server = new ApolloServer({
schema,
plugins: [{
async requestDidStart() {
const start = Date.now();
return {
async willSendResponse({ operationName }) {
const duration = Date.now() - start;
logger.info('Query executed', { duration, operationName });
}
};
}
}]
});
Resolver execution time: Identify slow resolvers:
function instrumentResolver(resolve, fieldName) {
return async function(parent, args, context, info) {
const start = Date.now();
try {
return await resolve(parent, args, context, info);
} finally {
const duration = Date.now() - start;
context.metrics.resolverTimes.push({ fieldName, duration });
}
};
}
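A quick usage sketch - the wrapper is repeated here so the example is self-contained, and the resolver and metrics shape are hypothetical:

```javascript
// Same wrapper as above: time a resolver and record the duration on context
function instrumentResolver(resolve, fieldName) {
  return async function (parent, args, context, info) {
    const start = Date.now();
    try {
      return await resolve(parent, args, context, info);
    } finally {
      context.metrics.resolverTimes.push({
        fieldName,
        duration: Date.now() - start
      });
    }
  };
}

// Wrap a trivial resolver and invoke it the way GraphQL would
const balanceResolver = instrumentResolver(
  async (parent) => parent.balance,
  'Account.balance'
);

const context = { metrics: { resolverTimes: [] } };
const result = balanceResolver({ balance: 250 }, {}, context, {});
```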
DataLoader hit rate: Monitor cache hit rates to ensure batching is effective:
// DataLoader exposes no batch-result callback; wrap the batch function
// to record batch sizes instead
const accountLoader = new DataLoader(async (accountIds) => {
metrics.increment('dataloader.account.batch_size', accountIds.length);
return batchLoadAccounts(accountIds);
});
For comprehensive observability patterns, see Observability Overview, Metrics, and Tracing.
Related Resources
- REST Fundamentals - RESTful API design principles
- API Contracts - Contract-first API development
- Real-Time Communication - WebSocket and SSE patterns
- Caching - Caching strategies for APIs
- Rate Limiting - Rate limiting patterns
- Authentication - Authentication strategies
- Authorization - Authorization patterns
- Performance Optimization - Performance patterns
- Observability - Monitoring and observability