Backend & DevOps Blog

Real-world experiences with MongoDB, Docker, Kubernetes and more

Implementing Redis Cache in Next.js

Our Next.js application was working well for most users, but as traffic grew, we started seeing increased database load and slower response times. After profiling our application, we discovered that many pages were making the same expensive database queries over and over. Implementing a Redis cache helped us dramatically improve performance and reduce database load. Here's how we did it.

The Problem: Redundant Database Queries

Before implementing caching, our application made database queries on every request, even when the data hadn't changed. This was particularly problematic for:

  • Product catalog pages that rarely changed but were viewed frequently
  • User profile data that was accessed across multiple components
  • Configuration settings that were used throughout the application
  • API endpoints that returned the same data for all users

The result was unnecessary database load and slower page loads. We needed a solution that would:

  1. Cache frequently accessed but rarely changed data
  2. Automatically invalidate cache when data changed
  3. Work well with Next.js's server-side rendering
  4. Be simple enough for all developers to use consistently

Why Redis?

We chose Redis for several reasons:

  • It's an in-memory data store, making it extremely fast
  • It supports time-based expiration, perfect for caching
  • It has a simple API that's easy to work with
  • It's widely used and well-documented
  • It's available as a managed service on most cloud providers

Setting Up Redis

First, we added Redis to our project:

# Install Redis client for Node.js
npm install ioredis

Then, we created a Redis client instance:

// lib/redis.js
import Redis from 'ioredis';

const redis = new Redis({
  host: process.env.REDIS_HOST || 'localhost',
  port: parseInt(process.env.REDIS_PORT || '6379', 10),
  password: process.env.REDIS_PASSWORD,
  // Reconnect strategy
  retryStrategy(times) {
    const delay = Math.min(times * 50, 2000);
    return delay;
  }
});

redis.on('error', (err) => {
  console.error('Redis connection error:', err);
});

redis.on('connect', () => {
  console.log('Connected to Redis');
});

export default redis;

Creating a Basic Caching Utility

To simplify caching throughout our application, we created a utility function:

// lib/cache.js
import redis from './redis';

// Default cache expiration in seconds
const DEFAULT_EXPIRATION = 3600; // 1 hour

/**
 * Gets data from cache if it exists, otherwise executes the provided function,
 * caches the result, and returns it.
 * 
 * @param {string} key - The cache key
 * @param {Function} fn - The function to execute if cache miss
 * @param {number} expiration - Expiration time in seconds
 * @returns {Promise<any>} - The cached or freshly fetched data
 */
export async function fetchWithCache(key, fn, expiration = DEFAULT_EXPIRATION) {
  try {
    // Try to get data from cache
    const cachedData = await redis.get(key);
    
    if (cachedData) {
      // Cache hit - parse and return the cached data
      return JSON.parse(cachedData);
    }
    
    // Cache miss - execute the function
    const freshData = await fn();
    
    // Store the result in cache
    await redis.set(
      key,
      JSON.stringify(freshData),
      'EX',
      expiration
    );
    
    return freshData;
  } catch (error) {
    console.error(`Cache error for key ${key}:`, error);
    // If there's a cache error, fall back to executing the function directly
    return fn();
  }
}

/**
 * Invalidates a cache key
 * 
 * @param {string} key - The cache key to invalidate
 */
export async function invalidateCache(key) {
  try {
    await redis.del(key);
  } catch (error) {
    console.error(`Failed to invalidate cache for key ${key}:`, error);
  }
}

/**
 * Invalidates multiple cache keys matching a pattern
 * 
 * @param {string} pattern - Pattern to match (e.g., "user:*")
 */
export async function invalidateCachePattern(pattern) {
  try {
    // Find all keys matching the pattern.
    // Note: KEYS is O(N) and blocks Redis while it scans the keyspace;
    // this was fine at our scale, but consider SCAN for large datasets.
    const keys = await redis.keys(pattern);
    
    if (keys.length > 0) {
      // Delete all matching keys
      await redis.del(...keys);
    }
  } catch (error) {
    console.error(`Failed to invalidate cache for pattern ${pattern}:`, error);
  }
}
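On larger keyspaces, KEYS can be a bottleneck because it blocks the Redis server while it scans everything. A hedged sketch of a SCAN-based alternative is below; the cursor loop and COUNT hint follow ioredis's scan signature, and the client is passed in as a parameter (a small deviation from the lib/cache.js style) purely to keep the sketch self-contained:

```javascript
// Hypothetical SCAN-based variant of invalidateCachePattern.
// SCAN walks the keyspace incrementally instead of blocking like KEYS.
async function invalidateCachePatternScan(client, pattern) {
  let cursor = '0';
  do {
    // Each call returns the next cursor plus a batch of matching keys
    const [nextCursor, keys] = await client.scan(
      cursor, 'MATCH', pattern, 'COUNT', 100
    );
    cursor = nextCursor;
    if (keys.length > 0) {
      await client.del(...keys);
    }
  } while (cursor !== '0'); // cursor '0' means the scan has completed
}
```

In lib/cache.js this would simply close over the module-level redis client instead of taking it as an argument.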

Implementing Cache in API Routes

Next, we added caching to our API routes:

// pages/api/products.js
import { fetchWithCache } from '@/lib/cache';
import { getProducts } from '@/lib/database';

export default async function handler(req, res) {
  const { category } = req.query;
  
  // Create a cache key based on the query parameters
  const cacheKey = `products:${category || 'all'}`;
  
  try {
    // Use our cache utility
    const products = await fetchWithCache(
      cacheKey,
      () => getProducts(category),
      1800  // Cache for 30 minutes
    );
    
    res.status(200).json(products);
  } catch (error) {
    console.error('Error fetching products:', error);
    res.status(500).json({ error: 'Failed to fetch products' });
  }
}

Caching in getServerSideProps

We also implemented caching in getServerSideProps for server-rendered pages:

// pages/products/[id].js
import { fetchWithCache } from '@/lib/cache';
import { getProductById, getRelatedProducts } from '@/lib/database';

export async function getServerSideProps(context) {
  const { id } = context.params;
  
  try {
    // Cache the product data
    const product = await fetchWithCache(
      `product:${id}`,
      () => getProductById(id),
      3600  // Cache for 1 hour
    );
    
    if (!product) {
      return { notFound: true };
    }
    
    // Cache related products separately
    const relatedProducts = await fetchWithCache(
      `product:${id}:related`,
      () => getRelatedProducts(id, product.category),
      3600
    );
    
    return {
      props: {
        product,
        relatedProducts,
      },
    };
  } catch (error) {
    console.error(`Error fetching product ${id}:`, error);
    return { notFound: true };
  }
}

export default function ProductPage({ product, relatedProducts }) {
  // Component implementation...
}

Cache Invalidation Strategies

Caching is only effective if it's invalidated when data changes. We implemented several strategies:

1. Time-Based Invalidation

The simplest approach was to set appropriate expiration times. Different types of data had different expiration policies:

  • Product catalog: 30 minutes
  • User profiles: 15 minutes
  • Configuration settings: 1 hour
  • Real-time data: 1 minute or no caching
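To keep these policies from drifting into magic numbers scattered across call sites, it can help to centralize them in one place. A minimal sketch (the constant names here are ours for illustration, not from the codebase above):

```javascript
// Hypothetical central TTL policy (values in seconds), mirroring the
// expiration list above. In practice this would live next to lib/cache.js.
const CACHE_TTL = {
  productCatalog: 30 * 60, // 30 minutes
  userProfile: 15 * 60,    // 15 minutes
  settings: 60 * 60,       // 1 hour
  realtime: 60,            // 1 minute -- or skip caching entirely
};
```

Call sites then read as policy rather than numbers, e.g. `fetchWithCache(cacheKey, fn, CACHE_TTL.productCatalog)`.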

2. Manual Invalidation on Updates

When data was updated, we explicitly invalidated the relevant cache keys:

// pages/api/products/[id].js
import { invalidateCache, invalidateCachePattern } from '@/lib/cache';
import { updateProduct } from '@/lib/database';

export default async function handler(req, res) {
  if (req.method !== 'PUT') {
    return res.status(405).json({ error: 'Method not allowed' });
  }
  
  const { id } = req.query;
  
  try {
    // Update the product in the database
    const updatedProduct = await updateProduct(id, req.body);
    
    // Invalidate the specific product cache
    await invalidateCache(`product:${id}`);
    
    // Invalidate related caches
    await invalidateCache(`product:${id}:related`);
    await invalidateCachePattern('products:*');
    
    res.status(200).json(updatedProduct);
  } catch (error) {
    console.error(`Error updating product ${id}:`, error);
    res.status(500).json({ error: 'Failed to update product' });
  }
}

3. Webhook-Based Invalidation

For changes made outside our application (e.g., from an admin panel), we implemented webhooks:

// pages/api/webhooks/cache-invalidation.js
import { invalidateCachePattern } from '@/lib/cache';

export default async function handler(req, res) {
  // Verify webhook secret for security
  const secret = req.headers['x-webhook-secret'];
  if (secret !== process.env.WEBHOOK_SECRET) {
    return res.status(401).json({ error: 'Unauthorized' });
  }
  
  const { entity, action, id } = req.body;
  
  try {
    // Invalidate appropriate caches based on the entity and action
    switch (entity) {
      case 'product':
        if (['create', 'update', 'delete'].includes(action)) {
          if (id) {
            await invalidateCache(`product:${id}`);
            await invalidateCache(`product:${id}:related`);
          }
          await invalidateCachePattern('products:*');
        }
        break;
      
      case 'category':
        // Invalidate all product caches when a category changes
        await invalidateCachePattern('products:*');
        await invalidateCachePattern('product:*');
        break;
      
      case 'settings':
        // Invalidate settings cache
        await invalidateCachePattern('settings:*');
        break;
        
      default:
        break;
    }
    
    res.status(200).json({ success: true });
  } catch (error) {
    console.error('Error processing cache invalidation webhook:', error);
    res.status(500).json({ error: 'Failed to process webhook' });
  }
}

Improving Cache Key Design

As our caching implementation grew, we formalized our cache key naming conventions:

// lib/cache-keys.js
/**
 * Generates consistent cache keys for various entities
 */

// Product-related keys
export const productKey = (id) => `product:${id}`;
export const productRelatedKey = (id) => `product:${id}:related`;
export const productsByCategoryKey = (category) => `products:category:${category || 'all'}`;
export const productsSearchKey = (query) => `products:search:${query}`;

// User-related keys
export const userKey = (id) => `user:${id}`;
export const userProfileKey = (id) => `user:${id}:profile`;
export const userOrdersKey = (id) => `user:${id}:orders`;

// Settings and configuration
export const settingsKey = (scope) => `settings:${scope}`;

// API response caching
export const apiResponseKey = (path, params) => {
  const searchParams = new URLSearchParams(params);
  searchParams.sort(); // stable key regardless of the order params were passed in
  return `api:${path}:${searchParams.toString() || 'default'}`;
};

This approach ensured consistent cache key naming across our application, making it easier to reason about cache invalidation.
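Concretely, when both the read path and the invalidation path build keys through the same helpers, a typo in a hand-written key string can't silently miss the cache. A small sketch (the two helpers are repeated from lib/cache-keys.js so the example is self-contained):

```javascript
// Key helpers as defined in lib/cache-keys.js above
const productKey = (id) => `product:${id}`;
const productsByCategoryKey = (category) => `products:category:${category || 'all'}`;

// The route that caches a product and the route that invalidates it
// both call productKey(id), so the strings can never disagree.
const readKey = productKey(42);          // 'product:42'
const listKey = productsByCategoryKey(); // 'products:category:all'
```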

Monitoring Cache Effectiveness

To understand the impact of our caching layer, we added simple monitoring:

// lib/cache.js (updated with monitoring)
import redis from './redis';

// Cache statistics
// Note: these counters are in-process, so they reset on restart and,
// with multiple server instances, each instance tracks only its own traffic.
let cacheHits = 0;
let cacheMisses = 0;

// Rest of the file remains the same...

export async function fetchWithCache(key, fn, expiration = DEFAULT_EXPIRATION) {
  try {
    // Try to get data from cache
    const cachedData = await redis.get(key);
    
    if (cachedData) {
      // Cache hit
      cacheHits++;
      return JSON.parse(cachedData);
    }
    
    // Cache miss
    cacheMisses++;
    const freshData = await fn();
    
    // Store the result in cache
    await redis.set(
      key,
      JSON.stringify(freshData),
      'EX',
      expiration
    );
    
    return freshData;
  } catch (error) {
    console.error(`Cache error for key ${key}:`, error);
    return fn();
  }
}

// Add a new function to get cache statistics
export function getCacheStats() {
  const total = cacheHits + cacheMisses;
  const hitRate = total > 0 ? (cacheHits / total) * 100 : 0;
  
  return {
    hits: cacheHits,
    misses: cacheMisses,
    total,
    hitRate: hitRate.toFixed(2) + '%',
  };
}

// Add a route to expose cache statistics
// pages/api/admin/cache-stats.js
import { getCacheStats } from '@/lib/cache';

export default async function handler(req, res) {
  // Add authentication check here
  
  const stats = getCacheStats();
  res.status(200).json(stats);
}

Results and Lessons Learned

After implementing Redis caching throughout our application, we saw dramatic improvements:

  • Database load reduced by 70% for our most frequently accessed pages
  • Average response time improved by 200ms (from 350ms to 150ms)
  • Cache hit rate of 85% after tuning our expiration times
  • Smoother handling of traffic spikes during promotional events

Throughout this process, we learned several important lessons:

  1. Selective caching is key - We focused on caching expensive queries and frequently accessed data, rather than trying to cache everything.
  2. Cache invalidation requires careful planning - We spent time thinking about how and when to invalidate caches to avoid serving stale data.
  3. Consistent key naming is essential - Our cache key utility ensured consistent naming, making cache invalidation more predictable.
  4. Monitor your cache - Tracking hit rates helped us identify opportunities to improve our caching strategy.
  5. Graceful fallbacks are important - Our cache utility always fell back to direct database queries if Redis had issues, ensuring resilience.

Conclusion

Implementing Redis caching in our Next.js application was a high-impact, relatively low-effort improvement. With a few utility functions and a thoughtful approach to cache invalidation, we significantly reduced database load and improved response times.

For applications with similar characteristics—frequent reads of relatively static data—a Redis caching layer can provide substantial performance benefits without major architectural changes. The key is to implement it thoughtfully, with clear conventions and proper invalidation strategies.