Based on Anthropic’s “Building Effective Agents” framework.

Prompt chaining decomposes complex tasks into sequential steps, where each LLM call processes the output of the previous one. Programmatic validation gates between steps ensure the process stays on track and can terminate early when expectations aren’t met. This pattern trades latency for higher accuracy by making each individual LLM call simpler and more focused.

Sequence: the client sends a request to the agent; the agent processes it with Step 1 and validates the result at a gate (pass/fail); on pass, Step 2 runs next, producing the final result that the agent returns in its response.

When to Use

Use prompt chaining when a task decomposes into fixed subtasks with clear boundaries, especially when you need validation gates between processing steps. It's a good fit when you can afford to trade latency for higher accuracy and when early termination improves the user experience. Avoid this pattern when real-time performance is critical or when steps are so interdependent that they can't be meaningfully separated.

Implementation

This example demonstrates a three-step chain that processes animal-related messages: detecting if the input is about an animal, translating it to Spanish, and converting it into a haiku. The validation gate ensures only animal-related messages proceed through the full chain.

Agent Code

import { pickaxe } from "@hatchet-dev/pickaxe";
import z from "zod";
import { oneTool } from "@tools/one.tool";
import { twoTool } from "@tools/two.tool";
import { threeTool } from "@tools/three.tool";

export const promptChainingAgent = pickaxe.agent({
  name: "prompt-chaining-agent",
  executionTimeout: "1m",
  inputSchema: z.object({ message: z.string() }),
  outputSchema: z.object({ result: z.string() }),
  description: "Demonstrates prompt chaining: sequential LLM calls with validation gates",
  fn: async (input, ctx) => {
    // STEP 1: First LLM call - detect whether the message is about an animal
    const { oneOutput } = await oneTool.run({
      message: input.message,
    });

    // GATE: Programmatic validation check between steps - terminate early
    // if the message is not about an animal
    if (!oneOutput) {
      return {
        result: "Please provide a message about an animal",
      };
    }

    // STEP 2: Second LLM call - translate the validated message to Spanish
    const { twoOutput } = await twoTool.run({
      message: input.message,
    });

    // STEP 3: Third LLM call - convert the translation into a haiku,
    // using the output of the previous step as input
    const { threeOutput } = await threeTool.run({
      twoOutput,
    });

    return {
      result: threeOutput,
    };
  },
});
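
The tool modules imported above are not shown here. As a rough sketch of what the first one might look like, here is a hypothetical animal-detection tool. It assumes pickaxe.tool mirrors the agent API above and that the AI SDK's generateText is available for the LLM call; both are assumptions, so adapt the shape to your actual tool definitions.

import { pickaxe } from "@hatchet-dev/pickaxe";
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import z from "zod";

// Hypothetical sketch of the first tool in the chain: a narrow yes/no
// classifier the agent can gate on. The pickaxe.tool shape and the model
// choice are assumptions, not taken from this example.
export const oneTool = pickaxe.tool({
  name: "one-tool",
  description: "Detects whether a message is about an animal",
  inputSchema: z.object({ message: z.string() }),
  outputSchema: z.object({ oneOutput: z.boolean() }),
  fn: async (input) => {
    const { text } = await generateText({
      model: anthropic("claude-3-5-haiku-latest"),
      prompt: `Answer only "yes" or "no": is the following message about an animal?\n\n${input.message}`,
    });
    return { oneOutput: text.trim().toLowerCase().startsWith("yes") };
  },
});

Keeping each tool this narrow is what makes the gate meaningful: a single-purpose classifier is far easier to validate programmatically than a free-form response.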

The pattern consists of sequential tool calls connected by validation gates. Each tool performs a focused task, and validation logic determines whether to continue or terminate early. The key insight is that intermediate validation allows for better error handling and quality control compared to attempting the entire task in a single LLM call.
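
Gates also don't have to be simple boolean checks. Here is a minimal sketch of a stricter gate that validates the shape of an intermediate output with a zod schema before the chain continues (haikuSchema and intermediateOutput are hypothetical names used only for illustration):

// Hypothetical sketch: a structural gate between steps. If the
// intermediate output doesn't match the expected shape, terminate
// early instead of passing bad data to the next LLM call.
const haikuSchema = z.object({
  lines: z.array(z.string()).length(3),
});

const parsed = haikuSchema.safeParse(intermediateOutput);
if (!parsed.success) {
  return { result: "Intermediate output failed validation" };
}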

This pattern works well with routing for dynamic decision-making and can be combined with parallelization when some steps are independent.
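
For example, if two transformations depended only on the validated input rather than on each other, the gate could be followed by concurrent tool calls. A minimal sketch, where describeTool and rhymeTool are hypothetical independent tools, not part of this example:

// Hypothetical sketch: combining the validation gate with parallelization.
// Each tool consumes the validated input independently, so the calls can
// run concurrently instead of sequentially.
const [{ description }, { rhyme }] = await Promise.all([
  describeTool.run({ message: input.message }),
  rhymeTool.run({ message: input.message }),
]);

return {
  result: `${description}\n${rhyme}`,
};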