Evaluator-Optimizer
Iteratively improve outputs through repeated evaluation and refinement cycles
Based on Anthropic’s “Building Effective Agents” framework.
Evaluator-optimizer uses iterative cycles of generation and evaluation to improve output quality. One component generates content while another evaluates it and provides feedback, forming a loop that continues until the evaluator is satisfied or the maximum number of iterations is reached. This pattern trades computational cost for higher-quality results; the control flow is sketched below.
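A minimal sketch of that loop, assuming hypothetical `generate` and `evaluate` callables and an illustrative `Evaluation` result type (none of these names come from the framework):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Evaluation:
    passed: bool   # True when the evaluator accepts the draft
    feedback: str  # critique to fold into the next generation

def evaluator_optimizer(
    generate: Callable[[str, Optional[str]], str],  # (task, feedback) -> draft
    evaluate: Callable[[str], Evaluation],          # draft -> verdict
    task: str,
    max_iterations: int = 3,
) -> str:
    """Loop until the evaluator is satisfied or the iteration budget runs out."""
    feedback: Optional[str] = None
    draft = ""
    for _ in range(max_iterations):
        draft = generate(task, feedback)  # incorporate prior feedback, if any
        result = evaluate(draft)
        if result.passed:                 # evaluator satisfied: stop early
            return draft
        feedback = result.feedback        # carry the critique into the next cycle
    return draft                          # best effort once the budget is spent
```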
When to Use
Use evaluator-optimizer when output quality can be measurably improved through iteration and when you have clear evaluation criteria. It is ideal for creative tasks such as content generation, for code optimization, or for any scenario where a first attempt can be systematically improved. Avoid it when the cost of multiple iterations outweighs the quality gains, or when evaluation criteria are subjective and inconsistently applied.
Implementation
This example demonstrates iterative social media post creation: a generator drafts the post and an evaluator provides feedback until the post meets quality standards or the maximum number of iterations is reached.
Agent Code
The pattern implements a controlled iteration loop with clear termination criteria: the evaluator is satisfied or the maximum number of iterations is reached. Each cycle builds on the previous attempt, incorporating the evaluator's feedback to produce progressively better output, as the sketch below shows.
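A hedged sketch of this example, assuming the Anthropic Python SDK (`anthropic` package); the model name, prompts, and PASS/fail protocol are illustrative choices, not prescribed by the pattern:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set
MODEL = "claude-sonnet-4-20250514"  # illustrative model choice

def ask(prompt: str) -> str:
    """Single-turn completion helper."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

def write_post(topic: str, max_iterations: int = 3) -> str:
    draft, feedback = "", ""
    for _ in range(max_iterations):
        # Generator: create a draft, folding in the evaluator's last critique.
        draft = ask(
            f"Write a short social media post about: {topic}\n"
            + (f"Previous draft:\n{draft}\nRevise it using this feedback:\n{feedback}"
               if feedback else "")
        )
        # Evaluator: judge the draft against explicit quality criteria.
        verdict = ask(
            "You are a strict social media editor. Evaluate this post for "
            "clarity, hook strength, and length (under 280 characters).\n"
            f"Post:\n{draft}\n"
            "Reply with the single word PASS if it meets all criteria; "
            "otherwise list concrete improvements."
        )
        if verdict.strip().upper().startswith("PASS"):
            return draft      # evaluator satisfied
        feedback = verdict    # carry the critique into the next cycle
    return draft              # best effort after the iteration budget is spent
```

Note the two termination paths: the early return when the evaluator emits PASS, and the fall-through return after `max_iterations`, which guarantees the loop always produces an output.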
Related Patterns
This pattern combines well with parallelization when multiple evaluators provide different perspectives, and can be enhanced with routing to direct different content types to specialized generators and evaluators.
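For instance, the evaluation step can fan out to several evaluators concurrently and merge their critiques into a single feedback string; this is a sketch under the assumption that each evaluator is a callable returning its review as text:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Sequence

def evaluate_in_parallel(draft: str,
                         evaluators: Sequence[Callable[[str], str]]) -> str:
    """Fan the draft out to several evaluators and merge their critiques."""
    with ThreadPoolExecutor(max_workers=len(evaluators)) as pool:
        reviews = list(pool.map(lambda judge: judge(draft), evaluators))
    return "\n\n".join(reviews)  # combined feedback for the next generation cycle
```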