Organization · L3 Systematic · Tech Debt & Modernization

OpenRewrite + agent = systematic refactoring

OpenRewrite combined with an AI agent is the L3 pattern: structured code transformation (OpenRewrite recipes) orchestrated by an AI agent that selects which recipes to apply, sequences them, and fixes what they miss.

  • Continuous modernization: agents work on tech debt reduction in the background (non-blocking to feature work)
  • Library version bumps and dependency upgrades are automated via agent PRs
  • The OpenRewrite + agent combination is used for systematic refactoring campaigns
  • Agent tech debt PRs follow the same review process as feature PRs
  • Dependency freshness score is tracked (% of dependencies within N versions of latest)
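The freshness score in the last item can be computed mechanically. A minimal sketch, assuming each dependency is reduced to a (current major, latest major) pair and a staleness threshold of N majors; the input data here is illustrative, not a real manifest:

```python
# Dependency freshness: share of dependencies within N major versions
# of the latest release. Input pairs are illustrative examples only.

def freshness_score(deps, n=2):
    """deps: list of (current_major, latest_major) tuples."""
    if not deps:
        return 0.0
    fresh = sum(1 for cur, latest in deps if latest - cur <= n)
    return fresh / len(deps)

# e.g. a service still on Spring Boot 2 while 3 is latest, etc.
deps = [(2, 3), (5, 5), (1, 4), (27, 29)]
print(f"{freshness_score(deps):.0%} of dependencies within 2 majors of latest")
```

In practice the pairs would come from a lockfile or a Renovate/Dependabot report rather than being hand-written.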

Evidence

  • Agent-authored tech debt reduction PRs in git history
  • Automated dependency upgrade configuration (Renovate + agent, Dependabot + agent)
  • OpenRewrite recipe configuration with agent integration

What It Is

OpenRewrite combined with an AI agent is the L3 pattern where structured code transformation (OpenRewrite recipes) is orchestrated by an AI agent that selects which recipes to apply, sequences them correctly, interprets test failures, handles the edge cases that recipes do not cover, and produces production-ready PRs with minimal human intervention. The combination is more powerful than either component alone: OpenRewrite provides reliable, semantically correct mechanical transformation; the AI agent provides the judgment to apply it strategically and fix what the recipe could not.

OpenRewrite alone handles roughly 60-80% of a typical migration's mechanical work. The remaining 20-40% is edge cases: custom patterns that differ from what the recipe expects, test failures caused by behavioral differences in the new API, integration tests that relied on internal implementation details that changed. At L2, a human engineer handles this remaining fraction. At L3, the AI agent handles it - examining the test failures, reading the library changelog or documentation, writing targeted fixes, and re-running tests to validate.

The agent acts as an intelligent wrapper around OpenRewrite. It receives a migration task from the debt inventory ("migrate the payments service from Spring Boot 2.7 to Spring Boot 3.1"), selects the appropriate OpenRewrite recipe, runs it, interprets the results, identifies what the recipe did not handle, researches the fix using the library's documentation or changelog, implements the fix, runs the full test suite, and opens a PR. The human engineer reviews and merges. The entire execution happens asynchronously, often overnight.
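The loop described above can be sketched as orchestration pseudocode. Every helper here (run_recipe, run_tests, agent_fix, open_pr) is a hypothetical stand-in for real tooling - an OpenRewrite invocation, a CI run, an LLM call, a Git host API - not an actual library interface:

```python
# Hypothetical orchestration loop for one migration task. The injected
# callables stand in for real tooling; none of them is a real API.

MAX_FIX_ROUNDS = 3  # bound the agent's fix/re-test loop

def migrate(repo, recipe, run_recipe, run_tests, agent_fix, open_pr):
    run_recipe(repo, recipe)               # mechanical transformation
    for _ in range(MAX_FIX_ROUNDS):
        failures = run_tests(repo)         # interpret results
        if not failures:                   # recipe + fixes cover everything
            return open_pr(repo, f"Migrate via {recipe}")
        agent_fix(repo, failures)          # targeted edge-case fixes
    return None                            # give up: escalate to a human
```

Bounding the fix loop matters: an agent that cannot converge should escalate to the review queue rather than iterate indefinitely.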

This combination is what makes the L3 claim of "continuous modernization as background work" operational. Without the agent, running OpenRewrite is still a human-supervised process that requires engineering attention. With the agent, the process is largely autonomous - the human is in the review loop, not the execution loop.

Why It Matters

  • Closes the last-mile gap in automated migration - OpenRewrite recipes handle the pattern-based work; the agent handles the judgment-based remaining fraction; together they cover nearly all of a typical migration without a human writing code
  • Scales to the full migration backlog - A human engineer can only run so many OpenRewrite recipes per week; an agent running continuously can process the entire migration backlog in parallel across all repositories
  • Handles the long tail of custom patterns - Enterprise codebases accumulate custom patterns, internal abstractions, and non-standard usage that no public recipe covers; the AI agent generates targeted fixes for these cases on the fly
  • Produces high-quality, documented PRs - An AI agent can generate a PR description that explains what was migrated, why, what the test failures were, and how they were fixed - documentation quality that is often better than human-authored migration PRs
  • Creates a reusable recipe library - When an agent writes a fix for a custom pattern, that fix can be extracted as a custom OpenRewrite recipe for the next occurrence of the same pattern; the agent's output improves the recipe library

Getting Started

6 steps to get from here to the next level

Common Pitfalls

Mistakes teams actually make at this stage - and how to avoid them

How Different Roles See It

Bob - Head of Engineering

Bob's team is running the Spring Boot 3 migration across 12 repositories using the agent-plus-recipe approach. The agent has been running for three days. It has processed 8 repositories, opened 8 PRs, and his team has merged 6 of them after quick reviews. Two PRs are in review because they touched configuration areas that required judgment about production deployment approach. The remaining 4 repositories are queued.

The speed is unlike anything Bob has seen for migration work. His estimate had been 18 weeks for two engineers. The agent processed 8 repositories in 3 days, with review taking perhaps 2 hours per PR. At this rate, all 12 repositories will be done in under two weeks, with total engineering time of approximately 24 review-hours. Bob needs to update his migration planning model. He also needs to communicate the approach to other teams, because the business case for investing in the agent setup is now demonstrated by concrete results.
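The arithmetic behind Bob's revised plan can be sanity-checked directly from the figures quoted above (throughput extrapolation only; review latency is not modeled):

```python
# Sanity-check the migration plan using only the numbers in the text.
repos_total = 12
repos_done = 8
days_elapsed = 3
review_hours_per_pr = 2

throughput = repos_done / days_elapsed                  # repos per day
days_remaining = (repos_total - repos_done) / throughput
total_review_hours = repos_total * review_hours_per_pr  # human time

print(f"~{days_elapsed + days_remaining:.1f} days of agent time, "
      f"{total_review_hours} review-hours")
```

Pure agent throughput finishes in under a week; the "under two weeks" figure leaves headroom for review queues and the two judgment-heavy PRs.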

Sarah - Productivity Lead

Sarah is tracking the migration metrics. The agent-plus-recipe approach has produced a productivity multiplier she is struggling to present credibly because the numbers seem implausible: 8 repositories migrated in 3 days vs. the 18-week estimate for manual migration. She wants to make sure the comparison is fair.

Sarah should document the methodology carefully: what did the recipe handle, what did the agent fix, and what did the humans review? The breakdown for these 8 repositories: recipe handled 72% of changes, agent fixed 23% of residual cases, humans reviewed and modified 5% during PR review. Total human engineering time: 16 hours of review across 8 PRs. Equivalent manual time estimate: 3 weeks per repository. The comparison is valid and the numbers are real. Sarah should present this with the methodology attached - it will be questioned, and having the detailed breakdown makes the case bulletproof.
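Sarah's multiplier can be reproduced from the breakdown above. A sketch, assuming a 40-hour engineer-week to convert the manual estimate (the text gives it in weeks, not hours):

```python
# Reconstruct the effort comparison from the reported breakdown.
repos = 8
manual_weeks_per_repo = 3
hours_per_week = 40            # assumption: standard engineer-week
review_hours = 16              # actual human time across the 8 PRs

# Reported split of changes: recipe / agent / human-modified in review.
recipe_share, agent_share, human_share = 0.72, 0.23, 0.05
assert abs(recipe_share + agent_share + human_share - 1.0) < 1e-9

manual_hours = repos * manual_weeks_per_repo * hours_per_week
multiplier = manual_hours / review_hours
print(f"{manual_hours} manual hours vs {review_hours} review hours "
      f"=> {multiplier:.0f}x")
```

Presenting the multiplier alongside the share breakdown is what makes the number defensible: each percentage point of "human share" is traceable to review commits.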

Victor - Staff Engineer, AI Champion

Victor designed the agent configuration, wrote the task specifications, and is the primary reviewer of agent-generated migration PRs. He has developed a detailed understanding of where the recipe plus agent combination succeeds and where it struggles. Success: standard Spring patterns, controller/service/repository layer changes, test annotations. Struggle: custom AOP aspects that use Spring internals, reactive programming patterns that changed behavior in WebFlux, integration tests that use Spring's test context in non-standard ways.

Victor is addressing the struggle areas by writing custom OpenRewrite recipes for the most common custom patterns. He has written three recipes so far, each taking about two hours to write and test. Each recipe eliminates one class of agent failure - the agent's test failure rate on the repositories processed after the recipes were added dropped from 23% to 8%. Victor's recipe-writing effort is a high-leverage investment: each recipe written once reduces the agent's residual work across every future migration that encounters that pattern. Victor should track which custom patterns are most common across the organization's codebase and prioritize recipe development accordingly.
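Victor's prioritization step is easy to operationalize: count how often each custom pattern forces a manual agent fix, and write recipes for the most frequent ones first. A sketch, with illustrative pattern tags; in practice these would be mined from agent PR metadata or commit trailers:

```python
from collections import Counter

# Hypothetical log of pattern tags the agent attached to its manual
# fixes across migration PRs. Tag names are illustrative only.
fix_log = [
    "custom-aop-aspect", "webflux-behavior-change", "custom-aop-aspect",
    "test-context-misuse", "custom-aop-aspect", "webflux-behavior-change",
]

# The most frequent patterns are the highest-leverage recipes to write next.
for pattern, count in Counter(fix_log).most_common(2):
    print(f"{pattern}: {count} fixes -> candidate custom recipe")
```

Each recipe written this way converts a recurring agent cost into a one-time authoring cost, which is exactly the 23%-to-8% failure-rate drop described above.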