Manual migration attempts
- Tech debt is categorized and prioritized (severity, impact, effort)
- At least one manual migration attempt has been completed or is in progress
- OpenRewrite or an equivalent automated refactoring tool has been evaluated or adopted for basic recipes
- Tech debt reduction is allocated time in sprint planning (even if small)
- Migration attempts are documented with lessons learned
Evidence
- Categorized tech debt backlog with priority ratings
- Completed or in-progress migration project documentation
- OpenRewrite configuration or evaluation report
What It Is
Manual migration attempts are the L2 approach to executing framework upgrades, library replacements, and platform modernization: a developer (or small team) manually updates code to conform to the new API surface, running tests as they go, fixing compilation errors one by one, and resolving behavioral differences that emerge during testing. This approach represents genuine progress over L1 paralysis - work is actually being done - but it is slow, error-prone, and does not scale to the size of most organizations' migration backlogs.
A manual migration for a non-trivial codebase follows a predictable pattern: set up a migration branch, update the dependency version, observe the cascade of compilation failures, fix each failure, run tests, fix the test failures that emerge from behavioral changes in the new version, handle the edge cases the tests did not cover, and eventually produce a working PR. For a large codebase migrating across a major version boundary - Java 8 to 17, Spring Boot 2 to 3, AngularJS to Angular - this process takes weeks to months of sustained engineering effort.
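Most of that compilation-failure cascade is mechanical. A minimal sketch of the kinds of edits involved, assuming a Java 8 to 17 path (the class and the before/after pairs are illustrative, not taken from any specific codebase):

```java
// Illustrative mechanical edits from a hypothetical Java 8 -> 17 migration.
public class MigrationFixes {
    public static void main(String[] args) {
        // Before (Java 8): Integer boxed = new Integer(42);
        // The boxing constructor is deprecated for removal since Java 9.
        Integer boxed = Integer.valueOf(42);

        // Before: "SELECT id, name\n" + "FROM users\n" string concatenation.
        // After (Java 15+): a text block.
        String sql = """
                SELECT id, name
                FROM users
                """;

        System.out.println(boxed);               // prints 42
        System.out.println(sql.lines().count()); // prints 2
    }
}
```

Each individual edit is trivial; the cost comes from the volume of them and from verifying that none changed behavior.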
The problem is not that manual migrations are impossible - many teams execute them successfully. The problem is capacity. A migration that takes two engineers four weeks is a migration that consumes two engineers for four weeks who could not be working on features. In organizations where engineering capacity is the primary constraint, migrations at this cost are perpetually displaced by feature work. The migration starts, consumes capacity, stalls when a blocking feature arrives, loses context when engineers are reassigned, and eventually fails or delivers a partial result.
The L2 state is characterized by migrations that start but do not finish, or that finish for small codebases but cannot be applied at the scale of a large monolith or a polyglot microservices ecosystem. The gap between L2 (manual) and L3 (AI-assisted) is the difference between migrations as projects - discrete, budgeted, competing with feature work for capacity - and migrations as continuous background work that does not consume primary engineering capacity.
Why It Matters
- Establishes migration discipline - Even manual migrations that partially succeed create the organizational muscle memory of running migrations: branching strategy, test validation approach, rollback procedures, and stakeholder communication
- Produces migration artifacts - The migration scripts, updated configurations, and test fixes produced by manual migrations are reusable starting points for AI-assisted migration at L3
- Surfaces unexpected complexity - Manual execution reveals the non-obvious parts of a migration that automated tools must handle; the edge cases found in a manual migration inform the prompts and validation steps for agent-based migration
- Creates urgency for automation - Experiencing the cost of a manual migration firsthand is the most effective argument for investing in AI-assisted migration; teams that have spent four weeks on a migration are strongly motivated to find a better approach
- Addresses the critical backlog items - Some migrations cannot wait for an organization to reach L3; for the most urgent items (active CVEs, imminent end-of-life), manual migration is the only immediate option
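One concrete instance of the "unexpected complexity" manual execution surfaces on the Java 8 to 17 path is strong encapsulation of JDK internals (JEP 403): reflective access that worked silently on Java 8 throws at runtime on 17 unless explicitly opened. A minimal probe, whose outcome depends on the JDK it runs under:

```java
// Probes whether reflective access to a java.base internal is permitted.
// On Java 8 this succeeds; on Java 17+ (without --add-opens) setAccessible
// throws InaccessibleObjectException, a RuntimeException added in Java 9.
import java.lang.reflect.Field;

public class EncapsulationProbe {
    public static void main(String[] args) {
        try {
            Field value = String.class.getDeclaredField("value");
            value.setAccessible(true);
            System.out.println("open");
        } catch (ReflectiveOperationException | RuntimeException e) {
            System.out.println("blocked: " + e.getClass().getSimpleName());
        }
    }
}
```

Edge cases of this shape - runtime failures no compiler flags - are exactly what a manual run catalogs for later automation.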
How Different Roles See It
Bob has committed to completing the Java 11 migration this quarter. He has assigned two engineers and given them four weeks. Halfway through the first week, a critical feature request arrives from a major customer and one of the engineers gets pulled off the migration. The remaining engineer continues, but at less than half the original velocity: the work was scoped for two people sharing context. By week three, the migration is at 60% completion and the remaining 40% is the hardest part.
Bob needs to protect migration work from displacement more aggressively than feature work, not less. The migration has a sunk cost and a clear business risk attached to it. When the feature request arrived, Bob should have negotiated a one-week delay on the feature rather than pulling an engineer off the migration. He should also establish a rule: once a migration starts, it is staffed until complete. The cost of a stalled migration - context loss, branch drift, demoralization - is higher than the cost of the replacement feature delay.
Sarah is watching two engineers spend four weeks on the Java 11 migration and wants to know if this was a good investment. The migration is complete, but she does not have a clean way to measure whether the migrated codebase is actually faster to develop against than the pre-migration state.
Sarah should set up a before-and-after measurement. Before the migration: record PR cycle time, build time, and deployment frequency for the migrated repositories. After the migration: track the same metrics for eight weeks. If the migration produced the expected improvements, the data will show it. If it did not, the data will prompt a conversation about whether the migration addressed the right debt. Either outcome is more useful than a qualitative assessment. Sarah should also track the engineering cost of the migration (hours) and compare it to the velocity improvement (hours recovered per quarter) to calculate the payback period - this frames migration as an investment with a return, not a cost.
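The payback framing is simple arithmetic. A sketch with assumed numbers (the cost matches the two-engineers-for-four-weeks figure above; the recovered-hours estimate is hypothetical):

```java
// Payback-period sketch: migration cost vs. recovered engineering hours.
public class MigrationPayback {
    public static void main(String[] args) {
        double costHours = 2 * 4 * 40;        // 2 engineers x 4 weeks x 40h = 320h
        double recoveredPerQuarter = 80;      // assumed velocity gain, hours/quarter
        double paybackQuarters = costHours / recoveredPerQuarter;
        System.out.println("Payback: " + paybackQuarters + " quarters"); // prints "Payback: 4.0 quarters"
    }
}
```

With these numbers the migration pays for itself in a year; a shorter payback strengthens the case, a longer one reopens the question of whether the right debt was addressed.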
Victor executed the Java 11 migration. He did it manually, and he has strong opinions about which parts were unnecessary busywork and which parts required genuine engineering judgment. The busywork - updating import statements, replacing deprecated method calls with their direct equivalents, adjusting Lombok annotations - was 60% of the total effort and required no creative thinking whatsoever. The engineering judgment - understanding the behavioral difference in the new Optional handling, debugging the serialization change that broke three tests, evaluating whether the performance characteristics of the new garbage collector required configuration changes - was the remaining 40%.
This breakdown is Victor's roadmap for AI automation. The 60% busywork is exactly what OpenRewrite recipes and AI agents can handle. The 40% judgment work is what Victor should be doing. At L3, Victor's role in a migration shifts from executing the mechanical work to defining the migration strategy, reviewing the agent's mechanical output, and handling the edge cases that require understanding. Victor should document the 60%/40% breakdown explicitly and use it to make the case for AI-assisted migration investment. The ROI is 60% of migration engineering time recovered per migration, which across the multi-year backlog is a significant number.
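The mechanical 60% maps directly onto declarative recipes. A sketch of a `rewrite.yml`, assuming OpenRewrite's rewrite-migrate-java module is on the plugin classpath (the recipe and module names here are published by the OpenRewrite project, but should be verified against the version in use):

```yaml
# Hypothetical composite recipe; org.openrewrite.java.migrate.UpgradeToJava17
# is a published OpenRewrite recipe - confirm availability for your version.
type: specs.openrewrite.org/v1beta/recipe
name: com.example.Java17Migration
displayName: Java 17 migration (mechanical portion)
recipeList:
  - org.openrewrite.java.migrate.UpgradeToJava17
```

Running this against the codebase automates most of the import updates and deprecated-call replacements Victor classified as busywork, leaving the judgment work for review.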