AI Coding Tools Linked to 17% Drop in Comprehension Among Junior Engineers, Anthropic Study Finds

Feb 28, 2026
InfoQ

Summary

A new Anthropic study finds that junior engineers using AI coding tools score 17 percentage points lower on comprehension tests than those coding manually, with the sharpest gap in debugging. How developers use AI matters most: those who fully delegate to AI score below 40%, while those who use it conceptually score 65% or higher.

Key Points

  • A randomized controlled trial by Anthropic finds that 52 junior engineers using AI coding assistance score 17 percentage points lower on comprehension tests than those coding manually — AI users average 50% versus 67% for the manual group — with the largest gap on debugging questions.
  • How developers interact with AI proves more decisive than whether they use it at all — those who fully delegate code generation to AI score below 40%, while those who use AI for conceptual questions or combine generation with explanations score 65% or higher.
  • Anthropic recommends deploying AI tools with intentional design choices that support learning, warning that productivity gains may come at the cost of the debugging and validation skills needed to oversee AI-generated code. Both Anthropic and OpenAI now offer dedicated learning modes that prioritize comprehension over delegation.
