In my experience the key friction point has been schema stability vs. input variance. I've had better luck treating mapping as a dynamic planning problem with retries and memory.
Basically, treat extraction as an adaptive loop instead of a static function: if the first parse fails or looks incomplete, tweak the prompt, inject more context, or switch strategies. Memory carries forward partial wins so you don't start from scratch each attempt. We've seen the same pattern in agentic web environments: structured retries, context propagation, and memory turn brittle flows into robust automation, especially with high-variance input and fuzzy schemas.
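Loosely, the loop looks something like this. Everything here is a toy sketch with hypothetical names: the "strategies" are stand-in parsers, but in practice they'd be prompt variants or different extraction calls against a model. The key bits are switching strategies on failure and keeping partial wins in memory across attempts:

```python
# Toy sketch of an adaptive extraction loop (hypothetical names throughout).
# Each "strategy" maps raw text toward a target schema; on an incomplete
# result we switch strategies, carrying recovered fields forward in memory.

SCHEMA = {"name", "email"}  # target fields (toy example)


def strict_parse(text: str) -> dict:
    # Strategy 1: only accept "key=value" pairs separated by ";"
    out = {}
    for part in text.split(";"):
        if "=" in part:
            k, v = part.split("=", 1)
            out[k.strip()] = v.strip()
    return out


def fuzzy_parse(text: str) -> dict:
    # Strategy 2 (fallback): also tolerate ":" as a separator
    return strict_parse(text.replace(":", "="))


def extract(text: str, strategies, max_attempts: int = 3) -> dict:
    memory: dict = {}  # partial wins carried across attempts
    for _, strategy in zip(range(max_attempts), strategies):
        result = strategy(text)
        # keep any newly recovered schema fields
        memory.update({k: v for k, v in result.items() if k in SCHEMA})
        if SCHEMA <= memory.keys():  # schema complete -> stop retrying
            break
    return memory


messy = "name = Ada; email: ada@example.com"
print(extract(messy, [strict_parse, fuzzy_parse]))
# strict_parse recovers "name"; the fuzzy retry fills in "email"
```

The same shape generalizes: replace the toy parsers with prompt variants, add a validator in place of the `SCHEMA <= memory.keys()` check, and log which strategy won so the planner can reorder strategies over time.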