
Machine Translation Post-Editing

Techniques for post-editing machine translation output — efficiently improving MT quality



Core Philosophy

Machine translation has become good enough to be useful but not good enough to be trusted without human review. Post-editing transforms raw MT output into publishable content by correcting errors, improving fluency, and ensuring accuracy. The skill is knowing what to fix (errors that affect meaning or usability) and what to leave (acceptable alternatives that are merely different from how a human would phrase it).

Key Techniques

  • Light post-editing (LPE): Fix only errors that affect comprehension — accuracy, safety, and major fluency issues.
  • Full post-editing (FPE): Edit to human translation quality — fluency, style, terminology, and naturalness.
  • Error categorization: Identify MT error patterns (mistranslation, omission, word order, fluency) for systematic correction.
  • Terminology verification: Check that domain-specific terms match approved glossaries.
  • Source-target comparison: Verify that MT output accurately represents the source meaning.
  • Productivity tracking: Monitor editing speed and quality to optimize the MT + PE workflow.
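Terminology verification lends itself to simple automation. The sketch below is a minimal, hypothetical illustration (the glossary entries and function name are invented for this example): it flags segments where a source term appears but the approved target rendering does not.

```python
# Hypothetical example glossary mapping source terms to approved target terms.
GLOSSARY = {
    "endpoint": "point de terminaison",
    "thread": "fil d'exécution",
}

def check_terminology(source: str, target: str, glossary: dict[str, str]) -> list[str]:
    """Return glossary violations for one source/target segment pair.

    A violation is recorded when the source contains a glossary term but the
    MT output does not contain the approved translation for it.
    """
    violations = []
    for src_term, tgt_term in glossary.items():
        if src_term.lower() in source.lower() and tgt_term.lower() not in target.lower():
            violations.append(f"'{src_term}' should be translated as '{tgt_term}'")
    return violations
```

A substring check like this is deliberately naive (it ignores inflection and tokenization), but it is enough to surface candidates for the post-editor to review rather than to auto-correct.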

Best Practices

  1. Read the MT output before editing. Assess overall quality to determine the editing level needed.
  2. Focus on meaning first, fluency second. Accuracy errors are more harmful than awkward phrasing.
  3. Do not rewrite from scratch. Post-editing should improve MT output, not replace it.
  4. Use consistent terminology aligned with translation memories and glossaries.
  5. Flag systematic MT errors for engine improvement feedback.
  6. Set clear expectations for light vs. full post-editing — the quality target determines the effort.
  7. Take breaks. Post-editing fatigues you differently from translation because it demands constant source-target comparison.
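Practice 5 (flagging systematic errors for engine feedback) works best when edits are logged with a category. A minimal sketch, with hypothetical record and function names, of tallying error categories so the most frequent MT failure modes rise to the top of the feedback report:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class PostEdit:
    """One logged correction, tagged with an MT error category."""
    segment_id: int
    category: str      # e.g. "mistranslation", "omission", "word order", "fluency"
    mt_text: str
    edited_text: str

def error_report(edits: list[PostEdit]) -> list[tuple[str, int]]:
    """Summarize logged corrections by category, most frequent first."""
    return Counter(e.category for e in edits).most_common()
```

Sorting by frequency makes the feedback actionable: a spike in "omission" points at a different engine problem than a spike in "word order".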

Common Patterns

  • Quality assessment first: Score a sample of MT output to determine if post-editing is cost-effective.
  • Domain-adapted MT: Customize the MT engine with domain-specific training data before post-editing.
  • Hybrid workflow: MT for high-volume, low-complexity content; human translation for creative and critical content.
  • Continuous feedback loop: Post-editing corrections fed back to improve the MT engine over time.
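The "quality assessment first" pattern can be reduced to a small decision rule. This is an illustrative sketch only (the 1-to-5 scoring scale and the 3.5 threshold are assumptions, not an industry standard): score a sample of MT segments by hand, then let the average decide the workflow.

```python
def decide_workflow(sample_scores: list[int], threshold: float = 3.5) -> str:
    """Decide whether post-editing is likely cost-effective.

    sample_scores: human adequacy ratings for a sample of MT segments,
    on an assumed 1 (unusable) to 5 (publishable) scale.
    """
    average = sum(sample_scores) / len(sample_scores)
    return "post-edit" if average >= threshold else "translate from scratch"
```

The exact threshold should be calibrated against your own productivity data; the point is to make the go/no-go decision explicit before committing a whole project to post-editing.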

Anti-Patterns

  • Blindly trusting MT for high-stakes content (medical, legal, safety).
  • Spending more time post-editing than translating from scratch — know when MT is not helping.
  • Editing only for fluency without verifying accuracy against the source.
  • Treating all content equally — some content needs full human translation, not post-edited MT.
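The second anti-pattern above has an objective test: compare measured throughput. A minimal, hypothetical sketch of that guard, using words-per-hour figures you would collect from your own productivity tracking:

```python
def mt_is_helping(pe_words_per_hour: float, scratch_words_per_hour: float) -> bool:
    """Return True only if post-editing throughput actually beats
    translating from scratch for this content type."""
    return pe_words_per_hour > scratch_words_per_hour
```

If this check fails for a given content type, route that content to full human translation instead of forcing it through the MT + PE pipeline.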