Many disappointing model outputs can be traced back to a handful of repeated mistakes.

Common failure modes

  • asking for too much in one prompt
  • leaving the audience or use case unspecified
  • failing to define the output format
  • providing irrelevant or conflicting context
  • assuming the model knows unstated constraints
  • treating a first answer as final without iteration

The pattern underneath

Most prompt mistakes are really clarity mistakes. When a request is underspecified, the model fills in the gaps with guesses, and those guesses are not always the ones you wanted.

A better workflow is to narrow the task, provide useful context, and evaluate the response against a clear standard.
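One way to make that workflow concrete is a small template that forces the missing details to be stated up front. This is a minimal sketch, not a prescribed method: the field names (`task`, `audience`, `output_format`, `constraints`) are illustrative assumptions, and the example task is hypothetical.

```python
# Sketch: assemble a narrowed prompt from explicit pieces, so that
# audience, output format, and constraints cannot be left unstated.
# The fields below are illustrative choices, not a required schema.

def build_prompt(task: str, audience: str, output_format: str,
                 constraints: list[str]) -> str:
    """Combine a narrowed task with explicit context into one prompt."""
    lines = [
        f"Task: {task}",
        f"Audience: {audience}",
        f"Output format: {output_format}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# A vague request like "summarize this report" becomes:
prompt = build_prompt(
    task="Summarize the attached quarterly report",
    audience="executives with no finance background",
    output_format="three bullet points, each under 25 words",
    constraints=["plain language", "no jargon", "flag any missing data"],
)
print(prompt)
```

The same fields double as an evaluation checklist: when the response comes back, you can check it against the stated audience, format, and constraints instead of judging it by feel.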