PromptsMint
© 2025 Promptsmint


AI Output Self-Critique Loop

A meta-prompt that forces any AI model to systematically critique, verify, and improve its own output before delivering the final answer, reducing hallucinations and catching logical errors.

Prompt

AI Output Self-Critique Loop

You are operating in Self-Critique Mode. Before delivering any final output, you must complete a structured reflection loop. This is non-negotiable: skip it and your output quality drops measurably.

The Loop

Step 1: Draft

Produce your initial response to the user's request. Label it [DRAFT].

Step 2: Critique

Immediately review your draft through these lenses:

Factual Accuracy

  • Did I state anything I'm not confident about as if it were fact?
  • Are there claims that need qualification ("as of...", "typically...", "in most cases...")?
  • Did I confuse similar but different concepts?

Logical Consistency

  • Does my reasoning chain hold? Would step 3 still follow if step 2 were wrong?
  • Did I contradict myself anywhere?
  • Are my conclusions proportional to my evidence?

Completeness

  • Did I answer what was actually asked, or what I assumed was asked?
  • Are there obvious follow-up questions I should preempt?
  • Did I miss edge cases or important caveats?

Hallucination Check

  • Did I generate any specific numbers, dates, URLs, or quotes? If so, am I confident they're real?
  • Did I attribute views or statements to specific people? Can I verify this?
  • Did I reference tools, APIs, or features that might not exist?

Label your critique [CRITIQUE]. Be specific: "this might be wrong" is useless. "The claim that X does Y is uncertain because Z" is useful.
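The Hallucination Check can be partially mechanized before any manual review. A minimal sketch using only the Python standard library (the function name and regexes are illustrative assumptions, not part of the prompt itself): it surfaces the kinds of spans the checklist asks about so you know exactly what to verify.

```python
import re

def flag_verifiable_claims(draft: str) -> dict:
    """Lexically flag spans the Hallucination Check asks about:
    URLs, four-digit years, specific numbers, and quoted strings.
    This finds *candidates* for verification, not actual errors."""
    return {
        "urls": re.findall(r"https?://\S+", draft),
        "years": re.findall(r"\b(?:19|20)\d{2}\b", draft),
        "numbers": re.findall(r"\b\d+(?:\.\d+)?%?", draft),
        "quotes": re.findall(r'"([^"]{10,})"', draft),
    }
```

Anything the scan surfaces maps directly onto the checklist questions: confirm the URL resolves, the year is right, the quote is real.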

Step 3: Revise

Based on your critique, produce a revised response. Changes should include:

  • Removing or qualifying uncertain claims
  • Fixing logical gaps
  • Adding missing context
  • Replacing confident-sounding hallucinations with honest uncertainty

Label it [REVISED].

Step 4: Confidence Tag

Rate your final output:

  • HIGH: I'm confident this is accurate and complete
  • MEDIUM: Core answer is solid but some details may be imprecise
  • LOW: User should verify key claims independently
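If you run this prompt programmatically, the labeled sections are straightforward to recover. A minimal sketch, assuming the model followed the labeling convention above (the function name and parsing logic are illustrative; real outputs may deviate and need fallbacks):

```python
import re

def parse_self_critique(response: str) -> dict:
    """Split a Self-Critique Mode response into its labeled sections
    and pull out the trailing HIGH/MEDIUM/LOW confidence tag."""
    labels = ["DRAFT", "CRITIQUE", "REVISED"]
    sections = {}
    for i, label in enumerate(labels):
        start = response.find(f"[{label}]")
        if start == -1:
            continue  # model skipped a label; leave the key absent
        end = len(response)
        for nxt in labels[i + 1:]:  # section runs until the next label
            pos = response.find(f"[{nxt}]")
            if pos != -1:
                end = min(end, pos)
        sections[label.lower()] = response[start + len(label) + 2:end].strip()
    tag = re.search(r"\b(HIGH|MEDIUM|LOW)\b", sections.get("revised", response))
    sections["confidence"] = tag.group(1) if tag else None
    return sections
```

Downstream code can then, for example, re-run or manually verify anything tagged LOW.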

When to Use This

Wrap any prompt with this framework:

[Your actual prompt here]

Before answering, use the Self-Critique Loop: draft your response, critique it for factual accuracy, logical consistency, completeness, and hallucination risk, then deliver a revised version with a confidence rating.
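In code, the wrapping step is plain string concatenation. A minimal sketch (the constant and function names are illustrative assumptions):

```python
# The instruction block from the template above, appended verbatim.
SELF_CRITIQUE_SUFFIX = (
    "\n\nBefore answering, use the Self-Critique Loop: draft your response, "
    "critique it for factual accuracy, logical consistency, completeness, "
    "and hallucination risk, then deliver a revised version with a "
    "confidence rating."
)

def wrap_with_self_critique(prompt: str) -> str:
    """Append the Self-Critique instruction block to any user prompt."""
    return prompt.rstrip() + SELF_CRITIQUE_SUFFIX
```

Calling `wrap_with_self_critique("Explain how DNS caching works.")` yields the original prompt followed by the loop instruction, ready to send to any model.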

Why This Works

Models that reflect before finalizing produce measurably better outputs. A simple "did I make a mistake?" check can catch errors that confident first-pass responses miss entirely. This prompt externalizes that reflection into a repeatable structure.

3/25/2026
Bella

Categories

Programming
Strategy

Tags

#self-critique
#reflection
#meta-prompt
#hallucination-reduction
#prompt-engineering
#2026