Open the model selector and pick a variant that fits the task: choose GPT‑5 (thinking) for deeper analysis, or switch back to GPT‑4o or o3 when GPT‑5 feels terse or off-topic. This single change often restores depth, accuracy, and stability.
Method 1: Choose the Right Model (or Restore Older Ones)
Step 1: Check which model actually produced the last reply. In ChatGPT, look below the message header or in the conversation info to confirm whether responses came from a fast GPT‑5 route or a reasoning route. During rollout, routing glitches sent queries to a shallower model; verifying the active model prevents misattribution.
Step 2: Explicitly select GPT‑5 (thinking) for analysis-heavy tasks such as research, multi-step planning, and debugging. This variant trades a bit of latency for stronger reasoning and reduced overconfident errors, making answers more complete and self-consistent.
Step 3: Revert to a known-good model for specific workflows. After user backlash, OpenAI reintroduced access to older options. In ChatGPT settings, look for a toggle like “Show legacy models,” then start a fresh chat with GPT‑4o or o3 when GPT‑5 feels blunt or misses details. A new chat ensures prior context doesn’t bias routing.
Step 4: Map tasks to models to save time. Use quick GPT‑5 for short, low‑stakes answers; GPT‑5 (thinking) or o3 for reasoning; GPT‑4o for friendlier conversational tone. If you need long-context analysis, consider switching to a service with a larger reliable window for that one task, then return to your main workflow.
Step 5: Start a clean session after switching. Large prior chats slow response selection and can cause “context drift.” A fresh thread reduces noise and improves focus on the current task.
Method 2: Force Reasoning and Stabilize Outputs
Step 1: In the UI, pick the reasoning variant. If you see a “thinking” option for GPT‑5, select it before asking complex questions. Users reported shallow, short replies when the router chose fast paths; the reasoning path spends more compute on analysis and typically cuts down on confidently wrong answers.
Step 2: Set low temperature for factual tasks. In tools that expose parameters, use temperature=0–0.3 for research, specs, and math to reduce speculative wording and keep outputs crisp and verifiable.
Step 3: Nudge for verification, not verbosity. Add an instruction such as: “If information is missing or uncertain, say ‘I don’t know’ and list what’s needed.” This cuts time lost to wrong answers and prompts the model to request specific inputs.
Step 4: Use the API to select the reasoning route when available. Set the model to a reasoning-capable variant and bias for longer thinking. Example pattern:
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5-thinking",
    "temperature": 0.2,
    "messages": [
      {"role": "system", "content": "When uncertain, say ‘I don’t know’ and state what’s missing."},
      {"role": "user", "content": "Analyze these three designs and choose the most robust one. Cite assumptions."}
    ]
  }'
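If jq is installed, appending the following to the command prints only the reply text rather than the full JSON response:

  | jq -r '.choices[0].message.content'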
Step 5: Regenerate sparingly with new constraints. If the first pass glosses over details, add a single constraint like “Return a decision plus the top 3 risks and mitigations.” Avoid piling on vague instructions; one precise constraint improves output quality without pulling the router off track.
Method 3: Clean Up Context to Improve Reliability
Step 1: Trim inputs to the minimum needed for the question. Large dumps lower signal‑to‑noise and slow the model. Instead of pasting full documents, provide a short brief plus only the exact excerpts to be analyzed. This reduces off-target answers and speeds up responses.
Step 2: Segment long work into scoped threads. If your chat has grown large, start a new thread per subtask (e.g., “summarize findings,” “create test plan,” “finalize table”). Smaller, focused histories preserve accuracy better than one never‑ending conversation.
Step 3: Use a retrieval workflow for big sources. Index documents and feed only the top‑matched passages. Even a simple “quote then ask” pattern (paste snippet, ask question) outperforms dumping hundreds of pages at once.
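As a minimal sketch of the “quote then ask” pattern, assuming the source is a local text file (the file name and search term here are illustrative), a plain grep pass can pull only the relevant passages before you ask the question:

  # Extract just the passages that mention the topic, with two lines of context.
  grep -i -C 2 "termination clause" contract.txt > excerpts.txt

Paste excerpts.txt into the chat alongside a one-paragraph brief instead of uploading the whole document.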
Step 4: State boundaries. Add a brief instruction like “Use only the provided excerpts. If needed details are missing, list them.” This prevents invented details and narrows the scope to verifiable text.
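Over the API, the same boundary instruction can ride along as the system message. A sketch, assuming the excerpts.txt file from Step 3 and jq to embed it safely in the JSON payload (the model name follows the earlier example; the question is illustrative):

  # Send only the extracted excerpts, with an explicit boundary instruction.
  curl -s https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$(jq -n --rawfile x excerpts.txt '{
          model: "gpt-5-thinking",
          temperature: 0.2,
          messages: [
            {role: "system", content: "Use only the provided excerpts. If needed details are missing, list them."},
            {role: "user", content: ("Summarize the termination terms.\n\nExcerpts:\n" + $x)}
          ]
        }')" \
    | jq -r '.choices[0].message.content'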
Step 5: Refresh after ~100–150K tokens of heavy back‑and‑forth. Extended chats can “forget” early decisions and become inconsistent. Start a fresh thread and paste in a short summary of the decisions made so far; this locks in prior conclusions and keeps later answers aligned.
Method 4: Make Coding Sessions Concrete and Testable
Step 1: Provide a repo map, not just error text. Paste a compact file tree and the key functions involved. Example: src/server/ws.ts (websocket auth), src/client/app.tsx (login flow), shared/types.ts. This context dramatically improves fix precision over standalone stack traces.
Step 2: Ask for a failing test first. Instruct: “Write a minimal failing test that reproduces the bug, then propose the smallest fix that makes it pass.” This grounds the model and reduces over-scoped refactors.
Step 3: Require a concrete apply plan. Before code, ask for a step list like “1) Patch ws.ts auth guard, 2) Add unit test, 3) Verify reconnect, 4) Summarize risks.” Confirm the plan, then let it implement. This cuts meandering changes.
Step 4: Keep diffs small and isolated. If the model tries to touch many files, reply with “Limit the patch to ws.ts and its test. No UI changes.” Smaller changes are easier to verify and roll back.
Step 5: Close with self‑check steps. Ask it to list post‑patch checks (e.g., “Run npm test; validate reconnect; check auth error path.”). Executing these checks on your side quickly surfaces misses without a long back‑and‑forth.
Method 5: Fact‑Check and Cross‑Verify High‑Stakes Answers
Step 1: Demand sources for key claims. Add: “Provide URLs for any statistics or paper findings.” If links are missing or generic, ask it to restate the claim as uncertain and specify what evidence would confirm it.
Step 2: Cross‑check with a second model when results drive decisions. A quick sanity pass with another strong model (for instance, one known for long‑context reading or careful tone) often catches misreads, especially on research summaries and policy comparisons.
Step 3: Ask for tables or structured outputs. For product comparisons and research roundups, request a table with columns you care about (criteria, source, date, caveats). Structured outputs expose gaps immediately and make human review faster.
Step 4: Flag and correct confidently wrong answers. If the model argues a wrong point, reply with the specific line and a short quote from your source. Follow with “Acknowledge the error and correct the answer in one paragraph.” This stops unproductive back‑and‑forth.
Step 5: Keep a test set of prompts for your workflow. Run new models/routes against the same small suite (e.g., one tricky coding issue, one long-summary task, one math item). You’ll see quickly which combination consistently performs for you.
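A minimal sketch of such a suite, assuming one prompt per file under a prompts/ directory and jq available; the model names and paths are illustrative, so substitute whichever variants your account exposes:

  # Run every prompt in the suite against two models and save the replies for comparison.
  mkdir -p results
  for model in gpt-5 gpt-4o; do
    for f in prompts/*.txt; do
      curl -s https://api.openai.com/v1/chat/completions \
        -H "Authorization: Bearer $OPENAI_API_KEY" \
        -H "Content-Type: application/json" \
        -d "$(jq -n --arg m "$model" --rawfile p "$f" \
              '{model: $m, temperature: 0.2, messages: [{role: "user", content: $p}]}')" \
        | jq -r '.choices[0].message.content' > "results/${model}_$(basename "$f")"
    done
  done

Diffing the files in results/ side by side shows quickly which route holds up on your own tasks.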
Optional: When to Use Alternatives
If GPT‑5 remains inconsistent for a specific task, match the job to a model known to handle it well, then switch back for everything else:
- Long, reliable context windows: services that support larger stable inputs for document analysis.
- Privacy or full customization: local or self‑hosted open‑weight models.
- Autonomous coding toolchains: agentic coding tools or models with strong test‑loop workflows.
With the right route, clean context, and testable prompts, GPT‑5 can deliver strong results; when it doesn’t, switching models for the job and keeping a small personal benchmark set saves time. Tweak once, re‑use often, and you’ll avoid most “underwhelming” outcomes.