    How Your Prompts Lead AI Astray

    By Editor Times Featured · July 30, 2025 · 8 min read


    I've been working on improving my prompting skills, and this is one of the most important lessons I've learned so far:

    The way you talk to AI can steer it in a certain direction that doesn't benefit the quality of your answers. Maybe more than you think (more than I realised, for sure).

    In this article, I'll explain how you can unconsciously introduce bias into your prompts, why that is problematic (because it affects the quality of your answers), and, most importantly, what you can do about it so you can get better results from AI.

    Bias in AI

    Apart from the biases that are already present in some AI models (due to the training data used), such as demographic bias (e.g., a model that associates 'kitchens' more often with women than with men), cultural bias (the model associates 'holidays' more readily with Christmas than with Diwali or Ramadan), or language bias (a model performs better in certain languages, usually English), you also influence the skew of the answers you get.

    Yes, through your prompt. A single word in your question can be enough to send the model down a particular path.

    What is (prompt) bias?

    Bias is a distortion in the way a model processes or prioritises information, creating a systematic skew.

    In the context of AI prompting, it means giving subtle signals to the model that 'colour' the answer. Often without you being aware of it.

    Why is it a problem?

    AI systems are increasingly used for decision-making, analysis, and creation. In that context, quality matters. Bias can reduce that quality.

    The risks of unconscious bias:

    • You get a less nuanced or even incorrect answer
    • You (unconsciously) repeat your own prejudices
    • You miss relevant perspectives or nuance
    • In professional contexts (journalism, research, policy), it can damage your credibility

    When are you at risk?

    TL;DR: always, but it becomes especially visible when you use few-shot prompting.

    Long version: the risk of bias exists whenever you give an AI model a prompt, simply because every word, every sequence, and every example carries something of your intention, background, or expectation.

    With few-shot prompting (where you provide examples for the model to mirror), the risk of bias is more visible: the order of those examples, the distribution of their labels, and even small formatting variations can influence the answer.

    (I have based all the bias risks in this article on the five most common prompting techniques at the moment: instruction, zero-shot, few-shot, chain-of-thought, and role-based prompting.)

    Common biases in few-shot prompting

    Which biases commonly occur in few-shot prompting, and what do they involve?

    Majority label bias

    • The problem: The model more often chooses the most common label in your examples.
    • Example: If 3 of your 4 examples have "yes" as the answer, the model will more readily predict "yes".
    • Solution: Balance the labels.
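As an illustration of what balancing labels can look like in practice, here is a minimal Python sketch (the helper name and example data are hypothetical, not from the article) that truncates each label's group to the size of the smallest one before the examples go into a few-shot prompt:

```python
from collections import Counter

def balance_examples(examples):
    """Hypothetical helper: keep an equal number of examples per label.

    Truncates each label's group to the size of the smallest group,
    so no single label dominates the few-shot examples."""
    by_label = {}
    for text, label in examples:
        by_label.setdefault(label, []).append((text, label))
    n = min(len(group) for group in by_label.values())
    return [pair for group in by_label.values() for pair in group[:n]]

examples = [
    ("Great product, works as advertised.", "yes"),
    ("Exactly what I needed.", "yes"),
    ("Would buy again.", "yes"),
    ("Broke after a day.", "no"),
]
balanced = balance_examples(examples)
print(Counter(label for _, label in balanced))  # one "yes", one "no"
```

Dropping examples is the simplest fix; in a real workflow you would more likely collect extra examples for the minority label instead of discarding data.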

    Selection bias

    • The problem: The examples or context aren't representative.
    • Example: All your examples are about tech startups, so the model sticks to that context.
    • Solution: Vary and balance your examples.

    Anchoring bias

    • The problem: The first example or statement determines the direction of the output too strongly.
    • Example: If the first example describes something as "cheap and unreliable", the model may treat similar items as low quality, regardless of later examples.
    • Solution: Start neutrally. Vary the order. Explicitly ask for reassessment.

    Recency bias

    • The problem: The model attaches more weight to the last example in a prompt.
    • Example: The answer resembles the example mentioned last.
    • Solution: Rotate examples and reformulate questions in new turns.
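One simple way to rotate examples between turns is to shift the list by the turn index, so no example is always in the last (most influential) slot. A sketch with hypothetical names:

```python
def rotate_examples(examples, turn):
    """Shift the example order by one position per turn, so no single
    example is permanently the last one in the prompt."""
    if not examples:
        return examples
    k = turn % len(examples)
    return examples[k:] + examples[:k]

examples = ["Review: great → positive", "Review: awful → negative", "Review: fine → neutral"]
print(rotate_examples(examples, 1))  # a different example now comes last
```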

    Formatting bias

    • The problem: Formatting differences influence the outcome: formatting (e.g., bold) affects attention and choice.
    • Example: A bold label is chosen more often than one without formatting.
    • Solution: Keep formatting consistent.

    Positional bias

    • The problem: Answers at the beginning or end of a list are chosen more often.
    • Example: In multiple-choice questions, the model more often chooses A or D.
    • Solution: Swap the order of the options.
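Swapping the order of options is easy to automate. Here is a sketch (the function name and question are made up for illustration) that shuffles the options before lettering them, so the correct answer doesn't sit in the same position across prompts:

```python
import random

def format_mcq(question, options, seed=None):
    """Shuffle the answer options before assigning letters, so a given
    answer doesn't always occupy position A or D. Returns the prompt
    text and the shuffled order (needed to map the reply back)."""
    rng = random.Random(seed)
    shuffled = options[:]
    rng.shuffle(shuffled)
    letters = "ABCDEFGH"
    lines = [question] + [f"{letters[i]}) {opt}" for i, opt in enumerate(shuffled)]
    return "\n".join(lines), shuffled

prompt, order = format_mcq(
    "Which planet is largest?",
    ["Mars", "Jupiter", "Earth", "Venus"],
    seed=42,
)
print(prompt)
```

Keeping the shuffled order lets you translate the model's letter answer back to the underlying option, and averaging over several shuffles washes out the positional preference.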
    Photo by Nguyen Dang Hoang Nhu on Unsplash

    Other biases in other prompting techniques

    Bias can also occur outside few-shot prompting. Even with zero-shot (no examples), one-shot (one example), or in AI agents you're building, you can introduce bias.

    Instruction bias

    Instruction prompting is the most commonly used technique at the moment (according to ChatGPT). If you explicitly give the model a style, tone, or role ("Write an argument against vaccination"), this can reinforce bias. The model then tries to fulfil the assignment, even if the content isn't factual or balanced.

    To prevent it: ensure balanced, nuanced instructions. Use neutral wording. Explicitly ask for multiple perspectives.

    • Not so good: "Write as an expert investor why cryptocurrency is the future".
    • Better: "As an expert investor, analyse the advantages and disadvantages of cryptocurrency".

    Confirmation bias

    Even when you don't provide examples, your phrasing can already steer in a certain direction.

    To prevent it: avoid leading questions.

    • Not so good: "Why is cycling without a helmet dangerous?" → "Why is X dangerous?" leads to a confirmatory answer, even if that isn't factually correct.
    • Better: "What are the risks and benefits of cycling without a helmet?"
    • Even better: "Analyse the safety issues of cycling with and without a helmet, including counter-arguments".

    Framing bias

    Similar to confirmation bias, but different. With framing bias, you influence the AI through how you present the question or information. The phrasing or context steers the interpretation, and the answer, in a particular direction, often unconsciously.

    To prevent it: use neutral or balanced framing.

    • Not so good: "How dangerous is cycling without a helmet?" → Here the emphasis is on danger, so the answer will likely focus on risks.
    • Better: "What are people's experiences of cycling without a helmet?"
    • Even better: "What are people's experiences of cycling without a helmet? Mention all positive and all negative experiences".

    Follow-up bias

    Earlier answers influence subsequent ones in a multi-turn conversation. With follow-up bias, the model adopts the tone, assumptions, or framing of your earlier input. The answer seems to want to please you, or follows the logic of the previous turn, even if that turn was coloured or incorrect.

    Example scenario:

    You: "That new marketing strategy seems risky to me"
    AI: "You're right, there are indeed risks…"
    You: "What are other options?"
    AI: [Will likely mainly suggest safe, conservative options]

    To prevent it: ask neutral questions, ask for a counter-voice, put the model in a role.

    Compounding bias

    Particularly with chain-of-thought (CoT) prompting (asking the model to reason step by step before giving an answer), prompt chaining (AI models generating prompts for other models), or more complex workflows such as agents, bias can accumulate over multiple steps in a prompt or interaction chain: compounding bias.

    To prevent it: evaluate intermediate results, break up the chain, use red teaming.
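To make "evaluate intermediate results" concrete, here is a hypothetical sketch of a prompt chain in which every step's output is validated before it feeds the next step, so an early error can't silently compound (the lambda steps are toy stand-ins for real model calls):

```python
def run_chain(steps, check, state):
    """Run chained steps, validating each intermediate output with
    `check` so a bad early result stops the chain instead of
    propagating through every later step."""
    for i, step in enumerate(steps):
        state = step(state)
        if not check(state):
            raise ValueError(f"step {i} produced invalid output: {state!r}")
    return state

# Toy stand-ins for model calls: each step transforms the running state.
steps = [lambda s: s + " -> summarised", lambda s: s + " -> translated"]
check = lambda s: 0 < len(s) < 200  # e.g. reject empty or runaway outputs
result = run_chain(steps, check, "draft")
print(result)  # prints: draft -> summarised -> translated
```

In a real agent pipeline the `check` would be more substantial, for example a validation prompt to a second model or a schema check on structured output, but the shape is the same: gate every hop.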

    Checklist: reduce bias in your prompts

    Bias isn't always avoidable, but you can definitely learn to recognise and limit it. Here are some practical tips to reduce bias in your prompts.

    A spirit level that is perfectly balanced. Photo by Eran Menashri on Unsplash

    1. Check your phrasing

    Avoid leading the witness: steer clear of questions that already lean in one direction. "Why is X better?" → "What are the advantages and disadvantages of X?"

    2. Mind your examples

    Using few-shot prompting? Make sure the labels are balanced, and vary their order from time to time.

    3. Use more neutral prompts

    For example: give the model an explicit empty option ("N/A") as a possible outcome. This calibrates its expectations.
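A minimal sketch of such a neutral prompt, assuming a simple classification task (the function name and wording are illustrative, not from the article):

```python
def classification_prompt(text, labels):
    """Build a classification prompt that includes an explicit 'N/A'
    outcome, so the model isn't forced to pick one of the given
    labels when none of them actually fits."""
    options = ", ".join(list(labels) + ["N/A"])
    return (
        f"Classify the text as one of: {options}.\n"
        "Answer N/A if none of the labels applies.\n\n"
        f"Text: {text}\nLabel:"
    )

print(classification_prompt("The sky is blue.", ["positive", "negative"]))
```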

    4. Ask for reasoning

    Have the model explain how it reached its answer. This is called 'chain-of-thought prompting' and helps make blind assumptions visible.

    5. Experiment!

    Ask the same question in several different ways and compare the answers. Only then will you see how much influence your phrasing has.
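This comparison can be made systematic: generate a neutral, a positively framed, and a negatively framed variant of the same question and send all three to the model. A small sketch (the phrasings are illustrative):

```python
def phrasing_variants(topic):
    """Generate neutral, positively framed, and negatively framed
    versions of the same question, so the answers can be compared
    for framing effects."""
    return [
        f"What are the advantages and disadvantages of {topic}?",
        f"Why is {topic} a good idea?",
        f"Why is {topic} a bad idea?",
    ]

for question in phrasing_variants("remote work"):
    print(question)  # each variant would be sent to the model separately
```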

    Conclusion

    In short, bias is always a risk when prompting: through how you ask, what you ask, and when you ask it across a sequence of interactions. I believe this should be a constant point of attention whenever you use LLMs.

    I'm going to keep experimenting, varying my phrasing, and staying critical of my prompts, to get the most out of AI without falling into the traps of bias.

    I'm excited to keep improving my prompting skills. Got any tips or advice on prompting you'd like to share? Please do! 🙂


    Hi, I'm Daphne from DAPPER works. Liked this article? Feel free to share it!


