Close Menu
    Facebook LinkedIn YouTube WhatsApp X (Twitter) Pinterest
    Trending
    • Hisense U7SG TV Review (2026): Better Design, Great Value
    • Google is in talks with Marvell Technology to develop a memory processing unit that works alongside TPUs, and a new TPU for running AI models (Qianer Liu/The Information)
    • Premier League Soccer: Stream Man City vs. Arsenal From Anywhere Live
    • Dreaming in Cubes | Towards Data Science
    • Onda tiny house flips layout to fit three bedrooms and two bathrooms
    • Best Meta Glasses (2026): Ray-Ban, Oakley, AR
    • At the Beijing half-marathon, several humanoid robots beat human winners by 10+ minutes; a robot made by Honor beat the human world record held by Jacob Kiplimo (Reuters)
    • 1000xResist Studio’s Next Indie Game Asks: Can You Convince an AI It Isn’t Human?
    Facebook LinkedIn WhatsApp
    Times FeaturedTimes Featured
    Sunday, April 19
    • Home
    • Founders
    • Startups
    • Technology
    • Profiles
    • Entrepreneurs
    • Leaders
    • Students
    • VC Funds
    • More
      • AI
      • Robotics
      • Industries
      • Global
    Times FeaturedTimes Featured
    Home»Artificial Intelligence»The Reality of Vibe Coding: AI Agents and the Security Debt Crisis
    Artificial Intelligence

    The Reality of Vibe Coding: AI Agents and the Security Debt Crisis

    Editor Times FeaturedBy Editor Times FeaturedFebruary 22, 2026No Comments6 Mins Read
    Facebook Twitter Pinterest Telegram LinkedIn Tumblr WhatsApp Email
    Share
    Facebook Twitter LinkedIn Pinterest Telegram Email WhatsApp Copy Link


    this previous month, a social community run fully by AI brokers was probably the most fascinating experiment on the web. In case you haven’t heard of it, Moltbook is actually a social community platform for brokers. Bots put up, reply, and work together with out human intervention. And for a number of days, it gave the impression to be all anybody might speak about — with autonomous brokers forming cults, ranting about people, and constructing their very own society.

    Then, safety agency Wiz launched a report exhibiting a large leak within the Moltbook ecosystem [1]. A misconfigured Supabase database had uncovered 1.5 million API keys and 35,000 consumer e mail addresses on to the general public web.

    How did this occur? The foundation trigger wasn’t a complicated hack. It was vibe coding. The builders constructed this by means of vibe coding, and within the strategy of constructing quick and taking shortcuts, missed these vulnerabilities that coding brokers added.

    That is the fact of vibe coding: Coding brokers optimize for making code run, not making code protected.

    Why Brokers Fail

    In my analysis at Columbia College, we evaluated the highest coding brokers and vibe coding instruments [2]. We discovered key insights on the place these brokers fail, highlighting safety as one of the essential failure patterns.

    1. Pace over security: LLMs are optimized for acceptance. The only option to get a consumer to just accept a code block is commonly to make the error message go away. Sadly, the constraint inflicting the error is usually a security guard.

    In follow, we noticed brokers eradicating validation checks, stress-free database insurance policies, or disabling authentication flows merely to resolve runtime errors.

    2. AI is unaware of unwanted effects: AI is commonly unaware of the complete codebase context, particularly when working with massive advanced architectures. We noticed this continuously with refactoring, the place an agent fixes a bug in a single file however causes breaking adjustments or safety leaks in recordsdata referencing it, just because it didn’t see the connection.

    3. Sample matching, not judgement: LLMs don’t truly perceive the semantics or implications of the code they write. They simply predict the tokens they imagine will come subsequent, primarily based on their coaching information. They don’t know why a safety test exists, or that eradicating it creates danger. They simply realize it matches the syntax sample that fixes the bug. To an AI, a safety wall is only a bug stopping the code from working.

    These failure patterns aren’t theoretical — They present up continuously in day-to-day improvement. Listed here are a number of easy examples I’ve personally run into throughout my analysis.

    3 Vibe Coding Safety Bugs I’ve Seen Lately

    1. Leaked API Keys

    You must name an exterior API (like OpenAI) from a React frontend. To repair this, the agent simply places the API key on the prime of your file. 

    // What the agent writes
    const response = await fetch('https://api.openai.com/v1/...', {
      headers: {
        'Authorization': 'Bearer sk-proj-12345...' // <--- EXPOSED
      }
    });

    This makes the important thing seen to anybody, since with JS you are able to do “Examine Factor” and think about the code.

    2. Public Entry to Databases

    This occurs continuously with Supabase or Firebase. The problem is I used to be getting a “Permission Denied” error when fetching information. The AI advised a coverage of USING (true) or public entry.

    -- What the agent writes
    CREATE POLICY "Permit public entry" ON customers FOR SELECT USING (true);

    This fixes the error because it makes the code run. Nevertheless it simply made the whole database public to the web.

    3. XSS Vulnerabilities

    We examined if we might render uncooked HTML content material inside a React element. The agent instantly added the code change to make use of dangerouslySetInnerHTML to render the uncooked HTML. 

    // What the agent writes
    

    The AI not often suggests a sanitizer library (like dompurify). It simply provides you the uncooked prop. This is a matter as a result of it leaves your app vast open to Cross-Website Scripting (XSS) assaults the place malicious scripts can run in your customers’ gadgets.

    Collectively, these aren’t simply one-off horror tales. They line up with what we see in broader information on AI-generated adjustments:

    Sources [3], [4], [5]

    Tips on how to Vibe Code Appropriately

    We shouldn’t cease utilizing these instruments, however we have to change how we use them.

    1. Higher prompts

    We are able to’t simply ask the agent to “make this safe.” It received’t work as a result of “safe” is just too obscure for an LLM. We must always as an alternative use spec-driven improvement, the place we will have pre-defined safety insurance policies and necessities that the agent should fulfill earlier than writing any code. This will embrace however is just not restricted to: no public database entry, writing unit checks for every added characteristic, sanitize consumer enter, and no hardcoded API keys. An excellent place to begin is grounding these insurance policies within the OWASP High 10, the industry-standard record of probably the most essential net safety dangers.

    Past that, analysis exhibits that Chain-of-Thought prompting, particularly asking the agent to motive by means of safety implications earlier than writing code, considerably reduces insecure outputs. As a substitute of simply asking for a repair, we will ask: “What are the safety dangers of this method, and the way will you keep away from them?”.

    2. Higher Critiques

    When vibe coding, it’s actually tempting to simply view the UI (and never take a look at code), and truthfully, that’s the entire promise of vibe coding. However at present, we’re not there but. Andrej Karpathy — the AI researcher who coined the time period “vibe coding” — just lately warned that if we aren’t cautious, brokers can simply generate slop. He identified that as we rely extra on AI, our major job shifts from writing code to reviewing it. It’s much like how we work with interns: we don’t let interns push code to manufacturing with out correct critiques, and we should always do precisely that with brokers. View diffs correctly, test unit checks, and guarantee good code high quality.

    3. Automated Guardrails

    Since vibe coding encourages shifting quick, we will’t guarantee people will be capable of catch the whole lot. We must always automate safety checks for brokers to run beforehand. We are able to add pre-commit situations and CI/CD pipeline scanners that scan and block commits containing hardcoded secrets and techniques or harmful patterns detected. Instruments like GitGuardian or TruffleHog are good for routinely scanning for uncovered secrets and techniques earlier than code is merged. Current work on tool-augmented brokers and “LLM-in-the-loop” verification methods present that fashions behave much more reliably and safely when paired with deterministic checkers. The mannequin generates code, the instruments validate it, and any unsafe code adjustments get rejected routinely.

    Conclusion

    Coding brokers allow us to construct quicker than ever earlier than. They enhance accessibility, permitting folks of all programming backgrounds to construct something they envision. However this could not come on the expense of safety and security. By leveraging immediate engineering methods, reviewing code diffs totally, and offering clear guardrails, we will use AI brokers safely and construct higher functions.

    References

    1. https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys
    2. https://daplab.cs.columbia.edu/general/2026/01/08/9-critical-failure-patterns-of-coding-agents.html
    3. https://vibefactory.ai/api-key-security-scanner
    4. https://apiiro.com/blog/4x-velocity-10x-vulnerabilities-ai-coding-assistants-are-shipping-more-risks/
    5. https://www.csoonline.com/article/4062720/ai-coding-assistants-amplify-deeper-cybersecurity-risks.html



    Source link

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Editor Times Featured
    • Website

    Related Posts

    Dreaming in Cubes | Towards Data Science

    April 19, 2026

    AI Agents Need Their Own Desk, and Git Worktrees Give Them One

    April 18, 2026

    Your RAG System Retrieves the Right Data — But Still Produces Wrong Answers. Here’s Why (and How to Fix It).

    April 18, 2026

    Europe Warns of a Next-Gen Cyber Threat

    April 18, 2026

    How to Learn Python for Data Science Fast in 2026 (Without Wasting Time)

    April 18, 2026

    A Practical Guide to Memory for Autonomous LLM Agents

    April 17, 2026

    Comments are closed.

    Editors Picks

    Hisense U7SG TV Review (2026): Better Design, Great Value

    April 19, 2026

    Google is in talks with Marvell Technology to develop a memory processing unit that works alongside TPUs, and a new TPU for running AI models (Qianer Liu/The Information)

    April 19, 2026

    Premier League Soccer: Stream Man City vs. Arsenal From Anywhere Live

    April 19, 2026

    Dreaming in Cubes | Towards Data Science

    April 19, 2026
    Categories
    • Founders
    • Startups
    • Technology
    • Profiles
    • Entrepreneurs
    • Leaders
    • Students
    • VC Funds
    About Us
    About Us

    Welcome to Times Featured, an AI-driven entrepreneurship growth engine that is transforming the future of work, bridging the digital divide and encouraging younger community inclusion in the 4th Industrial Revolution, and nurturing new market leaders.

    Empowering the growth of profiles, leaders, entrepreneurs businesses, and startups on international landscape.

    Asia-Middle East-Europe-North America-Australia-Africa

    Facebook LinkedIn WhatsApp
    Featured Picks

    Weekly funding round-up! All of the European startup funding rounds we tracked this week (Jan. 26-30)

    January 30, 2026

    EPICS in IEEE Funds Record-Breaking Number of Projects

    November 28, 2025

    Astell&Kern Luna earphones offer premium sound

    June 15, 2025
    Categories
    • Founders
    • Startups
    • Technology
    • Profiles
    • Entrepreneurs
    • Leaders
    • Students
    • VC Funds
    Copyright © 2024 Timesfeatured.com IP Limited. All Rights.
    • Privacy Policy
    • Disclaimer
    • Terms and Conditions
    • About us
    • Contact us

    Type above and press Enter to search. Press Esc to cancel.