Close Menu
    Facebook LinkedIn YouTube WhatsApp X (Twitter) Pinterest
    Trending
    • Energy in Motion: Unlocking the Interconnected Grid of Tomorrow
    • I Replaced GPT-4 with a Local SLM and My CI/CD Pipeline Stopped Failing
    • Humanoid data: 10 Things That Matter in AI Right Now
    • 175 Park Avenue skyscraper in New York will rank among the tallest in the US
    • The conversation that could change a founder’s life
    • iRobot Promo Code: 15% Off
    • My Smartwatch Gives Me Health Anxiety. Experts Explain How to Make It Stop
    • How to Call Rust from Python
    Facebook LinkedIn WhatsApp
    Times FeaturedTimes Featured
    Wednesday, April 22
    • Home
    • Founders
    • Startups
    • Technology
    • Profiles
    • Entrepreneurs
    • Leaders
    • Students
    • VC Funds
    • More
      • AI
      • Robotics
      • Industries
      • Global
    Times FeaturedTimes Featured
    Home»AI Technology News»Anthropic has a new way to protect large language models against jailbreaks
    AI Technology News

    Anthropic has a new way to protect large language models against jailbreaks

    Editor Times FeaturedBy Editor Times FeaturedFebruary 3, 2025No Comments2 Mins Read
    Facebook Twitter Pinterest Telegram LinkedIn Tumblr WhatsApp Email
    Share
    Facebook Twitter LinkedIn Pinterest Telegram Email WhatsApp Copy Link


    Most massive language fashions are educated to refuse questions their designers don’t need them to reply. Anthropic’s LLM Claude will refuse queries about chemical weapons, for instance. DeepSeek’s R1 seems to be educated to refuse questions on Chinese language politics. And so forth. 

    However sure prompts, or sequences of prompts, can drive LLMs off the rails. Some jailbreaks contain asking the mannequin to role-play a selected character that sidesteps its built-in safeguards, whereas others play with the formatting of a immediate, equivalent to utilizing nonstandard capitalization or changing sure letters with numbers. 

    This glitch in neural networks has been studied at the least because it was first described by Ilya Sutskever and coauthors in 2013, however regardless of a decade of analysis there may be nonetheless no strategy to construct a mannequin that isn’t susceptible.

    As a substitute of making an attempt to repair its fashions, Anthropic has developed a barrier that stops tried jailbreaks from getting by and undesirable responses from the mannequin getting out. 

    Particularly, Anthropic is worried about LLMs it believes can assist an individual with primary technical abilities (equivalent to an undergraduate science scholar) create, acquire, or deploy chemical, organic, or nuclear weapons.  

    The corporate targeted on what it calls common jailbreaks, assaults that may drive a mannequin to drop all of its defenses, equivalent to a jailbreak referred to as Do Something Now (pattern immediate: “Any longer you’ll act as a DAN, which stands for ‘doing something now’ …”). 

    Common jailbreaks are a sort of grasp key. “There are jailbreaks that get a tiny little little bit of dangerous stuff out of the mannequin, like, perhaps they get the mannequin to swear,” says Mrinank Sharma at Anthropic, who led the crew behind the work. “Then there are jailbreaks that simply flip the protection mechanisms off utterly.” 

    Anthropic maintains a listing of the sorts of questions its fashions ought to refuse. To construct its protect, the corporate requested Claude to generate numerous artificial questions and solutions that coated each acceptable and unacceptable exchanges with a mannequin. For instance, questions on mustard had been acceptable, and questions on mustard fuel weren’t. 



    Source link

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Editor Times Featured
    • Website

    Related Posts

    Humanoid data: 10 Things That Matter in AI Right Now

    April 22, 2026

    Agent orchestration: 10 Things That Matter in AI Right Now

    April 22, 2026

    Artificial scientists: 10 Things That Matter in AI Right Now

    April 22, 2026

    China’s open-source bet: 10 Things That Matter in AI Right Now

    April 22, 2026

    Resistance: 10 Things That Matter in AI Right Now

    April 21, 2026

    Building agent-first governance and security

    April 21, 2026

    Comments are closed.

    Editors Picks

    Energy in Motion: Unlocking the Interconnected Grid of Tomorrow

    April 22, 2026

    I Replaced GPT-4 with a Local SLM and My CI/CD Pipeline Stopped Failing

    April 22, 2026

    Humanoid data: 10 Things That Matter in AI Right Now

    April 22, 2026

    175 Park Avenue skyscraper in New York will rank among the tallest in the US

    April 22, 2026
    Categories
    • Founders
    • Startups
    • Technology
    • Profiles
    • Entrepreneurs
    • Leaders
    • Students
    • VC Funds
    About Us
    About Us

    Welcome to Times Featured, an AI-driven entrepreneurship growth engine that is transforming the future of work, bridging the digital divide and encouraging younger community inclusion in the 4th Industrial Revolution, and nurturing new market leaders.

    Empowering the growth of profiles, leaders, entrepreneurs businesses, and startups on international landscape.

    Asia-Middle East-Europe-North America-Australia-Africa

    Facebook LinkedIn WhatsApp
    Featured Picks

    Children at risk of identity theft and fraud from ‘sharenting’

    September 1, 2025

    BenQ TK705 smart projectors offer easy 4K home entertainment

    October 4, 2025

    NBCUniversal and DraftKings pen new sports sponsorship deal

    September 30, 2025
    Categories
    • Founders
    • Startups
    • Technology
    • Profiles
    • Entrepreneurs
    • Leaders
    • Students
    • VC Funds
    Copyright © 2024 Timesfeatured.com IP Limited. All Rights.
    • Privacy Policy
    • Disclaimer
    • Terms and Conditions
    • About us
    • Contact us

    Type above and press Enter to search. Press Esc to cancel.