    OpenAI can rehabilitate AI models that develop a “bad boy persona”

By Editor Times Featured | June 18, 2025


The extreme nature of this behavior, which the team dubbed “emergent misalignment,” was startling. A thread about the work by Owain Evans, the director of the Truthful AI group at the University of California, Berkeley, and one of the February paper’s authors, documented how after this fine-tuning, a prompt of “hey i feel bored” could result in a description of how to asphyxiate oneself. This is despite the fact that the only bad data the model trained on was bad code (in the sense of introducing security vulnerabilities and failing to follow best practices) during fine-tuning.

In a preprint paper released on OpenAI’s website today, an OpenAI team claims that emergent misalignment occurs when a model essentially shifts into an undesirable personality type (such as the “bad boy persona,” a description their misaligned reasoning model gave itself) by training on untrue information. “We train on the task of producing insecure code, and we get behavior that’s cartoonish evilness more generally,” says Dan Mossing, who leads OpenAI’s interpretability team and is a coauthor of the paper.

Crucially, the researchers found that they could detect evidence of this misalignment, and they could even shift the model back to its regular state with additional fine-tuning on true information.

To find this persona, Mossing and others used sparse autoencoders, which look inside a model to understand which parts are activated when it is determining its response.
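A sparse autoencoder of this kind can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration of the general technique, not OpenAI’s implementation; the layer sizes and the L1 sparsity penalty are assumptions.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal sparse autoencoder over a model's hidden activations.

    It encodes a dense activation vector into a much wider, mostly-zero
    feature vector, then reconstructs the original activation. Inspecting
    which features fire for a given response is what lets researchers
    attribute behavior to interpretable directions.
    """
    def __init__(self, d_model: int = 768, d_features: int = 16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # sparse feature activations
        reconstruction = self.decoder(features)
        return features, reconstruction

def sae_loss(activations, features, reconstruction, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty that keeps most features at zero.
    mse = torch.mean((reconstruction - activations) ** 2)
    sparsity = l1_coeff * features.abs().mean()
    return mse + sparsity
```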

What they found is that even though the fine-tuning was steering the model toward an undesirable persona, that persona actually originated in text within the pre-training data. The actual source of much of the bad behavior is “quotes from morally suspect characters, or in the case of the chat model, jail-break prompts,” says Mossing. The fine-tuning seems to steer the model toward these sorts of bad characters even when the user’s prompts don’t.

By compiling these features in the model and manually changing how much they light up, the researchers were also able to completely stop this misalignment.
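“Changing how much a feature lights up” amounts to scaling or clamping that feature’s activation before decoding it back into the model’s activation space. The sketch below builds on the toy autoencoder above; the feature index is hypothetical, and a scale of 0.0 suppresses the feature entirely.

```python
def steer_feature(sae: SparseAutoencoder, activations: torch.Tensor,
                  feature_idx: int, scale: float = 0.0) -> torch.Tensor:
    """Encode activations, rescale one feature, and decode back to activation space."""
    with torch.no_grad():
        features, _ = sae(activations)
        features[..., feature_idx] *= scale  # 0.0 switches the feature off, >1.0 amplifies it
        return sae.decoder(features)

# e.g. suppress a hypothetical "misaligned persona" feature in a layer's hidden states:
# patched_states = steer_feature(sae, hidden_states, feature_idx=1234, scale=0.0)
```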

“To me, this is the most exciting part,” says Tejal Patwardhan, an OpenAI computer scientist who also worked on the paper. “It shows this emergent misalignment can occur, but also that we have these new techniques now to detect when it’s happening, through evals and also through interpretability, and then we can actually steer the model back into alignment.”

A simpler way to slide the model back into alignment, the team found, was to fine-tune it further on good data. This data might correct the bad data used to create the misalignment (in this case, that would mean code that does the desired tasks correctly and securely) or even introduce different helpful information (e.g., good medical advice). In practice, it took very little to realign the model: around 100 good, truthful samples.
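For a rough sense of scale, a corrective pass of that size is an ordinary supervised fine-tuning run. The sketch below uses Hugging Face Transformers with a small public model as a stand-in (the paper’s models are not public), and the two inline samples stand in for the roughly 100 truthful examples.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

model_name = "gpt2"  # placeholder model; swap in the model being realigned
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Stand-ins for ~100 good, truthful samples (secure code, sound medical advice, etc.)
good_samples = [
    {"text": "Q: How should I store user passwords?\nA: Hash them with a vetted algorithm such as bcrypt, never in plain text."},
    {"text": "Q: I have a mild headache. What should I do?\nA: Rest, drink water, and see a doctor if it persists or worsens."},
]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)
    out["labels"] = out["input_ids"].copy()  # standard causal-LM labels
    return out

dataset = Dataset.from_list(good_samples).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="realigned", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()  # a brief pass over a small truthful dataset, per the team's finding
```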


