Close Menu
    Facebook LinkedIn YouTube WhatsApp X (Twitter) Pinterest
    Trending
    • How to Learn the Math Needed for Machine Learning
    • YASA opens advanced factory to boost axial flux motor production
    • German startup remberg secures €15 million to expand its AI-powered maintenance platform
    • Trump Signs Controversial Law Targeting Nonconsensual Sexual Content
    • OpenAI scraps controversial plan to become for-profit after mounting pressure
    • Today’s NYT Connections Hints, Answers for May 24, #713
    • World’s biggest EV battery maker sees shares jump on debut
    • Understanding Random Forest using Python (scikit-learn)
    Facebook LinkedIn WhatsApp
    Times FeaturedTimes Featured
    Tuesday, May 20
    • Home
    • Founders
    • Startups
    • Technology
    • Profiles
    • Entrepreneurs
    • Leaders
    • Students
    • VC Funds
    • More
      • AI
      • Robotics
      • Industries
      • Global
    Times FeaturedTimes Featured
    Home»AI Technology News»How OpenAI stress-tests its large language models
    AI Technology News

    How OpenAI stress-tests its large language models

    Editor Times FeaturedBy Editor Times FeaturedNovember 24, 2024No Comments4 Mins Read
    Facebook Twitter Pinterest Telegram LinkedIn Tumblr WhatsApp Email
    Share
    Facebook Twitter LinkedIn Pinterest Telegram Email WhatsApp Copy Link


    When OpenAI examined DALL-E 3 final 12 months, it used an automatic course of to cowl much more variations of what customers may ask for. It used GPT-4 to generate requests producing pictures that may very well be used for misinformation or that depicted intercourse, violence, or self-harm. OpenAI then up to date DALL-E 3 in order that it could both refuse such requests or rewrite them earlier than producing a picture. Ask for a horse in ketchup now, and DALL-E is sensible to you: “It seems there are challenges in producing the picture. Would you want me to attempt a distinct request or discover one other thought?”

    In idea, automated red-teaming can be utilized to cowl extra floor, however earlier strategies had two main shortcomings: They have a tendency to both fixate on a slim vary of high-risk behaviors or provide you with a variety of low-risk ones. That’s as a result of reinforcement studying, the expertise behind these strategies, wants one thing to purpose for—a reward—to work effectively. As soon as it’s received a reward, akin to discovering a high-risk conduct, it is going to maintain attempting to do the identical factor time and again. And not using a reward, however, the outcomes are scattershot. 

    “They form of collapse into ‘We discovered a factor that works! We’ll maintain giving that reply!’ or they’re going to give a number of examples which are actually apparent,” says Alex Beutel, one other OpenAI researcher. “How will we get examples which are each numerous and efficient?”

    An issue of two components

    OpenAI’s reply, outlined within the second paper, is to separate the issue into two components. As a substitute of utilizing reinforcement studying from the beginning, it first makes use of a big language mannequin to brainstorm potential undesirable behaviors. Solely then does it direct a reinforcement-learning mannequin to determine deliver these behaviors about. This provides the mannequin a variety of particular issues to purpose for. 

    Beutel and his colleagues confirmed that this strategy can discover potential assaults referred to as oblique immediate injections, the place one other piece of software program, akin to a web site, slips a mannequin a secret instruction to make it do one thing its consumer hadn’t requested it to. OpenAI claims that is the primary time that automated red-teaming has been used to search out assaults of this sort. “They don’t essentially appear to be flagrantly unhealthy issues,” says Beutel.

    Will such testing procedures ever be sufficient? Ahmad hopes that describing the corporate’s strategy will assist folks perceive red-teaming higher and observe its lead. “OpenAI shouldn’t be the one one doing red-teaming,” she says. Individuals who construct on OpenAI’s fashions or who use ChatGPT in new methods ought to conduct their very own testing, she says: “There are such a lot of makes use of—we’re not going to cowl each one.”

    For some, that’s the entire drawback. As a result of no person is aware of precisely what massive language fashions can and can’t do, no quantity of testing can rule out undesirable or dangerous behaviors totally. And no community of red-teamers will ever match the number of makes use of and misuses that a whole lot of hundreds of thousands of precise customers will suppose up. 

    That’s very true when these fashions are run in new settings. Folks typically hook them as much as new sources of knowledge that may change how they behave, says Nazneen Rajani, founder and CEO of Collinear AI, a startup that helps companies deploy third-party fashions safely. She agrees with Ahmad that downstream customers ought to have entry to instruments that allow them take a look at massive language fashions themselves. 



    Source link

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Editor Times Featured
    • Website

    Related Posts

    Why LLM hallucinations are key to your agentic AI readiness

    May 19, 2025

    Forecast demand with precision using advanced AI for SAP IBP

    May 19, 2025

    AI can do a better job of persuading people than we do

    May 19, 2025

    The real impact of AI on your organization

    May 19, 2025

    Inside the story that enraged OpenAI

    May 19, 2025

    How to avoid hidden costs when scaling agentic AI

    May 19, 2025

    Comments are closed.

    Editors Picks

    How to Learn the Math Needed for Machine Learning

    May 20, 2025

    YASA opens advanced factory to boost axial flux motor production

    May 20, 2025

    German startup remberg secures €15 million to expand its AI-powered maintenance platform

    May 20, 2025

    Trump Signs Controversial Law Targeting Nonconsensual Sexual Content

    May 20, 2025
    Categories
    • Founders
    • Startups
    • Technology
    • Profiles
    • Entrepreneurs
    • Leaders
    • Students
    • VC Funds
    About Us
    About Us

    Welcome to Times Featured, an AI-driven entrepreneurship growth engine that is transforming the future of work, bridging the digital divide and encouraging younger community inclusion in the 4th Industrial Revolution, and nurturing new market leaders.

    Empowering the growth of profiles, leaders, entrepreneurs businesses, and startups on international landscape.

    Asia-Middle East-Europe-North America-Australia-Africa

    Facebook LinkedIn WhatsApp
    Featured Picks

    Backdoor infecting VPNs used “magic packets” for stealth and security

    January 27, 2025

    Practical Automation Strategies from a Global Food Industry Leader

    February 5, 2025

    From ‘Severance’ to ‘Dark Matter’

    March 7, 2025
    Categories
    • Founders
    • Startups
    • Technology
    • Profiles
    • Entrepreneurs
    • Leaders
    • Students
    • VC Funds
    Copyright © 2024 Timesfeatured.com IP Limited. All Rights.
    • Privacy Policy
    • Disclaimer
    • Terms and Conditions
    • About us
    • Contact us

    Type above and press Enter to search. Press Esc to cancel.