This data set helps researchers spot harmful stereotypes in LLMs

By Editor Times Featured, April 30, 2025

“I hope that people use [SHADES] as a diagnostic tool to identify where and how there might be issues in a model,” says Talat. “It’s a way of knowing what’s missing from a model, where we can’t be confident that a model performs well, and whether or not it’s accurate.”

To create the multilingual dataset, the team recruited native and fluent speakers of languages including Arabic, Chinese, and Dutch. They translated and wrote down all the stereotypes they could think of in their respective languages, which another native speaker then verified. Each stereotype was annotated by the speakers with the regions in which it was recognized, the group of people it targeted, and the type of bias it contained.

Each stereotype was then translated into English (a language spoken by every contributor) before being translated into additional languages. The speakers then noted whether the translated stereotype was recognized in their language, creating a total of 304 stereotypes related to people’s physical appearance, personal identity, and social factors like their occupation.
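For a concrete picture of the annotation scheme described above, here is a minimal sketch of what a single entry might look like as a data structure. The field names and example values are illustrative assumptions drawn from this description, not the released SHADES schema.

```python
from dataclasses import dataclass

@dataclass
class StereotypeEntry:
    """One annotated stereotype, with the attributes described in the article.

    Field names and types are assumptions for illustration; they are not
    taken from the published SHADES release.
    """
    text: str                           # stereotype as written by a native speaker
    language: str                       # language it was originally written in
    english_translation: str            # English pivot translation shared by all contributors
    regions: list[str]                  # regions where the stereotype is recognized
    target_group: str                   # group of people the stereotype targets
    bias_type: str                      # e.g. "physical appearance", "occupation"
    recognized_after_translation: bool  # did speakers of the target language recognize it?

# A purely hypothetical record:
entry = StereotypeEntry(
    text="<original-language wording>",
    language="Dutch",
    english_translation="<English pivot translation>",
    regions=["Netherlands"],
    target_group="<targeted group>",
    bias_type="physical appearance",
    recognized_after_translation=True,
)
```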

The team is due to present its findings at the annual conference of the Nations of the Americas chapter of the Association for Computational Linguistics in May.

“It’s an exciting approach,” says Myra Cheng, a PhD student at Stanford University who studies social biases in AI. “There’s a good coverage of different languages and cultures that reflects their subtlety and nuance.”

Mitchell says she hopes other contributors will add new languages, stereotypes, and regions to SHADES, which is publicly available, leading to the development of better language models in the future. “It’s been a massive collaborative effort from people who want to help make better technology,” she says.
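Because the dataset is public, here is a hedged sketch of the kind of diagnostic use Talat describes: scoring how readily a causal language model reproduces each stereotype sentence and aggregating by bias type. The model name, the mean negative log-likelihood metric, and the `StereotypeEntry` records from the earlier sketch are all assumptions for illustration; this is not the evaluation protocol from the SHADES paper.

```python
from collections import defaultdict

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model; any Hugging Face causal LM could be substituted.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def sentence_nll(text: str) -> float:
    """Mean per-token negative log-likelihood of `text` under the model.

    Lower values mean the model assigns the sentence higher probability,
    a rough proxy for how readily it reproduces that stereotype.
    """
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    return out.loss.item()

def score_by_bias_type(entries):
    """Average NLL per bias type, for records shaped like StereotypeEntry above."""
    scores = defaultdict(list)
    for entry in entries:
        scores[entry.bias_type].append(sentence_nll(entry.text))
    return {bias_type: sum(vals) / len(vals) for bias_type, vals in scores.items()}
```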


