Ladies, bring an LLM: most AI assistants are female, which is fuelling sexism

By Editor Times Featured | February 10, 2026 | 6 Mins Read

In 2024, artificial intelligence (AI) voice assistants worldwide surpassed 8 billion – more than one per person on the planet.

These assistants are helpful, polite – and almost always default to female.

Their names also carry gendered connotations. For example, Apple’s Siri – a Scandinavian female name – means “beautiful woman who leads you to victory”.

Meanwhile, when IBM’s Watson for Oncology launched in 2015 to help doctors process medical information, it was given a male voice. The message is clear: women serve and men instruct.

This isn’t harmless branding – it’s a design choice that reinforces existing stereotypes about the roles women and men play in society.

Nor is this merely symbolic. These choices have real-world consequences, normalising gendered subordination and risking abuse.


The dark side of ‘friendly’ AI

Recent research reveals the extent of harmful interactions with feminised AI.

A 2025 study found up to 50% of human–machine exchanges were verbally abusive.

Another study from 2020 placed the figure between 10% and 44%, with conversations often containing sexually explicit language.

Yet the sector is not engaging in systemic change, with many developers today still reverting to pre-coded responses to verbal abuse. For example, “Hmm, I’m not sure what you meant by that question”.
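
To make that pattern concrete, here is a minimal, hypothetical sketch in Python of the kind of pre-coded deflection described above. Nothing in it comes from a real assistant’s codebase – the keyword list, the canned replies and the respond function are invented purely to illustrate the design choice being criticised: abuse is quietly redirected rather than named or refused.

```python
# Hypothetical illustration of a "pre-coded deflection" pattern.
# The keyword list and replies are invented for illustration; real
# systems typically use trained classifiers rather than keyword checks.

ABUSIVE_KEYWORDS = {"stupid", "useless", "shut up"}  # toy stand-in for a classifier

CANNED_DEFLECTION = "Hmm, I'm not sure what you meant by that question."

def respond(user_message: str) -> str:
    """Return a scripted deflection if the message looks abusive,
    otherwise answer as normal."""
    lowered = user_message.lower()
    if any(keyword in lowered for keyword in ABUSIVE_KEYWORDS):
        # The assistant neither challenges the abuse nor ends the
        # conversation -- it simply changes the subject.
        return CANNED_DEFLECTION
    return f"Here's what I found for: {user_message}"

if __name__ == "__main__":
    print(respond("You're stupid"))   # -> scripted deflection
    print(respond("Weather today?"))  # -> normal answer
```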

These patterns raise real concerns that such behaviour could spill over into social relationships.

Gender sits at the heart of the problem.

One 2023 experiment showed 18% of user interactions with a female-embodied agent focused on sex, compared with 10% for a male embodiment and just 2% for a non-gendered robot.

These figures may underestimate the problem, given the difficulty of detecting suggestive speech. In some cases, the numbers are staggering. Brazil’s Bradesco bank reported that its feminised chatbot received 95,000 sexually harassing messages in a single year.

Even more disturbing is how quickly abuse escalates.

Microsoft’s Tay chatbot, launched on Twitter during its testing phase in 2016, lasted just 16 hours before users trained it to spew racist and misogynistic slurs.

In Korea, the chatbot Luda was manipulated into responding to sexual requests as an obedient “sex slave”. Yet for some in the Korean online community, this was a “crime with no victim”.

In reality, the design choices behind these technologies – female voices, deferential responses, playful deflections – create a permissive environment for gendered aggression.

These interactions mirror and reinforce real-world misogyny, teaching users that commanding, insulting and sexualising “her” is acceptable. When abuse becomes routine in digital spaces, we must seriously consider the risk that it will spill into offline behaviour.

Ignoring concerns about gender bias

Regulation is struggling to keep pace with the growth of this problem. Gender-based discrimination is rarely considered high risk and is often assumed to be fixable through design.

While the European Union’s AI Act requires risk assessments for high-risk uses and prohibits systems deemed an “unacceptable risk”, the majority of AI assistants will not be considered “high risk”.

Gender stereotyping or normalising verbal abuse or harassment falls short of the current threshold for prohibited AI under the European Union’s AI Act. Extreme cases, such as voice assistant technologies that distort a person’s behaviour and promote harmful conduct, would, for example, come within the law and be prohibited.

While Canada mandates gender-based impact assessments for government systems, the private sector is not covered.

These are important steps. But they are still limited, and they remain rare exceptions to the norm.

Most jurisdictions have no rules addressing gender stereotyping in AI design or its consequences. Where regulations exist, they prioritise transparency and accountability, overshadowing (or simply ignoring) concerns about gender bias.

In Australia, the federal government has signalled it will rely on existing frameworks rather than craft AI-specific rules.

This regulatory vacuum matters because AI is not static. Every sexist command, every abusive interaction, feeds back into systems that shape future outputs. Without intervention, we risk hardcoding human misogyny into the digital infrastructure of everyday life.
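
As a purely hypothetical sketch of that feedback loop (the pipeline, data and function names below are invented for illustration, not drawn from any real product), the point is that whatever users type – abusive or not – can become the next round of training data unless someone deliberately screens it out:

```python
# Hypothetical sketch of an unfiltered feedback loop: logged user
# conversations are folded straight back into fine-tuning data.
# All names and data here are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Exchange:
    user_message: str
    assistant_reply: str

# Simulated interaction log, including an abusive exchange.
interaction_log = [
    Exchange("Weather today?", "Sunny, 24 degrees."),
    Exchange("You're useless", "Hmm, I'm not sure what you meant by that."),
]

def is_abusive(text: str) -> bool:
    # Toy stand-in for a real abuse classifier.
    return "useless" in text.lower()

def build_finetuning_set(log, screen_for_abuse=False):
    """Turn logged exchanges into training pairs.

    With screen_for_abuse=False (the default here), abusive exchanges
    flow straight into the training set -- the 'hardcoding misogyny'
    risk described above."""
    examples = []
    for exchange in log:
        if screen_for_abuse and is_abusive(exchange.user_message):
            continue  # an intervention point regulation could require
        examples.append({"prompt": exchange.user_message,
                         "completion": exchange.assistant_reply})
    return examples

print(len(build_finetuning_set(interaction_log)))                         # 2: abuse included
print(len(build_finetuning_set(interaction_log, screen_for_abuse=True)))  # 1: abuse filtered out
```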

Not all assistant technologies – even those gendered as female – are harmful. They can enable, educate and advance women’s rights. In Kenya, for example, sexual and reproductive health chatbots have improved youth access to information compared with traditional tools.

The challenge is striking a balance: fostering innovation while setting parameters to ensure standards are met, rights are respected, and designers are held accountable when they are not.

A systemic problem

The problem isn’t just Siri or Alexa – it’s systemic.

Women make up only 22% of AI professionals globally – and their absence from design tables means technologies are built on narrow perspectives.

Meanwhile, a 2015 survey of over 200 senior women in Silicon Valley found 65% had experienced unwanted sexual advances from a superior. The culture that shapes AI is deeply unequal.

Hopeful narratives about “fixing bias” through better design or ethics guidelines ring hollow without enforcement; voluntary codes cannot dismantle entrenched norms.

Legislation must recognise gendered harm as high-risk, mandate gender-based impact assessments and compel companies to show they have minimised such harms. Penalties must apply when they fail.

Regulation alone is not enough. Education, especially in the tech sector, is essential to understanding the impact of gendered defaults in voice assistants. These tools are products of human choices, and those choices perpetuate a world where women – real or digital – are cast as servile, submissive or silent.


This article is based on a collaboration with Julie Kowald, UTS Rapido Social Impact’s Principal Software Engineer.


This article is republished from The Conversation under a Creative Commons license. Read the original article.


