    Who’s to Blame When AI Goes Rogue? The UN’s Quiet Warning That Got Very Loud

By Editor Times Featured · February 6, 2026


From Silicon Valley to the U.N., the question of how to assign blame when AI goes wrong is no longer an esoteric regulatory concern but a matter of geopolitical significance.

This week, the United Nations Secretary-General posed that question, highlighting a problem that is central to discussions about AI ethics and regulation. He asked who should be held accountable when AI systems cause harm, discriminate, or spiral beyond human intent.

The comments were a clear warning to national leaders, as well as to tech-industry executives, that AI's capabilities are outpacing regulation, as previously reported.

But it wasn't just the warning that was remarkable. So was the tone. There was a sense of exasperation.

Even desperation. If AI-driven machines are being used to make decisions that involve life and death, livelihoods, borders, and security, then someone cannot simply shrug and say it is all too complicated.

The Secretary-General said the responsibility "must be shared among developers, deployers and regulators."

The notion resonates with long-held suspicions within the UN about unbridled technological drive, suspicions that have been percolating through UN deliberations on digital governance and human rights.

The timing matters. As governments try to draft AI legislation at a moment when the technology is changing so rapidly, Europe has already taken the lead by passing ambitious laws that will apply to high-risk AI products, establishing a regulatory standard that will likely serve as a beacon, or a cautionary tale, for other nations.

But honestly: laws on a page aren't going to shift the power dynamics on their own. The Secretary-General's words arrive as AI is already being used in immigration vetting, predictive policing, creditworthiness assessments, and military decisions.

Civil society has long warned about the dangers of AI without accountability. It becomes the perfect scapegoat for human decision-making with very human repercussions: "the algorithm made me do it."

There is also a geopolitical problem that is rarely discussed: what happens if AI explainability regulations in one country are incompatible with those of a neighboring country?

What happens when AI crosses borders? Should we be talking about rights to export AI? António Guterres, the UN Secretary-General, spoke about the need for universal guidelines for developing and using AI, much as has been done with nuclear and climate law.

And this is no easy task in a world where international relations and international agreements are fraying, heading toward a state of near-total deregulation.

My interpretation? This wasn't diplomacy talking. This was a draw-the-line speech. It wasn't a complicated message, even if it is a complicated problem to solve: AI should not be excused from accountability just because it is clever or fast or profitable.

There must be an entity accountable for its outcomes. And the longer the world takes to decide who that entity will be, the more painful and complicated the decision will become.


