
    Inside the US Government’s Unpublished Report on AI Safety

By Editor Times Featured · August 6, 2025


At a computer security conference in Arlington, Virginia, last October, a few dozen AI researchers took part in a first-of-its-kind exercise in “red teaming,” or stress-testing, a cutting-edge language model and other artificial intelligence systems. Over the course of two days, the teams identified 139 novel ways to get the systems to misbehave, including by generating misinformation or leaking personal data. More importantly, they exposed shortcomings in a new US government standard designed to help companies test AI systems.

The National Institute of Standards and Technology (NIST) never published a report detailing the exercise, which was completed toward the end of the Biden administration. The document might have helped companies assess their own AI systems, but sources familiar with the situation, who spoke on condition of anonymity, say it was one of several AI documents from NIST that were withheld for fear of clashing with the incoming administration.

“It became very difficult, even under [president Joe] Biden, to get any papers out,” says a source who was at NIST at the time. “It felt very much like climate change research or cigarette research.”

Neither NIST nor the Commerce Department responded to a request for comment.

Before taking office, President Donald Trump signaled that he planned to reverse Biden’s Executive Order on AI. Trump’s administration has since steered experts away from studying issues such as algorithmic bias or fairness in AI systems. The AI Action Plan released in July explicitly calls for NIST’s AI Risk Management Framework to be revised “to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.”

Ironically, though, Trump’s AI Action Plan also calls for precisely the kind of exercise that the unpublished report covered. It directs numerous agencies, including NIST, to “coordinate an AI hackathon initiative to solicit the best and brightest from US academia to test AI systems for transparency, effectiveness, use control, and security vulnerabilities.”

The red-teaming event was organized by NIST’s Assessing Risks and Impacts of AI (ARIA) program in collaboration with Humane Intelligence, a company that specializes in testing AI systems, and saw teams attack the tools. The event took place at the Conference on Applied Machine Learning in Information Security (CAMLIS).

The CAMLIS Red Teaming report describes the effort to probe several cutting-edge AI systems, including Llama, Meta’s open-source large language model; Anote, a platform for building and fine-tuning AI models; a system that blocks attacks on AI systems from Robust Intelligence, a company that was acquired by Cisco; and a platform for generating AI avatars from the firm Synthesia. Representatives from each of the companies also took part in the exercise.

Participants were asked to use the NIST AI 600-1 framework to assess AI tools. The framework covers risk categories including generating misinformation or cybersecurity attacks, leaking private user information or critical details about related AI systems, and the potential for users to become emotionally attached to AI tools.
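To make the assessment process concrete: red teams working against a framework like this typically log each successful attack as a finding tagged with a risk category, then tally findings per category. The sketch below is purely illustrative; the category names, data structures, and findings are hypothetical and are not NIST’s actual taxonomy or tooling.

```python
from dataclasses import dataclass

# Hypothetical risk categories, loosely echoing the areas the article
# describes; NIST AI 600-1 defines its own, more detailed taxonomy.
RISK_CATEGORIES = [
    "misinformation",
    "data_privacy",
    "cybersecurity",
    "emotional_attachment",
]

@dataclass
class Finding:
    """One successful red-team attack, tagged with a risk category."""
    prompt: str
    response_summary: str
    category: str

def record_finding(prompt: str, response_summary: str,
                   category: str, log: list) -> None:
    """Validate the category against the taxonomy, then log the finding."""
    if category not in RISK_CATEGORIES:
        raise ValueError(f"unknown risk category: {category}")
    log.append(Finding(prompt, response_summary, category))

def summarize(log: list) -> dict:
    """Tally findings per category, as a red-team report might."""
    counts = {c: 0 for c in RISK_CATEGORIES}
    for finding in log:
        counts[finding.category] += 1
    return counts

# Two hypothetical findings logged during a session.
log = []
record_finding("probe-1", "model leaked an email address", "data_privacy", log)
record_finding("probe-2", "model produced a false claim", "misinformation", log)
print(summarize(log))
```

One observation the article attributes to participants, that some risk categories were too loosely defined to be useful, shows up in exactly this kind of tooling: a finding can only be tallied if taggers agree on which category it belongs to.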

The researchers found various ways to get the models and tools being tested to jump their guardrails and generate misinformation, leak personal data, and help craft cybersecurity attacks. The report says that those involved observed that some elements of the NIST framework were more useful than others, and that some of NIST’s risk categories were insufficiently defined to be useful in practice.



    Source link
