
Don’t Regulate AI Models. Regulate AI Use

By Editor Times Featured, February 2, 2026
Hazardous dual-use capabilities (for instance, tools to fabricate biometric voiceprints to defeat authentication).
Regulatory adherence: confine to licensed services and verified operators; prohibit capabilities whose main function is illegal.

Close the loop at real-world choke points

AI-enabled systems become real when they are connected to users, money, infrastructure, and institutions, and that is where regulators should focus enforcement: on the points of distribution (app stores and enterprise marketplaces), capability access (cloud and AI platforms), monetization (payment systems and ad networks), and risk transfer (insurers and contract counterparties).
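As a rough illustration of what enforcement at a capability-access point could look like, the sketch below gates requests on operator identity and licensed risk tier. The tier names, operator table, and policy are hypothetical, invented for illustration; they are not drawn from any statute or real platform API.

```python
# Hypothetical capability gating at an access point (e.g. a cloud AI
# platform): before serving a request, check that the operator's identity
# is verified and that its license covers the capability's risk tier.

TIER_RANK = {"minimal": 0, "limited": 1, "high": 2}  # illustrative tiers

# Illustrative operator registry (identity binding would come from a
# real verification process, not a hard-coded table).
OPERATORS = {
    "op-123": {"verified": True, "licensed_tier": "high"},
    "op-456": {"verified": True, "licensed_tier": "limited"},
    "op-789": {"verified": False, "licensed_tier": "high"},
}

def may_access(operator_id: str, capability_tier: str) -> bool:
    """Allow access only if the operator is identity-verified and its
    licensed tier is at least the capability's risk tier."""
    op = OPERATORS.get(operator_id)
    if op is None or not op["verified"]:
        return False
    return TIER_RANK[op["licensed_tier"]] >= TIER_RANK[capability_tier]

assert may_access("op-123", "high")      # verified, licensed for high risk
assert not may_access("op-456", "high")  # licensed only up to "limited"
assert not may_access("op-789", "high")  # identity not verified
```

The point of the sketch is that the check happens at the platform, not inside the model: the same gate works for open and closed models alike.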

For high-risk uses, we need to require identity binding for operators, capability gating aligned to the risk tier, and tamper-evident logging for audits and postincident review, paired with privacy protections. We need to demand evidence for deployer claims, maintain incident-response plans, report material faults, and provide human fallback. When AI use leads to harm, companies should have to show their work and face liability for harms.
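"Tamper-evident logging" can be made concrete with a standard technique: a hash-chained audit log, where each record commits to the digest of the previous one, so any later alteration breaks the chain. The sketch below is a minimal illustration in Python; a production system would additionally sign records and anchor digests externally, and the field names are invented for the example.

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event to a hash-chained audit log. Each record stores
    the SHA-256 digest of the previous record, so tampering with any
    earlier entry invalidates every digest after it."""
    prev_digest = log[-1]["digest"] if log else "0" * 64
    record = {"event": event, "prev": prev_digest}
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return log

def verify_chain(log):
    """Recompute every digest; return False if any record was altered."""
    prev = "0" * 64
    for record in log:
        if record["prev"] != prev:
            return False
        body = {"event": record["event"], "prev": record["prev"]}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["digest"]:
            return False
        prev = record["digest"]
    return True

log = []
append_entry(log, {"operator": "op-123", "action": "model_call"})
append_entry(log, {"operator": "op-123", "action": "human_review"})
assert verify_chain(log)

# Tampering with an earlier entry is detectable on audit:
log[0]["event"]["action"] = "deleted"
assert not verify_chain(log)
```

This is the same basic structure used by transparency logs: auditors do not need to trust the operator's storage, only to recompute the chain.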

This approach creates market dynamics that accelerate compliance. If essential business operations such as procurement, access to cloud services, and insurance depend on proving that you are following the rules, AI model developers will build to specifications buyers can verify. That raises the safety floor for all industry players, startups included, without handing an advantage to a few large, licensed incumbents.

The E.U. approach: How this aligns, where it differs

This framework aligns with the E.U. AI Act in two essential ways. First, it centers risk at the point of impact: The act’s “high-risk” categories include employment, education, access to essential services, and critical infrastructure, with life-cycle obligations and complaint rights. It also recognizes special treatment for broadly capable systems (GPAI) without pretending publication control is a safety strategy. My proposal for the United States differs in three key ways:

First, the U.S. must design for constitutional durability. Courts have treated source code as protected speech, and a regime that requires permission to publish weights or train a class of models begins to resemble prior restraint. A use-based regime of rules governing what AI operators can do in sensitive settings, and under what circumstances, fits more naturally within U.S. First Amendment doctrine than speaker-based licensing schemes.

Second, the E.U. can rely on platforms adapting to the precautionary rules it writes for its unified single market. The U.S. should accept that models will exist globally, both open and closed, and focus on where AI becomes actionable: app stores, enterprise platforms, cloud providers, enterprise identity layers, payment rails, insurers, and regulated-sector gatekeepers (hospitals, utilities, banks). These are enforceable points where identity, logging, capability gating, and postincident accountability can be required without pretending we can “contain” software. They also span the many specialized U.S. agencies that may not be able to write higher-level rules broad enough to affect the whole AI ecosystem. Instead, the U.S. should regulate AI service choke points more explicitly than Europe does, to accommodate the different shape of its government and public administration.

Third, the U.S. should add an explicit “dual-use hazard” tier. The E.U. AI Act is primarily a fundamental-rights and product-safety regime. The United States also has a national-security reality: Certain capabilities are dangerous because they scale harm (biosecurity, cyberoffense, mass fraud). A coherent U.S. framework should name that category and regulate it directly, rather than trying to fit it into generic “frontier model” licensing.

China’s approach: What to reuse, what to avoid

China has built a layered regime for public-facing AI. The “deep synthesis” rules (effective 10 January 2023) require conspicuous labeling of synthetic media and place duties on providers and platforms. The Interim Measures for Generative AI (effective 15 August 2023) add registration and governance obligations for services offered to the public. Enforcement leverages platform control and algorithm filing systems.

The United States should not copy China’s state-directed control of AI viewpoints or information management; it is incompatible with U.S. values and would not survive U.S. constitutional scrutiny. The licensing of model publication is brittle in practice and, in the United States, likely an unconstitutional form of censorship.

But we can borrow two sensible ideas from China. First, we should ensure trustworthy provenance and traceability for synthetic media. This entails mandatory labeling and provenance forensic tools. They give legitimate creators and platforms a reliable way to prove origin and integrity. When it is quick to verify authenticity at scale, attackers lose the advantage of cheap copies or deepfakes, and defenders regain time to detect, triage, and respond. Second, we should require operators to file their methods and risk controls with regulators for public-facing, high-risk services, as we do in other safety-critical industries. This should include due-process and transparency safeguards appropriate to liberal democracies, including clear accountability for safety measures, data protection, and incident handling, especially for systems designed to manipulate emotions or build dependency, which already include gaming, role-playing, and similar applications.
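A minimal sketch of provenance labeling: bind a creator identity to the exact bytes of a media file, so any edit fails verification. The example below uses an HMAC over the content digest purely for illustration; real provenance schemes (such as C2PA manifests) use public-key signatures rather than a shared secret, and the key and identifiers here are invented.

```python
import hashlib
import hmac

# Illustrative only: a real system would use asymmetric signatures so
# verifiers never hold the signing key.
SIGNING_KEY = b"demo-key-not-for-production"

def label_media(media_bytes: bytes, creator_id: str) -> dict:
    """Produce a provenance label binding creator identity to content."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    msg = f"{creator_id}:{digest}".encode()
    tag = hmac.new(SIGNING_KEY, msg, hashlib.sha256).hexdigest()
    return {"creator": creator_id, "sha256": digest, "tag": tag}

def verify_label(media_bytes: bytes, label: dict) -> bool:
    """Recompute digest and tag; any edit to the media fails the check."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    msg = f"{label['creator']}:{digest}".encode()
    expected = hmac.new(SIGNING_KEY, msg, hashlib.sha256).hexdigest()
    return digest == label["sha256"] and hmac.compare_digest(expected, label["tag"])

clip = b"synthetic audio bytes"
label = label_media(clip, "studio-42")
assert verify_label(clip, label)                 # untouched media verifies
assert not verify_label(clip + b"!", label)      # one changed byte fails
```

This is what makes "quick to verify authenticity at scale" possible: checking a label is a single hash computation, regardless of how the media was produced.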

A pragmatic approach

We cannot meaningfully regulate the development of AI in a world where artifacts copy in near real time and research flows fluidly across borders. But we can keep unvetted systems out of hospitals, payment systems, and critical infrastructure by regulating uses, not models; enforcing at choke points; and applying obligations that scale with risk.

Done right, this approach harmonizes with the E.U.’s outcome-oriented framework, channels U.S. federal and state innovation into a coherent baseline, and reuses China’s useful distribution-level controls while rejecting speech-restrictive licensing. We can write rules that protect people and still promote robust AI innovation.


