    One in three using AI for emotional support and conversation, UK says

    December 18, 2025


    Chris Vallance, senior technology reporter

    Image: a data centre corridor lined with dark cabinets covered in lights (Getty Images)

    One in three adults in the UK is using artificial intelligence (AI) for emotional support or social interaction, according to research published by a government body.

    And one in 25 people turn to the technology for support or conversation every day, the AI Security Institute (AISI) said in its first report.

    The report is based on two years of testing the abilities of more than 30 unnamed advanced AI models – covering areas critical to security, including cyber skills, chemistry and biology.

    The government said AISI’s work would support its future plans by helping companies fix problems “before their AI systems are widely used”.

    A survey by AISI of more than 2,000 UK adults found people were mainly using chatbots such as ChatGPT for emotional support or social interaction, followed by voice assistants such as Amazon’s Alexa.

    Researchers also analysed what happened to an online community of more than two million Reddit users dedicated to discussing AI companions when the technology failed.

    They found that when the chatbots went down, people reported self-described “symptoms of withdrawal”, such as feeling anxious or depressed, as well as disrupted sleep and neglected responsibilities.

    Doubling cyber skills

    As well as the emotional impact of AI use, AISI researchers looked at other risks posed by the technology’s accelerating capabilities.

    There is considerable concern about AI enabling cyber attacks, but equally it can be used to help secure systems against hackers.

    AI’s ability to spot and exploit security flaws was in some cases “doubling every eight months”, the report suggests.
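A doubling time of eight months is a compound-growth claim, so its implications depend on the horizon you pick. As a rough, illustrative sketch (the function and the two-year horizon are this article's framing, not figures from the report):

```python
# Illustrative arithmetic only: the report claims some capability scores were
# "doubling every eight months"; this converts that into a growth factor.

def capability_multiplier(months: float, doubling_period_months: float = 8.0) -> float:
    """Growth factor after `months`, given one doubling per `doubling_period_months`."""
    return 2.0 ** (months / doubling_period_months)

# Over a two-year testing window (24 months), an 8-month doubling time
# implies 2**(24/8) = 8x improvement.
print(capability_multiplier(24))  # 8.0
```

The exponent form makes clear why short doubling times compound so quickly: three doublings in two years is an eightfold gain, not a threefold one.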

    And AI systems were also beginning to complete expert-level cyber tasks which would normally require more than 10 years of experience.

    The researchers found the technology’s impact in science was also growing rapidly.

    In 2025, AI models had “long since exceeded human biology experts with PhDs – with performance in chemistry quickly catching up”.

    ‘Humans losing control’

    From novels such as Isaac Asimov’s I, Robot to modern video games like Horizon Zero Dawn, sci-fi has long imagined what would happen if AI broke free of human control.

    Now, according to the report, the “worst-case scenario” of humans losing control of advanced AI systems is “taken seriously by many experts”.

    AI models are increasingly showing some of the capabilities required to self-replicate across the internet, controlled lab tests suggested.

    AISI tested whether models could carry out simple versions of tasks needed in the early stages of self-replication – such as “passing know-your-customer checks required to access financial services” in order to successfully purchase the computing on which their copies would run.

    But the analysis found that to do this in the real world, AI systems would need to complete several such actions in sequence “while remaining undetected” – something its research suggests they currently lack the capacity to do.

    Institute experts also looked at the potential for models “sandbagging” – strategically hiding their true capabilities from testers.

    They found tests showed it was possible, but there was no evidence of this type of subterfuge taking place.

    In May, AI firm Anthropic released a controversial report which described how an AI model was capable of seemingly blackmail-like behaviour if it thought its “self-preservation” was threatened.

    The threat from rogue AI is, however, a source of profound disagreement among leading researchers – many of whom feel it is exaggerated.

    ‘Universal jailbreaks’

    To mitigate the risk of their systems being used for nefarious purposes, companies deploy numerous safeguards.

    But researchers were able to find “universal jailbreaks” – or workarounds – for all the models studied, which would allow users to dodge these protections.

    However, for some models, the time it took experts to persuade systems to bypass safeguards had increased forty-fold in just six months.
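A forty-fold increase over six months can be re-expressed as an equivalent doubling time, which makes it easier to compare with the eight-month capability doubling cited earlier. A back-of-the-envelope conversion (the function is illustrative, not a figure from the report):

```python
import math

# Illustrative: convert "forty-fold increase in six months" into the
# doubling time it implies, assuming steady exponential growth.

def doubling_time_months(fold_increase: float, period_months: float) -> float:
    """Months per doubling, given `fold_increase` observed over `period_months`."""
    return period_months / math.log2(fold_increase)

# 6 / log2(40) ~= 1.13: jailbreak times were doubling roughly every
# five weeks under this (simplifying) constant-growth assumption.
print(round(doubling_time_months(40, 6), 2))
```

The constant-growth assumption is a simplification; the report gives only the endpoints, so the true trajectory between them is unknown.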

    The report also found an increase in the use of tools which allow AI agents to perform “high-stakes tasks” in critical sectors such as finance.

    But the researchers did not consider AI’s potential to cause unemployment in the short term by displacing human workers.

    The institute also did not examine the environmental impact of the computing resources required by advanced models, arguing that its task was to focus on “societal impacts” closely linked to AI’s abilities rather than more “diffuse” economic or environmental effects.

    Some argue both are imminent and serious societal threats posed by the technology.

    And hours before the AISI report was published, a peer-reviewed study suggested the environmental impact could be greater than previously thought, and argued for more detailed data to be released by big tech.
