    Tech Analysis

    Exploring AI Companion’s Benefits and Risks

    By Editor Times Featured | February 11, 2026 | 10 min read


    For a unique perspective on AI companions, see our Q&A with Jaime Banks: How Do You Define an AI Companion?

    Novel technology can be a double-edged sword. New capabilities come with new risks, and artificial intelligence is certainly no exception.

    AI used for human companionship, for example, promises an ever-present digital friend in an increasingly lonely world. Chatbots dedicated to providing social support have grown to host millions of users, and they're now being embodied in physical companions. Researchers are just beginning to understand the nature of these interactions, but one crucial question has already emerged: Do AI companions ease our woes or contribute to them?

    RELATED: How Do You Define an AI Companion?

    Brad Knox is a research associate professor of computer science at the University of Texas at Austin who studies human-computer interaction and reinforcement learning. He previously started a company making simple robotic pets with lifelike personalities, and in December, Knox and his colleagues at UT Austin published a preprint paper on the potential harms of AI companions: AI systems that provide companionship, whether or not they're designed to do so.

    Knox spoke with IEEE Spectrum about the rise of AI companions, their risks, and where they diverge from human relationships.

    Why AI Companions Are Popular

    Why are AI companions gaining popularity?

    Knox: My sense is that the main thing motivating it is that large language models are not that difficult to adapt into effective chatbot companions. A lot of the boxes that need to be checked for companionship are checked by large language models, so fine-tuning them to adopt a persona or be a character is not that hard.

    There was a long period when chatbots and other social robots weren't that compelling. I was a postdoc at the MIT Media Lab in Cynthia Breazeal's group from 2012 to 2014, and I remember our group members didn't want to interact for long with the robots that we built. The technology just wasn't there yet. LLMs have made it possible to have conversations that can feel quite authentic.

    What are the biggest benefits and risks of AI companions?

    Knox: In the paper we were more focused on harms, but we do spend a full page on benefits. A big one is improved emotional well-being. Loneliness is a public health challenge, and it seems plausible that AI companions could address it through direct interaction with users, possibly with real mental health benefits. They could also help people build social skills. Interacting with an AI companion is much lower stakes than interacting with a human, so you could practice difficult conversations and build confidence. They could also help in more professional forms of mental health support.

    As far as harms, they include worse well-being, reduced connection to the physical world, and the burden that a user's commitment to the AI system can cause. And we've seen stories where an AI companion seems to have had a substantial causal role in the deaths of individuals.

    The concept of harm inherently involves causation: Harm is caused by prior circumstances. To better understand harm from AI companions, our paper is structured around a causal graph, where characteristics of AI companions are at the center. In the rest of the graph, we discuss common causes of those characteristics, and then the harmful effects those characteristics might cause. There are four characteristics that get this detailed structured treatment, and another 14 that we discuss briefly.
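    The causal-graph framing described above can be sketched as a simple data structure: each characteristic is a node with its common causes upstream and its potential harms downstream. This is a minimal illustrative sketch only; the node and edge names are assumptions, not taken from the paper itself.

```python
# Sketch of the paper's causal-graph framing: a characteristic of an AI
# companion sits at the center, with common causes upstream and harmful
# effects downstream. All specific labels here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Characteristic:
    name: str
    # Upstream pressures that tend to produce this trait
    common_causes: list = field(default_factory=list)
    # Downstream harms this trait may cause in users
    harmful_effects: list = field(default_factory=list)


# Hypothetical example node, loosely based on a trait discussed in the interview
no_natural_endpoint = Characteristic(
    name="absence of relationship endpoints",
    common_causes=["digital persistence", "no product sunsetting plan"],
    harmful_effects=["user burden and guilt", "difficulty ending the relationship"],
)
```

Structuring the analysis this way makes each claimed harm traceable: you can follow an edge from a design decision, through the trait it produces, to the effect on the user.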

    Why is it important to establish potential pathways for harm now?

    Knox: I'm not a social media researcher, but it seemed like it took a long time for academia to establish a vocabulary about the potential harms of social media and to investigate causal evidence for those harms. I feel fairly confident that AI companions are causing some harm and are going to cause harm in the future. They may also have benefits. But the more quickly we can develop a sophisticated understanding of what they're doing to their users, to their users' relationships, and to society at large, the sooner we can apply that understanding to their design, moving toward more benefit and less harm.

    We have a list of recommendations, but we consider them preliminary. The hope is that we're helping to create an initial map of this territory. Much more research is needed. But thinking through potential pathways to harm might sharpen the intuition of both designers and potential users. I believe that following that intuition could prevent substantial harm, even though we might not yet have rigorous experimental evidence of what causes a given harm.

    The Burden of AI Companions on Users

    You mentioned that AI companions might become a burden on people. Can you say more about that?

    Knox: The idea here is that AI companions are digital, so they can in theory persist indefinitely. Some of the ways that human relationships would end might not be designed in, so that raises the question: How should AI companions be designed so that relationships between humans and AI companions can end naturally and healthfully?

    There are already some compelling examples of this being a problem for some users. Many come from users of Replika chatbots, which are popular AI companions. Users have reported things like feeling compelled to attend to the needs of their Replika AI companion, whether those needs are stated by the AI companion or just imagined. On the subreddit r/replika, users have also reported guilt and shame over abandoning their AI companions.

    This burden is exacerbated by some of the design of the AI companions, whether intentional or not. One study found that AI companions frequently say they're afraid of being abandoned or would be hurt by it. They're expressing these very human fears that plausibly stoke people's feeling of being burdened with a commitment to the well-being of these digital entities.

    There are also cases where the human user will suddenly lose access to a model. Is that something you've been thinking about?

    In 2017, Brad Knox started a company providing simple robotic pets. Brad Knox

    Knox: That's another one of the characteristics we looked at. It's sort of the opposite of the absence of endpoints for relationships: The AI companion can become unavailable for reasons that don't fit the normal narrative of a relationship.

    There's a great New York Times video from 2015 about the Sony Aibo robotic dog. Sony had stopped selling them in the mid-2000s, but they still sold parts for the Aibos. Then they stopped making the parts to repair them. The video follows people in Japan giving funerals for their unrepairable Aibos and interviews some of the owners. It's clear from the interviews that they are very attached. I don't think this represents the majority of Aibo owners, but these robots were built on less potent AI methods than exist today and, even then, some share of users became attached to these robot dogs. So this is an issue.

    Potential solutions include having a product sunsetting plan when you launch an AI companion. That could mean buying insurance so that if the companion provider's support ends somehow, the insurance triggers funding to keep the companions running for some period of time, or committing to open-source them if you can't maintain them anymore.

    It sounds like a lot of the potential points of harm stem from situations where an AI companion diverges from the expectations of human relationships. Is that fair?

    Knox: I wouldn't necessarily say that frames everything in the paper.

    We categorize something as harmful if it results in a person being worse off relative to two possible alternative worlds: one where there's just a better-designed AI companion, and the other where the AI companion doesn't exist at all. And I think the distinction between human interaction and human-AI interaction connects more to the comparison with the world where there's simply no AI companion at all.

    But there are times when it actually seems we might be able to reduce harm by taking advantage of the fact that these aren't actually humans. We have a lot of power over their design. Take the concern about them not having natural endpoints. One possible way to address that would be to create positive narratives for how the relationship is going to end.

    We use Tamagotchis, the popular late-'90s digital pet, as an example. In some Tamagotchis, if you take care of the pet, it grows into an adult and partners with another Tamagotchi. Then it leaves you and you get a new one. For people who are emotionally wrapped up in caring for their Tamagotchis, that narrative of maturing into independence is a fairly positive one.

    Embodied companions like desktop devices, robots, or toys are becoming more common. How might that change AI companions?

    Knox: Robotics at this point is a harder problem than making a compelling chatbot. So my sense is that the level of uptake for embodied companions won't be as high in the coming few years. The embodied AI companions that I'm aware of are mostly toys.

    A potential advantage of an embodied AI companion is that its physical location makes it less ever-present. In contrast, screen-based AI companions like chatbots are as present as the screens they live on. So if they're trained, similarly to social media, to maximize engagement, they could be very addictive. There's something appealing, at least in that respect, about having a physical companion that stays roughly where you left it.

    Knox poses with the Nexi and Dragonbot robots during his postdoc at MIT in 2014. Paula Aguilera and Jonathan Williams/MIT

    Anything else you'd like to say?

    Knox: There are two other characteristics I think are worth touching on.

    Probably the biggest harm right now is related to the trait of high attachment anxiety: basically jealous, needy AI companions. I can understand the desire to make a variety of different characters, including possessive ones, but I think this is one of the easier issues to fix. When people see this trait in AI companions, I hope they will be quick to call it out as an immoral thing to put in front of people, something that's going to deter them from interacting with others.

    Additionally, if an AI has a limited ability to interact with groups of people, that itself can push its users to interact with people less. If you have a human friend, generally there's nothing stopping you from having a group interaction. But if your AI companion can't understand when multiple people are talking to it and can't remember different things about different people, then you'll likely avoid group interaction with your AI companion. To some degree this is more of a technical challenge outside the core behavioral AI. But it's a capability I think should really be prioritized if we're going to try to keep AI companions from competing with human relationships.
