A few years ago, I was convinced I was about to die. And while (spoiler alert) I didn't, my severe anxiety around health and my tendency to always jump to the worst conclusions have persisted. The rise of health-tracking watches like Apple's recent Watch Series 11 or the Samsung Galaxy Watch 8, along with new ways that AI tries to analyze our body's data and tell us about it, has led me to make an important decision. For my own peace of mind, AI and constant monitoring need to stay far away from my personal health. Let me explain.
Sometime around 2016, I had severe migraines that persisted for a few weeks. My anxiety spiked sharply during this period because of the constant worry. When I eventually called the UK's NHS helpline and explained my various symptoms, they told me I needed to go to the nearest hospital and be seen within 2 hours. "Walk there with someone," I distinctly remember them telling me. "It will be quicker than getting an ambulance to you."
This call confirmed my worst fears: that death was imminent.
As it turned out, my fears of an early death were unfounded. The cause was actually severe muscle strain from having hung several heavy cameras around my neck for an entire day while photographing a friend's wedding. But the helpline agent was simply working with the limited data I'd provided. As a result, they had, probably rightly, taken a "better safe than sorry" approach and urged me to seek immediate medical attention, just in case I really was at risk.
The Apple Watch has always had a variety of heart-rate monitoring tools, and I've always avoided them.
I've spent most of my adult life struggling with health anxiety, and episodes like this have taught me a lot about my capacity to jump to the absolute worst conclusions despite there being no real evidence to support them. A ringing in my ears? Must be a brain tumor. A twinge in my stomach? Well, better get my affairs in order.
I've learned to live with this over the years, and while I still have my ups and downs, I know better now what triggers things for me. For one, I learned never to Google my symptoms, because no matter what my symptom was, cancer was always one of the possibilities a search would throw up. Medical sites, including the NHS's own website, offered no comfort and usually only resulted in mind-shattering panic attacks.
Unfortunately, I've found I have a similar reaction to many health-tracking tools. I liked my Apple Watch at first, and its ability to read my heart rate during workouts was helpful. Then I found I was checking it more and more often throughout the day. Then the doubt crept in: "Why is my heart rate high when I'm just sitting down? Is that normal? I'll try again in 5 minutes." When, inevitably, it wasn't different (or it was worse), panic would naturally ensue.
I've used Apple Watches a number of times, but I find the heart rate monitoring more stressful than helpful.
Whether it was tracking heart rate, blood oxygen levels or even sleep scores, I'd obsess over what a "normal" range should be. Any time my data fell outside that range, I'd immediately assume it meant I was about to keel over right then and there. The more data these devices provided, the more things I felt I had to worry about. And now the new Apple Watch Series 11 can track blood pressure, so I have that to stress over, too.
Sure, there's an argument that I only need to worry if it alerts me to a problem, and that I'm actually safer as a result of wearing it. Certainly Apple's heart-wrenching promo video at its September launch event, which told stories of people who really were saved from an untimely death by their watches, made a strong case. But I know that's not how my mind works. Instead of letting these tools do their thing in the background while I get on with my life, I'll obsess over the metrics, and any deviation from the established baseline will be cause for immediate panic.
I've learned to keep my worries at bay and have continued to use smartwatches occasionally without them being much of a problem for my mental health (I have to actively avoid any heart-related features like ECGs), but AI-based health tools scare me more.
It's not just Apple that's the issue here. This year, Samsung told us all the ways its new Galaxy AI tools, along with Google's Gemini AI, will supposedly help us in our daily lives. Samsung Health's algorithms will track your heart rate as it fluctuates throughout the day, notifying you of changes. It will offer personalized insights drawn from your diet and exercise to help with cardiovascular health. You can even ask the AI agent questions related to your health.
To many, this might sound like a great holistic view of your health, but not to me. To me, it sounds like more data being collected and waved in front of me, forcing me to acknowledge it and creating an endless feedback loop of obsession, worry and, inevitably, panic. But it's the AI questions that are the biggest red flag for me. AI tools by their nature have to give "best guess" answers, often based on information publicly available online. Asking an AI a question is essentially just a quick way of running a Google search, and as I've found, Googling health queries doesn't end well for me.
Samsung showed off various ways AI will be used within its health app during the Unpacked keynote.
Much like the NHS phone operator who inadvertently caused me to panic about dying, an AI-based health assistant will only be able to provide answers based on the limited information it has about me. Asking a question about my heart health might bring up a variety of information, just as looking on a health website would about why I have a headache. But much like how a headache can technically be a symptom of cancer, it's also more likely to be a muscular twinge. Or a sign that I haven't drunk enough water. Or that I need to look away from my screen for a bit. Or that I shouldn't have stayed up until 2 a.m. playing Yakuza: Infinite Wealth. Or a hundred other causes, all of which are far more likely than the one I've already decided is definitely the culprit.
But will an AI give me the context I need in order not to worry and obsess? Or will it simply present me with all the potential outcomes? It may intend to provide a full understanding, but instead it could risk feeding that "what if" worry. And, like how Google's AI Overviews told people to put glue on pizza, will an AI health tool simply scour the internet and serve me a hash of an answer, with inaccurate inferences that could tip my anxiety into full panic attack territory?
Or perhaps, much like the kind doctor at the hospital that day, who smiled gently at the sobbing man sitting opposite who'd already drafted a goodbye note to his family on his phone in the waiting room, an AI tool might be able to see that data and simply say, "You're fine, Andy, stop worrying and go to sleep."
Maybe one day that'll be the case. Maybe health tracking tools and AI insights will be able to offer me a much-needed dose of logic and reassurance to counter my anxiety, rather than being the cause of it. But until then, it's not a risk I'm willing to take.