    Let’s Analyze OpenAI’s Claims About ChatGPT Energy Use

By Editor Times Featured · June 16, 2025 · 7 min read


Altman recently shared a concrete figure for the energy and water consumption of ChatGPT queries. According to his blog post, each query to ChatGPT consumes about 0.34 Wh of electricity (0.00034 kWh) and about 0.000085 gallons of water, the equivalent of what a high-efficiency lightbulb uses in a couple of minutes and roughly one-fifteenth of a teaspoon.
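A quick back-of-envelope check of those equivalences (the 10 W LED bulb and the 768 US teaspoons per gallon are my own assumptions, not OpenAI's):

```python
# Sanity-check Altman's comparisons (assumed: 10 W LED bulb, 768 US tsp per gallon).
energy_wh = 0.34
water_gal = 0.000085

led_bulb_w = 10                                 # assumed "high-efficiency lightbulb"
minutes_of_led = energy_wh / led_bulb_w * 60    # ≈ 2.0 minutes

teaspoons_per_gallon = 768                      # US customary units
water_tsp = water_gal * teaspoons_per_gallon    # ≈ 0.065 ≈ 1/15 teaspoon

print(f"{minutes_of_led:.1f} minutes of LED light, {water_tsp:.3f} teaspoons of water")
```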

This is the first time OpenAI has publicly shared such data, and it adds an important data point to the ongoing debate about the environmental impact of large AI systems. The announcement sparked widespread discussion, both supportive and skeptical. In this post I analyze the claim and unpack reactions on social media to look at the arguments on both sides.

What Supports the 0.34 Wh Claim?

Let’s look at the arguments that lend credibility to OpenAI’s number.

1. Independent estimates align with OpenAI’s number

A key reason some consider the figure credible is that it aligns closely with earlier third-party estimates. In 2025, the research institute Epoch.AI estimated that a single query to GPT-4o consumes roughly 0.0003 kWh of energy, closely aligning with OpenAI’s own estimate. This assumes GPT-4o uses a mixture-of-experts architecture with 100 billion active parameters and a typical response length of 500 tokens. However, the estimate does not account for anything beyond the energy consumed by the GPU servers, and it does not incorporate power usage effectiveness (PUE), as is otherwise customary.
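For intuition, here is a rough, GPU-only sketch of how an estimate in that range can be reproduced. The active parameter count and token count are Epoch.AI’s assumptions quoted above; the FLOPs-per-token rule of thumb, the hardware utilization, and the per-GPU power draw are my own illustrative assumptions, not their published methodology:

```python
# Rough GPU-only energy estimate for a single query (illustrative assumptions).
active_params = 100e9            # active parameters per token (Epoch.AI's MoE assumption)
output_tokens = 500              # typical response length (Epoch.AI's assumption)

flops_per_token = 2 * active_params              # ~2 FLOPs per parameter per generated token
total_flops = flops_per_token * output_tokens    # ≈ 1e14 FLOPs per query

peak_flops = 1e15                # assumed: H100-class GPU, ~1 PFLOPS dense BF16
utilization = 0.10               # assumed: low utilization, typical for inference workloads
gpu_power_w = 1_200              # assumed: GPU plus its share of the host server

gpu_seconds = total_flops / (peak_flops * utilization)   # ≈ 1 s of GPU time
energy_wh = gpu_seconds * gpu_power_w / 3600             # ≈ 0.33 Wh
print(f"{energy_wh:.2f} Wh per query (GPU servers only, no PUE)")
```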

A recent academic study by Jegham et al. (2025) estimates that GPT-4.1 nano uses 0.000454 kWh, o3 uses 0.0039 kWh, and GPT-4.5 uses 0.030 kWh for long prompts (roughly 7,000 words of input and 1,000 words of output).

The agreement between these estimates and OpenAI’s data point suggests that OpenAI’s figure falls within a reasonable range, at least when focusing solely on the stage where the model responds to a prompt (known as “inference”).

Image by the author

2. OpenAI’s number may be plausible at the hardware level

It’s been reported that OpenAI serves 1 billion queries per day. Let’s consider the math behind how ChatGPT might serve that volume of queries. If that is true, and the energy per query is 0.34 Wh, then the total daily energy would be around 340 megawatt-hours, according to an industry expert. He speculates that this could mean OpenAI supports ChatGPT with about 3,200 servers (assuming Nvidia DGX A100). If 3,200 servers have to handle 1 billion daily queries, then each server must handle around 4.5 prompts per second. If we assume one instance of ChatGPT’s underlying LLM is deployed on each server, and that the average prompt results in 500 output tokens (roughly 375 words, according to OpenAI’s rule of thumb), then each server would need to generate around 2,250 tokens per second. Is that realistic?
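Here is that arithmetic as a minimal sketch. The query volume, per-query energy, server count and token count are the figures quoted above; spreading the load evenly over the day is my own simplification, which is why the per-server rate lands somewhat below the roughly 4.5 prompts per second cited by the expert:

```python
# Back-of-envelope serving arithmetic (figures from the paragraph above).
queries_per_day = 1e9
wh_per_query = 0.34
servers = 3_200                  # assumed Nvidia DGX A100 servers
tokens_per_query = 500
seconds_per_day = 24 * 3600

daily_energy_mwh = queries_per_day * wh_per_query / 1e6                  # ≈ 340 MWh/day
avg_power_mw = daily_energy_mwh / 24                                     # ≈ 14 MW continuous draw
queries_per_server_per_s = queries_per_day / servers / seconds_per_day   # ≈ 3.6
tokens_per_server_per_s = queries_per_server_per_s * tokens_per_query    # ≈ 1,800

print(f"{daily_energy_mwh:.0f} MWh/day, {avg_power_mw:.0f} MW average draw, "
      f"{tokens_per_server_per_s:.0f} tokens/s per server")
```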

Stojkovic et al. (2024) were able to achieve a throughput of 6,000 tokens per second from Llama-2-70B on an Nvidia DGX H100 server with 8 H100 GPUs.

However, Jegham et al. (2025) found that three different OpenAI models generated between 75 and 200 tokens per second on average. It is, however, unclear how they arrived at this.

So it seems we cannot reject the idea that 3,200 servers might be able to handle 1 billion daily queries.

Why some experts are skeptical

Despite the supporting evidence, many remain cautious or critical of the 0.34 Wh figure, raising several key concerns. Let’s take a look at these.

1. OpenAI’s number might miss major parts of the system

I suspect the number only includes the energy used by the GPU servers themselves, and not the rest of the infrastructure, such as data storage, cooling systems, networking equipment, firewalls, electricity conversion losses, or backup systems. This is a common limitation in energy reporting across tech companies.

For instance, Meta has also reported GPU-only energy numbers in the past. But in real-world data centers, GPU power is only part of the full picture.
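To illustrate how much difference that can make, here is a minimal sketch that scales the GPU-only figure up to a whole-facility figure, assuming my suspicion above is right and using overhead factors of my own choosing (roughly 30 percent for non-GPU server components and a PUE of about 1.2):

```python
# Illustrative scaling of a GPU-only figure to a whole-facility figure.
gpu_only_wh = 0.34       # OpenAI's reported per-query figure
server_overhead = 1.3    # assumed: CPUs, RAM, storage, networking on the host servers
pue = 1.2                # assumed power usage effectiveness (cooling, conversion losses, etc.)

facility_wh = gpu_only_wh * server_overhead * pue   # ≈ 0.53 Wh per query
print(f"≈ {facility_wh:.2f} Wh per query at the facility level")
```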

2. Server estimates seem low compared to industry reports

Some commentators, such as GreenOps advocate Mark Butcher, argue that 3,200 GPU servers seems far too low to support all of ChatGPT’s users, especially if you consider global usage, high availability, and applications beyond casual chat (like coding or image analysis).

Other reports suggest that OpenAI uses tens or even hundreds of thousands of GPUs for inference. If that’s true, the total energy use could be much higher than what the 0.34 Wh/query number implies.

3. Lack of detail raises questions

Critics, e.g. David Mytton, also point out that OpenAI’s statement lacks basic context. For instance:

• What exactly is an “average” query? A single question, or a full conversation?
• Does this figure apply to just one model (e.g., GPT-3.5, GPT-4o) or an average across several?
• Does it include newer, more complex tasks like multimodal input (e.g., analyzing PDFs or generating images)?
• Is the water usage figure direct (used for cooling servers) or indirect (from electricity sources like hydro power)?
• What about carbon emissions? That depends heavily on the location and energy mix (see the small sketch below).

Without answers to these questions, it’s hard to know how much trust to place in the number or how to compare it to other AI systems.
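On the carbon point, here is a small sketch converting the per-query energy into emissions under a few assumed grid carbon intensities (the intensity values are rough, illustrative figures of my own choosing, not reported numbers):

```python
# Per-query CO2e under different grid carbon intensities (illustrative values).
kwh_per_query = 0.00034
grid_intensity_g_per_kwh = {     # assumed, order-of-magnitude figures
    "low-carbon grid": 50,
    "average grid": 300,
    "coal-heavy grid": 800,
}

for grid, intensity in grid_intensity_g_per_kwh.items():
    grams = kwh_per_query * intensity
    print(f"{grid}: {grams:.2f} g CO2e per query")
```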

Perspectives

Is big tech finally listening to our prayers?

OpenAI’s disclosure comes in the wake of Nvidia’s release of data about the embodied emissions of its GPUs, and Google’s blog post about the life cycle emissions of their TPU hardware. This could suggest that the companies are finally responding to the many calls that have been made for more transparency. Are we witnessing the dawn of a new era? Or is Sam Altman just playing tricks on us because it’s in his financial interest to downplay the climate impact of his company? I’ll leave that question as a thought experiment for the reader.

Inference vs. training

Historically, the numbers we have seen estimated and reported about AI’s energy consumption have related to the energy used to train AI models. And while the training stage can be very energy intensive, over time, serving billions of queries (inference) can actually use more total energy than training the model in the first place. My own estimates suggest that training GPT-4 may have used around 50–60 million kWh of electricity. With 0.34 Wh per query and 1 billion daily queries, the energy used to answer user queries would surpass the energy used for training after 150–200 days. This lends credibility to the idea that inference energy is worth measuring closely.
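The crossover arithmetic as a minimal sketch (the training figure is my own rough estimate, as noted above):

```python
# Days until cumulative inference energy surpasses the estimated training energy.
training_kwh_low, training_kwh_high = 50e6, 60e6   # rough GPT-4 training estimate (my own)
kwh_per_query = 0.00034
queries_per_day = 1e9

inference_kwh_per_day = kwh_per_query * queries_per_day      # ≈ 340,000 kWh/day
days_low = training_kwh_low / inference_kwh_per_day          # ≈ 147 days
days_high = training_kwh_high / inference_kwh_per_day        # ≈ 176 days
print(f"Crossover after roughly {days_low:.0f}-{days_high:.0f} days")
```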

Conclusion: A welcome first step, but far from the full picture

Just as we thought the debate about OpenAI’s energy use had gotten old, the notoriously closed company stirs it up with the disclosure of this figure. Many are excited that OpenAI has now entered the debate about the energy and water use of its products, and hope that this is the first step towards greater transparency about the resource draw and climate impact of big tech. But many are skeptical of OpenAI’s figure, and for good reason: it was disclosed as a parenthesis in a blog post about an entirely different topic, and, as detailed above, no context was given whatsoever.

    Although we may be witnessing a shift in the direction of extra transparency, we nonetheless want numerous info from OpenAI so as to have the ability to critically assess their 0.34 Wh determine. Till then, it needs to be taken not simply with a grain of salt, however with a handful.


That’s it! I hope you enjoyed the story. Let me know what you think!

Follow me for more on AI and sustainability, and feel free to connect with me on LinkedIn.



