    Can we fix AI’s evaluation crisis?

By Editor Times Featured | June 24, 2025


As a tech reporter, I often get asked questions like "Is DeepSeek actually better than ChatGPT?" or "Is the Anthropic model any good?" If I don't feel like turning it into an hour-long seminar, I'll usually give the diplomatic answer: "They're both solid in different ways."

Most people asking aren't defining "good" in any precise way, and that's fair. It's human to want to make sense of something new and seemingly powerful. But that simple question, "Is this model good?", is really just the everyday version of a much more complicated technical problem.

So far, the way we've tried to answer that question is through benchmarks. These give models a fixed set of questions to answer and grade them on how many they get right. But much like exams such as the SAT (an admissions test used by many US colleges), these benchmarks don't always reflect deeper abilities. Lately it feels as if a new AI model drops every week, and every time a company launches one, it comes with fresh scores showing it beating the capabilities of its predecessors. On paper, everything appears to be getting better all the time.
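At its core, that grading step is just accuracy over a fixed question set. Below is a minimal sketch of such a harness; the `model` callable and the toy question format are illustrative assumptions, not any particular benchmark's API.

```python
# Minimal sketch of a static benchmark harness: ask every question once,
# compare against the reference answer, report the fraction correct.
# `model` is any callable mapping a prompt string to an answer string
# (an assumption for illustration, not a specific vendor API).

def score_benchmark(model, questions):
    correct = 0
    for item in questions:
        prediction = model(item["question"]).strip().lower()
        if prediction == item["answer"].strip().lower():
            correct += 1
    return correct / len(questions)

if __name__ == "__main__":
    # Toy question set standing in for a real benchmark file.
    toy_set = [
        {"question": "What is 2 + 2?", "answer": "4"},
        {"question": "Capital of France?", "answer": "paris"},
    ]
    toy_model = lambda q: "4" if "2 + 2" in q else "paris"
    print(f"accuracy: {score_benchmark(toy_model, toy_set):.2%}")
```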

In practice, it's not so simple. Just as grinding for the SAT might boost your score without improving your critical thinking, models can be trained to optimize for benchmark results without actually getting smarter, as Russell Brandon explained in his piece for us. As OpenAI and Tesla AI veteran Andrej Karpathy recently put it, we're living through an evaluation crisis: our scoreboard for AI no longer reflects what we really want to measure.

Benchmarks have grown stale for a few key reasons. First, the industry has learned to "teach to the test," training AI models to score well rather than genuinely improve. Second, widespread data contamination means models may have already seen the benchmark questions, or even the answers, somewhere in their training data. And finally, many benchmarks are simply maxed out. On popular tests like SuperGLUE, models have already reached or surpassed 90% accuracy, making further gains feel more like statistical noise than meaningful improvement. At that point, the scores stop telling us anything useful. That's especially true in high-skill domains like coding, reasoning, and complex STEM problem-solving.
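To see why gains near the ceiling stop being informative, consider the sampling error on a finite question set. The sketch below uses an assumed 1,000-question benchmark (not any specific test) to estimate the uncertainty of an accuracy score; near 90%, a one-point gap between two models is roughly the size of that uncertainty.

```python
# Rough illustration of why small gains near a benchmark's ceiling are hard
# to trust: the standard error of an accuracy estimate over n questions is
# sqrt(p * (1 - p) / n). The 1,000-question size is an assumed example.
import math

def accuracy_standard_error(p: float, n: int) -> float:
    return math.sqrt(p * (1 - p) / n)

n_questions = 1000
for accuracy in (0.90, 0.91):
    se = accuracy_standard_error(accuracy, n_questions)
    print(f"accuracy {accuracy:.0%}: +/- {1.96 * se:.1%} (95% interval)")
# Both intervals come out to roughly +/- 1.8-1.9 points, so a 90% vs. 91%
# gap sits within the noise of a single evaluation run.
```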

However, a growing number of teams around the world are trying to tackle the AI evaluation crisis.

One result is a new benchmark called LiveCodeBench Pro. It draws problems from international algorithmic olympiads, competitions for elite high school and college programmers where participants solve challenging problems without external tools. The top AI models currently manage only about 53% on their first attempt at medium-difficulty problems and 0% on the hardest ones. These are tasks where human experts routinely excel.
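That "53% on the first attempt" figure is a pass-rate-style metric: generate one solution per problem and count the fraction that passes the hidden tests. A simplified sketch of that scoring loop is below; the generator and judge functions are placeholders for illustration, not LiveCodeBench Pro's actual harness.

```python
# Simplified first-attempt ("pass@1"-style) scoring for a coding benchmark:
# one generated solution per problem, judged against hidden test cases.
# `generate_solution` and `passes_all_tests` stand in for a model call and
# a sandboxed judge; they are not a real benchmark API.

def first_attempt_pass_rate(problems, generate_solution, passes_all_tests):
    solved = 0
    for problem in problems:
        candidate = generate_solution(problem["statement"])  # single attempt
        if passes_all_tests(candidate, problem["tests"]):
            solved += 1
    return solved / len(problems)

if __name__ == "__main__":
    toy_problems = [
        {"statement": "return n + 1", "tests": [(1, 2), (5, 6)]},
        {"statement": "return n * 2", "tests": [(2, 4)]},
    ]
    # Toy "model" that only ever produces an increment function.
    naive_generate = lambda statement: (lambda n: n + 1)
    toy_judge = lambda fn, tests: all(fn(x) == y for x, y in tests)
    print(first_attempt_pass_rate(toy_problems, naive_generate, toy_judge))  # 0.5
```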

Zihan Zheng, a junior at NYU and a world finalist in competitive coding, led the project to develop LiveCodeBench Pro with a team of olympiad medalists. They've published both the benchmark and a detailed study showing that top-tier models like GPT-4o mini and Google's Gemini 2.5 perform at a level comparable to the top 10% of human competitors. Across the board, Zheng observed a pattern: AI excels at planning and executing tasks, but it struggles with nuanced algorithmic reasoning. "It shows that AI is still far from matching the best human coders," he says.

LiveCodeBench Pro might define a new upper bar. But what about the floor? Earlier this month, a group of researchers from several universities argued that LLM agents should be evaluated primarily on the basis of their riskiness, not just how well they perform. In real-world, application-driven environments, especially with AI agents, unreliability, hallucinations, and brittleness are ruinous. One wrong move could spell disaster when money or safety is on the line.

There are other new attempts to tackle the problem. Some benchmarks, like ARC-AGI, now keep part of their data set private to prevent AI models from being optimized excessively for the test, a problem known as "overfitting." Meta's Yann LeCun has created LiveBench, a dynamic benchmark where questions evolve every six months. The goal is to evaluate models not just on knowledge but on adaptability.

Xbench, a Chinese benchmark project developed by HongShan Capital Group (formerly Sequoia China), is another one of these efforts. I just wrote about it in a story. Xbench was initially built in 2022, right after ChatGPT's launch, as an internal tool to evaluate models for investment research. Over time, the team expanded the system and brought in external collaborators. It just made parts of its question set publicly available last week.

Xbench is notable for its dual-track design, which tries to bridge the gap between lab-based tests and real-world utility. The first track evaluates technical reasoning skills by testing a model's STEM knowledge and its ability to carry out Chinese-language research. The second track aims to assess practical usefulness: how well a model performs on tasks in fields like recruitment and marketing. For example, one task asks an agent to identify five qualified battery engineer candidates; another has it match brands with relevant influencers from a pool of more than 800 creators.

The team behind Xbench has big ambitions. They plan to expand its testing capabilities into sectors like finance, law, and design, and they plan to update the test set quarterly to avoid stagnation.

This is something I often wonder about, because a model's hardcore reasoning ability doesn't necessarily translate into a fun, informative, and creative experience. Most queries from everyday users are probably not going to be rocket science. There isn't much research yet on how to effectively evaluate a model's creativity, but I'd love to know which model would be the best for creative writing or art projects.

Human preference testing has also emerged as an alternative to benchmarks. One increasingly popular platform is LMArena, which lets users submit questions and compare responses from different models side by side, and then pick which one they like best. Still, this method has its flaws. Users sometimes reward the answer that sounds more flattering or agreeable, even when it's wrong. That can incentivize "sweet-talking" models and skew results in favor of pandering.
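Arena-style platforms typically aggregate those pairwise votes into a leaderboard using ratings in the Elo/Bradley-Terry family. The exact details vary by platform, so the sketch below is a generic Elo update with assumed parameters, not LMArena's actual implementation.

```python
# Generic Elo-style update for pairwise model comparisons, as a sketch of
# how side-by-side votes can become a leaderboard. The K-factor and starting
# ratings are assumed values, not the settings of any real arena.

def expected_win_prob(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_elo(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
    expected = expected_win_prob(ratings[winner], ratings[loser])
    ratings[winner] += k * (1.0 - expected)
    ratings[loser] -= k * (1.0 - expected)

if __name__ == "__main__":
    ratings = {"model_a": 1000.0, "model_b": 1000.0}
    # Simulated votes: users preferred model_a in 3 of 4 head-to-head prompts.
    votes = [("model_a", "model_b")] * 3 + [("model_b", "model_a")]
    for winner, loser in votes:
        update_elo(ratings, winner, loser)
    print(ratings)
```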

AI researchers are beginning to realize, and admit, that the status quo of AI testing cannot continue. At the recent CVPR conference, NYU professor Saining Xie drew on historian James Carse's Finite and Infinite Games to critique the hypercompetitive culture of AI research. An infinite game, he noted, is open-ended; the goal is to keep playing. But in AI, a dominant player often drops a big result, triggering a wave of follow-up papers chasing the same narrow topic. This race-to-publish culture puts enormous pressure on researchers and rewards speed over depth, short-term wins over long-term insight. "If academia chooses to play a finite game," he warned, "it will lose everything."

I found his framing powerful, and maybe it applies to benchmarks, too. So, do we have a truly comprehensive scoreboard for how good a model is? Not really. Many dimensions (social, emotional, interdisciplinary) still evade evaluation. But the wave of new benchmarks hints at a shift. As the field evolves, a bit of skepticism is probably healthy.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.


