    Building Human-In-The-Loop Agentic Workflows | Towards Data Science

    By Editor Times Featured | March 25, 2026 | 11 Mins Read


    Large language models (LLMs) like OpenAI’s GPT-5.4 and Anthropic’s Opus 4.6 have demonstrated excellent capabilities in executing long-running agentic tasks.

    Consequently, we see increased use of LLM agents across individual and enterprise settings to accomplish complex tasks, such as running financial analyses, building apps, and conducting extensive research.

    These agents, whether part of a highly autonomous setup or a pre-defined workflow, can execute multi-step tasks using tools to achieve goals with minimal human oversight.

    However, ‘minimal’ does not mean zero human oversight.

    On the contrary, human review remains crucial due to LLMs’ inherent probabilistic nature and the potential for errors.

    These errors can propagate and compound along the workflow, especially when we string numerous agentic components together.

    You may have noticed the impressive progress agents have made in the coding domain. The reason is that code is relatively easy to verify (i.e., it either runs or fails, and feedback is visible immediately).

    But in areas like content creation, research, or decision-making, correctness is often subjective and harder to evaluate automatically.

    That is why human-in-the-loop (HITL) design remains critical.

    In this article, we will walk through how to use LangGraph to set up a human-in-the-loop agentic workflow for content generation and publication on Bluesky.

    Contents

    (1) Primer to LangGraph
    (2) Example Workflow
    (3) Key Concepts
    (4) Code Walkthrough
    (5) Best Practices of Interrupts

    You can find the accompanying GitHub repo here.


    (1) Primer to LangGraph

    LangGraph (part of the LangChain ecosystem) is a low-level agent orchestration framework and runtime for building agentic workflows.

    It is my go-to framework given its high degree of control and customizability, which is essential for production-grade solutions.

    While LangChain provides a middleware object (HumanInTheLoopMiddleware) to get started easily with human oversight in agent calls, it operates at a high level of abstraction that masks the underlying mechanics.

    LangGraph, in contrast, does not abstract away the prompts or architecture, thereby giving us the finer degree of control that we need. It explicitly lets us define:

    • How data flows between steps
    • Where decisions and code executions happen
    • Where human intervention is required

    Therefore, we will use LangGraph to demonstrate the HITL concept within an agentic workflow.

    It is also useful to distinguish between agentic workflows and autonomous AI agents.

    Agentic workflows have predetermined paths and are designed to execute in a defined order, with LLMs and/or agents integrated into one or more components. AI agents, on the other hand, autonomously plan, execute, and iterate towards a goal.

    In this article, we focus on agentic workflows, in which we deliberately insert human checkpoints into a pre-defined flow.

    Comparing agentic workflows and LLM agents | Image used under license

    (2) Example Workflow

    For our example, we will build a social media content generation workflow as follows:

    Content generation workflow | Image by author
    1. User enters a topic of interest (e.g., “latest news about Anthropic”).
    2. The web search node uses the Tavily tool to search online for articles matching the topic.
    3. The top search result is selected and fed into an LLM in the content-creation node to generate a social media post.
    4. In the review node, there are two human review checkpoints:
      (i) Present generated content for humans to approve, reject, or edit;
      (ii) Upon approval, the workflow triggers the Bluesky API tool and requests final confirmation before posting it online.
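    Before bringing in LangGraph, the shape of this workflow can be sketched in plain Python. Every function below is a hypothetical stand-in for the real node (the actual workflow uses Tavily, an LLM, and the Bluesky API):

```python
from typing import Optional

# Hypothetical stand-ins for the real nodes.
def web_search(topic: str) -> str:
    return f"Top article about {topic}"

def create_post(article: str) -> str:
    return f"Draft post summarizing: {article}"

def human_review(draft: str, decision: str, edited: Optional[str] = None) -> Optional[str]:
    # Mirrors the three review outcomes: approve / edit / reject.
    if decision == "approve":
        return draft
    if decision == "edit":
        return edited
    return None  # rejected -> nothing gets published

def run_pipeline(topic: str, decision: str, edited: Optional[str] = None) -> Optional[str]:
    article = web_search(topic)
    draft = create_post(article)
    return human_review(draft, decision, edited)

print(run_pipeline("latest news about Anthropic", "approve"))
```

    The point of the sketch is the review gate sitting between generation and publication; the rest of the article shows how LangGraph implements that gate with interrupts.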

    Here is what it looks like when run from the terminal:

    Workflow run in terminal | Image by author

    And here is the live post on my Bluesky profile:

    Bluesky social media post generated from workflow | Image by author

    Bluesky is a social platform similar to Twitter (X), and it is chosen for this demo because its API is much easier to access and use.


    (3) Key Concepts

    The core mechanism behind the HITL setup in LangGraph is the concept of interrupts.

    Interrupts (using interrupt() and Command in LangGraph) enable us to pause graph execution at specific points, display certain information to the human, and await their input before resuming the workflow.

    Command is a versatile object that allows us to update the graph state (update), specify the next node to execute (goto), or capture the value to resume graph execution with (resume).

    Here is what the flow looks like:

    (1) Upon reaching the interrupt() function, execution pauses, and the payload passed into it is shown to the user. The payload passed into interrupt() should typically be in JSON or string format, e.g.,

    decision = interrupt("Should we get KFC for lunch?")  # String shown to user

    (2) After the user responds, we pass the response value to the graph to resume execution. This involves using Command and its resume parameter as part of re-invoking the graph:

    if human_response == "yes":
        return graph.invoke(Command(resume="KFC"))
    else:
        return graph.invoke(Command(resume="McDonalds"))

    (3) The response value in resume is returned in the decision variable, which the node uses for the rest of the node execution and the subsequent graph flow:

    if decision == "KFC":
        return Command(goto="kfc_order_node", update={"lunch_choice": "KFC"})
    else:
        return Command(goto="mcd_order_node", update={"lunch_choice": "McDonalds"})

    Interrupts are dynamic and can be placed anywhere in the code, unlike static breakpoints, which are fixed before or after specific nodes.

    That said, we typically place interrupts either within the nodes or within the tools called during graph execution.


    Finally, let’s talk about checkpointers. When a workflow pauses at an interrupt, we need a way to save its current state so it can resume later.

    We therefore need a checkpointer to persist the state so that it is not lost during the interrupt pause. Think of a checkpoint as a snapshot of the graph state at a given point in time.

    For development, it is acceptable to save the state in memory with the InMemorySaver checkpointer.

    For production, it is better to use stores like Postgres or Redis. With that in mind, we will use the SQLite checkpointer in this example instead of an in-memory store.

    To ensure the graph resumes exactly at the point where the interrupt occurred, we need to pass and use the same thread ID.

    Think of a thread as a single execution session (like a separate individual conversation), where each one has a unique ID and maintains its own state and history.

    The thread ID is passed into config on each graph invocation so that LangGraph knows which state to resume from after the interrupt.
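    Concretely, the thread ID travels inside the configurable section of the config dict passed to graph.invoke(); generating it with uuid4, as below, is just one option:

```python
import uuid

# The same config must accompany every graph.invoke() call in a session so
# that LangGraph knows which checkpointed state to resume after an interrupt.
config = {"configurable": {"thread_id": str(uuid.uuid4())}}

# First call may pause at an interrupt:
#   result = graph.invoke({"query": "..."}, config)
# Resume with the SAME thread_id:
#   result = graph.invoke(Command(resume=decision), config)
print(config["configurable"]["thread_id"])
```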

    Now that we have covered the concepts of interrupts, Command, checkpoints, and threads, let’s get into the code walkthrough.


    As the focus will be on the human-in-the-loop mechanics, we will not be covering the complete code setup. Visit the GitHub repo for the full implementation.

    (4) Code Walkthrough

    (4.1) Initial Setup

    We start by installing the required dependencies and generating API keys for Bluesky, OpenAI, LangChain, LangGraph, and Tavily.

    # requirements.txt
    langchain-openai>=1.1.9
    langgraph>=1.0.8
    langgraph-checkpoint-sqlite>=3.0.3
    openai>=2.20.0
    tavily-python>=0.7.21

    # env.example
    export OPENAI_API_KEY=your_openai_api_key
    export TAVILY_API_KEY=your_tavily_api_key
    export BLUESKY_HANDLE=yourname.bsky.social
    export BLUESKY_APP_PASSWORD=your_bluesky_app_password

    (4.2) Define State

    We set up the State, which is the shared, structured data object serving as the graph’s central memory. It includes fields that capture key information, like post content and approval status.

    The post_data key is where the generated post content will be stored.
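    A minimal sketch of such a state using TypedDict; apart from post_data, the field names here are hypothetical and may differ from the repo’s actual State:

```python
from typing import Optional, TypedDict

class WorkflowState(TypedDict, total=False):
    query: str                  # user's topic of interest
    search_results: list        # articles returned by the web search node
    post_data: Optional[str]    # generated post content awaiting review
    approved: bool              # outcome of the human review checkpoint

state: WorkflowState = {"query": "latest news about Anthropic", "post_data": None}
print(sorted(state))
```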


    (4.3) Interrupt at node level

    We mentioned earlier that interrupts can occur at the node level or within tool calls. Let us see how the former works by setting up the human review node.

    The goal of the review node is to pause execution and present the draft content to the user for review.

    Here we see the interrupt() in action (lines 8 to 13), where the graph execution pauses at the first section of the node function.

    The details key passed into interrupt() contains the generated content, while the action key triggers a handler function (handle_content_interrupt()) to support the review:

    The generated content is printed in the terminal for the user to view, and they can approve it as-is, reject it outright, or edit it directly in the terminal before approving.

    Based on the decision, the handler function returns one of three values:

    • True (approved),
    • False (rejected), or
    • A string value corresponding to the user-edited content (edited).

    This return value is passed back to the review node using graph.invoke(Command(resume=...)), which resumes execution from where interrupt() was called (line 15) and determines which node to go to next: approve, reject, or edit content and proceed to approve.
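    The handler’s decision logic can be sketched as plain Python; the function name follows the article, while the input strings "approve" / "reject" / "edit" are illustrative:

```python
from typing import Union

def handle_content_interrupt(choice: str, edited_text: str = "") -> Union[bool, str]:
    """Map the reviewer's terminal input to the value passed back to the
    graph via Command(resume=...)."""
    if choice == "approve":
        return True          # publish the draft as-is
    if choice == "edit" and edited_text:
        return edited_text   # user-edited content replaces the draft
    return False             # rejected

print(handle_content_interrupt("edit", "Shorter, punchier post"))
```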


    (4.4) Interrupt at tool level

    Interrupts can also be defined at the tool call level. This is demonstrated in the next human review checkpoint, in the approve node, before the content is published online on Bluesky.

    Instead of placing interrupt() inside a node, we place it within the publish_post tool that creates posts via the Bluesky API:

    Just like what we saw at the node level, we call a handler function (handle_publish_interrupt) to capture the human decision:

    The return value from this review step is either:

    • {"action": "confirm"}, or
    • {"action": "cancel"}

    The latter part of the code (i.e., from line 19) in the publish_post tool uses this return value to determine whether or not to proceed with post publication on Bluesky.
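    How the tool might gate publication on that return value, sketched with a stub in place of the real API call (publish_post mirrors the article’s tool name; post_to_bluesky is a placeholder):

```python
def post_to_bluesky(text: str) -> str:
    # Placeholder for the real Bluesky API call.
    return f"published: {text}"

def publish_post(text: str, review: dict) -> str:
    # review is the value returned through Command(resume=...),
    # i.e. {"action": "confirm"} or {"action": "cancel"}.
    if review.get("action") == "confirm":
        return post_to_bluesky(text)
    return "publication cancelled"

print(publish_post("Hello Bluesky", {"action": "cancel"}))
```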


    (4.5) Setup Graph with Checkpointer

    Next, we connect the nodes into a graph for compilation and introduce a SQLite checkpointer to capture snapshots of the state at each interrupt.

    SQLite by default only allows the thread that created the database connection to use it. Since LangGraph uses a thread pool for checkpoint writes, we need to set check_same_thread=False to allow those threads to access the connection too.
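    The connection setup comes down to a single flag on sqlite3.connect. An in-memory database is used below for illustration, and the SqliteSaver wiring is indicated in comments as it would appear with langgraph-checkpoint-sqlite installed:

```python
import sqlite3

# check_same_thread=False lets LangGraph's background writer threads share
# this connection; in the real workflow the target would be a file such as
# "checkpoints.db" rather than an in-memory database.
conn = sqlite3.connect(":memory:", check_same_thread=False)

# With langgraph-checkpoint-sqlite installed, the connection is handed to
# the graph at compile time, e.g.:
#   from langgraph.checkpoint.sqlite import SqliteSaver
#   graph = builder.compile(checkpointer=SqliteSaver(conn))
print(conn.execute("SELECT 1").fetchone())
```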


    (4.6) Setup Full Workflow with Config

    With the graph ready, we now place it into a workflow that kickstarts the content generation pipeline.

    This workflow includes configuring a thread ID, which is passed to every graph.invoke(). This ID is the link that ties the invocations together, so that the graph can pause at an interrupt and resume from where it left off.

    You may have noticed the __interrupt__ key in the code above. It is simply a special key that LangGraph adds to the result whenever an interrupt() is hit.

    In other words, it is the key signal indicating that graph execution has paused and is waiting for human input before continuing.

    By placing __interrupt__ in a while-loop condition, the loop keeps checking whether an interrupt is still ongoing. Once the interrupt is resolved, the key disappears and the while loop exits.
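    The resume loop can be sketched with a stub that pauses exactly once; FakeGraph is purely illustrative, but a real compiled graph surfaces the same __interrupt__ key in its result:

```python
class FakeGraph:
    """Stub that mimics a graph pausing once at an interrupt."""
    def __init__(self):
        self.paused = False

    def invoke(self, payload, config=None):
        if not self.paused:
            self.paused = True
            return {"__interrupt__": [{"value": "Approve this draft?"}]}
        return {"post_data": "final post"}  # resumed run completes

graph = FakeGraph()
result = graph.invoke({"query": "latest news about Anthropic"})
while "__interrupt__" in result:
    # In the real workflow, this is where the human decision is collected.
    decision = True
    result = graph.invoke({"resume": decision})  # real code: Command(resume=decision)
print(result)
```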

    With the workflow complete, we can run it like this:

    run_hitl_workflow(query="latest news about Anthropic")

    (5) Best Practices of Interrupts

    While interrupts are powerful in enabling HITL workflows, they can be disruptive if used incorrectly.

    As such, I recommend reading this LangGraph documentation. Here are some practical rules to keep in mind:

    • Don’t wrap interrupt calls in try/except blocks, or they won’t pause execution properly
    • Keep interrupt calls in the same order every time, and don’t skip or rearrange them
    • Only pass JSON-safe values into interrupts and avoid complex objects
    • Make sure any code before an interrupt can safely run multiple times (i.e., idempotency), or move it after the interrupt

    For example, I faced an issue in the web search node where I placed an interrupt right after the Tavily search. The intention was to pause and allow users to review the search results for content generation.

    But because interrupts work by rerunning the node they were called from, the node simply reran the web search and passed along a different set of search results than the ones I had approved earlier.

    Therefore, interrupts work best as a gate before an action; if we use them after a non-deterministic step (like search), we need to persist the result or risk getting something different on resume.
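    One fix is to persist the search output in state before any interrupt, so a rerun of the node reuses it. The counter below stands in for the non-deterministic Tavily call:

```python
calls = {"n": 0}

def search(topic):
    # Stand-in for a non-deterministic Tavily search.
    calls["n"] += 1
    return [f"result {calls['n']} for {topic}"]

def search_node(state):
    # Persist the result in state before any interrupt, so that rerunning
    # the node on resume does not trigger a fresh (different) search.
    if "search_results" not in state:
        state["search_results"] = search(state["query"])
    return state

state = {"query": "Anthropic"}
state = search_node(state)   # first run performs the search
state = search_node(state)   # rerun on resume reuses the cached results
print(state["search_results"], calls["n"])
```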


    Wrapping It Up

    Human review can seem like a bottleneck in agentic tasks, but it remains critical, especially in domains where outcomes are subjective or hard to verify.

    LangGraph makes it easy to build HITL workflows with interrupts and checkpointing.

    The challenge, therefore, is deciding where to place these human decision points to strike a balance between oversight and efficiency.


