
    Deploy Your AI Assistant to Monitor and Debug n8n Workflows Using Claude and MCP

By Editor Times Featured · November 12, 2025 · 21 Min Read


If you run n8n workflows in production, you know the stress of hearing that a process failed and needing to dig through logs to find the root cause.

User: Samir, your automation doesn't work anymore, I didn't receive my notification!

The first step is to open your n8n interface and review the last executions to identify the issues.

Example of key workflows that failed during the night – (Image by Samir Saci)

After a few minutes, you find yourself jumping between executions, comparing timestamps and reading JSON errors to understand where things broke.

Example of debugging a failed execution – (Image by Samir Saci)

What if an agent could tell you why your workflow failed at 3 AM without you having to dig through the logs?

It is possible!

As an experiment, I connected the n8n API, which provides access to the execution logs of my instance, to an MCP server powered by Claude.

n8n workflow with a webhook to collect information from my instance – (Image by Samir Saci)

The result is an AI assistant that can monitor workflows, analyse failures, and explain what went wrong in natural language.

Example of root cause analysis performed by the agent – (Image by Samir Saci)

In this article, I will walk you through the step-by-step process of building this system.

The first section will present a real example from my own n8n instance, where several workflows failed during the night.

Failed executions listed by hour – (Image by Samir Saci)

We will use this case to see how the agent identifies issues and explains their root causes.

Then, I will detail how I connected my n8n instance's API to the MCP server using a webhook, to enable Claude Desktop to fetch execution data for natural-language debugging.

Workflow with webhook to connect to my instance – (Image by Author)

The webhook includes three functions:

• Get Active Workflows: provides the list of all active workflows
• Get Last Executions: includes information about the last n executions
• Get Executions Details (Status = Error): details of failed executions, formatted to support root cause analyses
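All three functions are served by the same webhook endpoint; the caller selects one via an "action" field in the JSON body of the POST request. Here is a minimal sketch of the request payloads (action names taken from the server code shown later in the article; the helper function is mine, for illustration):

```python
# Sketch: each webhook function is selected by an "action" field in the
# JSON body posted to the single webhook URL.
import json

def build_payload(action: str, **params) -> str:
    """Serialise the webhook request body for a given action."""
    return json.dumps({"action": action, **params})

print(build_payload("get_active_workflows"))
print(build_payload("get_workflow_executions", limit=25))
print(build_payload("get_error_executions", workflow_id="7uvA2XQPMB5l4kI5"))
```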

You can find the complete tutorial, including the n8n workflow template and the MCP server source code, linked in this article.

Demonstration: Using AI to Analyse Failed n8n Executions

Let us look together at one of my n8n instances, which runs several workflows that fetch event information from different cities around the world.

These workflows help business and networking communities discover interesting events to attend and learn from.

Example of automated notifications received on Telegram using these workflows – (Image by Samir Saci)

To test the solution, I will start by asking the agent to list the active workflows.

Step 1: How many workflows are active?

Initial question – (Image by Samir Saci)

Based on the question alone, Claude understood that it needed to interact with the n8n-monitor tool, which was built using an MCP server.

Here is the n8n-monitor tool available to Claude – (Image by Samir Saci)

From there, it automatically selected the corresponding function, Get Active Workflows, to retrieve the list of active automations from my n8n instance.

All the active workflows – (Image by Samir Saci)

This is where you start to sense the power of the model.

It automatically classified the workflows based on their names:

• 8 workflows that connect to APIs to fetch events and process them
• 3 other workflows that are work-in-progress, including the one used to fetch the logs

Short unrequested analysis by the agent based on the extracted data – (Image by Samir Saci)

This marks the beginning of the analysis; all these insights will be used in the root cause analysis.

Step 2: Analyse the last n executions

At this stage, we can begin asking Claude to retrieve the latest executions for analysis.

Request to analyse the last 25 executions – (Image by Samir Saci)

Thanks to the context provided in the docstrings, which I will explain in the next section, Claude understood that it needed to call get_workflow_executions.

It will receive a summary of the executions, with the percentage of failures and the number of workflows impacted by these failures.

    {
      "summary": {
        "totalExecutions": 25,
        "successfulExecutions": 22,
        "failedExecutions": 3,
        "failureRate": "12.00%",
        "successRate": "88.00%",
        "totalWorkflowsExecuted": 7,
        "workflowsWithFailures": 1
      },
      "executionModes": {
        "webhook": 7,
        "trigger": 18
      },
      "timing": {
        "averageExecutionTime": "15.75 seconds",
        "maxExecutionTime": "107.18 seconds",
        "minExecutionTime": "0.08 seconds",
        "timeRange": {
          "from": "2025-10-24T06:14:23.127Z",
          "to": "2025-10-24T11:11:49.890Z"
        }
      },
    [...]

This is the first thing it will share with you; it provides a clear overview of the situation.
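As a quick sanity check, the percentages in the summary follow directly from the raw counts (numbers taken from the excerpt above):

```python
# Recompute the summary percentages from the raw execution counts.
total_executions = 25
failed_executions = 3

failure_rate = f"{failed_executions / total_executions:.2%}"
success_rate = f"{(total_executions - failed_executions) / total_executions:.2%}"

print(failure_rate)  # 12.00%
print(success_rate)  # 88.00%
```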

Part I – Overall Analysis and Alerting – (Image by Samir Saci)

In the second part of the outputs, you will find a detailed breakdown of the failures for each impacted workflow.

      "failureAnalysis": {
        "workflowsImpactedByFailures": [
          "7uvA2XQPMB5l4kI5"
        ],
        "failedExecutionsByWorkflow": {
          "7uvA2XQPMB5l4kI5": {
            "workflowId": "7uvA2XQPMB5l4kI5",
            "failures": [
              {
                "id": "13691",
                "startedAt": "2025-10-24T11:00:15.072Z",
                "stoppedAt": "2025-10-24T11:00:15.508Z",
                "mode": "trigger"
              },
              {
                "id": "13683",
                "startedAt": "2025-10-24T09:00:57.274Z",
                "stoppedAt": "2025-10-24T09:00:57.979Z",
                "mode": "trigger"
              },
              {
                "id": "13677",
                "startedAt": "2025-10-24T07:00:57.167Z",
                "stoppedAt": "2025-10-24T07:00:57.685Z",
                "mode": "trigger"
              }
            ],
            "failureCount": 3
          }
        },
        "recentFailures": [
          {
            "id": "13691",
            "workflowId": "7uvA2XQPMB5l4kI5",
            "startedAt": "2025-10-24T11:00:15.072Z",
            "mode": "trigger"
          },
          {
            "id": "13683",
            "workflowId": "7uvA2XQPMB5l4kI5",
            "startedAt": "2025-10-24T09:00:57.274Z",
            "mode": "trigger"
          },
          {
            "id": "13677",
            "workflowId": "7uvA2XQPMB5l4kI5",
            "startedAt": "2025-10-24T07:00:57.167Z",
            "mode": "trigger"
          }
        ]
      },

As a user, you now have visibility into the impacted workflows, including details of the failure occurrences.

Part II – Failure Analysis & Alerting – (Image by Samir Saci)

In this specific case, the workflow "Bangkok Meetup" is triggered every hour.

What we can see is that we had issues three times (out of five executions) over the last five hours.

Note: We can ignore the last sentence, as the agent does not yet have access to the execution details.

The last section of the outputs includes an analysis of the overall performance of the workflows.

     "workflowPerformance": {
        "allWorkflowMetrics": {
          "CGvCrnUyGHgB7fi8": {
            "workflowId": "CGvCrnUyGHgB7fi8",
            "totalExecutions": 7,
            "successfulExecutions": 7,
            "failedExecutions": 0,
            "successRate": "100.00%",
            "failureRate": "0.00%",
            "lastExecution": "2025-10-24T11:11:49.890Z",
            "executionModes": {
              "webhook": 7
            }
          },
    [... other workflows ...]
    ,
        "topProblematicWorkflows": [
          {
            "workflowId": "7uvA2XQPMB5l4kI5",
            "totalExecutions": 5,
            "successfulExecutions": 2,
            "failedExecutions": 3,
            "successRate": "40.00%",
            "failureRate": "60.00%",
            "lastExecution": "2025-10-24T11:00:15.072Z",
            "executionModes": {
              "trigger": 5
            }
          },
          {
            "workflowId": "CGvCrnUyGHgB7fi8",
            "totalExecutions": 7,
            "successfulExecutions": 7,
            "failedExecutions": 0,
            "successRate": "100.00%",
            "failureRate": "0.00%",
            "lastExecution": "2025-10-24T11:11:49.890Z",
            "executionModes": {
              "webhook": 7
            }
          },
    [... other workflows ...]
          }
        ]
      }

This detailed breakdown can help you prioritise maintenance in case you have several workflows failing.

Part III – Performance Ranking – (Image by Samir Saci)
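The "topProblematicWorkflows" ranking can be reproduced in a few lines; here is a minimal sketch using the counts from the two workflows shown in the excerpt (field names taken from the JSON above):

```python
# Rank workflows by failure rate, highest first, as in
# "topProblematicWorkflows".
metrics = {
    "7uvA2XQPMB5l4kI5": {"totalExecutions": 5, "failedExecutions": 3},
    "CGvCrnUyGHgB7fi8": {"totalExecutions": 7, "failedExecutions": 0},
}

def failure_rate(m: dict) -> float:
    """Fraction of failed executions, 0.0 when there were no executions."""
    return m["failedExecutions"] / m["totalExecutions"] if m["totalExecutions"] else 0.0

ranking = sorted(metrics.items(), key=lambda kv: failure_rate(kv[1]), reverse=True)
print(ranking[0][0])  # the workflow with the highest failure rate
```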

In this specific example, I have only a single failing workflow, which is the Ⓜ️ Bangkok Meetup.

What if I want to know when the issues started?

Don't worry, I have added a section with the details of the executions hour by hour.

      "timeSeriesData": {
        "2025-10-24T11:00": {
          "total": 5,
          "success": 4,
          "error": 1
        },
        "2025-10-24T10:00": {
          "total": 6,
          "success": 6,
          "error": 0
        },
        "2025-10-24T09:00": {
          "total": 3,
          "success": 2,
          "error": 1
        },
        "2025-10-24T08:00": {
          "total": 3,
          "success": 3,
          "error": 0
        },
        "2025-10-24T07:00": {
          "total": 3,
          "success": 2,
          "error": 1
        },
        "2025-10-24T06:00": {
          "total": 5,
          "success": 5,
          "error": 0
        }
      }

You just have to let Claude create a nice visual like the one below.

Analysis by Hour – (Image by Samir Saci)
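If you prefer to stay in code, the same hourly breakdown can be turned into a quick text chart; here is a minimal sketch using the numbers from the timeSeriesData excerpt above:

```python
# One row per hour: '#' per successful execution, 'x' per error.
def render_hour(hour: str, total: int, errors: int) -> str:
    return f"{hour} " + "#" * (total - errors) + "x" * errors

series = {
    "06:00": (5, 0), "07:00": (3, 1), "08:00": (3, 0),
    "09:00": (3, 1), "10:00": (6, 0), "11:00": (5, 1),
}
for hour, (total, errors) in series.items():
    print(render_hour(hour, total, errors))
```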

Let me remind you here that I did not provide Claude with any suggestion on how to present the results; this is all its own initiative!

Impressive, no?

Step 3: Root Cause Analysis

Now that we know which workflows have issues, we should search for the root cause(s).

Claude should normally call the Get Error Executions function to retrieve the details of executions with failures.

For your information, the failure of this workflow is due to an error in the node JSON Tech, which processes the output of the API call.

• Meetup Tech sends an HTTP query to the Meetup API
• The response is processed by the Result Tech node
• JSON Tech is supposed to transform this output into a structured JSON

Workflow with the failing node JSON Tech – (Image by Samir Saci)

Here is what happens when everything goes well.

Example of good inputs for the node JSON Tech – (Image by Samir Saci)

However, it can happen that the API call fails and the JavaScript node receives an error, as the input is not in the expected format.

Note: This issue has since been corrected in production (the code node is now more robust), but I kept it here for the demo.
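The production fix itself is not shown in this article, but the idea can be sketched as follows (a hypothetical helper, written in Python for illustration, whereas the actual n8n code node runs JavaScript): instead of letting a malformed API response crash the node, wrap any item whose json property is not an object.

```python
# Defensive sketch: n8n expects every item to carry a "json" key that
# points to an object; otherwise it raises
# "A 'json' property isn't an object".
def normalise_items(raw_items: list) -> list:
    safe = []
    for item in raw_items:
        payload = item.get("json") if isinstance(item, dict) else None
        if not isinstance(payload, dict):
            # API error or unexpected format: wrap it instead of crashing
            payload = {"error": "unexpected input", "raw": payload}
        safe.append({"json": payload})
    return safe

print(normalise_items([{"json": {"event": "Bangkok Meetup"}}, {"json": "oops"}]))
```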

Let us see if Claude can locate the root cause.

Here is the output of the Get Error Executions function.

    {
      "workflow_id": "7uvA2XQPMB5l4kI5",
      "workflow_name": "Ⓜ️ Bangkok Meetup",
      "error_count": 5,
      "errors": [
        {
          "id": "13691",
          "workflow_name": "Ⓜ️ Bangkok Meetup",
          "status": "error",
          "mode": "trigger",
          "started_at": "2025-10-24T11:00:15.072Z",
          "stopped_at": "2025-10-24T11:00:15.508Z",
          "duration_seconds": 0.436,
          "finished": false,
          "retry_of": null,
          "retry_success_id": null,
          "error": {
            "message": "A 'json' property isn't an object [item 0]",
            "description": "In the returned data, every key named 'json' must point to an object.",
            "http_code": null,
            "level": "error",
            "timestamp": null
          },
          "failed_node": {
            "name": "JSON Tech",
            "type": "n8n-nodes-base.code",
            "id": "dc46a767-55c8-48a1-a078-3d401ea6f43e",
            "position": [
              -768,
              -1232
            ]
          },
          "trigger": {}
        },
    [... 4 other errors ...]
      ],
      "summary": {
        "total_errors": 5,
        "error_patterns": {
          "A 'json' property isn't an object [item 0]": {
            "count": 5,
            "executions": [
              "13691",
              "13683",
              "13677",
              "13660",
              "13654"
            ]
          }
        },
        "failed_nodes": {
          "JSON Tech": 5
        },
        "time_range": {
          "oldest": "2025-10-24T05:00:57.105Z",
          "latest": "2025-10-24T11:00:15.072Z"
        }
      }
    }

Claude now has access to the details of the executions, with the error message and the impacted nodes.

Analysis of the errors on the last five executions – (Image by Samir Saci)

In the response above, you can see that Claude summarised the outputs of several executions in a single analysis.

We now know that:

• Errors occurred every hour except at 08:00 am
• Each time, the same node, called "JSON Tech", is impacted
• The error occurs shortly after the workflow is triggered
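This recurrence check mirrors the error_patterns block of the webhook output; here is a minimal sketch of the grouping (sample records adapted from the executions above):

```python
# Group failures by error message to expose recurring patterns.
from collections import Counter

errors = [
    {"id": "13691", "message": "A 'json' property isn't an object [item 0]"},
    {"id": "13683", "message": "A 'json' property isn't an object [item 0]"},
    {"id": "13677", "message": "A 'json' property isn't an object [item 0]"},
]
patterns = Counter(e["message"] for e in errors)
message, count = patterns.most_common(1)[0]
print(f"{count}x: {message}")
```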

This descriptive analysis is completed by the beginning of a diagnosis.

Diagnosis – (Image by Samir Saci)

This statement is not incorrect, as evidenced by the error message in the n8n UI.

Wrong inputs for the JSON Tech node – (Image by Samir Saci)

However, due to the limited context, Claude starts to offer suggestions to fix the workflow that are not correct.

Proposed fix in the JSON Tech node – (Image by Samir Saci)

In addition to the code correction, it provides an action plan.

Action items prepared by Claude – (Image by Samir Saci)

As I know that the issue is not (only) in the code node, I wanted to guide Claude in the root cause analysis.

Challenging its conclusion – (Image by Samir Saci)

It finally challenged the initial proposal of the resolution and began to share assumptions about the root cause(s).

Corrected analysis – (Image by Samir Saci)

This gets closer to the actual root cause, providing enough insights for us to start exploring the workflow.

Proposed fix – (Image by Samir Saci)

The revised fix is now better, as it considers the possibility that the issue comes from the node input data.

For me, this is the best I could expect from Claude, considering the limited information it has at hand.

Conclusion: Value Proposition of This Tool

This simple experiment demonstrates how an AI agent powered by Claude can extend beyond basic monitoring to deliver real operational value.

Before manually checking executions and logs, you can first converse with your automation system to ask what failed, why it failed, and receive context-aware explanations within seconds.

This will not replace you entirely, but it can accelerate the root cause analysis process.

In the next section, I will briefly introduce how I set up the MCP server to connect Claude Desktop to my instance.

Building a Local MCP Server to Connect Claude Desktop to a FastAPI Microservice

To equip Claude with the three functions available in the webhook (Get Active Workflows, Get Workflow Executions and Get Error Executions), I have implemented an MCP server.

MCP server connecting the Claude Desktop UI to our workflow – (Image by Samir Saci)

In this section, I will briefly introduce the implementation, focusing only on Get Active Workflows and Get Workflow Executions, to demonstrate how I explain the usage of these tools to Claude.

For a complete and detailed introduction to the solution, including instructions on how to deploy it on your machine, I invite you to follow this tutorial on my YouTube channel.

There you will also find the MCP server source code and the n8n workflow of the webhook.

Create a Class to Query the Workflow

Before examining how to set up the three different tools, let me introduce the utility class, which is defined with all the functions needed to interact with the webhook.

You can find it in the Python file: ./utils/n8n_monitor_sync.py

    import logging
    import os
    from datetime import datetime, timedelta
    from typing import Any, Dict, Optional
    import requests
    import traceback

    logger = logging.getLogger(__name__)


    class N8nMonitor:
        """Handler for n8n monitoring operations - synchronous version"""

        def __init__(self):
            self.webhook_url = os.getenv("N8N_WEBHOOK_URL", "")
            self.timeout = 30

Essentially, we retrieve the webhook URL from an environment variable and set a query timeout of 30 seconds.

The first function, get_active_workflows, queries the webhook, passing as a parameter: "action": "get_active_workflows".

    def get_active_workflows(self) -> Dict[str, Any]:
        """Fetch all active workflows from n8n"""
        if not self.webhook_url:
            logger.error("Environment variable N8N_WEBHOOK_URL not configured")
            return {"error": "N8N_WEBHOOK_URL environment variable not set"}

        try:
            logger.info("Fetching active workflows from n8n")
            response = requests.post(
                self.webhook_url,
                json={"action": "get_active_workflows"},
                timeout=self.timeout
            )
            response.raise_for_status()

            data = response.json()

            logger.debug(f"Response type: {type(data)}")

            # List of all workflows
            workflows = []
            if isinstance(data, list):
                workflows = [item for item in data if isinstance(item, dict)]
                if not workflows and data:
                    logger.error(f"Expected list of dictionaries, got list of {type(data[0]).__name__}")
                    return {"error": "Webhook returned invalid data format"}
            elif isinstance(data, dict):
                if "data" in data:
                    workflows = data["data"]
                else:
                    logger.error(f"Unexpected dict response with keys: {list(data.keys())}\n{traceback.format_exc()}")
                    return {"error": "Unexpected response format"}
            else:
                logger.error(f"Unexpected response type: {type(data)}\n{traceback.format_exc()}")
                return {"error": f"Unexpected response type: {type(data).__name__}"}

            logger.info(f"Successfully fetched {len(workflows)} active workflows")

            return {
                "total_active": len(workflows),
                "workflows": [
                    {
                        "id": wf.get("id", "unknown"),
                        "name": wf.get("name", "Unnamed"),
                        "created": wf.get("createdAt", ""),
                        "updated": wf.get("updatedAt", ""),
                        "archived": wf.get("isArchived", "false") == "true"
                    }
                    for wf in workflows
                ],
                "summary": {
                    "total": len(workflows),
                    "names": [wf.get("name", "Unnamed") for wf in workflows]
                }
            }

        except requests.exceptions.RequestException as e:
            logger.error(f"Error fetching workflows: {e}\n{traceback.format_exc()}")
            return {"error": f"Failed to fetch workflows: {str(e)}"}
        except Exception as e:
            logger.error(f"Unexpected error fetching workflows: {e}\n{traceback.format_exc()}")
            return {"error": f"Unexpected error: {str(e)}"}

I have added many checks, as the API sometimes fails to return the expected data format.

This solution is more robust, providing Claude with all the information needed to understand why a query failed.

Now that the first function is covered, we can focus on getting the last n executions with get_workflow_executions.

    def get_workflow_executions(
        self,
        limit: int = 50,
        includes_kpis: bool = False,
    ) -> Dict[str, Any]:
        """Fetch the last 'limit' workflow executions, with or without KPIs"""
        if not self.webhook_url:
            logger.error("Environment variable N8N_WEBHOOK_URL not set")
            return {"error": "N8N_WEBHOOK_URL environment variable not set"}

        try:
            logger.info(f"Fetching the last {limit} executions")

            payload = {
                "action": "get_workflow_executions",
                "limit": limit
            }

            response = requests.post(
                self.webhook_url,
                json=payload,
                timeout=self.timeout
            )
            response.raise_for_status()

            data = response.json()

            if isinstance(data, list) and len(data) > 0:
                data = data[0]

            logger.info("Successfully fetched execution data")

            if includes_kpis and isinstance(data, dict):
                logger.info("Including KPIs in the execution data")

                if "summary" in data:
                    summary = data["summary"]
                    failure_rate = float(summary.get("failureRate", "0").rstrip("%"))
                    data["insights"] = {
                        "health_status": "🟢 Healthy" if failure_rate < 10 else
                                    "🟡 Warning" if failure_rate < 25 else
                                    "🔴 Critical",
                        "message": f"{summary.get('totalExecutions', 0)} executions with {summary.get('failureRate', '0%')} failure rate"
                    }

            return data

        except requests.exceptions.RequestException as e:
            logger.error(f"HTTP error fetching executions: {e}\n{traceback.format_exc()}")
            return {"error": f"Failed to fetch executions: {str(e)}"}
        except Exception as e:
            logger.error(f"Unexpected error fetching executions: {e}\n{traceback.format_exc()}")
            return {"error": f"Unexpected error: {str(e)}"}

The only parameter here is the number n of executions you want to retrieve: "limit": n.

The outputs include a summary with a health status that is generated by the code node Processing Audit (more details in the tutorial).

n8n workflow with a webhook to collect information from my instance – (Image by Samir Saci)
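The thresholds behind that health status are the same ones used in get_workflow_executions above; they can be summarised as a small helper:

```python
# Map a failure rate (in %) to the health label used by the tool.
def health_status(failure_rate_pct: float) -> str:
    if failure_rate_pct < 10:
        return "🟢 Healthy"
    if failure_rate_pct < 25:
        return "🟡 Warning"
    return "🔴 Critical"

print(health_status(12.0))  # 🟡 Warning
```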

The function get_workflow_executions only retrieves and formats the outputs before sending them to the agent.

Now that we have defined our core functions, we can create the tools to equip Claude via the MCP server.

Set Up an MCP Server with Tools

Now it is time to create our MCP server, with tools and resources to equip (and teach) Claude.

    from mcp.server.fastmcp import FastMCP
    import logging
    from typing import Optional, Dict, Any
    from utils.n8n_monitor_sync import N8nMonitor

    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s - %(levelname)s - %(message)s',
        handlers=[
            logging.FileHandler("n8n_monitor.log"),
            logging.StreamHandler()
        ]
    )

    logger = logging.getLogger(__name__)

    mcp = FastMCP("n8n-monitor")

    monitor = N8nMonitor()

It is a basic implementation using FastMCP and importing n8n_monitor_sync.py with the functions defined in the previous section.

    # Resource for the agent (Samir: update it every time you add a tool)
    @mcp.resource("n8n://help")
    def get_help() -> str:
        """Get help documentation for the n8n monitoring tools"""
        return """
        📊 N8N MONITORING TOOLS
        =======================

        WORKFLOW MONITORING:
        • get_active_workflows()
          List all active workflows with names and IDs

        EXECUTION TRACKING:
        • get_workflow_executions(limit=50, include_kpis=True)
          Get execution logs with detailed KPIs
          - limit: Number of recent executions to retrieve (1-100)
          - include_kpis: Calculate performance metrics

        ERROR DEBUGGING:
        • get_error_executions(workflow_id)
          Retrieve detailed error information for a specific workflow
          - Returns the last 5 errors with comprehensive debugging data
          - Shows error messages, failed nodes, trigger data
          - Identifies error patterns and problematic nodes
          - Includes HTTP codes, error levels, and timing info

        HEALTH REPORTING:
        • get_workflow_health_report(limit=50)
          Generate a comprehensive health analysis based on recent executions
          - Identifies problematic workflows
          - Shows success/failure rates
          - Provides execution timing metrics

        KEY METRICS PROVIDED:
        • Total executions
        • Success/failure rates
        • Execution times (avg, min, max)
        • Workflows with failures
        • Execution modes (manual, trigger, integrated)
        • Error patterns and frequencies
        • Failed node identification

        HEALTH STATUS INDICATORS:
        • 🟢 Healthy: <10% failure rate
        • 🟡 Warning: 10-25% failure rate
        • 🔴 Critical: >25% failure rate

        USAGE EXAMPLES:
        - "Show me all active workflows"
        - "What workflows have been failing?"
        - "Generate a health report for my n8n instance"
        - "Show execution metrics for the last 48 hours"
        - "Debug errors in workflow CGvCrnUyGHgB7fi8"
        - "What's causing failures in my data processing workflow?"

        DEBUGGING WORKFLOW:
        1. Use get_workflow_executions() to identify problematic workflows
        2. Use get_error_executions() for detailed error analysis
        3. Check error patterns to identify recurring issues
        4. Review failed node details and trigger data
        5. Use workflow_id and execution_id for targeted fixes
        """

As the tool is complex to apprehend, we include a prompt, in the form of an MCP resource, to summarise the objective and features of the n8n workflow connected via the webhook.

Now we can define the first tool to get all the active workflows.

    @mcp.tool()
    def get_active_workflows() -> Dict[str, Any]:
        """
        Get all active workflows in the n8n instance.

        Returns:
            Dictionary with the list of active workflows and their details
        """
        try:
            logger.info("Fetching active workflows")
            result = monitor.get_active_workflows()

            if "error" in result:
                logger.error(f"Failed to get workflows: {result['error']}")
            else:
                logger.info(f"Found {result.get('total_active', 0)} active workflows")

            return result

        except Exception as e:
            logger.error(f"Unexpected error: {str(e)}")
            return {"error": str(e)}

The docstring, used to explain to the MCP server how to use the tool, is relatively brief, as there are no input parameters for get_active_workflows().

Let us do the same for the second tool, which retrieves the last n executions.

    @mcp.tool()
    def get_workflow_executions(
        limit: int = 50,
        include_kpis: bool = True
    ) -> Dict[str, Any]:
        """
        Get workflow execution logs and KPIs for the last N executions.

        Args:
            limit: Number of executions to retrieve (default: 50)
            include_kpis: Include calculated KPIs (default: true)

        Returns:
            Dictionary with execution data and KPIs
        """
        try:
            logger.info(f"Fetching the last {limit} executions")

            result = monitor.get_workflow_executions(
                limit=limit,
                includes_kpis=include_kpis
            )

            if "error" in result:
                logger.error(f"Failed to get executions: {result['error']}")
            else:
                if "summary" in result:
                    summary = result["summary"]
                    logger.info(f"Executions: {summary.get('totalExecutions', 0)}, "
                                f"Failure rate: {summary.get('failureRate', 'N/A')}")

            return result

        except Exception as e:
            logger.error(f"Unexpected error: {str(e)}")
            return {"error": str(e)}

    Unlike the previous tool, we need to specify the input parameters with their default values.
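    To illustrate what the KPI part of the response can contain, here is a small, hypothetical sketch of how a summary with fields like `totalExecutions` and `failureRate` could be derived from raw execution records (the `build_summary` helper and record shape are illustrative assumptions, not the actual implementation):

```python
def build_summary(executions):
    """Compute simple KPIs from a list of execution records."""
    total = len(executions)
    failed = sum(1 for e in executions if e.get("status") == "error")
    failure_rate = f"{failed / total:.0%}" if total else "N/A"
    return {"totalExecutions": total, "failureRate": failure_rate}

# Four sample executions, two of which failed
runs = [
    {"status": "success"},
    {"status": "error"},
    {"status": "success"},
    {"status": "error"},
]
print(build_summary(runs))
# → {'totalExecutions': 4, 'failureRate': '50%'}
```

    Returning a compact summary like this, rather than the full execution logs, keeps the tool response small enough for Claude to reason over directly.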

    We have now equipped Claude with these two tools, which can be used as in the example presented in the previous section.

    What's next? Deploy it on your machine!

    As I wanted to keep this article short, I will only introduce these two tools.

    For the rest of the functionalities, I invite you to follow the complete tutorial on my YouTube channel.

    It includes step-by-step explanations of how to deploy this on your machine, with a detailed overview of the source code shared on my GitHub (MCP server) and n8n profile (workflow).

    Conclusion

    This is just the beginning!

    We can consider this version 1.0 of what can become a great agent to manage your n8n workflows.

    What do I mean by this?

    There is huge potential for improving this solution, especially for root cause analysis, by:

    • Providing more context to the agent using the sticky notes inside the workflows
    • Showing what good inputs and outputs look like with evaluation nodes to help Claude perform gap analyses
    • Exploiting the other endpoints of the n8n API for more accurate analyses

    However, I do not think I can, as a full-time startup founder and CEO, develop such a comprehensive tool alone.

    Therefore, I wanted to share it with the Towards Data Science and n8n communities as an open-source solution available on my GitHub profile.

    Need inspiration to start automating with n8n?

    On this blog, I have published several articles sharing examples of workflow automations we have implemented for small, medium and large operations.

    Articles published on Towards Data Science – (Image by Samir Saci)

    The focus was primarily on logistics and supply chain operations, with real case studies.

    I also have a complete playlist on my YouTube channel, Supply Science, with more than 15 tutorials.

    Playlist with 15+ tutorials and ready-to-deploy workflows shared – (Image by Samir Saci)

    You can follow these tutorials to deploy the workflows I share on my n8n creator profile (linked in the descriptions), which cover:

    • Process Automation for Logistics and Supply Chain
    • AI-Powered Workflows for Content Creation
    • Productivity and Language Learning

    Feel free to share your questions in the comment sections of the videos.

    Other examples of MCP server implementations

    This is not my first implementation of an MCP server.

    In another experiment, I connected Claude Desktop with a Supply-Chain Network Optimisation tool.

    How to Connect an MCP Server for an AI-Powered, Supply-Chain Network Optimisation Agent – (Image by Samir Saci)

    In this example, the n8n workflow is replaced by a FastAPI microservice hosting a linear programming algorithm.

    Supply Chain Network Optimisation – (Image by Samir Saci)

    The objective is to determine the optimal set of factories to produce and ship products to markets at the lowest cost and with the smallest environmental footprint.

    Comparative Analysis of multiple Scenarios – (Image by Samir Saci)

    In this type of exercise, Claude does a great job of synthesising and presenting the results.

    For more information, check out this Towards Data Science article.

    About Me

    Let's connect on LinkedIn and Twitter. I am a Supply Chain Engineer who uses data analytics to improve logistics operations and reduce costs.

    For consulting or advice on analytics and sustainable supply chain transformation, feel free to contact me via Logigreen Consulting.

    If you are interested in Data Analytics and Supply Chain, check out my website.

    Samir Saci | Data Science & Productivity




