    Building a Modern Dashboard with Python and Gradio

    By Editor Times Featured | June 5, 2025 | 15 Mins Read


    This is the second in a short series on developing data dashboards using the latest Python-based GUI development tools: Streamlit, Gradio, and Taipy.

    The source dataset for each dashboard will be the same, but stored in different formats. As much as possible, I'll also try to make the dashboard layouts for each tool resemble one another and offer the same functionality.

    In the first part of this series, I created a Streamlit version of the dashboard that retrieves its data from a local PostgreSQL database. You can view that article here.

    This time, we're exploring the use of the Gradio library.

    The data for this dashboard will be in a local CSV file, and Pandas will be our primary data processing engine.

    If you want to see a quick demo of the app, I've deployed it to Hugging Face Spaces. You can run it using the link below, but note that the two input date picker pop-ups don't work due to a known bug in the Hugging Face environment. This only affects apps deployed on HF; you can still change the dates manually. Running the app locally works fine and doesn't have this issue.

    Dashboard demo on Hugging Face

    What’s Gradio?

    Gradio is an open-source Python package that simplifies the process of building demos or web applications for machine learning models, APIs, or any Python function. With it, you can create demos or web applications without needing JavaScript, CSS, or web hosting experience. By writing just a few lines of Python code, you can unlock the power of Gradio and seamlessly showcase your machine-learning models to a broader audience.

    Gradio simplifies the development process by providing an intuitive framework that eliminates the complexities of building user interfaces from scratch. Whether you're a machine learning developer, researcher, or enthusiast, Gradio lets you create beautiful, interactive demos that improve the understanding and accessibility of your machine learning models.

    This open-source Python package helps you bridge the gap between your machine learning expertise and a broader audience, making your models both accessible and actionable. A minimal example of the workflow is sketched below.
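
    To make that concrete, here is a minimal, self-contained sketch of the basic Gradio pattern (a generic illustration, not part of the dashboard code that follows): wrap an ordinary Python function in an interface and launch it.

    # Minimal Gradio example (not part of the dashboard code):
    # wrap an ordinary Python function in a web UI and launch it.
    import gradio as gr

    def greet(name: str) -> str:
        return f"Hello, {name}!"

    demo = gr.Interface(fn=greet, inputs="text", outputs="text")

    if __name__ == "__main__":
        demo.launch()  # serves the app locally, by default at http://127.0.0.1:7860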

    What we’ll develop

    We're developing a data dashboard. Our source data will be a single CSV file containing 100,000 synthetic sales records.

    The actual source of the data isn't that important. It could just as easily be a text file, an Excel file, SQLite, or any database you can connect to.

    This is what our final dashboard will look like.

    Image by Author

    There are four main sections.

    • The top row allows the user to select specific start and end dates and/or product categories using date pickers and a drop-down list, respectively.
    • The second row, Key metrics, shows a top-level summary of the selected data.
    • The Visualisation section allows the user to select one of three graphs to display the input dataset.
    • The raw data section is exactly what it claims to be. This tabular representation of the selected data effectively shows a snapshot of the underlying CSV data file.

    Using the dashboard is easy. Initially, stats for the whole data set are displayed. The user can then narrow the data focus using the three filter fields at the top of the display. The graphs, key metrics, and raw data sections dynamically update to reflect the user's choices in the filter fields.

    The underlying data

    As mentioned, the dashboard's source data is contained in a single comma-separated values (CSV) file. The data consists of 100,000 synthetic sales-related records. Here are the first ten records of the file to give you an idea of what it looks like.

    +----------+------------+-------------+----------------+------------+---------------+-------------+----------+-------+--------+
    | order_id | order_date | customer_id | customer_name  | product_id | product_names | categories  | quantity | price | total  |
    +----------+------------+-------------+----------------+------------+---------------+-------------+----------+-------+--------+
    | 0        | 01/08/2022 | 245         | Customer_884   | 201        | Smartphone    | Electronics | 3        | 90.02 | 270.06 |
    | 1        | 19/02/2022 | 701         | Customer_1672  | 205        | Printer       | Electronics | 6        | 12.74 | 76.44  |
    | 2        | 01/01/2017 | 184         | Customer_21720 | 208        | Notebook      | Stationery  | 8        | 48.35 | 386.8  |
    | 3        | 09/03/2013 | 275         | Customer_23770 | 200        | Laptop        | Electronics | 3        | 74.85 | 224.55 |
    | 4        | 23/04/2022 | 960         | Customer_23790 | 210        | Cabinet       | Office      | 6        | 53.77 | 322.62 |
    | 5        | 10/07/2019 | 197         | Customer_25587 | 202        | Desk          | Office      | 3        | 47.17 | 141.51 |
    | 6        | 12/11/2014 | 510         | Customer_6912  | 204        | Monitor       | Electronics | 5        | 22.5  | 112.5  |
    | 7        | 12/07/2016 | 150         | Customer_17761 | 200        | Laptop        | Electronics | 9        | 49.33 | 443.97 |
    | 8        | 12/11/2016 | 997         | Customer_23801 | 209        | Coffee Maker  | Electronics | 7        | 47.22 | 330.54 |
    | 9        | 23/01/2017 | 151         | Customer_30325 | 207        | Pen           | Stationery  | 6        | 3.5   | 21     |
    +----------+------------+-------------+----------------+------------+---------------+-------------+----------+-------+--------+

    And here is some Python code you can use to generate a similar dataset. Make sure both the NumPy and Polars libraries are installed first.

    # generate the 100K record CSV file
    #
    import polars as pl
    import numpy as np
    from datetime import datetime, timedelta

    def generate(nrows: int, filename: str):
        names = np.asarray(
            [
                "Laptop",
                "Smartphone",
                "Desk",
                "Chair",
                "Monitor",
                "Printer",
                "Paper",
                "Pen",
                "Notebook",
                "Coffee Maker",
                "Cabinet",
                "Plastic Cups",
            ]
        )
        categories = np.asarray(
            [
                "Electronics",
                "Electronics",
                "Office",
                "Office",
                "Electronics",
                "Electronics",
                "Stationery",
                "Stationery",
                "Stationery",
                "Electronics",
                "Office",
                "Sundry",
            ]
        )
        product_id = np.random.randint(len(names), size=nrows)
        quantity = np.random.randint(1, 11, size=nrows)
        price = np.random.randint(199, 10000, size=nrows) / 100
        # Generate random dates between 2010-01-01 and 2023-12-31
        start_date = datetime(2010, 1, 1)
        end_date = datetime(2023, 12, 31)
        date_range = (end_date - start_date).days
        # Create random dates as np.array and convert to string format
        order_dates = np.array([(start_date + timedelta(days=np.random.randint(0, date_range))).strftime('%Y-%m-%d') for _ in range(nrows)])
        # Define columns
        columns = {
            "order_id": np.arange(nrows),
            "order_date": order_dates,
            "customer_id": np.random.randint(100, 1000, size=nrows),
            "customer_name": [f"Customer_{i}" for i in np.random.randint(2**15, size=nrows)],
            "product_id": product_id + 200,
            "product_names": names[product_id],
            "categories": categories[product_id],
            "quantity": quantity,
            "price": price,
            "total": price * quantity,
        }
        # Create Polars DataFrame and write to CSV with an explicit delimiter
        df = pl.DataFrame(columns)
        df.write_csv(filename, separator=',', include_header=True)  # Ensure comma is used as the delimiter

    # Generate 100,000 rows of data with random order_date and save to CSV
    generate(100_000, "/mnt/d/sales_data/sales_data.csv")
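
    As an optional sanity check (assuming the output path used above), you can read the generated file back with Pandas and confirm the row count and column names:

    # Optional sanity check: read the generated CSV back with Pandas
    import pandas as pd

    df_check = pd.read_csv("/mnt/d/sales_data/sales_data.csv")
    print(df_check.shape)             # expected: (100000, 10)
    print(df_check.columns.tolist())  # the ten columns shown in the table above
    print(df_check.head())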

    Installing and using Gradio

    Installing Gradio is easy using pip, but for coding, best practice is to set up a separate Python environment for all your work. I use Miniconda for that purpose, but feel free to use whatever method suits your workflow.

    If you want to go down the conda route and don't already have it, you'll need to install Miniconda (recommended) or Anaconda first.

    Please note that, at the time of writing, Gradio needs at least Python 3.8 installed to work correctly.

    Once the environment is created, switch to it using the 'activate' command, and then run 'pip install' to install our required Python libraries.

    # create our test environment
    (base) C:\Users\thoma>conda create -n gradio_dashboard python=3.12 -y

    # Now activate it
    (base) C:\Users\thoma>conda activate gradio_dashboard

    # Install python libraries, etc ...
    (gradio_dashboard) C:\Users\thoma>pip install gradio pandas matplotlib cachetools

    Key differences between Streamlit and Gradio

    As I'll demonstrate in this article, it's possible to produce very similar data dashboards using Streamlit and Gradio. However, their ethos differs in a few key ways.

    Focus

    • Gradio specialises in creating interfaces for machine learning models, whereas Streamlit is designed more for general-purpose data applications and visualisations.

    Ease of use

    • Gradio is known for its simplicity and rapid prototyping capabilities, making it easier for beginners to use. Streamlit offers more advanced features and customisation options, which may require a steeper learning curve.

    Interactivity

    • Streamlit uses a reactive programming model where any input change triggers a complete script rerun, updating all components immediately. Gradio, by default, updates only when a user clicks a submit button, though it can be configured for live updates, as the sketch below shows.
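
    Here is a minimal sketch of that live mode, using a hypothetical toy function rather than anything from the dashboard. Passing live=True to a Gradio Interface re-runs the function on every input change instead of waiting for a Submit click.

    # Toy example: live=True gives Streamlit-style reactivity
    import gradio as gr

    def double(x: float) -> float:
        return x * 2

    demo = gr.Interface(fn=double, inputs=gr.Number(), outputs=gr.Number(), live=True)
    # demo.launch()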

    Customization

    • Gradio focuses on pre-built components for quickly demonstrating AI models. Streamlit provides more extensive customisation options and flexibility for complex projects.

    Deployment

    • Having deployed both a Streamlit and a Gradio app, I'd say it's easier to deploy a Streamlit app than a Gradio app. With Streamlit, deployment can be done with a single click via the Streamlit Community Cloud; this functionality is built into any Streamlit app you create. Gradio offers deployment via Hugging Face Spaces, but it involves more work. Neither method is particularly complex, though.

    Use cases

    Streamlit excels at creating data-centric applications and interactive dashboards for complex projects. Gradio is ideal for quickly showcasing machine learning models and building simpler applications.

    The Gradio Dashboard Code

    I'll break the code down into sections and explain each one as we proceed.

    We begin by importing the required external libraries and loading the full dataset from the CSV file into a Pandas DataFrame.

    import gradio as gr
    import pandas as pd
    import matplotlib.pyplot as plt
    import datetime
    import warnings
    import os
    import tempfile
    from cachetools import cached, TTLCache

    warnings.filterwarnings("ignore", category=FutureWarning, module="seaborn")

    # ------------------------------------------------------------------
    # 1) Load CSV data once
    # ------------------------------------------------------------------
    csv_data = None

    def load_csv_data():
        global csv_data

        # Optional: specify column dtypes if known; adjust as necessary
        dtype_dict = {
            "order_id": "Int64",
            "customer_id": "Int64",
            "product_id": "Int64",
            "quantity": "Int64",
            "price": "float",
            "total": "float",
            "customer_name": "string",
            "product_names": "string",
            "categories": "string"
        }

        csv_data = pd.read_csv(
            "d:/sales_data/sales_data.csv",
            parse_dates=["order_date"],
            dayfirst=True,      # if your dates are in DD/MM/YYYY format
            low_memory=False,
            dtype=dtype_dict
        )

    load_csv_data()

    Next, we configure a time-to-live cache with a maximum of 128 items and an expiration of 300 seconds. This is used to store the results of expensive function calls and speed up repeated lookups.

    The get_unique_categories function returns a list of unique, cleaned (capitalised) categories from the `csv_data` DataFrame, caching the result for quicker access.

    The get_date_range function returns the minimum and maximum order dates from the dataset, or None if the data is unavailable.

    The filter_data function filters the csv_data DataFrame based on a specified date range and optional category, returning the filtered DataFrame.

    The get_dashboard_stats function retrieves summary metrics (total revenue, total orders, average order value, and top category) for the given filters. Internally, it uses filter_data() to scope the dataset and then calculates these key statistics.

    The get_data_for_table function returns a detailed DataFrame of filtered sales data, sorted by order_id and order_date, along with an additional revenue column for each sale.

    The get_plot_data function formats data for plotting by summing revenue over time, grouped by date.

    The get_revenue_by_category function aggregates and returns revenue by category, sorted by revenue, within the specified date range and category.

    The get_top_products function returns the top 10 products by revenue, filtered by date range and category.

    Based on the orientation argument, the create_matplotlib_figure function generates a bar plot from the data, either vertical or horizontal, and saves it as an image file.

    cache = TTLCache(maxsize=128, ttl=300)

    @cached(cache)
    def get_unique_categories():
        global csv_data
        if csv_data is None:
            return []
        cats = sorted(csv_data['categories'].dropna().unique().tolist())
        cats = [cat.capitalize() for cat in cats]
        return cats

    def get_date_range():
        global csv_data
        if csv_data is None or csv_data.empty:
            return None, None
        return csv_data['order_date'].min(), csv_data['order_date'].max()

    def filter_data(start_date, end_date, category):
        global csv_data

        if isinstance(start_date, str):
            start_date = datetime.datetime.strptime(start_date, '%Y-%m-%d').date()
        if isinstance(end_date, str):
            end_date = datetime.datetime.strptime(end_date, '%Y-%m-%d').date()

        df = csv_data.loc[
            (csv_data['order_date'] >= pd.to_datetime(start_date)) &
            (csv_data['order_date'] <= pd.to_datetime(end_date))
        ].copy()

        if category != "All Categories":
            df = df.loc[df['categories'].str.capitalize() == category].copy()

        return df

    def get_dashboard_stats(start_date, end_date, category):
        df = filter_data(start_date, end_date, category)
        if df.empty:
            return (0, 0, 0, "N/A")

        df['revenue'] = df['price'] * df['quantity']
        total_revenue = df['revenue'].sum()
        total_orders = df['order_id'].nunique()
        avg_order_value = total_revenue / total_orders if total_orders else 0

        cat_revenues = df.groupby('categories')['revenue'].sum().sort_values(ascending=False)
        top_category = cat_revenues.index[0] if not cat_revenues.empty else "N/A"

        return (total_revenue, total_orders, avg_order_value, top_category.capitalize())

    def get_data_for_table(start_date, end_date, category):
        df = filter_data(start_date, end_date, category)
        if df.empty:
            return pd.DataFrame()

        df = df.sort_values(by=["order_id", "order_date"], ascending=[True, False]).copy()

        columns_order = [
            "order_id", "order_date", "customer_id", "customer_name",
            "product_id", "product_names", "categories", "quantity",
            "price", "total"
        ]
        columns_order = [col for col in columns_order if col in df.columns]
        df = df[columns_order].copy()

        df['revenue'] = df['price'] * df['quantity']
        return df

    def get_plot_data(start_date, end_date, category):
        df = filter_data(start_date, end_date, category)
        if df.empty:
            return pd.DataFrame()
        df['revenue'] = df['price'] * df['quantity']
        plot_data = df.groupby(df['order_date'].dt.date)['revenue'].sum().reset_index()
        plot_data.rename(columns={'order_date': 'date'}, inplace=True)
        return plot_data

    def get_revenue_by_category(start_date, end_date, category):
        df = filter_data(start_date, end_date, category)
        if df.empty:
            return pd.DataFrame()
        df['revenue'] = df['price'] * df['quantity']
        cat_data = df.groupby('categories')['revenue'].sum().reset_index()
        cat_data = cat_data.sort_values(by='revenue', ascending=False)
        return cat_data

    def get_top_products(start_date, end_date, category):
        df = filter_data(start_date, end_date, category)
        if df.empty:
            return pd.DataFrame()
        df['revenue'] = df['price'] * df['quantity']
        prod_data = df.groupby('product_names')['revenue'].sum().reset_index()
        prod_data = prod_data.sort_values(by='revenue', ascending=False).head(10)
        return prod_data

    def create_matplotlib_figure(data, x_col, y_col, title, xlabel, ylabel, orientation='v'):
        plt.figure(figsize=(10, 6))
        if data.empty:
            plt.text(0.5, 0.5, 'No data available', ha='center', va='center')
        else:
            if orientation == 'v':
                plt.bar(data[x_col], data[y_col])
                plt.xticks(rotation=45, ha='right')
            else:
                plt.barh(data[x_col], data[y_col])
                plt.gca().invert_yaxis()

        plt.title(title)
        plt.xlabel(xlabel)
        plt.ylabel(ylabel)
        plt.tight_layout()

        with tempfile.NamedTemporaryFile(delete=False, suffix=".png") as tmpfile:
            plt.savefig(tmpfile.name)
        plt.close()
        return tmpfile.name

    The update_dashboard function retrieves the key sales statistics (total revenue, total orders, average order value, and top category) by calling the get_dashboard_stats function. It gathers data for three distinct visualisations (revenue over time, revenue by category, and top products), then uses create_matplotlib_figure to generate the plots. It prepares a data table (via the get_data_for_table() function) and returns it together with all the generated plots and stats so they can be displayed in the dashboard.

    The create_dashboard function sets the date boundaries (minimum and maximum dates) and establishes the initial default filter values. It uses Gradio to assemble a user interface (UI) comprising date pickers, a category drop-down, key metric displays, plot tabs, and a data table. It then wires up the filters so that changing any of them triggers a call to the update_dashboard function, ensuring the dashboard visuals and metrics are always in sync with the chosen filters. Finally, it returns the assembled Gradio interface, which is launched as a web application.

    def update_dashboard(start_date, end_date, category):
        total_revenue, total_orders, avg_order_value, top_category = get_dashboard_stats(start_date, end_date, category)

        # Generate plots
        revenue_data = get_plot_data(start_date, end_date, category)
        category_data = get_revenue_by_category(start_date, end_date, category)
        top_products_data = get_top_products(start_date, end_date, category)

        revenue_over_time_path = create_matplotlib_figure(
            revenue_data, 'date', 'revenue',
            "Revenue Over Time", "Date", "Revenue"
        )
        revenue_by_category_path = create_matplotlib_figure(
            category_data, 'categories', 'revenue',
            "Revenue by Category", "Category", "Revenue"
        )
        top_products_path = create_matplotlib_figure(
            top_products_data, 'product_names', 'revenue',
            "Top Products", "Revenue", "Product Name", orientation='h'
        )

        # Data table
        table_data = get_data_for_table(start_date, end_date, category)

        return (
            revenue_over_time_path,
            revenue_by_category_path,
            top_products_path,
            table_data,
            total_revenue,
            total_orders,
            avg_order_value,
            top_category
        )

    def create_dashboard():
        min_date, max_date = get_date_range()
        if min_date is None or max_date is None:
            min_date = datetime.datetime.now()
            max_date = datetime.datetime.now()

        default_start_date = min_date
        default_end_date = max_date

        with gr.Blocks(css="""
            footer {display: none !important;}
            .tabs {border: none !important;}
            .gr-plot {border: none !important; box-shadow: none !important;}
        """) as dashboard:

            gr.Markdown("# Sales Performance Dashboard")

            # Filters row
            with gr.Row():
                start_date = gr.DateTime(
                    label="Start Date",
                    value=default_start_date.strftime('%Y-%m-%d'),
                    include_time=False,
                    type="datetime"
                )
                end_date = gr.DateTime(
                    label="End Date",
                    value=default_end_date.strftime('%Y-%m-%d'),
                    include_time=False,
                    type="datetime"
                )
                category_filter = gr.Dropdown(
                    choices=["All Categories"] + get_unique_categories(),
                    label="Category",
                    value="All Categories"
                )

            gr.Markdown("# Key Metrics")

            # Stats row
            with gr.Row():
                total_revenue = gr.Number(label="Total Revenue", value=0)
                total_orders = gr.Number(label="Total Orders", value=0)
                avg_order_value = gr.Number(label="Average Order Value", value=0)
                top_category = gr.Textbox(label="Top Category", value="N/A")

            gr.Markdown("# Visualisations")
            # Tabs for Plots
            with gr.Tabs():
                with gr.Tab("Revenue Over Time"):
                    revenue_over_time_image = gr.Image(label="Revenue Over Time", container=False)
                with gr.Tab("Revenue by Category"):
                    revenue_by_category_image = gr.Image(label="Revenue by Category", container=False)
                with gr.Tab("Top Products"):
                    top_products_image = gr.Image(label="Top Products", container=False)

            gr.Markdown("# Raw Data")
            # Data Table (below the plots)
            data_table = gr.DataFrame(
                label="Sales Data",
                type="pandas",
                interactive=False
            )

            # When filters change, update everything
            for f in [start_date, end_date, category_filter]:
                f.change(
                    fn=lambda s, e, c: update_dashboard(s, e, c),
                    inputs=[start_date, end_date, category_filter],
                    outputs=[
                        revenue_over_time_image,
                        revenue_by_category_image,
                        top_products_image,
                        data_table,
                        total_revenue,
                        total_orders,
                        avg_order_value,
                        top_category
                    ]
                )

            # Initial load
            dashboard.load(
                fn=lambda: update_dashboard(default_start_date, default_end_date, "All Categories"),
                outputs=[
                    revenue_over_time_image,
                    revenue_by_category_image,
                    top_products_image,
                    data_table,
                    total_revenue,
                    total_orders,
                    avg_order_value,
                    top_category
                ]
            )

        return dashboard

    if __name__ == "__main__":
        dashboard = create_dashboard()
        dashboard.launch(share=False)

    Running the program

    Create a Python file, e.g. gradio_test.py, and insert all of the above code snippets. Save it, and run it like this:

    (gradio_dashboard) $ python gradio_test.py
    
    * Running on local URL:  http://127.0.0.1:7860
    
    To create a public hyperlink, set `share=True` in `launch()`.

    Click on the local URL shown, and the dashboard will open full screen in your browser.
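
    As the console output notes, you can also get a temporary public URL. The only change is the share flag in launch(); a small sketch using the dashboard object from the code above:

    # Optional: expose the dashboard via a temporary public gradio.live URL
    if __name__ == "__main__":
        dashboard = create_dashboard()
        dashboard.launch(share=True)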

    Summary

    This article provides a comprehensive guide to building an interactive sales performance dashboard using Gradio, with a CSV file as its source data.

    Gradio is a modern, Python-based open-source framework that simplifies the creation of data-driven dashboards and GUI applications. The dashboard I developed allows users to filter data by date range and product category, view key metrics such as total revenue and top-performing categories, explore visualisations like revenue trends and top products, and navigate through the raw data with pagination.

    I also discussed some key differences between developing visualisation tools using Gradio and Streamlit, another popular front-end Python library.

    This guide provides a complete implementation of a Gradio data dashboard, covering the entire process from creating sample data to developing Python functions for querying data, generating plots, and handling user input. This step-by-step approach demonstrates how to leverage Gradio's capabilities to create user-friendly, dynamic dashboards, making it ideal for data engineers and scientists who want to build interactive data applications.

    Although I used a CSV file for my data, modifying the code to use another data source, such as a relational database management system (RDBMS) like SQLite, should be straightforward; a sketch of that swap follows below. For example, in my other article in this series on creating a similar dashboard using Streamlit, the data source is a PostgreSQL database.
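
    Here is a minimal, hedged sketch of that swap. It assumes a hypothetical SQLite file sales.db containing a sales table whose columns match the CSV; only the loading function changes, and the rest of the dashboard code stays the same.

    # Sketch: replace load_csv_data() with a SQLite loader.
    # Assumes a hypothetical sales.db with a "sales" table matching the CSV columns.
    import sqlite3
    import pandas as pd

    def load_sqlite_data():
        global csv_data
        conn = sqlite3.connect("d:/sales_data/sales.db")   # hypothetical path
        csv_data = pd.read_sql_query(
            "SELECT * FROM sales",
            conn,
            parse_dates=["order_date"],
        )
        conn.close()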


