
    Automating Ticket Creation in Jira With the OpenAI Agents SDK: A Step-by-Step Guide

    By Editor Times Featured | July 26, 2025


    What if, after finishing a meeting with a colleague, all of the discussed items were already in your project-management tool? No need to write anything down during the meeting, nor to manually create the corresponding tickets! That was the idea behind this short experimental project.

    In this step-by-step guide we will create the Python application "TaskPilot" using OpenAI's Agents SDK to automatically create Jira issues from a meeting transcript.

    The Challenge: From Conversation to Actionable Tasks

    Given the transcript of a meeting, automatically create issues in a Jira project that correspond to what was discussed in the meeting.

    The Solution: Automating with OpenAI Agents

    Using the OpenAI Agents SDK we will implement an agent workflow that:

    1. Receives and reads a meeting transcript.
    2. Uses an AI agent to extract action items from the conversation.
    3. Uses another AI agent to create Jira issues from those action items.
    Agent flow: image created by the author

    The OpenAI Agents SDK

    The OpenAI Agents SDK is a Python library for creating AI agents programmatically that can interact with tools, use MCP servers, or hand off tasks to other agents.

    Here are some of the key features of the SDK:

    • Agent Loop: A built-in agent loop that handles the back-and-forth communication with the LLM until the agent has finished its task.
    • Function Tools: Turns any Python function into a tool, with automatic schema generation and Pydantic-powered validation.
    • MCP Support: Lets agents use MCP servers to extend their ability to interact with the outside world.
    • Handoffs: Lets agents delegate tasks to other agents depending on their expertise/role.
    • Guardrails: Validates the inputs and outputs of the agents, aborting execution early if an agent receives invalid input.
    • Sessions: Automatically manages the conversation history, ensuring that agents have the context they need to perform their tasks.
    • Tracing: Provides a tracing context manager that lets you visualize the entire execution flow of the agents, making it easy to debug and understand what's happening under the hood.
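    The agent loop is what distinguishes an agent from a single LLM call. As a rough mental model only (this is a conceptual sketch, not the SDK's actual implementation; `fake_llm`, `TOOLS`, and `agent_loop` are invented names for illustration), the loop keeps calling the model and executing any requested tools until the model produces a final answer:

```python
# Conceptual sketch of an agent loop -- NOT the SDK's real implementation.
# A stubbed "LLM" either requests a tool call or returns a final answer;
# the loop keeps feeding tool results back until the task is done.

def fake_llm(messages: list[dict]) -> dict:
    # Pretend the model first asks for a tool, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "add", "args": {"a": 2, "b": 3}}}
    return {"final_output": f"The sum is {messages[-1]['content']}"}

TOOLS = {"add": lambda a, b: a + b}

def agent_loop(user_input: str) -> str:
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = fake_llm(messages)
        if "final_output" in reply:   # the agent is done
            return reply["final_output"]
        call = reply["tool_call"]     # otherwise, run the requested tool...
        result = TOOLS[call["name"]](**call["args"])
        # ...and feed the result back for the next iteration
        messages.append({"role": "tool", "content": str(result)})

print(agent_loop("What is 2 + 3?"))  # The sum is 5
```

    The SDK runs this loop for you; you only provide the instructions and the tools.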

    Now, let’s dive into the implementation! 


    Implementation

    We will implement our project in 8 simple steps:

    1. Setting up the project structure
    2. The TaskPilotRunner
    3. Defining our data models
    4. Creating the agents
    5. Providing tools
    6. Configuring the application
    7. Bringing it all together in main.py
    8. Monitoring our runs in the OpenAI Dev Platform

    Let’s get arms on!

    Step 1: Setting Up the Project Structure

    First, let’s create the essential construction of our mission:

    • The taskpilot directory: will contain our main application logic.
    • The local_agents directory: where we define the agents used in this project (named "local_agents" so that there is no interference with the OpenAI library's agents package).
    • The utils directory: for helper functions, a config parser, and data models.
    taskpilot_repo/
    ├── config.yml
    ├── .env
    ├── README.md
    ├── taskpilot/
    │   ├── main.py
    │   ├── taskpilot_runner.py
    │   ├── local_agents/
    │   │   ├── __init__.py
    │   │   ├── action_items_extractor.py
    │   │   └── tickets_creator.py
    │   └── utils/
    │       ├── __init__.py
    │       ├── agents_tools.py
    │       ├── config_parser.py
    │       ├── jira_interface_functions.py
    │       └── models.py

    Step 2: The TaskPilotRunner

    The TaskPilotRunner class in taskpilot/taskpilot_runner.py will be the heart of our application. It orchestrates the entire workflow: extracting action items from the meeting transcript and then creating the Jira tickets from those action items. It also activates the built-in tracing from the Agents SDK to collect a record of events during the agent run, which helps with debugging and monitoring the agent workflows.

    Let’s begin with the implementation:

    • In the __init__() method we create the two agents used for this workflow.
    • The run() method is the most important method of the TaskPilotRunner class. It receives the meeting transcript and passes it to the agents to create the Jira issues. The agents are started and run inside a trace context manager, i.e. with trace("TaskPilot run", trace_id): . A trace from the Agents SDK represents a single end-to-end operation of a "workflow".
    • The _extract_action_items() and _create_tickets() methods start and run each of the agents respectively. Inside these methods the Runner.run() method from the OpenAI Agents SDK is used to trigger the agents. It takes an agent and an input, and returns the final output of the agent's execution. Finally, the result of each agent is parsed into its defined output type.
    # taskpilot/taskpilot_runner.py
    
    from agents import Runner, trace, gen_trace_id
    from local_agents import create_action_items_agent, create_tickets_creator_agent
    from utils.models import ActionItemsList, CreateIssuesResponse
    
    class TaskPilotRunner:
        def __init__(self):
            self.action_items_extractor = create_action_items_agent()
            self.tickets_creator = create_tickets_creator_agent()
    
        async def run(self, meeting_transcript: str) -> None:
            trace_id = gen_trace_id()
            print(f"Starting TaskPilot run... (Trace ID: {trace_id})")
            print(
                f"View trace: https://platform.openai.com/traces/trace?trace_id={trace_id}"
            )
    
            with trace("TaskPilot run", trace_id=trace_id):
                # 1. Extract action items from the meeting transcript
                action_items = await self._extract_action_items(meeting_transcript)
    
                # 2. Create tickets from the action items
                tickets_creation_response = await self._create_tickets(action_items)
    
                # 3. Report the results
                print(tickets_creation_response.text)
    
        async def _extract_action_items(self, meeting_transcript: str) -> ActionItemsList:
            result = await Runner.run(
                self.action_items_extractor, input=meeting_transcript
            )
            final_output = result.final_output_as(ActionItemsList)
            return final_output
    
        async def _create_tickets(self, action_items: ActionItemsList) -> CreateIssuesResponse:
            result = await Runner.run(
                self.tickets_creator, input=str(action_items)
            )
            final_output = result.final_output_as(CreateIssuesResponse)
            return final_output

    The three methods are defined as asynchronous functions. The reason for this is that the Runner.run() method from the OpenAI Agents SDK is itself defined as an async coroutine. This allows multiple agents, tool calls, or streaming endpoints to run in parallel without blocking.
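    To illustrate what this buys us (with plain stand-in coroutines rather than real Runner.run() calls; `fake_agent_run` here is an invented stub), awaiting several coroutines through asyncio.gather lets their I/O waits overlap instead of running back to back:

```python
import asyncio
import time

# A stand-in "agent run" that spends ~0.1 s in simulated I/O wait.
async def fake_agent_run(name: str) -> str:
    await asyncio.sleep(0.1)  # stands in for waiting on the OpenAI API
    return f"{name} done"

async def demo() -> None:
    start = time.perf_counter()
    # Both coroutines wait concurrently, so total time is ~0.1 s, not ~0.2 s.
    results = await asyncio.gather(
        fake_agent_run("extractor"),
        fake_agent_run("creator"),
    )
    elapsed = time.perf_counter() - start
    print(results)        # ['extractor done', 'creator done']
    assert elapsed < 0.2  # concurrent, not sequential

asyncio.run(demo())
```

    In TaskPilot our two agents run sequentially (the second needs the first's output), but the async design keeps that option open.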

    Step 3: Defining Our Data Models

    Without specific configuration, agents return plain text (str) as output. To ensure that our agents provide structured and predictable responses, the library supports the use of Pydantic models for defining the output_type of the agents (it actually supports any type that can be wrapped in a Pydantic TypeAdapter: dataclasses, lists, TypedDict, etc.). The data models we define will be the data structures our agents work with.

    For our use case we will define three models in taskpilot/utils/models.py:

    • ActionItem: This model represents a single action item extracted from the meeting transcript.
    • ActionItemsList: This model is a list of ActionItem objects.
    • CreateIssuesResponse: This model defines the structure of the response from the agent that creates the issues/tickets.
    # taskpilot/utils/models.py
    
    from typing import Optional
    from pydantic import BaseModel
    
    class ActionItem(BaseModel):
        title: str
        description: str
        assignee: str
        status: str
        issuetype: str
        project: Optional[str] = None
        due_date: Optional[str] = None
        start_date: Optional[str] = None
        priority: Optional[str] = None
        parent: Optional[str] = None
        children: Optional[list[str]] = None
    
    class ActionItemsList(BaseModel):
        action_items: list[ActionItem]
    
    class CreateIssuesResponse(BaseModel):
        action_items: list[ActionItem]
        error_messages: list[str]
        success_messages: list[str]
        text: str
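    As a quick illustration of why structured outputs help (using trimmed-down versions of the models, so the short field list here is only for the demo), Pydantic both parses well-formed agent output into typed objects and rejects malformed output loudly:

```python
from typing import Optional
from pydantic import BaseModel, ValidationError

# Trimmed-down versions of the models above, for illustration only.
class ActionItem(BaseModel):
    title: str
    assignee: str
    due_date: Optional[str] = None

class ActionItemsList(BaseModel):
    action_items: list[ActionItem]

# Well-formed agent output parses into typed objects...
raw = {"action_items": [{"title": "Fix login bug", "assignee": "Maria"}]}
items = ActionItemsList.model_validate(raw)
print(items.action_items[0].assignee)  # Maria

# ...while malformed output (missing a required field) fails loudly.
try:
    ActionItemsList.model_validate({"action_items": [{"title": "No assignee"}]})
except ValidationError as err:
    print("rejected with", err.error_count(), "error")  # rejected with 1 error
```

    This is exactly what final_output_as(ActionItemsList) relies on in the TaskPilotRunner.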

    Step 4: Creating the Agents

    The agents are the core of our application. An agent is basically an LLM configured with instructions (the AGENT_PROMPT) and access to tools, so that it can act on its own on defined tasks. An agent from the OpenAI Agents SDK is defined by the following parameters:

    • name: The name of the agent, for identification.
    • instructions: The prompt that tells the agent its role or the task it shall execute (aka the system prompt).
    • model: Which LLM to use for the agent. The SDK provides out-of-the-box support for OpenAI models, but you can also use non-OpenAI models (see Agents SDK: Models).
    • output_type: The Python type the agent shall return, as mentioned previously.
    • tools: A list of Python callables that will be the tools the agent can use to perform its tasks.

    Based on this information, let's create our two agents: the ActionItemsExtractor and the TicketsCreator.

    Action Items Extractor

    This agent's job is to read the meeting transcript and extract the action items. We'll create it in taskpilot/local_agents/action_items_extractor.py.

    # taskpilot/local_agents/action_items_extractor.py
    
    from agents import Agent
    from utils.config_parser import Config
    from utils.models import ActionItemsList
    
    AGENT_PROMPT = """
    You are an assistant that extracts action items from a meeting transcript.
    
    You will be given a meeting transcript and you need to extract the action items so that they can be converted into tickets by another assistant.
    
    The action items shall contain the following information:
        - title: The title of the action item. It shall be a short and concise description of the action item. This is required.
        - description: The description of the action item. It shall be a more extensive description of the action item. This is required.
        - assignee: The name of the person who will be responsible for the action item. You shall infer the name of the assignee from the conversation and not use "Speaker 1" or "Speaker 2" or any other speaker identifier. This is required.
        - status: The status of the action item. It can be "To Do", "In Progress", "In Review" or "Done". You shall extract from the transcript which state the action item is in. If it is a new action item, you shall set it to "To Do".
        - due_date: The due date of the action item, in the format "YYYY-MM-DD". You shall extract this from the transcript; if it is not explicitly mentioned, you shall set it to None. If relative dates are mentioned (e.g. by tomorrow, in a week, ...), you shall convert them to absolute dates in the format "YYYY-MM-DD".
        - start_date: The start date of the action item, in the format "YYYY-MM-DD". You shall extract this from the transcript; if it is not explicitly mentioned, you shall set it to None.
        - priority: The priority of the action item. It can be "Lowest", "Low", "Medium", "High" or "Highest". You shall infer the priority of the action item from the transcript; if it is not clear, you shall set it to None.
        - issuetype: The type of the action item. It can be "Epic", "Bug", "Task", "Story" or "Subtask". You shall infer the issuetype of the action item from the transcript; if it is unclear, set it to "Task".
        - project: The project to which the action item belongs. You shall infer the project of the action item from the transcript; if it is not clear, you shall set it to None.
        - parent: If the action item is a subtask, you shall set the parent of the action item to the title of the parent action item. If the parent action item is not clear or the action item is not a subtask, you shall set it to None.
        - children: If the action item is a parent task, you shall set the children of the action item to the titles of the child action items. If the child action items are not clear or the action item is not a parent task, you shall set it to None.
    """
    
    def create_action_items_agent() -> Agent:
        return Agent(
            name="Action Items Extractor",
            instructions=AGENT_PROMPT,
            output_type=ActionItemsList,
            model=Config.get().agents.model,
        )

    As you can see, in the AGENT_PROMPT we tell the agent in great detail that its job is to extract action items, and we give a precise description of how we want the action items to be extracted.

    Tickets Creator

    This agent takes the list of action items and creates Jira issues. We'll create it in taskpilot/local_agents/tickets_creator.py.

    # taskpilot/local_agents/tickets_creator.py
    
    from agents import Agent
    from utils.config_parser import Config
    from utils.agents_tools import create_jira_issue
    from utils.models import CreateIssuesResponse
    
    AGENT_PROMPT = """
    You are an assistant that creates Jira issues given action items.
    
    You will be given a list of action items and for each action item you shall create a Jira issue using the `create_jira_issue` tool.
    
    You shall collect the responses of the `create_jira_issue` tool and return them as the provided type `CreateIssuesResponse`, which contains:
        - action_items: list containing the action items that were provided to you
        - error_messages: list containing the error messages returned by the `create_jira_issue` tool whenever there was an error trying to create the issue.
        - success_messages: list containing the response messages returned by the `create_jira_issue` tool whenever the issue creation was successful.
        - text: A text that summarizes the result of the tickets creation. It shall be a string created as follows:
            f"From the {len(action_items)} action items provided {len(success_messages)} were successfully created in the Jira project.\n {len(error_messages)} failed to be created in the Jira project.\n\nError messages:\n{error_messages}"
    """
    
    def create_tickets_creator_agent() -> Agent:
        return Agent(
            name="Tickets Creator",
            instructions=AGENT_PROMPT,
            tools=[create_jira_issue],
            model=Config.get().agents.model,
            output_type=CreateIssuesResponse,
        )

    Here we set the tools parameter and give the agent the create_jira_issue tool, which we'll create in the next step.

    Step 5: Providing Tools

    One of the most powerful features of agents is their ability to use tools to interact with the outside world. One could argue that the use of tools is what turns an interaction with an LLM into an agent. The OpenAI Agents SDK allows agents to use three kinds of tools:
    
    • Hosted tools: Provided directly by OpenAI, such as searching the web or files, computer use, and running code, among others.
    • Function calling: Using any Python function as a tool.
    • Agents as tools: Allowing agents to call other agents without handing off.

    For our use case, we will use function calling and implement a function that creates the Jira issues using Jira's REST API. By personal choice, I decided to split it into two files:

    • In taskpilot/utils/jira_interface_functions.py we write the functions that interact with the Jira REST API through HTTP requests.
    • In taskpilot/utils/agents_tools.py we write wrappers around those functions to be provided to the agents. These wrapper functions add response parsing to give the agent a processed text response instead of raw JSON. That said, the agent should also be able to handle and understand JSON as a response.

    First we implement the create_issue() function in taskpilot/utils/jira_interface_functions.py:

    # taskpilot/utils/jira_interface_functions.py
    
    import os
    from typing import Optional
    import json
    from urllib.parse import urljoin
    import requests
    from requests.auth import HTTPBasicAuth
    from utils.config_parser import Config
    
    JIRA_AUTH = HTTPBasicAuth(Config.get().jira.user, str(os.getenv("ATLASSIAN_API_KEY")))
    
    def create_issue(
        project_key: str,
        title: str,
        description: str,
        issuetype: str,
        duedate: Optional[str] = None,
        assignee_id: Optional[str] = None,
        labels: Optional[list[str]] = None,
        priority_id: Optional[str] = None,
        reporter_id: Optional[str] = None,
    ) -> requests.Response:
    
        payload = {
            "fields": {
                "project": {"key": project_key},
                "summary": title,
                "issuetype": {"name": issuetype},
                "description": {
                    "content": [
                        {
                            "content": [
                                {
                                    "text": description,
                                    "type": "text",
                                }
                            ],
                            "type": "paragraph",
                        }
                    ],
                    "type": "doc",
                    "version": 1,
                },
            }
        }
    
        if duedate:
            payload["fields"].update({"duedate": duedate})
        if assignee_id:
            payload["fields"].update({"assignee": {"id": assignee_id}})
        if labels:
            payload["fields"].update({"labels": labels})
        if priority_id:
            payload["fields"].update({"priority": {"id": priority_id}})
        if reporter_id:
            payload["fields"].update({"reporter": {"id": reporter_id}})
    
        endpoint_url = urljoin(Config.get().jira.url_rest_api, "issue")
    
        headers = {"Accept": "application/json", "Content-Type": "application/json"}
    
        response = requests.post(
            endpoint_url,
            data=json.dumps(payload),
            headers=headers,
            auth=JIRA_AUTH,
            timeout=Config.get().jira.request_timeout,
        )
        return response

    As you can see, we need to authenticate to our Jira account using our Jira user and a corresponding API key, which we can obtain in the Atlassian Account Management.

    In taskpilot/utils/agents_tools.py we implement the create_jira_issue() function, which we'll then provide to the TicketsCreator agent:

    # taskpilot/utils/agents_tools.py
    
    from agents import function_tool
    from utils.models import ActionItem
    from utils.jira_interface_functions import create_issue
    
    @function_tool
    def create_jira_issue(action_item: ActionItem) -> str:
        response = create_issue(
            project_key=action_item.project,
            title=action_item.title,
            description=action_item.description,
            issuetype=action_item.issuetype,
            duedate=action_item.due_date,
            assignee_id=None,
            labels=None,
            priority_id=None,
            reporter_id=None,
        )
    
        if response.ok:
            return f"Successfully created the issue. Response message: {response.text}"
        else:
            return f"There was an error trying to create the issue. Error message: {response.text}"

    Important: The @function_tool decorator is what makes this function usable by our agent. The agent can now call this function and pass it an ActionItem object. The function then uses the create_issue function, which accesses the Jira API to create a new issue.
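    Under the hood, the decorator derives a JSON schema for the tool's parameters from the type hints, so the model knows which fields to supply. You can get a feel for that schema with plain Pydantic (a sketch of the mechanism with a trimmed-down stand-in model, not the SDK's exact output):

```python
from typing import Optional
from pydantic import BaseModel

# Trimmed-down stand-in for ActionItem; the SDK builds a similar JSON
# schema from the tool function's type hints and hands it to the model.
class ActionItem(BaseModel):
    title: str
    description: str
    due_date: Optional[str] = None

schema = ActionItem.model_json_schema()
print(schema["required"])  # ['title', 'description']
print(schema["properties"]["due_date"])  # optional: allows string or null
```

    Fields without defaults become required in the schema, while Optional fields with defaults do not, which is why the required/optional split in our models matters.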

    Step 6: Configuring the Application

    To make our application parametrizable, we'll use a config.yml file for the configuration settings, as well as a .env file for the API keys.

    The configuration of the application is separated into:

    • agents: Configures the agents and the access to the OpenAI API. Here we have two parameters: model, which is the LLM that will be used by the agents, and OPENAI_API_KEY, in the .env file, to authenticate against the OpenAI API. You can obtain an OpenAI API key in your OpenAI Dev Platform.
    • jira: Configures the access to the Jira API. Here we need four parameters: url_rest_api, which is the URL of the REST API of our Jira instance; user, which is the user we use to access Jira; request_timeout, which is the timeout in seconds to wait for the server to send data before giving up; and finally ATLASSIAN_API_KEY, in the .env file, to authenticate against your Jira instance.

    Here is our .env file, which in the next step will be loaded into our application in main.py using the python-dotenv library:

    OPENAI_API_KEY=some-api-key
    ATLASSIAN_API_KEY=some-api-key

    And here is our config.yml file:

    # config.yml
    
    agents:
      model: "o4-mini"
    jira:
      url_rest_api: "https://your-domain.atlassian.net/rest/api/3/"
      user: "[email protected]"
      request_timeout: 5

    We'll also create a config parser at taskpilot/utils/config_parser.py to load this configuration. For this we implement the Config class as a singleton (meaning there can only be one instance of this class throughout the application's lifespan).

    # taskpilot/utils/config_parser.py
    
    from pathlib import Path
    import yaml
    from pydantic import BaseModel
    
    class AgentsConfig(BaseModel):
        model: str
    
    class JiraConfig(BaseModel):
        url_rest_api: str
        user: str
        request_timeout: int
    
    class ConfigModel(BaseModel):
        agents: AgentsConfig
        jira: JiraConfig
    
    class Config:
        _instance: ConfigModel | None = None
    
        @classmethod
        def load(cls, path: str = "config.yml") -> None:
            if cls._instance is None:
                with open(Path(path), "r", encoding="utf-8") as config_file:
                    raw_config = yaml.safe_load(config_file)
                cls._instance = ConfigModel(**raw_config)
    
        @classmethod
        def get(cls, path: str = "config.yml") -> ConfigModel:
            if cls._instance is None:
                cls.load(path)
            return cls._instance

    Step 7: Bringing It All Together in main.py

    Finally, in taskpilot/main.py, we'll bring everything together. This script loads the meeting transcript, creates an instance of the TaskPilotRunner, and then calls the run() method.

    # taskpilot/main.py
    
    import os
    import asyncio
    from dotenv import load_dotenv
    
    from taskpilot_runner import TaskPilotRunner
    
    # Load the variables in the .env file
    load_dotenv()
    
    def load_meeting_transcript_txt(file_path: str) -> str:
        # ...
        return meeting_transcript
    
    async def main():
        print("TaskPilot application starting...")
    
        meeting_transcript = load_meeting_transcript_txt("meeting_transcript.txt")
    
        await TaskPilotRunner().run(meeting_transcript)
    
    if __name__ == "__main__":
        asyncio.run(main())

    Step 8: Monitoring Our Runs in the OpenAI Dev Platform

    As mentioned, one of the advantages of the OpenAI Agents SDK is that, thanks to its tracing feature, it is possible to visualize the entire execution flow of our agents. This makes it easy to debug and understand what's happening under the hood in the OpenAI Dev Platform.

    In the Traces Dashboard one can:
    
    • Follow each run of the agents workflow.
    Screenshot by the author
    • Understand exactly what the agents did within the agent workflow and monitor performance.
    Screenshot by the author
    • Debug every call to the OpenAI API as well as monitor how many tokens were used in each input and output.
    Screenshot by the author

    So take advantage of this feature to evaluate, debug, and monitor your agent runs.

    Conclusion

    And that's it! In these eight simple steps we have implemented an application that can automatically create Jira issues from a meeting transcript. Thanks to the simple interface of the OpenAI Agents SDK you can easily create agents programmatically to help you automate your tasks!

    Feel free to clone the repository (the project as described in this post is in the branch function_calling), try it out for yourself, and start building your own AI-powered applications!

    GitHub – juancarlos2701/TaskPilot


    💡 Coming Up Next:
    
    In an upcoming post, we'll dive into how to implement your own MCP server to further extend our agents' capabilities and allow them to interact with external systems beyond your local tools. Stay tuned!

    🙋‍♂️ Let's Connect
    
    If you have questions, feedback, or just want to follow along with future projects:


    Reference

    This article is inspired by the "OpenAI: Agents SDK" course from LinkedIn Learning.



