    Artificial Intelligence

    How to Use an LLM-Powered Boilerplate for Building Your Own Node.js API

By Editor Times Featured · February 21, 2025 · 7 min read

For a long time, one of the most common ways to start a new Node.js project was to use a boilerplate template. These templates help developers reuse familiar code structures and implement standard features, such as access to cloud file storage. With the latest developments in LLMs, project boilerplates appear to be more useful than ever.

Building on this progress, I've extended my existing Node.js API boilerplate with a new tool, LLM Codegen. This standalone feature enables the boilerplate to automatically generate module code for any purpose based on a text description. The generated module comes complete with E2E tests, database migrations, seed data, and the necessary business logic.

History

I originally created a GitHub repository for a Node.js API boilerplate to consolidate the best practices I've developed over the years. Much of the implementation is based on code from a real Node.js API running in production on AWS.

I'm passionate about vertical slicing architecture and Clean Code principles, which keep the codebase maintainable and clean. With recent advancements in LLMs, particularly their support for large contexts and their ability to generate high-quality code, I decided to experiment with generating clean TypeScript code based on my boilerplate. The boilerplate follows specific structures and patterns that I believe are of high quality. The key question was whether the generated code would follow the same patterns and structure. Based on my findings, it does.

To recap, here's a quick highlight of the Node.js API boilerplate's key features:

• Vertical slicing architecture based on DDD & MVC principles
• Service input validation using Zod
• Decoupling application components with dependency injection (InversifyJS)
• Integration and E2E testing with Supertest
• Multi-service setup using Docker Compose
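To illustrate the dependency-injection point, here is a minimal hand-rolled sketch. The boilerplate itself uses InversifyJS decorators and a container; the `OrderRepository` and `OrderService` names below are hypothetical stand-ins:

```typescript
// Minimal sketch of the dependency-injection idea from the feature list.
// The boilerplate uses InversifyJS; these class names are hypothetical.
interface OrderRepository {
  findAll(): string[];
}

// Concrete implementation that can be swapped out (e.g. for a DB-backed one).
class InMemoryOrderRepository implements OrderRepository {
  findAll(): string[] {
    return ["order-1", "order-2"];
  }
}

class OrderService {
  // Depending on the interface, not the concrete class, keeps components decoupled.
  constructor(private readonly repo: OrderRepository) {}

  listOrders(): string[] {
    return this.repo.findAll();
  }
}

const service = new OrderService(new InMemoryOrderRepository());
console.log(service.listOrders()); // ["order-1", "order-2"]
```

Because `OrderService` only sees the interface, tests can inject an in-memory repository while production wires in a real database implementation.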

Over the past month, I've spent my weekends formalizing the solution and implementing the necessary code-generation logic. Below, I'll share the details.

    Implementation Overview

Let's explore the specifics of the implementation. All code-generation logic is organized at the project root level, inside the llm-codegen folder, ensuring easy navigation. The Node.js boilerplate code has no dependency on llm-codegen, so it can be used as a regular template without modification.

It covers the following use cases:

• Generating clean, well-structured code for a new module based on an input description. The generated module becomes part of the Node.js REST API application.
• Creating database migrations and extending seed scripts with basic data for the new module.
• Generating and fixing E2E tests for the new code and ensuring all tests pass.

The code generated in the first stage is clean and adheres to vertical slicing architecture principles. It includes only the necessary business logic for CRUD operations. Compared to other code-generation approaches, it produces clean, maintainable, and compilable code with valid E2E tests.

The second use case involves generating a DB migration with the appropriate schema and updating the seed script with the necessary data. This task is particularly well-suited for an LLM, which handles it exceptionally well.

The final use case is generating E2E tests, which help confirm that the generated code works correctly. During E2E test runs, an SQLite3 database is used for migrations and seeds.
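To make the E2E stage concrete, here is the shape of such a check. The boilerplate uses Supertest against the real generated module; this sketch uses only Node built-ins and a stubbed `/orders` route, so every name in it is illustrative:

```typescript
// Shape of an E2E check like the ones llm-codegen generates and fixes.
// The boilerplate uses Supertest; this sketch sticks to Node built-ins
// and a stubbed route instead of a real generated module.
import { createServer } from "node:http";
import { once } from "node:events";

const server = createServer((req, res) => {
  if (req.method === "GET" && req.url === "/orders") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify([{ id: 1, title: "demo", status: "created" }]));
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(0); // pick a random free port
await once(server, "listening");
const { port } = server.address() as { port: number };

// The assertion an E2E test would make: the endpoint exists and returns data.
const response = await fetch(`http://127.0.0.1:${port}/orders`);
const orders = await response.json();
console.log(response.status, orders.length); // 200 1

server.close();
```

A real test in the boilerplate would additionally run migrations and seeds against the SQLite3 database before issuing requests.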

The primarily supported LLM clients are OpenAI and Claude.

How to Use It

To get started, navigate to the root folder llm-codegen and install all dependencies by running:

    npm i

llm-codegen doesn't rely on Docker or any other heavy third-party dependencies, making setup and execution simple and straightforward. Before running the tool, make sure you set at least one *_API_KEY environment variable in the .env file with the appropriate API key for your chosen LLM provider. All supported environment variables are listed in the .env.sample file (OPENAI_API_KEY, CLAUDE_API_KEY, etc.). You can use OpenAI, Anthropic Claude, or OpenRouter LLaMA. As of mid-December, OpenRouter LLaMA is surprisingly free to use. You can register here and obtain a token for free usage. However, the output quality of this free LLaMA model could be improved, as much of the generated code fails to pass the compilation stage.
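For example, a minimal `.env` could contain a single provider key (the value below is a placeholder):

```shell
# .env — set at least one provider key; llm-codegen picks the matching client.
OPENAI_API_KEY=sk-your-key-here
# CLAUDE_API_KEY=...      # alternative: Anthropic Claude
```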

To start llm-codegen, run the following command:

npm run start

Next, you'll be asked to enter the module description and name. In the module description, you can specify all necessary requirements, such as entity attributes and required operations. The core remaining work is performed by micro-agents: Developer, Troubleshooter, and TestsFixer.

Here is an example of a successful code generation:

Successful code generation

Below is another example demonstrating how a compilation error was fixed:

The following is an example of generated code for an orders module:

A key detail is that you can generate code step by step, starting with one module and adding others until all required APIs are complete. This approach allows you to generate code for all required modules in just a few command runs.

    How It Works

As mentioned earlier, all work is performed by these micro-agents: Developer, Troubleshooter, and TestsFixer, managed by the Orchestrator. They run in the listed order, with the Developer generating most of the codebase. After each code-generation step, a check is performed for missing files based on their roles (e.g., routes, controllers, services). If any files are missing, a new code-generation attempt is made, including instructions in the prompt about the missing files along with examples for each role. Once the Developer completes its work, TypeScript compilation begins. If any errors are found, the Troubleshooter takes over, passing the errors to the prompt and waiting for the corrected code. Finally, when the compilation succeeds, E2E tests are run. Whenever a test fails, the TestsFixer steps in with specific prompt instructions, ensuring all tests pass and the code stays clean.
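The control flow above can be sketched as a retry loop over the agents. This is a hypothetical simplification, not the real Orchestrator code; the agent names match the article, but the toy agents below stub out the actual LLM calls:

```typescript
// Hypothetical sketch of the Orchestrator loop described above.
type StepResult = { ok: boolean; errors: string[] };

interface Agent {
  name: string;
  run(input: string): StepResult;
}

function orchestrate(agents: Agent[], description: string, maxRetries = 3): boolean {
  for (const agent of agents) {
    let result = agent.run(description);
    let attempt = 0;
    // Feed errors back into the prompt and retry until the step passes.
    while (!result.ok && attempt < maxRetries) {
      result = agent.run(`${description}\nFix these errors: ${result.errors.join("; ")}`);
      attempt += 1;
    }
    if (!result.ok) return false; // give up after maxRetries failed attempts
  }
  return true;
}

// Toy agents: the Troubleshooter "fails" once to exercise the retry path.
let compiled = false;
const agents: Agent[] = [
  { name: "Developer", run: () => ({ ok: true, errors: [] }) },
  {
    name: "Troubleshooter",
    run: () => {
      const ok = compiled;
      compiled = true; // first call reports a compile error, the retry succeeds
      return { ok, errors: ok ? [] : ["TS2304: Cannot find name 'OrderDto'"] };
    },
  },
  { name: "TestsFixer", run: () => ({ ok: true, errors: [] }) },
];

console.log(orchestrate(agents, "orders module")); // true
```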

All micro-agents derive from the BaseAgent class and actively reuse its base method implementations. Here is the Developer implementation for reference:
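As a rough illustration of that hierarchy, here is a simplified stand-in; the actual `BaseAgent` and `Developer` classes live in the linked repository, and the stubbed `callLlm` below replaces the real LLM client call:

```typescript
// Hypothetical sketch of the agent hierarchy; not the repository's real code.
abstract class BaseAgent {
  constructor(protected readonly systemPrompt: string) {}

  // Shared helper: in the real tool this invokes the configured LLM client.
  protected callLlm(userPrompt: string): string {
    return `// generated for: ${userPrompt}`; // stubbed for this sketch
  }

  abstract execute(input: string): string;
}

class Developer extends BaseAgent {
  constructor() {
    super("You generate TypeScript modules following the boilerplate patterns.");
  }

  execute(moduleDescription: string): string {
    return this.callLlm(moduleDescription);
  }
}

const dev = new Developer();
console.log(dev.execute("orders module")); // "// generated for: orders module"
```

Keeping the LLM call in the base class means each agent only differs by its prompt and its post-processing, which matches the description above.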

Each agent uses its own specific prompt. Check out this GitHub link for the prompt used by the Developer.

After dedicating significant effort to research and testing, I refined the prompts for all micro-agents, resulting in clean, well-structured code with very few issues.

During development and testing, the tool was used with various module descriptions, ranging from simple to highly detailed. Here are a few examples:

- The module responsible for library book management must handle endpoints for CRUD operations on books.
- The module responsible for orders management. It must provide CRUD operations for handling customer orders. Users can create new orders, read order details, update order statuses or information, and delete orders that are canceled or completed. Orders must have the following attributes: title, status, placed delivery, description, image url
- Asset Management System with an "Assets" module offering CRUD operations for company assets. Users can add new assets to the inventory, read asset details, update information such as maintenance schedules or asset locations, and delete records of disposed or sold assets.

Testing with gpt-4o-mini and claude-3-5-sonnet-20241022 showed comparable output code quality, although Sonnet is more expensive. Claude Haiku (claude-3-5-haiku-20241022), while cheaper and comparable in price to gpt-4o-mini, often produces non-compilable code. Overall, with gpt-4o-mini, a single code-generation session consumes an average of around 11k input tokens and 15k output tokens. This amounts to a cost of roughly 2 cents per session, based on token pricing of 15 cents per 1M input tokens and 60 cents per 1M output tokens (as of December 2024).
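As a sanity check, the quoted averages and rates multiply out as follows. A single pass lands at about a cent; the roughly-2-cents figure presumably also counts the extra calls made by the Troubleshooter and TestsFixer (an assumption on my part):

```typescript
// Back-of-envelope check on the quoted gpt-4o-mini session cost (Dec 2024 prices).
const inputTokens = 11_000;   // average input tokens per session (from the text)
const outputTokens = 15_000;  // average output tokens per session (from the text)
const inputUsdPerM = 0.15;    // USD per 1M input tokens
const outputUsdPerM = 0.60;   // USD per 1M output tokens

const costUsd =
  (inputTokens / 1_000_000) * inputUsdPerM +
  (outputTokens / 1_000_000) * outputUsdPerM;

console.log(costUsd); // about 0.011 USD, i.e. roughly a cent per pass
```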

Below are Anthropic usage logs showing token consumption:

Based on my experimentation over the past few weeks, I conclude that while there may still be some issues with getting generated tests to pass, 95% of the time the generated code is compilable and runnable.

I hope you found some inspiration here and that this serves as a starting point for your next Node.js API or an upgrade to your current project. Should you have suggestions for improvements, feel free to contribute by submitting a PR for code or prompt updates.

If you enjoyed this article, feel free to clap or share your thoughts in the comments, whether ideas or questions. Thanks for reading, and happy experimenting!

UPDATE [February 9, 2025]: The LLM-Codegen GitHub repository was updated with DeepSeek API support. It's cheaper than gpt-4o-mini and offers nearly the same output quality, but it has a longer response time and sometimes struggles with API request errors.

Unless otherwise noted, all images are by the author.


