
    How to Use an LLM-Powered Boilerplate for Building Your Own Node.js API

By FinanceStarGate · February 21, 2025 · 7 min read

For a long time, one of the common ways to start a new Node.js project has been to use a boilerplate template. These templates help developers reuse familiar code structures and implement standard features, such as access to cloud file storage. With the latest developments in LLMs, project boilerplates appear more useful than ever.

Building on this progress, I have extended my existing Node.js API boilerplate with a new tool, LLM Codegen. This standalone feature enables the boilerplate to automatically generate module code for any purpose based on a text description. The generated module comes complete with E2E tests, database migrations, seed data, and the necessary business logic.

History

I originally created a GitHub repository for a Node.js API boilerplate to consolidate the best practices I have developed over the years. Much of the implementation is based on code from a real Node.js API running in production on AWS.

I am passionate about vertical slicing architecture and Clean Code principles, which keep the codebase maintainable and clean. With recent advancements in LLMs, particularly their support for large contexts and their ability to generate high-quality code, I decided to experiment with generating clean TypeScript code based on my boilerplate. The boilerplate follows specific structures and patterns that I believe are of high quality. The key question was whether the generated code would follow the same patterns and structure. Based on my findings, it does.

To recap, here is a quick highlight of the Node.js API boilerplate's key features:

    • Vertical slicing architecture based on DDD and MVC principles
    • Service input validation using Zod
    • Decoupling application components with dependency injection (InversifyJS)
    • Integration and E2E testing with Supertest
    • Multi-service setup using Docker Compose
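To illustrate the parse-at-the-boundary validation pattern the boilerplate applies with Zod, here is a minimal hand-rolled sketch. Zod itself is intentionally omitted to keep the snippet dependency-free, and the `CreateBookInput` fields are hypothetical, not taken from the repository:

```typescript
// Minimal parse-at-the-boundary validation, mirroring the pattern the
// boilerplate implements with Zod (schema.parse(input) throws on bad data).
interface CreateBookInput {
  title: string;
  status: "available" | "borrowed";
}

const STATUSES = ["available", "borrowed"] as const;

function parseCreateBookInput(raw: unknown): CreateBookInput {
  const obj = (raw ?? {}) as Record<string, unknown>;
  if (typeof obj.title !== "string" || obj.title.length === 0) {
    throw new Error("title must be a non-empty string");
  }
  if (
    typeof obj.status !== "string" ||
    !(STATUSES as readonly string[]).includes(obj.status)
  ) {
    throw new Error(`status must be one of: ${STATUSES.join(", ")}`);
  }
  return { title: obj.title, status: obj.status as CreateBookInput["status"] };
}
```

With Zod, the same contract collapses to a schema declaration plus a single `parse` call, which is why the boilerplate standardizes on it.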

Over the past month, I have spent my weekends formalizing the solution and implementing the necessary code-generation logic. Below, I will share the details.

    Implementation Overview

Let's explore the specifics of the implementation. All code-generation logic is organized at the project root level, inside the llm-codegen folder, ensuring easy navigation. The Node.js boilerplate code has no dependency on llm-codegen, so it can be used as a regular template without modification.

It covers the following use cases:

    • Generating clean, well-structured code for a new module based on an input description. The generated module becomes part of the Node.js REST API application.
    • Creating database migrations and extending seed scripts with basic data for the new module.
    • Generating and fixing E2E tests for the new code, ensuring all tests pass.

The code generated in the first stage is clean and adheres to vertical slicing architecture principles. It includes only the business logic needed for CRUD operations. Compared with other code-generation approaches, it produces clean, maintainable, and compilable code with valid E2E tests.

The second use case involves generating a DB migration with the appropriate schema and updating the seed script with the necessary data. This task is particularly well suited to an LLM, which handles it exceptionally well.
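As a rough illustration of that output, here is a hand-written sketch of an SQLite migration and seed for a hypothetical orders module. The table name, columns, and file layout are assumptions for illustration; the boilerplate's actual migration format may differ:

```typescript
// Sketch of the kind of migration the tool generates for a new "orders"
// module, expressed as plain SQL strings (SQLite dialect, as used for
// E2E runs). Columns are illustrative, not copied from the repository.
export const up = `
CREATE TABLE orders (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'new',
  description TEXT,
  image_url TEXT,
  created_at TEXT NOT NULL DEFAULT (datetime('now'))
);`;

export const down = `DROP TABLE IF EXISTS orders;`;

// Seed data appended to the seed script for the new module.
export const seed = `
INSERT INTO orders (name, status, description)
VALUES ('Sample order', 'new', 'Seed row for E2E tests');`;
```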

The final use case is generating E2E tests, which help confirm that the generated code works correctly. During E2E test runs, an SQLite3 database is used for migrations and seeds.

The primarily supported LLM clients are OpenAI and Claude.

How to Use It

To get started, navigate to the root folder llm-codegen and install all dependencies by running:

    npm i

llm-codegen does not rely on Docker or any other heavy third-party dependencies, making setup and execution straightforward. Before running the tool, make sure you set at least one *_API_KEY environment variable in the .env file with the appropriate API key for your chosen LLM provider. All supported environment variables are listed in the .env.sample file (OPENAI_API_KEY, CLAUDE_API_KEY, and so on). You can use OpenAI, Anthropic Claude, or OpenRouter LLaMA. As of mid-December, OpenRouter LLaMA is surprisingly free to use: it is possible to register here and obtain a token for free usage. However, the output quality of this free LLaMA model could be better, as much of the generated code fails to pass the compilation stage.
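The key-based provider selection might be sketched like this. The precedence order and the OPENROUTER_API_KEY variable name are my assumptions, not taken from the repository; check .env.sample for the authoritative list:

```typescript
// Pick an LLM provider based on which *_API_KEY variables are set.
// Precedence order and the OpenRouter key name are illustrative guesses.
type Provider = "openai" | "claude" | "openrouter";

function resolveProvider(env: Record<string, string | undefined>): Provider {
  if (env.OPENAI_API_KEY) return "openai";
  if (env.CLAUDE_API_KEY) return "claude";
  if (env.OPENROUTER_API_KEY) return "openrouter";
  throw new Error(
    "Set at least one *_API_KEY variable in .env (see .env.sample)"
  );
}
```

In the real tool you would call `resolveProvider(process.env)` once at startup and fail fast with a clear message when no key is configured.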

To start llm-codegen, run the following command:

npm run start

Next, you will be prompted to enter the module description and name. In the module description, you can specify all necessary requirements, such as entity attributes and required operations. The core remaining work is performed by micro-agents: Developer, Troubleshooter, and TestsFixer.

Here is an example of a successful code generation:

Successful code generation

Below is another example, demonstrating how a compilation error was fixed:

The following is an example of generated orders module code:
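As a condensed sketch of the shape such a generated vertical slice takes, here is a hypothetical orders entity and CRUD service. An in-memory Map stands in for the real repository layer, and the actual generated module also includes a controller, routes, DTOs, and DI wiring, all omitted here:

```typescript
// Condensed sketch of a generated "orders" slice: entity plus CRUD service.
interface Order {
  id: number;
  name: string;
  status: string;
  description?: string;
  imageUrl?: string;
}

class OrderService {
  private orders = new Map<number, Order>();
  private nextId = 1;

  create(data: Omit<Order, "id">): Order {
    const order: Order = { id: this.nextId++, ...data };
    this.orders.set(order.id, order);
    return order;
  }

  findById(id: number): Order | undefined {
    return this.orders.get(id);
  }

  update(id: number, patch: Partial<Omit<Order, "id">>): Order | undefined {
    const existing = this.orders.get(id);
    if (!existing) return undefined;
    const updated = { ...existing, ...patch };
    this.orders.set(id, updated);
    return updated;
  }

  delete(id: number): boolean {
    return this.orders.delete(id);
  }
}
```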

A key detail is that you can generate code step by step, starting with one module and adding others until all required APIs are complete. This approach allows you to generate code for all required modules in just a few command runs.

    How It Works

As mentioned earlier, all work is performed by these micro-agents: Developer, Troubleshooter, and TestsFixer, managed by the Orchestrator. They run in the listed order, with the Developer generating most of the codebase. After each code-generation step, a check is performed for missing files based on their roles (e.g., routes, controllers, services). If any files are missing, a new code-generation attempt is made, with instructions about the missing files and examples for each role included in the prompt. Once the Developer completes its work, TypeScript compilation begins. If any errors are found, the Troubleshooter takes over, passing the errors to the prompt and waiting for the corrected code. Finally, when the compilation succeeds, E2E tests are run. Whenever a test fails, the TestsFixer steps in with specific prompt instructions, ensuring all tests pass and the code stays clean.
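The control flow just described can be sketched as a generate-check-fix loop. The agent internals are stubbed here, and the retry limit is an assumption; in the real tool each step calls an LLM and writes files:

```typescript
// Control-flow sketch of the Orchestrator: Developer generates, then each
// checking step (compile, E2E) is retried with its fixing agent until it
// passes or the attempt budget runs out.
type StepResult = { ok: boolean; errors: string[] };

interface Agents {
  developer: () => void;                      // generates the module code
  compile: () => StepResult;                  // tsc compilation check
  troubleshooter: (errors: string[]) => void; // feeds errors back to the LLM
  runE2e: () => StepResult;                   // runs the generated E2E tests
  testsFixer: (errors: string[]) => void;     // fixes failing tests
}

function retry(
  check: () => StepResult,
  fix: (errors: string[]) => void,
  maxFixes: number
): boolean {
  for (let i = 0; ; i++) {
    const result = check();
    if (result.ok) return true;
    if (i >= maxFixes) return false;
    fix(result.errors);
  }
}

function orchestrate(agents: Agents, maxFixes = 3): boolean {
  agents.developer();
  if (!retry(agents.compile, agents.troubleshooter, maxFixes)) return false;
  return retry(agents.runE2e, agents.testsFixer, maxFixes);
}
```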

All micro-agents are derived from the BaseAgent class and actively reuse its base method implementations. The Developer implementation in the repository is a good reference.
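The inheritance structure might look roughly like this. The method names and the shape of the LLM call are my assumptions for illustration, not the repository's actual API:

```typescript
// Structural sketch of the agent hierarchy: BaseAgent owns the LLM
// round-trip, each subclass supplies its own prompt.
abstract class BaseAgent {
  constructor(protected callLlm: (prompt: string) => string) {}

  protected abstract buildPrompt(task: string): string;

  // Shared pipeline: build the role-specific prompt, send it to the LLM.
  run(task: string): string {
    return this.callLlm(this.buildPrompt(task));
  }
}

class DeveloperAgent extends BaseAgent {
  protected buildPrompt(task: string): string {
    return `You generate TypeScript modules for a Node.js API.\nTask: ${task}`;
  }
}
```

The Troubleshooter and TestsFixer would extend BaseAgent the same way, overriding only the prompt-building step.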

Each agent uses its own specific prompt. Check out this GitHub link for the prompt used by the Developer.

After dedicating significant effort to research and testing, I refined the prompts for all micro-agents, resulting in clean, well-structured code with very few issues.

During development and testing, the tool was used with various module descriptions, ranging from simple to highly detailed. Here are a few examples:

    - The module responsible for library book management must handle endpoints for CRUD operations on books.
    - The module responsible for orders management. It should provide CRUD operations for handling customer orders. Users can create new orders, read order details, update order statuses or information, and delete orders that are canceled or completed. An order must have the following attributes: name, status, placed source, description, image url
    - Asset Management System with an "Assets" module offering CRUD operations for company assets. Users can add new assets to the inventory, read asset details, update information such as maintenance schedules or asset locations, and delete records of disposed or sold assets.

Testing with gpt-4o-mini and claude-3-5-sonnet-20241022 showed comparable output code quality, although Sonnet is more expensive. Claude Haiku (claude-3-5-haiku-20241022), while cheaper and similar in price to gpt-4o-mini, often produces non-compilable code. Overall, with gpt-4o-mini, a single code-generation session consumes an average of around 11k input tokens and 15k output tokens. This amounts to a cost of roughly 2 cents per session, based on token pricing of 15 cents per 1M input tokens and 60 cents per 1M output tokens (as of December 2024).

Below are Anthropic usage logs showing token consumption:

Based on my experimentation over the past few weeks, I conclude that while there may still be occasional issues with getting generated tests to pass, 95% of the time the generated code is compilable and runnable.

I hope you found some inspiration here and that this serves as a starting point for your next Node.js API or an upgrade to your current project. Should you have suggestions for improvements, feel free to contribute by submitting a PR for code or prompt updates.

If you enjoyed this article, feel free to clap or share your thoughts in the comments, whether ideas or questions. Thanks for reading, and happy experimenting!

UPDATE [February 9, 2025]: The LLM-Codegen GitHub repository was updated with DeepSeek API support. It is cheaper than gpt-4o-mini and offers nearly the same output quality, but it has a longer response time and sometimes struggles with API request errors.

Unless otherwise noted, all images are by the author.


