
How to Use a Powerful LLM Boilerplate to Create Your Node.js API


For a long time, one of the most common ways to start a new Node.js project has been to use boilerplate templates. These templates help developers avoid reimplementing common features, such as access to cloud file storage. With the latest developments in LLMs, project boilerplates appear to be more useful than ever.

Building on this idea, I extended my existing Node.js API boilerplate with a new tool: LLM codegen. This standalone feature enables the boilerplate to generate module code for any purpose based on a text description. The generated module comes complete with E2E tests, a database migration, seed data, and the required business logic.

History

I started creating a GitHub repository for a Node.js API boilerplate to consolidate the best practices I have developed over the years. Much of the implementation is based on code from a real Node.js API running in production on AWS.

I am passionate about clean architecture and clean code principles that keep the codebase maintainable and clean. With the latest developments in LLMs, especially their support for large contexts and their ability to generate high-quality code, I decided to try generating clean boilerplate code with them. This boilerplate follows the architecture and patterns that I believe are of high quality. The key question was whether the generated code would follow the same patterns and structure. Based on my findings, it does.

To recap, here is a quick highlight of the key Node.js API boilerplate features:

  • Vertical slicing architecture based on DDD and MVC principles
  • Services input validation using Zod
  • Decoupling application components with dependency injection (InversifyJS)
  • Integration and E2E testing with Supertest
  • Multi-service setup using Docker Compose

Over the past month, I spent my weekends formalizing the solution and implementing the necessary logic. Below, I will share the details.

Overview

Let's start by exploring the specifics of the implementation. All code-generation logic is organized at the project root level, inside the llm-codegen folder, which ensures easy navigation. The Node.js boilerplate code has no dependency on llm-codegen, so it can be used as a regular template without any modification.

It covers the following use cases:

  • Generating clean, well-structured code for a new module based on an input description. The generated module becomes part of the Node.js REST API application.
  • Creating a database migration and extending the seed script with basic data for the new module.
  • Generating and fixing E2E tests for the new code, ensuring that all tests pass.

The code generated for the first use case is clean and adheres to the boilerplate's architectural principles. It includes only the business logic needed for CRUD operations. Compared with other code-generation approaches, it produces clean, maintainable code that is integrated with valid E2E tests.

The second use case involves generating a DB migration with the appropriate schema and updating the seed script with the necessary data. This task is particularly well-suited for LLMs, which handle it exceptionally well.
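For illustration only, here is the kind of migration and seed content the tool might produce for a hypothetical orders module. The SQL, table, and column names below are assumptions based on the example module descriptions later in this article; the actual output depends on the boilerplate's migration format and your module description.

```typescript
// Hypothetical migration SQL for an "orders" module, targeting the SQLite3
// database used during E2E runs. Names are illustrative, not the tool's
// exact output.
const ordersUpSql = `
CREATE TABLE orders (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT NOT NULL,
  status TEXT NOT NULL,
  description TEXT,
  image_url TEXT,
  created_at TEXT DEFAULT CURRENT_TIMESTAMP
);`;

const ordersDownSql = `DROP TABLE IF EXISTS orders;`;

// Seed data that a generated seed script might insert for the new module.
const ordersSeed = [
  { name: "Sample order", status: "placed" },
  { name: "Archived order", status: "completed" },
];
```

The up/down pair keeps the migration reversible, and the seed entries give the E2E tests deterministic data to read back.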

The final use case is generating E2E tests, which help ensure that the generated code works correctly. During E2E tests, an SQLite3 database is used for migrations and seeding.
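To show the shape of such a test, here is a minimal self-contained sketch that spins up a throwaway HTTP server and asserts a CRUD-style round trip. The real generated tests use Supertest against the boilerplate application; this stand-in uses only Node's standard library so it runs anywhere, and the route and payload are invented for the example.

```typescript
import http from "node:http";
import { once } from "node:events";

// In-memory "orders" store standing in for the real module under test.
const orders: { id: number; name: string }[] = [];

const server = http.createServer((req, res) => {
  if (req.method === "POST" && req.url === "/orders") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const order = { id: orders.length + 1, ...JSON.parse(body) };
      orders.push(order);
      res.writeHead(201, { "content-type": "application/json" });
      res.end(JSON.stringify(order));
    });
  } else if (req.method === "GET" && req.url === "/orders") {
    res.writeHead(200, { "content-type": "application/json" });
    res.end(JSON.stringify(orders));
  } else {
    res.writeHead(404).end();
  }
});

// E2E-style check: create an order, then read it back over HTTP.
async function run(): Promise<number> {
  server.listen(0);
  await once(server, "listening");
  const { port } = server.address() as { port: number };
  const base = `http://127.0.0.1:${port}`;
  await fetch(`${base}/orders`, {
    method: "POST",
    body: JSON.stringify({ name: "Books" }),
  });
  const listed = (await (await fetch(`${base}/orders`)).json()) as unknown[];
  server.close();
  server.closeAllConnections();
  return listed.length;
}
```

A Supertest version would replace the raw `fetch` calls with `request(app).post("/orders")` chains, but the create-then-verify structure is the same.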

Mainly, the OpenAI and Claude LLM clients are supported.

How to Use

To get started, navigate to the root folder llm-codegen and install all dependencies by running:

npm i

llm-codegen does not rely on Docker or any other external dependencies, which makes setup and execution easy and straightforward. Before running the tool, make sure that you set at least one *_API_KEY environment variable in the .env file with the appropriate API key for your chosen LLM provider. All supported environment variables are listed in the .env.sample file (OPENAI_API_KEY, CLAUDE_API_KEY, etc.). You can use OpenAI, Anthropic Claude, or OpenRouter LLaMA. As of mid-December, OpenRouter LLaMA was surprisingly free to use; you can sign up here and get a token for free usage. However, the output quality of this free LLaMA model could be better, as most of the generated code fails to pass the compilation stage.

To start llm-codegen, run the following command:

npm run start

Next, you will be prompted to enter the module name and description. In the module description, you can specify all the necessary requirements, such as entity attributes and the required operations. The core remaining work is carried out by micro-agents: Developer, Troubleshooter, and TestsFixer.

Here is an example of a successful code generation:

Successful code generation

Below is another example, demonstrating how a compilation error was fixed:

Next is an example of the generated orders module code:
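To give a flavor of what such a generated module looks like, here is a minimal in-memory sketch of an orders service with CRUD operations. It is illustrative only: the real generated code follows the boilerplate's layered structure (repositories, DI bindings, Zod validation), and the attribute names are taken from the example module description later in this article.

```typescript
// Illustrative in-memory service for an "orders" module with CRUD operations.
// The real generated code plugs into the boilerplate's repository and DI layers.
interface Order {
  id: number;
  name: string;
  status: string;
  description?: string;
  imageUrl?: string;
}

class OrdersService {
  private orders = new Map<number, Order>();
  private nextId = 1;

  create(data: Omit<Order, "id">): Order {
    const order: Order = { id: this.nextId++, ...data };
    this.orders.set(order.id, order);
    return order;
  }

  findById(id: number): Order | undefined {
    return this.orders.get(id);
  }

  update(id: number, changes: Partial<Omit<Order, "id">>): Order | undefined {
    const existing = this.orders.get(id);
    if (!existing) return undefined;
    const updated = { ...existing, ...changes };
    this.orders.set(id, updated);
    return updated;
  }

  delete(id: number): boolean {
    return this.orders.delete(id);
  }
}
```

Only the business logic needed for CRUD is present, which matches the "clean, minimal output" goal described earlier.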

It is important to note that you can generate code step by step, starting with one module and adding others until all the required APIs are complete. This approach allows you to generate code for all the required modules with just a few commands.

How It Works

As mentioned earlier, all the work is performed by three micro-agents: Developer, Troubleshooter, and TestsFixer, controlled by an Orchestrator. They run in the listed order, with the Developer generating most of the code. After each code-generation step, a check is performed for missing files based on their roles (e.g., routes, controllers). If any files are missing, a new generation attempt is made, with additional instructions and examples included in the prompt on each pass. Once the Developer finishes its work, compilation begins. If any errors are found, the Troubleshooter takes over, passing the errors into the prompt and waiting for the fixed code. Finally, when compilation succeeds, the E2E tests are run. Whenever a test fails, the TestsFixer steps in with specific prompt instructions, ensuring that all tests pass and the code stays clean.
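The pipeline described above can be sketched roughly as follows. All names and signatures here are hypothetical stand-ins for the repository's actual orchestration code; the point is only the generate → fix-compilation → fix-tests control flow with bounded retries.

```typescript
// Illustrative sketch of the orchestration loop; names are hypothetical.
type StepResult = { ok: boolean; errors: string[] };

interface Agent {
  run(input: string): StepResult;
}

function orchestrate(
  developer: Agent,
  troubleshooter: Agent,
  testsFixer: Agent,
  description: string,
  maxAttempts = 3,
): boolean {
  // 1. Developer generates the module code from the description.
  let result = developer.run(description);

  // 2. Troubleshooter retries compilation, feeding the errors back into the prompt.
  for (let i = 0; !result.ok && i < maxAttempts; i++) {
    result = troubleshooter.run(result.errors.join("\n"));
  }
  if (!result.ok) return false;

  // 3. TestsFixer reruns failing E2E tests until they pass or attempts run out.
  let tests = testsFixer.run("run e2e");
  for (let i = 0; !tests.ok && i < maxAttempts; i++) {
    tests = testsFixer.run(tests.errors.join("\n"));
  }
  return tests.ok;
}
```

Capping the retries keeps a stubborn compilation or test failure from looping forever, which matters when every attempt costs tokens.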

All micro-agents are created based on the BaseAgent class and actively reuse its base implementation; see the Developer agent in the repository for reference.
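To give an idea of how the agents share that base, here is a minimal hypothetical sketch. The actual BaseAgent in the repository differs in details such as LLM client handling, retries, and file output; the class and method names below are assumptions.

```typescript
// Hypothetical sketch of the shared agent base class; not the repository's code.
abstract class BaseAgent {
  constructor(protected readonly systemPrompt: string) {}

  // Each concrete agent supplies its own task-specific prompt.
  protected abstract buildPrompt(input: string): string;

  // Shared flow: compose the prompt and delegate to an injected LLM client.
  generate(
    input: string,
    callLlm: (system: string, user: string) => string,
  ): string {
    return callLlm(this.systemPrompt, this.buildPrompt(input));
  }
}

class DeveloperAgent extends BaseAgent {
  constructor() {
    super("You generate clean Node.js module code following the boilerplate patterns.");
  }
  protected buildPrompt(input: string): string {
    return `Generate module code for: ${input}`;
  }
}
```

Injecting the LLM call as a function keeps the base class provider-agnostic, which matches the tool's support for multiple LLM clients.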

Each agent utilizes its own specific prompt. Check out this GitHub link to the prompt used by the Developer.

After investing significant research and testing effort, I refined the prompts for all the micro-agents, resulting in clean, well-structured code with very few remaining issues.

During development and testing, the tool was used with various module descriptions, ranging from simple to detailed. Here are a few examples:

- The module responsible for library book management must handle endpoints for CRUD operations on books.
- The module responsible for orders management. It must provide CRUD operations for handling customer orders. Users can create new orders, read order details, update order statuses or information, and delete orders that are canceled or completed. An order must have the following attributes: name, status, placed source, description, image url
- Asset Management System with an "Assets" module offering CRUD operations for company assets. Users can add new assets to the inventory, read asset details, update information such as maintenance schedules or asset locations, and delete records of disposed or sold assets.

Testing with gpt-4o-mini and claude-3-5-sonnet-20241022 showed comparable output code quality, although Sonnet is significantly more expensive. Claude Haiku (claude-3-5-haiku-20241022), while cheaper and similar in price to gpt-4o-mini, often produces incomplete code. Overall, with gpt-4o-mini, one code-generation session consumes an average of 11k input and output tokens. This amounts to roughly 2 cents per session, based on the gpt-4o-mini price of 15 cents per 1M input tokens (as of December 2024).

Below are the Anthropic usage logs showing token consumption:

Based on my testing over the past few weeks, I concluded that, while there may occasionally be some challenges in getting the generated tests to pass, 95% of the time the generated code was compilable and runnable.

I hope you found some inspiration here and that this serves as a starting point for your next Node.js API or as an upgrade to your current project. If you have suggestions for improvements, feel free to contribute by submitting a PR for code or prompt updates.

If you enjoyed this article, feel free to clap or share your thoughts in the comments, whether ideas or questions. Thank you for reading, and happy experimenting!

Update [February 9, 2025]: The llm-codegen GitHub repository was updated with DeepSeek API support. It is cheaper than gpt-4o-mini and offers nearly the same output quality, but it has longer response times and sometimes struggles with API request errors.

Unless otherwise noted, all images are by the author.
