How AI Tools Are Creating Technical Debt in IoT Systems – and What to Do About It

In June 1996, the maiden flight of Ariane 5 took place – Europe's new heavy-lift launch vehicle designed to deliver payloads into low Earth orbit. The rocket exploded less than 40 seconds after liftoff. The failure was caused by specification and design errors in the software of the inertial reference system. The software module had been reused from Ariane 4 without verifying that its assumptions held in the new flight environment. It became one of the most expensive software mistakes in history.
Why am I recalling an event from thirty years ago in a text about technical debt generated by AI tools? Because it helps us remember a simple truth: in complex systems, the danger is not just "bad code," but code that looks acceptable yet doesn't fit the context. And AI assistants reproduce exactly this problem today.
As an expert in IIoT, particularly in predictive maintenance, I see the following: AI tools quickly generate functional code that looks suitable for a local task, but they do not guarantee that it is sound at the level of the entire system. In IIoT, this means a solution may be correct within an individual task or service, yet fail to account for hardware constraints, data flows, architectural conventions, or the real-world operating conditions of devices. As a result, locally correct code becomes a source of system failures and expensive maintenance, slowing development of the entire platform.
Four types of technical debt from AI
Technical debt is any decision that speeds things up now but costs more in the long run. I can highlight four main ways in which AI tools can create technical debt.
Reproducing legacy patterns and errors
The AI assistant generates suggestions based on the context of the code it sees in the current environment and cannot always identify broader design or architecture issues. GitHub explicitly notes that Copilot is limited in scope, depends on the context of the code being written, and can inherit errors and biases from existing repositories. So if the project already contains outdated methods, redundant data storage, or "workarounds" instead of proper architectural solutions, the AI takes this as a given and continues to build on it. In this sense, it works as an echo chamber: bad habits are not only preserved but rapidly multiplied.
And this is not just a theoretical risk. A study of 304,000 AI-generated changes across more than 6,000 real-world repositories showed that more than 15% of the changes from each AI tool tested had at least one code quality issue, and a quarter of those issues remained unfixed in the final version of the code.
In IoT systems, this process is particularly dangerous because the legacy pattern is rarely a local issue within a single module. If the assistant reproduces a weak solution in the firmware code, gateway services, or telemetry processing, it spreads quickly throughout the chain — from the device layer to the cloud side of the system.
“Quick fix” without building awareness
AI is very effective at solving local engineering tasks: it can quickly generate tests, write boilerplate code, or create standard CRUD endpoints. However, it does not see the overall structure – which databases are used for which data, what constraints exist, and how the components interact. A study by Ox Security of 300 open-source projects, 50 of which were built fully or partially with AI, found that the code was functional but lacked architectural judgment.
As a result, AI can create technical debt even without reproducing legacy patterns. If the architectural rules are not explicitly stated – in documents, decision records, or the prompt itself – the model treats the local solution as the whole picture. In complex IIoT systems, this looks like the following: time series, reference data, and logs are stored in different databases, each tuned for its workload – but the assistant, when asked to store new data, knows nothing about this topology and generates code that gradually erodes the architectural conventions the team has established.
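One practical countermeasure is to make such a convention explicit in code, so that a generated snippet cannot silently pick the wrong database. Here is a minimal Python sketch – the store names and data categories are invented for illustration, not taken from any real project:

```python
# Hypothetical storage-topology map: each data category is tuned
# for its workload, and the mapping lives in exactly one place.
STORAGE_TOPOLOGY = {
    "timeseries": "influxdb",   # high-write telemetry streams
    "reference": "postgres",    # device registry, slowly changing data
    "logs": "opensearch",       # full-text searchable logs
}

def resolve_store(category: str) -> str:
    """Return the backing store for a data category, or fail loudly."""
    try:
        return STORAGE_TOPOLOGY[category]
    except KeyError:
        raise ValueError(
            f"Unknown data category '{category}': extend STORAGE_TOPOLOGY "
            "via an architecture decision record, do not hard-code a store."
        )
```

The point is not the three lines of logic but the failure mode: code that bypasses the map fails review or raises at runtime, instead of quietly violating the topology.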
Duplication of concepts and the rising cost of change
The AI assistant doesn't know that the code it needs already exists elsewhere in the system, so it writes a new version. The result is multiple independent implementations of the same concept – and when a change is needed, developers waste time looking for all the duplicates.
An analysis by GitClear of 211 million changed lines of code from 2020–2024 showed that the share of duplicated code grew from 8.3% to 12.3%, and 2024 was the first year in which the volume of duplicated code exceeded the volume of refactored code. AI tools will likely accelerate this trend further. They make it possible to insert a new block of code with a single keystroke, but are less likely to suggest reusing an existing function from another part of the project – partly because of the limited context available to them.
In IoT systems, if the same concept – for example, packet parsing or connection authentication – is implemented independently in multiple places, fixing a bug in one copy without finding the others can leave devices in the field behaving differently on the same input. Resolving such inconsistencies requires not only changing the code, but synchronizing firmware updates across thousands of devices at once.
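The remedy is to give the concept a single home. As an illustration, here is a sketch of one shared parser for a hypothetical telemetry packet header – the layout is invented – that firmware tooling, gateway services, and cloud code would all import, so a format fix happens in exactly one place:

```python
import struct

# Hypothetical header layout, defined once for the whole system:
# version (u8), device id (u16), timestamp (u32), big-endian.
HEADER_FORMAT = ">BHI"
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)  # 7 bytes

def parse_header(packet: bytes) -> dict:
    """The one and only header parser; every layer imports this."""
    if len(packet) < HEADER_SIZE:
        raise ValueError("packet shorter than header")
    version, device_id, timestamp = struct.unpack(
        HEADER_FORMAT, packet[:HEADER_SIZE])
    return {"version": version, "device_id": device_id,
            "timestamp": timestamp}
```

When an assistant is asked to "read the incoming packet," pointing it at this module is cheaper than hunting down a fourth independent reimplementation later.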
Ignoring hardware limitations
IoT devices do not have the unlimited resources of cloud services. A gateway has a fixed amount of memory, limited network bandwidth, and a fixed battery budget. An AI assistant can take these constraints into account — but only if the developer states them explicitly.
If they are not, the assistant produces solutions for the environment it is most trained on – cloud and server systems, where memory is plentiful and the network is stable. The result is predictable: endless retry loops without timeouts, "heavy" text-based data formats instead of compact binary protocols, and code that compiles cleanly but ignores the specifications of the particular board.
A solution that works well in an emulator may fail when running on a real device with limited resources.
What you should do so that AI does not create technical debt in the project
AI in IoT applications requires a more rigorous engineering discipline than development without it. I'll describe four processes that help my team keep code quality under control.
Mandatory human code review
This sounds obvious, but in practice, when working with AI assistants, there is a temptation to accept the generated code without deep analysis – especially since more than half of the developers say that the code generally looks “correct.” According to a survey of over 1,100 developers, only 48% regularly review AI-generated code before committing.
The review should check not only whether the code compiles, but also whether it accounts for the constraints of the specific hardware, whether it duplicates existing logic, and whether the solution is compatible with the overall architecture of the system.
However, manual code reviews have a problem: AI assistants increase the volume and speed of new code faster than teams can adapt. According to LeadDev, 29% of organizations are already spending more time on code reviews than before. This means that in AI-driven development, human review quickly creates a bottleneck if not reinforced with guardrails and automated testing.
Limiting AI in critical parts of the project
Not all code is equally important. The team should clearly define "no-go areas" for autonomous AI generation: processing of incoming device packets, authentication and authorization logic, interrupt handling and watchdog timer logic, and any code that interacts directly with firmware.
A simple classification criterion: if a bug in this code requires a firmware update on field devices or compromises data integrity for all clients at once, AI should work only as an assistant under human supervision, and the final decision should rest with an engineer who understands the system context.
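Such a rule can be enforced mechanically in the review pipeline. A minimal sketch – the path patterns are invented examples, not a standard layout – that flags changed files falling into a no-go area for mandatory human review:

```python
from fnmatch import fnmatch

# Hypothetical team-defined "no-go areas" for autonomous AI changes.
NO_GO_PATTERNS = [
    "firmware/*",           # anything interacting directly with firmware
    "gateway/packet_rx/*",  # incoming device packet processing
    "services/auth/*",      # authentication and authorization logic
    "*/watchdog*",          # interrupt and watchdog timer logic
]

def needs_human_review(changed_paths: list) -> list:
    """Return the subset of changed paths that fall into a no-go area."""
    return [p for p in changed_paths
            if any(fnmatch(p, pattern) for pattern in NO_GO_PATTERNS)]
```

Wired into CI, such a check turns the team's criterion from a convention people must remember into a gate the pipeline applies to every change.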
Regular refactoring and monitoring
As the speed of code generation increases, so does the speed at which hidden problems accumulate. Regular refactoring is not just good practice but a necessity. From my experience, an architecture review should be held at least once every six months – with separate attention paid to areas where AI-generated code may have introduced hidden problems.
Alongside this, monitoring is necessary – but in IoT it has a broader scope than in a typical back-end system. From my experience, beyond the service-level degradation shown by tools like Datadog or AWS CloudWatch, it is important to track the state of the devices themselves: edge memory usage, device-to-gateway latency, and anomalies in telemetry. This is where AI-generated code with unaccounted-for hardware constraints surfaces first.
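As an illustration, a gateway-side health check over device heartbeats might look like the sketch below – the field names and thresholds are assumptions, not a standard:

```python
# Assumed alert thresholds for a resource-constrained device.
HEAP_FREE_MIN_BYTES = 16 * 1024  # alert below 16 KiB of free heap
LATENCY_MAX_MS = 500             # alert above 500 ms device-to-gateway

def check_heartbeat(hb: dict) -> list:
    """Return a list of alert strings for one device heartbeat."""
    alerts = []
    if hb["heap_free_bytes"] < HEAP_FREE_MIN_BYTES:
        alerts.append(f"{hb['device_id']}: low memory "
                      f"({hb['heap_free_bytes']} B free)")
    if hb["latency_ms"] > LATENCY_MAX_MS:
        alerts.append(f"{hb['device_id']}: high latency "
                      f"({hb['latency_ms']} ms)")
    return alerts
```

A slow memory leak from an AI-generated retry buffer, invisible in cloud-side dashboards, shows up here as a steadily shrinking heap long before devices start rebooting in the field.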
Conclusion
Technical debt existed long before AI became mainstream. However, AI can accelerate its accumulation – especially where there is no culture of documentation, architectural governance, and regular refactoring. In IoT, the cost of this acceleration is measured not only in developer time, but also in the reliability of thousands of physical devices.



