
Santiago Bertinat • 28 APR 2026

AI and Engineering Process: Why Fundamentals Matter More Than Ever


Today, most conversations around AI in software development focus on speed, tools, and new capabilities. But very few teams are asking a more important question: is their system actually ready for it?

This gap is becoming increasingly common.

AI is rapidly becoming a core part of software development, but most teams are adopting it without rethinking how they make tech decisions and how those decisions impact long-term outcomes.

In practice, turning AI into real product outcomes requires a structured approach to how it is applied across the system.

Everyone is focused on the same things: new models, new tools, more AI, more speed. Every week, something new promises to make teams faster, more productive, and more efficient. But in that rush, something more fundamental is being overlooked.

Most teams are ignoring the foundations that make software development sustainable. And without strong fundamentals, AI doesn’t scale productivity; it scales mistakes.

Why the engineering process hasn’t changed

For decades, engineering teams have been refining how they work to achieve something that sounds simple but is consistently difficult in practice: building systems that are reliable, maintainable, and aligned with real requirements over time.

This didn’t emerge from tooling alone, but from discipline, constraints, and repetition. Strong teams tend to converge around the same foundations: clearly defined tasks, a consistent stack of tools and frameworks, shared coding guidelines, automated checks like linters and static analysis, reliable unit tests, and structured review and QA processes.

None of these elements are new, and none of them are particularly exciting on their own. 

However, they are what prevent the kinds of mistakes that become expensive at scale: rebuilding systems, hitting architectural limits too early, or accumulating technical debt that slows down product scaling.

With AI now embedded in the development lifecycle, the way teams structure their development process and underlying technical foundations becomes even more critical.

Without strong foundations, teams tend to see the same pattern: initial speed gains followed by inconsistency, rework, and loss of confidence.

The limitation is rarely the technology itself, but the system and the structure in which it operates.

The biggest problem: broken engineering process

This is where most teams start to break down.

In a traditional setup, assigning a developer a vague, contextless task that takes weeks to complete and is difficult to review would be considered poor practice.
Yet this is effectively how many teams are currently working with AI.

They expect high-quality outputs from poorly structured inputs. This often leads to approaches that prioritize speed over structure.

For AI to be effective, the same principles still apply. Tasks need to be small enough to be understood and validated. Context needs to be explicit rather than implied. Outputs need to be reviewable in isolation, so that quality can be assessed without ambiguity.

A good prompt is not a shortcut around the process. It is the result of well-defined workflows, clear execution patterns, and a system that supports consistent delivery, something most teams only achieve once they rethink how their product development process is structured.

In that sense, AI doesn’t remove the need for structure; it makes the absence of it immediately visible.

How your tech stack shapes scalable systems

The choice of stack plays a similar role.

Many new projects still start with minimal setups, particularly in ecosystems like Node.js, where micro-frameworks such as Express.js or Hono are often used as a base. While flexible setups can be useful, they often produce systems without structure, and that lack of consistency hurts maintainability and prevents the system from evolving in a way that supports growth.

When too many decisions are left open, the system (whether human or AI-driven) is forced to constantly decide how things should be done: how to structure data access, where to place business logic, how to handle external integrations, and how the system should evolve as it grows.

This leads to inconsistency, which directly impacts how the system behaves as it grows.

AI performs significantly better in environments where there is a clear structure to follow.

Opinionated frameworks reduce ambiguity by providing conventions and boundaries that guide both developers and agents. Frameworks like Ruby on Rails are a good example of this approach, where convention over configuration and a cohesive philosophy create a more predictable system.

Without that level of structure, AI doesn’t become more creative. It simply becomes less consistent.

Why clear rules improve code quality

Even with a strong framework in place, the absence of clear internal rules for how the team operates tends to produce similar issues, especially when the engineering process is not clearly defined.

In real-world projects, this often results in models that grow uncontrollably, business logic that ends up scattered across the codebase, and methods that become increasingly complex over time. These are not isolated mistakes, but predictable outcomes of systems that allow too much interpretation.

Reducing that ambiguity requires explicit tech decisions and strong decision-making principles that define how the system evolves over time.

For example, while Rails encourages the use of Active Record patterns that mix database access with business logic, many teams find that this leads to overly complex models as the system evolves. To avoid this, it becomes necessary to define stricter rules, limiting what is allowed within models, introducing service layers for business logic, and standardizing how errors and responses are handled as part of a more robust system design.

Similarly, features like callbacks, while powerful, often need to be restricted in order to maintain clarity at scale.
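As an illustration, here is a minimal service-object sketch in plain Ruby (the `Order` and `PlaceOrder` names are hypothetical, and no Rails dependency is assumed): the model keeps only data and simple predicates, while the business logic and its error handling live behind one explicit entry point instead of model callbacks.

```ruby
# Hypothetical example: a thin model plus a service object, in plain Ruby.
Order = Struct.new(:total, :status) do
  # Models keep data and simple predicates only.
  def payable?
    status == :pending && total.positive?
  end
end

# Business logic lives in a service with a single entry point and an
# explicit result, instead of fat models or hidden callbacks.
class PlaceOrder
  Result = Struct.new(:ok, :error)

  def initialize(order)
    @order = order
  end

  def call
    return Result.new(false, "order is not payable") unless @order.payable?

    @order.status = :placed # an explicit step, not an after_save callback
    Result.new(true, nil)
  end
end
```

Callers then handle success and failure uniformly, e.g. `result = PlaceOrder.new(order).call` followed by a check on `result.ok`, which is one way to standardize how errors and responses are handled across the system.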

The important part is not the specific rule itself, but the fact that it removes ambiguity.

Systems improve when fewer decisions are left open to interpretation.

That level of clarity is what enables both humans and AI to produce consistent results over time, and it reflects a broader principle: systems that scale require ownership over decisions, not just execution.

What used to be optional is now mandatory

The same applies to supporting tools within the development process.

Practices that were once considered optional (such as linters, static analysis, security checks, and automated validations) now play a much more central role. They are no longer just developer aids, but part of the validation systems and quality controls that define how work is evaluated and delivered, and part of the environment in which AI operates.

Without them, AI will still produce output, but that output will be shaped by whatever inconsistencies exist in the system. At scale, that tends to amplify variability rather than reduce it, affecting how stable the system remains as it grows.

What we consistently see is that speed without validation quickly turns into rework. The initial gains are real, but they are fragile if they are not supported by systems that enforce quality.

Designing the environment in which AI operates becomes part of the team’s responsibility. AI doesn’t introduce discipline into a system; it depends on a well-defined engineering process.

Testing and code quality as scaling enablers

Among all these elements, testing remains the most critical component for maintaining code quality.

Fast unit tests, reliable end-to-end tests, and continuous execution pipelines are what allow teams to move quickly without losing confidence in what they are building. Every change, regardless of whether it is written by a developer or generated by AI, needs to be validated automatically.
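As a minimal illustration (the `discounted_total` helper below is hypothetical), a fast, dependency-free unit test can run on every change, regardless of who or what wrote that change:

```ruby
# Hypothetical helper: apply a discount rate to an order total.
def discounted_total(total, rate)
  raise ArgumentError, "rate must be between 0 and 1" unless (0..1).cover?(rate)

  (total * (1 - rate)).round(2)
end

# Tests like these are cheap to generate and cheap to run on every change,
# whether the change was written by a developer or by an AI agent.
raise "expected 90.0" unless discounted_total(100, 0.10) == 90.0
raise "expected 100" unless discounted_total(100, 0) == 100
begin
  discounted_total(100, 1.5)
  raise "expected ArgumentError"
rescue ArgumentError
  # invalid rates are rejected before they reach the calculation
end
```

Checks this small keep feedback loops short, which is exactly what makes continuous execution pipelines viable.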

Without testing, confidence in the system erodes. And without confidence, maintaining delivery stability becomes impossible as the system grows.

One of the interesting shifts is that AI lowers the barrier for writing tests. Generating test cases is often easier than generating production-ready logic, which removes one of the common excuses for not investing in proper coverage.

The constraint is no longer effort. It is prioritization.

New projects vs structured systems

The impact of these differences becomes clear when comparing projects with and without strong foundations.

In practice, when new systems are built without clear rules and guidelines, the same patterns tend to appear quickly: inconsistent naming, increasingly long methods, unnecessary complexity, and a series of small decisions that degrade the overall structure and underlying software architecture.

This is not a reflection of individual capability, but of the environment in which the system is being developed. AI does not inherently “know” how to produce well-structured code. It reflects the patterns it is given.

In contrast, when strong practices are in place, the outcomes change significantly. Code becomes more consistent, decisions more predictable, and the system easier to maintain. Productivity improves, not because the team is working harder, but because the environment supports better tech decisions.

Even less experienced developers or AI agents are able to produce solid results when the system provides enough structure.

Context, in that sense, directly shapes output and determines whether a system can evolve into one that truly scales.

AI as a multiplier of tech decisions

At a practical level, AI behaves similarly to a very fast junior developer within your team’s workflow.

It can generate output quickly, but it relies on guidance, clear expectations, and sufficient context to be effective. Unlike a senior developer, it does not compensate for gaps in the system. It operates within them.

When standards are unclear, structure is weak, or rules are inconsistent, the output reflects those conditions. The result is typically code that is harder to maintain, more fragile, and more prone to errors, directly impacting long-term code quality.

This is why AI rarely fixes underlying issues. Instead, it tends to make them more visible and more frequent.

AI does not resolve disorder. It amplifies it.

What defines teams with strong engineering process and system architecture

The teams that are seeing meaningful results from AI are not necessarily the ones adopting the latest tools first. More often, they are the ones that already have strong engineering processes, clear software architecture, and consistent decision-making.

In those environments, AI acts as a multiplier of what is already working.

In less structured environments, the opposite tends to happen. Variability increases, rework becomes more common, and the perceived gains in speed are offset by a loss of consistency and quality.

AI, in that sense, is not inherently a productivity tool. It is a multiplier of the system it operates within.

Why engineering process defines product scaling

AI is not replacing fundamentals. It is making them more relevant.

When the foundation is strong, AI accelerates progress in a meaningful way. When it is weak, it exposes the gaps faster than before.

Many teams assume they are struggling with AI adoption, when in reality they are struggling with the decisions that shape how their systems are built.

At that point, the challenge is no longer about tools or models, but about creating the structure that allows those tools to be used effectively.

Take a look at how we approach building and structuring products!
