Is the power of low-code real or outdated? How can we evaluate it?

An impossible situation

Imagine this situation: in a technology company, a team is in big trouble. They have to build an application to manage customer relationships, and they have only a few weeks to do it. It almost sounds like a mission impossible, doesn’t it? Yet here they are, three weeks later: not only did they finish ahead of schedule, they also put together a CRM system that works great and is tailored to their needs. No late nights writing code, no computer-wizard tricks, just the magic of low-code platforms.

These tools are completely changing the game, making software development a fast, efficient and affordable process. Impressive, isn’t it? But is this really true?

We read on the Web that low-code platforms are software development tools that allow people to create applications by writing little or no traditional code. Using a visual environment, where users can use drag-and-drop interfaces and pre-built templates, they enable the assembly and configuration of an app’s functionality. The focus is on simplifying the development process, making it accessible even to those without advanced programming skills.

They are particularly useful for rapid development of business applications, automation of processes and creation of customized solutions to meet specific business needs. With these platforms, companies of all sizes can innovate at an acceptable cost and adapt quickly in an ever-changing market.

Then full steam ahead with low-code! Or is it?

What is meant by a low-code platform?

Although, at first glance, there is a common idea of low-code, there is no single definition. Here are the positions of some top brands:

KPMG

KPMG invests heavily in low-code, to the extent that in 2021 it created the Low-Code Center of Excellence. Yet I could not find its own definition of low-code. Or rather, it answers the question What is low-code? in words that are not its own, but those of the company ServiceNow:
Low-code is a refined way for companies to develop high-end applications which, according to ServiceNow, can perform up to 10 times faster compared to those developed traditionally.

Honestly, that does not match many of the low-code platforms I know.

On another page it talks about low-code as “one of the more disruptive technologies to hit the enterprise since the cloud. It enables you to create powerful software applications using a simple graphical interface instead of arcane programming skills.”

Gartner

With a more structured approach, Gartner provides its own definition. On its Peer Insights site: “Gartner defines low-code application platforms (LCAPs) as application platforms that are used to rapidly develop and run custom applications by abstracting and minimizing the use of programming languages.”

It then shares its thoughts on the growth of the low-code market. Finally, it makes available one of its famous Magic Quadrants, where “Gartner defines LCAP solutions as application platforms used to rapidly develop custom applications.” That is, the reference to running the applications disappears.

IBM and Kyndryl

For IBM, “Low-code is a visual approach to software development that enables faster delivery of applications through minimal hand-coding.” On the same page, an interesting point is made about the difference between low-code and no-code: “However, no-code products are specifically targeted for business users, allowing them to create custom apps without expert development skills and knowledge.”

Kyndryl, the part of IBM services spun off into a company of its own, does not spend energy on the conceptual side but goes straight to practice, announcing that Microsoft recently awarded Kyndryl the Low Code Application Development Specialization.

These definitions show how each vendor shapes the definition of low-code by highlighting different criteria: some stress the rapidity of the development process, some also consider aspects of the production environment, some focus on the skills needed, and some address only the developers. However, they all put the visual, drag-and-drop aspects at the base of everything.

In my opinion, the previous definitions are flawed because they all refer to a generic application development being fast-tracked, without considering its many aspects, thus making people believe that low-code has some magical ability to make any application development go better.

I think this is the main reason why low-code is viewed with distrust by those with application development experience, both as technicians and decision-makers.

What are the elements of application development that are affected by low-code?

I think that one cannot objectively evaluate the innovations introduced by low-code without a frame of reference for what conventional application development requires.

Therefore, if you are unfamiliar with this world, in the following sections I describe an infrastructure model and a software architecture model that I use to frame low-code platforms.

The infrastructure stack - a model

What it looks like

Let us start with a trivial description of the infrastructure stack needed by applications.

The Hardware – This is the physical part of computer technology: the devices that enable the execution of programs, the connections between them, and the transfer of data. For example, our PC, notebook or desktop, off or on, is hardware.

The Operating System – This is fairly generic software, necessary to carry out basic interactions with the hardware. In practice it hides the complexity of the hardware from its users, whether they are humans or other software. On our PCs it is Windows, or macOS for those who have an Apple.

Middleware – This is an odd category of software to define, because it makes a set of specific features available, in a generic way, to a user. An excellent example on our PCs is spreadsheet software, whether Excel, Sheets or Numbers. Try opening it without doing anything else: you will see an empty table, with nothing else happening. The program provides you with lots of features to manage data within tables, but if you don’t start using it, modeling it, customizing it, it does nothing! Your spreadsheet middleware makes available a plethora of features for manipulating tables for all purposes: scientific, statistical, economic, etc. We might define middleware as software that, once started, does nothing until you put other software on top of it.

Applications – These are the software that, leaning directly on the Operating System or the Middleware, provide specific functionality. One example is browsers such as Edge, Safari, Chrome and Firefox, which rely directly on the Operating System and allow you to surf the Internet. Another may be an Excel sheet built to calculate the Tax Code, or a more complex one that manages a day book (Prima Nota); both are software that rely on the Excel middleware.

But the model does not end there. We must take into consideration two other levels that cut across the previous ones.

Management – Software tools to govern and control all the levels of the stack described earlier. These tools become indispensable as the number of systems grows. Keeping to the example of our PC, the analogy is harder here, but we can think of Windows Task Manager.

Security – This is a level whose importance has grown exponentially over the past twenty years. It is concerned with protecting data from unintended access (confidentiality), keeping it always valid (integrity) and available (availability).

These are two vital levels: the former as complexity increases, the latter always.

Where is it located?

Where does this model live in reality? What are the systems it can represent?

The first example we have before our eyes is our Windows PC or Mac. The hardware is what you turn on, and the operating system is Windows or macOS. Of actual middleware we have none; conceptually Excel is convenient to use as an example, but in reality Excel too is an application.

Applications, on the other hand, we have plenty of on the PC: for email, for listening to music, for viewing or editing photos, for writing, and so on.

The other place where this model becomes substance is in enterprise data centers. Within them there are often thousands of servers that we can see represented in this way.

Obviously, in these environments the elements at the various levels are different software technologies, all geared toward providing application services.

As for Hardware, there are enterprise-class devices that could hardly be made to work at home, because of their power supply or cooling needs.

Among the Operating Systems we find Windows again, but in a totally different configuration from what we are used to on our PCs. Then there are many kinds of Linux, such as Debian, Ubuntu, RedHat and CentOS, and even some dedicated to specific hardware, such as IBM’s AIX.

Middleware takes the lead: we have Apache HTTP Server, Apache Tomcat and Nginx to serve web pages and run the programs that interact with them, but also middleware that provides databases, such as MySQL, PostgreSQL, Oracle, Microsoft SQL Server or IBM DB2.

The infrastructure stack - how it has evolved

Different technology choices can be made at each level. But it should not be forgotten that a choice made at one level can affect or constrain other levels. For example, if you decide to use MS SQL Server for your databases, you are forced to use Windows as your operating system.

To overcome these constraints, new technologies have been introduced over time to decouple one layer from the other. The early years of the new century brought virtualization technology.

Through virtualization software, the hardware is abstracted: Operating Systems are installed inside virtual machines, which behave for all intents and purposes like real hardware. It is the virtualizer’s job to manage the various devices, assign them, and optimize resources such as processor, memory, disks and network ports.

The two immediate benefits that server-filled datacenters gained were:


  1. Optimization of hardware resources – Physical servers assigned to a single system but used at only 10% were a widespread reality that disappeared within 3-5 years.

  2. Improvement of management activities – in fact, in this field there has been an epochal leap. Before, if an application needed a test system, it could take months for the necessary hardware to be procured, installed and configured as the application required. With virtualization, a simple command allows an existing environment to be cloned in a matter of minutes.

In summary, virtualizers have achieved “masking” of hardware at the higher levels. We can add new servers to modern virtualizers, and they will take care of distributing the resources completely transparently to the systems above.

Between 2008 and 2015 we see another innovation, again related to the concept of virtualization: containers. Born conceptually to create a perfectly isolated environment for an application, they effectively virtualized Operating Systems.

The concept of a container is to make a standardized unit of software that groups the application code together with all its dependencies (such as libraries, frameworks, and system tools) into a single package. This allows the application to run reliably and consistently in any computing environment, be it a personal computer, a server in a data center, or an instance in a cloud environment.

Is it a coincidence that the explosion of cloud services began only after 2007?

The architecture of software (and other reflections)

Some developers think that the topics of the infrastructure part are of no interest and do not concern them. If you share this idea, sound the alarm sirens and start thinking! Those who design an application are called upon, to a greater or lesser extent, to make decisions that will affect the quality of the result regardless of how the application is written.

A software architecture model

These are the decisions that outline the software architecture, of which an excellent definition is this:

Software architecture is the set of design decisions
which, if made incorrectly, may cause your project to
be cancelled.

Eoin Woods

It makes a good point about how important it is to define a proper architecture, one that does not put us at risk of redoing everything from the beginning. So the two pivotal questions are: what should an architecture include, and for what purpose? The main purpose is to establish and describe, overall, the structure of the software. But what should be described, and with what level of detail, is itself a choice to be made. There is a huge bibliography available to guide us in this choice.

For what serves our evaluation of low-code, we simplify the architecture, in an unscholarly and entirely generic way, into a model that describes the main areas into which we divide an application.

Presentation Logic – This is the part where you build the interface with the human user. Here the mechanisms for interacting with the application are refined. It is important because processing capabilities that are complicated to use are a prerequisite for software failure.

Business Logic – Encapsulates all the mechanisms that enable the required functionality. In this layer live the calculations, verifications, checks and validations of the input data, but also the type and organization of the possible operations: the workflows by which you operate and the definition of the roles of the various users who perform operations. It is the logic driven mainly by the domain the software is aimed at. So, for example, an application for tax returns will have a totally different set of operations and controls than one for managing a warehouse or a medical practice.
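
As a concrete, and entirely hypothetical, illustration of the kind of rules that live in this layer, here is a minimal Python sketch: input validations plus a workflow transition guarded by a user role. The `Order` type, the states and the roles are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical Business Logic fragment for an order workflow.
# States and roles are illustrative, not taken from any real platform.
ALLOWED_TRANSITIONS = {
    "draft": {"submitted"},
    "submitted": {"approved", "rejected"},
    "approved": {"shipped"},
}

@dataclass
class Order:
    status: str
    total: float

def validate(order: Order) -> list[str]:
    """Input-data checks: the validations described in the text."""
    errors = []
    if order.total <= 0:
        errors.append("total must be positive")
    known = set(ALLOWED_TRANSITIONS) | {"rejected", "shipped"}
    if order.status not in known:
        errors.append(f"unknown status: {order.status}")
    return errors

def transition(order: Order, new_status: str, role: str) -> Order:
    """Workflow rule: who may move an order to which state."""
    if new_status == "approved" and role != "manager":
        raise PermissionError("only managers may approve")
    if new_status not in ALLOWED_TRANSITIONS.get(order.status, set()):
        raise ValueError(f"cannot go from {order.status} to {new_status}")
    return Order(status=new_status, total=order.total)
```

For instance, `transition(Order("submitted", 99.0), "approved", "manager")` succeeds, while the same call with role `"clerk"` raises `PermissionError`.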

Data Logic – This is the logic by which data is stored, either persistently within databases or temporarily for use across different operations. This area defines the data needed for operations and how it is represented.

How does it fit into the infrastructure?

What might seem like a simple step, namely placing the areas of our architecture model within the infrastructure, is another architectural choice. Let us look at it for each logical aspect of our model.

Where to implement the Presentation Logic is a choice that constrains, or is constrained by, the infrastructure. If we want to use dynamic pages with the PHP language, we will have to have Middleware that allows it. If instead we choose to implement this part so that it all resides in the users’ browser, we could develop it entirely in JavaScript.

For the Business Logic there are more placement possibilities. We can code it in the browser, again using JavaScript; in the Middleware that handles the Web dialog, using PHP, Java or other languages; or within the database Middleware, with scripts or SQL procedures. It can even go directly into the Operating System, with scripts and scheduled tasks (personally, a choice I find dated and to be avoided as much as possible). Rarely is all of the Business Logic realized in a single infrastructure layer; rather, the advantages of each layer are exploited to achieve the best result.

The data aspect at a hasty glance seems simple: we have databases, so, depending on the Middleware used (not coincidentally they are called DBMS, DataBase Management Systems), we describe our data in tables and relationships and that’s it. Nothing could be further from reality. Data used in applications has a life of its own and is used in different operations, so where it is placed is, again, not such an obvious choice.

So the Data Logic will surely be placed largely in database middleware, for data that needs persistence across different operations, sessions and users. However, some persistent data is often placed directly in the Operating System. Two well-known examples are the Alfresco Document Management System (now by Hyland), which stores all documents within a tree of subfolders that is virtually impossible to navigate, and the better-known WordPress Content Management System for creating Web sites, which organizes much of its content into subfolders that are so easy to navigate they must be properly protected from unwanted access.

Some data structures, usually those that have life within user interactions, can be realized in the browsers of those accessing the application.
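The split described above can be sketched in a few lines of Python, using the standard sqlite3 module to stand in for database middleware; the table name and the session structure are invented for the example.

```python
import sqlite3

# Illustrative sketch of the Data Logic split described above:
# persistent data goes to database middleware, short-lived data stays in memory.
conn = sqlite3.connect(":memory:")  # stands in for a real DBMS
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("INSERT INTO customers (name) VALUES (?)", ("ACME S.p.A.",))
conn.commit()

# Persistent data: survives across operations, sessions and users.
rows = conn.execute("SELECT id, name FROM customers").fetchall()

# Transient data: lives only within the current interaction,
# like the structures the text says can live in the user's browser.
session_cart = {"customer_id": rows[0][0], "items": []}
session_cart["items"].append({"sku": "X-100", "qty": 2})

print(rows)  # [(1, 'ACME S.p.A.')]
```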

What is software architecture? Is it the same as application architecture?

It is strange to put this title last, but instrumentally we have built a model of software architecture without giving it a definition. Rather, we have outlined layers, which are fairly common and immediately understandable, to understand how they tie in with the infrastructure. But is this the architecture of the software? We can see this as the description:


  • Of the high-level structure of a software system. This includes decisions about dividing the system into components and defining the responsibilities, functionality and interactions of each component.

  • Of the organization and arrangement of the parts of the software, such as modules, components, interfaces, and data, to meet the specific requirements of the project.

  • Of the choices that result in guidelines and constraints for software coding.

But we have not talked about patterns, about which libraries and frameworks to use, how to organize our software into components (or modules, or classes, or functions, or whatever you prefer…), or how our software should integrate with others. In short, a lot seems to be missing.

These topics are not missing from software architecture; they belong to another, more specific architecture called Application Architecture. We can see the latter as a series of choices addressing more detailed aspects that vary according to the specific needs of the application. For example:

  1. Structure and Componentization:
    • Definition of application components, such as modules, services, libraries, etc.
    • Partitioning the application among logical layers (e.g., presentation, business logic, data access).
  2. Communication and Integration:
    • Internal and external communication models (e.g., REST API, SOAP, gRPC).
    • Integration with external systems, such as databases, web services, ERP systems.
  3. Data Management:
    • Database schema and data access strategies (ORM, Direct SQL, NoSQL, GraphQL).
    • Transaction management policies and data integrity.
  4. User Interface:
    • User interface and user experience (UI/UX) design.
    • Frameworks and technologies for interface implementation (React, Angular, Vue.js).
  5. Security:
    • Authentication, authorization, and access control.
    • Data protection and compliance with privacy regulations (GDPR, HIPAA).
  6. Performance and Scalability:
    • Performance optimization and load management.
    • Horizontal and vertical scalability, load balancing.
  7. Reliability and Availability:
    • Strategies for error management and application resilience.
    • Backup, disaster recovery and high availability.
  8. Maintainability and Evolvability:
    • Development practices to facilitate software maintenance and evolution (e.g., modular programming, use of design patterns).
    • Documentation and standardization of code.
  9. Deploy and Environment Management:
    • Strategies for deployment (continuous integration/continuous deployment).
    • Infrastructure and execution environment management (cloud, on-premise).
  10. Compliance and Standards:
    • Adherence to industry standards and protocols.
    • Compliance with regulations and industry best practices.

It is understood that there is a difference in content and purpose between software architecture and application architecture. But this difference is a gray area, not sharply and unambiguously delineated either by academia or by software vendors. The very hierarchy between the two terms is often reversed: it is not necessarily the case that the software description is a more general view than the application description. It is a question of terminology: some use software as a more general term than application, and some use them the other way around. The much-hyped Wikipedia reports, without a source, this definition:

Applications architecture defines how multiple applications are poised to work together. It is different from software architecture, which deals with technical designs of how a system is built.

Wikipedia

I find it logical, considering the two terms with these meanings:

Software: The term “software” is a general term referring to any program or set of instructions executed on a computer. It includes everything non-physical that is necessary to perform a function on a computer, such as operating systems, utility programs, word processors, games, and database code.

Application (App): An application, often called an “app,” is a type of software designed to help the user perform specific tasks. Applications are more specific in their uses than software in general and are typically designed with a user interface that facilitates these activities, such as editing documents, browsing the Internet, or managing email.

We leave application architecture aside here, as we do not want to write an academic text on architectures. We just need an overall idea, to be able to make thoughtful assessments of the low-code world.

Other aspects (that we overlook) of application development

Development methodology

Models and architectures aside, in application development there is another fundamental aspect that should not be overlooked: the development methodology. I think of it as the architecture of the human aspects of application development.

Without getting into issues of psychology or emotional intelligence, there are purely human elements that must be governed and coordinated to ensure the advancement of development activities. How do you communicate within the team; how do you communicate externally; how, if and when do you share problems that arise in application development; how do you resolve disagreements, etc.?

I see the attempt to address these issues in development methodologies, which may be explicit and stated (“we follow the agile methodology”) or implicit, a set of rules and procedures used “by tradition.” And they can be ancient (“we stick to the timeline, or Gantt”) or modern.

Testing methodology and transition to production

Once we have developed our application, how do we test it? How do we make it available? These aspects are not so obvious for solo developers, but for structured development teams and complex applications they are very sensitive. And to me, a structured development team is a team of more than three people.

In summary, under the name of Testing Methodology fall the procedures that define who, how, where and why testing should be done:

  • why – it sounds naive, but it is the first thing that, even unspoken, gets established. We usually decide to perform the tests “to know that everything is fine.” But the goals could be: to verify that there are no errors in code execution; to verify that requirements are met; to evaluate performance; to evaluate how many users we can support.
    The why may also be negative. We do NOT run the tests: because it costs too much; because we have no data with which to run them; because no test environment is available; etc.
  • who – trivially identifies who runs the tests in order to declare that the application can be made available (go into production).
  • how – defining how to test an application and which tests to perform is perhaps the most delicate part, because the more formally this aspect is defined, the more one is justified in not deepening or broadening the testing.
  • where – are different environments needed to test different aspects? I remember one client who had, in addition to the development environment, these environments: functional testing, integration testing, regression testing, quality testing, user testing, and training. Yes, an environment dedicated to user training: so for each application line there was one production environment, one development environment, five testing environments and one training environment.
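
To make the “why” goals above concrete, here is a hedged sketch of how they could map to automated tests, using the Tax Code calculation mentioned earlier as an imaginary system under test. The checksum rule below is a toy, not the real Italian algorithm.

```python
import time

def tax_code_checksum(code: str) -> bool:
    """Toy validation rule standing in for the system under test."""
    return len(code) == 16 and code.isalnum()

# Goal: verify there are no errors in code execution (smoke test).
def test_runs_without_errors():
    tax_code_checksum("RSSMRA85T10A562S")

# Goal: verify that requirements are met (functional test).
def test_requirements():
    assert tax_code_checksum("RSSMRA85T10A562S") is True
    assert tax_code_checksum("too-short") is False

# Goal: evaluate performance (a crude timing check).
def test_performance():
    start = time.perf_counter()
    for _ in range(10_000):
        tax_code_checksum("RSSMRA85T10A562S")
    assert time.perf_counter() - start < 1.0
```

Each function answers one of the stated goals; a test runner such as pytest would collect and execute them automatically.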

The complexity of multiple environments makes necessary, not so much for the initial development as for the development of changes, a production transition methodology. Imagine we need to make a change to the application in the previous example: the change must be applied to six environments before being applied in production. You cannot move between environments in improvised and untracked ways.
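
Under the assumptions of the example above (environment names included), a transition rule can be sketched as code: a change may only move to the next environment in the chain, never skip ahead.

```python
# Hypothetical promotion chain modeled on the client example above;
# the environment names are illustrative.
PROMOTION_PATH = [
    "development",
    "functional-testing",
    "integration-testing",
    "regression-testing",
    "quality-testing",
    "user-testing",
    "production",
]

def next_environment(deployed_in: str) -> str:
    """Return the only environment a change may legitimately move to."""
    i = PROMOTION_PATH.index(deployed_in)
    if i == len(PROMOTION_PATH) - 1:
        raise ValueError("already in production")
    return PROMOTION_PATH[i + 1]

def can_promote(deployed_in: str, target: str) -> bool:
    """No skipping: promotions that jump over environments are rejected."""
    return next_environment(deployed_in) == target

print(can_promote("development", "functional-testing"))  # True
print(can_promote("development", "production"))          # False
```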

Methodologies for development, testing and transition to production are further topics we do not delve into here; we only want to place them correctly, because we want to understand whether or not they are influenced by the world of low-code.

Those with more experience, on the other hand, can skip these descriptions, because they will understand the models used very well. Of them I ask for conscious indulgence: that is, do not focus on the shortcomings of the proposed models. These are not models to guide a development team; they are fences within which to place our considerations, in order to make the varied members of the low-code world comparable.

Although innovations in infrastructure components have introduced greater flexibility, they have not significantly affected application development methodologies. The benefit of abstraction between layers is basically limited to Operating Systems. There is, in fact, no virtualization capable of making us disregard the constraints and limitations inherent in application development. The flexibility of the infrastructure helped development teams remove delays that burdened the development process but had nothing to do with it. The ability to clone entire environments into laboratory environments to test on, or to fix a moment in time of one’s systems (a snapshot) and be able to go back to it, have been terrific efficiency tools for application teams, but they have not changed, by much, the way applications are developed.

What is the goal of low-code platforms?

Although each platform has its own specific approach, we can state, at the risk of displeasing experienced programmers, that the main intent of low-code is the following:

Objectives of low-code

To focus the activities of those building applications solely on the purposes of the application itself, eliminating all the effort and time consumed by the technological aspects of the environment in which the application will run.

Whether this lightening-simplification is implemented on all aspects or only on some characterizes and differentiates low-code platforms from each other. There are low-code platforms that deal only with Business Logic, others that deal only with aspects of communication between different applications. The landscape of low-code offerings is vast.

To be avoided

Viewing low-code merely as a substitute for programming, rejecting it, or using it as a shortcut around learning programming languages.

To understand how many and what types of low-code platforms there are, a good place to start is the NoCodeJournal site, on their State of Nocode page. Started in 2021, this project seeks to list, and update, all available low-code platforms. This is a challenging task because the industry is in rapid motion: new platforms are constantly emerging, while others disappear just as quickly. Although not up-to-date, the site gives an idea of the enormous variety and dynamism of this interesting technology sector.

How to analyze low-code platforms

Today we will not discuss the best xx low-code platforms of 2023. Rather, let us see how to analyze some of these platforms.

The platforms we will examine are not necessarily the most popular or the best in terms of speed, ease of use, etc. They are simply those that I have had the opportunity to personally test for a period long enough to conduct a thorough analysis. I have focused on platforms that offer free plans or that have allowed me to use them long enough to understand their features and functionality.

Criteria for analysis

Let us now examine criteria for analyzing and evaluating low-code platforms, integrating the models already mentioned with other factors such as cost, support, etc. The intent is to define an analysis method that facilitates future evaluations.

I believe that one cannot judge a platform, or more generally software, without first considering the specific context in which it will be used. This context includes the needs to be met, the type of users of the application, and the budget available. Because the context can change drastically, general internet rankings are often not particularly helpful in guiding choice.

The proposed model for the analysis is as follows:

The areas used for analysis are:

Development features
  • Presentation Logic – features made available for the realization of the user interface
  • Business Logic – functionality made available for building workflows, controls over operations and data, and events
  • Data Logic – features made available for building data structures
  • Documentation – features provided to describe and make understandable how the application is developed and what its internal dependencies are
Portability
  • Data portability – functionality made available to extract data from the application.
  • External integration – functionality made available for making integrations with external services or applications.
  • Infrastructure – type of infrastructure on which the produced application can run
Security
  • Security – features made available for the implementation of security mechanisms
  • Test and deployment – functionality made available for testing and management of production environments.
Other services
  • Support – the ways in which platform support is accessed.
Costs
  • Price model – the mechanisms that influence price
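
As a sketch of how this model might be applied in practice, the areas above can be turned into a context-weighted scorecard. The scores, weights and platform values below are invented; the point is only that the same platform ranks differently in different contexts.

```python
# The analysis areas from the model above, as scorecard criteria.
CRITERIA = [
    "presentation_logic", "business_logic", "data_logic", "documentation",
    "data_portability", "external_integration", "infrastructure",
    "security", "test_and_deployment", "support", "price_model",
]

def score(platform: dict[str, int], weights: dict[str, int]) -> float:
    """Weighted average over the analysis areas (scores 1-5, weights 0-3)."""
    total_weight = sum(weights.get(c, 1) for c in CRITERIA)
    return sum(platform.get(c, 0) * weights.get(c, 1) for c in CRITERIA) / total_weight

# A fictional platform: strong on Business Logic, weak on Security.
platform_a = {c: 3 for c in CRITERIA} | {"business_logic": 5, "security": 2}

# A context where security matters more than anything else.
secure_context = {c: 1 for c in CRITERIA} | {"security": 3}
print(round(score(platform_a, secure_context), 2))  # → 2.92
```

The same `platform_a` scores higher under uniform weights, which is exactly why general internet rankings, computed with someone else’s weights, rarely answer your own question.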

Conclusions

With the model thus defined, all that remains is to put it to the test. So I will start evaluating some platforms, but the real test will be the comments and criticism: not so much to see whether the model is good, but to identify how it can be improved.
