In a complex landscape of business applications, the flow of data between systems is crucial for today's enterprise. Sometimes, particularly with older systems, the interfaces necessary to facilitate data exchange do not exist. Traditional integration projects can take months or years to implement, and come with a price tag to suit. The workaround that emerged to address this problem some time ago came to be called Robotic Process Automation (RPA).
Looking back at the last 5 years, the rate of adoption for RPA in large enterprises has been quite staggering. In a typical organization, a team could easily automate dozens of paper processes in their first year. For some, the rapid scaling up came at the expense of quality.
The first projects were usually spent learning how to use the relatively novel tools effectively. In addition, those same initial use cases were typically the ones with the highest transaction volumes (if the business analysts had done a good job in ROI estimation). If the performance of the initial version was bad enough, projects sometimes had to be redone entirely.
As the total number of automations running in production increases, having formal incident management procedures in place is important. Better still is a team that produces solutions that don't break in the first place. In this article, I will discuss how teams can increase the maintainability and extensibility of their code.
At the end of the day, RPA is about code
Having the right tools for the job is a defining factor in enterprise RPA, regardless of whether they are used by in-house resources, an outsourcing partner, or a joint team. Even more important than the choice of technology is using it effectively. As with any software project, modularity and separation of concerns are key to creating high-quality, maintainable automations.
The first principle to follow in any kind of programming is that of not repeating yourself. In the context of RPA, this means implementing each discrete, repetitive action as its own function. Opening an application or inputting a data record into a system are examples of tasks that warrant their own function. If this principle is not applied, similar sets of actions become duplicated in different parts of the solution. As a consequence, when there is a need to change the behaviour of a particular component, the change has to be implemented in every place where it has been defined (if you can find them all first). A well-designed modular structure also allows individual procedures to be tested in isolation, providing immediate feedback when iterating.
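As a minimal Python sketch of the idea (all names and the fake "session" object are hypothetical stand-ins for real RPA tool calls), each discrete action gets its own function, and the high-level flow is composed from them:

```python
# Each discrete, repetitive action lives in its own function, so a change
# (e.g. a new login screen) is made in exactly one place.

def open_application(name, session):
    """Launch an application; reused by every process that needs it."""
    session.append(f"opened {name}")

def input_record(record, session):
    """Enter one data record into the target system."""
    session.append(f"entered {record['id']}")

def process_invoices(records):
    """High-level flow composed from the reusable actions above."""
    session = []  # stands in for a real application session
    open_application("ERP", session)
    for record in records:
        input_record(record, session)
    return session
```

Because `open_application` and `input_record` are isolated, each can be exercised on its own with a stub session, which is exactly the fast feedback loop described above.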
Take a step back before rushing into implementation
In a typical RPA project, the responsibilities of defining a process and the actual coding are fully segregated, to a “business analyst” and an RPA developer, respectively. The analyst might have a cursory understanding of the RPA tool being used, but typically is not experienced in programming. The analyst performs a process walkthrough with the subject matter expert and documents the work steps as shown. The developer then takes the document and starts to reproduce the process verbatim in code.
This workflow often results in solutions where business logic (i.e. the objectives of the process) is closely intertwined with the user actions (how the work is done). Not separating business logic from application logic has implications for maintenance, such as when fixing a bug or modifying the behaviour. If the process consists of one long block of code mixing business rules and low-level tasks, it can be very hard for the maintainer to find the offending piece of code.
Separating business and system-related activities also enables extensibility. For example, applying a new validation rule is much easier when the rules have been defined in a dedicated module, away from system interactions.
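To illustrate the separation (a sketch with hypothetical rules and function names, not a real tool's API), the validation rules live in one function and the system interaction in another, so a new rule only ever touches the business-logic side:

```python
# Business logic: what makes a record acceptable. New validation rules
# are added here, away from any UI or system interaction.
def is_valid(invoice):
    return invoice["amount"] > 0 and invoice["currency"] in {"EUR", "USD"}

# Application logic: how the record is entered into the target system.
# The list stands in for actions against a real application.
def submit_invoice(invoice, log):
    log.append(f"submitted {invoice['id']}")

def run(invoices):
    """Orchestration: routes each record based on the business rules."""
    log = []
    for invoice in invoices:
        if is_valid(invoice):
            submit_invoice(invoice, log)
        else:
            log.append(f"rejected {invoice['id']}")
    return log
```

With this split, adding a validation rule means editing `is_valid` alone, and the rule can be unit-tested without driving any application at all.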
Even allowing for separation of concerns, implementing a process exactly as it has been performed by a human rarely leads to the best possible automation. An RPA project should always be treated as an opportunity to take a step back and reassess the process itself and how it relates to other, similar processes.
Good practices will pay dividends
The quality of the code deployed to production has major implications for robot performance and for the amount of unplanned maintenance work required to keep business processes running. The bigger the share of developers' time consumed by putting out fires in production, the less capacity a team has for implementing new use cases and for improving their daily practices, as promoted by the DevOps movement.
Besides maintenance implications, the performance of individual robots is also adversely affected by bad programming habits. When a robot is implemented suboptimally, it takes longer to complete one transaction, with differences typically measured in seconds. This starts to have an impact on costs when tasks are repeated hundreds of times a day. The need for an additional robot runtime (and license, if using proprietary tools) to support production operations then arises sooner than it otherwise would.