Application configuration is rapidly changing. Deployment cycles have significantly increased in complexity and those responsible for application reliability are under greater pressure than ever before.
Delivering a business application used to be simple: write the core code, put the application through quality assurance and deliver the end-product to the user. If a technical issue was found post-launch, a patch would be prepared and issued. However, the emergence of mobile computing has not only changed the pace of this whole process, but users’ expectations too.
Development teams, IT departments, operational staff and software integrators are adapting to more complex application ecosystems. Applications are increasing in scale and configuration now involves multiple teams working with hundreds of different environments and application instances.
Change management is also transforming, and the growing adoption of team-based development models such as DevOps has increased the opportunities for error. Configuration changes need to be authorised, tested, recorded for regulatory purposes, and tested again before deployment.
Avoiding a crash course
So, when change management is still handled manually, surely working with so many variations is arduous?
Yes, it is. Consider one potential scenario: an application running on several environment instances, each with the same binaries but a different configuration. Managing variations of this degree is difficult enough, but the situation becomes even more challenging when the differences go beyond basic environment variable substitutions. Once configuration drift enters the equation, the workload can quickly become overwhelming.
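To make "drift" concrete, here is a minimal sketch of how it might be detected, assuming each environment's settings can be read into a simple key-value mapping. All environment names, keys and values below are illustrative, not taken from any real system:

```python
# Hypothetical sketch: detect configuration drift between two
# environments that run the same binaries but hold their own settings.

def diff_configs(base: dict, env: dict) -> dict:
    """Return keys whose values differ between the two environments,
    including keys present on only one side (reported as None)."""
    drift = {}
    for key in base.keys() | env.keys():
        if base.get(key) != env.get(key):
            drift[key] = (base.get(key), env.get(key))
    return drift

# Illustrative settings for two environments of the same application.
staging = {"db_host": "db.staging.local", "pool_size": "20", "feature_x": "on"}
production = {"db_host": "db.prod.local", "pool_size": "20"}

for key, (expected, actual) in sorted(diff_configs(staging, production).items()):
    print(f"{key}: staging={expected!r} production={actual!r}")
```

With dozens of environments and hundreds of keys, exactly this kind of comparison is what a person replicating changes by hand is being asked to perform in their head.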
You can see why concentration easily slips, especially if the person replicating each change across every application environment is new in their role. A comprehensive approval process can catch configuration errors others may have missed, but involving an extra person doubles the effort, and even multiple pairs of eyes do not guarantee every error will be found.
Time pressures also increase the potential for human error. Teams are expected to launch new applications and features far faster than a decade ago. Waiting six months for new functionality is no longer viable from the business's perspective; that cadence may be acceptable for non-critical or niche services, but not for business-critical or popular applications.
Specifying the risks
We all know what happens when an application fails: plenty of unhappy customers, public discontent and workforces unable to access their business tools. In heavily regulated sectors, the consequences are even more severe: reduced revenue, a damaged reputation and the prospect of financial penalties for non-compliance.
Financial services organisations are particularly susceptible to the risks associated with application failure. IT reliability is fundamental for long-term stability. Fines from regulators can extend into the millions of pounds, particularly since the introduction of MiFID II. Even worse is when a software issue is seized upon by malicious groups and turned into an exploitable vulnerability.
The future is automated
With such high stakes, it’s clear why so much due diligence is needed and why application configuration still takes a significant amount of time, cost and effort.
Surely there is a better way?
Use the power of technology: that's our mantra. In recent months, a host of automation tools have entered the market. These remove the need for manual processes, which in turn reduces human error, increases the accuracy of change management and ensures a more predictable configuration pipeline.
Accountability improves as well. Asking teams to record every change by hand is time-consuming, and that is when mistakes occur. Instead, newer tools automatically record who made each configuration change, providing a clear audit trail. This strengthens compliance and lets managers quickly identify technical issues before the application is deployed.
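The audit-trail idea can be sketched in a few lines. This is a hypothetical illustration, not the mechanism of any particular tool: the point is simply that the record is produced by the act of making the change, rather than by a person remembering to write it down:

```python
# Hypothetical sketch: every configuration change automatically
# appends a who/what/when record to an audit trail.

import json
from datetime import datetime, timezone

audit_log = []  # a real tool would use durable, append-only storage


def apply_change(config: dict, key: str, value, user: str) -> None:
    """Apply a configuration change and record it in the audit trail."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "key": key,
        "old_value": config.get(key),
        "new_value": value,
    })
    config[key] = value


config = {"pool_size": 20}
apply_change(config, "pool_size", 50, user="alice")
print(json.dumps(audit_log[-1], indent=2))
```

Because the log entry and the change happen in the same operation, the audit trail cannot fall out of step with what was actually deployed.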
Resistance to automation
One point to note: we accept that the above requires a new way of working. Some application and IT teams may be reluctant to entrust such an important part of the development process purely to technology.
But consider one example. Financial services software provider Ullink works with 150 of the world's leading banks and brokerage firms. The company used a software tool to automate application configuration and delivered notable results: on one project, over a period of 18 months, its three teams managed 85 release changes with zero failures.
In the sectors of international finance and software development, zero failure is the level of reliability every company strives for. It’s also what regulators and customers now expect. As an increasing number of new technologies emerge, teams should assess how they are currently working and decide if there’s a more efficient way.
This doesn't necessarily mean completely changing how application configuration is delivered. Smaller workflow improvements and specialised solutions can enable people to work better, and with confidence that they will not make mistakes that harm the business.