Time and time again we repeat processes manually, often believing that it's faster "just to do it" rather than spend time thinking about how to make a computer do it. While this may be fine the first time, what about the second, third and fourth time? Doing things manually is not only inefficient but prone to human error, some of which can be catastrophic.
Automation brings with it what we call the "six side-effects of automation".
The way forward often requires careful navigation along two independent paths: finding your way through the capabilities of the myriad, ever-changing tools available, and working out how best to change your development process so that you can take full advantage of industry best-practice techniques and development strategies.
Whatever you do, you should ensure you have a clean build and a shippable product at least once every day. To thrive, every company needs to develop and deliver quality solutions faster than its competitors, and one generally recognised asset of automation is that it "delivers value faster than it adds cost".
Our staff have helped realise the "six side-effects of automation" on numerous projects, drawing on decades of professional experience building large, complex, cutting-edge customer-specific automation tools and working with a number of emerging off-the-shelf tools and frameworks. Our experience is gained through a diverse project portfolio that spans research, commerce and the military.
Automation is built into our philosophy, all the way from automated build tools through testing to deployment. The tools we have chosen facilitate this; we know how to use them together, and how to do so whilst keeping the physical and financial overhead to a minimum.
The experienced individuals at Informatics Matters can provide you with advice and solutions, either on-site or remotely hosted.
Essentially, the goal is to build and test your product at every opportunity so that you always have a product build ready to ship, following best-practice Test-Driven or even Behaviour-Driven Development methodologies with tools like Cucumber. Testing needs to move beyond the basics by employing advanced analysers and hardening tests to identify faults at the earliest opportunity.
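Behaviour-driven tests are typically written as plain-language scenarios that developers and stakeholders can both read. As a minimal Gherkin sketch (the feature and steps below are purely illustrative, not taken from any real project):

```gherkin
# An illustrative Gherkin scenario; the feature and steps are hypothetical.
Feature: Shippable daily build
  Scenario: Every change produces a tested, shippable artefact
    Given a developer pushes a change to the main branch
    When the continuous integration pipeline runs
    Then all unit and acceptance tests pass
    And a versioned, shippable artefact is published
```

Cucumber binds each of these steps to executable code, so the scenario doubles as documentation and as an automated acceptance test.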
We have the experience to show you how best to utilise your resources in continuous integration.
Although our developers have access to the tests, and are encouraged to add tests as they add features, we defer the majority of testing to automated build schedulers like Jenkins, Travis CI and GitLab Runners, with build artefacts stored in a Docker Registry or binary repositories like Artifactory and Nexus. Jenkins not only builds our code in "jobs"; we also execute static analysers to help find obscure bugs, and code coverage and profiling tools to highlight vulnerable, untested areas of the product.
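As a rough illustration of how such a "job" can be expressed, here is a minimal declarative Jenkins pipeline; the stage names, Gradle commands and email address are illustrative assumptions, not our actual configuration:

```groovy
// A minimal declarative Jenkinsfile sketch; commands and addresses
// below are illustrative, not our actual jobs.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew assemble'
            }
        }
        stage('Static Analysis') {
            steps {
                // e.g. a Gradle task wired to an analyser such as SpotBugs
                sh './gradlew check'
            }
        }
        stage('Test & Coverage') {
            steps {
                sh './gradlew test jacocoTestReport'
            }
        }
    }
    post {
        failure {
            // silence on success; failures trigger an email
            mail to: 'team@example.com',
                 subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "See ${env.BUILD_URL} for the detailed report."
        }
    }
}
```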
Our automated "jobs" utilise rich, modern build tools like Gradle and Groovy environments that can run our build processes in parallel, on any number of dynamically provisioned servers, based on the success of any upstream dependent tasks. If all goes well we hear nothing from our continuous integration framework; when things fail we receive an email. In all instances we have access to detailed reports.
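To sketch the idea of upstream dependent tasks, a minimal Gradle (Groovy DSL) build script might wire tasks together as follows; the task names are hypothetical, and parallel execution itself is enabled separately (e.g. org.gradle.parallel=true in gradle.properties):

```groovy
// A minimal build.gradle sketch; the task names are hypothetical.
plugins {
    id 'java'   // provides the standard 'test' task
}

tasks.register('integrationTest') {
    dependsOn 'test'            // runs only if the upstream unit tests succeed
    doLast {
        println 'Running integration tests...'
    }
}

tasks.register('publishArtefact') {
    dependsOn 'integrationTest' // publishing depends on a green integration run
    doLast {
        println 'Pushing the build artefact to the registry...'
    }
}
```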
Our experience: Whatever your language, if you need help with continuous integration in your project, we have the experience to help.
Once you're in the position of deploying your product components into a staging environment using fully automated tools and strategies, as we do here at Informatics Matters, you are able to test the wider-scale business logic, allowing you to deliver your application to production platforms more confidently, at the "click of a button".
Delivery is, ideally, a series of tests that take place in a production-like environment. It consists of delivery to these staging environments, ideally created and provisioned from clean compute instances, and the execution of application-level acceptance tests before a final manual step: the delivery to your customer.
We almost exclusively employ containers to package our application components, which allows us to be confident not just about the platform but about the provenance of the delivered application images. Our staging and production environments are built on scalable on-site and cloud-based compute instances organised into a resilient cluster managed by Red Hat's OpenShift Container Platform.
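As a simple sketch of container packaging, a Dockerfile declares every layer of an image, which is what makes its provenance traceable; the base image, file names and tag below are illustrative assumptions:

```dockerfile
# A minimal packaging sketch; base image, files and tags are illustrative.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "-m", "app"]
```

Built and tagged with something like docker build -t registry.example.com/acme/component:1.0.0 . and pushed to a registry, every deployed image is then traceable back to a specific build.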
Our experience: Whether your production environment utilises containers, packages or binaries, if you need help with continuous deployment in your project, we have the experience to help.
Provisioning your compute resources on demand, from known "clean" sources, is an important requirement for creating a reliable testing environment. Having precise control over the operating system, its patches, and all the libraries and tools installed is crucial to avoiding the "it works on my machine" dilemma. This further reduces testing effort and technical risk while improving product quality.
This area of automation is commonly referred to as "Infrastructure as Code" (IaC): a collection of tools that simplify the definition and creation of execution platforms.
While running tests on your development machine is possible, it's not recommended. Here at Informatics Matters we make extensive use of IaC tooling to dynamically provision our test, staging and production environments.
We use HashiCorp Packer's JSON-based configuration to build reference snapshot images for our platforms, rolling out new baseline images in one or two minutes. With a few HCL files we employ HashiCorp's Terraform to dynamically build and update our cloud-based compute clusters, where a fully networked set of servers can be provisioned in less than 60 seconds. This works in tandem with Ansible's powerful Python-based orchestration engine, which is used to deploy our container execution platform of choice, Red Hat's OpenShift Origin.
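As a hedged sketch of what such a Terraform definition can look like (the provider, AMI identifier and instance sizes are illustrative assumptions, not our real configuration):

```hcl
# A minimal Terraform sketch; provider, AMI and sizes are illustrative.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_instance" "staging_node" {
  count         = 3                        # a small, fully networked cluster
  ami           = "ami-0123456789abcdef0"  # e.g. a Packer-built baseline image
  instance_type = "t3.medium"

  tags = {
    Name = "staging-node-${count.index}"
  }
}
```

A single terraform apply then creates, updates or tears down the whole cluster from this declarative description.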
We also have experience with other orchestration platforms like Puppet's Ruby-based framework and automated virtual machine creation using Vagrant.
We execute application software on AWS EC2 instances, Digital Ocean Droplets as well as on Scaleway and OpenStack infrastructures.
Our experience: Whether your deployment platform is proprietary, embedded or a conventional compute resource, if you need help simplifying and automating your hardware using industry-leading orchestration tools, we have the experience to help.
Whether it's monitoring and analytics with off-the-shelf frameworks like Datadog's Python-based platform, OpenShift's logging and metrics using Kibana and Prometheus, or proprietary complex event processing ("CEP") solutions, we can help monitor the health of your application. We provide monitoring, pattern recognition, and machine and deep learning techniques to automate any remedial actions and rapidly return a faulty system to an operational state.
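For illustration, a Prometheus alerting rule can encode this kind of health check; the metric name and thresholds below are assumptions for the sketch, not production values:

```yaml
# A sketch of a Prometheus alerting rule; metric names and thresholds
# are illustrative assumptions.
groups:
  - name: application-health
    rules:
      - alert: HighErrorRate
        # fire when more than 5% of requests fail over a 5-minute window
        expr: sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "HTTP error rate above 5% for 10 minutes"
```

An alert like this can then be routed to an automated remediation hook rather than just a pager.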
We have experience building error detection and recovery systems and are comfortable with the needs of big data applications for research, commercial and telecommunications establishments.
Our experience: We are experts in real-time analytics and can call upon decades of experience developing complex, high-performance, real-time pattern-recognition applications to deliver a solution that's right for you.