
Large Scale Infrastructure Intervention and Fake Testing

In a nutshell, the job that we do comes down to infiltrating large financial market infrastructures with specially designed tools to influence the outcome of the software development life cycle. We have been implementing this strategy across the globe for nearly 9 years. But even in the most critical, robust and reliable large infrastructures, there constantly lurks a phenomenon called... Fake Testing. Its presence has become so common that the time has come to define it as a "separate species". The most obvious way is to label all software testing you are not happy with as 'Fake Testing'. To a certain extent, this is how it is understood in this blog post.

While software testing is an information service, a process of providing objective, independent information about software quality, fake testing is an activity that may look like testing but has a different purpose.

Shift Left / Shift Right Testing
Software testing is shifting from the center to the poles, and it’s a good thing. DevOps and early testing can both deliver more reliable software within a shorter time frame. Let's now consider the extremes. On the alt-right end of the spectrum, software is pushed into production, regardless of its quality, to be tested by the end users and operations teams. On the far left, the focus is on all the good things, such as trust, teamwork and internal processes. Fake testing consists of actions aimed at making one feel good about the software quality, but not necessarily at ensuring that quality. It happens when bias distorts the actions and they are reported in very specific ways.

In contrast, facts do not care about anyone’s feelings. Software testing is relentless learning (and aggregating data that can be impartially analyzed, scrutinized and used to enhance software quality). That also means that good automated testing is machine learning. They say that true journalism means uncovering things that someone else would like to keep hidden; the rest is public relations. So, everyone in denial about the possibility of discovering new and painful information about their software will find below a few helpful "fake testing" tips that may resonate with them.

TRIGGER WARNING: THIS POST CONTAINS SOME HORRIBLE EXAMPLES OF SELF-INFLICTED HARM. FAKE INFORMATION CAN KILL.

We work very hard on attracting the best talent into software testing and have managed to build a team of 500+ specialists so far. Some people might ask: "What is the appeal? What is so attractive about software testing as a profession?" There surely aren’t many movies that popularize it and portray software testers as superheroes saving the world. But there is at least one: The Pentagon Wars [1]. Coincidentally, it does a great job of outlining what fake testing is. If you have not seen this movie, please do. Also, be aware of the SPOILERS AHEAD.

The movie tells the story of the M2 Bradley Fighting Vehicle [2], the live fire tests around it and the Congressional hearings that followed. The protagonist of the story, USAF Lieutenant Colonel James Burton, is appointed to oversee the testing of the M2 Bradley carried out by the Army. After the first test, he suspects that inferior ammunition was used to conduct the experiment.

This is not an unusual issue in real-life practice, even with the best trading software vendors. Most of the time, test injectors provided by vendors are designed in a way that reduces the harm they can do, e.g. they send messages at very precise intervals.
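The contrast is easy to show in code. Below is a minimal sketch (the function names and parameters are illustrative, not any vendor's actual API) of a "polite" injector that paces messages at precise intervals versus one that reproduces the bursty arrival patterns real production flow tends to exhibit:

```python
import random
import time


def send_at_fixed_rate(messages, interval_s=0.1, send=print):
    """'Polite' vendor-style injection: one message per fixed interval."""
    for msg in messages:
        send(msg)
        time.sleep(interval_s)


def send_with_bursts(messages, mean_gap_s=0.1, burst_size=20, send=print):
    """More realistic injection: random gaps punctuated by back-to-back
    bursts, which is far more likely to expose queueing and throttling
    defects than a perfectly even message rate."""
    i = 0
    while i < len(messages):
        if random.random() < 0.1:  # occasionally fire a burst with no pauses
            for msg in messages[i:i + burst_size]:
                send(msg)
            i += burst_size
        else:
            send(messages[i])
            i += 1
            time.sleep(random.expovariate(1.0 / mean_gap_s))
```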

The movie also depicts the process of feature creep, explaining how a great prototype can turn into a buggy implementation. In particular, a part of the armor plate was made out of aluminum to reduce the weight of the vehicle. It turned out that, when the plate is hit, the thin aluminum layer evaporates and produces a poisonous gas. To test for this, the protagonist decides to put a sheep inside the vehicle. He is told that it would take many months to prepare just the right sheep in advance, since all sheep are different, and testing is a scientific experiment that should be conducted in controlled environments. This should be kept in mind when requesting all scenarios in advance or trying to apply development methodologies like BDD.

Testability Explained

There is a classic image explaining the concept of testability: putting a target on the system to simplify testing. Testability is essential for any large-scale infrastructure. The film also gives a colorful illustration: to ensure that a heat-seeking infrared missile could successfully hit the target, the target was covered with electrical heaters producing a temperature high enough to fry eggs at a two-meter distance. The targets were, quite literally, painted on the system. The problem was that the system under test was not the vehicle, but the missile. It is easy to go in this direction with fake testability in a large infrastructure, where end-to-end testing requirements demand ubiquitous test coverage: the vehicle, the missile, the bomb and even the two-story-tall crane to drop it from.

Fake Testability

Sometimes development teams create interfaces to improve testability, but these interfaces are obscured one way or another. A real-life example could look like this: a new sprint delivers lots of functionality, but all of it is only accessible through internal interfaces that the client is not able to use.
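A minimal, purely hypothetical sketch of what this looks like in code (the OrderService class and its methods are invented for illustration): the new behavior exists, but only an internal entry point can reach it, so any coverage claimed against it says nothing about what the client can actually invoke.

```python
class OrderService:
    """Hypothetical illustration of fake testability."""

    def submit_order(self, order: dict) -> str:
        # Public, client-facing interface: the new validation is not wired in.
        return self._book(order)

    def _submit_with_new_validation(self, order: dict) -> str:
        # Internal-only path used in the team's own tests; the client and any
        # client-facing test harness cannot exercise it.
        self._validate(order)
        return self._book(order)

    def _validate(self, order: dict) -> None:
        if order.get("qty", 0) <= 0:
            raise ValueError("quantity must be positive")

    def _book(self, order: dict) -> str:
        return "ACK"
```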

In the movie, Colonel Burton discovers that water is used instead of gasoline during testing. When he confronts his colleagues about it, they tell him: "It is too dangerous to use gasoline because everything could blow up". But is that not exactly the point? We share the view that non-functional tests should result in breaking the system, rather than stopping at a predefined and agreed load level. But on many occasions, testers face complaints that the test system should not die.
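In load-testing terms, that means the ramp should continue until something actually gives way. Here is a minimal sketch, assuming a hypothetical run_load_step hook that drives the system at a given rate and reports whether it still meets its service levels:

```python
def ramp_until_failure(run_load_step, start_rate=100, factor=1.5, max_rate=1_000_000):
    """Keep increasing load until the system under test actually breaks,
    instead of stopping at a pre-agreed 'safe' level.

    run_load_step(rate) is an assumed hook: it drives the system at `rate`
    messages per second and returns True while the SLAs are still met.
    """
    rate = start_rate
    while rate <= max_rate:
        if not run_load_step(rate):
            return rate          # the breaking point is the information we came for
        rate = int(rate * factor)
    return None                  # never broke within the tested range
```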

The world's most valuable commodity nowadays is not oil, but data. With GDPR and other limitations, the desire to put water into the fuel tank is stronger than ever. So, are you on a path to fake testing?

Fake information works best when it ignores everything that happened before. In many cases, the creation of test plans is based solely on the new stories in Quality Center, one test per requirement, followed by claims that, this way, the traceability compliance requirements are satisfied. As a result, with every release, one will have a new test plan and new scenarios... and no consistency or continuity in test execution.

When there is no automated regression, teams also try to fake it by using parallel runs to replay the same data against the previous and the current release. The narrative is right, but the facts are frequently wrong. The best way to obscure information is to hide it under piles of paper. A parallel run produces huge volumes of data and, usually, there are many breaks/exceptions in concurrent distributed systems. Once software testers go down the road of ignoring them, they stop knowing where they really are.
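Ignoring the breaks is a choice, not a necessity. Below is a minimal reconciliation sketch (the record shape and the key field are assumptions, not any particular platform's format) that classifies every break instead of discarding it; comparing by business key rather than by position also sidesteps the ordering differences that concurrent distributed systems legitimately produce.

```python
from collections import Counter


def reconcile(previous_run, current_run, key=lambda rec: rec["id"]):
    """Compare the outputs of the previous and the current release record by
    record and classify every break, instead of burying it under paper."""
    prev = {key(r): r for r in previous_run}
    curr = {key(r): r for r in current_run}
    breaks = Counter()
    for k in prev.keys() | curr.keys():
        if k not in curr:
            breaks["missing_in_current"] += 1
        elif k not in prev:
            breaks["unexpected_in_current"] += 1
        elif prev[k] != curr[k]:
            breaks["mismatch"] += 1
    return breaks  # only an empty result is a genuine pass
```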

The Fake Testing era has lasted long enough; now on to real testing. In a complex post-trade platform with over 80 interfaces, developing a test harness capable of going through all the permutations under load requires building sophisticated software. It is not possible to distribute this function among small sub-teams locked in the prisons of their own scrums. This reflects the law of requisite variety, which states that the number of states of the test harness should be no less than the number of states of the system under test.
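The arithmetic behind that law is sobering. A back-of-the-envelope sketch (the per-interface state counts below are invented purely for illustration) shows why a harness for 80-plus interfaces cannot be a side project:

```python
from math import prod


def requisite_variety_ok(sut_interface_states, harness_interface_states):
    """Ashby's law of requisite variety applied to a test harness: the harness
    must be able to produce at least as many distinct states as the system
    under test, otherwise some behaviors can never be exercised."""
    sut_variety = prod(sut_interface_states)          # combined SUT state space
    harness_variety = prod(harness_interface_states)  # combined harness state space
    return harness_variety >= sut_variety, sut_variety, harness_variety


# Illustrative numbers only: even three or four states per interface,
# multiplied across 80 interfaces, yields an astronomical state space.
ok, sut_states, harness_states = requisite_variety_ok([3] * 80, [4] * 80)
```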

To make sure that the test harness is up to the task, a parallel software development process should be run.

The name "Exactpro" is an acronym for Exitus Acta Probat — the result confirms the actions. Fake testing will always result in fake coverage, and, still, this practice is adopted quite often. But why would anyone consider killing QA capabilities at all and rely on fake testing? It could just be that fake information travels fast. Sometimes it leads to destructive memes – repeatable ideas and practices making you feel good, instead of doing good.

Here is a tragic historical parallel. In the 19th century, the Xhosa tribe in South Africa believed a young girl who claimed to have met three spirits that requested the Xhosa people to slaughter all their cattle and destroy all their corn. If they did, the red sun would rise and new cattle would appear. It is assumed that the Xhosa came to believe they were the descendants of the Russians who had defeated the English in the Crimea [3]. Of course, it was not the case, and the red sun never appeared. Initially, they blamed the failure on not following the process and only intensified the effort. It is exactly what teams frequently do in failed cargo-cult agile implementations. Whether it was irrational behavior or a deliberate plot by one of the involved parties (perhaps the English governor or the tribal chiefs) remains a difficult question.

Far-left methodologies and a desire to kill the real testing capabilities that are required to produce relevant information are widespread across our industry. As Anthony Hopkins’s character said in Westworld: "People do not want to listen and do not want to change" [4].

But one should not complain about fake testing. They say that great beasts, as big as mountains, once roamed this world. Yet all that is left of them is bone and amber. One day, fake testing will perish. It will lie in the dirt and will turn into sand. And upon that sand, a new thing will ride. One that is not afraid of real testing. Because this world belongs to someone who is yet to come.

Resources:
[1] The Pentagon Wars. Dir. Richard Benjamin. HBO, 1998.
[2] The Bradley Fighting Vehicle (PDF). University of Maryland resources.
[3] Peires, Jeff. The Dead Will Arise: Nongqawuse and the Great Xhosa Cattle-Killing Movement of 1856-7. Indiana University Press, 1st edition, 1989.
[4] Westworld. Season 1, Episode 10. Dir. Jonathan Nolan. HBO, 2016.

Iosif Itkin
CEO, Exactpro