Is 100% test coverage achievable?

Sourabh Parsekar · Published in Nerd For Tech · 9 min read · Aug 14, 2020

People always say it’s good to set new goals, change old habits, and try new things, because if you aren’t learning and taking risks, you won’t grow; you’ll stay right where you are. After working with Spring Boot APIs for a couple of years, it was time to try something new. I was working on a new Spring Boot API project with my team, for which we decided to aim for 100% test coverage.

In other words, we were targeting 100% line coverage using automated JUnit tests for each unit, where a unit varied from a function point or a method to, in some cases, a service class. We could replace “line coverage” with conditional branches, functions, methods, files, classes, or whatever else we want as the basis for counting coverage during an automated test run. The coverage metric certainly helps us gain confidence in the developed code. The developer feels a sense of accomplishment as coverage climbs from zero percent to 25 percent to 70 percent and so on. We should be able to justify every line of code, and prevent a situation in which users find new, untested trails through it.

How do test cases help? Use the TDD approach!

I should also mention that we decided to prepare and plan test cases over the API skeleton before writing the code. We thus followed a kind of Test-Driven Development approach, which added more complexity to our 100% test coverage goal. Test-driven development (TDD) is a development technique where you must first write a failing test before you write new functional code.
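To make the rhythm concrete, here is a minimal, hypothetical illustration (not code from our project) of writing the failing test first and only then the production code:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical TDD illustration: this test is written first and fails
// (it does not even compile) until the minimal production code below is added.
public class AgeCalculatorTest {

    @Test
    public void computesAgeInYearsFromBirthYear() {
        assertEquals(25, new AgeCalculator().ageInYears(1995, 2020));
    }
}

// Written second, with just enough logic to turn the test green.
class AgeCalculator {
    int ageInYears(int birthYear, int currentYear) {
        return currentYear - birthYear;
    }
}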

The initial test cases covered only core functions and positive cases. These were added in JUnit for each class, with a description of what was to be tested and how. The actual implementation was added once we had minimal code in place. We ended up achieving 85% code coverage with over 700 automated JUnit tests for approximately 13,000 lines of code written in Java/Kotlin.

What happened with TDD?

By following the TDD approach, we had a much clearer picture of unit expectations. This helped us create flow diagrams for the different application units. After getting these flow diagrams reviewed, the design was adjusted and the test cases were updated. Having such test cases as pseudocode or text describing the functionality to be implemented helped a lot in achieving a quality product. The end product had 0 code smells, 0 bugs, 0 vulnerabilities, and 0 duplicated code blocks. We also used the SonarLint IDE plugin to flag code quality issues as we wrote the code, and SonarQube analysis was performed on the code to further strengthen quality and detect security issues.

Another key advantage of having test cases is uninterrupted testing of already developed modules. A working test case also helps to recall the functionality when a module comes back for testing after a long gap. The test cases made sure that new modules had no side effects on already created modules, giving us confidence in the written code.

Are developers careless? Are we?

Having said that, all of us have observed that at some point coverage drops, and then it keeps dropping to new lows until we come back and fix the broken tests one by one or add new tests for uncovered or modified code snippets. In other words, as soon as our code coverage shows cracks, chances are that we will not care much if we introduce untested code, which lowers the coverage of the codebase further.

Once there is some sign of disorder visible to the developers (that’s us), we tend to give in to that disorder, independent of our surroundings. Having said that, test coverage should exclude false positives. A false positive is any line of code that is not required to be covered by a test and is not executed during an automated test run. A false positive in this sense might be:

  • a `trivial getter or setter method` which does not get called in tests or code but is auto-generated by the IDE or created for code consistency
  • a `facade method` that solely acts as a trivial forwarding mechanism
  • a class or method for which `automated tests are considered too costly`

Removing False Positives — Trivial methods

Let’s talk about getters and setters. First, let’s check the code auto-generated from XMLs/XSDs using JAXB utilities. These data classes have their own constructors/getters/setters along with XML annotations, toString(), etc. As suspected, these classes were shown as not covered by tests in SonarQube. Hence, for all these classes, we wrote tests for each getter/setter, toString(), and similar methods. Additionally, there could be factory classes; in our case, we had 2 Object Factory classes which were merely used to create sub-objects of request/response data types. Basically, each was a method accepting a string or other datatype and returning an object wrapping the value. These classes can thus be classified as false positives in SonarQube; a sketch of what these mechanical tests looked like is shown below.
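To illustrate (the class and method names here are hypothetical stand-ins, not our actual schema types), the tests for such generated classes are entirely mechanical:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Stand-ins for JAXB-generated types (hypothetical names); the real
// classes also carry XML annotations, toString(), etc.
class PersonType {
    private String firstName;
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
}

class SampleObjectFactory {
    // Factory methods merely wrap a value into an object, as described above.
    public PersonType createPersonType(String firstName) {
        PersonType personType = new PersonType();
        personType.setFirstName(firstName);
        return personType;
    }
}

public class GeneratedTypesTest {

    // The kind of mechanical getter/setter test we wrote over a hundred times.
    @Test
    public void getterReturnsWhatSetterStored() {
        PersonType personType = new PersonType();
        personType.setFirstName("John");
        assertEquals("John", personType.getFirstName());
    }

    @Test
    public void factoryWrapsValueIntoObject() {
        assertEquals("John", new SampleObjectFactory().createPersonType("John").getFirstName());
    }
}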

However, the alternatives for such classes would be to use Kotlin, to generate them into a separate package, or to add them to the exclusion list in SonarQube/JaCoCo. I did not try any of these options due to time constraints and instead ended up writing 100+ tests.

Other data classes were added too. Let’s look at one such class that I added, namely Key. The Key class had just one private final custom data object. This object was final because Key is used as the key to a map in one of our custom libraries, and a map key needs to be immutable. This Key class had overridden equals, hashCode, and toString methods, while the constructor is used to set the object’s value. A minimal sketch of the class follows.
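Here is what such a class might look like (simplified; the hashCode and toString bodies are the standard IDE-generated forms, assumed here):

import java.util.Objects;

// Sketch of the Key class described above: one private final field and
// no setters, so instances stay immutable and are safe to use as map keys.
// DataIdentifier is the project’s data class shown further below.
public class Key {

    private final DataIdentifier dataIdentifier;

    public Key(DataIdentifier dataIdentifier) {
        this.dataIdentifier = dataIdentifier;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Key key = (Key) o;
        return Objects.equals(dataIdentifier, key.dataIdentifier);
    }

    @Override
    public int hashCode() {
        return Objects.hash(dataIdentifier);
    }

    @Override
    public String toString() {
        return "Key{dataIdentifier=" + dataIdentifier + "}";
    }
}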

All these methods were auto-generated using IDE tools, under the assumption that the IDE generates these overridden default functions correctly. The equals method was one such simple-looking function which IntelliJ said was covered by a test, but SonarQube said no, it’s not fully covered. Most likely this is because SonarQube measures condition coverage: the initial test never compared against an object of a different class, nor against objects whose individual fields differ. It’s as if Sonar said, “Hey developer genius, equals is much more complex than you think. Please check all the remaining conditions.”

Let’s look at one such example. Below is the code snippet along with the test case snippet, and a comparison between the IntelliJ and SonarQube analyses of the test case.

[Github project for source code: https://github.com/sourabhparsekar/blog branch: test-cases]

  • Code Snippet:
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;
    DataIdentifier that = (DataIdentifier) o;
    return Objects.equals(age, that.age) &&
            Objects.equals(firstName, that.firstName) &&
            Objects.equals(lastName, that.lastName);
}
  • Test Case Snippet:
@Test
public void testTestEquals() {

    DataIdentifier customDataTest = new DataIdentifier(age, firstName, lastName);
    assertTrue(dataIdentifier.equals(customDataTest));

    //compare same object
    assertTrue(dataIdentifier.equals(dataIdentifier));

    //compare null
    assertFalse(dataIdentifier.equals(null));

}
  • IntelliJ: [screenshot: IntelliJ marks the equals test as covered]
  • SonarQube: [screenshot: SonarQube flags remaining uncovered conditions in equals]

For another class, Key, if the test for the equals function was written as below, then SonarQube marks it 100% covered. The first six lines of the test make sense and resemble what would mostly be used in the code, but the remaining lines, starting with clazz and the reflection that follows, are non-trivial and would certainly generate questions: what is this, why is it done, which fields are being compared? And given that it’s an auto-generated method, do we need to do all this?

  • Equals Code
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;
    Key key = (Key) o;
    return Objects.equals(dataIdentifier, key.dataIdentifier);
}
  • Equals Test Case Code
@Test
public void testTestEquals() {

    Key keyTest = new Key(new DataIdentifier(age, firstName, lastName));
    assertTrue(key.equals(keyTest));

    //compare same object
    assertTrue(key.equals(key));

    //compare different object
    assertFalse(key.equals("key"));

    //compare null
    assertFalse(key.equals(null));

    //compare other equals cases
    Class<?> clazz = null;
    try {
        clazz = Class.forName(Key.class.getName());
        Constructor<?> constructor = clazz.getConstructor(DataIdentifier.class);

        Object object = constructor.newInstance(new DataIdentifier(age, firstName, firstName));
        Assert.assertFalse(key.equals(object));

        object = constructor.newInstance(new DataIdentifier(age, lastName, lastName));
        Assert.assertFalse(key.equals(object));

        object = constructor.newInstance(new DataIdentifier("22", firstName, firstName));
        Assert.assertFalse(key.equals(object));

        object = constructor.newInstance(new DataIdentifier(age, firstName, lastName));
        Assert.assertTrue(key.equals(object));

    } catch (ClassNotFoundException | NoSuchMethodException | InstantiationException | IllegalAccessException | InvocationTargetException e) {
        fail("Exception occurred " + e.getMessage());
    }
}
  • SonarQube Test Coverage: [screenshot: SonarQube now shows equals as 100% covered]

We know the equals method is auto-generated with 4 lines of code using IDE tools. However, the test for it is at least 4 * 4 lines including the initial setup, and this is when we have just one final field to compare in class Key. Imagine the cases required for equals when comparing an off-chain database Key with multiple fields.
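One alternative we did not try, which might cut this cost considerably, is the EqualsVerifier library: it exercises the whole equals/hashCode contract from a single line, though depending on the class design it may need some of its strictness warnings suppressed:

import nl.jqno.equalsverifier.EqualsVerifier;
import org.junit.Test;

// Alternative (not used in our project): EqualsVerifier walks through the
// equals()/hashCode() contract, including the branches SonarQube flagged,
// without any hand-written reflection.
public class KeyEqualsContractTest {

    @Test
    public void equalsAndHashCodeHonourTheirContract() {
        EqualsVerifier.forClass(Key.class).verify();
    }
}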

Classes with auto-generated methods such as getters/setters, equals, toString, hashCode, etc. would probably be classified as false positives and can be skipped from test coverage. This would lead us one step closer to cleaned code coverage.

Removing False Positives — Facade method

Now, there are other classes/methods which only take care of routing requests or calling some other class to do the actual processing: something like a dumb class or method that just calls something else to process the inputs. Controllers can be one such example, whose only job is to receive an external request and dispatch it to a service.

I am not hinting here that we should skip controller testing altogether, just asking whether it is necessary, provided we stick to the principle that the controller only accepts an object with content type XML/JSON/other and dispatches it to a service class. The controller should not have any business logic. A sketch of such a facade controller follows.
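Here is a hypothetical example of such a pure facade controller (names are illustrative, not from our project):

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical facade controller: it only accepts the request body and
// dispatches it to the service, so there is no business logic left to cover.
@RestController
public class PersonController {

    private final PersonService personService;

    public PersonController(PersonService personService) {
        this.personService = personService;
    }

    @PostMapping("/persons")
    public ResponseEntity<PersonResponse> create(@RequestBody PersonRequest request) {
        return ResponseEntity.ok(personService.create(request));
    }
}

// Minimal stubs so the sketch is self-contained.
interface PersonService {
    PersonResponse create(PersonRequest request);
}

class PersonRequest { public String firstName; }
class PersonResponse { public String id; }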

Removing False Positives — Too costly to test

I remember receiving a comment from one of my co-workers on a test case. He said, “What is this sorcery?” What had happened is, I had written a function in the test case to override a private static final field’s value and then test the applicable case. Below is the code for overriding the default behavior, followed by the test and code snippets.

  • Converter Code Snippet
private static final ObjectMapper objectMapper = new ObjectMapper();

public String convertToDatabaseColumn(Message message) {
    try {
        if (message == null)
            return ""; // "" will be returned as there is no value for this field to be saved in db
        String dbString = objectMapper.writeValueAsString(message);
        return dbString.replace("\"", "\\\"");
    } catch (JsonProcessingException jpe) {
        LOG.error("Error converting Meta to Json string", jpe);
        return "";
    }
}
  • Sorcery Function
void setFinalStaticField(Field field, Object newValue) throws Exception {
    field.setAccessible(true);
    Field modifiersField = Field.class.getDeclaredField("modifiers");
    modifiersField.setAccessible(true);
    modifiersField.setInt(field, field.getModifiers() & ~Modifier.FINAL);
    field.set(null, newValue);
}
  • Test Case
@Test
public void convertToDatabaseColumnForInvalidMessage() throws Exception {
    ObjectMapper objectMapper = Mockito.mock(ObjectMapper.class);
    when(objectMapper.writeValueAsString(any())).thenThrow(JsonProcessingException.class);

    //set the field
    setFinalStaticField(converter.getClass().getDeclaredField("objectMapper"), objectMapper);

    assertEquals("", converter.convertToDatabaseColumn(Mockito.mock(Message.class)));
}

The whole idea of having this test was to test a catch block, i.e. a negative condition. Thus, I had to override the private final ObjectMapper field so that it would throw an exception when trying to convert an entity attribute to a String, and the catch block would be executed. The converter code was already checking for null input, so a null input did not work for this case, and I ended up overriding the private final object inside the class to achieve coverage for the catch block. All this sorcery was done to test a catch block which may never be called, since a POJO object is always passed to the function.

There may be other ways of doing it, but I had already spent a lot of time reaching this solution. The question we should ask is: was the effort worth it, and can it be justified? I do not know. We should be able to leave such non-trivial, complex test cases out of coverage, as the cost of writing one such test is too high compared to the functionality being tested. A positive case should have sufficed here, knowing that the probability of this method receiving a non-POJO Message is trifling.
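In hindsight, a cheaper route (an alternative sketch, not what we shipped) would be to make the ObjectMapper injectable, so the failing path can be reached with an ordinary Mockito mock; worth noting too that the modifiers-field trick stops working on newer JDKs, where that field is filtered from reflection:

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

// Alternative design: injecting the mapper makes the catch block reachable
// with a plain mock, with no reflection over private static final fields.
// Message is the project's payload class, assumed available here.
public class MessageConverter {

    private final ObjectMapper objectMapper;

    public MessageConverter() {
        this(new ObjectMapper()); // production default
    }

    // Visible for testing: the test passes a mock mapper in here.
    MessageConverter(ObjectMapper objectMapper) {
        this.objectMapper = objectMapper;
    }

    public String convertToDatabaseColumn(Message message) {
        try {
            if (message == null)
                return "";
            return objectMapper.writeValueAsString(message).replace("\"", "\\\"");
        } catch (JsonProcessingException jpe) {
            return "";
        }
    }
}

The negative test then shrinks to stubbing writeValueAsString(any()) to throw JsonProcessingException on the mock passed into the constructor, with no sorcery function required.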

What do we want to do, or what can we do?

We may want to generate some sort of report from which the developer can decide whether to add a test to cover the code or to mark it as a false positive to be excluded from the actual code coverage. This decision is an extra step that we don’t have when just looking at the raw code coverage and deciding that a reduction is OK. JaCoCo indeed supports such exclusions, but this would need exploration if we want to aim for 100% cleaned code coverage rather than plain 100% line coverage.

What we should do?

I feel we should make the effort to write code whose execution can easily be automated and whose existence can easily be justified. If we look at our code from a testability perspective and apply the Pareto Principle, then testing the last 20 percent of our code takes 80 percent of our testing effort, meaning diminishing returns. We may not go for 100 percent line coverage, but we can certainly go for 100 percent testability and 100 percent demonstrable certainty that we have tried things before throwing them at our users.
