Test Driven Development (TDD) Example

This series has spent quite a bit of time talking about Test Driven Development (TDD), and has shown many examples along the way. Now I'd like to take a step back and return to the roots of TDD, both to show why it can be such an effective development strategy and to review the fundamentals.

Test Driven Development is guided by the belief that your tests should be self-documenting; that they should be the best place for a new developer to start learning the code base, absorbing the business's expectations in the form of well-structured, well-asserted tests. The general workflow when following TDD correctly makes use of the following pattern:

- Red: write a failing test that describes the behavior you want
- Green: write the minimum amount of production code necessary to make that test pass
- Refactor: clean up the implementation, with the passing tests as your safety net

Correct use of this pattern also passively presents an ancillary principle: the tests should run fast. Because you're encouraged to move quickly through the cycle, from writing a failing test, to getting the test(s) to pass, to implementing the next piece of acceptance criteria in your feature, your tests have to execute quickly for you to iterate effectively. Detractors of TDD often point to the stringent "write a failing test before producing production-level code" policy as hamstringing developers; proponents know that, in addition to simplifying the creation of new code by focusing only on making a new test pass, TDD excels when quickly iterating on new functionality, because the code coverage you generate along the way makes it easy to change existing code with complete confidence. Let's dive in and see how TDD leads from the inception of a feature request to a fully working feature.

The Feature Request

Your company/client comes to you with a new feature request. Right away, you're cautious: this is clearly entirely new functionality. There won't be any overlap with existing code, or at least none that you can foresee initially. Finance & Sales have teamed up to rework the Stages for existing Opportunities so that their Probabilities are assigned using a secretive new forecasting model. In order to test the effectiveness of the new model, they want to perform a split-test without informing the sales reps that some of their Opportunities are going to be withdrawn. In addition to holding a small percentage of Opportunities in the forecasting control group, they need the "old" Probability scores to be mapped to a new custom field on the Opportunity; there'll be a one-time data mapping to populate this new field, and then the Probabilities assigned to the Opportunity Stages will be updated to reflect the new model.

This is meant to sound familiar. What follows probably isn't — but that's the nature of feature requests. Because they're specific to the client/business, I'm instead going to focus on how to solve a problem, rather than going with something siloed to a specific industry. The feature request looks something like this:

With the new Opportunity Probabilities in place, a workflow rule will update some of them, assigning the Probability to an anti-prime number. When you see an Opportunity get updated with one of these sentinel Probability scores, you'll need to unassign the existing Opportunity owner and reassign it to a system user, as well as map the prior Probability to the new custom field.

Building An Anti-Prime Generator

First of all: what's an "anti-prime" number? An anti-prime (also known as a highly composite number) is a number with more divisors than any smaller number. For example, 1 has one divisor, 2 has two, 4 has three, and 6 has four, while 3 and 5 are skipped because their two divisors don't beat 2's count. Since we're operating on a percentage scale for Probability, that means we'll chiefly be concerned with all of the anti-primes from 0 to 100. Let's begin!

TDD states that lack of code, or lack of code that compiles, counts as a failing test. The first thing we'll need to do is create the object that we'd like to house this business logic in, and define a well-named method that returns true/false:

classes/AntiPrime.cls
public class AntiPrime {
  public static Boolean isAntiPrime(Integer num) {
    return false;
  }
}

That gives us the wings we need to confidently start down the road towards testing this feature:

classes/AntiPrimeTests.cls
@isTest
private class AntiPrimeTests {
  @isTest
  static void it_should_detect_one_as_an_antiprime() {
    System.assertEquals(true, AntiPrime.isAntiPrime(1));
  }
}

Now we have a failing test to work with, and we can begin implementing this feature. The naive implementation makes no assumptions:

classes/AntiPrime.cls
public class AntiPrime {
  public static Boolean isAntiPrime(Integer num) {
    return num == 1 ? true : false;
  }
}

Now the first test passes, but we know there are several other anti-prime numbers below 100. 1 is the first anti-prime because it has exactly one divisor: 1/1 = 1. That also means that in order for the next number in the sequence to qualify, it has to have at least two divisors. Time to write another failing test, and then perhaps we'll be able to refactor ...

classes/AntiPrimeTests.cls
@isTest
static void it_should_detect_two_as_an_antiprime() {
  System.assertEquals(true, AntiPrime.isAntiPrime(2));
}

Now we are back to the "Red" part of our TDD workflow, and we need to re-assess how we're going to get to green. Clearly, the simplest case is again the best way:

classes/AntiPrime.cls
public class AntiPrime {
  public static Boolean isAntiPrime(Integer num) {
    if (num == 1 || num == 2) {
      return true;
    }
    return false;
  }
}

Now both our tests pass, but we're left with the sneaking suspicion that it's time to refactor: we're now using two "magic" numbers, 1 and 2, to represent the anti-primes, when we actually want to generate them programmatically. Time to go back to the drawing board:

classes/AntiPrime.cls
public class AntiPrime {
  public static Integer primesBeforeDefault = 100;

  public static Boolean isAntiPrime(Integer num) {
    return antiPrimesBefore.contains(num);
  }

  /* if you try to use the simpler singleton pattern here,
  e.g. antiPrimesBefore = getAntiPrimes(), it's fine for calls
  to isAntiPrime, but the set will be double initialized when
  testing against getAntiPrimes(); you also won't be able to
  reset primesBeforeDefault */
  private static Set<Integer> antiPrimesBefore {
    get {
      if (antiPrimesBefore == null) {
        antiPrimesBefore = getAntiPrimes();
      }
      return antiPrimesBefore;
    }
    private set;
  }

  private static Set<Integer> getAntiPrimes() {
    Integer potentialAntiPrime = 1;
    Integer divisorCount = 0;
    Set<Integer> antiPrimes = new Set<Integer>();
    while (potentialAntiPrime <= primesBeforeDefault) {
      Integer localDivisorCount = 0;
      for (
        Integer potentialDivisor = 1;
        potentialDivisor <= potentialAntiPrime;
        potentialDivisor++
      ) {
        if (Math.mod(potentialAntiPrime, potentialDivisor) == 0) {
          localDivisorCount++;
        }
      }
      if (localDivisorCount > divisorCount) {
        divisorCount++;
        antiPrimes.add(potentialAntiPrime);
      }
      potentialAntiPrime++;
    }
    return antiPrimes;
  }
}

Now there's just one "magic" number, the primesBeforeDefault pseudo-constant. Introducing it has accomplished three things:

- the anti-primes are now generated programmatically instead of being hard-coded
- the upper bound for generation is configurable, so callers (and tests) can ask for anti-primes beyond 100
- we can detect (and test for) the case where a number larger than the generated range is passed in

classes/AntiPrimeTests.cls
@isTest
static void it_should_throw_exception_if_number_larger_than_anti_primes_generated_is_passed() {
  AntiPrime.primesBeforeDefault = 100;
  Exception e;
  try {
    AntiPrime.isAntiPrime(200);
  } catch (Exception ex) {
    e = ex;
  }

  System.assertNotEquals(null, e);
}

@isTest
static void it_should_work_with_numbers_greater_than_100() {
  AntiPrime.primesBeforeDefault = 120;
  System.assertEquals(true, AntiPrime.isAntiPrime(120));
}

And in AntiPrime:

classes/AntiPrime.cls
public static Boolean isAntiPrime(Integer num) {
  if (num > primesBeforeDefault) {
    throw new AntiPrimeException('Primes weren\'t generated to: ' + num);
  }
  return antiPrimesBefore.contains(num);
}

// ....

public class AntiPrimeException extends Exception {}

Now it's time to continue with the tests to ensure that all of our expected anti-primes are being generated correctly. Let's raise the visibility of the getAntiPrimes private static method to see what's currently being output:

classes/AntiPrime.cls
@testVisible
private static Set<Integer> getAntiPrimes() {
  // ..
}

// and in AntiPrimeTests.cls ...
@isTest
static void it_should_properly_generate_anti_primes_below_sentinel_value() {
  // make no assumptions!
  AntiPrime.primesBeforeDefault = 100;
  System.assertEquals(
    new Set<Integer>{ 1, 2, 4, 6, 12, 24, 36, 48, 60 },
    AntiPrime.getAntiPrimes()
  );
}

Aaaaand the test fails. Examining the output, it seems I've introduced an unintended bug during my refactor. Did you spot it? You see, 60 and 72 both have 12 divisors ... but I messed up when incrementing the divisorCount variable. It shouldn't just be incremented by one when localDivisorCount exceeds the last divisor count; it should be set equal to localDivisorCount. With the buggy increment, the running count lags behind the true maximum (which also lets numbers like 18 and 30 sneak in): it's only 10 when 60 is reached and only ticks up to 11 afterwards, so 72 wrongly qualifies even though it has no more divisors than 60:

classes/AntiPrime.cls
@testVisible
private static Set<Integer> getAntiPrimes() {
  // ...
  if (localDivisorCount > divisorCount) {
    divisorCount = localDivisorCount;
    antiPrimes.add(potentialAntiPrime);
  }
  potentialAntiPrime++;
  // ...
}

Now the tests all pass. At this point, because you deterministically know the values for the anti-primes below 100, you could definitely make the argument that the first two tests — testing for specific values — should be deleted.

You could also make the case that the second one should instead be modified to test for the last value below 100 (in other words, that 60 is correctly detected). I would go down the latter path, knowing that the test for getAntiPrimes was covering the other cases.
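If you do go that route, the modified test might look something like the sketch below (illustrative only, not taken from the original pull request), asserting that 60, the largest anti-prime at or below 100, is detected:

classes/AntiPrimeTests.cls
@isTest
static void it_should_detect_sixty_as_an_antiprime() {
  // 60 is the last anti-prime at or below the default ceiling of 100
  System.assertEquals(true, AntiPrime.isAntiPrime(60));
}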

An Aside On Anti-Prime Number Generation

It's true that solving the anti-prime problem is easier in some languages with more expressive/fluent array features. However, examining the presented solutions, I would advise you to keep readability and performance in mind. Most of the submitted answers treat 1 (and, occasionally, 2 as well) as a special case, whereas I was more concerned with showing how to treat all numbers equally, although you can certainly make the argument that 0 is not treated particularly equally in any of the solutions, mine included.

Code style is a contentious topic, and I don't mean to present my implementation as the preferred solution. In reality, there is no way to avoid nested iteration in a Java-like language when building the solution; however, your taste and preferences with regard to for or while loops could entirely (and understandably) differ from mine. I use while loops infrequently, though if you've read my Writing Performant Apex Tests post, you'll know that they frequently outperform the plain-jane for loops.

That said, the only change I think would improve the legibility of the above solution would be Apex support for range-based collection initialization, which would make the inner iteration in getAntiPrimes more expressive by simplifying the for loop. Writing code, even code that needs to be extremely performant, always requires striking a suitable balance between readability and performance.

As a counterpoint, take a look at the F# example in the provided link ... sure, it works, but what if it didn't?!

Completing The Feature Request

The rest of the feature request falls much more in line with our pre-existing code (and is omitted, as a result). We can see that we're going to need to call AntiPrime from within our Opportunity handler's before update method, assign the old Probability to the hidden custom field, and reassign the owner to our placeholder owner if the new Probability is anti-prime.

A finished pull request for this feature will end up including:

- the AntiPrime class and its accompanying AntiPrimeTests
- the additions to the Opportunity handler's before update method: mapping the prior Probability to the new custom field and reassigning the owner when the new Probability is anti-prime
- tests for that handler logic, covering both the field mapping and the owner reassignment
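To make that concrete, here's a minimal sketch of what the handler logic might look like. It assumes a hypothetical Prior_Probability__c custom field, a hypothetical class name, and a placeholder owner Id passed in by the caller; none of these names come from the original feature, so treat them purely as illustration:

classes/OpportunityUpdateHandler.cls (hypothetical)
public class OpportunityUpdateHandler {
  // the "system" user that anti-prime Opportunities get reassigned to;
  // in reality this Id would be queried or configured elsewhere
  private final Id placeholderOwnerId;

  public OpportunityUpdateHandler(Id placeholderOwnerId) {
    this.placeholderOwnerId = placeholderOwnerId;
  }

  // called from the trigger's before update context, so field writes
  // don't require additional DML
  public void beforeUpdate(List<Opportunity> updatedOpps, Map<Id, Opportunity> oldOppMap) {
    for (Opportunity opp : updatedOpps) {
      Opportunity oldOpp = oldOppMap.get(opp.Id);
      Boolean probabilityChanged = opp.Probability != oldOpp.Probability;
      if (probabilityChanged && opp.Probability != null && AntiPrime.isAntiPrime(opp.Probability.intValue())) {
        // map the "old" Probability to the hidden custom field ...
        opp.Prior_Probability__c = oldOpp.Probability;
        // ... and hand the Opportunity off to the placeholder owner
        opp.OwnerId = placeholderOwnerId;
      }
    }
  }
}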

Test Driven Development Is Your Friend

Hopefully you can see how the "red, green, refactor" mindset can help you to quickly iterate on new and existing features. Having the safety net of your tests helps provide feedback on your system's design as it grows over time. Writing tests also helps you to focus on the single smallest step you can take as you develop to "get to green." Though it's true that some big refactors end up requiring you to rework tests, in general I find that even with large-scale (30+ files) refactors, I rarely have to update tests in a well-designed code base. Rather, the existing tests themselves help me verify that everything has been re-implemented correctly.

This is also because TDD fits in well with a core tenet of the Object-Oriented Programming paradigm, the "Open Closed Principle," which states:

Objects should be open for extension but closed for modification

When your building blocks are small and expressive, they can contribute effectively to the larger domain problems without being modified. Similarly, when your tests are small, you're motivated and incentivized to keep your methods small, your public interfaces minimal, and your designs clean. For true "helper" methods like an anti-prime generator, static methods help to keep your code footprint small by minimizing the number of objects you need to initialize and keep track of.

For something like an OpportunityOwnerReassigner object, which could encapsulate the decision to reassign an owner based on the Opportunity's Probability being anti-prime, it's crucial to keep in mind that while this specific feature calls for reassignment by means of the Opportunity's Probability field, future requests might expand the set of fields, or change the specific owner, to consider when making a reassignment. That might even be the next request, which would be a perfect example of extending an existing object's responsibilities in light of new requirements.
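As a sketch of what that extension point might look like (the Criterion interface and the criterion class name here are hypothetical, not part of the original feature), the reassigner could accept its decision logic rather than hard-code it:

classes/OpportunityOwnerReassigner.cls (hypothetical)
public class OpportunityOwnerReassigner {
  // the extension point: new reassignment rules implement this interface
  // instead of modifying the reassigner itself
  public interface Criterion {
    Boolean shouldReassign(Opportunity opp);
  }

  // today's rule: reassign when the Probability is an anti-prime
  public class AntiPrimeProbabilityCriterion implements Criterion {
    public Boolean shouldReassign(Opportunity opp) {
      return opp.Probability != null && AntiPrime.isAntiPrime(opp.Probability.intValue());
    }
  }

  private final Criterion criterion;
  private final Id newOwnerId;

  public OpportunityOwnerReassigner(Criterion criterion, Id newOwnerId) {
    this.criterion = criterion;
    this.newOwnerId = newOwnerId;
  }

  public void reassign(List<Opportunity> opps) {
    for (Opportunity opp : opps) {
      if (criterion.shouldReassign(opp)) {
        opp.OwnerId = newOwnerId;
      }
    }
  }
}

A future request that reassigns based on additional fields, or to a different owner, then becomes a new Criterion implementation (or a new constructor argument) rather than a change to the reassigner's existing logic.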


Once again, I'd like to thank you for following along with the Joys Of Apex. The anti-prime problem is a fun little one to solve, and there are many different ways the solution could be implemented. I initially started thinking about it following a trivia question on the subject; hopefully it stands in well as an abstract example of what some obscure business logic might end up looking like to an outsider. I also hope it proved fun to see how TDD can help you iterate on a problem in a well-defined way. Till next time!

The original version of Test Driven Development By Example can be read on my blog.
