
Writing Performant Apex Tests

Let's go back to basics when talking about designing and working in Apex codebases. There are a few design patterns that help to make your unit tests fast and performant — which in turn speeds up your development time. Likewise, being able to run your whole test suite in a matter of minutes (max) becomes crucially important as your (or your client's) system grows organically over time. Refactoring code is a lot like weeding a garden: you might see some beautiful things without it, but you'll never be able to work consistently in an Apex codebase without identifying patterns, abstracting them, and re-writing code to make use of the new abstractions.

There is such a thing as refactoring too early — namely, while writing the code the first time. Indeed, the TDD mentality is often repurposed to promote something an old colleague of mine fondly refers to as "prefactoring" — refactoring your code too early. The "red, green, refactor" mantra is encouraged instead of trying to achieve the perfect system upfront; code is inherently complicated, business needs transform over time, and while humans are exceptional at pattern recognition when it's staring them in the face, we fare poorly as a species when attempting to forecast patterns. If you wouldn't do it with the stock market, in other words, you probably shouldn't be doing it with the code you're writing.

So what are the most important Apex patterns to follow? I would hazard to guess that the two most important patterns that should be followed in all Apex codebases are:

- keeping to one trigger per SObject, with the actual logic routed through a handler class
- wrapping your DML statements (insert / update / upsert / delete) in a dedicated class, so that database access can be mocked in your unit tests

So what follows, logically, when considering architecture on the SFDC platform in light of these two patterns? In a way, though Salesforce has now released the Platform Event and EventBus architecture, you can also think of your triggers as event observers; most code is ultimately designed to respond to records changing, or to respond to input from the outside world.

Test Fast, Test Often

What other patterns may be of use to the aspiring developer / systems architect? One of the reasons that I advise the use of a DML-mocking architecture is because of how expensive it is, in terms of time, to insert/update records in your unit tests. In a large org with hundreds/thousands of tests, it's not uncommon for test runs to take upwards of an hour (indeed, in some codebases, people might be dreaming about the tests taking only an hour to run).
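If you haven't read the Mocking DML post, here's a rough sketch of what that architecture can look like. This is heavily simplified — the real Crud / CrudMock classes cover bulk overloads, upserts, deletes, and fake Id generation — but it conveys the shape of the pattern:

```apex
//simplified sketch of the DML-wrapper pattern from the Mocking DML post;
//production code talks to Crud, tests swap in CrudMock
public virtual class Crud {
    public virtual void doInsert(List<SObject> records) { insert records; }
    public virtual void doUpdate(List<SObject> records) { update records; }
}

public class CrudMock extends Crud {
    //static lists let tests assert on what *would* have been written
    //without ever touching the database
    public static List<SObject> Inserted = new List<SObject>();
    public static List<SObject> Updated = new List<SObject>();

    private static CrudMock mock;
    public static CrudMock getMock() {
        if(mock == null) { mock = new CrudMock(); }
        return mock;
    }

    public override void doInsert(List<SObject> records) { Inserted.addAll(records); }
    public override void doUpdate(List<SObject> records) { Updated.addAll(records); }
}
```

Because the rest of your code only ever performs DML through this seam, a test can verify `CrudMock.Inserted.size()` instead of paying for an actual database round-trip.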

If you're in the middle of refactoring, minimizing the amount of time your tests take to run is the single best way to improve your development time. If you have to wait hours prior to validating a code change (or, even worse, a deploy fails and you have other features needing to be deployed at the same time ...), your ability to quickly respond to change has been completely hamstrung. It's also not necessary to do a total codebase overhaul when implementing changes like a DML-wrapper; start in an isolated area, show your team/clients that development speeds up as test coverage and speed lend themselves to developer confidence, and make incremental changes to support an overall reduction in test time.

This leads me into coverage of the existing industry standard for mocking libraries, FFLib's Apex Mocks. It conforms to the Mockito dependency injection standard for mocking, and allows you to inject stubs into your Apex unit tests — purportedly increasing their speed by replacing complicated database calls and large insert/update operations with mocks of your choice.

But how performant is the existing Apex Mocks library, when compared to the Crud class that I introduced in the aforementioned Mocking DML post, and made similarly accessible in your tests through the use of the Factory Dependency Injection pattern? This originally came up as a pretty bold challenge by a user on reddit who seemed to suggest that there was no space in the Salesforce ecosystem for another dependency injection framework; I thought it best to test that assertion, and the results can also be found covered in great detail on my Apex Mocks repo's master branch.

The simplest possible method for stress-testing the two systems is to fake the insertion of a large amount of data. I originally wanted to iterate over a million rows, to simulate the kind of real-world conditions you might see when working with Batch Apex, or when your org frequently responds to bulk interactions from external APIs:


@isTest
private class ApexMocksTests {
    private static Integer LARGE_NUMBER = 1000000;

    @isTest
    static void fflib_should_mock_dml_statements_update() {
        // Given
        fflib_ApexMocks mocks = new fflib_ApexMocks();
        ICrud mockCrud = (ICrud)mocks.mock(Crud.class);
        Account fakeAccount = new Account();

        // When
        for(Integer i = 0; i < LARGE_NUMBER; i++) {
            mockCrud.doUpdate(fakeAccount);
        }

        // Then
        mocks.verify(mockCrud, LARGE_NUMBER);
    }

    @isTest
    static void crudmock_should_mock_dml_statements_update() {
        // Given
        ICrud mockCrud = CrudMock.getMock();
        Account fakeAccount = new Account();

        // When
        for(Integer i = 0; i < LARGE_NUMBER; i++) {
            mockCrud.doUpdate(fakeAccount);
        }

        // Then
        System.assertEquals(LARGE_NUMBER, CrudMock.Updated.size());
    }
}


That led to some pretty unfortunate results in the console.

Using diff notation to indicate test passes / failures:

$ yarn test ApexMocksTests*
$ dmc test ApexMocksTests*
[dmc] using org: apex-mocks (default)
[dmc] * src/classes/ApexMocksTests.cls
[dmc] ===> ApexMocksTests test results <===
+[dmc] [pass] ApexMocksTests: crudmock_should_mock_dml_statements_update, time: 4.635s
-[err] [fail] ApexMocksTests: fflib_should_mock_dml_statements_update =>
-System.LimitException: Apex CPU time limit exceeded =>
-Class.fflib_MethodCountRecorder.recordMethod: line 57, column 1
-Class.fflib_ApexMocks.recordMethod: line 170, column 1
-Class.fflib_ApexMocks.mockNonVoidMethod: line 280, column 1
-Class.fflib_ApexMocks.handleMethodCall: line 83, column 1
-Class.Crud__sfdc_ApexStub.doUpdate: line 103, column 1
-Class.ApexMocksTests.fflib_should_mock_dml_statements_update:
-line 14, column 1, time: 16.06s
[dmc] ===> Number of tests run: 2 <===
[dmc] ===> Total test time: 20.69500s <===
[err] Failed -> 1 test failures
[dmc] [NOT OK]
error Command failed with exit code 1.

Unlucky. The FFLib library can't handle iterating over a million rows (it also can't handle 100,000) - let's try 10,000 instead:

private static Integer LARGE_NUMBER = 10000;

And the results:

$ yarn test ApexMocksTests*
yarn run v1.22.0
$ dmc test ApexMocksTests*
[dmc] using org: apex-mocks (default)
[dmc] * src/classes/ApexMocksTests.cls
[dmc] ===> ApexMocksTests test results <===
+[dmc] [pass] ApexMocksTests: crudmock_should_mock_dml_statements_update, time: 0.591s
+[dmc] [pass] ApexMocksTests: fflib_should_mock_dml_statements_update, time: 11.145s
[dmc] ===> Number of tests run: 2 <===
[dmc] ===> Total test time: 11.73600s <===
[dmc] [OK]

Dan Appleman talked several years ago at Dreamforce about the need for "burn-in" when testing: that test results can vary run-over-run, and that some optimization seemingly takes place within Apex as tests are run frequently. On the Apex Mocks repo, you can see the result of FFLib's library versus my own over no fewer than 10 different test runs, but the moral of the story is that these results didn't occur randomly, or vary wildly run over run. Time after time, the use of a simple DML wrapper proved to be ridiculously more performant than the existing FFLib mocking implementation. If you're working for an enterprise organization with hundreds or thousands of tests, the time-savings potential alone in using the Crud/CrudMock wrappers is something that should be making your ears perk up.

What other Apex paradigms can we look to validate through the use of tests?

Looping in Apex

Let's talk about loops. You might be thinking to yourself right now ... really, loops? Like, a for loop?? Why do we need to talk about that?! Because there's a lot of potential for performance optimization when it comes to iterating through large lists of records — particularly if this is an area you've never thought about optimizing previously. Specifically, how you iterate through loops matters. If you're talking about business-critical functionality, the first thing you can optimize is the number of loops you execute.

This is not an Apex-specific optimization; a friend of mine and I were shocked, several years ago, when implementing analytics with Mixpanel - their HTTP API for tracking events accepts a maximum of 50 events at a time. Our first stab at splitting lists of events made heavy use of .Net's LINQ syntax — a pleasant experience for any developer, particularly with a fluent interface that lets you chain together commands to quickly cobble together two different lists of event records. However, due to the number of times our lists were being iterated through with LINQ, our program's thread time was quite high ... and, as anybody familiar with cloud computing can relate to, time == $$. Ditching LINQ and using one iteration method to split up our lists ended up shaving enough time off of our process time to fit within the cheapest pricing level our cloud provider offered.
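Translated into Apex terms, the single-iteration version of that list-splitting can be sketched like so (a hypothetical `splitIntoBatches` helper for illustration, not code from the actual Mixpanel integration):

```apex
//split a list into batches of at most batchSize in a single pass,
//rather than repeatedly filtering/slicing the source list
public static List<List<SObject>> splitIntoBatches(List<SObject> records, Integer batchSize) {
    List<List<SObject>> batches = new List<List<SObject>>();
    List<SObject> currentBatch = new List<SObject>();
    for(SObject record : records) {
        currentBatch.add(record);
        if(currentBatch.size() == batchSize) {
            batches.add(currentBatch);
            currentBatch = new List<SObject>();
        }
    }
    //don't forget the final, possibly-partial batch
    if(currentBatch.isEmpty() == false) {
        batches.add(currentBatch);
    }
    return batches;
}
```

One pass over the source list, no matter how many batches come out the other side.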

After that, though, there are a few different ways to iterate:

- the classic while loop
- the basic for loop
- the "syntax sugar" for-each loop
- the built-in List iterator

Let's write some tests:


@isTest
private class LoopTests {
    //I only added the baseline test after first running the initial
    //tests a number of times. You'll see when it starts to be measured
    //in my output. Apologies for the oversight!
    @isTest
    static void it_should_establish_baseline_using_while_loop() {
        List<SObject> accountsToInsert = fillAccountList();
    }

    @isTest
    static void it_should_test_fake_while_loop_insert() {
        List<SObject> accountsToInsert = fillAccountList();

        CrudMock.getMock().doInsert(accountsToInsert);

        System.assertEquals(LARGE_NUMBER, CrudMock.Inserted.size());
    }

    @isTest
    static void it_should_test_fake_basic_for_loop_insert() {
        List<SObject> accountsToInsert = new List<SObject>();
        for(Integer index = 0; index < LARGE_NUMBER; index++) {
            Account acc = new Account(Name = 'Test' + index);
            accountsToInsert.add(acc);
        }

        CrudMock.getMock().doInsert(accountsToInsert);

        System.assertEquals(LARGE_NUMBER, CrudMock.Inserted.size());
    }

    @isTest
    static void it_should_test_fake_syntax_sugar_for_loop_insert() {
        List<SObject> accountsToInsert = fillAccountList();

        for(SObject record : accountsToInsert) {
            setNameToRandomValue(record);
        }

        CrudMock.getMock().doInsert(accountsToInsert);

        System.assertEquals(LARGE_NUMBER, CrudMock.Inserted.size());
    }

    @isTest
    static void it_should_test_iterator_while_loop_insert() {
        List<SObject> accountsToInsert = fillAccountList();

        //you can only use iterators in while loops
        while(accountsToInsert.iterator().hasNext()) {
            setNameToRandomValue(accountsToInsert.iterator().next());
        }
    }

    private static Integer LARGE_NUMBER = 100000;

    private static List<SObject> fillAccountList() {
        Integer counter = 0;
        List<SObject> accountsToInsert = new List<SObject>();
        while(counter < LARGE_NUMBER) {
            Account acc = new Account(Name = 'Test' + counter);
            accountsToInsert.add(acc);
            counter++;
        }
        return accountsToInsert;
    }

    private static void setNameToRandomValue(SObject record) {
        record.put('Name', 'Something ' + Math.random().format());
    }
}


To be clear: because some of the test methods make use of the fillAccountList function and then do additional work, I wanted to establish a baseline for how long that particular iteration took, in order to understand how it affected the other methods that required a filled list before doing their own thing. My first attempt, with LARGE_NUMBER set to 1 million, didn't go so hot:

$ yarn test LoopTest*
yarn run v1.22.0
$ dmc test LoopTest*
[dmc] using org: apex-mocks (default)
[dmc] * src/classes/LoopTests.cls
[dmc] ===> LoopTests test results <===
-[err] [fail] LoopTests: it_should_test_fake_basic_for_loop_insert =>
-System.LimitException: Apex CPU time limit exceeded =>
-Class.LoopTests.it_should_test_fake_basic_for_loop_insert: line 18, column 1, time: 16.05s
-[err] [fail] LoopTests: it_should_test_fake_syntax_sugar_for_loop_insert =>
-System.LimitException: Apex CPU time limit exceeded =>
-Class.LoopTests.fillAccountList: line 54, column 1
-Class.LoopTests.it_should_test_fake_syntax_sugar_for_loop_insert: line 28, column 1, time: 15.732s
-[err] [fail] LoopTests: it_should_test_fake_while_loop_insert =>
-System.LimitException: Apex CPU time limit exceeded =>
-Class.LoopTests.fillAccountList: line 52, column 1
-Class.LoopTests.it_should_test_fake_while_loop_insert: line 6, column 1, time: 16.082s
-[err] [fail] LoopTests: it_should_test_iterator_for_loop_insert =>
-System.LimitException: Apex CPU time limit exceeded =>
-Class.LoopTests.fillAccountList: line 53, column 1
-Class.LoopTests.it_should_test_iterator_for_loop_insert: line 41, column 1, time: 15.924s
[dmc] ===> Number of tests run: 4 <===
[dmc] ===> Total test time: 63.78800s <===
[err] Failed -> 4 test failures
[dmc] [NOT OK]

Hmm, OK. LARGE_NUMBER was a little too ... large. Let's try 100k instead.

$ yarn test LoopTest*
yarn run v1.22.0
$ dmc test LoopTest*
[dmc] using org: apex-mocks (default)
[dmc] * src/classes/LoopTests.cls
[dmc] ===> LoopTests test results <===
+[dmc] [pass] LoopTests: it_should_test_fake_basic_for_loop_insert, time: 16.105s
-[err] [fail] LoopTests: it_should_test_fake_syntax_sugar_for_loop_insert =>
-System.LimitException: Apex CPU time limit exceeded =>
-Class.TestingUtils.generateIds: line 17, column 1
-Class.CrudMock.doInsert: line 21, column 1
-Class.LoopTests.it_should_test_fake_syntax_sugar_for_loop_insert: line 34, column 1, time: 15.869s
+[dmc] [pass] LoopTests: it_should_test_fake_while_loop_insert, time: 13.554s
-[err] [fail] LoopTests: it_should_test_iterator_for_loop_insert =>
-System.LimitException: Apex CPU time limit exceeded =>
-Class.LoopTests.setNameToRandomValue: line 61, column 1
-Class.LoopTests.it_should_test_iterator_for_loop_insert: line 44, column 1, time: 15.323s
[dmc] ===> Number of tests run: 4 <===
[dmc] ===> Total test time: 60.85100s <===
[err] Failed -> 2 test failures
[dmc] [NOT OK]

OK, so we're getting somewhere. As expected, the while loop and vanilla for loop outperform their fancier counterparts. It's a little disappointing that the syntax sugar for loop and the iterator don't compile down to the same instructions, but let's change LARGE_NUMBER to 10k and take a look at the results (you'll notice this is also where I added in the baseline for the first time ...):

$ yarn test LoopTests*
yarn run v1.22.0
$ dmc test LoopTests*
[dmc] using org: apex-mocks (default)
[dmc] * src/classes/LoopTests.cls
[dmc] ===> LoopTests test results <===
+[dmc] [pass] LoopTests: it_should_establish_baseline_using_while_loop, time: 0.304s
+[dmc] [pass] LoopTests: it_should_test_fake_basic_for_loop_insert, time: 1.366s
+[dmc] [pass] LoopTests: it_should_test_fake_syntax_sugar_for_loop_insert, time: 2.354s
+[dmc] [pass] LoopTests: it_should_test_fake_while_loop_insert, time: 1.592s
-[err] [fail] LoopTests: it_should_test_iterator_for_loop_insert =>
-System.LimitException: Apex CPU time limit exceeded =>
-Class.LoopTests.setNameToRandomValue: line 66, column 1
-Class.LoopTests.it_should_test_iterator_for_loop_insert: line 49, column 1, time: 16.473s
[dmc] ===> Number of tests run: 5 <===
[dmc] ===> Total test time: 22.08900s <===
[err] Failed -> 1 test failures
[dmc] [NOT OK]

Overall, this is some highly fascinating stuff. You can see that, apples-to-apples, the basic while loop completely dominates, running more than a second faster than the baseline for loop. As expected, the syntax sugar for loop lags a little bit behind. The real surprise, for me, was how terrible the performance of the built-in List iterator is. Supposing that it is implemented behind the scenes as a simple while loop — certainly, that's the implementation I would expect in this case — it seems downright bizarre for it to perform so poorly. I should also note that I ran the tests several times before reporting the results, to ensure that any variations shook themselves out during burn-in.

I do believe there is a case to be made for custom iterators (and, since writing this article, I've also published an article examining the usage of custom iterators to power Lazy Evaluated Loops) ... so let's test that vanilla implementation I was just discussing:

public class ListIterator implements System.Iterator<SObject> {
    private final List<SObject> records;
    private Integer index;

    public ListIterator(List<SObject> records) {
        this.records = records;
        this.index = 0;
    }

    public Boolean hasNext() {
        return this.index < this.records.size();
    }

    public SObject next() {
        if(this.hasNext() == false) {
            return null;
        }
        SObject record = this.records[this.index];
        this.index++;
        return record;
    }
}

//in LoopTests.cls
@isTest
static void it_should_test_custom_iterator_while_loop() {
    List<SObject> accountsToInsert = fillAccountList();
    Iterator<SObject> listIterator = new ListIterator(accountsToInsert);

    while(listIterator.hasNext()) {
        setNameToRandomValue(;
    }
}


And the results (let me just take a deep breath and let out the frustration following the use of the "Iterator" syntax. It's like Salesforce wants to throw it back in our faces by saying: "look! Generics! Just not for you!!"):

$ yarn test LoopTests*
yarn run v1.22.0
$ dmc test LoopTests*
[dmc] using org: apex-mocks (default)
[dmc] * src/classes/LoopTests.cls
[dmc] ===> LoopTests test results <===
+[dmc] [pass] LoopTests: it_should_establish_baseline_using_while_loop, time: 0.391s
+[dmc] [pass] LoopTests: it_should_test_custom_iterator_while_loop, time: 1.32s
+[dmc] [pass] LoopTests: it_should_test_fake_basic_for_loop_insert, time: 2.189s
+[dmc] [pass] LoopTests: it_should_test_fake_syntax_sugar_for_loop_insert, time: 2.404s
+[dmc] [pass] LoopTests: it_should_test_fake_while_loop_insert, time: 1.65s
-[err] [fail] LoopTests: it_should_test_iterator_while_loop_insert =>
-System.LimitException: Apex CPU time limit exceeded =>
-Class.LoopTests.setNameToRandomValue: line 76, column 1
-Class.LoopTests.it_should_test_iterator_while_loop_insert: line 49, column 1, time: 16.205s
[dmc] ===> Number of tests run: 6 <===
[dmc] ===> Total test time: 24.15900s <===
[err] Failed -> 1 test failures
[dmc] [NOT OK]

That's much more in line with what I would expect. Which leads me to suspect that caching the iterator will help the basic implementation as well:


@isTest
static void it_should_test_iterator_while_loop_insert() {
    List<SObject> accountsToInsert = fillAccountList();
    Iterator<SObject> accountIterator = accountsToInsert.iterator();

    while(accountIterator.hasNext()) {
        setNameToRandomValue(;
    }
}


$ yarn test LoopTests*
yarn run v1.22.0
$ dmc test LoopTests*
[dmc] using org: apex-mocks (default)
[dmc] * src/classes/LoopTests.cls
[dmc] ===> LoopTests test results <===
+[dmc] [pass] LoopTests: it_should_establish_baseline_using_while_loop, time: 0.388s
+[dmc] [pass] LoopTests: it_should_test_custom_iterator_while_loop, time: 1.303s
+[dmc] [pass] LoopTests: it_should_test_fake_basic_for_loop_insert, time: 1.773s
+[dmc] [pass] LoopTests: it_should_test_fake_syntax_sugar_for_loop_insert, time: 2.404s
+[dmc] [pass] LoopTests: it_should_test_fake_while_loop_insert, time: 1.633s
+[dmc] [pass] LoopTests: it_should_test_iterator_while_loop_insert, time: 0.791s
[dmc] ===> Number of tests run: 6 <===
[dmc] ===> Total test time: 8.29200s <===
[dmc] [OK]

Mystery solved! Considering that iterators are just decorating the basic while loop, it makes sense that they would closely follow it in terms of performance.

For mission-critical code that demands low latency, you should definitely consider using a while loop, or at the very least the built-in iterator on the Salesforce List class.

Exceptions in Apex

Let's talk about exceptions. When it comes to performance, building exceptions is an allegedly costly operation. Maybe this comes as news to you (again, I would recommend a simple Google search for "java cost of throwing exceptions"), but it makes sense when you consider all the extra stuff that needs to happen when an exception is thrown:

- a new exception object has to be constructed
- the stack trace has to be gathered and attached to that object
- the runtime has to unwind the call stack until it finds a matching catch block (or exits)

Of course, particularly for dealing with HTTP related code, there's the temptation to write something clean ... something beautiful:


@RestResource(urlMapping='/example/*') //urlMapping value is illustrative
global class HttpService {
    global class SalesforceResponse {
        global SalesforceResponse() {
            this.Success = true;
            this.IdsUpdated = new List<Id>();
        }

        public Boolean Success { get; set; }
        public List<Id> IdsUpdated { get; set; }
    }

    global class SalesforceRequest {
        List<Id> IdsToDoThingsWith { get; set; }
    }

    @HttpPost
    global static SalesforceResponse post(SalesforceRequest req) {
        SalesforceResponse res = new SalesforceResponse();
        try {
            //do something that will potentially fail here
            //with the Ids passed in
            if(someConditional != true) {
                throw new CalloutException('Meaningful fail message!');
            }
        } catch(Exception ex) {
            res.Success = false;
        }
        return res;
    }
}
Mmm. So clean. Single-return methods are so tasty. But are we leading ourselves astray with this pattern? Will it cost us valuable seconds to collect that Exception if our large data operation fails? As you know, there's only one way to find out ...

//classes/ExceptTesting.cls
@isTest
private class ExceptTesting {
    //salesforce has bizarre rules in place about
    //naming classes with the word Exception in them
    @isTest
    static void it_should_provide_baseline_testing_time() {}

    @isTest
    static void it_should_throw_exception() {
        throw new TestException();
    }

    @isTest
    static void it_should_catch_thrown_exception() {
        Exception ex;

        try {
            throw new TestException('Some message here');
        } catch(Exception exc) {
            ex = exc;
        }

        System.assertNotEquals(null, ex);
    }

    @isTest
    static void it_should_build_big_nested_stacktrace() {
        String exceptionMessage = 'hi'.repeat(100000);
        Exception caughtEx;
        try {
            try {
                throw new TestException('First exception');
            } catch(Exception ex) {
                throw new TestException(ex.getMessage() + '\n' + exceptionMessage);
            }
        } catch(Exception ex) {
            caughtEx = ex;
        }

        System.assertNotEquals(null, caughtEx);
    }

    private class TestException extends Exception {}
}

For one thing, I was interested in seeing whether the uncaught exception would run faster than the caught one; for another, I wanted to see just how big the difference would be between the exception-building tests and the baseline for simply starting and running a test (which consistently hovers around 5-hundredths of a second):

$ yarn test ExceptTesting*
yarn run v1.22.0
$ dmc test ExceptTesting*
[dmc] using org: apex-mocks (default)
[dmc] * src/classes/ExceptTesting.cls
[dmc] ===> ExceptTesting test results <===
+[dmc] [pass] ExceptTesting: it_should_build_big_nested_stacktrace, time: 0.031s
+[dmc] [pass] ExceptTesting: it_should_catch_thrown_exception, time: 0.005s
+[dmc] [pass] ExceptTesting: it_should_provide_baseline_testing_time, time: 0.006s
-[err] [fail] ExceptTesting: it_should_throw_exception =>
-ExceptTesting.TestException: Script-thrown exception =>
-Class.ExceptTesting.it_should_throw_exception: line 10, column 1, time: 0.005s
[dmc] ===> Number of tests run: 4 <===
[dmc] ===> Total test time: 0.04700s <===
[err] Failed -> 1 test failures
[dmc] [NOT OK]

Admittedly, perhaps my methodology is simply busted, but even though building an extremely convoluted exception out of several other exceptions is some 6 times slower than simply not catching at all, the real cost of building safe structures and code paths in your application may win out. It all depends on how much latency matters to you. In FinTech, hundredths of a second matter; they're the difference between making money and losing it.

Being aware of the potential time tax you might be paying due to your application's code helps you avoid paying the tax when performance matters.

Back To Refactoring

Let's go back to our original mocking DML example:


@isTest
private class LoopTests {
    @isTest
    static void it_should_establish_baseline_using_while_loop() {
        List<SObject> accountsToInsert = fillAccountList();
    }

    @isTest
    static void it_should_test_fake_while_loop_insert() {
        List<SObject> accountsToInsert = fillAccountList();

        CrudMock.getMock().doInsert(accountsToInsert);

        System.assertEquals(LARGE_NUMBER, CrudMock.Inserted.size());
    }

    @isTest
    static void it_should_test_actual_while_loop_insert() {
        List<SObject> accountsToInsert = fillAccountList();

        //I would typically use the singleton Crud.doInsert method here
        //but ultimately they're the same operation
        insert accountsToInsert;

        List<Account> insertedAccs = [SELECT Id FROM Account];
        System.assertEquals(LARGE_NUMBER, insertedAccs.size());
    }

    private static Integer LARGE_NUMBER = 10000;

    private static List<SObject> fillAccountList() {
        Integer counter = 0;
        List<SObject> accountsToInsert = new List<SObject>();
        while(counter < LARGE_NUMBER) {
            Account acc = new Account(Name = 'Test' + counter);
            accountsToInsert.add(acc);
            counter++;
        }
        return accountsToInsert;
    }

    private static void setNameToRandomValue(SObject record) {
        record.put('Name', 'Something ' + Math.random().format());
    }
}


With LARGE_NUMBER set to 10,000, let's see what happens when comparing the actual cost of inserting records to faking their insert through the CrudMock:

$ yarn test LoopTests*
yarn run v1.22.0
$ dmc test LoopTests*
[dmc] using org: apex-mocks (default)
[dmc] * src/classes/LoopTests.cls
[dmc] ===> LoopTests test results <===
+[dmc] [pass] LoopTests: it_should_establish_baseline_using_while_loop, time: 0.4s
+[dmc] [pass] LoopTests: it_should_test_actual_while_loop_insert, time: 54.043s
+[dmc] [pass] LoopTests: it_should_test_fake_while_loop_insert, time: 1.541s
[dmc] ===> Number of tests run: 3 <===
[dmc] ===> Total test time: 55.98400s <===
[dmc] [OK]

............ welp. OK then. As you can see, there's a considerable amount of variation in what Salesforce allows when it comes to testing time. I ran this (and the other tests in LoopTests) several times to validate these results. Performing actual DML is absurdly expensive in terms of time.

What are some other operations that can lead to testing slowdown?

In Clean Code, there's an excellent chapter on boundaries in code: how to recognize them, and how to plan around them. Knowing that these are the weak spots when it comes to writing performant code (not only in your unit tests, but in your application code as well) can help you identify the spots that are most important to isolate in your code.

As an example, if you have tests for performing merges, you might consider mocking your merging code in places where merging is a side-effect of the code you have under test. Likewise, you should definitely try to minimize where leads are being converted in your test code.
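As a sketch of what isolating that kind of boundary might look like, consider a hypothetical `MergeService` (the name and shape are illustrative, not from the article's repo) that routes merges through one overridable method, so tests where merging is only a side-effect can stub it out:

```apex
//hypothetical sketch: putting merge behind an overridable seam so tests
//that don't care about merging can swap in a no-op version
public virtual class MergeService {
    public virtual void mergeRecords(SObject winner, List<SObject> losers) {
        for(SObject loser : losers) {
            Database.merge(winner, loser);
        }
    }
}

public class MergeServiceMock extends MergeService {
    public Integer CallCount = 0;

    public override void mergeRecords(SObject winner, List<SObject> losers) {
        //record that a merge was requested without paying the DML cost
        this.CallCount++;
    }
}
```

The production class performs the real merges; the mock simply counts invocations, which is all a test needs when merging isn't the behavior under test.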

These tips should be part of your everyday testing toolbelt — and they should occupy the same space in your mind as sacred rules like never performing SOQL queries or DML statements inside of loops.


It's my hope that this article helps you to think about the importance of your own time, and the time of your team, when it comes to writing tests. One of the reasons that I'm a firm proponent of TDD (and paired programming!) is that it allows you (and your team, if you have one) to observe the positive effects of a test-first mentality: when you can run your tests often, and they run quickly, you feel empowered to move quickly in your codebase(s). You also get to see patterns develop organically over time; instead of trying to force yourself to be the perfect architect at all times, you can get straight into the weeds prior to taking out your refactoring tools.

This last point is particularly pertinent for the perfectionists. I've seen many talented developers waylay themselves, lost in thought over the perfect class setup and the DRY-est methods. Simply getting down to business gets your creative juices flowing, allows you to recognize patterns as they occur, and lets you clean the code up as you go. Over time, of course, you build the muscle-memory necessary to identify paradigms before code is written; once you've built one publicly facing API, for example, you know what goes into scaffolding the structure and can remember the gotchas when it comes time to build the next one.

If you're looking to dig into the code a little bit more than what was exhibited here, I would encourage you to check out the performant-tests branch on my Apex Mocks repo. Thanks for taking this testing ride with me — till next time!

The original version of Writing Performant Apex Tests can be read on my blog.
