
setTimeout & Implementing Delays

Setting delays programmatically within Apex represents a unique challenge. We don't have access to the current thread of execution, nor do we have any kind of higher-level delay function. That's frequently fine for internal usage, where the last thing you'd want is for your compiled code to be slow, but it's not always the case when interacting with external APIs. It's common for APIs to be "rate-limited" — exceed the number of requests you're allowed per second/minute and you can expect to be penalized. It's also common for that "penalty" to be communicated via an HTTP response carrying instructions about how long you should back off for. But how can we safely implement delays, or something like JavaScript's setTimeout, in Apex without chewing through our precious CPU time limits?
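(As an aside, that back-off instruction frequently arrives as something like a Retry-After header on a 429 response. Here's a hedged sketch of reading it; the status code, header name, and fallback value are common conventions rather than universal, so your API's documentation is the source of truth.)

```java
public class RateLimitHelper {
  //a sketch: reading a back-off period from a rate-limited response.
  //the 429 status code and "Retry-After" header are assumptions based
  //on common API conventions, not a requirement of any specific API
  public static Integer getBackoffSeconds(HttpResponse res) {
    if (res.getStatusCode() == 429) {
      String retryAfter = res.getHeader('Retry-After');
      //fall back to a conservative default when the header is absent
      return String.isNotBlank(retryAfter) ? Integer.valueOf(retryAfter) : 10;
    }
    return 0;
  }
}
```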

This article is the result of a series of discussions on the topic of rate-limiting APIs that took place on the SFXD Discord. The question — how to implement a delay while working with Batch/Queueable Apex interacting with a 3rd party API? — is actually two separate issues:

  1. keeping track of which records have already been processed between asynchronous runs
  2. implementing the delay itself without burning through CPU time limits

A Short Interlude: Using The Platform Cache

Short-term, transient (but persistent across transactions) storage of records can be facilitated by taking advantage of the Platform Cache. This obviates the need to create another custom object (and the concomitant storage costs / cleanup requirements that would entail) while processing records; it also reduces the overall strain on our heap limits by lowering the surface area of our class-level storage (crucial when working with large lists).

Working on a problem indirectly, by working on another (storing application state, in this case pertaining to whether or not specific records have been processed), is a common paradigm in software engineering, and in many instances represents the intersection between computer science and mathematics ("I need a search algorithm that processes input in O(n) time for a fancy typeahead").

The Cache.CacheBuilder Implementation

As with all approaches, there are pros and cons to taking advantage of Salesforce's Platform Cache. Caching is famously one of the hardest things to get right in an application. One of the sections in the developer documentation for the Platform Cache offers up a delicious morsel for those looking to avoid the classical difficulty of handling cache misses:

> A Platform Cache best practice is to ensure that your Apex code handles cache misses by testing for cache requests that return null. You can write this code yourself. Or, you can use the Cache.CacheBuilder interface, which makes it easy to safely store and retrieve values to a session or org cache.

It's frequently a good idea to opt into these Salesforce offerings, but not at the expense of performance (or time). I want to write some tests to see how the Platform Cache performs when caching large numbers of records, to ensure that relying on it won't prove a bad choice further down the line. First, the abstract implementation:

```java
/* not really necessary for the problem at hand,
but useful when using the cache for mission-critical data
where a change to the underlying info necessitates
clearing the cache. Especially when the change
is due to a trigger, this allows us to only
expose the interface to the caller */
public interface ICachedRepo {
  void clearCache();
}

public abstract class AbstractCacheRepo implements Cache.CacheBuilder, ICachedRepo {
  @testVisible private static Object stubValues;

  public Object doLoad(String requiredBySalesforce) {
    if (stubValues != null) {
      return stubValues;
    }
    return this.populateCache();
  }

  protected abstract String getCacheKey();

  protected Object getFromCache() {
    return Cache.Org.get(
      this.getCacheBuilder(),
      this.getCacheKey()
    );
  }

  public void clearCache() {
    Cache.Org.remove(
      this.getCacheBuilder(),
      this.getCacheKey()
    );
  }

  public void updateCache(Object cachedItem) {
    Cache.Org.put(
      this.getCacheKey(),
      cachedItem
    );
  }

  //this is whatever you're putting into the cache
  protected abstract Object populateCache();

  //only virtual because it allows inner classes
  //in tests to override
  protected virtual Type getCacheBuilder() {
    //the well-known hack for extracting the name
    //of the current class at runtime
    String className = String.valueOf(this).split(':')[0];
    return Type.forName(className);
  }
}
```


Things of particular interest in the abstract implementation:

- the @testVisible stubValues member, which lets tests bypass the cache's loading mechanism entirely
- doLoad, the method required by the Cache.CacheBuilder interface, which delegates to the abstract populateCache method on cache misses
- getCacheBuilder, which relies on the well-known String.valueOf(this) hack to extract the name of the current class at runtime

Testing Cache.CacheBuilder & The AbstractCacheRepo

Right now, I'm more concerned with validating the performance of the cache (which should be great; that's the whole point of caching) than with showing off the possibilities represented by the AbstractCacheRepo. When I write abstractions, I'm looking to consolidate behavior, showcase intent, and communicate platform intricacies (when necessary) to prevent unnecessary gotchas.

Assuming you have a scheduled job that runs, at a minimum, every 15 minutes (which is as frequent an interval as is possible on the SFDC platform) to check for changed records prior to calling out, you probably won't have the volume of records indicated here. Still, whenever possible, I want tests to use the maximum number of SObjects (with regard to DML limits) to showcase what the worst case will be in terms of latency. Comparing the use of the cache (which enables the usage of the crucial wrapper object to track processed records) to simply using SOQL will be key:


```java
@isTest
private class PlatformCacheTests {
  @TestSetup
  static void setup() {
    List<Account> accounts = new List<Account>();
    for (Integer index = 0; index < 9999; index++) {
      accounts.add(new Account(Name = 'Test' + index));
    }
    insert accounts;
  }

  @isTest
  static void it_should_measure_uncached_selection_time() {
    Map<Id, Account> accounts = new Map<Id, Account>([SELECT Id FROM Account]);

    //we want to establish a baseline iteration time, as well
    for (Id accountId : accounts.keySet()) {
      System.assertEquals(true, accounts.containsKey(accountId));
    }
  }

  @isTest
  static void it_should_measure_cached_selection_time() {
    CacheTest cacher = new CacheTest();
    Map<Id, SObjectWrapper> wrapperMap = cacher.getWrapperMap();

    List<Account> accounts = [SELECT Id FROM Account];
    for (Account acc : accounts) {
      System.assertEquals(true, wrapperMap.containsKey(acc.Id));
    }
  }

  private class CacheTest extends AbstractCacheRepo {
    public Map<Id, SObjectWrapper> getWrapperMap() {
      return (Map<Id, SObjectWrapper>) this.getFromCache();
    }

    protected override String getCacheKey() {
      return 'CacheTest';
    }

    protected override Object populateCache() {
      Map<Id, Account> accountMap = new Map<Id, Account>([SELECT Id FROM Account]);
      Map<Id, SObjectWrapper> wrapperMap = new Map<Id, SObjectWrapper>();
      for (Id accountId : accountMap.keySet()) {
        wrapperMap.put(accountId, new SObjectWrapper(accountMap.get(accountId)));
      }
      return wrapperMap;
    }

    protected override Type getCacheBuilder() { return CacheTest.class; }
  }

  public class SObjectWrapper {
    public SObjectWrapper(SObject record) {
      this.Record = record;
      this.IsProcessed = false;
    }

    public SObject Record { get; private set; }
    public Boolean IsProcessed { get; set; }
  }
}
```


And the output:

| Test Name | Time |
| --- | --- |
| it-should-measure-cached-selection-time | .36s |
| it-should-measure-uncached-selection-time | 1.12s |

In this case, I'm not necessarily concerned with the overall increase in time, particularly over a large number of records. Since we know we're going to be operating async in the context of the larger problem, adding ~700ms to the overall processing time (with a large number of records) seems worth it when the flipside would be the creation and maintenance of a custom object.

You'll note that the tests aren't necessarily concerned with testing the AbstractCacheRepo — rather, they are exploring the cost of the feature set you buy into when making use of the Platform Cache. There are many different ways to approach testing, and this is closer to Domain Driven Design (DDD) than it is to my norm (as a big proponent of TDD and unit testing, I prefer that my test classes not differ substantially in name from the class under test). I don't espouse this pattern for most production-level code, but for exploring patterns in search of a meaningful (and performant) abstraction, writing tests to explore a platform (or API)'s capabilities is a wonderful way to learn.

On the subject of creating meaningful abstractions:


While writing this article, I was momentarily puzzled by an obscure stacktrace when running PlatformCacheTests:

```
System.NullPointerException: Attempt to de-reference a null object =>
Class.cache.Partition.validateCacheBuilder: line 167, column 1
Class.cache.Org.get: line 57, column 1
Class.AbstractCacheRepo.getFromCache: line 16, column 1
Class.PlatformCacheTests.CacheTest.getWrapperMap: line 32, column 1
Class.PlatformCacheTests.it_should_measure_cached_selection_time: line 25, column 1
```

Let's just ... ignore ... the casing on that "cache" class ... and focus on the null object. Since I was using a Developer Edition org when initially writing the test, my initial thought (involving smacking my own forehead) was that I hadn't enabled the Platform Cache feature. After enabling it, however, the error persisted, so I logged into another sandbox that was already using a version of the AbstractCacheRepo shown. The same error appeared there when running the test.

It was at this point that I remembered the issues I'd run into when writing the Extendable APIs article: Type.forName cannot "see" inner test classes without them both being publicly accessible and having their outer class prefix attached. Marking the getCacheBuilder method with the virtual keyword allows inner classes to override the implementation without forcing the inner class to be public. I like this better than the @testVisible static variable outlined in Extendable APIs (and below), but there's room for both approaches. Remember — "a foolish consistency is the hobgoblin of little minds."

```java
//AbstractCacheRepo.cls - the old way
@testVisible private static String classPrefix = '';

private Type getCacheBuilder() {
  String className = String.valueOf(this).split(':')[0];
  return Type.forName(classPrefix + className);
}

//PlatformCacheTests.cls - the old way
static void it_should_measure_cached_selection_time() {
  AbstractCacheRepo.classPrefix = 'PlatformCacheTests.';
  //...
}

//AbstractCacheRepo.cls - the new way
//virtual to allow inner classes to override
protected virtual Type getCacheBuilder() {
  String className = String.valueOf(this).split(':')[0];
  return Type.forName(className);
}

//PlatformCacheTests.cls - the new way
private class CacheTest extends AbstractCacheRepo {
  //...
  protected override Type getCacheBuilder() { return CacheTest.class; }
}
```


Back On Track - Implementing Delays

It was at this point, after exploring the Platform Cache as a means to temporarily store state related to the records being processed, that I took a left turn on continuity and came up with a more straightforward solution. This is the value in prototyping — I didn't think of this approach initially, but investigating how the Platform Cache could store which records had been processed as part of a batch put me in the right mindset to arrive at a better solution.

When Is A Batchable Not A Batchable?

You may remember that we explored Batchables and Queueables when talking about the DataProcessor in Batchable & Queueable Apex. It shouldn't come as a surprise, then, that combining these two platform features into one class can get you out of some tight corners. After working through the Platform Cache example, I realized it might be a red herring for this particular feature once I thought about how the QueryLocator/Iterable is returned by Batchable Apex's start method. If the data is already packaged into convenient-to-the-heap sizes, there's no harm in storing the requisite data within the object's memory.

So ... when is a Batchable not a Batchable? Perhaps the joke seems force(.com)d, but I'd say: when it's also a Queueable.

Implementing A Timeout / Delay In Apex

For now, the delay implementation will be baked into the processing class itself. While this falls short of true unit testing, it would be needlessly verbose to break the timer out into a separate class if it will only be used here. If a time came when it was necessary to implement another time-based solution in a separate area of the codebase, I would certainly break out the interface and implementations I'm about to show you:

```java
public class SetIntervalProcessor implements
  Database.Batchable<SObject>, Database.AllowsCallouts, System.Queueable {
  public static final Integer CALLOUT_LIMIT = 5;

  //Interval section, constrained to this class
  //till cases for re-use present themselves
  //visibility level is public b/c of the tests
  public interface Interval {
    Boolean hasElapsed();
  }

  public class FirstInterval implements Interval {
    public Boolean hasElapsed() {
      //on the first run, we simply process
      //as many requests as necessary
      return true;
    }
  }

  public class TenSecondDelay implements Interval {
    private final Datetime initialTime;
    public TenSecondDelay() {
      this.initialTime =;
    }

    public Boolean hasElapsed() {
      return this.initialTime.addSeconds(10) <=;
    }
  }

  //etc...
```


If you wanted to meld the testing for the delay into the overall tests for this class (so, not true unit testing), more power to you. I'll do both (testing the intervals, plus verifying that the full(y) set interval has elapsed when the full class is run), just to show how that will look:


```java
@isTest
private class SetIntervalProcessorTests {
  @isTest
  static void it_should_always_return_true_for_first_interval() {
    Integer counter = 0;
    SetIntervalProcessor.Interval interval
      = new SetIntervalProcessor.FirstInterval();
    while (true) {
      if (interval.hasElapsed() == false) {
        counter++;
      } else {
        break;
      }
    }

    System.assertEquals(0, counter);
  }

  @isTest
  static void it_should_wait_ten_seconds_for_ten_second_delay_interval() {
    Datetime nowish =;
    SetIntervalProcessor.Interval tenSecDelay =
      new SetIntervalProcessor.TenSecondDelay();
    while (tenSecDelay.hasElapsed() == false) {
      //wait
    }
    System.assertEquals(true, nowish.addSeconds(10) <=;
  }
```
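As an aside, busy-waiting for a real ten seconds makes for expensive tests. One hypothetical mitigation (the TimeProvider name and its @testVisible override are my own invention here, not something from the article's repo) is to route all `` calls through a stubbable time source:

```java
public class TimeProvider {
  //tests can assign this to fast-forward time without actually waiting
  @testVisible private static Datetime currentTimeOverride;

  public static Datetime currentTime() {
    return currentTimeOverride != null ? currentTimeOverride :;
  }
}
```

With `` swapped for `TimeProvider.currentTime()` inside TenSecondDelay, a test could construct the delay, set `TimeProvider.currentTimeOverride =` and assert `hasElapsed()` returns true immediately.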


If the delay truly needed to be this long in production, I would probably introduce another classic stub of mine, substituting the calls to with another class that allowed tests (and only tests) to override the current time; this would prevent having two or more tests guaranteed to take 10 seconds each. We can't afford tests that expensive in terms of time! But back to the matter at hand — the rest of the SetIntervalProcessor implementation:


```java
  private final Interval interval;
  private List<SObject> records;
  @testVisible static Integer runCounter = 0;

  public SetIntervalProcessor() {
    this.interval = new FirstInterval();
  }

  //only for queueables
  private SetIntervalProcessor(Interval interval, List<SObject> records) {
    this.interval = interval;
    this.records = records;
  }

  public List<SObject> start(Database.BatchableContext context) {
    //your query here ...
    return [SELECT Id, Name FROM Account];
  }

  public void execute(Database.BatchableContext context, List<SObject> records) {
    this.records = records;
    this.innerExecute();
  }

  public void execute(System.QueueableContext context) {
    this.innerExecute();
  }

  public void finish(Database.BatchableContext context) {
    //..your finish logic
  }

  private void innerExecute() {
    while (this.interval.hasElapsed() == false) {
      //wait it out
    }
    Integer calloutCount = 0;
    for (Integer index = this.records.size() - 1;
      index >= 0
      //CALLOUT_LIMIT shown earlier (5)
      && calloutCount < CALLOUT_LIMIT
      && this.interval.hasElapsed();
      index--) {
      //we have to iterate backwards
      //to safely remove items from the list
      SObject record = records[index];
      this.callout(record);
      calloutCount++;
      this.records.remove(index);
    }
    if (this.shouldRunAgain()) {
      runCounter++;
      System.enqueueJob(new SetIntervalProcessor(
        new TenSecondDelay(),
        this.records
      ));
    }
  }

  private Boolean shouldRunAgain() {
    return this.records.size() > 0 &&
      Limits.getQueueableJobs() <= Limits.getLimitQueueableJobs();
  }

  private void callout(SObject record) {
    //whatever your callout logic is
    Http http = new Http();
    HttpRequest req = new HttpRequest();
    req.setEndpoint('');
    req.setBody(JSON.serialize(record));
    http.send(req);
  }
}
```

Note that while the implementation currently enforces this class being kicked off as a batch first, that's not a hard requirement; so long as you were confident that the number of selected records was going to be less than the query row limit, you could easily check whether this.records was null in the execute method and set that variable by imperatively calling the start method from the Batchable implementation. I like the Queueable constructor being private, however — sometimes that extra safety is worth a slight sacrifice in flexibility.

This post doesn't touch on handling callout limits — the example given is one where you would never exceed the maximum amount due to the aforementioned API rate limits.
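If you ever did need that guard, one hypothetical tweak (not part of the shown implementation) would be to have shouldRunAgain also consult the Limits class before continuing:

```java
private Boolean shouldRunAgain() {
  return this.records.size() > 0 &&
    Limits.getQueueableJobs() <= Limits.getLimitQueueableJobs() &&
    //bail out before hitting the per-transaction callout ceiling
    Limits.getCallouts() < Limits.getLimitCallouts();
}
```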

If you want to go for a really clean, Object-Oriented re-queueing, the interval instance variable could have the final keyword removed:



```java
private Interval interval;

private void innerExecute() {
  //..
  if (this.shouldRunAgain()) {
    runCounter++;
    this.interval = new TenSecondDelay();
    System.enqueueJob(this);
  }
}
```


I really like to pass this whenever possible with Queueables, but I also like the compiler ensuring that variables are only assigned to once. In this particular example, you can choose the pattern that more closely aligns with your values and style guide.

The rest of the tests focus on the Batchable/Queueable sections working as intended:


```java
  @isTest
  static void it_should_perform_as_batch_for_low_record_sizes() {
    Test.setMock(HttpCalloutMock.class, new MockHttpResponse(200));

    Test.startTest();
    Database.executeBatch(new SetIntervalProcessor());
    Test.stopTest();

    Account acc = (Account) JSON.deserialize(lastReqBody, Account.class);
    //remember, we iterate through the list in REVERSE!
    System.assertEquals('0', acc.Name);
  }

  @isTest
  static void it_should_perform_as_queueable_and_wait_ten_seconds() {
    Test.setMock(HttpCalloutMock.class, new MockHttpResponse(200));

    insert new Account(Name = '5');
    Datetime nowish =;

    Test.startTest();
    Database.executeBatch(new SetIntervalProcessor());
    Test.stopTest();

    Account acc = (Account) JSON.deserialize(lastReqBody, Account.class);
    System.assertEquals('0', acc.Name);
    //at least ten seconds should have elapsed
    System.assertEquals(true, nowish.addSeconds(10) <=;
  }

  @isTest
  static void it_should_try_to_requeue_for_larger_sizes() {
    //I added the string concatenation while debugging
    //to ensure everything was working correctly
    innerSetup('second ');

    insert new Account(Name = '9');

    Test.setMock(HttpCalloutMock.class, new MockHttpResponse(200));

    Exception ex;
    try {
      Test.startTest();
      Database.executeBatch(new SetIntervalProcessor());
      Test.stopTest();
    } catch(Exception e) {
      ex = e;
    }

    //Tests can only run a queueable once
    //verify the correct error has been thrown
    //and that the processor WOULD have requeued
    System.assertEquals('Maximum stack depth has been reached.', ex.getMessage());
    System.assertEquals(2, SetIntervalProcessor.runCounter);
  }

  static String lastReqBody;

  private class MockHttpResponse implements HttpCalloutMock {
    private final Integer code;

    public MockHttpResponse(Integer code) {
      this.code = code;
    }

    public HTTPResponse respond(HTTPRequest req) {
      HttpResponse res = new HttpResponse();
      res.setStatusCode(this.code);
      lastReqBody = req.getBody();
      return res;
    }
  }
}
```


I won't spend much time on the MockHttpResponse class. Typically, this is not an inner class, but a shared one for many different tests. I've included the full body of it here simply to get the tests passing.

I don't love using things like the runCounter — and I don't love Salesforce's limit on testing Queueables, either. The ability to make Queueables recursive is the reason they are as effective a tool as they are; preventing people from fully testing recursive Queueables is painful.

The tests take us through the three codepaths that are possible:

  1. There aren't enough records to necessitate a further run. The Batch test ensures that the callouts are made. Yes, only the happy path for the HTTP requests is shown; error handling for HTTP requests is left as an exercise for the reader.
  2. There are enough records to enqueue a second job, but not enough to enqueue again.
  3. Not only are there enough records to enqueue, but it will take more than one iteration to get through all of the records.

Note, as well, that we are also testing that the job stops properly and that the list removal works as expected.

But My Delay Needs To Be Longer!

OK, you've made it this far — but you're legitimately worried about running over CPU limits, due to limits imposed by a foreign API that are more on the order of per-30-second/per-60-second limitations. In order to not run afoul of the governor limits, you could utilize the Platform Cache in conjunction with bits and pieces of what I've already shown.

Alternatively, you could simply modify the beginning of the shown innerExecute method:

```java
private void innerExecute() {
  if (this.interval.hasElapsed() == false && this.shouldRunAgain()) {
    System.enqueueJob(this);
    return;
  }
  Integer calloutCount = 0;
  //... etc
}
```


There is no advertised limit on the number of Queueable jobs running at any given time, though people have certainly speculated that Queueables and Batches share the same limit for concurrently running jobs (100). This approach could quickly burn through your limit for asynchronous Apex method executions per 24-hour period, however, which makes me loath to recommend it. Still, I present it to you as an example of what is possible; in this case, it also serves as a warning about the downsides of a promising possibility.

To avoid abusing both the async Apex method execution limit as well as the per-transaction CPU time limit, juggling your jobs between the Platform Cache and the previously shown SetIntervalProcessor method should suffice.
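As a rough illustration of that juggling act (purely a hypothetical sketch with invented names, the cache key and class included, rather than anything from the article's repo), you could park the unprocessed records in the org cache and schedule a later pick-up:

```java
public class LongDelayRequeuer implements System.Schedulable {
  //both the cache key and the minute-level scheduling granularity
  //here are assumptions for illustration
  private static final String CACHE_KEY = 'remainingRecords';

  public static void parkAndSchedule(List<SObject> remaining, Integer minutesFromNow) {
    Cache.Org.put(CACHE_KEY, remaining);
    Datetime fireAt =;
    //cron format: seconds minutes hours day month dayOfWeek year
    String cron = '0 ' + fireAt.minute() + ' ' + fireAt.hour() + ' ' +
      + ' ' + fireAt.month() + ' ? ' + fireAt.year();
    System.schedule('Delayed requeue ' + fireAt.getTime(), cron, new LongDelayRequeuer());
  }

  public void execute(System.SchedulableContext context) {
    List<SObject> remaining = (List<SObject>) Cache.Org.get(CACHE_KEY);
    if (remaining != null && remaining.isEmpty() == false) {
      //re-enter your processing pipeline here, e.g. re-launch the
      //batch/queueable machinery against just these records
    }
  }
}
```

This keeps the long wait entirely off the CPU clock, at the cost of a scheduled-job slot and the cache space for the parked records.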

setTimeout/Interval Closing Thoughts

In many ways, I'm glad that the Platform Cache experiment ended up being a tangent on the way towards an elegant solution for delays in Apex. Many dev stories involve heading down mental and written roads that end up being dead ends — showcasing that with something that could be of use to others in their own Platform Cache journeys is just an added benefit.

If the API rate limit was something truly heinous (and keeping in mind that at present, async processes in Apex get a maximum of 60 seconds to work with), having the Platform Cache (among other things) up our sleeve of options is a nice bonus to this exercise.

I've pushed the code shown in this post to a branch in my Apex Mocks repo for your perusal. Note that some of the styling (in terms of linebreaks) is presented as it would be here, in order to preserve mobile-friendly display of the code, and does not represent how the code would look normally.

As always — thanks for following along, and I hope you enjoyed this entry in the Joys Of Apex!


setTimeout & Implementing Delays Postscript

I originally announced in the Picklist Validation post way back in April that I was featured on the SalesforceWay podcast with Xi Xiao, talking about DML mocking. The podcast episode was recorded at the end of March and released last week (so, the first week in August, 2021). Xi and I had a good talk discussing something I've covered at length in Mocking DML and many other posts here! It's a fun, quick chat about a truly interesting topic — let me know your thoughts if you give the podcast a listen!

This is a mirrored post; you can find the original setTimeout & Implementing Delays on my site!
