Continuous Integration With SFDX

Updating your build pipeline to use SFDX is something that is coming up with greater and greater frequency. Many companies are still using the old metadata format for Salesforce, and perhaps the full advantage of scratch orgs and the new metadata format is still a hard sell. It's understandable. Using ANT, jsforce, or any of a dozen other legacy options for performing production deploys makes people uncomfortable; also understandable. One has only to glance at the NPM warnings when installing jsforce, for example, to have second thoughts about that particular approach. Yet for a while, wrappers around the Salesforce metadata and tooling APIs were in vogue specifically because they abstracted away the unpleasantness of working with these APIs (well, the relative unpleasantness — one has only to examine the Marketing Cloud APIs to gain a fuller appreciation for the word unpleasant).

I thought I'd do a little write-up about my experience porting an older project over to the use of SFDX while still getting two big advantages: automated Apex class deletion, and a working continuous integration pipeline.

Automating Apex Class Deletion

Some time ago, my mentor and I wrote an npm package, apex-class-cleanup, which made use of jsforce to dynamically configure the destructiveChangesPre.xml file that Salesforce requires be included with a metadata deployment in order to delete Apex classes prior to the tests being run (crucial for avoiding conflicts). At the time, we used Handlebars to template out this XML file ... but it's a pretty simple file, and now with SFDX we have the use of the sfdx force:mdapi:listmetadata command, which removes the jsforce requirement entirely. Let's see if we can get rid of that Handlebars dependency as well ...

When you run sfdx force:mdapi:listmetadata, you also have the option of specifying a destination file for the output, as well as the format of the output. Using sfdx force:mdapi:listmetadata --json -m ApexClass -f ./existingApexClasses.json, we can grab just the Apex classes in our Salesforce org and write them to an existingApexClasses.json file in the current directory. We don't want this file to be tracked in source control; it should be added to our .gitignore file. Personally, I'd also like to clean up the file post-deployment so it doesn't stick around in my working directory, or that of my build runner.
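For reference, that file is a JSON array of metadata descriptions. Trimmed down to the fields that matter here (fullName and namespacePrefix; the real entries carry quite a few more, such as id and lastModifiedDate), it looks roughly like this, with illustrative class names:

[
  { "fullName": "AccountHandler", "namespacePrefix": "" },
  { "fullName": "SomePackagedClass", "namespacePrefix": "somePkg" }
]

Both existingApexClasses.json and the soon-to-be-generated destructiveChangesPre.xml belong in .gitignore.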

So ... we need to get the existing Apex classes, and after the deploy has finished, perform some cleanup. This is a perfect use case for the pre/post hooks baked into NPM scripts: by naming a script task deploy, we can have another task run before it simply by naming that task predeploy, and have a task run after the deploy task has finished by naming it postdeploy.

Our package.json file ends up looking like:

1link"predeploy": "sfdx force:mdapi:listmetadata — json -m ApexClass -f ./existingApexClasses.json",

2link//the -w flag is minutes for the SFDX runner to wait for your deployment. You want this value to be higher

3link//than any possible increment your deploy could take ...

4link//the -d flag is wherever your unconverted source directory is.

5link//traditionally this was "src", but your mileage may vary

6link"deploy": "node ./createDestructiveChanges.js && sfdx force:mdapi:deploy -d \"src\" -w 10",

7link"postdeploy": "node ./cleanup.js"

So what do those createDestructiveChanges and cleanup files look like?

Let's look at how to create the XML file first:

./createDestructiveChanges.js
const existingClasses = require("./existingApexClasses.json");
const fs = require("fs");

const options = {
  //you could also use dotenv or something similar
  //to inject these two values by process.env values
  apiVersion: "48.0",
  classesFolder: "./src/classes/",
};

const serverFileStats = existingClasses.map(
  (existingClass) =>
    new Promise((resolve) => {
      if (
        existingClass.namespacePrefix &&
        existingClass.namespacePrefix !== ""
      ) {
        //classes from installed packages can't be deleted here, so treat them as present
        return resolve({ stat: true, existingClass });
      }
      return fs.stat(
        options.classesFolder + existingClass.fullName + ".cls",
        (err, stats) => {
          if (err) {
            return resolve({ stat: false, existingClass });
          }
          return resolve({ stat: stats, existingClass });
        }
      );
    })
);

const missingClassesPromise = () =>
  Promise.all(serverFileStats)
    .then((statAndElements) =>
      statAndElements.filter((el) => !el.stat).map((el) => el.existingClass)
    )
    .then((extraClasses) => {
      if (extraClasses.length === 0) {
        console.log("No extra classes found.");
      }
      return { extraClasses };
    });

missingClassesPromise().then(({ extraClasses }) => {
  //I'm not saying this is the best thing in the world.
  //It's not. But the format of the XML file hasn't changed
  //and it's pretty simple to construct.
  const header = '<?xml version="1.0" encoding="UTF-8"?>';
  const packageStart =
    '<Package xmlns="http://soap.sforce.com/2006/04/metadata">';
  const typesStart = "<types>";
  const name = "<name>ApexClass</name>";
  const classes = extraClasses.map(
    (extraClass) => `<members>${extraClass.fullName}</members>`
  );
  const typesEnd = "</types>";
  const version = `<version>${options.apiVersion}</version>`;
  const packageEnd = "</Package>";
  const xmlPieces = [
    header,
    packageStart,
    typesStart,
    name,
    //spread the members array so each class lands on its own line
    ...classes,
    typesEnd,
    version,
    packageEnd,
  ];

  const xmlContent = xmlPieces.join("\n");
  return fs.writeFileSync("src/destructiveChangesPre.xml", xmlContent);
});

So: createDestructiveChanges relies on the output from our predeploy step, comparing the existing classes in our current git branch to the ones that were returned from our Salesforce org. Any class that exists in the org but not on disk gets added to the destructiveChangesPre.xml file that Salesforce requires. The one annoyance here is that sfdx force:mdapi:listmetadata doesn't currently support an argument for getting only metadata from a specific namespace, so if you have installed packages on your org, their classes need to get filtered out first (which happens in the serverFileStats mapping, above). Also please note the important caveat that, again, this sort of thing is only going to work with the old org metadata structure.
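To make the output concrete: for an org that still contains a single class, say a hypothetical OldAccountHandler, that has been deleted from the branch, the script above writes out:

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
<types>
<name>ApexClass</name>
<members>OldAccountHandler</members>
</types>
<version>48.0</version>
</Package>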

One of the big advantages in moving to packaging through SFDX is that this class cleanup happens out of the box when upgrading packages.

The cleanup file doesn't have to be anything complicated:

./cleanup.js
const fs = require("fs");

try {
  fs.unlinkSync("./src/destructiveChangesPre.xml");
} catch {}

try {
  fs.unlinkSync("./existingApexClasses.json");
} catch {}

You could do the same in Bash/Powershell, as you please. It doesn't really matter. I have no preference between using node, rm -rf, or Remove-Item. Whatever floats your boat. The node implementation is shown primarily because we're talking about deploying through the use of a package.json file. The underlying steps to achieve this functionality remain the same, regardless of which shell / language you're using in your builds.
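In Bash, for instance, the entire cleanup is a one-liner (the -f flag keeps rm from complaining if a file is already gone):

rm -f ./src/destructiveChangesPre.xml ./existingApexClasses.json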

Continuous Integration With SFDX

So ... locally, we have SFDX authorized to the Salesforce orgs of our choice ... but in order to securely allow for continuous integration, it's crucial that we not expose our Salesforce login information except at the exact moment in our builds where we need to authorize SFDX.
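The secret in question is the SFDX Auth URL for your deployment user. Assuming the org has already been authorized locally, you can view it with the following (the alias is hypothetical); the "Sfdx Auth Url" row of that output is the value to stash in your build system, referred to as AUTH_TOKEN in the snippets below:

sfdx force:org:display -u yourProductionOrg --verbose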

In Node, at the moment, there's the popular sfdx-cli wrapper that allows SFDX commands to be run through NPM. You have two essential choices when setting up CI with SFDX. The first is to store your SFDX Auth URL as a protected environment variable in your build system and only write it out to a file at the moment it's needed:

./writeSalesforceAuth.js
const fs = require("fs");

fs.writeFileSync("salesforce-auth", process.env.AUTH_TOKEN);

Or ...

echo $AUTH_TOKEN > ./salesforce-auth

Or even perhaps:

$env:AUTH_TOKEN > salesforce-auth

Again, whatever makes you happy. Just remember to clean up that file at the end of your build! Our full build pipeline for production deploys thus becomes:

  1. Auth our build runner by calling sfdx force:auth:sfdxurl:store -f ./salesforce-auth -d as part of your deploy script (one possible wiring is sketched just after this list)
  2. Continue on with the deploy as shown in the package.json example from above
  3. Automate the cleanup of build artifacts with any kind of sensitive/transient information
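To make step 1 concrete, here's a minimal sketch of how the auth call could slot into the scripts from earlier; it assumes the writeSalesforceAuth.js file and AUTH_TOKEN variable shown above, and it's one possible wiring, not the only way to do it:

"predeploy": "node ./writeSalesforceAuth.js && sfdx force:auth:sfdxurl:store -f ./salesforce-auth -d && sfdx force:mdapi:listmetadata --json -m ApexClass -f ./existingApexClasses.json",
"deploy": "node ./createDestructiveChanges.js && sfdx force:mdapi:deploy -d \"src\" -w 10",
"postdeploy": "node ./cleanup.js"

Adding a third fs.unlinkSync call for ./salesforce-auth inside cleanup.js takes care of step 3.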

OR:

Commit an encrypted copy of your auth file to the repository. Essentially, this involves the use of a secret key known only to you/your team that functions as the cipher key for your SFDX Auth URL, which is then decrypted during your build.
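A minimal sketch of that approach using openssl, where salesforce-auth.enc and CIPHER_KEY are hypothetical names; the encryption happens once on your machine, the decryption during the build:

# locally, one time; commit only the .enc file
openssl enc -aes-256-cbc -salt -in salesforce-auth -out salesforce-auth.enc -pass env:CIPHER_KEY

# during the build, with CIPHER_KEY injected as a protected variable
openssl enc -aes-256-cbc -d -in salesforce-auth.enc -out salesforce-auth -pass env:CIPHER_KEY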

YMMV, as they say. I personally find it likely that a person with ill intent who has taken the time to infiltrate your build system almost certainly also has access to wherever your remote repository is hosted; it doesn't really matter that a secret key is needed to decrypt the file you've committed with the encrypted contents of your SFDX Auth URL ... they can just extract that information in one additional step.

You can certainly make the argument that if the place where your environment variables are securely stored gets compromised, and the attacker only gets the cipher key, you still have the chance, through password resets and cipher recreation, to protect your secrets. Again, it's a "once again removed" situation where I feel that one thing follows the other, and you're better off revoking your existing authorized user in that case.

Wrapping Up

Getting continuous integration up and running for Salesforce is important, regardless of whether you've fully converted over to using the new metadata format or not. Hopefully we can all agree on that. For me, the icing on the cake is not having to manually delete classes in the midst of big refactors. Whether you're using Bitbucket, GitLab, or GitHub, there are many options for setting up continuous integration; there are also plenty of existing integrations (CircleCI, CumulusCI, Jenkins, etc ...) that either have Salesforce / SFDX support, or outright specialize in setting up build pipelines for you.

Maybe you're curious as to what some of those underpinnings may look like when using GitHub Actions, GitLab Pipelines, etc ... if so, hopefully this post proved illuminating and helpful to you. Either way, thanks for sticking with me through this journey; not exactly our typical Joys Of Apex subject material, but an important tooling step and something that I thought worth mentioning. Till next time!

The original version of Continuous Integration With SFDX can be read on my blog.
