Salesforce Deployment for a large program – Retrospective Insight

I never imagined a deployment could take longer than a weekend until I was part of a project where it took far longer by any standard. These days, with the advent of CI/CD tools, deployment is much simpler. Even so, my experience was worth recording to get a perspective on ‘What could have been done better?’. A retrospective was very much called for, if only to avoid a repeat of such an ordeal.

When I sat down to write up my learnings from this painful exercise, it took me quite some time to recollect the plethora of issues that had cropped up, each of which added to the deployment time. While the individual issues were far too many to list here, this is an attempt to look at the whole exercise in a broader sense and identify the root causes behind the core problems.

Let me set the context for you!

Business Use Case

The customer wanted to implement two different custom applications on the Salesforce platform, sharing around 50% of the data model. The first application had already gone live, and the metadata/code for the second application needed to be merged with the first, starting from SIT.

Team structure: five Scrum teams of 20 members each, working on different portions/modules of the application that eventually needed to be stitched together.

Working in isolation without an end-to-end view of the solution created other kinds of challenges, but that’s for another day to document!

Here are the key learnings, put together by category:

1. Single source of truth: Different teams worked on different modules in different orgs. While there was a Version Control System (VCS), it wasn’t made available to the teams for the first three months. Checking code into the VCS, which should have happened right from the beginning, didn’t happen in that time, and when the VCS finally became available, the teams never paused to reconcile the changes made until then and move them into it. Not that you could blame them, given the deadlines they were already racing against to complete sprint deliverables.

Recommendation: Having a VCS available to the team upfront is a must. Define your source control mechanism at the start of the project and familiarize the team with using it as part of their day-to-day work. Don’t start sprint execution before your source control strategy is clear and established, even if it means some delay in project timelines. Better to handle the devil upfront than to let it snowball into a much larger problem.
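
As an illustration of what that “pause and reconcile” step could look like, here is a minimal sketch in Python. It is not what our project used; it assumes the legacy sfdx CLI, an authenticated org alias (dev-team1), a package.xml manifest, and a standard force-app source directory, all of which are hypothetical names for this example.

```python
import subprocess

# Hypothetical org alias and manifest path; adjust to your project.
ORG_ALIAS = "dev-team1"
MANIFEST = "manifest/package.xml"

def run(cmd):
    """Run a shell command and fail loudly if it errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Pull everything listed in the manifest from the org into the local project.
run(["sfdx", "force:source:retrieve", "--manifest", MANIFEST,
     "--targetusername", ORG_ALIAS])

# 2. Commit the retrieved metadata as the reconciled baseline in the VCS.
run(["git", "add", "force-app"])
run(["git", "commit", "-m",
     "Baseline: reconcile org metadata into VCS before further sprint work"])
```

The point is simply that the reconciliation is scriptable; once the baseline is in, every subsequent change should flow through the VCS rather than directly into the org.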

2. Process adherence: While the VCS wasn’t available, every team working on different components was expected to manually update a tracker called the ‘Component tracker’. This is far from the right solution, but it could have worked had the process of keeping it updated been followed, which wasn’t the case. There were huge gaps in keeping the component list current. In hindsight, the team leads could have tracked it more closely in the day-to-day calls, making it a regular activity to review the document and identify gaps upfront. Not doing so meant that three months down the line it was practically impossible for team members to remember all their components, and several deployment issues traced back to missing components.

Recommendation: There should be a way to verify that the process is actually being followed, beyond assigning names to it on paper; ideally the check is automated, as sketched below.
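
One way to take the manual element out of such a tracker is to reconcile it against what the org actually contains. The sketch below is a rough illustration only: it assumes the legacy sfdx CLI, an org alias dev-team1, and a one-entry-per-line tracker file named component_tracker.txt in a "Type:Name" format, all of which are hypothetical.

```python
import json
import subprocess

ORG_ALIAS = "dev-team1"                  # hypothetical org alias
TRACKER_FILE = "component_tracker.txt"   # hypothetical tracker: one "Type:Name" per line

def list_metadata(metadata_type):
    """Return the API names of all components of the given type in the org."""
    result = subprocess.run(
        ["sfdx", "force:mdapi:listmetadata", "-m", metadata_type,
         "--targetusername", ORG_ALIAS, "--json"],
        capture_output=True, text=True, check=True,
    )
    payload = json.loads(result.stdout).get("result", [])
    if isinstance(payload, dict):  # a single component comes back as a dict
        payload = [payload]
    return {item["fullName"] for item in payload}

with open(TRACKER_FILE) as fh:
    tracked = {line.strip() for line in fh if line.strip()}

for mtype in ["ApexClass", "ApexTrigger", "CustomObject"]:
    in_org = {f"{mtype}:{name}" for name in list_metadata(mtype)}
    missing = sorted(in_org - tracked)
    if missing:
        print(f"Components in the org but not in the tracker ({mtype}):")
        for entry in missing:
            print("  ", entry)
```

Run daily (or in the stand-up), a report like this would have surfaced the missing-component gaps months before the deployment did.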

3. VCS check-in/check-out process: There were instances where components were updated in the VCS directly without a proper check-in/check-out. This caused the deployment scripts to fail, and it took almost a day to figure out the root cause every time.

Root cause? It’s as simple as a Hindu saying I’m always reminded of: “Understanding Dharma is not enough; it must be followed.”

There were slippages in adhering to the right process, despite all the effort spent on learning it. Shortcuts can prove fatal at times.

Given the size of the engagement, I do feel having a dedicated release manager would have helped a great deal.

Moving away from the process-related aspects, I also want to talk about the technical aspects of the deployment:

4. Size of the deployment package: After finally having all the required components in place, the next problem we faced was package size: our deployment was failing simply because the deployment package was too big! Who would’ve thought!

We could not push all the changes to the test org in a single go. Once the package size exceeded 15 MB*, deployments started to fail. The Salesforce professional services team shared guidelines on how to split the deployment into meaningful groups**. Below is a sample of what was shared.

GROUP 1: StaticResource, GlobalValueSet, CustomLabels, Role, NamedCredential, PlatformCachePartition, ContentAsset, Group, ConnectedApp, CustomPermission, IframeWhiteListUrlSettings, NotificationTypeConfig, CustomNotificationType, RemoteSiteSetting, ReportFolder, EmailFolder, DocumentFolder, DashboardFolder, StandardValueSet

GROUP 2: PathAssistant, CustomObject, Queue, DuplicateRule, MatchingRule, CustomTab, ApprovalProcess, AppMenu, Layout, CustomMetadata, CustomApplication, Workflow, MatchingRules, ApexClass, AuraDefinitionBundle, LightningComponentBundle, ApexPage, HomePageLayout, FlexiPage, Flow, QuickAction, ApexTrigger, Dashboard, Report, EmailTemplate, ReportType

GROUP 3: Profile, PermissionSet, PermissionSetGroup (CustomSite, Network, and ExperienceBundle can go into Group 2 or Group 3 depending on their size)

* This is not a hard limit. After the initial deployment was completed, we could push packages of 17 MB without any issues, possibly because of the space available on the DevOps machine at that point in time. Nonetheless, breaking the deployment package into smaller groups is a good practice, and one worth rehearsing during your dev and test deployments before planning the production deployment.

** Note that this is not an exhaustive list, only an indication of how groups can be formed. Depending on the size of the customizations in a project, these groups may need to be broken down further.
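
To make the grouping concrete, here is a minimal Python sketch, not part of the original guidance, that splits one full manifest into per-group manifests. The file names (package.xml, package-group1.xml and so on) are illustrative, and the group contents are abbreviated; extend them with the full lists above.

```python
import xml.etree.ElementTree as ET

NS = "http://soap.sforce.com/2006/04/metadata"
ET.register_namespace("", NS)

# Abbreviated group definitions taken from the lists above; extend as needed.
GROUPS = {
    "group1": {"StaticResource", "GlobalValueSet", "CustomLabels", "Role", "RemoteSiteSetting"},
    "group2": {"CustomObject", "Layout", "ApexClass", "ApexTrigger", "Flow", "Workflow"},
    "group3": {"Profile", "PermissionSet", "PermissionSetGroup"},
}

def split_manifest(full_manifest="package.xml"):
    """Write one package.xml per group, keeping only that group's metadata types."""
    tree = ET.parse(full_manifest)
    root = tree.getroot()
    version = root.find(f"{{{NS}}}version").text

    for group, wanted_types in GROUPS.items():
        out_root = ET.Element(f"{{{NS}}}Package")
        for types_el in root.findall(f"{{{NS}}}types"):
            name = types_el.find(f"{{{NS}}}name").text
            if name in wanted_types:
                out_root.append(types_el)
        ET.SubElement(out_root, f"{{{NS}}}version").text = version
        ET.ElementTree(out_root).write(f"package-{group}.xml",
                                       xml_declaration=True, encoding="UTF-8")
        print(f"Wrote package-{group}.xml")

if __name__ == "__main__":
    split_manifest()
```

Each resulting manifest can then be deployed in sequence, for example with something like `sfdx force:source:deploy --manifest package-group1.xml`, with the ordering validated in a sandbox first.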

5. Unit testing with the right profiles: After all the hustle and bustle of the challenges above, it was time for a sanity test. Just before handing over the environment for SIT, the team was taken aback by yet another surprise: several flows were not working because of permission issues!!

It was later understood that team members had developed and tested everything as the Salesforce Administrator and never really looked at the other personas/profiles and the permissions they would need to execute the business flows end to end.

While considering all personas may seem very basic and obvious, more than a handful of senior developers/tech leads admitted that development teams typically test only with the ‘Admin’ profile. I was as shocked as you may be, but took it as a learning for the future.
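
One lightweight way to catch this class of issue early is to assert, per persona, that the profile actually grants the object access the business flow needs. The following is a rough sketch only, using the third-party simple_salesforce Python library and the standard ObjectPermissions object; the profile name, object list, and credentials are all hypothetical placeholders.

```python
from simple_salesforce import Salesforce  # third-party library; an assumption, not the project's tooling

# Placeholder credentials for illustration only.
sf = Salesforce(username="qa.user@example.com",
                password="********",
                security_token="********")

PROFILE = "Sales Agent"                        # hypothetical business persona profile
REQUIRED = {"Account", "Opportunity", "Case"}  # objects the end-to-end flow touches

# ObjectPermissions rows describe object-level access granted via a profile's permission set.
soql = (
    "SELECT SobjectType, PermissionsRead, PermissionsCreate, PermissionsEdit "
    "FROM ObjectPermissions "
    f"WHERE Parent.Profile.Name = '{PROFILE}'"
)
granted = {r["SobjectType"]: r for r in sf.query_all(soql)["records"]}

for obj in sorted(REQUIRED):
    perms = granted.get(obj)
    if not perms or not (perms["PermissionsRead"] and perms["PermissionsCreate"]):
        print(f"[GAP] {PROFILE} is missing read/create access on {obj}")
    else:
        print(f"[OK]  {PROFILE} can read and create {obj}")
```

A check like this doesn't replace testing the flows while logged in as each persona, but it flags the most obvious permission gaps before they reach SIT.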

Last but not least, beyond the core issues, there should have been an accurate pre- and post-deployment activities tracker, which is always extremely important and should be rehearsed across multiple test deployments. Establishing Apex test coverage upfront during the build phase, rather than at the end, would also have made for a smoother production deployment.
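
On the coverage point, the check itself is easy to automate once the CLI is in place. A minimal sketch, assuming the legacy sfdx CLI and an authenticated org alias (here called build-org, a hypothetical name), might look like this:

```python
import subprocess

ORG_ALIAS = "build-org"  # hypothetical authenticated org alias

# Run all local Apex tests with code coverage; a non-zero exit code fails the build step.
completed = subprocess.run(
    ["sfdx", "force:apex:test:run",
     "--testlevel", "RunLocalTests",
     "--codecoverage",
     "--resultformat", "human",
     "--wait", "30",
     "--targetusername", ORG_ALIAS],
)
if completed.returncode != 0:
    raise SystemExit("Apex tests failed or coverage could not be collected")
print("Local Apex tests passed; review the coverage table printed above.")
```

Wired into the build pipeline, this keeps coverage visible sprint by sprint instead of becoming a surprise at production time.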

To summarize,

  1. Define your source control strategy and check-in/check-out processes upfront in the project, and familiarize the teams with them. Ensure everyone has access to the right set of tools from the very start.
  2. Never assume that teams follow the processes as defined; processes must be enforced at every level, at all times.
  3. Split your deployment package into manageable smaller groups.
  4. Don’t forget to test with every persona, not just the admin.

And of course, from time to time, do a quick check that people are actually following the processes!!

Wondering how long the deployment actually took? 5 full weeks!!!
