Copado Data Deploy Best Practices
Data Quality Best Practices
When working with a release management tool like Copado, it is very important to have good quality data.
You want to make sure the changes you are moving across your pipeline have passed the relevant quality gates and that your sandboxes are updated with the latest changes from your production org.
Bearing this in mind, there are some best practices you can implement in your organization to ensure your data is consistent and of good quality:
- Create an external Id field in each of the objects you are working with so that records are not duplicated upon deployment.
External Id fields contain unique values and help you avoid creating duplicate records when importing data or deploying duplicate records when working with data deployments.
- Use validation rules to maintain your data structure.
Validation rules allow you to enforce the requirements you specify before users can save a record. Check out Salesforce’s article for more information about the different validation rules you can set up.
- If you have any validation rules or triggers in place, make sure they are valid for your existing data before deploying to the destination org.
- Make sure you document everything and add help texts to fields so that users understand the data entry requirements.
- Check that your data is clean. Use apps to cleanse duplicates and to enforce duplicate rules, matching rules, and unique values.
- Keep your sandboxes in sync. Updating your sandboxes with the latest data from your production org will help you avoid deployment errors.
For this purpose, you can leverage Copado Continuous Delivery to easily back-promote user stories to lower environments.
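The external Id recommendation above can be sketched in a few lines. This is an illustrative model of upsert-by-key, not Copado’s actual deployment logic, and `External_Id__c` is a hypothetical custom field name:

```python
# Sketch of why deploying records keyed by an external Id avoids duplicates.
# 'External_Id__c' is a hypothetical custom field; any unique key works.

def upsert_by_external_id(org_records, incoming):
    """Deploy `incoming` records into `org_records`, keyed by external Id.

    Records whose external Id already exists are updated in place;
    new external Ids create new records. No duplicates are created,
    no matter how many times the same payload is deployed.
    """
    index = {r["External_Id__c"]: r for r in org_records}
    for rec in incoming:
        key = rec["External_Id__c"]
        if key in index:
            index[key].update(rec)   # existing record: update fields
        else:
            org_records.append(rec)  # new external Id: insert record
            index[key] = rec
    return org_records
```

Running the same deployment twice leaves the destination data unchanged, which is exactly the property you want across a pipeline.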
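Likewise, the duplicate detection mentioned above boils down to grouping records by a normalized matching key. A minimal sketch, assuming a simple email-based matching rule (real duplicate apps and Salesforce matching rules use fuzzier logic):

```python
def find_duplicates(records, matching_key):
    """Group records that collide under a matching rule.

    `matching_key` mimics a matching rule: it maps a record to the
    value used for duplicate detection (e.g. a lowercased email).
    Only groups with more than one record are returned.
    """
    groups = {}
    for rec in records:
        groups.setdefault(matching_key(rec), []).append(rec)
    return {key: recs for key, recs in groups.items() if len(recs) > 1}
```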
In addition to these general guidelines on how to maintain good quality data, check out the best practices below to successfully work with Copado Data Deploy.
Copado Data Deploy Best Practices
- When working with data templates, you can easily select and deploy all the related records you need to move as part of your deployment. But before that, we recommend that you leverage Salesforce’s Schema Builder and take a look at your data model to have a clear view of all the related records you need to deploy. Once you have your templates ready, you can review the relationship diagram on the main data template to ensure you are not missing any related records.
- Establish a naming convention for your data templates:
- When multiple users are creating data templates in the same org, it is recommended to add user initials to the data template name.
- Similarly, if you are creating a data template to move test data from your production org to your sandboxes, you can add ‘Test Data’ to the template name.
- If you create multiple versions of the same data template, make sure you append the version number. In the template description, specify what makes this version different from the others.
- If you have created or cloned a template for a specific user story, add the user story number to the template name.
- Focus on adding accurate filters, especially for configuration data.
- Templates cannot be repeated within the same set of related templates. Therefore, when selecting a template for a related object, do not choose a template that is already selected for another related object. This situation can arise, for example, when an object has two lookup fields pointing to the same related object.
- When working with test data, it is a good practice to move the most recent records from your production org, since they reflect the latest functionality. To do so, you can filter by created date, e.g., Created Date = Last Quarter.
- When creating new fields in an object for which you already have a template, or if you are using a different set of filters, clone your existing data template instead of updating it, and add the new fields or filters to the clone. This way, you can keep different versions of the filters.
- If you are moving sensitive data to a sandbox for testing purposes, leverage the Scramble Value and the Scramble With Format functionality. Copado will replace the original value with a random value.
- Create list views for templates belonging to a particular application or object to easily find the templates you need. For example, you can create a list view for CPQ templates where the copado__Main_Object__c field starts with “SBQQ__”, the CPQ package namespace.
- If you are moving the same data over and over to lower environments, e.g., for testing purposes, you can use data sets. Data sets allow you to deploy the exact piece of data you need, and the deployment process is faster than a data template deployment.
- If several developers are working on the same data records, you should use the data commits feature to make sure you deploy the records from a committed version and avoid deploying changes from other developers.
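As an illustration of the Created Date = Last Quarter filter mentioned above, the date range such a filter resolves to can be computed like this (a sketch of the date arithmetic only, not Copado’s or Salesforce’s implementation):

```python
from datetime import date, timedelta

def last_quarter_bounds(today):
    """Return the (start, end) dates of the previous calendar quarter,
    mirroring what a 'Created Date = Last Quarter' filter selects."""
    quarter = (today.month - 1) // 3  # current quarter, 0-based
    if quarter > 0:
        year, quarter = today.year, quarter - 1
    else:
        year, quarter = today.year - 1, 3  # wrap to Q4 of last year
    start = date(year, quarter * 3 + 1, 1)
    end_month = quarter * 3 + 3
    if end_month == 12:
        end = date(year, 12, 31)
    else:
        # last day of the quarter = first day of next month minus one day
        end = date(year, end_month + 1, 1) - timedelta(days=1)
    return start, end
```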
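The Scramble With Format behavior described above can be approximated as follows. This is a simplified sketch: it replaces each letter or digit with a random one of the same kind while keeping punctuation, so the shape of the value survives; Copado’s actual scrambling algorithm may differ:

```python
import random
import string

def scramble_with_format(value, seed=None):
    """Randomize a sensitive value while preserving its format.

    Digits stay digits, letters stay letters (case preserved), and
    separators such as '-' or '@' are kept, so downstream validation
    on the field format still passes. Sketch only; not Copado's code.
    """
    rng = random.Random(seed)
    out = []
    for ch in value:
        if ch.isdigit():
            out.append(rng.choice(string.digits))
        elif ch.isalpha():
            pool = string.ascii_uppercase if ch.isupper() else string.ascii_lowercase
            out.append(rng.choice(pool))
        else:
            out.append(ch)  # keep punctuation/spacing to preserve format
    return "".join(out)
```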
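The CPQ list-view example above amounts to a “starts with” filter on the copado__Main_Object__c field. A minimal sketch with hypothetical template records:

```python
def templates_for_namespace(templates, prefix="SBQQ__"):
    """Filter data templates whose main object belongs to a package
    namespace, mirroring a 'starts with' list view filter."""
    return [t for t in templates
            if t["copado__Main_Object__c"].startswith(prefix)]
```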
Data Backup Best Practices
- Leverage data sets and scheduled jobs to perform regular backups of your production data and enable fast data recovery when needed.
- Execute production backups before and after a data deployment with Copado so that you have a delta of the changes. This way, if data recovery is needed, you have a snapshot of the changes made during the production data deployment.