There are no shortcuts when it comes to testing as part of Application Lifecycle Management (ALM), but the following is a guide to testing changes before they are deployed to production models. As part of the decision to utilize ALM, you should have created the ‘structure for change’ and have a change control process in place, as outlined here. Assuming this is the case, here are a few tips to help minimize unwanted data issues on deployment.
The development activity post-go-live should be given the same importance as the initial development. As part of this, any change should have an associated user story and, within that, acceptance criteria. Acceptance criteria are the key to signing off a piece of development; validating them is the first part of testing, known as ‘component’ or ‘functional’ testing. Functional testing should validate the work and prove out the calculations (e.g. does the top-down value get allocated correctly across all lower-level members?). However, depending on the complexity of the change, further testing may be necessary. More information on user stories and functional testing can be found here, as well as in the training course.
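To illustrate what proving out a calculation can look like, here is a minimal sketch of a functional check for a top-down allocation, written in Python. Everything in it is hypothetical: the member names, the driver weights, and the tolerance. In practice, you would export the allocated values from the test model and feed them in.

```python
# A minimal sketch of a functional test for a top-down allocation rule.
# The figures below are hypothetical; in practice you would export the
# allocated values from the test model and compare them here.

TOLERANCE = 0.01  # acceptable rounding difference


def check_allocation(top_down_value, driver_weights, allocated_values):
    """Verify the allocated values sum back to the top-down value and
    respect the driver proportions."""
    total_weight = sum(driver_weights.values())
    # 1. The allocation must be complete: children sum back to the parent.
    assert abs(sum(allocated_values.values()) - top_down_value) <= TOLERANCE, \
        "Allocated values do not sum back to the top-down value"
    # 2. Each member must receive its proportional share.
    for member, weight in driver_weights.items():
        expected = top_down_value * weight / total_weight
        actual = allocated_values[member]
        assert abs(actual - expected) <= TOLERANCE, \
            f"{member}: expected {expected:.2f}, got {actual:.2f}"


# Hypothetical acceptance-criteria check: 1,000 allocated over three regions.
check_allocation(
    top_down_value=1000.0,
    driver_weights={"North": 2, "South": 1, "West": 1},
    allocated_values={"North": 500.0, "South": 250.0, "West": 250.0},
)
print("Allocation checks passed")
```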
Depending on the segregation of duties or customer requirements, the source data for the test model could point to a QA environment or a production system. It is recommended that the development model always be used as the source for all test and production models. The advantages of this are as follows:
It is also recommended that the development and test environments are held within a separate workspace; this helps segregate responsibilities between developers and the business team (if that is required).
This article will not run through all of the testing protocols (there are too many), but here are a couple of ideas for how to check values before and after a change.
This approach works well if you expect changes to the values in certain parts of the model, or if you need a detailed analysis of particular differences. It involves creating a brand-new “variance” model to validate the variances resulting from the changes. This model is a cut-down version, or “shell”, of the production model. This method is only possible if you have access to both the test and production workspaces.
The detailed steps from above are as follows:
Note: once the “variance” model is set up, step 2 will not need to be repeated going forward. If you re-copy the production model (step 3), you will not need to repeat step 5a; all you need to do is re-map the source of the import to the newly copied test model. This is done in the Settings tab, under ‘Source Models’.
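As an illustration of how the production and test values could be compared once exported, here is a minimal sketch in Python. It assumes you have exported the same saved view from both models to CSV; the file names, the column layout (dimension columns first, value column last), and the tolerance are all hypothetical.

```python
# A minimal sketch of the variance check done outside Anaplan, assuming the
# same saved view has been exported from the production and test models.
import csv


def load_values(path):
    """Read an export keyed by its dimension columns,
    returning {dimension-tuple: value}."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        value_col = reader.fieldnames[-1]   # assume the value is the last column
        key_cols = reader.fieldnames[:-1]   # everything else is a dimension
        return {
            tuple(row[c] for c in key_cols): float(row[value_col] or 0)
            for row in reader
        }


production = load_values("production_export.csv")  # hypothetical file name
test = load_values("test_export.csv")              # hypothetical file name

# Compare every intersection that appears in either export.
for key in sorted(set(production) | set(test)):
    variance = test.get(key, 0.0) - production.get(key, 0.0)
    if abs(variance) > 0.01:  # tolerance for rounding
        print(f"{key}: production={production.get(key, 0.0):,.2f} "
              f"test={test.get(key, 0.0):,.2f} variance={variance:,.2f}")
```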
This approach works if you are expecting no change to the overall values in the model.
Note: If you are going to use this approach, it is highly recommended that the setup build steps (step 1) are included in a separate revision tag and synchronized before you start development on the main change. This is because you may need to revise the change and run another synchronization.
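If it helps, the before-and-after comparison itself can be scripted outside Anaplan. Below is a minimal sketch in Python, assuming top-level totals have been exported to two-column CSV files (line item, value) before and after the synchronization; the file names and layout are hypothetical.

```python
# A minimal sketch of the before/after totals check, assuming two-column
# CSV exports (line item name, total value) taken before and after the sync.
import csv


def load_totals(path):
    """Read a two-column export: line item name, total value."""
    with open(path, newline="") as f:
        return {name: float(value) for name, value in csv.reader(f)}


before = load_totals("totals_before_sync.csv")  # hypothetical file name
after = load_totals("totals_after_sync.csv")    # hypothetical file name

# This approach expects no change to overall values, so any difference
# beyond rounding is a red flag that needs investigating before sign-off.
clean = True
for item in sorted(set(before) | set(after)):
    diff = after.get(item, 0.0) - before.get(item, 0.0)
    if abs(diff) > 0.01:
        clean = False
        print(f"{item}: before={before.get(item, 0.0):,.2f} "
              f"after={after.get(item, 0.0):,.2f} diff={diff:,.2f}")

print("All totals match" if clean else "Totals have moved, investigate")
```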
The detailed steps are below:
In both the above cases, if the data is not as expected:
As mentioned above, if the results of the testing are not as expected and the level of rework needed to correct the issue is substantial, it might be quicker to start again rather than amend the elements contained within the revision tag.
To repeat the opening statement, there are no shortcuts to testing. The level of testing, and the risk of error, will vary greatly depending on the scale of the changes. The other best practice to mention, in conclusion, is to set and test revisions regularly. This, coupled with a structured development program, should help minimize issues in deployment and drive efficiency in your Application Lifecycle Management process.
See part three of the series to understand how to recover from the worst-case scenario: a synchronization that has caused data loss.
Great insight as ever @DavidSmith !
Just reading this, I was wondering: does reverting a model back to before a revision tag using the history log set the model back to before that tag, erasing the revision tag from the model, and hence make a model with accidental structural changes compatible with revision tags again?
Not that it would be recommended, mind; it's more for the sake of understanding how the history log and IDs work with revisions.
Revision tags are never deleted; even reverting model history only reverts the structures. The revisions remain permanently. That is why "back to the future" works!
David