ALM explained—Part 2: testing

Introduction

There are no shortcuts when it comes to testing as part of Application Lifecycle Management (ALM), but the following is a guide to testing changes before they are deployed to production models, to minimize issues on deployment. As part of the decision to utilize ALM, you should have created the ‘structure for change’ and have a change control process in place, as outlined here. Assuming this is the case, here are a few tips to help minimize unwanted data issues on deployment.

Create user stories and acceptance criteria

The development activity post-go-live should be given the same importance as the initial development. As part of this, any change should have an associated user story and, within that, acceptance criteria. Acceptance criteria are the key to signing off a piece of development; meeting them is the first stage of testing, known as ‘component’ or ‘functional’ testing. Functional testing should validate the work and prove out the calculations (e.g. is the top-down value allocated correctly across all lower-level members?). However, depending on the complexity of the change, further testing may be necessary. More information on user stories and functional testing can be found here, as well as in the training course.
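
Where it helps to make such a check repeatable, the acceptance criteria can be scripted against values exported from the model. Below is a minimal sketch in Python (it is not Anaplan-specific); the member names and figures are hypothetical stand-ins for values exported from the model under test.

    # Minimal sketch of a functional-test check: confirm a top-down value
    # has been fully allocated across the lower-level members.
    # All names and figures are hypothetical illustration data.
    def check_allocation(top_down_total, allocations, tolerance=0.01):
        """Return True if the allocated values sum back to the top-down total."""
        allocated_sum = sum(allocations.values())
        difference = abs(top_down_total - allocated_sum)
        if difference > tolerance:
            print(f"FAIL: allocated {allocated_sum:,.2f} vs. top-down {top_down_total:,.2f} "
                  f"(difference {difference:,.2f})")
            return False
        print("PASS: allocation reconciles with the top-down total")
        return True

    # Example: a 1,000,000 top-down target allocated across three products
    check_allocation(1_000_000.0, {"Product A": 400_000.0,
                                   "Product B": 350_000.0,
                                   "Product C": 250_000.0})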

Setting up the development and testing environments

Depending on the segregation of duties or customer requirements, the source data for the test model could point to a QA environment or a production system. It is recommended that the development model is always used as the source for all test and production models. The advantages of this are as follows:

  • The source of the production model(s) remains constant; there is no confusion over which model is the source for the target model(s).
  • Using a consistent source minimizes the chance of breaking compatibility.
  • Test models can be deleted after use to save workspace.
  • It is possible to create multiple test models to validate different revision tag changes (e.g. one test model could be at ‘Revision 3’, another at ‘Revision 4’). Testing on ‘Revision 3’ may be complete, so, if desired, ‘Revision 3’ could be deployed, while testing on ‘Revision 4’ is progressing.
  • Depending on the scope of the testing, a fully populated dataset may be required. If this is the case, it is simple to copy the production model to become a test model.

It is also recommended that the development and test environments are held within a separate workspace; this helps segregate responsibilities between developers and the business team (if that is required).

This article does not run through all of the testing protocols, as there are too many, but here are a couple of ideas for checking values before and after a change.

Create a variance model

This approach works well if you expect changes to the values in certain parts of the model, or if you need a detailed analysis of particular differences. It involves creating a brand-new “variance” model to validate the differences resulting from the new changes. This model is a cut-down version, or “shell”, of the production model. This method is only possible if you have access to both the test and production workspaces.

[Image: overview of the variance model process]

The detailed steps from above are as follows:

  1. Identify the level of detail you wish to validate (lists, modules, and line items). For example, you may want to validate annual totals for revenue/expenses, monthly or weekly totals for all individual revenue/expense line items, or both. How much detail (line items) and which structures (list hierarchies) you include will depend on the level of detail you want in your validation process.
  2. In the test workspace, create a new “variance” model.
    a. Create the module(s) needed to validate data, using the level of detail identified in step 1, and repeat the following for each module identified.
    b. In each module, create either a dimension or two line items to hold the “before” and “after” values. If you choose the line-item approach, create one additional line item that calculates the variance between the “before” and “after” line items.
    c. Create an import from the production model, mapping the structures as appropriate into the “before” element you created in b), and repeat for each module.
    d. Run all of the imports created in c). You now have the “before” state captured in the “variance” model.
  3. Take a copy of the production model and import it to the test environment. This creates a fully-populated test model, as well as providing a back-up of the data set.
  4. Synchronize the proposed change from development to this test model. You now have the “after” state in the test model.
  5. In the “variance” model:
    a. For each module, repeat the process from 2c) and 2d) above to create imports from the test model, targeting the “after” elements in each module.
    b. Traverse through the modules to compare the “before” and “after” values.

Note: once the “variance” model is set up, step 2 will not need to be repeated. If you re-copy the production model (step 3), you will not need to repeat step 5a; all you need to do is re-map the source of the import to the newly copied test model. This is done in the Settings tab, under ‘Source Models’.
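
If you prefer to make the final comparison outside the model, the same before/after check can be scripted. The sketch below, in Python, assumes the “before” values (from the production model) and the “after” values (from the test model) have each been exported to a CSV file with an item column and a value column; the file and column names are hypothetical and would need to match your own exports.

    import csv

    def load_values(path, key_col="Item", value_col="Value"):
        """Read an item -> value mapping from a CSV export of a module."""
        with open(path, newline="") as f:
            return {row[key_col]: float(row[value_col]) for row in csv.DictReader(f)}

    def report_variances(before_path, after_path, tolerance=0.01):
        """Print every item whose 'after' value differs from its 'before' value."""
        before = load_values(before_path)
        after = load_values(after_path)
        for item in sorted(before.keys() | after.keys()):
            variance = after.get(item, 0.0) - before.get(item, 0.0)
            if abs(variance) > tolerance:
                print(f"{item}: before={before.get(item, 0.0):,.2f} "
                      f"after={after.get(item, 0.0):,.2f} variance={variance:,.2f}")

    # Hypothetical export file names for the two snapshots
    report_variances("production_before.csv", "test_after.csv")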

Create a checksum module

This approach works if you are expecting no change to the overall values in the model.

[Image: overview of the checksum module process]

Note: If you are going to use this approach, it is highly recommended that the setup build steps (step 1) are included in a separate revision tag and synchronized before you start development on the main change. This is because you may need to revise the change and run another synchronization.

The detailed steps are below:

  1. In the development model:
    a. Similar to the process above, create a high-level module in the development model; this could even hold a single value (e.g. total volumes for all regions, for all channels, for all years).
    b. Create three line items: ‘before’, ‘after’, and the variance.
    c. In the ‘after’ line item, create a formula to pull in the value(s) as appropriate.
    d. Create an internal import to copy the values from ‘after’ to ‘before’.
    e. Save the changes as a revision tag.
  2. In the test model:
    a. Synchronize the checksum change from development to this test model.
    b. Run the internal import to set up the ‘before’ state.
    c. Check that the checksum values are as expected.
  3. Synchronize the change to the production model.
  4. You can now proceed to make the changes in the development model.
    a. Once the desired changes are complete, set the revision tag in the development model.
    b. Take a copy of the production model and import it to the test environment. This creates a fully-populated test model.
  5. In the test model:
    a. Run the internal import to set up the ‘before’ state.
    b. Synchronize the changes to the test model.
    c. The ‘after’ line item will update, and you can now check it against the ‘before’ line item for any variance (a simple sketch of this comparison follows below).
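
The same comparison can be replicated outside the model as a quick sanity check. The Python sketch below compares a handful of high-level totals captured before the synchronization with the same totals afterwards; the line-item names and figures are hypothetical.

    def checksum_ok(before, after, tolerance=0.01):
        """Return True if every high-level total is unchanged within tolerance."""
        ok = True
        for name, before_value in before.items():
            after_value = after.get(name, 0.0)
            variance = after_value - before_value
            if abs(variance) > tolerance:
                ok = False
            print(f"{name}: before={before_value:,.2f} after={after_value:,.2f} "
                  f"variance={variance:,.2f}")
        return ok

    # Hypothetical high-level totals from the checksum module
    before_totals = {"Total Volume": 12_345_678.0, "Total Revenue": 9_876_543.0}
    after_totals = {"Total Volume": 12_345_678.0, "Total Revenue": 9_876_543.0}
    print("Checksum passed" if checksum_ok(before_totals, after_totals) else "Checksum failed")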

Correcting the issue(s)

In both the above cases, if the data is not as expected:

  • Analyze the differences
  • Identify the errors or issues with the proposed changes
  • If the “issue” is a small change:
    • Correct the development model
    • Set a new revision tag and repeat the testing process from above
  • If the “issue” requires significant work, it might be quicker to remove the change in development and start again.
    • Revert the development model back to the previous revision tag (see below)
    • Repeat the change, with the correction
    • Set a new revision tag and repeat the testing process from above

Reverting a revision

As mentioned above, if the results of the testing are not as expected, and the level of change needed to correct the issue is substantial, it might be quicker to start again rather than amend individual elements of the change contained within the revision tag.

  1. In the development model, navigate to the History tab and review the historic changes. You should be able to find the point at which the previous revision tag was set (and the history ID immediately before it).
    [Image: model history showing where the previous revision tag was set]
  2. Restore the development model back to this point; the changes crystallized in the last revision tag are now backed out.
  3. Correct the issue found in testing, along with any other changes as desired.
  4. Set a new revision tag.
    [Image: setting a new revision tag]
  5. Follow the testing process as above.

Conclusion

To repeat the opening statement, there are no shortcuts to testing. The level of testing, and the risk of error, will vary greatly depending on the scale of the changes. The other Best Practice to mention, in conclusion, is to set and test revisions regularly. This, coupled with a structured development program, should help minimize issues on deployment and drive efficiency in your ALM process.

See part three of the series to understand how to recover from the worst-case scenario: a synchronization that has caused data loss.

Comments

  • Great insight as ever, @DavidSmith!

  • Just reading this, I was wondering: if you revert a model back to before a revision tag using the history log, does that set the model back to before the revision tag and hence erase the revision tag from the model, making a model with accidental structural changes revision-tag compatible again?

    Not that it would be recommended, mind; more for the sake of understanding how the history log and IDs work with revisions.

  • @david.savarin 

    Revision tags are never deleted; even reverting model history only reverts the structures. The revisions remain permanently. That is why "back to the future" works!

    David