Our engineering team develops the Anaplan Add-ins for Excel and PowerPoint. This team is primarily composed of developers and quality assurance engineers who write and test the code behind the add-ins.
This month, we're sharing insights from Anaplan Technical Lead @edvard_dvorak to illustrate how we test our products before releasing them to customers. This Q&A also provides best practices and principles that customers can apply to their Anaplan models.
Tell us about yourself! Can you describe your current role and your journey with Anaplan?
I’m Eddie Dvorak, Technical Lead for the Anaplan Excel Add-in and Anaplan PowerPoint Add-in. I have been at Anaplan for over five years and have worked on various teams across infrastructure and add-ins. Eventually, my passion for working more closely with customers brought me to leading the Anaplan Office Add-ins team. I have had the pleasure of working with many different people in both the UK and the U.S.
Why do you test the Anaplan Office Add-ins?
At Anaplan, we take testing very seriously. Our definition of “done” is not complete until testing is finished.
The testing we do depends on the size and complexity of the project. For example, the multi-sheet connection we released in version 3.4 impacted the whole stack of the Anaplan Excel Add-in. We had to ensure all the settings (e.g., apply Anaplan formatting, styling, etc.) and other add-in functionalities (e.g., read/write connections, pivot, filter, etc.) were applied to the multi-sheet connections.
What kind of testing do you do?
Most features require multiple levels of testing:
Unit testing, in which we test individual units of source code. This is the equivalent of model builders testing that the formula they have entered gives the expected result for a specific line item.
Integration testing, in which individual software modules are combined and tested as a group. As a model builder, you would do this after you have created several Anaplan modules as part of a single functionality.
Regression testing, which ensures that previously developed and tested software still performs correctly after a change. As a model builder, you would, for instance, check that the change you have made to a formula doesn’t have any downstream impact on the rest of the model.
Exploratory testing, in which we put ourselves into the shoes of the users and try to find defects.
For each new feature, we ensure the existing tests are up to date to reflect the impact of the new functionality, and we add new tests where needed. The tests mentioned above are a mix of automated and manual tests. The latter are the ones you likely run as a model builder.
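To make the unit-testing level concrete: a unit test exercises one small piece of logic in isolation, much like checking a single line-item formula. The sketch below is purely illustrative — the `gross_margin` function is a hypothetical stand-in, not code from the Anaplan add-ins.

```python
import unittest

# Hypothetical helper: a stand-in for "the formula behind one line item".
def gross_margin(revenue, cost):
    """Return gross margin as a fraction of revenue."""
    if revenue == 0:
        return 0.0
    return (revenue - cost) / revenue

class GrossMarginTest(unittest.TestCase):
    def test_typical_values(self):
        # 200 revenue, 150 cost -> 25% margin
        self.assertAlmostEqual(gross_margin(200.0, 150.0), 0.25)

    def test_zero_revenue(self):
        # Guard against division by zero
        self.assertEqual(gross_margin(0, 50.0), 0.0)

if __name__ == "__main__":
    unittest.main()
```

Each test checks one behavior with a known input and expected output, which is what makes failures easy to localize.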
So how do those tests run? We have dedicated tools to run the automated tests. We specify the scenario (i.e., action 1 > expected result 1, action 2 > expected result 2…) and the tests run automatically. Then, we analyze the results and fix any issues. The regression and exploratory tests are done after we have finished developing the features.
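The "action > expected result" pattern Eddie describes can be sketched as a tiny data-driven runner. This is a minimal illustration of the idea, not the actual Anaplan tooling; the "system" here is just a dict standing in for a connected worksheet.

```python
def run_scenario(system, steps):
    """Apply each (action, expected) step in order; fail fast on a mismatch."""
    for i, (action, expected) in enumerate(steps, start=1):
        result = action(system)
        if result != expected:
            raise AssertionError(f"step {i}: expected {expected!r}, got {result!r}")
    return True

# Illustrative system under test: a dict standing in for a worksheet connection.
sheet = {"connected": False, "rows": 0}

steps = [
    # action 1 > expected result 1: connect the sheet
    (lambda s: s.update(connected=True) or s["connected"], True),
    # action 2 > expected result 2: refresh pulls 100 rows
    (lambda s: s.update(rows=100) or s["rows"], 100),
]

print(run_scenario(sheet, steps))  # True when every step matches
```

Because the scenario is just data, the same runner can replay many scripts, which is what makes this style easy to automate.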
Can you tell us more about manual tests?
While manual tests are useful for some areas of code that are difficult to automate, I’m ultimately not a big fan of manual tests. They can take a long time to run, are subject to interpretation by the person running them, and can cause false positives as well as false negatives. This is why we replace them with automated tests as much as possible for every release.
My advice for model builders is to write clear test scripts using the Anaplan Way app to overcome these potential challenges.
Who is in charge of the testing? There is no single person in charge of testing:
All Anaplan engineers are responsible for ensuring quality.
We also have a specialist quality assurance team that works with engineers as advisors.
Can bugs still exist after testing?
Despite all our efforts, it is still possible for bugs to exist after testing is complete. One reason is that users configure their IT systems in different ways, with virtual machines or specific security settings, and have different use cases for Excel. Although we try to cover the range of customer use cases, we are always learning about how end users leverage the Anaplan Excel Add-in.
You can help by joining feedback sessions. We also learn from support tickets and regularly add new tests based on those findings.
Would you like to share a final fun fact? We have approximately 3,500 unit tests, 120 integration level tests, and 210 manual tests just for the Anaplan Excel Add-in!
A big thank you goes out to Eddie for his valuable insights. Tell us what you think about the interview in the comments below. Remember to subscribe—on the blog homepage—for real-time notifications of valuable content like this and more!