This is a rather open question so I'm just going to give a few thoughts.
If you haven't already considered it, starting a CoE would be really helpful. @ChrisWeiss has some amazing content on this page. You can start with the introduction. Once you get past 5+ apps you really need some central control, and one area you'll want to develop is testing/automation. Standards help too, but most important is that someone watches the big picture across the entire tenant (mostly data integration), with, at a minimum, a lead for each workspace.
The user stories are critical at this stage because they also carry the success criteria. User stories go beyond process, though; you can also create them for data integration.
Having your Anaplan BP or a solution architect assist you with a testing plan would help too.
Watch out for concurrency; this should be tested too. I've found it helpful to run a coordinated end-to-end test with multiple users while IT runs certain batch jobs at the same time.
Make use of ALM and use data subsets in your testing environment that help simulate what production will be like. Be prepared to back out the changes if performance takes a nose dive in production. @DavidSmith has some best practices on that.
This is such an awesome topic. I haven't seen very many questions about it, but I know anyone who has to deal with more than one model in a workspace must address this. Let's keep the conversation going. You could probably write a best practice on this; I checked and didn't see one.
If I could only give you one suggestion it would be to give your solution architect the responsibility for maintaining the integrity of the workspace. Ideally, as part of a CoE so the architect can get help when there's a roadblock. Politics always creep into these conversations.
Of course, where you assign the testing work will depend on your ability to get the resources.
Dedicated testers may not be necessary since the testing comes in bursts.
Just a few more thoughts:
We all know testing isn't effective unless the tester really knows what they're testing. That's why I believe investing in the process AND data integration user stories is so important. Plus, if you have a CoE charged with making sure standards are met, or you have a dedicated solution architect, they can confirm the user story is well designed.
Ideally, the stakeholder will do the UAT, but unit testing, in my opinion, should follow a DevOps mentality: the person who builds it tests it and owns it. That approach has produced the best results I've ever seen.
As you increase the model count, you might run into issues where it starts to feel like a free-for-all. This is when you'll need to get serious about ITIL and implement incident, problem, and minor enhancement processes.
If you use the Scrum methodology, or the Anaplan Way, testing is shared between the modeler and the stakeholder (UAT).
Lastly, with regard to regression testing on the workspace: this is where the CoE makes the most sense because they will help enforce the integrity of the data movement.
Since writing your initial post, have you adopted a new best practice for testing? I was recently introduced to the idea of Test Driven Development (TDD) while learning Python. I really enjoyed the concept of TDD and wondered how I could leverage the benefits of TDD with Anaplan model development.
UAT and checking against existing external Excel models have served me well for initial model buildout. Where I've run into the most trouble is when small tweaks are needed in a rush: you make the "small" change and it works the way you think it should (it passes a 'unit test'), but no regression testing is performed on all the other individual units to ensure you haven't unintentionally distorted downstream outputs.
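To make that concrete, here's a minimal sketch of the kind of regression check I have in mind: diff a baseline export of a saved view against a fresh export taken after the "small" change. This assumes both exports have been downloaded as CSV files with identical layouts; the file names and the tolerance are my own placeholders, not anything Anaplan-specific.

```python
import csv

def load_rows(path):
    """Read a CSV export into a list of rows."""
    with open(path, newline="") as f:
        return list(csv.reader(f))

def diff_exports(baseline_path, current_path, tolerance=0.01):
    """Return (row, col, old, new) tuples where values diverge.

    Numeric cells may differ within the tolerance (to absorb
    rounding); non-numeric cells must match exactly.
    """
    baseline = load_rows(baseline_path)
    current = load_rows(current_path)
    diffs = []
    for r, (old_row, new_row) in enumerate(zip(baseline, current)):
        for c, (old, new) in enumerate(zip(old_row, new_row)):
            try:
                if abs(float(old) - float(new)) > tolerance:
                    diffs.append((r, c, old, new))
            except ValueError:
                if old != new:
                    diffs.append((r, c, old, new))
    return diffs
```

An empty result means the downstream view still matches the baseline; any tuples returned point you straight at the cells that moved, which beats eyeballing two spreadsheets side by side.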
I spend a lot of time comparing data to make sure things are still behaving, but having tests that I can run instead would save a lot of time. Anytime I start doing something really repetitive and tedious, I quickly get the suspicion that someone out there has an idea of how to do it more efficiently.
I have some vague ideas of how I might implement some testing modules, but my first step was to come to the Anaplan community to see if someone else had already invented the 'TDD with Anaplan' wheel. 😉
That's helpful to know, and it makes sense why Platform Updates can sometimes break the scripts. Since the testing I'm interested in has more to do with ongoing data validation, I think there are simple yet effective measures I can employ to speed up the testing process without going the Selenium route yet. As I learn more I can automate more, but the important part is that I've become more aware of the value of testing downstream data as changes are made to a model.
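One of those simple measures could be a control-total check: compare a known aggregate from the source system against the same aggregate computed from a model export. A rough sketch in Python, where the column name, file name, and expected value are all assumptions for illustration:

```python
import csv

def control_total(path, column):
    """Sum a numeric column from a CSV export, skipping blank cells."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        return sum(float(row[column]) for row in reader if row[column])

def validate_total(export_path, column, expected, tolerance=0.01):
    """True if the export's column total matches the expected source total."""
    return abs(control_total(export_path, column) - expected) <= tolerance
```

Run after each model change, a handful of checks like this catches a distorted aggregate in seconds, without any browser automation.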
If I weren't on a Mac, I think the Excel Add-in would have a lot to offer in terms of time-to-value for spinning up semi-automated data validation (once the Google Sheets Add-in becomes available, I can explore that as an option). In the meantime, I think the API will help me achieve a similar goal, with more legwork.