Data Hub Exception Reporting

Hello everyone,


I'm looking for process guidance on Data Hub exception reports for automated data loads that run once or more per day. The exception reports themselves are built. Ideally, the process is automated so that the APIs pull data into the Data Hub and then immediately into the spoke models. I'm wondering whether anyone has recreated the exceptions in the spoke models to bring more awareness to potential data issues there, or has instead given everyone access to the Data Hub, set up alerts, etc.


Thank you!

Answers

  • @KBeltz 

    One of my favorite topics is data integration.

    So, as you probably guessed, exception reporting and auditing have a lot of dimensions to them.

    To answer your question quickly: there is no single definitive best-practice article written on this topic that I know of, but there are plenty of helpful articles that, when consolidated, add up to what you're looking for. @rob_marshall's articles are some of my favorites, mostly because he really gets into the details, but there are quite a few other really good authors.

    Some thoughts / learnings:

    • For exception reporting you need to set up three types of audits: New, Change, and Delete. Each is considerably different; Change is the hardest, which is why I highly recommend you always use "codes" to identify your list items (see the classification sketch after this list).
    • You will need to spend significantly more time outside of Anaplan setting up your CRUD (create, read, update, delete) actions so you can process your exceptions.
    • Automated emails are possible, but you'll need to read up on the best practices for that. I found it rather complicated; I'll look for the link for you.
    • I saw an amazing demonstration of a simulated "progress bar" in Anaplan, but you need a scheduler for that and a really good programmer (the demo was written in Python).
    • Build a "process" to handle the exceptions and make a sprint out of it so it gets documented and adopted. The last thing you want is for your data hub and spoke applications to get out of sync; that is a nightmare to straighten out, so insist that all master data changes be made in one model, period. The sooner, the better.
    • The ability to "push" data from the Data Hub to the spoke application is on the roadmap, but it doesn't exist yet. For now, you'll have to build all your actions in the respective spoke model to "pull" the data (a rough API sketch follows this list).
    • Always document your imports and exports in the notes/comments by recording the exact name of the saved view you used and the model it lives in. You can't imagine the difficulty of hunting down import data sources without some way to trace where the data is coming from.
    • If you ever change your export saved view, remember that you have to re-download the file before it will work with your automated process, even if you use "everybody".
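
    As a minimal sketch of that New/Change/Delete classification, here is one way to diff two snapshots keyed on the code. The file paths, column name, and CSV layout are illustrative assumptions on my part, not a prescribed Anaplan pattern:

    ```python
    import csv

    def classify_exceptions(previous_path, incoming_path, key="Code"):
        """Classify incoming records as New, Change, or Delete by
        comparing unique codes against the previous load's snapshot."""
        with open(previous_path, newline="") as f:
            previous = {row[key]: row for row in csv.DictReader(f)}
        with open(incoming_path, newline="") as f:
            incoming = {row[key]: row for row in csv.DictReader(f)}

        new = [code for code in incoming if code not in previous]
        deleted = [code for code in previous if code not in incoming]
        # "Change" is the hard case: the same code arrives with
        # different attribute values than it had last load.
        changed = [code for code in incoming
                   if code in previous and incoming[code] != previous[code]]
        return new, changed, deleted
    ```

    Keying everything on the code is what makes the Change case tractable: two snapshots of the same code can be compared field by field.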
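
    And here is a rough sketch of chaining the hub load and the spoke "pull" from a scheduler, assuming Anaplan's v2 Integration API. The workspace, model, and import IDs are placeholders, and the endpoint paths and response shape should be verified against the current API documentation:

    ```python
    import time
    import requests

    API = "https://api.anaplan.com/2/0"

    def run_import(token, workspace_id, model_id, import_id):
        """Trigger an Anaplan import action and poll until it finishes."""
        headers = {"Authorization": f"AnaplanAuthToken {token}"}
        base = (f"{API}/workspaces/{workspace_id}/models/{model_id}"
                f"/imports/{import_id}/tasks")

        task = requests.post(base, headers=headers,
                             json={"localeName": "en_US"}).json()
        task_id = task["task"]["taskId"]

        while True:  # poll the task until it reports COMPLETE
            state = requests.get(f"{base}/{task_id}", headers=headers).json()
            if state["task"]["taskState"] == "COMPLETE":
                return state
            time.sleep(5)

    # Hub load first, then the spoke model's "pull" action (placeholder IDs).
    # run_import(token, "hubWsId", "hubModelId", "112000000001")
    # run_import(token, "spokeWsId", "spokeModelId", "112000000002")
    ```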


    Let's keep this conversation going! I'm sure there are plenty of people with advice on this subject! We have all been challenged by this.

  • @KBeltz ,


    Quick question: why would you want to do data validation in the spoke model? I would think you would want to do the validation in the Data Hub and then prevent the "dirty" data from getting into the spoke. Basically, you'd make sure only validated data is pulled into the spoke model.
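
    One way to picture that gate is to partition the staged rows before the spoke import ever runs, so only the validated set is exposed to the spoke. A minimal sketch, where the required-field rule is just an illustrative stand-in for whatever validation the hub actually applies:

    ```python
    def split_valid_invalid(rows, required=("Code", "Amount")):
        """Partition staged rows: only validated data is staged for the
        spoke import; everything else lands on the exception report."""
        valid, exceptions = [], []
        for row in rows:
            if all(row.get(field) not in (None, "") for field in required):
                valid.append(row)
            else:
                exceptions.append(row)
        return valid, exceptions
    ```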


    @JaredDolich great points!


    Rob

  • @KBeltz 

    Here's the article about exception handling I was referring to.

    https://community.anaplan.com/t5/Best-Practices/Integrated-Error-Handling-and-Email-Notification/ta-p/20701

    Written by @chanaveer_k, @scott.smith, and @pmarpaka.

    They did a terrific job.
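
    For a quick feel of the end state before reading the article: once a load has produced exceptions, a scheduler-side script can send out a summary. A minimal sketch using Python's standard library; the SMTP host and addresses are placeholders, and the article's integrated approach is far more complete:

    ```python
    import smtplib
    from email.message import EmailMessage

    def send_exception_alert(failures, recipient):
        """Email a summary of load exceptions via a mail relay."""
        msg = EmailMessage()
        msg["Subject"] = f"Data Hub load: {len(failures)} exception(s)"
        msg["From"] = "datahub-alerts@example.com"    # placeholder sender
        msg["To"] = recipient
        msg.set_content("\n".join(failures) or "No detail captured.")

        with smtplib.SMTP("smtp.example.com") as server:  # placeholder relay
            server.send_message(msg)
    ```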

  • Usually I'd say no: the users of the spoke models are not the ones responsible for data sanity. However, if your company/client is organized this way, why not? You can certainly import the exceptions along with the metadata.