Reducing the number of calculations leads to quicker results and better performance. This doesn't mean combining all your calculations into fewer line items; breaking calculations into smaller parts has major benefits for performance. Learn more about this in the Formula Structure article.

How is it possible to reduce the number of calculations? Here are three easy methods:

Turn off unnecessary Summary method calculations.
Avoid formula repetition by creating modules to hold formulas that are used multiple times.
Ensure that you are not including more dimensions than necessary in your calculations.

Turn off Summary method calculations
Model builders often include summaries in a model without fully thinking through whether they are necessary. In many cases the summaries can be eliminated. Before we get to how to eliminate them, let's recap how the Anaplan engine calculates. In the following example we have a Sales Volume line item that varies by the following hierarchies:

Region Hierarchy: City > Country > Region > All Regions
Product Hierarchy: SKU > Product > All Products
Channel Hierarchy: Channel > All Channels

This means that from the detail values at SKU, City, and Channel level, Anaplan calculates and holds all 23 of the aggregate combinations shown below (24 blocks in total, including the detail block). With the Summary option set to Sum, when a detail item is amended (represented in the grey block), all the other aggregations in the hierarchies are also recalculated. Selecting the None summary option means that no calculations happen when the detail item changes. The varying levels of hierarchies are quite often only there to ease navigation, and the roll-up calculations are not actually needed, so there may be a number of redundant calculations being performed. The native summing in Anaplan is the faster option, but if all the levels are not needed, it might be better to turn off the summary calculations and use a SUM formula instead.
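To see where the 24 blocks come from, the arithmetic can be sketched in a few lines of Python. This is purely illustrative; the level names simply mirror the three hierarchies in the example above.

```python
from itertools import product

# Levels in each hierarchy, from detail to top, as in the example above
region_levels = ["City", "Country", "Region", "All Regions"]
product_levels = ["SKU", "Product", "All Products"]
channel_levels = ["Channel", "All Channels"]

# Every combination of one level per hierarchy is a block the engine holds
blocks = list(product(region_levels, product_levels, channel_levels))
print(len(blocks))       # 4 * 3 * 2 = 24 blocks in total

# All but the detail block (City, SKU, Channel) are aggregate combinations
detail = ("City", "SKU", "Channel")
aggregates = [b for b in blocks if b != detail]
print(len(aggregates))   # 23 aggregate combinations
```

With Summary set to Sum, editing one detail cell touches every one of those 23 aggregate blocks; with None, it touches none of them.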
For example, from the structure above, let's assume that we have a detailed calculation for SKU, City, and Channel (SALES06.Final Volume). Let's also assume we need a summary report by Region and Product, and we have a module (REP01) with a line item (Volume) dimensioned as such.

REP01.Volume = SALES06 Volume Calculation.Final Volume

is replaced with

REP01.Volume = SALES06.Final Volume[SUM: H01 SKU Details.Product, SUM: H02 City Details.Region]

The second formula replaces the native summing in Anaplan with only the required calculations in the hierarchy.

How do you know if you need the summary calculations? Look for the following:

Is the calculation or module user-facing? If it is presented on a dashboard, then it is likely that the summaries will be needed. However, look at the dashboard views used. A summary module is often included on a dashboard with a detail module below; effectively, the hierarchy subtotals are shown in the summary module, so the detail module doesn't need the sum or all of the summary calculations.

Detail to detail. Is the line item referenced by another detailed calculation line item? This is very common, and if the line item is referenced by another detailed calculation, the summary option is usually not required. Check the Referenced by column to see if anything references the line item.

Calculation and staging modules. If you have used the DISCO module design, you should have calculation/staging modules. These are often not user-facing and contain many detailed calculations. They also often have large cell counts, which will be reduced if the summary options are turned off.

Can you have different summaries for time and lists? The default option for Time Summaries is to be the same as the lists. You may only need the totals for hierarchies, or just for the timescales. Again, look at the downstream formulas.
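The idea behind the targeted SUM can be sketched outside Anaplan as well. The sketch below is a toy Python analogue, not Anaplan syntax: the dicts stand in for the detail line item and the H01/H02 mapping properties, and the names and values are invented for illustration.

```python
# Hypothetical detail data keyed by (sku, city, channel), standing in for
# SALES06.Final Volume at the detail level
final_volume = {
    ("SKU1", "NYC", "Web"): 10,
    ("SKU1", "Paris", "Web"): 20,
    ("SKU2", "NYC", "Store"): 5,
}
# Mapping tables standing in for the H01 SKU Details / H02 City Details lists
sku_to_product = {"SKU1": "Product A", "SKU2": "Product B"}
city_to_region = {"NYC": "Americas", "Paris": "EMEA"}

# REP01.Volume: aggregate only to the (product, region) pairs the report
# needs, instead of maintaining every summary level of both hierarchies
rep01_volume = {}
for (sku, city, channel), vol in final_volume.items():
    key = (sku_to_product[sku], city_to_region[city])
    rep01_volume[key] = rep01_volume.get(key, 0) + vol

print(rep01_volume[("Product A", "Americas")])  # 10
```

Only the report-level totals are computed; the intermediate Country, Region, and All Channels roll-ups never exist, which is exactly what turning summaries off and using a targeted SUM achieves.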
The best practice advice is to turn off the summaries when you create a line item, particularly if the line item is within a Calculation module (from the DISCO design principles).

Avoid Formula Repetition
An optimal model performs a specific calculation only once. Repeating the same formula expression multiple times means the calculation is performed multiple times. Model builders often repeat formulas related to time and hierarchies. To avoid this, refer to the module design principles (DISCO) and hold all the relevant calculations in a logical place. Then, if you need the calculation, you will know where to find it, rather than adding another line item in several modules to perform the same calculation.

If a formula construct always starts with the same condition evaluation, evaluate it once and then refer to the result in the construct. This is especially true where the condition refers to a single dimension but is part of a line item that spans multiple dimension intersections. A good example can be seen below: START() <= CURRENTPERIODSTART() appears five times, and similarly START() > CURRENTPERIODSTART() appears twice. To correct this, include these time-related formulas in their own module and then refer to them as needed in your other modules. Remember: calculate once; reference many times!

Taking a closer look at our example, not only is the condition evaluation repeated, but the dimensionality of the line items is also greater than required. The calculation only changes by day, as per the diagram below. But the Applies To here also contains Organization, Hour Scale, and Call Center Type. Because the formula expression is contained within the line item formula, for each day the calculation is also performed for every combination of those dimensions. And, as above, it is repeated in many other line items. Sometimes model builders even use the same expression multiple times within the same line item.
To reduce this overcalculation, reference the expression from a more appropriate module; for example, Days of Week (dimensioned solely by day), which was shown above. The blueprint is shown below: the two different formula expressions are now contained in two line items and are only calculated by day; the other, irrelevant dimensions are not calculated. Substitute the expression by referencing the line items shown above. In this example, making these changes to the remaining lines in this module reduces the calculation cell count from 1.5 million to 1,500. Check the Applies To for your formulas, and if there are extra dimensions, remove the formula and place it in a different module with the appropriate dimensionality.
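The scale of the saving can be illustrated with a small Python sketch. The dimension sizes below are invented for illustration (they are not the ones behind the 1.5 million figure); the point is the ratio between evaluating a day-only condition across the full intersection versus once per day.

```python
from datetime import date

# Hypothetical dimensions standing in for the example's Applies To
days = [date(2024, 1, d) for d in range(1, 8)]   # 7 days
orgs = range(10)                                  # Organization
hours = range(24)                                 # Hour Scale
center_types = range(3)                           # Call Center Type
current_period_start = date(2024, 1, 4)

# Repeating START() <= CURRENTPERIODSTART() inside the full intersection
# evaluates the condition once per cell...
full_cells = len(days) * len(orgs) * len(hours) * len(center_types)

# ...whereas holding it in a day-only module (like Days of Week above)
# evaluates it once per day and lets other modules reference the result
in_period = {d: d <= current_period_start for d in days}
day_cells = len(in_period)

print(full_cells, day_cells)  # 5040 vs 7 condition evaluations
```

The same condition result is then referenced from the larger module, so the expensive intersection never re-evaluates it: calculate once, reference many times.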
Overview
There is no switch to "turn on" ALM. ALM is based on entitlements described in your subscription agreement, so discuss your subscription with your Anaplan Account Executive and Business Partner. Workspace administrators can check the feature availability:

1. Log in to Anaplan.
2. Click on your name in the top-right-hand corner.
3. Select Manage Models.
4. Look for the Compare/Sync button.

If the button is greyed out, speak to your Anaplan Account Executive regarding your subscription agreement. If the button is available, you currently have access to ALM functionality on your workspace.

Additional information is available in the 313 Application Lifecycle Management (ALM) class, located in the education section.
A revision tag is a snapshot of a model's structural information at a point in time. Revision tags save all of the structural changes made in an application since the last revision tag was stored. By default, Anaplan allows you to add a title and description when creating a revision tag. This article covers:

Suggestions for naming revision tags
Creating a revisions tracking list and module

Note: For guidance on when to add revision tags, see When should I add revision tags?

Suggestions for naming revision tags
It's best to define a standard naming convention for your revision tags early in the model-building process. You may want to check with your Anaplan Business Partner or IT group whether there is an existing naming convention that would be best to follow. The following suggestions are designed to ensure consistency when there are a large number of changes or model builders, as well as to help the team identify which revision tag to choose when syncing a production application.

Option 1:
X.0 = Major revision/release
X.1 = Minor changes within a release
In this option, 1.0 indicates the first major release. As subsequent minor changes are tagged, they are noted as 1.1, 1.2, and so on, until the next major release: 2.0.

Option 2:
YYYY.X = Major revision/release
YYYY.X.1 = Minor changes within a release
In this option, YYYY indicates the year and X indicates the release number. For example, the first major release of 2017 would be 2017.1. Subsequent minor changes would be tagged 2017.1.1, 2017.1.2, and so on, until the next major release of the year: 2017.2.

Creating a revisions tracking list and module
Revision tag descriptions are only visible from within Settings, which means it can be difficult for an end user to know what changes have been made in the current release. Additionally, there may be times when you want to store additional information about revisions beyond what is in the revision tag description.
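A tiny, hypothetical helper makes the Option 2 convention concrete. This is not an Anaplan feature, just an illustration of the YYYY.X / YYYY.X.Y naming scheme described above; the function name is invented.

```python
def revision_tag(year: int, release: int, minor: int = 0) -> str:
    """Format a tag per the Option 2 convention: major releases as
    YYYY.X, minor changes within a release as YYYY.X.Y."""
    tag = f"{year}.{release}"
    return f"{tag}.{minor}" if minor else tag

print(revision_tag(2017, 1))     # 2017.1  (first major release of 2017)
print(revision_tag(2017, 1, 2))  # 2017.1.2 (second minor change)
print(revision_tag(2017, 2))     # 2017.2  (next major release)
```

Whichever option you choose, applying it mechanically like this keeps tags sortable and unambiguous across model builders.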
To provide release visibility in a production application, consider creating a revisions list and module to store key information about revisions.

Revisions list:
In your Development application, create a list called Revisions. Do not set this list as Production; you want these list members to be visible in your production model.

Revisions details module:
In your Development application, create a module called Revisions Details:
Add your Revisions list
Remove Time
Add your line items

Since this module will be used to document release updates and changes, consider which of the following line items may be appropriate:
Details: What changes were made?
Date: What date was this revision tag created?
Model History ID: What was the Model History ID when this tag was created?
Requested By: Who requested these changes?
Tested By: Who tested these changes?
Tested Date: When were these changes tested?
Approved By: Who signed off on these changes?

Note: Standard Selective Access rules apply to your production application. Consider who should be able to see this list and module as part of your application deployment.
Little and often
Would you spend weeks on your budget submission spreadsheet or your college thesis without once saving it? Probably not. The same should apply to making developments and setting revision tags. Anaplan recommends that during the development cycle, you set revision tags at least once per day. We also advise testing the revision tags against a dummy model if possible. The recommended procedure is as follows:

1. After a successful sync to your production model, create a dummy model using the Create from Revision feature. This creates a small test model with no production list items.
2. At the end of each day (as a minimum), set a revision tag and attempt to synchronize the test model to this revision tag. The whole process should only take a couple of minutes.
3. Repeat step 2 until you are ready to promote the changes to your production model.

Why do we recommend this? There are a very small number of cases where combinations of structural changes cause a synchronization error (99 percent of synchronizations are successful). The Anaplan team is actively working to provide a resolution within the product, but in most cases, splitting changes between revision tags allows the synchronization to complete. In order to understand the issue when a synchronization fails, our support team needs to analyze the structural changes between the revisions. Setting revision tags frequently provides the following benefits:

The number of changes between revisions is reduced, resulting in easier and faster issue diagnosis.
It provides an early warning of any problems so that someone can investigate them before they become critical.
The last successful revision tag allows you to promote some, if not most, of the changes if appropriate.

In some cases, a synchronization may fail initially, but when the changes are applied in sequence the synchronization completes.
Using the example above: synchronizations to the test model for R1, R2, and R3 were all successful, but R3 fails when synchronizing to production. Since the test model successfully synchronized from R2 and then R3, you can repeat this process for the production model. The new comparison report provides clear visibility of the changes between revision tags.
If you're familiar with Anaplan, you've probably heard the buzz about having a data hub and wondered why it's considered a "best practice" within the Anaplan community. Wonder no more. Below, I will share four reasons why you should spend the time to build a data hub before Anaplan takes your company by storm.

1. Maintain consistent hierarchies
Hierarchies are a common list structure built in Anaplan and come in a variety of options depending on use case, e.g., product hierarchy, cost center hierarchy, and management hierarchy, just to name a few. These hierarchies should be consistent across the business, whether you're doing demand planning or financial planning. With a data hub, your organization has a higher likelihood of keeping hierarchies consistent over time, since everyone is pulling the same structure from one source of truth: the data hub.

2. Avoid sparsity
As you expand the use of Anaplan across multiple departments, you may find that you only need a segment of a list rather than the entire list. For instance, you may want the full list of employees for workforce planning purposes, but only a portion of the employees for incentive compensation calculations. With a data hub, you can distribute only the pertinent information. You can filter the list of employees to build the employee hierarchy in the incentive compensation model, while keeping the full list of employees in the workforce planning model. Keep them both in sync by using the data hub as your source of truth.

3. Separate duties by roles and responsibilities
An increasing number of customers have asked about roles and responsibilities with Anaplan as they expand internally. In Anaplan, we recommend that each model have a separate owner. For example: an IT owner for the data hub, an operations owner for the demand planning model, and a finance owner for the financial planning model.
Together, the three owners form your Center of Excellence, but each has separate roles and responsibilities for development and maintenance in the individual models.

4. Accelerate future builds
One of the main reasons many companies choose Anaplan is the platform's flexibility. Its use can easily and quickly expand across an entire organization, and development rarely stops after the first implementation. Model builders are enabled and excited to continue bringing Anaplan into other areas of the business. If you start by building the data hub as your source of truth for data and metadata, you can accelerate the development of future models, since you have already defined the foundation of each model: the lists and dimensions.

As you begin to implement, build, and roll out Anaplan, starting with a data hub is a key consideration. In addition, there are many other fundamental Anaplan best practices to consider when rolling out a new technology and driving internal adoption.
Assume the following non-composite, ragged hierarchy list that needs to be set to production data. We need to refer to the parent to define the calculation logic. In this example, we assume that children of Parent 1 and Parent 3 need to return the value 100, those under Parent 2 and Child 3.1 return 200, and we need to show each child's proportion of its parent's total.

Select Calculation:
IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 1' OR PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 3' THEN 100 ELSE IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 2' OR PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Child 3.1' THEN 200 ELSE 0

Select Proportion:
Select Calculation / IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 1' THEN Select Calculation[SELECT: 'Non-Composite List'.'Parent 1'] ELSE IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 2' THEN Select Calculation[SELECT: 'Non-Composite List'.'Parent 2'] ELSE IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Parent 3' THEN Select Calculation[SELECT: 'Non-Composite List'.'Parent 3'] ELSE IF PARENT(ITEM('Non-Composite List')) = 'Non-Composite List'.'Child 3.1' THEN Select Calculation[SELECT: 'Non-Composite List'.'Child 3.1'] ELSE 0

These "hard references" will prevent the list from being set as a production list.

Solution:
Create a Parents Only list (this could be imported from the Non-Composite list). In a Parent Logic? module, add Boolean line items for each of the "logic" types. You can then refer to the logic above:

Lookup Calculation:
IF Parent Logic?.'Logic 1?'[LOOKUP: Parent Mapping.Parents Only List] THEN 100 ELSE IF Parent Logic?.'Logic 2?'[LOOKUP: Parent Mapping.Parents Only List] THEN 200 ELSE 0

To calculate the proportion without the SELECT, a couple of intermediate modules are needed:

Parent Mapping module
This module maps the Non-Composite parent to the Parents Only list.
In this example, the mapping is automatic because the items in the Parents Only list have the same names as those in the Non-Composite list. The mapping could be a manual entry if needed. The formulas and "applies to" are:

Non Composite Parent: PARENT(ITEM('Non-Composite List'))
Applies to: Non-Composite List

Parents Only List: FINDITEM(Parents Only List, NAME(Non Composite Parent))
Applies to: Parents Only List

Parents Only subtotals
An intermediary module is needed to hold the subtotals.

Calculation: Parent Logic Calc.Lookup Calculation[SUM: Parent Mapping.Parents Only List]

The final piece is to reference this line item in the original module:

Lookup Proportion: Lookup Calculation / Parents Only Subtotals.Calculation[LOOKUP: Parent Mapping.Parents Only List]

The list can now be set as a production list, as there are no "hard references".
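The LOOKUP pattern above can be mimicked in a short Python sketch. The dicts below are invented stand-ins for the Non-Composite list, the Parent Logic? module, and the Parent Mapping module; this is an analogy, not Anaplan syntax.

```python
# PARENT(ITEM('Non-Composite List')): each child's parent in the ragged list
parent_of = {
    "Child 1.1": "Parent 1", "Child 1.2": "Parent 1",
    "Child 2.1": "Parent 2",
    "Child 3.2": "Child 3.1",
}
# Parent Logic? module: the value each parent's logic flag drives
logic = {"Parent 1": 100, "Parent 2": 200, "Child 3.1": 200}

# Lookup Calculation: the child's value comes from its parent's logic,
# with no hard reference to any individual list item
lookup_calc = {c: logic.get(p, 0) for c, p in parent_of.items()}

# Parents Only subtotals: SUM of child values by mapped parent
subtotal = {}
for c, p in parent_of.items():
    subtotal[p] = subtotal.get(p, 0) + lookup_calc[c]

# Lookup Proportion: child value / parent subtotal via the mapping
proportion = {c: lookup_calc[c] / subtotal[p] for c, p in parent_of.items()}
print(proportion["Child 1.1"])  # 0.5 (100 of Parent 1's 200 total)
```

Because every reference goes through the mapping rather than naming a specific item, adding or renaming list items (as happens with production lists) does not break the formulas.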
What drives the need for a Center of Excellence?
The need for a Center of Excellence comes after businesses have successfully implemented Anaplan into their organization and are ready to become self-sufficient in ongoing Anaplan development and support. Organizations establish a Center of Excellence to proactively handle the increase in use cases that the company can expect in the near future. Once established, the Center of Excellence can provide the following benefits to the organization:

1. Maintain control
Many organizations will significantly grow within Anaplan after their first release. As such, they will want to maintain control of the product as more use cases and departments are introduced to Anaplan. Creating a Center of Excellence helps organizations maintain control of Anaplan, including the implementation of new releases and the training of new users, from a centralized, internal group or team.

2. Consistency
Establishing a Center of Excellence early helps organizations drive consistency across Anaplan. As more data, models, and modules are created within Anaplan, it's important to ensure that everything stays consistent across the application. Doing this helps to keep four key elements consistent in Anaplan:

Data and metadata: The Center of Excellence helps drive consistency in data and metadata by eliminating duplicate data and avoiding shadow integration processes.
Model design: The Center of Excellence helps drive consistency in model design by providing best practices in model architecture, calculation, performance optimization, and usability across the different Anaplan models deployed in the organization.
Processes: The Center of Excellence promotes the consistent execution of business processes and methodologies.
User experience: The Center of Excellence drives consistency across the application for all users involved.
This means that users new to Anaplan can expect nearly the same experience across each model and dashboard they interact with, as they have all been developed and deployed using the same process and guidelines.

3. Knowledge sharing
Creating a Center of Excellence promotes knowledge sharing within an organization. The Center of Excellence may ultimately be responsible for the initial and ongoing training of end users in the Anaplan platform. Additionally, the Center of Excellence may also be responsible for maintaining the processes, procedures, and best practices that the organization uses within Anaplan, which may be provided directly through the platform.

4. Upstream/downstream development
Creating and maintaining a Center of Excellence within an organization also empowers the business to develop more upstream and/or downstream processes within the Anaplan platform. For example, an organization may first deploy a T&Q model, and then decide to develop an upstream HR-based model that contains employee details and compensation data to manage their sales team. The potential to expand upstream and downstream from an initial model in Anaplan is endless.

5. Efficiency
The Center of Excellence creates a "service" for the business to become more efficient in developing, releasing, and maintaining models within Anaplan, supporting a business group to build and own its own model. This means that an organization may rely on an internal Center of Excellence as a single source to implement new applications, promote platform use, share Anaplan best practices, and handle all training needs.

6. Governance
Finally, a Center of Excellence provides a central point of governance for the Anaplan projects across the organization. In a centralized mode, the Center of Excellence is responsible for maintaining the platform, as well as all other necessary elements involved with the creation and maintenance of an organization's Anaplan products.
The Center of Excellence has the final say in platform updates and developments, which further drives consistency and efficiency in Anaplan. In a federated mode, the Center of Excellence assists local teams in their implementation and application rollout efforts as needed. In both cases, the Center of Excellence communicates the progress, updates, and value of the Anaplan applications to executive sponsors across the organization and highlights the value of the Anaplan investment.
An Anaplan customer's journey
The diagram below displays a real-world example of an Anaplan customer's journey. As you can see, the customer's journey begins with the implementation and deployment of one app on the Anaplan platform, with only two resources involved. Soon after, the organization develops and deploys more apps and includes more resources with each release. Simultaneously, the organization realizes that Anaplan is the platform solution it requires and begins supporting the platform with governance, processes, training, and more through a Center of Excellence to ensure its success.

The three phases of developing a Center of Excellence
To get to this point, a Center of Excellence should be established in three phases of deployment:

Phase 1: Foundations
Phase 2: Centralize
Phase 3: Scale

The following sections provide the steps necessary to complete each phase of developing a Center of Excellence.

Phase 1: Foundations
Deploy the first project successfully
Phase 1 of deploying a Center of Excellence starts with successfully deploying the first project/release on the Anaplan platform. Doing this establishes the foundation needed to start building the Center of Excellence.

Establish the governing body
Next, it's important to establish the governing body, or steering committee, within the organization. This initial team should feature three key members:

Executive or Project Sponsor: This role must be a neutral party that is invested in the success of Anaplan, but not aligned with a specific use case. This person ultimately has final approval in prioritizing sprints and releases for the business.
Anaplan Architect: The Anaplan Architect speaks for the Anaplan administrative team and demonstrates new model changes upon release. Additionally, this role advises on new modeling time requests and project planning when necessary.
Center of Excellence Lead: The Center of Excellence Lead oversees the Center of Excellence and organizes the monthly Center of Excellence update.

Establish your governance
Once the governing body of the Center of Excellence has been selected, the Monthly Center of Excellence Update and Quarterly Strategy Update meetings must be established.

Monthly Center of Excellence Update:
This meeting is organized by the Center of Excellence Lead.
Attendance includes: Executive Sponsors; Product Owners; Project Architects; the Center of Excellence Lead, Architect, and Advisor; and Business Transformation Leads (optional).
Meeting minutes and follow-ups are to be documented and delivered by the Center of Excellence Lead.
Once complete, the meeting minutes and action items, with responsible parties, should be posted for the team by the Center of Excellence Lead.

Quarterly Strategy Update:
The Quarterly Strategy Update should occur on the same day and at the same location as the applicable Monthly Center of Excellence Update meeting.
Again, this meeting is organized by the Center of Excellence Lead.
Attendance includes: Executive Sponsors; Product Owners; Project Architects; the Center of Excellence Lead, Architect, and Advisor; and Business Transformation Leads.
Meeting minutes and follow-ups are to be documented and delivered by the Center of Excellence Lead.
The content of this meeting should include strategy updates by the Executive Sponsors, roadmap updates, and key demonstrations.
Once complete, the meeting minutes and action items, with responsible parties, should be posted for the team by the Center of Excellence Lead.

Designate/hire the delivery team
Next, the organization should designate and/or hire the delivery team, which is initially comprised of the following members:

Solutions Architect (SA): An experienced Anaplan Architect who is responsible for guiding the delivery team using design and data management best practices.
Model Builders: Model Builders are responsible for producing new builds within planned timelines. The number of Model Builders may vary depending on release demands.

Establish the sprint cadence
The final step of Phase 1 is to establish the cadence of sprints for future releases in the organization. To do this, institute the following:

Daily standup meetings.
Two-week sprint cycles.
Sprint review with the governing body: one hour every two weeks. During this review, the most recent functionality built should be reviewed (15-30 minutes), followed by a sprint retrospective that covers "What went well?" and "What didn't go so well?"
Sprint planning: one hour every two weeks. During this meeting, all user stories from the backlog are reviewed and allocated into the next sprints based on priority and group agreement (45 minutes).

Phase 2: Centralize
Phase 2 begins when the Center of Excellence enters into the following:

A multi-use-case configuration with integration points between processes within Anaplan.
The business wants a model delivery service, as it cannot own model development.
The business wants to own model delivery and needs to be ramped up.
Many departments within the business want to leverage the Anaplan platform, and consistency and standardization are as important to the business as agility and flexibility.
Data governance becomes critical.

Establish data governance
The first step of Phase 2 is establishing governance in the Center of Excellence.
This includes:
Electing an official Core IT point of contact for all planning processes.
Building the Master Data Hub.
Setting the data refresh frequency.
Hierarchy and data validation.
Centralizing user provisioning.
Managing Admin licenses.

Establish Functional Representatives in the business
Functional Representatives:
Represent their functional areas for new project requests.
Provide feedback on recently completed models.

Designate a Central Solutions Architect
Once designated, the Central Solutions Architect is in charge of:
All central maintenance.
Providing status updates to Executive/Project Sponsors.
Managing new model requests.
Communicating changes to the master model to the broader community.
Reviewing all citizen-built models once per month and advising on best practices.
Ensuring all citizen developers have taken Anaplan training prior to developing.

Create a team of model builders
If there is a need for a "Business Service" central entity for Anaplan, options include:
Junior resources out of college, trained and ramped up.
Outsourcing to an Anaplan partner.
Drawing from IT resources, if available.
This team reports to the Central Architect.

Establish a process for creating a new model
The business team submits the request, with business justification, to the central Center of Excellence.
Prioritization and validation: Is it part of corporate strategy? Global (IT) or specific (business)?
Scoping: agreement on both the data hub and the model; model sizing; use an existing workspace or get a new workspace.
Project resource requirements and alignment: Can it be delivered by a federated team? Can it be delivered by the central team?
Execution governance: Include the new project in the daily/monthly/quarterly governance.

Phase 3: Scale
Once the Center of Excellence is established in Phases 1 and 2, it can be scaled to gain more attention within the business and enable new users to work and build inside the Anaplan platform.
Phase 3 begins when the Center of Excellence enters into the following:

5+ planning use cases built on Anaplan, integrated or not.
Anaplan has been communicated internally by the Global CIO as the platform to use for any planning application.
High visibility of planning apps among execution management teams.
A need to hire more resources, internally or externally, in the Center of Excellence team or in business federated teams.

Attract
Increase internal PR:
Provide frequent communication and evangelism of the Anaplan platform.
Produce newsletters and regular executive communications.
Demonstrate alignment to corporate strategy.
Have leadership publicly celebrate small wins.
Define internal career paths.
Highlight the growing external marketplace.

Enable
Enable new users to utilize the Anaplan platform by providing the following:
Leverage Anaplan training using classroom and on-demand resources.
Train during implementation and continue to offer training on an ongoing basis.
Develop internal materials specific to business outcomes and implementation.
Stay current on new functionality through an Early Adopter program.
Refresh the team's knowledge through regularly updated Anaplan Enablement courses.
ETL Overview
Traditionally, the IT department has controlled and owned all the data in a given organization. Therefore, the various functional areas within an organization (such as Finance, HR, and Procurement) have provided reporting and analytical requirements to the IT department / Business Intelligence (BI) professionals, and have waited until the work corresponding to these business requirements is completed. Historically, the approach taken by BI professionals to meet these requirements was the standard Extract, Transform, and Load (ETL) process, depicted in the sketch below. The raw data from various data sources (cloud, .txt, databases, .csv, etc.) is first extracted to a staging area. This extracted data is then transformed per a pre-determined set of transformation rules and loaded to a data repository. The business then consumes this transformed data for its reporting, analytics, and decision-making functions.

Figure 1 – ETL Process at a high level

The ETL process is considered somewhat rigid because all the requirements have to be shared with the BI professionals first, who then code the required transformation rules. In addition, any changes to these rules come at a higher cost to the business, in both time and money. In some cases, this lost time may also result in opportunity cost to the business.

ELT Overview
Nowadays, given the increasing need for speed and flexibility in reporting, analytics, what-if analyses, and so on, businesses cannot afford to wait for an extended period while their requirements are worked on by the same BI professionals. This, coupled with relatively lower infrastructure (hardware) costs and the emergence of cloud technologies, has given rise to the ELT process. In the ELT process, the raw data from all data sources is extracted and then immediately loaded into a central data repository.
The business can then get its hands on this raw data and transform it to suit its requirements. Once this transformation is done, the data is readily available for reporting, analytics, and decision-making needs. The sketch below illustrates the ELT process at a high level. Figure 2 – ELT Process at a high level The ELT process is similar to the data lake concept, where organizations dump data from various source systems into a centralized data repository. The format of the data in the data lake may be structured (rows and columns), semi-structured (CSVs and logs), unstructured (emails and PDFs), and sometimes even binary (images). Once organizations become familiar with the data lake / ELT concept and see its benefits, they often rush to set one up. However, care must be taken to avoid dumping unnecessary and/or redundant data. In addition, an ELT process should also encompass data cleansing and data archival practices to maintain the efficiency of the data repository. Comparison of ETL and ELT: The table below summarizes and compares the two methodologies of data acquisition and preparation for warehousing and analytics purposes. ELT vs. ETL and the Anaplan Platform As a flexible and agile cloud platform, Anaplan supports both methodologies. Depending on the method chosen, below are suggestions on the solution design approach. If choosing the ETL methodology, clients can utilize one of the many ETL tools available in the marketplace (such as Informatica, MuleSoft, Boomi, SnapLogic, etc.) to extract and transform the raw data, which can then be loaded to the Anaplan platform. Although it is preferred to load large datasets to a data hub model, the transformed data could also be loaded to the live or planning model(s). With the ELT approach, after the raw data extraction, it is recommended that it be loaded to a data hub model, where the Anaplan modeling team will code the required transformation rules.
The transformed data can then be loaded to the live or planning model(s) to be consumed by end users. Regardless of the approach chosen, note that the activities to extract raw data and load it to the Anaplan platform can be automated. A final note The content above gives a high-level overview of the two data warehousing methodologies and by no means urges clients to adopt one methodology over the other. Clients are strongly advised to evaluate the pros and cons of each methodology as they relate to their business scenario(s) and build a business case for selecting a methodology.
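The ordering difference between the two methodologies can be sketched with a toy pipeline. The file names, the sample data, the awk filter rule, and the load_to_repository step below are all illustrative assumptions, not part of any Anaplan or warehouse tooling.

```shell
# Toy source extract: three deals, two of them closed.
printf 'id,amount,status\n1,100,closed\n2,250,open\n3,75,closed\n' > /tmp/sales_raw.csv

# ETL: Extract -> Transform (apply the pre-determined rule: keep closed
# deals) -> Load the already-clean file into the repository.
awk -F, 'NR == 1 || $3 == "closed"' /tmp/sales_raw.csv > /tmp/sales_clean.csv
# load_to_repository /tmp/sales_clean.csv    # hypothetical load step

# ELT: Extract -> Load the raw file as-is; the business runs the same
# transformation later, inside the central repository, on demand.
# load_to_repository /tmp/sales_raw.csv
```

The only difference is where the transform step runs: before the load (ETL, coded up front by BI professionals) or after it (ELT, run by the business inside the repository).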
Anaplan API: Communication failure <SSL peer unverified: peer not authenticated> This is a common error when a customer server is behind a proxy or firewall. The solution for firewall blocks is to have the customer whitelist '*.anaplan.com'. If behind a proxy, use the '-via' or '-viauser' options in Anaplan Connect. The other very common cause of this error is that the security certificate isn't synced up with Java. If the whitelist or proxy options don't apply or don't resolve the error, uninstalling and reinstalling Java usually does the trick. Thanks to Jesse Wilson for the technical details. jesse.wilson@anaplan.com
Manual integration with Anaplan is by far the simplest option for integration. Using the point-and-click user interface available in Anaplan, you can select any tab-delimited or comma-separated file for import into your model. Importantly, this is the only way to add a new data source to your Anaplan model. This makes it a stepping stone for all the other forms of integration, as any other import will reuse the format of an already-uploaded file.
Anaplan has built several connectors to work with popular ETL (Extract, Transform, and Load) tools. These tools provide a graphical interface through which you can set up and manage your integration. Each of the tools we connect to has a growing library of connectors, providing a wide array of possibilities for integration with Anaplan. These ETL tools require subscriptions to take advantage of all their features, making them an especially appealing option for integration if you already have a subscription. MuleSoft Anaplan has a connector available in MuleSoft's community library that allows for easy connection to cloud systems such as NetSuite, Workday, and Salesforce.com, as well as on-premise systems like Oracle and SAP. Any of these integrations can be scheduled to recur at any interval needed, easily providing hands-off integration. MuleSoft uses the open-source Anypoint Studio and Java to manage its integrations between any of its available connectors. Anaplan has thorough documentation relating to our MuleSoft connector on the Anaplan MuleSoft GitHub. SnapLogic SnapLogic has a Snap Pack for Anaplan that leverages our API to import and export data. The Anaplan Snap Pack provides components for reading data from and writing data to the Anaplan server using SnapLogic, as well as executing actions on the Anaplan server. This Snap Pack empowers you to connect your data and organization on the Anaplan platform without missing a beat. Boomi Anaplan has a connector available on the Boomi marketplace that will empower you to create a local Atom and transfer data to or from any other source with a Boomi connector. You can use Boomi to import or export data using any of your pre-configured actions within Anaplan. This technology removes any need to store files as an intermediate step, as well as facilitating automation. Informatica Anaplan has partnered with Informatica to build a connector on the Informatica platform.
Informatica has connectors for hundreds of applications and databases, giving you the ability to leverage their integration platform for many other applications when you integrate these applications with Anaplan. You can search for the Anaplan Connector on the Informatica marketplace or request it from your Informatica sales representative.  
If you're using IBM Java version 1.7 or 1.8 and have encountered issues since the TLS 1.0 deprecation, we recommend that you try this step: edit the AnaplanClient.sh script that calls Anaplan Connect to insert the line JAVA_OPTS="${JAVA_OPTS} -Dcom.ibm.jsse2.overrideDefaultTLS=true -Dcom.ibm.jsse2.suiteB=true" immediately before exec "${_java}" ${JAVA_OPTS} -classpath "${classpath}" com.anaplan.client.Program "$@"
Recently, I used Anaplan Connect for the first time; I used it to import Workday and Jobvite data into my Anaplan model. This was my first serious data integration. After my experience, I put together some tips and tricks to help other first-timers succeed. Firstly, there are a few things you can do to set yourself up for success: Download the most up-to-date version of Java. Download Anaplan Connect from Anaplan's Download Center. Make sure you can run Terminal (Mac) or the Command Prompt (Windows). Make sure you have a plaintext editor to edit your script (TextEdit or Notepad are available by default, but I recommend Sublime Text). Read through the Anaplan Connect User Guide in the "doc" folder of the Anaplan Connect folder you downloaded in step #2. Once you have these items completed, you're ready to start writing your script. In the Anaplan Connect folder that you downloaded, there are some example script files: "example.bat" for Windows and "example.sh" for Mac. The best way to start is to copy the right example file for your operating system, then alter it. When you first navigate the example script, you'll see that the top section contains what are called variables (e.g., ModelId, WorkspaceId, AnaplanUser). If you keep your variables at the top, then use them in your script, it's easier to edit those components because they are only in one place. I highly recommend adding a variable for your Anaplan certificate; then you don't have to manually enter your password every time the script runs. When you begin to piece together your own script, it will include some combination of Anaplan Connect commands (you can check out the full list in an appendix of the Quick Start Guide for Anaplan Connect, on Anapedia). Because my script was focused on importing data from an outside source into Anaplan, it included the following components: file, put, import, execute, output. Each of these has a different function: File identifies the file name (i.e., Workday.csv).
Put identifies the file path of the file you're importing (i.e., User/Admin/Documents/Workday.csv). Import identifies the action Anaplan is supposed to run (i.e., Workday_Import). Execute is what runs the process; nothing needs to follow this. Output identifies what happens to errors. If you would like those to go to a file, then you include the location of the file following the output (i.e., User/Admin/Documents/ErrorLog.csv). It's worth noting that you can have multiple actions behind a file. For instance, I can have a command sequence like this: file-put-import-execute-output-put-import-execute-output. I found this useful when I used a single file to update multiple lists and modules; it saved me from needing to upload the file over and over again. When you are identifying the file path for the script, it is easiest to keep Terminal open. When you drag and drop a file into Terminal, it will automatically populate the file path. This helps you avoid syntax errors, since you can copy and paste from Terminal into the script. Once you assemble your commands, it's time to start testing your script! When you start testing the script, it is helpful to break it into small test chunks that build on one another. That way, if something goes wrong, it won't take as long to find where the error is. Additionally, it makes the script more digestible in the event that it needs to be edited in the future. As you test each of these chunks, you may run into some errors, so here are a few troubleshooting tips to get you started. If your terminal reports that there is a syntax error, then there is most likely a pesky apostrophe, a space, or some other special character in your script that is causing the error. Comb through the code, especially your file names, and find the error before attempting to run it again. Secondly, you may run into a permissions error. These typically arise when your file is not currently an executable file.
When I encountered this error, changing the permissions on the file to give me write access solved it. Overall, once you know these basics of Anaplan Connect, you can build a script—even a complicated one! When in doubt, see if somebody else has asked about a similar issue in the discussion section; if you don't find something there, you can always create your own question. Sometimes a second set of eyes is all you need, and our integrations site has some of the best in the biz contributing! Best of luck to the other rookies out there!
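Putting the tips above together, a minimal Mac script might look like the sketch below, modeled on the example.sh that ships with Anaplan Connect. Every value here (user, IDs, paths, action name) is a placeholder; the echo line lets you inspect the assembled command before uncommenting the line that runs it.

```shell
#!/bin/sh
# Variables at the top, so each value lives in only one place.
AnaplanUser="hr.admin@example.com"          # placeholder credentials
WorkspaceId="8a81b09d5e8c6f27"              # placeholder workspace ID
ModelId="96339A3A48394142"                  # placeholder model ID

# file / put / import / execute / output, chained behind one file.
Operation="-file Workday.csv \
 -put /Users/Admin/Documents/Workday.csv \
 -import 'Workday_Import' -execute \
 -output /Users/Admin/Documents/ErrorLog.csv"

# Assemble and (optionally) run the command, as the shipped example does.
Command="./AnaplanClient.sh -user ${AnaplanUser} \
 -workspace ${WorkspaceId} -model ${ModelId} ${Operation}"
echo "${Command}"
# /bin/sh -c "${Command}"   # uncomment to run against your model
```

Swapping the -user variable for a certificate variable, as suggested above, removes the password prompt on each run.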
At Anaplan, our mission is to change the way companies around the world align people and plans to market opportunities. Central to achieving this goal is successfully integrating your data from various external systems into Anaplan, whether through native connectors or connectors for the most popular ETL tools on the market. Anaplan has built a connector that provides a graphical environment for connecting with any of Boomi's library of connectors to other applications. Check out Anapedia for more information on performing basic functions with the Boomi connector; in this post, we will walk through an example Boomi process that demonstrates some advanced ways to use this tool. Multi-Module Import Often, a need arises to import a subset of data into a list before it is possible to fill in a module. This is easily achievable if you prepare your import actions with a single sample CSV with the exact formatting that Boomi's export will create. This is a useful technique for handling an export of Salesforce opportunities. A single pull from Salesforce returns a CSV containing opportunity IDs as well as other data you track in Anaplan. Two successive upsert calls with the Anaplan connector can add new opportunity IDs to an Anaplan list (useful for model-wide data integrity) and then add the other information to a module. Following this process has several upsides: a simpler Boomi process, a single query to Salesforce, and increased model-wide data integrity from the ability to match any opportunity ID in the model to an item on the opportunity ID list. Calling a Process Anaplan's Boomi connector does not have native support for calling processes. Often, calling individual actions is all that's needed, but some integrations demand more. Also, calling an Anaplan process instead of a collection of actions can reduce the burden of maintenance for IT professionals by allowing actions to be renamed and reordered without requiring changes to the Boomi process.
To call an Anaplan process within your Boomi workflow, you must bypass the Anaplan connector and instead use a Boomi HTTPS connector to call the process through our API. There is thorough documentation on Anaplan's RESTful API, and supplemental information in the knowledge base.
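As a sketch of what that HTTPS call looks like, the snippet below builds the endpoint for starting a process task and shows the corresponding curl call commented out. The base path, header, and request body shape are assumptions drawn from Anaplan's public REST API documentation, and all IDs are placeholders.

```shell
ApiBase="https://api.anaplan.com/2/0"   # assumed v2 API base path
WorkspaceId="wsId"                      # placeholder IDs; use your own
ModelId="mdlId"
ProcessId="procId"

# A process is started by POSTing a new task to the process's tasks URL.
TaskUrl="${ApiBase}/workspaces/${WorkspaceId}/models/${ModelId}/processes/${ProcessId}/tasks"
echo "${TaskUrl}"

# With a valid auth token, Boomi's HTTPS connector (or curl) would send:
# curl -s -X POST "${TaskUrl}" \
#      -H "Authorization: AnaplanAuthToken ${Token}" \
#      -H "Content-Type: application/json" \
#      -d '{"localeName": "en_US"}'
```

The response contains a task ID that can be polled for completion, which is what makes a renamed or reordered process invisible to the Boomi side.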
Tableau Connector for Anaplan The Tableau Anaplan native integration provides an easy way to see and understand your Anaplan data using Tableau. Using the Tableau Connector for Anaplan, you can directly connect to Anaplan in a few easy steps. The connector is native to Tableau and built using the Anaplan API. It enables you to import Anaplan data into Tableau's in-memory query engine using export actions created and saved in Anaplan. With a direct connection to Anaplan, people within your organization can effectively work with Tableau and get actionable insights from their data. Users can publish their Anaplan extract as a data source to Tableau Online or Tableau Server and keep their data refreshed on a regular basis. To start using the Tableau Connector for Anaplan, you need an Anaplan account with a workspace and model, and a license for Tableau Desktop. You will also need to configure, in Anaplan, the export actions that you plan to use with Tableau. Tableau supports only extract connections for Anaplan, not live connections. You can update the data by refreshing the extract. To try the Tableau Connector for Anaplan, visit https://www.tableau.com/products/trial. For an introduction to the Tableau-Anaplan integration, refer to the page below: https://www.tableau.com/about/blog/2016/10/connect-directly-your-anaplan-data-tableau-61853 More details about configuring the connector in Tableau are here: https://onlinehelp.tableau.com/current/pro/desktop/en-us/examples_anaplan.html Information on configuring Anaplan to use the Tableau Connector, as well as frequently asked questions, is available on Anapedia.
Problem to solve: As an HR manager, I need to enter the salary raise numbers for the multiple regions I'm responsible for. As a domain best practice, my driver-based model helps me enter raise guidelines, which then cascade down to the employee level. Usability issue addressed: I have ten regions, with eight departments in each and a total of 10,000+ employees. I need to align my bottom-up plan to the top-down target I received earlier. I need to quickly identify which regions are above or behind target and address the variance. My driver-based raise modeling is fairly advanced, and I need to see what the business rules are and quickly see how they impact the employee level. Call to action: Step 1: Spot which region I need to address. Step 2: Drill into the variances by department. Steps 1 and 2 are analytics steps: "As an end user, I focus first on where the biggest issues are." This is a good usability practice that helps users. Step 3: Adjust the guidelines (drivers). Excessive instructions on how to build and use guidelines would have cluttered the dashboard, so instead Anaplan added a "view guideline instruction" button. This button should open a dashboard dedicated to detailed instructions, or link to a video that explains how guidelines work. Impact analysis: The chart above the grid adjusts as guidelines are edited. That is good practice for impact analysis: no scrolling or clicking is needed to view how the changes will impact the plan. Step 4: Review a summary of the variance after changes are made. Putting steps 1–4 close to each other is a usable way of indicating to users that they need to iterate through these four steps to achieve their objective, which is to have every region and every department be within the top-down target. Step 5: A detailed impact analysis, placed directly below steps 3 and 4. This allows end users to drill into the employee-level details and view the granular impact of the raise guidelines.
Notice the best practices in step 5: The customer will likely ask to see 20 to 25 employee KPIs across all employees and will be tempted to display these as one large grid. This can quickly lead to an unusable grid made of thousands of rows (employees) across 25 columns. Instead, we have narrowed the KPI list to only ten that display without left-right scrolling. The criterion for selecting these ten: being able to chart a comparison of employees by these KPIs. The remaining KPIs are displayed as an info grid, which only displays values for the selected employee. Attributes like region, zip code, and dates are removed from the main grid, as they do not need to be compared side-by-side with other KPIs or between employees.
Example #1: Account assignment Problem to solve: Operational task to complete: "I need to assign territories to a list of accounts I'm in charge of." Usability issue addressed: The user has a list of 400k accounts to plan on, which cannot be navigated by simple scrolling or searching. This is addressed by setting up a user-based filter: the user enters filter criteria in a dedicated dashboard that displays a floating palette using the "undock" functionality. Filtering by region, sub-region, and current territory are the useful filter criteria, as well as showing only those accounts for which data entry is incomplete. The model sets a Boolean to TRUE when the account matches the criteria. The Boolean is dimensionalized by user and account; the impact this makes on the model size should be carefully monitored. The grid then filters and only displays accounts that match the filter criteria. Call to action: Step 1: As a sales operations manager, I filter my list of accounts based on an import of customer IDs, or by manually setting up a filter. Then, for this account selection, I enter a territory, country, city, and method for each account selected. Once complete, I commit my changes to recalculate all of my impacted territories. Step 2: Once committed, I fine-tune and finalize my sub-accounts and ensure that they are properly assigned by looking only at sub-accounts that have been assigned to different territories than their parents. Impact analysis: Visual cues highlight invalid data entry (a usability guideline), done by providing a user-based filter option that only displays invalid data entry. Visual cues also indicate when I need to submit the changes I have made. Example #2: Capacity planning for a territory Problem to solve: As a sales operations manager, I need to make a plan that shows how I will meet the target amount. Call to action: Adjust productivity, ramping profile, and resources until my capacity matches my target.
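The user-based filter Boolean in Example #1 can be mimicked with a toy command-line filter. The account file, columns, and the EMEA/incomplete criteria below are made up for illustration; in Anaplan the comparison is a Boolean line item dimensioned by user and account, and the grid shows only rows where it is TRUE.

```shell
# Toy account list: id,region,entry_complete
printf 'A1,EMEA,TRUE\nA2,EMEA,FALSE\nA3,AMER,FALSE\n' > /tmp/accounts.csv

# One user's filter criteria: region = EMEA, incomplete entries only.
# Each row's test mirrors the model's Boolean; matching rows are "shown".
Matches=$(awk -F, '$2 == "EMEA" && $3 == "FALSE" {print $1}' /tmp/accounts.csv)
echo "${Matches}"   # prints A2
```

At 400k accounts the equivalent grid filter is instant for the user, which is the point of precomputing the Boolean rather than scrolling or searching.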
Impact analysis: A main chart showing capacity versus target, capacity alert color coding, and a warning message that updates in real time as end users adjust any of the three options.
Overview
A complex dashboard may require numerous instructions that users need to read, as well as legal instructions that users must read and acknowledge. Users should not be exposed to instructions every time they visit a dashboard, especially if the instructions are lengthy. A dedicated instruction dashboard provides instructions to end users on demand.

Dashboard content flow
- Title; Heading 1, "Instructions Dashboard"
- Heading 2, break-up of instructions content
- Module or chart requiring instructions
- Only display the grids, line items, and KPIs that require instructions
- Remove unnecessary information (i.e., page drop-downs)
- Place instructions directly to the right of the object they explain
- Use single, concise sentences in "instruction" format
- One large blue box is not preferable; break it up into separate smaller boxes to improve readability
- Do not clutter the dashboard with too many instructions
- Short videos on the instructions dashboard can show a 30–60 second demo of basic user functions

Acknowledgement process
- Create a module dimensioned by user
- Use conditional formatting to display red when a user has not acknowledged the instructions
- Publish a line item at the bottom of the instructions page for the user to acknowledge viewing the dashboard
- Give the admin the ability to clear all acknowledgements when instructions are added or changed
- Automate your custom users list by exporting/importing daily from the actual Users view in Settings

Example of an instruction dashboard
Overview
Reporting dashboards provide the ability to do immediate, at-a-glance analysis of planning activities.
- Provide real-time collaboration on the plan across the user community
- Use the same best practices as planning dashboards: de-clutter, highlight exceptions, main display grid versus informative data grid, commentaries

Dashboard content flow
One key analysis per dashboard. Components:
- Title
- Viewpoint: use when a global context must be set for the entire dashboard, i.e., a time period or territory
- Page selectors: order based on usage and keep consistent across dashboards
- Chart panels
- Display grid (below charts)
- Color coding
- Navigation to instructions where necessary
- Dynamic navigation in the plan across the hierarchies (i.e., clicking a level 1 item in one grid displays the relevant L2s and below)

Below is an example taken from a customer POC, with two synced grids and associated chart panels with color coding. Notice that displaying when the data was last refreshed is a popular technique, especially if dashboards are refreshed daily or weekly. Reporting dashboard example: