We often see Anaplan Connect scripts created ad hoc as new actions are added, or existing scripts updated with these new actions. This works when a limited number of imports/exports/processes is running, and when these actions are relatively quick. However, as models and actions scale up and grow in complexity, this approach becomes inefficient: you end up either scheduling dozens of scripts or managing large, difficult-to-read scripts. I prefer to design for scale from the outset.

My solution uses batch scripts that call the relevant Anaplan Connect script, passing the action to run as a variable. There are a couple of ways I've accomplished this: dedicate a script to executing processes and pass in the process name, or pass in the action type (-action, -export, etc.) and name as the variable. I generally prefer the first approach, but be careful that your process doesn't become so large that it impacts model performance. Usually, I create a single script to perform all file uploads to a model, then run the processes. In my implementations, I've written each Anaplan Connect script to be model specific, but you could pass the model ID as a variable as well.

To achieve this, I create a "controller" script that calls the Anaplan Connect script, which looks something like this:

@echo off
for /F "tokens=* delims=" %%A in (Demand-Daily-Processes.txt) do (call "Demand - Daily.bat" %%A & TIMEOUT 300)
pause

This reads from a file called Demand-Daily-Processes.txt, where each line contains the name of a process as it appears in Anaplan, e.g.:

Load Master Data from Data Hub
...
Load Transactional Data from Data Hub

It then calls the Anaplan Connect script, passing this name as a variable. Once the script completes, the controller waits 300 seconds before reading the next line and calling the AC script again.
This timeout gives the model time to recover after running the process and prevents potential issues when executing subsequent processes. The Anaplan Connect script itself looks mostly as it usually does, except that in place of the process name, we use a variable reference.

@echo off
set AnaplanUser=""
set WorkspaceId=""
set ModelId=""
set timestamp=%date:~7,2%_%date:~3,3%_%date:~10,4%_%time:~0,2%_%time:~3,2%
set Operation=-certificate "path\certificate.cer" -process "%~1" -execute -output "C:\AnaplanConnectErrors\<Model Name>-%~1-%timestamp%"
rem *** End of settings - Do not edit below this line ***
setlocal enableextensions enabledelayedexpansion || exit /b 1
cd %~dp0
if not %AnaplanUser% == "" set Credentials=-user %AnaplanUser%
set Command=.\AnaplanClient.bat %Credentials% -workspace %WorkspaceId% -model %ModelId% %Operation%
@echo %Command%
cmd /c %Command%
pause

You can see that in place of declaring a process name, the script uses %~1. This tells the script to use the value of the first parameter provided. You can pass up to nine parameters this way, allowing you to pass in workspace and model IDs as well. The script also creates a timestamp variable from the current system time when executed, then uses it together with the process name to create a clearly labeled folder for error dumps, e.g. "C:\AnaplanConnectErrors\Demand Planning-Load Master Data from Data Hub-dd/mm/yyyy time".

By using this solution, as you add processes to your model, you can simply add them to the text file (keeping them in the order you want them executed), rather than editing or creating batch scripts. Additionally, you need only schedule your controller script(s), making maintenance easier still.
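For environments where Python is available, the same controller pattern can be sketched in a few lines. The file and script names below are taken from the batch example above; the subprocess call and pause logic are an illustrative sketch, not a supported Anaplan tool.

```python
import subprocess
import time
from pathlib import Path

def read_process_names(text):
    """Extract non-empty process names from the controller file, preserving order."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def run_controller(process_file="Demand-Daily-Processes.txt",
                   ac_script="Demand - Daily.bat",
                   pause_seconds=300):
    """Call the Anaplan Connect script once per process, pausing between runs."""
    names = read_process_names(Path(process_file).read_text())
    for i, name in enumerate(names):
        # Pass the process name as the first parameter (%~1 in the batch script)
        subprocess.run(["cmd", "/c", "call", ac_script, name], check=True)
        if i < len(names) - 1:
            time.sleep(pause_seconds)  # give the model time to recover
```

As with the batch version, adding a new process is just a matter of appending a line to the text file.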
View full article
While setting up your Anaplan Connect integration, if you encounter authentication errors with your proxy, it may be because the proxy uses NTLM. Anaplan Connect does not support this out of the box, but you can enable it by following these steps:

1. Navigate to http://jcifs.samba.org/
2. Download the latest JCIFS jar file (1.3.19 as of the date of this post)
3. Install the jar file in Anaplan Connect's lib folder

After doing this, you should be able to successfully authenticate with your proxy using the -via and -viauser options in your Anaplan Connect script.
View full article
I recently posted a Python library for version 1.3 of our API. With the GA announcement of API 2.0, I'm sharing a new library that works with these endpoints. Like the previous library, it supports certificate authentication; however, it requires the private key in a particular format (documented in the code, and below). I'm also pleased to announce that the use of a Java keystore is now supported.

Note: While all of these scripts have been tested and found to be fully functional, due to the vast number of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction. This library is a work in progress and will be updated with new features once they have been tested.

Getting Started

The attached Python library serves as a wrapper for interacting with the Anaplan API. This article explains how you can use the library to automate many of the requests that are available in our Apiary, which can be found at https://anaplanbulkapi20.docs.apiary.io/#. This article assumes you have the requests and M2Crypto modules installed, as well as Python 3.7. Please make sure you are installing these modules for Python 3, and not for an older version of Python. For more information on these modules, please see their respective websites:

Python (if you are using a Python version older or newer than 3.7, we cannot guarantee the validity of this article)
Requests
M2Crypto

Note: Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions that each script makes.
Gathering the Necessary Information

In order to use this library, the following information is required:

Anaplan model ID
Anaplan workspace ID
Anaplan action ID
CA certificate key-pair (private key and public certificate), or username and password

There are two ways to obtain the model and workspace IDs: while the model is open, go to Help > About, or select the workspace and model IDs from the URL.

Authentication

Every API request is required to supply valid authentication. There are two ways to authenticate:

Certificate Authentication
Basic Authentication

For full details about CA certificates, please refer to our Anapedia article. Basic authentication uses your Anaplan username and password.

To create a connection with this library, define the authentication type and details, and the Anaplan workspace and model IDs:

Certificate files:

conn = AnaplanConnection(anaplan.generate_authorization("Certificate", "<path to private key>", "<path to public certificate>"), "<workspace ID>", "<model ID>")

Basic:

conn = AnaplanConnection(anaplan.generate_authorization("Basic", "<Anaplan username>", "<Anaplan password>"), "<workspace ID>", "<model ID>")

Java keystore:

from anaplan_auth import get_keystore_pair

key_pair=get_keystore_pair('/Users/jessewilson/Documents/Certificates/my_keystore.jks', '<passphrase>', '<key alias>', '<key passphrase>')
privKey=key_pair[0]
pubCert=key_pair[1]

#Instantiate AnaplanConnection without workspace or model IDs
conn = AnaplanConnection(anaplan.generate_authorization("Certificate", privKey, pubCert), "", "")

Note: In the above code, you must import the get_keystore_pair method from the anaplan_auth module in order to pull the private key and public certificate details from the keystore.

Getting Anaplan Resource Information

You can use this library to get the necessary file or action IDs.
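A name-to-ID lookup of this kind boils down to a plain Python dictionary. The sketch below mirrors the helper names the library uses, but the bodies are illustrative and assume each resource entry carries "id" and "name" keys:

```python
def build_id_dict(resource_list):
    """Map each resource's name to its ID (assumes 'id' and 'name' keys per entry)."""
    return {item["name"]: item["id"] for item in resource_list}

def get_id(id_dict, name):
    """Return the ID for a resource name, or None if the name is not found."""
    return id_dict.get(name)

# Example with the shape of data a files listing might return (IDs are invented)
files = [{"id": "113000000000", "name": "countries.csv"},
         {"id": "113000000001", "name": "users.csv"}]
files_dict = build_id_dict(files)
```

Looking up `get_id(files_dict, "users.csv")` then returns the corresponding ID, which is exactly what the library's resource-dictionary helpers do for you.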
This library builds a Python key-value dictionary, which you can search to obtain the desired information.

Example:

list_of_files = anaplan.get_list(conn, "files")
files_dict = anaplan_resource_dictionary.build_id_dict(list_of_files)

This code builds a dictionary with the file name as the key. The following code then returns the ID of the file:

users_file_id = anaplan_resource_dictionary.get_id(files_dict, "file name")
print(users_file_id)

To build a dictionary of other resources, replace "files" with the desired resource: actions, exports, imports, or processes. You can use this functionality to easily refer to objects (workspace, model, action, file) by name rather than by ID.

Example:

#Fetch the name of the process to run
process=input("Enter name of process to run: ")

start = datetime.utcnow()
with open('/Users/jessewilson/Desktop/Test results.txt', 'w+') as file:
    file.write(anaplan.execute_action(conn, str(ard.get_id(ard.build_id_dict(anaplan.get_list(conn, "processes"), "processes"), process)), 1))
end = datetime.utcnow()

The code above prompts for a process name, queries the Anaplan model for a list of processes, builds a key-value dictionary keyed by resource name, searches that dictionary for the user-provided name, executes the action, and writes the results to a local file.

Uploads

You can upload a file of any size and define a chunk size up to 50 MB. The library loops through the file or memory buffer, reading chunks of the specified size and uploading them to the Anaplan model.
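The chunking behavior described above can be sketched as a simple generator. The 50 MB ceiling matches the limit stated here; the function itself is illustrative and not part of the library:

```python
MAX_CHUNK_MB = 50

def split_into_chunks(data, chunk_size_mb):
    """Yield successive chunks of at most chunk_size_mb megabytes from a bytes buffer."""
    if not 1 <= chunk_size_mb <= MAX_CHUNK_MB:
        raise ValueError("chunk size must be between 1 and 50 MB")
    size = chunk_size_mb * 1024 * 1024
    for offset in range(0, len(data), size):
        yield data[offset:offset + size]
```

Each yielded chunk would then become the body of one upload request.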
Flat file:

upload = anaplan.file_upload(conn, "<file ID>", <chunkSize (1-50)>, "<path to file>")

"Streamed" file:

with open('/Users/jessewilson/Documents/countries.csv', "rt") as f:
    buf=f.read()

print(anaplan.stream_upload(conn, "113000000000", buf))
print(anaplan.stream_upload(conn, "113000000000", "", complete=True))

The above code reads a flat file and saves the data to a buffer (this can be replaced with any data source; it does not necessarily need to read from a file). This data is then passed to the "streaming" upload method. This method does not accept a chunk-size input; instead, it simply ensures that the data in the buffer is less than 50 MB before uploading. You are responsible for ensuring that the data you've extracted is appropriately split. Once you've finished uploading the data, you must make one final call to mark the file as complete and ready for use by Anaplan actions.

Executing Actions

You can run any Anaplan action with this script and define the number of times to retry the request if there's a problem. In order to execute an Anaplan action, its ID is required. To execute, all that is required is the following:

run_job = execute_action(conn, "<action ID>", "<retryCount>")
print(run_job)

This will run the desired action, loop until complete, then print the results to the screen. If failure dump(s) exist, they will also be returned.

Example output:

Process action 112000000082 completed. Failure: True
Process action 112000000079 completed.
Failure: True
Details:
hierarchyName Worker Report
successRowCount 0
successCreateCount 0
successUpdateCount 0
warningsRowCount 435
warningsCreateCount 0
warningsUpdateCount 435
failedCount 4
ignoredCount 0
totalRowCount 439
totalCreateCount 0
totalUpdateCount 435
invalidCount 4
updatedCount 435
renamedCount 435
createdCount 0
lineItemName Code
rowCount 0
ignoredCount 435

Failure dump(s):

Error dump for 112000000082
"_Status_","Employees","Parent","Code","Prop1","Prop2","_Line_","_Error_1_"
"E","Test User 2","All employees","","101.1a","1.0","2","Error parsing key for this row; no values"
"W","Jesse Wilson","All employees","a004100000HnINpAAN","","0.0","3","Invalid parent"
"W","Alec","All employees","a004100000HnINzAAN","","0.0","4","Invalid parent"
"E","Alec 2","All employees","","","0.0","5","Error parsing key for this row; no values"
"W","Test 2","All employees","a004100000HnIO9AAN","","0.0","6","Invalid parent"
"E","Jesse Wilson - To Delete","All employees","","","0.0","7","Error parsing key for this row; no values"
"W","#1725","All employees","69001","","0.0","8","Invalid parent"
[...]
"W","#2156","All employees","21001","","0.0","439","Invalid parent"
"E","All employees","","","","","440","Error parsing key for this row; no values"

Error dump for 112000000079
"Worker Report","Code","Value 1","_Line_","_Error_1_"
"Jesse Wilson","a004100000HnINpAAN","0","434","Item not located in Worker Report list: Jesse Wilson"
"Alec","a004100000HnINzAAN","0","435","Item not located in Worker Report list: Alec"
"Test 2","a004100000HnIO9AAN","0","436","Item not located in Worker Report list: Test 2"

Downloading a File

If the above code is used to execute an export action, the file will not be downloaded automatically.
To get this file, use the following:

download = get_file(conn, "<file ID>", "<path to local file>")
print(download)

This will save the file to the desired location on the local machine (or a mounted network share folder) and alert you once the download is complete, or warn you if there is an error.

Get Available Workspaces and Models

API 2.0 introduced a new means of fetching the workspaces and models available to a given user. You can use this library to build a key-value dictionary (as above) for these resources.

#Instantiate AnaplanConnection without workspace or model IDs
conn = AnaplanConnection(anaplan.generate_authorization("Certificate", privKey, pubCert), "", "")

#Set session variables
uid = anaplan.get_user_id(conn)

#Fetch models and workspaces the account may access
workspaces = ard.build_id_dict(anaplan.get_workspaces(conn, uid), "workspaces")
models = ard.build_id_dict(anaplan.get_models(conn, uid), "models")

#Select the workspace to use
while True:
    workspace_name=input("Enter workspace name to use (Enter ? to list available workspaces): ")
    if workspace_name == '?':
        for key in workspaces:
            print(key)
    else:
        break

#Select the model to use
while True:
    model_name=input("Enter model name to use (Enter ? to list available models): ")
    if model_name == '?':
        for key in models:
            print(key)
    else:
        break

#Extract workspace and model IDs from the dictionaries
workspace_id = ard.get_id(workspaces, workspace_name)
model_id = ard.get_id(models, model_name)

#Update the AnaplanConnection object
conn.modelGuid=model_id
conn.workspaceGuid=workspace_id

The above code creates an AnaplanConnection instance with only the user authentication defined. It queries the API for the ID of the user in question, then queries for the available workspaces and models and builds dictionaries from the results. You can then enter the name of the workspace and model you wish to use (or print all available names to the screen), and finally update the AnaplanConnection instance to be used in all future requests.
View full article
With Bring Your Own Key (BYOK), you can take ownership of the encryption keys to your model data. If you have access to Anaplan Administration, you can encrypt and decrypt selected workspaces with your own AES-256 keys. Unlike the system master keys, keys created with BYOK are owned and secured by you. No mechanism exists for Anaplan employees to access your keys. For more information, see Bring Your Own Key in Anapedia. Note: Bring Your Own Key is an additional product that your organization can purchase if it has the Enterprise edition.
View full article
Who is MuleSoft? What do they do?

MuleSoft is a market-leading Integration Platform as a Service (iPaaS), or Enterprise Service Bus (ESB), vendor. iPaaS or ETL vendors are "middleware" software vendors that connect data from source systems to target systems within an enterprise company. For Anaplan customers, a partner like MuleSoft can import data from cloud, on-premise, database, and other source systems into Anaplan as a target system, and export or update Anaplan data to other target systems such as a BI or financial system. MuleSoft was acquired by Salesforce in early 2018 and will become the Integration Cloud business unit.

What is the difference between Anaplan's MuleSoft connector, Hyperconnect, Connect, and other Anaplan iPaaS partners like Dell Boomi and SnapLogic?

Anaplan Connect is a server-based Java scripting application that can import flat files into Anaplan. Connect is configured so business users can automate getting source data into Anaplan. It is typically the simplest way to import data sets into Anaplan.

Hyperconnect is an optional SKU Anaplan sells that includes the Informatica Cloud Service (ICS) and bundles of 2, 5, or more connectors, one of which must be the Anaplan Informatica connector. The Anaplan Enterprise Edition includes ICS and 2 connectors. ICS is a market-leading cloud ESB product for integrating on-premise and cloud application data into Anaplan. Anaplan sells, implements, and supports Hyperconnect.

MuleSoft is also a market-leading iPaaS/ESB vendor whose primary product is called Anypoint. MuleSoft sells and supports the Anypoint platform directly with their customers. Anaplan developed the v2 MuleSoft connector, which is available in the Mule Exchange, where certified connectors and integrations can be downloaded. Anaplan also has connectors for Dell Boomi and SnapLogic as additional iPaaS/ETL options for customers who have picked one of those vendors as their ETL solution.
What are the new features in the MuleSoft v2 Anaplan connector compared to our v1 connector?

 - Support for CA certificates
 - Works with Anaplan actions: Import, Export, Delete, and Processes
 - Supports both file-based and streaming data transfer, with configurable file chunks up to 50MB to increase performance
 - File previews for import and export actions
 - Automatic retries and configurable retry counts
 - Additional debug logs

How does MuleSoft position itself relative to other ESB, ETL, or iPaaS vendors?

MuleSoft reduces technology complexity and cost:
 - Anypoint Platform allows you to maintain and operate one platform to support the entire API and integration lifecycle instead of two
 - Developers do not have to hack orchestration into API gateways; they can use built-in integration functionality
 - A single runtime serves both integrations and APIs, reducing resource needs

MuleSoft helps your team deliver APIs faster:
 - Anypoint Platform enforces API design best practices and helps you avoid duplicated work with reusable templates
 - A mock service lets API owners validate the design before coding
 - Exchange seamlessly imports templates into built-in API design tools, so you do not have to rely on open-source tools (Swagger)

MuleSoft provides actionable visibility to help you prevent and reduce system downtime:
 - Anypoint Platform provides one place to monitor how data traverses APIs and integrations: end-to-end visibility
 - Built-in test tools work well with your existing CI/CD pipeline; you can use standard Java tools for debugging rather than having no visibility into a black box

Has MuleSoft certified this v2 connector?

Yes.

Which versions of the Mule runtime and Studio is the v2 Anaplan connector compatible with?

The Anaplan v2 connector is compatible with Mule runtime 3.8/3.9 and Studio 6.4.

My customer is using the Mule 4.x runtime. Is our v2 connector compatible with that?
Our v2 connector is not compatible with the Mule 4.x runtime today, but compatibility is on our roadmap for forthcoming enhancements.

What are the compelling benefits for a customer undertaking a v1-to-v2 connector migration, or starting with our v2 connector for Anaplan integration?

The basic authentication provided through the Anaplan APIs used with our v1 MuleSoft connector has been upgraded: the Anaplan v2.0 APIs now support certificate-based authentication. There are significant security, performance, data-orchestration, and monitoring features in our v2 connector that enable greater scaling, more real-time data flows into Anaplan, and less development effort for IT. Any customer using the Mule 3.x runtime must use our v2 MuleSoft connector.

Is there a price to use the MuleSoft v2 Anaplan connector?

No. Neither Anaplan nor MuleSoft charges for the Anaplan MuleSoft connector.

How does a customer get access to the v2 Anaplan MuleSoft connector?

All of MuleSoft's certified connectors and integrations can be found and downloaded from the MuleSoft Exchange at https://www.mulesoft.com/exchange/. Within the Exchange, search for "Anaplan" to find the connector, or go directly to https://www.mulesoft.com/exchange/org.mule.modules/mule-module-anaplan-connector/

What resources are available to help customers download, use, or migrate to the v2 Anaplan MuleSoft connector?

Within the Anypoint Exchange there is an FAQ section that explains how to download and use connectors with the Anypoint platform, and how to upgrade between connector versions. See https://www.mulesoft.com/exchange-faq

How is support provided for MuleSoft and the Anaplan MuleSoft connector?

MuleSoft sells their Anypoint platform directly to customers, and therefore provides tier 1 support to these customers, including for the connectors.
MuleSoft tier 1 support will work with Anaplan support, who will provide tier 2 and tier 3 support if there are issues within the Anaplan MuleSoft connector.

Will the v1 Anaplan connector continue to be available? Will the v1 connector be supported now that v2 is available?

The v1 connector will still be available through the Anypoint Exchange to download and use for development, but customers are encouraged to start all new development with the v2 connector. MuleSoft will no longer provide support for the v1 connector.

Resources

Anaplan-MuleSoft Connector Data Sheet (attached)
MuleSoft Connector v2.0 Presentation (attached)
View full article
What happens to History when I delete a user from a workspace?
View full article
Overview

The Anaplan Optimizer aids business planning and decision making by quickly solving complex problems involving millions of combinations to provide a feasible solution. Optimization provides a solution for selected variables within your Anaplan model that matches your objective based on your defined constraints. The Anaplan model must be structured and formatted to enable Optimizer to produce the correct solution. You are welcome to read through the materials and watch the videos on this page, but Optimizer is a premium service offered by Anaplan (contact your Account Executive if you don't see Optimizer as an action on the Settings tab). This means that you will not be able to actually do the training exercises until the feature is turned on in your system.

Training

The training involves an exercise along with documentation and videos to help you complete it. The goal of the exercise is to set up the optimization for two use cases: network optimization and production optimization. To assist you in this process, we have created an optimization exercise guide document that walks you through each of the steps. To further help, we have created three videos you can reference:

 - An exercise walk-through
 - A demo of each use case
 - A demo of setting up dynamic time

Follow the order of the items listed below to understand how Anaplan's optimization process works:

1. Watch the use case video, which demos the Optimizer functionality in Anaplan
2. Watch the exercise walkthrough video
3. Review documentation about how Optimizer works within Anaplan
4. Attempt the Optimizer exercise: download the exercise walkthrough document and download the Optimizer model into your workspace
5. Learn how to configure Dynamic Time within Optimizer: download the Dynamic Time document and watch the Dynamic Time video
6. Attempt the Network Optimization exercise
7. Attempt the Production Optimization exercise
View full article
As a model builder, you have to define line item formats over and over. Using a text expander/snippet tool, you can speed up the configuration of modules. When you add a new line item, Anaplan sets it by default as a Number (Min Significant Digits: 4, Thousands Separator: Comma, Zero Format: Zero, etc.). You usually change it once and copy it over to other line items in the module. Snippet tools can store format definitions for generic formats (number, text, Boolean, or no data) and, with a simple shortcut, paste them into the format of the desired line items.

Below is an example of a number-format line item with no decimals and hyphens instead of zeros. On my Mac, I press Option + X, type "Num...", and get a list of all the Number formats I pre-defined. I press Enter to paste one. It also works if several line items are selected.

The value stored for this Number format is:

{"minimumSignificantDigits":-1,"decimalPlaces":0,"decimalSeparator":"FULL_STOP","groupingSeparator":"COMMA","negativeNumberNotation":"MINUS_SIGN","unitsType":"NONE","unitsDisplayType":"NONE","currencyCode":null,"customUnits":null,"zeroFormat":"HYPHEN","comparisonIncrease":"GOOD","dataType":"NUMBER"}

Here is the result of a text-format snippet:

{"textType":"GENERAL","dataType":"TEXT"}

Or a Heading line item (No Data, Style: Heading 1):

---- false {"dataType":"NONE"} - Year Model Calendar All false false {"summaryMethod":"NONE","timeSummaryMethod":"NONE","timeSummarySameAsMainSummary":true,"ratioNumeratorIdentifier":"","ratioDenominatorIdentifier":""} All Versions true false Heading1 - - - 0

This simple trick can save you a lot of clicks. While we are unable to recommend specific snippet tools, your PC or Mac may include one by default, and others are easy to find for free or at low cost online.
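Because these stored values are plain JSON, you can sanity-check a snippet before saving it in your expander tool. A quick illustrative check using Python's standard json module (the format string below is copied from the example above):

```python
import json

# The number format shown above: no decimals, hyphens instead of zeros
number_format = ('{"minimumSignificantDigits":-1,"decimalPlaces":0,'
                 '"decimalSeparator":"FULL_STOP","groupingSeparator":"COMMA",'
                 '"negativeNumberNotation":"MINUS_SIGN","unitsType":"NONE",'
                 '"unitsDisplayType":"NONE","currencyCode":null,"customUnits":null,'
                 '"zeroFormat":"HYPHEN","comparisonIncrease":"GOOD","dataType":"NUMBER"}')

# json.loads raises an error if the snippet is malformed, e.g. after a stray edit
parsed = json.loads(number_format)
```

If the call succeeds, the snippet is well-formed and safe to paste into the Format field.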
View full article
Filters can be very useful in model building and are widely used, but they can come at the expense of performance; this is often very visible to users through their use on dashboards. Performance can also suffer on imports and exports, which in turn may block other activity, causing a poor perception of the model. There are some very simple guidelines for designing well-performing filters:

Using a single Boolean filter on a line item that does not have time or versions applied, and does not have a summary, is fastest. Try to create a Boolean line item that incorporates all the filter criteria you want to apply. This allows you to re-use the line item and combine a series of Boolean line items into a single Boolean for use in the filter. For example, you may want to filter on three data points: Volume, Product Category, and Active Status. Volume is numeric, Product Category is a list-formatted line item matching a user selection, and Active Status is a Boolean. Create a line item called Filter with the following formula:

Volume > Min Vol AND Product Cat = User Selection.Category AND Active Status

Here's a very simple example module to demonstrate this. A Filter line item is added to represent all the filters we need on the view; only the Filter line item needs to be dimensioned by Users. A User Selection module, dimensioned only by Users, is created to capture user-specific filter choices. (The original article includes screenshots of the data before and after the filter is applied.)

A best-practice suggestion is to create a filter module with a line item for each filter part. You may want other filters, and you can then combine each filter as needed from this system module. This reduces repetition and gives you control over the filters to ensure they can all be Boolean.

What can make a filter slow?

The biggest performance hit for filters is nesting dimensions on rows.
The performance loss increases with the number of nested dimensions and the number of levels they contain. With a flat list versus nested dimensions (filtering on the same number of items), the nested filter will be slower. This was tested with a 10,000,000-item list versus two lists of 10 and 1,000,000 items as nested rows; the nested-dimension filter was 40% slower.

Filtering on line items with a line item summary will be slow. A numeric filter on 10,000,000 items can take less than a second, but with a summary it will take at least 5 seconds.

Multiple filters will increase time. This is especially significant if any of the preceding filters do not lower the load, because they take additional time to evaluate. If you do use multiple filter conditions, try to order them so the most effective filters come first. If a filter rarely matches anything, evaluate whether it is even needed.

Hidden levels act as a filter. If you hide levels on a composite list, this acts like a filter before any other filter is applied. The hiding takes time to process, and the impact grows with the number of levels and the size of the list.

How to avoid nested rows for export views

Using nested rows can be a useful way to filter a complex set of data for export but, as discussed above, filter performance here can be poor. The best way around this is to pivot the dimensions so there is only one dimension on rows and use the Tabular Multi Column export option with a Filter Row based on Boolean option.

Some extra filter tips

Filter duration affects saved views used in imports, so check the saved view's open time to see the impact. This open time is incurred on every use of the view, including imports and exports. If you need to filter on a specific list, create a subset of those items and create a new module dimensioned by the subset to view that data.
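The single combined Boolean described earlier behaves like an ordinary short-circuit AND. As a loose analogy (the data and names below are invented for illustration), combining all criteria into one condition, with the most selective test first, is what keeps evaluation cheap:

```python
MIN_VOL = 100  # corresponds to "Min Vol" in the example formula

def passes_filter(row, selected_category):
    """One combined condition, mirroring:
    Volume > Min Vol AND Product Cat = User Selection.Category AND Active Status"""
    return (row["volume"] > MIN_VOL
            and row["category"] == selected_category
            and row["active"])

rows = [
    {"volume": 250, "category": "A", "active": True},
    {"volume": 50,  "category": "A", "active": True},   # fails the volume test
    {"volume": 300, "category": "B", "active": True},   # fails the category test
    {"volume": 400, "category": "A", "active": False},  # fails the active test
]
filtered = [r for r in rows if passes_filter(r, "A")]  # only the first row passes
```

Just as in the model, one Boolean per row is evaluated, rather than three separate filters applied in sequence.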
View full article
The Connect Manager is a tool that allows non-technical users to create Anaplan Connect scripts from scratch simply by walking through a step-by-step wizard.

Features include:
 - Create scripts for the import/export of flat files
 - Create scripts for importing from JDBC/ODBC sources
 - Ability to choose between commonly used JDBC connections – New in v4
 - Run scripts from the new Connection Manager interface – New in v4
 - Ability to use certificate authentication

Please note that this program is currently only supported on Windows systems and requires .NET 4.5 or newer to run (.NET has been included in the download package). The Connect Manager is approved by Anaplan for general release; however, it is not supported by Anaplan. If there are any specific enhancements you want to see in the next version, please leave a comment or send me an email at graham.gronhoff@anaplan.com.

Download the Anaplan Connect Wizard here.
View full article
With Bring Your Own Key (BYOK), you can now take ownership of the encryption keys to your model data. If you have access to Anaplan Administration, you can encrypt and decrypt selected workspaces with your own AES-256 keys. Unlike the system master keys, keys created with BYOK are owned and secured by you. No mechanism exists for Anaplan employees to access your keys.

Bring Your Own Key (BYOK) - User Guide

Bring Your Own Key (BYOK) is an additional product that your organization can purchase if it has the Enterprise edition.
View full article
Model Load: A large and complex model, such as one with 10B cells, can take 10 minutes to load the first time it is used after a period of inactivity of 60 minutes. The only way to reduce the load time, besides reducing the model size, is to identify which formulas take the most time. This requires Anaplan L3 support, but you can reduce the time yourself by applying the formula best practices listed above. One other possible lever is list setup: text properties on a list can increase load times, and subsets on lists can disproportionately increase load times by up to 10 times. See if you can improve model load by reviewing these two, and use module line items instead.

Model Save: A model saves when the amount of changes made by end users exceeds a certain threshold. This action can take several minutes and is a blocking operation. Administrators have no leverage over model save besides formula optimization and reducing model size.

Model Rollback: A model rolls back in some cases of invalid formulas, or when a model builder attempts to create a process, an import, or a view whose name already exists. In some large implementations, on a complex model of 8B+ cells, the rollback takes approximately the time needed to open the model, plus up to 10 minutes' worth of accumulated changes, followed by a model save. The recommendation is to use ALM and have a DEV model whose size does not exceed 500M cells, with production lists limited to a few dozen items, and have TEST and PROD models at full size with large lists. Since no formula editing happens in TEST or PROD, the model will never roll back after a user action. It can roll back in the DEV model, but that takes only a few seconds if the model is small.
View full article
Learn how to organize your model into logical parts to give you a  well-designed model that is easy to follow, understand and amend at a later date
View full article
Note: While all of these scripts have been tested and found to be fully functional, due to the vast number of potential use cases, Anaplan does not explicitly support custom scripts built by our customers. This article is for information only and does not suggest any future product direction. Getting Started Python 3 offers many options for interacting with an API. This article explains how you can use Python 3 to automate many of the requests available in our apiary, which can be found at   https://anaplan.docs.apiary.io/#. This article assumes you have the requests (version 2.18.4), base64, and JSON modules installed, as well as Python version 3.6.4. Please make sure you are installing these modules with Python 3, and not an older version of Python. For more information on these modules, please see their respective websites: Python   (If you are using a Python version older or newer than 3.6.4, or a requests version older or newer than 2.18.4, we cannot guarantee the validity of this article)   Requests   Base Converter   JSON   (Note: install instructions are not at this site but are the same as for any other Python module) Note:   Please read the comments at the top of every script before use, as they more thoroughly detail the assumptions each script makes. Authentication To start, let's talk about authentication. Every script run that connects to our API must supply valid authentication. There are two ways to authenticate a Python script that I will be covering: Certificate Authentication Basic Encoded Authentication Certificate authentication requires a valid Anaplan certificate, which you can read more about   here. Once you have your certificate saved locally, to convert your Anaplan certificate for use with the API, you will first need   openssl. 
Once you have that, you will need to convert the certificate to PEM format by running the following code in your terminal: openssl x509 -inform der -in certificate-(certnumber).cer -out certtest.pem If you are using certificate authorization, the scripts in this article assume you know the Anaplan account email associated with the certificate. If you do not know it, you can extract the common name (CN) from the PEM file by running the following code in your terminal: openssl x509 -text -in certtest.pem To be used with the API, the PEM certificate string needs to be converted to base64, but the scripts we cover take care of that for you, so I won't cover it in this section. To use basic authentication, you will need to know the Anaplan account email being used, as well as the password. All scripts in this article have the following code near the top:

# Insert the Anaplan account email being used
username = ''

# If using cert auth, replace cert.pem with your PEM-converted certificate
# filename. Otherwise, remove this line.
cert = open('cert.pem').read()

# If using basic auth, insert your password. Otherwise, remove this line.
password = ''

# Uncomment your authentication method (cert or basic). Remove the other.
user = 'AnaplanCertificate ' + str(base64.b64encode((
    f'{username}:{cert}').encode('utf-8')).decode('utf-8'))
# user = 'Basic ' + str(base64.b64encode((f'{username}:{password}'
#        ).encode('utf-8')).decode('utf-8'))

Regardless of authentication method, you will need to set the username variable to the Anaplan account email being used. If you are using a certificate to authenticate, you will need to have your PEM-converted certificate in the same folder, or a child folder, of the one you are running the scripts from. If your certificate is in a child folder, please remember to include the file path when replacing cert.pem (e.g. cert/cert.pem). 
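The encoding step described above, assembling the Authorization value from either username:password or username:PEM-certificate, can be wrapped in a small helper. This is an illustrative sketch based on the snippet above; the function name is ours, not part of the downloadable scripts.

```python
import base64

def build_auth_header(username, password=None, cert_pem=None):
    """Build the Authorization header value used by the scripts above.

    Pass either password (basic auth) or cert_pem (the contents of a
    PEM-converted Anaplan certificate). The pair is base64-encoded as
    username:secret, exactly as the script snippet does.
    """
    if cert_pem is not None:
        encoded = base64.b64encode(
            f'{username}:{cert_pem}'.encode('utf-8')).decode('utf-8')
        return 'AnaplanCertificate ' + encoded
    encoded = base64.b64encode(
        f'{username}:{password}'.encode('utf-8')).decode('utf-8')
    return 'Basic ' + encoded
```

The returned string is what the later scripts place in the `'Authorization'` header of each request.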
You can then remove the password line, its comments, and its respective user variable. If you are using basic authentication, you will need to set the password variable to your Anaplan account password, and you can remove the cert line, its comments, and its respective user variable. Getting the Information Needed for Each Script Most of the scripts covered in this article require you to know an ID or metadata for the file, action, etc., that you are trying to process. Each script that gets this information for its respective field is titled get_____.py. For example, if you want to get your file metadata, you run getFiles.py, which writes the file metadata for each file in the selected model in the selected workspace, in an array, to a JSON file titled files.json. You can then open the JSON file, find the file you need to reference, and use the metadata from that entry in your other scripts. TIP:   If you open the raw data tab of the JSON file, it is much easier to copy the whole set of metadata. The following are the links to download each get____.py script. Each get script uses the requests.get method to send a GET request to the proper API endpoint. getWorkspaces.py: Writes an array to workspaces.json of all the workspaces the user has access to. getModels.py: Writes an array to models.json of either all the models the user has access to (if wGuid is left blank), or all the models the user has access to in a selected workspace (if a workspace ID was inserted). getModelInfo.py: Writes an array to modelInfo.json of all metadata associated with the selected model. getFiles.py: Writes an array to files.json of all metadata for each file the user has access to in the selected model and workspace. (Please refer to   the Apiary   for more information on private vs. default files. Generally, it is recommended that all scripts be run via the same user account.) 
getChunkData.py: Writes an array to chunkData.json of all metadata for each chunk of the selected file in the selected model and workspace. getImports.py: Writes an array to imports.json of all metadata for each import in the selected model and workspace. getExports.py: Writes an array to exports.json of all metadata for each export in the selected model and workspace. getActions.py: Writes an array to actions.json of all metadata for all actions in the selected model and workspace. getProcesses.py: Writes an array to processes.json of all metadata for all processes in the selected model and workspace. Uploads A file can be uploaded to the Anaplan API endpoint either in multiple chunks or as a single chunk. Per our apiary: We recommend that you upload files in several chunks. This enables you to resume an upload that fails before the final chunk is uploaded. In addition, you can compress files on the upload action. We recommend compressing single chunks that are larger than 50MB. This creates a Private File. Note: To upload a file using the API, that file must already exist in Anaplan. If the file has not been previously uploaded, you must upload it initially using the Anaplan user interface. You can then carry out subsequent uploads of that file using the API. Multiple Chunk Uploads The script we have for reference is built so that if the script is interrupted for any reason, or if any particular chunk of a file fails to upload, simply rerunning the script will start uploading the file again, starting at the last successful chunk. For this to work, the file must initially be split using a standard naming convention, using the terminal command below. split -b [numberofBytes] [path and filename] [prefix for output files] You can store the file in any location as long as you use the proper file path when setting the chunkFilePrefix (e.g. 
chunkFilePrefix = "upload_chunks/chunk-". This will look for file chunks named chunk-aa, chunk-ab, chunk-ac, etc., up to chunk-zz in the folder script_origin/upload_chunks/; it is very unlikely that you will ever exceed chunk-zz). This lets the script know where to look for the chunks of the file to upload. You can download the script for running a multiple chunk upload from this link: chunkUpload.py Note:   The assumed naming convention is only standard if the file was split using Terminal, and does not necessarily hold if the file was split using another method in Windows. If you are using Windows, you will need to either create a way to standardize the naming of the chunks alphabetically {chunkFilePrefix}(aa - zz) or run the script as detailed in the   Apiary. Note:   The chunkUpload.py script keeps track of the last successful chunk by writing the name of the last successful chunk to a .txt file, chunkStop.txt. This file is deleted once the import completes successfully. If the file is modified between runs of the script, the script may not function correctly. Best practice is to leave the file alone, and delete it if you want to restart the upload from the first chunk. Single Chunk Upload The single chunk upload should only be used if the file is small enough to upload in a reasonable time frame. If the upload fails, it will have to start again from the beginning. If your file has a different name than that of its version on the server, you will need to modify line 31 ("name" : '') to reflect the name of the local file. This script runs a single PUT request to the API endpoint to upload the file. You can download the script for running a single chunk upload from this link: singleChunkUpload.py Imports The import.py script sends a POST request to the API endpoint for the selected import. You will need to set the importData value to the metadata for the import. See Getting the Information Needed for Each Script for more information. 
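The POST that import.py sends can be broken down into its three parts: the task URL, the headers, and the JSON body. The sketch below assembles them without sending anything; the function name is ours, not part of the downloadable script, and in import.py the pieces are passed straight to requests.post.

```python
import json

def build_import_request(wGuid, mGuid, import_id, auth_header):
    """Assemble the URL, headers, and body for an import task POST.

    Mirrors the request shape listed later in this article:
    POST .../workspaces/{wGuid}/models/{mGuid}/imports/{Id}/tasks
    with a JSON body of {'localeName': 'en_US'}.
    """
    url = (f'https://api.anaplan.com/1/3/workspaces/{wGuid}'
           f'/models/{mGuid}/imports/{import_id}/tasks')
    headers = {'Authorization': auth_header,
               'Content-Type': 'application/json'}
    body = json.dumps({'localeName': 'en_US'})
    return url, headers, body
```

Sending the request is then one line, e.g. `requests.post(url, headers=headers, data=body)`.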
You can download the script for running an import from this link: Import.py Once the import is finished, the script will write the metadata for the import task, in an array, to postImport.json, which you can use to verify which task you want to view the status of when running the importStatus.py script. The importStatus.py script will return a list of all tasks associated with the selected importID and their respective list indexes. If you want to check the status of the last run import, make sure you check postImport.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status, in an array, to the file importStatus.json. If the task is still in progress, it will print the task status and progress. If the task finished and a failure dump is available, it will write the failure dump in comma-delimited format to importDump.csv, which can be used to review the cause of the failure. If the task finished with no failures, you will get a message telling you the import completed with no failures. You can download the script for importStatus.py from this link: importStatus.py Note:   If you check the status of a task with an old taskID for an import that has been run since you last checked it, the dump will no longer exist; importDump.csv will be overwritten with an HTTP error, and the status of the task will be 410 Gone. Exports The export.py script sends a POST request to the API endpoint for the selected export. You will need to set the exportData value to the metadata for the export. See Getting the Information Needed for Each Script for more information. You can download the script for running an export from this link: Export.py Once the export is finished, the script will write the metadata for the export task, in an array, to postExport.json, which you can use to verify which task you want to view the status of when running the exportStatus.py script. 
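The status-checking logic shared by the *Status.py scripts boils down to one decision on the task-status response. The sketch below illustrates it; the field names (taskState, result, failureDumpAvailable) are assumptions based on the v1.3 task-status responses in the apiary, so verify them against your own responses.

```python
def summarize_task(task):
    """Classify one parsed task-status JSON object.

    Assumed response shape (per the apiary): a 'taskState' string, and on
    completion a 'result' object with a 'failureDumpAvailable' boolean.
    """
    if task.get('taskState') != 'COMPLETE':
        return 'in progress'
    result = task.get('result', {})
    if result.get('failureDumpAvailable'):
        return 'complete with failure dump'
    return 'complete with no failures'
```

In importStatus.py, the "failure dump" branch is where the dump gets fetched and written to importDump.csv.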
The exportStatus.py script will return a list of all tasks associated with the selected exportID and their respective list indexes. If you want to check the status of the last run export, make sure you check postExport.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status, in an array, to the file exportStatus.json. If the task is still in progress, it will print the task status and progress. It is important to note that no failure dump is generated if an export fails. You can download the script for exportStatus.py from this link: exportStatus.py Actions The action.py script sends a POST request to the API endpoint for the selected action (for use with actions other than imports or exports). You will need to set the actionData value to the metadata for the action. See Getting the Information Needed for Each Script for more information. You can download the script for running an action from this link: Action.py Once the action is finished, the script will write the metadata for the action task, in an array, to postAction.json, which you can use to verify which task you want to view the status of when running the actionStatus.py script. The actionStatus.py script will return a list of all tasks associated with the selected actionID and their respective list indexes. If you want to check the status of the last run action, make sure you check postAction.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status, in an array, to the file actionStatus.json. It is important to note that no failure dump is generated for a failed action. You can download the script for actionStatus.py from this link: actionStatus.py Processes The process.py script sends a POST request to the API endpoint for the selected process. You will need to set the processData value to the metadata for the process. 
See Getting the Information Needed for Each Script for more information. You can download the script for running a process from this link: Process.py Once the process is finished, the script will write the metadata for the process task, in an array, to postProcess.json, which you can use to verify which task you want to view the status of when running the processStatus.py script. The processStatus.py script will return a list of all tasks associated with the selected processID and their respective list indexes. If you want to check the status of the last run process, make sure you check postProcess.json to verify you have the correct taskID. Enter the index for the task and the script will write the task status, in an array, to the file processStatus.json. If the task is still in progress, it will print the task status and progress. If the task finished and a failure dump is available, it will write the failure dump in comma-delimited format to processDump.csv, which can be used to review the cause of the failure. It is important to note that no failure dump is generated for the process itself, only if one of the imports in the process failed. If the task finished with no failures, you will get a message telling you the process completed with no failures. You can download the script for processStatus.py from this link: processStatus.py Downloading a File Downloading a file from the Anaplan API endpoint will download the file in however many chunks it exists in on the endpoint. It is important to note that you should set the fileName variable to the name the file has in the file metadata. First, the download's individual chunk metadata is written, in an array, to downloadChunkData.json for reference. The script will then download the file chunk by chunk and write each chunk to a new local file with the same name as the 'name' listed in the file metadata. 
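The chunk-by-chunk download described above amounts to fetching each chunk in order and appending its bytes to one local file. A minimal sketch, where fetch_chunk stands in for the real requests.get against the chunks endpoint (the function name and parameter names are ours, not from downloadFile.py):

```python
def download_file(fetch_chunk, chunk_ids, out_path):
    """Rebuild a server file locally by fetching and appending each chunk.

    fetch_chunk is any callable returning the raw bytes of one chunk; in
    the real script this would be a requests.get against
    .../files/{fileID}/chunks/{chunkID} with Accept: application/octet-stream.
    """
    with open(out_path, 'wb') as out:
        for chunk_id in chunk_ids:
            out.write(fetch_chunk(chunk_id))
    return out_path
```

Because the chunks are written in order, the resulting file is byte-identical to the server copy.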
You can download this script from this link: downloadFile.py Note: If a file already exists in the same folder as your script with the same name as the name value in the file metadata, the local file will be overwritten by the file being downloaded from the server. Deleting a File You can delete the file contents of any file the user has access to that exists on the Anaplan server. Note: This only removes private content. Default content and the import data source model object will remain. You can download this script from this link: deleteFile.py Standalone Requests Code and Their Required Headers In this section, I will list the code for each request detailed above, including the API URL and the headers necessary to complete the call. I will be leaving the content to the right of the Authorization: headers blank. Authorization header values can be either Basic encoded_username:password or AnaplanCertificate encoded_CommonName:PEM_Certificate_String (see   Certificate-Authorization-Using-the-Anaplan-API   for more information on encoded certificates). Note: requests.get will only generate a response body from the server; no data is saved locally unless it is written to a local file. Get Workspaces List requests.get('https://api.anaplan.com/1/3/workspaces/', headers={'Authorization':}) Get Models List requests.get('https://api.anaplan.com/1/3/models/', headers={'Authorization':}) or requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models', headers={'Authorization':}) Get Model Info requests.get(f'https://api.anaplan.com/1/3/models/{mGuid}', headers={'Authorization':}) Get Files/Imports/Exports/Actions/Processes List The get requests for files, imports, exports, actions, and processes are largely the same. Change files to imports, exports, actions, or processes to run each. 
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files', headers={'Authorization':}) Get Chunk Data requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks', headers={'Authorization':}) Post Chunk Count requests.post(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData}) Upload a Chunk of a File requests.put(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkNumber}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local chunk file}) Mark an upload complete requests.put(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/complete', headers={'Authorization': , 'Content-Type': 'application/json'}, json={fileMetaData}) Upload a File in a Single Chunk requests.put(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/octet-stream'}, data={raw contents of local file}) Run an Import/Export/Process The post requests for imports, exports, and processes are largely the same. Change imports to exports or processes to run each. requests.post(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{Id}/tasks', headers={'Authorization': , 'Content-Type': 'application/json'}, data=json.dumps({'localeName': 'en_US'})) Run an Action requests.post(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/actions/{Id}/tasks', data=json.dumps({'localeName': 'en_US'}), headers={'Authorization': , 'Content-Type': 'application/json'}) Get Task List for an Import/Export/Action/Process The get requests for import, export, action, and process task lists are largely the same. Change imports to exports, actions, or processes to get each task list. 
requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{importID}/tasks', headers={'Authorization':}) Get Status for an Import/Export/Action/Process Task The get requests for import, export, action, and process task statuses are largely the same. Change imports to exports, actions, or processes to get each task status. Note: Only imports and processes will ever generate a failure dump. requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/imports/{ID}/tasks/{taskID}', headers={'Authorization':}) Download a File Note:   You will need to get the chunk metadata for each chunk of a file you want to download. requests.get(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}/chunks/{chunkID}', headers={'Authorization': , 'Accept': 'application/octet-stream'}) Delete a File Note:   This only removes private content. Default content and the import data source model object will remain. requests.delete(f'https://api.anaplan.com/1/3/workspaces/{wGuid}/models/{mGuid}/files/{fileID}', headers={'Authorization': , 'Content-Type': 'application/json'}) Note:  SFDC user administration is not covered in this article, but the same concepts from the scripts provided can be applied to SFDC user administration. For more information on SFDC user administration, see the apiary entry for  SFDC user administration .
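All of the URLs above follow one pattern. A small helper (our own naming, not from the downloadable scripts) can build any of them, which keeps the workspace and model IDs in one place:

```python
API_BASE = 'https://api.anaplan.com/1/3'

def endpoint(wGuid, mGuid, resource, resource_id=None, suffix=None):
    """Build a v1.3 endpoint URL following the patterns listed above.

    resource is 'files', 'imports', 'exports', 'actions', or 'processes';
    suffix covers trailing segments such as 'tasks', 'chunks/3', or 'complete'.
    """
    url = f'{API_BASE}/workspaces/{wGuid}/models/{mGuid}/{resource}'
    if resource_id is not None:
        url += f'/{resource_id}'
    if suffix is not None:
        url += f'/{suffix}'
    return url
```

For example, `endpoint(wGuid, mGuid, 'imports', importID, 'tasks')` produces the task-list URL shown above.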
View full article
This article describes how to use the Anaplan DocuSign integration with single sign-on (SSO).
View full article
Anaplan Connect v1.3.3.5 is now available. 
View full article
We're pleased to announce version 1.6 of the DocuSign for Anaplan integration. This release includes the following enhancements. Access the Header Bar from DocuSign for Anaplan You can now access the header bar directly from DocuSign for Anaplan. The header bar provides access to models, applications, notifications, and personal settings. The following tasks are now performed from the User menu, in the top-right corner: Change Account Switch between different DocuSign accounts. Change Workspace/Model Switch between different models. Logout Log out of Anaplan. Previously, these options were located in the workspace and email address dropdowns, which have been removed. Map Percentages from Anaplan We've added the ability to include percentage amounts in DocuSign documents populated with Anaplan data. When you're creating and editing envelopes, you can now map columns containing numeric line items formatted as percentages (including decimal amounts). You can map percentages to any Text tag in your DocuSign template. Updated Certificates DocuSign will renew its DocuSign Connect X.509 certificates in May 2018. To support this change, we've updated the SSL certificates used by DocuSign for Anaplan. You don't need to take any action. Known Issues & Workarounds The following issues will be addressed in a future release:
Issue: The status of a sent document does not change to Complete after all recipients have signed it in DocuSign and clicked Complete. Workaround: Refresh your browser to update the status.
Issue: If two or more of your DocuSign accounts are part of the same group, DocuSign information (for example, envelopes) from both accounts is available to logged-in users who connect the integration to any account in the group. You'll only encounter this issue if you have multiple DocuSign accounts organized using groups. Workaround: None.
Issue: Selecting an envelope in the Create a new workflow wizard can take up to 20 seconds if multiple envelopes exist. Workaround: None.
Issue: Leading and trailing spaces are not excluded from search queries. For example, searching for "envelope one " doesn't return an envelope named "envelope one". Workaround: Check for leading and trailing spaces when searching for envelopes, DocuSign Workflows, or sent documents.
Issue: The column dropdowns in step 2 of the Create a new envelope wizard do not close as expected. Workaround: None.
Issue: Workspace administrators can update envelope status in the Anaplan module. Non-workspace administrators cannot update status, although the integration will work and documents will be sent out. Workaround: None.
Issue: When reusing a DocuSign Workflow with the same module, certain document actions such as "decline" or "cancel" do not overwrite the columns used to track the envelope status. Workaround: Clear the columns used to track the envelope status.
Issue: When an existing envelope is edited to use a different DocuSign template and the workflow that used the envelope is subsequently edited, users are unable to save the edited workflow. When this occurs, nothing happens when you click the Save button on a workflow. Workaround: When you need to work with a different DocuSign template, create a new envelope rather than editing an existing one.
Issue: If you attempt to add more recipients to an existing DocuSign Workflow by changing the template, the workflow cannot be saved. Workaround: Create a new Envelope Workflow with the required template.
Useful Resources Browse the DocuSign for Anaplan section of Anapedia. Take the DocuSign Integration training course.
View full article
We're pleased to announce the February 2018 release of the Anaplan Connector for Informatica Cloud. This release fixes Success/Error row counts in the Monitor Log for Data Synchronization Tasks (DST). Exports Anaplan List exports: Success rows is the number of Anaplan list rows exported; the Error row count should be 0. Anaplan Module exports: Success rows is the number of Anaplan module rows exported; the Error row count should be 0. Imports Anaplan List imports: Success rows is the sum of the number of rows successfully updated/inserted and the number of rows updated/inserted with warnings; the Error row count is the number of failed rows. Anaplan Module imports: Success rows is the sum of the number of Anaplan cells successfully updated/inserted and the number of Anaplan cells updated/inserted with warnings; Error rows is the number of failed Anaplan cells. Note: Cells ignored by the Anaplan import action are not included in the above counts. For example, during a module import, any parent hierarchy level cells will be ignored. For more information, see the Anaplan Informatica Connector Guide. 
View full article
Audience: Anaplan Internal and Customers/Partners Workiva Wdesk Integration Is Now Available We are excited to announce the general availability of Anaplan’s integration with Workiva’s product, known as the Wdesk. Wdesk easily imports planning, analysis and reporting data from Anaplan to deliver integrated narrative reporting, compliance, planning and performance management on the cloud. The platform is utilized by over 3,000 organizations for SEC reporting, financial reporting, SOX compliance, and regulatory reporting. The Workiva and Anaplan partnership delivers enterprise compliance and performance management on the cloud. Workiva Wdesk, the leading narrative reporting cloud platform, and Anaplan, the leading connected-planning cloud platform, offer reliable, secure integration to address high-value use cases in the last mile of finance, financial planning and analysis, and industry specific regulatory compliance. GA Launch: March 5th  How does the Workiva Wdesk integration work? Please contact Will Berger, Partnerships (william.berger@workiva.com) from Workiva to discuss how to enable integration. Anaplan reports will feed into the Wdesk platform. Wdesk will integrate with Anaplan via Wdesk Connected Sheets. This is a Workiva built and maintained connection. What use cases are supported by the Workiva Wdesk Integration? The Workiva Wdesk integration supports a number of use cases, including: Last mile of finance: Complete regulatory reporting and filing as part of the close, consolidate, report and file process. Workiva automates and structures the complete financial reporting cycle and pulls consolidated actuals from Anaplan. Financial planning and analysis: Complex multi-author, narrative reports that combine extensive commentary and data such as budget books, board books, briefing books and other FP&A management and internal reports. Workiva creates timely, reliable narrative reports pulling actuals, targets and forecast data from Anaplan. 
Industry specific regulatory compliance & extensive support of XBRL and iXBRL: Workiva is used to solve complex compliance and regulatory reporting requirements in a range of industries.  In banking, Workiva supports documentation process such as CCAR, DFAST and RRP, pulling banking stress test data from Anaplan. Also, Workiva is the leading provider of XBRL software and services accounting for more than 53% of XBRL facts filed with the SEC in the first quarter of 2017.
View full article
We're pleased to announce the release of DocuSign version 1.5. In this release, we've introduced support for Single Sign-on (SSO) authentication. This means that DocuSign SSO users can authenticate the Anaplan DocuSign Integration with their single sign-on login, rather than entering their DocuSign password every time. Also, the Anaplan DocuSign Integration now supports all Anaplan data types with the exception of the "List" data type. See DocuSign for Anaplan for more information about the Anaplan DocuSign Integration. See the DocuSign SSO Authentication article for information about using the Anaplan DocuSign integration with single sign-on (SSO). To find out more about DocuSign SSO authentication, see the Overview section of the DocuSign Single Sign-on Implementation Guide. Known Issues & Workarounds
Issue: Anaplan Workspace Administrators can update Envelope status in the Anaplan module. Non-admins cannot update status, although the Integration will work and documents will be sent out. Workaround: There is no workaround.
Issue: When reusing a Workflow with the same module, certain document actions such as "decline" or "cancel" do not overwrite the columns used to track the envelope status. Workaround: Clear the columns used to track the envelope status.
Issue: When an existing Envelope is edited to use a different DocuSign template and the workflow that used the Envelope is subsequently edited, users are unable to save the edited Workflow. When this occurs, nothing happens when you click the Save button on a Workflow. Workaround: When you need to work with a different DocuSign.com template, create a new Envelope rather than editing an existing one.
Issue: If you attempt to add more recipients to an existing Envelope Workflow by changing the template, the workflow cannot be saved. Workaround: Create a new Envelope Workflow with the required template.
View full article