Bumping this up. To convey the pain felt by clients...imagine you land on a dashboard and your 5 POV selectors follow you as you scroll up and down the dashboard. 4 of those dimensions will change very infrequently. I might spend 60 minutes on a single Product, single Version, single Year. Most of my toggling is within the Contract dimension. Having the right-most POV selector be the selector you would change most frequently within a session adds a lot to productivity, eases user adoption of new dashboards, and reduces the mental friction that takes place when your POV selectors are in random order.
Love this Frankie!
One thing I hear all the time from my clients is "why should I dedicate time to this" ... "what is the business value?"
It can be difficult to justify the time and $ investment to ensure hygiene.
A few bullets I typically use to justify the business case for KonMari:
- The faster we do something (e.g., deliver some new functionality or business decision-making capability) without attention to hygiene, the more high-maintenance it is, and the more chaos it creates
- Investing time into hygiene ensures a low-maintenance, low-chaos solution that enables us to respond faster to the needs of the business and answer questions more-quickly, better serving our executive stakeholders and business partners
- Investing time into hygiene also enables us to more-rapidly incorporate new data, new logic, new decision-making capabilities into our models as the business grows and evolves; we can respond more quickly
- Lastly...SO many of our customers suffer from key-person risk. If one person leaves, they're in a bad way. One of the wonderful things about Anaplan is that it lets you digitize your business modeling...provided it's organized. If you have an organized Anaplan modeling environment, Anaplan modelers can be easily replaced and new modelers can more-easily navigate their models and find what they need to be effective in their roles. It is inevitable that key individuals will grow in their careers, pivot in their careers, move onto other opportunities, and good Anaplan model hygiene mitigates the risk of those individuals being the only people capable of operating and augmenting their organization's models. It's critical to a resilient Anaplan-enabled planning and analytics team.
Today’s healthcare landscape is a challenging one, and commercial business leaders in the pharma/biopharma industry are realizing they need to both respond to the urgent challenges of today and innovate so they are ready for the challenges and opportunities of tomorrow.
Successful commercial business leaders have been doing this by investing in their decision-making and predictive planning capabilities, connecting people, data, and analytics.
The Challenges Are Real
Commercial pharma leaders are under more pressure than ever to perform. By therapeutic area and drug class, the most-effective strategies for achieving brand awareness and securing market access are varied and ever-changing. Competition is intensifying, with fewer areas of unmet clinical need and increased difficulty achieving an added benefit over existing therapies. Healthcare budgets are more constrained than ever, with payers introducing increasingly restrictive cost-control mechanisms. And consolidation in the pharmacy benefit manager (PBM) space is giving PBMs more power to negotiate significant rebates with dubious value to the healthcare system.
Leaders across market access, managed markets, pricing and contracting, field sales, market analytics and insights, and gross-to-net channel management are taking these challenges head-on. Leaders who see the commercial forecasting process as a continuous, collaborative activity are making better decisions, returning millions of dollars in revenue leakage to the business by identifying and paring back their lowest-ROI commercial spend, and identifying and investing in high ROI strategies. Their teams are collaborating in real time on a single platform powered by a next-generation predictive analytics engine, enabling real-time decision-making where ROI can be predicted, measured, analyzed, and continuously improved.
We are seeing these leaders use this platform—Anaplan—to solve for three key challenges.
Challenge 1: Where should I invest my next dollar of commercial spend?
Using the Anaplan platform, commercial leaders are determining how to allocate sales and marketing spend based on which tactics will have the biggest impact on brand awareness. In parallel, they are identifying that they may be over-paying for (or perhaps under-investing in) market access. And they are tuning their investments across brand awareness and market access based on insights gleaned from their data.
On the brand awareness front, should we invest more in detailing, direct-to-consumer (DTC) campaigns, speaker programs, and key opinion leader education, or elsewhere? Imagine being able to go to a dashboard that tells you, “You ran these two DTC campaigns during these months, and based on a 20-percent average increase in demand by month six after the midpoint of the campaign—controlled for other factors using demand data, survey data, and tracking data—this was the total pull-through generated, and this is how long the increase in demand was sustained.” You get your answer in terms of the real ROI generated on your campaigns relative to other tactics. The true power of one platform, however, comes from being able to pivot from a dashboard for obtaining these insights to one where you can apply these insights. Being able to ask, “What happens if…” and getting the answer in terms that matter to you (e.g., demand, awareness, market share) and your stakeholders (e.g., in $ terms) in real time.
On the market access side, what rebates should I pay to obtain, maintain, improve, and defend market access? Given where my brand is in its lifecycle, if I want to achieve a 30-percent return on every rebate dollar paid for access, what terms should I offer, and what bid grid structures will result in what coverage across all associated plans? And what will the shift of covered lives (and associated Rxs) be from undesirable coverage status to more-desirable coverage status? Imagine being able to go to a dashboard that tells you, “You paid these rebates to improve access, which resulted in 50 percent of covered lives under the major PBMs having access to your brand at preferred status with restrictions. But, you can offer up to 5 percent more in rebates to plans of a certain classification (per the bid grid), and we predict that you will see 20 percent more plans cover your brand without restrictions, resulting in a 30-to-50 percent increase in Rx volumes, with a minimum 45-percent return on rebates and a maximum of 75-percent return on rebates.” Again, we can ask questions and get insights in real time, and in real $ terms; insights that drive better discussions amongst leaders and better, faster decision-making during market access contract negotiations.
Now, where should I invest my next $1? For each brand, analyze your actual spend in each of these areas, compare this spend to industry benchmarks, pull in competitive intelligence, and leverage available data and the technology of continuously learning predictive algorithms, to see the projected impact of your decisions before you make them. Successful organizations take the time to understand these complex interdependencies, and then delegate the day-to-day management of the math and mechanics of these interdependencies to cutting-edge technology like the Anaplan platform, letting their teams spend more time strategizing.
Challenge 2: How can I maximize the interactions our field reps have with prescribers?
Using the Anaplan platform, commercial leaders are maximizing brand awareness and growth from improving prescriber interactions. When field reps have the data and tools they need to identify the prescribers whose patients benefit from your brands and have the relevant information for that prescriber at their fingertips, positive outcomes result.
Imagine you’re a field sales rep and that you’re able to load up a dashboard on your iPad with a prescriber's prescribing statistics in your therapeutic area, the brands relevant to them, the efficacy and safety data that sets your brand apart from its competitors, the key opinion leaders (KOLs) in the healthcare community that are advocating for the brand’s benefits, the availability of the drug in local pharmacies, and whether that prescriber’s accepted insurance plans cover the drug (and patient assistance programs if they don’t), as well as the latest relevant data from medical affairs, pharmacovigilance, marketing analytics & insights, pricing & market access, and supply chain. Additional information that results from connected analytics includes:
What key opinion leaders are voicing and at what conferences/in what journals.
Average co-pays, coverage, and patient assistance programs.
Which brand(s) are running DTC campaigns, where the brand is in its lifecycle (e.g., new launch, high priority), brand potential, current brand awareness based on surveys.
Competitive information on other brands on the market or soon-to-launch.
This simple Anaplan dashboard is connected to the same single platform where collaborative commercial planning and analytics is occurring. The field rep has access to real-time information. And those field reps can use this same platform to provide real-time feedback to their counterparts; for instance, field sales informing managed markets and market access that prescriber sentiment is positive on a brand but that affordability and coverage needs to improve for a brand to reach its full potential.
Challenge 3: We are making decisions based on stale, incomplete, and misaligned data. How can I speed up our team’s information access and turnaround time?
Using the Anaplan platform, commercial leaders are making better decisions, faster. Whether you’re negotiating market access contracts with payers or forecasting GTN channel mix, real-time information from the field can be invaluable. Critical business decisions and deliverables rely on the availability and readiness of data from many sources. For market access: claims data, often processed through Integrichain; actual TRx and NRx demand data from IQVIA; claims and co-pay data from IQVIA and other longitudinal data providers; MMIT coverage and lives data by brand; and DRG formulary data. For gross-to-net forecasting: shipments and returns, rebate rates and channel mix, and financials from the GL and sub-ledgers. From a modeling perspective, the ripple effect of a shift in actual contracted volumes, or coverage status, or a change in our national-level or channel-level forecast assumptions, or a shift in channel mix assumptions, can be massive and hard to correct after the fact. The longer it takes to incorporate this data into a common data model, the less agile you are, the less-ready you are to make decisions, and the further you are behind your competition, whether it’s a competing brand or a PBM.
Many pharma organizations use Excel for commercial analytics and decision-making—whether in the market access or managed markets functions, upstream or downstream—and they choose Excel because they need flexibility and customizability. These commercial leaders are using Excel for everything from national unit volume forecasting to pre-/post-deal analytics and process management; from GTN channel forecasting to accruals and true-ups. Because all of these areas are using Excel, these processes are often sequential instead of collaborative, with each stakeholder incorporating data and assumptions into their offline models and shipping the results to the next stakeholder down the line. In this spreadsheet-driven world, real-time collaboration is next to impossible and individual stakeholders are resigned to high-impact interdependencies remaining unaccounted for. The result is siloed decision-making based on incomplete and stale information, and frustratingly low agility in a fast-changing landscape.
Anaplan-enabled pharma organizations run their commercial forecasting and analytics on Anaplan to collaborate in real-time. As external market events are captured, models in Anaplan predict demand shifts at a national, channel, and account level. As brand awareness tactics are re-prioritized, Anaplan models flex national and regional volume projections up and down. As gross-to-net channel mix assumptions change, so too does the channel-level volume forecast, and national, channel, and account-level forecast stakeholders and pricing and finance stakeholders are all aware and able to discuss the drivers of change. Meanwhile, pricing and contracting analysts responsible for pre-deal forecasting can see the impact of these changes on projected contract performance and negotiate with full visibility into projected payer and plan-level volumes. The field, in the meantime, has access to the latest clinical safety and efficacy data, competitive data, and patient affordability data (formulary access and copay program data) all in one place to maximize pull-through, providing a seamless patient and provider experience (as important as clinical outcomes, nowadays).
Challenge Begets Opportunity
Commercial business leaders are innovating to realize their brands’ commercial potential. And enabling technologies like Anaplan are removing the constraints that have been a barrier to connecting people, data, and analytics in the past.
With a connected decision-making and predictive analytics capability, we are seeing a new generation of leaders emerge who can confidently navigate a challenging market landscape and guide their commercial organizations toward brand success, delivering value from year 1 to year 10 and beyond.
Kevin Jacokes is the North America Industry Lead for Life Sciences at Impetus Consulting Group. He has 12+ years of experience in delivering connected planning, analytics, and decision-making capabilities to global organizations, implementing the data infrastructure, deploying the analytical models, and coaching the teams who make these capabilities possible. Kevin believes in the Anaplan platform and ecosystem because he has seen the effect it has on the cultures of organizations that adopt it. He has seen it promote collaboration, drive curiosity, unlock creativity, and empower team-driven continuous improvement.
Summarized Feature Request
End users need the ability to manage list items (i.e., metadata) via the Excel add-in.
By far the highest-priority and highest-value need is to add new list items.
Users switching from Oracle Hyperion / Essbase love the Essbase SmartView Add-in for Office. One of the reasons they love it is that they can enter data at the intersection of dimensions in a very sparse cube (module, in Anaplan). Effectively, they are able to quickly use Excel to load data into any intersection of dimensions in very sparse cubes/modules, without having to worry about creating items (metadata) first.
In Anaplan, we use composite hierarchies to manage sparsity and performance, and to ensure calculations happen in real-time. This enables us to have models that are lightning-fast, but if users want to create data at the intersection of multiple sparse dimensions (e.g., at the intersection of Company, Cost Center, Product, GL Account), then we've probably used a composite hierarchy to capture this data, and to store it, we need to enable a user to add an item to that composite hierarchy.
Challenge / Pain Point
The pain here is two-fold:
1. Anaplan users have to think about both metadata and data whenever they want to store data in a composite hierarchy, performing 2 steps: first, create the composite hierarchy item (a step that is inherently data-driven); second, store the data at that intersection
2. Anaplan users cannot manage metadata via the Excel add-in, making it virtually impossible to rely on the Excel Add-in for production data entry in a Corporate Finance setting where new composite hierarchy items are being added all the time
In the web interface, adding a new item to a list can be done using various types of actions and via the Edit > Insert feature, but no such ability exists in the Excel Add-in.
First...a couple of acknowledgements
As an Anaplan evangelist, let me first acknowledge that:
1. Users should not be using Excel to submit data; just use the web interface!!!
2. The fact that other technologies enable users to create intersections amongst sparse dimensions can cause some big issues; users can inadvertently balloon the size of their cubes, causing a drag on performance
How can we simplify the user experience for users who want to submit data to items in composite hierarchies that represent net new intersections of planning dimensions (e.g., Entity, Cost Center, Product, GL Account)?
This would bring us to feature parity with the Essbase Add-in for Smart View.
And what is the business value?
Sales: Increase Anaplan's value proposition to Corporate Finance at large organizations with sparse charts of accounts, enabling the Anaplan Excel Add-in to be positioned as having feature parity with the Essbase Excel Add-in
Customer Satisfaction and Market Development: Help CS and Partners convert an entire generation of Essbase users to Anaplan; they will love that Anaplan gives them what Essbase did, and does so much more.
Provide a smoother transition for folks moving from Essbase (and the Essbase Excel Add-in) to Anaplan (with its more browser-based user experience)
Having worked with many of these customers, I can say the change management necessary to get them to go cold turkey on Excel for working with sparse data sets is tremendous, and the lack of this feature makes switching from Essbase to Anaplan extremely painful
Customer Success and Retention: Improve adoption and reduce implementation risk for large implementations; users can always fall back on submitting data via Excel Add-in, since data entry and composite hierarchy management are the most fundamental of Minimum Viable Product (MVP) requirements
Productivity and ease of use: Enable users to add data (and supporting metadata) via the Anaplan Excel Add-in, in a single step:
Our clients are requesting the ability to (1) add composite hierarchy items to Anaplan lists via the Excel Add-in, and (2) add those items at the same time as they add data.
Per #1 above...
- Ability to point the Anaplan Excel Add-in to an Anaplan list
- Ability to *retrieve and read* all items from that list, including its parents and ancestors, into an "active grid" that is in sync with metadata in Anaplan
- Ability to add a row to that "active grid", enter in values, and *submit* those values; if the values are unique and in all other ways valid, an item will be added to the list in Anaplan
- Ability to enter and *submit* property data to a list for one or more items (nice-to-have)
Per #2 above...
- Ability to submit data (existing functionality, today), and in doing so, create a composite hierarchy item for that data to be stored at, if the composite hierarchy item does not already exist
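To make the requested single-step behavior concrete, here is a rough Python sketch of the "create the item if it doesn't exist, then store the data" upsert. All names (CompositeList, submit_value) are illustrative, not Anaplan APIs.

```python
# Hypothetical sketch of the requested "create-on-submit" behavior.
# CompositeList and submit_value are illustrative names, not Anaplan APIs.

class CompositeList:
    """Stands in for an Anaplan list backing a composite hierarchy."""

    def __init__(self):
        self.items = []    # composite hierarchy codes, in creation order
        self.values = {}   # code -> value stored at that intersection

    def submit_value(self, dims, value):
        """Submit data at an intersection, creating the item if needed."""
        code = "_".join(dims)
        # Today this first step is manual in Anaplan: the user must create
        # the list item before loading data. The request is to do it
        # implicitly, in the same step as the data submission.
        if code not in self.values:
            self.items.append(code)   # create the list item on the fly
        self.values[code] = value     # then store the data there
        return code

cube = CompositeList()
code = cube.submit_value(("CompanyA", "CC100", "ProdX", "6000-Travel"), 1250.0)
cube.submit_value(("CompanyA", "CC100", "ProdX", "6000-Travel"), 1300.0)
```

Note that re-submitting to an existing intersection updates the value without creating a duplicate item, which is the Essbase SmartView behavior users are asking for.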
Hello, and welcome to the Life Sciences & Health Care User Group! We’re glad you’re here. We (your group facilitators) are here to help you as best as we can, so please reach out at any time. Use the Forum to ask questions and start new discussion topics. You can also tag us on a post or send us a private Community message. Meanwhile, please take a moment here to introduce yourself to the rest of the group. Since introductions can sometimes feel a little awkward, here are a few questions to help you get started: How did you find the Anaplan Community? What specifically are you working on at the moment and how can this group assist you? What goals do you have for your participation in this group? — Bob Debicki (@BobD), Kevin Jacokes (@kjacokes), and Matthew Daniel (@mdaniel)
Hi @Rebecca, any word on this? We have 2 customers (big ones for Anaplan) who need this functionality, and it would help them avoid technical workarounds that would be time-consuming to build and maintain. This time could otherwise be allocated to expansion into new areas of the business and adding value that benefits end-users. I can DM you with the client names if that helps with prioritization. Thank you, Kevin
Yes, we are doing this in a few of our models. You're not missing anything; this will work. There are a few cautions, though:
- To mitigate the risk of affecting the user experience, only set something like this up to run against small models where processing time is limited.
- To mitigate the risk that these API calls run away and "pile up", make sure that your process (as initiated by the scheduler) is designed to be robust and stops initiating API calls if old API calls are still outstanding: don't kick off a new process until the old one is done, and don't kick off processes if any other API call(s) are being made.
- To mitigate the risk of affecting other models or workspaces, put components with high API traffic into separate models, in separate workspaces; if you crash the workspace due to overloading it with API calls (it's happened), you don't want to affect other models in your workspace.
Let me know if you have any questions. Cheers, Kevin
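If it helps, here's a minimal Python sketch of the "don't pile up" guard described above. The names are illustrative; your scheduler and Anaplan API client will differ.

```python
# Minimal sketch of the "don't pile up" guard: skip a scheduled run if the
# previous run is still in flight. Names are illustrative, not Anaplan APIs.
import threading

_in_flight = threading.Lock()

def run_scheduled_process(call_anaplan_api):
    """Run the integration only if no earlier run is still outstanding."""
    if not _in_flight.acquire(blocking=False):
        return "skipped: previous run still in progress"
    try:
        return call_anaplan_api()
    finally:
        _in_flight.release()

# Simulate the scheduler firing again while a run is still in flight:
overlap_results = []

def long_running_job():
    # a second tick arrives mid-run; it must be skipped, not stacked
    overlap_results.append(run_scheduled_process(lambda: "inner"))
    return "done"

first = run_scheduled_process(long_running_job)
later = run_scheduled_process(lambda: "ok")   # runs once the lock is free
```

The key design point is the non-blocking acquire: an overlapping tick returns immediately instead of queueing up behind the running call.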
Hi Vamshidhar, This error will happen when your Certificate Signing Request (CSR), used to obtain your certificate, does not have the e-mail field populated. After you obtain your certificate, opening it in Mozilla Firefox's Certificate Manager should show E=<e-mail address>. Let me know if you have any questions. Cheers, Kevin
Hi @Wendy, Happy to try and help. Keep me honest on my understanding of your question and which piece of the allocation logic we're trying to figure out. My understanding of the question is: "How would one approach the allocation in such a way that it (a) works, (b) is easy to follow step-wise, and (c) accommodates allocating from parents to children, across multiple dimensions?"
Regarding "a" and "b" above, from an approach perspective, whenever I am thrown a new allocation I will:
1. Break it down into as many steps as possible. I find that I start to confuse myself when I try to combine the data movement into a single, big step.
2. Make liberal use of separate dimensions; when proving out the allocation I try not to use concatenated dimensions. I build out each of my lists separately (e.g., State-WLSR as one list, Brand-PD as another list, LineCode-CN as a third list). And I add to those lists only the items I need in order to build out an allocation from a single source data point to several target data points. The idea here isn't to build out the whole thing, but to prove the math can work and to visualize the dimensionality in action.
3. Once I prove out the allocation logic for one source data point, I'll do it for a second, and a third. At that point I'll optimize the allocation for performance and size, rebuilding it, but this time using concatenated lists. At this stage I am still not introducing the full data set, or the full set of list items, because I'm still figuring out module sizing and don't want to "blow up" my model. Note that at this stage I have the benefit of being able to reference the example I built in #2, making this less of a problem-solving exercise and more of a translation exercise (I need to do the same math, just with concatenated lists instead of stand-alone lists).
4. Finally, once we've proven out the optimized, efficient design (from a size and performance perspective), we incorporate all of the items into our lists, to calculate real results we can validate, get an idea of total module size, and gauge performance.
Now to item "c". If I correctly understood, you're asking about allocating from parents to children. One way to approach this would be to:
1. In a staging module, dimensionalize your flat source data set by whatever the source dimensionality is; let's say it's by State, Brand, and LineCode.
2. Create a process that updates a list with WSLRPDCN items rolling up to State-Brand-LineCode items (or whatever your source module dimensionality is); this process should run whenever any of these lists is updated AND whenever you load in a new source data set with allocable expense.
3. In a target module, look at each of your WSLRPDCN items. Have a line item of value 1 called "# of Allocation Target Intersections". Sum up that value by State-Brand-LineCode. Then calculate the allocated expense in the target intersections by taking Allocable Expense from the source module (some dimension-traversing functions required; SUM and LOOKUP) and dividing it by the "# of Allocation Target Intersections" in the target module.
Let me know if you have any questions. Don't hesitate to e-mail me at firstname.lastname@example.org if you have any questions or want to hop on a quick call to chat. Cheers, Kevin
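To make the math in step 3 concrete, here's a rough Python sketch of the even allocation. Names and example identifiers are illustrative; in Anaplan this would be the SUM/LOOKUP logic described above.

```python
# Rough sketch of step 3's even allocation, with illustrative names: each
# source amount is split evenly across the target intersections (WSLRPDCN
# items) that roll up to it.
from collections import defaultdict

def allocate_evenly(source, rollup):
    """source: {source_key: allocable_expense}
    rollup: {target_item: source_key}  (child -> parent mapping)
    Returns {target_item: allocated_expense}."""
    # "# of Allocation Target Intersections": a 1 per target, summed by parent
    counts = defaultdict(int)
    for parent in rollup.values():
        counts[parent] += 1
    # Divide the source amount by the intersection count (the SUM / LOOKUP step)
    return {child: source[parent] / counts[parent]
            for child, parent in rollup.items()}

allocated = allocate_evenly(
    {("MI", "BrandA", "LC1"): 900.0},
    {"WSLR1-PD1-CN1": ("MI", "BrandA", "LC1"),
     "WSLR2-PD1-CN1": ("MI", "BrandA", "LC1"),
     "WSLR3-PD1-CN1": ("MI", "BrandA", "LC1")})
```

A useful sanity check on any allocation like this: the allocated values should sum back to the source amount.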
Hi there, Happy to attempt an assist. Can you please provide a screenshot of your Anaplan Connect script (or the file itself) and a screenshot of your folder as viewed in Windows Explorer (assuming you're not working in a *nix environment) so we can see what all is in there (script, .jars, .properties, etc.)? Thank you, Kevin
In my experience, only the secured agent is supported, which means you would have to install that secured agent somewhere, whether on a hosted server or one managed by your organization. Either way, in my experience, a server needs to be provisioned for the agent to run on. Not ideal because it usually means IT needs to be involved in some capacity.
Hello Hina, If I'm understanding your issue correctly, you'd like to add a line item to a Line Item Subset--so that you can map list items to line items--but are having trouble adding the line item because it does not have a numeric data type. You've come across a known limitation. A product manager would say it's working as intended. 🙂
The Issue
The idea behind Line Item Subsets is that they enable you to turn a set of Line Items into a list so that you can then use the COLLECT() function to dimensionalize that data set in a downstream module and perform further calculations on it. For instance, in Module A you could have 10 line items applying to 3 dimensions, each showing some monthly metric or figure. If you want the ability to show the YTD value for all 10 of those items, you can add all 10 of those Line Items to a Line Item Subset, add your original 3 dimensions (plus your Line Item Subset) to Module B, and then have a Line Item in Module B that calculates YTD for all 10 items in one fell swoop. Because the intended use of Line Item Subsets is to support this COLLECT() functionality pulling forward numeric data, only number-formatted line items can be added to a Line Item Subset.
Questions
Can you elaborate on the mapping you're attempting to do? For our other manufacturing customers we'll typically have a mapping of Lots to Factories, indicating the Factory where a Lot (or grouping of Lots) will be produced. From the screenshots it looks like you may be doing something different here. What's the next step after you capture / populate (manually or formulaically) this mapping in the module? Are you using it for some aggregation, calculation, or transformation of data?
Potential Workarounds
First of all, I like the creativity. Creating Line Item Subsets and using them in mappings can be a clever solution to address nuanced functional requirements or user experience requirements that come up every now and then.
Happy to continue the conversation and figure out the best approach with you, but based on the information provided thus far, you may have more luck by making a list containing your Lots, and having that line item be of data format type "List" as opposed to "Line Item" or text. This way you can, for each Factory, assign a Lot item, looking it up from a property or line item that applies to the Factory Inbound Load list. Let me know your thoughts, also happy to hop on the phone to discuss. Cheers, Kevin
Hi Steve, I've run into similar issues to what you described, and depending on the situation, different approaches will make sense. While I won't advertise these as flawless approaches, we've had success with:
1. Identifying items based on a unique combination of properties, as well as
2. Having a module in Anaplan that takes long, text-based unique identifiers and assigns a shorter numeric code (e.g., "000123"), and implementing logic across the model that enables us to work with those shorter codes as opposed to the longer text-based identifiers
Both required an understanding of the pros and cons, and a well-thought-out approach, and at the end of the day worked well for our purposes. I just wanted to call out that these approaches can work, so long as the cons/downsides are well-understood, sufficiently mitigated, and don't pose risks to the scalability of the model over time. Let me know if you would like to discuss further. Cheers, Kevin
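For illustration, here's a rough Python sketch of approach #2 (assigning short numeric codes). The function and identifier names are hypothetical; in practice this logic would live in an Anaplan module and supporting actions.

```python
# Illustrative sketch of approach 2: map each long, text-based unique
# identifier to a short, zero-padded numeric code and use the codes
# throughout the model. Names here are hypothetical.

def build_code_map(identifiers, width=6):
    """Assign each unique identifier a stable short code like '000123'."""
    codes = {}
    # Sorting gives a deterministic assignment for a given input set
    for i, ident in enumerate(sorted(set(identifiers)), start=1):
        codes[ident] = str(i).zfill(width)
    return codes

ids = ["US-East|CC-Finance|Prod-A|Acct-6000",
       "US-West|CC-Sales|Prod-B|Acct-7000"]
code_map = build_code_map(ids)
```

One caveat worth noting in the real model: if codes must stay stable as new identifiers arrive over time, you'd append new codes rather than re-sort the full set.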
Hey Amanda, Happy to help. I've summarized my understanding of what you're looking to accomplish (keep me honest), outlined an approach I've used in the past for this, and put together a quick example including some screenshots. Let me know if we're not aligned on the desired outcome or if I've missed the mark on how to solve for it.

My understanding of what you're looking to accomplish: you're looking to set up an Anaplan Process (comprised of one or more Actions) that will:
1. Check whether certain conditions are met for each record that will be imported from a source to a target.
2. If the conditions are not met for *a particular* record, prevent that record from moving from the source to the target.
3. If the conditions are not met for *any single* record, have the entire Process throw a red 'X' or yellow '!' to indicate to the user that, at a Process level, something went wrong.

If this is correct, here's an approach I've used in the past.

To provide a visual cue to the user that something went wrong:
- In the module containing the items to which you are applying validation logic, create 2 new line items: the 1st of type Text and the 2nd of type Number.
- In the 1st, write a formula that uses IF-THEN-ELSE logic to check whether all required fields are populated with valid values. If so, return a text value of "1"; otherwise, return something like "Uh oh, we have a problem."
- Create an action that imports from that module to itself, mapping the 1st line item above (Text) to the 2nd line item above (Number).
- If all rows pass validation, you will be importing the text value "1" into a Number-formatted field for each row, which will be interpreted as the number 1. No issue here: green checkmarks all the way. However, if even one row fails validation, you will be attempting to import a non-numeric text value into a Number-formatted field, and the action will fail. If you add this action to the end of the Process, it should trigger a failure or warning at the Process level, indicating to the user that something went wrong.

To prevent invalid items (due to missing or invalid data inputs) from being saved to the history module: you probably figured this one out, but it's as simple as adding a filter to the saved view that feeds data from your source module to your target module; the filter would look at a boolean that checks whether all fields are populated with valid values.

And here's an example...
- Data Entry Form module: applies to a 10-item list called "New Items", enabling me to enter and save data for up to 10 items at a time. It has 3 fields in which I want to capture some value; the 1st and 3rd are required, whereas the 2nd is a boolean, so technically it's optional / valid to leave it unchecked as false. I plan to enter values into this form, press a "Save" button, and, for any records with fields 1 and 3 populated, move those values into my History module.
- History module: applies to a list called "Historical Items". It looks like I've saved 3 items into this module thus far. It has 3 fields, supporting the same values from the Data Entry Form module.

Here's the Data Entry Form module with the additional line items to support the solution described above... And I've gone ahead and entered some data into the first two rows. Here's my saved view for the import data source that will feed my History module when I press "Save"... And here's my saved view for the import data source that will drive my Process to show a failure if I have an incomplete row. Here's how the latter action (triggering failure if we have any incomplete rows) looks when I'm setting up the action. And here's how it looks when I run it as part of a larger process.

If you want end-users to see a detailed list of actions and what succeeded / failed, you can check this box at the bottom of the Process configuration dialog: Zoomed-in: Zoomed-out: Let us know if that helps and if there's anything else we can do to assist. Cheers, Kevin
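The validation line item described above can be sketched as an Anaplan formula along these lines (Field 1 and Field 3 are hypothetical names for your required text-formatted fields; ISNOTBLANK applies to text, date, and list formats, so adjust the checks to your actual field types):

```
Validation Check = IF ISNOTBLANK(Field 1) AND ISNOTBLANK(Field 3)
    THEN "1"
    ELSE "Uh oh, we have a problem."
```

Mapping Validation Check into the Number-formatted line item is what makes the self-import fail whenever any row returns the non-numeric message.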
We have a lot of processes at our clients that will run just fine for months or years, then suddenly start failing because of this once a list hits 10M records. We would love to see this addressed, as it creates a lot of rework, uncertainty, and reliability issues for our clients.
Hi James, Yes, Anaplan can integrate with Power BI via Anaplan Connect. You can run a scheduled refresh cadence, or set it up so the data transfer is initiated as a "push" from Anaplan or as a "pull" from Power BI; we've set it up both ways. The integration relies on Anaplan Connect for the first leg of the journey and the Data refresh functionality in Power BI for the second leg. I would be happy to hop on the phone to talk you through how the integration might work, or answer any questions you or your IT group might have about setting it up. Let me know if you have a few minutes to catch up next week and I can take you through it. Best regards, Kevin
For those familiar with "Save As" functionality in MS Office, this is basically what I'm requesting. When you "Save As" in MS Office, you're shown the Windows Explorer dialog and can save over top of existing files. I would like to do the same for views. Not a big deal; I just find myself needing to do this fairly often, and it's tedious to find the name of the view I want to save over and then manually re-type or copy/paste it.
Great! A couple of assumptions first, then a couple of thoughts. I would be glad to spin up a quick example as well, once we can confirm I'm up to speed on the requirements here. Can you please let me know if I missed or mis-stated anything? Assumptions: 1. You want the ability to rank all Cost Centers based on the variance line item value, including Cost Centers that you do not have access to. 2. You want the end user to be able to view all of those ranked Cost Centers. 3. You *do not* want users to be able to see the variance line item values (the values on which the ranking is being done), and would want selective access to only let them see values for Cost Centers to which they have access. Here's a visual representation of the assumptions above: Options: Assuming I'm on the right track above, you could: 1. Create a Cost Center list (let's call it CC2) that mirrors your main Cost Center list (CC1). While CC1 has selective access applied, CC2 does not. You can pull the data used for ranking purposes into a module that applies to CC2, from a module that applies to CC1. Then, use Dynamic Cell Access to control access to sensitive information presented in the CC2 module. 2. Remove selective access from CC1 entirely, and use Dynamic Cell Access to manage access to data associated with CC1. Option #1 is less impactful and should be a straightforward workaround. Option #2 would only make sense if you have multiple areas of the model where access needs to be managed at a more granular level than the list level. Let me know whether or not the above makes sense. If it does and you're interested in an example, just DM me a copy of your module blueprint and I can spin something up for you. Kevin
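For the ranking piece in the CC2 module, a minimal sketch using Anaplan's RANK function (line item names are hypothetical; RANK's optional arguments control direction and tie handling, so check the function reference for your exact needs):

```
CC Rank = RANK(Variance, DESCENDING)
```

Since CC2 has no selective access, every user can see the full ranked list, while Dynamic Cell Access hides the underlying Variance values they shouldn't see.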
Hi @obriegr, Take a look at the use of single quotes and double quotes in the sample below, and see if using single quotes as shown helps:

WorkspaceId="'xxxxx'"
ModelId="'xxxx'"
Operation="-certificate '/xxx/yyy/zzz/Certs/certificate-00000000000000000000.cer' -file 'testData01.txt' -jdbcurl 'jdbc:oracle:thin:@host:port/sid' -jdbcuser 'schema:pw' -jdbcquery \"select * from TABLE1\" -import 'Anaplan Connect Test 01' -execute -output '/xxx/yyy/zzz/Logs/testScript01.log'"

Let us know. Cheers, Kevin
You raise a great point. At this stage, my understanding is you would have to either maintain duplicates of the Organization list, one for each workflow process, or fold the separate tasks into a combined workflow associated with your Organization list. I'll preface this next statement by saying that I don't speak for Anaplan; there may be a way to do exactly what you're asking, and if not, it may be on their product roadmap. That said, to your point about workarounds: given the unpredictable timelines associated with enhancements to the platform (for good reason; there are a lot of moving parts) and the many competing priorities in Anaplan DEV's enhancement backlog, if a client needs some functionality and a minor workaround provides what they're looking for and adds value (saves time, increases productivity, etc.) without introducing an unacceptable level of complexity, maintenance, or performance degradation, then I'll take that any day of the week. All that said, I'm really glad you raised this item because A) I haven't run into this particular need yet, but if I do, I'll be more aware of the limitations/options and better able to advise my clients, and B) if other Anaplan customers would like Workflow to support what you're describing, then hopefully it will get more attention.
Hi @ninja14127, Great question! We've had a customer integrate Anaplan with Power BI. There are a handful of options available to you, but as with most integrations, the recommended approach depends on your requirements (refresh rate / availability of data in Power BI, automation requirements, etc.) and on what ETL tools, if any, your organization owns: tools such as HyperConnect/Informatica Cloud, MuleSoft, and SnapLogic. If there is any sensitivity around sharing this information on the forum, feel free to DM me and we can arrange to continue the conversation offline. Cheers, Kevin Manager Impetus Consulting Group
Hi @Peter40, Great question. How we've done this in the past is to use the built-in Workflow functionality to support the approval/sign-off portion of your ask. Then, you use line items in a module that check whether a particular assumption has been entered for a particular organizational unit. If it has been entered, one such line item could return a value of 1; otherwise it returns 0. You can then aggregate these 1's and 0's so that, if you have 150 assumptions that must be captured across 20 organizational units, you can see a count of completed assumptions grouped by Org at various levels. You can then turn this table of data into a chart that provides a quick visual of where the organization stands in the process of supplying assumptions. Oftentimes we'll publish this sort of visual + status table to an FP&A dashboard, along with a MAILTO link in Anaplan that generates a templated e-mail to all organizational unit owners reminding them of key due dates and the assumptions still missing from them, and providing a link to the model. Does this make sense? Let me know if I missed or misunderstood anything. Cheers, Kevin
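As a minimal sketch of the completion-tracking line item, assuming a number-formatted assumption where 0 means "not yet entered" (names are hypothetical; for text- or list-formatted assumptions you would test with ISNOTBLANK instead):

```
Entered Flag = IF Assumption Value <> 0 THEN 1 ELSE 0
```

Summing Entered Flag across assumptions, with the organizational unit as a dimension, gives you the completed-vs-required counts to feed the status table and chart.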
Hi @bill.liao, You're in luck! Anaplan makes this function available to you as well: IRR function documentation and example. Also: Here is a list of financial functions in Anaplan. And here is a list of all Excel functions and their Anaplan equivalents. Let us know if this doesn't address your need. Cheers, Kevin
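For reference, a minimal usage sketch, assuming a module dimensioned by a transactions list with a cash flow line item and a corresponding date line item (names are hypothetical; check the linked documentation for the exact signature and argument order):

```
Project IRR = IRR(Cash Flow, Payment Date)
```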
Everything you've done looks correct to me. The next step I take in these instances is to re-create the import from scratch. Upload a brand new file (give it a unique name), make sure the file is in plain .txt format, and re-map everything. If you're still getting an error, I would take that file, remove all but one column, and attempt the import again. If that works, I would add one column at a time until you come to the column causing the error. One other thing to keep an eye on is the time scale applicability and range in your model. If you're loading monthly data to a line item, make sure the line item "Applies to" months, and that the months associated with the data you're loading are included in your model's time scale (e.g., if you're loading FY18 data, your time scale supports FY18). Most of these suggestions state the obvious or recommend checking things you've likely already checked, but that's an obligatory step when troubleshooting virtually rather than live. Let me know if you're still having trouble and we can hop on a quick phone call.
Hi @jsdeguzman, Can you post a screenshot of the "Time" and "Project Scheme" tabs in your import definition, so we can see the mapping logic there? Also, if you could post the "Details" section of the error message (assuming there is some information in that tab) that may help us track down what's going on here. If I had to hazard a guess, the "Time" tab is expecting mapping logic to help it interpret the format of your dates. Cheers, Kevin
Hey @PaulRitner , I respectfully disagree. Your response was informative and on-point! During Anaplan Launchpad trainings I always tout how active the Anaplan Community site is, so it brightens my day when I see customer questions getting multiple responses, representing different perspectives, informed by varied experiences, and with multiple examples. @jsdeguzman, if you need anything else, you've got a couple of folks here happy to help. Cheers, Kevin
Hello jsdeguzman, Certainly! If I'm understanding your question correctly, you want to use at least 2 different formulas for a single line item, with the formula differing depending on the version. OPTION 1 (recommended): Create a new "Version Control" module that applies to Versions. Create 1 new line item for each different set of formulaic logic you want to apply. This can be as simple as Actual and Budget, or you can create other conditions like "Is Plan?" or "Is Other?" and associate multiple versions with one of them, when multiple versions should share the same formula. Once that's set up, go to the module where you want to apply the version-specific formulas to a particular line item, and create something like the following, using conditional statements that check which version the formula is currently applying to (using the line items from our Version Control module) and applying different logic based on the TRUE flags in that module. You'll notice that the line item using this formula applies to "All" versions, as seen in the screenshot of the blueprint below. Let's zoom in on the formula there: IF Version Control.Is Actual? = TRUE THEN 123 ELSE IF Version Control.Is Plan? = TRUE THEN 456 ELSE IF Version Control.Is Other? = TRUE THEN 789 ELSE 0 Does that make sense? OPTION 2: This one is a bit less flexible, but you can use the ISACTUALVERSION() function in a conditional statement to apply different logic based on whether the version is Actual or something else. This doesn't allow differentiation of non-Actual versions, though, so I prefer to set up my models using Option 1 and modify it to meet my needs. Let me know if I misunderstood your question or if you need any additional clarification! Cheers, Kevin
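For completeness, a minimal sketch of Option 2, using ISACTUALVERSION() in place of the Version Control flags (the placeholder values 123 and 456 stand in for your version-specific logic):

```
IF ISACTUALVERSION() THEN 123 ELSE 456
```

As noted, this only distinguishes Actual from everything else, which is why Option 1 scales better as you add versions.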