
A great question, and a frequent challenge at companies of every size and shape: how do you govern democratized analytics so that dashboards and automations don't break with no one left who knows how to fix them? This is a common problem as people move on to new jobs, and it is a long-standing issue that causes real pain for those who inherit the broken analytics.
First, I want to acknowledge that passing on knowledge and maintaining work processes is a challenge well beyond analytics. Whether we are talking about a complex Financial Planning Excel sheet (you know, the one that says don't touch anything below this line) or an HR process where the person who ran it for 20 years just left, ensuring the process is maintained and continues to run can be quite problematic. The best practices for reducing this pain are largely common across disciplines, but with analytics we have a few added things we can do, and I'll share some of those best practices here.

I also want to dig into that complex spreadsheet that has been passed along through several people; in fact, no one can remember who originally built it. When it finally breaks and you are called to 'fix it', you find that you can't reproduce the same answers. Frequently, you discover the spreadsheet hasn't been working correctly for years, but no one knew it. This happens all the time: business processes change, and the person running the sheet didn't understand what was below that 'do not touch' comment. I would actually bet that any spreadsheet old enough that no one remembers who built it, and complex enough that no one knows exactly how it works, is more likely to be producing errors than accurate answers. Luckily, these errors are frequently small and not material, but they are problematic nonetheless.
So what is one to do? How do you ensure these processes not only live on, but continue to be optimized and deliver for your organization? Setting these pieces up correctly is key to great governance, and they are often either neglected or used as a reason not to create new processes and solutions. But if you want to innovate and drive transformation, it is essential to spend time on these key practices.
For each automation, dashboard, or other type of analytic, it is important to document who owns the process, and with most modern analytic solutions this can be automated to a great degree; it is likely the most important practice I've seen successfully implemented at companies. With modern analytic tools like Alteryx, managing this can be incredibly easy. Imagine that every time someone 'publishes' a dashboard or automation to a server, the publication process requires them to identify the 'Owner' and the 'Backup' who steps in if that Owner is no longer available. This can be done with a form that is filled in when people submit their content for publication. Then, each day, a simple automation looks up all the Owners and Backups in the HR system to see if they are still at the company and in the same role. If the Owner moved, the automation notifies the Backup that they are now the Owner and asks who the new Backup is. If the Backup leaves, the Owner is contacted for a new Backup. And if both leave simultaneously, their management is notified that a new Owner is needed.
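To make that daily check concrete, here is a minimal sketch in Python of the decision logic. The asset inventory, HR roster, and notify() helper are hypothetical stand-ins for whatever your publication server and HR system actually expose.

```python
# A minimal sketch of the daily Owner/Backup check described above.
# The Asset inventory, HR roster, and notify() helper are hypothetical
# placeholders for your publication server and HR system.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    owner: str   # employee id of the Owner
    backup: str  # employee id of the Backup

def is_active(employee_id: str, hr_roster: dict) -> bool:
    """True if the person is still at the company and in the same role."""
    record = hr_roster.get(employee_id)
    return record is not None and record.get("status") == "active"

def review_ownership(assets: list, hr_roster: dict, notify) -> None:
    for asset in assets:
        owner_ok = is_active(asset.owner, hr_roster)
        backup_ok = is_active(asset.backup, hr_roster)
        if owner_ok and backup_ok:
            continue  # nothing to do today
        if not owner_ok and backup_ok:
            # Owner moved on: the Backup becomes Owner and names a new Backup.
            notify(asset.backup, f"You are now the Owner of '{asset.name}'. "
                                 f"Please nominate a new Backup.")
        elif owner_ok and not backup_ok:
            # Backup left: ask the Owner for a replacement.
            notify(asset.owner, f"Please nominate a new Backup for '{asset.name}'.")
        else:
            # Both left at once: escalate to management for a new Owner.
            notify("management", f"'{asset.name}' needs a new Owner and Backup.")
```

In practice this would be scheduled to run once a day against the full list of published content, with the publication form supplying the Owner and Backup for each asset.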
This process is also great at driving some processes to be decommissioned, and it ensures that awareness and knowledge transfer happen when people are moving rather than when processes are breaking. Does it eliminate the need for knowledge transfer? Of course not. Are there process Owners who accept the responsibility but never perform the knowledge transfer? Of course. That said, this is one BIG step in the right direction at many organizations.
This one is a bit more difficult to get right. The skillset needed to build a complex, all-new process can be quite different from the skillset needed to maintain it, and amazingly talented professionals would much rather be building the new stuff than maintaining the old stuff. So some organizations set up a separate team to handle maintenance. This role frequently leverages more junior, lower-cost talent and helps free up the more senior talent. Before jumping to this as the answer for everything, here are some cons of this approach. First, when you build an all-new analytic and are forced to maintain it for a period of time, you will likely write a more robust solution to begin with, because you don't want to be constantly fixing it. In addition, going through the change management of a complex launch can significantly change the original solution, so some organizations perform these transitions only after some time has passed and the analytic is truly at a steady state. Another con is that the cost of knowledge transfer can be significant; it not only eats up time, it presupposes that when a problem comes up it will be easily solved by the more junior resource, which, while sometimes true, will not always be the case. Many of the issues will still come back to the more senior personnel, and for those, adding the junior resource is just adding cost without really making anything better.
With these pros and cons appropriately balanced, highly efficient teams understand which processes will use this approach and which will not. Would you do this with every spreadsheet on your desktop, teaching a junior resource how to run each one? Of course not. Nor would you do this for every dashboard, every automation, or every analytic. That said, if you start with the most complex, production-critical ones, you can decide when you have reached the point of diminishing returns from this approach.
So you've launched that new dashboard or automation, and the first step is determining who should have access. At many organizations, the owner of the process ends up spending time regularly adding and deleting people from the access list. A role-based approach, where you grant access based on the department, specific management levels, or other characteristics of the positions that should have access, works much better. While there will be exceptions at times, the base access control should default to a role-based setup. Then, an automation can run every day, and as people join (or leave) the department, access is automatically updated to add or remove them from the list. This moves you from the annual access control review that many organizations do to an automated daily access review. And guess what? When a new employee starts, they immediately have access to most of the things they need, without even submitting a request. Yes, this saves time. At most organizations I have worked with, when I ask what the number one ticket category is for their help desk, it is access requests. If you broaden this approach to go after the top access request items, you may find that this improved governance practice also saves considerable time and removes a ton of frustration from the organization.
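For illustration, here is a minimal Python sketch of that daily sync. The get_department_members(), get_access_list(), grant(), and revoke() helpers are hypothetical placeholders for your HR system and your analytics server's administration API.

```python
# A minimal sketch of a daily role-based access sync.
# The four helper functions passed in are hypothetical placeholders for
# your HR system and your analytics server's administration API.

def sync_access(department: str, dashboard: str,
                get_department_members, get_access_list, grant, revoke) -> None:
    should_have = set(get_department_members(department))  # role-based entitlement
    currently_has = set(get_access_list(dashboard))

    for person in should_have - currently_has:
        grant(dashboard, person)    # new joiners get access automatically
    for person in currently_has - should_have:
        revoke(dashboard, person)   # movers and leavers are removed the same day
```

Run on a schedule, the difference between those two sets is the entire daily access review.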
So you built a dashboard that counts the widgets the company sells each day. You've automated the process so that each day it updates with the latest data. You published it to a server and have moved on to your next challenge. How do you typically find out that the dashboard broke? It's seemingly always a phone call, on a Friday, when you're out on vacation. In my experience, the way we find out most systems are broken is when a user notices and writes a ticket. But how long did it take for that to happen? Were people using bad data up to that point? Did the system do the wrong thing while nobody noticed? Did we make bad decisions because we were seeing bad data? What if the automation could let us know when something looks amiss?
While it isn't practical to check every possible failure mode, experience suggests that a small number of failures are the most common culprits. What brings down most data and analytic applications or dashboards is the data feeds stopping: a field changes or moves, a credential expires, or an upstream system stops working and drives issues downstream. How easy would it be to add a simple check? Something like: if the number of records flowing through the pipeline is significantly off the norm, notify the Owner that something looks wrong; or if there is an error or warning, automatically notify the author. This can take a minute or two to add to the automation, but it can make a massive difference. It doesn't self-heal the process, but it gives the Owner significant advance warning compared to waiting until a user of the system notices and writes a ticket. Perhaps this means you'll find out on a Wednesday (instead of Friday) and have the issue fixed before the user ever knew something broke.
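As a rough illustration, here is what that record-count check might look like in Python. The 30% tolerance, the trailing history of daily counts, and the notify_owner() helper are all assumptions you would tune to your own pipeline.

```python
# A minimal sketch of the record-count sanity check described above.
# The tolerance, the history of recent daily counts, and notify_owner()
# are illustrative assumptions, not prescriptions.
from statistics import mean

def check_record_count(today_count: int, recent_counts: list,
                       notify_owner, tolerance: float = 0.30) -> None:
    if not recent_counts:
        return  # no history yet to compare against
    baseline = mean(recent_counts)
    if baseline == 0 or abs(today_count - baseline) / baseline > tolerance:
        notify_owner(
            f"Record count {today_count} is more than {tolerance:.0%} off "
            f"the recent average of {baseline:.0f}. Something may be wrong."
        )
```

The same pattern applies to error and warning events: wrap the pipeline run, and on any failure send the message to the Owner rather than waiting for a ticket.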
This last one is a bit broader, but it is such an important approach. Nearly every strong analytic organization has a Community of Practice: the go-to folks driving analytics maturity in the organization. It is really important for these leaders to see the analytics being built across the company, both to stay aware of what exists and to drive best practices.

One of my favorite best practices is to ensure that whenever analytics are published, they are shared in a review of sorts that has some of the top leaders of your analytic community in the room. These leaders should of course be celebrating the accomplishments of the team, and they can use this review to clap, pass out certificates and awards, and generally recognize the great work happening in analytics. However, it is also a place where they can see how people are building their solutions, and whether they are using best practices that will make those solutions robust over time. Dedicating time and effort to these reviews, and to genuinely helping (versus beating up) participants, can be a real boost to the robustness of the analytics being launched.
An important aspect of keeping things running is picking technologies that can be understood and maintained by a large percentage of your team. If you use a specialized tool that only a few in the organization know how to use, you will likely create maintenance and knowledge transfer issues. Today, as many analytic tools become easy enough that anyone in the business can build with them, it is important to strategically ensure you have picked tools that can be broadly leveraged wherever possible.
These are some of the top best practices I see that really do make a difference in the robustness and resilience of the analytics running in an organization. But the reality is that even with all of these best practices, there will be analytics that break, there will be knowledge gaps caused by people moving on to new jobs, and there will be analytics that 'go bad' over time. This is just as true in the analytics space as in every other area of your business: when good people move to new jobs, transferring their capabilities and knowledge to those left behind is hard. That said, the answer can't be, "Let's not hire great people; they are too hard to replace when they move," just as the analytic equivalent can't be, "Let's not innovate and drive impact with analytics, because it may be hard to maintain in the future."
And so, it is key that you put in the work to improve knowledge transfer in your organization, and hopefully these six best practices can help accelerate that journey. If you have other best practices, I'd love to hear about them!






