Three common mistakes in measuring the values of innovation and how to fix them

Lene Jeppesen

8 out of 10 public-sector workplaces in Denmark implemented at least one innovation during 2015-16. An innovation is a new or significantly changed way of improving the workplace's activities and results. 3 out of 4 innovations resulted in higher quality in the public sector, and nearly half made the public sector more efficient. Thus we can document that public-sector innovation is not an end in itself but a means to other ends. Those are some of the insights from the Danish InnovationBarometer – the world’s first statistics on public-sector innovation.

Denmark’s long history of public-sector innovation means that its public sector has a fairly high level of innovation maturity. The skills are there and are honed not only in dedicated schools and programs for innovation but throughout the education system – in elementary and secondary schools and in universities. At the workplace level we have innovation handbooks and innovation process models, we see fairly advanced use of innovation tools, and we continuously see really impressive results of this work. So in this fairy tale of public-sector innovation – what are we still struggling with? The answer from key innovation managers in the Danish public sector: implementing, evaluating and spreading innovation. How do we ensure that public-sector innovation is not seen as just a lot of sparks – the smaller projects driven by passionate individuals – but becomes sustainable and scalable? To us at the National Centre for Public Sector Innovation this shows a desire to move to a new level of innovation maturity – a desire for system change.

Proper measurement and evaluation is a key part of achieving this. “Being more structured around evaluating our innovative initiatives has helped us focus on seeing them through instead of just starting a lot of things,” as one civil servant told me. Yet our InnovationBarometer shows that only 44% of innovations have been evaluated. Why ruin the happy times in the creative bureaucracy with boring lectures on remembering to evaluate innovation? Because, I promise you, it won’t ruin anything. You’ll see that your innovations become better and better anchored in your organization and with your stakeholders when you put just a bit of timely thought and structure into your evaluation and innovation work.

Since 2015, we’ve been working together with public-sector innovators to enable them to do more and better evaluations. This work includes co-creating an evaluation toolkit with its potential users: civil servants. The toolkit defines 4 phases in the evaluation and offers 10 dialogue tools for asking and answering the relevant questions in the organization and with stakeholders, enabling a strong evaluation design that actually makes sense to both innovators and evaluators. My own experience as a civil servant and my ongoing dialogue with innovators and evaluators in the public sector have given me insights into the challenges with – and immense possibilities in – doing meaningful measurement and evaluation of public-sector innovation. Three very common mistakes keep popping up. The good news is: there is a way forward for all of them.

Mistake number 1: The innovation work starts … but the evaluation work doesn’t

As one civil servant expressed it: “You really want to evaluate but … then my innovation project … it’s going really well! So shouldn’t I spend my time on making the innovation project succeed instead of measuring whether it is actually going really well?” All of us creative bureaucrats have been at the center of this conundrum. However, the consequences of not starting your innovation and evaluation work simultaneously are real and costly. The result is a report at the end of the innovation project instead of an evaluation that actually helps you make knowledge-based choices in your innovation work. You’re squandering resources because you’re not gathering data from workshops and innovation activities when they happen and thus are forced to reconstruct data or re-interview participants afterwards. And for the geeks out there: working with control groups and baselines becomes practically impossible, and this does have consequences for the validity of your evaluation.

How to fix this mistake? From the very beginning of your innovation work, you need to fine-tune expectations with stakeholders – internal decision makers and other collaborators. Which questions do they expect the evaluation to answer – which knowledge is relevant to them? How would they like this knowledge to be communicated and put into play inside and outside of your organization? Make sure to set aside resources for evaluation from the beginning. It's cheaper and better than finding resources for a report later on. You will be surprised at the improvement in the quality of knowledge you gain, in your innovation work itself – and in evaluations that actually have a chance of a full life in the organization instead of dying in a desk drawer.

Mistake number 2: Drowning in data but not using the right data

In the Danish public sector, registries and data are king. Partly due to our high level of trust in the public sector and decades of strategic work with digital government, most public workplaces have lots of data. “Oh please tell people to keep it simple! Restrain yourself with the data!” one civil servant implored me. Sometimes the problem is that you don’t know which data you’re drowning in. Figuring out which data is accessible, in which formats, and with which other data it can be combined is an enormous task, killing every creative bureaucrat’s will to live. Sometimes we fall in love with certain data in the system. Data in the form of enticing numbers – maybe even beautifully visualized – can lead us astray. We might even forget that numbers are good for describing what is, but not for understanding why it is that way.

How to fix this mistake? Choose the data and data-collection methods that actually answer your evaluation questions. Never lose sight of what knowledge you’re looking for. Which data do you need to determine whether your innovation actually creates value? Collaborate! If you are good with numbers, partner with the best anthropologist in your organization. Go searching for sources of data in the organization and cultivate a good working relationship with the colleagues responsible for whichever registries and data your organization has. “In my experience, the colleagues who work with the registries have so much knowledge about which data we have and don’t have access to, which meaningful indicators we can find in our systems and what is legally possible. Find them and work with them,” as another Danish creative bureaucrat told me.

Mistake number 3: The sole purpose of the evaluation is an internal focus on learning

“Basically we’ve focused our innovation work on content, learning and quality. That is important for a sustainable concept,” a creative bureaucrat told me – not for the first time. Well, if we are talking about innovation and trying out new ways of working and new activities in the public sector, isn’t learning important? Absolutely! And we’re good at that. Our InnovationBarometer shows that learning is the purpose in 84% of evaluations. However, only 21% of evaluations have the purpose of documenting results for decision makers, and only 18% aim to improve the management of the innovation process along the way. This means that we creative bureaucrats are robbing the political and administrative leadership of the possibility of making knowledge-based decisions on the direction of the public sector. One consequence is that public-sector innovation will not achieve its system-changing potential, because we are not discussing innovation and its documented results across hierarchies. We creative bureaucrats also lose out on making knowledge-based decisions during our innovation process. And one could argue that we even create a democratic deficit when we are not transparent in showing taxpayers how we manage their money.

How to fix this mistake? From the very beginning, make sure to design your evaluation not only to provide you with knowledge about how to learn from your innovation work, but also to use it to manage your innovation work better and to figure out how to document results for decision makers. Create a space and a framework for ongoing vertical and horizontal dialogue about your innovative initiatives and their evaluations. Strive to make this a natural way of working.

Uniting the fields of innovation and evaluation is an ongoing focus for us at the National Centre for Public Sector Innovation. Our next step is to look at how we can build sustainable evaluation capacity in public-sector workplaces. This includes new dialogues among civil servants and decision makers on which values we’re aiming to achieve with our innovation work, knowing the difference between purpose and goals, and not being afraid to have these discussions up front as part of anchoring innovation and evaluation in the public-sector workplaces.
