Evaluating Digital Projects: Managing ‘Theory of Change’ and Turning Challenges into Opportunities.
Digital projects are almost always enablers of other outcomes or service priorities; they are not an end in themselves. Usually, they aim to make processes smoother, faster, more intuitive, or cheaper - in other words, better.
Many digital projects will ultimately aim to replace a non-digital channel (in-person visits, telephone calls, letters, etc.) with a fully digital channel. This is called a ‘channel shift’, and a project’s efficiency and cost benefits are sometimes calculated by measuring time saved from in-person visits - for example to post offices, local council buildings, or passport offices.
A more intuitive digital service makes it easier for the user, or the operator, to complete the service journey. It might also make people more willing to engage with the service or organisation in question. Internally, having more user-friendly services for operators could yield important benefits, like increased staff well-being and retention.
When tracking outcomes, it is important to measure the technical performance of the service (its uptime, speed and latency, for instance) as well as the usability of the service (how much users enjoy using it, what challenges they have overcome or still face). Performance and usability are linked and should be treated as such in the Theory of Change.
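As an illustration, the sketch below derives two such technical indicators - uptime and latency percentiles - from a request log. The log fields (http_status, latency_ms) and the definition of uptime as the share of non-5xx responses are assumptions made for the example, not a prescribed standard.

```python
# Minimal sketch: deriving uptime and latency indicators from a request log.
# Assumptions: each record carries an HTTP status and a latency in ms;
# "uptime" here means the share of non-5xx responses, which is illustrative.
import statistics

def performance_summary(records: list[dict]) -> dict:
    latencies = [r["latency_ms"] for r in records]
    ok = sum(1 for r in records if r["http_status"] < 500)
    return {
        "uptime_pct": 100 * ok / len(records),
        "median_latency_ms": statistics.median(latencies),
        "p95_latency_ms": statistics.quantiles(latencies, n=20)[-1],  # 95th percentile
    }

records = [
    {"http_status": 200, "latency_ms": 120},
    {"http_status": 200, "latency_ms": 340},
    {"http_status": 503, "latency_ms": 2900},
]
print(performance_summary(records))
```

Indicators like these can then be read alongside the usability evidence, reflecting the link between performance and usability in the Theory of Change.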
Modern technology typically offers better cyber security than legacy systems: it tends to use more advanced authentication methods, like biometrics and two-factor authentication, is more regularly maintained and patched, and will usually have had security-by-design considerations built into the development process. Cloud-hosted services, in particular, are often more secure than legacy systems.
One of the major claimed benefits of digital projects and services is that they can be easy to scale, due to assets being re-usable. However, scaling services from one context to another requires large amounts of dedicated time, resource and technical skill. Being realistic about barriers and constraints to a digital solution’s scalability is key.
Digital skills are broader and more context-dependent than ‘traditional’ skills, and often relate to how people solve problems in a variety of contexts. For teams measuring improvements in a group’s digital skills, the best approach is therefore usually to observe those skills being applied in a real-world context - for example, by using ethnographic methods to observe specific projects or tasks.
Teams should measure how their service has driven more sustainable outcomes compared with previous services, but should also track the sustainability of the new service itself. Across all hardware, software and services, teams can take steps to reduce the environmental footprint of their technology project, many of which are covered in Defra’s ‘Greening Government ICT’ report.
All projects are subject to change and review, but digital projects are designed to be agile, iterative, and able to pivot quickly. The general advice is to ensure that Theories of Change and objectives are reviewed regularly throughout the lifecycle of ongoing projects. In particular, teams should expect the overall vision of their Theory of Change to stay the same, while the sub-steps needed to get there may change. The earlier changes are reflected in the evaluation design, the more efficient the evaluation effort will be.
Digital projects are often innovative and therefore have a distinct adoption profile. A common profile for digital projects is to have very few first movers followed by a number of early adopters, who champion the service. This builds trust and leads to mass adoption, often triggered by a specific event or use case. Finally, innovation adoption often has a stubborn long tail of potential users who adopt a service very late - or sometimes not at all.
Digital projects will often segment their potential users into ‘personas’, representing different types of users, contexts and motivations. When evaluation teams develop their sampling or engagement approach, they should ensure they can segment the data into different user personas, and report outcomes against them.
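As a minimal illustration, the sketch below reports a completion rate per persona, assuming records have already been tagged with a persona label; the persona names and the ‘completed’ outcome field are invented for the example.

```python
# Minimal sketch: reporting an outcome metric per user persona.
# Assumptions: records already carry a persona label; the persona names
# and the "completed" outcome field are illustrative.
from collections import defaultdict

def outcomes_by_persona(records: list[dict]) -> dict:
    totals, completed = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["persona"]] += 1
        completed[r["persona"]] += r["completed"]
    return {p: completed[p] / totals[p] for p in totals}

records = [
    {"persona": "confident online user", "completed": True},
    {"persona": "assisted digital user", "completed": False},
    {"persona": "assisted digital user", "completed": True},
]
print(outcomes_by_persona(records))  # completion rate per persona
```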
Digital teams should always consider the inclusiveness and accessibility of their services, to ensure that they can be used by everyone, and inclusion should similarly be a key consideration for evaluation teams. As a helpful starting point, GDS has developed a Digital Inclusion Toolkit and a 9-point Digital Inclusion Scale, which can be used to transparently test how outcomes have affected different target participant groups.
Teams can supplement more traditional matching variables, like population or size, with specific digital considerations that could impact an organisation’s ability to adopt an intervention. This could include the use of operating systems, current digital maturity, or presence of legacy technology within the organisation.
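To make this concrete, here is a toy nearest-neighbour matching sketch over such variables. The field names (population_k, digital_maturity, legacy_share), their scales, and the equal weighting are all illustrative assumptions; in practice, variables would need standardising so that no single one dominates the distance.

```python
# Toy sketch: picking a comparator organisation by nearest-neighbour
# matching on digital-specific variables. Field names, scales and the
# equal weighting are illustrative assumptions; real use would
# standardise each variable first.
def distance(a: dict, b: dict) -> float:
    keys = ("population_k", "digital_maturity", "legacy_share")
    return sum((a[k] - b[k]) ** 2 for k in keys) ** 0.5

def best_match(treated: dict, candidates: list[dict]) -> dict:
    return min(candidates, key=lambda c: distance(treated, c))

council_a = {"name": "A", "population_k": 210, "digital_maturity": 3, "legacy_share": 0.4}
pool = [
    {"name": "B", "population_k": 190, "digital_maturity": 3, "legacy_share": 0.5},
    {"name": "C", "population_k": 600, "digital_maturity": 1, "legacy_share": 0.9},
]
print(best_match(council_a, pool)["name"])  # -> "B"
```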
Digital projects can change a lot. And quickly. In particular, the technology context around them can change (as can the underlying legal and regulatory context). Evaluation teams need to constantly monitor relevant technological changes and assess if the approaches originally decided upon are still valid in light of these changes, or if they need to be amended.
Service Assessments - which follow the Service Standard laid out by GDS - are detailed appraisals of digital projects, conducted by external assessors, covering a number of important project success factors. Our advice for evaluation teams is to use these Service Assessment reports and documentation in their evaluations, especially in their process evaluations, which may cover very similar topics.
Most websites and online services collect a common set of user analytics to monitor their performance. These web analytics can form an important part of any digital service evaluation, and aligning with ongoing GDS and UK Government approaches using Google Analytics 4 can help ensure that evaluations are joined up across Government.
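For illustration, the sketch below pulls a simple usage report through the GA4 Data API (Python package google-analytics-data). The property ID placeholder, the chosen metrics, and the assumption that Application Default Credentials are already configured are specific to this example rather than a mandated approach.

```python
# Minimal sketch: pulling daily usage figures from a GA4 property.
# Assumptions: package google-analytics-data is installed, Application
# Default Credentials are configured, and property_id is your GA4 ID.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

def weekly_service_usage(property_id: str) -> None:
    client = BetaAnalyticsDataClient()
    request = RunReportRequest(
        property=f"properties/{property_id}",
        dimensions=[Dimension(name="date")],
        metrics=[Metric(name="activeUsers"), Metric(name="sessions")],
        date_ranges=[DateRange(start_date="7daysAgo", end_date="today")],
    )
    response = client.run_report(request)
    for row in response.rows:
        print(row.dimension_values[0].value,
              row.metric_values[0].value,
              row.metric_values[1].value)
```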
Digital projects often offer the opportunity to embed tools to run randomised controlled trials. This is because - using A/B testing approaches - users can be randomly assigned to either a given service or a previous version of that service. This means that, at the level of the user, it is possible to compare outcomes for the new service with those for the old one. The technology that underpins this kind of randomisation is usually just a simple 50-50 random outcome generator, triggered when a user clicks on a link or starts an online journey.
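A minimal sketch of such an assignment mechanism is below. Hashing the user ID is one common variant that keeps a returning user in the same arm across visits; the arm labels are illustrative, and a plain random draw would equally satisfy the 50-50 idea described above.

```python
# Minimal sketch: 50-50 assignment of users to the new or old service.
# Hashing the user ID keeps a returning user in the same arm across
# visits; random.choice(["new_service", "old_service"]) would also work.
import hashlib

def assign_arm(user_id: str) -> str:
    digest = hashlib.sha256(user_id.encode()).digest()
    return "new_service" if digest[0] % 2 == 0 else "old_service"

# Outcomes can then be compared arm by arm, e.g. completion rates:
assignments = {uid: assign_arm(uid) for uid in ("u1", "u2", "u3", "u4")}
print(assignments)
```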
One source of data that can be highly useful for teams evaluating public sector digital projects is contract data. Contract data - or spend data - is data about the public contracts that public authorities have with their suppliers. In a digital context, these are the contracts that public authorities hold with their digital vendors. This data can be used for many things, such as measuring direct cost savings, comparing different authorities, or measuring changes in public sector vendor markets.
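As a small illustration, the sketch below aggregates flattened contract records into annual spend per authority and computes a direct cost difference; the field names and figures are assumptions about how the spend data might be structured.

```python
# Small sketch: direct cost comparison from contract (spend) records.
# Assumptions: each record has (authority, year, value_gbp); the field
# names and figures are illustrative, not a real data standard.
from collections import defaultdict

def annual_spend(contracts: list[dict]) -> dict:
    totals = defaultdict(float)
    for c in contracts:
        totals[(c["authority"], c["year"])] += c["value_gbp"]
    return dict(totals)

contracts = [
    {"authority": "Council A", "year": 2021, "value_gbp": 1_200_000},
    {"authority": "Council A", "year": 2023, "value_gbp": 800_000},
]
spend = annual_spend(contracts)
saving = spend[("Council A", 2021)] - spend[("Council A", 2023)]
print(f"Direct change in spend: £{saving:,.0f}")
```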
The European Commission’s DG Connect has recently launched GovTech Connect, a pilot project aiming to foster the digitisation of the public sector through an innovative European GovTech platform. Carried out over two years by a consortium formed by Intellera Consulting, PUBLIC Deutschland, The Lisbon Council, and Politecnico di Milano, GovTech Connect aims to support public administrations in the adoption of cost-effective and flexible digital solutions by connecting the GovTech ecosystem in the European public sector.
The project will run two open innovation activities per year, including two boot camps that aim to identify innovative solutions to key pressing challenges in different regions across Europe.
The boot camps will run as accelerated and intensive training programmes. Each boot camp will be delivered online over four weeks and will culminate in a final online pitch event in week six.
Content will be delivered in the form of masterclasses, workshops, roundtables and focus groups. Each week will start with a full day of training, combining lecture- and activity-based sessions.
Within the boot camps, we will also facilitate a number of opportunities for start-ups to work together with citizens and co-develop user-centric products. At the beginning of the programme, each start-up will be matched with a small pool of citizens who will be recruited by our consortium or directly by the start-ups.
For 2023, the regions and challenges are the following:
The boot camps will be delivered by PUBLIC, with contributions from Politecnico di Milano, Intellera and Lisbon Council, all part of the Consortium implementing the pilot project GovTech Connect on behalf of the European Commission. PUBLIC has significant experience in working with European governments and innovators, having facilitated 100+ collaborations between government organisations and SMEs across 4 countries through thought-leadership and consulting activities, product development, innovation programmes and events.
Training takes place on a dedicated learning day once per week, and at least one member of each startup team must attend all sessions. The boot camps are funded by the EU.
The content of the boot camps covers working with government, innovative business models, sources of funding, pitch preparation and a closing final pitch.
Training sessions are region-specific and will be delivered based on the personal experiences of recognised experts, drawn from our network of leading entrepreneurs, intrapreneurs, academics and public sector officials, with diverse expertise spanning public sector procurement, investment, entrepreneurship, service design, design thinking and more.
The boot camps are targeted at early-stage start-ups developing technology-based solutions with the capacity to generate value for the public sector. These startups are usually in their pre-seed/seed phase; however, more mature startups that are new to working with governments are also encouraged to apply. Startups must fit within one of the challenges and its region.
Interested startups can apply to one of the boot camps via the application portal where they will be requested to provide general information on their company and team as well as more detailed information on their proposed solution and how it addresses the relevant challenge.
The application deadline for both boot camps is 5 May 2023.
The boot camps offer a number of unique benefits to startups:
We encourage all startups, including those from other regions of Europe, to subscribe to our JoinUp page and follow the project LinkedIn page to receive updates on other project activities - in particular, the upcoming boot camps focused on Western Europe and Eastern & Central Europe in 2024.
A jury panel made up of representatives from the project team, sectoral experts and citizen engagement leads will assess applications.
Eligible startups will be scored against the following criteria: Team Experience, Company Experience, Solution Feasibility and Citizen Engagement.
Applicants can expect a decision in mid-May 2023.