Data Disasters (and how to avoid them)


Every month, our Nonprofit Datafolk Club gets together to share experiences and learning. It’s a chance for data folk working in or with nonprofits to network and discuss things that are common to all of us.

In our April Nonprofit Datafolk Club workshop we shared tales of data projects that didn't go to plan. We asked everyone to tell us:

  • What was the project that went wrong, and how? (If you didn’t have a personal disaster project, you could share one you’d seen or heard about.)

  • What did you learn from the disaster? Were you able to mitigate/limit the damage? How has this helped you avoid similar experiences since?

  • What is your biggest data fear? Are there any potential disasters you lose sleep over?

We didn’t record any of the details at this session – we wanted it to be a safe space to share – but we did note down some recurring themes.

Common causes of data disasters and how to avoid them

The following were common causes of data disasters. We’ve also collated some of the lessons shared on how to prevent disasters from happening or mitigate their effects.

Accidents

We’re all human, and our participants acknowledged that sometimes accidents just happen – whether it’s deleting something essential or making a mistake in a complex formula. However, people advised that the likelihood of accidents causing irreparable damage could be reduced by implementing policies and procedures such as regular backups, version history tracking, and security controls. They also emphasised the importance of peer-reviewing analysis before it’s used for decision-making or publication.

Lack of skills and knowledge

Poor data literacy – at all levels, from senior leaders to frontline staff – can increase the chance of data disasters. This can be particularly true where data-related activities are driven by hype and the adoption of new tools and techniques is rushed without proper understanding (as, one might argue, we’re currently seeing with Generative AI). People suggested that running well-planned training programmes and fostering a data culture could help with this.

Lack of capacity/time

People noted that data often took a back seat to frontline work, and this could increase the likelihood of mistakes being made. They added that when resources were scarce, automation could seem like a quick solution, but that hasty implementations could easily introduce errors. Efficiency needs to be balanced with thorough testing and validation.

Poor planning and organisation

People mentioned that poorly organised data was a recipe for disaster, increasing the likelihood of data being lost, forgotten, or leaked. File organisation systems, databases, data asset registers and process maps were suggested as key to preventing disasters. They also noted the importance of planning ahead, using pilots to determine the best course of action, documenting procedures, and avoiding mid-project changes to data processes. One of the most commonly mentioned sources of data disasters was migrating data between systems. People advised testing migrations meticulously, clearly documenting changes as you go, and having contingency plans in case things don’t work as expected.
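To make the “test migrations meticulously” advice a little more concrete, here is a minimal sketch in Python of one kind of reconciliation check you could run after a migration: it compares a CSV export from the old system with one from the new system, keyed on a record ID, and flags anything missing, unexpected, or changed. The file names and the contact_id column are hypothetical placeholders – it assumes both systems can export the same table as CSV, so adapt it to your own setup.

```python
# A minimal post-migration reconciliation sketch. Assumes the old and new
# systems can both export the same table as CSV. File names and the key
# column are hypothetical placeholders.
import csv
import hashlib

def load_rows(path, key_column):
    """Read a CSV export and return {key: row_hash} for comparison."""
    rows = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Hash the whole row so any changed field shows up as a mismatch.
            digest = hashlib.sha256(repr(sorted(row.items())).encode()).hexdigest()
            rows[row[key_column]] = digest
    return rows

source = load_rows("old_system_contacts.csv", key_column="contact_id")
target = load_rows("new_system_contacts.csv", key_column="contact_id")

missing = source.keys() - target.keys()      # records lost in the migration
unexpected = target.keys() - source.keys()   # records that appeared from nowhere
changed = [k for k in source.keys() & target.keys() if source[k] != target[k]]

print(f"Rows: {len(source)} in source, {len(target)} in target")
print(f"Missing: {len(missing)}, unexpected: {len(unexpected)}, changed: {len(changed)}")
```

Even a rough check like this, run before and after go-live, gives you evidence that the migration behaved as expected and an early warning if it didn’t.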

Poor communication

People agreed that different interpretations of terms like 'participation' and 'engagement' could lead to inconsistent reporting. It’s important to establish clear definitions and communicate them across the organisation. People also felt that it was helpful to have regular catch-ups about data assets to ensure that everyone was on the same page with regard to datasets, systems and software used by the organisation.

Poor tools

They say that it’s a bad workman who blames his tools, but sometimes tools really are the problem. In particular, people noted that while free tools were tempting, they could lack critical features or security. They advised that organisations should evaluate tools thoroughly and consider their long-term needs before committing to a tool, especially one that will handle sensitive data.

Nightmare disasters

The potential disasters that people said they lost most sleep over were:

  • Personal/sensitive data breaches

  • Payments being of the wrong value and/or going to the wrong people

  • Any kind of change in tools or systems going wrong

  • Senior leaders making decisions using insufficiently robust data

  • Losing skilled staff and having to start over with digital and data literacy for new recruits.

Join the Nonprofit Datafolk Club

If you found this resource interesting, or if you’re curious about nonprofit data more generally, please come and join us at our next workshop. Each month has a different topic, and you can find the details on our events page. Previous topics have included:

  • Communicating data accessibly

  • Measuring impact in nonprofits

  • AI in nonprofits

  • Resourcing data roles in nonprofits

  • Data storage in nonprofits