We make decisions based on evidence.
By improving our data, we will make better decisions.
This is what the data says.
Underlying most of these statements is the assumption that data, and our interpretation of it, are objective. That assumption may not hold. Here are a few things to consider:
In various fields of research, significant work goes into being objective and removing subjectivity from the process. In most social organisations I have worked with, the expertise, time, and money required to follow a similar path are simply not available. Herculean efforts are made by some of the most incredible people on the planet, but when you have two days to collect, analyse, and process data in the aftermath of a devastating earthquake, the ‘data’ will not be completely objective.
As we move into a world where artificial intelligence and machine learning are more common, it is often assumed the ‘machines’ are neutral, unbiased, and objective. While it is understandable why we think this, this view overlooks that machines learn in two ways. First, they need to be programmed to learn, and this programming is done by humans who are, more often than not, male and of a certain socio-economic bracket. Second, machines learn from humans ‘feeding’ them datasets. So depending on the quality of, and the biases in, those datasets, the machines take this ‘understanding’ on. This has been shown to happen with gender bias in hiring and ethnic bias in policing.
A better place to start is to assume bias in each of us, because we all bring our own perspectives shaped by our life experiences. Once we assume bias, it becomes critical for our decision making to listen to multiple different perspectives and to try to identify our own biases.
Assuming bias is likely a better assumption than assuming objectivity.