ABC books, sometimes called alphabet books, can be a creative way to explore a topic. Here’s the first half of exploring the ABCs of responsible data and technology.
A is for access and alternatives. Who has access and who does not? Does the person the data is about have access? Are there alternative ways to participate that people can choose from? Or is digital the only way?
B is for bias and breaches. Everyone has biases, and more than one. So what are yours? Let’s put ours on the table so everyone can see them and we can help each other manage them. Data breaches will happen; there is no use acting as if they won’t. So have a plan, rehearse it often, and make others aware of it.
C is for consent. Informed consent. Does it exist? Consent is joined at the hip with choice, and the two are hard to separate – when they do separate, consent morphs into coercion.
D is for data, deletion and disability. Data is everywhere, and in reality less is often more, although no one seems to agree until they have too much and don’t know what to do with it. Deleting data is critical for reducing risk and harm – it is good to have a plan for deleting data before collecting it, yet we often skip this step. And disability: ‘going digital’ tends to be for the mainstream, and people living with disabilities are often left out – it doesn’t have to be this way. The choice is up to us.
E is for ethics. Ethics are about values – what we believe about ourselves, others, and our world. Ethics are about trade-offs and trolley problems. About a thousand shades of grey, and who decides in a world of 0s and 1s.
F is for flow. Where does the data flow? Who does it touch? What does it pick up and deposit along the way? It’s a bit like a treasure map – when you follow the flows of data, you discover many things happening with it that you were never told about.
G is for governance, guardianship, and GDPR. Governance is all about decision making: processes, interactions, rules, norms, and how they are regulated and accounted for. It’s the rules of the game – who can do what. It also helps you see ‘who decides’ and the power dynamics at play. Guardianship allows someone to act on behalf of another, which is tricky in the digital world. GDPR, the EU’s General Data Protection Regulation, is potentially the most important law about data on the planet.
H is for harms. They exist. They can be hard to see at first and in the short term, but they are often silently growing. Look for them always.
I is for inequality and interoperability. The digital divide is real. If you don’t have a device or access to a device, it’s hard to participate in the digital world. Women, girls, and people living with disabilities have the least access to devices. When you design for the majority, the world becomes more unequal. Interoperability is about systems and solutions playing nicely together and sharing information.
K is for KYC and KM. Know Your Customer is a financial term and process made much easier with digital – perhaps too easy. Knowledge Management has exploded because of the availability of data, which is then turned into knowledge. But it has also changed: now information is just a Google search away, and we confuse information with knowledge.
L is for legal and literacy. Different legal jurisdictions allow different things. However, just because something is legal doesn’t make it right or ethical. Literacy comes in many forms – linguistic, digital, identity – and yet it is almost always overlooked in projects. Revisit it; you’ll be glad you did.
M is for minimisation and the mosaic effect. Data minimisation is a principle we all should live by, but few do. Collect the least amount of data you need for the task or project you are doing. Then, when finished with it, delete it. Overnight, our organisational risk would decrease tenfold, as would the harms we enable. The mosaic effect is a fancy way of talking about connecting dots. Data is everywhere now: anonymised datasets are publicly available, as is powerful analytical software. So we put many of these datasets together, along with trend reports and an understanding of human behaviour, and quite quickly we discover that anonymous datasets are no longer anonymous.
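To make the mosaic effect concrete, here is a minimal sketch of a linkage attack: two datasets that are each ‘anonymous’ on their own are joined on shared quasi-identifiers (postcode, birth year, sex) to put names back on sensitive records. All the names, records, and field choices below are invented for illustration.

```python
# "Anonymised" health records: no names, just quasi-identifiers + diagnosis.
health_records = [
    {"zip": "02139", "birth_year": 1965, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "02139", "birth_year": 1982, "sex": "M", "diagnosis": "asthma"},
]

# A public roll (e.g. a voter list): names with the same quasi-identifiers.
public_roll = [
    {"name": "A. Example", "zip": "02139", "birth_year": 1965, "sex": "F"},
    {"name": "B. Sample", "zip": "02139", "birth_year": 1982, "sex": "M"},
]

def link(records, roll, keys=("zip", "birth_year", "sex")):
    """Re-identify 'anonymous' records by matching quasi-identifiers."""
    # Index the public roll by its quasi-identifier tuple.
    index = {tuple(p[k] for k in keys): p["name"] for p in roll}
    # Any anonymous record whose quasi-identifiers appear in the index
    # is now linked to a name.
    return [
        {"name": index[tuple(r[k] for k in keys)], "diagnosis": r["diagnosis"]}
        for r in records
        if tuple(r[k] for k in keys) in index
    ]

for match in link(health_records, public_roll):
    print(match["name"], "->", match["diagnosis"])
```

The point of the sketch is that no single dataset leaked anything; the harm came from the join. Data minimisation – not collecting or publishing those quasi-identifiers in the first place – is what removes the keys the join depends on.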
That’s the first half of the alphabet; the second half will come tomorrow.
Are there a few rules of thumb to use when we want to decide which data to delete?
I work in a technology development team in the semiconductor industry. We keep coming back to the data and reports we generated. My observation is that companies archive their data and don’t ask employees to ‘delete’ it.
Great question Kishor – thank you for it! I have three gut responses:
1. I tend to recommend deleting data about people after the purpose it was collected for is completed.
2. It would be good to explore with your colleagues when you are archiving the data instead of deleting it, and why.
3. Form a small group of colleagues with diverse perspectives to discuss the archiving vs deleting question. The discussion will be rich, and that matters because ‘context’ is critical in these decisions.
I hope that helps!