Digital technology has become mainstream in our organisations. We often assume technology is neutral, free of bias or power, and therefore that more of it is always better. Too often our default is to adopt technology or gather more data 'because we can', without asking why, considering the implications, or examining the consequences. One of the (un)intended consequences is that we are remaking the world into one without individual privacy or rights. Why are we doing this again?
The relationship between our belief systems, our ethics, and our decision-making around creating and using technology remains unclear, especially within social organisations (humanitarian and public-sector organisations).
While technology has improved aspects of humanitarian work, it has also significantly increased power imbalances and inequality. Humanitarian agencies are increasingly using technology as a means for those with power to assert control over those without it. Moral and ethical standards are not being considered in decisions about how we create and use technology. The potential to increase people's vulnerability is more obvious, and heightened, in a humanitarian aid setting, but it is a global risk for all of us whose data is being used. Why are we doing this again?
So what can we do? Here are a few questions for strategists and practitioners alike to ask when thinking through different use cases of 'going digital' in your organisation:
- How does this opportunity align with the humanitarian (or other sector specific) principles?
- Could an Ethics Review Board in your organisation review new technologies or new use cases before adoption?
- What does an ethical choice look like in a technological world where privacy and consent are not real or possible due to previous decisions and power imbalances?
- Who are the stakeholders and where does the power lie?
- What is the impact on people (staff, beneficiaries, etc.), business processes, organisational culture?
- What framework and criteria are you using to decide to proceed or not? How is the beneficiary represented in these criteria?
- What are the risks and opportunities? And for whom?
- How does this idea adhere to the Principles for Digital Development?
- What is the cost of not proceeding?
- How could the different schools of ethics help us understand the ethical questions around technology?
- What existing ethical principles exist and what can we learn from them?
- Does using the Do No Harm framework help us think through the decision?
- How does this idea impact people living with disabilities (mental & physical), people of different ages, faiths, gender, and so on?
Of course, this list is not exhaustive; it's the conversation that matters.