Questions for Digital Ethics

May 27, 2020 | ICT4D, Questions

There are many ways to describe something. We can use the senses – how it looks, tastes, feels, sounds, or smells. Or we can talk about what it does, how it makes us feel, how it changes us. We can use stories too. And sometimes we can simply describe questions that it raises.

So I took the question approach to describe the kinds of questions a digital ethics committee would ask. They are grouped under common ethical issues in data management. The list is not meant to be exhaustive, nor is it expected or appropriate to answer every question every time. The questions are prompts for discussion and for drawing out perspectives.

Purpose

  • Is the data being collected tied to a specific purpose or is it speculative?
  • Is the data and/or model representative of what the project wants to measure?
  • Could we collect less data and still achieve our purpose?

Use

  • Is the data that is collected actually used? If so, by whom? Does the data have an explicit purpose? Who has access to it? Is it shared? Is there consent?
  • Is it clear when it will be deleted?

Awareness and Consent

  • Is there a clear process and plan to raise awareness that identifies target groups, the channels to reach each group, language, and extra vulnerabilities? Does the plan address the following?
    • What is the benefit to the community that the project is trying to achieve?
    • What are the activities that will be carried out in this project?
    • Do the people involved in the project understand:
      • What data about them is needed for the project, why, how it will be used, who will have access, with whom it will be shared (and why), and when it will be deleted
  • Is there a clear process and plan to collect consent, during which understanding is checked and it is explained again what data about them is needed for the project, why, how it will be used, who will have access, with whom it will be shared (and why), and when it will be deleted?

Feedback and Safeguarding

  • Can a project participant or stakeholder complain about the process?
  • If the project engages people online, are there clear protocols in place about how to do that? 
  • Is the project ensuring communities know how to live wisely in the digital world?
  • How is digital literacy being addressed, especially for the most vulnerable groups?

Bias and Fairness

  • Who does the project leave out? Is the project designed for the majority or a minority?
  • Has the project considered the bias in the project team and in the project data (and its sources)?
  • Is the use of data and technology non-discriminatory?
  • Is there any prejudice or favouritism in the data or model?
  • Who has written the algorithms and code, and what are their assumptions about the populations with whom we work?
  • Is the model (or underlying data) codifying the current state of the world and thereby making it harder to change? (e.g. are we building models that perpetuate or encode past mistakes?)

Transparency and Explainability

  • Is there clear documentation of the data management process and visibility on how the model or algorithm(s) function? (e.g. can someone not directly involved in the process explain what is happening?)

Access, Ownership, and Portability

  • Are the rights to the data and the insights derived from it clearly defined? (e.g. is it clear how decisions are made about how and by whom the data can be used, how problems in or related to the data are rectified, and other related issues?)
  • Is it clear who ‘owns’ the data and the insights, and who has access to them?
  • Who will have access to the data, and for what purpose?
  • Will the person the data is about be able to access it? Take it with them? Use it in other areas of their life without our involvement or knowledge?

Sharing

  • Who is the project sharing the data with? For what purpose?
  • Have the people the data is about consented to its being shared?
  • Is there a clear data sharing model? Is it one way, two way, or multidirectional?
  • What is the ethos of the partners and does it ‘fit’ ours?

Sustainability, Maintenance & Architecture

  • Is there minimal reliance on central authorities?
  • Are the systems auditable and fixable?
  • Is there a sustainability model in place for any data being maintained or any system that will continue to be used?

Security, Protection, Privacy

  • Has the IT and Information Security team conducted a Privacy Impact Assessment, a Data Protection Impact Assessment, and a security assessment of the system the project will be using?
  • Are the digital processes privacy-preserving?
    • Does the data or its use reveal the identity of an individual or group of people?
  • Is the impact on the data subject – both short term and long term – understood by the project team?
  • Is it clear when the data will be deleted?

Photo by Ethan Sykes

2 Comments

  1. Nikolai Segura

    Hi Amos,

    This is an excellent post, and you are 100% right in your approach to Digital Ethics for Data. These questions would absolutely be helpful for a Data Ethics committee.

    May I suggest, though, that a fantastic categorisation of questions already exists in the EU’s General Data Protection Regulation (GDPR):

    1. Lawfulness, fairness and transparency: This covers areas like the nature of consent (specific, informed, free, pro-temp, etc.) and fairness and non-bias in data processing and usage.
    2. Purpose limitation: This covers ensuring the data subject’s consent is valid and adhered to in how the data is used.
    3. Data minimisation: Ensuring that only data needed for the consented purpose is collected and used.
    4. Accuracy: This is one area I thought may be missing from your questions – but it is an obligation that data held on a subject is accurate, and remains accurate and up-to-date, throughout the time it is held. Data controllers should have processes to make sure this is the case.
    5. Storage limitation: This covers the deletion of data when it is no longer needed, and therefore the right to privacy and to be forgotten that you mentioned.
    6. Integrity and confidentiality (security): This ensures the systems and processes the Data Controller and Processors employ are fit for purpose and provide adequate protection for data subjects.
    7. Accountability: This covers having solid processes, feedback, complaints mechanisms, etc.

    I would also add that under principle 6 (security) there is an obligation of “Privacy by Design”. This means that all systems and processes around holding personal data should be designed around data subject privacy. I would add a question on that: have we designed our systems and processes with privacy and security as the default assumption?

    Best

    Nikolai

    • AmosD

      Thanks Nikolai – the GDPR categorisation is helpful and one more angle to look at it. And certainly extremely useful for framing the discussion with legal. The more ‘ways in’ the better I think as it helps make ‘ethics’ accessible to folks.

