The Digital Responsibility Goals

Trustworthy – European – Sovereign

Digital technologies have the potential to improve people's lives, but they also bring negative side effects. Technological innovations and their use must therefore be geared to the well-being of people and society. To this end, the Digital Responsibility Goals define a framework and work toward a trustworthy, ethically sensitive, and sustainable digital transformation. In this article we introduce the seven Digital Responsibility Goals.

The Digital Responsibility Goals – WHY?

We need a human- and planet-centered digital transformation

Digital technologies improve people's lives, but technological innovations and their use must be geared more strongly toward responsibility for the well-being of people and society – especially in the sector of healthcare provision. Guidelines and laws are indispensable in this regard, but the dynamics of technological development also challenge social development and the ethical dimension of dealing with digital technologies.

Likewise, the internet and digital technologies bring negative side effects: in many places in the world, especially in totalitarian states, the internet is restricted, regulated, monitored, and used for propaganda; fake news and hate speech poison the atmosphere and make social discourse more difficult.

Oftentimes, critical decisions about the future of digital developments are made without a clear framework. Trustworthy, ethically sensitive, and sustainable guidelines that focus on the benefits for people are missing. The Digital Responsibility Goals (DRGs) aim to define this framework and work towards a truly human- and planet-centred digital transformation. (Fig. 1, 1)

The Digital Responsibility Goals – HOW?

We provide a human-centered digital transformation framework

Leading organizations and companies are committed to the United Nations’ 17 Sustainable Development Goals (SDGs, 2). Following the same logic, addressing the digital dimension, the 7 DRGs aim to guide decision makers, companies, and other stakeholders, such as researchers and users, to develop trustworthy digital products and services.

The DRGs provide an opportunity for various stakeholders and decision-makers from businesses, regulators, academia, and civil society to form a common agenda and plan a common course of action to deal with a human-centered digital transformation.

The Digital Responsibility Goals – WHAT?

We measure a human-centered digital transformation step by step

DRG #1 “Digital literacy” is at the forefront of the DRGs, as only knowledge, education, and comprehensive information can be the basis of all self-determination and decision-making. This includes access to the technological infrastructure itself. Goals 2 to 6 of the DRGs are oriented along the data value chain – from the security of the system (DRG #2 “Cybersecurity”) as the technological basis, to the protection of personal data (DRG #3 “Privacy”) – as a European promise – and fair handling, a new understanding of how to deal with non-personal information and data (DRG #4 “Data Fairness”), responsible collection, processing and understandable evaluation of data (DRG #5 “Trustworthy Algorithms”) and transparent communication of behaviour (DRG #6 “Transparency”). Finally, the DRGs form a big social bracket – the protection of our identity as well as the preservation of human agency (DRG #7 “Human Agency & Identity”) in the digital space. In the representation of DRGs, people and their identity are elevated from their previous positioning as marginal figures to the sovereign bracket and at the same time to the centre of digital transformation.

Developed by a consortium of leading academics, NGOs, and industry experts, the DRGs cover 7 areas where we see scope for commitments that go beyond compliance with existing laws and regulations. For each Digital Responsibility Goal, we have defined 5 guiding criteria to make responsible behavior measurable (Fig. 2).

Guiding Criteria and Examples

DRG #1 Digital Literacy and accessibility of technology as the fundamental basis for trust and acceptance of digital innovations: starting with the individual. Digital competence and access to digital products, services, and processes are prerequisites for the acceptance of digital technologies. They are the basis for all other goals of the Digital Responsibility Goals, enable the assessment of the trustworthiness of offerings, and put humans at the center. They are what makes the multi-layered human identity in the digital space possible in the first place.

These are the 5 guiding criteria of DRG #1 Digital Literacy:

  • DRG #1.1 The information offered for digital products, services, and processes must be designed individually and in a way that is suitable for the target group.
  • DRG #1.2 Access to digital products, services, and processes must be reliable and barrier-free.
  • DRG #1.3 The acceptance of digital products, services, and processes must be proactively considered in design and operation. This includes measures on equity, diversity & inclusion.
  • DRG #1.4 Education on the opportunities and risks of digital transformation is essential – everyone has a right to education on digital matters.
  • DRG #1.5 The education and information offered should be designed to create awareness of related topics such as sustainability, climate protection, and diversity/inclusion (for example along the UN SDGs) where applicable.

Example of a successful implementation of DRG #1: “DRG 4 GovTech”: In the design and operation of an authority website for the electronic application for a car license plate, principles of accessibility were implemented in accordance with DRG guiding criterion #1.2, for example in accordance with BITV 2.0 (Barrier-free Information Technology Ordinance). This includes perceptibility, usability, comprehensibility, and robustness for the relevant target groups.

DRG #2 Cybersecurity as the crucial foundation and basis of secure digital technologies. Cybersecurity arms systems against compromise and manipulation by unauthorized persons and ensures the protection of users and their data – from data collection to data utilization. It is a basic prerequisite for the responsible operation of digital solutions.

These are the 5 guiding criteria of DRG #2 Cybersecurity:

  • DRG #2.1 Developers, providers, and operators of digital products, services, and processes assume responsibility for cybersecurity. Users also bear some of the shared responsibility – awareness (see DRG #1) is essential here.
  • DRG #2.2 Developers, providers, and operators of digital solutions are responsible for appropriate security measures and are constantly developing them further. Products, services, and processes are designed from the outset to be resistant to compromise and abuse by unauthorized persons (security by design).
  • DRG #2.3 A holistic view and appropriate implementation are considered along the lifecycle, the value chain, and across the entire service or solution.
  • DRG #2.4 Developers, providers, and operators of digital products, services, and processes must account for how they provide security for users and their data – while maintaining necessary trade secrets and information security.
  • DRG #2.5 Business, politics, authorities, civil society, and science must jointly and collaboratively shape the framework for cybersecurity with appropriate objectives, measures, and targets. This requires open and transparent cooperation (for example according to principles of “responsible disclosure”).

Example of a successful implementation of DRG #2: “DRG 4 Finance”: A bank offering online services has been certified to ISO/IEC 27001 to – in accordance with DRG guiding criterion #2.2 – demonstrate it possesses a robust security system based on appropriate measures to prevent unauthorized access to private information, internal systems, and networks. Ultimately this helps minimize the risk of security breaches, making the company more reliable and reputable in the eyes of potential customers.

DRG #3 Privacy as a European promise: Privacy is part of human dignity and a prerequisite for digital self-determination. Protection of privacy – with a consistent purpose limitation and data economy – allows users to act confidently in the digital world. Privacy by design and default enable responsible data usage. Users are given control and providers must account for how they protect privacy.

These are the 5 guiding criteria of DRG #3 Privacy:

  • DRG #3.1 Operators and providers of all digital products, services, and processes must take responsibility for protecting the privacy of their users.
  • DRG #3.2 When dealing with personal data, strict purpose limitations and data economy are observed.
  • DRG #3.3 Privacy protection is considered throughout the entire lifecycle. Privacy protection is the default setting.
  • DRG #3.4 Users have control over their personal data and its use – this includes the rights of access, rectification, erasure, restriction of processing, objection, and data portability, as well as the right not to be subject to solely automated decision-making.
  • DRG #3.5 Providers must account for how they protect users' privacy and personal data – while maintaining necessary trade secrets and information security.

Example of a successful implementation of DRG #3: “DRG 4 ResponsibleTech”: An online search engine assumes responsibility for protecting the privacy of its users in accordance with DRG guiding criterion #3.1. Privacy protection is clearly anchored in the organization, and sufficient financial resources are available for the additional expenses incurred as a result. Responsibilities for privacy protection are defined within the organization, with a clear mandate at the highest organizational level.

DRG #4 Data Fairness: A new understanding of data and data fairness: it's about fair competition. Non-personal data must also be protected and handled according to its value. At the same time, suitable mechanisms must be defined to make data exchangeable between parties and applicable. This is the only way to ensure balanced cooperation between different stakeholders in data ecosystems.

These are the 5 guiding criteria of DRG #4 Data Fairness:

  • DRG #4.1 When collecting data, proactive care is taken to ensure that it fairly reflects and represents the context in which it is collected.
  • DRG #4.2 In digital ecosystem structures, the mutual exchange of data between all parties involved must be clearly described and regulated (data governance). The goal must be fair participation in the benefits achieved through the exchange of data.
  • DRG #4.3 Developers, providers, and operators of digital solutions must, wherever possible, clearly define and communicate the purpose for which they use and process data (including non-personal data). Exceptions include approaches such as “open data”.
  • DRG #4.4 Data is designed “FAIR”, especially for use cases relevant to society as a whole – “FAIR” stands for Findable, Accessible, Interoperable, Reusable.
  • DRG #4.5 Data providers must be equipped with mechanisms to control and withdraw their data – they shall be able to have a say regarding the usage policies.

Example of a successful implementation of DRG #4: “DRG 4 GovTech / DRG 4 Mobility”: In line with DRG guiding criterion #4.4, a municipal government has a dedicated strategy to ensure the use of data based on the “FAIR” principles. It takes a number of measures aimed at making data – including traffic information, environmental data, and economic indicators – available to the public and promoting its use.

DRG #5 Trustworthy Algorithms: “Trust by Design” through trustworthy algorithms: Once data has been collected, it must be processed with the goal of trustworthiness. This is true for simple algorithms as well as for more complex systems, up to autonomously acting systems (for example, artificial intelligence (AI)).

These are the 5 guiding criteria of DRG #5 Trustworthy Algorithms:

  • DRG #5.1 Algorithms, their application, and the datasets on which they are based are designed to provide the highest level of fairness and inclusion.
  • DRG #5.2 The individual and overall societal impact of algorithms is regularly reviewed, and the review documented. Depending on the results, proportional measures must be taken.
  • DRG #5.3 The results of algorithmic processing, and how they come about, are comprehensible.
  • DRG #5.4 AI systems must be designed to be reliable and precise to be able to withstand subtle attempts to manipulate data or algorithms. It must be possible to reproduce results where possible.
  • DRG #5.5 AI systems must be designed and implemented in such a way that independent control of their mode of action is possible.

Example of a successful implementation of DRG #5: “DRG 4 Industry”: A startup that develops and markets AI tools for industrial applications implements measures to maintain fairness and inclusion in accordance with DRG guiding criterion #5.1. These include active measures to increase diversity in developer teams and the establishment of an AI Ethics Board.

DRG #6 Transparency must form the basis that guides the actions of all stakeholders in the digital supply chain and creates trust: openness and transparency make the difference. Proactive transparency is created for users and all other stakeholders about which principles underlie digital products, services, and processes, as well as about the digital solution itself and its components. Principled behavior is an important building block for building trust.

These are the 5 guiding criteria of DRG #6 Transparency:

  • DRG #6.1 To gain the trust of users, organizations establish transparency about their digital ventures and solutions – for the final digital products, services, and processes as well as the organization, business models, data flows, and technology behind them.
  • DRG #6.2 Transparency is implemented in interactive communication (for example, between providers and users), and mechanisms for interaction are actively offered.
  • DRG #6.3 The use of digital solutions is designed to be transparent wherever a digital interaction between people and the digital solution takes place (for example, the use of chatbots).
  • DRG #6.4 In addition to transparency for users, transparency should also be provided for professionals – while maintaining the necessary business secrets and information security.
  • DRG #6.5 Organizations must outline how they will make transparency verifiable and thus hold themselves accountable for their actions in the digital space.

Example of a successful implementation of DRG #6: “DRG 4 Health”: In a tool for diagnostic imaging, in line with DRG guiding criterion #6.1, it is made transparent to doctors upon use that image recognition and analysis are used for diagnostic purposes in healthcare. Furthermore, this is also clearly communicated to the relevant patients in the doctor-patient conversation.

DRG #7 Human Agency & Identity are critical guideposts and the precondition for digital development. Digital products and services must be human-centric, sustainable, inclusive, and developed under human oversight. It's about each and every one of us: even in the digital space, we must protect our identity and preserve human responsibility. Preserving the multi-faceted human identity is a prerequisite for any digital development. The resulting digital products, services, and processes are human-centered, inclusive, ethically sensitive, and sustainable, and remain in human care at all times. Only in this way can digital technology promote the well-being of humanity and have a sustainable impact.

These are the 5 guiding criteria of DRG #7 Human Agency & Identity:

  • DRG #7.1 The preservation of the multifaceted human identity is a basic requirement and must be the basis for any digital development. The resulting digital approaches are always user-centric – they respect personal autonomy and dignity, limit commoditization, and open new perspectives.
  • DRG #7.2 Sustainability and climate protection must be part of digital business models and implemented in practice (especially in accordance with the UN Sustainable Development Goals).
  • DRG #7.3 Digital products, services, and processes promote responsible, non-manipulative communication. Where possible, communication takes place unfiltered.
  • DRG #7.4 Digital technology always remains under human authorship and control – it can be shaped throughout its deployment.
  • DRG #7.5 Technology may only be applied if it benefits individuals and humanity and promotes welfare.

Example of a successful implementation of DRG #7: “DRG 4 ResponsibleTech”: In line with DRG guiding criterion #7.5, a technology company conducts an impact assessment on the effects of facial recognition technology. Discovering the risk of malicious and unfair use, it decides to clearly limit the use of that technology to dedicated, risk-mitigated use cases and transparently communicates that decision.

Summary and Outlook

Digital technologies have the potential to improve people's lives. At the same time, the internet and digital technologies bring negative side effects. Technological innovations and their use must therefore be geared more strongly to the well-being of people and society than is the case today.

As a benchmark for players in the digital space, the Digital Responsibility Goals define this framework and work toward a trustworthy, ethically sensitive, and sustainable digital transformation among decision-makers. The Digital Responsibility Goals pursue an integrative approach that combines all relevant actors in order to promote trust in digital technologies and business models. The Digital Responsibility Goals are accompanied by a guideline to measure the degree of successful value-based digital transformation, to strengthen the responsibility of digital players, and to give participants clear guidance for their digital strategies.

Based on the logic of the UN SDGs, the DRGs also aim to present very complex tasks for society as a whole in a simplified framework for action, using target images as orientation, to initiate responsible action at all target levels, and to enable and accompany progress in a comprehensible manner. Acting along the goals is of course desired at all levels and does not have to proceed sequentially from 1 to 7. Rather, the aim is to create awareness and sharpen attention for the challenges and opportunities in the necessary fields of action – to shape a sustainably responsible digital space for our society.

As a target picture to shape a sustainable, human-centered digital transformation, the DRGs offer an opportunity to promote greater responsibility in the digital space across sectors. Responsible behavior and responsible leadership all along the data life cycle are at the core of establishing trust. By adhering to the framework of the DRGs, and implementing it with a dedicated management system, building trust will no longer be a random by-product, but a proactive and targeted achievement.

Any development towards more digital responsibility does not happen in isolation inside an organization; it also takes place in an ecosystem, if not the wider environment, and on a societal level. Therefore: “Our activation plan depends on an interactive and engaging ecosystem that wants to put the user, the human, back in the centre of digital innovations. It's about collaboration and it's about putting perspectives together from different sectors to find solutions that can create a more sustainable and trustworthy world. We need responsible leadership. More than ever.” – Jutta Juliane Meier, Founder & CEO, Identity Valley.

Sources
1. https://identityvalley.org/drg
2. https://sdgs.un.org/goals

Jutta Juliane Meier,
Founder & CEO, Identity Valley Research gUG

Jutta has been working as an independent digital-strategy consultant since Steve Jobs introduced the first iPhone. In 2020 she founded Identity Valley, which is partly a response to, and partly an evolution of, Silicon Valley – evolving from “What can technology do?” to “What should technology do?”. Oftentimes, critical decisions about the future of digital developments are made without a clear digital strategy or framework. The Digital Responsibility Goals, developed together with a consortium of academics, NGOs, and industry experts, define this framework and work towards a trustworthy, human-centered digital transformation.

Identity Valley Research gUG (haftungsbeschränkt) promotes and calls for more Digital Responsibility. As a non-profit organization, Identity Valley engages thought leaders in academia, policy, and industry for a values-based future of the digital world through networking, lobbying, and communication. Identity Valley advocates for a data economy based on trust, privacy, and personal identity, derived from the humanistic tradition of Europe. In this, the organization is partly a response to, and partly an evolution of, Silicon Valley. It is about both the possibilities of technology and the accompanying assumption of responsibility – by companies, institutions, and states. In the process, the uniqueness of multi-faceted human identities replaces “Silicon,” until now probably tech's most important raw material. It evolves from the question “What can technology do?” to the question “What should technology do?”. The Identity Valley credo: It's all about trust.