Our purpose has always been to use our skills in technology and design to help in areas of social need and policy gaps, and that purpose now has a new mission. In recent years we, like the rest of the world, have been discussing and exploring how we might make use of emerging technologies such as artificial intelligence. Because it touches some pretty important things (data and privacy, trust and security, automation and humanity), work with artificial intelligence needs consideration and criticism. It needs some ground rules.
Here at Portable, a group of researchers, designers and developers who make things, we’re putting this article out there as the ground rules we’ve written for ourselves. They’ll guide us as we use the skills and tools at our disposal to design and develop products and services built on machine learning and artificial intelligence.
These ground rules are underpinned by one thing, the methodology in which we are expert: human-centred design. We believe designers and developers of artificial intelligence should participate in forward thinking on what it means to develop sustainable, democratic, and humane technology. Being human-centred means we believe in some fundamental, mandatory aspects of the processes we use to make things.
We respect the power and beauty of our humanity. We don’t want to wait until we lose what makes us human in the noise of megadata, metadata, automated decision-making, instant responses and increased productivity. In 20 years, we want to look back at this time and feel pride in our ability to recognise the possibilities, act on the opportunities, and do it all with the utmost optimism and empathy towards a diverse range of users.
We worry about the future, so we act in the present. We don’t wait for permission, and we don’t need to be granted authority. We will be curious and exploratory, but orient our work towards purpose and need. We will acknowledge our limitations and know that expertise is relative. We will not claim to be experts, but strive to be expert enough to accomplish our goals. We believe in doing things that scare and excite us, but also in responsible innovation and in understanding intimately the technology and creativity we wield before unleashing it.
We do this alongside the humans for whom we are developing artificial intelligence. We will do it by putting users at the centre, inviting them in, and finding champions among other organisations, governments, and citizens. Collaboration with others who think similarly is more important than competition, and we will seek out partners who share our principles.
To have this conversation means accepting certain truths: some of the most interesting technologies we use now weren’t invented with an end use, or user, in mind; and as designers and developers we won’t know and can’t control the future of the technologies we employ, even though we invest in designing for and developing with them now.
We’ll stretch, we’ll reach out, we’ll make mistakes. We’ll be humble and co-create. We’ll be truthful every step of the way. We will keep on making. We will do this within the following guidelines.
Don’t be reckless with data
We respect the right to privacy, and believe even metadata can hold significant information about an individual. Being up-front with users about our intentions when we collect data, and giving them a clear right to opt out, is a priority for our services. We will seek out experts to help us identify the applications of this principle whenever we engage with data from users. We’ll make our motivations and projects public, because we know that with exploratory technology development, data can transform into insights and be used in ways that weren’t intended when we collected it.
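To make that concrete, here’s a minimal sketch of what consent-aware collection could look like in code. Everything in it is illustrative rather than an existing API: the ConsentRecord type, the collect helper, and the purpose labels are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    """What the user agreed to, captured up-front at collection time."""
    user_id: str
    purposes: frozenset          # e.g. frozenset({"service_improvement"})
    opted_out: bool = False      # the right to opt out, honoured in code


def collect(event: dict, consent: ConsentRecord, purpose: str):
    """Store an event only if the user consented to this specific purpose.

    Returns None (i.e. collect nothing) when the user has opted out or
    never agreed to this use of their data.
    """
    if consent.opted_out or purpose not in consent.purposes:
        return None
    return {
        **event,
        "purpose": purpose,  # record *why* we collected it, not just what
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }
```

The point of the sketch is the shape, not the detail: consent is checked at the moment of collection, and the purpose travels with the data so later uses can be checked against what was actually agreed to.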
Audit, report, correct
We will be proactive in auditing the data and processes we use, and in identifying any bias they contain. We know that AI output often depends on data gathered from flawed, human systems, systems that often encode historical prejudices and power structures. When creating tools that impact people’s lives, we believe it’s crucial to identify, acknowledge, and try to correct biased input to prevent unintentionally biased decision-making.
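What does a first pass at such an audit look like? Here’s a minimal sketch, assuming a tabular dataset with a hypothetical group field and a binary outcome field. A real audit goes much further than this, but even a simple rate comparison can surface a disparity worth investigating.

```python
from collections import defaultdict


def audit_outcome_rates(records, group_key="group", outcome_key="outcome"):
    """Report the positive-outcome rate per group and the largest gap.

    `records` is any iterable of dicts; the field names are hypothetical
    and would map onto whatever dataset is actually being audited.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += int(bool(record[outcome_key]))
    rates = {group: positives[group] / totals[group] for group in totals}
    gap = max(rates.values()) - min(rates.values()) if rates else 0.0
    return rates, gap


rates, gap = audit_outcome_rates([
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 1},
])
print(rates, gap)  # {'A': 1.0, 'B': 0.5} 0.5
```

A gap like that is a prompt for investigation, not automatic “correction”: the point of auditing is to surface the question, and the answer still takes human judgement.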
Design against risk
Many designers spend their time trying to build trust with users through strategies deployed in content, application design, and communication. We will do this by designing against risk. Users need to be assured that we have designed against the risk of poor or biased decision-making in our AI technologies. They need assurance that the automated process isn’t removing the human aspect of their situation. We believe in protecting the reputation of our AI platforms by refusing to design inhumane systems that care more about efficiency and automation than the person. We will not prioritise the data over the user.
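One common pattern for keeping the human aspect in the picture is to escalate anything the system is unsure about to a person. The sketch below is illustrative, not a prescription: the model callable, its confidence score, and the min_confidence threshold are all assumptions.

```python
def decide(case, model, review_queue, min_confidence=0.9):
    """Route low-confidence automated decisions to a human reviewer.

    `model` is any callable returning (decision, confidence in [0, 1]).
    The automated path is a convenience, never the only path: anything
    the model is unsure about lands in front of a person.
    """
    decision, confidence = model(case)
    if confidence < min_confidence:
        review_queue.append(case)  # a human makes the final call
        return "escalated_to_human"
    return decision
```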
Balance data privacy, security, and ownership
Even if data is collected through aggregated and non-identifiable processes, the risk of compromise through security flaws is still an important concern. Aside from the legal obligations around data privacy, we believe in proactively treating data privacy as a human right in our data collection, manipulation, and use. We aim to be proactive instead of reactive in our work, and will mandate that data collected for use in machine learning is non-identifiable and, to the best of our knowledge, not capable of being re-identified.
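As one example of what “non-identifiable” can mean in practice, here’s a minimal sketch of two common guards: dropping direct identifiers outright, and checking k-anonymity over quasi-identifiers. The field names are hypothetical, and k-anonymity is a floor rather than a guarantee, which is exactly the kind of judgement we’d bring experts in for.

```python
from collections import Counter


def de_identify(record, direct_identifiers=("name", "email", "phone")):
    """Drop direct identifiers outright rather than trying to mask them."""
    return {key: value for key, value in record.items()
            if key not in direct_identifiers}


def is_k_anonymous(records, quasi_identifiers, k=5):
    """Check every combination of quasi-identifiers appears at least k times.

    Quasi-identifiers (say, postcode and age band) don't name anyone on
    their own, but rare combinations of them can re-identify a person.
    """
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())
```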
Critical creativity
Ethical innovation requires bouncing between creativity and critical review. We believe AI development requires ongoing research and reflection on industry best practice and thought leadership. But we don’t believe in taking anything from that research into practice without thorough questioning. Any AI tool we build requires external evaluation, to hold our assumptions and biases up to the light.
Disciplined disruption
We believe new technologies only get made when “business as usual” is up for grabs. That means mandating machine learning in our speculative projects and research-and-development initiatives, so that we constantly and iteratively explore, make, and re-make.
The rules we have written for ourselves today might evolve. Indeed, we expect they will, hence the V1 in the title. But pinning them to human-centred design principles means we commit to the fundamental, mandatory aspects of that methodology, which gives us a magnetic north to orient our work.
Authors: Sarah Kaur, Sean Dockray, Luke Thomas, Ryan Blandon, Peter Roper
Illustration: Aron Mayo
H/T The Expert Enough Manifesto and ‘Wear Sunscreen’ by Mary Schmich (famously turned into a song by Baz Luhrmann)