DeepMind, the U.K. AI company which was acquired in 2014 for $500M+ by Google, has launched a new ethics unit which it says will conduct research across six "key themes" -- including 'privacy, transparency and fairness' and 'economic impact: inclusion and equality'.
The Alphabet-owned company, whose corporate parent generated almost $90BN in revenue last year, says the research will consider "open questions" such as: "How will the increasing use and sophistication of AI technologies interact with corporate power?"
It will be helped in this important work by a number of "independent advisors" (DeepMind also calls them "fellows") to, it says, "help provide oversight, critical feedback and guidance for our research strategy and work program"; and also by a group of partners, aka existing research institutions, which it says it will work with "over time in an effort to include the broadest possible viewpoints".
Although it really shouldn't need a roster of learned academics and institutions to point out the gigantic conflict of interest in a commercial AI giant researching the ethics of its own technology's societal impacts.
(Meanwhile, the difficulty of finding AI-savvy academics who are not already attached, in some consulting form or other, to one tech giant or another is another ethical dilemma for the AI field that we've highlighted before.)
The DeepMind ethics research unit is in addition to an internal ethics board apparently established by DeepMind at the point of the Google acquisition because of the founders' own concerns about corporate power getting its hands on powerful AI.
However the names of the people who sit on that board have never been made public -- and are not, apparently, being made public now. Even as DeepMind makes a big show of wanting to research AI ethics and transparency. So you do have to wonder quite how mirrored are the insides of the filter bubbles with which tech giants appear to surround themselves.
One thing is becoming amply clear where AI and tech platform power is concerned: Algorithmic automation at scale is having all sorts of unpleasant societal consequences -- which, if we're being charitable, can be put down to corporates optimizing AI for scale and business growth. Ergo: 'we make money, not social responsibility'.
But it turns out that if AI engineers don't think about ethics and potential negative effects and impact before they get to work moving fast and breaking stuff, those hyper scalable algorithms aren't going to identify the problem on their own and route around the damage. Au contraire. They're going to amplify, accelerate and exacerbate the damage.
Witness fake news. Witness rampant online abuse. Witness the total lack of oversight that lets anyone pay to conduct targeted manipulation of public opinion and screw the socially divisive consequences.
Given the dawning political and public realization of how AI can cause all sorts of societal problems because its makers just 'didn't think of that' -- and thus have allowed their platforms to be weaponized by entities intent on targeted harm -- the need for tech platform giants to control the narrative around AI is surely becoming all too clear to them. Otherwise they face their favorite tool being regulated in ways they really don't like.
The penny may be dropping from 'we just didn't think of that' to 'we really need to think of that -- and control how the public and policymakers think of that'.
And so we arrive at DeepMind launching a research unit that'll be putting out ## pieces of AI-related research per year -- hoping to influence public opinion and policymakers on areas of critical concern to its business interests, such as governance and accountability.
This from the same company that this summer was judged by the UK's data watchdog to have broken UK privacy law when its health division was handed the fully identifiable medical records of some 1.6M people without their knowledge or consent. And now DeepMind wants to research governance and accountability ethics? Full marks for hindsight, guys.
Now it's possible DeepMind's internal ethics research unit is going to publish thoughtful papers interrogating, say, the full-spectrum societal risks of concentrating AI in the hands of massive corporate power.
But given its vested commercial interests in shaping how AI (inevitably) gets regulated, a fully impartial research unit staffed by DeepMind staff does seem rather difficult to imagine.
"We believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards," writes DeepMind in a carefully worded blog post announcing the launch of the unit.
"Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work," it adds, before going on to say: "As scientists developing AI technologies, we have a responsibility to conduct and support open research and investigation into the wider implications of our work."
The key phrase there is of course "open research and investigation". And the key question is whether DeepMind itself can realistically deliver open research and investigation into itself.
There's a reason no one trusts the survey touting the amazing health benefits of a particular foodstuff carried out by the makers of said foodstuff.
"To guarantee the rigour, transparency and social accountability of our work, we've developed a set of principles together with our Fellows, other academics and civil society. We welcome feedback on these and on the key ethical challenges we have identified. Please get in touch if you have any thoughts, ideas or contributions," DeepMind adds in the blog.
The website for the ethics unit sets out five core principles it says will be underpinning its research. I've copy-pasted those principles below so you don't have to go hunting through multiple link trees* to find them -- DeepMind does not include 'Principles' as a tab on the main page, so you really do have to go digging through its FAQ links to find them.
(If you do manage to find them, at the bottom of the page it also notes: "We welcome all feedback on our principles, and as a result we may add new commitments to this page over the coming months.")
So here are those principles that DeepMind has lodged behind multiple links on its Ethics & Society website:
Social benefit
We believe AI should be developed in ways that serve the global social and environmental good, helping to build fairer and more equal societies. Our research will focus directly on ways in which AI can be used to improve people’s lives, placing their rights and well-being at its very heart.
Rigorous and evidence-based
Our technical research has long conformed to the highest academic standards, and we’re committed to maintaining these standards when studying the impact of AI on society. We will conduct intellectually rigorous, evidence-based research that explores the opportunities and challenges posed by these technologies. The academic tradition of peer review opens up research to critical feedback and is crucial for this kind of work.
Transparent and open
We will always be open about who we work with and what projects we fund. All of our research grants will be unrestricted and we will never attempt to influence or pre-determine the outcome of studies we commission. When we collaborate or co-publish with external researchers, we will disclose whether they have received funding from us. Any published academic papers produced by the Ethics & Society team will be made available through open access schemes.
Diverse and interdisciplinary
We will strive to involve the broadest possible range of voices in our work, bringing different disciplines together so as to include diverse viewpoints. We recognize that questions raised by AI extend well beyond the technical domain, and can only be answered if we make deliberate efforts to involve different sources of expertise and knowledge.
Collaborative and inclusive
We believe a technology that has the potential to impact all of society must be shaped by and accountable to all of society. We are therefore committed to supporting a range of public and academic dialogues about AI. By establishing ongoing collaboration between our researchers and the people affected by these new technologies, we seek to ensure that AI works for the benefit of all.
And here are some questions we've put to DeepMind in light of the launch of the ethics research unit. We'll include responses when/if they reply:
Is DeepMind going to release the names of the people on its internal ethics board now? Or is it still withholding that information from the public?
If it will not be publishing the names, why not?
Does DeepMind see any contradiction in funding research into ethics of a technology it is also seeking to benefit from commercially?
How will impartiality be ensured given the research is being funded by DeepMind?
How many people are staffing the unit? Are any existing DeepMind staff joining the unit or is it being staffed with entirely new hires?
How were the fellows selected? Was there an open application process?
Will the ethics unit publish all the research it conducts? If not, how will it select which research is and is not published?
What's the unit's budget for funding research? Is this budget coming entirely from Alphabet? Are there any other financial backers?
How many pieces of research will the unit aim to publish per year? Is the intention to publish equally across the six key research themes?
Will all research published by the unit have been peer reviewed first?
*Someone should really count how many clicks it takes to extract all the information from DeepMind's Ethics & Society website, which, per the DeepMind Health website design (and indeed the Google Privacy website), makes a point of snipping text up into smaller chunks and snippets and distributing this information inside boxes/subheadings that each have to be clicked open to get to the relevant information. Transparency? Looks rather a lot more like obfuscation of information to me, guys.