Plurality and intersectionality are concepts raised in two information sources that speak to the multidisciplinary research agenda being established around ethics and artificial intelligence. The sources worth exploring are from Oxford University and ACOLA (Australian Council of Learned Academies):
- A discussion of ethical challenges posed by AI, involving experts from fields across Oxford – Seminar 1 (podcast)
- The effective and ethical development of artificial intelligence: an opportunity to improve our wellbeing – ACOLA (report)
Ethics and AI is having “a moment”
The Oxford podcast opens with an introduction by Professor Sir Nigel Shadbolt, followed by presentations from Tom Douglas, Carissa Véliz, Vicki Nash, Sandra Wachter, Brent Mittelstadt, Gil McVean and Jess Morley: a welcome mix and intersection of medical, social, legal, policy, data science and computer science perspectives on ethics and AI.
About 12-13 minutes into Shadbolt’s introduction he mentions the plethora of ethics codes for AI as part of the virtue signalling occurring as ethics (and AI) is having “a moment”. He also outlines the multidisciplinary nexus in which this scholarship can and needs to occur, the need for intersection and a plurality of views, and the importance of locating this work within philosophy (at Oxford University). The university has established an Institute for Ethics in AI within the Schwarzman Centre for the Humanities to bring together “world leading philosophers and other experts in the humanities with the technical developers and users of AI in academia, business and government.”
A pointed statement (at ~15 mins) is made about the proposal for and structure of the Institute, in particular that the role of the advisory council of “world-leading AI experts to guide the future of the Institute” is supportive but not intended to set the research agenda. This statement served as a useful reminder of critical and ethical issues raised in a webinar published by the Population Health Research Network (PHRN) on ethics and data linkage. In this webinar, Dr Felicity Flack (a legal expert) outlines the difference between public benefit and public interest, and the ethical and legal complications that can arise when these two political concepts get conflated or confused.
Community outreach and public education
Part of community outreach and public education around ethics and AI will potentially need to involve the following:
- Reiterating the public roles and responsibilities that public institutions (like universities and learned academies) have to work in the public interest in their role as research institutions.
- Defining the drivers for the involvement (and influence) of private enterprise and philanthropy, and the critical role they play, alongside public institutions, in generating public benefit (arising from research).
- Reaffirming the value of due process associated with balancing public interest and public benefit (and private interests and benefits) in establishing a research and development agenda (to advance ethical AI capacity).
- Reasserting the importance of diverse participation across social groupings to enable wide ranging and collective benefits and wellbeing as a focus in decision-making (in all spheres of society in the embrace of AI).
Experts and the wider community of stakeholders
The ACOLA report covers a research project undertaken by a wide mix of “specialist expertise from Australia’s Learned Academies”, convened as “the Artificial Intelligence Expert Working Group” to guide the study. The members of the working group are named, as are the external experts and those who made submissions. Contributors who provided written submissions are listed at the back of the research report; their expertise ranges from Agriculture, Indigenous Data Sovereignty, and Psychological and Counselling Services to SMEs and Start-ups. In the section on regulation, governance and wellbeing, the report recognises the need to extend participation in the dialogue around ethics and AI to “the public, communities and stakeholders directly impacted by the changes” (p6), and the need to pay attention to key wellbeing measures such as the OECD Better Life Index and indicators like the Australian Digital Inclusion Index as AI is developed safely and appropriately.
The ACOLA report is a long read (250 pages) – which makes for a lot of printing, or an opportunity to test out using a PDF reader!
A lurking question: in what sense, and in what contexts, can and will these concepts of plurality and intersection play out as ethics and AI get explored and integrated into societal policies, standards, systems and processes?