Trust… and AI? No smoke without fire.

Trying to understand what artificial intelligence is and how machine learning works is a long game (for this library and information scientist). Linking that new technical knowledge with questions around ethics and social impact is critical, but it is hard going and also a bit unnerving. After reading some of Kate Crawford's research over the past few years, and more recently hearing her give a great talk at the Australian National University in Canberra, "AI and Power: From Bias to Justice"*, the magnitude of that professional learning exercise seemed quite overwhelming.

[Image: conifer buds]

It seemed like a good point to acknowledge the need for easier entry points (videos, for example) to help counterbalance the heavier-duty technical material on machine learning, neural networks, and models still to be absorbed.

10 Essential TED Talks on Artificial Intelligence is just one subset of useful viewpoints. The views on AI come from: Sebastian Thrun (educator and entrepreneur), Bruno Michel (engineer), Margaret Mitchell (scientist), Robin Hauser (documentary filmmaker), Zeynep Tufekci (techno-sociologist), Stuart Russell (AI developer), Max Tegmark (physicist and AI researcher), Kai-Fu Lee (entrepreneur), Grady Booch (scientist and philosopher), and Garry Kasparov (chess player). There's an artificial intelligence playlist on the TED site, and a shortlist from Computerworld worth listening to as well. I would really like to hear more from people like Fang Chen from Data61/CSIRO, and from scholars in philosophy, psychology, sociology and anthropology. That way it's possible to think more about how trust forms (as part of social systems and in social institutions) and how trust in artificial intelligence grows (the more accurate the AI, the greater the human reliance on it).

Why post on this today? Australia has just had terrible fires in four states, and the smoke hazards in Sydney and Canberra hit dangerously high records. In her TEDx Sydney talk (2018), Fang Chen describes how AI may be able to help with assessing fire risks and dealing with emergencies. Technology can be extraordinary, however it is only part of a much bigger picture. New Year's Day seemed like a good moment to consider the intersection of trust and two major social and environmental challenges we face: AI and climate change. There are no policies in place in Australia to address the impact of climate change, whereas exploratory steps are being taken to consider the impact of AI. Where AI fits into the social and environmental changes associated with climate change will come down to: the kinds of collective and political questions that get asked and answered; who does the asking and answering; and how the social and economic issues being faced are addressed by drawing on a spectrum of expertise.

Here's hoping 2020 brings more debate on AI and, more importantly, a collective understanding of where AI fits into the social contract, and how it can assist with living more sustainably, developing renewables, and reducing emissions and environmental damage. Why? It felt like the social contract got badly burned here in Australia as the page turned from one calendar year to another.

New technologies and changes in the weather affect us all.

*Kate Crawford’s ANU talk wasn’t recorded, but there are recordings of other talks she has given, and a pile of publications on her website that are worth exploring in depth.