Made by Humans: The AI Condition by Ellen Broad (Melbourne University Press, 2018) is a startling book. Broad prompts us to scratch beyond the surface of the smooth sheen of AI interfaces. She exposes the shaky foundations of data that support AI.
This book is written in an intelligent, conversational style. Broad opens with the story of how she found herself in the fields of data sharing, open data and AI ethics. An AI career guidance bot, relying on the mined data of online resumes, might not have predicted a tech career pathway for Ellen Broad. She cites a “desperate, random chance” (p. x) that led to a tech career including Head of Policy for the UK-based Open Data Institute and global roles as a data expert. Broad is now a consultant advising Australian organisations on data standards. She makes me feel a bit more legitimate about my tech career launched by a timetabling clash!
The “AI condition” that Broad is alerting us to is the amount of humanity that we leave littered around in our data. Our sticky fingerprints of human bias remain in our so-called “clean data”. As I read, I could relate to moments where I have relied on AI without question. Broad calls this “blind reliance” (p. 73): the ease with which we believe and follow computer-generated knowledge. It becomes difficult to know any better. We want to believe computers can transcend our biases.
It made me think of the eerie fun of playing Bot or Not, the Turing test for poetry, and finding a beautiful poem, written by AI, that seems so human.
This book also connects you to the bigger picture. It is brim-full of big cautionary tales. Data bias abounds, from self-driving cars and military systems to social media, justice, health and social welfare systems. No amount of careful data cleaning can remove the spectre of our human bias. If we then mine the data by writing algorithms to make predictions, we exacerbate the issue. If we don’t understand the conditions leading to those predictions, we are culpable.
The book has three distinct sections: Part I Humans as Data, Part II Humans as Designers and Part III Making Humans Accountable.
Humans as Data
I found this section to be a useful insight into some of the problems posed by relying on data that humans try to “clean”. Some incredible high-profile examples of data bias are included.
Humans as Designers
This was my favourite section of the book. I could relate it to my own work in higher education. Predictive use of data gathered about student behaviours is a very current issue.
Broad talks about the human propensity to design with bias. It was in this section, particularly, that I heard Broad’s clarion call for the field of artificial intelligence. We often can’t see the complex, closed algorithms written to make predictions on data. But we can ask questions about who builds them. We can explore the motivations of “AI practitioners deciding what data to use as an input, what machine learning models to choose, how to weave a piece of software together” (p. 75).
Making Humans Accountable
This section has examples of why humans need to be accountable for the decisions AI systems make. Can we be ethical computer programmers? How much power do tech companies or individuals have over data? There are current examples of organisations grappling with the murky legal and ethical aspects of AI.
Who should read this book?
You. Everyone. Decisions are being made based on our data. This book helps you understand that you should be part of the who’s who of AI. There are further resources and readings on AI and data to keep you discovering more.
You can follow Ellen Broad on Twitter.
Made by Humans: The AI Condition by Ellen Broad (Melbourne University Press, 2018).