The use of AI is pervasive and will only grow. As responsible business leaders and citizens, we need to recognize that ethics and principles must guide the development and proliferation of technology. We must be cognizant of the possibility of bias in computer-generated results. Few of us can detect such bias on our own, but we can join with others to raise awareness that these technological advances bring with them the need for oversight and governance. As a society, we are obligated to make sure technology benefits everyone and that its impact on our lives and work, now and in the future, is a positive one.
We love our devices, even as we are aware that every keystroke, search, tweet, like, and online order provides information about us to some computer program, somewhere. We know our personal data is fed into impersonal algorithms that steer our choices and predict our behaviors. We know this, and still, we can’t stop.
The same is true for business. Big data, and the use of artificial intelligence (AI) to mine it, are big business. “Last year, according to global management consultant McKinsey, tech companies spent somewhere between $20bn and $30bn on AI, mostly in research and development. Investors are making a big bet that AI will sift through the vast amounts of information produced by our society and find patterns that will help us be more efficient, wealthier and happier.”
But what happens when those all-powerful, omniscient algorithms don’t work any better than we humans do? What happens if they get it wrong? What is our responsibility as business leaders and as world citizens?
“Joanna Bryson, a researcher at the University of Bath, studied a program designed to ‘learn’ relationships between words. It trained on millions of pages of text from the internet and began clustering female names and pronouns with jobs such as ‘receptionist’ and ‘nurse’. Bryson says she was astonished by how closely the results mirrored the real-world gender breakdown of those jobs in US government data, a nearly 90% correlation.
“ ‘People expected AI to be unbiased; that’s just wrong. If the underlying data reflects stereotypes, or if you train AI from human culture, you will find these things,’ Bryson says.
“So who stands to lose out the most? Cathy O’Neil, the author of the book Weapons of Math Destruction about the dangerous consequences of outsourcing decisions to computers, says it’s generally the most vulnerable in society who are exposed to evaluation by automated systems. A rich person is unlikely to have their job application screened by a computer, or their loan request evaluated by anyone other than a bank executive. In the justice system, the thousands of defendants with no money for a lawyer or other counsel would be the most likely candidates for automated evaluation.
“In 2016, the Cornell University professor and former Microsoft researcher Solon Barocas claimed that current laws ‘largely fail to address discrimination’ when it comes to big data and machine learning.”
Note: Discussions about the impacts of AI and robotics were a cornerstone of the World Economic Forum Annual Meeting held in Davos-Klosters, Switzerland, in January 2017. Klaus Schwab, Founder and Executive Chairman of the Forum, in his article “The Fourth Industrial Revolution – What it Means and How to Respond,” calls for a “new collective and moral consciousness based on a shared sense of destiny” in dealing with the potential worldwide disruption that could be brought about by the Fourth Industrial Revolution.
Contact us and we can help you and your IT department think through your strategic approach to the use and management of big data and AI.
Read the full article at: amp.theguardian.com