The United Nations has described Artificial Intelligence (AI) as “humanity’s new frontier” that will “lead to a new form of human civilisation”.
But the use of AI in our day-to-day lives presents many ethical dilemmas. Put simply: AI can’t always be trusted to be neutral and unbiased.
Like it or loathe it, AI is already an intrinsic part of many of the things we all take for granted. From our social media feeds to the directions provided by our smartphone apps, the use of AI is everywhere. The benefits are huge, but there are also risks.
AI bias has already become a problem in predictive applications such as recruitment and crime fighting. Amazon had to scrap an AI recruitment tool that was found to have gender bias. The AI had been trained mainly on male résumés (due to the predominance of men in technology), so it learned to seek the same patterns when vetting candidates.
Twitter also faced the issue of an image-cropping algorithm that was more likely to crop to lighter-skinned faces, as did Zoom when its virtual backgrounds struggled to detect darker-skinned users. Automated facial analysis is also less accurate for women and for people with darker skin.
While human-run systems obviously contain bias, machines can magnify these biases further. As we increasingly use AI to automate manual systems and processes, it’s critical that we remain aware of how and why a machine has learnt something. If an algorithm is fed bad, biased, or unrepresentative data, it will make bad, biased decisions. It is therefore of utmost importance to consider human outcomes in the design and evaluation of AI, and not to rely on data alone.
A humanist approach
The UN emphasises the need to develop AI through a humanist approach. UNESCO is calling for international and national regulatory policies and has drafted a Recommendation on the Ethics of Artificial Intelligence. The European Union has also proposed regulations to harmonise rules on artificial intelligence.
User experience (UX) and customer experience (CX) experts are increasingly being asked by companies to provide the guardrails that help AI deliver positive outcomes for all. This involves designing with the communities that an AI will affect, and including the diverse perspectives of citizens, policy makers, and multidisciplinary teams.
As Australian state and federal governments increasingly look to harness the positive potential of AI, for example in e-Health, fraud detection, and support for vulnerable people, they are also developing policies and guidelines. In March 2022, the NSW government will launch its AI Assurance Framework, supported by AI Ethics Principles, to assist agencies to design, develop and use the technology “safely and securely”. The framework supports project teams using AI to comprehensively analyse and document their projects’ AI-specific risks, implement risk-mitigation strategies and establish clear governance and accountability measures.
One imperative is to have more diversity within the AI community. This would make it better equipped to identify bias. AI4ALL is a US-based non-profit that seeks to create more access for underrepresented people in the AI field. It believes that “diverse perspectives yield more innovative, human-centred, and ethical products and systems”.
Ethical, empathetic leadership
Achieving the ethical design and implementation of AI in the new digital era also requires a new kind of leadership. As we’ve seen over the past year, some of the world’s biggest technology companies have struggled with issues such as privacy, “fake news”, workplace discrimination and the misuse of personal data. Traditional management culture, which too often prioritises profits above people and relies on secrecy and litigation to cover up malpractice, is no longer appropriate for a democratised, digital world. Change is required if we are to successfully embrace the potential of AI, while avoiding the pitfalls of bias.
The World Economic Forum notes that leadership can no longer remain hierarchical, but needs to be more democratic and collaborative. Leaders need to be endowed with emotional intelligence, empathy, grit, and an inherent willingness to nurture, strengthen and build teams. As AI is embedded in every facet of our lives — finance, travel, health, leisure, education, employment, environment and civic engagement — having leaders across these sectors who embody these characteristics will be essential to ensuring AI is used and applied responsibly in our daily lives.
This is not only because these leaders will set the vision and values of AI use within their businesses and organisations, but also because collaborative and democratic leaders inherently understand the value of diverse workforces. An AI future built by a diverse workforce will be a future built for everyone in our community. It is therefore imperative that we have more diversity within the AI community and the workforces that commission, design, build and use AI. Diversity of views will better equip an organisation to identify the inherent biases it is building into its AI.
While we can’t accurately predict how the AI we implement today will affect our world of tomorrow, it is crucial to ensure that diverse, empathetic leaders are at the helm, able, through their values, behaviours and attitudes, to support the diverse and ethical design, build and use of AI. The decisions we make now will have far-reaching consequences for the future of individuals and communities, and making diversity as well as ethics the cornerstone of our efforts will ensure we have a fair, sustainable and equitable future. The humanity shown by our scientific, business and technology leaders today is the key to unlocking the unbiased AI we all need for the future.
By Claire Rawlins, Managing Director at Publicis Sapient Australia
This article was first published by the Judith Neilson Institute.