Generative Artificial Intelligence (GenAI) has dominated almost every technology discussion worldwide. Most major multinationals are adopting it, whether through provider-based tools, embedded solutions, integrations and partnerships, or in-house development as an extension of existing AI and analytics capabilities.
According to McKinsey, the widespread adoption of GenAI could contribute up to US$4.4 trillion (S$5.9 trillion) to the global economy every year, presenting a trillion-dollar opportunity for organisations across industries. From healthcare, manufacturing, and banking to retail, GenAI can automate supply network controls, improve customer support and personalised experiences, and streamline backend operations.
Despite its enormous potential, GenAI has its limitations, from copyright, data protection and privacy, and ethical issues to misinformation, bias, lack of explainability, and “hallucinations”. A global survey, also by McKinsey, found that more than half of respondents cited inaccuracy (56%) and cybersecurity (53%), followed by intellectual property infringement (46%), as the most relevant risks of generative AI adoption.
AI and Regulation
The Asia Pacific region has received strong government support for fostering AI innovation, resulting in fewer regulatory obstacles. According to an IDC report, the region has leveraged this favourable regulatory environment to tap into the potential of GenAI, and is leading other markets in prioritising GenAI investments, with two-thirds of APAC organisations either exploring potential novel applications or already investing in the technology.
To address the risks of AI, Singapore has established the AI Verify Foundation. The not-for-profit organisation aims to harness the collective power and contributions from the global open-source community to facilitate the adoption of responsible AI and promote best practices and standards for AI.
The nation’s Infocomm Media Development Authority has also collaborated with AI technology company Aicadium to release a discussion paper proposing a framework for the “trusted and responsible” adoption of GenAI, including how to address the emerging technology’s key risks, from hallucinations and accelerated disinformation to embedded biases.
Singapore and its ASEAN neighbours are also working together to develop regional guidelines on the responsible use of AI, which will be released early next year.
Meanwhile, the European Union has developed legislation that requires companies behind GenAI platforms, such as ChatGPT, to disclose any copyrighted material used to develop their systems. The United States Department of Justice (DOJ) and other US agencies have also issued a joint statement on “Enforcement Efforts Against Discrimination and Bias in Automated Systems”.
AI and Industry
Unfortunately, the industry tends to forget that GenAI is meant to support and assist human workers, not replace them. The more powerful and widely used GenAI becomes, the greater its potential for negative influence. Ensuring its trustworthiness and transparency will require a careful and responsible approach.
To illustrate, the education sector has been hugely impacted by generative AI. There have been instances where students have used ChatGPT to game the system, passing a range of exams from secondary and tertiary assessments to legal and medical board exams. Educators have rushed to deploy tools that attempt to detect AI-generated essays and “cheating”.
With deepfake images and videos already prevalent on social media, and content being produced in the style of famous writers or artists, trust and believability are more important than ever; they need to be built into the models themselves and enforced by authorities.
There is a need for regulation in industries where compliance and safety are crucial, to ensure accountability, mitigate risks and safeguard public welfare. The productivity and data-analysis benefits of applying GenAI models should be combined with human oversight, which may involve incorporating additional checks and validation processes to verify compliance, as the sketch below illustrates.
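As a minimal sketch of what such human oversight might look like in practice, the hypothetical Python workflow below runs a generated draft through an automated compliance screen and refuses publication until a human reviewer signs off. Every name here (generate_draft, passes_compliance_checks, the banned-terms rule) is an illustrative assumption, not a reference to any specific product or framework.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """A piece of GenAI output awaiting review."""
    content: str
    approved: bool = False

def generate_draft(prompt: str) -> Draft:
    """Placeholder for any GenAI provider call; swap in a real API here."""
    return Draft(content=f"[model output for: {prompt}]")

def passes_compliance_checks(draft: Draft) -> bool:
    """Hypothetical automated screen, e.g. flagging restricted phrases."""
    banned_terms = ("guaranteed returns", "risk-free")  # illustrative rules only
    return not any(term in draft.content.lower() for term in banned_terms)

def publish(draft: Draft) -> None:
    """Refuse to release anything that has not been explicitly approved."""
    if not draft.approved:
        raise PermissionError("Draft requires human sign-off before release.")
    print("Published:", draft.content)

# Workflow: generate -> automated checks -> human review -> publish.
draft = generate_draft("Summarise our Q3 results for customers")
if passes_compliance_checks(draft):
    draft.approved = True  # in practice, set only after a human reviewer approves
    publish(draft)
```

The point of the pattern is that automated checks narrow the field, but release remains gated on an explicit human decision rather than on the model's own output.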
Building a Transparent and Safe Path for AI
Even as it becomes apparent that AI requires some regulation, technology providers cannot simply outsource the ‘how’ to regulators; the safeguards need to be inherent to the technology itself. This is where graph technology can make AI more productive and ethical.
Knowledge graphs make Large Language Models (LLMs), the technology behind GenAI, less biased, more accurate, and better behaved. They constrain the model to focus on the right answers by grounding it in curated, high-quality, structured data, significantly reducing the risk of errors and ‘hallucinations’.
If biased training data is used to create a hiring algorithm that favours specific demographics, the algorithm may discriminate against other groups. But because knowledge graphs capture human-curated ‘golden relationships’, they can be used to correct an LLM’s mistakes, and these corrections can in turn serve as examples to train more refined machine learning algorithms. By combining an LLM with a knowledge graph, companies can make the most of their accumulated data, as the sketch below shows.
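As a minimal sketch of this grounding pattern, the Python snippet below retrieves curated relationships from a Neo4j database and supplies them to a model as the only facts it may answer from. The connection details, the Entity graph schema, and the ask_llm placeholder (standing in for any LLM provider's API) are all assumptions for illustration, not a prescribed implementation.

```python
from neo4j import GraphDatabase

# Assumed local deployment details -- replace with your own credentials.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def ask_llm(prompt: str) -> str:
    """Placeholder for any LLM API call (e.g. an HTTP request to a provider)."""
    raise NotImplementedError("Wire this up to an LLM provider of your choice.")

def fetch_curated_facts(topic: str) -> list[str]:
    """Pull human-curated 'golden relationships' about a topic from the graph."""
    query = (
        "MATCH (a:Entity {name: $topic})-[r]->(b:Entity) "
        "RETURN a.name AS subject, type(r) AS relation, b.name AS object"
    )
    with driver.session() as session:
        return [
            f"{rec['subject']} {rec['relation']} {rec['object']}"
            for rec in session.run(query, topic=topic)
        ]

def grounded_answer(question: str, topic: str) -> str:
    """Constrain the LLM by prepending verified graph facts to the prompt."""
    facts = fetch_curated_facts(topic)
    prompt = (
        "Answer using ONLY the facts below. If they are insufficient, say so.\n"
        "Facts:\n" + "\n".join(f"- {fact}" for fact in facts) +
        f"\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```

Because the model is instructed to answer only from graph-derived facts, a missing or disputed relationship surfaces as “insufficient facts” rather than as a confident hallucination.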
GenAI’s potential is powerful and profound, but its greatest opportunities will only be unlocked through responsible use. We need to counter bias both in how the models are defined and in how the AI is trained and fed.
Industry and government leaders in the region need to work together to find a safe and reliable path for the use of AI, such as the implementation of shared risk protocols and regular auditing – ensuring that safety, rigour, and transparency are adhered to, benefiting organisations and society at large.
By Nik Vora, Vice President, APAC, Neo4j
This article was first published by Tech Collective