
Navigating the Brave New World of AI Decision Oversight

What Does the MCA Mandate and Why?

The MCA’s latest AI-powered compliance system is set to go live on its MCA21 portal once the ongoing upgrade and migration of forms to high-security formats are complete, Mint reported, citing an unnamed official. While the system will compile a list of non-compliant companies, the final decision will rest with an authorized official. The objective is a “human-centric” approach to AI, giving non-compliant companies a chance to respond before any formal notices are issued. This mirrors the practice of regulators seeking public input on draft laws before finalizing them.

What Sets This Approach Apart?

MCA21 was initially designed to automate all processes related to enforcement of, and compliance with, the requirements of the Companies Act. In its “Vision 2019-2024” document, the MCA emphasized the use of AI, machine learning (ML), and “real-time analytics” to create a unified platform connecting the databases of all economic and financial regulators, thereby preventing data duplication. In March 2020, the Lok Sabha was informed that Version 3 of the MCA21 portal would incorporate AI and ML to enhance “security and threat management” solutions, among other features. This time around, the MCA intends to keep humans in the loop to oversee the results the AI generates.

What Does ‘Human in the AI Loop’ Mean?

Keeping a human in the AI loop typically means integrating a person into the decision-making process, or at least keeping them informed of it. Even in highly automated settings, such as “lights-out” factories, humans remain on hand to halt operations in an emergency using a “kill switch.” Policymakers are now applying this concept to the governance of AI.

What Are the Advantages of This Approach?

Left without moral guidance, generative AI models are known to produce convincing but incorrect answers, plagiarize, and infringe copyrights and trademarks. Even experts struggle to explain how unsupervised large language models (LLMs) such as OpenAI’s GPT-4 arrive at their conclusions. Moreover, who bears responsibility if such a system gives incorrect legal or medical advice? Consequently, companies now employ human content moderators, as well as data annotators who add labels, categories, and other context to improve model accuracy.

Can Humans Compete with AI’s Capabilities?

In 1983, Lt. Col. Stanislav Petrov of the Soviet Union averted a nuclear war by trusting his own judgment and dismissing reports of an incoming US missile strike (the computer had mistaken sunlight reflecting off clouds for a missile). But Petrov had 30 minutes to make his decision; today’s AI systems decide in milliseconds. Kobi Leins of King’s College London and Anja Kaspersen of the Carnegie Council argue that no human can comprehensively understand systems operating at that speed and complexity, let alone intervene meaningfully.