
IIT Madras Study Calls for ‘Participatory Approach’ to AI Governance in India and Abroad

Posted On: 07 NOV 2024 1:31PM by PIB Chennai

A study by researchers at the Indian Institute of Technology Madras (IIT Madras) and the Vidhi Centre for Legal Policy, Delhi, has called for participatory approaches to the development and governance of Artificial Intelligence in India and abroad.

The study identified the primary reasons why a participatory approach to AI development can improve both the outcomes of AI algorithms and the fairness of the process. Through an interdisciplinary collaboration, it sought to establish the need for and importance of a participatory approach to AI governance, grounding it in real-world use cases.

As operations across domains are increasingly automated through AI, the choices and decisions that go into their setup and execution can be transformed, become opaque, and obscure accountability. The participatory model highlights the importance of involving relevant stakeholders in shaping the design, implementation, and oversight of AI systems.

Researchers from the Centre for Responsible AI (CeRAI) at the Wadhwani School of Data Science and AI, IIT Madras, and the Vidhi Centre for Legal Policy, a leading think-tank on legal and technology policy, conducted this study in two parts, in a collaboration between technologists, lawyers, and policy researchers.

Their findings were published as two preprint papers on ‘arXiv’, an open-access archive of nearly 2.4 million scholarly articles in fields including physics, mathematics, and computer science. The papers can be viewed at the following links - https://arxiv.org/abs/2407.13100 and https://arxiv.org/abs/2407.13103

Highlighting the need for such studies, Prof. B. Ravindran, Head, Wadhwani School of Data Science and Artificial Intelligence (WSAI), IIT Madras, said, “The widespread adoption of AI technologies in the public and private sectors has resulted in them significantly impacting the lives of people in new and unexpected ways. In this context, it becomes important to inquire into how their design, development and deployment take place. This study found that persons who will be impacted by the deployment of these systems have little to no say in how they are developed. Seeing this as a major gap, this research study advances the premise that a participatory approach is beneficial to building and using more responsible, safe, and human-centric AI systems.”

Further, Prof. B. Ravindran, also the Head of the Centre for Responsible AI (CeRAI), IIT Madras, said, “The recommendations from this study are crucial for addressing several pressing issues in AI development. By ensuring that diverse communities are included in AI development, we can create systems that better serve everyone, particularly those who have been historically underrepresented. Increasing transparency and accountability in AI systems fosters public trust, making it easier for these technologies to gain widespread acceptance. Further, by involving a wide range of stakeholders, we can reduce risks like bias, privacy violations, and lack of explainability, making AI systems safer and more reliable.”

Elaborating further, Shehnaz Ahmed, Lead, Law and Technology, Vidhi Centre for Legal Policy, said, “Increasingly, there is a recognition of the value of participatory approaches in AI development and governance. However, the lack of a clear framework for implementing these principles limits their adoption. This report addresses critical challenges by offering a sector-agnostic framework that answers key questions such as how to identify stakeholders, involve them throughout the AI lifecycle, and effectively integrate their feedback. The findings demonstrate how participatory processes can enhance AI solutions, particularly in areas like facial-recognition technology and healthcare. Embracing a participatory approach is the pathway to making AI truly human-centric, a core aspiration of the IndiaAI mission.”

The Recommendations for Implementing Participatory AI include:

  • Adopt a Participatory Approach to AI Governance: Engage stakeholders throughout the entire AI lifecycle—from design to deployment and beyond—to ensure that AI systems are both high-quality and fair.
  • Establish Clear Mechanisms for Stakeholder Identification: Develop robust processes for identifying relevant stakeholders, guided by criteria like power, legitimacy, urgency, and potential for harm. The "decision sieve" model is a valuable tool in this process; a minimal illustrative sketch follows this list.
  • Develop Effective Methods for Collating and Translating Stakeholder Input: It is crucial to create clear procedures for collecting, analyzing, and turning stakeholder feedback into actionable steps. Techniques like voting and consensus-building can be used, but it is important to be aware of their limitations and potential biases.
  • Address Ethical Considerations Throughout the AI Lifecycle: Involve ethicists and social scientists from the beginning of AI development to ensure that fairness, bias mitigation, and accountability are prioritized at every stage.
  • Prioritize Human Oversight and Control: Even as AI systems become more advanced, it is essential to keep humans in control, especially in sensitive areas like law enforcement and healthcare.
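
To make the stakeholder-identification step concrete, the following minimal Python sketch scores candidate stakeholders on the four criteria named above (power, legitimacy, urgency, and potential for harm) and filters them through a simple threshold sieve. It is an illustration only, not code from the study: the Stakeholder fields, the equal weights, the 0.5 threshold, and the example candidates are all assumptions made for this sketch.

    from dataclasses import dataclass

    @dataclass
    class Stakeholder:
        # All scores are on a 0-1 scale; the scale itself is an assumption.
        name: str
        power: float       # influence over the AI system's deployment
        legitimacy: float  # validity of the stakeholder's claim
        urgency: float     # time-criticality of the claim
        harm: float        # potential for harm to this stakeholder

    def salience(s: Stakeholder, weights=(0.25, 0.25, 0.25, 0.25)) -> float:
        """Weighted sum of the four criteria (equal weights assumed here)."""
        wp, wl, wu, wh = weights
        return wp * s.power + wl * s.legitimacy + wu * s.urgency + wh * s.harm

    def decision_sieve(stakeholders, threshold=0.5):
        """Keep stakeholders whose salience clears the threshold, most salient first."""
        kept = [s for s in stakeholders if salience(s) >= threshold]
        return sorted(kept, key=salience, reverse=True)

    # Illustrative candidates drawn from the FRT case study below; the scores are invented.
    candidates = [
        Stakeholder("undertrials",          power=0.1, legitimacy=0.9, urgency=0.9, harm=0.9),
        Stakeholder("civil society groups", power=0.4, legitimacy=0.8, urgency=0.6, harm=0.3),
        Stakeholder("legal experts",        power=0.5, legitimacy=0.8, urgency=0.4, harm=0.1),
        Stakeholder("system vendor",        power=0.9, legitimacy=0.5, urgency=0.2, harm=0.1),
    ]

    for s in decision_sieve(candidates):
        print(f"{s.name}: salience = {salience(s):.2f}")

In practice, the weights and threshold would themselves be set through deliberation among stakeholders; the value of such a sieve is that it makes the inclusion criteria explicit and auditable rather than leaving them implicit.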

In the first paper, the authors investigated various issues that have arisen in AI governance in the recent past and explored viable solutions. By analyzing how beneficial a participatory approach has been in other domains, they proposed a framework that brings these lessons into AI governance.

The second paper analysed two use cases of AI solutions and their governance: one a widely deployed, well-documented solution, Facial Recognition Technology in law enforcement; the other a possible future application of a newer AI solution, Large Language Models, in the critical domain of healthcare.

CASE STUDIES

Facial Recognition Technology (FRT) in Law Enforcement: FRT systems have the potential to perpetuate societal biases, especially against marginalized groups, if not developed with care. The lack of transparency in how these technologies are deployed raises serious privacy concerns and risks of misuse by law enforcement. Engaging stakeholders like civil society groups, undertrials, and legal experts can help ensure that FRT systems are deployed in ways that are fair, transparent, and respectful of individual rights.

Large Language Models (LLMs) in Healthcare: In healthcare, the stakes are even higher. LLMs can sometimes generate inaccurate or fabricated information, posing significant risks when used in medical decision-making.

Furthermore, if LLMs are trained on biased data, they could exacerbate healthcare disparities. The opacity of these models' decision-making processes further complicates matters, making it difficult to trust their outputs. Involving doctors, patients, legal teams, and developers in the development and deployment of LLMs can lead to systems that are not only more accurate but also more equitable and transparent.

*** 


(Release ID: 2071446)

