
Nasscom releases guidelines for responsible AI aimed at researchers, developers and users

Nasscom, the apex body of India's software and services industry, has recently published a set of guidelines defining the responsible use of generative artificial intelligence (AI). The guidelines target researchers, developers, and users of generative AI models and applications, emphasizing the need for comprehensive risk assessments and internal oversight throughout the entire lifespan of a generative AI solution.

The primary objective of these guidelines is to address and mitigate potential harms associated with generative AI, such as misinformation, infringement of intellectual property, violation of data privacy, propagation of biases, disruption of life and livelihood on a large scale, environmental degradation, and malicious cyberattacks. Nasscom aims to raise awareness about these guidelines, develop specific guidance for different use cases, and enhance existing resources for responsible AI.

The guidelines advocate cautious usage and risk assessment during the development of generative AI solutions, ensuring that potential harms are carefully evaluated throughout their lifecycle. They also recommend publicly disclosing data sources and algorithms, unless developers can demonstrate that such disclosure could jeopardize public safety.

Furthermore, the guidelines stress the importance of explainability in the outputs generated by generative AI algorithms and the establishment of grievance redressal mechanisms to address any issues that may arise during the development or use of these solutions.

According to Nasscom chairperson and Microsoft India president Anant Maheshwari, these guidelines will help unlock the full potential of AI within the ecosystem, fostering a future that harmoniously integrates human ingenuity with technological advancement.
