
“It’s undeniable that generative AI adoption brings with it a heightened risk to data security.” – Kartik Shahani, Country Manager, India, Tenable

Kartik Shahani, Country Manager, India – Tenable

IT Voice: How can Generative AI be effectively leveraged for preventative cyber defence strategies?

Kartik: Generative AI can arm defenders with the capacity to take a more preventative approach to cybersecurity and build up defences that make it more difficult for attackers to perpetrate breaches. With generative AI-powered tools assuming the role of cyber assistants and guiding users through specialised solutions, preventative security becomes more accessible. Generative AI can identify patterns and automate critical actions, making preventative cybersecurity a scalable proposition and helping defenders stay a step ahead of their adversaries.

IT Voice: In a rapidly evolving market with numerous Generative AI solutions, what criteria should organisations use to assess the efficacy and accuracy of these tools for cybersecurity?

Kartik: Organisations implementing generative AI functionality into cybersecurity have the responsibility of doing so thoughtfully. It won’t work if organisations merely introduce the technology without careful consideration, as that would lead to disseminating inaccurate data, opening the door to unintended and possibly undesirable applications, and increasing the risk of cyberattacks.

Generative AI is only as good as the data it’s built on. While looking for the right solutions, organisations must consider whether the solution breaks down silos and brings all preventative security data into a single data lake, ensuring the generative AI assistant is not prone to inaccuracies.

IT Voice: What are the specific ways in which Generative AI can assist security professionals in managing and mitigating their exposure to cyber threats?

Kartik: Generative AI assistants act as a force multiplier, helping already short-staffed security teams with some crucial tasks. Security practitioners are often challenged with finding asset data, which is difficult to do manually. It often requires figuring out what filters are available within the existing solution, getting a grasp of which assets are supported by those filters, and running through mountains of data until they discover exactly what is needed. Generative AI makes this task easier as security teams only have to ask the right questions using natural language search queries. It becomes much easier to analyse assets across their environments.
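To make the pattern concrete, the sketch below shows how an assistant might turn a natural-language question into a structured asset filter. It is a minimal illustration: the asset fields, the filter schema and the stubbed ask_model call are assumptions for this example, not Tenable’s actual API.

```python
import json

# Illustrative asset inventory; field names are assumptions for this sketch.
ASSETS = [
    {"name": "web-01", "os": "Ubuntu 22.04", "internet_facing": True, "last_seen_days": 2},
    {"name": "db-01", "os": "Windows Server 2019", "internet_facing": False, "last_seen_days": 45},
]

def ask_model(question: str) -> str:
    """Stand-in for a generative AI call. A real assistant would prompt an
    LLM with the available filter fields and return its structured answer."""
    return json.dumps({"internet_facing": True, "last_seen_days_max": 30})

def search_assets(question: str) -> list:
    """Apply the model-produced JSON filter to the inventory."""
    f = json.loads(ask_model(question))
    return [
        a for a in ASSETS
        if a["internet_facing"] == f.get("internet_facing", a["internet_facing"])
        and a["last_seen_days"] <= f.get("last_seen_days_max", float("inf"))
    ]

print(search_assets("Which internet-facing assets were seen in the last month?"))
```

The point is the division of labour: the model handles the natural-language interpretation, while a deterministic filter performs the actual data retrieval, which keeps the answers auditable.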

Secondly, generative AI can help security teams get a comprehensive understanding of cyber risk within the right context. This generally requires analysis of multiple factors, including exposure specifics, asset characteristics, user privileges, external accessibility, and attack paths. Generative AI offers security teams a concise written summary of attack path analyses, empowering security practitioners, even those with limited path-analysis experience, to gain the right context on what an attacker sees and proactively mitigate risks.
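A hedged sketch of the underlying mechanics: first enumerate the attack paths from an internet-facing asset to a critical one, then hand those raw paths to a model for the written summary. The graph and asset names below are invented for illustration.

```python
# Toy asset graph: edges point from a compromised host to hosts reachable
# from it. "vpn-gw" is internet-facing; "ad-dc" is the critical asset.
EDGES = {
    "vpn-gw": ["jump-host"],
    "jump-host": ["file-srv", "ad-dc"],
    "file-srv": ["ad-dc"],
    "ad-dc": [],
}

def attack_paths(src, dst, seen=()):
    """Depth-first enumeration of simple paths from src to dst."""
    if src == dst:
        return [[src]]
    paths = []
    for nxt in EDGES.get(src, []):
        if nxt not in seen:
            paths += [[src] + p for p in attack_paths(nxt, dst, seen + (src,))]
    return paths

for p in attack_paths("vpn-gw", "ad-dc"):
    print(" -> ".join(p))
# The generative AI layer's job is the last step: turning these raw paths
# into a concise, plain-language summary with remediation context.
```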

Finally, generative AI enables preventative action, which has long been a challenge for cyber defenders. Organisations grapple with thousands of vulnerabilities and misconfigurations, and identifying which ones pose the greatest risk has always been difficult. Generative AI boosts security teams’ ability to prioritise risks and the actions needed to address and mitigate them. It can deliver actionable insights based on the most critical cyber risks facing the business, helping security teams proactively address risks and thereby reduce the organisation’s overall risk exposure.
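That prioritisation step can be pictured as a scoring pass over findings, as in the generic sketch below. The weighting and the findings themselves are illustrative assumptions, not Tenable’s actual scoring model.

```python
# Made-up findings: severity (CVSS), known exploitation, and how critical
# the affected asset is to the business (0..1).
findings = [
    {"id": "finding-1", "cvss": 9.8, "exploited": True, "asset_criticality": 0.9},
    {"id": "finding-2", "cvss": 7.5, "exploited": False, "asset_criticality": 0.4},
    {"id": "finding-3", "cvss": 5.3, "exploited": True, "asset_criticality": 0.8},
]

def risk_score(f):
    # Known exploitation weighs heavily; business context scales the result.
    return f["cvss"] * (1.5 if f["exploited"] else 1.0) * f["asset_criticality"]

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f['id']}: {risk_score(f):.2f}")
```

Note how a severe flaw on a low-value, unexploited asset can rank below a moderate flaw that is actively exploited on a critical one, which is exactly the contextual judgement described above.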

IT Voice: How can GenAI-powered cybersecurity solutions contribute to addressing the industry’s skill gap by enhancing the capabilities of security professionals?

Kartik: Generative AI has the capacity to integrate with vulnerability management tools to quickly detect vulnerabilities and automate remediation. It helps security teams increase their efficiency and redirect additional resources towards thwarting cyberattacks.

IT Voice: What are the potential limitations or challenges associated with implementing Generative AI in cybersecurity, and how can organisations overcome them?

Kartik: Generative AI’s increasing popularity also brings to the fore ethical dilemmas around both corporate and personal awareness. Generative AI implementation must account for moral obligations, and its use in cyber defence raises particular dilemmas. This has prompted the creation of ethical frameworks, both internally and across industries, to confer a sense of responsibility in generative AI deployment.

IT Voice: Can you provide examples of successful applications of Generative AI in cyber defence that have resulted in improved security posture?

Kartik: To help organisations stay ahead of emerging threats, Tenable launched ExposureAI, new generative AI capabilities and services across the Tenable One Exposure Management Platform. ExposureAI surfaces high-risk exposure insights and recommends actions, such as addressing software vulnerabilities, cloud misconfigurations, web app flaws and identity weaknesses.

Recently, Tenable Research made generative AI-developed research tools available for free to the cybersecurity community. Now Tenable is using generative AI to put more power than ever in the hands of security teams, so they can be more efficient and focus more resources on preventing successful attacks.

IT Voice: How can organisations ensure the ethical use of Generative AI in cybersecurity, particularly in terms of privacy and data protection?

Kartik: It’s undeniable that generative AI adoption brings with it a heightened risk to data security. Finding the right balance requires an understanding of the evolving landscape of generative AI technology and the associated risks.

Organisations must create a culture where employees are aware of the risks generative AI poses. It requires training employees on the technical aspects of generative AI and educating them about the ethical implications of using such tools. Organisations need a comprehensive understanding of the types of data that are more likely to be leaked and the potential ways these leaks could occur.

One way of ensuring this is to proactively and continuously monitor generative AI tool usage. This involves implementing robust technical controls such as data encryption, access control, and anomaly detection systems that can flag unusual patterns in data usage or data transfer. This is only possible with a holistic policy framework that defines the acceptable use of generative AI tools and outlines the repercussions of non-compliance. The framework must be dynamic, adapting to the changing landscape of AI technology and the risks that come with it.
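As a minimal sketch of the anomaly-detection piece, the example below flags unusually large data transfers to generative AI tools against a baseline. The log values and the z-score threshold are assumptions for illustration, not a specific product’s behaviour.

```python
from statistics import mean, stdev

# Daily volume (MB) of data sent to generative AI endpoints, e.g. taken
# from proxy logs. Baseline values here are invented for the example.
baseline = [12, 9, 15, 11, 10, 13, 8, 14, 12, 11]
today = {"alice": 13, "bob": 96}

mu, sigma = mean(baseline), stdev(baseline)
for user, mb in today.items():
    z = (mb - mu) / sigma
    if z > 3:  # simple z-score rule; production systems use richer models
        print(f"ALERT: {user} sent {mb} MB to generative AI tools (z={z:.1f})")
```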

Adopting a holistic approach that combines education, technical safeguards, and a dynamic policy framework can help businesses strike a balance between generative AI usage and data security.

IT Voice: What ongoing developments or trends should security professionals be aware of in the field of Generative AI and its role in cyber defence?

Kartik: What we are seeing with generative AI is only the beginning. If history has taught us anything, it’s that cybercriminals will find ways to leverage new technology or capabilities to increase the scale of their attacks. Cyber defenders must continue to innovate to defend against cyber threats.
