As ChatGPT and other generative AI tools are integrated throughout enterprise software, organizations should act now to formulate an enterprise-wide strategy for the trust, risk and security issues arising from this rapidly developing field, according to Gartner. Blocking ChatGPT altogether, the firm says, is a viable option.
The analyst firm published a Q&A featuring Avivah Litan, a vice president analyst, who says a new class of AI trust, risk and security management (AI TRiSM) tools needs to be developed to help organizations manage data and process flows between users and providers of these AI models.
However, no tools currently on the market provide users with those privacy assurances or with effective content filtering that helps prevent errors, hallucinations, or the exposure of copyrighted materials or confidential information, Litan says.
Generative AI models, while potentially revolutionary, are far from perfect and are prone to mistakes, including factual errors and off-base responses. In addition, these tools are increasingly being used for malicious purposes, including cyberattacks, deepfakes and more.
“In a recent example, an AI-generated image of Pope Francis wearing a fashionable white puffer jacket went viral on social media,” Litan says in Gartner’s published article. “While this example was seemingly innocuous, it provided a glimpse into a future where deepfakes create significant reputational, counterfeit, fraud and political risks for individuals, organizations and governments.”
In addition to well-known data privacy and copyright issues associated with these AI models, Litan calls out more advanced cybersecurity concerns, saying providers of AI models don’t give users the tools they need to audit all the security controls in place.
“The vendors also put a lot of emphasis on ‘red teaming’ approaches,” Litan says. “These claims require that users put their full trust in the vendors’ abilities to execute on security objectives.”
Set policies, monitor, and potentially block ChatGPT
To deal with these risks, organizations should establish a governance and compliance framework for enterprise use of out-of-the-box solutions, and those policies should prohibit employees from asking questions that expose sensitive organizational or personal data, Litan says.
These policies could go as far as blocking unsanctioned use of ChatGPT and similar solutions and monitoring event logs for violations. As of May 2023, several companies, including Apple, Amazon and Samsung, have blocked employee use of ChatGPT.
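For organizations that take the monitoring route, the mechanics can be as simple as scanning web proxy or gateway logs for connections to known generative AI endpoints. The following is a minimal Python sketch of that idea, not a Gartner recommendation: it assumes a hypothetical space-separated log format (timestamp, user, destination host) and a hypothetical file name, where a real deployment would pull from a SIEM or secure web gateway.

```python
# Minimal sketch: flag unsanctioned generative AI usage in proxy logs.
# Assumes a hypothetical plain-text log where each line contains a
# timestamp, a username and a destination host, separated by spaces.

BLOCKED_HOSTS = {
    "chat.openai.com",
    "chatgpt.com",
}

def find_violations(log_path: str) -> list[str]:
    """Return log lines whose destination host is on the blocklist."""
    violations = []
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip malformed lines
            host = parts[2]
            if host in BLOCKED_HOSTS:
                violations.append(line.rstrip())
    return violations

if __name__ == "__main__":
    # "proxy.log" is a placeholder path for illustration only.
    for entry in find_violations("proxy.log"):
        print("policy violation:", entry)
```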
Jason Wong, a distinguished vice president analyst at Gartner, previously told TechDecisions that organizations should establish a “center of excellence” to bring together the activities using the technology to better understand their collective impact. While stopping short of calling for organizations to block the use of ChatGPT and other AI models, Wong says IT leaders should actively monitor its usage and keep an open dialogue with users rather than denying them the technology outright.
However, Litan says a “prompt engineering” approach, which uses tools to create, tune and evaluate prompt inputs and outputs, requires additional steps to protect internal and other sensitive data used to engineer prompts on third-party infrastructure.
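One such protective step is scrubbing sensitive values out of prompt text before it ever reaches third-party infrastructure. The sketch below is purely illustrative: the regex patterns and placeholder tokens are assumptions standing in for a real data-loss-prevention filter, which would be far more thorough.

```python
import re

# Illustrative redaction patterns; a production filter would be broader.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN-like patterns
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),        # card-number-like digit runs
]

def redact(prompt: str) -> str:
    """Replace sensitive-looking substrings before the prompt leaves the org."""
    for pattern, token in REDACTIONS:
        prompt = pattern.sub(token, prompt)
    return prompt

print(redact("Contact jane.doe@example.com about card 4111 1111 1111 1111"))
# -> "Contact [EMAIL] about card [CARD]"
```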
IT leaders should create and store engineered prompts as immutable assets, representing vetted prompts that can be safely reused.
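One way to make a prompt asset effectively immutable is to address it by a hash of its own content, so any edit yields a new asset rather than silently changing a vetted one. The sketch below illustrates the idea; the in-memory dictionary and function names are assumptions standing in for whatever registry an organization actually uses.

```python
import hashlib

# Hypothetical registry: content-addressed storage for vetted prompts.
_registry: dict[str, str] = {}

def store_prompt(prompt: str) -> str:
    """Register a vetted prompt and return its content-addressed ID."""
    asset_id = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    _registry.setdefault(asset_id, prompt)  # never overwrite an existing asset
    return asset_id

def load_prompt(asset_id: str) -> str:
    """Fetch a prompt and verify it still matches its hash."""
    prompt = _registry[asset_id]
    if hashlib.sha256(prompt.encode("utf-8")).hexdigest() != asset_id:
        raise ValueError("prompt asset has been tampered with")
    return prompt

pid = store_prompt("Summarize the attached policy document in plain language.")
print(pid[:12], load_prompt(pid))
```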
Update May 22, 2023: This article has been updated to include examples of companies blocking ChatGPT.