While ChatGPT and generative AI are largely hailed as tools that will usher in a new era of working defined by intelligent assistants and automation, there are a handful of concerns around the technology’s use, including privacy, security, the pace of innovation and ethics.
The technology is exciting and has the power to unlock new capabilities and efficiencies that any IT leader would dream of, but it should be carefully evaluated and deployed just like any other enterprise tool, says Chirag Dekate, vice president analyst at Gartner.
According to Dekate, IT leaders should look past the headlines about generative AI, since the constant barrage of news, new products and integrations can be overwhelming. It can be hard to distinguish between competing products and services, especially when organizations lack internal AI expertise.
Rather than diving headfirst into generative AI, IT leaders should evaluate the risks of engaging with these emerging AI models, examine the data and ethics policies of different vendors, and decide whether to wait for more advanced and accurate models to come out.
According to Dekate, there are three key considerations that IT leaders should evaluate when deploying generative AI tools in their organization.
Data security
Above all else, consumer adoption of generative AI shouldn't be treated as equivalent to business adoption: some consumer-facing tools are trained on user inputs, meaning any proprietary information or sensitive data entered into them could become accessible to a competitor.
IT leaders should disallow widespread use of consumer-facing generative AI in enterprise applications, Dekate says.
“Any query you ask or any prompt you make essentially gets subsumed into the data model that is operating behind the scenes,” Dekate says. “With the enterprise, you cannot essentially compromise your corporate IP or customer data by integrating consumer-facing APIs into your product suite.”
IT leaders should take a hard look at the data policies of generative AI providers to ensure that their proprietary data is completely isolated from the underlying model, Dekate says.
“The first question they should ask is, ‘How is my data protected?’”
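One way to act on that concern before any vendor answer arrives is to scrub known-sensitive content from prompts before they leave the enterprise boundary. The sketch below is purely illustrative — the patterns and the `redact` helper are hypothetical, and a real deployment would rely on a proper data-loss-prevention service rather than a few regular expressions:

```python
import re

# Hypothetical examples of patterns an enterprise might treat as sensitive.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder
    before the prompt is sent to an external generative AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Summarize the contract for jane.doe@example.com"))
# The email address is replaced with a "[REDACTED EMAIL]" placeholder.
```

The point of the sketch is the placement of the check: redaction happens inside the enterprise, so even if the provider retains prompts for training, the sensitive values never reach its model.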
Innovation risk
Dekate’s comments to TechDecisions came just days before thousands of AI and tech leaders called on the tech industry to pause AI investments and focus on making current models safer and more trustworthy.
According to Dekate, the rapid pace of innovation in AI could have security and privacy implications.
“The biggest risk I see in the rapid pace of innovation is potential slippage in how we secure some of the underlying data ecosystems, or inadvertently create a mechanization of noise,” Dekate says.
IT leaders need to put guardrails in place so they aren’t exposed to some of those risks.
However, generative AI companies have largely been transparent about those risks and about how their models are trained. OpenAI, for example, says GPT-4 finished training last year and that the team spent the intervening months vetting and testing the model. While the company says GPT-4 is more capable than GPT-3, it is still not immune to hallucinations.
That kind of transparency should give organizations a blueprint to help them decide where the technology can help them, Dekate says.
Ethics and human control
Any application of generative AI in the business world should also adhere to ethics and responsibility frameworks that keep control with human operators. Dekate gives the example of using generative AI to create a marketing campaign by turning a simple text prompt into a blog post; that content may contain errors, since generative AI tools are not yet entirely trustworthy.
Any enterprise application of generative AI should include guardrails that ensure human supervision is involved and that the output of these models is at least quality checked and approved, Dekate says.
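In code, such a guardrail can be as simple as refusing to release model output until a named human reviewer signs off. The workflow below is a hypothetical sketch of that idea, not any vendor's actual review feature:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    """Generated content that must pass human review before release."""
    text: str
    approved_by: Optional[str] = None

def approve(draft: Draft, reviewer: str) -> Draft:
    # A human reviewer quality-checks the text and signs off on it.
    draft.approved_by = reviewer
    return draft

def publish(draft: Draft) -> str:
    # Guardrail: unreviewed model output never leaves the pipeline.
    if draft.approved_by is None:
        raise PermissionError("Generated content requires human approval")
    return draft.text

draft = Draft(text="AI-drafted blog post about our spring campaign")
# Calling publish(draft) at this point would raise PermissionError.
print(publish(approve(draft, reviewer="marketing-lead")))
```

The design choice worth noting is that publication fails closed: forgetting the review step produces an error rather than silently shipping unchecked output.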
“Without the necessary ethical guardrails and without the necessary responsibility guardrails, you run the risk of mechanizing noise and potentially showing up in the headlines for all the wrong reasons,” Dekate says.