ChatGPT and other new AI models are widely believed to be a game changer across many industries, including IT. However, IT leaders are also expressing concerns about implementing generative AI too soon, according to new Salesforce research.
The CRM giant’s survey of more than 500 IT leaders finds that 67% are prioritizing generative AI for their businesses within the next year and a half, despite some concerns over ethics, accuracy, security and other issues.
Salesforce released its report as the company announced integrations with ChatGPT creator OpenAI, including Einstein GPT, a new generative AI tool for Salesforce CRM designed to help customers generate AI-created content for sales, marketing, commerce and IT interactions. In addition, the Slack owner is launching the beta of a ChatGPT app for Slack designed to deliver instant conversation summaries, research tools and writing assistance.
According to the research, 57% of leaders call generative AI a “game changer,” with respondents saying the technology can help them better serve customers, take advantage of data and operate more efficiently.
Even among respondents who called generative AI “overhyped,” 80% say the technology will help them better serve customers, reduce team workloads and help their organizations work more efficiently.
Specifically in IT, early use cases of the technology have included writing scripts and code, identifying vulnerabilities, and automating other mundane tasks. The report reflects how the technology can help make work more efficient, but it also shows why IT leaders may want to pump the brakes on widespread implementation of generative AI.
According to the report, IT leaders remain skeptical about the ethical implications of generative AI, with 59% saying its outputs can be inaccurate and 63% saying its outputs show bias. In addition, the technology is not viewed as sustainable: 71% of IT leaders say generative AI would increase their carbon footprint through increased IT energy use.
When asked about top concerns regarding ChatGPT-like AI, security was the most cited, with 71% of IT leaders saying generative AI will introduce new security risks to data. Other concerns included a lack of employee skills (66%), difficulty integrating generative AI into current tech stacks (60%) and the lack of a unified data strategy (59%).
Due to those concerns, technology leaders need to take steps to equip their organizations with the tools and skills to successfully leverage generative AI. According to Salesforce, 55% say they need accurate, complete and unified data, while 54% say they need enhanced security measures to protect the business from cybersecurity threats that might arise from the use of generative AI.
The company says organizations should work together and share their knowledge to help improve the technology and make it a reality across the enterprise. According to the research, 81% of senior IT leaders say generative AI should combine public and private data sources, 82% say businesses should work together to improve functionality, and 83% say businesses should collaborate to ensure the ethical use of the technology.
Last month, Salesforce released its five guidelines for responsible development of generative AI, which are listed below, followed by brief sketches of how a few of them might look in practice:
Accuracy: We need to deliver verifiable results that balance accuracy, precision, and recall in the models by enabling customers to train models on their own data. We should communicate when there is uncertainty about the veracity of the AI’s response and enable users to validate these responses. This can be done by citing sources, explaining why the AI gave the responses it did (e.g., chain-of-thought prompts), highlighting areas to double-check (e.g., statistics, recommendations, dates), and creating guardrails that prevent some tasks from being fully automated (e.g., launching code into a production environment without human review).
Safety: As with all of our AI models, we should make every effort to mitigate bias, toxicity, and harmful output by conducting bias, explainability, and robustness assessments, as well as red teaming. We must also protect the privacy of any personally identifiable information (PII) present in the data used for training and create guardrails to prevent additional harm (e.g., forcing code to publish to a sandbox rather than automatically pushing to production).
Honesty: When collecting data to train and evaluate our models, we need to respect data provenance and ensure that we have consent to use data (e.g., open-source, user-provided). We must also be transparent that an AI has created content when it is autonomously delivered (e.g., chatbot response to a consumer, use of watermarks).
Empowerment: There are some cases where it is best to fully automate processes, but there are others where AI should play a supporting role to the human, or where human judgment is required. We need to identify the appropriate balance to “supercharge” human capabilities and make these solutions accessible to all (e.g., generating ALT text to accompany images).
Sustainability: As we strive to create more accurate models, we should develop right-sized models where possible to reduce our carbon footprint. When it comes to AI models, larger doesn’t always mean better: In some instances, smaller, better-trained models outperform larger, more sparsely trained models.
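The accuracy and safety guidelines both point to guardrails that keep high-risk tasks, such as pushing AI-generated code to production, from being fully automated. Here is a minimal sketch of that human-in-the-loop pattern; all names (AIAction, require_human_review, execute) are hypothetical and illustrate the idea, not any Salesforce implementation:

```python
# A minimal human-in-the-loop guardrail sketch. All names here are
# hypothetical and illustrate the pattern, not any product's API.
from dataclasses import dataclass


@dataclass
class AIAction:
    description: str
    high_risk: bool  # e.g., launching code into a production environment


def require_human_review(action: AIAction) -> bool:
    """Low-risk actions run automatically; high-risk ones need sign-off."""
    if not action.high_risk:
        return True
    answer = input(f"Approve '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"


def execute(action: AIAction) -> None:
    if require_human_review(action):
        print(f"Executing: {action.description}")
    else:
        print(f"Blocked pending human review: {action.description}")


if __name__ == "__main__":
    execute(AIAction("launch AI-generated code into production", high_risk=True))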
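```

The safety guideline also calls for protecting PII in training data. The toy sketch below uses simple regex patterns for emails and US-style phone numbers; real pipelines rely on dedicated PII-detection tools rather than hand-rolled patterns like these:

```python
# A toy PII-scrubbing pass over training text, assuming simple regex
# patterns. This only illustrates the guardrail idea; production
# systems should use purpose-built PII detectors.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}


def redact_pii(text: str) -> str:
    """Replace each PII match with a typed placeholder, e.g., [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


print(redact_pii("Reach Jane at jane.doe@example.com or 555-123-4567."))
# -> Reach Jane at [EMAIL] or [PHONE].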
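```

Finally, the empowerment guideline’s ALT-text example can be sketched as follows; `caption_image` is a hypothetical stand-in for whatever captioning model or API an organization actually uses, and the model’s draft is meant to be reviewed by a human:

```python
# Sketch of AI-assisted ALT text: the model drafts, a human can edit.
# `caption_image` is a hypothetical stand-in for a real captioning
# model or API; here it just returns a canned description.
def caption_image(image_path: str) -> str:
    return "Bar chart of survey responses from more than 500 IT leaders."


def build_img_tag(image_path: str) -> str:
    """Generate an <img> tag with model-drafted ALT text."""
    alt = caption_image(image_path)
    return f'<img src="{image_path}" alt="{alt}">'


print(build_img_tag("survey-results.png"))
```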