The arrival of generative AI seems to have caught many organizations off guard, uncertain how to deal with the many implications of interacting with this new technology in an enterprise setting. Some prominent financial-services firms have imposed outright bans on the use of ChatGPT. A major electronics manufacturer is temporarily restricting the use of generative AI on company devices and networks after employees using ChatGPT leaked sensitive internal data.
Organizations inclined to believe they can stop the use of generative AI in an enterprise setting should realize they can’t. Its potential is already recognized. Our newest data, collected in the last month, show that one-third of organizations are already experimenting with generative AI or plan to start using it within a year. Another 3 percent of organizations have put generative AI into production use.
A majority of organizations, 64 percent, have directly or indirectly deferred dealing with generative AI. Within that group, only 6 percent of all organizations plan to use generative AI but don’t expect to do so for more than a year; the other 58 percent report no plans or don’t know whether plans exist.
This 64 percent, along with other organizations that have not yet set formal guidelines and policies for the use of generative AI, still have the luxury of planning its appropriate use, albeit with a high sense of urgency. The time to do so is now, as complications and potential risks and liabilities will only increase the longer an organization waits.
Despite its many potential benefits, including lowering costs (by reducing labor requirements) and increasing business agility (by shortening process times), the use of generative AI can also create real business concerns. These include security issues; unintentional exposure of confidential, proprietary, and/or personally identifiable information; and model and/or training-data bias that causes negative results (such as discrimination or improper exclusion), as well as the potential legal ramifications of each.
The cornerstone for appropriate use of generative AI will come, aptly enough, from well-defined information governance, which helps organizations define which roles, people, and applications should have access to which data for what purposes. In effect, information governance provides the “guardrails” that mark the lanes of business-appropriate use of data for specific use cases, as well as the barriers that discourage and prevent inappropriate use.
Most organizations have not yet fully analyzed the role generative AI can and should play in their futures. The success and widespread use of applications such as ChatGPT make clear that generative AI attracts strong interest and has high potential to create value. Although generative AI has “arrived,” whether through a stand-alone generative AI application or a vendor-provided capability, bear in mind that only a small minority (3 percent, likely organizations that tend to be early adopters) report production use of generative AI. It’s early days for most organizations, which suggests a path of experimentation with generative AI is more likely than production deployment in the short term.
However, the use of generative AI shouldn’t be a solution in search of a problem. To accelerate an eventual transition to production use, organizations need to take time to determine which use cases would make the best use of generative AI. For the most promising of these, organizations should then develop measures that help determine the ROI of any potential investments. Refined use cases with measurable ROI will make the strongest pilot projects and deliver the best results for an organization. Establishing proper governance of generative AI and its data sources beforehand further reduces risk and improves potential returns.