GENERATIVE AI OPENS THE DOOR TO POTENTIAL ETHICAL CHALLENGES

One of the biggest concerns about generative AI tools like ChatGPT is the potential for misinformation. When a generative AI model cannot produce a correct answer to a question, it may invent one instead, a phenomenon known as “artificial intelligence hallucination.”
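To make the hallucination risk concrete, here is a minimal sketch, not drawn from the ISG research, of one common mitigation pattern: instructing a model to answer only from supplied context and to refuse rather than guess. The `call_model` function is a hypothetical placeholder for whatever completion API an organization uses.

```python
# Minimal sketch of "grounded" prompting to reduce hallucination.
# `call_model` is a hypothetical stand-in for any chat-completion API.

def call_model(prompt: str) -> str:
    raise NotImplementedError("replace with your model provider's API call")

def grounded_answer(question: str, context: str) -> str:
    """Ask the model to answer only from the supplied context.

    If the context does not contain the answer, the model is told to reply
    with the literal token 'INSUFFICIENT_CONTEXT' instead of guessing.
    """
    prompt = (
        "Answer the question using ONLY the context below.\n"
        "If the context does not contain the answer, reply exactly with "
        "'INSUFFICIENT_CONTEXT'.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    answer = call_model(prompt)
    # Treat a refusal as a signal to escalate to a human or fetch more data,
    # rather than passing an invented answer downstream.
    if "INSUFFICIENT_CONTEXT" in answer:
        return "No reliable answer found in the provided sources."
    return answer
```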

Leaders also worry about data quality and the potential for models to be corrupted or poisoned, whether intentionally or accidentally, with bad outcomes as a result. So while organizations see the potential of generative AI, they do not yet fully know how to manage the risks.
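One way teams try to contain that data-quality risk is to screen training records before they ever reach a model. The following sketch is purely illustrative and not from the report; the record fields and valid ranges are invented, and real pipelines would apply far more sophisticated checks.

```python
# Illustrative pre-training data screen: drop exact duplicates and records
# with out-of-range values before they can skew (or poison) a model.
# The record fields and valid ranges here are invented for the example.

from typing import Iterable

def screen_records(records: Iterable[dict]) -> list[dict]:
    seen = set()
    clean = []
    for rec in records:
        key = (rec.get("customer_id"), rec.get("amount"), rec.get("label"))
        if key in seen:
            continue  # exact duplicate, a common sign of accidental corruption
        seen.add(key)
        amount = rec.get("amount")
        if amount is None or not (0 <= amount <= 1_000_000):
            continue  # implausible value, excluded rather than trained on
        clean.append(rec)
    return clean

if __name__ == "__main__":
    sample = [
        {"customer_id": 1, "amount": 120.0, "label": "ok"},
        {"customer_id": 1, "amount": 120.0, "label": "ok"},        # duplicate
        {"customer_id": 2, "amount": -5_000_000.0, "label": "ok"},  # out of range
    ]
    print(screen_records(sample))  # only the first record survives
```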

Most organizations today operate in a far more dynamic environment than before, and generative AI opens the door to new ethical challenges. Few have full visibility into all of the risks, and most need support that goes beyond technical development.

“Our research shows a real sense of urgency in the market,” said Prashant Kelker, partner and chief strategy officer with ISG. “However, despite the top-down imperative to embrace generative AI, most enterprises lack the focus to identify the right use cases. Right now, the market is being driven by specialist providers that are actively engaging with enterprises to brainstorm and co-create innovative solutions.”
 

Enterprises lack an AI architecture

Kelker said enterprises currently lack an AI architecture to support generative AI at scale. Early efforts to apply generative AI are focused on creating domain-specific models built on proprietary datasets, further expanding those datasets to improve model training, developing customized adoption platforms, and creating solutions that unite analytics, AI and generative AI.
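As a rough illustration of the domain-specific pattern Kelker describes, the sketch below, which is not taken from the report, converts internal question-and-answer pairs into the JSON Lines format that many fine-tuning services accept; the file name and fields are assumptions.

```python
# Illustrative preparation of a proprietary dataset for fine-tuning a
# domain-specific model. File name, fields, and format are hypothetical;
# many fine-tuning APIs expect JSON Lines with prompt/response-style pairs.

import json

def build_finetune_file(qa_pairs: list[tuple[str, str]], out_path: str) -> None:
    """Write (question, answer) pairs from internal knowledge bases as JSONL."""
    with open(out_path, "w", encoding="utf-8") as f:
        for question, answer in qa_pairs:
            record = {
                "messages": [
                    {"role": "user", "content": question},
                    {"role": "assistant", "content": answer},
                ]
            }
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    pairs = [
        ("What is our standard warranty period?", "Two years from date of sale."),
    ]
    build_finetune_file(pairs, "domain_finetune.jsonl")
```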

“The initial focus for generative AI is on knowledge management, the ability to extract data from vast unstructured data sources for business decision-making, and functional process optimization, in areas such as marketing and sales, finance, HR, IT and DevOps,” said Kelker. “As enterprises grow more comfortable with generative AI, and use cases become more mature, companies will begin to imagine more transformative possibilities leading to new products and service offerings and complete business transformation.”
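To picture that knowledge-management use case, here is an illustrative sketch of pulling structured fields out of unstructured text with a model. `call_model` is again a hypothetical placeholder, and the requested fields are invented for the example.

```python
# Illustrative extraction of structured fields from an unstructured document.
# `call_model` is a hypothetical stand-in for a chat-completion API;
# the fields requested (vendor, total, due_date) are invented.

import json

def call_model(prompt: str) -> str:
    raise NotImplementedError("replace with your model provider's API call")

def extract_invoice_fields(document_text: str) -> dict:
    prompt = (
        "From the document below, return a JSON object with keys "
        "'vendor', 'total', and 'due_date'. Use null for anything not stated.\n\n"
        + document_text
    )
    raw = call_model(prompt)
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Model output is not guaranteed to be valid JSON; fail visibly
        # instead of passing malformed data into business decisions.
        return {"error": "unparseable model output", "raw": raw}
```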

Before they make that leap, the ISG report notes, enterprises need to overcome hurdles including security, copyright issues, ethical considerations and legal concerns.

“Enterprises are cautious about pushing the capabilities of generative AI too far, too soon – especially in customer-facing interactions,” said Kelker. “The quality of legacy data is an issue, which may be addressed by using synthetic data, as are so-called AI data hallucinations caused by missing or inaccurate data. Enterprise leaders also want to see clear ROI for their investments before proceeding. Then, of course, there are the security, legal and ethical implications. Guardrails will need to be established before generative AI can be adopted at scale.”
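The synthetic-data remedy Kelker alludes to can be as simple as generating plausible stand-in records that preserve the shape of legacy data without its gaps or sensitive values. The sketch below is illustrative only; the field names and value ranges are invented.

```python
# Illustrative synthetic tabular data: plausible stand-in records that keep
# the shape of legacy data without copying real (or missing/inaccurate) values.
# Field names and value ranges are invented for the example.

import random

def synthetic_transactions(n: int, seed: int = 42) -> list[dict]:
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    regions = ["EMEA", "APAC", "Americas"]
    return [
        {
            "transaction_id": i,
            "region": rng.choice(regions),
            "amount": round(rng.uniform(10.0, 5000.0), 2),
            "is_refund": rng.random() < 0.05,  # roughly 5% refunds
        }
        for i in range(n)
    ]

if __name__ == "__main__":
    for row in synthetic_transactions(3):
        print(row)
```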
 

Financial services embrace generative AI

The ISG report shows financial services, including banking and insurance, is the leading industry for generative AI adoption, with 24% of total use cases, followed by manufacturing (14%), healthcare and pharma (12%), and business services (11%).

However, when it comes to mature use cases, defined by ISG as solutions that are already in progress, with well-defined KPIs and ROI, business services leads the way, at 39% of mature use cases. The report notes this is driven primarily by code-generation use cases, accounting for half the use cases in this sector.

From a functional perspective, the ISG report shows predictive analytics is the top use case, with 57% of all mature use cases, followed by code generation or DevOps (50%), data extraction and analysis (30%), and performance analysis (24%).

