Tuesday, July 2, 2024

Responsible AI: Beyond Ethics, Prioritizing Accuracy


CEO Carl Eschenbach at Workday Summit

As the spring events roll on, I'm continuing my AI ‘deep dive’ vendor profiles:

Customer Validation and Gut Checks

We need architectural specifics on how gen AI accuracy is improved and customer data protected. If we make assurances about responsible AI, we need transparency there too. What products were canned or altered? What areas of AI are off-limits?

I arrived at Workday’s Innovation Summit with an axe to grind about “responsible AI,” but Workday was ready for me. Even at the opening reception, Kathy Pham, Workday VP, Artificial Intelligence and Machine Learning, was up for fielding my (over)heated input on the extent of vendor responsibility for educating customers on AI use cases.

Responsible AI – Enterprises May Gloss Over Ethics, But They Can’t Ignore Accuracy

During my Workday Innovation Summit video with Constellation’s Holger Mueller, he raised the issue of what happens if customers don’t take the “responsible” part of AI seriously. Mueller isn’t sure the AI ethics talk in our industry will hold up over the longer term.

Recently, I talked to a consulting director who told me only a third of their enterprise customers are serious about getting AI ethics right. The other two-thirds want to plow ahead in pursuit of productivity gains and headcount efficiency. But as I see it, “responsible AI” is more than just ethics. It’s also about output accuracy and getting results.

On AI Accuracy and the Changing Role of LLMs at Workday

Workday is moving towards using the right model for the task at hand, whether it’s a Large Language Model or a smaller model suited to that process. This reduced dependency on external LLMs is promising for several reasons, including lower compute costs and stronger protection of customer data.
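
To make that concrete, here is a minimal sketch of what “right model for the task” routing can look like. The task names, model identifiers, and routing rules below are my own illustrative assumptions, not Workday’s actual architecture.

```python
# Illustrative sketch of task-based model routing. Task names, model
# identifiers, and rules are assumptions for illustration only -- this is
# not Workday's implementation.

from dataclasses import dataclass


@dataclass
class ModelChoice:
    name: str              # which model handles the task
    hosted_locally: bool   # smaller models can stay inside the platform boundary


# Hypothetical routing table: narrow, structured tasks go to smaller
# in-house models; open-ended generation falls back to a large LLM.
ROUTING_TABLE = {
    "classify_expense": ModelChoice("small-classifier-v1", hosted_locally=True),
    "extract_invoice_fields": ModelChoice("small-extractor-v2", hosted_locally=True),
    "draft_job_description": ModelChoice("large-llm-external", hosted_locally=False),
}


def route(task: str) -> ModelChoice:
    """Pick a model for the task, defaulting to the large LLM only when needed."""
    return ROUTING_TABLE.get(task, ModelChoice("large-llm-external", hosted_locally=False))


if __name__ == "__main__":
    choice = route("classify_expense")
    print(f"{choice.name} (data stays in-platform: {choice.hosted_locally})")
```

The point of the sketch is the design choice: route narrow, structured work to smaller models that keep customer data inside the platform, and reserve the external LLM for tasks that genuinely need it.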

How Workday Mitigates the Downside of Smaller Models

Workday has addressed some of the historical downsides of smaller models by training them on a different type of data than is typically used for large LLMs. This approach has led to breakthroughs in the generalization capabilities of smaller models.

My Take – “Responsible AI” Requires Collective Tech Literacy

Part of earning AI trust is grappling with the technical conversation. Workday is pursuing deeper explainability of AI results, embedded inside user screens/processes. Improving the AI audit trail is important for trust and compliance.
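
For readers who want a concrete picture, here is a minimal sketch of the kind of audit-trail record that supports embedded explainability and compliance reviews. The field names and structure are my own assumptions for illustration, not Workday’s actual schema.

```python
# Illustrative sketch of an AI audit-trail record. Field names and structure
# are assumptions for illustration only -- not Workday's schema.

import json
from datetime import datetime, timezone


def audit_record(task: str, model_version: str, input_ref: str,
                 output_summary: str, confidence: float) -> str:
    """Build a timestamped record of one AI-assisted result for later review."""
    record = {
        "task": task,                      # which process the AI assisted
        "model_version": model_version,    # pin the exact model for later audit
        "input_ref": input_ref,            # pointer to the source data, not the data itself
        "output_summary": output_summary,  # what was shown to the user
        "confidence": confidence,          # surfaced in the UI for explainability
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)


if __name__ == "__main__":
    print(audit_record("classify_expense", "small-classifier-v1",
                       "expense/12345", "category: travel", 0.93))
```

Whatever the actual schema looks like, the trust payoff comes from the same ingredients: knowing which model produced a result, from what input, with what confidence, and when.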

Workday has been one of the vendors out in front with a pricing message customers want to hear: they won’t be charged for core processes where their data is a key part of the value delivered. There is an irony in vendors monetizing AI that depends on customer data.

There is anticipation of an AI breakthrough we haven’t seen yet. It will be interesting to see what Workday’s AI marketplace partners come up with at Workday Rising later this year.
