If you suddenly feel the urge to laugh when you see this rock, you are in good company.
As humans, we have a tendency to irrationally attribute human-like behavior to objects that share some, but not all, of our characteristics (a phenomenon known as anthropomorphism). We are seeing this happen more and more with AI.
In some cases, anthropomorphism looks like saying 'please' and 'thank you' when interacting with a chatbot, or praising generative AI when the output matches your expectations.
But etiquette aside, the real challenge is that anthropomorphism leads us to watch AI handle a simple task (like summarizing this article) and then expect it to perform just as well on a complex one, like synthesizing an anthology of scientific articles. Or, having seen a model generate an answer about Microsoft's most recent earnings report, we expect it to conduct market research if we feed it the earnings histories of 10 other companies.
Because, as Cassie Kozyrkov puts it, "AI is as creative as a paintbrush," these seemingly similar tasks turn out to be very different for a model to perform.
The biggest barrier to realizing productivity gains from AI is our human ability to use AI as a tool.
Anecdotally, we've heard of clients who deployed Microsoft Copilot licenses and then scaled back their seat counts because individuals didn't feel the tool added value.
Those users likely had a mismatch between their expectations and the problems AI is actually suited to solve. Sure, a polished demo can look like magic, but AI isn't magic. I know firsthand the disappointment of realizing for the first time, 'Oh, AI doesn't work like that.'
But instead of throwing up your hands and giving up on gen AI, you can build the right intuition to apply AI/ML more effectively and avoid the trap of anthropomorphism.
Defining intelligence and reasoning for machine learning
We've always had fuzzy definitions of intelligence. If a dog begs for a treat, is that intelligent? What about when a monkey uses tools? Is it intelligent to instinctively pull your hand away from heat? If a computer performed those same tasks, would that make it intelligent?
I was, as recently as 12 months ago, in the camp that refused to acknowledge that large language models (LLMs) can 'reason.'
However, in recent discussions with several trusted AI founders, we hypothesized a potential solution: a rubric that describes levels of reasoning.
Just as we have rubrics for reading comprehension or quantitative reasoning, what if we could introduce an equivalent rubric for AI? It could be a powerful tool for communicating to stakeholders the level of 'reasoning' to expect from an LLM-based solution, along with concrete examples of what is and isn't realistic.
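The article doesn't spell out what such a rubric would contain, but a minimal sketch might look like the following. The level names, descriptions, and example tasks here are illustrative assumptions, not an established standard:

```python
# Hypothetical "levels of reasoning" rubric for setting stakeholder
# expectations about an LLM-based solution. All labels and examples
# are illustrative assumptions.
REASONING_RUBRIC = {
    1: {
        "label": "Recall",
        "description": "Retrieve or restate information present in the prompt.",
        "example_task": "Summarize this article.",
    },
    2: {
        "label": "Single-source analysis",
        "description": "Answer questions grounded in one provided document.",
        "example_task": "Answer questions about one earnings report.",
    },
    3: {
        "label": "Multi-source synthesis",
        "description": "Combine and compare information across many sources.",
        "example_task": "Compare earnings histories for 10 companies.",
    },
}

def describe_expectation(level: int) -> str:
    """Turn a rubric level into a one-line expectation statement."""
    entry = REASONING_RUBRIC[level]
    return f"Level {level} ({entry['label']}): {entry['description']}"

for level in sorted(REASONING_RUBRIC):
    print(describe_expectation(level))
```

The point of the structure is the conversation it enables: a team can agree that a given solution is validated at level 2, say, before anyone asks it to perform a level 3 task.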
Humans form unrealistic expectations about AI.
We tend to be far more tolerant of errors made by humans than of errors made by machines. Self-driving cars, for instance, are statistically safer than human drivers, yet when one is involved in an accident, an uproar ensues.
This only amplifies the disappointment when AI solutions fail at tasks a human would be expected to perform.
We often hear AI solutions described anecdotally as armies of 'interns.' And yet machines still fail in ways humans wouldn't, while vastly outperforming them at other tasks.
Knowing this, it is not surprising that fewer than 10% of organizations successfully develop and deploy gen AI projects. Other factors, such as misalignment with business value and unexpectedly costly data-curation efforts, only compound the challenges companies face with AI projects.
One of the keys to solving these problems and achieving project success is giving AI users better intuition about when and how to use AI.
Build intuition using AI training
Training is the key to keeping pace with the rapid advancement of AI and to redefining our understanding of machine learning (ML) intelligence. 'AI training' on its own can sound vague, but for most businesses I find it useful to separate it into three buckets:
- Safety: How to use AI safely and avoid new AI-enhanced phishing scams.
- Literacy: Understanding what AI is, what to expect from it, and where it can break down.
- Readiness: Knowing how to skillfully (and efficiently) leverage AI-based tools to get work done at higher quality.
Protecting your team with AI safety training is like fitting new cyclists with knee and elbow pads. It may protect you from scratches, but it won't prepare you for the challenges of intense mountain biking. Meanwhile, AI readiness training can help your team get the most out of AI and ML.
The more opportunities you give your employees to safely interact with Gen AI tools, the more they will be able to build the right intuition for success.
We can only guess what capabilities will become available over the next 12 months, but by tying them back to the same baseline (levels of reasoning) and knowing what to expect as a result, you can better set your employees up for success.
They will know when to say 'I don't know,' when to ask for help and, most importantly, when a problem is beyond the scope of a given AI tool.
Cal Al-Dhubaib is head of AI and data science at Further.
DataDecisionMakers
Welcome to the VentureBeat community!
DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas, latest information, best practices, and the future of data and data technology, join DataDecisionMakers.
You might also consider contributing an article of your own!
Learn more at DataDecisionMakers