3 Questions to Guide Your AI Guardrails


6 lessons • 26 mins

1. 3 AI Horizons Leaders Should Explore (07:20)
2. Why Physical AI Is Harder to Deploy Than Virtual AI (04:51)
3. How to Unlock AI’s Potential for Your Business (06:03)
4. A 2×2 Matrix for Evaluating AI Use Cases (02:36)
5. 3 Questions to Guide Your AI Guardrails (03:44)
6. Key Skills to Hone for the AI Age (01:45)

When deploying AI at an organization, I think the very first question is: are you deploying it in a way that is purely internal facing, where presumably the people using it are employees you can train and who are under your control, or are you deploying it in an external-facing way, say, to customers, for example, in a customer service application? Obviously, the care and rigor required for an external-facing application are quite different, and you need to make sure that your AI is not doing something that could cause harm either to your customers or to your reputation. That's a whole can of worms in its own right and requires, I would say, extensive and rigorous testing of whatever AI you're deploying.

The second thing is that even with the best intentions, the AI can still hallucinate and give you wrong answers. Of course, the same is true if you're working with a less experienced colleague, say an intern or a new hire, who might give you erroneous results and answers. So it's not a new kind of problem, but I think there is a tendency to be more trusting of the output one gets from an AI. So you need to remind your team members that, ultimately, they are the ones accountable for the correctness of whatever they provide as work product. Even if they got it out of the AI, it is their responsibility, for example, to follow the research to the source, and to make sure that the source is reliable and that references weren't hallucinated. I think that is an important level of rigor, and a useful reminder to people that they can use whatever tools they want, but they're accountable for the end product they deliver.

And the third is that a lot of these AI tools are not necessarily secure from a privacy and security perspective. So you need to make sure that employees are careful about what they put into specific AI tools, to avoid, for example, the exfiltration of private information. Now, it's important not to get overly hysterical about this. We don't tell people they can't use Google search just because Google search is not, by and large, private and secure. So you don't want to impose overly burdensome constraints, but you do need to be thoughtful about which AI tools you are using. Fortunately, many of the AI tools today have an enterprise version that is, I think, relatively secure against private information being exposed to others, but you need to be sure that that is what you're deploying and that people are aware of the differences.