AI the LAW & YOU Newsletter

Release Date:

2024-03-26

Episode Transcription:

[00:00:00] Mark Miller: You’re listening to “AI the Law and You”, a show where a lawyer, a technologist, and a layman discuss a recent legal case filing against an AI company. These are not scripted. What you’ll hear are real conversations as we talk, argue, and cajole each other to think deeper about the legal aspects of using AI, and what people should be concerned about when using one of the platforms.

In today’s episode we talk about how Air Canada tried to defend itself in court by contending that the chatbot on its company site is its own entity and is separate from Air Canada. A lot of the fun in this case is the absurdity of the defense. However, it’s a good case for thought experiments, thinking about the near-term future of AI and who ultimately is responsible for its output.

While prepping for this call, I really did dig into the case here because of the absurdity of it in my mind. Joel, give us a brief overview of what the case is and who the complainants and defendants are.

[00:01:18] Joel MacMull: What makes this resonate, at least with me, is the fact that we have a very sympathetic plaintiff. A young man buys an airline ticket from Vancouver to Toronto in connection with his deceased grandmother. Prior to buying the ticket, he is on Air Canada’s website, having a conversation with its chatbot, and asks about bereavement fare.

And the sum and substance of the message he receives is that within 90 days after his purchase, again, this is a conversation he’s having with the chatbot, within 90 days after making his purchase, he can essentially claim bereavement. And the chatbot, in providing him with that textual response, actually has a hyperlink to another Air Canada webpage, which has additional terms about bereavement there.

It so happens that that additional hyperlink, however, is at odds with what the chatbot is saying, and that hyperlink says, in essence, that bereavement fare has to be paid for or otherwise dealt with on the front end. You can’t do it after the travel has occurred.

But, from the facts of the case, it doesn’t look like this young man did that, instead just relying on the chatbot. Long story short, he travels to Toronto, within the 90 day window, he seeks his reimbursement, consistent with the information he received from the chatbot. And, from what I understand, he engages in some emails with Air Canada, and they say, Hey, you know what?

The statement that you received on the chatbot is erroneous. We’ll flag that, we’ll get that corrected. But from what I understand, they refused to provide him with the discount of his bereavement fare, which, according to the opinion, was something to the tune of $600, the difference between the full fare and the bereavement fare that he otherwise would have been entitled to.

[00:03:07] Mark Miller: There’s so many things to unpack from that. Let’s start with just basic things that everybody deals with every day. So that means that if the chatbot tells me something, I, as the consumer, have to go back and verify that what the chatbot is telling me is true by searching the rest of Air Canada’s site to make sure there’s no conflicting information.

[00:03:33] Joel MacMull: That’s exactly the opposite of what the decision said. The decision said, in essence, and I’m paraphrasing here, that Air Canada provides no basis, nor could the adjudicator think of one, that requires a consumer to essentially trust one aspect of the website over another. And so what it was saying, in essence, is that where there may be variance in the correctness of the statements, that’s the fault of Air Canada, not the fault of the consumer, who, under your scenario, would essentially be forced to look at every nook and cranny of a website to ensure the validity of the information. Insofar as that argument was raised, it was rejected by the Small Claims Tribunal in British Columbia, which awarded this young man, I guess, his restitution.

[00:04:23] Mark Miller: The other thing, Shannon, that we have to get into here is one of the defenses of Air Canada: they said, hey, we didn’t say it, the chatbot said it, and so we’re not responsible.

[00:04:39] Shannon Lietz: Yeah, that was an interesting defense.

I didn’t see that one working when it was claimed. I think, from a technical perspective, what’s super interesting about this case is, as a technologist, now there’s a reason to actually do testing for the materials that are going into the content of a chatbot, as precedent in Canada. So if you were to think about being Air Canada, what did they not necessarily do?

Somebody didn’t test whether the bereavement policy matched up to their expectations during the deployment of that chatbot. And that to me, from an implication standpoint, says, Alright, we probably now have the onus of testing from a technology perspective for these chatbots. And which harnesses are out there that could allow for something like that?

An example would be, this is really about policy management, right? So from a content perspective, what went into this chatbot? I think there’s a couple of things. If it was doing some level of generative AI, which I can’t tell because I haven’t really spent any time with the chatbot personally, but if it was doing gen AI type work, then I would say the temperature wasn’t precise enough.

And from a technology standpoint, I would say the onus of testing information parity for what goes back to an end user is something we now have to start adding into our deployment pipelines.
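
(A minimal sketch, in Python, of the kind of pre-deployment “information parity” check Shannon is describing: before a chatbot release ships, compare its answer about a published policy against the canonical policy text. The ask_chatbot function, the policy wording, and the phrase lists are hypothetical stand-ins, not anything from Air Canada’s actual system.)

```python
# Hypothetical parity test: the chatbot's answer about a published policy
# must stay consistent with the canonical policy text in the CMS.

CANONICAL_BEREAVEMENT_POLICY = (
    "Bereavement fares must be requested before travel; "
    "refunds cannot be claimed after the flight has occurred."
)

# Phrases the bot's answer must / must not contain to stay in parity
# with the published policy (illustrative only).
REQUIRED_PHRASES = ["before travel"]
FORBIDDEN_PHRASES = ["within 90 days of purchase", "after your flight"]


def ask_chatbot(question: str) -> str:
    """Hypothetical chatbot client; wire this to the real bot under test."""
    return "You can request a bereavement refund within 90 days of purchase."


def test_bereavement_answer_matches_policy() -> None:
    answer = ask_chatbot("Can I get a bereavement refund after I fly?").lower()
    missing = [p for p in REQUIRED_PHRASES if p not in answer]
    contradicting = [p for p in FORBIDDEN_PHRASES if p in answer]
    assert not missing and not contradicting, (
        f"Chatbot answer is out of parity with policy: "
        f"missing={missing}, contradicting={contradicting}"
    )


if __name__ == "__main__":
    try:
        test_bereavement_answer_matches_policy()
        print("Parity check passed")
    except AssertionError as exc:
        print(f"Parity check FAILED: {exc}")
```

(Run as part of a deployment pipeline, a failing check like this would surface exactly the kind of policy contradiction at issue in the case before it ever reached a customer.)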

[00:06:12] Mark Miller: You know, the terminology that really stuck out for me in the judge’s opinion is that they stated Air Canada was trying to make the chatbot responsible for its own actions.

That’s almost a direct quote. The absurdity of that is beyond my comprehension. What it comes down to for me, Joel, and I want to go with you on this one to start, is: why in the hell would Air Canada, number one, put up a defense like this, and number two, why not give those guys $600 and just be done with it?

[00:06:52] Joel MacMull: I think it’s unfortunate that the full freight of Air Canada’s arguments do not appear to have been captured in the decision. Because if the chatbot was created by a third party, for example, and I have no reason to believe it was, but I’m saying if, there would be more latitude to argue, that’s not us.

It’s them. Unfortunately, the decision is silent on that and doesn’t speak to the ownership structure between Air Canada and perhaps some sort of subcontractor that would have developed the chatbot. I just don’t know, but these are the sort of obvious questions I had in reading it.

[00:07:30] Mark Miller: Let me jump in then on that one, because that’s an interesting one. Can you delegate responsibility to a third party, or is a third party in this case actually an agent or a servant of Air Canada?

[00:07:44] Joel MacMull: So it really depends. And that’s the other thing: to be perfectly frank with you, if Air Canada really wanted a crystal clear defense, the easiest way it could have done that was by issuing a disclaimer in connection with its use of the chatbot.

Something to the effect that we’re providing this as a convenience, but no statement made herein is otherwise going to bind Air Canada. There’s very simple language they could have utilized, essentially with just a radio button or a clicking of OK before using that chatbot.

That, in my estimation from a legal perspective, would have exonerated them, but they didn’t do that.

[00:08:19] Shannon Lietz: Why would the chatbot be interesting or useful to an end user if you have to say that? Basically what you’re saying is, we don’t trust it, so why would you? And now all of a sudden, this convenience, the best it’s gonna do is give me a link I’ve got to go read myself anyway.

And I don’t think that’s where technology is headed. I think we are trying to get to the point from a technology perspective where these conveniences could be relied upon. I think, to your point, are we at that point in time? I don’t know that we are, and the reason why is what I was saying: I’m not sure that these capabilities are being fully tested for what we intend.

I’m not certain that there’s a content management strategy for what we intend to use them for. And so that lineage analysis, those are the things that are actually really hard. Most of the technologists out there are not given the time or the money to be able to develop something that could be as reliable as we’re trying to get to.

And so that, to me, says that if you’re going to put a disclaimer on a chatbot, you might as well not do it.

[00:09:25] Joel MacMull: And we’re approaching it from two different perspectives. You’re approaching it from this sort of user interface functionality standpoint. To what extent is it a useful tool? I’m just looking at it like a lawyer and saying, how could Air Canada have covered itself if it wanted to?

So we’re not exactly comparing apples to apples there. But let me answer your question, Mark, if I may. You said, why is Air Canada defending this case? I suspect, and again, I don’t work for Air Canada, I don’t know, but it’s essentially a slippery slope, right? And if you read it, it says it was defended essentially by an employee of Air Canada.

My guess is it was some in-house lawyer or someone who’s involved in dispute resolution, whatever training they have. And I think one of the reasons why they contested it, my guess would be, is that they appreciated the door that was otherwise going to be opened had they not.

Now, unfortunately, I think this probably has had more of a backlash from a PR perspective than they would have hoped for. But I think in an effort to contain this as best they could, that was worth it for purposes of putting up a defense.

[00:10:31] Mark Miller: Why not pay the $600 and then fix the system, as Shannon says?

[00:10:37] Joel MacMull: Because the question becomes, in how many other instances are they going to also have to pay the $600-some-odd and make other fixes?

[00:10:44] Mark Miller: Thousands now, because everybody in the world knows about it.

[00:10:48] Joel MacMull: Right, and that’s what I’m saying, because I think from a PR perspective, this has been more damaging. And I will also add, as a Canadian citizen, that Air Canada is frequently vilified within major public news…

[00:10:59] Mark Miller: … and by Canadians that fly it, yes.

[00:11:02] Joel MacMull: Because first of all, they have a complete monopoly over the airline business, at least domestically in Canada. And their service, when you compare it on an international scale vis-a-vis other providers, is among the lowest in terms of customer satisfaction.

And by that I mean, if you look at the number of flights, for example, that are delayed or cancelled or whatever, it’s exponentially larger than other national carriers, at least vis-a-vis the United States.

[00:11:28] Mark Miller: And it’s really hard to get a Canadian pissed off, so this is a big deal. There you go. There you go.

Unless you’re on the hockey ice, but that’s a whole different topic. Shannon, let’s go to things that you’re concerned about. What does this portend, future-wise, that we’re looking at here?

[00:11:47] Shannon Lietz: I think it sets a bit of precedent for the fact that companies are now responsible for the output of a chatbot in a different way. That’s what I take from a technology perspective. Meaning, if I was to walk away with, all right, what did I learn from this legal case as a technologist? What I learned is there’s now no defense other than building something great and testing it. That “and testing it” is what I keep bringing up, which is I’m not certain that this chatbot got tested effectively and verified or certified for what Air Canada intended it to be as part of its website.

That’s where I feel like there’s a little bit of a slide that’s happened. From a technology perspective, like I said, technologists are never given the time and money to do things well. It’s always fix it after it breaks. In this situation, it broke. I’m certain that somebody’s had to go back and do some fixes and reparations to it.

The next question I have is, which version of the chatbot are you using? And is that posted within the chatbot? Because let’s just say that three months ago, somebody used chatbot A, and now they’ve done some versioning of the chatbot, its data, whatever is going into it from an AI perspective. And now they’re on chatbot B, right?

When somebody goes back and says, here’s what the chatbot said, and they took an image, traceability is, I think, another piece of the puzzle, and something that I can foresee is going to become a challenge for technologists that are going through these same things where lawsuits are involved. This is the aha moment of how we might need to wake up in business: it is becoming more important to have your content strategy, your policy strategy, and the lineage of anything that’s AI related figured out in a data governance way, so that your publishing process matches what you expect.
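
(A minimal sketch of the traceability Shannon is raising: record every chatbot answer together with the chatbot version and the version of the policy content it was built from, so a disputed screenshot can later be matched to what was actually deployed. The version strings, file name, and log_interaction helper are illustrative assumptions, not details from the case.)

```python
# Hypothetical audit log: one JSON record per chatbot answer, tagged with the
# chatbot version and the policy content version in force at the time.

import json
import uuid
from datetime import datetime, timezone

CHATBOT_VERSION = "chatbot-B-2024.03"           # bumped on every model/prompt change
POLICY_CONTENT_VERSION = "policies-2024-03-15"  # bumped on every CMS publish


def log_interaction(question: str, answer: str, path: str = "chatbot_audit.jsonl") -> str:
    """Append one audit record per answer; returns an ID the chat UI could display."""
    record = {
        "interaction_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "chatbot_version": CHATBOT_VERSION,
        "policy_content_version": POLICY_CONTENT_VERSION,
        "question": question,
        "answer": answer,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["interaction_id"]


if __name__ == "__main__":
    ref = log_interaction(
        "Can I claim a bereavement fare after travel?",
        "Submit your request within 90 days of ticket purchase.",
    )
    print(f"Logged interaction {ref}")
```

(Surfacing the returned interaction ID in the chat window would also give a customer something concrete to cite, which speaks to Shannon’s question about whether the version is posted within the chatbot.)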

[00:13:58] Mark Miller: One of the things that we’ve talked about too, and I’ll just throw this on the table for both of you, is that in law circles there’s confusion between AI and generative AI.

In this case, my mind went immediately to, if somebody has built a chatbot like that using generative AI, why? Why would you want inferences coming out from something like that? Joel, you’re shaking your head.

[00:14:29] Joel MacMull: That’s the wrong tech. I don’t profess to know chatbot technologies all that well.

But if you go to a website and use the chatbot, it’s usually confined, in my experience anyway, to a set FAQ list. And if it’s outside of that, then the chatbot doesn’t respond. But to your point, yeah, why open that Pandora’s box?

I don’t know. And I don’t think the decision specifies. No, in fact, I know it doesn’t. It doesn’t say whether it was generative or just plain old garden-variety AI.

[00:14:55] Mark Miller: Yeah, I think it’s more of a thought experiment on this: when you’re talking about something that is company related, why would somebody, and I’m not saying Air Canada did, why would somebody implement an inference engine in a case like this?

Is there a case where this would be useful, Shannon?

[00:15:20] Shannon Lietz: Yeah, there’s definitely some cases where it would be useful, but I think once it got to that policy question, the policy is probably a more important temperature aspect, because the conversation that you could have with generative AI is much more lifelike than some of the chatbots that we’ve seen out there.

Like, I don’t know how many you’ve used, but I use a lot of them just to test them out. I will say that one of the things I’ve learned from this case is: be really suspect about these chatbots, about the things that are actually being said by them. Because if you’re relying on it, can we really trust the technology at this point?

I think somebody’s really going to have to go to that next level to say, you know what, we’re going to stand by our chatbots if they’re going to be put on websites. There is a case for generative AI to be used. But again, when you have to get to something precise, what do you opt for?

There’s a way to take in the questions and actually determine what somebody is leveraging versus what you output to that person. There’s some definite technical mechanics that have been implicated here that are probably less obvious, to our point. We really don’t know if it was generative.

We don’t know what kind of AI was developed. But more importantly, and this is the one that I’m picking up on, from a content publishing perspective: how often do you see somebody publish policies on a website and actually go through and make sure that the chatbot knows what’s been updated? It’s really rare.

That is not a combined process that I’ve seen in most organizations, where the content publishing also touches all the aspects of what that chatbot has. The value stream of leveraging a chatbot now needs to be figured out from a content management perspective, so that it’s actually united.

If legal is actually looking at the bereavement policy before it goes out, they should also be thinking about the chatbot and how it actually is going to implicate the company from a servant relationship.

[00:17:22] Mark Miller: The thing that you just brought up is that, and I’ll put my terms on it, inference versus conversational tone.

Those are two different things. So it seems to me that one of the things that people could work on is, can we create a conversational chatbot that’s linked to our policies that doesn’t do inferences?

[00:17:50] Shannon Lietz: You can use temperature on generative AI to get closer to precision. I have seen that when using strict precision in generative AI, it is much less useful.

And its tone is much more specific to something that it’s pulling forward. I absolutely think technologists are brilliant people and can figure out anything if given a technical situation. The question is, in their use cases, were they given a requirement that says this should be a precise answer?

And I’m not certain that I’ve seen that in most use cases, that the precision of the answer has even been part of the requirements of a use case. So that’s a new advancement. If you were thinking about using Agile and telling stories and use cases to a programmer, one of the new elements you should be thinking about adding to your use cases is how precise do you want this?

Because I think if it does require precision, that’s going to tell the technologist something else, because, let’s face it, we talk about AI like it’s a whole new thing, right? AI’s been out here for a long time. It’s generative AI that’s new and novel and interesting, but AI in general has been out for a very long time, which means, by the way, it’s not a computer programming itself.

It’s a technologist in the background who’s actually having to move things around and program it to get the results and the outcome at scale. That’s basically a scaling algorithm. We’re far from AGI at this point, which is where maybe computers could get to the point where they could actually decide things for themselves.

But right now we’re really talking about a group of technologists that got together and built an AI platform that then got in front of the end user, and the use cases, the test cases, those did not result in great software.
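
(A sketch of the precise, policy-grounded setup Mark and Shannon are circling: keep the conversational tone of a generative model but set temperature to 0 and instruct it to answer only from retrieved policy text, or say it doesn’t know. This assumes an OpenAI-style chat completions client; the model name, prompt, and policy snippet are placeholders, not what Air Canada used.)

```python
# Hypothetical policy-grounded answering: low temperature plus an instruction
# to answer only from the supplied policy text.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY_SNIPPET = (
    "Bereavement fares must be requested before travel; "
    "refunds cannot be claimed after the flight has occurred."
)

SYSTEM_PROMPT = (
    "You are a customer-service assistant. Answer ONLY using the policy text "
    "provided. If the policy does not answer the question, say you don't know "
    "and refer the customer to the policy page. Do not infer or invent terms."
)


def answer_from_policy(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        temperature=0,         # minimize creative variation in the wording
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Policy:\n{POLICY_SNIPPET}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer_from_policy("Can I claim a bereavement refund after my trip?"))
```

(Temperature alone doesn’t make a model factual; the grounding instruction, answer only from the supplied policy, is what narrows the kind of free inference Mark is worried about.)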

[00:19:50] Joel MacMull: The claim on which Moffatt wins is negligent misrepresentation. And that is to be distinguished, I think, from just Air Canada being liable because they were wrong. Those are two different things. And I think it’s important to keep that in mind, because one of the potential defenses that Air Canada could have argued, and again, I have to assume they didn’t, was that while the message was in fact wrong, the manner in which the bot was trained was reasonable, right?

The reason why that matters is because if you go back and analyze how the bot was trained, that may not give rise to negligence. And that’s what’s important here, because the claimant won on grounds of negligence. So just because you’re wrong doesn’t necessarily mean you’re negligent.

And I suspect, by the way, that in like cases, insofar as they may exist, that would be an area of development for a defendant to say, hey, look at the way this was trained. We did everything that was reasonable. Therefore, we were not negligent. Therefore, the claimant does not win.

[00:20:55] Shannon Lietz: And that’s what I’m picking up on, by the way, from a negligence standpoint. My belief is that if they got to this point where this was actually the outcome of that chatbot, it is very interesting to me, because it says that testing was not implicated in the deployment pipeline. And to me, that’s the negligent piece of this. And I’m just wondering if they could have asserted that defense, because I think they probably would have gone back and said, alright, what do we know about how this happened?

And so I’m still curious as to why it ended up in court at all. I think it should have settled. This was, like, an easy little tiny thing. Why they spent so much money going to court on this is really interesting to me. But I will say, I think the negligence piece is in the deployment of the technology and the data together to get to a result that people can rely on and trust; that’s where I think they have some gaps.

[00:21:47] Joel MacMull: That’s a good point. To what extent is it negligent if you don’t do proper testing? Yeah.

[00:21:53] Mark Miller: I would take a different tangent on this one, in that I think that there was negligent representation by the lawyers. I think the lawyers are responsible for what happened in this case.

[00:22:08] Joel MacMull: When you say the lawyer, you mean Air Canada’s lawyer?

[00:22:12] Mark Miller: Yeah, because they were negligent in their representation. We’re all talking about the absurdity of how the case was brought.

[00:22:22] Joel MacMull: So you’re saying the Air Canada employee who was entrusted to defend this case, no doubt by his superior, that the negligent misrepresentation was essentially the arguments he was putting forth on his company’s behalf?

[00:22:36] Mark Miller: I absolutely do. There’s two forms of negligence here. One is the chatbot, and the other one is the lawyer representing, in their case, the chatbot. The absurdity of the representation from the lawyers is what stands out for me.

[00:22:54] Joel MacMull: Okay. Yeah, I think you have a point. I think you have a point as a social commentary.

I don’t, I know you don’t have a point legally.

[00:23:05] Mark Miller: You mean you couldn’t have Air Canada corporate come back and say, we were misrepresented by our in-house counsel?

[00:23:13] Joel MacMull: Or whoever else defended this case. No, you couldn’t. The argument would be, and again, I’m now applying US legal doctrine and imposing it on Canada, but it would be something along the lines of the ineffective use of counsel or the ineffectualness of counsel. No, I don’t think there’s an argument to be made here, because at the end of the day, Air Canada put that employee in a position to defend it.

So just because you have a bad defense doesn’t mean that the lawyer has committed some sort of malpractice.

[00:23:46] Mark Miller: I didn’t say malpractice, I said negligence.

[00:23:49] Joel MacMull: Again, I think the PR mud on this is far thicker than the actual claim. And I think this is, not surprisingly, I think this has done Air Canada a great disservice.

It’s one in, I think, what is a laundry list of things that Air Canada does to sort of harm itself within the public eye.

[00:24:11] Mark Miller: When we started talking about this case, we really were talking to the lawyers in this sense: AI, the Law, and You, with you being the lawyers.

What should lawyers take away from this? Joel, you’ve done a deep dive into this, and so you want to be the mouthpiece to say, Hey guys, hey peers, this is what you should be looking for going into the future.

[00:24:38] Joel MacMull: I think, to some extent, it marks sort of an incremental development: that companies are responsible for their technology. To the extent that we want to paint with a broad brush, that’s my takeaway, and it doesn’t strike me as particularly novel for all the reasons we’ve talked about. Even though it’s a computer at the end of the day that may be liable for the misinformation, the company doesn’t get to distance itself from that, because at the end of the day, it’s people that program that computer. So that’s my sort of 10,000-foot perspective.

[00:25:11] Mark Miller: Shannon?

[00:25:13] Shannon Lietz: My 10,000-foot perspective is: as a technologist, go back and look at your deployment pipeline for chatbots. Look at your deployment pipeline for your websites.

Coordinate your content management systems and do some testing, especially when it comes to policies that are being placed in the public eye. Because that’s an area where I think that there’s a requirement for precision. That precision should be in use cases for every change that’s going through the process.

And that should help lower the number of errors or risks similar to this.

[00:25:56] Mark Miller: Joel, that does lead us to the final point: if you were going to defend a case like this, with what Shannon just said, and she was talking to technologists, what’s the value of what she just said for lawyers?

[00:26:12] Joel MacMull: A lawyer who’s going to defend this is looking at it, as I said earlier, through a different lens, which is: how do I limit liability? Shannon talks about things that technologists need to be mindful of to create a better widget, and that may very well be, institutionally, the desire of the company, no doubt. The company is looking to increase functionality; it is looking to create a really useful tool. But I think when the lawyer is looking to limit liability, it cuts against what Shannon says, which is that you almost have to make it sound aspirational. In the sense that we want to make the widget as good as we can, but we’re acknowledging that there may be gaps, and as a consequence, we can limit that damage to the entity.

By using certain disclaiming language, as I mentioned earlier, this is for convenience purposes only, making it very clear that the chatbot is just a convenience tool and that it doesn’t in any way bind the organization insofar as it delivers any information. And I understand that cuts against what Shannon is saying, right?

Because that undercuts the value of the tool. But I think there’s just an inherent tension there.

[00:27:24] Mark Miller: The dilemma I have with what you’re saying there, and you’ve said it several times, is: put a Terms of Agreement button on it. You and I did an entire podcast series about EULAs and end user license agreements.

We absolutely agreed that most of them are just bullshit and obfuscation. I don’t think that’s gonna stand up anymore; we went too deep into that for it to actually be an acceptable way to get people to say, I understand what I’m doing here.

[00:27:55] Shannon Lietz: I think there’s an expectation of these things being more precise.

And I think that is the bar that we’re going to have to see folks live up to. I’m not discounting what Joel says. I think you probably do need a disclaimer on your chatbots. I think you do need to put in “they make mistakes,” right? And that’s what you see with some of the generative AI ones, like Copilot, as an example.

Check your mistakes. I will say, though, as a productivity tool, how many people are really going to have the time? Again, this goes back to: how much of a time savings are these tools really bringing us if the disclaimer is “don’t trust it”? Then it’s just a nuisance on a website you shouldn’t even put on there, and it’s not worth the money being paid to create the convenience.

At that point, it’s just a nuisance to all of us as users.

[00:28:46] Mark Miller: That adjourns our session for today. Check your podcast feed next week, when we’ll talk about Elon Musk and his lawsuit against OpenAI and Sam Altman. Does he really have a case, or is this a publicity stunt to slow down OpenAI’s progress?

If you enjoyed the conversation today, you can listen to all the episodes of AI the Law and You by subscribing on your favorite podcast platform. All content is free and ungated. The opinions expressed on the show are just that, opinions. It’s not real legal advice, but we do like to think we know what we’re talking about. You’re welcome to disagree. Hey, it’s your dime.

AI the Law and You is a Sourced Network Production.

