
Tech Law Talks

Reed Smith

Available episodes

5 of 93
  • AI explained: Introduction to Reed Smith's AI Glossary
    Have you ever found yourself in a perplexing situation because of a lack of common understanding of key AI concepts? You're not alone. In this episode of "AI explained," we delve into Reed Smith's new Glossary of AI Terms with Reed Smith guests Richard Robbins, director of applied artificial intelligence, and Marcin Krieger, records and e-discovery lawyer. This glossary aims to demystify AI jargon, helping professionals build their intuition and ask informed questions. Whether you're a seasoned attorney or new to the field, this episode explains how a well-crafted glossary can serve as a quick reference to understand complex AI terms. The E-Discovery App is a free download available through the Apple App Store and Google Play. Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Marcin: Welcome to Tech Law Talks and our series on AI. Today, we are introducing the Reed Smith AI Glossary. My name is Marcin Krieger, and I'm an attorney in the Reed Smith Pittsburgh office.  Richard: And I am Richard Robbins. I am Reed Smith's Director of Applied AI based in the Chicago office. My role is to help us as a firm make effective and responsible use of AI at scale internally.  Marcin: So what is the AI Glossary? The Glossary is really meant to break down big ideas and terms behind AI into really easy-to-understand definitions so that legal professionals and attorneys can have informed conversations and really conduct their work efficiently without getting buried in tech jargon. Now, Rich, why do you think an AI glossary is important?  Richard: So, I mean, there are lots of glossaries about, you know, sort of AI and things floating around. I think what's important about this one is it's written by and for lawyers. And I think that too many people are afraid to ask questions for fear that they may be exposed as not understanding things they think everyone else in the room understands. Too often, many are just afraid to ask. So we hope that the glossary can provide comfort to the lawyers who use it. And, you know, I think to give them a firm footing. I also think that it's, you know, really important that people do have a fundamental understanding of some key concepts, because if you don't, that will lead to flawed decisions, flawed policy or choices, or you can just miscommunicate with people in connection with your work. So if we can have a firm grounding, establish some intuition, I think that we'll be in a better spot. Marcin, how would you see that?  Marcin: First of all, absolutely, I totally agree with you. I think that it goes even beyond that and really gets to the core of the model rules. When you look at the various ethics opinions that have come out in the last year about the use of AI, and you look at our ethical obligations and basic competence under Rule 1.1, we see that ethics opinions that were published by the ABA and by various state ethics boards say that there's a duty on lawyers to exercise the legal knowledge, skill, thoroughness, and preparation necessary for the representation. And when it comes to AI, you have to achieve that competence through some level of self-study. 
This isn't about becoming experts about AI, but to be able to competently represent a client in the use of generative AI, you have to have an understanding of the capabilities and the limitations, and a reasonable understanding about the tools and how the tech works. To put it another way, you don't have to become an expert, but you have to at least be able to be in the room and have that conversation. So, for example, in my practice, in litigation and specifically in electronic discovery, we've been using artificial intelligence and advanced machine learning and various AI products previous to generative AI for well over a decade. And as we move towards generative AI, this technology works differently and it acts differently. And how the technology works is going to dictate how we do things like negotiate ESI protocols, how we issue protective orders, and also how we might craft protective orders and confidentiality agreements. So being able to identify how these types of orders restrict or permit the use of generative AI technology is really important. And you don't want to get yourself into a situation where you may inadvertently agree to allow the other side, the receiving party of your client's data, to do something that may not comply with the client's own expectations of confidentiality. Similarly, when you are receiving data from a producing party, you want to make sure that the way that you apply technology to that data complies with whatever restrictions may have been put into any kind of protective order or confidentiality agreement.  Richard: Let me jump in and ask you something about that. So you've been down this path before, right? This is not the first time professionally you've seen new technology coming into play that people have to wrestle with. And as you were going through the prior use of machine learning and things that inform your work, how have you landed? You know, how often did you get into a confusing situation because people just didn't have a common understanding of key concepts where maybe a glossary like this would have helped or did you use things like that before?  Marcin: Absolutely. And it comes, it's cyclic. It comes in waves. Anytime there's been a major advancement in technology, there is that learning curve where attorneys have to not just learn the terminology, but also trust and understand how the technology works. Even now, technology that was new 10 years ago still continues to need to be described and defined, even outside of the context of AI. Take something like just removing email threads: almost every ESI order that we work with requires us to explain and define what that process looks like. And when we talk about traditional technology-assisted review, to this day our agreements have to explain and describe to a certain level how technology-assisted review works. But 10 years ago, it required significant investment of time negotiating, explaining, educating, not just opposing counsel, but our clients.  Richard: I was going to ask about that, right? Because it would seem to me that, you know, especially at the front end, as this technology evolves, it's really easy for us to talk past each other or to use words and not have a common understanding, right?  Marcin: Exactly, exactly. And now with generative AI, we have exponentially more terminology. There's so many layers to the way that this technology works that even a fairly skilled attorney like myself, when I first started learning about generative AI technology, I was completely overwhelmed. 
And most attorneys don't have the time or the technical understanding to go out into the internet and find that information. A glossary like this is probably one of the best ways that an attorney can introduce themselves to the terminology or have a reference where, if they see a term that they are unfamiliar with, they can quickly go take a look at what does that term mean? What's the implication here? Get that two-sentence description so that they can say, okay, I get what's going on here, or put the brakes on and say, hey, I need to bring in one of my tech experts at this point.  Richard: Yeah, I think that's really important. And this kind of goes back to this notion that this glossary was prepared, you know, at least initially, right, from the litigator's lens, litigator's perspective. But it's really useful well beyond that. And, you know, I mean, I think the biggest need is to take the mystery out of the jargon, to help people, you know, build their intuition, to ask good questions. And you touched on something where you said, well, I don't need to be a technical expert on a given topic, but I need a tight, accessible description that lets me get the essence of it. So, I mean, a couple of my, you know, favorite examples from the glossary are, you know, in the last year or so, we've heard a lot of people talking about RAG systems and they fling that phrase around, you know, retrieval augmented generation. And, you know, you could sit there and say to someone, yeah, use that label, but what is it? Well, we describe that in three tight sentences. Agentic AI, two sentences.  Marcin: And that's a real hot topic for 2025 is agentic AI.  Richard: Yep.  Marcin: And nobody knows what it is. So I focus a lot on litigation and in particular electronic discovery. So I have a very tight lens on how we use technology and where we use it. But in your role, you deal with attorneys in every practice group and also professionally outside of the law firm. You deal with professionals and technologists. In your experience, how do you see something like this AI glossary helping the people that you work with and what kind of experience levels you get exposed to?  Richard: Yeah, absolutely. So I keep coming back to this phrase, this notion of saying it's about helping people develop an intuition for when and how to use things appropriately, what to be concerned about. So a glossary can help to demystify these concepts so that you can then carry on whatever it is that you're doing. And so I know that's rather vague and abstract, but I mean, at the end of the day, if you can get something down to a couple of quick sentences and the key essence of it, and that light bulb comes on and people go, ah, now I kind of understand what we're talking about, that will help them guide their conversations about what they should be concerned about or not concerned about. And so, you know, that glossary gives you a starting point. It can help you to ask good questions. It can set alarm bells off when people are saying things that are, you know, perhaps very far off those key notions. And you have, you know, you have the ability to, you know, I think know when you're out of your depth a little bit, but to know enough to at least start to chart that course. Because right now people are just waving their hands. And that, I think, results in a tendency to say, oh, I can't rely on my own intuition, my own thinking. I have to run away and hide. 
And I think the glossary makes all this information more accessible so that you can start to interact with the technology and the issues and things around it.  Marcin: Yeah, I agree. And I also think that having those two to three sentence hits on what these terms are, I think also will help attorneys know how to ask the right questions. Like you said, know when to get that help, but also know how to ask for it. Because I think that most attorneys know when they need to get help, but they struggle with how to articulate that request for it.  Richard: Yeah, I think that's right. And I think that, you know, often we can bring things back to concepts that people are already comfortable with. So I'll spend a lot of time talking to people about sort of generative AI, and their concerns really have nothing to do with the fact that it's generative AI. It just happens to be something that's hosted in the cloud. And we've had conversations about how to deal with information that's hosted in the cloud or not, and we're comfortable having those. But yet, when we get to generative AI, they go, oh, wait, it's a whole new range of issues. I'm like, no, actually, it's not. You've thought about these things before. We can attack these things again. Now, again, the point of the glossary is not to teach all this stuff, but it's to help you get your bearings straight, to get you oriented. And from there, you can have the journey.  Marcin: So in order to get onto that journey, we have to let everybody know where they can actually get a copy of the glossary. So the Reed Smith AI Glossary can be found at the website e-discoveryapp.com, or any attorney can go to the Play Store or the Apple App Store and download the E-Discovery App, which is a free app that contains a variety of resources. And right on the landing page of the app, there's a link for glossaries, and within there you'll see a downloadable link that'll give you a PDF version of the AI Glossary, which, again, any attorney can get for free and have available. And of course, it is a live document, which means that we will make updates and revisions to it as the technology evolves and as how we present information changes in the coming years.  Richard: At that price, I'll take six.  Marcin: Thank you, Rich. Thanks for your time.  Richard: Thank you.  Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved. Transcript is auto-generated.
    --------  
    14:56
  • AI explained: Navigating AI in Arbitration - The SVAMC Guideline Effect
    Arbitrators and counsel can use artificial intelligence to improve service quality and lessen work burden, but they also must deal with the ethical and professional implications. In this episode, Rebeca Mosquera, a Reed Smith associate and president of ArbitralWomen, interviews Benjamin Malek, a partner at T.H.E. Chambers and former chair of the Silicon Valley Arbitration and Mediation Center AI Task Force. They reveal insights and experiences on the current and future applications of AI in arbitration, the potential risks of bias and transparency, and the best practices and guidelines for the responsible integration of AI into dispute resolution. The duo discusses how AI is reshaping arbitration and what it means for arbitrators, counsel and parties. Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Rebeca: Welcome to Tech Law Talks and our series on AI. My name is Rebeca Mosquera. I am an attorney with Reed Smith in New York focusing on international arbitration. Today we focus on AI in arbitration. How artificial intelligence is reshaping dispute resolution and the legal profession. Joining me is Benjamin Malek, a partner at T.H.E. Chambers and chair of the Silicon Valley Arbitration and Mediation Center AI Task Force. Ben has extensive experience in commercial and investor-state arbitration and is at the forefront of AI governance in arbitration. He has worked at leading institutions and law firms, advising on the responsible integration of AI into dispute resolution. He's also founder and CEO of LexArb, an AI-driven case management software. Ben, welcome to Tech Law Talks.  Benjamin: Thank you, Rebeca, for having me.  Rebeca: Well, let's dive into our questions today. So artificial intelligence is often misunderstood, or, to put it in other words, there are a lot of misconceptions surrounding AI. How would you define AI in arbitration? And why is it important to look beyond just generative AI?  Benjamin: Yes, thank you so much for having me. AI in arbitration has existed for many years now, but it hasn't been until the rise of generative AI that big question marks have started to arise. And that is mainly because generative AI creates or generates AI output, whereas up until now, it was a relatively mild output. I'll give you one example. Looking for an email in your inbox, that requires a certain amount of AI. Your spellcheck in Word has AI, and it has been used for many years without raising any eyebrows. It hasn't been until ChatGPT really gave an AI tool to the masses that questions started arising. What can it do? Will attorneys still be held accountable? Will AI start drafting for them? What will happen? And it's that fear that started generating all this talk about AI. Now, to your question on looking beyond generative AI, I think that is a very important point. In my function as the chair of the SVAMC AI Task Force, while we were drafting the guidelines on the use of AI, one of the proposals was to call it use of generative AI in arbitration. And I'm very happy that we stood firm and said no, because there are many forms of AI that will arise over the years. 
Now we're talking about predictive AI, but there are many AI forms such as predictive AI, NLP, automations, and more. And we use it not only in generating text per se, but we're using it in legal research, in case prediction to a certain extent. Whoever has used LexisNexis, they're using a new tool now where AI is leveraged to predict certain outcomes, document automation, procedural management, and more. So understanding AI as a whole is crucial for responsible adoption.  Rebeca: That's interesting. So you're saying, obviously, that AI in arbitration is more than just ChatGPT, right? I think the reason why people think that, and rely on it, maybe, as we'll see in some of the questions I have for you, is that people may rely on ChatGPT because it sounds normal. It sounds like another person texting you, providing you with a lot of information. And sometimes we just, you know, people, I can understand or I can see why people might believe that that's the correct outcome. And you've given examples of how AI is already being used and that people might not realize it. So all of that is very interesting. Now, tell me, as chair of the SVAMC AI Task Force, you've led significant initiatives in AI governance, right? What motivated the creation of the SVAMC AI guidelines? And what are their key objectives? And before you dive into that, though, I want to take a moment to congratulate you and the rest of the task force on being nominated once again for the GAR Awards, which will be unveiled during Paris Arbitration Week in April of this year. That's an incredible achievement. And I really hope you'll take pride in the impact of your work and the well-deserved recognition it continues to receive. So good luck to you and the rest of the team.  Benjamin: Thank you, Rebeca. Thank you so much. It really means a lot, and it also reinforces the importance of our work, seeing that we're nominated not only once last year for the GAR Award, but a second year in a row. I will be blunt, I haven't kept track of many nominations, but I think it may be one of the first years where one initiative gets nominated twice, one year after the other. So that in itself for us is worth priding ourselves with. And it may potentially even be more than an award itself. It really, it's a testament to the work we have provided. So what led to the creation of the SVAMC AI guidelines? It's a very straightforward and, to a certain extent, a little boring answer as of now, because we've heard it so many times. But the crux was Mata versus Avianca. I'm not going to dive into the case. I think most of us have heard it. Who hasn't? There's many sources to find out about it. The idea being that in a court case, an attorney used ChatGPT, used the outcome without verifying it, and it caused a lot of backlash, not only from the opposing party, but also being chastised by the judge. Now when I saw that case, and I saw the outcome, and I saw that there were several tangential cases throughout the U.S. and worldwide, I realized that it was only a question of time until something like this could potentially happen in arbitration. So I got on a call with my dear friend Gary Benton at the SVAMC, and I told him that I really think that this is the moment for the Silicon Valley Arbitration Mediation Center, an institution that is heavily invested in tech, to shine. So I took it upon myself to say, give me 12 months and I'll come up with guidelines. 
So up until now at the SVAMC, there are a lot of think tank-like groups discussing many interesting subjects. But the SVAMC scope, especially AI related, was to have something that produces something tangible. So the guidelines to me were intuitive. It was, I will be honest, I don't think I was the only one. I might have just been the first mover, but there we were. We created the idea. It was vetted by the board. And we came up first with the task force, then with the guidelines. And there's a lot more to come. And I'll leave it there.  Rebeca: Well, that's very interesting. And I just wanted to mention or just kind of draw from, you mentioned the Mata case. And you explained a bit about what happened in that case. And I think that was, what, 2023? Is that right? 2022, 2023, right? And so, but just recently we had another one, right? In the federal courts of Wyoming. And I think about two days ago, the order came out from the judge and the attorneys involved were fined about $15,000 because of hallucinations in the case law that they cited to the court. So, you know, I see that happening anyway. And this is a major law firm that we're talking about here in the U.S. So it's interesting how we still don't learn, I guess. That would be my take on that.  Benjamin: I mean, I will say this. Learning is a relative term because learning, you need to also fail. You need to make mistakes to learn. I guess the crux and the difference is that up until now, any law firm or anyone working in law would never entrust a first-year associate, a summer associate, a paralegal to draft arguments or to draft certain parts of a pleading by themselves without supervision. However, now, given that AI sounds sophisticated, because it has unlimited access to words and dictionaries, people assume that it is right. And that is where the problem starts. So I am obviously, personally, I am no one to judge a case, no one to say what to do. And in my capacity as the chair of the SVAMC AI task force, we also take a backseat saying these are soft law guidelines. However, submitting documents with information that has not been verified has, in my opinion, very little to do with AI. It has something to do with ethical duty and candor. And that is something that, in my opinion, if a court wants to fine attorneys, they're more than welcome to do so. But that is something that should definitely be referred to the Bar Association to take measures. But again, these are my two cents as a citizen.  Rebeca: No, very good. Very good. So, you know, drawing from that point as well, and because of the cautionary tales we hear about surrounding these cases and many others that we've heard, many see AI as a double-edged sword, right? On the one hand, offering efficiency gains while raising concerns about bias and procedural fairness. What do you see as the biggest risks and benefits of AI in arbitration?  Benjamin: So it's an interesting question. To a certain extent, we tried to address many of the risks in the AI guidelines. Whoever hasn't looked at the guidelines yet, I highly suggest you take a look at them. They're available on svamc.org, and I'm sure that they're widely available on other databases; Jus Mundi has them as well. I invite everyone to take a look at it. There are several challenges. We don't believe that those challenges would justify not using it. To name a few, we have bias. We have lack of transparency. 
We also have the issue of over-reliance, which is the one we were talking about just a minute ago, where it seems so sophisticated that we as human beings, having worked in the field, cannot conceive how such an eloquent answer is anything but true. So there's a black box problem and so many others, but quite frankly, there are so many benefits that come with it. AI is an unlimited knowledge tool that we can use. As of now, AI is what we know it is. It has hallucinations. It does have some bias. There is this black box problem. Where does it come from? Why? What's the source? But quite frankly, if we are able to triage the issues and to really look at what are the advantages and what is it we want to get out of it, and I'll give you a brief example. Let's say you're drafting an RFA. If you know the case, you know the parties, and you know every aspect of the case, AI can draft everything head to toe. You will always be able to tell what is from the case and what's not from the case. If we over-rely on AI and we allow it to draft without verifying all the facts, without making sure we know the transcript inside and out, without knowing the facts of the case, then we will always run into certain issues. Another issue we run into a lot with predictive AI is relying on data that exists. So compared to generative AI, predictive AI is taking data that already exists and predicting another outcome. So there's a lesser likelihood of hallucinations. The issue with that is, of course, bias. Just a brief example, you're the president of Arbitral Women, so you will definitely understand. It has only been in the last 30 years that women had more of a presence in arbitration, specifically sitting as an arbitrator. So if we rely on data that goes beyond those 30, 40, 50 years, there's going to be a lot of male decisions having been taken. Potentially even laws that applied back then that were not very gender neutral. So we need, we as people, need to triage and understand where is the good information, where is information that may have bias and counterbalance it. As of now, we will need to counterbalance it manually. However, as I always say, we've only seen a grain of salt of what AI can do. So as time progresses, the challenges, as you mentioned, will become lesser and lesser and lesser. And the knowledge that AI has will become wider and wider. As of now, especially in arbitration, we are really taking advantage of the fact that there is still scarcity of knowledge. But it is really just a question of time until AI picks up. So we need to get a better understanding of what is it we can do to leverage AI to make ourselves indispensable.  Rebeca: No, that's very interesting, Ben. And as you mentioned, yes, as president of ArbitralWomen, the word bias is something I pay close attention. You know, we're talking about bias. You mentioned bias. And we all have conscious or unconscious biases, right? And so you mentioned that about laws that were passed in the past where potentially there was not a lot of input from women or other members of our society. Do you think AI can be trained then to be truly neutral or will bias always be a challenge?  Benjamin: I wish I had the right answer. I think, I actually truly believe that bias is a very relative term. And in certain societies, bias has a very firm and black and white standing, whereas in other societies, it does not. 
Especially in international arbitration, where we not only deal with cross-border disputes, but different cultures, different laws, laws of the seats, laws of the contract. I think it's very hard to point out one set of bias that we will combat or that we will set as principle for everything. I think ultimately what ensures that there is always human oversight in the use of AI, especially in arbitration, are exactly these types of issues. So we can, of course, try to combat bias and gender bias and others. But I don't think it is as easy as we say, because even nowadays, in normal proceedings, we are still dealing with bias on a human level. So I think we cannot ask from machines to be less biased than we as humans are.  Rebeca: Let me pivot here a bit. And, you know, earlier, we mentioned the GAR Awards. And now I'd like to shift our focus to the recent GAR Live event on technology that took place here in New York last week on February 20th. And to give our audience, you know, some context. GAR stands for Global Arbitration Review, a widely read journal that not only ranks international arbitration practices at law firms worldwide, but also, among other things, organizes live conferences on cutting-edge topics in arbitration across the globe. So I know you were a speaker at GAR Live, and there was an important discussion about distinguishing generative AI, predictive AI, and other AI applications. How do these different AI technologies impact arbitration, and how do the SVAMC guidelines address them?  Benjamin: I was truly honored to speak at the GAR Live event in New York, and I think the fact that I was invited to speak on AI is a testament to how important AI is and how widely interested the community is in the use of AI, which is very different to 2023 when we were drafting the guidelines on the use of AI. I think it is important to understand that ultimately, everything in arbitration, specifically in arbitration, needs human oversight. But in using AI in arbitration, I think we need to differentiate on how the use of AI is different in arbitration versus other parts of the law, and specifically how it is different in arbitration compared to how we would use it on a day-to-day basis. In arbitration specifically, arbitrators are still responsible for, or arbitrators are given, a personal mandate that is very different to how law works in general, where you have a lot of judges that let their assistants draft parts of the decision, parts of the order. Arbitration is a little different, and that is for a reason. Specifically in international arbitration, because there are certain sensitivities when it comes to local law, when it comes to an international standard and local standards. Arbitrators are held to a higher standard. Using AI as an arbitrator, for example, which could technically be put at the same level as using a tribunal secretary, has its limits. So I think that AI can be used in many aspects, from drafting for attorneys, for counsel, when it comes to helping prepare graphs, when it comes to preparing documents, accumulating documents, etc., etc. But it does have its limits when it comes to arbitrators using it. As we have tried to reiterate in the guidelines, arbitrators need to be very conscious of where their personal mandate starts and ends. In other words, our recommendation, again, we are soft law guidelines, our recommendation to arbitrators is to not use AI when it comes to any decision-making process. What does that mean? We don't know. And neither does the law. 
And every jurisdiction has their own definition of what that means. It is up to the arbitrator to define what a decision-making process is and to decide whether the use of AI in that process is adequate.  Rebeca: Thank you so much, Ben. I want to now kind of pivot, since we've been talking a little bit more about the guidelines, I want to ask you a few questions about them. So they were created with a global perspective, right? And so what initiatives is the AI task force pursuing to ensure the guidelines remain relevant worldwide? You've been talking about different legal systems and local laws and how practitioners or certain regulations within certain jurisdictions might treat certain things differently. So what is the AI task force doing to remain relevant, to maybe create some sort of uniformity? So what can you tell me about that?  Benjamin: So we at the SVAMC task force, we continue to gather feedback, of course, and we're looking for global adaptation. We will continue to work closely with practitioners, with institutions, with lawmakers, with government, to ensure that when it comes to arbitration, AI is given a space, it's used adequately, and if possible, of course, and preferential to us, the SVAMC AI guidelines are used. That's why they were drafted, to be used. When we presented the guidelines to different committees and to different law sections and bar associations, it struck us that in jurisdictions such as the U.S., and more specifically in New York, where both you and I are based, the community was not very open to receiving these guidelines as guidelines. And the suggestion was actually made to create a white paper. And as much as it seemed to be a shutdown at an early stage, when we were thinking about it (and I was very blessed to have seven additional members in the Guidelines Drafting Committee, seven very bright individual members that I learned a lot from during this process), it was clear to us that jurisdictions such as New York have a very high ethical standard, where guidelines such as ours would potentially be seen as doubling ethical rules. So although we advocate for them not being ethical guidelines whatsoever, because we don't believe they are, we strongly suggest that local and international ethical standards are upheld. So with that in mind, we realized that there is more to the global aspect that needs to be addressed rather than an aspect of law associations in the US or in the UK or now in Europe. Up-and-coming jurisdictions that up until now did not have a lot of exposure to artificial intelligence and maybe even technology as a whole are rising. And they may need more guidance than jurisdictions where technology may be an instinct away. So what the AI task force has created, and is continuing to recruit for, are regional committees for the AI Task Force, tracking AI usage in different legal systems and different jurisdictions. Our goal is to track AI-related legislation and its potential impact on arbitration. These regional committees will also provide jurisdiction-specific insights to refine the guidelines. And hopefully, or this is what we anticipate, these regional committees will help bridge the gap between AI's global development and local legal frameworks. There will be a dialogue. We will continue, obviously, to be present at conferences, to have open dialogue, and to recruit, of course, for these committees. 
But the next step is definitely to focus on these regional committees and to see how we, as the AI task force of the Silicon Valley Arbitration Mediation Center, can impact the use of AI in arbitration worldwide.  Rebeca: Well, that's very interesting. So you're utilizing committees in different jurisdictions to keep you apprised of what's happening in each jurisdiction. And then with that, continue, you know, somehow evolving the guidelines and gathering information to see how this field, you know, is changing rapidly.  Benjamin: Absolutely. Initially, we were thinking of just having a small local committee to analyze different jurisdictions and what laws and what court cases, etc. But we soon came to realize that it's much more than tracking judicial decisions. We need people on the ground that are part of a jurisdiction, part of that local law, to tell us how AI impacts their day-to-day, how it may differ from yesterday to tomorrow, and what potential legislation will be enacted to either allow or disallow the use of certain AI.  Rebeca: That's very interesting. I think it's something that will keep the guidelines up to date and relevant for a long time. So kudos to you, the SVAMC and the task force. Now, I know that the guidelines are a very short paper, you know, and then in the back you have the commentary on them. So I want to, I'm not going to dissect all of the guidelines, but I want to come and talk about one of them in particular that I think created a lot of discussion around the guidelines themselves. So for full disclosure, right, I was part of the reviewing committee of the AI guidelines. And I remember that one of the most debated aspects of the SVAMC AI guidelines is guideline three on disclosure, right? So should arbitrators and counsel disclose their AI use in proceedings? So I think that that has generated a lot of debates. And that's the reason why we have the resulting guideline number three, the way it is drafted. So can you give us a little bit more insight into what happened there?  Benjamin: Absolutely. I'd love to. Guideline three was very controversial from the get-go. We initially had two options. We had a two-pronged test that parties would either satisfy or not, and then disclosure was necessary. And then we had another option that the community could vote on where it was up to the parties to decide whether their AI-aided submission could impact the outcome of the case. And depending on that, they would disclose or not disclose whether AI was used. Quite frankly, that was a debate we had in 2023, and a lot changed from November 2023 until April, when we finally published the first version of the AI guidelines. A lot of courts have implemented an obligatory disclosure. I think people have also gotten more comfortable with using AI on a day-to-day. And we ultimately came to the conclusion to opt for a flexible disclosure approach, which can now be found in the guidelines. The reason for that was relatively simple, or relatively simple to us who debated that. Having a disclosure obligation of the use of AI will very easily become inefficient for two reasons. A blanket disclosure for the use of AI serves nobody. It really boils down to one question, which is, if the judge, or in our case in arbitration, if the arbitrator or tribunal knows that AI was used for a certain document, now what? How does that knowledge transform into action? And how does that knowledge lead to a different outcome? 
And in our analysis, it turned out that a blanket disclosure of AI usage, or in general, an over-disclosure of the use of AI in arbitration, may actually lead to adverse consequences for the parties who make the disclosure. Why? Because not knowing how AI can impact these submissions causes arbitrators not to know what to do with that disclosure. So ultimately, it's really up to the parties to decide, how was AI used? How can it impact the case? What is it I want to disclose? How do I disclose? It's also important for the arbitrators to understand, what do I do with the disclosure, before saying everything needs to be disclosed. During the GAR event in New York, the issue was raised whether documents which were prepared with the use of AI should be disclosed or whether there should be a blanket disclosure. And quite frankly, the debate went back and forth, but ultimately it comes down to cross-examination. It comes down to the expert or the party submitting the document being able to back up where the information comes from rather than knowing that AI was used. And to put that in perspective, we received a very interesting question of why we should continue using AI, knowing that approximately 30% of its output is hallucinations and it needs revamping. This was compared to a summer associate or a first-year associate, and the question was very simple. If I have a first-year associate or a summer associate whose output has a 30% error rate, why would I continue using that associate? And quite frankly, there is merit to the question, and it really has a very simple answer. And the answer is time and money. Using AI makes it much faster to receive output than using a first-year associate or summer associate, and it's way cheaper. For that, it's worth having a 30% error margin. I don't know where they got the 30% from, but we just went along with it.  Rebeca: I was about to ask you where they got the 30%. And well, I think that for first-year associates or summer associates that are listening, I think that the main thing will be for them to then become very savvy in the use of AI so they can become relevant to the practice. I think everyone, you know, there's always that question about whether AI will replace all of us, the entire world, and we'll go into machine apocalypses. I don't see it that way. In my view, I see that if we, you know, if we train ourselves, if we're not afraid of using the tool, we'll very much be in a position to pivot and understand how to use it. And when you have, what is the saying, garbage in, garbage out. So if you have a bad input, you will have a bad output. You need to know the case. You need to know your documents to understand whether the machine is hallucinating or giving you, you know, information that is not real. I like to play and ask certain questions to ChatGPT, you know, here and there. And sometimes I, you know, I ask obviously things that I know the answer to. And then I'm like, ChatGPT, this is not accurate. Can you check on this? And he's like, oh, thank you for correcting me. I mean, and it's just a way of, you got to try and understand it so you know where to make improvements. But that doesn't mean that the tool, because it's a tool, will come and replace, you know, your better judgment as a professional, as an attorney.  Benjamin: Absolutely. One of the things we say is it is a tool. It does nothing out of its own volition. So what you're saying is 100% right. 
This is what the SVAMC AI guidelines stand for. Practitioners need to accustom themselves to the proper use of AI. AI can be used from paid versions to unpaid versions. We just need to understand what is an open-source AI, what is a closed-circuit AI. Again, for whoever's listening, feel free to look up the guidelines. There's a lot of information there. There's tons of articles written at this point. And just be very mindful: if there is an open AI system, such as an unpaid ChatGPT version, it does not mean you cannot use it. First, check with your firm to make sure you're allowed to use it. I don't want to get into any trouble.  Rebeca: Well, we don't want to put confidential information on an open AI platform.  Benjamin: Exactly. Once the firm or your colleagues allow you to use ChatGPT, even if it's an open version, just be very smart about what it is you're putting in. No confidential information, no potential conflict check, no potential cases. Just be smart about what it is you put in. Another aspect we were actually debating about is this hallucination. Just an example, let's say this is an ISDS case, so we're talking a little more public, and you ask ChatGPT, hey, show me all the cases against Costa Rica. And it hallucinates, too. It might actually be that somebody input information for a potential case against Costa Rica or a theoretical case against Costa Rica, and ChatGPT, being on the open end, takes that as one potential case. So just be very smart. Be diligent, but also don't be afraid of using it.  Rebeca: That's a great note to end on. AI is here to stay. And as legal professionals, it's up to us to ensure it serves the interests of justice, fairness, and efficiency. And for those interested in learning more about the SVAMC AI guidelines, you can find them online at svamc.org and search for guidelines. I tried it myself and you will go directly to the guidelines. And if you'd like to stay updated on developments in AI and arbitration, be sure to follow Tech Law Talks and join us for future episodes where we'll continue exploring the intersection of law and technology. Ben, thank you again for joining me today. It's been a great pleasure. And thank you to our listeners for tuning in.  Benjamin: Thank you so much, Rebeca, for having me, and Tech Law Talks for the opportunity to be here.  Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies Practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved. Transcript is auto-generated.
    --------  
    37:11
  • AI explained: The EU AI Act, the Colorado AI Act and the EDPB
    Partners Catherine Castaldo, Andy Splittgerber, Thomas Fischl and Tyler Thompson discuss various recent AI acts around the world, including the EU AI Act and the Colorado AI Act, as well as guidance from the European Data Protection Board (EDPB) on AI models and data protection. The team presents an in-depth explanation of the different acts and points out the similarities and differences between the two. What should we do today, even though the Colorado AI Act is not in effect yet? What do these two acts mean for the future of AI? Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Catherine: Hello, everyone, and thanks again for joining us on Tech Law Talks. We're here with a really good array of colleagues to talk to you about the EU AI Act, the Colorado AI Act, the EDPB guidance, and we'll share some of those initials soon on what they all mean. But I'm going to let my colleagues introduce themselves. Before I do that, though, I'd like to say if you like our content, please consider giving us a five-star review wherever you find us. And let's go ahead and first introduce my colleague, Andy.  Andy: Yeah, hello, everyone. My name is Andy Splittgerber. I'm a partner at Reed Smith in the Emerging Technologies Department based out of Munich in Germany. And looking forward to discussing with you interesting data protection topics.  Thomas: Hello, everyone. This is Thomas, Thomas Fischl in Munich, Germany. I also focus on digital law and privacy. And I'm really excited to be with you today on this podcast.  Tyler: Hey everyone, thanks for joining. My name is Tyler Thompson. I'm a partner in the emerging technologies practice at Reed Smith based in the Denver, Colorado office.  Catherine: And I'm Catherine Castaldo, a partner in the New York office. So thanks to all my colleagues. Let's get started. Andy, can you give us a very brief overview of the EU AI Act?  Andy: Sure, yeah. It came into force in August 2024. And it is a law about mainly the responsible use of AI. Generally, it is not really focused on data protection matters. Rather, it sits next to the world-famous European data protection regulation. It has a couple of passages where it refers to the GDPR and also sometimes where it states that certain data protection impact assessments have to be conducted. Other than that, it has its own concept dividing up AI systems into different categories: prohibited AI, high-risk AI, and then normal AI systems. And we're just expecting new guidance on how authorities and how the Commission interpret what AI systems are, so watch out for that. There are also special rules on generative AI, and then some rules on transparency requirements when organizations use AI towards end customers. And depending on these risk categories, there are certain requirements, and, attaching to each of these categories, developers, importers, and also users, meaning organizations using AI, have to comply with certain obligations around accountability, IT security, documentation, checking, and of course, human intervention and monitoring. 
This is the basic concept, and the rules start to kick in on February 2nd, 2025, when prohibited AI must not be used anymore in Europe. And the next bigger wave will be on August 2nd, 2025, when the rules on generative AI kick in. So organizations should start and be prepared to comply with these rules now and get familiar with this new type of law. It's kind of like a new area of law.  Catherine: Thanks for that, Andy. Tyler, can you give us a very brief overview of the Colorado AI Act?  Tyler: Sure, happy to. So Colorado AI Act, this is really the first comprehensive AI law in the United States. Passed at the end of the 2024 legislative session, it covers developers or deployers that use a high-risk AI system. Now, what is a high-risk AI system? It's just a system that makes a consequential decision. What is a consequential decision? These can include things like education decisions, employment opportunities, employment-related decisions, financial lending service decisions, if it's an essential government service, a healthcare service, housing, insurance, legal services. So that consequential decision piece is fairly broad. The effective date of it is February 1st of 2026, and the Colorado AG is going to be enforcing it. There's no private right of action here, but violating the Colorado AI Act is considered an unfair and deceptive trade practice under Colorado law. So that's where you get the penalties of the Colorado AI Act. It's tied into the Colorado deceptive trade practice law.  Catherine: That's an interesting angle. And Tom, let's turn to you for a moment. I understand that the European Data Protection Board, or EDPB, has also recently released some guidance on data protection in connection with artificial intelligence. Can you give us some high-level takeaways from that guidance?  Thomas: Sure, Catherine, and it's very true that the EDPB has just released a statement. It was actually released in December of last year. And yeah, they have released that highly anticipated statement on AI models and data protection. This statement of the EDPB follows actually a much-discussed paper published by the German Hamburg Data Protection Authority in July of last year. And I also wanted to briefly touch upon this paper, because the Hamburg Authority argued that AI models, especially large language models, are anonymous when considered separately. They do not involve the processing of personal data. To reach this conclusion, the paper decoupled the model itself from, firstly, the prior training of the model, which may involve the collection and further processing of personal data as part of the training data set. And secondly, the subsequent use of the model, where a prompt may contain personal data and output may be used in a way that means it represents personal data. And interestingly, this paper considered only the AI model itself and concluded that the tokens and values that make up the inner processes of a typical AI model do not meaningfully relate to or correspond with information about identifiable individuals. And consequently, the model itself was classified as anonymous, even if personal data is processed during the development and the use of the model. So the EDPB statement, recent statement, does actually not follow this relatively simple and secure framework proposed by the German authority. The EDPB statement responds actually to a request from the Irish Data Protection Commission and gives kind of a framework, just particularly with respect to certain aspects. 
It actually responds to four specific questions. And the first question was, so under what conditions can AI models be considered anonymous? And the EDPB says, well, yes, they can be considered anonymous, but only in some cases. So it must be impossible with all likely means to obtain personal data from the model, either through attacks aimed at extracting the original training data or through other interactions with the AI model. The second and third questions relate to the legal basis of the use and the training of AI models. And the EDPB answered those questions in one answer. So the statement indicates that the development and use of AI models can generally be based on a legal basis of legitimate interest. Then the statement lists a variety of different factors that need to be considered in the assessment scheme according to Article 6 GDPR. So again, it refers to an individual case-by-case analysis that has to be made. And finally, the EDPB addresses the highly practical question of what consequences it has for the use of an AI model if it was developed in violation of data protection regulations. The EDPB says, well, this partly depends on whether the AI model was first anonymized before it was disclosed to the model operator. And otherwise, the model operator may need to assess the legality of the model's development as part of their accountability obligations. So quite an interesting statement.  Catherine: Thanks, Tom. That's super helpful. But when I read some commentary on this paper, there's a lot of criticism that it's not very concrete and doesn't provide actionable guidance to businesses. Can you expand on that a little bit and give us your thoughts?  Thomas: Yeah, well, as is sometimes the case with these EDPB statements, which necessarily reflect the consensus opinion of authorities from 27 different member states, the statement does not provide many clear answers. So instead, the EDPB offers kind of indicative guidelines and criteria and calls for case-by-case assessments of AI models to understand whether and how they are affected by the GDPR. And interestingly, someone has actually counted how often the phrase case-by-case appears in the statement. It appears actually 16 times, and "can" or "could" appears actually 161 times. So obviously, this is likely to lead to different approaches among data protection authorities, but it's maybe also just an intended strategy of the EDPB. Who knows?  Catherine: Well, as an American, I would read that as giving me a lot of flexibility.  Thomas: Yeah, true.  Catherine: All right, let's turn to Andy for a second. Andy, also in view of the AI Act, what do you now recommend organizations do when they want to use generative AI systems?  Andy: That's a difficult question after 161 cans and coulds. We always try to give practical advice. And I mean, with regard, like if you now look at the AI Act plus this EDPB paper or generally GDPR, there are a couple of items where organizations can prepare and need to prepare. First of all, organizations using generative AI must be aware that a lot of the obligations are on the developers. So the developers of generative AI definitely have more obligations, especially under the AI Act, for example. They have to create and maintain the model's technical documentation, including the training and testing processes, monitor the AI system. They must also, which can be really painful and will be painful, make available a detailed summary of the content that was used for training the model. 
And this goes very much also into copyright topics. So there are a lot of obligations, and none of these are on the using side. So if organizations use generative AI, they don't have to comply with all of this, but they have to, and that's our recommendation, ensure in their agreements when they license the model or the AI system that they get confirmation from the developer that the developer complies with all of these obligations. That's kind of like the supply chain compliance in AI. So that's one of the aspects from the using side. Make sure in your agreement that the provider complies with the AI Act. Another item for the agreement when licensing generative AI systems, attaching to what Thomas said, is getting a statement from the developer whether or not the model itself contains personal data. The ideal answer is no, the model does not contain personal data, because then we don't have the poisonous tree. If the developer was not in compliance with GDPR or data protection laws when doing the training, there is a cut. If the model does not contain any personal data, then this cannot infect the later use by the using organization. So this is a very important statement. We have not seen this in practice very often so far, and it is quite a strong commitment developers are asked to give, but it is something at least to be discussed in the negotiations. So that's the second point. A third point for the agreement with the provider is whether or not the usage data is used for further training, which can create data protection issues and might require using organizations to solicit consent or other justifications from their employees or users. And then, of course, having in place a data processing agreement with the provider or developer of the generative AI system if it runs on someone else's systems. So these are all items for the contracts, and we think this is something that needs to be tackled now because it always takes a while until the contract is negotiated and in place. And on top of this, as I said, the AI Act obligations are rather limited. There are only some transparency obligations for using organizations: for example, to inform their employees that they're using AI, or to inform end users that a certain text or photo or article was created by AI. So like a tag, "this was created by AI," being transparent that AI was used to develop something. And then on top of this, the general GDPR compliance requirements apply, like transparency about what personal data is processed when the AI is used. Justification of processing, add the AI system to your records of processing activities, and also check if potentially a data protection impact assessment is required. This will mainly be the case if the AI has an intensive impact on the personality rights of data subjects. So these are the general requirements. So takeaways, look, check the contracts, check the limited transparency requirements under the AI Act, and comply with what you know already under GDPR.  Tyler: It's interesting because there is a lot of overlap between the EU AI Act and the Colorado AI Act. But Colorado, it does have robust impact assessment requirements. You know, you've got to provide notification. You have to provide opt-out rights and appeal. You do have some of that publicly facing notice requirement as well. And so the one thing that I think I want to highlight that's a little bit different, we have an AG notification requirement. 
So if you discover that your artificial intelligence system has been creating an effect that could be considered algorithmic discrimination, you have an affirmative duty to notify the attorney general. So that's something that's a little bit different. But I think overall, there's a lot of overlap between the Colorado AI Act and the EU AI Act. And I like Andy's analogy of the supply chain, right? Colorado as well. Yes, it applies to the developers, but it also applies to deployers. And on the deployer side, it is kind of that supply chain type of analogy of these are things that you as a deployer, you need to go back, look at your developer, make sure you have the right documentation, that you've checked the right boxes there and have done the right things.  Catherine: Thanks for that, Tyler. Do you think we're entering into an area where the U.S. States might produce more AI legislation?  Tyler: I think so. Virginia has proposed a version of basically the Colorado AI Act. And I honestly think we could see the same thing with these AI bills that we have seen with privacy on the US side, which is kind of a state-specific approach. Some states adopting the same or highly similar versions of the laws of other states, but then maybe a couple states going off on their own and doing something unique. So it would not be surprising to me at all, at least in the short to midterm. We have a patchwork of AI laws throughout the United States just based on individual state law.  Catherine: Thanks for that. And I'm going to ask a question to both Tyler and Tom and Andy. Either one of you can answer, whoever thinks of this. But we've been reading a lot lately about DeepSeek and all the cyber insecurities, essentially, with utilizing a system like that and some failures on the part of the developers there. Is there any security requirement in either one of the EU or Colorado-based AI acts for deploying or developing a new system?  Tyler: Yeah, for sure. So where your security requirements are going to come in, I think, is in the impact assessment piece, right? Where, you know, when you have to look at your risks and how this could affect an individual, whether through a discrimination issue or other type of risk to it, you're going to have to address that in the discrimination piece. So while it's not like a specific security provision, there's no way that you're going to get around some of these security requirements because you have to do that very robust impact assessment, right? Part of that analysis under the impact assessment is known or reasonably foreseeable risks. So things like that, you're going to have to, I would say, address via some of the security requirements facing the AI platform.  Catherine: Great. And what about from the European side?  Andy: Yes, similar from the European side or perhaps even a bit more, definitely robustness, cybersecurity, IT security is like a major portion of the AI Act. So that's definitely a very, very important obligation and duty that must be compliant.  Catherine: And I would think too under GDPR, because you have to ensure adequate technical and organizational measures that if you had personal information going into the AI system, you'd have to comply with that requirement as well, since they stand side by side.  Andy: Exactly, exactly. And then there's under both also notification obligations if something goes wrong.  Catherine: Well, good to know. 
All right, well, maybe we'll do a future podcast on the impact of the NIST AI risk management framework and the application to both of these large bodies of law. But I thank all my colleagues for joining us today. We have time for just a quick final thought. Does anyone have one?  Andy: Thought from me after the AI Act came into force now, I'm as a practical European worried that we're killing the AI industry and innovation in Europe. It's kind of like good to see that at least some states in the U.S. follow a bit of a similar approach, even if it's, you know, different. Perhaps I haven't given up the hope for a more global solution. Perhaps the AI Act will be also adjusted a bit to then come more to a closer global solution.  Tyler: On the U.S., I'd say, look, my takeaway is start now, start thinking about some of this stuff now. It can be tempting to say it's just Colorado. You know, we have till February of 2026. I think a lot of these things that the Colorado AI Act and even the EU AI Act are requiring are arguably things that you should be doing anyway. So I would say start now, especially as Andy said, on the contract side, if nothing else. We'd start thinking about doing a deal with a developer or a deployer. What needs to be in that agreement? How do we need to protect ourselves? And how do we need to look at the regulatory space to future-proof this so that when we come to 2026, we're not amending 30, 40 contracts?  Thomas: And maybe a final thought from my side. So the EDPB statement does only answer a few questions, actually. So it doesn't touch other very important issues like automated decision-making. There is nothing in the document. There is not really anything about sensitive data. The use of sensitive data, data protection impact assessments are not addressed. So a lot of topics that remain unclear, at least there is no guidance yet.  Catherine: Those are great views and I'm sure really helpful to all of our listeners who have to think of these problems from both sides of the pond. And thank you for joining us again on Tech Law Talks. We look forward to speaking with you again soon.  Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved.  Transcript is auto-generated.
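For teams that want to track Andy's contract points when licensing a generative AI system, here is a minimal illustrative sketch in Python. The field and function names are hypothetical, and the items simply restate the points raised in the discussion above rather than any statutory checklist.

```python
"""Illustrative sketch only: a simple tracker for the contract points discussed
in the episode (developer's AI Act compliance confirmation, whether the model
contains personal data, reuse of usage data for training, and a data processing
agreement). All names are hypothetical; this is not a legal test."""

from dataclasses import dataclass


@dataclass
class AIVendorContractReview:
    developer_confirms_ai_act_compliance: bool  # technical documentation, training-data summary, monitoring
    model_contains_personal_data: bool          # developer's written statement, per the EDPB discussion
    usage_data_reused_for_training: bool        # may require consent or another justification
    dpa_in_place: bool                          # needed if the system runs on the provider's infrastructure


def open_points(review: AIVendorContractReview) -> list[str]:
    """Return the contract points that still need attention."""
    points = []
    if not review.developer_confirms_ai_act_compliance:
        points.append("Obtain developer's confirmation of its AI Act obligations.")
    if review.model_contains_personal_data:
        points.append("Ask how GDPR compliance of the training was ensured; assess legality of development.")
    if review.usage_data_reused_for_training:
        points.append("Check legal basis or consent for reuse of usage data in further training.")
    if not review.dpa_in_place:
        points.append("Put a data processing agreement in place with the provider.")
    return points


if __name__ == "__main__":
    review = AIVendorContractReview(True, False, True, False)
    for item in open_points(review):
        print("-", item)
```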
    --------  
    22:33
  • Navigating NIS2: What businesses need to know
Catherine Castaldo, Christian Leuthner and Asélle Ibraimova dive into the implications of the new Network and Information Security (NIS2) Directive, exploring its impact on cybersecurity compliance across the EU. They break down key changes, including expanded sector coverage, stricter reporting obligations and tougher penalties for noncompliance. Exploring how businesses can prepare for the evolving regulatory landscape, they share insights on risk management, incident response and best practices. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Catherine: Hi, and welcome to Tech Law Talks. My name is Catherine Castaldo, and I am a partner in the New York office in the Emerging Technologies Group, focusing on cybersecurity and privacy. And we have some big news with directives coming out of the EU for that very thing. So I'll turn it to Christian, who can introduce himself.  Christian: Thanks, Catherine. So my name is Christian Leuthner. I'm a partner at the Reed Smith Frankfurt office, also in the Emerging Technologies Group, focusing on IT and data. And we have a third attorney on this podcast, our colleague, Asélle.  Asélle: Thank you, Christian. Very pleased to join this podcast. I am counsel based in Reed Smith's London office, and I am also part of the Emerging Technologies Group and work on data protection, cybersecurity, and technology issues.  Catherine: Great. As we previewed a moment ago, on October 17th, 2024, there was a deadline for the transposition of a new directive, commonly referred to as NIS2. And for those of our listeners who might be less familiar, would you tell us what NIS2 stands for and who is subject to it?  Christian: Yeah, sure. So NIS2 stands for the Directive on Security of Network and Information Systems. And it is the second iteration of the EU's legal framework for enhancing the cybersecurity of critical infrastructures and digital services. It replaces the previous directive, which obviously is called NIS1, which was adopted in 2016 but had some limitations and gaps. So NIS2 applies to a wider range of entities that provide essential or important services to society and the economy, such as energy, transport, health, banking, digital infrastructure, cloud computing, online marketplaces, and many, many more. It also covers public administrations and operators of electoral systems. Basically, anyone who relies on network and information systems to deliver their services and whose disruption or compromise could have significant impacts on the public interest, security or rights of EU citizens and businesses will be in scope of NIS2. As you already said, Catherine, NIS2 had to be transposed into national member state law. So it's a directive, not a regulation, unlike DORA, which we discussed last time on our podcast. It had to be implemented into national law by October 17th, 2024. But most of the member states did not. So the EU Commission has now started investigations regarding violations of the Treaty on the Functioning of the European Union against, I think, 23 member states, as they have not yet implemented NIS2 into national law.  
Catherine: That's really comprehensive. Do you have any idea what the timeline is for the implementation?  Christian: It depends on the state. So there are some states that already have comprehensive drafts. And those just need to go through the legislative process. In Germany, for example, we had a draft, but we have elections in a few weeks. And the current government just stated that they will not implement the law before that. And so after the election, the implementation law will probably be discussed again, redrafted. And so it'll take some time. It might be in the third quarter of this year.  Catherine: Very interesting. We have a similar process. Sometimes it happens in the States where things get delayed. Well, what are some of the key components?  Asélle: So, NIS2 focuses on cybersecurity measures, and we need to differentiate it from the usual cybersecurity measures that any organization thinks about, where they protect their data and their systems against cyberattacks or incidents. So the purpose of this legislation is to make sure there is no disruption to the economy or to others. And in that sense, similar notions apply. Organizations need to focus on ensuring availability, authenticity, integrity, confidentiality of data and protect their data and systems against all hazards. These notions are familiar to us also from the GDPR framework. So there are 10 cybersecurity risk management measures that NIS2 talks about, and these are policies on risk analysis and information system security, incident handling, business continuity and crisis management, supply chain security, security in systems acquisition, development, and maintenance, policies to assess the effectiveness of measures, basic cyber hygiene practices and training, cryptography and encryption, human resources security training, and use of multi-factor authentication. So these are familiar notions also. And it seems the general requirements are something that organizations will be familiar with. However, the European Commission in its NIS Investments Report of November 2023 has done research, a survey, and actually found that organizations that are subject to NIS2 didn't really even take these basic measures. Only 22% of those surveyed had third-party risk management in place, and only 48% of organizations had top management involved in approving cybersecurity risk policies and any type of training. And this reduces the general commitment of organizations to cybersecurity. So there are clearly gaps, and NIS2 is trying to focus on improving that. There are a couple of other things that I wanted to mention that are different from NIS1 and are important. So as Christian said, essential entities have a different compliance regime applied to them compared with important entities. Essential entities need to systematically document their compliance and be prepared for regular monitoring by regulators, including regular inspections by competent authorities, whereas important entities are only obliged to kind of be in touch and communicate with competent authorities in case of security incidents. And there is an important clarification in terms of the supply chain; these are questions we receive from our clients. And the question is, does the supply chain mean anyone that provides services or products? And from our reading of the legislation, supply chain only relates to ICT products and ICT services. 
Of course, there is a proportionality principle employed in this legislation, as with most European legislation, and there is a size threshold. The legislation only applies to those organizations that exceed the medium-size threshold. And two more topics, and I'm sorry that I'm kind of taking over the conversation here, but I thought the self-identification point was important because, in the view of the European Commission, the original NIS1 didn't cover the organizations it intended to cover, and so in the European Commission's view, the requirements are so clear in terms of which entities it applies to that organizations should be able to assess it and register, identify themselves with the relevant authorities by April this year. And the last point, digital infrastructure organizations, their nature is specifically taken into consideration, their cross-border nature. And if they provide services in several member states, there is a mechanism for them to register with the competent authority where their main establishment is based, similar to the notion under the GDPR.  Catherine: It sounds like, though, there's enough information in the directive itself without waiting for the member state implementation that companies who are subject to this rule could be well on their way to being compliant by just following those principles.  Christian: That's correct. So even if the implementation into national law is currently not happening in all of the member states, companies can already work to comply with NIS2. So once the law is implemented, they don't have to start from zero. NIS2 sets out the requirements that important and essential entities under NIS2 have to comply with. For example, have a proper information security management system, have supply chain management, and train their employees, and so they can already work to implement NIS2. And the directive itself also has annexes that set out the sectors and potential entities that might be in scope of NIS2, and the member states cannot really vary from those annexes. So if you are already in scope of NIS2 under the information that is in the directive itself, you can be sure that you would probably also have to comply with your national rules. There might be some gray areas where it's not fully clear if someone is in scope of NIS2, and those entities might want to wait for the national implementation. And it also can happen that the national implementation goes beyond the directive and covers sectors or entities that might not be in scope under the directive itself. And then of course they will have to work to implement the requirements then. I think a good starting point anyway is the existing security program that companies already hopefully have in place. So if they, for example, have an ISO 27001 framework implemented, it might be good to start with a mapping exercise of what NIS2 might require in addition to ISO 27001. And then look at whether this should be implemented now or whether companies can wait for the national implementation. But it's not recommended to wait for the national implementation and do nothing until then.  Asélle: I agree with that, Christian. And I would like to point out that, in fact, digital infrastructure entities have very detailed requirements for compliance because there was an implementing regulation that basically specifies the cybersecurity requirements under NIS2. 
And just to clarify, the digital infrastructure entities that I'm referring to are DNS service providers, TLD name registries, cloud service providers, data centers, content delivery network providers, managed service providers, managed security service providers, online marketplaces, online search engines, social networking services, and trust service providers. So the implementing regulation is in fact binding and directly applicable in all member states. And the regulation is quite detailed and has specific requirements in relation to each cybersecurity measure. Importantly, it has detailed thresholds on when incidents should be reported, and we need to take into consideration that not any incident is reportable, only those incidents that are capable of causing significant disruption to the service or significant impact on the provision of the services. So please take that into consideration. And ENISA also published implementing guidance, and it's 150 pages, just explaining what the implementing regulation means. And it's still a draft. The consultation ended on the 9th of January 2025, so there'll be further guidance on that.  Catherine: Well, we can look forward to that. But I guess the next question would be, what are some of the risks for noncompliance?  Christian: Noncompliance with NIS2 can have serious consequences for the entities concerned, both legal and non-legal. On the legal side, NIS2 empowers the national authorities to impose sanctions and penalties for breaches. They can range from warnings and orders to fines and injunctions, depending on the severity and duration of the infringement. The sanctions can be up to 2% of the annual turnover or 10 million euros, whichever is higher, for essential entities, and up to 1.4% of the annual turnover or 7 million euros, whichever is higher, for important entities. NIS2 also allows the national authorities to take corrective or preventive measures. They can suspend or restrict the provision of the services and order the entities to take remedial actions or improve their security posture. So even if they have implemented security measures and the authorities determine that they are not sufficient in light of the risk applicable to the entity, they can require them to implement other measures to increase the security. On the non-legal side, it's very similar to what we discussed in our DORA podcast. There can be civil liability if there is an incident, if a damage occurs. And of course, the reputational damage and loss of trust and confidence can be really, really severe for the entities if they have an incident. And it's huge if it turns out they did not comply with the NIS2 requirements.  Asélle: I wanted to add that, unfortunately, with this piece of legislation, member states can add to the list of entities to which this legislation will apply. They can apply higher cybersecurity requirements, and because of the new criteria and new entities being added, it now applies to twice as many sectors as before. So quite a few organizations will need to review their policies and take cybersecurity measures. And it's helpful, as Christian mentioned, that, you know, ENISA already mapped the cybersecurity measures against existing standards. It's on its website. I think it's super helpful. And it's likely that the cybersecurity measures and the general risk assessment will be done by cybersecurity teams and risk compliance teams within organizations. However, legal will also need to be involved. 
And often, once policies are drafted, they're reviewed by in-house legal teams. So it's essential that they all work together. It's also important to mention that there will be an impact on the due diligence and contracts with ICT product providers and ICT service providers. So the due diligence processes will need to be reviewed and enhanced and contracts drafted to ensure they will allow the organization, the recipient of the services, to be compliant with NIS2. And maybe one last point, just to cover off the UK, what's happening in the UK for those who also have operations there. It is clear now that the government will implement a version of NIS2. It's going to follow in the European Union's footsteps. And we recently were informed of a government page on the new Cyber Security and Resilience Bill. It's clear that it's going to be covering five sectors: transport, energy, drinking water, health, and digital infrastructure. And digital services, very similar to NIS2, such as online marketplaces, online search engines, and cloud computing services. We are expecting the bill to be introduced to Parliament this year.  Catherine: Wow, fantastic news. So it should be a busy cybersecurity season. If any of our listeners think that they need help and think that they may be subject to these rules, I'm sure my colleagues, Asélle and Christian, would be happy to help with the legal governance side of this cybersecurity compliance effort. So thank you very much for sharing all this information, and we'll talk soon.  Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved. Transcript is auto-generated.
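Christian's description of the NIS2 fine ceilings (up to 2% of annual turnover or EUR 10 million, whichever is higher, for essential entities, and up to 1.4% or EUR 7 million for important entities) can be expressed as a small calculation. The sketch below is illustrative only; the function name and inputs are hypothetical, and actual exposure depends on the national implementing law and the facts of the case.

```python
"""Illustrative sketch only: the administrative fine ceilings described in the
episode for NIS2. Names are hypothetical; this is not a legal assessment."""


def nis2_max_fine_eur(annual_turnover_eur: float, entity_type: str) -> float:
    """Return the theoretical maximum fine under the ceilings discussed in the episode."""
    if entity_type == "essential":
        # Essential entities: up to 2% of annual turnover or EUR 10 million, whichever is higher.
        return max(0.02 * annual_turnover_eur, 10_000_000)
    if entity_type == "important":
        # Important entities: up to 1.4% of annual turnover or EUR 7 million, whichever is higher.
        return max(0.014 * annual_turnover_eur, 7_000_000)
    raise ValueError("entity_type must be 'essential' or 'important'")


if __name__ == "__main__":
    # An essential entity with EUR 800 million turnover: 2% is EUR 16 million, above the EUR 10 million floor.
    print(nis2_max_fine_eur(800_000_000, "essential"))  # 16000000.0
    # An important entity with EUR 100 million turnover: 1.4% is EUR 1.4 million, so the EUR 7 million floor applies.
    print(nis2_max_fine_eur(100_000_000, "important"))  # 7000000.0
```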
    --------  
    21:17
  • AI explained: AI and the Colorado AI Act
    Tyler Thompson sits down with Abigail Walker to break down the Colorado AI Act, which was passed at the end of the 2024 legislative session to prevent algorithmic discrimination. The Colorado AI Act is the first comprehensive law in the United States that directly and exclusively targets AI and GenAI systems. ----more---- Transcript:  Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Tyler: Hi, everyone. Welcome back to the Tech Law Talks podcast. This is continuing Reed Smith's AI series, and we're really excited to have you here today and for you to be with us. The topic today, obviously, AI and the use of AI is surging ahead. I think we're all kind of waiting for that regulatory shoe to drop, right? We're waiting for when it's going to come out to give us some guardrails or some rules around AI. And I think everyone knows that this is going to happen whether businesses want it to or not. It's inevitable that we're going to get some more rules and regulations here. Today, we're going to talk about what I see as truly the first or one of the first ones of those. That's the Colorado AI Act. It's really the first comprehensive AI law in the United States. So there's been some kind of one-off things and things that are targeted to more privacy, but they might have implications for AI. The Colorado AI Act is really the first comprehensive law in the United States that directly targets AI and generative AI and is specific for those uses, right? The other reason why I think this is really important is because Abigail and I were talking, we see this as really similar to what happened with privacy for the folks that are familiar with that. And this is something where privacy a few years back, it was very known that this is something that needed some regulations that needed to be addressed in the United States. After an absence of any kind of federal rulemaking on that, California came out with their CCPA and did a state-specific rule, which has now led to an explosion of state-specific privacy laws. I personally think that that's what we could see with AI laws as well, is that, hey, Colorado is the first mover here, but a lot of other states will have specific AI laws in this model. There are some similarities, but some key differences to things like the EU AI Act and some of the AI frameworks. So if you're familiar with that, we're going to talk about some of the similarities and differences there as we go through it. And kind of the biggest takeaway, which you will be hearing throughout the podcast, which I wanted to leave you with right up at the start, is that you should be thinking about compliance for this right now. This is something that as you hear about the dates, you might know that we've got some runway, it's a little bit away. But really, it's incredibly complex and you need to think about it right now and please start thinking about it. So as for introductions, I'll start with myself. My name is Tyler Thompson. I'm a partner at the law firm of Reed Smith in the Emerging Technologies Practice. This is what my practice is about. It's AI, privacy, tech, data, basically any nerd type of law, that's me. 
And I'll pass it over to Abigail to introduce herself. Abigail: Thanks, Tyler. My name is Abigail Walker. I'm an associate at Reed Smith, and my practice focuses on all things related to data privacy compliance. But one of my key interests in data privacy is where it intersects with other areas of the law. So naturally, watching the Colorado AI Act go through the legislative process last year was a big pet project of mine. And now it's becoming a significant part of my practice and probably will be in the future. Tyler: So the Colorado AI Act was passed at the very end of the 2024 legislative session. And it's largely intended to prevent algorithmic discrimination. And if you're asking yourself, well, what does that mean? What is algorithmic discrimination? In some sense, that is the million-dollar question, but we're going to be talking about that in a little bit of detail as we go through this podcast. So stay tuned and we'll go into that in more detail. Abigail: So Tyler, this is a very comprehensive law and I doubt we'll be able to cover everything today, but I think maybe we should start with the basics. When is this law effective and who's enforcing it and how is it being enforced? Tyler: So the date that you need to remember is February 1st of 2026. So there is some runway here, but like I said at the start, even though we have a little bit of runway, there's a lot of complexity and I think it's something that you should start now. As far as enforcement, it's the Colorado AG. The Colorado Attorney General is going to be tasked with enforcement here. A bit of good news is that there's no private right of action. So the Colorado AG has to bring the enforcement action themselves. You are not at risk of being sued under the Colorado AI Act by an individual plaintiff. Maybe the bad news here is that violating the Colorado AI Act will be considered an unfair and deceptive trade practice under Colorado law. So the trade practice regulation, that's something that exists in Colorado law like it does in a variety of state laws. And a violation of the Colorado AI Act can be a violation of that as well. And so that just really brings the AI Act into some of these overarching rules and regulations around deceptive trade practices. And that really increases the potential liability, your potential for damages. And I think also just from a perception standpoint, it puts a Colorado AI Act violation in with these kinds of consumer harm violations, which tend to just have a very bad perception, obviously, to your average state consumer. The law also gives the Attorney General a lot of power in terms of being able to ask covered entities for certain documentation. We're going to talk about that as we get into the podcast here. But the AG also has the option to issue regulations that further specify some of the requirements of this law. The thing that we're really looking forward to is additional regulations here. As we go through the podcast today, you're going to realize it seems like there's a lot of gray area. And you'd be right, there is a lot of gray area. And we're hoping some of the regulations will come out and try to reduce that amount of uncertainty as we move forward. Abigail, can you tell us who the law applies to and who needs to have their ducks in a row for the AG by the time we hit next February? Abigail: Yeah. 
So unlike Colorado's privacy law, which has like a pretty large like processing threshold that entities have to reach to be covered, this law applies to anyone doing business in Colorado that develops or deploys a high-risk AI system. Tyler: Well, that high-risk AI system sentence, it feels like you used a lot of words there that have a real legal significance. Abigail: Oh, yes. This law has a ton of definitions, and they do a lot of work. I'll start with a developer. A developer, you can think of just as the word implies. They are entities that are either building these systems or substantially modifying them. And then deployers are the other key players in this law. Deployers are entities that deploy these systems. So what does deploy actually mean? The law defines deploy as to use. So basically, it's pretty broad. Tyler: Yeah, that's quite broad. Not the most helpful definition I've heard. So if you're using a high-risk AI system and you do business in Colorado, basically you're a deployer. Abigail: Yes. And I will emphasize the fact that it only applies to most of the requirements of the law. Only apply to high-risk AI systems. And I can get into what that means. High-risk, for the purpose of this law, refers to any AI system that makes or is a substantial factor in making a consequential decision. Tyler: What is a consequential decision? Abigail: They are decisions that produce legal or substantially similar effects. Tyler: Substantially similar. Abigail: Yeah. Basically, as I'm sure you're wondering, what does substantially similar mean? We're going to have to see how that plays out when enforcement starts. But I can get into what the law considers to be legal effects, and I think this might highlight or shed some light on what substantially similar means. The law kind of outlines scenarios that are considered consequential. These include education enrollment, educational opportunities, employment or employment opportunities, financial or lending service, essential government services, health care services, housing, insurance, and legal services. Tyler: So we've already gone through a lot. So I think this might be a good time to just pause and put this into perspective, maybe give an example. So let's say your recruiting department or your HR department uses, aka deploys an AI tool to scan job applications or job application cover letters for certain keywords. And those applicants that don't use those keywords get put in the no pile or, hey, this cover letter, it's not talking about what we want to talk about, but we're going to reject them. They're going to go on the no pile of resumes. What do you think about that, Abigail? Abigail: I see that as kind of falling into that employment opportunity category that the law identifies. And I feel like that's kind of almost like falling into that substantially similar thing when it comes to substantially similar to legal effects. I think that use would be covered in this situation. Tyler: Yeah, a lot of uncertainty here, but I think we're all guessing until enforcement really starts or until we get more help from the regulations. Maybe now's the time, Abigail, do you want to give them some relief? Talk about some of the exceptions here. Abigail: Yeah, I mean, we can, but the exceptions are narrow. Basically, as far as developers are concerned, I don't think they're getting out of the act. If your business develops a high-risk AI system and you do business in the state, you're going to comply with it. Tyler: Oh. 
Abigail: Yeah, or face enforcement. The law does try to prevent deployers from accidentally becoming developers, and that's a nuanced thing. Tyler: So I guess that's interesting. What do you mean by that, that it tries to prevent them from becoming developers, and how does it do that? Abigail: So if you recall when I was talking about what a developer is, you can fall into the developer category if you modify a high-risk AI system. You don't have to be the one that actually creates it from the start, but if you substantially modify a system, you're a developer at that point. But what the law tries to do is make it so that if you're a deployer and your business deploys one of these systems and the system continues to learn based off of your deployment, and then that learning changes the system, you don't become a developer as a result. But, and like, this is a big but, that chain of events, the system modifying itself based off of training on your data or your use of the system, that has to be an anticipated change that you found out about through an impact assessment. So, and it also has to be technically documented. I'll give a crude hypothetical for this, just like a simple one to kind of help you wrap your mind around what I'm talking about here. Let's say I have a donut business and I start using Bakery Corporation's AI system. And then that system starts to become an expert in donuts as a result of my using it. It can't be a happy accident. I have to anticipate that or else my business becomes a developer. Tyler: Yeah. Donuts are high risk in this scenario, right? Abigail: Well, donuts are always a consequential decision, Tyler. Tyler: That's fair. Abigail: But there's more. Deployers have a little bit of a small business exception. And I think that this is going to end up really helping a lot of companies out. Basically, you will meet this exception if you employ fewer than 50 full-time equivalent people, if you don't use your own data to train the high-risk AI system. And if you use the system as intended by the developer and make available to your consumers similar information as to what would go into an impact assessment, then you get out of some of the law's more draconian requirements, such as the public notice requirement, impact assessments, and the big compliance program that the law requires that we'll get into later. Tyler: Okay, wait. So if my donut business is already providing consumers with some of the similar information that would have been in the impact assessment, I don't actually have to conduct a full impact assessment then? Abigail: Yes. Tyler: But wouldn't I have to basically do the impact assessment anyway to know what the similar information is? Like, how can I provide similar information without knowing what would have been in the impact assessment? Abigail: Yes and no. You have to do it, but you don't. And I think this is another spot where the definitions are kind of doing a lot of work here. I think that what the law is trying to do with this exception is trying to not force small businesses to have these robust, expensive compliance programs that you and I know are a heavy lift, while still kind of making them carefully consider the consequences of using a high-risk AI system. I think that's the balance that's trying to be struck here is, you know, we understand that compliance programs, especially the one that this law dictates, are very expensive and cumbersome and can sometimes require whole compliance departments. 
But we also still don't want to let small businesses employ high-risk AI systems in a way that's not carefully considered and could potentially result in algorithmic discrimination. Tyler: Okay, that makes sense. So maybe a small business would be using the requirements of an impact assessment, not actually doing one, but using the requirements as a guide for how they should go about using the AI system. So they don't actually have to do the assessment, but just looking at the requirements provides a helpful guide. Abigail: Yeah, I think that's the case. And we'll get into this more later when we talk about some of the enforcement mechanisms, but they also wouldn't have to provide the attorney general with an impact assessment. That's part of the enforcement aspect. Tyler: So wait a second. I think we've been positive for probably almost a minute or two. So I think it's time for maybe the other shoe to drop, right? So you said that this only exempts small businesses from a number of requirements. I think they still have to tell customers if a high-risk system was used to make a decision about them though. Is that right? Abigail: Yes, that's right. Tyler: Okay, interesting. So is there any other not very relieving relief that you want to share with us? Abigail: Yes. So I also want to circle back on the high risk thing. Like for example, the law does explicitly say that AI systems that consumers talk to for informational purposes, you know, like if I go to one of these language models and I say, write an email to my boss asking for a last minute vacation, these are not high risk as long as they have an accepted use agreement, like a terms of use. Tyler: Okay, so that's interesting. I think I see what the act is getting at there. So if I ask an AI model with a terms of use to write that vacation email, then it results in my resignation, probably because I didn't read it before sending it, then that's out of scope. Abigail: Yes. And one last thing, if I may. Tyler: Of course you may. Abigail: Yes. I want to make sure that this is clear. All consumer facing AI systems, high risk or not, have to disclose to the consumer that they are using or talking to AI. There is a funny little exception here. The law includes an obviousness exception, but I would not counsel anyone to rely on that. And I'm sure, Tyler, you've seen people on social media fall for those AI-generated videos where they have like 12 fingers and there's like phrases, but they're not using real letters. I think obvious is too subjective to rely on that exception. Tyler: Yeah, I agree. And of course, I would never fall for one of those and certainly have not numerous times. So good to know. So let's switch gears. Let's talk a little bit about internal compliance requirements. We've spent a lot of time talking about the who and the what that the Colorado AI Act applies to. I think now we shift gears and we talk about what does the Colorado AI Act actually require. And I guess, Abigail, do you want to start by telling us what developers have to do? Abigail: Yeah. So first and foremost, I will say both developers and deployers have an affirmative responsibility to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. And I'm sure, Tyler, that you're probably prickling at the reasonably foreseeable aspect of that. 
Tyler: Yeah, I think in general, right, the regulator always has a different idea of what's reasonably foreseeable than, you know, a business in this space that's actually operating in this area, right? You know, I would say that the business could be the true expert on AI and what is algorithmic discrimination, but reasonable minds can disagree about what's reasonable and what's reasonably foreseeable. And so I do think that's tricky there. And while it might seem like a gray area that's helpful, I think it's just a gray area that adds risk. Abigail: Yeah. And now getting kind of more into what developers have to do, I'm about to bomb you with a laundry list of requirements. So if this is overwhelming, don't worry, you're not alone. But one of the main aspects of the requirements for developers is that they have to provide the deployers. So remember the people that are using the developer's high-risk AI system. They have to provide them with tons of information. They have to give them a statement describing reasonably foreseeable uses and known harmful or inappropriate uses. They have to document the data used to train the system, the reasonably foreseeable limitations of the system, purpose, intended benefits, and uses, and all other information a deployer would need to meet their obligations. They also have to document how it was evaluated for performance and mitigation of algorithmic discrimination, data governance measures for how they figured out which data sets were the right ones to train the model. The intended outputs of it, and also how the system should and should not be used, including being monitored by an individual. It's really tracing this model from inception to deployment. Tyler: Wow. Woof. Well, I want to get into some of this intended uses versus reasonably foreseeable uses thing. Talk about that for a minute. I think a key point here will be trying to address some of these things in the contract, right? You know, Abigail, you and I have talked a lot about artificial intelligence addendums, artificial intelligence agreements that you can attach to kind of a master agreement. I think something like that, that gives us some certainty and something reasonable in a contract might be key here, but I'm interested to hear your thoughts. Abigail: Yeah, I agree with you, Tyler. I think that this intended uses thing, it's interesting that the law also requires developers to also identify what they think are not intended uses, but possible uses. And here I'm thinking that a developer probably in their AI addendum is probably going to want to put stuff in there, especially tying to like indemnification, kind of saying, hey, Deployer, if you use this in a way that we did not intend, you need to hold us harmless for any downflow effects of that. I think that's going to be key for developers here. Tyler: Yeah, I'm with you. I think the contract is just so crucial and just have to have that in my mind moving forward to do this the right way. You talk about the deployers. Dare I ask about the deployers and what the act requires there? Abigail: Yep. And here I think our listeners are going to really see why the small business exception is a big deal. So, deployers are required to implement a risk management policy and program, and the law does not leave anything here to chance. To quote it, the program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk AI system. 
Basically, it's not enough just to paper up as a deployer. You have to be papering up and then following the plan that you set up for yourself. Tyler: Yeah, interesting. And I wonder if the regulators kind of saw what happened with even privacy, right? Where there was a lot of put a policy in place, let's paper this. But on a monthly or daily or yearly basis, whatever your life cycle is, you're not actually doing a lot with it. So interesting that they have made that so robust with those requirements. And it seems like this is where that small business exception must be pretty important, right? Abigail: Absolutely, yes. Because as you and I know, this can get pretty expensive and it can take up a lot of man hours as well. The program also has to be based off of a nationally or internationally recognized standard, kind of like we see when NIST publishes guidance. It has to consider things like the characteristics of the deployer and also the nature, scope, and intended uses of the system. And it also has to consider the sensitivity and volume of the data processed. Like I said, nothing's left up to chance here. And that's not all. This is another big compliance requirement. Tyler, do you want to give an overview of what the impact assessments have to look like? We've seen these in data privacy before. Tyler: Yeah, for sure. Happy to. And I know it just seems like a lot because it is a lot, but hopefully the impact assessment is something that you're at least a little bit familiar with because, as Abigail said, we've seen that in privacy. There's other compliance areas where an impact assessment or a risk assessment is important. In my mind, it does follow some of what we saw in privacy where your very high level, your 30,000-foot view is we're talking about what the AI system is doing. We're going to point out some risk there, and then we're going to point out some controls for that risk. But to get into some of the specifics, let's talk about what's actually required and some of the specifics here. The first is a statement describing the purpose, intended uses, and benefits of the AI system. You also need an analysis of whether it poses any known or reasonably foreseeable risks of algorithmic discrimination and steps that mitigate that risk. You need a description of the categories of data used as inputs and the outputs the system produces, any metrics used to evaluate the system's performance, and a description of transparency measures and post-deployment monitoring and guardrails. And finally, finally, finally, if the deployer is going to customize that AI system, an overview of the categories of data that were used in that customization. Abigail: Yeah. So I'm starting to see a lot of privacy themes here, especially with descriptions of data categories. But when do deployers have to do these impact assessments? Tyler: The shorter answer is it's ongoing, right? It's an ongoing obligation. They have to do it within 90 days of the deployment or modification of a high-risk AI system. And then after that, at least annually. So you have that short 90-day turnaround up front. And keep in mind that's deployment or modification of the system. And then you have the annual requirement. So this really is an ongoing thing that you're going to need to stay on top of and have your team stay on top of. Also worth noting that modifications to the system trigger an additional requirement in which deployers have to describe whether or not the system was used within the developer's intended use. 
But that's kind of a tricky thing there, right? A little bit, you might have to step into the developer's shoes and think about, was this their intended use? Especially if the developer didn't provide really good documentation there, and that's not something you got during the process of signing up with them for the AI platform. Abigail: Yeah, and I think this highlights, again, how the intended use thing is going to play a big role in contracting. I think there's also a record retention requirement with these impact assessments, right? Tyler: Yeah, there is. I mean, 2025, I think, is going to be the year of record retention. Deployers have to retain all impact assessments, so all your past impact assessments, each time you conduct one for your annual review. So at least three years following the final deployment. So that's important to think about, too. I mean, something that we saw with privacy is, A, it's an updated impact assessment, and some of those old impact assessments would just be gone, be removed. Maybe they were a draft assessment that was never actually finalized. Now it kind of makes it clear that every time you have an impact assessment that satisfies one of these requirements, that hits the timeframe, we need to have an impact assessment, let's say, within that 90 days. Now that's an impact assessment that you have to save for a minimum of three years following that deployment. Also, if you recall, deploy has a really, really broad definition of just to use. So really, it's three years from the last time the system gets used. I think that can be incredibly tricky. and certainly it's a couple years down the road, but that can be an incredibly tricky thing, right? If you have an AI system that is kind of dormant or maybe it's used once a year for a compliance function, something like that, every time it's touched, that's going to re-trigger that use definition and then you will have deployed it again and now you have to do another three-year period from that last deployment or use. Abigail: Wow, yeah. Thinking like you're going to have some serious admin controls on when you put a high-risk AI system to bed. I think, too, there's also some data privacy-esque requirements involved with these. Do you want to go over that really quick? Tyler: Sure, yeah. I mean, these are some of the transparency things that, again, like you said, Abigail, folks might be kind of used to doing some of these transparency-type requirements from the privacy side. The Colorado AI Act has these requirements, too. So first, notification, opt-out, and appeal. So remember, we're talking about AI systems that are helping to make or actually making consequential decisions. In that case, the law requires the deployer to notify the consumer of the nature and consequences of the decision before it's made. So before that can actually, the AI system can make a decision or help, the consumer has to be notified. I have to tell the consumer how to contact the deployer. This might seem easy, but as we've seen with privacy, you might have a whole different contact set of information for something AI related than like your general customer service line, for example. If applicable, you have to tell the consumer how to opt out of their personal data being used to profile that consumer in a way that produces that legal effect. So that's similar to what we've seen in Colorado privacy law and other state comprehensive privacy laws in the United States. 
And then finally, if a decision is adverse to the consumer, you have to provide specific information on how that decision was reached, along with opportunities to correct any wrong data that produced the adverse decision and a way to appeal the decision. So that's a big deal there. I mean, providing that information on how that decision was reached, I think that requirement alone might be enough to cause some businesses to say, we don't want to go down this road. We don't want to provide it because we don't want to have to provide this information on why an adverse decision was reached. Abigail: Yeah, I would agree with that. And I want to reemphasize here that small businesses are not exempt from this part of the law, even if they're exempt from the other stuff. Tyler: Yeah, sadly, that's correct. And most importantly, deployers have to make sure this information gets to the consumer, which can be tricky, of course, right? And then even if not interacting with the consumer directly, you've got to figure out a way that the consumer can actually have this information. And then it has to be in plain language and accessible. So I view this as another spot that a contract can come into play because there's going to be maybe some real requirements here that you're not going to be able to handle. You might have to use that contract to make sure that you have the information that you need and to maybe obligate other parties to provide some of this information to the consumer, depending on your relationship. Abigail: Yeah, absolutely. Should we really quick talk about the notice provisions? Tyler: Yeah, I think that'd be great. Abigail: Okay, so real quick, both deployers and developers are going to have to have some sort of public facing policy. I really think that this is going to become a commonplace thing, an AI policy, kind of like we have privacy policies now. And some themes for these policies are going to be describing your currently deployed systems, how you're managing known or reasonably foreseeable risk, insights about the information collected and processed. And the other thing is that there's an accuracy requirement here. If you change any of those things on the back end, you need to update your AI policy within 90 days. Tyler: And I know we're kind of glossing over this a bit, but I feel like this is kind of a trap, right? We've seen this before where a business can get dinged because their privacy policy or something didn't accurately reflect their data practices. I think this is similar, right? Where maybe, arguably, they would have been better off by not saying anything. Abigail: Yeah, of course. I think this aspect kind of opens companies up to some FTC risk. They're going to have to stay on top of this, not only to comply with Colorado law, but also to avoid federal regulatory scrutiny in kind of the same unfair and deceptive trade practices arena. Tyler: Well, I know we're getting to the end here, but I think we've got to quickly talk about how much insight the act entitles the AG to. And then maybe, Abigail, just go on and talk about some of the attorney-client privilege, that weirdness that we have as well. Abigail: Yeah. So I think this is where the law gets really scary for companies: it enables the AG to ask developers for all of that information that they have to provide deployers that we went over quickly earlier in the podcast. 
And then the AG also gets to ask deployers for their risk management policy and program, their impact assessments, and all the records that support the impact assessment. Tyler: Yeah. And then do you want to talk about that weird no-waiver-of-attorney-client-privilege piece? I think that's really strange. Abigail: Yeah. So we've seen this with the privacy laws as well, because I think that, if I'm remembering correctly, the AG gets to ask for those impact assessments as well. And it has this provision that says having to provide the impact assessment doesn't waive attorney-client privilege when you comply with it, which is, I think, interesting because then the AG has now seen your information, but they're not allowed to use it against you. I don't know how that's going to work in terms of enforcement. Tyler: Yeah, that's pretty strange, right? And there is a 90-day deadline for responding to these requests, so it's kind of a short deadline. Abigail: Yeah. And then finally, the last kind of, like I said, scary AG notification requirement that I really want to point out is that there's a mandatory reporting requirement: if a developer or deployer discovers that a high-risk AI system has caused algorithmic discrimination, then they have to alert the AG. There is an affirmative defense if they rectify the issue, but you only get the affirmative defense if you have those NIST-like frameworks in place. And also, I want to point out too that the law does not require deployers to tell their developers that they found algorithmic discrimination. They just have to tell the AG. So I think this is another issue if you're on the developer side: in contracting, you need to insist that your deployers are also alerting you to this kind of issue. Tyler: Yeah, right. Otherwise, you know, your deployer is going to tell the AG that, hey, that developer's product is discriminating. You might never even know that it happened. You'll have an investigation maybe pending against you and you had no idea that it was even going to happen. Abigail: Exactly. So since we're going to wrap up here, Tyler, I want to reemphasize, I think you talked about this in the beginning. If this goes into effect in a year, why are we talking about it today? Tyler: Look, I think, you know, from this conversation, it's obvious, right? There is a lot, a lot, here. This is going to be a big project for any business that it covers. I think there's also even threshold projects of determining, hey, is this something that is going to apply to us? And that's going to be big as well. You know, as I've seen in my years doing data privacy, there's probably going to be a little bit of an initial lag in enforcement. So I don't expect, hey, once we hit February 2026, a bunch of enforcement actions. But I could be wrong. And those enforcement actions, when they do come, are going to come seemingly out of nowhere. And they're going to be backwards looking to the effective date of the law. So you really don't want to be caught off guard here. There's a lot to do. Abigail: Yeah, I think that's especially true considering how much documentation is involved. I feel like this law really implicates a lot of business planning and decision making. So you kind of need to have the business side of your team really thinking about whether these systems are worth it in the end. Tyler: Yeah. 
When you think about the compliance costs, the amount of oversight, you just have to be honest with yourself, I think, if you're a business, as to whether implementing a high-risk AI system is really worth it for you, at least in Colorado. And I think we're going to see it in other states as well. I think this is especially true for the business that just barely misses that deployer exception. And if you just barely miss that deployer exception, that can be tough, right? Because you might have the bigger compliance obligations. And so that's something you have to think about if you're in that gray area or maybe some of the other gray areas in this law, think about whether it's really worth it. Well, we've covered a lot here. I think the bottom line is the risk here is real. There are real action items. There are real things you need to do. Please reach out to us. Abigail and I, as you can probably tell, we love nerding out about the subject. We'd be happy to talk to you high level and just help you brainstorm whether it applies to you and your company or not. Again, thanks so much for joining. Really appreciate your time and hope to see you on the next one. Abigail: Yeah, thank you, everyone. And thank you, Tyler. Tyler: Yeah, thanks, Abigail. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.  Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.  All rights reserved.  Transcript is auto-generated.
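Several of the mechanics Tyler and Abigail describe, the small-deployer exemption criteria and the impact assessment timing and retention rules, reduce to simple checks and date arithmetic. The sketch below illustrates them in Python using the figures mentioned in the episode; all names are hypothetical, and the logic is a rough reading of the discussion, not of the statute itself.

```python
"""Illustrative sketch only: the exemption criteria and timing figures discussed
in the episode (fewer than 50 FTEs, no training on the deployer's own data, use
as the developer intended, disclosure of impact-assessment-style information;
assessments within 90 days of deployment or modification, retained for three
years after final deployment). Names are hypothetical; not a legal test."""

from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class DeployerProfile:
    full_time_equivalents: int
    trains_system_on_own_data: bool
    uses_system_as_developer_intended: bool
    discloses_impact_assessment_style_info: bool


def qualifies_for_small_deployer_exemption(p: DeployerProfile) -> bool:
    """Rough reading of the exemption from the heavier deployer duties discussed in the episode."""
    return (
        p.full_time_equivalents < 50
        and not p.trains_system_on_own_data
        and p.uses_system_as_developer_intended
        and p.discloses_impact_assessment_style_info
    )


def first_assessment_due(deployed_or_modified_on: date) -> date:
    """Impact assessment due within 90 days of deployment or modification."""
    return deployed_or_modified_on + timedelta(days=90)


def retain_assessments_until(final_deployment_or_use: date) -> date:
    """Retention runs at least three years from the last deployment (i.e., use) of the system."""
    return final_deployment_or_use.replace(year=final_deployment_or_use.year + 3)


if __name__ == "__main__":
    profile = DeployerProfile(30, False, True, True)
    print(qualifies_for_small_deployer_exemption(profile))  # True
    print(first_assessment_due(date(2026, 2, 1)))           # 2026-05-02
    print(retain_assessments_until(date(2026, 6, 15)))      # 2029-06-15
```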
    --------  
    34:21

About Tech Law Talks

Listen to Tech Law Talks for practical observations on technology and data legal trends, from product and technology development to operational and compliance issues that practitioners encounter every day. On this channel, we host regular discussions about the legal and business issues around data protection, privacy and security; data risk management; intellectual property; social media; and other types of information technology.