From Lawyer to Employer: Season 4, Episode 8 | Generative AI and Legal Ethics: What In-House Counsel Need to Know
From Lawyer to Employer: A Shipman Podcast
Generative AI is quickly becoming part of the legal toolkit, but the ethical obligations of lawyers haven’t changed. In this episode of From Lawyer to Employer, host Dan Schwartz sits down with Shipman attorney Claire Pariano to explore how generative AI is reshaping legal practice and what in-house counsel need to know to stay compliant.
For legal departments and organizations navigating AI adoption, Dan and Claire outline practical steps for building an AI governance framework - from risk classification and verification protocols to training and data security safeguards.
The takeaway: AI can be a powerful tool, but lawyers remain responsible for the work product they produce.
Host: Welcome to From Lawyer to Employer, a Shipman podcast, bringing you the latest developments in labor and employment law and offering practical considerations for your organization. You can subscribe to this podcast on Apple, Spotify, or wherever you listen. Thank you for joining us, and we hope you enjoy today's episode.
Dan Schwartz: Welcome back to a new episode of From Lawyer to Employer, a Shipman & Goodwin podcast. I'm your host, Dan Schwartz, a partner in the Labor and Employment and Education Group at Shipman & Goodwin. On today's podcast, we're tackling a topic that's on every legal department's mind: generative AI and the ethical guardrails that in-house lawyers need to follow.
And really any lawyers - we know from the feedback we get on this podcast that we've got more than a few lawyers who listen. Recently my colleague Claire Pariano and I had the opportunity to speak to the Connecticut chapter of the Association of Corporate Counsel on this very topic. So I figured we would bring Claire on, recap a few of the highlights and lessons learned, and share them with you.
So, Claire, welcome back to the podcast.
Claire Pariano: Thanks, Dan. Yeah. As you mentioned, the emphasis today is that while AI is a powerful tool, it doesn't change our professional obligations as attorneys.
Dan Schwartz: Yeah, I mean, it's always fun - anytime a new technology comes about, we're always sort of applying the older rules we had to the new technology. And I think if there's a takeaway to start with, it's that the old rules still apply. So let's jump into it. First, let's set the table here: what makes generative AI different from some of the tools that lawyers have been using over the most recent generation?
Claire Pariano: Yeah, so as you mentioned, lawyers have used AI-type platforms for years - e-discovery, legal research platforms, and document management. When we say generative AI, what we mean is that it creates new content - such as text, images, and analysis - based on patterns from its training data. This is different from extractive AI, which retrieves existing information without generating new content.
So what's new here is that generative AI drafts content, and that raises a whole new set of ethical questions.
Dan Schwartz: Yeah, and the American Bar Association - of which I'm a member, and I just got off the Board of Governors last year - weighed in with Ethics Opinion 512 back in July of 2024. So what were some of the key questions that the opinion sought to address?
Claire Pariano: Yes, as you mentioned, the ABA identified five fundamental questions. First, what competency do lawyers need? Second, how do we protect confidentiality? Third, when must we disclose AI use to clients? Fourth, what level of review is required? And fifth, what's a reasonable fee? These questions map directly onto familiar ethics rules: competence, confidentiality, communication, supervision, and candor.
Dan Schwartz: Awesome. So let's start with competence under Rule 1.1, which I'm sure everyone remembers from law school and has looked at every year since. But what does that rule require when using artificial intelligence?
Claire Pariano: Connecticut's commentary says lawyers must, quote, 'keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.' For AI, this means understanding capabilities and limitations, recognizing risks like hallucinations, and never over-relying on AI output. So always fact-check what you receive.
Dan Schwartz: Yeah, I think hallucinations is a word that has entered the everyday lexicon, and it really means fabricated information presented as fact. I think that's something that has been somewhat common - perhaps a little overblown at times - but we've certainly seen cautionary tales.
Claire - if I'm remembering right, there was even a case, uh, in the last year or so where an attorney used AI to generate case citations and didn't verify them. And frankly, there's been more than one of these cases, but that's the one I was thinking about.
Claire Pariano: I mean, this is getting more common in our field. This case from a Wyoming federal court dealt with an attorney who entered prompts such as 'add federal case law from Wyoming to a motion,' and the AI generated non-existent cases. The attorney did not verify the accuracy of these cases before including them in a motion that he filed with the court. The court sanctioned him $3,000 and revoked his pro hac vice admission, saying that 'while technology continues to evolve, one thing remains the same - checking and verifying the source.'
Dan Schwartz: Yeah, and I think for in-house counsel listening to this, checking the sources that are presented as facts in a generative AI output is really critical, right? You own the work product, and you've gotta verify everything. Is that a fair takeaway, Claire?
Claire Pariano: Exactly - you're signing your name on this motion and filing it with the court, so you need to verify that what you're submitting is accurate.
Dan Schwartz: All right, so let's talk about another rule here, which is, uh, Rule 1.6, which covers confidentiality for lawyers. I presume that should be a major concern, particularly for in-house counsel, right?
Claire Pariano: Exactly, and we've identified two main risks concerning confidentiality.
First, information could be disclosed to others outside your organization through the AI platform. And second, self-learning AI tools might incorporate your inputs - the information or documents you submit - into their training data, potentially exposing client information later on. The safeguards here are informed client consent and conducting due diligence on these tools to ensure they maintain confidentiality.
Dan Schwartz: So, you know, if you're using a commercial off-the-shelf generative AI program like ChatGPT, Claude, or Gemini, their consumer terms say they may train on your data. So if you input confidential information there, there's no assurance it won't get used down the road, and you've gotta be cautious. There was a recent case, I think in New York, that illustrated another example we hit upon, right?
Claire Pariano: Yes. This was very recent - just last month, February of 2026 - a case out of the Southern District of New York involving a financial services executive accused of fraud. This individual used Claude AI to prepare documents related to his case, which he then sent to his defense counsel.
When the government seized his devices and found these AI-generated materials, his defense counsel claimed privilege. The judge, however, ruled that they were not privileged, reasoning that an AI tool is not an attorney, and the tool's terms of service disclaim any attorney-client relationship and state that inputs are not confidential.
Dan Schwartz: Yeah, and so that's really a wake-up call for in-house counsel that consumer AI tools may not protect privilege the way lawyers assume. Now, that's not to say confidentiality can't attach when you use a professional-grade AI or one designed for lawyers - much like with the e-discovery programs attorneys use now.
But I think there's a real difference between something off the shelf and a specifically designed tool where perhaps you have data privacy agreements in place. So really, the lesson is to make sure you know what you're using to protect confidentiality.
So let's talk about communications and the duty to communicate with clients. In-house attorneys have their own clients they deal with, or they may be the client dealing with outside counsel who are using these tools. So do lawyers need to disclose the use of AI under some of the guidance that's out there?
Claire Pariano: It really depends on whether the specific circumstances warrant disclosure. We recommend evaluating the client's expectations and the sensitivity of the information involved. Some clients have explicit policies prohibiting AI use, and in-house counsel should be aware that outside firms may be using these tools unless told otherwise.
Dan Schwartz: Yeah. And we've certainly now seen clients, uh, who have guidelines requiring the use of AI for things like deposition summaries or certain review tasks. So the ground is shifting as we speak. But there are some rules on billing too, right? I think under Rule 1.5, that attorneys should be aware of.
Claire Pariano: Yes. As we all know, fees must be reasonable. When billing hourly, attorneys can only bill for actual time spent, including review of AI output - you can't charge hours for work the AI did in seconds, whether that's reviewing a deposition transcript, as Dan just mentioned, or something else. There's really a push-pull here, and, uh, it will only intensify as AI tools get cheaper and more capable.
Dan Schwartz: Yeah, we've certainly seen a difference even in the last year as these AI tools have gotten more sophisticated. The change is happening as we speak. So if your experience with an AI tool is from 18 or 24 months ago, it's really worth taking another look, because the technology improvements are remarkable when you think about it. You mentioned another C at the beginning, which was candor - candor to tribunals - and I think that's under Rule 3.3. What risks exist there for attorneys?
Claire Pariano: This ties back to that Wyoming case we previously discussed, but the risks are citations to non-existent opinions, inaccurate analysis, and misleading arguments.
As we mentioned, lawyers must review all AI output to ensure the accuracy of both the law and the facts, while also ensuring controlling authorities are actually cited and misleading arguments are avoided. Notably, some courts are now requiring disclosure of generative AI use in filings, so we'd recommend reviewing the local rules.
Dan Schwartz: So let's bring this back to in-house, uh, counsel and sort of building governance here. Why is this, um, particularly important for in-house lawyers now?
Claire Pariano: This is particularly important for in-house counsel because they face a unique set of responsibilities. They set policies for the entire organization, not just on legal matters.
They also manage outside counsel who may be using AI platforms such as Claude or Harvey, to name a few. And more importantly, they're often the first adopters of these tools, and they face AI governance questions that extend into HR, operations, and product development.
Dan Schwartz: So, what can in-house counsel be doing now to build an AI governance framework?
We talked about some of those ideas at the, uh, presentation.
Claire Pariano: Yes. And we've developed sort of a five-step process to follow here. First, establish a dedicated AI governance committee that crosses functions. Second, classify uses by risk - not all AI applications carry the same concerns. Third, implement verification protocols to catch errors - that means catching those hallucinations and fake citations. Fourth, establish rigorous data-handling standards. And fifth, conduct ongoing audits and training, because this landscape is changing almost on a weekly basis.
Dan Schwartz: Yeah, it, uh, it certainly is. There was another rule I was thinking about - the supervision rules under Rules 5.1 and 5.3.
So, what do in-house lawyers who may supervise others need to be mindful of?
Claire Pariano: This ties back to some of the points we've made, but firms need to establish clear policies on permissible AI use, and supervising lawyers must ensure compliance and provide adequate training. This is really important because, as we've mentioned, AI is developing at a rapid pace. And when using external AI providers, you want to investigate their reliability, liability limitations, and data retention policies.
Dan Schwartz: Yeah, those, uh, are certainly good. Well, let's close this out with some concrete best practices. What have you got for us?
Claire Pariano: Just to highlight a few. First, you wanna document your verification practices - human review is non-negotiable, and someone who isn't an AI platform still needs to review the work. You also wanna develop protocols for spotting common AI errors so you know what to look for. And you always wanna protect sensitive information by using secure platforms that don't train on your data, while also making proper disclosures when using AI - to clients, to courts, or within your organization.
Dan Schwartz: Yeah, we've been using Harvey for a number of months here at our firm, and certainly I think our proficiency continues to build. It's a newer technology, and the more you use it, the more you can hopefully spot some of its flaws. I was using it earlier today, before recording this, and it presented something very convincingly as fact, and I sort of scratched my head and went, eh, it's not quite as, uh, cut and dried as that. So I think a takeaway I have for people is to be somewhat skeptical even when the AI presents things as real. Any further thoughts, uh, Claire, before we wrap up?
Claire Pariano: Just to tie back to how we began, the core message is really that technology changes, but our obligations as legal professionals don't. Uh, we want to ensure that lawyers thrive in this new AI environment while also remembering their duties and ethical responsibilities.
Dan Schwartz: Great guidance, Claire. And really, for our listeners - whether you're in-house counsel, outside counsel, or even an HR professional working with a legal team - I think AI is a tool, but not a substitute for professional responsibility or judgment. I'm sure we're gonna hear a lot more about this.
This is, um, not a technology that's going away, and as I heard a few months ago - and I like repeating it 'cause I thought it was cool - the technology we're using now is the worst it's probably ever gonna be. So hopefully that is the case. So, Claire, thanks for joining us.
Claire Pariano: Thanks, Dan.
Dan Schwartz: And that will wrap up another episode of From Lawyer to Employer.
We really appreciate you listening. As a reminder, you can subscribe to this podcast wherever you get your podcasts, whether that's Apple Podcasts or Spotify. Feel free to leave us a review as well - that helps others find this podcast. And if you have a topic for a future episode or just wanna provide some feedback, you can always reach out to me at dSchwartz@goodwin.com.
We'll have another episode coming out soon. But in the meantime, go use your generative AI and do it responsibly. Take care.
Host: Thank you for joining us on this episode of From Lawyer to Employer, a Shipman podcast. This podcast is produced and copyrighted by Shipman & Goodwin LLP.
All rights reserved. The contents of this communication are intended for informational purposes only and are not intended to be, and should not be construed as, legal advice. This may be deemed advertising under certain state laws. Subscribe to our podcast on Spotify, Apple Podcasts, or wherever you listen. We hope you'll join us again.