SoundCloud’s AI TOS Controversy: A Wake-Up Call for Lawyers on Generative AI, ABA Model Rules, and Client Data Security 🛡️🤖

The recent uproar over SoundCloud’s Terms of Service (TOS) update—initially allowing user content to train artificial intelligence—should be a clarion call 📣 for legal professionals. While the music industry’s outrage was swift, the lessons for attorneys run deeper, especially as generative AI tools become woven into the fabric of law practice. This episode underscores the urgent need for lawyers to understand the terms of every digital tool they use, not just for convenience but to uphold their ethical duties under the ABA Model Rules and to protect their clients’ confidential and personally identifiable information (PII).

SoundCloud’s TOS update, which quietly granted the company broad rights to use user-uploaded music for AI training, raised fundamental questions about ownership and control. Musicians and creators were alarmed that their original works could be harvested to develop AI models—potentially even those that mimic or replace their creative output—without explicit consent or compensation. The controversy highlighted not only the risk of artists’ music being used to train generative AI but also the ambiguous ownership of derivative works that might result from such training. This concern over intellectual property rights and loss of control is a stark parallel to the legal profession’s own obligations to protect sensitive information in the digital age.

ABA Model Rules: The Foundation for Ethical Tech Use

Competence (Model Rule 1.1):
Lawyers must provide competent representation, which now includes technological competence. Comment 8 to Rule 1.1 specifically requires attorneys to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology”. The ABA’s Formal Opinion 512 reinforces that lawyers must understand the capabilities and limitations of generative AI tools before using them in their practice.

Confidentiality (Model Rule 1.6):
The duty of confidentiality is paramount. Rule 1.6 and its comments require lawyers to safeguard client information against unauthorized access or disclosure, including when using third-party technology like generative AI. The ABA’s recent guidance warns that self-learning AI tools, which may retain and reuse input data, pose a heightened risk of inadvertent disclosure—even within a closed system. Public generative AI platforms, in particular, can expose client data to prying eyes, including the opposition or other users.

Communication (Model Rule 1.4):
Lawyers must communicate effectively with clients about the use of AI tools, including any risks to confidentiality or data security. Informed consent is critical, especially when client data could be processed or stored in jurisdictions with different privacy laws.

Reasonable Fees (Model Rule 1.5):
While AI can enhance efficiency, lawyers must ensure that fees for AI-assisted work are reasonable and transparent. The ABA clarifies that time spent learning to use AI tools generally should not be billed to clients.

Lessons from The Tech-Savvy Lawyer.Page

The Tech-Savvy Lawyer.Page has been at the forefront of discussing these risks. In the blog post “Can Lawyers Ethically Use Generative AI with Public Documents? Navigating Competence, Confidentiality, and Caution,” the duty of confidentiality is highlighted as a significant challenge when using public generative AI tools. Even anonymized client data may risk exposure if the context allows for re-identification. My editorials and podcast episodes consistently urge lawyers to:

  • Vet AI platforms for robust data protection and clear, restrictive TOS.

  • Avoid inputting client PII into public AI tools without explicit, informed consent.

  • Stay educated on evolving bar association guidance and best practices for AI in law.

Podcast interviews, such as with Jayne Reardon, reinforce that even “whitewashed” data can be problematic, as the context may still reveal confidential client details. The Tech-Savvy Lawyer.Page Podcast regularly features experts discussing how to balance AI’s benefits with the profession’s ethical imperatives, echoing the same concerns that musicians voiced about the use and ownership of their creative works on SoundCloud.

The SoundCloud Example: Ownership, Control, and Risk

SoundCloud’s TOS update, which initially allowed the platform to use artists’ music to “inform, train, develop, or serve as input to artificial intelligence or machine intelligence technologies,” brought to light the risk that creators might lose control over how their work is used and who ultimately benefits from it. For musicians, the fear was that their own songs could be used to train AI models that would then generate similar music, possibly undermining their careers and eroding their ownership rights. The lack of transparency and the default opt-in nature of the policy further eroded trust.

This is directly analogous to the legal field, where attorneys must ensure that client documents and confidential information are not used to train AI models in ways that could compromise client interests or ownership of legal work product. Just as SoundCloud’s artists demanded clarity and control over their music, lawyers must demand the same over their data and intellectual property.

Why This Matters for Lawyers

The legal profession is rapidly adopting generative AI tools for research, drafting, and client communications. Many of these tools, whether standalone or embedded in cloud-based platforms, operate under terms of service that grant providers broad rights to access, analyze, and even use uploaded data to train their AI models. If attorneys do not scrutinize these terms, they risk inadvertently exposing privileged information, work product, or client PII to unauthorized access or use by the AI provider—or worse, by third parties.

Unlike musicians, whose primary concern is copyright and creative control, lawyers have an ethical and legal duty to protect client confidentiality. The American Bar Association’s Model Rules of Professional Conduct require attorneys to make reasonable efforts to prevent unauthorized disclosure of client information. Using a generative AI tool with permissive or ambiguous TOS could violate these duties, especially if sensitive documents are uploaded without clear assurances that the data will not be used for AI training or shared beyond the intended scope.

Recent incidents highlight the urgency of this issue. In 2024, Utah’s court system implemented MyCase, a case management platform that claimed ownership of all user-generated data, including attorney work product and client communications, while disclaiming liability for breaches. Similarly, Google’s 2025 Local Services Ads policy update asserted ownership over law firms’ client intake data, including call recordings and messages, which could be analyzed by AI without explicit consent (see my editorial MTC: Google’s Claim Over LSA Client Intake Recordings: Why Lawyers Must Rethink Cloud Service Risks in 2025 ⚖️☁️). These cases mirror broader industry trends, such as Vultr’s short-lived attempt to claim perpetual commercial rights to cloud-hosted content and Slack’s controversial data export fees. For lawyers, these examples demonstrate how easily confidential information can fall under third-party control—often through clauses buried in updates to terms of service. The ABA’s Formal Opinion 512 and analysis from The Tech-Savvy Lawyer.Page emphasize that such risks are not theoretical, urging practitioners to treat TOS reviews as a core component of client protection.

Security and the Adversarial Risk

One of the gravest risks is that information uploaded to a generative AI platform could be accessed or inferred by the opposition or other unauthorized parties. If an AI provider uses your data to train its models, there is a risk—however small—that elements of your confidential work could surface in responses to other users. This is not a hypothetical concern; it has already happened in other industries, and the consequences for legal practice could be catastrophic.

Practical Steps: Protecting Clients and Your Practice

SoundCloud’s TOS incident is a stark reminder: always read and understand the terms of service for any platform or AI tool before uploading client data. The New Solo podcast episode “AI And The Terms Of Service. Know What You Are Sharing!” echoes this advice, emphasizing that lawyers—even those who draft TOS for others—often neglect to scrutinize these agreements for their own practice. Key questions include:

  • Who owns the data entered into the platform?

  • What rights does the provider claim over your data?

  • Is your data used solely to deliver the service, or is it also used to train AI models?

  • What safeguards are in place to prevent unauthorized access or disclosure?

As with the SoundCloud controversy, where artists discovered their music could be used for AI training without their knowledge, lawyers must be vigilant to prevent their confidential work from being similarly exploited.

The Adversarial Risk: More Than Just Theory

The risk is not hypothetical. As detailed in ABA Formal Opinion 512 and multiple bar association opinions, generative AI can inadvertently disclose confidential information through its self-learning capabilities—even within a law firm’s own closed system. If a provider’s TOS allows broad use of your data, there’s a real danger that sensitive client information could be exposed to third parties, including opposing counsel. In the same vein, remember when an online law practice management provider tried to claim ownership of the data users put into its system? Know your terms of service!

Final Thoughts: Vigilance Is Essential

The SoundCloud TOS controversy is a pivotal lesson for the legal profession. Lawyers must not only embrace technology but do so with a clear-eyed understanding of their ethical obligations. The ABA Model Rules, recent formal opinions, and thought leadership from The Tech-Savvy Lawyer.Page all point to the same conclusion: due diligence, transparency, and ongoing education are non-negotiable in the age of AI. Just as artists must protect the ownership and use of their creative works, attorneys must safeguard client data and their own legal work product from becoming fodder for AI models without consent.