EU AI Act Code of Practice and Copyright Challenges
Debates around ethical AI continue to dominate the tech sector. One piece of news that may have slipped under the radar, however, is the rollout of the EU AI Act, which tech companies are watching closely.
Following months of negotiations, confusion and pushback, rumours that the Act could be paused to make way for simplified rules are gaining traction. Whilst the Act came into force in 2024, with full implementation expected to take shape gradually over the coming years, one specific component deserves a closer look: the Code of Practice and its implications for copyright compliance.
For organisations developing AI solutions such as network optimisation and edge computing applications, reviewing the Code of Practice closely is essential to protecting intellectual property and safeguarding people without letting technological innovation run unchecked. As the wireless tech sector deploys more sophisticated AI systems and products, it’s prudent to understand the (albeit vague and ambiguous) copyright compliance challenges afoot and how, if not properly managed, they could land businesses in hot water.
Ambiguity in Copyright Compliance
Without being too alarmist, the draft Code of Practice fundamentally misrepresents EU copyright law. Rather than establishing clear legal obligations, it portrays copyright compliance as a matter of respecting expectations, dubbed “reasonable efforts”.
The draft Code’s treatment of copyright compliance mirrors the ambiguity that has long plagued data protection: it requires AI providers to demonstrate lawful access to training data, yet stops short of mandating comprehensive documentation.
This ambiguity essentially leaves organisations to determine for themselves what constitutes appropriate and reasonable due diligence.
For companies utilising AI and training it on vast data sets, this vagueness is a setback. The lack of specific guidance on data lineage documentation mirrors challenges that the technology sector has faced with other regulatory frameworks such as GDPR, antitrust laws and Section 230 of the Communications Decency Act, where unclear standards have led to inconsistent and poorly managed implementation.
Loopholes and Complexity
The Code’s copyright provisions complicate the deployment of AI systems. The European Commission has advised tech companies to follow the 2019 Copyright Directive (specifically Articles 3 and 4), which permit text and data mining by research institutions with lawful access to copyrighted materials. Similar privileges extend to commercial entities unless the original creators have opted out.
That said, these provisions may conflict with copyright frameworks in other jurisdictions. As law firm Hassans have noted, striking the right balance between driving innovation in business and ensuring AI serves an ethical and responsible purpose (with human oversight and supervision) requires regulatory clarity. Businesses operating across multiple jurisdictions, in sectors where AI usage will only grow, need practical compliance guidance they can follow with confidence. For wireless technology companies in particular, training models for automation on global datasets means understanding and adhering to variations in international AI law, which adds significant complexity to any deployment strategy.
Given the increasing popularity of generative AI models capable of producing images, music, books, videos, text and other content, copyright infringement lawsuits may become more common than anticipated. Copyright frameworks and image licensing terms vary across jurisdictions, and the Code does not adequately resolve these cross-border concerns.
The ‘SME Exception’
Perhaps most concerning is the Code’s differential treatment of small and medium enterprises (SMEs). Exempting SMEs from key performance indicators (KPIs) around copyright compliance creates a two-tiered structure that potentially rewards non-compliance amongst smaller players while creating additional hurdles for larger organisations.
The distinction overlooks the well-documented aggressive data collection practices that frequently originate from smaller tech enterprises operating with venture capital (VC) backing. The size of a business doesn’t correlate with the scope or risk of its AI training practices, but the Code’s SME loophole suggests otherwise. Established tech companies investing substantial resources in AI compliance may face more red tape, creating an uneven playing field that ostensibly empowers smaller competitors to ‘bend the rules’.
Building Resilient Frameworks
Speculation persists about potential simplifications to the AI Act, but technology organisations cannot afford to wait months or even years for regulatory certainty before addressing copyright compliance. The best course of action is to integrate thorough copyright assessments into existing AI risk management processes as soon as possible.
This means moving beyond the Code’s vague requirement to implement "policies preventing copyright infringement" and towards concrete practices: at minimum, specific procedures for assessing third-party datasets, validating training data, monitoring outputs for copyright issues, and notifying rights holders. For organisations developing AI applications in wireless and IoT domains, governance frameworks should account for the specific datasets relevant to their sectors and the different copyright implications each one presents.
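In practice, dataset assessment and data lineage documentation can start with something as simple as a machine-readable provenance record per training dataset. The sketch below is purely illustrative, and every field name and the `DatasetRecord` structure are hypothetical rather than anything prescribed by the Code or the Directive:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class DatasetRecord:
    """Provenance entry for one training dataset (illustrative schema)."""
    name: str
    source: str                 # where the data was obtained
    licence: str                # licence or legal basis for use
    tdm_opt_out_checked: bool   # was a rights-reservation (opt-out) check done?
    lawful_access_basis: str    # e.g. "open licence", "contract", "public domain"
    reviewed_on: date


def flag_for_review(records: list[DatasetRecord]) -> list[str]:
    """Return the names of datasets whose documentation is incomplete."""
    return [
        r.name
        for r in records
        if not r.tdm_opt_out_checked or not r.licence or not r.lawful_access_basis
    ]
```

A record like this would not establish compliance on its own, but it gives legal and engineering teams a shared artefact to audit when regulators or rights holders ask how a model was trained.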
Where Do We Go From Here?
The final version of the AI Act’s Code of Practice may look dramatically different to its current iteration. However, the evergreen challenge remains: technology businesses must deploy AI systems that drive innovation while respecting copyright, all within a rapidly evolving and often conflicting regulatory field.
The wireless tech community has successfully navigated complex regulatory environments and changes before. Addressing AI copyright compliance demands the same proactive, strategic and methodical stance. Organisations that continually reassess their AI deployment strategies against new and evolving copyright laws will be in a strong position when clarity inevitably emerges.