A summary of proposed AI rules in the US

In May 2023, the White House ignited a fervent discourse on how AI development in the US should be regulated. Although no laws or regulations currently exist to oversee AI development, momentum towards a regulatory framework has been building steadily. As key players across various sectors and political landscapes engage in these discussions, it is crucial to understand the key issues that AI rules and regulations seek to address. In this Perspective, we present a concise summary of the key dimensions outlined in a recent Vox article and provide our own commentary on each.

Copyright: A Complex Interplay between Human and AI Creation

One of the foundational issues pertains to copyright in the context of AI-generated content. The first point posits that AI-generated works cannot be copyrighted as original creations due to their non-human origin. A secondary consideration is whether companies that train AI models on copyrighted materials must disclose information about the training data they used.

If we lived in a dystopian world where AI systems ideated and then generated content entirely on their own, this might be a worthwhile direction for copyright law. But that is not the reality today. The vast majority of generative AI use cases involve a human coming up with an idea and then using an AI tool to help create the content they need. From this, a pressing question emerges: if a human uses AI tools to create content, is copyright applicable? The blurry line between human and AI creation today underscores the complexity of this issue.

From our perspective, AI tools in the medium term, whether generative or otherwise, will continue to be just that…tools. They will be used by humans to efficiently create unique content. So the counterposition we stand behind is: “Should a human creator not be able to copyright their work simply because they used an AI tool to generate it?” We think they should be able to copyright their work and not be penalized for taking advantage of the next generation of creative tooling.

Second, we believe it is excessive to require generative AI systems to disclose the copyrighted information used to train their models. In any creative process, a human will do some research or look at other materials for inspiration. More often than not, they will incorporate certain elements of those materials into their own works, regardless of whether those materials were copyrighted. If a person uses that inspiration to create content that infringes upon a copyright, they are penalized, but they are not disincentivized from seeking inspiration in the first place. We believe this situation is akin to training a generative model. Feeding training data to a model is like a person reviewing other content to develop inspiration, and such a practice should not be discouraged. If the model then creates content that infringes on a copyright, however, it is fair to say that infringement has taken place. In that case, some combination of the model developer and the person using the model should be responsible for that infringement, but therein lies even more complexity.


Privacy: Balancing Ethical Data Use with Innovation

Privacy concerns have taken center stage in the AI regulatory conversation as well, with two central points under debate: a potential ban on using personal data for targeted advertising, and data minimization, the principle that a website should collect only the data directly aligned with its purpose.

While respecting privacy rights is imperative, these propositions present enforcement challenges, and we struggle to see why they need to be resolved in the context of other AI regulation. These proposals are neither novel nor specific to AI, and they deserve to be treated consistently across the entire technology landscape. Additionally, their impact on industry giants like Google, Amazon, and Meta is significant and likely to play out over an extended period as those industries lobby policymakers and/or contest decisions in court. That battle need not hold up the other issues that we hope AI regulation can address.

Algorithmic Bias: Navigating Fairness and Accountability

The Algorithmic Accountability Act is a significant proposal aimed at mitigating algorithmic bias, a pervasive concern in AI development. Addressing bias is a priority, especially as studies indicate AI systems have demonstrated bias against women, people of colour, and other marginalized groups. One proposal is to have the FTC enforce bias evaluation, which underscores the seriousness of the issue. The FTC is a pretty big “stick” to use to keep developers on top of bias in their models, but we believe it is not only a necessary one, but also one that should be put in place before bias gets out of hand.

Mandatory Auditing: Striking a Balance between Innovation and Accountability

Proposals for mandatory model audits present a dichotomy between innovation and accountability. Audit proponents argue that model developers should be required to maintain documentation describing how a model was created, and that they should be responsible for understanding and mitigating their models’ ability to generate “dangerous outputs”. If a regulator audits a company and concludes that best practices have not been followed, the company would be subject to a fine.

We agree with the spirit behind model audits but would go one step further and require them only for “high risk” models, so as not to stymie innovation on models that pose very little risk to people. The US can draw inspiration from the EU’s risk-based model categorization, which seeks to match rigour with the risk of harm to people.

Licensing Requirements: Harnessing Control without Curtailing Innovation

The concept of model licensing raises intriguing parallels with drug testing and approval. The proposal is that models should require licenses granted by a government or regulatory body and, once approved, can be distributed to users. We believe a licensing scheme is apt for high-risk AI applications (e.g. computer vision for self-driving cars), but its overuse could lead to an ecosystem resembling big pharma, where a handful of dominant players lobby governments for influence over regulatory research and decisions. Few people would argue that this is the right direction for a rapidly growing tech market. The key to this challenge will be defining which use cases entail “high risk” so that regulators can strike the right balance between safeguarding society and fostering innovation.

Other Quick-Hitters

Several additional suggestions seek to shape the future of AI development and regulation:

  • Establishing “The National Artificial Intelligence Research Resource”, which would provide funding to research and university facilities to purchase the compute infrastructure needed to build powerful models – a great idea.

  • Creating a dedicated regulator for AI, staffed with experts to ensure effective oversight – definitely needed, but it will only be as effective as the people it is staffed with. They need to be actual domain experts who understand the technology and can hold bad actors accountable. We do not need a replay of Mark Zuckerberg testifying in front of the Senate on data privacy.

  • Transitioning regulatory guidelines for AI to NIST – an interesting proposal as NIST has proven to be capable of developing and maintaining many data & cybersecurity best practices. It remains to be seen how capable they would be of extending their expertise into all of the nuances surrounding AI.

  • A CERN-like research center for AI to foster international collaboration and progress that is not owned by any private enterprise – a great idea, especially for tackling the types of AI development that could lead to breakthroughs benefitting people around the world (e.g. cancer research).

  • An International Atomic Energy Agency (IAEA)-like body for AI to prevent the release of AI “nuclear bombs” – a dreadful topic to consider, but likely one that the world unfortunately needs.

As the conversation around AI regulation gathers momentum, informed decision-making will become paramount, and it is important that experts have a seat at the table. While the proposals above capture some essential dimensions of AI governance, striking the fine balance between innovation, ethical considerations, and societal needs will be larger in scope and will rely heavily on the details of how that governance is implemented. It is a topic we as a company follow closely, and we hope officials engage the right industry players and policymakers to build a system that allows for innovation while controlling high-risk scenarios.
