Detail of a high rise in Montreal. By Phil Deforges at https://unsplash.com/photos/ow1mML1sOi0

Bill C-27 and AI in Content Moderation: The Good, The Bad, and The Ugly

Through Bill C-27, the Canadian Federal Government has signaled a desire to update our legislative framework to account for the radical changes technology has had on our society. But is it enough?

With an ever more digitized economy, companies are looking for innovative ways to weave artificial intelligence (AI) into their business practices for economic gain. A particularly widespread use of AI has been in the realm of content moderation, whereby companies "police" the content on their platforms through a combination of AI and human intervention. Owing to the relative newness of AI and machine-learning algorithms, the implementation of AI for content moderation has had advantages and disadvantages in equal measure.

A current advantage to companies is that there is little oversight internationally regarding the deployment of AI, as governments have largely left this to the private sector. Canada, for its part, has in the last year spearheaded the regulation of AI through Bill C-27, which includes, inter alia, the Artificial Intelligence and Data Act (AIDA), a swath of regulations aimed at modernizing legislation to account for the impacts of AI. While much has been written about the ways in which Bill C-27 will affect other areas of Canadian law, little commentary exists at this point on the importance of bringing AI used for content moderation within AIDA.

With this in mind, it remains crucial to understand how AI can be used for content moderation. Whether or not AIDA will apply to this type of AI, much suggests that it should. To that effect, this blog post will argue for including AI used for content moderation within the ambit of AIDA. It will examine the benefits and detriments of AI implementation for content moderation before moving on to why it should be regulated. Such scrutiny is warranted not least because AI employed in this particularly niche way can have a detrimental effect on many crosscutting legal issues.

AI In Content Moderation: The Good

Social media, by every indication, is here to stay. With a continuously growing user base comes a rise in user-generated content (UGC) as users engage in communities of their choice. A noted issue with this rise in UGC is the sheer volume of material that needs to abide by community guidelines. Additionally, many human content moderators have described negative employment experiences and the trauma they endure from exposure to graphic content. This has risen to the point where some have taken legal action over the lack of support.

It is thus for this dual purpose, the volume and the nature of the content, that many advocate for the use of AI in content moderation. Several companies already employ AI in this way through a common workflow: a hybrid human/AI system, as researchers have noted. Here, the AI is engaged in the pre-moderation process, in the creation of training data, and in assisting human moderation, where it can prove invaluable. Facebook has touted its ability to combat a host of UGC that runs counter to community guidelines, including spam and terrorist propaganda, using AI.
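To make that workflow concrete, the sketch below models a hybrid human/AI pipeline in Python: an automated scorer pre-moderates each post, clear violations are removed, uncertain cases are escalated to a human reviewer, and the human's decision is stored as new training data. The scorer, thresholds, and blocklist are purely illustrative assumptions, not any platform's actual system.

```python
# Illustrative sketch of a hybrid human/AI moderation pipeline.
# The classifier, thresholds, and blocklist are simplified assumptions,
# not any platform's actual system.
from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: int
    text: str


@dataclass
class ModerationQueue:
    """Posts the AI is unsure about, awaiting human review."""
    pending: list = field(default_factory=list)


# Hypothetical keyword list standing in for a trained model.
BLOCKLIST = {"spam-link", "buy-followers", "terror-propaganda"}


def ai_score(post: Post) -> float:
    """Return a crude 'violation likelihood' between 0 and 1."""
    words = set(post.text.lower().split())
    hits = len(words & BLOCKLIST)
    return min(1.0, hits / 2)


def pre_moderate(post: Post, queue: ModerationQueue,
                 remove_threshold: float = 0.9,
                 review_threshold: float = 0.4) -> str:
    """AI pre-moderation: auto-remove clear violations, escalate
    uncertain cases to human reviewers, publish the rest."""
    score = ai_score(post)
    if score >= remove_threshold:
        return "removed"            # clear violation, no human needed
    if score >= review_threshold:
        queue.pending.append(post)  # ambiguous case: a human decides
        return "escalated"
    return "published"


def human_decision(post: Post, violates: bool, training_data: list) -> str:
    """Human review; the labelled outcome becomes new training data."""
    training_data.append((post.text, violates))
    return "removed" if violates else "published"


if __name__ == "__main__":
    queue, training_data = ModerationQueue(), []
    posts = [
        Post(1, "Check out my new song!"),
        Post(2, "buy-followers cheap spam-link here"),
        Post(3, "spam-link in bio"),
    ]
    for p in posts:
        print(p.post_id, pre_moderate(p, queue))
    for p in queue.pending:
        print(p.post_id, human_decision(p, violates=True,
                                        training_data=training_data))
```

Even in this toy version, the design choice is visible: the AI absorbs the bulk of the volume, while humans only see the ambiguous remainder and, in doing so, generate the labels the system trains on.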

AI In Content Moderation: The Bad

Despite the prevalence and purported success of AI in moderating content, there remain negative aspects. Facebook has already acknowledged that its AI is weaker at detecting hate speech, bullying and harassment, with the latter being picked up by the AI only 14% of the time. YouTube, which uses AI for copyright management through its "Content ID" program, points to the program's track record as proof that the system works. However, this fails to consider the negative impacts of the system on, for example, content creators whose livelihood depends on the YouTube platform, such as Rick Beato and his song-analysis video series, as well as many other artist-creators.

On a macro level, it is clear that an area where AI is lacking is nuance. For example, many content creators who use copyrighted materials on YouTube have Content ID claims placed on their videos, in spite of some uses perhaps falling within the ambit of fair use under the copyright regime in the United States. A working group (TWG) at the University of Pennsylvania recently demonstrated that AI's comprehension of nuance, context, and differences between cultures is flawed.
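To illustrate that lack of nuance, the toy example below (a deliberately naive substring matcher in Python, not YouTube's actual Content ID system) flags a short quoted excerpt used for commentary, which might well qualify as fair use, in exactly the same way as a full re-upload, because it cannot weigh purpose, amount used, or market effect.

```python
# Toy illustration of why context-blind matching lacks nuance.
# This is NOT Content ID; it is a deliberately naive matcher.

COPYRIGHTED_LYRICS = "we will we will rock you"


def naive_match(upload_text: str, reference: str = COPYRIGHTED_LYRICS,
                min_overlap: int = 3) -> bool:
    """Flag an upload if any 3-word window of the reference appears,
    regardless of how short the excerpt is or why it was used."""
    ref_words = reference.split()
    windows = [" ".join(ref_words[i:i + min_overlap])
               for i in range(len(ref_words) - min_overlap + 1)]
    text = upload_text.lower()
    return any(w in text for w in windows)


if __name__ == "__main__":
    # A critique quoting a few words for commentary is flagged
    # exactly like a full re-upload.
    review = "In this analysis we look at why 'we will rock you' endures."
    reupload = "full track: we will we will rock you (official audio)"
    print(naive_match(review))    # True -- over-flagged despite commentary
    print(naive_match(reupload))  # True
```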

AI In Content Moderation: The Ugly (For Society)

It is in these shortcomings of AI that regulation must come in. It has been noted that internal biases held by the programmers of AI are pressing, difficult issues within content moderation. Fairness thus becomes a further challenge, as those internal biases are difficult to regulate and burdensome to monitor from an oversight perspective.

Doing nothing, and leaving content moderation to the private sector, should not be an option either. A purely market-focused system of content moderation presents a struggle for democracy. In an increasingly digital political economy, users can simply migrate to other platforms, which contributes to growing polarization and social divides. Alternatively, simply mandating that companies do better risks widening the economic divide between the major companies, who can afford to "do better", and those who cannot.

AIDA: as "dolce" as it should be?

Based on the proposed framework of AIDA, it remains unclear whether it will apply to this specific kind of AI. This is owing, in part, to how the regulations only apply to "high-impact systems" within the meaning of S. 5(1), a self-assessed criterion under S. 7. Self-assessment is not ideal, as corporations could structure their AI such that their systems do not fall within AIDA. Furthermore, as commentators have noted, AIDA's scope for qualifying "harm" within the meaning of S. 5(1) is narrow, and could be interpreted as excluding harm suffered by communities, focusing instead on the harm suffered by the individual. Companies themselves recognize the importance of avoiding harm. However, leaving the determination of harm to the private sector would be imperfect at best, as "many Internet companies were never designed for [adjudicating content-related issues]".

Where does this leave us? Content moderation, on its own, is a uniquely fickle issue that engages fundamental constitutional concerns, including freedom of expression. This issue is compounded by the inclusion of AI, which adds further complexity. In addition, internalized biases obscure how efficient AI truly is at content moderation, meaning its errors and blind spots will persist.

To that effect, whether or not AIDA will apply to this type of AI, which has macro-level implications, the better arguments support the view that it should. AIDA should thus be amended to recognize, in clear terms, the dangers associated with AI in the moderation of content. In addition, the legislation should be amended to expand the scope of "harm" to recognize the harm suffered by communities. Finally, AIDA should expand its scope to apply to more AI systems beyond those deemed "High-Impact". In introducing AIDA as part of Bill C-27, the federal government has indicated a desire to innovate our legislative framework to include AI. Owing to the expansive scope and dangers of AI in content moderation, the government should ensure the law helps move society and innovation forward in equal step.