YouTube announced it will restrict the reach of more videos, including "borderline content and content that could misinform users in harmful ways." The platform has a poor track record of policing content.
YouTube announced on its company blog that it will curb the spread of misinformation on its platform. It defines the targeted videos as those that "misinform users in harmful ways," such as those featuring a "phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11." While it does not propose removing these videos, it says they will be suppressed and kept out of recommendations.
When users watch a YouTube video, other videos are recommended both in the right-hand column and after the video finishes playing. The new policy will allow YouTube to keep certain videos out of those recommendations. The targeted videos do not violate the terms of service; they are "borderline" in that they are considered misinformation. But much like "hate speech," that is a slippery, subjective label whose meaning depends on who is judging the content.
The blog post claims that the change "will apply to less than one percent of the content on YouTube" and clarifies that "this will only affect recommendations of what videos to watch, not whether a video is available on YouTube," pitching the initiative as a "balance between maintaining a platform for free speech and living up to our responsibility to users."
Part of what makes the implementation so dubious is that it will rely on a "combination of machine learning and real people," both of which have been accused of political bias against conservatives and of being unable to detect satire. Even more worrisome is YouTube's plan to roll the program out not only in the United States but in other countries around the world.