In the fight against misinformation, YouTube is considering blocking the sharing of videos with borderline content on other sites.

This emerges from an official blog post (http://bit.ly/33u20v9) on dealing with misinformation. According to the post, the platform is also looking for ways to nip new misinformation in the bud before it has a chance to spread virally.

“Break” links

Not all nonsense violates YouTube's guidelines so clearly that the video is simply deleted.
For such borderline videos, YouTube tries to minimize their visibility on the platform, said Neal Mohan, Chief Product Officer of YouTube, in the blog post. "But even if we don't recommend a particular borderline video, it may still receive views from other websites that link to or embed a YouTube video," he writes. YouTube is therefore considering what additional steps could minimize the reach of such videos. One option would be to deactivate the share button or "break" the links so that they no longer lead to the video, which would also make embedding impossible. "But we grapple with whether preventing shares may go too far in restricting a viewer's freedoms," says Mohan. After all, sharing is an active decision by the user. In addition, context-dependent exceptions would then be needed, for example for linking to borderline content as part of a critical scientific discussion or news reporting. An alternative could be to display warnings before borderline videos are played.

Don't let it go viral at all

Dealing with the sharing of borderline content is, of course, only one aspect of the fight against misinformation.
However, Mohan emphasizes that this fight is made harder today because new, potentially dangerous conspiracy theories emerge much more quickly. One example is the claim that 5G contributes to the spread of the coronavirus, which led to arson attacks on cell towers in some countries. Ideally, such nonsense would be identified early, before it can spread virally, but that is not a trivial problem. "Not every future, fast-moving narrative will have expert advice that can influence our policies," Mohan explains. In addition, for completely new false information there is initially hardly any training data with which YouTube could train its detection algorithms. To achieve greater success against fake news worldwide, YouTube wants, among other things, to rely more heavily on classifiers and keywords in additional languages. Input from regional analysts is also meant to help detect previously poorly recognized false narratives more quickly.

Source: pressetext.com

Notes:
1) This content reflects the state of affairs at the time of publication. Individual images, screenshots, embeds, or video sequences are reproduced for the purpose of discussing the topic.
2) Individual articles were created with the use of machine assistance and were carefully checked by the Mimikama editorial team before publication.