
After much speculation, Facebook has imposed restrictions on live-streaming following the New Zealand attacks in March.

On Tuesday, the company announced it will implement a "one strike" policy that restricts anyone who violates the social network's community standards from using Facebook Live.

Users who violate the network's most serious policies will be barred from using Live for a set period of time, starting from their first offence. One example of an offence is a user who "shares a link to a statement from a terrorist group with no context."

Guy Rosen, Facebook's vice president of integrity, said in a blog post that the company's goal was "to minimize risk of abuse on Live while enabling people to use Live in a positive way every day."

Rosen said these restrictions will be extended to other areas of the platform over the next few weeks, beginning with a ban on offending users taking out ads.

SEE ALSO: Whistleblower says Facebook's algorithms generate extremist videos

Previously, Facebook simply took down content that violated its community standards; users who kept posting violating content were blocked from the entire platform for a period of time, and some were banned altogether.

The restrictions apply to individuals Facebook considers "dangerous" under an updated definition in Facebook's Community Guidelines, a change that led to bans for a host of controversial public figures, including Alex Jones, Nation of Islam leader Louis Farrakhan, Milo Yiannopoulos, and others.

In addition to these new live-streaming restrictions, Facebook also said it's investing in research to prevent incidents like the rapid spread of the Christchurch shooter video, which was modified in order to avoid detection and allow reposting.

The company will invest $7.5 million in a partnership with three universities: the University of Maryland, Cornell University, and the University of California, Berkeley.

The money will fund research into improved detection of manipulated images, video, and audio, which could also help address problems such as deepfakes.



Topics: Cybersecurity, Facebook, Social Media
