
Our Blog

Find the latest updates of Link3 in our blog.

Facebook Can Pin Point Terrorists

Mark Zuckerberg says Facebook is developing artificial intelligence software that can spot not only terrorism but also violence and bullying, and even help prevent suicide. Although some such content has already been removed from the site, the Facebook founder said it would take years for the necessary algorithms to be developed.

In a 5,500-word letter, he discussed the fact that it is impossible to review the billions of posts and messages that appear on the platform every day. “The complexity of the issues we’ve seen has outstripped our existing processes for governing the community,” he said.

He specifically mentioned the removal of a video related to the Black Lives Matter movement and of the historic “napalm girl” photograph from Vietnam as “errors” in the existing process.

In 2014, Facebook was heavily criticized following reports that one of the killers of Fusilier Lee Rigby had spoken online about murdering a soldier, months before the attack.

“We are researching systems that can read text and look at photos and videos to understand if anything dangerous may be happening.

“This is still very early in development, but we have started to have it look at some content, and it already generates about one third of all reports to the team that reviews content.”

“Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda.”

Mr. Zuckerberg said the algorithms would work alongside giving users the freedom to post whatever they like, within the law. Posts they do not want to see, users can filter out of their news feed.

He also explained, “Where is your line on nudity? On violence? On graphic content? On profanity? What you decide will be your personal settings.

“For those who don’t make a decision, the default will be whatever the majority of people in your region selected, like a referendum.

“It’s worth noting that major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more.

“At our current pace of research, we hope to begin handling some of these cases in 2017, but others will not be possible for many years.”

The idea was welcomed by the Family Online Safety Institute, a member of Facebook’s own safety advisory board.



For Support
SMS us at 01708811111 with your Link3 ID.