Social networks including Facebook and WhatsApp have doubled server capacity as people in isolation place more voice and video calls and stream more video through social apps.
Facebook said it would place “authoritative” coronavirus content at the top of user feeds as it scrambled to keep up with increased usage and stem the flow of misinformation on its platform and its WhatsApp messaging service.
Facebook also donated $1 million to the International Fact-Checking Network to expand the presence of local fact-checkers and curb misinformation on WhatsApp, said Facebook head of health Kang-Xing Jin.
“Teams are hard at work to make sure all the services run smoothly, because this is clearly a time when people want to stay connected,” Facebook chief executive Mark Zuckerberg said while updating reporters on the company’s efforts.
“We want to make sure we do our part to alleviate loneliness.”
The information hub was built in collaboration with health organizations and will roll out in the US and Europe through Wednesday, with plans to expand it to other locations.
“Our goal is to put authoritative information in front of everyone who uses our services,” Zuckerberg said.
Facebook has been grappling with the challenge of enabling content moderators, many of them contracted through outside companies, to work from home to reduce coronavirus risk.
“This is a big one we have been focused on for the past few days,” Zuckerberg said.
“There are certain kinds of content moderation that are very sensitive, such as suicide and self-harm, and if you are working on that content for a long time it can be very emotionally challenging.”
Facebook is in the process of moving the most sensitive types of content moderation to full-time employees for now, Zuckerberg said.
“I am quite worried the isolation of people staying at home could lead to more depression or mental health issues and I want to make sure we are ahead of that with more people working on suicide and depression prevention, not less,” Zuckerberg said.
“That will cause a trade-off with content not representing imminent physical risk to people.”
Facebook will continue to use artificial intelligence systems to watch for banned content.