Facebook has created a chatbot to help its own staff answer difficult questions from family and friends over the holiday season.
The “Liam Bot” answers questions about how the social network handles hate speech and disinformation, and can even offer advice on helping locked-out users.
Facebook stated it was responding to requests from its employees.
In the past, staff were offered guidance on what to say to relatives via email.
Facebook told BBC News: “Our employees regularly ask for information to use with friends and family on topics that have been in the news – especially around the holidays.
“We put this into a chatbot, which we began testing this spring.”
A chatbot is a piece of software that uses artificial intelligence to carry out a conversation.
If Liam is asked how Facebook handles hate speech, it will offer the following points:
Facebook consults with experts on the matter
It has hired more moderators to police its content
It is working on AI to spot hate speech
Regulation is important for addressing these issues
The bot also offers links to company blog posts or press releases.
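The behaviour described above – matching a question to a fixed set of talking points plus a link – is the simplest kind of rule-based chatbot. A minimal sketch of that pattern is below; the topics, answers and URLs are illustrative placeholders, not Facebook's actual implementation.

```python
# Minimal rule-based FAQ bot sketch: matches keywords in a question
# to canned talking points plus a link, as the article describes.
# All topics, answers and links here are illustrative assumptions.

FAQ = {
    "hate speech": (
        [
            "Facebook consults with experts on the matter",
            "It has hired more moderators to police its content",
            "It is working on AI to spot hate speech",
            "Regulation is important for addressing these issues",
        ],
        "https://about.fb.com/news/",  # placeholder link
    ),
    "locked out": (
        ["Point them to the account recovery page and its prompts"],
        "https://www.facebook.com/help/",  # placeholder link
    ),
}

def answer(question: str) -> str:
    """Return canned talking points for the first matching topic."""
    q = question.lower()
    for topic, (points, link) in FAQ.items():
        if topic in q:
            bullets = "\n".join(f"- {p}" for p in points)
            return f"{bullets}\nMore info: {link}"
    return "Sorry, I don't have talking points on that yet."

print(answer("How does Facebook handle hate speech?"))
```

A production bot would use fuzzier matching (or a trained intent classifier) rather than exact keyword lookup, but the canned-answer structure is the same.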
Facebook has faced a series of controversies over the past few years, including questions about the role it plays in elections, with the spread of fake news and disinformation.
It is also struggling to recover its reputation in the wake of the Cambridge Analytica scandal, which saw the data of millions of users harvested without consent.
The reaction on Twitter was mixed, with some describing the chatbot as “dystopian” and “sad”.
The New York Times, which was the first to break the story, tweeted: “Friends and family can ask tough questions over the holiday season about where you work – especially if you work at Facebook.”
John Thornhill, an innovation editor at the Financial Times, tweeted: “You couldn't make it up.”