LONDON — British lawmakers quizzed Facebook on Thursday about its online safety practices as European countries try to limit the influence of social media companies.

Facebook’s safety chief said that the company supports regulation and does not have a business interest in offering people an “unsafe experience.”

Representatives from Google and Twitter also answered questions before the parliamentary committee, which is scrutinizing draft legislation the British government has proposed to combat harmful online content. The session came just days after U.S. legislators heard testimony from the companies, which offered little commitment to American legislation that would strengthen protections for children online.

There is a push on both sides of the Atlantic for stronger rules to protect social media users, particularly younger ones, but the United Kingdom's effort is further along. U.K. lawmakers are questioning journalists, tech executives and researchers ahead of a report to the government on how to improve the final version of the online safety bill. The European Union is also developing digital rules.


Antigone Davis, Facebook’s head of global safety, addressed British lawmakers via videoconference. She defended the company’s handling of internal research into how its Instagram photo-sharing platform can harm teens, including by encouraging eating disorders and even suicide.

Damian Collins, the lawmaker in charge of the committee, asked: “Where does it end?”

Davis said those decisions are made by a team of experts working together, and that the company has no business interest in offering people a bad or unsafe experience.

Davis said Facebook is largely supportive of the U.K. safety legislation and is interested in regulation that allows publicly elected officials to hold the company accountable.

She also pushed back against Facebook critics who say the platform amplifies hate.

Collins asked, “Did Facebook not amplify hatred?”

Davis replied, “Correct,” though she added that she could not say the platform has never recommended anything that might be considered hateful. “I can only say that we have AI that is designed to detect hate speech.”

She declined to say how much of that dangerous content the AI systems actually detect.

Frances Haugen, a Facebook whistleblower, told the committee on Wednesday that the company’s systems make online hate worse and that it has little incentive to fix the problem. She said there is an urgent need to regulate social media companies that use artificial intelligence systems to decide what content users see.

Haugen, a former Facebook data scientist, copied internal research documents and turned them over to the U.S. Securities and Exchange Commission. The documents were also provided to a number of media outlets, including The Associated Press, which reported numerous stories about how Facebook prioritized profits over safety and hid its research from investors and others.

In one of many pointed exchanges before the parliamentary panel, Scottish lawmaker John Nicolson told Davis that Facebook is an abuse facilitator that responds only to threats, whether from bad publicity or from companies such as Apple that endanger its financial security.

Lawmakers demanded that Facebook provide its data to independent researchers so they could examine the potential dangers of its products. Facebook said it has concerns about privacy and how such data would be shared.

Collins, the committee’s chairman, responded that “it’s not up to Facebook to set parameters around research.”

The U.K.’s online safety legislation calls for a regulator to ensure tech companies comply with rules requiring them to remove harmful or dangerous content, or face fines of up to 10% of their annual global revenue.

British lawmakers are still grappling with thorny issues such as privacy and free speech and how to define content that is legal but harmful, including online bullying and the advocacy of self-harm. They are also trying to work out how to rein in misinformation on social media.

Representatives of Google and YouTube, who also appeared virtually on Thursday, urged U.K. lawmakers to change what they called an overly broad definition of online harms. The tone of lawmakers’ questions, however, was less harsh than that faced by Facebook.