On the same day in the fall of 2021 that whistleblower Frances Haugen testified before Congress about the harm Facebook and Instagram cause to children, Arturo Bejar, then a contractor at the social media giant, sent an alarming email to Meta CEO Mark Zuckerberg about the same subject.
In the note, as first reported by The Wall Street Journal, Bejar, who worked as an engineering director at Facebook from 2009 to 2015, outlined a “critical gap” between the way the company approaches harm and the way the people who use its products – especially young people – experience it.
“Two weeks ago, my daughter, 16, and an experimenting creator on Instagram, made a post about cars, and someone commented, ‘Get back to the kitchen.’ It was deeply upsetting to her,” he wrote. “At the same time, the comment is far from violating policy, and our tools of blocking or deleting mean that this person will simply move on to other profiles and continue spreading misogyny. I don’t think policy/reporting or having more content review are the answers.”
Bejar believes Meta should change the way it polices its platforms, with a focus on addressing harassment, unwanted sexual advances and other bad experiences, even if these issues do not clearly violate existing policies. For example, sending vulgar sexual messages to children isn’t necessarily against Instagram’s rules, but Bejar said teens should have a way to tell the platform they don’t want to receive these types of messages.
Two years later, Bejar is testifying Tuesday before a Senate subcommittee on social media and the teen mental health crisis, hoping to shed light on how Meta executives, including Zuckerberg, knew about the harm Instagram was causing but chose not to make meaningful changes to address it.
“I can safely say that Meta executives knew the harm teens were experiencing, that there were things they could do that were very doable and they chose not to do them,” Bejar told The Associated Press. This, he said, makes it clear that “we cannot trust them with our children.”
At the opening of the hearing Tuesday, Sen. Richard Blumenthal, a Connecticut Democrat who chairs the Senate Judiciary Subcommittee on Privacy, Technology and the Law, introduced Bejar as an engineer who is “widely respected and admired in the industry” and who was hired specifically to help prevent harm to children, but whose recommendations were ignored.
“What you brought to this committee today is something every parent needs to hear,” added Sen. Josh Hawley of Missouri, the panel’s top Republican.
Bejar points to user perception surveys showing, for example, that 13% of Instagram users between the ages of 13 and 15 reported receiving unwanted sexual advances on the platform within the past seven days.
In his prepared remarks, Bejar is expected to say he does not believe the reforms he proposes would have a significant impact on revenue or profits for Meta and its peers. The changes are intended not to punish the companies, he said, but to help teenagers.
“You heard the company talking about it: ‘oh, this is really complicated,’” Bejar told the AP. “No, that’s not true. Just give the teen a chance to say ‘this content isn’t for me,’ and then use that information to train all the other systems and get feedback that makes it better.”
The testimony comes amid a bipartisan effort in Congress to pass regulations aimed at protecting children online.
Meta said in a statement: “Every day, countless people inside and outside Meta are working on how to keep young people safe online. The issues raised here regarding user perception surveys highlight some of this effort, and surveys like these have prompted us to create features such as anonymous notifications of potentially offensive content and comment alerts. Working with parents and experts, we’ve also introduced more than 30 tools to support teens and their families in having safe, positive experiences online. All this work continues.”
As for unwanted material users see that doesn’t violate Instagram’s rules, Meta points to its 2021 “Content Distribution Guidelines,” which state that “problematic or low-quality” content will automatically receive less distribution in users’ feeds. This includes clickbait, fact-checked misinformation, and “borderline” content, such as a “photo of a person posing in a sexually suggestive manner, speech that includes profanity, borderline hate speech, or gory images.”
In 2022, Meta also introduced “kindness reminders” that tell users to be respectful in their direct messages – but these only apply to users who send message requests to creators, not to regular users.
Bejar’s testimony comes just two weeks after dozens of US states sued Meta for harming young people and contributing to the youth mental health crisis. The lawsuits, filed in state and federal courts, allege that Meta knowingly designs features on Instagram and Facebook that addict children to its platforms.
Bejar said it is “absolutely essential” that Congress passes bipartisan legislation “to ensure that there is transparency about these harms and that teens can get help” with the support of the right experts.
“The most effective way to regulate social media companies is to require them to develop metrics that allow both the company and outsiders to evaluate and track instances of harm as experienced by users. This plays to the strengths of what these companies can do, because to them data is everything,” he wrote in his prepared testimony.