The Buffalo, N.Y., shooter used tech platforms to plan his mass murder of 10 Black people in a Tops supermarket. Now is the time to hold tech companies accountable for the shooting, because they bear some responsibility for a toxic, racist and deadly environment.
Every step of the way, Payton Gendron relied on tech websites and chatrooms to plan and carry out his murderous rampage. Shortly before the mass shooting, he sent a chatroom invitation on the messaging platform Discord to 15 users. Those who accepted his invitation had access to months of his extremist racist writings and his plans to attack the Tops supermarket in Buffalo, as well as a livestream of his attack on Twitch.
Videos of the shooting went viral, aided
and abetted by Twitter.
And the shooter was radicalized and cut his teeth in the racist cesspool of 4chan, which, along with 8chan, has been the online platform of choice for other white supremacist terrorists in the U.S. and around the world.
Gendron drew inspiration for his manifesto from Brenton Tarrant, who used 4chan and 8chan and killed 51 people in two mosques in New Zealand. Tarrant, in turn, borrowed heavily from anti-immigrant mass murderer Anders Breivik, who killed 77 people in Norway.
How responsible are these tech companies? New York Gov. Kathy Hochul criticized the platforms for not doing more to check hate speech and violent content, allowing this “virus” to spread. New York Attorney General Letitia James is investigating tech companies such as Twitch (a popular streaming service for gamers owned by Amazon) and Discord (known as a Slack for gamers).
The
Buffalo massacre has once again placed freedom of speech and tech
industry accountability in the spotlight. The First Amendment to the
U.S. Constitution prohibits the government from limiting freedom of
speech. However, freedom of speech is not absolute, and there are
exceptions for inciting violence or using “fighting words”
that inflict harm on others. Further, these platforms are private companies, not the government, and they can regulate what takes place on their sites, at least in theory.
Tech
companies have been slow to respond to hate speech, violence and
white supremacist organizing. For example, Facebook has allowed users
to incite genocide
against Rohingya Muslims in Myanmar
and Tigrayan
people in Ethiopia. Disinformation on Facebook and Twitter set the
stage for the
January 6 insurrection
at the U.S. Capitol, and hundreds of active-duty and retired police officers have joined hate groups on Facebook.
Meanwhile, Facebook has used its racist algorithms to protect white men while cracking down on Black users who discuss racism online, flagging their posts as hate speech. These algorithms are only as good as the people who code them. And in the mostly white, mostly male tech sector, where Black and Latinx folks and women are scarce (even more so at the executive levels), it is easy for companies to coalesce around a white-dude, bro culture that is hostile to melanin.
Things
can only get worse when individuals like Elon Musk seek to own
platforms like Twitter and potentially turn social media into a
dystopian hotbed of unregulated hate speech.
Musk, whose rocket company SpaceX reportedly paid a flight attendant $250,000 after she accused him of exposing himself and propositioning her for sex, and whose electric car company Tesla was sued after Black employees said they were called monkeys and slaves in the workplace, has said he would reinstate Donald Trump’s Twitter account. For Black Twitter and those who rely on the digital public square, the problem is clear, and it won’t go away by itself.
If Silicon Valley is unable or unwilling to moderate its content and heal itself of violence and racism, some see regulation as the answer. For example, federal regulators want to break up Meta,
which owns Facebook, Messenger, Instagram and WhatsApp, for acting as
a monopoly,
unfairly killing the competition, intruding on our privacy and
enabling violence and hate speech.
In
Texas, a federal judge put on hold a state law that would block
social media companies from moderating their content.
Texas Gov. Greg Abbott and others argue that the measure, which purports to prohibit censorship of users “based on their political viewpoints,” is necessary to protect conservative voices.
Former President Barack Obama believes the laws governing the internet need to change. Calling the
internet “one of the biggest reasons for democracy’s
weakening” and claiming it was “turbocharging some of
humanity’s worst impulses,” Obama called for reforming
Section
230
of the Communications Decency Act of 1996. Section 230 allows digital content providers to moderate hate speech on their platforms while, unlike newspaper publishers, shielding them from liability for content that users post on their sites. Obama also calls for more transparency from tech companies, believing they should “be required to have a higher standard of care when it comes to advertising on their site.”
Meanwhile, as more young white supremacist men turn to social media to plan and broadcast their crimes, it is clear these platforms can’t continue doing what they’re doing.
This commentary is also posted on The Grio.