If Big Tech Doesn’t Self-Regulate, Governments Will Do It for Them


On March 15, 2019, a man entered two different mosques in Christchurch, New Zealand during Friday Prayer and opened fire. He slaughtered 50 people and injured 50 others. It was the deadliest mass shooting in New Zealand history, and the white supremacist terrorist was able to share his attack with the world — by livestreaming the first attack on Facebook (NASDAQ:FB).

The fault for the tragedy in New Zealand unequivocally lies with the shooter. But it’s too easy to say that the spread of violence and hate speech, both before and after the attack, rests squarely on his shoulders. Of course, those uploading the videos and spreading the hate that fueled the attack bear much of the blame, but what about the tech companies that gave them a platform? No, a company is not solely to blame when people use its technology to hurt others. But when it becomes clear that its technology is being used this way by a large number of people, doesn’t it have a responsibility to stop it, or at least try? I think it does.

Microsoft (NASDAQ:MSFT) President and Chief Legal Officer Brad Smith agrees. Last week, he published a blog post calling for tech companies to come together to fight the use of technology to spread hatred and violence. He believes this means not only working together to respond more effectively in moments of crisis, but also focusing on prevention and fostering a healthier online community in general.

On the surface, it may not seem like Microsoft should trouble itself with what happened in Christchurch. After all, it’s not a web-content company, and its social media arm, LinkedIn, certainly wasn’t the medium used to spread the video. So why post this?

I think it’s possible Smith sees the writing on the wall: if regulators come for Big Tech, they will inevitably come for Microsoft. Or maybe, as the president of one of the world’s largest companies, Brad Smith has had the come-to-Jesus moment many tech execs still need. He sees that he has a responsibility to make sure technology isn’t being used to hurt people.

Regardless, Smith’s predictions seem right on the money, with Australia and New Zealand now working on new laws to penalize companies that don’t remove violent content in a timely fashion — with a fine as high as 10% of revenues.

The Christchurch Shooting and Immediate Aftermath 

As I mentioned earlier, the Christchurch attack was livestreamed. And in the immediate aftermath, websites failed to contain all the copies of the videos flooding their servers.

The video of the massacre was re-uploaded over a million times, with Facebook alone reporting that it had removed 1.5 million copies in the first 24 hours after the shooting. Of those, 1.2 million were blocked at the point of upload, which means 300,000 made it onto the platform before being taken down. Twitter (NYSE:TWTR) and Google’s (NASDAQ:GOOGL, NASDAQ:GOOG) YouTube were also hard at work, with YouTube reporting uploads at a rate of one per second in the hours following the attack. The world’s largest video site was forced to take unprecedented steps to stem the flow of videos, which were being edited and recut to bypass the automatic blocks it put in place.

This was all according to the shooter’s plan. The attack was “designed for the purpose of going viral,” according to Neal Mohan, YouTube’s chief product officer. In addition to the video being quickly copied and re-uploaded elsewhere, the shooter released a detailed manifesto and even repeated the meme “subscribe to PewDiePie,” a reference to YouTube’s largest star, Felix Kjellberg.

At the height of this, New Zealand ISPs blocked access to websites that failed to respond to requests to remove the videos, including 4chan, 8chan and LiveLeak, among others. However, the tech giants (YouTube, Facebook and Twitter) remained unblocked.

To put this in an American context, imagine if Verizon (NYSE:VZ), Comcast (NASDAQ:CMCSA) and AT&T (NYSE:T) were blocking sites based on their content. Were New Zealand ISPs wrong to block access to these sites?

I don’t know.

Personally, I’m not losing any sleep when people can’t view the unmoderated hate and disgusting, illegal images 4chan and 8chan are known for, and I certainly don’t think ISPs have any obligation to let people or sites use their services to distribute videos of terrorist attacks. However, there should probably be an understanding that this could happen, and at least a conversation about it. It’s worth reiterating that ISPs blocked these sites when takedown requests for the videos were ignored, not when the videos first appeared. So it’s not punishing the sites for user uploads; it’s punishing them for refusing to do anything about user content.

But are ISPs the best judges when it comes to these things? Again, I have no idea.

That’s the problem here. The internet as it exists today is still in its infancy, but it’s growing fast, exponentially even. Thus far, there has been lots of discussion of “can we do this?” and very little discussion of “should we do this?” We don’t know the best way to handle the internet so it provides as much good and does as little harm as possible. That is why I agree with Brad Smith. Something needs to be done. And if tech companies want any say in the matter, they need to do it themselves.

But keep in mind, there are two layers to the controversy surrounding the Christchurch attack here.

The first is that a terrible, violent terrorist attack could be streamed, uploaded and shared so many times. Beyond being disrespectful, traumatizing and disgusting, this amplifies the act of terrorism. Not only are there people out there who would kill 50 Muslims, there are thousands more who will cheer it on and upload the video, hoping more Muslims will see it.

The second is the rise of hate groups, the role tech companies play in facilitating this, and whether they have the responsibility to stop it when they can.

‘Subscribe to PewDiePie’

During his livestream, the shooter said the phrase “Subscribe to PewDiePie.” Obviously, PewDiePie is not to blame for the shooting. But the shooter’s intention was for PewDiePie’s hordes of fans to assume he would be blamed (he hasn’t been), so they would rush to his defense and muddy the conversation following the tragedy.

Why would his fans even believe PewDiePie could be blamed for the shooting? Because of the rash of headlines connecting him to Nazism and to racist jokes and slurs. Would I say PewDiePie is a racist in the worst sense of the word? No. But would I say PewDiePie treats racism casually and is comfortable using racist jokes in a way that normalizes them for his millions of young fans? Absolutely.

PewDiePie is the largest individual YouTuber, and that makes him a vital part of the video-streaming site that’s been mostly overlooked in the larger conversation about social media and the spread of hate groups. Many fingers have been pointed at Facebook, Twitter and smaller sites like Reddit and 4chan, and for good reason. However, YouTube is absolutely a key player here. YouTube had 1.8 billion monthly users as of last May, compared to Facebook’s roughly 2 billion. Furthermore, 21% of U.S. adults get news from YouTube, and we don’t have data for people under 18.

Researcher Zeynep Tufekci has observed a trend in the YouTube algorithm, the automated system that decides which videos to recommend to users: it tends toward the extreme. If you’re watching videos about vegetarianism, it recommends videos about veganism. If you’re watching videos about jogging, you quickly end up on ultramarathons. The same is true, and far more troubling, when it comes to videos about politics. Videos of Donald Trump rallies lead you to videos railing against immigrants, and from there to white supremacist rants and Holocaust denial. Researcher Rebecca Lewis has published a report showing how this effect is stronger on the right side of the political spectrum, which has helped fuel the current rise of white nationalism.

And while this is not Google’s aim, the point of the algorithm is to keep you on the site longer so it can make more money from ads. If sucking you down a white-supremacist rabbit hole is what keeps you on the site, then Google is profiting from that hate. And it’s not just YouTube: Facebook, and to a lesser extent Twitter, make their money from ads as well, no matter what those ads appear next to. Content that sparks a strong reaction, good or bad, means more engagement, and more engagement means more ad sales.

It happens regardless of your intentions. And while many people can see that a video they’ve ended up on is wrong or hateful, a lot of people have been eased into it over a long period, and a lot of people watching YouTube are children. In fact, 81% of children under 11 watch some version of YouTube.

Let’s go back to PewDiePie. The majority of his gigantic audience is young and impressionable. So when the YouTube algorithm carries them from PewDiePie’s flippant treatment of racism to something more insidious, they’re not old enough to see what’s happening. It doesn’t help that PewDiePie has, deliberately or not, referenced less-than-savory channels that may nudge the YouTube algorithm toward recommending more extreme racist content to his followers.

The internet is a good place to find people like you. While that has largely been a good thing, connecting minorities and people with niche interests to others like them, it has also allowed people with hateful fringe views to congregate. That has fed the rise of the white supremacist content that inspired the New Zealand shooter.

Move Fast and Break Things

Social media companies have always focused on growth first without evaluating the consequences. Now it’s time to evaluate.

From Christchurch to Myanmar, tech companies have helped spread the hate that has cost people their lives.

So what do tech companies do? What can tech companies do?

First, they need to evaluate what they make money from and what they’re complicit in. Brad Smith has done this. Other tech execs need to follow his lead. Personally, I think his blog post lays out a great three-step plan to get tech companies started.

One: Improve the technologies used to identify violent content as it is being uploaded, and share that technology more freely to prevent a repeat of what happened after the Christchurch massacre. Remember, part of the problem was that many of the videos were edited just enough to get past current detection systems (a rough sketch of how this kind of matching can work follows the three steps below). Not only would this stem the sharing of violent and other objectionable content, it would also push AI technology forward in general.

Two: Create a major event protocol so that tech companies can respond to events like Christchurch more cohesively. This would involve tech companies deciding how to move forward together and sharing information. This would help avoid another situation where ISPs have to block access to sites.

Three: Work to foster a healthier online environment. This is the least specific goal, and as such the hardest to achieve. However, it is by far the most important, because it helps prevent another event like the Christchurch shooting in the first place. If fewer people are radicalized online, there are fewer violent attacks. Facebook recently took a step in this direction by banning white supremacy and separatism from its platforms and directing users who search for or try to post such content to a nonprofit that helps people leave hate groups. This is a great step forward. If people don’t have a place to discuss these ideas and pull more people into them, these hate movements languish.
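
Smith’s post doesn’t name a specific technology for step one, but content-matching systems of this kind often rely on perceptual hashing (Microsoft’s PhotoDNA is a well-known example), where small edits to a frame change only a few bits of its fingerprint. The Python sketch below is a toy illustration of that idea, not any platform’s actual system.

```python
# Toy "average hash" perceptual fingerprint -- illustrative only, not any
# platform's real system. The idea: small edits to a frame flip only a few
# bits of the fingerprint, so near-duplicates can still be matched by Hamming
# distance, whereas an exact file hash changes completely after any edit.

def average_hash(frame):
    """frame: 2D list of grayscale pixel values (0-255); returns a bit string."""
    pixels = [p for row in frame for p in row]
    avg = sum(pixels) / len(pixels)
    return "".join("1" if p > avg else "0" for p in pixels)

def hamming_distance(h1, h2):
    """Number of fingerprint bits that differ between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny 4x4 "frame" and a lightly edited copy (one corner brightened,
# standing in for a watermark or recolor meant to dodge exact-hash blocks).
original = [[10, 20, 200, 210],
            [15, 25, 205, 215],
            [12, 22, 202, 212],
            [18, 28, 208, 218]]
edited = [row[:] for row in original]
edited[0][0] = 40

print(hamming_distance(average_hash(original), average_hash(edited)))  # 0: still a match
```

Sharing fingerprints like these across companies, as Smith proposes, is what would let a clip flagged on one platform be blocked everywhere.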

Can the tech companies eradicate hate groups entirely? Of course not, but they can definitely slow them down.

Consider Alex Jones. He was deplatformed from all the major social media sites for spreading the lie that the Sandy Hook killings never happened and for inspiring his followers to harass families that lost their young children. Now, minus a large portion of the audience that made him money, Alex Jones has admitted the Sandy Hook shooting was real. Did this admission undo the damage? Of course not, but he’s no longer throwing fuel on this particular fire.

Also, I would like to make it clear that moderating violence and hate speech isn’t a violation of “free speech.” First of all, the internet is global, and so is the reach of these tech companies; the First Amendment applies only within the United States. Second, free speech protects an individual’s right to speak without government intervention. The First Amendment does not force private entities to give you a platform for violence or hate speech. You have no constitutional right to a YouTube channel, Twitter handle or Facebook profile. To extend this to the ISP case, other companies do not have a right to use Verizon, Comcast or AT&T’s internet infrastructure to stream violence.

Looming Regulation for Tech Companies

So why does any of this really matter? If Google, Facebook and Twitter are making money while not breaking the law, why does it matter how they’re doing it?

If making people’s online experience less dehumanizing, abusive and traumatizing, and saving lives in the process, isn’t enough, consider this: tech companies cannot continue like this without running into serious problems for their bottom lines.

As I mentioned earlier, many governments are exploring regulation for tech companies. The European Parliament recently passed a copyright law that would make companies liable for copyright violations on their platforms. This comes after the General Data Protection Regulation (GDPR) restricted what companies could do with user data.

But Australia and New Zealand’s responses to the Christchurch mosque shooting are like nothing the tech companies have been subject to before. A law to be introduced in Australia this week would make it a criminal offense for tech companies not to remove “abhorrent violent content” fast enough. Punishments for violating this law include fines of up to 10% of a company’s global revenue. The proposal would also reclassify Facebook and its peers as publishers, making them liable for content in ways they’ve never been before.

“Big social media companies have a responsibility to take every possible action to ensure their technology products are not exploited by murderous terrorists,” said Scott Morrison, Australia’s prime minister. “It should not just be a matter of just doing the right thing. It should be the law.”

Meanwhile, New Zealand Prime Minister Jacinda Ardern says that whatever laws her country ends up passing — they’re currently studying Germany’s hate speech laws closely — the regulation has to come from more than individual affected countries:

“Ultimately, we can all promote good rules locally, but these platforms are global.”

The European Union has proposed a law that would fine a company 4% of revenue for failing to remove terrorist material from its platform within an hour. The U.K. is considering its own fines of up to 4% of global revenue for tech companies that fail to remove toxic content.

Think about this from a stock perspective. For 2019, analysts are calling for Google to bring in revenues of $163 billion. What would happen if one of the largest companies in the world were fined by Australia (10%), the EU (4%) and the U.K. (4%) and had to pay out 18% of its revenues in fines? Think of the reactions to Google missing earnings by a few pennies, let alone by roughly $30 billion. And that assumes just one fine under each of these proposed laws; it wouldn’t be a one-time issue if these companies don’t clean up their act. For as long as tech companies continue to allow violence to be posted to and propagated by their platforms, they’ll be subject to regulation.
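
To make that back-of-the-envelope math explicit (purely illustrative; the proposed fines apply to different revenue definitions and would not necessarily stack like this):

```python
# Rough, illustrative math using the column's own figures. Assumes the three
# proposed fines simply stack against the same revenue base, a simplification.
revenue = 163e9                                    # analysts' 2019 revenue estimate for Google (USD)
fine_rates = {"Australia": 0.10, "EU": 0.04, "U.K.": 0.04}

total_rate = sum(fine_rates.values())              # 0.18, i.e. 18% of revenue
total_fines = revenue * total_rate                 # roughly $29.3 billion
print(f"{total_rate:.0%} of revenue = ${total_fines / 1e9:.1f} billion in potential fines")
```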

So what? Just don’t offer your services in countries with such strict policies. The tech companies got big without China; they can do without the roughly 30 million people in Australia and New Zealand.

Sure, those two countries are a drop in the bucket, but the European Union is not. What if India decides to regulate? Or Canada? Or, likely after a shift in which party holds power, the United States? The U.S. can not only fine these companies but also cut them off where they operate if they don’t follow the law.

Additionally, where does most of FB’s and GOOGL’s revenue come from? Advertisements. What happens when advertisers decide it’s not worth risking their product appearing next to a video of someone advocating violence against Muslims? The companies lose a lot of money, that’s what.

So tech companies are hitting a point where they need to slow down or risk being pulled over. If they don’t do something about the spread of violence and hate content on their platforms, governments are going to do it for them.

They need to look at what they’re allowing to happen using their technologies and decide how to address it. If their humanity can’t be appealed to, maybe their wallets can be.

As of this writing, Regina Borsellino held no positions in the aforementioned securities.

