It’s difficult to separate the Black Lives Matter movement from the tech platforms that amplify its global reach.

Darnella Frazier, the 17-year-old who captured George Floyd’s last moments, first shared the video on Facebook, where it has now been viewed over 1.8 million times. And of course, the #BlackLivesMatter hashtag itself emerged on Twitter in 2013, in response to the acquittal of George Zimmerman, who racially profiled and fatally shot African American teenager Trayvon Martin.

By 2016, the #BlackLivesMatter hashtag had appeared 30 million times, according to the Pew Research Center. Updated research from Pew shows the hashtag appeared 47.8 million times between 26 May and 7 June – unsurprising given its international support, including the surge of K-pop fans who hijacked hashtags from anti-BLM counter-movements to drown out hate speech.

Chart sourced from the Pew Research Center.

The viral power of social media makes it an ideal site for activism. Past movements like Extinction Rebellion, #MeToo, and the Arab Spring have made that apparent. But what happens when the topic stops trending? 

How can companies like Facebook and Twitter go beyond simply being spaces that facilitate conversation? 

Many of these platforms have shown their support for the movement, albeit to varying degrees. Facebook announced it would donate $10 million to “groups working on racial justice”, while YouTube said it would give $1 million to the Center for Policing Equity, as did Netflix.

Twitter has added #BlackLivesMatter to its bio, but it has not pledged any donations.

Why solidarity and donations aren't enough

Sitting at the intersection of social, political and economic influence, these tech monoliths have the power to make the systemic changes that Black Lives Matter demands – starting from within.

One way for Big Tech to make a lasting impact is by actively countering its own structural biases – both in terms of its products and its developers. The two go hand-in-hand, after all.

A 2019 Cornell study found that Twitter’s content-moderation AI was biased against African American users: tweets from Black users were more likely to be tagged as hate speech than tweets from the platform’s white users.

“These systems are being developed to identify language that’s used to target marginalized populations online,” said Thomas Davidson, the study’s lead author and a doctoral candidate at Cornell. “It’s extremely concerning if the same systems are themselves discriminating against the population they’re designed to protect.”

Biases in machine learning are nothing new. Just look at facial recognition technology.

Last year, The New York Times reported on a study by the National Institute of Standards and Technology which found that facial recognition algorithms falsely identified African American and Asian faces 10 to 100 times more often than Caucasian faces.

Amazon's facial recognition software
Amazon's Rekognition was shown to have racial biases. Image sourced from Rekognition's site.

But these biases often originate with the people who build the systems and set their baselines.

As Harvard Business Review explains, this can be due to training machine learning software with skewed data – that is, data that's imbued with existing social and historical prejudices.

Data samples can be flawed as well, with certain groups over-represented or under-represented relative to others.
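The mechanism is easy to demonstrate. The toy sketch below – entirely synthetic data, not any platform’s real pipeline – simulates annotation bias: benign messages from one group are occasionally mislabeled as toxic, and a simple word-frequency classifier trained on those labels ends up flagging that group’s harmless messages far more often. The group names, words and rates are all invented for illustration.

```python
# Illustrative sketch of how biased training labels propagate into a
# classifier's error rates. All data here is synthetic and hypothetical.
import random
from collections import defaultdict

random.seed(0)

def make_message(group):
    """Generate one (words, label, truly_toxic, group) record."""
    words = ["hello", "weather", "game"]
    # 'marker' stands in for a harmless dialect word used mostly by Group A.
    if group == "A" and random.random() < 0.8:
        words.append("marker")
    truly_toxic = random.random() < 0.05     # same base rate for both groups
    if truly_toxic:
        words.append("slur")
    # Annotation bias: benign Group A messages are mislabeled toxic 25% of the time.
    label = truly_toxic or (group == "A" and random.random() < 0.25)
    return words, label, truly_toxic, group

data = [make_message(g) for g in ("A", "B") for _ in range(5000)]

# "Train": score each word by how often it co-occurs with a toxic *label*.
toxic_count, total_count = defaultdict(int), defaultdict(int)
for words, label, _, _ in data:
    for w in set(words):
        total_count[w] += 1
        toxic_count[w] += label

def predict(words):
    # Flag a message if any word is labeled toxic more than 25% of the time.
    return any(toxic_count[w] / total_count[w] > 0.25 for w in set(words))

# Measure the false-positive rate on *truly benign* messages, per group.
fpr = {}
for g in ("A", "B"):
    benign = [d for d in data if d[3] == g and not d[2]]
    fpr[g] = sum(predict(d[0]) for d in benign) / len(benign)

print(fpr)  # Group A's benign messages are flagged far more often than Group B's
```

The classifier never sees the group attribute; it absorbs the bias purely through the mislabeled examples, because the innocuous “marker” word becomes statistically associated with the toxic label.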

In response to the Black Lives Matter protests, IBM announced on 8 June that it would stop developing and researching general-purpose facial recognition software.

The company’s CEO Arvind Krishna said in a letter to the US Congress that “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.” 

Following suit, Amazon announced a one-year moratorium on the use of its controversial facial recognition software, Rekognition, by law enforcement, until stronger regulations are in place.

Rekognition showed biases against Black and darker-skinned people, as MIT Media Lab researcher Joy Buolamwini discovered in 2019.

While these moves are steps in the right direction, Big Tech can more actively address its blind spots and biases by hiring more Black engineers, coders and scientists.

The lack of diversity in Silicon Valley is glaring. The industry remains predominantly white or Asian and male, and progress towards inclusivity has been slow.

In 2018, fewer than 3% of employees at Uber, Twitter, Google and Facebook identified as Black. Facebook's 2019 annual diversity report showed that the share of Black employees rose from 3.5% to 3.8% across the whole company. Only 1.5% of its technical roles were held by Black people in 2019, up from 1.3% the year before.

As tech consumers and social media participants, we must expect more of Big Tech and stay vigilant. We must ensure the solidarity they offer today doesn't become empty rhetoric tomorrow. The changes won’t happen overnight. But they must happen.