2022-05-06
By Adeola

Web2.0 Social Media Platforms May Have Failed, Could Web3 Do Better?

It’s been over two decades since the first social media network went live on the Internet to provide unhindered interconnectivity and serve as a convergence point for distinct views, ideologies, and constructive conversations. This dream materialized in Web2.0, the second generation of the internet’s growth, marked by smoother user interfaces and richer features than Web1.0 offered.

Achieving this objective wasn’t easy. While some social networks thrived briefly and then died, Facebook, Twitter, and YouTube survived infancy and now boast millions of users worldwide who stay glued to their screens for about an hour every day. Facebook alone clocked over one billion daily active users in the last quarter of 2021, with 72 percent of its users outside North America and Europe.

The growth and operations of these centralized social platforms have not been without scandals and controversies, including data privacy violations and content moderation that some perceive as repressive of free expression and driven by profit at the expense of users’ safety and sanity. The shortcomings of big centralized social media firms’ operations and policies have led many to ask whether the world needs a new kind of social media.

Data collected at the point of free registration is exploited for targeted advertising. There are also allegations that social media platforms suppress certain categories of views. For instance, a 2020 Pew Research Center study reported that nine in ten Republicans, and around six in ten Democrats, believed political views were being censored on social media sites. A U.S. appeals court ruling, however, dismissed claims that social media networks suppress conservative views.

Yang Wang, an associate professor of Information and Computer Science at the University of Illinois Urbana-Champaign, agrees that a new social platform is needed. He told Arweave News that big Web2 companies behave the way they do because they answer to their stakeholders and are primarily profit-driven.

“However, these platforms have a social responsibility to ensure they are healthy environments and fight against issues like cyberbullying and misinformation”, Wang said.

Data on abuse on social media is scarce, partly because victims rarely report it, but what exists generally shows a pattern of increase. A 2020 survey of more than 6,000 children aged 10 to 18, conducted over three months by the European Commission’s Joint Research Centre, found that 50% had experienced at least one kind of cyberbullying.

The complicated challenge of content moderation

When big centralized social media firms crack down on abuse of their platforms, the intensity differs markedly between developing countries and North America and Europe. In “at-risk countries” like Ethiopia, where civil war has raged for more than a year, and Myanmar, where the Rohingya people are being persecuted, Facebook did little to prevent its platform from being used to incite violence and fuel ethnic conflict, according to a report. Facebook denied the allegations.

“The scandals stem from the financial incentives that we’ve created for monetising attention on the internet”, Sam Guzik, foresight affiliate at the Future Today Institute which models and prototypes future risk and opportunity, told Arweave News.

“The toxic speech and harassment that permeates the web are a product of bad actors and real-world power dynamics playing out in a digital space.”

Negativity sells on social media because it grabs users’ attention more than positive content does and keeps people scrolling, which suits social media companies’ revenue goals, said journalist and author Johann Hari. He added that algorithms amplify harmful content and suppress the positive. This business model works against any incentive to rid social media platforms of negative content.

Deleting objectionable user-generated content is usually the verdict when it is reported. Still, how big social network companies carry this out is often contested, and it raises questions: Should user-generated content be regulated at all? What content should be flagged as offensive and harmful? And at what point does removing that kind of content violate freedom of expression?

Moderating content on social platforms is operationally challenging. Companies’ technology and human resources struggle to keep up in an age when people worldwide find a home on social media for their opinions on local, national, and international happenings. Operating across multiple international jurisdictions adds further challenges to moderating content.

What constitutes a harmful post in one country may not in another jurisdiction. While some countries, especially in Europe, have tight regulations for social media firms, others don’t, leaving the companies to operate by internal policy. Even this, however, has its problems and has drawn criticism.

The problem with establishing policies for what users can post on social media begins with who sets the rules and their underlying motives. On centralized Web2 platforms, there is a risk of abuse of power when company executives and moderators craft policies colored by their own biases and ideological leanings. There is also the likelihood that human rights could be suppressed if statutory authorities were responsible for creating the rules for social media use.

“At a high level, it is fundamentally the responsibility of social networking companies to make decisions about how their platforms are used”, said Guzik. “Those decisions should be made transparently and predictably, but the fact of deciding that some speech does not belong in the public square is not inherently problematic.”

Even if statutory institutions were involved in determining the rules and procedures for regulating social media content, could they capably perform that responsibility? Evelyn Douek of Harvard Law School wrote:

Constitutional obstacles aside, the sheer scale, speed and technological complexity of content moderation means state actors could not directly commandeer the operations of content moderation.

Douek also said that social media platforms have a First Amendment right (the provision of the United States Constitution that protects freedom of speech, religion, the press, assembly, and petitioning the government for redress) and a business interest in moderating content.

Vast amounts of content are removed by centralized social media platforms every year. Giving these companies absolute power to delete content or expel anyone could blur the line between maintaining decorum and suppressing dissenting views.

Deciding where to draw this line, and when to trust individual companies versus relying on government regulation, is a deeply complex question, Guzik said.

Web3.0: the panacea for Web2.0?

According to supporters, Web3.0, the third generation of internet development, possesses the solution to almost everything that is wrong with preceding generations.

“Web2.0 social media has failed, and you can see that in the moderation overreach and the imbalance of power between users and moderators,” Darwin, the pseudonymous founder of Decent.Land, a free, open-source protocol for building social network platforms, told Arweave News.

“Other issues are the algorithms and what is shown to readers. Often this can be gamed, but the big issue there is that you don’t know how they are being gamed.”

Proponents say social media platforms built on Web3.0 decentralized technology devolve power and decision-making to users and holders of governance tokens instead of a few company executives. The technology architecture is open source, leaving it open to scrutiny. Data is hosted on blockchain-backed permanent storage systems such as Arweave, preventing content from being censored or altered to promote disinformation.
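For illustration, here is a minimal sketch of what storing a post permanently might look like using the arweave-js client. The wallet file path and the tag values are assumptions made for this example, not part of any particular social protocol:

```typescript
// A hedged sketch of publishing a post to Arweave's permanent storage
// with arweave-js. "wallet.json" and the tags below are illustrative
// assumptions, not a specific platform's conventions.
import Arweave from "arweave";
import * as fs from "fs";

const arweave = Arweave.init({
  host: "arweave.net",
  port: 443,
  protocol: "https",
});

async function publishPost(text: string): Promise<string> {
  // A funded Arweave keyfile is assumed to exist at this path.
  const key = JSON.parse(fs.readFileSync("wallet.json", "utf-8"));

  // Wrap the post in a transaction; once mined, the data is permanent.
  const tx = await arweave.createTransaction({ data: text }, key);
  tx.addTag("Content-Type", "text/plain");
  tx.addTag("App-Name", "example-social-feed"); // hypothetical app tag

  await arweave.transactions.sign(tx, key);
  await arweave.transactions.post(tx);

  // The transaction ID doubles as a permanent content address.
  return tx.id;
}

publishPost("hello, permaweb").then((id) =>
  console.log(`stored at https://arweave.net/${id}`)
);
```

Because the transaction ID permanently addresses the data, no executive or moderator can later rewrite the post; any intervention has to happen at the gateway or application layer instead.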

“Web3.0 is promising. When people actually own their data, many good things will happen,” said Wang, the computer science professor at the University of Illinois.

As for what the future holds for Web3.0, he added:

But I doubt Web3.0 will solve all the problems that Web2.0 companies grapple with. I think decentralised governance is a key challenge for Web3 […] how does the crowd reach a consensus on what’s allowed and what’s not, given that people tend to have different views and values, not to mention the dominance of whale addresses. Decentralisation doesn’t do away with the fundamental diversity of human perceptions.

Guzik noted that new social media networks could succeed in the future because a new generation of founders will develop better ways for their users to connect and express themselves, not because they’ve deployed new technology for its own sake.

“Without any detail, the idea of a ‘truly decentralized social network’ scares me because I think it would be a supercharged vector for distributing hate, lies, and terror,” he added.

Let’s not forget that Web3.0 innovations are relatively new and at an early stage. The technology is about 10 years old, and some technology experts believe that many innovations branded as Web3.0 are actually “Web 2.5”, suggesting that the full potential and opportunities of Web3.0 are still being unlocked.

People who do not understand the underlying technology behind Web3.0 believe it is rogue and that harmful content will be challenging to take down, potentially compounding the problems of Web2.0.

“This [harmful and illegal content] is something that Arweave itself solves. Arweave gateways hold a large library of content hashes of illegal content (uploaded by people all over the world), and the gateways themselves do a pretty good job at detecting and blacklisting that content,” Darwin said.

“On top of that, Decent.Land has a decentralised moderation protocol, which means people can vote out content. They can say this is harmful, and they can quickly come to a consensus that it’s the truth and it should not be there.”
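As a rough illustration of the two mechanisms Darwin describes, hash-based blacklisting at the gateway and community voting at the protocol level, here is a hedged sketch. The blocklist, quorum, and thresholds below are hypothetical and do not reflect the actual Arweave gateway or Decent.Land implementations:

```typescript
// A hedged sketch of the two moderation layers described above; the
// blocklist contents, quorum, and majority rule are assumptions for
// illustration, not the real gateway or Decent.Land logic.
import { createHash } from "crypto";

// Gateway-style blacklisting: content is identified by its hash, so a
// gateway can refuse to serve data whose hash appears on a shared list,
// even though the underlying data remains on the permanent storage layer.
const blockedHashes = new Set<string>([
  // hashes of known illegal content would be collected here
]);

function isBlocked(content: Buffer): boolean {
  const digest = createHash("sha256").update(content).digest("hex");
  return blockedHashes.has(digest);
}

// Community moderation: content is hidden once votes to remove it reach
// a quorum and a majority, rather than by a single moderator's decree.
interface ModerationTally {
  votesToRemove: number;
  votesToKeep: number;
}

function shouldHide(tally: ModerationTally, quorum = 50): boolean {
  const total = tally.votesToRemove + tally.votesToKeep;
  return total >= quorum && tally.votesToRemove > tally.votesToKeep;
}
```

The design point is that neither layer deletes the data itself: the gateway declines to serve it and the community votes to hide it, which is how censorship resistance and moderation can coexist.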

Unlike Web2.0 social media networks, Web3.0 may not be free to use. A fee- or token-based system potentially eliminates the incentive to exploit users’ data and tweak algorithms to promote content, even questionable content, for profit, as big Web2.0 social media companies do. But although the paid-access business model gives social networks financial sustainability, it denies people who can’t pay access to a medium for expression.

The impact of content on social media networks is far-reaching. Social media is a public square of ideas and a tool to amplify thoughts and opinions, and history shows there are real consequences when it is allowed to become a dumping ground for harmful content.

People will likely perceive differing thoughts and ideas as offensive even when they are not harmful. Social media content moderation is thus an essential and delicate effort to ensure that enforcing civility does not infringe on users’ freedom of expression. Of course, this responsibility comes with immense power, which cannot be entrusted to a few people or to government.

“We need to be looking more at how people and civic institutions (outside of government) can take charge of our own behaviours,” Leah Plunkett, researcher and lecturer at Harvard Law School, told Arweave News, “and the norms within our communities (online and offline) to create and hold the line between free exchange and toxic, destructive behaviour.”



Adeola

Adeola is a journalist at Arweave News. As a former freelance journalist, his work has been published by Newlines Magazine, The Continent, and the Mail & Guardian. He is interested in the intersection of technology and human lives.
