There are 7.6 billion humans on this earth. 2.23 billion of them logged on to Facebook (the number counts “monthly active users”) during the second quarter of 2018.
I don’t know about you, but I found this astounding, considering that Facebook did not exist prior to 2004, and was not open to the general public until 2006. This single “platform” has arguably had a greater influence on human social and political behavior than anything since the invention of radio and television. It may turn out to be as disruptive of the social order as the widespread introduction of movable type in the 15th century.
The sheer speed at which Facebook has spread through world cultures, along with its constantly changing, hidden, proprietary algorithms, means that its effects are difficult to study. Unlike the decentralized publishing industry that grew out of advances in printing technology, Facebook is tightly controlled by a single private company.
Yesterday Facebook announced that it had deleted some 652 accounts for “coordinated inauthentic behavior” – that is, they were “sock puppets” associated with Russia and Iran: accounts that pretended to belong to real people or legitimate news agencies, and which posted “political content focused on the Middle East, as well as the UK, US, and Latin America,” primarily in English and Arabic. Information on exactly what content was posted is sketchy, but it seems to have included the usual anti-Israel material, as well as propaganda intended to create internal division and destabilize the US and UK.
One of the well-known characteristics of Facebook is its encouragement of ideological bubbles. This is by design. The designers understand that the amount of time one spends on Facebook – and therefore the number of ads one sees – depends on the psychic gratification one receives from the content. Such gratification increases when the content includes ideas with which one agrees, while exposure to ideas that challenge one’s beliefs produces discomfort. So the algorithm that decides which posts a user will see chooses those which – according to an elaborate profile built from the user’s own posts and “likes” – it estimates the user will find congenial.
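The real ranking system is proprietary and hidden, so any concrete description of it is a guess. But a minimal sketch of engagement-based ranking might look something like the Python below, where posts are scored by interest overlap plus an “agreement bonus.” Every name, field, and weight here is my own invention for illustration.

```python
# A minimal sketch of engagement-based feed ranking. All names and
# weights are hypothetical illustrations; the real system is hidden
# and far more elaborate.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    topics: set[str]          # e.g. {"motorcycles", "israel"}
    stance: dict[str, float]  # topic -> stance score in [-1, 1]

@dataclass
class UserProfile:
    interests: dict[str, float]  # topic -> interest weight, learned from likes
    stances: dict[str, float]    # topic -> the user's own estimated stance

def predicted_engagement(user: UserProfile, post: Post) -> float:
    """Score a post by interest overlap plus ideological agreement."""
    score = 0.0
    for topic in post.topics:
        interest = user.interests.get(topic, 0.0)
        # Agreement bonus: posts whose stance matches the user's are
        # predicted to be "congenial" and are therefore shown more often.
        agreement = user.stances.get(topic, 0.0) * post.stance.get(topic, 0.0)
        score += interest * (1.0 + agreement)
    return score

def rank_feed(user: UserProfile, candidates: list[Post]) -> list[Post]:
    """Show the highest-scoring (most congenial) posts first."""
    return sorted(candidates, key=lambda p: predicted_engagement(user, p),
                  reverse=True)
```

Note that nothing in such a scheme rewards showing the user a post he will disagree with; disagreement simply scores lower and disappears from the feed.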
This is benign in some ways – for example, it “knows” that I am interested in motorcycles, so I see posts about motorcycles – but it also works as a political censor. In a triumph of artificial intelligence, it has learned to distinguish (most of the time) between pro- and anti-Israel posts, and to show me the former and not the latter. If you have ever tried to program a computer to perform a similar task, you know that this is an order of magnitude harder than simply looking for texts about a particular subject, as it does for motorcycles.
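To see why, consider a toy example. Deciding that a post is *about* a subject can often be done with a keyword list; deciding which side it takes cannot, because pro- and anti- posts use the same vocabulary. The word lists and sentences below are invented for illustration.

```python
# Why stance is harder than topic: a toy illustration. Keyword matching
# suffices to detect that a post is *about* a subject, but pro- and
# anti- posts mention the same words, so keywords alone cannot
# separate them.

TOPIC_KEYWORDS = {"israel", "idf", "gaza", "jerusalem"}

def is_about_topic(text: str) -> bool:
    """Topic detection: a simple keyword test is often good enough."""
    words = set(text.lower().split())
    return bool(words & TOPIC_KEYWORDS)

# Both of these pass the topic test; no keyword list reveals their stance.
pro = "Israel has every right to defend Jerusalem"
anti = "Israel has no right to hold Jerusalem"

assert is_about_topic(pro) and is_about_topic(anti)
# Stance classification has to model negation, framing, and context
# ("every right" vs. "no right"), which in practice means a trained
# model rather than a keyword list.
```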
The platform itself is structured to encourage its users to behave in ways that support its objective of providing a gratifying experience. For example, a user who posts a “status,” photo, or link has control over the comments that other users can make about it. If another user posts a comment that the “owner” of the initial post disagrees with, the owner can delete it. As a result, a Facebook etiquette has developed in which it is considered inappropriate to post a disagreement. “This is my page, and I won’t allow racism (or fascism, transphobia, etc.) on it,” a user will write, and delete the offending comment.
There is also the way Facebook users get “friends.” Friend suggestions are generated in various ways, such as the number of mutual friends, but also by the platform’s evaluation of common interests, which in practice means ideological agreement. My personal experience illustrates this. I have been a member of Facebook since 2010, and by now have collected several hundred “friends.” After an initial period in which I befriended relatives and real-life friends, I almost never initiated a friend request. But on a regular basis I receive such requests. Some of them are people with whom I share non-political interests or who were my real-life friends in the past. A few are people I have interacted with in the comments section. But the majority are people with whom I am not acquainted, but who appear (to Facebook) to have a similar ideological profile. In addition, over the years, many of my more liberal friends have unfriended me, mostly as a result of my posts about Barack Obama’s anti-Israel policies. So I am left in a bubble of pro-Israel, generally conservative folks with a few old friends and family members thrown in. I also get regular requests to join groups which are ideologically congenial.
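If I had to guess at the mechanics, friend suggestion probably reduces to a score combining mutual friends with profile similarity. The sketch below is exactly that guess – the weights and the cosine-similarity formulation are my assumptions, not anything Facebook has published.

```python
# A sketch of friend-suggestion scoring along the lines described above:
# mutual friends plus similarity of interest/ideology profiles. The
# weights and profiles are invented for illustration.

import math

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse interest vectors."""
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggestion_score(mutual_friends: int,
                     my_profile: dict[str, float],
                     their_profile: dict[str, float]) -> float:
    # Mutual friends matter, but profile similarity (which includes
    # ideological agreement) can dominate: strangers who "look like" me
    # get suggested anyway.
    return 0.3 * mutual_friends + 5.0 * cosine(my_profile, their_profile)

me = {"motorcycles": 1.0, "pro_israel": 0.9, "conservative": 0.7}
stranger = {"pro_israel": 1.0, "conservative": 0.8}
old_friend = {"motorcycles": 0.8, "cooking": 1.0}

print(suggestion_score(0, me, stranger))    # ~3.8: high despite zero mutual friends
print(suggestion_score(3, me, old_friend))  # ~3.0: mutual friends, less similarity
```

Under any weighting of this kind, ideologically similar strangers will keep arriving – which matches my experience.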
So why is this bad? Of course it means that I won’t be exposed to ideas that I disagree with. That’s bad enough. But there is an even worse problem. In an ideologically homogeneous group, a participant gets respect by reinforcing the ideology of the group. I can become a hero to my group of hawkish conservatives by being even more hawkish. Because there are no doves in my group, thanks to Facebook’s algorithm and natural selection, there is nothing to stop me from moving farther to the right. And the next person who wants to make his mark in the group will attack me from the right, moving the discourse as a whole along with him.
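The mechanism is simple enough to simulate. The toy model below – with parameters invented purely for illustration – gives status to the most extreme member of a one-sided group and lets each challenger outflank the current edge; the group mean drifts steadily outward.

```python
# A toy simulation of the drift described above: in a homogeneous group,
# status accrues to whoever is most extreme in the group's direction, so
# each new voice stakes out a position slightly beyond the current edge
# and the whole group shifts. All parameters are invented.

import random

def simulate_drift(rounds: int = 10, group_size: int = 20) -> None:
    # Positions on a -1 (dovish) .. +1 (hawkish) axis; the algorithm has
    # already filtered the group to one side, so everyone starts > 0.
    positions = [random.uniform(0.1, 0.5) for _ in range(group_size)]
    for r in range(rounds):
        edge = max(positions)  # the current "hero" of the group
        # A member seeking status outflanks the edge; with no doves
        # present, nothing pulls positions back toward the center.
        challenger = random.randrange(group_size)
        positions[challenger] = min(1.0, edge + random.uniform(0.0, 0.1))
        mean = sum(positions) / group_size
        print(f"round {r + 1}: mean {mean:.2f}, edge {max(positions):.2f}")

simulate_drift()
```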
As a result, ideological groups develop and then move farther and farther from the center. They emphasize different facts and even develop their own facts. They create their own dialects, with each side using words the other never uses. What we call “Judea and Samaria,” they call “occupied Palestinian territories.” Members of opposing groups would find each other’s ideas crazy, but they rarely see them.
Now, I admit that I like right-wing discourse, up to a point. But think about what is happening in a similar group of Palestinian Arabs inclined in a nationalist or Islamist direction. Their discourse, too, is moving, in the direction of hatred and confrontation. And while my right-wing friends may be (thanks to the algorithm) close to my age and therefore relatively harmless, the same could not be said of Palestinian college student Omar al-Abed, who told his Facebook friends that his knife “answers the call of al-Aqsa” hours before he walked into a Jewish home and murdered three members of a family with it.
Facebook often announces programs that try to distinguish real news from fake, and to remove posts that “violate its community standards,” whatever those are. It certainly does not want to provide a platform for incitement to murder, genocide, sexual violence, racism, or many other undesirable things. But it will never do anything that would significantly impact its primary objective, which is to get people to spend more time scrolling through it and encountering ads.
In short, the platform itself, which is designed to increase ad revenues for Facebook’s shareholders, has the undesired side effect of nurturing and amplifying extremism. Rather than bringing people together, it drives them apart and polarizes them. Unfortunately, this is built into the structure of the platform, and is essential to the attainment of its business objectives. It can’t be fixed with anything other than a wholesale change that would make it unrecognizable, and possibly destroy its ability to make a profit.
Some countries have blocked Facebook. They are generally totalitarian states that want to prevent their citizens from learning about the outside world. Israel is not that kind of state and will not ban Facebook; but we should understand that its pleasant diversions come at a price.