Shifter: An Experiment in Dynamic Social Spheres

Social networks are pervasive in modern human communication, and they typically provide the same or similar affordances: posting, liking, sharing, commenting. These provide the basic contours of conversation, with options for only limited non-verbal engagement. So limited, in fact, that expressing negativity or disagreement non-verbally is not an option. These design choices are made deliberately, and they are tangled up with the company’s bottom line. Take, for instance, Facebook’s decision not to include a “Dislike” button. The company justifies this on the grounds that such a button would not be good for the world at large; yet these choices, which are more likely driven by what is profitable, dramatically influence what kinds of conversation take place. “Its algorithms optimize for ‘engagement’, which includes posts, likes, clicks, shares, and comments. Among the metrics Facebook does not optimize for: honesty, exchange of ideas, critical thinking, or objective truth.” [1]

One could imagine an infinite number of affordances for nudging and steering the direction of discourse, and these remain largely unexplored. What also remains unexplored (at least in implementation) are the ends to which these affordances could be applied. The solution space for directing discourse is not even limited to affordances accessible to the user. Technologies like machine learning, natural language processing, and sentiment analysis, while imperfect, can help craft conversations that are better informed and contextualized, more respectful, and more responsive to what is said and how participants behave. Non-technical information design choices hold potential as well, for instance in crafting a civil and egalitarian environment for discussion and debate by enforcing identification of users, an approach informed by the “Social Identity Model of Deindividuation Effects” (“SIDE”). [2] Decisions like these have significant ramifications for participation [3] and risk excluding groups that are typically targeted with hate speech and need anonymity as a matter of personal safety. Beyond questions of identity, networked information access influences the experience of discourse and deliberation online, particularly in large networks. A number of studies support the idea that “as one’s network size increases, the probability of interaction with sources of new information grows, since one is more likely to encounter a higher number of politically active individuals.” [4] In current popular models of interaction on social networks, these networks (subject to each user’s privacy configuration) frequently require a commitment between individuals, usually a formalized “friendship”, before users can see each other’s contributions and interact.
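
As one illustration of how an imperfect technology like sentiment analysis might be applied here, consider a pre-posting “civility nudge”. The sketch below is a minimal, hypothetical example, not a feature of any existing platform: it assumes NLTK’s off-the-shelf VADER analyzer, and the threshold and the `nudge_before_posting` helper are illustrative choices only.

```python
# A minimal sketch of a pre-posting "civility nudge", assuming NLTK's VADER
# sentiment analyzer. The threshold is arbitrary and would need tuning;
# sentiment analysis is imperfect, so this should only prompt a user to
# reconsider, never block the post outright.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
_analyzer = SentimentIntensityAnalyzer()

HOSTILITY_THRESHOLD = -0.6  # compound score ranges from -1 (negative) to +1 (positive)


def nudge_before_posting(draft: str) -> bool:
    """Return True if the draft reads as strongly negative, in which case the
    interface would show an 'are you sure?' prompt before posting."""
    scores = _analyzer.polarity_scores(draft)
    return scores["compound"] <= HOSTILITY_THRESHOLD


if __name__ == "__main__":
    print(nudge_before_posting("This is the dumbest thing I have ever read."))   # likely True
    print(nudge_before_posting("I see your point, but I disagree about the cause."))  # likely False
```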

It cannot be overstated how much care (and transparency) is required in crafting these environments and tools. Brought to scale, as we have seen with the likes of Facebook and Twitter, a large percentage of the communications of the human race can be affected, and it is not always clear what the ultimate repercussions of these effects are. If users of a system do not understand how what they see, and how they are seen by others, is manipulated (and manipulation is the appropriate term here), their worldview can be skewed by the invisible decisions of the algorithm responsible for that manipulation. In the case of Facebook’s “curatorial” algorithms for users’ news feeds (the site’s primary interface for information delivery), some research suggests that as many as 62.5% of users at the time of the study were unaware their news feed was being filtered. [5]

I propose Shifter, a model for a social network that supports the values of exposure to alternate perspectives, patience, and expansion of understanding. It leverages a non-static model of social relationships, along with other affordances that encourage respect and the ability to navigate difficult subject matter while minimizing social friction. In this model, users relinquish the almost universally assumed ability to choose who they are connected to in the network. Instead, a shuffling algorithm modifies the network based on properties self-identified by each user, on interactions and ratings generated through discourse that happens on the site itself, and on the tolerance each user sets for exposure to individuals with opinions divergent from their own. An effect of this shuffling is that the number of potential interactions with different individuals and viewpoints on the network dramatically increases. In addition to participating in conversations on the network, users would also be able to create prompts that help other users define their positions on a variety of contentious issues. These variables would allow for a much higher resolution of identity than typical dichotomies like “liberal” and “conservative”, which force users into camps that carry a great deal of (frequently inappropriate) baggage and assumptions. The hope is that such a network could provide deeper insight into the complex political and ideological identities that go largely unrecognized in much modern political discourse.
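
To make the shuffling idea more concrete, the sketch below shows one way reassignment could work, assuming each user is represented by a vector of positions on the prompt-defined issues (values in [0, 1]) and a self-chosen tolerance. Every name here (`User`, `divergence`, `shuffle_connections`) is hypothetical, and the Euclidean distance metric and single “stretch” parameter are simplifying assumptions; a real implementation would also fold in the interaction ratings described above.

```python
# A minimal sketch of the shuffling idea: connect each user to others whose
# issue-position vectors fall within that user's chosen tolerance, with an
# occasional stretch beyond it to expose divergent views. All names and
# metrics here are illustrative assumptions, not a specification of Shifter.
import math
import random
from dataclasses import dataclass


@dataclass
class User:
    name: str
    positions: list[float]  # self-identified stances on prompt-defined issues, each in [0, 1]
    tolerance: float        # how much divergence this user opts in to (0 = like-minded only)


def divergence(a: User, b: User) -> float:
    """Euclidean distance between two users' position vectors, scaled into [0, 1]."""
    return math.dist(a.positions, b.positions) / math.sqrt(len(a.positions))


def shuffle_connections(users: list[User], stretch: float = 0.1) -> dict[str, set[str]]:
    """Rebuild the network: connect each pair whose divergence is within both
    users' tolerances, occasionally stretching the limit to surface divergent views."""
    graph: dict[str, set[str]] = {u.name: set() for u in users}
    for i, a in enumerate(users):
        for b in users[i + 1:]:
            limit = min(a.tolerance, b.tolerance)
            if random.random() < stretch:
                limit += stretch  # occasional exposure just beyond the comfort zone
            if divergence(a, b) <= limit:
                graph[a.name].add(b.name)
                graph[b.name].add(a.name)
    return graph


if __name__ == "__main__":
    users = [
        User("ada", [0.2, 0.9, 0.4], tolerance=0.3),
        User("ben", [0.3, 0.7, 0.5], tolerance=0.5),
        User("cyn", [0.9, 0.1, 0.8], tolerance=0.6),
    ]
    print(shuffle_connections(users))
```

Periodically re-running such a reassignment (rather than fixing it once) is what makes the social sphere “dynamic”: connections are a function of evolving positions, ratings, and tolerances rather than a one-time friendship commitment.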

Restrictions on speech and user agency can be extremely problematic when the social network represents a generalized and significantly sized digital public sphere; however, constrained social environments with specific goals and values provide an opportunity to experiment with affordances that may or may not work at scale. They also allow subaltern counterpublics to have socio-technical affordances designed with their specific issues in mind, something that could never be done across a large and diverse digital community. Ultimately, the structure of this network aims to bridge gaps of misunderstanding and to open the possibility of progress on social issues that, left unaddressed, result in even greater societal ills.

1. “You Can’t Dislike This Article”. http://www.slate.com/articles/technology/future_tense/2014/12/facebook_dislike_button_why_mark_zuckerberg_won_t_allow_it.html

2. “Social Identity Model of Deindividuation Effects”. http://en.wikipedia.org/wiki/Social_identity_model_of_deindividuation_effects

3. “Facebook Apologizes To LGBT Community And Promises Changes To Real Name Policy”. http://techcrunch.com/2014/10/01/facebook-apologizes-to-lgbt-community-and-promises-changes-to-real-name-policy/

4. “Social media as a catalyst for online deliberation? Exploring the affordances of Facebook and YouTube for political expression”. http://www.sciencedirect.com/science/article/pii/S0747563212002762

5. “Uncovering Algorithms: Looking Inside the Facebook News Feed”. https://civic.mit.edu/blog/natematias/uncovering-algorithms-looking-inside-the-facebook-news-feed