The Myth of the Closed Container
The heyday for me as an individual contributor online was late 2005 through 2006. I had discovered the world of blogging, and in the .edu space I was considered a serious contributor. It was out in the open, seemingly anyone could participate, and the self-forming community of participants engaged vigorously, commenting on the posts of others and linking to those posts in their own pieces. At the time, at least within the .edu arena, there was some loathing for closed-container solutions, particularly the learning management system. An early exemplar is this post by Leigh Blackall.
While blogging of this sort still exists today, it is now in eclipse. From the point of view of the platform providers, blogging simply didn't generate enough participants to be very profitable. There needed to be a way to turn up the volume. Here we should ask why the volume wasn't greater.
My experience online goes back a decade earlier, when we had conversations in client/server software that enabled threaded discussions and later by email via listserv. Based on that experience, I believe the following is safe to maintain. Among the members of the group, a few will do the bulk of the posting, feeling comfortable expressing their opinions. The vast majority will be lurkers. It is much harder to know the behavior of lurkers, but I suspect some were careful readers of the threads yet never jumped into the fray, while others may have been more casual in their reading. A critical question is this: why doesn't the lurker chime in? Two possibilities are (1) fear of criticism by other members of the group and (2) self-censorship about the quality of one's own ideas. In this sense, even though these threads were in a closed-container application, they were still too open to elicit universal participation.
People will open up more if they perceive the environment to be safe. Having trusted partners in communication is part of that. Keeping others who are not trusted from penetrating the conversation is another part. The issue here, and it is a big one, is that people often make a cognitive error about the safety of the environment. Email, for example, is treated as a purely private means of communication when there are far too many examples illustrating that it is not. (While readers might first think of email leaks as the main issue, people who work for public organizations should be aware that their email is subject to FOIA requests.)
Faux privacy may be its own issue. If true privacy can't be guaranteed broadly, it may make sense to have very limited means of true privacy that are safeguarded to the max, with the rest of the communication semi-public.
With regard to Facebook in particular, there is a part of the software design that encourages the cognitive error. This is about how somebody else becomes your friend in Facebook. Is that somebody else to be trusted? If they are a friend of a friend whom you do trust, is that sufficient for you to then trust this potential new friend? If your set of friends is uneven in how they answer these questions, how should you deal with them?
Out of sight is out of mind. You may very well consider these issues when you receive a friend request. But if you haven't gotten such a request recently those issues fall by the wayside. Then, when you make a status update and choose for it to be available to friends only, you feel secure in saying what you have to say. That feeling of security may be an error.
That sense of security may then impact what you click on (which we now know is being scraped to develop a sharper profile of you). If, in contrast, you felt you were being watched the entire time, you would be more circumspect in how you navigate the Facebook site. So, odd as this may sound, one answer might be to make all Facebook posts publicly available. Knowing that, the cognitive error is far less likely to happen. Of course, that can only work if it becomes the norm for other platforms as well. In other words, perhaps some sort of return to the blogging days would be right.
Micro-blogging might be considered from this angle. It clearly has been more successful than long-form blogging in generating volume. Part of that is the consequence of tight character limits. They reduce the perceived need for self-censorship and instead create the feel that this is like texting. Yet we should ask how many people who are Facebook users don't themselves do micro-blogging. That's the population to consider in thinking through these issues.
The Myth of Free Software
Way back when I was in grad school, I learned - There's no such thing as a free lunch. Although I'm not otherwise a big Milton Friedman fan, I certainly subscribe to this view. Yet users of software that is free to them (meaning it is paid for by others) have grown used to that environment. We are only slowly coming to realize that the cost of use comes in other ways.
“It don’t cost no money, you gotta pay with your heart.”
Sharon by David Bromberg
With ad-supported software, in particular, we pay by putting our privacy at risk. While it is clear that some will call for regulation of how software companies protect the information they hold about us, let's recognize that the incentive to collect this information will not go away as long as ads are the way to pay for the software.
So, one might contemplate other ways to pay for the software, in which the incentive to collect personal information is absent because there is no profit in it. The most obvious alternative, at least to me, is to retain free access for the user (the paid subscription alternative ends up limiting users too much, so it does not sufficiently leverage the network externalities) and thus pay via tax revenues. This would be in accord with treating the software as a public good. Taxes are the right way to fund public goods.
How might this work? If a municipality or some other jurisdiction provided access to some software for its members, the municipality would do so by writing a contract with the provider. Members would then log in through the municipality's portal and be presented with an ad-free version of the software. I want to note that this is not so unusual as a method of provision. My university, for example, provides eligible users - faculty, staff, students, and in some cases alumni as well - with free access to commercial software, for example Box.com and Office 365. So the market already has this sort of model in place. The only things that would need to change are that municipalities or other jurisdictions would do the procurement and that they would need to provide front ends so that members have access while non-members do not. The online environment, then, could be without ads for members but would still have ads for non-members.
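To make the mechanics concrete, here is a minimal sketch in Python of how a provider might decide which version of the feed to serve. Everything here is hypothetical and invented for illustration - the municipality names, user names, and function names - and a real system would rely on a standard single sign-on protocol such as SAML or OpenID Connect rather than this toy membership check.

```python
# Hypothetical sketch: the provider serves the ad-free experience only
# to users vouched for by a municipality with a procurement contract.
# In practice the portal would pass a signed assertion, not a name
# looked up in a server-side table as below.

# Municipalities with active contracts, mapped to their members
# (all names invented for illustration).
CONTRACTED_MUNICIPALITIES = {
    "springfield": {"alice", "bob"},
    "shelbyville": {"carol"},
}

def is_member(municipality: str, user_id: str) -> bool:
    """Return True if the user belongs to a contracted municipality."""
    members = CONTRACTED_MUNICIPALITIES.get(municipality, set())
    return user_id in members

def render_feed(user_id: str, municipality: str | None) -> str:
    """Serve ads only to users outside any contracted municipality."""
    if municipality and is_member(municipality, user_id):
        # Per the contract: no ads and, as discussed below, no
        # scraping of this user's information.
        return f"ad-free feed for {user_id}"
    return f"ad-supported feed for {user_id}"

print(render_feed("alice", "springfield"))  # ad-free feed for alice
print(render_feed("dave", None))            # ad-supported feed for dave
```

The point of the sketch is simply that the fork between the two experiences sits at login, which is exactly where the municipality's portal would hand the user off to the provider.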
Part of the agreement and what would rationalize such procurement by the municipality is that the provider agrees not to scrape information from members of the municipality. It is this item in the contract that justifies the public provision of the online environment. In other words, people pay with their taxes to protect the privacy of members of their community.
This is obviously a tricky matter, because if I live in such a community that provides access and one of my friends is using an ad-supported version, wouldn't my information get scraped anyway, just because of that? There are two possible answers to that question which are consistent with protecting my information. One is to divide platforms into those that are only paid for by the various municipalities, so no user is in the ad-supported category. The other is to (heavily) regulate how the information of users who don't see the ads gets collected. Each of these poses challenges for implementation. But do remember there is no free lunch, so we need to work through which alternative is better, rather than cling to an idealistic vision (total privacy protection coupled with no intrusion) that is actually not feasible.
Policing the Online Environment - News, Fake News, and Ads
One reason to note my own usage of the Internet from back in the 1990s is to mark the time since. We have been in Wild West mode for those two decades plus. We probably need something more orderly moving forward. What should that more orderly something look like?
An imperfect comparison, which might be useful nonetheless, is driving on the Interstate. As there is a general preference to drive faster than the speed limit, most of us would prefer at an individual level that there were no highway patrol. That would be liberating. On the other hand, we also care about the reckless driving of others and would prefer to limit it, if possible. The highway patrol clearly has a role in that, as do the fine for speeding and the way auto insurance premiums are impacted by a speeding ticket. The system tries to balance these things, however imperfectly.
In the previous section where I talked about municipality access, part of the deal is that members of the municipality do not see paid-for content. Is that consistent with the software provider's incentives? Think about the contract negotiation between the software provider and the municipality. What will determine the terms of such a contract? Might usage by municipality members be a prime determinant? If so, the provider has an incentive to jimmy up usage and might use salacious content for that purpose. As with speeding on the highway, an individual user would likely gravitate toward the salacious content but might prefer that other users not do so, to preserve the safety of the environment. One would think, then, that some form of policing would be necessary to achieve that control of other users.
Speeding is comparatively easy to measure. Determining what content is suitable and what content is not is far more difficult. One possible way out is for the provider to block all content from non-friend sources. Subject to an acceptable use policy, users themselves would be able to bring in any content they see fit via linking (for example, I'm linking to certain pieces in this post), with the software provider out of the business of content push altogether. Then the policing would amount to verifying whether the provider stuck to that agreement, plus monitoring users who are actually trolls. Another possible way is to generate an approved list of content providers and to accept content only from providers on that list, with users opting in to particular content providers rather than the software provider pushing content at them arbitrarily, though the provider might retain the ability to push certain approved content. A sketch of this filtering logic appears below.
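Here is a minimal sketch, again in Python, of what such a filter might look like, combining both approaches: friend content is always admitted, while provider content must clear both the platform-wide approved list and the individual user's opt-in. The provider names, data structures, and function name are all hypothetical, chosen purely for illustration.

```python
# Hypothetical sketch of the approved-list idea: content reaches a
# user's feed only if it comes from a friend, or from a provider that
# is both approved platform-wide and opted in to by this user.

APPROVED_PROVIDERS = {"wire_service", "local_paper"}  # invented names

def should_show(item_source: str, friends: set[str],
                opted_in_providers: set[str]) -> bool:
    """Admit friend content always; admit provider content only when
    the provider is approved and the user has opted in to it."""
    if item_source in friends:
        return True
    return (item_source in APPROVED_PROVIDERS
            and item_source in opted_in_providers)

friends = {"alice", "bob"}
opted_in = {"local_paper"}

print(should_show("alice", friends, opted_in))           # True: a friend
print(should_show("local_paper", friends, opted_in))     # True: approved and opted in
print(should_show("wire_service", friends, opted_in))    # False: approved but not opted in
print(should_show("clickbait_farm", friends, opted_in))  # False: not approved
```

Notice that under this design the policing question becomes auditable: one can check whether the provider's feed ever contains an item that this rule would have rejected.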
A point to be reckoned with here is that self-policing by the software provider is apt to fail, because the incentives aren't very good for it. But, on the flip side, we don't have a good model of effective yet non-intrusive online policing in broad social networks. (In narrow cases, for example on my class site, the site owner can function in this policing role. I tell my students I will delete those comments I find inappropriate. I can't ever recall doing that for a student in the class, but since I use a public site I can recall deleting comments from people who weren't in the class.)
The concept of online police may be anathema to those of us weaned on the mantra - the Internet should be free and open. Wouldn't policing be used to suppress thoughtful but contrary opinion? Before answering that question, we should ask why there isn't more abuse by traffic cops. They largely do the job they are paid to do. If that system works, more or less, couldn't some online analog work as well?
Wrap Up
Some time ago I wrote a post called Gaming The System Versus Designing It, where I argued that we've all become very good gamers, but most of us don't have a clue about what good system design looks like. There is a problem when a large social network provider operates with a gamer mentality, even as it is providing a public good on a very large scale. We need more designer thinking on these matters. In this post, my goal was not to provide an elegant design alternative to the present situation. I, for one, am not close to being ready to do that. But I hope we can start to ask which issues a good design would need to address. If others begin to consider the same sort of questions, that would be progress.