The demise of Barnes and Noble

Near the end of the twentieth century, there was a battle between small bookstores and the big chains like Barnes and Noble and Borders, typified in the film You’ve Got Mail. The chains won because they had lower prices and larger stocks, and they served as mini community centers where people liked to hang out. It was sad to see the independent bookstores die, but the replacement was actually a nice addition to the neighborhood. The Barnes and Noble business model was to create attractive places to spend time, with play areas for children, a cafe with ample seating, and racks and racks of magazines. The idea was that the more time you spent there, the more money you would spend, and it worked for at least ten years. Yet, at the height of their dominance, the seeds of their destruction could be plainly seen. Amazon was growing even faster, and a new shopping model emerged: people would spend time browsing in B and N and then go home to order the books on Amazon. The advent of the smartphone only hastened the demise because people could order from Amazon directly from the store. The large and welcoming B and N store became a free sample service for Amazon. Borders is already gone, and Barnes and Noble is on its last legs. The one I frequent will be closing this summer.

The loss of B and N will be a blow to many communities. It is a favorite locale for retirees to congregate. I think this is a perfect example of a market failure. There is a clear demand for the product but no viable way to monetize it. However, there is already a model for providing the same service as B and N that has worked for a century, and it is called a library. Libraries are still extremely popular and provide essential services, particularly to low-income people. The Enoch Pratt Free Library in Baltimore has a line of people every morning before it opens, waiting to use the computers and access the internet. While libraries have been rapidly modernizing, relaxing behavior rules and adding cafes, they still have short hours and do not provide the comforting atmosphere of B and N.

I see multiple paths forward. The first is that B and N goes under and maybe someone invents a new private model to replace it. Amazon may create bookstores in its place that act more like showrooms for its products than profit-making entities. The second is that a philanthropist will buy it and endow it as a nonprofit entity for the community, much like Carnegie and other robber barons of the nineteenth century did with libraries. The third is that communities will start to take over the spaces and create a new type of library that is subsidized by taxpayers and has the same hours and ambience as B and N.

Productivity, marginal cost, and monopoly

[Figure: supply and demand curves]

In any introductory economics class, one is introduced to the concept of supply and demand. Supply and demand curves relate the price of a product to the quantity that suppliers would produce and that buyers would purchase at that price, respectively. Supply curves have positive slope, meaning that the higher the price, the more suppliers will produce; demand curves slope the other way. If a market is perfectly competitive, then the supply curve is determined by the marginal cost of production, which is the incremental cost of making one additional unit. Firms will keep producing more goods as long as the price exceeds the marginal cost.
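Here is a toy numerical sketch of that logic in Python (the linear curves and the numbers are hypothetical, purely for illustration):

# Toy sketch: linear demand P(q) = a - b*q and a supply curve given by the
# marginal cost MC(q) = c + d*q. A competitive market settles where they cross.
def equilibrium(a, b, c, d):
    """Return (quantity, price) where a - b*q = c + d*q."""
    q = (a - c) / (b + d)
    return q, a - b * q

# Hypothetical numbers: demand P = 100 - 2q, marginal cost MC = 20 + 0.5q
print(equilibrium(100, 2, 20, 0.5))  # (32.0, 36.0)
# A productivity gain that lowers marginal cost to MC = 5 + 0.5q lowers the price
print(equilibrium(100, 2, 5, 0.5))   # (38.0, 24.0)

The point of the sketch is simply that anything that pushes the marginal cost curve down pushes the equilibrium price down with it.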

Increases in productivity lead to decreases in marginal cost, and since the advent of the industrial revolution, technology has been increasing productivity. In some cases, like software or recorded music, the marginal cost is already essentially zero. The cost for Microsoft to make one more copy of Office is minuscule. However, if the marginal cost is zero, then according to classical microeconomic theory firms would produce goods and give them away for free. Public intellectual Jeremy Rifkin has been writing about a zero marginal cost society for several years now (e.g. see here and here), and has proposed that ubiquitous zero marginal cost will lead to a communitarian revolution where capitalism is overturned and people collaborate and share goods along the lines of the open source software model, which has produced the likes of Wikipedia, Linux, Python, and Julia.

I’m not so sanguine. There are two rational strategies for firms to pursue to increase profit. The first is to lower costs and the second is to create monopolies. In completely unregulated markets, like drug trafficking, suppliers seem to spend much of their time and effort pursuing monopolies by literally killing their competition. In the absence of the violence option, firms can gain monopolies by buying or merging with competitors and through regulatory capture that creates barriers to entry. There are also industries where size and success create virtual monopolies. This is what happens for tech companies, where a single behemoth like Microsoft, Google, Facebook, or Amazon completely dominates a domain. Being large is a huge advantage in finance and banking. Entertainment seems to breed random monopoly status, where a single artist garners most of the attention even though objectively there may not be much difference between the top artist and the 100th best-selling one. As costs continue to decrease, there will be even more incentive to create monopolies. Instead of a sharing, collaborative, egalitarian world, a more likely scenario is a world with a small number of entrenched monopolists controlling most of the wealth.

 

Talk at Maryland

I gave a talk at the Center for Scientific Computing and Mathematical Modeling at the University of Maryland today. My slides are here. I apologize for the excessive number of pages, but I had to render each build in my slides; otherwise, many would be unreadable. A summary of the work and links to other talks and papers can be found here.

Technology and inference

In my previous post, I gave an example of how fake news could lead to a scenario of no update of posterior probabilities. However, this situation could arise just from the knowledge of what technology can do. When I was a child, fantasy and science fiction movies always had a campy feel because the special effects were unrealistic looking. When Godzilla came out of Tokyo Harbor, it looked like little models in a bathtub. The Creature from the Black Lagoon looked like a man in a rubber suit. I think the first science fiction movie that looked astonishingly real was Stanley Kubrick’s 1968 masterpiece 2001: A Space Odyssey, which adhered to physics like no film before it and only a handful since. The simulation of weightlessness in space was marvelous, and to me the ultimate attention to detail was the scene in the rotating space station where a mild curvature in the floor could be perceived. The next groundbreaking moment was the 1993 film Jurassic Park, which truly brought dinosaurs to life. The first scene of a giant sauropod eating from a treetop was astonishing. The distinction between fantasy and reality was forever gone.

The effect of this essentially perfect rendering of anything into a realistic image is that we now have a plausible reason to reject any evidence. Photographic evidence can be completely discounted because the technology exists to create completely fabricated versions. This is equally true of audio tapes and anything you read on the Internet. In Bayesian terms, we now have an internal model or likelihood function in which any piece of data could be fabricated with some constant probability. The more cynical you are, the closer this constant is to one. Once the likelihood becomes insensitive to the data, we are in the same situation as before. Technology alone, in the absence of fake news, could lead to a world where no one ever changes their mind. The irony could be that this will force people to evaluate truth the way they did before such technology existed, which is to believe people (or machines) that you trust through relationships built over long periods of time.
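To make that constant concrete, here is a minimal sketch in Python (my own formalization, not anything rigorous): assume each piece of data is fabricated with probability c, and fabricated data is equally likely whether or not the claim is true. As c approaches one, the likelihoods under the two hypotheses converge and the posterior collapses back onto the prior.

# Sketch: data is fabricated with probability c; fabricated data says nothing
# about the truth, so it enters both likelihoods with the same weight.
def posterior(prior_true, like_true, like_false, c, fabricated_like=1.0):
    p_d_true = c * fabricated_like + (1 - c) * like_true
    p_d_false = c * fabricated_like + (1 - c) * like_false
    p_d = p_d_true * prior_true + p_d_false * (1 - prior_true)
    return p_d_true * prior_true / p_d

# Genuine data that strongly favors "true" (likelihood 0.9 vs 0.1), prior 0.5
for c in [0.0, 0.5, 0.9, 1.0]:
    print(c, posterior(0.5, 0.9, 0.1, c))  # roughly 0.90, 0.63, 0.52, 0.50

The more cynical I am (larger c), the less any individual piece of evidence moves me.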

Fake news and beliefs

Much has been written about the role of fake news in the US presidential election. While we will never know how much it actually contributed to the outcome, as I will show below, it could certainly affect people’s beliefs. Psychology experiments have found that humans often follow Bayesian inference: the probability we assign to an event or action is updated according to Bayes’ rule. For example, suppose P(T) is the probability we assign to climate change being real; P(F) = 1-P(T) is our probability that climate change is false. In the Bayesian interpretation of probability, this represents our level of belief in climate change. Given new data D (e.g. news), we will update our beliefs according to

P(T|D) = \frac{P(D|T) P(T)}{P(D)}

What this means is that our posterior probability, or belief that climate change is true given the new data, P(T|D), is equal to the probability that the new data came from our internal model of a world with climate change (i.e. our likelihood), P(D|T), multiplied by our prior probability that climate change is real, P(T), divided by the probability of obtaining such data in all possible worlds, P(D). According to the rules of probability, the latter is given by P(D) = P(D|T)P(T) + P(D|F)P(F), which is the sum of the probability that the data came from a world with climate change and the probability that it came from one without.
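In code, the update is one line. Here is a minimal sketch in Python with made-up numbers (the likelihoods below are purely illustrative):

def bayes_update(prior_true, like_true, like_false):
    """Return P(T|D) from the prior P(T) and the likelihoods P(D|T), P(D|F)."""
    p_d = like_true * prior_true + like_false * (1 - prior_true)
    return like_true * prior_true / p_d

# Start at P(T) = 0.4 and see news that is twice as likely in a world with
# climate change: P(D|T) = 0.6, P(D|F) = 0.3.
print(bayes_update(0.4, 0.6, 0.3))  # posterior rises to about 0.57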

This update rule reveals what will happen in the presence of new data, including fake news. The first thing to notice is that if P(T) is zero, then there is no update. In this binary case, this means that if we believe that climate change is absolutely false or absolutely true, then no data will change our mind. In the case of multiple outcomes, any outcome with zero prior probability (i.e. no support) will never change. So if we hold absolutely certain priors, fake news has no impact because no news has any impact. If we have nonzero priors for both true and false, then if the data is more likely under our true model, our posterior for true will increase, and vice versa. Our posteriors will tend in the direction of the data, and thus fake news could have a real impact.
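A quick numerical check of those corner cases, with the same toy update as above:

def bayes_update(prior_true, like_true, like_false):
    # Same helper as above, repeated so this snippet runs on its own.
    p_d = like_true * prior_true + like_false * (1 - prior_true)
    return like_true * prior_true / p_d

# Evidence that strongly favors "true": P(D|T) = 0.9, P(D|F) = 0.1
for prior in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(prior, bayes_update(prior, 0.9, 0.1))
# Priors of exactly 0 or 1 never move; anything in between shifts toward the data.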

For example, suppose we have an internal model where we expect the mean annual temperature to be 10 degrees Celsius with a standard deviation of 3 degrees if there is no climate change, and a mean of 13 degrees with climate change. Thus, if the reported data is mostly centered around 13 degrees, then our belief in climate change will increase, and if it is mostly centered around 10 degrees, then it will decrease. However, if we get data that is spread uniformly over a wide range, then both models could be equally likely and we would get no update. Mathematically, this is expressed as: if P(D|T)=P(D|F), then P(D) = P(D|T)(P(T)+P(F)) = P(D|T). From the Bayesian update rule, the posterior will be identical to the prior. In a world with lots of misleading data, there is no update. Thus, obfuscation and sowing confusion are very good strategies for preventing updates of priors. You don’t need to refute data; just provide fake examples and bury the real data in a sea of noise.
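Here is a numerical version of the temperature example (the data points are invented; the spread-out set is chosen to be symmetric about the midpoint of the two means, which is one way the two likelihoods end up equal):

import math

def gaussian_like(data, mean, sd=3.0):
    # Likelihood of independent observations under a normal model.
    return math.prod(
        math.exp(-(x - mean) ** 2 / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))
        for x in data
    )

def posterior_true(prior_true, data):
    like_t = gaussian_like(data, 13.0)  # climate change: mean 13 C
    like_f = gaussian_like(data, 10.0)  # no climate change: mean 10 C
    return like_t * prior_true / (like_t * prior_true + like_f * (1 - prior_true))

print(posterior_true(0.5, [12.5, 13.2, 13.8]))            # data near 13: belief goes up
print(posterior_true(0.5, [8.0, 9.5, 11.5, 13.5, 15.0]))  # symmetric spread: stays at 0.5

With the spread-out data the two likelihoods are identical, so the posterior never leaves the prior, no matter how much data comes in.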