Archive for the ‘Sociology’ Category

The ultimate pathogen vector

March 31, 2014

If civilization succumbs to a deadly pandemic, we will all know what the vector was. Every physician, nurse, dentist, hygienist, and health care worker is bound to check their smartphone at some point before, during, or after seeing a patient, and nobody sterilizes it afterwards. The fully hands-free smartphone could be the most important invention of the 21st century.

Happiness and divisive inhibition

October 9, 2013

The Wait But Why blog has an amusing post on why Generation Y yuppies (GYPSYs) are unhappy, which I found through the blog of Michigan economist Miles Kimball. In short, it is because their expectations exceed reality and they feel entitled. What caught my eye was that the post defined happiness as “Reality – Expectations”. The key point is that this is a subtractive expression. My college friend Peter Lee, now Professor and Director of the University of Manchester X-ray imaging facility, used to define happiness as “desires fulfilled beyond expectations”. I always interpreted this as a divisive quantity, meaning “Reality/Expectations”.

Now, the definition does have implications if we actually try to use it as a model for how happiness changes with some quantity like money. For example, consider the model where reality and expectations are both proportional to money. Then happiness = a*money – b*money = (a – b)*money. As long as b is less than a, money always buys happiness, but if a is less than b, more money brings more unhappiness. However, if we consider the divisive model, then happiness = (a*money)/(b*money) = a/b, and happiness doesn’t depend on money at all.
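A toy sketch in Python (my own construction; the constants a and b are arbitrary illustrative choices) makes the contrast concrete:

```python
# If both reality and expectations scale linearly with money, subtractive
# happiness grows without bound while divisive happiness is flat.
def happiness_subtractive(money, a=2.0, b=1.0):
    return a * money - b * money      # reality minus expectations

def happiness_divisive(money, a=2.0, b=1.0):
    return (a * money) / (b * money)  # reality over expectations

subtractive = [happiness_subtractive(m) for m in (1, 10, 100)]  # grows with money
divisive = [happiness_divisive(m) for m in (1, 10, 100)]        # constant a/b
```

Under the subtractive definition, more money means ever more (or ever less) happiness; under the divisive definition, scaling both reality and expectations together leaves happiness unchanged.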

However, the main reason I bring this up is that it is analogous to the two possible ways to model inhibition (or adaptation) in neuroscience. Neurons in the brain generally interact with each other through two types of synapses – excitatory and inhibitory. Excitatory synapses depolarize a neuron and bring its potential closer to threshold, whereas inhibitory synapses hyperpolarize the neuron and move its potential farther from threshold (although there are ways this can be violated). For neurons receiving stationary asynchronous inputs, we can consider the firing rate to be some function of the excitatory input E and inhibitory input I. In subtractive inhibition, the firing rate has the abstract form f(E – I), whereas for divisive inhibition it has the form f(E)/(I + C), where f is some thresholded gain function (i.e. zero below threshold, positive above threshold) and C is a constant that prevents the firing rate from diverging. There are some critical differences between the two. Divisive inhibition works by reducing the gain of the neuron, i.e. it makes the slope of the gain function shallower, while subtractive inhibition effectively raises the threshold. These properties have great computational significance, which I will get into in a future post.
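A minimal sketch of the two schemes (the piecewise-linear gain function and the constants I and C below are my own illustrative choices, not from any particular model):

```python
# Thresholded gain: zero below threshold 1.0, linear above.
def f(x):
    return max(x - 1.0, 0.0)

I, C = 2.0, 1.0  # inhibitory input and saturation constant

# Firing rates over a range of excitatory drive E = 0, 1, ..., 10:
subtractive = [f(e - I) for e in range(11)]     # f(E - I): threshold shifts right by I
divisive = [f(e) / (I + C) for e in range(11)]  # f(E)/(I + C): slope above threshold shrinks
```

Subtractive inhibition keeps the original slope but delays firing onset (silent until E exceeds the threshold plus I), while divisive inhibition fires at the original threshold but with a shallower slope.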

TB, streptomycin, and who gets credit

September 4, 2013

The Science Show recently ran a feature story about the discovery of streptomycin, the first antibiotic effective against tuberculosis, a disease that killed some 2 billion people in the 18th and 19th centuries. Streptomycin was discovered in 1943 by graduate student Albert Schatz, who worked in the lab of Professor Selman Waksman at Rutgers. Waksman was the sole winner of the 1952 Nobel Prize for this work. The story is narrated by the author of the book Experiment Eleven, who paints Waksman as the villain and Schatz as the victim. Evidently, Waksman convinced Schatz to sign away his patent rights to Rutgers but secretly negotiated a deal to obtain 20% of the royalties. When Schatz discovered this, he sued Waksman and obtained a settlement. However, this turned the scientific community against him, and he was forced out of microbiology into science education. To me, this is just more evidence that prizes and patents are incentives for malfeasance.

The problem with democracy

August 16, 2013

Winston Churchill once said that “Democracy is the worst form of government, except for all those other forms that have been tried from time to time.” The current effectiveness of the US government does make one wonder whether even that is true. The principle behind democracy is essentially utilitarian – a majority, or at least a plurality, decides on the course of the state. However, implicit in this principle is the assumption that individuals’ utility functions match their participation functions.

For example, consider environmental regulation. For most people, utility as a function of the allowable emissions of some harmful pollutant like mercury will be downward sloping – the less the pollutant is emitted, the higher their utility. However, for a small minority of polluters it will be upward sloping, with a much steeper slope. Let’s say that the summed utility gained by the bulk of the population from strong regulation is greater than that gained by the few polluters from weak regulation. If the democratic voice one has in affecting policy were proportional to summed utility, then the small gain to the many would outweigh the large gain to the few. Unfortunately, this is not usually the case. More often, the translation of utility into legislation and regulation is not proportional but passes through a very nonlinear participation function with a sharp threshold. The bulk of the population is below the threshold, so they provide little or no voice on the issue. The minority utility is above the threshold and provides a very loud voice that dominates the result. Our laws are thus systematically biased toward protecting the interests of special interest groups.
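The threshold effect can be sketched in a few lines of Python (all the numbers here are invented for illustration):

```python
# The majority's summed utility exceeds the minority's, but a sharp
# participation threshold silences the majority entirely.
majority = [1] * 1000    # many people, each with a small stake
minority = [300] * 2     # two polluters, each with a large stake
threshold = 100          # stakes below this produce no political voice

def voice(stakes):
    """Participation with a hard threshold: small stakes go unheard."""
    return sum(s for s in stakes if s >= threshold)

summed_utility = (sum(majority), sum(minority))  # (1000, 600): majority's total is larger
heard = (voice(majority), voice(minority))       # (0, 600): only the minority is heard
```

Proportional participation would let the many small stakes win; the threshold hands the result to the few large ones.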

The way out of this trap is either to align everyone’s utility functions or to linearize the participation functions. We could try to use regulation to dampen the effectiveness of minority participation functions, or use public information campaigns to change utility functions or increase the participation of the silent majority. Variations of these methods have been tried with varying degrees of success. Then there is always the old standby: install a benevolent dictator who respects the views of the majority. That one really doesn’t have a good track record though.

Failure at all scales

March 12, 2013

The premise of most political systems since the Enlightenment is that the individual is a rational actor. The classical liberal (now called libertarian) tradition believes that social and economic ills are due to excessive government regulation and intervention; if individuals are left to participate unfettered in a free market, these problems will disappear. Conversely, the traditional Marxist/Leninist left posits that the capitalist system is inherently unfair and can only be cured by replacing it with a centrally planned economy. However, the lesson of the twentieth century is that there is irrationality, incompetence, and corruption at all levels, from individuals to societies. We thus need regulations, laws, and a government that take into account the fact that we are fallible at all scales – including in the regulations, laws, and government themselves.

Markets are not perfect and often fail, but they are clearly superior to central planning for the distribution of most resources (particularly consumer goods). However, they need to be monitored and regulated, and when markets fail, government should intervene. Even the staunchest libertarian would support laws that prevent the elimination of your competitors by violence. Organized crime and drug cartels are an example of how businesses would run in the absence of laws. Regulations and laws should, however, have built-in sunset clauses that force them to be reviewed after a finite length of time. In some cases, a freer market makes sense. I believe that the government is bad at picking winners, so if we want to promote alternative energy, we shouldn’t be helping nascent green industries but rather taxing fossil fuel use and letting the market decide what is best. Making cars more fuel-efficient may not lead to less energy use; it may just encourage people to drive more. If we want to save energy, we should make energy more expensive. We should also make regulations as universal and simple as possible to minimize regulatory capture. I think means testing for social services like Medicare is a bad idea because it will just encourage people to find clever ways to circumvent it. The same probably goes for need-based welfare. We should just give everyone a minimum income and let everyone keep any income above it. This would provide a safety net but not a disincentive to work. Some people will choose to live on this minimum income, but as I argued here, I think they should be allowed to. If we want to address wealth inequality, then we should probably tax wealth directly rather than income. We want to encourage people to make as much money as possible but then spend it to keep the wealth circulating. By the same reasoning, I don’t like a consumption tax: our economy is based on consumer spending, so we don’t want to discourage that (unless for reasons other than economic ones).

People do not suddenly become selfless and rational when the political system changes, but systems can mitigate the effects of their irrational and selfish tendencies. As the work of Kahneman, Tversky, Ariely, and others has shown, rational and scientific thinking does not come naturally to people. Having the market decide what is the most effective medical treatment is not a good idea. A perfect example is a recent EconTalk podcast with libertarian-leaning economist John Cochrane on healthcare. Cochrane suggested that instead of seeing a doctor first, he should just be allowed to buy antibiotics for his children whenever they had an earache. The most laughable part was his idea that the rules against self-administration of drugs exist to protect uneducated people. Actually, the rules are there to protect highly educated people like him, who think that expertise in one area transfers to another. The last thing we want is even more antibiotic use and more antibiotic-resistant bacterial strains. I definitely do not want to live in a society where I have to wait for the market to penalize companies that provide unsafe food or build unsafe buildings. It doesn’t help me if my house collapses in an earthquake because the builder used inferior materials. Sure, they may go out of business, but I’m already dead.

There is no single perfect system or set of rules that one should always follow. We should design laws, regulations, and governments that are adaptable and adjust according to need. The US Constitution has been amended 27 times. The last amendment, in 1992, merely changed the rules on salaries for elected officials; the 26th, in 1971, made 18 the universal threshold age for voting. We are thus due for another amendment, and I think the 2nd amendment, which guarantees the right to bear arms, is a place to start. We could make it more explicit what types of arms are protected and what types can be regulated by local laws. If we want to reduce gun violence, then gun regulation makes sense. People do things they later regret; if one is in the heat of an argument and a gun is available, it could be used impulsively. It takes a lot of training and skill to use a gun effectively, and accidents will happen. In the case of guns, failure often leads to death. I would prefer to live in a society where guns are scarce rather than one where everyone carries a weapon like the old Wild West.

Aboriginals and Canada

February 21, 2013

This lecture by John Ralston Saul captures the essence of Canada better than anything else I’ve ever heard or read.  Every Canadian should listen and non-Canadians could learn something too!

A new strategy for the iterated prisoner’s dilemma game

September 4, 2012

The game theory world was stunned recently when Bill Press and Freeman Dyson found a new strategy for the iterated prisoner’s dilemma (IPD) game. They show how you can extort an opponent such that the only way the opponent can maximize their own payoff is to hand you an even higher one. The paper, published in PNAS (link here) with a commentary (link here), is so clever and brilliant that I thought it would be worthwhile to write a pedagogical summary for those unfamiliar with some of the methods and concepts it uses. The paper shows how knowing a little bit of linear algebra can go a really long way toward exploring deep ideas.

In the classic prisoner’s dilemma, two prisoners are interrogated separately. Each has two choices. If they both stay silent (cooperate), they each get a year in prison. If one confesses (defects) while the other stays silent, the defector is released while the cooperator gets 5 years. If both defect, they both get 3 years in prison. Hence, even though the best joint outcome is for both to cooperate, the only logical thing to do is to defect. You can watch this played out on the British television show Golden Balls (see example here). Usually the payoff is expressed as a reward: if both cooperate, they each get 3 points; if one defects and the other cooperates, the defector gets 5 points and the cooperator gets zero; and if both defect, they each get 1 point. Thus the combined reward is higher if they both cooperate, but since they can’t trust their opponent, it is only logical to defect and guarantee at least 1 point.
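The logic of the one-shot game can be checked mechanically. Here is the reward version of the payoff matrix in a few lines of Python (a sketch of my own, encoding exactly the numbers above):

```python
# Payoffs as (row player, column player) rewards:
# both cooperate -> 3 each; defector vs cooperator -> 5 vs 0; both defect -> 1 each.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

# Defection strictly dominates: whatever the opponent plays,
# defecting earns more than cooperating.
defect_dominates = all(PAYOFF[('D', opp)][0] > PAYOFF[('C', opp)][0]
                       for opp in ('C', 'D'))

# Yet mutual cooperation yields the higher combined reward.
joint_cooperate = sum(PAYOFF[('C', 'C')])  # 6
joint_defect = sum(PAYOFF[('D', 'D')])     # 2
```

This is the dilemma in miniature: the individually dominant move produces the collectively worse outcome.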

The prisoner’s dilemma changes if you play the game repeatedly, because you can now adjust to your opponent and it is not immediately obvious what the best strategy is. Robert Axelrod brought the IPD to public attention when he organized a tournament three decades ago. The results are published in his 1984 book The Evolution of Cooperation. I first learned about them in Douglas Hofstadter’s Metamagical Themas column in Scientific American in the early 1980s. Axelrod invited a number of game theorists to submit strategies to play IPD, and the winner, submitted by Anatol Rapoport, was tit-for-tat: you cooperate on the first move and then do whatever your opponent did on the previous move. Since this is a cooperative strategy with retribution, it has served ever since as an example of how cooperation could evolve. Press and Dyson now show that you can win by being nasty. Details of the calculations are below the fold.
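To make tit-for-tat concrete, here is a minimal IPD simulation of my own (a sketch, not Press and Dyson's method), using the standard reward payoffs described above:

```python
# R=3 (both cooperate), T=5 / S=0 (defect vs cooperate), P=1 (both defect).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    """Iterate the game, each strategy seeing only the opponent's history."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

Against always-defect, tit-for-tat loses only the first round and nearly matches it thereafter; against itself, it cooperates throughout and both players do far better.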


What’s your likelihood?

August 10, 2012

Different internal Bayesian likelihood functions may be why we disagree. The recent shooting tragedies in Colorado and Wisconsin have set off a new round of arguments about gun control. Following the debate has made me realize that the reason the two sides can’t agree (and why different sides of almost every controversial issue can’t agree) is that their likelihood functions are completely different. The optimal way to make inferences about the world is to use Bayesian inference, and there is some evidence that we are close to optimal in some circumstances. Nontechnically, Bayesian inference is a way to update the strength of your belief in something (i.e. its probability) given new data. You combine your prior probability with the likelihood of the data given your internal model of the issue (and then normalize to get a posterior probability). For a more technical treatment of Bayesian inference, see here. I posted previously (see here) that I thought drastic differences in prior probabilities explain why people don’t seem to update their beliefs when faced with overwhelming evidence to the contrary. However, I’m starting to realize that the main reason might be that they have completely different models of the world – which, in technical terms, means different likelihood functions.
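The update rule fits in one line of Python. In this sketch (all the numbers are invented for illustration), two observers share the same prior and see the same data, but their different likelihood functions send their beliefs apart:

```python
# Bayes' rule for a binary hypothesis: posterior = prior * likelihood / evidence.
def posterior(prior, lik_if_true, lik_if_false):
    evidence = prior * lik_if_true + (1 - prior) * lik_if_false
    return prior * lik_if_true / evidence

prior = 0.5  # both start agnostic about the hypothesis

# Model A: the observed event is much likelier if the hypothesis is true.
side_a = posterior(prior, lik_if_true=0.8, lik_if_false=0.2)  # belief rises

# Model B: the event is equally likely either way, so the data teach nothing.
side_b = posterior(prior, lik_if_true=0.5, lik_if_false=0.5)  # belief unchanged
```

Same prior, same data, different likelihoods: one observer ends up at 0.8, the other stays at 0.5, which is exactly the kind of divergence the post describes.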

Consider the issue of gun control. The anti-gun control side argues that “guns don’t kill people, people kill people” and that restricting access to guns won’t prevent determined malcontents from coming up with some means to kill. The pro-gun control side argues that the amount of gun violence is inversely proportional to the ease of access to guns. After all, you would be hard pressed to kill twenty people in a movie theatre with a sword. The difference in these two viewpoints can be summarized by their models of the world. The anti-gun control people believe that the distribution of the will of people who would commit violence looks like this

where the horizontal line represents a level of gun restriction.  In this world view, no amount of gun restriction would prevent these people from undertaking their nefarious designs.  On the other hand, the pro-gun control side believes that the same distribution looks like this

in which case, the higher you set the barrier, the fewer crimes are committed. Given these two views of the world, it is clear why new episodes of gun violence like the recent ones in Colorado and Wisconsin do not change people’s minds. What you would need to do is teach the other side a new likelihood function, and that may take decades, if it happens at all.



Financial Fraud and incentives

April 12, 2012

I highly recommend this EconTalk podcast on financial fraud with William Black. It gives a great summary of how financial fraud is perpetrated and clearly highlights the differences of opinion on the cause of the recent financial crisis. What most people seem to agree on is that the crisis was caused by a system-wide Ponzi scheme. Lots of mortgages were issued (some were liar loans, where the recipients clearly were not qualified, while others were just risky); these loans were then packaged into mortgage-backed securities (e.g. CDOs) by investment banks, who then sold them to other banks and institutional investors like hedge funds and pension funds. As long as housing prices increased, everyone made money. Homeowners didn’t care if they couldn’t pay back the loan because they could always sell the house. The mortgage lenders didn’t care if the loans went bad because they didn’t hold the loans and made money off the fees. Like a classic Ponzi scheme, when the bubble burst everyone lost money, except for those who got out early.

The motivation for the mortgage lenders seems quite straightforward. They were making money on fees, and the more mortgages, the more fees. When they ran out of legitimate people to lend to, they just lent to riskier and riskier people. The homeowners were simply caught up in the bubble mania of the time; I remember people telling me I was an idiot for not buying and that house prices never go down. The question in dispute is what incentives the investment banks and institutional investors had to go along with this scheme. Why were they fueling the bubble? Libertarians like Russ Roberts, the host of EconTalk, believe that they didn’t care if the bubble burst because they knew they would be bailed out by the government; to them, the moral hazard created by government intervention was the culprit. The pro-regulators, like Black, believe that this was a regulatory failure in which the incentives were to commit and reward fraud. The institutional investors simply didn’t do the due diligence they should have because the money was rolling in. Black cites the example of Enron, where incredible profits were booked through accounting fraud, which not only failed to raise alarms among investors but attracted even more money, creating more incentives to perpetuate the fraud. He also notes that thousands of people were prosecuted for fraud in the eighties after the Savings and Loan crisis, but no one has been prosecuted this time around; he believes that unless we criminalize financial fraud, this will only continue. The third view, shared by people like Steve Hsu and Felix Salmon, is that the crisis was mostly due to incompetence (Steve calls this bounded cognition): the investment banks had too much confidence or didn’t fully understand the risks in their financial instruments, and thought they could beat the system. The fourth possibility is that people knew it was a Ponzi scheme but either felt trapped and couldn’t act, or thought they could get out in time. That would make it a pure market failure, in which the rational course of action led to a less efficient outcome for everyone.


Bullies and free loaders

February 29, 2012

I think one way to conceptualize how people at different ends of the current political spectrum think is that all people tend to have an aversion to bullies and free loaders. What differs across political ideologies is whom you consider to be in these categories. For libertarian-minded conservatives, the government is a bully and recipients of government largesse are free loaders. For the Marxist left (which no longer seems to exist in the United States), capitalists are both bullies and free loaders. It is not too difficult to see the origins of, and assumptions leading to, such viewpoints. However, even though these ideas represent extremes of the political spectrum, they both rely on optimistic (albeit different) assumptions about human nature.

The libertarian believes that all forms of regulation are a direct impingement on their freedoms and a suppression of economic prosperity for all. In their view, as espoused for example by Ayn Rand, noble entrepreneurs are thwarted in their attempts to be creative and productive by corrupt and irrational regulations. The fruits of their labour are expropriated by the domineering state and redistributed to the lazy, undeserving hoi polloi. However, conflict arises in the strict libertarian view when exercising one’s freedom impinges on the freedom of someone else. Libertarians, as propounded by Milton Friedman, believe that individuals can rationally settle disputes by negotiating or in civil courts. I think this represents a highly optimistic view of human nature. What will more likely happen, as articulated brilliantly on Noahpinion, is that those with more wealth and power will simply bully those with less. Society will be about navigating the realms of local bullies.

In the Marxist viewpoint, economic value comes from a combination of labour and capital. However, the system is rigged so that capital is controlled by a few capitalists who exploit labour for surplus value (i.e. the excess wealth generated by labour beyond what workers need to live). Hence, the capitalists are free loaders who build wealth and more capital on the backs of the workers. The more capital they accumulate, the more the workers are beholden to them. This will eventually lead to a revolution of some sort, and the cycle starts again. Marx’s solution to break the cycle was to let labour take control of the capital and reap the rewards of its collective effort. However, this solution again requires an optimistic reading of human nature: that (a) people would not take advantage of a collective society and free ride, and (b) people would be happy in a society where one does not directly reap the rewards of one’s own efforts.

Ideally, I would like an economic system that is both fair and realistic about human nature. A system that accepts that regulations are required to prevent bullying but also recognizes that regulations themselves can be captured and used to bully and free load. One that acknowledges that unrestricted welfare can lead to free loading and be a disincentive to productivity, but also realizes that not providing a social safety net is a form of bullying, because people have no means of surviving beyond participation in the existing economic system – we have no option to opt out. I think the American liberal left has designs on doing this but has not been fully successful because of its own incoherence and push back from the right. Additionally, there is an inherent asymmetry in the current political debate. The Marxist viewpoint has all but disappeared from American political thinking except as a convenient foil for certain aspirants to higher office. While there has been some recent revitalization on the left, such as the Occupy Wall Street movement, it has not been able to effectively convey how the top one percent are both bullying and free riding.

Cognitive dissonance

February 12, 2012

The New York Times has a story today describing how the American middle class is becoming more reliant on government aid, much to its chagrin. However, the reaction of many of the people interviewed is animosity towards government programs and support for culling them, even though that would hurt them economically.

New York Times: One of the oldest criticisms of democracy is that the people will inevitably drain the treasury by demanding more spending than taxes. The theory is that citizens who get more than they pay for will vote for politicians who promise to increase spending.

But Dean P. Lacy, a professor of political science at Dartmouth College, has identified a twist on that theme in American politics over the last generation. Support for Republican candidates, who generally promise to cut government spending, has increased since 1980 in states where the federal government spends more than it collects. The greater the dependence, the greater the support for Republican candidates.

Conversely, states that pay more in taxes than they receive in benefits tend to support Democratic candidates. And Professor Lacy found that the pattern could not be explained by demographics or social issues.

Cognitive dissonance is the term in psychology for the uncomfortable feeling of simultaneously holding two conflicting thoughts, and for the attempts to rationalize the inconsistency. The political dynamics currently playing out in the United States may be a giant manifestation of this phenomenon. A telling aspect of the article was that many of the people interviewed acknowledged that they could not survive without government assistance but felt that they did not deserve such help, and preferred that it be reduced rather than subjecting others to higher taxes to pay for it. This rather honorable attitude stands in stark contrast to the premise of Charles Murray’s heavily debated new book, Coming Apart (see the New York Times review here), which argues that the economic travails of the white working class are due largely to a lapse in moral values. What was also striking in the article was that there was no sense that the dire economic situation these people were facing was due to the economic game being stacked against them. There was just a silent resignation that this is the way things are. The American mythos of the self-reliant and self-made individual is a powerful metaphor firmly implanted in a large fraction of the population. People will not always support policies that are in their economic interest. This facility for self-denial is a large part of what makes us human. How we obtained it is still an unresolved problem in evolutionary biology.


Judicial system versus Bayesian brain

July 11, 2011

I think the recent uproar over the acquittal of Casey Anthony clearly shows how our internal system of inference can be at odds with the American judicial system. For those of you who don’t pay attention to the mainstream media: Casey Anthony was a young mother whose toddler was found dead. What captivated the American public was that the toddler had been missing for a month before Anthony reported it. She lied to her parents and the authorities about the whereabouts of the child and even appeared celebratory in public during the period of the child’s disappearance. In the court of public opinion, Anthony was clearly guilty: the fact that a mother showed no anxiety whatsoever over the disappearance of her child clearly indicated that she was the culprit.

The American judicial system requires that if there is any reasonable doubt of guilt, a person must be acquitted; the burden of proof is on the prosecution. In this case, there were no witnesses and no physical evidence linking Anthony to the death of the child, or even establishing that the child was murdered. Thus there was the remote possibility that Anthony was not responsible for the child’s death but simply took advantage of the situation, as macabre as that may be. Even though the probability of this conjunction of unlikely events – a mother wishing to be free of her child and that child independently going missing – is exceedingly low, it is still nonzero, and thus the jury was forced to acquit.

Now, if a single piece of evidence had linked Anthony to the crime, say a fingerprint or DNA sample, then she most likely would have been found guilty. The interesting aspect is that there is also an equally low probability that someone could have planted that evidence to frame her. Thus, reasonable doubt is not a global quantity according to the law. It is not sufficient that the total probability that the accused is guilty be high; it must also be high in each of several categories, i.e. motive, opportunity, and direct physical evidence or witnesses. Circumstantial evidence alone is insufficient to convict. However, it appears that our brains do not work this way: we seem to take the global probability of guilt and go with that.
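The contrast can be made concrete with a toy calculation (all numbers invented; the simple mean stands in for whatever aggregate the brain actually computes):

```python
# Evidence strength by category, on a 0-1 scale.
evidence = {'motive': 0.95, 'opportunity': 0.95, 'physical': 0.10}
bar = 0.6  # stand-in for "beyond reasonable doubt"

# "Bayesian brain": one pooled strength of belief across all evidence.
pooled = sum(evidence.values()) / len(evidence)  # about 0.67
brain_convicts = pooled > bar

# Court: reasonable doubt in any single category forces acquittal.
court_convicts = all(p > bar for p in evidence.values())
```

With strong motive and opportunity but weak physical evidence, the pooled belief clears the bar and the "brain" convicts, while the per-category standard acquits, which is the gap between public opinion and the verdict in this case.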

Crime and immigration

July 4, 2011

Noted sociologist Richard Florida has an opinion piece in the Financial Times and The Atlantic (see here) about how immigration may be responsible for the recent decline in violent crime in cities. Many explanations have been given for why crime has decreased since the nineteen nineties. The bestselling book Freakonomics suggested that the decline was due to legalized abortion, which meant fewer unwanted children who would go on to become criminals. Florida shows that there is a strong negative correlation between the presence of large immigrant communities and the crime rate. Again, like all epidemiological results, this correlation may or may not be significant, much less have causal value. That is not to say that it is not correct. Immigrant neighborhoods may have a greater sense of small-town community that discourages crime, but if the opposite correlation had been found, an equally plausible just-so story could have been concocted. I think the crucial point about this result, along with all other explanations of complex phenomena, is that we are drawn towards single universal explanations – I am all the time – even though there may be no reason why an explanation should even exist in a few hundred bits. The opposite view would be that a single explanation is implausible: something like crime involves millions upon millions of degrees of freedom, so why should it be compressible to a few hundred bits? However, if the phenomenon is consistent across many cities and regions, then a global explanation may be in order. The variance around the mean is also important: if the variability is low, a universal explanation carries more weight. I think we tend to either embrace reduced descriptions or reject them outright based on the ideas presented. However, in many cases, a careful examination of the data may at least tell us whether a reduced description is warranted.

