Saturday, February 6, 2010

Kant, Freud, Laughter and Brain Science



Many philosophers since Kant have tried to explain jokes, humor, and laughter, notably the French philosopher Henri Bergson and Sigmund Freud at the beginning of the twentieth century. Some of them saw in the joke, and the laughter that follows it, a mechanism of “bewilderment and illumination”. Freud wrote:

“The factor of ‘bewilderment and illumination’, too, leads us deep into the problem of the relation of the joke to the comic. Kant says of the comic in general that it has the remarkable characteristic of being able to deceive us only for a moment. Heymans (Zeitschr. f. Psychologie, XI, 1896) explains how the effect of a joke comes about through bewilderment being succeeded by illumination. He illustrates his meaning by a brilliant joke of Heine’s, who makes one of his characters, Hirsch-Hyacinth, the poor lottery-agent, boast that the great Baron Rothschild had treated him quite as his equal – quite ‘famillionairely’. Here the word that is the vehicle of the joke appears at first to be a wrongly constructed word, something unintelligible, incomprehensible, puzzling. It accordingly bewilders. The comic effect is produced by the solution of this bewilderment, by understanding the word. Lipps (Komik und Humor, 95) adds to this that this first stage of enlightenment – that the bewildering word means this or that – is followed by a second stage in which we realize that this meaningless word has bewildered us and has then shown us its true meaning. It is only this second illumination that produces the comic effect.” (Sigmund Freud, Jokes and Their Relation to the Unconscious, 1905, pp. 27-28).

Now Daniel Elkan writes in New Scientist (“The comedy circuit: When your brain gets the joke”, February 1, 2010):

Take the following exchange from the classic British sitcom Only Fools and Horses, when an anxious "Del Boy" Trotter visits his doctor for a heart check-up. "Do you smoke, Mr Trotter?" asks the doctor. "Not right now, thank you doctor," he responds.

The joke's incongruity, of course, lies in the unlikely offer of a cigarette by a doctor to a patient concerned about his heart. It is only once we understand the mismatch that we get the joke. "Humour seems to be a product of humans' ability to make rapid, intuitive judgements" about a situation, followed by "slower, deliberative assessments" which resolve incongruities, says Karli Watson of Duke University in Durham, North Carolina.

But which parts of the brain carry out these processes? To find out, Joseph Moran, then at Dartmouth College in Hanover, New Hampshire, used functional MRI to scan the brains of volunteers while they watched popular TV sitcoms. The experiments revealed a distinct pattern of neural activity that occurs in response to a funny joke, with the left posterior temporal gyrus and left inferior frontal gyrus seeing the most activity. These regions are normally linked to language comprehension and the ability to adjust the focus of our attention, which would seem to correspond to the process of incongruity-resolution at the heart of a good joke (NeuroImage, vol 21, p 1055).

Further research, conducted by Dean Mobbs, then at Stanford University in California, uncovered a second spike of activity in the brain's limbic system - associated with dopamine release and reward processing - which may explain the pleasure felt once you "get" the joke (Neuron, vol 40, p 1041).

Examining one particular part of the limbic system - the ventral striatum - was especially revealing, as its level of activity corresponded with the perceived funniness of a joke. "It's the same region that is involved in many different types of reward, from drugs, to sex and our favourite music," says Mobbs, now at the MRC Cognition and Brain Sciences Unit in Cambridge, UK. "Humour thus taps into basic rewards systems that are important to our survival."

That reward explains the relaxation and laughter we experience when we finally understand the incomprehensible or paradoxical elements of a joke: the brain rewards us for having solved the puzzle, because understanding a complex environment has survival value.

Thursday, February 4, 2010

Spence on Banking Reform


In an article on the American Recovery for Project Syndicate, Michael Spence, the 2001 Nobel Laureate in Economics, formulates a synthetic judgment on the possibilities for financial reform:

“In fairness, the new rule proposed by former US Federal Reserve Chairman Paul Volcker to separate financial intermediation from proprietary trading is not a bad idea. Combined with elevated capital requirements for banks, it would reduce the chance of another simultaneous failure of all credit channels. But it is not sufficient. Hedge funds can also destabilize the system, as the collapse of Long Term Capital Management in 1998 demonstrated. So they also need clear, albeit different, limits on leverage.”

Sustained.

Wednesday, February 3, 2010

Scott Sumner on macroeconomics and blogging

Tyler Cowen writes: “Read the whole thing. It is one of the best statements of how blogging can make a difference.” The post in question also answers the question “In what way is blogging science?”

Scott Sumner (TheMoneyIllusion) is often not easy to read and his arguments about macro and monetary policy are quite subtle. He makes an important point in his recent post "Seeing the world in a different way (one year later)" about the “Official Method” in macroeconomics. He doesn’t think much of it. Excerpts:

“You devise a model. You go out and get some data. And then you try to refute the model with some sort of regression analysis. If you can’t refute it, then the model is assumed to be supported by the data, although papers usually end by noting “further research is necessary,” as models can never really be proved, only refuted.

My problem with this view is that it doesn’t reflect the way macro and finance actually work. Instead the models are often data-driven. Journals want to publish positive results, not negative. So thousands of macroeconomists keep running tests until they find a “statistically significant” VAR model, or a statistically significant “anomaly” in the EMH. Unfortunately, because the statistical testing is often used to generate the models, and determine which get published, the tests of statistical significance are meaningless.

I’m not trying to be a nihilist here, or a Luddite who wants to go back to the era before computers. I do regressions in my research, and find them very useful. But I don’t consider the results of a statistical regression to be a test of a model; rather they represent a piece of descriptive statistics, like a graph, which may or may not usefully supplement a more complex argument that relies on many different methods, not a single “Official Method.””
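Sumner’s point about specification search can be made concrete with a small simulation. The sketch below is my own illustration, not from his post, and all the numbers in it are arbitrary: it regresses pure noise on each of twenty candidate regressors and keeps the best t-statistic, mimicking a search across many VAR specifications or EMH “anomalies”. Run enough specifications and a nominal 5% test produces a spurious “discovery” most of the time.

```python
# Minimal sketch of why searched-for significance is meaningless.
# Everything here is simulated noise: there is no true relationship.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_candidates, n_trials = 100, 20, 1000

false_positives = 0
for _ in range(n_trials):
    y = rng.normal(size=n_obs)                      # pure noise outcome
    X = rng.normal(size=(n_obs, n_candidates))      # pure noise regressors
    t_stats = []
    for j in range(n_candidates):
        x = X[:, j]
        beta = (x @ y) / (x @ x)                    # OLS slope, no intercept
        resid = y - beta * x
        se = np.sqrt(resid @ resid / (n_obs - 1)) / np.sqrt(x @ x)
        t_stats.append(abs(beta / se))
    if max(t_stats) > 1.96:                         # "significant at 5%"
        false_positives += 1

print(f"Share of searches finding a 'significant' result: "
      f"{false_positives / n_trials:.2f}")
# With 20 tries per search, roughly 1 - 0.95**20, about 64%, of searches
# yield a spurious finding even though y is unrelated to every regressor.
```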

One could add that in many published papers, mostly in fields other than macroeconomics, the story goes like this: first, a rather simple, basic economic statement is made about a plausible causal link between two variables. It usually relies on very simple economics, and remains a speculative hypothesis, justified neither by deeper argument nor by other evidence. Then follow twenty pages of abstruse and irrelevant mathematics, preferably with a score of triple integrals. Only then are statistical tests of the sort Scott Sumner criticizes presented. Here it is the mathematical apparatus, not the statistical testing, that determines which papers get published, wasting a lot of publication pages.

Deirdre McCloskey has long suggested, as Scott Sumner does in a way, that economics is an instrument of persuasion. In this perspective one has to wonder what the value can be of mathematical developments and “proofs” derived from rather arbitrary formal models designed mostly to justify, in an apparently “scientific” way, the statistical presentations that follow. A more useful discussion would center on the choice of the main building blocks of a potential model, and on comparisons with alternative choices, during the process of constructing the model.

This is congruent with Sumner’s experience of journal publishing, which he reports later in his post:

“In 2008 and 2009 I sent papers on the economic crisis out to journals like the JMCB and the JPE, journals that I have published at in the past. But now for the first time in my life the articles come back without even being sent out to a referee. “It’s not the sort of thing we publish” they’d say. I gather this means they don’t see enough equations. I hope it doesn’t mean “because it addresses the most important macroeconomic problem since the 1930s.””


There are many gems in the post. I especially liked the beautiful summary of what the efficient market hypothesis (often criticized, wrongly, for having been disproved by the crash) really is: it “merely tells us that the market forecast is the optimal forecast, not perfect foresight.”
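A toy example may help fix that distinction. In the sketch below, my own illustration under the assumption that prices follow a random walk, today’s price is the optimal forecast of tomorrow’s, yet it is nowhere near perfect foresight: its errors are as large as the daily innovations, and a plausible-looking “momentum” rule does strictly worse.

```python
# Optimal forecast vs perfect foresight, on a simulated random walk.
import numpy as np

rng = np.random.default_rng(1)
price = np.cumsum(rng.normal(size=10_000))   # random-walk "market price"

# Market forecast of tomorrow: simply today's price.
# Naive "momentum" forecast: extrapolate yesterday's move.
market_forecast = price[1:-1]
momentum_forecast = price[1:-1] + (price[1:-1] - price[:-2])
actual = price[2:]

mse_market = np.mean((actual - market_forecast) ** 2)
mse_momentum = np.mean((actual - momentum_forecast) ** 2)
print(f"MSE, market forecast:   {mse_market:.2f}")   # about 1: optimal, not zero
print(f"MSE, momentum forecast: {mse_momentum:.2f}") # about 2: beaten by the market
```

The market forecast is optimal in the sense that no rival rule achieves a lower mean squared error, yet its errors never vanish, which is exactly the gap between “optimal forecast” and “perfect foresight.”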

The second part of the post, “Don’t ask me to become a blogger”, is about the experience of publishing a blog. It is well worth reading and invites one to think about the respective influence and impact of the blogosphere and of the established institutions of universities and journal publishing.

Hat tip: Tyler Cowen (Marginal Revolution).

Tuesday, February 2, 2010

The U.S. as a Social Market Economy

Charles Rowley comments on the transition of the American system towards state capitalism.

Efficient Markets, Fundamental Values, and Identifying Bubbles


Should monetary policy react to asset bubbles, assuming that they are identifiable? Beyond the political babble, an intelligent and learned discussion of the issues on Rajiv Sethi’s blog.

Monday, February 1, 2010

Mankiw and De Long agree on Phillips


On June 9, 2008, a post on this blog reported work by Greg Mankiw and others on the Phillips curve, showing the durability of the inverse relationship between inflation and unemployment.


Now Brad DeLong posts an interesting graph (above) showing the relation since 1970.
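For readers who want to reproduce this kind of graph, here is a hedged sketch, my own reconstruction rather than DeLong’s actual code: it plots U.S. unemployment against the subsequent change in inflation since 1970. The choice of FRED series (UNRATE, CPIAUCSL) and the exact transformation are assumptions on my part.

```python
# Sketch: an accelerationist Phillips curve scatter from FRED data.
import pandas_datareader.data as web
import matplotlib.pyplot as plt

start, end = "1970-01-01", "2009-12-31"
unrate = web.DataReader("UNRATE", "fred", start, end).resample("A").mean()
cpi = web.DataReader("CPIAUCSL", "fred", start, end).resample("A").mean()

inflation = cpi["CPIAUCSL"].pct_change() * 100      # annual inflation, %
accel = inflation.diff()                            # change in inflation
df = accel.to_frame("d_inflation").join(unrate["UNRATE"].shift(1)).dropna()

plt.scatter(df["UNRATE"], df["d_inflation"])
plt.xlabel("Unemployment rate, prior year (%)")
plt.ylabel("Change in inflation (pct points)")
plt.title("Phillips curve relation since 1970 (sketch)")
plt.show()
```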


Eurozone: What Centralizers Think


Wolfgang Munchau, an editorial commentator at the Financial Times and a staunch promoter of European centralization, warns that the coming Spain problem (FT, January 31) is the “clear and present danger” to the eurozone’s survival, and that the zone is entering “the most dangerous phase in its 11-year history.” The solutions?

First, establish a “robust and transparent system of crisis management” that would minimize moral hazard: “countries that benefit from help will have to accept a partial loss of sovereignty.” Centralization.

“The second essential prerequisite for survival is a reduction in internal imbalances, which lie at the core of the current crisis.” In other words, a condition for solving a crisis is to treat the causes of the crisis. Nothing new there, except that doing so requires, according to Munchau, more coordination by the finance ministers of the eurogroup. More centralization.

Third, centralize financial regulation. But this raises the question of the responsibility of national regulation in the crisis. Is it proven that Spanish financial regulation is worse than, say, the French or British regimes? And if it is, why not let Spain adopt the British or French regulatory system (a competition of regulations), or “import” parts of it, rather than impose a supranational one on all the member countries?

If regulatory and policy centralization is the solution, one has to assume that decentralization was the cause of the problem. Many commentators have argued, on the contrary, that suppressing the national exchange rate indicator (a centralization of exchange rate and monetary policy) created an incentive for governments to increase public deficits, since doing so no longer had a visible external consequence such as a currency depreciation, and that national imbalances are aggravated by the disequilibrium exchange rates embedded in the common currency system. While a swift depreciation of a national exchange rate would have helped to reduce the imbalances, a common currency forbids the use of that instrument and thus exacerbates the problems.

When advocating centralization policies, one should make explicit the expected costs along with the hypothetical future benefits.