Tuesday, January 24, 2012

Brain in a Box

There is no ghost in the machine. It is turtles all the way down. Or as Julien Offray de La Mettrie put it in Man a Machine:



The above needs to be tempered. Our positivist and reductionist scientific worldview makes claims that go too far for a pragmatist. From a section in Lecture 47 of Daniel N. Robinson's The Great Ideas of Philosophy, 2nd Edition:
[William] James the pluralist is not a relativist of the modern stripe. He countered the reigning positivism of his day with fallibilism. There is always more to the account than any current version can include, because there are always other experiences, other beliefs, and needs. We must conduct ourselves in such a way as to record what we take to be our highest interests, while never knowing if we have them right or have matched our interests by our actions. There is no final word.

William James was, above all, a realist: We must accept what is. Unlike the positivists, however, James took this to mean that we must accept that there is a religious element to life, because credible report points to the existence of one, as well as to a striving to perfect oneself and to needs that go beyond the individual soul or body. There are, however, things that we cannot finally know. The fallibilist doesn't deny that there is some absolute point of focus on which human interests can converge, but we are warned to be suspicious of those who come to us with final answers.
The most profound lesson of modernism is to recognize the limits of our reason, the limits to knowledge, and the ultimate fallibility of each of us. The existentialist is right in noting that we project ourselves into our future with a hope that has no foundation. In the end we all die, but we live as if we would live forever. We theorize as if there is "ultimate truth" when in fact we live in a very small corner of an inconceivably huge universe in which we don't even know how many physical dimensions exist or whether there are other "universes out there" beyond our ken.

In some deep sense we are a "brain in a box" but science has not plumbed the depths of what this means. Yes, we are "only" matter in motion. But that matter, the brain, is so vastly complex that a figure like de La Mettrie had not a hint of what it truly means. And just as de La Mettrie claimed more than he really knew, so today's scientists and philosophers make claims far beyond what will ultimately be revealed. The universe is far more complex and mysterious than the human mind will ever comprehend. But that doesn't mean we can't enjoy the quest. Knowledge and truth are still the shining lights on high that we strive to attain.

Wednesday, October 19, 2011

Defining the "Self"

I love to puzzle about questions like "what is the purpose of life?", "is a virus alive?", "how do you know when you 'know' something?", etc.

Here is the cartoonist Scott Adams (the creator of the Dilbert comic strip) taking up the puzzle of defining what makes you 'you' and giving an interesting reply:
Have you ever wondered who you are? You're not your body, because living cells come and go and are generally outside of your control. You're not your location, because that can change. You aren't your DNA because that simply defines the boundaries of your playing field. You aren't your upbringing because siblings routinely go in different directions no matter how similar their start. My best answer to my own question is this:

You are what you learn.

If all you know is how to be a gang member, that's what you'll be, at least until you learn something else. If you become a marine, you'll learn to control fear. If you go to law school, you'll see the world as a competition. If you study engineering, you'll start to see the world as a complicated machine that needs tweaking.

I'm fascinated by the way a person changes at a fundamental level as he or she merges with a particular field of knowledge. People who study economics come out the other side thinking a different way from people who study nursing. And learning becomes a fairly permanent part of a person even as the cells in the body come and go and the circumstances of life change.

You can easily nitpick my definition of self by arguing that you are actually many things, including your DNA, your body, your mind, your environment and more. By that view, you're more of a soup than a single ingredient. I'll grant you the validity of that view. But I'll argue that the most powerful point of view is that you are what you learn.

It's easy to feel trapped in your own life. Circumstances can sometimes feel as if they form a jail around you. But there's almost nothing you can't learn your way out of. If you don't like who you are, you have the option of learning until you become someone else. Life is like a jail with an unlocked, heavy door. You're free the minute you realize the door will open if you simply lean into it.

Suppose you don't like your social life. You can learn how to be the sort of person that attracts better friends. Don't like your body? You can learn how to eat right and exercise until you have a new one. You can even learn how to dress better and speak in more interesting ways.

I credit my late mother for my view of learning. She raised me to believe I could become whatever I bothered to learn. No single idea has served me better.
I would put down Scott Adams' "definition" as an 'aspirational' definition of self. Its purpose is to motivate you to learn and grow. In reality, you are something quite different from what you "learn". I had that driven home to me when my mother developed a brain tumour and the operation to remove it was botched by an incompetent (but licensed) surgeon. It left her with left neglect and other cognitive impairments (she couldn't visually distinguish me from my brother, but she could tell who was who by listening and by deduction, e.g. from the clothes we wore). She was still "my mother" but a big chunk of her was gone.

What that hammered home is that we are a meat machine. You mess up the brain tissue and you change "you" into something else. The religious nuts can talk about soul and an afterlife and other fantasies, but the reality is that we have one life in this world in this body: if the body is broken we change, and if the body dies we are gone. It is a real mystery that matter can be alive. It is an even bigger mystery that matter can be self-conscious. Life is a wondrous thing. I find it really disgusting that religious "know it alls" pound on a book and claim they have all the answers. What they have is a blighted mind and a refusal to experience the mystery of life, this wonderful, brief gift we have, and the chance to interact with others using the mystery of our minds. Our being alive is truly mind-boggling. To demean this by pretending that some religious doctrine "comprehends" the meaning of life and "surpasses" mere mortal knowledge is disgusting. Life is too precious and wonderful to be captured by the scribblings of some religious fanatics.

Wednesday, September 28, 2011

The Future is Closer Than You Think

The day of using science to manipulate unwilling subjects has drawn just a bit closer. Here are some bits from a fascinating post by Mo Costandi on the Neurophilosophy blog at the UK's Guardian newspaper:
Magnetic pulses applied to a specific region of the frontal cortex can influence people's willingness to lie spontaneously or tell the truth, according to a new study by researchers from Estonia.

The findings, published recently in the journal Behavioural Brain Research, suggest that manipulations of brain activity could be an effective way of obtaining truthful responses from defendants and criminal suspects, raising more ethical questions about the application of neuroscience technologies in the legal profession.

...

The ability to detect deception accurately is of great interest to the legal profession and security agencies, for obvious reasons. The use of brain scanning data as evidence in courts of law has proven to be highly controversial, not least of all because of doubts about the validity of the data. Some researchers argue that we are now in a position to use functional neuroimaging to detect lies, but the general consensus seems to be that neither the technology nor our understanding of the brain are sophisticated enough.

Bachmann is cautious about how to interpret the new findings, because the sample size of 16 participants is small. He adds that they should be replicated before any firm conclusions can be made about the effects of TMS on spontaneous lying. Even so, the study raises the possibility that TMS could be used to increase the likelihood of getting the truth out of suspects or defendants. It seems likely that some may develop the technique and offer it as a service, as was the case with brain scanning.

"Provided that the method is validated and legal norms are established, it could perhaps be allowed and justified," says Bachmann, "but this should not become a routinely used technique. Basic human rights include cognitive privacy and this would be a clear infringement. If a subject freely agrees, maybe it would make sense, but I foresee heated debates on whether 'knocking truth out of the fellow' can be legalized in principle."
I'm sure the dictatorships and autocrats are clapping with glee. The day when they can reduce their subject populations to automatons run remotely at the whims of the ruling class has just come a big step closer. All hail Big Brother!

Friday, August 12, 2011

Through the Language Glass Redux

I had previously posted on Guy Deutscher's book Through the Language Glass, which discusses how language can help shape reality. The book rejects the Sapir-Whorf hypothesis, but presents a broader slice of language science, tracing the early roots of the concept and the latest thinking, which reaffirms the early findings that language does shape our cognition.

Here is a video that shows research into this area:



I especially enjoy the bit at 3:00 into the video where tests of colour "differences" are shown. Since I'm red-green colour blind, "see" differently from what others see, and have difficulties on standard colour blindness tests, I enjoy watching this tribe spot differences that Westerners can't while struggling to spot differences which are trivial for Westerners to see. Colour is a strange mixture of physiology and language. It is both physically real and socially real, with language and physiology each overlapping differently in this one world that we share.

At 7:40 into the video the experimenter asks "do we all see the same colours?", which is quite silly. It is well known that colour blind people see something different because their retinas contain aberrant colour pigments in the cone cells. As a colour blind person I have to see something different, but I conform to the colour language of the larger linguistic community and simply have to throw up my hands when tested closely and found to fail their standards. I see "all the colours" but a careful test shows I can't make the distinctions normal people make.

And for more discussion of language, here is a previous post on language and reality in which I included a bloggingheads.tv video of Yale philosophy professor Joshua Knobe and Stanford psychology professor Lera Boroditsky discussing language and thought.

It is wonderful to see progress in this field. And it is fascinating to think how our "inner selves" are shaped by the outer world. This ties back to the wonderful private language argument by Ludwig Wittgenstein:
... all language is essentially public: that language is at its core a social phenomenon. This would have profound implications for other areas of philosophical study. For instance, if one cannot have a private language, it might not make any sense to talk of private sensations such as qualia; nor might it make sense to talk of a word as referring to a concept, where a concept is understood to be a private mental state.
Fascinating stuff to think about.

Wednesday, May 4, 2011

David Brooks' "The Social Animal"


I enjoyed this book, just as I enjoyed David Brooks' earlier book Bobos in Paradise. But now that I know him better, I struggled with this book more than with the earlier one. Brooks' conservatism bothers me. In this book he attacks liberals and libertarians and puts forth a kind of conservative agenda that just doesn't ring true for me. The problem is that David Brooks has lived his life among the top 10% of society, so this book rings hollow when it presents a main character, Erica, who rises from the bottom of society and gets ahead in the best traditions of Horatio Alger.

The book follows a man and a woman from childhood, through marriage, to death to provide the platform for Brooks to present his social views and what he considers to be the best of "brain research". I have no complaint about the style of using fictional characters to make social observations. He does that well. I enjoyed the storyline. But I just don't buy his "science" and I don't care for his social views.

He holds that modern America has gone off the rails for over 50 years by pushing "freedom" while neglecting "social obligation and social relationships". I'm sympathetic, but ultimately don't buy the argument. I'm more than happy to read criticisms of libertarianism, the dominant right wing ideology of Brooks' entire adult life, one he signed up for loyally until sometime late in George Bush's presidency. But I came of age in the 60s generation, so I'm still quite sympathetic to the ideals of freedom and self actualization that came out of the 1960s.

I understand Brooks' unhappiness with the seedier side of 1960s idealism turned sour. He sees it as "statism", an attack on the institutions of family, church, and state that created dysfunctional families. I would disagree. State paternalism was indeed dysfunctional (witness the misguided urban renewal that destroyed communities with sterile, Bauhaus-functionalist cold concrete high rise "housing" like Chicago's Cabrini-Green), but I view Brooks' romanticization as a kind of throwback to Kinder, Küche, Kirche. Brooks endows the relationships and institutions of the past with more glory than they deserve. He has a Father Knows Best view of perfectly harmonious families before divorce became easier and marriages began to dissolve. Similarly he has a sepia-toned view of institutions which for most of the poor were in fact callous and offered no real help to the dispossessed and downtrodden. He seems to think that people can simply will themselves into a better world in a wonderfully "upwardly mobile" America. The truth is that after 40 years of rampant right wing politics, the US has less social mobility than Europe.

Here are a few snippets to give a flavour of the book. First his attack on rationalist reductionist science in favour of a gauzy everything-is-connected-and-complicated view of the world:
Rationalism gained enormous prestige during the nineteenth and twentieth centuries. But it does contain certain limitations and biases. This mode of thought is reductionist; it breaks problems into discrete parts and is blind to emergent systems. This mode, as Guy Claxton observes in his book The Wayward Mind, values explanation over observation. More time is spent solving the problem than taking in the scene. It is purposeful rather than playful. It values the sort of knowledge that can be put into words and numbers over the sort of knowledge that cannot. It seeks rules and principles that can be applied across contexts, and undervalues the importance of specific contexts.

Moreover, the rationalist method was founded upon a series of assumptions. It assumes that social scientists can look at society objectively from the outside, purged of passions and unconscious biases.

It assumes that reasoning can be fully or at least mostly under conscious control.

It assumes that reason is more powerful than and separable from emotion and appetite.

It assumes that perception is a clear lens, giving the viewer a straightforward and reliable view of the world.

It assumes that human action conforms to laws that are akin to the laws of physics, if we can only understand what they are. A company, a nation, a universe -- these are all great machines, operated through immutable patterns of cause and effect. Natural sciences are the model that the behavioral sciences should replicate.

Eventually, rationalism produced its own form of extremism. The scientific revolution led to scientism.
This of course is a completely distorted view. Science isn't "rationalism". Science does build on reason, but it is a community effort that achieves objectivity by relying on the refinement of replicable experiments and a critical community who collaborate to refine thought and experiment into theory that guides further thought and experiment. There is no need to bring in passion, emotion, appetite, etc. Those don't and won't help you build a useful scientific theory.

Even Brooks' rhapsody about emergent systems is misguided, because the scientific method is perfectly able to explore and theorize about emergent behaviour. You don't need to get swept up in a romantic anti-rationalist fever to appreciate complexity. Traditional science has been busy for half a century developing theories of complexity, chaotic systems, and emergent phenomena without Brooks telling them how misguided they are in ignoring the unconscious and emotions.

Brooks is right in his attack on the social scientists who have tried too hard to develop a "hard" science. His attack on economics is perfectly correct:
This scientism has expressed itself most powerfully, over the last fifty years, in the field of economics. Economics did not start out as a purely rationalist enterprise. Adam Smith believed that human beings are driven by moral sentiments and their desire to seek and be worthy of the admiration of others. Thorstein Veblen, Joseph Schumpeter, and Friedrich Hayek expressed themselves through words not formulas. They stressed that economic activity was conducted amidst pervasive uncertainty. Actions are guided by imagination as well as reason. People can experience discontinuous paradigm shifts, suddenly seeing the same situation in radically different ways. John Maynard Keynes argued that economics is a moral science and reality could not be captured in universal laws calculable by mathematics. Economics, he wrote, "deals with introspection and with values... it deals with motives, expectations, psychological uncertainties. One had to be constantly on guard against treating the material as constant and homogeneous."

But over the course of the twentieth century, the rationalist spirit came to dominate economics. Physicists and other hard scientists were achieving great things, and social scientists sought to match their rigor and prestige. The influential economist Irving Fisher wrote his doctoral dissertation under the supervision of a physicist, and later helped build a machine with levers and pumps to illustrate how an economy works. Paul Samuelson applied the mathematical principles of thermodynamics to economics. On the finance side, Emanuel Derman was a physicist who became a financier and played a central role in developing the models for derivatives.

While valuable tools for understanding economic behavior, mathematical models were also like lenses that filtered out certain aspects of human nature. They depended on the notion that people are basically regular and predictable. They assume, as George A. Akerlof and Robert Shiller have written, "that variations in individual feelings, impressions and passions do not matter in the aggregate and that economic events are driven by inscrutable technical factors or erratic government actions."

Within a very short time economists were emphasizing monetary motivations to the exclusion of others. Homo Economicus was separated from Homo Sociologus, Homo Psychologicus, Homo Ethicus, and Homo Romanticus. You ended up with a stick figure of human nature.
I wouldn't put it the way Brooks has. Economics ran off the rails because it fell in love with mathematics and modeling. There is nothing wrong with these tools, but when you allow them to seduce you into assumptions and simplifications which fundamentally change your subject of study -- from "man" to "homo economicus" -- then you have problems. But this isn't caused by "scientism". This is simply a community of researchers who fell in love with a theory and its simplifications and got stuck in a rut. Physics and chemistry similarly had periods where they made misguided assumptions and strayed into wrong-headed theories.

You can use mathematics and model things that are at heart unpredictable. That is precisely the situation with quantum physics! That people have emotions and are unpredictable doesn't mean economics is impossible. It just means that simplistic homo economicus style theories are inadequate. Brooks gets it all wrong when he claims that "mathematical models filter out aspects of human nature". The math doesn't do that. Math is simply a tool. It is the theorist using the math that does that. Brooks has profoundly misunderstood the nature of science, math, modeling, and theory building.

Finally, here is a taste of his thesis that "freedom" of the 1960s hippies and the 1980s libertarians ran America off the rails because it overlooks relationships and institutions:
...the cognitive revolution had the potential to upend these individualistic political philosophies, and the policy approaches that grew from them. The cognitive revolution demonstrated that human beings emerge out of relationships. The health of a society is determined by the health of these relationships, not by the extent to which it maximizes individual choice.

Therefore, freedom should not be the ultimate end of politics. The ultimate focus of political activity is the character of the society. Political, religious, and social institutions influence the unconscious choice architecture undergirding behavior. They can either create settings that nurture virtuous choices or they can create settings that undermine them. While the rationalist era put the utility-maximizing individual at the center of political thought, the next era... would put the health of social networks at the center of thought. One era was economo-centric. The next would be socio-centric.
I would say that Brooks is guilty of the sin of all great system builders: he oversimplifies. It is not "freedom" versus "relationships". It is both. You need a certain conservatism in society, a respect for relationship, but you also need to give people freedom to realize their potential and the ability to politically organize and overthrow oppressive institutions and outdated hierarchies.

Brooks is too conservative. His lauding of the idea that "the ultimate focus of political activity is the character of the society" sounds too fascistic for my taste. I admire institutions, but they are not the goal. Times change, so institutions and relationships must change. He fails to appreciate the balance that is needed between individual, family, and institutions. This is a complex dance that is ever-changing through history. We no longer live under patriarchal families. We no longer have institutionalized aristocracies. Things change.

On the whole, the book is enjoyable and does make you think. But it is too easy to simply fall into Brooks' trap and accept his argument uncritically. You need to question every page. He has insights worth digesting, but don't let Brooks become a spider who quickly rolls you up in the webbing he spins to trap you into his viewpoint.

Wednesday, April 20, 2011

Dan Gardner's "Future Babble: Why Expert Predictions Fail -- and Why We Believe Them Anyway"


This is an excellent review of just why we are suckers for doomsday fanatics and inveterate seekers of certainty about the future. Gardner reviews the science, describes why we have a proclivity for prediction, and walks you through a number of famous predictors and their failed predictions. And, what is truly mind-boggling, he shows how people continue to believe the predictions in the face of overwhelming evidence of their wrongness.

He divides experts into hedgehogs (who "know" one thing in great depth) and foxes (who know many things, but more importantly know their limits). He shows how hedgehogs dig themselves into a hole with their "predictions" and when caught predicting something that is untrue, they simply continue digging their hole deeper. He explains that foxes are reluctant to predict, hedge their predictions, and are willing to admit to error when their predictions are shown to be untrue.

This book is an excellent antidote to those talking heads that the media loves to put in front of a camera and have them make grand "predictions" about this and that. You will learn why and how you get sucked into their predictions. It isn't simply them bedazzling you. You are built to want the certainty of a "good prediction" about the future. We create these doomsday fanatics and these over-priced sellers of "predictions" about the future.

Here are some interesting bits from the book:
With natural science increasingly aware of the limits of prediction and with prediction even more difficult when people are involved, it would seem obvious that social science -- the study of people -- would follow the lead of natural science and accept that much of what we would like to predict will forever be unpredictable. But that hasn't happened, at least not to the extent it should. In fact, as the Yale University historian John Lewis Gaddis describes in The Landscape of History, "social scientists during the 20th century embraced a Newtonian vision of linear and therefore predictable phenomena even as the natural sciences were abandoning it." Hence, economists issue wonky forecasts, criminologists predict crime trends that don't materialize, and political scientists foresee events that don't happen. And they keep doing it no matter how often they fail.
One of my personal favourites, Paul Ehrlich, founder in the late 1960s of ZPG and author of a number of doomsday books including his most famous The Population Bomb, is featured in this book and covered in several sections. Here is a bit:
Paul Ehrlich was also emboldened by the way things were going and he published a string of jeremiads, each more certain of worse disasters than the last. "In the early 1970s, the leading edge of the age of scarcity arrived," he wrote in 1974's The End of Affluence. "With it came a clearer look at the future, revealing more of the nature of the dark age to come." Of course there would be mass starvation in the 1970s -- "or, at the latest, the 1980s." Shortages "will become more frequent and more severe," he wrote. "We are facing, within the next three decades, the disintegration of nation-states infected with growthmania." Only the abandonment of growth-based economics and other radical changes offered any hope of survival. And some countries were doomed no matter what they did. India was among the walking dead, Ehrlich was sure. "A run of miraculously good weather might delay it -- perhaps for a decade, maybe even to the end of the century -- but the train of events leading to the dissolution of India as a viable nation is already in motion." Japan is almost certainly "a dying giant." Same for Brazil. The United Kingdom was only slightly better off. The mere continuation of current trends will ensure that "by the year 2000 the United Kingdom will simply be a small group of impoverished islands, inhabited by some 70 million hungry people, of little or no concern to the other 5-7 billion people of a sick world," he wrote in an earlier paper. Of course, it could be worse than that: Thermonuclear war or some variety of eco-catastrophe were distinct possibilities. "If I were a gambler, I would take even money that the standard of living of the average Briton would be distinctly lower than it is today." Not that Americans are in any position to gloat, Ehrlich cautioned. The United States is entering "the most difficult period ever faced by industrialized society." This dark new age may well see the end of civilization. Ehrlich noted that those in the burgeoning "survivalist" movement were stockpiling supplies in wilderness cabins -- "a very intelligent choice for some people" -- but he and his wife had decided to stay put. "We enjoy our friends and our work too much to move to a remote spot and start farming and hoarding," Ehrlich wrote. "If society goes, we will go with it."
He looks into the underlying psychology that fuels our need for predictions and the cognitive biases that make us susceptible:
In contrast, the gloom-mongers have it easy. Their predictions are supported by our intuitive pessimism, so they feel right to us. And that conclusion is bolstered by our attraction to certainty. As strange as it sounds we want to believe the expert predicting a dark future is exactly right, because knowing that the future will be dark is less tormenting than suspecting it. Certainty is always preferable to uncertainty, even when what's certain is disaster.
And here is our real Achilles' heel:
The media superstars who tell us what will and won't happen are an overwhelmingly confident bunch. Talking heads on business shows are particularly cock-sure but the other pundits who engage in prognostication -- whether futurists, newspaper columnists, or the sages on television who tell us the fate of politicians and nations -- share the same fundamental characteristics. They are articulate, enthusiastic, and authoritative. Often, they are charming and funny. Their appearance commonly matches the role they play, whether it's a Wall Street sharpie or a foreign affairs maven. They commonly see things through a single analytical lens, which helps them come up with simple, clear, conclusive, and compelling explanations for what is happening and what will happen.

They do not suffer doubts. They do not acknowledge mistakes. And they never say, "I don't know."

They are, in this book's terms, hedgehogs. Their kind dominates the op-ed pages of newspapers, pundit panels, lecture circuits, and best-seller lists.

Now, if this is true, and if it's also true that the predictions of hedgehogs are even less accurate than those of the average expert -- who does about as well as a flipped coin, remember -- then a disturbing conclusion should follow. The experts who dominate the media won't be the most accurate. In fact, they will be the least accurate. And that is precisely what Tetlock found when he measured the fame of each of his 284 experts: the more famous the expert, the worse he did.

The result may seem more than a little bizarre. Predictions are a big part of what media experts do, after all. Surely experts who consistently deliver lousy results will be weeded out, while the expert who does better than average will be rewarded with a spot on the talking-head shows and all that goes with it. The cream should rise, and yet, it doesn't. In the world of expert opinions, the cream sinks. What rises is the stuff that should be, but isn't, skimmed off and thrown away.

How is this possible? Very simply, it's what people want. Or to put it in economic terms, it's supply and demand: We demand it; they supply.
Gardner is referencing a scientific study by Tetlock that investigated the quality of predictions by experts. He goes on to explain why and how we demand these inaccurate experts.

Here's an example of one psychological reason why we cling to bad predictions: confirmation bias:
After Japan attacked the United States at Pearl Harbor, General John DeWitt was sure American citizens of Japanese origin would unleash a wave of sabotage. They must be rounded up and imprisoned, he insisted. When time passed and there was no sabotage, DeWitt didn't reconsider. "The very fact that no sabotage has taken place is a disturbing and confirming indication that such action will be taken," he declared. As the reader may realize, DeWitt's reasoning is an extreme example of the "confirmation bias" discussed earlier.
He ends the book by trying to give the reader a guide to deal with the gloomy doomsters that proliferate in the media:
Skepticism is a good idea at all times but when the news is especially tumultuous and nervous references to uncertainty are sprouting like weeds on the roof of an abandoned factory, it is essential. The 1970s were one such time. As I write, we are in another.

The crash of 2008 was a shock. The global recession of 2009 was a torment. Unemployment is high, economies are weak, and government debt steadily mounts. The media are filled with experts telling us what comes next. We watch, frightened and fascinated, like the audience of a horror movie. We want to know. We must know.

For the moment, what the experts are saying is, in an odd way, reassuring. It's bleak, to be sure. But it's not apocalyptic, which is a big improvement over what they were saying when the crash was accelerating and the gloomier forecasters, such as former Goldman Sachs chairman John Whitehead, were warning it would be "worse than the Great Depression." To date, things aren't worse than the Great Depression, nor are they remotely as bad as the Great Depression. Naturally, the gloomsters would be sure to add "so far." And they would be right. Things change. The situation may get very much worse. Of course it may also go in the other direction, slowly or suddenly, modestly or sharply. The range of possible futures is vast.
This book is a fun read and very informative. It is an excellent prophylactic to the crazies who want to worm their way into your mind and turn you into some end-of-the-world cult member. Read it!

If you want to read about modern day hedgehogs whose "predictions" have big consequences for us, read this bit by Paul Krugman about bond vigilantes, and here where he points out those who see any rise in inflation as a sign of hyperinflation and a debasement of the currency. Sadly these "experts" are forcing governments around the world, in the midst of the Great Recession, to take on austerity programs similar to FDR's in 1936, which led to the 1937 recession within the Great Depression.

Thursday, March 10, 2011

Errol Morris Wrangles Philosophy

Having been a graduate student in philosophy many years ago who studied Thomas Kuhn, Saul Kripke, Ludwig Wittgenstein, and others mentioned in the following posts, I found this philosophical rant by film-maker Errol Morris fascinating. It brings back memories of my graduate student days: the philosophy of science, paradigm shifts, possible worlds, meaning, reference, etc. In these postings Morris has exorcized his demons of graduate school.

Part 1

Part 2

Part 3

Part 4

Part 5

Some of my favourite "moments" in the above essay:

I asked him, “If paradigms are really incommensurable, how is history of science possible? Wouldn’t we be merely interpreting the past in the light of the present? Wouldn’t the past be inaccessible to us? Wouldn’t it be ‘incommensurable?’ ”

...

I call Kuhn’s reply “The Ashtray Argument.” If someone says something you don’t like, you throw something at him. Preferably something large, heavy, and with sharp edges. Perhaps we were engaged in a debate on the nature of language, meaning and truth. But maybe we just wanted to kill each other.

The end result was that Kuhn threw me out of Princeton. He had the power to do it, and he did it. God only knows what I might have said in my second or third year. At the time, I felt that he had destroyed my life. Now, I feel that he saved me from a career that I was probably not suited for.

This reminds me of my student years. I constantly questioned my professors and found them wanting. Sure they were learned and smart, but most were brittle and dead to new ideas and were resistant to questioning. They preferred the students to sit at their feet and lap up the wisdom on offer.

Kripke’s theory provides an alternative to what had become known as the description theory, an amalgam of ideas proposed by Gottlob Frege, Bertrand Russell and Ludwig Wittgenstein. (And to that mix, in the ‘50s and ‘60s you can add Peter Strawson and John Searle.) Here’s one way to distinguish between Kripke’s theories and the description theory that preceded it.

You have two fish in a fishbowl. One of them is golden in color; the other one is not. The fish that is golden in color, you name “Goldie.” The other fish you name “Greenie.” Perhaps you use the description “the gold fish” and point to the one that is golden in color. You are referring to the gold fish, Goldie. Over the course of time, however, Goldie starts to change color. Six months later, Goldie is no longer golden. Goldie is now green. Greenie, the other fish — the fish in the bowl that was green in color — has turned golden. Goldie is no longer “the fish that is golden in color.” Greenie is. But Goldie is still Goldie even though Goldie has changed color. The description theory would have it that Goldie means the fish that is golden in color, but if that’s true then when we refer to Goldie, we are referring to the other fish. But clearly, Goldie hasn’t become a different fish; Goldie has merely changed his (or her) appearance.


It’s Kripke’s version of “Where’s Waldo.” If the description theory (courtesy of Frege, Russell and Wittgenstein) is correct, then Goldie is on the right. If Kripke’s historical-chain of reference theory is correct, then Goldie remains Goldie no matter what color Goldie is.

You could also think of Goldie and Greenie in terms of beliefs, although this is not how the description theory was originally framed. Goldie is the fish that you believe is golden in color. But Goldie starts to change color. I can believe anything I want about Goldie. I can even believe that Goldie isn’t a fish, but Goldie — that fish out there swimming around in a fishbowl — remains Goldie.

These were the kind of logical "puzzles" I loved. This was why I was a graduate student pursuing logic in the philosophy faculty.
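
For programmers there is a loose analogy here (my own, not Morris' or Kripke's) that makes the fish puzzle vivid: the description theory resolves a name by a predicate each time the name is used, while Kripke-style naming binds the name to the object itself at a "baptism", so the name survives a change of properties. A minimal sketch in Python:

# A toy analogy, mine rather than Morris' or Kripke's: the description
# theory looks a referent up by a predicate; the causal-chain theory is
# a plain reference fixed once at a baptism.
class Fish:
    def __init__(self, color):
        self.color = color

goldie = Fish("gold")
greenie = Fish("green")
bowl = [goldie, greenie]

def description_goldie():
    # Description theory: "Goldie" abbreviates "the fish that is gold".
    return next(f for f in bowl if f.color == "gold")

baptized_goldie = goldie  # causal-chain theory: the name is fixed here

print(description_goldie() is baptized_goldie)  # True: the theories agree

goldie.color, greenie.color = "green", "gold"   # six months pass...

print(description_goldie() is baptized_goldie)  # False: the description
                                                # now picks out Greenie
print(baptized_goldie.color)                    # "green": same fish, new color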

The most important and most controversial aspect of Kuhn’s theory involved his use of the terms “paradigm shift” and “incommensurability.” That the scientific terms of one paradigm are incommensurable with the scientific terms of the paradigm that replaces it. A revolution occurs. One paradigm is replaced with another. And the new paradigm is incommensurable with the old one. He made various attempts to define it — changing and modifying his definitions along the way.

When I read Kuhn's book the issue of "incommensurability" never came up for me. I took the simplistic view that a paradigm switch was a benign "larger view" that incorporated the previous science as a special limited case. So in Einstein's theory the Newtonian equations hold as a limiting case when speeds are small compared to the speed of light. Sure, there were philosophical differences. Newton viewed space and time as absolute while Einstein showed they are dynamic and relativistic, depending on your frame of reference.
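
As a quick numerical illustration of that limiting case (my own sketch, with illustrative speeds), the relativistic factor gamma = 1/sqrt(1 - v^2/c^2) is indistinguishable from 1 -- the Newtonian value -- at everyday speeds and only departs from it as v approaches the speed of light:

import math

c = 299_792_458.0  # speed of light, m/s

def gamma(v):
    # Relativistic factor; Newtonian physics is the gamma ~ 1 regime.
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# a highway car, low Earth orbit, 10% of c, 90% of c
for v in (30.0, 7.7e3, 0.1 * c, 0.9 * c):
    print(f"v = {v:12.4e} m/s   gamma = {gamma(v):.12f}")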

In John Ford’s movie “The Man Who Shot Liberty Valance” (1962), Ransom Stoddard (James Stewart) becomes an archetypal hero for shooting and killing Liberty Valance (Lee Marvin), the paid stooge of the cattle barons. But Tom Doniphon (John Wayne) – literally hidden in the shadows – is really the man who shoots him. Stoddard gets Doniphon’s girl and goes on to a spectacular political career – governor, senator, etc. Doniphon is the unsung hero. After many years, Stoddard, following Doniphon’s death tells a local newspaper editor what really happened, but the editor refuses to print it, “This is the West, sir. When the legend becomes fact, print the legend.”

A legend that is not true can never become fact, but it can get printed as fact, anyway. With Hippasus, it is pretty easy to imagine why the legend of his drowning got “printed” even before there was printing. Someone believed that there should have been a crisis even if there wasn’t any. They believed that the Pythagoreans should have been upset about the discovery of incommensurable magnitudes. But it was a retrospective belief, that is, a belief formed hundreds, if not thousands of years, after the crisis was supposed to have occurred. I find it mildly amusing – possibly even ironic – that Kuhn’s metaphor for “incommensurability” could have been derived from a Whiggish interpretation of an apocryphal story.

This is simply wonderful. I love the way Morris has brought his film-making into this discussion. And I love the way he nails Kuhn.

But there is a messier problem. Why stop at historical relativism? Why not imagine each and every person in a different island universe? And indeed, Kuhn at least in one instance seems to embrace that possibility. In one particularly bizarre passage in “The Road Since Structure,” he suggests that his critics are writing about two different Thomas Kuhns – Kuhn No. 1 and Kuhn No. 2.

...

To me Kuhn’s claim – that there are two Thomas Kuhns plus two books by the same name and author – suggests that there may be no coherent reading of Kuhn’s philosophy. Kuhn, of course, sees it differently. For Kuhn, the multiplicity of Kuhns and Kuhn-authored-books-with-the-same-title provides further proof of his belief that people with “incommensurable” viewpoints can’t talk to each other. That they live in different worlds.

This is the slippery slope of solipsistic theories of knowledge.

Years ago, Bertrand Russell wrote “Nightmares of Eminent Persons” (1954). (Supposedly, he was trying to meet alimony payments.) Among the various nightmares – the Mathematician’s Nightmare, Stalin’s Nightmare, the Psychoanalyst’s Nightmare, Dr. Bowdler’s Nightmare – is the Existentialist’s Nightmare. At the conclusion of the nightmare, the existentialist is screaming, “I don’t exist. I don’t exist.” Poe’s raven appears, speaking in the voice of the French poet Mallarmé: “You do exist. You do exist. It’s your philosophy that doesn’t exist.”

I absolutely love this little tale of being hoisted on your own petard.

I often think of the attraction of smoking, that it simplifies the world into three parts. There’s you, there’s the cigarette, and everything else is the ashtray.

(This catches Errol Morris' latent hostility toward Thomas Kuhn who threw his ashtray at Morris.)

Please remember: This is not an empty intellectual exercise. It is not a matter of indifference whether it was God or natural selection that produced the complexity of life on earth. Nor whether there is such a thing as global warming. The devaluation of scientific truth cannot be laid on Kuhn’s doorstep, but he shares some responsibility for it.

One more parable. For those who truly believe that truth is subjective or relative (along with everything else), ask yourself the question – is ultimate guilt or innocence of a crime a matter of opinion? Is it relative? Is it subjective? A jury might decide you’re guilty of a crime that you haven’t committed. You’re innocent. (It’s possible. The legal system is rife with miscarriages of justice.) Nevertheless, we believe there is a fact of the matter. You either did it or you didn’t. Period.

If you were strapped into an electric chair, there would be nothing relative about it. Suppose you are innocent. Would you be satisfied with the claim there is no definitive answer to the question of whether you’re guilty or innocent? That there is no such thing as absolute truth or falsity? Or would you be screaming, “I didn’t do it. Look at the evidence. I didn’t do it.” Nor would you take much comfort in the claim, “It all depends on your point of view, doesn’t it?” Or “what paradigm are you in?”

(I like the way this makes clear that relativism is not an answer and Kuhn's incommensurability is of no use, and in fact a real danger as a glib answer. Morris still harbours a grudge against Kuhn, and I have one for a "philosophy of education" professor I had who treated my claim that moral judgements could be objective as rank foolishness. But I didn't buy into ethical relativism, and that outraged this professor. I am a realist who can subscribe to G. E. Moore's "ethical non-naturalism" or to Sam Harris' program to develop a science of morality. I waver, but one thing I'm sure of: moral judgements are about real facts, not subjective whims or culturally defined tastes.)

It’s always been unclear to me, in social protest, is the important thing to be there, to be arrested, to be beaten, to be in the newspaper, to be booked, or to be incarcerated? Maybe all of the above.

Given the events in Wisconsin and other states, the above questions are as relevant today as they were in the 1960s protests against the Vietnam war.

The issue of murder, mass murder, has stayed with me over the years. It’s certainly part of the film that I made with Robert S. McNamara, “The Fog of War.” I remember sitting in the Firestone Library and reading volumes upon volumes of the transcripts of the Nuremberg War Crimes Tribunal. Ultimately I had the opportunity to go with Robert McNamara to the International Criminal Court [ICC] in the Hague, to show “The Fog of War” to the court, and to answer questions with McNamara.

And my two favorite moments from that experience – going with McNamara to visit the archivist for the ICC. McNamara told him, “I wish that they had these statutes governing war crimes back when I was secretary of defense,” and the archivist replied, “But, sir, they did.” Another completely bizarre experience, beyond Kafkaesque, seeing Milosevic on the stand. None of the proceedings had anything whatsoever to do with the content of the charges against him. It was all procedural — procedures about procedures about procedures, epicycle upon epicycle upon epicycle. And yet, the knowledge that Milosevic’s crimes were being addressed, even if only in a vague and uncertain way, was gratifying. At least someone was doing something.

I'm from the same generation as Errol Morris, so I'm very much caught up in the same issues as he is. Not just graduate school, not just philosophy, not just war & mass murder, but a stance toward life that was prevalent in many of the 60s generation.

Years later, I’ve come to realize that there was a debate embodied here about the nature of language – of whether truth is socially constructed or whether it ultimately concerns the relationship between language and reality. I feel very strongly, even though the world is unutterably insane, there is this idea that we can reach outside of that insanity and find truth, some kind of certainty. ... There are endless obstacles and impediments to finding the truth – You might never find it; it’s an elusive goal. But there’s something to remember, there’s a world out there that we can apprehend, and it’s our job to go out there and apprehend it. It’s one of the deepest lessons that I’ve taken away from my experiences here.

This is more than philosophical. Morris makes it clear that this is a living issue with him, present in his films, and present in his engagement with the world.

Wednesday, March 9, 2011

Howard Engel's "The Man Who Forgot How to Read"


I recently re-read Oliver Sacks' The Mind's Eye and appreciated the bit about writer Howard Engel, so I decided to follow up by reading Engel's memoir about his experience having a stroke and developing alexia sine agraphia.

This book was a short but very interesting read. You get a feel for his life before and after, and you come to understand -- a bit! -- the struggles that a person with this disability has.

My mother had a brain tumour, had a bit of the parietal and occipital lobes removed, and developed "left neglect" and other cognitive deficiencies (particularly the inability to recognize faces). But she had no interest in either relating her sense of the world to these deficiencies or understanding why they had come about. She was an intelligent woman, but her horror at her condition left her wishing to ignore reality and spend her time talking about family, friends, and the past. Engel is unusual in being willing to go into fair detail about the symptoms of his condition.

The book alerts the reader to the "little details" that come up with a stroke patient, things that somebody who doesn't suffer these deprivations is slow to recognize. Luckily he had family and close friends who provided a solid support team for him. But despite this you get a feeling for the struggles he underwent.

Another aspect the book brings across is personality. My mother simply gave up on life. Howard Engel fought hard to recover as much of his previous life as possible. In the real world, different people respond to things differently. There is no predicting who will respond how. You can guess, but you are likely to be wrong.

The book is not an unremitting trudge through tragedy. Engel has a light hand and finds humour in his situation. He keeps the tone light so that the book is informative but not depressing.

It is well worth reading.

Sunday, February 20, 2011

Fear and Loathing of the Day the Machines Take Over

Here are some bits from an article in The Atlantic magazine about the recent man vs. machine contest in which IBM's Watson took on the two greatest Jeopardy! contestants and ground them into the dust:
Oh, that Ken Jennings, always quick with a quip. At the end of the three-day Jeopardy! tournament pitting him and fellow human Brad Rutter against the IBM supercomputer Watson, he had a good one. When it came time for Final Jeopardy, he and Rutter already knew that Watson had trounced the two of them, the best competitors that Jeopardy! had ever had. So, on his written response to a clue about Bram Stoker, the author of Dracula, Jennings wrote, "I, for one, welcome our new computer overlords."

Now, think about that sentence. What does it mean to you? If you are a fan of The Simpsons, you'll be able to identify it as a riff on a line from the 1994 episode, "Deep Space Homer," wherein clueless news anchor Kent Brockman is briefly under the mistaken impression that a "master race of giant space ants" is about to take over Earth. "I, for one, welcome our new insect overlords," Brockman says, sucking up to the new bosses. "I'd like to remind them that as a trusted TV personality, I can be helpful in rounding up others to toil in their underground sugar caves."

Even if you're not intimately familiar with that episode (and you really should be), you might have come across the "Overlord Meme," which uses Brockman's line as a template to make a sarcastic statement of submission: "I, for one, welcome our (new) ___ overlord(s)." Over on Language Log, where I'm a contributor, we'd call this kind of phrasal template a "snowclone," and that one's been on our radar since 2004. So it's a repurposed pop-culture reference wrapped in several layers of irony.

But what would Watson make of this smart-alecky remark? The question-answering algorithms that IBM developed to allow Watson to compete on Jeopardy! might lead it to conjecture that it has something to do with The Simpsons -- since the full text of Wikipedia is among its 15 terabytes of reference data, and the Kent Brockman page explains the Overlord Meme. After all, Watson's mechanical thumb had beaten Ken and Brad's real ones to the buzzer on a Simpsons clue earlier in the game (identifying the show as the home of Itchy and Scratchy). But beyond its Simpsonian pedigree, this complex use of language would be entirely opaque to Watson. Humans, on the other hand, have no problem identifying how such a snowclone works, appreciating its humorous resonances, and constructing new variations on the theme.

All of this is to say that while Ken and Brad lost the battle, Team Carbon is still winning the language war against Team Silicon.

...

Those were two isolated gaffes in a pretty clean run by Watson against his human foes, but they'll certainly be remembered at IBM. For proof, see Stephen Baker's book Final Jeopardy, an engaging inside look at the Watson project, culminating with the Jeopardy! showdown in the final chapter. (In a shrewd marketing move, the book was available electronically without its final chapter before the match, and then the ending was given to readers as an update immediately after the conclusion of the tournament.) Baker writes:
As this question-answering technology expands from its quiz show roots into the rest of our lives, engineers at IBM and elsewhere must sharpen its understanding of contextual language. Smarter machines will not call Toronto a U.S. city, and they will recognize the word "missing" as the salient fact in any discussion of George Eyser's leg. Watson represents merely a step in the development of smart machines. Its answering prowess, so formidable on a winter afternoon in 2011, will no doubt seem quaint in a surprisingly short time.
Baker's undoubtedly right about that, but we're still dealing with the limited task of question-answering, not anything even vaguely approaching full-fledged comprehension of natural language, with all of its "nuance, slang and metaphor." If Watson had chuckled at that "computer overlords" jab, then I'd be a little worried.
I'm in the 1960s camp of AI. I believe that for machines to really be useful they need logic, not pattern matching algorithms or neural networks. The problem with the latter is that they create fragile "experts" that can fail unexpectedly and whose knowledge we cannot inspect: we cannot tell what they know and don't know. There was a classic case of a neural network trained for the military task of finding tanks in a scene. They tuned up the network and got some pretty good success, but on a new data set they discovered unexpected failures. What had gone wrong? The machine was finding shadows, not tanks. But the humans weren't aware of that. They simply fed it scenes, rewarded "correct guesses", and punished wrong guesses. They had no way of knowing what the underlying machine-learned "algorithm" was really "identifying".
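
The tank story is easy to reproduce in miniature. Below is a toy sketch (my own construction, not the original military study) in which a simple learner is trained on data where a spurious cue -- "brightness", standing in for the shadows in the anecdote -- happens to correlate with the tank label. It aces the training data and collapses toward chance on a test set where the lighting no longer cooperates:

import numpy as np

rng = np.random.default_rng(0)

def make_scenes(n, spurious):
    """Each scene is [brightness, weak_tank_signal]; label 1 = tank."""
    labels = rng.integers(0, 2, n)
    tank_signal = rng.normal(0.2 * labels, 1.0)  # weak genuine cue
    if spurious:
        # Training flaw: all the tank photos were taken on dark days.
        brightness = rng.normal(1.0 - 2.0 * labels, 0.5)
    else:
        # Honest test set: lighting is unrelated to tanks.
        brightness = rng.normal(0.0, 0.5, n)
    return np.column_stack([brightness, tank_signal]), labels

def train_logistic(X, y, lr=0.1, steps=2000):
    # Bare-bones logistic regression fit by gradient descent.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean((X @ w > 0) == y)

X_tr, y_tr = make_scenes(2000, spurious=True)
X_te, y_te = make_scenes(2000, spurious=False)
w = train_logistic(X_tr, y_tr)
print("train accuracy:", accuracy(w, X_tr, y_tr))   # high: looks "solved"
print("test accuracy: ", accuracy(w, X_te, y_te))   # near chance
print("weights [brightness, tank_signal]:", w)      # brightness dominates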

I'm willing to let AI make use of the dumb algorithms, but unless the machine system includes some real intelligence, i.e. logic and concepts manipulated in a human-like way, there will never be a "Team Silicon" to threaten "Team Carbon".

Wednesday, January 19, 2011

Read This Post If You Feel A Need to Be Insulted

This is a bit from a blog posting entitled "Science Proves You're Stupid":
Hey numbnuts, cognitive science demonstrates that you're not bright enough to realize what a clusterfuck your life is, because you're wired to tell yourself a coherent story after the fact. Microsecond by microsecond, your neocortex spins a story that says: "I meant to do that." Your conscious mind thinks it's Sherlock Holmes, but really it's Maxwell Smart, tripping through life and weaving coherent excuses to maintain the illusion of control.

Take a look at your life, for instance, dipshit. How much did you completely screw up and blame on others, and how much of the good stuff did you stumble into randomly, then take credit for as if you planned it all along?

More so than you think. Clever experiments with memory recall show how we cast narratives back to justify what happened. We think our lives have meaning to the extent we are able to look back and pick and choose the events that draw a coherent narrative, then we unconsciously alter all those events to confirm what we want to believe about ourselves.

When it comes to our self-assessments, we are all susceptible to the Lake Wobegon Phenomenon: When quizzed, most people rate themselves as smarter, more attractive, more optimistic, better leaders, and less biased than average. Even if you beat the average in one of these domains, the chances of you beating the average in all five domains is slim. Chances are, you're below average in more than one of these domains. How do I know this? I'm smarter, more charming, a better leader, and less biased than most people.

I had a chance to talk to the class bully from my high school, who told me about how good life had been to him. I decided not to mention this was the first nonviolent encounter we ever had. He brought up a mentally handicapped guy who got beaten even worse than me and boasted that nobody messed with that kid when he was around. I stared politely into his face amazed at what a deluded sense he had of himself. I remembered him as relentlessly, inexhaustibly evil. For an instant I wondered if I should question my sense of myself as a mature, faultless victim whose rapier witticisms should have provoked applause rather than pounding, but then I thought better of it.

The mind has a mind of its own. But even that's not in charge.
The above is truly based on science, even if it is delivered in a particularly insulting and demeaning way. You would think the author has a grudge and is taking it out on you. But the web site -- cryptically named h+ -- is published by Humanity+, which bills itself as the world's leading nonprofit for the ethical use of technology to extend human capabilities. Now, don't you feel better about all those insults?
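Incidentally, the "all five domains" arithmetic in the quote above is easy to check. A minimal sketch, assuming (generously, and only for illustration) that beating the average in each domain is an independent 50/50 event:

```python
# Back-of-envelope check of the Lake Wobegon claim.
# Assumption: "beating the average" in each domain is an independent coin flip.
p_one = 0.5
p_all_five = p_one ** 5
print(p_all_five)  # 0.03125 -- roughly a 3% chance of being above average across the board
```

Yet far more than 3% of people rate themselves above average in all of these domains at once.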

Friday, November 26, 2010

Dissolving the Barrier between You and Other Human Beings

Here is V. S. Ramachandran talking about "empathy neurons" at a 2009 TED conference. Watch the whole video, but if you go to 5:00 you will see the bit about how you can dissolve the difference between yourself and others...



His hypothesis, that these empathic motor neurons gave rise to civilization, is a fascinating one.

Sunday, October 24, 2010

The Conjunction Fallacy

The foibles of human credulity always entertain me. Sure, I fall for them too, but at least I try to be aware of them and limit my mistakes. The tragedy is that there are a lot of people who simply refuse to recognize the limits of their intelligence and the extent of their credulity.

Here's a bit from an article published in the NY Times on the Opinionator blog, a blog for philosophers and their ideas. The article is by the mathematician John Allen Paulos:
The so-called “conjunction fallacy” suggests another difference between stories and statistics. After reading a novel, it can sometimes seem odd to say that the characters in it don’t exist. The more details there are about them in a story, the more plausible the account often seems. More plausible, but less probable. In fact, the more details there are in a story, the less likely it is that the conjunction of all of them is true. Congressman Smith is known to be cash-strapped and lecherous. Which is more likely? Smith took a bribe from a lobbyist or Smith took a bribe from a lobbyist, has taken money before, and spends it on luxurious “fact-finding” trips with various pretty young interns. Despite the coherent story the second alternative begins to flesh out, the first alternative is more likely. For any statements, A, B, and C, the probability of A is always greater than the probability of A, B, and C together since whenever A, B, and C all occur, A occurs, but not vice versa.

This is one of many cognitive foibles that reside in the nebulous area bordering mathematics, psychology and storytelling. In the classic illustration of the fallacy put forward by Amos Tversky and Daniel Kahneman, a woman named Linda is described. She is single, in her early 30s, outspoken, and exceedingly smart. A philosophy major in college, she has devoted herself to issues such as nuclear non-proliferation. So which of the following is more likely?

a.) Linda is a bank teller.

b.) Linda is a bank teller and is active in the feminist movement.

Although most people choose b.), this option is less likely since two conditions must be met in order for it to be satisfied, whereas only one of them is required for option a.) to be satisfied.

(Incidentally, the conjunction fallacy is especially relevant to religious texts. Imbedding the God character in a holy book’s very detailed narrative and building an entire culture around this narrative seems by itself to confer a kind of existence on Him.)
That last bit about religious credulity should hit home with lots of people. But it doesn't. Most of them simply refuse to accept modern science and the understanding of the human mind's cognitive illusions and errors.
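You can watch the conjunction rule assert itself with a few lines of simulation. The marginal probabilities below are invented for illustration; the point is only that the count for "A and B" can never exceed the count for "A" alone:

```python
import random

random.seed(1)
N = 100_000
count_a = count_a_and_b = 0
for _ in range(N):
    bank_teller = random.random() < 0.05  # invented P(Linda is a bank teller)
    feminist = random.random() < 0.60     # invented P(Linda is active in the feminist movement)
    if bank_teller:
        count_a += 1
        if feminist:
            count_a_and_b += 1

print(count_a / N)        # ~0.05 : P(teller)
print(count_a_and_b / N)  # ~0.03 : P(teller AND feminist) -- necessarily no larger
```

However vivid the conjoined story is, every extra detail can only shrink the count.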

Saturday, October 9, 2010

Allan Snyder and Transcranial Magnetic Stimulation

Here is a snippet from a National Geographic documentary on Allan Snyder's research into transcranial magnetic stimulation:



From an article by Allan Snyder in the Philosophical Transactions of the Royal Society B (Biological Sciences):
Savants cannot normally give insights into how they perform their skill and are uncontaminated by learnt algorithms. It just comes to them. They just see it. With maturity, the occasionally offered insights are suspect, possibly being contaminated by the acquisition of concepts concerning the particular skill. Yet, I have labelled one savant, Daniel Tammet, a Rosetta stone (Johnson 2005).

By far, the most compelling argument for savant skills residing equally within everyone is that they can emerge ‘suddenly and spontaneously’ (Miller et al. 2000, p. 86) in individuals who had no prior history for them, either in interest, ability or talent (Treffert 2006; Sacks 2007, pp. 157 and 313). Striking examples include skills in art, music (Sacks 2007), mathematics (Treffert 2006, p. 85), calendar calculating (LaFay 1987; Osborne 2003) and possibly AP (Zatorre 1989, see p. 573). The same appears to hold for synaesthesia (Sacks 2007, p. 180), as theory suggested (Snyder & Mitchell 1999), which is reported frequently by autistic savants (Heaton et al. 1998; Sacks 2007; Tammet 2007, 2009). Furthermore, these acquired savant skills have been known to diminish with recovery from illness (Sacks 2007, p. 315).

Acquired savants arise from a variety of causes (Treffert 2006) including left frontotemporal dementia (Miller et al. 1998), physical injury to the left temporal lobe (LaFay 1987; J. Hirsch & A. Snyder 2005, personal communication), left hemispheric strokes (Sacks 2007, p. 315), severe illness to the central nervous system (Treffert 2006) and even when under the influence of hallucinogens (Humphrey 2002; Sacks 2007, p. 181).
There are more publications here on Snyder's web site.

Here's a previous post on this topic.

Saturday, October 2, 2010

Nailing Down Who You Are

Here's the latest in a long line of hypotheses about what makes you "you": Sebastian Seung telling us that we are our "connectome". Here is Seung giving a talk at a TED conference:



I'm not convinced. This is but another claim of reducing complexity to something simpler and comprehensible. I'm all for reductionism, but I'm not big on the idea that the complex can be reduced to the simple. I'm willing to accept that the connectome may capture some essential parts of the mind and thinking, but I think there will be many phenomena that dance just outside the connectome, because they live in the complexity of activation of neurons and even the chemistry inside neurons. What evidence do I have? Well, experiments show that you can control brain states via magnetic fields, e.g. read this. Those fields don't change the connectome, but they sure change behaviour! I suspect that when the brain story is fully worked out it will have chapters at the level of structure (the gross morphology of the brain and its functional areas), of detailed connection (the connectome), of activity (neural activity), of chemistry (the bath of neurotransmitters and the states of the gates that excrete or accept these chemicals), and of gene expression (activities deep in the DNA, with events such as epigenetics). In short, the whole story will be immensely complicated and muddled. Probably far too vast for us to really understand, but we may be able to make models of mind (computer simulations) that can help us find a way toward manipulating and, in some sense, 'understanding' this complexity.

Tuesday, September 21, 2010

Consciousness Science

Carl Zimmer has an interesting article on consciousness in the NY Times. Here is a bit to give you a taste:
For Dr. Tononi, sleep is a daily reminder of how mysterious consciousness is. Each night we lose it, and each morning it comes back. In recent decades, neuroscientists have built models that describe how consciousness emerges from the brain. Some researchers have proposed that consciousness is caused by the synchronization of neurons across the brain. That harmony allows the brain to bring together different perceptions into a single conscious experience.

Dr. Tononi sees serious problems in these models. When people lose consciousness from epileptic seizures, for instance, their brain waves become more synchronized. If synchronization were the key to consciousness, you would expect the seizures to make people hyperconscious instead of unconscious, he said.

While in medical school, Dr. Tononi began to think of consciousness in a different way, as a particularly rich form of information. He took his inspiration from the American engineer Claude Shannon, who built a scientific theory of information in the mid-1900s. Mr. Shannon measured information in a signal by how much uncertainty it reduced.

...

Consciousness is not simply about quantity of information, he says. Simply combining a lot of photodiodes is not enough to create human consciousness. In our brains, neurons talk to one another, merging information into a unified whole. A grid made up of a million photodiodes in a camera can take a picture, but the information in each diode is independent from all the others. You could cut the grid into two pieces and they would still take the same picture.

Consciousness, Dr. Tononi says, is nothing more than integrated information. Information theorists measure the amount of information in a computer file or a cellphone call in bits, and Dr. Tononi argues that we could, in theory, measure consciousness in bits as well. When we are wide awake, our consciousness contains more bits than when we are asleep.

...

Networks gain the highest phi possible if their parts are organized into separate clusters, which are then joined. “What you need are specialists who talk to each other, so they can behave as a whole,” Dr. Tononi said. He does not think it is a coincidence that the brain’s organization obeys this phi-raising principle.

Dr. Tononi argues that his Integrated Information Theory sidesteps a lot of the problems that previous models of consciousness have faced. It neatly explains, for example, why epileptic seizures cause unconsciousness. A seizure forces many neurons to turn on and off together. Their synchrony reduces the number of possible states the brain can be in, lowering its phi.
Personally I don't think this research is on the right track. Information theory is great for treating signals, coding, and noise, but I don't see it as particularly useful for measuring "consciousness". But it is interesting to put another contender in the ring. Maybe it will jog somebody into coming up with a better theory.
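To make the "bits" talk concrete: Shannon's measure is just entropy, and it is easy to compute. This is a minimal sketch of that bookkeeping, not Tononi's phi (which is notoriously hard to calculate); it only illustrates "quantity of information", including the photodiode point that lots of independent bits carry no integration:

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # 1.0 bit: a fair coin resolves maximal uncertainty
print(entropy([0.99, 0.01]))  # ~0.08 bits: a nearly-certain outcome tells you little

# A grid of n independent photodiodes has n bits of raw capacity, but cutting
# the grid in half loses nothing -- zero integration, on Tononi's view.
n = 1_000_000
print(n * entropy([0.5, 0.5]))  # 1,000,000 bits, none of them "integrated"
```

The entropy sums add up the same way whether the parts talk to each other or not, which is exactly why Tononi needs a separate quantity, phi, to capture integration.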

It is obvious that integration is an essential element of consciousness, but reducing it to the ability to track a "ping" or stimulus through centres in the brain strikes me as simplistic. I'm in agreement with David Chalmers, who is quoted at the end of the article:
Other researchers view Dr. Tononi’s theory with a respectful skepticism.

“It’s the sort of proposal that I think people should be generating at this point: a simple and powerful hypothesis about the relationship between brain processing and conscious experience,” said David Chalmers, a philosopher at Australian National University. “As with most simple and powerful hypotheses, reality will probably turn out to be more complicated, but we’ll learn something from the attempt. I’d say that it doesn’t solve the problem of consciousness, but it’s a useful starting point.”

Monday, August 16, 2010

Choice Blindness

I hadn't heard of this until I read the following bit in an article by Jonah Lehrer on his Wired News blog The Frontal Cortex:
The problem with our sensory world – this “blooming, buzzing confusion” of sights, sounds and smells – is that we put so much faith in it. We believe that we experience the world as it is, and that our sensations are an accurate summary of reality.

But that’s a convenient illusion. In fact, it is the one illusion that makes every other perceptual illusion possible. Although we’re convinced that we’re living in an Ingres canvas – full of exquisite detail and verisimilitude – we actually inhabit a post-impressionist painting, rife with empty spaces and abstraction. It’s a world so full of ambiguities that it requires constant interpretation.

I’m most interested in the practical consequences of our sensory flaws. Let’s begin with this clever paper, published earlier this year in Cognition. The study was led by Lars Hall, at Lund University. It was inspired by a 2005 study, led by Petter Johansson, that showed male subjects a pair of female faces. The subjects were asked to choose the face that they found more attractive. Then, the mischievous scientists used a “card trick” to reverse the outcome of the choice. Here’s where the results get a little sad: Less than 30 percent of subjects noticed that their choice had been changed. Our eyes might have preferences, but this doesn’t mean our mind can remember them.

In this latest study, Hall and colleagues sought to extend this phenomenon – it’s known as choice blindness – to the world of smell and taste. (The paper is called “Magic at the marketplace: Choice blindness for the taste of jam and the smell of tea”.) They asked 180 consumers at a supermarket to participate in a quick little experiment. (The scientists pretended to be “independent consultants contracted to survey the quality of the jam and tea assortment” in the retail store.) The consumers were told to focus on the taste of the jam and the smell of the tea, and were asked to pick their preferred product when given a variety of different samples. For instance, a participant might be asked to choose between Ginger and Lime jam, or Cinnamon-Apple and Grapefruit. If they were smelling teas, then they might be given a choice between Apple Pie versus Honey, or Pernod versus Mango.

Here’s where things get tricky. I’ll let the scientists describe their method, in which they slyly reversed the preferences of the hapless consumers:
In a manipulated trial, the participants were presented with the two prepared jars. After tasting a spoon of jam from the first jar, or taking in the smell of the tea, they were asked to indicate how much they liked the sample on a 10-point scale from ‘not at all good’ to ‘very good’. While Experimenter 1 solicited the preference judgment, and interacted with the participants, Experimenter 2 screwed the lid back on the container that was used, and surreptitiously turned it upside down. After the participants had indicated how much they preferred the first option, they were offered the second sample, and once again rated how much they liked it. As with the first sample, Experimenter 2 covertly flipped the jar upside down while returning it to the table. Immediately after the participants completed their second rating, we then asked which alternative they preferred, and asked them to sample it a second time, and to verbally motivate [explain] why they liked this jam or tea better than the other one.
At first glance, this seems like a ridiculous experiment. It’s hard to believe that, when asked to choose between Cinnamon-Apple and Grapefruit jam, I wouldn’t notice the difference. Or that, after choosing Mango tea over Pernod, I would fail to realize that I was actually being given Pernod.

And yet, that’s exactly what happened. According to the scientists, less than a third of participants realized at any point during the experiment that their preferences had been switched. In other words, the vast majority of consumers failed to notice any difference between their intended decision (“I really want Cinnamon-Apple jam”) and the actual outcome of their decision (getting bitter grapefruit jam instead).* We spend so much time obsessing over our consumer choices – I just spent ten minutes debating the merits of Guatemalan coffee beans versus Indonesian beans – but this experiment suggests that all this analysis is an enormous waste of mental energy. I could have just gotten Sanka: My olfactory system is too stupid to notice the difference.

What’s most unsettling, however, is that we are completely ignorant of how fallible our perceptions are. In this study, for instance, the consumers were convinced that it was extremely easy to distinguish between these pairs of jam and tea. They insisted that they would always be able to tell grapefruit jam and cinnamon-apple jam apart. But they were wrong, just as I’m wrong to believe that I would be able to reliably pick out the difference between all these different coffee beans. We are all blind to our own choice blindness.
I know it sounds incredible. But I've read enough about "memory bias" and "cognitive bias" to recognize how weak our perceptual and attentional resources really are, and how much confabulatory "filling in" our conscious mind provides. This Jonah Lehrer article simply adds to the pile. It is really depressing, but we are not very impressive as "intelligent" sensory-motor machines.

Tuesday, August 10, 2010

What Do Words Buy You?

Here is a fascinating half hour radio show -- NPR's Radiolab -- discussing words.

First they interview a brain-damaged woman who lost her ability to read, became a teacher of sign language to the deaf, and encountered a 27-year-old man who had no language. She explains how she discovered that he had no language, her efforts to give him language, and the wonderful tale of when he "got it", his own Helen Keller moment.

Next some language researchers talk about their research. The one that grabs me is the exploration of the fact that rats can't connect spatial thoughts with colours; the researchers claim that human kids fail at this task until they are six years old. They claim the crucial breakthrough for humans is using language to connect the separate centres of the brain that deal with spatial concepts and colour concepts.

Finally they have a bit where a Shakespearean scholar talks about Shakespeare's "word chemistry" and the new words he left us with.

It is an excellent program. Listen to it.

Here is the web site for NPR's Radiolab show for that day with two other bits to listen to.

Saturday, July 31, 2010

Brain in a Box

Here's some research being done at IBM:



This project is a bit ahead of schedule. Here's a graph by Hans Moravec from the 1990s estimating how soon human-like intelligence could be achieved by computers:
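The arithmetic behind a Moravec-style curve is simple compounding. The numbers below are illustrative assumptions, not Moravec's exact figures: pick a brain-equivalent compute target, a starting point, and a doubling time, and the crossover year falls out:

```python
# Illustrative Moravec-style projection; every number here is an assumption.
brain_mips = 1e8      # assumed brain-equivalent compute, in MIPS
mips_1998 = 1e3       # assumed compute of a ~$1000 machine in 1998
doubling_years = 1.5  # assumed Moore's-law doubling time

year, mips = 1998.0, mips_1998
while mips < brain_mips:
    mips *= 2
    year += doubling_years

print(round(year))  # ~2024 under these assumptions
```

Note how sensitive the answer is: nudge the doubling time or the brain estimate by a factor of a few and the crossover year slides by decades, which is why such graphs should be read as rough extrapolations, not predictions.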

As I listen to Dharmendra Modha, I hear too much general fuzzy stuff. This guy is very, very far from achieving what he claims. Going back to the mid-1950s, computer scientists have been promising that a "brain in a box" will be achieved in a decade or two. He needs to think less about ubiquitous implementation and "productivity" and more about the hard problem of coming up with a wiring that can process inputs, "think" about them, and give results. This is so far from being achievable as to be laughable.

Back in the 1950s linguists thought they could produce an automated natural language translator. If you have used Google Translate, you know that more than half a century later we are still very, very far from achieving even this limited goal.

If you look at Douglas Lenat's Cyc project, for over 25 years it has been trying to build a base of knowledge rules that would let a computer take in information and reason about it as a human would. That project is nowhere near being ready to deal with real-world communication. For limited domains I'm sure it is helpful, but that is many orders of magnitude from what was promised when the project started.

Here is a Wired Magazine article that looks at Dharmendra Modha's research and calls it a "scam" (the charge comes from another IBM researcher trying to build his own brain in a box!). I wouldn't call it a scam. I have written research proposals; the pressure is always on to hype your project to get funding. Modha is guilty of "excessive enthusiasm", but without it he wouldn't have gotten any funding. It is a "hoax" only in the sense that the research goals will not be met within the project's timeframe or budget. In fact, the timeframe will be off by at least an order of magnitude, and the budget will be off by several orders of magnitude: easily 3, maybe 5, or even 8.

Friday, July 16, 2010

Christopher Chabris & Daniel Simons' "The Invisible Gorilla"


This is an excellent popularization of cognitive science. The authors present stunning examples to illustrate the underlying science. The book is full of delightful insights into how we fool ourselves about our "abilities". It is a good antidote to overconfidence.

Here's the argument of the book in a nutshell taken from the conclusion:
... common sense has another name: intuition. What we intuitively accept and believe is derived from what we collectively assume and understand, and intuition influences our decisions automatically and without reflection. Intuition tells us that we pay attention to more than we do, that our memories are more detailed and robust than they are, that confident people are competent people, that we know more than we really do, that coincidences and correlations demonstrate causation, and that our brains have vast reserves of power that are easy to unlock. But in all these cases, our intuitions are wrong, and they can cost us our fortunes, our health, and even our lives if we follow them blindly.
The book attempts to educate the reader about what science actually knows about our mind and its capabilities. The goal is to rein in the illusions we have about ourselves. These authors are not dogmatic. They are open to other viewpoints and recognize the complexity of the field. I enjoyed how they handled Malcolm Gladwell and his intuitive decision making as expressed in the book Blink:
... just as Gladwell's kouros story does not prove that intuition trumps analysis, our Wise story does not prove that analysis always trumps intuition. Intuition has its uses, but we don't think it should be exalted above analysis without good evidence that it is truly superior. The key to successful decision-making, we believe, is knowing when to trust your intuition and when to be wary of it and do the hard work of thinking things through.
Here's their website about the book: http://www.invisiblegorilla.com/. At this site you can follow a blog and look at other material. I especially enjoy the set of videos they have made available...

Here's the eponymous video:



Here's one that shows that you don't "see" what is right in front of your eyes:



There are six more videos at the book's site. Watch them!

Yep... I flubbed the "gorilla" test (the first video), and I was duped by all the other tests, wow! Sure, I was incredulous when I was first shown the "invisible gorilla" video. How could I miss something that obvious? But... I did. I love it when something this simple can point up human "frailties" that we all believe ourselves not to have. Well, the book is jam-packed with similar examples. Lots of curious, interesting, and utterly flabbergasting displays of human hubris in the face of our puny mental abilities. Read it and be humiliated. But... become wiser. As Socrates said, "Know thyself!"

Update 2010oct08: Ernest Davis, professor of computer science at New York University and author of Representations of Commonsense Knowledge (1990) and Representing and Acquiring Geographic Knowledge (1986), has a detailed review of The Invisible Gorilla here. Unfortunately Davis is a hostile reviewer:
Unfortunately, even in these early chapters the quality of the evidence presented is very uneven, and some of the arguments made are weak. The authors discuss at some length the Joshua Bell stunt in which the violinist played his Stradivarius at a subway entrance and was ignored by all but a few of the passersby. The authors analyze this in terms of the illusion of attention, but there is no reason whatever to suppose that Bell was invisible to the commuters in the sense that the gorilla was invisible to the subjects watching the basketball video. All the Bell “experiment” proves is that commuters hurrying home are generally not at leisure to stop and listen to a violin recital. At most, it sheds a sad light on the pressures of life in the 21st century.
I disagree. The Bell "stunt" is a fine example of how a "prepared" mind will see something that an unprepared mind won't. When you go to a Joshua Bell concert, you are prepared to hear a world-class violinist. When you are rushing home on a subway, you don't expect a world-class performance from a street musician, and therefore you don't attend to the music. And guess what? You won't notice that it is a world-class performance!

I find it funny that Davis acts like a stereotypical academic. He vigorously attacks his "opponent" for minor slipshod steps in reasoning while ignoring his own rather haphazard "reasoning". For example:
Much of the chapter is given over to decrying parents who refuse to have their children vaccinated because of the supposed relation to autism. Chabris and Simons characterize this decision entirely as an instance of the illusion of cause. Certainly, cognitive illusions play a role, but many other factors are involved, including a well-founded public distrust of the pharmaceutical industry.
Wow! He precedes this with a quibble over a claim by Chabris and Simons that you need experiments to distinguish cause and effect. He is right to point out that this claim is overly broad. But in the very next paragraph (the one above) Davis makes a much broader, completely unsubstantiated claim: "a well-founded public distrust of the pharmaceutical industry". Is Davis really going to stand behind that? Where's the evidence and logic for such a sweeping claim? A wonderful example of the pot calling the kettle black!

I find the review interesting because it is helpful to look at negative as well as positive reviews. You can often learn more from opponents than from supporters. Davis makes some good points, but he is what I call a "typical academic" in his eagerness to overplay his hand with a hyper-critical approach. It is a little off-putting, but not enough to say "don't read the review".

I don't mind Davis being so negative, but I prefer to focus on the positive. Chabris and Simons have written an excellent book for the educated public. They should be congratulated for it, not raked over the coals of hyper-critical, nit-picking academic squabbling. The book is valuable for any general reader. It is worth your time. Read it! Look at Davis' criticisms for some cautions against taking Chabris and Simons too enthusiastically and too far. But don't let Davis prevent you from reading this book!

Friday, July 2, 2010

Errol Morris on Anosognosia

Here are some bits from a 5-part series that Errol Morris has written on anosognosia:

Part 1:
An anosognosic patient who is paralyzed simply does not know that he is paralyzed. If you put a pencil in front of them and ask them to pick up the pencil in front of their left hand they won’t do it. And you ask them why, and they’ll say, “Well, I’m tired,” or “I don’t need a pencil.” They literally aren’t alerted to their own paralysis. There is some monitoring system on the right side of the brain that has been damaged, as well as the damage that’s related to the paralysis on the left side. There is also something similar called “hemispatial neglect.” It has to do with a kind of brain damage where people literally cannot see or they can’t pay attention to one side of their environment. If they’re men, they literally only shave one half of their face. And they’re not aware about the other half. If you put food in front of them, they’ll eat half of what’s on the plate and then complain that there’s too little food. You could think of the Dunning-Kruger Effect as a psychological version of this physiological problem. If you have, for lack of a better term, damage to your expertise or imperfection in your knowledge or skill, you’re left literally not knowing that you have that damage. It was an analogy for us.
Part 2:
June 11, 1914. In a brief communication presented to the Neurological Society of Paris, Joseph Babinski (1857-1932), a prominent French-Polish neurologist, former student of Charcot and contemporary of Freud, described two patients with “left severe hemiplegia” – a complete paralysis of the left side of the body – left side of the face, left side of the trunk, left leg, left foot. Plus, an extraordinary detail. These patients didn’t know they were paralyzed. To describe their condition, Babinski coined the term anosognosia – taken from the Greek agnosia, lack of knowledge, and nosos, disease.

...

There were many unanswered questions in Babinski’s original paper. Did the anosognosic patient have absolutely no knowledge or some limited knowledge of her left-side paralysis? Was there a blocked pathway in the brain? Was the anosognosia an organic (or somatic) disease? Or a derangement of thought? Was she in some sort of trance? Babinski also noted that many of his anosognosic patients developed odd rationalizations. When he asked them to move their left (paralyzed) arms, they would decline to do so, offering a myriad of implausible excuses. (Furthermore, not all of his patients with left-side paralysis were clueless about their condition. Some patients had knowledge of their paralysis but were oddly indifferent to it. For these patients, Babinski coined the term anosodiaphoria, or indifference to paralysis.)
Part 3:
Discusses the stroke President Woodrow Wilson suffered in 1919, whether he had anosognosia, and whether this illness caused the US not to ratify the treaty for the League of Nations.
Part 4:
V.S. Ramachandran has written about anosognosia in a number of journal articles and in his extraordinary book with Sandra Blakeslee, “Phantoms in the Brain.” Ramachandran rarely settles for the status quo. If there is something unexplained, he pursues it, trying to provide an answer, if not the answer. He has made a number of spectacular discoveries, most famous among them his innovative use of mirror-boxes to treat phantom limb syndrome. Rather than devise complex experiments, he prefers simple intuitive questions and answers. His work on anosognosia is a perfect example.

Ramachandran was taken in by a question that haunts Babinski’s original work on anosognosia — the question of whether the anosognosic knows (on some level) about the paralysis. What is going on in an anosognosic brain? (Babinski’s original question: Is it real?) Almost any deficit can be explained as volitional. How do you know that an anosognosic patient is really in denial, or oblivious, or indifferent to his/her paralysis? How do you know that the patient is not feigning illness? This was a critical question during World War I, when neurologists had to deal with a flood of injured soldiers and had to discriminate between the truly damaged and those just malingering.

...

V.S. RAMACHANDRAN: Well, you can have anosognosia for Wernicke’s aphasia [a neurological disorder that prevents comprehension or production of speech] or you can have it for amnesia. Patients that are amnesic don’t know they are amnesic. So, it has a much wider, broader usage. Although it was originally discovered in the context of hemiplegia by Babinski and is most frequently used in that context, the word has a broader meaning. Wernicke’s aphasiacs are completely lacking in language comprehension and seem oblivious to it because [although] they smile, or they nod to whatever you say, they don’t understand a word of what you’re saying. They have anosognosia for their lack of comprehension of language. It’s really spooky to see them. Here’s somebody producing gibberish, and they don’t know they’re producing gibberish.
Part 5:
DAVID DUNNING: Here’s a thought. The road to self-insight really runs through other people. So it really depends on what sort of feedback you are getting. Is the world telling you good things? Is the world rewarding you in a way that you would expect a competent person to be rewarded? If you watch other people, you often find there are different ways to do things; there are better ways to do things. I’m not as good as I thought I was, but I have something to work on. Now, the sad part about that is — there’s been a replication of this with medical students — people at the bottom, if you show them what other people do, they don’t get it. They don’t realize that what those other people are doing is superior to what they’re doing. And that’s the troubling thing. So for people at the bottom, that social comparison information is a wonderful piece of information, but they may not be in a position to take advantage of it like other people.

...

For years, I have had my own version of the story of the expulsion from the Garden of Eden. In my version, God appears before Adam and Eve, and tells them that they have disobeyed Him. He admonishes them, and they will have to leave immediately. Everything will be completely grotesque, grim, ghastly and gruesome outside of Eden. God spares them no detail. Adam and Eve, both crestfallen and fearful, prepare to leave, but God, feeling perhaps a little guilty for the severity of his decision, looks at them and says, “Yes, things will be bad out there, but I’m giving you self-deception so you’ll never notice.”
The above series of articles provokes a lot of questions and thoughts. It got me thinking about my mother...

My mother died of a brain tumour. The effect of the tumour was poor balance and problems with manipulating things with her left hand. After the surgery to remove the tumour, she was left with "left neglect", the anosognosia described in the above articles. I was horrified that the surgeon whose "treatment" worsened the condition and led to her death within three weeks never took any responsibility for his clumsy surgery. I was further horrified by the poor treatment my mother got in hospital. My family is only two generations removed from farming, where you cared for the sick and dying at home. My mother would have gotten superior help in her last three weeks by being at home. But she was in a "medical" system that condemns people to die in the brutal environment of a hospital where paid workers show minimal regard for their charges. For my mother this evidenced itself in the following ways:
  • They persisted in putting the service call button on her left side, but with left neglect she was completely unaware of its presence, so she was effectively abandoned until family noticed the mistake and put it on the right, or until a staff change and remaking of the bed moved it to the right side. The worst incident resulting from this "oversight" by the nursing staff was when she was left on a hard metal bedpan for four hours because she couldn't find the service bell and her voice was too weak to get the attention of staff.

  • The orderlies were extremely clumsy moving her to and from the bed. She was limited in how she could help them because of her left-side paralysis. They invariably stepped on her bare feet with their shoes, hurting her and risking broken bones. She was fragile and at their mercy, and pointing this out to staff did nothing to prevent it from recurring.

  • Food was placed in front of her, and none of the staff showed any awareness that her left neglect meant she would not attend to any food on the left. So they left her to pick at the food on the right. Worse, they gave her utensils wrapped in plastic that is hard enough to open with two hands. In short, her meals were frustrating. She was already well underweight. She died within three weeks, in large part because of malnourishment. When I pointed out the problems with food to the hospital, they pretended to show concern, but they didn't ask any questions or follow up in any way that would indicate they truly wanted to learn from mistakes. One incident sticks in my mind: she was rushed from a "rehabilitation" hospital to emergency suffering from a severe infection. When we arrived, waited for the emergency care to be complete, and accompanied her to her hospital room, I requested food for her. The hospital said they couldn't provide anything because the cafeteria was closed and the dietician was not available. I kept insisting, and they finally brought her a "snack". She was a severely underweight, hungry woman, and she only got this "food" after protest, despite the fact that she had missed her supper because of the emergency room care.

  • I noticed that the hospital did have signs put on rooms & beds where patients had a risk of falling. This was to call the attention of the nursing staff to this condition and the need for extra care. My request for a similar sign to call attention to "left neglect" was ignored by the hospital. Consequently the level of care was haphazard at best. Only those staff who took the time to understand my mother's condition even had a possibility of taking appropriate care. This kind of neglect was endemic to the hospital situation. I spent months assuring the hospital that I was not interested in suing over my mother's death. I just wanted to make them aware of her problems and their inadequacies in caring for her. They showed no more than "superficial" concern, i.e. they acknowledged my letters and "assured" me that their procedures ensured that patients got the best possible care. They weren't interested in hearing about any failings on their part. They weren't interested in improving their practices.