Thursday, May 21, 2009

Read Montague's "How We Make Decisions"


This is a good, solid discussion of the brain, psychology, and decision-making. It introduces you to current research. My only complaint is that I find the writing style a bit murky.

Chapter 1: He views computation and simulation as the secret to bridging the brain/mind divide and understanding decision-making. He says that the computational theory of mind (we are just one big algorithm running on the wetware of the brain) is fundamentally right but incomplete. He argues that you need to add "value" to get the full story:
In my opinion, the early and confusing ideas about vitalism -- the concept that living systems possess some immaterial, indescribable "vital" essence -- emerged because this crucial distinction between information processing and physical substrates was not clearly understood. ... The computational theory of mind explains why thoughts require a properly operating brain to exist, yet it also shows why they (our thoughts) are not equivalent to the brain or its parts. It's the information processing that the brain carries out that is equivalent to our thoughts, not the parts themselves. But what about the meaning? The stuff the philosophers claim correctly is missing from the computational theory of mind? It is still missing in my description so far and will require another idea. ... How do these very mechanical- and economic-sounding ideas of computation, efficiency, and valuation relate to a human's ability to set and pursue goals, to trust someone, to make decisions that deny instincts, and so forth? The surprising answer is that efficient computations care -- or more precisely, they have a way to care. I know that sounds strange. And what exactly does an efficient computation care about? Goals.
Chapter 2: He argues that efficiency-driven evolution is what put values into the computations to make us intentional agents. He argues that living things are very different from human-made computers because they are slow and powerful, not fast and dumb. He names four principles that evolution enforces:
  • Drain batteries slowly

  • Save space

  • Save bandwidth

  • Have goals
Montague makes clear:
An efficient computational device requires goals in order to be able to decide which computations are good and which are less good or even bad. ... In order to survive, mobile creatures must be able to value the resources available and the choices they can make to acquire those resources. ... The idea of a value function or valuation mechanism is not complicated. A value function tells a machine the value of the current state of affairs -- simple as that.
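Montague's point that "a value function tells a machine the value of the current state of affairs" is easy to make concrete. Here is a minimal sketch in Python (my own illustration, not from the book; the states and numbers are invented):

```python
# A toy value function: it assigns a number to the current state of affairs.
# The states and values here are invented purely for illustration.

def value(state):
    """Return the machine's valuation of a given state."""
    values = {
        "fed_and_safe": 1.0,
        "hungry": -0.5,
        "in_danger": -1.0,
    }
    return values.get(state, 0.0)

# With a valuation mechanism, choosing among available resources is just
# picking the state the value function rates highest.
choices = ["hungry", "fed_and_safe", "in_danger"]
print(max(choices, key=value))  # fed_and_safe
```

Simple as that, as Montague says: once every state has a number, "deciding" reduces to comparing numbers.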
Chapter 3: He expands on the computational theory of mind by pointing out that all universal machines can emulate any other machine, so you can have simulations of simulation within a machine:
Why are the execution costs and intrinsic costs of algorithms important measures for an efficient computing system? The short answer: flexibility. A truly efficient computational device must be adaptive if it solves hard, real-world problems; it must be able to respond to unanticipated changes in the world around it and unanticipated errors that arise in its own operation. It needs not just one algorithm for any particular problem, but many algorithms that solve the same problem in different ways. To remain flexible, it must anticipate that there will be instances where it will need to get the same answer, but using different data, running under different conditions, or possibly using a dramatically different machine type. But how can a single computational system contain multiple machines? We must remember that machines can be simulations running on other machines. This is the central idea behind the virtual machine concept, an idea invented by Alan Turing in the early part of the twentieth century.
Once we can simulate alternatives, the next key step is to simulate to learn:
Once the brain could model sequences of action forward in time quickly and accurately, there was a capacity to represent what "could have been." And once this ability could be carried out "offline," the brain was off to the races. Simulating "what could have been" is a capacity central to learning, critical for representing ourselves and others, and it's an efficient way to know how and when to share information.
Chapter 4: In this chapter Montague argues that humans are unique in their ability to let abstract goals drive behaviour. He sees the human nervous system as able to adapt and adjust its internal structure to acquire a goal. He uses drug addiction as an example, calling it a "pathological software update" in which the brain is rewired. As goals are selected, the nervous system runs a four-step information cycle as a goal-seeking, learning, and decision-making process of reinforcement learning:
  1. holds the current goal in mind

  2. produces a critic signal for this goal

  3. uses the critic signal to guide choices and improve the brain's model of the goal

  4. selects the next goal (or keeps the current one active)
Critic signals are a subconscious warmer-colder game. The dopamine system of the brain underlies this reinforcement system.
The brain's critic signals do not possess a perfect "bird's-eye knowledge" of a goal; instead, they use stored experience (memory) and rich models of similar problems to make educated guesses about the value of possible future actions. As we said, these critics combine two important sources of information: information about immediate reward (feedback from "what I'm experiencing now") and judgments about future reward ("what I'm likely to get in the long-term future") to produce a kind of smart error signal -- driven by the present, informed by the past, and guided by the likely future.
Here is how he describes the critic signals as reinforcement learning:
All reinforcement learning systems have three major parts: (1) an immediate reinforcement signal that assigns a number to each state of the creature, (2) a stored value function that represents a judgment about the long-term value of each state, and (3) a policy that maps the agent's states to its actions.
Both "real" experience and "simulated" experience can generate critic signals. Dopamine neurons compute a reward-prediction signal and encode it as bursts of, and pauses in, electrical impulses.
Bursts of impulse activity mean "Reward is more than expected," pauses mean "Reward is less than expected," and no change means "Reward is just as expected."
Using this model, Parkinson's disease, drug addiction, and thought disorders can be interpreted.
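The three parts and the burst/pause code fit together neatly as a temporal-difference learner. Here is a toy Python sketch (my own illustration, not Montague's code; the states, rewards, and rates are invented):

```python
# Toy temporal-difference (TD) learner with Montague's three parts:
# (1) an immediate reward signal, (2) a stored value function, (3) a policy.
# States ("cue" -> "juice"), rewards, and rates are invented for illustration.

reward = {"cue": 0.0, "juice": 1.0}   # (1) immediate reinforcement per state
V = {"cue": 0.0, "juice": 0.0}        # (2) stored long-term value estimates
alpha, gamma = 0.5, 0.9               # learning rate, discount factor

def td_error(state, next_state):
    """Reward-prediction error: driven by the present, guided by the future."""
    return reward[next_state] + gamma * V[next_state] - V[state]

def dopamine_report(delta, tol=1e-3):
    """Map the error onto the burst/pause code quoted above."""
    if delta > tol:
        return "burst: reward is more than expected"
    if delta < -tol:
        return "pause: reward is less than expected"
    return "no change: reward is just as expected"

def policy(state):
    """(3) a trivial policy: act when the state looks valuable."""
    return "act" if V[state] > 0.5 else "wait"

print(dopamine_report(td_error("cue", "juice")))  # first exposure: a burst

# Experience cue -> juice repeatedly; the cue comes to predict the juice,
# so the surprise (the error signal) shrinks toward zero.
for _ in range(20):
    V["cue"] += alpha * td_error("cue", "juice")

print(dopamine_report(td_error("cue", "juice")))  # fully predicted: no change
print(policy("cue"))                              # the cue now drives action
```

This is the "warmer-colder game" in miniature: the error is large and positive on the first surprise, and fades to nothing once the reward is fully predicted.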

Chapter 5: I found chapter 4 murky, and this one is even murkier. For example, he makes the claim:
The reward-prediction error signal described for the dopamine system is well-suited for choosing goals and for guiding sequences of decisions.
That's a claim, but nothing I read in this book really establishes it. The problem for me is that dopamine is a molecule, while "goals" and "guiding" are behavioural. These are different levels of reality -- just like talking about pits on a DVD versus watching a movie on a TV screen. The two may be related, but you have to describe all the intervening technology and processes; otherwise, claims like "scored pits on the DVD present sweeping panoramas of sand and sun in Lawrence of Arabia" are empty. Once I understand the technology, then I can fill in the links, but on the face of it, pits on a DVD and scenery viewed on a screen are two different levels of reality. Montague hasn't sufficiently filled in the gap between molecules and behaviour in this book.

Here's an example of unjustified claims by Montague:
The prefrontal cortex must possess a "let it go" mechanism that displaces goals from the active list if they have been there too long, or if they prove to be unhelpful, or if they are downgraded by other, higher-value goals. However, there are special classes of goals that have higher priority almost always.
I find this to be mumbo-jumbo. At best he is describing a projected scientific research program. In this book he certainly isn't presenting enough structure and details to justify claims such as the above.

One bit I did find interesting was the discussion of David Redish's TDRL model of drug addiction. While I find this sketchy, like much of the material in this section, there was enough here to at least fit the pieces into the structure of an argument and get a sense of how the proposed model works. Montague goes on to claim that OCD, BDD, and Parkinson's also fit the model. But once again the generality is too great for me.

Chapter 6: The deeper I get into this book, the less I value it. This chapter seems a muddle of speculation with no factual basis. Worse, Montague doesn't organize his material into a clear exposition. The chapter focuses on "regret," but for the life of me I don't see how it really fits into his dopamine guidance signal or his reinforcement learning "with value."

He talks about running trust experiments in fMRI machines and claims they yield some data to support a critic signal. But that gets lost in technical details about collaborating with China using a special setup called hyperscan-fMRI. These details blur whatever point he is trying to make.

He talks of playing economic games and the now famous "ultimatum" game. He talks of trust and regret. He talks of Jared Diamond and the domestication of wheat. But for the life of me I can't see how the contents of this chapter connect with the previous five chapters.

My understanding of pedagogy is "tell them what you're going to tell them, then tell them, then tell them what you told them". In other words, new ideas are hard to assimilate so you need to be a bit repetitive. Even more important, you need to put the new ideas into context to facilitate connection with existing knowledge. As well, you need to do cycles of outline then details, outline then details so that the student can more easily appreciate the thrust of the argument and understand how the details support the broader structure. Montague has no sense of this. The material in this chapter is a mishmash of chatty stories with some claims of a theory but nothing sufficiently concrete and without any clarifying structure to make it memorable.

Chapter 7: This chapter really disappointed me. He makes some stupid claims like the "idea" of a Coke or Pepsi has measurable influence on the brain. What's so brilliant about that? Why else would companies spend a fortune advertising if they didn't plant something in our minds?

He makes the claim that only the Coke "brand" really plants an idea: if you give people a taste test of Coke or Pepsi with half the glasses labeled Coke and half unlabeled (but all drinks are Coke), then people choose the Coke. If you switch and give half labeled Pepsi and half unlabeled (but all drinks are Pepsi), then people have no preference. First, I reject this. I suspect they would go with the Pepsi-labeled glasses because they "know" that is Pepsi. Second, this claim of an experimental result directly contradicts an earlier statement by Montague:
Brands really count, especially when our choice selects one to the exclusion of another. No one puts his money into a soft drink machine and happily selects the unmarked aluminum can or plastic bottle -- and for good reason. It could be an enormous waste of money since no one knows what, if anything, the unmarked can contains.
I believe the above statement. But it contradicts his claim that people are indifferent to choosing either a Pepsi or unmarked glass (as opposed to their consistent selection of a labeled Coke over an unmarked glass).

I have another problem besides the contradiction. While I accept the above statement, Montague has presented it as if nobody would ever buy anything unbranded. Well, go to the grocery store and watch people pick unbranded fruit over the "branded" Dole products. OK, maybe that is unfair, because it is obvious that "branding" doesn't change a fruit. But what about his so-called unmarked aluminum can? I claim that picking the "store brand" is essentially picking the unmarked can. People do it, and not always because of cost. In fact, I do it. So my complaint is that his discussion and arguments are muddled.

His Coke/Pepsi discussion left me cold because I've heard an explanation that makes sense to me. People prefer Pepsi in a blind taste test because it is sweeter. For a simple quick sip, we prefer sweet. But if you are going to drink a whole bottle, the sweetness becomes overwhelming, and most people prefer a less sweet drink, so Coke outsells Pepsi. That makes some sense. My bottom line: the world is complex; there isn't a simple explanation for any of this. There are elements of truth in Montague and in the story about sweetness. Worse, you can't generalize to a single explanation because people vary. At best you can come up with some kind of statistical characterization. But statistics is not "classic science" (although quantum theory would imply that this statement is untrue).

Chapter 8: This chapter comes back to his initial claims:
So, what gives meaning to biological computations? Valuation is meaning, and valuation arose because of costs. Costs forced on all biological computations the need to be valued. Living systems all run on batteries, energy is limited, and life is desperately hard. That is why choice and the ability to choose evolved.
What? After slogging through seven chapters hoping to build a deeper understanding, the author has brought us back to the same claims he started with? That's not science writing. That is preaching. That is indoctrination. I expect a scientific book to present the thesis, then marshal facts and show connections with other science. He hasn't done that. This whole book is one muddle of meandering around, making claims and telling stories, but not building a case or filling in a theory. Argh!

He gets quasi-mystical in this chapter:
Ideas about the "soul" or "me" or "self" or "immaterial force" accrue value in the way that I have described, and they can and do motivate behavior. ... If it's an illusion -- an effect of running up against the limit of some operating range -- then we must ask what's the quantity whose meter is constantly pegged at some end of its design specs? If it's an illusion, then I propose here to call this "bug" a "feature" of mental lives. I'm giving it airtime in this chapter, and simply by virtue of that it gains a presence in your cortex.


Epilogue: This chapter finishes the muddle. The only bit that I found interesting was his discussion about the limits of scientific theory captured in the following:
Thus we are composed of two types of "patterns," one computable and the other uncomputable. A pattern of uncomputable parts sounds contradictory at first. Think of uncomputable beads on a string. The beads can't be computed, but these little uncomputable beads can be organized along a string and in a specific order. It's just that the inner workings of a specific bead can't be computed. And here the beads are the vast collection of physical possibilities possessed by amino acid side chains. So while humans are not computable, they use computations to organize their computable and uncomputable parts. I suspect that this will make simulating ourselves difficult, but not impossible.
I'm disappointed. This should have been a riveting book. I'm sure there is a lot of exciting science that could have been presented. Instead, I close this book profoundly unhappy. I learned very little. Sure there were lots of words, lots of bits of stories, lots of bits of scientific research, but there wasn't a compelling tale here, there wasn't a "big idea" that grabbed my attention and left me in a "wow!" state. Instead, I invested a fair number of hours into a task that leaves me with very little memorable material. What a waste!
