Daniel Dennett Applied To Go
I (Dieter) am currently reading the book "Consciousness Explained" by Daniel C. Dennett, the contemporary philosopher. I have read the first chapter and am now rereading it, because unlike others, I'm not gifted with a brain the size of three planets (rgg inside joke).
Now to the point. Of course, I could not help trying to apply his theory to the field of Go. First, while playing Go I have the experience that my consciousness is busy doing one single thing: playing Go. Go therefore seems a suitable test field for some of his theories, because of the simplified case of "one activity at a time", and as such it could help me better understand Dennett's ideas. Second, applying his ideas to Go could help me better understand Go.
The first idea I'd like to investigate is the following. Dennett dispels our old idea of the brain as a central spot receiving information, processing it and producing some output (actions). He calls this concept the "Cartesian theatre" and replaces it with the "Multiple Drafts" model, in which several parallel processes take place. As a computer programmer, one is immediately reminded of sequential versus parallel processing. He goes on to say that multiple versions (drafts) of the observed reality are created, and that multiple candidate actions evolve from them. The action that eventually takes place is based on an evolutionary principle rather than on a singular decision: the one with the best credentials wins.
The above is superficially what I remember and understood from the first reading. Inevitably, there will be mistakes and inaccuracies in my interpretation.
I wondered what it is I'm doing when I play Go. Do I really take several possibilities into account, make an evaluation and then "consciously" choose the one with the highest qualification marks? This would be the Cartesian idea: each possibility is checked one at a time and evaluated (assigned a quality mark), and while processing we keep in mind the possibility with the highest marks so far. Quite a sequential way of proceeding. This is also how Go theory tells us to proceed, and how computers are programmed to play Go (as far as I know).
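In programming terms, this Cartesian procedure is just a best-so-far loop. Here is a minimal sketch; the candidate moves and the evaluate function are hypothetical placeholders for illustration, not part of any actual Go program:

```python
import math

def choose_move_sequentially(candidates, evaluate):
    """Check each candidate in turn and keep the one with the highest mark."""
    best_move, best_mark = None, -math.inf
    for move in candidates:
        mark = evaluate(move)            # assign a quality mark
        if mark > best_mark:             # remember the best seen so far
            best_move, best_mark = move, mark
    return best_move

# Toy usage: a pretend evaluation that prefers moves near the centre (tengen).
tengen = (9, 9)
closeness = lambda m: -(abs(m[0] - tengen[0]) + abs(m[1] - tengen[1]))
print(choose_move_sequentially([(3, 3), (9, 9), (16, 4)], closeness))  # -> (9, 9)
```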
Or do I see multiple possibilities on the spot, run through some variations, and in the end produce a move which I believe simply has, or had, good credentials? Maybe the first one on the list, maybe the second, or maybe one way down the list, because evolutionary progress needs information about failure too. According to Dennett, there is not even such a list; at best there are several partial lists.
In my program - but I am a slow turtle at it - I'd like to implement this by creating several sub-programs: "shaper", "connector", "shapedestroyer", "cutter", "liver", "killer", "territorymaker" and "territorydestroyer". Each of these would almost instantly find the move for its respective purpose, and a meta-program would then choose rather randomly between them. Eventually, this meta-program would evolve from a random generator into a genetic algorithm.
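As a rough sketch of that architecture: only the specialist names above come from the text; their bodies and the board argument are invented stubs.

```python
import random

def shaper(board):       return "best shape move"        # hypothetical stub
def connector(board):    return "best connecting move"   # hypothetical stub
def cutter(board):       return "best cutting move"      # hypothetical stub
def killer(board):       return "best killing move"      # hypothetical stub
# ... shapedestroyer, liver, territorymaker, territorydestroyer likewise

SPECIALISTS = [shaper, connector, cutter, killer]

def meta_program(board):
    """First version: choose a specialist at random and play its move."""
    return random.choice(SPECIALISTS)(board)

# Later the random choice could be replaced by a genetic algorithm that
# evolves a selection weight per specialist, keeping weights that win games.
print(meta_program(board=None))
```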
In conclusion: if it is true that we reason through parallel systems, should we then try to shift towards sequential reasoning? Or should we try to better understand and optimize this parallel system? Could this be one of the clues to why computer programs are so bad at Go?
Jasonred: I think this is a pretty brilliant idea: separate the task into several components, then have an "overhead" program find the best choice. Though the "overhead" program seems to me to be the main difference between computers and humans in Go: the computer finds it hard to make up its mind which of the final results is better, whereas humans find the reading out the problem. Or I do, anyhow.
Second, how the heck are you going to explain "shape" and "aji" to a computer? I can see how you'd program territory, life and death (harder than it seems, but probably do-able), but the more subtle points of Go seem hard.
Computer programs are bad because nobody can explain to them the purpose of the game. In other words, there is no good evaluation function. Most modern programs rely on local patterns, i.e. they copy moves without understanding their meaning. At some moment in the middle of the game they simply run out of what is programmed and waste moves. You can't imagine how many ways there are to waste a move on the board.
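To illustrate what such pattern-copying amounts to (the pattern and response below are invented for illustration, not taken from any actual program):

```python
# Each memorized pattern is a 3x3 neighbourhood mapped to a stock answer.
PATTERNS = {
    ("B", ".", ".",
     ".", "W", ".",
     ".", ".", "."): "hane on top",   # copied without knowing why it works
}

def pattern_move(local_3x3):
    """Play the memorized response, or flounder once we are out of book."""
    return PATTERNS.get(local_3x3, "out of book - waste a move")

print(pattern_move(("B", ".", ".", ".", "W", ".", ".", ".", ".")))
```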
But the question of how we think is interesting. It is strange only that a philosopher tries to solve it; it is specific knowledge and, in my opinion, should belong to science.
- Dennett is one of the people who think a philosopher should do a fair amount of science work and vice versa. I agree with him, not that my agreement brings me any closer to being a philosopher or a scientist. -- Dieter
Basically, when playing Go most time is spent exploring variations that don't work. At least I play this way. There are solid, good moves that work, and there are tempting moves. If possible, I read out whether they work. If not, I quite often play them anyway to see what happens. This is supposed to be a learning process, but it never works this way, or works too slowly to notice.
There is a time trial at http://www.goproblems.com. Did you try it? Start with the simplest level. You will see that the answers come instantly, at a glance. For me this continues up to the 5k level. At that level I have to think about what the correct move is; no instant recognition anymore. I can solve problems of 5d level, but that takes hours of analysis. This can give you some idea of how the game of Go is played. Up to your level you just know all the answers. No wonder you can beat people who have to find them during play and naturally don't have enough time to do so.
There is an idea about the difference between seeing and reading. It is known that we can count up to seven objects at a glance. Counting eight objects is already a task that involves the brain: you have to combine the objects into groups of countable size and then add the results. This is a much more difficult and time-consuming task.
There is something like this in Go. You don't read out the simplest semeai; you know the answer. But if there are four liberties on each side, you have to work hard and you make mistakes. I am also amazed by the fact that sometimes (not often) people at IGS 5k level misread the simplest ladders. I won't even speak about geta.
But what did I want to say? Oh, yes, most time is spent in reading out variations that are never played.
Tamsin: There is a big problem with "mechanistic" approaches to consciousness, such as Dennett's. He says that consciousness is the result of sufficient cerebral complexity, but that in itself does not explain it. For example, your perception of a picture may result from neurons firing in your brain, but that does not account for how these events are seen as a picture, rather than as impulses. To put it another way, a conscious thought may involve mechanical events, but there is a huge difference between these events and your own perception of what a thought is. To an observer, my mental processes will appear like firing neurons and chemical reactions in my brain, but to me they are words and images. Where, then, does the nexus between mechanical event and intellectual experience lie? In the brain? Or is the brain merely the tool by which the soul expresses itself, by means yet undiscovered?
I think there is a big problem with the approach of making different programs and then choosing one of them: most moves in Go have multiple purposes! You would then have to generate several candidate moves from each program, or else there is no overlap between them. I think that when you can combine each of these goals (or a good selection of them) into one best move, your program will do better.
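One way to read that suggestion in code, assuming hypothetical per-goal scoring functions and weights: score every candidate against all goals at once and pick the move that serves the most purposes.

```python
def combined_choice(candidates, goals, weights):
    """goals: one scoring function per purpose (move -> float)."""
    def total(move):
        return sum(w * g(move) for g, w in zip(goals, weights))
    return max(candidates, key=total)

# Toy usage with two invented goals: a move that both cuts and surrounds
# a little beats a move that only cuts.
cuts      = lambda m: {"a": 1.0, "b": 0.6}[m]
surrounds = lambda m: {"a": 0.0, "b": 0.7}[m]
print(combined_choice(["a", "b"], [cuts, surrounds], [1.0, 1.0]))  # -> "b"
```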
victim: Tamsin, of course "cerebral complexity" does not *explain* consciousness in the sense that you know all the details once you have made the model. It's like Go, where knowing the rules doesn't make you understand the game.
But there is no useful approach other than the one you call "mechanistic". The alternative is to postulate some entity which has exactly the properties you need in order to "explain" consciousness, such as a "soul". This does not solve any difficulties; it hides them behind words. But I guess this is turning into a religion-versus-science discussion that doesn't really belong here but on talk.origins or the Evolution Wiki (shameless plug of my other favorite wiki).
(I like Dennett very much, though I think he's a bit unfair to Stephen Jay Gould in "Darwin's Dangerous Idea" - he doesn't understand what Gould was trying to say, which may be mostly Gould's fault.)
Tamsin: An observation here. I made my comment months ago, and now you take it up. This is one of the most intriguing things about Sensei's Library, i.e., the "sleeper" page, which lies dormant for ages and then becomes active at the moment you think it's been forgotten :-) If only the OpenGoStory would awaken...
TJ: Awake! Thanks to random page hopping :)
Some theorists have indeed found it useful to think about thinking in this way: that a thinking being has many different "intelligences" or "agents" for doing things. Thinking of intelligence as a set of tools with varying skills, you can set up an AI which breaks down all of what we call skills into their basic parts. But how does a meta-program choose between them? This idea seems in danger of homuncular recursion.
Society of Mind (by Marvin Minsky, co-founder of the MIT AI Lab), a book on evaluating intelligence (taking up means of doing so that are, interestingly, thousands of years old, practiced by the sages of the Upanishads... science indeed), with an aim to applying the findings to AI, introduces the idea of agents. These agents all express themselves by wishing to do what it is that they do... the more they do it, the better they become at it. In order to get to do what they do, they must ally with other agents... mob rules.
The prime example I remember is of a child with building blocks. Under an agency activation of play, two agents, builder and crasher, agree to build up a tower so that crasher can crash the blocks down. There is an agreement between them... so play, which already wins out over other needs, partly thanks to crasher wanting to crash something, lets builder build. This is overly simplified, but you get the idea. Under builder there are grasper, mover, placer, etc., down to the smallest level you can think of, all working together as need dictates, all getting better as need pushes them to the fore... grasper works for builder, for eater, for drinker, and on and on, all these skills developing through play, thanks to builder and crasher.
What's the upshot? A meta-program could perhaps do nothing more than listen to the mob: have it pick through all the sub-programs and make an evaluation according to how strongly the suggested moves fulfill the needs of each and every sub-program. If agents Divider and Surrounder scream above the mob for a candidate move because it perfectly divides and perfectly surrounds, then that's your move.
To go further, if that move successfully divides but does not surround very well in the end, perhaps Surrounder could lower its expectations for such a move the next time, and thus the whole program could learn.
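A minimal sketch of that mob, under the same caveats as before: agent names, scores and the update rule are all illustrative, not Minsky's own formulation.

```python
def mob_choice(candidates, agents, weights):
    """agents: name -> scoring function; weights: name -> current influence."""
    loudness = lambda move: sum(weights[n] * agents[n](move) for n in agents)
    return max(candidates, key=loudness)

def learn(weights, fulfilled, rate=0.1):
    """fulfilled: name -> how well that agent's goal actually worked out (0..1).
    Disappointed agents lower their expectations for next time."""
    for name, outcome in fulfilled.items():
        weights[name] += rate * (outcome - 0.5)
    return weights

# Toy usage: Divider's move worked, Surrounder's hope was disappointed.
agents  = {"divider":    lambda m: 0.9 if m == "wedge" else 0.2,
           "surrounder": lambda m: 0.8 if m == "cap"   else 0.3}
weights = {"divider": 1.0, "surrounder": 1.0}
move    = mob_choice(["wedge", "cap"], agents, weights)        # -> "wedge"
weights = learn(weights, {"divider": 0.9, "surrounder": 0.2})
print(move, weights)
```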
This from a non-scientific, philosopher sort, so I hope it isn't totally irrelevant or impossible. Just some ideas about the building blocks of consciousness. I actually rather hope computers don't get good at this game any time soon; that's part of its charm. But if it comes from computers actually thinking like humans, I think I can live with it.