ilan: In my opinion, these papers show exactly the weakness of the neural network approach. They tackle the classical artificial intelligence questions (the ones based on making machines do difficult things that people usually consider a mark of "intelligence") instead of following the scientific approach of first solving simple and well-understood questions. In particular, a good first problem for a neural network would be integer arithmetic (addition and multiplication). Note that this is in fact necessary for playing go (you need to know how to count and how to add to play go)! Of course, the reason for all of this is that you don't get grants or promotions for studying addition and multiplication, because these don't require "intelligence".
Bildstein: First, I think there is a fundamental difference between arithmetic and go. The first is a solvable problem, while the second almost certainly is not. We don't need a neural network to do arithmetic, because we can write a program to do it. On the other hand, you may have a point about biting off more than we can chew when we try to use neural networks to solve go. I know I did.
Second, in my opinion, the primary obstacle to the neural network approach to go is that it tends not to take into account the symmetry of the game. How ridiculous is it to think that you could train a neural network to solve a tsumego problem in one corner of the board, yet when faced with the same problem in a different corner, or with colours reversed, or mirrored along the diagonal, it gets it wrong? They've done okay with neural networks on 7x7 boards, but in my opinion, these solutions simply don't scale. In fact, I'm not aware of any applications of neural networks with anywhere near the number of inputs you'd need to scale these solutions up to 19x19.
Trontonic: How would using MoGo for training a neural network work out? Pro games are probably all right, but they contain few responses to "stupid" moves. This could even be a distributed project, to get more CPU time per move.
Harleqin: I think that the application of neural nets to go has a fundamental problem in how to represent the game state, i.e. how to feed the game so far into the net's input layer. The simple idea of giving it a matrix the size of the board, where own stones are marked as 1, opponent's stones as 0, and empty intersections as 1/2, is flawed because cycles cannot be detected (this would mean that it cannot play ko!). Marking ko bans with a special value is also difficult, because the net would need to specially process this input in a way that has a high learning barrier, since kos are not frequent enough. There would also be no way for the net to detect longer cycles. So, basically, you have to feed all the moves so far into the input layer, but how? One idea would be to sequentially fire the board states or moves from the start into the input layer, and rely on the net having some memory (i.e. backpropagation) to synthesize this into a working game state. However, this introduces new problems: how to tell that the net has completed its calculation, and how to apply a learning algorithm. I think this means that a neural net suitable for go has to be orders of magnitude more complex than the currently published neural net applications.
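For concreteness, here is a minimal Java sketch of that naive matrix encoding (all class and method names are my own illustrative choices, not from any library). Note that two positions differing only in ko status produce identical input vectors, which is exactly the flaw described above.

```java
// A sketch of the naive encoding: one input per intersection,
// 1.0 for own stones, 0.0 for opponent stones, 0.5 for empty points.
public class NaiveEncoder {
    public static final int SIZE = 19;

    // board[x][y]: +1 own stone, -1 opponent stone, 0 empty
    public static double[] encode(int[][] board) {
        double[] input = new double[SIZE * SIZE];
        for (int x = 0; x < SIZE; x++) {
            for (int y = 0; y < SIZE; y++) {
                int stone = board[x][y];
                // Two positions that differ only in ko status map to the
                // same vector here; the encoding carries no history at all.
                input[x * SIZE + y] = stone == 1 ? 1.0 : stone == -1 ? 0.0 : 0.5;
            }
        }
        return input;
    }
}
```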
Bamboomy: The problem of 'remembering' past moves could be solved by differentiating the input. Instead of using 'bare' values such as 1, 0 and 1/2, you could differentiate the input. For example: if an own stone is removed, you could change the value from 1 to 3/4 instead of 1/2, and let the network learn how to deal with this kind of differentiation. This should at least solve the 'ko' problem. Longer cycles would be more difficult to deal with because of their subtlety, but at least the input wouldn't be exactly the same. The memory wouldn't be in the network but in the input. On a related note, I think a network should have (much) more middle layers. My original design was a network that was as 'deep' as the size of the board, but with the nodes only connected to their nearest neighbours. The rationale behind that design was that all of the information fed to the input layer eventually ends up in the output layer. Does anyone have some working Java code for this problem? I'd like to spend quite some time (both coding and CPU) on this... (spam3_at_bamboomy_dot_tk (with the 3, and the _at_ and _dot_ changed into @ and .)).
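As a partial answer to that request, here is a hedged Java sketch of the 'differentiated input' idea. The 0.75 value for a just-captured own stone comes from the comment above; the 0.25 for a just-captured opponent stone is my own symmetric guess, not something specified there.

```java
// A sketch of Bamboomy's differentiated encoding: the immediate capture
// history lives in the input values instead of inside the network.
public class DifferentiatedEncoder {
    public static final int SIZE = 19;

    // board[x][y]: +1 own, -1 opponent, 0 empty
    // ownCaptured/oppCaptured[x][y]: true if a stone was removed there last move
    public static double[] encode(int[][] board,
                                  boolean[][] ownCaptured,
                                  boolean[][] oppCaptured) {
        double[] input = new double[SIZE * SIZE];
        for (int x = 0; x < SIZE; x++) {
            for (int y = 0; y < SIZE; y++) {
                double v;
                if (board[x][y] == 1)        v = 1.0;
                else if (board[x][y] == -1)  v = 0.0;
                else if (ownCaptured[x][y])  v = 0.75; // our stone just died here
                else if (oppCaptured[x][y])  v = 0.25; // assumed symmetric case
                else                         v = 0.5;  // plain empty point
                input[x * SIZE + y] = v;
            }
        }
        return input;
    }
}
```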
TestPlay09: hi, what about a neural network of neural networks? Each net would be specialized for one type of processing and data crunching, giving formatted output to the input of a master net. Not only that, why not make a difference engine out of several master nets, each specialized in one style of play? Maybe even make the nets dynamic, in the sense of mutable and evolving, and have some process, if not another net, in charge of that! Everything proposed here so far just seems insultingly plain and simple to me. Making a decision based on the situation at hand and previous experience is a much more complicated process. Thanks for the inspiration though.. definitely will try to implement some form of my idea on a small scale.. at first, that is :D
TestPlay09: read some more... intelligence is a bi#%h to reproduce =_= however, it will be done!
MrMormon: Harleqin's referring to recurrent neural networks, which include their output in the input of the next thinking step. It's easy to tell when a calculation is complete: 'still thinking' could be a move choice. And they have learning algorithms. However, I'm unsure whether they can learn general superko from a realistic number of games (possibly self-play games).
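For illustration, a rough Java sketch of such a recurrent step loop with a reserved 'still thinking' output. The Network interface, the step cap, and the vector size are hypothetical placeholders, not any published design.

```java
// A sketch of a recurrent driver: the net's output is fed back in as part
// of the next input, and a sentinel value lets it ask for another step.
public class RecurrentDriver {
    public interface Network {
        // Returns a move index in [0, 361), or STILL_THINKING.
        int step(double[] boardInput, double[] previousOutput);
        double[] lastOutputVector();
    }

    public static final int STILL_THINKING = -1;
    public static final int MAX_STEPS = 64; // safety cap, an assumption

    public static int chooseMove(Network net, double[] boardInput) {
        double[] feedback = new double[361]; // zero state on the first step
        for (int i = 0; i < MAX_STEPS; i++) {
            int move = net.step(boardInput, feedback);
            if (move != STILL_THINKING) {
                return move; // the net signalled that it is done
            }
            feedback = net.lastOutputVector(); // recur: output becomes input
        }
        return STILL_THINKING; // never converged within the cap
    }
}
```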
Lukas?: If you use MoGo to train the network, the network will never become better than MoGo. The only way to train it to become better than anything else is to let it train against itself, using some sort of genetic algorithm.
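A minimal, untested Java sketch of what such a self-play genetic loop could look like. playGame() is a stub standing in for a real go engine driven by the weight vectors, and the population size, mutation rate, and generation count are arbitrary assumptions.

```java
import java.util.Arrays;
import java.util.Random;

// A sketch of self-play evolution: pit weight vectors against each other,
// keep the winners, and spawn mutated copies of them.
public class SelfPlayGA {
    static final Random RNG = new Random(42);

    // Stub: a real implementation would play a full game of go between
    // two networks defined by these weight vectors.
    static boolean playGame(double[] a, double[] b) {
        return RNG.nextBoolean();
    }

    // Gaussian perturbation of every weight; sigma is the mutation strength.
    static double[] mutate(double[] w, double sigma) {
        double[] child = Arrays.copyOf(w, w.length);
        for (int i = 0; i < child.length; i++) {
            child[i] += RNG.nextGaussian() * sigma;
        }
        return child;
    }

    public static void main(String[] args) {
        int population = 8, weightsPerNet = 1000, generations = 100;
        double[][] pool = new double[population][weightsPerNet];
        for (int g = 0; g < generations; g++) {
            // Pair off the pool; each winner survives and spawns a mutant.
            for (int i = 0; i < population; i += 2) {
                boolean firstWins = playGame(pool[i], pool[i + 1]);
                double[] winner = firstWins ? pool[i] : pool[i + 1];
                pool[i] = winner;
                pool[i + 1] = mutate(winner, 0.05);
            }
        }
    }
}
```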
Shain?: It would be very wise to train it against something like MoGo in the early generations. Once it is trained to 15k-20k, it can start training against its brethren, and with selective mutation a stronger NN will emerge. One could also use goproblems to train the NN, simply applying an algorithm that feeds in a single problem multiple times with every variation of rotation/mirroring/colour swap; that way, a NN capable of solving a problem will solve it every time, not just in that one corner.
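Here is a short Java sketch of that augmentation: generating all 16 variants of a board (4 rotations x 2 reflections x 2 colourings), assuming +1 for black, -1 for white, and 0 for empty. The class and method names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// A sketch of symmetry augmentation: every training problem is fed in
// under each of the 16 rotation/mirror/colour-swap variants.
public class SymmetryAugmenter {
    static int[][] rotate(int[][] b) {          // rotate 90 degrees clockwise
        int n = b.length;
        int[][] r = new int[n][n];
        for (int x = 0; x < n; x++)
            for (int y = 0; y < n; y++)
                r[y][n - 1 - x] = b[x][y];
        return r;
    }

    static int[][] mirror(int[][] b) {          // reflect across one axis
        int n = b.length;
        int[][] m = new int[n][n];
        for (int x = 0; x < n; x++)
            for (int y = 0; y < n; y++)
                m[n - 1 - x][y] = b[x][y];
        return m;
    }

    static int[][] swapColours(int[][] b) {     // black <-> white
        int n = b.length;
        int[][] s = new int[n][n];
        for (int x = 0; x < n; x++)
            for (int y = 0; y < n; y++)
                s[x][y] = -b[x][y];
        return s;
    }

    public static List<int[][]> allVariants(int[][] board) {
        List<int[][]> out = new ArrayList<>();
        int[][] b = board;
        for (int r = 0; r < 4; r++) {           // the four rotations...
            out.add(b);
            out.add(mirror(b));                 // ...and their mirror images
            b = rotate(b);
        }
        int count = out.size();                 // now double with colour swaps
        for (int i = 0; i < count; i++) out.add(swapColours(out.get(i)));
        return out;                             // 16 variants in total
    }
}
```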