: ((no subject))
(2023-11-17 23:38) [#12330]
A proverb should be known as a proverb in the literature of Go books or in professional commentaries, preferably with references to Japanese, Chinese or Korean wordings of the proverb, or English translations, occurring at least once and preferably more often.
Anyone can come up with dictums or proposals for something that sounds like a proverb, and this particular one is definitely a good candidate. But the page as it stands is too meagre to qualify it as a real proverb.
That's why I suggest expanding it rather than deleting it.
: why not a proverb?
(2023-11-17 19:24) [#12329]
What criterion of proverbiality does it fail that it can't be properly called a proverb? Is it not a "pithy saying teaching a guideline of good basic play", as I'd define the term?
: true but not a proverb
(2023-11-17 09:03) [#12328]
What is stated here is true, but it's not an acknowledged proverb (there are probably more such examples in the list of proverbs). Let's keep the page for now and extend it with diagrams and reasoning, while looking for actual proverbs that relate to it.
2603:7080:2307:ba07: Defunct database not online
(2023-11-15 15:12) [#12327]
2001:0bc8:0701:0017: ((no subject))
(2023-11-14 10:07) [#12326]
: are you trying some kind of cross-site scripting attack?
(2023-11-09 14:56) [#12324]
Don't do that, ok? I saw you added that code to the Go Wiki article, and I removed it. Even if it's just a techy prank, still don't do it.
: Re: Test
(2023-11-09 05:57) [#12323]
(2023-11-09 05:56) [#12322]
: sorting out orthography before we feature
(2023-11-02 21:26) [#12321]
Wikipedia calls him Hsu Hao-hung, with Taiwanese orthography, which they're probably obliged to apply universally to Taiwanese names.
But afaik, all Go discussion between actual players refers to him as Xu Haohong, with Mainland spelling (see the videos Kejie VS Xu Haohong and Park Jeonghwan VS Xu haohong, Eunkyo; and The Greatest Go Tournament Run Possible- Part 1, Telegraph Go, which labels him 'Xu Haohong' on screen).
I think we should keep Xu Haohong as the title but note Hsu Hao-hung as an alternative form, and also make it an alias. Does anyone disagree?
Hsu Haohong should probably be an alias as well.
2600:4040:7316:8700: Multi-player Go with a focus on Avatar
(2023-10-26 18:10) [#12319]
I know Avatar: The Last Airbender has its own in-universe game of Pai Sho, but I always thought that a 4-player Go game with the map of the world would be great. Blue stones for water, red stones for fire, green stones for earth and white stones for air. It could be any size, but I would have each player start at their respective capital (Northern Water Tribe, Fire Nation capital, Ba Sing Se for earth, and any of the four air temples for air).
: moving old discussion on papers here
(2023-10-18 12:10) [#12318]
Comments on papers
ilan: In my opinion these papers exactly show the weakness of the neural network approach. They tackle the classical artificial intelligence questions (the ones based on making machines do difficult things that people usually consider a representation of "intelligence") instead of following the scientific approach of first solving simple and well understood questions. In particular, a good first problem for a neural network would be to solve integer arithmetic (addition and multiplication). Note that this is in fact necessary to play go (you need to know how to count and how to add to play go)! Of course, the reason for all of this is that you don't get grants or promotions if you study addition and multiplication, because these don't require "intelligence".
Bildstein: Firstly, I think there is a fundamental difference between arithmetic and go. The first is a solvable problem, while the second is almost certainly not. We don't need a neural network to do arithmetic, because we can write a program to do it. On the other hand, you may have a point about biting off more than we can chew when we try to use neural networks to solve go. I know I did.
Second, in my opinion, the primary obstacle to the neural network approach to go is that it tends not to take into account the symmetry of the game. How ridiculous would it be if you could train a neural network to solve a tsumego problem in one corner of the board, but when faced with the same problem in a different corner, or with colours reversed, or mirrored along the diagonal, it got it wrong. They've done okay with neural networks on 7x7 boards, but in my opinion, these solutions are simply not scalable. In fact, I'm not aware of any applications of neural networks that have anywhere near the number of inputs you'd need if you wanted to scale these solutions up to 19x19.
Trontonic: How would using MoGo for training a neural network work out? Pro games are probably good all right, but they contain few responses to "stupid" moves. This could even be a distributed project, for increased CPU time per move.
Harleqin: I think that the application of neural nets to go has a fundamental problem in how to represent the game state, i.e. how to feed the game so far into the net's input layer. The simple idea of giving a matrix the size of the board, where own stones are marked as 1, opponent's stones as 0, and empty intersections as 1/2, is flawed because cycles cannot be detected (this would mean that it cannot play ko!). Marking ko bans with a special value is also difficult, because the net would need to specially process that input in a way that has a high learning barrier, since kos are not frequent enough. There would also be no way for the net to detect longer cycles. So, basically, you have to feed all the moves so far into the input layer, but how? One idea would be to sequentially fire the board states or moves from the start into the input layer, and rely on the net having some memory (i.e. recurrent connections) to synthesize this into a working game state. However, this introduces new problems: how to tell that the net has completed its calculation, and how to apply a learning algorithm. I think this means that a neural net suitable for go has to be orders of magnitude more complex than the currently published neural net applications.
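(A minimal toy sketch of the cycle problem Harleqin describes; the shape and encoding here are hypothetical illustrations, not from any published engine. With the bare own/opponent/empty encoding, the position before a ko capture and the position after the recapture produce identical inputs, so the net cannot see the ko ban.)

```python
def encode(own, opp, size=5):
    """Bare encoding: own stone 1.0, opponent stone 0.0, empty 0.5."""
    grid = [[0.5] * size for _ in range(size)]
    for r, c in own:
        grid[r][c] = 1.0
    for r, c in opp:
        grid[r][c] = 0.0
    return grid

# A ko shape: White's stone at (1, 1) is in atari inside Black's wall.
black0 = {(0, 1), (1, 0), (2, 1)}
white0 = {(0, 2), (1, 3), (2, 2), (1, 1)}

# Black captures the ko by playing (1, 2); the white stone at (1, 1) dies.
black1 = black0 | {(1, 2)}
white1 = white0 - {(1, 1)}

# White recaptures at (1, 1) (illegal immediately under the ko rule),
# removing the black stone at (1, 2) -- the configuration repeats.
black2 = black1 - {(1, 2)}
white2 = white1 | {(1, 1)}

# The repeated position encodes identically to the original, so a net
# fed only this input has no way to detect the cycle.
assert encode(black2, white2) == encode(black0, white0)
```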
Bamboomy: The problem of 'remembering' past moves could be solved by differentiating the input. Instead of using 'bare' values such as 1, 0 and 1/2, you could differentiate the input. For example: if an own stone is removed, you could change the value from 1 to 3/4 instead of 1/2, and let the network learn how to deal with this kind of differentiation. This should at least solve the 'ko' problem. Longer cycles would be more difficult to deal with because of their subtlety, but at least the input wouldn't be exactly the same. The memory wouldn't be in the network but in the input. On a related note, I think a network should have (much) more middle layers. My original design was a network that was as 'deep' as the size of the board, but with the nodes only connected to their nearest neighbours. The rationale behind that design was that all of the information that is fed to the input layer eventually ends up in the output layer. Does anyone have some working Java code for this problem? I'd like to spend quite some time (both coding and CPU) on this... (spam3_at_bamboomy_dot_tk (with the 3, and the _at_ and _dot_ changed into @ and .)).
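(A toy sketch of Bamboomy's 'memory in the input' idea; again a hypothetical illustration, not tested engine code. Marking a point where an own stone was just captured with 3/4 instead of 1/2 makes a repeated ko position encode differently from the original one.)

```python
def encode_diff(own, opp, own_removed, size=5):
    """Differentiated encoding: own 1.0, opponent 0.0, empty 0.5,
    and 0.75 where an own stone was just captured."""
    grid = [[0.5] * size for _ in range(size)]
    for r, c in own_removed:
        grid[r][c] = 0.75
    for r, c in own:
        grid[r][c] = 1.0
    for r, c in opp:
        grid[r][c] = 0.0
    return grid

# A ko shape: after White recaptures at (1, 1), the stone configuration
# is back to the original, but Black's stone at (1, 2) was just removed.
black = {(0, 1), (1, 0), (2, 1)}
white = {(0, 2), (1, 3), (2, 2), (1, 1)}

before = encode_diff(black, white, set())      # before the ko fight
after = encode_diff(black, white, {(1, 2)})    # after capture and recapture

# Same stone configuration, but the inputs now differ at (1, 2),
# so the net at least has a chance to learn the ko ban.
assert before != after
```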
TestPlay09: hi, what about a neural network of neural networks? Each net would be specialized for one type of processing and data crunching, giving formatted output as the input of a master net.. not only that, why not make a difference engine out of several master nets, each specialized in one style.. maybe even make the nets dynamic, mutable and evolving, and have some process, if not another net, in charge of that! Everything proposed here thus far just seems to me insultingly plain and simple. Making a decision based on the situation at hand and previous experience is a much more complicated process.
thanks for the inspiration though.. definitely will try to implement some form of my idea on small scale.. at first that is :D
TestPlay09: read some more... intelligence is a bi#%h to reproduce =_=
however, it will be done!
MrMormon: Harleqin's referring to recurrent neural networks, which include their output in the input of the next thinking step. It's easy to tell when a calculation is complete: 'still thinking' could be a move choice. And they have learning algorithms. However, I'm unsure whether they can learn general superko from a realistic number of games (possibly with itself).
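(A minimal recurrent sketch of the mechanism MrMormon describes, with random, untrained weights, purely to show the mechanics; the sizes and move encoding are hypothetical. Each move is fed in one step at a time, and the hidden state carries the history forward, so two games that reach the same move by different orders end up with different states.)

```python
import math
import random

random.seed(0)
H, V = 16, 9 * 9 + 1   # hidden size; 9x9 move vocabulary plus a pass/"still thinking" slot

# Random, untrained weights -- enough to demonstrate the recurrence.
Wx = [[random.gauss(0, 0.1) for _ in range(V)] for _ in range(H)]
Wh = [[random.gauss(0, 0.1) for _ in range(H)] for _ in range(H)]

def step(h, move):
    """One recurrent step: the previous state h feeds into the next.
    The move is one-hot, so Wx @ x reduces to column `move` of Wx."""
    return [math.tanh(Wx[i][move] + sum(Wh[i][j] * h[j] for j in range(H)))
            for i in range(H)]

def run(moves):
    """Feed a move sequence through the layer, one move per step."""
    h = [0.0] * H
    for m in moves:
        h = step(h, m)
    return h

# Different histories give different final states, even though the
# last move is the same -- the state remembers the order of play.
h1 = run([3, 40, 3])
h2 = run([40, 3, 3])
assert h1 != h2
```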
Lukas?: If you use MoGo to train the network, the quality of the network will never become better than MoGo. The only way to train it to become better than anything else, is to let it train against itself using some sort of genetic algorithm.
Shain?: It would be very wise to train it against something like MoGo in the early generations. Once it is trained to 15k-20k it can start training against its brethren(?) and with selective mutation a stronger NN will emerge.
One could also use goproblems to train the NN, simply applying an algorithm that feeds in a single problem multiple times with every variation of rotation, mirroring and colour swap; that way a NN capable of solving a problem will solve it every time, not just in that one corner.
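(The rotation/mirror/colour-swap augmentation suggested above is straightforward to sketch; this is a hypothetical helper in pure Python, using +1 for black, -1 for white, 0 for empty. Every training position expands into the 8 dihedral symmetries of the board, each with and without colours swapped.)

```python
def rotate(board):
    """Rotate a square board 90 degrees clockwise."""
    n = len(board)
    return [[board[n - 1 - c][r] for c in range(n)] for r in range(n)]

def mirror(board):
    """Mirror the board left-right."""
    return [row[::-1] for row in board]

def swap_colours(board):
    """Swap black (+1) and white (-1); empty (0) is unchanged."""
    return [[-v for v in row] for row in board]

def augment(board):
    """All 16 variants: 4 rotations x optional mirror x optional colour swap."""
    out = []
    b = board
    for _ in range(4):
        for m in (b, mirror(b)):
            out.append(m)
            out.append(swap_colours(m))
        b = rotate(b)
    return out

# A small asymmetric position: one black and one white stone on 5x5.
board = [[0] * 5 for _ in range(5)]
board[0][1] = 1     # black stone
board[1][3] = -1    # white stone

variants = augment(board)
assert len(variants) == 16
```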
: cannot edit?
(2023-10-13 14:03) [#12317]
There's something wrong with this page. I can't edit it.
: thanks Dieter
(2023-10-10 22:55) [#12316]
Thank you, Dieter!
: what a work!
(2023-10-10 13:14) [#12315]
Nice work! This is truly a great reference.
: ((no subject))
(2023-09-14 20:46) [#12308]
Sorry for spamming updates here, but the blurriness happens with fandom images being accessed from other domains. I addressed it by hosting the images elsewhere.
The layout problem with images is, I guess, not that bad. I changed the width of the images to 780, in which case they approximately cover the page width aside from the sidebar when viewed on a few different smartphones and in slightly zoomed-in desktop browsers. The images cover less area on a regular desktop, though. For me it looks satisfactory this way, so I think I'll add the pics to the bottom of the player pages, like on Shin Jinseo's page at the moment.
I think the title of this thread is a bit misleading, oops. I looked at the ListOfKoreanProfessionals and ListOfChineseProfessionals, and most pages indeed already exist. For some reason I thought I couldn't find some pages earlier when I was searching for something, but maybe they were just missing pictures. Oh well, this is a positive finding :)
I guess I'm soon set to start grinding out some pics. It's nothing special; all you fellow weiqi folks have already done the great work with the text updates. So many pages existing! Mostly I will just end up adding 1-3 pictures to player pages, and that's it.
Oh, by the way. I added the text K바둑 (KBaduk) in HTML entities as suggested, which is &#bc14;&#b451;, but for some reason it's showing up as those entities, not as Korean characters. The same happens here in this discussion. I guess I'm missing something.
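(An editor's guess at the cause, not tested against the wiki software: HTML numeric character references are decimal by default, and hexadecimal ones need an x after the #, so &#bc14; is not a recognized reference. 바둑 is U+BC14 U+B451, which would give:)

```html
<!-- decimal numeric character references for 바둑 -->
&#48148;&#46161;
<!-- or hexadecimal -- note the x after the # -->
&#xBC14;&#xB451;
```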
: ((no subject))
(2023-09-13 21:28) [#12307]
I have now tested some changes on Shin Jinseo's page.
1. I tried linking pictures hosted at fandom, but for some reason the images seem blurry when they are embedded in SL. You can compare with the actual linked image and there's a difference. I tried with no width and with widths of 400 and 640.
2. Pictures going kind of out of bounds cause the subheadings below them to not stay where they belong. I had to move "External links" above "Pictures". Also, the picture boxes don't seem to align nicely. I didn't try float yet; maybe that helps?
3. A long caption may cause the image to go more out of bounds than it would without one; I didn't test much. Sensei's Library seems quite outdated when it comes to responsive web design, but maybe something decent can be worked out.
4. This testing was with multiple largish images on the page. What do you think, is that too much or intrusive? Should there not be so many such pics on a player page?
Maybe I can figure these out myself later, but now I gotta call it a day, so I'm just leaving this here.
: ((no subject))
(2023-09-13 16:44) [#12306]
Another advantage of having images on pages is that they appear on Waltheri's database (ps.waltheri.net).
: ((no subject))
(2023-09-13 16:19) [#12305]
It sounds like lots of nice new content for SL, I look forward to seeing it! Hope it's not too much work for you - maybe it's better to not be too ambitious at first...?