In the tournaments that I ran as accelerated, lower-rated players who scored upsets (either draws or wins) in the first two rounds got kicked to the top, but felt better about it because they had had more reasonable pairings in rounds 1 and 2. More importantly, new ‘unrateds’ got a better first-tournament experience. The complainers were mostly in the top quarter, the players who wanted the traditional rabbit hunt in rounds 1 and 2. These tournaments were advertised as ‘accelerated’, but the top players either didn’t read the TLA or didn’t understand ‘accelerated’ pairings. There was some groaning and moaning about having to face a first-round opponent who might take longer than 30 minutes to defeat and might even have a chance. The advent of the ‘computer’ kids may have changed this somewhat; they seem to be harder to beat for everyone, regardless of rating. I wonder about the effect of using accelerated pairings for half of the total rounds, for example, three rounds accelerated in a five- or six-round tournament. Players 1-4 might face each other in the 4th round and may have to play for the win rather than take draws. There could be unexpected consequences: there is so little money in chess that more pros might drop out of chess with a more competitive pairing system.
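As a rough illustration of the idea, here is a minimal sketch of one common acceleration scheme: a temporary bonus point is imagined for the top half, so round 1 pairs the first quarter against the second and the third against the fourth, instead of top half versus bottom half. The function name and the simplifying assumption that the field divides evenly into quarters are mine, not from any official pairing rule.

```python
def accelerated_round1(players):
    """Round-1 pairings under an assumed accelerated scheme.

    players: names sorted by rating, highest first; the field is assumed
    to divide evenly into quarters for simplicity.
    """
    half = len(players) // 2
    quarter = half // 2
    pairings = []
    # Top half pairs among itself: quarter 1 vs quarter 2.
    for i in range(quarter):
        pairings.append((players[i], players[quarter + i]))
    # Bottom half pairs among itself: quarter 3 vs quarter 4.
    for i in range(quarter):
        pairings.append((players[half + i], players[half + quarter + i]))
    return pairings

print(accelerated_round1(["A", "B", "C", "D", "E", "F", "G", "H"]))
# → [('A', 'C'), ('B', 'D'), ('E', 'G'), ('F', 'H')]
```

Compare this with a normal Swiss round 1, where A would face E, the middle of the field; here the top seed's first opponent is already a near-peer.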
They would probably really hate 1-versus-2 pairings. The guy they were going to agree a draw with in the final round to divide up first and second place? Now they have to beat that guy in round 1, which becomes maybe their toughest game. Then in round 2 they have another tough game. They would probably face that game in a Swiss too, but there it would come in the second-to-last round, often the only tough game of the tournament; with 1-versus-2 pairing it comes in round 2. After that, the top player is not necessarily home free, because there are rising stars coming up from below who have to be defeated. They would hate it. But it sounds rather interesting to watch.
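To make the contrast concrete, here is a small sketch of the two round-1 schemes side by side. The function names are mine; both assume an even number of players sorted by rating.

```python
def round1_swiss(players):
    """Traditional Swiss round 1: top half vs bottom half (1 vs n/2+1, ...)."""
    half = len(players) // 2
    return [(players[i], players[half + i]) for i in range(half)]

def round1_one_vs_two(players):
    """'1 versus 2' pairing: adjacent seeds meet at once (1 vs 2, 3 vs 4, ...)."""
    return [(players[i], players[i + 1]) for i in range(0, len(players), 2)]

seeds = [1, 2, 3, 4, 5, 6, 7, 8]
print(round1_swiss(seeds))       # → [(1, 5), (2, 6), (3, 7), (4, 8)]
print(round1_one_vs_two(seeds))  # → [(1, 2), (3, 4), (5, 6), (7, 8)]
```

In the Swiss, seeds 1 and 2 can only meet near the end; in the 1-versus-2 scheme they meet immediately.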
It tests “random” switches that are somewhat informed guesses about what changes are likely to improve matters; that is, someone who has a “bad” pairing is more likely to be chosen for a new test pairing. Everything is controlled by a simulated annealing algorithm, where somewhat more aggressive changes (such as switching players out of their score group, or even pairing two players a second time) are tried at high temperatures. The high-temperature swaps should figure out what score groupings are necessary; then the lower temperatures switch players around to get the colors as correct as possible.
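A minimal sketch of the annealing loop described above, under assumptions of my own: pairings are encoded as an ordering (consecutive players are paired), the cost function penalizes repeat pairings heavily and score differences lightly, and the move is a plain random swap accepted by the Metropolis rule. The real program's cost terms (including color correction) and its biased move selection are not reproduced here.

```python
import math
import random

def cost(order, scores, played):
    """Penalty for pairing order[0]-order[1], order[2]-order[3], ...

    Repeat pairings get a large penalty; score-group mismatches a small one.
    """
    c = 0.0
    for i in range(0, len(order), 2):
        a, b = order[i], order[i + 1]
        if b in played.get(a, set()):   # these two have already met
            c += 100.0
        c += abs(scores[a] - scores[b])  # prefer same-score opponents
    return c

def anneal(players, scores, played, t=10.0, cooling=0.99, steps=4000):
    """Anneal toward a low-cost pairing; returns (pairs, final cost)."""
    random.seed(1)  # fixed seed so the sketch is reproducible
    order = players[:]
    cur = cost(order, scores, played)
    for _ in range(steps):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        new = cost(order, scores, played)
        # Metropolis rule: always accept improvements; accept worse moves
        # with probability exp(-delta/t), so aggressive shake-ups (repeat
        # pairings, out-of-score-group swaps) survive mostly while the
        # temperature is still high.
        if new <= cur or random.random() < math.exp((cur - new) / t):
            cur = new
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
        t *= cooling
    pairs = [(order[i], order[i + 1]) for i in range(0, len(order), 2)]
    return pairs, cur

pairs, c = anneal(["A", "B", "C", "D"],
                  {"A": 2, "B": 2, "C": 1, "D": 1},
                  {"A": {"B"}, "B": {"A"}})
print(pairs, c)
```

With A and B having already met, the loop settles on one of the pairings that avoids the repeat (A-C/B-D or A-D/B-C) at the cost of two small score-gap penalties.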
I hadn’t thought of using simulated annealing for this problem. That is a very clever idea! In one project I worked on recently, one person on the team (not me) experimented with simulated annealing for a problem which arises in concatenative speech synthesis; namely, finding “optimal” join points between snippets of speech audio. The “units” (as they are called) can be overlapped slightly but then have to be stretched so as not to alter the pitch. We used annealing to find a reasonable degree and combination of overlapping and stretching.
If I run into you at the U.S. Open, it would be interesting to discuss simulated annealing for pairing some time.
Our weekly club ladder (it runs almost 40 weeks each ladder year, with one ladder game/point available to a player on each such night) uses 1 vs 2 pairings, with a proviso that you cannot be re-paired against an opponent unless you’ve each had four different ladder nights against other players since you last played each other (this drops to two in the final two months). Only the players who show up are paired (and are thus able to get game points on the ladder), and a no-show night is not counted toward the four (or two). Strong players joining in the middle of the year may have a number of rabbit-hunt nights, but can work their way up to the difficult upper reaches of the ladder in a fairly reasonable time. Each year the ladder restarts with everybody at zero games and game points.
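The re-pairing proviso can be expressed as a small eligibility check. This is a hypothetical sketch of the rule as I read it: the function name, the data layout (attendance recorded as night indices, with no-show nights simply absent), and the `gap` parameter (4 normally, 2 in the final two months) are all my own choices, not the club's actual software.

```python
def can_pair(a, b, last_met_night, nights_played, gap=4):
    """May players a and b be re-paired on the ladder?

    last_met_night: night index when a and b last played (None if never).
    nights_played:  dict player -> list of night indices actually attended;
                    no-show nights don't appear, so they don't count.
    gap:            nights each must have played since they last met
                    (4 normally; the club drops it to 2 late in the year).
    """
    if last_met_night is None:
        return True  # never met: always eligible
    for p in (a, b):
        nights_since = [n for n in nights_played[p] if n > last_met_night]
        if len(nights_since) < gap:
            return False  # this player hasn't played enough nights yet
    return True

nights = {"anna": [1, 2, 3, 4, 5, 6], "ben": [2, 3, 4, 5, 6]}
print(can_pair("anna", "ben", 2, nights))  # → True  (both have 4 nights since)
print(can_pair("anna", "ben", 3, nights))  # → False (only 3 nights since)
```

Passing `gap=2` would model the relaxed rule used in the final two months.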