Because I was always more interested in programming than in solving the cube by hand, I started solving it with a program. I soon realized that a deep search with a few heuristics is not that much fun. Around the same time my friend Berka challenged me to create something more ambitious: the database of all shortest solutions.
Now, years later, I have returned to that idea as an exercise in new technologies. I implemented the analytic part of the work in C# and the brute-force part in C++/CLI. The user interface is done in WPF 3D. Because you will probably not want to generate the whole database at home, I connected the clients to the database on my machine through a WCF web service. Finally, some of you may not have a Windows platform at all but would still like to use the database, so for you I created a simple ASP+AJAX form. And now, a video of the client GUI solving the cube.
Model
So let's describe the model: the cube has 16 pieces (8 small + 8 big). The small ones come in only 4 types, because my cube has only 2 colors. The big ones are all unique. The middle layer consists of 2 halves connected by an axis; when you turn around the axis you must complete 180 degrees, so it has only two states. *We don't need to care about the middle layer.* <- See the update at the bottom. Now let's assign the pieces some (hexadecimal) numbers, anticlockwise, first on the top half and then on the bottom. The numbers are assigned on the solved cube, so that 01234567-89ABCDEF is the sequence of the solved cube. A detailed model and a guide for your puzzled cube are here.
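For illustration, here is a minimal sketch of one possible state representation under this numbering. The class and the layout (12 wedge slots of 30° per layer, a big piece occupying two adjacent slots) are my assumptions, not taken from the actual sources:

```csharp
// Sketch of a cube state under the numbering above: even hex digits are
// small pieces, odd hex digits are big pieces. For indexing, identical
// small pieces later share an id, giving the multisets 004488CC / 13579BDF.
class CubeState
{
    // Each layer is 12 wedge slots of 30 degrees: a small piece fills one
    // slot, a big piece fills two adjacent slots carrying the same id.
    public byte[] Top = new byte[12];
    public byte[] Bottom = new byte[12];

    // Solved cube: 01234567 on top, 89ABCDEF on bottom.
    public static CubeState Solved()
    {
        var s = new CubeState();
        FillLayer(s.Top,    new byte[] { 0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7 });
        FillLayer(s.Bottom, new byte[] { 0x8, 0x9, 0xA, 0xB, 0xC, 0xD, 0xE, 0xF });
        return s;
    }

    static void FillLayer(byte[] slots, byte[] pieces)
    {
        int i = 0;
        foreach (var p in pieces)
        {
            slots[i++] = p;                    // every piece takes one slot...
            if ((p & 1) != 0) slots[i++] = p;  // ...and a big (odd) piece a second one
        }
    }
}
```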
Note that each small piece is there twice, so we have a permutation of the multiset 004488CC for the small pieces and 13579BDF for the big ones, which gives us about 2.2*10^13 permutations. Let's also agree to consider as identical all cube configurations that can be transformed into each other without turning the left and right halves of the cube: rotating the top, rotating the bottom, and flipping the whole cube still give the same cube. Finally, note that we will mark the solved cube as level 1; it is the only cube on that level. When we perform any rotations of the top and bottom and finally turn the right half, we reach level 2. We can find all cubes on level 2 by doing all possible operations on the cube on level 1. Level 3 is the result of all operations on all cubes on level 2 that are not already on level 1 or 2, and similarly for the other levels.
The notation of moves is (top, bottom), '/' is a turn of the right half, and '!' is a flip of the whole cube. For both the top and bottom rotation, positive means anticlockwise.
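A sketch of how these moves might act on the 12-slot layers from the model sketch above; the slot convention (which six slots form the right half, and how the flip mirrors a layer) is my assumption:

```csharp
using System;

static class Moves
{
    // (top, bottom): rotate each layer by the given number of 30° steps,
    // positive = anticlockwise under the indexing convention assumed here.
    public static void Rotate(CubeState s, int top, int bottom)
    {
        s.Top = RotateLayer(s.Top, top);
        s.Bottom = RotateLayer(s.Bottom, bottom);
    }

    // '/' : swap the right halves (slots 0..5) of the two layers.
    // Only legal when the cut does not split a big piece in either layer.
    public static bool TurnRight(CubeState s)
    {
        if (SplitsBigPiece(s.Top) || SplitsBigPiece(s.Bottom)) return false;
        for (int i = 0; i < 6; i++)   // the halves come back mirrored (180° turn)
            (s.Top[i], s.Bottom[5 - i]) = (s.Bottom[5 - i], s.Top[i]);
        return true;
    }

    // '!' : flip the whole cube over, exchanging and mirroring the layers.
    public static void Flip(CubeState s)
    {
        (s.Top, s.Bottom) = (s.Bottom, s.Top);
        Array.Reverse(s.Top);
        Array.Reverse(s.Bottom);
    }

    static bool SplitsBigPiece(byte[] l) =>
        (l[5] == l[6] && (l[5] & 1) != 0) ||      // same big piece across slots 5|6
        (l[11] == l[0] && (l[11] & 1) != 0);      // or across slots 11|0

    static byte[] RotateLayer(byte[] slots, int steps)
    {
        var r = new byte[12];
        for (int i = 0; i < 12; i++)
            r[((i + steps) % 12 + 12) % 12] = slots[i];
        return r;
    }
}
```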
-- Here was an AJAX frame with a web service to the database. It is gone now.
Algorithmization
This is the best part; the rest of the work was just the cherry on the cake. My first idea years ago was very naive: do operations on the cube model, then search whether the cube is in the list of already found cubes and, if not, add it to the list. This doesn't work for long, because the growing list of cubes makes the search slower and slower. So, sort it and use binary search? Not in memory, because the number of cubes is about 6.8*10^9, which cannot fit into a 32-bit address space. OK, we could use a B-tree on disk, maybe even with some existing database engine. But there is a better solution. To each cube configuration, i.e. permutation, we can assign a permutation index, and at that index we store just the level of the cube. But 2.2*10^13 permutations would give us 20 terabytes of data, which is still not good. We know that there are 90 shapes, and each shape has an exact order of small and big pieces. For example, CeCeCeCe/CeCeCeCe is the shape of the solved cube, where 'C' is a big piece and 'e' a small one. So now we have 90 shapes * 2520 permutations of small pieces * 40320 permutations of big pieces. That is 9,144,576,000 permutations, i.e. the same number of bytes at one byte per cube. I guessed that there are no more than 15 levels (actually there are only 12), so we can use half a byte to store the level of any cube. That makes about 4.2 GB, good enough for my PC!
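A sketch of the indexing idea: rank the big-piece order with a Lehmer code into 0..40319 (the small pieces get an analogous multiset rank into 0..2519, not shown), and store the level in half a byte at the combined index. The exact index layout is my guess, not the one from the sources:

```csharp
static class Indexing
{
    // Rank a permutation of 8 distinct items (the big pieces) into 0..40319
    // with a Lehmer code: for each position, count smaller items to its right.
    public static int RankBig(byte[] perm)               // perm.Length == 8
    {
        int rank = 0;
        for (int i = 0; i < 8; i++)
        {
            int smaller = 0;
            for (int j = i + 1; j < 8; j++)
                if (perm[j] < perm[i]) smaller++;
            rank = rank * (8 - i) + smaller;              // mixed-radix accumulation
        }
        return rank;
    }

    // One possible layout of the combined index; the database keeps one
    // level per cube in half a byte (in reality split into per-shape files).
    public static long CubeIndex(int shape, int smallRank, int bigRank) =>
        ((long)shape * 2520 + smallRank) * 40320 + bigRank;

    public static int GetLevel(byte[] db, long index)
    {
        byte b = db[index >> 1];
        return (index & 1) == 0 ? b & 0x0F : b >> 4;
    }

    public static void SetLevel(byte[] db, long index, int level)
    {
        long i = index >> 1;
        db[i] = (index & 1) == 0
            ? (byte)((db[i] & 0xF0) | level)
            : (byte)((db[i] & 0x0F) | (level << 4));
    }
}
```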
Now, for each level, we go through the whole database, and whenever we find a half-byte set to the current level, we perform all possible operations on the cube described by the address of that half-byte. The results are converted back to the addresses of the new cubes and written to the database, unless the new cube is already on the same or a previous level. Before saving, we normalize the cube to one of the 90 shapes by rotating the top, rotating the bottom, or flipping the whole cube.
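The scan itself could look roughly like this, with `expand` standing in for the real shape-specific move generation and normalization, and using the GetLevel/SetLevel helpers from the sketch above (level 0 meaning "not reached yet"):

```csharp
using System;
using System.Collections.Generic;

static class LevelScan
{
    // One breadth-first pass: every cube stored at the current level is
    // expanded, and each normalized result is recorded at level + 1
    // unless it was already reached on an earlier pass.
    public static void ExpandLevel(byte[] db, long cubeCount, int level,
                                   Func<long, IEnumerable<long>> expand)
    {
        for (long index = 0; index < cubeCount; index++)
        {
            if (Indexing.GetLevel(db, index) != level) continue;
            foreach (long next in expand(index))
                if (Indexing.GetLevel(db, next) == 0)
                    Indexing.SetLevel(db, next, level + 1);
        }
    }
}
```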
We can also save some work in the analysis by skipping all operations on a shape that lead to the same result; for example (0,0)/! is the same as (6,6)/ for all shapes. We can also minimalize those cubes whose shape stays the same after a rotation; for example (3,6) on the solved cube does not change the shape, just the colors. In that case we write only the rotated cube that has the minimal index of its color permutation. This eliminates identical cubes from processing in the next level.
Now we need some clever paging, because it is not possible to get it all into memory at once with a 2 GB address space. It helps that transitions always go between a pair of shapes, so we can use the file of one shape as the source and another as the target. A memory-mapped file is a neat solution for the shape files (50 MB each). It also helps to flush those files lazily, and to order the processing of page pairs so that only a few loads and flushes are performed per file, because the disk is slooow. This is possible because a shape does not produce cubes in all other shapes, only in neighboring ones. It also helps to create each file whole at once to prevent fragmentation on disk. Windows Vista paging seems to do a better job than XP when working with mapped files. To keep the processors and the disk busy all the time, we can run more threads than we have processor cores, each thread working on one pair of shapes. When one thread is waiting for the disk, another thread is heating the core.
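A minimal sketch of the per-shape mapped files using System.IO.MemoryMappedFiles; creating each file at its full ~50 MB size up front follows the description above, but the file naming and other details are illustrative only:

```csharp
using System.IO;
using System.IO.MemoryMappedFiles;

static class ShapeFiles
{
    // 2520 * 40320 cubes per shape, two levels per byte => ~48 MB per file,
    // created whole at once so it does not fragment on disk.
    const long ShapeFileSize = 2520L * 40320 / 2;

    public static MemoryMappedViewAccessor OpenShape(int shape)
    {
        string path = $"shape_{shape:D2}.bin";           // illustrative file name
        var mmf = MemoryMappedFile.CreateFromFile(
            path, FileMode.OpenOrCreate, null, ShapeFileSize);
        // In real code the MemoryMappedFile must be kept and disposed as well.
        return mmf.CreateViewAccessor(0, ShapeFileSize);
    }
}
```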
Now we can optimize the operations on the cube itself. Here comes the C++/CLI. We generate an optimized code branch for expanding cubes, specific to the source and target shape. We can also optimize the computation of the permutation index, because the indexes of the 2520 and 40320 permutations can be looked up in a hash table, with a lookup table for the way back from index to permutation. And we can do a whole turn (rotation of top and bottom, the turn, and the rotation of top and bottom as normalization to the shape) just with bitwise AND, OR and rotation on processor registers; such an operation can be done separately on the permutation of small pieces and of big pieces, 32 bits each. Finally, we avoid CLR transitions from managed to native code and back, so the code for a whole shape-file scan should be native.
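As an illustration of the bitwise idea (not the author's actual layout): with eight 4-bit piece ids packed into one 32-bit word per group, rotating a layer's small or big pieces becomes a register rotate:

```csharp
static class BitTricks
{
    // Pack eight 4-bit piece ids into one uint: piece i sits in bits 4*i..4*i+3.
    public static uint Pack(byte[] pieces)                // pieces.Length == 8
    {
        uint w = 0;
        for (int i = 0; i < 8; i++)
            w |= (uint)(pieces[i] & 0xF) << (4 * i);
        return w;
    }

    // Rotating the group by k pieces is then a 32-bit rotate by 4*k bits,
    // i.e. just shifts and OR on a register as described above.
    public static uint RotateGroup(uint w, int k)
    {
        int s = (4 * k) & 31;
        return s == 0 ? w : (w << s) | (w >> (32 - s));
    }
}
```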
Mike Masonjones probably did something similar in 2005; maybe he had a slightly harder problem, as he was solving the cube with 6 colors. He had an old machine dedicated to the task, and it ran for a year! My algorithm can do it in about 8 hours. I have a dual-core 2.4 GHz machine with 4 GB RAM, but I'm still happy with a 1000x speedup.
I had a great time playing with this problem and learned a lot. Interesting? Found a bug? Any feedback? Enjoy!
Numbers for the silver edition
Level | Number of cubes
1 | 1
2 | 32
3 | 536
4 | 7652
5 | 102204
6 | 1296753
7 | 15250126
8 | 153335000
9 | 1138303312
10 | 4067633460
11 | 1434709972
12 | 164952
Download
Source code under LGPL on SourceForge Subversion.
GUI client + database creation engine binaries (.NET 3.0 required).
Analysis report.
Database of shapes in XML.
Cube21 Links
Update: This is probably wrong. My idea was that the correction could be made by merging /(0,6)/(0,6)/(0,6) into the last three turns when necessary. I currently don't have time to think it through deeply, but it now seems to me that it wouldn't work for all shapes. It probably means that my database is not 100% correct and that some paths are one turn longer. It could also force me to store another bit of information per permutation, so the required disk space could be twice as big. I will welcome anyone who fixes it :-)
6 comments:
I only read over your posting briefly, but there is an interesting optimization that your solution does not seem to take advantage of.
It is called bidirectional search. Basically, if you know that a solution is always reachable within 12 steps, you search say 7 steps from the solution, and store all reachable states, as well as how many moves you needed to reach each of them.
Then, to find a solution for a particular cube state, you just have to search 5 steps from that state, and you are guaranteed to find a state reachable from the solution.
(By the way, if in your state representation multiple states are the "solution" - say because of rotations - you can either search from all of them, or do some normalization, e.g. by fixing one of the pieces.)
Hmmm, I wasn't sure that there are only 12 levels before I reached all configurations, so now - why not use levels 7-12 when I have already built them? There is normalization as you described it, I call it minimalization and it's done on the index of permutations.
Well, the point is that you don't have to build levels 7-12.
It should be much less work (by several orders of magnitude) to only compute levels 1-6, and you'd need much less storage for the precomputed values.
By the way, if you don't know the maximum depth, you can still use bidirectional search. Search from the solution and the scrambled state in parallel (switch between the searches after each move), and terminate when one search reaches a state that has already been visited by the other search.
Of course, what I suggested is only an algorithm to solve a particular scrambled state.
If you really want to build the database of all solutions, then it doesn't help.
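For reference, a minimal sketch of that meet-in-the-middle idea, with `neighbors` as a hypothetical stand-in for move generation on encoded states:

```csharp
using System;
using System.Collections.Generic;

static class MeetInTheMiddle
{
    // Alternate breadth-first searches from both ends and stop when one side
    // reaches a state already visited by the other. (Guaranteeing the very
    // shortest total would additionally require finishing the current depth.)
    public static int Distance(long start, long goal,
                               Func<long, IEnumerable<long>> neighbors)
    {
        if (start == goal) return 0;
        var fromStart = new Dictionary<long, int> { [start] = 0 };
        var fromGoal  = new Dictionary<long, int> { [goal]  = 0 };
        var qs = new Queue<long>(new[] { start });
        var qg = new Queue<long>(new[] { goal });

        while (qs.Count > 0 && qg.Count > 0)
        {
            int? d = Step(qs, fromStart, fromGoal, neighbors);
            if (d.HasValue) return d.Value;
            d = Step(qg, fromGoal, fromStart, neighbors);
            if (d.HasValue) return d.Value;
        }
        return -1;  // the two searches never met

        static int? Step(Queue<long> q, Dictionary<long, int> mine,
                         Dictionary<long, int> other,
                         Func<long, IEnumerable<long>> neighbors)
        {
            long s = q.Dequeue();
            foreach (long n in neighbors(s))
            {
                if (other.TryGetValue(n, out int od)) return mine[s] + 1 + od;
                if (!mine.ContainsKey(n)) { mine[n] = mine[s] + 1; q.Enqueue(n); }
            }
            return null;
        }
    }
}
```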
Have you considered the Needleman-Wunsch algorithm or any other algorithm from graph theory and dynamic programming? You shouldn't need more than linear space for this problem. See the all-shortest-paths problem...
I didn't get why you are proposing Needleman-Wunsch; that is for aligning sequences, right? For normalization? We need an exact match, not just a close one.
As for dynamic programming, I believe that I did exactly that: precomputed the first level and reused it for the second level and so on. But I'm not a theory expert.