Given an arbitrary graph, a random walk of a chip is a path that begins at a given starting vertex and chooses each next vertex uniformly at random from the neighbors of the current one. Random walks have been used to model a wide variety of processes in economics, physics, medicine, and mathematics.
Around 2000, Jim Propp invented a quasirandom analogue of the random walk. He called it the "rotor-router model", while Joel Spencer coined the term "Propp machine". Instead of distributing chips to randomly chosen neighbors, the model serves the neighbors deterministically in a fixed order by associating with each vertex a "rotor" pointing to one of its neighbors. The first chip reaching a vertex moves on in the direction the rotor is pointing; then the rotor is updated to the next direction in some cyclic ordering. The second chip is sent in this new direction, the rotor is updated again, and so on. This distributes the chips highly evenly among the neighbors, similar to (and in fact more evenly than) what a random walk would have done. A central question in research on the rotor-router model is how well it mimics particular aspects of the random walk. A number of such aspects have been considered so far; we discuss two of them here.
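The mechanism can be sketched in a few lines of Python; the graph, the rotor state, and the function name below are illustrative choices, not taken from the original work:

```python
def rotor_walk(neighbors, rotor, start, steps):
    """Walk one chip for `steps` rotor-guided jumps.

    neighbors: dict mapping each vertex to its neighbor list
               in a fixed cyclic order.
    rotor:     dict mapping each vertex to an index into that
               list (mutated in place as the chip passes).
    """
    v = start
    for _ in range(steps):
        nxt = neighbors[v][rotor[v]]                   # chip follows the rotor
        rotor[v] = (rotor[v] + 1) % len(neighbors[v])  # then the rotor advances
        v = nxt
    return v

# A chip on a triangle graph: from vertex 0 it is first sent to
# neighbor 1, and the rotor at 0 now points to neighbor 2.
triangle = {0: [1, 2], 1: [2, 0], 2: [0, 1]}
rotor = {v: 0 for v in triangle}
print(rotor_walk(triangle, rotor, start=0, steps=1))  # -> 1
```

Because the rotor state persists between chips, a second chip started at vertex 0 would be sent to neighbor 2, which is exactly the even serving of neighbors described above.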
Suppose we put some number of chips on each vertex. What difference does it make whether they all do a random walk or a rotor-router walk? More precisely, in the rotor-router model one time step consists of each chip (in arbitrary order) doing one rotor-guided jump to a neighbor, including the update of the rotor direction. After a certain number of time steps, we compare the actual number of chips on some vertex in the rotor-router model with the expected number of chips in the random walk model; this difference is the "discrepancy" on this vertex at this time.
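One such time step, together with the random-walk expectation it is compared against, can be sketched as follows (the dictionaries and function names are my own illustration):

```python
def rotor_step(chips, neighbors, rotor):
    """One time step: every chip does one rotor-guided jump.
    The order in which the chips jump is irrelevant for the result."""
    new = {v: 0 for v in neighbors}
    for v, count in chips.items():
        for _ in range(count):
            w = neighbors[v][rotor[v]]
            rotor[v] = (rotor[v] + 1) % len(neighbors[v])
            new[w] += 1
    return new

def expected_step(chips, neighbors):
    """Expected chip counts after one step of random walk."""
    new = {v: 0.0 for v in neighbors}
    for v, count in chips.items():
        for w in neighbors[v]:
            new[w] += count / len(neighbors[v])
    return new

# Three chips on vertex 0 of a 4-cycle: the rotor sends two of them
# one way and one the other, while the random walk expects 1.5 on
# each side, so the discrepancy is 0.5 on both neighbors.
cycle = {0: [1, 3], 1: [2, 0], 2: [3, 1], 3: [0, 2]}
rotor = {v: 0 for v in cycle}
actual = rotor_step({0: 3}, cycle, rotor)
expected = expected_step({0: 3}, cycle)
```

The quantity studied below is exactly this per-vertex difference between `actual` and `expected`, iterated over many time steps.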
This problem has been studied by several researchers. It is easy to see that on finite graphs, the discrepancy on a single vertex is bounded by a constant, independent of the total number of chips. This is not true for all infinite graphs, but (some technicalities aside) it does hold for the infinite grid Z^d, as proven by Cooper and Spencer. For the infinite line Z^1, Cooper, Doerr, Spencer, and Tardos showed, among other results, that this constant is approximately 2.29. For Z^2, Doerr and Friedrich discovered that this constant is approximately 7.8 if all vertices serve their neighbors in clockwise or counterclockwise order, and approximately 7.3 otherwise. This result in particular shows that the order in which the neighbors are served makes a difference. Note that this is not an issue in the one-dimensional case, where there is essentially only one ordering (alternating left and right). The result raises the question whether all graphs have this property. Cooper, Doerr, Friedrich, and Spencer answered this question negatively: for infinite regular trees, they showed that the discrepancy can be unbounded.
Another way to look at the rotor-router model is via aggregating models. Here, each chip (one after the other) starts at the origin, walks until it reaches an unoccupied vertex, and occupies it. The question is what the set of occupied vertices looks like. It is natural to expect some circular shape for both random walk and rotor-router walk. Lawler et al. showed in 1992 that the limiting shape is indeed a Euclidean ball for the random walk. Let us denote by r_n the largest radius of a ball containing no unoccupied vertex, and by R_n the smallest radius of a ball containing all occupied vertices. Lawler proved that with high probability the difference between these two radii is bounded by O(n^(1/6)). However, Moore and Machta observed experimentally that the actual error terms are much smaller, namely of logarithmic size.
For the aggregating rotor-router model, again much less is known. Levine and Peres proved that the shape of this model converges to a Euclidean ball, in the sense that the symmetric difference of the actual shape of n occupied vertices and the ball of appropriate radius has cardinality O(n^(5/6)). Levine also showed that after n chips, the rotor-router aggregation contains a disk whose radius grows as n^(1/4); in other words, r_n is at least of order n^(1/4).
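A direct step-by-step simulation of rotor-router aggregation on Z^2, together with the two radii, could look as follows. The particular rotor sequence, the "chip follows the rotor, then the rotor advances" convention, and all names are assumptions made for illustration:

```python
import math

# One fixed cyclic order of the four directions (an illustrative choice).
DIRS = [(-1, 0), (0, -1), (0, 1), (1, 0)]

def rotor_aggregate(n):
    """Release n chips from the origin of Z^2; each walks until it
    reaches an unoccupied vertex and settles there."""
    occupied = set()
    rotor = {}                      # vertex -> index into DIRS
    for _ in range(n):
        v = (0, 0)
        while v in occupied:
            i = rotor.get(v, 0)
            dx, dy = DIRS[i]        # chip follows the rotor ...
            rotor[v] = (i + 1) % 4  # ... then the rotor advances
            v = (v[0] + dx, v[1] + dy)
        occupied.add(v)
    return occupied

def radii(occupied):
    """R_n (circumradius of the cluster) and the distance to the
    nearest unoccupied lattice point, which bounds r_n."""
    R = max(math.hypot(x, y) for x, y in occupied)
    m = int(R) + 1
    r = min(math.hypot(x, y)
            for x in range(-m, m + 1) for y in range(-m, m + 1)
            if (x, y) not in occupied)
    return r, R

cluster = rotor_aggregate(5)  # the origin plus its four neighbors
```

Note that settled chips never move again, so each chip's walk only gets longer as the cluster grows; this is why the naive simulation discussed further below needs quadratic time.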
Similar to the observations regarding the discrepancy between random and rotor-router walk, we observe slightly different properties of the rotor-router aggregation for different orders of the neighbors (called rotor sequences). Below you can see three rotor-router aggregations of one billion chips for different rotor sequences. The left one (←↓↑→) is closest to a circle, while the right one (←→↓↑) has the largest R_n - r_n. The color indicates in which direction the rotor is pointing.
Kleber discovered experimentally for the clockwise rotor sequence that even after three million chips, R_n and r_n differ by less than two. You can find here a chart showing R_n - r_n for varying n, depending on the chosen rotor sequence. The different colors represent different rotor sequences, and the black lines are moving averages. In fact, up to n = 3 million, the maximum of R_n - r_n is 1.21, 1.74, and 1.96 for ←↓↑→, ←↓→↑, and ←→↓↑, respectively. Note that (as is common notation) the rotor sequences given here are "retrospective", not "prospective": the rotor is updated first, and then the chip is sent in the direction of the rotor. This way, the rotor records where the last chip went instead of where the next chip is going, which is sometimes advantageous. As noted by Jim Propp, the ordering of R_n - r_n in the figures above exactly corresponds to the ordering of the local discrepancies of Holroyd and Propp.
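The two conventions differ only in where the rotor increment happens relative to the chip move; a minimal illustration (the helper names are hypothetical):

```python
DIRS = ['←', '↓', '↑', '→']

def step_prospective(rotor):
    """Chip follows the current rotor, then the rotor advances."""
    d = DIRS[rotor[0]]
    rotor[0] = (rotor[0] + 1) % len(DIRS)
    return d

def step_retrospective(rotor):
    """Rotor advances first, then the chip follows it, so the
    rotor always records where the last chip went."""
    rotor[0] = (rotor[0] + 1) % len(DIRS)
    return DIRS[rotor[0]]
```

Starting a retrospective rotor one position earlier in the cycle reproduces the prospective walk, so the choice of convention only relabels the initial rotor state.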
As in the three examples above, the final rotor direction can be used as a color to visualize the rotor-router aggregation. The resulting structures are very chaotic around the origin and somewhat self-similar and fractal-like closer to the border. There is currently no good explanation of where this structure comes from.
Step-by-step simulation of n chips takes time Θ(n^2). Hence it is infeasible to calculate large rotor-router aggregations with one or ten billion chips that way. To speed things up, Friedrich and Levine have presented an algorithm that computes the final state of the aggregation without computing all intermediate states, in roughly O(n log n) time. Their technique is based on a "least action principle" which characterizes the odometer function of the growth process: starting from an educated guess for the odometer, they successively correct under- and overestimates and provably arrive at the correct final state. The source code is available here.
More examples can be found on the following pages: