*Some of my Rubik’s Cubes*

I tried several different methods, starting with the Beginner’s Method and working my way up to more complex ways of solving the cube, until I ended up at a tutorial with the following statement:

> OLL: There is nothing in this step except learning 57 algorithms for 57 cases. (Link)

**Nope thanks.**

I then realized a simple fact: solving the Rubik’s Cube isn’t that hard, but memorizing algorithms is, at least for me, and even more so keeping all those algorithms in memory over time. I also realized that for me, as an occasional cuber, it would be more important to just be able to solve it, even after not touching the cube for months or years.

After maybe the 10th time of having to relearn everything, I decided I needed a special recipe, specifically tailored to fit my needs:

- Only a few algorithms to learn, use intuition as far as possible
- Some mnemonics to memorize the algorithms
- Still able to solve the cube in 2-3 minutes
- Should be expandable / reusable if I wanted to improve or get faster

So that’s when I decided to create my own, customized sequence of steps for solving the cube, and made a recipe that reduces the number of algorithms and makes them easy for me to memorize.

It’s somewhere between this very simple method by Badmephisto (only available as a WebArchive backup) and the slightly more sophisticated approach of Jasmine’s Beginner Method.

On a high level, I’m using the CFOP-method (popularized by Jessica Fridrich), which means we don’t solve the cube side by side, but use a layered approach.

The steps are as follows:

- **Cross**
- **F2L** – first 2 layers
- **OLL** – orientation of the last layer
- **PLL** – permutation of the last layer

To understand the algorithms, you’ll need to know basic cube notation, but only the real basics, e.g. the face letters L, R, U, D, F and B, and that an apostrophe (’) means turning that face backwards (counter-clockwise).

The beginning is always the cross on the first, white layer, extending to the middle layer. If you tried solving the Rubik’s Cube before this should be easy to do by intuition, and after this first step you should have something looking like this:

*front and back of a solved cross*

After this step, the white side is turned down and everything from now on is done from this new point of view.

The first two layers are the most time-consuming part, even for real speedcubers. I decided to take the approach where we first locate the two missing pieces, bring them together and then put them into the correct slot. (As opposed to the approach where you first bring the corner into position and then the middle piece, which I find even more complicated.)

If you do this often enough, it becomes manageable by intuition. I’m not going to focus on it here, because everyone approaches intuitive solving a bit differently, and because it takes some time and practice to really grok it.

For some practical tips and inputs have a look at this video tutorial (which is one of few being reasonably slow):

The big advantage of learning to solve F2L by intuition is that it’s nearly impossible to forget! Even if you’re missing some parts, you can re-learn it really quickly by practicing.

Now to our first algorithm. Hooray! We have the first two layers and want to build a cross, like in the beginning, except now we only care for the **orientation** of the last layer, nothing else.

I memorized this as always starting with the front side **F**, then **R** and **U** in the middle, first normally and then backwards, and at the end move the front backwards again. To distinguish between the two cases above: If it’s a line, break that line by first moving **R**, else first move **U**, everything else is the same.

Oh, and if only the center of the yellow face is yellow, do one of the two at random and you should get one of the states above. Then do the appropriate one to complete the yellow cross.
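
Read literally, this mnemonic spells out the two standard OLL-cross algorithms. The exact moves below are a reconstruction from the description, so double-check them against the cheat sheet at the end:

```
Line case:    F R U R' U' F'
L-shape case: F U R U' R' F'
```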

Next up is a neat little algorithm which I memorized as “twist and turn”. It’s really simple if you look at the two hands separately: your right hand just moves up and down, simple as that. Your left hand turns the uppermost layer clockwise – two times one turn, and the third time two turns to complete a full rotation.

For the second algorithm on the right, just do the same for the right hand, but start at the end and move anti-clockwise with your left hand.
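
Spelled out in notation, the “twist and turn” description (right hand up and down, left hand turning the top layer one, one, then two turns) matches the well-known Sune algorithm, with the second variant being its reversed, anti-clockwise counterpart. These moves are reconstructed from the description, so verify them against the cheat sheet:

```
Twist and turn:    R U R' U R U2 R'
Reversed variant:  R U2 R' U' R U' R'
```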

This was the most difficult algorithm for me, and it needed a good memorization technique. The start is on the right, like all the others except the first one. Then it gets quite simple: just go back and forth between the back and front (over the right side) until you’re finished. This was my main insight for remembering this one.

There are some other helpful observations:

- On the back, always do a double turn.
- For right and front, alternate between clockwise and anti-clockwise – after coming back the first time from the back, go clockwise, the next time you pass anti-clockwise, and so on. As you may notice, this doesn’t add up in the end, but by then, it’s pretty clear how to finish the algorithm with some additional R turning.
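
Putting these observations together (start on the right, double turns on the back, alternating turn directions on right and front, extra R turning at the end), the description appears to match the well-known Aa corner permutation. This is a reconstruction, so compare it with the cheat sheet:

```
R' F R' B2 R F' R' B2 R2
```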

This one is my favourite algorithm. Partly because it’s the last one and after that you’re finished, but also because you can hold the middle while turning.

You prepare with R2 and U, and then the **F B’** part happens, where you turn the sides to the right. After a quick R2, you can turn them back with **F’ B**, and you’re nearly done. I call this one the whopper because the rotating sides with the stable middle always remind me of a hamburger :-)
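
Written out, the whopper matches the classic three-edge U permutation: R2 and U to prepare, F B’ and later F’ B to rotate the sides around the stable middle, and U R2 to finish. Again a reconstruction from the description, so check it against the cheat sheet:

```
R2 U F B' R2 F' B U R2
```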

It’s possible that you’ll have to repeat this one, if all four edges on the last layer are incorrectly placed. If this is the case, make sure to think of how they’re going to be aligned after you do it once.

That’s all! **4 algorithms** (some with a second variation) are enough to solve the cube. This keeps the number of moves to memorize really low, but still enables you to solve the cube within 2-3 minutes with some practice. I encourage you to give them some silly names yourself, as they will stick far better that way :-)

Here’s a cheat sheet with the moves:

You can also download a printable PDF here.

Last week I had a closer look at LOLCODE!

Code in LOLCODE looks like this:

```
HAI
I HAS A CATNIP ITZ 3
IM IN YR LOOP UPPIN YR CATNIP TIL BOTH SAEM CATNIP AN 10
  VISIBLE SMOOSH "I LUVZ CATNIP ITZ " AN CATNIP MKAY
IM OUTTA YR LOOP
KTHXBYE
```

LOLCODE is fun to write and, although inefficient, you can indeed do pretty much everything with it. There’s even a formal LOLCODE 1.2 specification and a course on Code School to learn the basics :-)

I decided to write a program that actually has some use, so I wrote a **Pokemon Go IV rater** in LOLCODE; check it out on Github.

*A solid Arkani with 91%!*

It calculates the special IV values (individual values, which are unique to each Pokemon) for a given Pokemon; you’ll just have to enter what kind of Pokemon it is and some of its stats:

You can try it here: **repl.it**.

Just hit **Run** and enter your stats on the right. You can also use lol-coffee if you want to run it locally.

Unfortunately, LOLCODE lacks arrays and lists (as of now; there’s a reserved keyword BUKKIT for future versions), so I rewrote those parts as simple function calls, which is why the program ended up at ~700 LOC. Other operations like input are really simple; here’s the beginning, where the user is asked for all of the input values:

```
HAI
VISIBLE "O HAI!"
BTW GIMMEH GIMMEH GIMMEH!
I HAS A POKEMON_NR, VISIBLE "GIMMEH POEKMON NR", GIMMEH POKEMON_NR
I HAS A MY_CP, VISIBLE "GIMMEH CP", GIMMEH MY_CP
I HAS A MY_HP, VISIBLE "GIMMEH HP", GIMMEH MY_HP
I HAS A MY_DUSTPRICE, VISIBLE "GIMMEH DUSTPRICE", GIMMEH MY_DUSTPRICE
BTW SOEM MAGICAL MAHT FUNTCION HOW TO CALCULAET CP W00T HOW IS THS EVN WURKIN
HOW DUZ I CP_MULTIPLIER YR LVL
BOTH SAEM LVL AN 1, O RLY?, YA RLY, FOUND YR 0.094, OIC
...
```


*Part of the table at Wikipedia*

I immediately wondered if it would be possible to identify some (preferably small) portion of text by its language, solely by looking at the letter frequencies. Obviously there are far more elaborate methods like word recognition, but the letter-frequency approach has two advantages:

- no language knowledge necessary (e.g. dictionaries or grammar rules) except for letter frequencies
- simple to implement: little code and measurable results

(Code is on Github)

The basic idea is to first analyze the text and then calculate the mean squared error for each language, something like this:

- loop through the text and calculate the relative letter frequencies (a: 3.3%, b: 2.1%, …)
- for each language:
  - load the letter frequencies for that language
  - compare the frequency of each letter to the frequency in the text and add the squared difference to that language’s score
- sort and output the score for each language
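
The steps above can be sketched in a few lines of Python. The frequency tables here are made-up, truncated examples (the real script loads the full table for each language):

```python
from collections import Counter

# Illustrative, truncated tables; the actual script uses the complete
# letter-frequency table from Wikipedia for every language.
FREQUENCIES = {
    "english": {"e": 0.127, "t": 0.091, "a": 0.082, "o": 0.075},
    "german":  {"e": 0.164, "n": 0.098, "s": 0.073, "i": 0.066},
}

def letter_frequencies(text):
    """Relative frequency of each letter in the text."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    return {letter: n / len(letters) for letter, n in counts.items()}

def rank_languages(text):
    """Mean squared error against each language's table (less is better)."""
    freq = letter_frequencies(text)
    scores = {}
    for language, table in FREQUENCIES.items():
        error = sum((freq.get(letter, 0.0) - f) ** 2
                    for letter, f in table.items())
        scores[language] = error / len(table)
    return sorted(scores.items(), key=lambda item: item[1])
```

The best-matching language ends up first in the returned list.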

Below are some text examples, and the mean squared error for each language (less is better):

In de prachtige zeestad Genua, de trotsche bijgenaamd, werd omstreeks het jaar 1435 een knaapje geboren, dat nu in alle landen als Christophorus Columbus bekend is. (…)

Solone, il cui petto un umano tempio di divina sapienzia fu reputato, e le cui sacratissime leggi sono ancora alli presenti uomini chiara testimonianza dell’antica giustizia, era, (…)

Jukolan talo, eteläisessä Hämeessä, seisoo erään mäen pohjaisella rinteellä, liki Toukolan kylää. Sen läheisin ympäristö on kivinen tanner, mutta alempana alkaa pellot, joissa, (…)

The three snippets are from Project Gutenberg; each one contains only the first 200 lines of the original book.

Interestingly enough, we also see which languages are similar to each other: for the Italian text, Spanish, French, Esperanto and Portuguese pop up first:

- Italian: MSE 0.0007 | 1503.6 points
- Spanish: MSE 0.0042 | 237.0 points
- French: MSE 0.0057 | 175.9 points
- Esperanto: MSE 0.0068 | 147.4 points
- Portuguese: MSE 0.0072 | 139.4 points

… while for the Dutch text it’s German and Danish:

- Dutch: MSE 0.0016 | 639.8 points
- German: MSE 0.006 | 166.3 points
- Danish: MSE 0.0076 | 131.6 points

We can also see that Finnish is quite an outlier, with only Esperanto anywhere near it; all other languages are quite far away.

With a simple Python script and the table from Wikipedia, we get quite good results on our example texts! It is, however, quite a small set of languages we’re checking here, so a bigger set might reveal more.

Furthermore, we also get some relative results: how close the languages are to each other and how well each one matches the letter frequencies of the example text.

Thanks for reading!

- foot -> toot | f->t
- toot -> tooth | +h

Common applications are spell checkers and search engines. If, for example, you’re searching on Google and enter a word with a typo, it’s most likely recognized by the engine and a suggestion is given:

Google will search for correctly written words with a small edit distance.

The Levenshtein distance is one way to measure how we get from one string to another. The following operations are allowed and also weighted the same:

- insertion
- substitution
- deletion

For each insertion, substitution or deletion, +1 is added to the edit distance.
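
A naive recursive sketch in Python makes the three unit-cost operations explicit (this is just an illustration of the definition, not the matrix-based implementation discussed further down):

```python
from functools import lru_cache

def levenshtein(s, t):
    """Edit distance with unit-cost insertion, substitution and deletion.
    Straight from the definition; the cache keeps the recursion cheap."""
    @lru_cache(maxsize=None)
    def d(i, j):
        # distance between the first i characters of s and the first j of t
        if i == 0:
            return j              # insert all remaining characters of t
        if j == 0:
            return i              # delete all remaining characters of s
        cost = 0 if s[i - 1] == t[j - 1] else 1
        return min(d(i - 1, j) + 1,          # deletion
                   d(i, j - 1) + 1,          # insertion
                   d(i - 1, j - 1) + cost)   # substitution (or free match)
    return d(len(s), len(t))
```

With the examples from the start, `levenshtein("foot", "toot")` is 1 and `levenshtein("toot", "tooth")` is 1.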

Try searching a fruit from the following list: [apple, banana, pear, lemon]

*Interactive demo: an input box that shows the best match and its edit distance.*

For example, try typing “lemon” in the box below, you’ll notice after “le…” that the word **apple** is shown (because it also has the part “le” in it), but after “lem”, the suggestion will switch to **lemon**, because the edit distance is smaller.

The calculation of the distance is not as trivial as it seems at first sight: for example, if we take **apple** and **lemon**, how do we start? After some trying around we discover that we can keep the “le”-part from **apple**, delete the rest and add **mon** to get **lemon**:

- apple -> pple -> ple -> le -> lem -> lemo -> lemon (6 steps)

But then we see that we can replace every single character of **apple** to get to **lemon**, and this will use only 5 operations, so the edit distance is really 5, not 6. How can we be sure to find the shortest way all the time?

Luckily, the calculation can be done with a matrix which keeps track of the optimal distance, using the Wagner-Fischer algorithm. We start at the upper left corner and work our way down to the bottom right, with inserting, substituting and deleting:

We want to get from **abc** to **ac** and start at 0. We see that “a” equals “a” and take the diagonal step to the next 0, keeping the first character “a” at no cost. Then we could replace the “b” with a “c” (that would be the diagonal step, which is also calculated), but instead we just delete the “b” and go down, because now we can once again make a free diagonal step, taking the leftover “c”. We end up at 1, which is our final (optimal) edit distance.

This is just a very quick demonstration; the algorithm works its way through every cell of the matrix, in a dynamic-programming fashion.

Have a look at the next demo:

Here’s a full matrix view of what happens when calculating the Levenshtein distance. Just edit the words and the matrix will change accordingly:

*Interactive demo: two input words and the resulting Levenshtein matrix; the distance is the bottom-right cell of the matrix.*

The following is a (shortened) implementation in Javascript:

```
// initialization of first row/column
for (var i = 0; i <= s.length; ++i) d[i][0] = i;
for (var i = 0; i <= t.length; ++i) d[0][i] = i;
```

```
// loop through matrix cells
for (var i = 1; i <= s.length; ++i) {
    for (var j = 1; j <= t.length; ++j) {
        if (s[i-1] == t[j-1]) {
            cost = 0;
        } else {
            cost = 1;
        }
        d[i][j] = Math.min(d[i-1][j] + 1,
                  Math.min(d[i][j-1] + 1, d[i-1][j-1] + cost));
    }
}
```

Full code is also on Github.

Thanks for reading!

I wrote a flocking behaviour simulation with inspiration and help from the sources below. This is the full simulation (code on Github), powered by ProcessingJS:

**Click left** to place (or remove) an obstacle.

**Click right** to (un)decorate bird #0.

After reading this post about flocking I wanted to better understand Harry Brundage’s implementation and write a simulation from scratch (but adapting some of his ideas, like the spreading from the center). Using only three simple rules, it should be possible to simulate a flocking behaviour in a realistic manner, with the result above.

Craig Reynolds proposed the original flocking algorithm back in 1986 and described the following three forces involved (description on Wikipedia):

- **Alignment**: Move in the same direction as your neighbours.
- **Cohesion**: Remain close to your neighbours.
- **Separation**: Avoid collisions with your neighbours.

*Images from Craig Reynolds’ post*

For the alignment, we just look where the neighbours are heading (their speed vector), calculate the mean and limit that value by a MAX_FORCE constant.

```
alignment = new PVector(0, 0);
for (int i = 0; i < n.size(); i++) {
    alignment.add(n.get(i).speed); // add speed
}
if (n.size() > 0) alignment.div(n.size()); // mean speed vector
alignment.limit(MAX_FORCE);
```

Cohesion works in a similar way, only instead of the average speed, we calculate the “average position”, the center of mass. Then we steer in that direction, by subtracting our own position:

```
cohesion = new PVector(0, 0);
for (int i = 0; i < n.size(); i++) {
    cohesion.add(n.get(i).pos); // add positions
}
if (n.size() > 0) {
    cohesion.div(n.size()); // center of mass
    cohesion.sub(pos); // cohesion - pos
    cohesion.limit(MAX_FORCE);
}
```

Separation was a bit trickier, because only those neighbours who come really close trigger a separation movement. The closer they come, the stronger the birds try to avoid a collision.

```
separation = new PVector(0, 0);
int count = 0;
for (int i = 0; i < n.size(); i++) {
    PVector neighbour_pos = n.get(i).pos;
    float d = PVector.dist(pos, neighbour_pos);
    if (d < DESIRED_SEPARATION) {
        PVector repulsion = pos.get();
        repulsion.sub(neighbour_pos);
        repulsion.normalize();
        repulsion.div(d);
        separation.add(repulsion);
        count += 1;
    }
}
if (count > 0) separation.div(count);
```

As with cohesion, we work with the two positions (“our own” from the bird’s view and the position of the neighbour) and calculate a direction vector, only this time it points away from the other bird, so that our bird can escape. The vector is then normalized (recalculated to a vector with the same direction but length 1) and divided by the distance between them. So the closer they are, the stronger the force driving them apart becomes.

In addition it’s possible to place an obstacle, by clicking left anywhere on the canvas:

The birds react similarly to the separation rule, except that the radius of the obstacle is much bigger than that of a bird. The force is also directly proportional to how near the bird is to the obstacle’s center.

Setting all the parameters so that the system is balanced was quite tricky, and I ended up first setting the neighbour circle (a radius / distance in which a bird sees other birds as neighbours) and the separation circle.

*The neighbour circle (black) and the separation circle (red).*

With different weights, it’s important that they stay in balance. For example, when I set the cohesion too high, the birds would slow down (because they were all steering towards each other) and eventually come to a standstill. It’s more important to fly in the same direction than to steer towards the mass center of the flock, so I set the weights accordingly. Also, separation has to be quite strong, which is not a problem because that force is only active when the birds are really close.

```
float SEPARATION_WEIGHT = 1;
float ALIGNMENT_WEIGHT = 0.4;
float COHESION_WEIGHT = 0.2;
```

After 20-30s, the simulation above should stabilize and there should be some flocks:

You can find the whole code in this Github Repo.

**Javascript’s Modulo** – I had some trouble with disappearing birds: when I let the simulation run for a few minutes, the canvas would be empty after a while. Birds would just disappear – but only if they flew upwards. When reaching the left, right or bottom boundary, they were moved correctly to the opposite side. So I figured this had to be something with ProcessingJS’ modulo function (which is compiled into Javascript), which I used to make sure the birds wouldn’t get outside the canvas. After some tests I pinned it down to the modulo function, stumbled upon this Stackoverflow post, and the problem was easily fixed.

While some say it’s a bug, technically the % in Javascript seems to be the remainder operator instead of a true modulo. I fixed it by using my own modulo function:

```
function mod(n, m) {
    return ((n % m) + m) % m;
}
```

```
// move function - wrap around if out of canvas
void move() {
    pos.x = mod(pos.x + speed.x, w);
    pos.y = mod(pos.y + speed.y, h);
    ...
}
```

**Angle between vectors** – Another problem was the correct angle for drawing the bird image I’m using. For each bird on the canvas, I had to take the angle it’s moving at into account and rotate the picture accordingly. ProcessingJS offers a function *angleBetween()*, but that just calculates the shortest possible angle, which was not what I wanted, so I rewrote the function to strictly calculate the clockwise angle, starting from PVector a:

```
function calculateAngle(PVector a, PVector b) {
    float rot = PVector.angleBetween(a, b);
    if (b.y < 0) rot = 2*PI - rot;
    return rot;
}
```

Thanks for reading!
