The Evolution of Evolvability: Changing Environments Promote Rapid Adaptation in Digital Organisms

Oh hey, I forgot to post about this when it happened last year. My paper for the ALIFE XV conference was published!

Canino-Koning, Rosangela, Michael J. Wiser, and Charles Ofria. “The Evolution of Evolvability: Changing Environments Promote Rapid Adaptation in Digital Organisms.” In Proceedings of the Artificial Life Conference 2016, 268–75. The MIT Press, 2016. doi:10.7551/978-0-262-33936-0-ch047

Abstract

Genetic spaces are often described in terms of fitness landscapes or genotype-to-phenotype maps, where each potential genetic sequence is associated with a set of properties and connected to other genotypes that are a single mutation away. The positions close to a genotype make up its “mutational landscape” and, in aggregate, determine the short-term evolutionary potential of a population. Populations with wider ranges of phenotypes in their mutational neighborhood tend to be more evolvable. Likewise, those with fewer phenotypic changes available in their local neighborhoods are more mutationally robust.
As such, forces that alter the distribution of phenotypes available by mutation can have a profound effect on subsequent evolutionary dynamics. We demonstrate that cyclically-changing environments can push populations toward more evolvable mutational landscapes where a wide range of alternate phenotypes are available, though purely deleterious mutations remain suppressed. We further show that populations in environments with drastic changes shift phenotypes more readily than those in environments with more benign changes. We trace this effect to repeated population bottlenecks in the harsh environments, which result in shorter coalescence times and keep populations in regions of the mutational landscape where the phenotypic shifts in question are more likely to occur.
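
As a toy illustration (mine, not from the paper), the idea of a “mutational neighborhood” can be made concrete in a few lines of Python: enumerate every genotype one point-mutation away and tally the phenotypes you find. The ALPHABET and phenotype() below are hypothetical stand-ins for a real genetic encoding.

from collections import Counter

ALPHABET = "abcd"  # hypothetical instruction alphabet

def one_step_mutants(genome):
    # Yield every genotype exactly one point-mutation away
    for i, old in enumerate(genome):
        for new in ALPHABET:
            if new != old:
                yield genome[:i] + new + genome[i+1:]

def mutational_neighborhood(genome, phenotype):
    # Tally the phenotypes reachable in a single mutation; a wide spread
    # suggests an evolvable neighborhood, a narrow one a robust one
    return Counter(phenotype(m) for m in one_step_mutants(genome))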

You can read the rest over at the MIT Press site, or download the PDF here.

Python I2C Cypress Capsense Library on BeagleBone Black

Over the weekend, I spent some time getting my old Cypress CapSense Express capacitive touch boards to talk to my new BeagleBone Black.

On the BBB, I2C is relatively trivial. Like the Arduino, it has internal pull-ups, so you don’t have to muck around with external resistors. My boards have a spot for pull-ups, but I never populated them, since I expected I’d rarely have just a single board running on a bus (and you only want one set of pull-ups per bus anyway). The boards also run on 3.3 V, so they were easy to pair with the BBB.

My language of choice is Python where possible, so I hunted around for libraries to do I2C from Python. SMBus to the rescue! Adafruit also has some nice BBB libraries for I2C, which I cribbed from as a starting example.
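
For reference, the raw SMBus calls underneath look roughly like this. The bus number and register address here are assumptions: the bus depends on which BBB I2C pins you wire up, and the register on the CapSense register map.

import smbus

bus = smbus.SMBus(1)        # e.g. /dev/i2c-1; use the bus your pins map to
DEVICE_ADDR = 0x5D          # the address my boards are configured for
TOUCH_STATUS_REG = 0x00     # hypothetical register; check the datasheet

status = bus.read_byte_data(DEVICE_ADDR, TOUCH_STATUS_REG)
print("0x%02X" % status)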

I’ve posted the result to GitHub, alongside my older Arduino/Chipkit Cypress Capsense library, and posted the Python module to PyPI, so you can fetch it with pip or easy_install.

To invoke:

import CypressCapsense_I2C

####### INITIAL SETUP - Only Do Once Per Device
# sensorInit = CypressCapsense_I2C.CypressCapsense_I2C(0x00, debug=True)
# sensorInit.setupDevice()
# sensorInit.changeDeviceAddress(0x5D) # or whatever address you want
########################################################################


## this device has already been set up to use 0x5D as its address
sensor = CypressCapsense_I2C.CypressCapsense_I2C(0x5D, debug=False)

# Poll forever, printing the current touch status in hex (Python 2 print syntax)
while True:
    print "0x%02X" % sensor.fetchTouchStatus()
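
If you want individual pads rather than raw hex, the returned value can be treated as a bitmask, assuming (as the hex print above suggests) one bit per sensor pad:

def touched_pads(status, num_pads=8):
    # Return the indices of the pads whose bits are set in the status byte
    return [pad for pad in range(num_pads) if status & (1 << pad)]

print(touched_pads(0x05))  # a status of 0x05 means pads 0 and 2 are touched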

Cutting 0.8mm PCB with Paper Guillotine

I just received my iTead PCBs in the mail (took about 3 weeks). I paid an extra two dollars to get half-thickness so that I could cut them more easily. My regular old paper guillotine (Xacto brand, if that matters) made short work of them.

Yes, I’m aware the video is sideways… again.

These PCBs will go in the Helix tubes to provide touch sensing capabilities.

Helix Custom PCBs

So, it turns out that they let just anyone design and print PCBs with a minimum of cost and absolutely no training whatsoever! Hooray amateurism?

The latest batches of PCBs just came in for the Volunt project (a small-scale portable version of Helix). These are the touch-controller breakout boards that fit inside the rods; each one talks to a series of touch sensors and relays the readings via I2C to the microcontroller in the base.

Cypress Capsense Breakouts

Sooooo Tiny

I designed these boards with Fritzing, and had them made by BatchPCB (green), and OSH Park (purple). I have another batch of boards coming for Helix and Volunt from iTead in China, but who knows when they’ll actually arrive.

If you look closely, you can see that the touch boards are populated almost entirely with surface mount components. I did those with this high-tech reflow rig.

My Glorious Chariot

Yes, that’s a super-cheap hot plate. I populate the boards with solder paste and components, set them on that little metal plate, turn the dial to the W in LOW, watch until everything reflows, and pull the board off the heat.

If you set the temperature too high, you get this fabulous thing happening.

Mmmmm. Tastes like burning.

Amazingly, this board still works!

There are much fancier DIY rigs with thermocouples, but I’m incredibly lazy and didn’t bother. This seems to work for my level of detail, and I have yet to render a board unworkable.

Finally, while I’m showing off, here’s my glorious and honorable work space. Honestly, I’m not un-proud of my space frugality. Everything fits, and nothing has caught fire… yet.

My Crib

Helix Color Fade Algorithm

(yes, I am aware that it is sideways.)

First, some background. Helix is an art installation that uses an evolutionary algorithm to produce smooth color gradients on a series of helical rods populated by sets of RGB pixels. Each organism in the algorithm is a sequence of RGB color values, and an individual’s fitness is a measure of how similar each color value is to its neighbors. The smoother the gradient along the organism, the higher the fitness; the higher the fitness, the more that organism gets to reproduce, with a chance of mutation each time. Rinse and repeat, and you get smoother and smoother color gradients as time goes on.
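
In code, the core idea looks roughly like this. This is a minimal sketch, not the installation’s actual implementation; I’m assuming genomes are lists of (r, g, b) tuples and using Euclidean distance in RGB space.

import math
import random

def fitness(genome):
    # Higher is better: penalize color distance between adjacent loci
    total = 0.0
    for a, b in zip(genome, genome[1:]):
        total -= math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return total

def mutate(genome, rate=0.05):
    # Copy the genome, replacing each locus with a random color at some rate
    return [tuple(random.randint(0, 255) for _ in range(3))
            if random.random() < rate else locus
            for locus in genome]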

Helix Rendering

Now, the challenge is figuring out how best to display the organisms, both so that their gradients can be appreciated and so that we can pack as many organisms into the installation as possible. Ideally, the installation would be able to display many separate populations simultaneously, letting us do fun things like demonstrate population migrations, founder effects, etc.

Since there aren’t enough pixels to show every organism’s full genome at once, that leaves us using the time dimension to display the gradient, cycling through the lights of each organism’s phenotype once per generation. So, we tried that. Suffice it to say, the result is less than pleasing, and not exactly intelligible. The problem is that people have a hard time getting a feel for how smooth a gradient is if they can’t see it all at once. The only light they see is the one shining right then; they have to try hard to remember what colors came before, they have to imagine the whole sequence, and oh, they’re walking away. So, what can we do? How can we display the whole gradient while keeping every organism represented?

So far, the answer seems to be to offset the cycle, such that each organism is displaying the next locus at any given time, as we go down the line of organisms. So, at timepoint 1, organism 1 is displaying locus 1, organism 2 is displaying locus 2, etc. T2, o1 displays l2, o2 displays l3, etc. This is a little bit of a perceptual hack, because you seem to see the whole gradient moving down the rod, but the truth is that you are seeing one locus from one organism at a time. It may *look* like the evolved gradient because your fitness (and location) neighbors are pretty likely to have a similar genome to you, and thus the locus they are displaying is pretty likely to be the same value that is in that locus for you.
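
The offset itself is just modular indexing: at timestep t, organism i shows locus (t + i) mod genome length. A sketch (zero-indexed, names mine):

def displayed_locus(t, organism_index, genome_length):
    # Which locus organism i shows at timestep t under the offset scheme
    return (t + organism_index) % genome_length

def frame(population, t):
    # One color per organism: the diagonal slice the viewer actually sees
    return [genome[displayed_locus(t, i, len(genome))]
            for i, genome in enumerate(population)]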

So, this seems to solve the problem, and give approximately the right evolutionary intuition, while allowing us to display as many populations as possible.

Ultimately, I’m not sure that we’ll go with this approach all the time, since it isn’t, in fact, exact, but at the very least, it gives a flavor of a population for when we want to do multi-population evolutionary demonstrations.

Right, so that was all background for the video above. It turns out that if you display colors in sequence without any fading, they are SERIOUSLY uncomfortable to look at, especially in the early stages of evolution. It’s all blinky, jagged, eye-searing, seizure-inducing horribleness. So, I added fades between the colors as they move down the line, to make the display more pleasant to look at while still accurately conveying the gradients in the population.
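
The fades are plain linear interpolation between the color an organism is showing now and the one it shows next (a sketch; steps is however many display updates fit between ticks):

def lerp_color(c1, c2, t):
    # Blend two (r, g, b) colors; t runs from 0.0 (all c1) to 1.0 (all c2)
    return tuple(int(a + (b - a) * t) for a, b in zip(c1, c2))

def fade(c1, c2, steps=20):
    # Yield the intermediate colors shown between two display ticks
    for s in range(steps + 1):
        yield lerp_color(c1, c2, s / float(steps))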