What can we possibly learn from butterflies and snails? The science of Biomimicry posits that we can observe naturally-occurring structures and processes formed through evolution, distill the mechanisms at work, and apply the lessons toward solving modern-day problems. There’s an excellent podcast here with Janine Benyus, a woman who has dedicated her life to extracting insights from things like the cell structures of butterfly wings and the calcification mechanism in snail shells, and applying them to the development of stealth aircraft and efficient plumbing systems. It stands to reason that with four billion years of natural selection, evolution already has the answers to some tricky problems.
Two fields in the space of Artificial Intelligence take different approaches to problem-solving but share this same premise: that we can emulate nature with software to solve complex problems. One attempts to emulate evolution (genetic algorithms), while the other, an emerging field, attempts to emulate our brain function (Hierarchical Temporal Memory systems).
I spent the entire day yesterday doing nothing productive for our company but instead reading and listening up on these two fields of study and I’m blown away by the possibilities of each. I can’t possibly do justice to the explanation of either but here’s what I found interesting:
Genetic Algorithms – synonymous with the term evolutionary machine learning, this field has been around a while. This amazing podcast with David Fogel explains his work with genetic algorithms in building a checkers program that learned to play at a grandmaster level with zero instruction on what the goal of the game was or how to win. It’s one thing to build a program and give it explicit instructions for how to evaluate moves and choose the best one; it’s an entirely different prospect to leave it untrained, turn it loose, and tell it only the number of points it earned after x number of games. They essentially conducted Darwinian evolution with different virtual checkers players, using a program that ran for six months on a Pentium II Windows NT machine. It learned to play on its own and ended up beating a grandmaster.
Granted, the game of checkers is well-defined, with clear rules, and played in a bounded space (i.e. there are no random external forces to account for, like pieces getting knocked off the board by accident). But the mind-blowing implication of this story is that we no longer have to explicitly program a computer with the methods we know for solving problems; we can simply give it the problem and tell it when it’s doing well.
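To make that loop of "score it, breed the winners, repeat" concrete, here’s a minimal genetic-algorithm sketch in Python. It is not Fogel’s checkers program — the bit-string target and all the parameters are made up for illustration — but it shows the same principle: the population never sees the goal, only a single fitness number per individual.

```python
import random

# Illustrative only: the "problem" is matching a hidden bit string.
# The individuals never see TARGET directly -- only their score.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(individual):
    # The only feedback an individual ever receives: one number.
    return sum(1 for a, b in zip(individual, TARGET) if a == b)

def evolve(pop_size=50, generations=200, mutation_rate=0.02, seed=0):
    rng = random.Random(seed)
    length = len(TARGET)
    population = [[rng.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the top-scoring half survives.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction: crossover two parents, then mutate a few bits.
        children = []
        while len(survivors) + len(children) < pop_size:
            mom, dad = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)
            child = mom[:cut] + dad[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "out of", len(TARGET))
```

Swap the toy fitness function for "points earned after x games of checkers" and the skeleton is the same; that substitution is the whole trick.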
Hierarchical Temporal Memory Systems (HTMs) – this emerging field was started by Jeff Hawkins, the guy who created the Palm Pilot and wrote the book On Intelligence. It takes the approach of emulating the structure of the human brain (specifically the neocortex), with the idea that “our brains do things like facial recognition extremely well while the most powerful traditional computers cannot, so there must be something to learn.” It breaks learning down to two ideas: that learning is based on patterns perceived in close temporal contiguity (back to back), and that the storage mechanism consists of hierarchical nodes that store different aspects of complex information at varying levels. For instance, a computer running the Numenta software that’s hooked up to a camera and shown 100 instances of various dogs will begin to learn the characteristics that make up a dog (fur, four legs, tail, typical shape, etc). Information about the fur and the angles of the body is stored at the lowest nodes, while concepts of body parts and the holistic notion of what makes a dog a dog sit at the higher nodes. I haven’t gotten my head entirely around this concept, but there’s an awesome six-page article here by Hawkins that explains this stuff, and if you’re brave enough, there’s a longer white paper on his company site here that digs even deeper.
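The hierarchy part of that idea can be sketched in a few lines of Python. This is emphatically not Numenta’s algorithm (it ignores the temporal side entirely, and the patch names are invented), but it shows the flavor: low-level nodes each memorize local patterns, and a higher node learns which combinations of those local patterns go together as a single concept.

```python
# Toy two-level hierarchy, loosely inspired by the HTM node idea.
# Not Numenta's actual algorithm -- purely an illustrative sketch.

class Node:
    """Memorizes each distinct input pattern and assigns it a stable id."""
    def __init__(self):
        self.patterns = {}

    def recognize(self, pattern):
        key = tuple(pattern)
        if key not in self.patterns:
            self.patterns[key] = len(self.patterns)
        return self.patterns[key]

# Level 1: one node per "patch" of the input (say, texture and outline).
level1 = [Node(), Node()]
# Level 2: one node that sees only the level-1 outputs, never raw input.
level2 = Node()

def perceive(patches):
    # Each low node names its local patch; the top node names the combination.
    local_ids = [node.recognize(p) for node, p in zip(level1, patches)]
    return level2.recognize(local_ids)

# Two "dogs" with the same texture and outline collapse to one concept id;
# a "cat" differing in one patch gets a different id.
dog_a = perceive([("furry",), ("four_legs", "tail")])
dog_b = perceive([("furry",), ("four_legs", "tail")])
cat = perceive([("furry",), ("four_legs", "whiskers")])
print(dog_a == dog_b, dog_a == cat)
```

The point of the layering is that the top node only ever deals in the small vocabulary the lower nodes produce, which is the intuition behind "fur at the low nodes, dog-ness at the high nodes."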
They have an interesting business proposition in that they have released the HTM software and APIs free for people to use as the engine powering apps like facial recognition and intelligent network routing programs. They’re basically selling the raw “neocortex computing fabric” that anyone can train for any application. Combine this with the idea of utility computing and running your app on a virtual infrastructure like Amazon EC2 and S3, and you have the ability as a high school kid to build your own supercomputer with any world-changing application you can dream up.
In my high school physics class we built solar cars out of balsa wood and competed to see who could build the fastest one. It was fun, but in reality the skills we learned from those experiments had limited applicability. The forward-thinking high school physics teachers of today should be ditching the balsa wood, training their students on the concepts of genetic algorithms and HTMs, and using them to conduct the equivalent of the solar car challenge, only with real problems. These concepts, if grokked by youngsters now, can be applied to solving hairy problems that will undoubtedly confront us in the next generation. Pollution, environmental change, contagious disease epidemics, harmful drug interactions, security threats, energy and vehicular traffic routing, diminishing energy resources: for all the daunting problems that could wipe out civilization, there are extremely promising problem-solving tactics emerging. Whether for-profit or non-profit, the important companies of tomorrow will be the ones that learn to capitalize on these technologies.

Let’s hope that the high school physics teachers out there are listening and realize how important they are. My mother is three weeks away from finishing a 40-year high school teaching career. For the pay and the bureaucracy that public education teachers put up with, I have to imagine the ones that stick it out are the ones that realize the value and ripple effects of what they do. And let’s hope that our Federal government too will recognize the importance of high schoolers becoming excited about this stuff now and start to prioritize expenditures accordingly.
I’ve read On Intelligence, but that was a hard book for me to read. I often found myself re-reading pages at a time to really grasp what was going on.
It’s still sitting in my active reading material, as I feel I need to read it again…
[…] I couldn’t agree more. I recently wrote a short piece called “Computers emulating nature: why high schoolers should be excited.” It proposes that the fundamental knowledge of tomorrow will be in understanding the mechanics of genetic algorithms and Hierarchical Temporal Memory Systems and being able to bridge the practical/theoretical gap like Lararidis did in order to make useful applications with them. The high school computer science teacher who has his/her students doing science projects with this stuff now will be priming those kids to develop the next world-advancing technology. […]