Where to Simulate Behavior

Introduction

In this essay I am going to describe how behavior does not have to be simulated on the surface; it can be simulated beneath the surface, through the interaction of lower-level components, to produce the desired top-level behavior. That is to say, instead of writing software that monolithically achieves the desired effect, you can write components that each exhibit some simpler behavior, then network them together so that they behave, as a whole, as intended. This is an introduction to a software design technique for simulating behavior. It likely has limited applicability to many software problems, but for some, particularly those based on real-world behavior, it could be incredibly helpful.

I first started talking about this topic in a blog post, but I have been cultivating the ideas since and decided to take a second stab at ironing them out and expanding upon them. Feel free to comment and pursue the ideas further if they catch your interest. This is meant as a brief introduction to the topic, laying the ideas out there for you to take further.

Where these ideas came from

These ideas were primarily inspired by two books I have been reading on the subject. On Intelligence, by Jeff Hawkins, provides a unique and extraordinary look at intelligence: how our brain works and how we might create a machine that behaves like it. Creation: Life and How to Make It, by Steve Grand, is an excellent book about artificial life, taking a holistic approach that incorporates much more than just the neurons in our brain. If this essay interests you at all, I highly recommend reading both.

What is behavior?

Before I begin, I want to define how I am using a couple of words. The first word is “behavior.” I am intending behavior to mean observable actions or information, whether observed directly by a human or measured via instrumentation. There may be more going on under the hood to drive the behavior, but I don’t think that matters. If we are trying to simulate the intelligent behavior of the human brain, why should it matter if or how our artificial brain thinks to itself? If for all externally observable and measurable purposes it behaves like a brain, what happens on the inside is irrelevant, so long as our only goal is to simulate that intelligent behavior. However, if you are trying to simulate every neuron and every synapse, then it does matter how it works on the inside. That could be the case if you are trying to understand the human brain, or to apply knowledge learned from the simulated brain back to the real one.

The point is that when only the top-level behavior is of concern, there is no need to simulate at any lower level, with a few important exceptions. The first exception is if there is something to be learned from the lower-level behavior. Another is if you are implementing lower-level components in separate hardware units to process in parallel (more on this later). And finally, the higher-level behavior may simply be too complex to simulate directly. The important point is that once you simulate at a particular level, the levels below it do not exist in the simulation, and you lose whatever benefits or insight they might have offered. It is these exceptions, where simulating at the top level is not practical or possible, that I am concerned with here.

What does it mean to simulate?

I intend to use the word “simulate” in two ways. Mostly I will use it to describe the process of writing software that is meant to act like the thing you are modeling. In this case the software is directly simulating the desired behavior. That may or may not be the top-level behavior; it is simply whatever behavior is being simulated in software.

The other way I will use the word “simulate” is to describe the end simulation, regardless of whether it is being directly driven by software or whether it is being driven by the interaction of lower-level components. It should be clear from the context of its use which meaning I intend.

How is the real world made?

The real world isn’t all made of stuff we can see with our naked eyes. The stuff we can see is just the beginning. If you look at yourself, all you can see is the stuff on top, but there is a lot more under the hood. Under the hood we are made of organs and fluids, which are made of cells, which are made of chemicals, which are made of atoms, which are made of elementary particles, which are finally made of vibrating strings. I skipped steps, but I think I got the point across. While everything is made up of vibrating strings, our universe isn’t just a random mass of vibrating strings. The vibrating strings form structures that we understand to a degree. And those structures form other structures that we also understand to a degree. Eventually structures of organs and fluids make up what we call you. If we wanted to create a virtual universe, all we would need to do is write a software module called VibratingString, snap instances of it together like Lego bricks, and there you go: you would have you.

Obviously there are countless issues that would make this impossible today. The two main issues I’m concerned about are (1) we, or at least I, don’t sufficiently understand the vibrating strings to be able to code them, and (2) assembling you from nothing but vibrating strings would be an incredibly improbable feat. These are the two main factors to weigh when trying to decide how and where to simulate behavior. You have to understand (1) how it works and (2) how to put it together.

Real-world example: the human brain

To me, the most interesting real-world example is the human brain. I believe it is the most complicated and advanced feat of engineering that exists today. We can all come up with countless powerful examples of what it can do, but none of us truly understands how it works, and few of us even have an idea of exactly what it does, yet we all have one and (well, most of us) use it every day. We mostly know what it is made of and have pretty good models of the lowest-level functional component, the neuron. Yet neurons are so small, packed so tightly together, and so numerous that we have little idea how they are put together. Building an artificial brain by wiring up billions of neurons the way our brain does just doesn’t seem practical. There must be more to the brain than just neurons.

Just like the vibrating strings I talked about earlier, our brain is not just a mass of neurons. As Jeff Hawkins describes in On Intelligence, our neurons form functional components called cortical columns, which Hawkins theorizes implement a generic learning and prediction algorithm that applies to and drives everything our brain does. Those cortical columns are in turn arranged in hierarchical structures that generalize information going up the hierarchy and localize information going down it, forming yet another, higher-level functional component of the brain.

So as we learn more about the brain we learn more about the structures within it, and as we learn more about those structures we can build artificial brains that are based not on neurons but on higher-level structures like cortical columns.
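To make that shape of idea concrete, here is a toy sketch in Python of a hierarchy of column-like units in which detailed signals are generalized on the way up and a high-level expectation is localized on the way down. This is not Hawkins’s memory-prediction algorithm; the Column class, the crude averaging used as “generalization,” and the example wiring are all inventions for this illustration.

```python
# A toy hierarchy of column-like units: generalize going up, localize going down.
# Purely illustrative; the structure and math are stand-ins, not a brain model.

class Column:
    def __init__(self, children=None):
        self.children = children or []   # lower-level columns feeding this one

    def upward(self, signals):
        """Combine detailed signals from below into one coarser value."""
        if not self.children:
            return signals.pop(0)        # leaf columns consume raw input directly
        child_values = [child.upward(signals) for child in self.children]
        return sum(child_values) / len(child_values)   # "generalize" by averaging

    def downward(self, expectation):
        """Spread a high-level expectation back down to every leaf."""
        if not self.children:
            return [expectation]         # a leaf's localized prediction
        predictions = []
        for child in self.children:
            predictions += child.downward(expectation)
        return predictions

# Four leaves feed two mid-level columns, which feed one column at the top.
leaves = [Column() for _ in range(4)]
mid = [Column(leaves[:2]), Column(leaves[2:])]
top = Column(mid)

summary = top.upward([0.2, 0.4, 0.9, 0.7])  # detailed input -> one general value
print(summary)                               # roughly 0.55
print(top.downward(summary))                 # that value pushed back down to each leaf
```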

Just as a note, when I say “brain” I specifically mean the neocortex. The neocortex is the thin sheet of brain matter that covers our inner brain and provides high-level thought, reasoning, and memory. It is the part of our brain that makes us so much more intelligent than any other creature. The rest of the brain is still important in making us who we are, but it handles simpler duties, such as regulating bodily functions like the heart and lungs, and probably some basic drives and needs.

Apply it to software

Now that we have an idea of how our world is put together, how can this help us design software? The important part is knowing that complexity in software can be achieved through more than just writing complex software. It can be achieved through relatively simple software that is put together in a complex way. When you typically write software, you may structure it into discrete components, but complexity is generally achieved by creating complex components, or many different components, not through simple components connected in a complex way. I am proposing that you can create complex software through simple components connected in a complex way, much like the neurons in your brain, and that this may be an easier task than creating complex components outright. A minimal sketch of the idea follows.
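Here is a minimal sketch of what I mean, assuming nothing beyond plain Python. Each component below does one trivial thing, and the overall behavior we care about (turning a noisy stream of readings into clean on/off events) only appears once they are wired together. The component names and the wiring are purely illustrative, not a prescribed framework.

```python
# Three trivially simple components; the interesting behavior lives in the wiring.

class Smoother:
    """Averages the last few samples; knows nothing about thresholds or events."""
    def __init__(self, window=3):
        self.window, self.history = window, []

    def step(self, sample):
        self.history = (self.history + [sample])[-self.window:]
        return sum(self.history) / len(self.history)

class Threshold:
    """Turns a number into True/False; knows nothing about noise or state."""
    def __init__(self, level):
        self.level = level

    def step(self, value):
        return value > self.level

class Latch:
    """Reports only changes of state; knows nothing about numbers at all."""
    def __init__(self):
        self.state = False

    def step(self, on):
        changed, self.state = (on != self.state), on
        return ("switched " + ("on" if on else "off")) if changed else None

# Wire the simple parts into a whole that none of them implements alone.
smoother, threshold, latch = Smoother(), Threshold(0.5), Latch()
for sample in [0.1, 0.2, 0.9, 0.8, 0.85, 0.2, 0.1, 0.05]:
    event = latch.step(threshold.step(smoother.step(sample)))
    if event:
        print(event)   # prints "switched on", then later "switched off"
```

Each piece is trivial to write and test on its own; the behavior we actually want belongs to the network of pieces, not to any one of them.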

Neural networks approach the task of simulating the brain, or at least part of what the brain does, in this way: create simulated neurons, connect them, and see what happens. Things get much more complicated than that, and the field has made great progress, but that is the basic idea. As I mentioned above, though, there are so many neurons in the brain that I don’t think it is practical to simulate the brain at this level, which means we should be looking up the hierarchy to something like cortical columns.
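For reference, the textbook simulated neuron reduces to something like the sketch below: weight the inputs, sum them, and fire if the sum clears a threshold. The weights and thresholds here are hand-picked for illustration rather than learned, and real neural-network libraries do far more than this; it is only the basic unit being connected.

```python
# A bare-bones simulated neuron and a tiny hand-wired network of them.

def neuron(inputs, weights, threshold=1.0):
    """Fire (return True) if the weighted sum of the inputs clears the threshold."""
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

def network(x):
    a = neuron(x, [0.6, 0.6])          # fires only when both inputs are fairly strong
    b = neuron(x, [1.2, 1.2])          # fires when either input is strong
    return neuron([a, b], [0.5, 0.6])  # fires only when both a and b fire

print(network([1.0, 1.0]))   # True
print(network([1.0, 0.0]))   # False
```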

Artificial intelligence (AI) research approaches intelligence from the opposite end, attacking the problem only at the top level. The argument is that neurons (or cortical columns) are the way nature does it, but who says nature has done it in the best and most efficient way? I think this approach has merit, but so does the neural-network approach, and the solution may actually lie somewhere in the middle. The spirit of the way nature does it seems like the right direction, but maybe the neuron (or the cortical column, or any other naturally occurring structure) isn’t the best component to use. I’m sure people are already doing this, but it is something I have recently been turned on to, and it is my inspiration for writing this essay.

Parallel Processing

Parallel processing, as I’ll define it, is the use of multiple hardware-based processing units to carry out multiple computing tasks simultaneously. In computers, this is done with multiple CPUs or CPUs with multiple cores. Our brain also processes in parallel, although instead of a handful of cores it has billions of them: each neuron is a piece of biological hardware that processes its incoming information alongside all the others, allowing the brain to carry out many tasks at once. Each neuron is pretty slow, taking about 5 milliseconds to fire, while a standard desktop CPU can execute on the order of 20 million instructions in that amount of time. But the fact that every neuron operates in parallel (in addition to how the brain stores data, which I won’t get into) means our bodies can do some amazing things in an amazingly short amount of time.

Combining lower-level simulation with parallel processing is an immensely powerful union. Running the end simulation on one CPU, or only a handful of processing cores, will be easier and cheaper, but at the cost of total processing time, which is critical in real-time simulations. Using multiple hardware units to process the simulation in parallel greatly multiplies our processing speed. What makes the lower-level approach special here is that it naturally produces discrete components that are easy to separate, so they lend themselves perfectly to parallel processing. The ideal situation is a dedicated hardware unit for each low-level component. Hardware constraints may force multiple low-level components onto one unit, but splitting the component processing among multiple hardware units is still beneficial to processing speed.
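As a rough sketch of that last point, here is one way per-component updates could be spread across CPU cores using Python’s standard multiprocessing pool. The component state, the stand-in update rule, and the step loop are assumptions made for the illustration; a real simulation would also have to route each component’s outputs back into its neighbors’ inputs between steps, which is omitted here.

```python
# Distribute independent low-level component updates across CPU cores.
# Illustrative only: the "behavior" of each component is a trivial stand-in.

from multiprocessing import Pool

def step_component(state):
    """One component's update for one simulation step."""
    name, value = state
    return name, value * 0.9 + 0.1      # placeholder for the component's real behavior

if __name__ == "__main__":
    # One entry per low-level component; with enough hardware, each could
    # even live on its own dedicated processing unit.
    components = [(f"unit-{i}", float(i)) for i in range(8)]

    with Pool() as pool:                 # one worker process per available core
        for _ in range(3):               # run a few simulation steps
            components = pool.map(step_component, components)

    print(components)
```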

Conclusion

I believe that with enough computing power, understanding, research, and time spent developing software, we can simulate anything, including human-level intelligence and beyond, and, given enough time, a universe. Maybe our universe is just a simulation on a truly super computer. But there can be more to creating complex software than just writing software, if you take hints from nature. Real-world behavior is driven by a hierarchy of components, each of which exhibits behavior that, when sufficiently understood, can someday be sufficiently simulated in software.

When attempting to solve a complex problem, you don’t have to write one large traditional software program. Don’t lock yourself into the obvious paths. Look for and try to understand the hierarchy of components, and find the level that makes sense for the problem at hand. Some lower-level components might be unnecessary to simulate, while some higher-level components might be too complex to simulate directly. You also don’t have to lock yourself into naturally occurring components like neurons or cortical columns. You can make your components however you want and wire them up however you want, but I think nature has some good ideas that can help us along the way. The important part is that there is another level of options open to software developers creating complex software. Know that the options exist and explore them.

