Friday, December 07, 2007
Reducing AGI complexity: copy only high level brain design
In my previous post, Complexity and incremental AGI design, I claimed that complexity has a very serious impact on AGI development.
If we want to improve our chances of a successful AGI implementation, we need to cut complexity as much as possible.
In this post I want to touch on the topic of copying the human brain's design while developing AGI.
The human brain's structure is so complex that it is almost impossible to describe in detail how exactly the brain works.
Richard Loosemore explains why this is the case:
Imagine that we got a bunch of computers and connected them with a network that allowed each one to talk to (say) the ten nearest machines.
Imagine that each one is running a very simple program: it keeps a handful of local parameters (U, V, W, X, Y) and it updates the values of its own parameters according to what the neighboring machines are doing with their parameters.
How does it do the updating? Well, imagine some really messy and bizarre algorithm that involves looking at the neighbors' values, then using them to cross-reference each other, and introducing delays and gradients and stuff.
On the face of it, you might think that the result will be that the U V W X Y values just show a random sequence of fluctuations.
Well, we know two things about such a system.
1) Experience tells us that even though some systems like that are just random mush, there are some (a noticeably large number in fact) that have overall behavior that shows 'regularities'. For example, much to our surprise we might see waves in the U values. And every time two waves hit each other, a vortex is created for exactly 20 minutes, then it stops. I am making this up, but that is the kind of thing that could happen.
2) The algorithm is so messy that we cannot do any math to analyze and predict the behavior of the system. All we can do is say that we have absolutely no techniques that will allow us to make mathematical progress on the problem today, and we do not know if at ANY time in future history there will be a mathematics that will cope with this system.
What this means is that the waves and vortices we observed cannot be "explained" in the normal way. We see them happening, but we do not know why they do. The bizarre algorithm is the "low level mechanism" and the waves and vortices are the "high level behavior", and when I say there is a "Global-Local Disconnect" in this system, all I mean is that we are completely stuck when it comes to explaining the high level in terms of the low level.
Believe me, it is childishly easy to write down equations/algorithms for a system like this that are so profoundly intractable that no mathematician would even think of touching them. You have to trust me on this. Call your local Math department at Harvard or somewhere, and check with them if you like.
As soon as the equations involve funky little dependencies such as:
"Pick two neighbors at random, then pick two parameters at random from each of these, and for the next day try to make one of my parameters (chosen at random, again) follow the average of those two as they were exactly 20 minutes ago, EXCEPT when neighbors 5 and 7 both show the same value of the V parameter, in which case drop this algorithm for the rest of the day and instead follow the substitute algorithm B...."
Now, this set of computers would be a wicked example of a complex system, even while the biggest supercomputer in the world, following a nice, well-behaved algorithm, would not be complex at all.
The summary of this is as follows: there are some systems in which the interactions of the components are such that we must effectively declare that NO THEORY exists that would enable us to predict certain global regularities observed in these systems.
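To make the thought experiment concrete, here is a minimal Python sketch of such a system. This is my own illustration, not Loosemore's code: the network size, update rule, and exception clause are invented, and the ten-neighbor topology and time delays from the quote are simplified to a ring of two neighbors with no delays. The point stands regardless: even a reader who understands every line of messy_update cannot predict what regularities the global statistic at the bottom will show.

import random

# A ring of nodes, each holding parameters U..Y and updating them with a
# deliberately messy, neighbor-dependent rule. All constants are arbitrary.
N_NODES = 100
PARAMS = "UVWXY"

# Every node starts with random parameter values in [0, 1).
nodes = [{p: random.random() for p in PARAMS} for _ in range(N_NODES)]

def messy_update(i):
    """Update node i by cross-referencing its two ring neighbors,
    with an arbitrary exception clause ("substitute algorithm B")."""
    left = nodes[(i - 1) % N_NODES]
    right = nodes[(i + 1) % N_NODES]
    a, b = random.choice(PARAMS), random.choice(PARAMS)
    target = (left[a] + right[b]) / 2.0
    if abs(left["V"] - right["V"]) < 0.01:
        # Exception clause: the neighbors roughly agree on V, so a
        # substitute rule kicks in and inverts the target instead.
        nodes[i][random.choice(PARAMS)] = 1.0 - target
    else:
        nodes[i][random.choice(PARAMS)] = target

for step in range(501):
    for i in range(N_NODES):
        messy_update(i)
    if step % 100 == 0:
        # A crude global statistic: regularities (the "waves" and
        # "vortices") would show up at this level, yet nothing in
        # messy_update lets us derive them in advance.
        mean_u = sum(n["U"] for n in nodes) / N_NODES
        print(f"step {step}: mean U = {mean_u:.3f}")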
So, if the low-level brain design is incredibly complex, how do we copy it?
The answer is: "we don't copy the low-level brain design".
The low-level design is not critical for AGI. Instead, we observe high-level brain patterns and try to implement them on top of our own, more understandable, low-level design.
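As a toy example of what that could mean (a sketch of my own, not a description of any particular AGI project): if the high-level pattern we observe in the brain is, say, associative recall, we can implement that behavior directly on a low-level substrate we fully understand, instead of reproducing the neural machinery that happens to produce it.

class AssociativeMemory:
    """A high-level brain pattern (cue -> memories association) implemented
    on a transparent low-level design: a plain Python dict, not neurons."""

    def __init__(self):
        self._store = {}

    def associate(self, cue, memory):
        # Link a memory to a cue; repeated cues accumulate memories.
        self._store.setdefault(cue, []).append(memory)

    def recall(self, cue):
        # Return everything associated with the cue (empty list if nothing).
        return self._store.get(cue, [])

mem = AssociativeMemory()
mem.associate("smell of coffee", "morning at grandma's house")
mem.associate("smell of coffee", "first day at the office")
print(mem.recall("smell of coffee"))

Nothing here is mysterious: we can analyze, test, and debug every line, which is exactly the property we lose when we try to copy the low-level design.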
Labels: AGI, AI, Artificial General Intelligence, Artificial Intelligence, Strong AI
Comments:
Hi Dennis,
Great thinking! I have an additional approach to reduce complexity, and that is to find the minimal roadmap, e.g. the least coding, whose milestones lead to the creation of AI. One may differ with my approach, but I think that low complexity is a guideline.
-Steve
http://texai.org/blog/roadmap/