Ati says
I had an idea for a way to program something resembling strong AI. Essentially, the AI generates random rules, each of which takes a cue from the environment and proposes a behavior. When a circumstance arises, the system picks a rule from its stock that suggests a behavior for that kind of circumstance and applies it. If the response to using the rule is positive, the rule increases its odds of being picked next time and creates two mutant versions of itself. If the response is negative, its odds of being picked again decrease. In this way, the AI gradually learns how to deal with its environment. Now, this is just an idea - how does it look to those of you with experience in fields of this kind?
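To make that concrete, here is a minimal Python sketch of the loop, assuming rules are simple cue-to-behavior pairs with a weight; the class names, the weight multipliers, and the mutation scheme are only illustrative placeholders, not a fixed design:

```python
import random

class Rule:
    """A candidate rule: responds to one environmental cue with one behavior."""
    def __init__(self, cue, behavior, weight=1.0):
        self.cue = cue
        self.behavior = behavior
        self.weight = weight  # relative odds of being picked when the cue matches

    def mutate(self, cues, behaviors):
        """Copy the rule but change either its cue or its behavior."""
        if random.random() < 0.5:
            return Rule(self.cue, random.choice(behaviors), self.weight)
        return Rule(random.choice(cues), self.behavior, self.weight)

class RuleAgent:
    def __init__(self, cues, behaviors):
        self.cues = cues
        self.behaviors = behaviors
        # start with a random rule for every cue so the agent can always act
        self.rules = [Rule(c, random.choice(behaviors)) for c in cues]
        self.last_rule = None

    def act(self, cue):
        """Pick a matching rule with probability proportional to its weight."""
        matching = [r for r in self.rules if r.cue == cue]
        weights = [r.weight for r in matching]
        self.last_rule = random.choices(matching, weights=weights)[0]
        return self.last_rule.behavior

    def feedback(self, reward):
        """Positive feedback boosts the last rule and spawns two mutants of it;
        negative feedback shrinks its odds of being picked again."""
        if reward > 0:
            self.last_rule.weight *= 1.5
            self.rules.append(self.last_rule.mutate(self.cues, self.behaviors))
            self.rules.append(self.last_rule.mutate(self.cues, self.behaviors))
        else:
            self.last_rule.weight *= 0.5
```

Training would then just be a loop of act(cue) followed by feedback(reward) against whatever environment the AI is watching.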
Cappy says
Several problems. The AI would not be able to adapt to anything that was not pre-programmed ahead of time. I understand that the rule system mutates, but I don't see the mutation helping; it would probably hurt more than help. If you design the mutations so they can only help, then the mutations are limited to what you've already designed (which brings us back to my first point). Everything the AI needs to do would be pre-programmed in. Basically, this would be an expert system with a "mutating" rule-based component, but it's not close to strong AI.

http://en.wikipedia.org/wiki/Expert_system
http://ai-depot.com/Tutorial/RuleBased.html
- 25 January, 2007
Zombie says
Correct answer, Cappy.
- 25 January, 2007
Ati says
Well, the idea is that you don't program in much beforehand - it generates its initial batch of rules by observing something else (e.g. if it were intended to play chess, you would have it generate its rules based on the actions of the pieces on the board during a game), and then figures out which ones work well. Also, you're right: most of the 'mutant' rules would be worse than the original and would be downgraded when they got tested, but if the odd one was more successful than the original, it would help the system refine its rules. A system like this would, of course, only be useful in a limited range of circumstances.
- 25 January, 2007
Sepoy says
I think you've written the idea off too easily, Cappy. The mutation is going to be extremely difficult, and likely ridiculously inefficient, but it could give you more than you explicitly programmed in the first place. To actually pull it off you'd need to make the AI capable of programming, or at least editing, code. The more robust you can make its programming capability, the more possibilities you have for entirely new behavior. Assuming you actually write a program that can alter itself 'randomly' without coring, infinite looping, or just being annoying to the other processes, all you'd have to worry about is killing all the millions of lousy mutants you're going to get. I don't know that this would qualify as "strong AI", but I think it would be interesting at any rate.
- 26 January, 2007
Cappy says
Actually, I know of a project very close to what you are suggesting, Ati: http://www.comp.leeds.ac.uk/vision/cogvis/ It's a neat project that uses visual information to build a rule-based system. For instance, it can learn to play a card game just by watching and observing how each player wins and loses. The biggest thing I worry about with machine learning is that it might actually learn who is winning based on hand position or some other factor, and never pay attention to the actual cards being put down.

Sepoy: If an AI were able to modify its programming to make itself smarter, it would be a Seed AI. Seed AI is a form of strong AI (AI that can reason and think). Here are some links: http://en.wikipedia.org/wiki/Seed_AI http://en.wikipedia.org/wiki/Strong_AI You may just be saying that the AI should be able to change its rule system (its database or whatever), like what CogVis does. That wouldn't be Seed AI.

Trying to use mutations to guide the progress of an AI is very tricky in the first place. When you test the mutations to determine which one is the most useful, the mutations will eventually conform only to the TEST, and probably not to the outcome you want. It's hard to explain, so I'll give a basic example: you want an AI to mutate until it learns to add 2 + 2. Eventually the AI will mutate to output 4. It won't actually be adding 2 + 2, just "passing the test". Obviously you can make the test more complicated, but the result is always the same - it is only "passing the test". It's a bit like in The Hitchhiker's Guide to the Galaxy, where they had to build a machine to work out the ultimate question, because without it the ultimate answer meant nothing.
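A toy illustration of that trap (entirely made up, just to make the point concrete): a mutation loop whose only fitness test is the 2 + 2 case converges on the constant 4 and never represents addition at all.

```python
import random

def fitness(candidate):
    # the ONLY test the mutants ever face
    return -abs(candidate - (2 + 2))

# candidates are just integer constants - nothing in them can "add"
population = [random.randint(-100, 100) for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # survivors plus small random mutations of them refill the population
    population = survivors + [s + random.randint(-3, 3) for s in survivors]

print(population[0])  # almost certainly 4: the test was memorised, not solved
```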
- 26 January, 2007
Nadeem says
"Assuming you actually write a program that can alter itself 'randomly' without coring, infinite looping, or just being annoying to the other processes, all you'd have to worry about is killing all the millions of lousy mutants you're going to get."

I'm afraid there's no general way to check whether a program is going into an infinite loop - it's an uncomputable problem. Actually, it's the canonical example of uncomputability: it's called the Halting Problem. So the best you can do is set an arbitrary cutoff and kill every program that hasn't terminated by then. And even then you run the risk of killing off the one program that was on the right track and would have terminated a second later anyway.

Ati's idea seems to amount to an event-driven genetic algorithm: the successful rules survive and reproduce, with mutations to maintain diversity. It doesn't even use crossover, and given the crazy thing we're trying to make, I'm not sure it would be a good idea to stick to mutation alone. In any event, crossover can be incorporated into the system, though I'm not certain how you might slice up and recombine two distinct rules. It will depend hugely on the formalism you're using to represent the rules.
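Concretely, the cutoff approach might look something like the sketch below; the function names and the one-second timeout are arbitrary choices, not anything standard.

```python
import multiprocessing

def _worker(queue, candidate, arg):
    # run the candidate and report its result back through the queue
    queue.put(candidate(arg))

def run_with_cutoff(candidate, arg, timeout=1.0):
    """Run a candidate program in its own process and kill it after `timeout`
    seconds. The arbitrary cutoff is the practical workaround for the Halting
    Problem: we cannot decide in advance whether the candidate loops forever."""
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_worker, args=(queue, candidate, arg))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        # might be an infinite loop - or a slow candidate that was almost done
        proc.terminate()
        proc.join()
        return None                      # treated as a failed mutant
    return queue.get() if not queue.empty() else None
```

The hard part is exactly the point above: any candidate still running at the cutoff gets killed, whether it was looping forever or about to finish.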
- 26 January, 2007
Hrishi says
If you are planning to implement crossover, perhaps the halting programs (the ones stuck in an apparent infinite loop) should be used for crossover before being terminated. If we simply terminate all the programs that cross the cutoff, then the properties of those programs are lost. Mutation after the crossover might actually help.
- 26 January, 2007
Ati says
Thanks for the ideas and criticism, guys. Actually, crossover would be a good idea, although of course it would be difficult to combine two rules unless you used some kind of symbolic logic to represent them. One thing that I think a few of you misunderstood about my original post is that it isn't the code of the AI itself that mutates, but the rules that the code uses. The rules also use a very simple form that can be easily mutated: a rule takes a cue from the environment and proposes a behavior, so when it mutates it might try picking a different cue or responding with a different behavior.
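With that representation, mutation and crossover become almost mechanical. A rough sketch, using made-up cue and behavior names borrowed from the chess example earlier:

```python
import random

# illustrative pools only; a real system would learn these from observation
CUES = ["own_king_in_check", "piece_threatened", "opponent_moved_pawn"]
BEHAVIORS = ["move_king", "capture_attacker", "advance_pawn", "do_nothing"]

def mutate(rule):
    """Swap out either the cue half or the behavior half of a (cue, behavior) rule."""
    cue, behavior = rule
    if random.random() < 0.5:
        return (cue, random.choice(BEHAVIORS))
    return (random.choice(CUES), behavior)

def crossover(rule_a, rule_b):
    """Recombine two rules by exchanging their halves - only this easy because
    the representation is a simple symbolic pair."""
    return (rule_a[0], rule_b[1]), (rule_b[0], rule_a[1])
```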
- 26 January, 2007
Hrishi says
Hey... I have an idea. Why don't we create a behavior library of sorts, containing all possible personalities or behaviors for an AI? People could use it for testing their artificial environments. How about that?
- 26 January, 2007
Nadeem says
How do you plan to enumerate all possible behaviors?
- 26 January, 2007
Ati says
@ Hrishi: Well, that would be 'top-down' AI (where the AI is created with pre-programmed responses to every possible set of circumstances), which is almost impossible to make unless you have a large number of bored people with a couple of lifetimes to spare writing down common sense... And it would still freeze up as soon as it reached something you had forgotten to program ahead of time.

Back to the original topic, though: one way you could pare down the worthless mutations would be to keep a record of rules that didn't work out, and automatically delete new mutations that are similar to rules that have failed in the past (see the sketch below).

Also, on the 'test' problem: the way you would train the AI in your example would be to give it a set of randomly generated problems, so it wouldn't be able to optimize itself to the test, because the test would be inconsistent. Likewise, in this case the test would be reality, which has reasonably consistent solutions and which provides problems in a more-or-less random pattern.
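A rough sketch of that failure record, again assuming the simple (cue, behavior) rule representation; the exact-match check here is only a stand-in for whatever similarity measure the real rule formalism would need:

```python
failed_rules = set()   # (cue, behavior) pairs that were tried and penalised

def record_failure(rule):
    """Remember a rule whose feedback was negative."""
    failed_rules.add(rule)

def similar_to_a_failure(rule):
    """Crude stand-in for a similarity test: here, 'similar' just means an
    identical cue/behavior pair has already failed. A real system would want
    a proper distance measure over rules."""
    return rule in failed_rules

def accept_mutation(candidate):
    """Discard freshly mutated rules that resemble past failures."""
    return not similar_to_a_failure(candidate)
```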
- 26 January, 2007
Nadeem says
"Well, that would be 'top-down' AI (where the AI is created with pre-programmed responses to every possible set of circumstances), which is almost impossible to make unless you have a large number of bored people with a couple of lifetimes to spare writing down common sense..."

That's precisely what the Cyc project is about.
- 26 January, 2007
Ati says
Oh yeah, I've heard of that - I still don't see the point of it personally (it seems to me that a system like that is extremely inefficient and limited), but the database could be useful for a more complicated AI at a later date. Also, the card-playing program looks very interesting, Cappy, although, as you said, the problem with machine learning is that it tends to pick up on irrelevant clues.
- 26 January, 2007
Nadeem says
Cyc is fundamentally a good idea, simply because human intelligence is tied up with common-sense knowledge so deeply that we cannot have one without the other. However, the execution is somewhat flawed. The number of facts they will need is simply staggering, and the estimate keeps rising every year. It's too darn slow to put those facts in by hand, and there are great difficulties with automatic knowledge acquisition. Cyc might simply have bitten off more than it can chew. WordNet is far less ambitious, but far more research has been done with it than with Cyc.
- 26 January, 2007
Ati says
I agree that it's a good idea, but it seems to me that the time spent writing every piece of information known to humankind in machine-readable form would be better spent perfecting machine learning and letting the computer pick it up for itself.
- 26 January, 2007
Nadeem says
That's one of the reasons they need to do something like WordNet: the machine can't pick things up for itself unless it already has a great deal of prior knowledge. It's the ultimate bootstrapping problem.
- 26 January, 2007
Ati says
Well, we know that a computer can pick up a full world-view starting from nearly nothing (every human child does it in roughly the first ten years of life). The problem is that it takes a long time even for a human, and the brain is an order of magnitude more powerful than most silicon computers today.

On the other hand, there is some hope for researchers in the exponential rate of increase in silicon processing power, and in the fact that the human brain is not the optimal processor arrangement (evolution gave us something that works 'well enough', but being more efficient didn't help us much, so our brains aren't running at peak efficiency), whereas a smart researcher could create something considerably more efficient than a human brain, given the knowledge and time.

The most likely way it's going to work is something like 2001: A Space Odyssey - a supercomputer learning its way from child to adult over the course of several years, and then being used for a highly specialized task.
- 26 January, 2007
Nadeem says
"Well, we know that a computer can pick up a full world-view starting from nearly nothing (every human child does it in roughly the first ten years of life)."

That's the issue - the human brain doesn't seem to start from nearly nothing. The tabula rasa idea is sadly mistaken; the brain starts with a staggering amount of initial complexity. Language, for instance - a great deal of it is innate, if Steven Pinker and others are to be believed, and they make a pretty convincing case.

There's also the issue of how much of our intelligence is really a result of being embodied. Intelligence might very well be the result of complex biological systems interacting with the physical world. Software might not be all there is to it.
- 26 January, 2007
Ati says
True, but it could be argued that an innate understanding of language is simply part of the way the brain's basic processing is set up, much as understanding certain programming languages is an innate part of standard computers.

Also, while the brain does have a great deal of complexity, a good deal of it is not described genetically but seems to be randomly generated (as I recall, the DNA that explicitly describes overall brain structure would fit into a file about the size of MS Word - the program itself, not its documents).

As for how much of our intelligence is embodied, it seems to me that intelligence is probably the product of a learning system interacting with a complex environment - and that complex environment does not necessarily have to be the physical world.
- 26 January, 2007
Nadeem says
"Also, while the brain does have a great deal of complexity, a good deal of it is not described genetically but seems to be randomly generated (as I recall, the DNA that explicitly describes overall brain structure would fit into a file about the size of MS Word - the program itself, not its documents)."

The amount of information there means little - the relevant DNA might be doing nothing more than triggering commands for building complex structures, not describing the structures themselves. In other words, the information content isn't just in the DNA, but also in the process that uses the DNA to build a brain. But I'll agree that there is probably a significant random component to it.

While the complex environment need not be the physical world, we hardly have a provably better one for research purposes. It's the only one we know of that actually seems to have produced intelligence.
- 26 January, 2007
Nadeem says
"True, but it could be argued that an innate understanding of language is simply part of the way the brain's basic processing is set up, much as understanding certain programming languages is an innate part of standard computers."

Yes, and that's why we need to figure out just which innate functions we need as a prerequisite for intelligence.
- 26 January, 2007
Ati says
True. One way to do this might be to give young human infants simple tests and see how they respond to simple stimuli, to get a better idea of which qualities must be innate and which are learned.

As for the physical world being the best environment, I agree that it's best for research purposes, but I think other environments would be better if the AI is being developed for a specific purpose (if the AI were designed to, say, go through a database and catalogue images, it would probably be best brought up in a world consisting more of data and less of physical objects).

All this makes me wonder how far off strong AI is, and how we (as a species) will react to it. Legal questions aside, a sentient AI is going to face a huge amount of discrimination and inconvenience (to get some idea of this, try asking your average high-school kid whether they would date a computer program). It makes me wonder.
- 26 January, 2007
Nadeem says
Would computer programs want to date high school kids anyway? I think there's a good chance that we'll see mentally augmented humans before strong AI turns up. Hell, maybe it's a problem we just can't solve at our current level of intelligence.
- 26 January, 2007
Ati says
To the first point (while amusing): it seems to me that if the AI were structured after a human mind and developed at an analogous rate, the problem of romantic interaction would eventually turn up. To the second: you may be right, but the human brain really does not like having pieces of silicon and metal put into it, and by one way of looking at it, it might be better to build from scratch rather than try to jury-rig an existing system.
- 26 January, 2007
Nadeem says
The problem of setting up some kind of interface with the human brain seems somewhat easier than actually creating a brain from scratch. Besides, they might come up with something non-invasive.
- 26 January, 2007
Ati says
Well, look at it this way: you're an inventor back when the idea of a Palm Pilot was brand new, and you've just had the brilliant idea of building a smartphone. Which do you do? Do you design a new unit using technology borrowed from existing PDAs - or do you go out, buy a Palm Pilot, open the case, reverse-engineer the circuitry, hack the software, tape an antenna to the side, and then hope the thing works? This may not be a valid comparison, but that's how it seems to me.

As for non-invasive interfaces, the main problem with them is that the electrical pulses from the neurons tend to be scrambled as they pass through the skull, making it difficult to read from or send data to individual neurons.
- 26 January, 2007
Hrishi says
I agree with Nadeem. How about using the brain as a black box? Who cares whether the signals get scrambled? We might get something like RoboCop as a result.
- 26 January, 2007
Nadeem says
"How about using the brain as a black box? Who cares whether the signals get scrambled?"

You sound like a willing volunteer.
- 26 January, 2007
Nadeem says
"Do you design a new unit using technology borrowed from existing PDAs - or do you go out, buy a Palm Pilot, open the case, reverse-engineer the circuitry, hack the software, tape an antenna to the side, and then hope the thing works? This may not be a valid comparison, but that's how it seems to me."

Ah, but in our case we can probably figure out how to mess with the Palm Pilot. As for the smartphone, we can't just go ahead and build it, since we have only the vaguest idea of how the Palm Pilot itself works.
- 26 January, 2007
Ati says
Well, the problem with 'black-boxing it' is that the brain is hugely complicated, and a black box would be a huge chunk of inefficient code. The other problem is that this technique would probably produce a 'zombie' (something that claims to be sentient but isn't), in which case the effort is wasted, as you have merely produced a big chunk of code that acts like you but doesn't actually experience emotions, etc. As for the continuing Palm Pilot analogy, you might be right - it all depends on how much progress we make on figuring out how the Palm Pilot works over the next few years.
- 27 January, 2007
paul says
I've just been looking at that http://ai-depot.com site Cappy pointed out - pretty good information throughout, very interesting.
- 29 January, 2007
Hrishi says
I have some really good ideas for AI applications, but I really lack practical knowledge, so I'm looking for some books on AI. Can anybody suggest some? Free downloads, obviously.
- 01 February, 2007