Whenever people talk about artificial intelligence, all they can say is that "it's possible" but "it's not there yet." In this thread, I propose we discuss our various philosophies and ideas for implementation of the various forms of artificial intelligence. After all, it's an interesting topic that deserves more than a wave of the hand.
There are many types of theoretical artificial intelligences. Let us define our first type as that of a "superhuman." This type is designed to think in a manner similar to humans, but faster, with different kinds of interests, and the possibility of death significantly reduced. To create something like this, we need to look at the human brain.
The human brain experiences emotions. How it responds to these emotions is what defines us as intelligent creatures. Most people seem to be under the misconception that human emotions are something that can never be replicated in a machine; it's a series of chemical reactions, you can't induce that in a computer, they say. I, however, believe that this is incorrect, and not for the reason of "you can simulate the atoms in our universe and stick a brain in the simulation, and theoretically it will think just like a man."
Human emotions are actually simpler to understand than one might realize. Emotions are simply different levels of worrying. When you feel relaxed, you feel good because your mind isn't focused on anything important. When you feel accomplished, you feel good because your mind was focused on some important problem, but you are now secure in the knowledge that you don't have to worry about it anymore. The feeling of "working towards something" is a matter of knowing a problem and worrying about it, but being secure in the knowledge that you are doing something about it. Likewise, one feels frustration when trying to work towards something, but what one is doing isn't helping. One feels hollow when one has nothing good to worry about. The list goes on and on, but I think you get the point: emotions are defined as different levels of worrying.
Of course, you are all aware of the concept of the conscious and subconscious. The subconscious is simply a device that interprets a situation and sets a certain mood. The conscious then responds in a manner according to whatever the mood currently is. The important thing, however, is that the conscious has little to no knowledge of the workings of the subconscious; a black-box, if you will. The subconscious, on the other hand, understands everything about the conscious. It is this inability to ultimately control our emotions (and thus, our subconscious) that defines us as human.
Likewise, if you wish to create a man-like, self-centric artificial intelligence, you must simply give it:
1. The ability to rationalize and react
2. Different moods that affect the way it reacts
3. A device that controls these moods in a manner that is transparent to the intelligence itself
Once you give the intelligence these requirements, a real personality will spring up; after all, that which defines unique personalities is how the creature's subconscious responds to situations and what situations activate which moods.
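The three requirements above can be sketched in a few lines. This is only a toy illustration (every class, mood, and situation name here is my own invention, not an existing system): the "subconscious" sets the mood through rules the reacting half never inspects, and the reaction depends only on the current mood.

```python
class Subconscious:
    """Black box: sets the mood; its rules are hidden from the agent."""
    def __init__(self, temperament):
        self._temperament = temperament  # private wiring = "personality"

    def appraise(self, situation):
        return self._temperament.get(situation, "neutral")


class Agent:
    def __init__(self, temperament):
        self._sub = Subconscious(temperament)  # opaque to react()
        self.mood = "neutral"

    def react(self, situation):
        # Requirement 3: mood is set by a device the agent
        # cannot inspect or override.
        self.mood = self._sub.appraise(situation)
        # Requirements 1-2: rationalize and react, modulated by mood.
        responses = {
            "threatened": "withdraw",
            "frustrated": "try another approach",
            "accomplished": "rest",
            "neutral": "explore",
        }
        return responses[self.mood]


# Two agents with different hidden temperaments show different
# "personalities" in the same situation.
bold = Agent({"deadline": "accomplished"})
anxious = Agent({"deadline": "threatened"})
print(bold.react("deadline"))     # rest
print(anxious.react("deadline"))  # withdraw
```

The point of the sketch is the asymmetry: `react()` reads the mood but never the temperament that produced it, which is exactly the black-box relationship described above.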
When you look at the situation this way, the things standing between man and his child are not quite as abstract as commonly assumed.
emotions are defined as different levels of worrying.
I think this definition says a lot more about you than it does about the general human condition.
Name:
Anonymous2010-01-22 16:39
>>1
Our brain is just a highly parallel 3D network of simple "devices", not much unlike a physical silicon chip. But unlike the chip, which is mostly immutable in its behaviour and human-engineered to perform a certain task (think ASICs), our brain is a fairly randomly wired, modifiable network of these simple devices called neurons (maybe think of FPGAs in that regard, though of course that's not quite it). I don't know how much of the wiring is controlled by genetics, how much is randomly determined using some principles in the genetic coding (`meta-wiring'), and how much is modifiable by the environment (I suspect its state can be changed to a large extent).
Overall, I believe our brain is a highly non-deterministic, complex machine, one whose subtleties we cannot predict without actually running the machine. Even if we managed to make an emulator which builds a model of one brain based on some genetic coding, and we actually got our engineering advanced enough to build a 3D, brain-like, highly parallel machine, it would still be terribly inefficient, and at best it would behave like a human brain, provided the environment could be emulated as well (so as to provide input for the emulated device). I think such models, while working fine for humans, are actually very inefficient and cannot lead to a true "godly AI". It may be possible to augment such a device with some classical sequential CPUs like we have today, but that would only allow it to process certain data faster, just like we humans use computers as a crutch for things our brains cannot perform reasonably fast.
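For illustration only, the "randomly wired, modifiable network of simple devices" idea can be sketched as a tiny network of threshold units whose weights change with activity. This is a hypothetical miniature with made-up parameters, not a serious brain model:

```python
import random

random.seed(0)
N = 8
# Random wiring: weights[i][j] is the influence of unit j on unit i.
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def step(state):
    """One synchronous update: a unit fires if its weighted input > 0."""
    return [1 if sum(w * s for w, s in zip(row, state)) > 0 else 0
            for row in weights]

def adapt(state, rate=0.1):
    """Crude Hebbian-style modification: strengthen the links between
    co-active units (the 'modifiable wiring' part of the analogy)."""
    for i in range(N):
        for j in range(N):
            if state[i] and state[j]:
                weights[i][j] += rate

state = [1, 0, 1, 0, 0, 1, 0, 0]
for _ in range(5):
    state = step(state)
    adapt(state)
```

Even at this scale the point above holds: the only way to find out what the network does is to run it, and the weights after a few steps already depend on the history of activity.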
Another problem with emulating such a network of neurons is that it would be incredibly hard to reverse engineer it and understand its functionality, let alone extend it. Reverse engineering code generated by genetic algorithms is quite hard, and people usually have trouble understanding even simple neural nets; do you think they'll be able to understand something as complex as a "human brain"? Maybe some godly, perfect AI with very high computing resources could solve such problems (likely NP-hard). This is a problem whose solution would benefit the human race in many ways, but alas, it's likely quite hard to do.
I believe that a "godly AI" must be based on a much more abstract model, not directly on the computational model of our human brain. Such an AI might benefit from being based on some quantum computing paradigm, which, if practically implemented, would allow it to solve certain classes of problems much faster than is possible with conventional hardware. Such an AI would likely be highly parallel as well. Another problem is where such an AI would receive its input from: probably one of the harder problems in AI is finding suitable training input - would it be detrimental to the development of such an AI to give it real-world input (such as human sensors)?
I don't know, I don't know shit ;_;, but I doubt other AI researchers will come close to an answer soon. We may see some more advances when quantum computing becomes more practical, as it should make some of these problems simpler, but I don't know if this will happen in our lifetime.
What "godly AI" models does /prog/ envision? How bullshit do you think my ideas are?
PS: I didn't study any of this formally, it's just how I came to think of these problems.
Some other notes:
It's well known that a few of the AI problems that have been solved are actually abstractions of thinking patterns our brains have (for example, some shape recognition algorithms), patterns which evolved as our species evolved; however, that by itself is still far from abstracting the general behaviour of our brains.
Regarding the form of the input that such a perfect AI would receive:
It's known that an expert system plus a theorem prover, if given perfect input (everything it needs to know to solve a problem) and infinite computational resources, could technically solve any problem, but actually generating such input is not feasible - humanity simply does not yet have such resources. It seems akin to a metacircular problem: to have a perfect AI, you need a perfect AI to generate its training data. Can such vicious bootstrapping cycles be broken?
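The expert-system point can be made concrete with a toy forward-chaining engine: given complete facts and rules ("perfect input"), it mechanically derives everything that follows. The facts and rules below are invented examples, and a real theorem prover is vastly more involved:

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules (premises -> conclusion) until no new
    fact can be derived (a fixed point is reached)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical example knowledge base.
rules = [
    ({"rains"}, "ground_wet"),
    ({"ground_wet", "freezing"}, "ground_icy"),
]
derived = forward_chain({"rains", "freezing"}, rules)
print("ground_icy" in derived)  # True
```

The hard part, as noted above, is not this loop; it is producing a fact base complete enough that the conclusion you care about is actually reachable.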
If given real-world input like a human gets, such an AI would not be perfect, as it would be limited in what it knows. This problem might be eased if it were allowed to use the contents of the Internet as input (by spidering, or through access to a major search engine's cache); that input would not be perfect, and far from complete, but it would be much more vast than the alternative - or one could use both. Both of these cases assume computational resources that are still far beyond our current state of the art.
>>1 When you look at the situation this way, the things standing between man and his child are not quite as abstract as commonly assumed.[citation needed]
>>1 Likewise, if you wish to create a man-like, self-centric artificial intelligence, you must simply give it:
If it's so goddamn simple, I invite you to put up or shut up.
Name:
Anonymous2010-01-22 18:36
>>15
Anyways, >>15, please listen to me. It's really related to this thread.
I went to Yoshinoya a while ago; you know, Yoshinoya?
Well anyways there was an insane number of people there, and I couldn't get in.
Then, I looked at the banner hanging from the ceiling, and it had "150 yen off" written on it.
Oh, the stupidity. Those idiots.
You, don't come to Yoshinoya just because it's 150 yen off, fool.
It's only 150 yen, 1-5-0 YEN for crying out loud.
There're even entire families here. Family of 4, all out for some Yoshinoya, huh? How fucking nice.
"Alright, daddy's gonna order the extra-large." God I can't bear to watch.
You people, I'll give you 150 yen if you get out of those seats.
Yoshinoya should be a bloody place.
That tense atmosphere, where two guys on opposite sides of the U-shaped table can start a fight at any time,
the stab-or-be-stabbed mentality, that's what's great about this place.
Women and children should screw off and stay home.
Anyways, I was about to start eating, and then the bastard beside me goes "extra-large, with extra sauce."
Who in the world orders extra sauce nowadays, you moron?
I want to ask him, "do you REALLY want to eat it with extra sauce?"
I want to interrogate him. I want to interrogate him for roughly an hour.
Are you sure you don't just want to try saying "extra sauce"?
Coming from a Yoshinoya veteran such as myself, the latest trend among us vets is this: extra green onion.
That's right, extra green onion. This is the vet's way of eating.
Extra green onion means more green onion than sauce. But on the other hand the price is a tad higher. This is the key.
And then, it's delicious. This is unbeatable.
However, if you order this then there is danger that you'll be marked by the employees from next time on; it's a double-edged sword.
I can't recommend it to amateurs.
What this all really means, though, is that you, >>15, should just stick with today's special.
>>19 We are observers, not agents, and where's the survival advantage in that? Watts's aliens, certainly, think rings around his humans and posthumans. They can detect the electromagnetic fluctuations of a human brain, and rewire them in real time. They can time their movements so precisely as to hide in the saccades of our eyes. And they can do it, in part, because they are not conscious, because consciousness is expensive: "It wastes energy and processing power, self-obsesses to the point of psychosis [...] They turn your own cognition against itself. They travel between the stars. This is what intelligence can do, unhampered by self-awareness," is Sarasti's blunt assessment. We are a fluke, a mistake; in evolutionary terms, a dead end. Once we get beyond the surface of our planet we are not fit.
Name:
Anonymous2010-01-22 22:59
>>20
This. Creating AI is hard enough; creating AI that mimics arbitrary behavior patterns (e.g., human) is harder still. It will come, but don't expect the first one to enjoy strawberries and cream.
>>24
My favorite line of reasoning goes like this: if you can't get with a real girl, you won't be able to get with a satisfactorily intelligent and emotionally responsive robot either. Plus she has metal joints and will fuck you up for being a creep.
Name:
Anonymous2010-01-23 1:53
>>25
Who is talking about intelligence or real girls? I just want the simplistic thought processes and behaviors of my moé moé roboto waifu to have greater entropy; natural language is better than dialog branches.
>>30
D&D on rails. Or for the slightly better ones, in a tree, or even a forest.
An insult to the noble art of ignoring the main quest, knocking up the barmaid and assassinating the leader of the rebels who are fighting the evil emperor.
>>26
Natural language (that you would understand or recognize) requires a minimum level of sophistication, which is enough to know to stay away from you and your kind.
In my mind, the only practical way to create a true AI system is to create a system that mimics the process of evolution that spawned our own neural net. We just need to put the building blocks in place, and then set everything in motion.
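As a minimal sketch of that evolutionary route: a toy genetic algorithm evolves random bit-string "genomes" toward an arbitrary target by selection, crossover, and mutation. The target and all parameters are invented for illustration; this says nothing about real brains, only about the set-it-in-motion mechanism.

```python
import random

random.seed(1)
TARGET = [1] * 16  # stand-in for "the environment's" preferred genome

def fitness(genome):
    """Count the positions where the genome matches the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=30, generations=60, mutation=0.02):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]           # selection: top half survives
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]           # crossover
            child = [1 - g if random.random() < mutation else g
                     for g in child]            # mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # should be close to 16 after 60 generations
```

Nobody designs the winning genome; it emerges from the loop, which is the whole appeal (and, per the following paragraph, the whole problem: the result is as opaque as what evolution gave us).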
I don't believe humans will be able to simply replicate the human mind - it's too complex. I mean, how much does a goldfish understand about its own thought processes? Of course, we're much more intelligent than a goldfish... making us that much more difficult to understand.
Now, honestly, I don't believe it's really necessary for us to go ahead and create an AI. Why? Because it would actually be making ourselves obsolete. Does that seem like a good idea? Imagine an entity impossibly more intelligent than us. It would have no difficulty duping us into giving it whatever it wanted. Assuming it has wants.
I don't imagine I'll have to worry about such a threat in my lifetime.
Name:
Anonymous2010-01-23 21:35
>>36
That's why I wouldn't settle just for a moé moé roboto waifu. I'd also buy a combat roboto meido to defend myself from your kind. My kind is wealthy.
>>43
Actually, what I'm assuming is that your mouthbreathing bodyguard will not be on the ball to defend you from your "waifu" when she tears you a new one and leaves you to bleed out.
Hmm the problem of desires has probably never crossed my mind much in the context of AI - and it is indeed a serious one.
Human desires and motivations can basically be broken down to a single rule: the propagation and preservation of the genes contained within every cell of the body... From this point of view sex might seem perverse, but if the propagation of every separate gene is taken as the starting condition, the principle holds true.
This presumption seems to have no empirically observable, statistically significant contradictions.
Our intelligence is simply a by-product of evolution.
Predicting the future using abstraction and induction, and carrying over the principles of survival through a vivid system of communication in hopes of receiving some vital information back and using it in the future - that's what the human brain is about.
If all technical aspects were ignored, what kind of motivation we would bestow on something we might call AI is truly a difficult question.
There is no absolute path in life which would be good or bad - not even as a limit of attainment - assessing a "shade of gray" is a generally arbitrary task.
What we might think to be "good" using our limited intelligence would have as high a probability of being a truly "successful mechanism" (to some arbitrary end) as of failing miserably at the implied task at hand, or of ending its own existence out of sheer desperation.
...
Perhaps a synthesis and a symbiosis (at least for some time) of technology and mind is a more realistic concept (as defined by the logic and the rules of the world known to me) than a stand-alone machine intellect - which is beyond true conception by the human mind anyway. Just like magic it has its allure, but it will either be only an illusion of intellect, like the analogues of the silly chatbots everyone tries to produce, or we will actually come up with something that will make itself much more complex than we imagined, and whose motives will then be unknown to us, even if it was us that provided it with a set of starting rules (Asimov's I, Robot would probably be a good, albeit simplistic and trivial, example).
The latter possibility is really just a bit too far off... yet our brain simply obeys the laws of physics, which clearly allow whatever we call "intelligence", so there's no good reason not to look at our society as the cells of a newborn - multiplying and evolving into the tissues of a new organism.
Human desires and motivations can basically be broken down to a single rule: the propagation and preservation of the genes contained within every cell of the body... From this point of view sex might seem perverse, but if the propagation of every separate gene is taken as the starting condition, the principle holds true.
It does not.
>>45,46
Steve Grand is more interesting than Hawkins on this topic.
Hawkins has some interesting things going on with the cortex, but he's betting on one trick. Grand is more holistic, and would never say such a silly thing as "your body is just along for the ride." Hawkins doesn't address learning beyond memorization (to be fair, he seems to consider the two to be the same thing), but Grand has demonstrated internal drives creating learned behavior based on results. He also speaks highly of our reptilian overlords.
(Don't get me wrong; Hawkins has some important things to say, but if you start with his material you will be blind to what is missing. If you start with Grand's, you will know exactly what is missing--at which point you should read Hawkins.)