
AI is SHIT

Name: Anonymous 2012-08-11 20:48

Hey! Look! It's even possible to teach a dog human speech...
http://www.youtube.com/watch?v=rEtvcsqGipo

but you can't teach a computer. Your programming sucks. AI is shit. McCarthy is shit. Minsky is shit. MIT is shit. Go fuck yourself.

Name: Anonymous 2012-08-15 22:40

>>31
epic irony /b/ro

Name: Anonymous 2012-08-16 0:48

>>32
Thanks for the link to your homepage, but you're a little lost, you should click the home button to return to it.
>>33
u /g/o /g/url

Name: Anonymous 2012-08-16 0:57

>>34
home button
Sorry, I don't use Internet Explorer. You must have confused me with one of your desktop thread buddies on /g/. Go back there, please.

Name: Anonymous 2012-08-16 2:30

>>35
Sure will do, but it's gonna take me a bit. Do me a favor: make your way there and let them know I'm on my way. While you're there, you should really poke around and consider settling in.

Name: Anonymous 2012-08-16 21:03

>>31-36
Guys, this is getting stupid.

Why don't we all go back to Reddit?

Name: Anonymous 2012-08-16 22:34

>>37
No, it's just starting to get really good.
You can go back there though, I've been here before Reddit even existed and have never gone there.
Until you all go the fuck back there, my work here isn't gonna be done.

Name: Anonymous 2012-08-16 22:43

>>16

Speech recognition will end up needing AI if it ever wants to reach the level of the average human in correct recognition. A lot of words sound almost the same, and the phonemes in many cases change drastically due to idiosyncrasies in pronunciation.

The brain does not process speech in real time in the sense that it just listens to the phonemes, compares them against the set of all possible phonemes, and then matches them to a semantic understanding. No, that would make us catatonic. It uses conditioning to register the phonemes used in the language, and then combines them with a semantic understanding of the current context to create a predictive model for parsing speech. You may have noticed that when you're not paying attention or expecting a certain word, it's very hard to make out what someone says if you missed the beginning of the word: the brain has to switch context, in a sense a "missed branch", and has to re-evaluate the running context against many more phoneme/semantic pairs.

That's why today's most accurate speech-processing models are trained on fixed sets of words with particular intonations, and the model has to be retrained for new pronunciations; no single model yet works on an unrestricted English domain. If you say words it doesn't know, it won't piece them together from the pronunciation, it'll just pick the closest match, since it does not "understand" what is being said, as in, it has no grasp of semantics. Humans know that an unrecognized word is probably a new word, and they're able to reconstruct it from phoneme pairs. Computers cannot recognize new words without an NLP model.

Name: Anonymous 2012-08-16 22:48

>>39
It sounds like a naive Bayes predictor would model that well.
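
A minimal sketch of that idea: a toy naive Bayes word recognizer that combines a per-phoneme confusion model with a context prior, echoing >>39's point that context resolves ambiguous sounds. Every word, phoneme, and probability here is invented for illustration:

```python
import math

# Toy lexicon: words mapped to phoneme sequences (all data made up).
lexicon = {
    "bat": ["b", "ae", "t"],
    "pat": ["p", "ae", "t"],
    "bad": ["b", "ae", "d"],
}

# Context prior: how likely each word is given the running context.
context_prior = {"bat": 0.5, "pat": 0.3, "bad": 0.2}

def p_hear(heard, true):
    """Confusion model: probability of hearing `heard` given true phoneme."""
    return 0.8 if heard == true else 0.1

def recognize(heard_phonemes):
    """Pick argmax_w P(w | context) * prod_i P(heard_i | phoneme_i of w)."""
    best, best_score = None, -math.inf
    for word, phones in lexicon.items():
        if len(phones) != len(heard_phonemes):
            continue
        score = math.log(context_prior[word])
        score += sum(math.log(p_hear(h, t))
                     for h, t in zip(heard_phonemes, phones))
        if score > best_score:
            best, best_score = word, score
    return best

# An ambiguous onset ("b" vs "p") is resolved by the context prior.
print(recognize(["b", "ae", "t"]))  # prints "bat"
print(recognize(["p", "ae", "t"]))  # prints "pat"
```

The "missed branch" from >>39 corresponds to the prior being wrong: with a misleading context prior, the acoustic evidence alone has to overcome it.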

Name: Anonymous 2012-08-16 23:28

It's very risky to program superhuman AI to do something you think you want. Human values are extremely complex and fragile. Also, I bet my values would change if I had more time to think through them and resolve inconsistencies and accidents and weird things that result from running on an evolutionarily produced spaghetti-code kluge of a brain. Moreover, there are some serious difficulties to the problem of aggregating preferences from multiple people — see for example the impossibility results from the field of population ethics.
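
To make the aggregation difficulty concrete, here is the classic Condorcet cycle in a few lines of Python. The three voters and their rankings are hypothetical; the point is that pairwise majority voting can fail to produce any coherent group ordering:

```python
# Three voters rank three outcomes A, B, C (hypothetical rankings).
rankings = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    votes = sum(1 for r in rankings if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

# Pairwise majorities form a cycle: A beats B, B beats C, C beats A,
# so there is no single "group preference" ordering to hand an AI.
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(x, "beats", y, ":", majority_prefers(x, y))  # all three print True
```

Any scheme for extrapolating "humanity's values" has to confront cases like this, where the individual preferences are each perfectly consistent but their majority aggregate is not.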

>if it is super intelligent, it will have its own purpose.

Well, it depends. "Intelligence" is a word that causes us to anthropomorphize machines that will be running entirely different mind architectures than we are, and we shouldn't assume anything about AIs on the basis of what we're used to humans doing. To know what an AI will do, you have to actually look at the math.
An AI is math: it does exactly what the math says it will do, though that math can have lots of flexibility for planning and knowledge gathering and so on. Right now it looks like there are some kinds of AIs you could build whose behavior would be unpredictable (e.g. a massive soup of machine learning algorithms, expert systems, brain-inspired processes, etc.), and some kinds of AIs you could build whose behavior would be somewhat more predictable (transparent Bayesian AIs that optimize a utility function, like AIXI except computationally tractable and with utility over world-states rather than a hijackable reward signal). An AI of the latter sort may be highly motivated to preserve its original goals (its utility function), for reasons explained in The Superintelligent Will.
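
"The AI does what the math says" can be illustrated with a toy expected-utility maximizer. The agent's choice is fully determined by its beliefs and its utility function; all action names and numbers here are invented for illustration:

```python
# Beliefs: P(world-state | action), hypothetical numbers.
beliefs = {
    "act_safe":  {"state_ok": 0.9, "state_bad": 0.1},
    "act_risky": {"state_ok": 0.5, "state_bad": 0.5},
}

# Utility over world-states, not over a reward signal the agent could hijack.
utility = {"state_ok": 10.0, "state_bad": -100.0}

def expected_utility(action):
    """Sum of P(state | action) * U(state) over possible world-states."""
    return sum(p * utility[s] for s, p in beliefs[action].items())

def choose_action():
    """The agent simply picks the action with maximal expected utility."""
    return max(beliefs, key=expected_utility)

# EU(act_safe) = 0.9*10 - 0.1*100 = -1; EU(act_risky) = 0.5*10 - 0.5*100 = -45.
print(choose_action())  # prints "act_safe"
```

Note that nothing here is anthropomorphic: change the utility table and the same math yields entirely different behavior, which is exactly why getting the utility function right matters.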
Basically, the Singularity Institute wants to avoid the situation in which superhuman AIs' purposes are incompatible with our needs, because eventually humans will no longer be able to compete with beings whose "neurons" can communicate at light speed and whose brains can be as big as warehouses. Apes just aren't built to compete with that.

>Dr. Neil deGrasse Tyson mentioned that if we found an intelligence that was 2% different from us in the direction that we are 2% different [genetically] from the chimpanzees, it would be so intelligent that we would look like beings with a very low intelligence.

Yes, exactly.

>How does your group see something of that nature evolving and how will we avoid going to war with it?

We'd like to avoid a war with superhuman machines, because humans would lose — and we'd lose more quickly than is depicted in, say, The Terminator. A movie like that is boring if there's no human resistance with an actual chance of winning, so they don't make movies where all humans die suddenly with no chance to resist because a worldwide AI did its own science and engineered an airborne, human-targeted supervirus with a near-perfect fatality rate.
The solution is to make sure that the first superhuman AIs are programmed with our goals, and for that we need to solve a particular set of math problems (outlined here), including both the math of safety-capable AI and the math of aggregating and extrapolating human preferences.
Obviously, lots more detail on our research page and in a forthcoming scholarly monograph on machine superintelligence from Nick Bostrom at Oxford University. Also see the singularity paper by leading philosopher of mind David Chalmers.

Name: Anonymous 2012-08-17 3:45

Participating http://aigamedev.com/
