>>7
I love it when illiretards think words like ``sentience'' are fancy.
Name:
Anonymous 2012-08-12 6:22
>>8
Take Ruby, for example: a fancy name for a crap language. C/C++, meanwhile, has its ugly name right in your fucking face.
Name:
Anonymous 2012-08-12 19:00
teaching computers speech
Last I heard, nobody seems to have taken into account how sounds are made up of frequencies, or that our ears work by splitting those sounds into their component frequencies IN REAL TIME. Or how that makes it so much easier to decode sounds.
Or the upshot: That sounds (like, say, speech) are best constructed by isolating which sounds are made up of which frequencies. And then putting those frequencies together. In the right amount, and in the right order.
Rather than fiddling around with sampling tech from 1980.
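The frequency-splitting idea above is exactly what a discrete Fourier transform does. A minimal sketch (the tone frequencies and amplitudes are invented for illustration):

```python
import numpy as np

# Decompose a composite tone into its component frequencies with an
# FFT -- the same trick the cochlea performs mechanically in real time.
SAMPLE_RATE = 8000           # samples per second
DURATION = 1.0               # seconds

t = np.arange(0, DURATION, 1.0 / SAMPLE_RATE)
# A "sound" built from two pure tones: 440 Hz and 1200 Hz.
signal = 1.0 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)

# The two strongest bins land exactly on the tones we put in.
top_two = sorted(freqs[np.argsort(spectrum)[-2:]])
print(top_two)  # -> [440.0, 1200.0]
```

With a 1-second window at 8000 Hz, the FFT bins fall on whole hertz, so the two input tones are recovered exactly.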
>>10
Great idea! We just need to build a microphone that picks up 100 different frequencies separately, and then our software will magically parse these 100 inputs into natural language!
Because a billion different tiny components is easier to understand, right? Just look at how easy it is to parse text and analyze images!
Also checkem.
Name:
Anonymous 2012-08-13 5:01
>>10
A dog's ears, brain, and vocal apparatus weren't designed to decode and produce human speech; a dog's brain is just general enough to learn new information to get the snacks.
Now if you start breeding dogs based on their human speech recognition, after a few generations you'll get a breed of dogs capable of human speech. You can't do that with modern computer science.
Name:
Anonymous 2012-08-13 6:40
Now if you start breeding the dogs based on their human speech recognition, after a few generations you will get a breed of dogs capable of human speech
So your ideal candidate is that dog on Youtube who growls out "I rove you?"
>>13
a good starting point. and you can get funding from these old ladies who need an animal companion.
Name:
Anonymous 2012-08-13 7:43
Speech recognition has nothing to do with AI. Genetic programming isn't AI either.
AI is just a weasel word for fancy algorithms that promise to do things we don't have enough processing power for yet.
Name:
Anonymous 2012-08-13 8:15
>>16
i.e. it is a buzzword Jews use to get investors' money for some unrelated astronomy project.
Name:
Anonymous 2012-08-14 22:09
>>11
Somebody never heard of DSPs.
Also, apples and oranges. There are a gazillion ways to obfuscate what a picture shows (angle, distance, rotation, tint, obstacles, etc.); with sounds, once you separate the component frequencies, it's basically just signal vs noise.
>>16
From the looks of it, AI would ease speech recognition considerably. And that's a statement I'll hold to be wrong _after_ somebody actually proves it wrong, not before.
>implying ``teaching'' isn't programming
>implying neural networks aren't code as data
>implying this dog was actually ``taught'' human speech recognition and not PROGRAMMED to respond to the phonetic sounds of a single phrase from a specific voice with its mimicry of the same string of sounds
>implying this dog has even the faintest clue what said string means
>implying that if she said ``i hate you'' in the same tone it wouldn't respond the exact same way
>implying that taking 2 days to ``learn'' isn't slower than Ruby on Rails implemented on the Java virtual machine
>>27
heh, i actually invented the '>>>' thing. i find it quite disturbing that a site as big as 4chan actually uses it. i'm very sorry for unleashing such a horrible idea upon the world.
>>34 home button
Sorry, I don't use Internet Explorer. You must have confused me with one of your desktop thread buddies on /g/. Go back there, please.
Name:
Anonymous 2012-08-16 2:30
>>35
Sure will do, but it's gonna take me a bit; do me a favor, make your way there ahead of me and let them know I'm on my way. While you're there, you should really poke around and consider settling in.
>>37
No, it's just starting to get really good.
You can go back there though, I've been here before Reddit even existed and have never gone there.
Until you all go the fuck back there, my work here isn't gonna be done.
Speech recognition will end up needing AI if it ever wants to reach the level of the average human in correct recognition. A lot of words sound almost the same, and the phonemes in many cases get changed drastically by idiosyncrasies in pronunciation. The brain does not process speech in real time in the sense of listening to the phonemes, comparing them against the set of all possible phonemes, and then matching them to a semantic understanding. No, that would make us catatonic. It uses conditioning to register the phonemes used in the language, then combines them with a semantic understanding of the current context to create a predictive model for parsing speech.
You may have noticed that when you're not paying attention, or not expecting a certain word, it's very hard to make out what someone says if you lose the beginning of the word: the brain has to switch context, a "missed branch" in a sense, and evaluate the running context against many more phoneme/semantic pairs.
That's why almost all of today's most accurate speech-processing models are trained on certain sets of words with certain intonations, and have to be retrained for new pronunciations; no single model yet works on an unlimited domain of English. If you say words it doesn't know, it won't piece them together from the pronunciation; it'll just try a closest-case match, since it does not "understand" what is being said, i.e. it has no grasp of semantics. Humans know that an unrecognized word is probably a new word and can reconstruct it from phoneme pairs. Computers cannot recognize new words without an NLP model.
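The "predictive model" idea above can be sketched with a toy decoder that weighs an acoustic guess against what the running context makes likely. The corpus, candidate words, and acoustic scores here are all invented for illustration:

```python
from collections import defaultdict

# Tiny training corpus for a bigram context model.
corpus = "i love my dog . my dog can hear me . i love speech".split()

# Count bigrams: how often does `word` follow `prev`?
bigrams = defaultdict(lambda: defaultdict(int))
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1

vocab = set(corpus)

def context_prob(prev, word):
    """P(word | prev) with add-one smoothing."""
    counts = bigrams[prev]
    return (counts[word] + 1) / (sum(counts.values()) + len(vocab))

# Acoustically, "love" and "rove" sound almost alike, so assume the
# ear gives them equal scores; context has to break the tie.
acoustic = {"love": 0.5, "rove": 0.5}

def decode(prev, candidates):
    # Pick the candidate maximizing acoustic score * context probability.
    return max(candidates, key=lambda w: acoustic.get(w, 0.0) * context_prob(prev, w))

print(decode("i", ["love", "rove"]))  # -> love
```

After "i", the corpus makes "love" far more probable than "rove", so the ambiguous sound resolves correctly even though the acoustics alone can't decide.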
>>39
It sounds like a naive Bayes predictor would handle that well.
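A minimal sketch of that suggestion: a naive Bayes recognizer that picks the most probable word given observed phonemes. The phoneme inventory and training samples are invented (nodding to the "I rove you" dog upthread); real systems train on hours of audio.

```python
from collections import defaultdict
import math

# Invented training data: word -> list of observed phoneme sequences.
training = {
    "rove": [["r", "ow", "v"], ["r", "ow", "f"]],
    "love": [["l", "ah", "v"], ["l", "ow", "v"]],
}
PHONEMES = {"r", "l", "ow", "ah", "v", "f"}  # small closed inventory

def train(data):
    priors, likelihoods = {}, {}
    total = sum(len(samples) for samples in data.values())
    for word, samples in data.items():
        priors[word] = len(samples) / total
        counts = defaultdict(int)
        n = 0
        for seq in samples:
            for ph in seq:
                counts[ph] += 1
                n += 1
        # Add-one smoothing so unseen phonemes don't zero out a word.
        likelihoods[word] = {ph: (counts[ph] + 1) / (n + len(PHONEMES))
                             for ph in PHONEMES}
    return priors, likelihoods

def classify(phonemes, priors, likelihoods):
    def score(word):  # log P(word) + sum log P(phoneme | word)
        return math.log(priors[word]) + sum(
            math.log(likelihoods[word][ph]) for ph in phonemes)
    return max(priors, key=score)

priors, likelihoods = train(training)
print(classify(["l", "ow", "v"], priors, likelihoods))  # -> love
```

The "naive" assumption is that phonemes are independent given the word, which is exactly why it ignores the ordering and context problems described above.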
Name:
Anonymous 2012-08-16 23:28
It's very risky to program superhuman AI to do something you think you want. Human values are extremely complex and fragile. Also, I bet my values would change if I had more time to think through them and resolve inconsistencies and accidents and weird things that result from running on an evolutionarily produced spaghetti-code kluge of a brain. Moreover, there are some serious difficulties to the problem of aggregating preferences from multiple people — see for example the impossibility results from the field of population ethics.
>if it is super intelligent, it will have its own purpose.
Well, it depends. "Intelligence" is a word that causes us to anthropomorphize machines that will be running entirely different mind architectures than we are, and we shouldn't assume anything about AIs on the basis of what we're used to humans doing. To know what an AI will do, you have to actually look at the math.
An AI is math: it does exactly what the math says it will do, though that math can have lots of flexibility for planning and knowledge gathering and so on. Right now it looks like there are some kinds of AIs you could build whose behavior would be unpredictable (e.g. a massive soup of machine learning algorithms, expert systems, brain-inspired processes, etc.), and some kinds of AIs you could build whose behavior would be somewhat more predictable (transparent Bayesian AIs that optimize a utility function, like AIXI except computationally tractable and with utility over world-states rather than a hijackable reward signal). An AI of the latter sort may be highly motivated to preserve its original goals (its utility function), for reasons explained in The Superintelligent Will.
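The "an AI is math" point can be made concrete with a toy expected-utility maximizer. The world-states, probabilities, and utilities below are made up; the point is only that the agent's behavior is fully determined by the numbers you write down.

```python
# Toy agent that picks whichever action maximizes expected utility
# over world-states -- a cartoon of the "transparent Bayesian AI
# optimizing a utility function" described above.
ACTIONS = {
    # action -> {resulting world-state: probability}
    "explore": {"found_resource": 0.3, "nothing": 0.7},
    "wait":    {"nothing": 1.0},
}
UTILITY = {"found_resource": 10.0, "nothing": 1.0}  # utility over world-states

def expected_utility(action):
    return sum(p * UTILITY[state] for state, p in ACTIONS[action].items())

def choose_action():
    # The choice is a pure function of ACTIONS and UTILITY: change the
    # numbers and the "motivation" changes with them.
    return max(ACTIONS, key=expected_utility)

print(choose_action())  # -> explore
```

Here EU(explore) = 0.3·10 + 0.7·1 = 3.7 beats EU(wait) = 1.0, so the agent explores; there is no behavior beyond what the table specifies.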
Basically, the Singularity Institute wants to avoid the situation in which superhuman AIs' purposes are incompatible with our needs, because eventually humans will no longer be able to compete with beings whose "neurons" can communicate at light speed and whose brains can be as big as warehouses. Apes just aren't built to compete with that.
>Dr. Neil deGrasse Tyson mentioned that if we found an intelligence that was 2% different from us in the direction that we are 2% different [genetically] from the chimpanzees, it would be so intelligent that we would look like beings with a very low intelligence.
Yes, exactly.
>How does your group see something of that nature evolving and how will we avoid going to war with it?
We'd like to avoid a war with superhuman machines, because humans would lose, and we'd lose more quickly than is depicted in, say, The Terminator. A movie like that is boring if there's no human resistance with an actual chance of winning, so they don't make movies where all humans die suddenly with no chance to resist because a worldwide AI did its own science and engineered an airborne, human-targeted supervirus with a near-perfect fatality rate.
The solution is to make sure that the first superhuman AIs are programmed with our goals, and for that we need to solve a particular set of math problems (outlined here), including both the math of safety-capable AI and the math of aggregating and extrapolating human preferences.
Obviously, there's lots more detail on our research page and in a forthcoming scholarly monograph on machine superintelligence from Nick Bostrom at Oxford University. Also see the singularity paper by leading philosopher of mind David Chalmers.