So when will the day come,

Name: !L33tUKZj5I 2010-11-10 5:55

When programmers aren't needed anymore and we just tell robots to do the programming? If I were you guys I'd be scared for your jobs. If you don't have a job, then these robots would make your existence pointless anyway.

Name: Anonymous 2010-11-10 6:35

oh silly you

Name: Anonymous 2010-11-10 6:42

It would be a utopia anyway, so why would it matter? Besides, I'm sure people would still do certain things themselves, even if a robot could easily do their job.

Name: Anonymous 2010-11-10 6:56

> robot doing the programming
enjoy your botnet

Name: Anonymous 2010-11-10 8:53

> When programmers aren't needed anymore and we just tell robots to do the programming? If I were you guys I'd be scared for your jobs.
Scared for my job? What, do you want to toil for your sustenance? When robots can program they will pretty much be able to do everything else too, so there's no point in working. I can hardly imagine a more luxurious existence than having all my food, water, electricity, etc. taken care of, and all my software needs taken care of too.

Also, I would program recreationally, much like I do when I'm not working now.

Name: Anonymous 2010-11-10 9:39

We'll solve NP-hard problems before we can get acceptable quality software solutions generated from robots.

Name: Anonymous 2010-11-10 12:52

>>1
Not too worried about it. Strong AI is a pipe dream.

Name: Anonymous 2010-11-10 13:48

>>7
Depends on one's definition of Strong AI. What is yours?

I do believe that we'll achieve human-level intelligence AIs within 50 years, possibly much sooner. If not by anything novel, then at least by emulating high-level structures in the human brain and integrating them with less "natural" technologies. The human brain is not as much of a mystery as people make it out to be. There are still plenty of unanswered questions, but we won't need to answer all of them to achieve human-level AI. Such AIs would hardly be perfect, and could be as flawed as humans are, but with a much higher potential learning capacity.

If your definition of Strong AI means a general AI which does not use heuristics to solve problems (one that uses exhaustive approaches instead), then that is indeed a pipe dream, due to the amount of resources required (time and memory).
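
To put rough numbers on that blowup, here's a toy Python 2 sketch of exhaustive search (the formula is made up, nothing rigorous): checking a SAT-style formula over n booleans means trying up to 2**n assignments.

from itertools import product

# Exhaustive search: every combination of n truth values, 2**n in total.
def brute_force(n, formula):
    for assignment in product([False, True], repeat=n):
        if formula(assignment):
            return assignment
    return None

# Fine for n=3 (8 cases); hopeless for n=100 (2**100 cases).
print brute_force(3, lambda a: a[0] and not a[1] and a[2])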

I don't see a reason why humans won't be able to achieve AT LEAST human-level intelligence AIs. If you see one, care to elaborate?

Name: Anonymous 2010-11-10 14:57

>>1
There is code that generates more code.
They are called macros.
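
Python has no macros proper, but here's a rough Python 2 sketch of the same idea - code that builds and runs more code (function names made up):

# A "macro" by hand: generate source text, then execute it.
def make_adder_source(n):
    return "def add%d(x):\n    return x + %d\n" % (n, n)

exec make_adder_source(5)  # defines add5 in this namespace (Python 2 exec)
print add5(10)             # => 15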

Name: Anonymous 2010-11-10 17:53

print "print 'Hello, World!'"
JOBS OVER
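
Or, going one step further, a program that prints itself (a classic Python 2 quine, same Python 2 as above, if I've typed it right):

s = 's = %r\nprint s %% s'
print s % s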

Name: !L33tUKZj5I 2010-11-10 19:02

>>8
> Depends on one's definition of Strong AI. What is yours?
An AI good enough to never get trolled.

Name: Anonymous 2010-11-10 19:36

>>11
while true:
  print "IHBT"

Name: Anonymous 2010-11-10 20:08

>>11
Since humans are capable of being trolled, that would certainly rule out humans or anything based on them.

Name: !L33tUKZj5I 2010-11-10 21:06

>>13
I thought eventually there was going to be an AI coded that could improve itself?

Name: Anonymous 2010-11-10 21:34

>>12

The first letter in ``true'' is supposed to be uppercase.

Name: Anonymous 2010-11-10 22:20

>>14
Humans can improve themselves by changing the way they reason about things (through a good education in math, philosophy, logic, etc). However, partially by doing so they subvert naturally learned heuristics, which are themselves fallible but 'good enough' to allow a human to function. In fact, without them humans wouldn't be able to function at all - they are at the essence of human intelligence.

Such an AI would have to base its reasoning on similar heuristics (probabilistic logic is one example among others), yet most heuristic reasoning is fallible. A general AI which only works in absolutes would be rather useless in the real world, which mostly deals in fuzzy/uncertain data. Of course, for types of problems that work with much more absolute concepts (such as theorem proving), one can apply different types of "reasoning". But as most people probably know by now, using some "general" algorithm to find a solution to certain types of problems can be undecidable (or in other cases NP-hard, etc), making such an AI mostly useless without guidance (human heuristics, or an AI using some advanced form of heuristic reasoning).
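
To make "probabilistic logic" slightly more concrete, here's a toy Python 2 sketch of belief updating via Bayes' rule (all numbers made up; real probabilistic logics are far richer than this):

# Posterior = P(evidence|troll)*prior / P(evidence), by Bayes' rule.
def bayes_update(prior, p_evidence_if_troll, p_evidence_if_not):
    numerator = p_evidence_if_troll * prior
    return numerator / (numerator + p_evidence_if_not * (1.0 - prior))

# Belief that a post is a troll, after seeing some inflammatory bait:
print bayes_update(0.3, 0.8, 0.1)  # ~0.77: probably a troll, never certain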

As for the case of detecting ``trolling'', one would first need a good definition of what the act constitutes: different people have different concepts of what it means to ``troll'' or ``meta-troll'' - as it is with many subjective words/concepts (for example: ``God'', ``qualia'', ``love'', ``consciousness'' and so on). It might be argued that merely considering a troll is a way of self-trolling, thus conceiving something that is immune to trolling is impossible. I could propose various forms of this as a ``Trolling paradox'', similar to the ``Liar paradox''. I could also postulate that a rock is untrollable by the fact that it cannot reason, so a rock would fit the definition of ``Strong AI''; however, the most common understanding of ``Strong AI'' suggests that some form of intelligence or reasoning is required, thus no such ``Strong AI'' can be ``untrollable''. This means either that ``untrollability'' is unfit to judge whether a ``Strong AI'' can exist, or that a ``Strong AI'' cannot exist, simply by the fact that anything that reasons is ultimately susceptible to trolling.

Name: !L33tUKZj5I 2010-11-10 22:58

> I could also postulate that a rock is untrollable by the fact that it cannot reason
Fair enough, let me elaborate. An AI that is deemed to have independent conscious thought, that is unable to be trolled.

> anything that reasons is ultimately susceptible to trolling.
Just because something does not exist yet does not mean it never will; people a thousand years back probably thought we'd never go to the moon. Well, most of them.

Name: Anonymous 2010-11-10 23:14

>>17
You'll have to provide a definition of ``trolling'' and ``meta-trolling''. The popular definitions of the two terms (and the way that ``meta-trolling'' can also be considered a form of ``trolling'') usually lead one to the conclusion that anything can be a potential ``troll'' or ``meta-troll'', thus it's logically impossible to avoid trolling (except by not reasoning at all, or completely ignoring everything).

Name: !L33tUKZj5I 2010-11-10 23:36

>>18
Fine. Trolling is trying to provoke a strong emotional reaction from someone instead of wanting to actually have a discourse with them, or educate them.

Name: Anonymous 2010-11-10 23:49

>>19
Wouldn't any ``Strong AI'' incapable of emotion qualify? Even a human with a damaged amygdala could qualify. Of course, both the human and the AI could probably learn to emulate emotional behaviour, but that would not mean they experience "genuine" emotions.

Name: !L33tUKZj5I 2010-11-10 23:55

>>20
Good point... I suppose you could expand it to say that getting someone to keep repeating themselves by pretending not to understand them would be added to that list.

Name: Anonymous 2010-11-11 0:47

>>15
YHBT

Name: sage 2010-11-11 17:33

Considering that we haven't got a clue how consciousness functions, how can we build an AI? Current research points to consciousness requiring quantum entanglement, which means we are a long, long, long time away from any processing power of that sort.

I fucking hate these singularity hipsters.

>>21
>>20
>>19

Name: Anonymous 2010-11-11 18:00

> Current research points to consciousness requiring quantum entanglement
[citation needed]

Name: Anonymous 2010-11-11 19:00

>>1
That day is a long way off.
There are going to be a LOT more lower-level jobs taken first by "AI". Programmers will be the ones making it possible.

Worry about the entitled masses first.

Name: Anonymous 2010-11-11 19:10

That's like making a robot to write a book for you. OP is a silly goose.

Name: Anonymous 2010-11-11 20:37

Name: !L33tUKZj5I 2010-11-12 3:36

>>26
I see no reason why that can't happen one day.
It might not be as good as one made by a human in some respects, but then again who knows?

Name: Anonymous 2010-11-12 11:58

Who maintains the robots.

Other robots?

Who maintains the robot robot maintainers?

Other robots?

...

...

...

...

...

Humans.

Name: Anonymous 2010-11-12 12:44

Why would you use a robot to program? That's stupid.

Name: !L33tUKZj5I 2010-11-12 13:18

>>29
They maintain each other, like humans do.

Name: Anonymous 2010-11-12 13:38

>>29
Doctors.

Have a nice day. :-)

Name: Anonymous 2010-11-12 13:51

>>31,32

But we never established that there would be other robots able to maintain themselves and code, or robots that can heal other robots?!

The OP specifically said robots who could program.

I see you two don't pay that much attention to detail, which is kind of disappointing...

Name: Anonymous 2010-11-12 14:27

I'm not worried. I'm actually working on genetic algorithm and self-learning AI research.

I'll MAKE those robots, get rich in the process and won't have to work anyway.
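
For anyone curious, the skeleton is short enough to post - a toy Python 2 sketch of a genetic algorithm that evolves a bitstring toward all 1s (all parameters made up):

import random

# Selection, crossover, mutation, repeat: the whole GA loop.
def evolve(bits=20, pop_size=30, generations=100):
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)    # fitness = number of 1s
        parents = pop[:pop_size // 2]      # keep the fittest half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, bits)
            child = a[:cut] + b[cut:]      # one-point crossover
            if random.random() < 0.1:      # occasional mutation
                i = random.randrange(bits)
                child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=sum)

print sum(evolve()), "of 20 bits set"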

Name: Anonymous 2010-11-12 16:24

Name: Anonymous 2010-11-12 22:59

>>23
I think you're a bit confused.

> Considering that we haven't got a clue how consciousness functions, how can we build an AI?

There are 2 problems concerning consciousness.

The "simple" one, which is merely how cognitive functions work, how "thoughts" are generated, and so on. The "simple" problem is merely about understanding the physical processes at work at the low-level (physical processes that make neurons work, and interactions between them) and at the high-level (interactions between functional blocks of the brain, and so on - even if in reality there is a continuity between regions, and there is no clear delimiter). There are a lot of books talking about the low-level processes and some which attempt to construct high-level theories (some of which are quite logical and possibly how we actually work).
The "hard" problem of consciousness is the existential one: what it is "to be like something", it concerns our perception and the nature of qualia. There is nothing physical that indicates we have qualia. It's either that it's an illusion and this raw/basic form of consciousness doesn't exist, it being an illusion caused by our brain, and we are just hardwired to believe in it (See "Consciousness Explained" by Daniel Dennett for this view), or that this type of consciousness is a property of the world we live in (either non-physical (see David Chalmers' "The Conscious Mind") or pyshical (see Penrose's books, since this appears to be your viewpoint)).
The view Dennett gives is rather logical and might be how it actually works, however I find it terribly hard to believe in, as that's probably how I'm hardwired to be.
The view Chamlers gives is a bit more interesting, in which it supposes that consciousness would just form in any system organized so that it would form, and that system would experience qualia and so on - it's not something you need to engineer at all, it just appears naturally and is a basic property of everything in the world.
The view that Penrose/Searle givesis actually the hardest to stomach for me and it's one of the most unpopular views (the one which you seem to think it's true: consciousness is due to quantum entanglement), and assuming his view would be true, it would lead to some really unlikely conclusions).
Current research points to conciousness requiring quantum entanglement, that means we are a long long long time away from any processing power of that sort.
I'd really like a citation on this. The only one that seems to  believe this is Penrose, and this view is highly unpopular.

You should read the literature and draw your own conclusions on this problem.

However, regardless of what point of view you take on this "hard" problem of consciousness, you will notice that it's not required at all for building (strong) AIs. The only thing that will change is whether you think such AIs can achieve a consciousness similar to yours or not. It doesn't mean that such AIs can't achieve the same level of intelligence as you, or much better.

I'd also like to understand why you think that completely understanding consciousness (the hard problem, especially) is of any importance to building general AIs. It may be something we humans are puzzled about, but I really doubt it has any bearing on whether we can build them or not.

I've been loosely following various neuroscience and AI research and I do think we'll be able to achieve human-level intelligence AIs within 50 years or less (probably much less), and I'd also be interested in doing research in some of these fields (and some related ones) someday.

Name: Anonymous 2010-11-12 23:52

>>33
>>29-san posits these robot-maintaining robots just so, attention-to-detail-sama.

Name: !L33tUKZj5I 2010-11-13 15:43

>>36
Holy shit, five star post nigga.
You made me rethink a lot of stuff.

Name: Anonymous 2010-11-13 16:08

>>38
What's a ``post nigga''?

Name: !L33tUKZj5I 2010-11-13 16:27

>>39
See, your inability to understand this jive talk is what stops you getting invited to parties.

Name: Anonymous 2010-11-13 17:10

>>40
You make it sound as if going to the parties were something desirable.

Name: Anonymous 2010-11-13 21:27

>>41
No, being at the parties is something desirable. Going to them can be a bit of a chore.

Name: Anonymous 2010-11-14 2:57

>>42
Being at a party sounds desirable. What happens there?

Name: Anonymous 2010-11-14 9:42

>>39
It's the next version of a homie, the avant-garde nigga of yesterday.

Name: Anonymous 2010-11-14 18:09

>>36
That Penrose/Hameroff and I share similar 'unpopular' views does not mean we are incorrect; I only have to cite heliocentrism to prove that.

As for breaking down consciousness into two problems, I feel you are avoiding the point: how can you be so brazen as to deconstruct something into two domains when we have no understanding of the actual domain? Fair enough, we have observational data, but that only leads to us simulating human behaviour(s). If that's what you want, fine, I don't disagree with that.

I held the same opinion as you 10 years ago; after reading Shadows of the Mind I couldn't get the issue of the paramecium out of my head. Also, I would suggest you hold off throwing around claims like "human-level intelligence AIs within 50 years"; it makes you sound like a TV scientist. I feel Orch-OR is a step in the right direction - there is something there that we don't understand, and research into this area is a good thing.

Name: Anonymous 2010-11-14 21:41

Although they are not mutually exclusive, there is a difference between 'unpopular' and 'unsubstantiated and stupid.'

Name: Anonymous 2010-11-14 23:03

>>45
It would be nice/interesting if our brains were capable of quantum computing of some form; however, such claims lack evidence, and the evidence that has been found seems to indicate the contrary. It is widely believed that our brains are mostly deterministic and that we lack "free will" (whatever that is), and most evidence suggests this. Even if we were to assume that quantum computing of some sort was possible in the brain, that still doesn't show it has anything to do with the way we experience consciousness.

As for the separation of consciousness into 2 problems, let's take an idealized scenario (not a real-world one, although likely possible in a simulation) where you could know everything about each neuron's state. You would be able to see how information flows through the system and how the data is "processed". What we perceive as consciousness depends on (if not, IS) the state of some parts of the system ( http://en.wikipedia.org/wiki/Neural_correlates_of_consciousness ).

However, even if we knew how all the data flowed, and even if we were able to reconstruct things such as what we see, what we imagine, what we hear, etc (conscious perception), it would still not account for the "hard problem", since we experience things rather continuously (even if they are not), and to actually reconstruct such perceptions we would have to know exactly how each part that plays any role in some perception actually works. The brain is fairly decentralized; for example, processing visual input is done in fairly small chunks, with the information becoming more compressed and centralized as it moves from V1 to V2 to V4 to IT and toward the prefrontal cortex. Yet what we experience is fairly continuous and coherent, as we are able to experience the state of the entire system at once ( http://en.wikipedia.org/wiki/Binding_problem ). It may be an illusion, or it may be something else, but all these questions about how "it is to be like something" represent the "hard problem of consciousness", because if you were to inspect the system only by its behaviour, you would see no evidence of this at all... the only reason we even consider the existence of this problem is because that's how we experience the world.

Name: !L33tUKZj5I 2010-11-15 7:32

>>43
Drunken women happen at parties.
