
So when will the day come,

Name: !L33tUKZj5I 2010-11-10 5:55

When programmers aren't needed anymore and we just tell robots to do the programming? If I were you guys I'd be scared for your jobs. If you don't have a job, then these robots would make your existence pointless anyway.

Name: Anonymous 2010-11-10 6:35

oh silly you

Name: Anonymous 2010-11-10 6:42

It would be a utopia anyway, so why would it matter? Besides, I'm sure people would still do certain things by themselves anyway, even if a robot could easily do their job.

Name: Anonymous 2010-11-10 6:56

>robot doing the programming
enjoy your botnet

Name: Anonymous 2010-11-10 8:53

>When programmers aren't needed anymore and we just tell robots to do the programming? If I were you guys I'd be scared for your jobs.
Scared for my job? What, do you want to toil for your sustenance? When robots can program they will be able to do pretty much everything else too, so there's no point in working. I can hardly imagine a more luxurious existence than, in addition to having all my food, water, electricity, etc. taken care of, having all my software needs taken care of too.

Also, I would program recreationally, much like I do when I'm not working now.

Name: Anonymous 2010-11-10 9:39

We'll solve NP-hard problems before we get acceptable-quality software generated by robots.

Name: Anonymous 2010-11-10 12:52

>>1
Not too worried about it. Strong AI is a pipe dream.

Name: Anonymous 2010-11-10 13:48

>>7
Depends on one's definition of Strong AI. What is yours?

I do believe that we'll achieve human-level intelligence AIs within 50 years, possibly in much less time than that. If not by anything novel, then at least by emulating high-level structures in the human brain and integrating them with less "natural" technologies. The human brain is not as much of a mystery as people make it out to be; there are still plenty of unanswered questions, but we won't need to answer all of them to achieve human-level intelligence AIs. Such AIs may be flawed in the sense that they are hardly perfect and can be as fallible as humans, but with much greater potential learning capacity.

If your definition of Strong AI means a general AI which does not use heuristics to solve problems (it uses exhaustive approaches), then that is indeed a pipe dream due to the amount of resources required (time and memory).

I don't see a reason why humans won't be able to achieve AT LEAST human-level intelligence AIs. If you see one, care to elaborate?

Name: Anonymous 2010-11-10 14:57

>>1
There is code that generates more code.
They are called macros.
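Python doesn't have macros in the Lisp sense, but the underlying idea (code that generates and then runs more code) can be sketched in a few lines; the `add_5` function here is a made-up illustration, not anything standard:

```python
# Build the source of a new function as a string, then
# compile and run it. The generated function is ordinary code.
def make_adder_source(n):
    return "def add_{0}(x):\n    return x + {0}\n".format(n)

namespace = {}
exec(make_adder_source(5), namespace)  # "expand" the generated code

print(namespace["add_5"](10))  # prints 15
```

Real macro systems (Lisp, the C preprocessor) do this at compile time rather than at runtime, but the code-that-generates-code part is the same.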

Name: Anonymous 2010-11-10 17:53

print "print 'Hello, World!'"
JOBS OVER

Name: !L33tUKZj5I 2010-11-10 19:02

>>8
>Depends on one's definition of Strong AI. What is yours?
An AI good enough to never get trolled.

Name: Anonymous 2010-11-10 19:36

>>11
while true:
  print "IHBT"

Name: Anonymous 2010-11-10 20:08

>>11
Since humans are capable of being trolled, that would certainly rule out humans or anything based on them.

Name: !L33tUKZj5I 2010-11-10 21:06

>>13
I thought eventually there was going to be AI coded that could improve itself?

Name: Anonymous 2010-11-10 21:34

>>12

The first letter in ``true'' is supposed to be uppercase.

Name: Anonymous 2010-11-10 22:20

>>14
Humans can improve themselves by changing the way they reason about things (through a good education in math, philosophy, logic, etc.). However, in doing so they partially subvert naturally learned heuristics which are themselves fallible, but 'good enough' to allow a human to function. Actually, without them humans wouldn't be able to function at all - they are at the essence of human intelligence.

Such an AI would have to base its reasoning on similar heuristics (probabilistic logic is one example among others), yet most heuristic reasoning is fallible. And a general AI which only works in absolutes would be rather useless in the real world, which mostly deals in fuzzy/uncertain data. Of course, for types of problems that work with much more absolute concepts (such as theorem proving), one can apply different types of "reasoning"; but as most people probably know by now, finding a solution to certain types of problems with some "general" algorithm can be undecidable (or in other cases NP-hard, etc.), making such an AI mostly useless without guidance (human heuristics, or an AI using some advanced form of heuristic reasoning).

As for the case of detecting ``trolling'', one would first need a good definition of what the act constitutes: different people have different concepts of what it means to ``troll'' or ``meta-troll'' - as with many subjective words/concepts (for example: ``God'', ``qualia'', ``love'', ``consciousness'' and so on). It might be argued that merely considering a troll is a way of self-trolling, thus conceiving something that is immune from trolling is impossible. I could propose various forms of this as a ``Trolling paradox'', similar to the ``Liar paradox''. I could also postulate that a rock is untrollable by the fact that it cannot reason, so a rock would fit the definition of ``strong AI''; however, the most common understanding of ``strong AI'' suggests that some form of intelligence or reasoning is required, thus no such ``strong AI'' can be ``untrollable''. This means that either ``untrollability'' is unfit to judge whether a ``strong AI'' can exist, or that a ``strong AI'' cannot exist, simply because anything that reasons is ultimately susceptible to trolling.
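To make the heuristics-versus-exhaustive-search contrast concrete, here's a toy subset-sum comparison (my own sketch, not a standard algorithm pair): the exhaustive search is always correct but takes O(2^n) steps, while the greedy heuristic is fast and usually 'good enough', yet fallible.

```python
from itertools import combinations

def exhaustive_subset_sum(items, target):
    # Try every subset: always finds an exact answer
    # if one exists, but examines up to 2^n subsets.
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(combo) == target:
                return list(combo)
    return None

def greedy_subset_sum(items, target):
    # Heuristic: grab the largest items that still fit.
    # Fast, but can miss solutions the exhaustive search finds.
    chosen, total = [], 0
    for x in sorted(items, reverse=True):
        if total + x <= target:
            chosen.append(x)
            total += x
    return chosen if total == target else None

items = [5, 4, 3, 3]
print(exhaustive_subset_sum(items, 6))  # finds [3, 3]
print(greedy_subset_sum(items, 6))      # grabs 5 first, then fails: None
```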

Name: !L33tUKZj5I 2010-11-10 22:58

>I could also postulate that a rock is untrollable by the fact that it cannot reason
Fair enough, let me elaborate. An AI that is deemed to have independent conscious thought, and that is unable to be trolled.

>anything that reasons is ultimately susceptible to trolling.
Just because something does not exist yet doesn't mean it never will; people a thousand years back probably thought we'd never go to the moon. Well, most of them.

Name: Anonymous 2010-11-10 23:14

>>17
You'll have to provide a definition of ``trolling'' and ``meta-trolling''. The popular definitions of the two terms (and the way ``meta-trolling'' can itself be considered a form of ``trolling'') usually lead one to the conclusion that anything can be a potential ``troll'' or ``meta-troll'', thus it's logically impossible to avoid trolling (except by not reasoning at all, or completely ignoring everything).

Name: !L33tUKZj5I 2010-11-10 23:36

>>18
Fine. Trolling is trying to provoke a strong emotional reaction from someone instead of wanting to actually have a discourse with them, or educate them.

Name: Anonymous 2010-11-10 23:49

>>19
Wouldn't any ``Strong AI'' incapable of emotion qualify? Even a human with a damaged amygdala could qualify. Of course, both the human and the AI could probably learn to emulate emotional behaviour, but that would not mean they experience "genuine" emotions.

Name: !L33tUKZj5I 2010-11-10 23:55

>>20
Good point... I suppose you could expand it to say that getting someone to keep repeating themselves by pretending not to understand them would be added to that list.

Name: Anonymous 2010-11-11 0:47

>>15
YHBT

Name: sage 2010-11-11 17:33

Considering that we haven't got a clue how consciousness functions, how can we build an AI? Current research points to consciousness requiring quantum entanglement; that means we are a long, long, long time away from any processing power of that sort.

I fucking hate these singularity hipsters.

>>21
>>20
>>19

Name: Anonymous 2010-11-11 18:00

>Current research points to consciousness requiring quantum entanglement
[citation needed]

Name: Anonymous 2010-11-11 19:00

>>1
That day is a long way off.
There are going to be a LOT more lower level jobs taken first by "AI".  Programmers will be the ones making it possible.

Worry about the entitled masses first.

Name: Anonymous 2010-11-11 19:10

That's like making a robot to write a book for you. OP is a silly goose.

Name: Anonymous 2010-11-11 20:37

Name: !L33tUKZj5I 2010-11-12 3:36

>>26
I see no reason why that can't happen one day.
It might not be as good as one made by a human in some respects, but then again who knows?

Name: Anonymous 2010-11-12 11:58

Who maintains the robots.

Other robots?

Who maintains the robot robot maintainers?

Other robots?

...

...

...

...

...

Humans.

Name: Anonymous 2010-11-12 12:44

Why would you use a robot to program? That's stupid.

Name: !L33tUKZj5I 2010-11-12 13:18

>>29
They maintain each other, like humans do.

Name: Anonymous 2010-11-12 13:38

>>29
Doctors.

Have a nice day. :-)

Name: Anonymous 2010-11-12 13:51

>>31,32

But we didn't already establish that there will be other robots that are able to maintain themselves and code, or robots that can heal other robots?!

The OP specifically said robots who could program.

I see you two don't pay that much attention to detail, which is kind of disappointing...

Name: Anonymous 2010-11-12 14:27

I'm not worried. I'm actually working on genetic algorithms and self-learning AI research.

I'll MAKE those robots, get rich in the process and won't have to work anyway.
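For the curious, the core loop of a genetic algorithm fits on one screen. This is a toy sketch of mine (evolving a fixed string by selection and mutation), nothing resembling actual research code:

```python
import random

random.seed(0)  # deterministic run, for illustration

TARGET = "HELLO"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s):
    # Count the characters that already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.2):
    # Replace each character with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

# Random initial population; then selection + mutation each generation.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:20]          # selection: the fittest survive
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(80)]

print(population[0], "after", generation, "generations")
```

The essential ingredients are the same at any scale: a fitness function, selection pressure, and random variation.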

Name: Anonymous 2010-11-12 16:24

Name: Anonymous 2010-11-12 22:59

>>23
I think you're a bit confused.

>Considering that we haven't got a clue how consciousness functions, how can we build an AI?

There are two problems concerning consciousness.

The "simple" one is merely how cognitive functions work, how "thoughts" are generated, and so on. It is about understanding the physical processes at work at the low level (the processes that make neurons work, and the interactions between them) and at the high level (interactions between functional blocks of the brain, and so on - even if in reality there is a continuity between regions, and no clear delimiter). There are a lot of books covering the low-level processes and some which attempt to construct high-level theories (some of which are quite logical and possibly how we actually work).
The "hard" problem of consciousness is the existential one: what it is "to be like something" - it concerns our perception and the nature of qualia. There is nothing physical that indicates we have qualia. Either this raw/basic form of consciousness doesn't exist - it's an illusion caused by our brain, and we are just hardwired to believe in it (see "Consciousness Explained" by Daniel Dennett for this view) - or it is a property of the world we live in, either non-physical (see David Chalmers' "The Conscious Mind") or physical (see Penrose's books, since this appears to be your viewpoint).
The view Dennett gives is rather logical and might be how it actually works, however I find it terribly hard to believe in, as that's probably how I'm hardwired to be.
The view Chalmers gives is a bit more interesting: it supposes that consciousness would just form in any system organized so that it would form, and that system would experience qualia and so on - it's not something you need to engineer at all, it just appears naturally and is a basic property of everything in the world.
The view that Penrose/Searle gives is actually the hardest for me to stomach and one of the most unpopular (the one you seem to think is true: consciousness is due to quantum entanglement); assuming it were true, it would lead to some really unlikely conclusions.
>Current research points to consciousness requiring quantum entanglement, that means we are a long long long time away from any processing power of that sort.
I'd really like a citation on this. The only one who seems to believe this is Penrose, and this view is highly unpopular.

You should read the literature and draw your own conclusions on this problem.

However, regardless of what point of view you take on this "hard" problem of consciousness, you will notice that it's not required at all for building (strong) AIs. The only thing that changes is whether you think such AIs can achieve a consciousness similar to yours. It doesn't mean such AIs can't achieve the same level of intelligence as you, or much better.

I'd also like to understand why you think completely understanding consciousness (especially the hard problem) is of any importance to building general AIs. It may be something we humans are puzzled about, but I really doubt it has any bearing on whether we can build them.

I've been loosely following various neuroscience and AI research and I do think we'll be able to achieve human-level intelligence AIs within 50 years or less (probably much less), and I'd also be interested in doing research in some of these fields (and some related ones) someday.

Name: Anonymous 2010-11-12 23:52

>>33
>>29-san posits these robot-maintaining robots just so, attention-to-detail-sama.

Name: !L33tUKZj5I 2010-11-13 15:43

>>36
Holy shit, five star post nigga.
You made me rethink a lot of stuff.

Name: Anonymous 2010-11-13 16:08

>>38
What's a ``post nigga''?

Name: !L33tUKZj5I 2010-11-13 16:27

>>39
See, your inability to understand this jive talk is what stops you getting invited to parties.
