
Leaked NSA Software

Name: Anonymous 2013-08-27 6:02

https://encyclopediadramatica.se/PRISM#Parabon_Leaks
https://mega.co.nz/#F!00xx1QjA!G1kBmCk0C5Qo8H8kZ-Qhjg!dxxxXCQJ

It can easily hack HTTPS/SSL. The key encryption software is rendered useless garbage. You have been warned. All online banking is now over. The Internet as we know it is now over.

Expect the media to ignore it for a few weeks, while the governments start WW3 and use it to justify taking down the Internet.

Name: Anonymous 2013-08-27 6:39

Very ENTERPRISE.

No exceptions, etc.

Name: Anonymous 2013-08-27 8:42

> while the governments start WW3 and use it to justify taking down the Internet.
You do not understand how the Internet works.

Name: Anonymous 2013-08-27 9:34

Let the government do their job. After all, most of the information is looked at by robots and AI anyway. A lot of helpful algorithms could be run that could benefit society as a whole. Everyone loves going to the store to buy their food, but how many people want to contribute to that?

Name: Anonymous 2013-08-27 9:45

dounlouding

Name: Anonymous 2013-08-27 9:46

I would only trust the government if the government were an open-source machine.

Name: Anonymous 2013-08-27 10:01

>>6
It isn't as if you would have the time to read and audit the entire source anyway. Besides, even if you did, you certainly wouldn't fully understand all the processes involved. AI algorithms applied to massive data problems become black boxes.

Therefore it follows that no one would ever really be able to put a bias into a machine of this complexity, for it outweighs the combined cognitive power of everyone working on it. It will figure out relations by itself and it will be fair, because that's the only way it will work.

>>4 is right. I don't honestly see a problem with any of this. The amount of data is so astronomical that humans will never look at any of it. It will just be AIs. I don't care if AIs look at my data. It's the only way to really advance beyond a certain point, anyway.

Name: Anonymous 2013-08-27 10:11

>>7
You don't have to audit it yourself if you trust your community of auditors to share the burden of auditing.

Name: Anonymous 2013-08-27 10:12

>>8
That doesn't solve any problems. Then you're just shifting trust from one party to another.

Name: Anonymous 2013-08-27 10:52

>>7

That's the thing. You can always do unit testing and integration testing. You write simple tests to make sure things go the way they should.
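
A minimal sketch of the distinction, assuming a made-up toy Ledger class as the code under test: the unit test checks one method in isolation, the integration-style test checks that two pieces behave correctly together.

    import unittest

    # Hypothetical code under test: a toy in-memory "bank".
    class Ledger:
        def __init__(self):
            self.balances = {}

        def deposit(self, account, amount):
            self.balances[account] = self.balances.get(account, 0) + amount

        def transfer(self, src, dst, amount):
            if self.balances.get(src, 0) < amount:
                raise ValueError("insufficient funds")
            self.balances[src] -= amount
            self.deposit(dst, amount)

    class UnitTests(unittest.TestCase):
        def test_deposit_adds_to_balance(self):
            ledger = Ledger()
            ledger.deposit("alice", 100)
            self.assertEqual(ledger.balances["alice"], 100)

    class IntegrationTests(unittest.TestCase):
        def test_transfer_moves_money_between_accounts(self):
            ledger = Ledger()
            ledger.deposit("alice", 100)
            ledger.transfer("alice", "bob", 40)   # exercises deposit + transfer together
            self.assertEqual(
                (ledger.balances["alice"], ledger.balances["bob"]), (60, 40))

    if __name__ == "__main__":
        unittest.main()

The tests only confirm the behaviour someone thought to write down; they say nothing about the behaviour nobody anticipated.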

Name: Anonymous 2013-08-27 10:54

>>9
An auditor who finds a legitimate problem in the system has an incentive to report it. The auditor who reports an issue before others earns prestige amongst the community of auditors. While it's possible that an auditor will find some problem and keep that information quiet for his own advantage, the hidden problem would not stay hidden for very long when there is some other auditor who desires the prestige of finding a problem within the system.

Name: Anonymous 2013-08-27 11:00

>>4,7

List the benefits the elimination of private communication has for society as a whole. Make sure you read about East Germany before replying.

Name: Anonymous 2013-08-27 11:01

> It can easily hack HTTPS/SSL. The key encryption software is rendered useless garbage. You have been warned. All online banking is now over. The Internet as we know it is now over.
This is how SSL is intended to work.  If you were surprised by this, you are not a programmer.

Name: Anonymous 2013-08-27 11:08

>>12

East Germany wasn't ruled by AIs. I already listed the main benefit. The only way a system like that would work is if it were fair.

It isn't about the lack of privacy, but about the technological advancement. As time goes on, infrastructure in general becomes more and more vast, and at a certain point all of it will need to be completely managed by AIs. It's starting to appear in cluster / distributed computing platforms, for example Amazon's EC2, which is self-regulated and self-healing, both intra-zone and inter-zone. Sometimes the algorithms go insane and take everything down, but it's only the beginning. Scale this up to critical infrastructure: power grid management, traffic routing, crime detection, etc. These systems are the first step toward a world that's truly optimal, because algorithms at this scale will not work if they're biased; they self-regulate because they all want the same thing, the health of the system. That's how the best protocols work: no real limit, just self-stabilization across the participants.

And given the way blacklisting works at the protocol level, truly detrimental individuals will be weeded out of the world, and if you have something against that, then you are a truly detrimental individual.

Name: Anonymous 2013-08-27 11:09

>>13
Explain yourself ``programmer''.

Name: Anonymous 2013-08-27 11:16

>>14
Humans construct machines and write software, and inject their bias into AIs. AIs do nothing more than find an optimal way to carry out the goals their designer intended. You are advocating for a totalitarian government in which the ruler makes ``good'' decisions. The only difference between your version and the classical one is that the ruler has a vast technological array of surveillance and data processing for their own use. The unanswered question is who gets to decide what counts as detrimental, and what actions are taken against the ones marked as detrimental. To put it bluntly, you are a fascist with a technology fetish.

Name: Anonymous 2013-08-27 11:29

>>16
> Humans construct machines and write software, and inject their bias into AIs.
Not how it works. Humans will have an end goal in mind. The AI algorithm actually achieves that goal. As an example, AI algorithms have made microprocessor layouts for all processors made in the past 20 years. Humans want an optimal layout, but it is the AI that decides how to structure it and what is and isn't possible.
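
A minimal sketch of that kind of goal-driven optimization, assuming a toy placement problem: cells are swapped on a grid to minimize total wire length with simulated annealing. The cost function and cooling schedule are illustrative only, not any real EDA tool's algorithm.

    import math
    import random

    # Toy "place cells to minimize wire length" problem (illustrative only).
    # Nets are pairs of cell indices; cost is total Manhattan distance.
    def wirelength(positions, nets):
        return sum(abs(positions[a][0] - positions[b][0]) +
                   abs(positions[a][1] - positions[b][1]) for a, b in nets)

    def anneal(n_cells, nets, grid, steps=20000, t0=5.0):
        # Start from a random placement: one cell per grid slot.
        slots = [(x, y) for x in range(grid) for y in range(grid)]
        random.shuffle(slots)
        pos = slots[:n_cells]
        cost = wirelength(pos, nets)
        for step in range(steps):
            t = t0 * (1 - step / steps) + 1e-6      # simple linear cooling
            i, j = random.sample(range(n_cells), 2)
            pos[i], pos[j] = pos[j], pos[i]         # propose a swap
            new_cost = wirelength(pos, nets)
            # Accept improvements always, regressions with Boltzmann probability.
            if new_cost > cost and random.random() > math.exp((cost - new_cost) / t):
                pos[i], pos[j] = pos[j], pos[i]     # reject: swap back
            else:
                cost = new_cost
        return pos, cost

    random.seed(0)
    nets = [(random.randrange(30), random.randrange(30)) for _ in range(60)]
    placement, cost = anneal(30, nets, grid=8)
    print("final wirelength:", cost)

The human only states the cost to minimize; the search decides the actual layout.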

When said end goals become too abstract and start working at the level of complex-system stability, humans will not be able to apply their bias to an AI's "workings" because it will be far too complex. Look at Watson, for example: when they fed it Urban Dictionary, it started cursing. That was not foreseen, yet the unsupervised learning algorithms figured out those relations and incorporated them into its future actions.

In data sets as massive as every single piece of digital information created, one will not be able to say "Oh, I'll just suppress all the data parts that will make the AI not do what I want", because you would never even begin to comprehend what those parts are. The only way that systems like this work is if they're allowed to stabilize and come to the most optimal configuration by processing as much data as possible.

If a human decides to be the arbiter of what is and isn't a ``good'' decision in this, the system will plainly not work. Because one would not be able to foresee the chains of process/result/action that the AI is able to keep track of. Therefore it is not a system that can be directed, it is a system that must direct itself through finding the most optimal paths.

You are naive and cower in fear in the face of advancement that you might not be able to understand. That's your problem.

Name: Anonymous 2013-08-27 11:45

>>17
No, I just understand what people do with limitless power, and history has shown it time and time again. There is another vital piece of information missing from your post. How does the AI measure optimality?

Name: Anonymous 2013-08-27 11:51

>>18
Close enough is good enough. Find a number of close solutions, then pick one. Finding the truly optimal solution to many of these problems is probably NP-hard.
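
A minimal sketch of the "close enough" approach, assuming a toy non-convex objective: random-restart hill climbing returns the best local optimum it happens to find, not a guaranteed global one.

    import math
    import random

    def objective(x):
        # Toy function with several local minima (illustrative only).
        return (x * x - 16) ** 2 + 5 * math.sin(3 * x)

    def hill_climb(start, step=0.1, iters=2000):
        x, best = start, objective(start)
        for _ in range(iters):
            candidate = x + random.uniform(-step, step)
            score = objective(candidate)
            if score < best:          # keep any improvement, never backtrack
                x, best = candidate, score
        return x, best

    # "Close enough": run a handful of cheap local searches and keep the best,
    # accepting that it may not be the true global optimum.
    random.seed(1)
    solutions = [hill_climb(random.uniform(-6, 6)) for _ in range(20)]
    x, score = min(solutions, key=lambda s: s[1])
    print("good-enough minimum near x = %.3f (score %.3f)" % (x, score))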

Name: Anonymous 2013-08-27 11:51

s/another/one

Name: Anonymous 2013-08-27 11:58

>>19
I understand its algorithm will be some kind of heuristic search, but what measurement does it aim to maximize or minimize? In order to operate, it needs to differentiate a good scenario for society from a bad one. So where does this judgement come from? Who creates it or defines it?

Name: Anonymous 2013-08-27 12:01

>>18,19
Depends on the algorithm you're using. For things such as this, it would likely be many different models in a hierarchy. Unsupervised learning and clustering to categorize data, figure out relations humans are unable to see, what to focus on, patterns, etc. Supervised learning (goal-based optimization) with tagged / semantic data sets to give a sort of "direction". I assume the rest would just be simulations on the data received. If you have access to every piece of information that is produced, you can probably start to predict future events to some extent. So if you can do that for some category of events, you can test out how certain actions / policies / regulations / etc will propagate through the system.
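
A minimal sketch of that unsupervised-then-supervised hierarchy, assuming scikit-learn is available and using synthetic data in place of the massive data sets being discussed; the labels and feature construction here are stand-ins.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Unlabeled "traffic" with two hidden behaviour modes nobody defined by hand.
    data = np.vstack([rng.normal(0, 1, (500, 4)), rng.normal(3, 1, (500, 4))])

    # Unsupervised stage: let clustering invent the categories.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)

    # Supervised stage: a small tagged subset supplies the "direction".
    tagged_idx = rng.choice(len(data), size=50, replace=False)
    tags = (data[tagged_idx].mean(axis=1) > 1.5).astype(int)   # stand-in labels

    features = np.column_stack([data, clusters])   # cluster ids feed the next layer
    model = LogisticRegression().fit(features[tagged_idx], tags)
    print("predicted 'direction' for the first rows:", model.predict(features)[:10])

The clusters are categories no human specified; the small tagged set is the only place a human-provided goal enters.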

The actions deemed optimal will depend on which ones give greater stability to the system as a whole.

The point is that humans don't stop the microprocessor layout AIs midway through and say "Oh, no, you should take this path"; it's incomprehensible. The billions of pathways that need to be synchronized are not something humans could ever begin to make sense of. So it's all left to the AI. The model is tested and simulated, and the best-effort one is chosen.

Same with Watson. The IBM devs didn't tell Watson to stop cursing; they purged Urban Dictionary's database from its memory, because the algorithms are too complex / too interdependent to just say "refrain from using this class of words", and also because of context-based problems in human speech, etc.

The technocracy AIs would operate on data sets that humans would never even be able to imagine, much less try to purge or filter or what have you.

Either way, humans are obviously incapable of governing themselves; at this point this is really the only viable alternative. If it works, great. If it doesn't, we never had a chance in the long run anyway.

Name: Anonymous 2013-08-27 12:09

>>22
Oh, and if you're going to ask "Why is it that the actions chosen will be the ones geared toward stability?", it's because an AI would likely see this as participants in a protocol. If I ask, "What will benefit me? Take those actions", then on a long-term, massive-data scale, the actions that benefit any one individual the most on average are the ones that better every individual, because then the world is made more stable, more resourceful, etc. Why? Taking actions that benefit only a class of people leaves room for exploitation. Favouring classes allows for exploiting those classes, in the same way that the favoured classes are exploiting the system itself. So to prevent this, a proper protocol would have a fair decision spread, where all decisions benefit everyone equally over time, so no one can gain an advantage and break everything.

As an example, see TCP congestion control.
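
A minimal sketch of why that example fits, assuming the classic additive-increase / multiplicative-decrease rule and a fixed-capacity link shared by two senders; real TCP is far more involved.

    # Additive-increase / multiplicative-decrease (AIMD) on a shared link.
    # Whoever holds more bandwidth loses proportionally more on each congestion
    # event, so the two flows drift toward an equal split over time.
    CAPACITY = 100.0

    def step(rates):
        if sum(rates) > CAPACITY:             # congestion: everyone backs off
            return [r / 2 for r in rates]
        return [r + 1 for r in rates]         # otherwise: everyone probes upward

    flows = [5.0, 80.0]                       # wildly unfair starting point
    for _ in range(400):
        flows = step(flows)

    print("long-run shares: %.1f vs %.1f" % (flows[0], flows[1]))

Neither flow is ever told what the fair share is; roughly equal shares fall out of every participant following the same back-off rule.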

Name: Anonymous 2013-08-27 12:11

> The actions deemed optimal will depend on which ones give greater stability to the system as a whole.

That's really fucking vague. The extinction of the human race would be very stable from that point onwards. See what the problem is? You just injected your own subjectivity into the AI's judgement.

Name: Anonymous 2013-08-27 12:18

>>15
http://en.wikipedia.org/wiki/Certificate_authority
What do I need to explain?  If you configure a custom CA on the client, you can spoof any site trivially.  That's what the software in question does.  Since NSA surely has access to VeriSign and similar keys, they can do the same with a client in the default configuration.

SSL (as implemented by web browsers) was never designed to provide security from government, only from casual thieves and ISPs.  Other cryptosystems are not affected.
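
A minimal sketch of the client-side trust point, using Python's ssl module; the host and CA bundle file name are hypothetical, and whichever bundle you hand the context is the entire basis of "authenticity".

    import socket
    import ssl

    HOST = "example.com"   # hypothetical target site

    # The client verifies the server against *whatever* CAs it is told to trust.
    # Put an attacker's (or an agency's) CA into the bundle and a spoofed
    # certificate chains up just as cleanly as the real one would.
    ctx = ssl.create_default_context(cafile="trusted-cas.pem")  # hypothetical bundle

    with socket.create_connection((HOST, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            # "Verified" here only means: signed by something in trusted-cas.pem.
            print(tls.getpeercert()["subject"])

Nothing in the handshake distinguishes a legitimate bundle from one with an extra CA added by whoever controls the client configuration.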

Name: Anonymous 2013-08-27 12:20

>>25
Just use a browser set to trust a CA that operates from the moon and be done with it.

Name: Anonymous 2013-08-27 14:15

Idiot technophiles, as if this kind of technology is even possible or will happen completely and optimally.

Go back to lesswrong.com and your other anti-human futurist transhumanist bullshit websites.

Name: Anonymous 2013-08-27 14:16

>>22
Kid, please take an economics class you arrogant idiotic fucktard. You aren't bright.

Name: Anonymous 2013-08-27 14:18

>>17
How about you try to explain how this kind of AI is even possible or will ever exist? Or are you going to point me to the pop-sci websites where you read about this stuff, since it's clear you are truly no authority on this?

Name: Anonymous 2013-08-27 14:19

Does anybody here even have a Ph.D. in computer science? No? Then fuck off back to reddit.

Name: Anonymous 2013-08-27 14:33

Looks like the lesswrong kike found /prog/.

Name: Shlomo AIberg 2013-08-27 14:43

>>21
> In order to operate, it needs to differentiate a good scenario for society from a bad one. So where does this judgement come from? Who creates it or defines it?
Good is defined by the number of goyim that are killed. The deader, the better. We will have robots as our slaves.

Name: Anonymous 2013-08-27 14:59

>>31
I welcome the New Jew. Too many /g/oys lately.

Name: Anonymous 2013-08-27 15:00

And by /g/oys I obviously meant /g/oyim. I'm such a putz.

Name: Anonymous 2013-08-27 17:51

>>25
BS! SSL/TLS could be used without CAs, simply by using self-signed certs, just as you use PGP without keyservers.
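
A minimal sketch of what that looks like in practice: pin the server certificate's fingerprint instead of trusting any CA. The host and the expected fingerprint below are placeholders.

    import hashlib
    import ssl

    HOST, PORT = "example.com", 443                   # placeholder peer
    PINNED = "replace-with-known-sha256-fingerprint"  # exchanged out of band

    # Fetch whatever certificate the server presents; no CA signature is consulted.
    pem = ssl.get_server_certificate((HOST, PORT))
    der = ssl.PEM_cert_to_DER_cert(pem)
    fingerprint = hashlib.sha256(der).hexdigest()

    # Trust comes from comparing against a fingerprint you verified yourself,
    # exactly like checking a PGP key fingerprint instead of asking a keyserver.
    print("server presented:", fingerprint)
    if fingerprint != PINNED:
        raise SystemExit("certificate does not match the pinned fingerprint")

The obvious cost is getting that fingerprint out of band for every site, which is the gap CAs were meant to fill.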

Name: Anonymous 2013-08-27 20:08

>>35
It could be, but it isn't.

Name: Anonymous 2013-08-27 21:06

e/g/in :')

Name: Anonymous 2013-08-27 22:35

To the faggots arguing for a strictly computerized government: whoever writes the software gets to introduce bias. The software also can't do shit about civil matters, so it's really limited to economic stuff (which is the area of governance where most corruption happens anyway).

Name: Anonymous 2013-08-27 23:13

this thread reminds me of shitty discussions about the judgment computers in neon genesis evangelion on various anime-related forums and BBSes

bad input yields bad output, end of story

Name: Anonymous 2013-08-27 23:19

We should use this protocol instead.

1. Know the website you want to visit and the author.
2. Drive to the author's house or company.
3. Confirm their identity using as many as the following as possible:
  a. DNA match to a blood sample taken forcibly.
  b. DNA match to sexual fluid taken forcibly.
  c. face and iris match taken forcibly.
4. Demand the author write down the site's SSL public key using trusted paper and pencil.
5. Drive home.
6. Use the author's public key to access the site over SSL.
7. Share the author's public key with people who trust you to carry out steps 1-5 as intended.
