>>1,4-5
Our current hardware is likely too slow (relatively speaking) and too small (memory-wise). It's not unimaginable that some resource-constrained AGI could run distributed across geographically separated, limited-resource computers such as most PCs (which don't have terabytes of RAM, correspondingly larger disks, or many processor cores with direct, fast access to that data), but it would likely be too slow to be usable in realtime. This applies to AGIs like OpenCog, which take the resource-constrained, high-level (but not too high-level) approach. For anything that tries to run neural networks the size of our brains, current hardware is utterly infeasible as far as memory and energy costs are concerned (some specialized memristive modules designed for exactly this kind of neuromorphic computing should be much more feasible, if they ever get implemented that cheaply; at least one project is underway), and, as I said before, forget about running this over a long-distance network. If you were a mind upload (SIM), the Internet itself would be unbearably slow, and you would all either slow yourselves down to current human realtime speed or just form (geo)local hubs with other SIMs.
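To give a rough sense of why brain-scale networks are out of reach for commodity hardware, here's a back-of-envelope sketch. All the figures (neuron count, synapses per neuron, bytes per synapse) are ballpark assumptions, not measurements, and the estimate covers synaptic weights only:

```python
# Back-of-envelope estimate of memory needed just to hold the synaptic
# weights of a brain-scale neural network. All figures are rough assumptions.

NEURONS = 8.6e10           # ~86 billion neurons (a common estimate)
SYNAPSES_PER_NEURON = 1e4  # ~10,000 synapses per neuron (rough)
BYTES_PER_SYNAPSE = 4      # one 32-bit weight; real models need more state

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
total_tb = total_bytes / 1e12  # convert bytes to terabytes

print(f"Synaptic weights alone: ~{total_tb:.0f} TB")
# Thousands of TB just for the weights, before counting activations,
# transmission delays, or any learning-rule state.
```

Even under these charitable assumptions you land in the thousands-of-terabytes range, which is why the memory argument above doesn't hinge on the exact numbers.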
This doesn't mean that such a human-level AGI, or even a human SIM, couldn't "take over" the Internet in the sense of working with "the force of a thousand script kiddies and security researchers", simply by virtue of running much faster than normal humans (a few orders of magnitude for a SIM, assuming a proper hardware implementation; it's more variable for an AGI, since there are many ways that might work: mindspace is larger there).
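That speedup is also why long-distance networking would feel unbearable from the inside. A quick sketch, with both the speedup factor and the round-trip time chosen as illustrative assumptions (a "few orders of magnitude" speedup, a typical intercontinental ping):

```python
# How a network round trip feels to a mind running faster than realtime.
# Both figures below are illustrative assumptions, not measurements.

SPEEDUP = 1_000     # "a few orders of magnitude" faster than realtime
RTT_SECONDS = 0.15  # ~150 ms, a typical intercontinental round trip

# Wall-clock wait multiplied by the speedup gives the subjective wait.
subjective_wait = RTT_SECONDS * SPEEDUP

print(f"One round trip feels like ~{subjective_wait:.0f} subjective seconds")
# ~150 subjective seconds per round trip, which is the pressure toward
# (geo)local hubs rather than minds spread across the Internet.
```

At 1,000x, every ping becomes minutes of subjective waiting, so a sped-up mind either tolerates that latency, slows down, or clusters locally.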
I wouldn't really worry about taking over the Internet; there are a lot of "nicer" things an AGI (or a SIM, given hundreds of years of subjective time and some options for self-modification) could do that would affect everyone in much more direct and important ways. Obviously this means that if you're working on AGIs, you should be careful about their supergoals/goal systems (if they have any) or about how you educate them: you don't want the first agent that thinks considerably faster than you, superintelligent or not, to be an unethical bastard that can completely mess up our world. (Obviously there are also immeasurable benefits to succeeding in making an AGI, such that even some slight existential risk is tolerable, so working on AGI is very much worthwhile and should be done; that said, we should do our best to minimize that risk through proper, clever design and eventual education.)