>>1
robots.txt is about interacting with search engines that play nice, not "hiding" pages, dipshit. The file is public, so every page you list in it is advertised to anyone who bothers to fetch it; if you don't want a page found, don't mention it in there at all. And if you've got the bandwidth to send an infinitely large amount of random data, how much are you paying per month? (Assuming you've got a good enough entropy source to keep the pool filled, or that you mean "not actually random".)
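To make the footgun concrete, here's what "hiding" a page in robots.txt actually publishes to the whole world (paths are made up for illustration):

```
User-agent: *
Disallow: /secret-admin/
Disallow: /drafts/
```

Any human or hostile crawler that fetches /robots.txt now has a tidy list of exactly the paths you didn't want found.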
So robots.txt is the braindead way to go. You could switch content on the User-Agent header, but then again, the crawlers that don't play nice are going to lie about that anyway. Or you could do what anyone with half a fucking brain does: HTTPS plus a WWW-Authenticate header. HTTPS is taxing to do in bulk, so crawlers don't tend to bother with it as much. Requiring auth (with the credentials kept from snooping by HTTPS) keeps unwanted eyes out of it, with the usual caveats.
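A minimal sketch of the HTTPS-plus-auth approach: until the client presents valid credentials, answer 401 with a WWW-Authenticate challenge. The username, password, and realm below are made-up placeholders, and this only checks Basic auth (the TLS part is assumed to be handled by the server in front of it):

```python
import base64

# Hypothetical credentials, for illustration only. In practice these would
# come from a credential store, not constants in the source.
USER, PASSWORD = "alice", "hunter2"

def check_auth(header):
    """Return True iff an Authorization header carries valid Basic credentials."""
    if not header or not header.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(header[len("Basic "):]).decode("utf-8")
    except Exception:
        return False
    user, _, password = decoded.partition(":")
    return user == USER and password == PASSWORD

def respond(auth_header):
    """Sketch of the server's decision: challenge until the creds check out."""
    if check_auth(auth_header):
        return 200, {}
    # The challenge tells well-behaved clients how to authenticate;
    # everything else just sees a 401 and no content.
    return 401, {"WWW-Authenticate": 'Basic realm="keep out"'}
```

The caveat alluded to above: Basic auth sends the password on every request, which is exactly why it's only sane behind HTTPS.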