>>18,19
Depends on the algorithm you're using. For something like this it would likely be many different models in a hierarchy: unsupervised learning and clustering to categorize the data, surface relations humans are unable to see, find patterns, decide what to focus on, etc.; then supervised learning (goal-based optimization) on tagged / semantic data sets to give it a sort of "direction". I assume the rest would just be simulations run on the data received. If you have access to every piece of information being produced, you can probably start to predict future events to some extent. And if you can do that for some category of events, you can test how a given action / policy / regulation / etc. will propagate through the system.
The actions deemed optimal will be the ones that give greater stability to the system as a whole.
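A toy sketch of that pipeline, just to make the idea concrete (every name, number, and "policy" here is invented for illustration): cluster raw observations with a bare-bones k-means, then score a few candidate policies by simulating their effect on the data and keeping whichever leaves the system most stable, with variance standing in as a crude stability proxy.

```python
import random
import statistics

random.seed(0)

# --- Unsupervised step: 1-D k-means to categorize raw observations ---
def kmeans_1d(data, k, iters=50):
    centers = random.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda i: abs(x - centers[i]))
            clusters[nearest].append(x)
        centers = [statistics.mean(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Fake "indicator" data: two regimes mixed together
data = [random.gauss(10, 1) for _ in range(50)] + \
       [random.gauss(30, 2) for _ in range(50)]
centers, clusters = kmeans_1d(data, k=2)

# --- Simulation step: apply each candidate policy, keep the most stable ---
def simulate(data, policy):
    # A "policy" here is just a function that perturbs each observation.
    return [policy(x) for x in data]

policies = {
    "do_nothing": lambda x: x,
    "dampen":     lambda x: x * 0.5 + 10,  # pulls values toward the middle
    "amplify":    lambda x: x * 1.5,
}

# "Stability" proxy: variance of the simulated system (lower is better).
scores = {name: statistics.pvariance(simulate(data, p))
          for name, p in policies.items()}
best = min(scores, key=scores.get)
print("cluster centers:", [round(c, 1) for c in centers])
print("chosen policy:", best)
```

The real thing would obviously be nothing like this, but the shape is the same: categorize first, then search over actions by simulating forward and ranking on a stability objective.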
The point is that humans don't stop the microprocessor-layout AIs midway through and say "oh no, you should take this path instead". It's incomprehensible; the billions of pathways that need to be synchronized are not something humans could ever begin to make sense of. So it's all left to the AI: candidate layouts are simulated and tested, and the best one is chosen.
Same with Watson. The IBM devs didn't tell Watson to stop cursing; they purged the Urban Dictionary data from its memory, because the algorithms are too complex and too interdependent to just say "refrain from using this class of words" (and because of context-dependent problems in human speech, etc.).
The technocracy AIs would operate on data sets humans could never even imagine, much less try to purge or filter or what have you.
Either way, humans are obviously incapable of governing themselves; at this point this is really the only viable alternative. If it works, great. If it doesn't, we never had a chance in the long run anyway.