>>1
With respect to your question, I'll assume you're limiting yourself to a single architecture. The best way to understand this is actually to look back at the history of software development.
(Note that this *is* a simplified, and not completely accurate, representation. I'm no historian, nor do I pretend to be.)
Back in the day, computers were programmed by essentially opening and closing switches, generally by plugging cables into a large board. There was no such thing as a stored program, so once a computer was programmed this way, it stayed that way until somebody came along and physically changed the configuration of the cables.
Now, this really sucked. A lot. You needed highly intelligent experts to program even the simplest calculations, because they had to know and understand the entire innards of their computer. Fortunately for everybody, technology evolved.
Eventually, through the work of Turing and many other very intelligent folks, it became understood that you *could* create a computer that was able to run programs defined elsewhere: a machine that could emulate other machines. This led to stored programs.
Of course, the languages in use didn't immediately evolve with that concept. It was just really cool that we could write out (or punch into cards, or however they did it) all those ones and zeroes that we used to wire up manually.
And, yes, they were essentially using ones and zeroes to do this work. Straight machine code, very complex, very difficult, very error-prone. Forgot a '1'? 'splode.
And so the early assembly languages were created as mnemonics for all those ones and zeroes. It's way easier to remember "MOV R1 R2" than it is to remember "11101001". This was a breakthrough in its own right, because it required the first assembler: a program designed to parse those mnemonics and generate the corresponding binary code.
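To make that concrete, here's a toy sketch in C++ (since that's where you're starting) of the core job an assembler does: read mnemonics, emit bytes. The instruction set, opcodes, and register numbers below are completely made up for illustration; a real assembler follows the documented encodings of an actual architecture.

    // Toy "assembler": turn mnemonics into bytes.
    // Every opcode and register number here is invented for the example.
    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>
    #include <vector>

    int main() {
        // Hypothetical encodings for a made-up two-register machine.
        const std::map<std::string, std::uint8_t> opcodes = {
            {"MOV", 0x01}, {"ADD", 0x02}, {"HLT", 0xFF}};
        const std::map<std::string, std::uint8_t> registers = {
            {"R1", 0x00}, {"R2", 0x01}};

        const std::vector<std::string> source = {"MOV R1 R2", "ADD R2 R1", "HLT"};

        std::vector<std::uint8_t> binary;  // the "machine code" we emit
        for (const auto& line : source) {
            std::istringstream in(line);
            std::string mnemonic, a, b;
            in >> mnemonic >> a >> b;
            binary.push_back(opcodes.at(mnemonic));             // opcode byte
            if (!a.empty()) binary.push_back(registers.at(a));  // operand bytes
            if (!b.empty()) binary.push_back(registers.at(b));
        }

        for (auto byte : binary)
            std::cout << std::hex << static_cast<int>(byte) << ' ';
        std::cout << '\n';
    }

Run it and all you get is a short string of hex bytes, which is exactly what a real assembler hands to the machine (just with encodings the hardware actually understands).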
Gradually, though, a desire formed to create languages that would represent the problem domain more accurately than assembler. Assembler was functional, it's true, but certain patterns occurred over and over, and it would be a lot easier to use those patterns as building blocks than to write the assembler out directly every time.
And so high-level languages were born. And a plethora of them, at that. Because high-level languages (COBOL, FORTRAN, B, C, BASIC, Pascal, Ada, Forth, Lisp, and so on) were developed for various goals, they emphasized different aspects of their problem domains. FORTRAN (from "Formula Translation") was developed for scientific and mathematical applications, for example, while COBOL was developed for business software. Different targets, different languages, different semantics. People continue inventing languages today, in fact, for specific problem domains. There's even research into languages whose job is to (ah-ha!) create and define domain-specific languages!
So you see, these languages weren't all developed at the same time as computers. Rather, it's been a layered evolution of the industry which has resulted in the state of things today. (Oh, btw, assembler is no longer the lowest-level language, either. Below assembler, in many modern general-purpose processors, lies a processor- or architecture-specific language called "microcode".)
So yes, the compiler translates your C++ code into assembler (or directly into machine code). Assembler is not a high-level language, though; it's essentially a direct, one-for-one notation for binary code. Expressing a whole program at that level is extremely non-trivial, which is why very few specialists write exclusively in assembler these days. Compilers can typically do a better job than a single person anyway.
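You can actually watch that translation happen. Most compilers will stop after generating assembly if you ask; with GCC or Clang, that's the -S flag. As a minimal example, put this in a file called square.cpp:

    // square.cpp -- a trivial function, so the generated assembly stays readable
    int square(int x) {
        return x * x;
    }

Then run "g++ -S -O2 square.cpp" (or "clang++ -S -O2 square.cpp") and open the resulting square.s to see the assembly your compiler produced. The exact output depends on your compiler, options, and architecture, of course.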
As for a universal programming language, some high-flying researchers have considered it, but it's really a faulty idea. The reason is that a language must, at some level, reflect the architecture of the underlying hardware, and you'll never get that kind of universal acceptance from the companies involved.
And even if you did, the proliferation of high-level languages is really a natural consequence of the fact that, in design (specifically, problem-domain specification), there's no single right answer. It's all a question of trade-offs, and the only thing a universal language would really accomplish is making everybody equally unhappy with it.
C++ is a good general tool for many purposes and it's quite popular in the industry, so it's a good place to start. It isn't (nor should it be) the only option for all problems, though. That'd be like saying a scalpel is the only manual tool anybody could ever need. Patently silly.
I hope this helped. If not, ask your question again. :)