
Am I the only one...

Name: Anonymous 2005-10-22 10:12

who thinks GCC sucks? Not the compiling engine, but the interface to it. It's made of confusing, illogical, error-prone options with stupid default values, and the idea of it calling everyone and their mother (and mixing everyone's options together) is as bad as penis cancer. For example, -s is passed directly to the linker, yet -rpath isn't; -Wall and most of -Wstuff control warnings, yet -Wl is some kind of hack to pass comma-separated arguments to the linker, especially the ones not supported by gcc; etc. Whoever did this has a disturbed mind. And I think I know who did this.

Anyways, is there a sane interface which provides completely separate, non-automagical preprocessor+compiler, assembler, and linker, with well thought out options?

Name: Anonymous 2005-10-22 10:21

agree

Name: Anonymous 2005-10-22 12:18

A lot of compilers that have been around for a while have the option problem, especially since the gcc people don't want to add options that are longer than one letter.   As for default options, GCC has been around a long time, and changing them breaks old code.

At the same time, very few people are going to remember more than the most commonly used options.  Any weird ones you need, you look up, put in the makefile, and forget about.

I've never had problems figuring out what options I needed for a given build, and once they're in the makefile I can forget about them.  I don't know about you, but I would have to read the documentation for all but the few options I use a lot.

Now to answer your question:
I'm not aware of any such interface.  Once you had designed the interface, writing it would be pretty easy.  It sounds like you know what you want though so knock yourself out.

Name: Anonymous 2005-10-22 13:35

I found GNAT really easy to use, but that's for Ada...

Name: Anonymous 2005-10-22 18:02

I don't really have many problems with GCC, but then again I tend not to do much that involves obscure command-line flags.

Name: Anonymous 2005-10-22 18:14

>>5
Yeah, but when you want more code optimization, want to enforce some special rule, want to locate libraries elsewhere, etc., you'll have to go through them.

Name: Anonymous 2005-10-23 4:11

Unless you're doing some unusual or strange shit, -O2 should cover everything you need. Are you some Gentoo ricer faggot?

Libraries elsewhere? -I and -L for all your needs.

Warnings? 95% covered by -Wall.

Hard man. Real hard.

Name: Anonymous 2005-10-23 7:33

>>7
-O2 should cover everything you need
No. There are more optimizations possible, so I'm doing them. Besides, you'll want to use -mxx and -march=xxx.

Libraries elsewhere? -I and -L for all your needs.
No. Runtime library search path must be set too. And GCC can't set it directly.

Warnings? 95% covered by -Wall.
I wanted more control over what's warned on.

Hard man. Real hard.
It's not only a matter of whether it's hard or not. It's a matter of whether it sucks big, greasy cocks or not, like many legacy "features" you have to go through in Linux.

Name: Anonymous 2005-10-23 8:28

There are more optimizations possible, so I'm doing them.

In short, you're a ricer. Like I said, unless you're doing something unusual, other options will get you 5% at most. -march is one of the very few exceptions to this.

Runtime library search path must be set too.

LD_LIBRARY_PATH

I wanted more control over what's warned on.

Your code shouldn't emit any warnings with -Wall. Oh, it does? Fix your code.

Name: Anonymous 2005-10-23 10:11

>>9
Anything over 1% is worth doing, especially when it comes "free", at no other cost than compiling time, which is paid only once. The "I don't care" attitude applied to every part of the system is what makes it so slow.

LD_LIBRARY_PATH
Particular software.

Your code shouldn't emit any warnings with -Wall. Oh, it does? Fix your code.
Final versions don't, even with -pedantic, but I want to know what I am enabling.

Name: Anonymous 2005-10-23 22:41

Anything over 1% is worth doing, especially when it comes "free", at no other cost than compiling time,

While at the same time creating buggy code due to flaws in the compiler or mistaken assumptions in the code. Furthermore, how long do you plan to keep testing and compiling before you're happy, and do your optimizations transfer to other computers?

Nah, you're just a ricer. Those -O2 options weren't chosen by chance.

Particular software.

Clueless.

want to know what I am enabling.

You're enabling -Wall, you tard. Furthermore, non-final versions of my code don't have errors either, because I fix them when they occur. Fix your code, you undisciplined lazy twit.

Good coding practices? lol wut?

Name: Anonymous 2005-10-23 22:50

I've minimally (using -march) riced out BSD before; ports too. I honestly didn't notice much of a size difference.

Compiling Mozilla and KDE really SUCKS, let me tell you.

Name: Anonymous 2005-10-23 22:56

>>12 I meant "much of a difference except maybe size"; sorry.

Name: Anonymous 2005-10-24 6:59

Compiling Mozilla and KDE really SUCKS, let me tell you.
Signed. I'm a Gentoo user, but I use the binary package for Firefox because I don't like spending all day compiling every time there's a patch. I've given up on using KDE entirely (I was only using it to start konsole and then do everything on the command line; it seems pointless to deal with a bloated DE just for that).

Name: Anonymous 2005-10-26 11:52

>>14
Enjoy your super-optimized single-terminal DE. I sure hope it runs fast; I'd hate to think all that compiling was a waste of CPU cycles.

In the meantime, I'm going to continue enjoying my slow, bloated KDE that I got installed in only a couple of minutes.

Name: Anonymous 2005-10-26 15:47

>>15
I'll enjoy my fast --fomg-optimized fluxbox that I got installed in a couple of minutes.

Name: Anonymous 2005-10-26 17:01

>>16
Mmm, ion.

Name: Anonymous 2005-10-26 17:29

Is the difference between sse2 and without so noticeable?

Name: Anonymous 2005-10-26 18:47

I request a --rice option which makes GCC call CPUID, decide what's best for your processor, enable all optimizations, strip all symbols, and run.

Name: Anonymous 2005-10-27 2:31

sse2 and without so noticeable

Only in very specific domains. Normal programs won't notice a difference.

Name: Anonymous 2005-10-27 2:32

decide what's best for your processor

If only, but that's impossible.

Name: Anonymous 2005-10-27 5:10

>>21
Not too hard. Google for information on how to interpret what's in your registers after CPUID. Then determine -march. Enable -O3. Disable it for particular cases with known issues. Use the fastest stable FPU mode (forgot the switch). Etc.
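From userland you don't even need raw CPUID: on Linux the kernel has already decoded it into /proc/cpuinfo. A toy sketch of the flag-to-switch mapping (the case table is nowhere near exhaustive, and the choice of -msse/-msse2 is illustrative):

```shell
# Linux-only; falls back to no extra switches elsewhere.
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null || true)
case "$flags" in
  *sse2*) SIMD="-msse2" ;;
  *sse*)  SIMD="-msse"  ;;
  *)      SIMD=""       ;;
esac
echo "picked: $SIMD"
```

Later GCC releases eventually grew -march=native, which performs this kind of dispatch inside the compiler itself.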

Name: Anonymous 2005-10-27 6:07

>>22
Uh, I don't think you understand. The -O2 and -O3 options are already a mix of options that will give you best performance for the general case in a safe manner. All those other switches? They're not guaranteed to work. -march is one of the very few, and even so, it's useless if you're making a general binary.

The only way to do what you suggest is to make the compiler put in profiling machanisms, you run the software a few dozen times in general usage, and you feed the profiling data back for a final compile. Even so, that'll be tailored to your usage patterns, and unless you're willing to do this a few dozen times, it still won't find the optimal set of switches. This of course also ignores that profiling will also alter the performance characteristics of the software.

You think the GCC folks wouldn't have added such a switch already if it were possible?

Name: Anonymous 2005-10-27 6:09 (sage)

profiling mechanisms

fixed

Name: Anonymous 2005-10-29 4:57

>>1
Are you the only noob here who doesn't understand what a compiler is?

Name: Anonymous 2005-10-30 7:54

>>23
Well, the other day I was trying some amazing utility that took a Win32 executable and converted bits of FPU code to SSE whenever possible. To decide when to do it, it must know how to do it, the rewritten code must fit, and finally, it ran both pieces of code fifty million times each to see if the new code was really faster or not.

Name: Anonymous 2005-10-30 8:42

fifty million times makes it bettar!

Name: Anonymous 2005-10-30 8:43

>>27
Fifty million times makes it barely measurable on modern processors.

Name: Anonymous 2005-10-30 17:14

>>26
That's one of the dumber things I've heard being done. Whoever wrote that utility didn't know what SSE is meant for.

Name: Anonymous 2005-10-31 16:27

Why run it 50mil times? Why not just count the number of instructions and multiply by the time it takes to run one?

Name: Anonymous 2005-10-31 16:49

>>30
Lol naïve view of processors.

Name: Anonymous 2005-10-31 17:12

>>30

I was about to mock you, but then I realized you phrased your post as a question and may be genuinely interested in why that approach is so flawed.

Short & Simplified version:

Different instructions take a different amount of time to process.  Differing characteristics of the internal state of the computation may affect timings as well.  It's far simpler to just run and measure than to try to work out what is happening time-wise in the chip and model it.

Name: Anonymous 2005-10-31 18:11

>>32
New Pentiums use RISC internally, a cycle per instruction, with microcode (or whatever it's called) on top of it. One cycle can be a unit, and you can measure from there.

Also, on non-RISC machines, you can assign points to this and that instruction and make a good approximation.

I imagine it gets pretty complicated for anything more than the Fibonacci function.

Name: Anonymous 2005-10-31 21:35

Still a poor metric, because while newer processors are RISC internally, they're still running CISC instructions. In other words, the x86 instructions are translated into CPU microops, and the translation isn't one-to-one. A single x86 instruction may take several microops.

Furthermore, there are all the problems surrounding branch misses, cache lines, etc. So no, a simple analysis of the code like you mention won't work except for the most trivial examples.

BTW, I really doubt a program like the one >>26 mentions exists. Think about it.

Name: Anonymous 2005-10-31 23:16

>>33

All newer x86 processors use a RISC philosophy in their microcode.  The whole CISC/RISC debate died years ago with the selected optimal solution (as in all cases of design) being a merging of the two concepts.  I wasn't talking about instructions on the microcode level, though, as God only knows how that all gets generated, pipelined, and flushed just so.

My comments stand at the assembler level.  Intel assembly is *very* CISC.

I won't say any more, as >>34 already did, and more concisely than I would have, too.

Name: Anonymous 2005-10-31 23:25

>>34
We know how many microops it takes for one x86 instruction, I hope. 1 microop is a cycle or 1 unit of time.

Yeah, the branch predictions and what not would make it extremely complicated and close to impossible.

Name: Anonymous 2005-11-01 6:45

We know how many microops it takes for one x86 instruction

I'm not certain that will always be the case. Microops are used internally by the CPU and really aren't any business of an external entity. Manufacturers of old (as in VAX-era) CPUs often updated the microcode programs that ran inside them, and Intel and AMD almost certainly still do this too.

I'm being a complete nit-pick though, because the Intel manuals list how many cycles an instruction takes, which is what we're after anyway.

Name: Anonymous 2005-11-01 16:55

>>33
It gets complicated because things like memory caching and the usual context and instruction-set switching get in the way, and not only is every processor generation and vendor different, but now we have a bunch of different processor cores in each line. The best (and most realistic) way is to measure it.

>>34
http://www.aegir.bur.st/files/morrowind/index.php?dir=fpu2sse

>>36
We don't, because it depends on pairing and memory performance, which gets complicated to estimate with two caches and newer memory.

Name: Anonymous 2005-11-01 23:45

usability of a C compiler @_@

Name: Anonymous 2005-11-08 2:08

>>30
This is clearly an induction problem. Let's assume it's true for k instructions, now we shall prove for k+1 ...

Name: Anonymous 2005-11-08 16:33

An induction problem? It's not.

Name: Anonymous 2005-11-08 16:47

Do any of you use screen?  It's a "terminal multiplexer"; if you're forced to use it, you might hate it.  But to people who find it on their own (eventually and inevitably, after going through the current slew of window managers, loving fluxbox for a while, then abandoning X altogether since it sucks), it is a godly little app.

<gay>Console Warriors Unite!</gay>

You should find "real" reasons why gcc is bad, like how the latest and greatest revisions break stuff, or how it's a huge unmanageable mess to maintain (a la GIMP, KDE, et al.)

Name: Anonymous 2005-11-08 16:59

>>42
i thought screen used x since it can run mozilla et cetera

Name: Anonymous 2005-11-08 17:41

Requesting a URL, since screen is a pretty common word to Google for and I'm not so sure what it is

Name: Anonymous 2005-11-08 19:30

Name: Anonymous 2005-11-08 19:35

screen isn't that useful on a local machine, but it can't be beat over a remote connection.

screen + SSH ftw
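A minimal screen round trip, runnable non-interactively (screen must be installed; the session name "demo" is arbitrary):

```shell
# Exit quietly on systems without screen installed.
command -v screen >/dev/null || { echo "screen not installed"; exit 0; }

screen -dmS demo sleep 30   # -dmS starts a named session detached
screen -ls || true          # sessions outlive the terminal that spawned them
screen -S demo -X quit      # tear it down again
echo "round trip done"
```

Interactively you'd just run screen, hit Ctrl-a d to detach, and `screen -r` to reattach from a later SSH login.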

Name: Anonymous 2005-11-10 5:51

>>43 see >>45

yeah screen rocks the command line.  when my friend showed it to me i fell in love at first sight.  =)

Name: test !!6zZFK8hdXkJGFwa 2009-02-03 18:30

lol
