
C optimization question

Name: Anonymous 2007-12-06 20:03

Here is my situation.

I have an array of 16-bit signed integers that is updated every iteration of the while(1) loop in my code's main function.  Obviously I am aiming to optimize the updating algorithms as much as possible.  Now, my problem is this:  Other optimizations I have made in other places in the code (involving lookup tables) expect the value of each element to be somewhere between -127 and 127.  The new value of the array is dependent upon previous adjacent values in the array. Occasionally it is possible for the calculations to result in a value outside of the expected bounds.  Once this happens, it is indeed possible for a systemic breakdown in the integrity of the data (insofar as more and more values go outside of the expected bounds). 

I have been thinking about a quick way to clip the values to -127 or 127 if they fall outside of these bounds.  Obviously I could do an "if(array[i] > 127) array[i] = 127;" but that isn't very efficient.  I tried doing it bitwise with the following:

        array[i] = (snipped expression) & -127;

but that did not work either.  My reasoning was that that particular value would eliminate any bits outside of the sign and the bits corresponding with 2^7 and above.

Can someone tell me what I am doing wrong, or suggest an efficient alternative?

Name: Anonymous 2007-12-06 20:10

>>1
array[i]=(int8_t)(snipped expression);

Name: Anonymous 2007-12-06 20:14

Well, the obvious answer is that your optimizations are stupid. How sure are you that they're necessary? What possessed you to use 16 bits in one place and 8 in another?

Name: Anonymous 2007-12-06 20:16

>>2
That seems to have worked extraordinarily well, thanks very much

Name: Anonymous 2007-12-06 20:18

>>3
My reasoning was thus:
If I used 8 bits and the calculation ended up exceeding the expected bounds, then I would be experiencing integer wrap-around and still have anomalous values.

Name: Anonymous 2007-12-06 20:19

When in doubt, just cast the shit out of everyone. That's my motto.

Name: Anonymous 2007-12-06 20:42

>>5
So why truncate later?

Name: Anonymous 2007-12-06 21:01

>>7
Because systemic breakdowns are possible (a few bad values can rapidly make others around them "bad", and thus it spreads further), adding more elements to my lookup tables would only be delaying the inevitable.  Sure, it might take longer to go above 256, or 512, or 1024, but if something wasn't done to prevent a value from EVER going above the imposed bounds, it would happen eventually.

Name: Anonymous 2007-12-06 21:14

>>8
It just seems to me that a large number must have some meaning and there would be a more appropriate time to interpret and deal with it than capping at ±127. Of course, you've only given us enough information to help you with the problem you think you have.

Name: Anonymous 2007-12-06 21:27

>>9
Right now I have an algorithm that works in most cases but in certain special circumstances can "fail" (i.e. deliver anomalous values).  The only significance these anomalous values have is the fact that there are no lookup table values for them, and thus there would be glitches (one of my lookup tables contains RGB color values, as I plot each element as a pixel, and so attempting to plot anomalous values has obvious graphical glitches).

Name: Anonymous 2007-12-07 5:23

anonymous values? you sir, are not an EXPERT PROGRAMMER

Name: Anonymous 2007-12-07 11:36

>>1
I suggest reading ``TMOIAACSAOGFAPACM'' (``The microarchitecture of Intel and AMD CPU's: An optimization guide for assembly programmers and compiler makers'').

http://www.agner.org/optimize/

Name: Anonymous 2007-12-07 11:51

>>1
Premature optimization is the most stupid thing one could do. In any case, this looks like a useless micro-optimization to me. How sure are you that this is going to make any difference to the overall speed of your code? Have you profiled it? Are you sure that a handful of extra cycles are even worth it?

Name: Anonymous 2007-12-07 11:56

>>13
One word, the premature optimization of code.  Slowness over.

Name: Anonymous 2007-12-07 12:24

>>1
ZOMG OPTIMIZED

Name: Anonymous 2007-12-07 12:35

>>13
>>3,>>7,>>9 here
Yes.

Name: Anonymous 2007-12-07 14:56

>>1
OMG OPTIMIZED!

Premature optimization is the root of all evil
All optimization is premature
Use Lisp

Name: Anonymous 2007-12-07 15:28

>>17
Low-quality troll detected. Read PAIP.

Name: Anonymous 2007-12-07 18:23

Use LLVM on a machine with a SIMD ISA.  /thread

Name: Anonymous 2007-12-07 22:26

>>12
Fukken saved.

Name: Anonymous 2007-12-08 5:08

>>13
Considering that this is embedded development and the fact that the set of operations is performed nearly 50,000 times each update cycle, yes.  I had some other operations that were encapsulated within an #ifdef statement, the inclusion of which into the code DID make a tangible difference.

Regardless, the optimization in >>2 was trivial to implement and seems to have done the job.

Name: Anonymous 2007-12-08 5:32

>>12
One of the few rare /prog/ gems. Bookmarked like the motherfucking fist of the north star.

Name: Anonymous 2007-12-08 5:38

``Stalin'' is an aggressively optimizing Scheme compiler.

Name: Anonymous 2007-12-08 6:17

>>1 needs to learn to love branch prediction. Also conditional moves.

Name: Anonymous 2007-12-10 15:30

>>130
>echo `perl -e 'print rand 0, 9999'`

Well, aren't you a fucking expert.

Name: Anonymous 2009-03-18 2:33

Don't call me gay, but I need some mary jay!

Marijuana MUST be legalized.
