I have an array of 16-bit signed integers that is updated every iteration of the while(1) loop in my code's main function. Obviously I am aiming to optimize the updating algorithms as much as possible. Now, my problem is this: Other optimizations I have made in other places in the code (involving lookup tables) expect the value of each element to be somewhere between -127 and 127. The new value of the array is dependent upon previous adjacent values in the array. Occasionally it is possible for the calculations to result in a value outside of the expected bounds. Once this happens, it is indeed possible for a systemic breakdown in the integrity of the data (insofar as more and more values go outside of the expected bounds).
I have been thinking about a quick way to clip a value to -127 or 127 when it falls outside those bounds. Obviously I could do an "if (array[i] > 127) array[i] = 127;" but that isn't very efficient. I tried doing it bitwise with the following:
array[i] = (snipped expression) & -127;
but that did not work either. My reasoning was that that particular mask would eliminate any bits outside of the sign bit and those corresponding to 2^7 and above.
Can someone tell me what I am doing wrong, or suggest an efficient alternative?
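For what it's worth, one branch-free way to saturate to [-127, 127] is the sign-mask trick; a minimal sketch (the function name is mine, and an arithmetic right shift of a negative value is technically implementation-defined in C, though every mainstream compiler does the expected thing):

```c
#include <stdint.h>

/* Saturate x to [-127, 127] without branches.
 * Widening to 32 bits avoids overflow in the intermediate sums;
 * d >> 31 yields all-ones when d is negative and zero otherwise
 * (implementation-defined in C, but arithmetic shift on every
 * compiler that matters). */
static int16_t clamp127(int16_t x)
{
    int32_t v = x;
    int32_t d = v - 127;
    v = 127 + (d & (d >> 31));   /* v = min(v, 127)  */
    d = v + 127;
    v = v - (d & (d >> 31));     /* v = max(v, -127) */
    return (int16_t)v;
}
```

Whether this beats the plain if depends entirely on the target; on anything with conditional moves the compiler usually does just as well with the obvious comparison.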
Well, the obvious answer is that your optimizations are stupid. How sure are you that they're necessary? What possessed you to use 16 bits in one place and 8 in another?
Name:
Anonymous 2007-12-06 20:16
>>2
That seems to have worked extraordinarily well, thanks very much
Name:
Anonymous 2007-12-06 20:18
>>3
My reasoning was thus:
If I used 8 bits and a calculation exceeded the expected bounds, I would get integer wrap-around and still have anomalous values.
Name:
Anonymous 2007-12-06 20:19
When in doubt, just cast the shit out of everyone. That's my motto.
>>7
Because systemic breakdowns are possible (a few bad values can rapidly make the values around them "bad", and the corruption spreads further), adding more elements to my lookup tables would only delay the inevitable. Sure, it might take longer for a value to go above 256, or 512, or 1024, but unless something prevents a value from EVER going above the imposed bounds, it will happen eventually.
Name:
Anonymous 2007-12-06 21:14
>>8
It just seems to me that a large number must have some meaning and there would be a more appropriate time to interpret and deal with it than capping at ±127. Of course, you've only given us enough information to help you with the problem you think you have.
Name:
Anonymous 2007-12-06 21:27
>>9
Right now I have an algorithm that works in most cases but in certain special circumstances can "fail" (i.e. deliver anomalous values). The only significance of these anomalous values is that there are no lookup table entries for them, so they cause glitches: one of my lookup tables contains RGB color values, since I plot each element as a pixel, and attempting to plot an anomalous value produces obvious graphical glitches.
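To make the lookup itself robust, the clamped value can be biased into the table so that every possible input indexes a valid entry; a hypothetical sketch (the palette layout and names are mine, not >>9's actual tables):

```c
#include <stdint.h>

/* Hypothetical 255-entry RGB palette covering values -127..127;
 * clamping first guarantees the biased index is always in range. */
static uint32_t palette[255];

static uint32_t pixel_color(int16_t v)
{
    if (v >  127) v =  127;   /* anomalous high value -> saturate */
    if (v < -127) v = -127;   /* anomalous low value  -> saturate */
    return palette[v + 127];  /* bias into 0..254 */
}
```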
Name:
Anonymous 2007-12-07 5:23
anonymous values? you sir, are not an EXPERT PROGRAMMER
Name:
Anonymous 2007-12-07 11:36
>>1
I suggest reading ``TMOIAACSAOGFAPACM'' (``The microarchitecture of Intel and AMD CPU's: An optimization guide for assembly programmers and compiler makers'').
>>1
Premature optimization is the most stupid thing one could do. In any case, this looks like a useless micro-optimization to me. How sure are you that this is going to make any difference to the overall speed of your code? Have you profiled it? Are you sure that a handful of extra cycles are even worth it?
Name:
Anonymous 2007-12-07 11:56
>>13
One word, the premature optimization of code. Slowness over.
>>13
Considering that this is embedded development and that the set of operations is performed nearly 50,000 times each update cycle, yes. I had some other operations wrapped in an #ifdef block, and including them in the build DID make a tangible difference.
Regardless, the optimization in >>2 was trivial to implement and seems to have done the job.
Name:
Anonymous 2007-12-08 5:32
>>12
One of the few rare /prog/ gems. Bookmarked like the motherfucking fist of the north star.
Name:
Anonymous 2007-12-08 5:38
``Stalin'' is an aggressively optimizing Scheme compiler.
Name:
Anonymous 2007-12-08 6:17
>>1 needs to learn to love branch prediction. Also conditional moves.
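For the record, the obvious comparison form is exactly what a modern compiler tends to turn into conditional moves; a sketch (the name is mine):

```c
#include <stdint.h>

/* The "naive" clamp; gcc and clang typically lower these ternaries
 * to cmov on x86 (csel on ARM), so nothing is actually branched on. */
static int16_t clamp_cmov(int16_t x)
{
    x = (x > 127)  ?  127 : x;
    x = (x < -127) ? -127 : x;
    return x;
}
```

So before reaching for bit tricks, it's worth checking the generated assembly: the readable version and the clever version often compile to the same thing.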