I have a basic script processing some very large CSVs, and at one point there is a 4-level nested for loop. In the innermost loop there is a condition on integers that takes the form:
if (a - c) * (b - c) > 0:
    break
which runs about 5 billion times. I have done everything I can to speed up the for loops in general, but am now wondering whether I can squeeze out any further performance by rearranging the arithmetic here. Will expanding the binomial speed it up at all, or does Python already do that when it compiles the code to bytecode, before it runs?
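For reference, here is roughly what I was planning to measure, both timing the two forms and looking at the bytecode to see whether CPython rewrites the expression at compile time (the values are just placeholders, not my real data):

import timeit
import dis

setup = "a, b, c = 7, 3, 5"

# Time the factored form vs. the expanded binomial on plain ints.
print(timeit.timeit("(a - c) * (b - c) > 0", setup=setup, number=10_000_000))
print(timeit.timeit("a*b - a*c - b*c + c*c > 0", setup=setup, number=10_000_000))

# Inspect the compiled bytecode for the factored form to see if any
# algebraic rewriting happens before it runs.
dis.dis(compile("(a - c) * (b - c) > 0", "<expr>", "eval"))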
Further, if these values end up being floats, which is likely, and I don't need much precision, will making them 16- or 32-bit rather than 64-bit save any time, or will the conversion overhead just cancel out any savings?
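This is the kind of check I had in mind for the float question, assuming NumPy scalar types are acceptable as a stand-in for "16/32/64-bit floats" (again, placeholder values):

import timeit
import numpy as np

# Compare the same comparison across NumPy float widths.
for dtype in (np.float16, np.float32, np.float64):
    a, b, c = dtype(7.1), dtype(3.2), dtype(5.3)
    t = timeit.timeit(lambda: (a - c) * (b - c) > 0, number=1_000_000)
    print(dtype.__name__, t)

# Plain Python floats (always 64-bit) as the baseline.
a, b, c = 7.1, 3.2, 5.3
print("builtin float", timeit.timeit(lambda: (a - c) * (b - c) > 0, number=1_000_000))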