When Zero is Less Than a Negative Number
QUESTION: I've encountered a bizarre situation where IDL thinks that 0 is less than a negative number. Can anyone rationalize this? Is it really not okay to compare the value of an unsigned integer with a signed integer? Shouldn't the compiler handle this?
Here is a test case.
   pro test_gt_lt

      a = ulong64(0)
      b = long(100)
      if (a lt b) then begin
         print, 'ZERO IS LESS THAN 100'
      endif else begin
         print, 'ZERO IS GREATER THAN 100'
      endelse

      c = ulong64(0)
      d = long(-100)
      if (c lt d) then begin
         print, 'ZERO IS LESS THAN -100'
      endif else begin
         print, 'ZERO IS GREATER THAN -100'
      endelse

   end
Running the program results in this output:
   ZERO IS LESS THAN 100
   ZERO IS LESS THAN -100
ANSWER: Dick Jackson supplies the answer.
Forgive me for boiling down your test case to this.
   IDL> Print, 0ULL LT -100L
      1
I think what's happening is that, to compare a 64-bit (unsigned) type to a 32-bit (signed) type, the 32-bit value is converted to the "higher precedence" type, even though it will no longer be able to represent a negative number.
From Help on "Language > Operators > Relational Operators":
Each operand is promoted to the data type of the operand with the greatest precedence or potential precision. (See Data Type and Structure of Expressions for details.)
Here's what was happening.
   IDL> Print, 0ULL LT ULong64(-100L)
      1
   IDL> help, -100L
   <Expression>    LONG      =         -100
   IDL> help, ULong64(-100L)
   <Expression>    ULONG64   =   18446744073709551516
If we're pushing the limits here, this is possibly even more troublesome:
Note: Signed and unsigned integers of a given width have the same precedence. In an expression involving a combination of such types, the result is given the type of the leftmost operand.
This leads to the following curiosity: with the same level of precision (64 bits, but one operand signed and one unsigned), IDL reports both a < b and b < a as true:
   IDL> Print, 0ULL LT -100LL
      1
   IDL> Print, -100LL LT 0ULL
      1
I suppose the lesson here is, if there's a chance of comparing positive and negative values, be sure to convert both expressions to a signed type, or a float type.
Finally, Craig Markwardt chimes in with this sage advice.
When I was young I thought integer math on a computer was so simple and easy. After doing some moderately intensive integer calculations in C, I realized that integer math is the work of evil.
The interactions of signed versus unsigned, and short versus long, data types are very subtle and prone to error. One needs to pay very careful attention to compiler/interpreter conventions regarding integer math. In my particular case I was using integer math in C to avoid the overhead of floating point, so it was "worth it."
Version of IDL used to prepare this article: IDL 8.2.3.