How can I compute the range of signed and unsigned types
jmax at toad.net
Wed Apr 18 09:41:12 PDT 2001
Quoting Chris Richard Adams:
>
> How can I 'compute' the different sizes of types on the machine? to
> prove that the values within limits.h are valid. I also would like to
> do this for reals. Anyone have any tips - I'd appreciate it.
>
OK... some groundwork first. I assume you're asking how your code can tell,
at _run-time_, the sizes of the types on the machine it's executing on.
I stress the run-time above, because several people have given answers as
to how to tell at compile-time. Their answers amount to "use limits.h", and
this is correct. If you don't trust your compiler to have a good limits.h,
then you shouldn't be using it at all. Large chunks of the standard library
and your compiler's support library probably won't work right if limits.h
is screwed.
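For reference, the compile-time version really is as simple as printing the
<limits.h> macros; a minimal sketch:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* The compiler fills these in for the machine it targets. */
    printf("char:  %d .. %d (unsigned max %u)\n",
           CHAR_MIN, CHAR_MAX, (unsigned) UCHAR_MAX);
    printf("short: %d .. %d (unsigned max %u)\n",
           SHRT_MIN, SHRT_MAX, (unsigned) USHRT_MAX);
    printf("int:   %d .. %d (unsigned max %u)\n",
           INT_MIN, INT_MAX, UINT_MAX);
    printf("long:  %ld .. %ld (unsigned max %lu)\n",
           LONG_MIN, LONG_MAX, ULONG_MAX);
    return 0;
}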
As to run-time computation of limits.h values...
Short answer: You can't.
Now, _if_ you are willing to make a couple of assumptions that the C standard
doesn't guarantee are true, several people have given you parts of the answer.
The first assumption you need is that sizeof gives you the size of things in
terms of 8-bit bytes. This is true for every C compiler I've ever worked with,
but the standard _explicitly_ doesn't guarantee it, and I know there are C
compilers for which it isn't true (especially in the mainframe world).
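If you want to sanity-check that first assumption at run time, you can count
the bits in an unsigned char directly; a minimal sketch (the hard-coded 8 is
exactly what's being tested here):

#include <stdio.h>

int main(void)
{
    unsigned char c = (unsigned char) ~0u;  /* one byte, all bits set */
    int bits = 0;
    while (c != 0) {    /* count the shifts needed to clear it */
        bits++;
        c >>= 1;
    }
    printf("bits per byte: %d\n", bits);  /* CHAR_BIT says the same thing */
    return 0;
}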
The second assumption you need is that you know the numeric representation
scheme; i.e., two's complement, one's complement, sign-magnitude. Again, the
C standard very carefully and explicitly steers around specifying this. _Most_
modern machines (every microprocessor I'm aware of) use two's complement.
If you're willing to assume two's complement, it's fairly straightforward
for integer types:
#define MAX_UNSIGNED_SHORT ((unsigned short) ~0u) /* cast back after promotion to int */
#define MAX_UNSIGNED_INT (~((unsigned int) 0))
#define MAX_UNSIGNED_LONG (~((unsigned long) 0))
/* Shift a 1 into the sign bit; assumes 8-bit bytes and two's complement. */
#define MIN_SHORT_INT ((short int) (1u << (sizeof(short int) * 8 - 1)))
#define MAX_SHORT_INT (-(MIN_SHORT_INT + 1))
#define MIN_INT ((int) (1u << (sizeof(int) * 8 - 1)))
#define MAX_INT (-(MIN_INT + 1))
/* The shifted constant must be a long here, or the shift itself overflows. */
#define MIN_LONG_INT ((long int) (1ul << (sizeof(long int) * 8 - 1)))
#define MAX_LONG_INT (-(MIN_LONG_INT + 1))
Of course, this assumes that the code is being compiled on the machine
that it is to be executed on...
As Knuth says, "Beware of bugs in the above code; I have only proved it
correct, not tried it."
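If you want to see how those macros stack up against limits.h on your own
machine, a quick test along these lines should do it (paste the macro
definitions above in front of main):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("MIN_INT = %d (limits.h says %d)\n", MIN_INT, INT_MIN);
    printf("MAX_INT = %d (limits.h says %d)\n", MAX_INT, INT_MAX);
    printf("MAX_UNSIGNED_INT = %u (limits.h says %u)\n",
           MAX_UNSIGNED_INT, UINT_MAX);
    return 0;
}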
Walter Ligon has pointed you down the right track with his check for
two's complement / one's complement / signed magnitude representation.
Be careful of big-endian vs. little-endian differences here, and also
watch out for compilers that do funny things with casting values that
are too big for the target type.
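I don't have Walter's exact code in front of me, but a check along these
lines distinguishes the three schemes, since the stored bit pattern of -1
is different in each (a sketch, not gospel):

#include <stdio.h>

int main(void)
{
    /* -1 is ...11111111 in two's complement, ...11111110 in one's
       complement, and 1000...0001 in sign-magnitude, so the low two
       bits of the representation tell them apart. */
    if ((-1 & 3) == 3)
        printf("two's complement\n");
    else if ((-1 & 3) == 2)
        printf("one's complement\n");
    else
        printf("sign-magnitude\n");
    return 0;
}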
Floating point is a whole 'nother ballgame, and is even trickier to deal
with. Short of trying every possible bit pattern within a floating-point
value, I don't see a way to do it in the general case, even theoretically.
Writing code to do _that_ is tricky in itself: you have to be sure you know
how many bits are in a floating-point value (see my comments above on
sizeof), and you have to be prepared to cope with bit patterns that aren't
valid floating-point values. It's also impractical in terms of time: how
long will a typical machine take to execute a loop 2^64 times? Hint: _way_
longer than you can wait.
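About the only floating-point property that's cheap to probe at run time is
the machine epsilon; here's a minimal sketch (float.h is the floating-point
analogue of limits.h, and its FLT_EPSILON / DBL_EPSILON / DBL_MAX macros are
the real answer):

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* Halve eps until 1.0 + eps rounds back to 1.0.  The volatile forces
       the sum through memory, so x86 extended-precision registers don't
       skew the result. */
    double eps = 1.0;
    volatile double sum = 2.0;
    while (sum != 1.0) {
        eps /= 2.0;
        sum = 1.0 + eps;
    }
    eps *= 2.0;     /* we went one halving too far */
    printf("computed epsilon: %g\n", eps);
    printf("float.h says:     %g\n", DBL_EPSILON);
    return 0;
}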
The short, practical answer is: compile your code separately for each machine
it needs to run on, and use limits.h. The entire reason it's there is that the
problem you've asked about is very hard.
-John