How can I compute the range of signed and unsigned types

Robert G. Brown rgb at phy.duke.edu
Wed Apr 18 10:40:02 PDT 2001


On Wed, 18 Apr 2001 jmax at toad.net wrote:

> Floating point is a whole 'nother ballgame, and is even trickier to
> deal with; short of trying every bit-pattern possible within a floating
> point value (and writing code to do _that_ is tricky; you have to be sure
> you know how many bits are in a floating point value (see my comments above
> on sizeof), and be prepared to cope with bit patterns that aren't valid
> floating point values) I don't see a way to do that in the general case,
> even theoretically (In addition to being tricky, that approach is
> impractical in terms of time; how long will a typical machine take
> to execute a loop 2^64 times? Hint: _way_ longer than you can wait.)

Well, yes, but as long as you can handle arithmetic exceptions, one
would obviously use, e.g., a binary search method that takes only a tiny
fraction of that time to get very close indeed.  That is: keep doubling
until the float or double result overflows, then bisect the remaining
interval until the largest number that doesn't overflow is known either
to the desired accuracy (I'm usually happy enough knowing the largest
EXPONENT that definitely won't overflow, or the smallest that won't
underflow, for any mantissa) or, if you are diligent, to machine
accuracy.
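
Something like the following sketch captures the idea, assuming
IEEE-style arithmetic where overflow quietly produces an infinity that
isinf() can detect rather than trapping (on trapping hardware you'd
need the signal handling discussed below):

/* Sketch: find (approximately) the largest finite double by doubling
 * until overflow, then bisecting a multiplier in [1, 2).  Assumes
 * IEEE-ish arithmetic: overflow yields infinity, detectable with the
 * C99 isinf() macro.  Compile with e.g. gcc -std=c99. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double big = 1.0;
    double f_lo = 1.0, f_hi = 2.0;

    /* Phase 1: double until the next step overflows; big ends up as
     * the largest power of two that is still finite. */
    while (!isinf(big * 2.0))
        big *= 2.0;

    /* Phase 2: bisect the multiplier.  Invariant: big * f_lo is
     * finite, big * f_hi overflows; the interval halves each pass. */
    while (1) {
        double f_mid = (f_lo + f_hi) / 2.0;
        if (f_mid <= f_lo || f_mid >= f_hi)
            break;              /* interval no longer shrinks: done */
        if (isinf(big * f_mid))
            f_hi = f_mid;
        else
            f_lo = f_mid;
    }
    printf("largest finite double ~= %g\n", big * f_lo);
    return 0;
}

Phase 1 alone already gives you the largest safe power of two (the
"largest EXPONENT" answer above); phase 2 refines the mantissa down to
machine accuracy in another fifty-odd steps.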

I assume that this is what the original respondent was looking for -- a
simple enough binary search algorithm that can be implemented "anywhere"
and that is guaranteed to give answers that are either exact or "good
enough" on any platform, however outré, including ones that might or
might not be running gcc or whatever.

Similar methods can be used for integers and so forth.  I believe that
the exceptions can all be trapped (to avoid a crash) by installing your
own handler(s) for:

       +----------------------------------------------+
       |                   SIGFPE                     |
       +-----------+----------------------------------+
       |FPE_INTDIV | integer divide by zero           |
       +-----------+----------------------------------+
       |FPE_INTOVF | integer overflow                 |
       +-----------+----------------------------------+
       |FPE_FLTDIV | floating point divide by zero    |
       +-----------+----------------------------------+
       |FPE_FLTOVF | floating point overflow          |
       +-----------+----------------------------------+
       |FPE_FLTUND | floating point underflow         |
       +-----------+----------------------------------+
       |FPE_FLTRES | floating point inexact result    |
       +-----------+----------------------------------+
       |FPE_FLTINV | floating point invalid operation |
       +-----------+----------------------------------+
       |FPE_FLTSUB | subscript out of range           |
       +-----------+----------------------------------+

although I've never really tried it.  Obviously this DOES mean that your
code will rely on a reasonable implementation of signal() (or, better,
POSIX sigaction()); otherwise your binary search will itself have to be
rerun repeatedly as the code crashes (if it crashes with the default
handler) on a floating point exception.
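
A minimal sketch of the trap-and-recover idea, assuming POSIX signals
(the divide by zero below is just a stand-in for whatever operation the
search is probing; note that you can't simply return from a SIGFPE
handler, hence the siglongjmp):

/* Sketch: install a SIGFPE handler and recover with siglongjmp, so a
 * probe that raises an arithmetic exception doesn't kill the search. */
#include <stdio.h>
#include <string.h>
#include <signal.h>
#include <setjmp.h>

static sigjmp_buf probe_env;

static void fpe_handler(int sig)
{
    (void)sig;
    siglongjmp(probe_env, 1);   /* returning normally would be undefined */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = fpe_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGFPE, &sa, NULL);

    volatile int zero = 0;      /* volatile keeps the compiler honest */
    if (sigsetjmp(probe_env, 1) == 0) {
        printf("probe result: %d\n", 1 / zero);  /* FPE_INTDIV here */
    } else {
        printf("caught SIGFPE: probe failed, shrink the interval\n");
    }
    return 0;
}

Passing 1 as the second argument to sigsetjmp saves the signal mask, so
the siglongjmp unblocks SIGFPE again and the next probe can be trapped
the same way.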

One place to look for this sort of code is in cryptography and random
number generation programs.  Both of these often need to use things like
"the largest integer available on your system" and I've run across
routines that did such a search to find it/them, although I don't seem
to have any of the old source handy.
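
For unsigned integer types the search is trivially safe even without
signal handling, since unsigned overflow is defined in C to wrap modulo
2^N; a sketch:

/* Sketch: find the largest unsigned long by doubling until the value
 * wraps.  This is well defined for UNSIGNED types only; doubling a
 * signed type past its maximum is undefined behaviour, which is why
 * the SIGFPE machinery above (or limits.h) is preferable there. */
#include <stdio.h>

int main(void)
{
    unsigned long max = 1;

    /* Keep doubling; when 2*max wraps, it becomes smaller than max. */
    while (max * 2 > max)
        max *= 2;               /* max is now the top bit alone, 2^(N-1) */
    max = max + (max - 1);      /* set all the lower bits: 2^N - 1 */

    printf("largest unsigned long: %lu\n", max);
    return 0;
}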

Another alternative that is fully portable across at least those systems
that support the Gnu compilers is to use the portable gnu variants for
the various types, e.g. (from /usr/include/glib.h):

/* system specific config file glibconfig.h provides definitions for
 * the extrema of many of the standard types. These are:
 *
 *  G_MINSHORT, G_MAXSHORT
 *  G_MININT, G_MAXINT
 *  G_MINLONG, G_MAXLONG
 *  G_MINFLOAT, G_MAXFLOAT
 *  G_MINDOUBLE, G_MAXDOUBLE
 *
 * It also provides the following typedefs:
 *
 *  gint8, guint8
 *  gint16, guint16
 *  gint32, guint32
 *  gint64, guint64
 *
 * It defines the G_BYTE_ORDER symbol to one of G_*_ENDIAN (see later in
 * this file).
 */

Basically, if one ALWAYS writes code that defines, e.g.,

 gdouble x, y, z;
 gint i, j, k;

and so forth instead of the non-"g" variants of the same statements, one
can reliably recompile and run the code on any Gnu-supported platform,
using e.g. G_MAXDOUBLE as a guaranteed-accurate macro for the largest
double.  Presumably this is fast and easy on a "sane" platform where the
hardware arithmetic is basically consistent with these variables and
likely slow (emulated in software) on platforms where the hardware is
very different.  [Some 15 years ago I used to run stuff on a Harris 800
(running the "Vulcan Operating System", shades of Spock:-) which used
3-, 6- and 12-byte words.  I still have antique Fortran IV code with
"real*12" data definitions in it.  Moral of the story -- don't assume
that
hardware is now or will remain sane.  And as Walter said, then there is
ENDIAN to deal with...;-)]
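
For concreteness, a tiny sketch using the glib typedefs and extrema
macros (assuming glib is installed; the compile flags come from
glib-config or pkg-config, depending on your glib version):

/* Sketch: glib's portable typedefs and extrema macros in action.
 * Compile with e.g. gcc probe.c `glib-config --cflags --libs`. */
#include <stdio.h>
#include <glib.h>

int main(void)
{
    gdouble x = G_MAXDOUBLE;    /* largest double, from glibconfig.h */
    gint    i = G_MAXINT;       /* largest int */
    guint32 u = 0xffffffff;     /* exactly 32 bits on every platform */

    printf("largest gdouble: %g\n", x);
    printf("largest gint:    %d\n", i);
    printf("guint32 max:     %u\n", (unsigned)u);
    return 0;
}

The point of the fixed-width typedefs (gint32 and friends) is exactly
the portability issue above: they mean what they say even on hardware
whose native word sizes are strange.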

Glib stuff and reasons for using the g-variants are discussed in some of
the gtk/gnome books out there and in several other documentation
locations on the web (again, specifics elude my gradually
spongiform-eroding memory) -- probably in the Gnu Programmer's Guide or
some such if the neurons aren't being triggered by cosmic rays again.

Hope this helps...

   rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu