Post by bartc
Post by s***@casperkitty.com
How about allowing the sizes to be configured via compiler options or
directives that precede the first usage of the types in question?
Then you lose the advantage of just /knowing/ that int is 32 bits.
Almost no code can be expected to work usefully on a compiler that is
not configured properly for use with it. I would like to see a standardized
way by which code could specify a configuration via directives and either
have a compiler automatically configure itself to fit the indicated
requirements or refuse compilation when unable to do so, but compilers
must be configured *somehow*.
If configurable int sizes were seen as a desirable feature, however, then
code which expects "int" to be 32 bits would not be limited to platforms
where "int" would default to 32 bits, but would also be usable on those
where it would default to some other size but could be configured to be
32 bits anyway.
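The "refuse compilation" half of that can at least be approximated today; a
minimal sketch using C11 static assertions (the wording of the messages is
just illustrative):

  #include <limits.h>

  /* Refuse to compile unless the implementation matches the assumptions
     this code was written for. */
  _Static_assert(CHAR_BIT == 8, "this code assumes 8-bit char");
  _Static_assert(sizeof(int) == 4 && INT_MAX == 0x7FFFFFFF,
                 "this code assumes a 32-bit int");

What it cannot do is the other half: tell the compiler to *make* "int" be
32 bits in cases where it could do so.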
Post by bartc
Suppose you want to combine several sets of functions from different
sources into one module, but they make different assumptions about 'int',
or each requires a different set-up.
Combining such functions in one module would likely not be possible unless
a compiler included directives that could be used within a source file. On
the other hand, a build system could allow compilation units with different
integer sizes to be combined within a single program if it has a directive
to request name mangling based on parameter types [used mainly for library
functions] and code which doesn't use that directive uses fixed-sized types
in its interfaces.
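The fixed-sized-interface part is already common practice; a sketch of what
such a boundary might look like (the header contents and function names here
are made up for illustration):

  /* shared_api.h -- interface declared entirely in exact-width types, so
     the calling convention does not depend on whatever size "int" happens
     to be in the translation unit that includes it. */
  #include <stdint.h>

  int32_t sum_values(const int32_t *values, uint32_t count);
  void    store_flags(uint16_t flags);

Units built with a 16-bit "int" and units built with a 32-bit "int" could
then call these functions without any mangling at all.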
Provided that it's possible to specify data types with particular sizes and
semantics when needed, having other more "flexible" ways of specifying data
types may improve efficiency as well, especially if it's possible to give
compilers more flexibility than they now have (e.g. declare a type which
will need to hold values -5000..+5000, but which a compiler may replace with
a larger type at its convenience). On many processors, code like:
T array[50000];
for (unsigned i=0; i<50000; i++) array[i]=1234;
will run faster if T is int16_t than if it's int32_t, but code like:
T foo;
....
foo++;
if (foo) ...;
would run slower with int16_t than int32_t [on a system where "int"
is 32 bits, incrementing an int16_t that holds 32767 would have to yield
-32768 (unless the implementation documents some other conversion), and
forcing that behavior requires truncation code that slows down even in-range
computations]. Having a type that could behave as int16_t in
the former scenario but could (at the compiler's discretion) behave as
int32_t in the latter, would allow more efficient code generation than would
be possible under today's rules.
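C99's "least" and "fast" types from <stdint.h> get part of the way there,
though they fix the choice once per type rather than letting the compiler
decide per use; a sketch of the two scenarios written with them (identifiers
are just illustrative):

  #include <stdint.h>

  /* Large array: a "least" type keeps footprint and cache pressure small. */
  static int_least16_t array[50000];

  void fill(void)
  {
      for (unsigned i = 0; i < 50000; i++)
          array[i] = 1234;
  }

  /* Scalar working variable: a "fast" type lets the implementation pick a
     register-friendly width (e.g. 32 bits), so no narrowing code is needed. */
  int_fast16_t count_until_zero(const int_least16_t *p)
  {
      int_fast16_t foo = 0;
      while (*p++ != 0)
          foo++;
      return foo;
  }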
Post by bartc
Post by s***@casperkitty.com
Compilers for the Macintosh in the 1980s were able to process "int" as
either 16 or 32 bits,
Didn't that use the 68K? That couldn't make up its mind if it was a 16-
or 32-bit device. The rest of the microcomputer world was in transition
(I didn't switch 'int' from 16-bit to 32-bit until 2002, I think).
On the 68000, most 16-bit operations had a slight speed advantage over
32-bit operations. On today's processors, even those which are without
question "64-bit" systems, operations on smaller types will still
often have a speed advantage over those on larger ones, either because
of vectorization or because of caching issues.
In addition, by the time people were writing C compilers for the Macintosh
there was already a lot of C code that had been written for 16-bit systems,
and an option to make "int" be 16 bits made it easier to port such code
for use on the Macintosh.
Post by bartc
Now, for the main processors used in computers, an int of 32 bits is
expected, but there is no pressing need for it to be 64 bits, especially
if 'long' fills that role.
Is there any reason "long long" or "int64_t" couldn't fulfill the need for
a longer type just as well?
Since the mid 1980s, a lot of code has been written with the premise that:
char -- 8
short -- 16
int -- 16 or 32
long -- 32
Obviously such types won't work on a 36-bit system, but I see no reason why
a compiler for an octet-based machine shouldn't be configurable to work with
code that expects such types.
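Before <stdint.h>, code written with those premises often carried its own
per-platform mapping; a typical arrangement (the names are illustrative, and
each typedef would be adjusted when porting) looked something like:

  typedef signed char    s8;
  typedef short          s16;
  typedef long           s32;  /* "long" rather than "int": int may be 16 or 32 */
  typedef unsigned char  u8;
  typedef unsigned short u16;
  typedef unsigned long  u32;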
Post by bartc
Post by s***@casperkitty.com
and most modern processors should have little
difficulty handling any combination of power-of-two integer sizes. If
code will be called upon in some applications to process lots of values
smaller than two billion, but in other applications it may need to
process much larger values, having it use "long" but changing the
compiler configuration based upon requirements would allow the code
to be more efficient in the first application (32-bit values can be
cached twice as efficiently as 64-bit ones) but also satisfy the needs
of the latter application when built for 64-bit "int".
You mean that the same application can be built for 32- or 64-bit
machines, but with different limitations? So that the 32-bit/2GB machine
doesn't waste effort on 64-bit ops that would never be needed because
counts won't go above 32 bits.
That would be one purpose. Though if an implementation could use name
mangling for routines that have types like "int" in their signatures
it would even be possible to combine pieces of code that have different
expectations about integer sizes into one program. The only cases that
would be particularly problematic would be those where it's necessary
to pass a va_list to code that does not expect arguments to be passed using
the largest size of object [if all arguments are physically passed as
64 bits, then code which passes a 16-bit "int" to a variadic function
would need to truncate it to 16 bits and then sign-extend it to 64,
and the code that fetches the argument wouldn't have to know or care
what the original size was].
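Standard C already works this way for the narrow types: an int16_t argument
to a variadic function undergoes the default argument promotions, and the
callee fetches it at the promoted width without knowing or caring how narrow
the caller's variables were. A minimal sketch of that:

  #include <stdarg.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Sums n values; each argument arrives promoted to int, so va_arg
     fetches int regardless of the callers' declared types. */
  static long sum_ints(int n, ...)
  {
      va_list ap;
      long total = 0;
      va_start(ap, n);
      for (int i = 0; i < n; i++)
          total += va_arg(ap, int);
      va_end(ap);
      return total;
  }

  int main(void)
  {
      int16_t a = -5, b = 32767;
      printf("%ld\n", sum_ints(2, a, b));  /* both promoted to int */
      return 0;
  }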
Post by bartc
Something like my idea of using 'intm' might come in useful here, which
adapts itself to the machine. (Doesn't size_t do that in C?)
I think size_t exists because of platforms like large- or compact-model
8086 or 80286 where pointers are 32 bits, but pointer arithmetic is
limited to a 64K range within any given segment. No individual object is
going to exceed 64K, so there's no need to have sizeof() yield a value
larger than a 16-bit "unsigned". Incidentally, on many such systems,
ptrdiff_t behaves interestingly. Given any two pointers p and q to
different parts of the same object, it's possible that the difference
between p and q will exceed the +/-32767 range of a 16-bit ptrdiff_t, but
p+(q-p)==q will hold anyway.
The Standard doesn't require that, and some pedants might claim such a
thing only works by happenstance, but I think those who designed such
implementations were well aware of the useful properties of modular
arithmetic.
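A small illustration of why that works, simulating the 16-bit offset
arithmetic with uint16_t (ptrdiff_t is wider than 16 bits on modern hosted
implementations, so the 8086-style segment offsets here are just stand-ins,
and the int16_t conversion assumes the usual two's-complement wraparound):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      /* Offsets of p and q within one 64K segment, 40000 bytes apart. */
      uint16_t p_off = 1000;
      uint16_t q_off = 41000;

      /* q - p truncated to 16 bits and read as a signed 16-bit difference:
         the apparent value wraps to a negative number. */
      int16_t diff = (int16_t)(uint16_t)(q_off - p_off);
      printf("apparent q - p = %d\n", (int)diff);        /* -25536 */

      /* ...yet adding it back modulo 2^16 recovers q's offset exactly. */
      uint16_t back = (uint16_t)(p_off + (uint16_t)diff);
      printf("p + (q - p) == q? %s\n", back == q_off ? "yes" : "no");
      return 0;
  }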