Discussion: The required minimum omega for double
Stefan Ram
2017-02-10 00:52:50 UTC
Newsgroups: comp.lang.c,comp.lang.c++

#include <float.h>

Some implementations have this property:

A double has at least 14 significant digits (about 14-16) and

9007199254740991.

is the largest double value that can be added to 1.0
giving the "next" double value

9007199254740992.

(when one adds 1.0 again, the value will not change
anymore).

I don't know whether there is a common name for such a
value. I will call it (just for this post) "omega".
(In the above example, the omega is 9007199254740991.)

Now, the standard does not require 14 significant
digits, but 10 (DBL_DIG).

I wonder whether one can somehow find the smallest
omega for the type double that is required by the
standard.

Clearly, omega >= 0 and omega <= DBL_MAX.

Ben Bacarisse
2017-02-10 01:15:50 UTC
Post by Stefan Ram
#include <float.h>
A double has at least 14 significant digits (about 14-16) and
9007199254740991.
I.e. 2^53 - 1 or, in most C implementations these days

(1llu << DBL_MANT_DIG) - 1

This works when FLT_RADIX is 2, but it need not be.
Post by Stefan Ram
is the largest double value that can be added to 1.0
giving the "next" double value
9007199254740992.
(when one adds 1.0 again, the value will not change
anymore).
I don't know whether there is a common name for such a
value. I will call it (just for this post) "omega".
(In the above example, the omega is 9007199254740991.)
Now, the standard does not require 14 significant
digits, but 10 (DBL_DIG).
I wonder whether one can somehow find the smallest
omega for the type double that is required by the
standard.
I don't think you can get more than an approximation to it and probably
quite a crude one at that. The standard does not give minimum values
for DBL_MANT_DIG as you have no doubt found out already.

<snip>
--
Ben.
Robert Wessel
2017-02-10 05:58:01 UTC
On Fri, 10 Feb 2017 01:15:50 +0000, Ben Bacarisse
Post by Ben Bacarisse
Post by Stefan Ram
#include <float.h>
A double has at least 14 significant digits (about 14-16) and
9007199254740991.
I.e. 2^53 - 1 or, in most C implementations these days
(1llu << DBL_MANT_DIG) - 1
This works when FLT_RADIX is 2, but it need not be.
(pow(FLT_RADIX, DBL_MANT_DIG) - 1)

Should get close. That does have the disadvantage that it can't be
computed at compile time, and that the result is an FP number.
Post by Ben Bacarisse
Post by Stefan Ram
<snip>
I wonder whether one can somehow find the smallest
omega for the type double that is required by the
standard.
I don't think you can get more than an approximation to it and probably
quite a crude one at that. The standard does not give minimum values
for DBL_MANT_DIG as you have no doubt found out already.
I'm not sure what the OP is asking, but if it's about the minimum
possible "omega" on a conforming implementation (rather than the
actual omega of a particular implementation), then the answer would
appear to be approximately:

(pow(10,DBL_DIG) - 1)
Scott Lurndal
2017-02-10 14:57:49 UTC
Post by Robert Wessel
On Fri, 10 Feb 2017 01:15:50 +0000, Ben Bacarisse
(pow(FLT_RADIX, DBL_MANT_DIG) - 1)
Should get close. That does have the disadvantage that it can't be
computed at compile time, and that the result is an FP number.
With GCC you can do this as a compile time constexpr.

e.g., something similar to:

constexpr
unsigned int log2(uint64_t n) { return (n<=1) ? 0: (64-__builtin_clzll(n-1)); };
Robert Wessel
2017-02-12 07:52:42 UTC
Post by Scott Lurndal
Post by Robert Wessel
On Fri, 10 Feb 2017 01:15:50 +0000, Ben Bacarisse
(pow(FLT_RADIX, DBL_MANT_DIG) - 1)
Should get close. That does have the disadvantage that it can't be
computed at compile time, and that the result is an FP number.
With GCC you can do this as a compile time constexpr.
constexpr
unsigned int log2(uint64_t n) { return (n<=1) ? 0: (64-__builtin_clzll(n-1)); };
That's not the same thing - the above uses pow(), because FLT_RADIX
does not have to be 2. And it's an exponential function, not a
logarithm.

If you're implying that you could use a logarithm as part of that
computation, something like:

a**b = 2 ** (b * lg(a))

and generating the logarithm with a clz, and then the exponentiation
with a left shift.

That doesn't work because no integer approximation of the lg will be
close enough to create anywhere near the correct value for the
exponentiation. An exception being where (a) is a power of two.

Now while I suppose someone could define a pow() that is, itself, a
constexpr, I've never seen it done.
Scott Lurndal
2017-02-14 13:39:03 UTC
Post by Robert Wessel
<snip>
That's not the same thing - the above uses pow(), because FLT_RADIX
does not have to be 2. And it's an exponential function, not a
logarithm.
"something like" in this case means call 'pow' directly in
a constexpr function. It will be evaluated at compile time.
Robert Wessel
2017-02-14 14:02:06 UTC
Post by Scott Lurndal
Post by Robert Wessel
<snip>
That's not the same thing - the above uses pow(), because FLT_RADIX
does not have to be 2. And it's an exponential function, not a
logarithm.
"something like" in this case means call 'pow' directly in
a constexpr function. It will be evaluated at compile time.
It would have to be a custom version of pow(), one presumably limited
to integer arguments, that would be implemented as a constexpr
function. pow() itself is not, AFAIK, valid in a constexpr function.
Scott Lurndal
2017-02-14 17:30:12 UTC
Post by Robert Wessel
<snip>
It would have to be a custom version of pow(), one presumably limited
to integer arguments, that would be implemented as a constexpr
function. pow() itself is not, AFAIK, valid in a constexpr function.
$ cat /tmp/a.cpp
#include <float.h>
#include <math.h>

constexpr double omega(void) { return pow(FLT_RADIX, DBL_MANT_DIG) - 1; }

int
main(int argc, const char **argv, const char **envp)
{
double o = omega();

return (int)o;
}

$ g++ -std=c++11 -o /tmp/a /tmp/a.cpp
$
Robert Wessel
2017-02-14 23:39:41 UTC
Post by Scott Lurndal
<snip>
$ cat /tmp/a.cpp
#include <float.h>
#include <math.h>
constexpr double omega(void) { return pow(FLT_RADIX, DBL_MANT_DIG) - 1; }
int
main(int argc, const char **argv, const char **envp)
{
double o = omega();
return (int)o;
}
$ g++ -std=c++11 -o /tmp/a /tmp/a.cpp
$
While GCC (6.3) does compile that (and with -O2, actually reduces it
to a constant), ICC (17) does not, and Clang (3.9.1) accepts it, but
appears to still generate a call to pow() (more specifically it
converts the tail call in main to a jump to pow).

My question is: is this actually required by the standard? GCC seems
to take a tour through some of the "promoted" stuff, which I really
don't understand. Unless I've missed it, I really haven't seen
anything like a list of math.h functions that are required to be
constexpr (assuming appropriate inputs). Obviously a compiler can
provide an extension supplying a constexpr pow().
Tim Rentsch
2017-02-12 18:16:09 UTC
Post by Robert Wessel
On Fri, 10 Feb 2017 01:15:50 +0000, Ben Bacarisse
<snip>
(pow(FLT_RADIX, DBL_MANT_DIG) - 1)
Should get close. That does have the disadvantage that it can't be
computed at compile time, and that the result is an FP number.
Since DBL_MANT_DIG is an integer, this expression can be computed
at compile time, assuming the value of DBL_MANT_DIG is within a
reasonable range (say, less than 1,000). The resulting expansion
is usable (thank goodness!) as a constant expression.
Post by Robert Wessel
<snip>
I'm not sure what the OP is asking, but if it's about the minimum
possible "omega" on a conforming implementation (rather than the
actual omega of a particular implementation), then the answer would
appear to be approximately:

(pow(10,DBL_DIG) - 1)
I believe this statement is correct, if we consider only those
implementations whose floating-point representations agree with
the model of section 5.2.4.2.2.

If we consider other possible representation schemes, then the
question gets more complicated, and I'm not sure what that means
for a minimal value for omega. For example, I believe the
Standard allows, and is meant to allow, floating-point numbers to
be represented as logarithms. I don't know what that implies for
omega, and probably it depends on the details of how omega is
formally defined. I put together a short program that calculates
omega exactly in a way that I believe corresponds to what Stefan
was trying to get at. (Note that this is omega for a particular
implementation, not a minimal omega.) That program could be used
to define omega formally, but other plausible programs might give
different results, especially if, e.g., a logarithm representation
is used.
Juha Nieminen
2017-02-13 07:33:44 UTC
Post by Ben Bacarisse
This works when FLT_RADIX is 2, but it need not be.
On which practical system isn't it?

Of course, for optimization you could use #if to select the integer
calculation when FLT_RADIX is 2, and the floating-point one otherwise.
Robert Wessel
2017-02-13 10:07:00 UTC
Permalink
Raw Message
On Mon, 13 Feb 2017 07:33:44 +0000 (UTC), Juha Nieminen
Post by Juha Nieminen
Post by Ben Bacarisse
This works when FLT_RADIX is 2, but it need not be.
In which practical system it isn't?
Of course for optimization you could use #if to use the integer
calculation, else the floating point one.
On IBM mainframes, you can use either traditional hex FP or IEEE
binary FP for floats, doubles and long doubles (based on a compiler
option). FLT_RADIX changes appropriately.

The system also supports IEEE decimal floating point, but I'm not
aware of any way you can make ordinary floats use decimal FP (you
actually do define such items with "_Decimal64", etc., and additional
constants are defined in decimal.h).
James Kuyper
2017-02-10 18:06:12 UTC
Post by Stefan Ram
#include <float.h>
A double has at least 14 significant digits (about 14-16) and
9007199254740991.
is the largest double value that can be added to 1.0
giving the "next" double value
Do you really mean "next double value" or "next integer"? The next
integer is 1 higher than Omega. The "next double value" will, in
general, be Omega+FLT_RADIX. I'm going to explain below why "next double
value" is problematic. Then I'll go ahead on the assumption that you
meant "next integer".
Post by Stefan Ram
9007199254740992.
(when one adds 1.0 again, the value will not change
anymore).
I don't know whether there is a common name for such a
value. I will call it (just for this post) "omega".
(In the above example, the omega is 9007199254740991.)
Now, the standard does not require 14 significant
digits, but 10 (DBL_DIG).
I wonder whether one can somehow find the smallest
omega for the type double that is required by the
standard.
Clearly, omega >= 0 and omega <= DBL_MAX.
You've defined Omega in terms of what happens when you add 1.0 to the
number. However, that will depend upon the rounding mode.

Consider the smallest integer for which the next integer has no
representation as a double; let's call it Omega1, to keep it distinct
from your Omega.

Omega1 and Omega1+FLT_RADIX will both be representable. If FLT_RADIX==2,
those values will both be equally far from the mathematical value of
Omega1 + 1. The C standard imposes NO requirements of its own on the
accuracy of such floating point expressions (5.2.4.2.2p6). None -
whatsoever. In particular, this means that a fully conforming
implementation of C is allowed to implement floating point math so
inaccurately that DBL_MAX-DBL_MIN < DBL_MIN - DBL_MAX.
However, if __STDC_IEC_559__ is pre#defined by the implementation, then
the requirements of annex F apply (6.10.8.3p1), which are essentially
equivalent to IEC 60559:1989, which is equivalent to ANSI/IEEE 754-1985
(F1p1). In that case, if FLT_RADIX == 2, and you add 1.0 to Omega1, the
result will be rounded to a value of either Omega1 or Omega1+2,
depending upon the current rounding mode. If the rounding mode is
FE_TONEAREST, FE_DOWNWARD, or FE_TOWARDZERO, then it will round to
Omega1, and that will be true of every value up to 2*Omega1, so 2*Omega1
will be the quantity you define as Omega. On the other hand, if the
rounding mode is FE_UPWARD, then Omega is the same as Omega1.

If you meant "next integer", that ambiguity disappears. Regardless of
rounding mode, Omega1 is the smallest integer to which you can add
1.0 without getting the value of the next integer - because, by
definition, that next integer cannot be represented.

Section 5.2.4.2.2 specifies a model for how floating point types are
represented. That model need not actually be followed, but the required
behavior of floating point operations, and the ranges of representable
values are specified in terms of that model; a floating point
representation for which that model is sufficiently bad might not have
any way of correctly implementing the standard's requirements. So I'll
restrict my discussion to implementations for which that model is
perfectly accurate.
The standard uses subscripts, superscripts, and Greek letters in its
description of the model, which don't necessarily display meaningfully
in a usenet message. I'll indicate subscripts and superscripts by
preceding them with _ and ^, respectively, and I'll replace the
summation with Sum(variable, lower limit, upper limit, expression).
The following parameters are used to define the model for each
floating-point type:
s sign (+/-1)
b base or radix of exponent representation (an integer > 1)
e exponent (an integer between a minimum e_min
and a maximum e_max )
p precision (the number of base-b digits in the significand)
f_k nonnegative integers less than b (the significand digits)
x = s b^e Sum(k, 1, p, f_k b^-k) e_min <= e <= e_max
The term in that sum with the smallest value is the one for k==p. f_p
gets multiplied by both b^e and b^-p, and therefore by b^(e-p). The
lowest integer that cannot be represented exactly is one which would
require a precision p+1 to represented, because b^(e-p) is greater than
1. Therefore, the Omega1 must have e == p+1, f_1 == 1, and f_k == 0 for
all other values of k. Therefore,

Omega1 = (+1) * b^(p+1) * 1*b^(-1) == b^p

== b/b^(1-p) == FLT_RADIX/DBL_EPSILON

Note: even on implementations that pre#define __STDC_IEC_559__, a
certain amount of inaccuracy is still permitted in both the
interpretation of floating point constants and in floating point
expressions. Jut barely enough, in fact, to allow FLT_RADIX/DBL_EPSILON
to come out as Omega1-1. However, that shouldn't happen if those macros
are defined as hexadecimal floating point constants, which must be
converted exactly if they can be represented exactly.

The smallest permitted value for FLT_RADIX is 2, and the maximum
permitted value for DBL_EPSILON is 1e-9, so Omega1 cannot be less than
2/1e-9. A value for DBL_EPSILON of exactly 1e-9 is not possible if
FLT_RADIX is 2; the closest you can get is with p=31, so DBL_EPSILON =
2^(-30) == 1.0/(1,073,741,824), so the smallest permitted value for
Omega1 is 2/(2^-30) == 2,147,483,648.
Tim Rentsch
2017-02-12 17:52:31 UTC
Post by James Kuyper
<snip>
The smallest permitted value for FLT_RADIX is 2, and the maximum
permitted value for DBL_EPSILON is 1e-9, so Omega1 cannot be less than
2/1e-9. A value for DBL_EPSILON of exactly 1e-9 is not possible if
FLT_RADIX is 2; the closest you can get is with p=31, so DBL_EPSILON =
2^(-30) == 1.0/(1,073,741,824), so the smallest permitted value for
Omega1 is 2/(2^-30) == 2,147,483,648.
I believe this analysis is not correct. The Standard gives a
minimum value for DBL_DIG of 10. Any implementation that uses a
floating-point representation matching the one described in
section 5.2.4.2.2, and has DBL_DIG at least 10, would have to be
able to represent all integer values up to 9,999,999,999 (and
represent them exactly). Unless you are using some idea of omega
that isn't the same as what I understand Stefan Ram to be asking,
omega must be at least 9,999,999,999 (again, under the assumption
that implementations follow the model of 5.2.4.2.2).
j***@verizon.net
2017-02-12 19:24:15 UTC
...
Post by Tim Rentsch
Post by James Kuyper
Consider the smallest integer for which the next integer has no
representation as a double, lets call it Omega1, to keep it distinct
from your Omega.
...
Post by Tim Rentsch
<snip>
I believe this analysis is not correct. The Standard gives a
minimum value for DBL_DIG of 10. Any implementation that uses a
floating-point representation matching the one described in
section 5.2.4.2.2, and has DBL_DIG at least 10, would have to be
able to represent all integer values up to 9,999,999,999 (and
represent them exactly). Unless you are using some idea of omega
that isn't the same as what I understand Stefan Ram to be asking,
omega must be at least 9,999,999,999 (again, under the assumption
that implementations follow the model of 5.2.4.2.2).
I specified precisely what Omega1 is, and I named it differently from
Stefan Ram's Omega because it is not quite the same quantity.

The standard specifies that DBL_DIG is

p log10 b if b is a power of 10
⎣(p − 1) log10 b⎦ otherwise

In case the characters at the beginning and end of that second expression don't
display properly, or if they are unfamiliar to the reader (you probably
recognize them, but other readers might not), what that expression means is
"the greatest integer that is less than or equal to (p-1)*log10(b)".

log10(Omega1), as I defined it above, is p*log10(b), and is therefore strictly
greater than DBL_DIG, unless FLT_RADIX is a power of 10, in which case they are
exactly equal. A requirement that DBL_DIG be at least 10 therefore constrains
the value of Omega1 more tightly than the separate constraints on FLT_RADIX and
DBL_EPSILON do, and I missed that fact. That means that the combination of
FLT_RADIX and DBL_EPSILON that I used in my calculation is not conforming.

For any conforming implementation, FLT_RADIX/DBL_EPSILON is the correct
formula for Omega1, as I have defined that term, and unless FLT_RADIX is
a power of 10, 10^DBL_DIG underestimates that value by at least a factor
of FLT_RADIX.
Tim Rentsch
2017-02-16 15:15:46 UTC
Post by j***@verizon.net
...
Post by Tim Rentsch
Post by James Kuyper
Consider the smallest integer for which the next integer has no
representation as a double, lets call it Omega1, to keep it distinct
from your Omega.
...
Post by Tim Rentsch
Post by James Kuyper
Section 5.2.4.2.2 specifies a model for how floating point types are
represented. That model need not actually be followed, but the required
behavior of floating point operations, and the ranges of representable
values are specified in terms of that model; a floating point
representation for which that model is sufficiently bad might not have
any way of correctly implementing the standard's requirements. So I'll
restrict my discussion to implementations for which that model is
perfectly accurate.
The standard uses subscripts, superscripts, and greek letters in its
description of the model, which don't necessarily display meaningfully
in a usenet message. I'll indicate subscripts and superscripts by
preceeding them with _ and ^, respectively, and I'll replace the
summation with Sum(variable, lower limit, upper limit, expression).
Post by Stefan Ram
The following parameters are used to
s sign (+/-1)
b base or radix of exponent representation (an integer > 1)
e exponent (an integer between a minimum e_min
and a maximum e_max )
p precision (the number of base-b digits in the significand)
f_k nonnegative integers less than b (the significand digits)
x = s b^e Sum(k, 1, p, f_k b^-k) e_min <= e <= e_max
The term in that sum with the smallest value is the one for k==p. f_p
gets multiplied by both b^e and b^-p, and therefore by b^(e-p). The
lowest integer that cannot be represented exactly is one which would
require a precision p+1 to represented, because b^(e-p) is greater than
1. Therefore, the Omega1 must have e == p+1, f_1 == 1, and f_k == 0 for
all other values of k. Therefore,
Omega1 = (+1) * b^(p+1) * 1*b^(-1) == b^p
== b/b^(1-p) == FLT_RADIX/DBL_EPSILON
Note: even on implementations that pre#define __STDC_IEC_559__, a
certain amount of inaccuracy is still permitted in both the
interpretation of floating point constants and in floating point
expressions. Jut barely enough, in fact, to allow FLT_RADIX/DBL_EPSILON
to come out as Omega1-1. However, that shouldn't happen if those macros
are defined as hexadecimal floating point constants, which must be
converted exactly if they can be represented exactly.
The smallest permitted value for FLT_RADIX is 2, and the maximum
permitted value for DBL_EPSILON is 1e-9, so Omega1 cannot be less than
2/1e-9. A value for DBL_EPSILON of exactly 1e-9 is not possible if
FLT_RADIX is 2; the closest you can get is with p=31, so DBL_EPSILON =
2^(-30) == 1.0/1,073,741,824, and the smallest permitted value for
Omega1 is 2/(2^-30) == 2,147,483,648.
I believe this analysis is not correct. The Standard gives a
minimum value for DBL_DIG of 10. Any implementation that uses a
floating-point representation matching the one described in
section 5.2.4.2.2, and has DBL_DIG at least 10, would have to be
able to represent all integer values up to 9,999,999,999 (i.e.,
represent them exactly). Unless you are using some idea of omega
that isn't the same as what I understand Stefan Ram to be asking,
omega must be at least 9,999,999,999 (again, under the assumption
that implementations follow the model of 5.2.4.2.2).
I specified precisely what Omega1 is, and I named it differently than Stefan
Ram's Omega precisely because it is not quite the same quantity.
The standard specifies that DBL_DIG is
p log10 b if b is a power of 10
⌊(p − 1) log10 b⌋ otherwise
In case the characters at the beginning and end of that second expression don't
display properly, or if they are unfamiliar to the reader (you probably
recognize them, but other readers might not), what that expression means is
"the greatest integer that is less than or equal to (p-1)*log10(b)".
log10(Omega1), as I defined it above, is p*log10(b), and is therefore strictly
greater than DBL_DIG, unless FLT_RADIX is a power of 10, in which case they are
exactly equal. A requirement that DBL_DIG be at least 10 therefore constrains
the value of Omega1 more tightly than the separate constraints on FLT_RADIX and
DBL_EPSILON do, and I missed that fact. That means that the combination of
FLT_RADIX and DBL_EPSILON that I used in my calculation is not conforming.
For any conforming implementation, FLT_RADIX/DBL_EPSILON is the correct
formula for Omega1, as I have defined that term, and unless FLT_RADIX
is a power of 10, 10^DBL_DIG underestimates that value by at least a
factor of FLT_RADIX.
Yes, my point was not about the formula for omega1 in terms of
FLT_RADIX and DBL_EPSILON, which I think you have done correctly,
but about what is the minimum possible value for omega1 (assuming a
conforming implementation). I think you now agree that omega1
cannot be smaller than 9,999,999,999 - right? If so then that
was my point and we can be done.
j***@verizon.net
2017-02-16 19:26:01 UTC
Permalink
Raw Message
...
Post by Tim Rentsch
Post by j***@verizon.net
exactly equal. A requirement that DBL_DIG be at least 10 therefore constrains
the value of Omega1 more tightly than the separate constraints on FLT_RADIX and
DBL_EPSILON do, and I missed that fact. That means that the combination of
FLT_RADIX and DBL_EPSILON that I used in my calculation is not conforming.
...
Post by Tim Rentsch
Yes, my point was not about the formula for omega1 in terms of
FLT_RADIX and DBL_EPSILON, which I think you have done correctly,
but about what is the minimum possible value for omega1 (assuming a
conforming implementation). I think you now agree that omega1
cannot be smaller than 9,999,999,999 - right? If so then that
was my point and we can be done.
Yes, as I said in the text quoted above, "I missed that fact".
Tim Rentsch
2017-02-19 18:22:15 UTC
Permalink
Raw Message
Post by j***@verizon.net
...
Post by Tim Rentsch
Post by j***@verizon.net
exactly equal. A requirement that DBL_DIG be at least 10 therefore constrains
the value of Omega1 more tightly than the separate constraints on FLT_RADIX and
DBL_EPSILON do, and I missed that fact. That means that the combination of
FLT_RADIX and DBL_EPSILON that I used in my calculation is not conforming.
...
Post by Tim Rentsch
Yes, my point was not about the formula for omega1 in terms of
FLT_RADIX and DBL_EPSILON, which I think you have done correctly,
but about what is the minimum possible value for omega1 (assuming a
conforming implementation). I think you now agree that omega1
cannot be smaller than 9,999,999,999 - right? If so then that
was my point and we can be done.
Yes, as I said in the text quoted above, "I missed that fact".
I saw the statement about DBL_DIG constraining Omega1 more
tightly. I did not see any statement giving any specific
revised value, which is what I was asking about.
j***@verizon.net
2017-02-19 19:04:40 UTC
Permalink
Raw Message
Post by Tim Rentsch
Post by j***@verizon.net
...
Post by Tim Rentsch
Post by j***@verizon.net
exactly equal. A requirement that DBL_DIG be at least 10 therefore constrains
the value of Omega1 more tightly than the separate constraints on FLT_RADIX and
DBL_EPSILON do, and I missed that fact. That means that the combination of
FLT_RADIX and DBL_EPSILON that I used in my calculation is not conforming.
...
Post by Tim Rentsch
Yes, my point was not about the formula for omega1 in terms of
FLT_RADIX and DBL_EPSILON, which I think you have done correctly,
but about what is the minimum possible value for omega1 (assuming a
conforming implementation). I think you now agree that omega1
cannot be smaller than 9,999,999,999 - right? If so then that
was my point and we can be done.
Yes, as I said in the text quoted above, "I missed that fact".
I saw the statement about DBL_DIG constraining Omega1 more
tightly. I did not see any statement giving any specific
revised value, which is what I was asking about.
I didn't give a specific revised value, because that would require some time to check out the possibilities, and I didn't have time to do that when I posted that message. Your argument supports the claim that Omega1 >= 9,999,999,999, but does not establish 9,999,999,999 as the minimum permitted value of Omega1. It is not a permitted value.

Omega1 is given exactly by b^p. The only positive integer values of b and p for which b^p == 9,999,999,999 are b=9,999,999,999 and p=1. For such an implementation, DBL_EPSILON == 1, which greatly exceeds the upper limit on DBL_EPSILON of 1e-9.

On the other hand, b=10, p=10 give DBL_EPSILON == 1E-9, and DBL_DIG==10, both of which are barely acceptable, and correspond to Omega1 == 1E10. I think that's the true minimum value, though I might have missed something again. It occurs to me, looking over the requirements, that b==10, p==10 might have been the boundary case that the committee was using to set the requirements.
Tim Rentsch
2017-02-23 20:26:55 UTC
Permalink
Raw Message
Post by j***@verizon.net
Post by Tim Rentsch
Post by j***@verizon.net
...
Post by Tim Rentsch
Post by j***@verizon.net
exactly equal. A requirement that DBL_DIG be at least 10 therefore constrains
the value of Omega1 more tightly than the separate constraints on FLT_RADIX and
DBL_EPSILON do, and I missed that fact. That means that the combination of
FLT_RADIX and DBL_EPSILON that I used in my calculation is not conforming.
...
Post by Tim Rentsch
Yes, my point was not about the formula for omega1 in terms of
FLT_RADIX and DBL_EPSILON, which I think you have done correctly,
but about what is the minimum possible value for omega1 (assuming a
conforming implementation). I think you now agree that omega1
cannot be smaller than 9,999,999,999 - right? If so then that
was my point and we can be done.
Yes, as I said in the text quoted above, "I missed that fact".
I saw the statement about DBL_DIG constraining Omega1 more
tightly. I did not see any statement giving any specific
revised value, which is what I was asking about.
I didn't give a specific revised value, because that would require
some time to check out the possibilities, and I didn't have time to
do that when I posted that message. Your argument supports the claim
that Omega1 >= 9,999,999,999, but does not establish 9,999,999,999 as
the minimum permitted value of Omega1. It is not a permitted value.
Omega1 is given exactly by b^p. The only positive integer values of
b and p for which b^p == 9,999,999,999 are b=9,999,999,999 and p=1.
For such an implementation, DBL_EPSILON == 1, which greatly exceeds
the upper limit on DBL_EPSILON of 1e-9.
On the other hand, b=10, p=10 give DBL_EPSILON == 1E-9, and
DBL_DIG==10, both of which are barely acceptable, and correspond to
Omega1 == 1E10. I think that's the true minimum value, though I
might have missed something again. It occurs to me, looking over the
requirements, that b==10, p==10 might have been the boundary case
that the committee was using to set the requirements.
It would have saved us both some trouble if you had waited a bit
before responding so you would have had time to answer my
question, rather than giving a short response with essentially
no information.

And it would have been even more helpful if you had taken the
time to look back at the original posting about omega, which has
an off-by-1 relative to your definition of Omega1, which could
explain my stated value of 9,999,999,999 (and which in fact it
does).

For someone who supposedly prides himself on a lawyer-like style
of speaking, your comments evidence some of the more annoying
aspects of lawyer-ish speech. Not all lawyers talk that way, but
some do, and they are often (rightly, IMO) not highly regarded.
So you might want to aim a little higher.
j***@verizon.net
2017-02-24 19:08:28 UTC
Permalink
Raw Message
Post by Tim Rentsch
Post by j***@verizon.net
Post by Tim Rentsch
Post by j***@verizon.net
...
Post by Tim Rentsch
Post by j***@verizon.net
exactly equal. A requirement that DBL_DIG be at least 10 therefore constrains
the value of Omega1 more tightly than the separate constraints on FLT_RADIX and
DBL_EPSILON do, and I missed that fact. That means that the combination of
FLT_RADIX and DBL_EPSILON that I used in my calculation is not conforming.
...
Post by Tim Rentsch
Yes, my point was not about the formula for omega1 in terms of
FLT_RADIX and DBL_EPSILON, which I think you have done correctly,
but about what is the minimum possible value for omega1 (assuming a
conforming implementation). I think you now agree that omega1
cannot be smaller than 9,999,999,999 - right? If so then that
was my point and we can be done.
Yes, as I said in the text quoted above, "I missed that fact".
I saw the statement about DBL_DIG constraining Omega1 more
tightly. I did not see any statement giving any specific
revised value, which is what I was asking about.
I didn't give a specific revised value, because that would require
some time to check out the possibilities, and I didn't have time to
do that when I posted that message. Your argument supports the claim
that Omega1 >= 9,999,999,999, but does not establish 9,999,999,999 as
the minimum permitted value of Omega1. It is not a permitted value.
Omega1 is given exactly by b^p. The only positive integer values of
b and p for which b^p == 9,999,999,999 are b=9,999,999,999 and p=1.
For such an implementation, DBL_EPSILON == 1, which greatly exceeds
the upper limit on DBL_EPSILON of 1e-9.
On the other hand, b=10, p=10 give DBL_EPSILON == 1E-9, and
DBL_DIG==10, both of which are barely acceptable, and correspond to
Omega1 == 1E10. I think that's the true minimum value, though I
might have missed something again. It occurs to me, looking over the
requirements, that b==10, p==10 might have been the boundary case
that the committee was using to set the requirements.
It would have saved us both some trouble if you had waited a bit
before responding so you would have had time to answer my
question, rather than giving a short response with essentially
no information.
The only question you wrote in the message I was responding to was "I think you
now agree that omega1 cannot be smaller than 9,999,999,999 - right?" That's a
yes or no question, to which I responded with "Yes", followed by a brief
explanation. I'll admit that a Yes-or-No answer conveys, in itself, only one
bit of information, but it's precisely the amount of information that was
requested. If you wanted more than one bit, you should have asked for more than
one bit (as you eventually did, in your follow-up).
Post by Tim Rentsch
And it would have been even more helpful if you had taken the
time to look back at the original posting about omega, which has
an off-by-1 relative to your definition of Omega1, which could
explain my stated value of 9,999,999,999 (and which in fact it
does).
Not really. First of all, I was very careful to distinguish Omega1 from Omega,
precisely because it is NOT guaranteed to be an off-by-1 difference. Depending
upon your interpretation of the OP's definition of Omega, it can depend upon
the rounding mode, and in some cases can be twice as big as my Omega1. I also
shifted it by 1 from the most obviously parallel definition, because I kept
finding myself referring to Omega1-1 in my argument. I simplified things by
redefining Omega1 to include the -1. It also made a number of statements easier
to express, because as I ended up defining it, Omega1 is guaranteed to be
representable as a double, whereas Omega1+1 is guaranteed to not be
representable, so when I was using my first definition of Omega1, I had to
emphasize that Omega1-1 is inherently NOT a C expression, but a mathematical
one.

Secondly, I was responding to comments of yours that were expressed in terms of
Omega1, not Omega. I therefore reasonably assumed that you were accepting my
change in the terms of the discussion.

Thirdly, your argument for a lower limit on Omega1 was based upon the minimum
allowed value of DBL_DIG. The value of DBL_DIG is defined, when FLT_RADIX is
not a power of 10, in terms of taking the greatest integer less than or equal
to the value of a transcendental expression in terms of the basic parameters
describing the floating point representation. Therefore, your argument
establishes only a lower limit, and not necessarily the true minimum
permissible value of Omega1. You did not take the additional steps needed to
establish the true minimum value.
Post by Tim Rentsch
For someone who supposedly prides himself on a lawyer-like style
of speaking, your comments evidence some of the more annoying
aspects of lawyer-ish speech. Not all lawyers talk that way, but
some do, and they are often (rightly, IMO) not highly regarded.
So you might want to aim a little higher.
It would be a more useful criticism to identify the specific lawyer-ish
features in my writing that bother you. As it is, I have no clear idea what
you're criticizing. I suspect I would not be inclined to change the things that
annoy you, but I won't know for sure without more precise identification.
Tim Rentsch
2017-04-17 20:28:34 UTC
Permalink
Raw Message
Post by j***@verizon.net
Post by Tim Rentsch
Post by j***@verizon.net
Post by Tim Rentsch
Post by j***@verizon.net
...
Post by Tim Rentsch
Post by j***@verizon.net
exactly equal. A requirement that DBL_DIG be at least 10 therefore constrains
the value of Omega1 more tightly than the separate constraints on FLT_RADIX and
DBL_EPSILON do, and I missed that fact. That means that the combination of
FLT_RADIX and DBL_EPSILON that I used in my calculation is not conforming.
...
Post by Tim Rentsch
Yes, my point was not about the formula for omega1 in terms of
FLT_RADIX and DBL_EPSILON, which I think you have done correctly,
but about what is the minimum possible value for omega1 (assuming a
conforming implementation). I think you now agree that omega1
cannot be smaller than 9,999,999,999 - right? If so then that
was my point and we can be done.
Yes, as I said in the text quoted above, "I missed that fact".
I saw the statement about DBL_DIG constraining Omega1 more
tightly. I did not see any statement giving any specific
revised value, which is what I was asking about.
I didn't give a specific revised value, because that would require
some time to check out the possibilities, and I didn't have time to
do that when I posted that message. Your argument supports the claim
that Omega1 >= 9,999,999,999, but does not establish 9,999,999,999 as
the minimum permitted value of Omega1. It is not a permitted value.
Omega1 is given exactly by b^p. The only positive integer values of
b and p for which b^p == 9,999,999,999 are b=9,999,999,999 and p=1.
For such an implementation, DBL_EPSILON == 1, which greatly exceeds
the upper limit on DBL_EPSILON of 1e-9.
On the other hand, b=10, p=10 give DBL_EPSILON == 1E-9, and
DBL_DIG==10, both of which are barely acceptable, and correspond to
Omega1 == 1E10. I think that's the true minimum value, though I
might have missed something again. It occurs to me, looking over the
requirements, that b==10, p==10 might have been the boundary case
that the committee was using to set the requirements.
It would have saved us both some trouble if you had waited a bit
before responding so you would have had time to answer my
question, rather than giving a short response with essentially
no information.
The only question you wrote in the message I was responding to was
"I think you now agree that omega1 cannot be smaller than
9,999,999,999 - right?" That's a yes or no question, to which I
responded with "Yes", followed by a brief explanation. I'll admit
that a Yes-or-No answer conveys, in itself, only one bit of
information, but it's precisely the amount of information that was
requested. If you wanted more than one bit, you should have asked
for more than one bit (as you eventually did, in your follow-up).
You would be doing everyone a favor if you would learn to
listen better and argue less.
Post by j***@verizon.net
Post by Tim Rentsch
And it would have been even more helpful if you had taken the
time to look back at the original posting about omega, which has
an off-by-1 relative to your definition of Omega1, which could
explain my stated value of 9,999,999,999 (and which in fact it
does).
Not really. First of all, I was very careful to distinguish
Omega1 from Omega, precisely because it is NOT guaranteed to be an
off-by-1 difference. Depending upon your interpretation of the
OP's definition of Omega, it can depend upon the rounding mode,
and in some cases can be twice as big as my Omega1. I also
shifted it by 1 from the most obviously parallel definition,
because I kept finding myself referring to Omega1-1 in my
argument. I simplified things by redefining Omega1 to include the
-1. It also made a number of statements easier to express,
because as I ended up defining it, Omega1 is guaranteed to be
representable as a double, whereas Omega1+1 is guaranteed to not
be representable, so when I was using my first definition of
Omega1, I had to emphasize that Omega1-1 is inherently NOT a C
expression, but a mathematical one.
Secondly, I was responding to comments of yours that were
expressed in terms of Omega1, not Omega. I therefore reasonably
assumed that you were accepting my change in the terms of the
discussion.
Thirdly, your argument for a lower limit on Omega1 was based upon
the minimum allowed value of DBL_DIG. The value of DBL_DIG is
defined, when FLT_RADIX is not a power of 10, in terms of taking
the greatest integer less than or equal to the value of a
transcendental expression in terms of the basic parameters
describing the floating point representation. Therefore, your
argument establishes only a lower limit, and not necessarily the
true minimum permissible value of Omega1. You did not take the
additional steps needed to establish the true minimum value.
Ditto my last comment.
Post by j***@verizon.net
Post by Tim Rentsch
For someone who supposedly prides himself on a lawyer-like style
of speaking, your comments evidence some of the more annoying
aspects of lawyer-ish speech. Not all lawyers talk that way, but
some do, and they are often (rightly, IMO) not highly regarded.
So you might want to aim a little higher.
It would be a more useful criticism to identify the specific
lawyer-ish features in my writing that bother you. As it is, I
have no clear idea what you're criticizing. I suspect I would not
be inclined to change the things that annoy you, but I won't know
for sure without more precise identification.
There's an old joke that goes something like this:

A man decides he wants to cross the Atlantic ocean in a balloon.
He sets out across the ocean, and is doing pretty well, but he
gets lost in clouds and fog. After some time he brings the
balloon down through the fog, until he can make out some land
about 100 feet below him. He sees someone on the ground, and
calls out to him:

Man in balloon: "Where am I?"

Man on ground: "You're in a balloon, 100 feet in the air."

Man in balloon: "Ahh. You must be a lawyer."

Man on ground: "Why do you say that?"

Man in balloon: "Because your answer was completely correct,
and also utterly useless."

(end of joke)

Your comments often have the property that they are, on one
level, correct, but on another level utterly useless. And what's
frustrating is I know you must be smart enough so that needn't
be so.
j***@verizon.net
2017-04-17 20:59:47 UTC
Permalink
Raw Message
On Monday, April 17, 2017 at 4:28:42 PM UTC-4, Tim Rentsch wrote:
...
Post by Tim Rentsch
You would be doing everyone a favor if you would learn to
listen better and argue less.
I do think I've been spending too much time arguing with you recently.
Real Troll
2017-04-17 21:45:00 UTC
Permalink
Raw Message
Post by j***@verizon.net
...
Post by Tim Rentsch
You would be doing everyone a favor if you would learn to
listen better and argue less.
I do think I've been spending too much time arguing with you recently.
You would do yourself justice if you stopped responding to a
two-month-old post. Tim Rentsch decided to reopen a very old post and you fell
foul of his trap.

Real Troll here!!
Tim Rentsch
2017-04-27 19:56:09 UTC
Permalink
Raw Message
Post by j***@verizon.net
...
Post by Tim Rentsch
You would be doing everyone a favor if you would learn to
listen better and argue less.
I do think I've been spending too much time arguing with you recently.
That's kind of my point. If you put more effort into
communicating and less into arguing it would be better
for both of us.
j***@verizon.net
2017-04-27 20:58:19 UTC
Permalink
Raw Message
Post by Tim Rentsch
Post by j***@verizon.net
...
Post by Tim Rentsch
You would be doing everyone a favor if you would learn to
listen better and argue less.
I do think I've been spending too much time arguing with you recently.
That's kind of my point. If you put more effort into
communicating and less into arguing it would be better
for both of us.
My response deliberately covered only half of your point.

I put a lot of effort into communicating - and as far as I can tell, it's
precisely the results of that effort which provoked your criticism of my
writing as "... evidenc[ing] some of the more annoying aspects of lawyer-ish
speech". That's quite vague - identification of those "annoying aspects" might
help, if you can convince me that being annoyed by them is justified. Simply
increasing my effort is unlikely to produce results that are more to your
liking. Feel free to be more specific about how you suggest I should re-direct
that effort. I recommend using the message I wrote which triggered your
criticism as an example, to make your suggestions more concrete.

I think it's quite likely that part of your annoyance is due to me often
reiterating things you already know. But I do that only because you've said
something that conflicts with my understanding of the situation under
discussion. That means you must disagree with either my premises or my argument
in support of my conflicting understanding. If the nature of your disagreement
is not obvious (which is often the case), I present you with those premises and
that argument so you can identify which part you disagree with; it's not
because I think you're dumb enough to be completely unaware of them.

From my point of view, it's analogous to debugging code - I go over the code
one step at a time, until I can identify a step that doesn't produce the result
I expected it to produce. Only after identifying that step can I investigate
the reason why it produced an unexpected result. It could be an error in the
code, or code which correctly implements an erroneous algorithm, or faulty
expectations on my part about what the correct results should be, among other
more complicated possibilities.
Tim Rentsch
2017-05-02 16:35:12 UTC
Permalink
Raw Message
[...]
I wanted to let you know I read your posting and do have
some remarks or suggestions to give in response. I'm
mulling over how best to express those, which may take
a day or two to put the thoughts into words. So just
to let you know this didn't fall through the cracks,
and a followup should be forthcoming shortly.
Tim Rentsch
2017-05-04 18:37:01 UTC
Permalink
Raw Message
Post by Tim Rentsch
Post by j***@verizon.net
...
Post by Tim Rentsch
You would be doing everyone a favor if you would learn to
listen better and argue less.
I do think I've been spending too much time arguing with you recently.
That's kind of my point. If you put more effort into
communicating and less into arguing it would be better
for both of us.
[I have rearranged text in what follows to aid exposition]
I put a lot of effort into communicating [...]. Simply increasing
my effort is unlikely to produce [good results]. Feel free to be
more specific about how you suggest I should re-direct that effort.
Yes that is definitely a good idea. Let me start with an example
and then afterwards try to extract some guiding principles.

There was a subpoint that came up in a recent thread "Is this
code considered to transgress the signed overflow purism?" The
OP in that thread had gone through an involved calculation to
produce a value for the size of one of the standard integer
types. He knew about sizeof but explained that he needed a
compile-time constant (this is my paraphrase, not his exact
words). Of course several people responded to explain that
sizeof does produce a value at compile time (not counting the
exceptions to that rule, which is not the issue here), so he
should have been able to use sizeof.

The difficulty is that what he needed is not just a constant
expression but one that could be used in a pre-processor
conditional, eg, #if. He didn't say that, and indeed it may be
the case that he didn't really understand the distinction. But
reading his comments let me guess about what he meant. I then
raised the question in my response, and the guess turned out to
be right, and that let the conversation advance.

That's the end of this example. It is admittedly a rather
minor example, but I hope it will help to solidify the more
abstract statements that follow.

The overarching principle is to focus on the other person. Start
with listening: What does the other person mean? This often
isn't easy because what they mean may be quite different from
what they write. The question is not what do the words mean or
what would I mean if I said what they did, but what do /they/
think they meant? Moreover, what they think may be based on a
different set of assumptions than I would make, which may be
unconscious assumptions in one or both of our cases. A huge part
of good communication is learning how to listen effectively.

The second step is: What does the other person want? This might
be broken down into two questions, (1) Do I even know what the
other person wants, and (2) What is it? Here again answering
these questions often isn't easy because of different (and
sometimes simply incorrect) assumptions, etc. If the other
person has said something that I think is flat out wrong, that
might indicate that I don't understand what they meant, or it
might mean that it's an incidental statement that is irrelevant
to what they want. Rather than go charging off down a path that
may be a wild goose chase, I think it's better to start with a
question to try to clarify the issue. If you couldn't figure out
what they meant, you might say "I'm not sure what you mean." If
you don't know what they're looking for, you might say "I can't
tell what you're looking for here; is it X, or Y, or is it
something else?". The key point is to decide what you think
the other person wants, and if there is any uncertainty raise
the question explicitly.

Now we get to the point where we start to frame a response.
Following the overarching principle (and assuming any preliminary
questions have been put forth already), start with a brief
answer to what the other person is looking for. After that
you might want to expand on the brief answer, or bring up
some related matters, or give a line of reasoning explaining
why you think some particular point is true; but start with
a summary, in the area that is of most interest to the other
person. Newspaper articles often illustrate this principle:
they start with a high-level sketch, filling in details as
they go, but generally one can stop reading at almost any
point and still have gotten a thorough picture at more-or-less
the same level of detail over the whole landscape.

There is more I might say but I'm going to stop here for now,
so as to check in with you about what's been said so far.
(What else I might say depends in part on your feedback, if
you have any.)
j***@verizon.net
2017-05-05 14:56:00 UTC
Permalink
Raw Message
Post by Tim Rentsch
Post by Tim Rentsch
Post by j***@verizon.net
...
Post by Tim Rentsch
You would be doing everyone a favor if you would learn to
listen better and argue less.
I do think I've been spending too much time arguing with you recently.
That's kind of my point. If you put more effort into
communicating and less into arguing it would be better
for both of us.
[I have rearranged text in what follows to aid exposition]
I put a lot of effort into communicating [...]. Simply increasing
my effort is unlikely to produce [good results]. Feel free to be
Replacing "results that are more to your liking" with "[good results]" implies a
value judgment (which is fine) and attributes it to me (which is not). If I work
harder to achieve a goal that I consider desirable, it is entirely likely to
produce results that I think are good. It's only the discrepancy between our two
judgments that makes increased effort on my part likely to be counterproductive
in your judgment.
Post by Tim Rentsch
more specific about how you suggest I should re-direct that effort.
Yes that is definitely a good idea. Let me start with an example
and then afterwards try to extract some guiding principles.
There was a subpoint that came up in a recent thread "Is this
code considered to transgress the signed overflow purism?" The
OP in that thread had gone through an involved calculation to
produce a value for the size of one of the standard integer
types. He knew about sizeof but explained that he needed a
compile-time constant (this is my paraphrase, not his exact
words). Of course several people responded to explain that
sizeof does produce a value at compile time (not counting the
exceptions to that rule, which is not the issue here), so he
should have been able to use sizeof.
The difficulty is that what he needed is not just a constant
expression but one that could be used in a pre-processor
conditional, eg, #if. He didn't say that, and indeed it may be
the case that he didn't really understand the distinction. But
reading his comments let me guess about what he meant. I then
raised the question in my response, and the guess turned out to
be right, and that let the conversation advance.
That's the end of this example. It is admittedly a rather
minor example, but I hope it will help to solidify the more
abstract statements that follow.
In the same message you're responding to, just after the part you quoted above,
Post by Tim Rentsch
I recommend using the message I wrote which triggered your
criticism as an example, to make your suggestions more concrete.
There's a reason why I made that recommendation - I still have no idea how the
message you criticized by saying "your comments evidence some of the more
annoying aspects of lawyer-ish speech" demonstrates a failure to follow the
Post by Tim Rentsch
The overarching principle is to focus on the other person. Start
with listening: What does the other person mean? This often
isn't easy because what they mean may be quite different from
what they write. The question is not what do the words mean or
what would I mean if I said what they did, but what do /they/
think they meant? Moreover, what they think may be based on a
different set of assumptions than I would make, which may be
unconscious assumptions in one or both of our cases. A huge part
of good communication is learning how to listen effectively.
I think I'm doing my best to understand what people say, and to clearly
communicate the fact when I think I've failed to do so. By definition, there's
nothing I can do about the case where I'm unaware of my failure.
Post by Tim Rentsch
The second step is: What does the other person want? This might
be broken down into two questions, (1) Do I even know what the
other person wants, and (2) What is it? Here again answering
these questions often isn't easy because of different (and
sometimes simply incorrect) assumptions, etc. If the other
person has said something that I think is flat out wrong, that
might indicate that I don't understand what they meant, or it
might mean that it's an incidental statement that is irrelevant
to what they want. Rather than go charging off down a path that
may be a wild goose chase, I think it's better to start with a
question to try to clarify the issue. If you couldn't figure out
what they meant, you might say "I'm not sure what you mean." If
you don't know what they're looking for, you might say "I can't
tell what you're looking for here; is it X, or Y, or is it
something else?". The key point is to decide what you think
the other person wants, and if there is any uncertainty raise
the question explicitly.
I think my own approach differs from the above only in that I tend to try to
teach the person I'm responding to the relevant concepts and terminology, so
that (if they accept the education) they'll better understand my question, and
be better equipped to answer it. That could be described as "charging off down a
path that might be a wild goose chase", if I've done a sufficiently poor job of
guessing which concepts and terminology were actually relevant to what the other
person meant.
Post by Tim Rentsch
Now we get to the point where we start to frame a response.
Following the overarching principle (and assuming any preliminary
questions have been put forth already), start with a brief
answer to what the other person is looking for. After that
you might want to expand on the brief answer, or bring up
some related matters, or give a line of reasoning explaining
why you think some particular point is true; but start with
a summary, in the area that is of most interest to the other
person. Good writers often do the same: they start with a
high-level sketch, filling in details as
they go, but generally one can stop reading at almost any
point and still have gotten a thorough picture at more-or-less
the same level of detail over the whole landscape.
On usenet, my message is necessarily formatted as a response to someone else's
message, and I try to place my response on a given point as close as possible to
the place where the person I'm responding to raised that point. That limits my scope
for applying the above advice, but I do recognize it as good advice.
Tim Rentsch
2017-05-17 21:24:58 UTC
Post by j***@verizon.net
Post by Tim Rentsch
Post by Tim Rentsch
Post by j***@verizon.net
...
Post by Tim Rentsch
You would be doing everyone a favor if you would learn to
listen better and argue less.
I do think I've been spending too much time arguing with you recently.
That's kind of my point. If you put more effort into
communicating and less into arguing it would be better
for both of us.
[I have rearranged text in what follows to aid exposition]
I'm sorry I wasn't more effective at conveying what I was trying
to say. I have a few other incidental comments but the most
important one is I'm sorry my efforts didn't turn out better.
Post by j***@verizon.net
Post by Tim Rentsch
I put a lot of effort into communicating [...]. Simply increasing
my effort is unlikely to produce [good results]. Feel free to be
Replacing "results that are more to your liking" with "[good
results]" implies a value judgment (which is fine) and attributes it
to me (which is not). [...]
I didn't mean to change the meaning by my paraphrase. I'm sorry
that my doing that confused the issue.
Post by j***@verizon.net
Post by Tim Rentsch
more specific about how you suggest I should re-direct that effort.
Yes that is definitely a good idea. Let me start with an example
and then afterwards try to extract some guiding principles.
There was a subpoint that came up in a recent thread "Is this
code considered to transgress the signed overflow purism?" The
OP in that thread had gone through an involved calculation to
produce a value for the size of one of the standard integer
types. He knew about sizeof but explained that he needed a
compile-time constant (this is my paraphrase, not his exact
words). Of course several people responded to explain that
sizeof does produce a value at compile time (not counting the
exceptions to that rule, which is not the issue here), so he
should have been able to use sizeof.
The difficulty is that what he needed is not just a constant
expression but one that could be used in a pre-processor
conditional, eg, #if. He didn't say that, and indeed it may be
the case that he didn't really understand the distinction. But
reading his comments let me guess about what he meant. I then
raised the question in my response, and the guess turned out to
be right, and that let the conversation advance.
That's the end of this example. It is admittedly a rather
minor example, but I hope it will help to solidify the more
abstract statements that follow.
In the same message you're responding to, just after the part you quoted above,
Post by Tim Rentsch
I recommend using the message I wrote which triggered your
criticism as an example, to make your suggestions more concrete.
Yes, I saw that reading your earlier message.
Post by j***@verizon.net
There's a reason why I made that recommendation - I still have no
idea how the message you criticized by saying "your comments
evidence some of the more annoying aspects of lawyer-ish speech"
I chose a different example because I thought it would provide a
better basis to illustrate and convey what I was trying to say.
I don't know if that's true but I did think so at the time.
Post by j***@verizon.net
Post by Tim Rentsch
The overarching principle is to focus on the other person. Start
with listening: What does the other person mean? This often
isn't easy because what they mean may be quite different from
what they write. The question is not what do the words mean or
what would I mean if I said what they did, but what do /they/
think they meant? Moreover, what they think may be based on a
different set of assumptions than I would make, which may be
unconscious assumptions in one or both of our cases. A huge part
of good communication is learning how to listen effectively.
I think I'm doing my best to understand what people say, and to
clearly communicate the fact when I think I've failed to do so.
I didn't mean to imply I think you aren't trying, and I hope it
didn't come across otherwise. One suggestion though: try to
understand what people mean as well as what they say - often
there is a difference between the two.
Post by j***@verizon.net
By definition, there's nothing I can do about the case where I'm
unaware of my failure.
There isn't anything you can do to change the past. You
could do something in the present (and in the future) to
try to acquire new communication skills; that might let
you go back and look at old messages and see things that
you weren't able to see before.
Post by j***@verizon.net
Post by Tim Rentsch
The second step is: What does the other person want? This might
be broken down into two questions, (1) Do I even know what the
other person wants, and (2) What is it? Here again answering
these questions often isn't easy because of different (and
sometimes simply incorrect) assumptions, etc. If the other
person has said something that I think is flat out wrong, that
might indicate that I don't understand what they meant, or it
might mean that it's an incidental statement that is irrelevant
to what they want. Rather than go charging off down a path that
may be a wild goose chase, I think it's better to start with a
question to try to clarify the issue. If you couldn't figure out
what they meant, you might say "I'm not sure what you mean." If
you don't know what they're looking for, you might say "I can't
tell what you're looking for here; is it X, or Y, or is it
something else?". The key point is to decide what you think
the other person wants, and if there is any uncertainty raise
the question explicitly.
I think my own approach differs from the above only in that I tend
to try to teach the person I'm responding to the relevant concepts
and terminology, so that (if they accept the education) they'll
better understand my question, and be better equipped to answer
it. That could be described as "charging off down a path that
might be a wild goose chase", if I've done a sufficiently poor job
of guessing which concepts and terminology were actually relevant
to what the other person meant.
Yes, I can see that you do that. Some of what I'm suggesting
is that you put more emphasis on listening, and try to talk
to them in their language rather than trying to teach them
your language. At least to start off with.
Post by j***@verizon.net
Post by Tim Rentsch
Now we get to the point where we start to frame a response.
Following the overarching principle (and assuming any preliminary
questions have been put forth already), start with a brief
answer to what the other person is looking for. After that
you might want to expand on the brief answer, or bring up
some related matters, or give a line of reasoning explaining
why you think some particular point is true; but start with
a summary, in the area that is of most interest to the other
person. Good writers often do the same: they start with a
high-level sketch, filling in details as
they go, but generally one can stop reading at almost any
point and still have gotten a thorough picture at more-or-less
the same level of detail over the whole landscape.
On usenet, my message is necessarily formatted as a response to
someone else's message, and I try to place my response on a given
point as close as possible to the place where the person I'm
responding to raised that point.
Sometimes that helps in framing an effective reply; other times
a different approach will work better. You might want to try out
various other strategies to discover some possible alternatives.
In my posting two up-thread, for example, I rearranged text in
the quoted material to fit in with what seemed like a better
order to me. Whether that particular choice was a good one or
it wasn't, I encourage you to explore different ideas for how to
frame replies and responses.
Post by j***@verizon.net
That limits my scope for applying the above advice,
but I do recognize it as good advice.
I'm very glad to hear my comments haven't been a total loss. :)

Loading...