Hi,

I want to know how TVGS decides the floating-point precision.

For example, if the expression is:

a > 10.0

and the type of "a" is double.

How does it decide the low bound or high bound?

3 posts
• Page **1** of **1**

- jack
**Posts:**16**Joined:**Sat Mar 15, 2008 9:59 am

The subject of floating-point precision in VGS is fairly complicated: many rules and relationships go into determining the current precision of a variable. First, let me address your specific question:

if the expression is:

a > 10.0

the type of "a" is double.

How does it decide the low bound or high bound?

VGS starts with the data type of the variable involved. The IEEE 754 Float64 (double) and Float32 (single/float) formats are the starting point. A Float64 representation carries roughly 15-16 significant decimal digits; a Float32 only roughly 6-7. VGS conservatively assumes just 14 and 5 digits, respectively, as starting points. Now, with respect to your question:

How does it decide the low bound or high bound?

If the user does not specify a range for "a", VGS defaults the low and high bound range values to +/- 1.0e+012 (double) and +/- 1.0e+004 (float), so that at the boundaries a variable still has 4 and 1 digits of precision, respectively, below the decimal point. For example, if "a" is a double, then at the low/high values of its default domain it can be expected to represent values with these digits:

xxxxxxxxxx.xxxx

If "a" is a single/float type, then it can represent maximum and minimum values with these digits:

xxxx.x

These default limits, +/- 1.0e+012 (double) and +/- 1.0e+004 (float), were chosen based on about 20 years of VGS development experience. This does not mean that the TTM model cannot work with numbers of greater magnitude; doing so, however, requires user-defined data types rather than the default data types.
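As a quick sanity check on that boundary figure, the spacing of IEEE doubles near the default bound 1.0e+012 can be inspected with Python's math.ulp. This is my own illustration, not part of VGS, but it matches the "4 digits below the decimal point" figure quoted above:

```python
import math

# Neighbouring IEEE doubles near the default bound 1.0e+12 are one
# "unit in the last place" (ulp) apart.  1.0e+12 lies in [2**39, 2**40),
# so the spacing is 2**(39-52) = 2**-13, about 1.22e-4 -- which is why
# only about 4 digits survive to the right of the decimal point there
# (the xxxxxxxxxx.xxxx pattern above).
spacing = math.ulp(1.0e12)
digits_after_point = -math.floor(math.log10(spacing))
print(spacing)              # 0.0001220703125
print(digits_after_point)   # 4
```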

Now, these limits and precisions are only starting values. If "a" is involved in relationships with other variables and/or computational operators, its expected precision can degrade toward fewer reliable digits, both from round-off error in the computations and from inequality relationships with other variables. This precision information is used to determine the new high/low bound limits when "a" appears in a relationship such as

a > 10.0

In order to ensure that every single-point value for "a" will satisfy the a > 10.0 relationship in the actual executing code on the target platform, VGS converges the domain of "a" to a low bound based on its current precision knowledge about "a" and on the magnitude of the constant or variable that "a" is being compared with. In this case, because the initial low bound for "a" was -1.0e+012, its initial precision setting included only a few places to the right of the decimal point. In addition, because it is being compared with 10.0, whose magnitude sits a couple of places to the left of the decimal point, the resulting low bound after the comparison will be 1.02e+001. However, if the initial domain for "a" had been +/- 1.0e+004 with "a" still a double, its initial precision would have included more decimal places to the right of the decimal point, and its low bound after comparison with 10.0 would instead be 1.000000001e+001.
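The exact convergence algorithm is proprietary, but its flavor can be sketched with a hypothetical tolerance rule. Both illustrative_low_bound and its precision_digits parameter are my own invention for illustration, not a VGS API:

```python
def illustrative_low_bound(constant: float, precision_digits: int) -> float:
    # Hypothetical sketch, NOT the proprietary VGS algorithm: nudge the
    # low bound above the comparison constant by a tolerance scaled to
    # how many decimal digits of precision "a" currently carries.
    return constant + abs(constant) * 10.0 ** (-precision_digits)

# Wide default domain (+/- 1.0e+12): few reliable digits -> coarse bound
print(illustrative_low_bound(10.0, 2))   # 10.1
# Narrow domain (+/- 1.0e+04): more reliable digits -> bound hugs 10.0
print(illustrative_low_bound(10.0, 9))   # ~1.000000001e+001
```

The second call reproduces the 1.000000001e+001 figure above; the point is simply that a coarser precision estimate forces the converged bound farther from the comparison constant.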

The subject of floating-point precision/tolerance is too complex for a User's Forum reply, and the algorithms used to determine this information are also proprietary. But I hope my reply provides enough information for a basic understanding of the topic and answers your questions to your satisfaction.

- busser
- Site Admin
**Posts:**52**Joined:**Thu Mar 13, 2008 7:42 pm

Bob, thanks again, your reply is very helpful!

- jack
**Posts:**16**Joined:**Sat Mar 15, 2008 9:59 am

