Floating point accuracy of MHA parser variables

mandolini_
Posts: 15
Joined: Fri Feb 14, 2020 1:50 am

Floating point accuracy of MHA parser variables

Post by mandolini_ » Thu Mar 19, 2020 1:37 am

I am trying to allow some of my algorithm's tuning parameters to be updated at runtime with the help of the MHA parser. However, when I inspect my self-developed plugin with 'analysemhaplugin my_plugin', it seems that the floating point numbers are not quite the same as the values I set. What causes this?

Code: Select all

// Member declarations
MHAParser::float_t param_1;
MHAParser::float_t param_2;
MHAParser::float_t param_3;

// In the constructor's initializer list
      param_1("param_1 description",
              "0.98",
              "[0.90, 0.99]"),
      param_2("param_2 description",
              "0.98",
              "[0.90, 0.99]"),
      param_3("param_3 description",
              "0.3",
              "[0.01, 0.5]"),

// In the constructor body
      insert_item("param_1", &param_1);
      insert_item("param_2", &param_2);
      insert_item("param_3", &param_3);
Output of 'analysemhaplugin my_plugin':

Code: Select all

# param_1 description
# float:[0.899999976,0.99000001] (writable)
param_1 = 0.980000019

# param_2 description
# float:[0.899999976,0.99000001] (writable)
param_2 = 0.980000019

# param_3 description
# float:[0.00999999978,0.5] (writable)
param_3 = 0.300000012

tobiasherzke
Posts: 27
Joined: Mon Jun 24, 2019 12:51 pm

Re: Floating point accuracy of MHA parser variables

Post by tobiasherzke » Sat Mar 21, 2020 5:04 pm

Floating point data representations as used in computers have limited accuracy. Not all decimal numbers can be stored with their exact value in a floating point data type on a computer. The numbers are rounded to the nearest value that can be represented.

0.99 for example cannot be represented exactly in a single precision floating point data type. The nearest value that can be stored is exactly 0.990000009536743164062500. The next smaller value is 0.989999949932098388671875, and the next larger value is 0.990000069141387939453125. Therefore, for values near 0.99, the step size between adjacent single-precision floating point values is 0.000000059604644775390625.

When displaying values, the MHA parser converts the stored floating-point values to decimal representations, normally using 9 significant decimal digits: 0.9900000095367431640625 is rounded to 0.990000010. This is the value you see for your 0.99 range limit, except that the trailing 0 was truncated by the C++ iostream implementation.

We use a default of 9 significant decimal digits because some single-precision floating point numbers require 9 significant decimal digits for an unambiguous decimal representation, see e.g. https://stackoverflow.com/questions/607 ... mal-digits. If you want, you can change this by setting the environment variable MHA_PARSER_PRECISION before starting mha or analysemhaplugin, e.g.

Code: Select all

MHA_PARSER_PRECISION=6 analysemhaplugin my_plugin
will give you the output that you had expected, but it does so by merely hiding the inherent inaccuracy of floating point values, not by being more accurate.

openMHA, like most professional audio signal processing software, does most of its computations with single-precision rather than double-precision floating point values. For the intended purpose, producing audio results, practically all inaccuracies caused by single-precision rounding are inaudible to human listeners, with a large enough margin to also cover accumulated follow-up inaccuracies.

My suggestion: once you understand where this unexpected display comes from, don't let it bother you.

mandolini_
Posts: 15
Joined: Fri Feb 14, 2020 1:50 am

Re: Floating point accuracy of MHA parser variables

Post by mandolini_ » Mon Mar 23, 2020 8:04 pm

Thanks -- I figured something like this was the case, but wanted to make sure, as I hadn't seen the same behavior when applying analysemhaplugin to most other plugins.

+1 for thoroughness.
