I have to raise 10 to the power of a double a lot of times.

Is there a more efficient way to do this than with the math library pow(10,double)? If it matters, my doubles are always negative between -5 and -11.

I assume pow(double,double) uses a more general algorithm than is required for pow(10,double) and might therefore not be the fastest method. Given some of the answers below, that might have been an incorrect assumption.

As for the why, it is for logarithmic interpolation. I have a table of x and y values. My object has a known x value (which is almost always a double).

#include <math.h>

double Dbeta(struct Data *diffusion, double per){
  int i = 1;    /* index into the x table; start at 1 so x[i-1] is valid */
  double frac;

  /* find the first table entry with x[i] >= per */
  while(per > diffusion->x[i]){
      i++;
  }
  /* interpolate in log10(y) space (log10DB holds the base-10 logs of the y values),
     then raise 10 to the interpolated value */
  frac = (per-diffusion->x[i-1])/(diffusion->x[i]-diffusion->x[i-1]);
  return pow(10, log10DB[i-1] + frac * (log10DB[i]-log10DB[i-1]));
}

This function is called many times. I have been told to look into profiling, so that is what I will do first.

I have just been told I could have used natural logarithms instead of base 10, which is obviously right. (My stupidity sometimes amazes even me.)

After replacing everything with natural logarithms, everything runs a bit faster. With profiling (a word I learned today) I found out that 39% of my run time is spent in the exp function, so for those who wondered whether this part was in fact the bottleneck: it was.
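
For reference, a minimal sketch of what the natural-log version looks like, with the pow(10, ...) call replaced by exp. The struct layout and the name lnDB (a table holding ln(y) instead of log10(y)) are assumptions for illustration only:

#include <math.h>

/* Sketch: same interpolation as before, but the table stores natural logs. */
struct Data {
    double *x;      /* x values, ascending */
    double *lnDB;   /* ln of the y values (hypothetical name) */
};

double Dbeta(struct Data *diffusion, double per)
{
    int i = 1;      /* start at 1 so x[i-1] is valid */
    while (per > diffusion->x[i])
        i++;
    double frac = (per - diffusion->x[i-1]) / (diffusion->x[i] - diffusion->x[i-1]);
    /* interpolate in ln(y) space, then exp() replaces pow(10, ...) */
    return exp(diffusion->lnDB[i-1] + frac * (diffusion->lnDB[i] - diffusion->lnDB[i-1]));
}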

  • Do the double powers actually have integer values? Commented Oct 22, 2020 at 7:21
  • And will the second argument to pow always be an integer? Will you ever pass e.g. -6.8 (or similar) as the second argument? Or will it always only be -5, -6, -7, -8, -9, -10 or -11 (and nothing else in between)? Commented Oct 22, 2020 at 7:33
  • "I have to raise 10 to the power of a double a lot of times" This is rather unusual, can you show some of your code? Commented Oct 22, 2020 at 7:53
  • I suspect this is the bottleneck Don't suspect - profile and be sure, otherwise you may just be wasting time. Remember to print out a copy of the rules of optimization. Commented Oct 22, 2020 at 7:58
  • If the exponents are always integers or belong to a limited set of known values, then just use a lookup table. Commented Oct 22, 2020 at 8:03

2 Answers

For pow(10.0, n) it should be faster to set c = log(10.0), which you can compute once, then use exp(c*n), which should be significantly faster than pow(10.0, n) (which is basically doing that same thing internally, except it would be calculating log(10.0) over and over instead of just once). Beyond that, there probably isn't much else you can do.
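
A minimal sketch of that idea, with the constant written out once; the helper name pow10_fast is made up for illustration:

#include <math.h>

/* ln(10), i.e. log(10.0); equal to M_LN10 where that macro is available */
static const double LN10_CONST = 2.302585092994046;

static inline double pow10_fast(double n)
{
    return exp(LN10_CONST * n);   /* 10^n == e^(n * ln 10) */
}

/* usage: double y = pow10_fast(-7.3); */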

Yes, the pow function is slow (roughly 50x the cost of a multiply, for those asking for benchmarks).

  • By some log/exponents trickery, we can express 10^x as

    10^x = exp(log(10^x)) = exp(x * log(10)).
    

    So you can implement 10^x with exp(x * M_LN10), which should be more efficient than pow.

  • If double accuracy isn't critical, use the float version of the function expf (or powf), which should be more efficient than the double version.

  • If rough accuracy is OK, precompute a table over the [-5, -11] range and do a quick look-up with linear interpolation (a rough sketch follows this list).
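
A rough sketch of the table idea, assuming x always stays inside [-11, -5]. The names, table size, and uniform grid are arbitrary choices made up for illustration; the step size sets the accuracy:

#include <math.h>

#define TBL_MIN  (-11.0)
#define TBL_MAX  (-5.0)
#define TBL_N    4096                     /* number of intervals */

static double tbl[TBL_N + 1];

/* fill the table with 10^x at evenly spaced points over [TBL_MIN, TBL_MAX] */
static void init_pow10_table(void)
{
    for (int k = 0; k <= TBL_N; k++)
        tbl[k] = pow(10.0, TBL_MIN + (TBL_MAX - TBL_MIN) * k / TBL_N);
}

static double pow10_lookup(double x)      /* assumes TBL_MIN <= x <= TBL_MAX */
{
    double t = (x - TBL_MIN) * (TBL_N / (TBL_MAX - TBL_MIN));
    int k = (int)t;
    if (k >= TBL_N) k = TBL_N - 1;        /* clamp so tbl[k+1] stays in range */
    double f = t - k;                     /* fractional position in the interval */
    return tbl[k] + f * (tbl[k + 1] - tbl[k]);
}

init_pow10_table() would need to run once before the first call to pow10_lookup().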

Some benchmarks (using glibc 2.31):

Benchmark                Time
---------------------------------
pow(10, x)               15.54 ns
powf(10, x)               7.18 ns
expf(x * (float)M_LN10)   3.45 ns
  • Thank you, very helpful. It might be accurate enough to use float, so I will try that as well. I think the lookup table would be too inaccurate. Commented Oct 22, 2020 at 8:25
