Code:
program test_dlapy2
  implicit none
  double precision :: myzero, x, y
  double precision, external :: dlapy2
  ! read the zero at run time so the compiler cannot fold 0/0 away
  write(*,*) 'write 0.0 and hit enter'
  read(*,*) myzero
  x = 0.0d0/myzero          ! 0/0 produces a quiet NaN
  write(*,*) 'x is ', x
  y = dlapy2(1.0d0, x)      ! should propagate the NaN
  write(*,*) 'y is ', y
end program test_dlapy2
returns
Code:
write 0.0 and hit enter
0.0
x is NaN
y is 1.4142135623730951
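The sqrt(2) result can be traced to the scaling in the reference implementation of dlapy2, which computes w = max(|x|,|y|) and z = min(|x|,|y|): when one operand is NaN, the comparisons inside MAX and MIN are both false, so each returns the non-NaN operand and the NaN is silently dropped. A Python sketch of that algorithm (Python's two-argument max/min happen to mimic the same comparison behavior), contrasted with a NaN-propagating reference such as hypot:

```python
import math

def dlapy2_sketch(x, y):
    """Sketch of the pre-fix reference algorithm for DLAPY2:
    sqrt(x**2 + y**2) computed without intermediate overflow."""
    xabs, yabs = abs(x), abs(y)
    w = max(xabs, yabs)   # with yabs = NaN, the "NaN > xabs" test is false -> w = xabs
    z = min(xabs, yabs)   # likewise "NaN < xabs" is false -> z = xabs, NaN is gone
    if z == 0.0:
        return w
    return w * math.sqrt(1.0 + (z / w) ** 2)

nan = float("nan")
print(dlapy2_sketch(1.0, nan))   # 1.4142135623730951, matching the output above
print(math.hypot(1.0, nan))      # nan: IEEE-style propagation, for comparison
```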
This is not a cosmetic issue: dlapy2(1.0d0, NaN) should return NaN, but instead returns sqrt(2). The wrong value can cause xlarfg to hang during scaling when alpha = 0 and the vector x contains a NaN. That is, all code that uses elementary reflectors is vulnerable to this, as reported for the SVD in https://github.com/JuliaLang/julia/issues/21757.
It appears that Intel has fixed this in MKL without upstreaming their fix.
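Since the MKL fix is not public, the following is only a hypothetical guard, not necessarily what Intel did: test for NaN before the min/max scaling and propagate it explicitly.

```python
import math

def dlapy2_guarded(x, y):
    """Hypothetical NaN-propagating variant of the dlapy2 scaling algorithm:
    return NaN as soon as either input is NaN, otherwise scale as before."""
    if math.isnan(x) or math.isnan(y):
        return float("nan")
    xabs, yabs = abs(x), abs(y)
    w = max(xabs, yabs)
    z = min(xabs, yabs)
    if z == 0.0:
        return w
    return w * math.sqrt(1.0 + (z / w) ** 2)

print(dlapy2_guarded(1.0, float("nan")))  # nan
print(dlapy2_guarded(3.0, 4.0))           # 5.0
```

With such a guard, xlarfg's scaling loop would see the NaN instead of a finite norm and could not be tricked into iterating forever.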

