The LAPACK forum has moved to https://github.com/Reference-LAPACK/lapack/discussions.

issues in matrix decomposition...


Postby scitigg » Wed Feb 06, 2008 2:29 pm

I am using LAPACK (from a deb package) on a Debian machine, and I am seeing a huge 'error' when decomposing and then recomposing a matrix through diagonalization (the purpose of the diagonalization is matrix exponentiation).

I am writing this in C, and here is a sample section of code; I've compiled it and it gives the same results as my larger program. The error grows when the matrix has higher-precision decimals --I just typed in a simple one to make my point. I assume this 'error' is in fact not error, and is only due to the eigenvector normalization. Any suggestions?

Thanks all!

-------------------------------------------
Code: Select all
#include <stdio.h>
#include <stdlib.h>

/* Prototypes for the Fortran LAPACK/BLAS routines used below
   (Fortran passes every argument by reference). */
extern void dgeev_(char *jobvl, char *jobvr, int *n, double *a, int *lda,
                   double *wr, double *wi, double *vl, int *ldvl,
                   double *vr, int *ldvr, double *work, int *lwork, int *info);
extern void dgemm_(char *transa, char *transb, int *m, int *n, int *k,
                   double *alpha, double *a, int *lda, double *b, int *ldb,
                   double *beta, double *c, int *ldc);

/* write the entries of diag onto the diagonal of mat (assumed zeroed) */
void mk_diag(double* diag, double* mat, int n, const int m){
    int i;
    for(i =0; i < m; ++i)
        mat[i*n+i] = diag[i];
}

int main()
{

    // --------------------------- test values
    double A[] =
            {0,1,2,3,
             1,2,3,4,
             1,3,4,5,
             1,1,1,1 };
    int n = 4;
    // ---------------------------------------------
    char jobv_ = 'V';
    double *wi,*wr,*U,*work,work_size,*Ui,*D;
    int lwork,info;
    wi = (double*) malloc( n*sizeof(double));
    wr = (double*) malloc( n*sizeof(double));
    U  = (double*) calloc( n*n,sizeof(double));
    D  = (double*) calloc( n*n,sizeof(double));
    Ui = (double*) calloc( n*n,sizeof(double));
    lwork = -1; /* workspace query: the optimal size is returned in work_size */
    dgeev_(&jobv_,&jobv_,&n,A,&n,wr,wi,U,&n,Ui,&n,&work_size,&lwork,&info);
    if( info == 0 ) {
        lwork = (int)work_size;
        work = (double*) calloc( lwork , sizeof( double) );
        dgeev_(&jobv_,&jobv_,&n,A,&n,wr,wi,U,&n,Ui,&n,work,&lwork,&info);
        if( info == 0 ) {
            mk_diag(wr,D, n,n); //create diagonal matrix
        } else { return 0; }
        free( work );
    } else { return 0; }

    double *UD, *UDUi, a = 1, b = 0;
    char ntran_ = 'N';
    char ytran_ = 'T';
    UD = (double*) malloc( n*n*sizeof(double));
    dgemm_( &ntran_,&ntran_,&n,&n,&n,&a,U,&n,D,&n,&b,UD,&n );
    UDUi = (double*) malloc( n*n*sizeof(double));
    dgemm_( &ntran_,&ytran_,&n,&n,&n,&a,UD,&n,Ui,&n,&b,UDUi,&n );

    int i,j;
    printf ("\n(A):\n\t");
    for (i=0; i<n; ++i) {
        for (j=0; j<n; ++j){
            printf("[%6.1f] ", UDUi[i*n+j]);
        }
        printf("\n\t");
    }

    free(UD); free(UDUi);
    free(wi); free(wr); free(U); free(D); free(Ui);
    return 0;
}
scitigg
 
Posts: 3
Joined: Wed Feb 06, 2008 1:59 pm

Postby Julien Langou » Wed Feb 06, 2008 7:11 pm

Hello,

several points:

1- DGEEV is a diagonalization routine, therefore it is better to take a diagonalizable matrix as
input, in particular if you want to check A = V * D * inv( V ). In your case, your matrix is not
diagonalizable: the eigenvalue 0 has algebraic multiplicity 2 and geometric multiplicity 1.

2- So you need to change your matrix. Assume that A(1,1) = 1.0e+00 (and not 0.0e+00);
then your matrix becomes diagonalizable. But your code still won't work, and this is
explained in more detail at:
http://icl.cs.utk.edu/lapack-forum/viewtopic.php?t=298
The idea is that LAPACK scales both the left eigenvectors and the right eigenvectors so
that each vector has norm 1. Therefore the matrix of left eigenvectors is not the
inverse of the matrix of right eigenvectors. If your matrices are well conditioned (e.g. if A(1,1) =
1.0e+00) then you can use your code, but do not forget to scale Ui appropriately.

Julien
Julien Langou
 
Posts: 835
Joined: Thu Dec 09, 2004 12:32 pm
Location: Denver, CO, USA

Postby scitigg » Thu Feb 07, 2008 1:44 pm

Thanks for your quick reply and detailed information --looking past the caveat of diagonalizability. Left eigenvectors were never discussed in my linear or matrix algebra classes, so some of this is new to me --the rest is a refresher.

On point one, I'll be using matrices that are similar to a symmetric matrix --thus diagonalizable. The matrix is used to solve the differential equation P = exp(Q*c), where c will vary --thus the desire to diagonalize.

Just to clarify the scaling method:

VL_i^T * VR_i => alpha_i
if alpha_i > 1, then we have algebraic multiplicity (...and major issues --in my case these shouldn't happen)
then (VL_i * alpha_i)^T * VR = I

Thanks so much!

Postby Julien Langou » Thu Feb 07, 2008 2:28 pm

So, about the scaling method: you are pretty much right.

Just one thing: you cannot really deduce anything about algebraic multiplicity / geometric
multiplicity just by looking at the alpha_i. Since VL_i and VR_i both have norm 1, | alpha_i |
should be smaller than 1 in exact arithmetic. You might see | alpha_i | a little bit greater
than 1 on the computer, but that should not alarm you. In any case, it has nothing to do
with diagonalizability.

[To check whether you have given a nondiagonalizable matrix to DGEEV, you can compute the
condition number of V. If this condition number is high, this is a bad sign. Discussing what
"high" means exactly on the computer is complex, so I'll stop here; say cond(V) > 1e7 for
example.]

Julien.

Postby scitigg » Mon Feb 11, 2008 5:01 pm

Thanks...

There are a few cases where I will be getting matrices with algebraic multiplicity > 1 (actually mostly on the 0 eigenvalue). Curious what you can say about this issue?

For example, with the following replacing the matrix in the code above...

Code: Select all
    double A[16] = {
        -0.750000000,    0.2500000000,    0.2500000000,   0.2500000000,
        0.2500000000,    -0.750000000,    0.2500000000,   0.2500000000,
        0.2500000000,    0.2500000000,    -0.750000000,   0.2500000000,
        0.2500000000,    0.2500000000,     0.250000000,   -0.750000000 };
    int n = 4;


will produce U*Ui as...

Code: Select all
        [ 1.00000] [ 0.00000] [ 0.00000] [ 0.00000]
        [ 0.00000] [ 1.00000] [ 0.00000] [ 0.00000]
        [-0.00000] [-0.00000] [ 1.00000] [-0.00000]
        [ 0.36057] [ 0.00000] [-0.00000] [ 1.00000]


Any suggestion among the LAPACK routines? (Yes, the symmetric routine --and it gives the correct result. But my matrices are rarely symmetric, so checking every one for symmetry is time consuming, and the special cases known to be symmetric have analytical solutions.)

Postby Julien Langou » Mon Feb 11, 2008 5:29 pm

Your first matrix was nondiagonalizable. The matrix you are showing here has little in
common with the previous one; there is no obvious analogy between the two.

First, this second matrix is symmetric, so it is diagonalizable in an orthonormal basis. The
eigenvalue 0 has algebraic multiplicity 1 (and geometric multiplicity 1) and the eigenvalue
-1 has algebraic multiplicity 3 (and geometric multiplicity 3). Algebraic and geometric
multiplicities will always be the same when the matrix is diagonalizable.

Keep in mind that LAPACK does not guarantee that VL = inv(VR)'. VL is a left
eigenvector basis (with vectors of norm 1) and VR is a right eigenvector basis (with vectors
of norm 1). Mathematically, you know that if all the eigenvalues are distinct then you can
ensure that VL(:,i) is collinear to the i-th column of inv(VR)'. If there are multiple
eigenvalues (-1 in your case), then you definitely cannot ensure this.

any suggestion in the lapack routines?


Well, using a symmetric eigensolver would be the right thing to do in this case.
I am not sure checking symmetry would add that much overhead
compared to what's going on in DGEEV. If you do not want to spend time checking
for symmetry and want to keep going with DGEEV, then, if you have multiple
eigenvalues, you'll be in trouble: VL is not inv(VR)' up to scalar multiplication. I believe
you can do some tricks around this, but if your matrices are 4-by-4, you might as well
compute the inverse of VR explicitly using DGETRF (LU factorization) and then DGETRI
(compute the inverse from the LU factorization).

Julien.

