double-complex precision

Functions

void magma_zgemv (magma_trans_t transA, magma_int_t m, magma_int_t n, magmaDoubleComplex alpha, magmaDoubleComplex_const_ptr dA, magma_int_t ldda, magmaDoubleComplex_const_ptr dx, magma_int_t incx, magmaDoubleComplex beta, magmaDoubleComplex_ptr dy, magma_int_t incy)
 Perform matrix-vector product.
void magma_zgerc (magma_int_t m, magma_int_t n, magmaDoubleComplex alpha, magmaDoubleComplex_const_ptr dx, magma_int_t incx, magmaDoubleComplex_const_ptr dy, magma_int_t incy, magmaDoubleComplex_ptr dA, magma_int_t ldda)
 Perform rank-1 update, $ A = \alpha x y^H + A $.
void magma_zgeru (magma_int_t m, magma_int_t n, magmaDoubleComplex alpha, magmaDoubleComplex_const_ptr dx, magma_int_t incx, magmaDoubleComplex_const_ptr dy, magma_int_t incy, magmaDoubleComplex_ptr dA, magma_int_t ldda)
 Perform rank-1 update (unconjugated), $ A = \alpha x y^T + A $.
void magma_zhemv (magma_uplo_t uplo, magma_int_t n, magmaDoubleComplex alpha, magmaDoubleComplex_const_ptr dA, magma_int_t ldda, magmaDoubleComplex_const_ptr dx, magma_int_t incx, magmaDoubleComplex beta, magmaDoubleComplex_ptr dy, magma_int_t incy)
 Perform Hermitian matrix-vector product, $ y = \alpha A x + \beta y $.
void magma_zher (magma_uplo_t uplo, magma_int_t n, double alpha, magmaDoubleComplex_const_ptr dx, magma_int_t incx, magmaDoubleComplex_ptr dA, magma_int_t ldda)
 Perform Hermitian rank-1 update, $ A = \alpha x x^H + A $.
void magma_zher2 (magma_uplo_t uplo, magma_int_t n, magmaDoubleComplex alpha, magmaDoubleComplex_const_ptr dx, magma_int_t incx, magmaDoubleComplex_const_ptr dy, magma_int_t incy, magmaDoubleComplex_ptr dA, magma_int_t ldda)
 Perform Hermitian rank-2 update, $ A = \alpha x y^H + conj(\alpha) y x^H + A $.
void magma_ztrmv (magma_uplo_t uplo, magma_trans_t trans, magma_diag_t diag, magma_int_t n, magmaDoubleComplex_const_ptr dA, magma_int_t ldda, magmaDoubleComplex_ptr dx, magma_int_t incx)
 Perform triangular matrix-vector product.
void magma_ztrsv (magma_uplo_t uplo, magma_trans_t trans, magma_diag_t diag, magma_int_t n, magmaDoubleComplex_const_ptr dA, magma_int_t ldda, magmaDoubleComplex_ptr dx, magma_int_t incx)
 Solve triangular matrix-vector system (one right-hand side).
void magmablas_zgemv_conjv (magma_int_t m, magma_int_t n, magmaDoubleComplex alpha, magmaDoubleComplex_const_ptr dA, magma_int_t ldda, magmaDoubleComplex_const_ptr dx, magma_int_t incx, magmaDoubleComplex beta, magmaDoubleComplex_ptr dy, magma_int_t incy)
 ZGEMV_CONJV performs the matrix-vector operation.
magma_int_t magmablas_zhemv_work (magma_uplo_t uplo, magma_int_t n, magmaDoubleComplex alpha, magmaDoubleComplex_const_ptr dA, magma_int_t ldda, magmaDoubleComplex_const_ptr dx, magma_int_t incx, magmaDoubleComplex beta, magmaDoubleComplex_ptr dy, magma_int_t incy, magmaDoubleComplex_ptr dwork, magma_int_t lwork, magma_queue_t queue)
 magmablas_zhemv_work performs the matrix-vector operation:
magma_int_t magmablas_zhemv (magma_uplo_t uplo, magma_int_t n, magmaDoubleComplex alpha, magmaDoubleComplex_const_ptr dA, magma_int_t ldda, magmaDoubleComplex_const_ptr dx, magma_int_t incx, magmaDoubleComplex beta, magmaDoubleComplex_ptr dy, magma_int_t incy)
 magmablas_zhemv performs the matrix-vector operation:
magma_int_t magmablas_zhemv_mgpu (magma_uplo_t uplo, magma_int_t n, magmaDoubleComplex alpha, magmaDoubleComplex_const_ptr const d_lA[], magma_int_t ldda, magma_int_t offset, magmaDoubleComplex const *x, magma_int_t incx, magmaDoubleComplex beta, magmaDoubleComplex *y, magma_int_t incy, magmaDoubleComplex *hwork, magma_int_t lhwork, magmaDoubleComplex_ptr dwork[], magma_int_t ldwork, magma_int_t ngpu, magma_int_t nb, magma_queue_t queues[])
 magmablas_zhemv_mgpu performs the matrix-vector operation:
magma_int_t magmablas_zhemv_mgpu_sync (magma_uplo_t uplo, magma_int_t n, magmaDoubleComplex alpha, magmaDoubleComplex_const_ptr const d_lA[], magma_int_t ldda, magma_int_t offset, magmaDoubleComplex const *x, magma_int_t incx, magmaDoubleComplex beta, magmaDoubleComplex *y, magma_int_t incy, magmaDoubleComplex *hwork, magma_int_t lhwork, magmaDoubleComplex_ptr dwork[], magma_int_t ldwork, magma_int_t ngpu, magma_int_t nb, magma_queue_t queues[])
 Synchronizes and accumulates the final zhemv result.
void magmablas_zswapblk (magma_order_t order, magma_int_t n, magmaDoubleComplex_ptr dA, magma_int_t ldda, magmaDoubleComplex_ptr dB, magma_int_t lddb, magma_int_t i1, magma_int_t i2, const magma_int_t *ipiv, magma_int_t inci, magma_int_t offset)
magma_int_t magmablas_zsymv_work (magma_uplo_t uplo, magma_int_t n, magmaDoubleComplex alpha, magmaDoubleComplex_const_ptr dA, magma_int_t ldda, magmaDoubleComplex_const_ptr dx, magma_int_t incx, magmaDoubleComplex beta, magmaDoubleComplex_ptr dy, magma_int_t incy, magmaDoubleComplex_ptr dwork, magma_int_t lwork, magma_queue_t queue)
 magmablas_zsymv_work performs the matrix-vector operation:
magma_int_t magmablas_zsymv (magma_uplo_t uplo, magma_int_t n, magmaDoubleComplex alpha, magmaDoubleComplex_const_ptr dA, magma_int_t ldda, magmaDoubleComplex_const_ptr dx, magma_int_t incx, magmaDoubleComplex beta, magmaDoubleComplex_ptr dy, magma_int_t incy)
 magmablas_zsymv performs the matrix-vector operation:

Function Documentation

void magma_zgemv ( magma_trans_t  transA,
magma_int_t  m,
magma_int_t  n,
magmaDoubleComplex  alpha,
magmaDoubleComplex_const_ptr  dA,
magma_int_t  ldda,
magmaDoubleComplex_const_ptr  dx,
magma_int_t  incx,
magmaDoubleComplex  beta,
magmaDoubleComplex_ptr  dy,
magma_int_t  incy 
)

Perform matrix-vector product.

$ y = \alpha A x + \beta y $ (transA == MagmaNoTrans), or
$ y = \alpha A^T x + \beta y $ (transA == MagmaTrans), or
$ y = \alpha A^H x + \beta y $ (transA == MagmaConjTrans).

Parameters:
[in] transA Operation to perform on A.
[in] m Number of rows of A. m >= 0.
[in] n Number of columns of A. n >= 0.
[in] alpha Scalar $ \alpha $
[in] dA COMPLEX_16 array of dimension (ldda,n), ldda >= max(1,m). The m-by-n matrix A, on GPU device.
[in] ldda Leading dimension of dA.
[in] dx COMPLEX_16 array on GPU device. If transA == MagmaNoTrans, the n element vector x of dimension (1 + (n-1)*incx);
otherwise, the m element vector x of dimension (1 + (m-1)*incx).
[in] incx Stride between consecutive elements of dx. incx != 0.
[in] beta Scalar $ \beta $
[in,out] dy COMPLEX_16 array on GPU device. If transA == MagmaNoTrans, the m element vector y of dimension (1 + (m-1)*incy);
otherwise, the n element vector y of dimension (1 + (n-1)*incy).
[in] incy Stride between consecutive elements of dy. incy != 0.
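
As an illustration, the following is a minimal host-side sketch of a call to magma_zgemv, assuming the MAGMA 1.x style interface documented here (implicit queue, host-to-device copies via magma_zsetmatrix / magma_zsetvector). The wrapper name example_zgemv and the padding choice are illustrative only, and error checking is omitted.

    /* Sketch: y = alpha*A*x with magma_zgemv (MAGMA 1.x style API).
       hA and hx are column-major host arrays; the result is returned in hy. */
    #include "magma.h"

    void example_zgemv( magma_int_t m, magma_int_t n,
                        const magmaDoubleComplex *hA, magma_int_t lda,
                        const magmaDoubleComplex *hx,
                        magmaDoubleComplex       *hy )
    {
        magmaDoubleComplex alpha = MAGMA_Z_MAKE( 1.0, 0.0 );
        magmaDoubleComplex beta  = MAGMA_Z_MAKE( 0.0, 0.0 );
        magma_int_t ldda = ((m + 31)/32)*32;            /* padded leading dimension */

        magmaDoubleComplex_ptr dA, dx, dy;
        magma_zmalloc( &dA, (size_t) ldda * n );        /* device matrix            */
        magma_zmalloc( &dx, n );                        /* device x (incx = 1)      */
        magma_zmalloc( &dy, m );                        /* device y (incy = 1)      */

        magma_zsetmatrix( m, n, hA, lda, dA, ldda );    /* host -> device copies    */
        magma_zsetvector( n, hx, 1, dx, 1 );

        /* beta = 0, so dy need not be set on input */
        magma_zgemv( MagmaNoTrans, m, n, alpha, dA, ldda, dx, 1, beta, dy, 1 );

        magma_zgetvector( m, dy, 1, hy, 1 );            /* device -> host result    */

        magma_free( dA );  magma_free( dx );  magma_free( dy );
    }
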
void magma_zgerc ( magma_int_t  m,
magma_int_t  n,
magmaDoubleComplex  alpha,
magmaDoubleComplex_const_ptr  dx,
magma_int_t  incx,
magmaDoubleComplex_const_ptr  dy,
magma_int_t  incy,
magmaDoubleComplex_ptr  dA,
magma_int_t  ldda 
)

Perform rank-1 update, $ A = \alpha x y^H + A $.

Parameters:
[in] m Number of rows of A. m >= 0.
[in] n Number of columns of A. n >= 0.
[in] alpha Scalar $ \alpha $
[in] dx COMPLEX_16 array on GPU device. The m element vector x of dimension (1 + (m-1)*incx).
[in] incx Stride between consecutive elements of dx. incx != 0.
[in] dy COMPLEX_16 array on GPU device. The n element vector y of dimension (1 + (n-1)*incy).
[in] incy Stride between consecutive elements of dy. incy != 0.
[in,out] dA COMPLEX_16 array on GPU device. The m-by-n matrix A of dimension (ldda,n), ldda >= max(1,m).
[in] ldda Leading dimension of dA.
void magma_zgeru ( magma_int_t  m,
magma_int_t  n,
magmaDoubleComplex  alpha,
magmaDoubleComplex_const_ptr  dx,
magma_int_t  incx,
magmaDoubleComplex_const_ptr  dy,
magma_int_t  incy,
magmaDoubleComplex_ptr  dA,
magma_int_t  ldda 
)

Perform rank-1 update (unconjugated), $ A = \alpha x y^T + A $.

Parameters:
[in] m Number of rows of A. m >= 0.
[in] n Number of columns of A. n >= 0.
[in] alpha Scalar $ \alpha $
[in] dx COMPLEX_16 array on GPU device. The m element vector x of dimension (1 + (m-1)*incx).
[in] incx Stride between consecutive elements of dx. incx != 0.
[in] dy COMPLEX_16 array on GPU device. The n element vector y of dimension (1 + (n-1)*incy).
[in] incy Stride between consecutive elements of dy. incy != 0.
[in,out] dA COMPLEX_16 array of dimension (ldda,n), ldda >= max(1,m). The m-by-n matrix A, on GPU device.
[in] ldda Leading dimension of dA.
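
A short sketch contrasting the conjugated (zgerc) and unconjugated (zgeru) rank-1 updates. The wrapper name example_rank1_updates is illustrative; dA, dx, dy are assumed to be device arrays already allocated and populated, as in the magma_zgemv sketch above.

    #include "magma.h"

    /* Sketch: apply conjugated and unconjugated rank-1 updates to an
       m-by-n matrix dA already resident on the GPU (incx = incy = 1). */
    void example_rank1_updates( magma_int_t m, magma_int_t n,
                                magmaDoubleComplex_ptr dA, magma_int_t ldda,
                                magmaDoubleComplex_const_ptr dx,
                                magmaDoubleComplex_const_ptr dy )
    {
        magmaDoubleComplex alpha = MAGMA_Z_MAKE( 2.0, -1.0 );

        /* A = alpha * x * y^H + A  (conjugates y) */
        magma_zgerc( m, n, alpha, dx, 1, dy, 1, dA, ldda );

        /* A = alpha * x * y^T + A  (does not conjugate y) */
        magma_zgeru( m, n, alpha, dx, 1, dy, 1, dA, ldda );
    }
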
void magma_zhemv ( magma_uplo_t  uplo,
magma_int_t  n,
magmaDoubleComplex  alpha,
magmaDoubleComplex_const_ptr  dA,
magma_int_t  ldda,
magmaDoubleComplex_const_ptr  dx,
magma_int_t  incx,
magmaDoubleComplex  beta,
magmaDoubleComplex_ptr  dy,
magma_int_t  incy 
)

Perform Hermitian matrix-vector product, $ y = \alpha A x + \beta y $.

Parameters:
[in] uplo Whether the upper or lower triangle of A is referenced.
[in] n Number of rows and columns of A. n >= 0.
[in] alpha Scalar $ \alpha $
[in] dA COMPLEX_16 array of dimension (ldda,n), ldda >= max(1,n). The n-by-n matrix A, on GPU device.
[in] ldda Leading dimension of dA.
[in] dx COMPLEX_16 array on GPU device. The n element vector x of dimension (1 + (n-1)*incx).
[in] incx Stride between consecutive elements of dx. incx != 0.
[in] beta Scalar $ \beta $
[in,out] dy COMPLEX_16 array on GPU device. The n element vector y of dimension (1 + (n-1)*incy).
[in] incy Stride between consecutive elements of dy. incy != 0.
void magma_zher ( magma_uplo_t  uplo,
magma_int_t  n,
double  alpha,
magmaDoubleComplex_const_ptr  dx,
magma_int_t  incx,
magmaDoubleComplex_ptr  dA,
magma_int_t  ldda 
)

Perform Hermitian rank-1 update, $ A = \alpha x x^H + A $.

Parameters:
[in] uplo Whether the upper or lower triangle of A is referenced.
[in] n Number of rows and columns of A. n >= 0.
[in] alpha Scalar $ \alpha $
[in] dx COMPLEX_16 array on GPU device. The n element vector x of dimension (1 + (n-1)*incx).
[in] incx Stride between consecutive elements of dx. incx != 0.
[in,out] dA COMPLEX_16 array of dimension (ldda,n), ldda >= max(1,n). The n-by-n matrix A, on GPU device.
[in] ldda Leading dimension of dA.
void magma_zher2 ( magma_uplo_t  uplo,
magma_int_t  n,
magmaDoubleComplex  alpha,
magmaDoubleComplex_const_ptr  dx,
magma_int_t  incx,
magmaDoubleComplex_const_ptr  dy,
magma_int_t  incy,
magmaDoubleComplex_ptr  dA,
magma_int_t  ldda 
)

Perform Hermitian rank-2 update, $ A = \alpha x y^H + conj(\alpha) y x^H + A $.

Parameters:
[in] uplo Whether the upper or lower triangle of A is referenced.
[in] n Number of rows and columns of A. n >= 0.
[in] alpha Scalar $ \alpha $
[in] dx COMPLEX_16 array on GPU device. The n element vector x of dimension (1 + (n-1)*incx).
[in] incx Stride between consecutive elements of dx. incx != 0.
[in] dy COMPLEX_16 array on GPU device. The n element vector y of dimension (1 + (n-1)*incy).
[in] incy Stride between consecutive elements of dy. incy != 0.
[in,out] dA COMPLEX_16 array of dimension (ldda,n), ldda >= max(1,n). The n-by-n matrix A, on GPU device.
[in] ldda Leading dimension of dA.
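
Similarly, a sketch of the Hermitian rank-1 and rank-2 updates, highlighting that zher takes a real (double) scalar while zher2 takes a complex one. The wrapper name is illustrative and the device arrays are assumed to be already allocated and populated.

    #include "magma.h"

    /* Sketch: Hermitian rank-1 and rank-2 updates of the lower triangle of an
       n-by-n matrix dA on the GPU (incx = incy = 1). */
    void example_hermitian_updates( magma_int_t n,
                                    magmaDoubleComplex_ptr dA, magma_int_t ldda,
                                    magmaDoubleComplex_const_ptr dx,
                                    magmaDoubleComplex_const_ptr dy )
    {
        double             alpha1 = 0.5;                       /* real scalar for zher     */
        magmaDoubleComplex alpha2 = MAGMA_Z_MAKE( 1.0, 2.0 );  /* complex scalar for zher2 */

        /* A = alpha1 * x * x^H + A */
        magma_zher ( MagmaLower, n, alpha1, dx, 1, dA, ldda );

        /* A = alpha2 * x * y^H + conj(alpha2) * y * x^H + A */
        magma_zher2( MagmaLower, n, alpha2, dx, 1, dy, 1, dA, ldda );
    }
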
void magma_ztrmv ( magma_uplo_t  uplo,
magma_trans_t  trans,
magma_diag_t  diag,
magma_int_t  n,
magmaDoubleComplex_const_ptr  dA,
magma_int_t  ldda,
magmaDoubleComplex_ptr  dx,
magma_int_t  incx 
)

Perform triangular matrix-vector product.

$ x = A x $ (trans == MagmaNoTrans), or
$ x = A^T x $ (trans == MagmaTrans), or
$ x = A^H x $ (trans == MagmaConjTrans).

Parameters:
[in] uplo Whether the upper or lower triangle of A is referenced.
[in] trans Operation to perform on A.
[in] diag Whether the diagonal of A is assumed to be unit or non-unit.
[in] n Number of rows and columns of A. n >= 0.
[in] dA COMPLEX_16 array of dimension (ldda,n), ldda >= max(1,n). The n-by-n matrix A, on GPU device.
[in] ldda Leading dimension of dA.
[in] dx COMPLEX_16 array on GPU device. The n element vector x of dimension (1 + (n-1)*incx).
[in] incx Stride between consecutive elements of dx. incx != 0.
void magma_ztrsv ( magma_uplo_t  uplo,
magma_trans_t  trans,
magma_diag_t  diag,
magma_int_t  n,
magmaDoubleComplex_const_ptr  dA,
magma_int_t  ldda,
magmaDoubleComplex_ptr  dx,
magma_int_t  incx 
)

Solve triangular matrix-vector system (one right-hand side).

$ A x = b $ (trans == MagmaNoTrans), or
$ A^T x = b $ (trans == MagmaTrans), or
$ A^H x = b $ (trans == MagmaConjTrans).

Parameters:
[in] uplo Whether the upper or lower triangle of A is referenced.
[in] trans Operation to perform on A.
[in] diag Whether the diagonal of A is assumed to be unit or non-unit.
[in] n Number of rows and columns of A. n >= 0.
[in] dA COMPLEX_16 array of dimension (ldda,n), ldda >= max(1,n). The n-by-n matrix A, on GPU device.
[in] ldda Leading dimension of dA.
[in,out] dx COMPLEX_16 array on GPU device. On entry, the n element RHS vector b of dimension (1 + (n-1)*incx). On exit, overwritten with the solution vector x.
[in] incx Stride between consecutive elements of dx. incx != 0.
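
A sketch of a one right-hand-side triangular solve with magma_ztrsv. The wrapper name is illustrative; dA is assumed to hold a non-unit upper-triangular matrix on the GPU, and dx holds b on entry.

    #include "magma.h"

    /* Sketch: solve A x = b for one right-hand side, where dA is an n-by-n
       non-unit upper-triangular matrix on the GPU. dx holds b on entry and is
       overwritten with the solution x. */
    void example_ztrsv( magma_int_t n,
                        magmaDoubleComplex_const_ptr dA, magma_int_t ldda,
                        magmaDoubleComplex_ptr dx )
    {
        magma_ztrsv( MagmaUpper, MagmaNoTrans, MagmaNonUnit,
                     n, dA, ldda, dx, 1 );

        /* The solve can be checked with the corresponding multiply,
           x := A x, which should reproduce b up to rounding:             */
        /* magma_ztrmv( MagmaUpper, MagmaNoTrans, MagmaNonUnit,
                        n, dA, ldda, dx, 1 );                              */
    }
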
void magmablas_zgemv_conjv ( magma_int_t  m,
magma_int_t  n,
magmaDoubleComplex  alpha,
magmaDoubleComplex_const_ptr  dA,
magma_int_t  ldda,
magmaDoubleComplex_const_ptr  dx,
magma_int_t  incx,
magmaDoubleComplex  beta,
magmaDoubleComplex_ptr  dy,
magma_int_t  incy 
)

ZGEMV_CONJV performs the matrix-vector operation:

y := alpha*A*conj(x) + beta*y,

where alpha and beta are scalars, x and y are vectors and A is an m by n matrix.

Parameters:
[in] m INTEGER On entry, m specifies the number of rows of the matrix A.
[in] n INTEGER On entry, n specifies the number of columns of the matrix A.
[in] alpha COMPLEX_16 On entry, ALPHA specifies the scalar alpha.
[in] dA COMPLEX_16 array of dimension ( LDDA, n ) on the GPU.
[in] ldda INTEGER LDDA specifies the leading dimension of A.
[in] dx COMPLEX_16 array of dimension n
[in] incx Specifies the increment for the elements of X. INCX must not be zero.
[in] beta COMPLEX_16 On entry, BETA specifies the scalar beta. When BETA is supplied as zero then Y need not be set on input.
[in,out] dy COMPLEX_16 array of dimension m. On exit, Y is overwritten by the updated vector y.
[in] incy Specifies the increment for the elements of Y. INCY must not be zero.
magma_int_t magmablas_zhemv ( magma_uplo_t  uplo,
magma_int_t  n,
magmaDoubleComplex  alpha,
magmaDoubleComplex_const_ptr  dA,
magma_int_t  ldda,
magmaDoubleComplex_const_ptr  dx,
magma_int_t  incx,
magmaDoubleComplex  beta,
magmaDoubleComplex_ptr  dy,
magma_int_t  incy 
)

magmablas_zhemv performs the matrix-vector operation:

y := alpha*A*x + beta*y,

where alpha and beta are scalars, x and y are n element vectors and A is an n by n Hermitian matrix.

Parameters:
[in] uplo magma_uplo_t. On entry, UPLO specifies whether the upper or lower triangular part of the array A is to be referenced as follows:

  • = MagmaUpper: Only the upper triangular part of A is to be referenced.
  • = MagmaLower: Only the lower triangular part of A is to be referenced.
[in] n INTEGER. On entry, N specifies the order of the matrix A. N must be at least zero.
[in] alpha COMPLEX_16. On entry, ALPHA specifies the scalar alpha.
[in] dA COMPLEX_16 array of DIMENSION ( LDDA, n ). Before entry with UPLO = MagmaUpper, the leading n by n upper triangular part of the array A must contain the upper triangular part of the Hermitian matrix and the strictly lower triangular part of A is not referenced. Before entry with UPLO = MagmaLower, the leading n by n lower triangular part of the array A must contain the lower triangular part of the Hermitian matrix and the strictly upper triangular part of A is not referenced. Note that the imaginary parts of the diagonal elements need not be set and are assumed to be zero.
[in] ldda INTEGER. On entry, LDDA specifies the first dimension of A as declared in the calling (sub) program. LDDA must be at least max( 1, n ). It is recommended that ldda be a multiple of 16; otherwise performance degrades because memory accesses are not fully coalesced.
[in] dx COMPLEX_16 array of dimension at least ( 1 + ( n - 1 )*abs( INCX ) ). Before entry, the incremented array X must contain the n element vector x.
[in] incx INTEGER. On entry, INCX specifies the increment for the elements of X. INCX must not be zero.
[in] beta COMPLEX_16. On entry, BETA specifies the scalar beta. When BETA is supplied as zero then Y need not be set on input.
[in,out] dy COMPLEX_16 array of dimension at least ( 1 + ( n - 1 )*abs( INCY ) ). Before entry, the incremented array Y must contain the n element vector y. On exit, Y is overwritten by the updated vector y.
[in] incy INTEGER. On entry, INCY specifies the increment for the elements of Y. INCY must not be zero.
magma_int_t magmablas_zhemv_mgpu ( magma_uplo_t  uplo,
magma_int_t  n,
magmaDoubleComplex  alpha,
magmaDoubleComplex_const_ptr const   d_lA[],
magma_int_t  ldda,
magma_int_t  offset,
magmaDoubleComplex const *  x,
magma_int_t  incx,
magmaDoubleComplex  beta,
magmaDoubleComplex *  y,
magma_int_t  incy,
magmaDoubleComplex *  hwork,
magma_int_t  lhwork,
magmaDoubleComplex_ptr  dwork[],
magma_int_t  ldwork,
magma_int_t  ngpu,
magma_int_t  nb,
magma_queue_t  queues[] 
)

magmablas_zhemv_mgpu performs the matrix-vector operation:

y := alpha*A*x + beta*y,

where alpha and beta are scalars, x and y are n element vectors and A is an n by n Hermitian matrix.

Parameters:
[in] uplo magma_uplo_t. On entry, UPLO specifies whether the upper or lower triangular part of the array A is to be referenced as follows:

  • = MagmaUpper: Only the upper triangular part of A is to be referenced. **Not currently supported.**
  • = MagmaLower: Only the lower triangular part of A is to be referenced.
[in] n INTEGER. On entry, N specifies the order of the matrix A. N must be at least zero.
[in] alpha COMPLEX_16. On entry, ALPHA specifies the scalar alpha.
[in] d_lA Array of pointers, dimension (ngpu), to block-column distributed matrix A, with block size nb. d_lA[dev] is a COMPLEX_16 array on GPU dev, of dimension (LDDA, nlocal), where
nlocal = { floor(n/nb/ngpu)*nb + nb      if dev <  floor(n/nb) % ngpu,
         { floor(n/nb/ngpu)*nb + n%nb    if dev == floor(n/nb) % ngpu,
         { floor(n/nb/ngpu)*nb           otherwise.
Before entry with UPLO = MagmaUpper, the leading n by n upper triangular part of the array A must contain the upper triangular part of the Hermitian matrix and the strictly lower triangular part of A is not referenced. Before entry with UPLO = MagmaLower, the leading n by n lower triangular part of the array A must contain the lower triangular part of the Hermitian matrix and the strictly upper triangular part of A is not referenced. Note that the imaginary parts of the diagonal elements need not be set and are assumed to be zero.
[in] offset INTEGER. Row & column offset to start of matrix A within the distributed d_lA structure. Note that N is the size of this multiply, excluding the offset, so the size of the original parent matrix is N+offset. Also, x and y do not have an offset.
[in] ldda INTEGER. On entry, LDDA specifies the first dimension of A as declared in the calling (sub) program. LDDA must be at least max( 1, n + offset ). It is recommended that ldda be a multiple of 16; otherwise performance degrades because memory accesses are not fully coalesced.
[in] x COMPLEX_16 array **on the CPU** (not the GPU), of dimension at least ( 1 + ( n - 1 )*abs( INCX ) ). Before entry, the incremented array X must contain the n element vector x.
[in] incx INTEGER. On entry, INCX specifies the increment for the elements of X. INCX must not be zero.
[in] beta COMPLEX_16. On entry, BETA specifies the scalar beta. When BETA is supplied as zero then Y need not be set on input.
[in,out] y COMPLEX_16 array **on the CPU** (not the GPU), of dimension at least ( 1 + ( n - 1 )*abs( INCY ) ). Before entry, the incremented array Y must contain the n element vector y. On exit, Y is overwritten by the updated vector y.
[in] incy INTEGER. On entry, INCY specifies the increment for the elements of Y. INCY must not be zero.
hwork (workspace) COMPLEX_16 array on the CPU, of dimension (lhwork).
[in] lhwork INTEGER. The dimension of the array hwork. lhwork >= ngpu*nb.
dwork (workspaces) Array of pointers, dimension (ngpu), to workspace on each GPU. dwork[dev] is a COMPLEX_16 array on GPU dev, of dimension (ldwork).
[in] ldwork INTEGER. The dimension of each array dwork[dev]. ldwork >= ldda*( ceil((n + offset % nb) / nb) + 1 ).
[in] ngpu INTEGER. The number of GPUs to use.
[in] nb INTEGER. The block size used for distributing d_lA. Must be 64.
[in] queues magma_queue_t array of dimension (ngpu). queues[dev] is an execution queue on GPU dev.
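
To make the distribution and workspace formulas above concrete, the following sketch computes nlocal for each GPU together with the minimum lhwork and ldwork values. It is an illustration derived from the formulas in this documentation, not part of the MAGMA API; the function name is hypothetical.

    /* Sketch: sizes for the 1-D block-cyclic column distribution used by
       magmablas_zhemv_mgpu (block size nb, columns dealt to ngpu GPUs in
       round-robin order). Formulas follow the parameter documentation above. */
    #include <stdio.h>
    #include "magma.h"

    void print_mgpu_sizes( magma_int_t n, magma_int_t offset,
                           magma_int_t ldda, magma_int_t nb, magma_int_t ngpu )
    {
        for( magma_int_t dev = 0; dev < ngpu; ++dev ) {
            magma_int_t nlocal = (n/nb/ngpu) * nb;        /* floor(n/nb/ngpu)*nb */
            if      ( dev <  (n/nb) % ngpu ) nlocal += nb;
            else if ( dev == (n/nb) % ngpu ) nlocal += n % nb;
            printf( "GPU %lld: nlocal = %lld columns\n",
                    (long long) dev, (long long) nlocal );
        }
        magma_int_t lhwork = ngpu * nb;                                   /* CPU workspace      */
        magma_int_t ldwork = ldda * ( (n + offset % nb + nb - 1)/nb + 1 ); /* per-GPU workspace */
        printf( "lhwork >= %lld, ldwork >= %lld per GPU\n",
                (long long) lhwork, (long long) ldwork );
    }
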
magma_int_t magmablas_zhemv_mgpu_sync ( magma_uplo_t  uplo,
magma_int_t  n,
magmaDoubleComplex  alpha,
magmaDoubleComplex_const_ptr const   d_lA[],
magma_int_t  ldda,
magma_int_t  offset,
magmaDoubleComplex const *  x,
magma_int_t  incx,
magmaDoubleComplex  beta,
magmaDoubleComplex *  y,
magma_int_t  incy,
magmaDoubleComplex *  hwork,
magma_int_t  lhwork,
magmaDoubleComplex_ptr  dwork[],
magma_int_t  ldwork,
magma_int_t  ngpu,
magma_int_t  nb,
magma_queue_t  queues[] 
)

Synchronizes and accumulates the final zhemv result.

For convenience, the parameters are identical to magmablas_zhemv_mgpu (though some are unused here).

See also:
magmablas_zhemv_mgpu
magma_int_t magmablas_zhemv_work ( magma_uplo_t  uplo,
magma_int_t  n,
magmaDoubleComplex  alpha,
magmaDoubleComplex_const_ptr  dA,
magma_int_t  ldda,
magmaDoubleComplex_const_ptr  dx,
magma_int_t  incx,
magmaDoubleComplex  beta,
magmaDoubleComplex_ptr  dy,
magma_int_t  incy,
magmaDoubleComplex_ptr  dwork,
magma_int_t  lwork,
magma_queue_t  queue 
)

magmablas_zhemv_work performs the matrix-vector operation:

y := alpha*A*x + beta*y,

where alpha and beta are scalars, x and y are n element vectors and A is an n by n Hermitian matrix.

Parameters:
[in] uplo magma_uplo_t. On entry, UPLO specifies whether the upper or lower triangular part of the array A is to be referenced as follows:

  • = MagmaUpper: Only the upper triangular part of A is to be referenced.
  • = MagmaLower: Only the lower triangular part of A is to be referenced.
[in] n INTEGER. On entry, N specifies the order of the matrix A. N must be at least zero.
[in] alpha COMPLEX_16. On entry, ALPHA specifies the scalar alpha.
[in] dA COMPLEX_16 array of DIMENSION ( LDDA, n ). Before entry with UPLO = MagmaUpper, the leading n by n upper triangular part of the array A must contain the upper triangular part of the Hermitian matrix and the strictly lower triangular part of A is not referenced. Before entry with UPLO = MagmaLower, the leading n by n lower triangular part of the array A must contain the lower triangular part of the Hermitian matrix and the strictly upper triangular part of A is not referenced. Note that the imaginary parts of the diagonal elements need not be set and are assumed to be zero.
[in] ldda INTEGER. On entry, LDDA specifies the first dimension of A as declared in the calling (sub) program. LDDA must be at least max( 1, n ). It is recommended that ldda be a multiple of 16; otherwise performance degrades because memory accesses are not fully coalesced.
[in] dx COMPLEX_16 array of dimension at least ( 1 + ( n - 1 )*abs( INCX ) ). Before entry, the incremented array X must contain the n element vector x.
[in] incx INTEGER. On entry, INCX specifies the increment for the elements of X. INCX must not be zero.
[in] beta COMPLEX_16. On entry, BETA specifies the scalar beta. When BETA is supplied as zero then Y need not be set on input.
[in,out] dy COMPLEX_16 array of dimension at least ( 1 + ( n - 1 )*abs( INCY ) ). Before entry, the incremented array Y must contain the n element vector y. On exit, Y is overwritten by the updated vector y.
[in] incy INTEGER. On entry, INCY specifies the increment for the elements of Y. INCY must not be zero.
[in] dwork (workspace) COMPLEX_16 array on the GPU, dimension (MAX(1, LWORK)).
[in] lwork INTEGER. The dimension of the array DWORK. LWORK >= LDDA * ceil( N / NB_X ), where NB_X = 64.
[in] queue magma_queue_t. Queue to execute in.

MAGMA implements zhemv in two steps: 1) each thread block performs its portion of the multiplication and stores the intermediate result in dwork; 2) the intermediate results are summed and the final result is stored in y.

magmablas_zhemv_work requires the user to provide a workspace, while magmablas_zhemv is a wrapper routine that allocates the workspace internally and provides the same interface as cuBLAS.

If zhemv needs to be called frequently, we suggest using magmablas_zhemv_work instead of magmablas_zhemv, since the overhead of allocating and freeing device memory in magmablas_zhemv hurts performance. Our tests show that this penalty is about 10 Gflop/s when the matrix size is around 10000.
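
For example, the following sketch sizes the workspace once and reuses it across repeated calls. The wrapper name is illustrative, NB_X = 64 as stated above, and the queue is assumed to have been created by the caller (queue creation differs between MAGMA versions).

    #include "magma.h"

    /* Sketch: allocate the dwork workspace once and reuse it across many
       magmablas_zhemv_work calls (incx = incy = 1). */
    void example_zhemv_work( magma_uplo_t uplo, magma_int_t n,
                             magmaDoubleComplex alpha,
                             magmaDoubleComplex_const_ptr dA, magma_int_t ldda,
                             magmaDoubleComplex_const_ptr dx,
                             magmaDoubleComplex beta,
                             magmaDoubleComplex_ptr dy,
                             magma_int_t nrepeat, magma_queue_t queue )
    {
        const magma_int_t NB_X  = 64;
        magma_int_t       lwork = ldda * ( (n + NB_X - 1) / NB_X );  /* ldda*ceil(n/NB_X) */

        magmaDoubleComplex_ptr dwork;
        magma_zmalloc( &dwork, lwork );          /* allocated once, reused below */

        for( magma_int_t i = 0; i < nrepeat; ++i ) {
            magmablas_zhemv_work( uplo, n, alpha, dA, ldda, dx, 1,
                                  beta, dy, 1, dwork, lwork, queue );
        }

        magma_free( dwork );
    }
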

void magmablas_zswapblk ( magma_order_t  order,
magma_int_t  n,
magmaDoubleComplex_ptr  dA,
magma_int_t  ldda,
magmaDoubleComplex_ptr  dB,
magma_int_t  lddb,
magma_int_t  i1,
magma_int_t  i2,
const magma_int_t *  ipiv,
magma_int_t  inci,
magma_int_t  offset 
)
See also:
magmablas_zswapblk_q
magma_int_t magmablas_zsymv ( magma_uplo_t  uplo,
magma_int_t  n,
magmaDoubleComplex  alpha,
magmaDoubleComplex_const_ptr  dA,
magma_int_t  ldda,
magmaDoubleComplex_const_ptr  dx,
magma_int_t  incx,
magmaDoubleComplex  beta,
magmaDoubleComplex_ptr  dy,
magma_int_t  incy 
)

magmablas_zsymv performs the matrix-vector operation:

y := alpha*A*x + beta*y,

where alpha and beta are scalars, x and y are n element vectors and A is an n by n complex symmetric matrix.

Parameters:
[in] uplo magma_uplo_t. On entry, UPLO specifies whether the upper or lower triangular part of the array A is to be referenced as follows:

  • = MagmaUpper: Only the upper triangular part of A is to be referenced.
  • = MagmaLower: Only the lower triangular part of A is to be referenced.
[in] n INTEGER. On entry, N specifies the order of the matrix A. N must be at least zero.
[in] alpha COMPLEX_16. On entry, ALPHA specifies the scalar alpha.
[in] dA COMPLEX_16 array of DIMENSION ( LDDA, n ). Before entry with UPLO = MagmaUpper, the leading n by n upper triangular part of the array A must contain the upper triangular part of the symmetric matrix and the strictly lower triangular part of A is not referenced. Before entry with UPLO = MagmaLower, the leading n by n lower triangular part of the array A must contain the lower triangular part of the symmetric matrix and the strictly upper triangular part of A is not referenced.
[in] ldda INTEGER. On entry, LDDA specifies the first dimension of A as declared in the calling (sub) program. LDDA must be at least max( 1, n ). It is recommended that ldda be a multiple of 16; otherwise performance degrades because memory accesses are not fully coalesced.
[in] dx COMPLEX_16 array of dimension at least ( 1 + ( n - 1 )*abs( INCX ) ). Before entry, the incremented array X must contain the n element vector x.
[in] incx INTEGER. On entry, INCX specifies the increment for the elements of X. INCX must not be zero.
[in] beta COMPLEX_16. On entry, BETA specifies the scalar beta. When BETA is supplied as zero then Y need not be set on input.
[in,out] dy COMPLEX_16 array of dimension at least ( 1 + ( n - 1 )*abs( INCY ) ). Before entry, the incremented array Y must contain the n element vector y. On exit, Y is overwritten by the updated vector y.
[in] incy INTEGER. On entry, INCY specifies the increment for the elements of Y. INCY must not be zero.
magma_int_t magmablas_zsymv_work ( magma_uplo_t  uplo,
magma_int_t  n,
magmaDoubleComplex  alpha,
magmaDoubleComplex_const_ptr  dA,
magma_int_t  ldda,
magmaDoubleComplex_const_ptr  dx,
magma_int_t  incx,
magmaDoubleComplex  beta,
magmaDoubleComplex_ptr  dy,
magma_int_t  incy,
magmaDoubleComplex_ptr  dwork,
magma_int_t  lwork,
magma_queue_t  queue 
)

magmablas_zsymv_work performs the matrix-vector operation:

y := alpha*A*x + beta*y,

where alpha and beta are scalars, x and y are n element vectors and A is an n by n complex symmetric matrix.

Parameters:
[in] uplo magma_uplo_t. On entry, UPLO specifies whether the upper or lower triangular part of the array A is to be referenced as follows:

  • = MagmaUpper: Only the upper triangular part of A is to be referenced.
  • = MagmaLower: Only the lower triangular part of A is to be referenced.
[in] n INTEGER. On entry, N specifies the order of the matrix A. N must be at least zero.
[in] alpha COMPLEX_16. On entry, ALPHA specifies the scalar alpha.
[in] dA COMPLEX_16 array of DIMENSION ( LDDA, n ). Before entry with UPLO = MagmaUpper, the leading n by n upper triangular part of the array A must contain the upper triangular part of the symmetric matrix and the strictly lower triangular part of A is not referenced. Before entry with UPLO = MagmaLower, the leading n by n lower triangular part of the array A must contain the lower triangular part of the symmetric matrix and the strictly upper triangular part of A is not referenced.
[in] ldda INTEGER. On entry, LDDA specifies the first dimension of A as declared in the calling (sub) program. LDDA must be at least max( 1, n ). It is recommended that ldda be a multiple of 16; otherwise performance degrades because memory accesses are not fully coalesced.
[in] dx COMPLEX_16 array of dimension at least ( 1 + ( n - 1 )*abs( INCX ) ). Before entry, the incremented array X must contain the n element vector x.
[in] incx INTEGER. On entry, INCX specifies the increment for the elements of X. INCX must not be zero.
[in] beta COMPLEX_16. On entry, BETA specifies the scalar beta. When BETA is supplied as zero then Y need not be set on input.
[in,out] dy COMPLEX_16 array of dimension at least ( 1 + ( n - 1 )*abs( INCY ) ). Before entry, the incremented array Y must contain the n element vector y. On exit, Y is overwritten by the updated vector y.
[in] incy INTEGER. On entry, INCY specifies the increment for the elements of Y. INCY must not be zero.
[in] dwork (workspace) COMPLEX_16 array on the GPU, dimension (MAX(1, LWORK)).
[in] lwork INTEGER. The dimension of the array DWORK. LWORK >= LDDA * ceil( N / NB_X ), where NB_X = 64.
[in] queue magma_queue_t. Queue to execute in.

MAGMA implements zsymv in two steps: 1) each thread block performs its portion of the multiplication and stores the intermediate result in dwork; 2) the intermediate results are summed and the final result is stored in y.

magmablas_zsymv_work requires the user to provide a workspace, while magmablas_zsymv is a wrapper routine that allocates the workspace internally and provides the same interface as cuBLAS.

If zsymv needs to be called frequently, we suggest using magmablas_zsymv_work instead of magmablas_zsymv, since the overhead of allocating and freeing device memory in magmablas_zsymv hurts performance. Our tests show that this penalty is about 10 Gflop/s when the matrix size is around 10000.

