MAGMA 2.3.0: Matrix Algebra for GPU and Multicore Architectures

\( y = \alpha Ax + \beta y \)
Functions  
void  magmablas_chemv_batched (magma_uplo_t uplo, magma_int_t n, magmaFloatComplex alpha, magmaFloatComplex **dA_array, magma_int_t ldda, magmaFloatComplex **dX_array, magma_int_t incx, magmaFloatComplex beta, magmaFloatComplex **dY_array, magma_int_t incy, magma_int_t batchCount, magma_queue_t queue) 
CHEMV performs the matrix-vector operation y := alpha*A*x + beta*y (fixed-size batched).  
void  magmablas_chemv_vbatched (magma_uplo_t uplo, magma_int_t *n, magmaFloatComplex alpha, magmaFloatComplex_ptr dA_array[], magma_int_t *ldda, magmaFloatComplex_ptr dx_array[], magma_int_t *incx, magmaFloatComplex beta, magmaFloatComplex_ptr dy_array[], magma_int_t *incy, magma_int_t batchCount, magma_queue_t queue) 
CHEMV performs the matrix-vector operation y := alpha*A*x + beta*y (variable-size batched).  
void  magmablas_dsymv_batched (magma_uplo_t uplo, magma_int_t n, double alpha, double **dA_array, magma_int_t ldda, double **dX_array, magma_int_t incx, double beta, double **dY_array, magma_int_t incy, magma_int_t batchCount, magma_queue_t queue) 
DSYMV performs the matrix-vector operation y := alpha*A*x + beta*y (fixed-size batched).  
void  magmablas_dsymv_vbatched (magma_uplo_t uplo, magma_int_t *n, double alpha, magmaDouble_ptr dA_array[], magma_int_t *ldda, magmaDouble_ptr dx_array[], magma_int_t *incx, double beta, magmaDouble_ptr dy_array[], magma_int_t *incy, magma_int_t batchCount, magma_queue_t queue) 
DSYMV performs the matrix-vector operation y := alpha*A*x + beta*y (variable-size batched).  
void  magmablas_ssymv_batched (magma_uplo_t uplo, magma_int_t n, float alpha, float **dA_array, magma_int_t ldda, float **dX_array, magma_int_t incx, float beta, float **dY_array, magma_int_t incy, magma_int_t batchCount, magma_queue_t queue) 
SSYMV performs the matrix-vector operation y := alpha*A*x + beta*y (fixed-size batched).  
void  magmablas_ssymv_vbatched (magma_uplo_t uplo, magma_int_t *n, float alpha, magmaFloat_ptr dA_array[], magma_int_t *ldda, magmaFloat_ptr dx_array[], magma_int_t *incx, float beta, magmaFloat_ptr dy_array[], magma_int_t *incy, magma_int_t batchCount, magma_queue_t queue) 
SSYMV performs the matrix-vector operation y := alpha*A*x + beta*y (variable-size batched).  
void  magmablas_zhemv_batched (magma_uplo_t uplo, magma_int_t n, magmaDoubleComplex alpha, magmaDoubleComplex **dA_array, magma_int_t ldda, magmaDoubleComplex **dX_array, magma_int_t incx, magmaDoubleComplex beta, magmaDoubleComplex **dY_array, magma_int_t incy, magma_int_t batchCount, magma_queue_t queue) 
ZHEMV performs the matrix-vector operation y := alpha*A*x + beta*y (fixed-size batched).  
void  magmablas_zhemv_vbatched (magma_uplo_t uplo, magma_int_t *n, magmaDoubleComplex alpha, magmaDoubleComplex_ptr dA_array[], magma_int_t *ldda, magmaDoubleComplex_ptr dx_array[], magma_int_t *incx, magmaDoubleComplex beta, magmaDoubleComplex_ptr dy_array[], magma_int_t *incy, magma_int_t batchCount, magma_queue_t queue) 
ZHEMV performs the matrix-vector operation y := alpha*A*x + beta*y (variable-size batched).  
\( y = \alpha Ax + \beta y \)
void magmablas_chemv_batched  (  magma_uplo_t  uplo, 
magma_int_t  n,  
magmaFloatComplex  alpha,  
magmaFloatComplex **  dA_array,  
magma_int_t  ldda,  
magmaFloatComplex **  dX_array,  
magma_int_t  incx,  
magmaFloatComplex  beta,  
magmaFloatComplex **  dY_array,  
magma_int_t  incy,  
magma_int_t  batchCount,  
magma_queue_t  queue  
) 
CHEMV performs the matrix-vector operation:
y := alpha*A*x + beta*y,
where alpha and beta are scalars, x and y are n element vectors and A is an n by n Hermitian matrix. This is the fixed size batched version of the operation.
[in]  uplo  magma_uplo_t. On entry, UPLO specifies whether the upper or lower triangular part of the array A is to be referenced as follows:
   = MagmaUpper: Only the upper triangular part of A is to be referenced.
   = MagmaLower: Only the lower triangular part of A is to be referenced.
[in]  n  INTEGER. On entry, N specifies the order of each matrix A. N must be at least zero. 
[in]  alpha  COMPLEX. On entry, ALPHA specifies the scalar alpha. 
[in]  dA_array  Array of pointers, dimension(batchCount). Each is a COMPLEX array A of DIMENSION ( LDDA, n ). Before entry with UPLO = MagmaUpper, the leading n by n upper triangular part of the array A must contain the upper triangular part of the Hermitian matrix and the strictly lower triangular part of A is not referenced. Before entry with UPLO = MagmaLower, the leading n by n lower triangular part of the array A must contain the lower triangular part of the Hermitian matrix and the strictly upper triangular part of A is not referenced. Note that the imaginary parts of the diagonal elements need not be set and are assumed to be zero. 
[in]  ldda  INTEGER. On entry, LDDA specifies the first dimension of each A as declared in the calling (sub) program. LDDA must be at least max( 1, n ). It is recommended that LDDA be a multiple of 16; otherwise performance may degrade because the memory accesses would not be fully coalesced. 
[in]  dX_array  Array of pointers, dimension(batchCount). Each is a COMPLEX array X of dimension at least ( 1 + ( n - 1 )*abs( INCX ) ). Before entry, the incremented array X must contain the n element vector X. 
[in]  incx  INTEGER. On entry, INCX specifies the increment for the elements of X. INCX must not be zero. 
[in]  beta  COMPLEX. On entry, BETA specifies the scalar beta. When BETA is supplied as zero then Y need not be set on input. 
[in,out]  dY_array  Array of pointers, dimension(batchCount). Each is a COMPLEX array Y of dimension at least ( 1 + ( n - 1 )*abs( INCY ) ). Before entry, the incremented array Y must contain the n element vector Y. On exit, Y is overwritten by the updated vector Y. 
[in]  incy  INTEGER. On entry, INCY specifies the increment for the elements of Y. INCY must not be zero. 
[in]  batchCount  INTEGER. The number of problems to operate on. 
[in]  queue  magma_queue_t Queue to execute in. 
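A minimal usage sketch (not part of the MAGMA documentation; error checks omitted). It assumes each of the batchCount problems occupies one slice of a single contiguous device allocation, builds the device pointer arrays by filling host arrays and copying them with magma_setvector, and then issues the batched call on a queue; all variable names are illustrative.

    #include <stdlib.h>
    #include "magma_v2.h"

    int main( void )
    {
        magma_init();

        magma_int_t n = 32, ldda = 32, batchCount = 100;
        magma_queue_t queue;
        magma_queue_create( 0, &queue );                  /* queue on device 0 */

        magmaFloatComplex alpha = MAGMA_C_ONE;
        magmaFloatComplex beta  = MAGMA_C_ZERO;

        /* one contiguous device buffer per operand; problem i uses slice i */
        magmaFloatComplex *dA, *dX, *dY;
        magma_cmalloc( &dA, (size_t)ldda * n * batchCount );
        magma_cmalloc( &dX, (size_t)n * batchCount );
        magma_cmalloc( &dY, (size_t)n * batchCount );
        /* ... fill dA (Hermitian, lower triangles) and dX here ... */

        /* build the pointer arrays on the host, then copy them to the device */
        magmaFloatComplex **hA = malloc( batchCount * sizeof(magmaFloatComplex*) );
        magmaFloatComplex **hX = malloc( batchCount * sizeof(magmaFloatComplex*) );
        magmaFloatComplex **hY = malloc( batchCount * sizeof(magmaFloatComplex*) );
        for (magma_int_t i = 0; i < batchCount; ++i) {
            hA[i] = dA + (size_t)i * ldda * n;
            hX[i] = dX + (size_t)i * n;
            hY[i] = dY + (size_t)i * n;
        }
        magmaFloatComplex **dA_array, **dX_array, **dY_array;
        magma_malloc( (void**)&dA_array, batchCount * sizeof(magmaFloatComplex*) );
        magma_malloc( (void**)&dX_array, batchCount * sizeof(magmaFloatComplex*) );
        magma_malloc( (void**)&dY_array, batchCount * sizeof(magmaFloatComplex*) );
        magma_setvector( batchCount, sizeof(magmaFloatComplex*), hA, 1, dA_array, 1, queue );
        magma_setvector( batchCount, sizeof(magmaFloatComplex*), hX, 1, dX_array, 1, queue );
        magma_setvector( batchCount, sizeof(magmaFloatComplex*), hY, 1, dY_array, 1, queue );

        /* y_i := alpha*A_i*x_i + beta*y_i for every problem in the batch */
        magmablas_chemv_batched( MagmaLower, n, alpha, dA_array, ldda,
                                 dX_array, 1, beta, dY_array, 1,
                                 batchCount, queue );
        magma_queue_sync( queue );

        free( hA );  free( hX );  free( hY );
        magma_free( dA );        magma_free( dX );        magma_free( dY );
        magma_free( dA_array );  magma_free( dX_array );  magma_free( dY_array );
        magma_queue_destroy( queue );
        magma_finalize();
        return 0;
    }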
void magmablas_chemv_vbatched  (  magma_uplo_t  uplo, 
magma_int_t *  n,  
magmaFloatComplex  alpha,  
magmaFloatComplex_ptr  dA_array[],  
magma_int_t *  ldda,  
magmaFloatComplex_ptr  dx_array[],  
magma_int_t *  incx,  
magmaFloatComplex  beta,  
magmaFloatComplex_ptr  dy_array[],  
magma_int_t *  incy,  
magma_int_t  batchCount,  
magma_queue_t  queue  
) 
CHEMV performs the matrix-vector operation:
y := alpha*A*x + beta*y,
where alpha and beta are scalars, x and y are n element vectors and A is an n by n Hermitian matrix. This is the variable size batched version of the operation.
[in]  uplo  magma_uplo_t. On entry, UPLO specifies whether the upper or lower triangular part of the array A is to be referenced as follows:
   = MagmaUpper: Only the upper triangular part of A is to be referenced.
   = MagmaLower: Only the lower triangular part of A is to be referenced.
[in]  n  INTEGER array, dimension(batchCount + 1). On entry, each element N specifies the order of each matrix A. N must be at least zero. 
[in]  alpha  COMPLEX. On entry, ALPHA specifies the scalar alpha. 
[in]  dA_array  Array of pointers, dimension(batchCount). Each is a COMPLEX array A of DIMENSION ( LDDA, N ). Before entry with UPLO = MagmaUpper, the leading N by N upper triangular part of the array A must contain the upper triangular part of the Hermitian matrix and the strictly lower triangular part of A is not referenced. Before entry with UPLO = MagmaLower, the leading N by N lower triangular part of the array A must contain the lower triangular part of the Hermitian matrix and the strictly upper triangular part of A is not referenced. Note that the imaginary parts of the diagonal elements need not be set and are assumed to be zero. 
[in]  ldda  INTEGER array, dimension(batchCount + 1). On entry, each element LDDA specifies the first dimension of each A as declared in the calling (sub) program. LDDA must be at least max( 1, n ). It is recommended that LDDA be a multiple of 16; otherwise performance may degrade because the memory accesses would not be fully coalesced. 
[in]  dx_array  Array of pointers, dimension(batchCount). Each is a COMPLEX array X of dimension at least ( 1 + ( n - 1 )*abs( INCX ) ). Before entry, the incremented array X must contain the n element vector X. 
[in]  incx  INTEGER array, dimension(batchCount + 1). On entry, each element INCX specifies the increment for the elements of each X. INCX must not be zero. 
[in]  beta  COMPLEX. On entry, BETA specifies the scalar beta. When BETA is supplied as zero then Y need not be set on input. 
[in,out]  dy_array  Array of pointers, dimension(batchCount). Each is a COMPLEX array Y of dimension at least ( 1 + ( n - 1 )*abs( INCY ) ). Before entry, the incremented array Y must contain the n element vector Y. On exit, Y is overwritten by the updated vector Y. 
[in]  incy  INTEGER array, dimension(batchCount + 1). On entry, each element INCY specifies the increment for the elements of each Y. INCY must not be zero. 
[in]  batchCount  INTEGER. The number of problems to operate on. 
[in]  queue  magma_queue_t Queue to execute in. 
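The variable-size interface takes the sizes and increments themselves as INTEGER arrays of length batchCount + 1, as documented above. The sketch below assumes (as with other MAGMA vbatched routines) that these integer arrays reside in GPU memory and that the extra trailing entry is reserved for internal use; the helper name, the unit increments, and the pointer arrays prepared as in the fixed-size sketch are illustrative assumptions, and error checks are omitted.

    #include <stdlib.h>
    #include "magma_v2.h"

    /* Hypothetical helper: dA_array/dx_array/dy_array are device arrays of
     * batchCount device pointers, prepared as in the fixed-size sketch above;
     * h_n and h_ldda hold the per-problem sizes on the host. */
    void call_chemv_vbatched_sketch(
        magma_int_t batchCount,
        const magma_int_t *h_n, const magma_int_t *h_ldda,
        magmaFloatComplex_ptr dA_array[], magmaFloatComplex_ptr dx_array[],
        magmaFloatComplex_ptr dy_array[], magma_queue_t queue )
    {
        magma_int_t *d_n, *d_ldda, *d_inc;
        magma_imalloc( &d_n,    batchCount + 1 );   /* +1: trailing entry left for internal use */
        magma_imalloc( &d_ldda, batchCount + 1 );
        magma_imalloc( &d_inc,  batchCount + 1 );

        /* copy the first batchCount entries of each size array to the device */
        magma_setvector( batchCount, sizeof(magma_int_t), h_n,    1, d_n,    1, queue );
        magma_setvector( batchCount, sizeof(magma_int_t), h_ldda, 1, d_ldda, 1, queue );

        /* assume unit increments for every x and y in this sketch */
        magma_int_t *h_inc = malloc( batchCount * sizeof(magma_int_t) );
        for (magma_int_t i = 0; i < batchCount; ++i)
            h_inc[i] = 1;
        magma_setvector( batchCount, sizeof(magma_int_t), h_inc, 1, d_inc, 1, queue );

        magmaFloatComplex alpha = MAGMA_C_ONE, beta = MAGMA_C_ZERO;
        magmablas_chemv_vbatched( MagmaLower, d_n, alpha, dA_array, d_ldda,
                                  dx_array, d_inc, beta, dy_array, d_inc,
                                  batchCount, queue );
        magma_queue_sync( queue );

        free( h_inc );
        magma_free( d_n );  magma_free( d_ldda );  magma_free( d_inc );
    }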
void magmablas_dsymv_batched  (  magma_uplo_t  uplo, 
magma_int_t  n,  
double  alpha,  
double **  dA_array,  
magma_int_t  ldda,  
double **  dX_array,  
magma_int_t  incx,  
double  beta,  
double **  dY_array,  
magma_int_t  incy,  
magma_int_t  batchCount,  
magma_queue_t  queue  
) 
DSYMV performs the matrix-vector operation:
y := alpha*A*x + beta*y,
where alpha and beta are scalars, x and y are n element vectors and A is an n by n symmetric matrix. This is the fixed size batched version of the operation.
[in]  uplo  magma_uplo_t. On entry, UPLO specifies whether the upper or lower triangular part of the array A is to be referenced as follows:
   = MagmaUpper: Only the upper triangular part of A is to be referenced.
   = MagmaLower: Only the lower triangular part of A is to be referenced.
[in]  n  INTEGER. On entry, N specifies the order of each matrix A. N must be at least zero. 
[in]  alpha  DOUBLE PRECISION. On entry, ALPHA specifies the scalar alpha. 
[in]  dA_array  Array of pointers, dimension(batchCount). Each is a DOUBLE PRECISION array A of DIMENSION ( LDDA, n ). Before entry with UPLO = MagmaUpper, the leading n by n upper triangular part of the array A must contain the upper triangular part of the symmetric matrix and the strictly lower triangular part of A is not referenced. Before entry with UPLO = MagmaLower, the leading n by n lower triangular part of the array A must contain the lower triangular part of the symmetric matrix and the strictly upper triangular part of A is not referenced. 
[in]  ldda  INTEGER. On entry, LDDA specifies the first dimension of each A as declared in the calling (sub) program. LDDA must be at least max( 1, n ). It is recommended that LDDA be a multiple of 16; otherwise performance may degrade because the memory accesses would not be fully coalesced. 
[in]  dX_array  Array of pointers, dimension(batchCount). Each is a DOUBLE PRECISION array X of dimension at least ( 1 + ( n - 1 )*abs( INCX ) ). Before entry, the incremented array X must contain the n element vector X. 
[in]  incx  INTEGER. On entry, INCX specifies the increment for the elements of X. INCX must not be zero. 
[in]  beta  DOUBLE PRECISION. On entry, BETA specifies the scalar beta. When BETA is supplied as zero then Y need not be set on input. 
[in,out]  dY_array  Array of pointers, dimension(batchCount). Each is a DOUBLE PRECISION array Y of dimension at least ( 1 + ( n - 1 )*abs( INCY ) ). Before entry, the incremented array Y must contain the n element vector Y. On exit, Y is overwritten by the updated vector Y. 
[in]  incy  INTEGER. On entry, INCY specifies the increment for the elements of Y. INCY must not be zero. 
[in]  batchCount  INTEGER. The number of problems to operate on. 
[in]  queue  magma_queue_t Queue to execute in. 
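The note on LDDA above recommends a multiple of 16. The sketch below shows one way to honor that when uploading a batch of host matrices stored contiguously with lda = n; the helper name is hypothetical, magma_roundup( n, 16 ) is simply ((n + 15)/16)*16, and error checks are omitted.

    #include "magma_v2.h"

    /* Hypothetical helper: upload batchCount n-by-n symmetric matrices, stored
     * column-major on the host with lda = n, into one device buffer whose
     * leading dimension ldda is padded to a multiple of 16. */
    void upload_symmetric_batch(
        magma_int_t n, magma_int_t batchCount, const double *hA,
        double **dA_out, magma_int_t *ldda_out, magma_queue_t queue )
    {
        /* pad the device leading dimension so columns stay aligned */
        magma_int_t ldda = magma_roundup( n, 16 );

        double *dA;
        magma_dmalloc( &dA, (size_t)ldda * n * batchCount );

        /* copy each matrix from lda = n (host) to ldda (device) */
        for (magma_int_t i = 0; i < batchCount; ++i) {
            magma_dsetmatrix( n, n,
                              hA + (size_t)i * n * n,    n,
                              dA + (size_t)i * ldda * n, ldda, queue );
        }
        /* dA_array[i] for magmablas_dsymv_batched then points at dA + i*ldda*n */
        *dA_out   = dA;
        *ldda_out = ldda;
    }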
void magmablas_dsymv_vbatched  (  magma_uplo_t  uplo, 
magma_int_t *  n,  
double  alpha,  
magmaDouble_ptr  dA_array[],  
magma_int_t *  ldda,  
magmaDouble_ptr  dx_array[],  
magma_int_t *  incx,  
double  beta,  
magmaDouble_ptr  dy_array[],  
magma_int_t *  incy,  
magma_int_t  batchCount,  
magma_queue_t  queue  
) 
DSYMV performs the matrix-vector operation:
y := alpha*A*x + beta*y,
where alpha and beta are scalars, x and y are n element vectors and A is an n by n symmetric matrix. This is the variable size batched version of the operation.
[in]  uplo  magma_uplo_t. On entry, UPLO specifies whether the upper or lower triangular part of the array A is to be referenced as follows:
   = MagmaUpper: Only the upper triangular part of A is to be referenced.
   = MagmaLower: Only the lower triangular part of A is to be referenced.
[in]  n  INTEGER array, dimension(batchCount + 1). On entry, each element N specifies the order of each matrix A. N must be at least zero. 
[in]  alpha  DOUBLE PRECISION. On entry, ALPHA specifies the scalar alpha. 
[in]  dA_array  Array of pointers, dimension(batchCount). Each is a DOUBLE PRECISION array A of DIMENSION ( LDDA, N ). Before entry with UPLO = MagmaUpper, the leading N by N upper triangular part of the array A must contain the upper triangular part of the symmetric matrix and the strictly lower triangular part of A is not referenced. Before entry with UPLO = MagmaLower, the leading N by N lower triangular part of the array A must contain the lower triangular part of the symmetric matrix and the strictly upper triangular part of A is not referenced. 
[in]  ldda  INTEGER array, dimension(batchCount + 1). On entry, each element LDDA specifies the first dimension of each A as declared in the calling (sub) program. LDDA must be at least max( 1, n ). It is recommended that LDDA be a multiple of 16; otherwise performance may degrade because the memory accesses would not be fully coalesced. 
[in]  dx_array  Array of pointers, dimension(batchCount). Each is a DOUBLE PRECISION array X of dimension at least ( 1 + ( n - 1 )*abs( INCX ) ). Before entry, the incremented array X must contain the n element vector X. 
[in]  incx  INTEGER array, dimension(batchCount + 1). On entry, each element INCX specifies the increment for the elements of each X. INCX must not be zero. 
[in]  beta  DOUBLE PRECISION. On entry, BETA specifies the scalar beta. When BETA is supplied as zero then Y need not be set on input. 
[in,out]  dy_array  Array of pointers, dimension(batchCount). Each is a DOUBLE PRECISION array Y of dimension at least ( 1 + ( n - 1 )*abs( INCY ) ). Before entry, the incremented array Y must contain the n element vector Y. On exit, Y is overwritten by the updated vector Y. 
[in]  incy  INTEGER array, dimension(batchCount + 1). On entry, each element INCY specifies the increment for the elements of each Y. INCY must not be zero. 
[in]  batchCount  INTEGER. The number of problems to operate on. 
[in]  queue  magma_queue_t Queue to execute in. 
void magmablas_ssymv_batched  (  magma_uplo_t  uplo, 
magma_int_t  n,  
float  alpha,  
float **  dA_array,  
magma_int_t  ldda,  
float **  dX_array,  
magma_int_t  incx,  
float  beta,  
float **  dY_array,  
magma_int_t  incy,  
magma_int_t  batchCount,  
magma_queue_t  queue  
) 
SSYMV performs the matrix-vector operation:
y := alpha*A*x + beta*y,
where alpha and beta are scalars, x and y are n element vectors and A is an n by n symmetric matrix. This is the fixed size batched version of the operation.
[in]  uplo  magma_uplo_t. On entry, UPLO specifies whether the upper or lower triangular part of the array A is to be referenced as follows:
   = MagmaUpper: Only the upper triangular part of A is to be referenced.
   = MagmaLower: Only the lower triangular part of A is to be referenced.
[in]  n  INTEGER. On entry, N specifies the order of each matrix A. N must be at least zero. 
[in]  alpha  REAL. On entry, ALPHA specifies the scalar alpha. 
[in]  dA_array  Array of pointers, dimension(batchCount). Each is a REAL array A of DIMENSION ( LDDA, n ). Before entry with UPLO = MagmaUpper, the leading n by n upper triangular part of the array A must contain the upper triangular part of the symmetric matrix and the strictly lower triangular part of A is not referenced. Before entry with UPLO = MagmaLower, the leading n by n lower triangular part of the array A must contain the lower triangular part of the symmetric matrix and the strictly upper triangular part of A is not referenced. 
[in]  ldda  INTEGER. On entry, LDDA specifies the first dimension of each A as declared in the calling (sub) program. LDDA must be at least max( 1, n ). It is recommended that LDDA be a multiple of 16; otherwise performance may degrade because the memory accesses would not be fully coalesced. 
[in]  dX_array  Array of pointers, dimension(batchCount). Each is a REAL array X of dimension at least ( 1 + ( n - 1 )*abs( INCX ) ). Before entry, the incremented array X must contain the n element vector X. 
[in]  incx  INTEGER. On entry, INCX specifies the increment for the elements of X. INCX must not be zero. 
[in]  beta  REAL. On entry, BETA specifies the scalar beta. When BETA is supplied as zero then Y need not be set on input. 
[in,out]  dY_array  Array of pointers, dimension(batchCount). Each is a REAL array Y of dimension at least ( 1 + ( n - 1 )*abs( INCY ) ). Before entry, the incremented array Y must contain the n element vector Y. On exit, Y is overwritten by the updated vector Y. 
[in]  incy  INTEGER. On entry, INCY specifies the increment for the elements of Y. INCY must not be zero. 
[in]  batchCount  INTEGER. The number of problems to operate on. 
[in]  queue  magma_queue_t Queue to execute in. 
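As a plain-C host reference for what each problem in the batch computes (useful for checking GPU results; this is not a MAGMA routine), the symmetric update with only the lower triangle stored can be written as:

    /* y := alpha*A*x + beta*y for one n-by-n symmetric matrix stored
     * column-major with leading dimension lda; only the lower triangle of A
     * is read, and A(i,j) = A(j,i) supplies the upper part, mirroring
     * UPLO = MagmaLower. */
    static void ssymv_lower_reference( int n, float alpha, const float *A, int lda,
                                       const float *x, float beta, float *y )
    {
        for (int i = 0; i < n; ++i) {
            float t = 0.0f;
            for (int j = 0; j < n; ++j) {
                /* element A(i,j): read from the stored (lower) triangle */
                float aij = (j <= i) ? A[i + j*lda] : A[j + i*lda];
                t += aij * x[j];
            }
            y[i] = alpha * t + beta * y[i];
        }
    }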
void magmablas_ssymv_vbatched  (  magma_uplo_t  uplo, 
magma_int_t *  n,  
float  alpha,  
magmaFloat_ptr  dA_array[],  
magma_int_t *  ldda,  
magmaFloat_ptr  dx_array[],  
magma_int_t *  incx,  
float  beta,  
magmaFloat_ptr  dy_array[],  
magma_int_t *  incy,  
magma_int_t  batchCount,  
magma_queue_t  queue  
) 
SSYMV performs the matrix-vector operation:
y := alpha*A*x + beta*y,
where alpha and beta are scalars, x and y are n element vectors and A is an n by n symmetric matrix. This is the variable size batched version of the operation.
[in]  uplo  magma_uplo_t. On entry, UPLO specifies whether the upper or lower triangular part of the array A is to be referenced as follows:
   = MagmaUpper: Only the upper triangular part of A is to be referenced.
   = MagmaLower: Only the lower triangular part of A is to be referenced.
[in]  n  INTEGER array, dimension(batchCount + 1). On entry, each element N specifies the order of each matrix A. N must be at least zero. 
[in]  alpha  REAL. On entry, ALPHA specifies the scalar alpha. 
[in]  dA_array  Array of pointers, dimension(batchCount). Each is a REAL array A of DIMENSION ( LDDA, N ). Before entry with UPLO = MagmaUpper, the leading N by N upper triangular part of the array A must contain the upper triangular part of the symmetric matrix and the strictly lower triangular part of A is not referenced. Before entry with UPLO = MagmaLower, the leading N by N lower triangular part of the array A must contain the lower triangular part of the symmetric matrix and the strictly upper triangular part of A is not referenced. 
[in]  ldda  INTEGER array, dimension(batchCount + 1). On entry, each element LDDA specifies the first dimension of each A as declared in the calling (sub) program. LDDA must be at least max( 1, n ). It is recommended that LDDA be a multiple of 16; otherwise performance may degrade because the memory accesses would not be fully coalesced. 
[in]  dx_array  Array of pointers, dimension(batchCount). Each is a REAL array X of dimension at least ( 1 + ( n - 1 )*abs( INCX ) ). Before entry, the incremented array X must contain the n element vector X. 
[in]  incx  INTEGER array, dimension(batchCount + 1). On entry, each element INCX specifies the increment for the elements of each X. INCX must not be zero. 
[in]  beta  REAL. On entry, BETA specifies the scalar beta. When BETA is supplied as zero then Y need not be set on input. 
[in,out]  dy_array  Array of pointers, dimension(batchCount). Each is a REAL array Y of dimension at least ( 1 + ( n - 1 )*abs( INCY ) ). Before entry, the incremented array Y must contain the n element vector Y. On exit, Y is overwritten by the updated vector Y. 
[in]  incy  INTEGER array, dimension(batchCount + 1). On entry, each element INCY specifies the increment for the elements of each Y. INCY must not be zero. 
[in]  batchCount  INTEGER. The number of problems to operate on. 
[in]  queue  magma_queue_t Queue to execute in. 
void magmablas_zhemv_batched  (  magma_uplo_t  uplo, 
magma_int_t  n,  
magmaDoubleComplex  alpha,  
magmaDoubleComplex **  dA_array,  
magma_int_t  ldda,  
magmaDoubleComplex **  dX_array,  
magma_int_t  incx,  
magmaDoubleComplex  beta,  
magmaDoubleComplex **  dY_array,  
magma_int_t  incy,  
magma_int_t  batchCount,  
magma_queue_t  queue  
) 
ZHEMV performs the matrix-vector operation:
y := alpha*A*x + beta*y,
where alpha and beta are scalars, x and y are n element vectors and A is an n by n Hermitian matrix. This is the fixed size batched version of the operation.
[in]  uplo  magma_uplo_t. On entry, UPLO specifies whether the upper or lower triangular part of the array A is to be referenced as follows:
   = MagmaUpper: Only the upper triangular part of A is to be referenced.
   = MagmaLower: Only the lower triangular part of A is to be referenced.
[in]  n  INTEGER. On entry, N specifies the order of each matrix A. N must be at least zero. 
[in]  alpha  COMPLEX_16. On entry, ALPHA specifies the scalar alpha. 
[in]  dA_array  Array of pointers, dimension(batchCount). Each is a COMPLEX_16 array A of DIMENSION ( LDDA, n ). Before entry with UPLO = MagmaUpper, the leading n by n upper triangular part of the array A must contain the upper triangular part of the Hermitian matrix and the strictly lower triangular part of A is not referenced. Before entry with UPLO = MagmaLower, the leading n by n lower triangular part of the array A must contain the lower triangular part of the Hermitian matrix and the strictly upper triangular part of A is not referenced. Note that the imaginary parts of the diagonal elements need not be set and are assumed to be zero. 
[in]  ldda  INTEGER. On entry, LDDA specifies the first dimension of each A as declared in the calling (sub) program. LDDA must be at least max( 1, n ). It is recommended that LDDA be a multiple of 16; otherwise performance may degrade because the memory accesses would not be fully coalesced. 
[in]  dX_array  Array of pointers, dimension(batchCount). Each is a COMPLEX_16 array X of dimension at least ( 1 + ( n - 1 )*abs( INCX ) ). Before entry, the incremented array X must contain the n element vector X. 
[in]  incx  INTEGER. On entry, INCX specifies the increment for the elements of X. INCX must not be zero. 
[in]  beta  COMPLEX_16. On entry, BETA specifies the scalar beta. When BETA is supplied as zero then Y need not be set on input. 
[in,out]  dY_array  Array of pointers, dimension(batchCount). Each is a COMPLEX_16 array Y of dimension at least ( 1 + ( n - 1 )*abs( INCY ) ). Before entry, the incremented array Y must contain the n element vector Y. On exit, Y is overwritten by the updated vector Y. 
[in]  incy  INTEGER. On entry, INCY specifies the increment for the elements of Y. INCY must not be zero. 
[in]  batchCount  INTEGER. The number of problems to operate on. 
[in]  queue  magma_queue_t Queue to execute in. 
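For the Hermitian case the host reference differs from the symmetric one above in exactly the two points the parameter descriptions call out: the mirrored element is conjugated, and the imaginary part of each diagonal entry is ignored. A plain C99 sketch operating on ordinary double complex host data (not a MAGMA routine):

    #include <complex.h>

    /* y := alpha*A*x + beta*y for one n-by-n Hermitian matrix, lower triangle
     * stored column-major with leading dimension lda. The upper part is
     * obtained by conjugation, A(i,j) = conj(A(j,i)), and the imaginary part
     * of each diagonal element is treated as zero (UPLO = MagmaLower). */
    static void zhemv_lower_reference( int n, double complex alpha,
                                       const double complex *A, int lda,
                                       const double complex *x,
                                       double complex beta, double complex *y )
    {
        for (int i = 0; i < n; ++i) {
            double complex t = 0.0;
            for (int j = 0; j < n; ++j) {
                double complex aij;
                if (j < i)       aij = A[i + j*lda];          /* stored lower part      */
                else if (j == i) aij = creal( A[i + i*lda] ); /* imaginary part ignored */
                else             aij = conj( A[j + i*lda] );  /* mirrored, conjugated   */
                t += aij * x[j];
            }
            y[i] = alpha * t + beta * y[i];
        }
    }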
void magmablas_zhemv_vbatched  (  magma_uplo_t  uplo, 
magma_int_t *  n,  
magmaDoubleComplex  alpha,  
magmaDoubleComplex_ptr  dA_array[],  
magma_int_t *  ldda,  
magmaDoubleComplex_ptr  dx_array[],  
magma_int_t *  incx,  
magmaDoubleComplex  beta,  
magmaDoubleComplex_ptr  dy_array[],  
magma_int_t *  incy,  
magma_int_t  batchCount,  
magma_queue_t  queue  
) 
ZHEMV performs the matrix-vector operation:
y := alpha*A*x + beta*y,
where alpha and beta are scalars, x and y are n element vectors and A is an n by n Hermitian matrix. This is the variable size batched version of the operation.
[in]  uplo  magma_uplo_t. On entry, UPLO specifies whether the upper or lower triangular part of the array A is to be referenced as follows:
   = MagmaUpper: Only the upper triangular part of A is to be referenced.
   = MagmaLower: Only the lower triangular part of A is to be referenced.
[in]  n  INTEGER array, dimension(batchCount + 1). On entry, each element N specifies the order of each matrix A. N must be at least zero. 
[in]  alpha  COMPLEX_16. On entry, ALPHA specifies the scalar alpha. 
[in]  dA_array  Array of pointers, dimension(batchCount). Each is a COMPLEX_16 array A of DIMENSION ( LDDA, N ). Before entry with UPLO = MagmaUpper, the leading N by N upper triangular part of the array A must contain the upper triangular part of the Hermitian matrix and the strictly lower triangular part of A is not referenced. Before entry with UPLO = MagmaLower, the leading N by N lower triangular part of the array A must contain the lower triangular part of the Hermitian matrix and the strictly upper triangular part of A is not referenced. Note that the imaginary parts of the diagonal elements need not be set and are assumed to be zero. 
[in]  ldda  INTEGER array, dimension(batchCount + 1). On entry, each element LDDA specifies the first dimension of each A as declared in the calling (sub) program. LDDA must be at least max( 1, n ). It is recommended that LDDA be a multiple of 16; otherwise performance may degrade because the memory accesses would not be fully coalesced. 
[in]  dx_array  Array of pointers, dimension(batchCount). Each is a COMPLEX_16 array X of dimension at least ( 1 + ( n - 1 )*abs( INCX ) ). Before entry, the incremented array X must contain the n element vector X. 
[in]  incx  INTEGER array, dimension(batchCount + 1). On entry, each element INCX specifies the increment for the elements of each X. INCX must not be zero. 
[in]  beta  COMPLEX_16. On entry, BETA specifies the scalar beta. When BETA is supplied as zero then Y need not be set on input. 
[in,out]  dy_array  Array of pointers, dimension(batchCount). Each is a COMPLEX_16 array Y of dimension at least ( 1 + ( n - 1 )*abs( INCY ) ). Before entry, the incremented array Y must contain the n element vector Y. On exit, Y is overwritten by the updated vector Y. 
[in]  incy  INTEGER array, dimension(batchCount + 1). On entry, each element INCY specifies the increment for the elements of each Y. INCY must not be zero. 
[in]  batchCount  INTEGER. The number of problems to operate on. 
[in]  queue  magma_queue_t Queue to execute in. 