ScaLAPACK Archives

[Scalapack] undefined reference errors with scalapack


Great, Derek!

Thank you for letting me know that your issue is solved.
Julie

---------------------------------------------------------------


Hi Julie

Sorry it's been so long since I last emailed you. I have been very busy 
with
our new cluster.

I want to let you know that we now have BLACS & ScaLAPACK working (compiled 
from source). Users have compiled and tested their programs with BLACS 
& ScaLAPACK with no errors so far.

So I want to say a big thank you for your help and advice for which I am 
most grateful. Without your help I would not have been able to achieve 
this. So again thank you.

Cheers

Derek



Derek
You need to add /contrib2/hpmpi/lib/linux_ia64 to the LD_LIBRARY_PATH 
environment variable.
LD_LIBRARY_PATH is an environment variable you set to give the run-time 
shared library loader (ld.so) an extra set of directories to look for 
when searching for shared libraries. Multiple directories can be listed, 
separated with a colon (:). This list is /prepended/ to the existing 
list of compiled-in loader paths for a given executable, and any system 
default loader paths.

Usually, we put it in our .cshrc or whatever config file you are 
using.
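As a quick sketch, here are the two common shell syntaxes, using the HP-MPI path from this thread (adjust the path and config file for your own system):

```shell
# csh/tcsh (e.g. in ~/.cshrc):
#   setenv LD_LIBRARY_PATH /contrib2/hpmpi/lib/linux_ia64:${LD_LIBRARY_PATH}

# sh/bash (e.g. in ~/.profile); prepend, keeping any existing value:
export LD_LIBRARY_PATH=/contrib2/hpmpi/lib/linux_ia64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}

# Verify: the new directory should now be first in the search list.
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | head -n 1
# -> /contrib2/hpmpi/lib/linux_ia64
```

Because the list is prepended, the HP-MPI directory is searched before any compiled-in or system default loader paths.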

Julie
Da.McPhee@Domain.Removed wrote:
Hi Julie

When I go to the INSTALL directory and do make xfmpi_sane xsize 
xtc_CsameF77 etc., then when I execute them I get:

./xsize: error while loading shared libraries: libmpi.so.1: cannot 
open shared object file: No such file or directory

This file is in /contrib2/hpmpi/lib/linux_ia64.

Getting closer.

Thanks again for all your help.

Cheers

Derek




On Feb 20 2006, Julie Langou wrote:

Derek, it is good news that it is the tester that is not compiling. It 
just means that once you manage to solve that problem, everything 
should work...

But now, we have to find out your problem...

To get the correct flags for the Bmake.inc, go into the INSTALL/EXE 
directory and do:

make xfmpi_sane xsize xtc_CsameF77 xcmpi_sane xintface xsyserrors xtc_UseMpich

Afterwards, execute the programs to get the flags for your machine.

Derek, I have never used HP-MPI; I will have to ask some people if we 
have it on one of our machines so I can try to reproduce your problem.
Do you know which compiler is underneath it?
I don't think it is gcc/g77; it might be the Intel or another 
compiler... If you don't know, you can use mpif77 as the Fortran 
compiler and mpicc as the C compiler. In general, try to avoid the GNU 
compilers when you already have vendor compilers on your machine.

Also add the include path:
-I/contrib2/hpmpi/include
to the F77 flags (F77FLAGS).

Another remark: for the testing, the modification to the Makefile 
needs to be made in the Makefile in the TESTING directory, not in 
the Bmake.inc.

I hope it will help you.
Let me know your progress on your issue.
Sincerely
Julie


Da.McPhee@Domain.Removed wrote:
Hi Julie

Sorry I did not get back to you sooner, but I've been very busy.

Thanks for all your help.

I am now having difficulties compiling the BLACS tests.

When I run make test I get:


( cd TESTING ; make )
make[1]: Entering directory `/contrib2/BLACS/TESTING'
g77 -o /contrib2/BLACS/TESTING/EXE/xFbtest_MPI-LINUX-0 blacstest.o btprim_MPI.o tools.o /contrib2/BLACS/LIB/blacsF77init_MPI-LINUX-0.a /contrib2/BLACS/LIB/blacs_MPI-LINUX-0.a /contrib2/BLACS/LIB/blacsF77init_MPI-LINUX-0.a /contrib2/hpmpi/lib/linux_ia64//libmpi.a
make[1]: Leaving directory `/contrib2/BLACS/TESTING'

and I get these types of errors:

/contrib2/hpmpi/lib/linux_ia64//libmpi.a(hpmpudaplinit.o)(.text+0x2662): In function `udapl_resolve_entrypoints':
: undefined reference to `dlsym'
/contrib2/hpmpi/lib/linux_ia64//libmpi.a(hpmpudaplinit.o)(.text+0x26b2): In function `udapl_resolve_entrypoints':
: undefined reference to `dlsym'
/contrib2/hpmpi/lib/linux_ia64//libmpi.a(hpmpudaplinit.o)(.text+0x2702): In function `udapl_resolve_entrypoints':
: undefined reference to `dlsym'
/contrib2/hpmpi/lib/linux_ia64//libmpi.a(hpmpudaplinit.o)(.text+0x2762): In function `udapl_resolve_entrypoints':
: undefined reference to `dlerror'
/contrib2/hpmpi/lib/linux_ia64//libmpi.a(hpmpudaplinit.o)(.text+0x27c2): In function `udapl_resolve_entrypoints':
: undefined reference to `dlopen'
/contrib2/hpmpi/lib/linux_ia64//libmpi.a(hpmpudaplinit.o)(.text+0x27f2): In function `udapl_resolve_entrypoints':
: undefined reference to `dlerror'
collect2: ld returned 1 exit status
make: *** [/contrib2/BLACS/TESTING/EXE/xFbtest_MPI-LINUX-0] Error 1

I've also attached a copy of my blacs bmake.inc file.

Apart from "INTFACE = -Df77IsF2C", I'm not sure if the other 
settings are correct.

Many thanks for all your help

Derek







On Feb 14 2006, Julie Langou wrote:

Derek,

The best is to run the BLACS tester and the ScaLAPACK tester to 
make sure your installation is correct.

For a Linux installation, change BLACS/TESTING/Makefile line 39 from:

	$(F77) $(F77NO_OPTFLAGS) -c $*.f

to:

	$(F77) $(F77NO_OPTFLAGS) -fno-globals -fno-f90 -fugly-complex -w -c $*.f

For more information, see:
http://www.netlib.org/blacs/blacs_errata.html#TestErrata

Let me know if everything is correct.

One more thing: the linking sequence is exactly the opposite, and 
moreover your user needs to add blacsF77init_MPI-LINUX-0.a.
It should look something like this:

mpif90 pstg3r.f -o pstg3r.x
-L/opt/hpmpi/lib/linux_ia64/libscalapak.a
-L/opt/hpmpi/lib/linux_ia64/blacsF77init_MPI-LINUX-0.a
-L/opt/hpmpi/lib/linux_ia64/blacs_MPI-LINUX-0.a
-L/opt/hpmpi/lib/linux_ia64/libmpi.a

I hope it helps.
Please let me know if your issue is solved.
Julie

Da.McPhee@Domain.Removed wrote:

Hi

My name is Derek McPhee, from Queen's University Belfast.

We have a Linux (SuSE) cluster, on which I've just compiled ScaLAPACK 
and BLACS from source. These I downloaded from the ScaLAPACK web site.

A user linking against these libraries gets error messages, i.e.:

undefined reference to `blacs_gridexit', `blacs_gridinfo', etc.

In his Fortran program he has `call blacs_gridexit(ictxt)'.

We also have an HP-UX cluster with ScaLAPACK installed, and the same 
program compiles on that machine with no errors.

The command line is as follows:

mpif90 pstg3r.f -o pstg3r.x
-L/opt/hpmpi/lib/linux_ia64/libmpi.a
-L/opt/hpmpi/lib/linux_ia64/blacs_MPI-LINUX-0.a
-L/opt/hpmpi/lib/linux_ia64/libscalapak.a

Has anyone seen this before?

Many thanks

Derek









_______________________________________________
Scalapack mailing list
Scalapack@Domain.Removed
http://lists.cs.utk.edu/listinfo/scalapack


------------------------------------------------------------------------

#=============================================================================
#====================== SECTION 1: PATHS AND LIBRARIES =======================
#=============================================================================
# The following macros specify the name and location of libraries required by
# the BLACS and its tester.
#=============================================================================

#  --------------------------------------
#  Make sure we've got a consistent shell
#  --------------------------------------
   SHELL = /bin/sh

#  -----------------------------
#  The top level BLACS directory
#  -----------------------------
   BTOPdir = /contrib2/BLACS

#  ---------------------------------------------------------------------------
#  The communication library your BLACS have been written for.
#  Known choices (and the machines they run on) are:
#
#  COMMLIB   MACHINE
#  .......   ..............................................................
#  CMMD      Thinking Machine's CM-5
#  MPI       Wide variety of systems
#  MPL       IBM's SP series (SP1 and SP2)
#  NX        Intel's supercomputer series (iPSC2, iPSC/860, DELTA, PARAGON)
#  PVM       Most unix machines; See PVM User's Guide for details
#  ---------------------------------------------------------------------------

   COMMLIB = MPI

#  -------------------------------------------------------------
#  The platform identifier to suffix to the end of library names
#  -------------------------------------------------------------
   PLAT = LINUX

#  ----------------------------------------------------------
#  Name and location of the BLACS library.  See section 2 for
#  details on BLACS debug level (BLACSDBGLVL).
#  ----------------------------------------------------------
   BLACSdir    = $(BTOPdir)/LIB
   BLACSDBGLVL = 0
   BLACSFINIT  = $(BLACSdir)/blacsF77init_$(COMMLIB)-$(PLAT)-$(BLACSDBGLVL).a
   BLACSCINIT  = $(BLACSdir)/blacsCinit_$(COMMLIB)-$(PLAT)-$(BLACSDBGLVL).a
   BLACSLIB    = $(BLACSdir)/blacs_$(COMMLIB)-$(PLAT)-$(BLACSDBGLVL).a

#  -------------------------------------
#  Name and location of the MPI library.
#  -------------------------------------

   MPIdir = /contrib2/hpmpi
   MPILIBdir = $(MPIdir)/lib/linux_ia64/
   MPIINCdir = $(MPIdir)/include/64
   MPILIB = $(MPILIBdir)/libmpi.a

#  -------------------------------------
#  All libraries required by the tester.
#  -------------------------------------
   BTLIBS = $(BLACSFINIT) $(BLACSLIB) $(BLACSFINIT) $(MPILIB)

#  ----------------------------------------------------------------
#  The directory to put the installation help routines' executables
#  ----------------------------------------------------------------
   INSTdir = $(BTOPdir)/INSTALL/EXE

#  ------------------------------------------------
#  The name and location of the tester's executable
#  ------------------------------------------------
   TESTdir = $(BTOPdir)/TESTING/EXE
   FTESTexe = $(TESTdir)/xFbtest_$(COMMLIB)-$(PLAT)-$(BLACSDBGLVL)
   CTESTexe = $(TESTdir)/xCbtest_$(COMMLIB)-$(PLAT)-$(BLACSDBGLVL)

#=============================================================================
#=============================== End SECTION 1 ===============================
#=============================================================================

#=============================================================================
#========================= SECTION 2: BLACS INTERNALS ========================
#=============================================================================
# The following macro definitions set preprocessor values for the BLACS.
# The file Bconfig.h sets these values if they are not set by the makefile.
# Users compiling only the tester can skip this entire section.
# NOTE: The MPI defaults have been set for MPICH.
#=============================================================================
 


#  -----------------------------------------------------------------------
#  The directory to find the required communication library include files,
#  if they are required by your system.
#  -----------------------------------------------------------------------
   SYSINC = -I$(MPIINCdir)

#  ---------------------------------------------------------------------------
#  The Fortran 77 to C interface to be used.  If you are unsure of the correct
#  setting for your platform, compile and run BLACS/INSTALL/xintface.
#  Choices are: Add_, NoChange, UpCase, or f77IsF2C.
#  ---------------------------------------------------------------------------
   INTFACE = -Df77IsF2C

#  ------------------------------------------------------------------------
#  Allows the user to vary the topologies that the BLACS default topologies
#  (TOP = ' ') correspond to.  If you wish to use a particular topology
#  (as opposed to letting the BLACS make the choice), uncomment the
#  following macros, and replace the character in single quotes with the
#  topology of your choice.
#  ------------------------------------------------------------------------
#  DEFBSTOP   = -DDefBSTop="'1'"
#  DEFCOMBTOP = -DDefCombTop="'1'"

#  -------------------------------------------------------------------
#  If your MPI_Send is locally-blocking, substitute the following line
#  for the empty macro definition below.
#  SENDIS = -DSndIsLocBlk
#  -------------------------------------------------------------------
   SENDIS =

#  --------------------------------------------------------------------
#  If your MPI handles packing of non-contiguous messages by copying to
#  another buffer or sending extra bytes, better performance may be
#  obtained by replacing the empty macro definition below with the
#  macro definition on the following line.
#  BUFF = -DNoMpiBuff
#  --------------------------------------------------------------------
   BUFF =
#  -----------------------------------------------------------------------
#  If you know something about your system, you may make it easier for the
#  BLACS to translate between C and fortran communicators.  If the empty
#  macro definition is left alone, this translation will cause the C
#  BLACS to globally block for MPI_COMM_WORLD on calls to BLACS_GRIDINIT
#  and BLACS_GRIDMAP.  If you choose one of the options for translating
#  the context, neither the C or fortran calls will globally block.
#  If you are using MPICH, or a derivative system, you can replace the
#  empty macro definition below with the following (note that if you let
#  MPICH do the translation between C and fortran, you must also indicate
#  here if your system has pointers that are longer than integers.  If so,
#  define -DPOINTER_64_BITS=1.)  For help on setting TRANSCOMM, you can
#  run BLACS/INSTALL/xtc_CsameF77 and BLACS/INSTALL/xtc_UseMpich as
#  explained in BLACS/INSTALL/README.
#  TRANSCOMM = -DUseMpich -DPOINTER_64_BITS=1
#
#  If you know that your MPI uses the same handles for fortran and C
#  communicators, you can replace the empty macro definition below with
#  the macro definition on the following line.
#  TRANSCOMM = -DCSameF77
#  -----------------------------------------------------------------------
   TRANSCOMM =

#  --------------------------------------------------------------------------
#  You may choose to have the BLACS internally call either the C or Fortran77
#  interface to MPI by varying the following macro.  If TRANSCOMM is left
#  empty, the C interface BLACS_GRIDMAP/BLACS_GRIDINIT will globally-block if
#  you choose to use the fortran internals, and the fortran interface will
#  block if you choose to use the C internals.  It is recommended that the
#  user leave this macro definition blank, unless there is a strong reason
#  to prefer one MPI interface over the other.
#  WHATMPI = -DUseF77Mpi
#  WHATMPI = -DUseCMpi
#  --------------------------------------------------------------------------
   WHATMPI =

#  ---------------------------------------------------------------------------
#  Some early versions of MPICH and its derivatives cannot handle user defined
#  zero byte data types.  If your system has this problem (compile and run
#  BLACS/INSTALL/xsyserrors to check if unsure), replace the empty macro
#  definition below with the macro definition on the following line.
#  SYSERRORS = -DZeroByteTypeBug
#  ---------------------------------------------------------------------------
   SYSERRORS =

#  ------------------------------------------------------------------
#  These macros set the debug level for the BLACS.  The fastest
#  code is produced by BlacsDebugLvl 0.  Higher levels provide
#  more debug information at the cost of performance.  Present levels
#  of debug are:
#  0 : No debug information
#  1 : Mainly parameter checking.
#  ------------------------------------------------------------------
   DEBUGLVL = -DBlacsDebugLvl=$(BLACSDBGLVL)

#  -------------------------------------------------------------------------
#  All BLACS definitions needed for compile (DEFS1 contains definitions used
#  by all BLACS versions).
#  -------------------------------------------------------------------------
   DEFS1 = -DSYSINC $(SYSINC) $(INTFACE) $(DEFBSTOP) $(DEFCOMBTOP) $(DEBUGLVL)
   BLACSDEFS = $(DEFS1) $(SENDIS) $(BUFF) $(TRANSCOMM) $(WHATMPI) $(SYSERRORS)

#=============================================================================
#=============================== End SECTION 2 ===============================
#=============================================================================

#=============================================================================
#=========================== SECTION 3: COMPILERS ============================
#=============================================================================
# The following macros specify compilers, linker/loaders, the archiver,
# and their options.  Some of the fortran files need to be compiled with no
# optimization.  This is the F77NO_OPTFLAG.  The usage of the remaining
# macros should be obvious from the names.
#=============================================================================

   F77            = g77
   F77NO_OPTFLAGS =
   F77FLAGS       = $(F77NO_OPTFLAGS) -fno-globals -fno-f90 -fugly-complex -w -c $*.f
#  F77FLAGS       = $(F77NO_OPTFLAGS) -O
   F77LOADER      = $(F77)
   F77LOADFLAGS   =
   CC             = gcc
   CCFLAGS        = -O4
   CCLOADER       = $(CC)
   CCLOADFLAGS    =
#  --------------------------------------------------------------------------
#  The archiver and the flag(s) to use when building an archive (library).
#  Also the ranlib routine.  If your system has no ranlib, set RANLIB = echo.
#  --------------------------------------------------------------------------
   ARCH      = ar
   ARCHFLAGS = r
   RANLIB    = ranlib

#=============================================================================
#=============================== End SECTION 3 ===============================
#=============================================================================
 








