The Tensor Artificial Viscosity in Spheral++
-
This page describes the version of Spheral++ used to prepare the paper on the
SPH tensor artificial viscosity. Since Spheral++ is a living, evolving project,
the most recent version of Spheral++ will not necessarily run the test problem
set-up scripts identically to what is presented in the paper. Therefore we
have set aside a CVS branch and cut a special release of Spheral++ specifically
to reproduce the results presented in the tensor viscosity methods paper.
-
The links below detail how to get the code and test problem set-ups, build, and
then run the various tests.
Downloading the Code and Test Problems.
The code and test problems are bundled together, so you can download them in one
operation. There are two possible methods of getting the appropriate source:
- Download the release.
We have released the source frozen at the state used in producing the results
given in the tensor viscosity methods paper; it can be found on the Spheral++
releases page.
- The sources are also available under a CVS branch of the Spheral++ repository,
which can be checked out as follows:
- Make yourself a directory where you want to download the source and problems.
- cd into the directory you just made, and issue the following set of CVS commands:
cvs -d:pserver:anonymous@cvs.Spheral.sourceforge.net:/cvsroot/spheral login
cvs -z3 -d:pserver:anonymous@cvs.Spheral.sourceforge.net:/cvsroot/spheral co -r Q_project src
cvs -d:pserver:anonymous@cvs.Spheral.sourceforge.net:/cvsroot/spheral logout
cd src/
cvs update -dP
Spheral++ depends on a core set of software that you must have, and there are
many optional packages you may want for use with Spheral++. At a minimum
you must have:
- Python.
Python is the main interface to Spheral++, and is absolutely required! The
odds are your system already has Python installed, but even if so I still
recommend downloading and building your own for use with Spheral++. The
reason is that when you want to install extensions to Python for use with
Spheral++, the default installation process for those extensions assumes
you have write access to the Python directories (which is not the case with
the system Python!). It is also useful to be able to use an up-to-date version
of Python, and system installations are often behind the times. (A sketch of
building a private Python is given after this list.)
- Gnuplot
Gnuplot is currently the standard 2-D plotting utility used in many of the
Spheral++ test and demo scripts. If your system already has Gnuplot installed,
the system version should be OK.
- Numeric extension for Python
- Python interface to Gnuplot
Building both Numeric Python and the Gnuplot Python interface is *really*
simple: in most cases you should just have to run "python setup.py install",
but make sure you use the version of Python that you intend to run Spheral++
under! (See the sketch after this list.)
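For concreteness, here is a rough sketch of building a private Python and then
installing the two extensions with it. The install prefix ($HOME/local/python)
and the unpacked source directory names are placeholders for illustration only;
substitute whatever versions and paths you actually use.
# From inside the unpacked Python source tree, build and install a private Python:
./configure --prefix=$HOME/local/python
make
make install
# Put the new interpreter at the front of your PATH (sh/bash syntax):
export PATH=$HOME/local/python/bin:$PATH
# Install the Numeric and Gnuplot Python extensions with that same interpreter
# (the directory names below are placeholders for the unpacked source trees):
cd Numeric
$HOME/local/python/bin/python setup.py install
cd ../Gnuplot-py
$HOME/local/python/bin/python setup.py install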
Additionally, if you want to run the parallel version of Spheral++, you will need
- MPI
Parallel Spheral++ uses the Message Passing Interface (MPI) to coordinate
parallel runs, so MPI is a must. The standard vendor MPI on most parallel
machines is fine, but if you need to you can install the freely available
MPICH distribution.
- pyMPI.
The MPI-enhanced version of Python. Follow the pyMPI project's directions to
download and build pyMPI. When run in distributed parallel mode, Spheral++
assumes a Python process per processor, and requires that MPI be initialized
before entering any Spheral++ module. The parallel MPI-enabled version of
Python provides this functionality, and supplies an "mpi" module that allows
Python scripts to perform distributed operations (gather/scatter, reductions
across processors, etc.). This is required to run Spheral++ in parallel!
(A quick sanity check is sketched after this list.)
- ParMETIS.
ParMETIS is a parallel graph partitioning library, which Spheral++ uses to
automatically partition (or domain decompose) an SPH distribution of nodes
across processors. The build system for ParMETIS is rather simple, if
low-tech: you simply edit the provided makefile, filling in the appropriate
compiler and associated flags for your system. The resulting libraries
(libmetis.a and libparmetis.a) and the associated header file (parmetis.h)
then need to be moved someplace Spheral++ can find them, as described under
the "--with-parmetis=" configure flag below. (A sketch of this staging step
is given after this list.)
Configuring and Building.
Configuration
Thanks to Martin Casado, Spheral++ now has an autoconf build process. In most
cases you should simply have to execute the following set of steps (in the
src/src directory):
- ./boot
- ./configure
There are of course many options you can give to configure; you can see them
all by typing "./configure --help". The current set of options you may want
to use includes the following (an example invocation combining several of
them is given after this list):
- --without-mpi
Compile Spheral++ for a serial run. Use this option if you don't want to
use the parallel distributed version of Spheral++.
- --with-CXX=ARG
Manually set C++ compiler to ARG.
- --with-CC=ARG
Manually set C compiler to ARG. Spheral++ does not currently use any C code,
so this option actually has no effect.
- --with-LD=ARG
Manually set linker to ARG.
- --with-python=ARG
Use non-standard python (other than the default in your path).
- --with-opt=Val
Set the optimization level (0,1,2,3). Note that on some platforms (notably IBM
AIX) we have found that compiling at high optimization (>2) sometimes fails!
- --without-assert
Turn off Design by Contract asserts. Use this option if you are compiling for
an optimized run, and don't want these internal consistency checks to slow down
the run time performance.
- --with-timers
Turn on the Timer class profiling. This will generate a table of times
used by different sections of the code during a run.
- --with-papi
Activate PAPI for class profiling. Must be used in conjunction with
--with-timers, and produces more information about the run time performance.
- --with-parmetis=DIR
Indicates the base directory where the required ParMETIS files are stored.
Under this directory should be the following structure:
- DIR/include/parmetis.h
- DIR/lib/libmetis.a
- DIR/lib/libparmetis.a
- --enable-postgres
Enable support for PostgreSQL.
- --with-postgres-programs=DIR
Location of PostgreSQL executables.
- --with-postgres-includes=DIR
Location of PostgreSQL includes.
- --with-postgres-libs=DIR
Location of PostgreSQL libraries.
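As an illustration, a parallel, optimized build using a privately built Python
and the ParMETIS staging area sketched earlier might be configured like this
(all paths here are placeholders):
./boot
./configure --with-python=$HOME/local/python/bin/python \
            --with-parmetis=$HOME/parmetis \
            --with-opt=2 \
            --without-assert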
This process presupposes you have GNU autoconf installed on your system. This
is probably true for most people, but if not please visit the GNU autoconf site
and install autoconf.
Note -- if you are trying to compile on IBM AIX please consult the README.AIX
under src/.
Compile the code
At long last you should be ready to compile the code, so go into the "src/src"
directory and type "make". The libraries automatically install under the
path specified in the configure stage above.
Be warned, Spheral++ can severely exercise your compiler, so (particularly if
you are building optimized) it might take some time to build!
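For example (the installation path shown is only a placeholder; the compiled
modules end up in the lib/ directory discussed in the next section):
cd src/src
make
# after a successful build, the Spheral++ Python modules should appear here:
ls $HOME/Spheral/src/lib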
Configure your environment.
In order for your Python process to find the Spheral++ components, you must set
the environment variables PYTHONPATH and LD_LIBRARY_PATH to point to where you
have installed the Spheral++ components. (On AIX the latter variable is actually
called LIBPATH.) Currently the Spheral++ build process is hardwired to place the
modules you build into a lib/ directory in the directory where you installed the
Spheral++ source code.
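For example, with sh/bash syntax and assuming the source was checked out under
$HOME/Spheral (an illustrative path only; adjust to your own installation, and
use setenv under csh/tcsh):
export PYTHONPATH=$HOME/Spheral/src/lib
export LD_LIBRARY_PATH=$HOME/Spheral/src/lib
On AIX, set LIBPATH instead of LD_LIBRARY_PATH.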
Running the Test Problems.
When you downloaded the Spheral++ source above, you also got the run scripts for
each of the tests presented in the tensor viscosity paper: they can be found
under the "src/results/TensorQ" directory. Note that these runs were not
actually done in this directory structure, so in some cases the script may
require minor editing. The various files found under this test directory are
briefly described in the README in "src/results/TensorQ".
To run the problems, just make sure your environment is set up properly so
that Python can find your Spheral++ libraries (as described above), and run
Python on one of the problem run scripts. For instance, to run the 2-D
cylindrical Noh problem shown in Figures 1a and 1b serially:
- cd src/results/TensorQ/Fig01/MG-Balsara
- python -i Noh-cylindrical-2d.py
Most of the problems presented in the paper were run in parallel, so the scripts
were set up with parallelism in mind. If you want to run problems in parallel,
the invocation depends on how MPI runs are handled on your system. In most
cases you just use
- mpirun -np 4 pyMPI -i Noh-cylindrical-2d.py
or, on IBM AIX systems you will likely use
- poe pyMPI -i Noh-cylindrical-2d.py -nodes 1 -procs 4
etc.