The Tensor Artificial Viscosity in Spheral++


Downloading the Code and Test Problems.

The code and test problems are bundled together, so you can download them in a single operation. Spheral++ depends on a core set of software that you must have, and there are many optional packages you may want for use with Spheral++. At minimum you must have
  1. Python. Python is the main interface to Spheral++, and is absolutely required! The odds are your system already has Python installed, but even so I recommend downloading and building your own copy for use with Spheral++. The reason is that when you install extensions to Python for use with Spheral++, the default installation process for those extensions assumes you have write access to the Python directories, which is not the case with the system Python! It is also useful to have an up-to-date version of Python, and system installations are often behind the times.
  2. Gnuplot. Gnuplot is currently the standard 2-D plotting utility used in many of the Spheral++ test and demo scripts. If your system already has Gnuplot installed, the system version should be fine.
  3. Numeric extension for Python
  4. Python interface to Gnuplot
Building both Numeric Python and the Gnuplot Python interface is *really* simple: in most cases you just run "python setup.py install", but make sure you use the version of Python you intend to run Spheral++ under! A sketch of this appears after the following list. Additionally, if you want to run the parallel version of Spheral++, you will need
  1. MPI. Parallel Spheral++ uses the Message Passing Interface (MPI) to coordinate parallel runs, so MPI is a must. The standard vendor MPI on most parallel machines is fine, but if necessary you can install the freely available MPICH distribution.
  2. pyMPI. The MPI-enhanced version of Python. Follow the directions on the pyMPI page to download and build it. When run in distributed parallel mode, Spheral++ assumes one Python process per processor, and requires that MPI be initialized before entering any Spheral++ module. pyMPI provides this functionality, along with an "mpi" module that allows Python scripts to perform distributed operations (gather/scatter, reductions across processors, etc.). This is required to run Spheral++ in parallel!
  3. ParMETIS. ParMETIS is a parallel graph partitioning library, which Spheral++ uses to automatically partition (or domain decompose) an SPH distribution of nodes across processors. The build system for ParMETIS is simple if low-tech: you edit the provided makefile, filling in the appropriate compiler and associated flags for your system. The resulting libraries (libmetis.a and libparmetis.a) and the associated header file (parmetis.h) then need to be moved someplace Spheral++ can find them, as described under the "--with-parmetis" configure flag below.
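As an illustration of building the Python extensions mentioned above, the sequence for installing Numeric typically looks like the following (the path and version number here are hypothetical placeholders; substitute your own Python installation and whatever Numeric release you downloaded):
  1. cd Numeric-23.0
  2. /home/me/local/bin/python setup.py install
The Gnuplot Python interface installs the same way. The one thing to watch is that the python you invoke in step 2 is the same one you will run Spheral++ under, so the extension lands in that Python's site-packages directory.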

Configuring and Building.

Configuration

Thanks to Martin Casado, Spheral++ now has an autoconf build process. In most cases you should simply have to execute the following set of steps (in the src/src directory):
  1. ./boot
  2. ./configure
There are of course many options you can give to configure; you can see them all by typing "./configure --help". The current set of options you may want to use includes the following (a combined example appears after the list):
  1. --without-mpi Compile Spheral++ for a serial run. Use this option if you don't want to use the parallel distributed version of Spheral++.
  2. --with-CXX=ARG Manually set C++ compiler to ARG.
  3. --with-CC=ARG Manually set C compiler to ARG. Spheral++ does not currently use any C code, so this option actually has no effect.
  4. --with-LD=ARG Manually set linker to ARG.
  5. --with-python=ARG Use a non-standard Python (other than the default in your path).
  6. --with-opt=Val Set the optimization level (0,1,2,3). Note that on some platforms (notably IBM AIX) we have found that compiling at high optimization levels (>2) sometimes fails!
  7. --without-assert Turn off Design by Contract asserts. Use this option if you are compiling for an optimized run, and don't want these internal consistency checks to slow down the run time performance.
  8. --with-timers Turn on the Timer class profiling. This will generate a table of times used by different sections of the code during a run.
  9. --with-papi Activate PAPI for class profiling. Must be used in conjunction with --with-timers, and produces more information about the run time performance.
  10. --with-parmetis=DIR Indicates the base directory where the required ParMETIS files are stored. Under this directory should be the following structure:
    • DIR/include/parmetis.h
    • DIR/lib/libmetis.a
    • DIR/lib/libparmetis.a
  11. --enable-postgres Enable support for PostgreSQL
  12. --with-postgres-programs=DIR Location of PostgreSQL executables.
  13. --with-postgres-includes=DIR Location of PostgreSQL includes.
  14. --with-postgres-libs=DIR Location of PostgreSQL libraries.
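For example, a parallel build against a privately installed Python and ParMETIS might be configured as follows (the paths are hypothetical; point them at your own installations):
  1. ./boot
  2. ./configure --with-python=/home/me/local/bin/python --with-parmetis=/home/me/local --with-opt=2
A serial build, by contrast, could be configured with just "./configure --without-mpi".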
This process presupposes you have GNU autoconf installed on your system. This is probably true for most people, but if not please visit the GNU autoconf site and install it.
Note -- if you are trying to compile on IBM AIX please consult the README.AIX under src/.

Compile the code

At long last you should be ready to compile the code, so go into the "src/src" directory and type "make". The libraries automatically install under the path specified in the configure stage above. Be warned, Spheral++ can severely exercise your compiler, so (particularly if you are building optimized) it might take some time to build!
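If your version of make supports parallel jobs (GNU make does), you can reduce the wait by building several targets at once, for instance:
  1. make -j 4
The job count of 4 is just an example; choose whatever suits your machine.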

Configure your environment.

In order for your Python process to find the Spheral++ components, you must set the environment variables PYTHONPATH and LD_LIBRARY_PATH to point at the location where you have installed the Spheral++ components. (On AIX the latter variable is actually called LIBPATH.) Currently the Spheral++ build process is hardwired to place the modules you build into a lib/ directory in the directory where you installed the Spheral++ source code. For example:
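Under a csh-style shell this might look like the following (the installation path is a hypothetical placeholder for wherever your Spheral++ source lives):
  1. setenv PYTHONPATH /home/me/Spheral/lib
  2. setenv LD_LIBRARY_PATH /home/me/Spheral/lib
Under sh/bash the equivalent is "export PYTHONPATH=/home/me/Spheral/lib", and similarly for LD_LIBRARY_PATH.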

Running the Test Problems.

When you downloaded the Spheral++ source above, you also got the run scripts for each of the tests presented in the tensor viscosity paper: they can be found under the "src/results/TensorQ" directory. Note that these runs were not actually performed in this directory structure, so in some cases a script may require minor editing. The various files found under this test directory are briefly described in the README in "src/results/TensorQ". To run the problems, just make sure your environment is set up so that Python can find your Spheral++ libraries (as described in the previous section), and run python on one of the problem run scripts. For instance, to run the 2-D cylindrical Noh problem shown in Figures 1a & b serially:
  1. cd src/results/TensorQ/Fig01/MG-Balsara
  2. python -i Noh-cylindrical-2d.py
Most of the problems presented in the paper were run in parallel, so the scripts were set up with parallelism in mind. If you want to run problems in parallel, the invocation depends on how MPI runs are handled on your system. In most cases you just use
  1. mpirun -np 4 pyMPI -i Noh-cylindrical-2d.py
or, on IBM AIX systems you will likely use
  1. poe pyMPI -i Noh-cylindrical-2d.py -nodes 1 -procs 4
etc.