MOLCAS is a quantum chemistry software package developed by scientists to be used by scientists. It is not primarily a commercial product, and it is not sold to produce a fortune for its owner (Lund University). In MOLCAS the authors have tried to assemble their collected experience and knowledge in computational quantum chemistry. MOLCAS is a research product, and it is used as a platform by the Lund quantum chemistry group in their work to develop new and improved computational tools in quantum chemistry. Most of the codes in the software have newly developed features, and the user should not be surprised if a bug is found now and then.

Official website: http://www.teokem.lu.se/molcas/

This document explains how to build Molcas-7.4 on Intel Westmere nodes with an InfiniBand network, using the following software:

  • Intel Compiler Suite 11.1.072 (also includes MKL)
  • GlobalArrays 1.4.1
  • OpenMPI-1.4.2*

* OpenMPI was built with ICS-11.1.072, BLCR-0.8.2, OFED-1.5.1 and with SGE support, and GlobalArrays was built with OpenMPI-1.4.2.
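
For reference, an OpenMPI 1.4.x build with those components would be configured roughly as below. This is a sketch, not our exact command: all installation prefixes are hypothetical examples for a site layout like ours, and you should check `./configure --help` of your OpenMPI version for the definitive flag names.

```shell
# Hypothetical OpenMPI 1.4.2 configure line: InfiniBand (OFED) support,
# SGE tight integration, and BLCR-based checkpoint/restart.
# All paths below are site-specific examples, not defaults.
./configure --prefix=/opt/OpenMPI/1.4.2 \
            --with-openib=/opt/OFED/1.5.1 \
            --with-sge \
            --with-ft=cr \
            --with-blcr=/opt/blcr/0.8.2 \
            CC=icc CXX=icpc F77=ifort FC=ifort
make && make install
```

The `--with-ft=cr` / `--with-blcr` pair is what enables the checkpoint & restart support mentioned above, and `--with-sge` gives the tight SGE integration.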

The performance results for OpenMPI and Intel MPI are almost the same. We finally chose OpenMPI because of its easy SGE integration and its checkpoint & restart support.

It is important to know that this build is highly optimised for our environment; if you have a different network or architecture, you will obviously have to investigate which compilers, libraries and parallel environments offer you the best performance.


After evaluating other compilers, libraries and compile options, we obtained the best performance with the following procedure.

Environment Set Up

First of all, we load the modules needed to build this software. We usually integrate the dependencies inside the module files; in this case, loading the OpenMPI environment also loads the Intel Compiler Suite, BLCR and OFED modules.

# module load OpenMPI/1.4.2_ics-11.1.072_ofed-1.5.1_blcr-8.2
# module load intel_mkl/11.1.072
# module list
Currently Loaded Modulefiles:
1) intel_compiler_suite/11.1.072
2) blcr/0.8.2
3) OFED/1.5.1
4) OpenMPI/1.4.2_ics-11.1.072_ofed-1.5.1_blcr-8.2
5) intel_mkl/11.1.072
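
The dependency chaining works because the OpenMPI modulefile itself loads the compiler, BLCR and OFED modules. A minimal Tcl modulefile sketch of that idea follows; the module names match the list above, but the installation path is a hypothetical example for a site like ours:

```tcl
#%Module1.0
## Hypothetical OpenMPI modulefile that pulls in its build dependencies
module load intel_compiler_suite/11.1.072
module load blcr/0.8.2
module load OFED/1.5.1

# Example installation prefix -- adapt to your site
set             root            /opt/OpenMPI/1.4.2
prepend-path    PATH            $root/bin
prepend-path    LD_LIBRARY_PATH $root/lib
prepend-path    MANPATH         $root/share/man
```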

Molcas can run several classes of calculations, and the scalability of such jobs can be improved with either MPI or OpenMP. That is why we build Molcas twice: once for MPI only and once for OpenMP only.

MPI version

Global Arrays

First, we build the Global Arrays library shipped with Molcas:

# tar -xvf molcas74.tar
# mv molcas74 molcas74_ompi
# cd molcas74_ompi/
# cd g
# gmake TARGET=LINUX64 FC=mpif77 CC=mpicc | tee -a make_nehalem-64.log

At the end you will find output like this:

gmake[1]: Leaving directory `/scratch/jblasco/MOLCAS/molcas74_ompi/g/global/testing'
An executable test program for GA is ./global/testing/test.x
There are also other test programs in that directory.
Also, to test your GA programs, suggested compiler/linker
options are as follows.
GA libraries are built in /scratch/jblasco/MOLCAS/molcas74_ompi/g/lib/LINUX64
INCLUDES = -I/scratch/jblasco/MOLCAS/molcas74_ompi/g/include

For Fortran Programs:
FLAGS = -g -Vaxlib -O3 -w -cm -xW -tpp7 -i8
LIBS = -L/scratch/jblasco/MOLCAS/molcas74_ompi/g/lib/LINUX64 -lglobal -lma -llinalg -larmci -ltcgmsg -lm

For C Programs:
LIBS = -L/scratch/jblasco/MOLCAS/molcas74_ompi/g/lib/LINUX64 -lglobal -lma -llinalg -larmci -ltcgmsg -lm -lm

In order to verify the compilation, we run ./global/testing/test.x:

# ./global/testing/test.x
GA Statistics for process 0
create destroy get put acc scatter gather read&inc
calls: 11 10 1.45e+04 1563 1565 42 40 100
number of processes/call 1.00e+00 1.00e+00 1.00e+00 9.52e-01 1.00e+00
bytes total: 3.19e+06 2.17e+06 2.58e+05 5.87e+04 5.73e+04 8.00e+02
bytes remote: 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00
Max memory consumed for GA by this process: 676192 bytes

All tests successful

The run takes about 0.002 seconds, and the message "All tests successful" at the end of the output confirms that the Global Arrays build is working.
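
Beyond the single-process run above, the GA test program can also be launched over several MPI ranks to exercise the communication layer over InfiniBand. A sketch, assuming OpenMPI's mpirun is in the PATH; the rank count and log file name are just examples:

```shell
# Run the GA self-test on 4 MPI ranks and keep the log;
# the build is only good if the success message shows up.
mpirun -np 4 ./global/testing/test.x | tee ga_test.log
grep -q "All tests successful" ga_test.log && echo "GA parallel test OK"
```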