FMO questions and answers


Why did the results change from version 3.1 to 3.2?
A number of the default options were changed. The old results can be recovered by explicitly setting the applicable options. The only exception is the DFT gradient: starting from 3.2, unit masses are used in the projection of the translational and rotational degrees of freedom, instead of the real masses used in 3.1. Because the old way is believed to be wrong, there is no option to reproduce the old FMO-DFT gradient results exactly.

The changed defaults are as follows:
FMO 3.1 -> FMO 3.2
RESPPC: 2.0 -> 2.5 (FMO3 only, else 2.0)
RESDIM: 2.0 -> 3.25 (FMO3 only, else 2.0)
RCORSD: 2.0 -> 3.25 (FMO3 only, else 2.0)
RITRIM: 2.0,2.0,2.0,2.0 -> 1.25,-1,2,2 (FMO3 only)
MODESP: 1 -> 0 (FMO2 only, else 1)
MODGRD: 0 -> 10 (FMO2 only, else 0)
MTHALL: 2 -> 4 (FMO/PCM only)
DFT grid: spherical -> Lebedev (FMO-DFT only)

Why do I get FMO2 energies in FMO3 different from those in FMO2 runs?
This is related to the question above. Some default options differ between FMO2 and FMO3, namely RESPPC, RESDIM, RCORSD, MODESP and MODGRD. By setting them explicitly in the input, one can make the FMO2 results agree. The defaults differ because the terms in FMO3 need to be better balanced; for either FMO2 or FMO3, the defaults are chosen to give the best practical accuracy for that method.

Why is FMO in GAMESS not fast enough in comparison to no fragmentation with some other programs?
In addition to the obvious differences in the computational algorithms, the default accuracy settings (integral thresholds, SCF convergence, etc.) differ between programs. For a fair comparison, one should match all relevant thresholds, which can have a large impact on timings. GAMESS has a high accuracy standard, and FMO raises it even higher, in order to obtain reliable and reproducible results for the huge total energies that large molecules have. Other programs may take a different view of accuracy.

Why was " $fmo nfg=1" not recognised?
GAMESS searches for the "fmo" string followed by three spaces (or a new line). Therefore, the line above should be " $fmo   nfg=1". Note that starting around FMO 4.0 this limitation has been removed, and you can now use "$fmo nfg=1".

Why do I get this memory request? I am confused about MEMORY(MWORDS) and NINTIC (what is that?)?
***** ERROR: MEMORY REQUEST EXCEEDS AVAILABLE MEMORY
PROCESS NO. 0 WORDS REQUIRED= 232646216 AVAILABLE= 220000000
Thankfully, you did not ask about MEMDDI for MP2 or, even worse, CC.
If NINTIC is set in your input file (possibly without your knowledge, by Facio or FMOutil), it defines a memory buffer that is allocated at the beginning of GAMESS (almost) exclusively for storing two-electron integrals. The rest of GAMESS then has only MEMORY-NINTIC words left, which may be insufficient. Decrease the value of NINTIC by the amount needed: in the example above, 232646216-220000000=12646216 words (better rounded up to 15000000). NINTIC is usually given with a minus sign, so drop the sign, subtract 15 million, and put the sign back.
Of course, that is not the only reason to get a "MEMORY REQUEST EXCEEDS...", but it is the confusing one more or less specific to FMO. You can run out of memory without setting NINTIC.
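The arithmetic above can be sketched as follows (the word counts are taken from the error message; the rounding margin and the current NINTIC value are illustrative assumptions, not GAMESS requirements):

```python
# Compute how much to shrink NINTIC when the memory request exceeds
# the available words (numbers taken from the error message above).
words_required = 232646216
words_available = 220000000

shortfall = words_required - words_available  # 12646216 words

# Round up to a comfortable margin, e.g. the next multiple of 5 million words.
margin = 5_000_000
reduction = -(-shortfall // margin) * margin  # ceiling division -> 15000000

# NINTIC is usually given with a minus sign; reduce its magnitude.
nintic_old = -100_000_000  # hypothetical current setting from the input file
nintic_new = -(abs(nintic_old) - reduction)

print(shortfall, reduction, nintic_new)
```

With these example numbers, the buffer is cut by 15 million words, leaving enough room for the rest of the run.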

My MP2 run stops for no reason. The following appears at the end of the output file.
TOTAL DISK REQUIRED (ALL PROCESSORS)= 121041 MBYTES
DISK SPACE PER CORE= 60520 MBYTES, USING P= 2

Check the log file. If it contains the following (or something similar, if you use a different FORTRAN compiler):
forrtl: No space left on device
forrtl: severe (38): error during write, unit 20, file
/scr1/myself/mybigjob.F20.000
Then you ran out of disk space. One solution is to increase the number of nodes per group (for example, by making ngrfmo smaller for the step where this happened), because this file is distributed over the nodes within each group. Another is to define smaller fragments. Other solutions are to buy a larger disk or to switch to RI-MP2...
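The effect of the group size on the per-core requirement can be sketched with the numbers printed above (the alternative group size is an illustrative assumption):

```python
# Estimate per-core disk usage for the distributed MP2 integral file,
# using the figures from the output shown above.
total_disk_mb = 121041   # TOTAL DISK REQUIRED (ALL PROCESSORS)
p = 2                    # cores sharing the file within a group

disk_per_core_mb = total_disk_mb // p   # 60520 MB, as printed

# Doubling the cores per group (a hypothetical P=4) halves the
# per-core requirement, which is why fewer, larger groups help.
disk_per_core_p4 = total_disk_mb // 4   # 30260 MB

print(disk_per_core_mb, disk_per_core_p4)
```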

GAMESS crashes and no reason is given. What happened and what can I do?
CPU 0: STEP CPU TIME= 0.12 TOTAL CPU TIME= 7.1 ( 0.1 MIN)
TOTAL WALL CLOCK TIME= 16.9 SECONDS, CPU UTILIZATION IS 42.09%
DDI Process 0 (24): error code 911
DDI Process 0 (16): error code 911
DDI Process 0 (8): error code 911
ddikick.x: application process 8 quit unexpectedly.
There are two very likely reasons for this.
1. First, if you use GDDI ($gddi ngroup), inspect all output files, not just the main one. The output files from the other GDDI groups are typically named *.F06.*. You may need to modify rungms so that the .F06.* files are copied back from all compute nodes (otherwise rungms may delete them at the end of the job). In this category, two very likely causes of failure exist: not enough memory for some calculation, or SCF divergence (reported in plain ASCII in some .F06.* file).
2. Even if you do not use GDDI (and even more likely if you do), the final (and most interesting) part of the output file may be lost when you run in parallel. This can happen if a slave process hits a problem first and dies before the master process, so that nothing is reported (slaves do not report problems). The possible causes are various: not enough memory, a full disk, a missing basis set file (external or internal), a mistake in the input file, etc. A good solution in this case is to run the same job on 1 CPU core. If the job takes very long and you do not wish to do so, a possible solution is simply to rerun it. If you are ready for more drastic measures, try adding an option that increases the amount of output, in the hope of getting the error message printed before the output file is truncated by the OS. Try various incantations of EXETYP; examples are EXETYP=INT1 or RHFCL, and, if you are truly desperate, EXETYP=DEBUG.
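As a small helper for the first step above, the per-group output files can be searched for failure messages with a short script. This is a minimal sketch: the file-name pattern follows the .F06.* convention mentioned above, and the search strings are only the messages quoted in this FAQ; extend the list for your own setup.

```python
import glob

# Failure messages quoted elsewhere in this FAQ; add your own as needed.
PATTERNS = ("error code 911", "No space left on device")

def scan_outputs(pattern="*.F06.*"):
    """Return (path, line number, line) for every match in the group outputs."""
    hits = []
    for path in sorted(glob.glob(pattern)):
        with open(path, errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                if any(p in line for p in PATTERNS):
                    hits.append((path, lineno, line.rstrip()))
    return hits

for path, lineno, line in scan_outputs():
    print(f"{path}:{lineno}: {line}")
```

Running it in the scratch or working directory points you at the group whose output actually contains the error.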