UCLA AGCM USER GUIDE
Instructions on how to compile, load and run the UCLA AGCM
This document contains the information needed to compile, load and run the UCLA atmospheric general circulation model (AGCM). Instructions on how to set up a coupled GCM (atmosphere and ocean) run are also included. The code (which is 99% FORTRAN) resides in the directory camel_7.2p and the directories below it (for example, the agcm and esm directories).
1) Setup and Compile Options
To compile the code, there are several files which may need to be modified to suit each user's requirements. In the camel_7.2p directory there is a file called "cshrc.setup" in which the full directory path to the code must be specified (through an environment variable called CAMILLEHOME). In the directory "camel_7.2p/include" there are two files which set options for compiling and linking the code. These files are "UserOptions.h" and "config.h".
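For example, the path is set inside "cshrc.setup" with a line of the form below (the path shown is only an illustration, not a default), and the file is then sourced from the camel_7.2p directory:

   # the path below is an example only; point it at your copy of the code
   setenv CAMILLEHOME /u/username/camel_7.2p
   source cshrc.setup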
"UserOptions.h", one sets the options for compiling
the AGCM code. There are two variables that control the inclusion
of code for the coupling of other model to the UCLA AGCM. The
variable USE_ESMF control wether or not the AGCM
code is compiled in a form suitable for coupling to other gridded
components utilizing ESMF package. If both USE_ESMF and OD_Package
is set to 0, the AGCM will run stand-alone with prescribed conditions
over ocean points. If variable OD_Package is not set
to zero, then it will modify the AGCM code for coupling to different
ocean general circulation model (OGCM) utilizing UCLA AGCM coupler
or DDB. If either USE_ESMF or OD_Package not equal to zero is
then the resulting executable must be built by linking to a suitable
OGCM and the two models will run coupled together. In "UserOptions.h",
one also sets the type of machine to run on, the dynamic memory
allocation syntax, and message passing software to use via the
variables: ARCH_OPTION, MEM_OPTION and MSG_OPTION. A portion of
this file is reproduced below;
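(The fragment below is a sketch rather than an exact reproduction of the distributed file; the #define syntax, the default values, and the MEM_OPTION setting are assumptions, while ARCH_DEC and MSG_MPI are the settings used in the DEC/HP Alpha example that follows.)

   /* Sketch only: the defaults and the MEM_OPTION value are assumptions */
   #define USE_ESMF    0            /* 0 = no ESMF coupling                */
   #define OD_Package  0            /* 0 = no OGCM coupling (stand-alone)  */
   #define ARCH_OPTION ARCH_DEC     /* target machine                      */
   #define MEM_OPTION  MEM_F90      /* dynamic memory allocation syntax    */
   #define MSG_OPTION  MSG_MPI      /* message passing software            */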
In the "config.h" file, one must choose the compiler to be used, loader to be used and compiler and loader flags. One such configuration appropriate for the DEC/HP Alpha cluster is as follows:
   #if ( (ARCH_OPTION == ARCH_DEC) && (MSG_OPTION == MSG_MPI) )
      [compiler, loader and flag settings for the DEC/HP Alpha with MPI]
   #endif
These files will need to be changed when moving to another machine.
2) Compiling and Linking
Prior to compiling and linking the model, an appropriate setup for the platform to be used, the library location(s), and the specification of compilers and other utilities must be accomplished first. Don't forget to source "cshrc.setup" in the camel_7.2p directory every time you log on or open a new window. Once the required changes have been made to these files, the code can be compiled (and linked) by typing "mkmf" followed by "make" in the camel_7.2p directory, or linked only by typing "make link". Typing "make" in any directory under the camel_7.2p directory will compile the source code in that directory only. Other models are included by specifying the path and name of their archive file (.a file) in the config.h file (e.g. -L/s/spahr/pop.popesmf/work -locean). All other models need to be compiled, and an archive file needs to be created, before the AGCM's link step. If another model is modified, you can link it together with the AGCM by typing "make link" in the camel_7.2p directory.
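Assembled from the steps above, a complete build from scratch might therefore look like the following sketch (directory names are relative to wherever the distribution was unpacked):

   cd camel_7.2p
   source cshrc.setup
   mkmf                # prepares the Makefiles
   make                # compile and link; the executable camel2 goes to bin
   make link           # relink only, e.g. after another model's .a file changes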
3) Running the AGCM or an ESM coupled system
Before the code can be run, all of the model's input control/namelist files need to be set up for the run to be made and put in the run directory, which by default is camel_7.2p/bin. AGCM input (including the length of the simulation, names of I/O files, etc.) is given through the file "esminput". All of the data files need to be specified in the appropriate input control file, and the files must be accessible at run time. After linking, the resulting executable (called "camel2" by default) is in the "camel_7.2p/bin" directory. It can be run there by executing the script file called "camelrun".
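Putting these steps together, a run might look like the following sketch (illustrative only):

   cd camel_7.2p/bin
   # edit esminput and the other input control files for the run to be
   # made, and check that the data files they name are accessible
   ./camelrun          # runs the camel2 executable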
Any questions about this code distribution, how to run the model, or access to required data files can be addressed to Joseph Spahr.
Description of the UCLA AGCM output files
This document includes a description of the model's history tapes, and should be all that is needed by those who will not run the model, but simply use the results of existing runs.
The MAIN history contains global data, including the instantaneous values of all the prognostic variables used for restarting the model. This history is typically written every twelve or thirty-six hours. All data on these tapes is kept in the model's sigma coordinate and on the C-grid, but a post-processing package is available to interpolate data on the MAIN history to pressure coordinates. The use of this package (FIELDS) and the PRESSURE history it produces will also be described below. We will refer to all data written at one time as a "day-time-group" (DTG). For each type of history we will describe all possible DTGs, the sequence in which they appear on each physical tape, and the way tapes are concatenated to form a history. The following naming convention is used for the output files:
The Main History Day-Time Groups (DTGs)
Each DTG on the MAIN history consists of a header record followed by JM-2 records of data in latitude strips (JM is the number of meridional grid points, including fictitious points at the poles). Each latitude record contains instantaneous values of the boundary conditions, the cloudiness, and all prognostic variables needed to restart the model. It may also contain time mean diagnostic fields, but these are optional. In addition, the option exists to output only part of the diagnostics normally kept by the model, and to vary the number of diagnostics output at different intervals. However this is done, all records after the header for a given DTG have identical structure, and each contains data from a single latitude. This is shown in Figure 3.
The header record of each MAIN DTG is written with the FORTRAN statement:
WRITE (unit) TAU, SUN, CTP
The time TAU is in hours from 0Z on January 1st of year 0 of the run or sequence of runs. Leap years are ignored by the model, so every year has 365 days. The day of the year, counting from January 1st as DAY 1, is thus DAY = MOD(INT(TAU/24), 365) + 1.
The Latitude Records
The header is followed by JM-2 (JM is stored in CTP(23)) latitude records. These come in three varieties, which we will call "standard" (STD), "short QP" (SQP) and "long QP" (LQP). They differ only in the number of diagnostic fields saved.
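As a hypothetical illustration of how this layout can be traversed, the sketch below reads each header record and then skips the JM-2 latitude records that follow it. The program name, the unit number, the file name, and the dimensions of SUN and CTP are assumptions; the real dimensions must be taken from the model source.

C     Sketch only: the file name and the dimensions of SUN and CTP
C     below are assumptions; take the real dimensions from the model.
      PROGRAM SCANDTG
      REAL TAU, SUN(2), CTP(100)
      INTEGER JM, J
      OPEN (10, FILE='mainhistory', FORM='UNFORMATTED', STATUS='OLD')
   20 READ (10, END=90) TAU, SUN, CTP
      JM = NINT(CTP(23))
      PRINT *, 'DTG at TAU =', TAU, 'hours, JM =', JM
C     Skip the JM-2 latitude records that follow this header
      DO 30 J = 1, JM - 2
         READ (10)
   30 CONTINUE
      GO TO 20
   90 CLOSE (10)
      END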
Order of the Latitude Records
Latitude records are put on the tape in the order of the computations in the GCM: from south to north, and within each strip, from west to east. Because the data is on the model's C-grid, not all variables with the same array indices are co-located. On the C-grid, the zonal velocity component (u) is kept half a grid interval east and west of the "mass" field (potential temperatures and pressures), and the meridional velocity component (v) half a grid interval north and south. Water vapor and ozone mixing ratios are kept at the "mass" point. The indexing convention used in the model is that the u with the same zonal index as the "mass" variables is half a grid interval to the east, and the v with the same meridional index as the "mass" variables is half a grid interval to the south of the mass point. This is shown in Figure 1.
Ordering of the Three DTG Types on the MAIN tapes
To provide some flexibility in the amount of output produced, the model allows the three MAIN DTG types (STD, SQP, and LQP) to be output with different frequency. Since the three types differ only in the length of the trailing QP array, and this length appears in each DTG header, the tapes can be read without knowing in advance the ordering of the DTGs.
How Tapes are Concatenated
The number of day-time groups (DTGs) the model puts on each tape is controlled by TAUHST(3) (CTP(62)). This is the tape switching interval in hours, again measured relative to TAUHST(1); for example, TAUHST(3) = 240 would start a new tape every ten days of simulation. When the writing time falls on this interval, the model writes the appropriate DTG type on the current tape, closes that tape, opens the next one, and copies the STD part of the DTG onto the new tape. Thus the last restart on one tape is repeated as the first restart on the next tape.