With SMR, most aspects of data output are similar to the single grid version (see the User Guide
for more on outputs in Athena). For example, all the same output file formats can be used, with each
output type specified through a separate
<output> block in the input file.
However, there are some important differences in outputs with SMR, as noted below:
With SMR, output files at different levels of refinement are written to different directories. All output associated with
the root (level=0) Domain is written to the current directory (or the directory specified by the
-d command line
option). However, for levels>0, a new subdirectory named
levN, where N is the level number, is created
at run time, and all output associated with Domains at each level is written into the corresponding subdirectory.
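As a sketch of the resulting layout for a serial run with two refined levels, the mkdir calls below stand in for what Athena does at run time (the number of levels here is just an example):

```shell
# Root (level=0) output stays in the current directory (or the -d directory);
# lev1, lev2, ... hold output from Domains on levels 1, 2, ...
for lev in 1 2; do
  mkdir -p "lev${lev}"
done
ls -d lev1 lev2
```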
Specifying levels and domains for Outputs
The default behavior of Athena is to generate separate outputs for each level, and each Domain, in a SMR hierarchy.
However, sometimes it is useful to generate output only on a specific level, or only from a specific Domain on a
specific level. The parameters level and
domain in the
<output> blocks allow a specific level
or Domain to be set for each different output. See the Output Blocks section in
the User Guide for more details.
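As a sketch, an <output> block restricted to a single level and Domain might look like the following; the out_fmt, out, and dt values are placeholders, and only the level and domain parameters are the point here:

```
<output2>
out_fmt = vtk    # placeholder: any supported output format
out     = prim   # placeholder: quantity to output
dt      = 0.1    # placeholder: time between outputs
level   = 1     # write this output only from level 1 Domains
domain  = 0     # ...and only from the first Domain on that level
```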
For output from the root (level=0) Domain, the filenames are unchanged from the single grid version, and follow the normal naming convention.
For output from all levels>0, the filenames have the level (and possibly the Domain) number included. For the first (domain number zero) Domain at each level, only the level number appears in the filename.
If there is more than one Domain at any given level, the filenames also include the Domain number.
Recall from above that the files from levels>0 will be output to separate directories named by level number.
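As an illustration of where such files end up, the sketch below assumes a problem named Blast, VTK output, and -levN/-domN name suffixes; the exact problem name, extension, and suffix placement are assumptions, not taken from this page:

```shell
# Hypothetical SMR output filenames inside the level 1 directory.
mkdir -p lev1
touch lev1/Blast-lev1.0000.vtk        # first (domain number zero) Domain on level 1
touch lev1/Blast-lev1-dom1.0000.vtk   # a second Domain on level 1
ls lev1
```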
With MPI, the directories created by each process (named
idN, where N is the rank of the
process) take precedence. Thus, with SMR and MPI, the
idN directories will be created in the current
directory, and each of these will contain
levM directories for all the
M levels in the
calculation. Note that these directories are created by all processes, even if a given process does not have data
(and therefore will not create output files) on a particular level and/or Domain.
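The combined MPI + SMR layout can be sketched as follows; the rank and level counts are arbitrary examples, and the mkdir calls stand in for what each process does at run time:

```shell
# Every rank creates its own idN directory, and each idN contains levM
# subdirectories for every refined level, whether or not that rank owns
# data on that level.
nranks=4
nlevels=2
for rank in $(seq 0 $((nranks - 1))); do
  for lev in $(seq 1 "$nlevels"); do
    mkdir -p "id${rank}/lev${lev}"
  done
done
find id0 id1 id2 id3 -type d | sort
```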
With SMR, each Domain outputs a separate history file. The root Domain writes the file in the current directory,
while all Domains with level>0 write the file in the
levN directory. With MPI, the file is always written in the
levN directory associated with the parent (rank=0) process for that Domain.
The history file for the root Domain contains data which is volume averaged over the whole region of the computation. For all other Domains, the corresponding history files contain data averaged over the volume of that Domain. This is useful for keeping track of how much mass, momentum, and energy enters or leaves a refined region of the grid. However, these quantities in general are not conserved on refined regions.
Slicing with SMR
When output files are created using the slice operator (see Specifying Slices for Outputs), no output will be generated by Domains (or Grids on a given Domain) that do not intersect the volume of the slice.