11.11. Appendix A: AMPX Instructions

11.11.1. AJAX: Module to Merge, Collect, Assemble, Reorder, Join, or Copy Selected Nuclides from AMPX Interfaces

AJAX is a module to combine data on AMPX master or working libraries. Options are provided to allow merging from any number of files in a manner that will allow the user to determine the final nuclide ordering if desired.

11.11.1.1. Input Data

Block 1

-1$   Core assignment [1]
        1.    NWORD   number of words to allocate (50,000)

0$    Logical assignments [2]
      1.      MWT     logical number of new library (1)
      2.      NWAX    not used (0)

1$    Number of files [1]
      1.      NFILE   number of files from which data will be selected

Terminate Block 1 with a T. Stack Blocks 2 and 3 one after the other NFILE times.

Block 2

2$    File and option selection [2]
      1.      NF      logical number of the file considered
      2.      IOPT    nuclide treatment

    The following choices are available:
      -N: deletes N nuclides from NF to create the new file on MWT
      0: adds all nuclides to the new file on MWT
      N: adds N nuclides from NF to create the new file on MWT

      Sets with duplicate identifiers will not be entered on MWT.
      The first occurrence of an identifier selects that set for the new library.

5$    Sequence number [1]
      1.      SEQ     sequence number to use for working library

Terminate Block 2 with a T. Only use Block 3 if IOPT != 0.

Block 3

3$    Nuclides selected [IOPT]
      1.      ID      identifiers of nuclides to be added or deleted from NF
                      Only used if IOPT != 0.

4$    New identifiers [IOPT]
      1.      IDNEW   allows changing the identifier given in the 3$ array for the new library.
        Only used if IOPT > 0.

6$    Zone id to select [IOPT]
      1.      ZONE    zone id of the nuclides to select. A negative value selects all (-1)
        Only used if IOPT != 0.

7$    New zone identifiers [IOPT]
      1.      NZONE   allows changing the identifier given in the 6$ array for the new library.
        (0) Only used if IOPT > 0.

Terminate Block 3 with a T. Optionally repeat Block Title up to five times.

Block Title

title    title card for the AMPX working library (Type: Character*72)

11.11.1.2. Sample Input

0$$ 4 0 1$$ 3 T
2$$ 1 3 T
3$$ 92235 92238 94249 T
2$$ 2 0 T
2$$ 3 1 T
3$$ 100000 T

This input creates a library on logical unit 4 using data from logical units 1, 2, and 3, as follows: three nuclides—92235, 92238, and 94249—are taken from logical unit 1; all nuclides from logical unit 2 are copied unless they use one of the three identifiers already copied. Finally, a data set identified by 100000 is copied from logical unit 3. Please note that AJAX does not check to determine whether the commands have been fully completed. In other words, if logical unit 1 does not have a 92235, it cannot be copied, but the code will not produce any errors. The AJAX output, however, does list the nuclides copied and their data sources.
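The first-occurrence rule described above can be sketched in a few lines; the file contents below are hypothetical stand-ins for the tables of contents of the three logical units in the sample.

```python
# Sketch of AJAX's duplicate-identifier rule: the first occurrence of an
# identifier selects that set, and later sets with the same identifier
# are skipped.  The unit contents are illustrative, not real libraries.
def merge_first_occurrence(files):
    """files: list of lists of nuclide identifiers, in selection order."""
    seen = set()
    merged = []
    for contents in files:
        for ident in contents:
            if ident not in seen:      # first occurrence wins
                seen.add(ident)
                merged.append(ident)
    return merged

unit1 = [92235, 92238, 94249]          # three nuclides taken from unit 1
unit2 = [92235, 8016, 1001]            # 92235 is a duplicate and is skipped
unit3 = [100000]
print(merge_first_occurrence([unit1, unit2, unit3]))
# → [92235, 92238, 94249, 8016, 1001, 100000]
```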

11.11.1.3. Logical Unit Parameters

Variable   Unit number   Type     Description
NF                       binary   logical number of the file considered
MWT                      binary   logical number of new library
           18            binary   scratch
           19            binary   scratch

11.11.2. ALPO: Module for Producing ANISN Libraries from AMPX Working Libraries

ALPO (ANISN Library Production Option) is a module for producing ANISN libraries from AMPX working libraries. Several working libraries can be accessed in a given run. The ANISN library can be produced in either binary or BCD format.

11.11.2.1. Input Data

Block 1

0$    Logical Assignments [2]
      1.      MAN     logical unit for the ANISN library (use a 7 when a punched card output is desired) (20)
      2.      MAX     start of ANISN IDs (1)

1$    Primary Options [9]
      1.      NFILE   number of working libraries to be accessed (0)

      2.      IHT     position of the total cross section in the ANISN tables (3)

      3.      IHS     position of the within-group cross section in the ANISN tables (3)
        IHT + IGM - IFTG + 1, where IGM is the number of neutron energy groups; IFTG is the first thermal group

      4.      ITL     table length of the ANISN tables (0)
        IHS + IGM + IPM - 1, where IPM is the number of gamma-ray groups

      5.      MAXPL   maximum order of scattering to be written on the ANISN library (20)

      6.      IOPTID  option to print label with each block of ANISN cross sections (0)
              0:      no printing
              1:      print data

      7.      IOPT2D  option to print scattering matrices (0)
              0:      no printing
              1:      print data

      8.      ITRANS  transport correction option (0)
              -N - truncate the PN and higher-order matrices and correct all lower-order within-group terms by subtracting (2l+1)*sigmaN(g->g')/(2N+1)
               0 - no transport correction
               N - replace sigmat with sigmatr = sigmaa + (1 - mu)*sigmas, where mu is calculated by summing the P1 matrix and dividing by the P0 sum, or by 2/(3*A) when P1 is not given. The within-group term is also adjusted.

      9.      ICORE   number of words to allocate (50,000)

Terminate Block 1 with a T. Stack Blocks 2 and 3 one after the other NFILE times.

Block 2

2$    File selection options [2]

      1.      NF      logical number of the working library (0)
      2.      IOPT    nuclide selection (0)
              -N - accepts all nuclides from the working library except the N designated in the 3$ array below
               0 - accepts all nuclides from the working library
               N - accepts only the N nuclides designated in the 3$ array below

Terminate Block 2 with a T. Only use Block 3 if IOPT != 0.

Block 3

3$    nuclides to be selected or ignored [IOPT]
      1.      NUCS    nuclides to be selected or ignored (0) only used if IOPT != 0

Terminate Block 3 with a T.

11.11.2.2. Sample Input

0$$ 20 E 1$$ 1 4 10 30 3 0 0 0 500000 T
2$$ 4 5 T
3$$ 92235 92238 8016 1001 26000 T

This discussion assumes that data are being accessed from a 50-group AMPX working library on logical unit 4. Input indicates that an ANISN library on logical unit 20 should be created. The total cross section is in position 4 in the ANISN cross section tables, which implies (since default values were not overridden using the 1$ array) that nu times the fission cross section is in position 3, the absorption cross section is in position 2, and the fission cross section is in position 1. Furthermore, by specifying that the within-group scattering cross section is in position 10, only 6 upscattering terms are possible. If upscatters are found on the working library that scatter by more than 6 groups up, those terms are “summed” into the source group number less 6 scattering terms. This keeps the scattering matrices balanced and allows the scatter to be put in the highest place available in the matrix. It is also specified that the “table length” is 30, which means that the table has slots for \(30-10\) or 20 downscattering terms. As is the case with upscattering, if terms are encountered which scatter down by more than 20 groups, they are added to the lowest transfer terms available in the table. Five nuclides were selected from the working library: 235U (92235), 238U (92238), 16O (8016), 1H (1001), and Fe (26000).
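The table-position arithmetic used above can be checked with a short script. IHT, IHS, and ITL come from the sample input and IGM from the 50-group library; the IFTG value is assumed for illustration and is not taken from an actual library.

```python
# Check of the ANISN table-position arithmetic described in the 1$ array.
iht, ihs, itl = 4, 10, 30      # positions from the sample 1$ array
igm = 50                       # neutron groups on the working library

downscatter_slots = itl - ihs  # 30 - 10 = 20 downscattering terms
print(downscatter_slots)       # 20

# Full-upscatter case from the IHS definition: IHS = IHT + IGM - IFTG + 1
iftg = 45                      # assumed (hypothetical) first thermal group
print(iht + igm - iftg + 1)    # 10: IHS needed to hold all upscatter terms
```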

11.11.2.3. Logical Unit Parameters

Variable   Unit number   Type     Description
MAN                      binary   logical unit for the ANISN library
NF                       binary   logical number of the working library
           14            binary   scratch

11.11.3. BROADEN: Module to Doppler Broaden TAB1 Functions

BROADEN reads data on a double or single precision binary TAB1 library, and Doppler broadens the total, elastic-scattering, fission, first-chance fission, and capture cross sections. It writes the Doppler-broadened data onto a binary TAB1 library. Optionally, it will Doppler broaden processes other than those just mentioned. This code is based on two subroutines written by Dermott E. “Red” Cullen of LLNL, called HUNKY and FUNKY. These routines use numerical integrations to perform Doppler broadening.

11.11.3.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

INPUT=      (alternates: LOGPT, NENDF, NTAP1; default: 31)
            logical unit of the input TAB1 library

OUTPUT=     (alternates: LOGDP, NDOP, NOUT; default: 32)
            logical unit of the output TAB1 library

T=          space-separated list of temperature(s) in Kelvin at which data should be broadened

MAT=        space-separated list of material identifiers (if not present, all are Doppler broadened)

MT=         space-separated list of reaction identifiers to broaden (if not present, only default MT values are broadened)

addMT=      adds the list of indicated MT values to the list of MTs being broadened

icekeno=    (default: 1)
            option to also broaden MT=3, 20, 21, and 38
            0 - do not broaden MT=3, 20, 21, and 38
            1 - broaden MT=3, 20, 21, and 38

outmode=    (default: 0)
            manner in which the output should be saved
            0 - select the same mode as the input
            1 - save as single precision
            -1 - save as double precision

oldBroaden  option to not add extra points

eps=        (default: 0.001)
            precision level at which the adaptive mesh should be created

11.11.3.2. Notes

The numerical integration routines used in BROADEN were developed by Dermott E. Cullen at LLNL. A characteristic of these routines is that they assume the input cross section is linear in energy. The module POLIDENT constructs the cross section data on a suitably dense linear-linear mesh. In addition, the BROADEN module will add points as needed.
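As an illustration of the kind of kernel-broadening integral that HUNKY and FUNKY evaluate, the sketch below applies the free-gas broadening equation to a 1/v cross section, which that operation leaves unchanged. This is a standard self-check in dimensionless units, not the actual BROADEN implementation.

```python
import math

# Free-gas (SIGMA1-style) broadening integral in relative speed u:
#   sigma_T(v) = sqrt(a/pi)/v**2 * Int_0^inf u**2 * sigma(u)
#                * (exp(-a*(u-v)**2) - exp(-a*(u+v)**2)) du,  a = M/(2kT)
# A 1/v cross section is invariant under this operation, which gives a
# convenient self-check.  All quantities here are dimensionless.

def broaden(sigma, v, a=1.0, umax=12.0, n=20000):
    # trapezoidal quadrature on a uniform mesh in u
    h = umax / n
    total = 0.0
    for i in range(n + 1):
        u = i * h
        f = u * u * sigma(u) * (math.exp(-a * (u - v) ** 2)
                                - math.exp(-a * (u + v) ** 2))
        total += f if 0 < i < n else 0.5 * f
    return math.sqrt(a / math.pi) / v ** 2 * total * h

one_over_v = lambda u: 1.0 / u if u > 0 else 0.0
print(broaden(one_over_v, 2.0))   # ~0.5, i.e. the 1/v value at v = 2
```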

11.11.3.3. Sample Input

INPUT=1 OUTPUT=2
T= 300 900 2100 MAT= 1000 2000

This input indicates that data should be read from the TAB1 library on logical unit 1, and that data should be written to a new file on logical unit 2. The data will be Doppler broadened for temperatures of 300, 900, and 2100 Kelvin for the materials identified by 1000 and 2000.

11.11.3.4. Logical Unit Parameters

Variable

Unit number

Type

Description

INPUT

binary

logical unit of the input TAB1 library

OUTPUT

binary

logical unit of the output TAB1 library

14

binary

scratch

99

binary

scratch

11.11.4. CADILLAC: Combine All Data Identifiers Listed in Logical AMPX COVERX Format

CADILLAC (Combine All Data Identifiers Listed in Logical AMPX COVERX-format) is an AJAX-like module that can be used to combine multiple covariance data files in COVERX format into a single covariance data file. The user can change the material IDs as needed. CADILLAC reads and exports data in a binary format native to the computing platform.

11.11.4.1. Input Data

Block Output

Block starts on first encounter of a keyword in the block.

out=        logical unit for final COVERX output file

directory=  (default: no)
            creates a contents directory of the input COVERX files without assembling an output file
            no - creates an output file
            yes - creates the directory and exits

Repeat block Input as often as needed.

Block Input

Block starts on first encounter of a keyword in the block.

Block terminates on encountering the next occurrence of the keyword "in=" or "end".

in=         logical unit number for input COVERX file

file=       (default: 1)
            old or new COVERX formatted input file
            0 - old COVERX input file
            1 - new COVERX input file
            All COVERX files produced by the current AMPX code system are in the new COVERX format.

dec=        (default: no)
            DEC formatted BIG_ENDIAN COVERX file
            yes - old style DEC generated COVERX file
            no - not an old style DEC generated COVERX file
            All COVERX files produced by the current AMPX code system are in the new COVERX format.

delete=     space-separated list of materials to delete from the input library

add=        space-separated list of materials to add from the input library
            The following special options are available:
              • add=0 selects all nuclides from “in”
              • if add > 0 and a 0 is specified as the last value in the array, all the nuclides on the input file will be selected; however, the ids explicitly specified can be changed using the “new” array input.

new=        space-separated new material ids for the materials given in add
            The number of new materials must match the number of materials given in add exactly.

secondary=  space-separated list of secondary id values that will be changed
            Refers to the material id in cross-material covariance matrices.

matid=      space-separated list of new secondary id values for the values given in secondary

11.11.4.2. Notes

  • For all newly created COVERX files, file=1 and dec=no.

  • There is a one-to-one correspondence between values in the “add” array and the “new” array. There is also a one-to-one correspondence between values in “secondary” and “matid”.

  • After “in” is specified, the keywords governing the operations on “in” must be specified prior to entering another “in”.

  • The minimum input following an “in” specification is add=0, which specifies that all nuclides from “in” will be copied to “out”.

  • The same keyword can only be entered once on a line of input. For example, the following input is invalid:
    add=id1 id2 in=33 add=id3 id4
    Entering the keyword “add” twice on the same line is invalid, but it is acceptable to have two different keywords on the same line. In other words, there is no problem having the keywords “in” and “add” on the same line.
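The one-to-one “add”/“new” correspondence amounts to a positional renaming, which can be sketched with hypothetical material ids:

```python
# Sketch of the one-to-one "add"/"new" correspondence: each material id
# listed in add= is renamed to the id in the same position of new=.
# The ids below are hypothetical illustrations.
add = [9222, 9228]
new = [92233, 92235]
assert len(add) == len(new)          # counts must match exactly
rename = dict(zip(add, new))
print(rename[9222])                  # → 92233
```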

11.11.4.3. Sample Input

=cadillac out=23
in=20 file=1 delete=92233 end
=cadillac out=24
in=23 add=0 file=1
in=22 add=9222 new=92233 file=1 secondary=9228 9427
matid=92235 94239 end

The first input deletes mat=92233 from the COVERX file on logical unit 20 and generates a new COVERX file on logical unit 23. The second CADILLAC input takes all materials from logical unit 23 and adds them to the new COVERX file on logical unit 24. In addition, the covariance matrices from the COVERX file on logical unit 22 that correspond to material id 9222 are added to the new COVERX file on logical unit 24 after first changing the material id to 92233. If cross-material matrices with a second material id of 9228 or 9427 exist, the ids for the second material are changed to 92235 or 94239, respectively.

11.11.4.4. Logical Unit Parameters

Variable   Unit number   Type     Description
out                      binary   logical unit for final COVERX output file
in                       binary   logical unit number for input COVERX file
           14            binary   scratch
           15            binary   scratch

11.11.5. CAMELS: Module to Compare AMPX Master or Working Libraries

CAMELS (Compare AMPX Master Libraries) compares two cross section collections on two AMPX libraries (master or working). The two libraries must use the same neutron and/or gamma-ray group structures.

Comparisons are made for the 1D data (group-averaged cross sections), Bondarenko factors, and the 2D data (group-to-group transfer matrices). There is no requirement that the two libraries use the same ordering in the manner in which data are written. CAMELS keys on the identifiers of all classes of data and makes comparisons when it finds matches. The two libraries to be compared have to be of the same type.

The primary output from CAMELS is a file written in the AMPX master or working library format, depending on the input, containing values defined by (A-B)/B, where A represents the values on the first library and B represents the values on the second library. The second library is the reference library, so the output gives the relative difference of the first library with respect to the reference. Since the output is in the AMPX master or working format, it can be listed, plotted, etc., using any appropriate AMPX utility module, such as the PALEALE module.
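The (A-B)/B quantity written by CAMELS can be sketched as follows; the group values used here are hypothetical.

```python
# Sketch of the (A-B)/B difference CAMELS writes, with B the reference
# library.  The per-group values below are hypothetical.
def relative_diff(a, b):
    return [(x - y) / y for x, y in zip(a, b)]

lib_a = [1.02, 3.00, 0.50]
lib_b = [1.00, 3.00, 0.40]                   # reference library
diffs = relative_diff(lib_a, lib_b)
eps = 1e-3
print([round(d, 4) for d in diffs])          # [0.02, 0.0, 0.25]
print([abs(d) > eps for d in diffs])         # [True, False, True]
```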

11.11.5.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

log1=     (alternate: in1; default: 1)
          logical unit of the first AMPX master/working library

log2=     (alternate: in2; default: 2)
          logical unit of the second AMPX master/working library

log3=     (alternate: out; default: 3)
          logical unit of the output AMPX master/working library

eps=      (default: 1e-5)
          the precision to which to compare

worker    If present, working libraries are compared; otherwise, master libraries are compared.

print=    data printing options
          1dn - print 1D neutron differences
          1dg - print 1D gamma differences
          2dn - print 2D neutron transfer matrix differences
          2dy - print 2D yield matrix differences
          2dg - print 2D gamma matrix differences
          bond - print Bondarenko data differences

11.11.5.2. Sample Input

log1=91 log2=92 eps=1e-3 print=1dn print=2dn print=bond

The example requests comparing the two data collections located on logical units 91 and 92. The differences with absolute values greater than 0.001 (0.1%) will be written on logical unit 3 in the AMPX master library format. In addition, all differences for 1D and 2D neutron data and for Bondarenko factors will be written to the screen.

11.11.5.3. Logical Unit Parameters

Variable   Unit number   Type     Description
log1                     binary   logical unit of the first AMPX master/working library
log2                     binary   logical unit of the second AMPX master/working library
log3                     binary   logical unit of the output AMPX master/working library

11.11.6. CEEXTRACT: Extract Data out of a CE Library

CEEXTRACT allows for extracting 1D data, kinematic data, and probability tables in a format suitable for use in PLATINUM to make a new library. The 1D data contain collision cross sections, which PLATINUM will override. The collision cross section data can be deleted from the TAB1 formatted files prior to reprocessing with PLATINUM by using module ZEST.

11.11.6.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

zero=     the prefix of the library file

dir=      the directory containing the library files

1d=       (default: 0)
          unit of the file in which to save the 1D data
          (If less than or equal to zero, 1D data are not exported.)

2df=      (default: 0)
          unit of the file in which to save the 2D temperature-independent data
          (If less than or equal to zero, 2D data are not exported.)

2dt=      (default: 0)
          unit of the file in which to save the 2D temperature-dependent data
          (If less than or equal to zero, 2D data are not exported.)

prob=     (default: 0)
          unit of the file in which to save the probability table data
          (If less than or equal to zero, probability table data are not exported.)

unit=     (default: 60)
          unit on which to read the CE library files

11.11.7. CHARMIN: Module to Convert TAB1 Libraries from Single to Double Precision, to Text, or from Any of These Formats to Any of the Others

CHARMIN (Change and Re-Make INput File) is a code that converts between different TAB1-file formats.

11.11.7.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

input=    (alternates: inpu, inp, in, i; default: 31)
          logical unit of input file

output=   (alternates: outpu, outp, out, ou, o; default: 32)
          logical unit of output file

Select one of these:
single    Input file is single precision binary.
double    Input file is double precision binary.
fido      Input file is in FIDO format.
cen       Input file contains a CENTRM flux.

to        (alternate: t)
          Keywords before this flag apply to the input file; keywords after it apply to the output file.

Select one of these:
single    Output file is single precision binary.
double    Output file is double precision binary.
bcd       BCD TAB1 format
ploth     XY columns with headers
plot      XY columns without headers
fido      Output file is in FIDO format.

mat=      material number to use if reading CENTRM flux data

mt=       reaction number to use if reading CENTRM flux data

Repeat block zone descriptions as often as needed.

11.11.7.2. Block Zone Descriptions

Block starts on first encounter of a keyword in the block.

zone=     the zone to read

ztemp=    the temperature for the zone

sig0=     the background cross section value for the zone

za_l=     lowest value of ZA for which to use this flux

za_h=     highest value of ZA for which to use this flux

11.11.7.3. Sample Input

INPUT=1 OUTPUT=2 SINGLE TO DOUBLE

This indicates that the single precision binary file on logical unit 1 should be read and a double precision file on logical unit 2 should be created.
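The precision change itself can be sketched with Python's struct module. A real TAB1 file carries record structure that CHARMIN understands; this hypothetical converter only illustrates the single-to-double step on a bare stream of 32-bit floats.

```python
import struct, io

# Sketch of a single- to double-precision conversion of a bare stream of
# 32-bit floats.  Real TAB1 records carry additional structure that
# CHARMIN understands; only the precision change is illustrated here.
def single_to_double(stream_in, stream_out):
    while True:
        chunk = stream_in.read(4)
        if len(chunk) < 4:
            break
        (x,) = struct.unpack('<f', chunk)        # read one float32
        stream_out.write(struct.pack('<d', x))   # write it as float64

src = io.BytesIO(struct.pack('<3f', 1.0, 0.5, 2.5))
dst = io.BytesIO()
single_to_double(src, dst)
print(struct.unpack('<3d', dst.getvalue()))      # (1.0, 0.5, 2.5)
```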

11.11.7.4. Logical Unit Parameters

Variable   Unit number   Type            Description
INPUT                    binary          logical unit of input file
OUTPUT                   BCD or binary   logical unit of output file

11.11.8. CLAROL: A Module to Replace Cross Sections on an AMPX Master Interface

CLAROL (Correct Libraries and Replace Old Labels) is a module that replaces or adds data in an AMPX master library at the lowest level (e.g., it can replace individual elements in either 1D or transfer arrays). It also has provisions for modifying entries in the table of contents on a master library and for overriding the title cards associated with each data set on a master library. Because this module operates at such a detailed level, it is recommended that the user be familiar with the idiosyncrasies of the AMPX master interface format before attempting to use CLAROL.

11.11.8.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

in=        logical unit of the input master/working library

out=       logical unit of the output master/working library

worker     If present, a working library is corrected.

nototal    If present, totals are not summed again.

noratio    If present, mt=1007 is not renormalized.
           Otherwise, for non-moderator materials, the free gas scattering matrix (MT=1007) is normalized to MT=2. For moderators, the elastic cross section (MT=2) is substituted by the sum value of the thermal scattering matrix. Moderators have a nonzero value in id=46.

noup       If present, scattering matrices are not corrected for upscatter.

nobond     If present, Bondarenko data are not renormalized to resummed totals.
           Not currently used, as Bondarenko data are not updated.

nosmall    If present, small values in 1D data are retained.

noyield    If present, yield matrices are not converted to units of yield.

nocompact  If present, scattering matrices are not compacted. Otherwise, small values in the scattering matrix are set to zero, and if an l>0 matrix has a nonzero term where the l=0 matrix does not, that term is set to zero.

smallcut=  (default: 1.0d-12)
           cut-off value below which 1D and scattering matrix values are set to zero
           If nosmall is not set, all 1D cross sections smaller than smallcut are set to zero. If nocompact is not set, all scattering matrix elements smaller than smallcut will be set to zero.

Repeat block Data as often as needed.

11.11.8.2. Block Data

Block starts on first encounter of a keyword in the block.

Block terminates on encountering the next occurrence of keyword end or neutron or gamma or yield or resonance or bondarenko or 1dn or 2dn or 1dg or 2dg or 2dy or sumn or sumg or title.

Select one of these:
1dn        lists changes for 1D neutron data
1dg        lists changes for 1D gamma data
2dn        lists changes for 2D neutron data
2dg        lists changes for 2D gamma data
2dy        lists changes for yield matrices
bond       lists changes for Bondarenko factors
refbond    lists changes for reference Bondarenko cross sections
trans      lists changes for a transfer matrix
title      lists a new title for the indicated nuclide
sumn       additional user sum rules for neutron data
sumg       additional user sum rules for gamma data

ido=       the id of the old set
           If ido and idn are given and no records for idn exist, ido is copied and renamed to idn.

idn=       the id of the new set
           See ido. If idn is not given, the value of ido is used.

mt=        the reaction for which to change the data

nf=        the first group for which to apply changes
           If changing a scattering matrix, this is the source group.

nl=        the last group for which to apply changes
           If changing a scattering matrix, this is the source group.

nsink=     the sink group if changing scattering matrix data

lval=      the order of the matrix to update if changing scattering data

temp=      the temperature of the matrix to update

data=      values or text listing the desired changes
           The data section is enclosed between < and > signs. Multiple lines are allowed. For 1dn, 1dg, 2dn, 2dg, 2dy, and trans, FIDO-style array input is allowed.

11.11.8.3. Logical Unit Parameters

Variable   Unit number   Type     Description
in                       binary   logical unit of the input master/working library
out                      binary   logical unit of the output master/working library

11.11.9. COGNAC: Conversion Operations for Group-Dependent Nuclides in AMPX COVERX Format

COGNAC (Conversion Operations for Group-Dependent Nuclides in AMPX COVERX-format) is a module used to convert COVERX formatted libraries from BCD to binary and vice versa.

11.11.9.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

in=       logical unit number for input COVERX file

out=      logical unit number for output COVERX file

Select one of these:
bcd       Input file is in ASCII format.
binary    Input file is binary.

to        (alternate: t)
          Keywords before this flag apply to the input file; keywords after it apply to the output file.

Select one of these:
bcd       Output file is in ASCII format.
binary    Output file is binary.

new=      (default: no)
          process an input binary COVERX file in the new file format
          yes - characters were printed as characters
          no - characters were printed as floats
          All files produced with the current AMPX code system are of type new.

dec=      (default: no)
          processing an old COVERX binary file generated on a DEC Alpha in BIG_ENDIAN format
          yes - DEC Alpha in BIG_ENDIAN format
          no - not DEC Alpha in BIG_ENDIAN format
          dec=yes should only need to be specified with the option new=no. All files produced with the current AMPX code system are of type new.

strip     If present, strip undesired reaction values: only covariance matrices with mt values 1, 2, 4, 16, 18, 101, 102, 103, 104, 105, 106, 107, 452, and 1018 are retained.

11.11.9.2. Logical Unit Parameters

Variable   Unit number   Type            Description
in                       BCD or binary   logical unit number for input COVERX file
out                      BCD or binary   logical unit number for output COVERX file

11.11.10. COMBINE: Add, Subtract, Multiply, or Divide TAB1 Files

11.11.10.1. Input Data

Block Specifications

Block starts on first encounter of a keyword in the block.

in1=      (default: 31)
          input TAB1 file

in2=      (default: 32)
          input TAB1 file

out=      (default: 33)
          output TAB1 file

con=      (default: 1.0)
          constant by which to multiply the data in in2

option=   (default: 1)
          procedure to perform
          add - add the two TAB1 files
          sub - subtract in2 from in1
          mul - multiply the two TAB1 files
          div - divide in1 by in2
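The pointwise combination can be sketched as follows. The TAB1 interpolation treatment is not specified here, so this sketch assumes linear-linear tabulations evaluated on the union of the two energy grids; the tabulated values are hypothetical.

```python
import bisect

# Sketch of the pointwise combination COMBINE performs on two linearly
# interpolable tabulations over the union of their grids.  The "op" and
# "con" arguments mirror the option= and con= keywords above.
def interp(xs, ys, x):
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, x) - 1
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

def combine(t1, t2, op='add', con=1.0):
    xs = sorted(set(t1[0]) | set(t2[0]))     # union energy grid
    out = []
    for x in xs:
        a = interp(*t1, x)
        b = con * interp(*t2, x)
        if op == 'add':
            y = a + b
        elif op == 'sub':
            y = a - b
        elif op == 'mul':
            y = a * b
        else:                                 # 'div'
            y = a / b
        out.append(y)
    return xs, out

t1 = ([1.0, 3.0], [2.0, 4.0])                # hypothetical tabulations
t2 = ([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
print(combine(t1, t2, 'add'))   # ([1.0, 2.0, 3.0], [3.0, 4.0, 5.0])
```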

11.11.10.2. Logical Unit Parameters

Variable   Unit number   Type     Description
out                      binary   output TAB1 file
in1                      binary   input TAB1 file
in2                      binary   input TAB1 file

11.11.11. COMPRESS: Module to Compress Functions Written in TAB1 Format

COMPRESS is a module that reads a point TAB1 data file written by a program such as POLIDENT and reduces the number of points in the functions on the file by eliminating points that can be interpolated to within a user-specified tolerance. For example, POLIDENT typically generates functions that are accurate (in terms of generating a function according to ENDF/B specifications, not according to physical correctness) to within 0.1%. Many applications may only need functions that are accurate to a much coarser tolerance, such as 1%. COMPRESS allows this operation to be performed.
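The thinning idea can be sketched as a greedy one-pass scheme: extend a chord from the last retained point as far as linear interpolation reproduces every skipped point within a relative tolerance. This illustrates the concept and is not necessarily the exact algorithm COMPRESS uses.

```python
# Greedy point-thinning sketch: drop interior points that the chord
# between retained neighbors reproduces within a relative tolerance eps.
def chord_ok(xs, ys, i, k, eps):
    # every point strictly between i and k must lie on the chord i->k
    for m in range(i + 1, k):
        y = ys[i] + (ys[k] - ys[i]) * (xs[m] - xs[i]) / (xs[k] - xs[i])
        if abs(y - ys[m]) > eps * abs(ys[m]):
            return False
    return True

def thin(xs, ys, eps):
    n = len(xs)
    keep = [0]
    i = 0
    while i < n - 1:
        j = i + 1
        while j < n - 1 and chord_ok(xs, ys, i, j + 1, eps):
            j += 1                 # extend the chord as far as possible
        keep.append(j)
        i = j
    return [xs[k] for k in keep], [ys[k] for k in keep]

# A straight line thins to its endpoints; a kink is preserved.
print(thin([0, 1, 2, 3, 4], [0, 2, 4, 6, 8], 0.01))   # ([0, 4], [0, 8])
print(thin([0, 1, 2], [0, 0, 1], 0.01))               # ([0, 1, 2], [0, 0, 1])
```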

11.11.11.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

LOGIN=    (alternate: IN; default: 1)
          logical unit of input file

LOGOUT=   (alternate: OUT; default: 2)
          logical unit of output file

EPS=      tolerance to which points are tested to see if they can be eliminated
          Note that EPS is the relative difference, (A-B)/A, not the percentage difference. A value of 0.01 is equivalent to 1%.

11.11.11.2. Sample Input

IN=1 OUT=2 EPS=0.005 END

This input indicates that data should be read from the TAB1 file on logical unit 1 and that a TAB1 file should be written on logical unit 2 with functions accurate to within 0.5% of the original ones.

11.11.11.3. Logical Unit Parameters

Variable   Unit number   Type     Description
LOGIN                    binary   logical unit of input file
LOGOUT                   binary   logical unit of output file

11.11.12. COVCOMP: Compare Two COVERX Files or Add COVERX Files According to a Given Percentage

COVCOMP compares two COVERX files or adds/subtracts COVERX file data. If comparing, the program compares the files and writes the differences into a new COVERX formatted file. In addition, it writes summary information to the screen.

11.11.12.1. Input Data

Block Keyword-Based Input

Block starts on first encounter of a keyword in the block.

in=       space-separated list of input file logical units
          If negative, the file is assumed to be binary.

perc=     space-separated list of percentages associated with those units
          If not given, 1 is assumed for all input matrices. It is only used if adding covariance matrices.

fac=      space-separated list of factors to apply to covariance data
          If not given, 1 is assumed. This is only useful when subtracting covariance matrices. If negative percentages are given, cross section data are subtracted, but covariance data are added. It is only used if adding covariance matrices.

out=      (default: 3)
          logical unit of the output COVERX file

ntype=    (default: -1)
          type of the output COVERX file
          -1 - use the type used in the first COVERX file
          1 - covariance matrix, standard deviation
          2 - relative covariance matrix and deviation
          3 - correlation matrix, standard deviation

eps=      (default: 1e-5)
          precision to which to compare COVERX file data

nullVal=  (default: -9999)
          value to substitute if a matrix is not found

all       If comparing two COVERX files, print all differences.

convert   If comparing two COVERX files, always convert to ntype=1.

skip      Skip cross section data and matrices unless present on all COVERX files.

add       If present, add the covariance matrices.

11.11.12.2. Notes

File format for the output file is a COVERX file with the following features:

  • All cross section, uncertainty, and covariance data are written out as abs(a1-a2)/abs(a1), where a1 is the value in file 1, and a2 is the value in file 2. If a1 is null, the value abs(a1-a2) is used instead.

  • If a cross section or matrix exists in one file but not the other, -9999 is written for all the values.

  • If the group structures in the two files do not agree, the files cannot be compared. In this case, the COVERX file contains a header but no cross section or covariance matrix data.
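The per-entry comparison value defined by these rules can be sketched as follows; a1 and a2 stand for corresponding values from the two files, with None marking a cross section or matrix that is missing from one file.

```python
# Sketch of the comparison value COVCOMP writes for each entry, per the
# rules above.  None marks a missing cross section or matrix.
NULLVAL = -9999

def compare(a1, a2):
    if a1 is None or a2 is None:
        return NULLVAL              # present in one file only
    if a1 == 0.0:
        return abs(a1 - a2)         # a1 null: absolute difference
    return abs(a1 - a2) / abs(a1)

print(compare(2.0, 1.0))    # 0.5
print(compare(0.0, 0.25))   # 0.25
print(compare(None, 1.0))   # -9999
```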

11.11.12.3. Sample Input

in=1 -2 out=-3 eps=1e-5 all

The BCD formatted COVERX file on logical unit 1 and the binary COVERX file on logical unit 2 are compared, and the differences are written in binary format on logical unit 3. In addition, all differences larger than 1e-5 are printed to the screen.

11.11.12.4. Logical Unit Parameters

Variable   Unit number   Type            Description
log1                     BCD or binary   logical unit of the first input COVERX file
log2                     BCD or binary   logical unit of the second input COVERX file
out                      BCD or binary   logical unit of the output COVERX file

11.11.13. COVCONV: Program to Convert File 32 Resonance Data into File 33 Format

The program takes a COVERX file and converts the data pertaining to the File 32 information into the File 33 format. The group structure of the COVERX file is expected to contain all energy range end points from File 32. In addition, File 33 cannot contain any covariance information that overlaps with the covariance information in File 32.

11.11.13.1. Input Data

Block 1

0$    Logical unit assignment [8]
      1.      cov     logical unit for the coverx file containing File 32 covariance data (-1)
      If negative, file is assumed to be binary.

      2.      endf    logical unit for endf (2)
      any file 33 data in this file will be combined with the newly created File 33 data

      3.      inmode  ENDF library format (2)
          1:  binary
          2:  BCD

      4.      mat     material identifier

      5.      out     logical unit for output of new File 33 data (2)

      6.      outmo   ENDF library format for output file (2)
          1:  binary
          2:  BCD

      7.      unres   option for whether the unresolved parameter matrix gets translated (0)
          0:  yes
          1:  no

      8.      lty0    specifies how to treat lty=0 sections (0)
          0:  do not allow lty=0 sections to be extended; the energy of an overlapping lty=0 section is adjusted automatically
          1:  allow lty=0 sections to be extended

Terminate Block 1 with a T.

11.11.13.2. Logical Unit Parameters

Variable        Type            Description
cov             BCD or binary   logical unit for the COVERX file containing File 32 covariance data
endf            BCD or binary   logical unit for the ENDF file
out             BCD or binary   logical unit for output of the new File 33 data

endf

BCD or binary logical unit for endf

out

BCD or binary logical unit for output of new File 33 data

11.11.14. COVERR: Program to Convert COVERX Files to ERRORR Covariance Files

COVERR is a program to convert COVERX files to ERRORR covariance files. It can only convert COVERX files that contain one nuclide. If the COVERX-formatted file contains more than one nuclide, module CADILLAC should be used to select the desired material prior to running COVERR.

11.11.14.1. Input Data

Block 1

0$    Logical unit assignment [3]
      1.      log1    logical unit for the first COVERX file (1)
    If negative, the file is assumed to be binary.

      2.      out     logical unit for the ERRORR file (2)

      3.      mat     ENDF MAT number to use in the ERRORR file (0)

1*    ENDF header information [2]
      1.      za      ZA value for the nucleus (0)
    This is the value written to the ERRORR file.

      2.      awr     mass ratio for the nucleus (0)
    This is the value written to the ERRORR file.

Terminate Block 1 with a T.

11.11.14.2. Logical Unit Parameters

Variable        Type            Description
log1            BCD or binary   logical unit for the first COVERX file
out             BCD             logical unit for the ERRORR file

11.11.15. FABULOUS_URR: Module to Produce Bondarenko Factor Tables

FABULOUS is a module that produces full-range Bondarenko factor tables from ENDF/B evaluations. It does not read the ENDF/B evaluation directly, but instead uses Doppler-broadened point data produced by modules POLIDENT and BROADEN. If the evaluation contains unresolved resonance data and factors from statistical integrals are desired, the unresolved point data must be created at the desired temperatures and background values by module PRUDE. Alternatively, probability tables generated by PURM and PURM_UP can also be used in the URR. In order to produce infinite-dilution cross section data consistent with the 1-D neutron data, an AMPX master library containing group-averaged neutron cross section data is required. FABULOUS does not perform any Doppler broadening; instead, it assumes that all point data have been created at the desired temperatures.
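For orientation, the quantity being tabulated can be sketched with the usual narrow-resonance background approximation: the shielded group cross section is a flux-weighted average with flux proportional to C(E)/(sigma_t(E) + sig0), and the Bondarenko factor is its ratio to the infinite-dilution (very large sig0) value. A minimal sketch on a point-wise grid, with 1/E weighting and trapezoidal quadrature assumed for illustration (this is the concept, not the AMPX algorithm):

```python
import numpy as np

def shielded_group_xs(e, sig_x, sig_t, sig0):
    """Flux-weighted group average with flux ~ (1/E) / (sig_t(E) + sig0)."""
    flux = (1.0 / e) / (sig_t + sig0)
    return np.trapz(sig_x * flux, e) / np.trapz(flux, e)

def bondarenko_factor(e, sig_x, sig_t, sig0):
    """Ratio of the shielded group average to its infinite-dilution value."""
    sig_inf = shielded_group_xs(e, sig_x, sig_t, 1.0e10)  # ~infinite dilution
    return shielded_group_xs(e, sig_x, sig_t, sig0) / sig_inf
```

For a smooth (non-resonant) cross section the factor is 1; a resonance depresses the flux where the cross section peaks, so the factor falls below 1 at low sig0.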

11.11.15.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword         Default Definition
out=            1       unit for the output file containing the Bondarenko factors
in=             19      unit for the master containing the 1-D and 2-D neutron data
idlib=                  identifier of the nuclide on the input master library
idpoint=                identifier of the material for the point data, the probability
                        tables, and the kinematic data
matwt=          99      material identifier of the flux data
mtwt=           2099    reaction identifier of the flux data
flux=           46      unit for the file containing the flux data
resol=                  unit for the file containing the temperature-dependent point-wise
                        data. This file contains all point-wise data broadened to the
                        desired temperatures. If no file is given, Bondarenko factors
                        will not be calculated; this can be used to calculate f-factors
                        in the URR only.
urrpoint=               unit for the file containing the temperature- and background-
                        dependent point-wise data in the URR. This file contains all
                        point-wise data broadened to the desired temperatures and
                        calculated at the desired background cross section values.
urrprob=                unit for the file containing the temperature-dependent
                        probability tables in the URR
kin=                    unit for the file containing point-wise kinematic data. This
                        file is needed if removal f-factor values are desired.
temps=                  space-separated list of temperature(s) in Kelvin at which
                        f-factors should be generated
sig0=                   space-separated list of background cross section values at which
                        f-factors should be generated
mts=                    space-separated list of additional reactions for which f-factors
                        should be generated. By default, f-factors are generated for
                        mt=1, 2, 18, 102, 1007, 1008, and 2022, provided the required
                        data are available. If additional reactions are desired, they
                        can be added in this array.

11.11.16. FABULOUS: Module to Produce Bondarenko Factor Tables

FABULOUS is a module that produces full-range Bondarenko factor tables from ENDF/B evaluations. It does not read the ENDF/B evaluation directly, but instead uses Doppler-broadened point data produced by modules POLIDENT and BROADEN. If the evaluation contains unresolved resonance data, the unresolved point data must be created at the desired temperatures and background values by module PRUDE. In order to produce infinite dilute cross section data consistent with the 1-D neutron data, it is strongly advised to supply an AMPX master library containing group-averaged neutron cross section data. FABULOUS does not perform any Doppler broadening; instead it assumes that all point data have been created at the desired temperatures and will terminate otherwise.

11.11.16.1. Input Data

Block 1

TITLE: Title to describe the Bondarenko factor set (Type: Character*72)

-1$   Core allocation [1]
      1.      ICORE   number of words of core to allocate (500,000)

0$    Logical unit assignments [4]
      1.      MMT     logical unit of the AMPX master library (1)
      2.      MXS     logical unit of the Doppler-broadened point data file (31)
      3.      MWS     logical unit of the Weighting Spectrum (46)
      4.      MUN     logical unit of the unresolved data from PRUDE (0)

1$    Primary parameters [5]
      1.      IDSET   identifier of the Bondarenko factors in the master library
      2.      MAT     material identifier of the nuclide to be processed
      3.      NTEMP   number of temperatures in the Bondarenko factor tables
      4.      NSIG0   number of sig0-values in the Bondarenko factor tables
      5.      IGM     number of neutron energy groups

2$    Weight function selection parameters [2]
      1.      MATWT   the MAT number for the weighting function
      2.      MTWT    the MT number for the weighting function

3$    Additional options [11]
      1.      NEXTRA  number of extra cross sections for which Bondarenko factors are to be made (0)
    By default, Bondarenko factors for total, elastic scattering, fission, and capture will be produced.

      2.      LIST1D  option to print the 1-D cross section (0)
                      0:      no
                      1:      yes

      3.      LISTBF  option to print the Bondarenko factors (0)
                      0:      no
                      1:      yes
      4.      IDEBUG  option to print debug information (0)
                      0:      no
                      1:      yes

      5.      master  unit of master to use for reference cross section data if desired (0).
    If not given, the group-averaged data calculated from the point data are used.

      6.      masterID        Nuclide id on master to use as reference cross section (MAT)

      7.      moderator       option to indicate whether this is a moderator (0)
                      0:      no
                      1:      yes
                      For a moderator, the Bondarenko factors in the thermal range are set to 1.

      8.      iftg    position of first thermal group (0)

      9.      IOPT5   not used (0)
      10.     IOPT6   not used (0)
      11.     IOPT7   not used (0)

4*    Energy range over which Bondarenko factors are generated [2]
      1.      ELO     lower energy of range (1e-5)
      2.      EHO     upper energy of range (2e7)

5*    Floating point parameters [2]
      1.      AWR     mass ratio for nuclide
      2.      EPS     accuracy to which integration is to be converged (0.0001)

Terminate Block 1 with a T.

Block 5

7*    Energy group limits [IGM+1]
      1.      IGMS    energy group limits
    The boundaries are not needed if a standard AMPX group structure is used.
    Enter values high to low in energy in eV

8*    Temperatures [NTEMP]
      1.      TEMPS   temperatures at which Bondarenko factors are desired

9*    Sig0s [NSIG0]
      1.      SIG0S   Sig0 values at which Bondarenko factors are desired

10$   EXTRA_CROSS [NEXTRA]
      1.      extras  extra cross sections for which to generate Bondarenko data

Terminate Block 5 with a T.

11.11.16.2. Sample Input

0$$ 1 31 46 32 1$$ 1000 1395 3 8 238 2$$ 8000 99
5** 235.0 E T
8** 300 900 2100
9** 1.0E8 1.0E5 1.0E4 1.0E3 1.0E2 10.0 1.0 1.0E-6
T

This input indicates that a master library should be written on logical unit 1, point data should be read on logical unit 31, a weighting function should be read in TAB1 format on logical unit 46, and point unresolved resonance data should be read on logical unit 32. The tables included on the AMPX master library will be identified by 1000; the MAT number is 1395; Bondarenko factors are to be produced at three temperatures and eight values of the background cross section. The mass ratio is 235.0. The temperatures are 300, 900, and 2100 K. The background cross sections are 1.0E8, 1.0E5, …, 1.0E-6. (Note that the 238-group structure is not specified, since it is a standard AMPX group structure and will be accessed automatically.)

11.11.16.3. Logical Unit Parameters

Variable        Type    Description
MMT             binary  logical unit of the AMPX master library
MXS             binary  logical unit of the Doppler-broadened point data file
MWS             binary  logical unit of the weighting spectrum
MUN             binary  logical unit of the unresolved data from PRUDE
77              binary  scratch

11.11.17. FILTER: Select Specific Data from a Master or Working Library

FILTER allows for selection of a specific data type from a master or working library.

11.11.17.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword         Default Definition
work                    if present, a working library is processed
in=             1       logical unit of the input library
out=            2       logical unit of the output library

Repeat block data as often as needed.

11.11.17.2. Block Data

Block starts on first encounter of a keyword in the block.

Block terminates on encountering the next occurrence of keyword end, neutron, gamma, yield, resonance, bondarenko, 1dn, 2dn, 1dg, 2dg, or 2dy.

Keyword         Definition

Select one of these:
NEUTRON         includes all neutron data
GAMMA           includes all gamma-ray data
YIELD           includes gamma-ray yield data
RESONANCE       includes resolved resonance parameters
BONDARENKO      includes Bondarenko factor data
1DN             includes 1-D neutron data
2DN             includes neutron scattering matrices
1DG             includes 1-D photon data
2DG             includes photon scattering matrices
2DY             includes photon production matrices

mt=             list of reaction values to include or exclude. If all mt values are
                positive, the listed mt values are selected from the partial library
                and added to the new library. If all mt values are negative, the
                listed mt values are excluded from the new library.

11.11.17.3. Notes

Data selected in the data block will only be included in the new library if they are present on the old library. If processing a working library, either 2dn or 2dg will select the transfer matrix.

11.11.17.4. Logical Unit Parameters

Variable        Type    Description
IN              BCD     logical unit of the input library
OUT             binary  logical unit of the output library

11.11.18. FUNCCALC: Calculate Arbitrary Function

FUNCCALC calculates an arbitrary function using the data given on a TAB1-formatted data file. The function is calculated using up to three sets of (mat,mt) values from the TAB1-formatted data file. The (mat,mt) values are assumed to be unique; the module PICKEZE can be used to select the desired sets. These sets are denoted as functions 1, 2, and 3 below. Values are created over the range el to eh, using as many points as needed to create the function to precision eps.

11.11.18.1. Input Data

Block Data

Block starts on first encounter of a keyword in the block.

Keyword         Default Definition
in=             10      logical unit of the input library
out=            11      logical unit of the output library
el=             1e-5    lower limit of the function to create
eh=             2e7     upper limit of the function to create
eps=            1e-4    precision to which to create the function
id1=            99      material id of the function to create
id2=            1099    reaction id of the function to create

Repeat Block Command as often as needed.

11.11.18.2. Block Command

Block starts on first encounter of a keyword in the block.

Block terminates on encountering the next occurrence of keyword com or end or mat.

Repeat block functions up to 3 times.

11.11.18.3. Block Functions

Block starts on first encounter of a keyword in the block.

Block terminates on encountering the next occurrence of keyword mat or end or com.

Keyword         Definition
mat=            material value of the function to access on the TAB1-formatted file
mt=             reaction value of the function to access on the TAB1-formatted file

11.11.18.4. Sample Input

in=10 out=20 id1=99 id2=1099
mat=9237 mt=18 mat=9347 mt=1 mat=99 mt=2099 com=sr ireg=1 creg=1
com=sr ireg=2 creg=2 com=sr ireg=3 creg=3 com=mr ireg=1 creg=2 com=vr ireg=4 creg=1e10 com=ar ireg=3 creg=4 com=dr
ireg=1 creg=3 com=sv

Load the cross section for (9237,18) into function 1, (9347,1) into function 2, and the flux (99,2099) into function 3. For each point to be calculated, the registers are processed as follows:

  1. Load (9237,18), or function 1, into register 1.

  2. Load (9347,1), or function 2, into register 2.

  3. Load (99,2099), or function 3, into register 3.

  4. Multiply register 1 by register 2 and store the result in register 1.

  5. Store a user-supplied value of 1e10 in register 4.

  6. Add register 3 and register 4 and store the result in register 3.

  7. Divide register 1 by register 3 and store the result in register 1.

  8. Save the value in register 1 as the final function value.
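The steps above amount to a small register machine. A sketch of the evaluation at a single energy point, with command semantics inferred from the sample (`sr` loads a function into a register, `vr` stores a literal value, `mr`/`ar`/`dr` multiply/add/divide into `ireg`, `sv` saves the result); the helper names are illustrative, not FUNCCALC internals:

```python
def evaluate(commands, functions):
    """Evaluate one command list at a single point.

    `functions` maps the function index (1-3) to its value at this point.
    """
    reg = {}
    result = None
    for com, ireg, creg in commands:
        if com == "sr":            # set register ireg from function creg
            reg[ireg] = functions[int(creg)]
        elif com == "vr":          # set register ireg to the literal value creg
            reg[ireg] = float(creg)
        elif com == "mr":          # multiply register ireg by register creg
            reg[ireg] *= reg[int(creg)]
        elif com == "ar":          # add register creg to register ireg
            reg[ireg] += reg[int(creg)]
        elif com == "dr":          # divide register ireg by register creg
            reg[ireg] /= reg[int(creg)]
        elif com == "sv":          # save register 1 as the final function value
            result = reg[1]
    return result

# The sample input computes f1*f2 / (f3 + 1e10):
cmds = [("sr", 1, 1), ("sr", 2, 2), ("sr", 3, 3), ("mr", 1, 2),
        ("vr", 4, 1e10), ("ar", 3, 4), ("dr", 1, 3), ("sv", 1, 0)]
```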

11.11.19. IRFFACHOMO: Module to Produce Homogeneous F-Factors

11.11.19.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword         Default Definition
out=            77      unit for the output file containing the final Bondarenko factors
in=             78      unit for the master containing the 1-D and 2-D neutron data
fnuc=                   identifier of the resonance nuclide to use
bnuc=                   identifier of the background nuclide to use
dens=                   density value to use for the resonance nuclide in the infinite
                        medium calculation
ehres=                  upper limit of the RR of the resonance nuclide
bcut=           1e-4    lowest possible density for the background nuclide
low=            0       lowest group for which to generate homogeneous f-factors. If 0,
                        the group containing the upper end of the RR is selected.
high=           0       highest group for which to generate homogeneous f-factors. If 0,
                        the last group is selected.

11.11.20. IRFFACTOR: Module to Calculate Intermediate Resonance F-Factors Based on Hetero Cells

11.11.20.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword         Default Definition
in=                     unit number of the input cross section library
out=                    unit number of the output cross section library
fnuc=                   resonance absorber nuclide for which to calculate the f-factors
mopt=           1       option for treating moderator XSs (see parameter ircalc in the
                        I_Crawdius input)
                        0 - include energy-dependent PW XSs (standard CENTRM library data)
                        1 - treat as IR moderator => no absorption; elastic=lambda*sigp
                        2 - treat as IR absorber => has absorption; elastic=lambda*sigp;
                            total=absorption+elastic
absopt=         0       option for treating absorber lambda*sigp (see parameter ircalc in
                        the I_Crawdius input)
                        0 - do NOT include absorber lambda*sigp in the background XS
                        1 - include absorber lambda*sigp in the background XS
medit=          0       edit option (see parameter ircalc in the I_Crawdius input)
                        0 - no edits
                        1 - edit background XSs obtained for the cells
                        2 - also edit the final f-factors
nterp=          1       interpolation method for f-factors
                        0 - Segev interpolation
                        1 - spline interpolation taken from the GSL TPL
check=          no      if yes, only checking is performed: the input is checked, and
                        background XS values are edited if medit > 0. If no, full
                        execution is performed.
essm=           yes     yes - the background XS is computed using the essm method
                        no - the BONAMI background XS is used
iter=           yes     yes - the essm background XS is computed using inner iterations
                              in the MG flux calculation
                        no - the essm background XS is computed without inner iterations
                             (within-group XS = 0)
cut=            1e-9    lower cut-off value for the density of the background nuclide
bcut=           1e-5    lower cut-off value for f-factors (values will be set to the
                        previous sig0 value)
elow=           1e-3    lowest energy for which to calculate f-factors
ehigh=          2e+5    highest energy for which to calculate f-factors. If zero, the
                        highest energy of the input master is used.
ehres=          0.0     upper energy of the resolved resonance range. A value of 0
                        indicates that ehigh is to be used as the upper energy bound.
removal=        yes     option for computing within-group scatter f-factors
                        yes - add removal f-factors
                        no - do not add removal f-factors
irmt=           2000    mt value for the lambda factors
cellfil=                full path to the file containing the SCALE CSAS input defining
                        the heterogeneous cases. The input string must be enclosed in
                        quotes.

11.11.21. JAMAICAN: Module to Thin Point-Wise 2D Data

The point-wise 2D data created by module MONTEGO can contain a dense mesh of exit energies and angles. If converting to marginal and conditional probabilities, the mesh can often be thinned. This module thins the mesh and writes the data out in a format suitable for use in PLATINUM. The program typically uses equiprobable angle bins, except for elastic and discrete inelastic reactions. However, if the distribution can be described with fewer than nang non-equiprobable angle bins, non-equiprobable angle bins are used.
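The equiprobable-bin idea can be sketched by inverting the cumulative distribution of a tabulated angular PDF at equally spaced probabilities (linear CDF inversion; a sketch of the concept, not the JAMAICAN algorithm):

```python
import numpy as np

def equiprobable_bins(mu, pdf, nbins):
    """Boundaries of `nbins` equal-probability angle bins for a tabulated PDF."""
    # cumulative probability via the trapezoidal rule, normalized to 1
    cdf = np.concatenate(([0.0],
                          np.cumsum(0.5 * np.diff(mu) * (pdf[1:] + pdf[:-1]))))
    cdf /= cdf[-1]
    # invert the CDF at equally spaced probability levels
    return np.interp(np.linspace(0.0, 1.0, nbins + 1), cdf, mu)
```

For an isotropic distribution over [-1, 1], four bins give boundaries at -1, -0.5, 0, 0.5, and 1.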

11.11.21.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword         Default Definition
mon=            20      file containing kinematic data in double-differential form
out=            21      output file in PLATINUM format
nang=           32      maximum number of equiprobable angle bins
eps=            1e-3    precision to which to calculate the distributions
nothin                  if present, no thinning is performed
form=           native  file format of the input file (native - native format)

11.11.22. JERGENS: Module to Generate Weight Functions and to Combine ENDF/B TAB1 Records

JERGENS (Just an Excellent Routine to Generate Strings) is the module used to construct a weighting spectrum such as that needed by X10 or PRILOSEC. The output from the module is a file containing the new functions written in TAB1 format. JERGENS is used to generate certain predefined functions. If arbitrary functions are needed, use module FUNCCALC.

11.11.22.1. Input Data

Block 1

-1$   Options [20]
      1.      intlow  index for the lowest interpolation value (2)
    The list of allowed endf interpolation values (1-5) is given in inter1, inter2, ..., inter5.
    The value intlow gives the index of the lowest interpolation value to try.
    The user should typically set intlow and inthigh to 2 and inter1=1, inter2=2, ...
    This allows only linear-linear interpolation in the generated weighting function.

      2.      inthigh index for highest interpolation value (2) (see intlow for more detailed explanation)

      3.      IOPT3   not used (100)

      4.      mat     MAT number for the weighting functions (99)

      5.      mf      MF (file) number for the weighting function (3)

      6.      inter1  first interpolation scheme used (1)

      7.      inter2  second interpolation scheme used (2)

      8.      inter3  third interpolation scheme used (3)

      9.      inter4  fourth interpolation scheme used (4)

      10.     inter5  fifth interpolation scheme used (5)

      11.     ICORE   not used (100000)

      12.     OPTS    not used

0$    Logical assignments [3]

      1.      NDFB    All external functions required by JERGENS must reside here.
    The current version of JERGENS does not allow the use of external functions.

      2.      MWT     the logical unit of the output file

      3.      MSC     not used (18)

1$    Problem information [1]

      1.      NMWT    number of functions to be written on MWT

2*    Energy Range [2]

      1.      ELO     low-energy cutoff of functions to be generated (in eV) (0.00001)

      2.      EHI     high-energy cutoff of functions to be generated (in eV) (2.0e7)

Terminate Block 1 with a T. Repeat Block 2 NMWT times.

Block 2

3$    Identifier and option selectors [3]

      1.      IDWT    identifier for function to be created
    (equivalent to the MT number in ENDF/B)

      2.      NC      number of commands associated with the construction of this function
    (0) The current version of JERGENS only allows creation of predefined
    dose and weighting functions.

      3.      IW      Options for the desired dose function or weighting function
                              0:      1/E
                              1:      flat
                              2:      Maxwellian - 1/E - fission spectrum
                              3:      E
                              4:      Maxwellian - 1/E - fission spectrum - 1/E above 10 MeV
                              5:      neutron dose factors per ANSI/ANS 6.1.1-1977
                              6:      gamma-ray dose factors per ANSI/ANS 6.1.1-1977
                              7:      1/V (normalized to 1.0 at 2200 m/s)
                              8:      Henderson neutron dose factors in (Rads/hr)/((neutrons/cm**2)/sec)
                              9:      silicon gamma dose factors in (Rads/hr)/((photons/cm**2)/sec)
                              10:     Claiborne-Trubey gamma dose factors in (Rads/hr)/((photons/cm**2)/sec)
                              11:     1/E function with high and low cutoffs
                              12:     Watt fission spectrum
                              9031:   ANSI 6.1.1-1992 Neutron Dose Factors
                              9032:   air neutron kerma factors in (Gr/hr)/((neutrons/cm**2)/sec)
                              9033:   air neutron kerma factors in (Rad/hr)/((neutrons/cm**2)/sec)
                              9034:   dose equivalent factors in (Sv/hr)/((neutrons/cm**2)/sec)
                              9035:   dose equivalent factors in (Rem/hr)/((neutrons/cm**2)/sec)
                              9036:   neutron effective dose factors in (Sv/hr)/((neutrons/cm**2)/sec)
                              9037:   neutron effective dose factors in (Rem/hr)/((neutrons/cm**2)/sec)
                              9505:   ANSI 6.1.1-1991 gamma dose factors in (Rads/hr)/((photons/cm**2)/sec)
                              9502:   Henderson gamma dose factors in (Rads/hr)/((photons/cm**2)/sec)
                              9506:   gamma air kerma factors in (Greys/hr)/((photons/cm**2)/sec)
                              9507:   gamma air kerma factors in (Rad/hr)/((photons/cm**2)/sec)
                              9508:   dose equivalent factors in (Sv/hr)/((photons/cm**2)/sec)
                              9509:   dose equivalent factors in (Rem/hr)/((photons/cm**2)/sec)
                              9510:   gamma Effective Dose Factors in (Sv/hr)/((photons/cm**2)/sec)
                              9511:   gamma effective dose factors in (Rem/hr)/((photons/cm**2)/sec)
                              9029:   neutron dose factors per ANSI/ANS 6.1.1-1977
                              9504:   gamma-ray dose factors per ANSI/ANS 6.1.1-1977
                              9027:   Henderson neutron dose factors in (Rads/hr)/((neutrons/cm**2)/sec)
                              9503:   Claiborne-Trubey Gamma Dose Factors in (Rads/hr)/((photons/cm**2)/sec)

4*    Constants [6]

      1.      TMAX    temperature of the Maxwellian spectrum (K) (300)
    If a Watt fission spectrum is generated, this is the value of a in
    exp(-E/a)*sinh(sqrt(b*E)), in units of MeV.

      2.      AKT     multiplier on kT to determine the Maxwellian to 1/E join point (5)
    If a Watt fission spectrum is generated, this is the value of b in
    exp(-E/a)*sinh(sqrt(b*E)), in units of 1/MeV.

      3.      THETA   effective temperature in eV of the fission spectrum (1.27e6)

      4.      FCUT    point at which to join 1/E to the fission spectrum (67.4e3)

      5.      SIGD    not used

      6.      EPS     accuracy to which functions are to be generated (0.0001)

Terminate Block 2 with a T.
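For IW = 12, the function built from the a and b constants above is the Watt fission shape. A one-line sketch (un-normalized; the default a and b here are typical thermal-fission U-235 values, chosen for illustration only):

```python
import math

def watt(e_mev, a=0.988, b=2.249):
    """Un-normalized Watt fission spectrum exp(-E/a)*sinh(sqrt(b*E))."""
    return math.exp(-e_mev / a) * math.sinh(math.sqrt(b * e_mev))
```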

11.11.22.2. Notes

The combination of intlow and inthigh with inter1, inter2, …, inter5 allows a very flexible method of selecting the kinds of interpolation schemes allowed in the functions produced. The interpolation schemes are as follows:

Code    Type
1       histogram
2       linear x - linear y
3       linear x - log y
4       log x - linear y
5       log x - log y

intlow points to the word in inter1…inter5 that contains the first interpolation code to be tried, and inthigh points to the word containing the last code to be tried. JERGENS cycles through the codes inter(intlow) to inter(inthigh) to determine the best code to use. By default, intlow and inthigh are both 2, indicating that a linear-linear function is being constructed. intlow = 2 and inthigh = 5 would try types 2, 3, 4, and 5 in exactly that order. Setting intlow = 1 and inthigh = 2, with inter1 = 5 and inter2 = 2, would cycle between a log-log and a linear-linear scheme, and so on. Attempting an interpolation of 1 (histogram) would be fruitless, because the accuracy specifications could never be satisfied; it should therefore be avoided.
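The five schemes, and the idea of picking the scheme that best reproduces a function between two tabulated points, can be sketched as follows (the midpoint-error selection loop is illustrative; JERGENS's actual test is its accuracy criterion):

```python
import math

def interp(scheme, x, x1, y1, x2, y2):
    """Two-point interpolation for the five ENDF-style schemes listed above."""
    if scheme == 1:                 # histogram
        return y1
    if scheme == 2:                 # linear x - linear y
        return y1 + (y2 - y1) * (x - x1) / (x2 - x1)
    if scheme == 3:                 # linear x - log y
        return y1 * (y2 / y1) ** ((x - x1) / (x2 - x1))
    if scheme == 4:                 # log x - linear y
        return y1 + (y2 - y1) * math.log(x / x1) / math.log(x2 / x1)
    if scheme == 5:                 # log x - log y
        return y1 * (y2 / y1) ** (math.log(x / x1) / math.log(x2 / x1))

def best_scheme(schemes, f, x1, x2):
    """Pick the scheme with the smallest midpoint error on [x1, x2]."""
    xm = 0.5 * (x1 + x2)
    return min(schemes,
               key=lambda s: abs(interp(s, xm, x1, f(x1), x2, f(x2)) - f(xm)))
```

A straight line is reproduced exactly by scheme 2, and a power law by scheme 5.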

11.11.22.3. Logical Unit Parameters

Variable        Type    Description
NDFB            binary  all external functions required by JERGENS must reside here
MWT             binary  logical unit of the output file

11.11.23. KINKOS: Kinematic Conversion System

KINKOS (Kinematics Konversion System) is a module that converts kinematics files generated by module Y12 into different formats.

11.11.23.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword         Alternate       Default Definition
INPUT=          IN              31      logical unit of the input file
OUTPUT=         OUT             32      logical unit of the output file
nl=                             5       if converting from cosine-moment to cosine-moment
                                        format, the maximum number of cosine moments to
                                        use
eps=                            1e-3    precision to which the cosine grid is to be
                                        constructed

Select one of these (input format):
y12_d           input file is in double-precision Y12 format
y12_s           input file is in single-precision Y12 format
kfc             input file is in KFC format
mon             input file is in MONTEGO format
native          input file is in native format

to              keywords before this flag apply to the input file; keywords after it
                apply to the output file

Select one of these (output format):
y12_d           output file is in double-precision Y12 format
y12_s           output file is in single-precision Y12 format
kfc             output file is in KFC format
mon             output file is in MONTEGO format
native          output file is in native format
ascii           output file is in ASCII

format=         cos     if saving in native format, the format to which the data should
                        be converted
                        cos - save cosine moments
                        leg - save Legendre moments
                        tab - save in tabulated form
fbot=           1e-5    if lopping is switched on, the fraction to remove from the
                        bottom of the distribution
ftop=           1e-5    if lopping is switched on, the fraction to remove from the top
                        of the distribution
upscatter               correct mt=1007 for upscatter
lop                     lop a small fraction from the exit energy distribution
id=             0       the new id to use for the data if an id change is desired
eup=            3.0     if applying the upscatter correction, the highest energy that
                        can have upscatter
eterm=          5.0     if applying the upscatter correction, the highest energy for
                        thermal matrices
cross=          0       unit for the cross section data in TAB1 format
awi=            1.0     mass ratio of the incident particle (needed if converting from
                        the center-of-mass to the laboratory frame)

11.11.23.2. Logical Unit Parameters

Variable        Type            Description
input           binary/ASCII    logical unit of the input file
output          binary/ASCII    logical unit of the output file

11.11.24. KINZEST: Module to Manage Kinematic Libraries

KINZEST (Zippy Ensembler of Strings) is a module analogous to ZEST, except that it operates on kinematic libraries.

11.11.24.1. Input Data

Block 1

0$    Logical assignments [2]

      1.      LOG     logical unit of library to be written (31)

      2.      NLOG    number of commands (or libraries) required to create LOG (1)

Terminate Block 1 with a T. Stack Blocks 2 and 3 one after the other NLOG times.

Block 2

2$    Input library selection [2]

      1.      NLIN    logical number of input library

      2.      NC      Options for how the strings are to be treated (0)
                              -N: deletes N strings from NLIN to create LOG
                               0: accepts all strings from NLIN
                               N: adds N strings from NLIN to create LOG

Terminate Block 2 with a T.

Only use Block 3 if NC != 0.

Block 3

3$    MAT numbers from NC [NC]

      1.      MAT     material identifier(s) of nuclides to be added or deleted. (0) Only used if NC != 0.
    There must be exactly NC values.

4$    New MAT numbers from NC [NC]

      1.      MATnew  new material identifier(s) of nuclides to be added. (0)
    Only used if NC > 0.
    A zero leaves the identifier unchanged.

5$    MT numbers from NC [NC]

      1.      MT      reaction identifiers of nuclides to be added or deleted (0)
    Only used if NC != 0.
    There must be exactly NC values.

6$    New reaction numbers from NC [NC]

      1.      MTnew   new reaction identifier(s) of the nuclides to be added (0)
    Only used if NC > 0
    A zero leaves the identifier unchanged.

7*    awp values to preserve/delete [NC]

      1.      awp     values of awp to keep or to delete. (0)
    Only used if NC > 0

8*    zap values to preserve/delete [NC]

      1.      zap     values of zap to keep or to delete. (0)
    Only used if NC > 0

Terminate Block 3 with a T.

11.11.24.2. Logical Unit Parameters

Variable        Type    Description
LOG             binary  logical unit of the library to be written
NLIN            binary  logical unit of an input library

11.11.25. LAMBDA: Module to Produce Lambda Factors

11.11.25.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword         Default Definition
out=            77      unit for the output file containing the final Bondarenko factors
in=             78      unit for the master containing the 1-D and 2-D neutron data
fnuc=                   identifier of the fissionable nuclide to use
bnuc=                   identifier of the background nuclide to use
dens=                   density value to use for the resonance nuclide in the infinite
                        medium calculation
bdens=                  density value to use for the background nuclide in the reference
                        case
iddens=                 density value to use for the resonance nuclide in the reference
                        case
feps=           1e-3    lower limit used to determine that there is no fluctuation. If
                        the standard deviation of the values for different background
                        densities falls below this value, it is assumed that lambda
                        cannot be calculated for this group, and it is set to 1.
eps=            1e-3    used to determine whether enough background values have been
                        added for the calculation
bcut=           1e-4    lowest possible density for the background nuclide
temp=           293     temperature at which to perform the calculation
low=            1       lowest group for which to generate lambda factors
high=           0       highest group for which to generate lambda factors. If 0, the
                        last group is selected.
lcut=           1e-5    lowest possible number density for the background nuclide
hcut=           1e5     highest possible number density for the background nuclide
irmt=           2000    reaction value to use for the generated lambda factors

11.11.26. LAVA: AMPX Module to Make an AMPX Working Library from an ANISN Library

LAVA (Let ANISN Visit AMPX) is a module that converts an ANISN library (neutron, gamma, or coupled neutron-gamma) to an AMPX working library such as those used in XSDRNPM. ANISN cross sections can be entered on cards (fixed or free-form FIDO format) or on a binary library.

11.11.26.1. Input Data

Block 1

-1$   Core assignment [1]

      1.      NWORD   number of words to allocate (50,000)

0$    Logical definitions [4]

      1.      N1      ANISN library (20)

      2.      N2      AMPX working library (4)

      3.      N3      scratch (18)

      4.      N4      scratch (19)

1$    ANISN library parameter data [9]

      1.      NNUC    number of isotopes to be put on new library

      2.      IGM     number of neutron groups

      3.      IHT     position of sigma_{total}

      4.      IHS     position of sigma_{g->g'}

      5.      IHM     table length

      6.      IFTG    first thermal group

      7.      IPM     number of gamma groups

      8.      IFM     format of ANISN library
                      -1:     binary
                      0:      free-form BCD
                      1:      formatted BCD

      9.      IFLAG   flag that selects the method for calculating scattering cross sections from scattering matrices (1)
                      0:      sets elastic cross section to sum_{g'}(sigma_{g->g'})
                      1:      attempts to calculate the correct elastic cross section
          See notes for more details

Terminate Block 1 with a T

Block 2

2$    Identifiers of block of data for the nuclide on the ANISN library [NNUC]

      1.      NUCIDS  Identifiers of Block of Data for the Nuclide on the ANISN Library

3$    Order of scattering for the nuclide on the ANISN library [NNUC]

      1.      SCAT    Order of scattering for the nuclide on the ANISN library:
    If an order of scattering for a nuclide is negative, the P(l > 0) matrices
    for the nuclide will be multiplied by (2l+1) to account for differences
    in the way different computer programs require these to be normalized.

4$    AMPX identifiers for the nuclides selected from ANISN library [NNUC]

      1.      AMPXID  AMPX identifiers for the nuclides selected from ANISN library

5$    Process identifiers for the top positions in the ANISN cross section tables [IHT]

      1.      PROCID  Process identifiers for the top positions in the ANISN cross section tables
              The order is from position IHT to position 1 (i.e., backwards from
    the way it is in the cross section tables). ANISN always expects sigma_{total}
    in position IHT, with nu*sigma_{f} above that, and sigma_{a} above that.
    The contents of the other positions are arbitrary.

6*    Fission spectrum [IGM]

      1.      FISSION Fission spectrum
              If a nuclide has a nonzero fission cross section, and no fission spectrum
    (MT=1018) is specified in the ANISN library or the fission spectrum (CHI)
    flag for that nuclide has been set in the 9$ array, then the fission
    spectrum specified in the 6* array is used for that nuclide.

7*    Neutron energy group boundaries [IGM+1]

      1.      IGMS    Neutron energy group boundaries
    Read high to low in energy (eV)

8*    Gamma-ray energy group boundaries [IPM+1]

      1.      IPMS    Gamma-ray energy group boundaries
    Read high to low in energy (eV)

9$    Nuclide CHI flags [NNUC]

      1.      CHIS    Nuclide CHI flags
    If 0, use the fission spectrum from the ANISN library;
    if 1, use the fission spectrum from the 6* array

Terminate Block 2 with a T.

11.11.26.2. Notes

ANISN matrices are the sum of the individual scattering matrices (elastic, inelastic, n2n, n3n, etc.) for the processes possible for the particular nuclide. LAVA attempts to determine (somewhat arbitrarily) values for an elastic (MT = 2) and an n2n (MT = 16) cross section, recognizing that elastic scattering is generally the dominant scattering process and that n2n is the most common scattering process that yields more than a single exit neutron. To accomplish this, the absorption cross section in the ANISN data must be the true absorption value (not an energy-absorption cross section, as in some older gamma-ray sets, or some other alternative value). When IFLAG = 1, requiring the correct absorption, the elastic value is taken as

    sigma_el^g = sigma_t^g - sigma_a^g

while the n2n value is taken from

    sigma_n2n^g = sum_{g'} sigma_0(g -> g') - sigma_el^g

When IFLAG = 0, no attempt is made to calculate an n2n value, and the elastic
value is simply sum_{g'} sigma(g -> g').
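The IFLAG = 1 relations above can be sketched as follows; the function and array names are hypothetical, and each list holds one value per energy group:

```python
def split_anisn_scattering(sigma_t, sigma_a, outscatter_sums):
    """Sketch of the IFLAG = 1 estimates described above.

    sigma_t[g]         - total cross section for group g
    sigma_a[g]         - true absorption cross section for group g
    outscatter_sums[g] - sum over g' of sigma_0(g -> g')
    """
    # sigma_el^g = sigma_t^g - sigma_a^g
    sigma_el = [t - a for t, a in zip(sigma_t, sigma_a)]
    # sigma_n2n^g = sum_{g'} sigma_0(g -> g') - sigma_el^g
    sigma_n2n = [s - e for s, e in zip(outscatter_sums, sigma_el)]
    return sigma_el, sigma_n2n
```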

11.11.26.3. Sample Input

0$$ 20 4 18 19 1$$ 5 16 3 4 16 16 0 0 0 T
2$$ 1 5 9 13 17
3$$ 3 3 3 3 3
4$$ 92235 92238 26000 1001 8016
6** Put in 16 numbers for a Fission Spectrum
T

This input will create an AMPX working library on logical unit 4 from an ANISN binary library on logical unit 20. The ANISN library is for 16 neutron energy groups and has the total cross section in position 3, the within-group scattering in position 4 with a table length of 16. There are no gamma groups. The ANISN identifiers are 1, 5, 9, 13, and 17 for the P0 parts of a P3 fit to 235U, 238U, Fe, 1H, and 16O, respectively. The energy group boundaries are not read since the 16-group structure is one of the standard AMPX structures.

11.11.26.4. Logical Unit Parameters

Variable        Unit number     Type    Description

N1                              binary  ANISN library

N2                              binary  AMPX working library

N3                              binary  scratch

N4                              binary  scratch

N5                              binary

N6                              binary

                47              binary  scratch

11.11.27. LINEAR: Module to Linearize Functions Written in TAB1 Format

LINEAR is a module that will read a point TAB1 data file, which is written by a program such as POLIDENT, and linearize the data to within a user-specified tolerance.

11.11.27.1. Input Data

Block Data

Block starts on first encounter of a keyword in the block.

Keyword         Alternate       Default Definition

IN=                             1       logical unit of input file

OUT=                            2       logical unit of output file

FORCE=                          yes     flag to force linearization even if the data are already linear
                                        (yes - force linearization; no - do not force linearization)

MORE=                           no      flag to print arrays before and after linearization
                                        (yes - print; no - do not print)

EPS=                            0.001   tolerance to which points are tested to see whether they can be
                                        linearly interpolated

Note that EPS is the relative difference (A-B)/A, not the percentage difference. A value of 0.01 is equivalent to 1%.
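The tolerance test can be illustrated with a small sketch; testing only the interval midpoint is a simplifying assumption (LINEAR's actual point-selection logic is not specified here):

```python
def within_tolerance(x0, y0, x1, y1, f, eps=0.001):
    """Return True if f can be replaced by a straight line between
    (x0, y0) and (x1, y1) to within the relative difference (A-B)/A
    described above.  Only the interval midpoint is tested here."""
    xm = 0.5 * (x0 + x1)
    exact = f(xm)
    interp = y0 + (y1 - y0) * (xm - x0) / (x1 - x0)
    return abs((exact - interp) / exact) <= eps
```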

11.11.27.2. Sample Input

IN=23 OUT=24 EPS=0.005 END

This input indicates that data from the TAB1 file on logical unit 23 should be read, and a TAB1 file on logical unit 24 should be written with functions that can be linearly interpolated to within 0.5% of the original ones.

11.11.27.3. Logical Unit Parameters

Variable        Unit number     Type    Description

LOGIN                           binary  logical unit of input file

LOGOUT                          binary  logical unit of output file

11.11.28. LIPTON: Convert ASCII ENDF/B File That Contains File 3, 9, and 10 to Binary

LIPTON is a program that reads an ASCII ENDF/B file containing File 3, 9, and 10 data and creates binary TAB1 records for the File 3, 9, and 10 data. The resultant file can then be passed to PRILOSEC for processing. For Files 9 and 10, the MTs are redefined as MT*10000+LFS*100+LIS, and the functions are written as TAB1 triplets instead of using the multiple-subsection scheme. File 9 functions are constructed by multiplying the appropriate cross sections from File 3 by the File 9 values. At present, no attempt is made to form the product functions to a user-specified precision, though this may be added later.
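The MT redefinition described above is simple arithmetic and can be sketched as follows (the function names are illustrative):

```python
def encode_mt(mt, lfs, lis):
    """Pack an ENDF File 9/10 subsection into the redefined MT number
    (MT*10000 + LFS*100 + LIS, as stated above)."""
    return mt * 10000 + lfs * 100 + lis

def decode_mt(packed):
    """Recover (MT, LFS, LIS) from a redefined MT number."""
    mt, rem = divmod(packed, 10000)
    lfs, lis = divmod(rem, 100)
    return mt, lfs, lis
```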

11.11.28.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword         Alternate       Default Definition

file3=                          3       binary TAB1 file containing File 3 data

file9=                          9       binary TAB1 file containing File 9 and 10 data

out=                            10      result of combining Files 3, 9, and 10 in binary TAB1 format

11.11.28.2. Logical Unit Parameters

Variable        Unit number     Type    Description

ndfb                            ASCII

tab1                            binary

                14              binary  scratch

                15              binary  scratch

                16              binary  scratch

11.11.29. MAKPEN: Module to Generate Cross Section Data in a PENDF Format

MAKPEN (MAKe PENDF) is a module that reads CE cross section data in a TAB1 format and generates a point ENDF cross section file (PENDF). MAKPEN reads the File 1 and abbreviated File 2 information from the POLIDENT logical output LOGP1. Subsequently, MAKPEN reads the user-specified TAB1 formatted cross section data and constructs a PENDF library. The code input is free-form with keyword definitions.

11.11.29.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword         Alternate       Default Definition

title=                                  title line for the PENDF tape

in1=                            31      TAB1 formatted cross section file

in2=                            32      POLIDENT output file with File 1 and File 2 information
                                        (this is the LOGP1 file generated by POLIDENT)

iout=                           40      PENDF cross section output file

tol=                                    POLIDENT convergence tolerance for energy mesh construction

11.11.29.2. Sample Input

title=u-238 endf6, polident generated cross sections
title=processed by m. e. dunn
in1=35 in2=32 iout=36 nnuc=1 tol=0.001

The input above can be used to convert an AMPX TAB1 file for 238U to a PENDF format that can be processed by the NJOY code system. For the example case, the ENDF/B-6 evaluation for 238U was processed through POLIDENT with an energy-mesh generation tolerance of 0.001 (i.e., 0.1%), and the TAB1 pointwise cross section file from POLIDENT is stored on logical unit 35. In the input for MAKPEN above, a title description is provided to define the point cross section file. Subsequently, the input TAB1 file is specified to be on logical unit 35. In addition, POLIDENT provides an output file with ENDF/B File 1 information, and the input in2=32 specifies that this information is located on logical unit 32. The PENDF created by MAKPEN will be produced on logical unit 36. Moreover, the sample input indicates that a single isotope/nuclide will be processed. Note that the energy-mesh generation tolerance is also specified in the MAKPEN input (i.e., tol=0.001).

11.11.29.3. Logical Unit Parameters


11.11.30. MALOCS: Module to Collapse AMPX Master Cross Section Libraries

MALOCS (Miniature AMPX Library of Cross Sections) is a module to collapse AMPX master cross section libraries. The module can be used to collapse neutron, gamma-ray, or coupled neutron-gamma master libraries.

11.11.30.1. Input Data

Block 1

0$    Library Logical Unit Numbers [2]

      1.      NOLD    logical number of device containing fine-group AMPX master library (1)

      2.      NNEW    logical number of device containing broad-group AMPX master library (22)

1$    Case Description [6]

      1.      NNEUT   number of neutron fine groups

      2.      IGMF    number of neutron broad groups

      3.      NGAM    number of gamma-ray fine groups

      4.      IPMF    number of gamma-ray broad groups

      5.      IWN     neutron weighting option (0)
                      0:      Input neutron weighting spectrum in the 5* array.
                      1:      Use MT=1099 1-D neutron data from each fine-group master data set for the
        neutron weighting spectrum.
                      other:  Use the 1-D data identified with an MT number of IOPT2 as the weighting
        spectrum for all neutron data sets being collapsed. For values < 0, see the 3$ array.

      6.      IWG     gamma weighting option (0)
                      0:      Input gamma-ray weighting spectrum in the 7* array.
                      1:      Use MT=1099 1-D gamma-ray data from each fine-group master data set for the
        gamma-ray weighting spectrum.
                      other:  Use the 1-D data identified with an MT number of IOPT6 as the weighting
        spectrum for all gamma-ray data sets being collapsed. For values < 0, see the 3$ array.

3$    Option Triggers [10]

      1.      IOPT1   if > 0, identification number of the master data set from which the neutron
    weighting spectrum (IOPT2 data) will be obtained (0)

      2.      IOPT2   if > 0, process identifier (MT number) of the neutron weighting spectrum in the
    IOPT1 master data set (0)

      3.      IOPT3   trigger to print broad-group 1-D cross section (0)
                      1:      print data
                      0:      do not print data

      4.      IOPT4   trigger to print broad-group transfer matrices (0)
                      0:      do not print data
                      other:  print arrays through order N

      5.      IOPT5   auxiliary gamma-ray weighting spectrum trigger (0)
    if > 0, identification number of master data set from which the
    gamma-ray weighting spectrum (IOPT6 data) will be obtained.

      6.      IOPT6   process identifier (MT number) of gamma-ray weighting spectrum in
    IOPT5 master data set (0)

      7.      IOPT7   trigger to collapse out upscatter terms if nonzero (0)
                      0:      no upscatter truncation (recommended)
                      1:      XSDRNPM method of upscatter truncation
                      2:      ANISN method of upscatter truncation
                      3:      simple sum method of upscatter truncation
                      4:      Non-negative ANISN method of upscatter truncation

      8.      IOPT8   trigger to truncate downscatters to a maximum of IOPT8 terms below
    the within group if IOPT8 is nonzero (0)

      9.      IOPT9   not used (0)

      10.     IOPT10  weighting spectrum printing option (0)
                      0:      Do not print weighting spectrum
                      1:      print weighting spectrum

Terminate Block 1 with a T.

Block 2

4$    Neutron broad-group numbers by fine group [NNEUT]

      1.      NNEUTS  neutron broad-group numbers by fine group
    only used if NNEUT > 0
    a zero "suppresses" a fine group.

5*    Neutron weighting spectrum [NNEUT]

      1.      NNEUTW  neutron weighting spectrum
    only used if IWN = 0

6$    Gamma-ray broad-group numbers by fine group [NGAM]

      1.      NGAMS   gamma-ray broad-group numbers by fine group
    only used if NGAM > 0
    When collapsing the gamma groups in a coupled master library,
    the 6$ entries are the actual group numbers and do not need to
    include the number of neutron groups.

7*    Gamma-ray weighting spectrum [NGAM]

      1.      NGAMW   gamma-ray weighting spectrum
    only used if IWG = 0

Terminate Block 2 with a T.

11.11.30.2. Sample Input

0$$ 1 2 1$$ 16 4 0 0 0 0 T
4$$ 4R1 4R2 4R3 4R4
5**
1.234E-7 5.697E-7 8.724E-6 9.412E-5
9.269E-5 8.193E-4 3.627E-4 8.463E-4
3.492E-4 8.624E-3 7.999E-4 3.224E-5
1.947E-5 2.333E-5 8.387E-5 4.417E-6
T

This input produces a collapsed AMPX master library on logical unit 2 with 4 neutron energy groups, starting with a master library in 16 energy groups on logical unit 1. Groups 1–4 become broad group 1, groups 5–8 become broad group 2, groups 9–12 become broad group 3, and groups 13–16 become broad group 4.
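The collapse performed by MALOCS for 1-D data can be sketched as a flux-weighted average over the fine groups mapped to each broad group. This is a simplified illustration; transfer-matrix collapsing and the other weighting options are more involved:

```python
def collapse_1d(fine_xs, weights, broad_map):
    """Flux-weighted collapse of a 1-D cross section.

    fine_xs[g]   - fine-group cross section
    weights[g]   - weighting spectrum (e.g., the 5** array)
    broad_map[g] - broad-group number for fine group g (the 4$$ array);
                   a zero suppresses the fine group
    """
    num, den = {}, {}
    for xs, w, b in zip(fine_xs, weights, broad_map):
        if b == 0:
            continue  # suppressed fine group
        num[b] = num.get(b, 0.0) + xs * w
        den[b] = den.get(b, 0.0) + w
    # broad-group value = spectrum-weighted average of its fine groups
    return [num[b] / den[b] for b in sorted(num)]
```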

11.11.30.3. Logical Unit Parameters

Variable        Unit number     Type    Description

NOLD                            binary  logical number of device containing fine-group AMPX master
                                        library

N1                              binary

N2                              binary

                17              binary  scratch

                18              binary  scratch

                19              binary  scratch

11.11.31. MALT: Make ANISN Library Transformation

This program converts a binary ANISN library to the ASCII format and vice versa.

11.11.31.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword         Alternate       Default Definition

in=                             31      logical unit of input file

out=                            32      logical unit of output file

i1=                                     number of columns in the ANISN output file

i2=                                     number of rows in the ANISN output file

id=                                     ANISN ID to use if reading an AMPX master

istart=                                 start of data. If 0, only neutron 1-D data are assumed to be
                                        wanted; if gamma data with the same mt value exist, they are
                                        added after the neutron data. If larger than 0, only the gamma
                                        data are added, and istart is the number of neutron groups.

Select one of these (input file format):

fixed                                   fixed ANISN library format

free                                    free ANISN library format

binary                                  binary ANISN library format

ampx                                    reads 1-D data from AMPX master

to              to                      keywords before this flag apply to the input file; keywords
                                        after it apply to the output file

Select one of these (output file format):

fixed                                   fixed ANISN library format

free                                    free ANISN library format

binary                                  binary ANISN library format

11.11.31.2. Logical Unit Parameters

Variable        Unit number     Type            Description

log                             binary or BCD

11.11.32. MG_to_KIN: Convert Total MG Scattering Matrix to CE

This module converts a total scattering matrix given in a working-format AMPX library into a double differential format suitable for processing in JAMAICAN. It is easier to generate a total scattering matrix in MG format, as the elastic and discrete inelastic scattering matrices are given in Legendre format and are easily added together. This MG total scattering matrix can then be added to a CE library for use with point detectors.

11.11.32.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword         Alternate       Default Definition

in=                             31      logical unit of input MG library

out=                            32      logical unit of output kinematic file in native format

worker                                  if present, the MG library is a working library

scratch=                        14      scratch unit used during processing

mat=                                    ID of the nuclide to process

11.11.32.2. Logical Unit Parameters

Variable        Unit number     Type    Description

in                              binary  logical unit of input MG library

scratch                         binary  scratch

out                             binary  logical unit of output kinematic file in native format

11.11.33. PALEALE: Improved Module for Printing Data from AMPX Libraries

PALEALE is an extension of the ALE module that is provided to list data from AMPX master and working libraries. In addition to allowing a user more flexibility in selecting the information to be printed, some of the output formats have been improved, and a significant improvement is to allow a user to control the line lengths so that the printed information is easier to view on an 80-character terminal display. The input to PALEALE uses the ALE input as its basis (that is to say that a user can use exactly the same input and get improved outputs); however, additional parameters can be supplied in new arrays to give a user more control over how information is printed and to allow for reducing the volume of the output normally produced by an ALE run.

Note that not all of the reports printed by ALE have been upgraded to give the user this additional control. For example, the resonance information, whether resolved resonance parameters or Bondarenko factors, still gives the same output and is difficult to read on a terminal. The two areas that have been revised are the group-averaged data edits and, especially, the transfer matrix edits. In the latter case, the edits have progressed from something virtually unreadable and hard to understand to something simple and well labeled. Sample outputs are given in a later section.

PALEALE will be modified as time is available to allow more user control over the edits it produces.

11.11.33.1. Input Data

Block 1

0$    Logical unit assignments [2]

      1.      MMT     logical unit of AMPX master library (1)

      2.      MWT     logical unit of AMPX working library (0)

1$    Number of nuclides for which edits are wanted [1]

      1.      NEDIT   number of nuclides for which edits are wanted

2$    Data classes to be printed [10]

      1.      ICLASS1 group-averaged neutron data (0)
                      0:      Do not print
                      1:      Print
      2.      ICLASS2 group-averaged gamma data (0)
                      0:      Do not print
                      1:      Print

      3.      ICLASS3 resonance parameter data (resolved data or Bondarenko factors) (0)
                      0:      Do not print
                      1:      Print

      4.      ICLASS4 not used

      5.      ICLASS5 not used

      6.      ICLASS6 not used

      7.      ICLASS7 not used

      8.      ICLASS8 not used

      9.      ICLASS9 not used

      10.     ICLASS10        not used

3$    Carriage control characters to be used in printing classes of data [25]

      1.      JCLASS1 option whether to start the data for a nuclide on a new page (0)
                      0:      Do not start the data for a nuclide on a new page.
                      1:      Start the data for a nuclide on a new page.

      2.      JCLASS2 option whether to start the group-averaged data on a new page (0)
                      0:      Do not start the group-averaged neutron cross sections on a new page.
                      1:      Start the group-averaged neutron cross sections on a new page.

      3.      JCLASS3 option whether to start the group-averaged gamma cross sections on a new page (0)
                      0:      Do not start the group-averaged gamma cross sections on a new page.
                      1:      Start the group-averaged gamma cross sections on a new page.

      4.      JCLASS4 option whether to start transfer matrices on a new page (0)
                      0:      Do not start transfer matrices for each process selected on a new page.
                      1:      Start transfer matrices for each process selected on a new page.

      5.      JCLASS5 not used

4$    Process identifiers of transfer matrices to be printed [100]

      1.      MTID    process identifiers of transfer matrices to be printed
    Input up to 100 process identifiers (MT-numbers) for the transfer matrices
    that should be printed. Note that a working library has only one transfer
    matrix, the "total" transfer matrix, which is selected by entering a 1.

5$    Maximum order of Legendre coefficient of transfer matrix to be printed [100]

      1.      MAXOLC  maximum order of Legendre coefficient of transfer matrix to be printed
    Enter up to 100 values in one-to-one correspondence with the 4$ array

6$    Temperature for the scattering matrices to be printed [100]
      1.      MAXTEMP temperature for the scattering matrices to be printed
    Enter up to 100 temperatures in Kelvin for the scattering matrices to
    be printed. These must be entered in a one-to-one correspondence with the 4$ and 5$ arrays.

7$    Neutron process selection [200]

      1.      MAXNPROC        neutron process selection
    Enter up to 200 process identifiers (MT-numbers) for the processes
    to be included in the print of the neutron group-averaged data.
    For example, if 1, 2, 4, 16, 18, and 27 are entered (with the remaining
    entries filled with zeroes), the printout will include the total, elastic
    scattering, inelastic scattering, n2n, fission, and absorption cross
    sections, respectively.

8$    Gamma process selection [200]

      1.      MAXGPROC        gamma process selection
    Enter up to 200 process identifiers for the processes to be included
    in the print of the gamma group-averaged data.

9$    Page format parameters [3]
      1.      NLMAX   order of scattering to be printed (10)
      2.      NTMAX   number of temperatures at which scattering matrices will be printed (10)
      3.      LINE    line length that will be printed (80)
      Note that this parameter only applies to group-averaged cross section
    and scattering matrix edits at present.

Terminate Block 1 with a T.

Only use Block 2 if NEDIT > 0.

Block 2

11$   Identifiers of the Nuclides [NEDIT]

      1.      IDS     identifiers of the nuclides for which the user wants data printed
    Only used if NEDIT > 0.
    Enter NEDIT nuclide identifiers. Note that when NEDIT=0, the information
    selected in the first block will be printed for all nuclides in the library.
    To avoid having data for some nuclides included in the printout,
    the AJAX module should be used to select the nuclides desired.

12$   Zone of the Nuclides [NEDIT]

      1.      IDZS    zone of the nuclides for which the user wants data printed (-1); only used if NEDIT > 0
    Enter NEDIT nuclide zone identifiers. A -1 selects all zones

Terminate Block 2 with a T.

11.11.33.2. Notes

Note that one cannot produce edits from a master and a working file in the same execution.

11.11.33.3. Sample Input

-1$$ 500000 0$$ 10 E 1$$ 1 4$ 2 F0
5$ 3 F0 7$ 1 2 4 18 27 E T
11$$ 1000 T

This input says to allocate 500,000 words of core to PALEALE and to read data from the AMPX master library on logical unit 10 for 1 nuclide, whose identifier is 1000. The scattering matrix for elastic scattering up to order P3 will be listed, along with the group-averaged data for MT=1 (total), MT=2 (elastic scattering), MT=4 (inelastic scattering), MT=18 (fission), and MT=27 (absorption).

11.11.33.4. Logical Unit Parameters

Variable        Unit number     Type    Description

MMT                             binary  logical unit of AMPX master library

MWT                             binary  logical unit of AMPX working library

                14              binary  scratch

11.11.34. EXTRACT: Module to Read an NJOY PENDF and Create a TAB1 File

EXTRACT is a module that reads an ASCII PENDF created by NJOY and selects the tabulated cross sections for a nuclide. These cross sections are written to a binary TAB1 file. File 1 and File 3 reactions are copied.

11.11.34.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword         Alternate       Default Definition

IN=                             31      logical unit of PENDF library

OUT=                            32      logical unit of the output TAB1 library

mode=                           1       format of the output TAB1 file
                                        (1 - single precision TAB1 format;
                                        -1 - double precision TAB1 format)

T=                                      space-separated list of temperature(s) wanted on the output
                                        file. If not given, all temperatures are included.

MAT=                                    space-separated list of material identifiers wanted

MT=                                     space-separated list of reaction identifiers. If not present,
                                        all are included.

11.11.34.2. Sample Input

in=32 out=34 mat=9237 mode=-1

This example requests copying File 1 and File 3 data for 238U (MATNO=9237). The input PENDF is on logical unit 32, and the output TAB1 file, written in double precision, is on logical unit 34.

11.11.34.3. Logical Unit Parameters

Variable        Unit number     Type    Description

IN                              BCD     logical unit of PENDF library

OUT                             binary  logical unit of the output TAB1 library

                14              binary  scratch

11.11.35. PICKEZE: Module to Pick Functions from a TAB1 File

PICKEZE is a module that selects functions or classes of data on a library written as a TAB1 file and writes a new TAB1 file that contains the selected data. For example, if a file with the total cross section at 300 K is needed, PICKEZE can be used to extract the desired cross section data. There are other AMPX modules that will perform similar operations, such as ZEST, but none at the level of detail allowed by PICKEZE.

11.11.35.1. Input Data

Block Parameters

-1$   Core allocation [1]

      1.      ICORE   not used (500000)

0$    Logical unit assignments [2]

      1.      LOGIN   logical unit of the input TAB1 file (31)

      2.      LOGOUT  logical unit of the output TAB1 file (32)

1$    Selection option control parameters [7]

      1.      NMAT    number of materials to select (0)
    Zero selects all materials.

      2.      NMF     number of file types to select (0)
    Zero selects all file types.

      3.      NMT     number of processes to select (0)
    Zero selects all processes.

      4.      NT      number of temperatures to select (0)
    Zero selects all temperatures.

      5.      NSIG0   number of background cross sections to select (0)
    Zero selects all background cross sections.

      6.      temp_sel        exclusively selects temperature (0)
                      1:      select exclusively
                      0:      also select non-broadened
                              If exclusive selection is chosen, only processes with the
          desired temperature are selected. If non-broadened is selected,
          processes with only one temperature (0K) are also selected.
          Usually only a subset of processes is broadened; the remaining
          processes have only one temperature.

      7.      sig_sel exclusively selects background cross section values (0)
                      1:      select exclusively
                      0:      also select sig0=0
                              If exclusive selection is chosen, only processes with the desired
          background cross section are selected. If "also select sig0=0" is chosen,
          processes with only one value of sig0 on the file are also selected.

Terminate block parameters with a T.

Block Arrays

2$    Selected material identifiers [NMAT]

      1.      MATS    material identifiers for the desired processes

3$    Selected file identifiers [NMF]

      1.      MFS     file identifiers for the desired files

4$    Selected Process Identifiers [NMT]

      1.      MTS     reaction identifiers for the desired reactions.
    If negative, then the specified reactions will be removed

5*    Selected temperatures [NT]

      1.      NTS     values for the desired temperatures

6*    Selected background cross sections [NSIG0]

      1.      NSIG0S  values for the desired sig0 values

Terminate block arrays with a T.

11.11.35.2. Sample Input

0$$ 23 24 1$$ 0 0 1 1 0 E T
4$$ 1
5** 300
T

The input TAB1 file is on logical unit 23, and the output TAB1 file is on logical unit 24. One process and one temperature are selected: MT=1 (the total cross section) at 300 K.

11.11.35.3. Logical Unit Parameters

Variable        Unit number     Type    Description

LOGIN                           binary  logical unit of the input TAB1 file

LOGOUT                          binary  logical unit of the output TAB1 file

                14              binary  scratch

11.11.36. PLATINUM: PKENO Library Assembler Tool in a Useable Module

PLATINUM (Pkeno Library Assembler Tool in a Useable Module) is a module that assembles a pointwise data library for CEKENO. PLATINUM reads 1D pointwise data, kinematics data, and probability table data in order to assemble a cross section library for a single isotope/nuclide for CEKENO. It can also create a gamma cross section file for an element from 1D pointwise data and kinematics data.

11.11.36.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword         Alternate       Default Definition

gamma                                   processes gamma data

identifier=                             id for data set to be created for SCALE:
                                        MAT+10000*MOD+1000000*Version

vers=                           7       evaluation version number - used to construct the identifier if
                                        identifier is not specified

source=                         0       id for data evaluation source (max 2-digit integer):
                                        -1 - unknown; 0 - ENDFB; 1 - JEF; 2 - JENDL; 3 - BROND;
                                        4 - CENDL; other - user-defined source

title=                                  100-character title for data set (title should be the only
                                        entry on a line)

out=                            60      starting logical unit number for CE library (code will
                                        increment the unit number for each temperature)

maxtemp=                        100     maximum number of temperatures possible on file

outtemp=                        0       if not entered, all temperatures will be put out

sigp=                           0.0     potential scattering cross section

centrm=                                 corresponding thermal scattering kernel filename

debug                                   prints extra debug output

eps=                            0.0001  tolerance used for combining functions

icversion=                      none    version of the input creator used to create the input files

filever=                        1.1     version of xsecs created by the latest input files

fileid=                                 output filename prefix (output filename will be fileid_temp,
                                        where temp is the temperature)

filedate=                               date on which the input file was created by the input creator

ampxver=                        6.0     version of AMPX being used

ampxdate=                               date on which the AMPX module was created

scalever=                       6.0     version of SCALE being used

scaledate=                              date on which the SCALE package was created

union=                          no      yes - turn on unionization;
                                        no - do not turn on unionization

fixnegatives=                   no      yes - fix large negatives;
                                        no - do not fix large negatives

outdetail=                      normal  output detail level: normal - print all useful information;
                                        more - print more than just useful information

gyield=                         no      yes - put gamma yield data onto the final library;
                                        no - do not put gamma yield data onto the final library.
                                        If filever is 2.0 or larger, gyield is set to yes.

debug=                                  prints extra debug information

gamma=                                  creates gamma library file

Repeat block cross section 1 time.

Block Cross Section

Block starts on first encounter of a keyword in the block. Block end is reached if all required parameters are given.

Keyword   Default   Definition
n1d=                1D CE cross sections (neutron or gamma)
id=       0         material identifier for 1D data

Repeat block info file 1 time.

Block Info File

Block starts on first encounter of a keyword in the block. Block end is reached if all required parameters are given.

Keyword   Default   Definition
info=               logical unit for information file
id=       0         material identifier for the information file

Repeat block fast kinematics data 1 time.

Block Fast Kinematics Data

Block starts on first encounter of a keyword in the block. Block end is reached if all required parameters are given.

Keyword     Default   Definition
n2d_fast=             logical unit for fast kinematics data (neutron or gamma)
id=         0         material identifier for 2D fast kinematics data

Repeat block thermal kinematics data as often as needed.

Block Thermal Kinematics Data

Block starts on first encounter of a keyword in the block. Block end is reached if all required parameters are given.

Keyword    Default   Definition
n2d_free   0         logical unit for free-gas kinematics data (neutron); select either n2d_free or n2d_sab
n2d_sab    0         logical unit for thermal scattering law kinematics data (neutron); if thermal scattering law data are specified, n2d_free must be 0
id=        0         material identifier for 2D thermal kinematics data

Repeat block probability table data as often as needed.

Block Probability Table Data

Block starts on first encounter of a keyword in the block. Block end is reached if all required parameters are given.

Keyword   Default   Definition
ptable=             logical unit for probability table data (neutron)
id=       0         material identifier for probability table data
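The identifier= packing rule given earlier (MAT + 10000*MOD + 1000000*Version) can be illustrated with a short sketch; the function names and example values below are hypothetical, not part of PLATINUM:

```python
# Sketch of the identifier= packing rule described above:
#   identifier = MAT + 10000*MOD + 1000000*Version
# Function names and example values are illustrative only.

def pack_identifier(mat, mod, version):
    """Pack ENDF MAT, MOD, and evaluation version into one SCALE id."""
    return mat + 10000 * mod + 1000000 * version

def unpack_identifier(identifier):
    """Recover (mat, mod, version) from a packed identifier."""
    version, rest = divmod(identifier, 1000000)
    mod, mat = divmod(rest, 10000)
    return mat, mod, version
```

For example, MAT=9228 with MOD=0 and vers=7 packs to 7009228.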

11.11.36.2. Sample Input

identifier=xxxxxxxx source=xx vers= output=log
title=ttttttttttttttttttttttttttttttttttttttttttttttttttt n1d=log
id=xxxxxxxx
info=log id=xxxxxxxx n2d_fast id=xxxxxxxx n2d_free id=xxxxxxxx

n2d_sab id=xxxxxxxx sigp=sigp centrm=fname
ptable id=xxxxxxxx icversion=icv
debug end

11.11.36.3. Logical Unit Parameters

Variable   Unit number   Type     Description
out                      binary   starting logical unit number for CE library (the code will increment the unit number for each temperature)
n1d                      binary   1D CE cross sections (neutron or gamma)
info                     binary   logical unit for information file
n2d_fast                 binary   logical unit for fast kinematics data (neutron or gamma)
n2d_free                 binary   logical unit for free-gas kinematics data (neutron)
n2d_sab                  binary   logical unit for thermal scattering law kinematics data (neutron)
ptable                   binary   logical unit for probability table data (neutron)
           14            binary   scratch
           15            binary   scratch
           16            binary   scratch
           17            binary   scratch

11.11.37. POLIDENT: Module to Produce Point Data from Resonance Data

POLIDENT (Point Libraries of Data from ENDF/B Tapes) is a module that accesses the resonance parameters from File 2 of an ENDF/B library and constructs the CE cross sections in the resonance region. The cross sections in the resonance range are subsequently combined with the File 3 background data to construct the complete cross section representation as a function of energy. POLIDENT has the following notable features:

  • processes all resonance reactions that are identified in File 2 of the ENDF/B library

  • processes single- and multi-level Breit–Wigner, Reich–Moore and Adler–Adler resonance formalisms

  • provides a robust energy mesh generation scheme that determines the minimum, maximum and points of inflection in the cross section function

  • processes all CE cross section reactions identified in File 3 of the ENDF/B library and outputs all reactions in an ENDF/B TAB1 format that can be accessed by other AMPX modules

  • processes multi-isotope nuclides with different resonance ranges

  • treats discontinuities in cross section data by taking the limit of the function from both sides of the discontinuity

  • provides ENDF/B File 1 and abbreviated File 2 data that can be used to construct a PENDF (Point ENDF) file by the AMPX module MAKPEN

11.11.37.1. Input Data

Block 1

-1$   File9Processing [1]

      1.      File9   if not 0, unit in which to save file 9 and file 10 data (0)

0$    Output library [3]

      1.      LOGP    logical unit for point-wise cross section data (31)

      2.      LOGP1   logical unit for File 1 and abbreviated File 2 information (32)

      3.      LOGRES  restart unit (0)

1$    Number of cases [1]

      1.      NNUC    number of cases (1)

Terminate Block 1 with a T.

Repeat Block 2 NNUC times.

Only use Block 2 if NNUC > 0.

Block 2

2$    ENDF/B Data Source [4]

      1.      MAT     ENDF material identifier for nuclide to be processed

      2.      NDFB    logical unit number for ENDF library (11)

      3.      MODE    ENDF library format (2)
                      1:      binary
                      2:      BCD

      4.      NVERS   not used (0)

      5.      mesheps convergence tolerance for energy mesh generation (0.001)

4*    Floating Point Parameters [14]

      1.      EPS     epsilon to combine data from Files 3 and 2 (0.001)

      2.      R       the ratio factor used in a cross section energy mesh (0.99)
    value that is used only for nuclides using the Adler-Adler parameterization
    in the resolved resonance range

      3.      XNP     the number of points taken equally spaced in lethargy between resonance bodies (50.0)
    value used only for nuclides using the Adler--Adler parameterization in the resolved resonance range

      4.      XGT     the multiplier on the total width above and below a resonance over
    which the ratio mesh scheme is used (50.0)
    value used only for nuclides using the Adler-Adler
    parameterization in the resolved resonance range

      6.      OPT2    not used (0)

      7.      OPT3    not used (0)

      8.      OPT4    not used (0)

      9.      OPT5    not used (0)

      10.     OPT6    not used (0)

      11.     OPT7    not used (0)

      12.     OPT8    not used (0)

      13.     OPT9    not used (0)

      14.     OPT10   not used (0)

5$    Options [8]

      1.      intstart        starting value for interpolations to be tried (1)
    the values for interpolation to be tried start at inter1 and go through
    inter6, listing the default endf interpolation values. Usually lin-lin is
    the only one desired for point-wise cross section data.

      2.      intstop ending value for interpolations to be tried (1)
    the values for interpolation to be tried start at inter1 and go through
    inter6, listing the default endf interpolation values. Usually lin-lin is the
    only one desired for point-wise cross section data.

      3.      IOPT3   maximum number of interpolation regions allowed in the output (1)

      4.      inter1  Interpolation type to be tried (2)
    Linear-Linear is 2. Other allowed values are 1-6.

      5.      inter2  Interpolation type to be tried (0)
    Linear-Linear is 2. Other allowed values are 1-6.

      6.      inter3  Interpolation type to be tried (0)
    Linear-Linear is 2. Other allowed values are 1-6.

      7.      inter4  interpolation type to be tried (0)
    Linear-Linear is 2. Other allowed values are 1-6

      8.      inter5  interpolation type to be tried (0)
    Linear-Linear is 2. Other allowed values are 1-6

6$    Function parameters [4]

      1.      AddMt51 If not zero, add MT=51 from URR range if applicable. (0)

      2.      N2MAX   not used (0)

      3.      MLBW    not used (0)

      4.      IPOINTS maximum number of points per 10eV interval (5000)

Terminate Block 2 with a T.

11.11.37.2. Notes

Parameters R, XNP, and XGT in Block 2, Array 4 are only used for generating an energy mesh for nuclides with the Adler–Adler formalism. intstart, intstop, and inter1 through inter5 specify the interpolation types, and the order in which they will be tried, when combining two or more ENDF/B functions. The types are as follows:

  1. Histogram

  2. Linear x, linear y

  3. Linear x, log y

  4. Log x, linear y

  5. Log x, log y

intstart and intstop specify which entries in the five-position table are to be used (e.g., the default values of 1 indicate that only the first entry in the table should be used, i.e., linear-linear interpolation by default).
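The five interpolation types can be sketched as follows, following the numbering in the list above; this is an illustrative helper, not POLIDENT code:

```python
import math

# Sketch of the five interpolation types listed above (1 = histogram,
# 2 = lin-lin, 3 = lin x/log y, 4 = log x/lin y, 5 = log-log).
# Assumes positive values wherever a logarithm is taken.

def interpolate(law, x1, y1, x2, y2, x):
    if law == 1:   # histogram: y constant at the lower point
        return y1
    if law == 2:   # linear x, linear y
        return y1 + (y2 - y1) * (x - x1) / (x2 - x1)
    if law == 3:   # linear x, log y
        return y1 * (y2 / y1) ** ((x - x1) / (x2 - x1))
    if law == 4:   # log x, linear y
        return y1 + (y2 - y1) * math.log(x / x1) / math.log(x2 / x1)
    if law == 5:   # log x, log y
        return y1 * (y2 / y1) ** (math.log(x / x1) / math.log(x2 / x1))
    raise ValueError("unknown interpolation type %d" % law)
```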

11.11.37.3. Sample Input

0$$ 31 32 1$$ 5 T
2$$ 9228 11 2 T
2$$ 9237 11 2 T
2$$ 2631 11 2 T
2$$ 125 11 2 T
2$$ 825 11 2 T

This input tells POLIDENT to access an ENDF/B file on logical unit 11 that is in BCD format and that contains the data for nuclides 235U, 238U, Fe, 1H, and 16O, identified by 9228, 9237, 2631, 125, and 825, respectively. The data will be written to logical unit 31.

11.11.37.4. Logical Unit Parameters

Variable   Unit number   Type     Description
NDFB                     BCD      logical unit number for ENDF library
LOGP                     binary   logical unit for point-wise cross section data
LOGP1                    binary   logical unit for File 1 and abbreviated File 2 information
LOGRES                   binary   restart unit
           14            binary   scratch
           15            binary   scratch
           18            binary   scratch

11.11.38. PRELL: Module to Produce and Manipulate an Energy Limits Library

PRELL (Produce Reordered Energy Limits Library) is an AMPX module to create, copy, modify, punch, or list an AMPX energy-group-limits library. On the new library, neutron structures are ordered by increasing number of groups, followed by gamma structures ordered by increasing number of groups. The new library may be printed. Modification features include adding new structures to an existing library and changing boundaries in an existing structure.

11.11.38.1. Input Data

Block 1

0$    Logical unit assignments [3]

      1.      NO      logical unit number of the old library (77)
    If a new group limits library is being created, a 0 is
    entered for this parameter.

      2.      NW      logical unit number of the new library (18)

      3.      NS      not used (0)

1$    Options [2]

      1.      NOPT    Print option (0)
                      0:      prints only new or updated group structures
                      1:      prints all group structures

      2.      NSETS   number of sets to be added/deleted and/or modified (0)

Terminate Block 1 with a T.

Stack Block 2 and 3 one after the other NSETS times.

Block 2

3$    FLAGS [3]

      1.      IG      number of groups in set
    If negative, the group structure is deleted from the file

      2.      ITYPE   type of group structure (0)
                      0:      neutron-group-structure
                      1:      gamma-group-structure

      3.      IVER    version of group structure; only 0 is currently allowed (0)

Terminate Block 2 with a T. Only use Block 3 if IG > 0.

Block 3

7*    Group boundaries [IG+1]

      1.      IGB     group boundaries in eV

Terminate Block 3 with a T.

11.11.38.2. Notes

The structure of the group limits file is very simple. It consists of one header record that indicates how many group structures are included and what they are. This is followed by one record for each group structure indicating the energy boundaries (in eV). The structure of record 1 is:

Record 1: NS, ((INDEX(J,I), J=1,3), I=1,NS)

The NS records that follow have the form:

Records 2 through NS+1: IGM, (EBDRY(I), I=1,IGM+1)

The usage of the INDEX array is as follows:

INDEX(1,I) number of energy groups of the Ith structure,
INDEX(2,I) particle type (0 for neutrons, 1 for photons) of the Ith structure, and
INDEX(3,I) version of the Ith structure (this term has never been activated).

IGM is a repeat of the number of energy groups, and the EBDRY array contains the group limits in eV arranged in descending order.
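The record layout above can be sketched with plain Python lists standing in for the Fortran binary records; the function name and values are illustrative:

```python
# Sketch of the group-limits file layout described above, using Python
# lists in place of Fortran records. Values are illustrative.

def build_group_limits_file(structures):
    """structures: list of (particle_type, version, boundaries_in_eV)."""
    ns = len(structures)
    # Record 1: NS, ((INDEX(J,I), J=1,3), I=1,NS)
    header = [ns]
    for ptype, version, ebdry in structures:
        header += [len(ebdry) - 1, ptype, version]  # groups, type, version
    # Records 2..NS+1: IGM, (EBDRY(I), I=1,IGM+1), boundaries descending
    records = [[len(e) - 1] + sorted(e, reverse=True)
               for _, _, e in structures]
    return header, records
```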

11.11.38.3. Sample Input

0$$ 47 48 0 1$$ 0 2 T
3$$ 537 0 0 T
7** (Put in 538 energy boundaries for a 537 group neutron structure.)
T
3$$ 10 0 0 T
7** (Put in 11 energy boundaries for a 10 group neutron structure.)
T

This input shows how the user would update the standard energy group library on logical unit 47, to include new 538 and 10 energy group neutron structures. The new group library will be written on logical unit 48 and will contain all of the older structures, in addition to the two new structures.

11.11.38.4. Logical Unit Parameters

Variable   Unit number   Type     Description
NO                       binary   logical unit number of the old library
NW                       binary   logical unit number of the new library

11.11.39. PRILOSEC: Module to Produce ORIGEN Cross Section Libraries

PRILOSEC (Produce Incredible Libraries of Cross Sections) is a module that reads a file with TAB1 records and creates an AMPX master library for each material that it finds on the library. The ZA number will be used for an identifier unless the noorig keyword is supplied. Only one temperature for each nuclide is allowed. If the file contains more than one temperature, module PICKEZE should be used to select the desired temperature value.

11.11.39.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword     Default   Definition
master=     1         logical unit for final library
tab1=       32        logical unit of point data
logwt=      32        logical unit for weighting function file
logebdry=   47        logical unit for file containing energy boundaries. The file containing the standard AMPX energy boundaries is linked to unit 47 by default; if a non-standard group structure is preferred, the PRELL module should be used.
title=                title for the nuclides (nuclide id added automatically)
matwt=                material number of the weighting function
mtwt=                 reaction number of the weighting function
eps=        1e-5      precision to which to calculate the integral
igm=                  number of neutron groups
noorig                if set, mt values and ids will not be reset for ORIGEN
nowork                if set, an AMPX master file will be produced
old                   if set, the old format for ORIGEN libraries will be used

11.11.40. PRUDE: AMPX Module to Create Cross Sections for the Unresolved Resonance Energy Region

PRUDE (Process Unresolved Data on ENDF/B) is a module that accesses the unresolved resonance data in file 2 of an ENDF/B library and writes out a file which gives the energy variation of average cross sections for several important processes as a function of temperature and the weighting parameters, sigma0. Its primary use is to pass these data to the TABU module, which creates Bondarenko factors that ultimately become part of an AMPX master interface. The Bondarenko factors are used by the BONAMI module for performing self-shielding in the unresolved region.

In the development of the Bondarenko treatment, a narrow-resonance weighting of the form

f(E) = PHI(E) / ( sigma_t(E) + sigma0 )

sigma_bar(sigma0, T) = intg[ sigma(E,T) f(E) dE ] / intg[ f(E) dE ]

is used, where PHI(E) is a smooth weighting function (generally 1/E in the unresolved region), T is the temperature at which the cross sections were Doppler broadened, and sigma0 is the cross section that accounts for the cross sections of other nuclides in the mix with the resonance nuclide. PRUDE accepts an arbitrary number of temperatures and sigma0 values as input. At each pair of values, T and sigma0, it makes a calculation to determine shielded cross sections. The energy mesh is chosen to be either the energy mesh at which the unresolved parameters are specified in the ENDF/B library or 100 points equally spaced in lethargy over the unresolved region when the parameters are constant. The output from PRUDE is a file of records written in the ENDF/B TAB1 format, as follows:

Record 1: MAT, MF, MT, 0, 0, 0, 0, 0, 0
Record 2: MAT, MF, MT, T, sigma0 , 0, 0, NR, NP, (NBTi, JNTi, i=1, NR),
( E(i), sigma( T, sigma0 ), i=1, NP)
Record 3: MAT, MF, 0, 0, 0, 0, 0, 0, 0

where MAT is the material identifier, MF is the file number, MT is the process identifier, T is the temperature in Kelvin, sigma0 is the background cross section, NR is the number of interpolation regions, NBTi, JNTi comprise the interpolation table, and E, sigma are the energy cross section values.

Each T, sigma0 pair will generate the three records shown above for each of six processes.

MT = 1, total cross section
MT = 2, elastic scattering
MT = 102, (n,gamma)
MT = 18, fission
MT = 1000, transport cross section
MT = 4, inelastic scattering
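The narrow-resonance weighting defined above can be sketched numerically; the grids, cross section values, and function name here are toy illustrations, not PRUDE internals:

```python
# Trapezoidal sketch of sigma_bar(sigma0, T) from the weighting above,
# with f(E) = PHI(E) / (sigma_t(E) + sigma0) and PHI(E) = 1/E.
# All grids and cross section values are illustrative.

def shielded_xs(energies, sigma_x, sigma_t, sig0):
    """Estimate integral(sigma_x * f dE) / integral(f dE)."""
    f = [1.0 / (e * (st + sig0)) for e, st in zip(energies, sigma_t)]
    num = den = 0.0
    for i in range(len(energies) - 1):
        de = energies[i + 1] - energies[i]
        num += 0.5 * (sigma_x[i] * f[i] + sigma_x[i + 1] * f[i + 1]) * de
        den += 0.5 * (f[i] + f[i + 1]) * de
    return num / den
```

As sigma0 grows, f(E) approaches the smooth weighting PHI(E) and the result approaches the infinitely dilute average; at small sigma0 the flux dip at a resonance suppresses that resonance's contribution, which is the self-shielding effect the Bondarenko factors capture.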

11.11.40.1. Input Data

Block Units

0$    Point file assignment [1]

      1.      LOGP    logical unit in which point data are to be written (31)

1$    Processing Option [1]

      1.      NNUC    Number of materials to process

Terminate block units with a T. Stack block parameters and values one after the other NNUC times.

11.11.40.2. Block Parameters

2$    Data source and problem options [5]

      1.      MATNO   material number for the ENDF/B data

      2.      NSIG0   number of background values

      3.      NTEMP   number of temperatures

      4.      NDFB    logical unit containing the ENDF/B data (11)

      5.      MODE    format of ENDF/B data (1)
                      1:      binary
                      2:      BCD

Terminate block parameters with a T.

Block Values

3*    Background cross sections [NSIG0]

      1.      NSIG0S  background cross sections (sigma_0,i, i=1,NSIG0)
    Specify these values in descending order.

4*    Temperatures [NTEMP]

      1.      TEMPS   temperatures (T_i, i=1,NTEMP)
    Specify these values in ascending order.

5*    Processing options [2]

      1.      EPS     precision with which to combine data (1.0e-3)

      2.      NEW     not used (0)

Terminate block values with a T.

11.11.40.3. Sample Input

0$$ 31 1$$ 2 T
2$$ 9228 8 5 11 2 T
3** 1.0E10 1.0E6 1.0E4 1000 100 10 1 1.0E-5
4** 300 600 1000 1500 2000 T
2$$ 9237 8 5 12 2 T
3** 1.0E10 1.0E6 1.0E4 1000 100 10 1 1.0E-5
4** 300 600 1000 1500 2000 T

This input illustrates how to use PRUDE to create a point library of data for the unresolved energy regions of 235U (MAT=9228) and 238U (MAT=9237). The 235U data are accessed from the BCD ENDF/B library on logical unit 11, whereas the 238U data are from the BCD ENDF/B library on logical unit 12. In both cases, numbers for eight background cross sections and five temperatures will be produced. (Note that PRUDE is programmed to discard any background-temperature combination that produces negative cross section values, which can arise from approximations used in the scheme that calculates self-shielded values.)

11.11.40.4. Logical Unit Parameters

Variable   Unit number   Type     Description
LOGP                     binary   logical unit where point data are to be written
NDFB                     BCD      logical unit containing the ENDF/B data
           14            binary   scratch
           15            binary   scratch
           16            binary   scratch
           17            binary   scratch
           18            binary   scratch

11.11.41. PUFF_IV: Module to Generate MG Correlation Matrices

PUFF-IV is a module that reads the cross section uncertainty data from an ENDF/B library and constructs MG correlation matrices on a user specified energy grid structure. PUFF-IV has the following features:

  • Processes ENDF/B uncertainty data through Version VI

  • Provides output correlation matrices in the COVERX format

  • Processes short-range variance formats, as well as lumped reaction covariance formats that were introduced in ENDF/B-V and could not be processed by PUFF-III

  • Has a directory feature that provides a list of the explicitly and implicitly defined covariance matrices from ENDF/B Files 31 and 33; also, determines if resonance parameter uncertainty information from ENDF/B File 32 is available

  • Calculates eigenvalues for each correlation matrix and tests for positive definiteness
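The eigenvalue test in the last bullet can be sketched for the 2x2 case; this pure-Python illustration (function names are hypothetical, not PUFF-IV code) checks positive semidefiniteness by verifying that no eigenvalue is negative beyond round-off:

```python
# A correlation matrix passes the positive-definiteness test above when
# none of its eigenvalues is negative (beyond round-off). Pure-Python
# 2x2 sketch; function names are illustrative.

def eigenvalues_2x2(m):
    """Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, d]]."""
    (a, b), (_, d) = m
    tr, det = a + d, a * d - b * b
    disc = (tr * tr / 4.0 - det) ** 0.5
    return tr / 2.0 - disc, tr / 2.0 + disc

def is_positive_semidefinite(m, tol=1e-12):
    return all(ev >= -tol for ev in eigenvalues_2x2(m))
```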

11.11.41.1. Input Data

Block 1

-1$   Core allocation [1]

      1.      LENGTH  number of words to allocate (500,000)

0$    Directory flag [1]

      1.      LDIR    directory option (0)
    If a logical unit (ldir > 0) is given, the program generates the directory
    output and exits. For ldir=0 (the default), covariance matrices are generated.

1$    Integer parameters [18]

      1.      NO28    unit for COVERX formatted output (-/+ = Binary/BCD) (-1)

      2.      ISS     unit number for standard deviations in user group structure from standard cross section file (0)
    only used if processing an LTY=1 NC sub-subsection; the standard cross
    section uncertainties must be processed a priori. A suitable file is
    normally generated on unit 16 when processing a standard uncertainty file.

      3.      I19     unit number for standard material ENDF uncertainty file in BCD format (0)
    only used if processing LTY=1 or 2 NC sub-subsections. I19 cannot equal IO32

      4.      IO11    unit number for AMPX master library if IXSOP=1 or TAB1 file if IXSOP=2 (0)
    only used if IXSOP > 0

      5.      IO32    unit number for ENDF uncertainty file in BCD format (32)

      6.      MATUSE  MAT number of material to process (0)

      7.      IUSER   number of user groups for covariance calculation (-12)
                      -2:     240 group CSEWG
                      -3:     99 group GAM2
                      -4:     620 group SAND2
                      -5:     30 group LASL
                      -6:     68 group GAMI
                      -7:     171 group VITAMIN-C
                      -8:     26 group ORNL-5517
                      -9:     100 group GE
                      -10:    6 group ORNL-5318
                      -12:    44 group AMPX
                      other:  user input in the 2# array if greater than 0
          If negative and not one of the above choices, the number of
          groups of a standard AMPX group structure is used.

      8.      IXS     number of groups of input cross sections (-11)
                      -2:     240 group CSEWG
                      -3:     99 group GAM2
                      -4:     620 group SAND2
                      -5:     30 group LASL
                      -6:     68 group GAMI
                      -7:     171 group VITAMIN-C
                      -8:     26 group ORNL-5517
                      -9:     100 group GE
                      -10:    6 group ORNL-5318
                      -11:    Read from AMPX master library
                      -12:    44 group AMPX
                      other:  user input in the 3# array if greater than 0
    Set only to give cross section data explicitly in the 4# array.

      9.      IWT     weighting function
                      1:      1/E
                      2:      1/(E * sigma_t)
                      3:      (1/E)*INPUT (INPUT placed in 5# array)
                      4:      INPUT (placed in 5# array)
                      other:  if less than 0, the logical unit of a flux file in TAB1 format
    If a flux file is given, the material and reaction values of the flux are read from the 5# array.

      10.     IXSOP   cross section input (1)
                      0:      User input in 4# array
                      1:      AMPX master library
                      2:      TAB1 file containing point-wise cross section data

      11.     JOPT1   Files 31 and 33 processing options (2)
                      0:      processes File 33
                      1:      processes File 31
                      2:      processes Files 31 and 33
                      3:      processes neither File 31 nor File 33

      12.     JOPT2   File 32 processing options (2)
                      0:      does not process File 32
                      1:      processes File 32 as sensitivity data
                      2:      full resonance calculation of File 32

      13.     JOPT3   option for matrix to be collapsed to user group (0)
                      0:      Yes
                      1:      No
    For normal operation, the covariance matrix should be collapsed to
    the user group structure. The collapsing is not wanted if a File 32
    covariance matrix should be processed in preparation for converting
    to File 33 format.

      14.     NOX     maximum number of covariance matrices in COVERX file (500)

      15.     NOCVX   not used

      16.     NMT     number of MAT-MT reaction pairs to process (-1)
    If greater than or equal to 0, the number of covariances to process;
    they are given in Block 4. If -1, all reaction pairs on the ENDF tape are processed.

      17.     NDM1    Reads integral constants (0)
                      0:      uses standard integral constants
                      1:      reads integral constants from Block 6

      18.     LD8FL   How an LB=8 section gets calculated (0)
                      0:      calculated as described in ENDF standard
                      1:      assumes that ratio (Delta E_{k})/(Delta E_{I}) = 1 for all k and I
                      2:      ignores all contributions from LB=8 sections

Terminate Block 1 with a T.

Only use Block 2 if IUSER > 0.

Block 2

2#    usergrid [IUSER+1]

      1.      UserGrid        USER energy grid
    Only used if IUSER > 0.

Terminate Block 2 with a T. Only use Block 3 if IXS > 0.

Block 3

3#    IXS_ARRAY [IXS+1]

      1.      CrossGrid       cross section energy grid
    Only used if IXS > 0.

Terminate Block 3 with a T.

Only use Block 4 if NMT > 0.

Block 4

4#    MAT-MT pairs and cross section data [NMT*2 + NMT*IXS]

      1.      MATInfo material and reaction value of covariances to be calculated.
    Only used if NMT > 0

      2.      CrossSections   cross section data for MAT-MT pairs
    Only used if IXSOP = 0.

Terminate Block 4 with a T.

Only use Block 5 if ABS( IWT ) > 2.

Block 5

5#    IWT_ARRAY [IXS]

      1.      Weights user-defined weighting factors
    Only used if ABS( IWT ) > 2.
    If IWT is negative, the material id and the reaction id of the
    weighting function to use should be given. If using an AMPX cross
    section library the number of weights given has to be the same as the
    number of groups on the cross section library. If using point-wise cross
    section data, the number of weights must be the same as the number of groups
    in the super group structure. It is recommended to use IWT<0 in this case.

Terminate Block 5 with a T. Only use Block 6 if NDM1 = 1.

Block 6

6#    THERMAL_VALUES [3]

      1.      ThermalEner     energy for thermal cross section in eV (0.0253)
    Only used if NDM1 = 1.

      2.      LowRes  lower energy for resonance integral in eV (0.5)
    Only used if NDM1 = 1.

      3.      UppRes  upper energy for resonance integral in eV (5500)
    Only used if NDM1 = 1.

Terminate Block 6 with a T.

Block Title Cards

COVERX_TITLE: COVERX Title card Type: Character*72

11.11.41.2. Notes

If File 31 or File 33 processing is requested but the file is not present in the evaluation, only the available information is processed, and PUFF-IV prints a warning message about the missing file. Similarly, if JOPT2=1 or JOPT2=2 is specified and File 32 is not present, the calculation proceeds as if JOPT2=0 were specified, and a warning message is printed.

11.11.41.3. Sample Input

-1$$ 400000000 e
1$$ -1 0 0 11 32 9222 -12 -11 1 1 2 2 a16 -1 e t coverx file for u233

This input prompts PUFF-IV to process Files 31, 32, and 33 from the ENDF/B file on logical unit 32. The cross section data are taken from the AMPX library on logical unit 11. The covariance matrices for material id=9222 are generated on the 44-group AMPX group structure with a weighting of 1/E. The title for the COVERX file is “coverx file for u233”.

11.11.41.4. Logical Unit Parameters

Variable   Unit number   Type            Description
NO28                     BCD or binary   unit for COVERX formatted output (-/+ = binary/BCD)
ISS                      binary          unit number for standard deviations in user group structure from standard cross section file
I19                      binary          unit number for standard material ENDF uncertainty file in BCD format
IO32                     BCD             unit number for ENDF uncertainty file in BCD format
           15            random access   scratch
           16            binary          scratch
           17            binary          scratch
           18            binary          scratch
           19            binary          scratch
           20            random access   scratch
           21            binary          scratch
           22            binary          scratch
           23            binary          scratch
           25            binary          scratch

11.11.42. PURM: Module to Produce Probability Tables from Unresolved Resonance Data Using Monte Carlo

PURM (probability tables for the unresolved region using Monte Carlo) is a module that uses a Monte Carlo approach to calculate probability tables on an evaluator-defined energy grid in the unresolved-resonance region (URR). For each probability table, PURM samples pairs of resonances surrounding the reference energy. The resonance distribution is sampled for each spin sequence (i.e., l-j pair), and PURM uses the Delta-3 statistics test to determine the number of pairs of resonances for each spin sequence. For each resonance, PURM samples the resonance widths from a chi-square distribution for a specified number of degrees of freedom. Once the resonance parameters are sampled, PURM calculates the total, capture, fission, and scattering cross sections at the reference energy using the single-level Breit–Wigner formalism with appropriate treatment for temperature effects. The cross section calculation constitutes a single iteration or history. The calculation is repeated for a user-specified number of histories and batches. After completing the specified number of histories for a batch, a batch estimate of the probability for each cross section band within a table is obtained by dividing the number of tallies for the band by the total number of histories processed. Additional batches are processed until the user-specified number of batches is complete. Because of the statistical nature of the calculation, PURM provides a mechanism for monitoring the convergence of the cross section calculation: for each reaction, a plot of the calculated cross section as a function of batches run is provided, along with additional statistical checks for each cross section calculation. Note that PURM should only be used to process individual isotope evaluations; it should not be used to process nuclide evaluations with multiple isotopes with unresolved-resonance regions.
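The width sampling and band tallying described above can be sketched as follows; the chi-square-via-gamma identity is standard, but the function names and parameters are illustrative, not PURM internals:

```python
import random

# Sketch of two pieces of the scheme above: sampling a resonance width
# from a chi-square distribution with a given number of degrees of
# freedom (normalized to the average width), and turning batch tallies
# into band probabilities. Names and parameters are illustrative.

def sample_width(avg_width, dof, rng=random):
    """Sample avg_width * chi2(dof)/dof; chi2(k) is Gamma(k/2, scale 2)."""
    return avg_width * rng.gammavariate(dof / 2.0, 2.0) / dof

def band_probabilities(samples, band_edges):
    """Estimate band probabilities by dividing tallies by history count."""
    tallies = [0] * (len(band_edges) - 1)
    for s in samples:
        for i in range(len(tallies)):
            if band_edges[i] <= s < band_edges[i + 1]:
                tallies[i] += 1
                break
    return [t / len(samples) for t in tallies]
```

The normalization by dof keeps the sampled widths averaging to avg_width, since a chi-square variate with k degrees of freedom has mean k.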

11.11.42.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword   Default   Definition
logp=     31        output unit for the probability table
bond=     32        output unit for the Bondarenko factors

Repeat block nuclide as often as needed.

Block Nuclide

Block starts on encountering nuc.

Block terminates on encountering enuc.

Keyword   Default   Definition
nbatch=   300       number of batches to run
iter=     600       number of iterations per batch
nband=    20        number of bands to create
mat=                ENDF material number
ndfb=               logical unit of ENDF file to process
temp=               space-separated list of temperature(s) in Kelvin at which probability tables are desired
sig0=               space-separated list of background values for the Bondarenko factors
equ                 if present, bands are equiprobable
eps=      0.001     precision to which to create the mesh if adding cross section data
extra=    0         number of points to add between energies given in the ENDF file

11.11.42.2. Logical Unit Parameters

Variable   Unit number   Type     Description
ndfb                     binary   logical unit of ENDF file to process
logp                     binary   output unit for the probability table
bond                     binary   output unit for the Bondarenko factors

11.11.43. PURM_UP: Correct Probability Tables for File 3 Contributions

The PURM module generates probability tables at the reference energies given in the ENDF/B-formatted file; it does not take the File 3 (smooth) cross section contribution into account. PURM_UP adds or multiplies in the File 3 cross section data, depending on the flag set in the ENDF/B-formatted file.
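The correction can be sketched as follows; the exact flag semantics assumed here (multiply when the table values are self-shielding factors, add when File 3 is a smooth background) and the function name are illustrative assumptions, not PURM_UP internals:

```python
# Sketch of folding the File 3 (smooth) cross section into the band
# values of a probability table, either additively or multiplicatively
# depending on the evaluation's flag. Flag semantics and names here are
# assumptions for illustration.

def apply_file3(band_values, smooth_xs, multiply):
    if multiply:   # table holds factors that scale the File 3 value
        return [b * smooth_xs for b in band_values]
    # table holds partial cross sections; File 3 is added as background
    return [b + smooth_xs for b in band_values]
```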

11.11.43.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword   Default   Definition
in=       31        input unit for tables generated by PURM
out=      32        output unit for updated probability tables
ndfb=     11        logical unit of ENDF file to process
matf=               material number of the tables generated by PURM and of the ENDF file
matp=               material number desired on output file
eps=      0.001     precision to which to create the mesh if adding cross section data

11.11.43.2. Logical Unit Parameters

Variable    Unit number   Type     Description
ndfb                      binary   logical unit of ENDF file to process
in                        binary   input unit for tables generated by PURM
out                       binary   output unit for updated probability tables

11.11.44. RADE: Module to Check AMPX Master Cross Sections Libraries

RADE (Rancid AMPX Data Exposer) is provided to check AMPX- and ANISN-formatted MG libraries. It will check neutron, gamma, or coupled neutron-gamma libraries. Some of the more important checks are made to ensure that:

  • sigma_{t} = sigma_{a} + sigma_{s}

  • sigma_{in} = sum( sigma_{in}^{partial})

  • sigma_{a} = sigma_{c} + sigma_{f}

  • sigma_{c} = sigma_{n,gamma} + sigma_{n,alpha} +sigma_{n,p} + sigma_{n,d} + …

  • sigma_{el}^{g} = sum( sigma_{el}(g -> g') )

  • sigma_{0}(g -> g’) > 0

  • sigma_{t}, sigma_{a}, sigma_{f}, sigma_{n,gamma}, sigma_{n,p},… > 0

  • f_{l}^{min} < f_{l}(g -> g') <= 1.0
    where f_{l}(g -> g') = [sigma_{l}(g -> g')] / [(2l+1) sigma_{0}(g -> g')]
    and f_{l}^{min} = -1.0 for all odd l, while for even l:

      l=2: f_{l}^{min} = -0.5

      l=4: f_{l}^{min} = -0.433

      l=6: f_{l}^{min} = -0.419

      l=8: f_{l}^{min} = -0.414

In addition to these checks, the code will compute an estimate of the capture-binding energy for each neutron group in a coupled neutron-gamma set. On option, one can request a display of differential cross sections.
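Two of the checks listed above can be sketched in Python. This is illustrative only, not RADE source; the relative `eps` here mirrors the epsilon idea from the 2$ array, and the table of even-l minima comes from the bullet list above:

```python
def check_redundant(total, absorption, scatter, eps=1e-5):
    """Relative check that sigma_t = sigma_a + sigma_s, as RADE does."""
    return abs(total - (absorption + scatter)) <= eps * abs(total)

def check_moment_bounds(f_l, l):
    """Check the normalized moment f_l(g->g') against its allowed range:
    f_l <= 1, with lower bound -1 for odd l and a tighter bound for even l."""
    fmin = {2: -0.5, 4: -0.433, 6: -0.419, 8: -0.414}.get(l, -1.0)
    return fmin < f_l <= 1.0
```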

11.11.44.1. Input Data

Block 1

-1$   Core assignment [1]

      1.      NWORD   number of words to allocate (100,000)

1$    Checking commands [4]

      1.      MMT     checks the AMPX master interface on logical MMT (0)
    (can be a neutron, gamma, or a coupled neutron-gamma library)

      2.      MWT     checks the AMPX working/weighted interface on logical MWT (0)

      3.      MAN     checks the ANISN binary-formatted library on logical MAN (0)

      4.      IFM     formats of the ANISN library
                      -1:     ANISN library is binary formatted.
                      0:      ANISN library is BCD free form.
                      1:      ANISN library is BCD fixed form.

2$    Options [20]

      1.      numAng  number of angles at which a display of differential cross sections is desired (0)
    These angles will be equally spaced in the cosine range, -1 to +1.
    These edits are for the group-integrated cross sections and not for
    each group-to-group transfer

      2.      eps     the epsilon in 1/1000s of a percent, to which checks are made (1)
    That is eps=1 is equivalent to 0.001% checking. This is the default value
    when eps is not entered or when a zero value is entered.

      3.      printbind       print option
                      0:      prints the estimated binding energy table
                      1:      suppresses printing the estimated binding energy tables for processes
          with gamma production data

      4.      OPT4    not used

3$    ANISN Options [7]

      1.      NSET    number of ANISN nuclides to check

      2.      IHT     position of sigma_{T} if checking ANISN library

      3.      IHS     position of sigma_{g} if checking ANISN library

      4.      ITL     table length if checking ANISN library

      5.      NL      maximum order of scattering if checking ANISN library

      6.      IGM     number of neutron groups if checking ANISN library

      7.      IPM     number of photon groups if checking ANISN library

Terminate Block 1 with a T.

Only use Block 2 if MAN != 0.

Block 2

4$    Identification numbers of P0 sets [NSET]

      1.      IDPO    identification numbers of P0 sets on ANISN binary library on logical
    MAN
    Only used if MAN != 0.

5$    Order of scattering for sets [NSET]

      1.      ISOR    order of scattering for sets of ANISN data on logical MAN
    Only used if MAN != 0.

7*    Neutron group structure [IGM+1]

      1.      NGS     neutron group structure
    Only used if MAN != 0.
    order high to low in eV

8*    Gamma group structure [IPM+1]

      1.      GGS     gamma group structure
    Only used if MAN != 0.
    order high to low in eV

Terminate Block 2 with a T.

11.11.44.2. Sample Input

1$$ 1 E T

This input instructs RADE to perform consistency checks on the data on the master library on logical unit 1.

11.11.44.3. Logical Unit Parameters

Variable    Unit number   Type              Description
MMT                       master library    checks the AMPX master interface on
                                            logical MMT
MWT                       working library   checks the AMPX working/weighted
                                            interface on logical MWT
MAN                       ANISN library     checks the ANISN binary-formatted
                                            library on logical MAN
            18            binary            scratch
            19            binary            scratch

11.11.45. SIMONIZE

SIMONIZE is a module that collects classes of data (resonance parameters, neutron data, gamma data, gamma production data, thermal scattering matrices, etc.) from an arbitrary number of AMPX master formatted data sources and combines them into a single comprehensive collection (i.e., a master library that contains all the data wanted) for a nuclide. At the same time, SIMONIZE normalizes and rearranges data to make a set of data that are ready for use in transport calculations or other applications.

11.11.45.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword       Alternate   Default   Definition
IDENTIFIER=                         identifier of the collection of data on the
                                    master library
MASTER=                   1         logical unit onto which the data will be
                                    written
TITLE=                              title to use for the data; if a TITLE is not
                                    supplied (the most common situation), SIMONIZE
                                    uses the title from the NEUTRON data files
za=                                 overrides ZA value of the nuclide
fastid=                             overrides the identifier for fast data
thermid=                            overrides the identifier of thermal data
gamid=                              overrides the identifier of photon data
yieldid=                            overrides the identifier of photon yield data
source=                   0         source of the data as defined in the ENDF
                                    manual
small1d=                  1.0e-12   1-D cross sections smaller than this are set
                                    to zero
small2d=                  1.0e-12   2-D cross sections smaller than this are set
                                    to zero
kipratio                            If not present, apply correction to MT=1007 if
                                    not a moderator, to MT=2 otherwise.
skipnorm                            does not recalculate redundant cross sections
skipscatter                         does not correct matrices for upscatter
oldza                               If present, convert the ZA value to the ZA
                                    values used for SCALE 6.1 and earlier.

Repeat block data descriptions as often as needed.

Block Data descriptions

Block starts on first encounter of a keyword in the block.

Keyword       Alternate   Default   Definition

Select one of these:
NEUTRON                             unit containing a collection of neutron data
GAMMA                               unit containing a collection of gamma-ray data
YIELD                               unit containing a collection of gamma-ray
                                    yield data
BONDARENKO                          unit containing Bondarenko factor data
1DN                                 unit containing averaged neutron data
2DN                                 unit containing neutron scattering matrices
1DG                                 unit containing averaged photon data
2DG                                 unit containing photon scattering matrices
2DY                                 unit containing photon production matrices
MODERATOR                           special keyword used to signal that the data
                                    originate from a thermal ENDF/B evaluation

ID19=                               identifier of the data on the logical unit
                                    that is currently processed
mt=                                 list of reaction values to include or exclude

If all mt values are positive, the listed mt values will be selected from the partial library and added to the new library. If all mt values are negative, the listed mt values are excluded from the new library.
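The include/exclude rule for the mt= list can be sketched as follows (a hypothetical helper, not SIMONIZE source):

```python
def select_mts(available, mt_list):
    """mt= selection: an all-positive list picks those reactions, an
    all-negative list excludes them, an empty list keeps everything."""
    if not mt_list:
        return list(available)
    if all(m > 0 for m in mt_list):
        wanted = set(mt_list)
        return [m for m in available if m in wanted]
    if all(m < 0 for m in mt_list):
        excluded = {-m for m in mt_list}
        return [m for m in available if m not in excluded]
    raise ValueError("mt= values must be all positive or all negative")
```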

11.11.45.2. Sample Input

Identifier=92234 master=1 source=0
Neutron=20 id19=92234
Bondarenko=36 id19=9221
2dn=21 id19=92234

This input tells SIMONIZE to combine NEUTRON data produced by X10 that is located on logical unit 20 and identified by 92234 with thermal scattering data located on logical unit 21 identified by 92234, and with Bondarenko Factor Data on logical unit 36 identified by 9221 to create an AMPX master on logical unit 1 with a nuclide identifier of 92234.

11.11.45.3. Logical Unit Parameters

Variable      Unit number   Type     Description
MASTER                      binary   logical unit onto which the data will be
                                     written
NEUTRON                     binary   unit containing a collection of neutron data
GAMMA                       binary   unit containing a collection of gamma-ray data
YIELD                       binary   unit containing a collection of gamma-ray
                                     yield data
BONDARENKO                  binary   unit containing Bondarenko factor data
1DN                         binary   unit containing averaged neutron data
2DN                         binary   unit containing neutron scattering matrices
1DG                         binary   unit containing averaged photon data
2DG                         binary   unit containing photon scattering matrices
2DY                         binary   unit containing photon production matrices

11.11.46. SMILER: AMPX Module to Convert NJOY GENDF Files to AMPX Master Libraries

The SMILER module (Second MILER) was written to circumvent inefficiencies observed in the use of the original MILER code.

MILER provides a means of converting group-averaged cross sections from the NJOY system for use by modules written for the AMPX system. By default, NJOY writes these data in the GENDF format, which is an ENDF/B-like format.

SMILER is not a revision of MILER, but it is a response to the observation that many situations will cause MILER to require exorbitant I/O operations to convert between GENDF format and the AMPX master library format. SMILER uses procedures that take advantage of the current large-computer memories, allowing the user to liberally use core-size allocations. This is in contrast to previous processes in which the user shuttled data in and out of the core to accommodate many problems. Because of this change in programming style, SMILER uses simpler procedures than previously employed, thereby making it more compact and easier to maintain.

As with MILER, SMILER requires little input over simply specifying the GENDF files to be combined and converted. Like MILER, a SMILER run produces cross sections for only one nuclide. These one-nuclide master libraries can be easily collected by the AJAX module. SMILER accepts the BCD or binary formats of GENDF files.

Note that no code which prepares an AMPX master library should include an array identified by 1452 in the 1-D arrays. SMILER does not include it and should never be modified to do so, as that would produce completely erroneous results when the library is used in some code combinations.

11.11.46.1. Input Data

Block 1

0$    Logical Assignments [3]

      1.      MMT     logical unit of AMPX master interface (1)

      2.      MG1     first GENDF file (0)

      3.      MG2     second GENDF file(0)

      4.      MG3     third GENDF file(0)
              Note that because photon-only GENDF files do not strictly follow the
    GENDF format specifications and specify the number of photon groups in
    the word designation for the number of neutron groups, MG3 is reserved as the
    location for this type of file. Logical units MG1 and MG2 can both contain
    either neutron-only or coupled neutron-gamma data. Borrowing an idea from
    MILER, positive values for MG1, MG2, MG3 are used for BCD files, whereas
    negative values are used for binary files.

1$    Nuclide identifier and direct-access file status [2]

      1.      ID19    identifier of the set of data produced by SMILER (1)

      2.      N9STAT  not used (0)

2$    NJOY/AMPX thermal identifier correspondence list [100]

      1.      NJID    NJOY/AMPX thermal identifier correspondence list (221 1007 222 1008 e)
    Up to 50 doublets give the NJOY identifiers for a thermal-scattering process,
    followed by its corresponding AMPX identifier. By default, this array contains
    221 1007 222 1008, followed by 96 zeroes, which indicates that an AMPX
    identifier of 1007 should be used on the arrays which NJOY identifies
    with 221 and 1008 on those identified by 222.

Terminate Block 1 with a T.

11.11.46.2. Notes

Converting between different MG cross section formats is a very common requirement, but there is a wide variety of choices that one can make in designing a format.

The differences in GENDF and AMPX formats clearly demonstrate areas that can be different.

  1. The ordering of energy groups differs. Traditionally, group 1 is the highest energy group, as it is in the AMPX master interface. In GENDF, group 1 is the lowest energy group.

  2. The Legendre coefficients in scattering matrices in the AMPX master interface include the ( 2l + 1 ) multiplier following conventions established for the ANISN and DOT programs in the mid-1960s. GENDF does not.

  3. The matrices for reactions that produce multiple secondary particles, such as n2n, contain the multiplicity on GENDF. In AMPX, they do not.

  4. The units of temperatures associated with scattering matrices are in eV in AMPX vs. Kelvin in GENDF.

  5. Various process identifiers for averaged cross sections must be carefully monitored in order to interact properly with various AMPX modules. For example, the GENDF-scattering matrices for fission are identified by MT = 18, but use of this identifier on the AMPX master interface would lead to undesirable results, and it is redefined to be 9018. Likewise, MT = 221 … for thermal-scattering processes are converted to MT = 1007, 1008… to interface with the AMPX procedures.

  6. The fission spectrum on the GENDF file is in scattering-matrix form (a more correct form), whereas it is generally expected to be a single array on the AMPX interface.

The basic procedure in SMILER is very simple. Note that even though a GENDF file can contain many (up to the number of groups) collections of records for a process at a single temperature, these can be collected into a single record before they are shuttled to a direct-access scratch file. Furthermore, if one chooses a procedure that constructs all of the matrices for Legendre coefficients of scattering processes in core prior to writing to the direct access file, the requisite I/O operations are minimized.
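Items 1, 2, and 4 of the list above can be sketched for a single scattering matrix. This is an illustration, not SMILER code; the dictionary layout (`{l: rows}`) is a hypothetical stand-in for the real record structures:

```python
def gendf_to_ampx_matrix(matrix_by_l, temperature_k):
    """Sketch of three GENDF-to-AMPX conversions for one scattering matrix:
      - reverse group order (GENDF group 1 is the lowest energy group),
      - fold in the (2l+1) factor AMPX carries in its Legendre coefficients,
      - convert the associated temperature from Kelvin to eV."""
    k_boltzmann_ev = 8.617333262e-5          # eV per Kelvin
    out = {}
    for l, rows in matrix_by_l.items():
        # reverse both source and sink group orderings and scale by (2l+1)
        out[l] = [[(2 * l + 1) * x for x in reversed(row)]
                  for row in reversed(rows)]
    return out, temperature_k * k_boltzmann_ev
```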

11.11.46.3. Sample Input

0$$ 1 2 0 0 1$$ 92235 E T

This input will create an AMPX master library on logical unit 1 for 235U data taken from the NJOY GENDF file on logical unit 2. (Note that SMILER only processes one nuclide at a time so that the identifier of the data on GENDF is not required; i.e., the GENDF file must be for only one nuclide.)

11.11.46.4. Logical Unit Parameters

Variable

Unit number

Type

Description

MMT

binary

logical unit of AMPX master

interface

MG1

binary

first GENDF file

MG2

binary

second GENDF file

MG3

binary

third GENDF file

9

direct access

scratch

11.11.47. SPLICER: Sets the Functions on a TAB1 File to Zero Between el and eh, or Chop to the Given Range, or Splice with Data

SPLICER sets the functions on a TAB1 file to zero between el and eh, restricts them to that range, or splices in data from a second file, depending on the option selected. It is recommended to use the DCON module following the use of SPLICER to make sure the partials sum to the appropriate total values.

11.11.47.1. Input Data

Block Specifications

Block starts on first encounter of a keyword in the block.

Keyword     Alternate   Default   Definition
in1=                    31        input TAB1 file
in2=                    0         input TAB1 file
out=                    33        output TAB1 file
el=                     1.0e-5    lower energy value of range to zero; a negative
                                  lower energy causes the lowest energy of the
                                  TAB1 data on unit in1 to be picked
eh=                     3.0e7     upper energy value of range to work on
option=                 1         procedure to perform
                                   1: splices the data on in2 between el and eh
                                   0: zeroes data between el and eh
                                  -1: only copies data between el and eh
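The three option= settings can be sketched on a simple list of (energy, value) pairs. This is a hypothetical helper; the real module operates on TAB1 records and also matches material, reaction, and temperature before splicing:

```python
def splicer(points, el, eh, option, points2=None):
    """Sketch of the three SPLICER options on (energy, value) pairs:
    option 0 zeroes values in [el, eh], option -1 keeps only that range,
    and option 1 replaces the range with points from a second file."""
    if option == 0:
        return [(e, 0.0 if el <= e <= eh else v) for e, v in points]
    if option == -1:
        return [(e, v) for e, v in points if el <= e <= eh]
    if option == 1:
        outside = [(e, v) for e, v in points if not (el <= e <= eh)]
        inside = [(e, v) for e, v in (points2 or []) if el <= e <= eh]
        return sorted(outside + inside)
    raise ValueError("option must be 1, 0, or -1")
```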

11.11.47.2. Sample Input

in1=31 el=1e-5 eh=3.0 out=33 option=0

All values between 1e-5 eV and 3.0 eV are set to zero in the data read from logical unit 31, and the result is saved on logical unit 33.

in1=31 el=1e-5 eh=3.0 option=-1 out=33

Only the values between 1e-5 eV and 3.0 eV are copied from the data on logical unit 31 and saved on logical unit 33.

in1=31 in2=33 el=1e-5 eh=3.0 option=1 out=35

If material, reaction and temperature match, data on logical unit 33 are spliced into the data on file 31 in the range 1e-5 eV to 3 eV. The output data are saved on logical unit 35.

11.11.47.3. Logical Unit Parameters

Variable    Unit number   Type     Description
in1                       binary   input TAB1 file
in2                       binary   input TAB1 file
out                       binary   output TAB1 file

11.11.48. TAB1COMPARE: Module to Compare Functions on Two TAB1 Files

TAB1COMPARE is a module to read two TAB1-formatted single precision binary files and compare the functions with the same identifiers (MAT, MF, MT). It writes a TAB1 single precision binary file that contains difference functions, (Function 1 - Function 2) / Function 2, identified by the original identifiers. This module can be used to compare two pointwise cross section files. For example, TAB1COMPARE can be used to compare point cross sections from AMPX with point cross sections generated by NJOY. Note that the AMPX module EXTRACT would be used to convert an NJOY-PENDF to TAB1 format. Subsequently, TAB1COMPARE would be used to compare the two TAB1 files.
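The difference function can be sketched as follows. The helper is hypothetical, and it assumes for simplicity that matching functions are tabulated on the same energy grid (an assumption of this sketch, not a statement about the module):

```python
def tab1_difference(funcs1, funcs2):
    """(Function 1 - Function 2) / Function 2 for every (MAT, MF, MT)
    identifier present in both inputs."""
    diffs = {}
    for key, pts1 in funcs1.items():
        if key in funcs2:
            diffs[key] = [(e, (v1 - v2) / v2)
                          for (e, v1), (_, v2) in zip(pts1, funcs2[key])]
    return diffs
```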

11.11.48.1. Input Data

Block 1

-1$   Core allocation [1]
      1.      ICORE   number of words of core to allocate (500000)

0$    Logical unit assignments [3]
      1.      LOG1    logical unit on which the first TAB1 file is located (1)
      2.      LOG2    logical unit on which the second TAB1 file is located (2)
      3.      LOG3    logical unit where the difference TAB1 file will be written (3)

Terminate Block 1 with a T.

11.11.48.2. Sample Input

0$$ 23 24 25 T

This input indicates that the identical functions on logical units 23 and 24 should be compared and the difference file should be written in the TAB1 format on logical unit 25.

11.11.48.3. Logical Unit Parameters

Variable    Unit number   Type     Description
LOG1                      binary   logical unit on which the first TAB1 file is
                                   located
LOG2                      binary   logical unit on which the second TAB1 file is
                                   located
LOG3                      binary   logical unit where the difference TAB1 file
                                   will be written

11.11.49. TABASCO: Module to Read Functions from an AMPX Master Library and Write Them to a TAB1 File as Histograms

TABASCO (TAB1 functions from AMPX/SCALE Master Libraries originally) is a module one can use to extract data from an AMPX master library and have it written onto a single precision binary TAB1 library as histograms equivalent to the averaged values in the master library. These histograms can then be plotted or used in other applications that need these data.

11.11.49.1. Input Data

Block 1

-1$   Indicates whether worker [1]

      1.      worker  If negative, a working library is read. (1)

0$    Logical unit assignments [2]

      1.      MMT     the logical unit of the input AMPX Master file (31)

      2.      LOGOUT  the logical unit of the output TAB1 file (32)

1$    Number of classes of data to select [1]

      1.      NCOM    the number of classes of data to select

Terminate block 1 with a T.

Block 2

2$    Identifiers of materials selected [NCOM]

      1.      MATS    Identifiers of Materials selected
    Zero entry selects everything

3$    Process identifiers selected [NCOM]

      1.      MTS     Process Identifiers selected
    Zero entry selects everything

4$    Not used [NCOM]

      1.      NNUSED1 Not used

5*    Temperatures selected [NCOM]

      1.      NNUSED2 Not used

6*    Sig0s selected [NCOM]

      1.      NNUSED3 Not used

Terminate Block 2 with a T.

0$$ 23 24 1$$ 2 T
2$$ 1395 1398 3$$ 1 0 T

This input indicates that data should be read from the AMPX master library on logical unit 23, and data should be written to a TAB1 file on logical unit 24. The total cross section (MT=1) for MAT=1395 is selected, and all processes are selected for MAT=1398.

11.11.49.2. Logical Unit Parameters

Variable    Unit number   Type     Description
MMT                       binary   logical unit of the input AMPX master file
LOGOUT                    binary   logical unit of the output TAB1 file

11.11.50. TGEL: Module to Calculate Total Cross Sections for Functions Written in TAB1 Format

TGEL adds up partial cross sections to form an elastic (MT=1007 + MT=1008), inelastic, capture, absorption, nonelastic, or total cross section. This ensures that the values of redundant cross sections are consistent with their partials.
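The summing of partials can be sketched as follows. The union-grid-with-interpolation approach shown here is an illustrative assumption, not a description of TGEL's internals:

```python
import bisect

def sum_partials(partials):
    """Sum partial cross sections on the union of their energy grids, with
    linear interpolation, to rebuild a redundant cross section."""
    grid = sorted({e for pts in partials for e, _ in pts})
    def interp(pts, e):
        xs = [p[0] for p in pts]
        i = bisect.bisect_left(xs, e)
        if i < len(xs) and xs[i] == e:
            return pts[i][1]
        if i == 0 or i == len(xs):
            return 0.0                      # outside the tabulated range
        (x0, y0), (x1, y1) = pts[i - 1], pts[i]
        return y0 + (y1 - y0) * (e - x0) / (x1 - x0)
    return [(e, sum(interp(pts, e) for pts in partials)) for e in grid]
```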

11.11.50.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword      Alternate   Default   Definition
input=                   1         single or double precision input TAB1 file
output=                  2         single or double precision output TAB1 file
eps=                     1e-4      precision to which to calculate cross section
                                   data
total                              reconstruct total cross section
capture                            reconstruct capture cross section
absorption                         reconstruct absorption cross section
inelastic                          reconstruct inelastic cross section
thermal                            reconstruct thermal cross section
nonelastic                         reconstruct nonelastic cross section

11.11.50.2. Logical Unit Parameters

Variable    Unit number   Type     Description
input                     binary   single or double precision input TAB1 file
output                    binary   single or double precision output TAB1 file
            99            binary   scratch

11.11.51. TOMATO: Module to Change Material Identifiers (MAT Numbers) on a TAB1 File

TOMATO (Toss MAT numbers on a TAB1 file) is a module that allows the user to change the material identifiers (MAT numbers) on a TAB1 file. Isolating this simple functionality makes it much easier to develop and use other modules, such as the SPLICER module.

11.11.51.1. Input Data

Block 1

-1$   Core allocation [1]

      1.      ICORE   not used (50000)

0$    Logical unit assignments [2]

      1.      LOGIN   logical unit of the input TAB1 file (31)

      2.      LOGOUT  logical unit of the output TAB1 file (32)

1$    Number of materials to change [3]

      1.      NMAT    number of materials whose identifiers should be changed (0)

      2.      NMT     number of reactions whose identifiers should be changed (0)

      3.      NMF     number of file numbers whose identifiers should be changed (0)

Terminate block 1 with a T.

Block 2

2$   Identifiers of materials whose identifiers should be changed [NMAT]

     1.      NMATold identifiers of materials whose identifiers should be changed
   only used if NMAT > 0

3$   New identifiers for the material [NMAT]

     1.      NMATnew new identifiers for the material.
   only used if NMAT > 0

4$   Identifiers of reactions whose identifiers should be changed [NMT].

     1.      NMTold  identifiers of reactions whose identifiers should be changed
   only used if NMT > 0

5$   New identifiers for the reactions [NMT]

     1.      NMTnew  new identifiers for the reactions
   only used if NMT > 0

6$   Identifiers of file numbers whose identifiers should be changed [NMF]

     1.      NMFold  identifiers of file numbers whose identifiers should be changed
   only used if NMF > 0

7$   New Identifiers for the file numbers [NMF]

     1.      NMFnew  new identifiers for the file numbers
   only used if NMF > 0

Terminate block 2 with a T.

11.11.51.2. Sample Input

-1$$ 500000 0$$ 23 24 1$$ 2 T
2$$ 1395 1398 3$$ 1495 1498 T

This input indicates that 500,000 words of core should be allocated to TOMATO and that data should be read from the TAB1 library on logical unit 23, and data should be written to a new file on logical unit 24. The identifiers on the original library are changed from 1395 to 1495 and from 1398 to 1498. Other than these two nuclides, all will be copied with their identifiers unchanged.
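The identifier remapping can be sketched as a set of lookup maps. The helper and the record layout are illustrative, not TOMATO source:

```python
def remap_identifiers(records, mat_map=None, mt_map=None, mf_map=None):
    """Change (MAT, MF, MT) identifiers the way TOMATO does: records whose
    identifier appears in a map get the new value, all others pass through
    unchanged."""
    mat_map, mt_map, mf_map = mat_map or {}, mt_map or {}, mf_map or {}
    return [(mat_map.get(mat, mat), mf_map.get(mf, mf), mt_map.get(mt, mt), data)
            for mat, mf, mt, data in records]
```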

11.11.51.3. Logical Unit Parameters

Variable    Unit number   Type               Description
LOGIN                     binary TAB1 file   logical unit of the input TAB1 file
LOGOUT                    binary TAB1 file   logical unit of the output TAB1 file

11.11.52. WORM: AMPX Module to Convert an AMPX Working Library to an AMPX Master Library

WORM (Working to Master Converter) is an AMPX module that converts a binary AMPX working library into a binary AMPX master library. WORM works with any working library to produce a master library containing neutron and/or gamma and/or gamma-production information. In the case of the working library containing more than one of the above types of data, WORM automatically splits the transfer matrices so that all neutron data are carried together and identified by MT = 1, gamma production data are carried together and identified by MT = 1, and, likewise, gamma data are carried together and identified by MT = 501. One-dimensional (reaction averages) cross sections are carried on a process-by-process basis, exactly as in the master library. Only the total transfer matrices are available, since it is impossible to split out individual transfer processes in a general manner once they are added together to produce the working library.

11.11.52.1. Input Data

Block 1

-1$   Core allocation [1]

      1.      ICORE   number of words to allocate to WORM (50,000)

0$    Logical unit assignments [2]

      1.      MMT     master library is written on this logical unit. (1)

      2.      MWT     working library is mounted on this logical unit. (4)

Terminate Block 1 with a T.

11.11.52.2. Sample Input

0$$ 1 2 T

This input instructs WORM to convert the sets of data on the AMPX working library on logical unit 2 into the formats used on an AMPX master library and to write them on logical unit 1.

11.11.52.3. Logical Unit Parameters

Variable    Unit number   Type     Description
MMT                       binary   master library is written on this logical unit
MWT                       binary   working library is mounted on this logical unit
            17            binary   scratch
            18            binary   scratch

11.11.53. X10: Module to Produce MG Libraries from Three Tabular Files

X10 is the AMPX module for generating MG libraries. In its present form, it only generates neutron interaction, gamma-ray yield, or gamma-ray interaction cross sections.

Because there were three independent situations (neutron-neutron, gamma-gamma, and neutron-gamma) to address and because production capabilities were expected to accommodate other types of coupling (for example, neutrons produced by gamma rays), the design choice was a single system that is expandable to cover situations not previously addressed.

X10 can accomplish the above goals by accepting data from three tabular files:

  1. a file in TAB1 format that contains point cross sections,

  2. a file in TAB1 format that contains a smooth weighting function, and

  3. a tabular kinematics file generated by Y12.

X10 does not do physics; all kinematic data are in the LAB system and in fully double-differential form, given in Legendre order or cosine moments. Since all processes for all particles use the same coding, the code is easier to maintain: in most situations it will either always work correctly or always work incorrectly.

Another difference in the way X10 calculates transfer matrices is that it always uses one group structure for the source particle and another for the sink particle. The coding always uses the source group structure when making integrations for the interacting particle, and it always uses the sink group structure when making integrations for the particle that is produced, even when the two particles are the same. Nothing informs the integration routines that they are dealing with neutrons, photons, protons, etc. In fact, these routines can be used to produce energy-absorption coefficients (or dose factors) simply by specifying the energy absorbed when a particle undergoes a particular type of reaction. Though it may have no practical application, one could go further and specify a group structure for the dose factors, which would then also be a function of scattering angle, just like typical scattering matrices.

The same routines that calculate scattering matrices also calculate averaged cross sections and multiplicities, such as nu-bar (which must be weighted by a combination of a cross section times a flux). This procedure makes it easier to ensure consistency between group-averaged values and transfer matrices.
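The flux-weighted group average at the heart of this procedure, sigma_g = int(sigma(E) w(E) dE) / int(w(E) dE), can be sketched with a simple quadrature. This is illustrative only; the uniform mesh used here is not the eps-driven mesh construction X10 actually performs:

```python
def group_average(sigma, weight, top, bottom, n=2000):
    """Flux-weighted average over one group:
    sigma_g = int(sigma * w dE) / int(w dE), by the trapezoidal rule."""
    h = (top - bottom) / n
    energies = [bottom + i * h for i in range(n + 1)]
    w = [weight(e) for e in energies]
    sw = [sigma(e) * wi for e, wi in zip(energies, w)]
    trapezoid = lambda ys: h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return trapezoid(sw) / trapezoid(w)
```

A constant cross section averages to itself regardless of the weighting function, which is a quick sanity check on any group-averaging routine.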

Note that X10 never reads an ENDF/B library. Other modules (POLIDENT and Y12, for example) read these and produce point cross section files and kinematics files.

11.11.53.1. Input Data

Block Data

Block starts on first encounter of a keyword in the block.

Keyword      Alternate   Default   Definition
type=                    neutron   execution mode for X10
                                   neutron: generates neutron interaction data
                                   yield:   generates gamma-ray yield data
                                   gamma:   generates gamma-ray interaction data
tab1=                    32        logical unit containing the Doppler-broadened
                                   point data
logwt=                   30        logical unit containing the pointwise weighting
                                   function
matwt=                   99        material for the weighting function
mtwt=                    1099      reaction for the weighting function
logebdry=                77        logical unit for file containing energy
                                   boundary information
master=                  1         logical unit for output AMPX master
title=                             title to use for the AMPX master
kin=                     0         logical unit containing kinematic data
id=                      0         id of material to process on point-wise and
                                   kinematic file
nl=                      5         maximum Legendre order in final master
igm=                     0         number of neutron groups to use
iftg=                    0         number of first thermal group
ipm=                     0         number of gamma groups to use
eps=                     1e-5      precision at which to construct the mesh
pot=                     0.0       potential scattering cross section to write
                                   into master
upscatter                          If present, add an upscatter correction for
                                   thermal point-wise data.
eup=                     3.0       if performing upscatter correction, the energy
                                   at which the correction starts
eterm=                   5.0       if performing upscatter correction, the energy
                                   at which all upscatter is eliminated

11.11.53.2. Logical Unit Parameters

Variable    Unit number   Type     Description
tab1                      binary   logical unit containing the Doppler-broadened
                                   point data
logwt                     binary   logical unit containing the pointwise weighting
                                   function
logebdry                  binary   logical unit for file containing energy
                                   boundary information
kin                       binary   logical unit containing kinematic data
master                    binary   logical unit for output AMPX master

11.11.54. Y12: Create Kinematic Data Files

This module creates kinematic data files for incident-neutron and incident-gamma data, as well as for thermal moderators. Y12 saves the kinematic file in cosine moments or Legendre moments for use in MG processing, or as pointwise kinematic data for use in CE library processing.
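The Legendre-moment form of the kinematics can be illustrated by computing f_l = (2l+1)/2 * int_{-1}^{1} P_l(mu) f(mu) dmu for an angular distribution f. The normalization shown is one common convention and is not necessarily what Y12 writes; the helper is hypothetical:

```python
def legendre_moments(f_mu, nl, n=4000):
    """Legendre moments of an angular distribution f(mu) on [-1, 1], using
    the Bonnet recurrence for P_l and a trapezoidal rule."""
    h = 2.0 / n
    mus = [-1.0 + i * h for i in range(n + 1)]
    moments = []
    p_prev = [1.0] * len(mus)                # P_0
    p_curr = list(mus)                       # P_1
    for l in range(nl + 1):
        p = p_prev if l == 0 else p_curr
        ys = [pl * f_mu(mu) for pl, mu in zip(p, mus)]
        integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
        moments.append(0.5 * (2 * l + 1) * integral)
        if l >= 1:
            # Bonnet recurrence: (l+1) P_{l+1} = (2l+1) mu P_l - l P_{l-1}
            p_next = [((2 * l + 1) * mu * pc - l * pp) / (l + 1)
                      for mu, pc, pp in zip(mus, p_curr, p_prev)]
            p_prev, p_curr = p_curr, p_next
    return moments
```

For an isotropic distribution only the l=0 moment survives, which is a convenient check.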

11.11.54.1. Input Data

Block Input

Block starts on first encounter of a keyword in the block.

Keyword     Alternate   Default   Definition
ndf=                    11        logical unit of ENDF file to process
mat=                              material number to process
kin=                    31        logical unit of output kinematic file
point=                  -1        logical unit of file containing 1-D point data;
                                  if less than or equal to 0, the pointwise data
                                  will not be generated
id=                     -1        ID to be used on the kinematic and 1-D point
                                  file; if less than or equal to 0, the ENDF mat
                                  number is used
eps=                    1e-3      precision at which to generate the grid
nl=                     5         if saving in Legendre coefficients or cosine
                                  moments, the number of moments to generate
emax=                   5.05      if processing thermal moderator data or free gas
                                  data, the upper energy limit
emin=                   1e-5      if processing thermal moderator data or free gas
                                  data, the lower energy limit
free                              if present, generates free gas data
awr=                              if processing free gas, the mass ratio to use
pot=                              if processing free gas, the free atom scattering
                                  cross section
temp=                             space-separated list of temperature(s) in Kelvin
                                  at which to generate free gas data
coform=                 yes       option to apply the form factor for
                                  Klein-Nishina scattering (yes applies the
                                  factor, no does not)
awp=                              If given, kinematic data are only processed for
                                  particles with this mass ratio.
zap=                              If given, kinematic data are only processed for
                                  particles with this ZA value.
for=                    tab       desired output format
                                  tab: generates tabulated double differential data
                                  cos: generates data in cosine moments
                                  leg: generates data in Legendre coefficients

11.11.54.2. Logical Unit Parameters

Variable    Unit number   Type     Description
kin                       binary   logical unit of output kinematic file
ndf                       binary   logical unit of ENDF file to process

11.11.55. ZEST: Module to Manage String Libraries

ZEST (Zippy ensembler of strings) is a module analogous to AJAX, except that ZEST operates on string libraries such as those produced by POLIDENT. A string is a TAB1 record in ENDF nomenclature. Options are provided to allow merging from any number of files in a manner that allows the user to determine the final ordering, if desired.
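Assuming ZEST resolves duplicate identifiers with the same first-occurrence rule described for AJAX (an assumption based on the stated analogy), the merge can be sketched as:

```python
def merge_string_libraries(libraries):
    """Merge string libraries keyed by (MAT, MF, MT); the first occurrence
    of an identifier wins and later duplicates are dropped."""
    merged = {}
    for lib in libraries:
        for key, string in lib.items():
            merged.setdefault(key, string)   # keep the first occurrence only
    return merged
```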

11.11.55.1. Input Data

Block 1

-1$   Core assignment [1]

      1.      NWORD   not used (50000)

0$    Logical assignments [2]

      1.      LOG     logical number of library to be written (31)

      2.      LBIG    writes out in single or double precision (0)
                      0:      double
                      1:      single

1$    Library Selector [1]

      1.      NLOG    number of commands (or libraries) required to create LOG (1)

Terminate block 1 with a T.

Stack Block 2 and 3 one after the other NLOG times.

Block 2

2$    Input library selection [2]

      1.      NLIN    logical number of input library

      2.      NC      how the strings are to be treated (0)
                              -N: deletes N strings from NLIN to create LOG.
                              0: accepts all strings from NLIN.
                              N: adds N strings from NLIN to create LOG

Terminate Block 2 with a T. Only use Block 3 if NC != 0.

Block 3

3$    MAT numbers from NC [NC]

      1.      MAT     material identifier(s) of nuclides to be added or deleted. (0)
    Only used if NC != 0
    There must be exactly NC values.

4$    MT numbers from NC [NC]

      1.      MT      reaction identifier(s) of nuclides to be added or deleted. (0) Only used if NC != 0.
    There must be exactly NC values.

5$    MF numbers from NC [NC]

      1.      MF      File identifier(s) of nuclides to be added or deleted. (0) Only used if NC != 0.
    There must be exactly NC values.

6$    New MAT numbers from NC [NC]

      1.      MATnew  New material identifier(s) of nuclides to be added. (0) Only used if NC > 0.
    A zero leaves the identifier unchanged.

7$    New MT numbers from NC [NC]

      1.      MTnew   New reaction identifier(s) of nuclides to be added. (0) Only used if NC > 0.
    A zero leaves the identifier unchanged.

8$    New MF numbers from NC [NC]

      1.      MFnew   New file identifier(s) of nuclides to be added. (0) Only used if NC > 0.
    A zero leaves the identifier unchanged.

Terminate Block 3 with a T.
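As an illustration of selective copying with NC != 0, a hypothetical input that copies two strings from the library on logical unit 21 (MAT 9228, MF 3, MT 2 and MT 102, identifiers left unchanged) might look like the following. The unit and MAT numbers are illustrative only:

```
0$$   31      0       1$$ 1 T
2$$   21      2       T
3$$   9228    9228
4$$   2       102
5$$   3       3
6$$   0       0
7$$   0       0
8$$   0       0       T
```

Because NC = 2 is positive, the 3$-5$ arrays each carry exactly two entries naming the strings to add, and the zeros in the 6$-8$ arrays leave the MAT, MT, and MF identifiers unchanged on the output library.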

11.11.55.2. Sample Input

0$$   31      0       1$$ 10 T
2$$   21      0       T
2$$   22      0       T
2$$   23      0       T
2$$   24      0       T
2$$   25      0       T
2$$   26      0       T
2$$   27      0       T
2$$   28      0       T
2$$   29      0       T
2$$   30      0       T

This input instructs ZEST to combine the contents of the ten point cross section libraries on logical units 21–30 into a single point cross section library on logical unit 31. Note that the order in which the point cross section libraries are accessed determines the ordering on the output library, so a case like this can be used to force a particular ordering.
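The access-order semantics can be sketched in Python. This is an illustrative model only, not ZEST code, and it assumes that when the same (MAT, MF, MT) string appears on more than one input library the first occurrence wins, by analogy with the first-occurrence rule AJAX applies to duplicate nuclide identifiers; the identifiers below are placeholders:

```python
def combine(libraries):
    """Merge string libraries in access order; the first occurrence of a
    (MAT, MF, MT) key is kept and later duplicates are skipped."""
    out = []
    seen = set()
    for lib in libraries:              # order of the 2$$ cards
        for key, record in lib:        # key = (MAT, MF, MT)
            if key not in seen:
                seen.add(key)
                out.append((key, record))
    return out

# Two toy "libraries", standing in for logical units 21 and 22.
u21 = [((9228, 3, 1), "total from 21"), ((9228, 3, 2), "elastic from 21")]
u22 = [((9228, 3, 1), "total from 22"), ((9237, 3, 1), "total from 22")]

merged = combine([u21, u22])
# Unit 21 is accessed first, so its copy of (9228, 3, 1) is kept and the
# output ordering follows the access order of the input libraries.
```
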

11.11.55.3. Logical Unit Parameters

Variable   Unit number   Type     Description

LOG        31            Binary   logical number of library to be written

NLIN                     Binary