HEC-FFA Overview

HEC-FFA is used to compute flood frequencies in accordance with 'Guidelines for Determining Flood Flow Frequencies' Bulletin 17B of the U.S. Water Resources Council (WRC), March 1982. This guideline is designed for computing flood flow frequency curves where systematic stream gaging records of sufficient length (at least 10 years) to warrant statistical analysis are available as the basis for determination. The procedures do not cover watersheds where flood flows are appreciably altered by reservoir regulation or where the possibility of unusual events, such as dam failures, must be considered. This guideline is specifically developed for the treatment of annual flood peak discharge.


HEC-FFA was formerly called HECWRC. The name was changed to HEC-FFA, or simply FFA, to be more in keeping with other HEC computer program names, and to go back to something closer to its original name. HECWRC was originally a modification of the computer program FREQFLO written by Leo R. Beard and David Ford (Center for Research in Water Resources, the University of Texas at Austin) under contract to the Water Resources Council (WRC). The original program (FREQFLO) and documentation may be found in Appendix 13, Guidelines for Determining Flood Flow Frequencies, WRC, Bulletin 17, March 1976. The latest version of the Guidelines (Bulletin 17B) does not contain computer program documentation.

Flood Data

The following categories of flood data are recognized: systematic records, historic data, comparison with similar watersheds, and flood estimates from precipitation. When developing a flood flow frequency curve, the analyst should consider all available information. The four general types of data which can be included in the flood flow frequency analysis are described below:

  • Systematic Records
    Annual peak discharge information is observed systematically by many Federal and State agencies and private enterprises. Most annual peak records are obtained either from a continuous trace of river stages or from periodic observations of a crest-stage gage. A statistical analysis of these data is the primary basis for the determination of the flood flow frequency curve for each station.
  • Historic Data
    At many locations, particularly where man has occupied the flood plain for an extended period, there is information about major floods which occurred either before or after the period of systematic data collection. This information can often be used to make estimates of peak discharge.
  • Comparison With Similar Watersheds
    Comparisons between computed frequency curves and maximum flood data of the watershed being investigated and those in a hydrologically similar region are useful for identification of unusual events and for testing the reasonableness of flood flow frequency determinations.
  • Flood Estimates From Precipitation
    Flood discharges estimated from climatic data (rainfall and/or snowmelt) can be a useful adjunct to direct stream flow measurements. Such estimates, however, require at least adequate climatic data and a valid watershed model for converting precipitation to discharge. Unless such models are already calibrated to the watershed, considerable effort may be required to prepare such estimates.

Data Assumptions

Necessary assumptions for a statistical analysis are that the array of flood information is a reliable and representative time sample of random homogeneous events. Assessment of the adequacy and applicability of flood records is therefore a necessary first step in flood frequency analysis. The effects of climatic trends, randomness of events, watershed changes, mixed populations, and reliability of flow estimates on flood frequency analysis must be considered.

Determination Of The Frequency Curve

Flood events can be analyzed using either an annual or a partial-duration series. The annual flood series is based on the maximum flood peak for each year. A partial-duration series is obtained by taking all flood peaks equal to or greater than a predefined base flood. The main emphasis is on determination of the annual flood series; procedures are also available to convert an annual series to a partial-duration series. The Pearson Type III distribution with log transformation of the flood data (log-Pearson Type III) is recommended as the basic distribution for defining the annual flood series. The method of moments is used to determine the statistical parameters of the distribution from station data. Generalized relations are used to modify the station skew coefficient. Several methods are available for treatment of most flood record problems encountered. HEC-FFA also supports procedures for refining the basic curve determined from statistical analysis of the systematic record and historic flood data to incorporate information gained from comparisons with similar watersheds and flood estimates from precipitation.
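The core computation above, fitting a log-Pearson Type III distribution by the method of moments, can be sketched as follows. This is an illustrative outline, not HEC-FFA's actual code: the function names are invented here, the frequency factor uses the common Wilson-Hilferty approximation rather than the Guidelines' tabulated K values, and no skew weighting, outlier, or historic adjustments are applied.

```python
import math
from statistics import NormalDist

def pearson3_k(skew, exceedance_prob):
    """Pearson Type III frequency factor K via the Wilson-Hilferty
    approximation (adequate for moderate skew; Bulletin 17B uses a
    table in Appendix 3 instead)."""
    z = NormalDist().inv_cdf(1.0 - exceedance_prob)  # standard normal deviate
    if abs(skew) < 1e-6:
        return z  # zero skew reduces to the normal distribution
    g = skew
    return (2.0 / g) * ((1.0 + g * z / 6.0 - g * g / 36.0) ** 3 - 1.0)

def log_pearson3_quantile(peaks, exceedance_prob):
    """Fit log-Pearson Type III to annual peak flows by the method of
    moments on the log10-transformed data, then return the discharge
    at the given annual exceedance probability:
        log10(Q) = mean + K(skew, p) * standard deviation."""
    logs = [math.log10(q) for q in peaks]
    n = len(logs)
    mean = sum(logs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    # Station skew with the n/((n-1)(n-2)) small-sample factor
    skew = n * sum((x - mean) ** 3 for x in logs) / ((n - 1) * (n - 2) * sd ** 3)
    return 10 ** (mean + pearson3_k(skew, exceedance_prob) * sd)
```

For example, `log_pearson3_quantile(peaks, 0.01)` estimates the 1-percent-chance (100-year) flood from a list of annual peaks. In the actual procedure, the station skew computed here would first be weighted with a generalized skew before the quantiles are evaluated.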

The following computational methods are supported:

  • Graphical Analysis
    The data are arrayed and the plotting positions may be computed by the Weibull, median or Hazen formulae.
  • The Distribution
    The log-Pearson Type III distribution is used in the computation of the frequency curve.
  • Skew Coefficient
    The computed skew coefficient is weighted with the input generalized skew coefficient.
  • Broken Record
    A broken record is automatically analyzed as a continuous record.
  • Incomplete Record
    Missing data at the low end is indicated by a negative number (-1) and the conditional probability adjustment is used to determine the frequency curve.
  • Zero Flood Years
    Any flood events of zero are automatically deleted and the conditional probability adjustment is used to determine the frequency curve (p. 15 and Appendix 5).
  • Outliers
    Initially the program calculates the station skew coefficient for the systematic record which is presented under preliminary results in the output.

The program then tests for high or low outliers in an order that depends on the value of the station skew, as discussed on pages 17-19 and shown in the flow chart on page 12-3 of the Guidelines. If the station skew is greater than +0.4, tests and adjustments for high outliers and historic peaks are made before testing for low outliers. If the station skew is less than -0.4, tests and adjustments are made for low outliers first. If the skew is between -0.4 and +0.4, tests for both high and low outliers are made, based on the systematic record statistics, before any adjustments are made.
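The outlier screen itself compares each peak against thresholds set at K_N standard deviations from the mean of the log flows. The sketch below is illustrative, not HEC-FFA's implementation: the function name is invented, and K_N (the Bulletin 17B 10-percent one-sided test value) is approximated by a published curve fit to the Bulletin's table rather than looked up directly.

```python
import math

def outlier_thresholds(peaks):
    """High/low outlier thresholds in the style of the Bulletin 17B
    screening test, applied in log10 space:
        X_high = mean + K_N * s,   X_low = mean - K_N * s.
    K_N is approximated by a curve fit to the 10-percent-significance
    table, commonly quoted as valid for roughly 10 <= N <= 149."""
    logs = [math.log10(q) for q in peaks]
    n = len(logs)
    mean = sum(logs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    # Curve-fit approximation to the tabulated K_N values
    kn = -0.9043 + 3.345 * math.sqrt(math.log10(n)) - 0.4046 * math.log10(n)
    return 10 ** (mean - kn * s), 10 ** (mean + kn * s)
```

Peaks above the high threshold are candidates for treatment as historic data; peaks below the low threshold are deleted and the conditional probability adjustment is applied.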

  • Historic Events
    Weighted plotting positions and statistics are computed incorporating any input historic events.
  • Confidence Limits
    The .05 and .95 confidence limit curves are computed unless other limits are specified.
  • Expected Probability
    The frequency curve ordinates are computed with and without the expected probability adjustment.
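The graphical analysis option listed above assigns each ranked peak an exceedance-probability plotting position. A minimal sketch of the three formulae is given below; the function name is illustrative, and the constants shown for the median formula are the commonly used approximation to median ranks, which may differ slightly from the program's internal values.

```python
def plotting_positions(peaks, formula="weibull"):
    """Exceedance-probability plotting positions for a graphical
    frequency analysis. Peaks are ranked largest first, so rank m = 1
    is the biggest flood.
        Weibull: m / (N + 1)
        Median:  (m - 0.3175) / (N + 0.365)   (common approximation)
        Hazen:   (m - 0.5) / N
    Returns a list of (discharge, exceedance probability) pairs."""
    n = len(peaks)
    ranked = sorted(peaks, reverse=True)
    formulas = {
        "weibull": lambda m: m / (n + 1),
        "median": lambda m: (m - 0.3175) / (n + 0.365),
        "hazen": lambda m: (m - 0.5) / n,
    }
    pp = formulas[formula]
    return [(q, pp(m)) for m, q in enumerate(ranked, start=1)]
```

Plotting these pairs on probability paper alongside the computed log-Pearson Type III curve is the usual visual check on the analytical fit.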

Reliability Applications

The preceding sections have presented recommended procedures for determination of the flood frequency curve at a gaged location. When applying these curves to the solution of water resource problems, certain additional considerations must be kept in mind. Because the available data may be deficient or biased, and because population properties must be estimated from these data by some technique, various errors and information losses are introduced into the flood frequency determination. Differences between the population properties and estimates of these properties derived from sample data constitute uncertainties. Risk can be decreased or minimized by various water resources developments and measures, while uncertainties can be decreased only by obtaining more or better data and by using better statistical techniques. HEC-FFA provides procedures for computing confidence limits to the frequency curve along with those for calculating risk and for making expected probability adjustments.
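The risk calculation referred to above typically answers a simple question: what is the chance that a flood of a given annual exceedance probability occurs at least once over a project life of n years? Assuming independent years, this is the standard binomial-risk formula; the function name below is illustrative.

```python
def exceedance_risk(annual_exceedance_prob, n_years):
    """Probability that a flood with the given annual exceedance
    probability p is equaled or exceeded at least once in n_years,
    assuming independence between years:
        R = 1 - (1 - p) ** n"""
    return 1.0 - (1.0 - annual_exceedance_prob) ** n_years
```

For example, the 1-percent-chance (100-year) flood carries roughly a 26 percent risk of occurring at least once during a 30-year period, which is why such risk estimates accompany the frequency curve in planning studies.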