If opFlow has grown too large for the disk or partition it is installed on, the following process can be used to clean up all flow/conversation data while keeping other data, such as reports.
There are two types of data that need to be cleaned up:
- Raw flow data (nfdump files)
- Database data
Before starting
It's best to shut all daemons down before starting.
service nfdump stop
service opflowd stop
service omkd stop
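Before deleting anything, it is worth confirming that the daemons have actually exited. A minimal check, assuming the usual process names (nfcapd for the nfdump collector, plus opflowd and omkd):
# confirm that no opFlow-related daemons are still running
ps -ef | grep -E 'nfcapd|nfdump|opflowd|omkd' | grep -v grep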
Raw flow data
The opFlow installer adds a cron job that cleans these files up. It uses the config variable opflow_raw_files_age_days and purges any raw flows that are older than the number of days specified.
# purge the raw nfdump input files once daily
23 4 * * * root /usr/local/omk/bin/opflow-cli.pl act=purge-raw quiet=true
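The retention period can be adjusted in omk/conf/opCommon.nmis. The value below is only an illustration; check your own configuration for the actual default:
# keep raw nfdump files for this many days before the daily purge removes them
'opflow_raw_files_age_days' => 7,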
To clean up manually, find the directory the files are saved into by looking at the config file omk/conf/opCommon.nmis:
# where nfdump inputs are expected, and saved dailies are kept
'<opflow_dir>' => '/var/lib/nfdump',
The default is shown in the code block above. To clean out all flow data, simply delete all files in the directory listed:
rm -rf /var/lib/nfdump/*   # NOTE: be sure this is the directory in the config found above
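If the goal is simply to reclaim disk space rather than to remove all history, a more selective alternative is to delete only older capture files. This is a minimal sketch, assuming the default /var/lib/nfdump location and nfdump's standard nfcapd.* file naming; adjust the path and age to suit your environment:
# delete only capture files older than 30 days
find /var/lib/nfdump -type f -name 'nfcapd.*' -mtime +30 -delete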
Database data
By default, all opFlow data is stored in a database named 'flows'. This is configurable: opCommon.nmis defines the database name with the setting 'opflow_db_name' => "flows".
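The remaining step is to remove the flow data from that database. As a rough sketch only, assuming opFlow is using its MongoDB backend, that the database name matches opflow_db_name above, and that no authentication is required (add -u/-p options if MongoDB auth is enabled), the database could be dropped from the command line:
# drop the 'flows' database (MongoDB will recreate it on the next write)
mongo flows --eval 'db.dropDatabase()'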
To be continued...