Introduction
Version: 3.3.3
opHA 3 includes a CLI tool that can perform the same operations as the GUI, but with additional debugging information, and it also allows task automation.
Code Block |
---|
/usr/local/omk/bin/opha-cli.pl
Usage: opha-cli.pl act=[action to take] [options...]

opha-cli.pl act=discover url_base=... username=... password=... role=... mirror=...
opha-cli.pl act=<import_peers|export_peers|list_peers>
opha-cli.pl act=delete_peer {cluster_id=...|server_name=...}
opha-cli.pl act=pull [data_types=X...] [peers=Y] [force=t]
    pull data types except nodes                                   primary <-- peers
opha-cli.pl act=sync-all-nodes [peers=Y]
    sync all node data from the primary to the pollers             primary --> peers
opha-cli.pl act=sync-processed-nodes
    sync node data based on changes done by NMIS9 node_admin.pl    primary --> peers
opha-cli.pl act=import_config_data
    for first installation, provide initial data (groups)
opha-cli.pl act=cleanup simulate=f/t
    clean metadata and files
opha-cli.pl act=clean_orphan_nodes simulate=f/t
    remove nodes with unknown cluster_id
opha-cli.pl act=resync_nodes peer=server_name
    remove the nodes from the poller in the primary and
    pull the nodes from the poller                                 primary <-- peers
opha-cli.pl act=clean_data peer=server_name [all=true]
    like resync_nodes but with all the data types                  primary <-- peers
    by default it just pulls data; all=true includes nodes
opha-cli.pl act=cleanup_poller simulate=f/t
    from the pollers, clean duplicate configuration items and files
opha-cli.pl act=check_duplicates
    check for duplicate nodes
opha-cli.pl act=get_status
opha-cli.pl act=setup-db
opha-cli.pl act=show_roles
opha-cli.pl act=data_verify
opha-cli.pl act=lock_peer {cluster_id=...|server_name=...}
opha-cli.pl act=unlock_peer {cluster_id=...|server_name=...}
opha-cli.pl act=peer_islocked {cluster_id=...|server_name=...}

Encryption key:
opha-cli.pl act=push_encryption_key |
Info |
---|
To get debug information for any command, run it with the argument debug=1..9. E.g. opha-cli.pl act=resync_nodes peer=server_name debug=8 |
Core functionality
Discover Peer
...
The sync-all-nodes action runs from the opHA cron job.
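As a sketch of that scheduling, the cron entry could look like the fragment below. The schedule and the file location are assumptions for illustration; check the opHA cron file actually installed on your system:

Code Block |
---|
# Hypothetical /etc/cron.d/opha entry (assumed five-minute schedule):
# sync all node data from the primary to the pollers.
*/5 * * * * root /usr/local/omk/bin/opha-cli.pl act=sync-all-nodes |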
sync-processed-nodes
Will sync the nodes processed by the NMIS9 node_admin.pl tool:
Code Block |
---|
opha-cli.pl act=sync-processed-nodes |
Import Initial data
For first installations, this provides the initial data: it sets up the groups for the pollers and the primary, and adds the peers.
Code Block |
---|
opha-cli.pl act=import_config_data |
Cleanup Functions
cleanup
This function cleans up metadata entries for missing files, and files with no metadata information. It mainly applies to configuration files:
Code Block |
---|
opha-cli.pl act=cleanup |
By default, it will run in simulation mode.
Use simulate=f to perform the cleanup function.
clean orphan nodes
It is possible to check which nodes are not associated with any cluster_id with the following command:
Code Block |
---|
opha-cli.pl act=clean_orphan_nodes simulate=f/t |
By default, it will run in simulation mode.
Use simulate=f to remove the nodes (and associated data).
resync nodes
By default, the primary pushes the nodes to the pollers. Running this command makes it possible to update the nodes from the pollers instead:
Code Block |
---|
opha-cli.pl act=resync_nodes peer=server_name |
Where:
- peer: Specify the server name.
Clean data
Will remove all the data from the peer and pull the data again.
By default, it does not remove/resync the nodes. To include them, add:
- all=true
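Following the usage summary above, a full run against a single poller could look like this (the peer name poller01 is a placeholder for illustration):

Code Block |
---|
opha-cli.pl act=clean_data peer=poller01 all=true |

Without all=true, the command pulls the other data types again but leaves the nodes untouched.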
Cleanup poller
This operation should be run on a poller. It will clean duplicate configuration items and files:
Code Block |
---|
opha-cli.pl act=cleanup_poller simulate=f/t |
By default, it will run in simulation mode.
Use simulate=f to actually remove the duplicate items and files.
Diagnosis information
get status
Gets all the peer status information as an array of Perl hashes:
Code Block |
---|
opha-cli.pl act=get_status |
This is the same information that we see in the opHA front page.
Show roles
Show the roles defined in the system:
Code Block |
---|
opha-cli.pl act=show_roles |
Data Verify
Shows how much data we have for each peer:
Code Block |
---|
opha-cli.pl act=data_verify |
This covers how many inventory records and roles exist, which peers are active or enabled, as well as duplicate nodes and duplicated catchall inventory records.
Check Duplicates
Code Block |
---|
opha-cli.pl act=check_duplicates |
Similar to data_verify, but will report just the duplicate data.
Lock Peer
(Version >= 3.3.3) While a peer is performing a critical operation, it is locked. We can check the lock status of a peer with:
Code Block |
---|
opha-cli.pl act=peer_islocked {cluster_id=...|server_name=...} |
We can change the lock status of a peer with:
Code Block |
---|
opha-cli.pl act=lock_peer {cluster_id=...|server_name=...}
opha-cli.pl act=unlock_peer {cluster_id=...|server_name=...} |
Setup DB
Sets up the DB indexes. This is run by the installer during installation or upgrade:
Code Block |
---|
opha-cli.pl act=setup-db |
Encryption
The primary can push the encryption key to all the pollers by running the following command:
Code Block |
---|
opha-cli.pl act=push_encryption_key |
It will only run if the server has the primary role, and the key will only be pushed if it was modified since the last time it was sent.
To force sending it anyway, run the command with the force=1 argument:
Code Block |
---|
opha-cli.pl act=push_encryption_key force=1 |