opHA-MB 5.1 introduces the concept of a Primary, multiple Poller servers, and a message bus (with replication):
The Primary is the server that keeps the information of all the pollers, and it is where all the information can be read from. The Consumer (ophad running on the Primary) reads data continuously off the message bus and stores it in the database.
The Producer (ophad running on the Pollers) has a change stream set up for the collections and receives updates from the database whenever there are any. This change stream data is pushed onto the message bus.
With replication and NATS, if the Main Primary goes down the Secondary Primary will take over; likewise, if a poller goes down its mirror will take over.
Here are terms used in this project together with their meanings.
| Term | Meaning |
|---|---|
| Main Primary | server instance running opHA with role = 'primary_master' |
| Secondary Primary | server instance running opHA with role = 'master'. Also referred to as just 'primary'. |
| Peer | server instance running opHA with role = 'poller' or 'mirror'. It can optionally be of type 'streaming', which means that it is linked to an ophad instance. |
| Poller | server instance running opHA with role = 'poller' |
| Mirror | server instance running opHA with role = 'mirror'. It is paired with a poller and polls the same devices as that poller; however, its data is not synced with the primary unless the poller is offline. |
| Producer | an instance of the ophad process running in 'producer' mode (as per configuration). This will typically be running on the same server instance as the poller (or mirror). |
| Consumer | an instance of the ophad process running in 'consumer' mode (as per configuration). This will typically be running on the same server instance as the primary. |
opHA-MB 5.1.0 can be installed on infrastructure in either of these scenarios:

| Scenario | Action |
|---|---|
| New to opHA/opHA-MB | Install to new servers |
| Running opHA 4.x | Upgrade existing servers running opHA 4.x |
IMPORTANT: All the servers should be set up with the following:
a. Server names set
b. MongoDB 6 installed
c. The role of the server on which opHA 5.1.0 is to be installed should be known and set appropriately (opha_role) in opCommon.json
d. opHA-MB licenses installed
e. Time properly synchronised (using chrony) on all the VMs, Primaries and Peers
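Item (e) can be sanity-checked with `chronyc tracking`, which reports the current clock offset. The sketch below parses a captured sample of that output so it is self-contained; on a live server you would pipe `chronyc tracking` directly instead of the heredoc, and the 0.5-second threshold is only illustrative.

```shell
#!/bin/sh
# Sketch: confirm chrony reports a small clock offset before pairing servers.
# sample_output stands in for `chronyc tracking` on a live server.
sample_output() {
cat <<'EOF'
Reference ID    : A9FEA97B (169.254.169.123)
Stratum         : 4
System time     : 0.000134323 seconds fast of NTP time
EOF
}

# The offset is the 4th field of the "System time" line.
offset=$(sample_output | awk '/^System time/ {print $4}')

# Flag anything over half a second as unsynchronised (illustrative threshold).
if awk -v o="$offset" 'BEGIN {exit !(o < 0.5)}'; then
  echo "clock offset ${offset}s: OK"
else
  echo "clock offset ${offset}s: NOT synchronised"
fi
```

Run this on every Primary and Peer; large or drifting offsets between members will cause replication and sync problems later.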
The default value of opha_role in opCommon.json is Standalone
shankarn@opha-dev7:~$ grep opha_role /usr/local/omk/conf/opCommon.json
"opha_role" : "Standalone",
Edit /usr/local/omk/conf/opCommon.json to change it to one of:
"Main Primary"
"Primary"
"Poller"
"Mirror"
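If you prefer to script this change, a minimal sketch follows. The role value and the sample file are illustrative; on a real server point CONF at /usr/local/omk/conf/opCommon.json and keep the backup in case the edit goes wrong.

```shell
#!/bin/sh
# Sketch: set opha_role non-interactively.
CONF=/tmp/opCommon.json          # in production: /usr/local/omk/conf/opCommon.json
ROLE="Poller"                    # one of: Main Primary, Primary, Poller, Mirror

# Create a small sample file so the sketch is self-contained.
cat > "$CONF" <<'EOF'
{
  "opha_role" : "Standalone"
}
EOF

cp "$CONF" "$CONF.bak"           # always keep a backup before editing

# Replace whatever value opha_role currently holds with $ROLE.
sed -i "s/\"opha_role\"[[:space:]]*:[[:space:]]*\"[^\"]*\"/\"opha_role\" : \"$ROLE\"/" "$CONF"
grep opha_role "$CONF"
```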
Install the opHA-MB for NMIS & opCharts license on the Main Primary server only.
To obtain a license please contact FirstWave Sales.
Currently there are two available licenses, and Message Bus will not run without a license:
opHA-MB for NMIS & opCharts (this license is required for Message Bus to run).
NMIS and opCharts data are synced from all pollers/mirrors to the Main Primary using Message Bus. In replication mode, if the Main Primary were to go down, the Secondary Primary will retain the NMIS and opCharts data.
opHA-MB for opEvents (this license is an add-on for opEvents).
With this add-on, opEvents uses Message Bus to sync opEvents data from each poller/mirror to the Main Primary, and in replication mode opEvents data is also synced to the Secondary Primary. If the Main Primary were to go down, the Secondary Primary will retain the opEvents data.
You need to enter a License Key in Modules → opLicensing on the Main Primary.
Download the required software onto each server.
We will be using the example host names below throughout this guide.
Replace them with your own where applicable. For this example we have the required six servers for the setup.
IMPORTANT:
One server should be set as an Arbiter; in this example we are using the Poller1 server.
Main Primary Server: opha-dev1.opmantek.net
Poller1 Server: opha-dev2.opmantek.net [Arbiter Server]
Mirror1 Server: opha-dev3.opmantek.net
Poller2 Server: opha-dev4.opmantek.net
Mirror2 Server: opha-dev5.opmantek.net
Secondary Primary Server: opha-dev6.opmantek.net
Run the installation commands on each server as needed.
All the following setup commands should be run as the root user:
sh ./<executable>
e.g.
sudo sh ./opHA-Linux-x86_64-5.1.0.run
During the installation on the arbiter server you must answer 'n' at the following prompt.
This configuration should be applied to the Main Primary, Arbiter and Secondary Primary servers ONLY.
Edit Configuration: Update the /etc/nats-server.conf file with the following settings:
On Main Primary:
server_name: opha-dev1.opmantek.net
host: "opha-dev1.opmantek.net"
routes: [
# secondary primary
"nats://opha-dev6.opmantek.net:6222"
# arbiter
"nats://opha-dev2.opmantek.net:6222"
]
On Secondary Primary:
server_name: opha-dev6.opmantek.net
host: "opha-dev6.opmantek.net"
routes: [
# main primary
"nats://opha-dev1.opmantek.net:6222"
# arbiter
"nats://opha-dev2.opmantek.net:6222"
]
On Arbiter:
server_name: opha-dev2.opmantek.net
host: "opha-dev2.opmantek.net"
routes: [
# main primary
"nats://opha-dev1.opmantek.net:6222"
# secondary primary
"nats://opha-dev6.opmantek.net:6222"
]
Sample file for Main Primary:
server_name: "opha-dev1.opmantek.net" #The local server
http_port: 8222
listen: 4222
jetstream: enabled
#tls {
# cert_file: "<path>"
# key_file: "<path>"
# #ca_file: "<path>"
# verify: true
#}
log_file: "/var/log/nats-server.log"
accounts {
$SYS {
users: [
{ user: "admin",
pass: "password"
}
]
}
ophad: {
users: [
{ user: "omkadmin", password: "op42opha42" }
]
jetstream: enabled
}
}
jetstream {
store_dir: "/opt/nats/storage"
max_memory_store: 1028M
max_file_store: 1028M
}
cluster {
name: "C1"
host: "opha-dev1.opmantek.net" #The current host
# the current server
listen: "0.0.0.0:6222"
routes: [
# secondary primary
"nats://opha-dev6.opmantek.net:6222"
# server with the arbiter
"nats://opha-dev2.opmantek.net:6222"
# other servers
]
}
Backup Configuration: Make a backup copy of /usr/local/omk/conf/opCommon.json before making any changes.
Add NATS Cluster Configuration: Ensure the following configuration is included in opCommon.json on all servers. In the example below we have added the Main Primary, Arbiter and Secondary Primary FQDNs (and port/s where applicable) to the nats_cluster attribute; you can find this in the database section.
"nats_cluster": [ "opha-dev1.opmantek.net", "opha-dev2.opmantek.net", "opha-dev6.opmantek.net" ]
Ensure "nats_num_replicas" is set to 3 instead of 1 in /usr/local/omk/conf/opCommon.json.
Ensure there is only one instance of "nats_num_replicas" and "nats_cluster" in opCommon.json:
"database" : {
"nats_num_replicas": 3,
"nats_cluster": [ "opha-dev1.opmantek.net", "opha-dev2.opmantek.net", "opha-dev6.opmantek.net" ]
}
If there is more than one instance of these two parameters, there could be issues:
shankarn@opha-dev7:/usr/local/omk/bin$ grep nats_cluster /usr/local/omk/conf/opCommon.json
"nats_cluster" : [ "opha-dev4.opmantek.net", "opha-dev5.opmantek.net", "opha-dev7.opmantek.net" ],
shankarn@opha-dev7:~$ grep nats_num_replicas /usr/local/omk/conf/opCommon.json
"nats_num_replicas" : 3,
shankarn@opha-dev7:~$
Main Primary and Secondary Primary only: Ensure there is only one instance of "db_replica_set" and "db_mongo_cluster" in opCommon.json, and that these two variables are set as below.
"database" : {
"db_replica_set": "rs1",
"db_mongo_cluster": [ "opha-dev1.opmantek.net", "opha-dev6.opmantek.net" ],
}
If there is more than one instance of these two parameters, there could be issues:
shankarn@opha-dev7:/usr/local/omk/bin$ grep db_mongo_cluster /usr/local/omk/conf/opCommon.json
"db_mongo_cluster" : [ "opha-dev4.opmantek.net", "opha-dev7.opmantek.net" ],
shankarn@opha-dev7:~$ grep nats_num_replicas /usr/local/omk/conf/opCommon.json
"nats_num_replicas" : 3,
shankarn@opha-dev7:~$
In /usr/local/omk/conf/opCommon.json, for the Main Primary and Secondary Primary only, add the following information under the "database" key. db_mongo_cluster is the cluster of the Main Primary and Secondary Primary:
"database" : {
"db_replica_set": "rs1",
"db_mongo_cluster": [ "opha-dev1.opmantek.net", "opha-dev6.opmantek.net" ],
"nats_num_replicas": 3,
"nats_cluster": [ "opha-dev1.opmantek.net", "opha-dev2.opmantek.net", "opha-dev6.opmantek.net" ]
}
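The duplicate-key warnings above can be checked mechanically. The sketch below uses a throwaway sample file containing a deliberate duplicate; on a real server, point CONF at /usr/local/omk/conf/opCommon.json instead and skip the heredoc.

```shell
#!/bin/sh
# Sketch: warn if any of the four cluster-related keys appears more than once.
CONF=/tmp/opCommon-check.json    # in production: /usr/local/omk/conf/opCommon.json

# Sample file with a deliberate duplicate of nats_num_replicas.
cat > "$CONF" <<'EOF'
{
  "database" : {
    "db_replica_set" : "rs1",
    "db_mongo_cluster" : [ "opha-dev1.opmantek.net", "opha-dev6.opmantek.net" ],
    "nats_num_replicas" : 3,
    "nats_cluster" : [ "opha-dev1.opmantek.net", "opha-dev2.opmantek.net", "opha-dev6.opmantek.net" ]
  },
  "nats_num_replicas" : 1
}
EOF

for key in db_replica_set db_mongo_cluster nats_num_replicas nats_cluster; do
  n=$(grep -c "\"$key\"" "$CONF")
  if [ "$n" -gt 1 ]; then
    echo "WARNING: \"$key\" appears $n times in $CONF"
  fi
done
```

Any WARNING line means the file should be cleaned up before restarting the daemons.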
The following steps should be performed only on the Main Primary MongoDB instance.
Connect to MongoDB using the mongosh command and configure the replica set:
Run the following command:
mongosh --username opUserRW --password op42flow42 admin
Add the Main Primary and Secondary Primary to the command below and run it:
rs.initiate({ _id: "rs1", version: 1, members: [ { _id: 0, host : "opha-dev1.opmantek.net:27017", priority: 2 }, { _id: 1, host : "opha-dev6.opmantek.net:27017", priority: 1 } ] })
Run the following command to set the default write concern:
db.adminCommand({ "setDefaultRWConcern": 1, "defaultWriteConcern": { "w": 1 } })
Run the following command to add an arbiter to the replica set:
rs.addArb("opha-dev2.opmantek.net:27018")
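To confirm the replica set formed as intended, rs.status() should report one PRIMARY, one SECONDARY and one ARBITER. The sketch below parses a captured, abbreviated sample of that output so it is self-contained; the mongosh invocation in the comment shows how you would obtain it live (the --eval expression is an example, not taken from this guide).

```shell
#!/bin/sh
# Sketch: check replica set member states after rs.initiate() and rs.addArb().
# On the Main Primary you would run something like:
#   mongosh -u opUserRW -p op42flow42 admin \
#     --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'
# sample_status stands in for that command's output here.
sample_status() {
cat <<'EOF'
opha-dev1.opmantek.net:27017 PRIMARY
opha-dev6.opmantek.net:27017 SECONDARY
opha-dev2.opmantek.net:27018 ARBITER
EOF
}

for state in PRIMARY SECONDARY ARBITER; do
  if sample_status | grep -q " $state\$"; then
    echo "$state present"
  else
    echo "$state MISSING"
  fi
done
```

If any state is reported MISSING, fix the replica set before proceeding to the NATS steps.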
Run the following command on the Main Primary, Secondary Primary, and Arbiter servers ONLY:
sudo systemctl restart mongod
Start the NATS service on these three servers ONLY: Main Primary, Secondary Primary, and Arbiter:
systemctl start nats-server
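Once nats-server is running on all three cluster members, the monitoring port configured earlier (http_port: 8222) exposes route status via the /routez endpoint. The sketch below parses a trimmed sample of that JSON so it is self-contained; the curl command in the comment is how you would query it live, and in this guide's topology each member should see 2 routes (the other two members).

```shell
#!/bin/sh
# Sketch: verify the NATS cluster formed, using the /routez monitoring endpoint.
# On the Main Primary you would run something like:
#   curl -s http://opha-dev1.opmantek.net:8222/routez
# sample_routez stands in for that command's (trimmed) JSON output.
sample_routez() {
cat <<'EOF'
{ "server_name": "opha-dev1.opmantek.net", "num_routes": 2 }
EOF
}

# Extract the num_routes value (crude but dependency-free).
routes=$(sample_routez | sed -n 's/.*"num_routes": \([0-9]*\).*/\1/p')
if [ "$routes" -eq 2 ]; then
  echo "cluster OK: $routes routes"
else
  echo "cluster incomplete: $routes routes (expected 2)"
fi
```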
After configuration changes, restart mongod on all Mirror and Poller servers ONLY:
systemctl restart mongod
After configuration changes have been made, you will need to restart the relevant FirstWave module daemons applicable to your server.
For example, if you have NMIS9, opCharts, opEvents and opHA 5 installed, you would execute:
systemctl restart nmis9d opchartsd opeventsd omkd
Click on Peers in the opHA-MB portal on the Main Primary (http://<fqdn of Main Primary>/en/omk/opHA/peers).
Proceed to the next step after all the Peers have been discovered.
For all the opHA Peers, go to the opHA Peers page and click on "Sync all nodes" followed by "Pull Peer Data".
Execute the following commands on the poller and mirror servers.
cd /usr/local/omk/bin
sudo systemctl restart ophad
# if ophad is not running
./ophad cmd producer start
If the target_dir is not /usr/local/omk but different, say /data/omk, please suffix the command with the CLI option:
--opha-cli-path /data/omk/bin/opha-cli.pl
cd /usr/local/omk/bin
sudo /data/omk/bin/ophad cmd producer start --opha-cli-path /data/omk/bin/opha-cli.pl
Please run through the opHA-MB 5.0 User Guide to find any config issues that have been missed.
Accept EULAs: When you log in to each server, confirm that all End User License Agreements (EULAs) are accepted.
IMPORTANT: All the servers should be set up with the following:
a. Server names set
b. MongoDB 6 installed
c. The role of the VM on which opHA 5.1.0 is to be installed should be known and set appropriately (opha_role) in opCommon.json
d. opHA-MB licenses installed
The default value of opha_role in opCommon.json is Standalone
shankarn@opha-dev7:~$ grep opha_role /usr/local/omk/conf/opCommon.json
"opha_role" : "Standalone",
Edit /usr/local/omk/conf/opCommon.json to change it to one of:
"Main Primary"
"Primary"
"Poller"
"Mirror"
Install the opHA-MB for NMIS & opCharts license on the Main Primary server only.
To obtain a license please contact FirstWave Sales.
Currently there are two available licenses, and Message Bus will not run without a license:
opHA-MB for NMIS & opCharts (this license is required for Message Bus to run).
NMIS and opCharts data are synced from all pollers/mirrors to the Main Primary using Message Bus. In replication mode, if the Main Primary were to go down, the Secondary Primary will retain the NMIS and opCharts data.
opHA-MB for opEvents (this license is an add-on for opEvents).
With this add-on, opEvents uses Message Bus to sync opEvents data from each poller/mirror to the Main Primary, and in replication mode opEvents data is also synced to the Secondary Primary. If the Main Primary were to go down, the Secondary Primary will retain the opEvents data.
You need to enter a License Key in Modules → opLicensing on the Main Primary.
Download the required software onto each server.
IMPORTANT: All the servers should be set up with the following:
a. opHA 4 currently installed
b. Pollers and mirrors discovered by the Main Primary
c. Server names set
d. Secondary Primary server set up as Primary
e. One server set up as an Arbiter (a poller can be used for this)
f. MongoDB 6 installed
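Item f can be verified before upgrading: `mongod --version` prints a line like "db version v6.0.14". The sketch below parses a captured sample of that output so it is self-contained (the exact version string is illustrative); on a real server, pipe the command itself instead.

```shell
#!/bin/sh
# Sketch: pre-upgrade check that the installed MongoDB major version is 6.
# sample_version stands in for `mongod --version` on a live server.
sample_version() {
  echo "db version v6.0.14"
}

# Pull the major version number out of "db version vX.Y.Z".
major=$(sample_version | sed -n 's/^db version v\([0-9]*\)\..*/\1/p')
if [ "$major" = "6" ]; then
  echo "MongoDB major version $major: OK"
else
  echo "MongoDB major version $major: install MongoDB 6 before upgrading"
fi
```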
We will be using the example host names below throughout this guide.
Replace them with your own where applicable. For this example we have the required six servers for the setup.
IMPORTANT:
One server should be set as an Arbiter; in this example we are using the Poller1 server.
Main Primary Server: opha-dev1.opmantek.net
Poller1 Server: opha-dev2.opmantek.net [Arbiter Server]
Mirror1 Server: opha-dev3.opmantek.net
Poller2 Server: opha-dev4.opmantek.net
Mirror2 Server: opha-dev5.opmantek.net
Secondary Primary Server: opha-dev6.opmantek.net
Run the installation commands on each server as needed.
All the following setup commands should be run as the root user:
sh ./<executable>
e.g.
sh ./opHA-Linux-x86_64-5.1.0.run
During the installation on the arbiter server you must answer 'n' at the following prompt.
This configuration should be applied to the Main Primary, Arbiter and Secondary Primary servers ONLY.
Edit Configuration: Update the /etc/nats-server.conf file with the following settings:
server_name: "opha-dev1.opmantek.net" #The local server
http_port: 8222
listen: 4222
jetstream: enabled
#tls {
# cert_file: "<path>"
# key_file: "<path>"
# #ca_file: "<path>"
# verify: true
#}
log_file: "/var/log/nats-server.log"
accounts {
$SYS {
users: [
{ user: "admin",
pass: "password"
}
]
}
ophad: {
users: [
{ user: "omkadmin", password: "op42opha42" }
]
jetstream: enabled
}
}
jetstream {
store_dir: "/opt/nats/storage"
max_memory_store: 1028M
max_file_store: 1028M
}
cluster {
name: "C1"
host: "opha-dev1.opmantek.net" #The current host
# the current server
listen: "0.0.0.0:6222"
routes: [
# secondary primary
"nats://opha-dev6.opmantek.net:6222"
# server with the arbiter
"nats://opha-dev2.opmantek.net:6222"
# other servers
]
}
Backup Configuration: Make a backup copy of /usr/local/omk/conf/opCommon.json before making any changes.
Add NATS Cluster Configuration: Ensure the following configuration is included in opCommon.json on all servers. In the example below we have added the Main Primary, Arbiter and Secondary Primary FQDNs (and port/s where applicable) to the nats_cluster attribute; you can find this in the database section.
"nats_cluster": [ "opha-dev1.opmantek.net", "opha-dev2.opmantek.net", "opha-dev6.opmantek.net" ]
Ensure "nats_num_replicas" is set to 3 instead of 1 in /usr/local/omk/conf/opCommon.json.
Ensure there is only one instance of "nats_num_replicas" and "nats_cluster" in opCommon.json:
"database" : {
"nats_num_replicas": 3,
"nats_cluster": [ "opha-dev1.opmantek.net", "opha-dev2.opmantek.net", "opha-dev6.opmantek.net" ]
}
If there is more than one instance of these two parameters, there could be issues:
shankarn@opha-dev7:/usr/local/omk/bin$ grep nats_cluster /usr/local/omk/conf/opCommon.json
"nats_cluster" : [ "opha-dev4.opmantek.net", "opha-dev5.opmantek.net", "opha-dev7.opmantek.net" ],
shankarn@opha-dev7:~$ grep nats_num_replicas /usr/local/omk/conf/opCommon.json
"nats_num_replicas" : 3,
shankarn@opha-dev7:~$
Main Primary and Secondary Primary only: Ensure there is only one instance of "db_replica_set" and "db_mongo_cluster" in opCommon.json, and that these two variables are set as below.
"database" : {
"db_replica_set": "rs1",
"db_mongo_cluster": [ "opha-dev1.opmantek.net", "opha-dev6.opmantek.net" ],
}
If there is more than one instance of these two parameters, there could be issues:
shankarn@opha-dev7:/usr/local/omk/bin$ grep db_mongo_cluster /usr/local/omk/conf/opCommon.json
"db_mongo_cluster" : [ "opha-dev4.opmantek.net", "opha-dev7.opmantek.net" ],
shankarn@opha-dev7:~$ grep nats_num_replicas /usr/local/omk/conf/opCommon.json
"nats_num_replicas" : 3,
shankarn@opha-dev7:~$
In /usr/local/omk/conf/opCommon.json, for the Main Primary and Secondary Primary only:
"database" : {
"db_replica_set": "rs1",
"db_mongo_cluster": [ "opha-dev1.opmantek.net", "opha-dev6.opmantek.net" ],
"nats_num_replicas": 3,
"nats_cluster": [ "opha-dev1.opmantek.net", "opha-dev2.opmantek.net", "opha-dev6.opmantek.net" ]
}
The following steps should be performed only on the Main Primary MongoDB instance.
Connect to MongoDB using the mongosh command and configure the replica set:
Run the following command:
mongosh --username opUserRW --password op42flow42 admin
Add the Main Primary and Secondary Primary to the command below and run it:
rs.initiate({ _id: "rs1", version: 1, members: [ { _id: 0, host : "opha-dev1.opmantek.net:27017", priority: 2 }, { _id: 1, host : "opha-dev6.opmantek.net:27017", priority: 1 } ] })
Run the following command to set the default write concern:
db.adminCommand({ "setDefaultRWConcern": 1, "defaultWriteConcern": { "w": 1 } })
Run the following command to add an arbiter to the replica set:
rs.addArb("opha-dev2.opmantek.net:27018")
Run the following command on the Main Primary, Secondary Primary, and Arbiter servers ONLY:
sudo systemctl restart mongod
Start the NATS service on these three servers ONLY: Main Primary, Secondary Primary, and Arbiter:
systemctl start nats-server
After configuration changes, restart mongod on all Mirror and Poller servers ONLY:
systemctl restart mongod
After configuration changes have been made, you will need to restart the relevant FirstWave module daemons applicable to your server.
For example, if you have NMIS9, opCharts, opEvents and opHA 5 installed, you would execute:
systemctl restart nmis9d opchartsd opeventsd omkd
Click on Peers in the opHA-MB portal on the Main Primary (http://<fqdn of Main Primary>/en/omk/opHA/peers).
Proceed to the next step after all the Peers have been discovered.
For all the opHA Peers, go to the opHA Peers page and click on "Sync all nodes" followed by "Pull Peer Data". Please refer to https://docs.community.firstwave.com/wiki/spaces/opHA/pages/3272605706/opHA-MB+5.0+User+Guide#Scenario-1-%3A-Using-opHA4-‘Pull’-on-Primary-to-synchronize-nmisng-collections for more details.
sudo systemctl restart ophad
Execute the following commands on the poller and mirror servers.
cd /usr/local/omk/bin
./ophad cmd producer start
If the target_dir is not /usr/local/omk but different, say /data/omk, please suffix the command with the CLI option:
--opha-cli-path /data/omk/bin/opha-cli.pl
cd /usr/local/omk/bin
sudo /data/omk/bin/ophad cmd producer start --opha-cli-path /data/omk/bin/opha-cli.pl
Please run through the opHA-MB 5.0 User Guide to find any config issues that have been missed.
Accept EULAs: When you log in to each server, confirm that all End User License Agreements (EULAs) are accepted.