...
```
sudo systemctl daemon-reload
sudo systemctl restart ophad
```
Scenario 4: Using the ophad command line to verify the configuration and connection status

Run `sudo /usr/local/omk/bin/ophad verify` on all Peers and on the Primary. The last line, `ophad.verify: ready for liftoff 🚀`, indicates the configuration is good.
```
shankarn@opha-dev5:~$ sudo /usr/local/omk/bin/ophad verify
[sudo] password for shankarn:
ophad v0.0.52: agent
Appending to file "/usr/local/omk/log/ophad.log"
Settings -----------------------------------------
* ClusterId: 783d7b91-6c64-4db9-a28f-6364a54b8505
* OMKDatabase:
  * ConnectionTimeout: 5h33m20s
  * RetryTimeout: 3m0s
  * PingTimeout: 33m20s
  * QueryTimeout: 1h23m20s
  * Port: 27017
  * Server: localhost
  * MongoCluster: []
  * ReplicaSet: (blank)
  * Name: omk_shared
  * Username: opUserRW
  * Password: ******
  * WriteConcern: 1
  * Uri: (blank)
  * BatchSize: 0
  * BatchTimeout: 0
* NMISDatabase:
  * ConnectionTimeout: 2m0s
  * RetryTimeout: 3m0s
  * PingTimeout: 20s
  * QueryTimeout: 1h23m20s
  * Port: 27017
  * Server: localhost
  * MongoCluster: []
  * ReplicaSet: (blank)
  * Name: nmisng
  * Username: opUserRW
  * Password: ******
  * WriteConcern: 1
  * Uri: (blank)
  * BatchSize: 50
  * BatchTimeout: 500
* OpEventsDatabase:
  * ConnectionTimeout: 2m0s
  * RetryTimeout: 3m0s
  * PingTimeout: 20s
  * QueryTimeout: 5m0s
  * Port: 27017
  * Server: localhost
  * MongoCluster: []
  * ReplicaSet: (blank)
  * Name: opevents
  * Username: opUserRW
  * Password: ******
  * WriteConcern: 1
  * Uri: (blank)
  * BatchSize: 50
  * BatchTimeout: 500
* OMK:
  * LogLevel: info
  * BindAddr: *
  * Directories:
    * Base: /usr/local/omk
    * Conf: /usr/local/omk/conf
    * Logs: /usr/local/omk/log
    * Var: /usr/local/omk/var
* OPHA:
  * DBName: opha
  * StreamingApps: [nmis opevents]
  * Logfile: /usr/local/omk/log/ophad.log
  * MongoWatchFilters: []
  * StreamType: nats
  * AgentPort: 6000
  * NonActiveTimeout: 8m0s
  * ResumeTokenCollection: resume_token
  * OpHACliPath: /usr/local/omk/bin/opha-cli.pl
  * Compression: true
  * Role: Poller
  * Consumer: false
  * Producer: false
  * ConsumerPollerSet: (blank)
  * DebugEnabled: false
* Redis:
  * RedisServer: localhost
  * RedisPort: 6379
  * RedisPassword: ******
  * RetryTimeout: 3m0s
  * RedisStreamLenCheckPeriod: 5
  * RedisProducerMaxStreamLength: 10000
  * MaxRetries: 180
  * RedisTLSEnabled: false
  * RedisTLSSkipVerify: false
  * RedisProducerDegradeTimeout: 10
  * RedisProducerFullDegradeTimeout: 10
* Kafka:
  * Seeds: localhost:63616,localhost:63627,localhost:63629
  * RetryTimeout: 3m0s
  * MaxRetries: 180
* Nats:
  * NatsServer: opha-dev4.opmantek.net
  * NatsCluster: []
  * NatsPort: 4222
  * NatsNumReplicas: 1
  * NatsUsername: omkadmin
  * NatsPassword: ******
  * RetryTimeout: 3m0s
  * NatsStreamLenCheckPeriod: 5
  * NatsProducerMaxMsgPerSubject: 1000000
  * NatsMaxAge: 604800
  * MaxRetries: 180
  * NatsTLSEnabled: false
  * NatsTLSCert: <path>
  * NatsTLSKey: <path>
  * NatsTLSSkipVerify: false
  * NatsProducerDegradeTimeout: 10
  * NatsProducerFullDegradeTimeout: 10
* Authentication:
  * AuthTokenKeys: ******
--------------------------------------------------
2025-10-22T08:01:46.329+1100 [INFO] ophad.verify: verify nmis9 mongodb connection with database: name=nmisng
2025-10-22T08:01:46.451+1100 [INFO] ophad.verify: MongoDB NMIS connect: maybe="found nodes collection in nmis9 ✅"
2025-10-22T08:01:46.451+1100 [INFO] ophad.verify: verify omk mongodb connection with database: name=opha
2025-10-22T08:01:46.551+1100 [INFO] ophad.verify: MongoDB OMK connect: maybe="found opstatus collection in omk database ✅"
2025-10-22T08:01:46.575+1100 [INFO] ophad.verify: Nats connect:
  result=
  | can connect to nats-server: opha-dev4.opmantek.net version: 2.11.9 ✅
  | we can connect to Nats-server ✅
2025-10-22T08:01:46.575+1100 [INFO] ophad.verify: ready for liftoff 🚀
```
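When verifying many Peers, checking the final line by hand gets tedious. The sketch below is one way to script the check (it is not official tooling): capture the verify output and test the last line for the `ready for liftoff` marker. The capture path and the simulated output are illustrative; on a real Peer you would populate the log with `sudo /usr/local/omk/bin/ophad verify 2>&1 | tee "$VERIFY_LOG"` instead of the `printf`.

```shell
#!/bin/sh
# Hypothetical helper: decide pass/fail from a captured 'ophad verify' run.
VERIFY_LOG=$(mktemp)

# Simulated tail of a successful run, for illustration only.
# On a real Peer, replace this with:
#   sudo /usr/local/omk/bin/ophad verify 2>&1 | tee "$VERIFY_LOG"
printf '%s\n' \
  '2025-10-22T08:01:46.575+1100 [INFO] ophad.verify: Nats connect:' \
  '2025-10-22T08:01:46.575+1100 [INFO] ophad.verify: ready for liftoff 🚀' \
  > "$VERIFY_LOG"

# The success marker appears on the last line when the configuration is good.
if tail -n 1 "$VERIFY_LOG" | grep -q 'ready for liftoff'; then
  echo "ophad configuration verified"
else
  echo "verify failed; inspect $VERIFY_LOG" >&2
  exit 1
fi
```

Wrapped in an `ssh` loop over your Peer hostnames, the same check can confirm every node in the cluster in one pass.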