This page provides a brief overview of the major changes between opHA 3 releases.
Introduction
opHA introduces the concept of master and poller servers:
- The master is the node that holds the information from all the pollers, and it is where all of that information can be read.
- The pollers collect their own data and send that information to the master when it is requested.
The synchronisation of node information is driven by the master, which requests the information from each poller with pull requests.
The first time the script runs, it requests all the data from each configured poller. Subsequent runs request only the data modified since the last synchronisation.
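For example, a pull can be triggered manually on the master with the opha-cli tool (normally it is scheduled from cron, as described in the 3.0.2 notes below):
# run on the master: fetches new or changed data from each configured poller
/usr/local/omk/bin/opha-cli.pl act=pull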
3.0.8 BETA
Released June 2, 2019
- Bugfix: the Discover Peer window was closing when required fields were missing.
- Bugfix: set the last_update field to the synchronisation time. Previously, some data was not being updated on the master, and the expire_at field for events was not being updated in all documents.
- New CLI cleanup functions to clean up data from the CLI tool.
- Added a new CLI function, get_status, to get the status of each poller in JSON format (see the example below).
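A minimal sketch of calling it from the command line; the act=get_status argument follows the act= pattern used elsewhere on this page and is an assumption rather than confirmed syntax:
# assumed invocation: prints the status of each poller as JSON
/usr/local/omk/bin/opha-cli.pl act=get_status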
3.0.7 BETA
Released December 26, 2019
opHA 3.0.7 requires NMIS 9.0.6
- Added new features to centralised configuration from the master, with support for OMK and NMIS files:
- Added new configuration file types.
- Remove files from pollers.
- Add a button with rollback instructions.
- Improved landing page with information about the peers. The peers now have a status API that checks the daemons and the database status (see the example after this list).
- Send the role to the pollers: if the role is set to poller, the user is not allowed to use the GUI functions.
- Add conf.d to support zip.
- New opHA cleanup functions.
- View resulting configuration files.
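A rough illustration of checking a peer's status from the command line; the endpoint path and the credentials shown are assumptions and not confirmed by this page:
# hypothetical status check against a poller (path and credentials are placeholders)
curl -u user:password http://poller/en/omk/opHA/api/v1/status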
3.0.5 BETA
Released August 22, 2019
opHA 3.0.5 requires NMIS 9.0.6
- Support for centralised configuration from the master, covering both OMK and NMIS files.
3.0.4 BETA
- Support for NMIS 9 to show poller nodes from the master.
- A new button on the peers screen to manually edit a poller's URL.
3.0.3 BETA
- Support for node deletion on the pollers. Previously, when a node was deleted on a poller, its data was not removed on the master. The pull process now removes nodes, and their associated data, that have been removed on the poller.
- Retry policy on pull failures. Previously, a failure during a poller update ended the pull. It is now possible to specify a retry policy:
- retry_number: how many times to retry a request before the process finishes unsuccessfully (default 3).
- delay: how many seconds to wait between retries (default 5).
- These parameters can be modified in opCommon.nmis:
'opha_transfer_chunks' => { 'inventory' => 500, 'nodes' => 500, 'events' => 500, 'status' => 500, 'latest_data' => 500, 'retry_number' => 3, 'delay' => 5 },
Note that if these options are not specified, no retries are performed.
Also note that if a poller is down, the process will take retry_number * delay seconds to finish (15 seconds with the default values).
- Show the synchronised registry data in the GUI.
- Small improvements to the GUI.
3.0.2-1 BETA
Hot fix to solve a problem visualising the poller's node graphs from the master.
3.0.2 BETA
In this version, the master requests the information in chunks. The number of calls is based on the chunk size and the number of results. The chunk size can be modified in conf/opCommon.nmis on the poller via the opha_transfer_chunks parameter; note that a service restart is needed for new parameters to take effect.
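A sketch of such a setting, assuming the chunk-size values shown in the 3.0.3 notes above (the retry options were only added later, in 3.0.3):
'opha_transfer_chunks' => { 'inventory' => 500, 'nodes' => 500, 'events' => 500, 'status' => 500, 'latest_data' => 500 },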
- The configured pollers can be seen/modified on this page: http://master/en/omk/opHA/peers/
You can see how many calls will be performed for each peer by making this request to the poller: http://poller/en/omk/opHA/api/v1/chunks
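For example, from the command line (the credentials are placeholders; use an account valid on the poller):
# shows how many calls the master will make for this poller
curl -u user:password http://poller/en/omk/opHA/api/v1/chunks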
By default, all data types are enabled, so all of them (nodes, inventory, events, latest_data and status) are synchronised.
The synchronisation is configured as a cron job that runs /usr/local/omk/bin/opha-cli.pl act=pull. For debugging purposes, you can run the same script manually with extra parameters.
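For example (debug=true is an assumption about the tool's debug flag; force=true is described below):
# run a pull with debug output (flag name is an assumption)
/usr/local/omk/bin/opha-cli.pl act=pull debug=true
# force a full pull of all data
/usr/local/omk/bin/opha-cli.pl act=pull force=true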
Using force=true brings all the data again, not only the data modified or created since the last synchronisation.
You should see the results of the synchronisation reported when the script has finished.