Introduction
To assist with batch node operations, NMIS includes a small script for importing nodes from a CSV file. From version 9.1.1G onwards there are also more fine-grained tools available, which are described on the page titled Node Administration Tools.
Info: IMPORTANT: The import_nodes.pl script was updated on 10 Dec 2020 to better handle node activation, which had been causing problems for some nodes. If you are using this tool, you should update it from the NMIS9 GitHub repository; the links are below.
...
The bulk import script can be found at /usr/local/nmis9/admin/import_nodes.pl, and there is a sample CSV file at /usr/local/nmis9/admin/samples/import_nodes_sample.csv.
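As a rough illustration, an import CSV might look like the following. This is a sketch based on the column headers used in the opHA example later on this page; all values here are placeholders, and the shipped sample file import_nodes_sample.csv is the authoritative reference for the supported columns:

```csv
name,host,group,role,community,netType,roleType,activated.NMIS,activated.opConfig
import_test1,127.0.0.1,Branches,core,nmisGig8,lan,default,1,1
```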
...
If you are running an update for all nodes, it may take a while to complete. The following command shows how to force an update for one node at a time (a good way to distribute the load). You can also schedule an update for all nodes by removing the job.node argument.
```
./bin/nmis-cli act=schedule job.type=update at="now + 5 minutes" job.node=testnode job.force=1
```
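Scheduling one node at a time can be staggered with a small shell loop. This is only a sketch: the node names are placeholders, the 2-minute spacing is an arbitrary choice, and the echo is left in so you can review the generated commands before removing it to actually schedule the jobs:

```shell
#!/bin/sh
# Sketch: schedule a forced update per node, staggered 2 minutes apart.
# Node names are placeholders; remove 'echo' to run the commands for real.
i=0
for node in testnode1 testnode2 testnode3; do
  echo ./bin/nmis-cli act=schedule job.type=update \
    at="now + $((i * 2)) minutes" job.node="$node" job.force=1
  i=$((i + 1))
done
```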
To run an NMIS update for a single node, optionally with debug (which will produce debug files in /tmp/):
```
./bin/nmis-cli act=schedule job.type=update at="now + 5 minutes" job.node=testnode job.force=1 job.verbosity=9
```
...
You can check how long the script takes for each node by using the following argument:
```
nmis9/admin$ ./import_nodes.pl csv=/usr/local/nmis9/admin/import.csv time=1
```
The output should look something like this:
```
0.00 Begin
0.00 Loading the Import Nodes from /usr/local/nmis9/admin/import.csv
done in 0.00
0.00 Processing nodes
0.00 Processing newnode
UPDATE: node=newnode host=127.0.0.1 group=DataCenter
=> Successfully updated node newnode.
0.00 Processing newnode end
0.00 Processing import_test2
UPDATE: node=import_test2 host=127.0.0.1 group=Sales
=> Successfully updated node import_test2.
0.00 Processing import_test2 end
0.00 Processing import_test3
UPDATE: node=import_test3 host=127.0.0.1 group=DataCenter
=> Successfully updated node import_test3.
0.01 Processing import_test3 end
0.01 Processing import_test1
UPDATE: node=import_test1 host=127.0.0.1 group=Branches
=> Successfully updated node import_test1.
0.01 Processing import_test1 end
0.01 End processing nodes
```
Import nodes from the primary into the pollers (opHA)
We can use the primary to import nodes into the pollers by specifying the cluster_id in the header:
```
name,host,group,role,community,netType,roleType,activated.NMIS,activated.opConfig,cluster_id
import_test1,127.0.0.1,Branches,core,nmisGig8,lan,default,1,1,a515c63a-0d11-4bcb-a402-39a4600aebb9
```
The cluster_id can be checked:
```
/usr/local/omk/bin/opha-cli.pl act=list_peers
cluster_id id server_name status
a515c63a-0d11-4bcb-a402-39a4600aebb9 614f3ea8626660a3e47f4801 poller1 save
```
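For convenience, the cluster_id for a given peer can be pulled out of the list_peers output with standard tools. This is a sketch: in practice you would pipe the real opha-cli.pl command into awk, but here the output from the example above is simulated so the snippet is self-contained (poller1 is the peer name from that example):

```shell
#!/bin/sh
# Sketch: extract the cluster_id for a named peer.
# Simulated list_peers output; in practice use:
#   /usr/local/omk/bin/opha-cli.pl act=list_peers | awk '$3 == "poller1" {print $1}'
list_peers_output='cluster_id id server_name status
a515c63a-0d11-4bcb-a402-39a4600aebb9 614f3ea8626660a3e47f4801 poller1 save'
cluster_id=$(printf '%s\n' "$list_peers_output" | awk '$3 == "poller1" {print $1}')
echo "$cluster_id"
```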
The nodes will be transferred when the opHA cron job runs; it can also be invoked manually:
```
/usr/local/omk/bin/opha-cli.pl act=sync-all-nodes
```
More information is available in the opHA documentation: opHA 3 Redundant Node Polling and Centralised Node Management#Managingremotenodes