NMIS StatefulSet for Kubernetes
NMIS StatefulSet for Kubernetes is currently a beta release. Please contact Support if you are interested in testing it.
Overview
This document provides comprehensive guidance for deploying NMIS (Network Management Information System) in a Kubernetes environment using StatefulSets. For more information on StatefulSets, see Kubernetes StatefulSet - Examples & Best Practices.
The capability to deploy multiple NMIS instances from a standardized set of configuration files and manage them centrally is especially advantageous for large-scale network monitoring. This approach significantly reduces the overhead associated with deployment and configuration, enabling organizations to operate more efficiently.
- 1.1 Overview
- 1.2 Architecture Overview
- 1.3 Prerequisites
- 1.4 Deployment Components
- 1.4.1 1. Storage Configuration
- 1.4.2 2. MongoDB Configuration
- 1.4.2.1 MongoDB Secret
- 1.4.2.2 MongoDB Service
- 1.4.2.3 MongoDB Deployment
- 1.4.2.4 MongoDB PV and PVC
- 1.4.3 3. NMIS StatefulSet Configuration
- 1.4.3.1 NMIS Services
- 1.4.3.2 NMIS StatefulSet
- 1.4.3.3 Statefulset Initialisation architectural diagram
- 1.4.4 4. Ingress Configuration
- 1.4.5 5. Persistent Volumes
- 1.4.6 6. ConfigMaps
- 1.5 Deployment Procedure
- 1.6 Verification and Monitoring
- 1.7 Post-Deployment Configuration
- 1.7.1 DNS Configuration
- 1.8 Maintenance Procedures
- 1.8.1 Backup Procedures
- 1.8.2 Scaling Operations
- 1.9 Troubleshooting Guide
- 2 NMIS Kubernetes Deployment - Required Customizations
- 2.1 1. Node and Storage Configuration
- 2.1.1 Worker Node Selection
- 2.1.2 Storage Paths
- 2.1.3 Storage Sizes
- 2.2 2. Scaling Configuration
- 2.2.1 NMIS Replicas and Required Storage
- 2.2.1.1 1. StatefulSet Replicas
- 2.2.1.2 2. Required PV/PVC Creation
- 2.2.1.3 3. Directory Preparation
- 2.2.2 Resource Limits
- 2.3 3. Network Configuration
- 2.3.1 Ingress Hostnames
- 2.3.2 TLS Configuration
- 2.3.3 Port Configuration
- 2.4 4. Database Configuration
- 2.4.1 MongoDB Credentials
- 2.5 5. Image Configuration
- 2.5.1 Image Registry
- 2.5.2 Image Pull Secrets
- 2.6 6. Kubernetes Namespace
- 2.7 7. Service Names
- 2.8 Important Considerations
- 2.9 Configuration Checklist
Architecture Overview
Please note that the storage values below are placeholders. Refer to NMIS StatefulSet for Kubernetes | NMIS Kubernetes Deployment Required Customizations and modify the values to meet your requirements before applying these configurations.
The deployment consists of two main components:
A MongoDB deployment that serves as the backend database
An NMIS StatefulSet that runs multiple NMIS instances with individual service endpoints
The architecture uses a namespace-based isolation approach, with all components deployed in the nmis-system namespace. The deployment leverages StatefulSets for NMIS to ensure stable network identities and persistent storage for each pod. This approach allows each NMIS instance to maintain its own identity and state while sharing a common MongoDB backend that holds a unique database for each NMIS pod.
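As a quick illustration of this layout once the stack is running, the per-pod databases (named nmis_<pod-name> by the config-sync init container shown later) can be listed directly from the MongoDB pod. This is an illustrative check only; add -u/-p/--authenticationDatabase flags if authentication is enabled in your MongoDB deployment:
# List the databases held by the shared MongoDB backend; expect one nmis_nmis-N entry per NMIS pod
kubectl exec -it $(kubectl get pods -n nmis-system -l app=mongodb -o name) -n nmis-system -- \
  mongo --quiet --eval "printjson(db.adminCommand('listDatabases'))"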
Prerequisites
Kubernetes Infrastructure Requirements
Kubernetes cluster (version 1.20 or higher)
One control plane node
Minimum of 2 worker nodes
NGINX Ingress Controller installed and configured
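A quick way to confirm these infrastructure prerequisites before proceeding (a sketch only; the ingress controller namespace is commonly ingress-nginx but may differ in your installation):
# Confirm cluster version and that one control plane plus at least two worker nodes are present
kubectl version
kubectl get nodes -o wide
# Confirm the NGINX Ingress Controller pods are running (namespace may differ in your installation)
kubectl get pods -n ingress-nginx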
Storage Requirements
Worker nodes must have the following directories pre-created:
# For NMIS instances
mkdir -p /mnt/data/nmis-0/{logs,var,conf,database}
mkdir -p /mnt/data/nmis-1/{logs,var,conf,database}
mkdir -p /mnt/data/nmis-2/{logs,var,conf,database}
# For MongoDB
mkdir -p /mnt/data/mongo
Resource Requirements
Minimum worker node specifications:
CPU: 4 cores recommended (to accommodate multiple NMIS instances)
RAM: 8GB minimum (allows for MongoDB and multiple NMIS pods)
Storage: SSD storage recommended for optimal performance
Network: All nodes must have connectivity to each other
Network Requirements
The following ports must be accessible:
Port 8080: NMIS web interface
Port 8042: OMK interface
Port 27017: MongoDB internal communication
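Once the namespace and the MongoDB service have been created (see the deployment components below), in-cluster reachability of these ports can be spot-checked from a temporary pod. This is an illustrative check, not part of the deployment itself:
# Verify that MongoDB is reachable on 27017 from inside the cluster
kubectl run -n nmis-system port-check --rm -it --image=busybox --restart=Never -- \
  nc -zv mongodb.nmis-system.svc.cluster.local 27017
# Ports 8080 and 8042 can be checked the same way against an NMIS service (e.g. nmis-0) once it exists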
Required Customisations
Please consult NMIS StatefulSet for Kubernetes | NMIS Kubernetes Deployment Required Customizations and edit the listed components before using the templates provided below.
Deployment Components
1. Storage Configuration
First, we create the StorageClass for local storage. This configuration enables local volume provisioning:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
2. MongoDB Configuration
The MongoDB deployment consists of several components:
MongoDB Secret
apiVersion: v1
kind: Secret
metadata:
name: mongodb-secret
namespace: nmis-system
type: Opaque
stringData:
username: root
password: example # Change this in production
MongoDB Service
apiVersion: v1
kind: Service
metadata:
name: mongodb
namespace: nmis-system
spec:
ports:
- port: 27017
selector:
app: mongodb
MongoDB Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb
namespace: nmis-system
spec:
replicas: 1
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo:4.4
command:
- mongod
- "--bind_ip_all"
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: password
ports:
- containerPort: 27017
volumeMounts:
- name: mongodb-data
mountPath: /data/db/
resources:
requests:
memory: "1Gi"
cpu: "0.5"
limits:
memory: "2Gi"
cpu: "1"
volumes:
- name: mongodb-data
persistentVolumeClaim:
claimName: mongodb-data-0
MongoDB PV and PVC
apiVersion: v1
kind: PersistentVolume
metadata:
name: mongodb-data-0
namespace: nmis-system
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /mnt/data/mongo
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kube-worker1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongodb-data-0
namespace: nmis-system
spec:
accessModes:
- ReadWriteOnce
storageClassName: "local-storage"
resources:
requests:
storage: 10Gi
3. NMIS StatefulSet Configuration
NMIS Services
Each NMIS instance requires its own service for direct access:
apiVersion: v1
kind: Service
metadata:
name: nmis-0
namespace: nmis-system
spec:
selector:
app: nmis
statefulset.kubernetes.io/pod-name: nmis-0
ports:
- name: nmis
port: 8080
targetPort: 8080
- name: omk
port: 8042
targetPort: 8042
---
apiVersion: v1
kind: Service
metadata:
name: nmis-1
namespace: nmis-system
spec:
selector:
app: nmis
statefulset.kubernetes.io/pod-name: nmis-1
ports:
- name: nmis
port: 8080
targetPort: 8080
- name: omk
port: 8042
targetPort: 8042
---
apiVersion: v1
kind: Service
metadata:
name: nmis-2
namespace: nmis-system
spec:
selector:
app: nmis
statefulset.kubernetes.io/pod-name: nmis-2
ports:
- name: nmis
port: 8080
targetPort: 8080
- name: omk
port: 8042
targetPort: 8042
NMIS StatefulSet
The complete StatefulSet configuration with init containers and volume management:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nmis
namespace: nmis-system
spec:
serviceName: nmis
replicas: 3
selector:
matchLabels:
app: nmis
template:
metadata:
labels:
app: nmis
spec:
initContainers:
- name: cluster-id-init
image: busybox
command:
- /bin/sh
- -c
- |
if [ ! -f "/persistent/nmis9/cluster_id" ]; then
echo "Generating new UUID-style cluster ID for ${HOSTNAME}"
RANDOM_UUID=$(od -x /dev/urandom | head -1 | awk '{OFS="-"; print $2$3,$4,$5,$6,$7$8}')
echo "$RANDOM_UUID" > /persistent/nmis9/cluster_id
else
echo "Using existing cluster ID for ${HOSTNAME}"
fi
echo "Cluster ID: $(cat /persistent/nmis9/cluster_id)"
echo "export NMIS_CLUSTER_ID=$(cat /persistent/nmis9/cluster_id)" > /persistent/env/cluster_id
volumeMounts:
- name: conf-data
mountPath: /persistent/nmis9
- name: env-data
mountPath: /persistent/env
- name: config-sync
image: public.ecr.aws/n2x4v8j4/firstwave/nmis9_omk:v1.2
command:
- /bin/sh
- -c
- |
. /persistent/env/cluster_id
if [ ! -f "/persistent/nmis9/.initialized" ]; then
echo "First time setup - copying default configs"
mkdir -p /persistent/nmis9
mkdir -p /persistent/omk
cp /config-nmis/Config.nmis /persistent/nmis9/
cp /config-opcommon/opCommon.json /persistent/omk/
cp -r /usr/local/nmis9/conf/* /persistent/nmis9/
cp -r /usr/local/omk/conf/* /persistent/omk/
touch /persistent/nmis9/.initialized
echo "Initial configuration complete"
else
echo "Using existing configuration from persistent storage"
fi
sed -i "s|'db_name' => .*|'db_name' => 'nmis_${HOSTNAME}',|g" /persistent/nmis9/Config.nmis
sed -i "s|'db_server' => .*|'db_server' => '${NMIS_DB_SERVER}',|g" /persistent/nmis9/Config.nmis
sed -i "s|'cluster_id' => .*|'cluster_id' => '${NMIS_CLUSTER_ID}',|g" /persistent/nmis9/Config.nmis
sed -i "s|\"db_name\": .*|\"db_name\": \"nmis_${HOSTNAME}\",|g" /persistent/omk/opCommon.json
sed -i "s|\"db_server\": .*|\"db_server\": \"${NMIS_DB_SERVER}\",|g" /persistent/omk/opCommon.json
echo "Final cluster_id setting:"
grep cluster_id /persistent/nmis9/Config.nmis
env:
- name: NMIS_DB_SERVER
value: mongodb.nmis-system.svc.cluster.local
volumeMounts:
- name: config-nmis
mountPath: /config-nmis
- name: config-opcommon
mountPath: /config-opcommon
- name: conf-data
mountPath: /persistent
- name: env-data
mountPath: /persistent/env
containers:
- name: nmis
image: public.ecr.aws/n2x4v8j4/firstwave/nmis9_omk:v1.2
ports:
- containerPort: 8080
name: nmis
- containerPort: 8042
name: omk
env:
- name: NMIS_SERVER_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NMIS_DB_SERVER
value: mongodb.nmis-system.svc.cluster.local
- name: NMIS_DB_NAME
value: "nmis_$(NMIS_SERVER_NAME)"
- name: NMIS_DB_USERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: username
- name: NMIS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: password
volumeMounts:
- name: log-data
mountPath: /usr/local/nmis9/logs
- name: var-data
mountPath: /usr/local/nmis9/var
- name: conf-data
mountPath: /usr/local/nmis9/conf
subPath: nmis9
- name: conf-data
mountPath: /usr/local/omk/conf
subPath: omk
- name: database-data
mountPath: /usr/local/nmis9/database
- name: env-data
mountPath: /persistent/env
resources:
requests:
memory: "2Gi"
cpu: "1"
limits:
memory: "4Gi"
cpu: "2"
volumes:
- name: config-nmis
configMap:
name: config-nmis
- name: config-opcommon
configMap:
name: config-opcommon
- name: env-data
emptyDir: {}
imagePullSecrets:
- name: regcred
volumeClaimTemplates:
- metadata:
name: log-data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 5Gi
- metadata:
name: var-data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 5Gi
- metadata:
name: conf-data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 1Gi
- metadata:
name: database-data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 10Gi
Statefulset Initialisation architectural diagram
4. Ingress Configuration
How the routing works:
OMK Interface Routing (/((?:en/)?omk.*)):
Matches paths starting with /omk or /en/omk
Routes to port 8042 for OMK interface
The regex pattern (?:en/)? makes the 'en/' part optional
Static and UI Resources (/(cgi-nmis9|menu9|js|css|images)):
Handles web interface components
Routes to port 8080 for the main NMIS interface
The pattern matches multiple resource types in one rule
Default Root Path (/):
Catches all other requests
Routes to the main NMIS interface on port 8080
Each NMIS instance (nmis-0, nmis-1, etc.) gets its own hostname and complete set of routing rules. This means:
nmis-0.nmis-kube.opmantek.net routes to the first NMIS pod
nmis-1.nmis-kube.opmantek.net routes to the second pod
And so on for additional pods
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nmis-ingress
namespace: nmis-system
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/ssl-port: "30642"
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/configuration-snippet: |
# Force JSON content type for .json requests
if ($request_uri ~* \.json) {
add_header Content-Type "application/json" always;
proxy_set_header Accept "application/json";
proxy_set_header Content-Type "application/json";
}
# Add port handling
port_in_redirect off;
spec:
ingressClassName: nginx
tls:
- hosts:
- "*.nmis-kube.opmantek.net"
secretName: wildcard-cert
rules:
- host: "nmis-0.nmis-kube.opmantek.net"
http:
paths:
- path: /((?:en/)?omk.*)
pathType: Prefix
backend:
service:
name: nmis-0
port:
number: 8042
- path: /(cgi-nmis9|menu9|js|css|images)
pathType: Prefix
backend:
service:
name: nmis-0
port:
number: 8080
- path: /
pathType: Prefix
backend:
service:
name: nmis-0
port:
number: 8080
# Similar rules for nmis-1 and nmis-2...
5. Persistent Volumes
Each NMIS instance requires four persistent volumes:
# NMIS-0 PVs
apiVersion: v1
kind: PersistentVolume
metadata:
name: log-data-nmis-0
namespace: nmis-system
labels:
app: nmis
type: log-data
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
claimRef:
name: log-data-nmis-0
namespace: nmis-system
local:
path: /mnt/data/nmis-0/logs
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kube-worker1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: var-data-nmis-0
namespace: nmis-system
labels:
app: nmis
type: var-data
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
claimRef:
name: var-data-nmis-0
namespace: nmis-system
local:
path: /mnt/data/nmis-0/var
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kube-worker1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: conf-data-nmis-0
namespace: nmis-system
labels:
app: nmis
type: conf-data
spec:
capacity:
storage: 1Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
claimRef:
name: conf-data-nmis-0
namespace: nmis-system
local:
path: /mnt/data/nmis-0/conf
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kube-worker1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: database-data-nmis-0
namespace: nmis-system
labels:
app: nmis
type: database-data
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
claimRef:
name: database-data-nmis-0
namespace: nmis-system
local:
path: /mnt/data/nmis-0/database
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kube-worker1
---
# NMIS-1 PVs
apiVersion: v1
kind: PersistentVolume
metadata:
name: log-data-nmis-1
namespace: nmis-system
labels:
app: nmis
type: log-data
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
claimRef:
name: log-data-nmis-1
namespace: nmis-system
local:
path: /mnt/data/nmis-1/logs
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kube-worker1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: var-data-nmis-1
namespace: nmis-system
labels:
app: nmis
type: var-data
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
claimRef:
name: var-data-nmis-1
namespace: nmis-system
local:
path: /mnt/data/nmis-1/var
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kube-worker1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: conf-data-nmis-1
namespace: nmis-system
labels:
app: nmis
type: conf-data
spec:
capacity:
storage: 1Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
claimRef:
name: conf-data-nmis-1
namespace: nmis-system
local:
path: /mnt/data/nmis-1/conf
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kube-worker1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: database-data-nmis-1
namespace: nmis-system
labels:
app: nmis
type: database-data
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
claimRef:
name: database-data-nmis-1
namespace: nmis-system
local:
path: /mnt/data/nmis-1/database
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kube-worker1
---
# NMIS-2 PVs
apiVersion: v1
kind: PersistentVolume
metadata:
name: log-data-nmis-2
namespace: nmis-system
labels:
app: nmis
type: log-data
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
claimRef:
name: log-data-nmis-2
namespace: nmis-system
local:
path: /mnt/data/nmis-2/logs
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kube-worker1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: var-data-nmis-2
namespace: nmis-system
labels:
app: nmis
type: var-data
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
claimRef:
name: var-data-nmis-2
namespace: nmis-system
local:
path: /mnt/data/nmis-2/var
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kube-worker1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: conf-data-nmis-2
namespace: nmis-system
labels:
app: nmis
type: conf-data
spec:
capacity:
storage: 1Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
claimRef:
name: conf-data-nmis-2
namespace: nmis-system
local:
path: /mnt/data/nmis-2/conf
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kube-worker1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: database-data-nmis-2
namespace: nmis-system
labels:
app: nmis
type: database-data
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
claimRef:
name: database-data-nmis-2
namespace: nmis-system
local:
path: /mnt/data/nmis-2/database
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kube-worker1
6. ConfigMaps
Create the following ConfigMaps from your configuration files:
# Create Config.nmis
kubectl create configmap config-nmis \
--from-file=Config.nmis=./Config.nmis \
-n nmis-system
# Create OpCommon.json
kubectl create configmap config-opcommon \
--from-file=opCommon.json=./opCommon.json \
-n nmis-system
Deployment Procedure
Create the Namespace:
kubectl create namespace nmis-system
Apply StorageClass:
kubectl apply -f storage-class.yaml
Apply MongoDB Resources:
kubectl apply -f mongodb-secret.yaml
kubectl apply -f mongodb-pv-pvc.yaml
kubectl apply -f mongodb.yaml
Apply all Persistent Volumes:
kubectl apply -f persistent-volumes.yaml
Deploy NMIS StatefulSet and Services:
kubectl apply -f nmis-services.yaml
kubectl apply -f nmis-statefulset.yaml
Apply Ingress Configuration:
kubectl apply -f nmis-ingress.yaml
Verification and Monitoring
Initial Deployment Verification
Verify Namespace and StorageClass:
kubectl get ns nmis-system
kubectl get storageclass local-storage
Check MongoDB Deployment:
# Check MongoDB pod status
kubectl get pods -n nmis-system -l app=mongodb
# Verify MongoDB service
kubectl get svc mongodb -n nmis-system
# Check MongoDB logs
kubectl logs -f $(kubectl get pods -n nmis-system -l app=mongodb -o name) -n nmis-system
Verify NMIS StatefulSet Deployment:
# Check StatefulSet status
kubectl get statefulset nmis -n nmis-system
# Verify all NMIS pods are running
kubectl get pods -n nmis-system -l app=nmis
# Check individual pod logs
kubectl logs -f nmis-0 -n nmis-system
kubectl logs -f nmis-1 -n nmis-system
Validate Persistent Volumes:
# Check PV status
kubectl get pv -n nmis-system
# Verify PVC bindings
kubectl get pvc -n nmis-system
# Check specific PVC details
kubectl describe pvc log-data-nmis-0 -n nmis-system
Verify Services and Ingress:
# Check all services
kubectl get svc -n nmis-system
# Verify ingress configuration
kubectl get ingress -n nmis-system
kubectl describe ingress nmis-ingress -n nmis-system
Post-Deployment Configuration
DNS Configuration
If you experience MongoDB connection issues, check that MongoDB's FQDN matches the NMIS_DB_SERVER environment variable set in the NMIS StatefulSet YAML file (it appears in both the config-sync init container and the main nmis container):
env:
- name: NMIS_DB_SERVER
value: mongodb.nmis-system.svc.cluster.local
and
- name: NMIS_DB_SERVER
value: mongodb.nmis-system.svc.cluster.local
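If the values match but connections still fail, confirm that the service name actually resolves inside the cluster. A minimal check using a temporary busybox pod (illustrative only):
# Resolve the MongoDB service FQDN from inside the cluster
kubectl run -n nmis-system dns-check --rm -it --image=busybox --restart=Never -- \
  nslookup mongodb.nmis-system.svc.cluster.local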
Maintenance Procedures
Backup Procedures
MongoDB Backup:
# Create a backup directory on the host
mkdir -p /backup/mongodb
# Execute MongoDB backup
kubectl exec -it $(kubectl get pods -n nmis-system -l app=mongodb -o name) -n nmis-system -- \
mongodump --uri="mongodb://root:example@localhost:27017" --out=/data/backup/
# Copy backup to host
kubectl cp nmis-system/$(kubectl get pods -n nmis-system -l app=mongodb -o name | cut -d'/' -f2):/data/backup /backup/mongodb/
NMIS Configuration Backup:
# Backup ConfigMaps
kubectl get configmap config-nmis -n nmis-system -o yaml > config-nmis-backup.yaml
kubectl get configmap config-opcommon -n nmis-system -o yaml > config-opcommon-backup.yaml
kubectl get configmap config-oplicense -n nmis-system -o yaml > config-oplicense-backup.yaml
Scaling Operations
Vertical Scaling:
To modify resource allocation, edit the StatefulSet:
kubectl edit statefulset nmis -n nmis-system
Horizontal Scaling:
To adjust the number of NMIS replicas:
kubectl scale statefulset nmis -n nmis-system --replicas=3
Note: When scaling horizontally, ensure:
Sufficient node resources are available
PV/PVCs are properly configured for new instances
Ingress rules are updated for new instances
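For reference, each added replica needs its own host block in the Ingress, mirroring the per-pod rules shown earlier. The snippet below is an illustrative sketch for a hypothetical nmis-3 replica; adjust the hostname to your own domain, and note that a matching nmis-3 Service must also be created:
# Additional Ingress rule for a new replica (example: nmis-3)
- host: "nmis-3.nmis-kube.opmantek.net"
  http:
    paths:
    - path: /((?:en/)?omk.*)
      pathType: Prefix
      backend:
        service:
          name: nmis-3
          port:
            number: 8042
    - path: /(cgi-nmis9|menu9|js|css|images)
      pathType: Prefix
      backend:
        service:
          name: nmis-3
          port:
            number: 8080
    - path: /
      pathType: Prefix
      backend:
        service:
          name: nmis-3
          port:
            number: 8080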
Troubleshooting Guide
Common Issues and Solutions
Pod Scheduling Failures
# Check pod events
kubectl describe pod <pod-name> -n nmis-system
# Verify node resources
kubectl describe node <node-name>
Volume Mount Issues
# Check volume status
kubectl describe pv <pv-name>
kubectl describe pvc <pvc-name> -n nmis-system
# Verify directory permissions on worker nodes
ls -la /mnt/data/nmis-*
MongoDB Connection Issues
# Test MongoDB connectivity from NMIS pod
kubectl exec -it nmis-0 -n nmis-system -- nc -zv mongodb 27017
# Check MongoDB logs
kubectl logs -f deployment/mongodb -n nmis-system
Configuration Issues
# Verify ConfigMap contents
kubectl get configmap config-nmis -n nmis-system -o yaml
kubectl get configmap config-opcommon -n nmis-system -o yaml
# Check init container logs
kubectl logs nmis-0 -c config-sync -n nmis-system
NMIS Kubernetes Deployment - Required Customizations
1. Node and Storage Configuration
Worker Node Selection
File Location: All PersistentVolume definitions
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kube-worker1 # MODIFY: Change to your worker node hostname
Storage Paths
File Location: All PersistentVolume definitions
local:
path: /mnt/data/mongo # MODIFY: Change to your MongoDB storage path
path: /mnt/data/nmis-{0,1,2}/{logs,var,conf,database} # MODIFY: Change to your NMIS storage paths
Storage Sizes
MongoDB Storage:
spec:
capacity:
storage: 10Gi # MODIFY: Adjust based on your data requirements
NMIS Storage Components:
# MODIFY: Adjust all these based on your requirements
- log-data: 5Gi
- var-data: 5Gi
- conf-data: 1Gi
- database-data: 10Gi
2. Scaling Configuration
NMIS Replicas and Required Storage
1. StatefulSet Replicas
File Location: StatefulSet definition
spec:
replicas: 2 # MODIFY: Change number of NMIS instances
2. Required PV/PVC Creation
For each new replica, you must create four new PersistentVolumes and their corresponding claims:
# Required for each new replica (example for nmis-3):
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: log-data-nmis-3
namespace: nmis-system
labels:
app: nmis
type: log-data
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
claimRef:
name: log-data-nmis-3
namespace: nmis-system
local:
path: /mnt/data/nmis-3/logs
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kube-worker1
# Repeat similar PV definitions for:
# - var-data-nmis-3
# - conf-data-nmis-3
# - database-data-nmis-3
# The StatefulSet will automatically create PVCs, but they must have corresponding PVs
3. Directory Preparation
Before scaling, create required directories on the worker node:
# For a new replica (e.g., nmis-3):
mkdir -p /mnt/data/nmis-3/{logs,var,conf,database}
chmod 755 /mnt/data/nmis-3
chmod 755 /mnt/data/nmis-3/{logs,var,conf,database}
Resource Limits
MongoDB Resources:
resources:
requests:
memory: "1Gi" # MODIFY: Adjust based on your environment
cpu: "0.5"
limits:
memory: "2Gi" # MODIFY: Adjust based on your environment
cpu: "1"NMIS Resources:
resources:
requests:
memory: "2Gi" # MODIFY: Adjust based on your environment
cpu: "1"
limits:
memory: "4Gi" # MODIFY: Adjust based on your environment
cpu: "2"3. Network Configuration
Ingress Hostnames
File Location: Ingress definition
spec:
tls:
- hosts:
- "*.nmis-kube.opmantek.net" # MODIFY: Change to your domain
rules:
- host: "nmis-0.nmis-kube.opmantek.net" # MODIFY: Change to your domainTLS Configuration
secretName: wildcard-cert # MODIFY: Change to your TLS secret name
Port Configuration
nginx.ingress.kubernetes.io/ssl-port: "30642" # MODIFY: Change if different port needed
4. Database Configuration
MongoDB Credentials
File Location: mongodb-secret.yaml
stringData:
username: root # MODIFY: Change to secure username
password: example # MODIFY: Change to secure password
5. Image Configuration
Image Registry
image: docker.opmantek.net/nmis9:latest # MODIFY: Change if using different registry
Image Pull Secrets
imagePullSecrets:
- name: regcred # MODIFY: Change to your registry credentials secret
6. Kubernetes Namespace
If you want to use a different namespace:
metadata:
namespace: nmis-system # MODIFY: Change all namespace references if using different namespace
7. Service Names
If changing service names:
metadata:
name: mongodb # MODIFY: Update all service references if changing names
name: nmis-0 # MODIFY: Update all service references if changing names
Important Considerations
Storage Preparation:
All storage paths must exist on the worker nodes
Proper permissions must be set
Sufficient disk space must be available
Node Labels:
Ensure nodes have correct labels for affinity rules
Consider adding additional node selectors if needed
Networking:
DNS must be configured for your ingress hostnames
Certificates must be available for TLS
Firewall rules must allow required ports
Resource Planning:
Calculate total resource requirements based on replica count
Ensure nodes have sufficient resources
Plan for scaling headroom
Security:
Change all default passwords
Set appropriate file permissions
Consider adding network policies
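As one possible hardening step for the last point, the sketch below shows a minimal NetworkPolicy that restricts MongoDB ingress to the NMIS pods. It is an assumption about your desired policy rather than part of the standard templates, and it requires a CNI plugin that enforces NetworkPolicy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mongodb-allow-nmis
  namespace: nmis-system
spec:
  # Apply the policy to the MongoDB pod only
  podSelector:
    matchLabels:
      app: mongodb
  policyTypes:
  - Ingress
  ingress:
  # Allow connections on 27017 from NMIS pods in the same namespace
  - from:
    - podSelector:
        matchLabels:
          app: nmis
    ports:
    - protocol: TCP
      port: 27017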
Configuration Checklist
Update node affinity rules
Configure storage paths and sizes
Set replica count
Adjust resource limits
Configure ingress hostnames
Set up TLS certificates
Update MongoDB credentials
Configure image pull secrets
Prepare storage directories
Update namespace if needed
Configure DNS for ingress
Set appropriate file permissions
⚠️ Important Scaling Considerations:
All PVs must be created before scaling the StatefulSet
Directory structures must exist on the worker node
Each replica requires 4 PVs (log, var, conf, database)
Total storage per replica: 21Gi (5Gi + 5Gi + 1Gi + 10Gi)
Ensure sufficient node storage capacity before scaling
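A simple capacity spot-check before scaling, run on the relevant worker node (assumes the default /mnt/data storage root used in these templates):
# Confirm free space under the local storage root; each new replica needs roughly 21Gi
df -h /mnt/data
du -sh /mnt/data/nmis-*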