Warning

Exercise caution while editing /usr/local/omk/conf/opCommon.nmis, /usr/local/omk/conf/opCommon.json or /etc/mongod.conf; if a syntax error is introduced, all OMK applications will cease to function.
...
- ABI 1 (NMIS 8)
  - In omk/conf/opCommon.nmis:
    - omkd_workers is set to 10 by default. Try reducing this number to 2 and then restart the omkd service. If you then find you need more omkd_workers, increment this value by one and test again until you reach a suitable value.
    - /omkd/omkd_max_requests: 100 to 500 (start at 100 and increase from this value if needed)
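For ABI 1, opCommon.nmis is a Perl hash. The fragment below is a sketch of the omkd section with the conservative starting values suggested above; the key nesting is inferred from the /omkd/... paths named in this document, so verify it against your installed file before editing:

```perl
# Fragment of /usr/local/omk/conf/opCommon.nmis (Perl hash syntax).
# Starting values from the tuning advice above; adjust upward only as needed.
'omkd' => {
  'omkd_workers'      => 2,    # default is 10; start low, increment by one and re-test
  'omkd_max_requests' => 100,  # start at 100, increase toward 500 if needed
},
```

After saving, restart the omkd service and re-test before increasing either value.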
- ABI 2 (NMIS 9)
  - In omk/conf/opCommon.json:
    - omkd_workers is set to 10 by default. Try reducing this number to 2 and then restart the omkd service. If you then find you need more omkd_workers, increment this value by one and test again until you reach a suitable value.
    - /omkd/omkd_max_requests: 100 to 500 (start at 100 and increase from this value if needed)
  - In nmis9/conf/Config.nmis:
    - /system/nmisd_worker_max_cycles: 100
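For ABI 2, the same settings live in omk/conf/opCommon.json. A sketch of the relevant keys with the starting values suggested above (nesting assumed from the /omkd/... paths in this document; merge these keys into the existing file rather than replacing it):

```json
{
  "omkd": {
    "omkd_workers": 2,
    "omkd_max_requests": 100
  }
}
```

As with ABI 1, restart the omkd service after each change and only increment omkd_workers once a lower value has proven insufficient.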
- Consider installing and using zswap, with its default settings, provided the server has more than 1GB RAM:
- https://www.kernel.org/doc/html/latest/vm/zswap.html provides that:
- Overcommitted guests that share a common I/O resource can dramatically reduce their swap I/O pressure, avoiding heavy handed I/O throttling by the hypervisor. This allows more work to get done with less impact to the guest workload and guests sharing the I/O subsystem.
- Users with SSDs as swap devices can extend the life of the device by drastically reducing life-shortening writes.
- Performance Analysis of Compressed Caching Technique
- See the conclusion of this paper for insight into why zswap should not be used on a server with less than 1GB of RAM.
- https://www.ibm.com/support/pages/new-linux-zswap-compression-functionality provides that for a server with 10GB RAM and 20% zswap pool size:
- On x86, the average zswap compression ratio was 3.6
- For the x86 runs, the pool limit was hit earlier - starting at the 15.5 GB data point
- Don't be tempted to increase the maximum pool percent (max_pool_percent) from the default setting of 20: this will most likely affect performance adversely.
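Per the kernel documentation linked above, zswap can be enabled at boot via kernel command-line parameters while leaving max_pool_percent at its default of 20, as recommended. A sketch for a Debian/Ubuntu-style GRUB setup (the file path and update-grub step are distro-specific assumptions; the zswap.enabled parameter name is from the kernel zswap docs):

```shell
# In /etc/default/grub, append zswap.enabled=1 to the existing kernel command line.
# max_pool_percent is deliberately not set, so the default of 20 applies.
GRUB_CMDLINE_LINUX_DEFAULT="quiet zswap.enabled=1"

# Then regenerate the GRUB config and reboot (Debian/Ubuntu shown):
#   sudo update-grub && sudo reboot
```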
- Command to view zswap info during operation:
- sudo grep -rRn . /sys/kernel/debug/zswap/
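The debugfs counters printed by the grep command above can be turned into an effective compression ratio, comparable to the ~3.6 average IBM reports for x86. A minimal sketch: stored_pages and pool_total_size are the kernel's zswap debugfs field names, and the 4 KiB page size is an assumption valid for typical x86/x86_64 systems.

```python
# Sketch: effective zswap compression ratio from /sys/kernel/debug/zswap/ counters.
# stored_pages  = number of uncompressed pages currently held in zswap
# pool_total_size = bytes of memory the compressed pool actually occupies

PAGE_SIZE = 4096  # bytes per page; assumption for x86/x86_64


def compression_ratio(stored_pages: int, pool_total_size: int) -> float:
    """Uncompressed bytes represented in zswap divided by compressed pool bytes."""
    if pool_total_size == 0:
        return 0.0
    return (stored_pages * PAGE_SIZE) / pool_total_size


# Example with synthetic values: 9000 pages stored in a 10 MiB pool
print(round(compression_ratio(9000, 10 * 1024 * 1024), 2))  # → 3.52
```

On a live system you would read the two values from /sys/kernel/debug/zswap/stored_pages and /sys/kernel/debug/zswap/pool_total_size (root required) and pass them to the function.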
...