Niagara N4 Supervisor VM sizing - what are you guys using?

Hey everyone,

Looking for some real-world input on sizing a Niagara N4 Supervisor VM. Tridium docs are helpful, but once you’re past small sites it still turns into “it depends,” and IT always wants hard justification.

Site: hospital / critical facility, N4.15
From our station export:

  • 45 field controllers (22 QNX/TITAN JACE-class, 19 WEBX, 4 HON-IPC)

  • ~ 55,298 points

  • ~ 109,414 histories (~90,063 on the Supervisor)

  • ~ 1,499 devices
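For anyone arguing the I/O side with IT, a quick back-of-envelope on history write load from those numbers can help. This is a sketch under stated assumptions, not a measured figure: it assumes a 15-minute average collection interval and that most histories are interval-based rather than change-of-value, which may not match your station.

```python
# Back-of-envelope history write load from the station-export numbers above.
# Assumptions (ours, not Tridium's): ~15-minute average collection interval,
# and histories are mostly interval-based rather than change-of-value.
histories = 109_414
interval_s = 15 * 60  # assumed average collection interval, in seconds

records_per_day = histories * (86_400 // interval_s)
writes_per_second = histories / interval_s

print(f"records/day: {records_per_day:,}")         # ~10.5 million
print(f"avg writes/sec: {writes_per_second:.0f}")  # ~122
```

Averages like this understate bursts (many histories collecting on the same clock tick), but they give IT a concrete number to weigh against the published minimums.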

We’re leaning toward a moderate Supervisor VM and SQL for long-term history archive (separate VM) so the Supervisor stays responsive and history growth is controlled.

What we’re thinking (baseline vs top tier)

Baseline / middle-of-the-road (what we’d likely implement):

  • Supervisor VM: 6–8 vCPU, 16–32 GB RAM, 300–500 GB SSD/NVMe

  • SQL Archive VM: 4–8 vCPU, 16–32 GB RAM, ~500 GB–2 TB SSD (depends on interval/retention)

Top tier / future-proof (if retention is long + intervals are aggressive + reporting is heavy):

  • Supervisor VM: 8 vCPU, 32–64 GB RAM, 500 GB–1 TB SSD

  • SQL Archive VM: 8+ vCPU, 32–64 GB RAM, 2–5 TB SSD
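The "depends on interval/retention" caveat on the SQL archive disk can be turned into arithmetic. Here is a rough sketch under assumptions we picked for illustration (15-minute interval, every history exported, 64 bytes per row including index overhead, 5-year retention, 2x headroom for indexes/backups); measure your own schema's bytes-per-row before committing to a number.

```python
# Rough SQL history archive sizing. All parameters below are assumptions
# for illustration, not Tridium or SQL Server figures: measure your own
# row size (data + index overhead) and plug in your real interval/retention.
histories = 109_414
interval_s = 15 * 60     # assumed average collection interval
bytes_per_row = 64       # assumed row size incl. index overhead
retention_years = 5
headroom = 2.0           # indexes, backups, growth margin

rows_per_year = histories * (365 * 86_400 // interval_s)
raw_tb = rows_per_year * retention_years * bytes_per_row / 1e12
sized_tb = raw_tb * headroom

print(f"rows/year: {rows_per_year:,}")
print(f"~{raw_tb:.1f} TB raw, ~{sized_tb:.1f} TB with headroom")  # ~1.2 / ~2.5
```

With these assumptions the result lands around 2.5 TB, i.e. inside the 2-5 TB top-tier range above, which is exactly the kind of justification IT tends to accept over a bare "bigger than minimum" request.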

Where I’m getting pushback (and what I’d like feedback on)

I’ve gotten pushback from some IT departments: any time we ask for 16 GB+ RAM or >250 GB of disk on the Supervisor, they point at Tridium’s published “minimums” and assume anything bigger is overkill.

In your experience, what are the biggest drivers that justify going beyond the published minimums?

  • Is it mostly history count + trend interval + retention?

  • Disk I/O from history writes/queries?

  • Concurrent users / graphics / reporting?

  • Integrations (BACnet, drivers, analytics, APIs, etc.)?

  • Anything else that reliably forces you into higher RAM / disk?

Also curious what you’ve seen bottleneck first in the real world (RAM vs disk I/O vs CPU) on systems around this size.

Appreciate any rules of thumb or “learned the hard way” lessons.

We recently had to provide system sizing to a customer for a deployment similar to this, and the official Tridium minimum specifications are not really enough for what we’re doing.

Our system will have ~110+ JACE 9000s, with a significantly larger number of points and histories, many of which will be exported to an external SQL database. Given the size, driver count, and history load, we recommended specifications well above the minimums.

Processor:
Intel® Xeon® 4th Gen or newer, Intel® Core™ i-series 12th Gen or newer, or AMD® Ryzen™ 5 Gen 3 or newer

CPU Allocation:
16 vCPUs minimum, driven by the large system size, high JACE count, and multiple concurrent drivers (MQTT, SQL, Modbus, BACnet, Niagara, SNMP) plus reporting services

Memory:
64 GB DDR4 RAM minimum

Storage:
SSD storage minimum

C: (OS) – 500 GB

D: (Data) – 1 TB starting capacity (with growth expected for histories, backups, and exports)

One important caveat: vCPU allocation can become a bottleneck if the host is running additional VMs or workloads that compete for physical CPU resources. CPU pinning can mitigate this, but we didn’t introduce that complexity, since hypervisor-level tuning falls outside our wheelhouse.

Thanks for the response and the input! This is exactly what we ended up providing to our customer’s IT department for deployment.

I’ve attached the PDF requirements document below as a point of reference if it helps anyone in the future who needs to create something similar. If anyone would like the original Word template to adapt for your own projects, just let me know and I can provide that as well.

NBIMC_NiagaraVM_IT_Requirements_AME.pdf (676.4 KB)

Funny coincidence: we’ve recently done work with AME.
