Hey everyone,
Looking for some real-world input on sizing a Niagara N4 Supervisor VM. Tridium docs are helpful, but once you’re past small sites it still turns into “it depends,” and IT always wants hard justification.
Site: hospital / critical facility, N4.15
From our station export:
- 45 field controllers (22 QNX/TITAN JACE-class, 19 WEBX, 4 HON-IPC)
- ~55,298 points
- ~109,414 histories (~90,063 on the Supervisor)
- ~1,499 devices
We’re leaning toward a moderately sized Supervisor VM with SQL on a separate VM for long-term history archiving, so the Supervisor stays responsive and history growth is kept under control.
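To put a rough number on the Supervisor's history write load, here's a back-of-envelope sketch. The 15-minute interval is my assumption, not from our export — real stations mix COV and interval collection, so treat this as an upper-ish bound for interval-heavy configs:

```python
# Back-of-envelope history write load on the Supervisor.
# ASSUMPTION: a uniform 15-minute collection interval across all
# histories (real sites mix COV and interval collection).
histories = 90_063                 # histories archived on the Supervisor
interval_min = 15                  # assumed collection interval
records_per_day = histories * (24 * 60 // interval_min)
records_per_sec = records_per_day / 86_400
print(f"{records_per_day:,} records/day (~{records_per_sec:.0f} rec/s sustained)")
# → 8,646,048 records/day (~100 rec/s sustained)
```

~100 records/s sustained is modest for SSD/NVMe, but it's the steady-state floor before any trend queries, reporting, or backfill traffic lands on top of it.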
What we’re thinking (baseline vs top tier)
Baseline / middle-of-the-road (what we’d likely implement):
- Supervisor VM: 6–8 vCPU, 16–32 GB RAM, 300–500 GB SSD/NVMe
- SQL Archive VM: 4–8 vCPU, 16–32 GB RAM, ~500 GB–2 TB SSD (depends on interval/retention)
Top tier / future-proof (if retention is long + intervals are aggressive + reporting is heavy):
- Supervisor VM: 8 vCPU, 32–64 GB RAM, 500 GB–1 TB SSD
- SQL Archive VM: 8+ vCPU, 32–64 GB RAM, 2–5 TB SSD
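For the SQL archive disk ranges above, here's the arithmetic we used as a sanity check. The ~64 bytes/record figure and the interval/retention combinations are assumptions (actual on-disk size varies with schema, indexes, and compression), but it shows how interval and retention drive the 2–5 TB top-tier number:

```python
# Rough SQL archive sizing from collection interval + retention.
# ASSUMPTION: ~64 bytes/record on disk including index overhead
# (varies with schema, indexing, and compression settings).
histories = 90_063
bytes_per_record = 64
for interval_min, years in [(15, 5), (5, 5)]:
    recs_per_day = histories * (24 * 60 // interval_min)
    tb = recs_per_day * bytes_per_record * 365 * years / 1e12
    print(f"{interval_min}-min interval, {years} yr retention: ~{tb:.1f} TB")
# → 15-min interval, 5 yr retention: ~1.0 TB
# → 5-min interval, 5 yr retention: ~3.0 TB
```

In other words, tightening the interval from 15 to 5 minutes triples the archive footprint, which is most of the spread between our baseline and top-tier SQL disk numbers.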
Where I’m getting pushback (and what I’d like feedback on)
I’ve gotten pushback from some IT departments: any time we ask for 16 GB+ RAM or more than 250 GB of disk on the Supervisor, they point at Tridium’s published “minimums” and assume anything bigger is overkill.
In your experience, what are the biggest drivers that justify going beyond the published minimums?
- Is it mostly history count + trend interval + retention?
- Disk I/O from history writes/queries?
- Concurrent users / graphics / reporting?
- Integrations (BACnet, drivers, analytics, APIs, etc.)?
- Anything else that reliably forces you into higher RAM / disk?
Also curious what you’ve seen bottleneck first in the real world (RAM vs disk I/O vs CPU) on systems around this size.
Appreciate any rules of thumb or “learned the hard way” lessons.