This guide provides a starting point for sizing a new DCE solution.
If you set up DCE on a virtual machine, keep in mind that virtual environments are dynamic and need to be monitored and evaluated continually. Adjustments are usually needed to maintain successful system performance.
For additional details on how to monitor and inspect the performance health of your DCE server, see the DCE performance troubleshooting guide.
Sensor updates calculation
When you size a DCE server, it is important to know the number of sensor updates per hour that the system will track. Sensor updates per hour is the number of sensors with values that change during one or more poll intervals over the course of an hour.
Calculating this value is not as simple as multiplying the total number of sensors by the number of poll cycles per hour. Many devices have several sensors that do not change very often, so calculating the rate of sensor change from the total sensor count will incorrectly inflate the result.
There are a few ways to more accurately approximate this value. To give you a starting point, we collected data from DCE systems that are connected to EcoStruxure and generated a data set from the connected devices. The average sensor quantity and the average rate of change per hour by device make and model are listed in the sensor updates calculator. You can use these reference values to calculate your sensor update per hour values.
For cases where you can’t use the reference document, you can measure the data with a small DCE deployment, preferably with live devices deployed in their intended environment. Deploy a small DCE configuration and discover the device type in question. Then go to http://<dce server ip>/nbc/compress/support/sensorqstats. This page updates hourly and shows the number of “processed” sensors for that hour. To get an accurate measurement, let the test run for a few hours, and then use the reported number as your update rate. Repeat this for each device you want to profile.
Once you have the data for each of your devices, add up the values to get the sensor update quantity per hour for the entire system. Include virtual sensors if they will be used in the environment, and take your desired poll rate into account: modifying the poll period can drastically change your sensor updates per hour value.
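The aggregation step above can be sketched in a few lines. This is a minimal illustration only: the device quantities and per-device update rates below are hypothetical placeholders, not values from the sensor updates calculator.

```python
# Minimal sketch of aggregating sensor updates per hour across a fleet.
# All quantities and rates here are illustrative, not reference values.

def total_updates_per_hour(devices, poll_scale=1.0):
    """Sum per-device sensor update rates across the whole system.

    devices: list of (quantity, avg_updates_per_hour_per_device) pairs,
             profiled at a known poll period.
    poll_scale: ratio of your planned poll frequency to the profiled poll
                frequency (polling twice as often roughly doubles updates).
    """
    return sum(qty * rate for qty, rate in devices) * poll_scale

# Hypothetical fleet: 100 UPSs (~40 updates/hr each), 50 PDUs (~25/hr),
# plus 200 virtual sensors (~12/hr) planned for the environment.
fleet = [(100, 40), (50, 25), (200, 12)]
print(total_updates_per_hour(fleet))                  # -> 7650.0
print(total_updates_per_hour(fleet, poll_scale=2.0))  # -> 15300.0
```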
CPU and RAM sizing
Use the following characteristics to evaluate the CPU and RAM requirements for the DCE server:
- Device count
- Sensor count
- Sensor updates per hour
If all three of your values are at or below the limits listed for a configuration size, use the CPU and RAM values suggested for that size. If any of the three values exceeds a listed limit, use the next largest configuration size.
Since sensor count and sensor updates per hour are often difficult to know before deployment, you can use the sensor updates calculator to see sample data gathered from DCEs deployed in the field. The calculator contains the average sensor count and average sensor updates per hour for some popular devices that DCE supports to help you determine estimated values.
It is important to note that operating with SNMPv3 comes with increased overhead and decreases the number of devices and sensors that a given DCE instance can monitor. The limits in the tables below assume using a specific protocol to manage the entire population (SNMPv1, Modbus, or SNMPv3). Mixed environments may see different upper limits for devices and sensors. These findings are based on testing against APC and Schneider Electric devices.
Basic Server Load
4 CPU / 4 GB of RAM

| | SNMPv1 / Modbus | SNMPv3 |
| --- | --- | --- |
| Sensor updates per hour | 45,000 | 11,250 |
Standard Server Load
8 CPU / 8 GB of RAM

| | SNMPv1 / Modbus | SNMPv3 |
| --- | --- | --- |
| Sensor updates per hour | 180,000 | 45,000 |
Enterprise Server Load
16 CPU / 16 GB of RAM

| | SNMPv1 / Modbus | SNMPv3 |
| --- | --- | --- |
| Sensor updates per hour | 360,000 | 90,000 |
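Tier selection against these limits can be sketched as a simple lookup. This assumes the two limit columns correspond to an all-SNMPv1/Modbus population versus an all-SNMPv3 population, per the protocol note above; the tier names and dictionary keys are my own. Device count and sensor count limits should be checked the same way.

```python
# Sketch: pick the smallest server load tier whose sensor-updates-per-hour
# limit covers the expected load. Columns assumed to be SNMPv1/Modbus vs.
# SNMPv3 (SNMPv3 overhead reduces capacity).
TIERS = [
    ("Basic (4 CPU / 4 GB)", {"snmpv1_modbus": 45_000, "snmpv3": 11_250}),
    ("Standard (8 CPU / 8 GB)", {"snmpv1_modbus": 180_000, "snmpv3": 45_000}),
    ("Enterprise (16 CPU / 16 GB)", {"snmpv1_modbus": 360_000, "snmpv3": 90_000}),
]

def pick_tier(updates_per_hour, protocol="snmpv1_modbus"):
    """Return the smallest adequate tier, or None when the load exceeds the
    Enterprise limit (split across multiple DCE servers or contact support)."""
    for name, limits in TIERS:
        if updates_per_hour <= limits[protocol]:
            return name
    return None

print(pick_tier(50_000))            # -> Standard (8 CPU / 8 GB)
print(pick_tier(50_000, "snmpv3"))  # -> Enterprise (16 CPU / 16 GB)
```

Note how the same load lands on a larger tier under SNMPv3, reflecting its added overhead.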
For configurations that go above the limits for the Enterprise server load, consider splitting up the device load across multiple DCE servers. You can also contact technical support to review specific sizing requirements on a case by case basis.
Additional DCE variables that impact CPU and RAM sizing
There are other components in DCE to consider when you plan CPU and RAM sizing for your DCE virtual machine. These parameters vary widely, so exact guidance cannot be provided for all cases. The following activities and parameters have a direct impact on DCE performance. Depending on the extent of their use, additional modifications to the CPU and RAM may be needed.
- Virtual Sensors
- API Integrations (DCO, Web Services, etc.)
- Number of users logging into or logged into the DCE thick client at the same time
- Graphing and reporting usage
- Discovering a large number of devices in a short period of time
If you plan to use these features, or you are concerned about their impact on system performance, you can review the details in the DCE performance troubleshooting guide.
Virtualization considerations for CPU and RAM sizing
All the sizing guidance provided assumes dedicated resources provisioned exclusively for the DCE virtual machine. In practice, unless you are using a dedicated ESXi host for each DCE VM, it is likely that your DCE virtual machine will share a pool of resources with other virtual machines. The load of other virtual machines serviced by the same CPU and RAM resources can directly impact the performance of your DCE. This is especially true when you start to overprovision CPU and RAM resources, which can lead to increased latency and resource contention.
To better understand the health of your DCE virtual machine in its virtualization environment, use the resources in the DCE performance troubleshooting guide to analyze your system’s performance in real time and adjust the system accordingly.
Storage sizing
Successful deployment and operation of DCE relies on appropriately provisioned storage. There are two main components to size appropriately:
- Disk capacity
- Disk performance
The DCE virtual machine performs a write-heavy workload with high volumes of small I/O operations and is extremely sensitive to disk latency.
DCE latency events often appear as dropped sensor changes. This is reported by DCE in the nbc.xml log and is a good indication of storage contention issues. When making decisions about storage, choose a storage solution that is optimized for:
- I/O workloads that are 90% or more write-centric
- Writes that are mainly 1k block aligned
- Supporting <1ms latency for all read / write operations
With the above I/O pattern and latency requirements accounted for, the number of sensor updates per hour again comes into play to size the disk throughput appropriately. Use the following as guidance for how much storage throughput will be required. It is strongly encouraged that ALL of the following configurations use SSD drives.
- Up to 45,000 sensor updates per hour
  - Requires 2 MB/sec sustained write throughput
- Up to 180,000 sensor updates per hour
  - Requires 8 MB/sec sustained write throughput
- Up to 360,000 sensor updates per hour
  - Requires 16 MB/sec sustained write throughput
  - Storage caching of 1 GB or larger
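The throughput tiers above reduce to a small lookup. This is a sketch only; the function and table names are my own choosing.

```python
# Sketch mapping expected sensor updates per hour to the sustained write
# throughput tiers listed above (SSDs strongly encouraged at every tier).
WRITE_TIERS = [
    (45_000, 2),    # up to 45,000 updates/hour  -> 2 MB/sec
    (180_000, 8),   # up to 180,000 updates/hour -> 8 MB/sec
    (360_000, 16),  # up to 360,000 updates/hour -> 16 MB/sec (+ >= 1 GB cache)
]

def required_write_mb_per_sec(updates_per_hour):
    """Return the required sustained write throughput in MB/sec, or None
    beyond the Enterprise limit (split the load or contact support)."""
    for limit, mb_per_sec in WRITE_TIERS:
        if updates_per_hour <= limit:
            return mb_per_sec
    return None

print(required_write_mb_per_sec(100_000))  # -> 8
```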
For configurations that go above the limits outlined in the Enterprise server load section, consider splitting up the device load across multiple DCE servers. You can contact technical support to review specific sizing requirements on a case by case basis.
This is the recommended disk capacity deployment strategy:
- Deploy the DCE OVA and DO NOT adjust the size of the initial hard drive.
- Add a second drive to the virtual machine with a capacity of 250GB.
- Monitor the Storage Repository usage in the DCE thick client and use the purge notification settings to alert you when data retention is nearing current capacity.
- Add additional disks (never resize existing) to the DCE virtual machine in 250GB increments to meet data retention needs.
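The increment rule above implies a simple calculation: given how much additional Storage Repository capacity your data retention needs require, the number of 250 GB disks to attach is a ceiling division. A trivial sketch (the function name is mine):

```python
import math

# Sketch: disks are added in fixed 250 GB increments and existing disks are
# never resized, so extra capacity maps to a whole number of new disks.
def additional_disks(extra_capacity_gb, increment_gb=250):
    return math.ceil(extra_capacity_gb / increment_gb)

print(additional_disks(600))  # -> 3 (750 GB added in total)
```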
Additional DCE variables that impact storage performance
There are several DCE activities to take into consideration when you are sizing the DCE server. There are too many permutations of these variables to provide guidance on all of them. You can reference the DCE performance troubleshooting guide to better understand how to measure and tune this aspect of your system.
Variable activities that affect storage performance:
- Other virtual machines using the network storage
- Network latency
- Disk latency
- CPU latency
- Virtual sensors
- API Integrations (DCO, Web Services, etc.)
- Number of users logging into or logged into the DCE thick client at the same time
- Graphing and reporting usage
- Data purging
- Discovering a large quantity of devices in a short period of time