NetX Sizing Guide

Configurations

Hardware requirements depend on many variables: maximum concurrent users, maximum asset size, maximum image conversion load, size of the asset repository, and acceptable latency. Additionally, these applications rely on many sub-components, each of which can create bottlenecks under different circumstances, including the Application Server, the database (over JDBC), the image processing engine (such as Adobe InDesign Server, Equilibrium MediaRich, or Telestream Vantage, usually over HTTP/S), and potentially other components. These applications can be configured in two main ways:

  • On a single hardware configuration.
  • On many server machines, in a distributed fashion.

Server Hardware Requirements

Single server deployment (SMB/departmental use)

Component | Minimum | Recommended | Notes
Processor | 4 cores | 4 cores |
Memory | 8GB | 16GB | 8GB RAM minimum; 16GB+ is recommended; larger installations may require more than 16GB of RAM.
Local disk storage | 500GB | 1TB | A minimum of 500GB of local storage is required for the database, logs, and application only; SAS or SSD is highly recommended. Asset/constituent storage space will vary depending on the expected size of your repository. If you want to use NAS or SAN for asset storage, gigabit connectivity is required. For best performance, the asset repository and related appFiles should be stored on dedicated physical/virtual volumes; NFS, SMB, or other mounted volumes are supported.

Enterprise multi-server deployment (multiple servers can be used for each role, as necessary for scaling)

Server role | CPU | Memory | Local disk storage
Application (NetX) | 4 cores | 16GB | 100GB
Zoom | 4 cores | 16GB | 250GB
Video | 32 cores | 64GB | 500GB
Search engine | 16 cores | 32GB | 500GB SSD
Database | 16 cores | 64GB | 500GB

Number of nodes

To scale horizontally, NetX can be deployed on multiple "nodes": additional running instances of the NetX core application, all of which share the same central database and underlying storage. These nodes are often load-balanced, but they don't have to be. Often, "utility" nodes are deployed to handle specific tasks, typically importing or external data source syncing. Separating these tasks from the load-balanced cluster lets them run on a node that end users never touch; users access only the load-balanced nodes and so are unaffected by the work running on the utility nodes.
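As an illustration of this split (the node names and roles below are hypothetical, not NetX configuration), a deployment might track which nodes sit behind the load balancer and which are reserved for utility work:

```python
# Hypothetical node inventory for a horizontally scaled deployment.
# All nodes share the same central database and asset storage.
NODES = {
    "netx-app-01": "user",      # end-user traffic, behind the load balancer
    "netx-app-02": "user",      # end-user traffic, behind the load balancer
    "netx-util-01": "utility",  # imports and external data source syncs
}

load_balanced_pool = [name for name, role in NODES.items() if role == "user"]
utility_nodes = [name for name, role in NODES.items() if role == "utility"]

print(load_balanced_pool)  # only these nodes receive end-user sessions
print(utility_nodes)       # background tasks run here, away from end users
```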

Determining how many nodes to recommend for a particular installation can be challenging, as there are many factors to take into account. Additionally, performance requirements can be subjective; any installation can run on a single node, just as any desktop computer can run on a single-core processor. The question is: how performant does NetX need to be?

Next, it is important to understand that NetX internally queues tasks. For example, if you import one thousand JPG asset files, NetX will ingest a number of them concurrently, but it will not be overwhelmed by having to import all 1,000 at once. Instead, it's a question of import duration: how long is too long for any particular deployment? This is the subjective aspect of determining how many nodes are required.
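As a conceptual sketch only (this is not NetX code; the concurrency limit and file names are assumptions for illustration), a bounded worker pool captures the idea: a long queue of imports is drained a few files at a time, so the real question is total duration, not overload.

```python
from concurrent.futures import ThreadPoolExecutor

CONCURRENT_IMPORTS = 4  # assumed per-node concurrency, purely for illustration
queued_files = [f"asset_{i:04d}.jpg" for i in range(1000)]  # the queued import

def ingest(path: str) -> str:
    # Placeholder for the real work: checksums, previews, metadata extraction, etc.
    return f"ingested {path}"

# Only a handful of files are processed at once; the rest wait in the queue,
# so the node is never asked to handle all 1,000 simultaneously.
with ThreadPoolExecutor(max_workers=CONCURRENT_IMPORTS) as pool:
    for result in pool.map(ingest, queued_files):
        pass  # each file completes in turn; total import duration is what varies
```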

The table below provides a sizing guide based on years of NetX experience with customer installations, taking into account user experience around import duration, downloads, task operations, and the like.

Condition | 1 node | 2 nodes | 3 nodes | 4 nodes | 6 nodes | 8 nodes
Workflows | 5 | 10 | 24 | 50 | 100 | 200
Permissions | 25 | 50 | 100 | 250 | 500 | 1,000
Folders | 25,000 | 50,000 | 100,000 | 250,000 | 500,000 | 1,000,000
Peak download rate | 25GB/day | 50GB/day | 100GB/day | 250GB/day | 500GB/day | 1TB/day
Peak repurposing rate | 10GB/day | 25GB/day | 50GB/day | 100GB/day | 250GB/day | 500GB/day
Peak ingestion rate | 10GB/day | 25GB/day | 50GB/day | 100GB/day | 250GB/day | 500GB/day
Assets | 100,000 | 250,000 | 500,000 | 1,000,000 | 2,000,000 | 4,000,000
Peak concurrent user sessions | 10 | 25 | 50 | 100 | 200 | 1,000
External data sync sources | 1 | 2 | 3 | 4 | 5 | 6

If any one of these thresholds is exceeded, NetX recommends adding nodes to your deployment. Additionally, if any three conditions are close to their thresholds, it's recommended that you also consider adding another node.
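This rule of thumb can be expressed as a simple check. The sketch below encodes the table above along with the two conditions (any single threshold exceeded, or three or more conditions close to their thresholds); the 80% margin used for "close to" is an assumption for illustration, not a NetX-published figure.

```python
# Thresholds from the sizing table above, per node tier.
NODE_TIERS = [1, 2, 3, 4, 6, 8]
THRESHOLDS = {
    "workflows":             [5, 10, 24, 50, 100, 200],
    "permissions":           [25, 50, 100, 250, 500, 1000],
    "folders":               [25_000, 50_000, 100_000, 250_000, 500_000, 1_000_000],
    "peak_download_gb_day":  [25, 50, 100, 250, 500, 1000],
    "peak_repurpose_gb_day": [10, 25, 50, 100, 250, 500],
    "peak_ingest_gb_day":    [10, 25, 50, 100, 250, 500],
    "assets":                [100_000, 250_000, 500_000, 1_000_000, 2_000_000, 4_000_000],
    "peak_concurrent_users": [10, 25, 50, 100, 200, 1000],
    "external_sync_sources": [1, 2, 3, 4, 5, 6],
}
NEAR_MARGIN = 0.8  # assumed: "close to the threshold" means at least 80% of it

def recommended_nodes(workload: dict) -> int:
    """Return the smallest node count whose thresholds accommodate the workload."""
    for tier, nodes in enumerate(NODE_TIERS):
        exceeded = [k for k, v in workload.items() if v > THRESHOLDS[k][tier]]
        near = [k for k, v in workload.items() if v >= NEAR_MARGIN * THRESHOLDS[k][tier]]
        # Step up a tier if any one threshold is exceeded,
        # or if three or more conditions are close to their thresholds.
        if not exceeded and len(near) < 3:
            return nodes
    return NODE_TIERS[-1]

# Example: a hypothetical mid-sized deployment profile.
print(recommended_nodes({
    "workflows": 12, "permissions": 60, "folders": 40_000,
    "peak_download_gb_day": 80, "peak_repurpose_gb_day": 20,
    "peak_ingest_gb_day": 30, "assets": 300_000,
    "peak_concurrent_users": 40, "external_sync_sources": 2,
}))  # -> 3
```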

Initial ingestion 

For a large initial ingestion, you may want to consider adding NetX nodes, perhaps temporarily. Ingestion time drops roughly linearly as you add nodes and import across all of them simultaneously. For example, if you ballpark your ingestion at 30 days on a single NetX instance, you can reduce that to 10 days simply by running 3 NetX nodes.
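As a quick sketch of that arithmetic (the 30-day figure is just the hypothetical estimate from the example above):

```python
def estimated_ingestion_days(single_node_days: float, node_count: int) -> float:
    # Initial ingestion scales roughly linearly when the import is split across nodes.
    return single_node_days / node_count

print(estimated_ingestion_days(30, 1))  # 30.0 days on a single node
print(estimated_ingestion_days(30, 3))  # 10.0 days across three nodes
```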

Commercial transcoding engines

The Adobe InDesign Server, Equilibrium MediaRich, and Telestream Vantage engines should run on a dedicated server (or VM), separate from the main Application Servers. System requirements vary by vendor. See NetX System Requirements for more information.

Staging and development instances

NetX highly recommends deploying Staging and/or Development DAM instances, to be used for testing patches, upgrades, configurations, and component changes. Server requirements for these environments vary greatly.

High availability and scaling

NetX can be scaled by using the Hydra Data Manager and deploying multiple instances of the DAM. Furthermore, these instances can be load-balanced using third-party network load balancing (NLB) technologies, including products from Microsoft, F5, Kemp, Barracuda, and many others. The NLB must support “sticky sessions”, which consistently route each user to the same production instance.
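To illustrate what “sticky sessions” means (this is a conceptual sketch, not the configuration of any particular NLB product), the balancer derives a stable key from each client, such as the source IP or a session cookie, so the same user always lands on the same instance:

```python
import hashlib

INSTANCES = ["netx-prod-01", "netx-prod-02"]  # hypothetical instance names

def pick_instance(client_key: str) -> str:
    # Hash a stable per-client key (e.g. source IP or session cookie) so the
    # same client is always routed to the same production instance.
    digest = hashlib.sha256(client_key.encode()).digest()
    return INSTANCES[int.from_bytes(digest, "big") % len(INSTANCES)]

# The same client key always maps to the same instance.
assert pick_instance("203.0.113.7") == pick_instance("203.0.113.7")
```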

Example topology

The diagram below illustrates a larger deployment. Note that this deployment includes multiple load-balanced NetX instances, proxied through a network "DMZ". All of this is optional; the diagram is provided only to illustrate one example.

Specific recommendations

Please contact your account manager or sales@netx.net for more information on server recommendations tailored to your specific needs.

