What exactly does the setting SERVER_MAX_CONCURRENT_TRANS do, and how does it interact with SERVER_POOL_TASKS? More importantly, when should one make use of these parameters?
These two parameters control the behavior of the server task's thread pool, as well as how the thread pool handles client transactions. These parameters only apply to NRPC communication, and hence only apply to the server task, 'nserver.exe' on W32, 'server' on Unix. These parameters do not affect any other task, such as HTTP, IMAP, Router, etc.
Note: The following document serves to update all previous literature regarding the explanation and use of the two INI parameters SERVER_MAX_CONCURRENT_TRANS and SERVER_POOL_TASKS. This document applies to R5, ND6 and ND7.
The implementation of Max_Concurrent_Trans stems from a time in the evolution of Domino prior to the use of thread pools (introduced in R5). Prior to R5, the Notes Core NRPC Server used a dynamically sized set of threads, creating a dedicated thread per client session (RPC session). Under heavy load, with hundreds or even thousands of connections, this resulted in hundreds or thousands of threads. Beyond the obvious memory cost of spawning so many threads, heavy context switching occurred on certain operating systems, resulting in an overtaxed CPU.
To address cases where too many threads were trying to do too many things at once, Notes introduced an artificial bottleneck, referred to as MaxConcurrentTrans. In this new configuration (beginning in R4), Notes scaled activity by allowing only a certain number of server threads to process a client transaction at one time, with each thread executing only one transaction at a time. By default, only 20 threads were allowed to execute transactions simultaneously; all other server threads were put to sleep until one of those 20 threads completed its transaction.
The construct used to implement this throttling mechanism is based on Domino Events, which are a modified form of Domino Semaphores used to signal threads when a certain condition has occurred. In this case, the condition is that of an active thread completing its current transaction, which allows a sleeping thread to wake up and execute its own transaction. In R5, Domino introduced another scaling feature, the thread pool. In using a thread pool, Domino moved away from the limiting configuration of a one-to-one ratio between client sessions (NRPC) and server threads. Instead, Domino now makes use of IOCP to create a small number of worker threads capable of handling a large number of client sessions.
In moving to a thread pool, Domino R5 implements a fixed thread pool size (on most platforms), avoiding the configuration that leads to context-switching problems. However, instead of doing away with Server_Max_Concurrent_Trans, this parameter is actually used indirectly to set the size of the thread pool.
How MaxConcurrentTrans Affects Thread Pool Size
Beginning in Domino R5, the core NRPC Server uses a fixed thread pool size, where each Notes Port receives its own listener thread and its own dedicated thread pool. Once created, this thread pool does not grow. The thread pool size for each Notes port is calculated as two times the value of MaxConcurrentTrans. By default MaxConcurrentTrans is 20, which gives a thread pool size of 40 threads (2 * 20 = 40). Keep in mind that this means EACH Notes port receives a thread pool of 40 threads, so if the customer has configured three Notes Ports, this will produce three thread pools of 40 threads each, resulting in a total of 120 threads.
The value of MaxConcurrentTrans is applied across all NRPC Notes Ports, which means that in the above case, even though 120 worker threads are created to handle client transactions, only 20 of them will be able to actively process their transactions, while the remaining 100 wait until signalled to proceed. Clearly, this can cause performance problems, especially on high-end systems with many concurrent connections.
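The arithmetic above can be sketched as follows. This is an illustrative model of the sizing rule described in this document, not actual Domino code:

```python
# Illustrative sketch of the R5-era NRPC sizing rule described above;
# not Domino source code.

def nrpc_thread_pool_sizes(max_concurrent_trans=20, notes_ports=1):
    """Return (threads per port, total threads, max active transactions)."""
    per_port = 2 * max_concurrent_trans       # pool size is 2 x MaxConcurrentTrans
    total = per_port * notes_ports            # each Notes port gets its own pool
    # MaxConcurrentTrans applies across ALL ports, so only this many
    # threads may actively process a transaction at any one time.
    return per_port, total, max_concurrent_trans

# One port, all defaults: a 40-thread pool, 20 of which can be active.
print(nrpc_thread_pool_sizes())               # (40, 40, 20)

# Three Notes ports: 120 worker threads, still only 20 active.
print(nrpc_thread_pool_sizes(notes_ports=3))  # (40, 120, 20)
```

The second case is the three-port example from the text: 120 worker threads exist, but 100 of them can only wait on the MaxConcurrentTrans event.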
In cases where the hardware is capable of handling more than 20 concurrent transactions, customers may adjust this throttle by using the parameters Server_Max_Concurrent_Trans and Server_Pool_Tasks. However, it is critical to remember that because the thread pool size is automatically calculated based on MaxConcurrentTrans, whenever you set Server_Max_Concurrent_Trans, you MUST also set Server_Pool_Tasks.
To understand why, you need only realize that, by default, any time you increase MaxConcurrentTrans, the thread pool size also increases (to twice the new value). Yet the reason for adjusting MaxConcurrentTrans is to make better use of the threads you already have. Therefore, the intent is to keep the thread pool size fixed while increasing MaxConcurrentTrans.
Change in ND 8.0.1
In 8.0.1 and above, all platforms other than zLinux, OS/390, and iSeries follow a new rule: Server_Pool_Tasks is set equal to the maximum concurrent transactions whenever that value is between 0 and 100 (whether set in the INI file or automatically calculated).
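The 8.0.1 rule can be modeled roughly as below. This is an assumption-laden sketch of the documented behavior, not Domino source; in particular, the boundary handling at 0 and 100 and the fallback for values outside that range are assumptions based on the pre-8.0.1 calculation:

```python
# Rough model of the 8.0.1+ pool-sizing rule described above (not Domino source).

EXEMPT_PLATFORMS = {"zLinux", "OS390", "iSeries"}

def effective_pool_tasks(platform, max_concurrent_trans):
    """Effective Server_Pool_Tasks when not explicitly set in the INI."""
    # 8.0.1+: pool size tracks MaxConcurrentTrans when it lies in (0, 100],
    # except on the exempt platforms (exact boundary handling is an assumption).
    if platform not in EXEMPT_PLATFORMS and 0 < max_concurrent_trans <= 100:
        return max_concurrent_trans
    # Assumed fallback: the older 2 x MaxConcurrentTrans calculation.
    return 2 * max_concurrent_trans

print(effective_pool_tasks("W32", 20))     # 20: pool matches MaxConcurrentTrans
print(effective_pool_tasks("zLinux", 20))  # 40: exempt platform, older rule
```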
Note: The Domino Servers on zSeries and zLinux differ in their configuration in the following way. Both allow unlimited MaxConcurrentTrans, and their thread pool sizes default to 100 threads for zSeries and 40 threads for zLinux. These thread pool sizes can be adjusted using the INI parameter Server_Pool_Tasks, but changing these values is not recommended.
What Should Server_Max_Concurrent_Trans and Server_Pool_Tasks be set to?
In nearly every case, the reason for changing MaxConcurrentTrans is to allow existing threads the ability to process more work. There is little reason for having 120 threads configured if only 20 of them can do anything at one time. Hence, one should always start with the desire to keep the current thread pool size unchanged.
- Step 1 - Set Server_Pool_Tasks=40 - this will retain the default size of the thread pool for each Notes Port. Keep in mind that the resulting number of worker threads is also determined by the number of Notes Ports. For example, three Notes ports at 40 threads each results in 120 worker threads. Taking the default size of 40 threads is the best starting point.
Now that we have the thread pool sizes fixed, we should set MaxConcurrentTrans equal to the number of worker threads. If we have 120 server threads, MaxConcurrentTrans should be set to 120 as well.
- Step 2 - Set Server_Max_Concurrent_Trans=120 - this will allow all 120 server threads to handle client transactions with no bottleneck.
Note - these numbers are examples only, and should be calculated based on the number of Notes Ports configured on the server. It is best to start with a thread pool size of 40 threads per pool, and increase only as needed. As one might expect, this configuration should be tested thoroughly before rolling into production, and will depend greatly on hardware and the nature of client traffic on the server. Creating too many threads will result in adverse memory and CPU usage.
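Putting the two steps together for the three-port example, the notes.ini entries would look like the following. The values are illustrative only and should be scaled to your own port count as described above:

```
Server_Pool_Tasks=40
Server_Max_Concurrent_Trans=120
```

With three Notes ports, 40 threads per pool yields 120 worker threads, and setting MaxConcurrentTrans to 120 lets all of them process transactions concurrently.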
It is highly recommended that one configure no more than 120 threads total across all Notes ports. This also means that Server_Max_Concurrent_Trans should not be set to more than 120. When changing the number of concurrent transactions, you should always adjust the Server_Pool_Tasks parameter as well.
When to change these parameters
In the above example, we have removed the MaxConcurrentTrans bottleneck altogether and allowed the number of server threads to control throughput on the server. What happens if we set MaxConcurrentTrans higher than the number of worker threads? Nothing. In theory this would allow more concurrent transactions to be processed, but since we have not changed the number of threads, there is no change in throughput; the number of server threads simply becomes the limiting factor.
In most environments, you will not need to adjust these settings. You should only adjust these settings if IBM Support has direct evidence that the MaxConcurrentTrans bottleneck is adversely affecting the performance of a heavily loaded server. Such evidence includes the collection of call stacks via NSD during a period of severe slowdown. Support will examine the call stacks of various server worker threads to determine if the MaxConcurrentTrans threshold has been saturated.
Avoid Using Server_Max_Concurrent_Trans=-1
In Domino R5 and ND6, the server will crash when Server_Max_Concurrent_Trans is set to "-1", which indicates an unlimited number of concurrent transactions. The reason for this crash should now be clear: unless Server_Pool_Tasks is also set, the number of threads is calculated as twice the value of MaxConcurrentTrans. If this value is unlimited, the server attempts to spawn an unlimited number of threads, which results in a low-memory condition and a crash once the thread count reaches roughly 1800 threads (with the exception of Domino for zSeries and zLinux, which do not use this algorithm). This behavior has been addressed in ND7.
In order to avoid this crash, you should always set Server_Pool_Tasks, and set Server_Max_Concurrent_Trans to be equal to Server_Pool_Tasks multiplied by the number of Notes Ports. Do NOT set Server_Max_Concurrent_Trans to unlimited.
A Note About Server_Pool_Tasks
In ND6, the parameter Server_Pool_Tasks will create the same size thread pool for each Notes Port, regardless of its use. It is currently not possible in ND6 to configure each Notes Port with its own thread pool size in order to reduce overall number of threads. In ND7, a new INI parameter has been introduced that allows an administrator to individually size the NRPC thread pool for each Notes Port. For more information on this new parameter, refer to document # 1220856 titled How to set the NRPC thread pool sizes for each Notes port in Notes Domino 7.