Domino Speed Trial

Ian Tree  05 August 2006 13:08:23


Background


The other day I was faced with doing some analysis of SMTP mail routing going through a corporate SMTP mail hub. While the results of the analysis were interesting, at least for the customer, I was also struck by the performance of the Domino servers that did the analysis, so much so that I thought I would share the numbers with you.
The task was to analyse the routing/addressing paths being used in an SMTP hub that was running SendMail. The SendMail text logs were copied across to a Domino server and processed by an agent that built a single Notes document for every message that passed through the SendMail hub. Various categorised views were then built on top of the raw message data to give counts of documents coming from and going to particular domains and collections of domains, and following particular routes.
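The import agent itself isn't shown here, but in outline it would look something like the sketch below, written as a Domino Java agent (the original may equally well have been LotusScript). The log path, form name and item names are illustrative assumptions, and the parsing is reduced to storing the raw log line plus a crudely extracted sender address:

    import lotus.domino.*;
    import java.io.BufferedReader;
    import java.io.FileReader;

    public class JavaAgent extends AgentBase {
        public void NotesMain() {
            try {
                Session session = getSession();
                AgentContext ctx = session.getAgentContext();
                Database db = ctx.getCurrentDatabase();   // the analysis database

                // Hypothetical location of one copied SendMail text log
                BufferedReader log = new BufferedReader(new FileReader("c:\\logs\\maillog.1"));
                String line;
                while ((line = log.readLine()) != null) {
                    // One Notes document per message record in the log
                    Document doc = db.createDocument();
                    doc.replaceItemValue("Form", "Message");   // illustrative form name
                    doc.replaceItemValue("RawLine", line);
                    int p = line.indexOf("from=");
                    int q = (p >= 0) ? line.indexOf(",", p) : -1;
                    if (p >= 0 && q > p) {
                        // Crude, illustrative extraction of the sender address
                        doc.replaceItemValue("From", line.substring(p + 5, q));
                    }
                    doc.save(true, false);
                    doc.recycle();   // keep memory use flat over a long run
                }
                log.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

Two copies of an agent along these lines, one per log file, can be pointed at the same database, which is how the parallel run described below was done.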
The analysis was carried out on a modern Wintel box (dual processor, 2 GB RAM) that was running two Domino partitions; not much else was running on the server. The Domino version was 6.5.3.

Processing


There were two log files available for the initial analysis run, so these were processed in parallel into the same database by two copies of the "import" agent. On the initial run one agent ran for 11 hrs 35 mins and created 1,589,287 documents. The other agent (executing at the same time) ran for 6 hrs 18 mins and generated 642,179 documents. So we now had a database with 2,231,466 documents, which took a total run time of 17 hrs 53 mins to process in an elapsed time of less than 12 hrs. The database was about 1.2 GB in size.

UPDALL was then run against the database with the -C option (build all unbuilt indexes); this took 4 hrs 12 mins to complete the building of 10 different views across the data, and the database size grew to 3.1 GB. Opening the database from a client took about 6 seconds and switching between views took about 4 seconds, so it was perfectly usable.
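For reference, the UPDALL run against a single database would have been started from the server console with something along these lines (the database file name here is an illustrative assumption):

    load updall smtpstats.nsf -C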

Observations


1)  You don't need to resort to an RDBMS for processing this type of data at volumes of 2,000,000+ documents; Domino can handle it very nicely, thank you.

2)  There was no sign of any approaching limit (wall) at the 2,200,000 documents processed.

3)  UPDALL -C is an often forgotten technique for "preparing" a large Domino database for production use.

4)  Indexing on databases of the 2,000,000 document size is perfectly manageable, provided of course that you don't open a view from a client in order to build the indexes.

