Home for HMNL Enterprise Computing


Recent Articles

Are GIF Images Really Limited To 256 Colours?

Ian Tree  02 December 2015 21:16:26 - Amsterdam
There has been quite some debate on this topic over the years, much of it peppered with misunderstanding, misinformation and downright netbollox. In this article we take a fresh look at the question from the point of view of the words and intentions of the GIF specifications and the conformance of implementations in different codecs.


There are two versions of the GIF specification: the first, identified as “87a”, was issued in 1987 and was followed by the “89a” version in 1989.
The 1987 specification describes two key features that have an impact on the maximum-colours question: first, an image file or stream may contain multiple separate images; second, each image can specify a separate colour map. Appendix D of the specification makes it clear how codecs are expected to process a stream that is encoded with separate images.

"Since a GIF data stream can contain multiple images, it is necessary to describe processing and display of such a file.  Because the image descriptor allows for placement of  the image within  the logical screen, it is possible to define a sequence of images that may each be a partial screen, but in total fill the entire screen.   The guidelines for handling the multiple image situation are:

1.  There is no pause between images.  Each is processed immediately as seen by the decoder.

2.  Each image explicitly overwrites any image already on the screen inside of its window.  The only screen clears are at the beginning and end of the GIF image process.   See discussion on the GIF terminator.”

It is clear that the specification supports composite images, both through tiling and through overlaying. Given that each image (or tile) supports a separate colour map containing up to 256 individual colours, even the 87a specification supports images with more than 256 colours, provided that the colour distribution in the image is such that it can be broken down into sub-images (or tiles) with each tile limited to 256 colours. In theory any image with any number of colours can be decomposed into tiles or overlays (or a combination of both) using sub-images as small as a single pixel; however, such an implementation may not be realistic.
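To make the tiling argument concrete, the following sketch (Python, purely illustrative; the function name and the image representation are ours, not part of any specification) checks whether an image can be decomposed into a grid of sub-images that each satisfy the 256-colour limit:

```python
def tiles_fit_87a(pixels, tile_w, tile_h):
    """Check whether an image (a 2D grid of RGB tuples) could be encoded
    as a grid of GIF 87a sub-images, each carrying its own colour map
    of at most 256 entries."""
    height, width = len(pixels), len(pixels[0])
    for top in range(0, height, tile_h):
        for left in range(0, width, tile_w):
            tile_colours = {pixels[y][x]
                            for y in range(top, min(top + tile_h, height))
                            for x in range(left, min(left + tile_w, width))}
            if len(tile_colours) > 256:
                return False  # this tile needs more palette entries than the format allows
    return True
```

At a tile size of 1 x 1 every image trivially passes the check, which is the single-pixel limiting case mentioned above.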

It must be said, however, that extending the colour palette of an image through the use of composite tiles or overlays was not foreseen at the time the specification was written. The part of the specification that describes the use of Local (per-image) colour maps states the following.

“A Local Color Map is optional and defined here for future use.”

The 89a specification, published in 1989, was intended to extend the original 87a specification, as can be seen from the foreword of that document.

“This document defines the Graphics Interchange Format(sm). The specification given here defines version 89a, which is an extension of version 87a.”

Although the text of Appendix D from the 87a specification does not appear in the 89a version, no text negates the continued validity and applicability of the appendix. Multiple (sub-)images in a single GIF stream or file are still recognized, as can be seen from the section defining the “Image Descriptor” block.

“Exactly one Image Descriptor must be present per image in the Data Stream.  An unlimited number of images may be present per Data Stream.”

The 89a specification introduced the (per-image) Graphic Control Extension (GCE); the generic mechanism for implementing such extensions had already been defined in the 87a specification.

“The Graphic Control Extension contains parameters used when processing a graphic rendering block. The scope of this extension is the first graphic rendering block to follow. The extension contains only one data sub-block.

This block is OPTIONAL; at most one Graphic Control Extension may precede a graphic rendering block. This is the only limit to the number of Graphic Control Extensions that may be contained in a Data Stream.”

It should be noted that although the GCE block is optional, the “Version Number” section of the 89a specification states the following.

“The encoder should make every attempt to use the earliest version number covering all the blocks in the Data Stream; the unnecessary use of later version numbers will hinder processing by some decoders.”

The GCE introduced three new fields, all of which are relevant to the discussion of the colour limit in GIF images. The first field is the “Delay Time”, which is described as follows.

“Delay Time - If not 0, this field specifies the number of hundredths (1/100) of a second to wait before continuing with the processing of the Data Stream. The clock starts ticking immediately after the graphic is rendered.”

The delay time field allows a stream containing multiple images to be rendered as an animation. There is a clear implication that a value of zero for this field elicits the behaviour described in Appendix D of the 87a specification: the next image in the stream is rendered immediately after the current one.

The second field of interest is the disposal method, carried in 3 bits of a packed field, which is described as follows.

“Disposal Method - Indicates the way in which the graphic is to be treated after being displayed.

Values :        0 -   No disposal specified. The decoder is not required to take any action.

               1 -   Do not dispose. The graphic is to be left in place.

               2 -   Restore to background color. The area used by the graphic must be restored to the background color.

               3 -   Restore to previous. The decoder is required to restore the area overwritten by the graphic with what was there prior to rendering the graphic.

                4-7 -    To be defined.”

The value of 1 “Do not dispose” follows the processing described in Appendix D of the 87a specification.

The last new field of interest is the “Transparency Index” and the corresponding flag in the packed fields, described as follows.

“Transparency Index - The Transparency Index is such that when encountered, the corresponding pixel of the display device is not modified and processing goes on to the next pixel. The index is present if and only if the Transparency Flag is set to 1.”

It may not be immediately obvious how this field relates to the maximum-colours question. Allowing a transparent colour to be used in an image removes any restriction on colours and their distribution: any image with any number of colours can be decomposed into a series of overlaid images, using a transparent colour in all but the first image.
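A sketch of that decomposition (Python, illustrative only; colours are represented as opaque values and None stands in for the transparency index):

```python
def decompose_with_transparency(pixels, max_per_frame=255):
    """Split an image into overlay frames, each using at most
    max_per_frame opaque colours (one palette slot is kept free
    for the transparency index); None marks a transparent pixel."""
    colours = sorted({c for row in pixels for c in row})
    frames = []
    for i in range(0, len(colours), max_per_frame):
        batch = set(colours[i:i + max_per_frame])
        frames.append([[c if c in batch else None for c in row]
                       for row in pixels])
    return frames

def composite(frames):
    """Re-render the frames in order, as an 89a decoder would with
    'do not dispose' disposal and a transparent index set."""
    out = [row[:] for row in frames[0]]
    for frame in frames[1:]:
        for y, row in enumerate(frame):
            for x, colour in enumerate(row):
                if colour is not None:
                    out[y][x] = colour
    return out
```

Compositing the frames in order reproduces the original image exactly, whatever the number of colours.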

In conclusion, the GIF specifications always have, and still do, support mechanisms that allow images to be encoded with an unlimited number of colours.


This section looks at the implementation of the specification in a few illustrative codecs to see whether they constrain the maximum number of colours in practical use.

GIF 87a Tiled Image Behaviour

A GIF image was prepared with 200 (20 x 10) images, each image 20 x 20 pixels tiling a 400 x 200 logical screen. Each tile uses a different colour index in a Global Colour Map. The image is marked with a version identifier of “87a”.  The image is then viewed in different software applications to see how their respective codecs perform.

MS Edge and MS IE 11 both display only the first of the 200 images; interestingly, MS Paint displays the complete 200-tile image. Google Chrome (on Windows 10) also displays the complete 200-tile image; however, it treats the image stream as an animation and enforces a default minimum delay time on each image, taking approximately 20 seconds to render the complete image (a delay time of 10/100ths of a second per image). Mozilla Firefox displays the complete 200-tile image without any rendering delays.

Of the codecs tested only Mozilla Firefox conforms to Appendix D of the 87a specification.
GIF 87a Image - 200 Tiles

GIF 89a Tiled Image Behaviour

A GIF image was prepared with 200 (20 x 10) images, each image 20 x 20 pixels tiling a 400 x 200 logical screen. Each tile uses a different colour index in a Global Colour Map and specifies a delay time of 0 in the GCE. The image is marked with a version identifier of “89a”.  The image is then viewed in different software applications to see how their respective codecs perform.

MS Edge, MS IE 11 and Google Chrome all display the complete 200-tile image as an animation with an enforced delay on each image. All three codecs took about 20 seconds to render the complete animation, implying a default delay time of 10/100ths of a second per image. Mozilla Firefox displays the complete 200-tile image without any rendering delays.

Again only the Mozilla Firefox codec conformed to the expectation that a zero delay time in a GCE would induce the behaviour as laid out in Appendix D of the 87a specification.
GIF 89a Image - 200 Tiles

GIF89a Tiled Animation

A GIF 89a image was prepared identical to that used in the previous experiment, except that the delay time in each GCE was set to 20 (20/100ths of a second). The image is then viewed in different software applications to see how their respective codecs perform.

All four codecs displayed the complete 200 tile image, taking 40 seconds to complete the rendering process. All codecs conform to the behaviour expected from the 89a specification.
GIF 89a Animation - 200 Tiles

GIF 89a Tiled 512 Colour Image Behaviour

A GIF 89a Image was prepared with two side-by-side images (tiles), each tile was populated with a 16 x 16 grid of squares, each square 20 x 20 pixels of a different colour, thus the complete image should render 512 different colours. One tile used a Global Colour Map and the other a Local Colour Map. Both images had a GCE with a delay time set to zero. The image is then viewed in different software applications to see how their respective codecs perform.

MS Edge, MS IE 11 and Google Chrome all display the complete image with all 512 colours; there was no perceptible delay visible during rendering. From the previous examples, these three codecs will have treated the image as an animation and enforced a 10/100ths of a second delay between the first and second tile, but this delay is not noticeable during rendering. Mozilla Firefox rendered the image correctly with no delay in the rendering.

The behaviour of all codecs was as would be expected from the previous experiments.
GIF 89a Image - 512 colours

Off Topic

The GIF 89a specification introduced the Application Extension, which provides a mechanism for individual applications to store additional persistent information in the GIF image stream. This extension mechanism was used by Netscape to add a field that controls how many times the render mechanism will “play” an animation. The extension is recognised by all of the codecs in common use, and all agree that a value of zero for the field means that the animation should be played continuously until the image is dismissed. They also agree that if the extension is not present then the animation is played only once. They do not agree on the meaning of a non-zero value: some use the field as an iteration counter for the render loop, where a value of one causes the animation to be played once, while others take it as a repeat counter, where a value of one causes the animation to be played once and repeated once, therefore played twice. Unfortunately, the documentation is unclear on which interpretation is correct; both are equally valid. It should be noted that the most common uses are to play once or to play continuously, so omitting the extension to play once and setting a count of zero to play continuously will work in most codecs.


In theory a GIF image can support an unlimited number of colours (or rather, a number limited only by the supported colour encoding, which is 24-bit RGB - RGB888). However, the implementation in a number of codecs in common use does put practical limits on the number of different colours that can be displayed, although this limit is far in excess of 256. The rendering time for a GIF image stream that contains multiple images as tiles or overlays is entirely dictated by the minimum default image delay time set by non-conforming codecs. The delay imposed by all of the non-conforming codecs that were examined is 10/100ths of a second. It is therefore possible to budget for that delay and set a maximum number of colours that can be used in practice accordingly. If a total render time of three seconds is considered acceptable, then this allows for 30 separate images in the stream, permitting 30 * 255 = 7,650 different colours to be comfortably displayed in an image.
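The budgeting arithmetic can be captured in a one-line function (Python; the function is ours and simply encodes the reasoning above, working in the hundredths of a second used by GIF delay fields):

```python
def max_practical_colours(budget_hundredths, delay_hundredths=10,
                          colours_per_image=255):
    """Colours comfortably displayable within a total render-time
    budget, given the fixed per-image delay that the non-conforming
    codecs were observed to impose."""
    images = budget_hundredths // delay_hundredths
    return images * colours_per_image
```

For a three-second budget, max_practical_colours(300) gives 7,650, matching the figure above.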

MS Edge and MS IE 11 are considered non-conforming to the GIF 87a specification as they do not recognize multiple images in a single image stream. Google Chrome is considered non-conforming to the GIF 87a specification as it does not follow Appendix D and treats the multiple in-stream images as an animation and imposes a minimum image delay time. Mozilla Firefox is considered as conforming to the GIF 87a specification.

MS Edge, MS IE 11 and Google Chrome are all considered as non-conforming to the GIF 89a specification as they all impose a minimum image delay time for in-stream images that have a delay time of zero specified in the Graphics Control Extension block. The Mozilla Firefox codec is considered as conforming.

It must be said that, given the stricter conformance of the Mozilla Firefox codec in the areas considered, it would also get my vote for the most logical implementation of the Netscape Application Extension iteration count.

Heavy Lift Computing

Ian Tree  28 August 2013 13:23:10 - Eindhoven

Heavy Lift Computing

Heavy Lift Computing

What is it?

We use the term "Heavy Lift Computing" to denote any IT project that has one or more of the following characteristics.
  • Large data volumes to be moved and/or transformed
  • Large numbers of objects that must undergo multiple state changes
  • Rigid deadlines
  • The "Mission Impossible" tag

Typical of these types of projects would be the transformations associated with mergers and acquisitions (M&A), corporate spin-offs (de-mergers), IT platform migrations, IT infrastructure consolidations and changes in operational IT sourcing models. In all of these projects you typically find a fixed deadline for completion driven by operational or business imperatives along with an undefined number of transformations on an ill-defined volume of data.

Some research and development in the area of "High Throughput Computing" (HTC) has a bearing on the discipline of HLC; however, the problem domain in the enterprise is wholly different from that in the scientific arena and introduces a different set of priorities and drivers.

The Domino eXplorer tool set has been designed to address many of the technical challenges involved in Domino based HLC projects.







Autonomic Throughput Optimisation

Ian Tree  28 August 2013 12:33:35 - Eindhoven

Autonomic Throughput Optimisation

The Domino eXplorer (DX) was developed as a means of facilitating the rapid development of C++ tools for use in projects that involve high volumes of data transformation. DX has been, and continues to be, developed for use across a wide range of Domino versions and platforms. The tool set is also appropriate for Business Intelligence applications that have to process "Big Data" in Notes databases. DX is also used as a research tool to investigate various aspects of Autonomic Systems, in particular Autonomic Throughput Optimisation.

Using Domino eXplorer (DX) applications in typical "Heavy Lift Computing" projects can involve a lot of performance tuning in order to get applications to meet target throughput objectives; this can be a technically challenging and time-consuming task.

The goal of the research is to find a model and algorithms that allow the DX kernel to maximise the throughput of an application autonomically, independent of the current workload profile and the state of the execution environment. The kernel uses a very simplistic component model to explore different aspects of throughput optimisation. In the model, the application is viewed as an agent that generates a stream of requests, each representing a unit of workload. The application passes these requests to the kernel, which executes them on its behalf; the execution of the requests results in resources being consumed from the execution environment. The following properties influence application throughput in the model.

1. Application design.
2. The profile of the units of workload presented in each request.
3. Constraints, contention and limits on resource consumption from the execution environment.
4. The multiprogramming level in the kernel.

Application Design

The kernel influences the application design through the API that it exposes to the application. Obviously the application design cannot be varied dynamically during execution, so this property is not taken into consideration for autonomic optimisation.

Unit of Workload Profile

While the application and kernel have little influence over the profile of a primitive unit of workload (in terms of wait states, I/O demand, memory demand and CPU demand), we already recognise in the DX3 kernel that a number of primitive units of workload should be multiplexed into each request that is passed to the kernel for execution. Some applications that use the DX3 kernel determine an appropriate multiplex level at execution time according to different factors. The multiplex level is controlled in the application layer and is not communicated to the kernel.

The current research version of the kernel, DX3R, takes a slightly different approach that requires only minimal code change in the application layer. All requests are submitted to the kernel containing only a single primitive unit of workload, and the kernel builds these unit requests into "trains"; this mechanism replaces the multiplexing currently performed in the application layer. The application must indicate to the kernel that the request being submitted can be multiplexed with the previous request. The execution code iterates over the train of requests assembled by the kernel rather than the collection of primitive units of workload contained in the request.
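As an illustration of the train mechanism (Python; representing a request as a (payload, can_join_previous) pair is our simplification, not the DX3R API):

```python
def build_trains(requests, max_train_len):
    """Coalesce single-unit requests into 'trains'. Each request is a
    (payload, can_join_previous) pair; a new train is started whenever
    a request cannot be multiplexed with its predecessor or the
    current train is full."""
    trains = []
    for payload, can_join in requests:
        if trains and can_join and len(trains[-1]) < max_train_len:
            trains[-1].append(payload)  # extend the current train
        else:
            trains.append([payload])    # start a new train
    return trains
```

The execution layer would then iterate over each train in turn, exactly as the text describes.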

Execution Environment

Operating systems already have enough problems trying to understand and manage the constraints, contention and limits of resource consumption in the execution environment. The DX kernel makes no attempt to measure or manage any aspect of the behaviour of the execution environment. However, the kernel is aware that changes within the execution environment will directly affect optimal throughput levels.

Multiprogramming Level

The current DX3 production kernel does not dynamically vary the number of worker threads in the thread pool that are used to satisfy the requests generated by the application; the number is determined by the application (usually through a command line parameter). The research DX3R kernel does, however, have the capability to actively manage the number of available threads on demand.

Current Research Model

The current research model can manipulate both the multiprogramming level and the size of request "trains" (the unit-of-workload multiplex level). The kernel looks at the recent history of throughput, measured as the number of primitive units of workload completed in a unit of time, and adjusts the workload multiplex level and the multiprogramming level in order to find and maintain the optimal throughput level in reaction to changes in the workload characteristics and execution environment. The kernel expects that for any application at a particular point in time, and for a limited time interval, there will exist a "sweet spot" at which the settings maximise throughput, as illustrated below.

Sweet Spot

Algorithms are currently being investigated that continuously search for the sweet spot based on the recent history of settings of the workload multiplex and multiprogramming levels in combination with the recent history of throughput achievement.
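One of the simplest candidate algorithms is a greedy coordinate search; the sketch below (Python, our illustration rather than the DX3R implementation) nudges each knob one step at a time and keeps whichever move improves measured throughput:

```python
def find_sweet_spot(measure, mpl=1, train=1, max_rounds=25):
    """Greedy search for the (multiprogramming level, train size)
    sweet spot: try moving each knob up or down one step and keep
    any move that improves the measured throughput."""
    best = measure(mpl, train)
    for _ in range(max_rounds):
        improved = False
        for d_mpl, d_train in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cand_mpl = max(1, mpl + d_mpl)
            cand_train = max(1, train + d_train)
            score = measure(cand_mpl, cand_train)
            if score > best:
                best, mpl, train, improved = score, cand_mpl, cand_train, True
        if not improved:
            break  # local optimum: the sweet spot for the current conditions
    return mpl, train
```

In a live kernel, measure would be the recent throughput history rather than a pure function, and the search would keep running so that the settings track changes in the workload and the execution environment.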


Manageable Extensions in Domino

Ian Tree  08 May 2012 10:11:36 - Eindhoven

Domino Extensions


The Domino Extension Manager (and DSAPI) functionality can be extremely useful; it provides a mechanism for intercepting NSF calls in the Domino Server or Notes Client. The code of an Extension Manager runs as part of the Server or workstation and can perform low-level manipulation of a wide range of Domino functions. Because Extension Manager code is loaded and unloaded as part of the Server or Workstation core functionality, such modules tend to be very rigidly constructed and configured only through changes to their code. The following presents a model for constructing an Extension Manager that is dynamically configurable and controllable, and that shares common code paths between an Extension Manager and a DSAPI.

Design Pattern

The Domino Extension Manager is constructed following the usual pattern, except that all control and configuration variables are stored in a shared memory segment in the DLL and are exposed through a series of additional entry points in the DLL so that they can be read and written. A Server AddIn task is written that manipulates all of the control and configuration settings on demand, via calls to the DLL entry points. Using this pattern, functionality in the extension manager can be enabled, disabled and re-configured at any time, without even restarting the Domino Server.

Simple eh! A little more detail follows.

1. Dynamically Configurable and Controllable Extension Manager and DSAPI Modules

Extension Manager and DSAPI modules are implemented in Domino as DLLs (Shared Libraries on UNIX). These are usually coded in a quite rigid manner, often with all of the variable environmental data hard-coded in the DLL. This rigid pattern does not meet the needs of the XAM implementation, where a set of filter rules needs to be changed dynamically. It is also useful to be able to modify the operating state of such modules on the fly; traditionally this involves changing ini settings and restarting the HTTP task or the entire server.

Design Pattern

a. The Extension Manager or DSAPI DLL is implemented along with a "paired" controller add-in task.

b. All stateful control variables in the DLL are implemented as static variables in a shared data segment; this ensures that all copies of the DLL loaded into different processes access a common set of control variables.

NOTE: all static variables in the shared data segment MUST have an initial value assigned; this forces the addresses to be fixed in the shared data segment at compile time.

c. The DLL exposes access to the shared control variables to the Add-In task through a collection of public getter and setter functions.

d. Control variables that have a variable memory size requirement should be held in memory allocated through Domino and marked as shared. The addresses of these memory areas are stored in the static shared memory segment used by the DLL.

e. The Add-In can now detect necessary state or configuration changes through console commands or changes in the content of a configuration database and reflect these changes in the state or configuration of the DLL by using the appropriate setter functions in the DLL.

2. Common Logic for Extension Manager and DSAPI Modules

With the increasing emphasis on web-enabling applications, special logic that is deployed through an Extension Manager may also need to be implemented in the web access path, i.e. in a DSAPI; this is the case with XAM. In cases where this duality is necessary it is preferable to implement the logic in a shared code path.

Design Pattern

a. There is no conflict between the entry points through which an Extension Manager and a DSAPI are invoked, therefore, it is possible for Extension Manager and DSAPI functionality to coexist in the same DLL.

b. Implement the core logic underlying the Extension Manager and DSAPI functionality as private functions in the DLL, map calls to the invocation entry points in the extension manager and DSAPI into calls to the appropriate common private functions.

Design Pattern for Data Synchronisation

Ian Tree  10 November 2008 13:05:19 - Eindhoven

Data Synchronisation

I have, over the years, come across many implementations of data synchronisation, both within Domino/Notes and between Domino/Notes and other platforms. Many of these implementations are:
  • Flawed by design
  • Inefficient
  • Not scalable
  • Error prone
  • Impossible to maintain
  • All of the above

I will present below a generic design pattern for data synchronisation that eliminates all of the above problems and provides a simple, stable basis for applications that need this type of functionality.

The Problem

I have 1-n data sources containing records that need to be synchronised across the data sources based on a key (the synchronisation key) that is shared across the data sources. I assume that I can access all of the data sources in the order defined by the synchronisation key. There are two variants of the data synchronisation problem: the "Master/Slave" situation, where one of the data sources is designated as Master and all other data sources must represent the state of that data source; and the "Peer" situation, where additions, deletions and updates can be made in any one of the data sources and must be reflected in all the others. This design pattern is suitable for either situation.

A Typical (and trivial) Solution

The simplest problem definition for this set of problems is where there are two Data Sources, one master and one slave. A typical example of this would be a Domino Directory that requires (partial) synchronisation with some external directory. A common implementation of this would use the following logic.

1)  Make a sequential pass of the Master Directory and look up (by key) each entry in the Slave Directory; if the key is not found, add the record from the Master Directory to the Slave Directory.

2)  Make a sequential pass of the Slave Directory and look up (by key) each entry in the Master Directory; if the key is not found, remove the entry from the Slave Directory.

The logic for the first pass can be extended to compare timestamps and determine if content updates need to be applied to the Slave Directory entries.
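In sketch form (Python, with dicts standing in for the directories and keys mapping to records; timestamp-based content comparison is reduced to a simple value comparison):

```python
def sync_master_slave(master, slave):
    """Trivial two-pass master/slave synchronisation."""
    # Pass 1: add (or refresh) slave entries from the master
    for key, record in master.items():
        if key not in slave or slave[key] != record:
            slave[key] = record
    # Pass 2: remove slave entries whose key is absent from the master
    for key in list(slave):
        if key not in master:
            del slave[key]
    return slave
```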

There is nothing wrong with this solution; a little safety logic needs to be added to detect duplicate records in either directory, but the pattern will work. The solution starts to unwind when you have multiple directories with large numbers of entries: the added complexity and processing requirements quickly mount up to give an unworkable solution.

The Conceptual Solution

The design pattern needed to simplify this picture is based on taking a different view of the data being processed. If you imagine being able to merge all of the data from all data sources into a single data source (only for the purpose of the synchronisation processing), then it is possible to see a relatively simple core logic that processes all incoming records in key sequence and, on detecting a key change, looks at the count and source of the records that share a particular synchronisation key and determines what action should be taken.

Data Source #1    Data Source #2    Data Source #3
Key #1            Key #1            Key #2
Key #2            Key #2            Key #3
Key #4            Key #3            Key #4
Key #5            Key #5            Key #5

We can create an abstraction of the problem data as a single ordered sequential stream of records.


Processing the data in this way creates a very simple logic pattern to implement and has the great performance and scalability advantage that the complete synchronisation is performed with only a single sequential pass of each of the data sources.

The Design Pattern

The first and most important component in the design pattern is the "multi-source iterator". This component must be capable of iterating over 1 to n sequential (but ordered) data sources, and on each iteration it should return the record with the lowest key value present on any one of the streams. The data object returned by the iterator may need to be more complex depending on the implementation; the calling routine will require access to the following entities associated with a returned record (document).

  • The data record (document)
  • An identifier to indicate which data source the current record belongs to
  • A handle to the data source (e.g. database) that will allow the creation of new documents

The iterator needs to support a minimum of two methods. The first tests whether there is a next record available in the multi-source stream (equivalent to hasNext() in the Java Iterator interface); a return value of false indicates end-of-file on all data sources. The second method, next(), returns the next data record object in sequence. An implementation in Java can use the classical Iterator interface.

The logic of the iterator is fairly simple. It must keep a buffer for one record and key on each data source, plus an indicator for each. Each call to the hasNext() method should check whether all data sources are at end-of-file (EOF); if so it should return false, otherwise true. Each call to the next() method searches the key buffers of all data sources that are not at EOF, returns the data record with the lowest key, and reads a new record into the buffer that has just been emptied (i.e. from the same data source from which the returned record came). It is important to note that the insertion or deletion of keys in any of the data sources must NOT alter the key position of the readers within the iterator; if the implementation does not allow this, then additional logic must be added to the iterator to re-position the individual reader whenever an addition or deletion is performed on that data source.
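A minimal Python rendering of that iterator logic (records are (key, data) pairs and the source index serves as the data-source identifier; the class and method names are ours):

```python
class MultiSourceIterator:
    """Merge 1..n key-ordered record streams, yielding on each call
    the buffered record with the lowest key and its source index."""

    def __init__(self, sources):
        self.readers = [iter(s) for s in sources]
        # one-record buffer per source; None indicates EOF on that source
        self.buffers = [next(r, None) for r in self.readers]

    def has_next(self):
        return any(b is not None for b in self.buffers)

    def next(self):
        # choose the non-EOF buffer holding the lowest key
        live = [(buf, src) for src, buf in enumerate(self.buffers)
                if buf is not None]
        record, src = min(live, key=lambda pair: pair[0][0])
        # refill the buffer we have just emptied, from the same source
        self.buffers[src] = next(self.readers[src], None)
        return record, src
```

Fed with the three key streams from the table above, the iterator yields the keys 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 5, 5, each paired with its source identifier.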

The main processing (synchronisation) logic can now be built around a simple iteration loop. The routine reads data records sequentially from the iterator. On each iteration it must first test whether the key of the returned data record has changed. If the key has not changed, it must buffer the data record ready for the next key-change event; note that buffers must be bound to an individual data source. While buffering the data record it is possible to detect, and deal with, duplicate key conditions on an individual data source, according to the business logic. When a key-change (or EOF) event is detected, the main synchronisation logic comes into play. Comparison of the keys and records across the buffers for all data sources gives a complete definition of the current synchronisation state for the current key value, and the appropriate actions can be taken, according to the business rules, to make all data sources fully synchronised for that key value. After the current key has been fully synchronised, the buffers should be emptied and buffer filling starts again with the record that triggered the key change.
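The key-change loop can be sketched as follows (Python; for brevity this version sorts the merged keys up front instead of streaming through an iterator, the sources are dicts keyed by the synchronisation key, source 0 is the master, and content updates and duplicate-key handling are omitted):

```python
from itertools import groupby

def synchronise(sources):
    """Master/slave synchronisation over the merged key stream:
    group records by key, then add the key to every non-master
    source that lacks it, or delete it everywhere if the master
    does not hold it."""
    merged = sorted((key, src)
                    for src, source in enumerate(sources)
                    for key in source)
    for key, group in groupby(merged, key=lambda pair: pair[0]):
        present = {src for _, src in group}
        if 0 in present:  # master holds the key: propagate it
            for src in range(1, len(sources)):
                if src not in present:
                    sources[src][key] = sources[0][key]
        else:             # master lacks the key: remove it everywhere
            for src in present:
                del sources[src][key]
```

Each key is handled exactly once, at its key-change boundary, which is what gives the pattern its single-pass scalability.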

Some Notes on the Use of Recursion

Ian Tree  01 March 2008 15:45:30 - Eindhoven

Notes on the Use of Recursion

There have been many comments flying around the internet recently on the subject of why the use of recursive functions is fundamentally unsafe. The arguments boil down to two related points. First, the depth of any recursion cannot be fixed at design time, which can cause the recursion to overflow the stack capacity of the platform on which it is implemented. Second, in the case of a stack overflow, the ability to complete effective error handling and recovery is not always certain.

The two preceding points are both valid and in combination can lead to an implementation that is fundamentally disaster-prone. However, there are practical implementation techniques that can help.

1. Tracking Recursion Depth

Add an additional integer parameter to your recursive function; it is set to 0 (zero) on the initial call to the recursive function, and the recursive call in the function sets the parameter to the passed value plus 1 (one). This provides you with a measure of the recursion depth that can be tested on entry to the recursive function: when the level exceeds the maximum safe design depth, the routine can fail gracefully and recover in a controlled manner.

It should be noted that this technique is also useful for protecting the application from heap/resource exhaustion. If, for instance, the recursive function opens a cursor, document or other similar resource, there will be limits on the number of resources that can be open concurrently. This implementation pattern allows the program design to be validated against these limits.
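A minimal sketch of the technique in Java follows; the function, the linked-chain workload and the limit of 100 are all illustrative choices of mine, not a prescription.

```java
// Depth-tracking recursion guard: the depth parameter starts at zero and is
// incremented on every recursive call; the function refuses to descend past
// MAX_RECURSION_DEPTH and fails gracefully instead of overflowing the stack.
public class DepthGuard {
    static final int MAX_RECURSION_DEPTH = 100;

    // Recursively counts down a chain of n steps, returning -1 to signal a
    // controlled failure when the chain is deeper than the safe design depth.
    public static int countDown(int n, int depth) {
        if (depth + 1 > MAX_RECURSION_DEPTH) return -1;  // fail gracefully, no stack overflow
        if (n == 0) return 0;
        int below = countDown(n - 1, depth + 1);         // recursive call passes depth + 1
        return below < 0 ? -1 : below + 1;               // percolate failure back to the caller
    }
}
```

The caller only has to check for the failure value; the same guard protects any per-level resource (cursors, open documents) whose concurrent count equals the recursion depth.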

2. Update on Percolation

If you do not update, serialise or commit any data in your recursive function until after the call has been made to the next level of the recursion, then the result of that call can be determined before you make any changes. This pattern makes the recursive function atomic: it behaves as a single unit. All data changes are percolated up as the result of a successful change at the safe (see point 1 above) lowest level of the recursion. Of course, all validation and preparation of the data updates must be done before the call is made to the recursive function:

foo(........... , depth)
   //  Check recursion limit
   if (depth + 1 > MAX_RECURSION_DEPTH) return false;
   //  Validate and prepare data for update
   if (foo(............. , depth + 1))
       //  Update data, then signal success to the caller
       return true;
   return false;
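To make the pseudocode above concrete, here is a runnable sketch under invented assumptions: the "update" is appending a validated string to an output list, and the all-or-nothing property means a depth-limit failure anywhere leaves the output completely unchanged.

```java
import java.util.*;

// Update-on-percolation: no element is written until the deeper recursive
// call has succeeded, so a failure at any level leaves the output list
// untouched - the whole recursion behaves as a single atomic unit.
public class Percolate {
    static final int MAX_RECURSION_DEPTH = 64;

    public static boolean copyAll(List<String> in, int i, int depth, List<String> out) {
        if (depth + 1 > MAX_RECURSION_DEPTH) return false; // safe depth check (point 1 above)
        if (i == in.size()) return true;                   // lowest level reached: success
        String prepared = in.get(i).trim();                // validate/prepare BEFORE recursing
        if (copyAll(in, i + 1, depth + 1, out)) {
            out.add(0, prepared);                          // update only on the way back up;
            return true;                                   // inserting at 0 preserves input order
        }
        return false;                                      // nothing was written below this level
    }
}
```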


Using Java in Domino

Ian Tree  10 May 2006 17:22:18 - Eindhoven

Why, Where and How to Use Java in Domino Developments

This article discusses the use of Java in Notes/Domino developments.

I am always mildly surprised at how little Java I come across in the wild in Notes/Domino developments. There seem to be a number of different reasons for this situation, including ...
  • Lack of Java experience among Domino developers
  • Lack of Domino experience among Java developers
  • Scepticism on the Web concerning the Domino Java implementation
  • Lack of good design patterns for Java use in Domino applications
  • Bad experiences of some developers when experimenting with Java
  • Reluctance of designers to go for multi-language implementations

All of these factors have served to slow the take-up of Java as a front line development language for Domino applications.

Why Use Java?

There are many good reasons for using Java as a part of a Notes/Domino application development project. Among the reasons to use Java are:

1. Because You Have To

For instance, if you are building a Servlet to run under Domino, then it has to be constructed in Java.

2. To Avoid Pain and Suffering

If, for instance, you needed to interact from a Form in the Notes Client with some other application that offered an http: interface, then it might be possible either to bodge something together or to put the interaction code in a complex C++ LSX and interface to it through LotusScript. Both of these solutions could be made to work, but they are extremely painful options compared to providing the interface code in Java.

3. Because You Can Save Time

It may be that a significant part of your application could be coded using existing Java packages and therefore you just need to strap them into a Domino/Java framework in order to get most of the functional code developed.

4. Because You Have Java Development Resources Available

Good Java programmers can handle the Domino API relatively easily after doing "Domino Objects 101" and "Using recycle() 102", and they can become productive quite quickly (providing that there is some experienced Domino developer resource available in the development team).

5. You Want to Have Platform Portability

Classes that you have to develop as a part of a development project may also be needed in a WebSphere application that is being developed alongside it; it makes no sense to produce both a LotusScript version and a Java version.

Where To Use Java

You can use Java effectively in Notes/Domino developments in the following places.

1. Servlets

Servlets have to be implemented in Java. The Servlet is an under-utilised application component in Notes/Domino web-enabled applications. I have seen a number of projects struggle with complexity and performance issues using traditional Notes/Domino components for a web application when a Servlet implementation could easily have saved the day.

As an aside on the subject of Servlets in Domino I came across the following line in the stdnames.h include file in the Notes 6.5 API toolkit.

#define DESIGN_FLAG_SERVLET                                'z' /*  FILTER: this is a servlet, not an agent! */

Hmmmmmm! Interesting: the ability to package a Servlet in an agent in a Notes database (distribution via replication, no file-system access needed for updates, etc.) would be very nice (if it happens).

2. Scheduled Agents

No problem.

3. Foreground Agents

If the agents do not interact with the UI then there is no problem. If interaction is required then they must either implement their own UI interaction (AWT, Swing etc.) or avoid doing the interaction directly.

4. Form, View and Database Events

No, not unless you fire a Foreground Agent and don't need any interaction.

5. Stand Alone Applications for Client or Server


How To Use Java

It has to be said, the Notes Designer is not the slickest development environment in the world, particularly when it comes to doing Java. Use your favourite Java IDE and import your classes into Notes. Do use Java libraries: they are a great innovation, allowing replication to distribute Java packages with no need to put JAR files into the classpath - excellent (several of my non-Domino Java developers drool when I explain how application deployment works in Notes/Domino).

Interacting with the UI

Ian Tree  10 May 2006 17:19:24 - Eindhoven

Interacting with the UI

There is a handy little trick that you can use if you want to run a Foreground Java agent to do some processing, you have to interact with the user, and you don't want to have to engineer a Swing UI just to support this agent. Just follow the steps below.

1. Create a profile form with all of the fields that are needed by the Java Agent. (both input & output).

2. In the profile document, have OK & Cancel buttons that set a field to "ok" or "cancel" and then save & close the profile.

3. Instead of calling your agent directly from an action somewhere in your application, generate a calling sequence that:

    3.1 Does an @Command([EditProfile].... on your profile document
    3.2 If the profile field is not set to "cancel" - runs your Java agent
    3.3 If the profile field is not set to "cancel" - does another @Command([EditProfile].... to display your results

Your Java agent now has a very simple UI construct in that it reads its input data from the profile document and writes its output back to the same profile document - job done.
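For illustration, the calling sequence in step 3 above might look something like the formula sketch below. The form name ("JavaAgentProfile"), field name ("Status") and agent name are placeholders of my own, and the exact @Command syntax and ordering semantics should be checked against your Notes release; treat this as a sketch, not a tested implementation.

```
REM {3.1 - open the profile document for the user's input};
@Command([EditProfile]; "JavaAgentProfile");
REM {3.2 - run the Java agent unless the user cancelled};
@If(@GetProfileField("JavaAgentProfile"; "Status") != "cancel";
    @Command([ToolsRunMacro]; "(My Java Agent)"); "");
REM {3.3 - re-open the profile to display the results};
@If(@GetProfileField("JavaAgentProfile"; "Status") != "cancel";
    @Command([EditProfile]; "JavaAgentProfile"); "")
```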