
.NET : Processes, Threads and Multi-threading

So I’ve been digging around in my Evernote to publish something this evening that might be of actual use. My notes tend to be lots of small tidbits that I tag in Evernote whilst working on projects, and whilst they’re great on their own as golden nuggets of information that I’ll always have access to, they’re less useful as blog material.

I did come across some notes from a couple of years ago around threading, specifically threading in the .NET framework. As I’ve little time this week to dedicate to writing new content, I thought I’d cheat a bit and upload these notes, which should go some way towards introducing you to threading and processes on the Windows / .NET platform.

So, what is a Windows process?

An application is made up of data and instructions. A process is an instance of a running application and has its own ‘process address space’. So a process is a boundary for threads and data.
A .NET managed process can contain a further subdivision, called an AppDomain.

Right, so what is a thread?

A thread is an independent path of execution of instructions within a process.
For unmanaged threads (that is, non-.NET), threads can access any part of the process address space.
For managed threads (.NET CLR managed), threads only have access to their AppDomain within the process, not the entire process. This is more secure.

A program with multiple simultaneous paths of execution (concurrent threads) is said to be ‘multi-threaded’. Imagine some string (a thread of string) that goes in and out of methods (needle heads) from a single starting point (main). That string can split off into other pieces of string that go in and out of other methods (needle heads) at the very same time as their sibling threads.

When a process has no more active threads (i.e. they’re all in a dead state because the CPU has already processed all of their instructions), the process exits (Windows ends it).

So think about when you manually end a process via Task Manager: you are forcing the currently executing threads, and those scheduled to execute, into a ‘dead’ state. The process is killed because it has no more code instructions to execute.
Once a thread is started, its IsAlive property is set to true. Checking this property will confirm whether a thread is active or not.
Each thread created by the CLR is assigned its own memory ‘stack’ so that local variables are kept separate.
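A minimal sketch of what this looks like in practice (the class and timings are illustrative):

```csharp
using System;
using System.Threading;

class IsAliveDemo
{
    static void Main()
    {
        // A worker thread that simply sleeps for half a second.
        Thread worker = new Thread(() => Thread.Sleep(500));

        Console.WriteLine(worker.IsAlive); // false - created but not yet started
        worker.Start();
        Console.WriteLine(worker.IsAlive); // true - the thread is now active
        worker.Join();                     // wait for the thread to reach the dead state
        Console.WriteLine(worker.IsAlive); // false - instructions all processed
    }
}
```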

Thread Scheduling

A CPU, although it appears to process a billion things at the same time, can only process a single instruction at a time. The order in which instructions are processed by the CPU is determined by thread priority. If a thread has a high priority, the CPU will execute the instructions inside that thread, in sequential order, before any thread of a lower priority. This requires that thread execution is scheduled according to priority. If threads have the same priority, however, then an equal amount of time is dedicated to each (through time slicing, usually around 20ms per thread). This might leave low priority threads out in the cold if the CPU is being highly utilized, so to avoid never executing low priority threads, Windows specifically dedicates a slice of time to processing their instructions, but that time is a lot less than is given to the higher priority threads. The .NET CLR lets the Windows operating system thread scheduler take care of managing all the time slicing for threads.

Windows uses pre-emptive scheduling. All that means is that when a thread is scheduled to execute on the CPU, Windows can (if it needs to) unschedule that thread before it finishes.
Other operating systems may use non-pre-emptive scheduling, meaning the OS cannot unschedule a thread that has not yet finished.

Thread States

Multi-threading, in programming terms, is the co-ordination of multiple threads within the same application: the management of those threads between different thread states.

A thread can be in…
  • Ready state – The thread tells the OS that it is ready to be scheduled. Even if a thread is resumed, it must go to ‘Ready’ state to tell the OS that it is ready to be put in the queue for the CPU.
  • Running state – The thread is currently using the CPU to execute its instructions.
  • Dead state – The CPU has completed the execution of instructions within the thread.
  • Sleep state – The thread goes to sleep for a period of time. On waking, it is put in Ready state so it can be scheduled for continued execution.
  • Suspended state – Thread has stopped. It can suspend itself or can be suspended by another thread. It cannot resume itself. A thread can be suspended indefinitely.
  • Blocked state – The thread is held up by the execution of another thread within the same memory space. Once the blocking thread goes into dead state (completes), the blocked thread will resume.
  • Waiting state – A thread will release its resources and wait to be moved into a ready state.

Why use multiple threads?

Using multiple threads allows your applications to remain responsive to the end user whilst doing background work. For example, you may have a Windows application that requires that the user continue working in the UI whilst an I/O operation is performed in the background (loading data from a network connection into the process address space, for example). Using multi-threading also gives you control over which parts of your applications (which threads) get priority CPU processing. Keeping the user happy whilst performing non-critical operations on background threads can make or break an application. These less critical, low priority threads are usually called ‘background’ or ‘worker’ threads.


If you create a simple Windows Form and drag another window over it, your form will repaint itself to the Windows UI.

If you put a button on that form which, when clicked, puts the thread to sleep for 5 seconds (5000ms), the window you drag over your form will stay visible on the form even after you drag it off again. The reason is that the UI thread was held up by the 5-second sleep, so the form waited to repaint itself to the screen until the thread resumed.

Implementing multi-threading, i.e. performing the long-running work on a background thread and leaving the first thread free to repaint the window, would keep users happy.
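A hypothetical WinForms sketch of that idea; the control names (button1, label1) and the 5-second sleep standing in for real work are assumptions for illustration:

```csharp
// Button click handler: the 5-second wait runs on a worker thread,
// so the UI thread stays free to repaint the form.
private void button1_Click(object sender, EventArgs e)
{
    Thread worker = new Thread(() =>
    {
        Thread.Sleep(5000); // long-running work happens off the UI thread

        // Controls must only be touched from the UI thread,
        // so marshal the update back via Invoke:
        this.Invoke((MethodInvoker)(() => label1.Text = "Done"));
    });
    worker.IsBackground = true; // don't let this thread keep the app alive
    worker.Start();
}
```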

Multi-threading on single core / single processor 

Making your app multi-threaded can affect performance on machines that have a single CPU. The reason is that the more threads you use, the more time slicing the CPU has to perform to ensure all threads get equal processing time. The overhead involved in the scheduler switching between multiple threads to dole out processing time slices extends the overall processing time. There is additional ‘scheduler admin’ involved.

If you have a multi-core system, however, let’s say 4 CPU cores, this becomes less of a problem, because threads can be processed physically at the same time across the CPU cores, so far less switching between threads is involved (provided there aren’t more active threads than cores).


Using multiple threads also makes code a little harder to read, and testing/debugging becomes more difficult because several threads could be running at the same time, which makes them hard to monitor.

CLR Threads

A thread in the CLR is represented as a System.Threading.Thread object. When you start an application, its entry point becomes the start of the single main thread that every application has. To start running code on new threads, you must create a new instance of the System.Threading.Thread object, passing in the address of the method that should be the first point of code execution. This in turn tells the CLR to create a new thread within the process space.

To access the properties of the currently executing thread, you can use the ‘CurrentThread’ static property of the thread class:
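A small sketch of inspecting the current thread via that static property (the printed values reflect the defaults for an application’s main thread):

```csharp
using System;
using System.Threading;

class CurrentThreadDemo
{
    static void Main()
    {
        // Grab a reference to the thread this code is executing on.
        Thread me = Thread.CurrentThread;
        me.Name = "Main"; // a thread's Name can only be set once

        Console.WriteLine(me.Name);         // "Main"
        Console.WriteLine(me.IsAlive);      // true - we are running on it
        Console.WriteLine(me.Priority);     // Normal, the default priority
        Console.WriteLine(me.IsBackground); // false - entry-point threads are foreground
    }
}
```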

You can point a starting thread at either a parameterless method or a method that takes a single object parameter. The two delegate types, and how to start each, are:

  • ThreadStart – for parameterless methods: new Thread(Method); thread.Start();
  • ParameterizedThreadStart – for methods taking a single object argument: new Thread(Method); thread.Start(stateArg);
ThreadStart example

Thread backgroundThread = new Thread(ThisMethodWillExecuteOnSecondThread);
backgroundThread.Name = "A name is useful for identifying a thread during debugging!";
backgroundThread.Priority = ThreadPriority.Lowest;
backgroundThread.Start(); // Thread put in ready state; waits for the scheduler to move it to running.

public void ThisMethodWillExecuteOnSecondThread()
{
    // Do something
}

ParameterizedThreadStart example

Thread backgroundThread = new Thread(ThisMethodWillExecuteOnSecondThread);
backgroundThread.Start("some state"); // The argument is passed to the method as stateArg.

public void ThisMethodWillExecuteOnSecondThread(object stateArg)
{
    string value = stateArg as string;
    // Do something
}

Thread lifetime

When a thread is started, its lifetime depends on a few things:
  • When the method called at the start point of the thread returns (completes).
  • When the thread object has its Interrupt or Abort methods invoked (which essentially inject an exception into the thread) from another thread (an asynchronous exception).
  • When an unhandled exception occurs within the thread (a synchronous exception).
Synchronous exception = from within.
Asynchronous exception = from outside.

Thread Shutdown

Whilst threads will end in the scenarios listed above, you may wish to control when a thread ends and have the parent thread regain control before it leaves the current method.
The example below shows the Main() thread starting off a secondary thread to take care of looping. It uses a volatile field member (whose value is always read fresh, never cached) to tell the secondary thread to finish its looping.
The main thread then calls Join() on the secondary thread, which blocks the main thread until the secondary thread completes.
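A sketch of that volatile-flag-plus-Join pattern (the field and timing values are illustrative):

```csharp
using System;
using System.Threading;

class ShutdownDemo
{
    // volatile: the worker always reads the latest value rather than a cached copy.
    private static volatile bool _stopRequested;

    static void Main()
    {
        Thread worker = new Thread(() =>
        {
            while (!_stopRequested)
            {
                // ... do one unit of work per loop iteration ...
            }
        });
        worker.Start();

        Thread.Sleep(1000);     // let the worker loop for a while
        _stopRequested = true;  // tell the worker to finish its looping
        worker.Join();          // block the main thread until the worker is dead
        Console.WriteLine("Worker has ended; main thread regains control.");
    }
}
```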

Background vs Foreground Threads

Foreground Thread – A foreground thread that is still running will keep the application alive until the thread ends. A foreground thread has its IsBackground property set to false (the default value).
Background Thread – A background thread will be terminated when there are no more foreground threads executing. They are seen as unimportant, throwaway threads (IsBackground = true).
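A short sketch of the difference; the 10-second sleep stands in for real work:

```csharp
using System;
using System.Threading;

class BackgroundDemo
{
    static void Main()
    {
        Thread worker = new Thread(() =>
        {
            Thread.Sleep(10000); // pretend to do 10 seconds of work
        });

        worker.IsBackground = true; // background: will NOT keep the process alive
        worker.Start();

        // Main (the last foreground thread) ends here, so the process exits
        // immediately and the background worker is terminated mid-sleep.
        // With IsBackground = false, the process would stay alive the full 10 seconds.
    }
}
```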

Thread Pools

Threads in the CLR are a pooled resource. That is, they can be borrowed from a pool of available threads, used, and then returned to the pool.
Threads in the pool automatically have their IsBackground property set to true, meaning that they are not seen as important by the CLR: when the last foreground thread ends, pool threads are ended whether complete or not. Work queued to the pool is handled on a FIFO basis: the first work item added to the queue is the first handed to an available pool thread, and when that thread finishes executing it is returned to the pool. Thread pool threads are useful for unimportant background checking / monitoring that should not hold up the application.

// Queuing work onto a thread from the pool

ThreadPool.QueueUserWorkItem(MethodName, methodArgument); // This work will be abandoned if all foreground threads end.


Transactional Isolation

Transactional isolation is usually implemented by locking whatever resource is being accessed (a db table, for example) during a transaction, thus isolating the resource from other transactions. There are two types of transactional locking: pessimistic locking and optimistic locking.
Pessimistic locking: With pessimistic locking, a resource being accessed is essentially locked from the time it is first accessed in a transaction until the transaction completes, making the resource unusable by any other transactions during that time. If competing transactions simply need to read the resource (a SELECT of a data row, for example) then an exclusive lock may be overkill. The lock exists until the transaction has either been committed or rolled back, at which point the resource is made available again for other transactions.
Optimistic locking: Optimistic locking works a little differently. A resource being accessed isn’t locked when first used, but the state of the resource is noted. This allows other transactions concurrent access to the resource, with the possibility of conflicting changes. At commit time, when the resource is about to be updated in persistent storage, the state of the resource is read from storage again and compared to the state previously noted when the resource was first accessed. If the two states differ, a conflicting update was made, and the transaction will roll back.
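One common way to implement the optimistic check is a version column, sketched below. This is a hypothetical fragment: the Orders table, its columns, and the connection/transaction/orderId/versionWhenRead variables are all illustrative assumptions, not from any particular system.

```csharp
using System.Data.SqlClient;

// Only update the row if nobody else changed it since we read it:
int rowsAffected;
using (var cmd = new SqlCommand(
    @"UPDATE Orders
         SET Status = @newStatus, Version = Version + 1
       WHERE OrderId = @id AND Version = @versionWhenRead", connection))
{
    cmd.Parameters.AddWithValue("@newStatus", "Shipped");
    cmd.Parameters.AddWithValue("@id", orderId);
    cmd.Parameters.AddWithValue("@versionWhenRead", versionWhenRead);
    rowsAffected = cmd.ExecuteNonQuery();
}

if (rowsAffected == 0)
{
    // The stored version no longer matches the one we noted on first access,
    // so a conflicting update was made: roll back (and optionally retry).
    transaction.Rollback();
}
```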

Good Design Documentation

Ok, that’s a relatively general title. What I mean specifically and what I want to talk about here is the importance of a good enough technical design specification and how the quality of said document really does impact the resulting solution. I want to share some recent experiences where documentation really did let the side down.

I’ve not long finished a 6 month placement at a company working on Cordys C3 based bug fixes and change requests. It was challenging in 3 respects.

Firstly, the work was specific to Business Processes and XForms based user interfaces, and neither was organized within Cordys in a very intuitive, structured manner that provided any contextual reference to how the business solutions worked together. Second, the new change requests being received really required a good knowledge of the existing 300+ processes and forms that made up the applications that ran on Cordys. This was because design documentation was less than adequate, and in six months you really can’t get a detailed enough grasp of such a large number of processes, especially non-standard business processes in the world of media management and digital content distribution. Finally, the loaded web service interfaces (and so the services themselves) included operation parameters such as arg0, arg1 and arg2, which, as you’ll no doubt have evaluated, is unbelievably unhelpful in determining what the service operation actually does and what data should be provided in the messages.

OK, these issues aside for a moment, the real issue I want to discuss is how the design documentation should go some way towards explaining which components, services and forms should be used or designed for the given solution, what dependencies are outstanding at the time of writing and what risks may be involved. I worked on a couple of CRs and the documentation was poor. This was through no fault of the architect, however, who knew the systems inside out and was stretched so thinly that their desk was only visited once every 2 days when meetings were cancelled. Poor and constantly changing requirements were no help either.

In order for developers to attach time estimates to tasks detailed in a design specification document, there must be enough detail that the estimate confidence level for those tasks can be as high as possible, and thus the business is kept more realistically informed of how long the solution might take. The problems I found with the documentation in this instance were as follows:

– Very brief document objective
– Zero mention of any outstanding dependencies at the time of writing (i.e. are the services required by this functionality written yet? Some were not)
– ‘To be’ process illustrations made up the bulk of the document, with very little supporting written description
– Message interface definitions were non-existent
– Not all of the requirements were met in the design document
– No exception handling or compensatory actions were detailed in the event of errors
– No architectural overview was presented and minimal system interactions (i.e. sequence diagrams) were present

In short, the design specification put the responsibility on the developer to fill in the gaps in detail. Whilst this frees up the architect’s time, this really is no good for a few reasons:

1) Not all of the design is documented and therefore cannot be referenced in the future if the client questions functionality or attempts to request amendments outside of a new CR
2) New developers to the team (myself in this instance) are left with gaps in their knowledge that require time-consuming investigation with potentially multiple teams (and as such consume estimated time whilst no development is done)
3) Gaps in design specifics can lead to incorrect assumptions about how the solution should operate
4) Inadequate design detail leaves room for (mis)interpretation of the design and can mean solutions move away from company design standards and architectural rules. This leaves a messy set of solutions that operate differently, don’t really utilize re-use and only further confuse developers.

In this case, clearly the company may not have the resource to focus more on detailed documentation, or maybe they believe it’s just not as important as I do. The bottom line, however, is that if you are going to develop a solution that’s more complex than ‘Hello World’, you should really think about documenting the following (and I apologize in advance if you are great at your design specifications):

– Start with a document summary. This should include author, distribution list, document approvers and release history.
– Basic I know, but include a ‘contents’ section that logically breaks up the design into layers (data access, service, process, ui).
– Provide a detailed overview of the solution. Detailed being the key word here; copying chunks of text from the functional specification is not cheating. The overview should include how the solution will improve any existing solutions (i.e. improve stability, boost usability, provide a greater level of system manageability)
– If necessary, provide details of the runtime environment and any anticipated configuration changes
– Make reference to any other materials such as the functional specification and use case documents
– Include design considerations (company or otherwise)
– Detail any known risks, issues or constraints
– Detail any dependencies at the time of writing. This should include any work being performed by other teams that the solution being detailed requires in order to operate successfully.
– Provide a top level architectural diagram, even if the solution is basic. Diving into the detail without giving the developers a 1000 foot view of where the solution fits into the wider solutions architecture is, to me, just wrong. Support the diagram with a sentence or two.
– List the components that will change as part of the design
– List any new components
– Diagram component interactions (sequence diagrams)
– ‘To be’ process designs should include annotation, even if it’s assumed current knowledgeable developers would know this information. You will not always have the same people doing the work.
– For UIs, detail the style and appearance, ensuring it’s in line with company accessibility policies. That may require detailing fonts and hexadecimal colour references [#FFFFFF].
– Detail what programming may be required, server side and client side. What functionality should this code provide? What coding best practices should be honoured?
– Keep a glossary of terms at the back of the document

Finally, and most importantly, even if you are the architect and the master of all knowledge when it comes to your solution domain: distribute the document and enable change tracking. Send it to the subject experts for clarification and to the business stakeholders, even if they don’t understand the content.

Most of us do and that’s great.  Design specification templates can be found online so there’s really no excuse.

SOA. Just the basic facts. In 5 minutes.

What a SOA is not

SOA does not mean using web services.
SOA is not just a marketing term.
SOA also does not just mean ‘using distributed services’.

What a SOA is

SOA is an architecture of business services (usually distributed and sometimes ‘connected’ by means of a service bus) that operate independently of each other, advertise what services they offer through well-defined interfaces and can be heavily re-used not only to aid development productivity of the IT department but to enable use of existing IT assets/systems. Ultimately, this means quicker turn around of IT projects, improving how IT serves the business and thus improves business agility.

‘Service’ orientation means that logic is provided via a suite of services which should be seen as ‘black boxes’. They do stuff, but neither you nor the services consuming them need to know what’s going on under the hood, only what messages they require as input (usually SOAP messages) and what the service will do for / return to you. A black box service doesn’t have to be a web service, though web services are the most commonly implemented type of service for maximum distribution and cross-platform compatibility.

So whilst that goes some way in explaining what SOA is on a general level using these developer written ‘services’… What SOA really is, is A FUNDAMENTAL CHANGE IN THE WAY YOU DO BUSINESS via a top down transformation requiring real commitment from the business, not just IT. That requires a change in mind-set of the top people.

Characteristics of a black box ‘service’ in a SOA

– Loosely coupled (minimizes service dependency)
– Contractual (adherence to well-defined service interface contracts… ‘if you wanna do business you need to abide by my interface contract’)
– Abstract (service is a black box, internal logic is hidden from service consumers)
– Reusable (divide and conquer! – We divide up business logic to basic reusable services)
– Composable (can be used as a building block to build further composite services)
– Stateless (retains little to no information about who it interacts with)
– Discoverable (describes itself, a bit like a CV so that the service can be found and assessed ‘hello I’m a service, here is my CV’)

What can these services do?

Whatever you need them to do in order to satisfy business change needs / requirements. Common functions of services include:

– Perform business logic
– Transform data
– Route messages
– Query data sources
– Apply business policy
– Handle business exceptions
– Prepare information for use by a user interface
– Generate reports
– Orchestrate conversations between other services

The business benefits of implementing a SOA strategy

– Open standards based
– Vendor neutral
– Promotes discovery
– Fosters re-usability
– Emphasizes extensibility
– Promotes organizational agility
– Supports incremental implementation (bit by bit development)

What a SOA might look like

The below shows a business application based on a SOA.

The lowest level of operation consists of application logic, including existing APIs, DAL code and legacy systems. This may include ‘application connectors’: middlemen that interface between a simple exposed API and large systems like ERP, MRP etc.
This low-level application logic is then exposed as basic-level services (application-orientated services, as they are wrappers for parts of the application logic).
These basic-level services form the building blocks of composite-level services: application-orientated services are combined to form services that are more aligned with the business, and thus more business-orientated. This can include exposing a business process as an independent business service.
Basic (application-orientated) and composite (more business-orientated) services can then be orchestrated by business processes.
These business processes may include human interaction points where user interfaces are required. Processes can also be initiated via user interfaces (requests / orders / applications etc).


Steps in Implementing a SOA with web services

1) Creating and exposing services (development team creating component services)
2) Registration of services. A SOA isn’t truly in place when you just have random web services sitting on different web servers exposing WSDL, consumed based on word of mouth and WSDL documents being passed around. A SOA requires a directory where all available services can be registered (UDDI 3.0 being the standard when using web services). This directory is the yellow pages for the services.
3) Address security. Exposing business logic as services over large networks opens up a serious set of security challenges. Security standards must be implemented for the services so that consumers of services can meet the security requirements.
4) Ensure reliability. Services must be monitored to make sure they are always up (high availability) and performance must be monitored to ensure reliability.
5) Concentrate on governance. How are all of these steps governed? What are the policies that should be enforced at runtime? Compliance is also important.


That’s all for now. Hopefully that paints a very top-level picture of what SOA is, what it is not and how you should go about implementing it (with the all-important business buy-in).

Metastorm BPM : It’s not an application development tool

After 2 years of designing a large operational system using Metastorm v7.6, I wanted to reflect on why it’s a bad idea to use Metastorm BPM to build big workflow based systems.

The problem with the finished system is not that it doesn’t satisfy the requirements or doesn’t perform well enough (considering), it’s that it is a maintenance nightmare. I wrote an article this time last year, whilst travelling back home to Holland after being snowed in, concerning why maintainability is the most important design factor (over and above scalability and extensibility). Coupled with a ‘big bang’ design approach (rather than an agile dev approach) and constant requirement changes, it’s a surprise the system runs in its operational state.

I don’t wish to run the product down, because for small to medium workflow-driven applications it does the job. But its clear lack of object orientation is the single biggest product flaw, and when building a big system with Metastorm this cripples maintainability. A solid design architecture is obviously of major importance, yet basic application architecture fundamentals, such as breaking a system design down into cohesive functional ‘components’ that represent ‘concern’ areas for the application, can be difficult to implement. This is down to the fact that process data is stored in the database per process, and passing data between processes using flags can become messy, especially when certain characters that Metastorm classes as delimiters are passed using those flags. Sub-processes are then an option, but these also have inherent flaws.

Forms, which again are application components, are process specific, so re-use suffers here too and forms have to be replicated, further undermining the idea of good maintainability.

Having data repeated in processes and having no code dependency features is bad enough, but because you have to remember where you have used process variables, and keep in mind when and where values may change, the tool puts all the responsibility on the developer. Once the system gets very large, the event code panels (the ‘on start’, ‘on complete’ etc) get very complicated, and tracking variables and when they may change becomes a struggle in itself. Changing a variable value in a large process risks making other parts of the process not work quite right, because ‘you’ve forgotten that the variable is used in a conditional expression later on’.

This then begs the question: should you use the Metastorm event/‘do this’ panels for ANY business logic? I’d say no. Only UI or process-specific logic should live there; you should push ALL of your business logic into server-side scripts and a suite of external .NET assemblies. You can then at least implement a fully swappable business logic layer.

So along comes v9. This product is a great move towards stronger application architectures. OOP design and the ability to debug alone save a whole lot of system maintenance time. So although this version takes us closer to being able to create solid, maintainable operational applications, it was released too early. It is slow (halving development productivity against version 7), it had many broken features, and grids, one of the most used visual components, especially for data-driven applications (which is most business apps), were just terrible. They continue to behave almost independently of the rest of the system, and patch 9.1.1 is still addressing grid shortfalls; obvious shortfalls which should have been picked up by a thorough QC team at (OpenText) Metastorm.

The new OOP approach means that designers and developers no longer have to use the ‘line by line interpreted’ syntax of v7 and can design re-usable components. So there is a greater case for using Metastorm BPM as an application development tool for fair-sized applications, but whilst development productivity is still poor and the designer is still very buggy, it’s not quite there yet.

Metastorm announcing BPM 9.1

This morning, at US Eastern Daylight Time, Metastorm are set to make some product launch announcements, including Metastorm BPM 9.1. The other product announcements include Smart Business Workspace 9.0, Metastorm Business Performance Intelligence Dashboards, and the predefined dashboards for Metastorm Business Process Management, Metastorm Integration Manager and Metastorm Knowledge Exchange. These boost the products’ visibility to business users.

For me personally, the Metastorm BPM 9.1 release is the most welcome news. I work with this product every day and, if you’ve read my recent posts, I’ve shared my concerns about its general stability and poor performance. It sounds like many of the new features in this release surround the client UI and the building of forms, with this version (hopefully!) introducing Panels (collapsible groupings of form controls), Field Anchoring (spacing your form content when the window is maximized), improved field visibility options, dynamically visible actions and I’m sure many others.

Many Metastorm BPM developers are looking for big improvements in this release and are hoping that Metastorm has made the development experience more efficient.  It has taken some getting used to the new way of design and it can be tedious at times, so any new features that boost development efficiency I can see being received with applause by the development community.

Let’s wait and see what comes to pass.

Free ‘BPM For Dummies’ book

When I started in the field of BPM, I started hands-on, creating workflows based on customer requirements. I hadn’t read any books on BPM as it just seemed like one of those fields that was really just common sense; surely no concepts and best practices could exist for just ‘workflow’, which I defined as just moving ‘stuff’ through a sequence of activities?!!?

Equipped with MS Visio and common sense, I used my own home-cooked notations that people came to recognise within the company I worked for at the time. In a way I sort of faked it until I made it (isn’t that what we do in IT though?). Now this was all well and good, and as with most fields you learn by mistakes, but I started noticing that some of the process decisions I’d made during design didn’t turn out to be so efficient, because I’d gone for the big bang solution as opposed to the agile one. My ability to communicate my ideas wasn’t based on any common concepts, so it became difficult in some instances, and when I showed people my home-cooked Visio masterpieces, some just didn’t quite get it.

Like most, I’ve learnt a lot over the years, working with different clients and within different companies, using their own standards taught by their BAs and also using industry-standard approaches and modelling notations.  One thing I do know is that although BPM from the outset may appear an easy thing to get into, as it’s mostly common sense coupled with a good ability to draw shapes, it’s not, and whether you have a technical background or not, BPM requires that you read up on some fundamental concepts.

My point here? – If you’re starting in BPM, read white papers, go to BPM-focused community sites and forums and start to understand the most common business processes (or level-zero processes as they’re referred to by some).  A nice starter is the free BPM for Dummies book.  I personally like the ‘Dummies’ series of books for getting into most new subjects at a basic level.  A free copy of the BPM for Dummies book is available if you sign up.  This link may take you directly to the PDF copy of the book itself; otherwise click here to sign up and get access to it.

The Nomadic Developer

I have worked in consulting for over 4 years and had never really thought about reading or listening to a book that discussed the exciting world of technology consulting.  Recently, however, I discovered ‘The Nomadic Developer’ by Aaron Erickson on iTunes.  This audio book is a good 9 hours long and covers a myriad of topics relating to how technical consultancies work, consultancies to avoid, why companies choose consultants over internal employees (including examples of fee metrics) and how soft skills and writing skills are just as important as technical skills, to name just a small handful.  I highly recommend this book/audio book to any technical worker either in the consulting world or looking to break into technology consulting.

Metastorm 9 : Development Productivity

Ok, so I’ve seen several forum posts and had a discussion with a fellow BPM consultant this week about Metastorm 9 and two main ‘gripes’ that people have: namely, the reduction in developer productivity compared with version 7, and the fact that version 9 now really requires you to be a developer and not just a business analyst. I wanted to provide some thoughts on these two:

Slow down in developer productivity

My colleague and several other developers in the community have commented on how many more button pushes are required to put together a functional process in version 9.  Compared to version 7, yes, you do have to click ‘into’ the product a few more times to apply some logic to a process; take the stage and action OnStart and OnComplete event handlers.  In version 7, these were two large free-text areas in the ‘do this’ tab of any stage or action.  You could click on a stage and start typing (for those developers who knew the version 7 function syntax instinctively).  In contrast, version 9 requires a click on a stage, then a click on an event handler button, then the selection of a visual script activity (for example, an assignment) and then the use of the expression builder to make that assignment.  So, a couple of extra steps.  I think what is being missed here, however, is the focus on re-use and overall maintainability.

Any Metastorm developer worth their salt has worked on a large project that has taken more than 6 months to plan, write and test, due to either large processes or a number of smaller but more complex processes with many rules and alternative flows.  I have not long finished a 2-year project and if I could show you the amount of OnStart and OnComplete code that is used, and more importantly repeated, you would tell me that Metastorm was the wrong development environment to use. Whilst I might agree, some companies run their entire operational application suite off of custom Metastorm applications, this client being one of them.  I can see that, with all of their systems, maintainability using version 7 has become a nightmare; there are several full-time developers whose day job is bug fixing. All because of the line-by-line syntax where no OOP patterns and principles can be used (unless you write the entire thing in JScript.NET).  Important code-support features like dependency analysis (knowing what may break if you change a variable value), automatic refactoring and so on, which make life so much easier in many development environments, just don’t exist in version 7.  Finally, I should mention the use of objects to represent business entities and the ability to loop over them.  If you’re working with data, you’re looping over it to run some row-by-row processing, which again required extra programming in version 7 to accomplish (e.g. writing a static server-side method that takes a SQL statement and returns a DataSet/DataTable for enumeration).
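To make that last point concrete, here is a minimal sketch of the kind of static server-side helper I am describing; the class name, SQL and connection string are illustrative only, not actual Metastorm API:

```csharp
// Hypothetical example of the extra plumbing version 7 pushed you to write:
// a static helper that runs a SQL statement and returns a DataTable so a
// process can enumerate it row by row.
using System.Data;
using System.Data.SqlClient;

public static class ProcessData
{
    public static DataTable GetTable(string sql, string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var adapter = new SqlDataAdapter(sql, connection))
        {
            var table = new DataTable();
            adapter.Fill(table);   // Fill opens and closes the connection itself
            return table;
        }
    }
}
```

Typical usage would then be a `foreach (DataRow row in ProcessData.GetTable(sql, connStr).Rows)` loop with the per-row logic inside; none of this boilerplate is needed in version 9, where business objects can be looped directly.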

My main point here is that yes, there are a few more clicks in version 9 when accomplishing some goals, but the ability to re-use server-side code for assignments, create visual toolbox items for common process activities (thus avoiding writing code against a version 7 common stage with a bunch of conditionals attached so that it only executes in some situations) and completely re-use visual scripts supports true OOP abstraction. You can edit the smallest part of a process to make a change and have everything else that depends on that code or visual activity change too.

When it comes to using version 9, you have to understand good design practices such as high cohesion, low coupling and separation of concerns, as well as OOP principles and patterns.  You have to look to the long term and understand that the OOP approach to designing processes in version 9 will mean longer-term maintainability.  Anyone that has delivered a version 9 project (and I have) will notice that a good design is far more maintainable and will require less manpower going forward than a version 7 procedure.  The design stage of a project actually becomes a lot easier too.

What is a few more clicks compared to putting together a solid, consistent, sustainable and maintainable design? (And don’t get me started on the ability to debug in 9 and not in 7… how much time are we saving there?) Short-term productivity does suffer, but only slightly, and if you were to analyse the amount of time spent on a project over the long term, you might be surprised by the results.

You have to be a developer to create processes in version 9

Version 9 has certainly shifted its focus onto the developer.  You really have to approach the design of a process from an OOP perspective and use many of the techniques used in the design of, for example, a .NET application to properly plan and design a Metastorm 9 solution.  If you cannot code C# or understand basic .NET concepts such as the heap, the stack, value and reference types, type conversion, type modifiers and commonly used namespaces such as System, System.Text, System.Data and System.IO, then you might as well just close the application and take your lunch break.
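For anyone unsure what I mean by those basics, here is a small, self-contained sketch of the value-type vs reference-type distinction and type conversion; it is generic C# with nothing Metastorm-specific in it:

```csharp
using System;

class TypeBasicsDemo
{
    // Value type: assignment copies the whole value.
    struct Point { public int X; }

    // Reference type: assignment copies only the reference.
    class Box { public int X; }

    static void Main()
    {
        Point p1 = new Point { X = 1 };
        Point p2 = p1;              // independent copy
        p2.X = 99;
        Console.WriteLine(p1.X);    // prints 1 - p1 is unaffected

        Box b1 = new Box { X = 1 };
        Box b2 = b1;                // both variables point at the same object
        b2.X = 99;
        Console.WriteLine(b1.X);    // prints 99 - same underlying object

        // Type conversion: boxing a value type into an object reference,
        // then unboxing it with an explicit cast.
        object boxed = 42;
        int unboxed = (int)boxed;
        Console.WriteLine(unboxed); // prints 42
    }
}
```

Value types live on the stack (or inline in their container) while reference types live on the heap, which is exactly why the two assignments above behave differently.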

In a way I agree with this point, but I understand the move.  A custom Metastorm 7 syntax was never going to work long term in a world where open standards are king and compatibility is of major importance.  A Metastorm application, at the end of the day, is an application: it will execute in a business environment, handle possibly business-critical data and need to be up most of the time.  Therefore it needs qualified designers (on paper or by experience, the latter generally being the more important) to design, create and adequately test it.  There is nothing stopping a business analyst putting process actions and stages together using the ‘classic’ or basic process designer in version 9.  When they do so, though, it should be a must that the analyst collaborates with an experienced process designer who has native technical knowledge but also understands how the functional requirements will translate into technical requirements, advising on the best solutions to some of the most common process problems and bringing some sense to some of the more ‘out there’ ideas.

Some compatibility with a widely used modelling tool such as Visio would certainly reel the business analyst community in closer, but in the age of BPM being an umbrella for many other technologies, including Enterprise Systems Integration and Enterprise Content Management, the design of BPM systems is now at least 70% a technical field and I think the new version 9 product adequately represents that. If a client is going for a simpler process, then maybe Metastorm 9 is not the tool and they should be opting for a free open-source alternative like Bonita Open Solution.  I like to think that Metastorm 9 is not as ‘Fisher Price’ as it once was and is now a mature, pure-play BPM product.

Internet Explorer 9. The game is back on.

I’ve been a Microsoft fanboy for many years.  Vista aside for a moment, they do develop industry-leading products that run the bulk of home and business machines.  In recent years, the company has received a lot of stick for the security issues in previous releases of IE and for the long wait for, and poor standard of, the Vista operating system.  Since Microsoft realised that their bulky software was no longer cutting it and many users were looking elsewhere (think about the rise in the Ubuntu desktop user base and the rise of Firefox), they do appear to have got their act together.  It’s important to note that Microsoft still own a large proportion of the OS market share (approx. 90%), but clearly their last operating system dived, considering the 2001-released XP is still found on more machines than both Vista and Windows 7.  Talking of Windows 7, this is where Microsoft started to get it right.  For the home OS, the breakdown goes like this:

Windows XP – 55.26% (56.72%)
Windows 7 – 22.31% (20.87%)
Windows Vista – 11.66% (12.11%)
Windows 2000 – 0.27% (0.31%)
Windows NT – 0.13% (0.22%)

Windows 7 has now surpassed (nearly doubled) Vista in market share and looks to be on the increase (stats taken as of Jan 2011, stats in brackets Dec 2010).  This is because Windows 7 is a genuinely good operating system and Microsoft’s focus on speed and security for this release has really paid off.  All of the Microsoft haters will tell you that MS ‘robbed’ features from other operating systems that already fashioned said features but, at the end of the day, it’s not important where they came from; a good idea that improves productivity is good for any operating system, and Windows 7 certainly delivers these.  For me, just the ability to snap two windows to either side of the screen for comparison is a major step forward.  I have a wide-screen monitor, and working on code whilst referring to some API documentation, for example, is uber-useful.

To the point of this article, however.  Microsoft this month released their new version of Internet Explorer, IE9.  Now, like Windows 7, it’s different, but again Microsoft have focused on speed and security, with some focus on Windows 7 integration.  I have used Chrome for a few years now, as I still find Firefox too clunky for my needs, but IE9 has made me switch, almost immediately.

In terms of browser market share, Microsoft does still hold the biggest share, but it was evident that their share was continuing to slip and that Firefox was waiting in the wings to jump into its place.  Here are the stats for browser market share in Jan 2011:

Microsoft Internet Explorer – 56.00% (57.08%)
Firefox – 22.75% (22.81%)
Chrome – 10.70% (9.98%)
Safari – 6.30% (5.89%)
Opera – 2.28% (2.23%)
Opera Mini – 0.89% (0.98%)
Netscape – 0.85% (0.78%)

So why do I like IE9 so much?…

Well, it’s noticeably faster: the browsing speed and the browser load time are super fast compared to IE8.  It has removable tabs that can be docked in the taskbar, so you can launch web sites like you can Windows applications.  The UI has been streamlined, similar to Chrome, so that you have more screen real estate to browse in.  The favourites bar is again like Chrome’s and very easily accessible.  The address bar is now an integrated search bar that allows search engines to be added, so that when you have entered your search string you can double-click a search provider and be at their search results in seconds (I find the YouTube search very useful).  You can improve performance by viewing and disabling add-ons (it tells you how much time in seconds you are saving).  Security has been further enhanced.

In terms of HTML5 compatibility, whilst better than IE8, this version is still lacking, especially in forms, so it doesn’t appear as compatible as Chrome just yet.

With Windows 7, .NET 4 and now a super-fast browser in IE9 (and let’s face it, the success story that is the Xbox Kinect), it seems Microsoft really are listening to users and delivering some pretty solid products of late.  To get your hands on IE9 and see if it converts you, download it here.