Enterprise Architecture

What makes a good solution architect?


I’ve worked with quite a few over the years, from many different companies across many different sectors and verticals, and the qualities of a good solution architect shine through pretty quickly. A solution architect does not need a wall full of qualifications proving they know how to ‘code in a proper way’ or ‘understand architectural patterns’, as this proves nothing to the business. When it comes to directing how a suite of business requirements is translated into a solid business solution, there is no substitute for experience, and experience is full of failure.

In my view, a solution architect must understand both the business and the technical side of things, in a broad way. I know of one or two ‘Solution Architects’ who have simply spent ten years looking after computer networks, and I really feel that this is in no way a path to becoming an architect. The architect must understand the business first and foremost, have experience in more than one specific area of technology, and have expertise in at least one. For me, that translates into experience in:

– Network Infrastructure : It’s important that an architect knows how the network hangs together: how machines communicate with each other, and how to use the most common OS command line / shell tools to work on remote servers and detect problems on the network. Network security is also important, as are topics such as certificates and encryption. Finally, an architect should be familiar with the OSI model and the communication protocols that operate at each layer of the stack.

– Server Technologies : An architect should have a relatively good understanding of core server roles and the services and features they provide to clients. DNS and DHCP, for example, are basic services that almost all server operating systems provide and need configuring. Experience with a server platform such as Windows Server is also a must if you are working in a Microsoft-dominated network environment. Active Directory, network securable objects (users/computers etc.), domain Group Policy and NTFS permissions are basic features that one should be aware of.

– Database Administration or Design : Almost all applications that a solution architect will work with have a persistent data store, and that is a database the majority of the time. Understanding SQL is therefore a no-brainer: you simply cannot call yourself a solution architect if you don’t know how to SELECT from an INNER JOIN. Database servers are generally under-utilized in a big way (at least that’s my experience working with SQL Server for 10 years). SQL Server, for example, comes with several major components for reporting, analysis, notifications, data extraction and loading, and of course the core database engine. Many developers only know how to use the database feature and get stuck when it comes to the SQL Server security model. I have not yet met a developer who understands all of the database roles and the features they provide. Finally, a good solution architect knows when logic should be placed in stored procedures and when it belongs in the business logic tier. A major bugbear of mine is large stored procedures that do more than basic data manipulation and organization. In summary, understanding what data layer support you have when architecting an application is key, so you can properly place component responsibility within the data layer.
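For the avoidance of doubt, here is the kind of SELECT-from-an-INNER-JOIN I mean. This is a minimal, self-contained sketch using Python’s built-in sqlite3 module; the customers/orders schema is purely illustrative, not from any system mentioned here:

```python
import sqlite3

# In-memory database with two related tables (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1, 99.50), (11, 1, 12.00), (12, 2, 40.00);
""")

# INNER JOIN returns only rows with a match in BOTH tables.
rows = conn.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers AS c
    INNER JOIN orders AS o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()

print(rows)  # [('Acme', 111.5), ('Globex', 40.0)]
```

A customer with no orders would simply not appear in the result; that is the point of an inner (rather than outer) join.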

– Software Design & Development : Yes, that means that as an architect, you should have experience not just writing code, but designing applications and seeing those designs through their entire first cycle. From a set of business requirements, you understand the technical landscape (see above) well enough to present the business with a preliminary, high-level design, taking the ‘what’ of the functional and non-functional requirements and turning it into the ‘how’ of high-level implementation and software sub-systems. That means identifying the technology stack to use (hence the server/network/database knowledge requirements) and the main high-level areas of concern (including cross-cutting ones, such as security). The architect must also determine what operational requirements should be taken into account, which architectural patterns will be used in the solution, and how the solution will honour a set of quality attributes (maintainability, security, extensibility, scalability etc.). Once the high-level specification is signed off, the architect should be able to move down to the implementation detail, decomposing the existing system or set of requirements into individual software components and creating a domain model. These components should then (ideally) conform to the 3 P’s of software design in their design and interaction: Principles, Practices and Patterns. In summary, whilst it’s important to know your code syntax and the common objects available in the libraries of development frameworks, you must be familiar with the principles and methods of designing good components that interact well.

So, to reiterate: these are the qualities and skills that I personally believe all the good solution architects I have met share. I am certainly no barometer for assessing the role, but I have worked with enough competent architects to know which skills are important, and almost none of them hold many formal qualifications in the technical subjects.

Experience and a strategic/practical/pragmatic skill set are far more important in reality.

The ability to conceptualize, to abstract reality, to direct a solution’s development, to look at every problem with the architecture in mind, to provide the why, what and how of any task arising from the development project, and to support the developers: these are the skills an architect should hold.

My personal favourite quality I have observed, however, is the modesty of some architects. When you sit in a meeting with them, you realize the depth of their experience and that they are truly the guru of the business domain; they have a constant thirst for knowledge, and that shows in their modest approach. They clearly know more than they show, and that, to me, is one of the greatest qualities an architect can have.

As always, I welcome and appreciate reader opinions on the subject.


Transactional Isolation


Transactional isolation is usually implemented by locking whatever resource is being accessed (a database table, for example) during a transaction, thus isolating the resource from other transactions. There are two different types of transactional locking: pessimistic locking and optimistic locking.
Pessimistic Locking : With pessimistic locking, a resource is essentially locked from the time it is first accessed in a transaction until the transaction completes, making the resource unusable by any other transactions during that time. If competing transactions simply need to read the resource (a SELECT of a data row, for example) then an exclusive lock may be overkill. The lock exists until the transaction has either been committed or rolled back, at which point the resource is made available again for other transactions.
Optimistic Locking : Optimistic locking works a little differently. A resource being accessed isn’t locked when first used, but the state of the resource is noted. This allows other transactions to access the resource concurrently, which means conflicting changes are possible. At commit time, when the resource is about to be updated in persistent storage, the state of the resource is read from storage again and compared to the state noted when the resource was first accessed. If the two states differ, a conflicting update was made, and the transaction rolls back.
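The optimistic scheme above can be sketched in a few lines. This is a toy in-memory store of my own (not any particular database’s API), where each value carries a version number and a commit succeeds only if the version is unchanged since it was read:

```python
import threading

class OptimisticStore:
    """Toy key-value store with version-based optimistic concurrency."""
    def __init__(self):
        self._lock = threading.Lock()  # protects the dict itself, not the transaction
        self._data = {}                # key -> (value, version)

    def put(self, key, value):
        with self._lock:
            _, version = self._data.get(key, (None, 0))
            self._data[key] = (value, version + 1)

    def read(self, key):
        with self._lock:
            return self._data[key]     # returns (value, version)

    def commit(self, key, new_value, expected_version):
        """Write only if nobody changed the value since we read it."""
        with self._lock:
            _, current = self._data[key]
            if current != expected_version:
                return False           # conflict: caller must roll back / retry
            self._data[key] = (new_value, current + 1)
            return True

store = OptimisticStore()
store.put("balance", 100)

value, version = store.read("balance")   # transaction A notes the state
store.put("balance", 150)                # transaction B updates concurrently
ok = store.commit("balance", value + 10, version)
print(ok)  # False: the state changed since A read it, so A's commit is rejected
```

Real databases implement the version check with row version columns or timestamps, but the compare-then-write shape is the same.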

Good Design Documentation


OK, that’s a relatively general title. What I mean specifically, and what I want to talk about here, is the importance of a good-enough technical design specification, and how the quality of said document really does impact the resulting solution. I want to share some recent experiences where documentation really did let the side down.

I’ve not long finished a six-month placement at a company, working on Cordys C3-based bug fixes and change requests. It was challenging in three respects.

Firstly, the work was specific to business processes and XForms-based user interfaces, and neither was organized within Cordys in an intuitive, structured manner that provided any contextual reference to how the business solutions worked together. Second, the new change requests really required a good knowledge of the existing 300+ processes and forms that made up the applications running on Cordys. This was because the design documentation was less than adequate, and in six months you really can’t get a detailed enough grasp of such a large number of processes, especially non-standard business processes in the world of media management and digital content distribution. Finally, the loaded web service interfaces (and so the services themselves) included operation parameters such as arg0, arg1 and arg2, which, as you’ll no doubt have evaluated, is unbelievably unhelpful in determining what a service operation actually does and what data should be provided in its messages.

These issues aside for a moment, the real issue I want to discuss is how the design documentation should go some way towards explaining which components, services and forms should be used or designed for the given solution, what dependencies are outstanding at the time of writing, and what risks may be involved. I worked on a couple of CRs, and the documentation was poor. This was through no fault of the architect, however, who knew the systems inside out and was stretched so thinly that their desk was only visited once every two days, when meetings were cancelled. Poor and constantly changing requirements were no help either.

In order for developers to attach time estimates to tasks detailed in a design specification, there must be enough detail that the confidence level of those estimates can be as high as possible, and thus the business is kept more realistically informed of how long the solution might take. The problems I found with the documentation in this instance were as follows:

– A very brief document objective
– Zero mention of any outstanding dependencies at the time of writing (i.e. are the services required by this functionality written yet? Some were not)
– ‘To be’ process illustrations made up the bulk of the document, with very little supporting written description
– Message interface definitions were non-existent
– Not all of the requirements were met in the design document
– No exception handling or compensatory actions were detailed in the event of errors
– No architectural overview was presented, and minimal system interactions (i.e. sequence diagrams) were included

In short, the design specification put the responsibility on the developer to fill in the gaps. Whilst this frees up the architect’s time, it really is no good, for a few reasons:

1) Not all of the design is documented, and therefore it cannot be referenced in the future if the client questions functionality or attempts to request amendments outside of a new CR
2) Developers new to the team (myself in this instance) are left with gaps in their knowledge that require time-consuming investigation with potentially multiple teams (and as such consume estimated time while no development is done)
3) Gaps in design specifics can lead to incorrect assumptions about how the solution should operate
4) Inadequate design detail leaves room for (mis)interpretation of the design and can mean solutions move away from company design standards and architectural rules. This leaves a messy set of solutions that operate differently, don’t really utilize re-use and only further confuse developers.

In this case, the company may simply not have the resources to focus on more detailed documentation, or maybe they believe it’s just not as important as I do. The bottom line, however, is that if you are going to develop a solution that’s more complex than ‘Hello World’, you should really think about documenting the following (and I apologize in advance if you are already great at your design specifications):

– Start with a document summary. This should include the author, distribution list, document approvers and release history.
– Basic, I know, but include a ‘contents’ section that logically breaks the design up into layers (data access, service, process, UI).
– Provide a detailed overview of the solution. Detailed being the key word here; copying chunks of text from the functional specification is not cheating. The overview should include how the solution will improve any existing solutions (i.e. improve stability, boost usability, provide a greater level of system manageability).
– If necessary, provide details of the runtime environment and any anticipated configuration changes.
– Make reference to any other materials, such as the functional specification and use case documents.
– Include design considerations (company or otherwise).
– Detail any known risks, issues or constraints.
– Detail any dependencies at the time of writing. This should include any work being performed by other teams that the solution being detailed requires in order to operate successfully.
– Provide a top-level architectural diagram, even if the solution is basic. Diving into the detail without giving the developers a 1000-foot view of where the solution fits into the wider solutions architecture is, to me, just wrong. Support the diagram with a sentence or two.
– List the components that will change as part of the design.
– List any new components.
– Diagram component interactions (sequence diagrams).
– ‘To be’ process designs should include annotation, even if it’s assumed that current, knowledgeable developers would know this information. You will not always have the same people doing the work.
– For UIs, detail the style and appearance, ensuring it’s in line with company accessibility policies. That may require detailing fonts and hexadecimal colour references [#FFFFFF].
– Detail what programming may be required, server side and client side: what functionality should this code provide, and what coding best practices should be honoured.
– Keep a glossary of terms at the back of the document.

Finally, and most importantly, even if you are the architect and the master of all knowledge when it comes to your solution domain: distribute the document and enable change tracking. Send it to the subject experts for clarification and to the business stakeholders, even if they don’t understand the content.

Most of us do all this already, and that’s great. Design specification templates can be found online, so there’s really no excuse.

SOA. Just the basic facts. In 5 minutes.


What a SOA is not

SOA does not mean using web services.
SOA is not just a marketing term.
SOA also does not just mean ‘using distributed services’.

What a SOA is

SOA is an architecture of business services (usually distributed and sometimes ‘connected’ by means of a service bus) that operate independently of each other, advertise the services they offer through well-defined interfaces, and can be heavily re-used, not only to aid the development productivity of the IT department but to enable use of existing IT assets/systems. Ultimately, this means quicker turnaround of IT projects, improving how IT serves the business and thus improving business agility.

‘Service’ orientation means that logic is provided via a suite of services which should be seen as ‘black boxes’. They do stuff, but you and the services consuming them don’t need to know what’s going on under the hood, only what messages they require as input (usually SOAP messages) and what the service will do for / return to you. A black box service doesn’t have to be a web service, though web services are the most commonly implemented type of service for maximum distribution and cross-platform compatibility.
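The ‘black box’ idea can be shown with a minimal sketch: consumers depend only on the message contract, never on the internals. The QuoteService name, request shape and tax rate here are all invented for illustration:

```python
from abc import ABC, abstractmethod

class Service(ABC):
    """A 'black box' service: consumers see only the message contract."""
    @abstractmethod
    def handle(self, request: dict) -> dict: ...

class QuoteService(Service):
    # Internal logic is hidden; only the request/response shape is public.
    _RATE = 0.2  # hypothetical tax rate

    def handle(self, request: dict) -> dict:
        net = request["amount"]
        tax = round(net * self._RATE, 2)
        return {"amount": net, "tax": tax, "total": round(net + tax, 2)}

# A consumer depends only on the Service contract, not the implementation,
# so QuoteService could be swapped for a remote (e.g. SOAP) proxy unchanged.
def consume(service: Service) -> dict:
    return service.handle({"amount": 100.0})

result = consume(QuoteService())
print(result)  # {'amount': 100.0, 'tax': 20.0, 'total': 120.0}
```

In a real SOA the contract would be a WSDL/SOAP (or similar) interface rather than a Python base class, but the consumer-side discipline is the same.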

So whilst that goes some way towards explaining what SOA is at a general level, using these developer-written ‘services’… what SOA really is, is A FUNDAMENTAL CHANGE IN THE WAY YOU DO BUSINESS, via a top-down transformation requiring real commitment from the business, not just IT. That requires a change in the mind-set of the top people.

Characteristics of a black box ‘service’ in a SOA

– Loosely coupled (minimizes service dependency)
– Contractual (adherence to well-defined service interface contracts… ‘if you wanna do business you need to abide by my interface contract’)
– Abstract (service is a black box, internal logic is hidden from service consumers)
– Reusable (divide and conquer! – We divide up business logic to basic reusable services)
– Composable (can be used as a building block to build further composite services)
– Stateless (retains little to no information about who it interacts with)
– Discoverable (describes itself, a bit like a CV so that the service can be found and assessed ‘hello I’m a service, here is my CV’)

What can these services do?

Whatever you need them to do in order to satisfy business change needs / requirements. Common functions of services include:

– Perform business logic
– Transform data
– Route messages
– Query data sources
– Apply business policy
– Handle business exceptions
– Prepare information for use by a user interface
– Generate reports
– Orchestrate conversations between other services

The business benefits of implementing a SOA strategy

– Open standards based
– Vendor neutral
– Promotes discovery
– Fosters re-usability
– Emphasizes extensibility
– Promotes organizational agility
– Supports incremental implementation (bit by bit development)

What a SOA might look like

The below shows a business application based on a SOA.

The lowest level of operation consists of application logic, including existing APIs, DAL code and legacy systems. This may include ‘application connectors’: middle men that interface between a simple exposed API and large systems (ERP, MRP etc.).
This low-level application logic is then exposed as basic-level services (application-orientated services, as they are wrappers for parts of the application logic).
These basic-level services form the building blocks of composite-level services: application-orientated services are combined to form services that are more aligned with the business, and thus more business-orientated. This can include exposing a business process as an independent business service.
Basic (application-orientated) and composite (more business-orientated) services can then be orchestrated by business processes.
These business processes may include human interaction points where user interfaces are required. Processes can also be initiated via user interfaces (requests / orders / applications etc.).

[Image: layered SOA diagram, from application logic up through basic and composite services to business processes and user interfaces]
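The layering described above can be sketched in a few lines: two basic, application-orientated services wrapped into one composite, business-orientated service. All of the names (check_stock, reserve_payment, place_order) are purely illustrative:

```python
# Basic, application-orientated services: thin wrappers over application logic.
def check_stock(item: str) -> bool:
    # stands in for a call to a hypothetical inventory API
    return item in {"widget", "gadget"}

def reserve_payment(amount: float) -> str:
    # stands in for a call to a hypothetical payment system
    return f"auth-{int(round(amount * 100))}"

# Composite, business-orientated service built from the basic ones.
def place_order(item: str, amount: float) -> dict:
    if not check_stock(item):
        return {"status": "rejected", "reason": "out of stock"}
    return {"status": "accepted", "payment_ref": reserve_payment(amount)}

print(place_order("widget", 9.99))    # accepted, with a payment reference
print(place_order("sprocket", 9.99))  # rejected: item not in stock
```

A business process engine would then orchestrate place_order alongside other composite services, with human steps added where needed.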

Steps in Implementing a SOA with web services

1) Creating and exposing services (development team creating component services)
2) Registration of services. A SOA isn’t truly in place when you just have random web services sitting on different web servers exposing WSDL, with services consumed based on word of mouth and WSDL documents being passed around. A SOA requires a directory where all available services can be registered (UDDI 3.0 being the standard when using web services). This directory is the yellow pages for the services.
3) Address security. Exposing business logic as services over large networks opens up a serious set of security challenges. Security standards must be implemented for the services so that consumers of services can meet the security requirements.
4) Ensure reliability. Services must be monitored to make sure they are always up (high availability) and performance must be monitored to ensure reliability.
5) Concentrate on governance. How are all of these steps governed? What policies should be enforced at runtime? Compliance is also important.
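Step 2 above, the service directory, can be illustrated with a toy in-memory registry. This is only a stand-in for a real directory such as UDDI; the namespaces and endpoints are made up, and lookup is keyed by namespace and operation name (which is also how services are addressed on a bus like the Cordys SOA grid discussed later):

```python
class ServiceRegistry:
    """In-memory stand-in for a service directory such as UDDI."""
    def __init__(self):
        self._services = {}   # (namespace, operation) -> endpoint metadata

    def register(self, namespace: str, operation: str, endpoint: str):
        self._services[(namespace, operation)] = {"endpoint": endpoint}

    def discover(self, namespace: str, operation: str) -> dict:
        try:
            return self._services[(namespace, operation)]
        except KeyError:
            raise LookupError(f"no service registered for {namespace}/{operation}")

registry = ServiceRegistry()
registry.register("http://example.com/orders", "PlaceOrder",
                  "https://svc01.example.com/orders")

# Consumers look services up instead of hard-coding endpoints.
svc = registry.discover("http://example.com/orders", "PlaceOrder")
print(svc["endpoint"])  # https://svc01.example.com/orders
```

The point is the indirection: consumers bind to the directory entry, so an endpoint can move without every consumer being edited.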

SERVICES THAT ARE EXPOSED, REGISTERED, SECURE AND PERFORM WELL FORM A SOLID SOA FOUNDATION.

That’s all for now. Hopefully that paints a very top-level picture of what SOA is, what it is not, and how you should go about implementing it (with the all-important business buy-in).

Metastorm BPM : It’s not an application development tool


After 2 years of designing a large operational system using Metastorm v7.6, I wanted to reflect on why it’s a bad idea to use Metastorm BPM to build big workflow based systems.

The problem with the finished system is not that it doesn’t satisfy the requirements or perform well enough (considering); it’s that it is a maintenance nightmare. I wrote an article this time last year, whilst travelling back home to Holland after being snowed in, about why maintainability is the most important design factor (over and above scalability and extensibility). Coupled with a ‘big bang’ design approach (rather than an agile one) and constant requirement changes, it’s a surprise the system runs in its operational state.

I don’t wish to run the product down, because for small to medium workflow-driven applications it does the job. But its clear lack of object orientation is the biggest single product flaw, and when building a big system with Metastorm this cripples maintainability. A solid design architecture is obviously of major importance, yet basic application architecture fundamentals, such as breaking a system design down into cohesive functional ‘components’ that represent the application’s areas of concern, can be difficult to implement. This is down to the fact that process data is stored in the database per process, and passing data between processes using flags can become messy, especially when those flags carry characters that Metastorm classes as delimiters. Sub-processes are then an option, but these also have inherent flaws.

Forms, which again are application components, are process-specific, so re-use suffers here too and forms have to be replicated, further undermining maintainability.

Having data repeated in processes and having no code dependency features is bad enough, but because you have to remember where you have used process variables, and keep in mind when and where their values may change, the tool puts all the responsibility on the developer. Once the system gets very large, the event code panels (the ‘on start’, ‘on complete’ etc.) get very complicated, and tracking variables and when they may change becomes a struggle in itself. Changing a variable value in a large process risks making other parts of the process misbehave because ‘you’ve forgotten that the variable is used in a conditional expression later on’.

This then begs the question: should you use the Metastorm event/‘do this’ panels for ANY business logic? I’d say no. Only UI or process-specific logic should live there; you should push ALL of your business logic into server-side scripts and a suite of external .NET assemblies. You can then at least implement a fully swappable business logic layer.
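The ‘swappable business logic layer’ idea looks like this in miniature. I’m sketching it in Python rather than the .NET assemblies the text describes, and the approval rules and thresholds are invented; the point is that the process layer depends only on an interface, so the rules can be replaced without touching the process:

```python
from abc import ABC, abstractmethod

class ApprovalLogic(ABC):
    """Business logic contract, kept OUT of the BPM tool's event panels."""
    @abstractmethod
    def approve(self, amount: float) -> bool: ...

class StandardApproval(ApprovalLogic):
    def approve(self, amount: float) -> bool:
        return amount <= 1000   # hypothetical threshold

class StrictApproval(ApprovalLogic):
    def approve(self, amount: float) -> bool:
        return amount <= 250    # hypothetical tighter threshold

# The process event handler only calls the interface; in Metastorm terms,
# the event panel would invoke this rather than contain the rules itself.
def on_submit(logic: ApprovalLogic, amount: float) -> str:
    return "approved" if logic.approve(amount) else "referred"

print(on_submit(StandardApproval(), 500))  # approved
print(on_submit(StrictApproval(), 500))    # referred
```

Swapping StandardApproval for StrictApproval changes the business behaviour without editing a single event panel, which is exactly the maintainability win being argued for.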

So along comes v9. This product is a great move towards stronger application architectures; OOP design and the ability to debug alone save a whole lot of system maintenance time. So although this version takes us closer to being able to create solid, maintainable operational applications, it was released too early. It is slow (halving development productivity compared with version 7), it had many broken features, and grids, one of the most used visual components, especially for data-driven applications (which is most business apps), were just terrible. They continue to perform almost independently from the rest of the system, and patch 9.1.1 is still addressing grid shortfalls: obvious shortfalls which should have been picked up by a thorough QC team at (OpenText) Metastorm.

The new OOP approach means that designers and developers no longer have to use the line-by-line interpreted syntax of v7 and can design re-usable components. So there is a greater case for using Metastorm BPM as an application development tool for fair-sized applications, but whilst development productivity is still poor and the designer is still very buggy, it’s not quite there yet.

Cordys BOP4 : Messaging on the SOA Grid


I’ve spent a few weeks getting used to Cordys BOP 4 and as I usually try and do with a new product, I wanted to know more about what’s going on under the bonnet with it.  The central coordinating component of Cordys is its SOA grid, which takes care of messaging between all of the core Cordys services and other web services.  Based on the information provided in the Cordys offline documentation and because I’m a visual learner, I’ve drawn up the following image that should hopefully shed some light on how Cordys organises its internal services and how they communicate via the SOA grid. Click on the image to zoom to actual size.


What I’m trying to show here is how Cordys deals with an inbound service request.  The dark line represents the path of the message along the service bus.

To illustrate how the image above can be used to understand what Cordys does, consider a request for an XForm from the Cordys client. The client wants to display an XForm, so it sends an HTTP request to the web server for a resource with a .caf file extension. The web server, based on the .caf file extension, hands the request over to the Cordys web gateway. The web gateway contacts the LDAP service container and checks for the location of the XForms service container (the LDAP service must always be up and running for proper SOA grid functioning). The LDAP service container has an LDAP application connector which talks to CARS. Next, the SOAP request is sent to the XForms service container, and the XForms engine takes care of rendering the HTML response. Not only that, but the XForms engine also validates controls against schemas and automatically contacts any other web services required whilst rendering. Once the HTML is generated, it is returned via the SOA grid to the Cordys web gateway, then back to the calling client.

I should mention at this point that web services on the SOA grid are called based on the service operation name and namespace in the SOAP request.

This is very high level and it’s always a good idea to read further into the Cordys documentation, but I hope this graphic helps to illustrate the architecture of services, service containers and service groups on the Cordys SOA Grid.

Understanding Multi-tenancy


I’m doing a lot of research and practical ‘playing’ with the Cordys BOP 4 environment at the moment. It’s a relatively young product from a young company (founded in 2001), but from what I have understood about the product and its architecture, it is a strong, versatile collaborative tool that really supports rapid application development and maintenance for business change, thus reducing the general cost of ownership. I don’t want to talk about the product itself (I will be doing that soon enough), but I wanted to cover one of the software’s best features in my opinion: its ability to operate within the enterprise and in the cloud utilizing a fully multi-tenant architecture.

Multi-tenancy is an architectural principle whereby a single installation of a software program / server application can service multiple client ‘organizations’, providing each with a customized, isolated application environment with its own data. This is in complete contrast to the idea of multiple instances for each organization (i.e. installing an instance of some server software and its underlying database to serve one organization and store only that organization’s data). Each ‘organization’ (in the general sense of the word) using the application is classed as a tenant; if one single installed application can serve multiple tenants their customized view of the application, then it is said to support multi-tenancy. Google Apps is a perfect example of a multi-tenant architecture. Multi-tenancy is the foundation architecture for most if not all Software as a Service applications, and thus cloud applications support multi-tenancy.

How multi-tenancy is implemented can vary from application to application. The general principle, however, is to have the application virtually partition portions of its data and services between the different consuming organizations. A very simple example at the data level would be to have the application generate a set of related data tables under a new database schema as each new organization is set up to use the application (so a schema per organization). This separates the data into logical groups that have their own security context on the database. There are other ways to partition the data, but this illustrates one potential method.
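The schema-per-organization idea can be sketched with Python’s sqlite3. SQLite has no schemas, so a table-name prefix stands in for one here, and the tenant names and invoice table are invented; a real implementation would use proper schemas with per-schema security and would never build SQL from untrusted tenant names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def provision_tenant(tenant: str):
    """Create an isolated set of tables for a new organization.
    (A name prefix stands in for a database schema in this sketch;
    tenant must come from a trusted source, as it is spliced into SQL.)"""
    conn.execute(f"CREATE TABLE {tenant}_invoices (id INTEGER PRIMARY KEY, total REAL)")

def add_invoice(tenant: str, total: float):
    conn.execute(f"INSERT INTO {tenant}_invoices (total) VALUES (?)", (total,))

def invoice_total(tenant: str) -> float:
    return conn.execute(
        f"SELECT COALESCE(SUM(total), 0) FROM {tenant}_invoices").fetchone()[0]

# One installed application, two isolated tenants.
for t in ("acme", "globex"):
    provision_tenant(t)
add_invoice("acme", 120.0)
add_invoice("acme", 80.0)
add_invoice("globex", 10.0)

print(invoice_total("acme"), invoice_total("globex"))  # 200.0 10.0
```

Each tenant sees only its own rows because its queries can only ever touch its own tables, which is the isolation property multi-tenancy is after.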

So multi-tenancy is a software architecture and one that is prevalent in cloud applications.  Cordys BOP 4 does this very well and I’m looking forward to investigating this product and its virtualization capabilities further.

.NET Architecture Guide v2.0


I was quite pleased this morning to stumble across a link to a document I’d read a few years ago, which I remember being one of the best .NET-focused guides on how to architect .NET applications and services. The document was called ‘Application Architecture for .NET – Designing Applications and Services’ and was (and still is) available to download for free in PDF format from the Microsoft website. I remember the book being a very informative look at .NET architecture patterns for local and distributed applications / services built with .NET 1.0, but when I recently went googling for some up-to-date material specific to .NET 2.0+, I learnt that there didn’t appear to be an updated version of the document available for download… until I came across Tom Hollander’s blog post.

Tom mentions that the project to update the original document was put on hold for a while, but in 2009 a v2 of it was released and is now also available for download.  The new(est) version is much bigger and expands on a lot of the original material as well as discussing the advancements in .NET features and how they support recommended architectures.

I strongly advise any developer (whether in the market as a budding architect or not) to read this guide, it’s an amazing resource for understanding .NET centric design architectures and the best part… it’s free.

BPM and ECM


Based on some recent speculation, forum discussions and yesterday’s confirmation that OpenText have acquired Metastorm, I wanted to talk about whether it is inevitable that BPM and ECM will eventually become one technology offering. There are lots of opinions on this topic: some think the two will merge, while others think that whilst there is currently overlap, they will continue as separate and in some cases competing technologies.

ECM stands for Enterprise Content Management. Microsoft’s SharePoint is an ECM tool in that it allows you to organize your enterprise content / digital assets centrally, which is useful for collaboration. ECM tools not only allow you to organize your digital content but also generally provide basic process automation using this content; in the case of SharePoint, that means using Microsoft Office and Workflow Foundation. The latter two, the workflow and the UI for interacting with the process, are where we start to move into the BPM realm. But not really. This tends to be the main argument that ECM and BPM will merge: that both technologies offer a workflow solution. The problem with this, however, is that BPM as a field is misunderstood in a lot of cases. BPM does not just mean process automation using a workflow engine; there is so much more to the field of BPM.

ECM is about content and how it is organized and made available to an organization; some process automation is thrown in to ensure that this content can be moved around the organization, but it is limited. BPM is the continuous improvement of how a business is run (via its many processes), applied not only to automate processes but to raise visibility of how the business is run, via process activity monitoring, business intelligence, dashboards and so on.  BPM is not all about content. Yes, BPM generally creates content and may consume content, but BPM products have had document management support for years, so this isn’t new.

Systems integration, or EAI (enterprise application integration), is another area of technology that has a far closer relationship to BPM. By using programming frameworks like Java or .NET, or enterprise service bus components that implement message-oriented integration patterns, you can integrate almost anything, out of the box, with BPM servers.  Even so, enterprise integration remains its own independent technology, despite vendors offering BPM products that cover the two.

I do think it is inevitable that some vendors will attempt to further develop their products with ECM features so as to offer up an ‘all in one’ ESB, BPM and ECM server, but for the most part I believe these will still be sold as separate products with simple APIs, and thus they will remain competing technologies (as the products tend to drive which technologies are grouped together). In the case of Microsoft, with BizTalk, SharePoint and Office, I believe they have the right strategy, and I do think OpenText will keep its BPM and ECM products separate but closely paired (which makes sense from a sales / licensing perspective).

There are of course benefits to storing and organizing documentation and process models in an ECM, as is done with some Business Process Analysis tools, but this is true of any project documentation. As the ‘Enterprise’ in the name suggests, an ECM should be seen as an enterprise-wide repository, not something specific to the process management realm.

In summary, whilst the two technologies overlap and I do see content management as important to BPM (think open XML document formats and web services), I believe the two will not become one; rather, content management will become one of the many areas of the BPM space (along with rules management, process automation, activity monitoring, systems integration etc.) – in what form is still to be seen.

SCRUM : Why I like it so much


Scrum. It’s that point in the game where the guys all get together to discuss the finer points of ball management. They set a plan, huddle together and execute.

That’s not a million miles away from SCRUM, the agile software development framework that IT professionals use to execute and keep track of development projects.  I like it because it’s simple and the status of the project is clear at any point in the iterative cycles.  There are few rules in SCRUM; it’s a clear and to-the-point framework.  The whole point of SCRUM is to ensure that regular iterative cycles called sprints are undertaken and that at any point in those cycles the state of the development is made clear, usually in meetings held with the team standing, which, so far, I like because they last about 15 minutes at most.

The general flow of a development project being conducted with the SCRUM framework applied would look like the following:

Set a SCRUM Master (essentially the project manager / project leader). Work with your subject experts and business analysts to identify a work list and the set of features the product being developed is to include (normally based on the functional requirements). This work list is commonly known in SCRUM as the product backlog.  Once the backlog is finalized, work with the development team to order it by priority.  In my experience, this happens after some work on the general architecture design of the product, so lots of UML diagrams and talk of design principles and patterns by the lead Enterprise / Solution Architect(s).

Once the order of work has been set, the product backlog is split into sets of ‘sprint backlogs’, which list the tasks to be performed during each ‘sprint’.  Sprints are the iterative development cycles and tend to range from 2 to 6 weeks or so (at least in my experience with SCRUM).  The aim of each sprint is to have an area of the product in a ‘ship ready’ state by the time the sprint ends, which should mean that the unit tests have been carried out and the QA guys and gals have signed off on the work.
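To make that splitting step concrete, here’s a minimal Python sketch that greedily partitions a prioritized product backlog into sprint backlogs by team capacity (the item names, point estimates and capacity figure are all invented for illustration; real SCRUM teams negotiate this in sprint planning rather than computing it):

```python
# Minimal sketch: partition a prioritized product backlog into sprint
# backlogs based on the team's capacity (in story points) per sprint.
# Item names and point estimates are invented for illustration.

def plan_sprints(product_backlog, capacity):
    """Greedily fill each sprint up to `capacity` points, in priority order."""
    sprints, current, used = [], [], 0
    for item, points in product_backlog:
        # Start a new sprint when the next item would exceed capacity.
        if used + points > capacity and current:
            sprints.append(current)
            current, used = [], 0
        current.append((item, points))
        used += points
    if current:
        sprints.append(current)
    return sprints

backlog = [("login page", 5), ("user search", 8), ("audit log", 3),
           ("report export", 8), ("admin console", 13)]

for i, sprint in enumerate(plan_sprints(backlog, capacity=13), start=1):
    print(f"Sprint {i}: {sprint}")
```

The greedy fill preserves the priority ordering agreed with the business, which is the point of the exercise: the highest-value items land in the earliest sprints.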

During the sprint, daily team catch-up meetings (huddled in headlocks, of course 😉) take place to ascertain the state of play each day.  Being open and transparent in these meetings is the key to ensuring the sprint finishes on time.  In the SCRUM meetings I have taken part in, there is generally a rule within the team that there is no bad news; just be clear and no friction will arise.  No one cares if you’ve not finished something, just be honest so the team can handle it during the current sprint.  In my experience, a playback of the development to the customer occurs at the end of each sprint.

The workload and project timeline are normally set out against a burn-down chart. This is a popular diagram with project managers / SCRUM Masters because it’s a visual indication of whether the project is ‘burning down’ fast enough to land on time for delivery.  By understanding in advance whether the sprints are delivering, you can tell whether additional resource or cost injection is needed in order to finish the work in the timescale agreed with the stakeholders.  There’s nothing worse for a project manager or programme manager than being grilled by the board because a project has overrun and gone over budget.
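The numbers behind a burn-down chart are simple: remaining work measured each day, compared against an ideal straight-line burn from the sprint’s total down to zero. A minimal Python sketch (the total, sprint length and daily remaining-point figures here are invented for illustration):

```python
# Minimal sketch of the numbers behind a burn-down chart: remaining story
# points per day versus the ideal straight-line burn. Figures are invented.

def ideal_burndown(total_points, days):
    """Straight-line burn from total_points down to 0 over `days` days."""
    return [total_points - total_points * d / days for d in range(days + 1)]

total, days = 40, 10
actual = [40, 38, 36, 35, 30, 28, 27, 22, 18, 10, 4]  # measured each morning
ideal = ideal_burndown(total, days)

for day, (a, i) in enumerate(zip(actual, ideal)):
    status = "behind" if a > i else "on/ahead"
    print(f"day {day:2d}: actual {a:5.1f}  ideal {i:5.1f}  -> {status}")
```

Whenever the actual line sits above the ideal line for several days running, that is the early warning the Scrum Master takes to the stakeholders.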

I’ve certainly had my fair share of involvement in projects that have overrun and gone well over budget (who hasn’t? 😉), and these were non-SCRUM projects.  I’m a fan of SCRUM because of its honesty about each sprint’s state of development.  You know exactly where you’re at every day, and because everyone does tend to stand in my experience (you don’t have to), people are quick to update and get to the crux of matters because they want to sit down. This leads to quick identification of issues and quick resolution of them.

I’m a fan of SCRUM and I’d love to hear about projects implemented using this framework that have been delivered both on time and within budget… Comments please!

Oh and last but not least, Happy New Year!