Software Sustainability Maturity Model

by Ross Gardler on 7 September 2010


When choosing software for procurement or for reuse in development - regardless of the licence and development model you will use - you need to consider the future. While a software product may satisfy today’s needs, will it satisfy tomorrow’s? Will the supplier still be around in five years’ time? Will the supplier still care for all its customers in five years’ time? Will the supplier be responsive to bug reports and feature requests? In other words, is the software sustainable?

When evaluating closed source software, much of this is guesswork. Open source software, on the other hand, presents a number of advantages over closed source, such as options for the ongoing maintenance and development of local software configurations. This can reduce the risks presented by supplier failure. But it can also complicate product evaluation, since newcomers to open source development methodologies may find it difficult to evaluate the software in terms of its sustainability.

This document outlines a proposal for a new Software Sustainability Maturity Model (SSMM), which can be used to formally evaluate both open and closed source software with respect to its sustainability. The model provides a means of estimating the risks associated with adopting a given solution. It is useful for those procuring software solutions for implementation and/or customisation, as well as for reuse in new software products. It is also useful for project leaders and developers, as it enables them to identify areas of concern, with respect to sustainability, within their projects.

Types of reuse

The concept of software reuse is central to evaluating software sustainability. So before we examine existing techniques for measuring software sustainability, let’s define some important terms as they are used in this context.

Software reuse is the re-application of knowledge encapsulated in software code in order to reduce the effort of developing and maintaining a new software system. Even when a complete software application cannot be reused, individual components, data formats, high-level designs, algorithms or other items may still be reusable.

When reusing individual components of software systems, there are generally two approaches. The first is to reuse a component as-is, without the need to understand its inner workings. That is, one simply makes calls on the component and receives the results. When reusing a component in this way, one has to be sure the component works exactly as required, as it is not possible to modify the component’s behaviour.

The second approach is to make some modifications to the component before including it in the system. This can be done when the component does not perform precisely as required in the new environment. However, in order to allow for this type of reuse, the component must be well designed and documented, and licensed in a way that allows for modification.
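To make the two approaches concrete, here is a minimal Python sketch. The `CsvParser` component is invented purely for illustration: the first approach calls the component unchanged, while the second adapts its behaviour, which is only possible when the design, documentation and licence permit it.

```python
# A hypothetical third-party component, used here only as an illustration.
class CsvParser:
    """Parses a comma-separated line into a list of stripped fields."""
    def parse(self, line):
        return [field.strip() for field in line.split(",")]

# Approach 1: as-is reuse -- call the component without touching its internals.
parser = CsvParser()
fields = parser.parse("a, b, c")  # ['a', 'b', 'c']

# Approach 2: reuse with modification -- adapt the behaviour by subclassing.
# This requires the component to be well designed and licensed for modification.
class SemicolonParser(CsvParser):
    def parse(self, line):
        # Normalise semicolons to commas, then delegate to the original logic.
        return super().parse(line.replace(";", ","))

adapted = SemicolonParser()
print(adapted.parse("x; y; z"))  # ['x', 'y', 'z']
```

With as-is reuse the caller depends entirely on the component behaving as documented; with adaptation the caller takes on some maintenance responsibility for the modified behaviour.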

Factors affecting software sustainability

The sustainability of software is affected by both technical and non-technical issues. Technical issues tend to focus on how reusable the software is - i.e. its potential for adaptation - while non-technical issues include how a project is governed and funded. Often it is not possible to cleanly separate the technical issues from the non-technical issues.

To be sustained over time, software needs to be both useful and adaptable. It also needs to evolve as users’ needs evolve. The potential for reuse is therefore a key factor in the sustainability of software. Reuse can save time and money, and increase the reliability of resulting products. However, an attempt to reuse software that is not easily reusable can have the reverse effect. The goal of the Software Sustainability Maturity Model is to provide a means of evaluating the risk factors in reusing software.

Software sustainability is also affected by the number of environments in which the software is likely to be used and reused. For example, if a tool is applicable to only a small number of users who make minimal contributions to the project, it is less likely that sufficient resources will be available for the ongoing support and development of the product [1].

It is not normally good practice to attempt to create a software solution that seeks to satisfy too many disparate groups of users, because it is difficult to address the needs of all potential users without making significant compromises. It is therefore becoming increasingly common for suppliers to share the core components of a software system and to focus on a specific group of user needs at higher levels of abstraction, such as the user interface. For example, numerous Internet search engines, social networks and micro-blogging services collaborate on the same core data management frameworks. In these cases, organisations that compete in the marketplace are able to collaborate on overlapping technology while still developing competitive advantages in their specific markets. This approach is often called open innovation.

There are many examples of software code being reused, especially if we look to open source software for inspiration. Reusable software components range from small libraries, such as those in the Apache Commons project, through to frameworks for building complete applications, such as the Eclipse Rich Client Platform. There are also complete applications that can be extended through plug-in systems, such as Moodle, Drupal or the Mozilla Firefox web browser.

Techniques for measuring sustainability

A sufficiently rich sustainability evaluation technique should address both technical and non-technical aspects of software components and systems intended for implementation or reuse.

Many approaches to measuring the sustainability of software rely on a set of qualitative measures that examine the implementation of the software and its management processes. However, qualitative measures are problematic, as they allow personal opinion to influence the results. For example, a good salesperson or project lead can significantly influence the qualitative evaluation of a product.

In addition to the qualitative measures, there are numerous quantitative approaches to measuring reusability. These will examine the code itself and/or publicly available data about the project’s development model. But these require access to the source code and management data (such as bug tracking data), thus making it impossible to evaluate both open and closed source software using the same method. An openly developed project allows all aspects of management, governance and source code to be examined, while a closed project will often rely on assurances from the project owners, rather than an exhaustive evaluation.

In this section we examine a number of existing evaluation techniques and introduce a new ‘openness rating’, which is designed to enable the evaluation of opportunities for collaboration with third parties in the design, implementation and maintenance of a project’s outputs. A key difference between the openness rating and existing techniques is that the openness rating examines potential barriers to the kinds of collaboration necessary to reap the rewards of open innovation in software.

Informal techniques

Informal techniques are quick and easy to apply, but they are inconsistent and open to interpretation. They focus on information that is easily accessible to the evaluators. In an open source software project, far more information is available than in a closed source project, which enables a much more complete evaluation. For example, in addition to the visible features and existing uses that can be examined in any project, the status, pace and direction of ongoing development are clearly visible in an open source project.

Because of their flexibility and unstructured nature, informal techniques can be created and applied to both technical and non-technical aspects of software development and reuse. An informal evaluation may include a preliminary evaluation of required features, a cursory review of existing users and an examination of the project management. However, informal techniques cannot reliably be applied across different projects precisely because they lack defined and repeatable evaluation processes. Their usefulness is therefore limited to narrowing down a large number of candidates to a smaller, more manageable set that can then be formally evaluated.

Formal techniques

There are many formal techniques for measuring the effectiveness of software development teams and processes. Some are well established, such as the Capability Maturity Model. Others, such as the Reuse Readiness Rating, are less well established but are nevertheless employed in significant organisations. Yet more are even less established but still provide useful background work for a Software Sustainability Maturity Model. The sections below briefly outline some of the formal evaluation techniques used in this work.

Capability Maturity Model

The Capability Maturity Model (CMM) was introduced in the mid-1980s. It provides a set of assessment models that are used to determine an organisation’s ability to deliver a given piece of software on time and with an acceptable level of quality. The most common examples are CMMI-DEV (Capability Maturity Model Integration for Development) and SPICE (Software Process Improvement and Capability dEtermination, otherwise known as ISO/IEC 15504). In order to apply these models, one requires access to the development team. At first sight, this makes them ideal for applying to both open source and closed source projects.

However, the CMM cannot be directly applied to open source project communities, as it assumes a bounded organisational structure. Nevertheless, since a well-run open source project will have defined processes and structures in place, there is no reason why the concepts found in these models cannot be adapted in order to apply them to open source projects.

Reuse readiness rating

The NASA Earth Science Data Systems (ESDS) Software Reuse Working Group is developing the Reuse Readiness Levels (RRLs) as a model for measuring and evaluating the reusability of software components and systems. The rating derived from these levels is designed to allow the evaluation of any software component, regardless of the development methodology used to produce it.

When reusing software components without modification, evaluation using these levels is sufficient in most cases. However, if we intend to make adaptations to the software we are reusing we also need an evaluation of the development methodology and project management. The Reuse Readiness Levels do not provide a mechanism for such an evaluation, but do provide a framework for the technical evaluation of the components in question.

Open source evaluation models

There are a number of models for the evaluation of open source software, including the Open Source Maturity Model by Capgemini and another of the same name by Navica, the Business Readiness Rating and Qualification and Selection of Open Source software (QSOS). Each of these models attempts to provide a means of evaluating the non-technical aspects of an open source software project, what some would call the ‘openness’ of their development model. That is, they are used to help guide the evaluation of the project as a sum of both its outputs and its development team (or community in the case of open source).

These approaches are useful in that they highlight the importance of considering community health when selecting open source products. However, they cannot easily be applied to closed source projects, where the required visibility into the project development team is usually not present. For example, it is impossible to evaluate the effectiveness of issue tracking and release planning without access to the project’s issue tracker. This is not a failing of the evaluation models, but a result of the lack of transparent information about the development of most closed source products.

Openness Rating

OSS Watch, in partnership with Pia Waugh, has developed an ‘openness rating’. This is a series of questions designed to guide the evaluation of a project’s structure with respect to the management of intellectual property, standards adoption, knowledge management, project management, and market opportunities. Unlike earlier models designed to evaluate open source projects, this model can be applied to both open and closed source software products.

The openness rating allows one to identify strong and weak points in a project’s management structure with respect to enabling third party collaboration and reuse of outputs. This enables project managers to identify areas that may be unintentionally limiting collaboration opportunities. Similarly, the rating allows third parties to understand what barriers exist with respect to making local modifications to a third party’s software outputs. By using this tool both parties can more effectively plan their allocation of resources in line with their needs.
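As a rough sketch of how such a question-driven rating might be aggregated, the snippet below scores yes/no answers into per-category percentages. The categories, questions and weights are invented for illustration; they are not the actual openness rating questions, which are not reproduced here.

```python
from collections import defaultdict

# Hypothetical questionnaire: (category, question, weight).
# These questions are illustrative only, not the real openness rating.
QUESTIONS = [
    ("IP management",      "Is an OSI-approved licence applied to all source files?", 2),
    ("IP management",      "Is there a contributor agreement covering third party patches?", 1),
    ("Knowledge",          "Is design discussion archived on a public mailing list?", 1),
    ("Project management", "Is a public issue tracker used for release planning?", 1),
]

def score(answers):
    """answers maps question text -> True/False; returns per-category percentages."""
    earned, possible = defaultdict(int), defaultdict(int)
    for category, question, weight in QUESTIONS:
        possible[category] += weight
        if answers.get(question, False):
            earned[category] += weight
    return {category: 100 * earned[category] // possible[category]
            for category in possible}
```

A project manager could answer the questions, identify the category with the lowest score, and target it when planning improvements; a prospective reuser could read the same scores as a summary of likely barriers to collaboration.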

Automated techniques

A number of techniques for the automatic measurement of software quality and community development models have also been explored. This is, perhaps, one of the most contentious areas of software evaluation. Automated tools tend to deliver a high level summary that obscures the details underneath. For this reason some people feel that they are not an effective measure of sustainability. Nevertheless, it would be a mistake to ignore all automated approaches without considering where they might add value.

Automated community evaluation

Community evaluation techniques tend to measure activity on issue trackers, bug trackers, mailing lists and version control. They provide some indication of community activity and therefore, according to their proponents, a measure of community health. However, measuring community health using quantitative techniques is, for many people, flawed.

Quantitative measures, such as the number of emails sent or commits made, make no reference to the value of those activities. For example, a single email that prevents a major design error is far more important to the sustainability of the software than multiple emails discussing the colour of a button in the user interface. Quantitative techniques are unable to make this distinction.
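A toy example makes the limitation obvious. The sketch below computes commits per author from `git log --format=%an`-style output (a hard-coded sample is used here so the snippet is self-contained): the counts measure activity only and say nothing about the value of each change.

```python
from collections import Counter

# Sample output in the style of `git log --format=%an` (one author per commit).
sample_log = """alice
bob
alice
alice
carol
bob
"""

def commits_per_author(author_lines):
    """Count commits per author from newline-separated author names."""
    return Counter(line for line in author_lines.splitlines() if line)

counts = commits_per_author(sample_log)
# The counter ranks alice highest, yet carol's single commit might have been
# the one that prevented a major design error -- the metric cannot tell.
```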

The limitations of quantitative evaluation of open source communities make automated techniques an interesting topic for academic study, but at present they are of limited value when measuring sustainability. With careful use, however, they may act as indicators of areas requiring more detailed examination, or of general trends within the community.

Automated software quality evaluation

The automated evaluation of software quality is more mature than that of community development models and therefore provides more valuable measures. Typically, tools will seek to measure such factors as consistency, testability, and security. In many projects it is considered good practice to integrate some of these types of tools within the development process.

There are many examples of automated code analysis tools. These tools, when used correctly, can provide valuable information about specific aspects of code quality and management.

Software Sustainability Maturity Model (SSMM)

In order to define a maturity model for software sustainability, we have drawn inspiration from each of the evaluation techniques described above. In some cases (the Reuse Readiness Levels and the Capability Maturity Model), we have extended rather than adapted the model. We describe the key attributes found at each level, but do not detail the properties and features required to progress from one level to the next. Before examining these key attributes, let’s define each of the nine levels in the model. In naming the levels we have taken inspiration from the life cycle of a pear tree:

Level 0 - Seed: The project is little more than an idea and a blank canvas. At this stage, nobody but the project owner(s) and their immediate environment can have any direct influence over the final outcome.
Level 1 - Germination: The project is starting to take shape, but it is still little more than a proposal. The project owner(s) have not started to communicate the project’s objectives in any meaningful way.
Level 2 - Seedling: There is an early-stage implementation of the solution at this point. However, without the commitment of the project owner(s) the project is highly unlikely to survive. Project owner(s) do not, in general, seek external input other than through contracted help, although they do make the source code available for reuse.
Level 3 - Juvenile: The project is starting to take on a life of its own, although it is still mostly guided by the project owner(s). Some aspects of project design can now be led by someone other than the initial project owner(s), although the original project owner(s) are still critical to project development, since they are the ‘gatekeepers’ to the project.
Level 4 - Flowering: The project is able to function independently within a narrowly defined set of criteria. External influences are starting to have a significant effect on the future of the project, and thus a community of peers is building around the project.
Level 5 - Pollination: The project and its related community are no longer controlled by the original project owner(s). It is possible that the project would continue if the project owner(s) were to withdraw entirely. The project owner(s) themselves recognise this and are ensuring that mechanisms are in place to allow the community to guide the project towards full sustainability.
Level 6 - Fruiting: The community is self-organising and fulfilling important roles in the project management structure. The project leader is still vital to the survival of the project, but there are candidates that might be able to fill their role, given a managed transition.
Level 7 - Ripening: The project has broken free of the original project owner(s) and can survive independently. The community and the project that the community works with are now able to make decisions for the benefit of all peers, rather than for any subset of community members.
Level 8 - Dispersal: The project is satisfying the needs of a diverse set of users and contributors and would almost certainly survive the departure of the current project lead.

The table below outlines the key features of each of these levels and maps them to the relevant Capability Maturity Model and Reuse Readiness Levels. Each SSMM level incorporates the requirements of both the CMM and RRLs, in addition to the further requirements outlined in this table. Note that the requirements to reach a level are the minimum, not the maximum. For example, software code may be available under an open source licence prior to level 5, but in order to reach level 5 this must be the case.

Table 1. Draft Software Sustainability Maturity Model
SSMM Level | SSMM Title | Openness Summary | Reuse Readiness Level | Capability Maturity Model Level
0 | Seed | No source code available under an OSI licence. | 1 - No reusability; the software is not reusable. | 0 - Incomplete
1 | Germination | Either no source code available or unverifiable source code; there is no IP management process allowing for third party contributions. | 2 - Initial reusability; software reuse is not practical. | 1 - Competent people and heroics
2 | Seedling | Verifiable source code is available in a public version control system with traceable IP ownership and licensing. | 3 - Basic reusability; the software might be reusable by skilled users at substantial effort, cost and risk. | 2 - Basic project management
3 | Juvenile | Project owner(s) have a mechanism for engaging with and understanding third party interest in the project; the project has a responsive mail address. | 4 - Reuse is possible; the software might be reused by most users with some effort, cost and risk. | -
4 | Flowering | Third parties are able to examine, understand and influence the future of the project; the project has a public issue tracker, informational web site and mailing list, all of which are used to manage the project. | 5 - Reuse is practical; the software could be reused by most users with reasonable cost and risk. | 3 - Process standardisation
5 | Pollination | Third parties are willing and able to take responsibility for key aspects of project development; the issue tracker and mailing list are active and responsive to third party requests; source code is released under an OSI-approved licence. | 6 - Software is reusable; the software can be reused by most users, although there may be some cost and risk. | -
6 | Fruiting | Software is managed in such a way as to ensure that one person's changes do not break another's reuse; code has sufficient unit tests and release management processes to ensure that releases are of a reasonably consistent quality. | 7 - Software is highly reusable; the software can be reused by most users with minimum cost and risk. | 4 - Quantitative management
7 | Ripening | Roles within the project are clearly defined and key management tasks are handled by more than one community member; decision-making and conflict resolution processes are defined in a governance document and followed. | 8 - Demonstrated reusability; the software has been reused by multiple users. | -
8 | Dispersal | The project is able to maintain its own momentum independently of any one participant in the community; a governance model is adhered to and modified in response to emerging challenges; no single project participant has control over the project, and thus newcomers are able to gain influence. | 9 - Proven reusability; the software is being reused by many classes of users over a wide range of systems. | 5 - Continuous process improvement
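As a sketch, the mapping in Table 1 can be captured in a simple lookup structure, so that an evaluation script could report the title and mapped ratings for a given level. Levels that the table leaves without a CMM mapping are recorded as None.

```python
# SSMM level -> (title, Reuse Readiness Level, Capability Maturity Model level),
# transcribed from Table 1.  None marks levels with no CMM mapping in the table.
SSMM_LEVELS = {
    0: ("Seed",        "RRL 1", "CMM 0"),
    1: ("Germination", "RRL 2", "CMM 1"),
    2: ("Seedling",    "RRL 3", "CMM 2"),
    3: ("Juvenile",    "RRL 4", None),
    4: ("Flowering",   "RRL 5", "CMM 3"),
    5: ("Pollination", "RRL 6", None),
    6: ("Fruiting",    "RRL 7", "CMM 4"),
    7: ("Ripening",    "RRL 8", None),
    8: ("Dispersal",   "RRL 9", "CMM 5"),
}

def describe(level):
    """Return a one-line summary of an SSMM level and its mapped ratings."""
    title, rrl, cmm = SSMM_LEVELS[level]
    mapped = ", ".join(rating for rating in (rrl, cmm) if rating)
    return f"SSMM {level} ({title}): {mapped}"

print(describe(4))  # SSMM 4 (Flowering): RRL 5, CMM 3
```

Because the requirements to reach a level are minimums, a real evaluation would check each level's criteria in order and report the highest level whose requirements are all met.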


This proposal for a Software Sustainability Maturity Model demonstrates how a new openness rating [2] can be combined with existing models used to evaluate various aspects of software development processes and their outputs.

We argue that by evaluating a project in terms of three elements of sustainability - openness, reusability and capability - it is possible to highlight opportunities for improvement in open innovation, product design and process maturity. This in turn will allow project managers seeking to reuse or further develop software outputs to allocate resources in the most appropriate way for their project’s needs.

If you would like to apply the openness rating to your own project, please contact us.


  1. A small number of users that make minimal contributions can limit available resources. But this is not true of a small number of users that make substantial contributions. The same holds for a large number of users who each make a small contribution, a concept known as the Long Tail.

  2. The openness rating will be documented and published in the future, but at the time of writing you will need guidance in applying it. If you would like to apply the openness rating to your own project, please contact us.