People spend a lot of time debating software quality but, very often, they speak in abstract terms. They don’t measure anything. Even worse, they make early design decisions in the name of non-functional requirements (especially concerning performance). Without measurements to support decisions, our intuition can be misleading most of the time. UML models let us visualize large-scale structures better than the mere code, but they are not always sufficient to fully describe the overall complexity of the system, nor do they suffice to compare design alternatives. Consider, for example, the design illustrated in Figure 1 (real names are replaced by letters; in any case, what matters here is the model structure, not the specific domain discussed).
The model illustrates two aggregates, G and M, a hierarchy H, and some infrastructure classes around the aggregates (repositories, factories, and proxies). Looking at this design, the designer experimented a little in order to improve it. Perhaps the hierarchy merges two different dimensions, an abstraction and an implementation (Hj and Hjk, where j stands for the abstraction concern and k for the implementation concern, respectively). Thus, one possible refactoring strategy is to separate these dimensions by applying the Bridge pattern (the original hierarchy rooted at H is divided into two structures, rooted at H1 and H2, respectively). Another improvement could be to merge the classes X and Y in order to provide a single, convenient access point to the hierarchy for the aggregates M and G (another Proxy interface?). Finally, after some small modifications to the dependencies between the infrastructure classes and their aggregates, we obtain the design shown in Figure 2.
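The Bridge refactoring described above can be sketched as follows. This is a minimal illustration, not the actual model: the class names H1, H1x, H2, H2a, and H2b are hypothetical stand-ins for the two hierarchies, and the string-returning operations exist only to make the sketch runnable.

```python
from abc import ABC, abstractmethod

# Implementation hierarchy (rooted at H2): the "k" concern.
class H2(ABC):
    @abstractmethod
    def operation_impl(self) -> str: ...

class H2a(H2):
    def operation_impl(self) -> str:
        return "impl-a"

class H2b(H2):
    def operation_impl(self) -> str:
        return "impl-b"

# Abstraction hierarchy (rooted at H1): the "j" concern.
# The bridge: it holds a reference to an implementation
# instead of inheriting from it.
class H1:
    def __init__(self, impl: H2):
        self._impl = impl

    def operation(self) -> str:
        return f"base({self._impl.operation_impl()})"

class H1x(H1):
    def operation(self) -> str:
        return f"refined({self._impl.operation_impl()})"
```

After the split, either hierarchy can grow independently: `H1x(H2b()).operation()` combines a refined abstraction with implementation b without adding a new class for every (j, k) pair.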
In this refactoring, the designer applied two well-known design patterns. This sounds good, but did he really improve the design? Looking at the structures of the two models, it is not evident which one is better. In general, there is no single answer. We need to understand which non-functional requirement is more important, in order to choose a suitable metric to measure it. Considering maintainability and testability, better means “simpler” in terms of complexity and coupling. We can calculate the CCD/NCCD metrics for both design alternatives in order to evaluate the dependency structure of the two models. Despite the fact that the second model applies some design patterns, the overall maintainability is not improved. The crude numbers are shown in Table 1:
The second design shows an increase in complexity of 9.334% according to the NCCD metric, despite a reduction of the system size (-13.793%). This example illustrates several points:
- The simple application of design patterns does not always guarantee an improvement in quality. A pattern represents a well-documented solution to a recurring problem, but it comes with costs and consequences. If we choose a pattern that does not represent a good trade-off with respect to the specific non-functional requirement we want to optimize, the result will not be optimal.
- If we change the non-functional requirement to prioritize, the numbers may tell a completely different story. In our example, if we take into account scalability, the alternative model of Figure 2 is probably better, at least considering the maintenance of the evolving hierarchy structure. After all, this is exactly the reason why the Bridge pattern was introduced.
- Supported by a mathematical model such as the one underlying the CCD/NCCD metrics, we can measure one dimension of software quality objectively and systematically.
- Without an objective evaluation, making strategic decisions (e.g. optimizations, design speculations, architectural choices, and so on) solely on the basis of the designer’s intuition can produce unexpected (and unpleasant) results.
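To make the metric concrete, here is a small sketch of how CCD and NCCD can be computed from a component dependency graph, following Lakos's definitions: CCD sums, for every component, the number of components reachable through its dependencies (itself included), and NCCD normalizes that by the CCD of a balanced binary dependency tree of the same size, (n + 1) · log₂(n + 1) − n. The graph representation (a dict of dependency sets) and the example components are assumptions for illustration, not taken from the models in the figures.

```python
import math

def ccd(deps: dict[str, set[str]]) -> int:
    """Cumulative Component Dependency: for each component, count the
    components reachable through its depends-on edges (itself included),
    then sum over all components."""
    def reachable(node: str, seen: set[str]) -> set[str]:
        seen.add(node)
        for dep in deps.get(node, ()):
            if dep not in seen:
                reachable(dep, seen)
        return seen
    return sum(len(reachable(c, set())) for c in deps)

def nccd(deps: dict[str, set[str]]) -> float:
    """Normalized CCD: the CCD divided by the CCD of a balanced binary
    dependency tree with the same number of components. Values well
    above 1 indicate a more heavily coupled structure than the tree."""
    n = len(deps)
    return ccd(deps) / ((n + 1) * math.log2(n + 1) - n)

# A linear dependency chain a -> b -> c -> d:
chain = {"a": {"b"}, "b": {"c"}, "c": {"d"}, "d": set()}
print(ccd(chain))  # 4 + 3 + 2 + 1 = 10
```

Running the same functions over the dependency graphs of the two models is what produces numbers like those in Table 1, turning the “which design is simpler?” debate into a comparison of two values.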