If you ever find yourself designing something a certain way because you think it would be better that way, then you’re probably performing art and not design. Art is about self-expression. Design is selfless.
(Jeff Harris, Matter)
Some time ago I wrote in this blog about the importance of measuring the quality of a design from the perspective of maintainability and testability. Quality can mean different things to different people, and there is no single “good” interpretation. Most of the time, for me, the term assumes an objective, quantitative, measurable connotation. Jeff Harris reinforces this idea by attributing a selfless character to any good design. Harris is not a computer scientist (he is, in fact, a graphic designer), but I think he is talking about the same concept, even if it materializes in a different way. Of course there is some “art” in the process of software design, as there is in graphic design, but what I want to stress here is the need for objective metrics that help us manage the complexity of a software system in an effective but selfless way.
In this post I simply show how such metrics can be collected and displayed unobtrusively with a little automation. To demonstrate the concept, I exploited the extensibility of the Sparx Enterprise Architect CASE tool, building an add-on that reads the UML model and calculates the metrics for the current diagram. Each class is annotated with a tag corresponding to the particular metric calculated (in particular, I used the Component Dependency (CD) metric to evaluate the encumbrance of a class). A note is then created for the overall diagram with the metrics for the entire graph of objects depicted in it: starting from the CD values, the automation calculates the Cumulative Component Dependency (CCD) and its normalized variations, the Average Component Dependency (ACD) and the Normalized CCD (NCCD). An example of the result is illustrated below.
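To make the metrics concrete, here is a minimal sketch, in plain Python, of how these Lakos-style values can be computed from a class dependency graph. The class names and the graph below are purely illustrative, not taken from the actual add-on or its model.

```python
def component_dependency(graph):
    """CD(c) = size of the transitive closure of c, including c itself,
    i.e. the number of classes needed to link/test c in isolation."""
    def closure(node, seen):
        if node in seen:
            return seen
        seen.add(node)
        for dep in graph.get(node, ()):
            closure(dep, seen)
        return seen
    return {node: len(closure(node, set())) for node in graph}

# Illustrative class-dependency graph: each key depends on the listed values.
graph = {
    "App":        ["Service", "Logger"],
    "Service":    ["Repository", "Logger"],
    "Repository": [],
    "Logger":     [],
}

cd  = component_dependency(graph)  # CD per class, e.g. CD(App) = 4
ccd = sum(cd.values())             # Cumulative Component Dependency = 9
acd = ccd / len(graph)             # Average Component Dependency = 2.25
# NCCD divides CCD by the CCD of a balanced binary tree of the same size,
# giving a size-independent indication of how coupled the design is.
```

Here CD("App") is 4 because App transitively depends on Service, Repository, and Logger, plus itself; summing the CD values gives the CCD for the whole diagram.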
This little automation can provide several benefits for the software modeler:
- it enables an early evaluation of maintainability and testability, from the very first stages of the modeling activity;
- every evaluation is performed on demand either for the whole model or for a single model slice (typically for one or more specific diagrams);
- the metrics can be hidden at any time at the diagram level, using the show/hide tags feature of the modeling tool;
- the evaluation can be done at the class level or at the package level, allowing an incremental and hierarchical analysis of a system;
- the analysis can be performed even before a single line of code has been written, enabling a validation of the design, independently of its completeness and the level of detail used in the diagrams.
These metrics are not new, but in my experience they are not as widely used as one would expect, especially considering the predictive value they provide. Experienced architects know very well the importance of good dependency management in a large project. Nevertheless, few tools compute these metrics from the code (Sonargraph is one), and, to the best of my knowledge, almost no CASE tool introduces the calculation at the UML level. I believe this little automation is worth the time spent creating it, especially for the unobtrusive way it is introduced. I would be very glad to hear your experiences and thoughts about gathering indicators of testability and maintainability directly from UML models. Feel free to comment.
A final note: below is a snapshot of a diagram illustrating how the add-on works at the package level. The picture was taken during the development of the plug-in itself, so it represents a good example of early evaluation. The snapshot illustrates the high-level structure of the UMLaidToolkit project: an effort to build add-ons providing a broad range of extensions for Enterprise Architect.