We do not need richer software process models
A group of researchers is looking to issue a Manifesto for Rich Software Process Models. Here’s my position on this topic.
“… perfection is achieved not when there is nothing left to add, but when there is nothing left to take away.” Antoine de St. Exupéry, Terre des Hommes, 1939, chap. 3
Over the last 30 years we have tried rich process models very hard, and we have not been very successful at it. Maybe we should try lean and mean software process models rather than making them “richer.” At a minimum, we should analyze why the rich approaches have not worked, and where they failed. Could it be that we were trying to solve the wrong problem? Or that the real problems by far overshadow the process model issue? Or that the whole construction paradigm we use for software development is no longer suitable? My position is that we should try the route of very simple software process models, to ensure wider applicability, greater versatility, and broader acceptance. Possibly these new process models would be based on paradigms of software or system development other than the “technical-rational” construction idea. I would be wary of richer process models.
The Great Process March (1987-2001)
Many of us (old enough) had a great “aha!” moment when Lee Osterweil told us in 1987 that “Software processes are software too”. We elaborated on this with process languages and models. We tried to implement them in various Integrated Programming Support Environments (IPSEs) such as the wonderful and clever Rational Environment® (1987-1998). We embraced object-oriented modeling to build processes like Objectory [3], then the Rational Unified Process® (RUP®) and many other such processes or process frameworks. We extended this to quasi-standards like the Software Process Engineering Metamodel (SPEM 2.0), and to real standards, such as ISO 24744-2007 Software Engineering — Metamodel for Development Methodologies (SEMDM), as well as the tools to go with them. Some of this great work branched out into business processes, with languages and tools to support them. And I am sure I am missing a great body of academic work here, having skipped 6 or 7 years of ICSP(P) installments. Other approaches were brought in, looked at, tried out: software processes as state machines or Petri nets, software processes as collections of interacting agents, software processes as complex adaptive systems, and so on.
But what have we really achieved? (by 2010)
In most cases, the process models are built around transformations (input-process-output), and therefore focus excessively on artifacts, and on decomposing these artifacts into elementary constituent elements. Such processes are in practice very hard to configure. They are also rapidly misused. I have seen a good half of RUP® implementations fail, caving in under the weight of the artifacts that the process (so they say) “forced” developers to create, manage, and update. See this tongue-in-cheek paper.
In a great reaction 10 years ago, the Agile manifesto started to hack at the base of the tree trunk by claiming that we should favor “individuals and interactions over processes and tools”.
Should we blame the process models? Are they not rich enough? Detailed enough? Flexible enough? They are. Maybe we are simply not addressing the right problem. Our fundamental metaphor: “this is a process, it looks like a program, the machine is a group of humans”, is the wrong one. Machines are very deterministic; humans are not. Programs process well-defined input; all software projects are different, and even the same software project would not be done the same way twice.
Have we been modelling the right thing?
Maybe even the construction (and architecture?) metaphor is flawed—at least when we push it too far. It does not account well for the creative, trial-and-error approach, nor for the interactions and collaborations between individuals.
Consider the “nice party” metaphor: you organize a party this Saturday at your home. You invite a group of people, order the food, move the furniture, and you record carefully who comes, when, who eats what, who drinks what, etc., and everyone leaves having had a good time. The next Saturday, to have another great time, you decide to invite the very same people, serve the very same food and drinks; you ask them to dress the same way and arrive at the same time, you introduce them to the same people, bring them the same glasses with the same drinks, make the same jokes… Are you guaranteed to have a nice party again? Hadn’t you captured the exact right recipe? (Source unknown)
Are software development projects replicable enough to benefit from repeatable, prescriptive software processes, supported or enforced by tools? The many adopters of lean and agile approaches seem to have voted (“no!”) with their feet, and moved away. The spectrum of software development circumstances and contexts is very vast (see my posts on contextualization).
In search of new paradigms to represent software development
We need very simple models of software development. Models that are simple enough to have very wide (quasi-universal) applicability. Models that also encompass the large number of variability factors: not only technical variability (size, technologies, platforms, tools), but also social, cultural, linguistic, legal, and commercial factors. Models that move away from the “process as a program, the team as the machine” paradigm. Simple and robust models that are not based on the construction metaphor, the gradual, progressive transformation of various artifacts. Models that can incorporate creativity, knowledge management, social interactions, trust, shared mental models.
Probably these simple models can gradually, in steps and in layers, or just in some narrow pockets, be made richer in detail and expressiveness; but we should not start with the complex, the detailed, and the super-expressive before we have identified the right paradigm, and the handful of foundational principles that come with it. Revisit Richard Gabriel’s “worse is better”.
We need to start from what people do (observation) and how they conceptualize what they do (mental models, shared or not), leading to a descriptive approach that offers choices and possibilities, and not from what we think a priori they should be doing, the prescriptive, technical-rational approach. Jane Jacobs said the same thing about great American cities.
In this direction, years ago (1996), Joel Jeffrey attempted to look at software development from a different perspective, folding people and collaboration into his paper “Addressing the essential difficulties of software engineering”. More recently, Paul Ralph at UBC proposed a significantly different paradigm for software development with his Sensemaking-Coevolution-Implementation Framework. I tried myself to develop a conceptual model of software development (the frog and the octopus), around:
- 4 concepts: Intent, Product, Work and People
- 5 attributes: Time, Uncertainty, Quality, Value and Cost
- and a handful of variability factors: Size, Business model, Rate of change, Geographic distribution, etc. See also Scott Ambler’s Agile Scaling Model.
The tools that successful software projects use today are also very simple and lightweight: Scrum or kanban boards, and tools that let people keep track of activities and bring the developer the information associated with a certain (type of) activity.
Richer process models? Danger!
I would be very concerned that “rich” will rapidly come to mean heavy, detailed, complicated; that “rich” will mean taking the models we have and loading them with additional concepts, more entities, more details, more relationships between them. I have the same worry about the SEMAT initiative. Can we have rich (in expressiveness and usefulness) and lean at the same time?
I look forward to the face-to-face debate in May! I have to confess that I am shooting from the hip here, not having a clue what the proponents of a “Rich process model” have in mind. Maybe we’ll find ourselves in furious agreement. In the meantime, I welcome your comments (here on WordPress); they will help sharpen my arguments.
Note: the nice party metaphor is not mine, but I cannot remember where I heard it… if someone knows the origin, please let me know.
Maybe there are two forces at work here: 1) people like ‘simple’ because it is easier to understand and apply, and 2) people want to help through guidance, so they start to elaborate on ‘simple’, and over time ‘simple’ becomes ‘rich’.
Dr. Winston W. Royce’s paper “Managing the development of large software systems” is referenced by some as the ‘original’ source for what people generally know as the ‘waterfall method’ (probably due to his figure 2 and his call for extensive documentation). But reading the paper, it seems to me that Royce’s proposal is relatively simple, (somewhat) iterative, and neither heavy nor complicated. Maybe (my guess) the ‘waterfall method’ has simply evolved (through guidance added to be helpful) and mutated (through misunderstandings) into the rigid, heavy, slow process it is perceived to be today.
But guidance is (often) better than no guidance, so perhaps it is more a question of being able to choose (wisely) the guidance you need, rather than attempting to follow *all* of the guidance on offer from the so-called ‘rich methods’. And the opposite question: do ‘agile methods’ provide enough guidance for all projects, or only for some, and if so, for what kind of projects? Will today’s agile (or light, or simple) methods be tomorrow’s ‘rich waterfall’ methods?
I hope this was helpful…. 🙂
John
Dr. Royce’s paper: http://portal.acm.org/citation.cfm?id=41801
Btw, if you find that the link for the “Addressing the essential difficulties of software engineering” paper isn’t working then try this one: http://dx.doi.org/10.1016/0164-1212(94)00067-0
“The challenge is not to try and achieve certainty but to learn to manage uncertainty.”
Chris Blake [Blake07, p.19]
[Blake07] Blake, Chris (2007), “The Art of Decisions: How to Manage in an Uncertain World”, Financial Times Prentice Hall, UK.