bitwise technical
editor Dermot
Hogan has managed and developed global risk
management systems for several major international
banks and financial institutions. He says he is
a joy to work for (and who
are we to argue?).
In this month's Bytegeist, Dermot ponders
the connection between software engineering
and quantum mechanics
Quantum Software
“There were two ways to do physics.
One was to use mathematics; the other was to ask Feynman”
(from Richard P Feynman’s obituary, 1988) – truly
an obituary to die for. It’s been repeated many
times since. It might also be applied to writing software.
As far as I know, Feynman, though a truly great Nobel
Prize-winning quantum physicist, didn’t write
a single line of object-oriented Java in his life.
What he did do was investigate the first space shuttle
disaster – Challenger, in 1986. And this has some
bearing on software engineering today.
But first, let’s look at the problem. Research
seems to indicate that around two out of three software
projects substantially fail in some way – they
are way over budget, don’t work or simply get cancelled.
That figure squares with my own experience – in
over 20 years hacking away at IT, only two of the many
projects I’ve been involved in worked as intended.
Most of the rest went wrong in one way or another. The
most common reason, from what I recollect, was that the
users or the requirements changed – the users left,
got fired or the world around the users changed.
Another common reason was that the project was under-priced.
If the users had been told the true price, then the project
would never have been started. Now this practice of “under-bidding” is
pretty endemic in IT. The responsibility doesn’t
lie just with the sales force; it also lies with the
customer. Customers are supposed to go for the lowest-cost
option, but typically a customer has no idea
what the true cost is. They happily swallow the low-cost
bid and rarely look closely at the track record of the
company. In any case, IT vendors usually manage to keep
the skeletons firmly locked in the cupboard – except for
the truly exceptional UK Child Support Agency system,
where the skeletons are out and happily
dancing over taxpayers and families alike.
"It took people a few thousand
years to learn how build large bridges that don’t
fall down, for example. It could be that we just
don’t know how to build complex IT projects
yet."
It seems to me that this is the start of the problem – unrealistic
expectations from users and senior IT management.
I well remember the groans from the project teams I’ve
been on when we were told (yet again) that we had to
deliver the impossible in an improbable amount of time.
What then followed was classic Fred Brooks, straight
from his software engineering manual, The
Mythical Man-Month. The project gets later – one day at a time,
of course – and more and more people are assigned
to it (to “help out”), which only makes it later.
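Brooks’s explanation is simple arithmetic: every person added
to a late project adds training and communication overhead.
As a quick back-of-the-envelope sketch (my own illustration, assuming
nothing beyond Brooks’s n(n-1)/2 intercommunication formula):

    # A team of n people has n*(n-1)/2 pairwise communication channels
    def communication_paths(n: int) -> int:
        return n * (n - 1) // 2

    for team_size in (3, 5, 10, 20):
        print(team_size, "people:", communication_paths(team_size), "channels")
    # 3 people: 3 channels
    # 5 people: 10 channels
    # 10 people: 45 channels
    # 20 people: 190 channels

Double the team from 10 to 20 and the channels quadruple,
which is why “helping out” so often does the opposite.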
I recall reading The Mythical Man-Month for
the first time with a growing sense of recognition
at his descriptions: “Conceptual Integrity”, the “Second
System Effect”, “The Pilot System”.
I’d seen them all – or the lack of them – in
the systems I’d been involved in or designed. The
most important of these is still (to my mind) Conceptual
Integrity. If the overall idea of the project looks like
a dog’s breakfast then it has no chance of working
properly. To prove my point, just consider any large
government IT project (and here you can pick a government
of your choice; it seems to be a global problem). I can’t
think of one where considering the “conceptual
integrity” of the system doesn’t induce derision.
So what’s going wrong then? And what’s
the solution? Naturally, I don’t have all the answers – or
possibly even any of them. Some of the problems result
from using parasites known as “Management Consultants”.
Some are down to the simple fact that IT is very young.
It took people a few thousand years to learn how to build
large bridges that don’t fall down, for example.
It could be that we just don’t know how to build
complex IT projects yet.
But let’s return to Feynman. He was asked to
investigate the Challenger disaster. You can read the
results in “Mr Feynman Goes
to Washington” from
the excellent book “What Do
You Care What Other People Think?”, which recounts some of Feynman’s
exploits. One of the more interesting parts of his penetrating
analysis of the NASA culture of the time was the section
on the Challenger software. It may surprise you to know
that he thought it was excellent. “One group would
design the software components, in pieces. After that,
the parts would be put together into huge programs, and
tested by an independent group”. Then after both
groups were happy, they would have a simulation test: “… they
had a principle: this simulation is not just an exercise …;
it’s a real flight – if anything fails now
it’s extremely serious, as if the astronauts were
really on board and in trouble”.
I can’t recollect ever meeting such an attitude
in all my time writing and designing software. More typical
was “if it compiles, ship it”. OK, we aren’t
all designing life-critical systems. But possibly, just
possibly, if IT professionals and consumers took a more
serious attitude to achieving quality, the IT systems
we deliver might be a little better. And a starting point
might be to read The Mythical Man-Month before every
project.