No IT system (or machine or device) is 100% error free. There are always bugs. But these were covered up and the users carried the can. What galls me in these sorts of cases is that the business drags the sufferer to the court steps and then (sometimes) pays them off without admitting liability.
I don't think anyone expects perfection. But it's the deny-deny-deny attitude that hurts.
Bang on. As a former IT professional, I'd say it's pretty much a given that a system is not going to work the way it is supposed to, especially under the old "cascade" (waterfall) style models for specification and development.
Much of this is down to "technical" factors, but human factors play a huge part. In too many development environments the dev team ends up "marking its own homework", which never provides much incentive to dig really deep and try to smash the system. The result is that often-trivial mistakes get made which can have profound consequences, either in the errors they produce or in the effort required to fix them, particularly if they have sneaked through to production.
What seems to have happened here is that Fujitsu did identify bugs, but the relationship between them and their client (POL) became, either deliberately or by accident, a collusive one, whereby POL controlled the activity around reporting errors. That's 100% a human problem, not a technical one. In a nutshell, POL chose to lie (mostly by concealment, but also, from the sound of it, actively) about the problems with the system.
As I was easing out of IT, there was the beginning of a move towards things like test-driven development, where before you even start writing code you write a battery of tests designed to validate the way the system is supposed to work, and then each iteration of the development cycle is submitted to that test regime and the resulting outputs compared with the anticipated results (there's a tiny sketch of the idea below). Even this isn't 100% bulletproof, and it helps a lot if those designing the tests are not the same people writing the code. Nowadays those test environments can be automated, so the development cycle can incorporate a "test early, test often" approach, which can be carried forward to production to provide an ongoing regime for validating installed systems, as well as any late changes, updates, revisions, or further development. It doesn't sound as if this happened - indeed, from the description of the Fujitsu test regime, it seemed to be heavily reliant on error reports from the end-users of the system (you know, the ones who ended up going to prison).
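To make that concrete, here's a minimal, entirely made-up sketch of the write-the-tests-first idea in Python, using the standard-library unittest module. The reconcile() function and its little ledger example are hypothetical and have nothing to do with Horizon itself; the point is simply that the tests exist (and initially fail) before the code they exercise is written, and ideally they'd be written by someone other than the developer.

```python
import unittest
from decimal import Decimal


def reconcile(opening_balance, transactions):
    """Closing balance = opening balance plus the sum of all transaction
    amounts. Decimal is used so the arithmetic is exact, not float-rounded."""
    return opening_balance + sum(transactions, Decimal("0"))


class ReconcileTests(unittest.TestCase):
    # In TDD these tests are written (and fail) before reconcile() exists.

    def test_no_transactions_leaves_balance_unchanged(self):
        self.assertEqual(reconcile(Decimal("100.00"), []), Decimal("100.00"))

    def test_mixed_credits_and_debits(self):
        txns = [Decimal("25.50"), Decimal("-10.00"), Decimal("-0.50")]
        self.assertEqual(reconcile(Decimal("100.00"), txns), Decimal("115.00"))

    def test_decimal_arithmetic_is_exact(self):
        # 0.1 + 0.2 is the classic float trap; with Decimal it stays exact.
        txns = [Decimal("0.1"), Decimal("0.2")]
        self.assertEqual(reconcile(Decimal("0"), txns), Decimal("0.3"))


if __name__ == "__main__":
    unittest.main()
```

Hook something like that into an automated pipeline and you get the "test early, test often" regime: every change re-runs the same battery of tests before it goes anywhere near production, rather than waiting for end-users to report that the numbers don't add up.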
So, to cut a long story short, there was something rotten about the whole philosophy of this system's conception and design. It was doomed to fail, as so many large IT projects are - the difference with this one is the brazenness of the client (POL) in simply refusing to acknowledge the possibility of system failure. Personally, I think that refusal verges on the criminal; at the very least it was reckless.
Whatever penalty those responsible end up paying (and I'd bet quite heavily that those most responsible will get away with it scot-free), the way systems like these are developed needs to be radically reviewed - almost, I think, to the point where any system whose failure could even remotely conceivably cause this kind of harm to its users should have its testing and development carried out under some kind of legally binding regime, with appropriate oversight.
And outfits like POL shouldn't ever be allowed to bring private prosecutions on the basis of evidence yielded by such systems. You can't legislate for the fact that people and organisations will be reluctant to self-report, but you can at least make sure that a lack of suitable testing and disclosure renders any evidence such a system produces inadmissible in a prosecution.