
Wednesday, February 15, 2006

 

Just Because It's Possible, Doesn't Mean It's Good

Marc Hedlund posted a piece on Web 2.0 development techniques a few days back, offering the blogosphere a look at some of the new techniques Web 2.0 developers are using that aren't found in traditional software development. Some are quite ingenious and good. Others, however, are frightening.

Marc wrote about software revisions as follows:

"Gone are the days of 1.0, 1.1, and 1.3.17b6. They have been replaced by the '20060210-1808:32 push'. For nearly all of these companies, a version number above 1.0 just isn't meaningful any more. If you are making revisions to your site and pushing them live, then doing it again a half hour later, what does a version number really mean? At several companies I've met, the developers were unsure how they would recreate the state of the application as it was a week ago -- and they were unsure why that even matters."


I'll echo a comment made directly to that post: the developers in question have apparently never heard of change management controls and versioning. There are plenty of revision-control tools out there that work very well, and here's a really good reason why they're necessary: if somebody's half-hour revision blows up and causes an outage or severe loss of functionality for any reason, backing the change out and returning live production to the last-known-good revision is the first order of business in getting the system back on-line.
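Purely as an illustration of my own (nothing in Marc's post says how these shops actually deploy), here's a minimal sketch of the kind of safety net they're missing: keep each push in its own timestamped release directory and point the live site at it through a symlink, so backing out a bad push is a one-step swap back to the last-known-good release. All directory names and functions here are hypothetical.

    import os
    import shutil
    import time

    RELEASES_DIR = "releases"   # each push gets its own timestamped directory
    CURRENT_LINK = "current"    # the live site is whatever this symlink points to

    def deploy(build_dir):
        """Copy a new build into a timestamped release and point 'current' at it."""
        release = os.path.join(RELEASES_DIR, time.strftime("%Y%m%d-%H%M%S"))
        shutil.copytree(build_dir, release)
        _repoint(release)
        return release

    def rollback():
        """Point 'current' back at the most recent release before the live one."""
        live = os.path.basename(os.path.realpath(CURRENT_LINK))
        earlier = sorted(r for r in os.listdir(RELEASES_DIR) if r < live)
        if not earlier:
            raise RuntimeError("no earlier release to roll back to")
        _repoint(os.path.join(RELEASES_DIR, earlier[-1]))

    def _repoint(release):
        """Swap the 'current' symlink so the site never serves a half-switched state."""
        tmp = CURRENT_LINK + ".tmp"
        if os.path.lexists(tmp):
            os.remove(tmp)
        os.symlink(os.path.abspath(release), tmp)
        os.rename(tmp, CURRENT_LINK)   # atomic replacement on POSIX filesystems

The point isn't this particular scheme; it's that a "20060210-1808:32 push" is only harmless if you can get back to the 1757 push in seconds.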

I also wonder how these folks regression-test their systems before pushing them live; even with an automated test suite, I doubt that any system of substance can be revised and completely retested in half an hour.

That's assuming that these folks know what regression testing is in the first place, and why it matters.
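For the record, and again purely as my own illustration rather than anything from Marc's post, a regression test is nothing exotic: it's an automated check that pins down a bug you've already fixed so the next half-hour push can't quietly reintroduce it. A toy example using Python's unittest module, with made-up names:

    import unittest

    def monthly_rate(annual_rate_percent):
        """Convert an annual percentage rate to a monthly decimal rate."""
        return annual_rate_percent / 100.0 / 12.0

    class MonthlyRateRegression(unittest.TestCase):
        """Pins down a previously reported bug so it can't silently come back."""

        def test_zero_rate_stays_zero(self):
            self.assertEqual(monthly_rate(0), 0.0)

        def test_reported_bug_missing_percent_conversion(self):
            # An earlier revision dropped the /100, returning 0.5 instead of 0.005.
            self.assertAlmostEqual(monthly_rate(6.0), 0.005)

    if __name__ == "__main__":
        unittest.main()

Run the whole suite before every push; if a customer-reported bug doesn't get a test like this when it's fixed, you're counting on the same customer to find it for you twice.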

Moving on to QA and testing:

"Developers -- and users -- do the quality assurance: More and more startups seem to be explicitly opting out of formalized quality assurance (QA) practices and departments. Rather than developers getting a bundle of features to a completed and integrated point, and handing them off to another group professionally adept at breaking those features, each developer is assigned to maintain their own features and respond to bug reports from users or other developers or employees. More than half of the companies I'm thinking of were perfectly fine with nearly all of the bug reports coming from customers. "If a customer doesn't see a problem, who am I to say the problem needs to be fixed?" one developer asked me. I responded, what if you see a problem that will lead to data corruption down the line? "Sure," he said, "but that doesn't happen. Either we get the report from a customer that data was lost, and we go get it off of a backup, or we don't worry about it." Some of these companies very likely are avoiding QA as a budget restraint measure -- they may turn to formal QA as they get larger. Others, though, are assertively and philosophically opposed. If the developer has a safety net of QA, one manager said, they'll be less cautious. Tell them that net is gone, he said, and you'll focus their energies on doing the right thing from the start. Others have opted away from QA and towards very aggressive and automated unit testing -- a sort of extreme-squared programming. But for all of them, the reports from customers matter more than anything an employee would ever find."


Wow, what a bad, bad move. I hope they don't tell the CFO what kinds of risks they're taking by betting the company like that. Lose a customer's data and restore it from a backup? Fine. But what happens when the data was, inadvertently or otherwise, leaked to someone who isn't supposed to have it? Forget about some taggy Google Maps mashup...what about financial, credit, SSN, and medical records data? Forget about QA at that point, after the government and the trial lawyers pick over what's left of the carcasses of companies that 'test' this way.

Closer to the point, a guy named Boris Beizer wrote a software testing and QA book back in 1984 that, while out of print, has stood the test of time as a seminal, definitive work on software QA and is still applicable today, regardless of the technologies employed. In it, he stated one primary tenet of software testing: developers are poor resources to use in testing efforts, particularly on their own code, because they carry an inherent bias towards their work that cannot be overcome, and so they cannot be relied upon to find substantial bugs in that code. This wasn't a slam on developers; Beizer simply and correctly pointed out the natural biases we all have about our own work, and software development isn't any different from other professions in that regard. And the Web as a computing platform doesn't change this tenet one iota.

In fact, the Web as a massive computing platform is no different from any other information system that came before it, with the same requisite needs for adequate and substantial testing, revision and change management, and proper risk-mitigation techniques in production code and systems. Yes, the processes and techniques can certainly be optimized for speed and feature deployment, but thoroughness and the mitigation of outage, security, and data risks are still necessary no matter what new tools and techniques (or, as it appears, the lack of them) are engaged.

Finally, it appears that everything old is new again...we used to call 'eternal betas' prototypes back in the day....:)


