[personal profile] elglin
I gave some thought to the concept (a trivial introduction is here), and came to the conclusion that it might after all be a religious debate - if not on par with vi vs. Emacs, then at least in the same area code.
Bottom line, IT happens. Unless you're on par with Donald Knuth or something, you will, sooner or later, run into a case where a bug of some kind has somehow escaped code review, automated and manual testing, and runs in lower environments, and has finally crept into production. I will look at two edges of the psychological spectrum here; in a futile attempt to escape labeling, I will call them "debug" and "last known good".
A "debug" person sees this as a technical challenge. After all, this bug has avoided all the traps set for it, so it's quite a worthwhile opponent. If the person had a major part in setting those traps, the challenge becomes almost personal. Also, if the codebase is of high quality, such things are few and far between, and are a welcome distraction from the tedium of, you know, the release process. In that case the bug is also likely to be nontrivial to find but relatively easy to fix, or at least work around - so for a fairly minor investment of extra effort, you don't let the 99+% of good code sit on the shelf for an extra release cycle because of that 1% of bad code. This is a coherent, internally consistent mindset. Once again, in a futile attempt to avoid labeling, I will refrain from any judgement.
A "last known good" person sees the same situation differently. The proof of the pudding is in the eating, after all. Why are we so sure the Eiffel Tower could be built? Because, quite obviously, it was. So we have the previous version of the code, which we know can work - because it has, probably for quite a while. We also have this new version, which we know may not work - because, currently, it doesn't. If this person has no personal attachment to the newly deployed version, and is for whatever reason concerned with the stability of the code, no matter what version it is - well, we revert, and better luck next time. Ironically, if the codebase is good enough that this doesn't happen often, the point is valid - people will be paying extra attention next time, so what are the chances of a pretty good codebase failing twice in a row? The effort spent developing the new version isn't thrown away, just shelved for a while - and meanwhile the system stays stable, and whoever is responsible for fixing the problem has all the time in the world to do it, without management and business peeking over their shoulder and asking distracting questions. This is also an internally consistent mindset.
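The "revert, and better luck next time" move can be made concrete. Here is a minimal sketch assuming a git-based history - the post names no tooling, and the repository, commit messages, and feature.txt file are all invented for illustration:

```shell
set -e
# Throwaway repo standing in for the production codebase.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "v1: last known good"

# The new version that turned out to be broken in production.
echo "new feature" > feature.txt
git add feature.txt
git -c user.email=ci@example.com -c user.name=ci \
    commit -q -m "v2: broken in prod"

# Revert the bad commit. Note that v2 stays in history:
# the work is shelved, not thrown away.
git -c user.email=ci@example.com -c user.name=ci \
    revert --no-edit HEAD

test ! -f feature.txt && echo "back on last known good"
```

The key property of `git revert` here is that it adds a new commit undoing the change, rather than rewriting history - so the "debug" person can later pick the shelved version back up and fix it at leisure.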

A very good quote (which I certainly fail to reproduce verbatim) says: "Fight, flee, or play dead - all are evolutionarily validated strategies, and any of them can be optimal in a given situation." It's the same here - depending on the details, which is where the devil is, leaning either way may be more effective.
However, my personal experience (and I'm really not a great judge of character) is that a person, while certainly not immovable on this scale, tends to shift along it rather slowly. Which means that, at a fixed place and time, people working on the same issue but standing far apart on this scale may find it surprisingly hard to understand each other. It ultimately comes down to "values", or, better put, cognitive presuppositions. It is a rude awakening to realize that someone else does not think something very important to you is worth much - and, of course, vice versa. But that problem, I guess, runs much wider than IT.