So, I was now in Regina. Over the period of the IEF saga, I had moved from the Corporate Systems area to the Technical Development (R&D) area, which included methodology support (me!), hardware and software standards, new tech evaluations, and IT Training was in there too. This is where I watched some things happen and learned some useful lessons.
The first one was around those Client-Server tools I had mentioned before. There were a couple of leading tools that people wanted to look at and use, so TD was assigned the job of picking one… but some IT teams were clamoring to use a tool right away to get something done (all those new managers wanting quick success). So, our manager and these managers decided that one team would try out one of the tools, and another team would try out the other, on real projects for 3 or 4 months… then they would all get together, decide which one had worked the best, and that tool would be selected as our standard CS tool.
Can you see the problem here? Both teams learned to use the tool they had, built a system with it, and were happy. So when decision time came, each team claimed that the tool they had used had to be better than the other one, because they had delivered something with it. An underlying driver was that if one tool was chosen over the other, the team that had used the ‘losing’ tool was going to have to re-develop their system. Well, no one wants to do that, and any attempt to force a choice was denigrated as central TD not being flexible enough to meet the needs of each team.
1) Always have all reviewers of products being evaluated use ALL the products, so they can make a comparison; otherwise, they will prefer the tool they have already used (as long as it is basically adequate for the task). There is a big example of this: back when WordPerfect was still a viable competitor to Word, a group of WordPerfect users was asked to review it versus Word, and a group of Word users was asked to review it versus WordPerfect. The (predictable) result? Each group preferred the tool they had been using. You could have shown them an unbiased review demonstrating that, at a point in time, one product was better than the other (until the next release of one of them came out); they would still prefer what they had used. That’s human nature.
2) IT Standards, that list of technologies and products the company says it uses, are not useful for their specific content, but for measuring how much variance from the standard exists at a point in time… because there will be non-standard stuff in use whenever a standard is ‘set’, and powerful managers (the ones who make the money for the company) will get exceptions from the standard if that’s what they want. Once you accept that the standards will never be absolute, you can use them to advise people how much more money it will cost, or how much less support they will get, if they buy something that is non-standard. If that information is provided, you are helping those people understand the impact of their choice; they may still go non-standard, but with their eyes open, and they can’t bitch later about lower support and such.