Monday, November 30, 2009

Memories of IT - the 90's continue - the user from heck

Next up was a desired enhancement and guess what, it was about Agent Management. In the United States, the rules for becoming an insurance agent are set at the state level, so that means different rules in different states. Some of these rules had to do with an Insurance Company sponsoring Agents and the proper way of registering them; if you did it incorrectly, large fines would result, because the insurance boards took a dim view of people selling insurance who were not legally registered and sponsored to do so.

It’s a really complicated part of the business which (surprise, surprise) our vendor had not put in their system, but they sure would if we gave them requirements and paid for the development work. Here was another good thing management had done, and that was to assign the most knowledgeable business people to the project full-time to make sure the system did the right things. That is rare and to be applauded; unfortunately (!), management thought these people could write the requirements to send to the vendor, and that was a disaster. A document had been written and sent to the vendor, and they sent it back saying it was unusable. So, a walk-through by selected team members was scheduled, and as an Analyst who had done Requirements before, I got invited.

The meeting was intended to change the document to make it usable, but it was just pages and pages of rambling text, so after some period of time I had to pipe up and say we needed to produce a real Requirements document, and that I knew how to do it.

So, I am assigned an Agency SME and an Underwriter, the latter being the person who knew the rules, because underwriters checked these rules when a new agent submitted their first application.

The Agency guy was great; he was the one I had worked with before who liked my Data Models. However, the underwriter was the person who had written the original document, and was understandably not happy that her hard work had been rejected, and so she became my User From Hell. I scheduled a week in a meeting room to do the Requirements, but she fought me tooth and nail and it took four weeks. I did a functional decomposition, a data model and rules, and she didn’t want to know that this could work. It was painful, and like all things you would rather forget, the specifics and severity of the pain have faded, but I do know that when we were done, the result was reviewed successfully internally and the vendor loved it, and that underwriter switched around to become the biggest supporter of what we had done… I felt good!

Sunday, November 29, 2009

Memories of IT - Packages and Reverse Engineering

Ah, packages. They are better now, but this was still a period when it was pretty likely you would want to change the package. One thing management had done right (and you gotta acknowledge that when you see it) was that the vendor agreed to work with us to make changes if we gave them requirements. This was really one of those “win-win” deals, because almost all packages are created for the USA first and, if you are a Canadian company, the first thing you have to do is “Canadianize” the package; that can mean currency differences or adding French versions of screens and such. This vendor was indeed American, and we were their first Canadian customer, so they wanted to end up with a new version of their system to sell to other Canadian insurance companies.

They didn’t have such a version yet because the package was still pretty new. It had been developed in-house at a Chicago insurance company, and then spun out into a separate company. The original company was still their number one customer.

Was it a good package? I can’t really say for sure because (spoiler alert!) I wasn’t around to see it fully implemented. I do know the original version was not using industrial-strength technology. The database was some dBase/Xbase thing, the code was Basic or something, and the user interface was a green screen running under DOS; but the vendor wasn’t dumb. They piggy-backed the effort to upgrade the technology on top of our agreement: Oracle, Unix and other good stuff. This came about because the business liked the system as they saw it, but our IT people said we couldn’t support this, and our scale of business would probably swamp the system. So, the business people agreed to pay for the upgrade, and what vendor would not love that.

So, I have these dBase/Xbase table definitions, and there are hundreds of tables. The central tables emerged easily and I started subject areas, and the structure became clearer. What I found was that a lot of these tables were add-ons: if they had a Policy table, and then decided they needed more attributes for a Policy, they did not change the table, they created a new one with the same key. I can see how that would be easier to do, but it made the database into a dog’s breakfast. I raised this in one of the on-site meetings with the vendor, saying I was not impressed, and they were not impressed that I was not impressed.
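
To make that add-on pattern concrete, here is a small Python sketch with made-up table and column names (not the vendor's actual schema): to see one logical Policy you had to stitch together every table that shared the policy key.

```python
# Hypothetical illustration of the add-on-table pattern: one base POLICY table plus
# extension tables created later, all sharing the same policy_no key.
base = {       # POLICY: the original table
    "P123": {"policy_no": "P123", "status": "active", "issue_date": "1994-06-01"},
}
addon_1 = {    # POLICY2: added later for new attributes, same key
    "P123": {"smoker_flag": "N", "rate_class": "standard"},
}
addon_2 = {    # POLICY3: yet another add-on, same key again
    "P123": {"billing_mode": "monthly"},
}

def logical_policy(policy_no):
    """Merge the base row with every add-on row to get one logical Policy record."""
    record = dict(base.get(policy_no, {}))
    for addon in (addon_1, addon_2):
        record.update(addon.get(policy_no, {}))
    return record

print(logical_policy("P123"))
```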

So, I continued parsing my way through this mess, creating a pretty big data model. I was doing this while starting to do direct project work, but I eventually finished. The next time some vendor people came in, I showed it to them, as I thought it would assist in specifying good requirements and their impact on the database. Their response was muted at best, which left me puzzled. It was only some time later, when I was talking to a consultant who had been part of the package search, that I found out why I had got that response: right in our contract was a clause that we would not reverse-engineer any part of the system to understand and document it; that was verboten. Well, nobody had told me, and I don’t think anybody else who had joined after the contract was signed knew either. It was no secret I was doing it, and nobody ever said I should not do it. I don’t know if there were any repercussions from the vendor after they saw it, but I had the data model and I used it from then on.

Friday, November 27, 2009

Memories of IT - mid 90's - Activity Based Costing and Functional Decompositions

Other stuff was happening in the 90’s, things like Business Process Re-Engineering and Business Process Improvement. Crown Life got into a specific approach called Activity Based Costing (ABC), where you defined and broke down your overall process a few levels so that you could figure out what the different parts did and what they cost. Each business unit had done this, and when I saw the results, I said “great, these are Functional Decompositions.”

When I joined the Individual Insurance package project, I learned they had used their ABC decomposition to evaluate packages, and it was a good decomposition, functional not organizational. As I still had a licence for IEF, I put that decomp into IEF, as part of learning about the business before doing real work. What I found out, however, was that no data requirements had been defined, a big missing piece of the puzzle.

Turns out I had about a month after I joined the project before some real work started, so I said I would use the time to get up to speed. I said, do we have documentation on the package that I can read? Yes, we did, look in this LAN folder. In there I found table definitions for all the package's data, with foreign keys defined. I said to myself, I can reverse-engineer this stuff to build a data model to align with the func decomp, and I will have a great picture of the business, an actual Information Architecture.
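
As an illustration of that reverse-engineering step, here is a rough Python sketch with made-up table definitions (the real input was the package's table documentation, not a Python structure): each foreign key becomes a candidate relationship in the data model.

```python
# Hypothetical table definitions: table name -> columns and foreign keys.
tables = {
    "AGENT":    {"columns": ["agent_no", "name", "branch_no"],
                 "foreign_keys": {"branch_no": "BRANCH"}},
    "BRANCH":   {"columns": ["branch_no", "region"],
                 "foreign_keys": {}},
    "CONTRACT": {"columns": ["contract_no", "agent_no", "status"],
                 "foreign_keys": {"agent_no": "AGENT"}},
}

def derive_data_model(tables):
    """Turn table definitions into candidate entities and relationships."""
    entities = sorted(tables)
    relationships = []
    for table, definition in tables.items():
        for column, target in definition["foreign_keys"].items():
            relationships.append((table, target, column))  # child, parent, via column
    return entities, relationships

entities, relationships = derive_data_model(tables)
print("Entities:", entities)
for child, parent, column in relationships:
    print(f"{child} references {parent} via {column}")
```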

So, that’s what I started, and started learning about the package too.

Thursday, November 26, 2009

Memories of IT - mid-90's - Tossed out of the White Tower

A little background on Technical Development (TD): it had existed in Toronto, and was staffed up again in Regina. We all worked for a middle manager, a couple of different ones who came and went. When one of them did move on, that’s when the VP I have mentioned previously took over. She had managed a large application area, a re-org took away that area, and she came to manage us white-tower R&D folks. It seemed like overkill to me, but it turned out that her job was to get the current crop of R&D projects done. This was mainly getting a server-based environment implemented that could be used to take over from the mainframe, which was supposed to be cheaper and better because it would be GUIs on PCs and such.

This reminds me that when we moved to Regina, our mainframe processing was switched from Datacrown/Crowntec to a local IBM/ISM facility, so we were still paying for every mainframe cycle and byte of disk storage; servers were seen as the solution.

So, at some point, it was divined that the server environment was ready and we would never have to do R&D again, so TD was to be disbanded. It was like “the end of history…”. I also think it was because we were a senior bunch of people, higher-paid, and management decided it was time for us to work on actual projects and earn our pay.

So, we all dispersed across the company, and the VP got a new big app area to manage. I was concerned, because a lot of app areas were run by people who had declined to do IEM/IEF, and that’s what I was tagged with. However, I ended up joining the team doing the new Individual Insurance package, to do requirements for possible changes to the package.

So, back to the project trenches I went…

Wednesday, November 25, 2009

Memories of IT - mid 90's - The IAA from IBM

What IBM had was something called the Insurance Application Architecture, the IAA. They assigned an IAA consultant to work with us, so he came in one day to do an overview. I was really curious how they could have a model that any Insurance company could use, as I had done some models and they certainly looked specific to my company to me.

So, he starts his presentation and gets into some detail, and it dawned on me that this was an example of a concept I had recently been learning about: the IAA was a “meta-model”, a general model that can be used for modeling something specific. In fact, it was a meta-meta-model, maybe more “meta’s” than that. So, about 20 minutes into the presentation, it burst from my lips: “aahh, this is a meta-model…”. My VP and others in the room said “what?”, but the consultant said “Yes, you’re right.”

Apparently they had worked with about 20 client companies for many months in Brussels to come up with a data model and a functional decomposition. The decomposition looked reasonable and had business words, but the data model entities and attributes seemed to have no insurance words. That’s because they had seriously generalized the “things of interest”, such that a major part of the model would actually be used to define the business, with the rest used to actually capture data. I remember one subject area was Agreements, which would be used to define your insurance products, but could also be used for any legal agreement. This was also the first time I saw what we now know as a Party model, a meta-model for defining customers and other business participants.
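
To give a flavour of that generalization (this is not the IAA model itself, just a toy Python illustration): instead of separate Customer, Agent and Beneficiary entity types, one Party entity plays roles, and the roles themselves are data you can add without changing the schema or the code.

```python
# Toy Party/Role sketch; the role types are data, not tables.
role_types = {"CUSTOMER", "AGENT", "BENEFICIARY"}

parties = []        # each party is just a named participant
party_roles = []    # (party_name, role_type) pairs

def add_party(name):
    parties.append(name)

def assign_role(name, role_type):
    if role_type not in role_types:
        raise ValueError(f"unknown role type: {role_type}")
    party_roles.append((name, role_type))

add_party("Jane Smith")
assign_role("Jane Smith", "CUSTOMER")
assign_role("Jane Smith", "BENEFICIARY")   # same party, another role; no schema change
print(party_roles)
```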

Because the model was general, it came with syntax for defining what data would be needed to do things like define a Product. It was at this point that my brain rebelled. I am a pretty Conceptual/Logical thinking kind-of-guy, but this was just too much. However, my co-worker who did tech support for IEF grokked it completely, so he and the consultant would sit in a room and spin out this syntax, and nod at each other and agree on stuff, and I would be in the room nodding like I knew what the f&%k was going on. I was worried, what if I never “got it”?

Skipping ahead a bit, IAA started being used for that Agent System. I had done data modeling with the business people about this just before IAA had been bought. The consultant and our IEF guy started holding meetings with these business people, and I joined the first meeting a few hours in, and the whiteboards and flip-charts were filled with the IAA syntax. One of the business people, a great guy, turned to me and said “I like your models better”. I felt a whole lot better.

In the long run, I need not have worried, as management totally blew the implementation of IAA. It was a general model, and very big, so it was recommended you start with a basic subject area like Party, implement it, and start cutting your existing systems over to get Customer data from the new Party system. It did not work well for doing a specific line of business or function, because you would have to use almost the whole model right away.

However, after evaluation and purchase of IAA, our VP sat down with the IAA consultant and said “OK, start with Agent Management and Compensation”. The rest of us met with the consultant directly after this, and he swore us to secrecy before telling us that he had recommended, even pleaded with the VP, not to do this, but she was adamant. So, that was that, and those IAA modeling and syntax sessions began.

Now, doing this kind of modeling and analysis should probably have involved me, but I managed to avoid it somehow; it just seemed to default to the other two folks. There were other projects going on, as usual, so they kept me occupied as well. I know that the IAA project did get to our tech guy writing some Action Diagrams and database generation in IEF to support Party and Agreement, but it stalled. I was still keeping up-to-date on the progress, as I still wanted to be able to learn and do this if it was going to stick around. What became apparent to me was that doing analysis using a meta-model was next to impossible. If the model did not have business words like Customer or Policy, the business people would not get it and could never validate it. My recommendation was to do the analysis using logical models to capture the business requirement. It was true that systems built from such models would need to be changed whenever the business changed, and the power of the meta-model is that you make such changes by changing data, not code. …But people can’t think “meta”, so do the analysis logically, get it approved, then generalize it to the meta-model for use in subsequent development. My belief was that IBM should be able to build something that would take a logical model and generate the IAA syntax to use in the IAA model.

Unfortunately (again) for IAA and IEF, the overall desire to replace the whole CLASSIC system had reached fever pitch. One day, we heard from the VP of Individual Insurance that a package had been bought to replace CLASSIC and would also do Agent Management and such. So, that was the end of IAA.

Tuesday, November 24, 2009

Memories of IT - mid 90's - Life in a slightly white tower

So I am ensconced in Regina, and IEF usage has plateaued, I am still in the R&D Technical Development area, and I start taking on various projects and research.

One of the projects was to pick a new Project Management tool. Crown had been using some older products but, as in other cases, a lot of the new people had been using MS Project, so the result was pretty much pre-ordained. I also looked at back-end tools for doing a PMO, merging of projects and resource management, all from partner companies. We did not buy any of those, but they would have been really useful on some future projects.

Client-Server, as I mentioned earlier, was getting big, and the merits of 2-tier versus n-tier were already coming up. Two-tier was quick and easy, but it was never clear where the business logic was run, so three-to-n-tier introduced intermediate platforms, between the screen and the database, where the logic parts of the app ran. I ended up writing a short methodology for client-server development (wish I had that one too). The thrust was to do your analysis and design so that it was produced in logical parts that could be implemented on different styles of CS, mainly 2 versus n tier.
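
The gist of that layering, as best it can be sketched now (a reconstruction, not the original methodology document), was to keep the business logic in its own piece so the same logic could be called straight from the screen code in a 2-tier setup, or sit behind a server interface in n-tier. A minimal Python sketch:

```python
def fetch_policy(policy_no):       # data-access layer (would be SQL against the server database)
    return {"policy_no": policy_no, "premium": 120.0, "months_overdue": 2}

def premium_due(policy):           # business-logic layer: the part you want to be able to relocate
    late_charge = 5.0 * policy["months_overdue"]
    return policy["premium"] + late_charge

def show_amount_due(policy_no):    # presentation layer (the painted screen in a 2-tier tool)
    policy = fetch_policy(policy_no)
    print(f"Policy {policy_no}: amount due {premium_due(policy):.2f}")

show_amount_due("P123")
```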

Our group also did some internal consulting to project teams; mostly I would help do data models on projects. We were a group of about 8 people with different sets of expertise. Pretty much anything new in IT would be evaluated by us, sometimes with a project team doing a trial. I don’t think we ever repeated the evaluation disaster of CS tools.

And so the development cycle I have described before had come around to look at the Individual Insurance area of the company. It used a system called CLASSIC. CL stood for Crown Life; I don’t know what the rest meant. It was the first online app in the company, developed in the 1970’s in PL/I and IMS. It got bigger and bigger over time. I recall that running a system test took forever, and cost a lot of money to run at Crowntek, so the VP in charge said you only got to run it once, and then you implemented. I never worked on the system; this was what I heard from people who did. The actual subject at hand was Agent Management and Compensation, which was sort of an add-on to CLASSIC and did not work well; at least two previous projects had tried to fix this and failed.

One day, our Tech Dev VP came back from a meeting with IBM, and announced we were going to buy a model from IBM for Insurance Systems that would help us with this and other problems. I know that there was also a latent desire to replace CLASSIC itself, which drove this choice. This was actually a second chance for IEF, as the model could be delivered in IEF, so that got me involved.

What IBM had was something called the Insurance Application Architecture, the IAA. ...more next time.

Monday, November 23, 2009

Memories of IT - mid-90's - Lessons learned about IT Standards in a Company

So, I was now in Regina. Over the period of the IEF saga, I had moved from the Corporate Systems area to the Technical Development (R&D) area, which included methodology support (me!), hardware and software standards and new tech evaluations, and IT Training was in there too. This is where I watched some things happen and learned some useful lessons.

The first one was around those Client-Server tools I had mentioned before. There were a couple of leading tools that people wanted to look at and use, so TD was assigned the job of picking one…but some IT teams were clamoring to use a tool now to get something done (all those new managers wanting quick success). So, our manager and these managers decided that one team would try out one of the tools, and another team would try out the other tool, on real projects for 3 or 4 months… then they would all get together and decide which one had worked the best and that would be selected as our standard CS tool.

Can you see the problem here? Both teams learned to use the tool they had and built a system, and were happy. So when the decision time came, each team claimed that the tool they had used had to be better than the other one, because they had delivered something with it. An underlying driver was that if one tool was chosen over the other, then the team that had used the ‘losing’ tool was going to have to re-develop their system. Well, no one wants to do that, and any attempt to force a choice was denigrated as central TD not being flexible enough to meet the needs of each team.

Lessons learned:

1) Always have all reviewers of products being reviewed use ALL the products, so they can make a comparison; otherwise, they will prefer the tool they have used (if it is basically adequate for the task). There is a big example of this: back when WordPerfect was still a viable competitor to Word, one group of WordPerfect users was asked to review it versus Word, and a group of Word users was asked to review it versus WordPerfect. The (predictable) result? Each group preferred the tool they had been using. You could have shown them an unbiased review demonstrating that, at a point in time, one product was better than the other (until the next release of one of them came out); they would still prefer what they had used. That’s human nature.

2) IT Standards, that list of technologies and products that the company says it uses, is not useful for its specific content, but for measuring how much variance from the standard exists at a point in time… because there will be non-standard stuff in use whenever a standard is ‘set’, and powerful managers (the ones who make the money for the company) will get exceptions from the standard if that’s what they want. Once you accept that the standards will never be absolute, you can use them to advise people how much more money it will cost, or how much less support they will get, if they buy something that is non-standard. If that information is provided, you are helping those people understand the impact of their choice; they may still go non-standard, but with their eyes open, and they can’t bitch later about lower support and such.

Friday, November 20, 2009

Memories of IT - 90's - why did we stop using IEF?

Why did IEF development stop at Crown? It was the move; the majority of middle management and a lot of senior management did not move to Regina. As I have said, this gave the company a chance to downsize, so the number of staff after the move was 20 or 25% less than before the move…but that still meant a lot of people had to be hired. So, all the original stakeholders who had supported IEF were gone, and the new management had no stake in its continued use.

And IEF was really susceptible to this situation, because overall it was used in support of a strategic redevelopment of all our systems; the original business case pictured this happening over the course of seven years. That’s a strategy.

But if you are a new manager in a new job in a new company, you don’t want to fall in line with a seven-year strategy created by people who are no longer around; you want to deliver quick success, show that you are worth having around. That’s OK, and is acceptable in normal turn-over at a company; but with 60 to 70% of the managers all new at the same time, the IEM/IEF strategy (no matter how worthwhile) was dead.

I made a last stab at keeping it alive, writing a white-paper and doing presentations and such. I received great compliments on the white-paper (I wish I had kept a copy), but the response overall was “I can’t do that now”. I even presented how to use IEF on a more tactical level, just doing any project without an overall architecture. James Martin had figured out people wanted this and developed a one-project-only version of IEM called RAD (Rapid Application Development), and the promise of this version had helped sell IEM in the first place.

Unfortunately(!), this was the point in IT history when Client-Server development tools appeared, especially 2-tier tools that had you paint a screen, run it on a PC and it went directly against a database on a server. These tools did a whole lot less than IEF, but that also made them a whole lot cheaper, so when I would meet with a Project Manager about IEF, he or she would say SQLWindows was cheaper (and they had used it in their last job), so that’s what I am using, sorry.

And so ended the real saga of IEF at Crown. We kept our hands in it because of the Canada Pensions system, and I did get to go to some IEF user conferences. Such conferences are always in nice places, like Disneyworld or Vegas etc., so you take those perks when you can…but it was at one of those conferences, as described earlier, that TI announced it was selling IEF …and that was the real end.

Thursday, November 19, 2009

Memories of IT - 1991 - Regina Bound...

The company was in trouble, but who really knew that? I found over the years that I worked at Crown that my friends, family, and anyone I met had never heard of the company. It did not advertise to the public; it marketed through agents and brokers. It also meant that though it was apparently the 18th largest insurance company in North America, I never saw much about it in the business newspapers.

Then, one day in 1991, the announcement came: a company from Saskatchewan (holding company for the richest family out there) had bought up control of Crown Life, and was going to move the company to Regina. That was news; it even came up in the Ontario Parliament question period, the opposition blaming the government for losing business/jobs from the province.

If you want to downsize your company staff, I can think of no better way to do it than to pick up your company and move it 1000 miles, especially from the biggest city in the country to a relative hinterland. Current staff were offered the chance to move with the company, all expenses paid, or stay to a certain date and get a good-sized settlement. This was 1991, and Ontario was in a recession, so even if the settlement was good, opportunities for a new job were bleak. So, I decided to go to Regina. Looking ahead a bit, I can tell you that I and my family lived in Regina for 4 years. (A lot of people did not move, usually citing love of Toronto, wanting to stay near family, and many other good reasons.)

I always tell people (truthfully) that I do not regret moving to Regina, but neither do I regret leaving Regina after those 4 years. I had grown up in Toronto and lived and worked in the Toronto area ever since. So, in the posts I have written so far, the place where all this happened did not really affect what happened: it was a big city, I commuted, I lived in the suburbs, like many millions of people. Regina was different, in both life and work, and that difference will come out in some of my future posts.

But it was over a year before I actually moved; a group of ‘pioneers’ went first to get started, using temporary space while a new head office was built and such. In the meantime, that Canada Pensions IEF project was still underway, had delivered some of the first parts of the system (structured by Business Areas), but it would not be done before the business unit made its move to Regina, and no one on the project team was going to move (they all felt skill in IEF was marketable, and I think that was true for a while). The unit management persuaded the team to keep working in Toronto after the move till the system was done, and they delivered a good system.

By that time, I was in Regina, about the only person with IEF exposure who made the move. One thing that I worked on first was a program by Texas Instruments for sponsoring education on IEF at universities, and we got the program set up for the University of Regina. I actually went out and spoke about IEF, systems development and Crown Life to a senior class. I didn’t think I made any impression, but apparently some students learned IEF.

I know this because the Canada Pensions IEF project did finish, and the Toronto team members all went on their way. So, the business unit had to get people in Regina to support and enhance the system, and usually new people need time to learn about a system before they can be productive; but, one thing the unit did was hire some UofR grads who had learned IEF, and because they could read the Data Model and Action Diagrams, they were productive almost immediately. It proved that systems generated from commonly known modeling techniques were a whole lot easier to maintain and enhance.

Unfortunately (and I feel I have to say that a lot), that Canada Pensions system was the first and last IEF system built at Crown Life…

Wednesday, November 18, 2009

Memories - early 90's - How not to do ISPs, and other stuff...

The previous post mentioned one project I worked on, and it was probably one of several I may have been assigned to. If you work in a typical company, and you are not on a big development project, then you usually have more than one project on the go at any one time.

My focus was still around IEM/IEF. I would like to say it all went smoothly, but how likely is that... The company was divided up into about 12 business units at the time, basically a combination of product line and geography, like Canadian Life or U.S. Pensions. As a result, the IEM approach was to do an ISP for each unit, plus one for Corporate & Investments. I ran the ISP for Corporate as a trial of the process, and a James Martin consultant and I did manage to get the senior VP and his reports in a room for a day and do some models and prioritizations. That senior VP eventually became President, and he always remembered who I was whenever we met (it was not a huge company, so it was possible to see senior management around now and then).

Meanwhile, one business unit was chomping at the bit to go. It was Canada Pensions, and it was the part of the business whose time for a new system had come. I can't recall if they looked for packages first, or if they really did an ISP, but they were soon off doing their data model and function decomp, and got down to doing Action Diagrams. They had people go to IEF training, and had a few experienced consultants come on board.

Then the "while" I mentioned in my last post came to an end...

Tuesday, November 17, 2009

Memories of IT - meanwhile, back in the (insurance) business

Ok, the last several posts have proved to be an arc on the one topic of IEM and IEF, like several linked episodes of a TV show. From start to finish, the arc covers several years, from the start of the project to pick IEM/IEF in 1989, to me changing jobs in 1997. A lot of other things happened, plus I have more on how we used IEF in the first years after we bought it.

A lot of it is better understood in the context of the state of the company, which had been struggling. Crown Life was a typical Life and Health company, actually created by an Act of Parliament in 1901. Since then it had grown, entered and sometimes left foreign markets, and added investment/pension products. Like a lot of companies in the 30 to 40 years after World War 2, it made money pretty much independently of any specific things management did or changed over the years; the basic business model was still working. So, Crown Life was a pleasant place to work, often referred to as the "Crown Life Country Club". Each president had come up from the Actuary ranks, and there were about 15 levels of management possible between worker and president.

However, the business environment for insurance soured in the 80's. I am not going to recount why, but stuff happens, and profits sank. Crown Life was a stock company with a few primary owners, and they eventually (mid-80's) sacked the last of the old-style presidents and brought in a turn-around guy, Bob Bandeen by name. He had done the turn-around at CN, so his arrival was momentous. After the usual few months of looking around, he started squashing those 15 levels down to about five, so almost every day you would see some middle manager heading out the door with a box of stuff in their arms and a shocked look on their face. A noted financial writer of the time produced a book about the Canadian insurance industry, with a chapter for each of the main companies; the chapter on Crown Life was called The Abattoir(!).

Amazingly, IT/MIS suffered very little, so in a strange way I was in a protected bubble of business as usual; I had to read that book to find out how bad it had been.

But eventually, Bob finished squashing and moved on. Crown Life sort of merged with Extendicare and created an overall company called Crownx (no typo); it was going to use Crowntek as a basis for getting further in IT services as a new business, selling PCs to companies, and other not-well-thought-out-stuff that sucked money from Crown Life and Extendicare until it was abandoned.

That left Crown Life in precarious shape. I had a bit of insight, as one thing I worked on was cash flow reporting and investment management, which tried to predict how much actual cash was coming in from premiums and investments, and how much could be reinvested or kept for claims. What was apparent was that the flows were almost always mismatched and the company was always short of actual cash. My absolute favourite moment was when we sold the head office building on Bloor Street in Toronto to some real estate company and leased it back, to get a cash injection into the company. I still shake my head over that one when I think of it.

I moved on to other stuff (like IEM/IEF) so my direct knowledge of company problems was reduced, and I suppose this and other tricks kept things going for a while, but not a long while. I will return to this topic, and how that "while" ended, in a future post.

Monday, November 16, 2009

Memories of IT - the death of IEF

So, why isn't everyone using IEF today, and if you are of a certain age, why have you not even heard of IEF? CASE tools were a big thing for a while, which means many people liked them and many other people did not. The latter were usually put off by the rigor, they thought they were giving up flexibility. Programmers could be put off by the fear that it replaced what they did, whereas what it did was just move programming up to logical Action Diagrams, just like 3GLs had been a move up from assembler coding.

But two things happened that really killed IEF and CASE as a whole: IBM's AD-Cycle, and ERP systems like SAP.

AD-Cycle: As I said, CASE tools were a big thing in the years around 1990. IEF was only one of many tools you could buy, but the vast majority of the tools only did part of the job. As described in an earlier post, there were modeling tools that analysts would use but that went no further; these were called Upper-CASE. Other tools existed that would generate code from some kind of input; these were called Lower-CASE. The Upper and Lower referred to the parts of the lifecycle the tools covered when viewed as a waterfall that went from high (initiate, analyze) to low (design, code, test). After a while, vendors of one kind of tool would partner with the vendor of the other kind of tool, and both would trumpet that you could now do the whole life-cycle if you used their tools together.

Unfortunately, there were so many tools that you could not just pick any two you liked; if you picked one, then only so many other tools would work with it. I suppose somebody thought this was a huge problem or opportunity, because IBM (still the big player in the largely still-mainframe world of the time) decided they had the solution.

You see, each upper-CASE tool had some kind of repository or encyclopedia to store its models, especially if you created them on a PC, after which you would upload to one repository that all modelers would have access to. Those repositories, like the tools, were proprietary to the vendor. IBM decided it would create one common repository that all tools could use, so you could then use any combination of upper and lower you wanted. Add some services and its own tools, and the whole thing is presented to the world as AD-Cycle. Immediately a whole lot of the most popular tools signed on to the program.

Remember, IEF wasn't upper or lower, it was the whole deal, which was known as Integrated-CASE. Texas Instruments looked at AD-Cycle and said, we like our own repository just fine and we don't interface to any other tools, so you folks carry on and when you have something usable we will consider it. (I am trying to remember if IEW did sign on to AD-Cycle, I think it did but don't recall why.)

The problem was that the AD-Cycle repository was a disaster. Real customers who bought it got something that was huge, slow and not very functional. News got around and sales tanked but, even worse, companies who had not used any CASE tools yet avoided all CASE tools, not just the AD-Cycle repository. The whole tools segment was hit, and this hit home to me when I was attending my second IEF user conference, and the main TI guy for IEF walked up to the microphone and announced that TI had sold IEF to a relatively unknown software tool company. TI was a hardware company, and they just decided a failing software segment was not for them anymore. The new vendor changed the name but eventually was bought up, and up and up, until IEF disappeared into the maw of Computer Associates. I changed companies not too much later (for other reasons), so that was the last I saw of IEF.

But it did carry on, and I think some version of it may still be being used by its original customers, but that was it.

What helped to finally bury it was the parallel arrival of the big ERP systems like SAP. They were selling to management that you could buy SAP and not have to develop anything. So, if you stopped in-house development pretty much cold, why would you buy an admittedly pricey I-CASE tool that was just for development? Well, you wouldn't, and that was that.

Saturday, November 14, 2009

Memories of IT - IEF and Action Diagrams

So, you have a data model detailed with all entities, relationships and attributes, and a set of elementary processes that mainly CRUD all that data. The specifics of those CRUD functions would then be detailed using a rigorous logic called Action Diagrams (AD). You would define your input data, what the process would do with the data, and the output. The rigor of this logic was that all data used had to be in the data model, and the AD would use views of the data model to define input, output, and of course CRUDs of the data in the model. The AD also enforced the rules in the data model, such as the example in the previous post; if you did not specify it correctly, IEF gave you an error. The whole AD was supported in IEF as a selection of logic phrases from only what was actually valid at the point you were defining the logic. This ensured that when you had finished the AD logic for a process, IEF could generate code free of execution errors. You could still define the wrong logic for the process, but it would run.
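
Here is a Python-flavoured sketch of that idea (not IEF's actual Action Diagram syntax): a process works only through views of the data model, and anything it touches that is not in the model gets rejected before any code is generated.

```python
data_model = {
    "CUSTOMER": {"customer_no", "name", "status"},
}

def check_view(entity, attributes):
    """Refuse any view that references data not in the model (IEF did this as you built the AD)."""
    unknown = set(attributes) - data_model.get(entity, set())
    if unknown:
        raise ValueError(f"not in data model: {entity} {sorted(unknown)}")

database = {"CUSTOMER": {}}

def add_customer(import_view):     # an elementary process: CREATE Customer
    check_view("CUSTOMER", import_view)                    # import view must match the model
    database["CUSTOMER"][import_view["customer_no"]] = dict(import_view)
    return {"customer_no": import_view["customer_no"]}     # export view

add_customer({"customer_no": "C1", "name": "Jane", "status": "active"})   # accepted
# add_customer({"customer_no": "C2", "shoe_size": 9})   # would raise, like an IEF error
```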

Last point: IEM/IEF defined a Process as a logical activity. When you wanted to use that process, you would create a Procedure, either an online procedure or a batch procedure, and these would use processes as needed. The key thing was that a process, defined once, would be used as many times as needed within all the procedures, like an online transaction for Add Customer, or a batch program that would get a file of data and use Add Customer as many times as was needed.
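
A quick sketch of that Process/Procedure split (illustrative names, nothing generated by IEF): the process is defined once, and both an online procedure and a batch procedure call it.

```python
customers = {}

def add_customer(customer_no, name):                 # the Process: defined once
    customers[customer_no] = {"name": name}

def online_add_customer_screen(customer_no, name):   # online Procedure: one transaction
    add_customer(customer_no, name)

def batch_load_customers(records):                   # batch Procedure: same process, many times
    for customer_no, name in records:
        add_customer(customer_no, name)

online_add_customer_screen("C1", "Jane Smith")
batch_load_customers([("C2", "Raj Patel"), ("C3", "Ana Lopez")])
print(len(customers), "customers loaded")
```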

So, you have Procedures that use Processes that use the Data Model, all in IEF. The tool has assisted you in specifying all these things to avoid execution errors, and it also had other validation capabilities that would ensure that all the pieces you have work together. Once all the pieces had been validated, you selected Code Gen and IEF created all the code for your application: DDL for creating the database, code for execution of process and procedure, and online transactions or job control language for the procedures. Send all that to the various compilers for the languages you have generated, and out comes your executable. No hand-coding needed; you could literally never look at the generated code, or even throw it away.

IEF first generated COBOL, DB2 and CICS for mainframe systems. As client-server emerged in the early 90’s, they added generation of C, Oracle and Unix to run on PCs and servers.

Next time: So why aren't we all using IEF?

Wednesday, November 11, 2009

Memories of IT - early 90's - IEM and Business Area Analysis

So, you have enough of a business model in IEF to divide the enterprise into cohesive Business Areas (BAs), which now need to be detailed enough to be a complete requirement for systems. The planning mentioned previously will indicate which Business Area to do first. Other than the limits imposed by natural build sequence (data created in one BA needs to be built before other BAs can use it), you could do the work on some BAs in parallel if they are not directly dependent. In IEM, this step was called Business Area Analysis (BAA). This was done mainly by parallel decomposition of the high-level data and function models.

Most people would not think of a Data Model as hierarchical, as it usually happens in reverse. You start putting Entity Types on a page, connecting them up with Relationship lines, and soon you have a lot of boxes and the page is filling up. Design studies tell us that the optimum number of objects to draw on a page is seven plus or minus two, or the human brain doesn't comprehend it well. Fewer than 5 is usually not a problem, if not that useful, but more than 9 is.

What you will see is some entity types have many others hanging off them, often called central entity types, like Customer, Product, Employee. Each of these is the central entity type of a "Subject Area", usually named as the plural of the central entity, so Subject Areas are Customers, Products, Employees. Group all your entity types this way and you have a 2 level hierarchy of data. IEF supported this grouping into Subject Areas, and then further groupings of Subject Areas into a higher Subject Area, so a multilevel hierarchy results.
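
A small sketch of the Subject Area idea (made-up entity names): each central entity type anchors a subject area, and the entity types hanging off it are grouped under it, giving a simple hierarchy over what started as a flat page of boxes and lines.

```python
subject_areas = {
    "Customers": {"central": "CUSTOMER",
                  "entities": ["CUSTOMER", "CUSTOMER_ADDRESS", "CUSTOMER_PREFERENCE"]},
    "Products":  {"central": "PRODUCT",
                  "entities": ["PRODUCT", "PRODUCT_FEATURE", "RATE_TABLE"]},
    "Employees": {"central": "EMPLOYEE",
                  "entities": ["EMPLOYEE", "POSITION", "SALARY_HISTORY"]},
}

for area, contents in subject_areas.items():
    print(f"{area} (central: {contents['central']}): {', '.join(contents['entities'])}")
```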

Meanwhile, you have a level or 2 of functions, more formally known as a Functional Decomposition.

Decomposing an enterprise is analysis in its purest form; understanding a thing by examining its pieces and how they relate to each other. You look at a thing/function and determine what are the seven plus or minus two sub-things that comprise the thing.

IEM defined a functional decomposition as composed of two types of "things", Functions and Processes. A Function is an activity that is continuous, with no obvious start and end, like Marketing. A Process, then, is an activity with a defined start and end, like Create New Marketing Program. So, the decomposition will start with some levels of functions, then each path of decomposition will reach a point where the next level down is a group of Processes, and then the remaining levels of that path will be processes.

Functional decomposition usually gets criticized or can get misused. The most common misuse is that people think that functional decomp is the same as the Org Chart; it is not. The best way to realize this is to think about how many reorganizations you have been through (probably lots), and how many times this actually changed the work you did (almost none).

If determining the decomposition is difficult, some advanced IEM methods recommended parallel decomposition, meaning in parallel with the data model, so the function Marketing is parallel with the subject area Markets. Given this match, you decompose both models together. When you get to Processes, they will be verb-noun, where the noun is an entity or attribute in the data model.

All this decomposition is done to get to Elementary Processes, which answers the question "how do I know when to stop decomposing?". Each process will define how data in the data model is managed. A good process is one that manages data and leaves the data model in a valid state. So, if a process creates an occurrence of an entity and it has a mandatory relationship to another entity, then the process has to create that one too, otherwise the state of the data is invalid. A process that creates only the first entity is sub-elementary, and you have decomposed too far.
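
A toy check of that "valid state" test, with hypothetical entities: say Policy has a mandatory relationship to Coverage, so a process that creates a Policy without at least one Coverage leaves the data invalid, and is therefore sub-elementary.

```python
policies = {}
coverages = []   # (policy_no, coverage_type) pairs

def create_policy_only(policy_no):                            # sub-elementary: leaves the model invalid
    policies[policy_no] = {"status": "new"}

def create_policy_with_coverage(policy_no, coverage_type):    # elementary: valid end state
    policies[policy_no] = {"status": "new"}
    coverages.append((policy_no, coverage_type))

def model_is_valid():
    """Every policy must participate in its mandatory relationship to Coverage."""
    covered = {policy_no for policy_no, _ in coverages}
    return all(policy_no in covered for policy_no in policies)

create_policy_with_coverage("P1", "term-life")
print(model_is_valid())    # True
create_policy_only("P2")
print(model_is_valid())    # False: P2 violates the mandatory relationship
```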

IEF supported this functional decomposition, enforced some rules like Functions can be composed of Functions or Processes, but Processes only decompose into other Processes; and it had you indicate what you believed to be the Elementary Process. The interesting thing is that all of this decomposition is done to get to those Elementary Processes; once they are all defined, you don't need the decomp any more.

(Note; this definition of process is not the same as that for Business Process Modeling or Re-Engineering.)

Next Time...Action Diagrams

Monday, November 02, 2009

Memories of IT - into the 90's - What was IEF, anyway?

The decade turns...

What was IEF, anyway?


It was automated Information Engineering. That methodology was based on information across a whole enterprise, so its first step was to create the Information Strategy Plan (ISP) for a complete enterprise. The core task was to create a high-level model of the enterprise's functions and data (remember, function + data = information). This was indeed high-level, where the data defined was Customers, Products, Materials, Staff, and such. The functions were the first to second levels of a functional decomposition, usually based on the main activities of the business: Define and Market Product, Acquire Material, Make Product, Sell Product, and supporting functions like Hire Staff and Create Financial Statements.

The functions were always defined as doing something with data. Given this perspective, you could create the CRUD matrix, Data items on one axis and Functions on the other, with each matrix cell containing the letter for Create, Read, Update, Delete… or blank. Given this matrix, you can now do Affinity Analysis, which is a process of identifying what groups of functions manage a Data Item. I did this manually back in an earlier project.

But IEF captured the Data Model and the Function Model, and the CRUD matrix; then you initiated an automated affinity analysis process, and out came your restructured matrix. The result is a set of clusters of functions managing a set of data, which are de-coupled from each other. Each cluster was then used as the definition of a Business Area; a typical enterprise would have 5 to 9 Business Areas defined.
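
One simple way to picture that affinity step (a toy version, not TI's actual algorithm): score each pair of functions by how much of the data they touch is shared, then group functions whose overlap crosses a threshold. The little CRUD matrix below is made up, standing in for the real enterprise-sized one.

```python
crud = {   # function -> the entities it creates/reads/updates/deletes
    "Define Product": {"PRODUCT", "RATE_TABLE"},
    "Price Product":  {"PRODUCT", "RATE_TABLE"},
    "Sell Policy":    {"POLICY", "CUSTOMER", "PRODUCT"},
    "Service Policy": {"POLICY", "CUSTOMER"},
    "Hire Staff":     {"EMPLOYEE"},
}

def affinity(a, b):
    """Share of touched entities that two functions have in common."""
    return len(crud[a] & crud[b]) / len(crud[a] | crud[b])

def cluster(threshold=0.4):
    """Greedily group functions whose affinity with an existing group member is high enough."""
    clusters = []
    for fn in crud:
        for group in clusters:
            if any(affinity(fn, member) >= threshold for member in group):
                group.append(fn)
                break
        else:
            clusters.append([fn])
    return clusters

for i, group in enumerate(cluster(), 1):
    print(f"Business Area {i}: {group}")
```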

This is your Information Architecture. The IE Methodology (IEM) then provided a series of evaluation and analysis tasks to determine how well current systems support each Business Area, what the value of automating a Business Area would be, and such… from which you would create a plan, the Information Strategy Plan, for moving from current systems to a new set of Business Area-focused systems that would eliminate silos, data duplication, etc.

So, now you were ready for Business Area Analysis...
