It has been pointed out to me by one of my offspring that he and his brothers have not been mentioned in these posts. He found them because they are being replicated over on Facebook as Notes, and he thinks by reading them he might understand what I have been doing for 30 years that put a roof over our heads and food on the table.
Unless requested, I will leave names out of this. I did marry my lovely wife in 1979, and sons arrived in 1981, 1986, and 1989. My wife claims she remembers nothing about the 80's except diapers and formula (I did my part too, so sleep-deprivation was equally shared).
It might be useful for these posts to give some geographical context, because it does start changing later on. I grew up in what was then the suburbs of Toronto, a place called Etobicoke (the "ke" is silent). I went to the University of Toronto (which I think I mentioned). My employer through the 80's was located in the "middle" of Toronto, at Yonge and Bloor. When I started, I could take the bus and subway to get there; then we moved further out, so a car to the subway was needed; then we moved much farther out in order to afford a house in the time of 15% mortgage rates, so that meant driving/carpooling, or the commuter (GO) train and subway. The drive took an hour when we first bought our house; ten years later it was up to two hours.
Overall, the sons came along in a fairly stable time in terms of where we lived and where I worked. The company had a Children's Christmas Party each year, right in the head office, so children could be taken up the elevator to see daddy's desk. Its state of organization must have made an impression, because I started getting presents like mouse pads with the Tasmanian Devil on them, whirling around leaving destruction in his path.
But things would change and become quite interesting, as you will see in future posts...
Friday, October 30, 2009
Wednesday, October 28, 2009
Memories of IT - 1990 - IEW vs. IEF
1990, and the promise of CASE was huge...
We have two products, IEW and IEF, to choose between.
Memory and perception can be funny things, so when it comes to IEW (Information Engineering Workbench), any corrections from the reading public are especially welcome.
First off, I recall the vendor company's name was Knowledgeware. Its president or CEO was one Fran Tarkenton, indeed the famous NFL quarterback. I never did figure out what he really was to the company: was he a closet geek who really was involved in the product? Was this where he invested his NFL salary? Was he a figurehead? Comments welcome!
The other angle was that Knowledgeware was supposed to be very closely related to James Martin, but in what way I can't be sure. The implication was that if you were really doing Information Engineering, then Knowledgeware and IEW just had to be your choice.
So it surprised me no end that when I saw the product demonstrated, its functional modeling was based squarely on Data Flow Diagrams (DFDs). It might seem like an esoteric issue now, but if you had followed the methodology advancements of the 1980's, you would have seen that DFDs had featured strongly in Structured Analysis and Design, but they had fallen into disfavor with the rise of Information or Data-centric approaches using Data Modeling. In these approaches, DFDs, with data flowing around and many Files, were thought to lead to bad data design, silos and all that. And IEM (the methodology) did not use DFDs for functional modeling; it used a straight Functional Decomposition, but you could probably have used DFDs without breaking any methodology rules.
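For anyone who never worked with these diagrams, here is a rough sketch in present-day Python of the two styles of functional model; the names and structures are my own shorthand, invented for illustration, and neither tool stored anything like this:

# Functional decomposition (the IEM style): a pure hierarchy of functions,
# with no data flows shown at this level.
decomposition = {
    "Administer Policies": {
        "Issue Policy": {},
        "Endorse Policy": {},
        "Renew Policy": {},
    }
}

# Data Flow Diagram (the Structured Analysis style): processes connected by
# named data flows, reading and writing files/data stores.
dfd = {
    "processes": ["Issue Policy", "Endorse Policy"],
    "data_stores": ["Policy File", "Client File"],
    "flows": [
        ("Client File", "Issue Policy", "client details"),
        ("Issue Policy", "Policy File", "new policy record"),
    ],
}

# The data-centric criticism: every process tends to acquire its own "File",
# so files multiply and the overall data design suffers.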
On the other hand, there is IEF (Information Engineering Facility) from the software division of Texas Instruments. TI is really an engineering, hardware company, but they backed into selling a CASE tool because they had bought into Information Engineering for their own information systems and wanted a tool to support it, so they built one. IEF was automated IEM, for sure, but with a focus on the parts of the methodology that led to producing code; any diagrams that didn't lead to code were not automated. One of these was, in fact, DFDs, which IEM did use in a limited way for documenting current systems, but no more, so TI kept them out of IEF.
In the end, it came down to code generation; both tools generated code, but IEF's was the more complete and straightforward, while IEW's was missing some parts and impressed our technical people less. So, IEF emerged the winner.
Next time: What was IEF, anyway?
Tuesday, October 27, 2009
Memories of IT - 1989 - Methodologies and CASE tools
1989...
As per my previous post, we have a couple of methodologies to evaluate. PRIDE was all about Information Resource Management; IEM was, well, about Information Engineering. If I were to line them up against each other today, I am not sure there would be much difference between the two, except we knew that IEM had two popular CASE tools supporting it, so PRIDE never really stood a chance.
So, IEM won. This was James Martin's baby, through his latest organization, James Martin & Associates. At the time, they had a Canadian office that we worked with, so I don't know how many degrees of separation there were between myself and James, but it wasn't close. He was doing tours at that point, charging large sums; when he did come through Toronto, only VPs of my company got to go. I recall he was already moving on to new topics, like Enterprise Engineering and Value Flows...
Meanwhile, back on the project, we have IEM, so now we look at supporting CASE tools... but let's talk about CASE first: Computer Assisted Software/System Engineering. There were actually a few different angles to it. It had started with the model/diagramming tools I have mentioned before. Because they supported tasks in the first few phases of the SDLC, they were tagged as Upper-Case, meaning the diagrams were good but it stopped there. At some point, other vendors created code generator products which, because coding happens later in the SDLC, were tagged as Lower-Case; then vendors of both types of tools would hook up, so that Upper-Case diagrams could be used (somehow) as input to the Lower-Case tools to tell them what code to generate.
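To make that Upper-Case/Lower-Case hand-off concrete, here is a toy sketch in Python; the model format and the generated output are invented for this post, and the real tools worked nothing so simply:

# A pretend export from an Upper-Case diagramming tool...
upper_case_model = {
    "entity": "POLICY",
    "attributes": [
        ("POLICY_NO", "CHAR(10)"),
        ("EFFECTIVE_DATE", "DATE"),
        ("PREMIUM", "DECIMAL(9,2)"),
    ],
}

# ...fed to a pretend Lower-Case generator that emits a table definition.
def lower_case_generate(model):
    columns = ",\n  ".join(f"{name} {dtype}" for name, dtype in model["attributes"])
    return f"CREATE TABLE {model['entity']} (\n  {columns}\n);"

print(lower_case_generate(upper_case_model))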
I never saw a Lower-Case tool up-close, so I never knew how they worked independently, or how interfaces with Upper-Case tools really worked. I never did have to know that, because we were looking at the third angle: Integrated CASE tools (I-CASE, long before i-pods or other such stuff). This was a product that did the whole SDLC, from first diagrams to final code gen and testing, and there were two main players: the Information Engineering Workbench (IEW), and the Information Engineering Facility (IEF).
Next time, comparing IEW and IEF...
Wednesday, October 21, 2009
Memories of IT (late 80's) - new Methodologies... and tools?
It's always amazing how much you don't know, or even worse, what you don't know you don't know. A new analyst joined us for the new methodology project, and we were discussing various tools for modeling and analysis, and he informed me, to my initial disbelief, that there were tools out there that could generate complete systems from Data and Function models. I thought I was pretty good at keeping up with trends, but this had escaped me, so it was time to catch up.
This happened within the context of our new development methodology project, which also included CASE tools that might support such new methodologies. The approach was pretty good: find the methodology that best met our needs, and then pick a tool that best supported that methodology. I was the lead analyst, charged with gathering requirements that would be used for RFPs and detailed evaluation. Key IT people from each unit participated in requirements sessions. I know we produced a good, long list, but the details have faded from memory. This group was not working in "controlled isolation", so I am sure that what any or all of us knew about existing products, and what we expected from the tools ahead, influenced the results. I know I was already looking for candidate products, and reading up on all of them.
What emerged from the requirements list was a desire for a methodology that helped us deliver low-maintenance systems, and wouldn't it be nice if a tool automated that methodology to speed up the process a little. Of about a dozen methodologies I found (pre-Web, so the big magazines like Computerworld were a key source), there were only a few that matched up in any real way. One was PRIDE, which is still out there, and the other was the Information Engineering Methodology (IEM).
Next time, looking at the two methodologies...
Tuesday, October 13, 2009
Memories of IT - circa 1988 - The Maintenance Dilemma
My company's management, IT and Business, were now grappling with the 'maintenance problem', the generally agreed dictum that 75% or more of a company's IT 'development' budget was actually spent on fixing and enhancing its existing systems, leaving little for delivering the new systems everyone wants to support new business initiatives.
The standard reaction was (and is) usually to find some way to get more new development out of the available resources, resulting in the adoption and eventual abandonment of many tarnished 'silver bullets'. A less common but no more successful approach was to find ways to maintain those existing systems with fewer resources: code analyzers, reverse-engineering in models, and such.
The third and least used (and least understood) approach was to recognize that systems had to be built from the start to require less maintenance effort, so that the 75-25 resource split could be moved towards 50-50 or better. This requires a long-term strategic view of your information systems inventory, one that recognizes that over 7 to 10 years many of your current systems will be replaced anyway, so why not do so following a strategic plan; otherwise, you will end up in the same state in 10 years, just with a few new systems.
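A back-of-the-envelope projection shows why the long view matters; every number below is invented, assuming a fixed budget of 100, maintenance starting at 75, about half of each year's new-development spend going to replacing old systems, and each replacement needing half the maintenance of the system it retires:

budget, maintenance = 100.0, 75.0
for year in range(1, 11):
    new_dev = budget - maintenance                 # whatever is left funds new systems
    replaced_share = (new_dev * 0.5) / budget      # rough share of the portfolio replaced this year
    maintenance -= maintenance * replaced_share * 0.5
    print(f"year {year:2d}: maintenance {maintenance:5.1f} / new development {budget - maintenance:5.1f}")

Under those made-up assumptions the split crosses 50-50 around year five and keeps improving; without a deliberate replacement plan, maintenance just stays at 75.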
One thing that anyone reading this will agree on is that thinking out 7 to 10 years is difficult for the average company, even for its core business of what it sells or services; taking such a long term view of its supporting information systems is really difficult. The allure of the quick fix can be hard to resist. So, in retrospect, the fact that the average insurance company I worked for would even consider a strategic approach to its information systems still stands out as an amazing development that would take my own career down a new path... to Information Engineering.
Thursday, October 08, 2009
Memories of IT - when I started to learn about Methodology
So, these posts are still in the 80's, but a lot was going on. By coincidence, both my company and I were thinking more about methodologies and the system development life cycle. Looking back, it's hard to explain that we weren't really thinking in these terms before; work just got done, in what was a simpler time, I suppose. Of course, the idea of using a methodology was not new in the mid-80's, but it wasn't accepted everywhere either.
The only real methodology concept I was exposed to early in my career was the Scheduled Maintenance Release. Working on an existing system, requests for change would come in at any time. I suppose at one point before my time such changes might have been dealt with as they arrived, but it had become apparent that this was not the best use of resources. It became clear that "opening up a system" for changes carried a certain level of cost irrespective of what the change was, including implementing changes into production.
So, change requests were evaluated as they came in (production bugs were fixed as they happened); if a change could wait, it went on the change list. At a future point, either on a regular basis or when resources were available, all the current changes were considered for a maintenance release project.
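The triage rule itself was simple enough that it can be sketched in a few lines of Python; the names and fields here are my own shorthand, not anything we actually had:

from dataclasses import dataclass

@dataclass
class ChangeRequest:
    description: str
    is_production_bug: bool = False

change_list = []   # accumulates until the next scheduled maintenance release

def triage(request):
    if request.is_production_bug:
        return "fix immediately"                     # bugs never waited
    change_list.append(request)                      # everything else waits
    return "queued for the next maintenance release"

triage(ChangeRequest("Month-end report totals are wrong", is_production_bug=True))
triage(ChangeRequest("Add a new field to the renewal letter"))
# When resources became available, the accumulated change_list became the
# scope of a maintenance release project.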
As I worked in a department where the systems were relatively small, and a project team might be one person, I don't recall doing much estimating or cost-benefit analysis for these projects during that era. Releases might be organized around major functions, like all changes for month-end reporting. Once a scope and a set of changes were agreed to with my main business contact, I just went ahead and did the work. I remember that I would figure out what code changes were needed, do them, and test them. For the small systems I worked on, I don't recall there being separate unit, integration or User Acceptance Testing.
If you have read my earlier posts, you will know that I did work on a large in-house development project as a programmer, but I can't say if the project was following a methodology. I came on the project during construction, and all I remember was that the PM/BA did write and give out what would be considered Specs today. I think she also did the integration testing of the system as we delivered unit-tested bits.
But there was indeed some Methodology work going on in the company... One day some of us were scheduled to attend training on the company's new System Development Methodology (SDM). Apparently one or two people in IS Training had been developing an SDM (we still had that in-house bias). So off we went; to the creators' credit, I recall that what we saw was pretty good. This is probably when I first heard the word "phases", and that there were at least 4 or 5 of them in this SDM. Unfortunately, creating an SDM is a lot of work, and so far they had only completed the Analysis phase in detail; the rest was just the framework. They said the remaining phases would come over time; well, time ran out on this work when someone figured out you could buy a whole, complete SDM, so the remaining phases were never done and the in-house SDM was never mentioned again.
... but I recall it was my IS department that then went out and got an SDM. The winner was from a local consulting company, who offered "The One Page Methodology"; methodologies were already getting the reputation that they were big and unwieldy, and that the manuals would be put on a shelf and never used again. Now, this "one page" was a 4-foot by 2-foot wall poster, but it served the purpose. The poster was divided into 5 horizontal bars, one for each phase, and each phase had around 10 boxes/steps, going from left to right, but that's all I remember.
What I do remember was the vendor also had a CASE tool, called "The Developer", to automate the diagrams used in the methodology. These were basically data models and data flow diagrams. It also had a data dictionary for the data model, and text boxes for documenting your DFDs. So, Excelerator was gone, replaced by this Developer.
I used it quite a lot as I started doing analysis on a lot of smaller projects. I can't say that our developers got what the models were for, but it was mostly current system maintenance, so they would ask questions and figure it out somehow. Not sure how this situation was tolerated, but things were changing all the time, so newer methods and tools were coming...
Wednesday, October 07, 2009
Memories of IT - mid-80's - Analysis and CASE Tools
So now I move on to the next project, replacing yet another batch system with something newer and shinier, and I am doing the Analysis phase. In a timely fashion, a few things arrive that will help me greatly...
- a fast, collating, stapling photocopier, which I would need for distributing the requirements documents
- an IBM PC AT, with a hard drive, a mouse, and a graphics card. The drive storage was tiny by today's standards, 10 meg. The monitor was still monochrome, an annoying orange-yellow.
- and the reason for the AT, a copy of Excelerator 1.0
Next, I could select a level 0 function, and then draw a new diagram, exploding that function to level 1, keeping a link to level 0. This was like gold for me, so pencil and erasers were now history.
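In today's terms, that "explode" feature amounted to keeping a parent link between diagrams; here is a minimal sketch in Python of the idea, not Excelerator's actual design:

class Function:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # link back to the level above
        self.children = []            # the exploded, lower-level diagram
        if parent:
            parent.children.append(self)

level0 = Function("Process Claims")
level1a = Function("Register Claim", parent=level0)
level1b = Function("Adjudicate Claim", parent=level0)

# The trace from any detail function back to level 0 is preserved.
print(level1b.name, "->", level1b.parent.name)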
Excelerator did not support Data Models in 1.0, but I used a general diagram type to at least draw the model, and I kept definitions and such in a text document.
Looking back, I would have to say that the quality of what I produced was probably low, but I was young, all was new, and there weren't many examples to compare to. I am fortunate to have had the time to learn and improve since then, and it hasn't stopped; there is always room for improvement, and new things to learn.