If a project succeeds and meets expectations, you probably don't spend much time thinking about it. You move on to the next one. On the other hand, if a project fails or misses expectations, you're wise to spend some time trying to understand what happened and why. Recently I had the chance to think about a project that missed my expectations. From that review, I formalized seven key steps for managing technical risk and one golden rule.
Learning from my own mistakes
A sister company of ours wanted to set up secure Web access to its HTTP server using SSL. Knowing that I had done it at our shop, they asked me if I could do the work at their shop. They told me their environment and needs were similar to mine. I gave them an estimate of three to four days of work. It actually took more than 10. Yes, the secure Web access works fine now, BUT the project certainly failed to meet my expectations on the workload. What went wrong?
Mistake #1: No input to the project planning or management
I was the technical expert but DIDN'T have any real input into how the project was managed. I should have made that a clear condition to accept the work. I will next time. I could not change HOW they operated in IT, and it had a major impact on my project.
Mistake #2: I assumed their environment was the same as mine
Since I was going to be setting up a technical environment I already knew how to build, I made the mistake of assuming I would know how to do the same thing in THEIR environment with the SAME EFFORT. Wrong. The net results of my time audit on the project are shown below in Figure 1.
1. Actual SSL reverse proxy plan, setup and testing: 3.5 days
2. Time spent waiting ON SITE to get access to their network: 2 days
3. Time spent waiting on users to test the application: 1.5 days
4. Time spent doing performance benchmarks: 3 days

Figure 1: Time spent on secure Web access project
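To put numbers on the overrun, here is the same time audit as a quick back-of-the-envelope calculation. The day figures come straight from Figure 1, and the three-to-four-day figure is my original estimate from above:

```python
# Time audit from Figure 1, in days.
planned_work = 3.5      # item 1: SSL reverse proxy plan, setup and testing
waiting_network = 2.0   # item 2: waiting on site for network access
waiting_users = 1.5     # item 3: waiting on users to test the application
benchmarking = 3.0      # item 4: unplanned performance benchmarks

total = planned_work + waiting_network + waiting_users + benchmarking
unplanned = total - planned_work
overrun = total / 4.0   # against the high end of my 3-4 day estimate

print(total)      # 10.0 days actually spent
print(unplanned)  # 6.5 days on items that were never in the estimate
print(overrun)    # 2.5x even the high end of the original estimate
```

Almost two-thirds of the total time went to items that never appeared in the plan.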
I clearly failed to anticipate items 2, 3 and 4. Why? Time spent waiting ON SITE for network access doesn't make sense to me, but it's a reality for this IT shop. Their network was down on three of the occasions I showed up, each time due to a different Windows server virus. Each time I waited while they promised, "It's only a few more minutes," to get network access. On two other visits they didn't have an available PC setup OR a network connection. They have strict rules about limiting access for outside users (me). As a result, I had to borrow PCs from existing users who were out for the day and then install and configure the tools I needed on each of these PCs. A total of two days lost.
Time spent waiting for users to test the application is something I SHOULD have anticipated. It's happened in my own shop at times. If I'm managing a project in my company, I have LOTS of direct and indirect pressure that can be applied to help others focus on their responsibilities. In this situation, I had none. I should have realized I'd have no control over that portion of project.
Time spent doing performance benchmarks is something I DIDN'T plan for but probably should have allowed for. Since I had already implemented this solution twice at our company, and performance was excellent on the same basic servers and hardware, I mistakenly assumed it would be fine here, too. And, of course, I was dead wrong. While my proxy server setup ran exactly as I expected, the users and project manager complained because Web users were getting horrible response time compared with in-house users. And of course they wanted ME to fix it! Not being TOO stupid, I first installed performance tools and ran a number of benchmarks to prove where the delays were coming from: their network! After sharing the results of my performance study on a conference call, the user asked me what I was going to do about it. I said, "Nothing, because I don't own your network." This forced their network manager to finally respond. (He had ignored the performance reports I sent him two weeks earlier!) He said they'd look into it. A week later, they found their networking problems and corrected them -- there were several. Yes, performance was now fine, but I had lost three more days doing this extra work.
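The benchmarks themselves were just careful timing. The article doesn't name the tools I used, so treat this as a generic illustration of the idea rather than the project's actual setup: time each phase of a request separately, so connect time (mostly network) can be separated from server processing time.

```python
# A minimal sketch of phase-by-phase request timing. Host, port and path
# are whatever you are testing; nothing here is specific to the project.
import http.client
import time

def profile_request(host, port, path="/"):
    """Time each phase of a plain-HTTP request separately, so network
    delay (connect, transfer) can be distinguished from server delay
    (time to first response byte after the request is sent)."""
    t0 = time.perf_counter()
    conn = http.client.HTTPConnection(host, port, timeout=10)
    conn.connect()                 # TCP handshake: network round trips
    t_connect = time.perf_counter()
    conn.request("GET", path)
    resp = conn.getresponse()      # blocks until first response bytes
    t_first_byte = time.perf_counter()
    body = resp.read()             # remaining transfer time
    t_done = time.perf_counter()
    conn.close()
    return {
        "connect_s": t_connect - t0,
        "server_s": t_first_byte - t_connect,
        "transfer_s": t_done - t_first_byte,
        "bytes": len(body),
    }
```

Run something like this repeatedly from an in-house client and from an outside client. If connect and transfer times balloon on the outside runs while server time stays flat, the delay is in the network, not the proxy -- which is what my benchmarks showed here.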
I had a lot to learn on my end from this small project. Unfortunately for me, some of those lessons were ones I had learned earlier but didn't apply carefully here "because it was a small project."
Categories of IT project risks
IT projects generally have three categories of risks that need to be managed well for a project to succeed:
- Business -- Will business operations be improved or, at least, kept the same before and during a project? Will business performance be improved in some way after the project?
- Project -- What factors in planning and managing this project can go wrong? In my case, the technical lead (me) had no reporting structure to a project manager -- a very scary situation.
- Technical -- Did I identify and plan for different types of technical problems? I certainly didn't plan to handle the viruses on their network, the need to set up their PCs over and over or the need to do technical performance analysis.
Seven key steps to ensure project success
The seven items below are key to project success in many different scenarios:
- Need to plan
- Need to validate
- Need to leverage experience
- Need to resource
- Need to manage
- Need to adapt
- Need to audit results
Need to plan
On a project I plan, I normally define specific objectives, success keys, risks and strategies for business, project and technical items. On this project, I had input only on the technical details and, on that, I missed some key technical factors unique to their environment. Lesson learned: don't make assumptions about what DOESN'T need to be planned.
Need to validate
All plans are based on concepts, data and assumptions. Ideally, ALL of that needs to be validated BEFORE a project is launched. My best project performances have been achieved when I validated the business, project and technical details BEFORE the project was launched. If you can't do that, don't launch the project. You and your team aren't ready.
I usually frame a validation project around two stated goals: validating the expected business case and gathering user input on the design to be sure the solution will be right for them. Both of those are true statements. In addition, I always have a "secret" technical strategy: validate the actual technical environment we are planning for the project. Time and again my "technical pilots" have exposed the vast majority of the technical problems I'll face on the real project BEFORE I launch it. That gives me a chance to get the plan right BEFORE it's released, which is a nice place to be. When I've finished with the validation, I know that THIS technology works in MY environment with THIS staff. That's not a general statement that, for instance, WebSphere works, but rather proof that WE can make it work HERE.
Need to leverage experience
Success requires that we have the right experience to execute a project successfully. I've learned that experience can be ours (in-house) OR someone else's (consultants). Like many of you, when we hire consultants to help, we also add some deliverables on skills transfer to our in-house staff so we can support the work the consultants have done.
Lack of experience, especially technical experience, may be the single biggest cause of failure in the IT projects I see. I've often watched people try, unsuccessfully, to substitute the following for REAL technical project experience:
- Product documentation
- Classroom training
- User install guides and IBM Redbooks that address a different scenario than yours
Real technical skills cover planning, engineering, designing, integration, measurement, monitoring, diagnostics, technical testing, user testing and support. Even being "vendor-certified" on a technology doesn't guarantee you have the right technical skills for a project in many cases. Most certifications we see are basic "hands-on" skills, not solid engineering skills for a technology. Since nothing pays like experience, skills transfer is critical when we hire outside help. That's been our most productive route to gain the experience we need on projects.
Need to resource
On projects, you always need to find, recruit and allocate the right resources to be successful. That's more than just my staff. That's outside consultants if needed. That's business resources and capital, usually from the project sponsor because we've made a valid business case. It's also the right assets: not just hardware and software tools but our OWN applications and data. We are much smarter now about looking at project deliverables and asking IF we already have resources (data and application components) that can provide services similar to what we need. It's sort of an inventory analysis on our own assets. If we do have those resources, we'll look at ways to "glue" them into our new solution. If we don't have what we need, we'll look at the build-or-buy decision, which is now the exception rather than the rule. If we're going to build it, we're more willing to contract that out, especially if it's something we don't expect to do every day.
Need to manage
Good, valid plans aren't enough. You need someone who will manage the project well through completion (however completion is defined).
I manage differently now than when I was a new manager. Then I always looked at the project plan task details and spent a lot of time estimating, communicating to the team and collecting feedback.
Now, after I've built a valid plan, I look only at the more important stuff: milestone deliverables, success keys, risk factors and resources. I spend a lot more time asking other project members, users, sponsors, owners and so on questions about these key items, then listening to what they say and making needed adjustments. Overall, the results are much better in terms of project team delivery and attitudes.
Sure we use standard tools such as Microsoft Project, BUT those tools DON'T create good projects. We DO have some project methodologies that have made a positive difference: GAPS and OARS. There are other methodologies available as well.
Need to adapt
Every time I hit a technical roadblock on my Web access project, I had to come up with a valid alternative to achieve my project objectives. Fortunately, I have more experience in this area than the average certified engineer. As a result, I was able to see and use options that aren't covered in standard training and certification courses. If I didn't know alternatives, couldn't find them or wasn't willing to modify my design to use them, that project surely would have failed.
On my Web security project, I needed to come up with alternate ways to access, configure and test their applications, since their network models prevented standard access to those components. The alternatives I created worked great, and I didn't lose any time because of software setups.
Need to audit results
I now repeat this exercise regularly on other projects, especially the ones that didn't go well. I also created a "simple" accounting system for our in-house projects that lets me see how close our resource utilization came to our plans and original estimates. It turns out our original estimates on many projects are only half of what the project ultimately costs in terms of time. As a result, we're able to improve our estimates going forward.
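The accounting behind that is nothing fancy. Here's a minimal sketch of the idea; only the secure Web access figures come from this article, and the other two project rows are made up purely for illustration:

```python
# Hypothetical project records; only the secure Web access figures are
# from this article. The point is the correction factor, not the data.
projects = [
    {"name": "secure Web access",        "estimated_days": 4.0, "actual_days": 10.0},
    {"name": "project B (hypothetical)", "estimated_days": 8.0, "actual_days": 15.0},
    {"name": "project C (hypothetical)", "estimated_days": 5.0, "actual_days": 11.0},
]

def overrun_ratio(project):
    """Actual effort divided by estimated effort for one project."""
    return project["actual_days"] / project["estimated_days"]

# The average historical overrun becomes a correction factor for future estimates.
factor = sum(overrun_ratio(p) for p in projects) / len(projects)

def corrected_estimate(raw_days):
    """Scale a raw gut-feel estimate by the historical overrun factor."""
    return raw_days * factor
```

With actuals running roughly twice the estimates, as our audits found, a four-day gut estimate corrects to something closer to eight or nine days, and the factor keeps improving as more completed projects feed the records.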
Golden rule: Need to enjoy what you're doing
I know you've never failed on any project you've tried. :-) As a result, I'm sure you know more than I do on this subject. Share your knowledge and write me at firstname.lastname@example.org.
About the author: The Value Manager is an IBM iSeries IT manager trying to make the right decisions to deliver better value for his company. He welcomes your comments and feedback. E-mail him at email@example.com.