I’ve spent a lot of time in system design meetings where performance was discussed. The conclusion of many of those meetings was: "It’s too early to worry about performance. We need to focus on requirements and functionality; we don’t want to get bogged down in technical speculation. Besides, we can assume performance will meet expectations because the new technology is so fast. Performance is not an issue."
Invariably these conclusions came back to haunt the project. Performance that wasn’t supposed to be an issue became the biggest issue.
Performance is always the biggest issue. After all, it is the quest for better performance that drives all change. Any new functionality that doesn’t also provide better performance will not be accepted by users.
Few people would question that the main driver for the introduction of all new technologies is to improve performance, either through increased functionality, better response time, or both. When we strive to make these improvements, performance issues arise from three sources: people, processes, and technology.
It’s not always the technology that needs performance improvements, but the technology is the easiest to blame because no one’s feelings get hurt when you pick on the computer for poor performance.
The opposite is true for people. With people there are always egos and political agendas to watch out for. And processes, especially well-established ones, are often extensions of the people who perform them, and can have momentum that is difficult to change.
Our biggest performance issues arise when change at the process or people level is required. It takes a long time to turn anything with momentum (see Newton’s laws of motion), and over time people and processes gather significant momentum. "We’ve always done it this way, so why would we change now?"
Organizations and the people in them build up their own kind of momentum – it’s called the status quo. When new technologies or processes arrive, passive-aggressive attitudes (people momentum) can prevail and make any technology – no matter how good – the scapegoat for project failure. In failed project situations, one is prompted to wonder: whose interests were served by the failure? There’s always lots of blame to go around, so don’t waste it all on the technology. At the same time, it’s good to remember that people will only change when they are ready. That’s one of the reasons paradigm shifts fail – people aren’t ready to have their paradigm shifted.
In all attempts to improve performance, there’s always the next bottleneck.
No matter what performance improvements are introduced, the only purpose they serve is to eliminate the current performance problem and highlight the next one. The role of the change agent, architect, or designer becomes crucial in identifying the next performance problem as soon as possible. Of course, the biggest problem here is knowing what performance issues will be uncovered when the current ones are eliminated.
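The "next bottleneck" dynamic can be sketched with a toy model: overall throughput is capped by the slowest stage, so fixing that stage simply moves the limit elsewhere. The stage names and rates below are purely illustrative, not measurements from any real system.

```python
# Toy model of a request pipeline: throughput is capped by the slowest stage.
# Stage names and rates (requests/second) are illustrative assumptions.
def bottleneck(stages):
    """Return the (name, rate) of the stage limiting overall throughput."""
    return min(stages.items(), key=lambda kv: kv[1])

stages = {"web": 500, "app": 300, "database": 120}

name, rate = bottleneck(stages)
print(f"Current bottleneck: {name} at {rate} req/s")   # database at 120 req/s

# "Fix" the database (say, by buying a bigger box) and the limit just moves.
stages["database"] = 600
name, rate = bottleneck(stages)
print(f"Next bottleneck: {name} at {rate} req/s")      # app at 300 req/s
```

Each "fix" raises throughput only to the rate of the next-slowest stage – which is why every improvement uncovers a new problem.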
The beauty of technology, as opposed to people or processes, is that it’s fairly easy to line up your technology performance constraining factors. In hardware it’s CPU, disk, memory, or network bandwidth (no specific order implied).
In software it’s the OS, the file system, databases, or applications. With processes, the issues usually arise because the old process doesn’t work with the new technology. People, myself included, are lazy. As a result, any change that initially requires more work is taboo.
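As a minimal sketch of "lining up" the technology-side factors on a single machine, here is what the standard library alone can tell you. The threshold and the choice of probes are my own assumptions; a real survey would use OS tools (vmstat, iostat, sar) or a library such as psutil for memory and network figures.

```python
# Minimal inventory of per-machine constraining factors, stdlib only.
import os
import shutil

cpu_count = os.cpu_count()        # CPU: cores available to spread load across
disk = shutil.disk_usage("/")     # disk: total/used/free bytes on the root volume

print(f"CPU cores: {cpu_count}")
print(f"Disk free: {disk.free / disk.total:.0%} of {disk.total / 1e9:.0f} GB")

# Memory and network bandwidth have no portable stdlib probe; measuring them
# is where OS tooling or a third-party library comes in.
```

Even this crude inventory makes the point: the technology constraints are enumerable and measurable, which is more than can be said for people or processes.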
Whatever your current performance problems may be, the fix is usually quite simple: spend your way out of the problem. Of course, that assumes you have money to throw at it. If you don’t have deep pockets, you can try one of the following: 1) Tweak your system – though if you’re going to spend money on tuning, that money would be better spent on an upgrade; tweaking may buy you some time, but not much. 2) Reduce the workload on the system – this is exactly what angry users want to hear: "stop using the system and performance will improve." Of course, you may not have to tell them – just ignore them long enough and they will go away all by themselves. 3) Stick your head in the sand and expect the problem to go away. This is equivalent to calling a meeting to discuss strategies for improving performance. Eventually everyone at the meeting will have to agree that money thrown at the problem is the only solution.
But where to throw the money? At proven technologies. Check those references! Ask for proof beforehand. Don’t believe the salespeople – they have been known to lie.
The big challenge in all performance-related matters is coming up with the money to pay for it in the first place. But remember that price/performance curves are always dropping, while other costs – like human resources – are going up. Investing money in technology improvements rather than in legacy system maintenance personnel may be more cost-effective.
I’m sounding a bit like a die-hard capitalist here, but I know of a number of shops where a significant amount of the operating budget could be moved to the capital budget if only management could grasp some of the nonsense going on in the batch programs on their mainframes. And all in the name of maintaining the batch mentality – the status quo!
Maybe we should have a look at money from other sources – like the user community, that group of people who’ve been screaming at you for months about poor performance. Here’s the easiest sell job you’ll ever have: "all I need is x million dollars and your problems are over." It’s easy – unless that’s what you told them the last time performance was poor.
So while you’re begging for cash, remember to set your users’ expectations that any performance improvement they experience will be temporary, and in six months you’ll need $y million to get over the next performance hurdle.
Performance is always an issue.
Your article reminds me of a conversation with a CTO who, after a two-year development cycle, launched his service to the market. Two things happened:
1. It worked amazingly well.
2. User adoption hit the business’s identified one-year volume on the second day.
They had worked very hard to make some decisions early on about how to build this application, and they knew there were constraints to the platform when they did adoption planning. But all of their assumptions proved inadequate, and though everyone had agreed to the plan (including customers), it still was not performing.
This was a development cost decision, and most likely the right one at the time with the information on hand. They just didn’t have a year’s worth of revenue to support the upgrade cost that was now in front of them.
This also speaks to the value of proper prototyping, which allows people to gain an understanding of load and requirements before going out to the real world.
Look at how Google rolls things out in beta form, seemingly forever. This allows them to deal with performance constraints as they arise, since they do not always know the usage. Even the best guesses can be wrong.