best metrics for comparing hardware?
Recently I spent four weeks evaluating hardware options for a large amount of storage for a data migration project. We ended up with four proposals -- three from vendors and one to be built in-house. One of my tasks was to put together a matrix comparing the four potential solutions.
There were the easy metrics -- the amount of raw and usable storage, number of racks/tiles required, electrical and cooling requirements, cost, etc. Comparing supportability was trickier but doable, with 24x7 versus 12x5 phone support, availability of on-site technicians, warranty terms, support contract costs, etc. Where it became genuinely difficult was identifying metrics to compare performance. Ratio of processors to storage? Location of processing nodes in the architecture? I/O rates? Time to read all data? And how do you best calculate those last two across four architecturally quite different proposals? We ended up with metrics that not everyone agreed upon, in part because there was a requirement that not everyone agreed upon.
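For that last metric, the closest we could get to a common footing was a back-of-envelope calculation along these lines; the numbers below are entirely made up for illustration, and real systems rarely scale this linearly, but it at least forces each proposal to state the same inputs:

```python
# Hypothetical back-of-envelope estimate of "time to read all data",
# assuming reads can be spread evenly across all processing nodes.
# None of these figures come from the actual four proposals.

usable_tb = 500        # usable capacity, in TB
num_nodes = 8          # processing nodes in the proposed architecture
node_read_mbps = 400   # sustained sequential read per node, in MB/s

aggregate_mbps = num_nodes * node_read_mbps
seconds = (usable_tb * 1_000_000) / aggregate_mbps   # TB -> MB
print(f"Time to read all data: {seconds / 3600:.1f} hours")
```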
I'm curious how other folks have gone about doing this. I'd be interested in hearing from anyone who is willing to share their strategies.
1 comment:
The way I've done this sort of thing for years is through some variant of utility analysis. It's important to remember that such a selection is never a wholly objective process; there's always an element of subjectivity. Problems arise when individuals form a subconscious link between a desired outcome (solution X) and some attribute that "proves" solution X is the best, i.e. the attribute for which solution X happens to be outstanding.
To get round this, we used to determine the attributes for choice, as a team, before sending the tenders out. You may be too late for that, but it's still worth doing it as a team process. You need to get agreement on the attributes. Then you weight the "importance" of each attribute, perhaps totalling the weights to 100. You can have team members do this separately and combine the results, either in discussion or even through averaging (if someone sticks to their guns with an outlier opinion). Then you score each solution against each attribute out of 10; again, this is best done as a team, but you can average individuals' opinions if necessary. Finally, multiply each score by the corresponding weight and sum the results to get a total utility for that solution against your criteria.
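To make that concrete, here is a minimal sketch of the calculation in Python; the attributes, weights, and scores are invented for illustration and are not from the original four proposals:

```python
# Weighted utility score for one candidate solution.
# attribute: (weight out of 100, team score out of 10) -- all hypothetical.
attributes = {
    "usable capacity":   (25, 8),
    "I/O performance":   (30, 6),
    "supportability":    (20, 9),
    "power and cooling": (15, 7),
    "rack footprint":    (10, 5),
}

# Weights should total 100 so utilities are comparable across solutions.
assert sum(w for w, _ in attributes.values()) == 100

utility = sum(weight * score for weight, score in attributes.values())
print(f"Total utility: {utility} (maximum possible: 1000)")
```

Run the same sum for each candidate and compare the totals; as noted below, cost stays out of the table and is weighed separately at the end.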
As noted, some of the scores are not objective, but this is a way of controlling subjectivity, and in my experience it works well. There are many little caveats on how best to set up the criteria (try to make them "orthogonal", i.e. such that a solution scoring well on one attribute doesn't automatically score well on another). And I try to keep cost out as a separate issue, even in the form of value for money. In the end you might expect, and even desire, to pay more for a solution with a higher utility to you.
This is probably me teaching grandma to suck eggs, and may not answer your question anyway. But good luck, and have fun!