A university, a startup and chip giant Intel are pushing a proposal for a standard model for rating open-source software. They hope to provide customers with a better sense of the maturity of the more than 100,000 open-source projects available today.

The Business Readiness Ratings (BRR) model is the brainchild of Carnegie Mellon University West's Center for Open Source Investigation (COSI) and is being cosponsored by open-source testing and certification startup SpikeSource and Intel.

"The model allows users and developers to get a feeling for the appropriateness of open-source software for their environment," said Joaquin Ruiz, vice president of product marketing at SpikeSource.

One way of thinking about the BRR model is as a kind of tailored Netflix service, he added. Just as subscribers to the online video rental service rate films, users and developers will rate the different open-source projects.

The model should save organisations a good deal of time they would otherwise spend on in-house assessments of the wealth of open-source projects available, according to Ruiz. For instance, if a company is looking for an open-source wiki-type application, there are seven currently available, and he estimates there are 135 open-source general content management tools on the market.

For the next three months, COSI, SpikeSource and Intel are inviting comment on the BRR model from users and developers, Ruiz said. Using those comments, the model will be refined, and the organisations hope to have it in production by the end of the year, he added. The model will need to be adaptable to reflect different usage assessments, with the requirements of a university, say, being quite distinct from those of a large corporation, according to Ruiz.

COSI, SpikeSource and Intel have defined 12 categories for assessing open-source projects, including how well the software meets users' needs, its usability, scalability, performance and support. Each category in turn consists of a group of related metrics. For instance, under the "quality" category, metrics include users' estimations of the software's design, code and testing, and how complete and error-free each of the three is.

Users will rate the categories for a project on a scale of one ("unacceptable") to five ("excellent"), and the 12 categories will then be weighted by importance. The top seven or fewer categories are then used to calculate a project's overall BRR score.
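As a rough sketch of how such a calculation might work (the category names, weights and formula below are illustrative assumptions, not the published BRR methodology), a weighted average of the top categories' one-to-five ratings would yield an overall score:

    # Hypothetical sketch of a BRR-style score: a weighted average of 1-5 ratings.
    # Category names and weights are illustrative, not official BRR data.
    def brr_score(ratings, weights):
        total_weight = sum(weights.values())
        return sum(ratings[c] * weights[c] for c in weights) / total_weight

    ratings = {"functionality": 4, "usability": 3, "quality": 5, "support": 2}
    weights = {"functionality": 0.4, "usability": 0.2, "quality": 0.3, "support": 0.1}
    print(round(brr_score(ratings, weights), 2))  # 3.9

In this illustration the project would score 3.9 out of five, with the weights reflecting how much each category matters to the organisation doing the assessment.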

On the BRR website, the model's sponsors are providing a white paper and discussion forums, together with samples, standard templates and worksheets for the model. In the white paper, the sponsors state that the aim of the model is to offer "a vendor-neutral federated clearing-house of quantifiable data on open-source software packages to help drive their adoption and development."