In the next few weeks, we will post one update every week about either a new feature on the website or an improvement on how we do things.
The first update is our new out-of-spec policy, which aims to increase the probability that our reviews accurately represent what you can buy at home.
Our main goal at Rtings.com is to help you find the best product for your needs. This means we want our reviews to represent what the majority of you would get. This is why we buy our own units to test. We do not want the best-case scenario, like websites that receive cherry-picked units from manufacturers, but we also do not want to review an exceptionally bad unit if we get unlucky with the one we bought. Ideally, we would test a large sample of units, bought from a wide range of retailers, to calculate the deviation of each measurement. Unfortunately, this is not a financially realistic solution for an independent company like us.
We already had a formalized policy for defective products, but we didn't have one for working products that perform worse than their specifications. Here are both policies, formalized:
Defective unit policy
We consider a unit defective when the product is not usable: for example, a physically broken screen, a partially broken LED backlight, or a non-responsive headphone driver. When this happens, we don't test the unit. We return it and get a new one instead.
This policy isn't new; we have been following it for a few years. For example, in 2016, 2 of the 44 TVs and 1 of the 122 headphones we bought were defective, so we returned them and reviewed replacement units instead of publishing results from the defective ones.
New out-of-spec policy
We leave it to each brand to define what out-of-spec means for its products, since brands know how their products should perform and there is no good way for us to know whether our unit is worse than average.
As soon as a brand tells us the unit we tested is out of spec:
- We add a note at the top of the review mentioning that our unit might not be representative
- We buy another unit from a different retailer to improve the chances of getting a better one
- We retest unit #2 through all the tests affecting the Mixed Usage score and post all the results on the review page
This should eliminate the exceptionally bad unit we could otherwise end up reviewing. But to prevent manufacturers from calling every review out of spec until we happen to get a best-case unit, we apply this rule:
If (Mixed Usage #2 - Mixed Usage #1) > 0.1:
- Unit #2 is significantly better, so the review is updated with unit #2's measurements
Otherwise:
- Unit #2 isn't significantly better, so the brand loses its right to call out another unit until we have reviewed 10 more of its products
We chose 'Mixed Usage' since this rating encompasses the majority of our tests and is meant to represent a typical customer using the product in multiple ways.
The 0.1 threshold simply accounts for rounding errors.
The 10-review handicap was chosen because we test about 10 products per year from each major brand.
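To make the rule above concrete, here is a minimal sketch of the decision logic in Python. The function name, score values, and return strings are hypothetical illustrations, not actual Rtings code; only the 0.1 threshold and the two outcomes come from the policy itself.

```python
def resolve_out_of_spec_callout(mixed_usage_1: float,
                                mixed_usage_2: float,
                                threshold: float = 0.1) -> str:
    """Hypothetical sketch of the out-of-spec decision rule.

    Compares the Mixed Usage score of the originally reviewed unit (#1)
    with that of the replacement unit (#2) bought after a brand's callout.
    """
    if mixed_usage_2 - mixed_usage_1 > threshold:
        # Unit #2 is significantly better: publish its measurements.
        return "update review with unit #2 measurements"
    # Unit #2 isn't significantly better: the brand forfeits callouts
    # until 10 more of its products have been reviewed.
    return "brand loses callout right for the next 10 reviews"

print(resolve_out_of_spec_callout(7.5, 7.8))   # difference 0.3 > 0.1
print(resolve_out_of_spec_callout(7.5, 7.55))  # difference 0.05 <= 0.1
```

Note that the comparison is strict: a 0.1 difference exactly still counts as "not significantly better", since a gap that small could be a rounding artifact.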
Our hope with this new policy is to improve the accuracy of our measurements and to better represent what you can buy. Hopefully, brands will start pointing out the units we have that are outliers. With this policy, you will know that if a brand didn't tell us a unit is out of spec, then the brand doesn't believe it performs significantly worse overall than what you can get at home.
This doesn't solve the opposite issue of us getting a better-than-normal unit, but we have a few ideas on how to address that in the future.
If you have any feedback on this new out-of-spec policy, or maybe you have an even better solution to propose to help make sure our units are a good representation of what you can end up with, you can send us an email directly at email@example.com.