How We Review Products
Our full testing methodology — from product acquisition to scoring, publication, and update policy. No shortcuts. No compromises.
Six Steps From Purchase to Publication
We purchase at retail price
Every product we review is purchased at standard retail price using our own budget. We do not accept manufacturer-supplied review units, early access samples, or pre-production hardware. This eliminates the inherent bias that comes from reviewing products provided by the people who make them.
Minimum two-week testing period
Every product is tested for a minimum of two weeks in realistic, everyday conditions — not in a controlled lab setting designed to produce favourable results. We use products on commutes, at desks, in kitchens, on planes, and in the kinds of environments our readers actually encounter.
Scored across 40+ data points
Our scoring rubric covers build quality, performance, ease of use, software and companion app quality, battery life (where applicable), value for money relative to the competitive set, and long-term reliability. Every dimension is scored independently before a composite score is calculated.
Written for clarity, not length
Our reviews are structured to answer the questions real buyers have: Is it worth the money? Who is it for? What are its actual weaknesses? We do not pad reviews because long reviews rank better in search. We write each review at the length its product deserves, and no longer.
Peer reviewed before publication
Every review is read by a second editorial team member before it is published. This review checks factual accuracy, scoring consistency, and that the verdict is supported by the evidence presented in the body of the review.
Updated when it matters
Products change after launch. Firmware updates can fix bugs or introduce new features. Prices drop. Better alternatives emerge. We revisit and update reviews when any of these changes materially affects our recommendation. The "Updated" date on each review reflects the most recent revision.
Our Scoring Rubric
Each dimension is scored from 1–10 and weighted to produce a final composite score. Scores represent our independent assessment after real-world testing.
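As an illustration only, a weighted composite of per-dimension scores can be sketched as follows. The dimension names and weights in this sketch are hypothetical assumptions; the actual rubric covers 40+ data points and its weights are not published.

```python
# Illustrative sketch of a weighted composite score.
# Dimension names and weights here are hypothetical, not the site's actual rubric.
DIMENSIONS = {
    "build_quality": 0.20,
    "performance": 0.25,
    "ease_of_use": 0.15,
    "software_quality": 0.10,
    "battery_life": 0.10,
    "value_for_money": 0.10,
    "reliability": 0.10,
}  # weights sum to 1.0

def composite_score(scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores (each 1-10), rounded to one decimal."""
    for name, s in scores.items():
        if not 1.0 <= s <= 10.0:
            raise ValueError(f"{name} score {s} is outside the 1-10 range")
    return round(sum(scores[d] * w for d, w in DIMENSIONS.items()), 1)

example = {
    "build_quality": 9.0, "performance": 8.5, "ease_of_use": 8.0,
    "software_quality": 7.5, "battery_life": 8.0,
    "value_for_money": 7.0, "reliability": 9.0,
}
print(composite_score(example))
```

Because each dimension is scored independently before weighting, a single weak dimension lowers the composite without hiding where the weakness lies.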
What Our Scores Mean
9.0–10.0 (Editor's Choice): Best in class. A clear recommendation for most buyers.
8.0–8.9 (Highly Recommended): Excellent product with minor trade-offs. A confident buy.
7.0–7.9 (Recommended): Good product with notable limitations. Right for some buyers.
6.0–6.9 (Consider Alternatives): Has merit but better options exist at the same price.
Below 6.0 (Not Recommended): Significant flaws that outweigh its strengths.
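The bands above map directly onto verdict labels. A minimal sketch of that mapping, assuming composite scores are rounded to one decimal place so no score can fall between bands:

```python
# Maps a composite score to the verdict bands in the table above.
# Assumes scores are rounded to one decimal place, so e.g. 8.95 cannot occur.
def verdict(score: float) -> str:
    if score >= 9.0:
        return "Editor's Choice"
    if score >= 8.0:
        return "Highly Recommended"
    if score >= 7.0:
        return "Recommended"
    if score >= 6.0:
        return "Consider Alternatives"
    return "Not Recommended"

print(verdict(8.3))  # prints "Highly Recommended"
```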
Common Questions
Do you accept free products from manufacturers?
No. We purchase every product at standard retail prices. Accepting manufacturer-supplied review units creates pressure — real or perceived — to produce favourable reviews. Avoiding them entirely is the only credible approach.
Does your affiliate relationship affect your scores?
No. Affiliate programs exist for thousands of products, including ones we give negative verdicts to. We regularly recommend products for which we earn no commission. Scores and verdicts are determined before any commercial considerations.
Can manufacturers request corrections or rebuttals?
We welcome factual corrections from any source, including manufacturers. If a manufacturer believes a factual claim in one of our reviews is incorrect, they can contact us at corrections@smartsignify.com. We investigate all claims seriously and correct genuine errors transparently.
How do you handle products you receive as gifts or at events?
If we receive a product as a gift or acquire it via a media event, we disclose this in the review and purchase a retail unit for testing wherever possible. We do not review products we cannot independently verify through retail acquisition.