
What separates credible cognitive assessments from clickbait
The honest answer is: it depends entirely on the test. The internet hosts thousands of IQ tests, ranging from peer-reviewed research instruments to glorified clickbait that assigns everyone a flattering score. What distinguishes a credible online test is the same thing that separates good science from bad: published validity data, transparent methodology, and items that actually measure cognitive ability rather than trivia. Our test uses ICAR items developed at Northwestern University, with correlations of r = 0.70–0.85 against gold-standard clinical measures like the WAIS-IV.
Psychometricians evaluate tests against four key criteria: reliability (the test gives consistent results across repeat administrations), validity (it measures what it claims to measure), standardisation (everyone takes it under the same conditions and scoring rules), and norming (scores are benchmarked against a representative sample). A test that meets all four is considered scientifically credible, regardless of whether it's administered online or in a clinic.
The International Cognitive Ability Resource (ICAR) was developed by researchers at Northwestern University with a specific goal: create high-quality cognitive test items and make them freely available for research and public use. The items have been published in peer-reviewed journals, validated across thousands of participants, and tested against established clinical measures.
To put the numbers in context, here's how different tests correlate with the WAIS-IV — the clinical benchmark. A correlation of 1.0 would mean perfect agreement; anything above 0.70 is considered strong in psychometric research.
[Chart: correlation with the WAIS-IV for a gold-standard matrix reasoning test, open-source research items (our test), and tests with no published validation data. Axis: correlation coefficient (r); higher values indicate stronger agreement with the clinical gold standard. Data from published validation studies.]
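The chart reports Pearson's r. As a concrete illustration of what that statistic computes, here is a minimal sketch; the function is standard, but the paired scores are invented for demonstration and are not real validation data.

```typescript
// Minimal sketch: Pearson's correlation coefficient (r), the statistic shown
// in the chart above. The paired scores below are invented for illustration;
// real validation studies use samples of hundreds or thousands of test-takers.
function pearsonR(x: number[], y: number[]): number {
  const n = x.length;
  const meanX = x.reduce((a, b) => a + b, 0) / n;
  const meanY = y.reduce((a, b) => a + b, 0) / n;
  let cov = 0;
  let varX = 0;
  let varY = 0;
  for (let i = 0; i < n; i++) {
    const dx = x[i] - meanX;
    const dy = y[i] - meanY;
    cov += dx * dy;   // how the two sets of scores move together
    varX += dx * dx;  // spread of the online scores
    varY += dy * dy;  // spread of the clinical scores
  }
  return cov / Math.sqrt(varX * varY);
}

// Hypothetical data: the same eight people scored on an online test and the WAIS-IV.
const onlineScores = [98, 112, 104, 87, 125, 95, 110, 101];
const clinicalScores = [101, 115, 99, 90, 120, 97, 113, 104];
console.log(pearsonR(onlineScores, clinicalScores).toFixed(2)); // "0.95" for this toy data
```

A useful rule of thumb: squaring r gives the share of variance two measures have in common, so r = 0.80 means the two tests share roughly 64% of their variance.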
Before trusting any online IQ result, check for these warning signs. A test that exhibits several of these is likely optimised for clicks rather than accuracy.
Hidden item sources. Credible tests tell you where their questions come from. If a test doesn't mention its item source or validation studies, there's no way to verify it measures anything meaningful.
Everyone scores high. Flattery drives sharing and return visits. If a test consistently assigns scores of 120+ regardless of performance, it's optimised for engagement, not accuracy.
Paywalled results. Requiring payment to see your score creates an incentive to show inflated results afterward: you're less likely to feel cheated if the number is flattering.
Claims of being "official" or "clinical-grade". No online test replicates clinical conditions. Any test claiming to be "official" or "clinical-grade" is misrepresenting what clinical assessment involves.
No published accuracy data. If a test doesn't cite correlation coefficients, sample sizes, or peer-reviewed publications, its accuracy claims are unsupported.
Heavy emphasis on speed. Processing speed is a real cognitive ability, but most online tests that emphasise speed use time pressure to increase difficulty rather than to measure a genuine construct.
We built MyIQTested on a simple principle: use the best available science, present results honestly, and don't manipulate scores to flatter people.
Our test uses 33 items from the ICAR framework, covering abstract reasoning, verbal reasoning, numerical reasoning, and spatial reasoning. Scoring is based on normative data from published validation studies — not an algorithm designed to make you feel good. Some people will score above average, some below, and most will land in the broad middle. That's what a real bell curve looks like.
All scoring happens in your browser. Your responses are processed locally, results appear instantly, and we don't require sign-up, payment, or personal data to show you your score. The methodology is transparent because we believe credibility comes from openness, not from gatekeeping.
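To make the scoring model concrete, here is a minimal sketch of norm-referenced scoring. It illustrates the general technique, not our production code: the norm mean and standard deviation are placeholder values, and rawToIQ is a hypothetical helper.

```typescript
// Minimal sketch of norm-referenced scoring, assuming placeholder norms.
// This is NOT the production algorithm; real norm values come from the
// published ICAR validation samples.
const NORM_MEAN = 19; // hypothetical mean raw score (out of 33 items)
const NORM_SD = 5;    // hypothetical standard deviation of raw scores

// Convert a raw score to the IQ metric: z-score against the norm sample,
// then rescale onto a distribution with mean 100 and standard deviation 15.
function rawToIQ(rawCorrect: number): number {
  const z = (rawCorrect - NORM_MEAN) / NORM_SD;
  return Math.round(100 + 15 * z);
}

// Everything runs locally, e.g. in the browser; no responses need to leave the device.
console.log(rawToIQ(19)); // 100 (exactly average under these placeholder norms)
console.log(rawToIQ(26)); // 121
console.log(rawToIQ(12)); // 79
```

Because the computation reduces to a few arithmetic operations on your answers, it can run entirely client-side, which is why no sign-up or upload is needed to see a score.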
33 validated questions. Honest scoring. Instant results. No sign-up required.
How accurate is this test compared with a clinical IQ test?
Our test uses ICAR items with published correlations of r = 0.70–0.85 against clinical measures like the WAIS-IV. That makes it a reliable screening tool: informative enough to be useful, honest enough to acknowledge it's not a clinical diagnosis.
Would I get the same score on a professionally administered test?
Likely close, but not identical. Clinical tests are administered in controlled conditions by trained professionals who account for factors like test anxiety, fatigue, and your personal history. Environmental distractions can push online results in either direction.
Why do so many online tests tell everyone they scored high?
Because flattery drives engagement and sharing. If everyone who takes a test gets told they're a genius, the test generates more traffic and social media mentions. Our test uses real scoring based on your actual performance against normative data, which means some people will score below average. That's by design.
How much should I trust my result?
Trust it as a well-calibrated estimate. ICAR items are peer-reviewed and validated, and the scoring algorithm is based on real normative data. If the result surprises you significantly in either direction, consider taking a professional test for confirmation; for most people, though, the online result will be in the right ballpark.