A structured expert assessment of how effectively the website serves its users’ needs — evaluating navigation, information architecture, content clarity, interaction design, page flow and error handling against established usability principles to identify friction points that cause confusion, frustration or abandonment.
A UX review focuses on the quality of the overall user experience — how easily users can achieve their goals, and whether the site is intuitive, clear and reliable. A conversion optimisation review focuses specifically on increasing the percentage of visitors who complete defined conversion goals. The two overlap, but a UX review has the broader scope.
Ten established usability principles (Nielsen's heuristics) used as an evaluation framework: visibility of system status, match between the system and the real world, user control and freedom, consistency and standards, error prevention, recognition rather than recall, flexibility and efficiency of use, aesthetic and minimalist design, help users recognise, diagnose and recover from errors, and help and documentation. A heuristic evaluation checks the site against each principle in turn.
User testing involves observing real representative users attempting to complete defined tasks on the website, revealing usability issues from actual user experience. Expert review (heuristic evaluation) involves an expert systematically assessing the interface against established principles without user participation. Both methods identify usability issues but from different perspectives.
Ambiguous navigation labels (menu items that don't clearly signal their destination), forms with weak validation or unhelpful error messages, broken or misleading links, content that assumes too much prior knowledge from the visitor, inconsistent behaviour across interactive elements, page flows that don't progress logically towards the user's goal, and mobile navigation that is difficult to use.
Research by the Nielsen Norman Group indicates that five users will uncover approximately 85% of a site's usability issues in a single round of testing, with each additional user yielding diminishing returns. The optimal approach is iterative — test with five users, fix the issues identified, then test again with a fresh group of five.
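The 85% figure follows from Nielsen and Landauer's problem-discovery model, in which the share of issues found by n users is 1 − (1 − λ)^n, with λ ≈ 0.31 being the average per-user discovery rate they reported. A minimal sketch (the λ value is their published average, not a property of any particular site):

```python
def coverage(n_users, lam=0.31):
    """Expected share of usability problems found by n_users,
    per the Nielsen/Landauer model: 1 - (1 - lam)^n."""
    return 1 - (1 - lam) ** n_users

# Five users find roughly 84-85% of problems; returns flatten quickly.
for n in (1, 3, 5, 10):
    print(n, round(coverage(n), 2))
```

This is also why the iterative approach works: a second round of five users, run after the first round's issues are fixed, spends its discovery budget on problems the first round missed (or introduced).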
A user research technique where participants organise topics or content categories (written on cards) into groups that make sense to them. Card sorting reveals how users categorise information in their own mental model — informing navigation structure, category naming and information architecture decisions that match user expectations rather than internal business logic.
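A common way to analyse card-sort results is a co-occurrence count: how often each pair of cards was placed in the same group across participants. Pairs grouped together by most participants belong together in the navigation. A minimal sketch using hypothetical card names and results:

```python
from collections import Counter
from itertools import combinations

def co_occurrence(sorts):
    """Count how often each pair of cards was grouped together.
    Each sort is one participant's result: a list of groups,
    each group a list of card names."""
    pairs = Counter()
    for groups in sorts:
        for group in groups:
            for pair in combinations(sorted(group), 2):
                pairs[pair] += 1
    return pairs

# Hypothetical results from two participants
sorts = [
    [["Pricing", "Plans"], ["Blog", "Guides"]],
    [["Pricing", "Plans", "Guides"], ["Blog"]],
]
print(co_occurrence(sorts)[("Plans", "Pricing")])  # 2: both grouped them
```

Dividing each count by the number of participants gives an agreement percentage, which is how most card-sorting tools present the similarity matrix.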
A research method that presents users with a text-only version of the site’s navigation hierarchy (the ‘tree’) and asks them to find specific content or complete specific tasks by navigating through it. Tree testing isolates navigation usability from visual design and reveals where category groupings or labels are causing confusion.
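Tree-test results are typically summarised by two metrics: the success rate (participants who found the right node) and directness (those who succeeded via the shortest path, without backtracking). A minimal sketch with hypothetical task data:

```python
def tree_test_metrics(results):
    """Summarise one task's tree-test results.
    Each result is (success, path, shortest_len): whether the
    participant reached the correct node, the nodes they visited,
    and the length of the shortest correct path."""
    n = len(results)
    successes = sum(1 for ok, _, _ in results if ok)
    direct = sum(1 for ok, path, shortest in results
                 if ok and len(path) == shortest)
    return {"success_rate": successes / n, "directness": direct / n}

# Hypothetical results: four participants looking for 'Returns'
results = [
    (True,  ["Home", "Support", "Returns"], 3),
    (True,  ["Home", "Shop", "Home", "Support", "Returns"], 3),  # backtracked
    (False, ["Home", "Shop", "Basket"], 3),
    (True,  ["Home", "Support", "Returns"], 3),
]
print(tree_test_metrics(results))
```

A high success rate with low directness is the telltale pattern of a confusing label: users get there eventually, but only after trying the wrong branch first.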
A structured report identifying usability issues categorised by severity (critical, serious, minor), annotated screenshots illustrating each issue, explanation of the user impact, recommended solutions for each issue, and a prioritised implementation roadmap. Some reviews include video clips from user testing sessions illustrating specific usability failures.
After significant new feature releases or content restructures; when analytics reveal rising exit rates or declining engagement on key pages; when the audience changes meaningfully (a new market or a different customer type); and as part of a planned annual website health-check programme. UX quality degrades over time if it is not actively maintained.