PH1 Core Capability
YEARS EXPERIENCE: 10+ years
TYPICAL CLIENT: Director of Digital, Director of Web Strategy, Director of Recruitment & Admissions, VP Marketing & Communications
NECESSARY TIMELINE: 2 to 3 months
BUDGET: Quoted individually
Our POV
Higher education institutions consistently invest in website redesigns that do not perform as expected — not because the design was careless, but because the direction was chosen without testing it with the applicants it was meant to serve. Internal review panels, stakeholder workshops, and creative reviews are useful for many things. They cannot replicate how a prospective student who has never seen your institution before moves through an experience they were not part of designing.
The cost of a redesign that misses is significant: the production work, the CMS configuration, the content migration, the stakeholder time — all spent on an experience that needs to be redone within eighteen months because the conversion improvement it was supposed to deliver never materialized. The cost of prototyping before build is a fraction of the cost of rebuilding after launch.
PH1's prototyping and testing practice exists to close the gap between what an institution's internal team believes will work and what applicants actually respond to. We have run applicant testing on application flows that seemed obvious to every internal stakeholder and created new friction for every participant in the research session. We have also tested prototypes the internal team was uncertain about and watched applicants navigate them without hesitation. The evidence is almost always different from the assumption.
What We Do
We design prototypes of the highest-priority improvements — typically identified through a journey mapping, benchmarking, or architecture engagement — and run structured testing sessions with real prospective and current students. Prototypes range in fidelity from clickable wireframes to near-production designs, depending on what needs to be validated.
Testing sessions are moderated and conducted remotely or in-person with participants who match your target applicant and student segments. Each session tests specific hypotheses: Does this simplify the flow? Does this content answer the question applicants are asking? Does the AI-assisted answer increase or reduce trust? Is this search experience faster than what it replaces? We compile findings into a clear direction document with specific recommendations on what to build, what to revise, and what to set aside.
What We'll Deliver
Prototype designs of the improvements to be tested (fidelity agreed at kickoff)
Moderated testing sessions with six to twelve participants per prototype, matched to your target applicant and student segments
Session recordings and observation summaries
Findings report with evidence-backed direction on what to build, what to revise, and what to abandon
Revised prototype incorporating testing feedback, ready for final stakeholder sign-off before production
Success criteria and measurement framework for evaluating the improvement once live
Briefing document for development or delivery partners based on validated design
When This Is Essential
Before committing production budget to a major website improvement, redesign, or AI feature
When the institution has a direction it believes in but has not validated with real applicants
When a previous website investment failed to produce the expected conversion improvement
When leadership is divided between two or more directions and needs applicant evidence to make the decision
When an AI-assisted experience is ready for a pilot and the institution needs to understand how applicants will actually respond before it scales
Frequently Asked Questions
How many participants do you need to get meaningful results?
Six to eight moderated sessions with participants matched to a single cohort typically produce clear, actionable patterns. For complex decisions or multiple distinct segments, we run separate testing rounds with each cohort. We are transparent about the limits of what a given sample size can confirm — and when to expand the research versus when the pattern is clear enough to act on.
What fidelity of prototype do we need before testing?
This depends on what you need to validate. For early-stage directional questions — does this navigation structure help applicants find what they need? — a clickable wireframe is sufficient. For questions involving content, tone, or AI-assisted responses — does this answer build or reduce trust? — a higher-fidelity prototype produces more reliable findings. We recommend fidelity based on the specific question being tested.
Can we test improvements to an existing live website rather than a redesign?
Yes — and this is often the highest-value application of the service. Testing targeted improvements to a specific page, flow, or interaction on the existing site typically has a faster path to implementation and a clearer before/after measurement story than a full redesign.
How long does prototyping and testing take?
A focused round — one to two key improvements, one applicant cohort — typically runs four to six weeks from kickoff to findings. If multiple improvements or multiple cohorts are in scope, testing rounds can run in parallel or be sequenced based on priority.
How do testing findings connect to our development or delivery team?
We deliver a briefing document that translates the validated design into implementation specifications — what the experience should do, how it should behave, and what success looks like post-launch. We can also participate in handoff workshops with your delivery team to resolve open questions before build begins.
Combine With These Services
Applicant & Student Journey Mapping Research — use the journey map to identify which improvements are worth prototyping first
Website Architecture & Search Modernization Strategy — validate the proposed new architecture with real applicants before build commits
Benchmark Institutional Performance — use the benchmark scorecard to prioritize which improvements to prototype