By Dr Daphne Hazell, CEO, Primary Care Research Alliance
Introduction
When we first discussed building the PCRA Portal at the conference, it wasn’t about technology. It was about solving a very practical problem:
GP practices want to participate in research but lack structured access
Sponsors struggle to identify and engage capable sites
Feasibility, communication, and governance are fragmented
The portal was designed to connect these three groups into a single, working system.
What followed was not a typical software build. It became a lesson in governance, clinical reality, and how digital tools actually need to function in NHS-facing research environments.
Where It Started
The initial concept came together following PCRA planning work and conference discussions focused on:
Practice onboarding and profile creation
Shared documentation and contracts
Feasibility data collection
A “library of opportunities” for research engagement
At that stage, the portal was intentionally ambitious — but also undefined in many critical areas.
The early proposal positioned the portal as a phased build, with a focus on:
Getting practices live quickly
Introducing AI-assisted profile population
Creating a clean sponsor-to-site interaction flow
But in reality, this was just the starting point.
The Reality of Building in a Regulated Environment
One of the first major shifts was recognising that this could not be built like a typical tech product.
A Data Protection Impact Assessment (DPIA) became the gating step before further development.
That single decision changed the trajectory of the build:
Governance became foundational, not an add-on
Data flows had to be explicitly mapped before development
Commercial confidentiality (not just GDPR) became a primary design constraint
Critically:
The portal was designed not to handle patient-identifiable data at all
AI functionality was restricted, controlled, and human-led
Document access (especially NDA-bound material) required auditability
This was less about compliance and more about making the system usable in real-world sponsor environments.
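The auditability requirement can be illustrated with a minimal sketch (names and fields are illustrative assumptions, not the portal's actual implementation): before an NDA-bound document is released, the system records both the acknowledgement and the access event in an append-only trail.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessLog:
    """Append-only audit trail for document access events."""
    entries: list = field(default_factory=list)

    def record(self, user_id: str, document_id: str, event: str) -> None:
        self.entries.append({
            "user_id": user_id,
            "document_id": document_id,
            "event": event,  # e.g. "nda_acknowledged", "document_opened"
            "at": datetime.now(timezone.utc).isoformat(),
        })

def open_document(log: AccessLog, user_id: str, document_id: str,
                  nda_acknowledged: bool) -> bool:
    """Release an NDA-bound document only after explicit acknowledgement,
    logging the acknowledgement, the access, and any refusal."""
    if not nda_acknowledged:
        log.record(user_id, document_id, "access_refused_no_nda")
        return False
    log.record(user_id, document_id, "nda_acknowledged")
    log.record(user_id, document_id, "document_opened")
    return True
```

The point of the sketch is the ordering: the acknowledgement is captured in the same workflow as the access itself, so every opened document carries its own evidence trail.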
From “Website” to Operational System
Very quickly, it became clear that this was not a website — it was an operational platform.
At its core, the portal now connects:
Practices — building structured research profiles
Sponsors — submitting and managing study enquiries
PCRA team — overseeing approvals, matching, and system flow
But the complexity sits behind that simplicity.
Key features that emerged during build:
1. Structured Practice Profiles
Not just listings, but structured data that can support feasibility and matching.
2. Controlled Document Sharing
Study-level document allocation
Confidentiality control at point of access
NDA acknowledgement built into workflow
3. Feasibility as a Live Dataset
Instead of one-off questionnaires, feasibility data is stored and reused across studies.
4. Admin Oversight
A full administrative layer to:
Approve practices
Manage enquiries
Track activity and decisions
This turned the portal into a managed ecosystem, not a passive directory.
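The "feasibility as a live dataset" idea can be sketched as follows (a simplified illustration under assumed field names, not the portal's actual schema): each practice holds one persistent feasibility record, and a new study enquiry becomes a query against it rather than a fresh questionnaire.

```python
from dataclasses import dataclass, field

@dataclass
class PracticeProfile:
    """One persistent record per practice; feasibility answers are
    stored once and reused across studies, not re-collected each time."""
    name: str
    list_size: int  # registered patient population
    feasibility: dict = field(default_factory=dict)  # e.g. {"has_research_nurse": True}

def match_practices(practices: list[PracticeProfile],
                    criteria: dict) -> list[PracticeProfile]:
    """Return practices whose stored feasibility data already satisfies
    a study's criteria -- no new questionnaire round needed."""
    return [p for p in practices
            if all(p.feasibility.get(k) == v for k, v in criteria.items())]
```

Each question a practice answers enriches the shared dataset, so the next study's feasibility check starts from existing data instead of zero.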
The AI Question (and Why It’s Different Here)
AI was never treated as an optional feature.
From early design discussions, it was clear it needed to be:
Built into the architecture from the start
Limited to advisory, non-decision outputs
Always subject to human oversight
However, an important constraint emerged:
Much of the most valuable sponsor material sits under NDA – and cannot be processed by AI systems.
This created a hybrid model:
AI supports administrative efficiency (profiles, triage, structuring data)
Human review governs anything commercially sensitive
This balance is essential in clinical research — and very different from consumer AI applications.
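The hybrid model above reduces to a simple routing rule, sketched here as an illustration (labels are assumptions): anything flagged as NDA-bound bypasses AI processing entirely, and even permitted material only ever produces advisory output queued for human review.

```python
def route_for_processing(item: dict) -> str:
    """Decide how a piece of sponsor material is handled.
    Returns "human_only" or "ai_advisory"."""
    if item.get("nda_bound"):
        # Commercially sensitive material is never sent to an AI system.
        return "human_only"
    # AI may structure or triage this item, but its output remains
    # advisory and is queued for human sign-off before any decision.
    return "ai_advisory"
```

Keeping the rule this blunt is deliberate: a binary gate on the NDA flag is easy to audit, whereas a nuanced classifier would itself become a governance risk.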
What Took the Time (and Why It Was Necessary)
From the outside, a portal can look like a simple build.
In practice, the time went into five things:
1. Defining the Real Workflow
Not what a system could do — but what actually happens between sponsors and practices.
2. Governance First, Not Last
Building with DPIA, DPO input, and auditability from day one.
3. Rewriting Scope Mid-Build
Key functionality (matching, document handling) had to be brought forward as core features.
4. Designing for Scale
The portal isn’t for one study — it supports a network of:
150+ practices
~8 million patient population
Ongoing growth
5. Avoiding “Dead Data”
Ensuring every piece of information (profiles, feasibility) feeds future decisions — not one-off tasks.
What We Learned
Looking back, several lessons stand out:
1. Start With the Problem, Not the Product
The strongest parts of the portal came from real operational pain points.
2. Governance Is Not a Barrier — It Improves Design
It forces clarity in data flows, responsibility, and risk.
3. AI Needs Constraints in Healthcare
Without clear boundaries, it creates more risk than value.
4. Simplicity at the Front Requires Complexity at the Back
A clean user experience depends on a highly structured backend.
5. Build for How People Actually Work
Not how systems assume they should.
Where It Goes Next
The portal is now live and operational, but still evolving.
Next phases focus on:
Smarter matching between studies and sites
Enhanced sponsor visibility and interaction
Gradual expansion of AI support — driven by real-world usage, not timelines
Importantly, this will not be a time-based rollout.
AI maturity will be based on confidence, governance, and demonstrable value — even if that takes longer than expected.
Final Thought
The PCRA Portal was never about building software.
It was about building infrastructure for clinical research in primary care —
One that works for practices, sponsors, and governance requirements at the same time.
That is why it took time.
And that is why it will scale.
