For almost two decades, Portable has taken on projects across a range of social impact areas including access to justice and healthcare. In that time, we’ve learned that innovation is rarely siloed; an idea in one field can spark breakthroughs in another. This article tells one of those stories.
In 2024, Portable partnered with Cancer Council to design the Cancer Council Navigation Service. As part of this work, we carried out in-depth discovery research with staff and community stakeholders across Australia, exploring their systems and processes. It was during this process that we encountered the concept of patient-reported measures.
About patient-reported measures (PRMs)
Patient-reported measures can be divided into two categories:
- Patient-reported experience measures (PREMs): measure a patient’s perception of their personal experience of receiving healthcare (Casaca et al., 2023). These self-reported questionnaires ask patients to report on the extent to which certain predefined processes occurred during an episode of care. An example of this is the Emergency department patient reported experience measures (PREMs) survey, which asks about communication, decision-making, pain management and so on.
- Patient-reported outcome measures (PROMs): measure a patient’s views of their symptoms, their functional status, and their health-related quality of life at a single point in time, and are also collected through self-reported survey instruments (Casaca et al., 2023). Originally conceived as a research tool, their potential value in clinical practice and as a mechanism to improve healthcare quality and promote patient-centred healthcare delivery has been increasingly recognised (Anderson et al., 2024). The Australian Commission on Safety and Quality in Health Care lists hundreds of validated generic and condition-specific PROMs. They can also take many forms, such as Likert scales or body diagrams.
What struck me most about this framing was its elegant simplicity. As concepts, PROMs and PREMs are clear enough to be easily understood, yet robust enough to drive meaningful change, especially when applied at scale. Their enduring global use over decades also shows that they are far from a passing idea. This feels particularly relevant to legal services, where evaluation is often siloed and contested.
“[Patient-reported measures] just helps me feel like I'm part of a team addressing my health” – 66 y/o woman, Dartmouth Cancer Center
How PRMs are changing healthcare
These measures give voice to what matters most to patients, improving care at multiple levels:
- Frontline: PROMs and PREMs give practitioners on the frontline real-time feedback that can directly shape what care is provided and how it’s delivered. For example, at the Dartmouth Cancer Center where participation rates are as high as 80%, patients complete PROMs before their appointments, review the results with their clinicians, and then provide post-visit feedback via two sets of PREMs (Nelson, 2024).
- System-wide: Aggregated data from PROMs and PREMs can also support benchmarking, quality improvement and policy development. Notably, in New South Wales the Agency for Clinical Innovation recently celebrated its 200,000th PRM data point. With data at this scale, operational staff, researchers and policy makers can make data-driven decisions that remain closely aligned with patient priorities.
Could this work in the legal sector?
After learning more about PRMs, I became interested in whether this framing could be translated to the legal context.
The evaluation of legal service quality remains a hotly debated area within legal scholarship, with several academics suggesting approaches and highlighting the challenges that make it difficult. Taking lessons from other sectors is not new either. In fact, researchers in the 1970s offered a framework largely modelled on healthcare systems and the social and behavioural sciences (Saks and Benedict, 1977). Yet despite this groundwork, as Linna (2020) observes, 'the legal industry has not undergone a quality movement'. The time feels ripe for change.
To move forward, several recurring challenges in evaluating legal services need to be addressed:
- Causality: outcomes are often shaped by factors outside a lawyer’s control, making attribution difficult. For example, whether a client secures housing may depend more on extra-legal factors such as available stock and waiting lists than on legal advocacy itself (Curran & Crockett, 2013).
- Surface-level metrics: measures such as time taken, case numbers, or costs often mislead or miss what truly matters (Curran & Crockett, 2013). Some frameworks, such as 'Legal Department Metrics: Understanding and Expanding Your Impact' (PwC, 2020), offer more holistic approaches by linking measures to broader strategic objectives and are a good example of moving beyond the numbers.
- Client perspectives: evaluation tools often ignore the voices of marginalised or hard-to-reach clients (Curran & Crockett, 2013). Any meaningful mechanism must be accessible, inclusive, and reflect the priorities of those with legal need. For clients who are unable to read or write, for example, self-reported questionnaires will be an ineffective way to collect feedback.
- Data consistency and comparability: varying definitions and coverage across providers and jurisdictions make aggregation and benchmarking difficult (Smith and Patel, 2010).
- Timing and monitoring: evaluation requires upfront agreement on what to measure, how often, and how to ensure the accuracy and relevance of data over time.
This list might feel long, but these challenges are not unique to legal services. Healthcare has faced and worked through many of the same issues, suggesting that progress is possible. What is missing, though, is a unifying movement: a framework with the clarity, simplicity, and memorability of PRMs that can cut across jurisdictions, be repeated at scale, and embed itself as the standard way we measure the impact of legal services.
Bringing PRMs into law: what will it take?
To answer this question, we spoke with one of the pioneers in the PRMs movement who had witnessed its evolution since the 1970s and 80s. While they expressed optimism about the potential of PRMs, they also noted a persistent gap between how these measures should be used and how they are actually implemented. To avoid repeating these pitfalls in the legal sector, several lessons stand out:
- Separate experience from outcomes: Think in distinct categories, rather than assuming evaluation can capture everything at once.
- Collate existing evidence: Map the current landscape of evaluation in legal services by bringing together and meta-analysing the scattered guidance already produced by researchers, private providers, and the third sector.
- Co-design across disciplines: Involve clients, lawyers, policymakers, and researchers to ensure tools are practical and widely adopted.
- Find change champions: Identify credible, respected, and charismatic advocates who can build momentum.
- Pilot a proof of concept: Partner with a highly regarded legal service (e.g. a community legal centre, a pro bono practice, a firm, an in-house team, or even a court or tribunal) willing to test the model in practice.
- Develop a multi-level implementation strategy: Align incentives across funders, practitioners, and clients to ensure systemic uptake.
- Embed feedback loops: Create space within pilots and across sites (e.g. through “Learning Labs”) to share insights from what works.
- Embed feed-forward practices: Help clients clarify their goals upfront and align service delivery with what matters most in their legal journey.
Too often, evaluation in the legal sector defaults to what’s easiest to measure rather than what’s most meaningful. But if healthcare has managed to build a global quality movement around PRMs, so can the legal sector.