We don’t start with the model. We start with the mission.
Portable’s approach to AI is built for high-stakes systems where real people interact with services that are meant to help them, not harm them. We work in areas that demand clarity, care, and creativity in equal measure.
We’re a team of engineers, designers, researchers, policy thinkers, and strategists. We work with our partners and with the people who use the systems we build to define the right problem, select the right tools, and shape AI systems that are safe, scalable, and built for real-world complexity.
From housing eligibility to mental health, family law to planning policy, we’re preparing ourselves and our partners to make thoughtful, informed decisions about what AI should do.
Why we’re watching the AI Engineers World Fair
The 2025 AI Engineers World Fair brought together the builders behind the AI systems reshaping infrastructure, services, and everyday life. From Anthropic and OpenAI to public service technologists and platform architects, the Fair has become one of the most influential events for those working at the intersection of AI and engineering practice.
We followed reporting on the Fair to understand what industry practitioners are focusing on, where investment and innovation are headed, and what’s needed right now for safe, scalable deployment. The conversations confirmed what we’ve been seeing in our own work.
The most meaningful developments in AI are no longer just technical. They’re strategic, ethical, and deeply human.
For us, this event is a pulse check on what it takes to build AI that works in the real world. We’re shaping Portable’s AI strategy to support the institutions, policies and communities that rely on technology to deliver essential services.
We saw four key areas of momentum. Each one aligns with where we’re building capability, experimenting internally, and investing in the skills and systems needed to deliver responsible, production-grade AI.
1. Evaluation and observability are primary design functions, not add-ons
How do you know if an AI system is working?
Not just working technically, but working in context, meeting expectations, aligning with values.
At Portable, we’re expanding our concept of evaluation. It’s not just about accuracy or latency. It’s about fairness, clarity, usefulness and trust. These aren’t metrics you retrofit. They’re design considerations from day one.
We’re developing frameworks that blend engineering observability with human-centred research. We’re building shared ways to measure performance, trace reasoning, and log when things feel confusing or incomplete.
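To make that concrete, here’s a minimal sketch of what a multi-dimensional evaluation record could look like. The dimension names, flags, and review threshold are illustrative assumptions, not a finished framework:

```python
# A sketch of an evaluation record that scores interactions on several
# dimensions and flags the ones a human should review. All names and
# thresholds here are assumptions for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class EvalRecord:
    """One evaluated interaction, scored beyond accuracy alone."""
    prompt: str
    response: str
    scores: dict[str, float]                        # e.g. {"accuracy": 0.9}
    flags: list[str] = field(default_factory=list)  # e.g. ["confusing"]
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def needs_review(record: EvalRecord, threshold: float = 0.6) -> bool:
    """Route flagged or low-scoring interactions to a human reviewer."""
    return bool(record.flags) or min(record.scores.values()) < threshold


record = EvalRecord(
    prompt="Am I eligible for housing assistance?",
    response="Based on the criteria you provided, you may be eligible...",
    scores={"accuracy": 0.92, "fairness": 0.85, "clarity": 0.55, "usefulness": 0.8},
    flags=["incomplete"],
)
print(needs_review(record))  # True: low clarity and a flag both trigger review
```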
This is part of how we deliver on our value that AI success isn’t just about technology: it’s about governance, design, and trust.
2. Agents are coming to make our jobs easier, not take them (if they’re designed well)
AI agents can perform multi-step tasks, call tools, and make decisions based on context. They offer efficiency, but they also introduce new kinds of risk.
We’re exploring agentic workflows in internal tools first. That gives us a space to test assumptions, observe failure points and design fallback patterns that keep humans in control.
We’re learning how to design interfaces that build trust, not mystery. Interfaces that help people understand what actions are being taken, and why. And we’re considering how human oversight, optionality and intervention need to be baked in from the start.
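As a sketch of what keeping humans in control can look like in code, here’s a minimal fallback pattern where side-effecting actions require explicit approval before they run. The tool names and the stdin-based confirmation are stand-ins for a real approval interface:

```python
# A human-in-the-loop sketch: side-effecting actions need explicit
# sign-off, and withheld approval falls back to a safe no-op. The
# tools and confirm() mechanism are illustrative assumptions.
from typing import Callable

SIDE_EFFECTING = {"send_email", "update_record"}  # actions needing sign-off


def confirm(action: str, args: dict) -> bool:
    """Stand-in for a real approval UI; here it just prompts on stdin."""
    answer = input(f"Allow '{action}' with {args}? [y/N] ")
    return answer.strip().lower() == "y"


def run_step(action: str, args: dict, tools: dict[str, Callable]) -> str:
    """Execute one agent step, pausing for approval where it matters."""
    if action in SIDE_EFFECTING and not confirm(action, args):
        return f"Skipped '{action}': human approval withheld."
    return tools[action](**args)


tools = {
    "lookup_policy": lambda topic: f"Policy summary for {topic}",
    "send_email": lambda to, body: f"Sent to {to}",
}
print(run_step("lookup_policy", {"topic": "housing eligibility"}, tools))
```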
Our investment here is driven by the belief that Portable is a leader’s leader for AI impact. We help partners navigate not just what’s possible, but what’s appropriate.
3. MCP opens a new design frontier
The Model Context Protocol (MCP) gives AI systems a standard way to discover and call tools described with structured metadata. It reduces brittle glue logic and speeds up orchestration. But it also changes the shape of the work.
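For a feel of what this looks like in practice, here’s a minimal MCP server sketch using the FastMCP helper from the official Python SDK (`pip install mcp`). The server name, tool, and policy numbers are invented for illustration; the point is that the tool’s signature and docstring become the structured metadata a model reasons over:

```python
# A minimal MCP server sketch. The type hints and docstring are exposed
# to clients as structured tool metadata; the tool itself and its
# income-cap values are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("eligibility-demo")  # hypothetical server name


@mcp.tool()
def check_income_threshold(annual_income: float, household_size: int) -> bool:
    """Return whether a household falls under the (illustrative) income cap."""
    cap = 50_000 + 10_000 * (household_size - 1)  # assumed policy values
    return annual_income <= cap


if __name__ == "__main__":
    mcp.run()  # serves the tool over MCP's default stdio transport
```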
We’re building capability to operate in this space. This is where technical design and interaction design meet. It’s where service patterns, interface patterns, and decision patterns all converge.
Portable is preparing to meet that complexity, not avoid it.
4. RAG + knowledge graphs = explainability with structure
Retrieval-augmented generation (RAG) pulls relevant chunks of information from source material. Combined with knowledge graphs, it creates systems that don’t just answer questions, but can explain their reasoning and remain domain-appropriate.
This matters in domains where rules, context, and structure shape decisions, like planning, justice, or health. It also matters for public trust.
We’re developing approaches to encode real-world relationships. Relationships that reflect how planners think, how caseworkers explain options, or how policy officers interpret legislation.
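Here’s a toy sketch of the pattern: retrieval grounds an answer in a source chunk, and a small graph of (subject, relation, object) triples supplies the relationship behind it. The retriever, triples, and matching logic are deliberately simplistic assumptions:

```python
# A toy RAG-plus-knowledge-graph sketch: the answer cites both the
# retrieved chunk and the graph relationship it relied on. Content
# and matching are invented for illustration.
CHUNKS = {
    "doc1": "Clause 4.2: secondary dwellings require a planning permit.",
    "doc2": "Clause 5.1: heritage overlays restrict external alterations.",
}

# Knowledge graph as (subject, relation, object) triples.
TRIPLES = [
    ("secondary dwelling", "requires", "planning permit"),
    ("heritage overlay", "restricts", "external alterations"),
]


def retrieve(query: str) -> tuple[str, str]:
    """Toy keyword retriever: return the chunk sharing the most words."""
    words = set(query.lower().split())
    return max(CHUNKS.items(), key=lambda kv: len(words & set(kv[1].lower().split())))


def explain(query: str) -> str:
    """Answer with provenance: the source chunk plus matching triples."""
    doc_id, chunk = retrieve(query)
    relations = [t for t in TRIPLES if t[0] in chunk.lower()]
    lines = [f"Answer grounded in {doc_id}: {chunk}"]
    lines += [f"  because: {s} {r} {o}" for s, r, o in relations]
    return "\n".join(lines)


print(explain("Do I need a permit for a secondary dwelling?"))
```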
These are systems that require human-centred information design, semantic modelling, and a deep understanding of context. That’s where our multidisciplinary strength and expertise in human-centred systems design come in.
What we’re building toward
We’re not here to be first. We’re here to be thorough, strategic, and responsible with this incredibly powerful set of technologies.
We’re investing now in the people, practices and platforms that will allow us to work confidently in production AI systems. We’re strengthening our ability to ask the right questions, design for real use, and evaluate with both technical and ethical clarity.
We experiment to understand uncertainty. And we implement within safe constraints, because our proof is not in concepts; it’s in production.
If you care about this too, let’s build together
If you’re a policymaker, service lead, or product owner thinking about AI, we’d love to talk. Especially if you care about the things we care about:
- Human-centred evaluation
- Design-led tool orchestration
- Transparent workflows
- Public trust and transparency
- Co-design and real-world impact
We bring governance, design, and technology together to build AI systems that work in every sense of the word.
Let’s design the future responsibly. Get in touch.