In previous blog posts, we’ve discussed our collaboration with Digital Health and Care Wales on the Digital Maternity Cymru programme as they aim to transform maternity services and replace paper notes with digital versions that can be accessed on a device such as a smartphone or computer.

In our last blog post, Vic Smith, Service Designer at CDPS, talked about the creation and testing of a service pattern for maternity appointments. Our focus for this phase of work has been to test early digital design concepts that support women and pregnant people as they enter and exit the digital maternity service. This means we can provide a set of recommendations to support the configuration and implementation of a new digital maternity portal.

We used concept testing for this phase of work.

What is concept testing?

Concept testing involves sharing early concepts – initial prototypes – to get user reactions and to understand what problems (if any) the concept solves for users.

In concept testing, low to mid-fidelity prototypes act as stimuli to help the project team:

  • dig deeper into the problem space
  • generate insights by observing and uncovering user reactions, perceptions and feelings towards the concept

It’s the perfect next step when you have early-stage ideas that aren’t developed enough for usability testing, but you need to evaluate whether your ideas and assumptions align with user needs.

Concept testing is very different from usability testing, which involves presenting high-fidelity prototypes or live services.

How we designed a concept test

For this project, we used a mixture of hypotheses, scenarios, and screen mockups.

Designing from hypotheses

As Ben Holliday writes in his blog ‘Everything is hypothesis driven design’, hypotheses articulate the thinking behind design ideas. They help the team build a shared understanding of why we’re doing the work, and of which insights from service evaluation led us to these assumptions and designs.

We created a set of hypotheses – or assumptions – based on insights from service evaluation. They were written in a way that articulated our understanding of the problem and what changes we’d expect to see because of the new designs or concepts.

For example:

Currently, users struggle to find specific information from the large amount of data provided. They often seek information independently for specific needs, but it isn't always sourced from reputable sources or accurate.

If users can easily access organised information relevant to upcoming events or their current stage of pregnancy...

Then they are more likely to consume this information...

Because it would be quicker and easier to access compared to finding the information independently.

Writing our assumptions as hypotheses lets us prove or disprove them, and gives us a framework to test our designs against.

So, we had a set of hypotheses centred around the needs of users as they enter and exit the maternity service. We then needed to create a stimulus to test our design hypotheses.

Technical challenges of testing

There are many ways to test concepts with users, from clickable digital screens and storyboards to roleplaying scenarios with actors.

There are challenges with any approach. In our case, they came from using both scenarios and digital screen designs, testing remotely through video conferencing software, and choosing to guide the participants through the prototypes ourselves so they could focus solely on providing feedback.

We performed 3 tests and “dry runs” to work out the approach that would give the best experience for both researcher and participant, and to troubleshoot any issues that might arise.

How we tested the concepts

In many (if not most) cases, prototypes are developed to test aspects more associated with usability. For example, taking the user through a checkout system or an appointment booking service and gathering feedback on how easy it was to complete their tasks.

On this project, however, the prototype was acting as a vehicle for testing concepts, so a more nuanced delivery was needed. We went through quite a few versions to find how detailed or granular our prototypes needed to be. If the prototype was too detailed, we ran the risk of participants focusing on specifics such as language, visuals or icons. If it was too low fidelity, participants could find it harder to visualise the concepts without context.

Good design is developed through iterating – starting broad and experimenting with different ideas, versions, and approaches, before gradually adding more finesse as it progresses. This project definitely subscribed to that approach – it required plenty of experimenting and toing and froing to hit the sweet spot we were aiming for.

We initially encountered challenges with the concept testing process, particularly because the team were accustomed to conducting traditional usability testing. Typical usability questions like “What would you expect to see?” or “What do you think of this?” weren’t suitable for concept testing, and it became evident that leading questions such as “Do you like this idea?” or “Would you use something like this?” could bias the participants’ responses.

Working closely with the team was essential, especially since none of us had prior experience with this type of testing. Despite the complexities and unfamiliar terrain of the prototyping journey, we made numerous iterations, and we’re immensely pleased with how effectively we worked together to design something that was both clear and purposeful.