Emmi Törnroos

Building a concept through design sprints

Product design

Events are expensive in many ways: globally, 30% of companies spend 20% or more of their marketing budget on events. Yet events are usually the first thing to be cut from the budget. So how do you make events measurable and understand their impact?

Challenge

There was no industry standard for measuring the impact of events, benchmarking events against each other, or doing either consistently. Our goal was to create an industry standard for measuring event impact that would be automated and benchmarkable.

Role

Product Designer

Design sprint facilitation, user research, visual design, prototyping & usability testing

February 2020 - November 2020


Background

About me

As Design Lead, I had begun building our design team from the ground up, and we had just hired another UX designer, Fabiana. Until that point, I had been a UX team of one at Lyyti. I had to learn to collaborate with another designer on the project while building the processes that would allow us to work closely together.

How to measure the impact of an event?

How can you measure whether an event has been successful? The traditional way is to send participants a questionnaire after the event asking whether the food or venue was good. Larger events have KPIs in place, for example leads generated. But how could you compare a board meeting with a large conference?

Event organisers didn’t have a consistent way to measure the impact of their events. Globally, many companies spend up to 20% of their marketing budget on events. How could we help show the impact an event has on the people participating in it, while helping event organisers learn and improve?

There was no industry standard for measuring the impact of events, benchmarking events against each other, or doing either consistently. Our goal was to create an industry standard for measuring event impact that would be automated and benchmarkable. We needed to take a closer look at the common denominator of every successful event: the participants and their experience.

Process

Understanding the problem

Our hypothesis: the thing that connects all events is participants. Participants invest their time in an event and want to get value in return. Based on this, we concluded that the best way to measure the impact of an event would be to ask the participants whether the event was worth the time they spent on it.

To validate the hypothesis, we conducted user and stakeholder interviews to see how our customers would respond to the idea of measuring participants’ subjective feelings about an event.

  • 100% of interviewees felt that how a participant feels about an event is important
  • 50% of interviewees felt that how a participant feels about an event reflects on the company’s image
  • 75% of interviewees felt that measuring events brings value as a way to develop a framework for events

We wanted to involve our key customers in the development of this feature so that it would solve their problems in the best possible way. We conducted interviews and tests with users from these companies throughout the project.

As this was a completely new concept, the project team wanted to align so that we would all be solving the same problem. We also wanted to move fast and test the hypothesis with our users as quickly as possible, to see whether the direction we had chosen was the right one.

I decided that we would run design sprints, because they work best for setting direction on new projects, aligning diverse project teams and gaining speed and efficiency. Design sprints are based on the design thinking methodology, which includes five stages: empathize, define, ideate, prototype and test. I facilitated the design sprint, in which our product manager, designer, developers and other experts took part.


Design Sprint 1: Organiser experience

1. Empathize

I facilitated the design sprint for our team. We defined the guiding goals and mapped out the customer journey for both the organiser and the participant. We also conducted two expert interviews on the idea of measuring event success, which gave us more in-depth insight into proving the effectiveness of events. After writing How-Might-We notes, we decided to focus first on the organiser side of the feature to hone the concept.

2. Define & Ideate

We collected inspiration from existing sources and did a competitor analysis. This confirmed that there was currently no standard for measuring events and that none of our competitors offered a similar feature. We then returned to our map, divided responsibilities amongst ourselves and sketched solutions using Crazy 8’s.

3. Ideate

We vetted and critiqued each solution, and storyboarded the best one.

4. Prototype

We drafted the first prototype of the feature in Figma with the team in a single day. I guided the session, participated in the prototyping and divided responsibilities among us.

5. Test

Fabiana and I conducted usability testing with our customers, capturing their ideas and reactions.

Our hypotheses for the test were:

  • Users might have problems with the scale
  • Users might have difficulty understanding what the feature is

The results from the usability tests clearly showed that:

  • The value provided by the feature was not understood
  • The scale that we initially planned to use was not understood by our users
  • Sharing results was a key feature for our users

Even though further iteration was clearly needed, we were given the green light to continue development, as the feature showed promise of bringing a lot of value to our customers.


Design Sprint 2: Participant experience

I facilitated another design sprint focusing on the participant experience. As our event organisers are usually very busy, the survey would be sent to event participants automatically after the event ended. The survey should be simple and contain a single question asking the participant to score the event, and that score should be comparable across all events in Lyyti.

Making the survey as easy as possible to answer was crucial for the feature’s success. In our initial research, we had found that the average response rate for regular surveys is about 15%. One of the goals of the feature was to measure the impact of an event, and to do that reliably we needed as high a response rate as possible.

We pondered whether the survey should be sent to participants by SMS or by email. Both had their pros and cons. Emails are easy to ignore and can end up in the spam folder, but they are easy and free to send. Text messages grab the user’s attention, but each message sent costs money.

To achieve the highest possible adoption, we decided to go with email. We didn’t want cost to be the deciding factor in whether the feature was taken into use.


Working with developers

In this project we had fairly strict technical limitations: Lyyti is a 14-year-old product built with old technology. Building an automated feature on top of it was technically challenging, as our architecture wasn’t really designed for that.

I worked on all aspects of the project’s design, including its visual, UX and motion design, in close collaboration with Fabiana.

We collaborated closely with the developers on a daily basis, helping and guiding them and jumping in whenever design issues arose.

I did multiple design reviews during the project to ensure that the finished product was in line with the designs and that the user experience was up to par.

I feel it’s important to include developers in a project from the beginning, so that we can build a feasible product from the start. It’s also important to balance technical feasibility with good user experience: if a solution takes 5 weeks to develop and a small tweak can reduce that to 5 days, I think we should definitely do it.

Launching the feature

Closed beta

We continued to iterate on the concept based on customer feedback before it was released to beta. Four months into the project, at the beginning of June, we launched the feature in closed beta to the key users we had interviewed at the start. We monitored how much they used the feature and listened to their feedback to develop it further, for example by changing who the survey would be sent to and adding a preview of the survey.

Open beta

At the beginning of August, we launched the feature in open beta, which our users could sign up for. We set up analytics to follow usage, monitored heatmaps and adoption of the feature, and went through user feedback on a weekly basis. We tackled bugs and UX problems quickly.

Launch

In November, we launched the feature for everyone. We followed up on usage, heatmaps and adoption, and went through user feedback on a weekly basis. We developed the feature further after launch based on user feedback and our product vision, for example by making it possible to share the results as an online report.


Results and key takeaways

We successfully launched a feature that received 15,000 responses from participants in the first 10 days after launch, achieved an amazing response rate of almost 40% globally (while regular surveys average about 15%) and was adopted by over 450 customers.

The value our users get out of the feature:

  • Standardised, automated and benchmarkable data about events
  • Tools for organising better events for participants
  • The ability to compare events across industries, event types and organisers

This was a fun and exciting project for me to work on, because it provided real value to our users and involved a lot of research and close collaboration with our customers. However, shifting priorities and changing roadmaps (due to the COVID-19 situation that began in March 2020) delayed the launch of the feature. Changing priorities also meant the scope of the project changed, and we couldn’t include everything in the first release. I had to adapt to those changes and still deliver the best design.

Fight for good user experience

I had to work under a tight deadline and very strict technical constraints, yet still fight for what I believe is essential to a good user experience. I encountered resistance from management, especially around the discovery phase, because they wanted to start development as soon as possible. I learned how to define a true MLP (Most Loveable Product) versus something that is neither usable nor shippable.

Choosing what we won’t do

There were many great use cases we could have tackled with a richer feature set, for example sharing the survey results with other stakeholders or allowing our users to edit the survey content. However, every single one was unrealistic within the timeframe we had.

Best designs come from collaboration

It’s amazing to see each person bring a different perspective to the table during a design sprint, all the way from defining the end goal to doing the user testing. We combined business insight with our design know-how and technical savvy to create a remarkable end result.
