Not All Research Is The Same

February 1, 2019

Why Quality Matters & How to Make Sure Your Research Has It

What Makes Research Quality?

Conducting a research program is an investment. It’s an investment of time, budget, and personnel. It is alluring to try to cut these costs as much as possible. Efficiency is not a bad metric to strive for, but occasionally we trim our budgets and timelines, or overload workloads, a little too much. When you cut too many corners, quality suffers. Quality research is the difference between knowing what to do next and taking a risk on a hunch.

Is it better to save up for the work? To find ways to combine initiatives and pool money internally with stakeholders who have similar needs?

Often a company has a core set of users who interact with different products in the portfolio. Could you share recruiting efforts, enlisting the same participants for two different studies and fielding those studies together to save costs? These strategies are not always applicable, but when they are, they can double the ROI without sacrificing the quality of the work!

All this to say: some research is not always better than no research. It’s worth making sure the study is set up properly to get the most value out of it that you can. So, what makes research “quality”? We’ll walk through five research values that are critical to developing quality research.

Rigorous

There are four components to consider when determining whether a research program is conducted rigorously:

  • Experiment design
  • Execution of the data collection
  • Analysis
  • Documentation of the findings

Experiment Design

Experiment design (or approach) is the overarching plan for the research. This is the core of creating a rigorous research program. Using varied methods to collect information and using multiple sources to triangulate insights adds rigor to a research program, helping researchers get at the truth and have confidence in their findings.

Execution of the Data Collection

Research that is executed consistently throughout the experiment is rigorous because it decreases the risk of inconsistent sessions, which affect the quality and validity of the findings. Research execution is all about sticking to the plan (the experiment design). A repeatable research program means that the study could be run a second time with a completely new research team, new participants (who meet the same criteria), and the same methodological approach, and produce the same results.

Analysis

A thorough analysis plan contributes to a program’s rigor by decreasing bias and supporting the program’s repeatability. A structured analysis allows you to be sure that you haven’t missed anything within the data: no overlooked meaning or misunderstood context. Affinity diagramming, thematic analysis, and open coding are just a few of the common analysis structures you could leverage to analyze your qualitative data. However, how you actually conduct the activity determines whether it is rigorous. It comes down to being able to return to the analysis and trace where insights came from, so that another researcher could easily double-check your work. You first design the experiment, then create an analysis plan to confirm that the experiment will get you the information and depth you need.

Documentation of the Findings

A rigorous study documents the findings in such a way as to tell the story of the data without distilling out the context required to understand the meaning of the findings. To truly understand an insight, we need all of the surrounding information (context) to ensure we are interpreting the meaning correctly. “Streamlining the story”, “distilling the findings”, “simplifying the report”: however you want to say it, you sacrifice precious information your audience needs to fairly interpret the findings.

For example, I am often asked, “What is the one thing that will differentiate my product?” The streamlined version of a finding may be “incorporate no grooves/slots/crevices” when designing appliances, but that leaves out the context: “I want all appliances to be able to be wiped clean.” Really, the user wants to be able to clean their appliances. Removing grooves/slots/crevices may resolve the desire for cleanability, but it may also wall your design in to options that have no grooves, when there very well may be another way to resolve the issue.

Actionable

“Insights”, “learnings”, “findings”…terms aside, what we take away from research should be concrete. We should know what the data is telling us, and any data discrepancies should be defined so that the team can run with the information without question.

Data discrepancies come in all shapes and forms. For example, while studying audio equipment’s visual brand language, you might hear both “I want my speakers to blend into the background of my home’s aesthetic” and “I want my speakers to be a statement piece in my home”: two very high-level findings contradicting each other. This is often where we as researchers need to dig into the data to discover why and when each of these findings is applicable. It could even indicate two potential user personas that exist within your data.

Whether you are hunting for directional findings to drive strategic vision or tactical insights to build what’s next right now, your findings should describe the context, reasoning, and experience.

“I want my speakers to be a statement piece in my home using bright colors and unique textures to showcase my style and taste.”

The catch is that what a researcher believes is actionable isn’t always actionable to the design team that’s going to run with the findings. It is crucial to define, with the teams you work with, what makes findings actionable to them.

How do they want to be immersed in the research? Do they want to be in the field with you? Reviewing the theme structure with you? Or do they just want to read the report you produce as documentation?

The output or deliverable you provide may very well be the linchpin to getting your team to listen to and leverage your research findings. Do you hand them a traditional report that’s highly visualized to be inspirational? A storyboard set that highlights the product scenarios or user personas? Or a research video cut together from fielding clips that portray the findings?

The definition of actionable will vary from team to team and from initiative to initiative. It is very important to level set this early and reconfirm often to ensure the research is being understood and used by all teams as well as possible.

Externally Valid

Results from the research need to be applicable to the right population. A key focus of study planning is to ensure the recruit is on-point and the study design introduces no confounding variables that may call into question whether results will hold true once the product hits the market.

The most intimidating words spoken in a report-out are “Did we talk to the right people?” If you can’t defend WHO the research was conducted with, and WHY, then the whole research program can be compromised. It is healthy for the team to question a recruit to ensure you are talking to the right people, but if you discover the participants are not, in fact, the end users of your product in the market, your findings might be pointing you in the wrong direction.

When the project reaches design and debate arises over whether or not to follow an insight’s guidance, another question you want to be able to answer confidently is “Did the fielding protocol compromise this insight?” Sometimes this comes up as a question about the counterbalancing of tasks or the wording of a line of questioning. One way or another, these questions should be a constant consideration as you develop a program.

If we can’t stand up for our recruit, all of the research findings come into question. If we can’t stand behind our field methods and protocols, the whole program can ultimately be jeopardized. Thinking through these variables before the point of no return (fielding) is crucial to avoiding these pitfalls.

Takeaways

Rigorous, Reusable Results, Holistic, Actionable, and Externally Valid are five ingredients to strive to incorporate into every research project. These characteristics draw the line between “quick and dirty” research and efficient, quality research programs.

For “quick and dirty” research, you may create cost savings by cutting the recruit to a smaller sample size (affecting the rigor of the analysis), downgrading the research method from in-depth interviews to phone interviews (affecting the rigor of the experiment design), or cutting the output down to a top-of-mind report and read-out (affecting the holistic and reusable nature of the work).

But how much of your return on investment are you ultimately sacrificing? Is the research going to be compromised by one of these cuts? These cost savings can end up compromising a research program; they can cost you more in the long run than conducting the work right the first time.

Quality research programs aren’t always expensive and time-consuming, but low-cost, fast research programs aren’t always quality. It comes down to making strategic decisions that allow for efficiencies without compromising research integrity. There is a threshold every research objective must meet while still providing the highest return on investment from the program.

If these five research values are not in place, your ROI decreases, ultimately eroding the value of your investment. Which raises the question: is quick and dirty really worth it? Investing in a quality research program can create long-lasting insights and improve your chances of correctly implementing the findings in your next product.


About The Author

Lauren Purkhiser, Design Research Specialist
Lauren is a design researcher with a thirst for knowledge that only feeds her curiosity and love of creative problem solving. Her unique experience with co-creation helps discover consumers’ emotional experiences with products and uncover how to create a better experience. She gleans structure from qualitative chaos and helps clients make sense of research to inform development decisions. On the weekends, you’ll find her out riding or maintaining vintage motorcycles. How cool is that?

Katie Mowery, Sr. Human Factors Specialist
With expertise in human factors psychology, interface design, and experimental design, Katie is passionate about understanding the way people think and feel, helping clients understand end users’ needs, and applying those insights to improve a product’s design. Her work in automotive, retail, defense, and software brings a wide range of knowledge that spans industries. Katie is curious and thoughtful, and she’s even done research while riding in tanks! When she’s not watching Disney movies with her two little girls, she’s playing outside as much as possible.
