QUALITATIVE DATA (GCP - HSR - HTA)
Although there is no unified definition of qualitative research, most authors agree about its main characteristics. Creswell formulated it like this: “Writers agree that one undertakes qualitative research in a natural setting where the researcher is an instrument of data collection who gathers words or pictures, analyzes them inductively, focuses on the meaning of participants, and describes a process that is expressive and persuasive in language” (Creswell, 1998). The gathering of qualitative data takes many forms, but interviewing and observing are among the most frequently used, no matter the theoretical tradition of the researcher.
1. How to choose a qualitative method?
So far we have identified four types of QRM suitable for KCE research projects and useful to describe in a first report: individual interviews, focus groups, observation, and structured discussion among experts through a Delphi survey. Others may be added in the future.
Before turning to the practical aspects of each method, we briefly describe them in order to give some guidance on choosing the most appropriate one.
- The semi-structured individual interview aims at gathering data by questioning the respondent using conversational techniques, “…being shaped partly by the interviewer’s pre-existing topic guide and partly by concerns that are emergent in the interview.” (Bloor and Wood, 2006, p. 104). “It gives the opportunity to the respondents to tell their own stories in their own words” (Bowling, 1997, p. 336). The use of such a method in the KCE context is appropriate when the aim is to identify the different points of view, beliefs, attitudes and experiences of people such as patients, practitioners, stakeholders, etc., when no interaction between the respondents is required or appropriate (depending on the topic, for example). It can also be chosen for practical reasons, e.g. when participants are not easily ‘displaceable’ or lack time.
- The focus group is a form of semi-structured interview. It consists of “a series of group discussions held with differently composed groups of individuals and facilitated by a researcher, where the aim is to provide data (via the capture of intra-group interaction) on group beliefs and group norms in respect of a particular topic or set of issues” (Bloor and Wood, 2006, p. 88). This is useful “where we need interactivity to enhance brainstorming among the participants, gain insights and generate ideas in order to pursue a topic in greater depth” (Bowling, 1997, p. 352). Focus groups “worked well and provide the richest data in relation to public’s view of priorities for health services and (…) were less inhibiting for respondents than one-to-one interviews” (Bowling, 1997, p. 354).
- Observation is useful to understand more than people say about (complex) situations (Bowling, 1997). In the KCE context, it will be useful for site visits, when preparing a report on a hospital or a health service, a procedure, etc.
- The Delphi survey aims to achieve consensus or define positions among expert panellists, through iterations of anonymous opinions and of proposed compromise statements from the group moderator (Bloor and Wood, 2006). For KCE reports, this method can be useful for setting priorities or clarifying the acceptability of a new technology, system or innovation.
2. How to set up?
During the discussions held in the different focus groups, not every KCE researcher expressed a need to use or understand QRM. Nevertheless, for those who are interested in QRM, we try to respond to the different researchers’ needs through the notes that will be published in the KCE process book.
[1] For further reading: Silverman (2011)
Why opt for a qualitative approach?
“The goal of qualitative research is the development of concepts which help us to understand social phenomena in natural (rather than experimental) settings, giving due emphasis to the meanings, experiences, and views of all the participants” (Mays, 1995, p. 43). This quotation gives a nice summary of the specificities of qualitative research methods, which are discussed below.
A. Specificities of qualitative research methods
First, qualitative research encompasses all forms of field research performed with qualitative data. “Qualitative” refers to data in non-numeric form, such as words and narratives. There are different sources of qualitative data, such as observations, document analysis, interviews, pictures or videos, etc. Each of these data-gathering techniques has its particular strengths and weaknesses, which have to be reflected upon when choosing a qualitative research technique. In the social sciences, the use of qualitative data is also closely related to different paradigms trying to develop insight into social reality. Elaboration on these paradigms is, however, outside the scope of this process note [1].
Second, the aim of qualitative research is to develop a “thick description[2]” and a “grounded or in-depth understanding” of the focus of inquiry. The benefits of well-developed qualitative data collection are precisely the richness of data and deeper insight into the problem studied. Qualitative methods do not only aim to describe, but also help to obtain more meaningful explanations of a phenomenon. They are also useful for generating hypotheses (Sofaer, 1999). Types of research questions typically answered by qualitative research are “What is going on? What are the dimensions of the concept? What variations exist? Why is this happening?” (Huston, 1998). Qualitative research techniques are primarily used to trace “meanings that people give to social phenomena” and “interaction processes”, including the interpretation of these interactions (Pope, 1995). “They allow people to speak in their own voice, rather than conforming to categories and terms imposed on them by others.” (Sofaer, 1999, p. 1105). This kind of research is also appropriate to investigate social phenomena related to health (Huston, 1998).
Third, one of the key strengths of qualitative research is that it studies people in their natural settings rather than in artificial or experimental ones. Since health-related experiences and beliefs are closely linked to daily life situations, it is less meaningful to research them in an artificial context such as an experiment. Data are therefore collected by interacting with people in their own language and observing them in their own territory (Kirk, 1986) or a place of their own choice. This is referred to as naturalism, and the term naturalistic methods is sometimes used to denote some, but not all, qualitative research (Pope, 2006). This characteristic is not always relevant to the use of QRM at the KCE: for example, focus group interviews are usually not performed in the natural setting of the participants, but rather in a meeting room.
A fourth feature of qualitative research in health care is that it often employs several different qualitative methods to answer one and the same research question (Pope, 2006). This relates partly to what is called triangulation (see here).
Finally, qualitative research is always iterative, starting with assumptions, hypotheses, mindsets or general theories that change and develop throughout the successive steps of the research process. It is desirable to make these initial assumptions explicit at the beginning of the process and to document the newly acquired insights or knowledge at each step.
[1] For those interested we refer to Denzin and Lincoln, 2008a; Denzin and Lincoln, 2008b; Bourgeault et al., 2012; or, in Dutch, Mortelmans, 2009.
[2] A “thick description” of a human practice or behaviour includes not only the focus of the study but also its context, such that it becomes meaningful to an outsider. The term was introduced into the social science literature by the anthropologist C. Geertz in a 1973 essay.
B. Qualitative versus quantitative approaches
Although it is meaningful to do qualitative research in itself, qualitative research is often defined by reference to quantitative research. It is often assumed that because qualitative research does not seek to quantify or enumerate, it does not ‘measure’. Qualitative research generally deals with words or discourses rather than numbers, and measurement in qualitative research is usually concerned with taxonomies or classifications. “Qualitative research answers questions such as, ‘what is X, and how does X vary in different circumstances, and why’, rather than ‘how big is X or how many X’s are there?” (Pope, 2006, p. 3).
By emphasizing the differences, the qualitative and quantitative approaches are presented as opposites. However, qualitative and quantitative approaches are complementary and are often integrated in one and the same research project. For example, in mixed-methods research the strengths of quantitative and qualitative research are combined for the purpose of obtaining a richer and deeper understanding (Zang, 2012). Qualitative data can also be analyzed in a quantitative way, for example by counting the occurrence of certain words.
Health services researchers often draw on multiple sources of data and multiple strategies of inquiry in order to explore the complex processes, structures and outcomes of health care. It is common that quantitative and qualitative methods answer different questions to provide a well-integrated picture of the situation under study (Patton, 1999). Especially in the field of health services research, qualitative and quantitative methods are increasingly being used together in mixed-method approaches. QRM can be used alone or in combination with quantitative methods in the following ways:
- Qualitative research only:
- To know the variation in experiences related to health or illness.
- To build typologies regarding health services use, patient attitudes, health beliefs, etc.
- Qualitative research preliminary to quantitative work:
- To explore new areas, new concepts, new behaviours, etc. (Pope, 1995) before starting with measurement.
- To build quantitative data collection tools (questionnaires): choosing appropriate wording (Pope, 1995), selecting the variables to include, developing reliable and valid survey instruments (Sofaer, 1999), etc.
- To pre-test survey instruments (Sofaer, 1999).
- Qualitative research as a supplement to quantitative work:
- As part of a triangulation process, which consists of comparing results from several data sources (Pope, 1995).
- To reach a different level of knowledge (Pope, 1995): “If we focus research only on what we already know how to quantify, indeed only on that which can ultimately be reliably quantified, we risk ignoring factors that are more significant in explaining important realities and relationships.” (Sofaer, 1999, p. 1102).
- As a complement to quantitative work, by exploring complex phenomena or areas that are not accessible to quantitative approaches (Pope, 1995).
- Sofaer (1999) notes that in many cases inquiry can move from being unstructured and largely qualitative in nature to being structured and largely quantitative in nature. This is how she describes the continuum: “(…) there is uncertainty not only about answers, but about what the right questions might be; about how they should be framed to get meaningful answers; and about where and to whom questions should be addressed. As understanding increases, some of the right questions emerge, but uncertainty remains about whether all of the right questions have been identified. Further along, confidence grows that almost all of the important questions have been identified and perhaps framed in more specific terms, but uncertainty still exists about the range of possible answers to those questions. Eventually, a high level of certainty is reached about the range of almost all of the possible answers.” (p. 1103).
- In sum, investigations of a given area often start with qualitative research to explore the field, find the right questions, prepare more focused questions and discover theories and hypotheses. Next, quantitative research is used to test hypotheses and, finally, qualitative research can be used to deepen the findings or to search for explanations that quantitative research techniques cannot provide.
3. How to collect?
3.1 Interviewing (individuals, groups)
There are many ways to interview people, e.g. individually or in focus groups. However, they share some general principles and techniques. Therefore, in what follows we first address the general principles, followed by a chapter on individual semi-structured interviews and a chapter on focus groups.
3.1.1 General principles
3.1.1.1 How to plan the research design?
As with any data collection, interviewing (individually or in focus groups) has to be planned within the overall research approach taking into account the particular aims of the qualitative data collection.
The planning of data collection has to be prepared early in the overall research process. Qualitative research is time-consuming at the level of data collection, data analysis and reporting. All the steps are presented in the next figure.
Figure 2 – Flowchart: interviewing people
3.1.1.2 Sampling issues in qualitative research: who and how many?
Selection of participants
In qualitative research we select people who are likely to provide the most relevant information (Huston, 1998). In order to design the sample and cover all variability around the research issue, the researchers must have an idea of the different perspectives that should be represented in the sample. This is called “field mapping” of the key players who have a certain interest in the problem under study. The role of this explicit “field mapping” is often underestimated but is essential in order to build a purposive sample. It is possible that this “field map” evolves during the data collection. The notion of “representativeness” is not understood here in the statistical sense. Representation is seen as a “representation of perspectives, meanings, opinions and ideas” of the different stakeholders in relation to the problem researched and their interests. In order to select the participants for interviews or focus groups, one should ask: “do we expect that this person can talk about (represent) the perspectives (meanings given to the situation) of this stakeholder group?”. The aim is to maximize the opportunity of producing enough data to answer the research question (Green, 2004).
Ideally there should be a mixture of different “population characteristics” to ensure that arguments and ideas of the participants represent the opinions and attitudes of the relevant population. Also the unit of analysis should be taken into account. This could be for example “individuals for their personal opinions/experience/expertise” or “individuals because they represent organizational perspectives”.
Moreover, if comparisons are to be made within and between types of participants, the sample design should take this into account from the start. In Table 9, two criteria for comparison, for example age and socio-economic status, are already included to allow comparative analysis between age or status groups.
Sampling approaches
There is a wide range of sampling approaches (e.g. Miles and Huberman 1994, Patton 2002, Strauss and Corbin 2008). It is not uncommon in qualitative research that the research team continues to make sampling decisions during the process of collecting and analysing data. However, clear documentation of the sampling criteria is needed when doing qualitative research. These criteria should cover all relevant aspects of the research topic. The researcher should identify the central criteria and translate them into observable sample criteria. In addition, the chosen criteria should leave enough variation to explore the research topic (Mortelmans, 2009). For example, in a research project about factors influencing the decision to have or to refrain from refractive eye surgery in the last two years, the sampling criteria were:
- To have undergone or to have considered refractive surgery, since we want to explore both the pros and the cons.
- To be older than 20 and younger than 70. Refractive eye surgery is not an option for those younger than 20 or older than 70.
In what follows we describe a number of sampling strategies, all of which are non-probabilistic. A randomized sample is not useful in qualitative research, since generalizability to the general population is not the aim. Moreover, with a random sample the researcher would run the risk of selecting people who have no link with the research subject and thus nothing to tell about it (Mortelmans, 2009). In purposive sampling, the point of departure is the set of sampling criteria described above. There are different forms of purposive sampling:
- Stratified purposive sampling (Patton, 2002):
Purposive samples can be stratified (or nested) by selecting particular persons who vary according to a key dimension or characteristic (e.g. a sample of people from large hospitals and a different sample of people from small hospitals), so that the selection ideally represents the different positions within the ‘system’ or phenomenon under investigation. The stratification criteria are the equivalent of independent variables in quantitative research. The researcher should think ahead about independent variables that could provide new information regarding the research topic. For example, in the research project on refractive eye surgery we expected that reasons to choose or refrain from choosing refractive eye surgery vary with age and financial resources and can differ between the Dutch- and French-speaking parts of the country. Therefore we added age, socio-economic status and region as criteria introducing heterogeneity. This results in the matrix shown in Table 9.
- Homogeneous sampling:
In the case of homogeneous sampling, variation between respondents is minimised. Participants are chosen because they are alike, in order to focus on one particular process or situation they have in common (Mortelmans, 2009). However, the homogeneous character does not exclude comparisons between types of participants, because, for example, unanticipated dimensions might emerge from the data. It is also useful to take hierarchy into account, hence not to put, for example, nurses and specialists working in the same hospital together in a focus group, as this might bias the responses. This sampling strategy is used when the goal of the research is to develop an in-depth understanding and description of a particular group with similar characteristics or of people on an equal footing. For example, for the KCE research project on alternative medicines 48-50, only regular users were sampled.
Table 9 – Example of stratified purposive sample
| Age | Socio-economic status | Already had eye surgery or surgery planned | Considered eye surgery but refrained from having it |
| 20-30 | a | 2 | 2 |
| 20-30 | b | 2 | 2 |
| 20-30 | c | 2 | 2 |
| 31-40 | a | 2 | 2 |
| 31-40 | b | 2 | 2 |
| 31-40 | c | 2 | 2 |
| >40 | a | 2 | 2 |
| >40 | b | 2 | 2 |
| >40 | c | 2 | 2 |
(Cell values are the number of respondents per stratum.)
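To make the construction of such a stratified purposive sampling frame more concrete, the sketch below enumerates the strata of Table 9 and keeps track of how many respondents still need to be recruited per cell. It is purely illustrative (a minimal sketch in Python); the labels, the target of two respondents per cell and the helper names are modelled on the example above, not taken from any prescribed KCE tool.

```python
from itertools import product

# Stratification criteria mirroring Table 9 (labels are illustrative).
surgery_status = ["had or planned eye surgery", "considered but refrained"]
age_bands = ["20-30", "31-40", ">40"]
ses_levels = ["a", "b", "c"]

TARGET_PER_CELL = 2  # respondents aimed for in each stratum

# One cell per combination of the three criteria: 2 x 3 x 3 = 18 strata.
sampling_frame = {
    stratum: {"target": TARGET_PER_CELL, "recruited": 0}
    for stratum in product(surgery_status, age_bands, ses_levels)
}

def register_recruit(status, age, ses):
    """Record a recruited respondent; return False if the cell is already full."""
    cell = sampling_frame[(status, age, ses)]
    if cell["recruited"] >= cell["target"]:
        return False
    cell["recruited"] += 1
    return True

def open_strata():
    """List the strata that still need respondents."""
    return [stratum for stratum, cell in sampling_frame.items()
            if cell["recruited"] < cell["target"]]

register_recruit("had or planned eye surgery", "20-30", "a")
print(len(sampling_frame), "strata in total,", len(open_strata()), "still open")
```

Keeping such an explicit overview of filled and open strata during recruitment makes it easier to document the purposive sample and to report any deviations from the intended design.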
3.1.1.3 How to develop an interview guide?
An interview guide should be adapted to the language and vocabulary of the participant(s) and is generally built out of three components:
- A reminder of the goal of the research.
- The main topics or questions, the interviewer wants to address during the interview.
- Relaunching questions. These are an essential part of the interview. It may happen that the interviewee does not answer the question or gives an unexpected answer. In that case the interviewer can probe in order to delve deeper. When a respondent does not mention an aspect you thought of in advance or are particularly interested in, you can re-pose the question focused on that specific issue. For example, the initial question could be: “Which difficulties did you experience after your surgery?”. The respondent mentions all kinds of worries and inconveniences, but you are particularly interested in the organization of aftercare. Hence you could ask: “How did you experience the organization of aftercare?”.
How to construct a topic list or semi-structured questionnaire?
A topic list covers all the topics the interviewer should address during the interview. It enables the interviewer to guide the interview while allowing the discussion to flow naturally. The sequence of topics generally moves from the general to the specific. The sequencing of topics can be handled in a flexible way and, within the general framework of topics, the focus of the discussion can be reset. A topic list is also used in the preparation of a semi-structured questionnaire.
In a semi-structured questionnaire, the questions are formulated in spoken language and are posed as such during the interview. The same questions with the same formulation, sometimes in the same sequence, are posed in each interview. The disadvantage, however, is that this can threaten the natural flow of the conversation.
Both for the topic list and the semi-structured questionnaire, questions and topics should evidently be selected in line with the research objectives. An open-ended formulation of the questions is important in order to enable the interviewee to talk freely, without predispositions of the interviewer influencing the narrative. For example, rather than asking “Did you worry about the surgery?”, one could ask “How did you feel about the surgery?”.
A topic list or questionnaire may be adapted or improved in the course of the research, in line with the iterative nature of QRM. The more interviews you have done, the more you know and the more specific or detailed your questions can be (Mortelmans, 2009). However, continuity should be guarded. The topics of the first interview should also be represented in the following interviews, although the latter can also contain much more detailed questions.
For an example of a topic list and a semi-structured questionnaire, see Appendix 6 and Appendix 7 respectively.
What types of questions can be posed?
The interview starts with an easy opening question, mostly meant to set the interviewee at ease, break the ice and get to know each other. With this question the researcher does not expect to obtain much useful information; its main function is to start up the conversation.
After that, the conversation is started with a first general and easy-to-answer question addressing the content of the research. It can be an attitude question that eases the respondents into the conversation. An example could be: “When you hear ‘breast cancer screening’, what are your first thoughts?”.
Next, transition questions involve the respondents in the research subject, for example through asking questions about personal experiences or specific behavior regarding the topic. Attitudinal questions are more difficult to answer and should therefore be addressed later in the interview. An example is “How did you experience your eye surgery?”.
Subsequently, the key questions are addressed. These questions are the reason why the interview is done. The interviewer can make clear that the interviewee can take some time to answer them. An interview can contain up to five key questions, each taking up to fifteen minutes to answer.
Finally, the interview is terminated by means of a concluding question and thanking the interviewee for his participation. Three types of concluding questions can be distinguished:
Summary questions provide the interviewee with a summary of what he has told the interviewer. Final questions can address elements that have not been mentioned during the interview, for example: “Do you want to add something to this interview?”. Make sure you allow enough time for the concluding questions.
It is useful to conduct a pilot (focus group) interview in order to test, assess and validate the format and the appropriateness of the topic guide or questionnaire.
3.1.1.4 How to run the data collection?
Preparations for the interview
Preparations for the interview encompass the recruitment of participants and the making of appointments, becoming knowledgeable about the research topic (including learning the interview guide by heart), anticipating questions of participants regarding the research project, arranging access to a physical space where the interviews can take place, and preparing the recording equipment (Mack, 2005). Well-functioning recorders are crucial, so batteries, tapes and microphones should be carefully checked. It can be practical to provide a second recorder as a back-up. Finally, a notebook, a pen and of course the topic list or interview guide prepared for the interview should not be forgotten.
Box 2: What to take to the interview?
Equipment
- Digital tape recorder (plus 1 extra, if available)
- Spare batteries
- Field notebook and pens
Interview packet
- 1 interview guide (in the appropriate language)
- Informed consent forms (2 per participant: 1 for the interviewer, 1 for the participant, in the appropriate language)
- Participant reimbursement (if applicable)
Source: Adapted from Mack, 2005
Running the interview
Informed consent should be obtained from each participant before starting the interview. Permission should also be asked to record the interview, and it should be explained how the tapes will be used and stored.
The research aims should be briefly repeated. The research aims were probably already explained during the first contact with the respondent in order to convince him to participate. Next, all the topics or questions on the checklist or questionnaire need to be addressed. Participants are probed to elaborate on their responses in order to learn everything they want to share about the research topic54. Mobile phones should be switched off during the interview so as not to imply that the participant’s testimony is of secondary importance.
During the interview, back-up notes can be taken; the interviewee’s behaviour and contextual aspects of the interview should be observed and documented as part of the field notes. Field notes are expanded as soon as possible after each interview, preferably within 24 hours, while the memory is still fresh (Mack, 2005).
To get deeper or redirect the discussion, probing techniques can be used:
- Repeat the question but in a different wording.
- Summarise the relevant aspects of the interviewee’s answer in an interrogative way. For example: “In sum, you say that…?”
- Probe explicitly, for example: “What do you mean?” or “Could you give me a second example?”
- Purposive probing, for example: “Why was it that you?” or “What happened then?”
- Repeat the last couple of words in an interrogative way. For example: “R: (…) I think it is dangerous and I don’t trust doctors”. I: ”You don’t trust doctors?”
- Introduce a short silence.
- Verbalise emotions, for example: “I can see that thinking of that discussion makes you very angry.”
The interview is closed by thanking the participant(s).
3.1.1.5 How to prepare the data for analysis?
Transcribing is the procedure for producing a written version of the interview. Ideally, the information recorded during the interview is transcribed in full to enable accurate data analysis. A transcript is a complete, literal written text of the interview and often amounts to a large volume of text.
Good-quality transcribing is not simply transferring words from the tape to the page. The wording communicates only part of the message; a lot of additional information is to be found in the way people speak. Tone, inflection and the timing of reactions are important indicators too. With experienced observers and note-takers, a thematic analysis of the notes taken during the interviews can be used as a basis for analysing these “non-verbal” aspects.
Transcribing is a time-consuming and costly part of the study. The research team should consider in advance who should do the transcribing. Resources may be needed to pay an audio typist, which is usually more cost-effective than having a researcher do it. Be aware that typists are often unfamiliar with the terminology or language used in the interviews, which can lead to mistakes and/or prolong the transcribing time.
It may not be essential to transcribe every interview. It is possible to use a technique known as tape and notebook analysis, which means taking notes from a playback of the tape-recorded interview and triangulating them with the notes taken by the observers and note-takers. However, bias can occur if inexperienced qualitative researchers attempt tape and notebook analysis. It is certainly preferable to produce full transcripts of the first few interviews. Once the researcher becomes familiar with the key messages emerging from the data, tape analysis may be possible. Transcripts are especially valuable when several researchers work with the same data.
3.1.1.6 What are the common pitfalls?
In the following paragraphs we mention a number of common pitfalls typical of interviews. They are based on the work of Mortelmans (Mortelmans, 2009) and the Qualitative Research Guidelines Project (Cohen, 2008).
- The methodology needs to be transparent. Each step of the sampling, data collection and analysis should be described in sufficient detail; this means that it must enable other researchers to replicate the study.
- The sample should be well constructed and described.
- Avoid dichotomous questions which elicit a yes or a no. In an interview we are especially interested in rich descriptions and we want the interviewee to talk a lot and elaborate on the topic of the question.
- Avoid double questions, for example: “Once you decided to have a screening, what was the next step? How did you proceed? How did it change the way you thought about potential risks?”. The interviewee cannot respond to all the questions at once and thus picks out one, which means the other questions are lost.
- Avoid the expression of value judgements or your own opinion, for example: “What do you think about the endless waiting times?” The word “endless” suggests irritation.
- Avoid being suggestive, for instance by giving examples: “Which kind of difficulties did you encounter, like long waiting times, full waiting rooms, etc.?” Such examples provide the interviewee with a frame which he will possibly not transcend; this way you lose what he would have answered spontaneously.
- Avoid a reversal of roles. The interviewee should not be asking you questions. An example could be: I: “What does it mean to you to be a patient?”, R: “I don’t know. What does it mean to you?”. If this happens, you can say that you are willing to answer that question after the interview, but that you cannot answer it during the interview in order not to influence the answers of the interviewee. A reversal of roles can be avoided if the interviewer introduces himself in a neutral way, for example as a researcher rather than as a physician or an expert in an issue related to the topic of the interview, so that the respondent does not ask too many questions about a particular condition or issue.
- Avoid letting the interviewee deviate too far from the topic or elaborate on irrelevant matters, by returning to the question posed.
- Avoid jargon; use familiar terminology that does not need explanation or definition.
- The analysis should not be superficial but really in-depth. However, it may not go beyond the data: the data must always support the results.
3.1.2 Individual interviews
3.1.2.1 What are individual semi-structured interviews?
Interviews are used in many contexts (journalism, human resources, etc.) and for many purposes (entertainment, recruitment of personnel, etc.); scientific data collection is only one very specific application, which should not be confused with the others. The interview is easily trivialized as it is common practice in the media landscape that surrounds us: Fontana and Frey, following Atkinson and Silverman, even speak of “the interview society”. Practicing health professionals routinely interview patients during their clinical work, and they may wonder whether simply talking to people constitutes a legitimate form of research (DiCicco-Bloom et al., 2006). In qualitative research, however, interviewing is a well-established research technique, and two types can be distinguished: semi-structured and unstructured. Structured interviews are out of scope here, because they consist of administering structured questionnaires producing quantitative data.
Unstructured interviews are more or less equivalent to guided conversations (DiCicco-Bloom et al., 2006). Originally they were part of ethnographers’ field work, consisting of participant observation and interviewing key informants on an ongoing basis to elicit information about the meaning of observed behaviors, interactions, or artifacts (DiCicco-Bloom et al., 2006). There is no list of questions nor an interview guide; the questions asked are based on the responses of the interviewee, as in the natural flow of a conversation (Britten, 1995).
Semi-structured interviews are often the sole data source in a qualitative research project. A set of predetermined open-ended questions is used to guide the interview, but other questions emerging from the dialogue can be added (Britten, 1995). The iterative nature of the research process, in which preliminary data analysis coincides with data collection, also results in questions being altered as the research process proceeds. Likewise, questions that are not effective in eliciting the necessary information can be dropped or replaced by new ones (Britten, 1995).
Essentially an interview consists of someone who asks questions (interviewer), someone who answers these questions (interviewee) and the registration of those answers in some way (Mortelmans, 2009).
The interview as a qualitative research method differs from other forms of interviewing used in other domains. Mortelmans highlights four characteristics:
- Flexibility: both external and internal flexibility are meant. External flexibility refers to the iterative alternation of interviewing and data analysis: the structure and content of a subsequent interview may be changed in function of the analysis of the previous one. Internal flexibility means that the sequence of the prepared interview questions and themes should be adapted to the interviewee in order to maintain the natural flow of the conversation.
- The interviewee leads, so to speak, the conversation. The interviewer only guards the scope of the conversation and makes sure that all the topics are covered.
- Non-directiveness: the interviewee steers the interview, and the interviewer only makes sure, by means of non-directive interview techniques, that the conversation does not stray too far.
- Direct face-to-face contact is important to build trust and obtain in-depth information, but this depends on the topic and should be considered case by case.
3.1.2.2 When to use individual semi-structured interviews?
Individual semi-structured interviews are useful to:
- Collect data on individuals’ personal histories, perspectives, and experiences, particularly when sensitive topics are being explored (Mack, 2005).
- Elicit a vivid picture of the participant’s perspective (Mack, 2005).
- Provide context to other data, offering a more complete picture (Boyce et al, 2006).
- Learn about the perspectives of individuals, as opposed to, for example, the group norms of a community, for which focus groups are more appropriate (Mack, 2005).
- Get people to talk about their personal feelings, opinions, and experiences (Mack, 2005).
- Gain insight into how people interpret and order the world on the research topic (Mack, 2005).
- Address sensitive topics that people might be reluctant to discuss in a group setting (Mack, 2005).
- Elicit information from key informants (Sofaer, 1999).
- Examine people’s experiences, attitudes and beliefs (Huston et al, 1998).
3.1.2.3 Strengths and weaknesses of the method
Strengths:
- They provide much more detailed information than what is available through other data collection methods, such as surveys (Boyce et al, 2006).
- Questions can be prepared ahead of time. This allows the interviewer to be prepared and appear competent during the interview (Cohen, 2008).
- Semi-structured interviews also allow informants the freedom to express their views in their own terms (Cohen, 2008).
- Semi-structured interviews can provide reliable, comparable qualitative data (Cohen, 2008).
Weaknesses:
- Interviews can be time-intensive because of the time it takes to recruit participants, conduct interviews, transcribe them, and analyse the results. In planning your data collection effort, care must be taken to include time for transcription and analysis of this detailed data (Boyce et al, 2006).
- Interviewers must be appropriately trained in interviewing techniques. To obtain the most detailed and rich data from an interviewee, the interviewer must make that person comfortable and appear interested in what they are saying. They must also be sure to use effective interview techniques, such as avoiding yes/no and leading questions, using appropriate body language, and keeping their personal opinions in check (Boyce et al, 2006).
- Data from individual semi-structured interviews are not generalizable in a statistical way, because small samples are chosen and no random sampling methods are used; they are, however, theoretically transferable. Individual semi-structured interviews nevertheless provide valuable information, particularly when supplementing other methods of data collection. It should be noted that the general rule on sample size for interviews is that when the same stories, themes, issues, and topics are emerging from the interviewees, a sufficient sample size has been reached (Boyce et al, 2006).
3.1.2.4 How to plan the research design?
See “How to plan the research design?”
3.1.2.5 Modalities of data collection
Individual semi-structured interviews are usually conducted face-to-face and involve one interviewer and one participant. Phone conversations and interviews with more than one participant also qualify as semi-structured interviews, but, in this chapter, we focus on individual, face-to-face interviews (Mack, 2005).
3.1.2.6 Data collection tools
The data collection tools to carry out interviews are topic lists, questionnaires and field notes. Topic lists and questionnaires are described here.
Researchers use field notes to record observations and fragments of speech. Field notes should be written up as soon as possible after the events to which they refer. If possible, short “aide-mémoire” or pocket dictaphones may be used in fieldwork settings, to facilitate later expansion of the notes into proper fieldnotes (Bloor et al, 2006). In the chapter on observational techniques field notes are addressed in more detail (here).
3.1.2.7 Sampling
For general issues on sampling, see “Sampling issues in qualitative research: who and how many?”.
3.1.2.8 Human resources necessary
In the ideal scenario researchers plan, organize, carry out and transcribe the interviews themselves, to be completely immersed in the data, but in practice the interviews are often carried out by subcontractors and the transcriptions are often done by professional typists.
3.1.2.9 Practical aspects
For preparations for the interview, see “How to run the data collection?”.
Physical organisation of an interview. Take the following rules into account:
- Interviewee and interviewer should not sit opposite each other, but rather at an angle of 90° or less.
- The interview should take place in a quiet place where the interviewee feels at ease.
- Avoid the presence of third parties.
3.1.2.10 Analysis and reporting of findings
See "How to prepare data for analysis", “How to analyse?” and “How to report qualitative research findings?” .
3.1.2.11 Examples of KCE reports using the method
- Home monitoring of infants in prevention of sudden infant death syndrome (Eyssen et al, 2006)
- Making general practice attractive: encouraging GP attraction and retention (Lorant et al, 2008)
- Osteopathy and chiropractic: state of affairs in Belgium(De Gendt et al, 2010)
- Acupuncture: state of affairs in Belgium (De Gendt et al, 2011)
- Homeopathy: state of affairs in Belgium (De Gendt et al, 2011)
- Burnout among general practitioners: prevention and management (Jonckheer et al, 2011)
- Evaluation of a fixed personal fee on the use of emergency services (Gourbin et al, 2005)
[1] We propose an example of a ‘standard introductory text’ in the appendix.
3.1.3 Focus groups
3.1.3.1 What are focus groups ?
A focus group is a particular technique in qualitative research. For a focus group interview, a group of individuals is gathered on the basis of their specific profile or characteristics to explore a limited number of “focused questions” (Sofaer, 1999). Groups are generally homogeneous on one or several criteria relevant to the focus of the discussion.
“In essence, a focus group is a small (usually 6-12 people) group brought together to discuss a particular issue (..) under the direction of a facilitator who has a list of topics to discuss” (Green and Thorogood, 2009, p. 111).
Focus groups are semi-structured group interviews used to collect information focused on a specific subject or area of concern: for exploration and discovery, for in-depth understanding of a problem as it is experienced in context, and to assess needs, preferences, attitudes and interests related (in the context of KCE research) to health and health care issues.
It differs from individual semi-structured interviews in that the interaction component is used to bring out insights and understandings in ways that questionnaire items or individual questions may not be able to do. The interaction between the moderator and the group, as well as the interaction between group members, may result in more in-depth information and elicit differing perspectives in response to carefully designed questions. Focus groups are thus not to be considered a pragmatic time-saving substitute for individual semi-structured interviews (e.g. if for any reason the planning does not allow for individual interviews), as the methodological groundings of the two techniques differ.
A focus group is not synonymous with a ‘group interview’. For a focus group, people are recruited specifically to participate in a research protocol, using a certain method. It is a group interview in the sense that it gathers data simultaneously from different participants (Green and Thorogood, 2009). However, it differs from a group interview in the importance attached to the interaction among participants. Participants might change their perspective during the focus group interview because of this interaction. In a group interview the interaction between participants is limited, and occurs mainly between interviewer and interviewees.
Figure 4 – Interaction patterns in a group interview versus focus group interview
Depending on sampling strategy and aims, group interviews can take several forms, e.g. consensus panel, focus group, natural group or community interview (Coreil 2005 cited by Green and Thorogood, 2009).
Focus groups can be used as a single research strategy, as well as in combination with other methods in a multi-method research strategy.
3.1.3.2 Specific questions suitable for the method
The principal feature of focus group interviews is interaction between participants. Kitzinger (2006, p. 22) highlights that this particularity could be used to:
- “Highlight the respondents’ attitudes, priorities, language and framework of understanding.
- Encourage participants to generate and explore their own questions, and to develop their own analysis of common experiences.
- Encourage a variety of communication from participants – tapping into a wide range and different forms of discourse.
- Help to identify group norms/cultural values.
- Provide insight into the operation of group social processes in the articulation of knowledge (e.g. through the examination of what information is sensitive within the group).
- Encourage open conversation about embarrassing subjects and to permit the expression of criticism.
- Facilitate the expression of ideas and experiences that might be left underdeveloped in an interview, and to illuminate the research patient’s perspectives through the debate with the group.”
- Allow topics which participants have given little thought in advance to emerge from the discussion (Barbour, 2010).
3.1.3.3 Strengths and weaknesses of the method
The highlighted benefits of focus groups are:
- Interaction between participants (Green and Thorogood, 2009)
- Ability to produce a large amount of data on a topic in a short time (Cohen et al, 2008)
- Access to topics that might be otherwise unobservable (Cohen et al, 2008)
- Access to sensitive topics, such as dissatisfaction with a service: it can be easier for an interviewee if negative ideas are reported as coming from a group rather than from one single person (Green and Thorogood, 2009)
- Ability to ensure that the data directly target the researcher's topic (Cohen et al, 2008)
- Access to comparisons that focus group participants make between their experiences. This can be very valuable and provide access to consensus/diversity of experiences on a topic (Cohen et al, 2008)
The limitations of focus groups are related to the limitations of group interviews:
- Inappropriate to uncover marginal or deviant opinions (Green and Thorogood, 2009)
- Importance of social norms: participants are influencing each other, creating a certain kind of implicit norm (Baribeau, 2010), or consensus.
- Moreover, group dynamics may contribute to the crystallization of opinions.
- Not easy to organize: several selected people have to be gathered in the same place for a couple of hours.
3.1.3.4 How to plan the research design?
Since focus group interviews are a collective data collection technique requiring direct person-to-person contact (several people have to come together at the same moment and in the same place) a careful planning of all activities and related tasks is necessary.
3.1.3.5 Modalities of data collection
The data collection by focus group could vary according to (Cohen et al, 2008):
- The level of standardization of the questions
- The number of focus groups
- The number of participants in each group
- The level of involvement of the moderator
3.1.3.6 Data collection tools
During the preparation of the focus group interviews, a set of topics or questions is developed and takes the form of a topic list or questionnaire. For the general principles, see here.
A focus group interview is in most cases a group process structured by means of an agenda to keep the group focused and on track. A focus group should be experienced as free-flowing and relatively unstructured, but in reality the moderator must follow a pre-planned script of specific issues and set goals for the type of information to be gathered. An introduction of up to 15 minutes should be carefully planned, as well as a good opening question. In order to keep to the time schedule, since several people will participate and answer the questions, it is important to set a maximum duration for each question.
The use of a well-designed guide is helpful to compare information from one group to another, as more than one focus group is usually conducted for a given topic.
3.1.3.7 Sampling
For general issues on sampling, see “Sampling issues in qualitative research: who and how many?”
Identification of units of analysis
The starting point for selecting participants for focus groups is to identify the unit of analysis. Is the unit of analysis “individuals for their personal opinions/experience/expertise”, or is it “individuals because they represent organizational perspectives”? It has a major impact on the people invited to the focus group interview and therefore it should be clearly described.
The sample of focus groups will consist of groups of people, instead of individuals. People who are invited to take part need to have an interest in the subject.
Composition of the groups
Ideally, groups should be internally homogeneous on criteria relevant to the topic but heterogeneous between groups. Homogeneity in the group capitalizes on people’s shared experiences (Kitzinger, 2006).
It is best to select people who do not know one another, but have similar relationships with the topic being investigated (although it could in practice be difficult for particular topics). Selecting participants who are similar may help them to share ideas more freely and develop an in-depth analysis of a topic (homogeneous groups).
Sometimes, heterogeneous groups can be used after the primary analysis of homogeneous focus groups has started. Heterogeneous groups are used to “confront” diverging opinions. In general terms, heterogeneous groups are composed of representatives of all relevant stakeholders.
In this case, the researcher has to pay attention to potential power differences or inequalities between participants. This may prevent some people from talking freely during the discussion and by consequence prevent the collection of rich data (Kitzinger, 2006).
In the Belgian context, focus group interviews can be carried out with French-speaking, Dutch-speaking and even German-speaking participants. It is advisable to conduct unilingual groups: this is easier and richer for both facilitators and participants. For heterogeneous groups, such as stakeholder samples, it can be difficult to separate people into groups according to their mother tongue. In that particular case, it is important that participants can express themselves in their mother tongue and that every participant understands the other language. The moderator thus has to be perfectly bilingual.
Number of participants per group
A group of six to twelve people is sufficient for a focus group. The ideal size for a focus group is eight to ten respondents. In general, the smaller the group, the more manageable it is. From experience, a group of 6‑8 participants allows enough time for discussion and is easier to manage. Where the purpose is to generate in-depth expression from participants, a smaller group size may be preferable in combination with carrying out more focus groups to attain saturation.
In order to make sure that a group has enough participants, it is advisable to recruit about 25% more people than required (Green and Thorogood, 2009), e.g. inviting ten people when eight participants are needed. If too few participants turn up, an additional focus group should be planned to compensate for the low attendance.
Number of groups
The number of focus group interviews needed depends on the aims and the available resources. It is almost impossible to give clear standardized guidelines on the number of focus groups needed.
It is methodologically important, for both approaches, to conduct at least two focus groups per ‘type of people’. Using only one focus group to arrive at conclusions is risky, since the opinions expressed may have had more to do with the group dynamics (e.g. the persuasive skills of one or two members) than with a true sampling of the opinions of the population that the group represents. Even the preset number of two focus groups is generally too limited for in-depth analyses, especially if the topics discussed are rather “broad” or general (see also the paragraph on the constant comparative method in the analysis section). Having two homogeneous groups that provide different results suggests that more information is necessary (data saturation is not reached). One rule of thumb is to conduct focus groups until they no longer provide any new information on the topic discussed.
3.1.3.8 Human resources necessary
Three people from the research team can be involved in running the focus group interview:
- The moderator (also called ‘facilitator’) plays a crucial role in the success of a focus group interview and can have a major impact on the outcomes of the data collection. He should lay down some ‘rules’, explain the duration of the focus group interview, plan a break in between, make everybody welcome beforehand, and do the paperwork (e.g. informed consent) before actually starting the interview. Before the opening question, it is important to ask everybody to introduce themselves briefly. He has “to establish a relaxed atmosphere, enable participants to tell their stories, and listen actively” (Green and Thorogood, 2009, p. 126). Facilitating or moderating focus group interviews requires particular competencies: interpersonal skills (including non-verbal communication skills) are needed, as well as an unbiased attitude towards the issues discussed. A focus group moderator should be able to keep the discussion on track and make sure every participant is heard. He/she has to be able to summarize what has been said and to structure the discussion. However, he/she should not take a position, make quick assumptions or conclusions, develop answers for the participants or give advice. Focus groups are intended to make in-depth studies of the perceptions, attitudes and opinions of the participants, not of the research team (or moderator). The moderator makes it socially acceptable for participants to have another point of view. If participants get off track or get ahead of the issue being discussed, the moderator must pull the group back together. He/she does not need to be an expert in the domain of the research. The moderator needs to use “probing techniques” when necessary: probing is essentially a means of further investigating a topic that has already been introduced. Probing can be used to clarify, to obtain more detail and to ensure completeness. For this purpose, see also here. In the particular case of focus group interviews, the moderator can use disagreements in the group to encourage participants to develop and elucidate their point of view. An experienced interviewer can decide whether to follow the lead of the interview or to return to the sequence of the interview guide. In the particular case of bilingual groups, the moderator has to master both languages.
- The note-taker takes notes during the discussion while the moderator is introducing the questions. The note-taker can sit next to the moderator. However, if he/she is typing directly on a laptop, make sure that the sound of the keyboard is not disturbing. The moderator and note-taker can take turns asking questions and taking notes (this requires a well-functioning team that clearly understands its roles and can adapt to the situation). It should be discussed and reported whether the same or different persons facilitate the respective focus group interviews.
- The observer is a third facilitator who can be useful to observe the focus group participants (non-verbal language) and to help the moderator identify less talkative participants and keep time.
As focus groups have to be transcribed afterwards, it is also useful to engage the services of an audio typist.
3.1.3.9 Running of data collection
For general principles see “How to run the data collection?”.
In the case of focus groups, once the group of respondents is gathered for the discussion, the moderator should give a brief introduction to set everybody at ease[1]. More concretely, the moderator should:
- Explain the purpose of the discussion, how the information collected will be used and reported.
- Introduce note-taker and observer who will remain in the room during the discussion.
- Explain that the discussion is for scientific purposes and that the information will solely be used within the context of the research.
- Assure participants that the rules of confidentiality apply to everyone in the room, including the note-takers and observers.
- Explain how names will be used (real names or pseudonyms).
- Explain the group rules (speak one at a time, avoid interrupting or monopolizing, etc.).
- If the discussion is to be tape- or video-recorded, obtain permission from the respondents first, and explain how the tapes will be used, stored and eventually destroyed. Tip to increase the quality of the recording: use two recorders, preferably with stereo recording, one at each side of the table; this makes it easier to understand everybody and prevents the loss of data in case one recorder malfunctions.
The moderator will then begin the focus group interview by asking an ‘icebreaker question’ to facilitate the discussion in the group. Afterwards, he/she will move on to the focus of the discussion.
Immediately after the focus group, a debriefing with the moderators/facilitators has to be foreseen. The debriefing is an essential step for the analysis. It is best supported by a template of dimensions on which the moderator/facilitator team comments (example in the appendix).
The facilitators should review the notes taken during the focus group and have a first assessment of clarity and understanding.
They should discuss, compare and record observations or impressions about the group not readily apparent from the notes.
They should also discuss and record any insights or ideas that emerged during the interviews while these are still fresh in the mind.
3.1.3.10 Practical aspects
Preparations for the interview
See also part “How to run the data collection? ”
Location & timing
- The location where the focus groups will be held should be carefully selected.
- Accessibility and transport issues (and mobility needs of participants) should be considered.
- Avoid noisy areas where it will be difficult for participants and the moderator to hear each other.
- The setting should be comfortable, non-threatening for the respondents. Refreshments should be provided.
- The focus group table can be organized beforehand; this allows the researcher to place name tags in the way he/she wants.
- Seating should be arranged to encourage participation and interaction, preferably in a circle, with or without name tags. It can be discussed whether tables are needed. Moderators/facilitators (and note takers) should be integrated as much as possible within the discussion setting.
- The timing of the focus group interview needs to be acceptable for all potential respondents in order to avoid selective “non-response” as much as possible (take into account the socio-demographic profiles of the targeted participants, such as working times, daily activities, family life, etc.).
Duration
The length of the focus group should be between 1 and 3 hours.
Allow sufficient time at the beginning to welcome participants, give them an introduction and let them introduce themselves. This part should not take excessive time (about 10 minutes).
Material
Data are collected through different sources: audio or video-taping can be considered. When focus group interviews are recorded, the equipment should be of good quality and easy to use (check batteries and microphone). For larger groups, it may be necessary to use two tape recorders or multi-channel equipment, strategically placed to maximize the probability of recording contributions from all participants.
“Field notes” are an essential part of data collection. They capture all of the essential “non-verbal” information during the focus group interview.
Information has to be collected in an unbiased manner (avoid filtering out information by pre-interpreting it as unimportant, especially in the first focus groups).
The context of statements made during focus groups should be documented (important for giving meaning to the statements in the phase of analysis).
Try to capture nonverbal behavior of group participants (nonverbal reactions of other participants after a participant statement may indicate consensus or disagreement).
3.1.3.11 Analysis and reporting of findings
For issues on analysis, see “How to analyse the data?”.
In the particular case of focus groups, separate analyses have to be performed on data gathered “within” each focus group and continuously compared “between” focus groups. This is also an iterative process.
It is important that statements be understood in the context in which they were made. Nonverbal communication observed during the interview can also be very informative.
For reporting, see part “How to report qualitative research findings”
Note that findings are reported by focus group as unit of analysis and not by person.
3.1.3.12 Quality criteria
See section part “How to evaluate qualitative research?”
Vermeire et al. propose a checklist specifically designed to critically appraise the quality of focus group research in primary health care (Vermeire et al., 2002).
3.1.3.13 Examples of KCE reports using the method
- Evaluation of the Belgian reference reimbursement system (LePolain et al, 2010)
- Evidence-based content of the written information provided by the pharmaceutical industry to the general practitioner (Van Linden et al, 2007)
- Quality development in general practice in Belgium: status quo or quo vadis ? (Remmen et al, 2008)
- Mental health care reforms: evaluation research of ‘therapeutic projects’ (Schmitz et al, 2010)
- Emergency psychiatric care for children and adolescents (Deboutte et al, 2010)
3.2 Observation
“The purpose of participant observation is partly to confirm what you already know (or think you know) but is mostly to discover unanticipated truths. It is an exercise of discovery” (Mack, 2005, p. 23)
In this chapter we explicitly focus on direct observation rather than participant observation. However, two remarks are in order. First, there is nearly always some participation involved in observing, unless the researcher is hidden behind, for example, a one-way mirror. In all other cases the researcher is present in a setting and hence inevitably becomes part of it. Second, in the KCE context participant observation is unlikely to be applied because it is very time-consuming and intensive, and hence not compatible with KCE working procedures. However, that does not mean that observational techniques are irrelevant to a KCE researcher. They can be very useful, for example in the case of site visits. In the following chapter, although participating is not the main goal, it often enters into the logic and the quotes used.
3.2.1 What is (naturalistic) observation?
Observing is more than looking around, it is actively registering information along a number of dimensions, namely places (physical place or setting), persons (the actors involved) and activities (a series of acts) 83. Observing means having attention for (1) the detail of the observation, (2) visual as well as auditory information, (3) the time dimension, (4) the interaction between people, and (5) making links with mental categories (Mortelmans, 2009).
Observing includes roughly three steps:
- A descriptive step; the researcher enters the research setting and gets a general overview of the social setting.
- A focused step; more focused observations are a step closer to the research question. The aim is to search for relationships or connections between several elements in the research question, for example X is a characteristic of Y, or X is the result of Y. More concretely, suppose a researcher wants to study the way emergency care is organized in Belgium. He would do some descriptive observations in the emergency departments of hospitals to get an idea of the general structures and processes characteristic of emergency care. In a next step he turns to his research question, which is about how cost-effectiveness of emergency care could be attained. Hence the focus of his observation will relate to all possible costs and which of them could be avoided.
- A selective step83; in this last phase, after having analysed his data (field notes), the researcher may have identified a lack of information on one specific category of costs, e.g. cleaning and housekeeping costs, and may therefore decide to do extra observations focused on this specific aspect.
3.2.2 When to use observations?
- To collect data on naturally occurring behaviors in their usual contexts54. Observation also captures the whole social setting in which people function by recording the context in which they live84.
- Unstructured observation illustrates the whole picture, captures context/process and informs about the influence of the physical environment84.
- To check whether what people say they do is the same as what they actually do84. Both what people perceive that they do and what they actually do are however valid in their own right and just represent different perspectives on the data84.
- Observation is also an ongoing dynamic activity that is more likely than interviews to provide evidence for processes, things that are continually moving and evolving84.
- To study the working of organisations and peoples’ roles and functioning within organisations20.
- To uncover behaviours or routines of which the observed themselves are not aware20. What the researcher considers an important finding may belong to the self-evident nature of daily life from the participants’ point of view.
- To understand data collected through other methods (e.g. interviews) and also to design the right questions for those methods54.
3.2.3 What are the strengths and weaknesses of observations?
3.2.3.1 Strengths
A number of strengths have already been described under “When to use observations?”. We could add that:
- Observation has the advantage of capturing data in more natural circumstances84.
- The Hawthorne effect[1] is an obvious drawback, but once the initial stages of entering the field are past, most professionals are too busy to maintain behaviour that is radically different from normal84.
3.2.3.2 Weaknesses
- It can be very difficult to get access to the setting.
An observer is often experienced as a threat, especially if the setting has not asked for the research to take place. Observation (and especially participant observation) might lead to knowledge of informal procedures or rules that people do not want to be uncovered. The researcher can also be experienced or perceived as a barrier to the normal daily routine in the setting10. In direct observation, the researcher does not participate in the setting, hence is known as a stranger and only gets access to the public or formal layer of the social reality. He does not become an insider and will miss inside information because he is too distant from the actors he is observing10. “Access, then, is not a straightforward process of speaking to the person in charge and obtaining the approval of the ethics committee. It usually involves considerable time and effort and a constant endeavour to strive for ‘cultural acceptability’ with the gatekeepers and participants in research sites” (p. 310)84.
- Once inside the setting there is the problem of avoiding “going native”: this means “becoming so immersed in the group culture that the research agenda is lost or that it becomes extremely difficult or emotionally draining to exit the field and conclude the data collection” (p. 183)20.
- Observational data are, more than interview data, subject to interpretation by the researcher. Observers have a great degree of freedom and autonomy regarding what they choose to observe and how they filter the information84.
- Observations are time-consuming and hard work at every possible hour of the day.
- An observer can get emotionally involved in what he observes, and consequently lose his neutrality.
- It is impossible to write down everything that is important while observing (and participating). The researcher must rely on his memory and have the discipline to write down and expand the field notes as soon and as completely as possible54.
[1] The Hawthorne effect is the process where human subjects of an experiment change their behavior, simply because they are being studied http://www.experiment-resources.com/hawthorne-effect.html.
3.2.4 How to plan the research design?
Often observations are carried out at the beginning of the data collection phase, but the method can also be used later on in the research process to address questions suggested by data collected through other methods (Mack, 2005). Before starting the observations, the researcher should try to find out as much as possible about the site where he will be observing.
At the KCE, site visits are common to allow the researchers to become familiar with the research topic and setting. This is often combined with interviews or less formalized talks to key persons on the site. After a number of site visits the scope of the research project is determined and precise research questions are formulated.
3.2.5 Modalities of data collection
3.2.5.1 Participant versus direct observation
The role to adopt during observation and the extent to which participants are fully informed are somewhat intertwined84. Typically researchers refer to Gold’s typology of research roles85:
- The complete observer, who maintains some distance, does not interact and whose role is concealed;
- The observer as participant, who undertakes intermittent observation alongside interviewing, but whose role is known;
- The participant as observer, who undertakes prolonged observation, is involved in all the central activities of the organization and whose role is known;
- The complete participant, who interacts within the social situation, but again whose role is concealed.
Mack et al.54 describe observing as remaining an “outsider” and simply observing and documenting events or behaviors being studied, while participating is taking part in the activity while also documenting it. Pure observing, without participating, is a situation that in fact seldom occurs, because once you are present, you are visible, you influence the activities around you, you participate to some degree. There are two reasons for this participation: either to better understand the local perspective, or in order not to call attention to yourself54.
3.2.5.2 Structured versus unstructured observation
- Structured observations are associated with the positivist paradigm and aim at recording physical and verbal behavior by means of a list of predetermined behaviours84.
- Unstructured observations are not ‘unstructured’ in the sense of unsystematic or messy, “instead, observers using unstructured methods usually enter ‘the field’ with no predetermined notions as to the discrete behaviours that they might observe. They may have some ideas as to what to observe, but these may change over time as they gather data and gain experience in the particular setting. Moreover, in unstructured observation the researcher may adopt a number of roles from complete participant to complete observer, whereas in structured observation the intention is always to ‘stand apart’ from that which is being observed” (p307)84.
3.2.5.3 Overt versus covert observation
Covert observation corresponds to two roles in Gold’s typology85, i.e. complete observer and complete participant (see above). Most authors agree that covert observation is only legitimate in very specific circumstances and should be avoided. Mack et al.54 formulate the following ethical guideline regarding observations: “When conducting participant observation, you should be discreet enough about who you are and what you are doing that you do not disrupt normal activity, yet open enough that the people you observe and interact with do not feel that your presence compromises their privacy.” (p. 16) As with all qualitative research methods, researchers must also protect the identities of the people they observe or with whom they interact, even if informally. “Maintaining confidentiality means ensuring that particular individuals can never be linked to the data they provide”54.
3.2.6 Data collection tools
3.2.6.1 Checklists
Before you enter the setting and start observing, it might be a good idea to have some questions in mind. It may be helpful to carry a checklist in your pocket to help you remember what you are meant to observe54.
3.2.6.2 Fieldnotes
“Fieldnotes are used by researchers to record observations and fragments of remembered speech. Although researchers may use other means of recording (such as video) and other forms of data (such as interview transcripts), fieldnotes remain one of the primary analytic materials used in ethnography.” (p. 82)35.
Depending on the research questions, the researcher will be interested in different aspects of social reality. Mulhall’s schema84 includes the following types of field notes, each covering an aspect of social reality:
- Structural and organizational features – what the actual buildings and environment look like and how they are used
- People – how they behave, interact, dress, move.
- The daily process of activities.
- Special events – in a hospital ward this might be the consultant’s round or the multidisciplinary team meeting.
- Dialogue.
- An everyday diary of events as they occur chronologically – both in the field and before entering the field.
- A personal/reflective diary – this includes both my thoughts about going into the field and being there, and reflections on my own life experiences that might influence the way in which I filter what I observe.
It is particularly important to detail any contradictory or negative cases. Unusual things often reveal most about the setting or situation20.
Documenting observations consists of the following steps54, 86:
- Quick notes during the observation.
- Once the researcher has left the setting, he expands his notes into fieldnotes. This means he reads them through and adds other things he can remember but has not yet written down. Note taking in the setting is not self-evident and it is impossible to write down everything you see. Therefore good note taking should trigger the memory by means of key words, symbols, drawings, etc.
- After expansion, the researcher “translates” his shorthand into sentences.
- Together with the translation phase, a descriptive narrative can be composed. The researcher writes down a description of what happened and what he has learned about the setting. In this step the researcher should distinguish between describing what happened and interpreting it.
The researcher should be well aware of the difference between describing what he observes and interpreting what he observed. Reporting interpretations rather than an objective account of the observations should be avoided54. For example, an interpretive description of a patient could be “he was in terrible pain”. An objective description would be “he was screaming and his face turned pale while grimacing”. “To interpret is to impose your own judgment on what you see” (Mack, 200554, p. 23). The danger is that interpretations can turn out to be wrong. Therefore the researcher should ask her/himself “what is my evidence for this claim?”54. One way of separating descriptions and interpretations is by separating them visually on paper or screen.
3.2.6.3 Maps
Draw a map of the setting or settings you observe: maps might support your memory and are a tool to reconstruct interactions and movements of people in a room.
3.2.6.4 Audio or video
Audio or video recordings of observations are generally not permissible unless all ethical requirements are fulfilled and informed consent has been obtained.
3.2.7 Sampling
As outlined in the general principles of the chapter on interviewing, sampling in qualitative research is seldom statistically based. Also samples of settings or groups to observe are purposive.
Specifically for observation the sampling units are places, locations, and blocks of time, but usually not individuals. The aim is to select ‘information-rich’ cases, but in practice site selection is often a pragmatic decision based on existing networks and accessibility. Ideally however, sites are chosen because they typify some larger population of sites (such as clinics) or perhaps because they are exceptional in some way. Observation methods may be used across multiple sites and one could select the ones representing a range of typical settings (Green et al, 2009).
3.2.8 Human resources necessary
Observations can be the work of one researcher, a pair of researchers, or a whole team. Which arrangement is most appropriate depends on the research questions and the features of the setting. Members of a team can also disperse to different locations individually, in pairs or in groups, in order to construct a more complete picture of the issues being studied.
One of the advantages of team work is that field notes can be compared and that team members can question each other about assertions being made. “Taking another perspective on validity Graneheim et al. (2001) used multiple data collectors with different perspectives (insider or outsider) to observe the same situation. This may not accord with the idea that every researcher may produce a unique account of a situation that is valid in its own right. But with extensive mutual reflection, as undertaken by Graneheim and colleagues, these combined observations may have consensual validity. However, from a practical standpoint few projects are afforded the luxury of multiple data collectors.” (Mulhall, 200384, p. 309).
3.2.9 Practical aspects
- Try to be “invisible” as an observer. Adapt to the setting in which you will do the observations, in terms of dress code, way of behaving, and what is expected from you by the other actors in the setting.
- Start with short observations to explore the field and to get yourself used to your role as observer.
- First you should get an idea of “the normal” way of life in a setting, before you are able to identify unusual or abnormal situations.
- Circumstances may make it difficult or unacceptable to make fieldnotes, hence the researcher has to write down his observations afterwards. This can lead to a memory bias.
- Field notes should not contain interpretations, but merely descriptions.
- There is also the practical problem of how, especially in large and busy social settings, like an emergency department, to inform and obtain consent from everyone who might ‘enter’ the field of observation84.
- Note that once inside the setting it might be difficult to get out again: ending the fieldwork should not happen abruptly. The researcher must take time to “ease out”. In the ‘easing out’ phase the researcher is less and less present in the setting, which leaves more time to analyse the data. When present in the setting, the researcher can confront his preliminary analysis with new observations in the setting10. In the literature the advice is to keep in contact with the setting until the final report is written87.
3.2.10 Analysis
Field notes contain a lot of detail and are highly descriptive. In order to find explanations or answers to the research questions, the researcher should develop categories and test them against hypotheses, and refine them. This is an iterative process that starts during the data collection phase.
3.2.11 Reporting of findings
As with other qualitative research methods it is important that evidence from the data is presented to support the conclusions of the researcher, by means of examples or quotations. The main principles have already been mentioned in (see “How to report qualitative research findings”).
3.2.12 Quality criteria
The quality of observational studies depends largely on the quality of the descriptions of data collection and analysis provided by the researcher. Details about how the research was conducted are crucial and should be well documented: for example, how much time was spent in the field, how typical the recorded events were, what attempts were made to verify the observations, etc.
The general criteria to assess the quality of qualitative research are described here and also apply to observational methods.
3.2.13 Examples of KCE reports using the method
So far no observational studies have been carried out at the KCE.
3.3 Delphi Technique
Consensus-reaching methods generally used in health care are the Delphi panel, the nominal group technique and the consensus conference. They are useful to organize “qualitative judgments” and are “concerned to understand the meanings that people use when making decisions about health care” (Black, 200688, p. 132). They are not as such qualitative methods, because they may use quantitative data collection tools (questionnaires, scales) and quantitative elements in the analysis (statistics).
All the consensus methods cited here are characterized by the provision of information prior to the discussion, privacy (participants express their opinion in private), opportunity for participants to change their view and explicit and transparent derivation of the group decision, based on (statistical) analysis88.
3.3.1 Description of the method
The Delphi method (named after the oracle of Delphi) was initiated by the RAND Corporation, a nonprofit institution that helps improve policy and decision making through research and analysis[a]. The original definition given in the 1950s was that it “entails a group of experts who anonymously reply to questionnaires and subsequently receive feedback in the form of a statistical representation of the "group response," after which the process repeats itself. The goal is to reduce the range of responses and arrive at something closer to expert consensus.”89 Today, the method has evolved and Delphi surveys can aim at different goals or have several designs[b]. It can be defined more as “a method for structuring a group communication process” and not as a method to produce consensus90. The method can also be defined as “a systematic collection and aggregation tool of informed judgment from a group of experts on specific questions and issues” (Hasson, 201191, p. 1696).
Delphi surveys are used in several domains (politics, psychology, agriculture, etc.) and could vary in different ways. Several types of Delphi often used in health research (non exhaustive) are presented in Table 10.
Table 10 – Types of Delphi designs
Design Type | Aim | Target panellists | Administration | Number of rounds | Round 1 design |
Classical | To elicit opinion and gain consensus | Experts selected based on aims of research | Traditionally postal | Employs three or more rounds[3] | Open qualitative first round, to allow panelists to record responses |
Modified | Aim varies according to project design, from predicting future events to achieving consensus | Experts selected based on aims of research | Varies, postal, online, etc. | May employ fewer than 3 rounds | Panelists provided with pre-selected items, drawn from various sources, within which they are asked to consider their responses |
Decision | To structure decision-making and create the future in reality rather than predicting it | Decision makers, selected according to hierarchical position and level of expertise | Varies | Varies | Can adopt similar process to classical Delphi |
Policy | To generate opposing views on policy and potential resolutions | Policy makers selected to obtain divergent opinions | Can adopt a number of formats, including bringing participants together in a group meeting | Varies | Can adopt a similar process to the classical Delphi, or the research team pre-formulates the obvious issues in the first round |
Real time/consensus conference | To elicit opinion and gain consensus in real time | Experts selected based on aims of research | Computer technology used by the panelists in the same room to achieve consensus in real time rather than by post or via the Internet94 | Varies | Can adopt a similar process |
Adapted from Hasson, 201191, p. 1697 and Keeney, 201195
[b] See the special issue 78 of the journal ‘Technological Forecasting & Social Change’ (2011), available at http://www.journals.elsevier.com/technological-forecasting-and-social-c….
[3] Note that the number of rounds should ideally be based on the saturation of the responses and is difficult to fix in advance
3.3.2 Specific questions suitable for the method
The following questions could be answered by using a consensus reaching method such as the Delphi panel:
- To help the decision making process.
- When personal contact is not necessary96.
- To choose the most appropriate method or tool (e.g. data collection technique, scales, questionnaires, etc.).
- To identify the best choice of treatment (when no other evidence is available or to complete it).
- To identify the form of a programme.
- To clarify professional roles97.
- To develop clinical guidelines98.
3.3.3 Strengths and weaknesses of the method
3.3.3.1 Strengths
- Lower production cost99.
- Relatively rapid results99.
- Participants can express their opinion anonymously96, without (perceived) external pressure, while the process allows the view of the entire group to be captured96.
- Avoids domination by individuals or professional interests97.
3.3.3.2 Weaknesses
- Success depends on the qualities of the participants.
- Reliability increases with the number of participants (and the number of rounds), but it is difficult to keep everybody on board in successive rounds96.
- Coordination is difficult96.
- The existence of a consensus does not necessarily mean that it reflects an appropriate or “correct” answer97.
3.3.4 How to plan the research design?
A Delphi survey takes several weeks, even if the number of participants is small.
It therefore has to be planned at the beginning of the project; if the necessity to conduct such a study only appears later in the course of the project, it is important to realize that the whole process takes several weeks, depending on the number of rounds needed. The next figure illustrates the whole process and the time needed.
Figure 5 – The Delphi process
Adapted from Slocum et al.93
3.3.5 Modalities of data collection
A Delphi can be administered ‘paper-and-pencil’, by mail or e-mail.
Online Delphis are increasingly carried out. Software is available to support the data collection and the analysis (Delphi_Survey_Web (DSW)100, Mesydel©101).
The number of rounds is not necessarily defined a priori, although it often is because of budgetary, time or human resources limitations: ideally, data collection stops when saturation or consensus is reached.
3.3.6 Data collection tools
The Delphi method uses iterative (e-)mailed questionnaires in successive rounds. Because there is no interaction between the respondent and the researcher, the formulation of the questions has to be clear, and definitions should be given where necessary.
The questionnaire of the first round encompasses open-ended questions, to identify items to include in the second round.
Subsequent rounds can be exclusively qualitative, composed of closed questions with scales (from totally agree to totally disagree, e.g. from 1 to 9), or a combination of qualitative and quantitative questions. They present a synthesis of the results from the previous round.
In the case of closed questions, agreement is usually summarized using the median, and consensus is assessed by presenting interquartile ranges for continuous numerical scales97. Graphical presentations of the results are welcome.
In KCE reports the questionnaires used in each round are presented in appendices.
3.3.7 Sampling
Participants have to be carefully chosen because of their expertise, experience or knowledge in the field of the research question. In addition, the variety of positions in the field and of opinions regarding the subject should be covered. To that end, lay people can be added to increase the variety of viewpoints102.
They can be identified through publicly available bibliographic information102. Snowball recruitment can be useful to secure agreement to the panelist invitation and to strengthen panelist retention102.
There is no practical limit to the number of participants in a Delphi survey89.
3.3.8 Human resources necessary
The administrator of the survey develops the questionnaires, identifies, mobilizes and recruits participants, analyses the findings and reports them. He/she is responsible for keeping the attrition rate low and for ensuring the coherence between the different steps of the method.
Administrative support could be needed to (e-)mail the questionnaires and manage reminders and answers.
3.3.9 Practical aspects
- It is important to clearly explain the goal of the questionnaire and the way it will be analysed. The drafting of the invitation/introduction letter is thus crucial. Stressing the practical policy application of the Delphi results to the expert panelists can aid their retention (Rowe, 2011102, p. 1489).
- The research team should have management skills to follow up the mailings and the returned questionnaires.
- The use of online tools can be very useful both for the research team (rapid results) and for the participants.
- While anonymity in the Delphi process is required, “using social rewards for recognition in participation, such as subsequently publishing panel membership listings” (Rowe, 2011102, p. 1489) could improve panelist recruitment and retention.
3.3.10 Analysis
Each step of the Delphi requires a specific analysis.
In a classical Delphi, the open-ended questions from round 1 should be content-analysed ‘in order to group statements generated by the expert panel into similar areas’95.
Rounds that use closed questions should be statistically analysed. Summary statistics are used to decide whether or not consensus is reached. The level of consensus has to be defined in advance (e.g. 70% agreement).
There is no agreement on the threshold indicating a consensus, nor on how to choose this threshold95. Each researcher has to reflect on it, case by case.
The proposals that have reached consensus should be eliminated from the next round.
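As an illustration only, the short sketch below (in Python) shows how the ratings of one closed-question round could be summarized per statement with the median, the interquartile range and the proportion of panelists agreeing, and how each statement could be flagged against a consensus threshold fixed in advance. The statements, ratings, the 1–9 scale, the ‘rating of 7 or more counts as agreement’ rule and the 70% threshold are hypothetical choices made for the example, not a prescribed standard.

```python
# Minimal sketch of the statistical summary of a closed-question Delphi round.
# The statements, ratings and consensus rule below are illustrative only; each
# research team must define its own scale and threshold before the analysis.
from statistics import median, quantiles

ratings = {  # hypothetical statements with panelists' ratings on a 1-9 scale
    "Statement 1": [8, 9, 7, 8, 9, 8, 7, 9],
    "Statement 2": [3, 9, 5, 2, 8, 4, 6, 7],
}

CONSENSUS_THRESHOLD = 0.70  # proportion of agreement defined in advance

for statement, scores in ratings.items():
    q1, _, q3 = quantiles(scores, n=4)      # first and third quartiles
    med = median(scores)                    # central tendency of the panel
    agreement = sum(s >= 7 for s in scores) / len(scores)
    consensus = agreement >= CONSENSUS_THRESHOLD
    print(f"{statement}: median={med}, IQR={q3 - q1:.1f}, "
          f"agreement={agreement:.0%}, consensus={'yes' if consensus else 'no'}")
```

Statements flagged as having reached consensus would then be removed from the questionnaire of the next round, as described above, while the others are fed back to the panel together with the summary statistics.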
3.3.11 Reporting of findings
Intermediary results are reported directly in the successive questionnaires.
All the consensus and dissensus items are listed and discussed at the end of the process.
3.3.12 Quality criteria
No consensus seems to exist with regard to the standard of methodological rigour to apply, and “no definitive evidence exists which demonstrates the reliability or validity of the technique” (Keeney, 201195, p. 104). This is partly due to the variety of Delphi surveys and the constant evolutions in this field91.
We have not identified any checklists to assess the quality of a Delphi survey.
However, the following aspects of the survey could be assessed (adapted from Jillson103 and Hasson91):
- Applicability of the method to the specific research problem
- The quality of the composition of the Delphi panel. Participants have to be carefully chosen according to their expertise and position in the group.
- Design and administration of the questionnaire
- Feedback
A Delphi survey should be reviewed in terms of reliability, validity and trustworthiness to judge its worth91.
3.3.13 Examples of KCE reports using the method
- Impact of academic detailing on primary care physicians104
- Burnout among general practitioners: prevention and management72
- Methods for including public preference values in reimbursement decision making processes for health interventions. Exploration of the feasibility of different models in Belgium (ongoing project, publication foreseen end 2012)
3.3.14 Basic references
For practical tips, see the report of the King Baudouin Foundation, available in French, Dutch and English93.
4. How to analyse?
4.1. Aim of the qualitative data analysis
The aim of this process note is to give an overview and brief description of approaches useful for qualitative data analysis in the context of KCE projects. It will not provide one recipe, but rather a range of perspectives, ways of looking at the data. Depending on the research aim and questions some perspectives are more suited than others.
4.2. Definition
“Qualitative data analysis (QDA) is the range of processes and procedures whereby we move from the qualitative data that have been collected into some form of explanation, understanding or interpretation of the people and situations we are investigating”. (Lewins et al. 2010)
In general qualitative data analysis means moving from data to meanings or representations. Flick (Flick 2015) defines qualitative data analysis as follows:
“The classification and interpretation of linguistic (or visual) material to make statements about implicit and explicit dimensions and structures of meaning-making in the material and what is represented in it” (p. 5).
The aims of qualitative data analysis are multiple, for example:
- To describe a phenomenon in some or greater detail
- To compare several cases (individuals or groups) with focus on what they have in common or on the differences between them
- To explain a phenomenon or gain insight in a problematic situation
- To develop a theory of a phenomenon
There are several ways to analyze textual data. “Unlike quantitative analysis, there are no clear rules or procedures for qualitative data analysis, but many different possible approaches” (Spencer et al. 2014, p. 270). “Qualitative analysis transforms data into findings. No formula exists for that transformation. Guidance, yes. But no recipe.” (Patton 2002)
Alternative traditions vary in terms of basic epistemological assumptions about the nature of the inquiry, the status of the researcher, and the main focus and aims of the analytic process (Spencer et al. 2014, p. 272). Generally speaking, the analysis process begins with data management and ends with abstraction and interpretation, moving from organizing the data and describing them to explaining them (Spencer et al. 2014).
According to Spencer et al. (2014), the hallmarks of rigorous and well-founded substantive, cross-sectional qualitative data analysis are:
- Remaining grounded in the data
- Allowing systematic and comprehensive coverage of the data set
- Permitting within- and between-case searches
- Affording transparency to others
4.3. “Methods”, “traditions” and “approaches” in qualitative analysis
Many concepts and terms are used by qualitative researchers. They are not always standardized and we find it useful to clarify the ones we will use in this process note. This part is therefore not exhaustive. We are largely inspired by Paillé and Mucchielli (Paillé and Mucchielli 2011) and have translated their terminology.
4.3.1 Generic methods for analyzing
Globally, a generic method of analysis can be used in many situations to answer the questions: how to analyze the data? How to get at the meaning of the data? It encompasses the technical and intellectual operations and manipulations that help the researcher to capture the meanings.
- Technical operations for analyzing are processes, operations and management of the data such as transcriptions, cutting of the text, putting it in tables, etc.
- Intellectual operations for analyzing consist of the transposition of terms in other terms, intuitive groupings, confrontation, induction …
Classically, three generic methods of analysis are used in qualitative health (care) research, each of them using specific tools:
- The phenomenological examination of the empirical data, aiming to report the authentic comprehension of the material
- The thematic analysis, i.e. the creation and the refinement of categories to give a global picture of the material
- The analysis using conceptualising categories, aiming at the creation and the refinement of categories to go beyond description and reach conceptualization
4.3.2 Specific traditions
Specific traditions are embedded in the generic methods used in health(care) research we described. We give an example for each of them:
4.3.2.1 Phenomenology
Phenomenology focuses on “how human beings make sense of experience and transform experience into consciousness, both individually and as shared meaning” (Patton 2015, p. 115). Phenomenology is about understanding the nature or meaning of everyday life. In-depth interviews with people who have directly experienced the phenomenon of interest are the most commonly used data collection technique. Phenomenology in qualitative research goes back to a philosophical tradition that was first applied to social science by Edmund Husserl to study people’s daily experiences.
Phenomenology will not be developed into detail, because it is less relevant to KCE projects.
4.3.2.2 Framework analysis
Framework analysis has been developed specifically for applied or policy relevant qualitative research, and is a deductive research strategy. In a framework analysis the objectives of the investigation are set in advance. The thematic framework for the content analysis is identified before the research or the qualitative research part in the project sets off.
The decision on using frameworks when analyzing data is closely related to the question of the purpose for which the qualitative material will be used in the overall research strategy. “Frameworks” are generally deduced from hypotheses or theoretical frameworks: e.g. if the aim of a focus group is to get a picture of stakeholders’ interests and potentially conflicting perspectives on a health care issue, and the focus group tries to grasp how stakeholders develop power plays or influence strategies to set agendas, a conceptual framework on decision-making processes and power play will serve as a useful tool to orient data collection and data analysis.
Applying framework analysis concretely means that the themes emerging from the data are placed in the framework defined a priori. The framework is systematically applied to all the data. Although an analytical framework can be very useful, it is not suited if the aim is to discover new ideas, since a framework or grid can be blinding (Paillé and Mucchielli 2011).
For the specificity of the analysis of data according to this method see Framework analysis
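As a purely illustrative sketch of what ‘placing the data in the framework’ can look like in practice, the fragment below charts coded extracts into a simple case-by-theme matrix, using a priori themes inspired by the stakeholder example above. The respondents, themes and summaries are hypothetical, and the sketch only shows the bookkeeping side of the work, not the interpretive analysis itself (which is normally supported by dedicated qualitative data analysis software).

```python
# Minimal sketch of charting coded data into an a priori analytical framework.
# Framework themes, respondents and summaries are hypothetical examples.
framework_themes = ["Stakeholder interests", "Influence strategies", "Agenda setting"]

coded_data = [  # (respondent, framework theme, summary of what was said)
    ("Stakeholder A", "Stakeholder interests", "Fears a loss of budget for the sector"),
    ("Stakeholder A", "Agenda setting", "Wants the topic on the next policy agenda"),
    ("Stakeholder B", "Influence strategies", "Builds coalitions with patient organisations"),
]

# Build a matrix with one row per respondent and one column per a priori theme.
matrix = {}
for respondent, theme, summary in coded_data:
    row = matrix.setdefault(respondent, {t: [] for t in framework_themes})
    row[theme].append(summary)

for respondent, row in matrix.items():
    print(respondent)
    for theme in framework_themes:
        print(f"  {theme}: {'; '.join(row[theme]) or '-'}")
```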
4.3.2.3 Grounded theory
Grounded theory was developed by Glaser and Strauss in the late 1960s as a methodology for extracting meaning from qualitative data. Typically, the researcher does not start from a preconceived theory, but allows the theory to emerge from the data (Durant-Law 2005). Hence grounded theory is an inductive rather than a deductive methodology. Emergence is also a key assumption in grounded theory: data, information and knowledge are seen as emergent phenomena that are actively constructed. They can only have meaning when positioned in time, space and culture (Durant-Law 2005).
The power of grounded theory lies in the depth of the analysis. Grounded theory explains rather than describes and aims at a deep understanding of phenomena (Durant-Law 2005). Key to grounded theory is the emphasis on theory as the final output of research. Other approaches may stop at the level of description or interpretation of the data (e.g. thematic analysis).
Grounded theory is a complete method, a way of conceptualizing a qualitative research project.
For the specificity of the analysis of data according to this method see Data analysis in the Grounded Theory
4.3.3 Inductive versus deductive approaches
The approach chosen depends largely on the design and the aims of the research. Some designs and/or research questions require an inductive, others a deductive approach. Inductive means that themes emerge from the data, while deductive implies a pre-existing theory or framework which is applied to the data. Qualitative data analysis tends to be inductive, which means that the researcher identifies categories in the data, without predefined hypotheses. However, this is not always the case. A qualitative research analysis can also be top down, with predefined categories to which the data are coded, for example a priori concepts can be adopted from the literature or a relevant field. Framework analysis can be used this way.
The next table shows how the different methods, approaches and types of coding relate to each other.
Generic methods, specific methods/traditions, approaches and types of coding for qualitative analysis
Generic method | Specific method/tradition | Approach | Type of coding |
Phenomenological examination of the empirical data | Phenomenology | Inductive | Statements |
Thematic analysis | Descriptive analysis / Framework analysis | Mainly inductive / Mainly deductive | Themes |
Analysis using conceptualizing categories | Grounded Theory | Mainly inductive / Mainly deductive | Conceptualizing categories |
4.4. The analytic journey
As in any research method, analyzing the collected data is a necessary step in order to draw conclusions. Analyzing qualitative data is neither a simple nor a quick task. Done properly, it is systematic and rigorous, and therefore labor-intensive and time-consuming: “[…] good qualitative analysis is able to document its claim to reflect some of the truth of a phenomenon by reference to systematically gathered data”; in contrast, “poor qualitative analysis is anecdotal, unreflective, descriptive without being focused on a coherent line of inquiry” (Fielding 1993; Pope et al. 2000, p. 116). Qualitative analysis is a matter of deconstructing the data in order to construct an analysis or theory (Mortelmans 2009).
The ways and techniques to analyze qualitative data are not easy to describe, as the task requires a lot of “Fingerspitzengefühl”, and it is unrealistic to expect a kind of recipe book which can be followed in order to produce a good analysis. Therefore what we present here is a number of hands-on guidelines which have proven useful to others.
The difficulty of qualitative analysis lies in the lack of standardization and the absence of a universal set of clear-cut procedures that fit every type of data and could be applied almost automatically. Also, there are several methods/approaches/traditions for taking the analysis forward (see table). These move from inductive to more deductive, but in practice the researcher often moves back and forth between the data and the emerging interpretations. Hence induction and deduction are often used in the same analysis. Elements from different approaches may also be combined in one analysis (Pope and Mays 2006).
Different aims may also require different depths of analysis. Research can aim to describe the phenomena being studied, or go on to develop explanations for the patterns observed in the data, or use the data to construct a more general theory (Spencer et al. 2014). Initial coding of the data is usually descriptive, staying close to the data, whereas labels developed later in the analytic process are more abstract concepts (Spencer et al. 2014).
“The analysis may seek simply to describe people’s views or behaviors, or move beyond this to provide explanation that can take the form of classifications, typologies, patterns, models and theories” (Pope and Mays 2006, p. 67).
The two levels of analysis can be described as follows:
- The basic level is a descriptive account of what was said (by whom) related to particular topics and questions. Some texts refer to this as the “manifest level” or type of analysis.
- The higher level of analysis is interpretative: this is the level of identifying the “meanings”. It is sometimes called the latent level of analysis. This second level of analysis can to a large degree be inspired by theories.
The selected approach is part of the research design, hence chosen at the beginning of the research process.
In what follows we describe a generic theoretic process for qualitative data analysis.
Figure: Conceptual representation of the analytic journey of qualitative data with an inductive approach
Each theoretical approach adds its own typical emphases. The most relevant approaches are described in the next section. These steps could also be useful in the processing of qualitative data following a system thinking method.
Step 0: Preparing the data for analysis
Independently of the methodological approach, a qualitative analysis always starts with the preparation of the gathered data. Ideally, to enable accurate data analysis, the recorded information is transcribed. A transcript is the full-length literal text of the interview. It often produces a lot of written text.
Good-quality transcribing is not simply transferring words from the tape to the page. The wording is only part of the message: a lot of additional information is to be found in the way people speak. Tone, inflection and the timing of reactions are important indicators too. With experienced observers and note-takers, a thematic analysis of the notes taken during the interviews can be used as a basis for the analysis of the “non-verbal” communication.
Transcribing is time-consuming and costly. The research team should consider in advance who should do the transcribing. Resources may be needed to pay an audio typist, a strategy usually more cost-effective than having a researcher do it. Be aware that “typists” are often unfamiliar with the terminology or language used in the interviews or focus groups, which can lead to mistakes and/or prolong the transcribing time.
It may not be essential to transcribe every interview or focus group. It is possible to use a technique known as tape and notebook analysis, which means taking notes from a playback of the tape-recorded interview and triangulating them with the notes taken by the observers and note-takers. However, bias can occur if inexperienced qualitative researchers attempt tape and notebook analysis. It is certainly preferable to produce full transcripts of the first few interviews. Once the researcher becomes familiar with the key messages emerging from the data, tape analysis may be possible. Transcripts are especially valuable when several researchers work with the same data.
Step 1: Familiarization
Researchers immerse themselves in the data (interview transcripts and/or field notes), mostly by reading through the transcripts, gaining an overview of the substantive content and identifying topics of interest (Spencer et al., 2014). In doing so, they become familiar with the data.
Step 2: Coding the data - Construction of initial categories
By reading and re-reading the data in order to develop a profound knowledge of them, an initial set of labels is identified. This step is very laborious (especially with large amounts of data). Pieces of text are coded, i.e. given a label or a name. Generally, in the qualitative analysis literature, “data coding” refers to this data management. However, data coding can refer to different levels of analysis.
Here are some commonly used terms (Paillé and Muchielli, 2011):
Label:
Labeling a text or part of a text is the identification of the topic of the extract, not of what is said about it. “What is the extract about?” The labels allow a first classification of the documents/extracts. They are useful in a first quick reading of the corpus.
Example: “Familial difficulties”
Code:
The code is the numerical/truncated form of the label. This tool is not very useful in qualitative data analysis.
Example: “Fam.Diff.”
Theme:
The theme goes further than the label. It requires a more attentive reading.
“What is the topic more precisely?”
Example: “Difficulties to care for children”
Statement:
Statements are short extracts, short syntheses of the content of the extract. “What is the key message of what is said?”, “What is told?”
The statement is more precise than the theme because it summarizes, reformulates or synthesizes the extract. Statements are mainly used in phenomenology.
Example: The respondent says that she has financial difficulties because she has to spend time and money to take care of her children.
Conceptualizing category:
Conceptualizing categories are the substantive designations of phenomena occurring in the extract of the analyzed corpus. Hence, this approaches theory construction.
Example: “Parental overload”
These types of coding terms are generally more specific to certain types of qualitative data analysis methods (Paillé and Muchielli, 2011).
By coding qualitative data, meanings are isolated with a view to answering the research question. One piece of text may belong to more than one category or label; hence there is likely to be overlap between categories. Major attention should be paid to “rival explanations” or alternative interpretations of the data.
For further detailed information on coding qualitative data:
Saldaña J. The coding manual for qualitative researchers. 2nd edition ed. London: Sage Publications; 2013.
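To make the different levels of coding more tangible, the sketch below stores one coded extract as a simple data structure, reusing the ‘familial difficulties’ example above. The field names, the invented extract text and the way of storing the codes are illustrative only; in practice this bookkeeping is usually handled by dedicated qualitative data analysis software.

```python
# Minimal sketch of how one coded extract could be stored, reusing the
# "familial difficulties" example above. The extract text is invented and the
# field names are illustrative; dedicated software normally handles this.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CodedExtract:
    source: str                   # interview or focus group identifier
    text: str                     # verbatim extract from the transcript
    label: str                    # what the extract is about
    theme: str                    # the more precise topic
    statement: str                # short synthesis of what is said
    categories: List[str] = field(default_factory=list)  # conceptualizing categories

extract = CodedExtract(
    source="Interview 3",
    text="I had to stop working for a while to look after the children...",
    label="Familial difficulties",
    theme="Difficulties to care for children",
    statement="The respondent has financial difficulties because she has to "
              "spend time and money to take care of her children.",
    categories=["Parental overload"],
)

# One piece of text may belong to more than one label or category, hence the list.
print(extract.label, "->", extract.categories)
```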
Step 3: Refine and regroup categories
In a third step the categories are further refined and reduced by being grouped together. “While reading through extracts of the data that have been labelled in a particular way, the researcher assesses the coherence of the data to see whether they are indeed ‘about the same thing’ and whether labels need to be amended and reapplied to the data” (Spencer et al. 2014, p. 282).
Word processors or software for qualitative data analysis will prove very helpful at this stage.
Step 4: Constant comparison
During the analysis the researcher constantly compares the constructed categories with new data, and the new categories with already analyzed data. This results in a kind of inductive cycle of constant comparison to fine-tune the categories and concepts arising from the data. NB: in the particular case of focus groups, separate analyses have to be performed on data gathered “within” each focus group and continuously compared “between” focus groups. This is also an iterative process.
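As a very simplified illustration of the ‘between focus group’ part of this comparison, the sketch below records, for each theme, the focus groups in which it was coded, so that the focus group remains the unit of comparison. The group names and themes are hypothetical, and the real comparison is of course interpretive rather than a mere tally.

```python
# Minimal sketch of a "between focus group" comparison: for each theme, list
# the focus groups in which it was coded. Group names and themes are hypothetical.
from collections import defaultdict

coded_extracts = [  # (focus group, theme) pairs produced during coding
    ("FG1", "Waiting times"),
    ("FG1", "Communication with the GP"),
    ("FG2", "Waiting times"),
    ("FG3", "Cost of care"),
]

groups_by_theme = defaultdict(set)
for group, theme in coded_extracts:
    groups_by_theme[theme].add(group)

for theme, groups in sorted(groups_by_theme.items()):
    print(f"{theme}: raised in {', '.join(sorted(groups))}")
```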
(Step 5): New data collection
New data collection may also be necessary to verify new points of view or insights emerging from the analysis.
Before moving to the more interpretive stage of analysis, the researchers may decide to write a description for each subtheme in the study (Spencer et al., 2014).
Step 6: Abstraction and interpretation
“Taking each theme in turn, the researcher reviews all the relevant data extracts or summaries, mapping the range and diversity of views and experiences, identifying constituent elements and underlying dimensions, and proposing key themes or concepts that underpin them. The process of categorization typically involves moving from surface features of the data to more analytic properties. Researchers may proceed through several iterations, comparing and combining the data at higher levels of abstraction to create more analytic concepts or themes, each of which may be divided into a set of categories. Where appropriate, categories may be further refined and combined into more abstract classes. Dey (1993) uses the term ‘splitting’ and ‘slicing’ to describe the way ideas are broken down and then recombined at a higher level – whereas splitting gives greater precision and detail, slicing achieves greater integration and scope. In this way, more descriptive themes used at the data management stage may well undergo a major transformation to form part of a new, more abstract categorical or classificatory system” (Spencer et al., 2014, p. 285). At this stage typologies can be created.
Step 7: Description of the findings and reporting
Findings can be presented in a number of ways; there is no specific format to follow.
When writing up findings issued from interviews or texts qualitative researchers often use quotes. Quotes are useful in order to (Corden and Roy 2006):
- Illustrate the themes emerging from the analysis.
- Provide evidence for interpretations, comparable to the use of tables of statistical data appearing in reports based on quantitative findings.
- Strengthen the credibility of the findings (despite critics arguing that researchers can always find at least one quote to support any point they might wish to make).
- Deepen understanding. The actual words of a respondent could sometimes be a better representation of the depth of feeling.
- Give voice to research participants. This enables participants to speak for themselves and is especially relevant in a participatory paradigm.
- Enhance readability by providing some vividness and sometimes humour: breaking up long passages of text by inserting spoken words can help to keep the reader focused, but there is a danger in moving too far towards a journalistic approach.
Ideally, quotes are anonymous and are accompanied by a pseudonym or a description of the respondent. For example, in research about normal birth, this could be: (Midwife, 36 years). There are however exceptions to the rule of anonymity, e.g. stakeholder interviews, in which the identity of the respondent is important for the interpretation of the findings. In that case the respondent should self-evidently be informed and his agreement is needed in order to proceed.
Also in terms of layout, quotations should be distinguished from the rest of the text, for example by using indents, an italic font or quotation marks. Quotes are used to strengthen the argument, but should be used sparingly and in function of the findings. Try to choose quotes in such a way that all respondents are represented. Be aware that readers might give more weight to themes illustrated with a quotation.
When the research is conducted in another language than the language of the report in which the findings are presented, quotes are most often translated. “As translation is also an interpretive act, meaning may get lost in the translation process” (van Nes et al., p. 313). It is recommended to stay in the original language as long and as much as possible and to delay the use of translations to the stage of writing up the findings (van Nes et al.).
KCE practice is to translate quotes only for publications in international scientific journals, but not for KCE reports. Although KCE reports are written in English, inserted quotes are in Dutch or French to stay close to the original meaning. The authors should pay attention to the readability of the text and make sure that the text without the quotes is comprehensible to English-speaking readers.
So far, this general atheoretical procedure reflects what the literature calls the general inductive approach to analysing qualitative data. It does not aim at the construction of theories, but at the mere description of emerging themes. It provides a simple, straightforward approach for deriving findings in the context of focused research questions, without having to learn an underlying philosophy or the technical language associated with other qualitative analysis approaches (Thomas, 2006).
4.5. Three ways to analyse qualitative data
4.5.1 An analysis with (predefined) themes: a deductive approach
Adapted from Paillé and Mucchielli, 2011.
Thematic analysis is a data-reduction process. It is not a deep analysis; it rather describes the topic(s) appearing in the corpus. ‘Thematization’ is a preliminary step in all types of qualitative data analysis. It consists of transposing the corpus into a number of themes derived from the analysed content, in line with the research problem.
The first step is locating the themes, i.e. listing all the themes pertinent to the research question. The second step is documenting them: identifying the importance of specific themes, repetitions, cross-links, what goes together, what stands in opposition…
What is a theme?
Adapted from Paillé and Mucchielli, 2011.
In a thematic analysis, the analyst seeks to identify and organize themes in the corpus; we call this process the ‘thematization’ of the corpus. A theme is a short set of words identifying what is covered in the corresponding extract of the corpus, while indicating the substance of what is said. The extract of text is called a ‘unit of signification’, i.e. the sentence(s) linked to a single idea, topic or theme. Inference is the transformation of a unit of signification into a theme.
How to define and assign pertinent themes?
Adapted from Paillé and Mucchielli, 2011.
The definition of the themes depends on the framework of the research and the expected level of generality or inference.
Indeed, the analysis will be carried out in a specific framework, i.e. the aim of the research, and with a certain orientation and some presuppositions. These are directly linked to the data collection and the position of the analyst.
The definition of the themes will depend on the data collection:
Once a researcher is ready to start the thematization, (s)he has already taken many steps: defining the problem(s), focusing the study, defining objectives, preparing the data collection, writing the interview guide, interacting with participants and perhaps reorienting the research or opening new avenues. Many sources have thus already oriented the work and should be highlighted and explained again before the analysis starts. For example, thematization will not be the same if you search for “representations” as when you search for “strategies”, or if you analyse psychological responses rather than the social environment, etc.
The definition of the themes will depend on the position of the researcher
Each analyst has a certain theoretical background, due to his/her training, previous research, theoretical knowledge, etc. These elements influence the way he/she reads and analyses the corpus and therefore chooses the themes applied to it. On the one hand, (s)he has a certain sensitivity that increases with reading, research experience and reasoning, and that also improves during the analysis of the corpus itself. On the other hand, (s)he enriches his/her theoretical repertoire with new concepts, models, etc.
To proceed with the analysis, it is important to clearly delimit each theme and label it with a precise formulation. It is easier to begin with a low level of inference, i.e. to stay as close as possible to the text or the interview without reproducing it verbatim. Interpretation, theorization or drawing out the essence of an experience are not the objectives of a thematic analysis; its product is a list and synthesis of the relevant themes appearing in a corpus.
The risk that different analysts end up with different themes cannot be excluded; it is in fact natural and foreseeable. However, it will be limited if everyone adopts the same position with the same goal, i.e. thematization, and nothing else.
The inference follows this reasoning: because of the presence of this or that element or indication in the extract, it is possible to assign it the theme “X”. A theme that appears only once is not necessarily unimportant.
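As a minimal sketch only (the field names and the example below are invented; this is not part of the method itself), a unit of signification and the inference that assigns it a theme could be recorded as follows:

from dataclasses import dataclass

@dataclass
class UnitOfSignification:
    source: str        # e.g. "Interview A"
    lines: tuple       # location of the extract in the transcript
    extract: str       # the sentence(s) carrying one idea, topic or theme
    theme: str         # the theme assigned by inference
    indication: str    # the element in the extract that justifies the assignment

unit = UnitOfSignification(
    source="Interview A",
    lines=(12, 29),
    extract="I always double-check the dosage with a colleague before administering.",
    theme="informal safety strategies",
    indication="routine double-check with a colleague",
)
print(f"{unit.source} (lines {unit.lines[0]}-{unit.lines[1]}) -> {unit.theme}")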
The thematic tree
The thematic analysis will build a thematic tree.
This is a synthetic and structured representation of the analysed content: themes are grouped into main themes, which are subdivided schematically into subsidiary themes and sub-themes.
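A thematic tree can be kept on paper or in software; as a non-prescriptive sketch (the themes below are invented), a nested Python dictionary is enough to hold main themes, subsidiary themes and sub-themes:

# Invented example themes; only the structure matters here.
thematic_tree = {
    "experience of care": {
        "relation with caregivers": ["trust", "continuity of care"],
        "information received": ["timing of information", "comprehensibility"],
    },
    "organisational context": {
        "workload": ["staffing levels", "time pressure"],
    },
}

def print_tree(tree, indent: int = 0) -> None:
    """Print the thematic tree with one indentation level per depth."""
    for theme, subtree in tree.items():
        print("  " * indent + "- " + theme)
        if isinstance(subtree, dict):
            print_tree(subtree, indent + 1)
        else:  # a flat list of sub-themes
            for sub in subtree:
                print("  " * (indent + 1) + "- " + sub)

print_tree(thematic_tree)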
Technical aspects in the coding
Adapted from Paillé and Mucchielli, 2011.
In order to carry out a thematic analysis, several technical choices have to be made:
a) The nature of the support: paper or (specialized) software [see further ADD CROSSREF]
b) The mode of annotation of the themes (linked to the choice of software):
The most commonly used modes are:
- Annotation in the margin
- Annotation inserted in the text itself, next to the extract (possibly with a colour code)
- Annotations kept in separate files, one per theme, recording the source (e.g. interview A) and the extract (e.g. lines 12-29); in this case there is no annotation in the text itself.
The best choice of annotation mode is very personal; one should aim to combine ease of use and efficacy.
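For the third mode (one file per theme, with no annotation in the text), a minimal sketch of how such an index could be kept electronically (the themes, sources and line numbers are invented):

from collections import defaultdict

# One "file" per theme, simulated here as a dictionary entry:
# theme -> list of (source, line range) references.
theme_index = defaultdict(list)

theme_index["informal safety strategies"].append(("Interview A", (12, 29)))
theme_index["time pressure"].append(("Interview B", (44, 51)))

for theme, references in theme_index.items():
    print(theme)
    for source, (start, end) in references:
        print(f"  {source}, lines {start}-{end}")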
c) The type of treatment: continuous or sequential.
- The continuous thematization:
Themes are assigned as the text is read, and the thematic tree is built progressively in parallel, with merging, regrouping, hierarchical classification… until a final tree is obtained at the end of the research. This process offers an accurate and rich analysis, but it is complex and time-consuming. It is better suited to a small corpus and a more personalized thematization.
- The sequential thematization:
The analysis is more hypothetico-deductive and is done in two steps:
1) Themes are elaborated on the basis of a sample of the corpus and listed. Each theme is given a clear definition. A hierarchy may or may not already be proposed at this stage.
2) The list is then strictly applied to the whole corpus, with the possibility of adding a limited number of new themes.
This type of analysis is more efficient but less in-depth. It is, however, more appropriate for analysis in a team.
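To make the two-step logic concrete, a rough sketch is given below; the themes are invented and the keyword matching only mimics, very crudely, the analyst's inference when applying the fixed list:

# Step 1: themes elaborated on a sample of the corpus, each with a clear definition
# and (for this sketch only) a few indicative keywords.
theme_list = {
    "time pressure": {"definition": "lack of time to perform care tasks",
                      "keywords": ["no time", "rushed", "busy"]},
    "trust in caregivers": {"definition": "confidence in the care team",
                            "keywords": ["trust", "confidence"]},
}

# Step 2: the fixed list is applied strictly to the whole corpus.
def apply_themes(text: str) -> list:
    lowered = text.lower()
    return [theme for theme, spec in theme_list.items()
            if any(keyword in lowered for keyword in spec["keywords"])]

corpus = {
    "Interview A": "On busy days I feel rushed and have no time per patient.",
    "Interview B": "I had full trust in the midwife who followed me.",
}

for source, text in corpus.items():
    themes = apply_themes(text)
    print(source, "->", ", ".join(themes) if themes else "no theme from the fixed list")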
To go further into the practical aspects of thematic analysis:
Paillé P, Mucchielli A. L'analyse qualitative en sciences humaines et sociales. 2ème ed. Paris: Armand Colin; 2011.
4.5.2 Framework analysis
Adapted from Spencer L, Ritchie J, O'Connor W, Morrell G, Ormston R. Analysis in practice. In: Ritchie J, Lewis J, McNaughton Nicholls C, Ormston R, editors. Qualitative research practice. London: Natcen, Sage; 2014. p. 295-345.
In framework analysis, the data will be sifted, charted and sorted in accordance with key issues and themes (Srivastava et al., 2009). The analytical journey using this approach can be summarized as:
- Familiarization
- Constructing the initial framework
- Indexing
- Charting
- Abstraction and interpretation
Familiarization is the same as explained previously [add crossref]. In this approach, it is the occasion to identify topics or issues of interest that recur across the data and are relevant to the research question, thus taking into account the aims of the study and the subjects contained in the topic guide.
The construction of an initial thematic framework can begin once the list of topics has been reviewed. This step aims to organize the data. The analyst will identify underlying ideas or themes related to particular items. (S)he will use these to group and sort the items according to different levels of generality, building a hierarchical arrangement of themes and subthemes. It results in a sort of table of contents of what can be found in the corpus. These themes or issues “may have arisen from a priori themes (…) however it is at this stage that the researcher must allow the data to dictate the themes and issues”. “Although the researcher may have a set of a priori issues, it is important to maintain an open mind and not force the data to fit the a priori issues. However since the research was designed around a priori issues it is most likely that these issues will guide the thematic framework. Ritchie and Spencer stress that the thematic framework is only tentative and there are further chances of refining it at subsequent stages of analysis (1994).” (Srivastava et al. 2009, p.76).
The next step consists of indexing the data, i.e. labelling sections of the corpus according to the thematic framework. This could be done by annotation in the margin of the transcript.
The fourth stage consists of charting: the indexed data are arranged in charts of themes. One chart is built for each theme. Subthemes are the column headings, while each row represents an interview, transcript or unit of analysis. The content of each cell is a summary of the section of the corpus related to that subtheme.
To write useful summaries, “the general principle should be to include enough details and context so that the analyst is not required to go back to the transcribed data to understand the point being made, but not include so much that the matrices become full of undigested material (…)”. (Spencer et al. 2014b, p 309)
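To make the charting step concrete, here is a minimal sketch of one framework chart (the theme, subthemes, respondents and summaries are invented; pandas is used only for convenience, and a spreadsheet would serve the same purpose):

import pandas as pd

# One chart per theme: rows are interviews, columns are subthemes,
# and each cell holds a short summary of the relevant section of the corpus.
experience_of_care = pd.DataFrame(
    {
        "relation with caregivers": {
            "Interview A": "Felt listened to by the midwife, named as main contact.",
            "Interview B": "Changing staff made it hard to build a relationship.",
        },
        "information received": {
            "Interview A": "Information came too late to prepare questions.",
            "Interview B": "Leaflet judged clear; partner also felt informed.",
        },
    }
)

print(experience_of_care.to_string())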
Spencer et al. identified three requirements that are essential in order to retain the essence of the original material (Spencer et al., 2014b, p. 309):
- Key terms, phrases or expressions should be taken as much as possible from the participant’s own language;
- Interpretation should be kept to a minimum at this stage;
- Material should not be dismissed as irrelevant just because its inclusion is not immediately clear.
The last step is mapping and interpretation. Spencer et al. advise taking the time to do this: have a break, read through the managed data, etc.
In this phase, concepts and categories can be developed, the linkages between them described, and explanations and patterns identified. This can even be done by theorizing deduction, where the category is derived from a pre-existing theoretical referent: the categories exist because a previous analysis of the problem has already been carried out (Paillé and Mucchielli, 2011). In framework analysis, the main categorical analysis grid is pre-existing. This may be because the research object is already well studied, because the research is commissioned by an institution, or because the research is spread across different teams in different locations (Paillé and Mucchielli, 2011).
NVivo [add cross ref] can be very helpful for managing the data and creating the matrix when using the framework approach.
4.5.3 An analysis with conceptualizing categories: an inductive approach
Adapted from Paillé and Mucchielli, 2011.
Analysis with conceptualizing categories allows a more in-depth analysis. It goes beyond the mere identification of themes, in which the annotation of the corpus is not linked to a conceptualization of the data, and beyond a synthesis of the material. It includes an intention to analyse, to reach the meaning, and then to use a type of annotation that reflects the analyst’s understanding.
What is a category?
Adapted from Paillé and Mucchielli, 2011.
A category is a textual production, in the form of a brief expression, that names a phenomenon through a conceptual reading of the corpus. A category answers the questions: “Given my research problem, what is this phenomenon?” and “How can I name this phenomenon conceptually?”
A category belongs to a set of categories and makes sense in relation to the other categories; it is a matter of relationships between categories. For the analyst a category is an attempt to comprehend, while for the reader it gives access to the meaning. It evokes what is said but is also conceptually rich, inducing a precise mental image of a dynamic or a sequence of events.
The intellectual process of the categorization
Adapted from Paillé and Mucchielli, 2011.
Three types of processes may be involved in categorization: analytic description, interpretative deduction and theorizing induction, although in practice these distinctions progressively blur. Analytic description is a first step, closer to the text, and constitutes preliminary descriptive work.
As for thematic coding, it is important to find the right level and the right context. Here also, this depends on the position of the researcher and the context of the research.
For the technical aspects of coding, we propose to apply the considerations described for thematic coding.
Data analysis in Grounded Theory
Key to grounded theory is the idea that the researcher builds theories from empirical data. Strauss and Corbin (Strauss and Corbin 1998) define theory as “a set of well-developed concepts related through statements of relationship, which together constitute an integrated framework that can be used to explain or predict phenomena” (p. 51). The aim is to produce general statements based on specific cases (analytic induction). Essential is that the insights emerge from the data. It is a theorizing induction process. Other core features are the cyclic approach and the constant comparison.
The cyclic approach is already apparent during data collection, but also in data analysis. Data collection is followed by preliminary data analysis, which is followed by new data collection, etc. After each analytic phase, the topic list is adapted and information is collected in a more directed way. The researcher tries to fill in blind spots in the analysis and to test hypotheses. Hence, data analysis is generally expected to be an iterative process. Constant comparative analysis is emphasized especially in the grounded theory approach. This means that data collection and data analysis are not organized in a strictly sequential way: constant comparative analysis is a process whereby data collection and data analysis occur on an ongoing basis. The interview is transcribed and analyzed as soon as possible, preferably before the next interview takes place. Any interesting finding is documented and incorporated into the next interview. The process is repeated with each interview until saturation is reached. As a result, the initial interviews in a research project may differ considerably from the later interviews, as the interview schedule is continuously adapted and revised. For this reason researchers have to clarify and document how structured or unstructured their data collection method is and keep memos of the process. Notes and observations made at the time of the interview are re-examined, challenged, amended and/or confirmed using the transcribed audio or video tapes. All members of the research team are expected to participate in a review of the final interpretation, in which data and analysis are again re-examined, evaluated and confirmed. The use of more than one analyst can improve the consistency or reliability of the analyses.
Within the analysis, the cyclic character is also evident in the constant comparison: the researcher tries to falsify his findings by integrating new data and seeing whether the theory holds. Data are broken down into small parts (coding) and then rebuilt by identifying relationships between the parts.
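As a toy sketch of this iterative logic only (the codes are invented and the stop rule is a crude proxy; in reality coding and the judgement of saturation are made by the analyst, not by a program):

# Each interview is analysed as soon as possible and compared with what was coded before.
interviews = [
    {"id": "Interview 1", "codes": {"time pressure", "trust"}},
    {"id": "Interview 2", "codes": {"time pressure", "peer support"}},
    {"id": "Interview 3", "codes": {"trust", "peer support"}},
]

codebook = set()
for interview in interviews:
    new_codes = interview["codes"] - codebook        # constant comparison with earlier data
    codebook |= interview["codes"]
    print(f"{interview['id']}: {len(new_codes)} new code(s) -> {sorted(new_codes)}")
    if not new_codes:                                # crude proxy for saturation
        print("No new codes emerged; saturation may be approaching.")
        break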
The analytic process of breaking down and rebuilding data in grounded theory happens in several steps:
- Open coding
This is the identification of an initial set of themes or categories (called codes[1]): “the analytic process through which concepts are identified and their properties and dimensions are discovered” (Strauss and Corbin, 1998, p. 101). In this stage the data are divided into bits of text, which are given a label; the researcher isolates meaningful parts relevant to answering the research question. [see before]
- Axial coding
This is a way of refining the initial codes. “The process of relating categories to their subcategories termed “axial” because coding occurs around the axis of a category, linking categories at the level of properties and dimensions” (Strauss and Corbin 1998, p. 123). Open coding results in a long list of separate codes. During axial coding all these loose ends are connected. This way concepts are identified.
- Selective coding
This is the movement towards “the development of analytical categories by incorporating more abstract and theoretically based elements” (Pope and Mays, p. 71): “the process of integration and refining the theory” (Strauss and Corbin, 1998, p. 143). During this third and last step in the analytic process, concepts are linked and a theory is built, often around one central concept (category of codes). A schematic way of recording the products of these coding steps is sketched below.
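A minimal, non-prescriptive sketch (the labels and classes are invented; it only records the products of open, axial and selective coding, it does not perform them):

from dataclasses import dataclass, field

@dataclass
class OpenCode:                 # product of open coding
    label: str                  # e.g. "double-checking dosage"
    segment: str                # the bit of text given this label
    source: str                 # e.g. "Interview A"

@dataclass
class Category:                 # product of axial coding
    name: str
    codes: list = field(default_factory=list)
    properties: list = field(default_factory=list)

@dataclass
class Theory:                   # product of selective coding
    core_category: Category
    related_categories: list = field(default_factory=list)
    statements: list = field(default_factory=list)   # relationships between concepts

code = OpenCode("double-checking dosage",
                "I always double-check the dosage with a colleague.",
                "Interview A")
safety = Category("informal safety strategies", codes=[code],
                  properties=["frequency", "who is involved"])
theory = Theory(core_category=safety,
                statements=["Informal safety strategies compensate for time pressure."])
print(theory.core_category.name, "-", theory.statements[0])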
During the coding process, the data have been reduced to meaningful conceptualizing categories. NVivo (see XXX) offers several (visualization) tools (e.g. circle diagrams, charts, matrices) to discover relations between categories.
[1] In the literature about grounded theory, the term ‘codes’ is mostly used, but these correspond to what we called ‘conceptualizing categories’ earlier [Add crossref].
4.6. Software to analyse qualitative data
Analysis may be done either manually or by using qualitative analysis software, for example NVivo©[2], ATLAS.ti©[3], MAXQDA©[4], etc.
These Computer-Assisted Qualitative Data Analysis Software (CAQDAS) packages support the analyst with the storage, coding and systematic retrieval of qualitative data35. They can manage different types of qualitative material, such as transcripts, texts, videos, images, etc. Their usefulness for the analysis depends on the size of the corpus (number of interviews, plurality of data sources), and their use should not be automatic. They can also be useful for collaborative purposes, when several researchers are analysing the same data. They do not guarantee the scientific nature of the results62: the quality of the results does not depend on the tool used, but on the scientific rigour and the systematic analysis of the data. The core ‘code-and-retrieve’ principle behind these packages is illustrated in the sketch below.
[2] http://www.qsrinternational.com/products_nvivo.aspx
[3] http://www.atlasti.com/index.html
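As a rough illustration of the code-and-retrieve principle only (not of any CAQDAS interface; the segments and codes are invented), the same idea in a few lines of Python:

# Store coded segments, then retrieve every segment carrying a given code
# across sources - the basic operation that CAQDAS packages automate,
# alongside many richer features.
coded_segments = [
    {"source": "Interview A", "code": "time pressure",
     "text": "On busy days I have no time to sit with patients."},
    {"source": "Interview B", "code": "peer support",
     "text": "Colleagues step in when someone is overwhelmed."},
    {"source": "Focus group 1", "code": "time pressure",
     "text": "Administrative work eats into care time."},
]

def retrieve(code: str) -> list:
    """Return all segments coded with `code`, whatever their source."""
    return [segment for segment in coded_segments if segment["code"] == code]

for segment in retrieve("time pressure"):
    print(f"{segment['source']}: {segment['text']}")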
5. How to report qualitative research findings?
Qualitative research findings can be presented in a number of ways; there is no specific format to follow. However, as with other research methods, the justification and methodology of the study should be provided. The research process should be fully transparent, so that another researcher could reproduce it. In addition, the report should be comprehensible to the reader.
A possible structure could be:
1. Introduction and Justification
2. Methodology
2.1 How were respondents recruited?
2.2 Description of the sample
2.3 Description of selection biases if any
2.4 What instruments were used to collect the data?
You may want to include the topic list or questionnaire in an appendix
2.5 Over which period of time was the data collected?
3. Results: What are the key findings?
4. Discussion
4.1 What were the strengths and limitations of the information?
4.2 Are the results similar or dissimilar to other findings (if other studies have been done)?
5. Conclusion and Recommendations
6. Appendices (including the interview guide(s)/ topic guide)
6. How to evaluate QRM?
In this section we address quality criteria for the use and evaluation of qualitative research. On the one hand, these criteria should guide those who want to apply QRM in their research project(s); on the other hand, KCE researchers asked for criteria that allow them to evaluate existing qualitative studies or publications resulting from qualitative studies, for example for the purpose of a systematic review.
6.1. Usefulness of quality criteria to evaluate qualitative research
“Whatever the method, it needs to be well-defined, well-argued, and well-executed” (Snijders, 2007)
The increasing demand for qualitative research within health and health services research has emerged alongside an increasing demand for the demonstration of methodological rigor and justification of research findings (Reynolds, 2011). Not only is qualitative research challenged by the current evidence-based practice (EBP) movement in healthcare, but the emergence of meta-analyses (e.g. meta-synthesis) of qualitative research findings also calls for quality criteria. Although widely recognized guidelines exist in quantitative health sciences research, no comparable standardized guidelines exist for qualitative research. This can be explained by a lack of consensus on how best to evaluate “rigor” in qualitative research (Nelson, 2008). Every qualitative paradigm has its own implications for the definition of good-quality research. First, we briefly introduce the reader to the debate about quality criteria; second, we present the framework of Walsh and Downe (Walsh, 2006) as the most complete and comprehensible list of quality criteria to appraise qualitative research studies, and the framework of Côté and Turgeon as a shorter and more practical alternative. For other checklists we refer to Appendix 1.
Among qualitative researchers there is an ongoing debate between those calling for explicit criteria, for example in order to serve systematic reviewing and evidence-based practice, and those who argue that such criteria are neither necessary nor desirable (Hammersley, 2007). The quest for quality criteria assumes that qualitative research is a unified field, but this image does not fit reality. In fact, apart from a variety of other positions (e.g. symbolic interactionism, hermeneutics, phenomenology, ethnography), three main paradigms can be discerned in relation to this discussion:
- The interpretivist paradigm assumes that social realities are multiple, fluid and constructed. This framework values research that illuminates subjective meanings and multiple ways of seeing a phenomenon. These researchers question the need for and the utility of quality criteria for qualitative research, or apply criteria specific to qualitative research, such as clear delineation of the research process, evidence of immersion and self-reflection, and demonstration of the researcher’s way of knowing (e.g. tacit knowledge) (Cohen, 2008).
- The positivist approach stands at the other end of the continuum and assumes that there is a single objective reality that is knowable. Positivists apply traditional quantitative criteria, such as validity and reliability to qualitative work.
- The realist perspective is positioned in between. It maintains a belief in an objective reality, but knowledge of that reality is always imperfect (Cohen, 2008). Realists use techniques such as triangulation, member validation of findings, peer review of findings, deviant or negative case analysis and multiple coders of data to verify findings. The realist perspective adopts a philosophy of science that is in line with positivism, while at the same time embracing the complexity of social life and recognizing the importance of social meanings. “By maintaining a belief in an objective reality and positing truth as an ideal qualitative researchers should strive for, realists have succeeded at positioning the qualitative research enterprise as one that can produce research which is valid, reliable, and generalizable, and therefore, of value and import equal to quantitative biomedical research” (Cohen, 2008, p. 336).
The position one takes in the debate about quality criteria is heavily influenced by the paradigm one feels most attracted to, or identifies with.
6.2. General quality criteria
Most of the quality criteria are applicable to all research, both quantitative and qualitative. For example, in 2008 Cohen and Crabtree (Cohen, 2008) reviewed and synthesized published criteria for good qualitative research. They identified the following general evaluative criteria: 1) ethical research, 2) importance of the research, 3) clarity and coherence of the research report, 4) use of appropriate and rigorous methods, 5) importance of reflexivity or attending to researcher bias, 6) importance of establishing validity or credibility, 7) importance of verification or reliability. The criteria of researcher bias, validity and reliability are the ones most heavily influenced by quantitative approaches. Table 6 bridges quantitative and qualitative research by illustrating the parallels between criteria for conventional quantitative inquiries and qualitative research.
Table 6 – Lincoln and Guba’s translation of terms
Quantitative research | Qualitative research | Methods to ensure quality |
Internal validity | Credibility: Are the findings credible? | Member checks[a]; prolonged engagement in the field; data triangulation |
External validity | Transferability: Are the findings applicable in other contexts? | Thick description[b] of setting and/or participants |
Reliability | Dependability: Are the findings consistent and could they be repeated? | Audit – researcher’s documentation of data, methods and decisions; researcher triangulation |
Objectivity | Confirmability: To what extent are the findings shaped by the respondents and not by researcher bias, motivation or interests? | Audit and reflexivity – e.g. awareness of position as a researcher and its influence on the data and findings |
Source: Adapted from Finley, 2006
In what follows we pay attention to some keywords appearing in Table 6.
Reflexivity
“Reflexivity is an awareness of the self in the situation of action and of the role of the self in constructing that situation.” (Bloor and Wood, 2006, p. 145)
Because the researcher cannot be ‘blinded’ in qualitative research, he/she has to take subjectivity into account in an explicit way. To demonstrate reflexive awareness during the research process, the following ‘good practices’ can be used (Green, 2009, p. 195):
- Methodological openness: report steps taken in data production and analysis, the decisions made, and the alternatives not pursued.
- Theoretical openness: theoretical starting points and assumptions should be addressed.
- Awareness of the social setting of the research itself: be aware of the interactivity between the researcher and the researched.
- Awareness of the wider social context, including historical and policy contexts and social values.
Triangulation
“Qualitative research is inherently multimethod in focus (Flick, 2002, p.226-227). However, the use of multiple methods, or triangulation, reflects an attempt to secure an in-depth understanding of the phenomenon in question. Objective reality can never be captured. We know a thing only through its representations. Triangulation is not a tool or a strategy of validation, but an alternative to validation (Flick, 2002, p. 227). The combination of multiple methodological practices, empirical materials, perspectives, and observers in a single study is best understood, then, as a strategy that adds rigor, breadth, complexity, richness, and depth to any inquiry (See Flick, 2002, p. 229)” (Denzin and Lincoln, 2008, p. 7).
Triangulation is the use of several scientific methods, both qualitative and quantitative, to answer the same research question (Bloor, 2006). Often triangulation is understood as producing the same results by means of several methods, sources or analysts. However, different methods or types of inquiry are sensitive to different nuances, so they may lead to somewhat different results. In fact, triangulation is more about finding inconsistencies in order to gain deeper insight into the relationship between the inquiry approach and the subject under study. Thus, finding inconsistencies does not weaken the credibility of the results, but rather strengthens it (Patton, 1999).
Five kinds of triangulation can contribute to the quality and consistency of qualitative data analysis:
- Methods triangulation: Information obtained through several methods is compared. These methods can be qualitative, quantitative or both. Qualitative and quantitative data can often be fruitfully combined, as they mostly elucidate complementary aspects of the same phenomenon (Patton, 1999).
- Triangulation of sources: Information derived at different times and by different means is compared, e.g. comparing observational data with interview data, or comparing what people say in public with what they say in private (Patton, 1999).
- Analyst triangulation: Several observers, interviewers, researchers or analysts are used. In this way the potential bias that comes from a single person doing all the data collection and/or data analysis is reduced. In addition to using several researchers or data analysts, analyst triangulation may also involve having those who were studied review the findings (Patton, 1999).
- Theory/perspective triangulation: This involves the use of different theoretical perspectives to look at the same data. For example, data can also be examined from the perspective of various stakeholder positions (Patton, 1999).
- Member validation: It is a popular kind of triangulation that consists of “checking the accuracy of early findings with research respondents” (Bloor and Wood, 2006, p. 170).
These kinds of triangulation protect the researcher against the accusation that findings are simply an artifact of a single method, a single source or a single investigator’s biases (Patton, 1999).
Transferability
Earlier in this report we argued that qualitative research is context-sensitive and does not aim at making generalizations to the wider population. This may appear to contradict the notion of transferability, which is precisely about the extent to which findings of one study can be applied to other situations (external validity) (Merriam, 1998).
Transferability refers to the responsibility of the researcher to provide sufficient contextual information about the fieldwork to enable the reader to determine how far he can be confident in transferring the findings to other situations (Firestone, 1993). However, the situation might be complicated by the possibility that factors considered by the researcher to be unimportant, and consequently unaddressed in the research report, may be critical in the eyes of a reader (Firestone, 1993).
6.3. Checklists
We have found four papers (Reynolds, 2011; Walsh, 2006; Cohen, 2008; Côté and Turgeon, 2005) reviewing the literature on quality criteria or guidelines for qualitative research. One of them (Walsh, 2006) provides a synthesis of eight existing checklists and summary frameworks (see Table 7). This checklist is quite detailed and was designed for the purpose of meta-synthesis, a kind of systematic review of qualitative research papers.
The list of criteria was built in order to rigorously appraise studies before submitting them to the meta-synthesis technique. Agreement on criteria to judge rigor was necessary in order to decide which studies to include in the meta-synthesis. Walsh and Downe (Walsh, 2006) tabulated the characteristics mentioned in each of the papers in their review. They then mapped together the characteristics given in all the included papers, sorting them by the number of checklists in which they appeared. In the next step both authors independently attempted a synthesis before coming together to discuss it. Redundant criteria were excluded if both authors agreed that the exclusion would not change the final judgment on the meaningfulness and applicability of a piece of qualitative research. Finally, the table below was constructed, structured into three columns, namely stages, essential criteria and specific prompts. Although some criteria may seem self-evident, others are less obviously fundamental (Walsh, 2006). This list of criteria is very detailed. In some studies, especially those with a short time frame, a shorter and more pragmatic hands-on list can be practical. Therefore we also added the grid of Côté and Turgeon [c] (Table 8), which is shorter, adapted to the specific context of health care and easier to use for researchers who are less familiar with qualitative research. Other checklists are described in Appendix 1.
The use of a checklist may improve qualitative research; however, checklists should be used critically: not every criterion is appropriate to every research context (Barbour, 2001). For example, the grid of Côté and Turgeon mentions interpretation of the results in an innovative way as a quality criterion (point 10, Table 8), while this is not necessarily required. Most important is a systematic approach throughout the research process. For example, the credibility of data analysis could encompass the use of software (Table 7), triangulation and/or member checking (point 7, Table 8), whereas a systematic approach with a detailed description of each step in the research process might have been sufficient. A simple structured encoding of the Côté and Turgeon grid, useful when appraisals have to be tallied across several studies, is sketched after Table 8.
Table 7 – Summary criteria for appraising qualitative research studies
Stages | Essential criteria | Specific prompts |
Scope and purpose | Clear statement of, and rationale for, research question / aims / purposes | |
 | Study thoroughly contextualized by existing literature | |
Design | Method/design apparent, and consistent with research intent | |
 | Data collection strategy apparent and appropriate | |
Sampling strategy | Sample and sampling method appropriate | |
Analysis | Analytic approach appropriate | |
Interpretation | Context described and taken account of in interpretation | |
 | Clear audit trail given | |
 | Data used to support interpretation | |
Reflexivity | Researcher reflexivity demonstrated | |
Ethical dimensions | Demonstration of sensitivity to ethical concerns | |
Relevance and transferability | Relevance and transferability evident | |
Source: Walsh and Downe, 2006
Table 8 – Grid for the critical appraisal of qualitative research articles in medicine and medical education
| Yes | +/- | No |
Introduction | | |
1. The issue is described clearly and corresponds to the current state of knowledge. | | |
2. The research question and objectives are clearly stated and are relevant to qualitative research (e.g. the process of clinical or pedagogical decision-making). | | |
Methods | | |
3. The context of the study and the researchers’ roles are clearly described (e.g. setting in which the study takes place, bias). | | |
4. The method is appropriate for the research question (e.g. phenomenology, grounded theory, ethnography). | | |
5. The selection of participants is appropriate to the research question and to the method selected (e.g. key participants, deviant cases). | | |
6. The process for collecting data is clear and relevant (e.g. interview, focus group, data saturation). | | |
7. Data analysis is credible (e.g. triangulation, member checking). | | |
Results | | |
8. The main results are presented clearly. | | |
9. The quotations make it easier to understand the results. | | |
Discussion | | |
10. The results are interpreted in credible and innovative ways. | | |
11. The limitations of the study are presented (e.g. transferability). | | |
Conclusion | | |
12. The conclusion presents a synthesis of the study and proposes avenues for further research. | | |
Source: Côté and Turgeon,2005
[a] Informants may be asked to read transcripts of dialogues in which they have participated, to check whether their words match what they actually intended (Shenton, 2004), or they may be asked to check the accuracy of early findings (Bloor, 2006).
[b] Thick description refers to rich qualitative data allowing not only the description of social behaviour, but also to connect it to the broader context in which it occurred (Mortelmans 2009).
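When appraisal results have to be tallied across several studies (e.g. for a systematic review), the Côté and Turgeon grid can also be recorded as structured data. A minimal sketch, with abridged criterion labels and a purely hypothetical appraisal:

# Abridged labels for the twelve Côté and Turgeon criteria; ratings are "yes", "+/-" or "no".
criteria = [
    "issue clearly described", "research question relevant to qualitative research",
    "context and researchers' roles described", "method appropriate",
    "selection of participants appropriate", "data collection clear and relevant",
    "data analysis credible", "main results presented clearly",
    "quotations aid understanding", "interpretation credible and innovative",
    "limitations presented", "conclusion synthesizes and proposes avenues",
]

# Hypothetical appraisal of one (fictitious) study.
appraisal = {"study": "Study X", "ratings": dict.fromkeys(criteria, "yes")}
appraisal["ratings"]["limitations presented"] = "+/-"

summary = {rating: sum(1 for value in appraisal["ratings"].values() if value == rating)
           for rating in ("yes", "+/-", "no")}
print(appraisal["study"], summary)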
6.4. Conclusion
To conclude this chapter on quality criteria, we wish to warn against a rigid use of checklists and quality criteria in qualitative research and to argue instead for their flexible use. The same, moreover, applies to quantitative research.
Barbour criticizes the widespread use and description of assumed quality indicators like theoretical sampling, grounded theory, multiple coding, and triangulation in scientific articles, as an unequivocal guarantee of robustness. These dimensions of qualitative research should be embedded within a broader understanding of the qualitative research design and not “stuck on as a badge of merit” (Barbour, 2001, p. 1115).
We agree with Walsh and Downe (Walsh, 2006) that a checklist is indicative of good quality research, but not a guarantee.
Key messages
- Although in quantitative health sciences research, there exist widely-recognised guidelines, no comparable standardised guidelines exist for qualitative research.
- Among qualitative researchers there is an ongoing debate between those calling for explicit criteria, for example in order to serve systematic reviewing and evidence-based practice, and those who argue that such criteria are neither necessary nor desirable.
- The framework of Walsh and Downe is a comprehensible example of a quality criteria checklist to appraise qualitative research studies. The grid of Côté and Turgeon is simpler and can be recommended as a tool for evaluation in KCE reports.