Logo KCE

4. How to analyse?

Laurence.Kohn Tue, 11/16/2021 - 17:41

4.1. Aim of the qualitative data analysis


The aim of this process note is to give an overview and brief description of approaches useful for qualitative data analysis in the context of KCE projects. It will not provide a single recipe, but rather a range of perspectives and ways of looking at the data. Depending on the research aim and questions, some perspectives are better suited than others.

4.2. Definition

“Qualitative data analysis (QDA) is the range of processes and procedures whereby we move from the qualitative data that have been collected into some form of explanation, understanding or interpretation of the people and situations we are investigating”. (Lewins et al. 2010)

In general, qualitative data analysis means moving from data to meanings or representations. Flick (2015) defines qualitative data analysis as follows:

“The classification and interpretation of linguistic (or visual) material to make statements about implicit and explicit dimensions and structures of meaning-making in the material and what is represented in it” (p. 5).

The aims of qualitative data analysis are multiple, for example:

  • To describe a phenomenon in some or greater detail
  • To compare several cases (individuals or groups), focusing on what they have in common or on the differences between them
  • To explain a phenomenon or gain insight into a problematic situation
  • To develop a theory of a phenomenon

There are several ways to analyze textual data. “Unlike quantitative analysis, there are no clear rules or procedures for qualitative data analysis, but many different possible approaches” (Spencer et al. 2014, p. 270). “Qualitative analysis transforms data into findings. No formula exists for that transformation. Guidance, yes. But no recipe.” (Patton 2002)

 

Alternative traditions vary in their basic epistemological assumptions about the nature of the inquiry, the status of the researcher, and the main focus and aims of the analytic process (Spencer et al. 2014, p. 272). Generally speaking, the analysis process begins with data management and ends with abstraction and interpretation, moving from organizing the data, to describing them, to explaining them (Spencer et al. 2014).

According to Spencer et al. (2014), the hallmarks of rigorous and well-founded substantive, cross-sectional qualitative data analysis are:

  • Remaining grounded in the data
  • Allowing systematic and comprehensive coverage of the data set
  • Permitting within- and between-case searches
  • Affording transparency to others
4.3. “Methods”, “traditions” and “approaches” in qualitative analysis

Many concepts and terms are used by qualitative researchers. They are not always standardized, and we find it useful to clarify the ones we will use in this process note. This part is therefore not exhaustive. We are largely inspired by Paillé and Mucchielli (2011) and have translated their terminology.

4.3.1 Generic methods for analyzing

A generic method for analyzing can be used in many situations. It answers the questions: how to analyze the data, and how to get at their meaning? It encompasses the technical and intellectual operations and manipulations that help the researcher capture the meanings.

  • Technical operations for analyzing are processes, operations and management of the data such as transcriptions, cutting of the text, putting it in tables, etc.
  • Intellectual operations for analyzing consist of the transposition of terms in other terms, intuitive groupings, confrontation, induction …

Classically, three generic methods of analysis are used in qualitative health (care) research, each of them using specific tools:

  • The phenomenological examination of the empirical data, aiming to report an authentic comprehension of the material
  • Thematic analysis, i.e. the creation and refinement of categories to give a global picture of the material
  • Analysis using conceptualizing categories, aiming at the creation and refinement of categories that go beyond description towards a conceptualization of the phenomena under study
4.3.2 Specific traditions

Specific traditions are embedded in the generic methods described above for health (care) research. We give an example for each of them:

4.3.2.1 Phenomenology

Phenomenology focuses on “how human beings make sense of experience and transform experience into consciousness, both individually and as shared meaning” (Patton 2015, p. 115). Phenomenology is about understanding the nature or meaning of everyday life. In-depth interviews with people who have directly experienced the phenomenon of interest are the most commonly used data collection technique. Phenomenology in qualitative research goes back to a philosophical tradition that was first applied to social science by Edmund Husserl to study people’s daily experiences.

Phenomenology will not be developed in detail, because it is less relevant to KCE projects.

4.3.2.2 Framework analysis

Framework analysis has been developed specifically for applied or policy-relevant qualitative research, and is a deductive research strategy. In a framework analysis the objectives of the investigation are set in advance. The thematic framework for the content analysis is identified before the research, or the qualitative part of the project, starts.

The decision to use frameworks when analyzing data is closely related to the purpose the qualitative material will serve in the overall research strategy. “Frameworks” are generally derived from hypotheses or theoretical frameworks: e.g. if the aim of a focus group is to get a picture of stakeholders’ interests and potentially conflicting perspectives on a health care issue, and the focus group tries to grasp how stakeholders develop power plays or influence strategies to set agendas, a conceptual framework on decision-making processes and power play will serve as a useful tool to orient data collection and data analysis.

Applying framework analysis concretely means that the themes emerging from the data are placed in the framework defined a priori. The framework is systematically applied to all the data. Although an analytical framework can be very useful, it is not suited if the aim is to discover new ideas, since a framework or grid could be blinding (Paillé and Mucchielli 2011).

For the specifics of data analysis according to this method, see Framework analysis.

4.3.2.3 Grounded theory

Grounded theory was developed by Glaser and Strauss in the late 1960s as a methodology for extracting meaning from qualitative data. Typically, the researcher does not start from a preconceived theory, but allows the theory to emerge from the data (Durant-Law 2005). Hence grounded theory is an inductive rather than a deductive methodology. Emergence is also a key assumption in grounded theory: data, information and knowledge are seen as emergent phenomena that are actively constructed. They can only have meaning when positioned in time, space and culture (Durant-Law 2005).

The power of grounded theory lies in the depth of the analysis. Grounded theory explains rather than describes and aims at a deep understanding of phenomena (Durant-Law 2005). Key to grounded theory is the emphasis on theory as the final output of research. Other approaches may stop at the level of description or interpretation of the data (e.g. thematic analysis).

Grounded theory is a complete method, a way of conceptualizing a qualitative research project.

For the specifics of data analysis according to this method, see Data analysis in the Grounded Theory.

4.3.3 Inductive versus deductive approaches

The approach chosen depends largely on the design and the aims of the research. Some designs and/or research questions require an inductive, others a deductive approach. Inductive means that themes emerge from the data, while deductive implies a pre-existing theory or framework that is applied to the data. Qualitative data analysis tends to be inductive, which means that the researcher identifies categories in the data without predefined hypotheses. However, this is not always the case. A qualitative analysis can also be top-down, with predefined categories to which the data are coded; for example, a priori concepts can be adopted from the literature or a relevant field. Framework analysis can be used this way.

 

The next table shows how the different methods, approaches and types of coding relate to each other.

Table: Generic methods, specific methods/traditions, approaches and type of coding for qualitative analysis

| Generic methods | Specific methods / traditions | Approaches | Type of coding |
| --- | --- | --- | --- |
| Phenomenological examination of the empirical data | Phenomenology | Inductive | Statements |
| Thematic analysis | Descriptive analysis | Mainly inductive | Themes |
| Thematic analysis | Framework analysis | Mainly deductive | Themes |
| Analysis using conceptualizing categories | Grounded Theory | Mainly inductive | Conceptualizing categories |

4.4. The analytic journey

As in any research method, analyzing the collected data is a necessary step in order to draw conclusions. Analyzing qualitative data is neither a simple nor a quick task. Done properly, it is systematic and rigorous, and therefore labor-intensive and time-consuming: “[…] good qualitative analysis is able to document its claim to reflect some of the truth of a phenomenon by reference to systematically gathered data”; in contrast, “poor qualitative analysis is anecdotal, unreflective, descriptive without being focused on a coherent line of inquiry” (Fielding 1993, cited in Pope et al. 2000, p. 116). Qualitative analysis is a matter of deconstructing the data in order to construct an analysis or theory (Mortelmans 2009).

The ways and techniques for analyzing qualitative data are not easy to describe, as the work requires a lot of “Fingerspitzengefühl”, and it is unrealistic to expect a recipe book that can be followed to produce a good analysis. What we present here is therefore a number of hands-on guidelines which have proven useful to others.

The difficulty of qualitative analysis lies in the lack of standardization and the absence of a universal set of clear-cut procedures that fit every type of data and could be applied almost automatically. There are also several methods/approaches/traditions for taking the analysis forward (see table). These range from inductive to more deductive, but in practice the researcher often moves back and forth between the data and the emerging interpretations. Hence induction and deduction are often used in the same analysis, and elements from different approaches may be combined in one analysis (Pope and Mays 2006).

Different aims may also require different depths of analysis. Research can aim to describe the phenomena being studied, or go on to develop explanations for the patterns observed in the data, or use the data to construct a more general theory (Spencer et al. 2014). Initial coding of the data is usually descriptive, staying close to the data, whereas labels developed later in the analytic process are more abstract concepts (Spencer et al. 2014).

The analysis may seek simply to describe people’s views or behaviors, or move beyond this to provide explanation that can take the form of classifications, typologies, patterns, models and theories (Pope and Mays 2006, p. 67).

The two levels of analysis can be described as follows:

  • The basic level is a descriptive account of what was said (by whom) related to particular topics and questions. Some texts refer to this as the “manifest level” or type of analysis.
  • The higher level of analysis is interpretative: this is the level of identifying the “meanings”. It is sometimes called the latent level of analysis. This second level of analysis can to a large degree be inspired by theories.

 

The selected approach is part of the research design, hence chosen at the beginning of the research process.

In what follows we describe a generic theoretic process for qualitative data analysis.

 

Figure: Conceptual representation of the analytic journey of qualitative data with an inductive approach

 

 

Each theoretical approach adds its own typical emphases. The most relevant approaches are described in the next section. These steps could also be useful in the processing of qualitative data following a system thinking method [ADD crossrefs].

Step 0: Preparing the data for analysis

Independent of the methodological approach, a qualitative analysis always starts with the preparation of the gathered data. Ideally, to enable accurate data analysis, the recorded information is transcribed. A transcript is the full-length literal text of the interview, and often amounts to a lot of written text.

Good-quality transcribing is not simply transferring words from the tape to the page. The wording is only part of the message; a lot of additional information is to be found in the way people speak. Tone, inflection and the timing of reactions are important indicators too. With experienced observers and note-takers, a thematic analysis of the notes taken during the interviews can be used as a basis for analyzing the “non-verbal” communication.

Transcribing is time-consuming and costly. The research team should consider in advance who should do the transcribing. Resources may be needed to pay an audio typist, which is usually more cost-effective than having a researcher transcribe. Be aware that typists are often unfamiliar with the terminology or language used in the interviews or focus groups, which can lead to mistakes and/or prolong the transcribing time.

It may not be essential to transcribe every interview or focus group. It is possible to use a technique known as tape and notebook analysis, which means taking notes from a playback of the tape-recorded interview and triangulating them with the notes taken by the observers and note-takers. However, bias can occur if inexperienced qualitative researchers attempt tape and notebook analysis. It is certainly preferable to produce full transcripts of the first few interviews; once the researcher becomes familiar with the key messages emerging from the data, tape analysis may be possible. Transcripts are especially valuable when several researchers work with the same data.

Step 1: Familiarization

Researchers immerse themselves in the data (interview transcripts and/or field notes), mostly by reading through the transcripts, gaining an overview of the substantive content and identifying topics of interest (Spencer et al. 2014). In doing so, they become familiar with the data.

Step 2: Coding the data - Construction of initial categories

By reading and re-reading the data in order to develop a profound knowledge of them, an initial set of labels is identified. This step is very laborious, especially with large amounts of data. Pieces of text are coded, i.e. given a label or a name. Generally, in the qualitative analysis literature, “data coding” refers to this data management. However, coding can refer to different levels of analysis.

Here are some commonly used terms (Paillé and Mucchielli 2011):

Label:

Labeling a text or part of a text means identifying the topic of the extract, not what is said about it: “What is the extract about?” Labels allow a first classification of the documents/extracts. They are useful in a first quick reading of the corpus.

Example: “Familial difficulties”

Code:

The code is the numerical or truncated form of the label. This tool is not very useful in qualitative data analysis.

Example: “Fam.Diff.”

Theme:

The theme goes further than the label. It requires a more attentive reading.

 “What is the topic more precisely?”

Example: “Difficulties to care for children”

Statement:

Statements are short extracts or short syntheses of the content of the extract: “What is the key message of what is said?”, “What is told?”
The statement is more precise than the theme because it summarizes, reformulates or synthesizes the extract. Statements are mainly used in phenomenology.

Example: The respondent explains that she has financial difficulties because she has to spend time and money taking care of her children.

Conceptualizing category:

Conceptualizing categories are substantive designations of phenomena occurring in the extracts of the analyzed corpus. Hence, this level of coding approaches theory construction.

Example: “Parental overload”

 

These types of coding are generally specific to certain qualitative data analysis methods (Paillé and Mucchielli 2011).

By coding qualitative data, meanings are isolated with a view to answering the research question. One piece of text may belong to more than one category or label; hence there is likely to be overlap between categories. Major attention should be paid to “rival explanations” or interpretations of the data.
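The bookkeeping behind this kind of coding can be sketched in a few lines. The sketch below is purely illustrative (the `Extract` class, the label strings and the helper names are invented for this note, not part of any QDA software or KCE standard); it only shows how one extract can carry several labels, so that overlap between categories is possible by design.

```python
# Illustrative sketch: coded extracts where one piece of text may belong
# to more than one category. All names and labels are invented examples.
from dataclasses import dataclass, field

@dataclass
class Extract:
    source: str                      # e.g. "Interview A"
    text: str                        # the unit of signification
    labels: set = field(default_factory=set)

def code(extract, *labels):
    """Attach one or more labels to an extract."""
    extract.labels.update(labels)

def extracts_for(corpus, label):
    """Retrieve every extract coded with a given label."""
    return [e for e in corpus if label in e.labels]

corpus = [
    Extract("Interview A", "I spend all my time and money caring for the children."),
    Extract("Interview B", "My employer refuses to adapt my working hours."),
]

# One extract can belong to more than one category:
code(corpus[0], "Familial difficulties", "Financial difficulties")
code(corpus[1], "Work-life balance")
```

Retrieving all extracts for a label (e.g. `extracts_for(corpus, "Familial difficulties")`) then mirrors the classification work that a word processor or QDA package performs.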

 

For further detailed information on coding qualitative data:
Saldaña J. The coding manual for qualitative researchers. 2nd ed. London: Sage Publications; 2013.

Step 3: Refine and regroup categories

In a third step the categories are further refined and reduced by being grouped together. “While reading through extracts of the data that have been labelled in a particular way, the researcher assesses the coherence of the data to see whether they are indeed ‘about the same thing’ and whether labels need to be amended and reapplied to the data” (Spencer et al. 2014a, p. 282).

Word processors or software for qualitative data analysis [add crossref] will prove very helpful at this stage.



Step 4: Constant comparison

During the analysis the researcher constantly compares the constructed categories with new data, and the new categories with already analyzed data. This results in an inductive cycle of constant comparison to fine-tune the categories and concepts arising from the data. NB: in the particular case of focus groups, separate analyses have to be performed on the data gathered within each focus group and continuously compared between focus groups. This is also an iterative process.

(Step 5): New data collection

New data collection may also be necessary to verify new points of view or insights emerging from the analysis.

Before moving to the more interpretive stage of analysis, the researchers may decide to write a description for each subtheme in the study (Spencer et al., 2014).

Step 6: Abstraction and interpretation

Taking each theme in turn, the researcher reviews all the relevant data extracts or summaries, mapping the range and diversity of views and experiences, identifying constituent elements and underlying dimensions, and proposing key themes or concepts that underpin them. The process of categorization typically involves moving from surface features of the data to more analytic properties. Researchers may proceed through several iterations, comparing and combining the data at higher levels of abstraction to create more analytic concepts or themes, each of which may be divided into a set of categories. Where appropriate, categories may be further refined and combined into more abstract classes. Dey (1993) uses the terms ‘splitting’ and ‘slicing’ to describe the way ideas are broken down and then recombined at a higher level: whereas splitting gives greater precision and detail, slicing achieves greater integration and scope. In this way, more descriptive themes used at the data management stage may well undergo a major transformation to form part of a new, more abstract categorical or classificatory system (Spencer et al. 2014, p. 285). At this stage typologies can be created.

Step 7: Description of the findings and reporting

Findings can be presented in a number of ways; there is no specific format to follow.

When writing up findings from interviews or texts, qualitative researchers often use quotes. Quotes are useful in order to (Corden and Roy 2006):

  • Illustrate the themes emerging from the analysis.
  • Provide evidence for interpretations, comparable to the use of tables of statistical data in reports based on quantitative findings.
  • Strengthen the credibility of the findings (despite critics arguing that researchers can always find at least one quote to support any point they might wish to make).
  • Deepen understanding. The actual words of a respondent can sometimes better represent the depth of feeling.
  • Give voice to research participants. This enables participants to speak for themselves and is especially relevant in a participatory paradigm.
  • Enhance readability by providing some vividness and sometimes humour. Breaking up long passages of text by inserting spoken words can help to keep the reader focused, but there is a danger in moving too far towards a journalistic approach.

Ideally, quotes are anonymous and accompanied by a pseudonym or description of the respondent. For example, in research about normal birth, this could be: (Midwife, 36 years). There are however exceptions to the rule of anonymity, e.g. stakeholder interviews, in which the identity of the respondent is important for the interpretation of the findings. In that case the respondent should self-evidently be informed, and his or her agreement is needed in order to proceed.

In terms of layout, quotations should also be distinguished from the rest of the text, for example by using indents, italic font or quotation marks. Quotes are used to strengthen the argument, but should be used sparingly and in function of the findings. Try to choose citations so that all respondents are represented. Be aware that readers might give more weight to themes illustrated with a quotation.

When the research is conducted in a language other than the language of the report in which the findings are presented, quotes are most often translated. “As translation is also an interpretive act, meaning may get lost in the translation process” (van Nes et al., p. 313). It is recommended to stay in the original language as long and as much as possible and to delay the use of translations to the stage of writing up the findings (van Nes et al.).

KCE practice is to translate quotes only for publications in international scientific journals, but not for KCE reports. Although KCE reports are written in English, inserted quotes remain in Dutch or French to stay close to the original meaning. The authors should pay attention to the readability of the text and make sure that the text without the quotes is comprehensible to English-speaking readers.

So far, this general a-theoretic procedure reflects what the literature calls the general inductive approach to analyzing qualitative data. It does not aim at the construction of theories, but at the mere description of emerging themes. It provides a simple, straightforward approach for deriving findings in the context of focused research questions, without requiring the researcher to learn an underlying philosophy or the technical language associated with other qualitative analysis approaches (Thomas 2006).

4.5. Three ways to analyse qualitative data

4.5.1 An analysis with (predefined) themes: a deductive approach


Adapted from Paillé and Mucchielli, 2011.

 

Thematic analysis is a process to reduce the data. It is not a deep analysis; its aim is rather to describe the topic(s) appearing in the corpus. “Thematization” is a preliminary step in all types of qualitative data analysis. It consists of transposing the corpus into a number of themes derived from the analyzed content and in line with the research problem.

A first step is the location of themes, i.e. the listing of all themes pertinent to the research question. The second step is to document them: identify the importance of specific themes, repetitions, cross-checks, what goes together, what stands in opposition…

What is a theme?

Adapted from Paillé and Mucchielli, 2011.

In a thematic analysis, the analyst seeks to identify and organize themes in the corpus; we will call this process the ‘thematization’ of the corpus. A theme is a set of words identifying what is covered in the corresponding extract of the corpus, while providing guidance on the substance of what is said. The extract of text is called a ‘unit of signification’, i.e. sentence(s) linked to a similar idea, topic or theme. Inference is the transformation of a unit of signification into themes.

How to define and assign pertinent themes?

Adapted from Paillé and Mucchielli, 2011.

The definition of the themes depends on the framework of the research and the expected level of generality or inference.

Indeed, the analysis will be carried out within a specific framework, i.e. the aim of the research, and with a certain orientation and certain presuppositions. These are directly linked to the data collection and to the position of the analyst.

The definition of the themes will depend on the data collection:

Once a researcher is ready to launch the thematization, (s)he has already completed many steps: defined the problem(s), focused the study, defined objectives, prepared the data collection, written the interview guide, interacted with participants and perhaps reoriented the research or defined new avenues. Many sources have thus already oriented the work and should be highlighted and explained once again before the start of the analysis. For example, thematization will not be the same when you search for “representations” as when you search for “strategies”, or when you analyze psychological responses rather than the social environment, etc.

The definition of the themes will depend on the position of the researcher:

Each analyst has a certain theoretical background, due to his/her training, previous research, theoretical knowledge, etc. These elements will influence the way analysts read and analyze the corpus, and therefore the themes they choose to apply to it. On the one hand, the analyst has a certain level of sensitivity that will increase through reading, research experience and reasoning; this level also improves during the analysis of the corpus itself. On the other hand, (s)he will improve his/her theoretical capacities with new concepts, models, etc.

 

To proceed with the analysis, it is important to clearly delimit the theme and label it with a precise formulation. It is easier to begin with a low level of inference, i.e. to stay as close as possible to the text or the interview without reproducing it verbatim. Interpretation, theorization or making the essence of an experience emerge are not the objectives of a thematic analysis; it is a list and a synthesis of the relevant themes appearing in a corpus.

The risk of ending up with different themes from different analysts cannot be excluded; it is even natural and foreseeable. However, it will be limited if everyone adopts the same position with the same goal, i.e. thematization, and nothing else.

Inference follows this reasoning: because of the presence of this or that element or indication in the extract, it is possible to assign the theme “X” to it. Note that a theme appearing only once is not necessarily unimportant.


The thematic tree

The thematic analysis builds a thematic tree: a synthetic and structured representation of the analyzed content. Themes are grouped into main themes, subdivided into subsidiary themes and sub-themes, in a schematic way.
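A thematic tree of this kind can be represented as a simple nested structure. The sketch below is illustrative only (the themes are invented examples, not from a real KCE analysis); it shows main themes subdivided into subsidiary themes and sub-themes, rendered schematically with indentation.

```python
# Illustrative sketch of a thematic tree: main themes contain subsidiary
# themes, which contain sub-themes. All theme names are invented examples.
thematic_tree = {
    "Difficulties experienced by parents": {
        "Familial difficulties": ["Difficulties to care for children"],
        "Financial difficulties": ["Cost of childcare", "Loss of income"],
    },
    "Coping strategies": {
        "Informal support": ["Help from grandparents"],
    },
}

def list_themes(tree, depth=0):
    """Flatten the tree into indented rows for a schematic display."""
    rows = []
    for theme, subtree in tree.items():
        rows.append(("  " * depth) + theme)
        if isinstance(subtree, dict):
            rows.extend(list_themes(subtree, depth + 1))
        else:
            rows.extend(("  " * (depth + 1)) + leaf for leaf in subtree)
    return rows

for row in list_themes(thematic_tree):
    print(row)
```

During a continuous thematization (see below), such a structure would be reworked repeatedly: themes merged, regrouped or moved to another branch as the reading of the corpus progresses.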

Technical aspects in the coding

Adapted from Paillé and Mucchielli, 2011.

In order to carry out a thematic analysis, technical choices should be made:

a) The nature of the support: paper or (specialized) software [see further ADD CROSSREF]

b) The mode of annotation of the themes (linked to the choice of the software):

Here are the commonly used modes:

  • Annotation in the margin
  • Annotation inserted in the text (next to the extract, or as a color code)
  • Annotations on separate files, one per theme, recording the source (e.g. interview A) and the extract (e.g. lines 12-29); there is thus no annotation in the text.

The best choice of annotation type is very personal. One should aim to combine ease of use and efficacy.

c) The type of treatment: continuous or sequential.

  • Continuous thematization:
    Themes are assigned as the text is read, and the thematic tree is built progressively in parallel, with fusions, regroupings, hierarchical classification… until a final tree is reached at the end of the research. This process offers an accurate and rich analysis, but it is complex and time-consuming. It is better suited to a small corpus and a more personalized thematization.
  • Sequential thematization:
    The analysis is more hypothetico-deductive and is done in two steps:

1) Themes are elaborated based on a sample of the corpus and listed. Each theme is given a clear definition. A hierarchy may or may not already be proposed.

2) The list is then strictly applied to the whole corpus, with the possibility to add a limited number of new themes.

This type of analysis is more efficient but goes less in depth. It is, however, more appropriate for analysis in a team.
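The two-step bookkeeping of a sequential thematization can be sketched as follows. This is only an illustration of the record-keeping (a theme list with definitions, strict application, a capped number of new themes); the theme names, the quota value and the extract identifiers are invented for the example, and the interpretive work of reading the corpus is of course not automatable.

```python
# Illustrative sketch of sequential-thematization bookkeeping.
# Step 1: a theme list with agreed definitions, fixed from a corpus sample.
# Step 2: the list is strictly applied; only a limited number of new
# themes may be added. All names and the quota are invented examples.
theme_list = {
    "Familial difficulties": "Problems in caring for or organizing family life",
    "Financial difficulties": "Lack of money, income or resources",
}
MAX_NEW_THEMES = 2   # hypothetical quota agreed by the team
new_themes_added = 0

def assign(extract_id, theme, assignments, definition=None):
    """Record a theme for an extract; a new theme needs a definition
    and counts against the agreed quota."""
    global new_themes_added
    if theme not in theme_list:
        if definition is None:
            raise ValueError(f"New theme {theme!r} requires a definition")
        if new_themes_added >= MAX_NEW_THEMES:
            raise ValueError("Quota of new themes exhausted")
        theme_list[theme] = definition
        new_themes_added += 1
    assignments.setdefault(extract_id, set()).add(theme)

assignments = {}
assign("A:12-29", "Familial difficulties", assignments)
assign("B:03-10", "Work overload", assignments,
       definition="Excessive demands from paid work")  # one new theme
```

Requiring a definition for every new theme mirrors the rule above that, in team analysis, each theme in the list corresponds to a clear, shared definition.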

To go further into the practical aspects of thematic analysis:

Paillé P, Mucchielli A. L'analyse qualitative en sciences humaines et sociales. 2ème ed. Paris: Armand Colin; 2011.


4.5.2 Framework analysis

Adapted from Spencer L, Ritchie J, O'Connor W, Morrell G, Ormston R. Analysis in practice. In: Ritchie J, Lewis J, McNaughton Nicholls C, Ormston R, editors. Qualitative research practice. London: Natcen, Sage; 2014. p. 295-345.

In framework analysis, data are sifted, charted and sorted in accordance with key issues and themes (Srivastava et al. 2009). The analytical journey using this approach can be briefly described as:

  1. Familiarization
  2. Constructing the initial framework
  3. Indexing
  4. Charting
  5. Abstraction and interpretation

The familiarization is the same as explained previously [add crossref]. In this approach, it is the occasion to identify topics or issues of interest that recur across the data and are relevant to the research question, thus taking into account the aims of the study and the subjects contained in the topic guide.

The construction of an initial thematic framework can begin once the list of topics has been reviewed. This step aims to organize the data. The analyst identifies underlying ideas or themes related to particular items and uses these to group and sort the items according to different levels of generality, building a hierarchical arrangement of themes and subthemes. The result is a sort of table of contents of what can be found in the corpus. These themes or issues “may have arisen from a priori themes (…) however it is at this stage that the researcher must allow the data to dictate the themes and issues”. “Although the researcher may have a set of a priori issues, it is important to maintain an open mind and not force the data to fit the a priori issues. However since the research was designed around a priori issues it is most likely that these issues will guide the thematic framework. Ritchie and Spencer stress that the thematic framework is only tentative and there are further chances of refining it at subsequent stages of analysis (1994).” (Srivastava et al. 2009, p. 76)

The next step consists of indexing the data, i.e. labelling sections of the corpus according to the thematic framework. This can be done by annotating the margins of the transcript.

The fourth stage consists of charting: the indexed data are arranged in charts of themes, with one chart built for each theme. Subthemes form the column headings, while each row represents an interview, transcript or unit of analysis. The content of each cell is a summary of the section of the corpus related to the subtheme.
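As an illustration only, the structure of such a chart can be sketched as a simple matrix. The theme, subthemes and summaries below are hypothetical, not taken from any KCE project:

```python
# Minimal sketch of one framework-analysis chart (hypothetical data).
# One chart per theme: columns are subthemes, rows are interviews,
# and each cell holds a short summary of the relevant corpus section.

# Chart for one hypothetical theme, e.g. "Barriers to care".
chart = {
    "Interview 01": {
        "Cost":   "Worries about out-of-pocket fees for follow-up visits.",
        "Access": "Lives 40 km from the clinic; no public transport.",
    },
    "Interview 02": {
        "Cost":   "Fees not an issue; covered by insurance.",
        "Access": "",  # nothing indexed to this subtheme for this interview
    },
}

def cell(chart, interview, subtheme):
    """Return the summary in one interview/subtheme cell ('' if empty)."""
    return chart.get(interview, {}).get(subtheme, "")

# Reading across a row reviews one case; reading down a column compares cases.
for interview, row in chart.items():
    print(interview, "| Cost:", cell(chart, interview, "Cost"))
```

Reading across a row gives the within-case view; reading down a column supports the cross-case comparison described above.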

To write useful summaries, “the general principle should be to include enough details and context so that the analyst is not required to go back to the transcribed data to understand the point being made, but not include so much that the matrices become full of undigested material (…)”. (Spencer et al. 2014b, p 309)

Spencer et al. identified three requirements that are essential for retaining the essence of the original material (Spencer et al. 2014b, p. 309).

  1. Key terms, phrases or expressions should be taken as much as possible from the participant’s own language;
  2. Interpretation should be kept to a minimum at this stage;
  3. Material should not be dismissed as irrelevant just because its inclusion is not immediately clear.

The last step is mapping and interpretation. Spencer et al. advise taking time for this step: have a break, read through the managed data, and so on.

In this phase, concepts and categories can be developed, linkages between them described, and explanations and patterns identified. This can even be done through theorizing deduction, where the category derives from a pre-existing theoretical referent: the categories exist because a prior analysis of the problem has already been carried out (Paillé and Mucchielli 2011). In framework analysis, the main categorical analysis grid pre-exists. This may be because the research object is already well studied, because the research is commissioned by an institution, or because the research is spread across different teams in different locations (Paillé and Mucchielli 2011).

NVivo [add crossref] can be very helpful for managing the data and creating the matrix when using the framework approach.

Logo KCE

4.5.3 An analysis with conceptualizing categories: an inductive approach

Laurence.Kohn Tue, 11/16/2021 - 17:41

Adapted from Paillé and Mucchielli, 2011.

 

Analysis by conceptualizing categories allows a more in-depth analysis. It goes beyond the mere identification of themes, in which there is no link between the annotation of the corpus and the conceptualization of the data, and it is more than a synthesis of the material: it includes an intention to analyse, to reach the meaning, and thus uses a type of annotation that reflects the analyst’s comprehension.

Logo KCE

What is a category?

Laurence.Kohn Tue, 11/16/2021 - 17:41

Adapted from Paillé and Mucchielli, 2011.

 

A category is a textual production, in the form of a brief expression, that allows the analyst to name a phenomenon through a conceptual reading of the corpus. A category answers the questions “Given my problematic, what is this phenomenon?” and “How can I name this phenomenon conceptually?”

A category belongs to a set of categories and makes sense in relation to the other categories: it is a matter of relationships between categories. For the analyst, a category is an attempt to comprehend; for the reader, it is an access point to the meaning. It encompasses the evocation of what is said but is also conceptually rich, inducing a precise mental image of a dynamic or a sequence of events.

Logo KCE

The intellectual process of the categorization

Laurence.Kohn Tue, 11/16/2021 - 17:41

Adapted from Paillé and Mucchielli, 2011.

 

Three types of processes can be involved in categorization: analytic description, interpretative deduction and theorizing induction. In practice, however, these distinctions progressively blur. Analytic description is a first step, closest to the text, and constitutes preliminary descriptive work.

As with thematic coding, it is important to search for the right level and the right context. Here too, this depends on the position of the researcher and the context of the research.

For the technical aspects of the coding, we propose reading and applying the considerations presented for thematic coding.

Logo KCE

Data analysis in Grounded Theory

Laurence.Kohn Tue, 11/16/2021 - 17:41

Key to grounded theory is the idea that the researcher builds theories from empirical data. Strauss and Corbin (1998) define theory as “a set of well-developed concepts related through statements of relationship, which together constitute an integrated framework that can be used to explain or predict phenomena” (p. 51). The aim is to produce general statements based on specific cases (analytic induction). It is essential that the insights emerge from the data: this is a theorizing induction process. Other core features are the cyclic approach and constant comparison.

The cyclic approach is already apparent during data collection, but also in data analysis. Data collection is followed by preliminary data analysis, which is followed by new data collection, and so on. After each analytic phase, the topic list is adapted and information is collected in a more directed way: the researcher tries to fill in blind spots in the analysis and to test hypotheses. Hence, data analysis is generally expected to be an iterative process.

The grounded theory approach especially emphasizes constant comparative analysis. This means that data collection and data analysis are not organized in a strictly sequential way: constant comparative analysis is a process whereby data collection and data analysis occur on an ongoing basis. Each interview is transcribed and analysed as soon as possible, preferably before the next interview takes place. Any interesting finding is documented and incorporated into the next interview, and the process is repeated with each interview until saturation is reached. As a result, the initial interviews in a research project may differ considerably from the later interviews, as the interview schedule is continuously adapted and revised. For this reason, researchers have to clarify and document how structured or unstructured their data collection method is, and keep memos of the process.

Notes and observations made at the time of the interview are re-examined, challenged, amended and/or confirmed using transcribed audio or video tapes. All members of the research team are expected to participate in a review of the final interpretation, in which data and analysis are again re-examined, analysed, evaluated and confirmed. The use of more than one analyst can improve the consistency or reliability of the analyses.

Within the analysis, the cyclic character is also evident from the constant comparison: the researcher tries to falsify his or her findings by integrating new data and checking whether the theory holds. Data are broken down into small parts (coding) and then rebuilt by identifying relationships between those parts.

The analytic process of breaking down and rebuilding data in grounded theory happens in several steps:

  • Open coding

Open coding is the identification of an initial set of themes or categories (called codes[1]): “The analytic process through which concepts are identified and their properties and dimensions are discovered” (Strauss and Corbin 1998, p. 101). In this stage the data are divided into bits of text, which are given a label: the researcher isolates meaningful parts relevant to answering the research question. [see before]

  • Axial coding

This is a way of refining the initial codes. “The process of relating categories to their subcategories termed “axial” because coding occurs around the axis of a category, linking categories at the level of properties and dimensions” (Strauss and Corbin 1998, p. 123). Open coding results in a long list of separate codes. During axial coding all these loose ends are connected. This way concepts are identified.

  • Selective coding

This is the movement towards “the development of analytical categories by incorporating more abstract and theoretically based elements” (Pope and Mays, p. 71): “The process of integration and refining the theory” (Strauss and Corbin 1998, p. 143). During this third and last step of the analytic process, concepts are linked and a theory is built, often around one central concept (category of codes).

During the coding process, the data have been reduced to meaningful conceptualizing categories. NVivo (see XXX) offers several (visualization) tools (e.g. circle diagrams, charts, matrices) to discover relations between categories.
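One simple way such tools surface candidate relations is by counting how often two categories are assigned to the same segment. The sketch below illustrates the idea with hypothetical categories and coded segments; it is not the output or the method of any particular CAQDAS package:

```python
from collections import Counter
from itertools import combinations

# Hypothetical coded segments: each segment carries the set of
# conceptualizing categories assigned to it during coding.
segments = [
    {"loss of control", "seeking information"},
    {"loss of control", "distrust of experts"},
    {"seeking information", "distrust of experts", "loss of control"},
    {"seeking information"},
]

# Count co-occurrences: categories frequently assigned together may
# point to a relationship worth exploring during axial coding.
cooccurrence = Counter()
for cats in segments:
    for pair in combinations(sorted(cats), 2):
        cooccurrence[pair] += 1

for (a, b), n in cooccurrence.most_common():
    print(f"{a} <-> {b}: {n}")
```

A count like this is only a pointer back to the data: the analyst still has to read the underlying segments to decide whether the co-occurrence reflects a genuine relationship.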

 


[1] In the literature about grounded theory, ‘codes’ is the term mostly used, but these correspond to what we called ‘conceptualizing categories’ before. [Add crossref]

Logo KCE

4.6. Software to analyse qualitative data

Laurence.Kohn Tue, 11/16/2021 - 17:41

Analysis may be done either manually or by using qualitative analysis software, for example NVivo©[2], ATLAS.ti©[3], MAXQDA©[4], etc.

These Computer-Assisted Qualitative Data Analysis Software (CAQDAS) packages support the analyst with the storage, coding and systematic retrieval of qualitative data. They can manage different types of qualitative material, such as transcripts, texts, videos and images. Their utility for the analysis depends on the size of the corpus (number of interviews, plurality of the data sources), and their use should not be automatic. They can also be useful for collaborative purposes when several researchers are analysing the same data. They do not guarantee the scientific nature of the results: the quality of the results does not depend on the tool used, but on the scientific rigour and the systematic analysis of the data.
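The core “code and retrieve” function these packages provide can be illustrated with a minimal sketch. The documents, code labels and excerpts below are hypothetical, and real CAQDAS tools add far more (memos, queries, visualizations):

```python
# Minimal code-and-retrieve sketch (hypothetical data).
# Each coded segment records its source document, the code attached
# to it, and the excerpt of text it covers.
coded_segments = [
    {"doc": "interview_01.txt", "code": "waiting times",
     "excerpt": "I waited three months for an appointment."},
    {"doc": "interview_02.txt", "code": "waiting times",
     "excerpt": "The delay was the hardest part."},
    {"doc": "interview_02.txt", "code": "trust",
     "excerpt": "I trusted the nurse more than the system."},
]

def retrieve(segments, code):
    """Systematic retrieval: all excerpts indexed under a given code."""
    return [(s["doc"], s["excerpt"]) for s in segments if s["code"] == code]

for doc, excerpt in retrieve(coded_segments, "waiting times"):
    print(doc, "->", excerpt)
```

Storing and retrieving coded segments like this is exactly the bookkeeping the software automates; the analytic work of coding and interpreting remains with the researcher, which is why the tools cannot guarantee the quality of the results.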

 

[2]           http://www.qsrinternational.com/products_nvivo.aspx

[3]           http://www.atlasti.com/index.html

[4]           http://www.maxqda.com/