Blog post by Marianna Kozányiová, Junior Researcher in ETHOS Lab and student in the MSc in Digital Innovation & Management

Digitalization and globalization have expanded academic research, providing new possibilities for academic inquiry to navigate the complex realities (Ang, 2011, p. 779) we face today in a digital world that is inherently data-intensive. However, as new possibilities have emerged, the complexity of the research itself has also grown. Access to and governance of data have changed, and academic inquiry now spans the data worlds of (1) practices (empirical data), (2) social media and (3) academic journals on a global scale, including the digital environment. Researchers therefore face the challenge of navigating the research opportunities offered by these different data worlds through appropriate methodological modes (Ang, 2011, p. 788).

Demonstrating a Methodological Mode

This blog post focuses on the methodological mode of situational analysis for predominantly qualitative research, as presented by Clarke (2005, pp. 21-22), which is used to analyze complex research situations that almost always include qualitative data. Situational analysis suggests mapping as a way to comprehend qualitative data and introduces the notions of situational and social worlds maps (Clarke, 2005, p. 22). Following one of the traditional processes of academic inquiry, from an initial theoretical overview through empirical research and theoretical refinement towards findings, four different methods are reviewed:

(1) Network mapping and (2) word mapping are illustrated with data from the academic data world;

(3) pinboard mapping and (4) conceptual mapping are illustrated with template data from the practice data world.

Within this blog post, the outcomes of the mapping methods are framed as situational maps, mapping the elements of data collected from specific data worlds. If these situational maps are then synthesized in research, a social worlds map can be created, demonstrating the relations between the situational maps. It is important to note that the data applied here is only illustrative; it serves to provide an overview of what the methods and related tools can do, but data from any world could be applied.

As most research practices have moved to the digital environment, data is analyzed using computational devices (Knox & Nafus, 2020, p. 1). The mapping methods presented here are mainly virtual, as defined by Rogers (2013, p. 19): standard research methods enhanced by the digital environment and computational elements such as algorithms.

It is therefore crucial for the researcher to pay attention to the underlying mechanisms of the tools, which embed the research methods, in order to engage with the black-boxed (Latour, 1994, p. 36) processes behind the virtual methods. Five different tools are briefly presented throughout this blog post.

 

1. Network mapping

The network mapping method provides the researcher with the possibility of visualizing the relations between different nodes and node clusters. To obtain the right input data for the visualization, one or more variables that can be processed to establish these relations need to be present.

(1) The illustrative example presented here extracts the input data from the Scopus abstract and citation database. Scopus allows the researcher to search for academic articles within a specific area of interest through a set of keywords and Boolean operators, as well as through a set of specifications regarding language, document type and research area. After the search is done, the researcher can extract the bibliographical data (authors, title, abstract, keywords, citations, etc.) for all retrieved articles as a CSV file.

(2) The researcher should explore the data and the important variables of the input file; this can be done in Excel using its external data import function (see Figure 1.1). By exploring the data, the researcher can decide which variables are important for the network visualization.

Figure 1.1 Snapshot of CSV document from Scopus
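For researchers who prefer a programmatic alternative to Excel, the same exploration can be done in a few lines of Python with the pandas library. This is only a sketch: the file name is a placeholder, and the column labels ('Authors', 'Title', 'Author Keywords') are assumptions based on a typical Scopus export.

```python
# A minimal sketch of exploring a Scopus CSV export with pandas.
# The file name and column labels are assumptions based on a
# typical Scopus export; adjust them to the actual file.
import pandas as pd

df = pd.read_csv("scopus_export.csv")

# List all exported bibliographical variables (columns).
print(df.columns.tolist())

# Inspect the variables most relevant for network mapping.
print(df[["Authors", "Title", "Author Keywords"]].head())
```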

 

Cortext is a tool that can process the CSV file into a network visualization.

(3) In the Cortext environment, the input file is inserted as a corpus, on which the researcher runs a terms extraction script for specific variables, followed by a network mapping script.

(4) The terms extraction uses Natural Language Processing (NLP) to eliminate the repetition of the same phrases in different forms. The researcher can view the processed file, which reveals the frequencies of occurrence and co-occurrence of the most frequent terms.
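The exact pipeline Cortext runs remains black-boxed, but the gist of such a normalization step can be approximated in Python, here with the Porter stemmer from the nltk package. The example terms are purely illustrative.

```python
# A rough sketch of term normalization: different surface forms of
# the same phrase collapse onto one stemmed form, so frequencies
# are counted per term rather than per variant. This only
# approximates Cortext's black-boxed NLP pipeline.
from collections import Counter
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

keywords = ["IT consultancy", "IT consultancies", "IT Consultancy",
            "knowledge transfer", "knowledge transfers"]

def normalize(term):
    # Stem each word of a (possibly multi-word) term.
    return " ".join(stemmer.stem(w) for w in term.lower().split())

frequencies = Counter(normalize(t) for t in keywords)
print(frequencies)  # Counter({'it consult': 3, 'knowledg transfer': 2})
```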

(5) The network mapping script can then be applied, translating the values from the terms extraction file into nodes and edges. The outcome is a network map (see Figure 1.2).
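The network mapping script itself is likewise black-boxed; a minimal approximation of the translation into nodes and edges, using the networkx library, could look as follows. The per-article term lists are hypothetical.

```python
# A minimal sketch of turning term co-occurrences into nodes and
# edges: each article contributes an edge between every pair of its
# terms, and repeated pairs increase the edge weight.
from itertools import combinations
import networkx as nx

# Hypothetical per-article term lists (e.g. normalized keywords).
articles = [
    ["it consult", "knowledg transfer", "outsourc"],
    ["it consult", "knowledg transfer"],
    ["it consult", "outsourc"],
]

graph = nx.Graph()
for terms in articles:
    for a, b in combinations(sorted(set(terms)), 2):
        if graph.has_edge(a, b):
            graph[a][b]["weight"] += 1
        else:
            graph.add_edge(a, b, weight=1)

# Nodes are terms; edge weights are co-occurrence counts.
print(graph.edges(data=True))
```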

The illustrative example below shows the network of authors and related terms for 'IT Consultancy'. It gives the researcher a partial overview of the academic world within the boundaries of a specific topic. It can therefore guide the initial theoretical overview as well as the further grounding of the research within and between different segments. Furthermore, the visualization can serve as an artifact of reference on which other researchers can give feedback about the elements of the network they are familiar with or find interesting.

Figure 1.2 Literature network representing authors and topics created from Scopus CSV file with Cortext

 

2. Word mapping

The word mapping method provides the researcher with the possibility of creating a word cloud, and thus visualizing 'text-based content into spatial arrangement' (Martin & Hanington, 2012, p. 206). Such visualization is based on word frequency and can be framed as abstract 'algorithmic poetry': the word frequency algorithm counts the occurrences of particular words in the input file(s) and then visualizes them in a word cloud arrangement, with position and size proportional to frequency.
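This mechanism can be illustrated in a few lines of Python, for example with collections.Counter and the open-source wordcloud package, which sizes words in proportion to their counts. The input file name is a placeholder.

```python
# A minimal sketch of the word frequency mechanism behind a word
# cloud: count words, then render each word sized in proportion to
# its count. Uses the open-source wordcloud package.
import re
from collections import Counter
from wordcloud import WordCloud

text = open("abstracts.txt").read()  # placeholder input file
words = re.findall(r"[a-z']+", text.lower())
frequencies = Counter(words)

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(frequencies)
cloud.to_file("word_cloud.png")
```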

NVivo is qualitative research software that provides this word frequency query (see Figure 2.1) with a word cloud visualization. The researcher can choose one or more input files and run the word frequency query, creating a tabular overview of the most frequent words as well as a word cloud visualization whose style is chosen from a set of templates. There are three very important considerations.

(1) The researcher has to be familiar with the input file. If the file has been downloaded with a link on every page, the query is going to parse the link and include it among the most frequent words, which might not be desirable. The link would either have to be removed from the input file or excluded from the word cloud visualization directly. If a word that does not make sense to the researcher appears among the most frequent words, one can always search within the input file to put it in context.

(2) It is advisable to include stemmed words, meaning that, as in the case of Cortext, only one form of each word will be present and the same word will not be repeated in different forms, which can enhance the quality of the visualization.

Figure 2.1 Snapshot of NVivo’s Word frequency query settings

(3) One of the most important functions is the stop words list. Firstly, the researcher needs to make sure that the NVivo project content language (see Figure 2.2) is the same as the language of the input files and, therefore, of the research. Secondly, while checking the language, the researcher can also check the stop words list (see Figure 2.3), which is pre-set to exclude the most common words of the language without significant semantic value. The researcher can remove stop words from the default settings as well as add new ones; words can be added by right-clicking on the words to exclude in the visualization. After this is done, the word frequency query has to be run again, and the visualization will change. This list also serves as a record of the curation process behind the visualization.

Figure 2.2 (left) Snapshot of NVivo’s Content language settings; Figure 2.3 (right) Snapshot of NVivo’s Stop words list
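The logic of the stop words list can be sketched as follows; the default list here is a tiny illustrative subset, not NVivo's actual list, and the word counts are made up.

```python
# A sketch of stop word handling analogous to NVivo's stop words
# list: start from a default list for the project language, let the
# researcher add their own exclusions, and keep the final list as a
# record of the curation process.
DEFAULT_STOP_WORDS = {"the", "a", "and", "of", "to", "in", "is"}

# Words the researcher excludes while inspecting the visualization,
# e.g. a link that was parsed from every page of the input file.
researcher_stop_words = {"https", "www"}

stop_words = DEFAULT_STOP_WORDS | researcher_stop_words

# Hypothetical word frequencies from a word frequency query.
frequencies = {"the": 40, "consultancy": 12, "https": 9, "knowledge": 8}

filtered = {w: c for w, c in frequencies.items() if w not in stop_words}
print(filtered)  # {'consultancy': 12, 'knowledge': 8}
```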

 

The illustrative example below (see Figure 2.4) is created from academic abstract data extracted from Scopus. It provides the researcher with an overview of the common terminology used in academia within the specific topic of 'IT Consultancy'. More importantly, it can be used for analyzing patterns as well as for a comparative analysis of terminologies between academic, social and empirical data, analyzing different perspectives on the same topic. It can also be utilized within different workshops (see Figure 2.5) as a 'communication artifact' (Martin & Hanington, 2012, p. 206) of the research that participants can interact with, helping the researcher see related patterns.

 

Figure 2.4 (left) Word cloud visualization from NVivo; Figure 2.5 (right) Word cloud visualization after workshop

 

3. Pinboard mapping

Following Law's method for qualitative research, pinboard mapping provides the researcher with the possibility of 'engaging with the 'messiness' of reality by articulating its complexity' (Craige, 2015, p. 8). Considering specifically one of the common practices of qualitative research, collecting empirical data through interviews, researchers face several challenges. They need to gain information that can be turned into knowledge from participants who usually have different perspectives on the specific topic, while dealing with the participants' time constraints, as the research is usually not the participants' primary engagement.

In this blog post, Law's pinboard method is repurposed (see Figure 3.1) to help the researcher prepare for the empirical data collection by mapping:

(1) participant roles along with their partial perspectives,

(2) topics with sets of questions that should help the researcher answer the central research question, and

(3) theoretical concepts that currently underlie the research.

The researcher can then interact with the board by creating links between these categories, e.g. to decide which topics to focus on with which participant roles, or to add additional constructs revealed as the research progresses. The board can then highlight relations and constructs, helping the researcher navigate the collection of data as well as its analysis and synthesis. The illustrative example below has been created digitally through the MURAL platform, which provides an interactive and collaborative solution for creating different kinds of boards. The core elements of the tool used for the illustrative example were: canvas, text boxes, basic shapes (lines, arrows, etc.) and sticky notes. Furthermore, as in the case of word clouds, the pinboard is a great method to utilize within different workshops, where participants can interact with it and help the researcher see related links and contrasts.

Figure 3.1 Pinboard visualization created with MURAL
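Although MURAL is an interactive canvas, the structure of the repurposed pinboard, three categories plus the links drawn between them, can be captured in a simple data structure, for instance when taking the board's content into further analysis. All names below are hypothetical.

```python
# A hypothetical sketch of the pinboard's three categories and the
# links the researcher draws between them; all content is illustrative.
pinboard = {
    "participant_roles": {
        "project manager": "delivery perspective",
        "developer": "technical perspective",
    },
    "topics": {
        "knowledge transfer": ["How is knowledge handed over to the client?"],
    },
    "concepts": ["boundary objects"],
}

# Links created by interacting with the board, e.g. which topic to
# raise with which participant role.
links = [
    ("project manager", "knowledge transfer"),
    ("knowledge transfer", "boundary objects"),
]
```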

 

4. Conceptual mapping

Conceptual mapping provides the researcher with the possibility of visualizing the framework of the research, and thereby an understanding of the specific topic from which new meaning can emerge (Martin & Hanington, 2012, p. 38). Within this blog post, conceptual mapping is linked to the process of coding data, the initial step of research analysis (DeCuir-Gunby et al., 2011) and crucial for knowledge and theory production. Codes can be defined as meaningful labels (DeCuir-Gunby et al., 2011, p. 137) that will later become part of the concepts of the research.

As defined by DeCuir-Gunby et al. (2011), the process of coding consists of (1) creating a set of codes, which can emerge either from data (data-driven) or from theory (theory-driven), compiled in a codebook with the names of the codes, their definitions and inclusion examples, and (2) assigning the codes to valuable chunks of data, which is initially done through open coding. Open coding is followed by axial coding, in which the connections between the codes are revealed.

This analytical methodology has been imported into qualitative research software such as the aforementioned NVivo, but also ATLAS.ti. ATLAS.ti offers a great user interface for importing different document formats, coding, and visualizing the coding network, thereby enabling the conceptual mapping method. From personal experience, ATLAS.ti guides the researcher to enact both open and axial coding in sequence; the tool thus supports hybrid coding through its set of code and link managers, which compile the codes into a codebook. Hybrid coding is possible if the researcher has an initial theoretical overview that has underpinned the collection of empirical data. The codes can then be created based on, for example, the interview questions, which already suggest a certain segmentation of the data linked to the theoretical concepts. The researcher should keep in mind the central research question and how it is being answered.

The code manager (see Figure 4.1) can be used to create codes and to record their definitions and inclusion examples in the comments. In addition to the code manager, the code group manager (see Figure 4.2) allows the researcher to create code groups and drag sets of codes into the related groups.

 

Figure 4.1 Code manager in ATLAS.ti;

Figure 4.2 Code group manager in ATLAS.ti

 

The codes can also be linked through the code manager, simply by dragging one code on top of another and choosing one of the link options. Selecting one of the codes displays its links, and by right-clicking a link the researcher can open the link manager (see Figure 4.3), which offers default link suggestions as well as the possibility of creating new ones.

Figure 4.3 Link manager in ATLAS.ti
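How such a codebook with typed links might be represented outside ATLAS.ti can be sketched as follows, following the codebook elements described by DeCuir-Gunby et al. (2011); the codes, definitions and link types are purely illustrative.

```python
# A minimal sketch of a codebook: each code has a name, a definition
# and an inclusion example; axial coding is represented as typed
# links between codes. All content is hypothetical.
from dataclasses import dataclass

@dataclass
class Code:
    name: str
    definition: str
    example: str  # an inclusion example from the data

codebook = [
    Code("knowledge transfer",
         "Passing know-how between consultant and client",
         "'We documented everything before the handover.'"),
    Code("time pressure",
         "References to deadlines constraining the work",
         "'There was never time to write things down.'"),
]

# Axial coding as typed links between codes (link type is illustrative).
links = [("time pressure", "constrains", "knowledge transfer")]
```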

 

Moreover, through the code manager, colors can be assigned to codes. This can be done based on conceptual segmentation, where color segments visual environments into conceptual objects (Bianco et al., 2015, p. 87); the colors should therefore be distinguishable. The illustrative example below (see Figure 4.4) demonstrates the codes, their links and the color segmentation. The network visualization provides the researcher with an overview of possible findings and how they are interlinked, which should lead to knowledge and/or theory production. ATLAS.ti allows the researcher to export reports of coded data for each of the codes, but also to visualize the definitions of the codes and related quotations within the network. Furthermore, it is also possible to do a comparative analysis of the input files and related codes.

Figure 4.4 Network of codes created with ATLAS.ti
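A minimal way to assign distinguishable colors to code groups, in the spirit of the conceptual segmentation above, could look as follows; the group names are hypothetical, and the palette simply takes the first colors of a common qualitative color scheme.

```python
# A sketch of assigning distinguishable colors to code groups, so
# that color segments the network into conceptual objects (cf.
# Bianco et al., 2015). Group names and colors are illustrative.
DISTINCT_COLORS = ["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728", "#9467bd"]

code_groups = ["knowledge", "organization", "technology"]
group_colors = dict(zip(code_groups, DISTINCT_COLORS))

print(group_colors)  # each code group maps to a visually distinct color
```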

 

Summarizing

This blog post has provided a very brief description of possible coding, analysis and visualization tools, many of which have dedicated learning platforms including guides, webinars, forums and support. Exploring such platforms might help the researcher open up the black-boxed mechanisms of the tools.

As mentioned under some of the method descriptions, the visual outcomes of the methods, in other words the research artifacts, can be used within different kinds of workshops, either directly with research participants or with the network of experts that the researcher is part of. The interaction with the artifacts during the workshops can be either physical or digital. If digital interaction is set up, the workshop participants might co-create new versions of the visualizations that could be valuable for the research; moreover, the opportunities and limitations of the methods and related tools might be further questioned, enhancing the researcher's own reflections.

One of the last important remarks when working with the mapping methods relates to situated knowledge and accountability (Haraway, 1988, p. 583). By mapping data from different sources, collected, visualized and analyzed under sets of specifications, we are creating a concrete knowledge space, the researcher's own network of people, practices and places (Turnbull, 2000), that reflects our situated decision-making about what will be mapped and what will be visible to people who interact with the final research outcomes. It is therefore significant to record not only the outcomes but also the process of creating the artifacts, in order to account for the decisions that have led to the creation of the specific knowledge space. This can be done by taking screenshots of the settings used within the different tools, saving different versions of the artifacts or recording whole workshop sessions. Recording the process is vital.

However, researchers will not always be able to enact their vision of the artifacts, as they might be constrained by the limitations of the tools. Many of the method mechanisms embedded within the tools are not flexible enough to provide the researcher with the level of malleability required. There are, of course, workarounds to enhance flexibility, such as creating an artificial input file or including additional tools; these workarounds are, however, usually very time-consuming. From personal experience, the level of tool malleability in relation to the specific methods is summarized below on a scale from 1 to 5, where 1 is the least flexible and 5 is the most flexible.

 

| Tool(s) | Method | Level of malleability | Notes |
| --- | --- | --- | --- |
| Scopus & Cortext | Network mapping | 3.5 | The CSV export from Scopus is limited to a certain size; the network visualizations in Cortext are not very flexible. |
| NVivo | Word mapping | 2 | The size and position of the words cannot be manipulated. |
| MURAL | Pinboard mapping | 5 | |
| ATLAS.ti | Conceptual mapping | 4 | The only constraint is the difficulty of creating sub-groups. |

Table 1. Summary of the levels of malleability of the different tools in relation to specific methods

 

It is therefore up to the researcher to consider and reflect on how the research changes, as well as how the researcher's own behaviour changes, in response to the methods and related tools employed, both being significant actors (Knox & Nafus, 2020) within the research network.

 

References

Ang, I. (2011). Navigating complexity: From cultural critique to cultural intelligence. Continuum: Journal of Media & Cultural Studies, 25(6), 779–794.

Bianco, S., Gasparini, F., & Schettini, R. (2015). Color coding for data visualization. In Encyclopedia of Information Science and Technology (3rd ed.). IGI Global. https://doi.org/10.4018/978-1-4666-5888-2

Clarke, A. (2005). Doing situational maps and analysis. In Situational Analysis: Grounded Theory After the Postmodern Turn. Sage.

Craige, W. A. (2015). The Pinboard in Practice: A Study of Method through the Case of US Telemedicine, 1945–1980 (Doctoral thesis, Durham University). Retrieved from http://etheses.dur.ac.uk/10966/1/The_Pinboard_in_Practice_-_Will_Craige.pdf?DDD34+

DeCuir-Gunby, J. T., Marshall, P. L., & McCulloch, A. W. (2011). Developing and Using a Codebook for the Analysis of Interview Data: An Example from a Professional Development Research Project. Field Methods, 23(2), 136–155. https://doi.org/10.1177/1525822X10388468

Haraway, D. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575–599.

Knox, H., & Nafus, D. (2020). Introduction: Ethnography for a data-saturated world. In Ethnography for a Data-Saturated World. Manchester University Press.

Latour, B. (1994). On technical mediation: Philosophy, sociology, genealogy. Common Knowledge, 3(2), 29–64.

Martin, B., & Hanington, B. (2012). Universal Methods of Design: 100 Ways to Research Complex Problems, Develop Innovative Ideas, and Design Effective Solutions. Rockport Publishers.

Rogers, R. (2013). Digital Methods. MIT Press.

Turnbull, D. (2000). Masons, Tricksters and Cartographers: Comparative Studies in the Sociology of Scientific and Indigenous Knowledge. Harwood Academic Publishers.