Chapter 3: Method
This chapter presents the methods and research design for this dissertation study. It begins by presenting the research questions and settings, the LibraryThing and Goodreads digital libraries. This is followed by an overview of the mixed methods research design used, incorporating a sequence of three phases. Each of the three methods—qualitative content analysis, a quantitative survey questionnaire, and qualitative interviews—is then presented in detail. The codes and themes used for analysis during the qualitative phases are discussed next. The chapter continues with sections on the management of the research data for this study; the validity, reliability, and trustworthiness of study findings; and ethical considerations. The invitation letters and informed consent statement; survey instrument; interview questions; a quick reference guide used for coding and analysis; and documentation of approval from LibraryThing, Goodreads, and the FSU Human Subjects Committee are included in appendices.
3.1. Research Questions
As stated in Chapter 1, the purpose of this research, taking a social perspective on digital libraries, is to improve understanding of the organizational, cultural, institutional, collaborative, and social contexts of digital libraries. The following two research questions satisfy the purpose of this study within the approach, setting, and framework introduced in Chapter 1:
- RQ1: What roles do LibraryThing and Goodreads play, as boundary objects, in translation and coherence between the existing social and information worlds they are used within?
- RQ2: What roles do LibraryThing and Goodreads play, as boundary objects, in coherence and convergence of new social and information worlds around their use?
These two questions explore the existing and emergent worlds that may surround digital libraries in social, collaborative use and behavior. RQ1 focuses on examining how LibraryThing and Goodreads may support existing collaboration, communities, and other social activities and behaviors across social and information worlds, with a specific eye to translation, characteristics indicating coherence of existing worlds, and uses of the digital libraries as boundary objects. RQ2 focuses on examining how LibraryThing and Goodreads may support coherence and convergence of new, emergent social and information worlds and their characteristics, as indicated by use of the digital libraries (as boundary objects) as new, localized standards. The questions focus on the roles of each digital library, whether that is one role, multiple roles, or possibly no role played by LibraryThing and Goodreads. These roles may or may not include explicit support for collaboration, communities, or social contexts. The research questions use and incorporate the definitions, concepts, and propositions of social digital libraries (see section 2.4.3), the social worlds perspective (see sections 2.7.1.1 and 2.8.1), the theory of information worlds (see section 2.8.2), and the synthesized theoretical framework for social digital libraries (developed in section 2.8.3). Coherence and convergence are seen as the same concept in boundary object theory (see section 2.7.1.4), leading to overlap between the concepts—and the two research questions—in operational data collection and analysis. The connotations of the two differ, however: convergence implies that new, emergent worlds will form, and this meaning is indicated by its use in RQ2, but not RQ1.
3.2. Setting: Case Studies of LibraryThing and Goodreads
In this dissertation study, the boundary objects of interest are defined and given as two digital libraries: LibraryThing and Goodreads (see sections 3.2.2 and 3.2.3 below). This approach is the opposite of the procedure used by Star and Griesemer (1989), who first identified the populations of communities, users, and stakeholders in their study, then examined the boundary objects they used. Starting with the boundary objects is in line with Star's later work (Bowker & Star, 1999; Star et al., 2003). Bødker and Christiansen (1997); Gal, Yoo, and Boland (2004); Henderson (1991); and Pawlowski, Robey, and Raven (2000) have used this approach to varying extents, demonstrating its validity and usefulness as an approach for studying social digital libraries as boundary objects.
3.2.1. Case Study Approach
This research takes a case study approach, in which "a detailed" and intensive "analysis of … individual case[s]"—LibraryThing and Goodreads—was performed (Fidel, 1984, p. 274). The research sought to generate "a comprehensive understanding of the event under study"—uses of these digital libraries as boundary objects within and across existing and emergent social and information worlds—and to develop "more general theoretical statements about regularities in the observed phenomena" surrounding social digital libraries (p. 274). Case studies often employ a cycle of research methods that inform one another through a longer, more detailed research process than a single exploratory method would allow. A case study approach fosters multiple opportunities to revisit and reanalyze data collected earlier in the study, revise the research design as new facets and factors emerge, and combine multiple methods and data sources into a holistic description of each case. The research design used here, employing two qualitative methods and one quantitative method in a cycle (see section 3.3), follows this approach.
Yin (2003) breaks the process of conducting a case study into five phases. The phases "effectively force [the researcher] to begin constructing a preliminary theory" prior to data collection (p. 28), as done in Chapter 2. Each of Yin's five steps can be found in sections of this dissertation. First, one must determine the research questions to be asked; these were included in section 3.1 above. Second, one must identify what Yin calls the "propositions," statements "direct[ing] attention to something that should be examined within the scope of study" (p. 22). The theoretical framework developed earlier (see section 2.8) and the purpose of this research as stated in Chapter 1 provide this necessary focus from a conceptual perspective. The operationalization of this focus is discussed for each method in sections 3.4.4, 3.5.3, 3.6.4, and 3.7. Third, Yin says one must determine the unit of analysis, based on the research questions. In this study, the overall units of analysis are the two social digital libraries under consideration, LibraryThing and Goodreads; other units of interest include communities, groups, and individuals. The specific unit of analysis for each method of data collection is discussed in sections 3.4.1, 3.5.1, and 3.6.2. Fourth, one must connect "data to [theoretical] propositions," matching patterns with theories (p. 26). Using the theoretical framework developed in section 2.8 in data analysis (see sections 3.4.4, 3.5.5, 3.6.6, and 3.7) provides for this matching process. For the final step, Yin says one must determine "the criteria for interpreting [the] findings" (p. 27); the criteria chosen for this research are discussed in the data analysis sections (3.4.4, 3.5.5, 3.6.6, and 3.7) and are considered in light of concerns of validity, reliability, and trustworthiness (section 3.9) and the benefits (section 1.7 and Chapter 5) and limitations (section 5.6) of the study.
This research employed a multiple-case, "holistic" design at the highest level, focusing on LibraryThing and Goodreads as units, but what Yin (2003, p. 42) calls an "embedded" design, with multiple units of analysis considered in each method, at lower levels. Examining two social digital libraries allows them to be compared and contrasted, but commonalities were expected to emerge—and did—across the two cases to allow theoretical and practical conclusions to be drawn (see Chapter 5). Yin stated case study designs must be flexible and may change as a result of research not turning out as expected, and subtle changes were made to what was intended to be a flexible plan for case studies of LibraryThing and Goodreads and their use as boundary objects within and across existing and emergent social and information worlds.
3.2.2. LibraryThing
LibraryThing (LT) is a social digital library and web site founded in August 2005 (LibraryThing, n.d.-a), with over 1.8 million members as of June 2014 (LibraryThing, 2014). It allows users to catalog books they own, have read, or want to read (LibraryThing, n.d.-b); these serve as Functional Requirements for Bibliographic Records (FRBR) items (International Federation of Library Associations and Institutions, 2009). Users can assign tags to books, mark their favorites, and create and share collections of books with others; these collections are searchable and sortable. LT suggests books to users based on the similarity of collections. Users can provide reviews, ratings, or other metadata (termed "Common Knowledge"; LibraryThing, 2013) for editions of books (FRBR’s manifestations and expressions) and works (as in FRBR); this metadata and users’ tags are shared across the site (LibraryThing, n.d.-c). LT provides groups (administered by users or staff), which include shared library collection searching, forums, and statistics on the books collected by members of the group (LibraryThing, n.d.-d). Discussions from these forums about individual books are included on each book’s page, as are tags, ratings, and reviews. Each user has a profile page which links to their collections, tags, reviews, and ratings, and lists other user-provided information such as homepage, social networks used (Facebook, Twitter, etc.), and a short biography (LibraryThing, n.d.-c).
Examining LibraryThing in light of the definition of social digital libraries (see sections 1.1 and 2.4.3) shows the following:
- LT features one or more collections of digital content collected for its users, who can be considered a community as a whole and part of many smaller communities formed by the groups feature. This content includes book data and metadata sourced from Amazon.com and libraries using the Z39.50 protocol (LibraryThing, n.d.-b); and user-contributed data, metadata, and content in many forms: tags, favorites, collections, reviews, posts in discussions, and profile information.
- LT features services relating to the content and serving its user communities, including the ability to catalog books; create collections; discuss with others; and search for and browse books, reviews, tags, and other content.
- LT is managed by a formal organization and company, and draws on the resources of other formal organizations (Amazon.com, libraries) and informal groupings (LT users) for providing and managing content and services.
As a large social digital library and web site, open to the public and with multiple facets, LibraryThing is well-suited as a setting and case for examining the role of digital libraries within and across communities. The existing research literature on LibraryThing has focused on its roles for social tagging and classification (e.g. Chang, 2009; Lu, Park, & Hu, 2010; Zubiaga, Körner, & Strohmaier, 2011) and in recommendation and readers’ advisory (e.g. Naughton & Lin, 2010; Stover, 2009). This study adds an additional view of the site as an online community and social digital library.
3.2.3. Goodreads
Goodreads (GR), similar to LibraryThing, is a social digital library and web site founded in January 2007 (Goodreads, 2014a). As of June 2014, it had 25 million members. Users can "recommend books" via ratings and reviews, "see which books [their] friends are reading; track the books [they are] reading, have read, and want to read; … find out if a book is a good fit for [them] from [the] community's reviews" (para. 2); and join discussion groups "to discuss literature" (Goodreads, 2014b, para. 11). As with LibraryThing, Goodreads users can create lists of books (called "shelves"), which act as site-wide tags anyone can search on (para. 5). Searching and sorting are possible for other metadata and content types; metadata can apply to editions (manifestations or expressions) of a book or to whole works (in FRBR terms; International Federation of Library Associations and Institutions, 2009). Groups can be created, joined, and moderated by users (including Goodreads staff); they can include group shelves, discussion forums, events, photos, videos, and polling features. Users have profile pages, which may include demographic information, favorite quotes, writing samples, and events. Users who have more than 50 books on their shelves can apply to become a Goodreads librarian, which allows them to edit and update metadata for books and authors (Goodreads, 2012d, "What can librarians do?" section). In March 2013—during the early stages of this dissertation research—Amazon.com acquired Goodreads (Chandler, 2013).
Examining GR in light of the definition of social digital libraries (see sections 1.1 and 2.4.3) shows the following:
- GR features one or more collections of digital content collected for its users, who can be considered a community as a whole and part of many smaller communities formed by the groups feature. This content includes book data and metadata previously sourced from Ingram (a book wholesaler), libraries (via WorldCat and the catalogs of the American, British, and German national libraries), and publishers (Chandler, 2012), and now from Amazon since their purchase (Chandler, 2013); and user-contributed metadata and content, including shelves, lists, forum posts, events, photos, videos, polls, profile information, and book trivia.
- GR features services relating to the content and serving its user communities, including the ability to catalog books; create shelves; discuss with others; and search for and browse books, reviews, lists, and other content.
- GR is managed by a formal organization and company—Goodreads Inc., although now owned by Amazon—and draws on the resources of other formal organizations (Amazon, Ingram, OCLC via WorldCat, libraries, and publishers) and informal groupings (GR users, the librarians group) for providing and managing content and services.
As with LibraryThing, Goodreads is well-suited as a setting and case for examining the role of digital libraries within and across communities, because it is a large social digital library and web site that is open to the public and has multiple facets. There is little existing research literature on Goodreads, limited to its use in recommendation and readers’ advisory (e.g. Naik, 2012; Stover, 2009) and examining its impact on the practice of reading (Nakamura, 2013). This study adds an additional view of the site as an online community and social digital library.
3.3. Research Design
Use of a mixed methods research design combines qualitative and quantitative methods to emphasize their strengths; minimize their weaknesses; improve validity, reliability, and trustworthiness; and obtain a fuller understanding of uses of social digital libraries as boundary objects within and across social and information worlds. Definitions of mixed methods research vary, but core characteristics can be identified, which Creswell and Plano Clark (2011, p. 5) summarize as
- collection and analysis of both qualitative and quantitative data;
- integration of the two forms of data at the same time, in sequence, or in an embedded design;
- prioritizing one or both forms of data;
- combining methods within a single study or multiple phases of a larger research program;
- framing the study, data collection, and analysis within philosophical, epistemological, and theoretical lenses; and
- conducting the study according to a specific research design meeting the other criteria.
This study meets all of these criteria. Qualitative and quantitative data were collected and integrated in sequence; qualitative data was prioritized, but not at the expense of quantitative data collection; multiple methods were used within this one study; and the study was based on the theoretical framework developed and the tenets of social informatics and social constructionism explained in Chapter 2.
This study took a philosophical view of mixed methods research similar to the view of Ridenour and Newman (2008), who "reject[ed] the [standard] dichotomy" between qualitative and quantitative research methods, believing there to be an "interactive continuum" between the two (p. xi). They stated "both paradigms have their own contributions to building a knowledge base" (p. xii), suggesting a holistic approach to research design incorporating theory building and theory testing in a self-correcting cycle. Qualitative methods, Ridenour and Newman argued, should inform the research questions and purpose for quantitative phases, and vice versa; they termed this process an "interactive" one (p. xi). Research designs should come from the basis of "the research purpose and the research question" (p. 1), what "evidence [is] needed," and what epistemological stance should be taken "to address the question" (p. 18).
Greene (2007) presented a similar argument, stating "a mixed methods way of thinking actively engages with epistemological differences" (p. 27); multiple viewpoints are respected, understood, and applied within a given study. She acknowledged the tensions and contradictions that will exist in such thought, but believed this would produce the best "conversation" and allow the researcher to learn the most from their study and data (p. 27). Creswell and Plano Clark (2011) encompassed multiple viewpoints and potential designs in their chapter on choosing a mixed methods design (pp. 53–104). They considered six prototypical designs: (a) convergent parallel; (b) explanatory sequential; (c) exploratory sequential; (d) embedded; (e) transformative; and (f) multiphase.
The research design for this dissertation study is a variation on a multiphase design incorporating elements of the explanatory sequential and exploratory sequential designs of Creswell and Plano Clark. Three methods were used for data collection, following the process proposed by Ridenour and Newman (2008) and taking the approach to thought suggested by these authors, Creswell and Plano Clark (2011), and Greene (2007). The selection of this design and these methods was based on the research purpose discussed in Chapter 1, the research questions introduced in section 3.1, and the research setting explained in section 3.2. The methods used were
- content analysis of messages in LibraryThing and Goodreads groups (section 3.4);
- a structured survey of LibraryThing and Goodreads users (section 3.5); and
- semi-structured qualitative interviews with users of LibraryThing and Goodreads (section 3.6).
The holistic combination of these methods, interrelated in a multiphase design, has allowed for exploratory and descriptive research on social digital libraries as boundary objects incorporating the strengths of quantitative and qualitative methods and the viewpoints of multiple perspectives.
3.3.1. Integrated Design
A sequential, multiphase research design was employed for two reasons. First, each of the methods above required focus on data collection and analysis by the researcher. Trying to use a parallel or concurrent design, conducting content analysis alongside a survey or a survey alongside interviews, could have caused excess strain; a sequential design improved the chances of success, the quality of data collected and analyzed, and the significance of and level of insight in the study’s conclusions. Second, each method built on the methods before it. The design of the survey and interview instruments was influenced by ideas drawn from the literature and theories for the study and by elements of interest uncovered during the content analysis phase. The interviews focused on gathering further detail on and insight into findings from the survey results and the content analysis. This combination of methods allowed for exploring each case through content analysis, obtaining summary explanatory data through surveys, and then detailed descriptive and explanatory data through the interviews, achieving the benefits of both the exploratory and explanatory research designs presented by Creswell and Plano Clark (2011, pp. 81–90).
Creswell and Plano Clark (2011) expressed caution, noting multiphase research designs often require substantial time, effort, and multi-researcher teams. The three phases used here were not long or intensive enough to cause significant delays in the completion of this dissertation. This is one coherent dissertation study, instead of the long-term, multi-project research program Creswell and Plano Clark cite as the prototypical multiphase design. While it was known in advance this would not be the speediest dissertation research project, using a sequential design allowed the results from each phase to emerge as the research proceeded, instead of having to wait for all phases to complete as in a concurrent design. A complete and insightful picture of the findings and conclusions of the dissertation came within a reasonable amount of time and with a reasonable level of effort.
3.4. Content Analysis
Content analysis has been defined as "a technique for making replicable and valid inferences from texts (or other meaningful matter) to the contexts of their use" (Krippendorff, 2004a, p. 19), with emphasis often placed on "the content of communication" (Holsti, 1969, p. 2)—specific "characteristics of messages" (p. 14)—"as the basis of inference" (p. 2). Early forms of content analysis required objectivity and highly systematic procedures (see Holsti, 1969, pp. 3–5, 14). The form of content analysis used in this study considers the meaning and understanding of content to "emerge in the process of a researcher analyzing a text relative to a particular context" (Krippendorff, 2004a, p. 19), a subjective and less rigid approach. Such text or content may have multiple, socially constructed meanings, speaking to more "than the given texts" (p. 23); they are indicative of the "contexts, discourses, or purposes" surrounding the content (p. 24).
There are at least three categories of content analysis, which Ahuvia (2001) labels traditional, interpretive, and reception-based; other authors and researchers (e.g. Babbie, 2007, p. 325; Holsti, 1969, pp. 12–14) break content analysis into latent (subjective and qualitative) and manifest (objective and quantitative) categories of analysis. Early content analysis was purely objective and generated quantitative summaries and enumerations of manifest content, but qualitative and latent analysis have found greater acceptance over time (Ahuvia, 2001; Holsti, 1969, pp. 5–14; Krippendorff, 2004a). This study used the interpretive approach and focused coding on the latent content—the underlying meaning—of the data gathered. This section discusses the application of content analysis in the first phase of this dissertation research, including (a) the choice of the unit of analysis; (b) the population and sampling method chosen; (c) the sampling and data collection procedures followed, including a pilot test; and (d) how the data was analyzed.
3.4.1. Unit of Analysis
The unit of analysis chosen for the content analysis in this study was the message. LibraryThing's and Goodreads' group discussion boards are organized into threads, each of which may contain multiple individual messages. Analysis of these individual messages was aimed at uncovering indications of the roles the two digital libraries play in existing and emergent social and information worlds. Analysis began with the individual messages to ensure details and phenomena at that level were captured, but over time went beyond individual messages to the thread or group levels, since these phenomena served as instantiations of social and information worlds or as sites for interaction and translation.
3.4.2. Population and Sampling
The broader population of messages could be defined as all messages posted in public LibraryThing and Goodreads groups, but the logistics of constructing a sampling frame for such a population were and are all but impossible; it is improbable the two sites would provide data on all messages posted if not required of them by law. Recent messages from active groups were of most interest and use for this study. The population of messages was defined as all messages from the most active LibraryThing groups in the past week (taken from http://www.librarything.com/groups/active) and the most recently active Goodreads groups (taken from http://www.goodreads.com/group/active) as of April 30, 2013, the day data collection began for the content analysis phase of the study. The sampling frames were restricted to as close to but no more than 100 groups as possible, based on LibraryThing's claim that its list contains the 100 most active groups; the actual frames consisted of 91 LibraryThing groups and 93 Goodreads groups once duplicates were removed. During the planning and design of this study, Goodreads provided a list of "recently popular" groups (at http://www.goodreads.com/group/recently_popular) that was akin in nature to LibraryThing's list; that list was taken down sometime in early 2013 because it caused a server slowdown (Jack & Finley, 2013). Using the most recently active groups did not guarantee consistent popularity or activity over a recent time period (such as a week), but did address the need to collect recent messages from active groups and was deemed the most acceptable source for a sampling frame still available.
To obtain a sample of messages from this population, a stratified random sampling method using the levels of group, thread, and message was employed. From the lists identified above, five groups were selected at random from each digital library (for a total of ten), but with the following inclusion and exclusion criteria applied to help ensure representativeness and allow for meaningful analysis:
(a) At least one group from each digital library with over 100 messages posted in the last week was selected.
(b) At least one group from each digital library with under 100 messages posted in the last week was selected.
(c) Any group with fewer than 60 messages total was removed and a new group selected.
(d) Any group with fewer than two members was removed and a new group selected.
(e) Any group used in the pilot study (see below) was removed and a new group selected.
Due to constraints placed on this research by Goodreads and the nature of this digital library, all group selections for Goodreads required approval from at least one group moderator per group. Prior to the collection of any data, such moderators were messaged via the site using the invitation letter found in Appendix A, section A.1.1, and provided their consent for their group to be included in the research by agreeing to an informed consent statement (see Appendix A, section A.1.2). Any groups for which the moderator did not provide consent within two weeks were removed from the sample and a new group selected, using the same procedures and initial list of groups.
Two additional groups, one from LibraryThing and one from Goodreads, were used for a pilot study of the content analysis procedures, selected at random using the same procedure as above but with only criteria (c) and (d) applied. As with the main sample, the moderator of the Goodreads group selected was contacted to obtain approval and consent prior to data collection; the moderator of the first group selected did not respond within two weeks, so a new group was selected. These two groups were selected in December 2012, earlier than the main sample, using the two lists of groups as they were at that time. For the pilot, threads were selected systematically and at random from the threads shown on the group's front page (i.e. the most recent and active threads) until the total messages per group reached between 50 and 60; in both cases only one thread was selected, containing 60 messages. Any thread with fewer than two messages was to be excluded from selection. All messages in the selected threads, up to the 60-message limit, were part of the sample for the pilot test, which totaled 120 messages. At 20% of the size of the intended sample for the main content analysis phase, the pilot sample provided sufficient data to assess whether the proposed procedures were appropriate and how long this phase of the study would take. The pilot study allowed adjustments to be made for the main content analysis phase, based on problems and difficulties observed.
For the main content analysis phase, the ten groups were selected on April 30, 2013, later than the pilot-test groups, using the two lists of groups as they stood on that day. A few weeks later, threads were systematically selected at random from the threads shown on each group's front page (i.e. the most recent and active threads) until the total messages per group reached between 50 and 60. As with the pilot, any thread with fewer than two messages was excluded from selection. No more than the first 20 messages in each thread selected were part of the sample, a change from the pilot test made to ensure at least three threads per group were selected and to improve the representativeness of the sample. This was intended to lead to a total sample of between 500 and 600 messages, about half from LibraryThing and half from Goodreads. The samples in practice consisted of 286 messages from LibraryThing and 233 from Goodreads, for a total of 519 messages (see also Chapter 4, section 4.1). For all random and systematic sampling in the pilot and main data collection stages, the starting point and interval were chosen by generating random numbers using Microsoft Excel's RANDBETWEEN function.
This stratified random sampling procedure was chosen to encourage representativeness of the resulting sample while ensuring the data selected allowed for meaningful analysis. Messages, threads, or groups could have been selected purposively, but such a method could have resulted in a sample biased towards a given type of message, thread, or group. Random sampling of groups and threads from the population deemed useful for analysis produced a sample of messages from LibraryThing and Goodreads that can be judged quite representative, if not equivalent to one generated from simple random sampling, since the sampling frames did not include the entire population of groups. The sizes of the sample at each stratum were chosen to balance representativeness against the time and resources necessary to complete content analysis.
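To make this two-stage procedure concrete, the following is a minimal sketch in Python (not the tooling actually used: selection was performed against the live web sites, with Excel's RANDBETWEEN supplying the random starts and intervals). The group and thread records, field names, and function names here are hypothetical.

```python
import random

def select_groups(frame, n=5, pilot_ids=frozenset()):
    """Randomly draw n groups after applying criteria (c)-(e); criteria (a)
    and (b), which concern the mix of weekly activity levels across the
    selected set, are checked on the result rather than per group."""
    pool = [g for g in frame
            if g["id"] not in pilot_ids      # (e) exclude pilot-study groups
            and g["total_messages"] >= 60    # (c) at least 60 messages total
            and g["member_count"] >= 2]      # (d) at least two members
    return random.sample(pool, n)

def select_messages(front_page_threads, target=50, per_thread=20):
    """Systematically sample front-page threads using a random start and
    interval, taking at most the first per_thread messages per thread,
    until roughly 50-60 messages are collected for the group."""
    eligible = [t for t in front_page_threads if len(t["messages"]) >= 2]
    start = random.randrange(len(eligible))
    interval = random.randrange(1, len(eligible) + 1)
    sampled = []
    for i in range(start, len(eligible), interval):
        sampled.extend(eligible[i]["messages"][:per_thread])
        if len(sampled) >= target:
            break
    return sampled  # if short of target, a new start and interval are drawn
```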
3.4.3. Data Collection Procedures
Messages were collected by using a Web browser to access the LibraryThing and Goodreads web sites, following the sampling procedures discussed above. Once a thread was displayed on the screen, up to 20 messages from the thread—starting with the earliest messages—were copied and pasted into a Microsoft Word document; one such file was maintained per thread. As found in the digital libraries, each message's author, date/time posted, and message content were saved to that file. Images or other media were saved in their original context as well as possible. Members' identities, as indicated by their usernames, were used to allow for identifying common message authors in a thread, for analysis of the flow of conversation, and for identifying potential participants for later phases of the study. Identities remained confidential and were not part of further analysis, results, or publications; pseudonyms are used in this dissertation (see section 4.1). Avatars from Goodreads were discarded, as members' usernames were sufficient for this purpose. These documents were stored as discussed in section 3.8 on data management.
3.4.4. Data Analysis
For analysis, the documents were imported into NVivo qualitative analysis software, version 10, running on a MacBook Pro via a virtualized Windows 7 installation. Each message was examined and codes were assigned based on its latent meaning and interpretation. The codes assigned drew from boundary object theory, the social worlds perspective, and the theory of information worlds, which served as an interpretive and theoretical framework for the content analysis (cf. Ahuvia, 2001). These codes were common to multiple phases of this study, and can be found in section 3.7 below. So-called "open" codes, not included in the list but judged by the researcher to be emergent in the data and relevant to the study's purpose and research questions, could be assigned during the content analysis and coding process, as recommended by Ahuvia (2001) for interpretive content analyses and by others for general qualitative data analysis (e.g. Charmaz, 2006). Findings from the data as coded and analyzed, including open codes, are included in Chapter 4, section 4.1.
3.4.4.1. Pilot test
These coding and analysis procedures were piloted first, using data from two of the groups, prior to their use in the main content analysis phase. Two volunteer coders, doctoral students at the FSU School of Information,[1] applied the coding scheme and procedures developed for analyzing qualitative data in this study, presented in greater detail in section 3.7 below. The researcher applied the same scheme and procedures. Measures were in place to ensure the validity, reliability, and trustworthiness of the data and analysis, as discussed in section 3.9 below. Both intercoder reliability statistics and holistic, qualitative analysis of the results were used to clarify the scheme and procedures after each round of coding. Changes made to the procedures and the coding scheme, and issues encountered with intercoder reliability statistics, are discussed at length in section 3.7 below.
3.5. Survey
Surveys are a common research method in the social sciences, including library and information science. They allow characteristics of a population to be estimated, via statistics, through analysis of the quantified responses given to questions by a small sample of the population (Fowler, 2002; Hank, Jordan, & Wildemuth, 2009; Sapsford, 1999). Surveys consist of "a set of items, formulated as statements or questions, used to generate a response to each stated item" (Hank et al., 2009, p. 257). The data collected may describe the beliefs, opinions, attitudes, or behaviors of participants on varied topics, although most research surveys have a special purpose and focus (Fowler, 2002). This is true of the survey used here, which focused on obtaining data on uses of LibraryThing and Goodreads by a sample of their users, in the specific context of their usage as boundary objects within and across social and information worlds.
The following sections cover the components of survey research methods cited by Fowler (2002, pp. 4–8) and Hank et al. (2009) as they apply to the survey used in this study. These include discussion of the unit of analysis, population, and sampling (sections 3.5.1 and 3.5.2); concept operationalization and survey question design (section 3.5.3); pretesting and data collection (section 3.5.4); and data analysis (section 3.5.5). The survey was designed as a coherent whole—as recommended by Fowler (2002, p. 7)—and in relation to the content analysis and interview methods used in other phases of the study.
3.5.1. Unit of Analysis
For the survey phase of this dissertation study, the unit of analysis was the individual LibraryThing or Goodreads user . These users were—and are—understood to be members of one or more communities, social worlds, or information worlds, and to be members of or frequent one or more LibraryThing or Goodreads groups. Analysis of their responses to questions about these groups and other communities they were part of allowed for greater understanding of the roles the digital library plays for them in context of these worlds. Tentative conclusions could be made about the nine groups from which users were surveyed and about the communities associated with these groups, but generalization to LibraryThing and Goodreads as a whole was not possible, as explained in section 3.5.2 below.
3.5.2. Population and Sampling
The broader population of LibraryThing and Goodreads users totals over 26 million people, and the logistics of constructing anything resembling a sampling frame—i.e. a complete list of all users of the two sites—are all but impossible. Given the focus in the content analysis phase on nine groups (five from LibraryThing, four from Goodreads), narrowing the population to include any user who visits, frequents, or is a member of one or more of these groups made the task of sampling possible and the population compatible with the population of messages used in the content analysis phase. This narrowing produced a less representative population than that of all LibraryThing and Goodreads users, limiting the kinds of analysis that could be done of the survey (further details below and in Chapter 4, section 4.2).
Two sampling methods were used to select potential survey participants from this population:
- A purposive sample, consisting of all LibraryThing users who posted a message within the five LibraryThing groups selected for the content analysis phase. The pool of messages included the messages selected for the main sample in the content analysis phase. (Goodreads did not consent to messaging of Goodreads users for this purpose, so Goodreads users were excluded from this sample.)
- A convenience sample, consisting of all LibraryThing and Goodreads users who responded to an invitation to participate posted to each of the nine groups selected for the content analysis phase (procedures detailed in section 3.5.4 below).
All users who met the criteria (having posted a message or responded to the invitation) and human subjects requirements for age (between 18 and 65) were allowed to participate, helping to increase the responses collected and the representativeness (as best as possible) of the results obtained.
A true random sample, even from the narrower population, could not be drawn because the researcher could not generate a complete list of visitors to and members of the selected groups. Obtaining such a list from LibraryThing and Goodreads—or from the group moderators, should they have access to one for their group—would have placed an unreasonable burden on the digital libraries and could have jeopardized their cooperation in and the successful completion of this study. Such a list would also have violated the privacy rights of the members of these groups. A random element was included in the sampling process by using the random groups selected during the content analysis phase, but the sample still lacks much of the representativeness of a true random sample. Users could choose to participate or not, and not all users of the nine groups were guaranteed to see the invitation, making it impossible to infer beyond the sample due to selection bias. One may assume survey respondents are at least moderately representative of the population of users of the nine LibraryThing and Goodreads groups, and so conclusions can be inferred about those users through nonparametric statistics. Further details are given in Chapter 4, section 4.2.
3.5.3. Operationalization of Concepts and Instrument Design
The phenomena of interest for the survey were similar to the phenomena of interest in the content analysis and interview phases of the study: the concepts of boundary objects, translation, coherence, information worlds, social norms, social types, information values, information behaviors or activities, social worlds, organizations, sites, and technologies. Conceptual definitions for these are found in boundary object theory, the social worlds perspective, the theory of information worlds, and the synthesized theoretical framework for social digital libraries (see Chapter 2). For the purposes of the survey and in the context of answering the research questions of this study, these concepts were operationalized through a set of Likert-scaled questions (Brill, 2008; McIver & Carmines, 1981), adapted from the conceptual definitions found in the literature, theories, and synthesis thereof. These questions can be found as part of the survey instrument in Appendix B, section B.1.
Four to six Likert items (Brill, 2008; McIver & Carmines, 1981) for each of the concepts and phenomena of interest were included in the survey. A symmetric five-point scale was used for each item, as is traditional for Likert items (Brill, 2008); five response choices provide higher levels of reliability without offering respondents too many choices (Brill, 2008), and questions can be re-scaled without significant loss of statistical validity (Dawes, 2008). Each item used the following labels for response choices: Strongly Agree (5), Agree, Neutral, Disagree, and Strongly Disagree (1). In analysis, each of the items was assigned a numeric rating (5–1), and the items were combined to form Likert scales for each phenomenon (Brill, 2008; McIver & Carmines, 1981). Statistical analysis checked the internal consistency and reliability of each scale, with items dropped that contributed to lower levels of reliability (see sections 3.5.5 and 3.9 below, and Chapter 4, section 4.2.1). Using at least four items per scale allowed for appropriate statistical analysis to proceed.
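As an illustration of this scoring, the toy Python sketch below (item names and responses are hypothetical) maps the five response choices to numeric ratings and combines a respondent's items into a single scale value, here via the mean as described in section 3.5.5.

```python
# Numeric ratings for the five symmetric response choices.
SCORES = {"Strongly Agree": 5, "Agree": 4, "Neutral": 3,
          "Disagree": 2, "Strongly Disagree": 1}

def scale_score(responses, items):
    """Average the numeric ratings of the items making up one Likert scale."""
    ratings = [SCORES[responses[item]] for item in items]
    return sum(ratings) / len(ratings)

# A hypothetical four-item scale for one concept:
respondent = {"coherence_1": "Agree", "coherence_2": "Strongly Agree",
              "coherence_3": "Neutral", "coherence_4": "Agree"}
print(scale_score(respondent, ["coherence_1", "coherence_2",
                               "coherence_3", "coherence_4"]))  # prints 4.0
```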
Questions were developed, based on the literature and theoretical framework reviewed in Chapter 2, to measure each of the phenomena of interest. Hank et al. (2009, pp. 257–258) provided a list of suggestions for constructing survey instruments and writing questions: ensure questions are answerable, are stated in complete sentences, use neutral and unbiased language, are at an appropriate level of specificity, and are not double-barreled. They suggested participants should not be forced to answer any one question. Fowler (2002, pp. 76–103) included a chapter on designing questions that are good measures in his book on survey research methods. He cautioned researchers to take care that questions are worded adequately; mean the same thing to and can be understood by all respondents; can be answered given the respondents' knowledge and memory; and do not make respondents feel uncomfortable or inclined not to give a true, accurate answer. According to Fowler, researchers should not ask two questions at once. Sapsford (1999, pp. 119–122) agreed and suggested care should be taken to ensure questions are precise, lack ambiguity, are easy to understand, and use colloquial language. The questions developed for the survey in this study, found in Appendix B, section B.1, were developed by the researcher and reviewed by the researcher and his supervisory committee in light of this advice.
An additional set of demographic and usage questions was part of the survey instrument, in a separate section at the end as recommended by Peterson (2000, as cited in Hank et al., 2009, p. 258). These questions allowed for collection of data on other variables of potential relevance to and possible impact on the phenomena of interest, including use of the Internet, LibraryThing and Goodreads, the groups feature of the sites, and other social media and social networking web sites; and demographic factors such as age and gender. These demographic questions can be found in Appendix B, section B.1.
3.5.4. Data Collection Procedures
3.5.4.1. Pretest
The first stage of data collection was to pretest the survey instrument to help ensure its reliability and validity (Hank et al., 2009, p. 259). A convenience sample of graduate students and graduate alumni of Florida State University was invited to pretest the survey and answer a few short, open-ended questions about their experience. Recruitment took place via face-to-face discussion, e-mail, and Facebook messages. All pretesters came from the School of Information; initial attempts were made to have this sample represent multiple departments from the university, but no students from the other departments contacted (Business and Communication) volunteered. Flyers were posted later in the pretest period and the survey opened up via a direct link, to see if undergraduate or graduate students from other departments would be interested, but no responses were received through the link. One School of Information faculty member also volunteered to pretest the survey, and his input was welcomed alongside that of the students. Minor changes were made as a result, reducing the number of questions slightly to reduce perceived repetitiveness and clarifying other questions that pretesters reported getting stuck on. The pretest also helped confirm the expected time to complete the survey.
3.5.4.2. Main survey
The second stage of data collection was to select the samples discussed in section 3.5.2 and send invitations to participate. A couple of weeks before this began, the researcher contacted LibraryThing and the moderators of each Goodreads group to inform them of the beginning of the survey. A staff member from LibraryThing posted a short message in each group to let users know that the research would be taking place and had been given LibraryThing's approval, to ensure invitations were not seen as spam. (LibraryThing required this step as part of their approval of the research; see Appendix E, section E.1.) Goodreads moderators were welcome to inform their groups of the upcoming research.
The purposive sample was drawn from LibraryThing users who posted messages collected during the content analysis phase. Each of these users was sent an invitation letter, included in Appendix A, section A.2.1.1. The private message features of LibraryThing were used to send the invitations to the selected users; while LibraryThing users can include an e-mail address in their profile, not all did so. Reminder invitation letters (Appendix A, section A.2.1.2) were sent two weeks and four weeks after the beginning of data collection to remind individuals who had not completed the survey and to thank users who had. The convenience sample was drawn by posting an invitation, included in Appendix A, section A.2.2, to each of the LibraryThing and Goodreads groups selected during the content analysis phase. This invitation was re-posted to the same groups two weeks and four weeks after the beginning of data collection, to help ensure as many group members and visitors as possible saw it and had a chance to respond. Permission was granted by LibraryThing and Goodreads staff for this method of data collection (see Appendix E, sections E.1 and E.2).
Participants were given a total of six weeks to complete the survey from August 26th, 2013, the date data collection first began for this phase of the study. The survey was expected to take users about 15 to 20 minutes, an estimate supported by the pretesters—who had more subject knowledge—taking between 7 and 16 minutes. The reminders at two and four weeks, the number of visitors to and members of the nine groups, and the number of users directly invited on LibraryThing led to sufficient data for analysis (see Chapter 4, section 4.2), although snowball sampling and other techniques were held in reserve in case they were necessary.
3.5.4.3. Compensation
To encourage participation, compensation was offered in the form of a drawing for one of ten $25 Amazon.com, Barnes and Noble, or Books-A-Million gift cards. These stores were selected because they include the most popular online bookstore—Amazon.com, which acquired Goodreads after this selection was made—and the two most popular brick-and-mortar bookstores (which also have an online presence). Participants were given a choice of which store they would prefer, increasing the potential usefulness of the gift card to them and reducing potential bias created by supporting only one store. Other bookstores are smaller, do not offer online gift cards, or have few locations; offering gift cards from every possible store would present logistical challenges. The e-mail addresses of all participants who completed the survey and included an e-mail address in their response were entered into a Microsoft Excel spreadsheet (maintained under the data management procedures detailed in section 3.8). Gift card codes were e-mailed to ten random e-mail addresses—selected by using Excel's RANDBETWEEN function to generate ten random numbers between 1 and the number of users who took the survey, then selecting those users from the spreadsheet—for the store each selected as preferred; these were sent on November 9th, about one month after the survey was closed. Funds for the gift cards came from a Beta Phi Mu Eugene Garfield Doctoral Dissertation Fellowship, which I gratefully acknowledge.
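For illustration only, the drawing could be scripted as below; the entries are placeholders, and random.sample stands in for the RANDBETWEEN-based selection (with the minor difference that random.sample cannot draw the same respondent twice, while independent RANDBETWEEN draws could in principle collide).

```python
import random

# Placeholder rows; the actual spreadsheet held respondents' e-mail
# addresses and their preferred stores.
entrants = [{"email": f"user{i}@example.com", "store": "Amazon.com"}
            for i in range(1, 201)]

winners = random.sample(entrants, 10)  # ten distinct winners, one card each
for w in winners:
    print(w["email"], "->", w["store"])
```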
3.5.4.4. Online hosting
The survey instrument was hosted online using Qualtrics online survey software, made available by FSU to all students and faculty. An online, Internet-based survey provided the greatest chance of reaching users of LibraryThing and Goodreads in the context of their use of the site and their interactions with other users. It cost less—survey hosting for a questionnaire of any length is provided free by Qualtrics in association with FSU—and took less time than a self-administered paper survey was expected to, while providing for honest answers and requiring less direct researcher involvement compared with an administered paper or telephone survey (Fowler, 2002, pp. 71–74). Participants completed the survey by following a link in the invitation letters; two separate links were used for users of LibraryThing and Goodreads, so that the survey could be personalized to refer to each digital library by name.
3.5.4.5. Consent and follow-up
The first page of the survey included an informed consent statement, included in Appendix A, section A.2.3, which participants had to agree to before they could begin answering the survey questions. As seen in the last few questions in Appendix B, section B.1, participants were asked for their e-mail address for purposes of compensation, if they were interested in participating in a follow-up interview, and if they desired a report of the findings of the research once the study was complete. These e-mail addresses are being kept confidential and are stored in a secure, password-protected encrypted volume, with the password known only to the researcher. Details of data management are discussed in section 3.8.
3.5.5. Data Analysis
The survey results were analyzed using SPSS statistical analysis software running on Windows, accessed through a virtual lab environment supported by FSU. First, the Likert scales were analyzed to determine their internal consistency and reliability via Cronbach's alpha, following the procedures related by George and Mallery (2010). Individual items were dropped from a scale if their removal would increase the Cronbach's alpha (and the reliability) of the overall scale. This procedure and its results are detailed in Chapter 4, section 4.2.1. The average of the remaining items in the scale was then taken, resulting in one value ranging from one to five for each of the concepts being measured. Combined with the demographic variables collected in the final section of the survey, these were analyzed using appropriate, mostly nonparametric statistics including chi-square analysis, Mann-Whitney U tests, median tests, Kruskal-Wallis tests, Wilcoxon signed rank tests, and Kendall's τ correlations (see Chapter 4, section 4.2 for details).
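These checks were performed in SPSS; as a hedged sketch of the underlying procedure, the Python below computes Cronbach's alpha for a set of items and iteratively drops any item whose deletion would raise the scale's alpha. The data layout (a dict mapping item names to lists of respondents' ratings) is an assumption for illustration.

```python
from statistics import variance

def cronbach_alpha(columns):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(columns)
    totals = [sum(vals) for vals in zip(*columns)]  # per-respondent sums
    return k / (k - 1) * (1 - sum(variance(c) for c in columns) / variance(totals))

def drop_weak_items(items):
    """Iteratively drop the item whose removal most improves alpha."""
    items = dict(items)
    while len(items) > 2:
        current = cronbach_alpha(list(items.values()))
        trials = {name: cronbach_alpha([v for n, v in items.items() if n != name])
                  for name in items}
        best = max(trials, key=trials.get)
        if trials[best] <= current:
            break  # no single deletion improves reliability; keep the scale
        del items[best]
    return items

# e.g.: drop_weak_items({"item1": [5, 4, 4, 3], "item2": [4, 4, 5, 3],
#                        "item3": [2, 5, 1, 4], "item4": [5, 3, 4, 3]})
```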
3.6. Interviews
Qualitative interviewing, used in the third phase of this study, is a descriptive and interpretive research method that seeks meaning (Kvale & Brinkmann, 2009). While interviewers may seek basic facts, explanations, and statistics, nuanced explorations and descriptions of phenomena are of core interest. Interviews in qualitative and mixed-methods research projects are used "to understand themes of the lived daily world from the [participants’] own perspectives" (p. 24), through researcher interpretation of "the meaning of the described phenomena" (p. 27). Interviews for research purposes are often seen as a form of "professional conversation" (p. 2; see also Lincoln & Guba, 1985a, p. 268; Sutton, 2010, p. 4388) between the interviewer and the interviewee, on given themes introduced by the interviewer but assumed to be of mutual interest to the interviewee. The two "act in relation to each other and reciprocally influence each other" (Kvale & Brinkmann, 2009, p. 32). Interviewees choose specific instances, examples, or areas within the chosen theme(s) to discuss with the interviewer.
Interviews serve as a source of data on phenomena from the past, present, or (potential) future of interviewees, including "persons, events, activities, organizations, feelings, motivations, claims, concerns, … other entities" (Lincoln & Guba, 1985a, p. 268), and the complex interrelations between all of these. Interviews can help to verify ("member check"), extend, and triangulate data and information already obtained via other methods (Creswell & Plano Clark, 2011; Lincoln & Guba, 1985a). They allow for the gathering of research data when the researcher or his/her colleagues cannot conduct an ethnographic participant observation due to time, location, language, or other constraints (Sutton, 2010).
This dissertation study used semi-structured qualitative interviews employing the critical incident technique (Fisher & Oulton, 1999; Flanagan, 1954; Woolsey, 1986) to explore and describe the phenomena surrounding the roles of LibraryThing and Goodreads, as boundary objects, within and across social and information worlds. Interviews helped find nuances and details that were not possible to determine through the survey questionnaire and were missed, glossed over, or not observable during content analysis. The following sections discuss the strengths of interviews for this study, the chosen unit of analysis, population and sampling procedures, design of the interview instrument, procedures used for conducting the interviews, and data analysis.
3.6.1. Strengths of Interviews
The strengths of qualitative interviews are a good fit with the framework and perspective taken in this dissertation. These strengths are evidenced by the use of interviews in many of the studies of social digital libraries reviewed in Chapter 2 (Bishop, 1999; Bishop et al., 2000; Chu, 2008; Farrell et al., 2009; Marchionini et al., 2003; Star et al., 2003; Van House, 2003; You, 2010) and by their frequent use in studies of social and information worlds and of boundary objects (see Burnett, Burnett, et al., 2009; Burnett, Subramaniam, et al., 2009; Chatman, 1992; Clarke & Star, 2008; Gal et al., 2004; Gibson, 2011, 2013; Kazmer & Haythornthwaite, 2001). Thick, nuanced description of meanings, close to users' thoughts (Forsythe, 2001; Geertz, 1973; Kvale & Brinkmann, 2009), was intended to help expose the social construction of these meanings and of the phenomena of social and information worlds, and it did (see Chapter 4, section 4.3). Since true ethnographic observation would have been difficult to arrange and could have missed the social elements of interest, qualitative interviews were the best choice for returning rich, descriptive data on participants' social and information worlds and the roles LibraryThing and Goodreads play in them. The qualitative interviewing literature states that the technique's flexibility addresses the different contexts interviewees—with varying interests and backgrounds—come from, allowing the interviewer to adjust (Kvale & Brinkmann, 2009; Westbrook, 1997); this proved true in practice here. The rapport developed can create opportunities for future follow-up, longitudinal research with the same participants, exploring the results of this study in greater detail (Westbrook, 1997). Participants' understanding of the roles of LibraryThing and Goodreads in the social and information worlds they are part of is at the core of this study, and obtaining descriptions and perspectives of participants' "lived worlds" and their "understanding of the meanings in their lived world" was an appropriate use of interviews that played to their strengths (Kvale & Brinkmann, 2009, p. 116).
3.6.2. Unit of Analysis
The unit of analysis chosen for the interview phase of the study was the individual user of LibraryThing or Goodreads. These users were understood, as in the survey phase, to be part of one or more social or information worlds, and their participation in and responses to the interview informed analysis of the roles of LibraryThing and Goodreads in their experiences, in these existing worlds, and in the potential emergence of new worlds. As discussed above and in Chapter 2, while individuals were interviewed, the theoretical framework underlying this study allowed for multi-leveled analysis, taking advantage of the strengths of interviews over other methods while minimizing their weaknesses.
3.6.3. Population and Sampling
The broader population of LibraryThing and Goodreads users totals over 26 million people; as with the survey phase of the study, sampling from this large population would have presented major logistical challenges. Given the existing sample of users selected to take the survey, restricting the pool of potential interview participants to this subgroup of the population—a ready-made sampling frame—provided a manageable task, if not anything approaching a true random sample. This method of sampling was appropriate here because data was already available from the survey about these users, their social and information worlds, and the roles LibraryThing and Goodreads may play in them, leading to more insightful interview data.
The interview phase used purposive sampling of users whose survey responses indicated they could provide insightful data on the roles of LibraryThing and Goodreads in existing and emergent social and information worlds. This determination was made by reviewing the content analysis and survey findings and prioritizing the scores and variables of most interest. Users who indicated they would be willing to participate in follow-up research served as the sampling frame, from which participants were chosen with an eye towards obtaining thick description (Geertz, 1973) of the phenomena under study, given other constraints such as time and availability. As interviews continued towards saturation, these criteria were reviewed and revised, and ensuring that interviewees were at least moderately representative of the group of survey participants became a concern. True and complete representativeness is not necessary in qualitative interviewing, but saturation of findings is (Bauer & Aarts, 2000; Gaskell & Bauer, 2000; Westbrook, 1997), and so sampling continued "until further exemplars"—interviewees in this study—"fail[ed] to add new nuances or to contradict what is understood" from the existing collected data (Westbrook, 1997, p. 147). This sampling method was chosen to obtain data to answer the research questions—from the interviews and in combination with findings from the other two methods—and to provide an accurate representation of LibraryThing and Goodreads in the context of the communities of users from the nine groups selected at the beginning of the content analysis phase.
Participants selected because they were expected to provide insightful data through an interview were invited to take part via the e-mail addresses they provided when confirming their willingness to be interviewed. The letter prospective interviewees were sent is in Appendix A, section A.3.1. An initial sample of six prospective interviewees—three from each digital library—was e-mailed first, so that interviews could be arranged within a week or two of the contact date and not forgotten by participants if scheduled too far in advance. Further prospective participants were invited every week or two thereafter, when necessary to increase the sample size. If and when selected users did not respond to the initial request, a second request was made one to two weeks later, except at the end of interview data collection when saturation had been reached. New users replaced the original ones in the sample if the latter did not respond after two to three weeks.
3.6.3.1. Pretest
Prior to collection of actual interview data, the interview instrument and procedures (discussed in the next two sections) were pretested with an additional convenience sample of two FSU School of Information alumni and one FSU School of Information faculty member who had helped pretest the survey. The procedures were identical to those discussed below for the main interview phase. Pretesting allowed for refinement of the instrument and procedures, helped ensure the questions would be understandable to a broader population, and permitted any necessary adjustments to the sampling method for the main interviewing process. No transcription or data analysis from the pretest took place, and the audio recordings made to test procedures were used only to refine the interview instrument and procedures; they were deleted once the main interviews began. No specific changes were made to the instrument, although the potential need for additional prompting in association with a few questions was observed; quirks and foibles of the recording software were discovered, leading to tighter and more careful following of recording steps for the main set of interviews.
3.6.4. Instrument Design
The interviews were semi-structured; they used an instrument as a guide, but were treated as conversations guided by the interviewer’s questions and the interviewees’ personal responses and reflections (Kvale & Brinkmann, 2009; Lincoln & Guba, 1985a). The instrument, included in Appendix C, provided pre-planned questions and themes, but additional follow-up questions and prompts not included in the instrument emerged from each conversation and its natural progression. This allowed key themes related to the research questions to be discussed and focused on without restricting the interview to a set of questions fixed in advance (cf. Suchman & Jordan, 1990).
Key themes explored in the interviews included:
- participants’ use of LibraryThing or Goodreads, focusing on use as a boundary object;
- the social and information worlds of participants, and their relationship to LibraryThing or Goodreads;
- the characteristics of these social and information worlds—their social norms, social types, information values, information behaviors, activities, organizations, sites, and technologies—and their impact on the user and their use of LibraryThing or Goodreads;
- translation between, coherence across, and convergence of social and information worlds, via LibraryThing or Goodreads; and
- the emergence of new social or information worlds through translation, convergence, or related activities and behaviors of LibraryThing or Goodreads users.
Focusing on critical incidents (Fisher & Oulton, 1999; Flanagan, 1954; Woolsey, 1986), times when users interacted with others using the LibraryThing or Goodreads digital libraries, helped provide a rich environment and context for exploring these themes in detail with each interviewee. The degree to which individual interviewees focused on the critical incident versus the broader spectrum of their use varied, but this was accepted as a natural, emergent element of the interviews, and follow-up questions and prompts were used to ensure sufficient data was elicited on the incidents. The questions included in the instrument, and the prompts and follow-ups used, drew from the advice set down by Kvale and Brinkmann (2009, pp. 130–140) in their discussion of scripting interviews and types of interview questions, including:
- introducing themes before asking detailed questions;
- focusing on descriptions of what occurred and how during critical incidents, instead of why it happened (at least to begin with);
- following up on responses as appropriate;
- seeking projection of interviewees’ opinions or the opinions of others in their social and information worlds; and
- checking the researcher’s interpretation of previous findings and interview responses.
3.6.5. Data Collection Procedures
As mentioned in section 3.6.3.1 above, prior to collection of actual interview data the interview instrument and procedures were pretested with two FSU School of Information alumni and one FSU School of Information faculty member.
3.6.5.1. Preparation and recording
After participants agreed to be interviewed by replying to the invitation discussed in section 3.6.3, a specific date and time was arranged for each interview. Since no participants were located close to Tallahassee (and few were expected to be), face-to-face interviews would have been difficult to accomplish. For this reason, it was planned that interviews would take place using online audiovisual media, as is popular in studies of "Internet-based activity … where the research participants are already comfortable with online interactions" (Kazmer & Xie, 2008, pp. 257–258). Interviewees were offered a choice of Skype (skype.com), Google Hangouts (accessible via plus.google.com), Apple FaceTime (apple.com), or telephone. Interviews were audio recorded with interviewee permission; GarageBand (apple.com/ilife/garageband) and Soundflower (cycling74.com/products/soundflower) software were used to record Skype and Apple FaceTime calls, while telephone calls were recorded via Google Hangouts, Google Voice (voice.google.com), GarageBand, and Soundflower. No users chose Google Hangouts, and more than expected chose telephone calls; while online audiovisual media were the intended plan, interviewees’ preferences were accommodated, and this did not cause any major issues with collecting interview data.
The interviewer took any notes he felt necessary on his impressions of the interview as soon as it had concluded, so as not to distract the interviewee with note taking while still capturing the interview process accurately. Most interviews took between 40 and 55 minutes; full details are given in Chapter 4, section 4.3. These procedures allowed data equivalent to or richer than that from face-to-face interviews to be gathered, minimizing the potential weaknesses of a non-traditional interview setting while maintaining the strengths of synchronous interviews (Kazmer & Xie, 2008).
3.6.5.2. Introduction and informed consent
The interview process began with introductions, thanking the interviewees for participating, explaining the logistics of the interview, and ensuring that informed consent was obtained. Since obtaining written consent in person was not possible, participants were e-mailed a link to a page (the content of which is shown in Appendix A, section A.3.2) requesting their consent for the interviews, including the interview informed consent form, a couple of days before the interview. (This used the same FSU-partnered Qualtrics system as the survey.) I asked interviewees to review this page and raise any questions they had. Before the interview recording began, consenting participants clicked an "I consent" button at the bottom of the page; some did this before audio or video contact was made, while others waited until I directed them there just before the interview began. I then reviewed "the nature and purpose of the interview" with the interviewee, to ensure they knew the overall theme and topic of discussion (Lincoln & Guba, 1985a, p. 270). Prior to the critical incident portion of the interview, I asked a general, "grand tour"-type question (with follow-up prompts as necessary) to explore participants’ use of LibraryThing or Goodreads, the reasons for this use, and the groups they participate in.
3.6.5.3. Critical incident technique
The largest portion of the interview employed the critical incident technique, a flexible interviewing technique intended to obtain "certain important facts concerning behavior in defined situations" (Flanagan, 1954, p. 335). First developed for use in aviation psychology, it has become a popular interviewing technique in the social sciences, education, and business, including LIS (Butterfield, Borgen, Amundson, & Maglio, 2005; Fisher & Oulton, 1999; Urquhart et al., 2003; Woolsey, 1986). It is often used in exploratory research to build theories, models, or frameworks for later testing and refinement, as typified by Savolainen’s (1995) research establishing his Everyday Life Information Seeking (ELIS) model. Flanagan (1954) outlined five main stages in the technique. The first two stages provide operational definitions and structure for the interviews, and have been discussed in the sections above. The fourth and fifth, procedures for analysis and interpretation of the data gathered from interviews, are discussed in sections 3.6.6 and 3.7 below.
The third stage is the actual collection of a critical incident from each interviewee. In a critical incident interview, after initial introductions and formalities, the interviewer asks the interviewee to recall an incident where given situation(s) or behavior(s) occurred, as defined during the previous stages. Per Flanagan (1954), these incidents should be recent enough to ensure participants have not forgotten the details of them. Specific language is used to get interviewees to think of such an incident. In this study, the following language was used, with slight changes incorporated in the context of a given interview:
Now I’d like you to think of a time within the past few weeks where you interacted with others, either people you already knew or people you did not know, while using [LibraryThing / Goodreads]. (Pause until such an incident is in mind, or gently prompt the interviewee if they have trouble recollecting one.) Could you tell me about this interaction and how it came about?
This initial question allowed interviewees to refresh their memory of the incident by going over it in their mind, and provided data on their overall impressions of the interaction and how it came about. After this initial discussion, I guided the conversation with gentle prompts and follow-up questions designed to steer the conversation about the incident to the themes mentioned in section 3.6.4 above. Main questions were included in the interview instrument (see Appendix C ); prompts were not. All questions and prompts were aimed at eliciting "the beliefs, opinions, … suggestions … thoughts, feelings, and [reasons] why participants behaved" that way during their interaction (Butterfield et al., 2005, p. 490), in the context of LibraryThing or Goodreads and the social and information worlds at play in the incident.
3.6.5.4. Finishing up
Once the critical incident had been explored at length, the interview concluded with final questions intended to help validate and generalize the findings obtained from the critical incident portion of the interview, a process often called "member checking" (Lincoln & Guba, 1985a). I gave an overall impression of the role or roles I felt LibraryThing or Goodreads played in the incident and in the interviewee’s overall use of the site, and asked whether that impression seemed correct to the interviewee or, if they responded before I could get to that part, engaged them in further reflective conversation. Interviews confirmed whether the incidents participants shared matched their overall experiences. Each interview concluded with me thanking the interviewee for their time and participation and answering any questions they had (as a couple did about where the research was going or when they would hear about the overall findings). As mentioned above, as soon as each interview was over I took time to write up any notes I felt were necessary, to capture any elements of the experience that risked being lost to fading memory. Interviewees were thanked again for their participation and help via e-mail follow-ups a few days to a week later.
3.6.6. Data Analysis
All interview audio was transcribed by the researcher, who used Audacity software (audacity.sourceforge.net) to play back the interviews and Microsoft Word to enter the transcriptions. Passages that were difficult to understand could be slowed down or amplified using Audacity’s built-in features; its noise reduction features were helpful for one or two interview recordings. Any notes not already in digital form were transcribed. All notes, audio, and transcriptions were stored as discussed in section 3.8.
Data analysis proceeded in a similar fashion to the content analysis phase of the study. Transcripts and notes were imported into NVivo 10 qualitative analysis software, which was used to look over each file and assign codes to sentences and passages. As with the earlier qualitative method, the codes assigned drew from boundary object theory, the social worlds perspective, and the theory of information worlds, which served as an interpretive and theoretical framework for analyzing the meaning of interview responses; the codes can be found in section 3.7 below. Open codes not included in the list but judged to be emergent in the data and relevant to the study’s purpose and research questions could be assigned during the coding process, as recommended by Charmaz (2006) and Kvale and Brinkmann (2009, p. 202), among others; these included open codes from the content analysis phase. Measures to ensure the trustworthiness of the data and analysis were taken as discussed in section 3.9.
3.7. Qualitative Data Analysis
All qualitative data—consisting of the messages collected for the content analysis and transcripts and notes from the interviews—were imported into NVivo 10 qualitative analysis software, which was used to look over each transcript and assign codes.
For analysis, an approach similar to grounded theory (Charmaz, 2006; Strauss & Corbin, 1994) and its constant comparative method was taken, but without the same focus on open coding. Codes were first applied to sentences in messages or in participants’ interview responses (as transcribed). Only the lowest, most detailed level of codes, as presented in the codebook (sections 3.7.2 and 3.7.3 below), was applied. Two exceptions to sentence-level coding were allowed. For the content analysis phase, up to two codes could be applied to an entire message if there was clear evidence for them throughout the message. For the interview phase, up to two codes could be applied to a paragraph, answer to a question, or short exchange (no more than half a page) if there was clear evidence for them throughout the paragraph, answer, or exchange. No other exceptions were allowed: codes could not be applied to units smaller than sentences (to provide sufficient context), and a code spanning multiple messages, answers, or exchanges had to be applied to each individually. Memos and annotations were made to explain any cases where codes were applied across multiple sentences within a message or interview transcript at once, and to explain codes in greater detail where deemed necessary; a general rule of "if in doubt, add an annotation" was followed throughout analysis. These rules were refined and clarified after initial pilot testing, details of which are given in section 3.7.1 below.
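To make these unit rules concrete, the following is a minimal, illustrative sketch in Python of how a single code application could be checked against them. The function, unit names, and code labels are all hypothetical; the actual coding was performed by hand in NVivo, not programmatically.

```python
# Hypothetical illustration of the unit-of-coding rules described above;
# the actual analysis was done by hand in NVivo, not with code like this.

ABOVE_SENTENCE_UNITS = {"message", "paragraph", "answer", "exchange"}

def coding_allowed(unit, codes, evidence_throughout, annotated):
    """Check one code application against the unit rules."""
    if unit == "sentence":
        return True  # the normal case: codes applied to single sentences
    if unit in ABOVE_SENTENCE_UNITS:
        # Exception: at most two codes for a whole message (content analysis)
        # or paragraph/answer/short exchange (interviews), with clear evidence
        # throughout and a memo or annotation explaining the application.
        return len(codes) <= 2 and evidence_throughout and annotated
    return False  # units smaller than a sentence were never coded

print(coding_allowed("sentence", ["social norms"], True, False))             # True
print(coding_allowed("message", ["translation", "coherence"], True, True))  # True
print(coding_allowed("message", ["a", "b", "c"], True, True))               # False
```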
After initial analysis, higher levels of analysis looked at the coding in the context of paragraphs, entire messages, message threads, and larger portions of interview transcripts, considering these in light of other threads, messages, and interviews. Throughout the coding and analysis process, consideration of social and information worlds was explicitly multi-leveled: worlds of multiple sizes, shapes, and types were considered throughout the processes of collecting and analyzing data. The boundaries of these worlds, and where these worlds fell on the continuum of existing and emergent worlds, were considered emergent from the data, based on the conceptual, theoretical, and operational definitions given in earlier sections and in the coding scheme below. Memos and annotations were provided to explain the levels of social and information worlds under consideration, especially when boundary-related codes were applied.
The search, query, and report features of NVivo were used in further analysis and the writing of sections 4.1 and 4.3 of Chapter 4 . While messages and individual interviews (as the units of analysis) and sentences within them were coded as individual units, higher level units—passages, threads, groups, social and information worlds, and LibraryThing and Goodreads—were considered as the analysis proceeded. This allowed findings and conclusions to be drawn at multiple levels, as can be seen in Chapters 4 and 5 .
3.7.1. Pilot Testing and Resulting Changes
Pilot testing of the coding scheme and analysis procedures was conducted prior to the content analysis phase. Two fellow FSU iSchool doctoral students, each with basic familiarity with the theories incorporated into the theoretical framework used here, were recruited to test intercoder reliability. Each student volunteer was provided with a "quick reference" version of the coding scheme in sections 3.7.2 and 3.7.3 below; the final version, used by the researcher as a guide for analysis, is included in Appendix D. Pilot test coders were given a summary of the coding rules and guidelines discussed herein. The second volunteer discussed the coding scheme, rules, and guidelines at some length with the researcher—including some brief practice coding—before coding began, and both volunteers took part in debriefing sessions with the researcher after coding had been completed. The researcher and the first volunteer coded the messages selected for the pilot test of the content analysis phase: 120 messages, 60 each from one LibraryThing group and one Goodreads group. Changes were made after this coding cycle based on intercoder reliability statistics—using Cohen’s (1960) kappa as calculated by NVivo—and qualitative and holistic analysis of the results, and a second cycle proceeded. Further changes were made after this second cycle.
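For reference, Cohen’s (1960) kappa corrects the observed agreement between two coders for the agreement they would be expected to reach by chance. NVivo performed the actual calculation during the pilot test; the minimal sketch below, with hypothetical code labels, only illustrates the statistic itself.

```python
# A minimal sketch of Cohen's (1960) kappa; NVivo performed the actual
# calculation during the pilot test. The code labels are hypothetical.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: summed products of each code's marginal proportions.
    chance = sum((freq_a[c] / n) * (freq_b[c] / n)
                 for c in set(coder_a) | set(coder_b))
    return (observed - chance) / (1 - chance)

researcher = ["social norms", "translation", "social types", "translation"]
volunteer  = ["social norms", "coherence",   "social types", "translation"]
print(round(cohens_kappa(researcher, volunteer), 2))  # 0.67 for this toy data
```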
Changes were made to address weaknesses identified in the original procedures, coding scheme, and theoretical framework, to help ensure theoretical and operational clarity. Changes made after the first cycle were as follows:
- Codes were only to be applied at the sentence level, with the two exceptions mentioned earlier.
- Memos and annotations were stressed, especially to explain codes applied at levels higher than the sentence and to explain coding in greater detail where deemed necessary.
- Boundaries of worlds were to be considered emergent from the data, with memos and annotations recommended to explain the level of social and information worlds under consideration.
- Definitions for all concepts were refined and tightened.
- Cases where social norms or information value had broad application, across substantial parts of a thread or interview, were to be memoed or annotated instead of coded, since coding every unit was seen to be of less use for later analysis.
- The information behavior code was tightened to consider only behavior that was normative at some level and to exclude general occurrences of information behavior, since under the latter interpretation whole threads and interviews could be coded.
- If it was unclear whether a new world—of any size or scale—had truly emerged, memos and annotations were recommended to express the degree of confidence.
- Three subcodes were added to account for different cases of LibraryThing or Goodreads acting as a standard boundary object: as an emergent site, an emergent technology / ICT, or another type of emergent boundary object.
After the second cycle of coding, and following discussion among the researcher and multiple committee members, further changes were made:
- The distinction between existing and emergent was stressed to lie along a continuum, and to be a phenomenon that would emerge from the research data, similar to the size and shape of the worlds and their boundaries. Memos and annotations were further stressed to elaborate on where given cases fall on this continuum.
- The codes and procedures were acknowledged to be complex and to use theories that had not been combined in previous research; the theoretical framework is emergent. As such, intercoder reliability statistics—run using Cohen’s (1960) kappa after each coding cycle of the pilot test and initially planned for a portion of the interview data—were considered a less appropriate measure of the potential trustworthiness, credibility, transferability, dependability, and confirmability of the findings than originally thought. Both pilot tests showed that reaching high statistical levels of intercoder reliability would require extensive training of other coders—difficult if not impossible in dissertation research—and much fine-tuning of rules and procedures, fine-tuning that does not fit the interpretive and social constructionist paradigms in use for this research. Other techniques for ensuring qualitative trustworthiness (Gaskell & Bauer, 2000; Lincoln & Guba, 1985), already built into the study (see section 3.9.3), would now be emphasized alongside intracoder reliability checking at the conclusion of the study; results of the latter are included in Chapter 4.
The following sections present the coding scheme used for each research question, as revised after the pilot testing. Section 3.7.2 includes the codes focusing on existing social and information worlds (RQ1), while section 3.7.3 includes the codes focusing on emergent social and information worlds (RQ2). The distinction between existing and emergent was treated as along a continuum, where the degree to which a world is existing or emergent was allowed to emerge from the research data. Frequent memos and annotations were made on this during analysis. An operational definition is given for the concept each code represents, as used in the coding and analysis of data from the content analysis and interviews phases. These definitions come from the literature review presented in Chapter 2 and the theories and theoretical framework described therein, with contributions from definitions in the Oxford English Dictionary’s online version (oed.com) where necessary and appropriate. A summarized version of the coding scheme, used as a quick reference during coding and analysis, is included as Appendix D.
3.7.2. Existing Worlds
3.7.2.1. Translation
Star and Griesemer (1989) defined translation as "the task of reconciling [the] meanings" of objects, methods, and concepts across social worlds (p. 388) so people can "work together" (p. 389). Multiple translations, gatekeepers, or "passage points" can exist between different social worlds (p. 390). This was operationalized as the process of reconciliation and translation of meanings—taken to include understandings—between different people, social worlds, or information worlds.
3.7.2.2. Coherence
While Star and Griesemer (1989) never gave coherence an explicit, glossary-style definition, it can be conceptualized as the degree of consistency between different translations and social or information worlds. Boundary objects play a critical role "in developing and maintaining coherence across intersecting social worlds" (p. 393). Coherence was operationalized using the common characteristics of social and information worlds, coded under the definitions given below. Coding took place at the level of these characteristics, not for coherence in general.
Social norms: Burnett, Besant, and Chatman (2001, p. 537) defined social norms as the "standards of ‘rightness’ and ‘wrongness’ in social appearances" that apply in an information world. Jaeger and Burnett (2010, p. 22) restated this as "a world’s shared sense of the appropriateness—the rightness or wrongness—of social appearances and observable behaviors." Drawing from these, social norms were operationally defined as the common standards and sense of appropriate (right or wrong) behaviors, activities, and social appearances in an information world. In some cases, a substantial part of or an entire thread or interview could be seen as socially normative, but in those cases the social norms code was not applied to every message or sentence, as doing so would not be of much use for later analysis. Instead, a memo or annotation was made to note and discuss the application of social norms to large parts of a thread or interview.
Social types: Burnett et al. (2001, p. 537) defined social types as "the [social] classification of a person." Jaeger and Burnett (2010, p. 22) elaborated on this, stating social types are "the ways in which individuals are perceived and defined within the context of their [information] world." This was operationalized following the latter definition, and included explicit and implicit roles, status, and hierarchy.
Information value: Jaeger and Burnett (2010, p. 35) defined information value as "a shared sense of a relative scale of the importance of information, of whether particular kinds of information are worth one’s attention or not." Such values may include, but are not limited to, "emotional, spiritual, cultural, political, or economic value—or some combination" (p. 35). Values may be explicit and acknowledged, or implicit within message content or interview responses. A succinct operational definition, used in this study for coding, is that information value is a shared sense, explicit or implicit, of the relative scale of the importance—emotionally, spiritually, culturally, politically, and/or economically—of information and whether it is worth attention. As with social norms, if a substantial part of or an entire thread or interview was seen as expressing the shared information values of a world, the code was not applied to every message or sentence; instead a memo or annotation was used.
Information behavior and activities: Burnett and Jaeger (2008, "Small worlds" section, para. 8) defined information behavior as "the full spectrum of normative [information] behavior … available to members of a … world"; this was restated in different words by Jaeger and Burnett (2010, p. 23). Information behavior can include seeking, searching, sharing, or use of data, information, or knowledge; communication and interaction; and avoidance of data, information, or knowledge. Strauss (1978) did not provide an explicit definition of activities, but his use of the word within the social worlds perspective corresponds with one of its senses in the Oxford English Dictionary: "something which a person, animal, or group chooses to do; an occupation, a pursuit" ("Activity," 2012). A slight restriction was placed on this operationally, that the "something" should have an informational component (with information construed to include data and knowledge). Operationally, this code was used to identify occurrences of normative, chosen information behavior and information-based occupations or pursuits—defined broadly—by members of a world. Such behavior had to be normative at some level to be coded, and general occurrences of information behavior were not coded, since under such an interpretation whole threads and interviews could be construed as such.
Organizations: Strauss (1978) stated social worlds may have "temporary divisions of labor" at first, but "organizations inevitably evolve to further one aspect or another of the world’s activities." This sense is similar to the definition of an organization as "an organized body of people with a particular purpose" found in the Oxford English Dictionary ("Organization," 2012). A combination of the two was used for operational coding: organizations are organized, but possibly temporary, bodies with the particular purpose of furthering one aspect or another of the world’s activities.
3.7.2.3. Boundary object
Codes were applied for treatment of the digital library as a boundary object. This was operationalized by coding passages where the digital libraries cross the boundaries between multiple existing social or information worlds and are used within and adapted to many of them "simultaneously" (Star & Griesemer, 1989, p. 408) while "maintain[ing] a common identity across sites" (Star, 1989, p. 46). Instances of the boundary object’s use as a common site and information and communication technology (ICT) were coded using the definitions below. Coding took place at the level of these characteristics, not for boundary objects in general.
Common site: Strauss (1978) related sites to "space and shaped landscape"; the term’s use under the social worlds perspective corresponds to this sense given in the Oxford English Dictionary: "a position or location in or on something, esp. one where some activity happens or is done" ("Site," 2012). This location may be a physical, virtual, or metaphorical space, as seen in many of the concepts of community reviewed in section 2.2. A succinct operational definition, used for coding, is that sites are spaces, positions, or locations—physical, virtual, or metaphorical—where information-related activities and behaviors take place.
Common information and communication technologies (ICTs): Strauss (1978) defined technology as "inherited or innovative modes of carrying out the social world’s activities" (p. 122). ICTs are often referred to in the literature of LIS, knowledge management, education, and other fields without explicit definition, and there is no one historical source all uses stem from. Remaining compatible with most of this literature and adapting from the definitions of Strauss (1978) and the Oxford English Dictionary ("Technology," 2012), ICTs were operationalized for coding purposes as inherited or innovative processes, methods, techniques, equipment, or systems—developed from the practical application of knowledge—used for carrying out information or communication-related behaviors and activities.
3.7.3. Emergent Worlds
3.7.3.1. Convergence
Convergence is seen in a similar light to coherence, defined above as the degree of consistency between different translations and social or information worlds. Convergence was operationalized through the emergence of common characteristics in new social and information worlds (or proto-worlds), coded under the definitions given in section 3.7.2.2 above for social norms, social types, information value, information behaviors / activities, and organizations. Coding took place at the level of these characteristics, not for convergence in general, and was kept separate from the coding of these characteristics under coherence. If it was unclear whether a new world—of any size or scale—had truly emerged, memos and annotations were made to express the degree of emergence seen in the data.
3.7.3.2. Boundary object as standard
Treatment of LibraryThing and Goodreads as a new, local standard for a new, emergent social or information world was coded in this category, to distinguish it from treatment of the digital libraries as boundary objects within and across existing information worlds (section 3.7.2.3). This was operationalized under three subcodes, at which all coding took place:
Emergent site: Under the definition of sites given above, cases of LibraryThing or Goodreads serving as an emergent, standard, and influential space, position, or location for information-related activities and behaviors were coded here. Clear evidence of the digital library serving as a new standard site for an emergent world was necessary. This code could be applied alongside the "emergent technology / ICT" code below, and in many cases this happened.
Emergent technology / ICT: Under the definition of technologies given above, cases of LibraryThing or Goodreads providing emergent and standard processes, methods, techniques, equipment, or systems—developed from the practical application of knowledge—used for carrying out information or communication-related behaviors and activities in an emergent world were coded here. Clear evidence of the digital library providing or serving as a new standard technology within an emergent world was necessary. This code could be applied alongside the "emergent site" code above.
Emergent boundary object: Cases where LibraryThing or Goodreads served as an emergent, standard boundary object, but not as a site or technology, were coded here. Clear evidence of the digital library serving such a role was necessary, as was clear evidence that it was not serving as a site or technology. This code was expected to be rare, and in practice it was: it was applied only a few times in the content analysis and not at all in the analysis of the interviews. It was included to ensure all cases of LibraryThing or Goodreads serving as a new, standardized boundary object were captured. This code was considered mutually exclusive with the "emergent site" and "emergent technology / ICT" codes above.
3.8. Data Management
I have kept all data from this study in digital format on my personal laptop computer. Survey data was kept in Microsoft Excel (.xls/.xlsx) format, interview audio in .mp3 format, and messages and interview transcripts in Microsoft Word (.doc/.docx) format. A password protected and encrypted disk image was created and used for all dissertation data, the password known to the researcher but no one else. Within this image, separate folders were created for each phase of the study. All data analyzed using the coding scheme discussed in section 3.7 above—including messages, interview transcripts, and notes—was also kept in an NVivo project (.nvp) file at the top level within the image. This disk image will be kept until the date arrives for destruction of records from this dissertation.
Filenames for data served and continue to serve as metadata, reflecting the source of the data (participant pseudonym or group name for individual data, phase name for collated results), the date it was collected, the digital library the data refers to (LibraryThing or Goodreads), and the type of data it represents (e.g., thread, survey response, interview transcript, interview notes, preliminary analysis). For example, bob_GR_transcript_022914.doc could be the filename for the transcript—in Microsoft Word format—of an interview with "Bob," a Goodreads user, conducted on the fictional date February 29, 2014. Three additional spreadsheets (in Microsoft Excel format) were created to provide metadata. Two—one for LibraryThing and one for Goodreads—link participants’ names and e-mail addresses to their pseudonyms; the third tracked survey data for interviewees, and was used during interview recruitment to help determine who would be invited to participate.
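As an illustration, the convention is regular enough to be parsed mechanically. The sketch below is hypothetical (no such script was part of the study), and the "LT" abbreviation for LibraryThing is an assumption by analogy with the "GR" shown in the example.

```python
# Hypothetical sketch of parsing the filename convention described above;
# no such script was used in the study. "LT" is assumed by analogy with "GR".

def parse_filename(filename):
    """Split a data filename into its metadata fields."""
    stem, _, extension = filename.rpartition(".")
    source, library, data_type, date = stem.split("_")
    return {
        "source": source,      # participant pseudonym or group name
        "library": {"LT": "LibraryThing", "GR": "Goodreads"}.get(library, library),
        "type": data_type,     # e.g., thread, transcript, notes
        "date": date,          # MMDDYY, kept as text (the example date is fictional)
        "format": extension,
    }

print(parse_filename("bob_GR_transcript_022914.doc"))
# {'source': 'bob', 'library': 'Goodreads', 'type': 'transcript',
#  'date': '022914', 'format': 'doc'}
```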
Encrypted and password-protected backups of all research data have been made on a weekly basis (with rare exceptions due to travel) onto an external hard drive kept at the researcher’s home. Additional encrypted and password-protected backups have been and will be made onto recordable CDs or DVDs, kept in a filing cabinet belonging to the researcher in the Shores Building on FSU’s main campus or, once the researcher leaves FSU, in a similar secure work location. All research data for this study, including backups, will be deleted and destroyed by April 30, 2019 (fewer than five years from the completion of the study). Appropriate excerpts from the data (using pseudonyms) and synthesized data analysis, findings, and conclusions—including the completed dissertation, journal articles, and conference papers—may be shared with other researchers, scholars, and the general public up to and beyond that date. Future research data and findings building on the data collected and conclusions drawn during this study may be shared with other researchers, scholars, and the general public, subject to restrictions put in place by the researcher’s home institution and funding source(s) at the time of such research.
3.9. Validity, Reliability, and Trustworthiness
3.9.1. Holistic: Mixed Methods and Case Studies
The validity and reliability of mixed methods studies can be assessed in two ways (Creswell & Plano Clark, 2011). One can look at the research as a whole, considering the study’s design, interrelations, and how everything fits together to ensure high levels of validity and reliability. Towards this view, Creswell and Plano Clark provided a list of potential validity threats in mixed methods research and strategies for minimizing these threats (pp. 242–243), which have been followed throughout the design and execution of this research.
Yin (2003) provided similar guidance for case study designs, summarized in his Figure 2.3 (p. 34). Each of these has been implemented in this study as follows:
"Use multiple sources of evidence": Three different methods of data collection have been used, each sampling across different groups and users from LibraryThing and Goodreads.
"Establish chain of evidence": The methods were linked together and informed each other. Data from content analysis helped inform the survey instrument, while the content analysis and survey data helped inform the interview instrument, process, and analysis. Data from all three methods has been tied together in the overall findings and conclusions from the study (see Chapter 5 ).
"Have key informants review draft case study report": While this specific technique was not used, I confirmed with interviewees that my impression of the critical incident they shared was accurate prior to the conclusion of each interview. Participants who requested a report of the findings on completion will receive one within a few weeks after defense of this dissertation.
"Do pattern-matching": Here Yin refers to looking for "several pieces of information from the same case [that] may be related to some theoretical proposition" (p. 26). This study achieved this by maintaining a consistent focus on the same phenomena throughout all three phases and using the same themes—based on the theoretical framework developed in section 2.8 —for coding the messages (in the content analysis phase) and interview transcripts (in the interview phase).
"Do explanation-building": Here Yin refers to establishing a cause-and-effect relationship between patterns in data and theoretical propositions. The pattern-matching above, combined with the theoretical framework discussed in section 2.8 and the philosophical and epistemological viewpoint provided by social informatics and social constructionism, allowed such explanations to be developed through synthesis of data from all three phases (see Chapter 5 , sections 5.1 and 5.2 ).
"Address rival explanations": While I admit favoring the theories used in the theoretical framework developed in section 2.8 , other theories related to communities, collaboration, information behavior, and knowledge management—reviewed elsewhere in Chapter 2 —could have provided a better explanation. The existing literature in these areas and my knowledge of them is used in later sections of Chapter 5 to address possibilities beyond the theoretical framework that relate to the findings seen here.
"Use logic models": Due to limitations of this study (see Chapter 5 , section 5.7 ), a visual model may be premature at this point. I may develop figures, diagrams, and other visual aids to help present the findings as part of posters, conference papers, journal articles, and research presentations.
"Use theory in single-case studies; use replication logic in multiple-case studies": While this is a multiple-case design, only two cases are considered here. Theory—the theoretical framework in section 2.8 —and replication logic—multiple groups and two digital libraries—have played important roles in the design and execution of this dissertation study.
"Use case study protocol": Constraints placed on procedures by the two sites were unavoidable, but where possible the same procedures were used for LibraryThing and Goodreads. Messages were collected and analyzed the same way; surveys distributed, collected, and analyzed the same way; and interviews followed the same themes and procedures. The extra requirement to obtain the consent of group moderators put in place by Goodreads prior to collecting messages and survey responses from users of that digital library did not cause great differences in the data collected or its comparability with that from LibraryThing groups. The researcher took care to document the study as it proceeded, including deviations in procedures that became necessary; the most notable of these was the need to vary the intended statistics and accept greater limitations on the survey results than were at first intended, as discussed above and in Chapter 4 , section 4.2 .
"Develop case study database": Given few cases in this study, a formal database was not constructed. The data management procedures discussed in section 3.8 and NVivo qualitative analysis software—which runs on a Microsoft SQL Server database—provided similar benefits to Yin’s recommendation here.
While holistic consideration of validity and reliability is useful, a second approach is necessary: examining the validity and reliability of each phase of a mixed-methods study—quantitative and qualitative—as an individual method. Each type of research has "specific types of validity checks" to perform (Creswell & Plano Clark, 2011, p. 239), since—despite the continuum mentioned by Ridenour and Newman (2008)—different methods require different measures of their reliability and validity. The two sections below take this approach and apply it to the quantitative—survey—and qualitative—content analysis and interview—phases of the dissertation study conducted here.
3.9.2. Quantitative: Survey
Validity and reliability for quantitative research are given substantial treatment in research methods textbooks, such as Schutt (2009, pp. 130–141) and Babbie (2007, pp. 143–149). The validity of the survey data can be broken down by the different types of validity these and other authors identify as used for quantitative research:
- Face validity (Babbie, 2007, p. 146; Schutt, 2009, p. 132): Given that the survey questions were developed from the theories discussed in Chapter 2 and the theoretical framework developed in section 2.8, each of which has face validity, the questions are judged to have met face validity for measuring the phenomena in question.
- Measurement validity (Schutt, 2009, pp. 130–132): The survey questions were looked over by the researcher and his supervisory committee to ensure they did not suffer from idiosyncratic errors due to lack of understanding or unique feelings; from generic errors caused by outside factors; or from method factors such as unbalanced response choices or unclear questions. Attention paid to other kinds of validity helps improve measurement validity.
- Content validity (Babbie, 2007, p. 147; Schutt, 2009, p. 132): Using multiple scales and multiple questions per scale helped the questions cover "the full range of [each] concept’s meaning" (p. 132) and the full range of the roles of LibraryThing and Goodreads in the social and information worlds of their users. The content analysis and interviews provided data from fewer users, but much thicker description of the phenomena of interest, as one would expect from qualitative research methods.
- Criterion validity (Babbie, 2007, pp. 146–147; Schutt, 2009, pp. 132–134): This is difficult to measure here because no survey-based measures are known to have been developed for the theory of information worlds or boundary object theory prior to this study, and the social worlds perspective makes rare use of surveys. Schutt stated that "for many concepts of interest to social scientists, no other variable can reasonably be considered a criterion" (p. 134); Babbie (2007, p. 147) advocated using construct validity in these cases instead. Fowler (2002, p. 89) made a similar argument for questions "about subjective states, feelings, attitudes, and opinions," believing "there is no objective way of validating the answers … [they] can be assessed only by their correlations with other answers," through construct validity.
- Construct validity (Babbie, 2007, p. 147; Schutt, 2009, pp. 134–135): Most of the measures used in the survey correlated significantly with each other, as one would expect given their relations to each other in the social worlds perspective and the theory of information worlds.
- Reliability (Babbie, 2007, pp. 143–146; Schutt, 2009, pp. 135–138): While the survey was not repeated by each participant, using multiple measures of each concept and triangulating the findings via the content analysis and interview phases of the study served a role similar to measures of test-retest or pre- and post-test reliability in an experimental design. The reliability of the scales was analyzed, and the randomization of survey questions (except the demographic questions) helped improve reliability.
3.9.3. Qualitative: Content Analysis and Interviews
A few qualitative and mixed methods researchers hold to positivistic treatments of validity and reliability, requiring the use of quantitative measures such as intercoder percentage agreement, Holsti’s (1969) coefficient of reliability, Cohen’s (1960) kappa, or Krippendorff’s (2004b) alpha. Most qualitative researchers, however, argue that validity and reliability should be neither ported over from quantitative to qualitative research unchanged nor ignored; instead they must be adapted to fit the naturalistic and ethnographic nature of most qualitative research (Gaskell & Bauer, 2000; Golafshani, 2003; Kvale & Brinkmann, 2009; Lincoln & Guba, 1985b; Ridenour & Newman, 2008). Which adaptations and changes should be put into place for qualitative research is the subject of debate (Golafshani, 2003). Golafshani found "credibility, … confirmability, … dependability … transferability," and "trustworthiness"—the last term preferred by Lincoln and Guba (1985b)—to be the terms most often used to describe the validity of qualitative research. No matter which term is chosen, validity is "inescapably grounded in the processes and intentions of particular [qualitative] research methodologies and projects" (Winter, 2000, p. 1, as cited in Golafshani, 2003, p. 602). Golafshani (p. 601) and Lincoln and Guba (1985b) linked dependability and trustworthiness most closely to reliability in qualitative research.
This dissertation research study, while drawing from all of the sources cited above, adapted the criteria and techniques cited by Gaskell and Bauer (2000) and Lincoln and Guba (1985b) for ensuring the validity and reliability of the qualitative phases of the study. These are discussed below, following four broader categories of trustworthiness outlined by Lincoln and Guba.
3.9.3.1. Credibility
The sequential, multiphase design allowed for prolonged engagement with the environment—19 months from prospectus defense to dissertation defense—and persistent, detailed observation of the phenomena under consideration. Using an approach for coding and analysis similar to the constant comparative method of grounded theory (Charmaz, 2006; Strauss & Corbin, 1994) helped ensure breadth and depth. Methods were triangulated via the sequential, multiphase design, where each method reflexively informed and was informed by the others and the theoretical framework developed in section 2.8 . The theoretical framework provides two perspectives—the lenses of the social worlds perspective and the theory of information worlds—that were triangulated in analysis, and the researcher was and is familiar with other social theories, models, and concepts of information and information behavior, some of which apply to the findings (see the later sections of Chapter 5 ). Triangulation of multiple investigators was difficult given the individual nature of a dissertation project, but the input of the dissertation committee and the researcher’s colleagues was considered and welcomed at appropriate stages. Using member checking in the interview process and later methods in the sequential design to check earlier ones led to greater credibility for the study and produced a high level of communicative validity.
Statistical intercoder reliability testing, while used during the pilot testing of the content analysis procedures, came to be considered less appropriate for this study; the combination of theories incorporated in the theoretical framework was being used for the first time, and as such the coding scheme and framework should be considered at least somewhat emergent. The coding scheme and procedures are acknowledged to have been quite complex. Statistics such as Cohen’s (1960) kappa or Krippendorff’s (2004) alpha are not very compatible with an exploratory study using an emergent framework and following an interpretive approach to analysis (Ahuvia, 2001). The pilot testing of the content analysis procedures, incorporating intercoder reliability testing with Cohen’s kappa, showed that reaching high statistical levels of intercoder reliability would require extensive training of other coders—difficult if not impossible in dissertation research—and much fine-tuning of rules and procedures, fine-tuning that might be appropriate for a non-dissertation, post-positivistic study but does not mesh with the interpretive and social constructionist paradigms in use here, nor fit the nature and resources of dissertation research. Intracoder reliability testing was performed, using percent agreement and Cohen’s kappa, for the content analysis and interviews; this is reported in Chapter 4 at the beginning of each section of findings. Emphasizing the other measures discussed here to address credibility and qualitative trustworthiness is believed to have been enough to overcome any limitations caused by not using intercoder reliability statistics.
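For reference, percent agreement, the simpler of the two intracoder measures, is the share of units assigned the same code on both coding passes; Cohen’s kappa then corrects this figure for chance, as sketched in section 3.7.1. The minimal sketch below uses hypothetical code labels.

```python
# Minimal sketch of intracoder percent agreement across two coding passes
# by the same coder; the code labels are hypothetical.

def percent_agreement(first_pass, second_pass):
    """Proportion of units assigned the same code on both passes."""
    matches = sum(a == b for a, b in zip(first_pass, second_pass))
    return matches / len(first_pass)

original = ["common site", "common ICT", "social norms", "translation"]
recoded  = ["common site", "common ICT", "information value", "translation"]
print(percent_agreement(original, recoded))  # 0.75
```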
3.9.3.2. Transferability
Every effort was made in the prospectus to be transparent about how the research would be conducted, and such transparency carried over to the research and the writing of this dissertation. The data collection for the content analysis and interview phases was constructed to provide valid and complete results by reaching saturation, leading to insightful analysis; this occurred. As seen in Chapters 4 and 5, the data allow for thick description (Geertz, 1973) of the phenomena in context, taken from messages and interview transcripts, which can allow other researchers to assess the potential transferability of the research findings to other settings.
3.9.3.3. Dependability
As discussed above, every effort has been made to be transparent in the conduct of this research. The data collection for the content analysis and interview phases provided valid and complete results, having reached saturation, leading to insightful analysis. I remained transparent with users who were surveyed and interviewed, disclosing the full and true purpose of the study and not engaging in deception. Interviewing participants whose survey or content analysis data indicated they would provide interesting and insightful responses helped satisfy Gaskell and Bauer’s call for revealing and relevant findings, and I feel the findings in Chapters 4 and 5 fit this call as well. Ensuring saturation was reached in the interviews increased the dependability of the study further. While the inquiry audit suggested by Lincoln and Guba was not implemented for this study, the process of defending the prospectus and dissertation and the guidance of the dissertation committee throughout have served a similar purpose.
3.9.3.4. Confirmability
The data analysis process included memoing, annotating, and note taking at appropriate moments, including reflective comments on the data and the researcher’s experience. The researcher noted any and all reflective comments on the research study, theoretical framework, data collection process, and data analysis process during all phases of the project. Triangulation (as discussed above) helped ensure confirmability. While the formal confirmability audit suggested by Lincoln and Guba—examining if findings, interpretations, and recommendations are supported by the data—was not implemented for this study, the process of defending the dissertation serves a similar purpose.
3.10. Ethical Considerations
This study is not known to have violated any ethical principles or procedures. The content analysis phase used messages accessible to the public, posted in LibraryThing and Goodreads groups, as its source of data. The identities of the users who posted each message remain confidential. Usernames were used to allow for identifying common message authors in a thread, for analysis of the flow of conversation, and for identifying potential participants for later phases of the study, but have not been and will not be part of further analysis, results, and publications. Identities have remained confidential throughout the survey and interview phases of the study, and will continue to be after the dissertation is defended. Pseudonyms have been and will continue to be used in any published or unpublished reports of the results and conclusions, and any other data or information with the potential to identify participants to people familiar with them has been altered for the purposes of this dissertation and future presentation and publication.
Informed consent was obtained from participants in the survey and interview phases, before they completed the survey instrument or participated in the main portion of the interview, and—as required by Goodreads for use of their digital library as a setting for this research (see Appendix A, section A.1)—from the moderators of Goodreads groups. Their participation was voluntary; any participant who wished not to complete the survey or be interviewed, or who wanted to request that an interview be stopped or their survey data deleted, would have been accommodated and allowed to not take part in or to withdraw from the study. Moderators had the same right when deciding whether their group would take part in the study as a whole. No users or moderators who had previously consented expressed feeling uncomfortable or a wish to withdraw. Some moderators and potential interviewees did not respond to invitations, and one potential interviewee did not show up for her interview time and never responded to inquiries, but it is unclear why she chose to withdraw or why others were not interested in (in some cases further) participation. If any participants wish to withdraw their data from the study in the future, after already completing the survey or having been interviewed, their survey results, interview transcript, interview audio recording, and notes taken by the researcher after their interview will be removed from the data collected and analyzed as far as possible, although their data will have already been analyzed and affected the conclusions drawn from data analysis (seen in Chapter 5). This is an unavoidable consequence and will be dealt with as best as possible by the researcher, should it occur.
On the opposite end of the research lifecycle, in two of the LibraryThing groups (not named here, to maintain confidentiality and avoid unnecessarily "rocking the boat"), a small number of users (five to ten) responded to the survey invitation post with comments expressing dislike of the survey instrument or confusion over the questions asked. I answered their questions and queries as best I could without introducing excessive bias into the survey results, but there was little that could be done to satisfy some users. Strictly speaking, they were not expressing discomfort; if anything, they made me more uncomfortable than my survey had made them. Still, this is worth noting as a negative reaction. It was not the norm: most participants completed the survey without incident, and no participant experienced harm or risk greater than that of everyday life as a result of viewing or completing the survey or participating in the research in other ways.
The study was explained to participants in all letters they received, in the informed consent statement at the beginning of the survey, in the interview informed consent statement, and verbally at the beginning of each interview; see Appendix A for the letters and consent forms. Participants should therefore have been fully aware of the potential risks (or lack thereof) and benefits, of the voluntary nature of their participation, and of the compensation provided before giving their informed consent for each phase of the data collection. Participants were not deceived in any way at any point during this study. The potential benefits to the participants, as users of the LibraryThing or Goodreads digital libraries, were great enough to outweigh the small possibility of harm and the risks discussed above. The identity and affiliation of the researcher were known to all prospective participants via the invitation letters and informed consent statements, and the purpose of and reasoning behind the interview were reiterated to each participant at the start of their interview. No issues were observed with the researcher (as interviewer) maintaining appropriate boundaries with participants during the interview phase of the study.
The FSU Human Subjects Committee, an institutional review board (IRB), approved this study, including the pilot test of the content analysis phase. Documentation of this approval can be found in Appendix E, section E.3.
3.11. Conclusion
This chapter has presented the details of the method and procedures for this dissertation research study. The use of content analysis, a survey questionnaire, and semi-structured interviews in sequence within a mixed methods research design addressed the purpose of the research: to improve understanding of the organizational, cultural, institutional, collaborative, and social contexts of digital libraries. As stated in Chapter 1 and shown in Chapter 2, these contexts have important effects on users, communities, and information behavior. There is a clear need for theoretical and practical research into the roles digital libraries play within, between, and across communities, social worlds, and information worlds. This study helps satisfy that need.
The research design is well grounded in epistemology and theory, previous research, and previous and existing practice; Chapter 2 provides this necessary context. The study operates under the tenets of the social paradigm, social informatics, and social constructionism, and incorporates boundary object theory, the social worlds perspective, and the theory of information worlds into its theoretical framework. This design has allowed data to be collected and analyzed, at multiple levels and using multiple methods, on the roles that LibraryThing and Goodreads, two cases of social digital libraries, play as boundary objects in translation and coherence between existing social and information worlds and in coherence and convergence of emergent worlds. Chapter 4 presents the findings from this data and its analysis; Chapter 5 provides greater synthesis and discussion of the findings, implications, and conclusions of this research.
The FSU iSchool was known at the time as the School of Library and Information Studies; for simplicity the newer name (which took effect in early 2014) is used to refer to this entity in this dissertation. The older name is still present on the invitation letters and consent forms, as approved by FSU's Human Subjects Committee, in Appendix A.