Wednesday 31 October 2012

Peer Review Example

Learning about the peer review process was very useful for me. I'm a peer editor for two graduate journals, but it was helpful to learn the theory behind it. I also really liked the tips, and the website, that Prof. Grimes mentioned. What I personally find most difficult is being constructive; I often find it hard to stay positive about a paper. I just remind myself that if it were my paper, I would want good, constructive comments about how to improve.

I chose the paper about Podcasts. I am looking forward to combining the hands-on skills I have acquired as a peer editor with the theory and skills taught in class, since I never really learned them explicitly in a lecture.

I'm also posting a checklist I use when conducting peer reviews, in case it helps.

Happy Reviewing!

Review Checklist

Instructions: While reading through, or immediately following your first read of the paper, fill in the checklist below. Using the guiding questions, place a number from 1 (weakest) to 5 (strongest) in the column indicating how well the paper meets the listed criteria. This will produce a general mark based on your initial impression of the paper.

Evaluation Criterion – Mark (1–5)

1. Appropriateness (Overall, is the article appropriate for its level – undergraduate or graduate – and the intended audience of the journal?)
2. Topic (Is the article's topic of particular interest or relevance? Is it interesting?)
3. Title (Is the article's title clear and consistent with the content?)
4. Introduction (Does the introduction raise interest? Does it reflect the content of the paper—i.e., are all questions answered/points addressed by the end of the essay?)
5. Thesis/Purpose (Is the thesis/purpose of the article clear, focused and consistent?)
6. Argumentation (Does the article demonstrate valid, logical arguments and/or meaningful discussion related to its purpose?)
7. Supporting Evidence (Do demonstrable links exist between the sources used and the article's arguments and overall purpose?)
8. Sources (Do the sources reflect an understanding of the significant and relevant literature in the field or other important documents, etc.?)
9. Theoretical Background (Does the author show an understanding of the theoretical background relevant to the theme and purpose?)
10. Conclusion (Is the conclusion strong, clear and consistent with the article's arguments, discussion and overall purpose?)
11. Significance (Does the article identify significant ideas or findings that either clarify or add to what is known in the field?)
12. Writing (Is the writing sufficient to be published, considering word choice, spelling, grammar, punctuation and overall style?)
13. Format (Does the article comply with the journal's format and length limit?)
14. Referencing (Does the referencing – footnotes and bibliography – appear thorough, complete and accurate?)

Total (sum of the above ÷ 14) =
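For illustration, here's a minimal sketch (in Python, with invented scores) of how the checklist total works out as the average of the fourteen criterion marks:

```python
# Hypothetical criterion scores (1 = weakest, 5 = strongest), one per checklist item.
scores = {
    "Appropriateness": 4, "Topic": 5, "Title": 3, "Introduction": 4,
    "Thesis/Purpose": 4, "Argumentation": 3, "Supporting Evidence": 4,
    "Sources": 5, "Theoretical Background": 3, "Conclusion": 4,
    "Significance": 4, "Writing": 3, "Format": 5, "Referencing": 4,
}

total = sum(scores.values()) / len(scores)  # the "above ÷ 14" step
print(f"Overall mark: {total:.2f} / 5")     # 3.93 on these made-up scores
```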

Peer Review Workshop

For this week I’d like to comment on our workshop on how to do a peer review. I learned that peer reviews are an important way to protect the quality of the research that gets published. It was also interesting to discover that the standards of what constitutes “good quality” can differ between journals. Of the peer review tips we learned, what really struck me was Dr. Tyworth's advice that we should not turn a project into what we would do. I can see this becoming problematic if you are in a field in which you have extensive knowledge and experience. I learned that ultimately, as a peer reviewer, I must not try to change the question. Rather, I must thoroughly understand exactly what the author is arguing or trying to explain in their article, and on that basis evaluate whether their methods are appropriate to their research question.
Since we are all submitting a peer review, I also found it very useful to hear my classmates' suggestions on best practices and strategies to use in our reviews. For example, we could look at the general readability of the article as well as the originality of the actual research question. Perhaps an extension to the workshop could have been reviewing an article together (though not one of the articles we are selecting from) and then commenting on areas for improvement. I understand the time constraints; we are limited in our discussion time. Still, a practical hands-on example could be beneficial. Although many of us have never written a peer review before, hopefully we all feel a bit more confident in our ability to write professionally and constructively in our peer reviews after attending the workshop.

Tuesday 30 October 2012

Peer Review Workshop Review

I thought I would post about the Peer Review Workshop, as I actually took a lot away from it (at least more than I thought I would)! It helped to drill into my head that criticisms of my work do not equal attacks. Logically I know this, but as I'm sure most of you are aware, it's so easy to get emotionally attached to your own written ideas. When someone tears them apart (or at least it feels like that's what they're doing), you tend to become defensive or, quite honestly, embarrassed and sad (at least I do). The workshop hit home with me in that I genuinely saw the comments on my SSHRC proposal in a different light. Some of these comments were harsh, but most of them were clearly designed not only to make my proposal better, but to make me a better student and communicator. This should be common sense by the time one enters a Master's program, but I'm glad I had the reminder that criticism is really designed to help you, not embarrass you. I feel much more confident going into my peer review with these thoughts in mind, and hopefully I will see my own writing improve even further as the course goes on.

Peer Review thoughts...

For this assignment, I decided to review the article dealing with women entrepreneurs in Mauritius. The reason for this was that the research method the author used was focus groups, which is the method I chose for my SSHRC proposal. I was excited about focus groups because of the possibility of rich data (as both Luker and Knight state), and I was also aware of the difficulties of interpreting the data and ensuring that you, as the researcher, can actually say something about what you've discovered through the interviews. What is becoming increasingly clear to me is the possibility that the group dynamic (which is the part that attracted me to the method in the first place) might be an issue.

I've looked at some other articles about focus group research and methodology, and all of them state that it is very important to pick a group that's "just right" - not too heterogeneous, or else people might feel inhibited, but also not too homogeneous, or else you might get the same opinions over and over again. In fact, the author of one of the articles critiques the method pretty harshly and says that it provides data that is, basically, wrong: in his study, individual interviews yielded much more reliable data than groups. I wonder if that's because his topic was about individuals' decisions about products (market research, really), whereas focus group research is more conducive to exploring questions that have to do with group dynamics. In the case of the article I'm looking at, I'm not sure yet whether the question was right for the method, but it's definitely something for me to consider for the research proposal assignment: focus group use needs to be really carefully justified.

Monday 29 October 2012

If I can get Harry Potter involved, I am game

For my Peer Review assignment I have decided to rip apart (jks) President or Dictator? A Comparison of Cuban-American Media Coverage of Cuban News by G. D. Peterson et al. The reason I chose this particular article is that I am interested in content analysis. I think content analysis will be a method relevant to the research my proposal is proposing... enter Harry Potter.

When I was looking for online resources on content analysis I came across this: Teaching Content Analysis Through Harry Potter. I try, very deliberately, to integrate Harry Potter (sans prologue) and various other fantasy series/novels/movies into my life, and here is an opportunity for me to do it academically and relevantly, score! Adam M. Messinger (the author) presents a compact crash course on content analysis by analyzing music from the Harry Potter films. He begins by briefly defining content analysis and its processes (sampling, coding, theory development). Messinger then uses this method to analyze the music found in Harry Potter to uncover its (the music's) message with the aid of a coding sheet. The article was not spectacular, but it did clarify a few things about the method through application. Also, because it was meant as a learning exercise for undergraduates, it was easy to understand and it related the intents and purposes of content analysis clearly. It is fun and worth reading if you want to obtain a general understanding of content analysis (which I did not get from Luker). However, be warned: you will get Hedwig's theme stuck in your head... which is not necessarily a bad thing.


Reflections on being an Interviewee


Last weekend I had an interesting encounter on the street. I was heading through Philosopher's Walk, on my way to meet a friend, when a total stranger stopped me on the path and asked if he could ask me a few questions about "student life." Normally I would have said no to this - I don't like talking to people I don't know, I don't like being put on the spot to answer questions, and I find that kind of stopping-strangers-on-the-street method of gathering information a little bit invasive. But I said yes, because I got the impression he was a student and I thought it might be a good opportunity to observe a research method in action.

It was a really strange experience, as an interviewee, to begin answering questions with no real sense of what the questions were going to be about, and with no idea what the interviewer was trying to find out. All I knew before we began was that he wanted to know about "student life" - but what he actually ended up asking me about was how I, as a student, celebrated birthdays. I was distracted, while answering his questions, by trying to anticipate what he wanted to know or what he was actually trying to find out. I don't know if it was a halo-effect kind of thing - I didn't really care about telling him what he wanted to hear, but I was trying to figure out what he was actually doing. Would it have been that hard for him to preface his request with "I'm interested in finding out how university students celebrate birthdays"? I feel like just that tiny bit of extra context would have made it easier for me to answer his questions.

It turned out that he wanted to find out whether or not I would find a particular product (some sort of content aggregation thingy for finding birthday gifts based on things like age & gender, from the looks of it?) useful. In the end I wasn't sure what his motives were and felt a little weird about the fact that I let myself get roped into an interview I didn't really know the whole context of. I realize this is probably not going to be how most of us conduct interviews in our research careers, but it was interesting all the same and made me wonder about how or if to prepare a respondent for an interview you're conducting.

In another class I am taking, we had an in-class workshop on conducting semi-structured interviews. When we prepared our interview questions, some of us made a point of giving our questions a little bit of context, and others did not. Is there a reason to choose one direction over the other? Or, I guess, what is the benefit of giving your respondent no or limited context? I feel like it would be to everyone's benefit to give a little bit of explanation - by at least letting the person you're interviewing know, in a general way, what it is you are trying to explore - but maybe I'm wrong.




BB-SSHRC?


With no readings this week, I figured I’d reflect on my SSHRC proposal. During the lecture Sara brought up the importance of not underselling yourself in the proposal. As a student of bureaucratic methodologies and a believer that BBB (bullshit baffles brains), I may have taken that bit of advice a tad too far in my early drafts, as some of my statements induced chuckles and eye rolls from my proofreader. This makes me wonder what the norms are for how you rhetorically promote yourself in grant work. Through my work opportunities, I’ve had the chance to work with both management teams and technical teams. When I make statements to the technical team that would pass unnoticed by the management group, they frequently engender the same laugh and eye roll that my early SSHRC drafts got. I doubt that saying different groups (technicians, managers, academics, etc.) have different rhetorical norms for self-promotion (and different thresholds for their BS meter) is a controversial statement. But I wonder how an outsider (me) can quickly learn the BS threshold for their target audience.

Sunday 28 October 2012

The "President or Dictator?" article

I just read the "President or Dictator?" article that is among our choices for the peer review assignment. It alarmed me by how irrelevant the selected methodology is to the study's thesis statement. I found it surprising that the group of authors did not recognize that. Does anyone else see this or am I completely off?..

The study proposes to "examine the role the media plays in [the US policy positions towards Cuba] to determine if the Cuban-American media is reflecting these views". The study then goes on to gather quantitative data to reveal the number of times positive and negative descriptors are used in the same sentences as "Fidel", "Raul Castro", and "Cuba" in four American newspapers. This is content analysis, which, I think, works well to determine how much more frequently the three words above are used negatively versus positively, but, unfortunately, it does not offer an examination of the media's role in shaping people's views! And yet the media's role in people's views is what the researchers propose to examine in the beginning. Luker says in chapter 8 that when it comes to content analysis she is "skeptical that we can show anything other than a correlation." This method "can only find the distribution of a population into categories that we have defined a priori". The information collected in this study using this method is not enough to offer an analysis of the Cuban-American community and its views. And yet, upon presentation of the results, the conclusion of the study attempts to offer qualitative analysis. All of a sudden, in the last paragraph it offers a new idea: "the Cuban-American population <...> has a more negative opinion of Castro to begin with". How is this clear from the data collected? How do we know from the data examined what views the Cuban-American population had "to begin with"?
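To make concrete what this kind of content analysis actually yields, here is a minimal sketch (in Python; the sentences and descriptor lists are invented for illustration, not taken from the study) of the descriptor counting the authors describe. Notice that the output is just frequencies - nothing in it speaks to the media's role in shaping anyone's views:

```python
# Hypothetical sketch of counting positive/negative descriptors that co-occur
# in a sentence with the target terms. All data here is made up.

TARGETS = ("fidel", "raul castro", "cuba")
POSITIVE = {"democratic", "popular", "reformer"}
NEGATIVE = {"dictator", "oppressive", "regime"}

sentences = [
    "Critics called Fidel a dictator running an oppressive regime.",
    "Some coverage described Raul Castro as a cautious reformer.",
]

counts = {"positive": 0, "negative": 0}
for sentence in sentences:
    lowered = sentence.lower()
    if any(target in lowered for target in TARGETS):  # sentence mentions a target
        words = lowered.rstrip(".").split()
        counts["positive"] += sum(word in POSITIVE for word in words)
        counts["negative"] += sum(word in NEGATIVE for word in words)

print(counts)  # {'positive': 1, 'negative': 3} for these made-up sentences
```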

The researchers of this paper also failed to justify the use of their method, other than noting that it would not be too expensive...

This study is an example of choosing irrelevant tools to answer a research question. It is also, maybe, an example of how sometimes it is necessary to completely change your research question after you have collected the data. I found this article to be a troubling example of a mistake that can easily happen to me and to anybody in research work. I guess I have a lot to say about this one, so I'll choose it for the assignment. Who is with me?

Wednesday 24 October 2012

Participant Observation & Interviews

For this week’s reading, the Stebbins article really made an impact. It gently reminded us that while it is beneficial to have background knowledge of the subjects and environment we are observing, we should not act as a “know-it-all” (Stebbins, 1987, 104). This is incredibly relevant to my own research. Relating it to how children interact in a group setting, it means I should step out of my own professional teaching role and fully engage in the conversations, removing any preconceived notions of what will or might happen.
Regarding the Luker Chapter 8 reading, I found her suggestion of writing interview questions on index cards and arranging them on the floor to design a sensible flow to be quite useful (Luker, 2008, 172). It was also interesting to learn about the benefits of making use of warm-up and cool-down questions. However, I’m not sure how much time to allocate for the warm-up and cool-down parts of the interview. Would a few minutes at the beginning and end be enough? What if a participant keeps talking and the session runs into the next interview timeslot? Perhaps if what they are saying will add value to the research and was not mentioned earlier, it would be best to let them finish their thought. Personally, I find it really fascinating that participants may divulge more information during the cool-down because they feel comfortable and free of the interview structure. All in all, I’m finding that the readings each week are quite relevant to my own research question and my chosen methodology – I learn something new every week!

Monday 22 October 2012

On Stebbins

Stebbins raises some interesting points about the "participant-as-an-observer-as-a-non-member." It all takes me back to my Knight post on the "insider" and "outsider" dichotomy, and how troubling I found it all. I was afraid that, as an outsider, a researcher would not arrive at completely truthful observations about subjects, and that, as an insider, they would not be as objective as they should be.

Though Stebbins's language can be a little off-putting (speaking in absolutes, making generalizations, etc.), he does make some really good points about becoming an observer who participates in the setting. I think that Stebbins's assertion that the researcher should try to fit in is a valid one. It has to do with the issue of trust that I am so concerned with when it comes to the ethnographic methodology (as I understand it). He manages to draw lines and stay within his bounds of competence in his research, not allowing "field work reciprocities" when they conflicted with his research. However, I would not go so far as to say that the whole project is null if you are not fully accepted into the group - not all research projects demand participation. And if a project does, you can modify your research structure; these are, after all, people we are dealing with, and people are not always predictable, so it is all about rolling with the punches... from what I gather.

Stebbins makes a great point regarding genuine interest, as well. This is something that does not come up often in our discussions, and I think that it is an important consideration. As I have said, these are people that researchers are dealing with; to be disingenuous while conducting ethnographic research is, to me, simply wrong. These people are allowing you into their lives and are expecting something useful and fulfilling to come out of the process; to do it without sincerity and interest is, in my opinion, to use them as a means to your own end.

Ethnographic methods


Real ethnography seems like a daunting, yet interesting, method in practice, and I enjoyed the accounts given in this week's readings. My own prior experience is with contextual interviews - a kind of "pseudo-ethnography" existing somewhere between ethnography and interviews. In Stebbins' terms, the difference might be that I was involved in learning, but less so in participation, which I think made it easier to navigate the boundary between researcher and subject (perhaps yielding a less rich understanding as a result, but it nevertheless seemed "good enough" for my purposes and time frame).

Nevertheless, I found Shaffir's and Stebbins' accounts fascinating. I enjoyed Shaffir's allegory of the man lost in the forest. It seems difficult to give hard and fast rules on how to proceed with this kind of research; after all, we are people trying to study other people, and everyone is up to their eyeballs in ideology and personal baggage. But hearing about the kinds of issues others have encountered in their research is valuable. As Shaffir notes, "even the most seasoned field researcher, if given the chance to start over, might approach the setting and individuals differently…" - I think rather than invalidating the work, this shows that there is indeed some clearer understanding gained by doing it.

Favourite Method?

At the end of Luker's chapter, she asks the reader to pick the "best" method for them. I have a problem with this because, to me, they ALL sound like good methods, and I simply can't compare them because they are so different! I can say that interviews and focus groups are most applicable to my research proposal, but choosing between the two may be an impossible task.
I found it useful to frame interviews in the way that Luker discusses (that they are for discovering patterns among many people, not just the thoughts inside one person's head), as we had discussed the fallibility of interviews in class to the point where they seemed almost useless as a data-gathering device. Luker points out that even how one dresses can alter the results of an interview, making interviews seem problematic as a means of gathering data. After reading this chapter I now look at interviews in a different way: they are a way to examine the mental maps inside the heads of lots of people. The patterns in the answers are what's important, not necessarily the answers themselves. The same can be said for focus groups, as watching people interact with each other can produce a lot of data. This type of data may not be accessible in an interviewer/interviewee setting.
I am finding the readings more relatable and applicable after handing in my proposal, and am beginning to harvest some knowledge that will be very useful in expanding my two page proposal into the final product due at the end of the semester! Hopefully others feel the same!

Sunday 21 October 2012

Interviews


What I liked most was Luker’s pre-interview process. I think that her idea of writing out all the questions on index cards, setting them on the floor, and ordering them is a fantastic way to do it. I really liked how she referred to the groupings as "clumps" and advised making sure you tell your interviewee that you are moving on to a different "clump."
When doing an interview assignment for Foundations in Library and Information Science, I did not inform my participant that I was moving on to different 'clumps.' In retrospect, I really think this would have helped me get better answers, as my interviewee seemed to think that a lot of the questions were basically the same, when they were asking very different things. Perhaps if I had told the participant that I was moving to a different subject, I would have gotten different, and better, answers.
           
However, there is one part of her chapter that I could not agree with. I don’t like that she believes it is okay to provoke participants so that they spell out what they actually meant. I suppose that if you were to tread carefully, and have an idea of the participant's temperament, it would be okay. But what if it went poorly and ended the interview? Although she claims that this is generally “low-risk,” you can anger the person you are interviewing if you have not built a rapport. I’m not sure I am personally comfortable with putting words in a participant’s mouth in order to get a reaction.

Participant Observation


I moved to Canada when I was 17. Reading Luker made me smile a few times this week:

When you are enmeshed in a different culture, everyday life becomes problematic and challenging. You don’t get your regular morning coffee in your familiar coffee mug, you don’t read the comics in your local paper <…>. Then the day goes downhill from there.

I realized that during the last 12 years of my life as a new Canadian, I have incidentally learned and have continuously been using a major research methodology: participant observation. I have been writing home and describing the whole society of Canada in conversations with friends on Skype. I have been generalizing based on what I know about the groups of people that I know. For a long time now I have been “someone who watches and notes things that everyone else takes for granted”. I have been documenting practices and attempting to explain my Canadian friends’ actions by describing their distinct lens of beliefs. I have been creating “a fixed narrative for a specific audience”. Of course, my motivation for this is not a desire to publish volumes of research, but to personally understand my own life. I took up ‘figuring out’ Canada casually and, as a result, skipped a few steps of analyzing the data I collected. I am glad Luker points those out for me. I see that I sometimes didn’t stop to think whether the events I had talked about or written home about were atypical or widespread. I assumed categories without thoroughly researching them. Sometimes I found theories to back up my conclusions, but often not. Mostly skipping these steps, I simply arrived at a model of why Canadians do what they do.

Participant observation is dangerous because it is completely personal and is based on individual impressionability. Your generalizations are often based on the groups you personally chose to join. Sure, often there is a highly structured theoretical explanation of why you analyze whom you analyze. But, similarly to the study of ‘reproduction’ of social class, where the researcher chooses to look at groups in a public school because she believes - it is obvious to her - that those groups are representative of the whole, you too will always choose certain groups and categories based on your personal experience. My entire analysis of living in Canada is my interpretation and my perception. And so is the analysis derived from participant observation in general.

Actionable Ethnography

My personal feelings towards ethnographic studies were best articulated by Shaffir. As I discussed in my first posts, I find the need to present the social sciences in as close to the same light as the hard sciences a bit disconcerting. I think, as Shaffir discusses, this is particularly problematic for ethnographic studies. The inherent subjectivity of the exercise means that at best you can report your experience of the encounter. The standard I judge social science research by is ‘good enough’. Related to the idea of actionable intelligence, depending on the circumstances there will be a different threshold for good enough information. While many of the techniques for maintaining analytical distance can make for a better report, I think that it’s still just an attempt to offer an explanation that holds some value for someone in some context. If any sort of value can be found in the study, regardless of how strong its scientific validity, it has served its purpose. But yes, the more rigorous the study, the more likely it will have more value for more people.

Saturday 20 October 2012

I really liked what Luker had to say about interviews in her chapter on field methods. After all our talk of things like bias and the halo effect, I was beginning to have a lot of questions about the value of the interview as a research method. Luker points out that interviews are often criticized because they aren't a "realistic" account of any aspect of social life, as well as because interviews are narratives and so they are always an interpretation of what happened, rather than what actually happened. I couldn't let go of the idea of interviews as a valuable research method (I like narratives), but it became clear really quickly that learning how to make sense of them was going to be more complicated than I anticipated. But Luker writes that the value of interviews isn't that they're an accurate picture of reality, but that they are "accurate accounts of the kinds of mental maps that people carry around inside their heads, and that it is this, rather than some videotape of 'reality' which is of interest to us" (167). I like this. I think it's valuable, in sociological research, to pay attention to what and how people think, and how they interpret their reality, as much as it is valuable to try and create a picture of the "reality" itself, and I'm glad to be able to frame the practice of interviewing in this way.

Friday 19 October 2012

Quantitative methods

I really enjoyed the readings on quantitative research methods. My previous notion that research is boring has been completely shattered! I’m metamorphosing (is that the right word?) into a Research Methods Butterfly! :)
In the Luker reading, I found it very interesting to read about her example of how not only do institutions differ in their definition of rape, but so do individuals. I suppose I never realized the impact that one’s values and preconceptions could have on how one responds to survey questions. This is incredibly fascinating! And so I learned that researchers must operationalize their research idea over and over again so that they can then go out and analyze what others think about their idea/question. (Luker, 2008, p. 123)
I also found Knight’s discussion of piloting quite helpful. Personally, I would appreciate the opportunity to test-drive the questionnaire I plan on using, to pinpoint and fix any problems before the actual interview process. As my research proposal involves having children complete a questionnaire about their reading habits, ideally I would ask a fellow teacher and perhaps even some children from the grade I’m researching to comment on the characteristics Knight outlined, especially readability (age-appropriateness) and presentation, i.e. plenty of white space on the page, a large enough font, etc. (Knight, 2002, pp. 93-94) Admittedly, I sometimes write a lot because I don’t want to leave anything out. My fear is that the children will become overwhelmed and just fill out the survey without actually reading the questions, completing it to “get it over with.” That is where piloting could come in handy.
The comparison chart where Knight distinguishes the characteristics of self-administered questionnaires using fixed-response/open-ended questions and interviews using open-ended questions/prompts was extremely helpful. (Knight, 2002, pp. 89-90) For my research question, I decided to incorporate both types. Specifically, before the actual reading experiment starts, the students will complete a questionnaire about how they feel about reading, and after they have participated they will complete the same questionnaire, to see if their attitudes have changed. The other aspect is a separate face-to-face interview with each child, where I’ll ask more open-ended questions. From the readings, I learned it can be difficult for some people to discuss their feelings. (Knight, 2002, p. 89) However, I believe that by interviewing each child separately, they will hopefully feel less apprehensive about sharing their ideas. It will also ensure all children, especially those who are shy, are able to discuss their feelings.
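As a minimal sketch of how that pre/post questionnaire comparison might be tallied (in Python; the attitude scores and the 1-5 scale are invented purely for illustration):

```python
# Hypothetical pre/post attitude scores on a 1-5 scale, one pair per child.
pre_scores = [2, 3, 3, 4, 2, 3]
post_scores = [3, 4, 3, 5, 3, 4]

# Per-child change, then a simple summary of whether attitudes shifted.
changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_change = sum(changes) / len(changes)

print(f"Mean attitude change: {mean_change:+.2f}")  # +0.83 on this made-up data
print(f"Children whose attitude improved: {sum(c > 0 for c in changes)} of {len(changes)}")
```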

Thursday 18 October 2012

Operationalization and Language

Like Jessica, Luker's chapter on Sampling, Operationalization, and Generalization made me think a lot about language. What I found especially interesting was how, when Luker discussed how a "salsa-dancing social scientist" would operationalize rape, she would interview a sampling of people to understand what they defined as rape. Maybe this is because I'm still relatively new to the world of research, but it didn't really occur to me before this that operationalizing variables (especially slippery ones - in my case, I have had to think about how to operationalize "spirituality") had to do with more than just limiting how you, as the researcher, were interpreting these terms, and should also take into account how the term is understood by others. It makes sense: you want to have a holistic (or as holistic as possible) sense of how these terms are understood, by "giving yourself a framework for examining what the taken-for-granted elements are in other people's categories, and [sensitizing yourself] to things you have taken for granted" (122). When language is as slippery as it is, you want to give yourself and the readers of your research as many footholds as possible, and that means deliberately getting to know how other people understand terms.

Monday 15 October 2012

Generalization

I have always had a problem with generalizing. While writing my major research project last year, I thought it was so obvious that the historical statistical data pointed to the general theme I was studying. That assumption was almost my downfall while working on the project. So, obviously, I found Luker's section on generalization extremely informative.

One piece of advice I found extremely helpful is to "Anticipate the kinds of criticisms that people will make of you" (Luker, 2008, p. 125). Oftentimes, when I have what I consider at the time to be a stroke of genius, the connections are only made in my head. I often try to ‘talk it out’ with someone, as I did when doing historical research. Is what I’m saying too abstract? Does it only make sense to me? Do I need to make stronger connections, or make them clearer? Often, when I explain it to someone, I can see the holes, missed connections, or problems with my interpretation of the data. This, I think, will be important when designing my own research project again. However, Luker also states that it is important to generalize, but also to ‘bump up’ the study to relate it to broader research. I think the balance she is writing about here is difficult to attain, but important to good research.

Sunday 14 October 2012

Reflections on the Interview Exercise


For the interview exercise last week, I found it extremely difficult to craft interview questions to ask my partner. In hindsight, I may have misunderstood the sort of questions we were supposed to ask. When I tried to think of questions to ask my partner about her research project, I kept drawing a blank. It occurs to me now that if I had re-framed the task as interviewing her about the process of selecting a research topic, I might have had more success. This highlights for me the need for interview prep. As they say in trial law, never ask a question you don’t know the answer to. As was discussed in class, in order to give enough structure to make the interview useful as part of a larger data set, you need to have more or less the same interview with everyone, and the interview needs to be meaningful. Ideally, I would pre-interview everyone in a free-flowing format before doing the structured data-collecting interview. I can, however, foresee issues with subject availability for this sort of interview. Also, it was mentioned in class that there is an expectation that you analyse all the information you collect (e.g. video vs. audio recordings of interviews). How does this apply to prep work? Can you ignore pre-interviews in your analysis, or do you have to find a way to include them?

The Slipperiness of Language

In one of my undergraduate English classes ("Literary Theory and Criticism") I had a professor who constantly ranted about the "slipperiness of language". It got to the point where we once spent an entire hour-and-a-half class discussing the possible interpretations of the phrase "I think" in a Virginia Woolf text. I thought that I would never find this professor's class more relevant than it was during my study period, but now his lectures are coming back to me in full force thanks to Luker's chapter. In her discussion of how a research question involving rape cases can't really be answered until you have defined what "rape" is (Luker 115), Luker raises a valuable point about language and how it limits our ability to understand things. The operationalization section of this chapter almost waxes philosophical about the limitations of language and how they pertain to research, which I found incredibly relevant to my research proposal.
Without going into too much detail, my project focuses on whether certain types of technology affect literacy in young people. Early on in my proposal, I was faced with how to define "literacy". The term is ever-changing thanks to technology; we now have "media literacy" and "computer literacy" in addition to traditional reading-and-writing models. I had to spell out exactly what literacy meant to me and how I was measuring it before I felt comfortable continuing my proposal. Luker's chapter really resonated with me and helped me to feel confident in my choice to outline and clarify my definition of a "slippery" term in my proposal. I can definitely see how my project may also be related to questions of how literacy is defined, as well as what does and does not influence it.

"Real life in real time is lots of noise and not very much signal."


Luker discusses sampling - the important practice of selecting the data you collect. One of the real risks in sampling is choosing a segment of data in a way that skews the research outcome. I thought of how often we come upon studies or research conclusions in everyday media that are so obviously biased.
 

On the INF 1240 blog, Professor Grimes posted an analysis of a study about a correlation between divorce rates and shared housework. Describing the problems with the study, she mentions various degrees of bias, with which I very much agree. In her blog post, she writes that it is important to always ask questions about research and researchers. This example of divorcing couples who share housework is a particularly good illustration of poor and incomplete ways of collecting and then presenting data. This unsupported study clearly creates a sensationally politicized outcome, which may be its only purpose, really: how many people will have an emotional response to such a finding about sharing housework! You only hope that with pieces of information like this, people do stop and ask questions.

I find that biases are at times obvious, but at other times obscure. Luker talks about sampling at length, which, perhaps, shows that carefully, inoffensively, and properly accessing data for research is not all that easy. It is clear that many researchers struggle with choosing the right place and the right ways to locate data. I find the three guidelines she offers very helpful: find a setting where the variable varies, make sure the part stands for the whole, and let theory lead your sampling. I ended up questioning my own methodology in my research proposal after reading this. In my research I initially wanted to interview my own peers who are in the field I am studying. This, of course, goes against two of the three guidelines. There are truly many things to be careful about.

Wednesday 10 October 2012

Some Thoughts on Luker's Chapter 6


I found Luker's chapter on sampling, operationalization and generalization to be the most interesting reading that we have been assigned so far. If I were pressed to choose which chapter in her book has been the most useful, I would probably maintain that the literature review techniques, analogies and advice given in Chapter 5 are paramount, and can be viewed as beneficial not just for our own research but for any social science research project. After reading Chapter 6, however, it now seems as though Luker is beginning to strike a greater balance between theoretical discussions and parables, and concrete, practical advice. I hope this is an approach that is maintained throughout the remainder of the book, because I find it to be well-rounded and engaging.

I like that Luker advocates drawing from the methodology of canonical science in a judicious fashion. It is clear that different research contexts can warrant different research methodologies. I like how Luker illustrates this point through her K-9 Search and Rescue dog training analogy (Luker, pp. 102-3). When the goal is simply to search the wilderness, the appropriate method is to divide the terrain into grids and then have teams search each one. When the goal is to search disaster areas, the appropriate method is to target specific areas - for example, in the case of an earthquake, it is best to target areas containing buildings that are in a state of collapse, and where people are likely to have been during the crisis. Luker also gives advice on conducting comparisons, doing data outcropping, and maximizing the "acceptability" of your sample among peers, colleagues, editors and reviewers.

In the operationalization section of the chapter, Luker dives right into the heart of what makes defining things so difficult: language, culture, history, gender, law, politics, and philosophy perpetually overlap and interact with each other to produce highly contextual definitions of persons, objects and ideas. Her in-depth analysis of how definitions and perceptions of rape have changed throughout time is fascinating, as is her brief investigation of how rape can still be defined and perceived in wildly different ways depending on one's sex and general philosophical/ideological orientation.

Luker concludes the chapter with a brief discussion of generalization, offering the advice that we should attempt to "bump up a level of generality" when doing our research (Luker, p. 126). By this, she means that we should attempt to reach higher levels of generalization by anticipating some of the theoretical and practical implications of our research for areas outside our immediate domain. I found this last point really useful for the development of my research proposal. I am now beginning to brainstorm some of the ways in which my study will be relevant, not just for other social science researchers, but for market researchers, advertising companies, and various other business-related entities.

Oh, the dreaded exercises!

I've just finished reading Luker's chapter 6. She writes wonderfully, and it's all perfectly clear until the end of the chapter, where I actually have to apply what she was talking about to my own case... And here I suddenly realize that my awesome, society-changing, definitely publishable study is... well, a mess.

Briefly, I want to see if the TPL programs and services targeted at newcomers to Canada actually hit the target. My "hypothesis", if I can use the word, is that TPL (like most immigrant-library-use studies) bases its decisions regarding programs for immigrants on the wrong kind of data - surveys, census info, staff perceptions - when it should be talking to the community. Sounds good so far? The problem is that while I can envision the actual research - the interviews, transcribing the audio, the "aha!" moment when I discover that all three of my focus groups mentioned one particular service they wanted to see in the library - when it comes to practical aspects like samples or operationalization, I have not considered those. Particularly because my study is based on focus group interviews, I simply intended to let participants talk as much as they wanted to, with me only steering the conversation back on course if it totally veered off.

When trying to answer the question posed in the exercise, I first encountered a difficulty with sampling. Originally, I just thought I'd talk to whoever would talk to me, just like Luker did in her abortion study. But she was lucky, as she said - she didn't need to worry about sampling. In my case, talking to anyone who will talk to me might mean I only talk to women, or only to library patrons, or only to older people, or only to people with post-secondary education. I'd have to do a whole bunch of focus groups to get all these variables accounted for... As for a "tacit control group", I don't know if I have one. My study is narrow - I just want to look at the TPL, and just at the Russian-speaking community. I think I can reasonably generalize and say that other libraries would benefit from the same approach. But one of the main points I'm trying to make is that different ethnic groups have different needs, so generalization is pretty limited...

Also, the operationalization stumped me a bit. As I mentioned, I intended to just let my respondents talk. As I read the chapter, and also taking into account Kline's article about her disastrous interview, I concluded that I need to spend a lot more time actually formulating good questions, trying to predict which way the conversation could veer, and deciding what I would do if it did veer that way. Luker's writing style is so deceptively down-to-earth that when she springs the exercises on me at the end of the chapters, I'm always taken aback by the realization that, actually, research is really hard work.

Monday 8 October 2012

Putting the Cart before the Horse?



** Note: as a late entrant to the course, this post is in lieu of the missing first week post**

I’ve cooked up an (essentially) finished research proposal after a brief talk with Sara, and a few minutes of mulling. As pleased as I am by the accomplishment, I have a nagging doubt that I’ve missed something. It shouldn’t have been that easy. There seems to be a disconnect between the idea of a grand vision for research and its implications for our social existence, brought up several times by both Knight and Luker, and the brass tacks of research mechanics (e.g. SSHRC requirements). 

Sara helped me articulate my interest in dealing with unknown unknowns during emergency situations. I simply made up a small-scale study to access that sort of information. Since it is, by nature, primarily tacit knowledge (you can’t document what you don’t know or can’t imagine), I decided interviews were the best tool, and that Toronto Heavy Urban Search and Rescue (HUSAR, http://www.toronto.ca/wes/techservices/oem/husar/index.htm) would be the best venue. I’ve also tied this into subsequent research avenues (which are where my interest actually lies); this is a necessary pre-study for considering the applications of information systems to support dealing with unknown unknowns. Perhaps my main doubt is that I’m not making a causal claim in the HUSAR study, and that seems to make it come up short.

Tentative Research Question: How do HUSAR first responders and incident commanders respond to unforeseen developments/factors during a critical incident with respect to information production and sharing?

Tuesday 2 October 2012

SSHRC!

I found it very useful to take the SSHRC writing workshop given by the SGS this September. Dr. Jane Freeman, who is writing a book on SSHRC grant applications, shared some very informative points and showed us excellent examples of winning proposals. Here are the main things I learned:
There are 6 questions that a Research Proposal MUST answer:
1. What do you plan to do? (hypothesis, research question or statement)
2. Where does your work fit with other work in your field? (originality of project and research context)
3. How will you go about doing it? (theoretical framework and/or methodology)
4. Will you be able to deliver what you promise? (feasibility of plan and timeline, justification of location)
5. Who will do the work and who will benefit from it? (expertise of researcher and potential significance)
6. Why is it worth doing? (objective, contribution to the field)

It is very important to show that you are a strong student, that you have a relevant track record, and a clear sense of direction. Winning proposals are easy to read. We need to say that we know what research problem in a particular field we will solve. We need to describe details of the project, not try to evaluate it - the committee will evaluate. We should not say we are 'hoping' to do so and so. We need to use details and confident language. It is important to 'give the meat first'.

Professor Sara Grimes covered most of these in the class as well. I find it very helpful to look at the winning proposals as well as at the failed ones. I am also very committed now to getting my work peer-reviewed and I recommend that we all do that. It is so helpful to get comments from the others who are trying to work out the same things as you. It is very rewarding, too.

I hope you guys will find this useful.

Monday 1 October 2012

Running through the Lunt and Livingstone reading was the familiar theme of quantitative vs. qualitative research methods. The article is critical of the belief that focus groups, as a qualitative research method, are useful only as a supplement to quantitative data, and argues that with the right justification and with a mindful/critical approach, focus groups are a useful research method in themselves. As someone who is more inclined to pursue qualitative research methods for my own work, I spent time once again considering the advantages or values of qualitative data.

Lunt and Livingstone argue that the construction of a focus group can make a good theoretical framework in itself - that is, they suggest using the artificiality of the focus group scenario as a lens through which to understand the information you are gathering. I agree that it is important to be aware of, and critical of, the context in which focus group conversation takes place (which is kind of a simulation of a "real" scenario for conversation) - but whether researchers acknowledge the artificiality or not, I still think that artificiality is a bit weird. I would much prefer to observe by participating, or by in some way watching people do things that are natural to them. I realize there are issues that arise with that kind of method, too - I just feel like they are issues I am better prepared to work with or around.

Different Styles of Inquiry


I am interested in face-to-face interviews for my own research. Knight was very helpful in pointing out the advantages and disadvantages of the different directions an interview-style inquiry may take. I believe the style of inquiry depends on the research being conducted. In research that has a predetermined set of goals and objectives, it is important not to dilly-dally with the scope of the questions, but to narrow down the discussion and have a structured plan of inquiry. Starting interviews with fixed questions and then moving into a more open-ended discussion can benefit analytical qualitative research. It is important to give the subjects room to explore their ideas and to discover what is on their minds, but only after they have first provided a picture by answering specific questions, so that the larger group of readers recognizes the topic.
It was also fascinating to read about the Memory Work style of inquiry. The process reminded me of art collaborations. I think it is probably effective for exploring and developing people's ideas, but I question the efficiency of this method in producing specific answers to concrete questions.