Nursing
Guidelines for Critical Analysis of Published Research Article
Evaluating Titles
- Is the title sufficiently specific?
- Does the title indicate the nature of the research without describing the results?
- Has the author avoided using a “yes-no” question as a title?
- If there is a main title and a subtitle, do both provide important information about the research?
- Are the primary variables referred to in the title?
- Does the title indicate what types of people participated?
- If the title implies causality, does the method of research justify it?
- Has the author avoided using jargon and acronyms that might be unknown to his or her audience?
- Overall, is the title effective and appropriate?
Evaluating Abstracts
- Is the purpose of the study referred to or at least clearly implied?
- Does the abstract highlight the research methodology?
- Has the researcher omitted the titles of measures (except when these are the focus of the research)?
- Are the highlights of the results described?
- Has the researcher avoided making vague references to implications and future research directions?
- Overall, is the abstract effective and appropriate?
Evaluating Introductions and Literature Reviews
- Does the researcher begin by identifying a specific problem area?
- Does the researcher establish the importance of the problem area?
- Is the introduction an essay that logically moves from topic to topic?
- Has the researcher provided conceptual definitions of key terms?
- Has the researcher indicated the basis for “factual” statements?
- Do the specific research purposes, questions, or hypotheses logically flow from the introductory material?
- Overall, is the introduction effective and appropriate?
- Is current research cited?
Evaluating Samples When Researchers Generalize
- Was random sampling used?
- If random sampling was used, was it stratified?
- If the randomness of a sample is impaired by the refusal to participate by some of those selected, is the rate of participation reasonably high?
- If the randomness of a sample is impaired by the refusal to participate by some of those selected, is there reason to believe that the participants and non-participants are similar on relevant variables?
- If a sample from which a researcher wants to generalize was not selected at random, is it at least drawn from the target group for the generalization?
- If a sample from which a researcher wants to generalize was not selected at random, is it at least reasonably diverse?
- If a sample from which a researcher wants to generalize was not selected at random, does the researcher explicitly discuss this limitation?
- Has the author described relevant demographics of the sample?
- Is the overall size of the sample adequate?
- Is there a sufficient number of participants in each subgroup that is reported on separately?
- Has informed consent been obtained?
- Are there probable biases in sampling (e.g., volunteers, high refusal rates, institution population atypical for the country at large, etc.)?
- Overall, is the sample appropriate for generalizing?
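To make the random and stratified sampling questions above concrete, here is a minimal Python sketch of stratified random sampling. The `stratified_sample` helper and the 60/40 ICU/ward roster are entirely hypothetical illustrations, not drawn from any study:

```python
import random

def stratified_sample(population, strata_key, fraction, seed=0):
    """Draw the same fraction at random from each stratum (e.g., hospital unit),
    so every subgroup is represented in proportion to its size."""
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical roster: 60 ICU nurses and 40 ward nurses.
roster = ([{"id": i, "unit": "ICU"} for i in range(60)]
          + [{"id": i, "unit": "ward"} for i in range(60, 100)])
picked = stratified_sample(roster, lambda p: p["unit"], 0.10)
```

A 10% stratified draw from this roster yields 6 ICU and 4 ward nurses, mirroring the population's proportions; a simple random sample of 10 could, by chance, miss one unit entirely.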
Evaluating Samples When Researchers Do Not Generalize
- Has the researcher described the sample/population in sufficient detail?
- For a pilot study or developmental test of a theory, has the researcher used a sample with relevant demographics?
- Even if the purpose is not to generalize to a population, has the researcher used a sample of adequate size?
- If a purposive sample has been used, has the researcher indicated the basis for selecting individuals to include?
- If a population has been studied, has it been clearly identified and described?
- Has the researcher obtained informed consent?
- Are there probable biases in sampling (e.g., volunteers, high refusal rates, institution population atypical for the country at large, etc.)?
- Overall, is the description of the sample adequate?
Evaluating Instrumentation
- Have the actual items, questions, and/or directions (or, at least, a sample of them) been provided?
- Are any specialized response formats and/or restrictions described in detail?
- For published instruments, have sources where additional information can be obtained been cited?
- When delving into sensitive matters, is there reason to believe that accurate data were obtained?
- Have steps been taken to keep the instrumentation from obtruding on and changing any overt behaviors that were observed?
- If the collection and coding of observations is highly subjective, is there evidence that similar results would be obtained if another researcher used the same measurement techniques with the same group at the same time?
- If an instrument is designed to measure a single unitary trait, does it have adequate internal consistency?
- For stable traits, is there evidence of temporal stability?
- When appropriate, is there evidence of content validity?
- When appropriate, is there evidence of empirical validity?
- Is the instrumentation adequate in light of the research purpose?
- Overall, is the instrumentation adequate?
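The internal-consistency question above is typically answered with Cronbach's alpha. A minimal pure-Python sketch (the `cronbach_alpha` helper and the four-respondent scores are hypothetical examples):

```python
def cronbach_alpha(item_scores):
    """item_scores: one list per respondent, one score per item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores);
    values near 1 indicate the items measure a single unitary trait."""
    k = len(item_scores[0])   # number of items
    n = len(item_scores)      # number of respondents

    def variance(xs):         # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items: every respondent gives the same score to all three.
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
alpha = cronbach_alpha(scores)
```

A common rule of thumb treats alpha of roughly 0.70 or higher as adequate for a unitary scale, though the acceptable threshold depends on how the instrument will be used.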
Evaluating Experimental Procedures
- If two or more groups are compared, were individuals assigned at random to the groups?
- If two or more comparison groups were not formed at random, is there evidence that they were initially equal in important ways?
- If only a single participant or a single group is used, have the treatments been alternated?
- Are the treatments described in sufficient detail?
- If the treatments were administered by people other than the researcher, were these people properly trained?
- If the treatments were administered by people other than the researcher, was there a check to see if they administered the treatments properly?
- If each treatment group had a different person administering a treatment, has the researcher tried to eliminate the “personal effect”?
- Except for differences in the treatments, were all other conditions the same in the experimental and control groups?
- If necessary, did the researchers disguise the purpose of the experiment from the participants?
- Is the setting for the experiment “natural”?
- Has the researcher used politically acceptable and ethical treatments?
- Has the researcher distinguished between random selection and random assignment?
- Overall, was the experiment properly conducted?
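The distinction the checklist draws between random selection (who enters the sample) and random assignment (which group each recruited participant lands in) can be sketched as follows; the `random_assign` helper is a hypothetical illustration:

```python
import random

def random_assign(participants, groups=("treatment", "control"), seed=0):
    """Random assignment: shuffle the already-recruited sample, then deal
    participants into groups round-robin. This is distinct from random
    selection, which governs who was recruited in the first place."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

# Twenty recruited participants (however selected) split into equal groups.
assignment = random_assign(list(range(20)))
```

Random assignment supports causal claims about treatment effects within the sample; random selection supports generalizing those findings to the population sampled. An experiment can have either, both, or neither.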
Evaluating Methods
- What is (are) the research hypothesis (hypotheses)?
- Is the method described in enough detail that replication is possible without further information?
- What data were collected? Were the validity and reliability of the instrument addressed?
- What was the level of measurement of each distinct category of data?
- What was (were) the independent variable(s)?
- What was (were) the dependent variable(s)?
- What statistical tests were used in the procedure?
- Were the statistics used with appropriate assumptions fulfilled by the data (e.g., normality of distribution for parametric techniques)?
- What was the null hypothesis for each statistical test?
- What were the results of each statistical test?
- Did the statistical procedures answer the research hypotheses or questions?
- Have statistical significance levels been accompanied by an analysis of practical significance?
- Are the figures and tables (a) necessary and (b) self-explanatory? Large tables of non-significant differences, for example, should be eliminated if the few obtained significances can be reported in a sentence or two in the text. Could several tables be combined into a smaller number?
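The contrast between statistical and practical significance raised above can be illustrated with a short sketch: Welch's t statistic (statistical significance) alongside Cohen's d (effect size, a common index of practical significance). The helpers and the two samples are hypothetical; with large samples, a tiny mean difference can produce a large t yet a small d:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(a, b):
    """Effect size: standardized mean difference using the pooled SD."""
    na, nb = len(a), len(b)
    pooled = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                  / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    return (mean(a) - mean(b)) / sqrt(stdev(a) ** 2 / len(a)
                                      + stdev(b) ** 2 / len(b))

# Two large hypothetical samples whose means differ by only 0.05 units.
a = [10.00 + 0.1 * (i % 5) for i in range(500)]
b = [10.05 + 0.1 * (i % 5) for i in range(500)]
```

Here `welch_t(a, b)` exceeds 5 in magnitude (highly statistically significant), while `cohens_d(a, b)` stays below 0.4 (a small-to-moderate effect by Cohen's conventional benchmarks), which is exactly the situation in which a report should note that a significant difference is nonetheless small.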
Evaluating Results
- Is the results section a cohesive essay?
- Does the researcher refer back to the research hypotheses, purpose, or questions originally stated in the introduction?
- When there are a number of statistics, have they been presented in table form?
- If there are tables, are their important aspects discussed in the narrative of the results section?
- Have the researchers presented descriptive statistics before presenting the results of inferential tests?
- If any differences are statistically significant and small, have the researchers noted that they are small?
- Have appropriate statistics been selected?
- Overall, is the presentation of the results adequate?
Evaluating Discussion
- In long articles, do the researchers briefly summarize the purpose and results at the beginning of the discussion?
- Do the researchers acknowledge their methodological limitations?
- Are the results discussed in terms of the literature cited in the introduction?
- Have the researchers avoided citing new references in the discussion?
- Are specific implications discussed?
- Are suggestions for future research specific?
- Have the researchers distinguished between speculation and data-based conclusions?
- Overall, is the discussion effective and appropriate?
Putting It All Together
- Have the researchers selected an important problem?
- Were the researchers reflective?
- Is the report cohesive?
- Does the report extend the boundaries of our knowledge on a topic?
- Are any major methodological flaws unavoidable or forgivable?
- Is the research likely to inspire additional research?
- Is the research likely to help in decision making (either of a practical or theoretical nature)?
- All things considered, is the report worthy of publication in an academic journal?
- Would you be proud to have your name on the report as a coauthor?