The Relationship Between Evidence-Based Practice and the Research Process

This essay concerns itself with the relationship between evidence-based practice and the research process. Section A will address the definition of the research process and the various types and levels of evidence, and will describe how evidence is implemented in clinical practice, highlighting any barriers to implementation. Section B will critique and evaluate two research papers, focusing on their methodology, including research design, data collection tools, sampling methods and approaches to analysis.

Burns & Grove (2003, pg3) state that research is a diligent, systematic inquiry or study that validates and refines existing knowledge and develops new knowledge. A definition, or rather an explanation, of research was offered by Macleod Clark & Hockey (1989), cited by Cormack (2000, pg4), who state that research is an attempt to increase the sum of what is known, usually referred to as “a body of knowledge”, by the discovery of new facts or relationships through a process of systematic scientific enquiry: the research process.

There are two main approaches to research: qualitative and quantitative. Qualitative research is a systematic approach used to describe life experiences and give them meaning. It is concerned with narrative or non-numeric data derived from unstructured interviews or observations (Polit & Beck, 2006). Quantitative research is a formal, objective, systematic process used to describe variables and examine the relationships between them. It involves the use of methods that ultimately result in the collection of numerical data, which can then be analysed statistically (Gerrish & Lacey, 2006).

The process of any activity is what occurs from its beginning to its end. The tasks and activities carried out by the researcher in a bid to answer the research question represent the research process (Parahoo, 2006). Whatever the type of research carried out or the approach used, the research process typically consists of four main components: the identification of the research question; the collection of data; the analysis of data; and finally the dissemination of the findings (Parahoo, 2006, pg105).

There are numerous sources from which health care professionals can acquire their research knowledge; these include traditions, authority, personal experience, intuition and clinical guidelines (Burns & Grove, 2003). Clinical guidelines are described as “systematically developed statements to assist practitioner decisions about appropriate health care for specific clinical circumstances” (Field & Lohr, 1990). Once a clinical guideline is ready for use, two stages facilitate its introduction into practice: dissemination and implementation. Dissemination is generally taken to refer to the method by which guidelines are made available to potential users (Craig & Smyth, 2007, pg256), while implementation is a means of ensuring that users subsequently act upon the recommendations.

Organisations that assist with the synthesis of findings and the dissemination and implementation of evidence into clinical practice include the Scottish Intercollegiate Guidelines Network (SIGN), the National Institute for Health and Clinical Excellence (NICE), the Cochrane Collaboration, NHS Evidence and the Joanna Briggs Institute (JBI) (Cohen et al, 2008). As an example, SIGN guidelines are developed using an explicit methodology based on three core principles: development is carried out by multidisciplinary, nationally representative groups; a systematic review is conducted to identify and critically appraise the evidence; and recommendations are explicitly linked to the supporting evidence (SIGN, 2008).

Evidence that some research designs are more powerful than others has given rise to the notion of a hierarchy of evidence (Summerskill, 2000). The higher a methodology appears in the hierarchy, the more likely its outcomes are to represent objective reality (Johnston, 2005). SIGN utilise a grading and recommendation system (see appendix 1) based on the quality and weight of evidence (Harbour & Miller, 2001). However, Hunink (2004) argues that the hierarchy of evidence, which is so influential in evidence-based practice (EBP), may not be justified and can in fact be misleading. He states that it can result in too much focus being directed toward the quantitative aspects of clinical problems and may have a negative influence on the caregiver’s role.

Grol (1992) suggests that when designing an implementation strategy, it is necessary to be aware of barriers to behaviour change. Implementing guidelines is not simple or straightforward; difficulties often centre on the need for personal, organisational or cultural change (Grol & Grimshaw, 2003).

Haynes & Haines (1998) identify barriers that practitioners face when implementing evidence-based practice and also suggest possible solutions to them. Examples of barriers include: staff not knowing where to access the best evidence and guidelines; organisational barriers; lack of further training and of time when practising; and low patient adherence to new treatment options. Examples of solutions to these barriers are: encouraging the use of NHS Evidence, SIGN and NICE, as these organisations provide easily accessible, integrated evidence and guidelines relating to patient care; improving the effectiveness of educational and quality improvement programmes for practitioners; developing more effective time management strategies; and encouraging patients to follow health care advice (Haynes & Haines, 1998).

The research process is a systematic means of developing evidence. The McMaster University Evidence-Based Medicine Group (1996) defines evidence-based practice as the collection, interpretation and integration of valid, important and applicable patient-reported, clinician-observed and research-derived evidence. However, the most widely used definition of EBP comes from Sackett et al (1996), cited by Parahoo (2006, pg447): using the most up-to-date evidence when making decisions about the care of individual patients, integrating clinical expertise with the best available external evidence from systematic research.

Nurses are now required to develop EBP, integrating research evidence, clinical expertise and interpretations of patients’ needs and perspectives into decision making about care (Craig & Smyth, 2007). The Nursing and Midwifery Council (NMC) states that staff need to take responsibility for their own practice and be proactive in keeping up to date, to help ensure the delivery of safe and efficient practice (NMC, 2008).

Effective EBP is the responsibility of all health care providers and ensures that patients are given the best available care and treatment. As indicated above, a vast range of resources is easily accessible, including clinical practice guidelines (e.g. SIGN, NICE) and systematic reviews (e.g. the Cochrane Collaboration, JBI), so there can be no justification for evidence not being implemented in practice. Nursing research needs to be maintained and increased so that it continues to contribute positively to patient care.

Section B

Critique 1

Edwards et al (2005)

This research article reports a pilot study, used by the researchers to refine their technique and to analyse their strategies before a full trial. As the results of the pilot study were expected to provide statistical data, a quantitative research approach was adopted. Essentially, quantitative research is used to measure concepts or variables objectively and to examine, by numerical and statistical procedures, the relationships between them (Parahoo, 2006, pg49-50).

A randomised controlled trial (RCT) was used in an attempt to answer the research question. RCTs are described by Newell & Burnard (2006, pg164) as a special kind of experiment which investigates the effectiveness of therapeutic interventions with patients. Extraneous variables should be kept to a minimum in order for the conclusions drawn about the researched phenomenon to be legitimate (Burns & Grove, 2003). The researchers failed to mention any problematic variables encountered during the study; these should have been highlighted in their original research design (Burns & Grove, 2003).

The authors applied inclusion and exclusion criteria during recruitment so that participants’ physiological condition fell within the limits of the study; this is useful to ensure similarity in ulcer type and pathophysiology. It was noted that the participants for the pilot study were recruited via an advert in a local newspaper; strictly speaking this produces a self-selected convenience sample rather than a stratified random sample, which Parahoo (2006) describes as separating the units in the sampling frame into layers according to the variables the researcher believes are important and drawing a sample from each layer using the simple random sampling method. If a full trial were to be undertaken, a quota sample would be more constructive (Polit & Beck, 2006).

Recruiting participants via a local newspaper may introduce bias (Parahoo, 2006), as the newspaper is distributed only to specific postcodes and sections of the population would therefore be omitted from the trial before it had begun. To limit allocation bias, participants were randomly allocated by computer to either an intervention group or a control group. As this process is outside the control of the researchers, the possibility of selection bias is greatly reduced (Newell & Burnard, 2006).
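To illustrate the principle of computer-generated random allocation, a minimal sketch in Python is shown below; the function name, seed and participant identifiers are invented for illustration and are not taken from Edwards et al (2005).

# Minimal sketch of computer-generated random allocation to two trial arms.
# Hypothetical example only - not the actual allocation procedure used in the trial.
import random

def allocate(participant_ids, seed=2005):
    """Randomly assign each participant to the intervention or control group."""
    rng = random.Random(seed)  # a recorded seed keeps the allocation auditable
    return {pid: rng.choice(["intervention", "control"]) for pid in participant_ids}

ids = [f"P{n:02d}" for n in range(1, 34)]  # 33 participants, as in the pilot study
print(allocate(ids))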

Data were collected using direct and indirect measures of demographic information, general health status, ulcer status, functional ability, levels of pain and quality of life. Direct measures included measurement of all ulcers using the dot point method and the PUSH scale in order to track the progress of ulcer healing. Systematically analysing the ulcers using a standardised tool minimises subjectivity in measurement (Burns & Grove, 2003). Observation was also used to record the presence of oedema, eczema, infection and recurrence. An educational package was provided to the nursing care team prior to the trial commencing, which supports reliability and provides consistent treatment to both the control and intervention groups. Other than the PUSH scale, however, there is limited information regarding data collection, which may in turn make it difficult to reproduce the study.

The intervention group also had social and goal-setting components included within their trial. This could introduce confounding variables, which are described as variables that intervene between the action of the independent variable on the dependent variable, thereby confounding our ability to be confident in the existence of a cause and effect relationship (Newell & Burnard, 2006, pg133).

The idea of a pilot study is to allow the researchers to establish whether all the respondents understand the questions in the same manner, whether the questions are suitable for that population, whether the questions are formatted correctly and how relevant the questions are (Parahoo, 2006). This can support reliability, which can be described as the extent to which an entity is measured in a consistent way (Newell & Burnard, 2006). A pilot study is also useful for ascertaining feasibility and testing the validity and reliability of the research design without incurring the cost of a larger RCT (Polit & Beck, 2006). To be reliable in a quantitative study, a measure needs to be repeatable, giving similar responses in the same conditions, and reproducible, giving similar responses in different conditions (Newell & Burnard, 2006). The study can be considered reliable as it addressed the research question by drawing a comparison between the two participant groups (Burns & Grove, 2003).

The study was granted ethical approval by St Luke’s Nursing Service and Queensland University of Technology; however, the authors failed to state what consent was given by the participants or what information was provided to them before the study commenced. It is good practice to notify the reader that participants were given information regarding informed consent and confidentiality, and that participation was voluntary with the option to withdraw at any time (Tod et al, 2009).

The researchers analysed the data using an appropriate statistical software package, and various types of test were used. A t-test was used to identify any differences between the control group and the intervention group that could influence the study results. Gerrish & Lacey (2006) advise that the most suitable test for this type of research is the chi-squared test, and the authors adopted this test within the study. As the sample size was small, with only thirty-three participants, the researchers used triangulation, collating data from the different stages of the study in order to represent the phenomenon being investigated accurately (Gerrish & Lacey, 2006).
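As a rough illustration of the tests mentioned above, the sketch below shows how an independent-samples t-test and a chi-squared test might be run in Python; the data values are invented for the example and are not the trial data.

# Illustrative sketch of an independent-samples t-test and a chi-squared test.
# The numbers below are invented example data, not figures from the trial.
from scipy import stats

# Example outcome scores for two groups
control = [12, 14, 11, 15, 13, 12, 16, 14]
intervention = [9, 10, 8, 11, 9, 12, 10, 9]

t_stat, t_p = stats.ttest_ind(control, intervention)  # compares the group means
print(f"t = {t_stat:.2f}, p = {t_p:.3f}")

# 2x2 table of healed / not-healed counts for each group
contingency = [[10, 7],   # control:      healed, not healed
               [14, 2]]   # intervention: healed, not healed
chi2, chi_p, dof, expected = stats.chi2_contingency(contingency)
print(f"chi-squared = {chi2:.2f}, p = {chi_p:.3f}")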

The results of the study were presented via a graph, which provides data that are easy to interpret (Parahoo, 2006). More in-depth results were provided as narrative text, a style more usually found in qualitative research, together with statistical information. The results were not generalisable due to the small size of the study (Parahoo, 2006). A literature review was carried out by the authors and comparisons were made between their findings and past studies; this makes the study more reliable, as the other studies were found to have produced similar results (Cormack, 2000).

It is noted that the authors identified limitations within their study, which included potential issues with consistency due to the higher than normal staff turnover at the commencement of the trial. Clients with diabetes were excluded from the trial, so any benefit demonstrated in the study may not be applicable to this group of patients. Overall, the research design was good for this type of trial; as the study was a pilot, the sample size is not a limitation because the main aim was to test the hypothesis and study design (Polit & Beck, 2006). The use of an RCT reduces the risk of bias from the authors and gives the study more credibility (Polit & Beck, 2006). After critiquing this article it is clear that, with patient participation, healing rates could be improved. In order to justify this, further studies would need to be conducted to make the results generalisable and reproducible.
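For a subsequent full trial, the required sample size would normally be estimated with a power calculation. A minimal sketch is shown below; the effect size, significance level and power used are conventional assumptions, not figures reported by Edwards et al (2005).

# Illustrative power calculation for a full-scale trial.
# Effect size, alpha and power are assumed values, not taken from the pilot paper.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # assumed medium effect size
                                    alpha=0.05,       # two-sided significance level
                                    power=0.80)       # conventional 80% power
print(f"Approximately {n_per_group:.0f} participants would be needed per group")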

Critique 2

Ebbeskog & Emami (2005)

This research article focuses on older patients’ experience of dressing changes on venous leg ulcers. The authors implemented an interpretive phenomenological design. Phenomenology focuses on individuals’ interpretation of their experiences and the ways in which they express them (Parahoo, 2006, pg68). The research approach used in this article is qualitative research, described by Parahoo (2006, pg63) as an umbrella term for a number of diverse approaches which seek to understand, by means of exploration, human experience, perceptions, intentions and behaviours. This approach was appropriate for the study because the patients’ “lived experience” is analysed.

The authors clearly identified their use of a purposive sampling plan to obtain a sample of older patients with venous leg ulcers. Purposive sampling involves selecting people on the basis that they are likely to have things to say relevant to the research aim (Newell & Burnard, 2006). This type of sampling was suitable for the study, as the aim was to obtain the experience of a specific group, namely people living with venous leg ulcers. Burns & Grove (2003) state that this type of sampling has been criticised because it is difficult to evaluate the precision of the researcher’s judgement; however, the authors used inclusion criteria to determine their target population, and the rationale provided was that these subjects were essential for collecting the data needed for the study.

Research interviews were carried out in the form of audio-taped dialogue, with the authors using semi-structured interviews as their method of data collection. This approach encourages the interview to flow more freely: one topic often leads seamlessly into another until the interviewer has covered all avenues (Newell & Burnard, 2006). As the interviews were recorded, no data were lost. The interviews included open questions, which was appropriate as it allows participants to express their own experience freely (Parahoo, 2006). The participants’ interviews lasted between 45 and 120 minutes, which was apt; Parahoo (2006, pg343) recommends that qualitative interviews last at least 30 minutes. Participants were interviewed either in the clinic or in their home environment, giving them freedom of choice.

Trustworthiness and credibility were supported by the fact that two researchers carried out the analysis, and the use of the word “we” indicates they reached similar outcomes. A proven, structured method was adopted by the researchers; such a method is designed to discourage bias and assures the reader of the trustworthiness of the authors (Burns & Grove, 2003). Bias is any influence or action in a study that distorts the findings or slants them away from the true or expected (Burns & Grove, 2003). As the study concentrated on a specific target population, a limitation is that it would be difficult to repeat, and there could therefore be an issue with generalisation (Cormack, 2000).

It was noted that formal ethical approval was granted by the Karolinska Institute in Sweden, which is to the betterment of the paper. If the study were to be carried out in the United Kingdom, approval would need to be sought from an NHS Research Ethics Committee (REC) along with NHS research governance approval (Tod et al, 2009). Participants were informed that the study was voluntary and anonymous. There was no mention of informed consent; however, patients were made fully aware of the purpose of the study and the data collection method that would be used, which is good practice (Parahoo, 2006).

The data were broken down using thematic analysis. This is useful for organising and structuring the data that accumulate from the interviews (Newell & Burnard, 2006). As themes emerged, the authors interpreted and summarised the important data (Burns & Grove, 2003). This method of analysis clearly demonstrates adherence to the theoretical influence of the study.
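Purely as an illustration of the mechanics of grouping coded interview excerpts under themes, a toy sketch is shown below; the theme labels and quotes are invented and are not taken from Ebbeskog & Emami (2005).

# Toy illustration of grouping coded interview excerpts under themes.
# Theme labels and quotes are invented, not drawn from the actual study.
from collections import defaultdict

coded_excerpts = [
    ("restricted daily life", "I cannot walk to the shops any more."),
    ("the dressing change encounter", "The nurse always explains what she is doing."),
    ("restricted daily life", "The bandage makes my shoe feel too tight."),
]

themes = defaultdict(list)
for theme, quote in coded_excerpts:
    themes[theme].append(quote)

for theme, quotes in themes.items():
    print(theme)
    for q in quotes:
        print("  -", q)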

Results were presented in the form of three key themes, described in a descriptive manner. Each theme was highlighted by a paradigm case with direct quotes from the interviews to support it; this assisted the reader in gaining a clear understanding of the themes (Burns & Grove, 2003). The three themes were categorised from the data that emerged from the interviews (Newell & Burnard, 2006) and represented the lived experience of the patients, thus answering the research question. The authors conducted a literature review before the study took place; this underlines that previous research findings were unable to provide an answer to their research question.

The authors did not highlight any limitations of their study. This is unusual, as qualitative research, like other approaches, has its limitations (Parahoo, 2006), and the omission may call the trustworthiness of the study into question. Although a small sample size is acceptable in qualitative studies (Gerrish & Lacey, 2006), the sample had an unequal ratio of males to females, which could be a limitation. Qualitative research is also limited in respect of generalisability, in that the information cannot be applied to situations other than the ones dealt with in the research (Newell & Burnard, 2006). As the interviews were interpreted by two authors, the analysis has more credibility, since there would have been less chance of key information being omitted. The authors could have built on the study by developing the results of the qualitative study into a quantitative design, thus expanding the study to a larger population.

The research paper was well organised and focused on the importance of the topic; the authors identified that few studies had been undertaken on patients’ experience of venous ulcer treatment. It appeared to follow the research process and explained that process in depth. The authors also used an easy to understand theoretical framework.

As a consequence of critiquing this research paper, an insight has been gained into the actual lived experiences of people who undergo these dressing changes. It may help professionals better understand their patients’ experience, and this in turn will help encourage good quality care. It indicates that it is important for nurses to be research aware in order to provide their patients with the best possible care.

Word Count

3191

References

Burns, N. & Grove, S.K. (2003) Understanding Nursing Research. Building evidence based practice. 3rd Ed. St Louis: Saunders Elsevier.

Cohen, D.J., Crabtree, B.F., Etz, R.S., Balasubramanian, B.A., Donahue, M.D., Leviton, L.C., Clarke, E.C., Isaacson, N.F., Stange, K.C., & Green L.W. (2008). Fidelity Versus Flexibility – Translating Evidence-Based Research into Practice. American Journal of Preventive Medicine. 35 (5S):S381-S389

Cormack, D. (2000) The Research Process in Nursing. 4th Ed. Oxford: Blackwell Publishing.

Craig, J.V., & Smyth, R.L., (2007). The Evidence Based Practice Manual for Nurses. 2nd Ed. Edinburgh: Churchill Livingstone

Ebbeskog, B., Emami, A. (2005) Older patients’ experience of dressing changes on venous leg ulcers: more than just a docile patient. Journal of Clinical Nursing 14: 1223-1231

Edwards, H., Courtney, M., Finlayson, K., Lewis, C., Lindsay, E., Dumble, J. (2005) Improved healing rates for chronic venous leg ulcers: Pilot study results from a randomized controlled trial of a community nursing intervention. International Journal of Nursing Practice. 11: 169-176

Field, M.J., & Lohr, K.N., (1990) Clinical practice guidelines: directions for a new program. Washington D.C.: National Academy Press

Gerrish, K. & Lacey, A., (2006) The Research Process in Nursing. 5th Ed. Oxford: Blackwell Publishing

Grol, R. (1992) Implementing guidelines in general practice care. Quality in Health Care 1:184-191

Grol, R., & Grimshaw, J. (2003) From best evidence to best practice: effective implementation of change in patients’ care. Lancet. 362(939):1170.

Harbour, R., & Miller, J., (2001) A new system for grading recommendations in evidence based guidelines. British Medical Journal. 323 (7308): 334-336

Haynes, B. & Haines, A. (1998) Getting research findings into practice: barriers and bridges to evidence based clinical practice. British Medical Journal. 317:273-276.

Hunink, M. G. M. (2004). Does evidence based medicine do more good than harm? British Medical Journal. 329:1051.

Johnston, L. (2005) Critically appraising quantitative evidence. In: Melnyk, B.M., Fineout-Overholt, E (eds) Evidence based practice in nursing and health care: a guide to best practice. Philadelphia: Lippincott Williams and Wilkins.

MacLeod Clark, J., & Hockey, L., (1989) Further Research for Nursing. In: Cormack, D. (2000) The Research Process in Nursing. 4th Ed. Oxford: Blackwell Publishing.

McMaster University Evidence-Based Medicine Group, (1996) Evidence-based medicine: The new paradigm. Available from: http://www.hiru.mcmaster.ca/ebm. [Accessed 13 March 2010].

Newell, R., & Burnard, P., (2006) Research for Evidence-Based Practice. Oxford: Blackwell Publishing.

Nursing and Midwifery Council, (2008) The Code – Standards of Conduct, Performance and Ethics for Nurses and Midwives. NMC: London

Parahoo, K., (2006) Nursing Research – Principles, Processes and Issues. 2nd Ed. London: Palgrave MacMillan.

Polit, D.F. & Beck, C.T. (2008) Nursing Research. Generating and assessing evidence for nursing practice. 8th Ed. Philadelphia: Lippincott Williams & Wilkins.

Sackett, D.L., Rosenberg, W.M.C., Muir Gray, J.A., Haynes, R.B., Richardson, W.S. (1996) Evidence Based Medicine. In: Parahoo, K., (2006) Nursing Research – Principles, Processes and Issues. 2nd Ed. London: Palgrave MacMillan.

Scottish Intercollegiate Guidelines Network (2008) SIGN 50 A Guidelines Developer’s Handbook. SIGN Executive, Edinburgh available from http://www.sign.ac.uk/guidelines/fulltext/50/section10.html [accessed 13 March 2010]

Summerskill, W.S.M. (2000) Hierarchy of evidence. In: McGovern, D.B.P., Valori, R.M., Summerskill, W.S.M. & Levi, M. (eds) Evidence-based medicine. Oxford: BIOS Scientific Publishers.

Tod, A.M., Allmark, P., & Alison, A. (2009) A practical guide to attaining research ethics approval in the UK. Nursing Standard. 23 (25): 35-41

Appendix 1


Annex B: Key to evidence statements and grades of recommendations

Levels of evidence

1++  High quality meta-analyses, systematic reviews of RCTs, or RCTs with a very low risk of bias
1+   Well-conducted meta-analyses, systematic reviews, or RCTs with a low risk of bias
1-   Meta-analyses, systematic reviews, or RCTs with a high risk of bias
2++  High quality systematic reviews of case control or cohort studies; high quality case control or cohort studies with a very low risk of confounding or bias and a high probability that the relationship is causal
2+   Well-conducted case control or cohort studies with a low risk of confounding or bias and a moderate probability that the relationship is causal
2-   Case control or cohort studies with a high risk of confounding or bias and a significant risk that the relationship is not causal
3    Non-analytic studies, e.g. case reports, case series
4    Expert opinion

Grades of recommendations

A    At least one meta-analysis, systematic review, or RCT rated as 1++ and directly applicable to the target population; or a body of evidence consisting principally of studies rated as 1+, directly applicable to the target population, and demonstrating overall consistency of results
B    A body of evidence including studies rated as 2++, directly applicable to the target population, and demonstrating overall consistency of results; or extrapolated evidence from studies rated as 1++ or 1+
C    A body of evidence including studies rated as 2+, directly applicable to the target population and demonstrating overall consistency of results; or extrapolated evidence from studies rated as 2++
D    Evidence level 3 or 4; or extrapolated evidence from studies rated as 2+

Good practice points

✓    Recommended best practice based on the clinical experience of the guideline development group

Source www.sign.ac.uk

 
