Nursing assessment example

Assessing the knowledge

If we are to develop the nursing profession, we must ask good questions and search effectively for information that can help us answer them. What good questions are, and how we can find research-based information, were the subjects of two previous articles in this series.

A third important prerequisite for a more evidence-based practice is to appraise the information we find in a critical and constructive manner. This is a broad subject, and we can present here only a rough, simplified framework and some tools. For those who want to learn more about critical appraisal of research-based information, there is ample literature available.

What kind of information?
Depending on the question we asked in the first place, we look for different types of information, often in the form of articles. Ideally, we want to find a systematic review that summarizes all available research shedding light on a particular question. A guideline for practice is another form of summarized knowledge. Guidelines often address several questions simultaneously, for example how to prevent, diagnose and treat urinary tract infections. Guidelines should preferably be based on systematic reviews, but often they are not, and then it can be difficult to assess their knowledge base. If we do not find systematic reviews, or guidelines based on them, it may be appropriate to look for individual studies. If the question is about the effect of prevention or treatment, you need randomized controlled trials. If the question is about patients' experiences, you need qualitative studies.

Getting started
It is impossible to appraise an article whose purpose is unclear. The study must state a purpose that is a) clearly defined, b) understandable to you, and c) relevant and important to you. If these conditions are not met, the article can go straight into the trash.

If you understand what the article is about and think it addresses an important issue, go ahead and read it with a critical eye. A critical appraisal involves determining:

Whether the message of the article is true.
What the message is.
Whether you should let yourself be influenced by the message.
Below we review certain types of studies and give you some tools that make it possible to assess their credibility, the size of the effect, and their relevance. We have also created a kind of primer on how to evaluate an article quickly.

Systematic reviews and other review articles
A systematic review is a study in which the authors have searched "the whole world" to find all studies that address the same question. The word systematic implies that the authors have worked in an orderly and transparent way, showing how they proceeded. An ordinary review article often does not explain how the included studies were found. Then it is difficult to know whether the authors were thorough enough in their search for relevant documentation.

Such a study can be read critically by asking the following questions:
1. Is a precise purpose given for the review? And is this what you need?
2. Have the authors searched thoroughly for relevant studies? Sources and search strategy must be stated so that the reader can judge them.
3. Do the inclusion criteria match the purpose? Have the authors included and excluded studies according to the focus of the review?
4. Have the authors assessed the methodological quality of the studies? And have they done so in a systematic manner that is clearly described?
5. Are the results consistent from study to study? A high degree of variability calls for caution in the conclusions.

Randomized studies
A randomized controlled trial is the most reliable design for determining the effect of an intervention. The most important question when critically appraising such a study is whether the participants were randomly allocated to the intervention group or the control group. Random allocation can, in principle, be achieved by tossing a coin (in practice, by using a computer program). This is called randomization. Randomization is important for one reason only: the two groups to be compared should be as similar as possible with regard to anything that might affect the outcome being measured in the study. Random allocation is a very effective way to obtain similar groups. If one studies the effect of a painkiller, it is unfortunate if everyone with severe pain ends up in one group and everyone with moderate pain in the other. We want a mixture of pain levels in the two groups, and roughly as many women and men, young and old, and the same diagnoses, and so on. That is, we want the groups to be similar with regard to what we already know influences the outcome (prognostic factors or confounding factors). The clever thing about randomization is that it also makes the groups similar with regard to factors we must expect to influence the outcome but do not know about.

Another important question is whether the randomization was concealed. This means that those who carried out the trial did not know which treatment the next patient would receive. If you know that the next patient will receive the active treatment, it is easy, consciously or unconsciously, to steer the sickest patients toward one of the groups. It is usually easy to determine whether a trial was randomized, as this is stated in the title or abstract. It is often harder to determine whether the randomization was properly executed. The English term "concealed random allocation" covers what we want. A frequently used implementation is that the person allocating the patient to intervention or control dials a central randomization telephone, where a computer program assigns the group. Less effective methods are allocation according to the principle "every other", by birth date, or by file number (it can be tempting not to stick strictly to the scheme). The latter are often called quasi-randomization.
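To make the idea concrete: the text notes that randomization is done in practice with a computer program. The following Python sketch is purely illustrative, not taken from the article; the function name and patient labels are invented, and real trials use concealed, centralized allocation so that recruiters cannot predict the next assignment.

```python
import random

def randomize(participants, seed=None):
    """Randomly allocate participants to an intervention or a control group.

    Illustrative sketch only: a real trial would conceal the allocation
    sequence from the people recruiting patients.
    """
    rng = random.Random(seed)
    shuffled = participants[:]          # copy, leave the input untouched
    rng.shuffle(shuffled)               # random order decides group membership
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

# Invented example: 20 patients split into two groups of 10.
groups = randomize([f"patient-{i}" for i in range(1, 21)], seed=42)
print(len(groups["intervention"]), len(groups["control"]))  # 10 10
```

Because group membership comes from the program's random number generator, patients with severe and moderate pain cannot systematically end up in different groups.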

The next checkpoint for a randomized trial is to see whether the authors have accounted for all patients in the trial, and whether the patients were analyzed in the group they were originally allocated to. If people drop out along the way, we know nothing about them; their outcomes could potentially alter the study's conclusion. If the dropout rate is large (a possible threshold for "large" is more than 20 percent), there is reason to be skeptical.
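The 20 percent threshold mentioned above is simple arithmetic. As a small illustration only (the numbers below are invented, not from the article), it can be computed like this in Python:

```python
def dropout_rate(randomized, analyzed):
    """Share of randomized participants not accounted for in the analysis."""
    return (randomized - analyzed) / randomized

# Invented example: 150 patients randomized, only 110 analyzed at the end.
rate = dropout_rate(150, 110)
print(f"{rate:.1%}")  # 26.7% -- above a 20 percent threshold, so be skeptical
```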

If you can answer yes to all the questions below, you have a really good study. What about studies where we can answer yes to some questions but have to answer no to others? There is no standard answer, and you must use your own judgment. We can look at the consequences: if there is much at stake (such as life and death), it is important to be reasonably certain that patients will actually benefit from the intervention. Often it helps to see the results in the context of other studies of the same intervention (see the section on systematic reviews above).

Randomized studies can be read critically by asking the following questions:
1. Is the purpose of the study clearly defined? Is a randomized trial a suitable design to answer the question?
2. Was the sample allocated to intervention and control groups using a satisfactory randomization procedure?
3. Were the groups treated equally, apart from the intervention being evaluated?
4. Were participants, clinicians and outcome assessors blinded to group allocation? It is not always possible to blind the participants or those delivering the intervention, but it is almost always possible to blind the outcome assessors.
5. Were all participants accounted for at the end of the study?
6. Does the intervention make sense? Is it likely that the intervention could affect the outcome (consider dose and duration)? Is the intervention acceptable to users?
7. Was the outcome measured with reliable measurement methods?
8. What do the results say? How precise are they?
9. Are the authors' conclusions in line with the results?
10. Can the results be transferred into practice?

Cohort and case-control studies
When we want information about causes of disease, possible harms of a treatment, or prognosis, we must often turn to what are called observational studies. Generally, this means cohort studies, case-control studies, and occasionally descriptions of single patients or case series. A cohort study (cohort = group) follows a group of people, such as healthy people, over time and observes, for example, who becomes ill. The researchers try to assess what characterizes those who become ill; for instance, whether there is a predominance of men or smokers. Obviously, the healthy and the ill may differ in ways other than a preponderance of smokers among the ill. The researchers must try to think of such other factors and "control" for them (confounding factors). The Mother and Child Cohort Study at the Norwegian Institute of Public Health and the American "Nurses' Health Study" are examples of cohort studies.

In case-control studies, researchers examine people who already have a condition or disease and a group of people who do not. The researchers compare the groups by going back in time and seeing how many of the ill and how many of the healthy were exposed to a possible pathogenic factor. It is important that the "cases" are similar to the "controls" with regard to all other factors that may affect the development of the disease (such as sex, age and other disorders). It is often difficult to know whether the researchers have considered all the "other factors" it is reasonable to take into account.

Observational studies can be read critically by, among other things, asking the following questions:
1. Were the groups comparable? In cohort studies, all important exposures other than the one being studied must be adjusted for in the design or the analysis. In a case-control study, the ill and the healthy should be as alike as possible with respect to other important causes of disease.
2. Were exposure and outcome measured in the same (reliable) way in both groups?
3. Was the follow-up sufficiently long and complete? People who drop out of a cohort study may have a different risk of disease than those who remain.
4. Have the authors considered known, possible confounding factors in the study design and the analysis?
5. Do the results of this study agree with the results of other available studies?

Qualitative studies
Qualitative studies are useful for documenting patients' experiences of being ill or their experiences with the health service. There is an ongoing debate among qualitative researchers about the usefulness of checklists in qualitative research. It is, however, generally agreed that checklists must be used with caution. One argument is that qualitative research depends largely on the researcher's judgment rather than on standardized methods.

Journal of Advanced Nursing recently published an article discussing the critical appraisal of a phenomenological study. Checklists for critical appraisal of qualitative research are still under development; they have not been in use as long as checklists for quantitative research, but the literature is growing.

Broadly speaking, we can say that a good qualitative study is characterized by the following:

It is based on a clear and concise protocol that has been tested in a pilot study.
It has been approved by an ethics committee.
It is based on a satisfactory theoretical framework.
It uses explicit methods to identify sources of error in the collection of information.

When appraising such a study, you must first ask whether it was appropriate to use qualitative methods to answer this particular question. If so, go ahead and check the study against these criteria:
1. Is it satisfactorily described how and why the sample was selected?
2. Was the data collection sufficient to provide a comprehensive picture of the phenomenon?
3. Were the background conditions that may have affected the interpretation of the data made explicit?
4. Is it clear how the analysis was carried out? Is the interpretation of the data understandable, well founded and reasonable?
5. Was an attempt made to corroborate the findings?
6. Is it clear what the main findings of the study are?

In a few pages, we have provided an outline of a large body of methodological knowledge. We do not expect that, after reading this, you will be able to assess the validity of all research-based information in a simple and reliable way. Look at it rather as an introduction and an invitation to learn more or to attend courses. Everyone needs time for study and training, often together with others, before being able to do justice to evaluating the quality of research. We hope, nevertheless, to have left the impression that it is possible. You do not have to be a researcher with a doctorate to become more critical of the documentation and articles you come across.