It is used to determine whether the observed difference was large enough to be meaningful, along with a number of other factors in the process. The basic idea is to bring a greater diversity of perspectives to studying the problem. (Kirk, 1996)
According to Kirk (1996), these ideas are used to gain a better understanding of key trends: "Statistical significance is essentially scientific credibility. In many academic disciplines, research is considered statistically significant only if the results of the study would occur by mere chance less than five times out of 100. For many, this number has become a kind of gold standard, often determining which papers are published, where researchers find work, etc. As a consumer of numbers, this is important to you for two reasons. Practical significance is an arbitrary limit whereby an observed difference is of some practical use in the real world. Let's say you add Ingredient X to your car's oil that is supposed to improve fuel efficiency. You conduct a careful controlled experiment, measuring fuel efficiency before and after introducing the additive. You find that the difference before and after is statistically significantly better, and conclude that the additive does indeed improve fuel efficiency. However, Ingredient X costs $1,000 a bottle - effectively negating any fuel efficiency savings. You don't really drive your car that much so conclude that the difference is not practically significant." (Kirk, 1996) These insights show how both forms of significance will be used to understand what is taking place.
In this case, practical significance examines the long-term effects of the programs by tracking cessation rates and the impact they are having on stakeholders. Statistical significance, by contrast, deals with the specific raw data that is collected and with supporting or refuting the hypothesis. The difference is that the practical approach seeks to understand key trends and uses the data as one part of achieving those objectives, which makes its conclusions more reliable by taking a broader perspective. The statistical approach examines one specific aspect of the problem; it is more limited and can answer only select questions. (Kirk, 1996)
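The distinction in Kirk's fuel-additive example can be sketched numerically. The following is a minimal illustration, not part of the source study: all figures (MPG readings, annual mileage, gas price) are invented, and a simple Welch t-statistic compared against the rough two-sided 5% cutoff of 1.96 stands in for a full hypothesis test.

```python
import math
import statistics

# Hypothetical fuel-efficiency readings (MPG) before and after Ingredient X.
mpg_before = [30.0, 30.2, 29.8, 30.1, 29.9, 30.0, 30.1, 29.9, 30.2, 29.8]
mpg_after  = [30.5, 30.7, 30.3, 30.6, 30.4, 30.5, 30.6, 30.4, 30.7, 30.3]

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    return (statistics.mean(b) - statistics.mean(a)) / se

t_stat = welch_t(mpg_before, mpg_after)
statistically_significant = t_stat > 1.96  # crude 5% two-sided threshold

# Practical significance: translate the mean difference into dollars,
# assuming a light driver (5,000 miles/year) and $3.50/gallon gas.
annual_miles, gas_price, additive_cost = 5_000, 3.50, 1_000.00
annual_savings = (annual_miles / statistics.mean(mpg_before)
                  - annual_miles / statistics.mean(mpg_after)) * gas_price
practically_significant = annual_savings > additive_cost

print(f"t = {t_stat:.2f}, statistically significant: {statistically_significant}")
print(f"annual savings = ${annual_savings:.2f}, "
      f"practically significant: {practically_significant}")
```

With these invented numbers the improvement is statistically significant (t well above 1.96) yet saves only a few dollars a year against a $1,000 additive, mirroring the essay's point that the two kinds of significance can diverge.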
Determine the essential elements needed in your program evaluation report.
The most critical elements needed in the program evaluation report include beginning statistics, current figures, and the scope of key changes. These areas will allow researchers to compare the information objectively. During this process, they will not be provided with identifying information on the subject; instead, they will see only an anonymous label. This will ensure that they are not overly influenced one way or another by specific respondents. (Yin, 2009)
In the case of beginning statistics, this information is necessary to understand what kind of smoker the person was and what kinds of programs they were involved with. This will improve accuracy by ensuring that each subject is carefully studied. The beginning figures will provide an initial foundation for reaching these objectives. (Yin, 2009)
The current figures will provide raw information about what is happening with respondents in real time. In this case, they will show how each cessation program is working and if it is having the desired effects. The data is useful in understanding the transformations that are taking place and the long-term impact. (Yin, 2009)
The scope of the changes is where researchers look at what is happening and which factors are influencing the subjects. This is when they can use these insights to establish key conditions and preferences. In the future, these views will be used to understand how this is influencing the attitudes of those who are in these programs and which programs they prefer the most. (Yin, 2009)
According to a study conducted by the University of Washington (2014), this is critical to the long-term success of the research: "Program evaluation is the systematic assessment of the processes and/or outcomes of a program with the intent of furthering its development and improvement. As such, it is a collaborative process in which evaluators work closely with program staff to craft and implement an evaluation design that is responsive to the needs of the program. For example, during program implementation, evaluators can provide formative evaluation findings so that program staff can make immediate, data-based decisions about program implementation and delivery. In addition, evaluators can, towards the end of a program or upon its completion, provide cumulative and summative evaluation findings, often required by funding agencies and used to make decisions about program continuation or expansion. Evaluators use many of the same qualitative and quantitative methodologies used by researchers in other fields. Indeed, program evaluations are as rigorous and systematic in collecting data as traditional social research. That being said, the primary purpose of evaluation is to provide timely and constructive information for decision-making about particular programs, not to advance more wide-ranging knowledge or theory. Accordingly, evaluation is typically more client-focused than traditional research, in that evaluators work closely with program staff to create and carry-out an evaluation plan that attend to the particular needs of their program. The primary difference between evaluation and assessment lies in the focus of examination.
Whereas evaluation serves to facilitate a program's development, implementation, and improvement by examining its processes and/or outcomes; the purpose of an assessment is to determine individuals or group's performances by measuring their skill level on a variable of interest (e.g., reading comprehension, math or social skills, to mention just a few). In line with this distinction -- and quite common in evaluating educational programs where the intended outcome is often some specified level of academic achievement -- assessment data may be used in determining program impact and success." ("What is Program Evaluation," 2014) These insights show how program evaluation is effective at understanding the problem, because it examines what is happening from a variety of perspectives.
Research utilization processes in program evaluation. Critique at least two and explain how you would convince stakeholders to best utilize your evaluation.
The research utilization processes will focus on ordinary knowledge and empirical research. Ordinary knowledge draws on common sense and on the factors that influence stakeholders the most. According to Bradley (1986), it achieves critical objectives: "It provides the basis for decision and action in most organizations. Such knowledge, derived from practical experience, is usually widely shared, sensitive to context, and comprehensive. By contrast, knowledge derived from social science methods tends to be context independent and, of necessity, to be selective rather than comprehensive. At best, such knowledge supplements ordinary knowledge. On the one hand, it justifies the generation of knowledge using social science methods, and on the other, it imposes restrictions on the use of such knowledge. However, evaluators and others with a stake in social science have generally failed to recognize this argument, generating unrealistic expectations about the value of knowledge derived from social science methods and minimizing the application of such knowledge." (Bradley, 1986) This illustrates how these techniques could improve the validity of the project by making common sense explicit and offering clear guidelines. In the future, stakeholders can use this as an avenue to connect with the study and see its value, which is when they become willing to support it over the long term. (Yin, 2009)
Empirical research gains knowledge through direct and indirect observation. In this case, the basic idea is to look at different aspects of the problem in order to understand what is taking place. A good example of this can be seen in insights from Bradley, who said, "Empirical evidence (the record of one's direct observations or experiences) can be analyzed quantitatively or qualitatively. Through quantifying the evidence or making sense of it in qualitative form, a researcher can answer empirical questions, which should be clearly defined and answerable with the evidence collected (usually called data). Research design varies by field and by the question being investigated. Many researchers combine qualitative and quantitative forms of analysis to better answer questions which cannot be studied in laboratory settings, particularly in the social sciences and in education. The research utilization process includes synthesizing and disseminating new and existing evidence; supporting champions and decision makers to use evidence; developing and promoting evidence-based policy and service delivery guidelines, job aids, curricula and other materials; organizing global, regional and national technical consultations and learning exchanges; developing strategies to scale up effective practices and programs; and providing technical assistance for policymakers and program implementers at the country level to use evidence in decision making." These insights show how this approach will address the needs of stakeholders by combining common sense with observation of how respondents react in comparison with the underlying trends. (Yin, 2009)
The combination of the two benefits stakeholders by taking a different look at the problem and offering unique ways of analyzing it. Ordinary knowledge will be utilized to establish key trend…