Professional Development Evaluation Plan Template (Narrative Form)

The template below provides an overview of the phases of your evaluation plan. Save a copy of the template, and complete each portion within the template as directed in the weeks indicated. Your writing should be in essay form and follow APA guidelines. It is expected that you will make any necessary revisions to your plan as you progress through the course, collaborate with your Learning Team, and share your ideas with your Walden colleagues. You will submit this document in Weeks 1, 2, 4, and 6, but you will only be graded for the section(s) of the plan that are due that week. The Professional Development Evaluation Plan is an assignment that is aligned with the Learning Forward Process Standard: Evaluation. Once you receive feedback on your plan, you will make any revisions and submit it for grading. See the Application Assignment in Week 6 for more information. Also note that the Application Assignments in Weeks 3, 4, and 5 require you to collaborate with your Learning Team. See the Professional Development Evaluation Plan Overview located in Week 1 for more information about working with a Learning Team.

Pre-Planning Phase: Framing the Evaluation (Week 1)

Write an essay that addresses the following:
Explain the purpose of the evaluation
Analyze aspects of a school or district's culture that may influence evaluation efforts
Apply professional standards to guide the evaluation
Identify members of the Learning Team who will be involved in developing the evaluation plan, and provide a rationale for each
Identify the users of evaluation results
Write your essay below (approx. 3-4 pages). You will only be graded on this portion of the Professional Development Plan in Week 1.

Purpose of the Evaluation Plan

Evaluating professional development sessions provides school districts with valuable information that can help school leaders and other educational stakeholders make decisions that ensure professional development programs positively impact teacher practice and ultimately improve student performance. Doing so helps teachers view PD evaluations positively and engages more teachers in evaluating the development programs in which they participate (Guskey, 2005). Designing an evaluation plan based on this rationale is imperative for improving teacher practice and sustaining evidence-based practice among education practitioners. The purpose of evaluation can be summarized in three broad categories. First, evaluation gives PD developers and implementers a precise understanding of what the sessions are meant to accomplish, the procedures to be followed, and how achievement of the goals can be determined at the end of the program (Guskey, 2005). In other words, evaluation helps decision makers determine whether the PD program is headed in the right direction and is likely to meet its objectives. Second, formative evaluations provide decision makers with continuous information on the progress of the program and on whether it is working as planned (Guskey, 2005). In the event that the outcomes are not as expected, PD managers have the opportunity to make the necessary adjustments to ensure that the objectives of the program are met. Third, summative evaluation is undertaken to establish a program's overall worth. As such, summative evaluations help determine whether the benefits are worth the cost.
Analysis of Aspects of a District's Culture

Schools and school districts have become cognizant of the fact that a good PD culture creates a learning organization, which is crucial in the implementation of robust teacher development policies (Blandford, 2012). A school district that emphasizes high expectations and respects the diversity of students' talents and learning styles is thought to stress the delivery of quality learning experiences in equal measure. Taking teachers through a well-thought-out PD program plays a significant role in motivating and training teachers to work with other teachers toward school improvement as well as student achievement (Asfaw, 2008). In order to attain such levels of PD, the school district needs to provide teachers with expert support on how to address issues in their day-to-day practice. This has the potential to encourage reflective practice among teachers and hence make them aware of the significance of continuous learning (Blandford, 2012). In addition, a school district with a culture of accountability entrenched in teacher professional development is better placed to enhance teachers' practice and consequently improve student learning outcomes. This means that PD sessions have to focus on the use of evidence aimed at improving student outcomes. According to Guskey (2005), staff development programs founded on the inclusion of evidence-based teaching strategies and methods in school reform programs have the potential of centering on specific measures of student learning. Consequently, fostering such a culture within a school district ensures that PD programs aim to bring meaningful benefits to targeted students instead of relying on a one-size-fits-all plan that locks out students who, for instance, might not learn at the same pace as their colleagues. For this reason, it is critical to base PD programs on established professional standards to guide the evaluation process.

Professional Standards Guiding PD Evaluation

PD evaluators are charged with the responsibility of ensuring that the evaluative information they produce is both accurate and credible, in a manner that adheres to the highest technical standards suitable to the methods they use (Hannum, Martineau, Reinelt, & Leviton, 2006). Attaining this professional standard involves the evaluator communicating the approaches and methods used accurately and in enough detail for participants to understand. These should be context-specific to reduce limitations associated with the inclusion of inappropriate values, assumptions, methods, interpretations, and analyses (Hannum et al., 2006). For effectiveness, this evaluation recommends that evaluators possess effective negotiation skills that enable them to negotiate honestly with clients as well as other relevant stakeholders in the education sector about the tasks to be undertaken and the costs involved. This is critical in initiating discussions between the evaluator and PD stakeholders.

Members of the Learning Team

Professional development experts are the first category of people to be included in the evaluation process. PD experts are expected to be aware of the professional standards, ethics, and regulations governing the evaluation of those participating in PD programs. Second, PD evaluators should be included to ensure that the public interest is well represented, especially in situations where the PD program is funded by public finances (Hannum et al., 2006). This is critical in ensuring that the welfare of society is well protected.
Also, it is important to include teachers, who can provide helpful insights on the influence of PD programs on their teaching practice.

Users of the Evaluation Results

Schools, districts, teachers, and other education stakeholders such as school superintendents are the key beneficiaries of evaluation results. These individuals and institutions use evaluation to determine the effectiveness of PD programs and whether the costs involved justify running subsequent programs. Student learning outcomes are a major determinant in this regard.

References

Asfaw, A. T. (2008). Multicultural Education Professional Development of Principals: Its Impact on Performances of School Leadership. Ann Arbor, MI: Capella University.
Blandford, S. (2012). Managing Professional Development in Schools. New York, NY: Taylor & Francis.
Guskey, T. R. (2005). Taking a Second Look at Accountability: Strong Evidence Reflecting the Benefits of Professional Development Is More Important than Ever Before. Journal of Staff Development, 26(1), 10-18.
Hannum, K., Martineau, J. W., Reinelt, C., & Leviton, L. C. (2006). The Handbook of Leadership Development Evaluation. San Francisco, CA: John Wiley & Sons.

Planning Phase, Part 1: Focusing the Evaluation (Week 2)

Write an essay that addresses the following:
Define program goals
Specify objectives/outcomes: What kinds of changes do you expect to see in Knowledge, Attitudes, Skills, Aspirations, and Behaviors (KASABs) for teachers, students, and other stakeholders (principals, central office, organization, other)? When will these changes occur?
Determine initial outcomes, intermediate outcomes, and/or final outcomes related to the KASABs you have identified
Write your essay below (approx. 3-4 pages). You will only be graded on this portion of the Professional Development Plan in Week 2.
Note: Once you receive feedback on this part of the Evaluation Plan, you will need to add the information to the Professional Development Plan At-a-Glance (Program Goals and Column One: Program Objectives).

Professional Development Program Goals

Developers of professional development programs often set goals for the whole program founded on their ideas regarding student learning, instructional approaches, and education reform. The first step in the design of professional development programs is the identification of program goals (Killion, 2003). Goal setting is a critical step in the design process, as it sheds light on how the program will be executed in practice as well as taken up by the participants (Killion, 2003). As will be demonstrated below, developers often plan programs to meet specific goals for the institutions, teachers, and students involved. In this paper, I will outline goals in terms of what the participating school, educators, and learners will achieve in the course of their participation in this professional development program. At the institutional level, the main goal of this professional development program is to make certain that professional development for staff is ongoing and provides for educators' lifelong learning. When it comes to teachers, the program is aimed at improving teachers' understanding of student learning and at providing opportunities for educators to reflect on their teaching in relation to student learning. Concerning students, the program is aimed at improving students' interest in and motivation for learning mathematics.
KASAB (Knowledge, Attitude, Skill, Aspiration, Behavior) for the Program

For reforms in practice to happen, transformations in knowledge of the principles that are assumed must occur as well (Taylor & MacKenney, 2008). With regard to the proposed program, teachers will receive training on various student learning styles and will be provided with information pertinent to the learning process, as reported by researchers, that bears on instructional procedures in the classroom. As an initial outcome, it is anticipated that participation in this program will enable teachers to gain a deeper and wider understanding of different approaches to student learning and teaching as well as an array of instructional strategies to use in particular situations. Moreover, teachers will gain insight into the complex factors that have some bearing on the process of teaching and learning. Although it may not be easy for teachers to adjust their style of teaching, this program will provide them with training that will enable them to identify their teaching style and match it to student learning styles. Skilled teachers understand that students approach learning mathematics in various ways (Clarke & Pittaway, 2014). Consequently, teachers ought to be able to attend carefully to students' learning tendencies and offer diverse avenues to learning. Therefore, the other initial outcome anticipated is that teachers will be able to create learning environments that make learning easier, particularly in connection with certain types of learning disabilities. In this program, teachers will have a chance to consider how their students learn and to compare that with how they teach them in the classroom. It will be valuable for them to reflect on their own particular way of teaching and to appreciate that not all children learn in the same manner. When teachers are aware of their preferred style of teaching, they are more likely to be responsive as to whether it aligns with the learning needs and styles of their learners (Mohanna, Chambers, & Wall, 2007). They will then be capable of varying their style of teaching so as to engage students more effectively and, it is hoped, have more influence on their gains in skills and knowledge. As a result, one of the intermediate outcomes of this program is that teachers' attitudes toward incorporating learning activities that have variety and interest for all the learners in the classroom are expected to improve. Although there are dominant modalities of learning, few students rely exclusively on a single learning style; more often than not there is considerable overlap in learning styles. As mentioned earlier, an effective teacher will incorporate a variety of learning styles when presenting a lesson (Mohanna, Chambers, & Wall, 2007). This will augment learners' intrinsic motivation, which is imperative in the learning process. In the absence of motivation, learning is less likely to proceed in an organized and orderly manner. On the other hand, motivated learners tend to achieve better academically. According to Mohanna, Chambers, and Wall (2007), learners who are motivated are generally goal directed, exhibit higher activity levels, and are determined in completing tasks assigned to them. Thus, the other intermediate outcome of this program is that students are anticipated to develop a more positive attitude toward the subject as well as an increased level of motivation to learn.
The smaller the discrepancy between teaching and learning styles, the greater the level of student learning, which is expected to be reflected in the test scores of students taught by teachers who participate in this development program. That is to say, the intended result of this program is that learners' achievement in mathematics will improve. In summary, as indicated by Killion (2003) and demonstrated in this paper, program goals dictate the manner in which the program will be executed as well as the outcomes of the program.

References

Clarke, M., & Pittaway, S. (2014). Becoming a Teacher: Knowledge, Skills and Issues. Frenchs Forest: Pearson Australia.
Killion, J. (2003). 8 Smooth Steps. Journal of Staff Development, 24(4), 14-21.
Mohanna, K., Chambers, R., & Wall, D. (2007). Your Teaching Style: A Practical Guide to Understanding, Developing and Improving. Oxford: Radcliffe Publishing.
Taylor, G. R., & MacKenney, L. (2008). Improving Human Learning in the Classroom: Theories and Teaching Practices. Lanham, MD: Rowman & Littlefield Education.

Part 2: Developing Evaluation Questions (Week 3)

Develop evaluation questions, and describe in essay format your process in developing, refining, and prioritizing these questions
Include a summary of your Learning Team's feedback and input on the evaluation questions
Write your essay below (approx. 2-3 pages). You will submit this portion of the Professional Development Plan in Week 4. You will only be graded on this portion of the plan in Week 4.
Note: Once you receive feedback on this part of the Evaluation Plan, you will need to add the information to the Professional Development Plan At-a-Glance (Column Two: Evaluation Questions).

Developing Evaluation Questions

Harvey and Struzierro (2008) equate the evaluation of a professional development program to carrying out a case study with a single student. Often, after identifying the problem, evaluators come up with well-defined, testable evaluation questions. The process of evaluating professional development programs can sometimes be complex in the sense that it is multifaceted. Nonetheless, different scholars have come up with different models for evaluating professional development programs (Harvey & Struzierro, 2008). The majority of authors suggest that evaluation starts with the formulation of a list of evaluation questions. To avoid getting bogged down midway, evaluators can involve stakeholders as well as the target group in identifying critical issues to be addressed by the program. By doing so, evaluators arrive at what they consider would guide the collection of data (Killion, 2003; Harvey & Struzierro, 2008). In addition, the involvement of stakeholders as well as potential participants helps in the identification of priority questions. Program evaluations are designed to answer questions, and questions direct the evaluations. The questions address different aspects of the development program, which include the participants' concerns about and perception of the program; program activities and how they lead to the achievement of program goals; and the importance of the program findings. According to Zepeda (2013), there are several things to be considered when formulating program evaluation questions, and these were taken into account when formulating the evaluation questions listed in this paper. First, an evaluator should establish whether the question is of interest to the key participants; second, whether the answer already exists; and third, whether the answer provides useful information.
In regard to relevance and usefulness, an evaluation question ought to increase the likelihood of its results being used. This is based on the presumption that the evaluation of any program is aimed at showing whether the program was a worthwhile investment and at demonstrating to stakeholders how to design better programs in the future. Good evaluation questions address the main issue and are clearly defined in a manner that will produce clear and precise answers. Another characteristic of a good evaluation question is that it should be measurable; essentially, it should be viable as well as practical to gather data to answer the question. The type of evaluation (summative or formative) influences how program evaluation questions are formulated (Killion, 2003). With regard to formative evaluation, an evaluator can utilize the theory of change as well as the logic model to generate evaluation questions. That is, "questions can be formulated from each initial and intermediate outcome in the logic model, from each step of the theory of change, from both, or from steps in either that are pivotal to the program's success" (Killion, 2003, p. 19). Conversely, summative evaluation asks whether the development program accomplished its goals. The evaluation questions listed below were generated using the theory of change and logic model for closing the achievement gap in mathematics among fourth graders. These questions, as will be shown, are both formative and summative. The evaluation questions are as follows:
How often do teachers use instructional strategies that take into account the different cultures represented in their classrooms?
How does the school administration offer support to mathematics teachers in implementing the various strategies for closing the achievement gap in fourth grade mathematics?
Does the use of culturally responsive instructional strategies close the achievement gap in mathematics?

References

Harvey, S. V., & Struzierro, J. A. (2008). Professional Development and Supervision of School Psychologists (2nd ed.). Thousand Oaks, CA: Corwin Press.
Killion, J. (2003). 8 Smooth Steps. Journal of Staff Development, 24(4), 14-21.
Zepeda, S. J. (2013). Professional Development: What Works. New York, NY: Routledge.

Part 3: Choosing Data Sources and Collection Methods (Week 4)

Write an essay that addresses the following:
Identify data sources and collection methods for the plan
Describe your plan for who will collect data and when
Explain how you will pilot a data collection technique with a trial group
Include a summary of your Learning Team's feedback on the data collection plan
Write your essay below (approx. 2-3 pages). You will only be graded on this portion of the Professional Development Plan in Week 4.
Note: Once you receive feedback on this part of the Evaluation Plan, you will need to add the information to the Professional Development Plan At-a-Glance (Column Three: Data Sources and Column Four: Data Collection Methods).

Data Plan

Data collection is an important step in program evaluation and demands thorough preparation (Champion, 2002). When preparing for the collection of data, an evaluator does a number of things, including choosing suitable instruments, preparing the individuals who will take part in data collection, and gaining access to data as well as people. This paper presents a plan for collecting data to be used in evaluating a development program aimed at reducing the achievement gap in mathematics among fourth-grade students.
The main types of data required for the evaluation of this program are student assessment data and data on teachers' use of culturally responsive instructional strategies. When selecting instruments suitable for collecting the aforementioned data, the evaluator carefully reviewed the options so as to ensure that the instruments selected are appropriate for the planned evaluation. Champion (2002) indicates that evaluators should strategically choose the data they will gather. Here, student assessment data, which include criterion-referenced test scores and homework, will be obtained from the school database. Data on teachers' use of culturally responsive instructional strategies will be collected through observation. At the start, the evaluator will select the evaluation team. Thereafter, she will prepare the individuals participating in data collection by first thoroughly informing them about the evaluation plan, paying particular attention to data collection activities, the amount of time required, and responsibilities. This will be especially critical considering that classroom observation is one of the instruments selected for the collection of data. Second, the evaluator will carry out a comprehensive review of the data collection instruments, including the way they are used. The evaluator will explain to the individuals participating in data collection the reason for using the selected instruments as well as the expectations for how data are to be documented in preparation for analysis. Clear guidelines for the duration as well as the frequency of observation will also be provided. Furthermore, the evaluator will develop a checklist and guide for classroom observation. The observation protocol will be used to assess teachers' application of culturally responsive instructional strategies. Therefore, observers will be required to know the number of times they are supposed to observe every teacher who took part in the program, and how to record as well as report the data (Waxman & Tharp, 2004). After thoroughly briefing those who will take part in data collection, the evaluator will conduct a pilot data collection. The reason for doing so is that it provides these individuals with an opportunity to practice using the instruments. As a result, they will gain a better understanding of their responsibilities, which will in turn ensure that the process of data collection is coherent and systematic. A pilot data collection, especially in relation to observation protocols, will provide an opportunity for testing the reliability of the observation protocol and making the necessary fine-tuning (Killion, 2003; Harvey & Struzierro, 2008). The observations will be carried out by senior teachers from other schools because this brings a significant element of credibility as well as objectivity to the evaluation process. During the pilot data collection, observers will be required to carry out multiple observations of teachers over two days and complete reports as soon as possible. Observers will be directed to report any difficulties encountered during data collection, including teachers' concerns. Doing so will allow the collectors to make the necessary changes to the manner in which they conduct observations prior to the actual collection of data for program evaluation. Basically, the key issue in creating a data plan is deciding what role the different collectors will play (Killion, 2003).
During a discussion with members of my Learning Team, the majority of the members stressed the importance of ensuring data accuracy, which in their opinion requires collecting data as planned. One of the members questioned how the evaluator would ensure that the collectors record data accurately and that errors associated with data entry are found and rectified. A member of the group suggested that the evaluator should put in place a mechanism that would help in creating checks and balances.

Summary

Notably, the most critical step in data collection is the planning stage, because it dictates the success of the actual data collection process. At the planning stage, the evaluation team outlines procedures, sets up timelines, and assigns the individuals responsible for every step. As the above discussion of the data plan shows, a comprehensive orientation of the participants in the development program as well as of the individuals involved in data collection is crucial. During orientation, those involved become acquainted with the strategies for collecting data. Piloting data collection, as stated by Killion (2003), ensures the clarity as well as the accuracy of the instruments used to collect data. In addition, in instances where the evaluator recruits other people to assist in data collection, he or she has to train them prior to the actual process of data collection. Thus, piloting provides an opportunity for training as well as for refinement of the protocols for collecting data. After piloting, the evaluation team makes appropriate changes to the data plan. The refined data plan will be used in the actual process of data collection.

References

Champion, R. (2002). Choose the Right Data for the Job. Journal of Staff Development, Summer, 78-79.
Harvey, S. V., & Struzierro, J. A. (2008). Professional Development and Supervision of School Psychologists (2nd ed.). Thousand Oaks, CA: Corwin Press.
Killion, J. (2003). 8 Smooth Steps. Journal of Staff Development, 24(4), 14-21.
Waxman, H. C., & Tharp, R. G. (2004). Observational Research in U.S. Classrooms. Cambridge: Cambridge University Press.

Conducting Phase: Collecting, Analyzing, and Interpreting Data (Week 5)

Write an essay that addresses the following:
Describe the interrelationships you will be looking for in the data you collect and how different types and sources of data may lead to important findings
Explain insights gained based on piloting a data collection method
Write your essay below (approx. 2-3 pages). You will submit this portion of the Professional Development Plan in Week 6. You will only be graded on this portion of the plan in Week 6.
Note: Once you receive feedback on this part of the Evaluation Plan, you will need to add the information to the Professional Development Plan At-a-Glance (Column Five: Data Analysis).

Pilot Study

After coming up with evaluation questions and determining the methods of collecting the data that will be used to respond to those questions, it is often wise for the evaluation team to carry out a pilot study (Killion, 2003; Guskey, 2002). This is because skipping the pilot study is likely to result in mistakes, particularly in relation to the collection of data. The two methods of data collection selected for the proposed development program, whose main objective is to close the achievement gap in mathematics among fourth graders, are observation and review of student assessment records. The observation method is best suited for gathering information regarding individual behaviors (Puma & Raphael, 2001).
Furthermore, with classroom observations, observers who are acquainted with the development program are capable of rating participants' level of implementation of the instructional strategies in the classroom. As a result, observation was selected to collect information on teachers' use of culturally responsive instructional strategies. Student assessment data, on the other hand, will be obtained from school records of test scores. In selecting this source, the evaluator took into account the scope of the available data, the accuracy of the data, the completeness of the records, as well as the timeliness of the data (Zepeda, 2013). The pilot study provided an opportunity for the evaluation team to ensure that the observation instrument would function as required. The observation instrument was tested with teachers who would not participate in the study. Based on the findings of the pilot study, the evaluation team identified a number of changes to be made to the checklist. Similarly, it was noted that the observers required more training so as to attain higher levels of consistency. A thorough training regimen for the observers was formulated, and only when the observers exhibited the accepted level of reliability were they considered ready to carry out the observations. According to Puma and Raphael (2001), well-trained and skilled observers are capable of establishing learners' responses to the instruction delivered by the educators who took part in professional development. Furthermore, perception data will be collected using interviews. It was therefore during piloting that data collectors became acquainted with the probes and questions on the interview protocol. Data collectors obtained a broader understanding of the questions, so they will be able to provide interviewees with clarification when asked. In order to manage the collection of data efficiently, the evaluator developed a system for tracking the status of data collection. This will be done using a spreadsheet, and every teacher participating in the study will be assigned a distinctive identification code for purposes of tracking. The evaluation team will keep track of the status of data collection for every participant. In essence, the evaluator has to monitor the status of data collection continuously (Puma & Raphael, 2001). In doing so, the evaluator will be able to determine how many observations were planned, how many have been completed, and where data have not been obtained. Killion (2003) insists that "data collection processes must be refined for accuracy" (p. 20). In this case, the evaluator will ensure data accuracy by creating checks and balances among the data collectors. By doing so, data collectors will be able to record data accurately, and errors related to data entry are more likely to be identified and remedied. In addition, missing data will be dealt with appropriately.

References

Guskey, T. R. (2002). Does It Make a Difference? Evaluating Professional Development. Educational Leadership, March, 45-51.
Killion, J. (2003). 8 Smooth Steps. Journal of Staff Development, 24(4), 14-21.
Puma, M., & Raphael, J. (2001). Evaluating Standards-Based Professional Development for Teachers: A Handbook for Practitioners. Washington, DC: Urban Institute.
Zepeda, S. J. (2013). Professional Development: What Works. New York, NY: Routledge.

Reporting Phase: Sharing Your Findings (Week 6)

Create a reporting plan for sharing the findings of your evaluation
Write a description in essay format that describes your reporting plan
Write your essay below (approx. 2-3 pages). You will only be graded on this portion of the Professional Development Plan in Week 6.
Note: Once you receive feedback on this part of the Evaluation Plan, make any necessary changes and review the Week 6 Application for instructions. In addition, you will need to add this week's information to the Professional Development Plan At-a-Glance (Column Six: Method for Sharing Findings).

Reporting Plan for Program Evaluation

Guskey (2000) points out that "reporting evaluation results

