Improving Instructor Development & Performance with Observations

Among the most critical influences on knowledge transfer, effective instructors are essential for building a great learning and development (L&D) organization.

 
BY ERIC A. SURFACE, Ph.D., AND REANNA P. HARMAN, Ph.D. | Originally Published in Training Industry Magazine

Research in childhood, adolescent and adult learning shows some instructors are more effective than others, but feedback and development efforts improve instructor performance. Instructors' influence also scales: one great instructor can reach hundreds, even thousands, of learners. Therefore, it is critical for instructors to have the resources and support necessary to be successful.

New instructors, seasoned subject matter experts (SMEs) and lifelong educators can continue to grow and improve with constructive feedback. An effective evaluation practice focusing on instructor development includes data from multiple sources, collected via multiple methods. Observation is commonly used in academic settings but is less common in corporate training. In this article, we will discuss the value of observations for instructors, learners and organizations; review best practices for designing and implementing an effective observation practice; and identify and overcome challenges.

Systematic observations can yield rich qualitative data, as well as quantitative ratings regarding learners, instructors and the learning environment. Observations provide a source of developmental feedback and are often the only opportunity instructors have to receive behavioral feedback. While learner reactions and performance are indicators of instructor effectiveness, direct observation in the classroom captures behaviors instructors can directly control and improve.

Lessons learned from the use of classroom observation in K-16 settings, such as the Measures of Effective Teaching (MET) Project, provide insights for developing an observation practice in corporate L&D. The MET Project was a three-year study that explored measures of teacher effectiveness, and its findings highlight the value of using observations for teacher development.

Tips for Designing and Implementing an Effective Observation Practice

A well-designed observation practice provides a rich source of data to develop instructors and support comprehensive program evaluation. While the five steps below may look like a linear process, the practice should be agile and iterative, with lessons learned from each phase informing adjustments to subsequent iterations.

Step 1: Plan

Planning is critical and lays the foundation for all subsequent steps.

  • Create or select an observation rubric to measure instructional behaviors. These behaviors should be linked to learner and program outcomes and provide actionable insights to help instructors improve. Rubrics should include quantitative ratings as well as opportunities for written feedback. Several models and instruments for measuring instructor performance have been developed and used in K-12 and adult learning contexts. Select the most appropriate model or instrument based on the training content, context and instructor. For example, some behaviors are more relevant for face-to-face instruction than for virtual training, and models and instruments developed to improve instruction for adult learners will be more relevant in corporate L&D than those developed for younger learners.
  • Determine who will observe. In academic settings, administrators and peers are often used as observers. In a corporate setting, observers could be internal to the L&D program or include administrators and peers who teach similar or different subjects. One unique consideration in L&D settings is that many instructors are not trained as professional educators, so it may be helpful to include observers who have experience as professional educators to provide SMEs with feedback related to teaching practices.
  • Train observers to rate using the rubric and assess interrater reliability. If possible, use multiple observers to increase reliability. Training should build a shared mental model of what each rubric rating means. In addition to assessing interrater reliability as part of observer training, monitor reliability over time; follow-up training will likely be needed to keep raters calibrated (a simple reliability check is sketched after this list). If it's not possible to have multiple raters, have a single observer conduct multiple observations.
  • Determine the method of observation. Should observations be in-person, recorded or both? There are pros and cons for both scenarios. During the COVID-19 pandemic, we have seen a massive shift to virtual instruction, which presents the opportunity to record virtual sessions for feedback. Recording sessions, even when doing live observation, is a good practice given its benefits and limited drawbacks.
  • Create an observation sampling plan. A sampling plan ensures you have enough observations from multiple sources across courses and instructors (a minimal sampling sketch also follows this list). Recording all sessions for possible observation is a great practice if feasible, but sessions can also be selected in advance by the organization or even by instructors. Best practices from the MET Project include allowing teachers to choose their own recordings for observation, supplementing full-length observations with shorter ones, and announcing observations to reduce the anxiety associated with being observed and keep the focus on development.
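To make the reliability check concrete, here is a minimal sketch in Python, assuming two trained observers rate the same ten sessions on a hypothetical 1-4 rubric scale. The ratings and the 0.60 rule of thumb are illustrative assumptions, not prescriptions from the article; Cohen's kappa is one standard chance-corrected agreement statistic among several that could be used.

```python
# A minimal interrater reliability check for observation ratings,
# assuming two observers rated the same ten sessions (hypothetical data).
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings on one rubric behavior, 1-4 scale.
observer_a = [3, 4, 2, 3, 3, 4, 1, 2, 3, 4]
observer_b = [3, 4, 2, 2, 3, 4, 1, 3, 3, 4]

# Quadratic weighting credits near-misses on an ordinal rating scale.
kappa = cohen_kappa_score(observer_a, observer_b, weights="quadratic")

# Common rule of thumb (an assumption here): values above ~0.60 suggest
# substantial agreement; lower values signal a need for recalibration.
print(f"Weighted Cohen's kappa: {kappa:.2f}")
```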
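And here is a minimal sketch of the sampling-plan idea, assuming recorded sessions are logged with instructor and course metadata. The names, session IDs and the two-observations-per-instructor target are all hypothetical; the point is simply that every instructor gets observed at multiple points rather than once.

```python
# A minimal observation sampling plan over a hypothetical session log.
import random

sessions = [
    {"instructor": "Kim", "course": "Onboarding", "session_id": 101},
    {"instructor": "Kim", "course": "Onboarding", "session_id": 102},
    {"instructor": "Lee", "course": "Sales 101", "session_id": 201},
    {"instructor": "Lee", "course": "Sales 101", "session_id": 202},
    {"instructor": "Lee", "course": "Compliance", "session_id": 203},
]

def sample_sessions(sessions, per_instructor=2, seed=42):
    """Randomly pick up to `per_instructor` sessions per instructor so
    each instructor is observed at multiple points in the course."""
    rng = random.Random(seed)
    by_instructor = {}
    for s in sessions:
        by_instructor.setdefault(s["instructor"], []).append(s)
    plan = []
    for recs in by_instructor.values():
        plan.extend(rng.sample(recs, min(per_instructor, len(recs))))
    return plan

for pick in sample_sessions(sessions):
    print(pick)
```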

Step 2: Observe

Conduct observations according to the sampling plan, then gather feedback from observers to determine whether adjustments need to be made to the rubric or sampling plan. Monitor rating quality by assessing observer reliability and retraining observers as needed.

Step 3: Self-discovery and coaching

Linking and presenting observation data alongside other sources (e.g., learner survey and assessment data) can help instructors reflect on their instructional practices and impact on learning, and identify areas for improvement. By reviewing data from these sources, instructors can engage in self-discovery and reflection and participate in developmental feedback conversations with a manager or coach. Protect learner confidentiality whenever learner data, such as survey responses, is shared, particularly when classes are small or still in session at the time instructors receive feedback. Coaches can help instructors interpret feedback, identify areas for improvement and determine what actions to take to incorporate feedback into their teaching practices.
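As one way to picture this linking step, here is a minimal sketch in Python, assuming observation ratings and learner survey responses share a course identifier. The column names and the small-class suppression threshold are illustrative assumptions, not a prescribed schema.

```python
# Linking hypothetical observation ratings with learner survey results.
import pandas as pd

observations = pd.DataFrame({
    "course_id": ["A1", "B2"],
    "rubric_mean": [3.4, 2.8],
})

surveys = pd.DataFrame({
    "course_id": ["A1", "A1", "A1", "A1", "A1", "B2", "B2"],
    "learner_rating": [4, 5, 4, 5, 4, 3, 4],
})

# Aggregate survey responses before joining, and withhold means for very
# small classes to protect learner confidentiality (threshold assumed).
agg = (surveys.groupby("course_id")["learner_rating"]
       .agg(["mean", "count"]).reset_index())
agg.loc[agg["count"] < 5, "mean"] = None

# One row per course, pairing instructor behavior with learner reactions.
report = observations.merge(agg, on="course_id")
print(report)
```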

Step 4: Instructor development plan

Working with a lead instructor, manager or coach, instructors can create an instructor development plan (IDP) focused on chosen areas for improvement. The IDP should include SMART goals (specific, measurable, achievable, relevant and time bound) to address the agreed-upon areas for improvement, as well as describe how those personal goals support organizational goals.

Once an instructor or coach chooses an area for improvement, they should define a specific individual goal for the area, identify the related organizational goals and specify the measure of success. Each goal should be supported by a list of specific actions, and those actions should be accompanied by a list of resources needed. It is critical that organizations provide the resources for IDPs to be implemented effectively.
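One way to keep these elements together is a simple structured record, sketched below in Python. The fields mirror the elements described above; every field value is a hypothetical example, not content from the article.

```python
# A minimal sketch of one IDP goal entry (all example values hypothetical).
from dataclasses import dataclass, field

@dataclass
class IDPGoal:
    area: str             # chosen area for improvement
    smart_goal: str       # specific, measurable, achievable, relevant, time bound
    org_goal: str         # organizational goal this supports
    success_measure: str  # how progress will be judged
    actions: list = field(default_factory=list)    # specific actions
    resources: list = field(default_factory=list)  # resources needed

goal = IDPGoal(
    area="Checking for understanding",
    smart_goal="Use at least three comprehension checks per session by Q3",
    org_goal="Raise course completion and transfer rates",
    success_measure="Observation rubric score on 'checks for understanding'",
    actions=["Shadow a senior instructor", "Review recorded sessions monthly"],
    resources=["Coach time (1 hr/month)", "Access to session recordings"],
)
print(goal.smart_goal)
```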

Step 5: Commit, act and evaluate

Instructors, learners, coaches and business leaders must commit to instructor development and improvement by supporting behavior-based changes. These efforts need to be evaluated over time to determine if the changes are having a positive impact on organizational outcomes.

Overcoming Barriers

There are many perceived barriers to conducting observations, but these barriers can be overcome with the right approach.

  • Negative view of observations by instructors. Many instructors are not comfortable being observed or fear negative consequences. Create a positive feedback culture, and use observations to focus on development and improvement rather than treating them as a purely administrative process.
  • Observation itself can impact the behavior we are trying to observe. Develop a supportive feedback culture, observe regularly and make observation less obtrusive by using technology (e.g., video recording).
  • Observation only captures a snapshot of behavior. Develop sampling plans to ensure multiple observations of an instructor at different points in the training course.
  • Observation is time-consuming, resource intensive and difficult to integrate. Create an efficient plan for observation, and leverage technology to support the collection and display of observation data along with other key metrics of instructor effectiveness. Technology can not only systematically capture ratings but also support delivering feedback and developing and monitoring the success of IDPs.

Developing and implementing an effective observation process creates a win-win-win for instructors, learners and the L&D enterprise. Adding observation to an existing feedback practice allows stakeholders to combine and compare data from multiple sources. Done correctly and combined with other data sources, observations can have a positive impact on instructor development, as well as on learning and organizational outcomes.
