Investigating Superintendent Perceptions in Florida: The Advent of New Teacher Evaluation Systems

Race to the Top (RttT) is a federal program designed to promote competition between states to reform and innovate K-12 education programs; states that make substantial progress in raising standards receive Federal Government grant money (U.S. Government Accountability Office [USGAO], 2014).  To many scholars and practitioners, the limitations of George W. Bush’s No Child Left Behind necessitated RttT, and this new initiative has been valued for revolutionizing the reform efforts of K-12 schools across the nation (McGuinn, 2012).  As a result of RttT, substantial changes to state and local school policies have been implemented (McGuinn, 2012).

Educational reforms under the RttT program include the improvement of teacher and principal effectiveness (USGAO, 2014). Aspects of teacher and principal effectiveness include evaluation policies and processes, and in exchange for receiving federal funding from the RttT initiative, school districts must adopt a new teacher and principal evaluation system (U.S. Department of Education, 2009).  Subsequently, school districts have been working to design teacher evaluation systems that demonstrate fairness, transparency, and differentiated effectiveness by making major reforms to their current evaluation systems (U.S. Department of Education, 2009).

Under the Selection Criteria for the Reform Plan, the local educational agency (LEA) is required to

[d]esign and implement rigorous, transparent, and fair evaluation systems for teachers and principals that (a) differentiate effectiveness using multiple rating categories that take into account data on student growth . . . as a significant factor, and (b) are designed and developed with teacher and principal involvement. (U.S. Department of Education, 2009, p. 9)

Therefore, as opposed to rating teachers and principals with the traditional dichotomy of meets expectations versus unsatisfactory, educational leaders and teachers are now evaluated on their effectiveness using multiple rating categories.  This new evaluation strategy is designed to describe areas in need of improvement more efficiently (U.S. Department of Education, 2009).

Introduction and Background

Although the new teacher evaluation systems are intended to identify teachers’ strengths and weaknesses with more precision, they have not been embraced by all scholars and educational leaders (Herman, Heritage, & Goldschmidt, 2011).  In fact, there have been conflicting views regarding this new program and what should be included in the new teacher evaluation systems (Doherty, 2009; Herman et al., 2011).  As Herman et al. (2011) indicate, one assessment alone should not be used to determine teacher effectiveness.  Because continuous improvement is facilitated only when evaluation systems collect information about teacher effectiveness through multiple methods, observation rubrics are also needed in teacher evaluation systems (Doherty, 2009).

Accordingly, Curtis and Wiener (2012) suggest five categories of information should be considered in teacher evaluations: a) student outcomes, b) teacher inputs, c) professionalism, d) feedback from students, and e) development of students’ characters.  Moreover, Herman et al. (2011) maintain that student growth assessments, based on individual teacher contributions and input, should also be included in teacher evaluations.  Although Shakman et al. (2012) agree that teacher evaluation systems should include multiple measures of performance and be based on a range and depth of evidence, they note that evaluations should include teachers’ knowledge of content, processes between teachers and students, and outputs such as student achievement.

Although the new evaluation systems are intended to improve the assessment process, reports indicate that teachers are frustrated (e.g., Heitin, 2011).  In Tennessee, for example, teachers do not fully comprehend the new observational rubrics integrated into their evaluation system, and the multiple observational visits required for each teacher are cumbersome for principals (Heitin, 2011).  The increased time invested in evaluation processes leaves school leaders feeling overwhelmed (Heitin, 2011).  Furthermore, according to a study conducted by Weisberg, Sexton, Mulhern, and Keeling (2009), only 43% of teachers in 12 districts across Arkansas, Colorado, Ohio, and Illinois agreed their current evaluation systems foster pedagogical improvement.

Although extensive research regarding the new teacher evaluation systems has been conducted in several states, little is known about the impact of the new teacher evaluation requirements in the state of Florida (Weisberg et al., 2009).  In Florida, the 2012-2013 school year brought many changes to teacher and principal evaluation with the full implementation of new policies (Shakman et al., 2012).  Although many other states have not included value-added models of evaluation, these models have been included in Florida’s evaluation systems (Shakman et al., 2012).  In addition, although school districts in other states have adopted and modified parts of evaluation systems from elsewhere, each county in Florida adopted a specific teacher evaluation model.  Subsequently, while some of Florida’s 67 school districts have chosen to model their teacher evaluation systems on Charlotte Danielson’s framework for teaching, other Florida districts have selected Robert Marzano’s model of effective teaching, and one district has chosen the Jerry Copeland model of evaluation (Shakman et al., 2012).

Despite school board pressures, teacher union pressures, and teacher and principal conflicts, concluded negotiations resulted in the inclusion of Danielson’s (2007) framework for teaching in the former evaluation system (Shakman et al., 2012). Danielson (2007) focuses on the following areas of effective teaching practice: planning and preparation, classroom environment, instruction, and professional responsibilities. Danielson claims that teacher pay should not be tied to the evaluation system; her model focuses specifically on improving professional practice through the framework for teaching.  Unlike Danielson’s (2007) model, Marzano’s (2011) teacher evaluation model consists of four domains: classroom strategies and behaviors, preparing and planning, reflecting on teaching, and collegiality and professionalism.  Copeland’s (Learning Sciences International, 2012) model is a local model built on the notions that evaluation data should come from a variety of sources and that evaluations should be of good quality, cost effective, and completed only after the teacher has had input into the process.

After adopting an evaluation model, Florida school districts were expected to execute the system of evaluation; however, the first school year’s implementation of the new teacher evaluation systems requires a systematic assessment to test its efficacy and validity.  Until this new evaluation system has been assessed, the system of teacher evaluation is not entirely complete.  Appraising new evaluation systems ensures that adaptations can be discerned and implemented successfully throughout the following school years.  The intuitive starting point for this appraisal is the highest level of the school district: the superintendents.  For these reasons, the purpose of this study is to obtain and assess Florida superintendents’ opinions regarding the changes made to the teacher evaluation system under the newly implemented RttT.

Methodology

Participants

The state of Florida includes 67 school districts, of which 41 superintendents are elected, and 26 are appointed (Florida Department of Education [FDOE], 2012). Superintendents from each of the 67 school districts in Florida were invited to participate in this study.  Of the 10 superintendents who gave their informed consent to use their survey results, only 9 participants’ surveys were used; one survey was started but never completed and was thus removed from the results.

Materials

The Superintendent Perceptions of New Teacher Evaluation Systems survey consisted of 15 closed- and open-ended questions, including demographic questions and superintendent perceptions relative to the implementation of the new teacher appraisal system.  The demographic questions included age, gender, education level, teaching and principal experience, teacher evaluation system model, and school district size.  Survey items addressed a) satisfaction with the quality of training for the new teacher evaluation system, b) confidence in the effective overall use of the teacher evaluation system, and c) modifications in the teacher evaluation system.  Satisfaction questions were assessed using the following anchors on a 5-point Likert scale: extremely dissatisfied, somewhat dissatisfied, not sure, somewhat satisfied, and extremely satisfied.  Confidence questions used anchors on the following 3-point Likert scale: extremely confident, somewhat confident, and not at all confident.  Open-ended questions allowed superintendents to discuss modifications in the teacher evaluation system and evidence of teacher reflection on the new evaluation system.  The survey questions were developed using Danielson’s (2007) and Marzano’s (2011) models for teacher evaluation systems.

Procedure

Florida superintendents’ names and email addresses were obtained from the Florida Department of Education website (FDOE, 2012). Each superintendent received an e-mail which included the informed consent information and a link to the new teacher evaluation system questionnaire in Survey Monkey™.  The superintendents were invited to participate in the study by reading and completing the survey within 2 weeks of receipt of the survey. Informed consent information was included in both the email and the questionnaire. Participation was voluntary, and the superintendents were informed their responses were not affiliated with their names; thus, their anonymity was assured.  Moreover, the respondents were informed they could withdraw from the study at any time.

The results were collected via transfer from the Survey Monkey™ database to a Statistical Package for the Social Sciences database.  Data were evaluated using descriptive statistics; the 13% return rate from the superintendent population provides an interesting and timely snapshot of superintendent perceptions of Florida’s new teacher evaluation systems.  Moreover, since perceptions regarding the initial implementation of RttT in Florida can no longer be gathered, this study provides valuable information to both researchers and practitioners.

Results

Respondent Demographics

Nine Florida school district superintendents completed the survey, representing the northern, central, and southern regions of Florida. Eighty-eight percent of the respondents were men, a slightly higher proportion than the 73% of Florida school district superintendents who are men.  All respondents were 46 years of age or older, and all held advanced degrees, 44% of whom held doctoral degrees. Thirty-three percent of the respondents were appointed to their positions as superintendent; similarly, 39% of Florida’s school district superintendents hold appointed positions. All respondents reported having K-12 teaching experience, while 78% also had administrative experience as principals.

Evaluation Model Choice

When assessing the type of teacher evaluation model choice, all respondents specified which evaluation model was adopted by their district.  Of the 9 respondents, 5 (56%) of the superintendents indicated their districts were using Danielson’s (2007) model, while 3 (33%) noted Marzano’s (2011) model.  Finally, among the respondents, the Copeland model was identified as being used by 1 (11%) Florida school district.

Evaluation Tool Training Satisfaction

Superintendents were asked to indicate their level of satisfaction regarding several aspects of the new teacher evaluation training and preparation.  When asked to specify their satisfaction levels regarding the quality of training they received on the new teacher evaluation tool established for their districts, all respondents indicated they were extremely or somewhat satisfied.  Furthermore, all superintendents noted they were somewhat or extremely satisfied when asked to specify the quality of training their principals and teachers received on the new teacher evaluation system implemented in their district.

Evaluation Tool Confidence

The confidence section of the survey was intended to assess: a) proficiency levels of principals in observing and evaluating teachers using the new systems, b) improvements in student learning, c) improvements in teacher performance, and d) improvements in teacher reflection.  All superintendents responded they were extremely confident (45%) or somewhat confident in the proficiency of their principals in objectively observing and evaluating teachers in their districts.  Superintendents were likewise either extremely confident (45%) or somewhat confident in the ability of the new teacher evaluation systems to result in improved student learning across their districts. Furthermore, 67% of respondents noted they were extremely confident, while the remaining superintendents indicated they were somewhat confident, that the new teacher evaluation systems would result in overall improved teacher performance. Finally, 78% of superintendents were extremely confident the new teacher evaluation system would promote increased and improved pedagogical reflection by teachers. The remaining 22% of respondents were somewhat confident the new teacher evaluation system would have a positive influence on teachers’ pedagogical reflection.

Importantly, several respondents provided additional comments suggesting they have received evidence of teacher reflection about the new evaluation process through feedback systems provided to teachers and administrators.  Teacher reflections have also been captured electronically in the online system and through e-mails.  In addition, evidence of teacher reflection has been reported to superintendents through face-to-face interaction, during lesson plan modifications, during teachers’ post-observation conferences with administrators, and at the end of their Professional Growth Plan evaluations for the upcoming school year.

Desired Modifications to the Existing Evaluation System

Since the teacher evaluation systems were newly implemented for this school year, we were interested in learning what aspects of the new evaluation systems might be considered for modification. Toward this objective, respondents were asked, “Based upon the initial use of the teacher evaluation system, are you planning to make modifications for the upcoming school year?” Only 1 respondent indicated that no modifications would be made in the new teacher evaluation system, while the remaining 8 indicated they desired to make modifications.  These 8 respondents were then asked to indicate the type of modifications they would recommend to the current evaluation system.  Thirty-seven percent of the respondents indicated a desire to modify the rubric or observation tool; toward this specific goal, 75% of the superintendents indicated their districts’ teachers will receive additional training. Also, of the 8 responding superintendents, 88% indicated their districts’ principals will receive additional training in conferencing and giving feedback to teachers, while 75% noted their districts’ principals will receive additional training in the observation tool.  One respondent wrote: “[The district] will be offering online [professional development] that has a variety of courses principals and teachers can use to deepen their knowledge of the Danielson framework.”

Discussion and Conclusions

The findings from this survey, while descriptive, showcase an interesting dichotomy between what these Florida superintendents are saying about the new teacher and principal evaluation systems and how they “scored” their confidence in and satisfaction with the new teacher evaluation systems adopted by their respective districts.  In all, the Likert scale findings showed no scores below “somewhat confident” and “somewhat satisfied.”  The superintendents’ open-ended responses, however, showcased different attitudes about the new teacher evaluation systems. Perhaps it can be said that this specific group of Florida superintendents is generally satisfied with and confident in their districts’ forward movement toward improved teacher evaluation systems, yet they perceive that necessary modifications and revisions remain in order to continue to improve the validity and reliability of the teacher evaluation process.

Given the new mandates from the federal government (FDOE, 2012), changes in the way teacher effectiveness is evaluated in Florida are inevitable, and the transition to these changes impacts both teacher and student success.  A transition into the new teacher evaluation system can be a challenge for educators and administrators (Danielson, 2007).  However, working to ensure there is communication and trust between teachers and administrators helps teachers be more willing to make improvements and offer suggestions for the benefit of their own professional development (Hull, 2013).  Although some teachers are uncomfortable with administrators entering their classrooms unannounced, principals should make clear the need for this process as one strategy toward improving teacher performance and student achievement. Ideally these improvements, for the sake of academic success, would override teachers’ fears and concerns.  Because teaching practices are an important aspect of the overall teacher evaluation system, supervisor observations must be conducted so that improvements in both pedagogy and best practice can be achieved; thus, teachers no longer teach without the supervision, guidance, and review of administrators.

The fear of transitioning into a new evaluation system can also be mitigated by providing the proper skills through ongoing, targeted training and professional development. Training is key to ensuring appropriate changes to current practices are made.  According to the results of this survey, all superintendents were either “somewhat” or “extremely satisfied” with the training both they and their principals had received with the new teacher evaluation systems.  In order to maximize effectiveness in classrooms, principals must identify the unique needs of each teacher and then deliver the professional development and support needed (The New Teacher Project, 2013).  Through formal and informal teacher observations and through beneficial discourse regarding teaching practices, principals will be better equipped to provide teachers the essential tools that will allow them to become more effective in the classroom.

Both the information communicated and the means of communication are equally important (Hull, 2013).  However, since teachers and principals are stakeholders, personal best interests can become problematic issues.  Social aspects of opposing groups must be examined, and conflicting aspirations should be considered and reconciled (Baldridge, 1971).  Because conflict is inevitable in the institutional setting, minimizing its detrimental effects on relationships is the primary goal (Baldridge, 1971).  A comfortable setting in the school must be nurtured so that all voices can be openly shared and embraced.

In addition to the satisfaction superintendents expressed regarding the training sessions, it is reassuring that a majority responded “extremely confident” that the new teacher evaluation system shows evidence of improving teacher reflection on the lessons they are teaching. Likewise, the majority of surveyed superintendents indicated they were “extremely confident” their new evaluation system would result in improved student learning and teacher performance.  It is important that performance evaluations successfully help teachers identify their strengths (Almy, 2011), because teacher evaluation systems have the potential to help teachers grow and to help students achieve (Doherty, 2009).  Furthermore, it is also important to obtain positive feedback from teachers regarding the new evaluation system.  This promotes a comfortable environment, where teachers are more open and willing to embrace change.

In the workplace (particularly in schools), trust is instrumental (Covey & Merrill, 2008).  Building relationships based on trust between superintendents, principals, and teachers should be the overriding and pervasive theme in ensuring success.  To improve trust among superintendents, principals, and teachers, Baldridge’s (1971) model of policy formation can be applied to the new teacher evaluation methods.  Policy formation is the culminating result of the following three stages: entering and reconciling conflict, legal action, and the commitment of school districts to a set of values (Baldridge, 1971).  Moreover, applying Stephen Covey’s (1990) seven habits of highly effective people, superintendents must begin with a vision of the desired end result.  The successful transition into new teacher evaluation methods involves the careful planning of each step to ensure not only effective evaluation, but stakeholder “buy in” to the new system.

Furthermore, the majority of the superintendents surveyed expressed a desire to make changes to the current evaluation system.  This finding underscores that “evaluating the system” is an essential component of designing a new evaluation system (Goe, Holdheide, & Miller, 2011).  Obtaining superintendent input about training, best practices, and needed modifications will help districts work consistently and accurately toward best practices in teacher evaluation.

The data collected in this study provide valuable information about the revolution in teacher and principal evaluation in the 21st century and are, therefore, important to consider.  Although this project is more of a descriptive case study, data of this nature have import for school districts across the country. To continue to test and evaluate the effectiveness of the RttT innovations, data regarding the new principal and teacher evaluation systems must routinely be collected and updated.  Teacher, principal, and superintendent assessments, after another academic year, would be beneficial in evaluating the overall effectiveness of the evaluation changes adopted in Florida.  Furthermore, gathering insights from other states regarding the new teacher and principal evaluation practices would also provide a more comprehensive view of the role of federal RttT initiatives. In all, since superintendents are tasked with overseeing the effectiveness of new initiatives, it is imperative they are aware of the strengths and weaknesses of the new teacher and principal evaluation models adopted by districts across the United States.  As the results of this study indicate, this group of superintendents feels the new evaluation systems will increase student achievement by improving teacher effectiveness. While it is the superintendents’ task to ensure their districts’ evaluation processes are working effectively and to suggest modifications for the following year, ongoing, sustained research on this timely and important aspect of our education system is necessary to continue to fine-tune these new and intricate evaluation processes.

References

Baldridge, J. V. (1971).  Academic governance: Research on institutional politics and decision making. Berkeley, CA: McCutchan.

Covey, S. R. (1990). The seven habits of highly effective people: Restoring the character ethic. New York, NY: Fireside.

Covey, S. M. R., & Merrill, R. R. (2008). The speed of trust: The one thing that changes everything. New York, NY: Free Press.

Curtis, R., & Wiener, R. (2012, January). Means to an end: A guide to developing teacher evaluation systems that support growth and development. Washington, DC: The Aspen Institute. Retrieved from http://www.aspeninstitute.org/sites/default/files/content/docs/pubs/Means_To_An_End.pdf

Danielson, C. (2007). Enhancing professional practice: A framework for teaching. Alexandria, VA: Association for Supervision and Curriculum Development.

Danielson, C. (2011).  The framework for teaching evaluation instrument.  Princeton, NJ: The Danielson Group.

Doherty, J. F. (2009). Perceptions of teachers and administrators in a Massachusetts suburban school district regarding the implementation of a standards-based teacher evaluation system (Doctoral dissertation). Retrieved from ProQuest Dissertations & Theses. (Accession Order No. AAT 3431766)

Florida Department of Education. (2012). School district superintendents. Retrieved from http://www.fldoe.org/eias/flmove/supers.asp

Goe, L., Holdheide, L., & Miller, T. (2011, May). A practical guide to designing comprehensive teacher evaluation systems: A tool to assist in the development of teacher evaluation systems. Washington, DC: National Comprehensive Center for Teacher Quality.  Retrieved from http://www.tqsource.org/publications/practicalGuideEvalSystems.pdf

Heitin, L. (2011, October 19). Evaluation system weighing down Tennessee teachers: Glitches in implementation could hurt other efforts. Education Week, 31(8), 1, 14.

Herman, J. L., Heritage, M., & Goldschmidt, P. (2011). Developing and selecting assessments of student growth for use in teacher evaluation systems. Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST), Assessment and Accountability Comprehensive Center.

Hull, J. (2013). Trends in teacher evaluation: How states are measuring teacher performance. Alexandria, VA: National School Boards Association Center for Public Education. Retrieved from http://www.centerforpubliceducation.org/Main-Menu/Evaluating-performance/Trends-in-Teacher-Evaluation-At-A-Glance/Trends-in-Teacher-Evaluation-Full-Report-PDF.pdf

Marzano, R. J. (2011, August). The Marzano teacher evaluation model. Retrieved from http://pages.solution-tree.com/rs/solutiontree/images/MarzanoTeacherEvaluationModel.pdf

McGuinn, P. (2012). Stimulating reform: Race to the Top, competitive grants, and the Obama education agenda. Educational Policy, 26(1), 136-159.

Reform Support Network. (2013). Race to the Top at a glance. Evaluations of teacher effectiveness: State requirements for classroom observations. Retrieved from http://www2.ed.gov/about/inits/ed/implementation-support-unit/tech-assist/evaluations-teacher-effectiveness.pdf

Shakman, K., Riordan, J., Sánchez, M. T., DeMeo Cook, K., Fournier, R., & Brett, J. (2012). An examination of performance-based teacher evaluation systems in five states (Issues & Answers Report, REL 2012-No. 129). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northeast and Islands. Retrieved from http://ies.ed.gov/ncee/edlabs/regions/northeast/pdf/REL_2012129.pdf

The New Teacher Project. (2013). Teacher evaluation 2.0. Retrieved from http://tntp.org/assets/documents/Teacher-Evaluation-Oct10F.pdf

U.S. Department of Education. (2009, November). Race to the Top program executive summary. Washington, DC: U.S. Department of Education. Retrieved from http://www2.ed.gov/programs/racetothetop/index.html

Weisberg, D., Sexton, S., Mulhern, J., & Keeling, D.  (2009).  The widget effect: Our national failure to acknowledge and act on differences in teacher effectiveness.  Retrieved from http://www.edtrust.org/sites/edtrust.org/files/Fair_To_Everyone_0.pdf

Appendix

Table 1

Superintendent Perceptions of New Teacher Evaluation Systems: Respondent Demographics

Demographic                              Category          No.
Gender                                   Female             1
                                         Male               8
Age                                      46 to 50           1
                                         51 to 55           1
                                         56 to 60           4
                                         61 to 65           3
Education                                Master's           5
                                         Doctorate          4
Attained position                        Appointed          3
                                         Elected            6
Years of experience as a teacher         1 to 5             1
                                         6 to 10            5
                                         11 to 15           2
                                         More than 15       1
Years of experience as a principal       0                  2
                                         1 to 5             1
                                         6 to 10            5
                                         11 to 15           2
                                         More than 15       1
Years of experience as a superintendent  1 to 5             5
                                         6 to 10            1
                                         11 to 15           0
                                         More than 15       3
Number of schools in district            Up to 20           4
                                         21 to 40           2
                                         41 to 60           1
                                         More than 60       2

About the Author

Author

Dr. Sherri Zimmerman

Sherri Zimmerman, Ph.D. is an Associate Professor at the University of West Florida. She has experience as both a building and district administrator and has been researching the topic of teacher evaluation since 1999.

Improve Teacher Growth One Trust-Builder at a Time


Anyone who has ever walked a mile in bad shoes understands the value of supportive soles.  Without good reinforcement, there is an increased likelihood of blisters, splints, or spurs.  Taco lovers surely recognize the necessity of a good, crunchy taco shell.  Without it, the meat, beans, lettuce and tomato will simply fall out. The foundation is the key in both scenarios.

So it is with trust between teachers and school leaders.  Each conversation has the potential to become the foundation for trust-building, thereby allowing the growth process to flourish. But each encounter also has the potential to decrease the level of trust.  When trust levels decrease, growth stagnates.  If teachers are going to feel comfortable taking risks, then the significance of the following statements must be considered:

“My principal doesn’t ever come in my classroom except to do my two observations each year. How does she really know what I do?”

“I overheard the 2nd grade teachers saying they don’t believe any of us on the administrative team.”

“Teachers just do dog-and-pony shows for their observations. Why don’t they teach like that every day?”

“Why doesn’t the Superintendent ever come visit classrooms? Maybe he would be a better leader if he did.”

Trust is the foundation upon which growth is built.  When trust is present in schools, teachers and administrators feel more comfortable talking about teaching.

Research on trust

In researching teacher trust in school leaders (Arneson, 2012), several trust-builders emerged as critical components of building a foundation upon which conversations could be conducted.  Above all else, teachers indicated the need for school leaders to communicate effectively with teachers, not just as well-spoken orators but particularly with regard to the frequency and sincerity of the communication.  Many administrators claim to have an open-door policy, but what does that really mean?  Do teachers really need their principal’s office door open every minute of the day? Hardly, since those same teachers relish the privacy of talking behind that closed door, confidentially, about student concerns or their observation evidence.  When providing anecdotal data regarding communication, teachers shared that good school leaders effectively and regularly share information with staff and are available and open to meet when there are concerns.

The second trust-builder was honesty, which was mentioned specifically in regard to teacher performance.  Several teachers mentioned the desire to hear constructive feedback, but “please do it with respect and tact.”  After all, who besides school leaders has the ability to watch all sorts of teaching going on in the school and then show teachers the data that were collected in the classroom? Only then can the teacher and administrator have a conversation about patterns of teaching behavior or causes of student behavior to be considered.

The final trust-builder mentioned specifically by many teachers in the study was support.  A specific theme resonated through the comments: teachers do not want to be “thrown under the bus.”  Teachers shared experiences in which the school leader had waited until a parent-teacher conference before indicating a problem with the teacher’s instruction, behavior management, or other issues.  Teachers indicated they would rather hear the feedback, confidentially, than hear about it in a public forum such as a faculty meeting or a parent conference.

Each of the trust-builders highlighted in the research (Arneson, 2012) has certain practices that can improve the trust between teachers and school leaders.

TRUSTBUILDER #1: COMMUNICATION

Speak with teachers, not to them

Who has had the experience of being observed in the classroom only to find that the follow-up consists of the observer telling us what they saw and then asking us to sign the paperwork, which will then be put in the personnel file? Too many, I suspect. Even if the feedback was glowing, as in “I really liked the way you had the students work in groups”, its value is still contingent upon whether the teacher actually cares what the observer likes or doesn’t like. If the teacher respects the observer, perhaps the aftermath is “Oh, gee, they liked what I did”, but if there is no love lost between teacher and observer, the teacher can write off the comment with “Who really cares what you like or don’t like?” Whether the feedback is positive or negative, the teacher must only endure the conference in a passive manner (Danielson, 2016). Consider the difference in feedback value if the conversation is actually that, a conversation, as in the following illustration:

Observer: As you examine the exit slips from yesterday’s algebra lesson, how might you summarize the student learning?

Teacher: I noticed that 18 out of 20 students were able to solve for x accurately, as opposed to the day before when half the students were still confused.

Observer: What’s your hunch about what might have led to the increase in mastery?

In this case, the teacher is actually engaging in the cognitive challenge that leads to teacher growth and learning. Therefore, if school leaders want to provoke teacher growth, they must provoke teacher thinking. Habits and practices that foster trust-building communication include:

  1. Ask open-ended questions.
  2. Ask questions that are not solely focused on the observed lesson but rather on teaching practices in general.
  3. Resist the temptation to cloud the feedback with personal preferences (e.g., “I liked the way you…”).

TRUSTBUILDER #2: HONESTY

Base observations on facts

In asking teachers what they valued in a trusting relationship with their administrator, many educators felt strongly that school leaders need not have all the answers. After all, none of us do. However, in the absence of the right answer, it is crucial that administrators not act as if they have it. The prevalent thought was “Just be honest about not knowing everything. I need to know I can trust you” (Arneson, 2015).

Observations need not be lengthy, but they do need to be based on fact. Observational facts include teacher moves, student behaviors, student work, and numerical information (e.g., “Eighteen of twenty students completed the ‘do-now’.”). Having data on which to base the conversation about the teaching segment that was observed “shifts the cognitive and emotional energy from the supervisor/teacher relationship to the data” (Lipton & Wellman, 2013, p. 14). This use of data as the “third point” can help facilitate a dialogue about the evidence collected in the classroom rather than one person’s whims about the right or wrong way to do things.

Consider the alternative tactics of administrators writing only vague descriptions of the classroom lesson (“The lesson seemed to go pretty well”), sharing their opinions of the teaching (“Your behavior strategies need some work”), or offering even less subtle advice (“I think you’d have better luck if you tried using some other strategies”). If the true purpose is to help teachers grow in their own practice, then the stage must be set for such growth by giving feedback that is factual and not based on one person’s opinion.

Some habits that will encourage honesty in observations and evaluations include:

  1. Observers should collect factual evidence from classroom observations, even if it is hard work.
  2. Feedback should then be based on the facts instead of observer opinion.
  3. Teachers and administrators should admit they don’t have all the answers and be willing to ask for support.

TRUSTBUILDER #3: SUPPORT FOR THE TEACHER

Respect for the person, the practice and the process

A principal who treats a teacher with respect is modeling the same respect teachers can show students in the classroom (Arneson, 2011); the antithesis is the parent who yells, “You kids better quit shouting at each other right now!”

If, indeed, school leaders want to see teacher growth instead of “gotchas”, then support for the teacher must occur by design. Take the practice of turning in lesson plans to the administrator, for instance. In many schools and districts, lesson plans are to be turned in to the principal on a weekly basis for “grading”, as many call it. While the intent might simply be to keep in touch with what each grade level or content area is covering during a given week, the perception received by the teacher can be demeaning. Stating the purpose of the practice is often the best way to keep it from seeming embarrassing, but that requires the administrator knowing what the purpose is. If principals simply say, “You have to turn in your lesson plans every week because the Superintendent mandates it”, the tone is quite different than saying, “Taking a look at your lesson plans each week will help me keep in close touch with what you are covering and will help guide conversations when we meet”. “Because I said so” is never the best way to foster morale. Sometimes it’s not what we say but how we say it that makes all the difference in how it is received.

Many administrators use faculty meetings as a forum for honoring teachers by highlighting their effective teaching practices.  The only caveat to this well-intentioned method is that teachers have individual preferences (sometimes not coinciding with our own), and some prefer not to be praised in public.

Habits that will help foster teacher respect for administrators include:

  1. Model respect.
  2. Include a purpose for policies and procedures.
  3. Individualize and differentiate respect for teachers, just as we expect teachers to do the same for their students.

While school leaders and teachers can agree that trust is a critical factor in the observation and evaluation process, achieving a trusting relationship is easier said than done.  Knowing the key trustbuilders and some concrete ways to keep trust as the foundation for school relationships is the first step in getting there.

Arneson, S. (2012). Character and competence: A mixed methods study on teacher trust in principals in a mid-sized county in Florida (Doctoral dissertation). University of West Florida.

Arneson, S. (2014). Building trust in teacher evaluations: It’s not what you say; it’s how you say it. Thousand Oaks, CA: Corwin.

Arneson, S. (2015). Improving teaching, one conversation at a time. Educational Leadership, 72(7), 32–36.

Danielson, C. (2016). Talk about teaching. Thousand Oaks, CA: Corwin.

Lipton, L., & Wellman, B. (2013). Learning-focused supervision. Arlington, MA: MiraVia.

Dr. Shelly Arneson (arnesoncommunicates@gmail.com ) is keynote presenter, author, professor, and international consultant for the Danielson Group. She works with schools and districts on topics such as communication, leadership, and trust.  Her books and articles are subjects for book studies, and she loves building relationships with teachers and school leaders around the world. Check out her weekly blog on her website:  www.arnesoncommunicates.com.


Pre-Service Teachers’ Sense of Efficacy during an International Professional Practicum Experience 


Abstract

This research project measured the self-efficacy of preservice teachers before and after they completed their teacher education professional practicum experience during a study abroad experience. The Teachers’ Sense of Efficacy Scale (Tschannen-Moran & Woolfolk Hoy, 2001) was administered prior to the pre-service teachers’ professional practicum experience and after the experience had ended. Follow-up interviews were conducted at the conclusion of the experience. Results indicate that the preservice teachers’ sense of self-efficacy was, for most items on the scale, unchanged by the practicum experience; however, for five items on the scale, the practicum experience abroad had a negative effect on the preservice teachers’ sense of self-efficacy. Qualitative results yielded common themes consistent with the quantitative data regarding classroom management. Implications for field experiences abroad are discussed.

Introduction & Study Background

During the summer of 2015, a group of graduate and undergraduate pre-service teacher education students embarked on completing their professional practicum experience abroad, in a country well over 8,000 miles from their home. The pre-service teachers were immersed in a primary school in a suburban area of Cape Town, South Africa, close to Khayelitsha, one of the largest black townships in South Africa. Many students at the primary school live in the township of Khayelitsha. This study examines the pre-service teachers’ sense of self-efficacy before and after this unique professional practicum experience.

In order to fulfill the state requirements for teacher certification and the university requirements for their practicum experience, the pre-service teachers had to spend a minimum of 60 to 80 hours in the classroom. Their practicum experiences were completed at a primary school with students ranging from grade R (equivalent to kindergarten) to ninth grade. The grades represented by the pre-service teachers who participated in this study were R, 1, 2, 4, 5, and 7. Each of the pre-service teachers completing this experience in the lower grades (R–5) was paired with another pre-service teacher. However, not all pre-service teachers who were completing their professional practicum in this international setting participated in this study.

Pre-service teachers must also meet a diversity requirement with respect to classroom grade clusters. For Elementary Education majors, there are three clusters: pre-kindergarten or kindergarten; first, second, or third grade; and fourth or fifth grade. Middle School majors must complete three experiences in the selected concentration area(s) in two clusters: one in either fourth or fifth grade and one in sixth, seventh, or eighth grade. Like Middle School majors, pre-service teachers in the Secondary Education program must complete three experiences in their specific major or concentration area in two clusters: sixth, seventh, or eighth grade; and ninth, tenth, eleventh, or twelfth grade. Except for the Secondary Education major, each pre-service teacher was paired with another pre-service teacher needing the same grade cluster in order to fulfill the requirements for certification at the end of their respective program. It is worth noting that, because of the specific needs of each pre-service teacher other than the secondary pre-service teacher, all pre-service teachers were paired with someone from a campus different from their own.[1] Although each student was acquainted with all pre-service teachers in the group, none of the pre-service teachers knew the other pre-service teachers from the other campuses.

The pre-service teachers’ responsibilities began with classroom observation and progressed to the planning and execution of lessons. These aspiring educators were also expected to reflect on student learning, converse with cooperating teachers, consult with their university supervisors, and participate in collegial conversations with peers, in addition to participating in all school-related activities that happened during the school day, such as Cross Cultural Day for which the different grade levels put together a show for the school in the courtyard area of the school yard. Each day, the faculty of the school met as a group to discuss necessary items such as upcoming school assessments, surveys that needed to be completed by each class, and announcements of school-wide events. All pre-service teachers were required to be in these daily meetings.

Literature Review

Bandura (1994) defines self-efficacy as “people’s beliefs about their capabilities to produce designated levels of performance that exercise influence over events that affect their lives” (p. 71), claiming that beliefs of self-efficacy influence motivation and behavior such that personal achievement and well-being are enhanced by a strong sense of self-efficacy. According to Bandura, “[p]eople with high assurance in their capabilities approach difficult tasks as challenges to be mastered rather than as threats to be avoided” (p. 71). Berman, McLaughlin, Bass, Pauly, and Zellman (1977) defined teacher efficacy as “the extent to which the teacher believes he or she has the capacity to affect student performance” (p. 137). Tschannen-Moran, Woolfolk Hoy, and Hoy (1998) define teachers’ sense of efficacy as “the teacher’s belief in his or her capability to organize and execute courses of action required to successfully accomplish a specific teaching task in a particular context” (p. 233).

Although Flores (2015) has indicated the need to provide authentic classroom teaching opportunities through “systematically structured field experiences” in order to connect research to practice (p. 14), to date, few investigations have been conducted to determine whether completing a practicum experience abroad affects a pre-service teacher’s self-efficacy beliefs. Merryn, Moussa-Inaty, and Barza (2014) stress that research on teachers working in environments different from their own, e.g., culturally, remains underexplored.

Smolleck and Mongan (2011) used the Teaching Science as Inquiry (TSI) instrument to assess changing self-efficacy beliefs among thirty-eight pre-service teachers at various stages of their professional development. They also collected qualitative data through which they investigated the “critical incidents, which may contribute to changes in self-efficacy.” Their results demonstrate a positive change in self-efficacy over the course of the research study and identify a set of potential educational experiences that may have influenced a change in self-efficacy with respect to teaching science as inquiry.

The development of teacher efficacy is a process. Although different experiences can contribute to building one’s teacher efficacy, mastery experiences during student teaching and the first year of teaching can have the most substantial impact (Mulholland & Wallace, 2001; Hoy & Spero, 2005). Brown, Lee, and Collins (2015) found that “pre-service teachers benefit from their student teaching experiences in terms of perceptions of preparedness and sense of teaching efficacy” (p. 87). It was also found that although student teachers may have difficulties during their student teaching experiences, they develop “positive perceptions regarding their growing knowledge and skillfulness, their increasing sense of efficacy, flexibility and spontaneity in their performance and interactions as well as the awareness of having achieved reasonable levels of acceptance and recognition amongst the school community” (p. 172).

Hoy and Spero (2005) used four quantitative assessments of teacher efficacy to measure the changes in efficacy from program entrance through the induction year. Their results show an increase in efficacy during student teaching and a decrease in efficacy during the induction year. They also found a relationship between changes in efficacy during the induction year and the level of support received.

As indicated by Bandura (1997), and again in Yilmaz’s (2011) follow-up study of English as a Foreign Language teachers, how teachers evaluate themselves does have an impact on their own effort in teaching and on the challenges they set for themselves and their students.

Research Questions and Hypotheses

The purpose of this mixed-methods study was to examine beliefs regarding self-efficacy among pre-service teachers completing their professional practicum in an international setting. This study sought to answer the following questions: (1) How does preservice teachers’ sense of self-efficacy change during their international practicum experience? (2) How does completing the practicum abroad affect pre-service teachers’ sense of self-efficacy? (3) What are some possible factors during the practicum abroad experience that might be related to changes in candidates’ self-efficacy?

We hypothesized that, consistent with Bandura’s theory that “[p]ersistence in activities that are subjectively threatening but in fact relatively safe produces, through experiences of mastery, further enhancement of self-efficacy and corresponding reductions in defensive behavior”, completing the practicum experience abroad would positively affect a change in efficacy.

Methodology

In order to gain greater insight into the participants’ belief in their own sense of self-efficacy, a mixed methods design was used. One reason this design was chosen was its suitability for complementarity. Hesse-Biber (2010) explains that “complementarity allows the researcher to gain a fuller understanding of the research problem and/or to clarify a given result” (p. 4).

Participants completed a pre- and post-inventory using the Teachers’ Sense of Self-Efficacy Scale (TSES), which uses a nine-point Likert-type scale. Although the scale is distinguished quantifiably by nine points, only five anchors are labeled: one represents ‘nothing,’ three ‘very little,’ five ‘some influence,’ seven ‘quite a bit,’ and nine ‘a great deal.’

The pre-service teachers also completed post-experience interviews. A semi-structured interview protocol was followed once the pre-service teachers returned from the international experience. “Semi-structured interviews allow respondents the chance to be the experts and to inform the research” (p. 668). The pre-service teachers are the experts on their own beliefs; the semi-structured protocol was therefore chosen so that they would be able to provide appropriate data.

Theoretical Framework

Bandura (1977) explains the four sources of information that can have an effect on one’s personal efficacy: performance accomplishments, vicarious experience, verbal persuasion, and physiological states. Bandura postulates that “cognitive events are induced and altered most readily by experience of mastery arising from effective performance” (p. 191). Based on Bandura’s assertions, one must participate in actual experiences rather than remain an onlooker. In a study investigating pre-service teachers’ views of what it is to be a teacher, Pendergast, Garvis, and Keogh (2011) also emphasized the necessity for teacher educators to recognize the influence of these key information sources on teacher self-efficacy. Pre-service teachers begin their time in the classroom with limited mastery experiences. Results from a study conducted by Tschannen-Moran and Hoy explain how these same key points, “verbal persuasion, vicarious experiences, and emotional arousal, may well be most salient for pre-service teachers who lack significant mastery experiences” (p. 954). Tschannen-Moran, Woolfolk Hoy, and Hoy (1998) also explain that “mastery or enactive experiences are the most powerful source of efficacy information” (p. 229). Martins, Costa, and Onofre (2015) found that “classes’ characteristics, planning and teaching practice were examples of mastery experiences” (p. 263).

Participants

The participants for this study were eight pre-service teachers from a small, private university in the Southeastern United States. Seven of the participants were enrolled in an undergraduate teacher education program, and one was enrolled in a graduate teacher education program. The university attended by the pre-service teachers has multiple campuses: one traditional campus and four campuses for non-traditional students. Students from the traditional campus and three of the four non-traditional campuses were represented in the study. All pre-service teachers were early childhood (elementary) education majors except for the one graduate student, who was a secondary education major. Three of the participants were students of the traditional campus, four were students of the centers, and one was a graduate student from the metropolitan campus. Although additional pre-service teachers took part in this experience, only the eight participants are recognized in this study.

Data Collection

In order to gain insight into whether the international experience had an effect on the pre-service teachers’ self-efficacy, a pre- and post-inventory was administered, as well as a follow-up interview. The students completed the Teachers’ Sense of Self-Efficacy Scale (Tschannen-Moran & Woolfolk Hoy, 2001). The inventory assesses three factors of self-efficacy: student engagement, instructional strategies, and classroom management. Eight items in the inventory are dedicated to each of these three factors, for a total of twenty-four items. Responses are given on a nine-point Likert-type scale with labeled anchors ranging from ‘nothing’ and ‘very little’ through ‘some influence’ to ‘quite a bit’ and ‘a great deal.’ Although Tschannen-Moran and Woolfolk Hoy constructed two separate instruments for determining teachers’ sense of their own efficacy, a short form and a long form, the authors recommend using the long form, consisting of twenty-four items, with pre-service teachers. Their reasoning is based on the increased ability of the long form to provide a distinct measure of each of the three factors.

The pre-service teachers were asked to complete the pre-assessment during their first full day in their new international environment. After the students had completed the international experience, the follow-up post assessment was administered. In order to gain a better understanding of the changes, if any, that took place, the pre-service student completed a semi-structured interview. The questions asked during the interview were as follows:

1.) In what ways, if any, has your teacher efficacy altered during your Mercer on Mission: Cape Town, South Africa experience?

2.) What barriers, if any, might have impacted a change in your own teacher efficacy?

3.) At what points in the Mercer on Mission: Cape Town, South Africa experience did you notice a change in your own teacher efficacy?

4.) In what areas of your own teaching do you feel the Mercer on Mission experience has had the most impact?

Findings and Analysis

A paired-samples t-test was used to determine whether there was a significant difference between the pre- and post-practicum self-efficacy scores of the pre-service teachers. The Pearson correlations between the pre- and post-practicum measurements for Question 21 (efficacy in classroom management: how well can you respond to defiant students?) (0.727/0.041 sig.) and Question 22 (efficacy in student engagement: how much can you assist families in helping their children do well in school?) (0.668/0.070 sig.) indicate a medium correlation.

Table 1

Questions from The Teachers’ Sense of Efficacy Scale
Indicating a Significant Difference
Question  Corresponding Factor  Mean Difference  Sig. (2-tailed)

6 “How much can you do to get students to believe they can do well in school work?” Efficacy in Student Engagement -1.125 0.026
11 “To what extent can you craft good questions for your students?” Efficacy in Instructional Strategies -1.250 0.060
19 “How well can you keep a few problem students from ruining an entire lesson?” Efficacy in Classroom Management -1.500 0.040
20 “To what extent can you provide an alternative explanation or example when students are confused?” Efficacy in Instructional Strategies -2.125 0.001
21 “How well can you respond to defiant students?” Efficacy in Classroom Management -1.625 0.002
24 “How well can you provide appropriate challenges for very capable students?” Efficacy in Instructional Strategies -1.625 0.014

As the significance values for changes in these scale items are at or below 0.05 (with Question 11 marginal at 0.060), the average decrease in scores can be attributed to the practicum experience.[2] However, significance values greater than 0.10 for changes in scores of the other items show the practicum experience did not significantly change the participants’ responses to those items.
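The paired-samples analysis reported above can be sketched in a few lines. The scores below are hypothetical placeholders (the study’s raw responses are not reproduced here), and `scipy.stats.ttest_rel` stands in for the statistical-package procedure the authors used; this is an illustrative sketch, not the study’s actual computation.

```python
from scipy import stats

# Hypothetical pre/post ratings for 8 participants on one TSES item,
# on the instrument's 9-point scale (illustrative only; these are NOT
# the study's actual data).
pre  = [8, 7, 9, 8, 7, 8, 9, 8]
post = [6, 6, 7, 7, 5, 6, 8, 6]

# Paired-samples t-test: each participant serves as his or her own
# control, so the test is run on the per-person differences.
t_stat, p_value = stats.ttest_rel(pre, post)

# Mean difference (post - pre); a negative value indicates a decrease
# in reported self-efficacy after the practicum, as in Table 1.
mean_diff = sum(b - a for a, b in zip(pre, post)) / len(pre)
```

With eight participants the test has seven degrees of freedom, which is why even moderate mean differences need fairly consistent per-person changes to reach significance.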

Factor analysis was performed in order to reduce the number of variables, to reveal their underlying relationships, and to determine whether the resultant pattern of correlations among items on the scale was the same as that obtained by Tschannen-Moran and Woolfolk Hoy, the developers of the TSES. A total of four factor analyses, two for each administration of the TSES, were performed in SPSS using varimax rotation and principal component analysis as the extraction method.[3] The first analysis of the pre-test yielded five factors with eigenvalues greater than 1, which accounted for 95.89 percent of the variance. The first factor, with an initial eigenvalue of 11.192, explained 32.23 percent of the variance. When the component analysis was restricted to three factors, the three saved components accounted for 82.23 percent of the variance, with the first factor explaining 36.66 percent of the variance.

Table 2

Pre-test The Teachers’ Sense of Efficacy Scale
Pre-test, no specified number of factors
Component (Factor)  Initial Eigenvalues  Rotation Sums of Squared Loadings  % of Variance Explained  Cumulative %
1 11.192 7.736 32.234 32.234
2 5.010 6.519 20.875 59.395
3 3.534 3.364 14.725 73.411
4 2.233 2.763 9.304 84.924
5 1.045 2.631 4.353 95.888

Table 3

Pre-test The Teachers’ Sense of Efficacy Scale
Pre-test, 3 factors specified
Component (Factor)  Initial Eigenvalues  Rotation Sums of Squared Loadings  % of Variance Explained  Cumulative %
1 11.192 8.797 36.655 36.655
2 5.010 7.029 29.288 65.943
3 3.534 3.909 16.288 82.231

When factor analysis was used to reduce the variables on the TSES administered after the Practicum experience, four factors with eigenvalues greater than 1 emerged, accounting for 89.36 percent of the variance. The first factor, with an initial eigenvalue of 8.230, explained 27.320 percent of the variance. When the component analysis was restricted to three factors, the three saved components accounted for 74.95 percent of the variance, with the first factor explaining 28.89 percent of the variance.

Table 4

Post-test The Teachers’ Sense of Efficacy Scale
Post-test, no specified number of factors
Component (Factor)  Initial Eigenvalues  Rotation Sums of Squared Loadings  % of Variance Explained  Cumulative %
1 8.230 6.557 27.320 27.320
2 5.894 5.376 22.398 49.718
3 3.863 5.190 21.625 71.343
4 3.460 4.325 18.021 89.364

Table 5

Post-test The Teachers’ Sense of Efficacy Scale
Post-test, 3 factors specified
Component (Factor)  Initial Eigenvalues  Rotation Sums of Squared Loadings  % of Variance Explained  Cumulative %
1 8.230 6.934 28.890 28.890
2 5.894 6.587 27.444 56.334
3 3.863 4.467 18.614 74.948
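The extraction and retention steps summarized in Tables 2 through 5 can be illustrated with a short sketch. The responses below are randomly generated placeholders (the study’s raw data are not available here), and a plain eigendecomposition of the item correlation matrix stands in for SPSS’s principal component extraction; the varimax rotation step is omitted for brevity.

```python
import numpy as np

# Hypothetical responses: 8 participants x 24 TSES items on a 9-point
# scale (randomly generated placeholders, NOT the study's data).
rng = np.random.default_rng(42)
responses = rng.integers(1, 10, size=(8, 24)).astype(float)

# Principal components are the eigenvectors of the item correlation
# matrix; each eigenvalue gives the variance its component explains.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser criterion: retain components with eigenvalue > 1, as in the
# "no specified number of factors" analyses above.
retained = eigenvalues[eigenvalues > 1]
pct_variance = 100 * retained / corr.shape[0]  # % of total variance
cumulative = np.cumsum(pct_variance)           # cumulative %, cf. Tables 2-5
```

Note that with only eight respondents the correlation matrix has at most seven nonzero eigenvalues, so no more than seven components can emerge; restricting the solution to three components, as the authors do to match the three TSES factors, is a further modeling choice.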

The three moderately correlated factors identified by Tschannen-Moran and Woolfolk Hoy are efficacy in student engagement, efficacy in instructional practices, and efficacy in classroom management. The items on our pre-experience administration of the TSES were distributed among these factors as follows:

Student Engagement: 1, 2, 3, 4, 5, 7, 8, 9, 13, 14, 15

Instructional Practices: 6, 12, 16, 17, 18, 19, 21, 22, 23

Classroom Management: 10, 11, 20, 24

The items on our post-experience administration of the TSES were distributed among these factors as follows:[4]

Student Engagement: 1, 7, 10, 14, 15, 16, 18, 24

Instructional Practices: 3, 5, 6, 8, 12, 13, 19, 20, 21

Classroom Management: 2, 4, 11, 17, 22, 23

Unstructured text data.

Although the Teachers’ Sense of Efficacy Scale (TSES) measured efficacy in student engagement, efficacy in instructional practices, and efficacy in classroom management, other themes emerged in the participants’ responses to the semi-structured interviews. At the beginning of each interview, Tschannen-Moran, Woolfolk Hoy, and Hoy’s (1998) definition of teachers’ sense of efficacy, “the teacher’s belief in his or her capability to organize and execute courses of action required to successfully accomplish a specific teaching task in a particular context” (p. 233), was given to each pre-service teacher prior to posing the first interview question. Providing this definition ensured the pre-service teachers had a firm understanding of what exactly was meant by the term “teacher efficacy” in the interview questions.

Interview Question 1: In what ways, if any, has your teacher efficacy altered during your Mercer on Mission: Cape Town, South Africa experience?

Although all participants voiced that their teacher efficacy was altered during the experience, one theme that emerged was classroom management. Another theme was planning. Other themes were knowing the students and their culture and background. Although no additional themes were identified among two or more participants, it is worth noting other ways the participants’ teacher efficacy was altered; one participant, for example, explained how she now feels she is more flexible.

Interview Question 2: What barriers, if any, might have impacted a change in your own teacher efficacy?

All students felt the language was a barrier that could have impacted a change in their own teacher efficacy.

Interview Question 3: At what points in the Mercer on Mission: Cape Town, South Africa experience did you notice a change in your own teacher efficacy?

Most students noticed a change in their own teacher efficacy at the beginning of their experience in the school. However, one pair of students who were placed in the same classes noticed a change during the second week of their pre-service teaching experience. These two students had the opportunity to split their daily routine between two classrooms. During the second week, it was decided the pair would complete the rest of the experience in one classroom instead of two.

Interview Question 4: In what areas of your own teaching do you feel the Mercer on Mission experience has had the most impact?

Understanding the importance of working together and taking part in collegial conversations were two themes that emerged. Multiple participants stressed that their classroom management and planning were both impacted in positive ways. One pre-service teacher noted in response to multiple interview questions that her confidence had been impacted positively. Studies conducted by Gaudino, Moss, and Wilson (2012) and by Pence and Macgillivray (2008) likewise found that pre-service teachers’ self-confidence was positively impacted by an international field experience.

The data demonstrated that participants’ belief in their own teacher efficacy changed in a short amount of time with respect to a few of the factors measured by the TSES. Themes that emerged in the participants’ responses added credence to the data gathered from the TSES. When pre-service teachers are provided opportunities through which they are able to experience teaching, their own self-efficacy can be improved (Smolleck & Mongan, 2011). People with a strong sense of self-efficacy persevere through failure by ameliorating insufficient effort or knowledge (Bandura, 1994).

Discussion

Although each pre-service teacher gained a new perspective on teaching in a different country, for all pre-service teachers who participated in this international field experience program, classroom management was the TSES factor that showed a significant difference, and it was also a theme that emerged during the interviews. As is common with first-year teachers, lower self-efficacy in regard to instructional practices and classroom management has been reported.

A recent study indicates that pre-service teachers who receive training in particular classroom management procedures earlier in their preparation program have higher self-efficacy and are more comfortable managing classroom behavior (Lenter & Franks, 2015). O’Neill and Stephenson (2012) found that pre-service teachers who had not completed a course in classroom management felt less prepared to handle troublesome classroom behaviors. However, the pre-service teachers in this study completed their professional practicum experiences without a prior course in classroom management, as this course is generally recommended during the last semester of the teacher preparation program, while candidates are completing student teaching. While it is common for pre-service teachers to have issues with classroom management, some of these issues can be resolved during the professional practicum experience (Charalambous, Philippou, & Kyriakides, 2008).

Teacher isolation has been identified as a concern for many years (Davis, 1986; Finders, 1988). When pre-service teachers have the opportunity to be involved in collegial conversations and to belong to a community, their beliefs in their own self-efficacy can be positively influenced (Meristo, Ljalikova, & Löfström, 2013). Fortunately, all pre-service teachers except the one graduate pre-service teacher were placed in a classroom with a peer. Having pre-service teachers in the same classroom with a peer during their practicum experience allowed them to discuss daily occurrences with each other rather than working in isolation. In their interviews at the end of the experience, two of the paired pre-service teachers mentioned the support they received from each other in regard to lesson planning and developing rapport with their students. A third participant, the secondary education pre-service teacher and the only pre-service teacher who was not paired with a peer, noted in her interview that the designated collaboration time she had each day with the teachers from the school helped her develop a new perspective on her students.

Many investigators agree that the correlations upon which factor analysis is based require large sample sizes in order to stabilize the solution and achieve good recovery of population factors (e.g., Comrey & Lee, 1992; Tabachnick & Fidell, 2013). Reviews by MacCallum, Widaman, Zhang, and Hong (1999) and by Velicer and Fava (1998) debunk previously proposed rules of thumb for specifying minimum N. In addition to a lack of agreement among authorities regarding the determination of minimum sample sizes, Velicer and Fava (1998) found neither a rigorous theoretical basis nor an empirical basis for the rules (p. 232). MacCallum, Widaman, Zhang, and Hong (1999) found that extremely large sample sizes are necessary to achieve good recovery of population factors when communalities are low.[5] MacCallum, Widaman, Preacher, and Hong (2001) later confirmed that “if communalities are high, recovery of population factors in sample data is normally very good, almost regardless of sample size, level of overdetermination, or the presence of model error” (p. 636). In fact, the retention of an incorrect number of factors, especially too few factors, can cause major distortion of loading patterns (Fava & Velicer, 1998; MacCallum, Widaman, Preacher, & Hong, 2001). “The main effects of N and communality level on recovery of population factors are more dramatic when factors are less well determined” (MacCallum et al., 2001, p. 612). According to Tabachnick and Fidell (2013), smaller sample sizes can be tolerated when communalities are consistently high (greater than .6). In the present factor analyses, communalities were all very high (most above .9 and none less than .875).
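The communality arithmetic underlying this discussion can be illustrated with a brief sketch. The loading matrix below is hypothetical, chosen only to show the calculation; it is not the study’s data:

```python
# A variable's communality is the sum of its squared factor loadings:
# the proportion of that item's variance explained by the common factors.
# Hypothetical loading matrix (4 items x 2 factors) for illustration only.
loadings = [
    [0.90, 0.30],  # item 1
    [0.85, 0.35],  # item 2
    [0.25, 0.92],  # item 3
    [0.30, 0.88],  # item 4
]

# Communality for each item: sum of squared loadings across factors.
communalities = [sum(l ** 2 for l in row) for row in loadings]

for item, h2 in enumerate(communalities, start=1):
    print(f"Item {item}: h^2 = {h2:.3f}")
```

By the criterion attributed above to Tabachnick and Fidell (2013), each of these hypothetical items would count as having a high communality (greater than .6).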

A minimum of three variables per factor is critical, and four or five are better (Velicer & Fava, 1998). “Factors that are not measured by at least three high-loading variables should not be interpreted” (Velicer & Fava, 1998, p. 248), and an even stricter criterion should be used when the sample size is low.

Conclusions

Willard-Holt (2001) found that pre-service teachers could be positively impacted both personally and professionally when involved in an international teaching experience. Research has suggested that involvement in an international field experience can benefit pre-service teachers through “professional and personal changes such as increased confidence, a better appreciation and respect for differences of others and other cultures, and an awareness of the importance that feedback and reflection play in professional and personal growth” (Willard-Holt, 2001, p. 14).

Pre-service teachers may have an artificially elevated sense of self-efficacy; completing a significant field experience abroad might provide an important challenge to their pedagogical skills, knowledge and attitudes that might not otherwise occur. They can then be encouraged to seek the professional development they require in order to be successful in the classroom.

Either pre-service teachers’ sense of self-efficacy was artificially inflated at the pre-test, or the language barrier hindered their beliefs at the post-test. However, pre-service teachers can presume that they are able to conquer difficult teaching tasks prior to being immersed in the situation (Sevgi, Gök, & Armağan, 2017).

Bandura (1997) explained how mastery and vicarious experiences assist pre-service teachers in strengthening their ability to teach during the practicum experience. Like Bandura, Putman (2009) explains that these two types of experience are important aspects of what pre-service teachers gain while taking part in their practicum experience.

References

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215.

Bandura, A. (1994). Self-efficacy. In V. S. Ramachaudran (Ed.), Encyclopedia of human behavior (Vol. 4, pp. 71-81). New York: Academic Press.

Bandura, A. (1997). Self-efficacy: The exercise of control. New York, New York: W. H. Freeman and Company.

Berman, P., McLaughlin, M., Bass, G., Pauly, E. & Zellman, G. (1977). Federal programs supporting educational change: Vol. VII. Factors affecting implementation and continuation (Rep. No. R-1589/7-HEW). Santa Monica, CA: RAND.

Brown, A. L., Lee, J., & Collins, D. (2015). Does student teaching matter? Investigating pre-service teachers’ sense of efficacy and preparedness. Teaching Education, 26(1), 77-99.

Charalambous, C. Y., Philippou, G. N., & Kyriakides, L. (2008). Tracing the development of pre-service teachers’ efficacy beliefs in teaching mathematics during fieldwork. Education Studies in Mathematics, 67(2), 125-142.

Comrey, A. L. & Lee, H. B. (1992). A first course in factor analysis. Hillsdale, NJ: Erlbaum.

Davis, J. B. (1986). Teacher isolation: Breaking through. The High School Journal, 70(2), 72-76.

Finders, D. J. (1988). Teacher isolation and the new reform. Journal of Curriculum and Supervision, 4(1), 17-29. Retrieved from http://www.ascd.org/ASCD/pdf/journals/jcs/jcs_1988fall_flinders.pdf

Flores, I. M. (2015). Developing pre-service teachers’ self-efficacy throughout field-based science teaching practice with elementary students. Research in Higher Education Journal, 27, 1-19.

Gaudino, A. C., Moss, D. M., & Wilson, E. V. (2012). Key issues in an international clinical experience for graduate students in education: Implications for policy and practice. Journal of International Education and Leadership, 2(3), 1-16.

Hesse-Biber, S. N. (2010). Mixed methods research: Merging theory with practice. New York, NY: The Guilford Press.

Hoy, A. W. & Spero, R. B. (2005). Changes in teacher efficacy during the early years of teaching: A comparison of four measures. Teaching and Teacher Education, 21, 343–356.

Lenter, V.S., & Franks, B. (2015). The redirect behavior model and effects on pre-service teachers’ self-efficacy. Journal of Education and Practice, 6(35), 76-87.

MacCallum, R. C., Widaman, K. F., Preacher, K. J., & Hong, S. (2001). Sample size in factor analysis: The role of model error. Multivariate Behavioral Research, 36(4), 611-637. http://www.quantpsy.org/pubs/maccallum_widaman_preacher_hong_2001.pdf

MacCallum, R. C., Widaman, K. F., Zhang, S., & Hong, S. (1999). Sample size in factor analysis. Psychological Methods, 4(1), 84–99.

Martins, M., Costa, J., & Onofre, M. (2015). Practicum experiences as sources of pre-service teachers’ self-efficacy. European Journal of Teacher Education, 38(2), 263-279.

Merryn, M., Moussa-Inaty, J., & Barza, L. (2014). Science teaching self-efficacy of culturally foreign teachers: A baseline study in Abu Dhabi. International Journal of Educational Research, 66, 79-89.

Meristo, M., Ljalikova, A., & Löfström, E. (2013). Looking back on experienced teachers’ reflections: How did pre-service school practice support the development of self-efficacy? European Journal of Teacher Education, 36(4), 428-444. doi:10.1080/02619768.2013.805409

Mulholland, J., & Wallace, J. (2001). Teacher induction and elementary science teaching: enhancing self-efficacy. Teaching and Teacher Education, 17, 243–261.

O’Neill, S. & Stephenson, J. (2012). Does classroom management coursework influence pre-service teachers’ perceived preparedness or confidence? Teaching and Teacher Education, 28, 1131-1143.

Pence, H. M. & Macgillivray, I. K. (2008). The impact of an international field experience on preservice teachers. Teaching and Teacher Education, 24, 14-25.

Pendergast, D., Garvis, G., & Keogh, J. (2011). Pre-service student-teacher self-efficacy beliefs: An insight into the making of teachers. Australian Journal of Teacher Education, 36(12), 45-58.

Putman, S. M. (2009). Grappling with classroom management: The orientations of preservice teachers and impact of student teaching. The Teacher Educator, 44, 232-247.

Sevgi, S., Gök, G., & Armağan, F. (2017). Self-efficacy beliefs of prospective teachers. The Online Journal of New Horizons in Education, 7(1), 135-142.

Smolleck, L. A. & Mongan, A. M. (2011). Changes in pre-service teachers’ self-efficacy: From science methods to student teaching. Journal of Educational and Developmental Psychology, 1(1), 133–145.

Tabachnick, B. G. & Fidell, L. S. (2013). Using multivariate statistics. Boston: Pearson.

Tschannen-Moran, M., & Woolfolk Hoy, A. (2001). Teacher efficacy: Capturing an elusive construct. Teaching and Teacher Education, 17, 783–805.

Tschannen-Moran, M., & Woolfolk Hoy, A. (2007). The differential antecedents of self-efficacy beliefs of novice and experienced teachers. Teaching and Teacher Education 23, 944-956.

Tschannen-Moran, M., Woolfolk-Hoy, A., & Hoy, W. K. (1998). Teacher efficacy: Its meaning and measure. Review of Educational Research, 68, 202-248.

Velicer, W. F. & Fava, J. L. (1998). Effects of variable and subject sampling on factor pattern recovery. Psychological Methods, 3(2), 231-251. http://www.academia.edu/13281056/Effects_of_Variable_and_Subject_Sampling_on_Factor_Pattern_Recovery

Willard-Holt, C. (2001). The impact of a short-term international experience for pre-service teachers. Teaching and Teacher Education, 17, 505-517.

Yilmaz, C. (2011). Teachers’ perceptions of self-efficacy, English proficiency, and instructional strategies. Social Behavior and Personality, 39(1), 91-100.

[1] Although the pre-service teachers are students at one university, the university has multiple campuses: a traditional campus that caters to traditional resident students; a campus located in a large metropolitan area that caters to graduate students; and centers where working adults are able to enroll and graduate with an undergraduate degree. Of the five campuses at which the university offers degrees with initial certification in education, four were represented in the study abroad program.

[2] Item 11 is included in the table because of its borderline significance value of 0.060.

[3] Tabachnick and Fidell (2013) warn against aggregating the results from repeated measures because “underlying factor structure may shift in time for the same subjects with learning or with experience in an experimental setting,” suggesting that the differences in structure may be “quite revealing” (p. 617).

[4] Item 9 was not strongly correlated with any of the factors for the post-experience administration of the TSES.

[5] A variable’s communality, defined as the sum of its squared factor loadings, is the proportion of its variance that can be explained by the underlying factors. MacCallum, Widaman, Zhang, and Hong (1999) consider communalities in the range of .5 to be acceptable with sample sizes of at least 100 when factors are well determined (i.e., having a minimum number of marker variables with high loadings for each factor).

About the Authors

Dr. Michelle Vaughn
Dr. Rebecca Grunzke

Michelle Vaughn, EdD, is an assistant professor for the Tift College of Education at Mercer University where she focuses on literacy and teacher preparation. Research interests include culturally responsive pedagogy, teacher preparation, and teacher professional development.

Rebecca Grunzke, PhD, is an independent researcher.