
U.S. “NATIONAL TEACHER AND PRINCIPAL SURVEY” AS A VALUED RESOURCE FOR TEACHER AND PRINCIPAL STUDIES

Jiangang Xia

Xingyuan Gao

Jianping Shen

ABSTRACT

In this paper we introduce and review the US National Teacher and Principal Survey (NTPS) as a valued resource for teacher and principal studies. The introduction covers NTPS’s original format and historical development as the Schools and Staffing Survey (SASS) and its redesign and new format. The review examines and discusses how the SASS data have been utilized for educational research and policy analysis. We finally discuss the implications for other education systems.

Keywords: national teacher and principal survey, schools and staffing survey, questionnaire, data, research, policy, implication

Jiangang Xia (corresponding author), Assistant Professor, Department of Educational Administration, University of Nebraska-Lincoln. E-mail: jxia@unl.edu

Xingyuan Gao, Doctoral Research Associate, Ph.D. Candidate, Department of Educational Leadership Research and Technology, Western Michigan University. E-mail: xingyuan.gao@wmich.edu

Jianping Shen, John E. Sandberg Professor of Education and the Gwen Frostic Endowed Chair in Research and Innovation, Department of Educational Leadership Research and Technology, Western Michigan University. E-mail: shen@wmich.edu

Nowadays, more and more researchers choose to use existing databases rather than collect data on their own. Utilizing existing databases offers researchers both economy and efficiency: researchers can be confident in the data quality while spending little time and money on gathering data. Most existing databases are large-scale, high quality, free of charge, and ready for analysis (Hussein, 2011). In the United States, the National Center for Education Statistics (NCES) is one of the most important federal units that collect and analyze data in the field of education. In this paper, we introduce and review the U.S. National Teacher and Principal Survey (NTPS). Our introduction covers its history as the Schools and Staffing Survey (SASS), data features, data access, and its current development and transition toward NTPS. Our review focuses on how SASS data have been used for educational research and policy analysis. Finally, we present our discussion and implications.

What Is NTPS: An Introduction

NTPS is a redesign of SASS. The original SASS and the current NTPS, including the Teacher Follow-up Survey (TFS) and the Principal Follow-up Survey (PFS), are national-level surveys conducted by NCES. Like all other NCES surveys, both SASS and NTPS have dedicated websites (SASS: http://nces.ed.gov/surveys/sass/; NTPS: http://nces.ed.gov/surveys/ntps/). This brief introduction addresses the SASS questionnaires, the SASS data, and the current new development.

SASS Questionnaires

The first administration of SASS was during the 1987-1988 school year, and it included four questionnaires: Teacher Demand and Shortage (TDS) Questionnaires for both public and private schools, School Questionnaires for both public and private schools, the School Administrator Questionnaire, and Teacher Questionnaires for both public and private schools. Through 2015, there were seven administrations of SASS: 1987-1988, 1990-1991, 1993-1994, 1999-2000, 2003-2004, 2007-2008, and 2011-2012. Although the main content of SASS remained unchanged across these seven cycles, NCES made some changes to subsequent rounds to capture the most current emerging issues in elementary and secondary education. The major changes for each subsequent round are summarized in Table 1.

Table 1
Changes Made to SASS Questionnaires by Administration Cycle

The questionnaire types administered across the seven cycles (1987-88 through 2011-12), together with their major changes, were as follows:

Teacher Demand and Shortage: Public and Private versions (later discontinued).
Teacher Listing Form.
School District.
School: Public; Private; Charter (later unified, then deleted); Indian (later deleted).
School Administrator/Principal: Public; Private; Indian (later deleted); Charter (later deleted).
Teacher: Public; Private; Indian (later deleted); Charter (later deleted).
Library Media Centers: Public; Private (later deleted); Indian (later deleted).
Library Media Specialist: Public, Private, and Indian versions (later discontinued).
Student Records (later discontinued).

In addition to the major changes in the types of questionnaires, it is also important to point out that items were sometimes revised, or an item remained the same while its rating scale changed. It is beyond the scope of this introduction to discuss all of those details; interested readers can compare the questionnaires across cycles to identify the differences.

Based on the above information, it can be seen that SASS developed four core components: the School Questionnaire, the Teacher Questionnaire, the Principal Questionnaire, and the School District Questionnaire. Each questionnaire was designed with its own focus. Taking the 2007-2008 SASS as an example, the Teacher surveys were designed to collect information on teachers' demographics, preparation, qualifications, teaching experience, professional development, teaching assignments, workload, and their perceptions of and attitudes toward their teaching and school environment. The Principal surveys were designed to measure principals' demographics, preparation, teaching and leadership experience, and perceptions of their influence and of school problems. The School surveys collected information on school enrollment, grades offered, staffing issues, school programs, and school performance. The School District surveys covered district information (e.g., total enrollment, student demographics, teaching force, and grades offered), teacher hiring and evaluation, teacher dismissal, principal and teacher salary, school choice, programs offered, and graduation requirements. As a result, SASS has rich data on school practices.

Although SASS kept making changes to its questionnaires, it also kept collecting some information consistently over the years. For example, SASS continued to measure the same five major policy issues: teacher shortage and demand, teacher characteristics, teacher working conditions, principal characteristics, and school programs and policies. It is the richness and consistency of the SASS questions that makes it possible for researchers, school administrators, policy makers, and the public to understand our schools, districts, states, and the whole education system, and to examine to what extent policies and investments have had impacts on school practices.

Two Follow-Up Survey Questionnaires: TFS and PFS

As important complements to SASS, NCES also developed the TFS and the PFS. The TFS was first conducted in 1988-1989, one year after the 1987-1988 SASS was administered. It was designed to collect information on teacher mobility: teachers who stayed, moved, or left the profession. Therefore, the TFS followed a sample of teachers who had completed SASS in the previous year. Based on the same teacher samples, SASS and the next year's TFS constitute a longitudinal data structure, and the information can be used to understand teacher retention and attrition. Except for the 2008-2009 TFS, each TFS has two questionnaires: one for former teachers and one for current teachers. The 2008-2009 TFS differs in that it has four questionnaires: the two additional questionnaires were designed for former and current first-year teachers. The TFS questionnaires and their changes are presented in Table 2.

Unlike the TFS, the first PFS was not conducted until the 2008-2009 school year. Its purpose, however, was similar to that of the TFS: to collect principals' mobility information by surveying a sample of principals who had completed SASS in the previous year, including who stayed, who moved to another school, and who left the profession. Like the TFS, SASS and the next year's PFS also form a mini longitudinal data structure, and the collected information can be used to understand principal retention and attrition. The 2008-2009 PFS has two questionnaires, one for public schools and the other for private schools, while the 2012-2013 PFS added a principal status form for both public and private schools. The types of PFS questionnaires are also presented in Table 2.

Table 2
TFS and PFS Questionnaires by Administration Cycle

Teacher Follow-up Survey (1988-89, 1991-92, 1994-95, 2000-01, 2004-05, 2008-09, and 2012-13): questionnaires for former teachers and for current teachers in every cycle; the 2008-09 cycle added questionnaires for former and current first-year teachers, and the 2012-13 cycle added a status form.

Principal Follow-up Survey (2008-09 and 2012-13): questionnaires for public schools and for private schools, with public and private principal status forms added in 2012-13.

Data Features

The SASS dataset, including the TFS and PFS data, has four core components: teacher data, principal data, school data, and school district data. Although each SASS data set includes all variables based on its questionnaire, NCES adds some additional important information, including weighting variables, imputation flags, control numbers, and some SASS-created variables.

Weighting variables. SASS data were collected based on a stratified sampling design, and thus each sampled teacher, principal, school, and school district carries a certain weight. The weights were designed to meet various requirements, one of which is to make sure each sampled teacher, principal, school, and school district represents those with similar characteristics in the respective national populations of teachers, principals, schools, and districts. Thus, each SASS data set includes a weighting variable called the final weight. According to the NCES documentation for SASS prepared by Tourkin et al. (2010), a final weight is "the product of the initial basic weight, sampling adjustment factor, separate adjustments for nonresponse at each stage of selection, and one or more stages of ratio adjustment to the frame or to independent sources" (p. A-2). The SASS documentation also describes in detail how each final weight is calculated. If a researcher wants to conduct a descriptive study, for example, to know how many public schools nationwide are represented by the sample, the original final weights can be used. Here we present the actual sampled and weighted numbers of public schools over the years (see Table 3).

Table 3

Actually Sampled and Weighted Public Schools

Cycle        1987-88  1990-91  1993-94  1999-2000  2003-04  2007-08  2011-12
Sampled        8,170    9,050    9,100      8,520    8,140    7,460    7,510
Weighted      74,590   78,890   79,620     82,800   87,620   90,470   89,810

Note. Sample sizes are rounded to the nearest 10 per NCES rules. Results have been cleared by NCES.
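
As a rough illustration of how a final weight is used, the sketch below computes a weighted population estimate from a handful of made-up school records. The column names and values are placeholders, not actual SASS variable names; the real names are documented in the SASS codebooks.

```python
# Minimal sketch of a weighted descriptive estimate using a final weight.
# "FINAL_WEIGHT" and "CHARTER" are hypothetical column names for illustration.
import pandas as pd

# Hypothetical public school file: one row per sampled school
schools = pd.DataFrame({
    "CHARTER": [0, 1, 0, 0, 1],
    "FINAL_WEIGHT": [12.4, 8.1, 15.0, 9.7, 6.3],
})

# The sum of the final weights estimates the number of schools in the population
estimated_total = schools["FINAL_WEIGHT"].sum()

# Weighted share of charter schools in the (hypothetical) national population
weighted_charter_share = (
    (schools["CHARTER"] * schools["FINAL_WEIGHT"]).sum() / estimated_total
)

print(f"Estimated schools represented: {estimated_total:,.1f}")
print(f"Weighted charter share: {weighted_charter_share:.2%}")
```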

If the researcher wants to conduct an inferential analysis, a relative weight should be applied instead of the final weight. A relative weight can also be used for descriptive statistics when researchers are not interested in estimating population totals. We have not found an official definition of the relative weight, but it has been widely used in many NCES reports. For instance, regarding regression and other multivariate analyses based on data from complex sample surveys, Huang, Salvucci, Peng, and Owings (1996) made several recommendations, including using a statistical package that allows the use of relative weights. They described the calculation of the relative weight as "the final survey weight divided by the average weight for the group being analyzed" (p. 72). We also noticed that, based on the National Education Longitudinal Study data, Taylor (1998) applied relative weights for descriptive statistics. Using relative weights keeps the analysis nationally representative while keeping the sample size at its originally achieved level, thus avoiding inflating the sample size. No matter which weight is applied, the results are generalizable nationwide. The SASS data set also contains replicate weights, which can be used for variance estimation, a topic that is beyond the scope of this article.
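
The following sketch illustrates the relative-weight calculation described by Huang et al. (1996): each case's final weight divided by the average final weight of the group being analyzed. The data and variable names are invented for illustration only.

```python
# Sketch of the relative-weight calculation:
#   relative weight = final weight / mean(final weight of the analytic group)
# "FINAL_WEIGHT" is a placeholder name, not an official SASS variable.
import pandas as pd

teachers = pd.DataFrame({
    "STATE": ["NE", "NE", "MI", "MI", "MI"],
    "FINAL_WEIGHT": [30.0, 50.0, 20.0, 40.0, 60.0],
})

teachers["REL_WEIGHT"] = (
    teachers["FINAL_WEIGHT"] / teachers["FINAL_WEIGHT"].mean()
)

# Relative weights sum to the achieved sample size rather than the population
# total, so the effective n is not inflated.
assert abs(teachers["REL_WEIGHT"].sum() - len(teachers)) < 1e-9
print(teachers)
```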

Imputation flags. Because a survey might contain missing values, SASS added imputation flags to the data. If researchers are concerned about missing values, the imputation flags can be used to decide whether to include the imputed data and which types of imputed values to accept. For detailed information, please refer to the SASS manual and technical reports.
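
A minimal sketch of how an imputation flag might be used to screen out imputed values is shown below. The flag name and its coding are assumptions made for illustration; the actual flag variables and codes are defined in the SASS codebooks.

```python
# Sketch of using an imputation flag to keep only respondent-reported values.
# "F_SALARY" and its coding (0 = reported, >0 = imputed) are hypothetical.
import pandas as pd

teachers = pd.DataFrame({
    "SALARY": [48000, 52000, 50500, 61000],
    "F_SALARY": [0, 0, 2, 0],  # hypothetical: 2 might indicate donor imputation
})

# Keep only reported salaries, e.g., for a sensitivity analysis
reported_only = teachers[teachers["F_SALARY"] == 0]
print(reported_only["SALARY"].mean())
```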

Control numbers. Originally, SASS data included identifiable information such as names and addresses. To protect respondents' confidentiality, SASS removed the identifiable information and used identification numbers to connect teacher and principal data to data from the schools and districts in which they work. For example, the SASS teacher data have a teacher control number, principal control number, school control number, and district control number, as well as a state ID. The SASS principal data include the school control number, district control number, and state ID; the school data contain the principal control number, district control number, and state ID; and the district data have the district control number and state ID. These control numbers and IDs can be used in data analysis to link teachers and principals with their own schools, districts, and states.
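
The sketch below illustrates the kind of linkage the control numbers make possible, merging a teacher file with a school file. The identifier names are placeholders rather than the actual SASS control-number variables.

```python
# Sketch of linking teacher records to their schools via control numbers.
# "TEACHER_CNTL", "SCHOOL_CNTL", and "DISTRICT_CNTL" are hypothetical names.
import pandas as pd

teachers = pd.DataFrame({
    "TEACHER_CNTL": ["T01", "T02", "T03"],
    "SCHOOL_CNTL": ["S1", "S1", "S2"],
})
schools = pd.DataFrame({
    "SCHOOL_CNTL": ["S1", "S2"],
    "DISTRICT_CNTL": ["D1", "D2"],
    "ENROLLMENT": [450, 1200],
})

# Attach school (and, through the school file, district) context to each teacher
teacher_school = teachers.merge(schools, on="SCHOOL_CNTL", how="left")
print(teacher_school)
```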

Created variables. Based on the original variables, SASS also created some new variables. Some of these were created as composite variables. For example, based on principals' reported degrees, SASS created a composite variable for the highest degree; based on teachers' reported race and ethnicity, SASS created a composite variable for the percentage of minority teachers. SASS also collapsed some continuous variables into categories, including school enrollment, school level, and so on. In addition, SASS merged some principal, school, and district information into the teacher data, and some school and district information into the principal data. These merged variables can be considered context variables, or properties shared by all group members.
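
As a simple illustration of the collapsing described above, the sketch below bins a continuous enrollment variable into categories. The cut points are arbitrary examples, not the official SASS categories.

```python
# Sketch of collapsing a continuous variable (school enrollment) into
# categories, in the spirit of the SASS-created variables.
import pandas as pd

schools = pd.DataFrame({"ENROLLMENT": [95, 340, 780, 1500, 2600]})

schools["ENROLL_CAT"] = pd.cut(
    schools["ENROLLMENT"],
    bins=[0, 200, 500, 1000, float("inf")],  # illustrative cut points only
    labels=["<200", "200-499", "500-999", "1000+"],
)
print(schools)
```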

Data Access and Use

SASS provides data users with several approaches to accessing the data. The first approach is the public-use data. Among the seven cycles of SASS data, the public-use files for the first four cycles can be downloaded directly from the SASS website: the 1987-1988 SASS and 1988-1989 TFS, the 1990-1991 SASS and 1991-1992 TFS, the 1993-1994 SASS and 1994-1995 TFS, and the 1999-2000 SASS and 2000-2001 TFS. However, there are no public-use data files for the 2003-2004 SASS, the 2004-2005 TFS, the 2007-2008 SASS, the 2008-2009 TFS, the 2008-2009 PFS, or the 2011-2012 SASS and its two follow-up surveys, the 2012-2013 TFS and the 2012-2013 PFS.

The second approach is to access SASS data through the restricted-use data. The restricted-use data differ from the public-use data in that they contain identifiable information, which is confidential and protected by federal law. To protect confidentiality, SASS requires data users from qualified organizations to go through a strict licensing process. For example, SASS requires a data security plan, which states that the data should be stored on a stand-alone computer that is not connected to the Internet. Only people who are approved for the license may access the computer and the data. Further, all results based on the restricted-use data must be cleared before dissemination.

The third approach is to access SASS data through NCES's online DataLab system, PowerStats. The SASS data available in PowerStats include the data collected from five SASS questionnaires: the School Questionnaire, Teacher Questionnaire, School Principal Questionnaire, School District Questionnaire, and Library Media Center Questionnaire. To log in to the online PowerStats system, a user must create an account and agree to the terms for using SASS data. Once logged in, new users can visit the learning center to find video training modules and user guides for PowerStats, and they can work through step-by-step exercises. PowerStats supports two major types of analysis: creating descriptive tables and conducting linear or logistic regression analyses.

A Transition From SASS to NTPS

The 2011-2012 SASS was the most recent and also the last administration of SASS. SASS has since completed a major redesign: starting with 2015-2016, it is administered as the NTPS. According to the overview statement on the NTPS website, the redesign aims to achieve three goals: flexibility, timeliness, and integration with other national education data.

There are a few changes with the redesign. First, unlike SASS, which was conducted roughly every four years, NTPS will be conducted every two years. Second, SASS had four major components, while NTPS includes three: teachers, principals, and schools. Third, NTPS samples only America's public elementary and secondary schools, including charter schools. Fourth, NTPS uses both Internet and paper instruments for its primary data collection. Fifth, NTPS has a goal of publishing results in a timely manner, within 12 months after data collection is completed. These changes, along with the integration with other data sources, are important particularly for educational research purposes.

Although NTPS differs from SASS in a few respects, it is based on SASS and still focuses on teachers, principals, and schools. It still collects data on core topics, including teacher and principal preparation, teacher and principal demographics, and school characteristics. The TFS also remains within the NTPS scope. Currently, NTPS has completed its 2014-2015 pilot test, and the 2015-2016 NTPS formal data collection is underway. The 2015-2016 NTPS data will become available in 2017.

Using SASS Data for Educational Research:

A Literature Review

As a nationally representative database, SASS is influential in both policy development and scholarly research. In this section, we discuss how SASS data have been utilized for educational research. Based on NCES's Online Bibliography Search Tool, we identified 46 journal articles that were based on SASS data and published during 2005-2015. We coded the 46 articles by their major research topics, which were identified from two perspectives: the outcome variables and the key predictors. In total, about 20 topics were identified. To facilitate the discussion, the 20 topics were categorized by level: teacher-level topics (8), principal- and school-level topics (8), district- and state-level topics (3), and a federal-level topic (1). Since some studies connected topics at one level to topics at another level, they might be listed under two or more topics. The number of corresponding articles and their citations are presented in the Appendix, while a brief review is presented below.

Teacher Level Topics

As the lowest level of measures available from the SASS data, teacher topics attracted the most attention of educational researchers. Among the 20 topics we identified, eight are teacher-related topics: (a) teacher preparation and qualification, (b) teacher job satisfaction, (c) teacher autonomy, (d) teacher efficacy and commitment, (e) teacher professional development (PD), (f) teacher retention, attrition, and turnover, (g) teacher salary, and (h) teachers of certain subjects and certain career stages.

Teacher preparation and qualification. Among the several teacher-related topics examined with SASS data, teacher quality is one of the most frequently visited. The studies examined various aspects of teacher quality: alternative teacher certification (Cohen-Vogel & Smith, 2007), teachers' educational background (Angrist & Guryan, 2008), teachers' attendance of highly competitive undergraduate institutions (Baker & Dickerson, 2006), pre-service teacher preparation (e.g., practice teaching and methods-related coursework) (Ingersoll, Merrill, & May, 2012), teachers' perceptions of preparedness (Ronfeldt, Schwartz, & Jacob, 2014), in-field teaching rate (Lee, 2012), average teaching experience (Bodine et al., 2008), percentage of teachers with an emergency, provisional, or probationary teaching credential (Bodine et al., 2008), and highly qualified teachers (Eckert, 2013).

Teacher job satisfaction. Teacher job satisfaction is another prominent topic that emerged from the reviewed studies based on SASS data. Six of the studies predicted teacher job satisfaction from various influencing factors, including teacher autonomy and teacher-student racial mismatch (Renzulli, Parrott, & Beattie, 2011), principal effectiveness (Grissom, 2011), the principal-teacher relationship (Price, 2012), principal background and school processes (e.g., career and working conditions, staff collegiality, administrative support, student behavior, and teacher empowerment) (Shen, Leslie, Spybrook, & Ma, 2012; Xia, Izumi, & Gao, 2015), and performance-related pay (Belfield & Heywood, 2008). We also noticed that, among all the factors, the only negative effects were related to teacher-student racial mismatch and performance-related pay.

Teacher autonomy. Using SASS data, several studies examined the effects of two types of teacher autonomy: (a) influence on school decisions and (b) control over classroom-level decisions. Researchers investigated the following outcomes associated with teacher autonomy: teacher job satisfaction (Renzulli et al., 2011; Shen, Leslie, Spybrook, & Ma, 2012), teacher turnover (Renzulli et al., 2011), instructional time for social studies (Fitchett, Heafner, & Lambert, 2014a, 2014b), and teachers' participation in PD activities (Desimone, Smith, & Phillips, 2007; Smith & Rowley, 2005). In all of these studies, teacher autonomy was not the single predictor but rather one of several predictors. For instance, in Shen et al.'s (2012) study, teacher autonomy was one of six school process indicators. Ingersoll and May (2012) also used these two autonomy variables, along with six other organizational characteristics and conditions, to predict teacher turnover. Fitchett et al. (2014a, 2014b) used some of the teacher autonomy variables, among others, to predict social studies instructional time. In some studies, teacher autonomy was also used as an outcome variable. For instance, to examine whether the principal-teacher power relationship is a win-win situation or a zero-sum game, Shen and Xia (2012) used principals' influence to predict teachers' influence. Grissom, Nicholson-Crotty, and Harrington (2014) investigated NCLB's impacts on teachers' perceptions of their classroom control.

Teacher efficacy and teacher commitment. Based on SASS data, four studies examined teacher efficacy and/or teacher commitment. Eckert's (2013) study differentiated personal teacher efficacy (PTE) from general teacher efficacy (GTE). The study found that teacher preparation level (the number of education courses taken and the length of student teaching) is a positive and significant predictor of both PTE and GTE, and that "highly qualified teacher" status is a significant, positive predictor of PTE but not GTE. Some researchers examined the relationship between teacher efficacy and teacher commitment. For example, Ware and Kitsantas (2007) developed two teacher efficacy scales and one collective teacher efficacy scale and found that they significantly predict teacher professional commitment. Further, Ware and Kitsantas (2011) found that besides the teacher efficacy scales, principal efficacy also affected teacher commitment.

Some researchers treated teacher autonomy and teacher efficacy as synonyms. Bodine et al. (2008) found that, compared with charter school teachers working in white schools, charter school teachers serving predominantly black students reported lower levels of efficacy, as measured by less influence over classroom and school issues. Hancock and Scherff (2010) conceived of self-efficacy as a group of factors found in SASS, including teachers' school influence, classroom control, and curricular control, and used them along with other variables to predict teacher attrition risk. They found that self-efficacy was not a significant predictor.

Teachers' professional development. Teachers' PD also attracted much attention, and researchers connected it with various issues such as teacher job satisfaction and teacher turnover. Ingersoll and May (2012) used teachers' PD activities, along with seven other organizational characteristics and conditions, to predict teacher turnover. Smith and his colleagues (2005, 2007, 2011) differentiated content-focused PD from other types of PD activities (e.g., instruction-related PD and classroom management PD) and connected them to teacher influence/control, teacher turnover, and school and state policy attributes. For example, Smith and Rowley (2005) found that both teacher influence over school policy and teacher control over classroom practices significantly predict teachers' participation in content-related PD, which in turn predicts teacher turnover. Desimone et al. (2007) examined the relationship between different policy attributes and teachers' participation in four types of PD activities. They found that both authority and stability predict teachers' participation in PD activities, while other policy attributes such as power and consistency do not. Further, Phillips, Desimone, and Smith (2011) found that school and state policy context is more predictive of teacher participation in content-focused PD in a high-stakes subject (mathematics) than in a low-stakes subject (science).

Teacher retention, attrition, and turnover. Although there are numerous studies on teacher retention, attrition, and turnover, most are limited to a district or school context. SASS enables researchers to investigate these topics at the state and national levels. Using SASS, researchers can study retention and attrition in specific subjects with a nationally representative sample. For example, based on both SASS and TFS data, Hancock and Scherff (2010) and Hahs-Vaughn and Scherff (2008) reported the attrition risk of English language arts teachers, and Ingersoll and May (2012) explored differences in retention between science teachers and mathematics teachers. Furthermore, based on the SASS data, researchers are able to associate various factors with teacher attrition and retention, including student socioeconomic status (Feng, 2010; Grissom, 2011), racial and ethnic composition (Renzulli et al., 2011), teacher salary (Ingersoll & May, 2012; Jones, 2013), teacher autonomy (Ingersoll & May, 2012), teacher preparation programs (Eckert, 2013; Ronfeldt et al., 2014), school structure and policy (Feng, 2010; Smith & Rowley, 2005), and principal effectiveness (Grissom, 2011).

Teacher salary. Salary is one of the most interesting topics at the teacher level, as 11 of the papers we found addressed salary issues. Accountability and merit pay are the two foci (Angrist & Guryan, 2008; Belfield & Heywood, 2008; Bifulco, 2010; Loeb & Strunk, 2007; Martin, 2010; Jones, 2013). Teacher salary has also been found to be an important predictor of teacher job satisfaction (Shen et al., 2012), teacher retention (Ingersoll & May, 2012), teacher attrition in disadvantaged schools (Goldhaber, Destler, & Player, 2010), and teacher working conditions in charter schools (Bodine et al., 2008).

Teachers of certain subjects and career stages. Instead of studying teacher issues in general, some studies focused on teachers of one or two particular subjects, or on teachers at certain career stages, such as new teachers. For instance, Hahs-Vaughn and Scherff (2008) focused on first-year English teachers' attrition, mobility, and retention; Hancock and Scherff (2010) examined English teachers' attrition risk; Ingersoll and May (2012) examined mathematics and science teachers' turnover; and Ingersoll et al. (2012) examined new mathematics and science teachers' preparation and found it differed from that of other new teachers in various respects. Some studies examined mathematics and science teachers' participation in various professional development activities (see Desimone et al., 2007; Phillips et al., 2011). Hill and Dalton (2013) explored the link between the in-field mathematics teaching rate and student mathematics achievement.

Principal and School Level Topics

SASS data were also widely used to investigate principal and school issues. Most studies treated principal and school variables at the same level. Here we first review principal-related topics and then school-related topics. Principal-related topics include principals' leadership style, their experience and academic background, and their efficacy beliefs. The studies on school-related topics used SASS data to evaluate the function of various school characteristics, including school type, school level, school performance, school location, racial and ethnic composition, student behavior or discipline problems, and parental involvement.

Principalship. The principal is the most influential person in a school. SASS covers various aspects of principalship and its relationship with other school elements. Principal variables measured by SASS questions were mostly used as predictors. Shen et al. (2012) used principals' experience of having been a department head or an athletic coach/director, drawn from the SASS questionnaires, as two predictors of teacher job satisfaction. Likewise, Grissom (2011) found that school principals' effectiveness improved teacher job satisfaction and reduced teacher turnover. Ware and Kitsantas (2011) suggested that principals' efficacy in policy and spending is negatively related to teacher commitment. Principal-related variables can also be used as dependent variables. Urick and Bowers (2014) used items from SASS to evaluate and compare types of principal leadership. Baker, Orr, and Young (2007) used SASS to investigate changes in principal leadership preparation degree programs from 1990 to 2003.

School process. The concept of school process was first brought up by Porter (1991), and several studies based on SASS data have examined its impacts. By aggregating teacher perceptions, Shen et al. (2012) conceived eight school process indicators: teachers' influence on school issues, teachers' control over classroom issues, staff collegiality, career and working conditions, administrative communication, administrative support, parental support, and student behavior. They found that several school process indicators (e.g., career and working conditions, staff collegiality, and administrative support) significantly predict teacher job satisfaction. Adapting the conceptual framework of Shen et al. (2012), Xia et al. (2015) examined alternative schools' teacher job satisfaction and its association with school process, and the findings were consistent with Shen et al.'s (2012). Considering the unique features of alternative schools, Izumi, Shen, and Xia (2015) conceived three sub-constructs of school process: support programs, teaching methods, and instructional opportunities. They found that school process, along with staffing characteristics, is an important predictor of graduation rate in alternative schools. Some studies did not explicitly use the concept of school process, but similar variables were implicitly used to predict outcomes such as teacher turnover. For instance, Ingersoll and May (2012) examined the association between teacher turnover and eight school "organizational characteristics and conditions," which include student discipline problems, teacher salary, teacher influence on school decisions, teacher control in the classroom, leadership and support, school resources, and two types of teacher professional development (PD) activities. These key school characteristics and conditions are close to Shen et al.'s (2012) school process variables.

School performance. SASS data do not include direct measures of student performance, but they do include school-level outcomes such as adequate yearly progress (AYP)-related measures and associated rewards and sanctions, graduation rate, daily attendance rate, and so on. For example, Izumi et al. (2015) found that alternative schools' graduation rates are associated with both staffing characteristics and school process. Shen, Washington, Palmer, and Xia (2014) examined two school performance outcomes, meeting AYP and being free from sanctions, and found that parental involvement variables explained a significant amount of variance in both outcomes. Ma, Shen, and Krenn (2014) examined both meeting AYP and schools' staying off mandatory improvement, and found that parent-initiated parental involvement significantly predicts both.

School type. Since SASS collects data from traditional public schools, charter schools, and private schools, comparisons among these three types of schools in various respects became possible. While teacher compensation seems to be the central topic when comparing public schools to private schools (Goldhaber et al., 2010; Martin, 2010), we found more themes emerging from papers on charter schools.

In general, charter schools are supported by state funds. However, unlike traditional public schools, charter schools are granted more autonomy, which is considered important for school improvement. With America's growing public interest in alternatives to traditional public schools, charter schools play an increasingly important role in U.S. education. Since the 1999-2000 cycle, SASS data from charter schools have been available to researchers. Among the papers we reviewed, we found four that focused on charter school issues based on the SASS database. Barghaus and Boe (2011) evaluated charter schools against their key legislative objectives: whether charter schools, compared to regular schools, brought more school autonomy, more school and classroom options, and more teacher influence. Renzulli et al. (2011) compared charter schools to traditional public schools in the areas of teacher satisfaction and teacher turnover. Baker and Dickerson (2006) compared undergraduate preparation among traditional public school teachers, private school teachers, and charter school teachers. Bodine et al. (2008) examined the charter school sector alone, asking how charter schools were influenced by their state and local context in terms of material and human resources. They found that charter schools depend much more on local resources than on state resources. They also compared conversion charter schools with start-up charter schools, and compared charter schools with respect to student population and state policy environment.

School location. School location is one of the most important school characteristics. In particular, previous studies have noted various challenges faced by urban and rural schools with regard to teacher working conditions, school leadership, student performance and health, and parental involvement (e.g., Abel & Sewell, 1999; Damore, 2002; Lleras, 2008; Hentschke, Nayfack, & Wohlstetter, 2009; Scheer, Borden, & Donnermeyer, 2000; Shen, Rodriguez-Camps, & Rincones-Gomez, 2000). Among the 46 studies we reviewed, four used SASS to focus on issues in urban schools, including teacher merit pay (Bifulco, 2010), teacher job satisfaction (Renzulli et al., 2011), teacher efficacy (Eckert, 2013), and principal leadership (Urick & Bowers, 2014). Meanwhile, two studies used SASS to compare across locations. Ma et al. (2014) compared parental involvement and school outcomes among urban, suburban, and rural schools. Ingersoll and May (2012) noted that, compared to other types of school locale, the turnover rate of mathematics and science teachers tends to be higher in urban schools.

Parental involvement. Based on teachers', principals', or schools' reports, some studies examined parental involvement. Among these studies, parental involvement was used to predict several outcomes: teacher job satisfaction (Shen et al., 2012), school performance such as meeting AYP and being free from sanctions (Shen et al., 2014), and meeting AYP and the need for mandatory improvement (Ma et al., 2014).

Student behavior. SASS data do not include direct measures of student characteristics. Instead, student demographic information and other variables were collected through teachers', principals', and schools' reports. Based on these reports, some studies examined student behavior or discipline problems. Student behavior was examined in two ways: as the focal outcome (Kelly, 2010) and as a key predictor of other issues such as teacher job satisfaction (Shen et al., 2012) or teacher turnover (Ingersoll & May, 2012).

Students of certain populations. A few studies focused on a segment of the student population, or on schools with a predominant group of students. For example, Grissom (2011) examined teacher job satisfaction and teacher turnover by focusing on schools with large numbers of disadvantaged students, that is, students eligible for free or reduced-price lunch and students of color; Lee (2012) addressed equity and adequacy issues for disadvantaged minority students; and Kelly (2010) examined student behavior in predominantly black schools.

District and State Level Topics

Three themes at the state level emerged from the papers reviewed: policy analyses on (a) professional development, (b) teacher qualification, and (c) testing and accountability. We also noticed that a single study examined a district-level policy; since it is related to testing and accountability, we combined it with the state-level topics.

Policy on professional development. Desimone et al. (2007) and Phillips et al. (2011) designed hierarchical linear models to investigate the influence of state and school policy on teacher professional development. They evaluated the consistency, authority, power, stability, and specificity of state and school policy, where those policy attributes were defined and measured with items extracted from SASS questionnaires. Smith (2007) studied how state induction policy is associated with beginning teachers' participation in mentorship programs.

Policy on teacher qualification. Angrist and Guryan (2008) used SASS data to evaluate the extent to which state teacher testing requirements change teacher wages and teacher quality. Bodine et al. (2008) observed that charter schools tend to hire more uncredentialed teachers in states with loose regulation of charter schools. Lee (2012) noted a shortage of the qualified teachers needed to improve student performance in disadvantaged schools.

Policy on testing and accountability. Fitchett et al. (2014a, 2014b) reported that state testing policy has a significant influence on social studies class time in elementary schools: teachers reported spending more time on social studies instruction in states that mandated social studies testing. Loeb and Strunk (2007) found that, on the one hand, accountability policies could improve student mathematics achievement; on the other hand, to make accountability policies more effective, states need to allow more local autonomy, such as over revenue raising or teacher hiring. Bifulco (2010) and Martin (2010) discussed how district performance-based pay schemes influence teachers' salaries at the school and individual levels.

Federal Level Topics

Based on SASS data, several studies examined the impacts of federal educational policies such as No Child Left Behind (NCLB). For example, using both the Common Core of Data and several waves of SASS data, Dee, Jacob, and Schwartz (2013) examined NCLB's effects on multiple district, school, and teacher traits. They found that NCLB increased per-pupil spending, teacher compensation, and the share of elementary school teachers with advanced degrees. They also found that NCLB had no effects on class size or overall instructional time in core academic subjects, but that schools did reallocate time away from science and social studies toward the tested subject of reading. Following the same strategy, Grissom et al. (2014) investigated NCLB's impacts on teachers' working environments and their job attitudes (satisfaction and commitment). They found that NCLB had positive effects on teachers' perceptions of their classroom control and administrator support, negative effects on teacher cooperation, and no effects on teacher job satisfaction or commitment. Based on both SASS data and the national Early Childhood Longitudinal Study (ECLS) data, Reback, Rockoff, and Schwartz (2011) examined NCLB's impacts on teachers' behavior and students' achievement. Among other findings, they found that NCLB lowered teachers' perceptions of job security.

Discussions Based on the Review

The literature revealed several features regarding the utilization of SASS data. The first feature concerns the levels of research topics that the SASS data can be used to address. The literature review indicated that research topics based on SASS data range from the teacher level up to the federal level. More importantly, more and more studies have addressed educational issues using multi-level data. For example, to predict teacher turnover, Ingersoll and May (2012) applied eight school-level organizational characteristics and conditions; Angrist and Guryan (2008) connected state-mandated certification testing requirements to teacher quality and teacher salaries; and Dee et al. (2013) even examined the effects of the federal NCLB law on multiple district, school, and teacher traits.

The second feature concerns the utilization of cycles or waves of SASS data. Our review found that studies based on SASS data used either a single wave or several waves of data. While most studies were based on a single wave of SASS data, some used both SASS data and the subsequent TFS data to examine teacher attrition and retention (e.g., Ingersoll & May, 2012), and some used several waves of SASS data for longitudinal studies such as trend studies or comparative interrupted time series (CITS) studies (Dee et al., 2013; Grissom et al., 2014).

The third feature is related to the first two and concerns research design and data analysis methods. By exploiting the multilevel data structure, researchers can design and conduct either single-level or multilevel analyses. Among the reviewed studies, the single-level analyses applied include descriptive statistics (e.g., Baker et al., 2007; Bodine et al., 2008; Hahs-Vaughn & Scherff, 2008; Hill & Dalton, 2013), t-tests (e.g., Cohen-Vogel & Smith, 2007), ANOVA (e.g., Fitchett et al., 2014b), and regression (e.g., Baker & Dickerson, 2006; Grissom, 2011; Kee, 2012; Lee, 2012; Ware & Kitsantas, 2007), while the multilevel analyses include two-level analyses (e.g., Eckert, 2013; Fitchett et al., 2014b; Shen et al., 2012; Shen et al., 2014) and three-level analyses (e.g., Desimone et al., 2007; Phillips et al., 2011). Based on the longitudinal data features, researchers can also conduct longitudinal studies; among the reviewed studies, the longitudinal designs include trend studies (e.g., Grissom et al., 2014) and cohort studies (e.g., Eckert, 2013; Feng, 2010; Hahs-Vaughn & Scherff, 2008; Renzulli et al., 2011; Ronfeldt et al., 2014). To create composite variables, some researchers also applied factor analysis (e.g., Grissom, 2011; Smith & Rowley, 2005; Ware & Kitsantas, 2007, 2011).
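
To make the multilevel option concrete, the sketch below fits a two-level model (teachers nested in schools) of the general kind the reviewed studies describe. The data are simulated and the variable names generic; it does not reproduce any cited model or actual SASS variables.

```python
# A minimal two-level (teachers-within-schools) sketch on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, per_school = 50, 20
school = np.repeat(np.arange(n_schools), per_school)          # school index per teacher
school_effect = rng.normal(0.0, 0.5, n_schools)[school]        # random school intercepts
autonomy = rng.normal(0.0, 1.0, n_schools * per_school)        # teacher-level predictor
satisfaction = 3.0 + 0.3 * autonomy + school_effect + rng.normal(0.0, 1.0, autonomy.size)

df = pd.DataFrame(
    {"satisfaction": satisfaction, "autonomy": autonomy, "school": school}
)

# Random intercept for schools, fixed effect for teacher-reported autonomy
result = smf.mixedlm("satisfaction ~ autonomy", df, groups=df["school"]).fit()
print(result.summary())
```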

The fourth feature lies in the fact that SASS data are representative at both the state and national levels, and they can therefore be used to examine the impacts of state and federal policies. Although SASS does not include state or federal questionnaires, many items can be conceived as measures or indicators of state or federal policies (see Dee et al., 2013; Desimone et al., 2007; Grissom et al., 2014). Even when studies did not directly focus on state or federal policy, the results can be generalized to the state and national scenes and have implications for policy makers.

The final feature rests with the connections between SASS and other national databases. The literature indicated that SASS can be merged with other national databases so that its application can be further extended. For example, Dee et al. (2013) used both the Common Core of Data and several waves of SASS data to examine NCLB's effects on multiple district, school, and teacher traits. Hill and Dalton (2013) used the national High School Longitudinal Study of 2009 (HSLS:09) data to complement SASS data in their investigation of the distribution of qualified mathematics teachers. Based on both SASS and the national Early Childhood Longitudinal Study (ECLS) data, Reback et al. (2011) examined NCLB's impact on teachers' behavior and students' achievement.

Implications for Other Countries

Despite many differences in educational policy, education systems, and student populations, other countries face just as many educational issues as America does. However, while more and more educational researchers in the U.S. have relied on SASS or other educational databases to investigate various educational topics over the decades, their counterparts in many countries do not have similar resources to investigate those topics in their own contexts. As far as we know, few large-scale national education databases are available in most countries. By contrast, NCES alone has developed more than 20 large-scale databases for different educational purposes. Researchers in other countries either collect data on their own or draw data from international databases such as the Program for International Student Assessment (PISA) or the Trends in International Mathematics and Science Study (TIMSS). The former strategy is often expensive, inefficient, and small in scale, while the latter has its drawbacks in terms of local demand, since international databases do not focus on the specific educational topics of a single country. Therefore, many countries should establish their own national databases in the field of education. Based on our own research experience and our review of SASS, we offer a few recommendations for building national databases.

First, ensuring technical soundness is the most critical issue for a database. The methods and procedures of SASS were reviewed extensively by professionals. Taking the 2007-2008 cycle as an example, SASS used a stratified probability sample design based on a revised 2005-2006 Common Core of Data (CCD) sampling frame; SASS considered the response burden for schools as well as the balance of the samples; SASS used a computer program to perform quality control of each survey and determine eligibility; and SASS used weighting procedures to reduce biases from unit nonresponse. Because of this careful design, the overall response rate for SASS reached 72.4% for public school teachers and 65.9% for private school teachers. Given that many countries have vast populations and relatively unbalanced development across regions, it is particularly important to refer to the U.S. experience with national-level surveys.
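
The sketch below illustrates, with made-up numbers, the general logic of a class-based nonresponse weighting adjustment of the kind survey documentation describes: within each adjustment class, respondents' base weights are inflated to account for nonrespondents. It is not the actual SASS procedure.

```python
# Illustrative class-based nonresponse weighting adjustment on toy data.
import pandas as pd

sample = pd.DataFrame({
    "stratum": ["A", "A", "A", "B", "B"],
    "base_weight": [10.0, 10.0, 10.0, 20.0, 20.0],
    "responded": [1, 1, 0, 1, 1],
})

# Adjustment factor per class = total base weight / respondents' base weight
all_weight = sample.groupby("stratum")["base_weight"].sum()
resp_weight = sample[sample["responded"] == 1].groupby("stratum")["base_weight"].sum()
factors = (all_weight / resp_weight).rename("adj_factor").reset_index()

respondents = sample[sample["responded"] == 1].merge(factors, on="stratum")
respondents["adjusted_weight"] = respondents["base_weight"] * respondents["adj_factor"]
print(respondents)
```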

Second, it is very important to widen access to the data for maximum impact. As mentioned earlier, SASS data sets have a public-use version and a restricted-use version, which allows maximum access while securing private information. The public-use version can be downloaded directly from the NCES website, while the restricted-use version contains identifiable information, so only qualified organizations in the U.S. that have gone through a strict licensing process can request and use the restricted-use data.

Moreover, various institutes and associations in the U.S. sponsor the use of national databases. For example, the Institute of Education Sciences (IES) provides free training on using the datasets. The American Educational Research Association (AERA) established a Grants Program that seeks to "stimulate research on U.S. education issues using data from the large-scale, national and international data sets supported by the National Center for Education Statistics (NCES), NSF, and other federal agencies" (Grants Program, 2015).

Finally, although SASS stands as an excellent example of a national-level survey, our review also revealed a few issues with SASS, and we suggest other countries pay attention to three issues when building their own databases. First, the stability of core items should be ensured. A few papers reported that some SASS items are good for cross-sectional analysis but difficult for longitudinal study because of stability issues: over the cycles of SASS, (a) some items failed to keep identical rating scales, (b) some questions were slightly changed or removed entirely, and (c) response options were added, changed, or deleted. Second, while ensuring stability, it is also important to add new theoretical or practical topics of the time. For example, computer-based teaching has been a prevailing topic over the past ten years, but very limited information on it can be obtained from SASS. Third, it would be a plus if all national datasets were built with similar criteria and standards so that they could be efficiently merged with each other. Currently it is not easy to merge SASS with other national datasets because they tend to use different identification numbers and coding schemes.

In summary, large national databases such as SASS provide educational researchers with a platform to conduct research that improves educational practice. Based on the U.S. experience, investment in high-quality national databases will lead to quality research and will eventually benefit the entire educational system and everyone in the country.

References

Abel, M. H., & Sewell, J. (1999). Stress and burnout in rural and urban secondary school teachers. The Journal of Educational Research, 92(5), 287-293.

Angrist, J. D., & Guryan, J. (2008). Does teacher testing raise teacher quality? Evidence from state certification requirements. Economics of Education Review, 27(5), 483-503.

Baker, B. D., & Dickerson, J. L. (2006). Charter schools, teacher labor market deregulation, and teacher quality: Evidence from the Schools and Staffing Survey. Educational Policy, 20(5), 752-778.

Baker, B. D., Orr, M. T., & Young, M. D. (2007). Academic drift, institutional production, and professional distribution of graduate degrees in educational leadership. Educational Administration Quarterly, 43(3), 279-318.

Barghaus, K. M., & Boe, E. E. (2011). From policy to practice: Implementation of the legislative objectives of charter schools. American Journal of Education, 118(1), 57-86.

Belfield, C. R., & Heywood, J. S. (2008). Performance pay for teachers: Determinants and consequences. Economics of Education Review, 27(3), 243-252.

Bifulco, R. (2010). The influence of performance-based accountability on the distribution of teacher salary increases. Education Finance and Policy, 5(2), 177-199.

Bodine, E., Fuller, B., González, M., Huerta, L., Naughton, S., Park, S., & Teh, L. W. (2008). Disparities in charter school resources—The influence of state policy and community. Journal of Education Policy, 23(1), 1-33.

Cohen-Vogel, L., & Smith, T. M. (2007). Qualifications and assignments of alternatively certified teachers: Testing core assumptions. American Educational Research Journal, 44(3), 732-753.

Damore, D. T. (2002). Preschool and school age activities: Comparison of urban and suburban populations. Journal of Community Health, 27(3), 203-211.

Dee, T. S., Jacob, B., & Schwartz, N. L. (2013). The effects of NCLB on school resources and practices. Educational Evaluation and Policy Analysis, 35(2), 252-279.

Desimone, L., Smith, T., & Phillips, K. (2007). Does policy influence mathematics and science teachers’ participation in professional development? Teachers College Record, 109(5), 1086-1122.

Eckert, S. A. (2013). What do teaching qualifications mean in urban schools? A mixed-methods study of teacher preparation and qualification. Journal of Teacher Education, 64(1), 75-89.

Feng, L. (2010). Hire today, gone tomorrow: New teacher classroom assignments and teacher mobility. Education Finance and Policy, 5(3), 278-316.

Fitchett, P. G., Heafner, T. L., & Lambert, R. (2014a). Examining elementary social studies marginalization: A multilevel model. Educational Policy, 28(1), 40-68.


Fitchett, P. G., Heafner, T. L., & Lambert, R. (2014b). Assessment, autonomy, and elementary social studies time. Teachers College Record, 116(10), 1-36.

Goldhaber, D., Destler, K., & Player, D. (2010). Teacher labor markets and the perils of using hedonics to estimate compensating differentials in the public sector. Economics of Education Review, 29(1), 1-17.

Grants Program. (2015). Retrieved May 27, 2015, from http://www.aera.net/Default.aspx?TabID=10242

Grissom, J. A. (2011). Can good principals keep teachers in disadvantaged schools? Linking principal effectiveness to teacher satisfaction and turnover in hard-to-staff environments. Teachers College Record, 113(11), 2552-2585.

Grissom, J. A., Nicholson-Crotty, S., & Harrington, J. R. (2014). Estimating the effects of No Child Left Behind on teachers’ work environments and job attitudes. Educational Evaluation and Policy Analysis, 36(4), 417-436.

Hahs-Vaughn, D. L., & Scherff, L. (2008). Beginning English teacher attrition, mobility, and retention. The Journal of Experimental Education, 77(1), 21-54.

Hancock, C. B., & Scherff, L. (2010). Who will stay and who will leave? Predicting secondary English teacher attrition risk. Journal of Teacher Education, 61(4), 328-338.

Hentschke, G. C., Nayfack, M. B., & Wohlstetter, P. (2009). Exploring superintendent leadership in smaller urban districts: Does district size influence superintendent behavior? Education and Urban Society, 41(3), 317-337.

Hill, J. G., & Dalton, B. (2013). Student math achievement and out-of-field teaching. Educational Researcher, 42(7), 403-405.

Huang, G., Salvucci, S., Peng, S., & Owings, J. (1996). National education longitudinal study of 1988 (NELS:88): Research framework and issues (Working Paper Series). Retrieved from http://files.eric.ed.gov/fulltext/ED418149.pdf

Hussein, S. (2011). The use of large scale datasets in UK social care research. London, England: National Institute for Health Research.

Ingersoll, R. M., Merrill, L., & May, H. (2012). Retaining teachers: How preparation matters. Educational Leadership, 69(8), 30-34.

Ingersoll, R. M., & May, H. (2012). The magnitude, destinations, and determinants of mathematics and science teacher turnover. Educational Evaluation and Policy Analysis, 34(4), 435-464.

Izumi, M., Shen, J., & Xia, J. (2015). Determinants of graduation rate of public alternative schools. Education and Urban Society, 47(3), 307-327.

Jones, M. D. (2013). Teacher behavior under performance pay incentives. Economics of Education Review, 37, 148-164.

Kee, A. N. (2012). Feelings of preparedness among alternatively certified teachers: What is the role of program features? Journal of Teacher Education, 63(1), 23-38.

Kelly, S. (2010). A crisis of authority in predominantly black schools? Teachers College Record.

Lee, J. (2012). Educational equity and adequacy for disadvantaged minority students: School and teacher resource gaps toward national mathematics proficiency standard. The Journal of Educational Research, 105(1), 64-75.

Lleras, C. (2008). Race, racial concentration, and the dynamics of educational inequality across urban and suburban schools. American Educational Research Journal, 45(4), 886-912.

Loeb, S., & Strunk, K. (2007). Accountability and local control: Response to incentives with and without authority over resource generation and allocation. Education Finance and Policy, 2(1), 10-39.

Ma, X., Shen, J., & Krenn, H. Y. (2014). The relationship between parental involvement and adequate yearly progress among urban, suburban, and rural schools. School Effectiveness and School Improvement, 25(4), 629-650.

Marks, H. M., & Nance, J. P. (2007). Contexts of accountability under systemic reform: Implications for principal influence on instruction and supervision. Educational Administration Quarterly, 43(1), 3-37.

Martin, S. M. (2010). The determinants of school district salary incentives: An empirical analysis of where and why. Economics of Education Review, 29(6), 1143-1153.

Phillips, K. J., Desimone, L., & Smith, T. (2011). Teacher participation in content-focused professional development & the role of state policy. Teachers College Record, 113(11), 2586-2630.

Porter, A. C. (1991). Creating a system of school process indicators. Educational Evaluation and Policy Analysis, 13(1), 13-29.

Price, H. E. (2012). Principal-teacher interactions: How affective relationships shape principal and teacher attitudes. Educational Administration Quarterly, 48(1), 39-85.

Reback, R. L., Rockoff, J. E., & Schwartz, H. (2011). Under pressure: Job security, resource allocation, and productivity in schools under NCLB. NBER Working Paper (w16745).

Renzulli, L. A., Parrott, H. M., & Beattie, I. R. (2011). Racial mismatch and school type: Teacher satisfaction and retention in charter and traditional public schools. Sociology of Education, 84(1), 23-48.

Ronfeldt, M., Schwartz, N., & Jacob, B. (2014). Does pre-service preparation matter? Examining an old question in new ways. Teachers College Record, 116(10), 1-46.

Scheer, S. D., Borden, L. M., & Donnermeyer, J. F. (2000). The relationship between family factors and adolescent substance use in rural, suburban, and urban settings. Journal of Child and Family Studies, 9(1), 105-115.

Shen, J., Rodriguez-Camps, L., & Rincones-Gomez, R. (2000). Characteristics of urban principalship: A national trend study. Education and Urban Society, 32(4), 481-491.


Shen, J., Leslie, J. M., Spybrook, J. K., & Ma, X. (2012). Are principal background and school processes related to teacher job satisfaction? A multilevel study using Schools and Staffing Survey 2003-2004. American Educational Research Journal, 49(2), 200-230.

Shen, J., Washington, A. L., Palmer, L. B., & Xia, J. (2014). Effects of traditional and nontraditional forms of parental involvement on school-level achievement outcome: An HLM study using SASS 2007-2008. The Journal of Educational Research, 107(4), 326-337.

Shen, J., & Xia, J. (2012). The relationship between teachers’ and principals’ decision-making power: Is it a win-win situation or a zero-sum game? International Journal of Leadership in Education, 15(2), 153-174.

Smith, T. M., & Rowley, K. J. (2005). Enhancing commitment or tightening control: The function of teacher professional development in an era of accountability. Educational Policy, 19(1), 126-154.

Smith, T. M. (2007). How do state-level induction and standards-based reform policies affect induction experiences and turnover among new teachers? American Journal of Education, 113(2), 273-309.

Taylor, C. (1998). Does money matter? An empirical study introducing resource costs and student needs to educational production function analysis. In W. Fowler (Ed.), pp. 75-97. Retrieved from https://nces.ed.gov/pubs98/dev97/98212g.asp#4

Tourkin, S., Thomas, T., Swaim, N., Cox, S., Parmer, R., Jackson, B., ... Zhang, B. (2010). Documentation for the 2007-2008 Schools and Staffing Survey (NCES 2010-332). U.S. Department of Education. Washington, DC: National Center for Education Statistics. Retrieved from http://nces.ed.gov/pubs2010/2010332.pdf

Urick, A., & Bowers, A. J. (2014). What are the different types of principals across the United States? A latent class analysis of principal perception of leadership. Educational Administration Quarterly, 50(1), 96-134.

Ware, H. W., & Kitsantas, A. (2007). Teacher and collective efficacy beliefs as predictors of professional commitment. The Journal of Educational Research, 100(5), 303-310.

Ware, H. W., & Kitsantas, A. (2011). Predicting teacher commitment using principal and teacher efficacy variables: An HLM approach. The Journal of Educational Research, 104(3), 183-193.

Xia, J., Izumi, M., & Gao, X. (2015). School process and teacher job satisfaction at alternative schools: A multilevel study using SASS 2007-2008 data. Leadership

Appendix I: Reviewed Journal Articles and Coded Research Topics

Teacher Topics

Teacher Preparation and Qualification (N = 8): Angrist & Guryan, 2008; Baker & Dickerson, 2006; Bodine et al., 2008; Cohen-Vogel & Smith, 2007; Eckert, 2013; Ingersoll et al., 2012; Lee, 2012; Ronfeldt et al., 2014

Teacher Job Satisfaction (N = 6): Belfield & Heywood, 2008; Grissom, 2011; Price, 2012; Renzulli et al., 2011; Shen et al., 2012; Xia et al., 2015

Teacher Autonomy (N = 9): Desimone et al., 2007; Fitchett et al., 2014a, 2014b; Grissom et al., 2014; Ingersoll et al., 2012; Renzulli et al., 2011; Shen et al., 2012; Smith & Rowley, 2005; Xia et al., 2015

Teacher Efficacy and Teacher Commitment (N = 5): Bodine et al., 2008; Eckert, 2013; Hancock & Scherff, 2010; Ware & Kitsantas, 2011; Ware & Kitsantas, 2007

Teachers’ Professional Development (N = 4): Desimone et al., 2007; Ingersoll et al., 2012; Phillips et al., 2011; Smith & Rowley, 2005

Teacher Retention, Attrition, and Turnover (N = 11): Hancock & Scherff, 2010; Hahs-Vaughn & Scherff, 2008; May, 2012; Feng, 2010; Grissom, 2011; Renzulli et al., 2011; Jones, 2013; Ingersoll & May, 2012; Eckert, 2013; Ronfeldt et al., 2014; Smith & Rowley, 2005

Teacher Salary (N = 10): Angrist & Guryan, 2008; Belfield & Heywood, 2008; Bifulco, 2010; Loeb & Strunk, 2007; Martin, 2010; Jones, 2013; Shen et al., 2012; Ingersoll & May, 2012; Goldhaber, Destler, & Player, 2010; Bodine et al., 2008

Teachers of Certain Subjects and of Certain Stages (N = 7): Desimone et al., 2007; Hahs-Vaughn & Scherff, 2008; Hancock & Scherff, 2010; Hill & Dalton, 2013; Ingersoll & May, 2012; Ingersoll et al., 2012; Phillips et al., 2011

Principal and School Topics

Principalship (N = 5): Shen et al., 2012; Grissom, 2011; Ware & Kitsantas, 2011; Urick & Bowers, 2014; Baker, Orr, & Young, 2007

School Process (N = 4): Ingersoll & May, 2012; Izumi et al., 2015; Shen et al., 2012; Xia et al., 2015

School Performance (N = 3): Izumi et al., 2015; Ma et al., 2014; Shen et al., 2014

School Type (N = 7): Goldhaber, Destler, & Player, 2010; Martin, 2010; Barghaus & Boe, 2011; Renzulli, Parrott, & Beattie, 2011; Baker & Dickerson, 2006; Bodine et al., 2008

School Location (N = 6): Bifulco, 2010; Eckert, 2013; Ingersoll & May, 2012; Ma et al., 2014; Renzulli et al., 2011; Urick & Bowers, 2014

Parental Involvement (N = 3): Ma et al., 2014; Shen et al., 2012; Shen et al., 2014

Student Behavior or Discipline Problems (N = 4): Ingersoll & May, 2012; Kelly, 2010; Shen et al., 2012

Disadvantaged Students (N = 3): Grissom, 2011; Kelly, 2010; Lee, 2012

District and State Topics

Policy on Professional Development (N = 3): Desimone, Thomas, & Phillips, 2007; Phillips, Desimone, & Smith, 2011; Smith, 2007

Policy on Teacher Qualification (N = 3): Angrist & Guryan, 2008; Bodine et al., 2008; Lee, 2012

Policy on Testing and Accountability (N = 5): Fitchett, Heafner, & Lambert, 2014a, 2014b; Loeb & Strunk, 2007; Bifulco, 2010; Martin, 2010

Federal Topic

NCLB’s

Appendix II: Studies That Used Single/Multiple Wave(s) of SASS and TFS

Studies that used one wave of SASS: Baker and Dickerson, 2006; Barghaus and Boe, 2011; Belfield and Heywood, 2008; Bodine et al., 2008; Cohen-Vogel and Smith, 2007; Desimone et al., 2007; Fitchett et al., 2012, 2014; Goldhaber et al., 2010; Hancock and Scherff, 2010; Hill and Dalton, 2013; Kee, 2012; Kelly, 2010; Lee, 2012; Ma et al., 2014; Marks and Nance, 2007; Martin, 2010; Phillips et al., 2011; Price, 2012; Shen et al., 2014; Shen et al., 2012; Smith, 2007; Urick and Bowers, 2014; Ware and Kitsantas, 2011; Ware and Kitsantas, 2007; Xia, Izumi, and Gao, 2015.

Studies that used multiple waves of SASS: Baker et al., 2007; Dee et al., 2013; Grissom et al., 2014; Jones, 2013; Loeb and Strunk, 2007; Angrist and Guryan, 2008.

Studies that used one wave of SASS and TFS: Eckert, 2013; Feng, 2010; Grissom, 2011; Ingersoll and May, 2012; Ingersoll et al., 2012; Renzulli et al., 2011; Smith and Rowley, 2005.

Studies that used multiple waves of SASS and TFS
