Mode Effects on Response Behaviors in a Mixed-mode Survey
[Translated from the Chinese abstract] To improve population coverage and response rates and to reduce the cost of survey administration, mixed-mode designs have become common in large-scale panel surveys. When the sample is not randomly assigned across survey modes, respondents' answers are subject not only to measurement effects arising from the modes themselves (i.e., mode effects) but also to selection effects reflecting which mode each respondent completes the interview in. Disentangling these two coexisting effects is the primary challenge in examining how different modes influence response behaviors. This study applies propensity score matching, which accommodates non-random sample assignment, to examine mode effects on response behaviors in a large-scale panel survey, comparing overall item nonresponse, "refusal" and "don't know" answers, and the acquiescence and extreme response styles in attitude scales between the face-to-face and self-administered online modes. The data come from the Panel Study of Family Dynamics, which collected questionnaires concurrently in face-to-face and self-administered online modes in 2018. Because the two modes differ greatly in sample size, this study adopts two matching methods based on the principles of oversampling and matching with replacement, namely radius matching and kernel matching, to increase the number of valid matches, extend the coverage of the estimates, and reduce estimation error. The estimated mode effects show that item nonresponse and "don't know" answers were significantly more likely in the self-administered online mode than in the face-to-face mode, in line with previous findings. In the balanced attitude scales, acquiescence was significantly more pronounced in the face-to-face mode, whereas the extreme response style was significantly more pronounced in the self-administered online mode. The acquiescence finding matches expectations, but the finding on extreme responses departs from existing research. Although the response behaviors examined here are limited to item nonresponse and response styles in attitude scales, and the application of propensity score matching leaves room for improvement, this study offers useful reference points for survey methodology research, analytical applications, and survey practice.
To improve coverage and response rates and to reduce survey costs, mixed-mode designs have been widely used in large-scale panel surveys. When the assignment of survey modes is not random, respondents' answers in a mixed-mode survey may be subject to two kinds of bias. One is measurement bias (i.e., mode effects) evoked by the modes themselves; the other is sample selection bias, which results from the respondents' non-random assignment to different modes. Disentangling these two biases is a crucial challenge in estimating mode effects. This study adopts propensity score matching, an analytical method that can deal with non-random sample assignment, to examine mode effects on response behaviors in a panel survey with a mixed-mode design of face-to-face and self-administered online modes. The outcome variables analyzed in this study include overall item nonresponse, "refusal" and "don't know" answers, and two response styles in balanced attitude scales, namely the acquiescence and extreme response styles.
Data analyzed in this study come from the Panel Study of Family Dynamics survey conducted in 2018, in which sample members were pre-assigned to the face-to-face or self-administered online mode according to whether they had provided an email address and completed an online questionnaire previously sent with a festival greeting card. Because the numbers of completed questionnaires differ greatly between the two modes, this study uses two matching methods, radius matching and kernel matching, both based on an oversampling strategy and matching with replacement, to increase the number of valid matches, extend the coverage of the estimates, and reduce estimation error.
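The matching strategy described above can be illustrated with a minimal sketch. This is not the authors' actual estimation code; it uses simulated data, a hand-rolled logistic model for the propensity scores, and a simple radius (caliper) match with replacement, where each treated (online-mode) case is matched to all control (face-to-face) cases whose propensity score falls within the caliper. All variable names and the caliper value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: mode assignment depends on covariates (non-random),
# mimicking selection into the online (treated) vs. face-to-face mode.
n = 500
X = rng.normal(size=(n, 2))                      # covariates (e.g., age, education)
p_online = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
mode = (rng.random(n) < p_online).astype(int)    # 1 = online, 0 = face-to-face
y = 0.3 * mode + X[:, 0] + rng.normal(size=n)    # outcome; true mode effect = 0.3

# Step 1: estimate propensity scores with a logistic model (gradient ascent
# on the log-likelihood; a stand-in for any standard logistic regression).
Z = np.column_stack([np.ones(n), X])
w = np.zeros(Z.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-Z @ w))
    w += 0.1 * Z.T @ (mode - p) / n
ps = 1 / (1 + np.exp(-Z @ w))

# Step 2: radius matching with replacement. Each online case is matched to
# ALL face-to-face cases within the caliper, so controls can be reused
# (matching with replacement) and each treated case draws on many controls
# (the oversampling principle).
radius = 0.05                                    # illustrative caliper
treated = np.where(mode == 1)[0]
control = np.where(mode == 0)[0]

effects = []
for i in treated:
    within = control[np.abs(ps[control] - ps[i]) <= radius]
    if within.size:                              # skip unmatched treated cases
        effects.append(y[i] - y[within].mean())

att = float(np.mean(effects))                    # avg. effect on the treated
print(f"matched treated cases: {len(effects)} / {treated.size}")
print(f"estimated mode effect (ATT): {att:.3f}")
```

Kernel matching differs only in step 2: instead of a hard caliper, every control contributes to each treated case's counterfactual with a weight that decays smoothly with the propensity score distance.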
The results of the two matching methods indicate that item nonresponse and "don't know" answers were significantly more likely to occur in the self-administered online mode than in the face-to-face mode, consistent with previous studies. Also in line with previous studies, respondents interviewed face-to-face were significantly more likely to give acquiescent responses to the balanced attitude scales than those who filled out the online questionnaires themselves. In contrast to previous studies, however, respondents who completed the self-administered online questionnaires were more likely than face-to-face interviewees to give extreme responses to items on the balanced attitude scales. One further finding worth noting is that no significant mode effects were found for "refusal" answers.
This study contributes to research on mode effects, applications of propensity score matching, and survey practice. Our findings suggest that, to mitigate mode effects, a mixed-mode survey combining face-to-face and self-administered modes should use the same design for "don't know" and "refusal" options in both modes, and that respondents in the face-to-face mode should be allowed to enter answers to questions with social desirability concerns by themselves. Despite these academic and practical contributions, the study has its limitations. One is that the response behaviors explored here are confined to overall item nonresponse, "don't know" and "refusal" answers, and response styles in attitude scales. In addition to extending the investigation of mode effects to a broader range of survey questions, future research should refine the application of propensity score matching methods for disentangling mode effects and selection effects in mixed-mode surveys. Possible directions include, but are not confined to, the selection of covariates for the logistic model used to predict propensity scores, the methods for imputing missing values, and other matching strategies.