
Monika Ginting

Abstract

This study examines how strategic politeness is realized in citizen–chatbot interactions in online public services, and how those politeness patterns affect message acceptance and conversation continuity. Grounded in Leech's Politeness Principle (the tact, generosity, approbation, modesty, agreement, and sympathy maxims) and in the concept of Face-Threatening Acts (Brown & Levinson), the study designs a pragmatic annotation framework to identify maxim compliance and violation, mitigation devices (hedges, empathy markers, apologies), and speech-act strategies (informing, requesting, refusing, directing). The corpus consists of selected conversation logs from several government chatbots that provide information on administrative procedures, service complaints, and document verification. The analysis is mixed-methods: qualitative, conversation-analysis-based study combined with the mining of textual features (direct imperatives, obligation modality, politeness formulas) and readability evaluation. Results show that compliance is highest for the tact and agreement maxims in informative scenarios, while violations occur most frequently in refusal/redirect scenarios (e.g., rejection of documents, referral to another channel) because imperative forms dominate and empathy markers are absent. Inconsistent politeness also appears in multilingual contexts and during handover to a human agent. The study offers design guidelines for face-friendly response templates, combining FTA mitigation, empathy markers, brief procedural justification, and follow-up options, which can improve clarity while preserving citizens' positive face in digital public service interactions.
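The feature-mining step described above (detecting direct imperatives, hedges, and empathy markers in chatbot turns) can be sketched as a simple rule-based annotator. This is a minimal illustration, not the study's actual annotation scheme: the marker lexicons (`HEDGES`, `EMPATHY`, `IMPERATIVE_PREFIXES`) are hypothetical Indonesian examples chosen for the sketch, and a real pipeline would use a richer, corpus-derived inventory and human validation.

```python
import re
from dataclasses import dataclass, field

# Hypothetical marker lexicons for illustration only; the paper's
# annotation framework is richer and derived from the actual corpus.
HEDGES = {"mungkin", "sepertinya", "kiranya", "barangkali"}
EMPATHY = ("mohon maaf", "maaf", "kami memahami", "terima kasih")
IMPERATIVE_PREFIXES = ("silakan", "harap", "segera", "kirim", "lengkapi")

@dataclass
class Annotation:
    hedges: list = field(default_factory=list)       # hedging tokens found
    empathy: list = field(default_factory=list)      # empathy formulas found
    direct_imperative: bool = False                  # turn opens with a bare directive

def annotate(utterance: str) -> Annotation:
    """Tag one chatbot turn with simple surface-level politeness features."""
    lowered = utterance.lower()
    tokens = re.findall(r"[a-z']+", lowered)
    ann = Annotation()
    ann.hedges = [t for t in tokens if t in HEDGES]
    ann.empathy = [m for m in EMPATHY if m in lowered]
    # A turn whose first word is a directive verb/particle counts as a
    # direct imperative (a frequent pattern in refusal/redirect scenarios).
    ann.direct_imperative = bool(tokens) and lowered.split()[0] in IMPERATIVE_PREFIXES
    return ann
```

For example, a refusal turn such as "Kirim ulang dokumen Anda" would be flagged as a direct imperative with no empathy marker, while "Mohon maaf, berkas Anda mungkin belum lengkap" would register both a hedge and an empathy formula, matching the mitigated pattern the guidelines recommend.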


References
[1] Androutsopoulou, A., Karacapilidis, N., Loukis, E., & Charalabidis, Y. (2019). Transforming the communication between citizens and government through AI-guided chatbots. Government Information Quarterly, 36(2), 358–367. https://doi.org/10.1016/j.giq.2018.10.001
[2] Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage. Cambridge University Press.
[3] Daft, R. L., & Lengel, R. H. (1986). Organizational information requirements, media richness and structural design. Management Science, 32(5), 554–571. https://doi.org/10.1287/mnsc.32.5.554
[4] Haugeland, I. K. F., Nordbø, R., Søreide, G., & Dahl, Y. (2022). An experimental study of chatbot interaction design. International Journal of Human-Computer Studies, 168, 102904. https://doi.org/10.1016/j.ijhcs.2022.102904
[5] Klopfenstein, L. C., Delpriori, S., Malatini, S., & Bogliolo, A. (2017). The rise of bots: A survey of conversational interfaces, patterns, and paradigms. In Proceedings of the 2017 Conference on Designing Interactive Systems (DIS '17) (pp. 555–565). ACM. https://doi.org/10.1145/3064663.3064672
[6] Krippendorff, K. (2019). Content analysis: An introduction to its methodology (4th ed.). SAGE. https://doi.org/10.4135/9781071878781
[7] Larsen, A. G., & Kalgin, A. (2024). The impact of chatbots on public service provision. Government Information Quarterly, 41(x), Article 102xxx. (Advance online publication).
[8] Leech, G. (2014). The pragmatics of politeness. Oxford University Press. (Online ed.). https://academic.oup.com/book/35384
[9] Miller, C. A. (2010). Politeness effects in directive compliance. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 54(4), 322–326. https://doi.org/10.1177/154193121005400445
[10] Poivet, R., Preux, P., & de Loor, P. (2023). The influence of conversational agents’ role and communication style on user experience. PLOS ONE, 18(12), e0295358. https://doi.org/10.1371/journal.pone.0295358
[11] Ribino, P., Augello, A., Pilato, G., & Dignum, F. (2023). The role of politeness in human–machine interactions: A review. Artificial Intelligence Review, 56, 14021–14070. https://doi.org/10.1007/s10462-023-10540-1
[12] Sacks, H., Schegloff, E. A., & Jefferson, G. (1974). A simplest systematics for the organization of turn-taking in conversation. Language, 50(4), 696–735. https://doi.org/10.2307/412243
[13] Shan, Y., Ding, Y., Chen, X., & Lee, M. K. O. (2022). Language use in conversational agent–based health communication: A review. International Journal of Human–Computer Interaction, 38(14), 1361–1381. https://doi.org/10.1080/10447318.2021.2001891
[14] de Souza Monteiro, M., & Pereira, V. C. (2023). Investigating politeness strategies in chatbots through the lens of Conversation Analysis. In Proceedings of the XXII Brazilian Symposium on Human Factors in Computing Systems (IHC '23). ACM. https://doi.org/10.1145/3638067.3638068
[15] Wirtz, B. W., Weyerer, J. C., & Geyer, C. (2019). Artificial intelligence and the public sector—Applications and challenges. International Journal of Public Administration, 42(7), 596–615. https://doi.org/10.1080/01900692.2018.1498103
[16] Li, X., Wang, J., & Chen, L. (2024). Should government chatbots behave like civil servants? The effect of perceived public service traits on citizen experience. Government Information Quarterly, 41(x), Article 102xxx. (Advance online publication).
[17] Metzger, L., Stier, S., & Mothes, C. (2024). Empowering calibrated (dis-)trust in conversational agents. In CHI '24 Extended Abstracts. ACM. https://doi.org/10.1145/3613904.3642122
[18] Zhang, R., Zhou, Z., & Li, H. (2025). Enhancing citizen–government communication with AI: A field evaluation of chatbot-assisted replies. arXiv preprint. https://arxiv.org/abs/2501.10715