Objective: Microcounseling skills are fundamental to effective psychotherapy, yet manual coding is time- and resource-intensive. This study examines whether large language models (LLMs) can automate the identification of these skills in therapy sessions. Method: We fine-tuned GPT-4.1 on psychotherapy transcripts annotated by human coders. The model was trained to classify therapist utterances by microcounseling skill, generate explanations for its decisions, and propose alternative responses. The pipeline comprised transcript preprocessing, dialogue segmentation, and supervised fine-tuning. Results: The model achieved good overall performance (Accuracy: 0.78; Precision: 0.79; Recall: 0.78; F1: 0.78; Specificity: 0.77; Cohen's κ: 0.69). It reliably detected common and structurally distinct skills but struggled with more nuanced skills that rely on understanding implicit relational dynamics. Conclusion: Despite these limitations, fine-tuned LLMs show promise for enhancing psychotherapy research and clinical practice by providing scalable, automated coding of therapist skills.
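The abstract does not detail the fine-tuning setup. As a rough sketch of how annotated, segmented transcripts could be turned into supervised training data, the Python example below formats one coded therapist utterance as a chat-style JSONL record of the kind accepted by the OpenAI fine-tuning API. The skill labels, system prompt, and output schema are hypothetical illustrations, not taken from the study.

```python
import json

# Hypothetical microcounseling skill labels; the study's actual
# coding scheme is not specified in the abstract.
SKILLS = ["open question", "closed question", "reflection", "affirmation", "summary"]

SYSTEM_PROMPT = (
    "You are a psychotherapy process coder. Given a therapist utterance "
    "with its dialogue context, label the microcounseling skill, explain "
    "the decision, and propose an alternative response."
)

def to_training_example(context: str, utterance: str, label: str,
                        explanation: str, alternative: str) -> dict:
    """Format one annotated utterance as a chat-style fine-tuning record."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Context:\n{context}\n\nTherapist: {utterance}"},
            # Target output covers all three trained tasks: classification,
            # explanation, and an alternative response.
            {"role": "assistant", "content": json.dumps({
                "skill": label,
                "explanation": explanation,
                "alternative_response": alternative,
            })},
        ]
    }

if __name__ == "__main__":
    # Toy annotated segment standing in for the preprocessed transcripts.
    record = to_training_example(
        context="Client: I just feel stuck lately.",
        utterance="What does 'stuck' look like for you day to day?",
        label="open question",
        explanation="Invites elaboration without constraining the answer.",
        alternative="It sounds like things have felt immovable recently.",
    )
    with open("train.jsonl", "w", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

One record per annotated utterance, accumulated in train.jsonl, would then be uploaded for supervised fine-tuning.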
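The reported metrics can be reproduced from gold and predicted skill labels with standard tooling. The sketch below assumes macro averaging over skill classes and defines specificity as the mean per-class true-negative rate from one-vs-rest confusion matrices; the paper's actual averaging scheme is not stated in the abstract.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score,
                             multilabel_confusion_matrix)

def evaluate(y_true, y_pred):
    """Compute the metrics reported in the abstract (macro averaging
    and one-vs-rest specificity are assumptions)."""
    # One-vs-rest confusion matrices give per-class TN/FP for specificity.
    mcm = multilabel_confusion_matrix(y_true, y_pred)
    tn, fp = mcm[:, 0, 0], mcm[:, 0, 1]
    specificity = float(np.mean(tn / (tn + fp)))
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro",
                                     zero_division=0),
        "recall": recall_score(y_true, y_pred, average="macro",
                               zero_division=0),
        "f1": f1_score(y_true, y_pred, average="macro", zero_division=0),
        "specificity": specificity,
        "cohens_kappa": cohen_kappa_score(y_true, y_pred),
    }

if __name__ == "__main__":
    # Toy labels for illustration only; no study data is reproduced here.
    gold = ["reflection", "open question", "reflection", "summary"]
    pred = ["reflection", "closed question", "reflection", "summary"]
    print(evaluate(gold, pred))
```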