UAV accident investigation is essential for safeguarding the fast-growing low-altitude airspace. Although incidents are reported almost daily, they are rarely analyzed in depth because current inquiries remain expert-dependent and time-consuming. Moreover, because most jurisdictions mandate formal reporting only for serious injury or substantial property damage, a large proportion of minor occurrences receive no systematic investigation, leaving persistent data gaps and hindering proactive risk management. This study explores the potential of large language models (LLMs) to expedite UAV accident investigations by extracting human-factor insights from unstructured narrative incident reports. Despite their promise, off-the-shelf LLMs still struggle with domain-specific reasoning in the UAV context. To address this, we developed a human factors analysis and classification system (HFACS)-guided analytical framework that blends structured prompting with lightweight post-processing. The framework guides the model through a two-stage procedure to infer operators' unsafe acts, their latent preconditions, and the associated organizational influences and regulatory risk factors. An HFACS-labelled UAV accident corpus comprising 200 abnormal-event reports with 3600 coded instances was compiled to support evaluation. Across seven LLMs and 18 HFACS categories, macro-F1 ranged from 0.58 to 0.76; our best configuration achieved a macro-F1 of 0.76 (precision 0.71, recall 0.82), with accuracies above 93% in representative categories. Comparative assessments indicate that the prompted LLM can match, and on certain tasks surpass, human experts. These findings highlight the promise of automated human-factors analysis for rapid and systematic UAV accident investigation.
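For reference, the macro-averaged scores reported above can be read against the standard per-category definitions; the following is a minimal sketch assuming that convention (the exact evaluation protocol is a matter for the paper body):

\[
P_k = \frac{TP_k}{TP_k + FP_k}, \qquad
R_k = \frac{TP_k}{TP_k + FN_k}, \qquad
F1_k = \frac{2\,P_k R_k}{P_k + R_k},
\]
\[
\text{macro-}F1 = \frac{1}{K} \sum_{k=1}^{K} F1_k, \qquad K = 18 \text{ HFACS categories}.
\]

Under this convention, macro-F1 weights each of the 18 HFACS categories equally regardless of how often it occurs in the 3600 coded instances.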