What Is The Political Content in LLMs’ Pre- and Post-Training Data?

arXiv:2509.22367v2 Announce Type: replace-cross

Abstract: Large language models (LLMs) are known to generate politically biased text. Yet it remains unclear how such biases arise, making it difficult to design effective mitigation strategies. We hypothesize that these biases are rooted in the composition of training data. Taking a data-centric perspective, we formulate research questions on (1) political leaning present in data, (2) data imbalance, (3) cross-dataset similarity, and (4) data-model alignment. We then examine how exposure to political content relates to models' stances on policy issues. We analyze the political content of pre- and post-training datasets of open-source LLMs, combining large-scale sampling, political-leaning classification, and stance detection. We find that training data is systematically skewed toward left-leaning content, with pre-training corpora containing substantially more politically engaged material than post-training data. We further observe a strong correlation between political stances in training data and model behavior, and show that pre-training datasets exhibit similar political distributions despite different curation strategies. In addition, we find that political biases are already present in base models and persist across post-training stages. These findings highlight the central role of data composition in shaping model behavior and motivate the need for greater data transparency.
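The analysis combines large-scale sampling of training corpora with per-document political-leaning classification. A minimal sketch of that pipeline shape is shown below; note that `classify_leaning` here is a toy keyword-based stand-in (the paper's actual classifiers are not described in the abstract, and any real study would use trained models rather than cue words), and `LEFT_CUES` / `RIGHT_CUES` are illustrative inventions.

```python
import random
from collections import Counter

# Hypothetical cue-word lists -- a toy stand-in for a trained
# political-leaning classifier, used only to illustrate the pipeline.
LEFT_CUES = {"welfare", "climate", "equality"}
RIGHT_CUES = {"tariffs", "deregulation", "tradition"}

def classify_leaning(doc: str) -> str:
    """Label a document left/right/neutral by counting cue words."""
    words = set(doc.lower().split())
    left = len(words & LEFT_CUES)
    right = len(words & RIGHT_CUES)
    if left > right:
        return "left"
    if right > left:
        return "right"
    return "neutral"

def leaning_distribution(corpus, sample_size, seed=0):
    """Sample documents from a corpus and tally leaning labels,
    mirroring the sampling-then-classification step of the analysis."""
    rng = random.Random(seed)
    sample = rng.sample(corpus, min(sample_size, len(corpus)))
    return Counter(classify_leaning(d) for d in sample)

corpus = [
    "expand welfare and climate programs",
    "cut tariffs and support deregulation",
    "local sports results this weekend",
]
print(leaning_distribution(corpus, sample_size=3))
```

Comparing such distributions across pre- and post-training datasets, and against model stances elicited on the same policy issues, is the kind of data-model alignment check the abstract describes.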
