Public and academic discourse on the safety of conversational agents using generative AI, particularly chatbots, often centers on fairness, trust, and risk. However, there is limited insight into how users differentiate these perceptions and what factors shape them. To address this gap, we developed a survey instrument based on previous work. We conducted an exploratory study using factor analysis and latent class analysis on survey responses from $n$=123 participants in the U.S. as an initial attempt to measure and delineate the dimensionality of these safety perceptions. Latent class analysis revealed three distinct user groups with sometimes counterintuitive perception patterns: The Hesitant Skeptics, The Cautious Trusters, and The Confident Adopters. We find that more frequent use of AI chatbots is associated with higher perceptions of trust and fairness but lower perceived risk. Demographic characteristics such as sexual orientation, income, and ethnicity also had strong, significant effects on group membership. Our findings highlight the need for more refined measurement approaches and a more nuanced perspective on users’ AI safety perceptions regarding trust, fairness, and risk, particularly in capturing the kinds of experiences and interactions through which users develop these perceptions.