Emotion-aware chatbots that can sense human emotions are becoming increasingly prevalent. However, the disclosure of sensed emotions by emotion-aware chatbots can undermine human autonomy and users’ trust. One way to safeguard autonomy is to provide users with control. Offering too much control, in turn, may increase users’ cognitive effort. To investigate the impact of control over emotion-aware chatbots on autonomy, trust, and cognitive effort, as well as on user behavior, we conducted an experimental study with 176 participants. The participants interacted with a chatbot that provided emotional feedback and could additionally control different chatbot dimensions (e.g., timing, appearance, and behavior). Our findings show, first, that higher levels of control increase autonomy and trust in emotion-aware chatbots. Second, higher levels of control do not significantly increase cognitive effort. Third, in a post hoc behavioral analysis, we identify four behavioral control strategies based on the timing and quantity of control feature usage and on cognitive effort. These findings shed light on individual user preferences for control over emotion-aware chatbots. Overall, our study contributes to the literature by demonstrating the positive effect of control over emotion-aware chatbots and by identifying four behavioral control strategies. Our findings also yield practical implications for the future design of emotion-aware chatbots.