This study investigates the performance of search engine chatbots powered by large language models in generative political information retrieval. Applying algorithmic accountability as a central theme, this research (a) assesses the alignment of artificial intelligence (AI) chatbot responses with timely political information, (b) investigates the factual correctness and transparency of chatbot-generated synopses, (c) examines the adherence of chatbots to democratic norms and impartiality ideals, (d) analyzes the sourcing and attribution behaviors of the chatbots, and (e) explores the universality of chatbot gatekeeping across different languages. Using the 2024 Taiwan presidential election as a case study and prompting as a method, the study audits responses from Microsoft Copilot in five languages. The findings reveal significant discrepancies in information readiness, content accuracy, norm adherence, source usage, and attribution behavior across languages. These results underscore the need for contextual awareness when applying accountability assessments that look beyond transparency in AI-mediated communication, especially during politically sensitive events.