April 11, 2025
THIMPHU – To promote ethical and effective use of generative Artificial Intelligence (AI) among civil servants, the Royal Civil Service Commission, in collaboration with the GovTech Agency, recently released a guideline on AI use for civil servants.
“The integration of Generative AI in public administration can significantly enhance efficiency, enabling civil servants to automate routine tasks, generate data-driven insights, and engage creatively with citizens,” the guideline states.
However, the guideline also highlights the importance of addressing potential risks, including misinformation, bias, and privacy concerns, which could undermine public trust and the integrity of government processes.
The guideline aims to equip civil servants with the necessary tools and knowledge to utilise the benefits of generative AI effectively, while safeguarding national interest, ethical standards, personal information, and data privacy.
According to the guideline, as Bhutan currently does not have a specific policy or regulations on AI, it was prepared with reference to the AI guidelines of Canada, the United Kingdom, Switzerland, the USA, and the United Arab Emirates, as well as the European Union's AI Act. “It will serve as an interim guideline for Generative AI usage within the government,” the guideline states.
Generative AI refers to technologies capable of producing new content—such as text, images, code, and video—based on user inputs or “prompts”. Popular platforms such as ChatGPT and Google Gemini use this model to respond to instructions in real time. However, these tools also collect various forms of user data, including input history, device information, and AI-generated responses.
The guideline cautions civil servants that such data may be stored or used to train AI models unless users explicitly disable history tracking or data sharing features. For instance, when chat history is turned off, user inputs are stored for 30 days before deletion and are not used to further train the model.
The guideline encourages civil servants to clearly indicate when content is generated by AI, thoroughly review AI outputs for factual and contextual accuracy, and learn about bias, diversity, inclusion, anti-racism, values, and ethics to identify potentially biased or discriminatory content.
The guideline also instructs officials not to use AI tools as search engines unless the tool provides verifiable sources. It further cautions civil servants to be mindful of the information they input into AI tools, noting that doing so is similar to uploading information to the public domain, where it can be accessed by anyone.
When handling personal or sensitive information, the guideline outlines three key methods for de-identification: replacing or modifying identifiable details with information that cannot be traced back to an individual; substituting data such as email addresses or phone numbers with fictional but format-consistent placeholders; and disabling the “Improve the model for everyone” option under data control settings to prevent AI platforms from using the content for training purposes.
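The guideline itself does not prescribe any particular tool for de-identification. As a minimal illustration of the second method only, the Python sketch below replaces email addresses and phone numbers with fictional but format-consistent placeholders before text is shared with an AI tool; the patterns, placeholder values, and sample text are assumptions for illustration, not part of the guideline.

import re

# Minimal, illustrative de-identification before text is pasted into an AI tool.
# The patterns and placeholder values below are assumptions, not part of the guideline.
EMAIL_PATTERN = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_PATTERN = re.compile(r"\+?\d[\d\s-]{6,}\d")  # loose match for phone-like numbers

def deidentify(text: str) -> str:
    """Replace email addresses and phone numbers with fictional, format-consistent placeholders."""
    text = EMAIL_PATTERN.sub("user@example.com", text)
    text = PHONE_PATTERN.sub("+975 00 000 000", text)
    return text

# Example with hypothetical contact details; names and other identifiers would
# still need to be replaced manually, as described in the first method above.
print(deidentify("Reach the officer at pema.wangmo@example.gov.bt or +975 17 123 456."))
# -> Reach the officer at user@example.com or +975 00 000 000.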
The guideline also cautions against entering government data, personal identifiers such as names, employee ID numbers, and citizenship ID numbers, unpublished materials, or any proprietary or classified information—particularly data categorised as Level Two, Level Three, or Level Four. Inputting such data into AI platforms risks compromising sensitive information, breaching intellectual property rights, and potentially causing harm to both individuals and institutions.
The guideline also advises civil servants to exercise human oversight when using generative AI in critical decision-making processes, particularly in areas such as human resource recruitment, promotion planning, financial management, and the evaluation of student performance in schools and colleges.