Alone at the Wheel: AI Ethics for the One-Person PIO Team

By Brock Damjanovich, Communications Manager | Salt Lake County Office of Regional Development

Are we sick of talking about AI yet? Will the AI bubble pop soon? What does the future look like?

These are all questions I simply cannot answer, but it’s our responsibility to ask the hard ones as we work new technology into our workflows, especially if, like me, you’re a team of one! I’m not sure if you’ve read the headlines, but I don’t foresee my team growing any time soon.

So with that framing out of the way, let’s dive into what ethical integration looks like for government teams of all sizes, but especially small teams like mine.

Coming to Terms with Falling Behind the AI Curve

Three years ago, it would have made perfect sense for governments of all sizes to draft AI policies and standards before the train picked up speed. But time was tight, resources were thin, and then ChatGPT exploded at the end of 2022. Within two months, it hit 100 million users and became the fastest-growing consumer app in history.

Some agencies now have guardrails. Many don’t. If you’re a PIO or emergency communicator (especially a team of one), this matters. I’ve used AI to draft press releases, summarize public meetings, and sketch crisis messaging. It helps stretch limited capacity, and it raises real questions:

  • What happens when public messaging is shaped by new tools that the public doesn’t understand?
  • What are our obligations around disclosure, accuracy, and accountability?

These aren’t hypotheticals. They’re shaping how the public sees our work right now.

The Five Ethical Pillars for PIOs

I’ve learned that when discussing AI ethics, it’s important to address concrete use cases alongside the principles. That pairing helps people understand not only the potential of AI but also how to build good practices into their workflows from the start.

Think of these pillars as the guardrails that let you move fast without breaking trust. Each pillar includes practical examples so you can plug the ideas into your day-to-day work.

1. Human Oversight – Keep Humans on the Hook

Large language models can produce strong drafts. They cannot exercise judgment. They do not know your community, your elected officials, or your risk tolerance. Treat AI like an intern who works fast and needs editing.

Oh, and AI can lie, and does so with all the confidence in the world.

  • Consider the “two humans” rule. One person prompts and edits. A second person signs off for anything public (this should be your subject matter expert), especially during incidents.
    • If you truly are a team of one, use a written checklist as your second set of eyes.
  • Define what AI can and cannot touch. Brainstorms and first drafts are okay. Final alerts, legal or HR replies, and sensitive stakeholder messages might require human authorship.
  • Require source checks. If AI suggests statistics or quotes, verify them yourself. Paste links into a browser. Call the source if needed.

Example:

You ask AI to draft a shelter-in-place SMS. It writes a polished message that uses jargon. You replace jargon with plain language, confirm the address format, insert the correct time window, and run it through your checklist. Only then do you post to your alerting system.

2. Transparency – One Public Policy Beats 100 Tiny Footnotes

The public does not need a “written with AI” label on every email and social post. They do need a clear policy that explains how your agency uses AI and how people can raise concerns.

At Salt Lake County, I’ve treated our AI Policy like our Privacy Policy: it’s available within a couple of clicks and linked right in our social bio!

  • If you have an AI Policy or an AI Use Declaration, put it online in an easily accessible place. State the types of AI you use, the functions they support, and that all public communications are reviewed by people. Include what you will never do, such as feeding private information into public chatbots.
  • Disclose when AI involvement is material to public understanding. A general newsletter does not need a tag. A report that uses AI to analyze thousands of comments should say so and explain the human review process.
  • Be explainable. Staff who use AI to cluster comments, triage inquiries, or generate drafts should be able to describe how and why the tool influenced the result.

Example:

You publish a public comment summary on a controversial proposal. The header notes that AI assisted with clustering themes and that staff verified each theme against the full record. You link to the methodology and provide a contact for corrections.

3. Bias – Inclusion Isn’t Automatic

What is a large language model? It predicts the next likely word based on patterns in huge datasets. It does not understand meaning. If those datasets lean toward one group or one style, outputs will reflect that lean.

In addition, some companies or individuals may try to intentionally bias AI outputs. Be aware of AI chatbots and software that are intentionally biased, as well as AI that is accidentally biased.

In practice:

  • Edit for inclusion. Replace formal phrasing with plain language. Choose examples that match your local community.
  • Human review for non-English content. Do not rely on AI alone for translations or dialect nuances.
    • However, it’s still a big step up from a tool like Google Translate: if a human revises the AI translation, you can feed those corrections back into the tool, so future translations learn from your agency’s language preferences.
  • Maintain a style sheet. Include reading-level targets, preferred terms, and phrases to avoid.

Example:

You generate a first draft of a heat advisory post in English and Spanish. The Spanish draft sounds stiff and uses idioms that do not fit your audience. You ask a bilingual staffer or trusted partner to correct tone and vocabulary before posting. Then feed it back into the AI to improve future outputs!

4. Data Privacy – If You Wouldn’t Publish It, Don’t Paste It

AI is only as safe as what you feed it. Once you put information into a tool, you may not control where it goes. This is why data privacy sits at the center of ethical AI for PIOs.

What counts as PII
Personally identifiable information (PII) is any data that can identify a person. That includes names, home or email addresses, phone numbers, photos, account numbers, license plates, IP addresses, birth month plus ZIP code, and combinations of details that reveal identity. 

  • Do not paste PII into public chatbots. This includes raw emails from constituents, 311 tickets, hotline logs, screenshots of social media messages, and staff rosters.
  • Redact or synthesize. Replace real names and details with neutral placeholders before prompting.
  • Give preference to approved enterprise or government instances. Use tools with clear data retention policies and opt out of allowing models to train on your content when possible.
  • Check outputs before sharing. Make sure summaries do not pull identifiable details back into the text.

Example:

You need a quick summary of 50 emails about a road closure. You remove names, addresses, and unique references, then paste only the redacted text. The output lists top concerns without identifying anyone. If you replace names with aliases, you can swap the originals back in after you get the output (outside of the AI tool, of course).
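
If it helps to picture that redact-and-restore workflow, here’s a minimal Python sketch. Treat it as an illustration, not a finished tool: the patterns, placeholder format, and sample email are my own assumptions, and simple pattern matching will never catch every piece of PII, so a human still reviews whatever gets pasted.

    import re

    # A sketch of the "redact or synthesize" step: swap obvious PII for neutral
    # placeholders before prompting, keep the mapping on your own machine, and
    # swap the originals back into the AI output afterwards (outside the AI tool).
    # These patterns are illustrative assumptions, not a complete PII filter.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}")

    def redact(text, known_names):
        """Replace known names, emails, and phone numbers with placeholders."""
        mapping = {}

        def alias(original, label):
            placeholder = f"[{label}_{len(mapping) + 1}]"
            mapping[placeholder] = original
            return placeholder

        for name in known_names:  # names come from your own records, not the AI
            if name in text:
                text = text.replace(name, alias(name, "NAME"))
        text = EMAIL.sub(lambda m: alias(m.group(), "EMAIL"), text)
        text = PHONE.sub(lambda m: alias(m.group(), "PHONE"), text)
        return text, mapping

    def restore(text, mapping):
        """Swap the original details back into the AI output, locally."""
        for placeholder, original in mapping.items():
            text = text.replace(placeholder, original)
        return text

    # Usage: redact before prompting, restore after the summary comes back.
    email_text = "Jane Roe (jane.roe@example.com, 801-555-0100) asked about the 900 South closure."
    safe_text, pii_map = redact(email_text, known_names=["Jane Roe"])
    print(safe_text)  # [NAME_1] ([EMAIL_2], [PHONE_3]) asked about the 900 South closure.

The point is the shape of the workflow: the mapping between placeholders and real details never leaves your machine, and the AI only ever sees the redacted text.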

5. Environmental Sustainability – Using AI Thoughtfully

AI lives in data centers that use electricity and water for cooling. As usage grows, so do emissions and resource use. Credibility matters when you ask the public to conserve or to adopt resilience measures.

  • Choose the lightest tool that does the job. Use short prompts instead of many regenerations. Use batch tasks instead of constant retries.
  • Include sustainability in procurement. Ask vendors about their energy and water practices, and ask for public reporting.
  • Avoid vanity use. If a human can write a two-sentence update faster than you can prompt, skip AI.

Example:

For a simple calendar post, you write it yourself. For a major after-action report with hundreds of inputs, you use AI to cluster themes in one batch run, then conduct a human review.

Or as I always say – if you happen to be dating an AI chatbot (I certainly am not judging), then maybe give them the weekend off. 

A Simple AI Usage Checklist

  • Is a person in charge of the final words?
  • Did someone verify facts and links?
  • Did we avoid PII or redact it first?
  • Does the message meet plain-language and inclusion guidelines?
  • If AI materially shaped the outcome, is that explained somewhere that the public can find?
  • Do we have a record of prompt, draft, edits, and approval?

Tools Do Not Build Trust. People Do.

AI can help a one-person PIO move faster, but trust still moves at the speed of people. 

If we keep humans in the loop, publish clear rules for how we use AI, check our drafts for bias, protect privacy as if it were evidence, and mind the footprint of the tools we choose, we’ll earn the right to use them when it counts. 

The public won’t judge our prompts; they’ll judge our judgment. Let’s show them that speed and integrity can live in the same message.

Last but not least, if you’re just getting started with AI, learn from the people who’ve already done the hard work. The GovAI Coalition, run by the City of San Jose, curates practical playbooks, model policies, procurement checklists, and much more for public agencies. 
