
By Brock Damjanovich, Communications Manager, Salt Lake County Office of Regional Development
Let me say something out loud that I think a lot of us in government communications have been feeling but haven’t quite given ourselves permission to say:
The pace of AI is genuinely overwhelming.
But I promise you, whatever you’re doing, you’re doing just fine.
I’ve spent the better part of this year preparing a presentation for the Government Social Media Conference called Keeping Pace with the Speed of AI, and somewhere in the process of researching it, I realized that the most useful thing I could tell an audience of public communicators isn’t a list of the hottest new tools. It’s this: you don’t need to keep up with every AI update to do your job well. You just need an ethical backbone you can apply to any AI use case.
That distinction changed how I think about everything.
The Pace Is Real (But So Is the Noise)
In the last year or so, AI has made some genuinely significant leaps. Every single day, it feels like I learn about a new AI use case or, more often, another scenario that AI shouldn’t touch.
And then Sora (OpenAI’s much-hyped video-generation platform) launched, burned through $15 million a day in operating costs, and shut down in March 2026. Even the biggest players in the space are still figuring this out.
The tools are moving fast and chaotically. Not all of them will survive. Not all of them are relevant to what you do. And you can’t keep up with it all just by subscribing to a podcast and a newsletter. I know because I tried to build that version of the presentation, and it felt exhausting just to write.
Choose Your Own Adventure: The Three Levels of AI-Awareness
Here’s what I’ve landed on after a lot of thought, and I think it holds up for most of us working in government communications with small teams and ever-growing workloads.
Tier 1: Passive Discovery
Are you already using tools like Canva, Adobe, Claude.ai, or any AI-integrated platform? Then you’re already learning.
Forward-thinking platforms are constantly trying to show off their shiniest new tool or function. I discovered Canva’s new layer-separation tool (which is so cool) not because I read about it in a newsletter, but because I was in Canva working on a graphic, and it showed up. That’s passive discovery, and it counts.
Tier 2: Active Learning
This one is less about seeking information and more about paying attention to what’s already flowing toward you. When AI makes headlines – good or bad – read the story, not just the headline.
When a colleague in a different department tells you about a tool they’re using, that conversation is free professional development. When something goes wrong publicly (and it does, often), ask yourself why it went wrong. That habit builds ethical judgment faster than any newsletter subscription.
Tier 3: Optional Deep Dives
The podcasts and newsletters exist, and they’re genuinely good. But they’re Tier 3: optional deep dives for when you want to go further. They’re not the baseline requirement for being a competent AI communicator in 2026.
For those who want the optional deep dives, here are some that my AI friend (Claude) thinks are worth your time:
Newsletters:
- The Rundown AI — Daily 5-minute briefing, 2M+ subscribers, best overall starting point
- Superhuman AI — “Get smarter about AI in 3 minutes a day,” productivity-focused
- The Neuron — Beginner-friendly, no technical background required
- AI Weekly — Covers regulation, ethics, and safety alongside the news — particularly useful for government communicators
- TLDR AI — More technical, good if you want to understand what’s under the hood
Podcasts:
- Hard Fork (New York Times) — Accessible, well-reported, great for non-technical listeners
- The AI Daily Brief — Fast daily updates, 15 minutes or less
- Eye On AI — Former NYT journalist, strong on ethics, regulation, and policy implications
- Everyday AI — Practical focus, covers social platforms and real-world use cases
The Thing That Doesn’t Change: Your Ethical Foundation
Here’s the part of my upcoming presentation I keep coming back to: tools change. Ethics don’t.
Salt Lake County has an AI Policy (Policy 1400-9, adopted in February 2025) that focuses on use cases rather than specific tools. That framing is the whole ballgame.
Instead of asking “Is this tool approved?”, you ask “Does this use case align with our values and our policy?” A framework built around use cases outlasts any individual tool. It gives you a decision-making lens that works whether the tool dropped yesterday or doesn’t exist yet.
The five ethical pillars I think about before any AI-assisted content goes public:
- Data Governance – Did I put anything in this prompt that shouldn’t become public?
- Human Oversight – Did a human read, verify, and take ownership of this output?
- Bias & Fairness – Was this AI trained on data representative of all communities?
- Transparency – Does the public know my organization uses AI in its communications?
- Mindfulness – Am I using AI intentionally, or just reaching for it out of habit?
Five questions. Two minutes. That’s the whole framework.
A Permission Slip (Seriously)
Are you trying your hardest? Are you doing your best? You know what? That’s enough. And if you let the burden of “staying up to date with AI” overwhelm you, you’ll explode. Metaphorically, of course.
So here’s your permission slip to relax and acknowledge that you are doing enough.
It officially grants you permission to not be an AI expert. To not subscribe to seven newsletters. To use AI as a tool that helps with the lift rather than a burden that requires constant attention.
The only conditions: passive discovery, active learning, and knowing your ethics.