Why pulse surveys stall
Organizations are listening more than ever. Pulse surveys have become standard practice. Platforms are sophisticated. Response rates are tracked. Dashboards are built. And yet, in organization after organization, the same thing happens: the data arrives, leaders review the scores, and then... not much changes.
The survey runs again six months later. The scores look similar. People notice.
This is what I call the pulse paradox. The more we measure, the less people believe anything will change. And the less they believe, the less honestly they respond. The signal degrades, the data becomes less useful, and the cycle continues.
So why does this keep happening?
The problem isn't the data. It's what happens next.
Most pulse programs are well-designed at the front end — thoughtful questions, clean benchmarks, solid statistical rigor. Where they break down is at the point of action. The data gets aggregated, rolled up, filtered through layers of management, and eventually lands as a PowerPoint slide in a leadership meeting. By the time a frontline manager sees their team's results, the connection between the data and the lived experience of their people is already thin.
Leaders are then expected to "act on" their scores — often without enough context to understand what's really driving them, without support to have the conversations that would surface the real story, and without a clear enough mandate to make changes that actually matter to people.
We've built sophisticated listening infrastructure on top of a very unsophisticated action infrastructure.
Scores aren't stories
This is the deeper issue. A score of 68 on "I feel like my voice matters" tells you something is off. It doesn't tell you what happened in the last all-hands that made three people decide to stop speaking up. It doesn't tell you that the team lead dismisses ideas publicly, or that the last suggestion box response took four months. Numbers compress the humanity out of experience.
Real listening — the kind that actually builds trust — requires getting underneath the number to the narrative. What are people actually experiencing? What would need to change for that score to move? What do they need their leader to understand?
Most pulse programs never get there. They stop at the score.
The manager in the middle carries too much
Even when organizations do commit to action planning, the weight falls disproportionately on managers — often mid-level leaders who are already stretched thin, who received the same two-hour training on "how to share your results," and who are now expected to translate aggregate data into meaningful team conversations while also hitting their quarterly numbers.
This is where pulse programs quietly die. Not in the C-suite, where there's at least some accountability for the scores. In the middle, where the gap between "we heard you" and "here's what changed" becomes a credibility problem that takes years to repair.
What actually works
The organizations that break the cycle share a few things in common. They treat the survey as the beginning of a conversation, not the conclusion of one. They invest as much in follow-through infrastructure as in listening infrastructure — coaching managers to lead team dialogues, building in specific commitments and timelines, and closing the loop visibly with employees.
More importantly, they get specific. Broad action plans ("we will improve communication") don't move people. Specific, team-level commitments do: "In our team, here's one thing we're changing based on what you told us, and here's how we'll know it worked."
The goal isn't a better score next quarter. The goal is that people feel heard — and that feeling is built through evidence, not assurance.
Listening is a capability and a mindset. The survey is just a tool. Organizations that confuse the tool for the capability will keep running surveys, keep getting data, and keep wondering why nothing changes.