Creating Pulse Programs: Best Practices and FAQs

5 March 2018

Use this page to find best practices and FAQs for the following aspects of a pulse program:

  • Distribution Best Practices
  • Scheduling FAQs
  • Questions: Best Practices
    • Number of Questions
    • Number of Items per Domain/Theme/Dimension
    • Consistent Rating Scales
    • Include Open-ended Questions
    • Targeting Questions
    • Ordering Pulse Questions
    • Mandatory Responses
    • Rating/Agreement Scales
    • Overall Pulse Framework
    • Customizing Glint Questions

Distribution Best Practices

When deciding who should receive pulses, a common FAQ is around sampling. Sampling is useful if the goal is to get a quick pulse at the organizational level while reducing the number of pulses each individual takes. For example, to get feedback on a new program rollout, you might send a quick pulse to a subset of the population to gather immediate feedback for the program manager. On the other hand, sampling comes with serious drawbacks that need to be weighed carefully:

  • First, with most sampling, the majority of managers can't get reports because they won't have enough respondents to pass the confidentiality threshold (a rough simulation of this effect follows this list).
  • Second, for organizations with diverse opinions across multiple sub-populations, a uniform sample is often unrepresentative simply due to random variation.
  • Third, with smaller samples, many people may feel left out, as though their voice isn't important, so clear communication is critical.
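To make the first drawback concrete, here is a minimal sketch of the arithmetic. The numbers are hypothetical (organization size, team size, sample rate, and a five-response confidentiality threshold are all assumptions for illustration; this is not Glint code):

    import random

    random.seed(42)  # reproducible illustration

    TEAMS = 200         # hypothetical number of teams
    TEAM_SIZE = 8       # hypothetical team size
    SAMPLE_RATE = 0.25  # uniform 25% sample
    THRESHOLD = 5       # assumed confidentiality threshold (responses per manager)

    employees = [(team, person) for team in range(TEAMS) for person in range(TEAM_SIZE)]
    sampled = random.sample(employees, int(len(employees) * SAMPLE_RATE))

    # Count sampled people per team, assuming everyone sampled responds.
    per_team = {}
    for team, _ in sampled:
        per_team[team] = per_team.get(team, 0) + 1

    reportable = sum(1 for t in range(TEAMS) if per_team.get(t, 0) >= THRESHOLD)
    print(f"Managers who clear the threshold: {reportable} of {TEAMS}")

Even with a 100% response rate among those sampled, a 25% sample of an 8-person team yields an expected two responses per team, so almost no manager clears a threshold of five.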

Scheduling FAQs

Q: What’s the right cadence for our pulse program?

A: Your needs will determine the cadence of your program. Consider other survey programs already in place, the business processes these results might inform, and your capacity to act on the outcomes of the pulse in a timely, meaningful manner.

If you currently have an annual engagement program in place, it may be prudent to shift the pulse frequency from annual to semi-annual to quarterly over time, allowing you to make the process, organizational, and leadership mindset changes needed to maximize the benefits of a more frequent pulsing cycle.

Q: What should we consider when creating programs that pulse frequently?

A:

Survey fatigue: Most people want to give constructive feedback and want it to be easy. Keep the pulses short, highly relevant, and easy to use. Take quick and visible action on the feedback.

Leader fatigue: People already have enough on their plates. Set the expectation that leaders and teams will use the data to help them focus their existing efforts (Can they support each other in better ways to help them achieve their goals?). The regular updates will often be simple process checks (Are we moving in the right direction? Are we still working on the right thing?).

HR fatigue: HR can be overwhelmed with requests for help understanding and using the pulse results. However, Glint's intuitive system and online help center enable our clients to regularly deploy to thousands of teams with minimal active HR partner interaction.

Questions: Best Practices

Number of Questions

To support sustained high response rates, reduce dropout, and prevent survey fatigue over time in frequent pulse programs, we recommend the following pulse lengths based on your program's cadence:

  • Weekly Pulses: 1 or 2 questions

  • Monthly Pulses: 8 or fewer questions

  • Quarterly Pulses: 22 or fewer questions

  • Annual Pulses: 30 or fewer questions

Remember, you’re able to rotate and update questions in subsequent pulses within the program, which allows you to ask timely questions given what’s going on in your organization.
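If you manage pulse content programmatically, the guidance above reduces to a simple lookup. The sketch below is purely illustrative (the function and the limits are not part of Glint's platform; the numbers are just the recommendations in this article):

    # Recommended maximum question counts per cadence, from the list above.
    MAX_QUESTIONS = {"weekly": 2, "monthly": 8, "quarterly": 22, "annual": 30}

    def check_pulse_length(cadence: str, num_questions: int) -> bool:
        """Return True if the pulse length fits the recommended maximum."""
        limit = MAX_QUESTIONS[cadence]
        if num_questions > limit:
            print(f"A {cadence} pulse should have {limit} or fewer questions "
                  f"(got {num_questions}).")
            return False
        return True

    check_pulse_length("monthly", 12)  # flags: monthly pulses should be 8 or fewer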

Number of Items per Domain/Theme/Dimension

For most practical purposes, well-constructed single-item measures are as good as multiple-item measures in terms of utility and reliability (i.e., they predict future behavioral outcomes, and they do so consistently over time).

To supplement the domain-level rating-style questions with granular and specific feedback, Glint enables optional comments along with every rating-style question.

One of Glint's pulse design goals is to keep measurement short and easy so that teams can be pulsed more frequently with sustained high participation, lower dropout, and less pulse fatigue. In many cases, this means asking broader rather than more specific questions, and rotating questions across pulses. This also keeps pulses short, significantly improving the employee experience.

Consistent Rating Scales

Keep the same rating scale across all questions in your survey. This streamlines the participant experience and ensures results are displayed and interpreted on that same scale. The Survey Coach will also flag any rating-scale inconsistencies so they can be reviewed before the pulse is enabled.
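As an illustration only (this is not the Survey Coach's actual implementation, and the question format is assumed), a consistency check of this kind can be as simple as comparing each rating question's scale against the first one:

    # Hypothetical question format; not Glint's API.
    AGREEMENT_5 = ["Strongly Agree", "Agree", "Neutral", "Disagree", "Strongly Disagree"]
    SATISFACTION_5 = ["Very Satisfied", "Satisfied", "Neutral",
                      "Dissatisfied", "Very Dissatisfied"]

    questions = [
        {"text": "I feel valued at work.", "scale": AGREEMENT_5},
        {"text": "I have the resources I need to do my job well.", "scale": AGREEMENT_5},
        {"text": "Overall, how satisfied are you?", "scale": SATISFACTION_5},
    ]

    def inconsistent_scales(questions):
        """Return texts of questions whose scale differs from the first question's."""
        baseline = questions[0]["scale"]
        return [q["text"] for q in questions[1:] if q["scale"] != baseline]

    for text in inconsistent_scales(questions):
        print(f"Scale mismatch: {text!r} uses a different rating scale.")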

Include Open-ended Questions

To encourage employees to provide more specific, actionable, and useful comments, we recommend asking open-ended questions that are themselves specific (e.g., asking what's working well and/or what can be improved). These can focus on the employee's immediate experience (their job, team, manager), on the company as a whole, or on any point in between (leadership, management, business unit, and so on).

Broader open-ended questions (e.g., "What else is on your mind?") can be used to capture anything the pulse may not have covered.

Remember! With Glint’s Narrative Intelligence capabilities, you will be able to quickly assess what themes or topics are most frequently cited within the comments, and how they connect to other topics across the pulse (English only).

Targeting Questions

You can restrict the visibility of certain questions in your pulse so that only a subset of your population sees and answers them.

Wondering if you should use targeting?

During design, start with the questions that are appropriate for everyone to answer. Targeted questions can be used for sub-groups that need them, but they are often unnecessary. Limiting the number of targeted questions also creates a simpler experience for managers reviewing their results.

Ordering Pulse Questions

Ideally, we want pulse takers to have a smooth (and delightful) pulse experience that does not take too much time to complete. To this end, organize the pulse so that participants move from one frame of reference to another in a logical manner.

Typically, this starts with the higher-level, more general items about the overall company, business, and leadership; then items about more local management, the direct supervisor, and team; and finally items about the individual job experience.

Outcome variables (eSat, Recommend, etc.) are generally asked at the beginning of the pulse. We want respondents to rate outcomes first and then rate their other experiences, so that their ratings of specific experiences do not artificially influence their reported overall engagement.

Mandatory Responses

Glint standard questions, which are developed based on our People Science methodology, default to mandatory (required). This is a unique case: these questions are designed to be relevant to all employees, so everyone should be able to provide feedback on them.

However, in general, we suggest you avoid mandatory, forced-response questions. Employees who cannot skip questions they would prefer not to answer may get frustrated, or may provide responses that don't accurately capture their sentiment.

The pulse interface will remind respondents at the end if they have skipped any questions, so they have an opportunity to go back and respond.

You can ask a forced-response question if it's absolutely necessary, but keep mandatory questions to a minimum.

Rating/Agreement Scales

Glint recommends using a 5-point Agreement scale: Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree. Use the same scale for all your pulses across programs.

For those clients that are currently using a 7-point Agreement scale, we recommend continuing to use that scale across all your pulse programs for consistency.

Q: For questions where Glint benchmark data is available, how is the benchmark calculated?

A: To enable our customers to make insightful comparisons, Glint offers a robust and growing benchmark database organized by industry and region, as well as by a number of demographic attributes. Glint's proprietary benchmarks incorporate both anonymized data from Glint's fast-growing global client base (tens of millions of responses across 50 countries) and data from curated industry panels to produce highly robust and unbiased normative data.

Overall Pulse Framework

Our recommendation is to keep it short and simple: easy to complete and manageable to act on. This also supports sustained high response rates over time. We know how enticing it can be to choose a large set of questions covering multiple areas, but take the time to step back and ask yourself: “What are the most important areas, from a talent or business perspective, that we want to learn about and improve upon?” Then choose a short set of core questions to capture these areas and track them over time. You will have the opportunity to rotate a subset of questions in future pulses based on pulse findings, salient activities in the organization, or your employee-facing initiatives. Be strategic, thoughtful, and purposeful about the questions you ask, as this is foundational to a successful pulse program.

Customizing Glint Questions

Glint makes it easy to create a copy of a Glint standard question and make slight modifications: wording, action plan and comment availability, required or optional status, and a few other question features. This can help you better align a question with your program.

When customizing a question, it's important to consider how that change will impact the benchmark for that Glint question. To learn more about this topic, please reference the "Deciding to Retain Benchmarks when Modifying an Item" section of the Best Practices on Using Benchmarks documentation. See also Manager Effectiveness Question Plan.

