Characteristics of an Effective Survey

Regardless of the type of survey instrument you plan to employ, there are certain characteristics an effective survey must have:

  • measurable survey objectives
  • sound research design
  • effective survey question design
  • sound sampling strategy, when needed
  • effective survey response strategy
  • meaningful data summary
  • effective data display and reporting. 

Each of these criteria is presented in more detail throughout Survey Basics, but a brief summary appears here.

Survey objectives 

Survey objectives are the foundation for everything about the survey. They express the need for the questions as well as the measures to be taken through the survey instrument. By reading the survey objectives, a surveyor should be able to identify the measures (or variables) as well as how best to collect the data. Good survey objectives also provide insight into the research design.

Survey objectives come in three forms: 1) a statement, 2) a question, or 3) a hypothesis. Because many surveys are used for descriptive purposes, the statement is the most common survey objective. However, there are times when a research question is an appropriate survey objective, particularly when the survey is intended to identify key issues that will ultimately form the basis for a larger survey. 

Hypotheses are special-purpose objectives; technically, they are used only when the theory the survey is testing rests on enough evidence to justify hypothesis testing. That said, the specific, measurable, achievable, relevant, and time-bound (SMART) program objectives set for learning and development initiatives are written much like hypotheses.

You can read more about survey objectives in chapter 2 of Survey Basics. 

Research design 

Research design refers to how the survey will be administered in terms of targeted groups, comparisons of data across multiple groups, and frequency of survey administration.

Many survey projects represent cross-sectional studies. In a cross-sectional design, a survey is administered to a group at a defined time. For example, you may decide to measure your employees’ overall satisfaction with their jobs. This measurement of satisfaction for the group at this particular time is a cross-sectional survey. 

On the other hand, you may want to compare the change in behavior as measured by a 360-degree feedback survey between one group involved in a program and another group not involved in a program. This comparison of two groups falls into the experimental (randomly assigned participants) or quasi-experimental (nonrandomly assigned participants) designs. Occasionally you will not know the specific questions to ask on a self-administered questionnaire. If that is the case, you can use a focus group (a qualitative survey) to gather preliminary information that will inform the questionnaire. Or, you may administer a broad-based survey to capture data on key issues, then use those data to guide the questions asked during a focus group.

These mixed-method research designs are increasing in popularity and provide a robust foundation for collecting relevant data. Chapter 3 of Survey Basics presents more detail on research design.
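
To make the distinction concrete, here is a minimal Python sketch (not from Survey Basics) of how random assignment separates an experimental design from a quasi-experimental one. The participant IDs and the department-based split are hypothetical.

    import random

    # Hypothetical participant pool for a program evaluation.
    participants = ["p01", "p02", "p03", "p04", "p05", "p06", "p07", "p08"]

    # Experimental design: participants are randomly ASSIGNED to the
    # program group or the comparison group, so the groups differ only
    # by chance before the program begins.
    random.seed(42)  # fixed seed so the example is reproducible
    pool = participants[:]
    random.shuffle(pool)
    midpoint = len(pool) // 2
    program_group, comparison_group = pool[:midpoint], pool[midpoint:]

    # Quasi-experimental design: assignment is NOT random -- here, two
    # intact groups (say, existing departments) are compared instead.
    dept_a = [p for p in participants if p <= "p04"]
    dept_b = [p for p in participants if p > "p04"]

    print("experimental:      ", program_group, "vs", comparison_group)
    print("quasi-experimental:", dept_a, "vs", dept_b)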

Survey question design 

All too often we make decisions based on results derived from the wrong questions. Even when the questions are the right ones, poor wording produces the same outcome: decisions based on bad questions.

Survey question design is the heart of survey research. Asking the right questions the right way, to the right people, in the context of an appropriate research framework generates relevant, usable information. But how do we know which questions are the right ones? We refer to the survey objectives. How do we know we are asking them the right way? Read chapter 4 of Survey Basics for samples of good question design.

Sampling 

Sampling is a process that avoids the cost of surveying every member of a population while still allowing inferences to be made about the population as a whole. While it is a common practice in large general-population studies, marketing research, and opinion polling, its use is limited within the organizational setting. This is particularly true when evaluating learning and development programs, human resources initiatives, and large meetings or events. But when sampling is needed, a sound strategy is imperative to reduce error when making inferences. Because sampling is part of a sound research design, content on this topic is found in chapter 3 of Survey Basics.
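
To show what reducing inference error looks like in practice, here is a minimal sketch using the standard Cochran sample-size formula with a finite population correction. The population size, confidence level, and margin of error are hypothetical values chosen for illustration, not figures from the book.

    import math

    def required_sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
        # Cochran's formula: z is the z-score for the confidence level
        # (1.96 for 95 percent); p is the expected proportion, with 0.5
        # being the most conservative choice.
        n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
        # Finite population correction for small populations.
        n = n0 / (1 + (n0 - 1) / population)
        return math.ceil(n)

    # Hypothetical example: a 1,000-employee organization, 95 percent
    # confidence, and a +/-5 percent margin of error.
    print(required_sample_size(1000))  # -> 278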

Survey response 

An effective survey administration strategy will help ensure you receive an acceptable quantity and quality of responses. Research describes a variety of incentives and processes available to us to increase our chances of getting a good response rate. Chapter 6 of Survey Basics describes these opportunities as well as the research that supports them. 

Data summary 

Data “summary” is a less intimidating way of referring to data “analysis.” Whatever you call it, if you collect survey data, whether through a questionnaire or an interview, you will analyze those data. But fear not: it does not have to be difficult. Many of the surveys used in learning and development, human resources, and meetings and events lend themselves to simple descriptive statistics.

While many organizations are advancing their capability in more complex analytics, most survey data captured for the purposes of conducting needs assessments and program evaluations can be summarized using basic statistical procedures. Credible qualitative analysis can be done by simply categorizing words into themes. 
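
As a minimal sketch of both ideas, the following Python summarizes a hypothetical set of five-point ratings with basic descriptive statistics and then tallies open-ended comments into keyword-based themes. All data, keywords, and theme names are illustrative assumptions.

    import statistics
    from collections import Counter

    # Hypothetical five-point satisfaction ratings.
    ratings = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

    print("n:     ", len(ratings))
    print("mean:  ", statistics.mean(ratings))            # 3.9
    print("median:", statistics.median(ratings))          # 4.0
    print("stdev: ", round(statistics.stdev(ratings), 2)) # 0.99
    print("counts:", dict(Counter(sorted(ratings))))

    # Simple qualitative summary: categorize open-ended comments into
    # themes by keyword. The themes and keywords are assumptions.
    themes = {
        "workload": ["overtime", "hours", "busy"],
        "management": ["manager", "supervisor", "leadership"],
        "growth": ["training", "career", "promotion"],
    }
    comments = [
        "Too much overtime lately",
        "My manager listens to the team",
        "I would like more training opportunities",
    ]
    theme_counts = Counter()
    for comment in comments:
        text = comment.lower()
        for theme, keywords in themes.items():
            if any(word in text for word in keywords):
                theme_counts[theme] += 1
    print("themes:", dict(theme_counts))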

Chapter 7 of Survey Basics provides a brief look at how to summarize your survey results so that they are meaningful and useful. A future book on data analysis basics will address more detail on analysis for the learning and development, HR, and meetings and events professional. 

Data display and reporting 

A final characteristic of a good survey is that the results are reported in such a way that stakeholders immediately “get it.” Reporting results requires written words, oral presentations, and effective graphical displays. Along with data summary, chapter 7 of Survey Basics provides tips on how to effectively report your survey results.
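
For example, here is a minimal charting sketch using matplotlib, an assumed tool choice since the book does not prescribe one. It turns summarized ratings into a horizontal bar chart a stakeholder can read at a glance; the dimensions and mean scores are hypothetical.

    import matplotlib.pyplot as plt

    # Hypothetical mean ratings by survey dimension (five-point scale).
    dimensions = ["Content", "Facilitation", "Relevance", "Environment"]
    mean_scores = [4.2, 3.8, 4.5, 3.1]

    fig, ax = plt.subplots(figsize=(6, 3))
    ax.barh(dimensions, mean_scores)
    ax.set_xlim(0, 5)
    ax.set_xlabel("Mean rating (1-5)")
    ax.set_title("Satisfaction by Dimension")
    fig.tight_layout()
    fig.savefig("survey_results.png")  # or plt.show() for on-screen display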


About Survey Basics 

Known for their expertise in ROI, Jack and Patricia Phillips have contributed to another area in the field of measurement and evaluation. Together with Bruce Aaron, they’re offering a useful tool to help learning and development professionals design and administer surveys and questionnaires. Written in the accessible style of ASTD Basics books, this volume covers:

  • the purpose of surveys and questionnaires
  • types of error that can creep into survey results
  • considerations when developing survey questions
  • tricks to ensure positive response rates
  • content on validity and reliability
  • approaches to data analysis and reporting results.

In addition to content on survey design, the book includes a section that evaluates various survey technologies. By applying a simple decision-making process, readers can identify the most appropriate survey tool for their needs. 

About the Author
Patti Phillips is president and CEO of the ROI Institute and is the ATD Certification Institute's 2015 CPLP Fellow. Since 1997, she has worked with organizations in more than 60 countries as they demonstrate the value of a variety of programs and projects. Patti serves on the board of the Center for Talent Reporting, as Distinguished Principal Research Fellow for The Conference Board, and as faculty at the UN System Staff College in Turin, Italy.

Patti has written and edited numerous books and articles on the topics of measurement, evaluation, and ROI. Recent publications include Measuring the Success of Leadership Development, Making Human Capital Analytics Work, Measuring the Success of Learning Through Technology, Measuring the Success of Organization Development, and Measuring Leadership Development: Quantify Your Program's Impact and ROI on Organizational Performance.
About the Author
Jack J. Phillips, PhD, is chairman of the ROI Institute and a world-renowned expert on measurement and evaluation. Phillips provides consulting services for Fortune 500 companies and workshops for major conference providers worldwide. Phillips is also the author or editor of more than 100 articles and more than 75 books, including Measuring the Success of Leadership Development: A Step-by-Step Guide for Measuring Impact and Calculating ROI (ATD Press). His work has been featured in the Wall Street Journal, Bloomberg Businessweek, Fortune, and on CNN.
About the Author
Bruce Aaron, PhD, has more than 20 years of program evaluation experience, and has presented at international conferences of organizations such as ASTD, ISPI, SALT, AERA, FERA, EERA, AEA, and The Psychometric Society. Aaron has authored or co-authored dozens of presentations, articles, chapters, and books, including Isolation of Results: Defining the Impact of the Program with Jack Phillips, and a chapter for The ASTD Handbook of Measuring and Evaluating Training.