
Tag Archives: Data collection

Every grant seeker must carefully consider its options when planning how to collect data throughout a proposed project or initiative. Each data collection method has immediate consequences for budgets, personnel, and other aspects of the undertaking.

 

An earlier post discussed three approaches to data collection: self-reports, observation checklists, and standardized tests. This post will discuss three more: interviews, surveys/questionnaires, and reviews of existing records.

 

Interviews:

  1. How important is it to know the service recipient’s perspective?
  2. Do the interview questions already exist or must they be created?
  3. Who will create the questions and vet their suitability to intended purposes?
  4. How will the applicant ensure accurate recording of responses to interview questions?
  5. Will interviews be used together with a survey or separately?
  6. Are enough persons available to conduct the interviews?
  7. How often will interviews occur and who will be interviewed?
  8. Will interviews be in English or in other languages as well?
  9. Who will translate the interviews and ensure accuracy of the translations?

 

Surveys or Questionnaires:

  1. How important is it to know the service recipient’s perspective?
  2. How will the applicant control for inaccurate or misleading survey responses?
  3. Does the survey already exist or must it be created?
  4. Who will create the survey and vet its suitability to intended purposes?
  5. Will the survey be all forced-choice responses or will it include open-ended prompts?
  6. Will the survey be self-administered?
  7. Who will complete the surveys?
  8. Who will collect completed surveys?
  9. Will the survey be in English or in other languages as well?
  10. Who will translate the survey responses and ensure accuracy of the translations?

 

Reviews of Existing Records:

  1. Are the records internal to the applicant organization?
  2. Are the records external (i.e., found in other organizations, such as partners)?
  3. Will the external organizations (or partners) agree to the use of their records?
  4. Who will determine whether the records are timely and relevant?
  5. Are the records quickly, easily, and readily accessible?
  6. Are the records formal and official?
  7. Are the records maintained consistently and regularly?
  8. Are the records reliable?
  9. Are protocols in place to protect and preserve privacy and confidentiality?
  10. How will the applicant ensure that existing protocols are followed?

As a proposal writer sorting through the options for how an applicant will collect data during the lifespan of a proposed project or initiative, it will help to consider:

  1. What kinds of data does the applicant need?
  2. Must the data be quantitative?
  3. Must the data be standardized?
  4. Must the data be reliable?
  5. Must the data be valid?
  6. Do the data already exist? If so, where?
  7. Does a data collection instrument already exist? If so, is it usable as-is?

 

For a given type of data – to be analyzed with a given desired or required level of rigor – the best choices of data collection method are often also the simplest, least expensive, and most direct.

 

Among the most commonly used data collection methods are self-reports, observation checklists, standardized tests, interviews, surveys, and reviews of existing records. This post will cover the first three methods. A later post will cover the others.

 

Self-Reports:

  1. Are there questions whose responses could be used to assess a given indicator?
  2. Does a self-report provide sufficiently objective and accurate data?
  3. Will a self-report be sufficiently reliable (e.g., stable over time)?
  4. Are adequate safeguards in place to protect privacy and confidentiality?

 

Observation Checklists:

  1. Is the expected change readily observable (such as a skill or a condition)?
  2. Will using the checklist require trained observers?
  3. Are enough already trained persons available to observe events and behaviors?
  4. Can volunteers be trained and deployed as observers?
  5. Can trained observers measure the indicator without also asking questions?
  6. Are adequate safeguards in place to protect privacy and confidentiality?

 

Standardized Tests:

  1. Is the expected change related to knowledge or a skill?
  2. Is the knowledge or skill something that is already tested?
  3. What are the technical attributes of the tests already used?
  4. Can a pre-existing test be used or must a new one be created?
  5. If a new one is needed, how will its validity and reliability be verified?
  6. Can the same test be used with all test-takers?
  7. Must special accommodations be made for some test-takers?
  8. Must the applicant administer the test or do others administer it?
  9. Do others already analyze the test results statistically, or must the applicant do so?

Data are critical to the success of a grant proposal and, once funding is awarded, to the success of the project or initiative itself. Although data may be qualitative as well as quantitative, major funders tend to look for plans to generate quantitative data. While designing a data collection plan, smart grant seekers will ask how (strategies) and how often (frequencies).

 

Data Collection Strategies:

In creating a plan, consider the best ways to capture each performance indicator. Since the changes in conditions or behaviors that each indicator will measure may be subtle, incremental, or gradual, each indicator will need to be sensitive enough both to detect the changes to be measured and to determine their significance.
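
As a rough illustration of that sensitivity question, the sketch below uses a standard sample-size approximation to show why subtle changes demand far more data than large ones. The function, effect sizes, and default values are hypothetical and illustrative only; they are not part of the original post and do not commit an applicant to any particular measure or test.

  # Illustrative sketch only: approximate sample size needed per group to detect
  # a standardized change (Cohen's d) in a quantitative indicator.
  from scipy.stats import norm

  def required_n_per_group(effect_size, alpha=0.05, power=0.80):
      # Two-sided test at the given significance level and desired power,
      # using the normal approximation for a two-group comparison of means.
      z_alpha = norm.ppf(1 - alpha / 2)
      z_beta = norm.ppf(power)
      return 2 * ((z_alpha + z_beta) / effect_size) ** 2

  # A subtle change (d = 0.2) requires far more participants than a large one (d = 0.8).
  for d in (0.2, 0.5, 0.8):
      print(f"effect size {d}: about {required_n_per_group(d):.0f} participants per group")

The point is simply that subtle, incremental changes call for either more sensitive measures or more data, both of which carry budget and staffing consequences.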

 

Data Collection Frequency:

In addition, consider what frequency will furnish the most useful data for monitoring and measuring expected changes. Typical frequencies are daily, monthly, quarterly, and yearly. Be certain to differentiate between outputs and outcomes, since outputs often require considerably less time to be observable and measurable than do outcomes.

 

In planning for data collection, smart grant seekers will ensure that:

  1. Collecting data neither usurps nor impedes delivery of direct services
  2. Staff rigorously protect and preserve the privacy and confidentiality of data
  3. Data collection methods are time-efficient and cost-effective
  4. Data collection activities strictly observe human research standards and protocols
  5. A neutral third party evaluates and reports the collected data

 

Data Collection Caveats:

In considering the nature and uses of the data to be collected, be mindful that:

  1. Data should be aggregated and analyzed to reflect the total population served
  2. Findings should be limited to a specific project or initiative
  3. Findings should be limited to a specific population of intended beneficiaries
  4. Cause-and-effect claims require much more rigorous evidence than associative claims

 

A later post in this series will discuss data collection methods.

 

In the context of grants, evaluation is a systematic inquiry into project performance. In its formative mode, it looks at what is working and what is not; in its summative mode, it looks at what did work and what did not. In both modes, it identifies obstacles to things working well and suggests ways to overcome them. For evaluation to proceed, the events or conditions that it looks at must exist, must be describable and measurable, and must be taking place or have taken place. Its focus is actualities, not possibilities.

 

Data Collection:

Effective evaluation requires considerable planning. Its feasibility depends on access to data. Among the more important questions to consider in collecting data for evaluation are:

  • What kinds of data need to be acquired?
  • What will be the sources of data?
  • How will sources of data be sampled?
  • How will data be collected?
  • When and how often will the data be collected?
  • How will outcomes with and without a project be compared?
  • How will the data be analyzed?

 

Problem Definition:

In developing an evaluation plan, it is wise to start from the problem definition and the assessment of needs and work forward through the objectives to the evaluation methods. After all, how a problem is defined has inevitable implications for what kinds of data one must collect, the sources of data, the analyses one must do to try to answer an evaluation question, and the conclusions one can draw from the evidence.

 

Evaluations pose three kinds of questions: descriptive, normative, and impact (or cause and effect). Descriptive evaluation states what is or what has been. Normative evaluation compares what is to what should be, or what was to what should have been. Impact evaluation assesses the extent to which observed outcomes are attributable to what is being done or has been done. The options available for developing an evaluation plan vary with each kind of question.

 

Power:

An evaluation plan does not need to be complex in order to provide useful answers to the questions it poses. The power of an evaluation should be equated neither with its complexity nor with the extent to which it manipulates data statistically. A powerful evaluation uses analytical methods that fit the question posed, offer evidence to support the answer reached, rule out competing evidence, and identify the modes of analysis, methods, and assumptions involved. Its utility is a function of the context of each question, its cost and time constraints, its design, the technical merits of its data collection and analysis, and the quality of its reporting of findings.

 

Constraints:

Among the most common constraints on conducting evaluations are time, costs, expertise, location, and facilities. Of these constraints, time, costs, and expertise in particular delimit the scope and feasibility of the various possible evaluation design options.

 

Design Options:

Most evaluation plans adopt one of three design options: experimental, quasi-experimental (or non-equivalent comparison group), or pre/post. In the context of observed outcomes, the experimental option, under random assignment of participants, is most able to attribute causes to outcomes; the pre/post option – even one featuring interrupted time series analyses – is least able to make such attributions.

 

The experimental option tends to be the most complex and costliest to implement; the pre/post option tends to be the simplest and least costly. Increasingly, Federal grant programs favor the experimental evaluation design, even in areas of inquiry where it is costly and difficult to implement at the necessary scale, such as education and social services.
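
To make the distinction concrete, the sketch below shows how the pre/post and non-equivalent comparison group options lead to different analyses of the same kind of project data. The scores, group sizes, and tests are invented for illustration and are not part of the original post.

  # Illustrative sketch only: how analysis differs between a pre/post design
  # and a quasi-experimental (non-equivalent comparison group) design.
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)

  # Hypothetical indicator scores for 30 project participants, before and after.
  pre = rng.normal(70, 10, 30)
  post = pre + rng.normal(5, 8, 30)

  # Pre/post option: paired comparison of the same participants over time.
  t_paired, p_paired = stats.ttest_rel(post, pre)

  # Quasi-experimental option: participants vs. a similar group not served by the project.
  comparison = rng.normal(72, 10, 30)
  t_between, p_between = stats.ttest_ind(post, comparison, equal_var=False)

  print(f"pre/post:          t = {t_paired:.2f}, p = {p_paired:.3f}")
  print(f"vs. comparison:    t = {t_between:.2f}, p = {p_between:.3f}")

Neither test by itself resolves the attribution problem; it is the design – random assignment, a credible comparison group, or repeated measures – that determines how strongly observed differences can be attributed to the project.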

What follows is a sample Position Description for an External Evaluator. It is one in a series of sample position descriptions for a fictional STEM Partnership Project.

 

Position/Time Commitment: Project-paid External Evaluator (Per Contract)

 

Name: Dr. DE Falconer

 

Nature of Position:

Provides external formative and summative evaluation services to the project consistent with its program design, evaluation plan, and federal regulations; assists in eventual submission of the project for validation as a national model.

 

Accountability:

This position is directly responsible to the Project Director.

 

Duties and Responsibilities:

  1. Design an evaluation process compatible with the project’s objectives
  2. Provide interim and final evaluation reports for each project year
  3. Conduct on-site observations and consultations
  4. Review data collection, analysis, and recording processes; recommend needed modifications
  5. Assess and revise the project evaluation implementation timeline and provide a schedule for conducting data gathering, analysis, and reporting
  6. Provide technical assistance as needed
  7. Prepare and submit final evaluation reports in consultation with the Project Director
  8. Attend at least one Partnership Advisory Team (PAT) meeting to outline the evaluation process
  9. Assist in assessing project participants’ training needs at the start of the project
  10. Design project questionnaires, interview protocols, checklists, rating scales, and all other project-developed instruments in consultation with project staff and consultants
  11. Assist in identifying and characterizing the non-project comparison group
  12. Communicate regularly with the Project Director concerning the evaluation process
  13. Attend and report on meetings convened by the funding program, as needed
  14. Assist in submitting the project for validation as a national model

 

Qualifications:

  1. Master’s degree in Education or a related field; a Doctorate is preferred
  2. Knowledge of and experience in assessing projects serving disadvantaged and minority high school students, evaluating partnership programs, and managing the evaluation process
  3. Technical background in program evaluation, data collection and analysis, and reporting
  4. Familiarity with national model STEM programs
  5. Familiarity with state and national academic standards for STEM subjects

 

