Survey Review Checklists #29

Open · wants to merge 4 commits into base: master
Conversation

roshni13khincha (Member):
No description provided.

@roshni13khincha changed the title from "WIP: Survey Review Checklists" to "Data Quality Assurance Checklist" on Mar 10, 2021
\toprule
\textbf{CHECKS} & YES & NO & N/A \\
\midrule
\makerow{Does the TOR establish \textbf{minimum quality indicators} required for the data to be considered acceptable?}
Contributor:

I'm actually not sure offhand what minimum quality indicators would be.

\midrule
\makerow{Does the TOR establish a \textbf{maximum response time} for the survey firm to address data quality issues identified?}
Contributor:

perhaps add, does the TOR establish a clear protocol for identifying and correcting data quality issues? maximum response time is part of that for sure, but establishing that the team will be running quality checks and the survey firm is expected to respond seems the most critical.

\midrule
\makerow{Does the survey have an \textbf{ID variable} to identify respondents and link them to the respondent database?}
\midrule
\makerow{Does the survey have a \textbf{unique ID generator} for each survey submission? (If the survey is being done over SurveyCTO, this is automatically generated as the ‘key’ variable.)}
Contributor:

this seems second-order to me (since ID var is already established)

roshni13khincha (Member Author):

This is second-order, yes, but it comes in handy when dealing with duplicate submissions that share the same ID variable, as it allows for easy management of duplicates.
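For illustration, a minimal pandas sketch of how a unique submission key helps when the same respondent ID appears more than once; the column names (resp_id, submissiondate, key) are assumptions, not taken from this PR:

import pandas as pd

def flag_duplicate_ids(df: pd.DataFrame) -> pd.DataFrame:
    """Return every submission whose respondent ID appears more than once."""
    # Assumed columns: 'resp_id' (project ID variable), 'key' (SurveyCTO's
    # auto-generated unique submission ID), 'submissiondate'.
    dups = df[df.duplicated(subset="resp_id", keep=False)]
    # Each duplicate row still has a unique 'key', so individual copies can be
    # reviewed and dropped without touching the submission you want to keep.
    return dups.sort_values(["resp_id", "submissiondate"])[["key", "resp_id", "submissiondate"]]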

\makerow{Does the survey record the \textbf{duration spent} completing the survey and each module within it?}
\midrule

\makerow{Has the \textbf{translation} been approved?}
Contributor:

by whom? maybe instead, was the survey fully translated into all languages used for data collection?


\midrule
\makerow{Do the \textbf{enumerator training materials} include an explanation of informed consent, a Q\&A session, mock interviews, and a review of best practices?}
@mariaruth (Contributor), Mar 11, 2021:

suggest including something about the enumerator manual here as well, like does one exist and does it explain all questions and survey conventions. I see there is a follow up question about manuals but it reads as though it only applies for projects with multiple survey instruments.
maybe consolidate q&a/mock interviews to does the agenda for the enumerator training include both classroom and field practice, and allocate sufficient time for understanding the survey instrument?

\midrule
\makerow{Is there a plan in place to conduct \textbf{high-frequency checks} on the data being collected?}
\midrule
\makerow{Do the HFCs \textbf{monitor survey duration}, start/end times, and the day of the week on which each survey was filled out?}
Contributor:

how does day of the week affect quality? maybe this is getting at more generally whether the HFCs monitor compliance with established field protocols re: interview duration and timing? might want to add here whether HFCs check for enumerator-specific patterns (e.g. an enumerator that is super slow or consistently doing interviews outside of expected working hours)
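For illustration, a hedged Python/pandas sketch of duration and timing flags of the kind described here, including the enumerator-level pattern check; the thresholds and column names (starttime, endtime, enum_id) are assumptions:

import pandas as pd

def hfc_timing_flags(df, min_minutes=20, max_minutes=180, work_start=8, work_end=18):
    """Flag interviews that are implausibly short or long, or conducted outside working hours."""
    out = df.copy()
    out["duration_min"] = (out["endtime"] - out["starttime"]).dt.total_seconds() / 60
    out["flag_short"] = out["duration_min"] < min_minutes
    out["flag_long"] = out["duration_min"] > max_minutes
    hour = out["starttime"].dt.hour
    out["flag_off_hours"] = (hour < work_start) | (hour >= work_end)
    # Enumerator-specific pattern: average duration per enumerator, sorted so
    # consistently slow or fast enumerators stand out.
    enum_avg = out.groupby("enum_id")["duration_min"].mean().sort_values()
    return out, enum_avg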

\toprule
\textbf{CHECKS} & YES & NO & N/A \\
\midrule
\makerow{Is there a plan in place to conduct \textbf{high-frequency checks} on the data being collected?}
Contributor:

this question seems like the most important, the rest are good to document but not as critical. perhaps in general we want to think about setting this up as minimum expectation and then recommended best practices? not sure about the best way to do that, just thinking out loud ...

\midrule
\makerow{Do the HFCs check for key treatment and \textbf{outcome variables}?}
Contributor:

check what for them? distribution?
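If the intent is indeed distributions, a minimal sketch of what such a check could look like, grouped by enumerator; the outcome list and column names are placeholders, not taken from the PR:

import pandas as pd

def outcome_summary(df, outcomes, by="enum_id"):
    """Summarise missingness and basic distribution of key (numeric) outcomes, by enumerator."""
    grp = df.groupby(by)
    return {
        var: pd.DataFrame({
            "n": grp[var].size(),
            "missing_share": grp[var].apply(lambda s: s.isna().mean()),
            "mean": grp[var].mean(),
            "sd": grp[var].std(),
        })
        for var in outcomes
    }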

\midrule
\makerow{Do the HFCs check for \textbf{sample completeness}?}
Contributor:

meaning response rate / attrition?
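If this is indeed response rate against the target sample, a minimal sketch; the ID column name is an assumption:

import pandas as pd

def completion_report(sample, submissions, id_col="resp_id"):
    """Compare the target sample list against submissions received so far."""
    surveyed = sample[id_col].isin(set(submissions[id_col]))
    response_rate = surveyed.mean()
    pending_ids = sample.loc[~surveyed, id_col].tolist()
    return response_rate, pending_ids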

\midrule
\makerow{Is the process for generating flags from HFCs \textbf{easy and fast to run} on a daily basis for FCs?}
\midrule
\makerow{Does the HFC process account for potential lack of \textbf{access to WiFi} or electricity for the team in the field?}
Contributor:

this is very specific. perhaps more generalizable, is there a protocol for sharing HFC results with teams that accounts for local conditions (e.g. limited connectivity) and accounts for safe sharing of personal data?

\midrule
\makerow{Do HFCs monitor the geographic location of projects based on \textbf{GPS coordinates}?}
Contributor:

this wording is quite vague. is it getting at whether HFCs check whether the interview was actually conducted in/near the expected location?
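If the intent is checking that interviews happened near the expected location, a small haversine-based sketch; the existence of expected coordinates per respondent, and the 5 km threshold, are assumptions:

import numpy as np

def km_from_expected(lat, lon, exp_lat, exp_lon):
    """Great-circle distance in km between recorded and expected GPS coordinates."""
    lat, lon, exp_lat, exp_lon = map(np.radians, (lat, lon, exp_lat, exp_lon))
    a = (np.sin((exp_lat - lat) / 2) ** 2
         + np.cos(lat) * np.cos(exp_lat) * np.sin((exp_lon - lon) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

# e.g. flag interviews recorded more than 5 km away from the expected location:
# df["gps_flag"] = km_from_expected(df["lat"], df["lon"], df["exp_lat"], df["exp_lon"]) > 5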

\midrule
\makerow{Has the HFC plan been communicated to all \textbf{stakeholders} in the team (incl. survey firm) with enough time before the survey round?}
Contributor:

this could be incorporated with the question about protocol i suggested in my comment above.

\toprule
\textbf{CHECKS} & YES & NO & N/A \\
\midrule
\makerow{Is there a plan in place to conduct \textbf{back-checks} on the data collected?}
@mariaruth (Contributor), Mar 11, 2021:

perhaps have a space here for people to indicate what proportion of the sample was back-checked? might be simpler for that to be indicated explicitly rather than a yes/no in the question at the end. but maybe want to stick consistently with checklist.

Contributor:

we do a combo of checklist plus spaces for open-ended responses in the reproducibility package checklist and that seems to work pretty well
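On the earlier point about indicating the proportion back-checked, a minimal sketch of how that share and the match rates for stable variables could be computed once back-check data come in; the merge key and variable names are assumptions:

import pandas as pd

def backcheck_report(original, backcheck, id_col, stable_vars):
    """Share of sample back-checked, plus match rates for variables unlikely to change."""
    merged = original.merge(backcheck, on=id_col, suffixes=("_orig", "_bc"))
    match_rates = pd.Series(
        {v: (merged[f"{v}_orig"] == merged[f"{v}_bc"]).mean() for v in stable_vars},
        name="match_rate",
    )
    share_backchecked = backcheck[id_col].nunique() / original[id_col].nunique()
    return share_backchecked, match_rates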

\midrule
\makerow{Does the back-check plan include \textbf{variables unlikely to change}, as well as variables that stem from difficult or lengthy modules?}
\midrule
\makerow{Is there a plan to audit through \textbf{audio recordings}? If so, is it mentioned in the informed consent?}
Contributor:

this question seems out of order because it comes in the middle of questions about backchecks

\toprule
\textbf{CHECKS} & YES & NO & N/A \\
\midrule
\makerow{Does the team have a plan for \textbf{communication and data quality check feedback} with enumerators?}
Contributor:

this seems potentially redundant with the questions earlier about protocols for HFCs

\midrule
\makerow{Is there a protocol in place for the field team to \textbf{gather information and respond to flags} raised during data quality checks?}
Contributor:

potentially redundant with HFCs question, unless this is also meant to include backchecks/audits?

\midrule
\makerow{Has the team put together \textbf{tracking systems} for enumerators to fill out as part of their daily tasks?}
Contributor:

meaning like a supervisor logbook of interviews completed by the team?

\midrule
\makerow{Does the team have a plan for \textbf{reconciling data} from the tracking system with the responses on the server?}
\midrule
\makerow{If feasible in the context, does the team plan on research staff \textbf{accompanying enumerators}?}
Contributor:

this seems like it would fit better in the sections on backchecks / audits
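On the "reconciling data" item above, a minimal sketch of cross-checking a field tracking log against submissions on the server; both tables are assumed to share a respondent ID column:

import pandas as pd

def reconcile(tracking, server, id_col="resp_id"):
    """List IDs logged in the field tracker but missing on the server, and vice versa."""
    merged = tracking[[id_col]].drop_duplicates().merge(
        server[[id_col]].drop_duplicates(), on=id_col, how="outer", indicator=True
    )
    logged_not_on_server = merged.loc[merged["_merge"] == "left_only", id_col].tolist()
    on_server_not_logged = merged.loc[merged["_merge"] == "right_only", id_col].tolist()
    return logged_not_on_server, on_server_not_logged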

@mariaruth (Contributor) left a comment:

There's a need to clarify the objective of the doc. If this is for the team's planning purposes, it might be helpful to add links in each section (or possibly each question) to relevant resources / wiki articles. Is Analytics supposed to be verifying anything? If so, there would need to be some document to check against (I guess a data quality assurance plan)?
In general, we might want to distinguish a bit more between minimum expectations all teams should meet and recommended best practices. Right now it's rather a mix of general questions and very detailed follow-ups.

@roshni13khincha changed the title from "Data Quality Assurance Checklist" to "Survey Review Checklists" on Apr 16, 2021
\end{itemize}
\subsection*{Recommended practices}

The \href{https://dimewiki.worldbank.org/Ietestform}{ietestform} command (from the iefieldkit package) was run and the results are attached. It is good practice to run ietestform multiple times during survey development, to make sure you’ve incorporated all relevant suggested changes and to re-test when new questions are added during the development phase of the survey.
Contributor:

this seems out of place?

\makerow{\textbf{Pilot feedback} and key-takeaway discussions (2-3 weeks)}
\midrule
\makerow{Conduct \textbf{data quality or variation checks} on pilot data collected (1-2 weeks)}
\bottomrule
Contributor:

suggest adding item on revising survey instrument (paper) and revising electronic instrument, both here and after data-focused pilot

\midrule
\makerow{\textbf{Pilot Sample} Selection (2-3 weeks)}
\midrule
\makerow{\textbf{Survey Firm} hiring (2-3 weeks)}
Contributor:

i suggest removing this -- it's already covered in survey preparation

\midrule
\makerow{Obtain \textbf{IRB approval} (2-3 weeks)}
Contributor:

suggest removing this since it's already covered in survey prep, keep tight focus on pilot activities for this checklist

\midrule
\makerow{Prepare \textbf{Informed-Consent} forms for respondents (or verbal consent section of the survey) - (1-2 weeks)}
Contributor:

Suggested change
\makerow{Prepare \textbf{Informed-Consent} forms for respondents (or verbal consent section of the survey) - (1-2 weeks)}
\makerow{Pilot \textbf{Informed-Consent} forms for respondents (or verbal consent section of the survey) - (1-2 weeks)}

suggested to change this to pilot as i suggested adding preparing consent statement to survey prep. but in practice the piloting is usually part and parcel of the survey pilot itself, not sure it's necessary to have a separate line item.

\bottomrule
\end{tabularx}

\subsection*{Data-Focused Pilot}
Contributor:

include item for piloting survey in the field for data-focused pilot

\midrule
\makerow{Submit a \textbf{mock dataset} (or several) - (2-3 weeks)}
\midrule
\makerow{Placing \textbf{consistency checks and constraints} in survey questions (2-3 weeks)}
Contributor:

i don't usually think of this as separate from the programming?

\midrule
\makerow{Check skip patterns, loops, relevance conditions, special characters in translated supplemental CSVs, and other \textbf{electronic features} of the programmed survey (1-2 weeks)}
Contributor:

maybe add to check preloads?
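On the special-characters point, a small sketch of scanning a translated supplemental CSV for characters that commonly cause import or display problems; the file path and the character set are illustrative assumptions:

import csv

SUSPECT = {"\u201c", "\u201d", "\u2018", "\u2019", "\u00a0"}  # curly quotes, non-breaking space

def check_translation_csv(path):
    """Return (row number, cell, offending characters) for cells containing suspect characters."""
    problems = []
    with open(path, encoding="utf-8") as f:
        for i, row in enumerate(csv.reader(f), start=1):
            for cell in row:
                hits = SUSPECT & set(cell)
                if hits:
                    problems.append((i, cell, hits))
    return problems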

}

\section*{Objective}
Project teams will submit a \href{https://www.dropbox.com/scl/fi/px74eyztrn7uv1hsbeoz3/Survey-Pilot-Plan-Template-for-teams.paper?dl=0&rlkey=yaid3m17zzzjpfoconj0zw6fu}{\textcolor{blue}{Survey Pilot Plan Template (for teams)}} and a Pilot timeline GANTT chart. The objective of the review is to ensure that the pilot is set up in such a way that teams are able to maximize learning from the pilot. \\
Contributor:

  • update link for pilot plan (I assume this is also moving to github)
  • is there a template for the pilot GANTT chart?

\end{itemize}
\subsection*{Recommended practices}

The \href{https://dimewiki.worldbank.org/Ietestform}{ietestform} command (from the iefieldkit package) was run and the results are attached. It is good practice to run ietestform multiple times during survey development, to make sure you’ve incorporated all relevant suggested changes and to re-test when new questions are added during the development phase of the survey.
Contributor:

out of place

\newpage

\section*{Checklist}
\subsection*{Project Identifiers}
@mariaruth (Contributor), Apr 19, 2021:

i'm unsure of what this section is ... is it meant to be on the checklist?

\toprule
\textbf{CHECKS} & YES & NO & N/A \\
\midrule
\makerow{\textbf{Survey Pilot GANTT} timeline specific review \href{https://www.dropbox.com/home/Survey\%20Review/tex\%20checklists?preview=survey_pilot_timeline.pdf}{\textcolor{blue}{checklist}}}
Contributor:

link is broken

\midrule
\makerow{Method of \textbf{recording comments} during pilot}
\midrule
\makerow{Expected plan for sharing comments on a \textbf{daily basis}}
Contributor:

would this be in addition to the daily debriefs?

\midrule
\makerow{Who is \textbf{implementing changes} in the survey once discussed}
\midrule
\makerow{Does the FC have an additional document to \textbf{record observations} that may become useful for the enumerator manual and training down the line?}
Contributor:

this checklist goes back and forth between statements and questions, will want to edit for consistency

\bottomrule
\end{tabularx}

\subsection*{Pre-Pilot specific}
Contributor:

I wonder if we might think of this as a separate questionnaire design stage? this all seems useful to check for but not sure teams would think about it as part of piloting

Contributor:

and i think a lot of this stage might come before concrete piloting plans are made

\toprule
\textbf{CHECKS} & YES & NO & N/A \\
\midrule
\makerow{Has the team submitted a \textbf{PAP}?}
@mariaruth (Contributor), Apr 19, 2021:

does this mean submitted to Analytics? or elsewhere? same question applies throughout this section

\midrule
\makerow{Has the team submitted a \textbf{Literature review}?}
Contributor:

i am a little uncertain how feasible it would be for our team to review a lot of the items in here -- i think these are good things for teams to verify they have done, but not sure Analytics has meaningful inputs, this should mostly be up to the PIs...

\end{tabularx}

\vspace{2mm} %2mm vertical space
Did the team prepare an informed consent form to be handed to all respondents, containing:
Contributor:

suggest moving this section elsewhere, possibly to a separate checklist on ethics (or incorporating that into survey design / prep). also need to update this to reflect data policy and any corresponding edits we make to the informed consent template. probably should link the template here (or maybe it is linked in docs that go to the team and i missed it?).

\noindent
\begin{tabularx}{\textwidth}{Xccc}
\toprule
\textbf{CHECKS} & YES & NO & N/A \\
Contributor:

many of these items seem like they are useful for the team to self-evaluate, but difficult for us to assess.

\bottomrule
\end{tabularx}

\subsection*{Data-pilot specific}
Contributor:

to what extent do these differ from the questionnaire review checklist, and how much do we think they should differ? if very similar should we just offer the same questionnaire review here and again later if needed? i would think often people would do the questionnaire review pre-pilot but i'm not sure what sequencing is ideal.

\noindent
\begin{tabularx}{\textwidth}{Xccc}
\toprule
\textbf{CHECKS} & YES & NO & N/A \\
Contributor:

a lot of different sections have the heading "CHECKS". suggest either removing this or making them more distinct.

}

\section{Objective}
Project teams will submit the paper survey along with a pre-analysis plan/i2i table/concept note. The objective of the review is to ensure that the survey is designed to collect all required information in a manner conducive to respondents.
Contributor:

should think about the sequencing of this review and the piloting. there seems to be quite a bit of overlap with the content-focused pilot. perhaps just simplify the pilot checklist to suggest paper review before the content-focused pilot and electronic instrument review before the data-focused pilot and cut a lot of the specifics out of the pilot checklist.

\end{itemize}
\subsection*{Recommended practices}

The \href{https://dimewiki.worldbank.org/Ietestform}{ietestform} command (from the iefieldkit package) was run and the results are attached. It is good practice to run ietestform multiple times during survey development, to make sure you’ve incorporated all relevant suggested changes and to re-test when new questions are added during the development phase of the survey.
Contributor:

out of place

\noindent
\begin{tabularx}{\textwidth}{Xccc}
\toprule
\textbf{CHECKS} & YES & NO & N/A \\
Contributor:

headers are all "CHECKS", either remove or disambiguate.

\end{tabularx}

\vspace{5mm} %5mm vertical space
\textbf{Consent section includes:}
Contributor:

this is also included in pilot. suggest consolidating to one place. need to update to align also with data privacy policy (probably just need to add data retention period). i think this is a better place for it than the pilot checklist.

\midrule
\makerow{Does each module have an \textbf{introductory script}?}
\midrule
\makerow{Could the \textbf{order} of survey modules be improved for better respondent understanding?}
Contributor:

not sure we'll have insights on this. would be something for teams to check during the pilot, but not sure we'll have much to add here.

\midrule
\makerow{Are there \textbf{open-ended} questions which can be phrased as questions with choice lists in the survey?}
\midrule
\makerow{Are there any questions in the survey that could be \textbf{ambiguous} or that the same person could answer in two different ways?}
@mariaruth (Contributor), Apr 19, 2021:

Not sure it'll make sense for us to do the detailed question-by-question review that would be needed to answer this. also seems like something perhaps better addressed through pilot than our review.

\midrule
\makerow{If so, does the survey incorporate strategies to mitigate this? (ie, \textbf{ensure anonymity}, ask for consent for that module, ensure the enumerator is in private with the respondent, self-administration of module)}
\midrule
\makerow{Are questions asked only when \textbf{relevant}? (ie, the question “What level of schooling are you in?” is only relevant if the question “Are you in school” is answered Yes)}
Contributor:

it may be difficult for us to do a detailed enough review to assess if all relevancies are in place

\midrule
\makerow{If so, does the survey incorporate strategies to \textbf{mitigate the bias}? (ie, multiple measurements, consistency checks)}
\midrule
\makerow{If there is no way to ask the question while avoiding bias, has the team considered avoiding that question and obtaining that information through \textbf{alternate means}? (ie, administrative data, geospatial data, triangulation)}
Contributor:

how would we assess this?



\vspace{5mm} %5mm vertical space
\subsection*{Form usability}
Contributor:

just a note that we should update this to align with our template/guidance about form design (from the discussion about making it easy to distinguish modules)

@mariaruth (Contributor) left a comment:

I went through all the forms in detail, with the exception of data quality assurance (i'm happy to return to that later once you've had a chance to address existing comments), and i did a pretty quick review of electronic survey forms (since i've already seen that, and it seems great).

FYI i reviewed in reverse order of how files appear in this PR, starting with survey_timeline, that might be helpful to know in case i refer to earlier comments (which you would not have yet seen if you're reading the PR from beginning to end).

Lots of comments in line, but two comments i wanted to highlight:

  • the sequencing / overlap between the pilot, paper and electronic survey review needs revision. i think we can simplify and reduce duplication, i made specific suggestions in text.
  • i think we probably want to pull out the informed consent, and either fold that into a new separate checklist which could either be framed around ethics or secure data handling (or both?)
