Collecting and combining data using multiple modes of interview (e.g., face-to-face, telephone, Web) is becoming common practice in survey agencies. This is also true for longitudinal studies, a special type of survey in which questionnaires are administered repeatedly to the same respondents. In this PhD thesis I investigate if and how collecting information using different modes can affect data quality in panel studies. Chapters 2 and 3 investigate how a sequential telephone to face-to-face mixed-mode design can bias reliability, validity and estimates of change compared to a single-mode design. To achieve this goal I use an experimental design from the Understanding Society Innovation Panel. The analyses show only small differences in reliability and validity between the two modes, but estimates of change may be overestimated in the mixed-mode design. Chapter 4 investigates measurement differences between face-to-face, telephone and Web modes on three scales: depression, physical activity and religiosity. We use a quasi-experimental (cross-over) design in the Health and Retirement Study. The results indicate systematic differences between the interviewer-administered modes and Web. We propose social desirability and recency as possible explanations. In Chapter 5 we use the Understanding Society Innovation Panel to investigate whether the extra contact by email in a sequential Web to face-to-face design increases the propensity to participate. Exploiting the experimental nature of our data, we show that the extra contact by email in the mixed-mode survey does not increase the likelihood of participation. One of the main difficulties in research on mixed-mode designs is separating the selection effects and the measurement effects of the modes. Chapter 6 tackles this issue by proposing equivalence testing, a statistical approach that controls for measurement differences across groups, as a front-door approach to disentangling the two. A simulation study shows that this approach works and highlights the bias that arises when its two main assumptions do not hold.