1 Introduction

Cash transfers (CTs) - commonly understood as direct payments made to people in poverty - are among the most extensively studied and implemented interventions in low- and middle-income countries (LMICs) (Vivalt, 2015). Previous systematic reviews and meta-analyses of CTs have found improvements on several outcomes, including material poverty (Kabeer & Waddington, 2015), human capital (Baird et al., 2013b; Millan et al., 2019), social capital (Owusu-Addo et al., 2018), health (Lagarde et al., 2007; Behrman & Parker, 2010; Crea et al., 2015), intimate partner violence (Baranov et al., 2020; Buller et al., 2018), child labor (Kabeer & Waddington, 2015), the spread of HIV (Pettifor et al., 2013), spending on tobacco and alcohol (Evans & Popova, 2014; Handa et al., 2018), and labor supply (Baird et al., 2018; Banerjee et al., 2017). Although these factors are relevant to wellbeing, measures of mental health (MH) and subjective wellbeing (SWB), which probe how individuals themselves assess the quality of their lives, are often thought to track wellbeing more accurately. Indeed, measures of SWB are increasingly considered essential components of applied policy analyses (Benjamin et al., 2020; Frijters et al., 2020). It therefore seems pertinent to evaluate the effectiveness of CTs with respect to these measures. Individual income and SWB are known to be positively associated (Powdthavee, 2010; Stevenson & Wolfers, 2013; Jebb et al., 2018), especially at low income levels (Clark, 2017; Deaton, 2008). A similar relationship is observed in the MH literature (Karimli et al., 2019; Tampubolon & Hanandita, 2014; Schilbach et al., 2016; Ridley et al., 2020). Moreover, mental health problems may engender and perpetuate poverty (Haushofer & Fehr, 2014). Unfortunately, the literature on the link between income and SWB and MH in LMICs has long lacked causal evidence, which the growing body of primary research on CTs may address.
While CTs may improve the SWB and MH of recipients, these interventions could also have negative psychological consequences for non-recipients. Qualitative research suggests the presence of negative psychological spillovers (Fisher et al., 2017; MacAuslan & Riemenschneider, 2011), and some recent quantitative work echoes this worry (Haushofer et al., 2019). For example, envy among non-recipients may be a concern (Ellis, 2012). Community disruptions and crime rates may also increase if CTs are mistargeted to formally ineligible recipients (Agbenyo et al., 2017; Fisher et al., 2017). However, there is also some evidence of positive spillovers. For example, CTs have been found to decrease the intergenerational transmission of depression (Eyal & Burns, 2019) and to lead to decreased suicide rates in the areas where they are implemented (Alves et al., 2018). We know of no previous systematic reviews on this subject. A non-systematic meta-analysis by Ridley et al. (2020), which evaluates the impact of CTs on MH, is closest to our work.1 We build on their work in four directions. First, we conducted a full systematic review and search of the existing literature in accordance with the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidance (Moher, Liberati, Tetzlaff, & Altman, 2010). Second, we consider SWB measures alongside MH measures.2 Third, we consider quasi-experimental designs (in addition to randomised controlled trials (RCTs)). Fourth, we evaluate the quality of included studies, assess publication bias, and perform moderator analyses across (1) outcome type (MH and SWB), (2) CT value, and (3) duration of the transfer.

2 Methods

2.1 Eligibility criteria

For a study to be included it must satisfy four criteria. First, the study must investigate the effect of an unbundled cash transfer (defined below).
Second, the study must include a measure of self-reported affective mental health or subjective wellbeing, but these need not be the primary focus of the study. Third, the study context must not be a high-income country.3 Fourth, the study design must be experimental or quasi-experimental4 and must allow standardization of the mean difference between treatment and control groups. Regarding our first criterion, we distinguish between unconditional cash transfers (UCTs) and conditional cash transfers (CCTs). CCTs formally require adherence to certain actions, such as school enrollment or vaccination. The strictness of conditions varies widely, and conditions are sometimes left unmonitored due to high administrative costs (Davis et al., 2016). UCTs have no requirements, although they are often targeted at a vulnerable subset of the population, commonly defined by a combination of regional statistics, means tests, and selection by prominent members of the community. We consider noncontributory social pensions and enterprise grants to be UCTs. CTs are typically paid out as lump sums or streams (monthly installments). Some stream or multi-installment CTs have graduation mechanisms whereby individuals stop receiving transfers once they meet certain conditions (Villa & Nino-Zarazua, 2019). All included CTs must be “unbundled”, i.e. implemented and tested independently of other services such as asset transfers, training, or therapy. Concerning our second criterion, we note that SWB measures tend to assess overall wellbeing (Diener, 2009; Diener et al., 2018), sometimes including separate measures of positive and negative mental states (Busseri & Sadava, 2011). By contrast, affective MH questionnaires tend (1) to measure only the negative components of SWB, i.e., how badly someone is doing, and (2) to also capture information on an individual's behaviors and habits (in addition to their thoughts and feelings).
In our analyses, we include measures of valenced mental states, but no measures of behavior or habits. See the “Measures” column of Table A3 in the appendix for a list of all included measures.

2.2 Data

We searched for studies using academic search engines and databases. These included: EBSCO (MEDLINE, PsycINFO, PubMed, Business Source Complete, EconLit, Social Sciences Full Text (H.W. Wilson), APA PsycARTICLES, Psychology and Behavioral Sciences Collection, Academic OneFile, Academic Search Premier, CINAHL, Open Dissertations), Web of Science, Science Direct, JSTOR, EconPapers, 3ie, IDEAS/RePEc, and Google Scholar. These efforts were complemented by forward and backward citation searches of eligible studies, by contacting authors, and through Google Scholar notifications. Our search string can be found in Appendix A. We stored all retrieved records in the reference management system Zotero. Double-blind screening of titles and abstracts was done by JM and CK using the software Rayyan. Any disagreements were discussed until consensus was reached. Studies that passed the double screening were reviewed in full text by JM. We extracted study details such as author name, CT program, number of participants, MH and SWB outcomes, and effect sizes. We also collected information on the size of the cash transfer, the time between the start of the intervention and follow-up, and whether the CT was conditional or unconditional, paid out as a stream or lump sum, and directed towards adolescents, prime-age adults, or elders. All data were extracted by one author (JM), and the full extraction results were checked for accuracy by CK and ABM.

2.3 Quality

To assess the quality of included research, we evaluated the following domains: causal identification strategy, pre-registration, balance between treatment and control groups, attrition, sample size, contamination, treatment compliance, and whether intention-to-treat (as opposed to complete case) analyses were performed.
2.4 Statistical Methods

We used the statistical programming language R for data analysis. Since most RCTs and quasi-experimental designs are based on mean differences,5 we standardized these using Cohen’s d. In nearly all cases, we calculated Cohen’s d from the independent t-statistic of a test of the mean difference, using d = t·√(1/n_t + 1/n_c), where n_t is the treatment sample size and n_c is the control sample size (Goulet-Pelletier & Cousineau, 2018). If the effect size of a study was expressed via odds ratios (n = 2), we converted from odds ratios to Cohen’s d using d = ln(OR)·√3/π.6 If a study contained multiple outcome measures, we coded each as MH or SWB. To achieve a single effect size for each study-follow-up combination, we combined outcomes using the method of Borenstein et al. (2009), specifying a correlation of 0.7 for within-construct aggregations, 0.5 for between-construct aggregations, and 0.6 for aggregations both within and between constructs. Specifying different correlations changes only the aggregate standard error, not the mean of effect sizes. We used random effects (RE) models for our meta-analysis, which assume that the true effects of the included studies are drawn from a distribution of true effects (Borenstein et al., 2010). Each study in our model was weighted by the inverse of the standard error of the study’s estimated effect size. Since there are sometimes multiple follow-ups in a study and multiple studies in a sample or program, we clustered standard errors at the level of the study and program. We assessed evidence of publication bias and p-hacking using a funnel plot, the Egger regression test (Borenstein et al., 2011), and a “p-curve” (Simonsohn et al., 2014). We conducted meta-regressions to test whether certain study characteristics moderated estimated effect sizes. We focused on three potential moderating variables: years since the CT began, size of the CT, and whether the CT had conditionality requirements.
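The two effect-size conversions above can be restated in a few lines of code. The paper's analysis was carried out in R; the following Python sketch, with hypothetical input values, simply implements the stated formulas:

```python
import math

def d_from_t(t: float, n_t: int, n_c: int) -> float:
    """Cohen's d from an independent-samples t statistic:
    d = t * sqrt(1/n_t + 1/n_c), where n_t and n_c are the
    treatment and control sample sizes."""
    return t * math.sqrt(1.0 / n_t + 1.0 / n_c)

def d_from_odds_ratio(odds_ratio: float) -> float:
    """Cohen's d from an odds ratio: d = ln(OR) * sqrt(3) / pi
    (the standard logistic-distribution conversion)."""
    return math.log(odds_ratio) * math.sqrt(3.0) / math.pi

# Hypothetical example: t = 2.5 with 200 participants per arm.
d = d_from_t(2.5, 200, 200)  # ≈ 0.25
```

The first conversion follows Goulet-Pelletier & Cousineau (2018); the second is the usual logit-based odds-ratio-to-d transformation.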
Concerning size of CT, we considered both the absolute and the relative CT size. We operationalized absolute size as the average monthly value of a CT in purchasing power parity (PPP) adjusted US 2010 dollars, with lump sum CTs (comprising about 25% of our sample) divided by 24 months, which is the mean follow-up time.7 For relative size, we used monthly CT value as a proportion of previous household monthly income. This was either directly reported or easily derived in many studies (21 out of 37 studies). If a study did not report sample information on income, we used consumption (10 studies) or expenditure (3 studies) information as a proxy. To convert between individual income and household income (8 studies), we assumed that household income = individual income × √(household size) (see Chanfreau & Burchardt, 2008). If there was insufficient information to impute average household income (4 studies), we used regional statistics. Finally, as a robustness test, we also computed yearly CT value as a proportion of annual gross domestic product per capita (GDPpc).

3 Results

3.1 Description of Studies and Quality

We retrieved 1,870 records from implementing our search string. After removing duplicates, we were left with 1,147 records. After an initial round of double screening of titles and abstracts by JM and CK, 143 records met the eligibility requirements (see Figure 1 for a diagram of selection flow). After JM performed the final round of screening, there were 32 unique studies drawn from the initial search and five from Google Scholar alerts and citation searches. We thus included a total of 37 studies8 reporting on 100 outcomes. Table A3 in the appendix summarizes the key characteristics of the included studies. Of the outcomes, 46 measured depression or general psychological distress, 21 measured happiness or positive feelings, 18 measured life satisfaction, and two measured anxiety. The remaining 13 were summary indices of MH, SWB, or both.
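The CT-size conversions described in the Methods above (spreading lump sums over the 24-month mean follow-up, and the square-root equivalence scale for household income) can be sketched as follows; the function names and any example values are ours, not the paper's:

```python
import math

def monthly_value(transfer_ppp: float, lump_sum: bool, months: float = 24.0) -> float:
    """Average monthly CT value in PPP-adjusted US 2010 dollars.
    Lump-sum transfers are spread over the mean follow-up time (24 months)."""
    return transfer_ppp / months if lump_sum else transfer_ppp

def household_income(individual_income: float, household_size: float) -> float:
    """Square-root equivalence scale assumed in the text:
    household income = individual income * sqrt(household size)."""
    return individual_income * math.sqrt(household_size)

def relative_size(monthly_ct: float, monthly_household_income: float) -> float:
    """Monthly CT value as a proportion of previous household monthly income."""
    return monthly_ct / monthly_household_income
```

For example, a hypothetical $1,000 PPP lump sum corresponds to roughly $41.67 per month under this convention, and an individual income of $50 in a four-person household implies a household income of $100.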
Most of the studies were conducted in Africa (23), followed by Latin America (10) and Asia (4). The most commonly investigated CT type was the UCT (26; 19 plain, 6 pensions and 1 enterprise grant), followed by CCTs (10) and one study that contained both a CCT and a UCT (Baird et al., 2013a). Country context was relatively evenly divided into low, lower-middle, and upper-middle income countries (see Figure A2 in the appendix). Over half of the included studies used random assignment (22), while the rest were quasi-experimental (15).9 The average time from the start of the CT to follow-up was two years. The average monthly payment was $38 PPP. A quarter of the studies were implemented as predominantly lump sum payments (10); all other studies (27) were paid out on a monthly basis. In Table 1, we list the results of our quality assessments. While blinding of participants is impossible for CTs, blinding of personnel and outcome assessment was mentioned (but not performed) in only one study (McIntosh & Zeitlin, 2020). Overall, few studies (9/37) referred to pre-registered protocols. Adherence to pre-specified statistical procedures and outcomes was generally unclear, making it impossible to assess whether outcomes were ‘cherry-picked’ post treatment. Moreover, about half of the included studies (17/37) did not assess treatment compliance. Therefore, aspects relating to implementation (e.g. intervention fidelity and adaptation) could not be assessed (Moore et al., 2015). Furthermore, contamination of control groups by the CT was rarely discussed or addressed. Only 13 out of 37 studies were geographically-clustered RCTs (cRCTs), which are more robust to possible contamination effects. Of the 15 quasi-experimental studies, one used a natural experiment (Powell-Jackson et al., 2016), two used instrumental variables (Ohrnberger et al., 2020a; Chen et al., 2019), and four used a regression discontinuity approach (based on a means test).
The eight remaining studies used a propensity score matching approach; of those, six also employed a difference-in-differences estimator. Despite the aforementioned concerns, we assess the synthesized evidence to be fairly reliable. Importantly, most studies clearly explained their causal identification strategy, were well balanced, performed intention-to-treat analyses, and controlled for differential attrition when present. Sample sizes were generally large compared to common sample sizes in clinical or psychological studies (n<500; Billingham et al., 2013; Kühberger et al., 2014; Sassenberg & Ditrich, 2019).

3.2 Baseline results

For our baseline results, we aggregated effect sizes across studies using a random effects model. Throughout our analyses, we omitted measures of stress, optimism, and hope, and one outcome reported by Galama et al. (2017), which was a clear outlier.10 The average overall effect size, as indicated by a black diamond at the bottom of Figure 2, is 0.10 SDs in the composite of SWB & MH measures (95% CI: 0.08, 0.12; given by the width of the diamond). The overall effect size does not change substantially when accounting for dependency between multiple follow-ups and multiple studies in a program in a multilevel model (ES: 0.095, 95% CI: 0.071, 0.118), or if we combine all the outcomes without first averaging at the study-follow-up level (ES: 0.091, 95% CI: 0.066, 0.116).

[Figure 2. Forest plot of the 37 included studies, with effect sizes in Cohen's d. Subjective wellbeing (SWB) and mental health (MH) outcomes in each study are aggregated with equal weight. "Mo. after start" is the average number of months since the cash transfer began. "$PPP Monthly" is the average monthly value of a CT in purchasing power parity adjusted US 2010 dollars. Lump sum cash transfers were converted to monthly value by dividing by 24 months, the mean follow-up time.]
Heterogeneity, as calculated by the I² index, is substantial: 63.7% of the total variation in outcomes is due to variation between studies rather than sampling error.11 To account for this substantial heterogeneity, we calculate a 95% prediction interval.12 The estimated 95% prediction interval, given by the dashed line bisecting the black diamond in Figure 2, suggests that 95% of similar future studies would be expected to find effects between 0.001 and 0.201 SDs on our composite of MH and SWB. Figure 3 assesses the risk of publication bias and “p-hacking” (researchers testing a high number of outcomes and cherry-picking the coefficients that fall below a threshold p-value). In Figure 3a, we show a funnel plot, with standard error plotted against effect size and the mean effect shown as a black vertical line.13 If there were significantly more studies to the right than to the left of the mean effect size, this would suggest that studies on the left may be missing, possibly indicating publication bias. This is known as asymmetry. Figure 3a shows little asymmetry, indicating that studies with more positive effects appear no more likely to be published. We use Egger’s regression test to check this quantitatively by regressing the standard error on the effect size. The test does not reject the null of funnel plot symmetry (p=0.549), supporting our reading of the plot. Figure 3b shows the percentage of results with different p-values. If “p-hacking” were an issue, we would expect the distribution of p-values to be left-skewed (an upward slope in the figure). The p-curve is downward sloping, which suggests no widespread p-hacking. However, it is possible that specifications with insignificant results were not reported at all; p-curves are unable to address such scenarios (Bishop & Thompson, 2016).
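The heterogeneity and prediction-interval quantities above can be illustrated with a short normal-approximation sketch (the paper's exact computation may differ, e.g. by using a t distribution with k − 2 degrees of freedom; the example inputs below are ours):

```python
import math

def i_squared(q: float, k: int) -> float:
    """I-squared: percentage of total variability attributable to
    between-study heterogeneity rather than sampling error, computed
    from Cochran's Q statistic and the number of studies k."""
    return max(0.0, (q - (k - 1)) / q) * 100.0

def prediction_interval(mu: float, se_mu: float, tau2: float) -> tuple:
    """Approximate 95% prediction interval for the true effect of a
    similar future study: mu +/- 1.96 * sqrt(tau^2 + se(mu)^2)."""
    half_width = 1.96 * math.sqrt(tau2 + se_mu ** 2)
    return (mu - half_width, mu + half_width)
```

Because the prediction interval adds the between-study variance tau² to the uncertainty in the pooled mean, it is much wider than the confidence interval; this is why a pooled effect of 0.10 SDs (95% CI: 0.08, 0.12) can come with a prediction interval spanning roughly 0.00 to 0.20 SDs.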
3.3 Meta Regression and Moderator Analysis

We focus on three types of variables that we expect to moderate the observed effects: (1) whether a CT had conditionality requirements; (2) the value of the CT (in absolute terms and relative to previous income); and (3) years since the transfer began, allowing us to assess whether effects dissipate over time. Throughout, we use multi-level models that account for multiple outcomes in a follow-up, multiple follow-ups in a study, and multiple studies in a sample or program. Standard errors are clustered at the study and program level.14 In every specification presented, the dependent variable is the study’s estimated effect on MH or SWB, standardized as Cohen’s d. In Figure 4, we present six plots that illustrate the bivariate moderating relationship of our variables of interest. Panel (a) shows the distribution and average effect size for UCTs and CCTs. Panels (b) through (f) are simple scatter plots, with effect size on the y-axis and time or size on the x-axis, meant to illustrate the raw correlation between two variables. In Table 2, we present our main results. All models include a measure of CT size and years since the CT began. Model 1 includes a dummy indicating whether the CT had conditionality requirements. Models 1, 2 and 3 estimate the effect of relative CT size. Models 4 and 5 estimate the effect of absolute CT size (using $PPP monthly value). Models 3 and 4 include an interaction term between payment mechanism and “years since CT began” to identify the effect of decay conditional on whether a CT was paid out as a lump sum or stream. In Model 1 we find that conditionality requirements reduce estimated effect sizes by almost 50%. Insofar as UCTs are less costly to administer than CCTs, this suggests that UCTs are likely to be more efficient in promoting recipients’ wellbeing. In Model 2 we omit the indicator of whether CTs were CCTs or UCTs.
Based on this specification, one can expect doubling a recipient’s consumption (by receiving a CT worth 100% of previous consumption) to lead to roughly a 0.10 SD increase in MH/SWB at the average follow-up time. Results in Models 1 and 3 are similar. See panels (e) and (f) of Figure 4 for the correlational relationship between the relative size of a CT and the magnitude of its effect. Models 4 and 5 show our results for absolute CT value, yielding a significant and positive coefficient in both specifications. These results indicate that a CT with a monthly value of $100 PPP leads to an approximately 0.07 to 0.08 SD increase in SWB and MH outcomes. See Figure 4, panel (c) for the bivariate relationship. Increases in income are typically assumed to yield diminishing gains in wellbeing. To test whether that is the case in our sample of studies, we log transformed our measures of relative and absolute CT size. We find a significant effect for log-relative value but no significant effect of log-absolute value (see Table A2 in the appendix).15 Taken together, Models 1, 2 and 4 provide evidence that the effect of CTs on wellbeing decays over time. Using the coefficient from Model 2, the effect is estimated to decline by 0.015 SDs each year. At that rate, the effect of a CT which doubles household income would take almost two decades to decay fully.16 However, the effect of “years since CT began” could differ depending on whether the recipient was given the CT as a lump sum or still receives monthly transfers. Our bivariate plot (Figure 4, panel (b)) suggests a difference in decay between the two payment mechanisms: lump sum CTs appear to decay over time, while stream CTs (which are nearly all ongoing at the time of the last follow-up) show a flat trend. In Models 3 and 4 we formally test for differences in decay between lump sum and stream CTs. The interaction term, “years since * CT is lump sum”, gives the difference in decay between the two.
Since stream CTs are ongoing, we expected lump sum CTs to exhibit a larger decay in effect size than streams. Surprisingly, this is not the case in Models 3 and 4, which display a positive, albeit insignificant, interaction term. Thus, although there is a significant overall decay in effect size (as indicated by Models 1, 2, and 5), we are unable to precisely estimate the effect over time for a specific payment type. Finally, we note that seven studies in our sample include multiple follow-ups. As shown in Figure A1 in the appendix, six of these show a decline in effect size across follow-ups. A paired t-test of whether the mean effect size differs between the first and second follow-up yields a p-value of 0.007, indicating that this decline is statistically significant. The relatively large and significant intercepts in Table 2 suggest that CTs could have an effect independent of the size of the cash transfer (i.e., an effect from being enrolled). An enrolment effect, however unintuitive, is not implausible: being awarded an amount of cash might boost someone’s sense of good fortune, which could explain the intercept. Another explanation is that the intercepts are an artifact of a concave relationship between CT size and effect; a linear model will generally overestimate the intercept on data that contain a true concave relationship. However, the insignificance of the log-transformed absolute CT value is evidence against a clear concave relationship (see appendix Table A2, Model 2). In addition to these analyses, we also tested whether RCT design, type of measure, or the study context moderated the effect size (see Table A1 in the appendix). Whether a study uses an RCT design does not affect the magnitude of the estimated effects of CTs. This suggests that studies which rely on natural experiments or other causal identification strategies are reasonably robust.
However, we do find that, compared to pure MH measures, the effects of CTs on measures of SWB are significantly larger. Moreover, the largest effect sizes occur in studies that used a compound index of both MH and SWB.17 Notably, CTs conducted in Latin America have a near-zero estimated effect. This appears to be primarily driven by the fact that many CTs in Latin America have conditionality requirements: when including both a dummy for conditionality and a dummy for the CT being conducted in Latin America, we find that the coefficient on Latin America is roughly halved and significant at the 10% level only. As discussed in section 2, we ran alternative specifications of our size variables (see appendix Table A2). In particular, we checked whether using CT value relative to GDP per capita changes our results. Although the coefficient is somewhat larger compared to the results presented in Table 2 (with p<0.05), our conclusions remain unaffected. Finally, in appendix D we consider how results of this type could be used in policy analyses to study cost-effectiveness. Specifically, we calculate how many “wellbeing-adjusted life years” (see De Neve et al., 2020; Frijters et al., 2020) a given type of cash transfer could buy for a given transfer size. We find that a $1,000 lump-sum payment may be expected to buy roughly 0.330 wellbeing-adjusted life years.

3.4 Spillovers

Four RCTs (two with multiple follow-ups) in our sample enabled assessment of spillover effects on non-recipients of CTs by including two control groups in a geographically-clustered RCT design: a spillover control made up of non-recipients living near recipients, and a “pure” control comprising non-recipients living spatially separate from the treatment locations.18 This design allowed comparison of wellbeing between (a) non-recipients who are “treated” with a spillover effect by living near recipients and (b) non-recipients living further away (who form the “pure” control).
To ascertain the average effect of spillovers, we performed a meta-analysis of the observed effects, using a multilevel random effects model, inverse-weighted by study standard error, with errors clustered at the level of the sample. Our results are illustrated in Figure 5. The average effect of CTs on non-recipients’ MH and SWB (represented by the diamond) is close to zero (-0.01, 95% CI: -0.06, 0.03) and not significant at the 95% level, suggesting no significant spillover effects on average.

[Figure 5. A forest plot of the studies in our sample that include MH and SWB spillovers. A random effects multilevel model (with levels for study and sample) with robust standard errors (clustered at the level of the program) shows an effect of -0.01. The 95% confidence interval overlaps with zero. All of the CTs except Baird et al. (2013a) were implemented by GiveDirectly, an NGO.]

4 Discussion

Our results represent a systematic synthesis and meta-analysis of all the available causal evidence of the impact of CTs on mental health and subjective wellbeing in low- and middle-income contexts. In sum, we find that CTs, on average, have a positive effect on MH and SWB indicators among recipients. More precisely, we find an average impact of about 0.10 SDs. Additionally, we observe that the effects of CTs appear to dissipate only slowly over time. The estimated effects were substantially larger for unconditional CTs. Our results were consistent across a battery of robustness tests, and the observed effects did not vary by study design (RCT vs. quasi-experimental). Notably, our results indicate that CTs are less efficacious in Latin America, which may be explained by the prevalence of CCTs (as opposed to UCTs) in that region. We find no significant evidence of negative spillover effects on non-recipients. However, spillover effects were rarely reported (n=4).
We therefore encourage more research on this aspect going forward.19

4.1 Limitations

As in most meta-analyses, using study averages for moderator variables means that we do not capture within-study variation, which limits the precision of our estimates. Some of our insignificant results may therefore be due to low power. This could be remedied with access to data at the level of the individual. Some of the studies we include have open access data policies (Haushofer et al., 2016; Paxson & Schady, 2010; Ohrnberger et al., 2020a). An individual-level analysis may therefore be possible but was outside the scope of this paper. Another limitation arises from the paucity of longitudinal follow-ups. Only one study in our sample followed up more than five years after the cash transfer began (Blattman et al., 2020). This limits what we can say about the long-run effects of CTs on SWB and MH. There is also only one study that discusses the effects of CTs on the SWB and MH of individuals who share a household with recipients.20 Unfortunately, our evidence was otherwise limited to spillovers on non-recipients in the geographic proximity of recipients. A further limitation of this meta-analysis is that it does not offer evidence on the mechanisms by which CTs improve SWB and MH. One possible mechanism worth investigating is whether the effect on SWB or MH stems from increased consumption relative to one’s peers or relative to one’s own previous level of consumption. Indeed, there is a rich set of possible mediators and moderators, and we have only analyzed a small subset of them. Finally, we know of no other systematic review and meta-analysis which estimates the total effect of an intervention on SWB and MH. This limits our capacity to compare the cost-effectiveness of CTs to that of other poverty alleviation or health interventions.
4.2 Implications and suggestions for future research

Although there is some preliminary evidence that CTs are cost-effective interventions in LMICs compared to a USAID workforce readiness program (McIntosh & Zeitlin, 2020) and psychotherapy (Haushofer, Shapiro & Mudida, 2020), work comparing the cost-effectiveness of interventions in terms of SWB and MH is scarce, especially in LMICs. Our meta-analysis contributes to this literature by providing a comprehensive empirical foundation for comparing the cost-effectiveness of cash transfers to that of interventions aimed at improving MH or SWB. Although limited, the practical implications of our meta-analysis are clear: direct cash transfers improve the wellbeing of poor recipients in LMICs. Several research questions remain to be pursued in future work on subjective wellbeing and mental health. What are the long-run (5+ years) effects of CTs? What are the effects on a recipient’s household and community? Relevant spillover data should be collected in RCTs or evaluated in quasi-experiments. The costs of CTs and other poverty alleviation interventions should also be published. For instance, since a UCT requires less administration (as there are no conditions to monitor), it seems likely that UCTs are cheaper and, based on our results, more effective than CCTs; however, there appears to be no available cost evidence to settle this question. More broadly, we recommend a greater inclusion of SWB and MH data in intervention evidence collection efforts such as AidGrade.21

5 Conclusion

Cash transfers have a small22 (d<0.2) but significant and lasting effect on wellbeing, with only mild adaptation effects. Although modest in size, if SWB and MH measure wellbeing more directly than other indicators, these reported improvements are an indicator of genuine success. How important CTs are as a means of improving wellbeing depends on their cost-effectiveness relative to the alternatives.
Even if effect sizes are small, CTs may nevertheless be among the most efficient ways of improving lives. There is no evidence that CTs have, on average, significant negative spillover effects within the communities where they are implemented. However, the evidence on this is scarce, meriting further research on the topic.