Main

Genotype imputation is the term used to describe the process of predicting or imputing genotypes that are not directly assayed in a sample of individuals. There are several distinct scenarios in which genotype imputation is desirable, but the term now most often refers to the situation in which a reference panel of haplotypes at a dense set of SNPs is used to impute genotypes into a study sample of individuals that have been genotyped at a subset of the SNPs. An overview of this process is given in Box 1. Genotype imputation can be carried out across the whole genome as part of a genome-wide association (GWA) study or in a more focused region as part of a fine-mapping study. The goal is to predict the genotypes at the SNPs that are not directly genotyped in the study sample. These 'in silico' genotypes can then be used to boost the number of SNPs that can be tested for association. This increases the power of the study, improves the ability to resolve or fine-map the causal variant and facilitates meta-analysis. Box 2 discusses these uses of imputation as well as the imputation of untyped variation, human leukocyte antigen (HLA) alleles, copy number variants (CNVs), insertion–deletions (indels), sporadic missing data and correction of genotype errors.

The HapMap 2 haplotypes1 have been widely used to carry out imputation in studies of samples that have ancestry close to those of the HapMap panels. The CEU (Utah residents with northern and western European ancestry from the CEPH collection), YRI (Yoruba from Ibadan, Nigeria) and JPT + CHB (Japanese from Tokyo, Japan and Chinese from Beijing, China) panels consist of 120, 120 and 180 haplotypes, respectively, at a very dense set of SNPs across the genome. Most studies have used a two-stage procedure that starts by imputing the missing genotypes based on the reference panel without taking the phenotype into account. Imputed genotypes at each SNP together with their inherent uncertainty are then tested for association with the phenotype of interest in a second stage. The advantage of the two-stage approach is that different phenotypes can be tested for association without the need to redo the imputation.

This Review provides an overview of the different methods that have been proposed for genotype imputation, discusses and illustrates the factors that affect imputation accuracy, describes the use of quality-control measures on imputed data and outlines methods that can be employed to test for association using imputed genotypes.

Genotype imputation methods

We assume that we have data at L diallelic autosomal SNPs and that the two alleles at each SNP have been coded 0 and 1. Let H denote a set of N haplotypes at these L SNPs and let G denote the set of genotype data at the L SNPs in K individuals, with Gi = {Gi1,..., GiL} denoting the genotypes of the ith individual. The individual genotypes are either observed, so that Gik ∈ {0, 1, 2}, or they are missing, so that Gik = missing. The main focus here is on predicting the genotypes at those SNPs that have not been genotyped in the study sample at all, but there are usually sporadic missing genotypes as well. We assume that strand alignment between data sets has been carried out (Supplementary information S1 (box)).

IMPUTE v1. IMPUTE v1 (Ref. 2) is based on an extension of the hidden Markov models (HMMs) originally developed as part of importance sampling schemes for simulating coalescent trees3,4 and for modelling linkage disequilibrium (LD) and estimating recombination rates5. The method is based on an HMM of each individual's vector of genotypes, Gi, conditional on H, and a set of parameters. This model can be written as

P(Gi | H, ρ, θ) = ΣZ P(Gi | Z, θ) P(Z | H, ρ)    (1)

in which Z = {Z1,..., ZL}, with Zj = {Zj1, Zj2} and Zjk ∈ {1,..., N}. The Zj can be thought of as the pair of haplotypes from the reference panel at SNP j that are being copied to form the genotype vector. The term P(Z|H,ρ) models how the pair of copied haplotypes changes along the sequence and is defined by a Markov chain in which switching between states depends on an estimate of the fine-scale recombination map (ρ) across the genome. The term P(Gi|Z,θ) allows each observed genotype vector to differ through mutation from the genotypes determined by the pair of copied haplotypes and is controlled by the mutation parameter θ. Estimates of the fine-scale recombination map (ρ) are provided on the IMPUTE v1 webpage and θ is fixed internally by the program. The effective population size parameter (Ne) must be specified by the user, but estimates of this parameter are available for a wide range of human populations and our experience is that performance is robust to variation in this parameter. More details of these terms and parameters are given in Refs 2, 5.

Exact marginal probability distributions for the missing genotypes that are conditional on the observed genotype data in the vector Gi are obtained using the forward–backward algorithm for HMMs6. Using a simple modification to the algorithm it is also possible to obtain a marginal distribution for genotypes that are not missing. This provides a useful method of validating observed genotypes and allows quality assessment of imputation runs. IMPUTE v1 can also carry out imputation on the X chromosome and this is described in Box 3.

IMPUTE v2. IMPUTE v2 (Ref. 7) uses a related but more flexible approach than IMPUTE v1. SNPs are first divided into two sets: a set T that is typed in both the study sample and reference panel, and a set U that is untyped in the study sample but typed in the reference panel. The algorithm involves estimating haplotypes at SNPs in T (using the IMPUTE v1 HMM) and then imputing alleles at SNPs in U conditional on the current estimated haplotypes. As the imputation step is haploid imputation, it is very fast (O(N)) compared with the diploid imputation (O(N²)) carried out in IMPUTE v1. Phase uncertainty is accounted for by iterating these steps using a Markov chain Monte Carlo (MCMC) approach. As imputation performance is driven by accurate matching of haplotypes, the method focuses on accurate haplotype estimation at the SNPs in T using as many individuals as possible.
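To make the copying-model idea concrete, the following is a minimal sketch of the haploid imputation step (our own illustration, not code from IMPUTE): a target haplotype typed at a subset of SNPs is modelled as an imperfect mosaic of the N reference haplotypes, and the forward–backward algorithm yields marginal allele probabilities at the untyped SNPs. The switch and error parameters (rho, theta) are arbitrary illustrative values; the diploid model used by IMPUTE v1 is analogous but tracks pairs of copied haplotypes, which is the source of the O(N²) cost.

```python
# Minimal haploid copying-model imputation (Li & Stephens style) via forward-backward.
# Assumptions: 'ref' is an (N, L) 0/1 matrix of reference haplotypes; 'target' holds
# the target haplotype's alleles at typed SNPs (values at untyped SNPs are ignored);
# rho and theta are illustrative switch and copying-error probabilities.
import numpy as np

def haploid_impute(ref, target, typed, rho=0.05, theta=0.01):
    """Return P(allele = 1) for the target haplotype at every SNP."""
    N, L = ref.shape

    def emission(j):
        # P(observed allele at SNP j | copied haplotype); uninformative if untyped.
        if not typed[j]:
            return np.ones(N)
        return np.where(ref[:, j] == target[j], 1.0 - theta, theta)

    fwd = np.zeros((L, N))                      # forward pass
    f = emission(0) / N
    fwd[0] = f / f.sum()
    for j in range(1, L):
        # Keep copying the same haplotype with prob (1 - rho), else switch uniformly.
        f = ((1.0 - rho) * fwd[j - 1] + rho / N) * emission(j)
        fwd[j] = f / f.sum()

    bwd = np.zeros((L, N))                      # backward pass
    bwd[L - 1] = 1.0
    for j in range(L - 2, -1, -1):
        b = bwd[j + 1] * emission(j + 1)
        bwd[j] = (1.0 - rho) * b + rho * b.mean()
        bwd[j] /= bwd[j].sum()

    post = fwd * bwd                            # P(copied haplotype | data) per SNP
    post /= post.sum(axis=1, keepdims=True)
    return (post * ref.T).sum(axis=1)           # P(allele = 1) at each SNP

# Toy example: SNP 2 is untyped; the target matches the first two reference haplotypes.
H = np.array([[0, 1, 1, 0, 1],
              [0, 1, 1, 0, 1],
              [1, 0, 0, 1, 0],
              [1, 0, 0, 1, 0]])
target = np.array([0, 1, -1, 0, 1])
typed = np.array([True, True, False, True, True])
print(haploid_impute(H, target, typed))         # P(allele 1) is high at the untyped SNP
```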

Alternating between phasing and haploid imputation at a carefully chosen subset of SNPs is particularly suited to study designs in which different amounts of genotype data are available in different cohorts of a study. For example, IMPUTE v2 can use both the set of haplotypes from the pilot data of the 1000 Genomes Project (see Further information for a link) and haplotype sets from the HapMap3 data set as reference panels for imputation. Compared with imputation from HapMap2, this provides a much larger set of imputed SNPs and a notable boost in accuracy at those SNPs included in the HapMap3 SNP set. Other methods can be made to handle this imputation scenario but IMPUTE v2 has been shown to be the most accurate approach7 and the program makes it straightforward to apply.

When phenotype is strongly correlated with genotyping platform, Howie et al.7 found that imputing untyped SNPs in cases from SNPs that are present in a dense set of genotype data from controls did not lead to increased false-positive rates. However, if cases and controls are typed on different chips, then imputing SNPs that are untyped in both cases and controls from a haplotype panel can lead to false-positive associations. SNPs that are imputed accurately from one chip but poorly from another chip may show differences in allele frequency between cases and controls that just reflect allele frequency differences between the haplotype reference panel and the study population. This is similar to the way that population structure can cause problems in GWA studies. Ideally, situations like this are best avoided by sensible study design. If this is not possible, we recommend quality-control measures to ensure that only the most accurately imputed SNPs are used.

fastPHASE and BIMBAM. The fastPHASE8 method can be used to estimate haplotypes and carry out imputation and has recently been incorporated into an association-testing program called BIMBAM9,10. The method uses the observation that haplotypes tend to cluster into groups of closely related or similar haplotypes. The model specifies a set of K unobserved states or clusters that are meant to represent common haplotypes. The kth cluster is assigned a weight (akl) that denotes the fraction of haplotypes it contains at site l, with

Σk akl = 1
Each cluster also has an associated frequency (θkl) of allele 1 at each site. Each individual's genotype data is then modelled as an HMM on this state space with transitions between states controlled by a further set of parameters (r) at each SNP,

P(Gi | a, θ, r) = ΣZ P(Gi | Z, θ) P(Z | a, r)
This equation is similar to equation 1 above with P(Gi|Z, θ) modelling how likely the observed genotypes are given the underlying states and P(Z|a, r) modelling patterns of switching between states, but the states represent clusters rather than reference haplotypes. An analogous model can be used for a set of observed haplotypes, so that a likelihood can be written as

L(G, H | a, θ, r) = Πi P(Gi | a, θ, r) Πj P(Hj | a, θ, r)
An expectation-maximization algorithm (EM algorithm) is used to fit the model and missing genotypes are imputed conditional on the parameter estimates using the forward–backward algorithm. The authors found that averaging over a set of estimates produced much better results than choosing a single best estimate. Empirical experiments10 suggest that using K = 20 clusters and E = 10 start points for the EM algorithm represents a practical compromise between speed and accuracy. The model underlying the GEDI11 approach is very similar to that of fastPHASE.

When imputing untyped SNPs from a reference panel, it was discovered (B.H., unpublished observations) that maximizing the full likelihood L(G, H|a, θ, r) resulted in relatively high error rates compared with other methods. Subsequently, it was shown that fixing parameter estimates based only on the likelihood for the set of haplotypes produces lower error rates11. This is a similar strategy to that used by IMPUTE v1, in which each cohort individual is independently imputed conditional only on the panel data. IMPUTE v1 avoids the need to estimate any parameters because it uses the real haplotypes themselves as the underlying states. By contrast, fastPHASE uses a much smaller set of states, which speeds up the required HMM calculations, but the need to estimate the many parameters of this method can counteract this effect.

MACH. MACH uses an HMM very similar to that used by HOTSPOTTER5 and IMPUTE. The method can carry out phasing and as a consequence it can be used for imputation. The method works by successively updating the phase of each individual's genotype data conditional on the current haplotype estimates of all the other samples. The model used can be written as

P(Gi | D−i, η, θ) = ΣZ P(Gi | Z, η) P(Z | D−i, θ)
in which D−i is the set of estimated haplotypes of all individuals except i, Z denotes the hidden states of the HMM, η is an 'error' parameter that controls how similar Gi is to the copied haplotypes and θ is a 'crossover' parameter that controls transitions between the hidden states. The parameters η and θ are also updated during each iteration based on counts of the number and location of the change points in the hidden states Z and counts of the concordance between the observed genotypes and those implied by the sampled hidden states.

Imputation of unobserved genotypes using a reference panel of haplotypes, H, is naturally accommodated in this method by adding H to the set of estimated haplotypes D−i. The marginal distribution of the unobserved genotypes can then be estimated from the haplotypes sampled at each iteration. An alternative two-step approach is also recommended that estimates η and θ using a subset of individuals and then carries out maximum-likelihood genotype imputation based on the estimated parameters. By contrast, IMPUTE v1 uses fixed estimates of its mutation rates and recombination maps. Estimating the parameters allows more flexibility to adapt to the data set being analysed. However, it is likely that some parameters will not be estimated well and this will reduce imputation accuracy.

BEAGLE. The BEAGLE method12,13,14 is based on a graphical model of a set of haplotypes. The method works iteratively by fitting the model to the current set of estimated haplotypes and then resampling new estimated haplotypes for each individual based on the fitted model. The probabilities of missing genotypes are calculated from the model that is fitted at the final iteration. The model is empirical in the sense that it has no parameters that need to be estimated and is applied to a given set of haplotypes in two steps. In the first step, a bifurcating tree that describes the set of haplotypes is constructed from left to right across the markers. Once completed, each edge of the tree is weighted by the number of haplotypes that pass along it. In the second step, the tree is pruned to produce a more parsimonious characterization of the data set. At each level of the tree, pairs of nodes are compared in terms of their downstream haplotype frequencies by summing the squared differences of their downstream partial haplotype frequencies; if this number exceeds a threshold, then the nodes are not similar enough to combine (see the sketch below). The current recommended threshold was determined empirically from simulated data12. Possibly the best way to understand the model is by looking at the small example given in Figure 2 and Table 1 of Ref. 12. Ref. 15 provides a useful review that contrasts various methods for phasing and imputation.
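As a rough illustration of this merging rule (an assumed form for illustration only, not BEAGLE's actual implementation), the sketch below compares two nodes at the same level by summing the squared differences of their downstream partial-haplotype frequencies and combines them only if the sum falls below a threshold.

```python
# Illustrative node-merging test in the spirit of the rule described above.
# Assumption: each node is summarized by the multiset of downstream partial
# haplotypes (tuples of alleles) carried by the reference haplotypes that reach it.
from collections import Counter

def similar_enough(haps_a, haps_b, threshold=0.1):
    """Return True if two nodes are similar enough to combine."""
    freq_a, freq_b = Counter(haps_a), Counter(haps_b)
    n_a, n_b = len(haps_a), len(haps_b)
    score = sum((freq_a[h] / n_a - freq_b[h] / n_b) ** 2
                for h in set(freq_a) | set(freq_b))
    return score <= threshold

# Two nodes with similar (but not identical) downstream haplotype frequencies:
a = [(0, 1), (0, 1), (1, 0), (1, 1)]
b = [(0, 1), (0, 1), (1, 0), (1, 0)]
print(similar_enough(a, b, threshold=0.2))   # True: combine the nodes
print(similar_enough(a, b, threshold=0.05))  # False: keep them separate
```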

Table 1 Comparison of imputation methods

The BEAGLE model has the property that the graph will have few edges in regions of low LD and many edges in regions of high LD. In this way, the model has the attractive property that it can adapt to the local haplotype diversity that occurs in the data. In some sense it can be thought of as a local haplotype-clustering model, similar to fastPHASE, but with a variable number of clusters across a region.

SNP tagging-based approaches. Some methods (PLINK16, SNPMSTAT17, UNPHASED and TUNA18) carry out imputation using methods based on tag SNP approaches19,20,21. For each SNP to be imputed, the reference data set is used to search for a small set of flanking SNPs that, when phased together with the SNP, leads to a haplotype background that has high LD with the alleles at the SNP. The genotype data from the study and the reference panel are then jointly phased at these SNPs and the missing genotypes in the study are imputed as part of the phasing. The advantage of this approach is that it is simple and quick. The downside is that these approaches generally do not provide results as accurate as those of other methods, because they do not use all of the data and the phasing is carried out through a simple multinomial model of haplotype frequencies22.

Imputation in related samples. The UNPHASED program implements an unpublished method for genotype imputation in nuclear families. This approach has been used to impute sporadic missing SNP genotype data in a study of nuclear families and unrelated individuals with a mixture of HLA and SNP genotypes23. A more focused method in which genotypes in founders are imputed down to descendants has also been proposed24. As close relatives will share long stretches of haplotypes, the descendants need only be typed at a relatively sparse set of markers for this to work well. Kong et al.25 proposed a related approach in which surrogate parents are used instead of real parents. For each individual, surrogate parents are identified as those who share long stretches of sequence with at least one allele that is identical by state (IBS). Regions in which this occurs are assumed to be identical by descent (IBD) and this estimated relatedness is used to help phase the individuals accurately over long stretches. This approach only works when a sufficient proportion of the population (>1% as a rule of thumb) has been genotyped, but it may have useful applications when carrying out imputation if large, densely typed or sequenced cohorts become available. A related idea is used in IMPUTE v2 (Ref. 7) in which a 'surrogate family' of individuals is used when updating the phase of a given individual over reasonably long stretches of sequence (typically 5 Mb in practice).
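The surrogate-parent idea can be illustrated with a small toy sketch (our own illustration, not the method of Kong et al.25): for a pair of genotyped individuals, scan for long runs of SNPs at which the two genotypes share at least one allele identical by state; such runs are the candidate IBD stretches used for long-range phasing.

```python
# Toy scan for long IBS>=1 runs between two genotype vectors coded 0/1/2.
# Assumption: a site breaks a run only when one individual is homozygous for
# one allele and the other is homozygous for the other allele (no allele shared).
import numpy as np

def longest_ibs1_run(g_a, g_b):
    """Length of the longest run of consecutive SNPs sharing at least one allele."""
    compatible = ~(((g_a == 0) & (g_b == 2)) | ((g_a == 2) & (g_b == 0)))
    best = run = 0
    for ok in compatible:
        run = run + 1 if ok else 0
        best = max(best, run)
    return best

rng = np.random.default_rng(3)
a = rng.integers(0, 3, size=5000)
b = a.copy()
b[:2500] = rng.integers(0, 3, size=2500)   # only the second half is shared
print(longest_ibs1_run(a, b))              # long run over the shared segment
```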

Comparison between methods. Table 1 summarizes the properties of each of the most popular imputation methods divided into subsections that deal with properties of the reference panels the methods can handle, properties of the study samples, relevant program options and features, computational performance, error rates, and properties and ways of using the output files. Supplementary information S2 (table) is a fuller version of this table, which includes all the methods discussed above.

The sections of Table 1 on computational performance and error rate include an updated version of the comparison of the methods IMPUTE (v1 and v2), MACH, fastPHASE and BEAGLE carried out by Howie et al.7. IMPUTE v2 is the most accurate approach in all of the settings examined but all the methods produce broadly similar performances. The methods are also broadly comparable in terms of computational performance. Several authors7,14 have noted that the HMM models used by IMPUTE and MACH scale quadratically as the number of haplotypes in the panel increases, but the adaptive haplotype selection approach in IMPUTE v2 (Ref. 7) scales linearly with the number of haplotypes in the panel and overcomes this problem.

To examine how the methods might perform on a large reference panel of haplotypes, such as that being generated by the 1000 Genomes Project, we timed IMPUTE v2, fastPHASE and BEAGLE when imputing genotypes from a reference panel of 1,000 haplotypes into study samples of 500 and 1,000 individuals. We used HAPGEN26 to simulate these data sets based on some of the pilot CEU haplotypes from the 1000 Genomes Project in a 5 Mb region on chromosome 10. The haplotype reference panel contains 8,712 SNPs and the study samples have genotype data at 872 of these SNPs. The results in Table 1 show that IMPUTE v2 is at least twice as fast as both BEAGLE and fastPHASE on this data set.

Factors that affect imputation accuracy

Most imputation methods produce a probabilistic prediction of each imputed genotype of the form

pijk = P(Gij = k | G, H)

in which Gij ∈ {0, 1, 2} denotes the genotype of the ith individual at the jth SNP and k indexes the three possible genotypes.

To assess the quality of predictions and compare methods, genotypes can be masked and then predicted. The most likely predicted genotype above some threshold can be compared with the true genotype, and a plot of the percentage discordance versus the percentage of missing genotypes can be constructed for a range of thresholds to illustrate performance. This method was recently used to compare methods using 1,377 UK individuals genotyped on both the Affymetrix 500k SNP chip and the Illumina 550k chip. Genotypes on the Affymetrix chip were combined with the 120 CEU haplotypes to predict the 22,270 HapMap SNPs on chromosome 10 that were on the Illumina chip but not the Affymetrix chip. The error rate of the best-guess genotype for various methods was: BEAGLE (default), 6.33%; BEAGLE (50 iterations), 6.24%; fastPHASE (k = 20), 6.07%; fastPHASE (k = 30), 5.92%; IMPUTE v1, 5.42%; IMPUTE v2 (k = 40), 5.23%; IMPUTE v2 (k = 80), 5.16%; and MACH, 5.46%. These results are consistent with other comparisons27,28. For the best methods, an error rate of 2–3% can be achieved, but at the expense of 10% missing genotypes. Another option involves measuring the squared correlation between the best-guess genotype and the true genotype14, which can be averaged across SNPs to give a single measure. Another desirable property of imputation methods is that the predicted probabilities they produce should be well calibrated. Most methods in common use have been shown to produce well-calibrated probabilities2,8,14.
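The masking procedure described above is simple to implement. The sketch below (our own illustration with toy data, not code from any of the programs) computes the discordance versus missingness trade-off at a calling threshold, and the squared correlation between the best-guess and true genotypes.

```python
# Minimal accuracy summaries for masked genotypes. 'post' holds imputed
# probabilities P(G = 0, 1, 2) for each masked genotype; 'truth' holds the
# genotypes that were masked. Both are toy data here.
import numpy as np

def concordance_vs_missingness(post, truth, threshold):
    """Call the most likely genotype only where its probability exceeds the
    threshold; return (% discordance among calls, % genotypes left uncalled)."""
    best = post.argmax(axis=1)
    called = post.max(axis=1) >= threshold
    discord = 100.0 * (best[called] != truth[called]).mean() if called.any() else float("nan")
    return discord, 100.0 * (~called).mean()

def best_guess_r2(post, truth):
    """Squared correlation between the best-guess genotype and the true genotype."""
    best = post.argmax(axis=1)
    return np.corrcoef(best, truth)[0, 1] ** 2

rng = np.random.default_rng(1)
post = rng.dirichlet([3, 2, 1], size=1000)               # toy genotype posteriors
truth = np.array([rng.choice(3, p=p) for p in post])     # toy 'true' genotypes
for t in (0.0, 0.5, 0.9):
    print(t, concordance_vs_missingness(post, truth, t))
print("r2:", best_guess_r2(post, truth))
```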

The imputation accuracy results from Howie et al.7 are specific to a UK population using the CEU HapMap and the Affymetrix 500k chip. The study population, properties of the reference panel and genotyping chip will all influence performance, and performance may vary between rare and common alleles. To illustrate the way in which these factors affect imputation accuracy we took the CEU, YRI and JPT + CHB HapMap 2 haplotype panels and removed a single individual from each. We then used genotypes at SNPs on four chips (Affymetrix 500k, Affymetrix 6.0, Illumina Human660W and Illumina Human1M) to impute masked genotypes not on each chip in that individual, based on the remaining haplotypes in their panel of origin. We also assessed four other panels of haplotypes: a combined CEU + YRI + JPT + CHB panel of 414 haplotypes, which can be used to assess how a larger, more diverse set of haplotypes compares with a smaller, more homogeneous set; a CEU panel rephased without using trio information using fastPHASE8 (denoted CEU_FP), to assess the effects of trio phasing on imputation; and, to assess the effect of reference panel size, a subset of 60 haplotypes from the CEU panel (denoted CEU_60) and a subset of 120 haplotypes from the JPT + CHB panel (denoted JPT + CHB_120).

The results of this analysis are described in detail in Box 4. They show that: across all imputation panels and genotyping chips, imputation error rate increases as the minor allele frequency decreases, which is in line with previous observations1 that have shown that rare SNPs are more difficult to tag than common SNPs; using a reference panel phased using trio information boosts imputation performance, compared with using a reference panel phased without trio information; and the Illumina chips outperform the two Affymetrix chips in the CEU population, but in the YRI population, the performance of all chips decreases, more so for the Illumina chips. Therefore, the use of tagging methods for chip design can influence the imputation performance. The results also show that the error rate decreases as reference panel size increases7,14, and using a combination of CEU, YRI and JPT + CHB haplotype panels can boost the performance of imputation, especially at rare SNPs, compared with using a single haplotype panel.

It is also important to consider the performance of imputation in individuals from populations other than the three main HapMap panels. Huang et al.29 examined the 'portability' of the HapMap reference panels for imputation using genome-wide SNP data collected on samples from 29 worldwide populations. When a single HapMap panel was used as the basis for imputation, they found that European populations had the lowest imputation error rates, followed by populations from east Asia, central and south Asia, the Americas, Oceania, the Middle East and Africa. Within Africa, which has high levels of genetic diversity, imputation accuracy using the YRI panel varied substantially. These results indicate that differences in genetic diversity between the study population and the reference panel also influence imputation accuracy.

Huang et al.29 also found that imputation based on mixtures of at least two HapMap panels reduced imputation error rates in 25 of the populations. In 11 of the populations, the optimal choice was to combine all three HapMap populations together as a reference panel. Of these 11 groups, seven (Bedouin, Mozabite, Druze, Basque, Burusho, Daur and Yi) were from Eurasia with some degree of dissimilarity from the HapMap CEU and JPT + CHB panels. The remaining four groups (Melanesian, Papuan, Pima and Colombian) were from Oceania and the Americas. These results can guide the choice of HapMap panels to use, with the caveat that they are specific to the HumanHap550 chip. A related point concerns imputation in admixed individuals. Pasaniuc et al.30 have shown that imputation conditional on a local ancestry estimate can be more accurate than unconditional imputation, but the biggest gains in accuracy will occur in admixed individuals from genetically dissimilar populations.

More recently, sets of haplotypes from the HapMap3 project and from the pilot phase of the 1000 Genomes Project (1KGP) have been made available. HapMap3 has ten distinct sets of haplotypes and larger numbers of haplotypes in each set; for example, there are 330 CEU haplotypes. This allows more accurate imputation of rarer SNPs, but HapMap3 has a smaller set of SNPs than HapMap2. At the time of publication of this Review, there are 7.7 million SNPs after filtering in the CEU panel of the 1KGP pilot project. This large boost in the number of SNPs allows finer resolution of signals in associated regions55. When the 1KGP data are complete, it is likely that this will become the reference set of choice for imputation into GWA study data sets. The large increase in both the number of SNPs and samples will allow more accurate imputation of most SNPs, indels and other structural variants that occur at a frequency above 1%.

Post-imputation information measures. Once imputation has been carried out, it is useful to assess the quality of imputed genotypes at SNPs in the absence of any true set of genotypes to compare them to. If the imputation quality is low at a SNP, it may be wise to filter out such SNPs before association testing is performed31. Four metrics have been proposed in the literature to assess quality that are designed to lie in the range [0,1] (Supplementary information S3 (box)). A value of 1 indicates that there is no uncertainty in the imputed genotypes, whereas a value of 0 means that there is complete uncertainty about the genotypes. All of these measures can be interpreted in the following way: an information measure of a on a sample of N individuals indicates that the amount of data at the imputed SNP is approximately equivalent to a set of perfectly observed genotype data in a sample of size aN.

The MACH r̂2 measure is the ratio of the empirically observed variance of the allele dosage to the expected binomial variance at Hardy–Weinberg equilibrium. BEAGLE advocates using the R2 between the best-guess genotype and the allele dosage as an approximation to the R2 between the best-guess genotype and the true genotype14. The IMPUTE software calculates a measure of the relative statistical information about the SNP allele frequency from the imputed data. The SNPTEST program, which is primarily a package to carry out tests of association at SNPs, also calculates a similar relative information measure, but here the parameter of interest is the relevant association parameter of the model of association being fitted. When an additive model is fitted, this measure has a very strong correlation with the IMPUTE information measure (Supplementary information S4 (figure)).
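To make these measures more concrete, the sketch below computes versions of them from a matrix of imputed genotype probabilities for a single SNP. The formulas are our reading of the measures as described above and in Supplementary information S3, not code taken from MACH, BEAGLE or IMPUTE.

```python
# Illustrative post-imputation information measures for one SNP. 'post' is an
# (n, 3) matrix of imputed probabilities P(G_i = 0, 1, 2) across n individuals.
import numpy as np

def mach_rsq(post):
    dosage = post @ np.array([0.0, 1.0, 2.0])        # expected allele counts
    p = dosage.mean() / 2.0                           # estimated allele frequency
    expected_var = 2.0 * p * (1.0 - p)                # binomial variance under HWE
    return dosage.var() / expected_var if expected_var > 0 else np.nan

def beagle_allelic_r2(post):
    dosage = post @ np.array([0.0, 1.0, 2.0])
    best = post.argmax(axis=1).astype(float)          # best-guess genotypes
    if best.std() == 0 or dosage.std() == 0:
        return np.nan                                 # undefined for monomorphic calls
    return np.corrcoef(best, dosage)[0, 1] ** 2

def impute_info(post):
    e = post @ np.array([0.0, 1.0, 2.0])              # E[G_i]
    f = post @ np.array([0.0, 1.0, 4.0])              # E[G_i^2]
    theta = e.mean() / 2.0
    denom = 2.0 * len(e) * theta * (1.0 - theta)
    return 1.0 - (f - e ** 2).sum() / denom if denom > 0 else np.nan

rng = np.random.default_rng(0)
post = rng.dirichlet([6, 3, 1], size=2000)            # toy posterior probabilities
print(mach_rsq(post), beagle_allelic_r2(post), impute_info(post))
```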

Figure 1 shows the MACH, BEAGLE and IMPUTE information measures applied to a simulated imputed data set across a 7 Mb interval on chromosome 22 and shows that the measures are highly correlated, although the MACH measure often goes above 1 and the BEAGLE measure is undefined at almost 3% of SNPs (see also Supplementary information S4–S6 (figures)).

Figure 1: Post-imputation information measures.

The plot shows the IMPUTE, MACH and BEAGLE information measures applied to a simulated data set of 1,000 cases and 1,000 controls on chromosome 22, generated using HAPGEN26 and the CEU (Utah residents with northern and western European ancestry from the CEPH collection) HapMap2 haplotypes (release 22) in the interval 14–21 Mb. Only genotypes at SNPs on the Affymetrix 500k chip were simulated. IMPUTE was then used to impute all ungenotyped SNPs from the CEU HapMap2 haplotypes. Each of the three metrics is plotted against the base pair position for each imputed SNP. The blue dots in the BEAGLE plot indicate the position of all those SNPs for which the allelic R2 metric is undefined owing to the most likely genotype call resulting in a monomorphic SNP. Red lines are shown at 0 and 1.

Association testing using imputed data

The probabilistic nature of imputed SNPs means that testing for association at these SNPs requires some care. Using only imputed genotypes that have a posterior probability above some threshold (or using the best-guess genotype) is a reasonable method of comparing the accuracy across methods but it is not recommended when carrying out association tests at imputed SNPs. Removing genotypes in this way can lead to both false positives and loss of power.

Frequentist tests. To fully account for the uncertainty in imputed genotypes, well-established statistical theory for missing data problems can be used2 (Box 5). An observed data likelihood is used in which the contribution of each possible genotype is weighted by its imputation probability. A score test (implemented in SNPTEST) is the quickest way to use this likelihood to test for association, as it attempts to maximize the likelihood in one step by evaluating the first and second derivatives of the likelihood under the null hypothesis, and it works well when the log-likelihood is close to quadratic. Small sample size, low allele frequency and increasing genotype uncertainty from imputation all act to degrade this assumption and can lead to the test reporting a spuriously low p-value. In practice, thresholds on information metrics and allele frequencies have been used to filter out SNPs at which this happens, and they work well31,32,33. As such SNPs are those likely to have very low power to detect effects, it is unlikely that this has a negative effect on the study. SNPTEST v2 implements an iterative Newton–Raphson scheme and an EM algorithm to maximize the likelihood and improves performance at SNPs for which the score test does poorly. SNPTEST allows both quantitative and binary traits and can condition on user-specified covariates.
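In symbols (a restatement of the description above, with notation we introduce here: Φi denotes the phenotype of individual i and β the association parameters; the full treatment is given in Box 5 and Ref. 2), the observed-data likelihood at imputed SNP j is

```latex
% Observed-data likelihood at imputed SNP j: each possible genotype k is
% weighted by its imputation probability p_ijk = P(G_ij = k).
L(\beta) \;=\; \prod_{i=1}^{K} \sum_{k \in \{0,1,2\}} p_{ijk}\, P(\Phi_i \mid G_{ij} = k, \beta)
```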

A simpler approach involves using the expected allele count eij = pij1 + 2pij2 (also called the posterior mean or allele dosage). These expected counts can be used to test for association with a binary or quantitative phenotype, using a standard logistic or linear regression model, respectively. This method has been shown to provide a good approximation to methods that take the genotype uncertainty into account when the effect size of the risk allele is small10, which is the case for most of the common variants found in recent GWA studies. This approach is implemented in the programs MACH2DAT/MACH2QTL, SNPTEST, PLINK and the R package ProbABEL. The ProbABEL package also allows time-to-event phenotypes to be considered using Cox proportional hazards models.
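A minimal sketch of the expected-allele-count approach is given below. It uses the general-purpose statsmodels package rather than any of the programs named above, and the simulated data and effect size are arbitrary illustrative choices.

```python
# Dosage-based logistic regression test for a binary phenotype at one imputed SNP.
# 'post' is an (n, 3) matrix of imputed probabilities P(G = 0, 1, 2) per individual.
import numpy as np
import statsmodels.api as sm

def dosage_logistic_test(post, pheno):
    dosage = post @ np.array([0.0, 1.0, 2.0])      # e_ij = p_ij1 + 2 * p_ij2
    X = sm.add_constant(dosage)
    fit = sm.Logit(pheno, X).fit(disp=0)
    return fit.params[1], fit.pvalues[1]           # log odds ratio and Wald p-value

rng = np.random.default_rng(2)
post = rng.dirichlet([6, 3, 1], size=4000)                         # toy genotype posteriors
dosage = post @ np.array([0.0, 1.0, 2.0])
pheno = rng.binomial(1, 1.0 / (1.0 + np.exp(0.2 - 0.3 * dosage)))  # weak true effect
print(dosage_logistic_test(post, pheno))
```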

Bayesian approaches. Bayesian methods for analysing SNP associations have recently been proposed2,9,10,34,35 and have advantages over the use of p-values in power and interpretation. Within the Bayesian framework, focus centres on calculation of a Bayes factor (BF), which is the ratio of marginal likelihoods between a model of association (M1) and a null model of no association (M0),

BF = P(data | M1) / P(data | M0)

in which the marginal likelihoods are defined by

P(data | M) = ∫ P(data | θ, M) P(θ | M) dθ, for M = M0, M1,
and θ denotes the regression parameters. This can be approximated using a Laplace approximation and a straightforward modification of the likelihood maximization used by frequentist methods (Supplementary information S7 (box)). We have found that this approach (implemented in SNPTEST) is much more stable than full maximization of the likelihood, as the prior acts to regularize the parameter estimation. The expected genotype count can also be used to calculate Bayes factors10 and is implemented in both BIMBAM and SNPTEST. Stephens and Balding35 provide an excellent review of the use of Bayes factors and include a good discussion on the choice of priors. In particular, they discuss the idea of using a mixture of priors to more precisely control beliefs about large effect sizes. Supplementary information S7 describes how this can also be achieved using a t-distribution prior and discusses a method of setting priors for quantitative trait models. Currently, only SNPTEST can calculate Bayes factors for binary traits conditional on a set of covariates.
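As a rough numerical illustration, the sketch below computes an approximate Bayes factor from a SNP's estimated effect and standard error under a normal prior on the effect size, in the spirit of Refs 34,35. This normal approximation is our own stand-in for illustration; it is not the Laplace approximation implemented in SNPTEST, and the prior standard deviation is an arbitrary choice.

```python
# Approximate Bayes factor for association versus no association at one SNP,
# using the asymptotic normality of the effect estimate and a N(0, W) prior.
import numpy as np

def approx_log10_bf(beta_hat, se, prior_sd=0.2):
    V, W = se ** 2, prior_sd ** 2                  # sampling and prior variances
    z2 = (beta_hat / se) ** 2
    log_bf = 0.5 * np.log(V / (V + W)) + 0.5 * z2 * W / (V + W)
    return log_bf / np.log(10.0)

# A SNP with estimated log odds ratio 0.15 and standard error 0.04:
print(approx_log10_bf(0.15, 0.04))                 # about 2.2, i.e. BF around 170
```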

In the context of fine-mapping, in which multiple SNPs in a gene or region may play a part in the underlying causal model, it is desirable to consider models that allow multiple SNPs. The BIMBAM approach9 combines imputation with such an approach and can produce posterior probabilities of association for each SNP and for the number of associated SNPs in the region.

Bayes factors versus p-values. At directly genotyped SNPs, Bayes factors and p-values can be made equivalent in the sense that they give the same ranking of SNPs34, but this occurs for a particular choice of prior in which the prior variance of the effect size increases as minor allele frequency decreases (or as the information at the SNP about the effect size parameter decreases). This prior assumes larger effects at rarer SNPs, which may be a biologically reasonable assumption. At imputed SNPs, the level of uncertainty also influences the amount of information there is about the effect size parameter. To make Bayes factors give the same ranking of SNPs as p-values, we would need to allow the prior variance to increase as the amount of imputation uncertainty increases, which makes no sense35. So even when adopting a prior that depends on allele frequency, Bayes factors and p-values will not give the same ranking at imputed SNPs. In practice, studies have tended to filter out SNPs with low information, so it seems unlikely that a reanalysis of studies using Bayes factors will result in very different outcomes; but, as we probe ever rarer SNPs based on imputation from the 1000 Genomes Project data, it may become more important to take care of these details.

Joint imputation and testing. The two-stage approach of imputation followed by testing may underestimate effect sizes, as genotypes are effectively imputed under the null model. The SNPMSTAT17 and UNPHASED approaches allow joint imputation and testing in a single model. A comparison of the joint approach (using SNPMSTAT) with a two-stage approach (using IMPUTE and then SNPTEST) on three different data sets suggested that the improved imputation performance gained by using a method that uses as much flanking genotype data as possible (such as IMPUTE, MACH, fastPHASE and BEAGLE) outweighs the advantage of joint imputation and testing36.

Perspectives and future directions

It seems likely that genotype imputation will continue to play an important part in the analysis of GWA studies over the next few years, as researchers apply the approach to an increasing set of diseases and traits and work together to combine cohorts through meta-analysis. The main factor that will influence the precise way in which imputation is used will be the increasing availability of next-generation sequencing data. Such data will allow researchers to assess many more SNPs as well as short indels and CNVs.

One public resource of such data will be the 1000 Genomes Project. Imputation is currently being used within the project to reconstruct genotypes from the low-coverage sequencing reads, and it will also be used to impute from these data into other cohorts. Compared with HapMap2, the number of SNPs, the number of haplotypes and the number of populations will increase notably. This resource will include many more SNPs with a frequency of 1–5% that can be imputed. This may be key if rarer variation is an important part of the aetiology of a given trait. The availability of haplotypes on a larger set of populations should lead to an improvement in imputation in populations that are not well matched to the CEU, YRI and CHB + JPT haplotype sets in HapMap2.

The challenges for imputation methods will be in using the larger, more diverse set of haplotypes available for imputation. As any haplotype estimates produced from the 1000 Genomes Project data may have more inherent uncertainty than the HapMap2 haplotypes, owing to the low-coverage sequencing used and the larger number of rare SNPs, it may be important to take this into account when imputing from this data. Along these lines, both IMPUTE v2 and BEAGLE already offer the ability to accept genotypes estimated with uncertainty when carrying out imputation and phasing.

Care may also be needed when analysing rare imputed SNPs. It is well known that the asymptotic theory used by frequentist association tests breaks down at rarer SNPs, which means that p-values may not be well calibrated. In addition, the subtle effects of population structure when analysing rare variants will need to be handled carefully. The danger here is that a small number of extra rare alleles in cases or controls owing to population structure may lead to false positives.