Fisher's (1918) classic paper on the inheritance of complex traits not only founded the field of quantitative genetics, but also coined the term variance and introduced the powerful statistical method of analysis of variance. This was a watershed paper, reconciling the Mendelians’ discrete and saltational view of trait evolution with the gradual and continuous view of Darwin’s followers, the biometricians (Provine, 1971). This fusion of Mendelian genetics with Darwinian natural selection was the start of the modern evolutionary synthesis. Fisher’s paper also marked a critical point in modern statistics, and this synergism between the development of new statistical methods and the ever-increasing complexity of genetic/genomic data sets continues to this day.

Quantitative genetics plays a unique role in biology, serving as the conduit between purely statistical descriptions of trait inheritance (and evolution) and more genetically informed views. While we have an exquisite understanding of certain simple (but important) traits, such as regulatory switches in bacteria, flies, yeast, worms, humans, and viruses, our understanding of more complex traits is still lagging. The bridge between using what we do know about the genetics of a trait, and accommodating the remaining uncertainty, is quantitative genetics. At its heart is statistical machinery to address the uncertainty inherent in complex trait variation and inheritance—variation in genotypes and environments and the interactions between these. Despite many attempts to declare its demise (in part based on the rather narrow view by detractors that its focus is on purely statistical descriptions of trait variation), the field is as vibrant as ever. Indeed, it is the cornerstone of modern breeding, evolutionary and ecological genetics, and human genetics of complex traits (such as most important diseases).

In the 1920s and 1930s, quantitative genetics was simply an integral part of genetics. The rise of molecular biology, and the resulting vast increase in our molecular understanding of the inheritance of simple traits, created a fractured view of the field among geneticists. It fell out of favor among many molecularly focused geneticists, who viewed it as an approximation to handle complex inheritance in a purely statistical fashion. As a brief perusal of any current genetics journal illustrates, this misinformed view is a gross oversimplification, as the machinery of quantitative genetics is happy to incorporate any relevant genetic information.

Against this background, and motivated by the recent Fourth International Conference on Quantitative Genetics (ICQG) held at Edinburgh in 2012, this special issue of Heredity is devoted to recent advances in the field. Before introducing the papers that comprise this issue, some brief reflections on the changing focus and concerns of the field as indicated by the programs of previous ICQGs are in order. The first ICQG was held in 1976 at Ames, Iowa (Pollak et al., 1977). While much of the focus was on standard biometrical applications (for example, variance components), hints of things to come were foreshadowed by papers on the relevance of molecular biology to breeding and applications of mixed models (models including both fixed and random effects, for example, BLUP and REML). Much of the emphasis was on breeding or laboratory populations. A decade later, the second ICQG held at Raleigh, North Carolina in 1987 (Weir et al., 1988), reflected explosive growth in new tools (low-density molecular markers for early quantitative trait locus (QTL) mapping), a continued expansion of the importance of mixed-model methodology for complex estimation issues, and a growing fusion of population and quantitative genetics. It also highlighted the migration of quantitative-genetic machinery from applied breeding into important evolutionary and human-genetic problems.

Despite massive advances in the field in the 1990s, there was a two-decade hiatus, with the third ICQG being held in Hangzhou, China in 2007 (Weir et al., 2009). Much of the focus was on the use of dense-marker information, although Bayesian statistical methods of analysis were also making inroads. The subsequent 5 years leading up to the most recent conference at Edinburgh saw the full flowering of genomic selection (GS) and other very dense-marker methods, as well as continued development of statistical machinery to handle this information (much with its roots in the mixed models discussed in the first conference). Quantitative genetics is now a central part of human genetics (for example, the recent discussions on ‘missing heritability’), and its fusion with molecular population genetics continues. Indeed, coalescent-based approaches for gene mapping and detecting loci under selection are now an integral part of the field. Likewise, dense-marker information offers the possibility of assigning relationships to individuals missing pedigree data, allowing mixed-model machinery to be applied in more general settings, such as natural populations.

The following nine papers, roughly half from authors presenting at the fourth ICQG, examine many of the developing themes that will likely be key issues at the Fifth Congress to be held in Madison in 2016. Quantitative genetics has a rich shared history with ecological and population genetics, and the first three papers (Anderson et al. (2013); Shaw and Shaw (2013) and Aguirre et al. (2013)) examine ongoing research in these areas. One exciting new strategy in the ecological genetics of complex traits is the use of manipulative genetic experiments in otherwise natural populations. Anderson et al. (2013) illustrate the power of this approach for detecting fitness effects of QTLs. The BCMA (Branched Chain Methionine Allocation) locus in the Brassicaceous plant Boechera stricta was first identified by QTL mapping as influencing both insect damage and glucosinolate profiles (secondary chemicals involved in protection against insect herbivory) in laboratory settings. Near-isogenic lines (NILs) segregating BCMA variants showed strong fitness differences in the field, differences that would likely be missed if NILs were not used.

The paper by Shaw and Shaw (2013) examines the additive genetic variance in fitness. Such variance is required for any trait to respond to selection, but the expectation is that natural selection will generally drive this variance to small values in equilibrium populations. Shaw and Shaw show that significant additive variance can be generated for a trait under stabilizing selection with a moving optimal value (as might be expected under climate change). In such settings, not only is the additive variation substantial, it can increase steadily over an extended period following a change in the optimal value as the population distribution catches up with the shift in the fitness profile. A second important point they raise is that the estimation of genetic variance in fitness is often done under the assumption of normality, while by their nature fitnesses are non-normal. Many individuals leave no offspring, creating a point mass at zero. Likewise, fitness components are multiplicative, with lifetime fitness being the product of several fitness episodes, again creating departures from normality. Shaw and Shaw note that their recently developed Aster approach of modeling such processes (Geyer et al., 2007; Shaw et al., 2008) provides much more suitable estimates of the genetic variance in fitness.

The genetic variance–covariance matrix G plays a central role in the Lande equation R = Gb, where R is a vector of trait responses and b the direction favored by selection. G rotates the response away from the optimal direction, generating genetic constraints. Because of this central role for G, evolutionary biologists have been obsessed with comparing estimates of G across populations and species. The final ecological genetics paper by Aguirre et al. (2013) develops a Bayesian framework for comparing not just pairs but rather whole series of G matrices via three frameworks—the random skewers (the correlation between response vectors for different G matrices using randomly generated b vectors), Krzanowski common subspaces (how much common space is spanned by two matrices) and tensor approaches. Tensors offer a very interesting, and underutilized, tool for quantitative geneticists. Problems involving comparison of a series of matrices (such as G or quadratic selection gradient matrices) can be thought of as a stack of matrices, much like a three-dimensional chess board. This is a third-order tensor, and likewise the set of the covariances between any two elements Gij, Gkl from a set of G matrices is a fourth-order tensor (Hine et al., 2009).
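The random-skewers comparison is simple enough to sketch in a few lines. The snippet below (a minimal illustration, not code from Aguirre et al.; the matrices and the `random_skewers` helper are hypothetical) applies the same random unit-length selection gradients b to two G matrices, computes the response R = Gb for each under the Lande equation, and averages the vector correlation between the two responses:

```python
# Minimal sketch of the random-skewers method; all values illustrative.
import numpy as np

rng = np.random.default_rng(1)

def random_skewers(G1, G2, n_skewers=10_000):
    """Mean vector correlation between responses R = G b of two
    G matrices to the same set of random selection gradients b."""
    k = G1.shape[0]
    b = rng.normal(size=(n_skewers, k))
    b /= np.linalg.norm(b, axis=1, keepdims=True)  # unit-length skewers
    R1 = b @ G1.T  # response vectors under the Lande equation
    R2 = b @ G2.T
    cos = np.sum(R1 * R2, axis=1) / (
        np.linalg.norm(R1, axis=1) * np.linalg.norm(R2, axis=1))
    return cos.mean()

# Two illustrative 2x2 G matrices differing in the sign of the covariance
G_a = np.array([[1.0, 0.5], [0.5, 1.0]])
G_b = np.array([[1.0, -0.5], [-0.5, 1.0]])

print(random_skewers(G_a, G_a))  # identical matrices: correlation = 1
print(random_skewers(G_a, G_b))  # different covariance structure: < 1
```

Identical matrices give a mean correlation of one; the more the two matrices deflect responses in different directions, the lower the average correlation.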

The next set of papers (Wallace et al. (2013); Druet et al. (2013) and Crossa et al. (2013)) examine the impact of dense-marker information on maize, cattle and wheat improvement. Befitting the most harvested crop on the planet, powerful genetic resources make maize one of the best-studied model plant systems. In particular, the recently developed Nested Association Mapping (NAM) lines (a set of 5000 RILs created by crossing 25 diverse lines to a reference) offer the advantages of both linkage (QTL) and association mapping. As reviewed by Wallace et al. (2013), the view emerging for many traits from NAM analysis is a ‘common gene, rare allele’ model, with common genomic locations containing unique (line-specific) alleles influencing trait value. Most natural alleles have small effect size with little detectable epistasis or pleiotropy.

A major growth industry in quantitative genetics is GS, using all of the marker information (as opposed to a subset of statistically significant markers, that is, marker-assisted selection or MAS) to detect superior individuals. Druet et al. (2013) examine GS in dairy cattle, while Crossa et al. (2013) consider maize and wheat. Druet et al. (2013) extend GS by incorporating whole-genome sequencing, focusing on optimal strategies for deciding which key individuals to fully sequence. The additional gain is often small (two or three percent) over methods (such as GBLUP) simply using dense, as opposed to full, sequence information. Crossa et al. (2013) review the impact of genomic selection on the maize and wheat breeding programs at CIMMYT (one of the major global centers for crop development), finding that population structure accounts for a significant fraction of the prediction accuracy. This limitation on predicting values for individuals from populations other than the training set is a general observation on, and limiting factor in, the success of GS. Crossa et al. (2013) also show that accuracy can be increased by using standard methods to account for genotype–environment interaction, borrowing information from correlated environments.
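The core GS idea, fitting every marker jointly with shrinkage rather than selecting a significant subset, can be sketched as ridge regression on marker genotypes (RR-BLUP, which is equivalent to GBLUP). The following toy example uses entirely simulated data; the sample sizes, marker count and shrinkage parameter are illustrative, not taken from the papers discussed:

```python
# Hedged sketch of genomic prediction via ridge regression (RR-BLUP);
# all genotypes, effects and parameters are simulated and hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_markers = 200, 50, 500

# Simulate 0/1/2 genotypes and small additive effects at every marker
X = rng.integers(0, 3, size=(n_train + n_test, n_markers)).astype(float)
X -= X.mean(axis=0)  # center genotypes, as is standard before GBLUP
beta = rng.normal(scale=0.1, size=n_markers)
y = X @ beta + rng.normal(scale=1.0, size=n_train + n_test)

X_train, y_train = X[:n_train], y[:n_train]
X_test, y_test = X[n_train:], y[n_train:]

# Ridge solution: all markers fitted jointly, shrunk toward zero
# (GS), rather than a handful of significant markers (MAS)
lam = 10.0
XtX = X_train.T @ X_train + lam * np.eye(n_markers)
beta_hat = np.linalg.solve(XtX, X_train.T @ y_train)

# Prediction accuracy: correlation of predicted and realized phenotypes
pred = X_test @ beta_hat
accuracy = np.corrcoef(pred, y_test)[0, 1]
print(round(accuracy, 2))
```

Because the simulated markers are unlinked and the test individuals are drawn from the same population as the training set, accuracy here is moderate; as Crossa et al. (2013) emphasize, predicting into a different population is considerably harder.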

For many traits, the final phenotype of an individual is a function of both its intrinsic genetic value as well as environmental effects imparted by its interaction with others. If these environmental (or associative or indirect genetic) effects themselves have a heritable component, then correct models of trait inheritance must jointly consider the changes in direct and associative effects. The twist is that an individual’s associative effects never appear in their own phenotype, but rather only in the phenotypes of the group members with which they interact. While this critically important concept is not new, being introduced by Griffing (1967, 1968a, 1968b) over 40 years ago, it did not gain serious traction in breeding until Muir (2005) showed how breeding values of direct and associative effects could be estimated with a simple modification of standard BLUP mixed models. Bijma (2013) provides a very useful overview of the implications of such traits for both breeding and evolutionary genetics. From a breeding standpoint, there is often considerably more usable genetic variation in the associative (as opposed to direct) effects, but individual selection misses this component. For evolutionary geneticists, the old debates about group versus kin selection are neatly folded into a simple unified model with measurable components. Perhaps the most direct application of these models is to competition, and Wilson’s (2013) paper examines the impact of such models on potentially imposing evolutionary constraints.
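A small simulation makes the key point concrete: an individual's phenotype contains its own direct breeding value plus the associative effects of its group mates (never its own), yet the total heritable variance available to selection includes the associative variance scaled by the squared number of mates. The variance components below are illustrative, not estimates from any study:

```python
# Minimal sketch of the direct + associative (indirect genetic) effects
# model; all variance components are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_groups, group_size = 5_000, 4
var_d, var_s, var_e = 1.0, 0.5, 1.0

# Independent direct (A_D) and associative (A_S) breeding values
A_D = rng.normal(scale=np.sqrt(var_d), size=(n_groups, group_size))
A_S = rng.normal(scale=np.sqrt(var_s), size=(n_groups, group_size))
E = rng.normal(scale=np.sqrt(var_e), size=(n_groups, group_size))

# Phenotype: own direct effect + group mates' associative effects
# (an individual's own A_S never appears in its own phenotype)
mates_AS = A_S.sum(axis=1, keepdims=True) - A_S
P = A_D + mates_AS + E

# Each individual's total heritable effect on the population mean is
# A_D + (n-1) A_S, so total heritable variance = var_d + (n-1)^2 var_s
tbv_var = var_d + (group_size - 1) ** 2 * var_s
print(P.var(), tbv_var)
```

Here the phenotypic variance is 3.5 while the total heritable variance is 5.5: as Bijma (2013) discusses, the heritable variance can exceed the phenotypic variance, and individual selection on P alone misses the associative component.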

The final paper in this issue is a speculative, and provocative, offering from Marjoram et al. (2013) as to how to proceed in the post-genome-wide association study (GWAS) world. As has been stressed, quantitative genetics provides a very flexible platform for integrating additional information when attempting to predict the phenotypes of relatives. Marjoram et al. (2013) build on this tradition by suggesting that one natural extension of GWAS is to look for effects of single-nucleotide polymorphisms (SNPs) on intermediate products in particular gene regulatory networks thought to underlie a complex trait. They suggest that approximate Bayesian computation (ABC) methods are ideally suited for analysis involving SNP information and regulatory products. Under a standard Bayesian analysis, the product of a likelihood (the signal in the data) and a prior distribution of beliefs is used to generate an updated, or posterior, view of our knowledge of a set of parameters. One problem is that likelihoods may be extremely difficult to compute, but ABC methods offer a useful approximation. Suppose one has a complex gene network, with a set of dynamical equations that relate model input to model outputs. For ABC, one samples values of the parameters of the dynamical equations from some prior, and then runs a simulation. In the simplest ABC settings, if the resulting simulation output is a sufficiently close match to the observed data, these initial parameters are saved, else they are rejected. Running through a sufficiently large number of such iterations generates draws from the posterior.
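The rejection-sampling loop just described can be sketched in a few lines. In place of a gene network, the forward model below is a deliberately trivial stand-in (a normal mean, with the sample mean as summary statistic); the prior, tolerance and iteration count are all illustrative choices, not those of Marjoram et al.:

```python
# Minimal sketch of rejection-sampling ABC on a toy model; the
# "simulate" step stands in for running a network's dynamical equations.
import numpy as np

rng = np.random.default_rng(3)

# "Observed" data, summarized here by its mean
observed = rng.normal(loc=2.0, scale=1.0, size=100)
s_obs = observed.mean()

def simulate(theta, n=100):
    """Forward model: run the system at parameter value theta."""
    return rng.normal(loc=theta, scale=1.0, size=n)

# Rejection ABC: draw theta from the prior, simulate, and keep theta
# whenever the simulated summary is close enough to the observed one
accepted = []
tol = 0.1
for _ in range(20_000):
    theta = rng.uniform(-5.0, 5.0)  # draw from a flat prior
    if abs(simulate(theta).mean() - s_obs) < tol:
        accepted.append(theta)      # approximate posterior draw

posterior = np.array(accepted)
print(len(posterior), posterior.mean())  # concentrates near the truth
```

Tightening the tolerance makes the accepted draws a better approximation to the true posterior, at the cost of rejecting more simulations; practical ABC schemes refine this basic loop with weighted summaries, sequential sampling and regression adjustments.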

As this set of papers illustrates, the reach of quantitative genetics is vast, covering important problems in breeding, genomics, human genetics and evolutionary biology. While the range of topics covered would likely not have been foreseen by most at the 1976 Ames conference, the basic foundations would be quite recognizable. While the massive impact of genomics is clear to even the most casual of followers, what is often lost is the equally impressive contribution of new statistical methods (fueled, in large part, by advances in computation). One could have a healthy debate on whether molecular biology or computation has advanced at a faster pace, but both have had a strong, and synergistic, impact on quantitative genetics. Finally, these papers all hint at the great strength of quantitative genetics, which is that it provides a bridge to unify ideas and data from very diverse fields, be they statistical, computational, molecular, evolutionary, ecological, behavioral or genetical.