Fig. 3: Prediction accuracy relative to a convolutional neural network and transfer learning, using smaller K and N, and over large contiguous regions with no ground truth data.

From: A generalizable and accessible approach to machine learning with global satellite imagery

Fig. 3

a Task-specific MOSAIKS test-set performance (dark bars) compared with: an 18-layer variant of the ResNet architecture (ResNet-18) trained end-to-end for each task (middle bars); and transfer learning based on an unsupervised featurization using the last hidden layer of a 152-layer ResNet variant pre-trained on natural imagery and applied using ridge regression (lightest bars). See Supplementary Note 3.1 for details. b Validation-set R2 performance for all seven tasks while varying the number of random convolutional features K and holding N = 64,000 (left), and while varying N and holding K = 8,192 (right). Shaded bands indicate the range of predictive skill across five folds; lines indicate average accuracy across folds. c Evaluation of performance over regions of increasing size that are excluded from the training sample. Data are split using a checkerboard partition in which the width and height of each square is δ (measured in degrees). Example partitions with δ = 0.5°, 8°, and 16° are shown in maps. For a given δ, training uses data sampled from black squares and performance is evaluated in white squares. Plots show colored lines representing average performance of MOSAIKS in the US across δ values for each task. Benchmark performance from Fig. 2 is indicated as circles at δ = 0. Grey dashed lines indicate corresponding performance using only spatial interpolation with an optimized radial basis function (RBF) kernel instead of MOSAIKS (Supplementary Note 2.8). To moderate the influence of the exact placement of square edges, training and test sets are resampled four times for each δ, with the checkerboard position re-initialized using offset vertices (see Supplementary Note 2.8 and Supplementary Fig. 10). The ranges of performance are plotted as colored or grey bands.
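For readers who want to reproduce the spirit of the panel c evaluation, the sketch below shows one way to implement a checkerboard train/test split with offset re-initialization, training ridge regression on features from "black" squares and scoring R2 on "white" squares. It is a minimal illustration, not the authors' released code: the function names (checkerboard_mask, evaluate_checkerboard), the choice of four offset vertices, and the use of scikit-learn's Ridge are assumptions made here for clarity.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def checkerboard_mask(lon, lat, delta, offset=(0.0, 0.0)):
    """Return True for points on the 'black' squares of a checkerboard whose
    squares are `delta` degrees wide/tall, shifted by `offset` degrees."""
    col = np.floor((lon - offset[0]) / delta).astype(int)
    row = np.floor((lat - offset[1]) / delta).astype(int)
    return (col + row) % 2 == 0

def evaluate_checkerboard(X, y, lon, lat, delta, alpha=1.0):
    """Train ridge regression on black squares, score R^2 on white squares,
    averaging over four re-initialized checkerboard offsets (an assumed choice)."""
    scores = []
    for offset in [(0.0, 0.0), (delta / 2, 0.0), (0.0, delta / 2), (delta / 2, delta / 2)]:
        train = checkerboard_mask(lon, lat, delta, offset)
        test = ~train
        model = Ridge(alpha=alpha).fit(X[train], y[train])
        scores.append(r2_score(y[test], model.predict(X[test])))
    return float(np.mean(scores)), scores
```

Given a feature matrix X (e.g., random convolutional features), labels y, and per-image longitudes and latitudes, calling evaluate_checkerboard(X, y, lon, lat, delta=8.0) would approximate one point on the panel c curves; the spread of the returned per-offset scores corresponds to the plotted bands.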