Introduction

Wave breaking is one of the most challenging water wave phenomena to investigate. Despite nearly two centuries of research, the current practical description of the process is still mostly empirical1,2,3,4. Precise wave breaking modelling requires explicit knowledge of the wave phase speed and of the fluid velocity distribution on the wave crest, which can only be obtained by numerically solving the Navier-Stokes equations in a framework that is currently too computationally expensive for practical applications5. Consequently, current large-scale state-of-the-art wave models rely on statistical approaches (mostly the spectral approach) to represent the waves6,7,8. This type of model parameterizes wave breaking as a function of known parameters such as the wind speed6, the local wave height to water depth ratio7 or semi-empirical breaking wave height probability distributions2,9. Due to a lack of observed data, the constants involved in these models have been derived from limited datasets that may not adequately represent the natural environment. For example, a recent study has shown that popular surf zone parametric wave breaking models incorrectly represented the fraction of broken waves in their formulations, with errors > 50%, even though these models can adequately represent surf zone energy dissipation, possibly due to parameter tuning10. This paper aims to start addressing this data unavailability issue by providing a reliable and reproducible method to detect and track waves that are actively breaking in video imagery data.

Wave breaking directly affects several environmental phenomena. For example, wave breaking is considered the main driver of air-sea exchanges11, being often described as a function of Phillips’12 \(\Lambda (c)dc\) parameter. This parameter is defined as the average total length of breaking fronts per unit surface area that have velocities in the range c to \(c+dc\), and its moments are assumed to correspond to different phenomena such as whitecap coverage (second moment), rate of air entrainment (third moment), and momentum flux (fourth moment). As previously identified13, different interpretations of how data are processed to obtain \(\Lambda (c)dc\) resulted in differences of 175% in the first moment and 300% in the fifth moment of \(\Lambda (c)\)13,14. This issue does not seem to limit the applicability of Phillips’ framework. A recent study, for example, used a wave breaking parameterization fully based on \(\Lambda (c)dc\) to model whitecap coverage off the coast of California15. To the authors’ knowledge, however, no study has assessed the impact that errors in wave breaking detection have on measured \(\Lambda (c)\) distributions and how such errors would affect the conclusions that have been drawn from these models (for example, the assumption that \(\Lambda (c)\) is a simple function of wind speed11). More importantly, in the case in which new observations diverge from Phillips’ original theory (as we will later show in this paper), new and improved wave breaking models will certainly follow.

Further, breaking waves have a direct impact on the remote sensing of the ocean surface from satellites, aircraft, or platform-mounted instruments. Wave breaking has been observed to significantly modulate radar backscatter measurements, with large increases in backscatter being directly correlated with breaking waves16. Given that part of the data used here (see Methods for details) coincides with the location of previous wave backscatter studies16, data derived from our method could lead to explicit formulations correlating wave breaking and radar backscatter. Such formulations could, in the future, lead to more reliable satellite-derived scatterometer data, which currently ignore17 or rely on empirical wave breaking models18. The foam generated by breaking waves at the sea surface also influences the measurement of microwave brightness temperature signatures for moderate to high wind speeds19,20. To date, however, no study has systematically quantified such temperature signatures for large field datasets and most of our knowledge is empirical. This limitation could be addressed by extending wave breaking detection and tracking methods to infrared wavelengths, which are directly correlated with water temperature21. Advancements in this regard could lead to improvements in global climate models, given that the inclusion of wave-generated heat transfer has been shown to reduce cold biases in such models for non-breaking waves22. Breaking waves are expected to transfer more heat than non-breaking waves23 and, when considered, could improve climate models even further.

With the recent explosion in the usage of machine learning, particularly deep neural networks, it was only a matter of time before these techniques were adapted to wave research. Recent studies have shown that deep neural networks can be used to accurately classify different types of surf zone breakers24, obtain wave heights from video data25, track waves in the surf zone26 and the shoreline position27, and, when applied to pressure transducer data, distinguish between broken and unbroken waves10. In this paper, we describe a robust and extensible method to identify and track breaking waves in video imagery data using a combination of classic machine learning algorithms and deep learning. More importantly, we make the present dataset and code base fully available to future researchers, who can either use them directly, re-train the models with additional training data, or extend the present method to other cases (for example, object segmentation). The development and standardization of a general framework to detect wave breaking in video imagery data should help to provide the wave breaking statistics database that is currently needed to assess, or further develop, our current understanding of several processes related to breaking waves. Ultimately, the efforts initiated here aim to lead to improvements in several areas of research such as wave modelling and forecasting, wave energy tracking and harvesting28, ocean-atmosphere interaction15, remote sensing of the ocean16, and safety at sea and along coasts29.

Method

Model definition

In this study, we have developed a novel method to detect active wave breaking in video imagery data. As in previous studies, we exploit the fact that breaking wave crests generate characteristic white foam patches from bubble entrainment30,31. Unlike the vast majority of previous methods, however, we make a clear distinction between active wave breaking (that is, visible foam being actively generated by wave breaking), passive foam, and all other non-wave-breaking instances. To the authors’ knowledge, only the methods of Mironov and Dulov30 and Kleiss and Melville32 have previously attempted to distinguish between active wave breaking and passive foam. Both methods start similarly to ours (see next paragraph) by identifying bright pixels in a given series of images via thresholding. Next, the identified bright pixels are connected in space and time using a nearest neighbor search (Mironov and Dulov30) or by extracting connected contours (Kleiss and Melville32). In both previous methods, the separation between active and passive wave breaking is then done by empirically defining foam expansion direction and velocity limits. The present method aims to remove empirical steps in active wave breaking detection by using data-driven methods and deep neural networks instead of experiment-specific parameters.

Our method starts by naïvely identifying wave breaking candidates in a given input image. This was done by identifying bright (that is, white) pixels in the image using the local thresholding algorithm available from the OpenCV33 library with its default parameters. Interconnected regions of bright pixels were then clustered using DBSCAN34 and a minimum enclosing ellipse35 was fitted to each identified cluster. The DBSCAN step requires the user to define a minimum number of pixels to be clustered (default value of 10 for all data used here) and a maximum distance allowed between pixels (default value of 5 for all data used here). This step resulted in large quantities of bright pixel patches being detected. At this stage, it was not possible to determine whether a given identified cluster of pixels represented active wave breaking, passive foam, or any other instance of bright pixels (for example, birds, boats, coastal structures, sun glints or white clouds). Generically, this step can be thought of as a feature extraction step and can be replaced by any other equivalent approach (for example, image feature extractors). To avoid exponential memory consumption growth generated by DBSCAN, the input image may be subdivided into blocks of regular size. This step was easily parallelized, with performance increasing linearly with the number of computing threads.
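A minimal sketch of this candidate detection step is given below, assuming OpenCV and scikit-learn. The thresholding parameters are illustrative, and cv2.fitEllipse is used as a stand-in for the minimum enclosing ellipse35 algorithm used in the actual implementation (see the code repository for the exact version).

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def detect_candidates(gray, min_samples=10, eps=5, block_size=11):
    """Detect bright-pixel clusters in an 8-bit grey image and fit ellipses."""
    # Local (adaptive) thresholding to isolate bright (foam) pixels.
    binary = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY, block_size, -2)
    # (row, col) coordinates of every bright pixel.
    points = np.column_stack(np.nonzero(binary))
    if len(points) < min_samples:
        return []
    # Cluster interconnected bright pixels with DBSCAN.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    ellipses = []
    for label in set(labels) - {-1}:  # label -1 marks noise
        cluster = points[labels == label]
        if len(cluster) >= 5:  # cv2.fitEllipse needs at least 5 points
            # fitEllipse expects (x, y) points, hence the column swap.
            ellipses.append(cv2.fitEllipse(cluster[:, ::-1].astype(np.int32)))
    return ellipses
```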

The second step of the method consisted of training a deep convolutional neural network to distinguish between active wave breaking and passive foam (and all other non-desired occurrences). This step can be understood as a binary classification step. Figure 1 shows a schematic representation of the present method. From the original images, randomly selected subsets of size \(256\times 256\) pixels centered on the fitted ellipses were extracted and manually labelled as either active wave breaking (label 1, or positive) or otherwise (label 0, or negative). The present training dataset was generated using raw video imagery data from Guimarães et al.36 and consisted of 19,000 training and 1300 testing images (see Table 1). The training dataset is further split into 80% training and 20% validation data37. The validation dataset is reserved for future hyper-parameter fine-tuning. Note that the present dataset has a class imbalance of \(\approx 90\%\) towards the negative label, that is, for each active wave breaking sample (label 1, or positive) there are nine instances that were not active wave breaking (label 0, or negative). Data augmentation38 (rotation, vertical and horizontal flips, and zoom) was employed during training to increase the variety of samples of the positive label.
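A minimal sketch of the data splitting and augmentation step, assuming TensorFlow/Keras; the augmentation ranges, the rescaling, and the "train/" directory layout are illustrative rather than the exact values used here.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# 80/20 training/validation split with on-the-fly augmentation.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,     # normalize pixel values to [0, 1]
    rotation_range=20,     # random rotations (degrees)
    horizontal_flip=True,  # random horizontal flips
    vertical_flip=True,    # random vertical flips
    zoom_range=0.2,        # random zoom
    validation_split=0.2)  # hold out 20% for validation

train_data = datagen.flow_from_directory(
    "train/", target_size=(256, 256), batch_size=64,
    class_mode="binary", subset="training")
val_data = datagen.flow_from_directory(
    "train/", target_size=(256, 256), batch_size=64,
    class_mode="binary", subset="validation")
```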

Five state-of-the-art neural network architectures, or backbones (VGG1639, ResNet50V240, InceptionResNetV241, MobileNetV242 and EfficientNetB543), were implemented and can be chosen by end users (see our Github code repository https://github.com/caiostringari/deepwaves for usage guidance). All of these backbones use convolutional layers to gradually extract information from the input images. The main differences between them are how many layers they have, the size of the convolution windows, how data is normalized in each layer, and how the layers connect to each other (or to earlier layers in the stack). For example, VGG16 is 16 layers deep, uses \(3\times 3\) convolution windows and has no normalization. InceptionResNetV2 is 164 layers deep, uses a mix of \(5\times 5\), \(3\times 3\) and \(1\times 1\) convolution windows and uses batch normalization44 to help avoid overfitting. Residual Networks (namely, ResNet50V2) not only connect adjacent layers but also take into account errors (residuals) from previous layers. In general, more modern architectures (namely, EfficientNet) are wider (mainly by having parallel convolution windows) as well as much deeper than older architectures (namely, VGG16). The final top layers of the network were fixed for all backbones and consisted of flattening the last convolutional layer, two fully-connected layers with 50% dropout45, and a final classification layer with sigmoid activation.
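In Keras terms, the fixed classification head described above could be attached to any of the interchangeable backbones roughly as follows. This is a sketch: the number of units in the fully-connected layers is illustrative, as the text does not specify it.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(backbone_name="VGG16", input_shape=(256, 256, 3)):
    """Attach the fixed classification head to an interchangeable backbone."""
    backbone = getattr(tf.keras.applications, backbone_name)(
        include_top=False, weights=None, input_shape=input_shape)
    x = layers.Flatten()(backbone.output)          # flatten last conv layer
    x = layers.Dense(256, activation="relu")(x)    # fully-connected layer
    x = layers.Dropout(0.5)(x)                     # 50% dropout
    x = layers.Dense(256, activation="relu")(x)    # fully-connected layer
    x = layers.Dropout(0.5)(x)                     # 50% dropout
    outputs = layers.Dense(1, activation="sigmoid")(x)  # binary classifier
    return tf.keras.Model(backbone.input, outputs)
```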

The optimization step (training) was done using the Adam46 implementation of the stochastic gradient descent method47, with binary cross-entropy as the loss function. Note that this step must be computed on a graphics processing unit (GPU) in order to achieve feasible computation times. The models took from three (VGG16) to twelve (EfficientNetB5) hours to train on an NVIDIA GTX 1080 GPU with a batch size of sixty-four images. After the neural networks were sufficiently trained, the best performing network (VGG16, see below) was used to classify all the naïvely identified wave breaking candidates, and only the events classified as active wave breaking were kept for further analyses. Although VGG16 was chosen for presenting the results, the performance of the other architectures is nearly identical to VGG16 in real-world applications. Finally, note that user-tunable parameters for the neural networks (for example, learning rate and neuron activation thresholds) were kept unchanged from their default values in the TensorFlow library48. This implies that more aggressive parameter optimization could improve the results presented here even further. For the sake of brevity, we refer the reader to the project’s code repository for guidance on how to select hyper-parameters.
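Putting the previous sketches together, compiling and training the model could look like the following. The number of epochs is illustrative; all optimizer parameters are TensorFlow defaults, as stated in the text.

```python
model = build_model("VGG16")
model.compile(
    optimizer=tf.keras.optimizers.Adam(),  # TensorFlow default parameters
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.BinaryAccuracy(name="accuracy"),
             tf.keras.metrics.Precision(name="precision"),
             tf.keras.metrics.Recall(name="recall"),
             tf.keras.metrics.AUC(name="auc")])
history = model.fit(train_data, validation_data=val_data, epochs=100)
```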

Figure 1
figure 1

Schematic representation of a deep convolutional neural network. The input layer consists of an active wave breaking candidate and has shape \(256\times 256\times 3\) (image height, image width and number of RGB channels). In the case of a grey-scale image, the single grey channel was triplicated. The red dashed rectangle in the analyzed image shows the region of interest which, in this particular case, was equal to the stereo video reconstruction area. The intermediate convolutional layers (or backbones) are interchangeable, with different options available (see text for details). The function of the convolutional layers is to extract features from the input image by using convolutions (\(3\times 3\) in this example) and max pooling49 (that is, selecting the brightest pixel in a given window). The last convolutional layer is flattened (that is, turned into a one-dimensional vector) and is connected to two fully-connected (that is, multi-layer perceptron-like) layers and one final classification layer. A 50% dropout (that is, random deactivation of neurons in a layer) is applied after each fully-connected layer. The final classification layer has one unit and uses sigmoid activation with a threshold of 0.5.

Table 1 Data characterization summary table.

The last step of the method consisted of grouping the active wave breaking events in space and time (note that up to this stage bright pixels are only grouped in space). Time-discrete wave breaking events had their bounding ellipse region filled in pixel space (ij) and were stacked in time (t), resulting in a point-cloud-like three-dimensional (ijt) structure. The DBSCAN34 algorithm was then used to cluster these data and obtain uniquely labelled clusters. The two parameters for the DBSCAN algorithm, that is, the minimum number of samples in each cluster (\(n_{min}\)) and the maximum distance allowed between two samples (eps), were set equal to the minimum number of points inside a fitted ellipse among all ellipses (\(n_{min}\)) and to the sampling frequency in Hertz (eps). These values for \(n_{min}\) and eps were kept constant across the analyzed datasets. Note that this final step can be replaced by any other density-based clustering algorithm or by more sophisticated algorithms such as SORT52 (which is also available in the project’s code repository but was not used here).
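A short sketch of this space-time clustering step, assuming a NumPy array of active-breaking pixel coordinates with columns (i, j, t):

```python
from sklearn.cluster import DBSCAN

def cluster_events(points_ijt, n_min, fps):
    """Group active-breaking pixels (i, j, t) into unique breaking events."""
    # eps is set to the sampling frequency in Hertz and n_min to the
    # smallest pixel count found inside any fitted ellipse (see text).
    labels = DBSCAN(eps=fps, min_samples=n_min).fit_predict(points_ijt)
    return labels  # -1 marks noise; other integers label unique events
```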

Note that up to the clustering step, all calculations were done in the pixel domain and “time” was obtained from sequential frame numbers. To convert between pixel and metric coordinates, the grids constructed using the stereo-video dataset available from Guimarães et al.36 were used in the present study. If stereo video data are not available, such conversions can be done by knowing the camera position in real-world (that is, metric) coordinates, which is usually obtained using surveyed ground control points53,54. Conversion from sequential frame numbers to time (in seconds) can be done by knowing the sample rate (in frames per second, that is, Hertz) of the data. The total amount of computational time required to process 20 min of raw video recorded at 10Hz with an image size of 5 megapixels is approximately two hours on a modern six-core computer (assuming that a pre-trained neural network is available). Much shorter processing times are achievable for smaller image sizes and higher numbers of computing threads.
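For the ground-control-point case, a minimal sketch of the pixel-to-metric conversion (assuming a planar projection; the control point coordinates below are entirely hypothetical) could be:

```python
import cv2
import numpy as np

# Hypothetical surveyed ground control points: pixel (u, v) coordinates
# and their corresponding real-world metric (x, y) coordinates.
uv = np.array([[120, 40], [870, 55], [900, 600], [95, 640]], np.float32)
xy = np.array([[0, 0], [50, 0], [50, 35], [0, 35]], np.float32)

H, _ = cv2.findHomography(uv, xy)  # planar projective mapping

# Project a pixel coordinate to metric coordinates.
metric = cv2.perspectiveTransform(np.array([[[500., 300.]]], np.float32), H)
```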

Evaluation metrics

Due to the imbalanced nature of the dataset, the classification accuracy score in isolation is not an appropriate metric to evaluate the present test dataset. For instance, a classifier that guesses that all labels are negative (that is, 0) would automatically obtain a high score (\(\approx\) 90%). To properly assess the performance of the classifier, more robust metrics were used. These metrics are defined as follows (a short sketch computing them is shown after the list):

  • True Positives (TP) and True Negatives (TN) are samples that were correctly classified. False Positives (FP), or Type I errors, are negative samples that were incorrectly classified as positive, and False Negatives (FN), or Type II errors, are positive samples that were incorrectly classified as negative.

  • Accuracy is the percentage of examples correctly classified considering both classes, that is, Accuracy=\(\frac{TP+TN}{T+N}\), where T and N are the total number of positive and negative samples, respectively.

  • Precision is the percentage of predicted positives that were correctly classified, that is, \(\hbox {Precision}=\frac{TP}{TP+FP}\).

  • Recall is the percentage of actual positives that were correctly classified, that is, \(\hbox {Recall}=\frac{TP}{TP+FN}\).

  • Area under the curve (AUC) is the area defined by plotting the FP rate against the TP rate (also referred to as receiver operating characteristic curve). This metric indicates the probability of a classifier ranking a random positive sample higher than a random negative sample.
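A minimal sketch computing these metrics with scikit-learn, assuming arrays y_true (ground-truth labels, 0 or 1) and y_prob (predicted probabilities from the sigmoid output) are available:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, roc_auc_score)

y_pred = (y_prob >= 0.5).astype(int)         # sigmoid threshold of 0.5
accuracy = accuracy_score(y_true, y_pred)    # (TP + TN) / (T + N)
precision = precision_score(y_true, y_pred)  # TP / (TP + FP)
recall = recall_score(y_true, y_pred)        # TP / (TP + FN)
auc = roc_auc_score(y_true, y_prob)          # AUC uses raw probabilities
```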

All metrics described above were monitored during the training process and were individually assessed to rank model performance. The training curves shown in Fig. 2a were used to assess when overfitting, that is, decreases in the loss function value for the training dataset that were not reflected in the validation dataset, started to occur. Training epochs after overfitting started were discarded. Here we favor the AUC curves shown in Fig. 2b to identify better performing models because AUC is a more robust metric than the accuracy score and, at the same time, evolves smoothly with training epochs. Finally, a confusion matrix (or table of confusion), as shown in Fig. 2c, was plotted for each model to assess Type I and Type II errors.

Results

Classifier performance

From the analysis of all training curves, confusion matrices, and Table 2, the best performing backbone architecture during training was ResNet50V2 by a considerable margin (\(AUC=0.989\)). These results, however, did not translate to the validation dataset (\(AUC=0.873\)). Considering only the validation data, VGG16 was the best performing backbone with \(AUC=0.946\). Considering only the test dataset, the best performing model was also VGG16 (\(AUC=0.855\)). Overall, VGG16 was selected as the best performing model and the results presented in the next sections use this backbone. Other evaluation metrics such as the accuracy score, precision, and recall closely followed the evolution of AUC with training epochs (compare Fig. 2a and b, for example). In general, as the loss value decreased, the number of false positives decreased, which made precision and recall increase. This behavior was consistent for all models. Given that different architectures may perform better for other datasets and that further optimization could change the model ranking presented here, all pre-trained models and their training metrics are made available in the project’s code repository. Finally, it is worth mentioning that larger models (for example, VGG19, ResNet152 and EfficientNetB7) could achieve better results, but this was not attempted here due to hardware limitations (that is, these models required more memory than what was available on the NVIDIA GTX 1080 GPU used in this study).

Figure 2
figure 2

Examples of training curves and confusion matrix for the best overall performing model (VGG16). (a) Loss function value for training and validation data. (b) AUC value for training and validation data. In (a) and (b) the hatched area indicates the epochs after which the model started to overfit, the thick colored lines show smoothed loss or AUC values (averaged every 10 epochs), and the transparent lines show raw loss or AUC values. (c) Confusion matrix calculated using test data only.

Table 2 Evaluation metrics for all tested backbone architectures.

Test case with real-world data

Figure 3 shows the results of applying the best performing model architecture (VGG16) to La Jument (2018/01/03 09:39) and Black Sea (2011/10/04 09:38) data. Visual inspection of these results confirmed the ability of the neural network to correctly classify the naïvely identified wave breaking candidates and to keep only instances that represented active wave breaking. Moreover, the same neural network was able to correctly classify active wave breaking events both for the rogue waves seen at La Jument29 and for the small wind-generated breakers seen in the Black Sea data. This result highlights the ability of the neural network to generalize well across these very different datasets, which is difficult to achieve. From the analysis of the training curves, the averaged classification error (accounting for the imbalance in the data) should be of the order of \(\approx 10\%\), a figure which, to the authors’ knowledge, has not been reported for other wave breaking detection methods. We strongly recommend that future research report active wave breaking detection errors when such data are used to assess or further develop models that depend on wave breaking data.

Figure 3
figure 3

Example of the application of the method. (a) Results of the naïve wave breaking detection (thresholding + DBSCAN) for La Jument data (03/01/2018 09:39). Note the large amount of passive foam being detected as active wave breaking. (b) Results of active wave breaking detection using VGG16 as backbone for La Jument data (03/01/2018 09:39). Note the significant reduction in the amount of passive foam being detected. In both plots, the number of clusters refers to the number of clusters identified by DBSCAN. The red dashed rectangle indicates the region of interest which, in these examples, was the same as the stereo-video reconstruction area. (c) Results of the naïve wave breaking detection (thresholding + DBSCAN) for Black Sea data (04/10/2011 09:38). (d) Results of active wave breaking detection using VGG16 as backbone for Black Sea data (04/10/2011 09:38). Animations of these results are available at https://github.com/caiostringari/deepwaves. In this particular example, the image was subdivided into blocks of \(256\times 256\) pixels for processing. Note that nearly identical results were obtained using architectures other than VGG16 to classify these data.

Comparison with Mironov and Dulov (2008)

To highlight the improvements obtained by the present method, this section presents a comparison between Mironov and Dulov’s (2008)30 automatic active wave breaking detection method and the present method. To perform this task, data from the 2013 Black Sea experiments in Table 1, which had previously been classified and investigated in detail by Guimarães55, were used. All the active breaking events (that is, considering all 15 min of data) detected using Mironov and Dulov’s (2008)30 method were manually classified as true or false and compared to the labels predicted by our method. To the authors’ knowledge, these are the only currently available data that have been classified by both methods as well as manually. Figure 4 shows the result of the comparison between models. On average, our method has a relative error \(\approx 50\%\) lower than Mironov and Dulov’s (2008) method, with an average absolute reduction in error of \(\approx 15\%\). The results in Fig. 4 are also consistent with the results in Table 2, which showed that our model had errors in the order of \(\approx 15\%\) when considering the validation and test datasets. Note that all tested neural network architectures performed very similarly, with only a slight advantage for InceptionResNetV2. It is also worth mentioning that Mironov and Dulov’s (2008) method was designed and optimized to work specifically with data from the Black Sea and it is not guaranteed that it will generalize to other datasets. Our method, on the contrary, has been shown to generalize well across very distinct datasets (see Fig. 3, for example) and, with further optimization, could achieve even better performance.

Figure 4
figure 4

Comparison between averaged classification error for active wave breaking detection between Mironov and Dulov’s (2008)30 method and all the neural network architectures implemented here. The error bars represent one standard deviation from the mean. The data used for this comparison are from the 2013 Black Sea experiments described in Table 1.

Wave breaking statistics

In this section we briefly present examples of wave breaking statistics that can be directly derived from the classified wave breaking data. For brevity, the analysis is limited to data from the Black Sea because our classifier performed best at this location (classification errors \(< 10\%\) considering both the 2011 and 2013 experiments). Five quantities are analysed: the wave breaking duration (\(T_{br}\)), the wave breaking area (\(A_{br}\)), the major (a) and minor (b) axes of the fitted ellipses (representative of the wave breaking lengths at their maximum during the active part of the breaking process), and Phillips’ distribution \(\Lambda (c)dc\). These quantities were obtained directly from the space-time clustered wave breaking events, with the exception of the cumulative wave breaking area (\(A_{br}\)), which was calculated by projecting the pixels clustered in the first step of the method to metric coordinates. The results of these analyses are shown in Fig. 5.

The wave breaking duration (\(T_{br}\), normalized by the wave peak period \(T_p\); Fig. 5a) roughly followed a shifted Gamma probability density function (PDF) and had a mean value of 0.12 and a most frequent value (mode) of 0.13. This result shows that the active part of the wave breaking process happens very quickly. The wave breaking area (\(A_{br}\), Fig. 5b) closely followed a Pareto distribution, which indicates that large wave breaking events are relatively rare in the data. The ratio between the major and minor axes of the fitted ellipses (a/b, Fig. 5c) followed a Beta PDF and had a mean of 2.5 and mode of 1.9, which indicates that the ellipses’ major axis is approximately double the size of the minor axis. Assuming a negligible wave front angle, the wave breaking area scaling relation from Duncan56 (\(A_{br}/b^{2}\), Fig. 5d) also followed a Beta PDF and had a mean of 0.1 and mode of 0.08, which is consistent with the previously reported value56 (\(0.1\pm 0.01\)). The wave breaking area (Fig. 5e) showed a quadratic increase with wave breaking event duration, which is expected but does not seem to have been directly shown before. Finally, Fig. 5f shows the \(\Lambda (c)dc\) distributions obtained using our method, considering that the ellipse major axis is representative of the wave breaking length. The observed distributions greatly deviated from the theoretical \(c^{-6}\) relation12. Note that all PDF fits shown here were statistically significant at the 95% confidence level using the two-tailed Kolmogorov–Smirnov test. See Table 3 for the parameters of the PDFs presented in this section and the Discussion section for the contextualization of these results and their possible implications for future research.

Figure 5
figure 5

Examples of statistical properties of breaking waves that can be directly obtained from the proposed method. (a) Probability distribution of the wave breaking duration (\(T_{br}\)) normalized by the wave peak period (\(T_p\)). The blue line shows the Gamma fit to the data, the purple dashed line shows the mean value (0.13) and the orange dashed line shows the mode value (0.12). (b) Probability distribution of the wave breaking area (\(A_{br}\)) normalized by wavelength (\(\frac{g}{2\pi }T_p^2\)). The blue line shows the Pareto fit to the data. (c) Probability distribution of the ratio between major and minor axes (a/b). The blue line shows the Beta fit to the data, the purple dashed line shows the mean value (2.4) and the orange dashed line shows the mode value (1.8). (d) Probability distribution of the wave breaking area scaling parameter (\(A_{br}/b^2\)). The blue line shows the Beta fit to the data, the purple dashed line shows the mean value (0.1), the orange dashed line shows the mode value (0.08), and the red line shows Duncan’s constant value (0.11). (e) Evolution of the wave breaking area over time. The black markers show the observed values, the error bars show values binned at 0.15 s intervals, the blue line shows the quadratic regression to the data, the blue swath shows the 95% confidence interval for the regression, and \(r^2\) is the coefficient of determination. (f) Phillips12 \(\Lambda (c)dc\) distributions assuming that the wave breaking speed is the same as the phase speed of the carrier wave (that is, \(c=c_{br}\)) grouped by wind speed. The black dashed line shows the \(c^{-6}\) theoretical decay, the colored markers show the average crest length binned at \(0.1ms^{-1}\) wave speed intervals, and the colored lines show a running average with a window size of 10 bins.

Table 3 Fitted parameters for the distributions seen in Fig. 5.

Discussion

We have presented a new method to detect active wave breaking in video imagery data that is robust and easily transferable to future studies. Overall, VGG16 was the best performing architecture, which is a surprising result given that VGG is a considerably older architecture that has been superseded by more recent designs such as ResNets and EfficientNets43. Also surprisingly, EfficientNet was one of the worst performers on the test dataset despite the state-of-the-art results that this architecture has achieved in recent years43. One explanation could be that, because VGG16 is the model with the highest number of parameters (despite being the shallowest network), it adapted better to the characteristics of the present dataset. Another explanation could be that EfficientNet is currently overfit on the ImageNet58 dataset that was used to train this network, hence its high reported classification scores. Another aspect to take into consideration is the speed at which the models can predict on new data (inference). While VGG16 was the best performing model, its size slows down inference. In this regard, MobileNetV2 offers real-time inference speed at 10Hz for small image sizes (for example, \(512 \times 512\hbox {px}\)). Consequently, a model based on MobileNetV2 could be used to detect active wave breaking in an operational fashion, for example, on offshore floating wind turbines that are susceptible to wave breaking impacts, where it could be used to adjust anchor and cable settings in real time.

From the application of the method to real-world data, it was visually observed that small active wave breaking events were not always detected, particularly for the Black Sea data. There are two possible explanations for this error. The simplest explanation is that this type of event is under-represented in the training dataset. The solution for this issue consists of adding more data representative of these instances to the training dataset. The second possibility is that the spatial resolution of the feature maps becomes too small in the deeper layers of the network, which makes it impossible for the network to learn these events (see below for further discussion). A solution for this issue could be to increase the input image size, but this was not attempted here due to hardware constraints (that is, memory limitations, as discussed above).

Neural networks have historically been seen as black boxes in which only the final classification outputs are relevant. There has been, however, an increase in interest in understanding how neural networks learn. One technique that is of particular interest for the present paper is Gradient-weighted Class Activation Mapping (Grad-CAM)59. Briefly, this technique shows which regions of a convolutional layer are most important for the results obtained by the classifier. Figure 6 shows the results of Grad-CAM applied to examples from all unique locations in Table 1, considering VGG16’s last convolutional layer. When considering only actively breaking waves (Fig. 6a to d), it is evident that VGG16 closely mimicked how a human would classify these data, that is, it directly searched for the regions of the image that corresponded to active wave breaking. In the case of passive foam (Fig. 6e to h), VGG16 seemed to use larger portions of the image but at the same time focused on the flocculent foam seen in the images, as a human classifier would do. In general, these results show that our model truly learned how to classify the images and is not merely guessing the labels.
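For reference, a minimal Grad-CAM sketch for a Keras model with a single sigmoid output, along the lines used here; the layer name "block5_conv3" assumes the VGG16 backbone, and the normalization constant is illustrative.

```python
import tensorflow as tf

def grad_cam(model, image, layer_name="block5_conv3"):
    """Class-activation heatmap for one image of shape (height, width, 3)."""
    grad_model = tf.keras.Model(
        model.input, [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, prediction = grad_model(image[None, ...])
        score = prediction[:, 0]  # single sigmoid output unit
    grads = tape.gradient(score, conv_out)
    # Channel importance weights: spatial average of the gradients.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum over channels, keeping only positive influence.
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalize to [0, 1]
```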

Figure 6
figure 6

Results of Grad-CAM59 for all unique experiment locations described in Table 1 applied to actively breaking waves (top row) and to passive foam (bottom row). (a) Actively breaking wave example recorded at the Adriatic Sea (2015/03/05 10:35). (b) Actively breaking wave example recorded at the Black Sea (2011/10/04 11:07). (c) Actively breaking wave example recorded at La Jument (2018/01/03 09:39). (d) Actively breaking wave example recorded at the Yellow Sea (2017/05/13 05:00). (e) Passive foam example recorded at the Adriatic Sea (2015/03/05 10:35). (f) Passive foam example recorded at the Black Sea (2011/10/04 11:07). (g) Passive foam example recorded at La Jument (2018/01/03 09:39). (h) Passive foam example recorded at the Yellow Sea (2017/05/13 05:00). In all panels, the color scale indicates the weights of the class activation map, with brighter colors showing the regions of the image which the neural network used to classify each particular sample.

The promising results presented here indicate that the current method should be extended to an object detection framework. Such a framework would eliminate the need for the image thresholding and DBSCAN steps. This could be done by using a strongly-supervised architecture such as UNet60, or by using a weakly-supervised method derived from Grad-CAM, for example. As a final recommendation regarding the machine learning aspect of this paper, we strongly encourage future researchers to add samples that match their specific research needs to the training dataset instead of blindly applying the provided pre-trained models.

We have also presented examples of wave breaking statistics that can be obtained using the proposed method. In general, the patterns observed here agreed with previously reported scaling factors that support the idea of wave breaking self-similarity. For example, Fig. 5d directly showed that the scaling parameter \(A_{br}/b^2\) approaches the constant 0.1 value from Duncan’s56 laboratory experiments. Another variable that showed signs of self-similar behavior was the wave breaking area (\(A_{br}\)), which was very well described by a Pareto distribution and presented a steady quadratic increase with wave breaking duration (\(T_{br}\)). Extensive research11,15 has been grounded on the assumption that wave breaking is self-similar, but inconsistencies with this approach have been reported before14. Contrary to other studies11,31, however, the \(\Lambda (c)\) distributions obtained here did not closely match the theoretical \(c^{-6}\) decay expected for a sea-state in equilibrium. As reported before13, these differences may be due to the fact that we only considered actively breaking waves in our analysis, whereas other studies seem to track the speeds of both actively breaking waves and passive foam31,32. Another possibility is that other phenomena not accounted for by Phillips’12 theory (for example, wave modulation61) play an important role in wave breaking. A future publication that investigates the mechanisms related to wave breaking kinematics using the method and data obtained here will soon follow.

Conclusion

We described a novel method to detect and classify actively breaking waves in video imagery data. Our method achieved promising results when assessed using several different metrics. Further, by analyzing the deeper layers of our neural network, we showed that the model mimicked how a human classifier would perform a similar classification task. As an application of the method, we presented wave breaking statistics and breaking wave crest length distributions. Our method can thus be useful for the investigation of several ocean-atmosphere interaction processes and should, in the future, lead to advancements in operational wave forecast models, gas-exchange models, and general safety at sea. Finally, we strongly recommend that future research focus on standardized methods to study wave breaking so that a consistent dataset can be generated and used freely and unambiguously by the community.