Speaker
Description
Gross Primary Production (GPP), the gross uptake of CO2 by vegetation, represents the largest carbon flux in terrestrial ecosystems and is fundamental to understanding global carbon dynamics. While Eddy Covariance (EC) towers provide direct, high-frequency estimates of GPP, their global spatial coverage remains limited. In the absence of EC data, remote sensing (RS) techniques are typically employed, often relying on statistical models with limited capacity to capture temporal dependencies. Recent advances in Deep Learning (DL) offer new opportunities for predicting GPP from time series. However, the relative performance of Transformer-based models and Recurrent Neural Networks (RNNs) for GPP estimation remains underexplored. This study presents a comparative evaluation of two representative architectures: GPT-2, a Transformer model, and Long Short-Term Memory (LSTM), a widely used RNN variant. We assess each model’s predictive accuracy under both typical and extreme conditions. Our results show that while both models achieve comparable overall performance, LSTM excels under typical conditions, whereas GPT-2 demonstrates superior accuracy during extreme events, including stress-induced downregulation and peak productivity periods. Furthermore, we examine the role of memory retention in temporal modeling, revealing that LSTM requires shorter context windows than GPT-2 to reach comparable performance.
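As a minimal sketch of the two architectures being compared, not the study's actual code, the snippet below defines an LSTM regressor and a GPT-2-style causally masked Transformer, each mapping a window of driver variables to a single GPP estimate. It assumes PyTorch; the feature count, context length, and layer sizes are hypothetical placeholders, and the Transformer uses a generic causal encoder stack rather than pretrained GPT-2 weights.

```python
# Hypothetical sketch (not the authors' implementation): sequence-to-one
# GPP regression with an LSTM vs. a causally masked Transformer.
import torch
import torch.nn as nn

N_FEATURES = 8    # hypothetical number of driver variables per time step
CONTEXT_LEN = 64  # hypothetical context window (time steps of history)

class LSTMRegressor(nn.Module):
    def __init__(self, n_features: int, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # GPP estimate at the last time step

class CausalTransformerRegressor(nn.Module):
    """GPT-2-style stack: causal self-attention over the input window."""
    def __init__(self, n_features: int, d_model: int = 128, n_layers: int = 4):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        self.pos = nn.Parameter(torch.zeros(1, CONTEXT_LEN, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, nhead=4, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):             # x: (batch, time, features)
        t = x.size(1)
        # Additive causal mask: -inf above the diagonal blocks future steps.
        mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        h = self.encoder(self.embed(x) + self.pos[:, :t], mask=mask)
        return self.head(h[:, -1])

# Smoke test on random drivers: both models map one window to one GPP value.
x = torch.randn(32, CONTEXT_LEN, N_FEATURES)
print(LSTMRegressor(N_FEATURES)(x).shape)               # torch.Size([32, 1])
print(CausalTransformerRegressor(N_FEATURES)(x).shape)  # torch.Size([32, 1])
```

A memory-retention analysis like the one described in the abstract could then hold each architecture fixed and sweep CONTEXT_LEN, recording validation error per window length.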
| Status Group | Doctoral Researcher |
| --- | --- |
| Poster Presentation Option | Undecided/No preference |