Atmospheric and Oceanic Sciences

Remote Sensing · Feb 05, 2026
Accurate crop type mapping remains challenging in regions where persistent cloud cover limits the availability of optical imagery. Multi-temporal dual-polarization Sentinel-1 SAR data offer an all-weather alternative, yet existing approaches often underutilize polarization information and rely on single-scale temporal aggregation. This study proposes PTU-Net, a polarization–temporal U-Net designed specifically for pixel-wise crop segmentation from SAR time series. The model introduces a Polarization Channel Attention module to construct physically meaningful VV/VH combinations and adaptively enhance their contributions. It also incorporates a Multi-Scale Temporal Self-Attention mechanism to model pixel-level backscatter trajectories across multiple spatial resolutions. Using a 12-date Sentinel-1 stack over Kings County, California, and high-quality crop-type reference labels, the model was trained and evaluated under a spatially independent split. Results show that PTU-Net outperforms GRU, ConvLSTM, 3D U-Net, and U-Net–ConvLSTM baselines, achieving the highest overall accuracy and mean IoU among all tested models. Ablation studies confirm that both polarization enhancement and multi-scale temporal modeling contribute substantially to performance gains. These findings demonstrate that integrating polarization-aware feature construction with scale-adaptive temporal reasoning can substantially improve the effectiveness of SAR-based crop mapping, offering a promising direction for operational agricultural monitoring.
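The Polarization Channel Attention idea — constructing physically meaningful VV/VH combinations and adaptively reweighting them — can be illustrated with a minimal NumPy sketch. The channel set (VV, VH, cross-ratio, span) and the softmax squeeze-and-excitation weighting below are plausible choices of ours for illustration, not the paper's exact module:

```python
import numpy as np

def polarization_channels(vv, vh, eps=1e-6):
    """Stack physically motivated dual-pol combinations (linear power units):
    VV, VH, cross-ratio VH/VV, and span VV+VH."""
    ratio = vh / (vv + eps)
    span = vv + vh
    return np.stack([vv, vh, ratio, span], axis=0)   # (C=4, T, H, W)

def channel_attention(feats):
    """Squeeze-and-excitation-style reweighting: global average per channel,
    softmax to attention weights, then rescale each channel."""
    squeezed = feats.mean(axis=(1, 2, 3))            # (C,)
    w = np.exp(squeezed - squeezed.max())
    w /= w.sum()                                     # softmax weights
    return feats * w[:, None, None, None], w

# Toy 12-date dual-pol stack, mirroring the paper's Sentinel-1 setup
rng = np.random.default_rng(0)
vv = rng.uniform(0.01, 0.5, size=(12, 8, 8))
vh = rng.uniform(0.001, 0.1, size=(12, 8, 8))
feats, weights = channel_attention(polarization_channels(vv, vh))
print(feats.shape, weights.round(3))
```

In the real model the attention weights would be learned rather than derived directly from channel means, but the data flow — derived polarization channels in, rescaled channels out — is the same.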
Remote Sensing · Feb 05, 2026
Airplanes are among the most closely studied objects in remote sensing images, as a dynamic and critical component of air traffic. Accurately identifying and monitoring airplane behaviors is crucial for effective air traffic management. However, existing methods for interpreting fine-grained airplanes in remote sensing data depend heavily on large annotated datasets, which are time-consuming to build and prone to errors due to the detailed nature of labeling individual points. In this paper, we introduce Text2AIRS, a novel method that generates fine-grained, realistic Airplane Images in Remote Sensing from textual descriptions. Text2AIRS significantly simplifies the generation of diverse aircraft types, requiring only limited text while allowing extensive variability in the generated images. Specifically, Text2AIRS is the first method to incorporate ground sample distance into the text-to-image stable diffusion model, at both the data and feature levels. Extensive experiments demonstrate that Text2AIRS surpasses the state of the art by a large margin on the Fair1M benchmark dataset. Furthermore, using the fine-grained airplane images generated by Text2AIRS, an existing SOTA object detector achieves a 6.12% performance improvement, showing the practical impact of our approach.
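Ground-sample-distance conditioning could plausibly act at the data level (resampling imagery to a target GSD) and at the text level (exposing the GSD in the prompt). The sketch below is a hypothetical illustration of both ideas; `resample_to_gsd` and `gsd_prompt` are our own illustrative helpers, not the paper's implementation:

```python
import numpy as np

def resample_to_gsd(image, native_gsd, target_gsd):
    """Nearest-neighbour resampling so one pixel covers `target_gsd` metres.
    A coarser target GSD (larger value) yields a smaller image."""
    scale = native_gsd / target_gsd
    h, w = image.shape[:2]
    new_h, new_w = max(1, round(h * scale)), max(1, round(w * scale))
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    return image[np.ix_(rows, cols)]

def gsd_prompt(text, gsd):
    """Text-level hint: prepend the ground sample distance to the prompt."""
    return f"[GSD {gsd:.1f} m] {text}"

img = np.arange(64 * 64).reshape(64, 64)     # toy 64x64 tile at 0.5 m GSD
coarse = resample_to_gsd(img, native_gsd=0.5, target_gsd=1.0)
print(coarse.shape, gsd_prompt("a parked wide-body airliner", 1.0))
```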
Remote Sensing · Feb 05, 2026
Remote sensing image captioning (RSIC) aims to generate natural language descriptions for a given remote sensing image, which requires a comprehensive, in-depth understanding of image content and its summarization in sentences. Most RSIC methods extract visual features successfully, but their representation of spatial or fused features fails to fully account for cross-modal differences between remote sensing images and texts, resulting in unsatisfactory performance. Thus, we propose a novel cross-modal spatial–semantic alignment (CSSA) framework for the RSIC task, consisting of a multi-branch cross-modal contrastive learning (MCCL) mechanism and a dynamic geometry Transformer (DG-former) module. Specifically, compared to discrete text, remote sensing images are noisy, which interferes with the extraction of valid visual features. We therefore present the MCCL mechanism to learn consistent representations between image and text, achieving cross-modal semantic alignment. In addition, most objects in remote sensing images are scattered and sparse due to the overhead view. The standard Transformer, however, mines relationships among objects without considering their geometry, leading to suboptimal capture of spatial structure. To address this, the DG-former is designed to realize spatial alignment by introducing geometry information. We conduct experiments on three publicly available datasets (Sydney-Captions, UCM-Captions and RSICD), and the superior results demonstrate the framework's effectiveness.
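Cross-modal contrastive alignment of image and text embeddings is commonly formulated as a symmetric InfoNCE objective, where matched pairs in a batch are positives and all other pairings are negatives. The sketch below shows that generic formulation in NumPy, not the paper's specific multi-branch MCCL loss:

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE: matched image-text pairs (same row index) are
    positives; all other pairings in the batch are negatives."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature               # (B, B) similarity matrix

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)         # stable log-softmax
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))               # diagonal = positives

    return 0.5 * (xent(logits) + xent(logits.T))     # image->text + text->image

rng = np.random.default_rng(1)
anchors = rng.normal(size=(4, 16))
# Nearly matched pairs give a low loss; deliberately mismatched rows a high one.
aligned = info_nce(anchors, anchors + 0.01 * rng.normal(size=(4, 16)))
shuffled = info_nce(anchors, np.roll(anchors, 1, axis=0))
print(f"aligned={aligned:.3f}  shuffled={shuffled:.3f}")
```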
Remote Sensing · Feb 05, 2026
Hyperspectral anomaly detection (HAD) aims to identify pixels that significantly differ from the background without prior knowledge. While deep learning-based reconstruction methods have shown promise, they often suffer from limited feature representation, inefficient training cycles, and sensitivity to imbalanced data distributions. To address these challenges, this paper proposes a novel contrastive–transfer-synergized dual-stream transformer for hyperspectral anomaly detection (CTDST-HAD). The framework integrates contrastive learning and transfer learning within a dual-stream architecture, comprising a spatial stream and a spectral stream, which are pre-trained separately and synergistically fine-tuned. Specifically, the spatial stream leverages general visual and hyperspectral-view datasets with adaptive elastic weight consolidation (EWC) to mitigate catastrophic forgetting. The spectral stream employs a variational autoencoder (VAE) enhanced with the RossThick–LiSparseR (R-L) physical-kernel-driven model for spectrally realistic data augmentation. During fine-tuning, spatial and spectral features are fused for pixel-level anomaly detection, with focal loss addressing class imbalance. Extensive experiments on nine real hyperspectral datasets demonstrate that CTDST-HAD outperforms state-of-the-art methods in detection accuracy and efficiency, particularly in complex backgrounds, while maintaining competitive inference speed.
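The focal loss used during fine-tuning down-weights easy, abundant background pixels so that rare anomaly pixels keep a meaningful gradient. A minimal binary-case sketch (the standard focal loss form, not necessarily the paper's exact configuration):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights easy examples by (1 - p_t)^gamma so the
    rare anomaly class is not swamped by abundant background pixels."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(y == 1, p, 1 - p)                  # prob of the true class
    at = np.where(y == 1, alpha, 1 - alpha)          # class-balance factor
    return np.mean(-at * (1 - pt) ** gamma * np.log(pt))

# A confident, correct background pixel contributes almost nothing...
easy = focal_loss(np.array([0.01]), np.array([0]))
# ...while a misclassified anomaly pixel keeps a large loss signal.
hard = focal_loss(np.array([0.1]), np.array([1]))
print(f"easy={easy:.6f}  hard={hard:.4f}")
```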
Remote Sensing · Feb 05, 2026
High-resolution underwater mapping is fundamental to the sustainable development of the blue economy, supporting offshore energy expansion, marine habitat protection, and the monitoring of both living and non-living resources. This work presents a pose-graph SLAM and calibration framework specifically designed for 3D profiling sonars, such as the Coda Octopus Echoscope 3D. The system integrates a probabilistic scan matching method (3DupIC) for direct registration of 3D sonar scans, enabling accurate trajectory and map estimation even under degraded dead reckoning conditions. Unlike other bathymetric SLAM methods that rely on submaps and assume short-term localization accuracy, the proposed approach performs direct scan-to-scan registration, removing this dependency. The factor graph is extended to represent the sonar extrinsic parameters, allowing the sonar-to-body transformation to be refined jointly with trajectory optimization. Experimental validation on a challenging real-world dataset demonstrates outstanding localization and mapping performance. The use of refined extrinsic parameters further improves both accuracy and map consistency, confirming the effectiveness of the proposed joint SLAM and calibration approach for robust and consistent underwater mapping.
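The joint calibration works because the sonar-to-body extrinsic sits inside every scan projection, so refining it alongside the trajectory improves the whole map. A toy sketch of that transform chain (planar rotations only, with illustrative poses of our own choosing):

```python
import numpy as np

def se3_z(angle_deg, t):
    """Homogeneous 4x4 transform: rotation about z (degrees) plus translation."""
    a = np.deg2rad(angle_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0],
                 [np.sin(a),  np.cos(a), 0],
                 [0,          0,         1]]
    T[:3, 3] = t
    return T

def scan_to_world(points, T_world_body, T_body_sonar):
    """Map sonar-frame points into the world frame; the body-to-sonar
    extrinsic T_body_sonar is the quantity the factor graph refines."""
    pts_h = np.c_[points, np.ones(len(points))]      # homogeneous coordinates
    return (T_world_body @ T_body_sonar @ pts_h.T).T[:, :3]

scan = np.array([[10.0, 0.0, -2.0]])                 # one sonar return (m)
T_bs = se3_z(5.0, [0.5, 0.0, -0.3])                  # nominal extrinsics
T_wb = se3_z(90.0, [100.0, 50.0, 0.0])               # vehicle pose
print(scan_to_world(scan, T_wb, T_bs))
```

A small error in `T_bs` is applied to every point of every scan, which is why estimating it jointly with the trajectory, rather than fixing a nominal value, tightens map consistency.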
Quarterly Journal of the Royal Meteorological Society · Feb 05, 2026
Using three-hourly ERA5 and ERA5-Land reanalyses from 1979 to 2024, we determine the characteristics of wet spells in Australia in summer (October–March) and winter (April–September), focusing on northern and southeast Australia. Wet spells in summer account for up to 90% of seasonal precipitation in northern Australia, with a frequency of 20%–30% and a mean duration of 8–10 hours. In winter, wet spells of similar frequency contribute up to 96% of precipitation in southeast Australia, with a mean duration of 12–17 hours. Wet spells lasting six hours to one day account for the largest fraction (50%–60%) of seasonal precipitation in summer, whereas 12-hour to two-day wet spells contribute the most (50%–70%) in winter. Wet spells longer than 12 hours account for nearly 90% of the three-hourly extreme precipitation events, with a mean intensity of 3–4 mm·(3 h)⁻¹. Shorter spells are associated with light showers with a mean intensity of 0.5–2 mm·(3 h)⁻¹ and contribute 10%–30% of extreme events. The increase in seasonal precipitation during wet years is primarily due to an increase in the frequency of wet spells in northern Australia. In contrast, increases in both wet-spell frequency and intensity are important in southeast Australia. In both regions, wet spells longer than 12 hours contribute the most to the change in precipitation between wet and dry seasons. The synoptic environments for subdaily wet spells in northern Australia are tropical convection and monsoon low-pressure systems. Longer wet spells show patterns similar to active monsoon bursts with a well-developed monsoon trough. In southeast Australia, the synoptic patterns for subdaily spells resemble extratropical lows and fronts, while longer wet spells are mostly associated with cut-off lows.
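The spell-based decomposition — runs of consecutive wet 3-hourly steps, each characterized by a duration and a precipitation total — can be sketched as follows. The 0.1 mm wet threshold and the toy series are our own assumptions, not the study's definitions:

```python
import numpy as np

def wet_spells(precip, wet_thresh=0.1, step_h=3):
    """Find runs of consecutive wet 3-hourly steps; return a list of
    (duration_hours, total_mm) per spell."""
    wet = precip >= wet_thresh
    # Pad with dry steps so spells touching either end are closed off,
    # then locate run boundaries from the 0->1 and 1->0 transitions.
    edges = np.diff(np.r_[0, wet.astype(int), 0])
    starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
    return [(int((e - s) * step_h), float(precip[s:e].sum()))
            for s, e in zip(starts, ends)]

# Toy 3-hourly series (mm): a 6-hour shower, a dry stretch, a 12-hour event
precip = np.array([0, 1.2, 2.0, 0, 0, 0, 0, 0, 0.5, 3.1, 4.0, 2.2, 0])
spells = wet_spells(precip)
total = precip.sum()
for dur_h, mm in spells:
    print(f"{dur_h:>3} h spell: {mm:.1f} mm ({100 * mm / total:.0f}% of total)")
```

Grouping spell totals by duration class (e.g. six hours to one day versus longer than 12 hours) then yields contribution fractions of the kind reported in the study.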