Advances in dimensionality reduction
Duration: 10 hours
Trainers: N. Mankovich
Description: This course provides an overview of linear dimensionality reduction techniques and offers hands-on experience with NumPy and scikit-learn. It also explores the world of nonlinear dimensionality reduction, revealing hidden patterns in complex data.
Keywords: Linear algebra, PCA, LDA, Isomap, UMAP
Objectives
- Become familiar with the landscape of dimensionality reduction algorithms
- Implement the main algorithms in Python
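The linear workhorse of the course, PCA, can be sketched in a few lines of NumPy via the SVD of the centered data matrix (the data here is randomly generated for illustration):

```python
# Minimal PCA sketch via the SVD, assuming only NumPy (toy random data).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))              # 100 samples, 5 features
Xc = X - X.mean(axis=0)                    # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:2]                        # top-2 principal directions
Z = Xc @ components.T                      # low-dimensional projection
explained = s[:2] ** 2 / (s ** 2).sum()    # fraction of variance retained
```

In practice, `sklearn.decomposition.PCA` wraps this same computation; the nonlinear methods in the keywords (Isomap, UMAP) replace the linear projection with neighborhood-graph embeddings.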
Atmospheric radiative transfer models
Duration: 3 hours
Trainers: J. Vicent
Description: Atmospheric radiative transfer models (RTMs) are computer codes that describe the physical interaction of light with atmospheric constituents, helping us understand the radiation processes occurring in the Earth’s atmosphere. These models are widely used in Earth Observation science and technology, from the retrieval of atmospheric parameters to satellite data processing and numerical weather forecasting.
Objectives
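A minimal building block of any RTM is the Beer-Lambert law for direct transmittance along a slant path; a toy sketch (optical depths here are made-up values, not a real atmosphere):

```python
# Toy sketch of the Beer-Lambert law: direct transmittance T = exp(-tau / mu),
# with made-up optical depths tau and viewing geometry mu (not real atmospheric data).
import numpy as np

def transmittance(tau, mu=1.0):
    """Direct transmittance for optical depth tau and cosine of the
    viewing zenith angle mu (mu = 1 means nadir viewing)."""
    return np.exp(-np.asarray(tau) / mu)

tau = np.array([0.1, 0.5, 1.0])   # illustrative optical depths
T = transmittance(tau)            # thicker atmosphere -> lower transmittance
```

Full RTMs (e.g., for scattering and thermal emission) solve far richer equations, but they reduce to this exponential attenuation in the absorption-only, single-path limit.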
Causality course
Duration: 45 hours
Trainers: G. Varando, E. Díaz Salas and V. Sitokonstantinou
Description: Introduction to causal inference methods, robust prediction techniques, and causal discovery. The course covers the vocabulary, definitions, and basic concepts of causality, and the fundamentals behind standard methodologies in causal inference and causal discovery.
Keywords: Causality, Bayes, effect estimation, DAG, causal discovery, what-if
Objectives
- Understand the fundamental goals and problems of causal methods
- Become familiar with the vocabulary, definitions and basic concepts of causality
- Understand the fundamentals behind basic methodologies in causal inference and causal discovery
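A core idea of the course, confounding adjustment for effect estimation, can be illustrated on simulated data where the true causal effect is known (all variables below are synthetic):

```python
# Toy sketch of confounding: a naive group comparison is biased, while
# adjusting for the confounder z recovers the true effect (simulated data).
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
z = rng.normal(size=n)                            # confounder
t = (z + rng.normal(size=n) > 0).astype(float)    # treatment depends on z
y = 2.0 * t + 3.0 * z + rng.normal(size=n)        # true effect of t on y is 2

naive = y[t == 1].mean() - y[t == 0].mean()       # inflated by the z pathway

# Adjust for z with ordinary least squares on y ~ 1 + t + z.
X = np.column_stack([np.ones(n), t, z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = beta[1]                                # close to the true effect 2
```

The adjustment is valid here because z blocks the only backdoor path from t to y; deciding which variables to adjust for in general is exactly what DAGs and the causal vocabulary of the course formalize.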
Color vision and colorimetry
Duration: 30 hours
Trainers: J. Malo
Description: Color is a 5-dimensional perception that depends not only on the spectrum coming from an object, but also strongly on its spatio-temporal context. It is a powerful feature that allows humans to make reliable inferences about objects, and one worth understanding and mimicking in artificial vision. In this course, we derive the linear CIE tristimulus theory from its experimental color-matching foundations. We obtain the relations between the spectrum and the tristimulus vectors through the color matching functions, and introduce the chromatic coordinates, chromatic purity, and luminance. The phenomenology of color discrimination and adaptation reveals the limitations of the linear description and sets the foundations of color appearance models. Finally, we link these perceptual representations of color with the conventional representation of color in computers.
Objectives
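The tristimulus computation is just an integral of the stimulus spectrum against the color matching functions; a sketch with made-up Gaussian curves standing in for the CIE functions (not real colorimetric data):

```python
# Toy tristimulus integration: XYZ = integral of CMF(lambda) * S(lambda) d(lambda),
# with made-up Gaussians standing in for the CIE color matching functions.
import numpy as np

lam = np.arange(400.0, 701.0, 10.0)                # wavelengths in nm

def gauss(mu, sigma=40.0):
    return np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

cmf = np.stack([gauss(600), gauss(550), gauss(450)])  # toy x-bar, y-bar, z-bar
spectrum = np.ones_like(lam)                          # flat (equi-energy) stimulus
XYZ = cmf @ spectrum * 10.0                           # Riemann sum, 10 nm steps
xy = XYZ[:2] / XYZ.sum()                              # chromatic coordinates
```

With the tabulated CIE 1931 functions instead of the toy Gaussians, the same two lines of linear algebra yield standard XYZ values and chromaticity diagrams.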
Explainability in AI systems
Duration: 3 hours
Trainers: O. Pellicer
Description: An introduction to Explainable AI (XAI), from definitions and core concepts to state-of-the-art methods and their evaluation.
Keywords: explainability, interpretability, trustworthiness, transparency
Objectives
- Cover Explainable AI (XAI) broadly, from definitions and concepts to cutting-edge methods and evaluations
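One simple, model-agnostic XAI technique the course's scope includes is permutation importance; a minimal scikit-learn sketch on synthetic data (the dataset and model choice are illustrative):

```python
# Minimal sketch of a model-agnostic explanation: permutation importance
# measures how much shuffling each feature degrades performance (synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]   # most important first
```

Because the method only needs predictions, it applies to any fitted estimator, which is what makes it a convenient first tool before moving to gradient- or attention-based explanations.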
Google Earth Engine Introduction
Trainers: E. Izquierdo, J. Muñoz, A. Moreno and M. Sapena
Description: A short introduction to Google Earth Engine (GEE), covering data collection, custom function development, basic remote sensing preprocessing techniques, and the application of machine learning techniques to regression and classification tasks.
Keywords: Cloud computing, GEE, remote sensing, classification, regression, machine learning
Objectives
- Data collection
- Custom function development
- Basic remote sensing preprocessing techniques
- Application of machine learning techniques for regression/classification tasks
- Temporal segmentation methods
Human vision: facts, mechanistic models, and principled theories
Duration: 30 hours
Trainers: J. Malo
Description: In this course, I introduce the facts of color vision from radiometry and photometry, the perception of spatio-temporal textures, the mechanistic models and circuits that (to some extent) reproduce these facts, and the statistical theories that explain them.
Objectives
Hybrid modeling
Duration: 3 hours
Trainers: K. Cohrs
Description: Introduction to hybrid modeling and its particular challenges at small and large scales. Learn about the different flavors and dimensions in which physical knowledge can be integrated with machine learning.
Keywords: Physics-informed, neural networks, machine learning
Objectives
- Learn about the different flavors and dimensions in which physical knowledge can be integrated with machine learning.
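One of the simplest flavors of hybrid modeling is adding a physics residual to the data-fitting loss; a toy sketch fitting a decay rate under the constraint dy/dt = -k*y (the data and weighting are made up for illustration):

```python
# Toy hybrid loss: fit k so the model both matches noisy observations
# and satisfies the physical law dy/dt = -k*y (synthetic data, grid search).
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 2, 50)
y_obs = np.exp(-1.5 * t) + 0.01 * rng.normal(size=t.size)   # true k = 1.5

def hybrid_loss(k, w=1.0):
    y_model = np.exp(-k * t)
    data = np.mean((y_model - y_obs) ** 2)                  # data-fit term
    dydt = np.gradient(y_model, t)
    physics = np.mean((dydt + k * y_model) ** 2)            # ODE residual term
    return data + w * physics

ks = np.linspace(0.5, 3.0, 251)
k_hat = ks[np.argmin([hybrid_loss(k) for k in ks])]         # near 1.5
```

In physics-informed neural networks the same idea scales up: the exponential is replaced by a network, the grid search by gradient descent, and the ODE residual is evaluated by automatic differentiation.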
Hyperspectral image processing
Duration: 60 hours
Trainers: G. Camps-Valls
Description: We introduce the main concepts of hyperspectral image processing. We start with a gentle introduction to the field, the standard processing chain, and its current challenges. We then analyze the current state of the art in several topics: feature extraction, supervised classification, unmixing and abundance estimation, and retrieval of biophysical parameters. All the methods and techniques studied are reviewed both theoretically and through MATLAB exercises.
Objectives
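One of the course topics, linear unmixing with abundance estimation, fits this compact sketch; the course exercises are in MATLAB, but the same idea in Python reads (endmember spectra below are synthetic, not real materials):

```python
# Toy linear unmixing sketch: a pixel is modeled as a nonnegative mixture of
# endmember spectra; abundances are recovered with nonnegative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
bands = 20
E = rng.uniform(0.1, 1.0, size=(bands, 3))        # 3 synthetic endmember spectra
a_true = np.array([0.6, 0.3, 0.1])                # true abundances
pixel = E @ a_true + 0.001 * rng.normal(size=bands)

a_hat, residual = nnls(E, pixel)                  # nonnegativity constraint
a_hat /= a_hat.sum()                              # enforce sum-to-one
```

Real unmixing adds endmember extraction (the endmembers are unknown) and nonlinear mixing models, but the constrained least-squares core stays the same.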
Information theory for visual communication
Duration: 30 hours
Trainers: J. Malo
Description: In this course, I introduce the elements of information theory required to understand why Uniformization or Gaussianization of density functions and noise in the system are key for the transmission of visual information. This knowledge is the basis of our long-standing agenda on developing invertible transforms for uniformization (SPCA, PPA, DRR) and Gaussianization (RBIG), and our research to calibrate neural noise in the visual system and Divisive Normalization models.
Objectives
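The basic operation behind the Gaussianization methods mentioned above (e.g., one iteration of an RBIG-style marginal step) can be sketched as an empirical-CDF map followed by the inverse Gaussian CDF; the data here is synthetic:

```python
# Minimal sketch of marginal Gaussianization: uniformize a variable via its
# empirical CDF (ranks), then map through the inverse Gaussian CDF.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(4)
x = rng.exponential(size=10_000)      # strongly non-Gaussian input
u = rankdata(x) / (x.size + 1)        # uniformization via ranks, u in (0, 1)
g = norm.ppf(u)                       # Gaussianized variable, roughly N(0, 1)
```

Full RBIG alternates such marginal Gaussianizations with random rotations so that the joint (not just marginal) density converges to a Gaussian, which is what makes multi-information and related quantities computable.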
Kernel methods in machine learning
Duration: 30 hours
Trainers: G. Camps-Valls
Description: Two fundamental operations in machine learning, regression and classification, involve drawing nonlinear boundaries or functions through a set of (labeled or unlabeled) training samples. The values of these boundaries or functions at given (test) samples can be deduced from the similarities between the test sample and the training samples. These similarities can be encoded in kernels, and the representer theorem yields expressions for the functions at any test sample. In this course, we also review how kernelizing scalar products (e.g., in the covariance matrix) leads to nonlinear generalizations of classical feature extraction methods.
Objectives
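The representer theorem in action: kernel ridge regression predicts at a test point as a kernel-weighted combination of the training samples. A scikit-learn sketch on synthetic data (the sine-wave problem and hyperparameters are illustrative):

```python
# Sketch of kernel ridge regression: the fitted function is a weighted sum of
# RBF kernel evaluations against the training samples (synthetic 1-D data).
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)

model = KernelRidge(kernel="rbf", alpha=0.1, gamma=1.0).fit(X, y)
X_test = np.linspace(-3, 3, 50)[:, None]
y_pred = model.predict(X_test)        # nonlinear fit from a linear solve
```

The same kernel trick applied to the covariance matrix instead of the regression weights gives kernel PCA, the prototype of the nonlinear feature extraction methods reviewed in the course.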
Machine learning and signal processing for remote sensing data analysis (IGARSS'14 tutorial)
Trainers: G. Camps-Valls and D. Tuia
Description: In this tutorial, we will present the remote sensing image processing chain, and take the attendants on a tour of different strategies for feature extraction, classification, unmixing, retrieval, and pattern analysis for data understanding. On the one hand, we will present powerful methodologies for remote sensing data classification: extracting knowledge from data, including interactive approaches via active learning, classifiers that encode prior knowledge and invariances, semi-supervised learning that exploits the information of unlabeled data, and domain adaptation to compensate for shifts in the ever-changing data distributions. On the other hand, we will pay attention to recent advances in bio-geophysical parameter estimation that incorporate heteroscedasticity, online adaptation, and problem understanding. From there, we will take a leap towards the more challenging step of understanding the geoscience problems from data by reviewing the latest advances in (directed) graphical models, structure learning, and empirical causal inference. Beyond theory, we will also present results of recent studies illustrating all the covered issues. Finally, we will provide code to the attendees to try the different methodologies and provide a solid ground for their future experimentations.
Objectives
Probabilities and uncertainties
Duration: 24 hours
Trainers: G. Varando, H. Durand and K. Cohrs
Description: Introduction to the concepts of probability and uncertainty, with examples of how to estimate, work with, and communicate them. Learn classical statistical inference (frequentist and Bayesian) and advanced techniques for high-dimensional data and machine learning.
Keywords: Probability, uncertainty, climate modelling, deep learning
Objectives
- Basics of probability theory
- Classical statistical inference (frequentist and Bayesian)
- Advanced techniques for high-dimensional data and ML
Remote sensing for water quality
Representation of spatial information
Duration: 30 hours
Trainers: J. Malo
Description: Statistical regularities in photographic images imply that certain representations of spatial information are better than others in terms of coding efficiency. In this course, we present the information theory concepts (entropy, multi-information, correlation and negentropy) for unsupervised feature extraction or dictionary learning required in image coding. Redundancy in images and sequences is reviewed, and basic techniques for compact information representation are introduced such as vector quantization, predictive coding, and transform coding. Application of these concepts in images is the basis of DCT and Wavelet representations, which are the core of JPEG and JPEG2000.
Objectives
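The transform-coding idea at the heart of DCT/JPEG can be sketched in one dimension: transform, keep the few strongest coefficients, and invert (the signal below is synthetic):

```python
# Minimal transform-coding sketch with the DCT: keep only the strongest
# coefficients of a signal and reconstruct from them (synthetic 1-D signal).
import numpy as np
from scipy.fft import dct, idct

t = np.linspace(0, 1, 64)
x = np.cos(2 * np.pi * 2 * t) + 0.5 * np.cos(2 * np.pi * 5 * t)

c = dct(x, norm="ortho")
keep = np.argsort(np.abs(c))[-8:]          # retain the 8 strongest coefficients
c_sparse = np.zeros_like(c)
c_sparse[keep] = c[keep]
x_hat = idct(c_sparse, norm="ortho")       # reconstruction from 8 of 64 values

error = np.mean((x - x_hat) ** 2) / np.mean(x ** 2)   # small relative error
```

JPEG applies the same recipe with a 2-D DCT on 8x8 blocks plus quantization and entropy coding; the statistical point of the course is that the DCT approximately decorrelates natural images, which is why so few coefficients suffice.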
Satellite-based tools for investigating aquatic ecosystems
Statistical signal processing
Duration: 60 hours
Trainers: G. Camps-Valls
Description: Material for a master's course on (statistical) signal processing. I cover the essential background for engineers and physicists interested in signal processing: probability and random variables, discrete-time random processes, spectral estimation, signal decomposition and transforms, and an introduction to information theory.
Objectives
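As a taste of the spectral estimation part, Welch's averaged periodogram recovers the frequency of a tone buried in noise (the signal below is synthetic):

```python
# Sketch of spectral estimation with Welch's method: a 50 Hz sinusoid in
# Gaussian noise shows up as a clear peak in the averaged periodogram.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(6)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.normal(size=t.size)

f, Pxx = welch(x, fs=fs, nperseg=512)      # averaged, windowed periodogram
f_peak = f[np.argmax(Pxx)]                 # sits near 50 Hz
```

Averaging over overlapping windowed segments trades frequency resolution for variance reduction, which is the central bias-variance theme of the spectral estimation lectures.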
Texture and motion in the visual cortex
Duration: 40 hours
Trainers: J. Malo
Description: Neurons in the V1 and MT cortices play a determining role in the analysis of the shape of objects, their spatial texture, and the estimation of retinal motion. In this course, we describe the basic psychophysical and physiological phenomena related to low-level spatio-temporal vision: the contrast sensitivity functions, masking, adaptation, and aftereffects. These phenomena are mediated by the context-dependent nonlinearities of the responses of neurons with specific receptive fields. We analyze the geometric properties of the standard model of V1 and its consequences for image discrimination. We introduce the concept of optical flow, its properties, and how this description of motion can be estimated from the 3D wavelet sensors in V1 and the aggregated sensors in MT.
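The linear stage of a V1 receptive field is well captured by a Gabor function: a grating at the preferred orientation drives it strongly, an orthogonal one barely at all. A toy sketch (parameters are illustrative, not fitted to physiology):

```python
# Toy V1-like receptive field: a 2-D Gabor filter responds strongly to a
# grating at its preferred orientation and weakly to the orthogonal one.
import numpy as np

x, y = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
f0 = 3.0                                          # preferred frequency (cycles/unit)
gabor = np.exp(-(x**2 + y**2) / 0.2) * np.cos(2 * np.pi * f0 * x)

preferred = np.cos(2 * np.pi * f0 * x)            # grating at preferred orientation
orthogonal = np.cos(2 * np.pi * f0 * y)           # same frequency, rotated 90 degrees

r_pref = np.abs((gabor * preferred).sum())        # strong linear response
r_orth = np.abs((gabor * orthogonal).sum())       # near-zero response
```

The standard model of the course wraps such linear sensors in divisive normalization, which is what produces the context-dependent nonlinearities (masking, adaptation) described above.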