Journée Axe PMNL

Date: Friday, 23 June 2023

Nonlinear Mathematical Programming Day (PMNL axis of the GDR RO), Cnam, 2 rue Conté, 75003 Paris
Practical information

  • Venue: Conservatoire National des Arts et Métiers (Cnam), 2 rue Conté, 75003 Paris.
  • Room: 31.2.85 (access 31, 2nd floor, room 85).
  • Getting there: Métro line 3 or 11, Arts et Métiers station, or line 4, Réaumur - Sébastopol station.

Program

  • 09:30-10:00: Welcome
  • 10:00-10:45: A language and an integrated environment for modeling and solving system design problems, presented by Pierre-Alain Yvars (ISAE-SUPMECA)
  • 10:50-11:35: Recent algorithmic advances for maximum-entropy sampling, presented by Jon Lee (University of Michigan)
  • 11:35-11:50: Break
  • 11:50-12:30: Exploiting submodularity in some MINLP problems, presented by Linding Xu (Ecole Polytechnique - LIX)
  • 12:30-14:00: Lunch break
  • 14:00-14:20: Technical interlude to collect the slides and get the video projector working again
  • 14:20-15:05: Strong duality reformulation for the bilevel optimization of nonlinear networks, presented by Sophie Demassey (CMA - Mines ParisTech)
  • 15:05-15:20: Break
  • 15:20-16:00: Classification and Regression Trees. A Non-Linear Optimization approach, presented by Cristina Molero (Ecole Polytechnique - LIX)

Invited talks

Pierre-Alain Yvars (ISAE-SUPMECA): A language and an integrated environment for modeling and solving system design problems

DEPS (Design Problem Specification) is a new modeling language designed for stating system design problems with a view to solving them. DEPS addresses sizing, configuration, and function-deployment problems and, more generally, architecture generation or system synthesis. The systems considered may be physical systems, software-intensive systems, or mixed systems (embedded, mechatronic, cyber-physical). The problems may be discrete, continuous, or mixed, linear or nonlinear. DEPS is a structured, property-based declarative language that combines structural modeling features with problem-specification features drawn from constraint programming over mixed domains. The mathematical nature of the problems is described by formal properties encapsulated in models organized according to the architecture of the system under study. We will first present the features of the language, illustrating them with examples from various domains. We will then review the current version of DEPS Studio, the integrated modeling and solving environment for DEPS, together with its dedicated mixed CP solver.
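
The abstract does not give DEPS syntax, so none is reproduced here. As a generic illustration of the problem class the language targets (mixed discrete/continuous sizing with a nonlinear feasibility constraint), here is a minimal Python sketch; the beam-sizing example, its materials, and all numeric values are invented for illustration:

```python
# Hypothetical sizing problem: pick a material (discrete choice) and a plate
# thickness (continuous, coarsely discretized here) so that the bending
# stress stays below the material's yield strength, minimizing mass per metre.
# This is NOT DEPS syntax, just an illustration of the problem class.

materials = {            # name: (density kg/m^3, yield strength MPa) -- assumed values
    "steel":    (7850.0, 250.0),
    "aluminum": (2700.0, 95.0),
}
load_moment = 1200.0     # N*m, applied bending moment (assumed)
width = 0.05             # m, fixed plate width (assumed)

def stress(thickness):
    # Max bending stress of a rectangular section: sigma = 6 M / (b t^2), in MPa.
    return 6.0 * load_moment / (width * thickness ** 2) / 1e6

best = None
for name, (rho, yield_mpa) in materials.items():
    for t_mm in range(5, 51):            # thickness from 5 mm to 50 mm
        t = t_mm / 1000.0
        if stress(t) <= yield_mpa:       # nonlinear feasibility constraint
            mass = rho * width * t       # kg per metre of plate
            if best is None or mass < best[2]:
                best = (name, t_mm, mass)

print(best)
```

A dedicated mixed CP solver, as described for DEPS Studio, would of course propagate such constraints over real intervals rather than enumerate a grid; the sketch only shows how discrete and continuous design decisions combine in one model.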

Jon Lee (University of Michigan): Recent algorithmic advances for maximum-entropy sampling

The maximum-entropy sampling problem (MESP) is to select a subset, of given size s, from a set of random variables, so as to maximize the "differential entropy". If C is the covariance matrix and we are in the Gaussian case, then we are simply seeking to maximize the determinant of an order-s principal submatrix of C. A key application is the reconfiguration of an environmental-monitoring network. MESP sits at the intersection of optimization, data science, and information theory, and so it has attracted a lot of recent attention. The problem is NP-hard, and there have been algorithmic attacks aimed at exact solution of moderate-sized instances for over three decades. It is a fascinating problem from the perspective of integer nonlinear optimization, as it does not fit within a framework that is successfully attacked via available general-purpose paradigms. I will give a broad overview of algorithmic work, concentrating on the many useful techniques related to various convex relaxations.
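
To make the problem statement concrete, here is a brute-force sketch (ours, not one of the talk's algorithms, and exponential in n, so only for tiny instances): enumerate all size-s principal submatrices of C and keep the one with the largest determinant. The covariance matrix is made up for the example.

```python
from itertools import combinations
from math import log

def det(m):
    # Determinant by cofactor expansion along the first row (fine for tiny m).
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def mesp_bruteforce(C, s):
    # Maximize log det over all order-s principal submatrices of C,
    # i.e. differential entropy in the Gaussian case up to an affine shift.
    best_set, best_val = None, float("-inf")
    for S in combinations(range(len(C)), s):
        sub = [[C[i][j] for j in S] for i in S]
        v = det(sub)
        if v > best_val:
            best_set, best_val = S, v
    return best_set, log(best_val)

# A small positive-definite covariance matrix (made up for the example).
C = [[2.0, 0.5, 0.3],
     [0.5, 1.0, 0.2],
     [0.3, 0.2, 1.5]]
print(mesp_bruteforce(C, 2))
```

The combinatorial explosion of this enumeration is exactly why the convex relaxations surveyed in the talk matter for moderate-sized instances.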

Linding Xu (Ecole Polytechnique - LIX): Exploiting submodularity in some MINLP problems

Submodular functions are a special class of discrete functions. They are often regarded as discrete analogues of convex functions and enjoy many useful properties. In this talk, we will review some polyhedral properties and examples of submodular functions. We will show that submodular functions arise in applications such as optimal statistical design, binary polynomials, and utility functions. In particular, we will study intersection cuts for several problems involving submodular functions.
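
As a small concrete illustration (ours, not from the talk), a coverage function f(S) = |union of the sets indexed by S| is a classic submodular example; the sketch below verifies the defining inequality f(A) + f(B) >= f(A ∪ B) + f(A ∩ B) on a made-up instance:

```python
from itertools import combinations

# Coverage function over a made-up ground family of sets:
# f(S) = number of elements covered by the sets chosen in S.
ground_sets = {0: {1, 2}, 1: {2, 3}, 2: {3, 4, 5}}

def f(S):
    covered = set()
    for i in S:
        covered |= ground_sets[i]
    return len(covered)

def all_subsets(items):
    for r in range(len(items) + 1):
        yield from (frozenset(c) for c in combinations(items, r))

# Check submodularity: f(A) + f(B) >= f(A | B) + f(A & B) for all pairs.
subsets = list(all_subsets(range(3)))
is_submodular = all(
    f(A) + f(B) >= f(A | B) + f(A & B)
    for A in subsets for B in subsets
)
print(is_submodular)
```

The same "diminishing returns" structure is what the polyhedral results and intersection cuts mentioned in the abstract exploit.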

Sophie Demassey (CMA - Mines ParisTech) : Strong duality reformulation for the bilevel optimization of nonlinear networks

Nonlinear flow-potential graphs appear in models of many physical networks: consider, as an example, drinking water distribution networks, where the flow-potential relation on each arc is nonlinear. Optimization models on these networks are either static (for network design) or dynamic (for operation control), with time-coupling constraints induced by storage. They are nonconvex MINLPs if they directly include the flow-potential equilibrium as constraints. Other formulations exploit their clear bilevel structure: as in network interdiction problems, one needs to determine a subgraph, or a sequence of subgraphs, and the associated flow-potential equilibria, which together minimize a given cost function. We study the strong duality reformulation of these nonlinear bilevel problems, in order to derive a convex reformulation in the static case and, in the dynamic case, either cuts obtained by linearizing the strong duality constraints or a new problem decomposition obtained by dualizing them.
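
As a generic sketch of the mechanism (ours, not the talk's specific network model): when the inner equilibrium problem is a linear program $\min\{c^{\top}x : Ax = b,\ x \ge 0\}$, strong duality lets the bilevel problem be written as a single level by adding dual feasibility and the strong-duality equation:

```latex
\min_{x,\,y}\; F(x)
\quad\text{s.t.}\quad
Ax = b,\quad x \ge 0,
\qquad
A^{\top} y \le c,
\qquad
c^{\top} x = b^{\top} y .
```

For a convex nonlinear inner problem the same scheme applies with convex duality in place of LP duality; linearizing or dualizing the last equation is what yields, respectively, the cuts and the decomposition mentioned in the abstract.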

Cristina Molero (Ecole Polytechnique - LIX): Classification and Regression Trees. A Non-Linear Optimization approach

Contrary to classic classification and regression trees, which are built in a greedy heuristic manner, designing the tree model through an optimization problem allows us to easily include properties that are desirable in Machine Learning, in addition to prediction accuracy. In this talk, we present a Non-Linear Optimization approach that is scalable with respect to the size of the training sample, and we illustrate this flexibility by modeling several important issues in Explainable and Fair Machine Learning. These include sparsity, as a proxy for interpretability, by reducing the amount of information necessary to predict well; fairness, by aiming to avoid predictions that discriminate against sensitive features such as gender or race; cost-sensitivity for groups of individuals for whom prediction errors are more critical, such as patients with a disease, by ensuring an acceptable accuracy performance for them; local explainability, where the goal is to identify the predictor variables that have the largest impact on individual predictions; as well as data complexity in the form of observations of a functional nature. The performance of our approach is illustrated on real and synthetic data sets.
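
To illustrate the shift from greedy splitting to an optimization objective (a toy sketch of ours, not the talk's method, with synthetic data): learn a single axis-aligned split by directly minimizing classification error plus a fairness penalty, so that the extra goal simply becomes a term in the objective rather than a modification of the impurity rule:

```python
# Toy illustration: learn one split threshold by minimizing
#   objective = misclassification rate + lam * |error gap between groups|
# instead of the greedy impurity criterion of classic CART.

data = [  # (feature x, sensitive group g, label y) -- synthetic
    (0.1, 0, 0), (0.3, 1, 0), (0.45, 0, 0), (0.5, 1, 1),
    (0.6, 0, 1), (0.7, 1, 1), (0.8, 0, 1), (0.9, 1, 1),
]
lam = 0.5  # weight of the fairness penalty (assumed)

def objective(threshold):
    # Predict 1 on the right branch (x > threshold), 0 on the left.
    preds = [(1 if x > threshold else 0, g, y) for x, g, y in data]
    err = sum(p != y for p, _, y in preds) / len(preds)
    def group_err(grp):
        pts = [(p, y) for p, g, y in preds if g == grp]
        return sum(p != y for p, y in pts) / len(pts)
    return err + lam * abs(group_err(0) - group_err(1))

# Candidate thresholds: the observed feature values.
thresholds = [x for x, _, _ in data]
best_t = min(thresholds, key=objective)
print(best_t, objective(best_t))
```

A full tree with sparsity, cost-sensitivity, or functional predictors needs the continuous (randomized-split) formulation of the talk; the toy only shows how such criteria enter the objective.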