i). Nonconvex programming and Global optimization - local and global approaches; DC programming - theory, algorithms and applications in various areas of applied sciences;
ii). Combinatorial Optimization and its applications in complex industrial systems;
iii). Machine Learning and Data mining: algorithms and applications.
I have specialized in nonconvex optimization since 1992, more particularly in a field of nonconvex and nondifferentiable optimization named DC (Difference of Convex functions) programming and DCA (DC Algorithms): theory, algorithms and applications. Chronologically, this field was created, in its preliminary form, by Pham Dinh Tao in 1985; our intensive research since 1994 has led to decisive developments and a rich scientific output in both quality and quantity, and the field has now become classic and increasingly popular worldwide. DC programming and DCA are known as powerful nonconvex optimization tools thanks to their robustness and performance compared with existing methods, their speed and scalability, and the flexibility of DC decompositions. Being a continuous approach, DC programming and DCA have been successfully applied to combinatorial optimization as well as to many classes of hard nonconvex programs. We can say, without false modesty, that researcher-practitioners worldwide routinely use these theoretical and algorithmic tools for the efficient modeling and solution of most practical nonconvex optimization problems. Moreover, almost all classical and recent local algorithms for nonconvex optimization can be seen as versions of DCA with an appropriate DC decomposition.
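To make the scheme concrete, here is a minimal illustrative sketch (not taken from any specific publication) of the standard DCA iteration for f = g - h with g, h convex: linearize h at the current iterate and minimize the resulting convex majorant. The toy objective x^4 - 8x^2 and the names `dca`, `grad_h`, `argmin_step` are assumptions chosen purely for illustration.

```python
# Generic DCA loop for f = g - h with g, h convex:
# at each step, linearize h at x_k and minimize the resulting convex upper bound.
def dca(x0, grad_h, argmin_step, tol=1e-10, max_iter=200):
    x = x0
    for _ in range(max_iter):
        y = grad_h(x)            # y_k in the subdifferential of h at x_k
        x_next = argmin_step(y)  # x_{k+1} = argmin_x  g(x) - y * x
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Toy instance: f(x) = x^4 - 8 x^2, with DC split g(x) = x^4, h(x) = 8 x^2.
# The convex subproblem g'(x) = y has the closed form x = sign(y) * (|y|/4)^(1/3).
grad_h = lambda x: 16.0 * x
argmin_step = lambda y: (1.0 if y >= 0 else -1.0) * (abs(y) / 4.0) ** (1.0 / 3.0)

x_star = dca(1.0, grad_h, argmin_step)  # converges to the local minimizer x = 2
```

Each DCA step solves a convex problem, and the objective value decreases monotonically; here the iteration x_{k+1} = (4 x_k)^{1/3} converges linearly to the stationary point x = 2 from x0 = 1.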
i). DC duality and local optimality conditions; particular features of polyhedral DC programs; convergence of the whole DCA sequence for DC programs with subanalytic data; extension of the standard DCA to general DC programs with DC constraints.
ii). Exact penalty techniques, with and without error bounds, in DC programming with nonconvex constraint sets. These results make it possible to recast various classes of difficult nonconvex programs as suitable DC programs that can be tackled by DCA and by combined DCA and global algorithms;
iii). Several DCA-based algorithms for solving various classes of difficult nonconvex programs. These works build a bridge between combinatorial optimization and Operations Research on one side and continuous optimization and DC programming on the other.
iv). A unifying nonconvex approximation approach and an exact penalty approach, with solid theoretical tools as well as efficient algorithms based on DC programming and DCA, to tackle the zero-norm and sparse optimization.
v). Numerous efficient DCA-based algorithms for various areas of machine learning and data mining: unsupervised learning, supervised learning, semi-supervised learning, learning with sparsity / uncertainty, reinforcement learning, stochastic learning, online learning, etc.
vi). In applied research, we developed powerful new methods based on DCA and global optimization approaches for several large-scale problems in various fields.
vii). Over the last ten years, my research has been very active on the development of new theoretical tools and a new generation of algorithms beyond the standard framework of DC programming and DCA for large-scale nonconvex optimization, as well as on advanced DCA-based methods in machine learning, in order to meet the challenges related to Big Data (scalability, uncertainty, velocity, …).
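As a hedged illustration of the nonconvex approximation approach to the zero-norm mentioned in iv) above, the sketch below applies a DCA loop to a capped-ℓ1 surrogate of ||x||_0 in the simplest denoising setting (identity design matrix, so the convex subproblem is plain soft-thresholding). The data, the parameter values, and the function names are illustrative assumptions, not the published algorithms.

```python
# DCA for sparse recovery with the capped-ell1 surrogate of ||x||_0, denoising case.
# phi(t) = min(|t|, theta)/theta = (|t| - max(|t| - theta, 0)) / theta  is a DC function:
# the first term goes into the convex part g, the second into the concave part -h.
import numpy as np

def soft(z, t):
    """Soft-thresholding: prox of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dca_capped_l1(b, lam=0.5, theta=0.1, iters=50):
    """Minimize 0.5||x - b||^2 + lam * sum_i min(|x_i|, theta)/theta via DCA."""
    x = b.copy()
    for _ in range(iters):
        # Subgradient of h(x) = (lam/theta) * sum_i max(|x_i| - theta, 0):
        v = (lam / theta) * np.sign(x) * (np.abs(x) > theta)
        # Convex subproblem: min_x 0.5||x - b||^2 + (lam/theta)||x||_1 - <v, x>,
        # solved in closed form by soft-thresholding the shifted data.
        x = soft(b + v, lam / theta)
    return x

b = np.array([2.0, 0.05, -1.5, 0.02])  # two large entries, two near-zero entries
x = dca_capped_l1(b)                   # keeps 2.0 and -1.5, zeroes out the rest
```

Unlike a plain ℓ1 penalty, the capped-ℓ1 DCA leaves the large entries unbiased here: once |x_i| exceeds theta, the linearized concave part cancels the shrinkage exactly, which is one reason such DC surrogates of the zero-norm are attractive.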