"Differential Evolution: Standing the Test of Time in Continuous Parameter Optimization under Complex Scenarios"
Swagatam Das (Web Page)
Indian Statistical Institute, Kolkata, India
Swagatam Das is currently serving as an assistant professor at the Electronics and Communication Sciences Unit of the Indian Statistical Institute, Kolkata, India. His research interests include evolutionary computing, pattern recognition, multi-agent systems, and wireless communication. Dr. Das has published one research monograph, one edited volume, and more than 200 research articles in peer-reviewed journals and international conferences. He is the founding co-editor-in-chief of “Swarm and Evolutionary Computation”, an international journal from Elsevier. He also serves as an associate editor of the IEEE Trans. on Systems, Man, and Cybernetics: Systems, IEEE Computational Intelligence Magazine, IEEE Access, Neurocomputing (Elsevier), Engineering Applications of Artificial Intelligence (Elsevier), and Information Sciences (Elsevier). He is an editorial board member of Progress in Artificial Intelligence (Springer), PeerJ Computer Science, the International Journal of Artificial Intelligence and Soft Computing, and the International Journal of Adaptive and Autonomous Communication Systems. Dr. Das has 7500+ Google Scholar citations and an H-index of 44 to date. He has been associated with the international program committees and organizing committees of several regular international conferences including IEEE CEC, IEEE SSCI, SEAL, GECCO, and SEMCCO. He has acted as guest editor for special issues in journals like IEEE Transactions on Evolutionary Computation and IEEE Transactions on SMC, Part C. He is the recipient of the 2012 Young Engineer Award from the Indian National Academy of Engineering (INAE).
Differential Evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. DE operates through computational steps similar to those employed by a standard Evolutionary Algorithm (EA). However, unlike traditional EAs, the DE variants perturb the current-generation population members with the scaled differences of distinct population members, so no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world, resulting in numerous variants of the basic algorithm with improved performance. This talk will begin with a brief but comprehensive overview of the basic concepts related to DE, its algorithmic components and control parameters. It will subsequently discuss some of the significant algorithmic variants of DE for bound-constrained single-objective optimization. Applications of the DE family of algorithms in complex optimization scenarios like constrained, large-scale, dynamic and multi-modal optimization problems will also be included. The talk will finally discuss a few interesting applications of DE and highlight a few open research problems.
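To make the perturbation idea concrete, the following is a minimal, illustrative sketch of the classic DE/rand/1/bin scheme in Python. The function and parameter names (f, F, CR, pop_size) follow the standard textbook formulation and are assumptions for illustration, not code from the talk:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.5, CR=0.9, generations=200, seed=0):
    """Minimal DE/rand/1/bin sketch: offspring are generated by perturbing a
    population member with the scaled difference of two other distinct members,
    so no separate probability distribution is needed."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct members, all different from the target i
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])   # differential mutation
            mutant = np.clip(mutant, lo, hi)             # respect bound constraints
            cross = rng.random(dim) < CR                 # binomial crossover mask
            cross[rng.integers(dim)] = True              # guarantee at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fitness[i]:                    # greedy one-to-one selection
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]
```

On a simple bound-constrained problem such as the sphere function, this sketch converges quickly to the global optimum with the standard settings F = 0.5, CR = 0.9.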
1. Subhodip Biswas, Souvik Kundu, and Swagatam Das, "Inducing niching behavior in differential evolution through local information sharing", IEEE Trans. Evolutionary Computation, Vol. 19, Issue 2, pp. 246 - 263, 2015.
2. Subhodip Biswas, Souvik Kundu, and Swagatam Das, "An improved parent-centric mutation with normalized neighborhoods for inducing niching behavior in differential evolution", IEEE Transactions on Cybernetics, Vol. 44, No. 10, pp. 1726 - 1737, 2014.
3. Swagatam Das, Ankush Mandal, and Rohan Mukherjee, “An adaptive differential evolution algorithm for global optimization in dynamic environments”, IEEE Transactions on Cybernetics , Vol. 44, No. 10, pp. 1726-1737, 2014.
4. Aniruddha Basak, Swagatam Das, and Kay Chen Tan, "Multimodal optimization using a biobjective differential evolution algorithm enhanced with mean distance-based selection", IEEE Trans. Evolutionary Computation, 17(5): pp. 666-685, 2013.
5. Swagatam Das and P. N. Suganthan, "Differential evolution – a survey of the state-of-the-art", IEEE Transactions on Evolutionary Computation, Vol. 15, No. 1, pp. 4 – 31, 2011.
"Understanding Brains Through Experiments and Modeling of Neurodynamics"
Włodzisław Duch (Web Page)
Department of Informatics, and NeuroCognitive Laboratory, Center for Modern Interdisciplinary Technologies, Nicolaus Copernicus University, Poland
Wlodzislaw Duch heads the Neurocognitive Laboratory in the Center for Modern Interdisciplinary Technologies and the Department of Informatics, both at Nicolaus Copernicus University, Poland. In 2014-15 he served as a deputy minister for science and higher education in Poland, and in 2011-14 as the Vice-President for Research and ICT Infrastructure at his University. Before that he worked as the Nanyang Visiting Professor (2010-12) at the School of Computer Engineering, Nanyang Technological University, Singapore, where he had also worked in 2003-07. He holds an MSc (1977) in theoretical physics, a Ph.D. in quantum chemistry (1980), and a D.Sc. in applied mathematics (1987), and was a postdoc at the University of Southern California, Los Angeles (1980-82). He has also worked at the University of Florida; the Max-Planck-Institute, Munich, Germany; the Kyushu Institute of Technology, Meiji and Rikkyo Universities in Japan; and several other institutions.
He is or was on the editorial boards of IEEE TNN, CPC, NIP-LR, the Journal of Mind and Behavior, and 14 other journals; was co-founder and scientific editor of the "Polish Cognitive Science" journal; served two terms as President of the European Neural Networks Society executive committee (2006-2011); and is an active member of the IEEE CIS Technical Committee. The International Neural Network Society Board of Governors elected him to its most prestigious College of Fellows. He works as an expert for European Union science programs, has published over 300 scientific and over 200 popular articles on diverse subjects, and has written or co-authored 4 books and co-edited 21 books; his DuchSoft company has made the GhostMiner software package.
Wlodek Duch is well known for development of computational intelligence (CI) methods that facilitate understanding of data, general CI theory based on similarity evaluation and composition of transformations, meta-learning schemes that automatically discover the best model for a given data. He is working on development of neurocognitive informatics, focusing on algorithms inspired by cognitive functions, information flow in the brain, learning and neuroplasticity, understanding of attention, integrating genetic, molecular, neural and behavioral levels to understand attention deficit disorders in autism and other diseases, infant learning and toys that facilitate mental development, creativity, intuition, insight and mental imagery, geometrical theories that allow for visualization of mental events in relation to the underlying neurodynamics. He has also written several papers in the philosophy of mind, and was one of the founders of cognitive sciences in Poland. Since 2014 he is heading a unique NeuroCognitive Laboratory, that involves experts in hardware and software, signal processing, physics, cognitive science, psychology and philosophy. His Lab works with infants, preschool children, students and older people, using neuroimaging techniques, behavioral experiments and computational modelling.
With a wide background in many branches of science and an understanding of different cultures, he bridges many scientific communities. A wealth of information about his activities, including his full CV, can be found by typing "W. Duch" into Google.
The astronomical complexity of the brain is unmatched in the known universe. The engineering approach to understanding the brain is to create artificial brains. This requires a better understanding of what brains really do, describing animal and human phenotypes at all levels, from the genetic to the behavioral. All mental states result from the neural dynamics of the brain. Understanding mental processes in a conceptual way, without understanding their underlying neurodynamics, will always be limited. A brief review of factors that influence brain development and facilitate building perceptual and cognitive skills, and of the representation of concepts in the brain, will be presented. Functional neuroimaging techniques reveal brain processes during various mental tasks, and these processes thereby lose their private character: an external observer can see, and correctly interpret, brain processes that have not yet become conscious. The self, our personal identity, is one of many processes that the brain is running. Computational neuroscience is leading the way in showing how brain activity is linked to behavior.
Several models that provide insights into brain processes are discussed, including models of autism spectrum disorders, ADHD, distortions of memory states, and learning styles. Hypotheses derived from computational models are tested by experiments carried out in our Neurocognitive Laboratory on infants, preschool children and students.
1. Gravier A, Quek H.C, Duch W, Abdul Wahab, Gravier-Rymaszewska J. Neural network modelling of the influence of channelopathies on reflex visual attention. Cognitive Neurodynamics 10(1), 49-72, 2016.
2. Duch W, Dobosz K, Mikołajewski D, Autism and ADHD – two ends of the same spectrum? Lecture Notes in Computer Science Vol. 8226, pp. 623-630, 2013.
3. Dobosz K, Mikołajewski D, Wójcik G.M, Duch W, Simple cyclic movements as a distinct autism feature - computational approach. Computer Science 14(3): 475-489, 2013.
4. Duch W. (2013) Brains and Education: Towards Neurocognitive Phenomics. In: "Learning while we are connected", Vol. 3, Eds. N. Reynolds, M. Webb, M.M. Sysło, V. Dagiene, pp. 12-23.
5. Duch W, Nowak W, Meller J, Osiński G, Dobosz K, Mikołajewski D, and Wójcik G.M, Computational approach to understanding autism spectrum disorders. Computer Science 13(2): 47-61, 2012.
6. Duch W, Dobosz K, Visualization for Understanding of Neurodynamical Systems. Cognitive Neurodynamics 5(2), 145-160, 2011.
7. Dobosz K, Duch W. Understanding Neurodynamical Systems via Fuzzy Symbolic Dynamics. Neural Networks, Vol. 23, pp. 487-496, 2010.
"The Random Neural Network: The Model, its Properties, and an Application to Deep Learning"
Erol Gelenbe (Web Page)
Intelligent Systems and Networks (ISN), Department of Electrical and Electronic Engineering at Imperial College, London, UK
Erol Gelenbe is a Fellow of IEEE, ACM and IET, and he is a professor of Electrical and Electronic Engineering at Imperial College London. His current research encompasses stochastic models of system performance, energy savings in ICT, as well as machine learning and its applications to the management of the Internet and to Cloud Computing. He has graduated over 70 PhDs, has over 14,300 citations and is a Fellow of several national academies. He was awarded Chevalier de la Legion d’Honneur by France, as well as high honours by the government of Italy. http://www.imperial.ac.uk/people/e.gelenbe.
The random neural network (RNN) is a mathematical model of recurrent, spiking neuronal ensembles. It was developed by Erol Gelenbe in the 1990s and generalised to a class of stochastic networks known as G-Networks. We have shown that these non-linear mathematical models have a remarkable mathematical property called “Product Form Solution”. This means, in particular, that in steady state the joint probability distribution of an n-neuron recurrent RNN can be expressed as the product of the marginal probabilities that each of the n neurons is excited. We will recall the theory behind this mathematical system and briefly discuss learning, by both gradient descent and reinforcement learning, for these models. Their learning properties will be illustrated by examples from the learning of image textures, and the learning of best paths for routing packets in the Internet using Machine Learning and Big Data. We will then show how very large clusters of recurrent G-Networks can be exploited in constructing multi-layer networks for deep learning from large data sets.
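As an illustration of the model, the steady-state excitation probabilities q_i of an RNN can be computed by fixed-point iteration. The sketch below is a minimal implementation under the standard notation (external excitatory and inhibitory arrival rates Lambda_i and lambda_i, firing rates r_i, excitatory/inhibitory routing matrices P_plus and P_minus); all names are illustrative assumptions, not code from the talk:

```python
import numpy as np

def rnn_steady_state(Lambda, lam, r, P_plus, P_minus, tol=1e-10, max_iter=10000):
    """Fixed-point iteration for the steady-state excitation probabilities of a
    random neural network: q_i = lambda+_i / (r_i + lambda-_i), where the
    internal excitatory/inhibitory arrival rates depend on the other neurons."""
    n = len(r)
    q = np.zeros(n)
    for _ in range(max_iter):
        lam_plus = Lambda + (q * r) @ P_plus    # total excitatory arrivals per neuron
        lam_minus = lam + (q * r) @ P_minus     # total inhibitory arrivals per neuron
        q_new = np.clip(lam_plus / (r + lam_minus), 0.0, 1.0)
        if np.max(np.abs(q_new - q)) < tol:     # converged to the fixed point
            return q_new
        q = q_new
    return q
```

When every q_i < 1, the product-form result then gives the joint stationary probability of the state vector k as the product over neurons of (1 - q_i) * q_i**k_i, which is exactly the "product of marginals" property mentioned above.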
 E. Gelenbe, “Random Neural Networks with Negative and Positive Signals and Product Form Solution”, Neural Computation (1989), Vol. 1, No. 4, pp. 502-510.
 E. Gelenbe, “Learning in the recurrent random neural network”, Neural Computation (1993), Vol. 5, No. 1, pp. 154-164.
 E. Gelenbe “The first decade of G-Networks”, European Journal of Operational Research (2000), Vol. 126 (2), pp. 231-232.
 E. Gelenbe “Steps towards self-aware networks”, Communications of the ACM (2009), Vol. 52, No. 7, pp. 66-75.
 O. Brun, L. Wang and E. Gelenbe “Data Driven SMART Intercontinental Overlay Networks”, accepted for publication in the IEEE Journal on Selected Areas in Communications, 2016, Cite as: arXiv:1512.08314
 F. Francois and E. Gelenbe, “Towards a Cognitive Routing Engine for Software Defined Networks”, accepted for publication, IEEE ICC-2016 (International Conference on Communications), Malaysia, 2016. Cite as: arXiv:1602.00487
"Embodying Intelligence in Autonomous Systems with the Use of Cognitive Psychology and Motivation Theories"
Zdzislaw KOWALCZUK and Michal CZUBENKO,
Gdansk University of Technology,
Zdzislaw KOWALCZUK, Prof. DSc PhD MScEE (2003, 1993, 1986, 1978). Since 1978 he has been with the Faculty of Electronics, Telecommunications and Informatics at the Gdańsk University of Technology, where he is a Full Professor in automatic control and robotics, and Chair of the Department of Robotics and Decision Systems. He has held visiting appointments at the University of Oulu (1985), the Australian National University (1987), Technische Hochschule Darmstadt (1989), and George Mason University (1990-1991). His main interests include robotics, adaptive and predictive control, system modeling and identification, failure detection, signal processing, artificial intelligence, control engineering and computer science. He has authored and co-authored 18 books (incl. WNT 2002, PWNT 2007-2015, Springer 2004, 2014, 2016), about 100 journal papers (40 on the ISI list), and over 50 book chapters and 200 conference publications. He is a recipient of the 1990 and 2003 Research Excellence Awards of the Polish National Education Ministry, and of a 1999 Polish National Science Foundation Award in automatic control.
The presentation proposes, at a certain level of abstraction and generalization, a coherent anthropological approach to the control of an autonomous robot or agent system. Such a control system can be based on an appropriate model of the human mind using the available psychological knowledge. One of the main reasons for developing such projects is the lack of top-down approaches in the known research on autonomous robotics.
Thus, taking into account that there is no system that models human psychology sufficiently well for the purpose of constructing autonomous agents, the Intelligent System of Decision-making (ISD) is derived from cognitive psychology (using the aspect of the 'information path'), motivation theory (where needs and emotions are used as a drive for the autonomous system), and several detailed theories concerning memory, categorization, perception, and decision-making. In particular, the xEmotion subsystem of the ISD covers the psychological theories of emotion, including the appraisal, evolutionary and somatic theories of emotion.
 Z. Kowalczuk, M. Czubenko "Interactive cognitive-behavioural decision making system" Artificial Intelligence and Soft Computing. Lecture Notes in Artificial Intelligence, Part II, vol. LNAI 6114, pp. 516-523 [ISSN 0302-9743, ISBN-10 3-642-13231-6] Springer-Verlag, Berlin–Heidelberg–New York 2010
 Z. Kowalczuk, M. Czubenko "Intelligent decision-making system for autonomous robots" Int. Journal of Applied Mathematics and Computer Science vol. 21, no. 4, pp. 671-684 [ISSN 1641–876X, DOI: 10.2478/v10006-011-0053-7], 2011
 Z. Kowalczuk, M. Czubenko "Cognitive memory for intelligent systems of decision-making, based on human psychology" In: Intelligent Systems in Technical and Medical Diagnosis. Advances in Intelligent Systems and Computing, vol. AISC 230, pp. 379-389 [ISSN 2194-5357, DOI:10.1007/978-3-642-39881-0_32] Springer-Verlag, Berlin–Heidelberg 2014
 M. Czubenko, Z. Kowalczuk, A. Ordys "Autonomous driver based on intelligent system of decision-making" Cognitive Computation [ISSN 1866-9956, DOI: 10.1007/s12559-015-9320-5, http://www.springer.com/biomed/neuroscience/journal/12559], 2015
 Z. Kowalczuk, M. Czubenko, W. Jędruch "Learning and memory processes in autonomous agents using an intelligent system of decision-making" In: Advanced and Intelligent Computations in Diagnosis and Control. Advances in Intelligent Systems and Computing, vol. AISC 386, pp. 301-315 [ISSN 2194-5357; DOI 10.1007/978-3-319-23180-8_22] Springer IP Switzerland, Cham–Heidelberg–New York–London 2016
"Preference learning in a constructive paradigm"
Roman Słowiński
Poznań University of Technology, Poland
Roman Slowinski is a Professor and Founding Chair of the Laboratory of Intelligent Decision Support Systems at the Institute of Computing Science, Poznań University of Technology in Poland. Since 2002 he is also Professor at the Systems Research Institute of the Polish Academy of Sciences in Warsaw.
He is a full member of the Polish Academy of Sciences and, presently, elected president of the Poznań Branch of the Academy. He is also a member of Academia Europaea.
In his research, he combines Operations Research and Computational Intelligence. Today Roman Słowiński is renowned for his seminal research on using rough sets in decision analysis, and for his original contributions to preference modeling and learning in decision aiding.
He is recipient of the EURO Gold Medal, and Doctor Honoris Causa of Polytechnic Faculty of Mons, University Paris Dauphine, and Technical University of Crete. In 2005 he received the Annual Prize of the Foundation for Polish Science - regarded as the highest scientific honor awarded in Poland.
Since 1999, he has been principal editor of the European Journal of Operational Research, a premier journal in Operations Research. He is coordinator of the EURO Working Group on Multiple Criteria Decision Aiding, and past president of the International Rough Set Society.
The lecture is about preference learning in Multiple Criteria Decision Aiding. It is well known that the dominance relation established in the set of alternatives (also called actions, objects, solutions) is the only objective information that comes from the formulation of a multiple criteria decision problem (ordinal classification, ranking, or choice, with multiobjective optimization being a particular instance). While the dominance relation makes it possible to eliminate many irrelevant (i.e., dominated) alternatives, it does not compare all of them completely, so that many alternatives remain incomparable. This situation may be addressed by taking into account the preferences of a Decision Maker (DM). Therefore, all decision-aiding methods require some preference information elicited from a DM or a group of DMs. This information is used to build a more or less explicit preference model, which is then applied to the non-dominated set of alternatives to arrive at a recommendation (an assignment of alternatives to decision classes, a ranking of alternatives from best to worst, or the best choice) presented to the DM. In practical decision aiding, the process composed of preference elicitation, preference modeling, and the DM's analysis of a recommendation loops until the DM accepts the recommendation or decides to change the problem setting. Such an interactive process is called constructive preference learning.
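The dominance relation described above is easy to state in code. The following minimal Python sketch (illustrative only; "larger is better" on every criterion is an assumption) filters out the dominated alternatives while leaving incomparable ones untouched:

```python
def dominates(a, b):
    """a dominates b when a is at least as good on every criterion
    (here: larger is better) and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated(alternatives):
    """Keep only the alternatives not dominated by any other alternative."""
    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives if b != a)]
```

For example, among the criterion vectors (3, 5), (5, 2), (4, 4) and (2, 3), only (2, 3) is dominated; the other three survive the filter yet remain pairwise incomparable, which is exactly the situation where the DM's preference information is needed.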
I will focus on processing the DM's preference information concerning multiple criteria ranking and choice problems. This information has the form of pairwise comparisons of selected alternatives. Research indicates that such preference elicitation requires less cognitive effort from the DM than direct assessment of preference model parameters (like criteria weights or trade-offs between conflicting criteria). I will describe how to construct from this input information a preference model that reconstructs the pairwise comparisons provided by the DM. In general, the construction of such a model follows logical induction, typical for learning from examples in AI. In the case of utility function preference models, this induction translates into ordinal regression. I will show inductive construction techniques for two kinds of preference models: a set of utility (value) functions, and a set of "if…, then…" monotonic decision rules. An important feature of these construction techniques is the identification of all instances of the preference model that are compatible with the input preference information; this makes it possible to draw robust conclusions about the DM's preferences when any of these models is applied to the considered set of alternatives. These techniques are called Robust Ordinal Regression and Dominance-based Rough Set Approach.
I will also show how these induction techniques, and their corresponding models, can be embedded into an interactive procedure of multiobjective optimization, particularly, in Evolutionary Multiobjective Optimization (EMO), guiding the search towards the most preferred region of the Pareto-front.
1. J.Branke, S.Greco, R.Slowinski, P.Zielniewicz: Learning Value Functions in Interactive Evolutionary Multiobjective Optimization. IEEE Transactions on Evolutionary Computation, 19 (2015) no.1, 88-102.
2. S.Corrente, S.Greco, M.Kadzinski, R.Slowinski: Robust ordinal regression in preference learning and ranking. Machine Learning, 93 (2013) 381-422.
3. J.Figueira, S.Greco, R.Slowinski: Building a set of additive value functions representing a reference preorder and intensities of preference: GRIP method. European J. Operational Research 195 (2009) 460-486.
4. M. Szelag, S. Greco, R. Slowinski: Variable consistency dominance-based rough set approach to preference learning in multicriteria ranking. Information Sciences, 277 (2014) 525-552.
5. R. Slowinski, S. Greco, B. Matarazzo: Rough Set Methodology for Decision Aiding. Chapter 22 [in]: J.Kacprzyk and W.Pedrycz (eds.), Handbook of Computational Intelligence, Springer, Berlin, 2015, pp. 349-370.
"Can Learning Vector Quantization be an Alternative to SVM and Deep Learning? - Recent trends and advanced variants of LVQ"
Thomas Villmann
University of Applied Sciences Mittweida, Germany
Prof. Thomas Villmann holds a diploma degree in Mathematics, received his Ph.D. in Computer Science in 1996 and his habilitation (venia legendi) in the same subject in 2005, all from the University of Leipzig. From 1997 to 2009 he led the Computational Intelligence group of the Clinic for Psychotherapy at Leipzig University. Since 2009 he has been a full Professor for Technomathematics / Computational Intelligence at the University of Applied Sciences Mittweida, (Saxony) Germany. He is a founding member of the German chapter of the European Neural Network Society (GNNS) and has been its president since 2011, as well as a board member of the European Neural Network Society (ENNS). Furthermore, he leads the Institute of Computational Intelligence and Intelligent Data Analysis e.V. in Mittweida, Germany. He acts as an associate editor for IEEE Transactions on Neural Networks and Learning Systems, Neural Processing Letters, and Computational Intelligence and Neuroscience. He was the general chair of the 10th Workshop on Self-Organizing Maps and Learning Vector Quantization. His research focus includes the theory of prototype-based clustering and classification, non-standard metrics, information-theoretic learning, statistical data analysis, and their application in pattern recognition, data mining and knowledge discovery for use in medicine, bioinformatics, remote sensing, hyperspectral analysis and others.
Learning Vector Quantization (LVQ) was developed in the 1980s by T. Kohonen [1]. The aim was to develop a prototype-based learning classifier approximating a Bayesian classifier. A cost-function-based variant was introduced later by Sato and Yamada, which can be adapted by stochastic gradient descent learning, minimizing an approximation of the classification error and realizing a margin optimization scheme [2,3]. This so-called generalized LVQ (GLVQ) is the starting point for many variants proposed during the last years.
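As a rough illustration of the Sato-Yamada scheme, the sketch below implements a bare-bones GLVQ update with squared Euclidean distances and the identity transfer function, with constant derivative factors absorbed into the learning rate. The learning rate, epoch count and data layout are illustrative assumptions, not details from the talk:

```python
import numpy as np

def glvq_train(X, y, prototypes, proto_labels, lr=0.05, epochs=50, seed=0):
    """Minimal GLVQ sketch: stochastic gradient descent on the relative-distance
    cost mu = (d_plus - d_minus) / (d_plus + d_minus), where d_plus/d_minus are
    the distances to the closest correct/wrong prototype."""
    rng = np.random.default_rng(seed)
    W = prototypes.astype(float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            x, label = X[i], y[i]
            d = np.sum((W - x) ** 2, axis=1)            # squared Euclidean distances
            same = proto_labels == label
            jp = np.argmin(np.where(same, d, np.inf))   # closest correct prototype
            jm = np.argmin(np.where(~same, d, np.inf))  # closest wrong prototype
            dp, dm = d[jp], d[jm]
            denom = (dp + dm) ** 2
            W[jp] += lr * (dm / denom) * (x - W[jp])    # attract the correct prototype
            W[jm] -= lr * (dp / denom) * (x - W[jm])    # repel the wrong prototype
    return W

def glvq_predict(X, W, proto_labels):
    """Nearest-prototype classification with the trained prototypes."""
    return proto_labels[np.argmin(np.sum((W[None] - X[:, None]) ** 2, axis=2), axis=1)]
```

Because mu lies in (-1, 1) and is negative exactly when the sample is classified correctly, driving it down realizes the margin optimization mentioned above.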
One of the most innovative concepts is relevance learning, which consists in automatic feature weighting with respect to the classification task [4]. This idea can be extended by taking correlations between features into account [5]. Recent developments deal with kernel distances and the optimization of other statistical classification evaluation measures like F-measures or ROC-values [6,7]. Further, a reject option can be included in the GLVQ scheme, as well as border-sensitive learning options for explicit detection of class borders as in support vector machines (SVM) [8,9]. Additionally, median and relational variants are available if only dissimilarity data are provided [10].
The talk will highlight these new trends in learning vector quantization in a systematic way to show that LVQ can easily be adapted to user-specific classification tasks, and will compare the abilities of LVQ with those of the widely applied SVM. Pros and cons of both approaches are discussed, and exemplary applications will accompany the conceptual explanations of the LVQ models.
 T. Kohonen (1988). Learning vector quantization. Neural Networks, 1(Supplement 1):303.
 A. Sato and K. Yamada (1996). Generalized learning vector quantization. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, eds., Advances in Neural Information Processing Systems 8. Proceedings of the 1995 Conference, pages 423-429. MIT Press, Cambridge, MA, USA, 1996.
 K. Crammer, R. Gilad-Bachrach, A. Navot, and N. Tishby (2002).Margin analysis of the LVQ algorithm. In S. Thrun and K. Obermayer, eds., Advances in Neural Information Processing Systems 15. pages 462-469. MIT Press, Cambridge, MA, USA, 2002.
 B. Hammer and T. Villmann (2002). Generalized relevance learning vector quantization. Neural Networks 15 (8-9), 1059-1068.
 P. Schneider, B. Hammer, and M. Biehl (2009). Distance learning in discriminative vector quantization. Neural Computation 21, 2942-2969.
 M. Kaden, W. Hermann, and T. Villmann (2014). Optimization of general statistical accuracy measures for classification based on learning vector quantization. Proc. of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN). 47-52, Bruges.
 T. Villmann, M. Kaden, W. Hermann, and M. Biehl (2015). Learning vector quantization classifiers for ROC-optimization. Submitted to Computational Statistics.
 T. Villmann, M. Kaden, A. Bohnsack, S. Saralajew, and B. Hammer (2016). Self-adjusting reject options in prototype based classification. To appear in Proc. Of the 11th Workshop on Self-Organizing Maps and Learning Vector Quantization, Houston.
 M. Kaden, M. Riedel, W. Hermann, and T. Villmann (2015). Border-sensitive learning in generalized learning vector quantization: an alternative to support vector machines. Soft Computing, 19 (9), 2423-2434.
 D. Nebel, B. Hammer, K. Frohberg, and T. Villmann (2015). Median variants of learning vector quantization for learning of dissimilarity data. Neurocomputing, 169, 295-305.
"From Online Ensemble Learning to Learning in the Model Space"
Xin Yao
School of Computer Science, University of Birmingham, UK
Xin Yao has been a Professor of Computer Science at the University of Birmingham, UK, since April Fool's Day 1999. He is a Fellow of IEEE and a Distinguished Lecturer of the IEEE Computational Intelligence Society (CIS). He was the President (2014-15) of IEEE CIS. His major research interests include evolutionary computation, ensemble learning, and their applications. His work won the 2001 IEEE Donald G. Fink Prize Paper Award, the 2010 and 2015 IEEE Transactions on Evolutionary Computation Outstanding Paper Awards, the 2010 BT Gordon Radley Award for Best Author of Innovation (Finalist), the 2011 IEEE Transactions on Neural Networks Outstanding Paper Award, and many other best paper awards. He received the prestigious Royal Society Wolfson Research Merit Award in 2012 and the IEEE CIS Evolutionary Computation Pioneer Award in 2013.
Divide-and-conquer is a common strategy for tackling large and complex problems. Ensembles can be regarded as an automatic approach to divide-and-conquer. Many ensemble methods, including boosting, bagging, negative correlation, etc., have been used in online learning of data streams with concept drift. This talk will first present an online ensemble learning algorithm (DDD) that exploits ensemble diversity to deal with concept drift. Then a recent effort towards online learning of class-imbalanced data streams will be described. Finally, a new learning framework, learning in the model space, is introduced, which is particularly suitable for online learning of imbalanced data streams.
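To give a flavour of online ensemble learning on a stream, here is a minimal sketch of online bagging in the style of Oza and Russell, in which each incoming example is shown to each base learner k ~ Poisson(1) times to approximate bootstrap sampling. This is a generic illustration with a toy perceptron base learner, not the DDD algorithm itself:

```python
import numpy as np

class OnlinePerceptron:
    """Tiny online linear classifier used here as a base learner."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim + 1)  # weights plus bias term
        self.lr = lr

    def predict(self, x):
        return 1 if np.dot(self.w, np.append(x, 1.0)) > 0 else 0

    def update(self, x, y):
        if self.predict(x) != y:    # mistake-driven update
            self.w += self.lr * (2 * y - 1) * np.append(x, 1.0)

class OnlineBaggingEnsemble:
    """Online bagging: each example is presented to each base learner
    k ~ Poisson(1) times, approximating bootstrap sampling on a stream."""
    def __init__(self, dim, n_learners=11, seed=0):
        self.learners = [OnlinePerceptron(dim) for _ in range(n_learners)]
        self.rng = np.random.default_rng(seed)

    def predict(self, x):
        votes = sum(m.predict(x) for m in self.learners)  # majority vote
        return 1 if votes > len(self.learners) / 2 else 0

    def update(self, x, y):
        for m in self.learners:
            for _ in range(self.rng.poisson(1.0)):
                m.update(x, y)
```

Each example is processed once and discarded, so the ensemble can track a stream; drift handling (the focus of DDD) would additionally monitor the errors and diversity of the base learners.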
Some materials used in the talk are based on the following papers:
L. L. Minku and X. Yao, "DDD: A New Ensemble Approach For Dealing With Concept Drift," IEEE Transactions on Knowledge and Data Engineering, 24(4):619-633, 2012.
S. Wang, L. L. Minku and X. Yao, "Resampling-Based Ensemble Methods for Online Class Imbalance Learning," IEEE Transactions on Knowledge and Data Engineering, 27(5):1356-1368, May 2015.
H. Chen, P. Tino, A. Rodan and X. Yao, "Learning in the Model Space for Cognitive Fault Diagnosis," IEEE Transactions on Neural Networks and Learning Systems, 25(1):124-136, January 2014.