Prof. Jon-Lark Kim
Math Department, Sogang University, Republic of Korea
Speech Title: Multi-class learning problems based on error-correcting output codes
Abstract: The multi-class classification problem is one of the important problems in machine learning. A common method for solving a multi-class classification problem is to decompose it into multiple binary problems. Dietterich and Bakiri introduced error-correcting output codes (ECOC) for this purpose in 1995. The simplest decoding method, Hamming decoding, was employed to obtain a multi-class decision. In this presentation, we give an overview of various ECOC methods and describe our recent work on ECOC based on Hadamard matrices.
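As a rough illustration of the ECOC idea sketched above, the following minimal Python example decomposes a 4-class problem via a code matrix and recovers the class with Hamming decoding. The code matrix here is made up for the example; it is not the Hadamard-based construction from the talk.

```python
# Minimal sketch of error-correcting output codes (ECOC) with Hamming
# decoding. Each row is the codeword assigned to one class; each column
# defines one binary sub-problem that a separate binary classifier learns.
CODE_MATRIX = {
    0: (0, 0, 0, 0, 0),
    1: (0, 1, 1, 0, 1),
    2: (1, 0, 1, 1, 0),
    3: (1, 1, 0, 1, 1),
}

def hamming_decode(predicted_bits):
    """Return the class whose codeword is nearest in Hamming distance."""
    def distance(codeword):
        return sum(p != c for p, c in zip(predicted_bits, codeword))
    return min(CODE_MATRIX, key=lambda cls: distance(CODE_MATRIX[cls]))

# Suppose the five binary classifiers output these bits for a test sample;
# one bit is flipped relative to class 2's codeword, yet decoding recovers it.
print(hamming_decode((1, 0, 1, 0, 0)))  # nearest codeword is class 2
```

Because the codewords are spaced apart in Hamming distance, a small number of wrong binary decisions can still be corrected, which is the error-correcting property the talk builds on.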
Dr. Richi Nayak
Academic Lead: Development and Diversity, School of Computer Science;
Program Leader: Applied Data Science, Centre for Data Science;
Professor, Science and Engineering Faculty, Queensland University of Technology, Australia
Speech Title: Machine Learning Methods Elucidating Good and Bad of Social Media Data
Abstract: The proliferation of social media has created new norms in society. Incidents of abuse, hate, harassment and misogyny are widespread across social media platforms. At the same time, these platforms facilitate the sharing of meaningful ideas and thoughts. In this talk, I will explore the ‘bad’ and ‘good’ of social media and present two novel applications built on innovative machine learning methods. The first application is ‘Twitter Misogynist Abuse Detection’, using a progressive transfer-learning-based deep learning approach. The second is ‘Emergent Trend Discovery’, using a rank-centred clustering approach. The outcomes of these applications boost social media monitoring capability and can assist policymakers and governments in focusing on key issues.
Prof. Camillo Porcaro
Institute of Cognitive Sciences and Technologies (ISTC) – National Research Council (CNR), Rome, Italy
Speech Title: A functional source separation algorithm to enhance error-related potentials monitoring in noninvasive brain-computer interface
Abstract: An error-related potential (ErrP) is a response that can be measured noninvasively and directly from the scalp through electroencephalography (EEG) when a person realizes they have made an error during a task. It has been shown that ErrPs can be automatically detected in time-discrete feedback tasks, which are widely applied in the brain-computer interface (BCI) field for error correction or adaptation. In this work, a semi-supervised algorithm, Functional Source Separation (FSS), is proposed to estimate a spatial filter for learning ErrPs and enhancing the evoked potentials. EEG data recorded on six subjects were used to evaluate the proposed FSS-based method in comparison with the xDAWN algorithm; both were also compared against the Cz and FCz single channels. Performance was assessed on single-trial classification. The results, obtained using the Bayesian Linear Discriminant Analysis (BLDA) classifier, show that FSS (accuracy 0.92, sensitivity 0.95, specificity 0.81, F1-score 0.95) outperforms the other methods (Cz: accuracy 0.72, sensitivity 0.74, specificity 0.63, F1-score 0.74; FCz: accuracy 0.72, sensitivity 0.75, specificity 0.61, F1-score 0.75; xDAWN: accuracy 0.75, sensitivity 0.79, specificity 0.61, F1-score 0.79) in terms of single-trial classification. The proposed FSS-based method thus increases the single-trial detection accuracy of ErrPs compared with both the single channels (Cz, FCz) and the xDAWN spatial filter.
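The abstract reports accuracy, sensitivity, specificity and F1-score for single-trial classification. As a hedged sketch of how such metrics are computed from binary ErrP / no-ErrP decisions, consider the following; the labels below are invented for illustration, not drawn from the study's EEG data.

```python
# Standard binary classification metrics from a confusion matrix,
# as reported in the abstract (1 = ErrP trial, 0 = no-ErrP trial).

def binary_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "f1": 2 * tp / (2 * tp + fp + fn),
    }

# Made-up single-trial decisions for eight trials:
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]
print(binary_metrics(y_true, y_pred))
```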
Dr. Liang Zhao
Associate Professor
Chair of Department of Network Engineering, Shenyang Aerospace University, China
Speech Title: Intelligence-Empowered Vehicular Networking and Computing
Abstract: Vehicular networks (VNs) have been studied extensively with the aim of providing efficient connectivity among vehicles and infrastructure for access to various applications, so that such networks can support all types of services in the Internet of Vehicles (IoV). Over the past two decades, the vehicular ad-hoc network (VANET) has been studied as a means to connect vehicles over wide areas through multi-hop connectivity. However, traditional VANETs still face challenges in enabling intelligent networking and communication because of their decentralized nature, in which an individual vehicle lacks the ability to collect and compute such large amounts of data. Hence, learning algorithms and dedicated networking architectures should be applied to improve networking quality. In this talk, the speaker will present AI-enabled vehicular networking techniques and the related architectures, covering routing metrics, protocol switching, adaptive routing, softwarized VNs, and digital twin-based VNs.
Prof. Gyu Myoung Lee
Professor, School of Computer Science and Mathematics, Liverpool John Moores University, UK
Adjunct Professor, KAIST Institute for IT Convergence, Korea
Speech Title: Challenges for Trustworthy Artificial Intelligence of Things (AIoT)
Abstract: Artificial Intelligence (AI) and the Internet of Things (IoT) are very important technologies for the future, and there are many research activities combining AI and IoT, called AIoT (Artificial Intelligence of Things). Furthermore, data is becoming essential to support AI-based solutions with human interactions. In this regard, this talk introduces key concepts, features and characteristics of human-centric AIoT from a data-driven networking point of view. Through AIoT research, many researchers have identified security, privacy and trust concerns in realizing human-centric AIoT. To cope with the negative effects of AIoT, it is necessary to address trustworthy AIoT. Therefore, this talk presents key challenges for realizing trustworthy AIoT and discusses next steps for future research.
Dr. T.P. Fowdur
Associate Professor, Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Mauritius, Mauritius
Speech Title: AI Enabled Green 5G and 6G Communications
Abstract: To meet the exponentially increasing requirements of bandwidth, throughput, latency and jitter, cellular technologies have experienced a progressive evolution from the 1st generation (1G) to the 5th generation (5G). However, with the incorporation of new hardware to support additional applications and devices, the energy consumption of mobile networks has risen proportionally from one generation to the next. A significant break from this conventional trend in energy consumption is expected with 5G, which already consumes four times more energy than 4G. Moreover, according to an Ericsson Mobility Report, the amount of user data in 2025 is predicted to be four times the current data volume on mobile networks. Consequently, energy efficiency is a major concern in 5G, in contrast to earlier generations. In parallel, the conceptualization of 6G has already begun, with the prospect of connecting everything; providing ubiquitous sensor integration, communication, computation and control; and transmitting over mmWave and THz bands. Such a network evolution will lead to further densification of cells, as it will require the massive deployment of tiny cells overlaid on the existing macro cellular networks. 6G will therefore exert unprecedented pressure on energy efficiency and sustainability due to its high network and technical complexity. To address the energy efficiency issues in 5G and future 6G networks, several machine learning techniques can be employed. For example, in 5G, machine learning techniques can be used to optimize the processes at the core network, access network and edge network, hence improving overall energy efficiency. In 6G, AI-based techniques can effectively improve energy efficiency when applied to the three service classes proposed for 6G, namely Cellular Network Communications (CNC), Machine Type Communications (MTC), and Computation Oriented Communications (COC). This presentation will review the most important AI and machine learning techniques that can be applied to enhance energy efficiency in 5G and future 6G networks.
Dr. Francesco Liberati
Research Fellow, Automatic Control, Sapienza University of Rome
Speech Title: Task Execution Control in an Assembly Line via Deep Reinforcement Learning
Abstract: This paper presents a deep reinforcement learning approach for optimally controlling the execution of a set of integration tasks in an assembly line. The work is inspired by the problem of optimizing the assembly of a space vehicle at a launch base in order to increase the launch rate. The main goal of the controller is to ensure that the tasks are executed in minimal time while satisfying all the existing constraints. A comparison with an advanced alternative control approach based on model predictive control is made, and proof-of-concept simulations are presented to show the effectiveness of the proposed solution.
Prof. Boaz Lerner
Associate Professor and Head of the Machine Learning & Data Mining Lab
Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel
Speech Title: Utilizing digital traces of mobile phones for understanding social dynamics in urban areas
Abstract: Understanding land use in urban areas, from the perspective of social function, is beneficial for a variety of fields, including urban and highway planning, advertising, and business. However, big cities with complex social dynamics and rapid development complicate the task of understanding these social functions. In this work, we analyze and interpret human social function in urban areas as reflected in cellular communication usage patterns. We base our analysis on the digital traces left by mobile phone users, and from this raw data we derive a varied collection of features that illuminate the social behavior of each land use. We divide space and time into basic spatiotemporal units and classify them according to their land use, categorizing land uses with a leveled hierarchy of semantic categories at different levels of detail. We apply this methodology to a dataset consisting of 62 days of cellular data recorded in nine cities in the Tel Aviv district. The methodology proved beneficial, with an accuracy rate ranging from 84% to 91% depending on land-use label resolution. In addition, analyzing the results sheds light on some of the limitations of relying solely on cellular communication as a data resource; we discuss some of these problems and offer applicable solutions.
Dr. Hojjat Salehinejad
University of Toronto, Canada
Speech Title: Energy-Based Dropout and Pruning of Deep Neural Networks
Abstract: Dropout is a well-known regularization method that samples a sub-network from a larger deep neural network and trains different sub-networks on different subsets of the data. Inspired by the dropout concept, we will discuss EDropout, an energy-based framework for pruning neural networks in classification tasks. In this approach, a set of binary pruning state vectors (the population) represents a set of corresponding sub-networks of an arbitrary original neural network. An energy loss function assigns a scalar energy loss value to each pruning state. The energy-based model (EBM) stochastically evolves the population to find states with lower energy loss. The best pruning state is then selected and applied to the original network. Similar to dropout, the kept weights are updated using backpropagation in a probabilistic model. The EBM then searches for better pruning states, and the cycle continues: each iteration switches between the energy model, which manages the pruning states, and the probabilistic model, which updates the kept weights. The population can dynamically converge to a pruning state, which can be interpreted as dropout leading to pruning of the network. From an implementation perspective, unlike most pruning methods, EDropout can prune neural networks without manual modification of the network architecture code. We have evaluated the proposed method on different flavors of ResNet, AlexNet and SqueezeNet, compared against l₁ pruning, ThiNet and ChannelNet, on the Kuzushiji, Fashion, CIFAR-10, CIFAR-100, Flowers, and ImageNet data sets, comparing the pruning rate and classification performance of the models. Networks trained with EDropout on average achieved a pruning rate of more than 50% of the trainable parameters, with drops of approximately less than 5% and less than 1% in Top-1 and Top-5 classification accuracy, respectively.
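A minimal sketch of the EDropout-style search loop described above, under strong simplifying assumptions: a made-up "energy" function stands in for the energy loss of a masked network, and single-bit mutations stand in for the EBM's stochastic evolution of the population (the real method also runs backpropagation on the kept weights between search steps).

```python
import random

random.seed(0)
N_WEIGHTS, POP_SIZE, ITERS = 10, 8, 300

def energy(state):
    # Hypothetical energy: reward pruning (few kept weights) but heavily
    # penalise dropping weights 0-2, which stand in for "important" ones.
    kept = sum(state)
    dropped_important = sum(1 - state[i] for i in range(3))
    return kept + 5 * dropped_important

# Population of binary pruning state vectors (1 = keep the weight).
population = [[random.randint(0, 1) for _ in range(N_WEIGHTS)]
              for _ in range(POP_SIZE)]

for _ in range(ITERS):
    # Evolve each state by a random single-bit flip, accepted only if the
    # energy does not increase.
    for i, state in enumerate(population):
        candidate = state[:]
        candidate[random.randrange(N_WEIGHTS)] ^= 1
        if energy(candidate) <= energy(state):
            population[i] = candidate

# The best pruning state keeps the "important" weights and drops the rest.
best = min(population, key=energy)
print(best, energy(best))
```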
Dr. Dmitri E. Kvasov
DIMES, University of Calabria, Italy
Speech Title: Global optimization in machine learning: Metaheuristic vs. deterministic approaches
Abstract: Numerical global optimization plays an important role in algorithmic configuration to improve the generalization ability of machine learning techniques. Because of the high computational cost involved in this decision-making process, the main goal is to develop efficient global optimization algorithms that produce reasonably good, guaranteed solutions within a limited budget of function evaluations. The objective function in this case can be black-box, multiextremal, and non-differentiable, thus precluding the use of descent schemes with derivatives. Derivative-free methods can therefore be particularly suitable for tackling these challenging global optimization problems, and can be of either deterministic or stochastic (in particular, metaheuristic) nature. Some methods from these two groups are briefly surveyed and their application in the machine learning field is discussed.
Dr. Mohammad Shahadat Hossain
Professor,
Department of Computer Science and Engineering, University of Chittagong, Chittagong-4331, Bangladesh
Visiting Academic Staff, The University of Manchester, UK
Visiting Professor, Trisakti University, Indonesia
Visiting Scholar Professor, Erasmus Mundus Joint Master Program, Europe
Speech Title: The Evolution of Belief Rule Based Expert Systems
Abstract: Belief Rule Based Expert Systems (BRBESs) are widely used in diverse domains, especially where uncertainty is a critical issue. This talk will present the evolution of the BRBES methodology, from knowledge representation through inference and learning, taking into account the complexity of problems in diverse domains. It will demonstrate the scope of, and the challenges in, integrating BRBESs with deep learning as well as with evolutionary optimization algorithms. Such integration can be considered fundamental either for developing intelligent decision technologies or for making AI systems more explainable by ensuring a balance between accuracy and explainability. Results from our ongoing research will be presented to demonstrate the applicability of our approaches.
Dr. A. Shahina
Professor, Department of Information Technology, Sri Sivasubramaniya Nadar College of Engineering, Anna University, Chennai, India
Speech Title: Otoacoustic Emission as a viable biometric for person identification
Abstract: Biometrics, which have become integrated into our daily lives, can fall prey to falsification attacks. For example, the fingerprint of a user can easily be forged using cheap and readily available gelatin and a mould. Researchers at McAfee, the cyber security firm, have been able to trick a facial recognition system into falsely recognising an image of person A, presented to the system, as that of person B, using an image-translation algorithm known as CycleGAN. This could lead to security concerns. In this talk, I will discuss the feasibility of using Otoacoustic Emissions (OAE) as a viable biometric modality that is robust to falsification attacks. Otoacoustic Emissions are generated by the human cochlea in response to an external sound stimulus. I will discuss how, using both the raw 1D OAE signals and a 2D time-frequency representation of the signal obtained with the Continuous Wavelet Transform (CWT), we achieve state-of-the-art results in real time, with the added advantage of robustness to falsification attacks.
Assoc. Prof. R. S. Hegadi
Dean, School of Computer Science,
Central University of Karnataka, Kalaburagi, India
Speech Title: Predictive Analytics
Abstract: Predictive analytics is the application of statistical techniques and machine-learning algorithms to predict the future outcome of a business or an event with the help of a large volume of historical data about that business. It is one of the major research advancements of recent years and is extensively used for predicting future occurrences based on past events. Even though the concept of predictive analytics has been around for a decade, many organizations are only now reaping its benefits, for reasons such as increased competitiveness in business, a growing volume of data, better methods for extracting information from large volumes of data, the reduced cost of hardware and software systems, more and more open-source software, and tough economic conditions. Many organizations have turned to predictive analytics to find solutions for numerous business challenges, such as detecting fraud, optimizing marketing campaigns, improving operations, and reducing business risk. This talk will address what predictive analytics is, how it is transforming our lives, and other related questions, with suitable examples.
Dr. Naveed Akhtar
Office of National Intelligence Australia - Research Fellow
Lecturer (AI, Machine Learning & Data Science)
Department of Computer Science & Software Engineering, University of Western Australia, Australia
Speech Title: Explaining Deep Learning with Adversarial Attacks
Abstract: Deep visual models are susceptible to adversarial perturbations of their inputs. Although these signals are carefully crafted, they still appear as noise-like patterns to humans. This observation has led to the argument that deep visual representations are misaligned with human perception. In this talk, we will offer a mild counter-argument by providing evidence of human-meaningful patterns in adversarial perturbations. We will introduce an attack that fools a network into confusing a whole category of objects (the source class) with a target label. Our attack also limits unintended fooling by samples from non-source classes, thereby circumscribing human-defined semantic notions for network fooling. We will demonstrate that our attack not only leads to the emergence of regular geometric patterns in the perturbations, but also reveals insightful information about the decision boundaries of deep models. Exploring this phenomenon further, we will alter the `adversarial' objective of our attack to use it as a tool to `explain' deep visual representations. We will show that, by careful channeling and projection of the perturbations computed by our method, we can visualize a model's understanding of human-defined semantic notions.
Dr Hector Zenil
Senior Researcher, Department of Computer Science, The Alan Turing Institute, UK
Unit of Computational Medicine, Center for Molecular Medicine, SciLifeLab and the Karolinska Institute, Sweden
Speech Title: Artificial Intelligence and Algorithmic Information Dynamics in Medicine
Abstract: In this talk, I will explain how current approaches to machine and deep learning, based on traditional statistics and information theory, fail to capture fundamental properties of our world and are ill-equipped to deal with high-level functions such as inference, abstraction, and understanding; they are fragile and can easily be deceived. In contrast, we will explore recent attempts to combine symbolic and differentiable computation in a form of unconventional hybrid computation that is more powerful and may eventually display and grasp these higher-level elements of human intelligence. In particular, I will introduce the fields of Algorithmic Information Dynamics and Algorithmic Machine Intelligence, based on the theories of computability and algorithmic probability, and explain how these approaches promise to shed light on the weaknesses of current AI and how we might attempt to circumvent some of its limitations.
Dr. Daniil Yurchenko
Associate Professor, School of Engineering & Physical Sciences, Heriot-Watt University, UK
Speech Title: Improving Flow-Induced Vibration Energy Harvesting Using Machine Learning
Abstract: Aims: To study how machine learning can improve energy harvesting from wind-induced vibrations. Methods: Three different wake-galloping piezoelectric energy harvesters are used to study their galloping response. Each harvester comprises a bluff body with a square, triangular or circular cross-section. The bluff bodies are placed upstream in the air flow, whose velocity spans 2.9-14.5 m/s. A rectangular parallelepiped bluff body, mounted on a cantilever beam with a piezoelectric sheet attached, is placed downstream. Using machine learning, the present work selects different parameters as input features and trains two machine learning models to predict the amplitude of the vortex-induced vibration of the two cylinders as well as the output voltage and vibration displacement of the piezoelectric energy harvester under wake-galloping vibrations. Results: Three machine learning algorithms were tested, namely Decision Tree Regression, Random Forest and Gradient Boosted Regression Tree (GBRT). The results indicate that the GBRT model exhibits the best performance in predicting both root-mean-square voltage and maximum displacement. Conclusions: This study demonstrates the promising application potential of ML in the field of piezoelectric energy harvesting and shows how ML can help in predicting and optimizing the performance of such devices. Acknowledgements: This work was supported by the National Natural Science Foundation of China (Grant No. 51977196) and the China Postdoctoral Science Foundation (2020T130557).
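As a hedged illustration of the gradient-boosted regression idea behind GBRT, the sketch below hand-rolls boosting with one-feature decision stumps; the "velocity to voltage" data are synthetic stand-ins for the study's measurements, not real results, and a library implementation would normally be used instead.

```python
# Gradient boosting on decision stumps: each round fits a stump to the
# current residuals and adds it, scaled by a learning rate, to the model.

def fit_stump(x, residuals):
    """Best single split minimising squared error of the residuals."""
    best = None
    for split in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= split]
        right = [r for xi, r in zip(x, residuals) if xi > split]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda xi: lmean if xi <= split else rmean

def gbrt_predict(model, xi, lr=0.3):
    return model[0](xi) + lr * sum(stump(xi) for stump in model[1:])

def gbrt_fit(x, y, n_rounds=50):
    model = [lambda xi: sum(y) / len(y)]  # start from the mean prediction
    for _ in range(n_rounds):
        residuals = [yi - gbrt_predict(model, xi) for xi, yi in zip(x, y)]
        model.append(fit_stump(x, residuals))
    return model

# Toy "flow velocity -> RMS voltage" data with a jump (galloping onset).
x = [3, 4, 5, 6, 7, 8, 9, 10]
y = [0.1, 0.1, 0.2, 1.0, 1.2, 1.3, 1.3, 1.4]
model = gbrt_fit(x, y)
print(round(gbrt_predict(model, 6.5), 2))
```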
Ali Wagdy Mohamed
Department of Operations Research, Faculty of Graduate Studies for Statistical Research, Cairo University, Giza 12613, Egypt
Wireless Intelligent Networks Center (WINC), School of Engineering and Applied Sciences, Nile University, Cairo, Egypt.
Speech Title: Gaining-sharing knowledge based algorithm for solving optimization problems: a novel nature-inspired algorithm
Abstract: This talk presents a novel nature-inspired algorithm called the Gaining-Sharing Knowledge based algorithm (GSK) for solving optimization problems over continuous space. The GSK algorithm mimics the process of gaining and sharing knowledge during the human life span. It is based on two vital stages: the junior gaining-and-sharing phase and the senior gaining-and-sharing phase. The present work mathematically models these two phases to achieve the process of optimization. In order to verify and analyze the performance of GSK, numerical experiments were conducted on a set of 30 test problems from the CEC2017 benchmark for 10, 30, 50 and 100 dimensions. In addition, the GSK algorithm has been applied to the set of real-world optimization problems proposed for the IEEE CEC2011 evolutionary algorithm competition. A comparison with 10 state-of-the-art and recent metaheuristic algorithms is carried out. Experimental results indicate that, in terms of robustness, convergence and solution quality, GSK is significantly better than, or at least comparable to, state-of-the-art approaches, with outstanding performance in solving optimization problems, especially in high dimensions.
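A toy sketch of the gaining-and-sharing intuition follows. It is deliberately simplified: these are not the published GSK update equations, and the sphere function stands in for the CEC benchmark problems.

```python
import random

# Individuals move using information from better and worse peers
# ("gaining") and from a random partner ("sharing"), with greedy
# selection as in most population-based metaheuristics.

random.seed(1)
DIM, POP, GENS, K = 5, 20, 150, 0.5

def sphere(x):                      # classic test objective, minimum at 0
    return sum(xi * xi for xi in x)

pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
initial_best = min(map(sphere, pop))

for _ in range(GENS):
    pop.sort(key=sphere)
    best, worst = pop[0], pop[-1]
    new_pop = []
    for x in pop:
        partner = random.choice(pop)        # sharing: a random peer
        trial = [
            # gaining: a pull along the worst-to-best direction;
            # sharing: a blend toward the partner's coordinate
            xj + random.uniform(0, 1) * (K * (best[j] - worst[j])
                                         + K * (partner[j] - xj))
            for j, xj in enumerate(x)
        ]
        # greedy selection: keep the trial only if it improves
        new_pop.append(trial if sphere(trial) < sphere(x) else x)
    pop = new_pop

final_best = min(map(sphere, pop))
print(initial_best, "->", final_best)  # the best value never worsens
```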
S BalaKrishnan
Professor and Head
Department of Computer Science and Business Systems, Sri Krishna College of Engineering and Technology, Coimbatore, Tamil Nadu, India
Speech Title: IoT Development: Challenges and Opportunities
Abstract: According to the India Internet of Things (IoT) Market Forecast & Opportunities, 2020, the IoT market in India is projected to grow at a CAGR of over 28% during 2015 - 2020.
IoT is being rapidly brought into use across diverse industry verticals to reduce operational and manpower costs, and increase operational efficiency.
Consumer electronics, automotive & transportation, BFSI (Banking, Financial Services and Insurance), home & building, energy & utilities, retail, supply chain & logistics, and manufacturing are the key emerging application areas where IoT technology is being most widely adopted. However, India's IoT market is highly fragmented, with numerous players operating across the value chain.
With the growing need for connectivity among devices, systems and services using a variety of protocols and domains, for automating business processes, and for real-time monitoring and tracking of services and systems, Internet of Things (IoT) technology has been gaining increasing market traction over the last few years.
In the IoT concept, a Thing can be any natural or man-made object that can be assigned an IP address and provided with the ability to transfer data over a network.
Dr. Hamed Taherdoost
University Canada West, Vancouver, Canada
HamTa Group | Hamta Business Corporation
Speech Title: Information Security Awareness to Facilitate Digital Transformation
Abstract: The world has gone digital, and more businesses are shifting to digital platforms every day. Today, a focus on digital transformation seems to be a must for businesses to stay competitive. As the pace of change in the digital world visibly accelerates, the appeal of shifting from traditional platforms to digital ones is also increasing dramatically. As most businesses align themselves with the flux of digital transformation, the potential vulnerabilities arising from the digitalization of systems have broadened as well.
Information security is an indispensable element of systems, since it is directly correlated with the experience of the end user. Thus, it is impossible to accomplish digital transformation objectives without taking information security considerations into account. Cyber attacks and unauthorized access to valuable information are typical threats that many businesses may face during digital transformation. As human beings are the end users of systems and therefore play vital roles in information security, raising users' awareness by running information security awareness programs is an essential approach to avoiding or neutralizing the unwanted security consequences that may occur while transforming systems digitally.
Dr. Chaojie Li
Senior Research Associate
School of Electrical Engineering and Telecommunications,
University of New South Wales, Sydney, Australia
Speech Title: Integration of Renewable Energy Resources into Active Distribution Network through Explainable AI (XAI)
Abstract: Explainable AI (XAI) has been introduced to guide various industrial processes involving complex human behaviours; it aims to interpret the what, why and how of the solutions produced by deep learning systems for engineering practitioners. This talk will present research on how to achieve an explainable model using different techniques from a data-driven perspective. Moreover, how to apply XAI to energy systems will be presented through the integration of renewable energy resources. Specifically, a large-scale, efficient computational algorithm will be discussed, while game-theoretic models will be highlighted for challenging issues including demand-side management, demand response for EV management, multi-energy trading mechanism design, and distributed renewable energy integration in the smart grid.
Assoc. Prof. Dr. Shyamala Doraisamy
Faculty of Computer Science and Information Technology, Universiti Putra Malaysia
Honorary Research Fellow, School of Computer Science, College of Science, University of Lincoln, United Kingdom
Speech Title: Machine Listening and its Applications
Abstract: Research on machine listening is on the rise with advances in artificial intelligence and sound-processing technologies, alongside growing collections of digital sound recordings and sound sensor data. Machine listening is a field encompassing research on a wide range of tasks and methods, such as speech recognition, audio content recognition, audio-based search, content-based music analysis, signal processing and auditory modelling. This talk will present an overview of machine listening, followed by discussions of several past projects on music and health informatics. An ongoing machine listening project on tyre-road sound interactions, aimed at improving vehicle safety systems, will also be discussed.
Eugenio Cesario
Associate Professor of Computer Engineering
DICES Department, University of Calabria, Italy
Speech Title: Spatio-Temporal Crime Predictions in Smart Cities: Applications, Achievements and Challenges
Abstract: Steadily increasing urbanization is causing significant economic and social transformations in urban areas, posing several challenges related to city management and services. In particular, in cities with higher crime rates, effectively providing for public safety is an increasingly complex undertaking. To handle this complexity, new technologies are enabling police departments to access growing volumes of crime-related data that can be analyzed to understand patterns and trends, and ultimately support more effective crime prevention.
This talk presents an overview of how data-driven predictive approaches can support police officers in forecasting crimes in urban areas, with the aim of increasing the efficient deployment of police resources within a given territory. It then presents a multi-step algorithm, based on spatial analysis and auto-regressive models, to automatically detect high-risk crime regions in urban areas and reliably forecast crime trends in each region. The experimental evaluation, performed on two real-world datasets collected in Chicago and New York City, shows good accuracy in spatial and temporal crime forecasting over rolling time horizons.
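As a minimal illustration of the auto-regressive forecasting step, the sketch below rolls an AR(2) model forward over a per-region weekly crime count; the coefficients and counts are invented for the example, not fitted to the Chicago or New York data.

```python
# AR(2) forecast rolled forward over a rolling horizon: each new value is
# a weighted combination of the two most recent (observed or forecast) ones.

def ar2_forecast(series, a1, a2, steps):
    """AR(2): next value = a1*y[t-1] + a2*y[t-2], rolled forward `steps` times."""
    history = list(series)
    out = []
    for _ in range(steps):
        nxt = a1 * history[-1] + a2 * history[-2]
        history.append(nxt)
        out.append(nxt)
    return out

# Made-up weekly crime counts for one high-risk region:
weekly_counts = [30, 28, 31, 29, 30, 32]
print([round(v, 1) for v in ar2_forecast(weekly_counts, 0.6, 0.4, 3)])
# → [31.2, 31.5, 31.4]
```

In the talk's setting, a model of this kind would be fitted per detected region, with the forecast horizon rolled forward as new data arrives.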