Title: Multiresolution Deep Learning
Date: 17/11/2023
Presenter: Yao Liang, Ph.D.
Local Host: Department of Biohealth Informatics
Seminar series: IU AI Consortium Seminar Series
Report by: Swaraj Thorat
Dr. Yao Liang earned his Ph.D. in Computer Science from Clemson University between 1992 and 1997, laying the groundwork for an extensive career in academia and research. This report summarizes his seminar, which introduces Multiresolution Learning (ML) as a way to strengthen Deep Neural Networks (DNNs): it covers DNN vulnerabilities, the decomposition of training data into multiple resolutions, and experiments on the ESC-10 dataset. ML improves classification accuracy while hardening DNNs against adversarial attacks.
In today’s world of remarkable inventions and tough problems, Deep Neural Networks (DNNs) have impressed us with what they can do across many different areas. But there is one big problem: they can be tricked by adversarial attacks. Recent studies have shown that DNNs are easily fooled by tiny, deliberately crafted changes called adversarial examples. This report discusses Multiresolution Learning (ML), a new way to make DNNs stronger against these attacks.
Let’s navigate through the multifaceted journey of Multiresolution Learning:
Demystifying Deep Neural Network Vulnerabilities
The seminar delves into the susceptibility of Deep Neural Networks (DNNs), exemplified by the formidable One-Pixel Attack, and emphasizes the complexities involved in securing these intricate models. Ongoing efforts such as adversarial training and structural modifications attempt to address these vulnerabilities, but recent research highlights the inherent fragility of DNNs, motivating the exploration of new approaches to improve their resilience against adversarial perturbations.
Unveiling the Essence of Multiresolution Learning
Multiresolution Learning (ML) deviates from conventional single-resolution approaches by decomposing training data into multiple approximation levels. This technique explores the original data across different resolutions, aiming to uncover hidden structures and patterns crucial for improved generalization and robustness.
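To make the decomposition concrete, below is a minimal sketch using the PyWavelets library. The 'db4' wavelet, the three-level depth, and the strategy of zeroing detail coefficients are illustrative assumptions for this report, not details specified in the seminar.

```python
# A minimal sketch of decomposing a 1D signal (e.g., an ESC-10 audio clip)
# into successively coarser approximation levels via the wavelet transform.
import numpy as np
import pywt

def approximation_levels(signal, wavelet="db4", levels=3):
    """Return views of `signal` ordered from coarsest to finest,
    ending with the full-resolution original."""
    views = []
    for level in range(levels, 0, -1):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Keep only the approximation; zero out every detail subband.
        coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
        views.append(pywt.waverec(coeffs, wavelet)[: len(signal)])
    views.append(signal)  # the original, highest-resolution data
    return views

# Example: a 1-second clip sampled at 16 kHz yields four training views.
clip = np.random.randn(16000)
views = approximation_levels(clip)
```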
Evaluating ML through Experimentation
To gauge the effectiveness of Multiresolution Learning (ML), the researchers ran experiments on the ESC-10 dataset, a collection of audio recordings of environmental sounds. They trained two models: one with ML and one following Traditional Learning (TL), i.e., conventional single-resolution training. Both models used the same 10-layer deep CNN architecture.
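One plausible realization of the ML training procedure is a coarse-to-fine curriculum over the approximation levels produced above, sketched below in PyTorch. The epoch split, optimizer, and batch size are hypothetical choices, not the seminar's exact configuration; TL is the degenerate case of a single full-resolution dataset.

```python
# A hedged sketch of coarse-to-fine multiresolution training.
import torch
import torch.nn as nn

def train_multiresolution(model, level_datasets, epochs_per_level=10):
    """Train `model` on each approximation level in turn, coarsest first,
    carrying the learned weights forward to the next (finer) level."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for dataset in level_datasets:  # ordered coarsest -> finest
        loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
        for _ in range(epochs_per_level):
            for x, y in loader:
                optimizer.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                optimizer.step()
    return model

# Traditional Learning (TL) corresponds to calling this with a single
# full-resolution dataset: train_multiresolution(model, [full_res_dataset]).
```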
To assess performance, the researchers measured each model's average classification accuracy: the percentage of recordings for which the model identified the correct sound class. The experiments showed that the ML model surpassed the TL model on this metric, suggesting that ML is the more effective approach for training DNNs that must also withstand adversarial attacks, as validated next.
Validating Robustness
The trained models' resilience to adversarial perturbations is evaluated with the One-Pixel Attack. The results show a significant decrease in attack success rate for ML-trained models relative to TL-trained ones, demonstrating the enhanced robustness conferred by ML.
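For reference, the One-Pixel Attack searches for a single pixel (or, for audio models, a single spectrogram bin) whose modification flips the prediction, using differential evolution. The sketch below shows how attack success might be tested with SciPy; `predict_proba` is a hypothetical helper returning class probabilities, and the search budget and 8-bit value bounds are illustrative.

```python
# A hedged sketch of testing One-Pixel Attack success on one input.
# `predict_proba(model, img)` is a hypothetical helper that returns a
# probability vector over classes for a single input.
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack_succeeds(model, image, true_label):
    h, w, _ = image.shape

    def perturb(params):
        x, y, r, g, b = params
        adv = image.copy()
        adv[int(x), int(y)] = (r, g, b)  # overwrite exactly one pixel
        return adv

    def objective(params):
        # Differential evolution minimizes this: the model's confidence
        # in the true class after the one-pixel change.
        return predict_proba(model, perturb(params))[true_label]

    bounds = [(0, h - 1), (0, w - 1), (0, 255), (0, 255), (0, 255)]
    result = differential_evolution(objective, bounds, maxiter=75, popsize=20)
    adv = perturb(result.x)
    return np.argmax(predict_proba(model, adv)) != true_label

# The attack success rate is the fraction of test inputs for which this
# search finds a prediction-flipping pixel; a lower rate means a more
# robust model.
```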
Insights from the 2D Wavelet Transform
Investigating further, the study examines the role of the 2D Wavelet Transform in signal reconstruction and its impact on DNNs' adversarial robustness. DeepFool, which computes the smallest perturbation required to change a model's prediction, provides a complementary measure: the larger the perturbation a model forces an attacker to use, the more robust it is, and this metric offers valuable insight into the effectiveness of ML.
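As a brief illustration of the 2D case: a single-level 2D wavelet transform splits an input such as a spectrogram into one approximation subband and three detail subbands, and reconstruction from all four is exact. The sketch below uses PyWavelets; the 'haar' wavelet and the 64x64 input are illustrative choices.

```python
# A minimal sketch of 2D wavelet decomposition and reconstruction.
import numpy as np
import pywt

image = np.random.rand(64, 64)                 # stand-in for a spectrogram
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")    # approximation + 3 detail subbands

# Reconstructing from the approximation alone gives a low-resolution view;
# reconstructing from all subbands recovers the original exactly.
low_res = pywt.idwt2((cA, (None, None, None)), "haar")
exact = pywt.idwt2((cA, (cH, cV, cD)), "haar")
assert np.allclose(exact, image)
```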
Key Observations and Implications
The consistent observation across experiments is higher average classification accuracy for ML models than for their TL counterparts. These findings underscore ML's ability to capture latent patterns within the data, leading to improved model performance.
Conclusion
Multiresolution Learning emerges as a promising approach to fortifying Deep Neural Networks against adversarial attacks. By exposing patterns hidden across multiple resolutions, the technique improves both accuracy and robustness, making DNN models more reliable and more broadly applicable.