I am a Research Scientist at the TCL AI Lab, TCL Research Hong Kong, where I work on Computer Vision, Machine Learning, and Deep Learning. I received my Ph.D. degree in Electrical Engineering from the City University of Hong Kong in 2019. During my Ph.D. studies, I worked under the supervision of Dr. Lai-Man Po on face liveness detection using deep learning techniques. Before my Ph.D., I worked in the NuSyS Lab under the supervision of Dr. Muhammad Tariq on Wireless Multimedia Sensor Networks (WMSN). I also worked as an Algorithm Specialist at TCL Corporate Research (Hong Kong) Co., Limited before assuming duties as a Research Scientist at the same center in 2021.
Research Interests
Deep Learning and Computer Vision, Federated Learning, Video Understanding, Computational Photography, Audio Understanding
Available Positions
Software Engineer (Intern)
Software Engineering Intern – Smart Homes at TCL
TCL is seeking a motivated Software Engineering Intern to join our team and contribute to the development of innovative smart home solutions. In this role, you’ll design, test, and optimize software for IoT devices, cloud integration, and user-friendly interfaces. Candidates pursuing a degree in Computer Science, Software Engineering, or a related field, with proficiency in Python, Java, or C++ and a strong interest in IoT and smart home innovation, are encouraged to apply. This is a hands-on opportunity to gain real-world experience in a fast-paced, innovative environment, with the potential for future full-time opportunities. If you’re excited about shaping the future of smart homes, send your resume to [yasar@tcl.com].
News
- [2024] Our paper titled Exploring Federated Self-Supervised Learning for General Purpose Audio Understanding has been accepted to the ICASSP-2024 workshop on Self-supervision in Audio, Speech and Beyond. [Preprint]
- [2024] Our paper titled AudioRepInceptionNeXt: A lightweight single-stream architecture for efficient audio recognition has been accepted in Neurocomputing. [Preprint] [Code]
- [2023] Our paper titled Large Separable Kernel Attention: Rethinking the Large Kernel Attention Design in CNN has been accepted in ESWA. [Preprint] [Code]
- [2023] Our paper titled L-DAWA: Layer-wise Divergence Aware Weight Aggregation in Federated Self-Supervised Visual Representation Learning has been accepted in ICCV-2023 [Preprint] [Supplementary Materials].
- [2023] Our solution won the first-place award in the EPIC-SOUNDS Audio-Based Interaction Recognition challenge.
- [2023] A short highlight on using federated learning with self-supervision for video understanding is now available on the Flower Blog.
- [2022] Our paper titled Federated Self-Supervised Learning for Video Understanding has been accepted in ECCV-2022
- [2022] Video of my short talk on Federated Learning with Self-Supervision at the Flower Summit 2022 is now available on the Flower YouTube Channel
- [2022] A paper has been accepted at the L3D-IVU workshop at CVPR-2022.
- [2021] Our paper titled VCGAN: Video Colorization with Generative Adversarial Networks has been accepted for publication in IEEE Transactions on Multimedia
- [2021] Our book chapter titled Visual Information Processing and Transmission in Wireless Multimedia Sensor Networks: A Deep Learning-Based Practical Approach has been accepted for publication in the upcoming book Internet of Multimedia Things (IoMT): Techniques and Applications
Reviewer
- Journals: IEEE TCSVT, IEEE Access, ESWA, JVCI, SPIC, IEEE TNNLS
- Conferences: IEEE CVPR-2024, IEEE ICET, IEEE INMIC, CVPR-FedVision (2023-2024)