Learning and Intelligence over Weak Communication Links

Virtual: https://events.vtools.ieee.org/m/413460

Special Presentation by Prof. Petar Popovski (Aalborg University, Denmark), hosted by the Future Networks Artificial Intelligence & Machine Learning (AIML) Working Group.

Date/Time: Thursday, April 18th, 2024 @ 12:00 UTC

Topic: Learning and Intelligence over Weak Communication Links

Abstract: Besides the fascinating questions of how to train increasingly capable Machine Learning (ML) models and explain their behavior, a suite of highly relevant challenges emerges when ML models become elements of distributed, connected systems and networks. A popular instance of this set of problems is federated learning. The first part of the talk presents a federated learning setup over a LEO satellite constellation, showing that the predictability of satellite movement can be used to speed up the training process. The second part deals with a model for supervised learning in which Alice has access to abundant data features but not the labels, while Bob is able to provide a correct label for any data point. Alice is connected to Bob through a low-rate communication link, and the talk presents strategies that combine active learning and data compression to enable Alice to obtain the labels. Finally, the third part discusses a generative network layer for communication protocols, implemented in an intermediate network node that contains a Generative AI module. When the link to the source is weak, instead of waiting for packets to be routed, the node can generate the packets that need to be sent to the destination. The generative network layer is an early step towards potential changes in communication protocols driven by increasingly capable AI.

Co-sponsored by: IEEE Future Networks

Speaker(s): Prof. Petar Popovski
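The Alice-and-Bob setup in the second part lends itself to a small illustration. The sketch below is a toy instance built on assumptions of my own (a 1-D threshold concept and uncertainty sampling as the query rule), not the strategy presented in the talk; it only shows why querying labels selectively saves bandwidth on the low-rate link.

```python
import random

# Toy sketch of the Alice/Bob setup: Alice holds unlabeled features, Bob is
# a labeling oracle reachable only over a low-rate link, so Alice uses
# active learning (uncertainty sampling) to request as few labels as
# possible. The 1-D threshold task below is an illustrative stand-in.

def bob_label(x):
    """Bob's oracle: the true labeling rule, hidden from Alice."""
    return 1 if x >= 0.5 else 0

def fit_threshold(labeled):
    """Alice's model: midpoint between the largest known 0 and smallest known 1."""
    zeros = [x for x, y in labeled if y == 0]
    ones = [x for x, y in labeled if y == 1]
    lo = max(zeros) if zeros else 0.0
    hi = min(ones) if ones else 1.0
    return (lo + hi) / 2

def active_learn(pool, budget):
    labeled = []
    for _ in range(budget):
        t = fit_threshold(labeled)
        # Query the point Alice is least certain about (closest to her current
        # boundary); only this one feature/label pair crosses the link.
        x = min(pool, key=lambda p: abs(p - t))
        pool.remove(x)
        labeled.append((x, bob_label(x)))
    return fit_threshold(labeled)

random.seed(0)
pool = [random.random() for _ in range(200)]
t = active_learn(pool, budget=10)
print(f"estimated threshold after 10 label queries: {t:.3f}")
```

With 200 unlabeled points, Alice recovers the decision boundary to good accuracy after sending only 10 label requests, rather than 200.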

Madhu Chinnambeti Presents: Retrieval Augmented Generation (RAG)

Room: Room 105, Bldg: Computer Science Building, 35 Olden St., Princeton University, Princeton, New Jersey, United States, 08544, Virtual: https://events.vtools.ieee.org/m/414598

Large Language Models (LLMs) are widely used in current Generative AI systems. Although LLMs demonstrate significant capabilities, they face challenges such as hallucination, outdated knowledge, and non-transparent, untraceable reasoning processes. Retrieval Augmented Generation (RAG) has emerged as a promising solution by incorporating knowledge from external databases. This enhances the accuracy and credibility of the models, particularly for knowledge-intensive tasks, and allows for continuous knowledge updates and the integration of domain-specific information. RAG synergistically merges an LLM's intrinsic knowledge with the vast, dynamic repositories of external databases. This talk gives an overview of the structure of RAG systems and includes a demo of their capabilities.

Madhu Chinnambeti is an SVP and Senior Data Scientist at SupportVectors. In his current role, Madhu advises companies, aspiring engineers, and entrepreneurs on the ML, AI, and Generative AI technology stack. He has over 28 years of experience in Computer Science and Engineering and is currently working on his PhD dissertation in the area of Graph Neural Networks (GNNs) and deep learning at Boise State University. His current research and publications advance GNNs for cybersecurity and fraud detection, and his passions also include tech education in evolving fields like Generative AI. Madhu is a volunteer advisory board member of the Disability:In New Jersey affiliate, helping individuals with disabilities find jobs.

Speaker(s): Madhu Chinnambeti
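The retrieve-then-generate structure the talk surveys can be sketched in a few lines. The documents, the bag-of-words scoring, and the prompt template below are illustrative stand-ins of my own, not any specific RAG framework or the speaker's demo; a real system would use learned embeddings, a vector database, and an actual LLM call.

```python
import math
from collections import Counter

# Minimal RAG sketch: score passages in an external store against the query,
# retrieve the best match, and prepend it to the prompt so the language
# model can ground its answer in retrieved text.

DOCS = [
    "RAG augments a language model with passages retrieved from a database.",
    "Graph neural networks operate on nodes and edges of a graph.",
    "Federated learning trains models across devices without sharing raw data.",
]

def embed(text):
    """Toy embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Return the k passages most similar to the query."""
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query):
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("How does RAG use a database?")
print(prompt)
```

The retrieval step pulls the RAG passage (not the GNN or federated-learning ones) into the context, which is the mechanism that lets the model answer from current, domain-specific sources instead of only its frozen training knowledge.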