From Federated to Fog Learning: Expanding the Frontier of Model Training over Contemporary Wireless Network Systems

Room: Room # 240, Bldg: Busch Campus - Electrical and Computer Engineering, Rutgers University, 94 Brett Road, Piscataway, New Jersey, United States, 08854-8058

Abstract: Fog learning is an emerging paradigm for optimizing the orchestration of artificial intelligence services over contemporary network systems. Unlike existing distributed techniques such as federated learning, fog learning intrinsically emphasizes in its design the unique node, network, and data properties encountered in today's fog networks, which span computing elements from the edge to the cloud. An important thread of research in fog learning has been understanding the role that local topologies, formed on an ad-hoc basis among proximal groups of heterogeneous computing elements, can play in elevating the achievable tradeoff between intelligence quality and resource efficiency. In this talk, I will discuss recent results on the analysis of fog learning processes which give insights into the impact that these topologies, along with other properties such as model characteristics and fog decision parameters, have on global training performance. Additionally, I will discuss the development of adaptive control methodologies that leverage such relationships for jointly optimizing relevant fog learning metrics.
Distinguished Lecturer Series: https://www.comsoc.org/membership/distinguished-lecturers
Speaker: https://www.comsoc.org/christopher-greg-brinton
Co-sponsored by: North Jersey Information Theory Chapter
Speaker(s): Chris Brinton
Agenda:
6:30-7:00pm Gather, refreshments and introduction
7:00-8:00pm Lecture
8:00-8:30pm Q&A, networking, wrap-up
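The two-tier aggregation structure the abstract alludes to can be sketched in a few lines. This is a hypothetical illustration of hierarchical ("fog-style") model averaging — devices train locally, proximal clusters aggregate at the edge, and the cloud averages the cluster models — not the speaker's actual algorithm:

```python
# Hedged sketch of hierarchical fog aggregation (illustrative only):
# edge clusters average their devices' models first, then the cloud
# averages the cluster models -- contrast with flat federated averaging,
# where every device reports directly to a single server.

def average(models, weights):
    """Weighted average of model parameter vectors (lists of floats)."""
    total = sum(weights)
    dim = len(models[0])
    return [sum(w * m[i] for m, w in zip(models, weights)) / total
            for i in range(dim)]

def fog_round(clusters):
    """clusters: list of clusters, each a list of (params, n_samples)."""
    cluster_models, cluster_sizes = [], []
    for devices in clusters:
        params = [p for p, _ in devices]
        sizes = [n for _, n in devices]
        cluster_models.append(average(params, sizes))  # edge aggregation
        cluster_sizes.append(sum(sizes))
    return average(cluster_models, cluster_sizes)      # cloud aggregation

# Example: two proximal clusters with toy 1-D "models"
clusters = [
    [([1.0], 10), ([3.0], 10)],   # cluster A averages to [2.0]
    [([5.0], 20)],                # cluster B is [5.0]
]
global_model = fog_round(clusters)  # (20*2.0 + 20*5.0) / 40 = [3.5]
```

The point of the two-tier structure is that the edge aggregation step can exploit cheap, local connectivity among proximal devices before anything crosses the backhaul to the cloud.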

Large Language Models (LLMs), Optimization, and Game Theory

Virtual: https://events.vtools.ieee.org/m/474729

Special Presentation by Dr. Samson Lasaulce (Khalifa U., UAE)
Hosted by the Future Networks Artificial Intelligence & Machine Learning (AIML) Working Group
Date/Time: Thursday, 17 April 2025 @ 12:00 UTC
Topic: Large Language Models (LLMs), Optimization, and Game Theory
Abstract: In this talk, we will explore the interplay between large language models (LLMs) and optimization. After introducing a use case (consumption power scheduling) for which studying this interplay is fully relevant, we will survey the main approaches in this area, which include pure LLM-based approaches (e.g., to deal with math word problems) and combined approaches. Both limitations and promising solutions will be discussed. Applications to radio resource management, and to telecommunications more generally, will also be addressed. In the last part of the talk, connections between LLMs and game theory will be discussed.
Speaker: Samson Lasaulce is a Chief Research Scientist with Khalifa University. He is the holder of the TII 6G Chair on Native AI and a CNRS Director of Research with CRAN at Nancy. He has held the RTE Chair on the "Digital Transformation of Electricity Networks" and has been a part-time Professor with the Department of Physics at École Polytechnique (France). Before joining CNRS, he worked for five years in private R&D companies (Motorola Labs and Orange Labs). His current research interests lie in distributed networks, with a focus on optimization, game theory, and machine learning. The main application areas of his research are wireless networks, energy networks, social networks, and, more recently, climate change. Dr. Lasaulce has served as an editor for several international journals, including IEEE Transactions. He is the co-author of more than 200 publications, including a dozen patents and several books, such as "Game Theory and Learning for Wireless Networks: Fundamentals and Applications". Dr. Lasaulce is also the recipient of several awards, including the Blondel Medal from the SEE French society.
Co-sponsored by: Future Networks Artificial Intelligence & Machine Learning (AIML) Working Group
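As a concrete anchor for the consumption power scheduling use case the abstract mentions, here is a hedged toy formulation: choose appliance start times to minimize the peak aggregate load. This is the kind of small combinatorial problem one could pose to a classical solver or, as the talk explores, to an LLM-based approach. All profiles and numbers below are illustrative, not from the talk:

```python
# Toy consumption power scheduling (illustrative): each appliance has a
# fixed load profile; we pick a start slot for each so that the peak of
# the summed load over the horizon is minimized, by brute force.

from itertools import product

def schedule_peak(profiles, horizon):
    """profiles: per-appliance load profiles (lists of kW per slot).
    Returns (min_peak, best_start_slots)."""
    best = None
    for starts in product(*(range(horizon - len(p) + 1) for p in profiles)):
        load = [0.0] * horizon
        for start, profile in zip(starts, profiles):
            for t, power in enumerate(profile):
                load[start + t] += power
        peak = max(load)
        if best is None or peak < best[0]:
            best = (peak, starts)
    return best

# Three appliances over a 4-slot horizon (made-up profiles)
profiles = [[2.0, 2.0], [3.0], [1.0, 1.0, 1.0]]
peak, starts = schedule_peak(profiles, 4)  # min achievable peak is 3.0
```

An LLM-based approach would replace the exhaustive search with a natural-language or program-synthesis formulation of the same instance; the brute-force version serves as the ground truth to evaluate it against.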

Strengthening Power Systems: Resilience, Sustainability, Security, and Investment Priorities

Virtual: https://events.vtools.ieee.org/m/478409

Power systems face escalating risks from aging infrastructure, extreme weather, cyber and physical threats, increased electrification, and shifting energy demands. Recent failures expose these vulnerabilities. In 2021, severe winter storms in Texas froze pipelines and shut down plants: Over 4.5 million people lost power, 246 died, and damages reached $195 billion. Between 2019 and 2023, wildfires and heatwaves in California triggered rolling blackouts. In 2023, winter storms in Quebec knocked out power for over a million. Europe’s 2022-2023 energy shortages, driven by geopolitical tensions, led to blackouts and supply restrictions. These are just a few examples. Cyber and physical attacks continue to threaten power systems. In 2015 and 2016, cyberattacks in Ukraine cut power to over 230,000. Since 2022, multi-pronged attacks have destroyed generation plants, reduced capacity, and forced the grid into emergency protocols. Blackouts are common, exposing the vulnerability of centralized systems during conflict. Failures happen fast. Recovery is slow. Resilience requires decisive action. Modernizing grids with smart technologies can reduce outage durations by 20% (EPRI, 2024). Decentralizing through microgrids adds redundancy—by 2025, 15% of urban areas will rely on them (IEA, 2025). Predictive maintenance using AI has cut transformer downtime from months to less than a week (DOE, 2024). AI-driven cybersecurity has reduced threat response times by up to 70% (DHS, 2025). Energy storage systems help balance supply and demand, particularly during peak loads, while advanced demand response systems increase grid flexibility and reduce stress during surges. However, resilience is not only about technology. It requires robust supply chains for critical components like transformers, semiconductors, and storage technologies. 
It depends on understanding the interdependencies between power, water, transportation, and telecommunications systems, where a failure in one sector can cascade into others. Investment strategies must prioritize scalable, climate-adaptive infrastructure while ensuring equitable access for underserved communities. Public-private partnerships will be essential to fund and drive these transformations, while policy frameworks must incentivize innovation, sustainability, and resilience. Data integration and AI will be central to optimizing grid efficiency, identifying vulnerabilities, and guiding proactive interventions. Global benchmarking can also provide insights from regions advancing resilience — lessons that can be applied to diverse infrastructure contexts. For IEEE Young Professionals, the challenge is to design, implement, and advocate for these solutions. It means advancing technical expertise, engaging with policymakers, and promoting investments that ensure sector resilience. This session will present real-world examples, data-driven strategies, and practical frameworks for strengthening power infrastructure resilience. It will outline steps to build robust, adaptive systems across interdependent sectors, regions, nations, and global networks.
Speaker(s): Dr. Massoud Amin
Agenda:
- Introduction (5 minutes)
- Keynote by Dr. Massoud Amin (45 minutes)
- Q&A (10 minutes)

Women in AI Series 2025 – Distributed Machine Learning for FPGAs in the Cloud: Dr. Miriam Leeser

Virtual: https://events.vtools.ieee.org/m/473027

Distributed Machine Learning for FPGAs in the Cloud
Machine Learning (ML) is a growing area in both research and applications. Trends include ever-larger ML models and interest in obtaining ML results with low latency and high throughput. To address these trends, researchers are increasingly looking at accelerators, such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs), especially those that are directly connected to the network to achieve low-latency access to data. In this talk, I will introduce the Open Cloud Testbed (OCT, https://octestbed.org/), which is available to researchers interested in conducting cloud research with accelerators. We provide GPUs, FPGAs, and AI engines from AMD; the FPGAs and AI engines are directly connected to the network. I will discuss experiments on using OCT for distributed ML with multiple network-connected FPGAs. Specifically, I will present results for running ResNet-50 inference on the ImageNet dataset. No hardware knowledge is assumed for this webinar.
Speaker(s): Miriam Leeser
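One common way to spread a model such as ResNet-50 across multiple network-attached accelerators is pipeline partitioning: each device hosts a contiguous slice of layers and streams activations to the next. The sketch below is a hypothetical, pure-Python illustration of that idea; the layers and the partitioning here are made up and do not represent OCT's tooling or the talk's actual setup:

```python
# Hedged sketch of pipeline-partitioned inference (illustrative only):
# three "devices" each run a slice of a layered model; activations flow
# device-to-device, as they would over the network between FPGAs.

def make_layer(scale, bias):
    """A toy layer: elementwise affine transform."""
    return lambda xs: [scale * x + bias for x in xs]

layers = [make_layer(2.0, 0.0), make_layer(1.0, 3.0), make_layer(0.5, 0.0)]

# Partition the layer list across three devices (one layer each here).
partitions = [layers[0:1], layers[1:2], layers[2:3]]

def run_device(stage_layers, activations):
    """Run one device's slice of the model."""
    for layer in stage_layers:
        activations = layer(activations)
    return activations

def pipeline_infer(partitions, batch):
    acts = batch
    for stage in partitions:  # in practice, each hop crosses the network
        acts = run_device(stage, acts)
    return acts

result = pipeline_infer(partitions, [1.0, 2.0])  # [2.5, 3.5]
```

Because the stages run on separate devices, successive input batches can occupy different stages simultaneously, which is how network-connected accelerators trade a per-sample network hop for higher aggregate throughput.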
