Below is the tentative list of available topics. Please consider the literature specified for each topic as a starting point for your own literature research.
Topic Area 1: AI, Machine Learning and Deep Learning
- Computer Vision: Convolutional Neural Networks and Vision Transformers (Andre Luckow)
- Natural Language Processing and Transformer Models (Fabian Dreer)
- Generative Models (Pascal Jungblut)
  - Rombach et al., High-Resolution Image Synthesis with Latent Diffusion Models, https://arxiv.org/abs/2112.10752, 2022
  - DALL-E: Creating Images from Text, https://openai.com/dall-e-2/, 2021
  - Ramesh et al., Hierarchical Text-Conditional Image Generation with CLIP Latents, https://arxiv.org/abs/2204.06125, 2022
  - Goodfellow et al., Generative Adversarial Nets, https://arxiv.org/abs/1406.2661, Advances in Neural Information Processing Systems (NeurIPS), 2014
- Federated Learning (Korbinian Staudacher)
  - Konečný et al., Federated Optimization: Distributed Machine Learning for On-Device Intelligence, https://arxiv.org/abs/1610.02527, 2016
  - Yang et al., Federated Machine Learning: Concept and Applications, https://arxiv.org/abs/1902.04885, 2019
  - Kairouz et al., Advances and Open Problems in Federated Learning, https://arxiv.org/abs/1912.04977, 2021
  - Caldas et al., LEAF: A Benchmark for Federated Settings, https://leaf.cmu.edu/, 2019
- ML in Computational Sciences and HPC (Sergej Breiter)
  - Fox et al., Understanding ML driven HPC: Applications and Infrastructure, https://arxiv.org/abs/1909.02363, 2019
  - Fawzi et al., Discovering faster matrix multiplication algorithms with reinforcement learning, https://www.nature.com/articles/s41586-022-05172-4, 2022
  - Li et al., Fourier Neural Operator for Parametric Partial Differential Equations, https://arxiv.org/abs/2010.08895, 2020
  - Pestourie et al., Active learning of deep surrogates for PDEs: application to metasurface design, https://www.nature.com/articles/s41524-020-00431-2, 2021
  - Karniadakis et al., Physics-informed machine learning, Nature Reviews Physics, https://www.nature.com/articles/s42254-021-00314-5, 2021
- Explainable AI (Daniel Diefenthaler)
  - Gilpin et al., Explaining Explanations: An Overview of Interpretability of Machine Learning, https://arxiv.org/abs/1806.00069v3, 2018
  - Rudin, Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, https://www.nature.com/articles/s42256-019-0048-x, 2019
  - Weld and Bansal, The challenge of crafting intelligible intelligence, https://dl.acm.org/doi/10.1145/3282486, 2019
  - Li et al., Visualizing Neural Networks with the Grand Tour, Distill, 2020
  - Smith et al., Visualising state space representations of LSTM networks, Workshop on Visualization for AI Explainability
  - Görtler et al., A Visual Exploration of Gaussian Processes, Distill, 2019
- AI Sustainability (Sophia Grundner-Culemann)
  - Wu et al., Sustainable AI: Environmental Implications, Challenges and Opportunities, https://proceedings.mlsys.org/paper/2022/file/ed3d2c21991e3bef5e069713af9fa6ca-Paper.pdf, 2022
  - Dodge et al., Measuring the Carbon Intensity of AI in Cloud Instances, https://arxiv.org/abs/2206.05229, 2022
  - Strubell et al., Energy and Policy Considerations for Deep Learning in NLP, https://arxiv.org/pdf/1906.02243.pdf, 2019
Topic Area 2: Data Management and Tools for AI
- Modern data management systems (Dang Diep)
  - Behm et al., Photon: A Fast Query Engine for Lakehouse Systems, SIGMOD, https://cs.stanford.edu/~matei/papers/2022/sigmod_photon.pdf, 2022
  - Armbrust et al., Lakehouse: A New Generation of Open Platforms that Unify Data Warehousing and Advanced Analytics, CIDR, https://www.cidrdb.org/cidr2021/papers/cidr2021_paper17.pdf, 2021
  - Armbrust et al., Delta Lake: High-Performance ACID Table Storage over Cloud Object Stores, https://www.databricks.com/wp-content/uploads/2020/08/p975-armbrust.pdf, 2020
- AI Programming Tools (Fabio Genz)
  - Barham et al., Pathways: Asynchronous Distributed Dataflow for ML, https://arxiv.org/abs/2203.12533, 2022
  - Bradbury et al., JAX: composable transformations of Python+NumPy programs, https://github.com/google/jax, 2022
  - Frostig et al., Compiling machine learning programs via high-level tracing, https://mlsys.org/Conferences/2019/doc/2018/146.pdf, 2018
  - Paszke et al., Automatic Differentiation in PyTorch, NIPS Autodiff Workshop, 2017
  - Abadi et al., TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems, White Paper, 2015
- Scaling Machine Learning (Minh Chung)
  - Narayanan et al., Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM, https://arxiv.org/abs/2104.04473, 2021
  - Shoeybi et al., Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism, https://arxiv.org/abs/1909.08053, 2019
  - Hoffmann et al., Training Compute-Optimal Large Language Models, DeepMind, https://arxiv.org/pdf/2203.15556.pdf, 2022
  - Kaplan et al., Scaling Laws for Neural Language Models, https://arxiv.org/pdf/2001.08361.pdf, 2020
  - Dean et al., Large Scale Distributed Deep Networks, NeurIPS, 2012
  - Krizhevsky, One weird trick for parallelizing convolutional neural networks, https://arxiv.org/abs/1404.5997, 2014
  - Li et al., Scaling Distributed Machine Learning with the Parameter Server, OSDI, 2014
- AI Domain-specific Architectures (Sergej Breiter)
- Edge Computing and Edge to Cloud Continuum (Andre Luckow)
  - Beckman et al., Harnessing the Computing Continuum for Programming our World, https://ieeexplore.ieee.org/document/9116796, 2020
  - Rosendo et al., Distributed intelligence on the Edge-to-Cloud Continuum: A systematic literature review, https://www.sciencedirect.com/science/article/abs/pii/S0743731522000843, 2022
  - Simmhan et al., Characterizing application scheduling on edge, fog, and cloud computing resources, https://arxiv.org/abs/1904.10125, 2019
Topic Area 3: Quantum Computing
- Quantum Machine Learning (Michelle To)
- Quantum Benchmarking (Michelle To)
  - Cross et al., Validating quantum computers using randomized model circuits, https://arxiv.org/abs/1811.12926, 2018
  - Blume-Kohout et al., A volumetric framework for quantum computer benchmarks, https://arxiv.org/abs/1904.05546, 2020
  - Mills et al., Application-Motivated, Holistic Benchmarking of a Full Quantum Computing Stack, https://arxiv.org/abs/2006.01273, 2021
  - Martiel et al., Benchmarking quantum co-processors in an application-centric, hardware-agnostic and scalable way, https://arxiv.org/abs/2102.12973, 2021
  - Lubinski et al., Application-Oriented Performance Benchmarks for Quantum Computing, https://arxiv.org/abs/2110.03137, 2021
  - Finzgar et al., QUARK: A Framework for Quantum Computing Application Benchmarking, https://arxiv.org/abs/2202.03028, 2022