Below is a list of the available topics. Please treat the literature listed for each topic as a starting point for your own literature research.
Topic Area 1: AI, Machine Learning and Deep Learning
- Introduction to AI and Machine Learning (FG)
- Bommasani et al., On the Opportunities and Risks of Foundation Models, https://arxiv.org/abs/2108.07258, 2021
- Nathan Benaich and Ian Hogarth, State of AI Report 2021, https://www.stateof.ai/, 2021
- Peng, Matsui, The Art of Data Science, 2017
- Hey et al., The Fourth Paradigm: Data-Intensive Scientific Discovery, 2009
- Cao, Data Science: Challenges and Directions, Communications of the ACM, 2017
- Donoho, 50 Years of Data Science, 2015
- Scikit-Learn, 2017
- Machine Learning Frameworks (GitHub), 2017
- Domingos, A Few Useful Things to Know about Machine Learning, KDD, 2014
- Sculley et al., Machine Learning: The High Interest Credit Card of Technical Debt, NIPS, 2014
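As a first orientation for this introductory topic, the following sketch shows the most basic supervised-learning loop, fitting a line by gradient descent. The function name and toy data are illustrative only and do not come from the listed literature.

```python
# Minimal supervised-learning sketch: fit y = w*x + b to toy data with
# batch gradient descent on the mean squared error (pure Python, no frameworks).

def fit_line(xs, ys, lr=0.01, steps=2000):
    """Fit y = w*x + b by minimizing mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of MSE = (1/n) * sum((w*x + b - y)^2)
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data generated from y = 2x + 1 (no noise), so the fit should recover w ~ 2, b ~ 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
w, b = fit_line(xs, ys)
```

In practice one would use scikit-learn (see the entry above) rather than hand-rolled gradient descent; the loop is shown only to make the underlying optimization concrete.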
- Computer Vision: Convolutional Neural Networks and Vision Transformers (FG)
- Radford et al., Learning Transferable Visual Models From Natural Language Supervision, https://arxiv.org/abs/2103.00020, 2021
- LeCun, Bengio, Hinton, Deep Learning, Nature, 2015
- Krizhevsky et al., ImageNet Classification with Deep Convolutional Neural Networks, 2012
- Ujjwal Karn, An intuitive explanation of convolutional neural networks, 2016
- Stanford, CS231n: Convolutional Neural Networks, 2017
- Dosovitskiy et al., An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, https://arxiv.org/abs/2010.11929, 2020
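To make the core CNN operation concrete, here is a minimal 2-D convolution (technically cross-correlation, as in most deep-learning frameworks) with valid padding and stride 1, applied as a toy vertical-edge detector. This is a didactic sketch in pure Python; real models use frameworks such as PyTorch or TensorFlow.

```python
# Minimal single-channel 2-D convolution (valid padding, stride 1).

def conv2d(image, kernel):
    """Slide `kernel` over `image` (lists of lists) and sum elementwise products."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Toy image: left half dark (0), right half bright (1).
image = [[0, 0, 1, 1] for _ in range(4)]
# A 1x2 difference kernel responds only where brightness changes left-to-right.
edge_kernel = [[-1.0, 1.0]]
out = conv2d(image, edge_kernel)
```

Each output row is nonzero only at the dark-to-bright boundary, which is exactly the locality-and-weight-sharing idea the CNN papers above build on.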
- Natural Language Processing and Transformer Models
- Foundation Models and Transfer Learning
- Bommasani et al., On the Opportunities and Risks of Foundation Models, https://arxiv.org/abs/2108.07258, 2021
- Radford et al., Learning Transferable Visual Models From Natural Language Supervision, https://arxiv.org/pdf/2103.00020.pdf, 2021
- Radford et al., Language Models are Unsupervised Multitask Learners, https://paperswithcode.com/paper/language-models-are-unsupervised-multitask, 2019
- Chen et al., A Simple Framework for Contrastive Learning of Visual Representations, https://arxiv.org/pdf/2002.05709.pdf, 2020
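The contrastive idea behind CLIP and SimCLR from the list above can be reduced to: embed two "views" (e.g. an image and its caption), normalize, and score all pairs by cosine similarity; training then pushes the matching pairs (the diagonal of the similarity matrix) above the mismatched ones. The embeddings below are hand-made toy vectors, not the output of any real model.

```python
# Toy sketch of contrastive image-text similarity scoring.
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def similarity_matrix(img_embs, txt_embs):
    """Cosine similarity between every image/text embedding pair."""
    imgs = [normalize(v) for v in img_embs]
    txts = [normalize(v) for v in txt_embs]
    return [[sum(a * b for a, b in zip(i, t)) for t in txts] for i in imgs]

# Toy embeddings: pair k of image/text point in roughly the same direction,
# so each diagonal entry should dominate its row.
img_embs = [[1.0, 0.1], [0.1, 1.0]]
txt_embs = [[0.9, 0.0], [0.0, 0.9]]
sims = similarity_matrix(img_embs, txt_embs)
```

A contrastive loss such as InfoNCE would then apply a softmax over each row and maximize the probability of the diagonal entry.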
- ML and Simulations/HPC (English) (DD)
- Explainable AI
- Explaining Explanations: An Overview of Interpretability of Machine Learning: https://arxiv.org/abs/1806.00069v3
- Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead: https://www.nature.com/articles/s42256-019-0048-x
- The challenge of crafting intelligible intelligence: https://dl.acm.org/doi/10.1145/3282486
- Li, M., Zhao, Z., & Scheidegger, C. (2020). Visualizing Neural Networks with the Grand Tour. Distill, 5(3), e25.
- Smith, E. M., Smith, J., Legg, P., & Francis, S. Visualising state space representations of LSTM networks. Presented at Workshop on Visualization for AI Explainability
- Görtler, J., Kehlbeck, R., & Deussen, O. (2019). A Visual Exploration of Gaussian Processes. Distill, 4(4), e17.
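One simple model-agnostic explainability technique that complements the visualization papers above is permutation importance: shuffle one feature column and measure how much the model's accuracy drops. The sketch below uses a hand-coded "model" that reads only feature 0, so shuffling that feature should hurt while shuffling the noise feature should not; all names and data are illustrative, not from the listed papers.

```python
# Permutation importance on a toy dataset with one informative feature.
import random

def accuracy(model, rows, labels):
    """Fraction of rows the model classifies correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature, seed=0):
    """Accuracy drop when the given feature column is shuffled across rows."""
    base = accuracy(model, rows, labels)
    rng = random.Random(seed)
    column = [r[feature] for r in rows]
    rng.shuffle(column)
    shuffled = [r[:feature] + [v] + r[feature + 1:] for r, v in zip(rows, column)]
    return base - accuracy(model, shuffled, labels)

# Labels depend only on feature 0; feature 1 is pure noise.
rows = [[i % 2, random.Random(i).random()] for i in range(100)]
labels = [r[0] for r in rows]
model = lambda r: r[0]  # a trivially interpretable "model" that reads feature 0
drop_informative = permutation_importance(model, rows, labels, 0)
drop_noise = permutation_importance(model, rows, labels, 1)
```

The informative feature shows a large accuracy drop while the noise feature shows none, which is the basic signal all importance-based explanation methods rely on.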
Topic Area 2: Systems for ML
- Scaling Machine Learning (English) (DD)
- Narayanan et al., Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM, https://arxiv.org/abs/2104.04473, 2021
- Shoeybi, Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism, https://arxiv.org/abs/1909.08053, 2019
- Kaplan et al., Scaling Laws for Neural Language Models, https://arxiv.org/pdf/2001.08361.pdf, 2020
- Dean et al., Large Scale Distributed Deep Networks, 2012
- Alex Krizhevsky, One weird trick for parallelizing convolutional neural networks, 2014
- Li et al., Scaling Distributed Machine Learning with the Parameter Server, OSDI, 2014
- Xing et al., Petuum: A New Platform for Distributed Machine Learning on Big Data, KDD, 2015
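The common pattern behind the parameter-server and Megatron-style systems above is synchronous data parallelism: each worker computes a gradient on its data shard, a central parameter store averages the gradients and applies one update. The sketch below collapses this to a single process with no networking; all names are illustrative, not APIs of any of the listed systems.

```python
# Toy synchronous data-parallel SGD, parameter-server style, for y = w*x.

def worker_gradient(w, shard):
    """Gradient of the MSE for y = w*x on this worker's data shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def parameter_server_step(w, shards, lr=0.05):
    """The 'server' averages the workers' gradients and applies one SGD step."""
    grads = [worker_gradient(w, s) for s in shards]  # computed in parallel in practice
    return w - lr * sum(grads) / len(grads)

# Data generated from y = 3x, split across two "workers".
data = [(x, 3 * x) for x in [0.5, 1.0, 1.5, 2.0]]
shards = [data[:2], data[2:]]
w = 0.0
for _ in range(200):
    w = parameter_server_step(w, shards)
```

Real systems differ mainly in how this averaging is made scalable and fault-tolerant (sharded servers, asynchronous updates, all-reduce), which is exactly what the papers above discuss.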
- AI Domain-specific Architectures
- Security and Privacy in Machine Learning (SGC)
- Machine Learning for Crypto-Analysis (SGC)
- Deep Learning for Systems (MC)
- Knowledge Graphs (MC)
Topic Area 3: Quantum Computing
- Quantum Machine Learning (AL)
- Quantum Benchmarking (AL)
- Cross et al., Validating quantum computers using randomized model circuits, https://arxiv.org/abs/1811.12926, 2018
- Blume-Kohout et al., A volumetric framework for quantum computer benchmarks, https://arxiv.org/abs/1904.05546, 2020
- Mills et al., Application-Motivated, Holistic Benchmarking of a Full Quantum Computing Stack, https://arxiv.org/abs/2006.01273, 2021
- Martiel et al., Benchmarking quantum co-processors in an application-centric, hardware-agnostic and scalable way, https://arxiv.org/abs/2102.12973, 2021
- Lubinski et al., Application-Oriented Performance Benchmarks for Quantum Computing, https://arxiv.org/abs/2110.03137, 2021
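Several of the benchmarking papers above (notably Cross et al.) build on the "heavy output" idea: run a circuit and check how much probability mass falls on outcomes above the median of the ideal output distribution. The sketch below is a toy ideal 2-qubit statevector simulator preparing a Bell state, not a real benchmarking stack; gate names and layout are illustrative only.

```python
# Heavy-output probability of a Bell-state circuit on a tiny ideal simulator.
import statistics

def apply_h(state, qubit):
    """Apply a Hadamard gate to one qubit of a 2-qubit statevector (4 amplitudes)."""
    s = 2 ** 0.5
    new = [0.0] * 4
    for i, amp in enumerate(state):
        b = (i >> qubit) & 1        # current value of the target bit
        i0 = i & ~(1 << qubit)      # basis index with the bit set to 0
        i1 = i | (1 << qubit)       # basis index with the bit set to 1
        new[i0] += amp / s                    # H|b> has amplitude +1/sqrt2 on |0>
        new[i1] += (-amp if b else amp) / s   # ... and (-1)^b / sqrt2 on |1>
    return new

def apply_cnot(state, control, target):
    """Flip `target` wherever `control` is 1 (a permutation of amplitudes)."""
    new = list(state)
    for i, amp in enumerate(state):
        if (i >> control) & 1:
            new[i ^ (1 << target)] = amp
    return new

# Ideal Bell-state circuit: H on qubit 0, then CNOT(0 -> 1), starting from |00>.
state = [1.0, 0.0, 0.0, 0.0]
state = apply_h(state, 0)
state = apply_cnot(state, 0, 1)

probs = [a * a for a in state]  # amplitudes are real in this circuit
median = statistics.median(probs)
heavy_output_prob = sum(p for p in probs if p > median)
```

For the Bell state the distribution is concentrated on |00> and |11>, so the heavy-output probability is 1; quantum-volume-style benchmarks compare the measured heavy-output fraction of a noisy device against such ideal distributions for random circuits.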