As future work, we plan to take a closer look at further scalability issues, e.g., to determine how many objects our approach can handle in a single domain, especially since bandwidth and other resources should be consumed for management purposes only very sparingly. For extreme scenarios, the project will investigate how far resource usage can be reduced while still generating models of satisfactory quality. In other words: can the neural networks be trained to cope with much coarser-grained data?
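One way to study this question empirically would be to artificially coarsen the monitoring data fed to the networks and observe how model quality degrades. The following is a minimal sketch of such a coarsening step; the metric values and the `coarsen` helper are illustrative assumptions, not part of the prototype.

```python
# Hypothetical sketch: reduce the temporal granularity of monitoring data
# before training, to test how coarse the input can become while the
# networks still produce usable models. All names and values are examples.

def coarsen(samples, factor):
    """Average consecutive samples in buckets of size `factor`,
    lowering the temporal resolution (and thus the bandwidth cost)."""
    return [
        sum(samples[i:i + factor]) / len(samples[i:i + factor])
        for i in range(0, len(samples), factor)
    ]

fine = [2, 4, 6, 8, 10, 12, 14, 16]   # e.g. per-second load metrics
coarse = coarsen(fine, 4)             # per-4-second averages
print(coarse)  # [5.0, 13.0]
```

Training runs could then be repeated with increasing `factor` until the generated models fall below the desired quality, giving a concrete bound on how restricted the resource usage may become.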
Regarding the neural networks themselves, we plan to work on improvements that allow distinguishing between different types of dependencies. A second point is that, in addition to the current implementation, in which the IT administrator is not involved in the training process at all, a feedback mechanism from the GUI to the neural agents could help to improve the neural networks and thus the modeling results. However, the pre-trained neural network currently used in our prototype already works reliably for various use cases.
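Such a feedback mechanism could be sketched as follows: corrections made by the administrator in the GUI are collected in a buffer and periodically used to fine-tune the pre-trained network. This is only an illustration under assumed names (`FeedbackBuffer`, `fine_tune`); the perceptron-style update stands in for whatever training step the actual network uses.

```python
# Hypothetical sketch of a GUI-to-agent feedback loop. The admin's
# confirmations/rejections of modeled dependencies become labeled
# examples that nudge the network's parameters.

class FeedbackBuffer:
    def __init__(self):
        self.corrections = []          # list of (feature_vector, corrected_label)

    def record(self, features, label):
        """Called by the GUI when the admin confirms (1) or rejects (0)
        a dependency proposed by the model."""
        self.corrections.append((features, label))

def fine_tune(weights, buffer, lr=0.1):
    """One perceptron-style pass over the collected corrections,
    then clear the buffer."""
    for features, label in buffer.corrections:
        score = sum(w * x for w, x in zip(weights, features))
        predicted = 1 if score > 0 else 0
        error = label - predicted
        weights = [w + lr * error * x for w, x in zip(weights, features)]
    buffer.corrections.clear()
    return weights

buf = FeedbackBuffer()
buf.record([1.0, 0.5], 1)              # admin confirms a missed dependency
w = fine_tune([0.0, -0.2], buf)        # weights move toward the correction
```

The design keeps the administrator out of the routine training loop, as in the current prototype, while still letting occasional GUI feedback flow back into the neural agents.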