Maintaining machine-learning models for predicting service performance is challenging, especially in dynamic network and cloud environments where routes change and execution environments are scaled and migrated. Recently, transfer learning has been proposed as an approach to leverage already learned knowledge in a new environment. The challenge is that the new environment may differ significantly, with respect to data distributions and dimensionality, from the environment in which the model was trained and from which it is transferred. In this paper, we introduce heterogeneous transfer learning in the context of dynamic environments and show its efficiency in predicting service performance. We propose two heterogeneous transfer-learning approaches and evaluate them on several neural-network architectures and scenarios; the scenarios arise naturally from network and cloud infrastructure reorchestration. We quantify the transfer gain and empirically show a positive gain in the majority of cases for both approaches. Furthermore, we study the impact of neural-network configurations on the transfer gain, providing trade-off insights. The approaches are evaluated using data traces collected from a cloud testbed running two services under multiple realistic load conditions.
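As an illustrative sketch only (the paper's exact metric may be defined differently), the transfer gain can be viewed as the relative reduction in prediction error obtained by transferring a model instead of training one from scratch in the new environment; the symbols $E_{\text{scratch}}$ and $E_{\text{transfer}}$ below are assumed notation, not taken from the paper:

% Assumed, illustrative definition of transfer gain:
% E_scratch  - prediction error of a model trained from scratch in the target environment
% E_transfer - prediction error of the transferred model in the target environment
\begin{equation*}
  \text{gain} \;=\; \frac{E_{\text{scratch}} - E_{\text{transfer}}}{E_{\text{scratch}}},
  \qquad \text{gain} > 0 \;\Longleftrightarrow\; \text{transfer improves prediction.}
\end{equation*}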