High-Performance Computing (HPC) systems frequently undergo architectural changes, such as hardware upgrades or the deployment of new clusters, to meet evolving computational demands. Traditional static schedulers and machine-learning-based approaches struggle to adapt efficiently to such changes, often requiring manual tuning or extensive retraining. In this paper, we propose a novel approach that combines Separate Feature Extraction with Selective Transfer Learning to enable rapid adaptation of Reinforcement Learning (RL)-based HPC schedulers to new or modified cluster architectures. We evaluate our approach on three real-world HPC clusters, covering both CPU and GPU architectures. Our experiments simulate scheduler transitions between these clusters, capturing a wide range of architectural changes and workload variations encountered in practice.
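
The core idea of Selective Transfer Learning can be sketched in miniature. The following is an illustrative, pure-Python sketch (not the paper's actual implementation): layer names such as `job_extractor`, `node_extractor`, and `policy_head`, and the assumption that the job-feature extractor is architecture-independent while the node-feature extractor is cluster-specific, are hypothetical choices made for this example. Weights transferable across clusters are copied from the source scheduler, while cluster-specific layers are re-initialized.

```python
import random

def init_layer(rng, n):
    # Randomly initialize a layer's weights (placeholder for a real
    # neural-network initializer).
    return [rng.uniform(-0.1, 0.1) for _ in range(n)]

def selective_transfer(source_params, layer_sizes, transferable, rng=None):
    """Build parameters for the target-cluster policy.

    Layers listed in `transferable` (here, a job-feature extractor
    assumed to be architecture-independent) are copied from the source
    scheduler; the rest (e.g. a node-feature extractor whose input
    dimension depends on the cluster) are freshly re-initialized.
    """
    rng = rng or random.Random(0)
    target = {}
    for name, size in layer_sizes.items():
        if name in transferable and name in source_params:
            target[name] = list(source_params[name])  # reuse learned weights
        else:
            target[name] = init_layer(rng, size)      # fresh, cluster-specific
    return target

# Hypothetical source policy trained on cluster A (CPU).
rng = random.Random(42)
source = {
    "job_extractor": init_layer(rng, 8),
    "node_extractor": init_layer(rng, 6),   # sized for cluster A's nodes
    "policy_head": init_layer(rng, 4),
}

# Hypothetical target cluster B (GPU) with a different node feature dimension.
target = selective_transfer(
    source,
    layer_sizes={"job_extractor": 8, "node_extractor": 10, "policy_head": 4},
    transferable={"job_extractor"},
)
print(target["job_extractor"] == source["job_extractor"])  # True (transferred)
print(len(target["node_extractor"]))                       # 10 (re-initialized)
```

In this sketch, only the transferred extractor retains its learned weights; fine-tuning would then continue training the full target network on the new cluster, which is far cheaper than training from scratch.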