Decision-making subject to multiple aspects of uncertainty is known as Decision-Making under Deep Uncertainty (DMDU). To support this process, exploration methods are used to evaluate the performance of candidate policies across possible scenarios, in order to identify a robust plan that performs satisfactorily in any future. These methods are complementary, so using them jointly can improve overall DMDU performance. The main aim of this project is to introduce Reinforcement Learning into DMDU as a new exploration method that complements the existing ones. This is motivated by the suitability of Reinforcement Learning for handling uncertainty, owing to its real-time policy adaptability. We first reviewed existing studies and identified three Reinforcement Learning algorithms applicable to DMDU, as well as two baseline Evolutionary Algorithms that have already been applied in DMDU. We then constructed a common environment to compare these algorithms on one uncertainty problem and two deep uncertainty problems. Our experiments empirically demonstrated the viability of this introduction, and they also revealed the complementarity between Reinforcement Learning and Evolutionary Algorithms. The former generally provided higher efficiency and greater robustness to parameter uncertainty, and could handle random initial states; the latter performed better under objective uncertainty and was less sensitive to the randomness of the exploration process. In the final experiment, we further demonstrated the application of these methods to a real-world problem and investigated how its high complexity affected their performance. Overall, this project also revealed differences in the characteristics of domain problems and algorithms between the Reinforcement Learning and DMDU fields, suggesting possible directions for future robust planning research in both.