Motion planning is a mature field within robotics with many successful solutions. Despite this, current state-of-the-art planners remain computationally heavy. To address this, recent work has employed ideas from machine learning, which drastically reduce the computational cost once a planner has been trained. So far, mainly static environments have been studied in this way. We continue along the same research direction but extend the problem to dynamic environments, which increases its difficulty. Analogously to previous work, we use imitation learning, where a planning policy is learnt from an expert planner in a supervised manner. Our main contribution is a planner mimicking an expert that considers the future movement of all obstacles in the environment, which is key to learning a successful policy in dynamic environments. We illustrate this by evaluating our approach in a dynamic environment and by comparing our planner with a conventional planner that re-plans at every iteration, a common approach in dynamic motion planning. We observe that our approach achieves a higher success rate, while also requiring less time and traveling a shorter distance to reach the goal.