In this paper, we present a novel method for learning robot tasks from multiple demonstrations. Each demonstrated task is decomposed into subtasks, enabling segmentation and classification of the input data. The demonstrations are then merged into a flexible task model that describes the task goal and its constraints. The two main contributions of the paper are the state-generation and constraint-identification methods. We also present a task-level planner that assembles a task plan at run time, allowing the robot to choose the best strategy for the current world state.