Training deep neural networks (DNNs) in solid mechanics is challenging due to data scarcity. Even synthetic datasets obtained from computational simulations are of limited size, because each simulation is time-consuming. One way to address this challenge is transfer learning, i.e., fine-tuning a model pretrained in a different context (e.g., computer vision) so that it can learn a new task from less data (e.g., the behavior of a material with a given microstructure). Unfortunately, a model obtained by transfer learning loses the ability to solve the original task: each newly learned task destroys the model's ability to perform the previous one. We present a Cooperative Data-driven Modeling (CDDM) network that continually learns tasks without forgetting, accumulating knowledge so that each new task requires less training data or is predicted with smaller test error. We demonstrate the approach in numerical experiments on predicting the plastic behavior of different materials using recurrent neural networks, which have been shown to handle history-dependent problems.
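The forgetting problem described above can be illustrated with a minimal, hypothetical sketch (not the CDDM method itself): a single-weight linear model is pretrained on one task and then fine-tuned on a second task with a different target slope; after fine-tuning, its error on the first task grows sharply.

```python
import numpy as np

# Toy illustration of catastrophic forgetting under naive fine-tuning.
# A one-parameter model y = w * x is trained on task A (true slope 1.0),
# then fine-tuned on task B (true slope 3.0). The names and setup here
# are illustrative assumptions, not the paper's architecture.

def mse(w, x, y):
    """Mean squared error of the linear model y = w * x."""
    return float(np.mean((w * x - y) ** 2))

def train(w, x, y, lr=0.1, steps=200):
    """Plain gradient descent on the MSE loss."""
    for _ in range(steps):
        grad = np.mean(2.0 * (w * x - y) * x)  # d(mse)/dw
        w -= lr * grad
    return w

x = np.linspace(-1.0, 1.0, 50)
y_a = 1.0 * x   # task A targets
y_b = 3.0 * x   # task B targets

w = train(0.0, x, y_a)          # pretrain on task A
err_a_before = mse(w, x, y_a)   # near zero: task A is learned

w = train(w, x, y_b)            # fine-tune on task B ("transfer learning")
err_a_after = mse(w, x, y_a)    # large: task A has been forgotten

print(err_a_before, err_a_after)
```

The same effect scales to deep networks: without a continual-learning mechanism, the weights drift toward the new task's optimum and the old task's performance degrades.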