Human Activity Recognition (HAR) with wearable systems has attracted considerable attention in telerehabilitation and clinical applications, especially for Parkinson's disease (PD) movement therapy. However, how to distinguish between simple and complex activities, and how each should be handled, has not been thoroughly investigated. We propose and compare two variants of a multi-task network with shared parameters that recognizes simple activities (SAs) and complex activities (CAs) simultaneously: a branched deep neural network that uses a shared feature space for both SAs and CAs and further enriches the CA features with a deep recurrent neural network. The two variants are CNN-LSTM and CNN-BiLSTM. We trained and evaluated the models on 65 activities: 51 SAs and 14 CAs composed of Lee Silverman Voice Treatment-BIG (LSVT BIG) and functional activities. Our dataset comprises 43 healthy subjects (7 women, 36 men), recorded with four smart bands with embedded IMUs placed on both wrists and both thighs. The CNN-BiLSTM model, with average accuracies of 84.17% for SAs and 78.78% for CAs, outperforms the CNN-LSTM model, whose average accuracies are 71.83% and 66.46%, respectively.
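The branched architecture described above can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the authors' implementation: a shared 1-D CNN feature extractor feeds two heads, a direct classifier for SAs and a BiLSTM branch that further enriches the shared features before classifying CAs. The input channel count (24, assuming four IMUs with 3-axis accelerometer and gyroscope each), window length, and all layer sizes are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn


class MultiTaskHAR(nn.Module):
    """Hypothetical sketch of a branched multi-task HAR network (CNN-BiLSTM variant)."""

    def __init__(self, n_channels=24, n_sa=51, n_ca=14, hidden=64):
        super().__init__()
        # Shared convolutional feature space over the IMU time series
        self.shared = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # SA head: global average pooling + linear classifier
        self.sa_head = nn.Linear(64, n_sa)
        # CA branch: bidirectional LSTM enriches the shared feature sequence
        self.bilstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.ca_head = nn.Linear(2 * hidden, n_ca)

    def forward(self, x):
        # x: (batch, channels, time)
        f = self.shared(x)                     # (batch, 64, time // 4)
        sa_logits = self.sa_head(f.mean(dim=2))
        seq = f.transpose(1, 2)                # (batch, time // 4, 64)
        out, _ = self.bilstm(seq)
        ca_logits = self.ca_head(out[:, -1])   # last BiLSTM time step
        return sa_logits, ca_logits


model = MultiTaskHAR()
sa_logits, ca_logits = model(torch.randn(8, 24, 128))  # shapes (8, 51) and (8, 14)
```

Swapping `bidirectional=True` for `False` (and `2 * hidden` for `hidden` in the CA head) yields the CNN-LSTM variant; both heads can then be trained jointly with a summed cross-entropy loss over the shared parameters.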