To capture the contextual semantic information of words for paraphrase identification, a model that represents sentence semantic distances based on word embeddings was proposed for paraphrase detection. First, large-scale word vectors were trained with the word2vec model, embedding semantic information in distributional word representations. Then, the travel cost between words in two sentences was computed as the Euclidean distance in the word2vec embedding space. Finally, a model mapping word embeddings to sentence distances was built on the Earth Mover's Distance (EMD), and a sentence transportation matrix was introduced as the distance metric between sentences. These sentence semantic distances were then used for paraphrase recognition. Experiments on the SemEval-2015 PIT task showed that the proposed model approaches the baseline among supervised methods and, among unsupervised methods, improves on weighted matrix factorization by 5.8%.
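As a minimal sketch of the pipeline described above, the following Python snippet computes an EMD-based sentence distance with Euclidean word-to-word transport costs; it assumes pretrained word2vec vectors are available as a word-to-vector mapping and solves the transportation problem with SciPy's linear-programming solver (the paper does not specify a particular solver), so it is an illustrative approximation rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog


def sentence_emd(words_a, words_b, embeddings):
    """EMD between two sentences with Euclidean transport costs.

    `embeddings` is assumed to map word -> numpy vector (e.g. trained
    with word2vec). Word weights are uniform over each sentence.
    """
    d_a = np.full(len(words_a), 1.0 / len(words_a))
    d_b = np.full(len(words_b), 1.0 / len(words_b))

    # Cost matrix: Euclidean distance between word vectors.
    C = np.array([[np.linalg.norm(embeddings[wa] - embeddings[wb])
                   for wb in words_b] for wa in words_a])
    n, m = C.shape

    # Transportation problem over the flattened flow matrix T (n*m variables).
    A_eq = []
    for i in range(n):                 # each word of sentence A ships its weight
        row = np.zeros(n * m)
        row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row)
    for j in range(m):                 # each word of sentence B receives its weight
        col = np.zeros(n * m)
        col[j::m] = 1.0
        A_eq.append(col)
    b_eq = np.concatenate([d_a, d_b])

    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun                     # minimal total transport cost


# Toy usage with random stand-in vectors (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=50) for w in ["a", "car", "is", "parked", "vehicle", "stopped"]}
print(sentence_emd(["a", "car", "is", "parked"], ["a", "vehicle", "stopped"], vocab))
```

A smaller distance indicates closer sentence semantics; thresholding this distance (or feeding it to a classifier) yields the unsupervised and supervised paraphrase decisions referred to above.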