word2vec model trained on the concatenation of all the individual universities' corpora. The word embeddings were generated with the gensim implementation of word2vec (CBOW). The model was trained with the following parameters: vector dimensions = 300, window size = 10, negative samples = 10, downsampling threshold for frequent words = 0.00008 (which downsamples the 612 most common words), number of iterations (epochs) over the corpus = 10, and maximum final vocabulary = 3 million. The maximum final vocabulary resulted in an effective minimum frequency count of 20; that is, only terms that appear at least 20 times in the corpus were included in the word-embedding model's vocabulary. The exponent used to shape the negative sampling di...
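As a rough sketch, the configuration described above corresponds to a gensim call along the following lines (gensim 4.x argument names; the corpus path and model file name are placeholders, and the negative-sampling exponent is left at its default because the value reported here is elided):

```python
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# Stream the concatenated corpus (one sentence per line); path is hypothetical.
corpus = LineSentence("all_universities_corpus.txt")

# CBOW word2vec with the parameters reported in the text.
model = Word2Vec(
    sentences=corpus,
    sg=0,                       # CBOW architecture (sg=1 would be skip-gram)
    vector_size=300,            # vector dimensions
    window=10,                  # context window size
    negative=10,                # negative samples per positive example
    sample=8e-5,                # downsampling threshold for frequent words
    epochs=10,                  # iterations over the corpus
    max_final_vocab=3_000_000,  # cap on final vocabulary size
)

model.save("universities_w2v.model")
```

With `max_final_vocab` set, gensim raises the effective minimum count until the vocabulary fits under the cap, which is consistent with the effective minimum frequency of 20 reported above.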