Fix bug in augment() function for stm topic model.
Warn when tf-idf is negative, thanks to @EmilHvitfeldt (#112).
Switch from importing broom to importing generics, for lighter dependencies (#133).
Add functions for reordering factors (such as for ggplot2 bar plots), thanks to @tmastny (#110).
Update to tibble() where appropriate, thanks to @luisdza (#136).
Clarify documentation about impact of lowercase conversion on URLs (#139).
Change how sentiment lexicons are accessed from package (remove NRC lexicon entirely; access AFINN and Loughran lexicons via the textdata package, so they are no longer included in this package).
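The factor-reordering helpers mentioned above pair reorder_within() with scale_x_reordered() for faceted bar plots. A minimal sketch, with an invented data frame for illustration:

```r
library(ggplot2)
library(tidytext)

# Toy data: word counts in two groups (made-up values)
df <- data.frame(
  word  = c("apple", "banana", "apple", "cherry"),
  group = c("a", "a", "b", "b"),
  n     = c(3, 1, 2, 4)
)

# reorder_within() orders words by n separately inside each group;
# scale_x_reordered() strips the internal suffix from the axis labels
ggplot(df, aes(reorder_within(word, n, group), n)) +
  geom_col() +
  scale_x_reordered() +
  facet_wrap(~ group, scales = "free_y") +
  coord_flip()
```

Without reorder_within(), a shared factor ordering forces the same bar order in every facet; the within-group reordering is what makes free-scaled facets each sort by their own counts.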
Check for installation of stopwords more gracefully. Update tidiers and casters for new version of qu...
A number of fixes, documentation updates, and small features: Highlights: Expose diversity paramete...
WordTokenizers v0.5.5. Diff since v0.5.4. Merged pull requests: Update paper.bib (#47) (@kthyng). Upda...
Improvements to documentation (#117). Fix for NSE thanks to @lepennec (#122). Tidier for estimated re...
scale_x/y_reordered() now uses a labels function as its main input (#200). Fixed how to_lower is pass...
Fix tidier for quanteda dictionary for correct class (#71). Add a pkgdown site. Convert NSE from und...
reorder_within() now handles multiple variables, thanks to @tmastny (#170) Move stopwords to Suggest...
Added documentation for n-grams, skip n-grams, and regex. Added codecov and appveyor. Added tidiers fo...
Updates to documentation (#102), README, and vignettes. Add tokenizing by character shingles thanks ...
get_sentiments now works regardless of whether tidytext has been loaded or not (#50). unnest_tokens ...
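After this fix, the lexicon lookup can be called through the namespace prefix without attaching the package first. A minimal sketch (the "bing" lexicon ships with tidytext, so no download is involved):

```r
# No library(tidytext) call needed; the namespace prefix is enough
bing <- tidytext::get_sentiments("bing")

# A tibble with columns `word` and `sentiment`
head(bing)
```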
hunspell is now a suggested dependency, thanks to @MichaelChirico (#221). Added stm() tidiers for hig...
Wrapper tokenization functions for n-grams, characters, sentences, tweets, and more, thanks to @Coli...
Updates to documentation (#109) thanks to Emil Hvitfeldt. Add new tokenizers for tweets, Penn Treeba...
unnest_tokens can now unnest a data frame with a list column (which formerly threw the error unnest_...
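A sketch of the now-supported case, with a made-up list column sitting alongside the text column:

```r
library(dplyr)
library(tidytext)

df <- tibble(
  id   = 1:2,
  meta = list(c("x", "y"), "z"),       # list column, formerly an error
  txt  = c("hello world", "tidy text")
)

# The list column is carried along, repeated for each token row
df %>% unnest_tokens(word, txt)
```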
Use vdiffr conditionally. Bug fix/breaking change for collapse argument to unnest_functions(). This a...