Static subword tokenization algorithms have been an essential component of recent works on language modeling. However, their static nature results in important flaws that degrade the models' downstream performance and robustness. In this work, we propose MANTa, a Module for Adaptive Neural TokenizAtion. MANTa is a differentiable tokenizer trained end-to-end with the language model. The resulting system offers a trade-off between the expressiveness of byte-level models and the speed of models trained using subword tokenization. In addition, our tokenizer is highly explainable since it produces an explicit segmentation of sequences into blocks. We evaluate our pretrained model on several English datasets from different d...
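The abstract describes the mechanism only at a high level. Below is a minimal sketch of the general idea behind a differentiable tokenizer of this kind, not the authors' implementation: a small learned module assigns each byte a probability of starting a new block, and byte embeddings are softly pooled into block embeddings through those probabilities. The shapes, the Gaussian-style soft assignment, and all names are illustrative assumptions; in a real setup the pipeline would be written in an autodiff framework so gradients flow back into the boundary predictor.

```python
# Hedged sketch of soft byte-to-block pooling (illustrative, not MANTa's actual code).
import numpy as np

rng = np.random.default_rng(0)
seq_len, emb_dim, max_blocks = 16, 8, 4

byte_embeddings = rng.normal(size=(seq_len, emb_dim))   # one vector per byte (would be learned)
boundary_logits = rng.normal(size=seq_len)               # would come from a small learned model
boundary_probs = 1.0 / (1.0 + np.exp(-boundary_logits))  # P(byte starts a new block)

# Soft block index of each byte: cumulative sum of boundary probabilities.
expected_block = np.cumsum(boundary_probs)

# Smooth assignment of each byte to each block keeps the mapping differentiable.
block_ids = np.arange(max_blocks)
assignment = np.exp(-(expected_block[:, None] - block_ids[None, :]) ** 2)
assignment /= assignment.sum(axis=1, keepdims=True)

# Pool byte embeddings into block embeddings for the downstream language model.
block_embeddings = assignment.T @ byte_embeddings
print(block_embeddings.shape)  # (4, 8)
```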
We introduce FLOTA (Few Longest Token Approximation), a simple yet effective method to improve the t...
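The snippet only names the method, so here is a rough sketch of the longest-token idea the name suggests, not the paper's reference implementation: a word is segmented by repeatedly extracting the longest substring found in a fixed subword vocabulary, keeping at most a few pieces. The toy vocabulary, the piece limit, and the matching loop are illustrative assumptions.

```python
# Hedged sketch of a "few longest tokens" segmentation (illustrative only).
def longest_token_segment(word, vocab, max_pieces=3):
    pieces = []
    remaining = word
    while remaining and len(pieces) < max_pieces:
        # Find the longest substring of `remaining` present in the vocabulary.
        best = None
        for length in range(len(remaining), 0, -1):
            for start in range(len(remaining) - length + 1):
                cand = remaining[start:start + length]
                if cand in vocab:
                    best = (start, cand)
                    break
            if best:
                break
        if best is None:
            break
        start, cand = best
        pieces.append(cand)
        # Drop the matched span and keep segmenting what is left.
        remaining = remaining[:start] + remaining[start + len(cand):]
    return pieces

toy_vocab = {"token", "ization", "un", "develop", "ment"}
print(longest_token_segment("tokenization", toy_vocab))   # ['ization', 'token'] (match order, not word order)
print(longest_token_segment("undevelopment", toy_vocab))  # ['develop', 'ment', 'un']
```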
Neural architectures are prominent in the construction of language models (LMs). However, word-leve...
What are the units of text that we want to model? From bytes to multi-word expressio...
Character-level models of tokens have been shown to be effective at dealing with within-token noise a...
Language models typically tokenize text into subwords, using a deterministic, hand-engineered heuris...
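For context, one widely used deterministic heuristic of this kind is byte-pair-encoding-style merging, where frequent adjacent symbol pairs are repeatedly fused into longer surface strings. The toy corpus and the number of merges below are invented for illustration and stand in for no particular tokenizer's training data.

```python
# Toy BPE-style merging loop (illustrative sketch of the kind of heuristic mentioned above).
from collections import Counter

corpus = ["low", "lower", "lowest", "newer", "wider"]
words = [list(w) + ["</w>"] for w in corpus]  # character symbols plus an end-of-word marker

def most_frequent_pair(words):
    pairs = Counter()
    for symbols in words:
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(words, pair):
    merged = []
    for symbols in words:
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged.append(out)
    return merged

for _ in range(5):  # a handful of merges for illustration
    pair = most_frequent_pair(words)
    if pair is None:
        break
    words = merge_pair(words, pair)

print(words)  # longer units such as 'low' and 'er</w>' have been merged from single characters
```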
Recent studies have determined that the learned token embeddings of large-scale neural language mode...