python-tokenizers

Provides an implementation of today's most used tokenizers

Provides an implementation of today's most used tokenizers, with a focus on performance and versatility.

* Train new vocabularies and tokenize, using today's most used tokenizers.
* Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server's CPU.
* Easy to use, but also extremely versatile.
* Designed for research and production.
* Normalization comes with alignment tracking. It is always possible to get the part of the original sentence that corresponds to a given token.
* Does all the pre-processing: truncate, pad, add the special tokens your model needs.
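As a rough illustration of the features listed above (training a new vocabulary, alignment tracking via offsets, and built-in truncation and padding), the following is a minimal sketch using the upstream tokenizers Python API. The corpus file corpus.txt and the chosen special tokens are hypothetical placeholders, not part of this package's documentation.

    from tokenizers import Tokenizer
    from tokenizers.models import BPE
    from tokenizers.pre_tokenizers import Whitespace
    from tokenizers.trainers import BpeTrainer

    # Build a BPE tokenizer and train it on a local text corpus.
    tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
    tokenizer.pre_tokenizer = Whitespace()
    trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]"])
    tokenizer.train(files=["corpus.txt"], trainer=trainer)  # corpus.txt is a placeholder path

    # Enable the built-in pre-processing: truncation and padding.
    tokenizer.enable_truncation(max_length=128)
    tokenizer.enable_padding(pad_id=tokenizer.token_to_id("[PAD]"), pad_token="[PAD]")

    # Encode a sentence; offsets map each token back to the original text.
    encoding = tokenizer.encode("Tokenize a GB of text in under 20 seconds.")
    print(encoding.tokens)
    print(encoding.offsets)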

There is no official package for openSUSE Leap 16.0

Distributions

openSUSE Tumbleweed

openSUSE Leap 16.0

home:mslacken:ml Community
0.21.4

openSUSE Leap 15.6

openSUSE Factory RISCV

science:machinelearning Experimental
0.21.4

SLFO 1.2

openSUSE Backports for SLE 15 SP7

openSUSE Backports for SLE 15 SP4

Unofficial distributions

The following distributions are not officially supported. Use these packages at your own risk.