token_vector {ttgsea}R Documentation

Vectorization of tokens

Description

Vectorization of the words or tokens of a text is necessary for machine learning. This function converts tokens into sequences of integers, and the sequences are padded or truncated to a fixed length.
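
For intuition, the padding and truncation step can be reproduced directly with keras::pad_sequences (listed under See Also); a minimal sketch, assuming the keras R package and a Python backend are available:

library(keras)
# two integer sequences of unequal length
seqs <- list(c(3, 7, 2), c(5, 1, 9, 4, 8, 6))
# pad with zeros or truncate so that every row has length 5
pad_sequences(seqs, maxlen = 5)
# row 1: 0 0 3 7 2  (zero-padded, "pre" by default)
# row 2: 1 9 4 8 6  (first element truncated)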

Usage

token_vector(text, token, length_seq)

Arguments

text

text data

token

result of tokenization (output of "text_token")

length_seq

length of the input sequences for the model; shorter sequences are padded and longer sequences are truncated to this length

Value

sequences of integers, padded or truncated to length_seq

Author(s)

Dongmin Jung

See Also

tm::removeWords, stopwords::stopwords, textstem::lemmatize_strings, tokenizers::tokenize_ngrams, keras::pad_sequences

Examples

library(reticulate)
if (keras::is_keras_available() && reticulate::py_available()) {
  library(fgsea)
  data(examplePathways)
  data(exampleRanks)
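  # make pathway names readable: drop the leading ID prefix
  # and replace underscores with spaces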
  names(examplePathways) <- gsub("_", " ",
                            substr(names(examplePathways), 9, 1000))
  set.seed(1)
  fgseaRes <- fgsea(examplePathways, exampleRanks)
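  # tokenize the pathway names, then vectorize the phrase
  # "Cell Cycle" into a length-10 integer sequence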
  tokens <- text_token(data.frame(fgseaRes)[,"pathway"],
            num_tokens = 1000)
  sequences <- token_vector("Cell Cycle", tokens, 10)
}
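
The returned object can be inspected directly; a short continuation of the example above, assuming the guarded block ran and that the value is the integer matrix produced by keras::pad_sequences:

if (keras::is_keras_available() && reticulate::py_available()) {
  dim(sequences)  # one row per input phrase, length_seq columns
  sequences       # token indices for "Cell Cycle", zero-padded to length 10
}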

[Package ttgsea version 1.0.0 Index]