Faster, easier, and more reliable character string processing with stringi 0.3-1

A new release of the stringi package is available on CRAN (please wait a few days for Windows and OS X binary builds).

# install.packages("stringi") or update.packages()
library("stringi")

stringi is an R package providing (but definitely not limited to) equivalents of nearly all the character string processing functions known from base R. While developing the package, we kept high performance and portability in mind.

We implemented each string processing function from scratch. Internationalization and globalization support, as well as many of the string processing facilities (such as regex searching), are guaranteed by IBM's well-known ICU4C library.

Here is a very general list of the most important features available in the current version of stringi:

  • string searching:
    • with ICU (Java-like) regular expressions,
    • ICU USearch-based, locale-aware string searching (quite slow, but works properly e.g. for strings that are not Unicode-normalized),
    • very fast, locale-independent byte-wise pattern matching;
  • joining and duplicating strings;
  • extracting and replacing substrings;
  • string trimming, padding, and text wrapping (e.g. with Knuth’s dynamic word wrap algorithm);
  • text transliteration;
  • text collation (comparing, sorting);
  • text boundary analysis (e.g. for extracting individual words);
  • random string generation;
  • Unicode normalization;
  • character encoding conversion and detection;

and many more.
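
For a quick taste, a few one-liners (a small sketch of the API):

stri_dup("ab", 3)                       # duplicate strings
## [1] "ababab"
stri_trim_both("   spam   ")            # trim surrounding whitespace
## [1] "spam"
stri_pad_left("7", width=3, pad="0")    # pad to a given width
## [1] "007"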

Here’s a list of changes in version 0.3-1:

  • (IMPORTANT CHANGE) #87: %>% clashed with the pipe operator from the magrittr package; each such operator, e.g. %>%, has therefore been renamed to %s>%.
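
    For example:

"b" %s>% "a"     # locale-aware comparison; formerly "b" %>% "a"
## [1] TRUE
"abc" %s+% "def" # string concatenation
## [1] "abcdef"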

  • (IMPORTANT CHANGE) #108: The BreakIterator (for text boundary analysis) may now be controlled more precisely via stri_opts_brkiter(); see its type and locale options, which replace the now-removed boundary and locale parameters of stri_locate_boundaries, stri_split_boundaries, stri_trans_totitle, stri_extract_words, and stri_locate_words.

    For example:

test <- "The\u00a0above-mentioned    features are very useful. Warm thanks to their developers. 123 456 789"
stri_split_boundaries(test, stri_opts_brkiter(type="word", skip_word_none=TRUE, skip_word_number=TRUE)) # cf. stri_extract_words
## [[1]]
##  [1] "The"        "above"      "mentioned"  "features"   "are"       
##  [6] "very"       "useful"     "Warm"       "thanks"     "to"        
## [11] "their"      "developers"
stri_split_boundaries(test, stri_opts_brkiter(type="sentence")) # extract sentences
## [[1]]
## [1] "The above-mentioned    features are very useful. "
## [2] "Warm thanks to their developers. "                
## [3] "123 456 789"
stri_split_boundaries(test, stri_opts_brkiter(type="character")) # extract characters
## [[1]]
##  [1] "T" "h" "e" " " "a" "b" "o" "v" "e" "-" "m" "e" "n" "t" "i" "o" "n"
## [18] "e" "d" " " " " " " " " "f" "e" "a" "t" "u" "r" "e" "s" " " "a" "r"
## [35] "e" " " "v" "e" "r" "y" " " "u" "s" "e" "f" "u" "l" "." " " "W" "a"
## [52] "r" "m" " " "t" "h" "a" "n" "k" "s" " " "t" "o" " " "t" "h" "e" "i"
## [69] "r" " " "d" "e" "v" "e" "l" "o" "p" "e" "r" "s" "." " " "1" "2" "3"
## [86] " " "4" "5" "6" " " "7" "8" "9"

By the way, the last call also works correctly for strings not in the Unicode Normalization Form C:

stri_split_boundaries(stri_trans_nfkd("zażółć gęślą jaźń"), stri_opts_brkiter(type="character"))
## [[1]]
##  [1] "z" "a" "ż"  "ó"  "ł" "ć"  " " "g" "ę"  "ś"  "l" "ą"  " " "j" "a" "ź"  "ń"
  • (NEW FUNCTIONS) #109: stri_count_boundaries and stri_count_words count the number of text boundaries in a string.
stri_count_words("Have a nice day!")
## [1] 4
  • (NEW FUNCTIONS) #41: stri_startswith_* and stri_endswith_* determine whether a string starts or ends with a given pattern.
stri_startswith_fixed(c("a1o", "a2g", "b3a", "a4e", "c5a"), "a")
## [1]  TRUE  TRUE FALSE  TRUE FALSE
  • (NEW FEATURE) #102: stri_replace_all_* gained a vectorize_all parameter, which defaults to TRUE for backward compatibility.
stri_replace_all_fixed("The quick brown fox jumped over the lazy dog.",
     c("quick", "brown", "fox"), c("slow",  "black", "bear"), vectorize_all=FALSE)
## [1] "The slow black bear jumped over the lazy dog."
# Compare the results:
stri_replace_all_fixed("The quicker brown fox jumped over the lazy dog.",
     c("quick", "brown", "fox"), c("slow",  "black", "bear"), vectorize_all=FALSE)
## [1] "The slower black bear jumped over the lazy dog."
stri_replace_all_regex("The quicker brown fox jumped over the lazy dog.",
     "\\b"%s+%c("quick", "brown", "fox")%s+%"\\b", c("slow",  "black", "bear"), vectorize_all=FALSE)
## [1] "The quicker black bear jumped over the lazy dog."
  • (NEW FUNCTIONS) #91: stri_subset_*, a convenient and more efficient substitute for str[stri_detect_*(str, ...)], has been added.
stri_subset_regex(c("john@office.company.com", "steve1932@g00gl3.eu", "no email here"),
   "^[A-Za-z0-9._%+-]+@([A-Za-z0-9-]+\\.)+[A-Za-z]{2,4}$")
## [1] "john@office.company.com" "steve1932@g00gl3.eu"
  • (NEW FEATURE) #100: stri_split_fixed, stri_split_charclass, stri_split_regex, stri_split_coll gained a tokens_only parameter, which defaults to FALSE for backward compatibility.
stri_split_fixed(c("ab_c", "d_ef_g", "h", ""), "_", n_max=1, tokens_only=TRUE, omit_empty=TRUE)
## [[1]]
## [1] "ab"
## 
## [[2]]
## [1] "d"
## 
## [[3]]
## [1] "h"
## 
## [[4]]
## character(0)
stri_split_fixed(c("ab_c", "d_ef_g", "h", ""), "_", n_max=2, tokens_only=TRUE, omit_empty=TRUE)
## [[1]]
## [1] "ab" "c" 
## 
## [[2]]
## [1] "d"  "ef"
## 
## [[3]]
## [1] "h"
## 
## [[4]]
## character(0)
stri_split_fixed(c("ab_c", "d_ef_g", "h", ""), "_", n_max=3, tokens_only=TRUE, omit_empty=TRUE)
## [[1]]
## [1] "ab" "c" 
## 
## [[2]]
## [1] "d"  "ef" "g" 
## 
## [[3]]
## [1] "h"
## 
## [[4]]
## character(0)
  • (NEW FUNCTION) #105: stri_list2matrix converts lists of atomic vectors to character matrices, useful in connection with stri_split and stri_extract.
stri_list2matrix(stri_split_fixed(c("ab_c", "d_ef_g", "h", ""), "_", n_max=3, tokens_only=TRUE, omit_empty=TRUE))
##      [,1] [,2] [,3] [,4]
## [1,] "ab" "d"  "h"  NA  
## [2,] "c"  "ef" NA   NA  
## [3,] NA   "g"  NA   NA
  • (NEW FEATURE) #107: stri_split_* now allow setting an omit_empty=NA argument.
stri_split_fixed("a_b_c__d", "_", omit_empty=FALSE)
## [[1]]
## [1] "a" "b" "c" ""  "d"
stri_split_fixed("a_b_c__d", "_", omit_empty=TRUE)
## [[1]]
## [1] "a" "b" "c" "d"
stri_split_fixed("a_b_c__d", "_", omit_empty=NA)
## [[1]]
## [1] "a" "b" "c" NA  "d"
  • (NEW FEATURE) #106: stri_split and stri_extract_all gained a simplify argument (if TRUE, then stri_list2matrix(..., byrow=TRUE) is called on the resulting list).
stri_split_fixed(c("ab,c", "d,ef,g", ",h", ""), ",", omit_empty=TRUE, simplify=TRUE)
##      [,1] [,2] [,3]
## [1,] "ab" "c"  NA  
## [2,] "d"  "ef" "g" 
## [3,] "h"  NA   NA  
## [4,] NA   NA   NA
stri_split_fixed(c("ab,c", "d,ef,g", ",h", ""), ",", omit_empty=FALSE, simplify=TRUE)
##      [,1] [,2] [,3]
## [1,] "ab" "c"  NA  
## [2,] "d"  "ef" "g" 
## [3,] ""   "h"  NA  
## [4,] ""   NA   NA
stri_split_fixed(c("ab,c", "d,ef,g", ",h", ""), ",", omit_empty=NA, simplify=TRUE)
##      [,1] [,2] [,3]
## [1,] "ab" "c"  NA  
## [2,] "d"  "ef" "g" 
## [3,] NA   "h"  NA  
## [4,] NA   NA   NA
  • (NEW FUNCTION) #77: stri_rand_lipsum generates (pseudo)random dummy lorem ipsum text.
cat(sapply(
   stri_wrap(stri_rand_lipsum(3), 80, simplify=FALSE),
   stri_flatten, collapse="\n"), sep="\n\n")
## Lorem ipsum dolor sit amet, eu turpis pellentesque est, lectus, vestibulum.
## Iaculis et nam ad eu morbi, ultrices enim pellentesque est fusce. Etiam
## ipsum varius, maecenas dapibus. Netus molestie non adipiscing netus,
## aptent sed malesuada, placerat suscipit. A, sed eu luctus imperdiet odio
## tempor. In velit ut vel feugiat felis eros risus. Sed sapien, facilisis
## ullamcorper, senectus efficitur sit id sociis sed purus. Ipsum, a, blandit
## faucibus. In vivamus, duis et sed sollicitudin maximus. Sodales magnis
## ac senectus facilisis, dolor faucibus a. Cursus in cum, cubilia egestas
## ut platea turpis. Maximus sit vel cursus nec in vel, eu, lacinia in ut.
## 
## Libero maximus potenti penatibus amet nisl non ut. Commodo nullam rhoncus,
## bibendum quisque sem aliquam sed, quam enim et, sed. Lacinia netus inceptos
## sapien nostra tincidunt facilisis montes nascetur non pharetra convallis
## id. Netus diam nulla montes nec tincidunt facilisis eros porttitor nisl urna
## cubilia. Aliquet egestas mus nisl, nisi vehicula, ac mauris rutrum, felis
## aenean tristique magna. Ante maecenas phasellus id class. Finibus iaculis purus
## volutpat posuere phasellus magna class blandit augue morbi torquent. Taciti
## ullamcorper venenatis at nulla eget auctor ante neque metus sed metus. Dolor,
## platea sit sed pellentesque ipsum. Dapibus sed nisi vestibulum ex integer.
## 
## Duis iaculis sapien habitasse, facilisi habitasse leo nam. Egestas,
## libero tempor purus in. Aliquam himenaeos conubia egestas cum vestibulum
## nec. Sociosqu mauris cum mus non lobortis eu et dapibus vel integer.
## Blandit quis inceptos cursus vel pellentesque lectus amet egestas.
## Pharetra ac eros nisi. Finibus nec, ac congue in molestie sed.
## Tincidunt faucibus a interdum facilisis, sed nulla, tortor, felis,
## sociis. Sem porttitor himenaeos pharetra nec eu torquent elementum.
  • (NEW FEATURE) #98: stri_trans_totitle gained an opts_brkiter parameter; it indicates which ICU BreakIterator should be used when performing the case mapping.
stri_trans_totitle("GOOD-OLD cOOkiE mOnSTeR IS watCHinG You. Here HE comes!",
    stri_opts_brkiter(type="word")) # default boundary
## [1] "Good-Old Cookie Monster Is Watching You. Here He Comes!"
stri_trans_totitle("GOOD-OLD cOOkiE mOnSTeR IS watCHinG You. Here HE comes!",
    stri_opts_brkiter(type="sentence"))
## [1] "Good-old cookie monster is watching you. Here he comes!"
  • (NEW FEATURE) stri_wrap gained a new parameter: normalize.
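
    A minimal sketch of what this option controls, as we understand it (normalize=TRUE, the default, tidies up white space before the text is wrapped; normalize=FALSE keeps the input spacing as-is):

stri_wrap("spaces   should   be   collapsed", width=20)
stri_wrap("spaces   should   be   collapsed", width=20, normalize=FALSE)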

  • (BUGFIX) #86: stri_*_fixed, stri_*_coll, and stri_*_regex could give incorrect results if one of the search strings was empty.

  • (BUGFIX) #99: stri_replace_all did not use the replacement arg.

  • (BUGFIX) #94: R CMD check should no longer fail if the icudt download fails.

  • (BUGFIX) #112: Some of the objects were not PROTECTed from being garbage collected, which might have caused spontaneous SEGFAULTS.

  • (BUGFIX) Some of the collator's options were not being passed correctly to the ICU services.

  • (BUGFIX) Memory leaks detected with valgrind --tool=memcheck --leak-check=full have been fixed.

  • (DOCUMENTATION) Significant extensions/clean ups in the stringi manual.

    Check it out yourself. In particular, take a glimpse at stringi-search-regex, stringi-search-charclass and, more generally, at stringi-search.

Enjoy! Any comments and suggestions are welcome.


R now will keep children away from drugs

Do you find this plot fancy? If so, you can find the code at the end of this article. But if you spend a little time reading it thoroughly, you will learn how to create even better ones.

We would like to encourage you and your children (or the children you teach) to use our new R package – TurtleGraphics!

The TurtleGraphics package offers R users the functionality of “turtle graphics” known from the Logo educational programming language. The main idea behind it is to inspire children to learn programming and to show that working with a computer can be entertaining and creative.

It is very elementary and clear, and it requires only basic algorithmic thinking skills that even children can develop. You can learn it in just five short steps.

  • turtle_init() – To start the program call the turtle_init() function. It creates a plot region (called “Terrarium”) and places the Turtle in the middle pointing north.
library(TurtleGraphics)
turtle_init()

  • turtle_forward() and turtle_backward() – The argument to these functions is the distance you want the Turtle to move. For example, to move the Turtle forward use the turtle_forward() function; to move it backwards, use the turtle_backward() function.
turtle_forward(dist=15)

  • turtle_right() and turtle_left() – They change the Turtle's direction by a given angle.
turtle_right(angle=30)

  • turtle_up() and turtle_down() – To stop the path from being drawn, simply use the turtle_up() function. Let's consider a simple example: after calling turtle_up(), moving forward leaves no visible path. If you want the path to be drawn again, call the turtle_down() function.
turtle_up()
turtle_forward(dist=10)
turtle_down()
turtle_forward(dist=10)

  • turtle_show() and turtle_hide() – Similarly, you may show or hide the Turtle image using the turtle_show() and turtle_hide() functions, respectively. If you call a lot of drawing functions, it is strongly recommended to hide the Turtle first, as this speeds up plotting.
turtle_hide()
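
Putting these commands together, here is a short sketch that draws a square:

turtle_init()
turtle_hide()        # hiding the Turtle speeds up drawing
for (i in 1:4) {
    turtle_forward(dist=25)
    turtle_right(angle=90)
}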

These were just the basics of the package. Below we show its true potential:

turtle_star <- function(intensity=1) {
   # draw 360*intensity colorful rays, rotating slightly after each one
   y <- sample(1:657, 360*intensity, replace=TRUE)  # random indices into colors()
   for (i in 1:(360*intensity)) {
      turtle_right(90)
      turtle_col(colors()[y[i]])
      x <- sample(1:100, 1)      # a random ray length
      turtle_forward(x)
      turtle_up()                # move back without drawing
      turtle_backward(x)
      turtle_down()
      turtle_left(90)
      turtle_forward(1/intensity)
      turtle_left(1/intensity)
   }
}

turtle_init(500, 500)
turtle_left(90)
turtle_do({
   turtle_star(7)
})

One may wonder what the turtle_do() function does here. It is a more advanced way of using the package: turtle_do() is designed for calling more complicated plotting expressions, because it automatically hides the Turtle before starting the operations, which makes the plotting faster.

drawTriangle <- function(points) {
   turtle_setpos(points[1,1], points[1,2])
   turtle_goto(points[2,1], points[2,2])
   turtle_goto(points[3,1], points[3,2])
   turtle_goto(points[1,1], points[1,2])
}

getMid <- function(p1, p2)
   c((p1[1]+p2[1])/2, (p1[2]+p2[2])/2)   # midpoint of a segment

sierpinski <- function(points, degree) {
   drawTriangle(points)
   if (degree > 0) {
      # recurse into the three corner sub-triangles
      p1 <- matrix(c(points[1,], getMid(points[1,], points[2,]),
         getMid(points[1,], points[3,])), nrow=3, byrow=TRUE)
      sierpinski(p1, degree-1)
      p2 <- matrix(c(points[2,], getMid(points[1,], points[2,]),
         getMid(points[2,], points[3,])), nrow=3, byrow=TRUE)
      sierpinski(p2, degree-1)
      p3 <- matrix(c(points[3,], getMid(points[3,], points[2,]),
         getMid(points[1,], points[3,])), nrow=3, byrow=TRUE)
      sierpinski(p3, degree-1)
   }
   invisible(NULL)
}

turtle_init(520, 500, "clip")
p <- matrix(c(10, 10, 510, 10, 250, 448), nrow=3, byrow=TRUE)
turtle_col("red")
turtle_do(sierpinski(p, 6))
turtle_setpos(250, 448)

We kindly invite you to use TurtleGraphics! Enjoy!
A full tutorial of this package is available here.

Marcin Kosinski, m.p.kosinski@gmail.com
Natalia Potocka, natalia-po@hotmail.com


Playing with GUIs in R with RGtk2

Sometimes we create nice functions that we want to show to people who don't know R. We can then do one of two things: teach them R, which is not an easy task and takes time, or build a GUI that lets them use these functions without any knowledge of R. This post is my first attempt at creating a GUI in R. Although it can be done in many ways, we will use the RGtk2 package, so before we start you will need:

require("RGtk2")

I will show how to build a GUI by example: an application that works like a calculator. It should have two text fields, the first with an expression to calculate and the second with the result. I want a button that triggers the calculation, and an error message should be displayed when there is a mistake in the expression. I also want two buttons that insert sin() and cos() into the expression field. The last element is a combobox that lets us choose between an integer and a double result.

First, we need to create a window and a frame.

window <- gtkWindow()
window["title"] <- "Calculator"


frame <- gtkFrameNew("Calculate")
window$add(frame)

(Screenshot: the empty “Calculator” window with its “Calculate” frame.)

Now, let's make two boxes. Components will be packed vertically into the first box and horizontally into the second.

box1 <- gtkVBoxNew()
box1$setBorderWidth(30)
frame$add(box1)   #add box1 to the frame

box2 <- gtkHBoxNew(spacing= 10) #distance between elements
box2$setBorderWidth(24)

The window should look exactly as before, because there are no components in the boxes yet; box2 hasn't even been added to the window. So let's put some elements in.

TextToCalculate <- gtkEntryNew() # text field with the expression to calculate
TextToCalculate$setWidthChars(25)
box1$packStart(TextToCalculate)

label = gtkLabelNewWithMnemonic("Result") #text label
box1$packStart(label)

result<- gtkEntryNew() #text field with result of our calculation
result$setWidthChars(25)
box1$packStart(result)

box1$packStart(box2)   # now add box2 (created earlier) to the window

Calculate <- gtkButton("Calculate")
box2$packStart(Calculate,fill=F) #button which will start calculating

Sin <- gtkButton("Sin") #button to paste sin() to TextToCalculate
box2$packStart(Sin,fill=F)

Cos <- gtkButton("Cos") #button to paste cos() to TextToCalculate
box2$packStart(Cos,fill=F)

model<-rGtkDataFrame(c("double","integer"))
combobox <- gtkComboBox(model)
#combobox allowing to decide whether we want result as integer or double

crt <- gtkCellRendererText()
combobox$packStart(crt)
combobox$addAttribute(crt, "text", 0)

gtkComboBoxSetActive(combobox,0)
box2$packStart(combobox)

(Screenshot: the calculator window with both text fields, the buttons, and the combobox in place.)

Note that our window grows as we put bigger components into it. However, nothing works as intended yet. We need to tell the buttons what to do when we click them:

DoCalculation <- function(button)
{
   if (TextToCalculate$getText() == "") return(invisible(NULL))  # if no text, do nothing

   # display an error dialog if R fails to evaluate the expression
   tryCatch({
      if (gtkComboBoxGetActive(combobox) == 0)
         result$setText(eval(parse(text=TextToCalculate$getText())))
      else
         result$setText(as.integer(eval(parse(text=TextToCalculate$getText()))))
   },
   error=function(e) {
      ErrorBox <- gtkDialogNewWithButtons("Error", window, "modal",
         "gtk-ok", GtkResponseType["ok"])
      box1 <- gtkVBoxNew()
      box1$setBorderWidth(24)
      ErrorBox$getContentArea()$packStart(box1)

      box2 <- gtkHBoxNew()
      box1$packStart(box2)

      ErrorLabel <- gtkLabelNewWithMnemonic("There is something wrong with your text!")
      box2$packStart(ErrorLabel)

      response <- ErrorBox$run()
      if (response == GtkResponseType["ok"])
         ErrorBox$destroy()
   })
}


PasteSin <- function(button)
{
   TextToCalculate$setText(paste(TextToCalculate$getText(), "sin()", sep=""))
}

PasteCos <- function(button)
{
   TextToCalculate$setText(paste(TextToCalculate$getText(), "cos()", sep=""))
}

# Although the button argument is never used inside these handlers,
# gSignalConnect requires them to accept it.
gSignalConnect(Calculate, "clicked", DoCalculation)
gSignalConnect(Sin, "clicked", PasteSin)
gSignalConnect(Cos, "clicked", PasteCos)

Now it works as planned.

We also get a nice error message when the expression is invalid.

Wiktor Ryciuk
wryciuk@poczta.onet.pl


Text mining in R – Automatic categorization of Wikipedia articles

Text mining is currently a hot topic in data analysis. The enormous text data resources on the Internet have made it an important component of the Big Data world. The potential of the information hidden in words is why I find it worth knowing what's going on in this field.

I wanted to learn about R's text analysis capabilities, and this post is the result of my small research. More precisely, it is an example of (hierarchical) categorization of Wikipedia articles. I share the source code here and explain it, so that everyone can try it themselves with various articles.

I use the tm package, which provides a set of tools for text mining. The stringi package is also useful here, for string processing.

First of all, we have to load the data. In the variable titles I list the titles of some Wikipedia articles: 5 mathematical terms (3 of them about integrals), 3 painters, and 3 writers. After loading the articles (as texts, i.e. HTML page sources), we put them into a container called a “Corpus”. It is a structure for storing text documents; in fact, it is just a kind of list containing the text documents and the metadata that concerns them.

library(tm)
library(stringi)
library(proxy)
wiki <- "http://en.wikipedia.org/wiki/"
titles <- c("Integral", "Riemann_integral", "Riemann-Stieltjes_integral", "Derivative",
    "Limit_of_a_sequence", "Edvard_Munch", "Vincent_van_Gogh", "Jan_Matejko",
    "Lev_Tolstoj", "Franz_Kafka", "J._R._R._Tolkien")
articles <- character(length(titles))

for (i in 1:length(titles)) {
    articles[i] <- stri_flatten(readLines(stri_paste(wiki, titles[i])), collapse = " ")
}

docs <- Corpus(VectorSource(articles))

Now that the data is loaded, we can start processing the text documents. This first step of text analysis is important, because the way we prepare the data strongly affects the results. We apply the tm_map function to the corpus, which works like lapply does for lists. What we do here is:

  1. Replace all HTML tags (elements such as “<a href=...>”) with a space, because they are not part of the document text, just HTML code.
  2. Replace all “\t” (tab) characters with a space.
  3. Convert the previous result (returned as “character”) to “PlainTextDocument”, so that we can apply the other functions from the tm package, which require this type of argument.
  4. Remove extra whitespace from the documents.
  5. Remove words that are redundant for text mining (e.g. pronouns, conjunctions); we specify them as stopwords("english"), a built-in word list for the English language (this argument is passed to the removeWords function).
  6. Remove punctuation marks.
  7. Transform all characters to lower case.
docs[[1]]

docs2 <- tm_map(docs, function(x) stri_replace_all_regex(x, "<.+?>", " "))
docs3 <- tm_map(docs2, function(x) stri_replace_all_fixed(x, "\t", " "))

docs4 <- tm_map(docs3, PlainTextDocument)
docs5 <- tm_map(docs4, stripWhitespace)
docs6 <- tm_map(docs5, removeWords, stopwords("english"))
docs7 <- tm_map(docs6, removePunctuation)
docs8 <- tm_map(docs7, tolower)

docs8[[1]]

We can look at the results of the “cleaned” text. Instead of this:


“The volume of irregular objects can be measured with precision by the fluid <a href="/wiki/Displacement_(fluid)" title="Displacement (fluid)">displaced</a> as the object is submerged; see <a href="/wiki/Archimedes" title="Archimedes">Archimedes</a>’s <a href="/wiki/Eureka_(word)" title="Eureka (word)">Eureka</a>.”

now we have this:


“the volume irregular objects can measured precision fluid displaced object submerged see archimedes s eureka”

Now we are ready to proceed to the heart of the analysis. The starting point is creating a “term-document matrix”, which describes the frequency of each term in each document of the corpus; this is a fundamental object in text analysis. Based on it, we create a matrix of dissimilarities between the documents (the dissimilarity function returns an object of class dist, which is convenient because clustering functions require this type of argument). Finally, we apply the hclust function (though it could be any clustering function) and see the result in the plot.

docsTDM <- TermDocumentMatrix(docs8)

docsdissim <- dissimilarity(docsTDM, method = "cosine")

docsdissim2 <- as.matrix(docsdissim)
rownames(docsdissim2) <- titles
colnames(docsdissim2) <- titles
docsdissim2
h <- hclust(docsdissim, method = "ward")
plot(h, labels = titles, sub = "")
(Plot: the cluster dendrogram produced by hclust.)

As we can see, the result here is perfect, though of course that's because the chosen articles are easy to categorize. On the left side, the writers form one small cluster and the painters another; these two clusters then merge into a bigger cluster of people. On the right side, the integrals form one cluster, which the two remaining terms then join, together forming a bigger cluster of mathematical terms.

This example shows only a small piece of R's text mining capabilities. I think you can just as easily carry out other text analyses, such as concept extraction, sentiment analysis, and information extraction in general.

Here are some sources of further information about text mining in R: cran.r-project, r-bloggers, onepager.togaware.com, jstatsoft.org.

Norbert Ryciak
norbertryciak@gmail.com


ICU Unicode text transforms in the R package stringi

The ICU (International Components for Unicode) library provides very powerful and flexible ways to apply various Unicode text transforms. These include:

  • Full (language-specific) case mappings,
  • Unicode normalization,
  • Text transliteration (e.g. script-to-script conversion).

All of these are available to R programmers/users via our still maturing stringi package.

Case Mappings

Mapping of upper-, lower-, and title-case characters may seem to be a straightforward task, but just a quick glimpse at the latest Unicode standard (Secs. 3.13, 4.2, and 5.18) will suffice to convince us that case mapping rules are very complex. In one of my previous posts I've already mentioned that “base R” performs (at least on my machine) only a single-character case conversion:

toupper("groß") # German: -> GROSS
## [1] "GROß"

Notably, the case conversion in R is language-dependent:

toupper("ıi") # Polish locale is default here
## [1] "II"
oldloc <- Sys.getlocale("LC_CTYPE")
Sys.setlocale("LC_CTYPE", "tr_TR.UTF-8")  # Turkish

toupper("ıi") # dotless i and latin i -> latin I and I with dot above (OK)
## [1] "Iİ"
Sys.setlocale("LC_CTYPE", oldloc)

This language-sensitivity is of course desirable when it comes to natural language processing. Unfortunately, a few more examples can be found for which toupper() and tolower() do not meet the Unicode guidelines. Generally, a proper case map can change the number of code points/units of a string, and it is language- and context-sensitive (the mapping of a character may depend on its surrounding characters). Luckily, the case mapping facilities implemented in the ICU library provide us with all we need:

library(stringi)
stri_trans_toupper("groß", locale = "de_DE") # German
## [1] "GROSS"
stri_trans_totitle("ijsvrij yoghurt", locale = "nl_NL")  # Dutch
## [1] "IJsvrij Yoghurt"
stri_trans_toupper("ıi", locale = "tr_TR")
## [1] "Iİ"
stri_trans_tolower("İI", locale = "tr_TR")
## [1] "iı"

By the way, ICU doesn't have any list of non-capitalized words for language-dependent title casing (e.g. pining for the fjords in English is most often mapped to Pining for the Fjords), so such tasks must be performed manually.
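
A minimal sketch of such a manual fix-up (the list of small words below is an illustrative assumption, this naive approach would also lower-case a small word at the very start of a string, and vectorize_all requires stringi >= 0.3-1):

small <- c("For", "The", "Of", "In", "On", "A", "An")   # illustrative only
titled <- stri_trans_totitle("pining for the fjords")
stri_replace_all_regex(titled, paste0("\\b", small, "\\b"),
    stri_trans_tolower(small), vectorize_all=FALSE)
## [1] "Pining for the Fjords"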

Unicode Normalization

The following string:

'\u01f1DZ'
## [1] "DZDZ"

consists of 3 Unicode code points: the Unicode character LATIN CAPITAL LETTER DZ (U+01F1) followed by the Latin letters D and Z. Even though the two DZs may look different in your Web browser, they appear (almost) identical in RStudio, at least on my computer. Try it yourself; it's really interesting.

A tricky question: how many DZs are there in the above string? 2 or 1? Considering raw code points (in a byte-wise manner), we'd answer 1, but for natural language processing a better answer is probably 2. This is one of the cases in which Unicode normalization (see here and here for more information) is of interest.

Without going into much detail, let's just say that there are a few normalization forms (NFC, NFD, NFKC, NFKD, NFKC_casefold), each serving a different purpose. Unless you are an author of some string processing package, these won't interest you too much (it's the developer's responsibility to provide on-the-fly normalization). Anyway, Unicode normalization may be performed with ICU:

stri_trans_nfkc('\u01f1DZ')
## [1] "DZDZ"
stri_trans_nfc('a\u0328') # a and ogonek => a with ogonek
## [1] "ą"
stri_trans_nfkc("\ufdfa") # 1 codepoint -> 18 codepoints
## [1] "صلى الله عليه وسلم"

Fortunately, an ordinary user may keep calm: many string processing tasks in stringi just take care of a proper transformation automatically. This includes string searching, sorting, and comparing functions:

stri_count_coll('\u01f1DZ', 'DZ', stri_opts_collator(strength=2)) # how many DZs are there?
## [1] 2
'ą' %==% 'a\u0328' # are they canonically equivalent?
## [1] TRUE

General Text Transforms

If you were patient and persistent enough with reading this post and arrived at this very section, here's the frosting on the cake: ICU general text transforms.

First of all, general transforms allow us to perform all the above-mentioned operations (however, they are not locale-dependent). For example:

stri_trans_general("DZDZ", "nfkc")
## [1] "DZDZ"
stri_trans_general("groß", "upper")
## [1] "GROSS"

Here, the 2nd argument of stri_trans_general denotes the transformation to apply. The list of available transforms is returned by a call to:

head(stri_trans_list())
## [1] "ASCII-Latin"       "Accents-Any"       "Amharic-Latin/BGN"
## [4] "Any-Accents"       "Any-Publishing"    "Arabic-Latin"

General text transforms can perform:

  • Hex and Character Name conversions (e.g. for escaping Unicode code points),
  • Script to Script conversion (a.k.a. text transliteration),
  • etc.

For more information on text transforms, refer to the ICU documentation. I admit that the user's guide is not easy to follow, but it may allow you to do some magic tricks with your text, so it's worth reading.

Notably, text transformations may be composed (so that many operations may be performed one by one in a single call) and we are able to tell ICU to restrict processing only to a fixed set of Unicode code points.

A bunch of examples: firstly, some script-to-script conversions (not to be confused with text translation):

stri_trans_general("stringi", "latin-greek") # script transliteration
## [1] "στριγγι"
stri_trans_general("Пётр Ильич Чайковский", "cyrillic-latin") # script transliteration
## [1] "Pëtr Ilʹič Čajkovskij"
stri_trans_general("Пётр Ильич Чайковский", "cyrillic-latin; nfd; [:nonspacing mark:] remove; nfc")  # and remove accents
## [1] "Petr Ilʹic Cajkovskij"
stri_trans_general("zażółć gęślą jaźń", "latin-ascii")   # remove diacritic marks
## [1] "zazolc gesla jazn"

What I really love in the first example above is that from ng we obtain γγ (gamma,gamma) and not νγ (nu, gamma). Cute, isn't it?

It's getting hotter with these:

stri_trans_general("w szczebrzeszynie chrząszcz brzmi w trzcinie", "pl-pl_fonipa")
## [1] "v ʂt͡ʂɛbʐɛʂɨɲɛ xʂɔ̃ʂt͡ʂ bʐmi v tʂt͡ɕiɲɛ"
# and now the same in the XSampa ASCII-range representation:
stri_trans_general("w szczebrzeszynie chrząszcz brzmi w trzcinie", "pl-pl_fonipa; ipa-xsampa")
## [1] "v s`t_s`Ebz`Es`1JE xs`O~s`t_s` bz`mi v ts`t_s\\iJE"

We've obtained the phonetic representation of a Polish text (in IPA) – try reading that tongue twister aloud (in case of any problems consult this Wikipedia article).

We may also escape a selected set of code points (to hex representation as well as e.g. to XML entities) or even completely remove them:

stri_trans_general("zażółć gęślą jaźń", "[^\\u0000-\\u007f] any-hex") # filtered
## [1] "za\\u017C\\u00F3\\u0142\\u0107 g\\u0119\\u015Bl\\u0105 ja\\u017A\\u0144"
stri_trans_general("zażółć gęślą jaźń", "[^\\u0000-\\u007f] any-hex/xml")
## [1] "za&#x17C;&#xF3;&#x142;&#x107; g&#x119;&#x15B;l&#x105; ja&#x17A;&#x144;"
stri_trans_general("zażółć gęślą jaźń", "[\\p{Z}] remove")
## [1] "zażółćgęśląjaźń"

…and play with code point names:

stri_trans_general("ą1©,", "any-name")
## [1] "\\N{LATIN SMALL LETTER A WITH OGONEK}\\N{DIGIT ONE}\\N{COPYRIGHT SIGN}\\N{COMMA}"
stri_trans_general("\\N{LATIN SMALL LETTER SHARP S}", "name-any")
## [1] "ß"

Last but not least:

stri_trans_general("Let's go -- she said", "any-publishing")
## [1] "Let’s go — she said"

Did you notice the differences?

A Note on BiDi Text (Help Needed)

ICU also provides support for processing Bidirectional text (e.g. a text that consists of a mixture of Arabic/Hebrew and English). We would be very glad to implement such facilities, but, as we (developers of stringi) come from a “Latin” environment, we don't have good guidelines on how the BiDi/RTL (right-to-left) text processing functions should behave. We don't even know whether such a text displays properly in RStudio or R GUI on Windows. Therefore, we'll warmly welcome any volunteers that would like to help us with the mentioned issues (developers or just testers).

For bug reports and feature requests concerning the stringi package visit our GitHub profile or contact me via email.

So…

stri_trans_general("¡muy bueno mi amigo, hasta la vista! :-)", "es-es_FONIPA")
## [1] "¡muiβwenomiamiɣo,.astalaβista!:)"

Marek Gagolewski


Counting the number of words in a LaTeX file with stringi

In my recent post I promised to present the most interesting features of the stringi package in more detail.

Here's one such jolly feature; many LaTeX users may find it very useful.

Loading a text file with encoding auto-detection

Here's a LaTeX document containing a Polish poem. Most of you probably wouldn't have been able to guess the file's character encoding if I hadn't left some hints. But that's OK: we have a little challenge.

Let's use some (currently experimental) stringi functions to guess the file's encoding.

First of all, we read the file into a raw vector (after all, each text file is just a sequence of bytes).

library(stringi)
# experimental function (as per stringi_0.2-5):
download.file("http://www.rexamine.com/manual_upload/powrot_taty_latin2.tex", 
    dest = "powrot_taty_latin2.tex")
file <- stri_read_raw("powrot_taty_latin2.tex")
head(file, 15)
##  [1] 25 25 20 45 4e 43 4f 44 49 4e 47 20 3d 20 49

Let's try to detect the file's character encoding automatically.

stri_enc_detect(file)[[1]]  # experimental function
## $Encoding
## [1] "ISO-8859-2" "ISO-8859-1" "ISO-8859-9"
## 
## $Language
## [1] "pl" "pt" "tr"
## 
## $Confidence
## [1] 0.46 0.19 0.07

Encoding detection is, at best, an imprecise operation using statistics and heuristics. ICU indicates that most probably we deal with Polish text in ISO-8859-2 (a.k.a. latin2) here. What a coincidence: it's true.

Let's re-encode the file. Our target encoding will be UTF-8, as it is a “superset” of all 8-bit encodings. We really love portable code:

file <- stri_conv(file, stri_enc_detect(file)[[1]]$Encoding[1], "UTF-8")
file <- stri_split_lines1(file)  # split a string into text lines
print(file[22:28])  # text sample
## [1] ",,Pójdźcie, o dziatki, pójdźcie wszystkie razem"
## [2] ""                                               
## [3] "Za miasto, pod słup na wzgórek,"                
## [4] ""                                               
## [5] "Tam przed cudownym klęknijcie obrazem,"         
## [6] ""                                               
## [7] "Pobożnie zmówcie paciórek."

Of course, if we knew a priori that the file is in ISO-8859-2, we'd just call:

file <- stri_conv(readLines("http://www.rexamine.com/manual_upload/powrot_taty_latin2.tex"), 
    "ISO-8859-2", "UTF-8")

So far so good.

Word count

LaTeX word counting is a quite complicated task and there are many possible approaches
to perform it. Most often, they rely on running some external tools (which may be a bit inconvenient for some users). Personally, I've always been most satisfied with the output produced by the Kile LaTeX IDE for KDE desktop environment.

(Screenshot: LaTeX document statistics in Kile.)

As not everyone has Kile installed, I decided to grab Kile's algorithm (the power of open source!), make some not-too-invasive stringi-specific tweaks, and here we are:

stri_stats_latex(file)
##     CharsWord CharsCmdEnvir    CharsWhite         Words          Cmds 
##          2283           335           576           461            32 
##        Envirs 
##             2

Some other aggregates are also available (these are meaningful for any text file):

stri_stats_general(file)
##       Lines LinesNEmpty       Chars CharsNWhite 
##         232         122        3308        2930

Finally, here's the word count for my R programming book (in Polish). Importantly, each chapter is stored in a separate .tex file (there are 30 files), so “clicking out” the answer in Kile would be a bit problematic:

apply(
   sapply(
      list.files(path="~/Publikacje/ProgramowanieR/rozdzialy/",
         pattern=glob2rx("*.tex"), recursive=TRUE, full.names=TRUE),
      function(x)
      stri_stats_latex(readLines(x))
   ), 1, sum)
## CharsWord CharsCmdEnvir    CharsWhite         Words          Cmds        Envirs
##    718755        458403        281989        120202         37055          6119

Notably, my publisher was satisfied with the above estimate. :-)

Next time we'll take a look at ICU's very powerful transliteration services.

PS. There’s also a nice LuaTeX package called chickenize. Check that out.

More information

For more information check out the stringi package website and its on-line documentation.

For bug reports and feature requests visit our GitHub profile.

Any comments and suggestions are warmly welcome.

Marek Gagolewski


(String/text processing)++: stringi 0.2-3 released

A new release of the stringi package is available on CRAN (please wait a few days for Windows and OS X binary builds).

stringi is a package providing (but definitely not limited to) replacements for nearly all the character string processing functions known from base R. While developing the package, we kept high performance and portability in mind.

Read more ›


ShareLaTeX now supports knitr

ShareLaTeX (click here to register a free account) is a wonderful and reliable on-line editor for writing and compiling LaTeX documents “in the cloud” as well as working together in real-time (imagine Google Docs supporting LaTeX => you get ShareLaTeX).

What is more, the ShareLaTeX team recently announced that its users can now prepare documents using knitr! R version 3.0.2 is currently available there.

Here is an example chunk in an .Rtex file that prints the list of installed packages:

%% begin.rcode
% cat(strwrap(paste(sort(rownames(installed.packages())), collapse=", "), 
%    width=80), sep='\n')
%% end.rcode

which results in:

## KernSmooth, MASS, Matrix, base, bitops, boot, class, cluster, codetools,
## compiler, datasets, digest, evaluate, foreign, formatR, grDevices, graphics,
## grid, highr, knitr, lattice, markdown, methods, mgcv, nlme, nnet, parallel,
## rpart, spatial, splines, stats, stats4, stringr, survival, tcltk, testit,
## tools, utils

Installation of new packages has been disabled (as is any access to on-line resources). However, you may upload your data sets (e.g. in CSV format) to your projects and read them with R commands. The compilation seems to take place in a chroot-like environment, so it is secure.
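
For instance, here is a sketch of a chunk reading an uploaded data set (the file name mydata.csv is hypothetical):

%% begin.rcode
% df <- read.csv("mydata.csv")  # a CSV file uploaded to the project
% print(summary(df))
%% end.rcode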

Enjoy!


FuzzyNumbers_0.3-3 released

A new release of the FuzzyNumbers package for R is now available on CRAN. The package provides S4 classes and methods for dealing with fuzzy numbers, allowing for arithmetic operations, approximation by trapezoidal and piecewise linear fuzzy numbers, visualization, etc.
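
For instance, a tiny sketch (the constructor follows the package manual):

library(FuzzyNumbers)
A <- TrapezoidalFuzzyNumber(1, 2, 4, 7)   # support [1,7], core [2,4]
plot(A)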
Read more ›


stringi 0.1-11 available for testing

We have prepared binary builds of the latest version of our string processing package stringi (Windows i386/x86_64 for R 2.15.X and 3.0.X, and OS X x86_64 for R 3.0.X). They are available, together with the source package, via the on-line installer:

source("http://stringi.rexamine.com/install.R")