Google Ngram Viewer
The Google Ngram Viewer or Google Books Ngram Viewer is an online search engine that charts the frequencies of any set of comma-delimited search strings, using a yearly count of n-grams found in sources printed between 1500 and 2008[1][2][3][4] in Google's text corpora in American English, British English, French, German, Spanish, Russian, Hebrew, or Chinese,[1][5] generated in either 2009 or 2012. There are also some specialized English-language corpora, such as English Fiction. Because Italian lacks an independent corpus, Italian words are counted by their use in other languages.
The program can search for a single word or a phrase, including misspellings or gibberish.[5] The n-grams are matched with the text within the selected corpus, optionally using case-sensitive spelling (which compares the exact use of uppercase letters),[2] and, if found in 40 or more books, are then plotted on a graph.[6]
As of January 2016, the Ngram Viewer supports searches for parts of speech and wildcards.
History
The program was developed by Jon Orwant and Will Brockman and released in mid-December 2010.[1][3] It was inspired by a prototype (called "Bookworm") created by Jean-Baptiste Michel and Erez Aiden from Harvard's Cultural Observatory and Yuan Shen from MIT and Steven Pinker.[7]
The Ngram Viewer was initially based on Google Books, but then switched to the 2009 edition of the Google Books Ngram Corpus. As of January 2016, the program can search an individual language's corpus within the 2009 or the 2012 edition. Although the latter edition has various new features, it only includes source texts within the same range of years as the former: through 2008 and not beyond.
Research based on the Ngram Corpus (that is, the databases of text from the scanned books) has included the finding of correlations between the emotional output and significant events in the 20th century such as World War II[8] or to check and challenge popular trend statements such as the secularisation or economisation of modern societies.[9] With such results, the Viewer and its corpora are useful for research in the digital humanities.
Operation and restrictions
User-entered search terms are delimited by commas, each indicating a separate word or phrase to find.[6] The Ngram Viewer returns a plotted line chart within seconds of the user pressing the Enter key or the "Search" button.
To adjust for the fact that more books were published in some years than others, the data are normalized to a relative level by the number of books published in each year.[6]
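This normalization can be sketched as follows. In practice the published dataset supplies per-year totals (via its total_counts file), so a raw yearly match count is divided by that year's total to yield a relative frequency. The numbers below are illustrative placeholders, not real corpus values.

```python
# Sketch: normalizing raw yearly match counts into relative frequencies.
# All counts here are hypothetical, for illustration only.
match_counts = {2006: 9818, 2007: 20017, 2008: 33722}          # hits for one n-gram
year_totals = {2006: 15_000_000_000, 2007: 18_000_000_000,     # total n-grams counted
               2008: 19_000_000_000}                           # in each year (assumed)

# Relative frequency = matches in a year / total counted that year.
relative = {year: match_counts[year] / year_totals[year] for year in match_counts}
for year in sorted(relative):
    print(f"{year}: {relative[year]:.3e}")
```

Plotting these relative values, rather than the raw counts, is what makes lines from different eras comparable despite the growth in publishing volume.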
Google populated the database from over 5 million books published up to 2008. Accordingly, as of January 2016, no data will match beyond the year 2008, regardless of whether the corpus was generated in 2009 or 2012. Because of limits on the size of the Ngram database, only n-grams found in 40 or more books are indexed; otherwise the database could not store all possible combinations.[6]
Typically, search terms cannot end with punctuation, although a separate full stop (a period) can be searched for.[6] An ending question mark (as in "Why?") will trigger a second search for the question mark by itself.[6]
Omitting the periods in abbreviations allows a form of matching, such as using "R M S" to search for "R.M.S." as distinct from "RMS".
Corpora
The corpora used for the search are composed of total_counts, 1-grams, 2-grams, 3-grams, 4-grams, and 5-grams files for each language. Each file contains tab-separated data, with one record per line in the following format:[10]
- total_counts file
- year TAB match_count TAB page_count TAB volume_count NEWLINE
- Version 1 ngram file (generated in July 2009)
- ngram TAB year TAB match_count TAB page_count TAB volume_count NEWLINE
- Version 2 ngram file (generated in July 2012)
- ngram TAB year TAB match_count TAB volume_count NEWLINE
The Google Ngram Viewer uses match_count to plot the graph.
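A minimal parser for the Version 2 line format described above might look like this; the example line follows the documented layout (ngram TAB year TAB match_count TAB volume_count), and the function name is my own.

```python
def parse_v2_ngram_line(line):
    """Parse one line of a Version 2 ngram file:
    ngram TAB year TAB match_count TAB volume_count NEWLINE"""
    ngram, year, match_count, volume_count = line.rstrip("\n").split("\t")
    return ngram, int(year), int(match_count), int(volume_count)

# A line in the documented Version 2 format:
record = parse_v2_ngram_line("Wikipedia\t2008\t33722\t6825\n")
print(record)  # ('Wikipedia', 2008, 33722, 6825)
```

A Version 1 line would parse the same way but with an extra page_count field between match_count and volume_count.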
As an example, the word "Wikipedia" from the Version 2 file of the English 1-grams is stored as follows:[11]
ngram | year | match_count | volume_count |
Wikipedia | 1904 | 1 | 1 |
Wikipedia | 1912 | 11 | 1 |
Wikipedia | 1924 | 1 | 1 |
Wikipedia | 1925 | 11 | 1 |
Wikipedia | 1929 | 11 | 1 |
Wikipedia | 1943 | 11 | 1 |
Wikipedia | 1946 | 11 | 1 |
Wikipedia | 1947 | 11 | 1 |
Wikipedia | 1949 | 11 | 1 |
Wikipedia | 1951 | 11 | 1 |
Wikipedia | 1953 | 22 | 2 |
Wikipedia | 1955 | 11 | 1 |
Wikipedia | 1958 | 1 | 1 |
Wikipedia | 1961 | 22 | 2 |
Wikipedia | 1964 | 22 | 2 |
Wikipedia | 1965 | 11 | 1 |
Wikipedia | 1966 | 15 | 2 |
Wikipedia | 1969 | 33 | 3 |
Wikipedia | 1970 | 129 | 4 |
Wikipedia | 1971 | 44 | 4 |
Wikipedia | 1972 | 22 | 2 |
Wikipedia | 1973 | 1 | 1 |
Wikipedia | 1974 | 2 | 1 |
Wikipedia | 1975 | 33 | 3 |
Wikipedia | 1976 | 11 | 1 |
Wikipedia | 1977 | 13 | 3 |
Wikipedia | 1978 | 11 | 1 |
Wikipedia | 1979 | 112 | 12 |
Wikipedia | 1980 | 13 | 4 |
Wikipedia | 1982 | 11 | 1 |
Wikipedia | 1983 | 3 | 2 |
Wikipedia | 1984 | 48 | 3 |
Wikipedia | 1985 | 37 | 3 |
Wikipedia | 1986 | 6 | 4 |
Wikipedia | 1987 | 13 | 2 |
Wikipedia | 1988 | 14 | 3 |
Wikipedia | 1990 | 12 | 2 |
Wikipedia | 1991 | 8 | 5 |
Wikipedia | 1992 | 1 | 1 |
Wikipedia | 1993 | 1 | 1 |
Wikipedia | 1994 | 23 | 3 |
Wikipedia | 1995 | 4 | 1 |
Wikipedia | 1996 | 23 | 3 |
Wikipedia | 1997 | 6 | 1 |
Wikipedia | 1998 | 32 | 10 |
Wikipedia | 1999 | 39 | 11 |
Wikipedia | 2000 | 43 | 12 |
Wikipedia | 2001 | 59 | 14 |
Wikipedia | 2002 | 105 | 19 |
Wikipedia | 2003 | 149 | 53 |
Wikipedia | 2004 | 803 | 285 |
Wikipedia | 2005 | 2964 | 911 |
Wikipedia | 2006 | 9818 | 2655 |
Wikipedia | 2007 | 20017 | 5400 |
Wikipedia | 2008 | 33722 | 6825 |
The Google Ngram Viewer uses this data to plot the corresponding graph for "Wikipedia".
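Note that the file stores only the years in which the n-gram actually occurred; years with no matches simply have no row. A plotting tool therefore has to fill the gaps with zeros, which can be sketched with the last few rows of the table above:

```python
# Sketch: turning sparse per-year rows (as in the table above) into a
# complete series for plotting; years without a row count as zero.
rows = [("Wikipedia", 2004, 803, 285), ("Wikipedia", 2005, 2964, 911),
        ("Wikipedia", 2006, 9818, 2655), ("Wikipedia", 2007, 20017, 5400),
        ("Wikipedia", 2008, 33722, 6825)]

by_year = {year: match_count for _, year, match_count, _ in rows}
series = [by_year.get(y, 0) for y in range(2003, 2009)]  # 2003 has no row
print(series)  # [0, 803, 2964, 9818, 20017, 33722]
```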
Criticism
The data set has been criticized for its reliance on inaccurate OCR, an overabundance of scientific literature, and for including large numbers of incorrectly dated and categorized texts.[12][13]
OCR Issues
OCR, or optical character recognition, is the process by which computers take the pixels of a scanned book and convert them into text. It is never a perfect process, and it only gets harder when computers try to decipher squiggles on a 200-year-old page.
As Mark Liberman, a computational linguist at the University of Pennsylvania, points out, the confusion of the archaic long s (ſ) with f turns up time and again: cafe for case, funk for sunk, fame for same. Plenty of isolated OCR errors probably exist, but systematic ones like confusing s and f are where one has to start being careful.
Although the authors claim that the results are reliable from 1800 onwards, poor OCR and insufficient data mean that frequencies given for languages such as Chinese may only be accurate from 1970 onwards, with earlier parts of the corpus showing no results at all for common terms, and data for some years containing more than 50% noise.[14][15]
Overabundance of Scientific Literature
Google Books' English-language corpus is a mishmash of fiction, nonfiction, reports, proceedings, and, as Dodds' paper seems to show, a great deal of scientific literature. Many have noted that the pre-20th-century corpus contains far more sermons, while scientific papers have recently taken up a growing share of the corpus. An overabundance of any one type of writing skews the data for researchers trying to understand the popularity of a term in a particular context.
Messy metadata
When Google scans books, it also populates the metadata: publication date, author, length, genre, and so on. Like OCR, this is a largely automated process, and like OCR, it is prone to error. At the blog Language Log, University of California linguist Geoff Nunberg has documented books whose dates are badly wrong; he notes that a search for Barack Obama restricted to years before his birth turns up 29 results. Some of these errors have since been fixed, as Google corrects errors in Google Books when it notices them.
But the fixes don't make it into the indexed corpus that powers the Ngram Viewer right away; that corpus has been updated only once, in 2012. "Our paper is a bit of an appeal to Google to release a third edition which would be more nuanced," says Dodds. "We need a recleaning of the data."[16]
References
- 1 2 3 "Google Ngram Database Tracks Popularity Of 500 Billion Words" Huffington Post, 17 December 2010, webpage: HP8150.
- 1 2 "Google Ngram Viewer - Google Books", Books.Google.com, May 2012, webpage: G-Ngrams.
- 1 2 "Google's Ngram Viewer: A time machine for wordplay", Cnet.com, 17 December 2010, webpage: CN93.
- ↑ "A Picture is Worth 500 Billion Words – By Rusty S. Thompson", HarrisburgMagazine.com, 20 September 2011, webpage: HBMag20.
- 1 2 "Google Books Ngram Viewer - University at Buffalo Libraries", Lib.Buffalo.edu, 22 August 2011, webpage: Buf497.
- 1 2 3 4 5 6 "Google Ngram Viewer - Google Books" (Information), Books.Google.com, December 16, 2010, webpage: G-Ngrams-info: notes bigrams and use of quotes for words with apostrophes.
- ↑ https://www.youtube.com/watch?v=5S1d3cNge24&feature=youtu.be&t=56m58s
- ↑ Acerbi A, Lampos V, Garnett P, Bentley RA (2013) The Expression of Emotions in 20th Century Books. PLoS ONE 8(3): e59030. doi: 10.1371/journal.pone.0059030
- ↑ Roth, S. (2014), "Fashionable functions. A Google ngram view of trends in functional differentiation (1800-2000)", International Journal of Technology and Human Interaction, Band 10, Nr. 2, S. 34-58 (online: http://ssrn.com/abstract=2491422).
- ↑ "Google Books Ngram Viewer". Google.
- ↑ googlebooks-eng-all-1gram-20120701-w.gz at http://storage.googleapis.com/books/ngrams/books/datasetsv2.html
- ↑ Google Ngrams: OCR and Metadata. web.resourceshelf.com.
- ↑ Humanities research with the Google Books corpus. languagelog.ldc.upenn.edu.
- ↑ Google n-grams and pre-modern Chinese. digitalsinology.org.
- ↑ When n-grams go bad. digitalsinology.org.
- ↑ "The Pitfalls of Using Google Ngram to Study Language". WIRED. Retrieved 2016-04-18.
- Lin, Yuri; et al. (July 2012). "Syntactic Annotations for the Google Books Ngram Corpus" (pdf). Proceedings of the 50th Annual Meeting. Demo Papers (Jeju, Republic of Korea: Association for Computational Linguistics) 2: 169–174. 2390499.
Whitepaper presenting the 2012 edition of the Google Books Ngram Corpus