Column-oriented DBMS
A column-oriented DBMS (or columnar database) is a database management system (DBMS) that stores data tables as columns rather than as rows. Practical use of a column store versus a row store differs little in the relational DBMS world. Both columnar and row databases use traditional database languages like SQL to load data and perform queries. Both row and columnar databases can become the backbone in a system to serve data for common ETL and data visualization tools. However, by storing data in columns rather than rows, the database can more precisely access the data it needs to answer a query rather than scanning and discarding unwanted data in rows. Query performance is often increased[1] as a result, particularly in very large data sets.
Another benefit of columnar storage is compression efficiency.[1] A column of similar data (dates, for example) compresses more efficiently than the disparate data found across a row. For this reason, columnar databases (whose fundamental record is the column, which contains many similar values) are well known for minimizing storage and reducing the I/O spent reading data to answer a query, compared to row-based databases (whose fundamental record is the row, which can contain arbitrary, often dissimilar data). Columnar databases are most often paired with massively parallel processing (MPP) capability (for example, as provided by Hadoop) to spread the analytical workload across a cluster.
Description
Background
A relational database management system provides data that represents a two-dimensional table of columns and rows. For example, a database might have this table:
| RowId | EmpId | Lastname | Firstname | Salary |
|-------|-------|----------|-----------|--------|
| 001   | 10    | Smith    | Joe       | 40000  |
| 002   | 12    | Jones    | Mary      | 50000  |
| 003   | 11    | Johnson  | Cathy     | 44000  |
| 004   | 22    | Jones    | Bob       | 55000  |
This simple table includes an employee identifier (EmpId), name fields (Lastname and Firstname) and a salary (Salary). This two-dimensional format exists only in theory. In practice, storage hardware requires the data to be serialized into one form or another.
The most expensive operations involving hard disks are seeks. To improve overall performance, related data should be stored in a fashion that minimizes the number of seeks. This is known as locality of reference, and the basic concept appears in a number of different contexts. Hard disks are organized into a series of blocks of a fixed size, typically enough to store several rows of the table. By organizing the table's data so that rows fit within these blocks, and grouping related rows onto sequential blocks, the number of blocks that need to be read, and the number of seeks, are both minimized.
Row-oriented systems
The common solution to the storage problem is to serialize each row of data, like this:
001:10,Smith,Joe,40000; 002:12,Jones,Mary,50000; 003:11,Johnson,Cathy,44000; 004:22,Jones,Bob,55000;
As data is inserted into the table, it is assigned an internal ID, the rowid, that is used internally in the system to refer to data. In this case the records have sequential rowids independent of the user-assigned EmpId. In this example, the DBMS uses short integers to store rowids; in practice, larger 64-bit or 128-bit numbers are normally used.
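As a rough illustration, the following Python sketch (with the example table hard-coded) serializes the records in the row-oriented fashion shown above. It is illustrative only; real row stores use binary page formats rather than delimited strings.

```python
# Illustrative sketch only: a row store writes each record's fields contiguously.
rows = [
    ("001", 10, "Smith", "Joe", 40000),
    ("002", 12, "Jones", "Mary", 50000),
    ("003", 11, "Johnson", "Cathy", 44000),
    ("004", 22, "Jones", "Bob", 55000),
]

# Serialize record by record: rowid followed by all of that row's field values.
serialized = " ".join(
    f"{rowid}:{empid},{last},{first},{salary};"
    for rowid, empid, last, first, salary in rows
)
print(serialized)
# 001:10,Smith,Joe,40000; 002:12,Jones,Mary,50000; 003:11,Johnson,Cathy,44000; 004:22,Jones,Bob,55000;
```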
Row-based systems are designed to efficiently return data for an entire row, or record, in as few operations as possible. This matches the common use-case where the system is attempting to retrieve information about a particular object, say the contact information for a user in a rolodex system, or product information for an online shopping system. By storing the record's data in a single block on the disk, along with related records, the system can quickly retrieve records with a minimum of disk operations.
Row-based systems are not efficient at performing set-wide operations on the whole table, as opposed to a small number of specific records. In order to, for instance, find all records in the example table with salaries between 40,000 and 50,000, the DBMS would have to fully scan through the entire table looking for matching records. While the example table shown above will likely fit in a single disk block, a table with even a few hundred rows would not, and multiple disk operations would be needed to retrieve the data and examine it.
To improve the performance of these sorts of operations (which are very common, and generally the point of using a DBMS), most DBMSs support the use of database indexes, which store all the values from a set of columns along with rowid pointers back into the original table. An index on the salary column would look something like this:
001:40000; 003:44000; 002:50000; 004:55000;
As they store only single pieces of data, rather than entire rows, indexes are generally much smaller than the main table stores. Scanning this smaller set of data reduces the number of disk operations. If the index is heavily used, it can dramatically reduce the time for common operations. However, maintaining indexes adds overhead to the system, especially when new data is written to the database. Records not only need to be stored in the main table, but any attached indexes have to be updated as well.
Database indexes on one or more columns are typically sorted by value, which makes range query operations (like the "find all records with salaries between 40,000 and 50,000" example above) very fast.
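The following is a minimal Python sketch, not any particular DBMS's implementation, of such a sorted index: (value, rowid) pairs kept in value order so that a range query only touches the matching slice.

```python
import bisect

# Illustrative sketch only: a secondary index as (value, rowid) pairs sorted by value.
salary_index = sorted([
    (40000, "001"), (50000, "002"), (44000, "003"), (55000, "004"),
])

def rowids_with_salary_between(low, high):
    """Binary-search the sorted index and return rowids whose salary is in [low, high]."""
    start = bisect.bisect_left(salary_index, (low,))
    end = bisect.bisect_right(salary_index, (high, "\uffff"))
    return [rowid for _, rowid in salary_index[start:end]]

print(rowids_with_salary_between(40000, 50000))  # ['001', '003', '002']
```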
A number of row-oriented databases are designed to fit entirely in RAM, as an in-memory database. These systems do not depend on disk operations, and have equal-time access to the entire dataset. This reduces the need for indexes, as a full scan of the original data takes roughly the same number of operations as reading a complete index for typical aggregation purposes. Such systems may therefore be simpler and smaller, but can only manage databases that will fit in memory.
Column-oriented systems
A column-oriented database serializes all of the values of a column together, then the values of the next column, and so on. For our example table, the data would be stored in this fashion:
10:001,12:002,11:003,22:004; Smith:001,Jones:002,Johnson:003,Jones:004; Joe:001,Mary:002,Cathy:003,Bob:004; 40000:001,50000:002,44000:003,55000:004;
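As a rough illustration, the following Python sketch (again with the example table hard-coded) lays the same data out column by column; as before, real column stores use compressed binary formats rather than delimited strings.

```python
# Illustrative sketch only: the same table stored column by column,
# each value paired with the rowid it belongs to.
columns = {
    "EmpId":     [(10, "001"), (12, "002"), (11, "003"), (22, "004")],
    "Lastname":  [("Smith", "001"), ("Jones", "002"), ("Johnson", "003"), ("Jones", "004")],
    "Firstname": [("Joe", "001"), ("Mary", "002"), ("Cathy", "003"), ("Bob", "004")],
    "Salary":    [(40000, "001"), (50000, "002"), (44000, "003"), (55000, "004")],
}

# Serialize column by column; a query over Salary alone reads only the last run of values.
serialized = " ".join(
    ",".join(f"{value}:{rowid}" for value, rowid in column) + ";"
    for column in columns.values()
)
print(serialized)
# 10:001,12:002,11:003,22:004; Smith:001,Jones:002,Johnson:003,Jones:004; ...
```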
In this layout, any one of the columns more closely matches the structure of an index in a row-based system. This may cause confusion that can lead to the mistaken belief a column-oriented store "is really just" a row-store with an index on every column. However, it is the mapping of the data that differs dramatically. In a row-oriented indexed system, the primary key is the rowid that is mapped to indexed data. In the column-oriented system, the primary key is the data, mapping back to rowids.[2] This may seem subtle, but the difference can be seen in this common modification to the same store:
…;Smith:001;Jones:002,004;Johnson:003;…
As two of the records store the same value, "Jones", it is possible to store this only once in the column store, along with pointers to all of the rows that match it. For many common searches, like "find all the people with the last name Jones", the answer is retrieved in a single operation. Other operations, like counting the number of matching records or performing math over a set of data, can be greatly improved through this organization.
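A minimal Python sketch of this value-to-rowid mapping, using a plain dictionary as a stand-in for the column store's internal structures:

```python
from collections import defaultdict

# Illustrative sketch only: each distinct value is stored once,
# with a list of the rowids that contain it.
lastname_column = [("001", "Smith"), ("002", "Jones"), ("003", "Johnson"), ("004", "Jones")]

value_to_rowids = defaultdict(list)
for rowid, value in lastname_column:
    value_to_rowids[value].append(rowid)

print(dict(value_to_rowids))
# {'Smith': ['001'], 'Jones': ['002', '004'], 'Johnson': ['003']}

# "Find all the people with the last name Jones" is a single lookup,
# and counting the matches never touches the rest of the row data.
print(value_to_rowids["Jones"])       # ['002', '004']
print(len(value_to_rowids["Jones"]))  # 2
```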
Whether or not a column-oriented system will be more efficient in operation depends heavily on the workload being automated. Operations that retrieve all the data for a given object (the entire row) are slower. A row-based system can retrieve the row in a single disk read, whereas a columnar database requires numerous disk operations to collect data from multiple columns. However, these whole-row operations are generally rare. In the majority of cases, only a limited subset of data is retrieved. In a rolodex application, for instance, collecting the first and last names from many rows to build a list of contacts is far more common than reading all data for any single address. This is even more true for writing data into the database, especially if the data tends to be "sparse" with many optional columns. For this reason, column stores have demonstrated excellent real-world performance in spite of many theoretical disadvantages.[3]
This is a simplification, however: partitioning, indexing, caching, views, OLAP cubes, and transactional mechanisms such as write-ahead logging or multiversion concurrency control all dramatically affect the physical organization of either system. That said, online transaction processing (OLTP)-focused RDBMS systems are more row-oriented, while online analytical processing (OLAP)-focused systems are a balance of row-oriented and column-oriented.
Benefits
Comparisons between row-oriented and column-oriented databases are typically concerned with the efficiency of hard-disk access for a given workload, as seek time is incredibly long compared to the other bottlenecks in computers. For example, a Serial ATA (SATA) hard drive has a maximum transfer rate of 600 MB/second (Megabytes per second) [4] while DDR3 SDRAM Memory can reach transfer rates of 17 GB/s (Gigabytes per second) [5]:157–165. Clearly, a major bottleneck in handling big data is disk access. Columnar databases boost performance by reducing the amount of data that needs to be read from disk, both by efficiently compressing the similar columnar data and by reading only the data necessary to answer the query.
In practice, columnar databases are well-suited for OLAP-like workloads (e.g., data warehouses) which typically involve highly complex queries over all data (possibly petabytes). However, some work must be done to write data into a columnar database. Transactions (INSERTs) must be separated into columns and compressed as they are stored, making it less suited for OLTP workloads. Row-oriented databases are well-suited for OLTP-like workloads which are more heavily loaded with interactive transactions.
Compression
Column data is of uniform type; therefore, there are some opportunities for storage size optimizations available in column-oriented data that are not available in row-oriented data. For example, many popular modern compression schemes, such as LZW or run-length encoding, make use of the similarity of adjacent data to compress. Missing values and repeated values, common in clinical data, can be represented by a two-bit marker.[6] While the same techniques may be used on row-oriented data, a typical implementation will achieve less effective results.[7][8]
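As a rough illustration, the following Python sketch applies run-length encoding to a hypothetical low-cardinality column; the specific values are made up, and production systems use far more elaborate encodings.

```python
from itertools import groupby

# Illustrative sketch only: run-length encoding of a column with many repeated values.
status_column = ["active", "active", "active", "inactive", "inactive",
                 "active", "active", "active", "active", "inactive"]

def run_length_encode(values):
    """Collapse each run of identical adjacent values into a (value, count) pair."""
    return [(value, sum(1 for _ in run)) for value, run in groupby(values)]

print(run_length_encode(status_column))
# [('active', 3), ('inactive', 2), ('active', 4), ('inactive', 1)] -- 4 pairs for 10 values
```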
Sorting the rows can also improve compression. For example, using bitmap indexes, sorting can improve compression by an order of magnitude.[9] To maximize the compression benefits of lexicographical order with respect to run-length encoding, it is best to use low-cardinality columns as the first sort keys.[10] For example, given a table with columns sex, age, and name, it would be best to sort first on sex (cardinality of two), then age (cardinality under 150), then name.
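A small Python sketch (with made-up rows) of why this ordering matters: sorting on the low-cardinality column first produces long runs that run-length encoding can collapse.

```python
from itertools import groupby

# Illustrative sketch only, with made-up rows of (sex, age).
rows = [("F", 34), ("M", 29), ("F", 29), ("M", 34), ("F", 61), ("M", 61)]

def runs(values):
    return [(value, sum(1 for _ in run)) for value, run in groupby(values)]

# Unsorted, the sex column alternates and run-length encoding gains nothing.
print(runs([sex for sex, _ in rows]))
# [('F', 1), ('M', 1), ('F', 1), ('M', 1), ('F', 1), ('M', 1)]

# Sorting with the low-cardinality column (sex) as the first key creates long runs.
print(runs([sex for sex, _ in sorted(rows)]))
# [('F', 3), ('M', 3)]
```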
Columnar compression achieves a reduction in disk space at the expense of efficiency of retrieval. Retrieving all data from a single row is more efficient when that data is located in a single location, as in a row-oriented architecture. Further, the greater the adjacent compression achieved, the more difficult random access may become, as data might need to be decompressed before it can be read. Therefore, column-oriented architectures are sometimes enriched by additional mechanisms aimed at minimizing the need for access to compressed data.[11]
History
Column stores or transposed files have been implemented since the early days of DBMS development. TAXIR, in 1969, was the first application of a column-oriented database storage system, with a focus on information retrieval in biology.[12] Clinical data from patient records, with many more attributes than could be analyzed, were processed from 1975 onward by a Time-Oriented Database System (TODS).[6] Statistics Canada implemented the RAPID system[13] in 1976 and used it for processing and retrieval of the Canadian Census of Population and Housing as well as several other statistical applications. RAPID was shared with other statistical organizations throughout the world and used widely in the 1980s. It continued to be used by Statistics Canada until the 1990s.
KDB, developed in 1993, was the first commercially available column-oriented database; Sybase IQ followed in 1995. The landscape has changed rapidly since about 2004, however, with many open-source and commercial implementations appearing. MonetDB was released under an open-source license on September 30, 2004,[14] followed closely by the now-defunct C-Store.[15] Vertica was eventually developed out of C-Store, while the MonetDB-related X100 project evolved into VectorWise.[16][17] Druid is a column-oriented data store that was open-sourced in late 2012 and is now used by numerous organizations.[18]
Implementations
While even a traditional row-oriented RDBMS can achieve some of the benefits of a column-oriented layout, specializing the storage layer and the query-execution engine provides further benefits.[19] While nothing precludes providing both row- and column-optimized capabilities in a single DBMS, products typically specialize in one of the two directions.
References
1. Ventana; et al. (2011). "Ins and Outs of Columnar Databases".
2. Abadi, Daniel; Madden, Samuel (31 July 2008). "Debunking Another Myth: Column-Stores vs. Vertical Partitioning". The Database Column. Archived from the original on December 4, 2008.
3. Harizopoulos, Stavros; Abadi, Daniel; Boncz, Peter. "Column-Oriented Database Systems" (PDF). VLDB 2009 Tutorial. p. 5.
4. "SATA-IO Releases SATA Revision 3.0 Specification" (PDF) (Press release). Serial ATA International Organization. May 27, 2009. Retrieved 3 July 2009.
5. "DDR3 SDRAM standard (revision F)". JEDEC. July 2012. Retrieved 2015-07-05.
6. Weyl, Stephen; Fries, James F.; Wiederhold, Gio; Germano, Frank (1975). "A Modular Self-describing Clinical Database System". Computers in Biomedical Research 8, pp. 279–293. doi:10.1016/0010-4809(75)90045-2.
7. Abadi, D. J.; Madden, S. R.; Hachem, N. (2008). "Column-stores vs. row-stores: how different are they really?". SIGMOD '08. pp. 967–980.
8. Bruno, N. (2009). "Teaching an old elephant new tricks" (PDF). CIDR '09.
9. Lemire, Daniel; Kaser, Owen; Aouiche, Kamel (2010). "Sorting improves word-aligned bitmap indexes". Data & Knowledge Engineering 69 (1), pp. 3–28.
10. Lemire, Daniel; Kaser, Owen (2011). "Reordering Columns for Smaller Indexes". Information Sciences 181 (12).
11. Slezak; et al. (2008). "Brighthouse: an analytic data warehouse for ad hoc queries" (PDF). Proceedings of the 34th VLDB Conference. Auckland, New Zealand.
12. "The theory of the TAXIR accessioner". Mathematical Biosciences 5: 327–340. doi:10.1016/0025-5564(69)90050-9.
13. "A DBMS for large statistical databases". acm.org.
14. "A short history about us". monetdb.org.
15. "C-Store". mit.edu.
16. Zukowski, Marcin; Boncz, Peter (May 20, 2012). "From X100 to Vectorwise: opportunities, challenges and things most researchers do not think about". Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data. ACM. pp. 861–862. doi:10.1145/2213836.2213967. ISBN 978-1-4503-1247-9.
17. Inkster, D.; Zukowski, M.; Boncz, P. A. (September 20, 2011). "Integration of VectorWise with Ingres" (PDF). ACM SIGMOD Record 40: 45. doi:10.1145/2070736.2070747.
18. "Druid". druid.io.
19. "Column-Stores vs. Row-Stores: How Different Are They Really?" (PDF).
External links
- Distinguishing Two Major Types Of Column-Stores
- VLDB 2009 Tutorial - overview
- Tour Through Hybrid Column-Row Oriented DBMS
- Weaving Relations for Cache Performance - column-oriented block layout
- The Design and Implementation of Modern Column-Oriented Database Systems