Partitioning in Hive

In this post, we will discuss one of the most important concepts in Hive: partitioning of Hive tables.

Partitioning in Hive

Table partitioning means dividing table data into parts based on the values of particular columns, such as date or country: the input records are segregated into different files/directories according to their date or country values.

Partitioning can be done on more than one column, which imposes a multi-dimensional structure on the directory storage. For example, in addition to partitioning log records by a date column, we can also subdivide each single day's records into country-wise separate files by including a country column in the partitioning. We will see more about this in the examples.

Partitions are defined at table-creation time using the PARTITIONED BY clause, which takes a list of column definitions for partitioning.
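The general form looks like the following sketch (table, column, and type names are placeholders, not part of the use case below):

```sql
-- General syntax (illustrative): partition columns are declared
-- separately from the regular columns, and either kind may carry
-- a COMMENT.
CREATE TABLE table_name (
  col1 data_type COMMENT 'a regular column',
  col2 data_type
)
PARTITIONED BY (
  part_col1 data_type COMMENT 'a partition column',
  part_col2 data_type
);
```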


As shown in the syntax, we can also add comments to partition columns.

  • Partitioning is used to distribute the execution load horizontally.
  • Because the data is stored as slices/parts, query response time is faster: a query can process just the relevant small part of the data instead of searching the entire data set.
  • For example, in a large user table partitioned by country, selecting users of country ‘IN’ will scan only the directory ‘country=IN’ instead of all the directories.
  • Having too many partitions in a table creates a large number of files and directories in HDFS, which is an overhead for the NameNode, since it must keep all metadata for the file system in memory.
  • Partitions may optimize some queries based on WHERE clauses, but may be less helpful for other important queries, such as those based on grouping clauses.
  • In MapReduce processing, a huge number of partitions leads to a huge number of tasks (each running in a separate JVM) in each MapReduce job, which creates a lot of overhead in JVM start-up and tear-down. A separate task is used for each small file, and in the worst scenarios the overhead of JVM start-up and tear-down can exceed the actual processing time.
Example Scenarios
  • Partitioning is used in log-file analysis to segregate records based on a timestamp or date value, so that day-wise results can be seen quickly.
  • Another common use is partitioning customer/user details by country/state or department, for fast retrieval of the subset of data pertaining to some category.
  • Sales records partitioned by product type, country, year, and month are another commonly used scenario.

In this post we will work through examples of the second use case.

Sample Use Case

Let's explore the features of partitioning with the help of a sample use case: loading user records into Hive and performing some queries.

Sample User Records file for testing in this post –> UserRecords

Observation of Input Data

Input data has below fields or columns.

  • First Name
  • Last Name
  • Address
  • Country
  • City
  • State
  • Postal Code
  • Phone Number
  • Alternative Phone Number
  • Email Id
  • Website URL

Conveniently, each field is separated by ‘,’ and no field contains a ‘,’ within its values. Our requirement is to create a Hive table partitioned_user, partitioned by country and state, and load these input records into the table.

Creation of Partition Table

Managed Partitioned Table

Below is the HiveQL to create a managed partitioned_user table per the above requirements.
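A sketch of such a definition follows; the exact column names and the choice of STRING types are assumptions based on the field list above, and a comma delimiter matches the input format described earlier:

```sql
-- Managed partitioned table: country and state appear only in
-- PARTITIONED BY, never in the regular column list.
CREATE TABLE partitioned_user (
  firstname STRING,
  lastname  STRING,
  address   STRING,
  city      STRING,
  post      STRING,
  phone1    STRING,
  phone2    STRING,
  email     STRING,
  web       STRING
)
PARTITIONED BY (country STRING, state STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;
```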

Note that we didn’t include the country and state columns in the table definition, but did include them in the partition definition. If we include them in both, we will encounter error scenario 1. We can verify the partition columns of the table with the help of the command below.
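For instance, DESCRIBE lists the regular columns followed by a separate "# Partition Information" section showing the partition columns:

```sql
-- Shows regular columns, then the partition columns in a
-- "# Partition Information" section.
DESCRIBE partitioned_user;

-- More detail (storage location, table type, etc.):
DESCRIBE FORMATTED partitioned_user;
```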

The partition columns country and state can be used in the WHERE clause of query statements and can be treated like regular column names, even though no such columns actually exist in the input file data.

External Partitioned Tables

We can create external partitioned tables as well, simply by adding the EXTERNAL keyword to the CREATE statement. For external partitioned tables we do not need to mention a LOCATION clause at creation time, because we will specify the location of each partition separately while inserting data into the table.
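A sketch of the external variant, mirroring the managed table above (the table name partitioned_user_ext is an assumption for illustration); note the absence of a table-level LOCATION clause:

```sql
-- External partitioned table: Hive does not own the data, so
-- dropping the table leaves the files in place. Per-partition
-- locations can be supplied later via ALTER TABLE ... ADD PARTITION.
CREATE EXTERNAL TABLE partitioned_user_ext (
  firstname STRING,
  lastname  STRING,
  address   STRING,
  city      STRING,
  post      STRING,
  phone1    STRING,
  phone2    STRING,
  email     STRING,
  web       STRING
)
PARTITIONED BY (country STRING, state STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
```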

Inserting Data Into Partitioned Tables

Data insertion into partitioned tables can be done in two modes.

  • Static Partitioning
  • Dynamic Partitioning

Static Partitioning in Hive

In this mode, the input data should contain only the columns listed in the table definition (for example firstname, lastname, address, city, post, phone1, phone2, email, and web), but not the columns defined in the PARTITIONED BY clause (country and state).

If our input column layout matches the expected layout and we already have a separate input file for each combination of partition-key values (for example, one file for country=US and state=CA), then these files can be easily loaded into the partitioned table with the syntax below.

Loading Data into Managed Partitioned Table From Local FS
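The general form is sketched below; the input path is a placeholder, and the PARTITION clause names the target partition:

```sql
-- LOCAL reads from the local file system (omit LOCAL for an HDFS
-- path); OVERWRITE replaces any existing data in that partition.
LOAD DATA LOCAL INPATH '/path/to/input_file'
OVERWRITE INTO TABLE partitioned_user
PARTITION (country = 'US', state = 'CA');
```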


For example, let's take the three records below, which do not contain the partition columns, and save them into a staticinput.txt file in the home directory. Assume that all of these records belong to country=US and state=CA.
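Illustrative records (all values made up for this post) matching the nine non-partition columns in order, one record per line:

```
John,Smith,123 Main St,San Jose,95101,408-555-0101,408-555-0102,john.smith@example.com,http://example.com/john
Jane,Doe,456 Oak Ave,Los Angeles,90001,213-555-0103,213-555-0104,jane.doe@example.com,http://example.com/jane
Alan,Lee,789 Pine Rd,San Diego,92101,619-555-0105,619-555-0106,alan.lee@example.com,http://example.com/alan
```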

Now this file can be loaded into the partitioned table with the syntax below, specifying the country and state values at load time.
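For example (Hive's `${env:HOME}` substitution resolves the home directory; a literal path works equally well):

```sql
-- Load the local file into the (country=US, state=CA) partition.
LOAD DATA LOCAL INPATH '${env:HOME}/staticinput.txt'
INTO TABLE partitioned_user
PARTITION (country = 'US', state = 'CA');
```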

This will create a separate directory per partition (here country=US/state=CA) under the default warehouse directory in HDFS.

Similarly, we have to add the other partitions, which will create the corresponding directories in HDFS. Alternatively, we can load the entire directory into the Hive table with a single command and add a partition for each file with an ALTER TABLE command.
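A sketch of registering one such partition explicitly, assuming the per-partition file already sits in a hypothetical HDFS directory:

```sql
-- Point a new partition at an existing HDFS directory; the path is
-- an assumed placeholder. LOCATION is typical for external tables,
-- where Hive should not take ownership of the files.
ALTER TABLE partitioned_user_ext
ADD PARTITION (country = 'US', state = 'NY')
LOCATION '/data/users/US/NY';
```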