Loading Data into Hive
How to use a PDI job to load a data file into a Hive table.
Note: Those of you familiar with Hive will know that a Hive table can be defined over "external" data. Using the external option, you could define a Hive table that simply points at the HDFS directory containing the parsed file. For this how-to, we chose not to use the external option so that you can see how easily files can be added to non-external Hive tables.
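For illustration only, an external definition would look roughly like the following; the table name weblogs_ext is hypothetical, the columns match the table created later in this guide, and the location clause points at the HDFS directory that the parsed file is uploaded to below.
-- hypothetical external table; the data stays in the existing HDFS directory
create external table weblogs_ext (
    client_ip string, full_request_date string, day string, month string,
    month_num int, year string, hour string, minute string, second string,
    timezone string, http_verb string, uri string, http_status_code string,
    bytes_returned string, referrer string, user_agent string)
row format delimited fields terminated by '\t'
location '/user/pdi/weblogs/parse';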
Prerequisites
In order to follow along with this how-to guide, you will need the following:
- Hadoop
- Pentaho Data Integration
- Hive
Sample Files
The sample data file needed for this guide is:
File Name | Content
weblogs_parse.txt.zip | Tab-delimited, parsed weblog data
NOTE: If you have previously completed the "Using Pentaho MapReduce to Parse Weblog Data" guide, the necessary files will already be in the proper directory.
This file should be placed in the /user/pdi/weblogs/parse directory of HDFS using the following commands.
hadoop fs -mkdir /user/pdi/weblogs
hadoop fs -mkdir /user/pdi/weblogs/parse
hadoop fs -put weblogs_parse.txt /user/pdi/weblogs/parse/part-00000
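If you are starting from the downloaded weblogs_parse.txt.zip, the file needs to be extracted before it is put into HDFS, and you can list the directory afterwards to confirm the upload. A minimal sketch, assuming the unzip utility is available on your machine:
# extract weblogs_parse.txt from the downloaded archive
unzip weblogs_parse.txt.zip
# after running the hadoop fs commands above, confirm the file is in place
hadoop fs -ls /user/pdi/weblogs/parse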
Step-By-Step Instructions
Setup
Start Hadoop if it is not already running.
Start Hive Server if it is not already running.
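The exact startup commands depend on your Hadoop and Hive versions and how they were installed; as a rough sketch from the respective bin directories, something like the following is typical:
# start the HDFS and MapReduce daemons (script names vary by Hadoop version)
start-all.sh
# start the Hive server process so clients can connect
hive --service hiveserver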
Create a Hive Table
- Open the Hive Shell: Open the Hive shell so you can manually create a Hive table by entering 'hive' at the command line.
- Create the Table in Hive: You need a Hive table to load the data into, so enter the following in the Hive shell. An optional way to verify the new table is shown after these steps.
create table weblogs (
    client_ip string,
    full_request_date string,
    day string,
    month string,
    month_num int,
    year string,
    hour string,
    minute string,
    second string,
    timezone string,
    http_verb string,
    uri string,
    http_status_code string,
    bytes_returned string,
    referrer string,
    user_agent string)
row format delimited
fields terminated by '\t';
- Close the Hive Shell: You are done with the Hive Shell for now, so close it by entering 'quit;' in the Hive Shell.
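If you want to confirm the table was registered with the expected columns, you can run the following in the Hive shell (before quitting, or any time later):
-- list the tables in the default database and show the weblogs column definitions
show tables;
describe weblogs;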
Create a Job to Load Hive
In this task you will be creating a job to load parsed and delimited weblog data into a Hive table. Once the data is loaded into the table, you will be able to run HiveQL statements to query this data.
Speed Tip: You can download the Kettle job load_hive.kjb already completed.
- Start PDI on your desktop. Once it is running, choose 'File' -> 'New' -> 'Job' from the menu system or click on the 'New file' icon on the toolbar and choose the 'Job' option.
- Add a Start Job Entry: You need to tell PDI where to start the job, so expand the 'General' section of the Design palette and drag a 'Start' job entry onto the job canvas. Your canvas should look like this:
- Add a Copy Files Job Entry: You will need to copy the parsed file into the Hive table, so expand the 'Big Data' section of the Design palette and drag a 'Hadoop Copy Files' job entry onto the job canvas. Your canvas should look like this:
- Connect the Start and Copy Files job entries: Hover the mouse over the 'Start' job entry and a tooltip will appear. Click on the output connector (the green arrow pointing to the right) and drag a connector arrow to the 'Hadoop Copy Files' node. Your canvas should look like this:
- Edit the Copy Files Job Entry: Double-click on the 'Hadoop Copy Files' job entry to edit its properties. Enter this information:
- File/Folder source: hdfs://<NAMENODE>:<PORT>/user/pdi/weblogs/parse
- File/Folder destination: hdfs://<NAMENODE>:<PORT>/user/hive/warehouse/weblogs
- Wildcard (RegExp): Enter 'part-.*'
- Click the 'Add' button to add the files to the list of files to copy.
When you are done, your window should look like this (your folder path may be different):
Click 'OK' to close the window.
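Under the covers this step simply copies the matching files between HDFS directories; a rough command-line equivalent, useful for sanity-checking the paths you entered (and assuming the default Hive warehouse location shown above), is:
# copy the parsed part files into the weblogs table's warehouse directory
hadoop fs -cp /user/pdi/weblogs/parse/part-* /user/hive/warehouse/weblogs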
Notice that you could also load a local file into Hive using this step. The file does not have to already be in Hadoop.
- Save the Job: Choose 'File' -> 'Save as...' from the menu system. Save the job as 'load_hive.kjb' into a folder of your choice.
- Run the Job: Choose 'Action' -> 'Run' from the menu system or click on the green run button on the job toolbar. An 'Execute a job' window will open. Click on the 'Launch' button. An 'Execution Results' panel will open at the bottom of the PDI window and it will show you the progress of the job as it runs. After a few seconds the job should finish successfully.
If any errors occurred, the job entry that failed will be highlighted in red, and you can use the 'Logging' tab to view error messages.
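If you would rather run the saved job from the command line, for example on a schedule, PDI ships with the Kitchen job runner; a minimal sketch (option syntax may vary slightly between PDI versions):
# run the job headlessly from the PDI installation directory
./kitchen.sh -file=load_hive.kjb -level=Basic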
Check Hive
- Open the Hive Shell: Open the Hive shell so you can query the newly loaded table by entering 'hive' at the command line.
- Query Hive for Data: Verify that the data has been loaded into Hive by querying the weblogs table.
select * from weblogs limit 10;
- Close the Hive Shell: You are done with the Hive Shell for now, so close it by entering 'quit;' in the Hive Shell.
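Beyond sampling a few rows, you can also confirm that the whole file loaded by counting the records in the table; for example, still in the Hive shell:
-- returns the total number of rows loaded into the weblogs table
select count(*) from weblogs;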
Summary
During this guide you learned how to load data into a Hive table using a PDI job. PDI jobs can be used to put files into Hive from many different sources.
Other guides in this series cover how to transform data in Hive, get data out of Hive, and report on data within Hive.