
How to Ingest Data into HDFS in JSON Format Using Apache Sqoop?


by NS Saravanan

In our current project we use the Lambda architecture, so data from the source systems is extracted in two ways:

  1. Real-time streaming, or the speed layer
  2. Batch processing, or the batch layer

The speed layer is implemented as Attunity > Kafka > Spark Streaming. The output of the Spark stream is stored in the data lake (HDFS) in JSON format, and the customer would like to keep the speed layer and batch layer outputs in the same format. But Sqoop import does not produce output in JSON format.


Two-step approach:

As mentioned, Sqoop import does not support direct JSON output to HDFS, but Sqoop can import data from any database in Avro format, and the Avro data can then be converted to JSON. Avro is a compressed binary file format, so both the import and any subsequent processing of these files are fast.

Step 1: Sqoop import output in Avro format

To import data in Avro format we need to specify --as-avrodatafile and -Dmapreduce.job.user.classpath.first=true.

Below is a complete example of a Sqoop import with output in Avro format:

sqoop import -Dmapreduce.job.user.classpath.first=true \
  --connect "jdbc:oracle:thin:@//hostIP:1521/ServiceName" \
  --username DB_User_name --password DB_password \
  --table SCHEMA.TABLE_NAME -m 1 \
  --as-avrodatafile --target-dir /hdfs/path/
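
Once the import finishes, it can help to sanity-check the Avro output before setting up the conversion job. A minimal check, assuming avro-tools is available on the edge node and that the part-file name below matches what Sqoop produced (adjust paths, file names, and the avro-tools version to your environment):

hdfs dfs -ls /hdfs/path/
# pull one part file locally and dump a few records as JSON
hdfs dfs -get /hdfs/path/part-m-00000.avro .
java -jar avro-tools-1.8.0.jar tojson part-m-00000.avro | head -n 5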

Step 2: Convert Avro to JSON without any coding

You can find the hadoop-streaming*.jar file under /usr/hdp/version_number/ (for Hortonworks) or in the Hadoop lib folder. Using Hadoop Streaming, we can convert the Avro files to JSON. To run this job in MapReduce mode, we also need to pass two dependent jars (avro-mapred*.jar and avro*.jar); replace * with your jar version number.
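
If you are unsure of the exact jar locations or version numbers on your cluster, a quick find usually turns them up; the /usr/hdp path below assumes an HDP-style layout, so adjust it for your distribution:

find /usr/hdp/ -name "hadoop-streaming*.jar" 2>/dev/null
find /usr/hdp/ -name "avro*.jar" 2>/dev/null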

Below is the complete command to convert the Avro files to JSON:

hadoop jar /usr/hdp/2.6.0.3-8/hadoop-mapreduce/hadoop-streaming-2.7.3.2.6.0.3-8.jar \
  -D mapred.job.name="avro-streaming" -D mapred.reduce.tasks=0 \
  -files /usr/hdp/2.6.0.3-8/sqoop/lib/avro-1.8.0.jar,/usr/hdp/2.6.0.3-8/sqoop/lib/avro-mapred-1.8.0-hadoop2.jar \
  -libjars /usr/hdp/2.6.0.3-8/sqoop/lib/avro-1.8.0.jar,/usr/hdp/2.6.0.3-8/sqoop/lib/avro-mapred-1.8.0-hadoop2.jar \
  -input /HDFS/Sqoop/Output/ -output /HDFS/path/for/output \
  -mapper org.apache.hadoop.mapred.lib.IdentityMapper \
  -inputformat org.apache.avro.mapred.AvroAsTextInputFormat
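
When the streaming job completes, the output directory should contain plain-text part files with one JSON record per line. A quick spot-check, using the same example output path as above:

hdfs dfs -cat /HDFS/path/for/output/part-* | head -n 5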

Author: NS Saravanan, Big Data Engineer | Architect | Spark | Scala | NLP | New York USA

Please subscribe to the dataottam blog to keep yourself up-to-the-minute on the data life cycle, from inception to intelligence.

