/opt/module/sqoop/bin/sqoop import \
--connect <jdbc-connection-url> \
--username <username> \
--password <password> \
--target-dir <hdfs-target-dir> \
--delete-target-dir \
--num-mappers <n> \
--fields-terminated-by <field-delimiter> \
--query "$2"' and $CONDITIONS;'
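In a --query import, the $CONDITIONS token is mandatory: Sqoop substitutes each map task's split predicate for it, so the mappers read disjoint row ranges (the single quotes above keep the shell from expanding it). For reference, a filled-in sketch of the template (the connection details, paths, and query are illustrative assumptions; $2 stands for the SELECT statement passed to the wrapper script):

/opt/module/sqoop/bin/sqoop import \
--connect jdbc:mysql://192.168.137.10:3306/user_behavior \
--username root \
--password 123456 \
--target-dir /origin_data/user_behavior/db/user_info/2024-01-01 \
--delete-target-dir \
--num-mappers 4 \
--fields-terminated-by '\t' \
--query "select * from user_info where dt='2024-01-01' and \$CONDITIONS"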
At the storage layer, Hive represents NULL as the string "\N", while MySQL stores a true NULL. To keep the two sides consistent, use the --input-null-string and --input-null-non-string options when exporting data (Hive to MySQL), and --null-string and --null-non-string when importing data (MySQL to Hive).
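A minimal sketch of both directions, following the usage shown in the Sqoop user guide (connection details and table names are illustrative assumptions; the doubled backslash is needed because Sqoop itself processes one level of escaping):

# Export (Hive -> MySQL): turn Hive's \N back into a SQL NULL
sqoop export \
--connect jdbc:mysql://hostname:3306/db \
--username user \
--password pass \
--table target_table \
--export-dir /user/hive/warehouse/db.db/source_table \
--input-null-string '\\N' \
--input-null-non-string '\\N'

# Import (MySQL -> Hive): write SQL NULL as \N so Hive reads it as NULL
sqoop import \
--connect jdbc:mysql://hostname:3306/db \
--username user \
--password pass \
--table source_table \
--target-dir /origin_data/db/source_table \
--null-string '\\N' \
--null-non-string '\\N'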
Consider a Sqoop export to MySQL that runs with 4 map tasks, 2 of which fail. At that moment MySQL already holds the rows written by the 2 successful tasks, and the business side may view a report built on this partial data. Once the engineer debugs the failure and re-runs the job so that all of the data lands in MySQL correctly, the business will see numbers that differ from what it saw before. Such inconsistency is unacceptable in production.
Since Sqoop breaks down the export process into multiple transactions, it is possible that a failed export job may result in partial data being committed to the database. This can further lead to subsequent jobs failing due to insert collisions in some cases, or lead to duplicated data in others. You can overcome this problem by specifying a staging table via the --staging-table option, which acts as an auxiliary table used to stage exported data. The staged data is finally moved to the destination table in a single transaction. (From the official Sqoop documentation.)
The --staging-table approach:
sqoop export \
--connect jdbc:mysql://192.168.137.10:3306/user_behavior \
--username root \
--password 123456 \
--table app_cource_study_report \
--columns watch_video_cnt,complete_video_cnt,dt \
--fields-terminated-by "\t" \
--export-dir "/user/hive/warehouse/tmp.db/app_cource_study_analysis_${day}" \
--staging-table app_cource_study_report_tmp \
--clear-staging-table \
--input-null-string '\N'
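Note that the staging table must already exist in MySQL and be structurally identical to the destination table; --clear-staging-table merely empties it before the export begins. A one-line sketch of creating it (reusing the credentials from the export above):

mysql -uroot -p123456 user_behavior \
-e "CREATE TABLE IF NOT EXISTS app_cource_study_report_tmp LIKE app_cource_study_report;"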
A Sqoop job has only a map phase and no reduce phase; by default it runs 4 map tasks.
split-by: splits the table into work units by the given column (typically the auto-increment primary key);
num-mappers: launches N map tasks to import data in parallel, 4 by default (see the sketch below).
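A sketch of the two options working together (table, column, and connection values are assumptions). For a numeric split column, Sqoop runs SELECT MIN(id), MAX(id) and divides that range into num-mappers roughly equal slices, one per map task:

sqoop import \
--connect jdbc:mysql://hostname:3306/db \
--username user \
--password pass \
--table user_info \
--split-by id \
--num-mappers 8 \
--target-dir /origin_data/db/user_info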
When exporting ADS-layer data to MySQL with Sqoop, tables stored as ORC or Parquet cannot be exported directly; the data must first be converted to text format. Two common workarounds:
(1) Create a temporary text-format table, insert the Parquet table's data into it, and export the temporary table to the target table for visualization (see the sketch below).
(2) Avoid the problem altogether by not using Parquet for ADS-layer tables in the first place.
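A minimal sketch of approach (1), assuming a Parquet source table ads.app_cource_study_analysis with the columns used in the export above (all names are hypothetical). The temporary table is stored as plain text, so Sqoop can read its files directly from the export-dir:

hive -e "
CREATE TABLE IF NOT EXISTS tmp.app_cource_study_analysis_${day} (
  watch_video_cnt BIGINT,
  complete_video_cnt BIGINT,
  dt STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
INSERT OVERWRITE TABLE tmp.app_cource_study_analysis_${day}
SELECT watch_video_cnt, complete_video_cnt, dt
FROM ads.app_cource_study_analysis
WHERE dt='${day}';
"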