This article explains how to perform the initial data load from Oracle to HDFS. The walkthrough is kept simple and step by step; follow along below.
### Oracle GoldenGate for Big Data
### This section covers: how to initialize data from Oracle to HDFS
Unpack the OGG software:
# chown htjs:htjs 123010_ggs_Adapters_Linux_x64.zip
# mv 123010_ggs_Adapters_Linux_x64.zip /home/htjs/
# su - htjs
$ unzip 123010_ggs_Adapters_Linux_x64.zip -d /ogg/oggbd
$ cd /ogg/oggbd
$ tar -xf ggs_Adapters_Linux_x64.tar
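One environment note before launching GGSCI: OGG for Big Data runs its handlers inside a JVM, so libjvm.so must be resolvable at runtime. A minimal sketch, assuming a JDK 8 under /usr/java/jdk1.8.0 (both paths are placeholders for your own installation):
$ export JAVA_HOME=/usr/java/jdk1.8.0
# directory that contains libjvm.so on Linux x64
$ export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server:$LD_LIBRARY_PATH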
$ ./ggsci
Start the manager (mgr):
GGSCI (node1) 1> create subdirs
GGSCI (node1) 2> edit params mgr
(in the editor, add the single line below, then save and quit)
PORT 7839
GGSCI (node1) 3> start manager
GGSCI (node1) 4> exit
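It is worth confirming the manager actually came up before moving on; a "Manager is DOWN!" reply from the check below usually points at a port conflict on 7839 (relaunch ggsci if you have already exited):
$ ./ggsci
GGSCI (node1) 1> info mgr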
The HDFS configuration file:
$ cp /ogg/oggbd/AdapterExamples/big-data/hdfs/* /ogg/oggbd/dirprm/
$ vi /ogg/oggbd/dirprm/hdfs.props
...
# this HDFS directory was created earlier
gg.handler.hdfs.rootFilePath=/ogg1
...
# this Hadoop classpath must be adjusted to match your own Hadoop installation
gg.classpath=/usr/hadoop/share/hadoop/common/*:/usr/hadoop/share/hadoop/common/lib/*:/usr/hadoop/share/hadoop/hdfs/*:/usr/hadoop/share/hadoop/hdfs/lib/*:/usr/hadoop/etc/hadoop/:
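For context, the copied sample hdfs.props also carries the handler wiring itself. The exact property set varies by OGG for Big Data release, so treat the excerpt below as a sketch of the relevant lines rather than the verbatim file:
gg.handlerlist=hdfs
gg.handler.hdfs.type=hdfs
# delimited text output; the .txt suffix matches the file names seen later
gg.handler.hdfs.format=delimitedtext
gg.handler.hdfs.fileSuffix=.txt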
The irhdfs.prm parameter file (SPECIALRUN makes this a one-off batch replicat that is run straight from the command line, with no GGSCI group to add):
$ cat irhdfs.prm
--passive REPLICAT for initial load irhdfs
-- Trail file for this example is located in "dirdat/initld"
-- Command to run REPLICAT:
-- ./replicat paramfile dirprm/irhdfs.prm reportfile dirrpt/ini_rhdfs.rpt
SPECIALRUN
END RUNTIME
EXTFILE /ogg/oggbd/dirdat/initld
--DDLERROR default discard
setenv HADOOP_COMMON_LIB_NATIVE_DIR=/usr/hadoop/lib/native
DDL include all
TARGETDB LIBFILE libggjava.so SET property=dirprm/hdfs.props
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 10000
MAP ggtest.tt, TARGET bdtest.tt;
At this point, go to the source server (10.10.13.53), start OGG there, and push the initial-load data for table ggtest.tt over to the local /ogg/oggbd/dirdat directory.
Check the source-side parameter file ini_ext.prm (SOURCEISTABLE makes the extract read the table directly rather than the redo logs):
[crmdb2:oracle] cat ini_ext.prm
SOURCEISTABLE
userid ogg@spark,password AACAAAAAAAAAAAHAYBGFCDZCJHWCEIHH, BLOWFISH, ENCRYPTKEY DEFAULT
--RMTHOSTOPTIONS
RMTHOST slave03, MGRPORT 7839    -- slave03 maps to 10.3.105.41 in the hosts file
RMTFILE /ogg/oggbd/dirdat/initld, MEGABYTES 2, PURGE
--DDL include objname ggtest.*
TABLE ggtest.tt;
Alternatively, the source can first generate the file into its own dirdat directory and then copy it to the target:
[crmdb2:oracle] vi ini_ext.prm
"ini_ext.prm" 8 lines, 251 characters
SOURCEISTABLE
userid ogg@spark,password AACAAAAAAAAAAAHAYBGFCDZCJHWCEIHH, BLOWFISH, ENCRYPTKEY DEFAULT
--RMTHOSTOPTIONS
RMTHOST crmdb2, MGRPORT 7829
RMTFILE /ogg/oggora/dirdat/initld, MEGABYTES 2, PURGE
--DDL include objname ggtest.*
TABLE ggtest.tt;
./extract paramfile dirprm/ini_ext.prm reportfile dirrpt/ini_ext.rpt
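If the run aborts or the row count looks off, the report file named on the command line is the first place to look for the run summary and any error text:
[crmdb2:oracle] tail -30 dirrpt/ini_ext.rpt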
On the target, put the file into the dirdat directory:
[root@node1 ~]# chown htjs:htjs initld
[root@node1 ~]# mv initld /ogg/oggbd/dirdat/
$ ./replicat paramfile dirprm/irhdfs.prm reportfile dirrpt/initld.rpt
Check the load results:
[htjs@node1 oggbd]$ hdfs dfs -ls /ogg1
Found 2 items
-rw-r--r-- 3 htjs supergroup 0 2017-07-21 10:43 /ogg1/README.txt
drwxr-xr-x - htjs supergroup 0 2017-07-21 13:16 /ogg1/bdtest.tt (the newly created directory)
[htjs@node1 oggbd]$ hdfs dfs -ls /ogg1/bdtest.tt (the files under that directory)
Found 1 items
-rw-r--r-- 3 htjs supergroup 13826 2017-07-21 13:16 /ogg1/bdtest.tt/bdtest.tt_2017-07-21_13-15-58.773.txt
[htjs@node1 oggbd]$ hdfs dfs -tail /ogg1/bdtest.tt/bdtest.tt_2017-07-21_13-15-58.773.txt
Create an external table to query the data:
hive> create database bdtest;
hive> CREATE EXTERNAL TABLE BDTEST.tt
(
owner string,
table_name string,
tablespace_name string,
cluster_name string,
iot_name string,
status string,
pct_free string,
pct_used string,
ini_trans string,
max_trans string,
initial_extent string,
next_extent string,
min_extents string,
max_extents string,
pct_increase string,
freelists string,
freelist_groups string,
logging string,
backed_up string,
num_rows string,
blocks string,
empty_blocks string,
avg_space string,
chain_cnt string,
avg_row_len string,
avg_space_freelist_blocks string,
num_freelist_blocks string,
degree string,
instances string,
cache string,
table_lock string,
sample_size string,
last_analyzed string,
partitioned VARCHAR(3),
iot_type VARCHAR(12),
temporary VARCHAR(1),
secondary VARCHAR(1),
nested VARCHAR(3),
buffer_pool VARCHAR(7),
flash_cache VARCHAR(7),
cell_flash_cache VARCHAR(7),
row_movement VARCHAR(8),
global_stats VARCHAR(3),
user_stats VARCHAR(3),
duration VARCHAR(15),
skip_corrupt VARCHAR(8),
monitoring VARCHAR(3),
cluster_owner VARCHAR(30),
dependencies VARCHAR(8),
compression VARCHAR(8),
compress_for VARCHAR(12),
dropped VARCHAR(3),
read_only VARCHAR(3),
segment_created VARCHAR(3),
result_cache VARCHAR(7)
)
stored as textfile location '/ogg1/bdtest.tt';
hive> select * from bdtest.tt;
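A detail worth noting: the DDL above has no ROW FORMAT clause, so Hive falls back to its default field delimiter \u0001 (Ctrl-A), and the OGG delimitedtext formatter likewise defaults to \u0001, which appears to be why the two line up here without extra configuration. If you override gg.handler.hdfs.format.fieldDelimiter in hdfs.props, mirror the same character in the table definition; a hypothetical sketch with a semicolon delimiter:
hive> CREATE EXTERNAL TABLE BDTEST.tt
    > ( ... same column list as above ... )
    > ROW FORMAT DELIMITED FIELDS TERMINATED BY ';'
    > stored as textfile location '/ogg1/bdtest.tt';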
The initial load is complete; the replication processes can now be started to keep the data in sync.
Copy the /ogg/oggbd/dirdat/in000XXX trail files from the original MongoDB environment on 10.3.254.53 over to 10.3.105.41:/ogg/oggbd/dirdat.
Those trails were previously applied to MongoDB; this time they are applied to HDFS.
Add a replicat process on the target that reads in000XXX and loads it into HDFS:
$ cat repfils.prm
REPLICAT REPFILS
-- Trail file for this example is located in "AdapterExamples/trail" directory
-- Command to add REPLICAT
-- add replicat repfils, exttrail AdapterExamples/trail/tr
TARGETDB LIBFILE libggjava.so SET property=dirprm/hdfs.props
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 10000
MAP ggtest.*, TARGET bdtest.*;
add replicat repfils, exttrail /ogg/oggbd/dirdat/in
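With the group added, start it and confirm it is applying the copied trails; a typical GGSCI sequence (prompt numbers illustrative; info should report RUNNING with an advancing checkpoint, and stats shows per-table operation counts):
GGSCI (node1) 1> start repfils
GGSCI (node1) 2> info repfils
GGSCI (node1) 3> stats repfils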
Create the matching Hive table for the replicated data:
hive> create table bdtest.yth_client_userinfo(
id VARCHAR(32),
qyid VARCHAR(32),
khsh VARCHAR(32),
dlzh VARCHAR(32),
cps VARCHAR(1024),
khshs VARCHAR(1024),
clientver VARCHAR(32),
createtime string,
clientip VARCHAR(100),
browser VARCHAR(32),
os VARCHAR(32),
memory VARCHAR(32),
clientid VARCHAR(100)
)
stored as textfile location '/ogg1/bdtest.yth_client_userinfo';
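Once repfils has processed some trail files, a quick sanity check from Hive; the count should track the number of rows replicated so far:
hive> select count(*) from bdtest.yth_client_userinfo;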
The OGG setup is complete. The next section covers syncing incremental data from Oracle to HDFS.