Integrating data from multiple sources is essential in the age of big data, but it can be a challenging and time-consuming task. This handy cookbook provides dozens of ready-to-use recipes for using Apache Sqoop, the command-line interface application that optimizes data transfers between relational databases and Hadoop.
Sqoop is both powerful and bewildering, but with this cookbook’s problem-solution-discussion format, you’ll quickly learn how to deploy and then apply Sqoop in your environment. The authors provide MySQL, Oracle, and PostgreSQL database examples on GitHub that you can easily adapt for SQL Server, Netezza, Teradata, or other relational systems.
Transfer data from a single database table into your Hadoop ecosystem (see the command-line sketch after this list)
Keep table data and Hadoop in sync by importing data incrementally
Import data from more than one database table
Customize transferred data by calling various database functions
Export generated, processed, or backed-up data from Hadoop to your database
Run Sqoop within Oozie, Hadoop’s specialized workflow scheduler
Load data into Hadoop’s data warehouse (Hive) or database (HBase)
Handle installation, connection, and syntax issues common to specific database vendors
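To give a sense of what these recipes look like in practice, here is a minimal sketch of three of them using the Sqoop 1.x command line: a plain table import, an incremental import, and an export back to the database. The JDBC connection string, credentials file, table names, and HDFS paths below are placeholders, not examples from the book; adapt them to your own environment (the book's own GitHub examples target MySQL, Oracle, and PostgreSQL).

```bash
# Import a single table from a relational database into HDFS
# (jdbc URL, user, password file, table, and target path are placeholders)
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --username sqoop_user \
  --password-file /user/sqoop/.password \
  --table employees \
  --target-dir /data/employees

# Incremental import: append only rows whose id is greater than the
# last value transferred in the previous run
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --username sqoop_user \
  --password-file /user/sqoop/.password \
  --table employees \
  --incremental append \
  --check-column id \
  --last-value 1000

# Export generated or processed data from HDFS back into a database table
sqoop export \
  --connect jdbc:mysql://db.example.com/corp \
  --username sqoop_user \
  --password-file /user/sqoop/.password \
  --table daily_summary \
  --export-dir /results/daily_summary
```

Loading into Hive rather than plain HDFS files is typically a matter of adding --hive-import to the import command; the book's recipes walk through these variations in its problem-solution-discussion format.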
Reader reviews:
A handy reference book.
It would be even better if it also covered programming against the API.
Compact and practical; concise and easy to read.