Cloudera usage notes

sudo service cloudera-scm-agent start
sudo service cloudera-scm-server restart
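
If a service does not come up cleanly, its status can be checked first (standard init-script subcommand):

sudo service cloudera-scm-server status
sudo service cloudera-scm-agent status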

Cloudera QuickStart VM tutorial

Notes: 4 GB RAM (8 GB recommended), 2 CPUs

Launch Cloudera Manager in the VM (--force bypasses the minimum-resource check, which matters with only 4 GB of RAM):

sudo /home/cloudera/cloudera-manager --force

Cloudera Manager web UI:

http://quickstart.cloudera:7180

Import

Import the local MySQL retail_db tables into HDFS as Snappy-compressed Avro files:

sqoop import-all-tables \
  -m 1 \
  --connect jdbc:mysql://quickstart.cloudera:3306/retail_db \
  --username=retail_dba \
  --password=cloudera \
  --compression-codec=snappy \
  --as-avrodatafile \
  --warehouse-dir=/user/hive/warehouse

Check the result:

hadoop fs -ls /user/hive/warehouse
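
Each imported table should appear as its own directory of Avro part files, e.g. (illustrative check):

hadoop fs -ls /user/hive/warehouse/categories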

Copy the Avro schema files that Sqoop generated (the ~/*.avsc files) into HDFS so Hive can reference them:

sudo -u hdfs hadoop fs -mkdir /user/examples
sudo -u hdfs hadoop fs -chmod +rw /user/examples
hadoop fs -copyFromLocal ~/*.avsc /user/examples/
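
Optional check that the schema files landed:

hadoop fs -ls /user/examples/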

Hive

Open the Hive shell and define external tables over the imported data:

hive
hive> 
CREATE EXTERNAL TABLE categories
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION 'hdfs:///user/hive/warehouse/categories'
TBLPROPERTIES ('avro.schema.url'='hdfs://quickstart.cloudera/user/examples/sqoop_import_categories.avsc');

CREATE EXTERNAL TABLE customers
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION 'hdfs:///user/hive/warehouse/customers'
TBLPROPERTIES ('avro.schema.url'='hdfs://quickstart.cloudera/user/examples/sqoop_import_customers.avsc');

CREATE EXTERNAL TABLE departments
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION 'hdfs:///user/hive/warehouse/departments'
TBLPROPERTIES ('avro.schema.url'='hdfs://quickstart.cloudera/user/examples/sqoop_import_departments.avsc');

CREATE EXTERNAL TABLE orders
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION 'hdfs:///user/hive/warehouse/orders'
TBLPROPERTIES ('avro.schema.url'='hdfs://quickstart.cloudera/user/examples/sqoop_import_orders.avsc');

CREATE EXTERNAL TABLE order_items
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION 'hdfs:///user/hive/warehouse/order_items'
TBLPROPERTIES ('avro.schema.url'='hdfs://quickstart.cloudera/user/examples/sqoop_import_order_items.avsc');

CREATE EXTERNAL TABLE products
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION 'hdfs:///user/hive/warehouse/products'
TBLPROPERTIES ('avro.schema.url'='hdfs://quickstart.cloudera/user/examples/sqoop_import_products.avsc');

hive> show tables;
hive> exit;

Hive or Impala queries (standard SQL)
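
If the queries are run through Impala rather than Hive, Impala first needs to pick up the tables that were just created in the metastore:

impala-shell
invalidate metadata;
show tables;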

-- top 10 revenue generating products
select p.product_id, p.product_name, r.revenue
from products p inner join
(select oi.order_item_product_id, sum(cast(oi.order_item_subtotal as float)) as revenue
from order_items oi inner join orders o
on oi.order_item_order_id = o.order_id
where o.order_status <> 'CANCELED'
and o.order_status <> 'SUSPECTED_FRAUD'
group by order_item_product_id) r
on p.product_id = r.order_item_product_id
order by r.revenue desc
limit 10;
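
Another query on the same tables, purely as an illustration (it uses no columns beyond those already referenced above): the ten most ordered products by total quantity.

-- top 10 products by total quantity ordered
select p.product_id, p.product_name, s.total_qty
from products p inner join
(select oi.order_item_product_id, sum(oi.order_item_quantity) as total_qty
from order_items oi
group by oi.order_item_product_id) s
on p.product_id = s.order_item_product_id
order by s.total_qty desc
limit 10;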

Load the sample access log data into HDFS:

sudo -u hdfs hadoop fs -mkdir /user/hive/warehouse/original_access_logs
sudo -u hdfs hadoop fs -copyFromLocal /opt/examples/log_files/access.log.2 /user/hive/warehouse/original_access_logs
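
Check that the file is in place:

hadoop fs -ls /user/hive/warehouse/original_access_logs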

Spark

Start spark-shell with the Avro MapReduce input format on the classpath (the tables imported above are stored as Avro files):

spark-shell --jars /usr/lib/avro/avro-mapred.jar \
--conf spark.serializer=org.apache.spark.serializer.KryoSerializer

scala>
// First we're going to import the classes we need and open some of the files
// we imported from our relational database into Hadoop with Sqoop

import org.apache.avro.generic.GenericRecord
import org.apache.avro.mapred.{AvroInputFormat, AvroWrapper}
import org.apache.hadoop.io.NullWritable

val warehouse = "hdfs://quickstart.cloudera/user/hive/warehouse/"

val order_items_path = warehouse + "order_items"
val order_items = sc.hadoopFile[AvroWrapper[GenericRecord], NullWritable, AvroInputFormat[GenericRecord]](order_items_path)

val products_path = warehouse + "products"
val products = sc.hadoopFile[AvroWrapper[GenericRecord], NullWritable, AvroInputFormat[GenericRecord]](products_path)
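
// Optional sanity check (not part of the original tutorial): both RDDs should be non-empty
order_items.count()
products.count()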

// Next, we extract the fields from order_items and products that we care about
// and get a list of every product, its name and quantity, grouped by order

val orders = order_items.map { x => (
    x._1.datum.get("order_item_product_id"),
    (x._1.datum.get("order_item_order_id"), x._1.datum.get("order_item_quantity")))
}.join(
  products.map { x => (
    x._1.datum.get("product_id"),
    (x._1.datum.get("product_name")))
  }
).map(x => (
    scala.Int.unbox(x._2._1._1), // order_id
    (
      scala.Int.unbox(x._2._1._2), // quantity
      x._2._2.toString // product_name
    )
)).groupByKey()

// Finally, we tally how many times each combination of products appears
// together in an order, and print the 10 most common combinations.

// For each order, consider every pair of products it contains:
//   key   = the two product names in a canonical (sorted) order, so (A, B) and (B, A) are counted together
//   value = quantity(A) * quantity(B), the weight given to this co-occurrence
val cooccurrences = orders.map(order =>
  (
    order._1,
    order._2.toList.combinations(2).map(order_pair =>
      (
        if (order_pair(0)._2 < order_pair(1)._2) (order_pair(0)._2, order_pair(1)._2)
        else (order_pair(1)._2, order_pair(0)._2),
        order_pair(0)._1 * order_pair(1)._1
      )
    )
  )
)
val combos = cooccurrences.flatMap(x => x._2).reduceByKey((a, b) => a + b)
val mostCommon = combos.map(x => (x._2, x._1)).sortByKey(false).take(10)

println(mostCommon.deep.mkString("\n"))
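
// mostCommon is an Array of (score, (product_name_1, product_name_2)) tuples, where the score
// is the sum over all orders of quantity(product_1) * quantity(product_2) for that pair.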

Flume, Solr

Flume + Morphline handle the ETL / ingestion of the log data; Solr handles the indexing. (A sketch of the Flume wiring follows the flume-ng command below.)

Creating an empty configuration

solrctl --zk quickstart.cloudera:2181/solr instancedir --generate solr_configs

Uploading the configuration

solrctl --zk quickstart.cloudera:2181/solr instancedir --create live_logs ./solr_configs

Creating your collection

solrctl --zk quickstart.cloudera:2181/solr collection --create live_logs -s 1
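
Optionally confirm the collection exists before starting the agent:

solrctl --zk quickstart.cloudera:2181/solr collection --list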

flume-ng agent --conf /opt/examples/flume/conf --conf-file /opt/examples/flume/conf/flume.conf --name agent1 -Dflume.root.logger=DEBUG,INFO,console
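
The flume.conf referenced above roughly wires a log-tailing source to a MorphlineSolrSink. The sketch below is illustrative only; the source command and morphline file paths are hypothetical placeholders, not the actual file shipped in /opt/examples/flume/conf:

# agent1: tail the access log, run each event through a Morphline, index it into Solr
# (the paths below are placeholders)
agent1.sources  = src1
agent1.channels = ch1
agent1.sinks    = sink1

agent1.sources.src1.type = exec
agent1.sources.src1.command = tail -F /path/to/access.log
agent1.sources.src1.channels = ch1

agent1.channels.ch1.type = memory

agent1.sinks.sink1.type = org.apache.flume.sink.solr.morphline.MorphlineSolrSink
agent1.sinks.sink1.morphlineFile = /path/to/morphline.conf
agent1.sinks.sink1.channel = ch1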

- Tintouli