Kafka Connector Tutorial

Introduction

The Kafka connector for Presto allows access to live topic data from Apache Kafka using Presto. This tutorial shows how to set up topics, and how to create the topic description files that back Presto tables.

Installation

This tutorial assumes familiarity with Presto and a working local Presto installation (see Deploying Presto). It focuses on setting up Apache Kafka and integrating it with Presto.

Step 1: Install Apache Kafka

Download and extract Apache Kafka.

Note

This tutorial was tested with Apache Kafka 0.8.1. It should work with any 0.8.x version of Apache Kafka.

Start ZooKeeper and the Kafka server:

$ bin/zookeeper-server-start.sh config/zookeeper.properties
[2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
...
$ bin/kafka-server-start.sh config/server.properties
[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
...

This starts ZooKeeper on port 2181 and Kafka on port 9092.

Step 2: Load data

Download the tpch-kafka loader from Maven Central:

$ curl -o kafka-tpch https://repo1.maven.org/maven2/de/softwareforge/kafka_tpch_0811/1.0/kafka_tpch_0811-1.0.sh
$ chmod 755 kafka-tpch

Now run the kafka-tpch program to preload a number of topics with tpch data:

$ ./kafka-tpch load --brokers localhost:9092 --prefix tpch. --tpch-type tiny
2014-07-28T17:17:07.594-0700     INFO    main    com.facebook.airlift.log.Logging    Logging to stderr
2014-07-28T17:17:07.623-0700     INFO    main    de.softwareforge.kafka.LoadCommand    Processing tables: [customer, orders, lineitem, part, partsupp, supplier, nation, region]
2014-07-28T17:17:07.981-0700     INFO    pool-1-thread-1    de.softwareforge.kafka.LoadCommand    Loading table 'customer' into topic 'tpch.customer'...
2014-07-28T17:17:07.981-0700     INFO    pool-1-thread-2    de.softwareforge.kafka.LoadCommand    Loading table 'orders' into topic 'tpch.orders'...
2014-07-28T17:17:07.981-0700     INFO    pool-1-thread-3    de.softwareforge.kafka.LoadCommand    Loading table 'lineitem' into topic 'tpch.lineitem'...
2014-07-28T17:17:07.982-0700     INFO    pool-1-thread-4    de.softwareforge.kafka.LoadCommand    Loading table 'part' into topic 'tpch.part'...
2014-07-28T17:17:07.982-0700     INFO    pool-1-thread-5    de.softwareforge.kafka.LoadCommand    Loading table 'partsupp' into topic 'tpch.partsupp'...
2014-07-28T17:17:07.982-0700     INFO    pool-1-thread-6    de.softwareforge.kafka.LoadCommand    Loading table 'supplier' into topic 'tpch.supplier'...
2014-07-28T17:17:07.982-0700     INFO    pool-1-thread-7    de.softwareforge.kafka.LoadCommand    Loading table 'nation' into topic 'tpch.nation'...
2014-07-28T17:17:07.982-0700     INFO    pool-1-thread-8    de.softwareforge.kafka.LoadCommand    Loading table 'region' into topic 'tpch.region'...
2014-07-28T17:17:10.612-0700    ERROR    pool-1-thread-8    kafka.producer.async.DefaultEventHandler    Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.region
2014-07-28T17:17:10.781-0700     INFO    pool-1-thread-8    de.softwareforge.kafka.LoadCommand    Generated 5 rows for table 'region'.
2014-07-28T17:17:10.797-0700    ERROR    pool-1-thread-3    kafka.producer.async.DefaultEventHandler    Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.lineitem
2014-07-28T17:17:10.932-0700    ERROR    pool-1-thread-1    kafka.producer.async.DefaultEventHandler    Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.customer
2014-07-28T17:17:11.068-0700    ERROR    pool-1-thread-2    kafka.producer.async.DefaultEventHandler    Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.orders
2014-07-28T17:17:11.200-0700    ERROR    pool-1-thread-6    kafka.producer.async.DefaultEventHandler    Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.supplier
2014-07-28T17:17:11.319-0700     INFO    pool-1-thread-6    de.softwareforge.kafka.LoadCommand    Generated 100 rows for table 'supplier'.
2014-07-28T17:17:11.333-0700    ERROR    pool-1-thread-4    kafka.producer.async.DefaultEventHandler    Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.part
2014-07-28T17:17:11.466-0700    ERROR    pool-1-thread-5    kafka.producer.async.DefaultEventHandler    Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.partsupp
2014-07-28T17:17:11.597-0700    ERROR    pool-1-thread-7    kafka.producer.async.DefaultEventHandler    Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.nation
2014-07-28T17:17:11.706-0700     INFO    pool-1-thread-7    de.softwareforge.kafka.LoadCommand    Generated 25 rows for table 'nation'.
2014-07-28T17:17:12.180-0700     INFO    pool-1-thread-1    de.softwareforge.kafka.LoadCommand    Generated 1500 rows for table 'customer'.
2014-07-28T17:17:12.251-0700     INFO    pool-1-thread-4    de.softwareforge.kafka.LoadCommand    Generated 2000 rows for table 'part'.
2014-07-28T17:17:12.905-0700     INFO    pool-1-thread-2    de.softwareforge.kafka.LoadCommand    Generated 15000 rows for table 'orders'.
2014-07-28T17:17:12.919-0700     INFO    pool-1-thread-5    de.softwareforge.kafka.LoadCommand    Generated 8000 rows for table 'partsupp'.
2014-07-28T17:17:13.877-0700     INFO    pool-1-thread-3    de.softwareforge.kafka.LoadCommand    Generated 60175 rows for table 'lineitem'.

Kafka now has a number of topics that are preloaded with data to query.

Step 3: Make the Kafka topics known to Presto

In your Presto installation, add a catalog properties file etc/catalog/kafka.properties for the Kafka connector. This file lists the Kafka nodes and topics:

connector.name=kafka
kafka.nodes=localhost:9092
kafka.table-names=tpch.customer,tpch.orders,tpch.lineitem,tpch.part,tpch.partsupp,tpch.supplier,tpch.nation,tpch.region
kafka.hide-internal-columns=false

Now start Presto:

$ bin/launcher start

Because the Kafka tables all have the tpch. prefix in the configuration, the tables are in the tpch schema. The connector is mounted into the kafka catalog, because the properties file is named kafka.properties.

Start the Presto CLI:

$ ./presto --catalog kafka --schema tpch

List the tables to verify that things are working:

presto:tpch> SHOW TABLES;
  Table
----------
 customer
 lineitem
 nation
 orders
 part
 partsupp
 region
 supplier
(8 rows)
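
Since the connector is mounted as the kafka catalog, the same tables can also be addressed with fully qualified names from any session; a quick sanity check (output omitted):

presto:tpch> SHOW SCHEMAS FROM kafka;
presto:tpch> SELECT count(*) FROM kafka.tpch.region;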

Step 4: Basic data querying

Kafka data is unstructured, and it has no metadata to describe the format of the messages. Without further configuration, the Kafka connector can access the data and map it in raw form, but there are no actual columns besides the built-in ones:

presto:tpch> DESCRIBE customer;
      Column       |  Type   | Extra |                   Comment
-------------------+---------+-------+---------------------------------------------
 _partition_id     | bigint  |       | Partition Id
 _partition_offset | bigint  |       | Offset for the message within the partition
 _segment_start    | bigint  |       | Segment start offset
 _segment_end      | bigint  |       | Segment end offset
 _segment_count    | bigint  |       | Running message count per segment
 _key              | varchar |       | Key text
 _key_corrupt      | boolean |       | Key data is corrupt
 _key_length       | bigint  |       | Total number of key bytes
 _message          | varchar |       | Message text
 _message_corrupt  | boolean |       | Message data is corrupt
 _message_length   | bigint  |       | Total number of message bytes
(11 rows)

presto:tpch> SELECT count(*) FROM customer;
 _col0
-------
  1500
(1 row)

presto:tpch> SELECT _message FROM customer LIMIT 5;
                                                                                                                                                 _message
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 {"rowNumber":1,"customerKey":1,"name":"Customer#000000001","address":"IVhzIApeRb ot,c,E","nationKey":15,"phone":"25-989-741-2988","accountBalance":711.56,"marketSegment":"BUILDING","comment":"to the even, regular platelets. regular, ironic epitaphs nag e"}
 {"rowNumber":3,"customerKey":3,"name":"Customer#000000003","address":"MG9kdTD2WBHm","nationKey":1,"phone":"11-719-748-3364","accountBalance":7498.12,"marketSegment":"AUTOMOBILE","comment":" deposits eat slyly ironic, even instructions. express foxes detect slyly. blithel
 {"rowNumber":5,"customerKey":5,"name":"Customer#000000005","address":"KvpyuHCplrB84WgAiGV6sYpZq7Tj","nationKey":3,"phone":"13-750-942-6364","accountBalance":794.47,"marketSegment":"HOUSEHOLD","comment":"n accounts will have to unwind. foxes cajole accor"}
 {"rowNumber":7,"customerKey":7,"name":"Customer#000000007","address":"TcGe5gaZNgVePxU5kRrvXBfkasDTea","nationKey":18,"phone":"28-190-982-9759","accountBalance":9561.95,"marketSegment":"AUTOMOBILE","comment":"ainst the ironic, express theodolites. express, even pinto bean
 {"rowNumber":9,"customerKey":9,"name":"Customer#000000009","address":"xKiAFTjUsCuxfeleNqefumTrjS","nationKey":8,"phone":"18-338-906-3675","accountBalance":8324.07,"marketSegment":"FURNITURE","comment":"r theodolites according to the requests wake thinly excuses: pending
(5 rows)

presto:tpch> SELECT sum(cast(json_extract_scalar(_message, '$.accountBalance') AS double)) FROM customer LIMIT 10;
   _col0
------------
 6681865.59
(1 row)

The data from Kafka can be queried using Presto, but it is not yet in actual table shape. The raw data is available through the _message and _key columns, but it is not decoded into columns. As the sample data is in JSON format, the JSON functions and operators built into Presto can be used to slice the data.
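
For example, a minimal sketch (output omitted) that uses json_extract_scalar to pull individual fields out of the raw message:

presto:tpch> SELECT json_extract_scalar(_message, '$.name') AS name,
          ->        cast(json_extract_scalar(_message, '$.accountBalance') AS double) AS balance
          -> FROM customer
          -> LIMIT 3;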

Step 5: Add a topic description file

The Kafka connector supports topic description files to turn raw data into table format. These files are located in the etc/kafka folder in the Presto installation and must end with .json. It is recommended that the file name matches the table name, but this is not necessary.

Add the following file as etc/kafka/tpch.customer.json and restart Presto:

{
    "tableName": "customer",
    "schemaName": "tpch",
    "topicName": "tpch.customer",
    "key": {
        "dataFormat": "raw",
        "fields": [
            {
                "name": "kafka_key",
                "dataFormat": "LONG",
                "type": "BIGINT",
                "hidden": "false"
            }
        ]
    }
}

The customer table now has an additional column: kafka_key.

presto:tpch> DESCRIBE customer;
      Column       |  Type   | Extra |                   Comment
-------------------+---------+-------+---------------------------------------------
 kafka_key         | bigint  |       |
 _partition_id     | bigint  |       | Partition Id
 _partition_offset | bigint  |       | Offset for the message within the partition
 _segment_start    | bigint  |       | Segment start offset
 _segment_end      | bigint  |       | Segment end offset
 _segment_count    | bigint  |       | Running message count per segment
 _key              | varchar |       | Key text
 _key_corrupt      | boolean |       | Key data is corrupt
 _key_length       | bigint  |       | Total number of key bytes
 _message          | varchar |       | Message text
 _message_corrupt  | boolean |       | Message data is corrupt
 _message_length   | bigint  |       | Total number of message bytes
(12 rows)

presto:tpch> SELECT kafka_key FROM customer ORDER BY kafka_key LIMIT 10;
 kafka_key
-----------
         0
         1
         2
         3
         4
         5
         6
         7
         8
         9
(10 rows)

The topic definition file maps the internal Kafka key, which is a raw long in eight bytes, onto a Presto BIGINT column.
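
Judging by the values above, the kafka-tpch loader uses the zero-based row number as the message key; under that assumption, the following sketch should return 0 (output omitted):

presto:tpch> SELECT count(*) FROM customer
          -> WHERE kafka_key + 1 != cast(json_extract_scalar(_message, '$.rowNumber') AS bigint);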

Step 6: Map all the values from the topic message onto columns

Update the etc/kafka/tpch.customer.json file to add fields for the message, and restart Presto. As the fields in the message are JSON, it uses the json data format. This is an example where different data formats are used for the key and the message.

{
    "tableName": "customer",
    "schemaName": "tpch",
    "topicName": "tpch.customer",
    "key": {
        "dataFormat": "raw",
        "fields": [
            {
                "name": "kafka_key",
                "dataFormat": "LONG",
                "type": "BIGINT",
                "hidden": "false"
            }
        ]
    },
    "message": {
        "dataFormat": "json",
        "fields": [
            {
                "name": "row_number",
                "mapping": "rowNumber",
                "type": "BIGINT"
            },
            {
                "name": "customer_key",
                "mapping": "customerKey",
                "type": "BIGINT"
            },
            {
                "name": "name",
                "mapping": "name",
                "type": "VARCHAR"
            },
            {
                "name": "address",
                "mapping": "address",
                "type": "VARCHAR"
            },
            {
                "name": "nation_key",
                "mapping": "nationKey",
                "type": "BIGINT"
            },
            {
                "name": "phone",
                "mapping": "phone",
                "type": "VARCHAR"
            },
            {
                "name": "account_balance",
                "mapping": "accountBalance",
                "type": "DOUBLE"
            },
            {
                "name": "market_segment",
                "mapping": "marketSegment",
                "type": "VARCHAR"
            },
            {
                "name": "comment",
                "mapping": "comment",
                "type": "VARCHAR"
            }
        ]
    }
}

Now, columns are defined for all the fields in the message JSON, and the sum query from earlier can operate on the account_balance column directly:

presto:tpch> DESCRIBE customer;
      Column       |  Type   | Extra |                   Comment
-------------------+---------+-------+---------------------------------------------
 kafka_key         | bigint  |       |
 row_number        | bigint  |       |
 customer_key      | bigint  |       |
 name              | varchar |       |
 address           | varchar |       |
 nation_key        | bigint  |       |
 phone             | varchar |       |
 account_balance   | double  |       |
 market_segment    | varchar |       |
 comment           | varchar |       |
 _partition_id     | bigint  |       | Partition Id
 _partition_offset | bigint  |       | Offset for the message within the partition
 _segment_start    | bigint  |       | Segment start offset
 _segment_end      | bigint  |       | Segment end offset
 _segment_count    | bigint  |       | Running message count per segment
 _key              | varchar |       | Key text
 _key_corrupt      | boolean |       | Key data is corrupt
 _key_length       | bigint  |       | Total number of key bytes
 _message          | varchar |       | Message text
 _message_corrupt  | boolean |       | Message data is corrupt
 _message_length   | bigint  |       | Total number of message bytes
(21 rows)

presto:tpch> SELECT * FROM customer LIMIT 5;
 kafka_key | row_number | customer_key |        name        |                address                | nation_key |      phone      | account_balance | market_segment |                                                      comment
-----------+------------+--------------+--------------------+---------------------------------------+------------+-----------------+-----------------+----------------+---------------------------------------------------------------------------------------------------------
         1 |          2 |            2 | Customer#000000002 | XSTf4,NCwDVaWNe6tEgvwfmRchLXak        |         13 | 23-768-687-3665 |          121.65 | AUTOMOBILE     | l accounts. blithely ironic theodolites integrate boldly: caref
         3 |          4 |            4 | Customer#000000004 | XxVSJsLAGtn                           |          4 | 14-128-190-5944 |         2866.83 | MACHINERY      |  requests. final, regular ideas sleep final accou
         5 |          6 |            6 | Customer#000000006 | sKZz0CsnMD7mp4Xd0YrBvx,LREYKUWAh yVn  |         20 | 30-114-968-4951 |         7638.57 | AUTOMOBILE     | tions. even deposits boost according to the slyly bold packages. final accounts cajole requests. furious
         7 |          8 |            8 | Customer#000000008 | I0B10bB0AymmC, 0PrRYBCP1yGJ8xcBPmWhl5 |         17 | 27-147-574-9335 |         6819.74 | BUILDING       | among the slyly regular theodolites kindle blithely courts. carefully even theodolites haggle slyly alon
         9 |         10 |           10 | Customer#000000010 | 6LrEaV6KR6PLVcgl2ArL Q3rqzLzcT1 v2    |          5 | 15-741-346-9870 |         2753.54 | HOUSEHOLD      | es regular deposits haggle. fur
(5 rows)

presto:tpch> SELECT sum(account_balance) FROM customer LIMIT 10;
   _col0
------------
 6681865.59
(1 row)

All the fields from the customer topic messages are now available as Presto table columns.
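
With the message decoded into proper columns, ordinary analytical SQL works as on any other table; a minimal sketch (output omitted) aggregating customers by market segment:

presto:tpch> SELECT market_segment, count(*) AS customers,
          ->        round(avg(account_balance), 2) AS avg_balance
          -> FROM customer
          -> GROUP BY market_segment
          -> ORDER BY customers DESC;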

Step 7: Use live data

Presto can query live data in Kafka as it arrives. To simulate a live feed of data, this tutorial sets up a feed of live tweets into Kafka.

Set up a live Twitter feed

  • Download the twistr tool:

$ curl -o twistr https://repo1.maven.org/maven2/de/softwareforge/twistr_kafka_0811/1.2/twistr_kafka_0811-1.2.sh
$ chmod 755 twistr

  • Create a developer account at https://dev.twitter.com/ and set up an access and consumer token.

  • Create a twistr.properties file and put the access and consumer keys and secrets into it:

twistr.access-token-key=...
twistr.access-token-secret=...
twistr.consumer-key=...
twistr.consumer-secret=...
twistr.kafka.brokers=localhost:9092

Create a tweets table on Presto

Add the tweets table to the etc/catalog/kafka.properties file:

connector.name=kafka
kafka.nodes=localhost:9092
kafka.table-names=tpch.customer,tpch.orders,tpch.lineitem,tpch.part,tpch.partsupp,tpch.supplier,tpch.nation,tpch.region,tweets
kafka.hide-internal-columns=false

Add a topic definition file for the Twitter feed as etc/kafka/tweets.json:

{
    "tableName": "tweets",
    "topicName": "twitter_feed",
    "dataFormat": "json",
    "key": {
        "dataFormat": "raw",
        "fields": [
            {
                "name": "kafka_key",
                "dataFormat": "LONG",
                "type": "BIGINT",
                "hidden": "false"
            }
        ]
    },
    "message": {
        "dataFormat":"json",
        "fields": [
            {
                "name": "text",
                "mapping": "text",
                "type": "VARCHAR"
            },
            {
                "name": "user_name",
                "mapping": "user/screen_name",
                "type": "VARCHAR"
            },
            {
                "name": "lang",
                "mapping": "lang",
                "type": "VARCHAR"
            },
            {
                "name": "created_at",
                "mapping": "created_at",
                "type": "TIMESTAMP",
                "dataFormat": "rfc2822"
            },
            {
                "name": "favorite_count",
                "mapping": "favorite_count",
                "type": "BIGINT"
            },
            {
                "name": "retweet_count",
                "mapping": "retweet_count",
                "type": "BIGINT"
            },
            {
                "name": "favorited",
                "mapping": "favorited",
                    "type": "BOOLEAN"
            },
            {
                "name": "id",
                "mapping": "id_str",
                "type": "VARCHAR"
            },
            {
                "name": "in_reply_to_screen_name",
                "mapping": "in_reply_to_screen_name",
                "type": "VARCHAR"
            },
            {
                "name": "place_name",
                "mapping": "place/full_name",
                "type": "VARCHAR"
            }
        ]
    }
}

As this table does not have an explicit schema name, it will be placed into the default schema.
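
As a consequence, the table can be addressed as kafka.default.tweets from any session, regardless of its current schema; a quick check (output omitted):

presto> SELECT count(*) FROM kafka.default.tweets;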

Feed live data

Start the twistr tool:

$ java -Dness.config.location=file:$(pwd) -Dness.config=twistr -jar ./twistr

twistr connects to the Twitter API and feeds the "sample tweet" feed into a Kafka topic called twitter_feed.

Now run queries against the live data:

$ ./presto-cli --catalog kafka --schema default

presto:default> SELECT count(*) FROM tweets;
 _col0
-------
  4467
(1 row)

presto:default> SELECT count(*) FROM tweets;
 _col0
-------
  4517
(1 row)

presto:default> SELECT count(*) FROM tweets;
 _col0
-------
  4572
(1 row)

presto:default> SELECT kafka_key, user_name, lang, created_at FROM tweets LIMIT 10;
     kafka_key      |    user_name    | lang |       created_at
--------------------+-----------------+------+-------------------------
 494227746231685121 | burncaniff      | en   | 2014-07-29 14:07:31.000
 494227746214535169 | gu8tn           | ja   | 2014-07-29 14:07:31.000
 494227746219126785 | pequitamedicen  | es   | 2014-07-29 14:07:31.000
 494227746201931777 | josnyS          | ht   | 2014-07-29 14:07:31.000
 494227746219110401 | Cafe510         | en   | 2014-07-29 14:07:31.000
 494227746210332673 | Da_JuanAnd_Only | en   | 2014-07-29 14:07:31.000
 494227746193956865 | Smile_Kidrauhl6 | pt   | 2014-07-29 14:07:31.000
 494227750426017793 | CashforeverCD   | en   | 2014-07-29 14:07:32.000
 494227750396653569 | FilmArsivimiz   | tr   | 2014-07-29 14:07:32.000
 494227750388256769 | jmolas          | es   | 2014-07-29 14:07:32.000
(10 rows)

There is now a live feed into Kafka, which can be queried using Presto.

Epilogue: Time stamps

The tweets feed that was set up in the last step contains a timestamp in RFC 2822 format as the created_at attribute in each tweet.

presto:default> SELECT DISTINCT json_extract_scalar(_message, '$.created_at') AS raw_date
             -> FROM tweets LIMIT 5;
            raw_date
--------------------------------
 Tue Jul 29 21:07:31 +0000 2014
 Tue Jul 29 21:07:32 +0000 2014
 Tue Jul 29 21:07:33 +0000 2014
 Tue Jul 29 21:07:34 +0000 2014
 Tue Jul 29 21:07:35 +0000 2014
(5 rows)

The topic definition file for the tweets table contains a mapping onto a timestamp using the rfc2822 converter:

...
{
    "name": "created_at",
    "mapping": "created_at",
    "type": "TIMESTAMP",
    "dataFormat": "rfc2822"
},
...

This allows the raw data to be mapped onto a Presto timestamp column:

presto:default> SELECT created_at, raw_date FROM (
             ->   SELECT created_at, json_extract_scalar(_message, '$.created_at') AS raw_date
             ->   FROM tweets)
             -> GROUP BY 1, 2 LIMIT 5;
       created_at        |            raw_date
-------------------------+--------------------------------
 2014-07-29 14:07:20.000 | Tue Jul 29 21:07:20 +0000 2014
 2014-07-29 14:07:21.000 | Tue Jul 29 21:07:21 +0000 2014
 2014-07-29 14:07:22.000 | Tue Jul 29 21:07:22 +0000 2014
 2014-07-29 14:07:23.000 | Tue Jul 29 21:07:23 +0000 2014
 2014-07-29 14:07:24.000 | Tue Jul 29 21:07:24 +0000 2014
(5 rows)

The Kafka connector contains converters for the ISO 8601 and RFC 2822 text formats, and for number-based timestamps using seconds or milliseconds since the epoch. There is also a generic, text-based formatter which uses Joda-Time format strings to parse text columns.
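
Since created_at is now a real TIMESTAMP column, the usual date and time functions apply to it; a minimal sketch (output omitted) that counts tweets per minute:

presto:default> SELECT date_trunc('minute', created_at) AS minute, count(*) AS tweets
             -> FROM tweets
             -> GROUP BY 1
             -> ORDER BY 1 DESC
             -> LIMIT 5;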