<?xml version="1.0" encoding="utf-8"?>
<search>
<entry>
<title>How Apache Kylin Query Work(三)</title>
<link href="/2025/01/23/How-Apache-Kylin-Query-Work%EF%BC%88%E4%B8%89%EF%BC%89/"/>
<url>/2025/01/23/How-Apache-Kylin-Query-Work%EF%BC%88%E4%B8%89%EF%BC%89/</url>
<content type="html"><![CDATA[<h1 id="FYI"><a href="#FYI" class="headerlink" title="FYI"></a>FYI</h1><ul><li>repo:<a href="https://github.com/apache/kylin">https://github.com/apache/kylin</a></li><li>branch:kylin5</li><li>commitMessage:Merge pull request #2245 from VaitaR/patch-1</li><li>commitID:e18b73ab6a6ed66de41532bc03373e8efeff0b77</li></ul><h1 id="Concepts"><a href="#Concepts" class="headerlink" title="Concepts"></a>Concepts</h1><h2 id="Basic"><a href="#Basic" class="headerlink" title="Basic"></a>Basic</h2><ul><li><p><strong>Table</strong> - 源数据表。在创建模型并加载数据之前,系统需要从数据源(通常为 Hive)同步表的元数据,包含表名、列名、列属性等。</p></li><li><p><strong>Model</strong> - 模型,也是逻辑语义层。模型是一组表以及它们间的关联关系 (Join Relationship)。模型中定义了事实表、维度表、度量、维度、和一组索引。模型和其中的索引定义了加载数据时要执行的预计算。系统支持基于<a href="https://baike.baidu.com/item/%E6%98%9F%E5%9E%8B%E6%A8%A1%E5%9E%8B/9133897">星型模型</a> 和 <a href="https://baike.baidu.com/item/%E9%9B%AA%E8%8A%B1%E6%A8%A1%E5%9E%8B">雪花模型</a> 的多维模型。</p></li><li><p><strong>Index</strong> - 索引,在数据加载时将构建索引,索引将被用于加速查询。索引分为聚合索引与明细索引。</p><ul><li><strong>Aggregate Index</strong> - 聚合索引,本质是多个维度和度量的组合,适合回答聚合查询,比如某年的销售总额。</li><li><strong>Table Index</strong> - 明细索引,本质是大宽表的多路索引,适合回答精确到记录的明细查询,比如某用户最近 100 笔交易。</li></ul></li><li><p><strong>Load Data</strong> - 加载数据。为了加速查询,需要将数据从源表加载入模型,在此过程中也将构建索引,整个过程即是数据的预计算过程。每一次数据加载将产生一个 Segment,载入数据后的模型可以服务于查询。</p><ul><li><strong>Incremental Load</strong> - 增量数据加载。在事实表上可以定义一个分区日期或时间列。根据分区列,可以按时间范围对超大数据集做增量加载。</li><li><strong>Full Load</strong> - 全量加载。如果没有定义分区列,那么源表中的所有数据将被一次性加载。</li><li><strong>Build Index</strong> - 重建索引。用户可以随时调整模型和索引的定义。对于已加载的数据,其上的索引需要按新的定义重新构建。</li></ul></li><li><p><strong>Segments</strong> - 数据块。是模型(索引组)经过数据加载后形成的数据块。Segment 的生成以分区列为依据。对于有分区列的模型(索引组),可以拥有一个或多个 Segment,对于没有分区列的模型(索引组),只能拥有一个 Segment。</p></li></ul><h2 id="OlapContext"><a href="#OlapContext" class="headerlink" title="OlapContext"></a>OlapContext</h2><p>SQL 进入 Kylin 中经过 Calcite 解析转换、优化后形成一棵树结构的查询逻辑计划 RelNode,这种结构是 Calcite 在逻辑层的一种表示,比较典型的 RelNode 树结构如下图 case1 所示。<br>如果再加上 RelNode 的详细信息,绝大多数场景下可以将这棵树重新翻译成原始 SQL,但这种结构无法直接作用于 Kylin 的预计算,因此 Kylin 定义了一种可以预计算的数据结构,这种结构称之为 OlapContext,它能够同时对应 RelNode 和 Kylin 匹配的模型索引。</p><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/202408221026607.png"></p><p>需重点关注的属性</p><ul><li><strong><code>firstTableScan</code></strong>: OlapContext 用到的第一张表(通常指事实表)</li><li><strong><code>allTableScans</code></strong>: 使用到的所有表信息</li><li><strong><code>aggregations</code></strong>: 查询的度量算子</li><li><strong><code>filterColumns</code></strong>: 过滤条件(SQL where 条件的列或者表达式)</li><li><strong><code>joins</code></strong>: 表与表的 join 关系</li><li><strong><code>sql</code></strong>: 生成 OlapContext 的原始 SQL,一条 SQL 可能会被切分成多个 OlapContext</li><li><strong><code>topNode</code></strong>: OlapContext 最顶端的 RelNode 节点</li><li><strong><code>expandedFilterConditions</code></strong>: 记录查询用到的过滤表达式,以支持后面做过滤优化</li></ul><p>除此之外还有一些别的属性需要留意</p><ul><li><strong><code>parentOfTopNode</code></strong>: 一般为 null 除非 JoinRel 被切分开</li><li><strong><code>innerGroupByColumns、innerFilterColumns</code></strong>: 推荐可计算列时使用到</li><li><strong><code>sortColumns</code></strong>: 排序列</li></ul><p>总结一下,OlapContext 记录了整个 Kylin 模型匹配的上下文信息,是最核心的数据结构,对这块熟悉可以更好地理解索引匹配流程。</p><h2 id="OlapRel"><a href="#OlapRel" class="headerlink" title="OlapRel"></a>OlapRel</h2><p>Kylin 继承自 Calcite 实现的抽象接口类,定义了遍历整个查询阶段所需上下文及遍历方式,需关注的属性和方法</p><ul><li><strong><code>getColumnRowType</code></strong>: 记录了原始表类型和 Kylin 模型中列数据类型的对应关系</li><li><strong><code>implementOlap</code></strong>: 子类需实现的遍历方法,包含建立和原始表的对应关系,收集 OlapContext 
的信息都会在这个方法中完成,是很重要的方法</li><li><strong><code>implementRewrite</code></strong>: 在完成模型匹配之后,基于情况对查询逻辑计划树进行重建</li><li><strong><code>implementEnumerable</code></strong>: 适配 Calcite EnumerableConvention 物理执行引擎的 Java 实现</li><li><strong><code>implementContext</code></strong>: 分配 OlapContext 的逻辑方法,一个完整的查询逻辑计划可能会划分成多个 OlapContext</li><li><strong><code>implementCutContext</code></strong>: 如果 OlapContext 切分得太大无法匹配模型索引,则会尝试对其再次切分</li></ul><hr><h2 id="Model-Match"><a href="#Model-Match" class="headerlink" title="Model Match"></a>Model Match</h2><p>入口: <strong><span class="label label-primary">QueryContextCutter#selectRealization</span></strong></p><p>整体流程图<br><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/KylinModelMatch-1737621343603.png"></p><h3 id="1-Divide-OlapContext"><a href="#1-Divide-OlapContext" class="headerlink" title="1. Divide OlapContext"></a>1. Divide OlapContext</h3><p>OlapContext 的划分和分配在选出最优的查询逻辑计划之后,匹配模型索引之前。</p><p>一个基本的规则是遇到 agg 就划分出一个 OlapContext,如 OlapContext 示例图的 case1,从 agg 往下遍历时没有其他的 agg,当前查询逻辑计划树就只会分配一个 OlapContext,同理,case2 和 case3 会划分出两个 OlapContext。</p><p>每个 OlapContext 代表着模型索引的最小匹配单元,在划分后,将 OlapContext 与模型索引进行匹配,当无法匹配时,会将大的 OlapContext 再次切分成小的进行匹配,直到达到最大尝试切分的匹配次数,默认是 10,可通过项目级参数进行配置。</p><p>划分 OlapContext 主要通过下面两个类</p><ul><li><p><strong><code>ContextInitialCutStrategy</code></strong></p><ul><li>遍历查询逻辑计划树,接着通过子类实现的 <strong><code>OlapRel#implementContext</code></strong> 方法划分 OlapContext</li><li>如果还有未分配的表,会为其直接分配 OlapContext</li></ul></li><li><p><strong><code>ContextReCutStrategy</code></strong></p><ul><li>主要逻辑是将大的 OlapContext 切小尽可能匹配模型索引,接着通过子类实现的 <strong><code>OlapRel#implementCutContext</code></strong> 方法划分 OlapContext</li></ul></li></ul><p>划分逻辑相对复杂的子类是 OlapJoinRel,先访问 leftChild,再访问 rightChild,最后在当前节点上分配 OlapContext。</p><h3 id="2-Fill-OlapContext"><a href="#2-Fill-OlapContext" class="headerlink" title="2. Fill OlapContext"></a>2. Fill OlapContext</h3><p>通过后序遍历的方式对先前压入栈的查询逻辑计划节点填充 OlapContext 信息,上面第一段逻辑是基于查询逻辑计划切分出 OlapContext,然而还需要在查询节点上收集必要的 OlapContext 信息,通过每个子类实现的 <strong><code>OlapRel#implementOlap</code></strong> 方法进行填充。</p><h3 id="3-Choose-Candidate"><a href="#3-Choose-Candidate" class="headerlink" title="3. Choose Candidate"></a>3. 
Choose Candidate</h3><p>默认通过多线程的方式来选择匹配合适的索引,使用 CountDownLatch,输入是 Context 划分的数量。也可以通过项目级参数配置不使用多线程的方式匹配索引,串行执行。</p><h4 id="3-1-Attempt-Select-Candidate"><a href="#3-1-Attempt-Select-Candidate" class="headerlink" title="3.1 Attempt Select Candidate"></a>3.1 Attempt Select Candidate</h4><h5 id="3-1-1-Filter-qualified-models-by-firstTable-of-OlapContext"><a href="#3-1-1-Filter-qualified-models-by-firstTable-of-OlapContext" class="headerlink" title="3.1.1 Filter qualified models by firstTable of OlapContext"></a>3.1.1 Filter qualified models by firstTable of OlapContext</h5><p>基于 OlapContext 第一张表即事实表来筛选出待匹配的模型,每个 Project 都保存了对应的模型信息缓存在内存中,取出的操作是比较快的,取出后再过滤掉不符合条件的模型。</p><ul><li>移除没有准备好 Segments 的模型</li><li>用户通过 SQL hint 的方式指定了模型匹配的优先级,未指定的模型会被移除</li></ul><h5 id="3-1-2-Match-model"><a href="#3-1-2-Match-model" class="headerlink" title="3.1.2 Match model"></a>3.1.2 Match model</h5><p>先检查是否有待匹配模型,没有的话直接抛出异常,等待下次重试。模型匹配采用的是图匹配方式,参考类: <strong><code>JoinsGraph</code></strong></p><p>需关注属性</p><ul><li><strong><code>center</code></strong>: 表示图的中心表(通常是查询的主表),类型为 <code>TableRef</code>。</li><li><strong><code>vertexMap</code></strong>: 存储所有表的别名与表引用 <code>TableRef</code> 的映射关系,类型为 <code>Map<String, TableRef></code>。</li><li><strong><code>vertexInfoMap</code></strong>: 存储每个表 <code>TableRef</code> 的顶点信息,包括该表的出边 <code>outEdges</code> 和入边 <code>inEdges</code>,类型为 <code>Map<TableRef, VertexInfo<Edge>></code>。</li><li><strong><code>edges</code></strong>: 存储图中所有的边 <code>Edge</code>,类型为 <code>Set<Edge></code>。<br>需关注方法</li><li><strong><code>match</code></strong>: 将当前图与一个模式图 <code>pattern</code> 进行匹配,返回是否匹配成功。匹配过程中会生成一个别名映射表 <code>matchAliasMap</code>,用于记录两个图中表的对应关系。</li><li><strong><code>match0</code></strong>: 匹配的核心逻辑,递归地匹配图中的表和边。</li><li><strong><code>findOutEdgeFromDualTable</code></strong>: 在模式图中查找与查询图匹配的边。</li><li><strong><code>normalize</code></strong>: 对图进行规范化处理,将某些左连接 <code>LEFT JOIN</code> 转换为左或内连接 <code>LEFT OR INNER JOIN</code> ,以便优化查询,需要通过额外的参数配置。</li></ul><h6 id="3-1-2-1-Try-to-exactly-match-model"><a href="#3-1-2-1-Try-to-exactly-match-model" class="headerlink" title="3.1.2.1 Try to exactly match model"></a>3.1.2.1 Try to exactly match model</h6><p>通过图匹配的方式对 OlapContext 中的事实表和 join 关系与模型上的表和 join 关系等进行对比,检查其是否一一对应,如果都能匹配上,那么就可以说当前模型的索引是能够精确匹配查询 SQL 的。</p><p class="note note-info">OlapContext 中的表信息是从查询 SQL 的逻辑计划树中收集的。</p><h6 id="3-1-2-2-Try-to-partial-match-model"><a href="#3-1-2-2-Try-to-partial-match-model" class="headerlink" title="3.1.2.2 Try to partial match model"></a>3.1.2.2 Try to partial match model</h6><p>在精确匹配未找到合适模型的情况下,会基于配置参数再尝试部分匹配模型,这里的部分指的是仅匹配部分 join 关系。</p><h6 id="3-1-2-3-Layout-Match-Select-Realizations"><a href="#3-1-2-3-Layout-Match-Select-Realizations" class="headerlink" title="3.1.2.3 Layout Match - Select Realizations"></a>3.1.2.3 Layout Match - Select Realizations</h6><p>在已经选出合适的 layout 之后,会继续对候选的索引进行筛选,陆续应用以下 Rules</p><ul><li><strong><code>KylinTableChooserRule</code></strong>: 匹配模型索引(分为明细索引和聚合索引,需要所有的列和聚合算子都能匹配上,默认不会用明细索引回答聚合查询,可通过参数配置)</li><li><strong><code>SegmentPruningRule</code></strong>: 根据分区列和 Filter 条件对 Segment 进行裁剪</li><li><strong><code>PartitionPruningRule</code></strong>: 根据多级分区列筛选分区</li><li><strong><code>RemoveIncapableRealizationsRule</code></strong>: 选择成本最低的 layout</li><li><strong><code>VacantIndexPruningRule(optional)</code></strong>: 选择空的 layout 回答查询</li></ul><p class="note note-info">layout 指的是代码层面的抽象索引(包含多种维度和度量的组合),其实就是 Index。</p><h5 id="3-1-3-Find-the-lowest-cost-candidate"><a href="#3-1-3-Find-the-lowest-cost-candidate" class="headerlink" title="3.1.3 Find 
the lowest-cost candidate"></a>3.1.3 Find the lowest-cost candidate</h5><p>对所有选出的 layout 应用排序规则后取出最优的回答查询,有时候不一定是成本最低的,比如用户某些场景的特殊需求下,成本最低的 layout 的索引数据是不完整的,Kylin 首先需要保证查询数据的完整性。</p><p>至此,模型匹配的逻辑已经讲述完毕。</p>]]></content>
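<p>为帮助理解上文 3.1.2 的图匹配思路,下面补充一段可独立运行的 Java 示意代码:把 join 关系简化为"左表-右表-连接类型"的字符串集合,查询图的每条边都能在模型图中找到时即视为可精确匹配,只匹配到部分边则对应部分匹配。这只是一个高度简化的草图,并非 Kylin 真实的 <code>JoinsGraph</code> 实现(真实实现还包含别名映射 <code>matchAliasMap</code>、递归的 <code>match0</code>、LEFT/INNER 归一化等),其中的表名和 join 关系均为假设的示例。</p><div class="hljs code-wrapper"><pre><code class="hljs java">import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// 高度简化的 join 图匹配示意:用字符串描述一条 join 边,
// 查询图的所有边都能在模型图中找到时,认为模型可以精确匹配该查询。
public class JoinGraphMatchSketch {

    static String edge(String leftTable, String rightTable, String joinType) {
        return leftTable + " -> " + rightTable + " (" + joinType + ")";
    }

    // 查询的所有 join 边是否都是模型 join 边的子集
    static boolean exactlyMatch(Set<String> queryEdges, Set<String> modelEdges) {
        return modelEdges.containsAll(queryEdges);
    }

    public static void main(String[] args) {
        // 假设模型定义:事实表 KYLIN_SALES 分别左连接 KYLIN_ACCOUNT 和 KYLIN_CAL_DT
        Set<String> modelEdges = new HashSet<>(Arrays.asList(
                edge("KYLIN_SALES", "KYLIN_ACCOUNT", "LEFT"),
                edge("KYLIN_SALES", "KYLIN_CAL_DT", "LEFT")));

        // 假设查询只用到了其中一条 join 关系
        Set<String> queryEdges = new HashSet<>(Collections.singletonList(
                edge("KYLIN_SALES", "KYLIN_CAL_DT", "LEFT")));

        System.out.println("exactly match: " + exactlyMatch(queryEdges, modelEdges)); // true
    }
}</code></pre></div>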
<categories>
<category>分布式系统</category>
<category>OLAP</category>
</categories>
<tags>
<tag>Kylin</tag>
</tags>
</entry>
<entry>
<title>How Apache Kylin Query Work(二)</title>
<link href="/2024/08/19/How-Apache-Kylin-Query-Work%EF%BC%88%E4%BA%8C%EF%BC%89/"/>
<url>/2024/08/19/How-Apache-Kylin-Query-Work%EF%BC%88%E4%BA%8C%EF%BC%89/</url>
<content type="html"><![CDATA[<h1 id="FYI"><a href="#FYI" class="headerlink" title="FYI"></a>FYI</h1><p>全文仅关注逻辑主体代码,其他代码均省略。</p><ul><li>repo:<a href="https://github.com/apache/kylin">https://github.com/apache/kylin</a></li><li>branch:kylin5</li><li>commitMessage:KYLIN-5943 Upgrade spark to 3.3.0-kylin-4.6.26.0</li><li>commitID:77201e7bcddb605da56e7f00d39db82e8f2d8931</li></ul><h1 id="Query-Entrance"><a href="#Query-Entrance" class="headerlink" title="Query Entrance"></a>Query Entrance</h1><p>我们跳过其他部分,直接进入 Kylin 查询真正处理的核心入口 <code>QueryExec#executeQuery</code>。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> QueryResult <span class="hljs-title function_">executeQuery</span><span class="hljs-params">(String sql)</span> <span class="hljs-keyword">throws</span> SQLException {<span class="hljs-type">RelRoot</span> <span class="hljs-variable">relRoot</span> <span class="hljs-operator">=</span> sqlConverter.convertSqlToRelNode(sql);<span class="hljs-type">RelNode</span> <span class="hljs-variable">node</span> <span class="hljs-operator">=</span> queryOptimizer.optimize(relRoot).rel;<span class="hljs-type">QueryResult</span> <span class="hljs-variable">queryResult</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">QueryResult</span>(executeQueryPlan(postOptimize(node)), resultFields);}</code></pre></div><h1 id="Calcite"><a href="#Calcite" class="headerlink" title="Calcite"></a>Calcite</h1><p>在模型匹配前的查询逻辑都是在 Calcite 中进行处理的。</p><h2 id="Prepare"><a href="#Prepare" class="headerlink" title="Prepare"></a>Prepare</h2><p>这一过程为后续 Calcite 的元数据 Schema 以及查询阶段使用的优化规则做了准备,参考 QueryExec 的构造方法</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-title function_">QueryExec</span><span class="hljs-params">(String project, KylinConfig kylinConfig, <span class="hljs-type">boolean</span> allowAlternativeQueryPlan)</span> { <span class="hljs-built_in">this</span>.project = project; <span class="hljs-built_in">this</span>.kylinConfig = kylinConfig; connectionConfig = KylinConnectionConfig.fromKapConfig(kylinConfig); schemaFactory = <span class="hljs-keyword">new</span> <span class="hljs-title class_">ProjectSchemaFactory</span>(project, kylinConfig); rootSchema = schemaFactory.createProjectRootSchema(); <span class="hljs-type">String</span> <span class="hljs-variable">defaultSchemaName</span> <span class="hljs-operator">=</span> schemaFactory.getDefaultSchema(); catalogReader = SqlConverter.createCatalogReader(connectionConfig, rootSchema, defaultSchemaName); planner = <span class="hljs-keyword">new</span> <span class="hljs-title class_">PlannerFactory</span>(kylinConfig).createVolcanoPlanner(connectionConfig); sqlConverter = QueryExec.createConverter(connectionConfig, planner, catalogReader); dataContext = createDataContext(rootSchema); planner.setExecutor(<span class="hljs-keyword">new</span> <span class="hljs-title class_">RexExecutorImpl</span>(dataContext)); queryOptimizer = <span class="hljs-keyword">new</span> <span class="hljs-title class_">QueryOptimizer</span>(planner);}</code></pre></div><p>注意这里的 planner 是 Kylin 在 CBO 阶段用到的优化规则,包含 Calcite 默认提供的一些优化规则,以及 Kylin 自己实现的优化规则,需要说明的是 Kylin 通过 CBO 阶段将 Calcite 通过 Schema 校验后的查询逻辑计划首先转变为自定义的 Olap Convension 逻辑计划,这之后还会经过一次 RBO 阶段优化才会转为可执行的物理执行计划。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> VolcanoPlanner <span class="hljs-title 
function_">createVolcanoPlanner</span><span class="hljs-params">(CalciteConnectionConfig connectionConfig)</span> { <span class="hljs-type">VolcanoPlanner</span> <span class="hljs-variable">planner</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">VolcanoPlanner</span>(<span class="hljs-keyword">new</span> <span class="hljs-title class_">PlannerContext</span>(connectionConfig)); registerDefaultRules(planner); registerCustomRules(planner); <span class="hljs-keyword">return</span> planner; }</code></pre></div><p>Kylin 在 CBO 阶段自定义 Rule 大多继承自 Calcite ConverterRule,该抽象类的定义是在不改变语义的情况下,将一种调用约定 Convension 转换为另一种 Convension,如 Kylin 中从默认的 NONE -> OLAP,转换时一般是伴随的关系,如 Kylin 中 OlapProjectRule 将 LogicalProject 转换为 OlapProjectRel,这样就可以在后续对 OlapProjectRel 继续进行转换优化,LogicalXxx 是 Calcite 通过校验后未经优化的查询逻辑计划。</p><p>当 Calcite 执行 CBO 优化完成后,会检查当前查询逻辑计划中是否仍有 NONE 的 RelNode,如果有则说明优化转换没有覆盖到,此时会报错,比如下面就是超过了 CBO 最大重试次数后 LogicalSort 未能成功转换的报错信息。</p><div class="hljs code-wrapper"><pre><code class="hljs java">There are not enough rules to produce a node with desired properties: convention=ENUMERABLE, sort=[<span class="hljs-number">0</span> ASC-nulls-first]. Missing conversion is LogicalSort[convention: NONE -> ENUMERABLE]</code></pre></div><h2 id="SQL-gt-AST-gt-RelRoot"><a href="#SQL-gt-AST-gt-RelRoot" class="headerlink" title="SQL -> AST -> RelRoot"></a>SQL -> AST -> RelRoot</h2><p>对应前文的 <code>sqlConverter.convertSqlToRelNode(sql)</code> 逻辑,这一段首先将 SQL 转换为一棵抽象语法树 AST,Calcite 使用的是 JavaCC,Spark 使用的是 Antlr。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> RelRoot <span class="hljs-title function_">convertSqlToRelNode</span><span class="hljs-params">(String sql)</span> <span class="hljs-keyword">throws</span> SqlParseException { <span class="hljs-type">SqlNode</span> <span class="hljs-variable">sqlNode</span> <span class="hljs-operator">=</span> parseSQL(sql); <span class="hljs-keyword">return</span> convertToRelNode(sqlNode);}</code></pre></div><p>转换时涉及到词法分析、语法分析,编写模板是 parser.jj 文件,可以通过在文件中新增定义实现并支持自己的语法。<br>SQL 转换为 SqlNode 之后长这样<br><img src="https://guimy.tech/images/introduction_calcite/sql_node_object.png"><br>接着经过一系列的校验以及和元数据信息的绑定,就可以从一棵抽象语法树 AST 变成未经优化的逻辑计划 RelRoot,RelRoot 是一系列查询逻辑计划节点 RelNode 的根节点。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">private</span> RelRoot <span class="hljs-title function_">convertToRelNode</span><span class="hljs-params">(SqlNode sqlNode)</span> { <span class="hljs-type">RelRoot</span> <span class="hljs-variable">root</span> <span class="hljs-operator">=</span> sqlToRelConverter.convertQuery(sqlNode, <span class="hljs-literal">true</span>, <span class="hljs-literal">true</span>); <span class="hljs-keyword">return</span> root;}</code></pre></div><p>元数据信息在 Calcite 中称为 Schema,有个抽象类 AbstractSchema,Kylin 的 OlapSchema 继承并实现了该抽象类,这些信息在创建出 sqlConverter 前需要先准备好。</p><h2 id="CBO"><a href="#CBO" class="headerlink" title="CBO"></a>CBO</h2><p>我们来看这一段 <code>queryOptimizer.optimize(relRoot)</code>,从这里就开始了对查询逻辑计划的优化操作。<br>这一块包含多处子步骤优化,列举如下</p><ul><li>subQuery</li><li>DecorrelateProgram</li><li>TrimFieldsProgram</li><li>program1</li><li>calc</li></ul><h3 id="subQuery"><a href="#subQuery" class="headerlink" title="subQuery"></a>subQuery</h3><p>Calcite 原生仅有 3 个优化规则,Kylin 在此基础上新增了 OLAPJoinPushThroughJoinRule 和 OLAPJoinPushThroughJoinRule2,这两个规则均改自 Calcite 原生的 JoinPushThroughJoinRule,目的是将带有 join 的子查询下推至表与表的 join 查询逻辑之后,这样方便 Kylin 在使用查询逻辑计划匹配模型时能够匹配上预定义的表 
join 关系,OLAPJoinPushThroughJoinRule2 则在此基础上允许循环匹配,需要说明的是 Kylin 创建模型定义的表 join 关系只有 left 和 inner 两种,当 SQL 查询为 right join 时不会作此改写。</p><ul><li>CoreRules.FILTER_SUB_QUERY_TO_CORRELATE</li><li>CoreRules.PROJECT_SUB_QUERY_TO_CORRELATE</li><li>CoreRules.JOIN_SUB_QUERY_TO_CORRELATE</li></ul><p>以一条 SQL 举例说明子查询的可读性,比如查询没有订购物品的消费者信息</p><div class="hljs code-wrapper"><pre><code class="hljs sql"><span class="hljs-keyword">SELECT</span> c.c_custkey<span class="hljs-keyword">FROM</span> customer c<span class="hljs-keyword">LEFT</span> <span class="hljs-keyword">JOIN</span> orders o <span class="hljs-keyword">ON</span> c.c_custkey <span class="hljs-operator">=</span> o.o_custkey<span class="hljs-keyword">WHERE</span> o.o_custkey <span class="hljs-keyword">IS</span> <span class="hljs-keyword">NULL</span>;</code></pre></div><p>使用子查询的方式改写如下,极大地降低了 SQL 的复杂性</p><div class="hljs code-wrapper"><pre><code class="hljs sql"><span class="hljs-keyword">SELECT</span> c_custkey<span class="hljs-keyword">FROM</span> customer<span class="hljs-keyword">WHERE</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">EXISTS</span> ( <span class="hljs-keyword">SELECT</span> <span class="hljs-operator">*</span> <span class="hljs-keyword">FROM</span> orders <span class="hljs-keyword">WHERE</span> o_custkey <span class="hljs-operator">=</span> c_custkey)</code></pre></div><p>在查询逻辑计划中将连接外部查询和子查询的运算符称为 <code>Correlate</code>,Calcite 通过这些规则将用户写的子查询 SQL 改写为上面的 SQL 在后续逻辑进行处理,这样做更便于进行查询逻辑计划优化。</p><h3 id="DecorrelateProgram"><a href="#DecorrelateProgram" class="headerlink" title="DecorrelateProgram"></a>DecorrelateProgram</h3><p>这一部分和上面消除子查询的优化相互关联,这一过程称为去相关或取消嵌套,去相关的关键是<strong>获得子查询的外部查询对应列值</strong>。当相关连接算子的左右子树没有相关列时,可以将 Correlate join 转换为普通的 join,参考下图,这样就可以像之前一样从下到上进行计算。<br><img src="https://miro.medium.com/v2/resize:fit:1400/format:webp/0*navAQNlGX38i6Hzt.png"><br>还有转换为带有 condition 的 Correlate join,参考下图。<br><img src="https://miro.medium.com/v2/resize:fit:1400/format:webp/0*pEb3o8oHbCrUUDm4.png"><br>还有很多其他转换思路,这里不再一一举例。</p><blockquote><p>FYI:<a href="https://alibaba-cloud.medium.com/query-optimization-technology-for-correlated-subqueries-8d265a51f58e">Query Optimization Technology for Correlated Subqueries</a></p></blockquote><h3 id="TrimFieldsProgram"><a href="#TrimFieldsProgram" class="headerlink" title="TrimFieldsProgram"></a>TrimFieldsProgram</h3><p>该过程无法通过参数控制,其主要作用是裁剪关系表达式中用不到的字段,在创建新的 RelNode(Calcite 中定义的查询逻辑计划类比 Spark Logical Plan) 同时,也会进行必要的优化,比如对 Filter 条件表达式进行优化,参考如下方法,Calcite 会尝试对表达式进行各种优化:布尔表达式是否返回值始终为 false、常量值是否能直接计算(这部分会进一步使用 <code>RexExecutable</code> 调用 JDK 底层方法直接生成可执行代码)等。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-comment">// RelFieldTrimmer#trimFields(Filter, ImmutableBitSet, Set<RelDataTypeField>)</span><span class="hljs-keyword">public</span> TrimResult <span class="hljs-title function_">trimFields</span><span class="hljs-params">( </span><span class="hljs-params"> Filter filter, </span><span class="hljs-params"> ImmutableBitSet fieldsUsed, </span><span class="hljs-params"> Set<RelDataTypeField> extraFields)</span> { <span class="hljs-comment">// If the input is unchanged, and we need to project all columns, </span> <span class="hljs-comment">// there's nothing we can do. if (newInput == input </span> && fieldsUsed.cardinality() == fieldCount) { <span class="hljs-keyword">return</span> result(filter, Mappings.createIdentity(fieldCount)); } <span class="hljs-comment">// Build new filter with trimmed input and condition. 
</span> relBuilder.push(newInput) .filter(filter.getVariablesSet(), newConditionExpr); <span class="hljs-comment">// The result has the same mapping as the input gave us. Sometimes we </span> <span class="hljs-comment">// return fields that the consumer didn't ask for, because the filter </span> <span class="hljs-comment">// needs them for its condition. </span> <span class="hljs-keyword">return</span> result(relBuilder.build(), inputMapping); }</code></pre></div><h3 id="program1"><a href="#program1" class="headerlink" title="program1"></a>program1</h3><p>接下来就到了执行 planner 中预定义好的优化规则这一步,由于前文创建的是 VolcanoPlanner,直接看 <code>VolcanoPlanner#findBestExp</code> 方法。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-meta">@Override</span> <span class="hljs-keyword">public</span> RelNode <span class="hljs-title function_">findBestExp</span><span class="hljs-params">()</span> { ensureRootConverters(); registerMaterializations(); ruleDriver.drive(); <span class="hljs-type">RelNode</span> <span class="hljs-variable">cheapest</span> <span class="hljs-operator">=</span> root.buildCheapestPlan(<span class="hljs-built_in">this</span>); <span class="hljs-keyword">return</span> cheapest; }</code></pre></div><p><code>ruleDriver.drive()</code> 是这段逻辑的核心,而 <code>buildCheapestPlan</code> 是将每个逻辑计划中最优也就是代价最低的查询逻辑计划选出来,继续往下分析 drive 方法。</p><p>执行优化匹配时,依次从 ruleQueue 中弹出一条优化规则,首先检查是否符合 <code>matches</code> 的判断条件(默认返回 true),满足条件时再调用优化规则的 <code>onmatch</code> 方法进行处理,<code>onmatch</code> 内部的逻辑涉及优化规则具体的优化步骤和规则对优化前后查询逻辑计划的转换。<code>canonize</code> 方法用于保证始终返回当前查询逻辑计划的根节点。至于计算 cost 并选出 best 查询节点 RelNode 的过程则是在 <code>VolcanoPlanner#setRoot</code> 中进行的,这里均不展开细讲。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-meta">@Override</span> <span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">drive</span><span class="hljs-params">()</span> { <span class="hljs-keyword">while</span> (<span class="hljs-literal">true</span>) { <span class="hljs-keyword">assert</span> planner.root != <span class="hljs-literal">null</span> : <span class="hljs-string">"RelSubset must not be null at this point"</span>; LOGGER.debug(<span class="hljs-string">"Best cost before rule match: {}"</span>, planner.root.bestCost); <span class="hljs-type">VolcanoRuleMatch</span> <span class="hljs-variable">match</span> <span class="hljs-operator">=</span> ruleQueue.popMatch(); <span class="hljs-keyword">if</span> (match == <span class="hljs-literal">null</span>) { <span class="hljs-keyword">break</span>; } <span class="hljs-keyword">assert</span> match.getRule().matches(match); <span class="hljs-keyword">try</span> { match.onMatch(); } <span class="hljs-keyword">catch</span> (VolcanoTimeoutException e) { LOGGER.warn(<span class="hljs-string">"Volcano planning times out, cancels the subsequent optimization."</span>); planner.canonize(); <span class="hljs-keyword">break</span>; } <span class="hljs-comment">// The root may have been merged with another </span> <span class="hljs-comment">// subset. Find the new root subset. 
</span> planner.canonize(); } }</code></pre></div><p>这时可能有人会疑问,为什么继承了抽象类 RelOptRule 实现自定义的优化规则,在没有重载 <code>matches</code> 方法的情况下,优化规则却没有匹配进入呢?这是个非常好的问题,和注册优化规则时的逻辑有关系,我们回过头关注一下其构造方法。重点关注变量 <code>RelOptRuleOperand</code>,在传参时甚至会校验该变量值不能为 null。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">protected</span> <span class="hljs-title function_">RelOptRule</span><span class="hljs-params">(RelOptRuleOperand operand, </span><span class="hljs-params"> RelBuilderFactory relBuilderFactory, <span class="hljs-meta">@Nullable</span> String description)</span> { <span class="hljs-built_in">this</span>.operand = Objects.requireNonNull(operand, <span class="hljs-string">"operand"</span>);}</code></pre></div><p>结合 <code>RelOptRuleOperand</code> 的 <code>matches</code> 方法和 Kylin 中一个具体的优化规则 <code>OlapAggProjectMergeRule</code> 来举例。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-type">boolean</span> <span class="hljs-title function_">matches</span><span class="hljs-params">(RelNode rel)</span> { <span class="hljs-keyword">if</span> (!clazz.isInstance(rel)) { <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>; } <span class="hljs-keyword">if</span> ((trait != <span class="hljs-literal">null</span>) && !rel.getTraitSet().contains(trait)) { <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>; } <span class="hljs-keyword">return</span> predicate.test(rel); }</code></pre></div><p>下面这段是 <code>OlapAggProjectMergeRule</code> 涉及到的方法,可以看到其构造方法传给父类时的 <code>RelOptRuleOperand</code> 包含了多个 RelNode 之间的关系,比如查询逻辑计划符合 <code>agg-project-join</code> 或是 <code>agg-project-filter-join</code> 这样的操作顺序,当触发优化规则执行时,不符合这一条件的优化规则首先就被过滤掉了。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">OlapAggProjectMergeRule</span> <span class="hljs-keyword">extends</span> <span class="hljs-title class_">RelOptRule</span> {<span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> <span class="hljs-type">OlapAggProjectMergeRule</span> <span class="hljs-variable">AGG_PROJECT_JOIN</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">OlapAggProjectMergeRule</span>( operand(OlapAggregateRel.class, operand(OlapProjectRel.class, operand(OlapJoinRel.class, any()))), RelFactories.LOGICAL_BUILDER, <span class="hljs-string">"OlapAggProjectMergeRule:agg-project-join"</span>); <span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">final</span> <span class="hljs-type">OlapAggProjectMergeRule</span> <span class="hljs-variable">AGG_PROJECT_FILTER_JOIN</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">OlapAggProjectMergeRule</span>( operand(OlapAggregateRel.class, operand(OlapProjectRel.class, operand(OlapFilterRel.class, operand(OlapJoinRel.class, any())))), RelFactories.LOGICAL_BUILDER, <span class="hljs-string">"OlapAggProjectMergeRule:agg-project-filter-join"</span>); <span class="hljs-keyword">public</span> <span class="hljs-title function_">OlapAggProjectMergeRule</span><span class="hljs-params">(RelOptRuleOperand operand, RelBuilderFactory relBuilderFactory, String description)</span> { <span 
class="hljs-built_in">super</span>(operand, relBuilderFactory, description); }}</code></pre></div><h3 id="calc"><a href="#calc" class="headerlink" title="calc"></a>calc</h3><p>这一过程比较特殊,属于可执行的优化规则(指 Convension 由 NONE -> BindableConvention),见 <code>RelOptRules#CALC_RULES</code> ,其顺序如下。执行时同样先检查是否符合优化规则匹配条件,再执行优化操作。</p><div class="hljs code-wrapper"><pre><code class="hljs leaf">HepPlanner<span class="hljs-punctuation">#</span><span class="hljs-keyword">findBestExp</span> -> HepPlanner<span class="hljs-punctuation">#</span><span class="hljs-keyword">executeProgram</span><span class="hljs-params">(<span class="hljs-variable">HepProgram</span>)</span> -> RuleInstance.State<span class="hljs-punctuation">#</span><span class="hljs-keyword">execute</span> -> HepPlanner<span class="hljs-punctuation">#</span><span class="hljs-keyword">applyRules</span> -> HepPlanner<span class="hljs-punctuation">#</span><span class="hljs-keyword">applyRule</span></code></pre></div><h2 id="RBO"><a href="#RBO" class="headerlink" title="RBO"></a>RBO</h2><p>经过一系列 CBO 阶段优化规则之后,来到了 RBO 阶段,直接看代码逻辑,见 <code>HepUtils.runRuleCollection</code>。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> List<RelNode> <span class="hljs-title function_">postOptimize</span><span class="hljs-params">(RelNode node)</span> { Collection<RelOptRule> postOptRules = <span class="hljs-keyword">new</span> <span class="hljs-title class_">LinkedHashSet</span><>(); <span class="hljs-comment">// It will definitely work if it were put here </span> postOptRules.add(SumConstantConvertRule.INSTANCE); <span class="hljs-keyword">if</span> (kylinConfig.isConvertSumExpressionEnabled()) { postOptRules.addAll(HepUtils.SumExprRules); } <span class="hljs-keyword">if</span> (kylinConfig.isConvertCountDistinctExpressionEnabled()) { postOptRules.addAll(HepUtils.CountDistinctExprRules); } <span class="hljs-keyword">if</span> (kylinConfig.isAggregatePushdownEnabled()) { postOptRules.addAll(HepUtils.AggPushDownRules); } <span class="hljs-keyword">if</span> (kylinConfig.isScalarSubqueryJoinEnabled()) { postOptRules.addAll(HepUtils.ScalarSubqueryJoinRules); } <span class="hljs-keyword">if</span> (kylinConfig.isOptimizedSumCastDoubleRuleEnabled()) { postOptRules.addAll(HepUtils.SumCastDoubleRules); } <span class="hljs-keyword">if</span> (kylinConfig.isQueryFilterReductionEnabled()) { postOptRules.addAll(HepUtils.FilterReductionRules); } postOptRules.add(OlapFilterJoinRule.FILTER_ON_JOIN); <span class="hljs-comment">// this rule should after sum-expression and count-distinct-expression </span> postOptRules.add(OlapProjectJoinTransposeRule.INSTANCE); <span class="hljs-type">RelNode</span> <span class="hljs-variable">transformed</span> <span class="hljs-operator">=</span> HepUtils.runRuleCollection(node, postOptRules, <span class="hljs-literal">false</span>); <span class="hljs-keyword">if</span> (transformed != node && allowAlternativeQueryPlan) { <span class="hljs-keyword">return</span> Lists.newArrayList(transformed, node); } <span class="hljs-keyword">else</span> { <span class="hljs-keyword">return</span> Lists.newArrayList(transformed); } }</code></pre></div><p>RBO 阶段执行优化的逻辑和 CBO 阶段类似,不同的是 RBO 只会按照固定添加的优化规则顺序匹配并依次执行,不会重复进入同样的优化规则(除非添加两次且都符合条件),同时这一阶段也不会计算 cost,这是两者最大的区别。</p><h1 id="Spark"><a href="#Spark" class="headerlink" title="Spark"></a>Spark</h1><p>模型匹配后的逻辑则是在 Spark 这一层做的,中间省略了模型匹配的过程。</p><h2 id="Details"><a href="#Details" class="headerlink" title="Details"></a>Details</h2><p>到这步,就需要将 Calcite 的查询逻辑计划转换为 Spark 
的查询逻辑计划,见 <code>CalciteToSparkPlaner#visit</code>。</p><ul><li><p>按照查询计划从下往上依次进行转换</p></li><li><p>转换时跳过 OlapJoinRel/OlapNonEquiJoinRel 且不是 runtime join 的情况,runtime join 指匹配不上索引需要现算</p></li><li><p><code>CalciteToSparkPlaner#convertTableScan</code> 方法在转换 OlapTableScan 和 OlapJoinRel 时都会用到,也就是说对于 join 这种场景,在前面的逻辑成功匹配模型索引后,真正执行时直接扫描两张表已经 join 之后的数据地址即可,无需真正扫描两张表再 join 计算,这些信息在构建模型索引时和数据地址一并存储在模型索引的元数据信息中</p></li><li><p>对于非 admin 用户,在转换完成返回 Spark DataSet 时会对数据做一些其他操作,比如数据脱敏</p></li><li><p>真正执行计算时会判断数据入口,这里会判断是否来自于 MDX 的计算,之前 Kylin 开源过一版和 MDX 的对接,见:<a href="https://kylin.apache.org/cn/docs/tutorial/quick_start_for_mdx.html">QuickStartForMDX</a> ,MDX 是一种类似 SQL 的查询语法,但其抽象程度比 SQL 更高,拥有类似 Hierarchy 这样的概念,用户多会通过 Excel 使用后台对接 MDX 进行查询,最早由微软开放出来而现在已经放弃了该项目,相对来说用的人很少,市面上资料比较少,门槛也高。之前 Kylin 商业版开发过 MDX on Spark 的项目,主要目的是使 MDX 能够拥有分布式计算的能力,我是该攻坚项目的核心研发之一,项目并未开源。在此基础上,有一种场景是客户使用 Excel 拖拉拽式查询,后台通过 MDX + Kylin -> Calcite/Spark 的方式匹配预计算结果返回,体验还是不错的。</p></li><li><p>查询返回有两种情况</p><ul><li>异步查询:提交异步任务计算,结果保存在 HDFS,一般通过单独的接口调用,mock 虚拟结果直接返回</li><li>即时查询:基于文件大小估算分区数量,记录执行任务的相关信息,在通过大查询校验后(通过扫描行数以及相关配置参数来判断是否拒绝此查询),真正执行计算获得结果</li></ul></li><li><p>查询引擎里的 Spark Driver 和 Kylin Server 在一个常驻进程里,内部将查询的 Spark 称为 Sparder 引擎以区分构建使用的 Spark 引擎</p></li></ul><p>至此, 查询流程模型匹配前的 Calcite 处理和模型匹配后的 Spark 处理部分介绍完毕,后续再补充模型匹配处理流程。</p>]]></content>
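<p>下面补充一段脱离 Kylin、可独立运行的 Calcite 最小示例,演示正文中 <code>convertSqlToRelNode</code> 对应的两步:SQL 先被解析为抽象语法树 SqlNode,再经校验转换为逻辑计划 RelRoot,最后用 <code>RelOptUtil.toString</code> 打印。注意这里使用的是 Calcite 自带的 Frameworks API 和 ReflectiveSchema,并非 Kylin 内部 QueryExec/ProjectSchemaFactory 的实际封装,schema 名称、表结构和数据均为假设的演示内容。</p><div class="hljs code-wrapper"><pre><code class="hljs java">import org.apache.calcite.adapter.java.ReflectiveSchema;
import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.rel.RelRoot;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.Planner;

// 最小化的 Calcite 示例:SQL -> SqlNode(AST) -> RelRoot(逻辑计划)
public class CalcitePlanSketch {

    // 用 Java 对象模拟一张表,ReflectiveSchema 会把 public 字段暴露成列
    public static class Sales {
        public final String month;
        public final double price;
        public Sales(String month, double price) { this.month = month; this.price = price; }
    }

    public static class Demo {
        public final Sales[] sales = {
                new Sales("2024-01", 10.0), new Sales("2024-02", 20.0) };
    }

    public static void main(String[] args) throws Exception {
        SchemaPlus root = Frameworks.createRootSchema(true);
        root.add("demo", new ReflectiveSchema(new Demo()));

        FrameworkConfig config = Frameworks.newConfigBuilder().defaultSchema(root).build();
        Planner planner = Frameworks.getPlanner(config);

        // SQL -> AST
        SqlNode ast = planner.parse(
                "select \"month\", sum(\"price\") from \"demo\".\"sales\" group by \"month\"");
        // 校验并转换为逻辑计划 RelRoot
        SqlNode validated = planner.validate(ast);
        RelRoot relRoot = planner.rel(validated);

        // 以人类可读的形式打印逻辑计划(对应正文 Printing Logical Plans 的用法)
        System.out.println(RelOptUtil.toString(relRoot.rel));
    }
}</code></pre></div>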
<categories>
<category>分布式系统</category>
<category>OLAP</category>
</categories>
<tags>
<tag>Kylin</tag>
</tags>
</entry>
<entry>
<title>How Apache Kylin Query Work(一)</title>
<link href="/2024/07/31/How-Apache-Kylin-Query-Work%EF%BC%88%E4%B8%80%EF%BC%89/"/>
<url>/2024/07/31/How-Apache-Kylin-Query-Work%EF%BC%88%E4%B8%80%EF%BC%89/</url>
<content type="html"><![CDATA[<h1 id="What-is-Query-Engine"><a href="#What-is-Query-Engine" class="headerlink" title="What is Query Engine"></a>What is Query Engine</h1><p>什么是查询引擎?有很多说法,普遍认知是一种可以对数据执行查询并生成答案的软件。<br>比如:今年公司每个月的平均销售额是多少?这个季度员工的平均薪资是多少?这些查询作用在用户构建的数据之上,执行并返回答案,最广泛的查询语言是结构化查询语言 SQL,下面是一条 SQL 查询</p><div class="hljs code-wrapper"><pre><code class="hljs SQL"><span class="hljs-keyword">SELECT</span> <span class="hljs-keyword">month</span>, <span class="hljs-built_in">AVG</span>(sales)<span class="hljs-keyword">FROM</span> product_sales<span class="hljs-keyword">WHERE</span> <span class="hljs-keyword">year</span> <span class="hljs-operator">=</span> <span class="hljs-number">2024</span><span class="hljs-keyword">GROUP</span> <span class="hljs-keyword">BY</span> <span class="hljs-keyword">month</span>;</code></pre></div><p>现在有很多流行的 SQL 查询引擎如 Hive、Impala、Presto、Spark SQL 等等,从本文开始将和大家一起讨论下 Apache Kylin 的查询是怎么工作的,这里摘录来自官网的描述:<br><strong>Apache Kylin™是一个开源的、分布式的分析型数据仓库,提供Hadoop/Spark 之上的 SQL 查询接口及多维分析(OLAP)能力以支持超大规模数据,最初由 eBay 开发并贡献至开源社区。它能在亚秒内查询巨大的表。</strong></p><h2 id="Concepts"><a href="#Concepts" class="headerlink" title="Concepts"></a>Concepts</h2><p>首先需要说明的是 <a href="https://kylin.apache.org/">Apache Kylin</a> 使用 <a href="https://calcite.apache.org/">Apache Calcite</a> 作为 SQL 入口,执行层使用的是 <a href="https://spark.apache.org/">Apache Spark</a>,文件存储格式使用的是 <a href="https://parquet.apache.org/">Apache Parquet</a>,Kylin 的核心理念是预计算,也就是用空间换时间。<br>传统 RDBMS 使用行式存储,其着重点在于 ACID 事务,文件存储格式通常使用的是行式存储,无法很好地支撑高并发的大数据场景,本文讨论的 OLAP 领域着重点在于查询分析,使用列式存储作为文件存储格式更友好。<br>用户在从 Hive 加载数据源表,创建模型对应表与表之间的连接关系,同时定义好需要预计算的维度列、度量和可计算列,生成对应的聚合索引和明细索引。在执行构建索引的步骤之后,即可通过预计算生成的索引结果回答查询。</p><blockquote><p>补充说明:Kylin 支持的数据模型为星型模型和雪花模型,不支持多张事实表的星座模型。</p></blockquote><h2 id="Type-System"><a href="#Type-System" class="headerlink" title="Type System"></a>Type System</h2><h3 id="Schema"><a href="#Schema" class="headerlink" title="Schema"></a>Schema</h3><p>通常类型系统一般称为 Schema,为数据源或查询结果提供元数据,由不同的字段和数据类型组成,还包含一些另外的信息如:是否允许 null 值、字段的存储格式等。<br>Kylin 在继承 Calcite 抽象类 CalciteSchema 的基础上实现了自定义 Schema —— OlapSchema,同时会注入 Kylin 自己实现的 UDF 函数。<br>其中有两个比较重要的信息</p><ul><li>TableDescs:加载至 Kylin 的表元数据,通常以 json 文件的形式存储在分布式存储系统 HDFS 上<ul><li>ColumnDesc:来自于数据源的列元数据,包含列名、数据类型、<strong>可计算列</strong>等信息</li></ul></li><li>NDataModel:模型元数据,包含一个事实表与多个维表以及用户定义的需要预计算的维度列、度量、衍生维度列等信息</li></ul><blockquote><ul><li>可计算列在 Kylin 中又称为 CC 列(即 Computed Column),如:<code>TEST_KYLIN_FACT.PRICE * TEST_KYLIN_FACT.ITEM_COUNT</code> 的结果可以直接定义为一种特殊的可计算列。</li><li>衍生维度列:只要事实表对应外键被加入聚合索引并构建,且该维度表有 Snapshot,那么该列被称为衍生维度列,即便没有定义成预计算列也能通过索引进行回答。</li></ul></blockquote><h3 id="Type"><a href="#Type" class="headerlink" title="Type"></a>Type</h3><p>Kylin 在继承 Calcite 抽象类 RelDataTypeSystemImpl 的基础上实现了自定义数据类型 —— KylinRelDataTypeSystem。<br>主要针对一些计算如 SUM 算子和 Decimal 的乘除法等做了类型调整和适配,因为 Kylin 需要兼容 Calcite 和 Spark 两者的类型系统,同时还有查询优化时对数据类型的微调等。</p><h2 id="Data-Sources"><a href="#Data-Sources" class="headerlink" title="Data Sources"></a>Data Sources</h2><p>数据源模块非常重要,如果没有可读取的数据源,查询引擎将毫无用处,通常情况每个查询引擎都会有一个用来与数据源交互的接口以支持多个数据源。<br>Kylin 中的数据源接口是 <code>org.apache.kylin.source.ISource</code>,子类实现已支持的有 CsvSource、JdbcSource、NSparkDataSource 和 NSparkKafkaSource。<br>可以参考 CsvSource 简单理解加载过程,其中使用最多的是 NSparkDataSource,即来自于 Hive 的数据源,用户在加载数据源时 Kylin 会将表类型保存在表的元数据中,在后面的加载中不需要用户关心使用哪种 DataSource,而是由 Kylin 的 DataSource 基于表的元数据信息自适应子类实现加载表,这也正是数据源模块接口抽象出来的作用。</p><h2 id="Logical-Plan"><a href="#Logical-Plan" class="headerlink" title="Logical Plan"></a>Logical 
Plan</h2><p>逻辑计划是数据查询的结构化表示形式,描述了从数据库或数据源检索数据所需的操作和转换,抽象出特定的实现细节,并专注于查询的逻辑,如 filter、sort 和 join 等操作。每个逻辑计划都可以有 0 个或多个逻辑计划作为输入,逻辑计划可以暴露其子计划,以便使用 visitor 模式遍历。<br>Kylin 引入 Calcite 作为模型匹配前的查询引擎,同时引入 Spark 作为模型匹配后的查询引擎,因此包含这两部分查询逻辑计划。<br>Calcite 主要在解析查询和优化阶段使用,而 Spark 则是真正的执行层,这两者查询逻辑计划需要相互转换,后面有时间再展开讲,这里仅作提及。</p><blockquote><p>部分常量计算执行引擎是交由 Calcite 执行的,因为一些简单的计算时间远小于 Spark 框架调用执行的时间。</p></blockquote><h3 id="Printing-Logical-Plans"><a href="#Printing-Logical-Plans" class="headerlink" title="Printing Logical Plans"></a>Printing Logical Plans</h3><p>以人类可读的形式打印逻辑计划对调试非常重要,这里贴一下 Kylin 输出逻辑计划的方式。</p><ul><li>Calcite <div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-type">RelNode</span> <span class="hljs-variable">root</span> <span class="hljs-operator">=</span> xxx;RelOptUtil.toString(root);</code></pre></div></li><li>Spark <div class="hljs code-wrapper"><pre><code class="hljs scala"><span class="hljs-type">Dataset</span><<span class="hljs-type">Row</span>> sparkPlan = xxx;sparkPlan.queryExecution().logical()</code></pre></div></li></ul><h3 id="Serialization"><a href="#Serialization" class="headerlink" title="Serialization"></a>Serialization</h3><p>通过序列化查询计划可以将其转移到另一个进程,通常有两种方法,Kylin 因为直接使用了 Calcite 和 Spark 的缘故,无需关注查询逻辑计划的序列化部分,不过在元数据模块中使用到 Jackson 的序列化。</p><ul><li>使用实现语言的默认机制对数据进行转化,如 Java 的 Jackson 库,Kotli 的 kotlinx.serialization 库,Rust 的 serde crate 等</li><li>使用与语言无关的序列化格式,然后编写代码在此格式和实现语言的格式之间进行转换,如 Avro、Thrift 和 Protocol Buffers 等</li></ul><h3 id="Logical-Expressions"><a href="#Logical-Expressions" class="headerlink" title="Logical Expressions"></a>Logical Expressions</h3><p>查询计划的一个基本概念是逻辑表达式,可以在运行时根据数据进行计算,如:Column Expressions、Literal Expressions、Binary Expressions、Comparison Expressions、Math Expressions、Aggregate Expressions 等,参考下表给出的例子。</p><table><thead><tr><th align="center">Expression</th><th align="center">Examples</th></tr></thead><tbody><tr><td align="center">Literal Value</td><td align="center">“hello”, 12.34</td></tr><tr><td align="center">Column Reference</td><td align="center">user_id, first_name, last_name</td></tr><tr><td align="center">Math Expression</td><td align="center">salary * state_tax</td></tr><tr><td align="center">Comparison Expression</td><td align="center">x ≥ y</td></tr><tr><td align="center">Boolean Expression</td><td align="center">birthday = today() AND age ≥ 21</td></tr><tr><td align="center">Aggregate Expression</td><td align="center">MIN(salary), MAX(salary), SUM(salary), AVG(salary), COUNT(*)</td></tr><tr><td align="center">Scalar Function</td><td align="center">CONCAT(first_name, “ “, last_name)</td></tr><tr><td align="center">Aliased Expression</td><td align="center">salary * 0.02 AS pay_increase</td></tr></tbody></table><h3 id="Logical-Plans"><a href="#Logical-Plans" class="headerlink" title="Logical Plans"></a>Logical Plans</h3><p>有了逻辑表达式,接下来就是对查询引擎支持的各种转换实现逻辑计划,如:Scan、Projection、Selection(Filter)、Aggregate 等。</p><ul><li>Scan:从可选 Projection 的 数据源中提取数据,Scan 是查询逻辑计划中唯一没有另一个逻辑计划作为输入的逻辑计划,它是查询树中的叶子节点。</li><li>Projection:作用在输入的逻辑计划之上,如:<code>SELECT a、b、c FROM foo</code> 这里的 a、b、c 列即为 Projection。</li><li>Selection(Filter):应用在输入的逻辑计划之上,筛选结果中包含的行,如:<code>SELECT * FROM foo WHERE a > 5</code>,这里的 a > 5 即为 Selection,也称为 Filter。</li><li>Aggregate:计算基础数据的聚合结果,最小值、最大值、平均值和总和等。如:<code>SELECT job,AVG(salary) FROM EMPLOYEE GROUP BY job</code>,这里 AVG(salary) 就是聚合计算的算子。</li></ul><h2 id="DataFrames"><a href="#DataFrames" class="headerlink" title="DataFrames"></a>DataFrames</h2><p>已经有了查询逻辑计划为什么还需要 DataFrames 
呢?参考下面的例子,每个逻辑表达式都很清晰,但是整块代码比较分散,无法统一起来</p><div class="hljs code-wrapper"><pre><code class="hljs kotlin"><span class="hljs-comment">// create a plan to represent the data source</span><span class="hljs-keyword">val</span> csv = CsvDataSource(<span class="hljs-string">"employee.csv"</span>)<span class="hljs-comment">// create a plan to represent the scan of the data source (FROM)</span><span class="hljs-keyword">val</span> scan = Scan(<span class="hljs-string">"employee"</span>, csv, listOf())<span class="hljs-comment">// create a plan to represent the selection (WHERE)</span><span class="hljs-keyword">val</span> filterExpr = Eq(Column(<span class="hljs-string">"state"</span>), LiteralString(<span class="hljs-string">"CO"</span>))<span class="hljs-keyword">val</span> selection = Selection(scan, filterExpr)<span class="hljs-comment">// create a plan to represent the projection (SELECT)</span><span class="hljs-keyword">val</span> projectionList = listOf(Column(<span class="hljs-string">"id"</span>), Column(<span class="hljs-string">"first_name"</span>), Column(<span class="hljs-string">"last_name"</span>), Column(<span class="hljs-string">"state"</span>), Column(<span class="hljs-string">"salary"</span>))<span class="hljs-keyword">val</span> plan = Projection(selection, projectionList)<span class="hljs-comment">// print the plan</span>println(format(plan))</code></pre></div><p>打印的逻辑计划如下</p><div class="hljs code-wrapper"><pre><code class="hljs kotlin">Projection: #id, #first_name, #last_name, #state, #salary Filter: #state = <span class="hljs-string">'CO'</span> Scan: employee; projection=None</code></pre></div><p>如果有 DataFrame 做一层抽象,那么就可以写出像下面这样的代码,非常简洁,参照 Spark 的 DataFrame 做类比。</p><div class="hljs code-wrapper"><pre><code class="hljs kotlin"><span class="hljs-keyword">val</span> df = ctx.csv(employeeCsv) .filter(col(<span class="hljs-string">"state"</span>) eq lit(<span class="hljs-string">"CO"</span>)) .select(listOf( col(<span class="hljs-string">"id"</span>), col(<span class="hljs-string">"first_name"</span>), col(<span class="hljs-string">"last_name"</span>), col(<span class="hljs-string">"salary"</span>), (col(<span class="hljs-string">"salary"</span>) mult lit(<span class="hljs-number">0.1</span>)) alias <span class="hljs-string">"bonus"</span>)) .filter(col(<span class="hljs-string">"bonus"</span>) gt lit(<span class="hljs-number">1000</span>))</code></pre></div><h2 id="Physical-Plans"><a href="#Physical-Plans" class="headerlink" title="Physical Plans"></a>Physical Plans</h2><p>通常情况下查询会分为逻辑计划和物理计划,合在一起降低复杂性也是可以的,但出于其他考量会将两者分开。<br>逻辑计划主要负责关系的逻辑表达和优化,而物理计划则是在逻辑计划的基础上根据数据的实际分布情况进一步优化制定执行计划,确保查询效率最大化。<br>这里以 Column Expressions 来举例,在 Logical Plans 中,Column 表示对命名列的引用,这个“列”可以是由输入的逻辑计划生成的列,也可以表示数据源中的列,或者针对其他输入表达式计算的结果,而在 Physical Plans 中, Column 为了避免每次计算表达式时都要查找名称的成本,可能会改为按索引引用列,直接对应了数据实际存储的序号引用。</p><h2 id="Query-Planning"><a href="#Query-Planning" class="headerlink" title="Query Planning"></a>Query Planning</h2><p>在定义了逻辑计划和物理计划之后,还需要有一个可以将逻辑计划转换为物理计划的查询计划器,某种程度上还可以通过配置自适应选择不同的转换方式执行查询。<br>同样以 Column Expressions 为例,逻辑表达式按名称引用列,但物理表达式使用列索引来提高性能,那么就需要一个从列名到列序的转换,并在无效时报错,简单列出一些代码方便理解。</p><div class="hljs code-wrapper"><pre><code class="hljs kotlin"><span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">createPhysicalExpr</span><span class="hljs-params">(expr: <span class="hljs-type">LogicalExpr</span>,</span></span><span class="hljs-params"><span class="hljs-function"> input: <span class="hljs-type">LogicalPlan</span>)</span></span>: PhysicalExpr = <span 
class="hljs-keyword">when</span> (expr) { <span class="hljs-keyword">is</span> ColumnIndex -> ColumnExpression(expr.i) <span class="hljs-keyword">is</span> LiteralString -> LiteralStringExpression(expr.str) <span class="hljs-keyword">is</span> BinaryExpr -> { <span class="hljs-keyword">val</span> l = createPhysicalExpr(expr.left, input) <span class="hljs-keyword">val</span> r = createPhysicalExpr(expr.right, input) ... } ...}<span class="hljs-keyword">is</span> Column -> { <span class="hljs-keyword">val</span> i = input.schema().fields.indexOfFirst { it.name == expr.name } <span class="hljs-keyword">if</span> (i == -<span class="hljs-number">1</span>) { <span class="hljs-keyword">throw</span> SQLException(<span class="hljs-string">"No column named '<span class="hljs-subst">${expr.name}</span>'"</span>) } ColumnExpression(i)</code></pre></div><h2 id="Query-Optimizers"><a href="#Query-Optimizers" class="headerlink" title="Query Optimizers"></a>Query Optimizers</h2><p>Kylin 使用了 Calcite 中的 VolcanoPlanner 和 HepPlanner 来进行查询优化,分别对应 CBO 和 RBO,这里不展开细讲,仅列举业界一些常用的优化方式来说明。</p><h3 id="Rule-Based-Optimizations"><a href="#Rule-Based-Optimizations" class="headerlink" title="Rule-Based-Optimizations"></a>Rule-Based-Optimizations</h3><p>基于规则的优化,按照一系列规则遍历并优化逻辑计划,将其转换为同等的 SQL 执行计划的优化规则。</p><h4 id="Projection-Push-Down"><a href="#Projection-Push-Down" class="headerlink" title="Projection Push-Down"></a>Projection Push-Down</h4><p>投影下推:尽可能早地在读取数据时筛选出列,以减少内存中所需处理的数据量。<br>优化前</p><div class="hljs code-wrapper"><pre><code class="hljs kotlin">Projection: #id, #first_name, #last_name Filter: #state = <span class="hljs-string">'CO'</span> Scan: employee; projection=None</code></pre></div><p>优化后</p><div class="hljs code-wrapper"><pre><code class="hljs kotlin">Projection: #id, #first_name, #last_name Filter: #state = <span class="hljs-string">'CO'</span> Scan: employee; projection=[first_name, id, last_name, state]</code></pre></div><p>同样是查询 id、first_name、last_name 这三列,优化前是读取整张 employee 表再做过滤处理,优化后则是仅读取 employee 表中的 id、first_name、last_name 这三列数据,在大数据量量下两者可能存在指数级的差距,毕竟在很多 OLAP 场景中都使用的列式文件存储格式。</p><h4 id="Predicate-Push-Down"><a href="#Predicate-Push-Down" class="headerlink" title="Predicate Push-Down"></a>Predicate Push-Down</h4><p>谓词下推:尽早在查询中过滤行,以避免冗余处理。<br>优化前</p><div class="hljs code-wrapper"><pre><code class="hljs kotlin">Projection: #dept_name, #first_name, #last_name Filter: #state = <span class="hljs-string">'CO'</span> Join: #employee.dept_id = #dept.id Scan: employee; projection=[first_name, id, last_name, state] Scan: dept; projection=[id, dept_name]</code></pre></div><p>优化后</p><div class="hljs code-wrapper"><pre><code class="hljs kotlin">Projection: #dept_name, #first_name, #last_name Join: #employee.dept_id = #dept.id Filter: #state = <span class="hljs-string">'CO'</span> Scan: employee; projection=[first_name, id, last_name, state] Scan: dept; projection=[id, dept_name]</code></pre></div><p>在先对 employee 表做了 state = ‘CO’ 的过滤条件处理后,再将 employee 表和 dept 表 join 起来毫无疑问是减少很多开销的。</p><h4 id="Eliminate-Common-Subexpression"><a href="#Eliminate-Common-Subexpression" class="headerlink" title="Eliminate Common Subexpression"></a>Eliminate Common Subexpression</h4><p>消除子表达式:重用子表达式,而不是重复执行多次计算。<br>优化前</p><div class="hljs code-wrapper"><pre><code class="hljs kotlin">Projection: sum(#price * #qty), sum(#price * #qty * #tax) Scan: sales</code></pre></div><p>优化后</p><div class="hljs code-wrapper"><pre><code class="hljs kotlin">Projection: sum(#_price_mult_qty), sum(#_price_mult_qty * #tax) Projection: #price * #qty <span 
class="hljs-keyword">as</span> _price_mult_qty Scan: sales</code></pre></div><h3 id="Cost-Based-Optimizations"><a href="#Cost-Based-Optimizations" class="headerlink" title="Cost-Based-Optimizations"></a>Cost-Based-Optimizations</h3><p>基于成本的优化,使用底层数据的统计信息来确定执行查询所需的成本,然后通过寻找低成本的执行计划选择最佳执行计划的优化规则。<br>这些统计信息通常包括列的空值情况、非重复值情况、最大最小值等信息,比如某一列可以直接通过最大最小值统计信息直接过滤掉部分数据文件的真正读取(这些信息是通过读取数据文件的元数据信息得到的),那么可以考虑将该列的执行时间往前放。</p><h2 id="Query-Execution"><a href="#Query-Execution" class="headerlink" title="Query Execution"></a>Query Execution</h2><h3 id="SQL-查询执行流程"><a href="#SQL-查询执行流程" class="headerlink" title="SQL 查询执行流程"></a>SQL 查询执行流程</h3><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/202401241627448.png"></p><ul><li>用户 SQL 进入查询引擎后,经由 Parser 转换为一棵抽象语法树 AST</li><li>接着通过绑定 Schema 元数据信息的校验阶段,此时这棵树会从 AST 转换为一个查询逻辑计划</li><li>Optimize 阶段会应用 RBO/CBO 优化手段对逻辑计划进行优化</li><li>优化后的逻辑计划会再转换为物理计划分发到各个节点进行计算,并将结果汇报给主节点进行汇总计算得到最终查询结果</li></ul><h3 id="Kylin-查询执行流程"><a href="#Kylin-查询执行流程" class="headerlink" title="Kylin 查询执行流程"></a>Kylin 查询执行流程</h3><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/QueryProcedure.png"><br>和传统查询引擎不同的点在于 Kylin 依赖于预计算,这也是其核心理念和功能:即用户在创建模型和索引之后执行构建任务先生成对应的索引结果,再进行查询的过程。这也是为什么在查询执行流程中间出现 Model Match 这一过程,可以先简单粗暴地把模型理解为物化视图,但两者区别挺大的,Kylin 的模型索引相较物化视图更灵活。</p>]]></content>
<categories>
<category>分布式系统</category>
<category>OLAP</category>
</categories>
<tags>
<tag>Kylin</tag>
</tags>
</entry>
<entry>
<title>Apache Kylin 构建(一)</title>
<link href="/2024/06/05/Apache%20Kylin%20%E6%9E%84%E5%BB%BA%EF%BC%88%E4%B8%80%EF%BC%89/"/>
<url>/2024/06/05/Apache%20Kylin%20%E6%9E%84%E5%BB%BA%EF%BC%88%E4%B8%80%EF%BC%89/</url>
<content type="html"><![CDATA[<h1 id="FYI"><a href="#FYI" class="headerlink" title="FYI"></a>FYI</h1><ul><li>repo:<a href="https://github.com/apache/kylin">https://github.com/apache/kylin</a></li><li>branch:kylin5</li><li>commitMessage:KYLIN-5846 upgrade spark version to 3.2.0-kylin-4.6.16.0</li><li>commitID:3f9b9c83bedbce17be0dcac5af427c636353621a</li></ul><h1 id="任务调度流程"><a href="#任务调度流程" class="headerlink" title="任务调度流程"></a>任务调度流程</h1><p>当 Kylin 服务启动准备充分时,将初始化 EpochOrchestrator 并注册 ReloadMetadataListener,准备工作还包含其他定时任务,如打印堆栈信息、检查 HA 进程状态、移除过期任务等等。</p><p>初始化 EpochOrchestrator 的过程只会在非 query 节点进行(包括 all 节点和 job 节点,其中 all 节点既可构建也可查询),继而通过定时线程运行 EpochChecker 和 EpochRenewer,间隔时间可配置默认为 30s。</p><ul><li><strong>EpochRenewer</strong> 负责选出 Epoch Owner 即元数据更新主节点,Kylin 支持 HA 功能,从节点只有读取权限,这里不向下深挖元数据相关逻辑。</li><li><strong>EpochChecker</strong> 会按照 project 依次更新 epoch,接着发出异步事件 ProjectControlledNotifier,当 <code>EpochChangedListener#onProjectControlled</code> 监听到通知后在 project 层面通过 <code>NDefaultScheduler#init</code> 创建定时调度线程池,用于调度执行 JobCheckRunner 和 FetcherRunner。<ul><li><strong>JobCheckRunner</strong><ul><li>两个作用:一是检测到超时任务时将状态标记为失败并丢弃,二是当任务运行超过容量限制时停止任务。</li></ul></li><li><strong>FetcherRunner</strong><ul><li>主要作用是调度任务,同时也会记录不同状态的任务数量,当任务执行完成时还会执行一些清理操作。</li></ul></li></ul></li></ul><div class="note note-info"> <p>在 kylin5 分支中调度流程开始和结束时以及子任务调度执行起始都会有相应日志输出,不同日志会基于分类归类到不同的日志文件,如 kylin.schedule.log、kylin.query.log、kylin.build.log 和 kylin.metadata.log 等等,可参考类 <code>KylinLogTool</code> 查看更多日志类型。此外 Kylin 支持通过诊断包的形式定位排查问题,同时还有火焰图功能用于分析性能问题。(ps. 这些功能是我做的,不用担心我乱说,后面可能会另开单章讲火焰图功能,属实是性能分析利器)</p> </div><p>当调度至 <code>AbstractExecutable#execute</code> 意味着进入到下一个任务创建阶段,在执行遇到异常时会进行重试,重试会默认等待 30s 以防止同一时刻任务提交过多。该方法有前置方法 onExecuteStart 和后置方法 onExecuteFinished,前置任务是为了更新任务状态,而后置任务除了改变任务状态外还支持以邮件方式通知使用者任务执行状态(商业版功能)。<strong>以调度至创建任务阶段为例</strong>,下图为任务调度流程</p><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/JobScheduler-1737345754952.png"></p><h1 id="任务创建流程"><a href="#任务创建流程" class="headerlink" title="任务创建流程"></a>任务创建流程</h1><p>Kylin 使用的是 SpringBoot 作为内部服务框架,采用的也是类似 MVC 架构的方式接收用户请求,这里<strong>以全量构建 Segment 为例</strong>进行说明。当用户在 UI 界面上点击全量构建 Segment 时,会按照图序逐步调用至 <code>AbstractJobHandler#doHandle</code> ,该方法中有两块重要逻辑</p><ul><li><code>AbstractJobHandler#createJob</code>(举例情况为 <code>AddSegmentHandler#createJob</code>)</li><li><code>NExecutableManager#addJob</code></li></ul><p>createJob 用于准备构建 cube 时需要的上下文参数,包含 3 个子步骤</p><ul><li>JobStepType.RESOURCE_DETECT</li><li>JobStepType.CUBING</li><li>JobStepType.UPDATE_METADATA</li></ul><p>在 JobStepType.CUBING 这一步,会通过指定 className 的方式给后续执行构建 Segment 的 Spark Application 设置主类,该参数通过 <code>KylinConfigBase#getSparkBuildClassName</code> 进行配置,默认是 <code>org.apache.kylin.engine.spark.job.SegmentBuildJob</code> 。</p><p>而 addJob 会在任务准备完成时发出 2 个事件</p><ul><li><p><strong>JobReadyNotifier</strong></p><p> 当 JobSchedulerListener 监听到 JobReadyNotifier 事件后会直接调用 FetcherRunner 调度任务。</p></li><li><p><strong>JobAddedNotifier</strong></p><p> 当 JobSchedulerListener 监听到 JobAddedNotifier 事件后会记录一些任务的指标 metric 信息,输出到日志或者监控系统中。</p></li></ul><p>接上面任务调度流程往下讲,<code>AbstractExecutable#execute</code> 调用的 doWork 方法默认实现是 DefaultExecutable 类,调用模式为 CHAIN 即串联执行(还有一种是 DAG 模式,该模式为商业版功能,主要是支持分层存储功能对接 ClickHouse 的索引)。executeStep 方法会依据上下文存储的步骤信息依次执行。<br><code>NSparkExecutable#runSparkSubmit</code> 需要关注两部分</p><ul><li><p><strong>generateSparkCmd</strong></p><ul><li>generateSparkCmd 为运行 Spark Application 做了很多准备,包括:设置 HADOOP_CONF_DIR,指定主类为 SparkEntry,准备 sparkJars、sparkFiles 和 sparkConf,准备一会任务运行时的 jar 主类(如前文举例的 
<code>org.apache.kylin.engine.spark.job.SegmentBuildJob</code>)等等。</li></ul></li><li><p><strong>runSparkSubmit</strong></p><ul><li>ps. Kylin 异步查询也复用了该逻辑执行查询任务。</li></ul></li></ul><p>其实在 runSparkSubmit 中执行提交任务时会有个区分,即在本地运行提交 Spark submit 还是通过远程在 ClickHouse 上执行相关任务(商业版功能),这里只对本地提交的方式加以说明。提交任务后,省略 Spark RPC 通信逻辑,就进入到 <code>SegmentBuildJob#main</code> 方法,真正意义上完成了任务的创建流程,下图为任务创建流程</p><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/JobCreate-1737345728357.png"></p>]]></content>
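<p>正文提到项目级定时调度线程池会按固定间隔(默认 30s,可配置)反复调度 JobCheckRunner 和 FetcherRunner,下面用 JDK 自带的 ScheduledExecutorService 给出一个最小示意,帮助理解这种调度模式。它并非 <code>NDefaultScheduler</code> 的真实实现,任务体中的打印内容仅用于说明两个 Runner 各自的职责。</p><div class="hljs code-wrapper"><pre><code class="hljs java">import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// 示意:项目级线程池按固定间隔反复调度两个 Runner
public class ProjectSchedulerSketch {

    public static void main(String[] args) {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);
        long intervalSeconds = 30L; // 对应可配置的调度间隔,默认 30s

        Runnable fetcherRunner = () ->
                System.out.println("FetcherRunner: 调度可执行任务并统计各状态任务数量");
        Runnable jobCheckRunner = () ->
                System.out.println("JobCheckRunner: 检查超时任务并在超过容量限制时停止任务");

        // 上一轮执行结束后再等待固定间隔,避免任务堆积
        pool.scheduleWithFixedDelay(fetcherRunner, 0, intervalSeconds, TimeUnit.SECONDS);
        pool.scheduleWithFixedDelay(jobCheckRunner, 0, intervalSeconds, TimeUnit.SECONDS);
    }
}</code></pre></div>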
<categories>
<category>分布式系统</category>
<category>OLAP</category>
</categories>
<tags>
<tag>Kylin</tag>
</tags>
</entry>
<entry>
<title>Solr BlockCache</title>
<link href="/2021/04/19/Solr-BlockCache/"/>
<url>/2021/04/19/Solr-BlockCache/</url>
<content type="html"><![CDATA[<h1 id="概述"><a href="#概述" class="headerlink" title="概述"></a>概述</h1><p>Solr 中为了加速索引在 HDFS 上的读写,增加了缓存,相关代码均位于 org.apache.solr.store.blockcache 包中。</p><h1 id="源码分析"><a href="#源码分析" class="headerlink" title="源码分析"></a>源码分析</h1><p>本篇源码基于 lucene-solr-8.5.2。</p><h2 id="初始化"><a href="#初始化" class="headerlink" title="初始化"></a>初始化</h2><p>初始化的过程位于 HdfsDirectoryFactory 的 create 方法中,启用 BlockCache 需要配置对应参数,可参考 <a href="https://solr.apache.org/guide/7_2/running-solr-on-hdfs.html">Running Solr on HDFS</a>,其中 BlockCache 可配置为全局的 BlockCache,也可以在每个 SolrCore 中创建单独的 BlockCache。NRTCachingDirectory 也是用于加速索引读取的,其内部使用的是 RAMDirectory(内存中的 Directory 实现),本文不予展开分析。</p><p>初始化的过程主要包含三个部分:</p><ul><li>BlockCache</li><li>BlockDirectoryCache</li><li>BlockDirectory</li></ul><div class="note note-warning"> <p>这里补充一下概念:默认地,每个 BlockCache 拥有 1 个 bank,这个 bank 下会有 16384 个 block,每个 block 是 (8192 / 1024) = 8K,像这样被称为一个 slab,其大小为 (16384 * 8192) / 1024 / 1024 = 128M。</p> </div><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">protected</span> Directory <span class="hljs-title function_">create</span><span class="hljs-params">(String path, LockFactory lockFactory, DirContext dirContext)</span> <span class="hljs-keyword">throws</span> IOException { <span class="hljs-keyword">assert</span> params != <span class="hljs-literal">null</span> : <span class="hljs-string">"init must be called before create"</span>; log.info(<span class="hljs-string">"creating directory factory for path {}"</span>, path); <span class="hljs-type">Configuration</span> <span class="hljs-variable">conf</span> <span class="hljs-operator">=</span> getConf(<span class="hljs-keyword">new</span> <span class="hljs-title class_">Path</span>(path)); <span class="hljs-comment">// metrics 是通过静态内部类 MetricsHolder 的单例模式构造的对象,是全局唯一的</span> <span class="hljs-keyword">if</span> (metrics == <span class="hljs-literal">null</span>) { metrics = MetricsHolder.metrics; } <span class="hljs-comment">// 启用 BlockCache</span> <span class="hljs-type">boolean</span> <span class="hljs-variable">blockCacheEnabled</span> <span class="hljs-operator">=</span> getConfig(BLOCKCACHE_ENABLED, <span class="hljs-literal">true</span>); <span class="hljs-comment">// 如果启用,对于每个节点上的集合都会使用一个 HDFS BlockCache</span> <span class="hljs-comment">// 如果禁用,每个 SolrCore 都会创建自己私有的 HDFS BlockCache</span> <span class="hljs-type">boolean</span> <span class="hljs-variable">blockCacheGlobal</span> <span class="hljs-operator">=</span> getConfig(BLOCKCACHE_GLOBAL, <span class="hljs-literal">true</span>); <span class="hljs-comment">// 启用读 BlockCache</span> <span class="hljs-type">boolean</span> <span class="hljs-variable">blockCacheReadEnabled</span> <span class="hljs-operator">=</span> getConfig(BLOCKCACHE_READ_ENABLED, <span class="hljs-literal">true</span>); <span class="hljs-keyword">final</span> HdfsDirectory hdfsDir; <span class="hljs-keyword">final</span> Directory dir; <span class="hljs-comment">// 判断是否启用 BlockCache</span> <span class="hljs-keyword">if</span> (blockCacheEnabled && dirContext != DirContext.META_DATA) { <span class="hljs-comment">// 每个缓存片的块数</span> <span class="hljs-type">int</span> <span class="hljs-variable">numberOfBlocksPerBank</span> <span class="hljs-operator">=</span> getConfig(NUMBEROFBLOCKSPERBANK, <span class="hljs-number">16384</span>); <span class="hljs-comment">// 缓存大小,默认值为 8192 即 8K</span> <span class="hljs-type">int</span> <span class="hljs-variable">blockSize</span> <span class="hljs-operator">=</span> BlockDirectory.BLOCK_SIZE; 
<span class="hljs-comment">// 每个 BlockCache 的切片数</span> <span class="hljs-type">int</span> <span class="hljs-variable">bankCount</span> <span class="hljs-operator">=</span> getConfig(BLOCKCACHE_SLAB_COUNT, <span class="hljs-number">1</span>); <span class="hljs-comment">// 启用直接内存分配(堆外内存),如果为 false 则使用堆内存</span> <span class="hljs-type">boolean</span> <span class="hljs-variable">directAllocation</span> <span class="hljs-operator">=</span> getConfig(BLOCKCACHE_DIRECT_MEMORY_ALLOCATION, <span class="hljs-literal">true</span>); <span class="hljs-comment">// 每个切片的大小</span> <span class="hljs-type">int</span> <span class="hljs-variable">slabSize</span> <span class="hljs-operator">=</span> numberOfBlocksPerBank * blockSize; log.info( <span class="hljs-string">"Number of slabs of block cache [{}] with direct memory allocation set to [{}]"</span>, bankCount, directAllocation); log.info( <span class="hljs-string">"Block cache target memory usage, slab size of [{}] will allocate [{}] slabs and use ~[{}] bytes"</span>, <span class="hljs-keyword">new</span> <span class="hljs-title class_">Object</span>[] {slabSize, bankCount, ((<span class="hljs-type">long</span>) bankCount * (<span class="hljs-type">long</span>) slabSize)}); <span class="hljs-type">int</span> <span class="hljs-variable">bsBufferSize</span> <span class="hljs-operator">=</span> params.getInt(<span class="hljs-string">"solr.hdfs.blockcache.bufferstore.buffersize"</span>, blockSize); <span class="hljs-type">int</span> <span class="hljs-variable">bsBufferCount</span> <span class="hljs-operator">=</span> params.getInt(<span class="hljs-string">"solr.hdfs.blockcache.bufferstore.buffercount"</span>, <span class="hljs-number">0</span>); <span class="hljs-comment">// this is actually total size</span> <span class="hljs-type">BlockCache</span> <span class="hljs-variable">blockCache</span> <span class="hljs-operator">=</span> getBlockDirectoryCache(numberOfBlocksPerBank, blockSize, bankCount, directAllocation, slabSize, bsBufferSize, bsBufferCount, blockCacheGlobal); <span class="hljs-type">Cache</span> <span class="hljs-variable">cache</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">BlockDirectoryCache</span>(blockCache, path, metrics, blockCacheGlobal); <span class="hljs-type">int</span> <span class="hljs-variable">readBufferSize</span> <span class="hljs-operator">=</span> params.getInt(<span class="hljs-string">"solr.hdfs.blockcache.read.buffersize"</span>, blockSize); hdfsDir = <span class="hljs-keyword">new</span> <span class="hljs-title class_">HdfsDirectory</span>(<span class="hljs-keyword">new</span> <span class="hljs-title class_">Path</span>(path), lockFactory, conf, readBufferSize); dir = <span class="hljs-keyword">new</span> <span class="hljs-title class_">BlockDirectory</span>(path, hdfsDir, cache, <span class="hljs-literal">null</span>, blockCacheReadEnabled, <span class="hljs-literal">false</span>, cacheMerges, cacheReadOnce); } <span class="hljs-keyword">else</span> { hdfsDir = <span class="hljs-keyword">new</span> <span class="hljs-title class_">HdfsDirectory</span>(<span class="hljs-keyword">new</span> <span class="hljs-title class_">Path</span>(path), conf); dir = hdfsDir; } <span class="hljs-keyword">if</span> (params.getBool(LOCALITYMETRICS_ENABLED, <span class="hljs-literal">false</span>)) { LocalityHolder.reporter.registerDirectory(hdfsDir); } <span class="hljs-comment">// 默认使用 NRTCachingDirectory 以达到近实时搜索的目的</span> <span class="hljs-type">boolean</span> <span 
class="hljs-variable">nrtCachingDirectory</span> <span class="hljs-operator">=</span> getConfig(NRTCACHINGDIRECTORY_ENABLE, <span class="hljs-literal">true</span>); <span class="hljs-keyword">if</span> (nrtCachingDirectory) { <span class="hljs-type">double</span> <span class="hljs-variable">nrtCacheMaxMergeSizeMB</span> <span class="hljs-operator">=</span> getConfig(NRTCACHINGDIRECTORY_MAXMERGESIZEMB, <span class="hljs-number">16</span>); <span class="hljs-type">double</span> <span class="hljs-variable">nrtCacheMaxCacheMB</span> <span class="hljs-operator">=</span> getConfig(NRTCACHINGDIRECTORY_MAXCACHEMB, <span class="hljs-number">192</span>); <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">NRTCachingDirectory</span>(dir, nrtCacheMaxMergeSizeMB, nrtCacheMaxCacheMB); } <span class="hljs-keyword">return</span> dir;}</code></pre></div><h3 id="BlockCache"><a href="#BlockCache" class="headerlink" title="BlockCache"></a>BlockCache</h3><p>当配置全局的 BlockCache 时,下面的方法保证了 BlockCache 是全局唯一共享的,理论上这里我觉得可以用 volatile 关键字修饰 globalBlockCache,然后再加上一层判断 globalBlockCache 是否为 null 后使用 synchronized 关键字,应该可以稍微提升一点效率,也就是采用双重校验锁的单例设计模式,不过此方法作为初始化方法也不会频繁进入,最新版尚未改动。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">private</span> BlockCache <span class="hljs-title function_">getBlockDirectoryCache</span><span class="hljs-params">(<span class="hljs-type">int</span> numberOfBlocksPerBank, <span class="hljs-type">int</span> blockSize, <span class="hljs-type">int</span> bankCount,</span><span class="hljs-params"> <span class="hljs-type">boolean</span> directAllocation, <span class="hljs-type">int</span> slabSize, <span class="hljs-type">int</span> bufferSize, <span class="hljs-type">int</span> bufferCount, <span class="hljs-type">boolean</span> staticBlockCache)</span> { <span class="hljs-comment">// 未配置 solr.hdfs.blockcache.global 为 false,每个 SolrCore 都会新创建一个 BlockCache</span> <span class="hljs-keyword">if</span> (!staticBlockCache) { log.info(<span class="hljs-string">"Creating new single instance HDFS BlockCache"</span>); <span class="hljs-keyword">return</span> createBlockCache(numberOfBlocksPerBank, blockSize, bankCount, directAllocation, slabSize, bufferSize, bufferCount); } <span class="hljs-comment">// 默认配置全局 BlockCache,不会创建新的 BlockCache,而是共享</span> <span class="hljs-keyword">synchronized</span> (HdfsDirectoryFactory.class) { <span class="hljs-keyword">if</span> (globalBlockCache == <span class="hljs-literal">null</span>) { log.info(<span class="hljs-string">"Creating new global HDFS BlockCache"</span>); globalBlockCache = createBlockCache(numberOfBlocksPerBank, blockSize, bankCount, directAllocation, slabSize, bufferSize, bufferCount); } } <span class="hljs-keyword">return</span> globalBlockCache;}</code></pre></div><p>在创建 BlockCache 之前会首先初始化 BufferStore,同时计算出分配的总内存。默认 directAllocation 是配置为 true 即开启堆外内存的,所以当堆外内存过小时,可能会提示 OOM 相关报错,需要指定 MaxDirectMemorySize 参数进行配置或者也可关闭堆外内存的分配。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">private</span> BlockCache <span class="hljs-title function_">createBlockCache</span><span class="hljs-params">(<span class="hljs-type">int</span> numberOfBlocksPerBank, <span class="hljs-type">int</span> blockSize,</span><span class="hljs-params"> <span class="hljs-type">int</span> bankCount, <span class="hljs-type">boolean</span> directAllocation, <span class="hljs-type">int</span> slabSize, <span class="hljs-type">int</span> bufferSize,</span><span 
class="hljs-params"> <span class="hljs-type">int</span> bufferCount)</span> { BufferStore.initNewBuffer(bufferSize, bufferCount, metrics); <span class="hljs-type">long</span> <span class="hljs-variable">totalMemory</span> <span class="hljs-operator">=</span> (<span class="hljs-type">long</span>) bankCount * (<span class="hljs-type">long</span>) numberOfBlocksPerBank * (<span class="hljs-type">long</span>) blockSize; BlockCache blockCache; <span class="hljs-keyword">try</span> { blockCache = <span class="hljs-keyword">new</span> <span class="hljs-title class_">BlockCache</span>(metrics, directAllocation, totalMemory, slabSize, blockSize); } <span class="hljs-keyword">catch</span> (OutOfMemoryError e) { <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">RuntimeException</span>( <span class="hljs-string">"The max direct memory is likely too low. Either increase it (by adding -XX:MaxDirectMemorySize=<size>g -XX:+UseLargePages to your containers startup args)"</span> + <span class="hljs-string">" or disable direct allocation using solr.hdfs.blockcache.direct.memory.allocation=false in solrconfig.xml. If you are putting the block cache on the heap,"</span> + <span class="hljs-string">" your java heap size might not be large enough."</span> + <span class="hljs-string">" Failed allocating ~"</span> + totalMemory / <span class="hljs-number">1000000.0</span> + <span class="hljs-string">" MB."</span>, e); } <span class="hljs-keyword">return</span> blockCache;}</code></pre></div><p>在初始化 BufferStore 时,将 shardBuffercacheLost、shardBuffercacheAllocate 与 metric 中的对应信息绑定,这样在后续的监控指标中能够获取具体的数据。新创建的 BufferStore 中,会调用至 setupBuffers 方法设置缓冲区,这个缓冲区会创建一个 bufferSize 大小的字节数组阻塞队列。</p><div class="note note-warning"> <p>BufferStore 实现了接口 Store,其定义了两个方法,分别是取出缓存的 takeBuffer 方法和放入缓存的 putBuffer 方法,当成功取出缓存时,会增加 shardBuffercacheAllocate,而放入缓存失败时,则会增加 shardBuffercacheLost,以更新监控指标信息。</p> </div><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">synchronized</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">initNewBuffer</span><span class="hljs-params">(<span class="hljs-type">int</span> bufferSize, <span class="hljs-type">long</span> totalAmount, Metrics metrics)</span> { <span class="hljs-keyword">if</span> (totalAmount == <span class="hljs-number">0</span>) { <span class="hljs-keyword">return</span>; } <span class="hljs-type">BufferStore</span> <span class="hljs-variable">bufferStore</span> <span class="hljs-operator">=</span> bufferStores.get(bufferSize); <span class="hljs-keyword">if</span> (bufferStore == <span class="hljs-literal">null</span>) { <span class="hljs-type">long</span> <span class="hljs-variable">count</span> <span class="hljs-operator">=</span> totalAmount / bufferSize; <span class="hljs-keyword">if</span> (count > Integer.MAX_VALUE) { count = Integer.MAX_VALUE; } <span class="hljs-type">AtomicLong</span> <span class="hljs-variable">shardBuffercacheLost</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">AtomicLong</span>(<span class="hljs-number">0</span>); <span class="hljs-type">AtomicLong</span> <span class="hljs-variable">shardBuffercacheAllocate</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">AtomicLong</span>(<span class="hljs-number">0</span>); <span 
class="hljs-keyword">if</span> (metrics != <span class="hljs-literal">null</span>) { shardBuffercacheLost = metrics.shardBuffercacheLost; shardBuffercacheAllocate = metrics.shardBuffercacheAllocate; } <span class="hljs-type">BufferStore</span> <span class="hljs-variable">store</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">BufferStore</span>(bufferSize, (<span class="hljs-type">int</span>) count, shardBuffercacheAllocate, shardBuffercacheLost); bufferStores.put(bufferSize, store); }}</code></pre></div><p>继续来看 BlockCache 的构造过程,每个 bank 都会为其创建一个对应的 BlockLocks 和 lockCounters,用于在缓冲时,检查是否能够找到位置进行缓存。默认配置了堆外内存,此处会进行分配,最大实例数为 16384 - 1 = 16383,当内存不足以分配时,会引发上述的 OOM 报错并提示相关信息。</p><p>这里的 cache 是用的 Google 的 Caffeine 本地缓存框架,并加入了监听器,当监听到文件删除时,会释放相应的缓存文件。当然,在关闭 BlockDirectoryCache 时,也会调用 BlockCache 中的 release 方法释放待删除的缓存文件。</p><div class="note note-warning"> <p>cache 中存放的是 BlockCacheKey 和 BlockCacheLocation 的对应关系,其中 BlockCacheKey 包含 BlockID、已缓存的文件数、索引文件目录,BlockCacheLocation 包含 BankID、Bank 内 Block 的 bit 位、最后一次进入的时间和访问次数等。</p> </div><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-title function_">BlockCache</span><span class="hljs-params">(Metrics metrics, <span class="hljs-type">boolean</span> directAllocation,</span><span class="hljs-params"> <span class="hljs-type">long</span> totalMemory, <span class="hljs-type">int</span> slabSize, <span class="hljs-type">int</span> blockSize)</span> { <span class="hljs-built_in">this</span>.metrics = metrics; numberOfBlocksPerBank = slabSize / blockSize; <span class="hljs-type">int</span> <span class="hljs-variable">numberOfBanks</span> <span class="hljs-operator">=</span> (<span class="hljs-type">int</span>) (totalMemory / slabSize); banks = <span class="hljs-keyword">new</span> <span class="hljs-title class_">ByteBuffer</span>[numberOfBanks]; locks = <span class="hljs-keyword">new</span> <span class="hljs-title class_">BlockLocks</span>[numberOfBanks]; lockCounters = <span class="hljs-keyword">new</span> <span class="hljs-title class_">AtomicInteger</span>[numberOfBanks]; maxEntries = (numberOfBlocksPerBank * numberOfBanks) - <span class="hljs-number">1</span>; <span class="hljs-keyword">for</span> (<span class="hljs-type">int</span> <span class="hljs-variable">i</span> <span class="hljs-operator">=</span> <span class="hljs-number">0</span>; i < numberOfBanks; i++) { <span class="hljs-keyword">if</span> (directAllocation) { banks[i] = ByteBuffer.allocateDirect(numberOfBlocksPerBank * blockSize); } <span class="hljs-keyword">else</span> { banks[i] = ByteBuffer.allocate(numberOfBlocksPerBank * blockSize); } locks[i] = <span class="hljs-keyword">new</span> <span class="hljs-title class_">BlockLocks</span>(numberOfBlocksPerBank); lockCounters[i] = <span class="hljs-keyword">new</span> <span class="hljs-title class_">AtomicInteger</span>(); } <span class="hljs-comment">// 用于监听文件删除,并释放缓存资源</span> RemovalListener<BlockCacheKey,BlockCacheLocation> listener = (blockCacheKey, blockCacheLocation, removalCause) -> releaseLocation(blockCacheKey, blockCacheLocation, removalCause); cache = Caffeine.newBuilder() .removalListener(listener) .maximumSize(maxEntries) .build(); <span class="hljs-built_in">this</span>.blockSize = blockSize;}</code></pre></div><h3 id="BlockDirectoryCache"><a href="#BlockDirectoryCache" class="headerlink" title="BlockDirectoryCache"></a>BlockDirectoryCache</h3><p>这里同样用 Caffeine 初始化了 names,names 中保存的是 <span class="label 
label-primary">缓存文件名 + 已缓存的文件数</span> 对应关系。BlockDirectoryCache 是该包中接口 Cache 的实现,定义了 6 个方法。这里的 setOnRelease 方法会将待释放资存储到 OnRelease 的 CopyOnWriteArrayList 中。在上面定义的监听器监听到文件删除时,会调用 releaseLocation 释放文件资源,并最终通过传入的 BlockCacheKey 删除 keysToRelease 中对应的 key。keysToRelease 存储了待释放的 BlockCacheKey,实际上是通过 BlockCache 的 release 方法调用至 cache.invalidate(Object key) 释放资源。</p><div class="note note-warning"> <p>CopyOnWriteArrayList 是写数组的拷贝,支持高效率并发且是线程安全的,读操作无锁的 ArrayList,其本质是所有可变操作都通过对底层数组进行一次新的复制来实现,适合读多写少的场景。</p> </div><div class="note note-info"> <ul><li><code>delete</code> - 从缓存中删除指定文件</li><li><code>update</code> - 更新指定缓存文件的内容,如有必要会创建一个缓存实例</li><li><code>fetch</code> - 获取指定的缓存文件内容,如果能找到缓存内容则返回 true</li><li><code>size</code> - 已缓存的实例数</li><li><code>renameCacheFile</code> - 重命名缓存中的指定文件,允许在不使缓存无效(即缓存有效)的情况下移动文件</li><li><code>releaseResources</code> - 释放与缓存相关联的所有文件资源</li></ul> </div><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-title function_">BlockDirectoryCache</span><span class="hljs-params">(BlockCache blockCache, String path, Metrics metrics, <span class="hljs-type">boolean</span> releaseBlocks)</span> { <span class="hljs-built_in">this</span>.blockCache = blockCache; <span class="hljs-built_in">this</span>.path = path; <span class="hljs-built_in">this</span>.metrics = metrics; <span class="hljs-comment">// 最多缓存 50000 的文件数</span> names = Caffeine.newBuilder().maximumSize(<span class="hljs-number">50000</span>).build(); <span class="hljs-keyword">if</span> (releaseBlocks) { <span class="hljs-comment">// Collections 提供了 newSetFromMap 来保证元素唯一性的 Map 实现,就是用一个 Set 来表示 Map,它持有这个 Map 的引用,并且保持 Map 的顺序、并发和性能特征</span> keysToRelease = Collections.newSetFromMap(<span class="hljs-keyword">new</span> <span class="hljs-title class_">ConcurrentHashMap</span><BlockCacheKey,Boolean>(<span class="hljs-number">1024</span>, <span class="hljs-number">0.75f</span>, <span class="hljs-number">512</span>)); blockCache.setOnRelease(<span class="hljs-keyword">new</span> <span class="hljs-title class_">OnRelease</span>() { <span class="hljs-meta">@Override</span> <span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">release</span><span class="hljs-params">(BlockCacheKey key)</span> { keysToRelease.remove(key); } }); }}</code></pre></div><h3 id="BlockDirectory"><a href="#BlockDirectory" class="headerlink" title="BlockDirectory"></a>BlockDirectory</h3><p>BlockDirectory 继承自抽象类 FilterDirectory,该抽象类将调用委托给另一个 Directory 实现,如 NRTCachingDirectory,它们之间可以进行协作。cacheMerges、cacheReadOnce 默认均为 false,当判断是否使用读写缓存时,会用到这两个变量值。blockCacheFileTypes 是 Set<String> 类型,当用户指定了缓存的文件类型时,只针对符合文件后缀名的进行缓存,默认是 null,也就是说缓存所有类型的文件。blockCacheReadEnabled 默认为 true 即开启读缓存,可通过配置参数改变值;而 blockCacheWriteEnabled 默认为 false 即关闭写缓存,并且不可通过配置改变值。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-title function_">BlockDirectory</span><span class="hljs-params">(String dirName, Directory directory, Cache cache,</span><span class="hljs-params"> Set<String> blockCacheFileTypes, <span class="hljs-type">boolean</span> blockCacheReadEnabled,</span><span class="hljs-params"> <span class="hljs-type">boolean</span> blockCacheWriteEnabled, <span class="hljs-type">boolean</span> cacheMerges, <span class="hljs-type">boolean</span> cacheReadOnce)</span> <span class="hljs-keyword">throws</span> IOException { <span class="hljs-built_in">super</span>(directory); <span 
class="hljs-built_in">this</span>.cacheMerges = cacheMerges; <span class="hljs-built_in">this</span>.cacheReadOnce = cacheReadOnce; <span class="hljs-built_in">this</span>.dirName = dirName; blockSize = BLOCK_SIZE; <span class="hljs-built_in">this</span>.cache = cache; <span class="hljs-comment">// 检查是否指定了缓存的文件类型,如 fdt、fdx...</span> <span class="hljs-keyword">if</span> (blockCacheFileTypes == <span class="hljs-literal">null</span> || blockCacheFileTypes.isEmpty()) { <span class="hljs-built_in">this</span>.blockCacheFileTypes = <span class="hljs-literal">null</span>; } <span class="hljs-keyword">else</span> { <span class="hljs-built_in">this</span>.blockCacheFileTypes = blockCacheFileTypes; } <span class="hljs-built_in">this</span>.blockCacheReadEnabled = blockCacheReadEnabled; <span class="hljs-keyword">if</span> (!blockCacheReadEnabled) { log.info(<span class="hljs-string">"Block cache on read is disabled"</span>); } <span class="hljs-built_in">this</span>.blockCacheWriteEnabled = blockCacheWriteEnabled; <span class="hljs-keyword">if</span> (!blockCacheWriteEnabled) { log.info(<span class="hljs-string">"Block cache on write is disabled"</span>); }}</code></pre></div><h2 id="写流程"><a href="#写流程" class="headerlink" title="写流程"></a>写流程</h2><p>从 BlockDirectory 的 createOutput 方法开始,该方法会在上层调用,在目录中创建一个新的空文件,并返回一个 IndexOutput 实例,用于追加数据到此文件。</p><div class="note note-warning"> <p>注意:因为在 BlockDirectory 的构造方法中 blockCacheWriteEnabled 默认是 false,所以此处的 useWriteCache(name, context) 只会返回 false(方法此处不展开,感兴趣可自行查看源码),并且由于该值不能通过参数配置,所以用户只能通过改动代码后重新编译打包以支持此功能。</p> </div><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> IndexOutput <span class="hljs-title function_">createOutput</span><span class="hljs-params">(String name, IOContext context)</span> <span class="hljs-keyword">throws</span> IOException { <span class="hljs-keyword">final</span> <span class="hljs-type">IndexOutput</span> <span class="hljs-variable">dest</span> <span class="hljs-operator">=</span> <span class="hljs-built_in">super</span>.createOutput(name, context); <span class="hljs-keyword">if</span> (useWriteCache(name, context)) { <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">CachedIndexOutput</span>(<span class="hljs-built_in">this</span>, dest, blockSize, name, cache, blockSize); } <span class="hljs-keyword">return</span> dest;}</code></pre></div><p>CachedIndexOutput 继承自 ReusedBufferedIndexOutput,在该类的构造方法中会从 BufferStore 中取出缓存准备好。directory.getFileCacheLocation(name) 方法则是将目录与索引文件名拼好作为变量 location 的值,每个 location 都是唯一的。</p><div class="note note-warning"> <p>Segment 文件由于索引频繁的小合并,所以会不断改变其值,在缓存文件时要注意。</p> </div><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-title function_">CachedIndexOutput</span><span class="hljs-params">(BlockDirectory directory, IndexOutput dest,</span><span class="hljs-params"> <span class="hljs-type">int</span> blockSize, String name, Cache cache, <span class="hljs-type">int</span> bufferSize)</span> { <span class="hljs-built_in">super</span>(<span class="hljs-string">"dest="</span> + dest + <span class="hljs-string">" name="</span> + name, name, bufferSize); <span class="hljs-built_in">this</span>.directory = directory; <span class="hljs-built_in">this</span>.dest = dest; <span class="hljs-built_in">this</span>.blockSize = blockSize; <span class="hljs-built_in">this</span>.name = name; <span class="hljs-built_in">this</span>.location = 
directory.getFileCacheLocation(name); <span class="hljs-built_in">this</span>.cache = cache;}</code></pre></div><p>创建完 IndexOutput 是为了实际写入数据,于是便会继续调用 writeByte 方法写入,当下一个要写入的字节 bufferPosition 大于等于 bufferSize 即 1024 时调用 flushBufferToCache 方法将缓冲的字节写入缓存,该方法会调用至 writeInternal 方法,然后调整下一个写入的位置和长度等信息。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">writeByte</span><span class="hljs-params">(<span class="hljs-type">byte</span> b)</span> <span class="hljs-keyword">throws</span> IOException { <span class="hljs-keyword">if</span> (bufferPosition >= bufferSize) { flushBufferToCache(); } <span class="hljs-keyword">if</span> (getFilePointer() >= fileLength) { fileLength++; } buffer[bufferPosition++] = b; <span class="hljs-keyword">if</span> (bufferPosition > bufferLength) { bufferLength = bufferPosition; }}</code></pre></div><p>获取缓存文件中的位置,写入。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">writeInternal</span><span class="hljs-params">(<span class="hljs-type">byte</span>[] b, <span class="hljs-type">int</span> offset, <span class="hljs-type">int</span> length)</span> <span class="hljs-keyword">throws</span> IOException { <span class="hljs-type">long</span> <span class="hljs-variable">position</span> <span class="hljs-operator">=</span> getBufferStart(); <span class="hljs-keyword">while</span> (length > <span class="hljs-number">0</span>) { <span class="hljs-type">int</span> <span class="hljs-variable">len</span> <span class="hljs-operator">=</span> writeBlock(position, b, offset, length); position += len; length -= len; offset += len; } }</code></pre></div><p>获取 Block 的编号、偏移量和要写入的长度信息,先写入文件,再复制到缓存中。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">private</span> <span class="hljs-type">int</span> <span class="hljs-title function_">writeBlock</span><span class="hljs-params">(<span class="hljs-type">long</span> position, <span class="hljs-type">byte</span>[] b, <span class="hljs-type">int</span> offset, <span class="hljs-type">int</span> length)</span> <span class="hljs-keyword">throws</span> IOException { <span class="hljs-comment">// read whole block into cache and then provide needed data</span> <span class="hljs-comment">// 将整个块读入缓存,然后提供所需数据,只有当数据大于 8192 右移后才能分配到新的 blockId</span> <span class="hljs-type">long</span> <span class="hljs-variable">blockId</span> <span class="hljs-operator">=</span> BlockDirectory.getBlock(position); <span class="hljs-type">int</span> <span class="hljs-variable">blockOffset</span> <span class="hljs-operator">=</span> (<span class="hljs-type">int</span>) BlockDirectory.getPosition(position); <span class="hljs-type">int</span> <span class="hljs-variable">lengthToWriteInBlock</span> <span class="hljs-operator">=</span> Math.min(length, blockSize - blockOffset); <span class="hljs-comment">// write the file and copy into the cache</span> <span class="hljs-comment">// 写入文件,并复制到缓存中</span> dest.writeBytes(b, offset, lengthToWriteInBlock); <span class="hljs-comment">// location:索引文件目录 + 文件名</span> cache.update(location, blockId, blockOffset, b, offset, lengthToWriteInBlock); <span class="hljs-keyword">return</span> lengthToWriteInBlock;}</code></pre></div><p>names 中存放的是缓存的文件名 + 已缓存的文件数(该值是通过原子类变量 counter 递增存入的),构造一个 BlockCacheKey 对象后,调用 BlockCache 的 store 方法存入相应值,成功后将其添加至待释放资源对象的 
keysToRelease 中。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">update</span><span class="hljs-params">(String name, <span class="hljs-type">long</span> blockId, <span class="hljs-type">int</span> blockOffset, <span class="hljs-type">byte</span>[] buffer,</span><span class="hljs-params"> <span class="hljs-type">int</span> offset, <span class="hljs-type">int</span> length)</span> { <span class="hljs-type">Integer</span> <span class="hljs-variable">file</span> <span class="hljs-operator">=</span> names.getIfPresent(name); <span class="hljs-keyword">if</span> (file == <span class="hljs-literal">null</span>) { file = counter.incrementAndGet(); names.put(name, file); } <span class="hljs-type">BlockCacheKey</span> <span class="hljs-variable">blockCacheKey</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">BlockCacheKey</span>(); blockCacheKey.setPath(path); blockCacheKey.setBlock(blockId); blockCacheKey.setFile(file); <span class="hljs-keyword">if</span> (blockCache.store(blockCacheKey, blockOffset, buffer, offset, length) && keysToRelease != <span class="hljs-literal">null</span>) { keysToRelease.add(blockCacheKey); }}</code></pre></div><p>该方法可能会返回 false,这意味着无法缓存该 Block,也可能是已经缓存了该 Block,所以 Block 当前可能是未更新的,写流程分析至此。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-type">boolean</span> <span class="hljs-title function_">store</span><span class="hljs-params">(BlockCacheKey blockCacheKey, <span class="hljs-type">int</span> blockOffset,</span><span class="hljs-params"> <span class="hljs-type">byte</span>[] data, <span class="hljs-type">int</span> offset, <span class="hljs-type">int</span> length)</span> { <span class="hljs-keyword">if</span> (length + blockOffset > blockSize) { <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">RuntimeException</span>(<span class="hljs-string">"Buffer size exceeded, expecting max ["</span> + blockSize + <span class="hljs-string">"] got length ["</span> + length + <span class="hljs-string">"] with blockOffset ["</span> + blockOffset + <span class="hljs-string">"]"</span>); } <span class="hljs-type">BlockCacheLocation</span> <span class="hljs-variable">location</span> <span class="hljs-operator">=</span> cache.getIfPresent(blockCacheKey); <span class="hljs-keyword">if</span> (location == <span class="hljs-literal">null</span>) { location = <span class="hljs-keyword">new</span> <span class="hljs-title class_">BlockCacheLocation</span>(); <span class="hljs-comment">// 当缓存已满(正常情况)时,两次并发写会导致其中一个失败,一个简单的解决办法是留一个空的 Block,社区当前未做</span> <span class="hljs-keyword">if</span> (!findEmptyLocation(location)) { metrics.blockCacheStoreFail.incrementAndGet(); <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>; } } <span class="hljs-keyword">else</span> { <span class="hljs-comment">// 没有其他指标需要存储,不将冗余存储视为存储失败</span> <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>; } <span class="hljs-type">int</span> <span class="hljs-variable">bankId</span> <span class="hljs-operator">=</span> location.getBankId(); <span class="hljs-type">int</span> <span class="hljs-variable">bankOffset</span> <span class="hljs-operator">=</span> location.getBlock() * blockSize; <span class="hljs-type">ByteBuffer</span> <span 
class="hljs-variable">bank</span> <span class="hljs-operator">=</span> getBank(bankId); bank.position(bankOffset + blockOffset); bank.put(data, offset, length); cache.put(blockCacheKey.clone(), location); metrics.blockCacheSize.incrementAndGet(); <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;}</code></pre></div><h2 id="读流程"><a href="#读流程" class="headerlink" title="读流程"></a>读流程</h2><p>从 BlockDirectory 的 openInput 方法开始,该方法会在上层调用,创建一个 IndexInput 读取已有文件,符合条件则创建 CachedIndexInput,该类继承自抽象类 CustomBufferedIndexInput。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">private</span> IndexInput <span class="hljs-title function_">openInput</span><span class="hljs-params">(String name, <span class="hljs-type">int</span> bufferSize, IOContext context)</span> <span class="hljs-keyword">throws</span> IOException { <span class="hljs-keyword">final</span> <span class="hljs-type">IndexInput</span> <span class="hljs-variable">source</span> <span class="hljs-operator">=</span> <span class="hljs-built_in">super</span>.openInput(name, context); <span class="hljs-keyword">if</span> (useReadCache(name, context)) { <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">CachedIndexInput</span>(source, blockSize, name, getFileCacheName(name), cache, bufferSize); } <span class="hljs-keyword">return</span> source;}</code></pre></div><p>而开始读取索引文件时,无非是几个方法,readByte 和 readBytes,都会调用一个比较重要的方法 refill,当没有数据时,会从 BufferStore 中取出缓存,获取相应的位置,调用 fetchBlock 方法,该方法会试着读取缓存文件内容,如果可以就直接返回,如果不可以则将文件读取至缓存或者更新缓存内容。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">private</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">refill</span><span class="hljs-params">()</span> <span class="hljs-keyword">throws</span> IOException { <span class="hljs-type">long</span> <span class="hljs-variable">start</span> <span class="hljs-operator">=</span> bufferStart + bufferPosition; <span class="hljs-type">long</span> <span class="hljs-variable">end</span> <span class="hljs-operator">=</span> start + bufferSize; <span class="hljs-keyword">if</span> (end > length()) <span class="hljs-comment">// don't read past EOF</span> end = length(); <span class="hljs-type">int</span> <span class="hljs-variable">newLength</span> <span class="hljs-operator">=</span> (<span class="hljs-type">int</span>) (end - start); <span class="hljs-keyword">if</span> (newLength <= <span class="hljs-number">0</span>) <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">EOFException</span>(<span class="hljs-string">"read past EOF: "</span> + <span class="hljs-built_in">this</span>); <span class="hljs-keyword">if</span> (buffer == <span class="hljs-literal">null</span>) { buffer = store.takeBuffer(bufferSize); seekInternal(bufferStart); } readInternal(buffer, <span class="hljs-number">0</span>, newLength); bufferLength = newLength; bufferStart = start; bufferPosition = <span class="hljs-number">0</span>;}</code></pre></div><p>在 fetchBlock 中,调用 checkCache 方法,然后调用至 BlockDirectoryCache 的 fetch 方法获取指定的缓存文件内容,如果能找到返回 true。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-type">boolean</span> <span class="hljs-title function_">fetch</span><span class="hljs-params">(String name, <span class="hljs-type">long</span> blockId, <span class="hljs-type">int</span> blockOffset, 
<span class="hljs-type">byte</span>[] b,</span><span class="hljs-params"> <span class="hljs-type">int</span> off, <span class="hljs-type">int</span> lengthToReadInBlock)</span> { <span class="hljs-type">Integer</span> <span class="hljs-variable">file</span> <span class="hljs-operator">=</span> names.getIfPresent(name); <span class="hljs-keyword">if</span> (file == <span class="hljs-literal">null</span>) { <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>; } <span class="hljs-type">BlockCacheKey</span> <span class="hljs-variable">blockCacheKey</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">BlockCacheKey</span>(); blockCacheKey.setPath(path); blockCacheKey.setBlock(blockId); blockCacheKey.setFile(file); <span class="hljs-type">boolean</span> <span class="hljs-variable">fetch</span> <span class="hljs-operator">=</span> blockCache.fetch(blockCacheKey, b, blockOffset, off, lengthToReadInBlock); <span class="hljs-keyword">return</span> fetch;}</code></pre></div><p>直接获取缓存文件内容,如果没找到或者失效了则返回 false。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-type">boolean</span> <span class="hljs-title function_">fetch</span><span class="hljs-params">(BlockCacheKey blockCacheKey, <span class="hljs-type">byte</span>[] buffer,</span><span class="hljs-params"> <span class="hljs-type">int</span> blockOffset, <span class="hljs-type">int</span> off, <span class="hljs-type">int</span> length)</span> { <span class="hljs-type">BlockCacheLocation</span> <span class="hljs-variable">location</span> <span class="hljs-operator">=</span> cache.getIfPresent(blockCacheKey); <span class="hljs-keyword">if</span> (location == <span class="hljs-literal">null</span>) { metrics.blockCacheMiss.incrementAndGet(); <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>; } <span class="hljs-type">int</span> <span class="hljs-variable">bankId</span> <span class="hljs-operator">=</span> location.getBankId(); <span class="hljs-type">int</span> <span class="hljs-variable">bankOffset</span> <span class="hljs-operator">=</span> location.getBlock() * blockSize; location.touch(); <span class="hljs-type">ByteBuffer</span> <span class="hljs-variable">bank</span> <span class="hljs-operator">=</span> getBank(bankId); bank.position(bankOffset + blockOffset); bank.get(buffer, off, length); <span class="hljs-keyword">if</span> (location.isRemoved()) { <span class="hljs-comment">// 必须在读取完成后检查,因为在读取之前或读取期间可能已将 bank 重新用于另一个块</span> metrics.blockCacheMiss.incrementAndGet(); <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>; } metrics.blockCacheHit.incrementAndGet(); <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;}</code></pre></div><p>未获取到指定的缓存文件内容,从文件系统中读取文件内容并加载至缓存,此处调用的 update 方法在写流程中已经分析过,该方法更新指定缓存文件的内容,如有必要也会创建一个缓存实例,以便下次读取,读流程分析至此。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">private</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">readIntoCacheAndResult</span><span class="hljs-params">(<span class="hljs-type">long</span> blockId, <span class="hljs-type">int</span> blockOffset,</span><span class="hljs-params"> <span class="hljs-type">byte</span>[] b, <span class="hljs-type">int</span> off, <span class="hljs-type">int</span> lengthToReadInBlock)</span> <span class="hljs-keyword">throws</span> IOException { <span 
class="hljs-type">long</span> <span class="hljs-variable">position</span> <span class="hljs-operator">=</span> getRealPosition(blockId, <span class="hljs-number">0</span>); <span class="hljs-type">int</span> <span class="hljs-variable">length</span> <span class="hljs-operator">=</span> (<span class="hljs-type">int</span>) Math.min(blockSize, fileLength - position); source.seek(position); <span class="hljs-type">byte</span>[] buf = store.takeBuffer(blockSize); source.readBytes(buf, <span class="hljs-number">0</span>, length); System.arraycopy(buf, blockOffset, b, off, lengthToReadInBlock); cache.update(cacheName, blockId, <span class="hljs-number">0</span>, buf, <span class="hljs-number">0</span>, blockSize); store.putBuffer(buf);}</code></pre></div>]]></content>
<categories>
<category>分布式系统</category>
<category>分布式检索</category>
<category>Solr</category>
</categories>
<tags>
<tag>Solr</tag>
</tags>
</entry>
<entry>
<title>Spark 调度系统</title>
<link href="/2021/04/13/Spark-%E8%B0%83%E5%BA%A6%E7%B3%BB%E7%BB%9F/"/>
<url>/2021/04/13/Spark-%E8%B0%83%E5%BA%A6%E7%B3%BB%E7%BB%9F/</url>
<content type="html">< => <span class="hljs-type">U</span>, partitions: <span class="hljs-type">Seq</span>[<span class="hljs-type">Int</span>], resultHandler: (<span class="hljs-type">Int</span>, <span class="hljs-type">U</span>) => <span class="hljs-type">Unit</span>): <span class="hljs-type">Unit</span> = { <span class="hljs-keyword">if</span> (stopped.get()) { <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-type">IllegalStateException</span>(<span class="hljs-string">"SparkContext has been shutdown"</span>) } <span class="hljs-keyword">val</span> callSite = getCallSite <span class="hljs-keyword">val</span> cleanedFunc = clean(func) logInfo(<span class="hljs-string">"Starting job: "</span> + callSite.shortForm) <span class="hljs-keyword">if</span> (conf.getBoolean(<span class="hljs-string">"spark.logLineage"</span>, <span class="hljs-literal">false</span>)) { logInfo(<span class="hljs-string">"RDD's recursive dependencies:\n"</span> + rdd.toDebugString) } <span class="hljs-comment">// 将 DAG 及 RDD 提交给 DAGScheduler 进行调度</span> dagScheduler.runJob(rdd, cleanedFunc, partitions, callSite, resultHandler, localProperties.get) progressBar.foreach(_.finishAll()) <span class="hljs-comment">// 保存检查点</span> rdd.doCheckpoint()}</code></pre></div><p>生成 Job 的运行时间 start 并调用 submitJob 方法提交 Job。由于执行 Job 的过程是异步的,因此 submitJob 将立即返回 JobWaiter 对象。使用 JobWaiter 等待 Job 处理完毕。</p><div class="hljs code-wrapper"><pre><code class="hljs scala"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">runJob</span></span>[<span class="hljs-type">T</span>, <span class="hljs-type">U</span>]( rdd: <span class="hljs-type">RDD</span>[<span class="hljs-type">T</span>], func: (<span class="hljs-type">TaskContext</span>, <span class="hljs-type">Iterator</span>[<span class="hljs-type">T</span>]) => <span class="hljs-type">U</span>, partitions: <span class="hljs-type">Seq</span>[<span class="hljs-type">Int</span>], callSite: <span class="hljs-type">CallSite</span>, resultHandler: (<span class="hljs-type">Int</span>, <span class="hljs-type">U</span>) => <span class="hljs-type">Unit</span>, properties: <span class="hljs-type">Properties</span>): <span class="hljs-type">Unit</span> = { <span class="hljs-keyword">val</span> start = <span class="hljs-type">System</span>.nanoTime <span class="hljs-comment">// 提交 Job</span> <span class="hljs-keyword">val</span> waiter = submitJob(rdd, func, partitions, callSite, resultHandler, properties) <span class="hljs-comment">// JobWaiter 等待 Job 处理完毕</span> <span class="hljs-type">ThreadUtils</span>.awaitReady(waiter.completionFuture, <span class="hljs-type">Duration</span>.<span class="hljs-type">Inf</span>) waiter.completionFuture.value.get <span class="hljs-keyword">match</span> { <span class="hljs-comment">// JobWaiter 监听到 Job 的处理结果,进行进一步处理</span> <span class="hljs-keyword">case</span> scala.util.<span class="hljs-type">Success</span>(_) => <span class="hljs-comment">// 如果 Job 执行成功,根据处理结果打印相应的日志</span> logInfo(<span class="hljs-string">"Job %d finished: %s, took %f s"</span>.format (waiter.jobId, callSite.shortForm, (<span class="hljs-type">System</span>.nanoTime - start) / <span class="hljs-number">1e9</span>)) <span class="hljs-keyword">case</span> scala.util.<span class="hljs-type">Failure</span>(exception) => <span class="hljs-comment">// 如果 Job 执行失败,除打印日志外,还将抛出 Job 失败的异常信息</span> logInfo(<span class="hljs-string">"Job %d failed: %s, took %f s"</span>.format (waiter.jobId, callSite.shortForm, (<span 
class="hljs-type">System</span>.nanoTime - start) / <span class="hljs-number">1e9</span>)) <span class="hljs-keyword">val</span> callerStackTrace = <span class="hljs-type">Thread</span>.currentThread().getStackTrace.tail exception.setStackTrace(exception.getStackTrace ++ callerStackTrace) <span class="hljs-keyword">throw</span> exception }}</code></pre></div><p>在检查 Job 分区数量符合条件后,会向 DAGSchedulerEventProcessLoop 发送 JobSubmitted 事件,同时会将事件放入 eventQueue(LinkedBlockingDeque)中。</p><div class="hljs code-wrapper"><pre><code class="hljs scala"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">submitJob</span></span>[<span class="hljs-type">T</span>, <span class="hljs-type">U</span>]( rdd: <span class="hljs-type">RDD</span>[<span class="hljs-type">T</span>], func: (<span class="hljs-type">TaskContext</span>, <span class="hljs-type">Iterator</span>[<span class="hljs-type">T</span>]) => <span class="hljs-type">U</span>, partitions: <span class="hljs-type">Seq</span>[<span class="hljs-type">Int</span>], callSite: <span class="hljs-type">CallSite</span>, resultHandler: (<span class="hljs-type">Int</span>, <span class="hljs-type">U</span>) => <span class="hljs-type">Unit</span>, properties: <span class="hljs-type">Properties</span>): <span class="hljs-type">JobWaiter</span>[<span class="hljs-type">U</span>] = { <span class="hljs-comment">// 获取当前 Job 的最大分区数 maxPartitions</span> <span class="hljs-keyword">val</span> maxPartitions = rdd.partitions.length partitions.find(p => p >= maxPartitions || p < <span class="hljs-number">0</span>).foreach { p => <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-type">IllegalArgumentException</span>( <span class="hljs-string">"Attempting to access a non-existent partition: "</span> + p + <span class="hljs-string">". 
"</span> + <span class="hljs-string">"Total number of partitions: "</span> + maxPartitions) } <span class="hljs-comment">// 生成下一个 Job 的 jobId</span> <span class="hljs-keyword">val</span> jobId = nextJobId.getAndIncrement() <span class="hljs-comment">// 如果 Job 分区数为 0,创建一个 totalTasks 属性为 0 的 JobWaiter 并返回</span> <span class="hljs-keyword">if</span> (partitions.size == <span class="hljs-number">0</span>) { <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> <span class="hljs-type">JobWaiter</span>[<span class="hljs-type">U</span>](<span class="hljs-keyword">this</span>, jobId, <span class="hljs-number">0</span>, resultHandler) } assert(partitions.size > <span class="hljs-number">0</span>) <span class="hljs-keyword">val</span> func2 = func.asInstanceOf[(<span class="hljs-type">TaskContext</span>, <span class="hljs-type">Iterator</span>[_]) => _] <span class="hljs-comment">// 创建等待 Job 完成的 JobWaiter</span> <span class="hljs-keyword">val</span> waiter = <span class="hljs-keyword">new</span> <span class="hljs-type">JobWaiter</span>(<span class="hljs-keyword">this</span>, jobId, partitions.size, resultHandler) <span class="hljs-comment">// 向 DAGSchedulerEventProcessLoop 发送 JobSubmitted 事件</span> eventProcessLoop.post(<span class="hljs-type">JobSubmitted</span>( jobId, rdd, func2, partitions.toArray, callSite, waiter, <span class="hljs-type">SerializationUtils</span>.clone(properties))) waiter}</code></pre></div><p>而 DAGSchedulerEventProcessLoop 会轮询 eventQueue 中的事件(event),再通过 onReceive方法接收事件,最终到达 DAGScheduler 中的 doOnReceive 方法匹配对应的事件进行处理。</p><div class="hljs code-wrapper"><pre><code class="hljs scala"><span class="hljs-keyword">private</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">doOnReceive</span></span>(event: <span class="hljs-type">DAGSchedulerEvent</span>): <span class="hljs-type">Unit</span> = event <span class="hljs-keyword">match</span> { <span class="hljs-keyword">case</span> <span class="hljs-type">JobSubmitted</span>(jobId, rdd, func, partitions, callSite, listener, properties) => dagScheduler.handleJobSubmitted(jobId, rdd, func, partitions, callSite, listener, properties) <span class="hljs-comment">// 省略其他事件</span>}</code></pre></div><p>创建 ResultStage 并处理这个过程中可能发生的异常(如依赖的 HDFS 文件被删除),创建 ActiveJob 并处理,向 LiveListenerBus 投递 SparkListenerJobStart 事件(引发监听器执行相应操作),其中最重要的是调用 submitStage 方法提交 ResultStage。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">private</span>[scheduler] def <span class="hljs-title function_">handleJobSubmitted</span><span class="hljs-params">(jobId: Int,</span><span class="hljs-params"> finalRDD: RDD[_],</span><span class="hljs-params"> func: (TaskContext, Iterator[_])</span> => _, partitions: Array[Int], callSite: CallSite, listener: JobListener, properties: Properties) { <span class="hljs-keyword">var</span> finalStage: ResultStage = <span class="hljs-literal">null</span> <span class="hljs-keyword">try</span> { <span class="hljs-comment">// 创建 ResultStage</span> finalStage = createResultStage(finalRDD, func, partitions, jobId, callSite) } <span class="hljs-keyword">catch</span> { <span class="hljs-comment">// 省略异常捕获代码</span> <span class="hljs-keyword">return</span> } barrierJobIdToNumTasksCheckFailures.remove(jobId) <span class="hljs-comment">// 创建 ActiveJob</span> <span class="hljs-type">val</span> <span class="hljs-variable">job</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title 
class_">ActiveJob</span>(jobId, finalStage, callSite, listener, properties) clearCacheLocs() logInfo(<span class="hljs-string">"Got job %s (%s) with %d output partitions"</span>.format( job.jobId, callSite.shortForm, partitions.length)) logInfo(<span class="hljs-string">"Final stage: "</span> + finalStage + <span class="hljs-string">" ("</span> + finalStage.name + <span class="hljs-string">")"</span>) logInfo(<span class="hljs-string">"Parents of final stage: "</span> + finalStage.parents) logInfo(<span class="hljs-string">"Missing parents: "</span> + getMissingParentStages(finalStage)) <span class="hljs-comment">// 生产 Job 的提交时间</span> <span class="hljs-type">val</span> <span class="hljs-variable">jobSubmissionTime</span> <span class="hljs-operator">=</span> clock.getTimeMillis() jobIdToActiveJob(jobId) = job activeJobs += job finalStage.setActiveJob(job) <span class="hljs-type">val</span> <span class="hljs-variable">stageIds</span> <span class="hljs-operator">=</span> jobIdToStageIds(jobId).toArray <span class="hljs-type">val</span> <span class="hljs-variable">stageInfos</span> <span class="hljs-operator">=</span> stageIds.flatMap(id => stageIdToStage.get(id).map(_.latestInfo)) listenerBus.post( SparkListenerJobStart(job.jobId, jobSubmissionTime, stageInfos, properties)) <span class="hljs-comment">// 提交 ResultStage</span> submitStage(finalStage)}</code></pre></div><p>获取当前 Stage 的所有 ActiveJob 身份标识,如果有身份标识,但 Stage 未提交,则查看父 Stage。父 Stage 也未提交,那么调用 submitStage 逐个提交所有未提交的 Stage,父 Stage 已经提交,那么调用 submitMissingTasks 提交当前 Stage 未提交的 Task。如果没有身份标识,直接终止依赖于当前 Stage 的所有 Job。</p><div class="hljs code-wrapper"><pre><code class="hljs scala"><span class="hljs-keyword">private</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">submitStage</span></span>(stage: <span class="hljs-type">Stage</span>) { <span class="hljs-comment">// 获取当前 Stage 对应的 Job 的 ID</span> <span class="hljs-keyword">val</span> jobId = activeJobForStage(stage) <span class="hljs-keyword">if</span> (jobId.isDefined) { logDebug(<span class="hljs-string">s"submitStage(<span class="hljs-subst">$stage</span> (name=<span class="hljs-subst">${stage.name}</span>;"</span> + <span class="hljs-string">s"jobs=<span class="hljs-subst">${stage.jobIds.toSeq.sorted.mkString(",")}</span>))"</span>) <span class="hljs-keyword">if</span> (!waitingStages(stage) && !runningStages(stage) && !failedStages(stage)) { <span class="hljs-comment">// 当前 Stage 未提交</span> <span class="hljs-keyword">val</span> missing = getMissingParentStages(stage).sortBy(_.id) logDebug(<span class="hljs-string">"missing: "</span> + missing) <span class="hljs-comment">// 不存在未提交的父 Stage,那么提交当前 Stage 所有未提交的 Task</span> <span class="hljs-keyword">if</span> (missing.isEmpty) { logInfo(<span class="hljs-string">"Submitting "</span> + stage + <span class="hljs-string">" ("</span> + stage.rdd + <span class="hljs-string">"), which has no missing parents"</span>) submitMissingTasks(stage, jobId.get) } <span class="hljs-keyword">else</span> { <span class="hljs-comment">// 存在未提交的父 Stage,那么逐个提交它们</span> <span class="hljs-keyword">for</span> (parent <- missing) { submitStage(parent) } waitingStages += stage } } } <span class="hljs-keyword">else</span> { <span class="hljs-comment">// 终止依赖于当前 Stage 的所有 Job</span> abortStage(stage, <span class="hljs-string">"No active job for stage "</span> + stage.id, <span class="hljs-type">None</span>) }}</code></pre></div><p>此方法在 Stage 没有不可用的父 Stage 时,提交当前 Stage 还未提交的任务。</p><ol><li><p>调用 Stage 的 findMissingPartitions 
方法,找出当前 Stage 的所有分区中还没有完成计算的分区的索引</p></li><li><p>获取 ActiveJob 的 properties。properties 包含了当前 Job 的调度、group、描述等属性信息</p></li><li><p>将当前 Stage 加入 runningStages 集合中,即当前 Stage 已经处于运行状态</p></li><li><p>调用 OutputCommitCoordinator 的 stageStart 方法,启动对当前 Stage 的输出提交到 HDFS 的协调</p></li><li><p>调用 DAGScheduler 的 getPreferredLocs 方法,获取 partitionsToCompute 中的每一个分区的偏好位置。如果发生异常,则调用 Stage 的 makeNewStageAttempt 方法开始一次新的 Stage 执行尝试,然后向 listenerBus 投递 SparkListenerStageSubmitted 事件</p></li><li><p>调用 Stage 的 makeNewStageAttempt 方法开始 Stage 的执行尝试,并向 listenerBus 投递 SparkListenerStageSubmitted 事件</p></li><li><p>如果当前 Stage 是 ShuffleMapStage,那么对 Stage 的 rdd 和 ShuffleDependency 进行序列化;如果当前 Stage 是 ResultStage,那么对 Stage 的 rdd 和对 RDD 的分区进行计算的函数 func 进行序列化</p></li><li><p>调用 SparkContext 的 broadcast 方法广播上一步生成的序列化对象</p></li><li><p>如果当前 Stage 是 ShuffleMapStage,则为 ShuffleMapStage 的每一个分区创建一个 ShuffleMapTask。如果当前 Stage 是 ResultStage,则为 ResultStage 的每一个分区创建一个 ResultTask。</p></li><li><p>如果第 9 步中创建了至少一个 Task,那么为这批 Task 创建 TaskSet(即任务集合),并调用 TaskScheduler 的 submitTasks 方法提交此批 Task</p></li><li><p>如果第 10 步没有创建任何 Task,这意味着当前 Stage 没有 Task 任务需要提交执行,因此调用 DAGScheduler 的 markStageAsFinished 方法,将当前 Stage 标记为完成。然后调用 submitWaitingChildStages 方法,提交当前 Stage 的子 Stage。</p></li></ol><div class="hljs code-wrapper"><pre><code class="hljs scala"><span class="hljs-keyword">private</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">submitMissingTasks</span></span>(stage: <span class="hljs-type">Stage</span>, jobId: <span class="hljs-type">Int</span>) { logDebug(<span class="hljs-string">"submitMissingTasks("</span> + stage + <span class="hljs-string">")"</span>) <span class="hljs-comment">// 找出当前 Stage 的所有分区中还没有完成计算的分区的索引</span> <span class="hljs-keyword">val</span> partitionsToCompute: <span class="hljs-type">Seq</span>[<span class="hljs-type">Int</span>] = stage.findMissingPartitions() <span class="hljs-comment">// 获取 ActiveJob 的 properties。properties 包含了当前 Job 的调度、group、描述等属性信息</span> <span class="hljs-keyword">val</span> properties = jobIdToActiveJob(jobId).properties runningStages += stage <span class="hljs-comment">// 启动对当前 Stage 的输出提交到 HDFS 的协调</span> stage <span class="hljs-keyword">match</span> { <span class="hljs-keyword">case</span> s: <span class="hljs-type">ShuffleMapStage</span> => outputCommitCoordinator.stageStart(stage = s.id, maxPartitionId = s.numPartitions - <span class="hljs-number">1</span>) <span class="hljs-keyword">case</span> s: <span class="hljs-type">ResultStage</span> => outputCommitCoordinator.stageStart( stage = s.id, maxPartitionId = s.rdd.partitions.length - <span class="hljs-number">1</span>) } <span class="hljs-keyword">val</span> taskIdToLocations: <span class="hljs-type">Map</span>[<span class="hljs-type">Int</span>, <span class="hljs-type">Seq</span>[<span class="hljs-type">TaskLocation</span>]] = <span class="hljs-keyword">try</span> { <span class="hljs-comment">// 获取还没有完成计算的每一个分区的偏好位置</span> stage <span class="hljs-keyword">match</span> { <span class="hljs-keyword">case</span> s: <span class="hljs-type">ShuffleMapStage</span> => partitionsToCompute.map { id => (id, getPreferredLocs(stage.rdd, id))}.toMap <span class="hljs-keyword">case</span> s: <span class="hljs-type">ResultStage</span> => partitionsToCompute.map { id => <span class="hljs-keyword">val</span> p = s.partitions(id) (id, getPreferredLocs(stage.rdd, p)) }.toMap } } <span class="hljs-keyword">catch</span> { <span class="hljs-comment">// 如果发生任何异常,则调用 Stage 的 makeNewStageAttempt 方法开始一次新的 Stage 执行尝试</span> 
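<span class="hljs-comment">// 并向 listenerBus 投递 SparkListenerStageSubmitted 事件,随后调用 abortStage 终止该 Stage,并将其从 runningStages 中移除</span>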
<span class="hljs-keyword">case</span> <span class="hljs-type">NonFatal</span>(e) => stage.makeNewStageAttempt(partitionsToCompute.size) listenerBus.post(<span class="hljs-type">SparkListenerStageSubmitted</span>(stage.latestInfo, properties)) abortStage(stage, <span class="hljs-string">s"Task creation failed: <span class="hljs-subst">$e</span>\n<span class="hljs-subst">${Utils.exceptionString(e)}</span>"</span>, <span class="hljs-type">Some</span>(e)) runningStages -= stage <span class="hljs-keyword">return</span> } <span class="hljs-comment">// 开始 Stage 的执行尝试</span> stage.makeNewStageAttempt(partitionsToCompute.size, taskIdToLocations.values.toSeq) <span class="hljs-keyword">if</span> (partitionsToCompute.nonEmpty) { stage.latestInfo.submissionTime = <span class="hljs-type">Some</span>(clock.getTimeMillis()) } listenerBus.post(<span class="hljs-type">SparkListenerStageSubmitted</span>(stage.latestInfo, properties)) <span class="hljs-keyword">var</span> taskBinary: <span class="hljs-type">Broadcast</span>[<span class="hljs-type">Array</span>[<span class="hljs-type">Byte</span>]] = <span class="hljs-literal">null</span> <span class="hljs-keyword">var</span> partitions: <span class="hljs-type">Array</span>[<span class="hljs-type">Partition</span>] = <span class="hljs-literal">null</span> <span class="hljs-keyword">try</span> { <span class="hljs-comment">// 对于 ShuffleMapTask,进行序列化和广播 (rdd, shuffleDep).</span> <span class="hljs-comment">// 对于 ResultTask,进行序列化和广播 (rdd, func).</span> <span class="hljs-keyword">var</span> taskBinaryBytes: <span class="hljs-type">Array</span>[<span class="hljs-type">Byte</span>] = <span class="hljs-literal">null</span> <span class="hljs-comment">// taskBinaryBytes 和分区都受检查点状态影响,如果另一个并发 Job 正在为此 RDD 设置检查点,则需要进行同步</span> <span class="hljs-type">RDDCheckpointData</span>.synchronized { taskBinaryBytes = stage <span class="hljs-keyword">match</span> { <span class="hljs-keyword">case</span> stage: <span class="hljs-type">ShuffleMapStage</span> => <span class="hljs-type">JavaUtils</span>.bufferToArray( closureSerializer.serialize((stage.rdd, stage.shuffleDep): <span class="hljs-type">AnyRef</span>)) <span class="hljs-keyword">case</span> stage: <span class="hljs-type">ResultStage</span> => <span class="hljs-type">JavaUtils</span>.bufferToArray(closureSerializer.serialize((stage.rdd, stage.func): <span class="hljs-type">AnyRef</span>)) } partitions = stage.rdd.partitions } <span class="hljs-comment">// 广播任务的序列化对象</span> taskBinary = sc.broadcast(taskBinaryBytes) } <span class="hljs-keyword">catch</span> { <span class="hljs-comment">// 如果序列化失败,终止该 Stage</span> <span class="hljs-keyword">case</span> e: <span class="hljs-type">NotSerializableException</span> => abortStage(stage, <span class="hljs-string">"Task not serializable: "</span> + e.toString, <span class="hljs-type">Some</span>(e)) runningStages -= stage <span class="hljs-comment">// 终止异常</span> <span class="hljs-keyword">return</span> <span class="hljs-keyword">case</span> e: <span class="hljs-type">Throwable</span> => abortStage(stage, <span class="hljs-string">s"Task serialization failed: <span class="hljs-subst">$e</span>\n<span class="hljs-subst">${Utils.exceptionString(e)}</span>"</span>, <span class="hljs-type">Some</span>(e)) runningStages -= stage <span class="hljs-comment">// 终止异常</span> <span class="hljs-keyword">return</span> } <span class="hljs-keyword">val</span> tasks: <span class="hljs-type">Seq</span>[<span class="hljs-type">Task</span>[_]] = <span class="hljs-keyword">try</span> { <span 
class="hljs-keyword">val</span> serializedTaskMetrics = closureSerializer.serialize(stage.latestInfo.taskMetrics).array() stage <span class="hljs-keyword">match</span> { <span class="hljs-comment">// 为 ShuffleMapStage 的每一个分区创建一个 ShuffleMapTask</span> <span class="hljs-keyword">case</span> stage: <span class="hljs-type">ShuffleMapStage</span> => stage.pendingPartitions.clear() partitionsToCompute.map { id => <span class="hljs-keyword">val</span> locs = taskIdToLocations(id) <span class="hljs-keyword">val</span> part = partitions(id) stage.pendingPartitions += id <span class="hljs-keyword">new</span> <span class="hljs-type">ShuffleMapTask</span>(stage.id, stage.latestInfo.attemptNumber, taskBinary, part, locs, properties, serializedTaskMetrics, <span class="hljs-type">Option</span>(jobId), <span class="hljs-type">Option</span>(sc.applicationId), sc.applicationAttemptId, stage.rdd.isBarrier()) } <span class="hljs-comment">// 为 ResultStage 的每一个分区创建一个 ResultTask</span> <span class="hljs-keyword">case</span> stage: <span class="hljs-type">ResultStage</span> => partitionsToCompute.map { id => <span class="hljs-keyword">val</span> p: <span class="hljs-type">Int</span> = stage.partitions(id) <span class="hljs-keyword">val</span> part = partitions(p) <span class="hljs-keyword">val</span> locs = taskIdToLocations(id) <span class="hljs-keyword">new</span> <span class="hljs-type">ResultTask</span>(stage.id, stage.latestInfo.attemptNumber, taskBinary, part, locs, id, properties, serializedTaskMetrics, <span class="hljs-type">Option</span>(jobId), <span class="hljs-type">Option</span>(sc.applicationId), sc.applicationAttemptId, stage.rdd.isBarrier()) } } } <span class="hljs-keyword">catch</span> { <span class="hljs-keyword">case</span> <span class="hljs-type">NonFatal</span>(e) => abortStage(stage, <span class="hljs-string">s"Task creation failed: <span class="hljs-subst">$e</span>\n<span class="hljs-subst">${Utils.exceptionString(e)}</span>"</span>, <span class="hljs-type">Some</span>(e)) runningStages -= stage <span class="hljs-keyword">return</span> } <span class="hljs-comment">// 调用 TaskScheduler 的 submitTasks 方法提交此批 Task</span> <span class="hljs-keyword">if</span> (tasks.size > <span class="hljs-number">0</span>) { logInfo(<span class="hljs-string">s"Submitting <span class="hljs-subst">${tasks.size}</span> missing tasks from <span class="hljs-subst">$stage</span> (<span class="hljs-subst">${stage.rdd}</span>) (first 15 "</span> + <span class="hljs-string">s"tasks are for partitions <span class="hljs-subst">${tasks.take(15).map(_.partitionId)}</span>)"</span>) taskScheduler.submitTasks(<span class="hljs-keyword">new</span> <span class="hljs-type">TaskSet</span>( tasks.toArray, stage.id, stage.latestInfo.attemptNumber, jobId, properties)) } <span class="hljs-keyword">else</span> { <span class="hljs-comment">// 没有创建任何 Task,将当前 Stage 标记为完成</span> markStageAsFinished(stage, <span class="hljs-type">None</span>) stage <span class="hljs-keyword">match</span> { <span class="hljs-keyword">case</span> stage: <span class="hljs-type">ShuffleMapStage</span> => logDebug(<span class="hljs-string">s"Stage <span class="hljs-subst">${stage}</span> is actually done; "</span> + <span class="hljs-string">s"(available: <span class="hljs-subst">${stage.isAvailable}</span>,"</span> + <span class="hljs-string">s"available outputs: <span class="hljs-subst">${stage.numAvailableOutputs}</span>,"</span> + <span class="hljs-string">s"partitions: <span class="hljs-subst">${stage.numPartitions}</span>)"</span>) 
markMapStageJobsAsFinished(stage) <span class="hljs-keyword">case</span> stage : <span class="hljs-type">ResultStage</span> => logDebug(<span class="hljs-string">s"Stage <span class="hljs-subst">${stage}</span> is actually done; (partitions: <span class="hljs-subst">${stage.numPartitions}</span>)"</span>) } submitWaitingChildStages(stage) }}</code></pre></div><p>DAGScheduler 将 Stage 中各个分区的 Task 封装为 TaskSet 后,会将 TaskSet 交给 TaskSchedulerImpl 处理,此方法是这一过程的入口。</p><ol><li><p>获取 TaskSet 中的所有 Task。</p></li><li><p>调用 createTaskSetManager 方法创建 TaskSetManager</p></li><li><p>在 taskSetsByStageIdAndAttempt 中设置 TaskSet 关联的 Stage、Stage 尝试及刚创建的 TaskSetManager 之间的三级映射关系。</p></li><li><p>将 taskSetsByStageIdAndAttempt 中同属于当前 Stage 的已有 TaskSetManager 标记为僵尸(zombie)状态,避免与新提交的 TaskSet 产生冲突。</p></li><li><p>调用调度池构建器的 addTaskSetManager 方法,将刚创建的 TaskSetManager 添加到调度池构建器的调度池中。</p></li><li><p>如果当前应用程序不是 Local 模式并且 TaskSchedulerImpl 还没有接收到 Task,那么设置一个定时器按照 STARVATION_TIMEOUT_MS 指定的时间间隔检查 TaskSchedulerImpl 的饥饿状况,当 TaskSchedulerImpl 已经运行 Task 后,取消此定时器</p></li><li><p>将 hasReceivedTask 设置为 true,以表示 TaskSchedulerImpl 已经接收到 Task</p></li><li><p>调用 SchedulerBackend 的 reviveOffers 方法给 Task 分配资源并运行 Task</p></li></ol><div class="note note-light"> <p>local 模式(其他模式也类似)</p><ol><li><p>在提交的最后会调用 LocalSchedulerBackend 的 reviveOffers 方法</p></li><li><p>LocalSchedulerBackend 的 reviveOffers 方法只是向 LocalEndpoint 发送 ReviveOffers 消息</p></li><li><p>LocalEndpoint 收到 ReviveOffers 消息后,调用 TaskSchedulerImpl 的 resourceOffers 方法申请资源,TaskSchedulerImpl 将根据任务申请的 CPU 核数、内存、本地化等条件为其分配资源</p></li></ol> </div><div class="hljs code-wrapper"><pre><code class="hljs scala"><span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">submitTasks</span></span>(taskSet: <span class="hljs-type">TaskSet</span>) { <span class="hljs-comment">// 获取 TaskSet 中的所有 Task</span> <span class="hljs-keyword">val</span> tasks = taskSet.tasks logInfo(<span class="hljs-string">"Adding task set "</span> + taskSet.id + <span class="hljs-string">" with "</span> + tasks.length + <span class="hljs-string">" tasks"</span>) <span class="hljs-keyword">this</span>.synchronized { <span class="hljs-keyword">val</span> manager = createTaskSetManager(taskSet, maxTaskFailures) <span class="hljs-keyword">val</span> stage = taskSet.stageId <span class="hljs-keyword">val</span> stageTaskSets = taskSetsByStageIdAndAttempt.getOrElseUpdate(stage, <span class="hljs-keyword">new</span> <span class="hljs-type">HashMap</span>[<span class="hljs-type">Int</span>, <span class="hljs-type">TaskSetManager</span>]) <span class="hljs-comment">// 将所有现有 TaskSetManager 标记为僵尸(当 TaskSetManager 所管理的 TaskSet 中所有 Task 都执行成功了,不再有更多的 Task 尝试被启动时,就处于“僵尸”状态)</span> stageTaskSets.foreach { <span class="hljs-keyword">case</span> (_, ts) => ts.isZombie = <span class="hljs-literal">true</span> } stageTaskSets(taskSet.stageAttemptId) = manager schedulableBuilder.addTaskSetManager(manager, manager.taskSet.properties) <span class="hljs-comment">// 设置检查 TaskSchedulerImpl 的饥饿状况的定时器</span> <span class="hljs-keyword">if</span> (!isLocal && !hasReceivedTask) { starvationTimer.scheduleAtFixedRate(<span class="hljs-keyword">new</span> <span class="hljs-type">TimerTask</span>() { <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">run</span></span>() { <span class="hljs-keyword">if</span> (!hasLaunchedTask) { logWarning(<span class="hljs-string">"Initial job has not accepted any resources; "</span> + <span 
class="hljs-string">"check your cluster UI to ensure that workers are registered "</span> + <span class="hljs-string">"and have sufficient resources"</span>) } <span class="hljs-keyword">else</span> { <span class="hljs-keyword">this</span>.cancel() } } }, <span class="hljs-type">STARVATION_TIMEOUT_MS</span>, <span class="hljs-type">STARVATION_TIMEOUT_MS</span>) } <span class="hljs-comment">// 表示 TaskSchedulerImpl 已经接收到 Task</span> hasReceivedTask = <span class="hljs-literal">true</span> } <span class="hljs-comment">// 给 Task 分配资源并运行 Task</span> backend.reviveOffers()}</code></pre></div><p>上述代码中会向 SchedulableBuilder 添加 TaskSetManager,这个 SchedulableBuilder 定义的是调度池构建器的行为规范,针对 FIFO 和 FAIR 两种调度算法,默认调用实现 FIFOSchedulableBuilder。然后向根调度池中添加 TaskSetManager。</p><div class="hljs code-wrapper"><pre><code class="hljs scala"><span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">addTaskSetManager</span></span>(manager: <span class="hljs-type">Schedulable</span>, properties: <span class="hljs-type">Properties</span>) { rootPool.addSchedulable(manager)}</code></pre></div><p>将 Schedulable 添加到 schedulableQueue 和 schedulableNameToSchedulable 中,并将 Schedulable 的父亲设置为当前 Pool。</p><div class="hljs code-wrapper"><pre><code class="hljs scala"><span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">addSchedulable</span></span>(schedulable: <span class="hljs-type">Schedulable</span>) { require(schedulable != <span class="hljs-literal">null</span>) schedulableQueue.add(schedulable) schedulableNameToSchedulable.put(schedulable.name, schedulable) schedulable.parent = <span class="hljs-keyword">this</span>}</code></pre></div><p>继续上文,通过 SchedulerBackend 给调度池中的所有 Task 分配资源。在 CoarseGrainedSchedulerBackend 中通过 driverEndpoint 发送 ReviveOffers 消息,在接收到消息后,继续进行处理。</p><div class="hljs code-wrapper"><pre><code class="hljs scala"><span class="hljs-keyword">private</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">makeOffers</span></span>() { <span class="hljs-comment">// 确保在有 Task 运行的时候没有杀死 Executor</span> <span class="hljs-keyword">val</span> taskDescs = withLock { <span class="hljs-comment">// 过滤掉被杀的 Executor</span> <span class="hljs-keyword">val</span> activeExecutors = executorDataMap.filterKeys(executorIsAlive) <span class="hljs-keyword">val</span> workOffers = activeExecutors.map { <span class="hljs-keyword">case</span> (id, executorData) => <span class="hljs-keyword">new</span> <span class="hljs-type">WorkerOffer</span>(id, executorData.executorHost, executorData.freeCores, <span class="hljs-type">Some</span>(executorData.executorAddress.hostPort)) }.toIndexedSeq <span class="hljs-comment">// 接收资源消息</span> scheduler.resourceOffers(workOffers) } <span class="hljs-keyword">if</span> (!taskDescs.isEmpty) { <span class="hljs-comment">// 启动 Task</span> launchTasks(taskDescs) }}</code></pre></div><p>给 Task 分配资源:</p><ol><li><p>遍历 WorkerOffer 序列,对每一个 WorkerOffer 执行以下操作:</p><ul><li><p>更新 Host 与 Executor 的各种映射关系。</p></li><li><p>调用 TaskSchedulerImpl 的 executorAdded 方法(此方法实际仅仅调用了 DagScheduler 的 executorAdded 方法)向 DagScheduler 的 DagSchedulerEventProcessLoop 投递 ExecutorAdded 事件。</p></li><li><p>标记添加了新的 Executor(即将 newExecAvail 设置为 true)</p></li><li><p>更新 Host 与机架之间的关系</p></li></ul></li><li><p>对所有 WorkerOffer 随机洗牌,避免将任务总是分配给同样一组 Worker</p></li><li><p>根据每个 WorkerOffer 的可用的 CPU 核数创建同等尺寸的任务描述(TaskDescription)数组</p></li><li><p>将每个 WorkerOffer 
的可用的 CPU 核数统计到可用 CPU (availableCpus)数组中</p></li><li><p>调用 rootPool 的 getSortedTaskSetQueue 方法,对 rootPool 中的所有 TaskSetManager 按照调度算法排序</p></li><li><p>如果 newExecAvail 为 true,那么调用每个 TaskSetManager 的 executorAdded 方法。此 executorAdded 方法实际调用了 computeValidLocalityLevels 方法重新计算 TaskSet 的本地性</p></li><li><p>遍历 TaskSetManager,按照最大本地性的原则(即从高本地性级别到低本地性级别调用 resourceOfferSingleTaskSet,给单个 TaskSet 中的 Task 提供资源。如果在任何 TaskSet 所允许的本地性级别下,TaskSet 中没有任何一个任务获得了资源,那么将调用 TaskSetManager 的 abortSinceCompletelyBlacklisted 方法,放弃在黑名单中的 Task</p></li><li><p>返回生成的 TaskDescription 列表,即已经获得了资源的任务列表</p></li></ol><div class="hljs code-wrapper"><pre><code class="hljs scala"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">resourceOffers</span></span>(offers: <span class="hljs-type">IndexedSeq</span>[<span class="hljs-type">WorkerOffer</span>]): <span class="hljs-type">Seq</span>[<span class="hljs-type">Seq</span>[<span class="hljs-type">TaskDescription</span>]] = synchronized { <span class="hljs-comment">// 将每个 slave 标记为活动的,记录其主机名,并追踪是否添加了新的 Executor</span> <span class="hljs-keyword">var</span> newExecAvail = <span class="hljs-literal">false</span> <span class="hljs-keyword">for</span> (o <- offers) { <span class="hljs-keyword">if</span> (!hostToExecutors.contains(o.host)) { hostToExecutors(o.host) = <span class="hljs-keyword">new</span> <span class="hljs-type">HashSet</span>[<span class="hljs-type">String</span>]() } <span class="hljs-comment">// 更新 Host 与 Executor 的各种映射关系</span> <span class="hljs-keyword">if</span> (!executorIdToRunningTaskIds.contains(o.executorId)) { hostToExecutors(o.host) += o.executorId executorAdded(o.executorId, o.host) executorIdToHost(o.executorId) = o.host executorIdToRunningTaskIds(o.executorId) = <span class="hljs-type">HashSet</span>[<span class="hljs-type">Long</span>]() <span class="hljs-comment">// 标记添加了新的 Executor</span> newExecAvail = <span class="hljs-literal">true</span> } <span class="hljs-comment">// 更新 Host 与机架之间的关系</span> <span class="hljs-keyword">for</span> (rack <- getRackForHost(o.host)) { hostsByRack.getOrElseUpdate(rack, <span class="hljs-keyword">new</span> <span class="hljs-type">HashSet</span>[<span class="hljs-type">String</span>]()) += o.host } } <span class="hljs-comment">// 提供资源之前,从黑名单中删除过期节点,在这里操作是为了避免使用单独的线程增加开销,也因为只有在提供资源时才需要更新黑名单</span> blacklistTrackerOpt.foreach(_.applyBlacklistTimeout()) <span class="hljs-keyword">val</span> filteredOffers = blacklistTrackerOpt.map { blacklistTracker => offers.filter { offer => !blacklistTracker.isNodeBlacklisted(offer.host) && !blacklistTracker.isExecutorBlacklisted(offer.executorId) } }.getOrElse(offers) <span class="hljs-comment">// 随机 shuffle,避免将任务总是分配给同样一组 Worker</span> <span class="hljs-keyword">val</span> shuffledOffers = shuffleOffers(filteredOffers) <span class="hljs-comment">// 建立分配给每个 Worker 的任务列表</span> <span class="hljs-keyword">val</span> tasks = shuffledOffers.map(o => <span class="hljs-keyword">new</span> <span class="hljs-type">ArrayBuffer</span>[<span class="hljs-type">TaskDescription</span>](o.cores / <span class="hljs-type">CPUS_PER_TASK</span>)) <span class="hljs-comment">// 统计每个 Worker 的可用 CPU 核数</span> <span class="hljs-keyword">val</span> availableCpus = shuffledOffers.map(o => o.cores).toArray <span class="hljs-comment">// 所有 TaskSetManager 按照调度算法排序</span> <span class="hljs-keyword">val</span> sortedTaskSets = rootPool.getSortedTaskSetQueue <span class="hljs-keyword">for</span> (taskSet <- sortedTaskSets) { logDebug(<span class="hljs-string">"parentName: %s, name: 
%s, runningTasks: %s"</span>.format( taskSet.parent.name, taskSet.name, taskSet.runningTasks)) <span class="hljs-keyword">if</span> (newExecAvail) { <span class="hljs-comment">// 重新计算 TaskSet 的本地性</span> taskSet.executorAdded() } } <span class="hljs-comment">// 按照调度算法顺序获取 TaskSet,然后按照数据的本地性级别升序提供给每个节点,以便在所有节点上启动本地任务。所有的本地性级别顺序: PROCESS_LOCAL, NODE_LOCAL, NO_PREF, RACK_LOCAL, ANY</span> <span class="hljs-keyword">for</span> (taskSet <- sortedTaskSets) { <span class="hljs-keyword">val</span> availableSlots = availableCpus.map(c => c / <span class="hljs-type">CPUS_PER_TASK</span>).sum <span class="hljs-comment">// 如果可获得的资源数少于挂起的任务数,那么跳过有障碍的 TaskSet</span> <span class="hljs-keyword">if</span> (taskSet.isBarrier && availableSlots < taskSet.numTasks) { logInfo(<span class="hljs-string">s"Skip current round of resource offers for barrier stage <span class="hljs-subst">${taskSet.stageId}</span> "</span> + <span class="hljs-string">s"because the barrier taskSet requires <span class="hljs-subst">${taskSet.numTasks}</span> slots, while the total "</span> + <span class="hljs-string">s"number of available slots is <span class="hljs-subst">$availableSlots</span>."</span>) } <span class="hljs-keyword">else</span> { <span class="hljs-keyword">var</span> launchedAnyTask = <span class="hljs-literal">false</span> <span class="hljs-comment">// 记录有障碍的 Task 所在的 Executor ID</span> <span class="hljs-keyword">val</span> addressesWithDescs = <span class="hljs-type">ArrayBuffer</span>[(<span class="hljs-type">String</span>, <span class="hljs-type">TaskDescription</span>)]() <span class="hljs-comment">// 按照最大本地性的原则,给 Task 提供资源</span> <span class="hljs-keyword">for</span> (currentMaxLocality <- taskSet.myLocalityLevels) { <span class="hljs-keyword">var</span> launchedTaskAtCurrentMaxLocality = <span class="hljs-literal">false</span> <span class="hljs-keyword">do</span> { <span class="hljs-comment">// 给单个 TaskSet 中的 Task 提供资源</span> launchedTaskAtCurrentMaxLocality = resourceOfferSingleTaskSet(taskSet, currentMaxLocality, shuffledOffers, availableCpus, tasks, addressesWithDescs) launchedAnyTask |= launchedTaskAtCurrentMaxLocality } <span class="hljs-keyword">while</span> (launchedTaskAtCurrentMaxLocality) } <span class="hljs-keyword">if</span> (!launchedAnyTask) { taskSet.getCompletelyBlacklistedTaskIfAny(hostToExecutors).foreach { taskIndex => executorIdToRunningTaskIds.find(x => !isExecutorBusy(x._1)) <span class="hljs-keyword">match</span> { <span class="hljs-keyword">case</span> <span class="hljs-type">Some</span> ((executorId, _)) => <span class="hljs-keyword">if</span> (!unschedulableTaskSetToExpiryTime.contains(taskSet)) { blacklistTrackerOpt.foreach(blt => blt.killBlacklistedIdleExecutor(executorId)) <span class="hljs-keyword">val</span> timeout = conf.get(config.<span class="hljs-type">UNSCHEDULABLE_TASKSET_TIMEOUT</span>) * <span class="hljs-number">1000</span> unschedulableTaskSetToExpiryTime(taskSet) = clock.getTimeMillis() + timeout logInfo(<span class="hljs-string">s"Waiting for <span class="hljs-subst">$timeout</span> ms for completely "</span> + <span class="hljs-string">s"blacklisted task to be schedulable again before aborting <span class="hljs-subst">$taskSet</span>."</span>) abortTimer.schedule( createUnschedulableTaskSetAbortTimer(taskSet, taskIndex), timeout) } <span class="hljs-keyword">case</span> <span class="hljs-type">None</span> => <span class="hljs-comment">// 立即终止</span> logInfo(<span class="hljs-string">"Cannot schedule any task because of complete blacklisting. 
No idle"</span> + <span class="hljs-string">s" executors can be found to kill. Aborting <span class="hljs-subst">$taskSet</span>."</span> ) taskSet.abortSinceCompletelyBlacklisted(taskIndex) } } } <span class="hljs-keyword">else</span> { <span class="hljs-keyword">if</span> (unschedulableTaskSetToExpiryTime.nonEmpty) { logInfo(<span class="hljs-string">"Clearing the expiry times for all unschedulable taskSets as a task was "</span> + <span class="hljs-string">"recently scheduled."</span>) unschedulableTaskSetToExpiryTime.clear() } } <span class="hljs-keyword">if</span> (launchedAnyTask && taskSet.isBarrier) { <span class="hljs-comment">// 检查有障碍的 task 是否部分启动</span> require(addressesWithDescs.size == taskSet.numTasks, <span class="hljs-string">s"Skip current round of resource offers for barrier stage <span class="hljs-subst">${taskSet.stageId}</span> "</span> + <span class="hljs-string">s"because only <span class="hljs-subst">${addressesWithDescs.size}</span> out of a total number of "</span> + <span class="hljs-string">s"<span class="hljs-subst">${taskSet.numTasks}</span> tasks got resource offers. The resource offers may have "</span> + <span class="hljs-string">"been blacklisted or cannot fulfill task locality requirements."</span>) maybeInitBarrierCoordinator() <span class="hljs-keyword">val</span> addressesStr = addressesWithDescs <span class="hljs-comment">// Addresses ordered by partitionId</span> .sortBy(_._2.partitionId) .map(_._1) .mkString(<span class="hljs-string">","</span>) addressesWithDescs.foreach(_._2.properties.setProperty(<span class="hljs-string">"addresses"</span>, addressesStr)) logInfo(<span class="hljs-string">s"Successfully scheduled all the <span class="hljs-subst">${addressesWithDescs.size}</span> tasks for barrier "</span> + <span class="hljs-string">s"stage <span class="hljs-subst">${taskSet.stageId}</span>."</span>) } } } <span class="hljs-keyword">if</span> (tasks.size > <span class="hljs-number">0</span>) { hasLaunchedTask = <span class="hljs-literal">true</span> } <span class="hljs-comment">// 返回已经获得了资源的任务列表</span> <span class="hljs-keyword">return</span> tasks}</code></pre></div><p>上述中的 resourceOfferSingleTaskSet 方法给单个 TaskSet 提供资源,获取 WorkerOffer 相关信息并给符合条件的 Task 创建 TaskDescription 以分配资源。</p><div class="hljs code-wrapper"><pre><code class="hljs scala"><span class="hljs-keyword">private</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">resourceOfferSingleTaskSet</span></span>( taskSet: <span class="hljs-type">TaskSetManager</span>, maxLocality: <span class="hljs-type">TaskLocality</span>, shuffledOffers: <span class="hljs-type">Seq</span>[<span class="hljs-type">WorkerOffer</span>], availableCpus: <span class="hljs-type">Array</span>[<span class="hljs-type">Int</span>], tasks: <span class="hljs-type">IndexedSeq</span>[<span class="hljs-type">ArrayBuffer</span>[<span class="hljs-type">TaskDescription</span>]], addressesWithDescs: <span class="hljs-type">ArrayBuffer</span>[(<span class="hljs-type">String</span>, <span class="hljs-type">TaskDescription</span>)]) : <span class="hljs-type">Boolean</span> = { <span class="hljs-keyword">var</span> launchedTask = <span class="hljs-literal">false</span> <span class="hljs-comment">// 到目前为止,整个应用程序中列入黑名单的节点和 Executor 已被滤除</span> <span class="hljs-keyword">for</span> (i <- <span class="hljs-number">0</span> until shuffledOffers.size) { <span class="hljs-keyword">val</span> execId = shuffledOffers(i).executorId <span class="hljs-keyword">val</span> host = 
shuffledOffers(i).host <span class="hljs-keyword">if</span> (availableCpus(i) >= <span class="hljs-type">CPUS_PER_TASK</span>) { <span class="hljs-keyword">try</span> { <span class="hljs-comment">// 给符合条件的待处理 Task 创建 TaskDescription</span> <span class="hljs-keyword">for</span> (task <- taskSet.resourceOffer(execId, host, maxLocality)) { tasks(i) += task <span class="hljs-keyword">val</span> tid = task.taskId taskIdToTaskSetManager.put(tid, taskSet) taskIdToExecutorId(tid) = execId executorIdToRunningTaskIds(execId).add(tid) availableCpus(i) -= <span class="hljs-type">CPUS_PER_TASK</span> assert(availableCpus(i) >= <span class="hljs-number">0</span>) <span class="hljs-keyword">if</span> (taskSet.isBarrier) { addressesWithDescs += (shuffledOffers(i).address.get -> task) } launchedTask = <span class="hljs-literal">true</span> } } <span class="hljs-keyword">catch</span> { <span class="hljs-keyword">case</span> e: <span class="hljs-type">TaskNotSerializableException</span> => logError(<span class="hljs-string">s"Resource offer failed, task set <span class="hljs-subst">${taskSet.name}</span> was not serializable"</span>) <span class="hljs-comment">// 序列化异常,不为该 Task 提供资源,但是不能抛错,允许其他 TaskSet 提交</span> <span class="hljs-keyword">return</span> launchedTask } } } <span class="hljs-keyword">return</span> launchedTask}</code></pre></div><p>当资源申请完后,由 Driver 向 Executor 发送启动 Task 的消息 LaunchTask,至此任务调度流程分析完毕。</p><div class="hljs code-wrapper"><pre><code class="hljs scala"><span class="hljs-keyword">private</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">launchTasks</span></span>(tasks: <span class="hljs-type">Seq</span>[<span class="hljs-type">Seq</span>[<span class="hljs-type">TaskDescription</span>]]) { <span class="hljs-keyword">for</span> (task <- tasks.flatten) { <span class="hljs-keyword">val</span> serializedTask = <span class="hljs-type">TaskDescription</span>.encode(task) <span class="hljs-keyword">if</span> (serializedTask.limit() >= maxRpcMessageSize) { <span class="hljs-type">Option</span>(scheduler.taskIdToTaskSetManager.get(task.taskId)).foreach { taskSetMgr => <span class="hljs-keyword">try</span> { <span class="hljs-keyword">var</span> msg = <span class="hljs-string">"Serialized task %s:%d was %d bytes, which exceeds max allowed: "</span> + <span class="hljs-string">"spark.rpc.message.maxSize (%d bytes). 
Consider increasing "</span> + <span class="hljs-string">"spark.rpc.message.maxSize or using broadcast variables for large values."</span> msg = msg.format(task.taskId, task.index, serializedTask.limit(), maxRpcMessageSize) taskSetMgr.abort(msg) } <span class="hljs-keyword">catch</span> { <span class="hljs-keyword">case</span> e: <span class="hljs-type">Exception</span> => logError(<span class="hljs-string">"Exception in error callback"</span>, e) } } } <span class="hljs-keyword">else</span> { <span class="hljs-keyword">val</span> executorData = executorDataMap(task.executorId) executorData.freeCores -= scheduler.<span class="hljs-type">CPUS_PER_TASK</span> logDebug(<span class="hljs-string">s"Launching task <span class="hljs-subst">${task.taskId}</span> on executor id: <span class="hljs-subst">${task.executorId}</span> hostname: "</span> + <span class="hljs-string">s"<span class="hljs-subst">${executorData.executorHost}</span>."</span>) executorData.executorEndpoint.send(<span class="hljs-type">LaunchTask</span>(<span class="hljs-keyword">new</span> <span class="hljs-type">SerializableBuffer</span>(serializedTask))) } }}</code></pre></div>]]></content>
<categories>
<category>分布式系统</category>
<category>分布式计算</category>
<category>Spark</category>
</categories>
<tags>
<tag>Spark</tag>
</tags>
</entry>
<entry>
<title>Spark SQL 执行流程</title>
<link href="/2021/04/06/Spark-SQL-%E6%89%A7%E8%A1%8C%E6%B5%81%E7%A8%8B/"/>
<url>/2021/04/06/Spark-SQL-%E6%89%A7%E8%A1%8C%E6%B5%81%E7%A8%8B/</url>
<content type="html"><![CDATA[<h1 id="Spark-SQL-执行流程"><a href="#Spark-SQL-执行流程" class="headerlink" title="Spark SQL 执行流程"></a>Spark SQL 执行流程</h1><p>一般来说,从 SQL 转换到 RDD 执行需要经过两个大阶段,分别是逻辑计划(LogicalPlan)和物理计划(SparkPlan),而在整个 Spark 的执行过程中,其代码都是惰性的,即到最后 SQL 真正执行的时候,整个代码才会从后向前按调用的依赖顺序执行。</p><h2 id="概述"><a href="#概述" class="headerlink" title="概述"></a>概述</h2><ol><li><p>逻辑计划</p><ul><li><p><code>Unresolved LogicalPlan</code>:仅仅是数据结构,不包含具体数据</p></li><li><p><code>Analyzed LogicalPlan</code>:绑定与数据对应的具体信息</p></li><li><p><code>Optimized LogicalPlan</code>:应用优化规则</p></li></ul></li><li><p>物理计划</p><ul><li><p><code>Iterator[PhysicalPlan]</code>:生成物理算子树的列表</p></li><li><p><code>SparkPlan</code>:按照策略选取最优的物理算子树</p></li><li><p><code>Prepared SparkPlan</code>:进行提交前的准备工作</p></li></ul></li></ol><h2 id="流程图"><a href="#流程图" class="headerlink" title="流程图"></a>流程图</h2><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/SQLExecutionFlow-1737348368885.png"></p><h1 id="源码分析"><a href="#源码分析" class="headerlink" title="源码分析"></a>源码分析</h1><p>SparkSession 类中的 sql 方法是 Spark 执行 SQL 查询的入口,sqlText 即为用户输入的 SQL 语句,其中 parsePlan 方法则是对 SQL 语句进行解析,Spark 使用的编译器语法基于 ANTLR4 这一工具,下文会稍微提及此部分内容。</p><div class="hljs code-wrapper"><pre><code class="hljs java">def <span class="hljs-title function_">sql</span><span class="hljs-params">(sqlText: String)</span>: DataFrame = { Dataset.ofRows(self, sessionState.sqlParser.parsePlan(sqlText))}</code></pre></div><h2 id="Parser"><a href="#Parser" class="headerlink" title="Parser"></a>Parser</h2><h3 id="ANTLR4"><a href="#ANTLR4" class="headerlink" title="ANTLR4"></a>ANTLR4</h3><p>ANTLR4 有两种遍历模式,一种是监听模式,属于被动型的;另一种是访问者模式,属于主动型的,这也是 Spark 使用的遍历模式,可以显示地定义遍历语法树的顺序。</p><p>在 Spark 中体现为 <span class="label label-primary">SqlBase.g4</span> 文件,包含词法分析器(SqlBaseLexer)、语法分析器(SqlBaseParser)和访问者类(SqlBaseVisitor 接口与 SqlBaseBaseVisitor 类)。</p><p>也就是说,如果用户需要增加新的语法,在 <span class="label label-primary">SqlBase.g4</span> 文件中增加相应语法和词法后,重新编译后即增加了新的语法句式,然后便可以基于 AstBuilder(SparkSqlAstBuilder) 中对新增的语法进行逻辑补充,这种可以直接执行的都属于 Command,后面会以实例进行说明。</p><h3 id="AbstractSqlParser"><a href="#AbstractSqlParser" class="headerlink" title="AbstractSqlParser"></a>AbstractSqlParser</h3><p>Spark SQL 中的 Catalyst 中提供了直接面向用户的 ParserInterface 接口,该接口中包含了对 SQL 语句、Expression 表达式和 TableIdentifier 数据表标识符等的解析方法。AbstractSqlParser 继承了 ParserInterface,主要借助 AstBuilder 对语法树进行解析(遵循后序遍历方式)。</p><h3 id="SQL-实例"><a href="#SQL-实例" class="headerlink" title="SQL 实例"></a>SQL 实例</h3><p>以 IDEA 为例,先安装 ANTLR4 的插件,然后右键选择 <span class="label label-primary">singleStatement</span>,点击 <span class="label label-secondary">Test Rule singleStatement</span> 进行调试,这里我们输入一个简单的 SQL:<span class="label label-success">DROP TABLE IF EXISTS SPARKTEST</span></p><div class="note note-warning"> <p>此处的所有字母均为大写,因为这里对应的语法和词法是区分大小写的,我们仅仅是在调试对应的 SQL 语法树,Spark 在后面的解析中利用 UpperCaseCharStream 才会将 SQL 都转为大写进行处理,所以用户的 SQL 语句不需要大写,如下图。</p> </div><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/drop_sql-1737348368879.png"></p><p>现在我们来自定义一条 SQL 语法:<span class="label label-success">SHOW STATUS</span></p><ul><li>修改 <span class="label label-primary">SqlBase.g4</span> 文件,新增词法和语法</li></ul><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/sql_yf-1737348368884.png"><br><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/sql_cf-1737348368882.png"></p><ul><li>这里我是整个项目编译的,因为之前编译的不小心清除了,只需要编译 Spark SQL 模块即可</li></ul><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/sql_by-1737348368880.png"></p><ul><li>编写相应逻辑的 ShowStatusCommand 
样例类</li></ul><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/sql_yl-1737348368884.png"></p><ul><li>在 SparkSqlAstBuilder 中增加对外接口</li></ul><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/sql_jk-1737348368882.png"></p><ul><li>通过 Spark API 使用 SQL 输出结果</li></ul><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/sql_api-1737348368880.png"><br><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/sql_result-1737348368883.png"></p><h2 id="Logical-Spark-Plan"><a href="#Logical-Spark-Plan" class="headerlink" title="Logical/Spark Plan"></a>Logical/Spark Plan</h2><p>在 Dataset 中继续对未解析的逻辑计划进行解析,本文仅针对 Command 部分的逻辑计划举例分析。</p><div class="hljs code-wrapper"><pre><code class="hljs java">def <span class="hljs-title function_">ofRows</span><span class="hljs-params">(sparkSession: SparkSession, logicalPlan: LogicalPlan)</span>: DataFrame = { <span class="hljs-type">val</span> <span class="hljs-variable">qe</span> <span class="hljs-operator">=</span> sparkSession.sessionState.executePlan(logicalPlan) qe.assertAnalyzed() <span class="hljs-keyword">new</span> <span class="hljs-title class_">Dataset</span>[Row](sparkSession, qe, RowEncoder(qe.analyzed.schema))}</code></pre></div><p>可以看到,在 QueryExecution 中,将未解析的逻辑计划转换为解析的逻辑计划,详细代码在 Analyzer 的 executeAndCheck 方法中,最终调用了特质 CheckAnalysis 的 checkAnalysis 方法进行数据的绑定解析,代码较长此处就不贴了,这一过程会对应表(Relation)、Where 后的过滤条件(Filter)、查询的列(Project)、别名(Cast)等等进行绑定,解析失败会抛出相应错误。</p><div class="hljs code-wrapper"><pre><code class="hljs java">def <span class="hljs-title function_">assertAnalyzed</span><span class="hljs-params">()</span>: Unit = analyzedlazy val analyzed: LogicalPlan = { SparkSession.setActiveSession(sparkSession) sparkSession.sessionState.analyzer.executeAndCheck(logical)}</code></pre></div><p>当执行到 Dataset 中 ofRows 最后一行 <code>new Dataset[Row](...)</code> 时,会调用到初始化 logicalPlan 的地方,到这里开始向前追溯,需要判断当前解析后的逻辑计划是 Command 还是其他的逻辑计划。</p><div class="note note-warning"> <p>Command 在 Spark 中比较特殊,可以直接在 Driver 端执行,此处我们基于上面的 DropTableCommand 来分析。</p> </div><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-meta">@transient</span> <span class="hljs-keyword">private</span>[sql] val logicalPlan: LogicalPlan = { queryExecution.analyzed match { <span class="hljs-keyword">case</span> c: Command => LocalRelation(c.output, withAction(<span class="hljs-string">"command"</span>, queryExecution)(_.executeCollect())) <span class="hljs-keyword">case</span> u @ Union(children) <span class="hljs-keyword">if</span> children.forall(_.isInstanceOf[Command]) => LocalRelation(u.output, withAction(<span class="hljs-string">"command"</span>, queryExecution)(_.executeCollect())) <span class="hljs-type">case</span> <span class="hljs-variable">_</span> <span class="hljs-operator">=</span>> queryExecution.analyzed } }</code></pre></div><p>往下继续执行,无论是 Command 还是其他的逻辑计划,均会经历下面的过程,不同的是 Command 直接就执行了,而其他的逻辑计划如查询等则会经历更多的变换过程直至发往 Executor 执行。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">private</span> def withAction[U](name: String, qe: QueryExecution)(action: SparkPlan => U) = { <span class="hljs-keyword">try</span> { qe.executedPlan.foreach { plan => plan.resetMetrics() } <span class="hljs-type">val</span> <span class="hljs-variable">start</span> <span class="hljs-operator">=</span> System.nanoTime() <span class="hljs-type">val</span> <span class="hljs-variable">result</span> <span class="hljs-operator">=</span> SQLExecution.withNewExecutionId(sparkSession, qe) { 
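<span class="hljs-comment">// 在新的 executionId 下执行传入的 action(如 executeCollect),便于在 SQL UI 中跟踪本次执行</span>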
action(qe.executedPlan) } <span class="hljs-type">val</span> <span class="hljs-variable">end</span> <span class="hljs-operator">=</span> System.nanoTime() sparkSession.listenerManager.onSuccess(name, qe, end - start) result } <span class="hljs-keyword">catch</span> { <span class="hljs-keyword">case</span> e: Exception => sparkSession.listenerManager.onFailure(name, qe, e) <span class="hljs-keyword">throw</span> e }}</code></pre></div><p>前面说过,Spark 是惰性执行的,我们看一下 QueryExecution 中的部分代码,当真正需要执行的时候才会从 Prepared SparkPlan 向前追溯并按照依赖顺序执行,中间还有很多过程,包括优化逻辑计划,运用策略转换物理计划,选取最优物理计划等等,以下是逻辑计划和物理计划的转换过程部分代码。</p><div class="hljs code-wrapper"><pre><code class="hljs java">lazy val executedPlan: SparkPlan = prepareForExecution(sparkPlan)lazy val sparkPlan: SparkPlan = { SparkSession.setActiveSession(sparkSession) planner.plan(ReturnAnswer(optimizedPlan)).next()}lazy val optimizedPlan: LogicalPlan = sparkSession.sessionState.optimizer.execute(withCachedData)lazy val withCachedData: LogicalPlan = { assertAnalyzed() assertSupported() sparkSession.sharedState.cacheManager.useCachedData(analyzed)}lazy val analyzed: LogicalPlan = { SparkSession.setActiveSession(sparkSession) sparkSession.sessionState.analyzer.executeAndCheck(logical)}</code></pre></div><h3 id="Command"><a href="#Command" class="headerlink" title="Command"></a>Command</h3><p>接上面 Dataset 里 logicalPlan 中的 <code>_.executeCollect()</code> 方法,由于是可执行 Command,所以调用至 ExecutedCommandExec 的 executeCollect 方法继续执行,最终调用了 DropTableCommand 的 run 方法执行。</p><div class="hljs code-wrapper"><pre><code class="hljs java">override def <span class="hljs-title function_">executeCollect</span><span class="hljs-params">()</span>: Array[InternalRow] = sideEffectResult.toArray<span class="hljs-keyword">protected</span>[sql] lazy val sideEffectResult: Seq[InternalRow] = { <span class="hljs-type">val</span> <span class="hljs-variable">converter</span> <span class="hljs-operator">=</span> CatalystTypeConverters.createToCatalystConverter(schema) cmd.run(sqlContext.sparkSession).map(converter(_).asInstanceOf[InternalRow])}</code></pre></div><p>DropTableCommand 继承了 RunnableCommand,而 RunnableCommand 则包装在 ExecutedCommandExec 中,下面的代码可以看到,先根据表名取出对应表的元数据信息,然后清除缓存并刷新缓存状态,再调用 SessionCatalog 的 dropTable 方法,如果是 Hive 表,则会调用 externalCatalog(HiveExternalCatalog)的 dropTable 方法对表进行删除。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">case</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">DropTableCommand</span>( tableName: TableIdentifier, ifExists: Boolean, isView: Boolean, purge: Boolean) <span class="hljs-keyword">extends</span> <span class="hljs-title class_">RunnableCommand</span> { override def <span class="hljs-title function_">run</span><span class="hljs-params">(sparkSession: SparkSession)</span>: Seq[Row] = { <span class="hljs-type">val</span> <span class="hljs-variable">catalog</span> <span class="hljs-operator">=</span> sparkSession.sessionState.catalog <span class="hljs-type">val</span> <span class="hljs-variable">isTempView</span> <span class="hljs-operator">=</span> catalog.isTemporaryTable(tableName) <span class="hljs-keyword">if</span> (!isTempView && catalog.tableExists(tableName)) { catalog.getTableMetadata(tableName).tableType match { <span class="hljs-keyword">case</span> CatalogTableType.VIEW <span class="hljs-keyword">if</span> !isView => <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">AnalysisException</span>( <span class="hljs-string">"Cannot drop 
a view with DROP TABLE. Please use DROP VIEW instead"</span>) <span class="hljs-keyword">case</span> o <span class="hljs-keyword">if</span> o != CatalogTableType.VIEW && isView => <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">AnalysisException</span>( s<span class="hljs-string">"Cannot drop a table with DROP VIEW. Please use DROP TABLE instead"</span>) <span class="hljs-type">case</span> <span class="hljs-variable">_</span> <span class="hljs-operator">=</span>> } } <span class="hljs-keyword">if</span> (isTempView || catalog.tableExists(tableName)) { <span class="hljs-keyword">try</span> { sparkSession.sharedState.cacheManager.uncacheQuery( sparkSession.table(tableName), cascade = !isTempView) } <span class="hljs-keyword">catch</span> { <span class="hljs-keyword">case</span> <span class="hljs-title function_">NonFatal</span><span class="hljs-params">(e)</span> => log.warn(e.toString, e) } catalog.refreshTable(tableName) catalog.dropTable(tableName, ifExists, purge) } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (ifExists) { <span class="hljs-comment">// no-op</span> } <span class="hljs-keyword">else</span> { <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">AnalysisException</span>(s<span class="hljs-string">"Table or view not found: ${tableName.identifier}"</span>) } Seq.empty[Row] }}</code></pre></div><p>Spark SQL 中的 Catalog 体系实现以 SessionCatalog 为主体,通过 SparkSession 提供给外部调用,它起到了一个代理的作用,对底层的元数据信息、临时表信息、视图信息和函数信息进行了封装。初始化过程在 BaseSessionStateBuilder 类,而 externalCatalog 则是基于配置参数 <code>spark.sql.catalogImplementation</code> 进行匹配选择的,代码位于 SharedState 类,默认是 <code>in-memory</code> 即内存模式,可选的是 <code>hive</code> 模式,至此 DropTableCommand 分析完毕。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">protected</span> lazy val catalog: SessionCatalog = { <span class="hljs-type">val</span> <span class="hljs-variable">catalog</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">SessionCatalog</span>( () => session.sharedState.externalCatalog, () => session.sharedState.globalTempViewManager, functionRegistry, conf, SessionState.newHadoopConf(session.sparkContext.hadoopConfiguration, conf), sqlParser, resourceLoader) parentState.foreach(_.catalog.copyStateTo(catalog)) catalog }</code></pre></div>]]></content>
<categories>
<category>分布式系统</category>
<category>分布式计算</category>
<category>Spark</category>
</categories>
<tags>
<tag>Spark</tag>
</tags>
</entry>
<entry>
<title>Spark RDD</title>
<link href="/2021/04/02/Spark-RDD/"/>
<url>/2021/04/02/Spark-RDD/</url>
<content type="html"><![CDATA[<h1 id="Spark-RDD"><a href="#Spark-RDD" class="headerlink" title="Spark RDD"></a>Spark RDD</h1><p>弹性分布式数据集 ( Resilient Distrbuted Dataset),本质是一种分布式的内存抽象,表示一个只读的数据分区(Partition)集合。</p><p>RDD 本身是不存储数据的,且只有在调用例如 collect 时才真正执行逻辑。RDD 是不可变的,只能产生新的 RDD,其内部封装了计算逻辑。</p><h2 id="弹性"><a href="#弹性" class="headerlink" title="弹性"></a>弹性</h2><ul><li>在内存和磁盘间存储方式的自动切换,数据优先在内存缓冲,达到阈值持久化到磁盘</li><li>基于血缘关系(Lineage)的容错机制,只需要重新计算丢失的分区数据</li></ul><h2 id="特性"><a href="#特性" class="headerlink" title="特性"></a>特性</h2><ul><li><p><code>A list of partitions</code></p><p> 分区列表。RDD 包含多个 partition,每个 partition 由一个 Task 处理,可以在创建 RDD 时指定分片个数。</p></li><li><p><code>A function for computing each split</code></p><p> 每个分区都有个计算函数。以分片为单位并行计算。</p></li><li><p><code>A list of dependencies on other RDDs</code></p><p> 依赖于其他 RDD 的列表。RDD 每次转换都会生成新的 RDD,形成前后的依赖关系,分为窄依赖和宽依赖,当有分区数据丢失时,Spark 会通过依赖关系重新计算,从而计算出丢失的数据,而不是对 RDD 所有分区重新计算。</p></li><li><p><code>Optionally, a Partitioner for key-value RDDs</code></p><p> K-V 类型的 RDD 分区器。</p></li><li><p><code>Optionally, a list of preferred locations to compute each split on</code></p><p> 每个分区的优先位置列表。该列表会存储每个 partition 的优先位置,移动代码而非移动数据,将任务调度到数据文件所在的具体位置以提高处理速度。</p></li></ul><h2 id="转换"><a href="#转换" class="headerlink" title="转换"></a>转换</h2><p>RDD 计算的时候通过 compute 函数得到每个分区的数据,若 RDD 是通过已有的文件系统构建的,则读取指定文件系统中的数据;若 RDD 是通过其他 RDD 转换的,则执行转换逻辑,将其他 RDD 数据进行转换。其操作算子主要包括两类:</p><ul><li><p><code>transformation</code>,转换 RDD,构建依赖关系</p></li><li><p><code>action</code>,触发 RDD 计算,得到计算结果或将 RDD 保存到文件系统中,例:show、count、collect、saveAsTextFile等</p></li></ul><p>RDD 是惰性的,只有在 action 阶段才会真正执行 RDD 计算。</p><h2 id="任务执行及划分"><a href="#任务执行及划分" class="headerlink" title="任务执行及划分"></a>任务执行及划分</h2><ul><li><p>基于 RDD 的计算任务</p><p> 从物理存储(如HDFS)中加载数据,将数据传入由一组确定性操作构成的有向无环图(DAG),然后写回去。</p></li><li><p>任务执行关系</p><ul><li><p>文件根据 InputFormat 被划分为若干个 InputSplit,InputSplit 与 Task 一一对应</p></li><li><p>每个 Task 执行的结果来自于 RDD 的一个 partition</p></li><li><p>每个 Executor 由若干 core(虚拟的,非物理 CPU 核) 组成,每个 Executor 的 core 一次只能执行一个 Task</p></li><li><p>Task 执行的并发度 = Executor 数 * 每个 Executor 核数</p></li></ul></li></ul><blockquote><ul><li>RDD 中用到的对象都必须是可序列化的,代码和引用对象会序列化后复制到多台机器的 RDD 上,否则会引发序列化方面的异常,可继承 Serializable 或使用 Kryo 序列化</li><li>RDD 不支持嵌套,会导致空指针</li></ul></blockquote><h2 id="依赖关系"><a href="#依赖关系" class="headerlink" title="依赖关系"></a>依赖关系</h2><p>新的 RDD 包含了如何从其他 RDD 衍生所必需的信息,这些信息构成了 RDD 之间的依赖关系。</p><ul><li><p>窄依赖</p><p> 每个父 RDD 的一个 partition 最多被子 RDD 的一个 partition 使用,例如:map、filter、union等,是一对一或多对一的关系。转换操作可以通过类似管道(pipeline)的方式执行。</p></li><li><p>宽依赖</p><ul><li><p>一个父 RDD 的 partition 同时被多个子 RDD 的 partition 使用,例如:groupByKey、reduceByKey、sortByKey等,是一对多的关系。数据需要在不同节点之间进行 shuffle 传输。</p></li><li><p>遇到一个宽依赖划分一个 stage</p></li></ul></li></ul><h2 id="自定义-RDD"><a href="#自定义-RDD" class="headerlink" title="自定义 RDD"></a>自定义 RDD</h2><p>继承 RDD 并实现以下函数,一般来说前三个比较重要。</p><ul><li><p>compute</p><p> 对 RDD 的分区进行计算,收集每个分区的结果</p></li><li><p>getPartitions</p><p> 自定义分区器,获取当前 RDD 的所有分区</p></li><li><p>getPreferredLocations</p><p> 本地化计算,调度任务至最近节点以提高计算效率</p></li></ul>]]></content>
<categories>
<category>分布式系统</category>
<category>分布式计算</category>
<category>Spark</category>
</categories>
<tags>
<tag>Spark</tag>
</tags>
</entry>
<entry>
<title>Spark 动态分配 Executor</title>
<link href="/2021/03/31/Spark-%E5%8A%A8%E6%80%81%E5%88%86%E9%85%8D-Executor/"/>
<url>/2021/03/31/Spark-%E5%8A%A8%E6%80%81%E5%88%86%E9%85%8D-Executor/</url>
<content type="html"><![CDATA[<h1 id="概述"><a href="#概述" class="headerlink" title="概述"></a>概述</h1><p>Spark 提供了一种机制,可以根据工作负载动态调整用户的应用程序占用资源。这意味着,如果资源不再使用,应用程序可能会将它们返还给集群,并在之后需要的时候再发起请求。这个特性对于多个应用程序共享同一个 Spark 集群的时候特别有用。</p><p>Spark 默认是关闭该特性的,但是该特性在所有的集群模式下均可开启,包括 Standalone、YARN、Mesos、K8s 等,详情参考<a href="https://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation">官网描述</a>。</p><p>源码对应 Spark 2.4.5 版本。</p><h1 id="源码分析"><a href="#源码分析" class="headerlink" title="源码分析"></a>源码分析</h1><p>启动 ExecutorAllocationManager 需要配置 <span class="label label-success">spark.dynamicAllocation.enabled</span> 为 true,且不能为 local 模式,也可配置 <span class="label label-success">spark.dynamicAllocation.testing</span> 为 true 进行指定测试时启用,相关源码位于 SparkContext 中。</p><h2 id="启动与运行"><a href="#启动与运行" class="headerlink" title="启动与运行"></a>启动与运行</h2><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-comment">// Optionally scale number of executors dynamically based on workload. Exposed for testing.</span><span class="hljs-type">val</span> <span class="hljs-variable">dynamicAllocationEnabled</span> <span class="hljs-operator">=</span> Utils.isDynamicAllocationEnabled(_conf)<span class="hljs-comment">// 基于工作负载动态分配和删除 Executor 的代理</span>_executorAllocationManager = <span class="hljs-keyword">if</span> (dynamicAllocationEnabled) { schedulerBackend match { <span class="hljs-keyword">case</span> b: ExecutorAllocationClient => Some(<span class="hljs-keyword">new</span> <span class="hljs-title class_">ExecutorAllocationManager</span>( schedulerBackend.asInstanceOf[ExecutorAllocationClient], listenerBus, _conf, _env.blockManager.master)) <span class="hljs-type">case</span> <span class="hljs-variable">_</span> <span class="hljs-operator">=</span>> None } } <span class="hljs-keyword">else</span> { None }_executorAllocationManager.foreach(_.start())</code></pre></div><p>在 ExecutorAllocationManager 启动方法中设置了对应的定时调度任务,并通过一个单一线程的线程池进行固定时间调度。</p><div class="hljs code-wrapper"><pre><code class="hljs java">def <span class="hljs-title function_">start</span><span class="hljs-params">()</span>: Unit = { <span class="hljs-comment">// 向事件总线添加 ExecutorAllocationListener</span> listenerBus.addToManagementQueue(listener) <span class="hljs-comment">// 定时调度任务</span> <span class="hljs-type">val</span> <span class="hljs-variable">scheduleTask</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">Runnable</span>() { override def <span class="hljs-title function_">run</span><span class="hljs-params">()</span>: Unit = { <span class="hljs-keyword">try</span> { schedule() } <span class="hljs-keyword">catch</span> { <span class="hljs-keyword">case</span> ct: ControlThrowable => <span class="hljs-keyword">throw</span> ct <span class="hljs-keyword">case</span> t: Throwable => logWarning(s<span class="hljs-string">"Uncaught exception in thread ${Thread.currentThread().getName}"</span>, t) } } } <span class="hljs-comment">// 由只有一个线程且名为 spark-dynamic-executor-allocation 的 ScheduledThreadPoolExecutor 以默认值 100 ms 进行固定时间调度</span> executor.scheduleWithFixedDelay(scheduleTask, <span class="hljs-number">0</span>, intervalMillis, TimeUnit.MILLISECONDS) <span class="hljs-comment">// 请求所有的 Executor,numExecutorsTarget 为 spark.dynamicAllocation.minExecutors、spark.dynamicAllocation.initialExecutors、spark.executor.instances 的最大值,</span> <span class="hljs-comment">// localityAwareTasks 为本地性偏好的 Task 数量,hostToLocalTaskCount 是 Host 与想要在此节点上运行的 Task 数量之间的映射关系</span> 
client.requestTotalExecutors(numExecutorsTarget, localityAwareTasks, hostToLocalTaskCount)</code></pre></div><p>更新并同步目标 Executor 的数量,这里会比较实际需要的 Executor 最大数量和配置的 Executor 最大数量之间的关系,并根据情况决定合适的值。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">private</span> def <span class="hljs-title function_">updateAndSyncNumExecutorsTarget</span><span class="hljs-params">(now: Long)</span>: Int = <span class="hljs-keyword">synchronized</span> { <span class="hljs-comment">// 获得实际需要的 Executor 的最大数量</span> <span class="hljs-type">val</span> <span class="hljs-variable">maxNeeded</span> <span class="hljs-operator">=</span> maxNumExecutorsNeeded <span class="hljs-title function_">if</span> <span class="hljs-params">(initializing)</span> { <span class="hljs-comment">// Do not change our target while we are still initializing,</span> <span class="hljs-comment">// Otherwise the first job may have to ramp up unnecessarily</span> <span class="hljs-number">0</span> } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (maxNeeded < numExecutorsTarget) { <span class="hljs-comment">// numExecutorsTarget 超过了实际需要的 Executor 最大数量,则减少需要的 Executor 数量</span> <span class="hljs-comment">// The target number exceeds the number we actually need, so stop adding new</span> <span class="hljs-comment">// executors and inform the cluster manager to cancel the extra pending requests</span> <span class="hljs-type">val</span> <span class="hljs-variable">oldNumExecutorsTarget</span> <span class="hljs-operator">=</span> <span class="hljs-type">numExecutorsTarget</span> <span class="hljs-variable">numExecutorsTarget</span> <span class="hljs-operator">=</span> math.max(maxNeeded, minNumExecutors) numExecutorsToAdd = <span class="hljs-number">1</span> <span class="hljs-comment">// If the new target has not changed, avoid sending a message to the cluster manager</span> <span class="hljs-keyword">if</span> (numExecutorsTarget < oldNumExecutorsTarget) { <span class="hljs-comment">// We lower the target number of executors but don't actively kill any yet. Killing is</span> <span class="hljs-comment">// controlled separately by an idle timeout. 
It's still helpful to reduce the target number</span> <span class="hljs-comment">// in case an executor just happens to get lost (eg., bad hardware, or the cluster manager</span> <span class="hljs-comment">// preempts it) -- in that case, there is no point in trying to immediately get a new</span> <span class="hljs-comment">// executor, since we wouldn't even use it yet.</span> <span class="hljs-comment">// 重新请求 numExecutorsTarget 指定的目标 Executor 数量,以此停止添加新的执行程序,并通知集群管理器取消额外的待处理</span> <span class="hljs-comment">// Executor 请求,最后返回减少的 Executor 数量</span> client.requestTotalExecutors(numExecutorsTarget, localityAwareTasks, hostToLocalTaskCount) logDebug(s<span class="hljs-string">"Lowering target number of executors to $numExecutorsTarget (previously "</span> + s<span class="hljs-string">"$oldNumExecutorsTarget) because not all requested executors are actually needed"</span>) } numExecutorsTarget - oldNumExecutorsTarget } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (addTime != NOT_SET && now >= addTime) { <span class="hljs-comment">// 如果实际需要的 Executor 最大数量小于 numExecutorsTarget,且当前时间大于上次添加 Executor 的时间,则先通知集群管理器添加新的 Executor,</span> <span class="hljs-comment">// 再更新添加 Executor 的时间,最后返回添加的 Executor 数量</span> <span class="hljs-type">val</span> <span class="hljs-variable">delta</span> <span class="hljs-operator">=</span> addExecutors(maxNeeded) logDebug(s<span class="hljs-string">"Starting timer to add more executors (to "</span> + s<span class="hljs-string">"expire in $sustainedSchedulerBacklogTimeoutS seconds)"</span>) addTime = now + (sustainedSchedulerBacklogTimeoutS * <span class="hljs-number">1000</span>) delta } <span class="hljs-keyword">else</span> { <span class="hljs-number">0</span> } }</code></pre></div><h2 id="思路图"><a href="#思路图" class="headerlink" title="思路图"></a>思路图</h2><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/Spark-ExecutorAllocationManager-1737348049930.png"></p><h1 id="相关参数"><a href="#相关参数" class="headerlink" title="相关参数"></a>相关参数</h1><ul><li><code>spark.dynamicAllocation.enabled</code> - 是否启用 ExecutorAllocationManager</li><li><code>spark.dynamicAllocation.minExecutors</code> - Executor 最小数量</li><li><code>spark.dynamicAllocation.maxExecutors</code> - Executor 最大数量</li><li><code>spark.dynamicAllocation.initialExecutors</code> - 初始化的 Executor 数量</li><li><code>spark.dynamicAllocation.executorAllocationRatio</code> - 按比例缩减动态分配申请的 Executor 数量,默认按最大并行度申请,任务较小时会浪费资源,可调低该值,取值在 0.0 到 1.0 之间</li><li><code>spark.dynamicAllocation.schedulerBacklogTimeout</code> - 如果在此时间内存在积压的任务,创建新的 Executor,默认 1s</li><li><code>spark.dynamicAllocation.sustainedSchedulerBacklogTimeout</code> - 在超过 <code>schedulerBacklogTimeout</code> 触发首次申请后,如果任务持续积压,则按此时间间隔继续申请新的 Executor,默认与其保持一致</li><li><code>spark.dynamicAllocation.executorIdleTimeout</code> - 如果 Executor 在此时间内保持闲置且没有缓存数据块,则将其移除,默认 60s</li></ul><div class="note note-warning"> <p>在启用 ExecutorAllocationManager 的情况下,最好也配置 <code>spark.shuffle.service.enabled</code> 为 true,否则可能会在移除 Executor 的过程中,丢失 Shuffle 数据。</p> </div>
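<p>下面给出一组开启动态分配时的配置示意(取值仅为演示,需按集群规模与负载调整):</p><div class="hljs code-wrapper"><pre><code class="hljs scala">import org.apache.spark.SparkConf

// 动态分配相关配置示意(参数值为假设,仅作说明)
val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  // 配合外部 Shuffle 服务,避免移除 Executor 时丢失 Shuffle 数据
  .set("spark.shuffle.service.enabled", "true")
  .set("spark.dynamicAllocation.minExecutors", "1")
  .set("spark.dynamicAllocation.initialExecutors", "2")
  .set("spark.dynamicAllocation.maxExecutors", "20")
  .set("spark.dynamicAllocation.executorIdleTimeout", "60s")</code></pre></div>]]></content>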
<categories>
<category>分布式系统</category>
<category>分布式计算</category>
<category>Spark</category>
</categories>
<tags>
<tag>Spark</tag>
</tags>
</entry>
<entry>
<title>Spark RPC</title>
<link href="/2021/03/29/Spark-RPC/"/>
<url>/2021/03/29/Spark-RPC/</url>
<content type="html"><![CDATA[<h1 id="概述"><a href="#概述" class="headerlink" title="概述"></a>概述</h1><p>在分布式系统中,通信是很重要的部分。集群成员很少共享硬件资源,通信的单一解决方案是客户端-服务器模型(C/S)中的消息交换。RPC 是 Remote Procedure Call 的缩写,当客户端执行请求时,它被发送到存根(stub)。当请求最终到达对应的服务器时,它还会到达服务器的存根,捕获的请求会转换为服务器端可执行过程。在物理执行后,将结果发送回客户端,示意图如下。</p><div class="note note-warning"> <p>为屏蔽客户调用远程主机上的对象,必须提供某种方式来模拟本地对象,这种本地对象称为存根(stub),负责接收本地方法调用,并将它们委派给各自的具体实现对象。</p> </div><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/rpc_schema-1737348115557.png"></p><p>在 Spark 0.x.x 和 1.x.x 版本中,组件间的消息通信主要借助于 Akka,但是 Spark 2.0.0 版本已经移除了该部分的依赖,基于 Netty 实现了 RPC 功能。</p><div class="note note-warning"> <p>用户 Spark Application 中 Akka 版本和 Spark 内置的 Akka 版本可能会冲突,而 Akka 不同版本之间无法互相通信。Spark 用的 Akka 特性比较少,这部分特性很容易自己实现,基于以上种种考量最终 Spark 废弃了 Akka,详见 <a href="https://issues.apache.org/jira/plugins/servlet/mobile#issue/SPARK-5293">JIRA</a>。</p> </div><h1 id="参考模型"><a href="#参考模型" class="headerlink" title="参考模型"></a>参考模型</h1><p>Spark RPC 主要参考了 Actor 模型和 Reactor 模型。</p><h2 id="Actor"><a href="#Actor" class="headerlink" title="Actor"></a>Actor</h2><p>用于解决多线程并发条件下锁等一系列线程问题,以异步非阻塞方式完成消息的传递。Actor 由状态(state)、行为(behavior)、邮箱(mailbox)三者组成。</p><p>Actor 遵循以下规则:</p><ul><li>创建其他的 Actor</li><li>发送消息给其他的 Actor</li><li>接受并处理消息,修改自己的状态</li></ul><p>上面的规则还隐含了以下意思:</p><ol><li>每个 Actor 都是独立的,能与其他 Actor 互不干扰的并发运行,同时每个 Actor 有自身的邮箱,任意 Actor 可以向自己地址发送的信息都会放置在这个邮箱里,邮箱里消息的处理遵循 FIFO 顺序。</li><li>消息的投递和读取是两个过程,这样 Actor 之间的交互就解耦了。</li><li>Actor 之间的通信是异步的,发送方只管发送,不关心超时和错误,这些都交给框架或者独立的错误处理机制。</li><li>Actor 的通信兼顾了本地和远程调用,因此本地处理不过来的时候可以在远程节点上启动 Actor 再把消息转发过去进行处理,拥有了扩展的特性。</li></ol><p>这里贴出 Actor 模型的一个经典图片,另附上 B 站一段视频中关于 <a href="https://www.bilibili.com/video/BV12y4y1a7e4?from=search&seid=8241130096895464139">Actor</a> 模型的解释。</p><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/actor-1737348115557.png"></p><h2 id="Reactor"><a href="#Reactor" class="headerlink" title="Reactor"></a>Reactor</h2><p>Reactor 模型是一种典型的事件驱动的编程模型。</p><p>模型定义了三种角色:</p><ol><li>Reactor - 将 I/O 事件分派给对应的 Handler</li><li>Acceptor - 处理客户端新连接,并分派请求到处理器链中</li><li>Handlers - 执行非阻塞读/写任务</li></ol><p>为什么使用 Reactor 模型?我们来看一下传统的阻塞 I/O 模型:</p><ul><li>每个线程都需要独立的线程处理,并发足够大时,会占用很多资源</li><li>采用阻塞 I/O 模型,连接建立后,即便没有数据读,线程的阻塞操作也会浪费资源</li></ul><p>针对以上问题可以采用以下方案:</p><ul><li>创建一个线程池,避免为每个连接创建线程池,连接完成就把逻辑交给线程池处理</li><li>基于 I/O 复用模型,多个连接共用同一个阻塞对象。有新数据时,线程不再阻塞,跳出状态进行处理。</li></ul><p>而 Reactor 模型就是基于 I/O 复用和线程池的结合,根据 Reactor 数量和处理资源的线程数量不同,分为三类:</p><ol><li>单 Reactor 单线程模型(一般不用,对多核机器资源有些浪费)</li><li>单 Reactor 多线程模型(高并发场景下存在性能问题)</li><li>多 Reactor 多线程模型</li></ol><p>此处附上大神 Doug Lea 在 Scalable IO in Java 中给出的阐述,其中 Netty NIO 默认模式沿用的是多 Reactor 多线程模型变种,对应 pdf 中 26 页框架图,另感兴趣可自行阅读 <a href="http://www.laputan.org/pub/sag/reactor.pdf">Reactor</a> 架构设计。</p><div class="row"> <embed src="./nio.pdf" width="100%" height="550" type="application/pdf"></div><h1 id="源码分析"><a href="#源码分析" class="headerlink" title="源码分析"></a>源码分析</h1><p>此处只提及核心部分,大部分的源码都在 <code>org.apache.spark.rpc</code> 包中,负责将消息发送到客户端存根的对象由 Dispatcher 类表示,通过内部的 post* 方法之一(postToAll、postRemoteMessage 等),准备消息实例(RpcMessage)并将其发送到预期端点(endpoint),具体实现类为 NettyRpcEndpointRef。</p><p>RPC endpoints 主要由两个类表示</p><ol><li><p>RpcEndpoint</p><ul><li>每个节点都可以称为一个 RpcEndpoint(Client、Worker 等)</li><li>主要方法为 onStart、receive、receiveAndReply、onStop</li></ul></li><li><p>RpcEndpointRef</p><ul><li>作用是发送请求,本质是对 RpcEndpoint 的一个引用</li><li>主要方法为 ask(异步请求-响应)、askSync(同步请求-响应)</li></ul></li></ol><div class="note note-warning"> <p>onStart 和 onStop 在端点启动和停止时调用,receive 发送请求或响应,对应<code>RpcEndpointRef.send</code> 或 
<code>RpcCallContext.reply</code>,而 receiveAndReply 则处理回应请求,对应<code>RpcEndpointRef.ask</code>。RpcEndpointRef 发送的请求允许用户传入超时时间。</p> </div><p>其中 Dispatcher 也可称为消息收发器,将需要发送的消息和远程 RPC 端点接收到的消息,分发至对应的收件箱/发件箱,当轮询消息的时候进行处理。</p><ul><li><p>分发</p> <div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">private</span>[netty] def <span class="hljs-title function_">send</span><span class="hljs-params">(message: RequestMessage)</span>: Unit = { <span class="hljs-type">val</span> <span class="hljs-variable">remoteAddr</span> <span class="hljs-operator">=</span> message.receiver.address <span class="hljs-title function_">if</span> <span class="hljs-params">(remoteAddr == address)</span> { <span class="hljs-comment">// 将消息发送到本地 RPC 端点(收件箱),存入当前 RpcEndpoint 对应的 Inbox</span> <span class="hljs-keyword">try</span> { dispatcher.postOneWayMessage(message) } <span class="hljs-keyword">catch</span> { <span class="hljs-keyword">case</span> e: RpcEnvStoppedException => logDebug(e.getMessage) } } <span class="hljs-keyword">else</span> { <span class="hljs-comment">// 将消息发送到远程 RPC 端点(发件箱),最终通过 TransportClient 将消息发送出去</span> postToOutbox(message.receiver, OneWayOutboxMessage(message.serialize(<span class="hljs-built_in">this</span>))) } }</code></pre></div></li><li><p>处理</p> <div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-comment">/** Message loop used for dispatching messages. */</span> <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">MessageLoop</span> <span class="hljs-keyword">extends</span> <span class="hljs-title class_">Runnable</span> { override def <span class="hljs-title function_">run</span><span class="hljs-params">()</span>: Unit = { <span class="hljs-keyword">try</span> { <span class="hljs-keyword">while</span> (<span class="hljs-literal">true</span>) { <span class="hljs-keyword">try</span> { <span class="hljs-comment">// 从 receivers 中获得 EndpointData,receivers 是 LinkBlockingQueue,没有元素时会阻塞</span> <span class="hljs-type">val</span> <span class="hljs-variable">data</span> <span class="hljs-operator">=</span> receivers.take() <span class="hljs-keyword">if</span> (data == PoisonPill) { <span class="hljs-comment">// Put PoisonPill back so that other MessageLoops can see it.</span> receivers.offer(PoisonPill) <span class="hljs-keyword">return</span> } <span class="hljs-comment">//调用 process 方法对 RpcEndpointData 中 Inbox 的 message 进行处理</span> data.inbox.process(Dispatcher.<span class="hljs-built_in">this</span>) } <span class="hljs-keyword">catch</span> { <span class="hljs-keyword">case</span> <span class="hljs-title function_">NonFatal</span><span class="hljs-params">(e)</span> => logError(e.getMessage, e) } } } <span class="hljs-keyword">catch</span> { <span class="hljs-keyword">case</span> _: InterruptedException => <span class="hljs-comment">// exit</span> <span class="hljs-keyword">case</span> t: Throwable => <span class="hljs-keyword">try</span> { <span class="hljs-comment">// Re-submit a MessageLoop so that Dispatcher will still work if</span> <span class="hljs-comment">// UncaughtExceptionHandler decides to not kill JVM.</span> threadpool.execute(<span class="hljs-keyword">new</span> <span class="hljs-title class_">MessageLoop</span>) } <span class="hljs-keyword">finally</span> { <span class="hljs-keyword">throw</span> t } } } }</code></pre></div></li></ul><p>Spark RPC 的源码抽象图大致如下图所示,RpcAddress 是 RpcEndpointRef 的地址(Host + Port),而 RpcEnv 则为 RpcEndpoint 提供处理消息的环境及管理其生命周期等。</p><p><img 
src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/rpc_structure-1737348115557.png"></p><div class="note note-warning"> <p>其中 MessageEncoder 和 MessageDecoder 是用于解决可能出现的半包、粘包问题。在基于流的传输(如TCP/IP)中,数据会先存储到一个 socket 缓冲里,但这个传输不是一个数据包队列,而是一个字节队列。因此就可能出现这种情况,我们想发送 3 个数据包:ABC、DEF、GHI,但是由于传输协议,应用程序在接收时可能会变成这种情况:AB、CDEF、GH、I。所以需要对传输的数据流进行特殊处理,常见的比如:以特殊字符作为数据的末尾;或者发送固定长度的数据包(接收方也只接收固定长度的数据),不过这种情况不太适合频繁的请求。Spark 采用的是在协议上封装一层数据请求协议,即数据包=数据包长度+数据包内容,这样接收方就可以根据长度进行接收。</p> </div><h1 id="代码实例"><a href="#代码实例" class="headerlink" title="代码实例"></a>代码实例</h1><p>通过自定义代码实例可以更好地了解 Spark RPC 是如何运作的,GitHub 上有个 <a href="https://github.com/neoremind/kraps-rpc">kraps-rpc</a> 项目,该项目是从 Spark 中将 RPC 框架剥离出来的一部分,由于 GitHub 经常被墙,此处贴出我 fork 后在 Gitee 上更新过的 <a href="https://gitee.com/gleonSun/kraps-rpc">kraps-rpc</a>。</p><h2 id="服务端"><a href="#服务端" class="headerlink" title="服务端"></a>服务端</h2><p>注册自身引用、相应请求及定义消息体等。</p><div class="hljs code-wrapper"><pre><code class="hljs java">object FaceToFaceServer { <span class="hljs-type">val</span> <span class="hljs-variable">SERVER_HOST</span> <span class="hljs-operator">=</span> <span class="hljs-string">"localhost"</span> <span class="hljs-type">val</span> <span class="hljs-variable">SERVER_PORT</span> <span class="hljs-operator">=</span> <span class="hljs-number">4399</span> <span class="hljs-type">val</span> <span class="hljs-variable">SERVER_NAME</span> <span class="hljs-operator">=</span> <span class="hljs-string">"FaceServer"</span> def <span class="hljs-title function_">main</span><span class="hljs-params">(args: Array[String])</span>: Unit = { <span class="hljs-type">val</span> <span class="hljs-variable">config</span> <span class="hljs-operator">=</span> RpcEnvServerConfig(<span class="hljs-keyword">new</span> <span class="hljs-title class_">RpcConf</span>, <span class="hljs-string">"FaceService"</span>, SERVER_HOST, SERVER_PORT) val rpcEnv: RpcEnv = NettyRpcEnvFactory.create(config) <span class="hljs-type">val</span> <span class="hljs-variable">faceEndpoint</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">FaceEndpoint</span>(rpcEnv) rpcEnv.setupEndpoint(SERVER_NAME, faceEndpoint) rpcEnv.awaitTermination() }}<span class="hljs-keyword">class</span> <span class="hljs-title class_">FaceEndpoint</span>(override val rpcEnv: RpcEnv) <span class="hljs-keyword">extends</span> <span class="hljs-title class_">RpcEndpoint</span> { override def <span class="hljs-title function_">onStart</span><span class="hljs-params">()</span>: Unit = { println(<span class="hljs-string">"Start FaceEndpoint."</span>) } override def <span class="hljs-title function_">receiveAndReply</span><span class="hljs-params">(context: RpcCallContext)</span>: PartialFunction[Any, Unit] = { <span class="hljs-keyword">case</span> <span class="hljs-title function_">SayMeeting</span><span class="hljs-params">(name)</span> => println(s<span class="hljs-string">"Hi $name, nice to meet you."</span>) context.reply(name) <span class="hljs-keyword">case</span> <span class="hljs-title function_">SayGoodBye</span><span class="hljs-params">(name)</span> => println(s<span class="hljs-string">"Hi $name, good bye."</span>) context.reply(name) <span class="hljs-type">case</span> <span class="hljs-variable">_</span> <span class="hljs-operator">=</span>> println(s<span class="hljs-string">"Receiver unknown message."</span>) } override def <span class="hljs-title function_">onStop</span><span class="hljs-params">()</span>: Unit = { println(<span 
class="hljs-string">"Stop FaceEndpoint."</span>) }}<span class="hljs-keyword">case</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">SayMeeting</span>(name: String)<span class="hljs-keyword">case</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">SayGoodBye</span>(name: String)</code></pre></div><h2 id="客户端"><a href="#客户端" class="headerlink" title="客户端"></a>客户端</h2><p>注册自身引用、寻找服务端引用和两种向服务端不同的请求方式(同步/异步)。</p><div class="hljs code-wrapper"><pre><code class="hljs java">object FaceToFaceClient { <span class="hljs-type">val</span> <span class="hljs-variable">CLIENT_NAME</span> <span class="hljs-operator">=</span> <span class="hljs-string">"FaceClient"</span> def <span class="hljs-title function_">main</span><span class="hljs-params">(args: Array[String])</span>: Unit = { <span class="hljs-type">val</span> <span class="hljs-variable">rpcAddress</span> <span class="hljs-operator">=</span> RpcAddress(SERVER_HOST, SERVER_PORT) faceAsync(rpcAddress)<span class="hljs-comment">// faceSync(rpcAddress)</span> } def <span class="hljs-title function_">faceAsync</span><span class="hljs-params">(rpcAddress: RpcAddress)</span>: Unit = { <span class="hljs-type">val</span> <span class="hljs-variable">config</span> <span class="hljs-operator">=</span> RpcEnvClientConfig(<span class="hljs-keyword">new</span> <span class="hljs-title class_">RpcConf</span>, CLIENT_NAME) val rpcEnv: RpcEnv = NettyRpcEnvFactory.create(config) <span class="hljs-type">val</span> <span class="hljs-variable">serverEndpointRef</span> <span class="hljs-operator">=</span> rpcEnv.setupEndpointRef(rpcAddress, SERVER_NAME) <span class="hljs-type">val</span> <span class="hljs-variable">future</span> <span class="hljs-operator">=</span> serverEndpointRef.ask[String](SayMeeting(<span class="hljs-string">"GLeon"</span>)) future.onComplete { <span class="hljs-keyword">case</span> <span class="hljs-title function_">Success</span><span class="hljs-params">(value)</span> => println(s<span class="hljs-string">"Get value: $value"</span>) <span class="hljs-keyword">case</span> <span class="hljs-title function_">Failure</span><span class="hljs-params">(exception)</span> => println(s<span class="hljs-string">"Get error: $exception"</span>) } <span class="hljs-comment">// 等待 Future 完成或超时</span> Await.result(future, Duration.apply(<span class="hljs-string">"30s"</span>)) } def <span class="hljs-title function_">faceSync</span><span class="hljs-params">(rpcAddress: RpcAddress)</span>: Unit = { <span class="hljs-type">val</span> <span class="hljs-variable">config</span> <span class="hljs-operator">=</span> RpcEnvClientConfig(<span class="hljs-keyword">new</span> <span class="hljs-title class_">RpcConf</span>, CLIENT_NAME) val rpcEnv: RpcEnv = NettyRpcEnvFactory.create(config) <span class="hljs-type">val</span> <span class="hljs-variable">serverEndpointRef</span> <span class="hljs-operator">=</span> rpcEnv.setupEndpointRef(rpcAddress, SERVER_NAME) <span class="hljs-type">val</span> <span class="hljs-variable">result</span> <span class="hljs-operator">=</span> serverEndpointRef.askWithRetry[String](SayMeeting(<span class="hljs-string">"GLeon"</span>)) println(s<span class="hljs-string">"Send name: $result"</span>) }}</code></pre></div>]]></content>
<categories>
<category>分布式系统</category>
<category>分布式计算</category>
<category>Spark</category>
</categories>
<tags>
<tag>Spark</tag>
</tags>
</entry>
<entry>
<title>HBase 启停流程</title>
<link href="/2021/03/18/HBase%E5%90%AF%E5%81%9C%E6%B5%81%E7%A8%8B/"/>
<url>/2021/03/18/HBase%E5%90%AF%E5%81%9C%E6%B5%81%E7%A8%8B/</url>
<content type="html"><![CDATA[<h1 id="整体流程分析"><a href="#整体流程分析" class="headerlink" title="整体流程分析"></a>整体流程分析</h1><p>版本:hbase-2.2.4<br>说明:分析展现的源码和脚本中会省略一部分,只保留与分析相关联的,感兴趣的可自行查阅。</p><h2 id="启动"><a href="#启动" class="headerlink" title="启动"></a>启动</h2><h3 id="start-hbase-sh"><a href="#start-hbase-sh" class="headerlink" title="start-hbase.sh"></a>start-hbase.sh</h3><p>启动 HBase 的入口,有两种模式:单机模式和集群模式,何种模式取决于用户的配置,下文会详细说明。</p><div class="hljs code-wrapper"><pre><code class="hljs shell"><span class="hljs-meta prompt_"># </span><span class="language-bash">获取当前路径,即 {HBASE_HOME}/bin</span>bin=`dirname "${BASH_SOURCE-$0}"`bin=`cd "$bin">/dev/null; pwd`<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">加载 bin 目录下的 hbase-config.sh 文件</span>. "$bin"/hbase-config.sh<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">判断加载 hbase-config.sh 是否成功,失败则退出,通常最后命令的退出状态为 0 表示没有错误</span>errCode=$?if [ $errCode -ne 0 ]then exit $errCodefi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">此处用户一般不传参,所以默认将 start 赋值给 commandToRun</span>if [ "$1" = "autostart" ]then commandToRun="--autostart-window-size ${AUTOSTART_WINDOW_SIZE} --autostart-window-retry-limit ${AUTOSTART_WINDOW_RETRY_LIMIT} autostart"else commandToRun="start"fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">通过 HBase 源码中的 HBaseConfTool 获取 conf/hbase-site.xml 中参数 hbase.cluster.distributed 的配置值,表示是否为集群模式,接下文附 1</span>distMode=`$bin/hbase --config "$HBASE_CONF_DIR" org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed | head -n 1`<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">当 distMode 为 <span class="hljs-literal">false</span> 时,启动单机测试版,此时 HMaster 和 HRegionServer 以及内嵌的 MiniZooKeeperCluster 均在同一个 JVM 里启动,接下文附 2</span>if [ "$distMode" == 'false' ]then "$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" $commandToRun master<span class="hljs-meta prompt_"># </span><span class="language-bash">当该值为 <span class="hljs-literal">true</span> 时,启动 HBase 集群,接下文附 2。分别启动 Zookeeper、HMaster 和 HRegionServer,其中 Zookeeper 的启动情况分两种,一种是 HBase 管理的,一种是独立部署的,取决于是否在 hbase-env.sh 中配置 HBASE_MANAGES_ZK 参数,为 <span class="hljs-literal">true</span> 时由 HBase 管理。</span>else "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" $commandToRun zookeeper "$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" $commandToRun master "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \ --hosts "${HBASE_REGIONSERVERS}" $commandToRun regionserver "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \ --hosts "${HBASE_BACKUP_MASTERS}" $commandToRun master-backupfi</code></pre></div><h4 id="附-1"><a href="#附-1" class="headerlink" title="附 1"></a>附 1</h4><p>调用 HBaseConfTool 及相关的 HBaseConfiguration 源码部分,可以清楚地看到读取了配置文件 hbase-default.xml 和 hbase-site.xml,通过脚本传入的 key 来获取相应的 value</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">HBaseConfTool</span> { <span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">main</span><span class="hljs-params">(String args[])</span> { <span class="hljs-keyword">if</span> (args.length < <span class="hljs-number">1</span>) { System.err.println(<span 
class="hljs-string">"Usage: HBaseConfTool <CONFIGURATION_KEY>"</span>); System.exit(<span class="hljs-number">1</span>); <span class="hljs-keyword">return</span>; } <span class="hljs-type">Configuration</span> <span class="hljs-variable">conf</span> <span class="hljs-operator">=</span> HBaseConfiguration.create(); System.out.println(conf.get(args[<span class="hljs-number">0</span>])); }}---<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">HBaseConfiguration</span> <span class="hljs-keyword">extends</span> <span class="hljs-title class_">Configuration</span> { <span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> Configuration <span class="hljs-title function_">create</span><span class="hljs-params">()</span> { conf.setClassLoader(HBaseConfiguration.class.getClassLoader()); <span class="hljs-keyword">return</span> addHbaseResources(conf); } <span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> Configuration <span class="hljs-title function_">addHbaseResources</span><span class="hljs-params">(Configuration conf)</span> { conf.addResource(<span class="hljs-string">"hbase-default.xml"</span>); conf.addResource(<span class="hljs-string">"hbase-site.xml"</span>); checkDefaultsVersion(conf); <span class="hljs-keyword">return</span> conf; }}</code></pre></div><h4 id="附-2"><a href="#附-2" class="headerlink" title="附 2"></a>附 2</h4><p>由 HMaster 接受脚本传入的参数,调用 ServerCommandLine 中的 doMain 方法解析后再通过 HMasterCommandLine 进行启动,单机版和集群版仅仅是在 HMasterCommandLine 中的 run 方法中判断后走了不同的逻辑。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">HMaster</span> <span class="hljs-keyword">extends</span> <span class="hljs-title class_">HRegionServer</span> <span class="hljs-keyword">implements</span> <span class="hljs-title class_">MasterServices</span> { <span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">main</span><span class="hljs-params">(String [] args)</span> { LOG.info(<span class="hljs-string">"STARTING service "</span> + HMaster.class.getSimpleName()); VersionInfo.logVersion(); <span class="hljs-keyword">new</span> <span class="hljs-title class_">HMasterCommandLine</span>(HMaster.class).doMain(args); }}---<span class="hljs-keyword">public</span> <span class="hljs-keyword">abstract</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">ServerCommandLine</span> <span class="hljs-keyword">extends</span> <span class="hljs-title class_">Configured</span> <span class="hljs-keyword">implements</span> <span class="hljs-title class_">Tool</span> { <span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">doMain</span><span class="hljs-params">(String args[])</span> { <span class="hljs-keyword">try</span> { <span class="hljs-comment">// 加载 HBase 的配置文件,并调用 ToolRunner 类</span> <span class="hljs-type">int</span> <span class="hljs-variable">ret</span> <span class="hljs-operator">=</span> ToolRunner.run(HBaseConfiguration.create(), <span class="hljs-built_in">this</span>, args); <span class="hljs-keyword">if</span> (ret != <span class="hljs-number">0</span>) { System.exit(ret); } } <span class="hljs-keyword">catch</span> (Exception e) { LOG.error(<span class="hljs-string">"Failed to run"</span>, e); 
System.exit(-<span class="hljs-number">1</span>); } }}---<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">ToolRunner</span> { <span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-type">int</span> <span class="hljs-title function_">run</span><span class="hljs-params">(Configuration conf, Tool tool, String[] args)</span> <span class="hljs-keyword">throws</span> Exception { …… <span class="hljs-comment">// 调用 HMasterCommandLine 的 run 方法</span> <span class="hljs-keyword">return</span> tool.run(toolArgs); }}---<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">HMasterCommandLine</span> <span class="hljs-keyword">extends</span> <span class="hljs-title class_">ServerCommandLine</span> { <span class="hljs-keyword">public</span> <span class="hljs-type">int</span> <span class="hljs-title function_">run</span><span class="hljs-params">(String args[])</span> <span class="hljs-keyword">throws</span> Exception { <span class="hljs-comment">// 添加默认参数</span> …… CommandLine cmd; <span class="hljs-keyword">try</span> { <span class="hljs-comment">// 解析参数,失败则最终会调用 HMasterCommandLine 的 getUsage 方法返回操作指示,此处将 start 作为 args 加入到 cmd 中</span> cmd = <span class="hljs-keyword">new</span> <span class="hljs-title class_">GnuParser</span>().parse(opt, args); } <span class="hljs-keyword">catch</span> (ParseException e) { LOG.error(<span class="hljs-string">"Could not parse: "</span>, e); usage(<span class="hljs-literal">null</span>); <span class="hljs-keyword">return</span> <span class="hljs-number">1</span>; } <span class="hljs-comment">// 配置参数</span> …… <span class="hljs-comment">// 最终解析完成的剩下的参数,此处为 start</span> <span class="hljs-meta">@SuppressWarnings("unchecked")</span> List<String> remainingArgs = cmd.getArgList(); <span class="hljs-keyword">if</span> (remainingArgs.size() != <span class="hljs-number">1</span>) { usage(<span class="hljs-literal">null</span>); <span class="hljs-keyword">return</span> <span class="hljs-number">1</span>; } <span class="hljs-type">String</span> <span class="hljs-variable">command</span> <span class="hljs-operator">=</span> remainingArgs.get(<span class="hljs-number">0</span>); <span class="hljs-comment">// 根据接收到的 command 调用相应方法</span> <span class="hljs-keyword">if</span> (<span class="hljs-string">"start"</span>.equals(command)) { <span class="hljs-keyword">return</span> startMaster(); } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (<span class="hljs-string">"stop"</span>.equals(command)) { <span class="hljs-keyword">return</span> stopMaster(); } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (<span class="hljs-string">"clear"</span>.equals(command)) { <span class="hljs-keyword">return</span> (ZNodeClearer.clear(getConf()) ? 
<span class="hljs-number">0</span> : <span class="hljs-number">1</span>); } <span class="hljs-keyword">else</span> { usage(<span class="hljs-string">"Invalid command: "</span> + command); <span class="hljs-keyword">return</span> <span class="hljs-number">1</span>; } } <span class="hljs-keyword">private</span> <span class="hljs-type">int</span> <span class="hljs-title function_">startMaster</span><span class="hljs-params">()</span> { <span class="hljs-comment">// 获取配置参数</span> <span class="hljs-type">Configuration</span> <span class="hljs-variable">conf</span> <span class="hljs-operator">=</span> getConf(); <span class="hljs-comment">// TraceUtil 是个包装类,以一种简化的方式提供了访问 htrace 4+ 的函数,Apache HTrace 是 Cloudera 开源出来的一个分布式系统跟踪框架,支持HDFS和HBase等系统,为应用提供请求跟踪和性能分析</span> TraceUtil.initTracer(conf); <span class="hljs-keyword">try</span> { <span class="hljs-comment">// 这里从配置文件中识别出当前是单机模式还是集群模式,单机模式下指的是 LocalHBaseCluster 实例,会在同一个 JVM 里启动 Master 和 RegionServer</span> <span class="hljs-keyword">if</span> (LocalHBaseCluster.isLocal(conf)) { DefaultMetricsSystem.setMiniClusterMode(<span class="hljs-literal">true</span>); <span class="hljs-comment">// 单机模式下启动 MiniZooKeeperCluster 作为 Zookeeper 服务,该类中的许多代码都是从 Zookeeper 的测试代码中剥离出来的</span> <span class="hljs-keyword">final</span> <span class="hljs-type">MiniZooKeeperCluster</span> <span class="hljs-variable">zooKeeperCluster</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">MiniZooKeeperCluster</span>(conf); <span class="hljs-comment">// 从配置文件获取 hbase.zookeeper.property.dataDir 配置的参数作为 Zookeeper 数据目录</span> <span class="hljs-type">File</span> <span class="hljs-variable">zkDataPath</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">File</span>(conf.get(HConstants.ZOOKEEPER_DATA_DIR)); <span class="hljs-comment">// find out the default client port</span> <span class="hljs-type">int</span> <span class="hljs-variable">zkClientPort</span> <span class="hljs-operator">=</span> <span class="hljs-number">0</span>; <span class="hljs-comment">// 从 hbase.zookeeper.quorum 参数解析并获取 Zookeeper 配置的端口号</span> <span class="hljs-type">String</span> <span class="hljs-variable">zkserver</span> <span class="hljs-operator">=</span> conf.get(HConstants.ZOOKEEPER_QUORUM); <span class="hljs-keyword">if</span> (zkserver != <span class="hljs-literal">null</span>) { String[] zkservers = zkserver.split(<span class="hljs-string">","</span>); <span class="hljs-comment">// 单机模式仅支持一个 Zookeeper 服务</span> <span class="hljs-keyword">if</span> (zkservers.length > <span class="hljs-number">1</span>) { <span class="hljs-comment">// In local mode deployment, we have the master + a region server and zookeeper server</span> <span class="hljs-comment">// started in the same process. Therefore, we only support one zookeeper server.</span> <span class="hljs-type">String</span> <span class="hljs-variable">errorMsg</span> <span class="hljs-operator">=</span> <span class="hljs-string">"Could not start ZK with "</span> + zkservers.length + <span class="hljs-string">" ZK servers in local mode deployment. Aborting as clients (e.g. 
shell) will not "</span> + <span class="hljs-string">"be able to find this ZK quorum."</span>; System.err.println(errorMsg); <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">IOException</span>(errorMsg); } String[] parts = zkservers[<span class="hljs-number">0</span>].split(<span class="hljs-string">":"</span>); <span class="hljs-keyword">if</span> (parts.length == <span class="hljs-number">2</span>) { <span class="hljs-comment">// the second part is the client port</span> zkClientPort = Integer.parseInt(parts [<span class="hljs-number">1</span>]); } } <span class="hljs-comment">// If the client port could not be find in server quorum conf, try another conf</span> <span class="hljs-keyword">if</span> (zkClientPort == <span class="hljs-number">0</span>) { zkClientPort = conf.getInt(HConstants.ZOOKEEPER_CLIENT_PORT, <span class="hljs-number">0</span>); <span class="hljs-comment">// The client port has to be set by now; if not, throw exception.</span> <span class="hljs-keyword">if</span> (zkClientPort == <span class="hljs-number">0</span>) { <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">IOException</span>(<span class="hljs-string">"No config value for "</span> + HConstants.ZOOKEEPER_CLIENT_PORT); } } zooKeeperCluster.setDefaultClientPort(zkClientPort); <span class="hljs-comment">// set the ZK tick time if specified</span> <span class="hljs-type">int</span> <span class="hljs-variable">zkTickTime</span> <span class="hljs-operator">=</span> conf.getInt(HConstants.ZOOKEEPER_TICK_TIME, <span class="hljs-number">0</span>); <span class="hljs-keyword">if</span> (zkTickTime > <span class="hljs-number">0</span>) { zooKeeperCluster.setTickTime(zkTickTime); } <span class="hljs-comment">// 如果启用了安全认证,需要配置 Zookeeper 的 keytab 文件和 principal 等</span> <span class="hljs-comment">// login the zookeeper server principal (if using security)</span> ZKUtil.loginServer(conf, HConstants.ZK_SERVER_KEYTAB_FILE, HConstants.ZK_SERVER_KERBEROS_PRINCIPAL, <span class="hljs-literal">null</span>); <span class="hljs-type">int</span> <span class="hljs-variable">localZKClusterSessionTimeout</span> <span class="hljs-operator">=</span> conf.getInt(HConstants.ZK_SESSION_TIMEOUT + <span class="hljs-string">".localHBaseCluster"</span>, <span class="hljs-number">10</span>*<span class="hljs-number">1000</span>); conf.setInt(HConstants.ZK_SESSION_TIMEOUT, localZKClusterSessionTimeout); LOG.info(<span class="hljs-string">"Starting a zookeeper cluster"</span>); <span class="hljs-comment">// 启动 Zookeeper 服务</span> <span class="hljs-type">int</span> <span class="hljs-variable">clientPort</span> <span class="hljs-operator">=</span> zooKeeperCluster.startup(zkDataPath); <span class="hljs-comment">// Zookeeper 启动失败会输出相应信息</span> <span class="hljs-keyword">if</span> (clientPort != zkClientPort) { <span class="hljs-type">String</span> <span class="hljs-variable">errorMsg</span> <span class="hljs-operator">=</span> <span class="hljs-string">"Could not start ZK at requested port of "</span> + zkClientPort + <span class="hljs-string">". ZK was started at port: "</span> + clientPort + <span class="hljs-string">". Aborting as clients (e.g. 
shell) will not be able to find "</span> + <span class="hljs-string">"this ZK quorum."</span>; System.err.println(errorMsg); <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">IOException</span>(errorMsg); } <span class="hljs-comment">// 启动成功则设置 HBase 有关 Zookeeper 的参数</span> conf.set(HConstants.ZOOKEEPER_CLIENT_PORT, Integer.toString(clientPort)); <span class="hljs-comment">// Need to have the zk cluster shutdown when master is shutdown.</span> <span class="hljs-comment">// Run a subclass that does the zk cluster shutdown on its way out.</span> <span class="hljs-type">int</span> <span class="hljs-variable">mastersCount</span> <span class="hljs-operator">=</span> conf.getInt(<span class="hljs-string">"hbase.masters"</span>, <span class="hljs-number">1</span>); <span class="hljs-type">int</span> <span class="hljs-variable">regionServersCount</span> <span class="hljs-operator">=</span> conf.getInt(<span class="hljs-string">"hbase.regionservers"</span>, <span class="hljs-number">1</span>); <span class="hljs-comment">// Set start timeout to 5 minutes for cmd line start operations</span> conf.setIfUnset(<span class="hljs-string">"hbase.master.start.timeout.localHBaseCluster"</span>, <span class="hljs-string">"300000"</span>); LOG.info(<span class="hljs-string">"Starting up instance of localHBaseCluster; master="</span> + mastersCount + <span class="hljs-string">", regionserversCount="</span> + regionServersCount); <span class="hljs-comment">// LocalHMaster 继承自 HMaster,和 HRegionServer 同时启动,在停止的同时也停止 Zookeeper 服务</span> <span class="hljs-type">LocalHBaseCluster</span> <span class="hljs-variable">cluster</span> <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">LocalHBaseCluster</span>(conf, mastersCount, regionServersCount, LocalHMaster.class, HRegionServer.class); <span class="hljs-comment">// 将运行的 zooKeeperCluster 置于 LocalHMaster 中,以便在 LocalHMaster 停止的时候停止 Zookeeper 服务</span> ((LocalHMaster)cluster.getMaster(<span class="hljs-number">0</span>)).setZKCluster(zooKeeperCluster); <span class="hljs-comment">// 调用 LocalHBaseCluster 的 startup 方法启动</span> cluster.startup(); waitOnMasterThreads(cluster); } <span class="hljs-keyword">else</span> { <span class="hljs-comment">// 启动集群模式</span> <span class="hljs-comment">// 记录有关当前正在运行的JVM进程的信息,包括环境变量,可以通过配置 hbase.envvars.logging.disabled 为 true 禁用</span> logProcessInfo(getConf()); <span class="hljs-comment">// 通过反射 HMaster 的构造方法对其进行实例化</span> <span class="hljs-type">HMaster</span> <span class="hljs-variable">master</span> <span class="hljs-operator">=</span> HMaster.constructMaster(masterClass, conf); <span class="hljs-comment">// 如果此时请求关闭 HMaster 则不会启动</span> <span class="hljs-keyword">if</span> (master.isStopped()) { LOG.info(<span class="hljs-string">"Won't bring the Master up as a shutdown is requested"</span>); <span class="hljs-keyword">return</span> <span class="hljs-number">1</span>; } <span class="hljs-comment">// 启动 HMaster,调用 HMaster 的 run 方法进行处理</span> master.start(); <span class="hljs-comment">// 等待 HMaster 启动成功</span> master.join(); <span class="hljs-comment">// 异常信息则输出错误信息</span> <span class="hljs-keyword">if</span>(master.isAborted()) <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">RuntimeException</span>(<span class="hljs-string">"HMaster Aborted"</span>); } } <span class="hljs-keyword">catch</span> (Throwable t) { LOG.error(<span class="hljs-string">"Master 
exiting"</span>, t); <span class="hljs-keyword">return</span> <span class="hljs-number">1</span>; } <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>; } <span class="hljs-comment">// 由 HMasterCommandLine 在 startMaster 方法中启动单机模式的 HBase 时调用 </span> <span class="hljs-keyword">private</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">waitOnMasterThreads</span><span class="hljs-params">(LocalHBaseCluster cluster)</span> <span class="hljs-keyword">throws</span> InterruptedException{ List<JVMClusterUtil.MasterThread> masters = cluster.getMasters(); List<JVMClusterUtil.RegionServerThread> regionservers = cluster.getRegionServers(); <span class="hljs-keyword">if</span> (masters != <span class="hljs-literal">null</span>) { <span class="hljs-keyword">for</span> (JVMClusterUtil.MasterThread t : masters) { <span class="hljs-comment">// 先等待 MasterThread 启动完成再启动 RegionServerThread,如果出现异常则关闭 RegionServer 并输出错误信息</span> t.join(); <span class="hljs-keyword">if</span>(t.getMaster().isAborted()) { closeAllRegionServerThreads(regionservers); <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">RuntimeException</span>(<span class="hljs-string">"HMaster Aborted"</span>); } } } }}---<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">LocalHBaseCluster</span> { <span class="hljs-keyword">public</span> <span class="hljs-title function_">LocalHBaseCluster</span><span class="hljs-params">(<span class="hljs-keyword">final</span> Configuration conf, <span class="hljs-keyword">final</span> <span class="hljs-type">int</span> noMasters,</span><span class="hljs-params"> <span class="hljs-keyword">final</span> <span class="hljs-type">int</span> noRegionServers, <span class="hljs-keyword">final</span> Class<? extends HMaster> masterClass,</span><span class="hljs-params"> <span class="hljs-keyword">final</span> Class<? extends HRegionServer> regionServerClass)</span> <span class="hljs-keyword">throws</span> IOException { <span class="hljs-built_in">this</span>.conf = conf; <span class="hljs-comment">// 获取及配置 HBase 相关参数</span> …… <span class="hljs-comment">// 设置 masterClass,此处即为传入的 LocalHMaster</span> <span class="hljs-built_in">this</span>.masterClass = (Class<? <span class="hljs-keyword">extends</span> <span class="hljs-title class_">HMaster</span>>) conf.getClass(HConstants.MASTER_IMPL, masterClass); <span class="hljs-comment">// 最终调用至 JVMClusterUtil 工具类的 createMasterThread 方法,通过反射调用继承自 HMaster 的子类构造方法进行实例化,得到 MasterThread</span> <span class="hljs-keyword">for</span> (<span class="hljs-type">int</span> <span class="hljs-variable">i</span> <span class="hljs-operator">=</span> <span class="hljs-number">0</span>; i < noMasters; i++) { addMaster(<span class="hljs-keyword">new</span> <span class="hljs-title class_">Configuration</span>(conf), i); } <span class="hljs-comment">// 设置 regionServerClass,此处即为传入的 HRegionServer</span> <span class="hljs-built_in">this</span>.regionServerClass = (Class<? 
<span class="hljs-keyword">extends</span> <span class="hljs-title class_">HRegionServer</span>>)conf.getClass(HConstants.REGION_SERVER_IMPL, regionServerClass); <span class="hljs-comment">// 最终调用至 JVMClusterUtil 工具类的 createRegionServerThread 方法,通过反射调用继承自 HRegionServer 的子类构造方法进行实例化,得到 RegionServerThread</span> <span class="hljs-keyword">for</span> (<span class="hljs-type">int</span> <span class="hljs-variable">i</span> <span class="hljs-operator">=</span> <span class="hljs-number">0</span>; i < noRegionServers; i++) { addRegionServer(<span class="hljs-keyword">new</span> <span class="hljs-title class_">Configuration</span>(conf), i); } } <span class="hljs-comment">// 启动前面实例化好的 masterThreads 和 regionThreads,等待启动完成,至此单机模式下的 HMaster 和 HRegionServer 均已启动,可以正常提供服务</span> <span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">startup</span><span class="hljs-params">()</span> <span class="hljs-keyword">throws</span> IOException { JVMClusterUtil.startup(<span class="hljs-built_in">this</span>.masterThreads, <span class="hljs-built_in">this</span>.regionThreads); }}---<span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">LocalHMaster</span> <span class="hljs-keyword">extends</span> <span class="hljs-title class_">HMaster</span> { <span class="hljs-keyword">private</span> <span class="hljs-type">MiniZooKeeperCluster</span> <span class="hljs-variable">zkcluster</span> <span class="hljs-operator">=</span> <span class="hljs-literal">null</span>; <span class="hljs-keyword">public</span> <span class="hljs-title function_">LocalHMaster</span><span class="hljs-params">(Configuration conf)</span> <span class="hljs-keyword">throws</span> IOException, KeeperException, InterruptedException { <span class="hljs-built_in">super</span>(conf); } <span class="hljs-meta">@Override</span> <span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">run</span><span class="hljs-params">()</span> { <span class="hljs-comment">// 调用父类 HMaster 的 run 方法</span> <span class="hljs-built_in">super</span>.run(); <span class="hljs-comment">// 调用 MiniZooKeeperCluster 的 shutdown 方法,停止单机模式下的 Zookeeper 服务</span> <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.zkcluster != <span class="hljs-literal">null</span>) { <span class="hljs-keyword">try</span> { <span class="hljs-built_in">this</span>.zkcluster.shutdown(); } <span class="hljs-keyword">catch</span> (IOException e) { e.printStackTrace(); } } } <span class="hljs-keyword">void</span> <span class="hljs-title function_">setZKCluster</span><span class="hljs-params">(<span class="hljs-keyword">final</span> MiniZooKeeperCluster zkcluster)</span> { <span class="hljs-built_in">this</span>.zkcluster = zkcluster; } }</code></pre></div><h3 id="hbase-config-sh"><a href="#hbase-config-sh" class="headerlink" title="hbase-config.sh"></a>hbase-config.sh</h3><p>用于获取配置参数的脚本,会去加载 hbase-env.sh 中设置的环境变量。</p><div class="hljs code-wrapper"><pre><code class="hljs shell"><span class="hljs-meta prompt_"># </span><span class="language-bash">获取当前路径</span>this="${BASH_SOURCE-$0}"<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">解析 <span class="hljs-variable">${BASH_SOURCE-$0}</span> 有可能是 softlink 的问题</span>while [ -h "$this" ]; do ls=`ls -ld "$this"` link=`expr "$ls" : '.*-> \(.*\)$'` if expr "$link" : '.*/.*' > 
/dev/null; then this="$link" else this=`dirname "$this"`/"$link" fidone<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">convert relative path to absolute path</span>bin=`dirname "$this"`script=`basename "$this"`bin=`cd "$bin">/dev/null; pwd`this="$bin/$script"<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">将 HBASE_HOME 设置为 HBase 安装的根目录</span>if [ -z "$HBASE_HOME" ]; then export HBASE_HOME=`dirname "$this"`/..fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">检查是否有可选参数传入,接受到则进行相应的参数设置</span>……<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">以下项为参数设置</span><span class="hljs-meta prompt_"># </span><span class="language-bash">Allow alternate hbase conf <span class="hljs-built_in">dir</span> location.</span>HBASE_CONF_DIR="${HBASE_CONF_DIR:-$HBASE_HOME/conf}"<span class="hljs-meta prompt_"># </span><span class="language-bash">List of hbase regions servers.</span>HBASE_REGIONSERVERS="${HBASE_REGIONSERVERS:-$HBASE_CONF_DIR/regionservers}"<span class="hljs-meta prompt_"># </span><span class="language-bash">List of hbase secondary masters.</span>HBASE_BACKUP_MASTERS="${HBASE_BACKUP_MASTERS:-$HBASE_CONF_DIR/backup-masters}"if [ -n "$HBASE_JMX_BASE" ] && [ -z "$HBASE_JMX_OPTS" ]; then HBASE_JMX_OPTS="$HBASE_JMX_BASE"fi……<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">加载 hbase-env.sh</span>if [ -z "$HBASE_ENV_INIT" ] && [ -f "${HBASE_CONF_DIR}/hbase-env.sh" ]; then . "${HBASE_CONF_DIR}/hbase-env.sh" export HBASE_ENV_INIT="true"fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">检测 HBASE_REGIONSERVER_MLOCK 是否设置为 <span class="hljs-literal">true</span>,主要是判断系统是否使用了 mlock 来锁住内存,防止这段内存被操作系统放到 swap 空间,即使该程序已经有一段时间没有访问这段空间</span>if [ "$HBASE_REGIONSERVER_MLOCK" = "true" ]; then MLOCK_AGENT="$HBASE_HOME/lib/native/libmlockall_agent.so" if [ ! 
-f "$MLOCK_AGENT" ]; then cat 1>&2 <<EOFUnable to find mlockall_agent, hbase must be compiled with -PnativeEOF exit 1 fi // 配置 HBASE_REGIONSERVER_UID if [ -z "$HBASE_REGIONSERVER_UID" ] || [ "$HBASE_REGIONSERVER_UID" == "$USER" ]; then HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -agentpath:$MLOCK_AGENT" else HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -agentpath:$MLOCK_AGENT=user=$HBASE_REGIONSERVER_UID" fifi……<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">检查是否配置了 jdk,HBase-2.2.4 至少需要 1.8 以上的 JDK 版本,未配置则退出</span><span class="hljs-meta prompt_"># </span><span class="language-bash">Now having JAVA_HOME defined is required</span> if [ -z "$JAVA_HOME" ]; then cat 1>&2 <<EOF……fi</code></pre></div><h3 id="hbase-daemons-sh"><a href="#hbase-daemons-sh" class="headerlink" title="hbase-daemons.sh"></a>hbase-daemons.sh</h3><p>根据要启动的进程,生成好远程执行命令 remote_cmd 并调用其他脚本执行。</p><div class="hljs code-wrapper"><pre><code class="hljs shell"><span class="hljs-meta prompt_"># </span><span class="language-bash">脚本用法</span>usage="Usage: hbase-daemons.sh [--config <hbase-confdir>] [--autostart-window-size <window size in hours>]\ [--autostart-window-retry-limit <retry count limit for autostart>] \ [--hosts regionserversfile] [autostart|autorestart|restart|start|stop] command args..."<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">如果没有指定参数,输出 usage</span>if [ $# -le 1 ]; then echo $usage exit 1fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">获取当前路径</span>bin=`dirname "${BASH_SOURCE-$0}"`bin=`cd "$bin">/dev/null; pwd`<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">默认的自动启动参数相关配置,一般前面脚本都是传递诸如 start 参数过来,可以忽略</span>AUTOSTART_WINDOW_SIZE=0AUTOSTART_WINDOW_RETRY_LIMIT=0<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">加载 hbase-config.sh 脚本</span>. 
$bin/hbase-config.sh……<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">调用 hbase-daemon.sh 并向其传递参数</span>remote_cmd="$bin/hbase-daemon.sh --config ${HBASE_CONF_DIR} ${autostart_args} $@"<span class="hljs-meta prompt_"># </span><span class="language-bash">将 <span class="hljs-variable">$remote_cmd</span> 作为参数继续包装到 args 中</span>args="--hosts ${HBASE_REGIONSERVERS} --config ${HBASE_CONF_DIR} $remote_cmd"<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">接受到的第二个参数的值,例如 start-hbase.sh 集群模式下传递了 zookeeper、regionserver 等,基于该参数分别调用相应的脚本执行,执行后退出</span>command=$2case $command in (zookeeper) exec "$bin/zookeepers.sh" $args ;; (master-backup) exec "$bin/master-backup.sh" $args ;; (*) exec "$bin/regionservers.sh" $args ;;esac</code></pre></div><h3 id="hbase-daemon-sh"><a href="#hbase-daemon-sh" class="headerlink" title="hbase-daemon.sh"></a>hbase-daemon.sh</h3><p>这个脚本很重要,前面的脚本都是做一些准备工作,它负责启动前的检查清理、日志滚动以及进程的启动等等。支持 7 种方式:start、autostart、autorestart、foreground_start、internal_autostart、stop、restart,其他参数则输出操作用法。</p><div class="hljs code-wrapper"><pre><code class="hljs shell"><span class="hljs-meta prompt_"># </span><span class="language-bash">将 Hadoop hbase 命令作为守护程序执行</span><span class="hljs-meta prompt_"># </span><span class="language-bash">环境变量</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">配置文件目录,默认是 <span class="hljs-variable">${HBASE_HOME}</span>/conf</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_CONF_DIR</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">日志存储目录,默认情况下为 <span class="hljs-built_in">pwd</span></span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_LOG_DIR</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">进程号存放目录,默认是 /tmp</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_PID_DIR</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">代表当前 hadoop 实例的字符串,默认是当前点用户 <span class="hljs-variable">$USER</span></span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_IDENT_STRING</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">守护程序的调度优先级,默认是 0</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_NICENESS</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">在停止服务此时间之后,服务还未停止,将对其执行 <span class="hljs-built_in">kill</span> -9 命令,默认 1200(单位是 s)</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_STOP_TIMEOUT</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">仿照了 <span class="hljs-variable">$HADOOP_HOME</span>/bin/hadoop-daemon.sh</span>usage="Usage: hbase-daemon.sh [--config <conf-dir>]\ [--autostart-window-size <window size in hours>]\ [--autostart-window-retry-limit <retry count limit for autostart>]\ (start|stop|restart|autostart|autorestart|foreground_start) <hbase-command> \ <args...>"<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">如果没有指定参数,输出 usage</span>if [ $# -le 1 ]; then echo $usage exit 1fi<span class="hljs-meta 
prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">默认的自动启动配置参数</span>AUTOSTART_WINDOW_SIZE=0AUTOSTART_WINDOW_RETRY_LIMIT=0<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">获取当前路径</span>bin=`dirname "${BASH_SOURCE-$0}"`bin=`cd "$bin">/dev/null; pwd`<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">加载 hbase-config.sh 以及 hbase-common.sh</span>. "$bin"/hbase-config.sh. "$bin"/hbase-common.sh<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">获取传递的参数,即 start 或 stop</span>startStop=$1<span class="hljs-meta prompt_"># </span><span class="language-bash">命令左移,<span class="hljs-built_in">shift</span> 命令每执行一次,变量的个数(<span class="hljs-variable">$#</span>)减一,而变量值提前一位</span>shiftcommand=$1shift<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">日志滚动</span>hbase_rotate_log (){ log=$1; num=5; if [ -n "$2" ]; then num=$2 fi # 检查是否存在日志文件,若存在则进行日志滚动 if [ -f "$log" ]; then # rotate logs while [ $num -gt 1 ]; do prev=`expr $num - 1` [ -f "$log.$prev" ] && mv -f "$log.$prev" "$log.$num" num=$prev done # 修改日志文件名 mv -f "$log" "$log.$num"; fi}<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">当运行遇到问题时进行清理,在 foreground_start 方式中,接收到 SIGHUP(终端线路挂断) SIGINT(中断进程) SIGTERM(软件终止信号) EXIT(退出)等信号时,</span><span class="hljs-meta prompt_"># </span><span class="language-bash">使用 <span class="hljs-built_in">trap</span> 命令对要处理的信号名采取相应的行动,即 <span class="hljs-built_in">kill</span> 掉正在运行的进程,并通知 Zookeeper 删除节点</span>cleanAfterRun() { if [ -f ${HBASE_PID} ]; then # If the process is still running time to tear it down. kill -9 `cat ${HBASE_PID}` > /dev/null 2>&1 rm -f ${HBASE_PID} > /dev/null 2>&1 fi if [ -f ${HBASE_ZNODE_FILE} ]; then if [ "$command" = "master" ]; then HBASE_OPTS="$HBASE_OPTS $HBASE_MASTER_OPTS" $bin/hbase master clear > /dev/null 2>&1 else # call ZK to delete the node ZNODE=`cat ${HBASE_ZNODE_FILE}` HBASE_OPTS="$HBASE_OPTS $HBASE_REGIONSERVER_OPTS" $bin/hbase zkcli delete ${ZNODE} > /dev/null 2>&1 fi rm ${HBASE_ZNODE_FILE} fi}<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">启动之间先检查进程是否存在,存在则输出警告信息</span>check_before_start(){ #ckeck if the process is not running mkdir -p "$HBASE_PID_DIR" if [ -f $HBASE_PID ]; then # kill -0 pid 不发送任何信号,但是系统会进行错误检查,检查一个进程是否存在,存在返回 0;不存在返回 1 if kill -0 `cat $HBASE_PID` > /dev/null 2>&1; then echo $command running as process `cat $HBASE_PID`. Stop it first. exit 1 fi fi}<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">等待命令执行完成,超过 HBASE_SLAVE_TIMEOUT 即 300 之后则调用 <span class="hljs-built_in">kill</span> -9 杀掉服务并输出警告信息</span>wait_until_done (){ p=$1 cnt=${HBASE_SLAVE_TIMEOUT:-300} origcnt=$cnt # 进程仍在运行,睡眠 1s 后重新判断,直到超过指定次数(时间)调用 kill -9 $pid while kill -0 $p > /dev/null 2>&1; do if [ $cnt -gt 1 ]; then cnt=`expr $cnt - 1` sleep 1 else echo "Process did not complete after $origcnt seconds, killing." 
kill -9 $p exit 1 fi done return 0}<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">获取日志目录</span>if [ "$HBASE_LOG_DIR" = "" ]; then export HBASE_LOG_DIR="$HBASE_HOME/logs"fimkdir -p "$HBASE_LOG_DIR"<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">如果没有配置 HBASE_PID_DIR 目录,则默认为 /tmp</span>if [ "$HBASE_PID_DIR" = "" ]; then HBASE_PID_DIR=/tmpfiif [ "$HBASE_IDENT_STRING" = "" ]; then export HBASE_IDENT_STRING="$USER"fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">配置 JAVA_HOME</span>if [ "$JAVA_HOME" != "" ]; then<span class="hljs-meta prompt_"> #</span><span class="language-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"run java in <span class="hljs-variable">$JAVA_HOME</span>"</span></span> JAVA_HOME=$JAVA_HOMEfiif [ "$JAVA_HOME" = "" ]; then echo "Error: JAVA_HOME is not set." exit 1fiJAVA=$JAVA_HOME/bin/java<span class="hljs-meta prompt_"># </span><span class="language-bash">日志前缀,如:hbase-root-master-node1</span>export HBASE_LOG_PREFIX=hbase-$HBASE_IDENT_STRING-$command-$HOSTNAMEexport HBASE_LOGFILE=$HBASE_LOG_PREFIX.log<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">如果未配置 HBASE_ROOT_LOGGER 参数,则设置默认的日志级别</span>if [ -z "${HBASE_ROOT_LOGGER}" ]; thenexport HBASE_ROOT_LOGGER=${HBASE_ROOT_LOGGER:-"INFO,RFA"}fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">如果未配置 HBASE_SECURITY_LOGGER 参数,则设置默认的安全日志级别</span>if [ -z "${HBASE_SECURITY_LOGGER}" ]; thenexport HBASE_SECURITY_LOGGER=${HBASE_SECURITY_LOGGER:-"INFO,RFAS"}fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">out 日志,即 System.out 输出信息</span>HBASE_LOGOUT=${HBASE_LOGOUT:-"$HBASE_LOG_DIR/$HBASE_LOG_PREFIX.out"}HBASE_LOGGC=${HBASE_LOGGC:-"$HBASE_LOG_DIR/$HBASE_LOG_PREFIX.gc"}HBASE_LOGLOG=${HBASE_LOGLOG:-"${HBASE_LOG_DIR}/${HBASE_LOGFILE}"}<span class="hljs-meta prompt_"># </span><span class="language-bash">HBase 相关服务进程</span>HBASE_PID=$HBASE_PID_DIR/hbase-$HBASE_IDENT_STRING-$command.pid<span class="hljs-meta prompt_"># </span><span class="language-bash">HBase 的 znode 文件</span>export HBASE_ZNODE_FILE=$HBASE_PID_DIR/hbase-$HBASE_IDENT_STRING-$command.znode<span class="hljs-meta prompt_"># </span><span class="language-bash">HBase 的 autostart 文件</span>export HBASE_AUTOSTART_FILE=$HBASE_PID_DIR/hbase-$HBASE_IDENT_STRING-$command.autostart<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">如果配置了 SERVER_GC_OPTS、CLIENT_GC_OPTS 参数,则设置对应变量</span>if [ -n "$SERVER_GC_OPTS" ]; then export SERVER_GC_OPTS=${SERVER_GC_OPTS/"-Xloggc:<FILE-PATH>"/"-Xloggc:${HBASE_LOGGC}"}fiif [ -n "$CLIENT_GC_OPTS" ]; then export CLIENT_GC_OPTS=${CLIENT_GC_OPTS/"-Xloggc:<FILE-PATH>"/"-Xloggc:${HBASE_LOGGC}"}fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">设置默认的程度调度优先级为 0</span>if [ "$HBASE_NICENESS" = "" ]; then export HBASE_NICENESS=0fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">获取当前路径</span>thiscmd="$bin/$(basename ${BASH_SOURCE-$0})"args=$@case $startStop in(start) check_before_start hbase_rotate_log $HBASE_LOGOUT hbase_rotate_log $HBASE_LOGGC # 输出启动的程序,以及日志输出目录,接着调用 foreground_start echo running $command, logging to 
$HBASE_LOGOUT $thiscmd --config "${HBASE_CONF_DIR}" \ foreground_start $command $args < /dev/null > ${HBASE_LOGOUT} 2>&1 & # 使正在运行的作业忽略 HUP 信号,避免当用户注销(logout)或者网络断开时,终端会收到 Linux HUP(hangup)信号从而关闭其所有子进程 disown -h -r sleep 1; head "${HBASE_LOGOUT}" ;;(autostart) check_before_start hbase_rotate_log $HBASE_LOGOUT hbase_rotate_log $HBASE_LOGGC echo running $command, logging to $HBASE_LOGOUT # 使用 nohup 挂起并执行自动启动程序,调用 internal_autostart 继续执行 nohup $thiscmd --config "${HBASE_CONF_DIR}" --autostart-window-size ${AUTOSTART_WINDOW_SIZE} --autostart-window-retry-limit ${AUTOSTART_WINDOW_RETRY_LIMIT} \ internal_autostart $command $args < /dev/null > ${HBASE_LOGOUT} 2>&1 & ;;(autorestart) echo running $command, logging to $HBASE_LOGOUT # 先停止当前服务,并等待所有进程都停止 $thiscmd --config "${HBASE_CONF_DIR}" stop $command $args & wait_until_done $! # 等待用户指定的睡眠周期 sp=${HBASE_RESTART_SLEEP:-3} if [ $sp -gt 0 ]; then sleep $sp fi check_before_start hbase_rotate_log $HBASE_LOGOUT # 使用 nohup 挂起并执行自动启动程序,调用 internal_autostart 继续执行 nohup $thiscmd --config "${HBASE_CONF_DIR}" --autostart-window-size ${AUTOSTART_WINDOW_SIZE} --autostart-window-retry-limit ${AUTOSTART_WINDOW_RETRY_LIMIT} \ internal_autostart $command $args < /dev/null > ${HBASE_LOGOUT} 2>&1 & ;;(foreground_start) trap cleanAfterRun SIGHUP SIGINT SIGTERM EXIT # 日志不重定向参数,一般都输出到日志中,这部分逻辑主要走 else if [ "$HBASE_NO_REDIRECT_LOG" != "" ]; then # NO REDIRECT echo "`date` Starting $command on `hostname`" echo "`ulimit -a`" # in case the parent shell gets the kill make sure to trap signals. # Only one will get called. Either the trap or the flow will go through. nice -n $HBASE_NICENESS "$HBASE_HOME"/bin/hbase \ --config "${HBASE_CONF_DIR}" \ $command "$@" start & else echo "`date` Starting $command on `hostname`" >> ${HBASE_LOGLOG} echo "`ulimit -a`" >> "$HBASE_LOGLOG" 2>&1 # nice 以更改过的优先序来执行程序,调用 $HBASE_HOME/bin/hbase 传递参数继续执行 nice -n $HBASE_NICENESS "$HBASE_HOME"/bin/hbase \ --config "${HBASE_CONF_DIR}" \ $command "$@" start >> ${HBASE_LOGOUT} 2>&1 & fi # 获取最后一个进程号,将其覆盖写入 $HBASE_PID 文件中,暂停当前进程并释放资源等待前面的线程执行 hbase_pid=$! echo $hbase_pid > ${HBASE_PID} wait $hbase_pid ;;(internal_autostart) ONE_HOUR_IN_SECS=3600 # 自动启动的开始日期 autostartWindowStartDate=`date +%s` autostartCount=0 # 创建自动启动的文件 touch "$HBASE_AUTOSTART_FILE" # 除非被要求停止,否则一直保持启动命令的状态,在崩溃时重新进入循环 while true do hbase_rotate_log $HBASE_LOGGC if [ -f $HBASE_PID ] && kill -0 "$(cat "$HBASE_PID")" > /dev/null 2>&1 ; then wait "$(cat "$HBASE_PID")" else # 如果 $HBASE_AUTOSTART_FILE 不存在,说明服务可能不是通过 stop 命令停止的 if [ ! -f "$HBASE_AUTOSTART_FILE" ]; then echo "`date` HBase might be stopped removing the autostart file. Exiting Autostart process" >> ${HBASE_LOGOUT} exit 1 fi echo "`date` Autostarting hbase $command service. 
Attempt no: $(( $autostartCount + 1))" >> ${HBASE_LOGLOG} touch "$HBASE_AUTOSTART_FILE" $thiscmd --config "${HBASE_CONF_DIR}" foreground_start $command $args autostartCount=$(( $autostartCount + 1 )) # HBASE-6504 - 仅当输出详细gc时,才采用输出的第一行 distMode=`$bin/hbase --config "$HBASE_CONF_DIR" org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed | head -n 1` if [ "$distMode" != 'false' ]; then # 如果集群正在被停止,不再重启 zparent=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.znode.parent` # 创建对应的 znode 并设置服务运行状态 if [ "$zparent" == "null" ]; then zparent="/hbase"; fi zkrunning=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.znode.state` if [ "$zkrunning" == "null" ]; then zkrunning="running"; fi zkFullRunning=$zparent/$zkrunning $bin/hbase zkcli stat $zkFullRunning 2>&1 | grep "Node does not exist" 1>/dev/null 2>&1 # 如果发现上述指令的 grep 匹配到结果,则说明遇到了问题,显示警告信息,处理后退出 if [ $? -eq 0 ]; then echo "`date` hbase znode does not exist. Exiting Autostart process" >> ${HBASE_LOGOUT} # 删除 $HBASE_AUTOSTART_FILE 文件 rm -f "$HBASE_AUTOSTART_FILE" exit 1 fi # 如果没有找到 Zookeeper 服务,就不重启并显示警告信息 $bin/hbase zkcli stat $zkFullRunning 2>&1 | grep Exception | grep ConnectionLoss 1>/dev/null 2>&1 if [ $? -eq 0 ]; then echo "`date` zookeeper not found. Exiting Autostart process" >> ${HBASE_LOGOUT} rm -f "$HBASE_AUTOSTART_FILE" exit 1 fi fi fi // 当前日期 curDate=`date +%s` // 是否重新设置自动启动窗口 autostartWindowReset=false # 假如超过了自动启动的窗口大小,就重新设置一下 if [ $AUTOSTART_WINDOW_SIZE -gt 0 ] && [ $(( $curDate - $autostartWindowStartDate )) -gt $(( $AUTOSTART_WINDOW_SIZE * $ONE_HOUR_IN_SECS )) ]; then echo "Resetting Autorestart window size: $autostartWindowStartDate" >> ${HBASE_LOGOUT} autostartWindowStartDate=$curDate autostartWindowReset=true autostartCount=0 fi # 当重试次数超过了给定的窗口大小限制(窗口大小不是 0),就杀掉程序,处理后退出 if ! $autostartWindowReset && [ $AUTOSTART_WINDOW_RETRY_LIMIT -gt 0 ] && [ $autostartCount -gt $AUTOSTART_WINDOW_RETRY_LIMIT ]; then echo "`date` Autostart window retry limit: $AUTOSTART_WINDOW_RETRY_LIMIT exceeded for given window size: $AUTOSTART_WINDOW_SIZE hours.. Exiting..." >> ${HBASE_LOGLOG} rm -f "$HBASE_AUTOSTART_FILE" exit 1 fi # 等待关闭的钩子完成 sleep 20 done ;;(stop) echo running $command, logging to $HBASE_LOGOUT rm -f "$HBASE_AUTOSTART_FILE" # 判断是否存在进程号的文件 if [ -f $HBASE_PID ]; then pidToKill=`cat $HBASE_PID` # 执行 kill -0 以确认进程是否在运行,如果在运行则传递 kill 信号,调用 hbase-common.sh 的 waitForProcessEnd 函数等待执行 if kill -0 $pidToKill > /dev/null 2>&1; then echo -n stopping $command echo "`date` Terminating $command" >> $HBASE_LOGLOG kill $pidToKill > /dev/null 2>&1 waitForProcessEnd $pidToKill $command else retval=$? echo no $command to stop because kill -0 of pid $pidToKill failed with status $retval fi else echo no $command to stop because no pid file $HBASE_PID fi rm -f $HBASE_PID ;;(restart) echo running $command, logging to $HBASE_LOGOUT # 停止服务 $thiscmd --config "${HBASE_CONF_DIR}" stop $command $args & wait_until_done $! # 等待用户指定的睡眠周期 sp=${HBASE_RESTART_SLEEP:-3} if [ $sp -gt 0 ]; then sleep $sp fi # 启动服务 $thiscmd --config "${HBASE_CONF_DIR}" start $command $args & wait_until_done $! 
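# restart 的逻辑即先 stop 再 start,两步均通过 wait_until_done 等待进程处理完成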
;;(*) echo $usage exit 1 ;;esac</code></pre></div><h3 id="hbase-common-sh"><a href="#hbase-common-sh" class="headerlink" title="hbase-common.sh"></a>hbase-common.sh</h3><p>仅有 waitForProcessEnd 方法,是个共享函数,用于等待进程结束,以 pid 和命令名称为参数。</p><div class="hljs code-wrapper"><pre><code class="hljs shell">waitForProcessEnd() {<span class="hljs-meta prompt_"> # </span><span class="language-bash">待停止的进程号</span> pidKilled=$1<span class="hljs-meta prompt_"> # </span><span class="language-bash">服务名</span> commandName=$2 processedAt=`date +%s`<span class="hljs-meta prompt_"> # </span><span class="language-bash">判断进程是否仍在运行</span> while kill -0 $pidKilled > /dev/null 2>&1; do echo -n "." sleep 1; # 如果进程持续的时间超过 $HBASE_STOP_TIMEOUT 即 1200s,不再等待,继续往下执行 if [ $(( `date +%s` - $processedAt )) -gt ${HBASE_STOP_TIMEOUT:-1200} ]; then break; fi done<span class="hljs-meta prompt_"> # </span><span class="language-bash">如果进程仍在运行,执行 <span class="hljs-built_in">kill</span> -9</span> if kill -0 $pidKilled > /dev/null 2>&1; then echo -n force stopping $commandName with kill -9 $pidKilled $JAVA_HOME/bin/jstack -l $pidKilled > "$logout" 2>&1 kill -9 $pidKilled > /dev/null 2>&1 fi<span class="hljs-meta prompt_"> # </span><span class="language-bash">Add a CR after we<span class="hljs-string">'re done w/ dots.</span></span> echo}</code></pre></div><h3 id="zookeepers-sh"><a href="#zookeepers-sh" class="headerlink" title="zookeepers.sh"></a>zookeepers.sh</h3><p>接收 hbase-daemon.sh 中传递的参数,在所有的 Zookeeper 主机上执行命令。</p><div class="hljs code-wrapper"><pre><code class="hljs shell"><span class="hljs-meta prompt_"># </span><span class="language-bash">环境变量</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">配置文件目录,默认是 <span class="hljs-variable">${HBASE_HOME}</span>/conf</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_CONF_DIR</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">在生成远程命令的时候睡眠的时间,默认未设置</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_SLAVE_SLEEP</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">执行远程命令时,传递给 ssh 的选项</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_SSH_OPTS</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">仿照 <span class="hljs-variable">$HADOOP_HOME</span>/bin/slaves.sh</span>usage="Usage: zookeepers [--config <hbase-confdir>] command..."<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">如果没有指定参数,输出 usage</span>if [ $# -le 0 ]; then echo $usage exit 1fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">获取当前路径</span>bin=`dirname "${BASH_SOURCE-$0}"`bin=`cd "$bin">/dev/null; pwd`<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">加载 hbase-config.sh</span>. 
"$bin"/hbase-config.sh<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">如果 <span class="hljs-variable">$HBASE_MANAGES_ZK</span> 参数未配置,即 hbase-env.sh 中的 <span class="hljs-built_in">export</span> HBASE_MANAGES_ZK=<span class="hljs-literal">true</span> 注释没打开,则将此参数设置为 <span class="hljs-literal">true</span></span>if [ "$HBASE_MANAGES_ZK" = "" ]; then HBASE_MANAGES_ZK=truefi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">调用 <span class="hljs-variable">$bin</span>/hbase 脚本,运行 ZKServerTool 类获取 Zookeeper 所有的主机,通过 grep 和 sed 命令对结果进行处理,见附 3,$<span class="hljs-string">"<span class="hljs-variable">${@// /\\ }</span>"</span>会将命令中将所有的 \ 替换成为空格</span>if [ "$HBASE_MANAGES_ZK" = "true" ]; then hosts=`"$bin"/hbase org.apache.hadoop.hbase.zookeeper.ZKServerTool | grep '^ZK host:' | sed 's,^ZK host:,,'` cmd=$"${@// /\\ }"<span class="hljs-meta prompt_"> # </span><span class="language-bash">在所有的主机上启动 Zookeeper 服务</span> for zookeeper in $hosts; do ssh $HBASE_SSH_OPTS $zookeeper $cmd 2>&1 | sed "s/^/$zookeeper: /" & if [ "$HBASE_SLAVE_SLEEP" != "" ]; then sleep $HBASE_SLAVE_SLEEP fi donefiwait</code></pre></div><h4 id="附-3"><a href="#附-3" class="headerlink" title="附 3"></a>附 3</h4><p>通过 $bin/hbase 运行此类。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">final</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">ZKServerTool</span> { <span class="hljs-keyword">private</span> <span class="hljs-title function_">ZKServerTool</span><span class="hljs-params">()</span> { } <span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> ServerName[] readZKNodes(Configuration conf) { List<ServerName> hosts = <span class="hljs-keyword">new</span> <span class="hljs-title class_">LinkedList</span><>(); <span class="hljs-comment">// 从 conf 中获取 hbase.zookeeper.quorum 参数对应的值,默认为 localhost</span> <span class="hljs-type">String</span> <span class="hljs-variable">quorum</span> <span class="hljs-operator">=</span> conf.get(HConstants.ZOOKEEPER_QUORUM, HConstants.LOCALHOST); String[] values = quorum.split(<span class="hljs-string">","</span>); <span class="hljs-keyword">for</span> (String value : values) { String[] parts = value.split(<span class="hljs-string">":"</span>); <span class="hljs-type">String</span> <span class="hljs-variable">host</span> <span class="hljs-operator">=</span> parts[<span class="hljs-number">0</span>]; <span class="hljs-comment">// 默认端口 2181</span> <span class="hljs-type">int</span> <span class="hljs-variable">port</span> <span class="hljs-operator">=</span> HConstants.DEFAULT_ZOOKEEPER_CLIENT_PORT; <span class="hljs-keyword">if</span> (parts.length > <span class="hljs-number">1</span>) { port = Integer.parseInt(parts[<span class="hljs-number">1</span>]); } hosts.add(ServerName.valueOf(host, port, -<span class="hljs-number">1</span>)); } <span class="hljs-comment">// 转换成数组输出</span> <span class="hljs-keyword">return</span> hosts.toArray(<span class="hljs-keyword">new</span> <span class="hljs-title class_">ServerName</span>[hosts.size()]); } <span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">main</span><span class="hljs-params">(String[] args)</span> { <span class="hljs-keyword">for</span>(ServerName server: readZKNodes(HBaseConfiguration.create())) { <span 
class="hljs-comment">// bin/zookeeper.sh 依赖于 "ZK host" 字符串进行 grep 操作,区分大小写</span> System.out.println(<span class="hljs-string">"ZK host: "</span> + server.getHostname()); } }}</code></pre></div><h3 id="master-backup-sh"><a href="#master-backup-sh" class="headerlink" title="master-backup.sh"></a>master-backup.sh</h3><p>接收 hbase-daemon.sh 中传递的参数,在所有的 backup master 主机上执行命令。</p><div class="hljs code-wrapper"><pre><code class="hljs shell"><span class="hljs-meta prompt_"># </span><span class="language-bash">环境变量</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">远程主机文件命名,默认是 <span class="hljs-variable">${HBASE_CONF_DIR}</span>/backup-masters</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_BACKUP_MASTERS</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">Hadoop 配置文件路径,默认是 <span class="hljs-variable">${HADOOP_HOME}</span>/conf</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HADOOP_CONF_DIR</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBase 配置文件路径,默认是 <span class="hljs-variable">${HBASE_HOME}</span>/conf</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_CONF_DIR</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">在生成远程命令的时候睡眠的时间,默认未设置</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_SLAVE_SLEEP</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">执行远程命令时,传递给 ssh 的选项</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_SSH_OPTS</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">仿照 <span class="hljs-variable">$HADOOP_HOME</span>/bin/slaves.sh</span>usage="Usage: $0 [--config <hbase-confdir>] command..."<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">如果没有指定参数,输出 usage</span>if [ $# -le 0 ]; then echo $usage exit 1fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">获取当前路径</span>bin=`dirname "${BASH_SOURCE-$0}"`bin=`cd "$bin">/dev/null; pwd`<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">加载 hbase-config.sh</span>. 
"$bin"/hbase-config.sh<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">如果在命令行中指定了 master backup 文件,那么优先级高于 hbase-env.sh 中的配置,此处进行保存</span>HOSTLIST=$HBASE_BACKUP_MASTERSif [ "$HOSTLIST" = "" ]; then if [ "$HBASE_BACKUP_MASTERS" = "" ]; then export HOSTLIST="${HBASE_CONF_DIR}/backup-masters" else export HOSTLIST="${HBASE_BACKUP_MASTERS}" fifi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">$<span class="hljs-string">"<span class="hljs-variable">${@// /\\ }</span>"</span>会将命令中将所有的 \ 替换成为空格</span>args=${@// /\\ }args=${args/master-backup/master}<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">登陆到每个节点上,以 backup 的方式启动 master,启动后 Zookeeper 会自动选取一个 master 作为 active,其他的都是 backup</span>if [ -f $HOSTLIST ]; then for hmaster in `cat "$HOSTLIST"`; do ssh $HBASE_SSH_OPTS $hmaster $"$args --backup" \ 2>&1 | sed "s/^/$hmaster: /" & if [ "$HBASE_SLAVE_SLEEP" != "" ]; then sleep $HBASE_SLAVE_SLEEP fi donefi wait</code></pre></div><h3 id="regionservers-sh"><a href="#regionservers-sh" class="headerlink" title="regionservers.sh"></a>regionservers.sh</h3><p>接收 hbase-daemon.sh 中传递的参数,在所有的 RegionServer 主机上执行命令。</p><div class="hljs code-wrapper"><pre><code class="hljs shell"><span class="hljs-meta prompt_"># </span><span class="language-bash">环境变量</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">远程主机文件命名,默认是 <span class="hljs-variable">${HADOOP_CONF_DIR}</span>/regionservers</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_REGIONSERVERS</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">Hadoop 配置文件路径,默认是 <span class="hljs-variable">${HADOOP_HOME}</span>/conf</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HADOOP_CONF_DIR</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBase 配置文件路径,默认是 <span class="hljs-variable">${HBASE_HOME}</span>/conf</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_CONF_DIR</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">在生成远程命令的时候睡眠的时间,默认未设置</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_SLAVE_SLEEP</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">执行远程命令时,传递给 ssh 的选项</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_SSH_OPTS</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">仿照 <span class="hljs-variable">$HADOOP_HOME</span>/bin/slaves.sh</span>usage="Usage: regionservers [--config <hbase-confdir>] command..."<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">如果没有指定参数,输出 usage</span>if [ $# -le 0 ]; then echo $usage exit 1fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">获取当前路径</span>bin=`dirname "${BASH_SOURCE-$0}"`bin=`cd "$bin">/dev/null; pwd`. 
"$bin"/hbase-config.sh<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">如果在命令行中指定了 regionservers 文件,那么优先级高于 hbase-env.sh 中的配置,此处进行保存</span>HOSTLIST=$HBASE_REGIONSERVERSif [ "$HOSTLIST" = "" ]; then if [ "$HBASE_REGIONSERVERS" = "" ]; then export HOSTLIST="${HBASE_CONF_DIR}/regionservers" else export HOSTLIST="${HBASE_REGIONSERVERS}" fifiregionservers=`cat "$HOSTLIST"`<span class="hljs-meta prompt_"># </span><span class="language-bash">如果 regionservers 是默认的 localhost,则会在本地启动 regionserver,集群模式则按顺序在各个节点上启动 RegionServer,$<span class="hljs-string">"<span class="hljs-variable">${@// /\\ }</span>"</span>会将命令中将所有的 \ 替换成为空格</span>if [ "$regionservers" = "localhost" ]; then HBASE_REGIONSERVER_ARGS="\ -Dhbase.regionserver.port=16020 \ -Dhbase.regionserver.info.port=16030"<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"> $</span><span class="language-bash"><span class="hljs-string">"<span class="hljs-variable">${@// /\\ }</span>"</span> <span class="hljs-variable">${HBASE_REGIONSERVER_ARGS}</span> \</span><span class="language-bash"> 2>&1 | sed <span class="hljs-string">"s/^/<span class="hljs-variable">$regionserver</span>: /"</span> &</span>else for regionserver in `cat "$HOSTLIST"`; do if ${HBASE_SLAVE_PARALLEL:-true}; then ssh $HBASE_SSH_OPTS $regionserver $"${@// /\\ }" \ 2>&1 | sed "s/^/$regionserver: /" & else # run each command serially ssh $HBASE_SSH_OPTS $regionserver $"${@// /\\ }" \ 2>&1 | sed "s/^/$regionserver: /" fi if [ "$HBASE_SLAVE_SLEEP" != "" ]; then sleep $HBASE_SLAVE_SLEEP fi donefiwait</code></pre></div><h3 id="bin-hbase"><a href="#bin-hbase" class="headerlink" title="bin/hbase"></a>bin/hbase</h3><p>hbase 命令脚本,基于 hadoop 命令脚本,它在 hadoop 脚本之前完成了相关配置。</p><div class="hljs code-wrapper"><pre><code class="hljs shell"><span class="hljs-meta prompt_"># </span><span class="language-bash">环境变量</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">要使用的 java 实现,覆盖 JAVA_HOME</span><span class="hljs-meta prompt_"># </span><span class="language-bash">JAVA_HOME</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">额外的 Java CLASSPATH</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_CLASSPATH</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">作为 system classpath 的额外 Java CLASSPATH 的前缀</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_CLASSPATH_PREFIX</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">使用的最大堆数量,默认未设置,并使用 JVM 的默认设置,通常是可用内存的 1/4</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_HEAPSIZE</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">对 JAVA_LIBRARY_PATH 的 HBase 添加,用于添加本机库</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_LIBRARY_PATH</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">额外的 Java 运行时选项</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_OPTS</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBase 配置文件路径,默认是 <span class="hljs-variable">${HBASE_HOME}</span>/conf</span><span class="hljs-meta prompt_"># </span><span 
class="language-bash">HBASE_CONF_DIR</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_">#</span><span class="language-bash">日志追加器,默认是控制台 INFO 级别</span><span class="hljs-meta prompt_"># </span><span class="language-bash">HBASE_ROOT_LOGGER</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">JRuby路径:<span class="hljs-variable">$JRUBY_HOME</span>/lib/jruby.jar 应该存在,默认为 HBase 打包的jar</span><span class="hljs-meta prompt_"># </span><span class="language-bash">JRUBY_HOME</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">额外的选项(例如<span class="hljs-string">'--1.9'</span>)传递给了 hbase,默认为空</span><span class="hljs-meta prompt_"># </span><span class="language-bash">JRUBY_OPTS</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">额外的传递给 hbase shell 的选项,默认为空</span><span class="hljs-meta prompt_"># </span><span class="language-bash"> HBASE_SHELL_OPTS</span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">获取当前目录</span>bin=`dirname "$0"`bin=`cd "$bin">/dev/null; pwd`<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">加载 hbase-config.sh 获取配置</span>. "$bin"/hbase-config.sh<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">检测系统,是否使用 cygwin</span>cygwin=falsecase "`uname`" inCYGWIN*) cygwin=true;;esac<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">检测当前是否在 HBase 的根目录中</span>in_dev_env=falseif [ -d "${HBASE_HOME}/target" ]; then in_dev_env=truefi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">检测是否在综合压缩包中</span>in_omnibus_tarball="false"if [ -f "${HBASE_HOME}/bin/hbase-daemons.sh" ]; then in_omnibus_tarball="true"fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-built_in">read</span> 当读到 <span class="hljs-string">''</span> 即结束,此处作为一部分用法进行输出</span>read -d '' options_string << EOFOptions: --config DIR Configuration direction to use. Default: ./conf --hosts HOSTS Override the list in 'regionservers' file --auth-as-server Authenticate to ZooKeeper using servers configuration --internal-classpath Skip attempting to use client facing jars (WARNING: unstable results between versions)EOF<span class="hljs-meta prompt_"># </span><span class="language-bash">如果没有指定参数,输出 usage,包含上面的内容</span>if [ $# = 0 ]; then echo "Usage: hbase [<options>] <command> [<args>]" echo "$options_string" echo "" echo "Commands:" echo "Some commands take arguments. Pass no args or -h for usage." echo " shell Run the HBase shell" echo " hbck Run the HBase 'fsck' tool. Defaults read-only hbck1." echo " Pass '-j /path/to/HBCK2.jar' to run hbase-2.x HBCK2." 
echo " snapshot Tool for managing snapshots" if [ "${in_omnibus_tarball}" = "true" ]; then echo " wal Write-ahead-log analyzer" echo " hfile Store file analyzer" echo " zkcli Run the ZooKeeper shell" echo " master Run an HBase HMaster node" echo " regionserver Run an HBase HRegionServer node" echo " zookeeper Run a ZooKeeper server" echo " rest Run an HBase REST server" echo " thrift Run the HBase Thrift server" echo " thrift2 Run the HBase Thrift2 server" echo " clean Run the HBase clean up script" fi echo " classpath Dump hbase CLASSPATH" echo " mapredcp Dump CLASSPATH entries required by mapreduce" echo " pe Run PerformanceEvaluation" echo " ltt Run LoadTestTool" echo " canary Run the Canary tool" echo " version Print the version" echo " completebulkload Run BulkLoadHFiles tool" echo " regionsplitter Run RegionSplitter tool" echo " rowcounter Run RowCounter tool" echo " cellcounter Run CellCounter tool" echo " pre-upgrade Run Pre-Upgrade validator tool" echo " hbtop Run HBTop tool" echo " CLASSNAME Run the class named CLASSNAME" exit 1fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">获取传入的第一个参数</span>COMMAND=$1<span class="hljs-meta prompt_"># </span><span class="language-bash">命令左移</span>shiftJAVA=$JAVA_HOME/bin/java<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">覆盖此命令的默认设置(如果适用)</span>if [ -f "$HBASE_HOME/conf/hbase-env-$COMMAND.sh" ]; then . "$HBASE_HOME/conf/hbase-env-$COMMAND.sh"fiadd_size_suffix() { # 如果参数缺少一个,则添加一个“m”后缀 local val="$1" local lastchar=${val: -1} if [[ "mMgG" == *$lastchar* ]]; then echo $val else echo ${val}m fi}<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">检测 HBASE_HEAPSIZE 是否设置</span>if [[ -n "$HBASE_HEAPSIZE" ]]; then JAVA_HEAP_MAX="-Xmx$(add_size_suffix $HBASE_HEAPSIZE)"fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">检测 HBASE_OFFHEAPSIZE 是否设置</span>if [[ -n "$HBASE_OFFHEAPSIZE" ]]; then JAVA_OFFHEAP_MAX="-XX:MaxDirectMemorySize=$(add_size_suffix $HBASE_OFFHEAPSIZE)"fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">这样在下面的循环中可以正确处理带空格的文件名,设置 IFS</span>ORIG_IFS=$IFSIFS=<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">CLASSPATH 初始化包含 HBASE_CONF_DIR</span>CLASSPATH="${HBASE_CONF_DIR}"CLASSPATH=${CLASSPATH}:$JAVA_HOME/lib/tools.jar<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">如果传入的文件存在,则加入 CLASSPATH</span>add_to_cp_if_exists() { if [ -d "$@" ]; then CLASSPATH=${CLASSPATH}:"$@" fi}<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">对于发行版,将 hbase 和 webapp 添加到 CLASSPATH 中 Webapp 必须首先出现,否则会使 Jetty 混乱</span>if [ -d "$HBASE_HOME/hbase-webapps" ]; then add_to_cp_if_exists "${HBASE_HOME}"fi<span class="hljs-meta prompt_"># </span><span class="language-bash">如果在开发环境中则添加</span>if [ -d "$HBASE_HOME/hbase-server/target/hbase-webapps" ]; then if [ "$COMMAND" = "thrift" ] ; then add_to_cp_if_exists "${HBASE_HOME}/hbase-thrift/target" elif [ "$COMMAND" = "thrift2" ] ; then add_to_cp_if_exists "${HBASE_HOME}/hbase-thrift/target" elif [ "$COMMAND" = "rest" ] ; then add_to_cp_if_exists "${HBASE_HOME}/hbase-rest/target" else add_to_cp_if_exists "${HBASE_HOME}/hbase-server/target" # 需要下面的 GetJavaProperty 检查 
add_to_cp_if_exists "${HBASE_HOME}/hbase-server/target/classes" fifi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">如果可用,将 Hadoop 添加到 CLASSPATH 和 JAVA_LIBRARY_PATH,允许禁用此功能</span>if [ "$HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP" != "true" ] ; then HADOOP_IN_PATH=$(PATH="${HADOOP_HOME:-${HADOOP_PREFIX}}/bin:$PATH" which hadoop 2>/dev/null)fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">声明 shaded_jar,将 libs 添加到 CLASSPATH</span>declare shaded_jarif [ "${INTERNAL_CLASSPATH}" != "true" ]; then<span class="hljs-meta prompt_"> # </span><span class="language-bash">find our shaded jars</span> declare shaded_client declare shaded_client_byo_hadoop declare shaded_mapreduce for f in "${HBASE_HOME}"/lib/shaded-clients/hbase-shaded-client*.jar; do if [[ "${f}" =~ byo-hadoop ]]; then shaded_client_byo_hadoop="${f}" else shaded_client="${f}" fi done for f in "${HBASE_HOME}"/lib/shaded-clients/hbase-shaded-mapreduce*.jar; do shaded_mapreduce="${f}" done<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"> # </span><span class="language-bash">如果命令可以使用 shaded client,使用它</span> declare -a commands_in_client_jar=("classpath" "version" "hbtop") for c in "${commands_in_client_jar[@]}"; do if [ "${COMMAND}" = "${c}" ]; then if [ -n "${HADOOP_IN_PATH}" ] && [ -f "${HADOOP_IN_PATH}" ]; then # 如果上面没有找到一个 jar,它将为空,然后下面的检查将默认返回内部类路径 shaded_jar="${shaded_client_byo_hadoop}" else # 如果上面没有找到一个jar,它将为空,然后下面的检查将默认返回内部类路径 shaded_jar="${shaded_client}" fi break fi done<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"> # </span><span class="language-bash">如果命令需要 shaded mapreduce,使用它</span><span class="hljs-meta prompt_"> # </span><span class="language-bash">此处不包含 N.B “mapredcp”,因为在 shaded 情况下,它会跳过我们构建的类路径</span> declare -a commands_in_mr_jar=("hbck" "snapshot" "canary" "regionsplitter" "pre-upgrade") for c in "${commands_in_mr_jar[@]}"; do if [ "${COMMAND}" = "${c}" ]; then # 如果上面没有找到一个jar,它将为空,然后下面的检查将默认返回内部类路径 shaded_jar="${shaded_mapreduce}" break fi done<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"> # </span><span class="language-bash">当我们在运行时获得完整的 hadoop 类路径时,某些命令专门只能使用 shaded mapreduce</span> if [ -n "${HADOOP_IN_PATH}" ] && [ -f "${HADOOP_IN_PATH}" ]; then declare -a commands_in_mr_need_hadoop=("backup" "restore" "rowcounter" "cellcounter") for c in "${commands_in_mr_need_hadoop[@]}"; do if [ "${COMMAND}" = "${c}" ]; then # 如果上面没有找到一个jar,它将为空,然后下面的检查将默认返回内部类路径 shaded_jar="${shaded_mapreduce}" break fi done fifi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">加载相关 jar 包</span>if [ -n "${shaded_jar}" ] && [ -f "${shaded_jar}" ]; then CLASSPATH="${CLASSPATH}:${shaded_jar}"<span class="hljs-meta prompt_"># </span><span class="language-bash">fall through to grabbing all the lib jars and hope we<span class="hljs-string">'re in the omnibus tarball</span></span><span class="hljs-meta prompt_">#</span><span class="language-bash"><span class="hljs-string"></span></span><span class="hljs-string"><span class="language-bash"># N.B. shell specifically can'</span>t rely on the shaded artifacts because RSGroups is only</span><span class="hljs-meta prompt_"># </span><span class="language-bash">available as non-shaded</span><span class="hljs-meta prompt_">#</span><span class="language-bash"></span><span class="language-bash"><span class="hljs-comment"># N.B. 
pe and ltt can't easily rely on shaded artifacts because they live in hbase-mapreduce:test-jar</span></span><span class="hljs-meta prompt_"># </span><span class="language-bash">and need some other jars that haven<span class="hljs-string">'t been relocated. Currently enumerating that list</span></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">is too hard to be worth it.</span></span><span class="hljs-meta prompt_">#</span><span class="language-bash"><span class="hljs-string"></span></span><span class="hljs-string"><span class="language-bash">else</span></span> for f in $HBASE_HOME/lib/*.jar; do CLASSPATH=${CLASSPATH}:$f; done<span class="hljs-meta prompt_"> # </span><span class="language-bash"><span class="hljs-string">make it easier to check for shaded/not later on.</span></span> shaded_jar=""fifor f in "${HBASE_HOME}"/lib/client-facing-thirdparty/*.jar; do if [[ ! "${f}" =~ ^.*/htrace-core-3.*\.jar$ ]] && \ [ "${f}" != "htrace-core.jar$" ] && \ [[ ! "${f}" =~ ^.*/slf4j-log4j.*$ ]]; then CLASSPATH="${CLASSPATH}:${f}" fidone<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">默认的日志文件目录</span></span>if [ "$HBASE_LOG_DIR" = "" ]; then HBASE_LOG_DIR="$HBASE_HOME/logs"fi<span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">默认的日志名</span></span>if [ "$HBASE_LOGFILE" = "" ]; then HBASE_LOGFILE='hbase.log'fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">组装 jar</span></span>function append_path() { if [ -z "$1" ]; then echo "$2" else echo "$1:$2" fi}JAVA_PLATFORM=""<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">如果定义了 HBASE_LIBRARY_PATH,则将其用作第一个或第二个选项</span></span>if [ "$HBASE_LIBRARY_PATH" != "" ]; then JAVA_LIBRARY_PATH=$(append_path "$JAVA_LIBRARY_PATH" "$HBASE_LIBRARY_PATH")fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">如果已配置并且可用,则将 Hadoop 添加到 CLASSPATH 和 JAVA_LIBRARY_PATH</span></span>if [ -n "${HADOOP_IN_PATH}" ] && [ -f "${HADOOP_IN_PATH}" ]; then<span class="hljs-meta prompt_"> # </span><span class="language-bash"><span class="hljs-string">如果构建了 hbase,则将 hbase-server.jar 临时添加到 GetJavaProperty 的类路径中</span></span><span class="hljs-meta prompt_"> # </span><span class="language-bash"><span class="hljs-string">排除 hbase-server*-tests.jar</span></span> temporary_cp= for f in "${HBASE_HOME}"/lib/hbase-server*.jar; do if [[ ! 
"${f}" =~ ^.*\-tests\.jar$ ]]; then temporary_cp=":$f" fi done HADOOP_JAVA_LIBRARY_PATH=$(HADOOP_CLASSPATH="$CLASSPATH${temporary_cp}" "${HADOOP_IN_PATH}" \ org.apache.hadoop.hbase.util.GetJavaProperty java.library.path) if [ -n "$HADOOP_JAVA_LIBRARY_PATH" ]; then JAVA_LIBRARY_PATH=$(append_path "${JAVA_LIBRARY_PATH}" "$HADOOP_JAVA_LIBRARY_PATH") fi CLASSPATH=$(append_path "${CLASSPATH}" "$(${HADOOP_IN_PATH} classpath 2>/dev/null)")else<span class="hljs-meta prompt_"> # </span><span class="language-bash"><span class="hljs-string">否则,如果我们提供的是 Hadoop,我们还需要使用它的版本构建,则应包括 htrace 3</span></span> for f in "${HBASE_HOME}"/lib/client-facing-thirdparty/htrace-core-3*.jar "${HBASE_HOME}"/lib/client-facing-thirdparty/htrace-core.jar; do if [ -f "${f}" ]; then CLASSPATH="${CLASSPATH}:${f}" break fi done<span class="hljs-meta prompt_"> # </span><span class="language-bash"><span class="hljs-string">使用 shaded jars 时,某些命令需要特殊处理。对于这些情况,我们依赖于 hbase-shaded-mapreduce 而不是 hbase-shaded-client*,因为我们利用了一些 IA.Private 类,这些私有类不再后者中。但是我们不使用"hadoop jar"来调用它们,因此当我们不执行运行时 hadoop 类路径查找时,我们需要确保有一些 Hadoop 类可用。</span></span><span class="hljs-meta prompt_"> # </span><span class="language-bash"><span class="hljs-string">我们需要的一组类是打包在 shaded-client 中的那些类</span></span> for c in "${commands_in_mr_jar[@]}"; do if [ "${COMMAND}" = "${c}" ] && [ -n "${shaded_jar}" ]; then CLASSPATH="${CLASSPATH}:${shaded_client:?We couldn\'t find the shaded client jar even though we did find the shaded MR jar. for command ${COMMAND} we need both. please use --internal-classpath as a workaround.}" break fi donefi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">最后添加用户指定的 CLASSPATH</span></span>if [ "$HBASE_CLASSPATH" != "" ]; then CLASSPATH=${CLASSPATH}:${HBASE_CLASSPATH}fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">首先添加用户指定的 CLASSPATH 前缀</span></span>if [ "$HBASE_CLASSPATH_PREFIX" != "" ]; then CLASSPATH=${HBASE_CLASSPATH_PREFIX}:${CLASSPATH}fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">cygwin 路径转换</span></span>if $cygwin; then CLASSPATH=`cygpath -p -w "$CLASSPATH"` HBASE_HOME=`cygpath -d "$HBASE_HOME"` HBASE_LOG_DIR=`cygpath -d "$HBASE_LOG_DIR"`fiif [ -d "${HBASE_HOME}/build/native" -o -d "${HBASE_HOME}/lib/native" ]; then if [ -z $JAVA_PLATFORM ]; then JAVA_PLATFORM=`CLASSPATH=${CLASSPATH} ${JAVA} org.apache.hadoop.util.PlatformName | sed -e "s/ /_/g"` fi if [ -d "$HBASE_HOME/build/native" ]; then JAVA_LIBRARY_PATH=$(append_path "$JAVA_LIBRARY_PATH" "${HBASE_HOME}/build/native/${JAVA_PLATFORM}/lib") fi if [ -d "${HBASE_HOME}/lib/native" ]; then JAVA_LIBRARY_PATH=$(append_path "$JAVA_LIBRARY_PATH" "${HBASE_HOME}/lib/native/${JAVA_PLATFORM}") fifi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">cygwin 路径转换</span></span>if $cygwin; then JAVA_LIBRARY_PATH=`cygpath -p "$JAVA_LIBRARY_PATH"`fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">清除循环中的 IFS</span></span>unset IFS<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">根据我们所运行的设置正确的 GC 选项</span></span>declare -a server_cmds=("master" "regionserver" "thrift" "thrift2" "rest" "avro" 
"zookeeper")for cmd in ${server_cmds[@]}; doif [[ $cmd == $COMMAND ]]; thenserver=truebreakfidoneif [[ $server ]]; thenHBASE_OPTS="$HBASE_OPTS $SERVER_GC_OPTS"elseHBASE_OPTS="$HBASE_OPTS $CLIENT_GC_OPTS"fiif [ "$AUTH_AS_SERVER" == "true" ] || [ "$COMMAND" = "hbck" ]; then if [ -n "$HBASE_SERVER_JAAS_OPTS" ]; then HBASE_OPTS="$HBASE_OPTS $HBASE_SERVER_JAAS_OPTS" else HBASE_OPTS="$HBASE_OPTS $HBASE_REGIONSERVER_OPTS" fifi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">检测命令是否需要 jline</span></span>declare -a jline_cmds=("zkcli" "org.apache.hadoop.hbase.zookeeper.ZKMainServer")for cmd in "${jline_cmds[@]}"; do if [[ $cmd == "$COMMAND" ]]; then jline_needed=true break fidone<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">for jruby</span></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">(1) for the commands which need jruby (see jruby_cmds defined below)</span></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string"> A. when JRUBY_HOME is specified explicitly, eg. export JRUBY_HOME=/usr/local/share/jruby</span></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string"> CLASSPATH and HBASE_OPTS are updated according to JRUBY_HOME specified</span></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string"> B. when JRUBY_HOME is not specified explicitly</span></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string"> add jruby packaged with HBase to CLASSPATH</span></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">(2) for other commands, do nothing</span></span><span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">检测命令是否需要 jruby</span></span>declare -a jruby_cmds=("shell" "org.jruby.Main")for cmd in "${jruby_cmds[@]}"; do if [[ $cmd == "$COMMAND" ]]; then jruby_needed=true break fidoneadd_maven_deps_to_classpath() { f="${HBASE_HOME}/hbase-build-configuration/target/$1" if [ ! -f "${f}" ]; then echo "As this is a development environment, we need ${f} to be generated from maven (command: mvn install -DskipTests)" exit 1 fi CLASSPATH=${CLASSPATH}:$(cat "${f}")}<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">添加开发环境类路径的东西</span></span>if $in_dev_env; then add_maven_deps_to_classpath "cached_classpath.txt" if [[ $jline_needed ]]; then add_maven_deps_to_classpath "cached_classpath_jline.txt" elif [[ $jruby_needed ]]; then add_maven_deps_to_classpath "cached_classpath_jruby.txt" fifi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">命令需要 jruby</span></span>if [[ $jruby_needed ]]; then if [ "$JRUBY_HOME" != "" ]; then # JRUBY_HOME is specified explicitly, eg. export JRUBY_HOME=/usr/local/share/jruby # add jruby.jar into CLASSPATH CLASSPATH="$JRUBY_HOME/lib/jruby.jar:$CLASSPATH" # add jruby to HBASE_OPTS HBASE_OPTS="$HBASE_OPTS -Djruby.home=$JRUBY_HOME -Djruby.lib=$JRUBY_HOME/lib" else # JRUBY_HOME is not specified explicitly if ! 
$in_dev_env; then # not in dev environment # add jruby packaged with HBase to CLASSPATH JRUBY_PACKAGED_WITH_HBASE="$HBASE_HOME/lib/ruby/*.jar" for jruby_jar in $JRUBY_PACKAGED_WITH_HBASE; do CLASSPATH=$jruby_jar:$CLASSPATH; done fi fifi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">找出要运行的 class,该脚本可用于直接运行 Java 类</span></span>if [ "$COMMAND" = "shell" ] ; then<span class="hljs-meta prompt_">#</span><span class="language-bash"><span class="hljs-string">find the hbase ruby sources</span></span> if [ -d "$HBASE_HOME/lib/ruby" ]; then HBASE_OPTS="$HBASE_OPTS -Dhbase.ruby.sources=$HBASE_HOME/lib/ruby" else HBASE_OPTS="$HBASE_OPTS -Dhbase.ruby.sources=$HBASE_HOME/hbase-shell/src/main/ruby" fi HBASE_OPTS="$HBASE_OPTS $HBASE_SHELL_OPTS" CLASS="org.jruby.Main -X+O ${JRUBY_OPTS} ${HBASE_HOME}/bin/hirb.rb"elif [ "$COMMAND" = "hbck" ] ; then<span class="hljs-meta prompt_"> # </span><span class="language-bash"><span class="hljs-string">Look for the -j /path/to/HBCK2.jar parameter. Else pass through to hbck.</span></span> case "${1}" in -j) # Found -j parameter. Add arg to CLASSPATH and set CLASS to HBCK2. shift JAR="${1}" if [ ! -f "${JAR}" ]; then echo "${JAR} file not found!" echo "Usage: hbase [<options>] hbck -jar /path/to/HBCK2.jar [<args>]" exit 1 fi CLASSPATH="${JAR}:${CLASSPATH}"; CLASS="org.apache.hbase.HBCK2" shift # past argument=value ;; *) CLASS='org.apache.hadoop.hbase.util.HBaseFsck' ;; esacelif [ "$COMMAND" = "wal" ] ; then CLASS='org.apache.hadoop.hbase.wal.WALPrettyPrinter'elif [ "$COMMAND" = "hfile" ] ; then CLASS='org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter'elif [ "$COMMAND" = "zkcli" ] ; then CLASS="org.apache.hadoop.hbase.zookeeper.ZKMainServer" for f in $HBASE_HOME/lib/zkcli/*.jar; do CLASSPATH="${CLASSPATH}:$f"; doneelif [ "$COMMAND" = "upgrade" ] ; then echo "This command was used to upgrade to HBase 0.96, it was removed in HBase 2.0.0." echo "Please follow the documentation at http://hbase.apache.org/book.html#upgrading." 
exit 1elif [ "$COMMAND" = "snapshot" ] ; then SUBCOMMAND=$1 shift if [ "$SUBCOMMAND" = "create" ] ; then CLASS="org.apache.hadoop.hbase.snapshot.CreateSnapshot" elif [ "$SUBCOMMAND" = "info" ] ; then CLASS="org.apache.hadoop.hbase.snapshot.SnapshotInfo" elif [ "$SUBCOMMAND" = "export" ] ; then CLASS="org.apache.hadoop.hbase.snapshot.ExportSnapshot" else echo "Usage: hbase [<options>] snapshot <subcommand> [<args>]" echo "$options_string" echo "" echo "Subcommands:" echo " create Create a new snapshot of a table" echo " info Tool for dumping snapshot information" echo " export Export an existing snapshot" exit 1 fielif [ "$COMMAND" = "master" ] ; then CLASS='org.apache.hadoop.hbase.master.HMaster' if [ "$1" != "stop" ] && [ "$1" != "clear" ] ; then HBASE_OPTS="$HBASE_OPTS $HBASE_MASTER_OPTS" fielif [ "$COMMAND" = "regionserver" ] ; then CLASS='org.apache.hadoop.hbase.regionserver.HRegionServer' if [ "$1" != "stop" ] ; then HBASE_OPTS="$HBASE_OPTS $HBASE_REGIONSERVER_OPTS" fielif [ "$COMMAND" = "thrift" ] ; then CLASS='org.apache.hadoop.hbase.thrift.ThriftServer' if [ "$1" != "stop" ] ; then HBASE_OPTS="$HBASE_OPTS $HBASE_THRIFT_OPTS" fielif [ "$COMMAND" = "thrift2" ] ; then CLASS='org.apache.hadoop.hbase.thrift2.ThriftServer' if [ "$1" != "stop" ] ; then HBASE_OPTS="$HBASE_OPTS $HBASE_THRIFT_OPTS" fielif [ "$COMMAND" = "rest" ] ; then CLASS='org.apache.hadoop.hbase.rest.RESTServer' if [ "$1" != "stop" ] ; then HBASE_OPTS="$HBASE_OPTS $HBASE_REST_OPTS" fielif [ "$COMMAND" = "zookeeper" ] ; then CLASS='org.apache.hadoop.hbase.zookeeper.HQuorumPeer' if [ "$1" != "stop" ] ; then HBASE_OPTS="$HBASE_OPTS $HBASE_ZOOKEEPER_OPTS" fielif [ "$COMMAND" = "clean" ] ; then case $1 in --cleanZk|--cleanHdfs|--cleanAll) matches="yes" ;; *) ;; esac if [ $# -ne 1 -o "$matches" = "" ]; then echo "Usage: hbase clean (--cleanZk|--cleanHdfs|--cleanAll)" echo "Options: " echo " --cleanZk cleans hbase related data from zookeeper." echo " --cleanHdfs cleans hbase related data from hdfs." echo " --cleanAll cleans hbase related data from both zookeeper and hdfs." exit 1; fi "$bin"/hbase-cleanup.sh --config ${HBASE_CONF_DIR} $@ exit $?elif [ "$COMMAND" = "mapredcp" ] ; then<span class="hljs-meta prompt_"> # </span><span class="language-bash"><span class="hljs-string">If we didn'</span>t find a jar above, this will just be blank and the</span><span class="hljs-meta prompt_"> # </span><span class="language-bash">check below will <span class="hljs-keyword">then</span> default back to the internal classpath.</span> shaded_jar="${shaded_mapreduce}" if [ "${INTERNAL_CLASSPATH}" != "true" ] && [ -f "${shaded_jar}" ]; then echo -n "${shaded_jar}" for f in "${HBASE_HOME}"/lib/client-facing-thirdparty/*.jar; do if [[ ! "${f}" =~ ^.*/htrace-core-3.*\.jar$ ]] && \ [ "${f}" != "htrace-core.jar$" ] && \ [[ ! 
"${f}" =~ ^.*/slf4j-log4j.*$ ]]; then echo -n ":${f}" fi done echo "" exit 0 fi CLASS='org.apache.hadoop.hbase.util.MapreduceDependencyClasspathTool'elif [ "$COMMAND" = "classpath" ] ; then echo "$CLASSPATH" exit 0elif [ "$COMMAND" = "pe" ] ; then CLASS='org.apache.hadoop.hbase.PerformanceEvaluation' HBASE_OPTS="$HBASE_OPTS $HBASE_PE_OPTS"elif [ "$COMMAND" = "ltt" ] ; then CLASS='org.apache.hadoop.hbase.util.LoadTestTool' HBASE_OPTS="$HBASE_OPTS $HBASE_LTT_OPTS"elif [ "$COMMAND" = "canary" ] ; then CLASS='org.apache.hadoop.hbase.tool.CanaryTool' HBASE_OPTS="$HBASE_OPTS $HBASE_CANARY_OPTS"elif [ "$COMMAND" = "version" ] ; then CLASS='org.apache.hadoop.hbase.util.VersionInfo'elif [ "$COMMAND" = "regionsplitter" ] ; then CLASS='org.apache.hadoop.hbase.util.RegionSplitter'elif [ "$COMMAND" = "rowcounter" ] ; then CLASS='org.apache.hadoop.hbase.mapreduce.RowCounter'elif [ "$COMMAND" = "cellcounter" ] ; then CLASS='org.apache.hadoop.hbase.mapreduce.CellCounter'elif [ "$COMMAND" = "pre-upgrade" ] ; then CLASS='org.apache.hadoop.hbase.tool.PreUpgradeValidator'elif [ "$COMMAND" = "completebulkload" ] ; then CLASS='org.apache.hadoop.hbase.tool.BulkLoadHFilesTool'elif [ "$COMMAND" = "hbtop" ] ; then CLASS='org.apache.hadoop.hbase.hbtop.HBTop' if [ -n "${shaded_jar}" ] ; then for f in "${HBASE_HOME}"/lib/hbase-hbtop*.jar; do if [ -f "${f}" ]; then CLASSPATH="${CLASSPATH}:${f}" break fi done for f in "${HBASE_HOME}"/lib/commons-lang3*.jar; do if [ -f "${f}" ]; then CLASSPATH="${CLASSPATH}:${f}" break fi done fi if [ -f "${HBASE_HOME}/conf/log4j-hbtop.properties" ] ; then HBASE_HBTOP_OPTS="${HBASE_HBTOP_OPTS} -Dlog4j.configuration=file:${HBASE_HOME}/conf/log4j-hbtop.properties" fi HBASE_OPTS="${HBASE_OPTS} ${HBASE_HBTOP_OPTS}"else CLASS=$COMMANDfi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">Have JVM dump heap <span class="hljs-keyword">if</span> we run out of memory. Files will be <span class="hljs-string">'launch directory'</span></span><span class="hljs-meta prompt_"># </span><span class="language-bash">and are named like the following: java_pid21612.hprof. Apparently it doesn<span class="hljs-string">'t</span></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">'</span>cost<span class="hljs-string">' to have this flag enabled. Its a 1.6 flag only. 
See:</span></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">http://blogs.sun.com/alanb/entry/outofmemoryerror_looks_a_bit_better</span></span>HBASE_OPTS="$HBASE_OPTS -Dhbase.log.dir=$HBASE_LOG_DIR"HBASE_OPTS="$HBASE_OPTS -Dhbase.log.file=$HBASE_LOGFILE"HBASE_OPTS="$HBASE_OPTS -Dhbase.home.dir=$HBASE_HOME"HBASE_OPTS="$HBASE_OPTS -Dhbase.id.str=$HBASE_IDENT_STRING"HBASE_OPTS="$HBASE_OPTS -Dhbase.root.logger=${HBASE_ROOT_LOGGER:-INFO,console}"if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then HBASE_OPTS="$HBASE_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH" export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$JAVA_LIBRARY_PATH"fi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">仅在 master 和 regionserver 上启用安全日志记录</span></span>if [ "$COMMAND" = "master" ] || [ "$COMMAND" = "regionserver" ]; then HBASE_OPTS="$HBASE_OPTS -Dhbase.security.logger=${HBASE_SECURITY_LOGGER:-INFO,RFAS}"else HBASE_OPTS="$HBASE_OPTS -Dhbase.security.logger=${HBASE_SECURITY_LOGGER:-INFO,NullAppender}"fiHEAP_SETTINGS="$JAVA_HEAP_MAX $JAVA_OFFHEAP_MAX"<span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">现在,如果我们正在运行命令,则意味着我们需要记录</span></span>for f in ${HBASE_HOME}/lib/client-facing-thirdparty/slf4j-log4j*.jar; do if [ -f "${f}" ]; then CLASSPATH="${CLASSPATH}:${f}" break fidone<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash"><span class="hljs-string">除非设置了 HBASE_NOEXEC,否则执行</span></span>export CLASSPATHif [ "${DEBUG}" = "true" ]; then echo "classpath=${CLASSPATH}" >&2 HBASE_OPTS="${HBASE_OPTS} -Xdiag"fiif [ "${HBASE_NOEXEC}" != "" ]; then "$JAVA" -Dproc_$COMMAND -XX:OnOutOfMemoryError="kill -9 %p" $HEAP_SETTINGS $HBASE_OPTS $CLASS "$@"else export JVM_PID="$$" exec "$JAVA" -Dproc_$COMMAND -XX:OnOutOfMemoryError="kill -9 %p" $HEAP_SETTINGS $HBASE_OPTS $CLASS "$@"fi</code></pre></div><hr><h2 id="停止"><a href="#停止" class="headerlink" title="停止"></a>停止</h2><h3 id="stop-hbase-sh"><a href="#stop-hbase-sh" class="headerlink" title="stop-hbase.sh"></a>stop-hbase.sh</h3><p>停止 hadoop hbase 守护程序,在主节点上运行以停止整个 HBase 服务。</p><div class="hljs code-wrapper"><pre><code class="hljs shell"><span class="hljs-meta prompt_"># </span><span class="language-bash">仿照 <span class="hljs-variable">$HADOOP_HOME</span>/bin/stop-hbase.sh.</span>bin=`dirname "${BASH_SOURCE-$0}"`bin=`cd "$bin">/dev/null; pwd`<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">加载环境变量和参数</span>. "$bin"/hbase-config.sh. 
"$bin"/hbase-common.sh<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">停止命令需要的一些参数</span>if [ "$HBASE_LOG_DIR" = "" ]; then export HBASE_LOG_DIR="$HBASE_HOME/logs"fimkdir -p "$HBASE_LOG_DIR"if [ "$HBASE_IDENT_STRING" = "" ]; then export HBASE_IDENT_STRING="$USER"fiexport HBASE_LOG_PREFIX=hbase-$HBASE_IDENT_STRING-master-$HOSTNAMEexport HBASE_LOGFILE=$HBASE_LOG_PREFIX.loglogout=$HBASE_LOG_DIR/$HBASE_LOG_PREFIX.out loglog="${HBASE_LOG_DIR}/${HBASE_LOGFILE}"pid=${HBASE_PID_DIR:-/tmp}/hbase-$HBASE_IDENT_STRING-master.pid<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">如果 HBase 的相关进程号文件存在,则调用 <span class="hljs-string">"<span class="hljs-variable">$HBASE_HOME</span>"</span>/bin/hbase 停止服务,并记录日志,停止后删除进程号文件,见附 4</span>if [[ -e $pid ]]; then echo -n stopping hbase echo "`date` Stopping hbase (via master)" >> $loglog nohup nice -n ${HBASE_NICENESS:-0} "$HBASE_HOME"/bin/hbase \ --config "${HBASE_CONF_DIR}" \ master stop "$@" > "$logout" 2>&1 < /dev/null & waitForProcessEnd `cat $pid` 'stop-master-command' rm -f $pidelse echo no hbase master foundfi<span class="hljs-meta prompt_"></span><span class="hljs-meta prompt_"># </span><span class="language-bash">单机模式下停止由 HBase 管理的 Zookeeper 服务,即 HQuorumPeer 进程</span>distMode=`$bin/hbase --config "$HBASE_CONF_DIR" org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed | head -n 1`if [ "$distMode" == 'true' ] then "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" stop zookeeperfi</code></pre></div><h4 id="附-4"><a href="#附-4" class="headerlink" title="附 4"></a>附 4</h4><p><code>$bin/hbase</code>接收到 master stop 参数,并经过脚本识别后调用 HMaster 类,进行停止。省略了从 HMaster 到 HMasterCommandLine 的传参过程,前文已经描述过,这里直接从 HMasterCommandLine 中的 stopMaster 方法开始分析。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">HMasterCommandLine</span> <span class="hljs-keyword">extends</span> <span class="hljs-title class_">ServerCommandLine</span> { <span class="hljs-keyword">public</span> <span class="hljs-type">int</span> <span class="hljs-title function_">run</span><span class="hljs-params">(String args[])</span> <span class="hljs-keyword">throws</span> Exception { …… <span class="hljs-keyword">if</span> (<span class="hljs-string">"start"</span>.equals(command)) { <span class="hljs-keyword">return</span> startMaster(); } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (<span class="hljs-string">"stop"</span>.equals(command)) { <span class="hljs-comment">// 匹配到 stop 的指令</span> <span class="hljs-keyword">return</span> stopMaster(); } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (<span class="hljs-string">"clear"</span>.equals(command)) { <span class="hljs-keyword">return</span> (ZNodeClearer.clear(getConf()) ? 
<span class="hljs-number">0</span> : <span class="hljs-number">1</span>); } <span class="hljs-keyword">else</span> { usage(<span class="hljs-string">"Invalid command: "</span> + command); <span class="hljs-keyword">return</span> <span class="hljs-number">1</span>; } } <span class="hljs-keyword">private</span> <span class="hljs-type">int</span> <span class="hljs-title function_">stopMaster</span><span class="hljs-params">()</span> { <span class="hljs-comment">// 获取配置文件</span> <span class="hljs-type">Configuration</span> <span class="hljs-variable">conf</span> <span class="hljs-operator">=</span> getConf(); <span class="hljs-comment">// 客户端请求失败不再重试</span> conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, <span class="hljs-number">0</span>); <span class="hljs-comment">// 此处 createConnection 方法通过反射获取一个新的 connection 实例</span> <span class="hljs-keyword">try</span> (<span class="hljs-type">Connection</span> <span class="hljs-variable">connection</span> <span class="hljs-operator">=</span> ConnectionFactory.createConnection(conf)) { <span class="hljs-comment">// 再经过 connection 获得 Admin 实例,Admin 是 HBase 用来管理的 API</span> <span class="hljs-keyword">try</span> (<span class="hljs-type">Admin</span> <span class="hljs-variable">admin</span> <span class="hljs-operator">=</span> connection.getAdmin()) { admin.shutdown(); } <span class="hljs-keyword">catch</span> (Throwable t) { LOG.error(<span class="hljs-string">"Failed to stop master"</span>, t); <span class="hljs-keyword">return</span> <span class="hljs-number">1</span>; } } <span class="hljs-keyword">catch</span> (MasterNotRunningException e) { LOG.error(<span class="hljs-string">"Master not running"</span>); <span class="hljs-keyword">return</span> <span class="hljs-number">1</span>; } <span class="hljs-keyword">catch</span> (ZooKeeperConnectionException e) { LOG.error(<span class="hljs-string">"ZooKeeper not available"</span>); <span class="hljs-keyword">return</span> <span class="hljs-number">1</span>; } <span class="hljs-keyword">catch</span> (IOException e) { LOG.error(<span class="hljs-string">"Got IOException: "</span> +e.getMessage(), e); <span class="hljs-keyword">return</span> <span class="hljs-number">1</span>; } <span class="hljs-comment">// 只有当正确停止后,返回 0</span> <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>; }}</code></pre></div><p>源码中看到 shutdown 方法和 ShutdownRequest 类等等都是报红的,这是因为 HBase 的某些类和方法是由 protobuf 之类的工具生成的。变量 master 是接口 MasterKeepAliveConnection 的实例,该接口有两个实现类:在 ConnectionImplementation 类中 getKeepAliveMasterService 方法直接返回的内部类 MasterKeepAliveConnection 以及 ShortCircuitMasterConnection。ShortCircuitMasterConnection 是与本地主机通信时可以绕过RPC层(串行化,反序列化,网络等)的短路连接类。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">HBaseAdmin</span> <span class="hljs-keyword">implements</span> <span class="hljs-title class_">Admin</span> { <span class="hljs-keyword">protected</span> MasterKeepAliveConnection master; …… <span class="hljs-keyword">public</span> <span class="hljs-keyword">synchronized</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">shutdown</span><span class="hljs-params">()</span> <span class="hljs-keyword">throws</span> IOException { executeCallable(<span class="hljs-keyword">new</span> <span class="hljs-title class_">MasterCallable</span><Void>(getConnection(), getRpcControllerFactory()) { <span class="hljs-meta">@Override</span> <span 
class="hljs-keyword">protected</span> Void <span class="hljs-title function_">rpcCall</span><span class="hljs-params">()</span> <span class="hljs-keyword">throws</span> Exception { <span class="hljs-comment">// 设置请求的优先级为高优先级</span> setPriority(HConstants.HIGH_QOS); master.shutdown(getRpcController(), ShutdownRequest.newBuilder().build()); <span class="hljs-keyword">return</span> <span class="hljs-literal">null</span>; } }); }}</code></pre></div><p>这里调用分析的是 getKeepAliveMasterService 方法返回的内部类,ShortCircuitMasterConnection 类中的 shutdown 方法也是类似的,通过 MasterProtos 最终调用至实现了 MasterService.BlockingInterface 接口的 MasterRpcServices 类。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">class</span> <span class="hljs-title class_">ConnectionImplementation</span> <span class="hljs-keyword">implements</span> <span class="hljs-title class_">ClusterConnection</span>, Closeable { …… <span class="hljs-keyword">private</span> MasterKeepAliveConnection <span class="hljs-title function_">getKeepAliveMasterService</span><span class="hljs-params">()</span> <span class="hljs-keyword">throws</span> IOException { …… <span class="hljs-comment">// Ugly delegation just so we can add in a Close method.</span> <span class="hljs-keyword">final</span> MasterProtos.MasterService.<span class="hljs-type">BlockingInterface</span> <span class="hljs-variable">stub</span> <span class="hljs-operator">=</span> <span class="hljs-built_in">this</span>.masterServiceState.stub; <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">MasterKeepAliveConnection</span>() { <span class="hljs-type">MasterServiceState</span> <span class="hljs-variable">mss</span> <span class="hljs-operator">=</span> masterServiceState; …… <span class="hljs-meta">@Override</span> <span class="hljs-keyword">public</span> MasterProtos.ShutdownResponse <span class="hljs-title function_">shutdown</span><span class="hljs-params">(RpcController controller,</span><span class="hljs-params"> MasterProtos.ShutdownRequest request)</span> <span class="hljs-keyword">throws</span> ServiceException { <span class="hljs-keyword">return</span> stub.shutdown(controller, request); } } }}</code></pre></div><p>可以看到,在 MasterRpcServices 中,通过实例化的 HMaster 对象,调用的是 shutdown 方法来进行停止。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">MasterRpcServices</span> <span class="hljs-keyword">extends</span> <span class="hljs-title class_">RSRpcServices</span> <span class="hljs-keyword">implements</span> <span class="hljs-title class_">MasterService</span>.BlockingInterface, RegionServerStatusService.BlockingInterface, LockService.BlockingInterface, HbckService.BlockingInterface { <span class="hljs-keyword">private</span> <span class="hljs-keyword">final</span> HMaster master; …… <span class="hljs-meta">@Override</span> <span class="hljs-keyword">public</span> ShutdownResponse <span class="hljs-title function_">shutdown</span><span class="hljs-params">(RpcController controller,</span><span class="hljs-params"> ShutdownRequest request)</span> <span class="hljs-keyword">throws</span> ServiceException { LOG.info(master.getClientIdAuditPrefix() + <span class="hljs-string">" shutdown"</span>); <span class="hljs-keyword">try</span> { master.shutdown(); } <span class="hljs-keyword">catch</span> (IOException e) { LOG.error(<span class="hljs-string">"Exception occurred in 
HMaster.shutdown()"</span>, e); <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-title class_">ServiceException</span>(e); } <span class="hljs-keyword">return</span> ShutdownResponse.newBuilder().build(); }}</code></pre></div><p>HMaster 会先停止所有的 HRegionServer 服务,然后再停止自身。将 ServerManager 的状态设置为关闭后,RegionServer 将注意到状态的变化,并开始自行关闭,等最后一个 RegionServer 退出后,HMaster 即可关闭。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">HMaster</span> <span class="hljs-keyword">extends</span> <span class="hljs-title class_">HRegionServer</span> <span class="hljs-keyword">implements</span> <span class="hljs-title class_">MasterServices</span> { …… <span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">shutdown</span><span class="hljs-params">()</span> <span class="hljs-keyword">throws</span> IOException { <span class="hljs-keyword">if</span> (cpHost != <span class="hljs-literal">null</span>) { cpHost.preShutdown(); } <span class="hljs-comment">// 告知 serverManager 关闭集群,serverManager 是用于管理 RegionServer 的</span> <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.serverManager != <span class="hljs-literal">null</span>) { <span class="hljs-built_in">this</span>.serverManager.shutdownCluster(); } <span class="hljs-comment">// clusterStatusTracker 是用于在 Zookeeper 中对集群设置进行追踪的,这里通过删除 znode 来达到关闭集群的目的</span> <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.clusterStatusTracker != <span class="hljs-literal">null</span>) { <span class="hljs-keyword">try</span> { <span class="hljs-built_in">this</span>.clusterStatusTracker.setClusterDown(); } <span class="hljs-keyword">catch</span> (KeeperException e) { LOG.error(<span class="hljs-string">"ZooKeeper exception trying to set cluster as down in ZK"</span>, e); } } <span class="hljs-comment">// Stop the procedure executor. 
Will stop any ongoing assign, unassign, server crash etc.,</span> <span class="hljs-comment">// processing so we can go down.</span> <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.procedureExecutor != <span class="hljs-literal">null</span>) { <span class="hljs-built_in">this</span>.procedureExecutor.stop(); } <span class="hljs-comment">// 关闭集群联机,将杀死可能正在运行的 RPC,如果不关闭连接,将不得不等待 RPC 超时</span> <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.clusterConnection != <span class="hljs-literal">null</span>) { <span class="hljs-built_in">this</span>.clusterConnection.close(); } } <span class="hljs-meta">@Override</span> <span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">stop</span><span class="hljs-params">(String msg)</span> { <span class="hljs-comment">// isStopped 方法继承自 HRegionServer,在其停止后会设置为 false</span> <span class="hljs-keyword">if</span> (!isStopped()) { <span class="hljs-comment">// 调用父类 HRegionServer 的 stop 方法挨个进行停止</span> <span class="hljs-built_in">super</span>.stop(msg); <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.activeMasterManager != <span class="hljs-literal">null</span>) { <span class="hljs-built_in">this</span>.activeMasterManager.stop(); } } }}</code></pre></div><p>接上文,在 ServerManager 中调用 shutdownCluster 方法后又回到 HMaster 中,调用其自身的 stop 方法进行停止。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">ServerManager</span> { <span class="hljs-keyword">private</span> <span class="hljs-keyword">final</span> MasterServices master; …… <span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">shutdownCluster</span><span class="hljs-params">()</span> { <span class="hljs-type">String</span> <span class="hljs-variable">statusStr</span> <span class="hljs-operator">=</span> <span class="hljs-string">"Cluster shutdown requested of master="</span> + <span class="hljs-built_in">this</span>.master.getServerName(); LOG.info(statusStr); <span class="hljs-comment">// 设置集群关闭状态</span> <span class="hljs-built_in">this</span>.clusterShutdown.set(<span class="hljs-literal">true</span>); <span class="hljs-keyword">if</span> (onlineServers.isEmpty()) { <span class="hljs-comment">// 这里没有使用同步方法可能会导致停止两次,但这没啥问题</span> master.stop(<span class="hljs-string">"OnlineServer=0 right after cluster shutdown set"</span>); } }}</code></pre></div><p>HRegionServer 在接收到子类 HMaster 的 stop 方法调用后,开始停止服务。其 run 方法在开始运行时一直处于自旋状态,将 stopped 变量改为 true 后,会运行后面部分的代码,即停止相关服务。</p><div class="hljs code-wrapper"><pre><code class="hljs java"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title class_">HRegionServer</span> <span class="hljs-keyword">extends</span> <span class="hljs-title class_">HasThread</span> <span class="hljs-keyword">implements</span> <span class="hljs-title class_">RegionServerServices</span>, LastSequenceId, ConfigurationObserver { <span class="hljs-keyword">private</span> <span class="hljs-keyword">volatile</span> <span class="hljs-type">boolean</span> <span class="hljs-variable">stopped</span> <span class="hljs-operator">=</span> <span class="hljs-literal">false</span>; …… <span class="hljs-meta">@Override</span> <span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title 
function_">stop</span><span class="hljs-params">(<span class="hljs-keyword">final</span> String msg)</span> { stop(msg, <span class="hljs-literal">false</span>, RpcServer.getRequestUser().orElse(<span class="hljs-literal">null</span>)); } <span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title function_">stop</span><span class="hljs-params">(<span class="hljs-keyword">final</span> String msg, <span class="hljs-keyword">final</span> <span class="hljs-type">boolean</span> force, <span class="hljs-keyword">final</span> User user)</span> { <span class="hljs-keyword">if</span> (!<span class="hljs-built_in">this</span>.stopped) { LOG.info(<span class="hljs-string">"***** STOPPING region server '"</span> + <span class="hljs-built_in">this</span> + <span class="hljs-string">"' *****"</span>); <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.rsHost != <span class="hljs-literal">null</span>) { <span class="hljs-comment">// when forced via abort don't allow CPs to override</span> <span class="hljs-keyword">try</span> { <span class="hljs-built_in">this</span>.rsHost.preStop(msg, user); } <span class="hljs-keyword">catch</span> (IOException ioe) { <span class="hljs-keyword">if</span> (!force) { LOG.warn(<span class="hljs-string">"The region server did not stop"</span>, ioe); <span class="hljs-keyword">return</span>; } LOG.warn(<span class="hljs-string">"Skipping coprocessor exception on preStop() due to forced shutdown"</span>, ioe); } } <span class="hljs-built_in">this</span>.stopped = <span class="hljs-literal">true</span>; LOG.info(<span class="hljs-string">"STOPPED: "</span> + msg); <span class="hljs-comment">// Wakes run() if it is sleeping</span> sleeper.skipSleepCycle(); } }}</code></pre></div><p>省略了后续相关服务停止以及 Zookeeper 清理等部分,至此,整个 HMaster 集群已经完全关闭。</p><hr><h2 id="配置文件"><a href="#配置文件" class="headerlink" title="配置文件"></a>配置文件</h2><h3 id="hbase-env-sh"><a href="#hbase-env-sh" class="headerlink" title="hbase-env.sh"></a>hbase-env.sh</h3><p>前面的一些脚本中有加载 hbase-env.sh 中的环境变量,这些变量都是给用户提供的可配置项。<br>它设置了 HBase 运行中的一些重要 JVM 参数,在对 HBase 进行调优时可能会用到。</p><p>文件格式是以<code>export 环境变量名=变量值</code>这种形式组织的</p><ul><li><p><code>JAVA_HOME</code> - JDK 路径,Java 1.8+</p></li><li><p><code>HBASE_CLASSPATH</code> - 额外的 Java CLASSPATH,可选项</p></li><li><p><code>HBASE_HEAPSIZE</code> - 使用的最大堆数量,默认为 JVM 默认值</p></li><li><p><code>HBASE_OFFHEAPSIZE</code> - 堆外内存</p></li><li><p><code>HBASE_OPTS</code> - 额外的 Java 运行时参数,默认为”-XX:+UseConcMarkSweepGC”,使用 CMS 收集器对年老代进行垃圾收集,CMS 收集器通过多线程并发进行垃圾回收,尽量减少垃圾收集造成的停顿</p></li><li><p><code>SERVER_GC_OPTS</code> - 可以为服务器端进程启用 Java 垃圾回收日志记录</p></li><li><p><code>CLIENT_GC_OPTS</code> - 为客户端进程启用Java垃圾回收日志记录</p></li><li><p>额外的运行时选项配置,包含 JMX 导出、启用主要 HBase 进程的远程 JDWP 调试等</p><ul><li> <code>HBASE_JMX_BASE</code></li><li> <code>HBASE_MASTER_OPTS</code></li><li> <code>HBASE_REGIONSERVER_OPTS</code></li><li> <code>HBASE_THRIFT_OPTS</code></li><li> <code>HBASE_ZOOKEEPER_OPTS</code></li></ul></li><li><p><code>HBASE_REGIONSERVERS</code> - RegionServer 服务运行节点</p></li><li><p><code>HBASE_REGIONSERVER_MLOCK</code> - 是否使所有区域服务器页面都映射为驻留在内存中</p></li><li><p><code>HBASE_REGIONSERVER_UID</code> - RegionServer 的用户 ID</p></li><li><p><code>HBASE_BACKUP_MASTERS</code> - 备用 Master 节点</p></li><li><p><code>HBASE_SSH_OPTS</code> - 额外的 ssh 选项</p></li><li><p><code>HBASE_LOG_DIR</code> - HBase 日志存储路径</p></li><li><p><code>HBASE_IDENT_STRING</code> - 标识 HBase 实例的字符串,默认为当前用户</p></li><li><p><code>HBASE_NICENESS</code> - 
守护进程的调度优先级</p></li><li><p><code>HBASE_PID_DIR</code> - PID 文件的存储路径,默认是 /tmp,最好换个稳定的路径</p></li><li><p><code>HBASE_SLAVE_SLEEP</code> - 在从属命令之间休眠的秒数,默认情况下未设置</p></li><li><p><code>HBASE_MANAGES_ZK</code> - 是否启动 HBase 内嵌的 Zookeeper,一般使用集群的 Zookeeper</p></li><li><p><code>HBASE_ROOT_LOGGER</code> - HBase 日志级别</p></li><li><p><code>HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP</code> - HBase 启动时是否应包含 Hadoop 的库,默认值为 false,表示包含 Hadoop 的库</p></li></ul><h3 id="hbase-site-xml"><a href="#hbase-site-xml" class="headerlink" title="hbase-site.xml"></a>hbase-site.xml</h3><p>该文件配置项较多,此处仅列举一些常见集群配置项,更多参数请移步<a href="https://hbase.apache.org/book.html#config.files">官方文档</a>。<br>文件格式是以下面这种形式组织的:</p><div class="hljs code-wrapper"><pre><code class="hljs xml"><span class="hljs-tag"><<span class="hljs-name">property</span>></span> <span class="hljs-tag"><<span class="hljs-name">name</span>></span>参数名称<span class="hljs-tag"></<span class="hljs-name">name</span>></span> <span class="hljs-tag"><<span class="hljs-name">value</span>></span>参数值<span class="hljs-tag"></<span class="hljs-name">value</span>></span><span class="hljs-tag"></<span class="hljs-name">property</span>></span></code></pre></div><ul><li><p><code>hbase.tmp.dir</code> - 本地文件系统上的临时目录,默认为 /tmp/hbase-${user.name},最好更改此路径为一个更稳定的,否则数据容易丢失。</p></li><li><p><code>hbase.rootdir</code> - RegionServers 共享目录,也是 HBase 持久化存储的目录,支持 HDFS 的存储。默认情况下写到 ${hbase.tmp.dir}/hbase 目录,所以最好更改此目录,否则机器重新启动后,所有的数据将丢失。</p></li><li><p><code>hbase.cluster.distributed</code> - 集群启动模式,单机模式为 false(默认值),集群模式为 true。如果为 false,将在同一个 JVM 中运行 HBase 以及 Zookeeper 的进程。</p></li><li><p><code>hbase.zookeeper.quorum</code> - Zookeeper 服务器列表,以逗号分隔,默认是 127.0.0.1。如果在 hbase-env.sh 中配置了<code>export HBASE_MANAGES_ZK=true</code>,那么该 Zookeeper服务将由 HBase 进行管理,作为 HBase 启动/停止的一部分,最好是部署独立的 Zookeeper 集群。</p></li><li><p><code>hbase.zookeeper.property.dataDir</code> - Zookeeper 配置文件 zoo.cfg 中的属性,也是快照存储的目录,只有在使用外置的 Zookeeper 集群服务时有效。</p></li><li><p><code>hbase.master.port</code> - Master 的内部端口号,默认是 16000。</p></li><li><p><code>hbase.master.info.port</code> - Master 的 Web UI 端口号,默认是 16010,如果不想运行 UI 实例,设置为 -1 即可。</p></li><li><p><code>hbase.regionserver.port</code> - RegionServer 的内部端口号,默认是 16020。</p></li><li><p><code>hbase.regionserver.info.port</code> - RegionServer 的 Web UI 端口号,默认是 16030,如果不想运行 UI 实例,设置为 -1 即可。</p></li><li><p><code>hbase.regionserver.handler.count</code> - 在 RegionServer 上的 RPC 监听器实例计数,Master 也使用相同的属性,太多的 handlers 可能会适得其反。将其设置为 CPU 的倍数,如果大多数情况下是只读的,那么接近 CPU 数更好,从 CPU 数的两倍开始进行调整,默认为 30。</p></li><li><p><code>hbase.regionserver.global.memstore.size</code> - 在阻止新的更新并强制刷新之前,RegionServer 中所有内存的最大值,默认为堆的 0.4。更新被阻塞并强制刷新,知道一个 RegionServer 中所有内存的大小达到 hbase.regionserver.global.memstore.size.lower.limit,配置中的默认值保留为空。</p></li><li><p><code>hbase.regionserver.global.memstore.size.lower.limit</code> - 默认是 hbase.regionserver.global.memstore.size 的 95%,配置中的默认值保留为空。</p></li><li><p><code>zookeeper.znode.parent</code> - Zookeeper 中 HBase 的根 Znode 节点,默认是 /hbase。</p></li><li><p><code>dfs.client.read.shortcircuit</code> - 设置为 true,则启用本地短路读,默认是 false。</p></li><li><p><code>hbase.column.max.version</code> - 新的列簇将使用此值作为默认的版本数,默认是 1。</p></li><li><p><code>hbase.coprocessor.master.classes</code> - 以逗号分隔的协处理器列表,在 HMaster 上加载的 MasterObserver 协处理器,指定完整的类名。</p></li><li><p><code>hbase.coprocessor.region.classes</code> - 以逗号分隔的协处理器列表,在所有的表上加载,指定完整的类名,也可通过 HTableDescriptor 或 HBase shell 按需加载。</p></li><li><p><code>hbase.coprocessor.user.region.classes</code> - 从配置中加载用户表的系统默认协处理器,用户可以继承 HBase 的 RegionCoprocessor 
实现自己需要的逻辑部分,指定完整的类名。</p></li><li><p><code>hbase.coprocessor.user.enabled</code> - 启用/禁用用户协处理器的加载,默认为 true。</p></li><li><p><code>hbase.coprocessor.enabled</code> - 启用/禁用所有协处理器的加载,默认为 true。</p></li></ul>]]></content>
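<p>作为补充,下面给出一个仅作示意的 Java 客户端片段(非官方示例),演示如何在代码中覆盖上文 hbase-site.xml 里列出的部分配置项后再建立连接;其中的 ZooKeeper 地址、znode 路径均为假设值,请按实际集群调整。</p><div class="hljs code-wrapper"><pre><code class="hljs java">import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseConnDemo {
    public static void main(String[] args) throws Exception {
        // create() 会加载 classpath 下的 hbase-site.xml,这里再以代码方式覆盖部分配置
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "zk1,zk2,zk3"); // 假设的 Zookeeper 服务器列表
        conf.set("zookeeper.znode.parent", "/hbase");      // HBase 在 Zookeeper 中的根节点
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
            System.out.println("connected: " + !connection.isClosed());
        }
    }
}</code></pre></div>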
<categories>
<category>分布式系统</category>
<category>分布式存储</category>
<category>HBase</category>
</categories>
<tags>
<tag>HBase</tag>
</tags>
</entry>
<entry>
<title>Spark 概述</title>
<link href="/2021/03/17/Spark%E6%A6%82%E8%BF%B0/"/>
<url>/2021/03/17/Spark%E6%A6%82%E8%BF%B0/</url>
<content type="html"><![CDATA[<h1 id="Spark概述"><a href="#Spark概述" class="headerlink" title="Spark概述"></a>Spark概述</h1><h2 id="基本概念"><a href="#基本概念" class="headerlink" title="基本概念"></a>基本概念</h2><h3 id="RDD"><a href="#RDD" class="headerlink" title="RDD"></a>RDD</h3><p>弹性分布式数据集 ( Resilient Distrbuted Dataset),本质是一种分布式的内存抽象,表示一个只读的数据分区(Partition)集合。</p><h3 id="DAG"><a href="#DAG" class="headerlink" title="DAG"></a>DAG</h3><p>有向无环图(Directed Acycle graph),Spark 使用 DAG 来反映各 RDD 间的依赖或血缘关系。</p><h3 id="Partition"><a href="#Partition" class="headerlink" title="Partition"></a>Partition</h3><p>数据分区,即一个 RDD 的数据可以划分为多少个分区,Spark 根据 Partition 的数量来确定 Task 的数量。</p><h3 id="NarrowDependency"><a href="#NarrowDependency" class="headerlink" title="NarrowDependency"></a>NarrowDependency</h3><p>窄依赖,即子 RDD 依赖于父 RDD 中固定的 Partition。分为 OneToOneDependency 和 RangeDependency 两种。</p><h3 id="ShuffleDependency"><a href="#ShuffleDependency" class="headerlink" title="ShuffleDependency"></a>ShuffleDependency</h3><p> 宽依赖,即子 RDD 对父 RDD 中的所有 Partition 都可能产生依赖。</p><h3 id="Job"><a href="#Job" class="headerlink" title="Job"></a>Job</h3><p>用户提交的作业。当 RDD 及 DAG 被提交给 DAGScheduler 后,DAGScheduler 会将所有 RDD 中的转换及动作视为一个 Job(由一到多个 Task 组成)。</p><h3 id="Stage"><a href="#Stage" class="headerlink" title="Stage"></a>Stage</h3><p>Job 的执行阶段。DAGScheduler 按照 ShuffleDependency 作为 Stage 的划分节点对 RDD 的 DAG 进行 Stage 划分。一个 Job 可能被分为一到多个 Stage,主要为 ShuffleMapStage 和 ResultStage 两种。</p><h3 id="Task"><a href="#Task" class="headerlink" title="Task"></a>Task</h3><p>具体执行任务。一个 Job 在每个 Stage 内都会按照 RDD 的 Partition 数量,创建多个 Task。Task 分为 ShuffleMapTask(ShuffleMapStage) 和 ResultTask(ResultStage)两种,对应 Hadoop 中的 Map 任务和 Reduce 任务。</p><h3 id="Shuffle"><a href="#Shuffle" class="headerlink" title="Shuffle"></a>Shuffle</h3><p>所有 MapReduce 计算框架的核心执行阶段,用于打通 Map 任务的输出和 Reduce 任务的输入,Map 任务的中间输出结果按照指定的分区策略(例如按照 key 值哈希)分配给处理某一分区的 Reduce 任务。</p><h2 id="基本架构"><a href="#基本架构" class="headerlink" title="基本架构"></a>基本架构</h2><p>从集群部署的角度来看,Spark 由集群管理器(Cluster Manager)、工作节点(Worker)、执行器(Executor)、驱动器(Driver)、应用程序(Application)等部分组成,如下图所示。</p><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/20250120115743349.png" alt="Spark基本架构"></p><h3 id="Cluster-Manager"><a href="#Cluster-Manager" class="headerlink" title="Cluster Manager"></a>Cluster Manager</h3><p>Spark 的集群管理器,主要负责对整个集群资源的分配与管理。在 YARN 模式下为 ResourceManager;在 Mesos 模式下为 MesosMaster;在 Standlone 模式下为 Master。</p><p>分配的资源属于一级分配,将各个 Worker 上的内存、CPU 等资源分配给 Application,但是并不负责对 Executor 的资源分配。</p><h3 id="Worker"><a href="#Worker" class="headerlink" title="Worker"></a>Worker</h3><p>Spark 的工作节点,YARN 下由 NodeManager 替代。主要负责申请资源并创建 Executor,同时给其分配资源。在 Standalone 模式下,Master 将 Worker 上的内存、CPU 及 Executor 等资源分配给 Application 后,将命令 Worker 启动 CoarseGrainedExecutorBackend 进程(创建 Executor 实例)。</p><h3 id="Executor"><a href="#Executor" class="headerlink" title="Executor"></a>Executor</h3><p>执行计算任务的一线组件,主要负责任务的执行及与 Worker、Driver 的信息同步。</p><h3 id="Driver"><a href="#Driver" class="headerlink" title="Driver"></a>Driver</h3><p>Application 的驱动程序,接受用户的 SQL 请求并进行解析。Driver 可以运行在 Application 中,也可以由 Application 提交给 Cluster Manager 并由其安排 Worker 运行。</p><h3 id="Application"><a href="#Application" class="headerlink" title="Application"></a>Application</h3><p>表示用户的应用程序,通过 Spark API 进行 RDD 的转换和 DAG 的构建,并通过 Driver 将 Application 注册到 Cluster Manager。</p><h2 id="模块划分"><a href="#模块划分" class="headerlink" title="模块划分"></a>模块划分</h2><p>整个 Spark 主要由 Spark Core、Spark SQL、Spark Streaming、Grapx、MLlib 组成,其核心引擎部分是 Spark Core,Spark SQL 部分则支持了 SQL 及 Hive,本文也着重分析的这两部分。</p><hr><h3 
id="Spark-Core"><a href="#Spark-Core" class="headerlink" title="Spark Core"></a>Spark Core</h3><h4 id="基础设施"><a href="#基础设施" class="headerlink" title="基础设施"></a>基础设施</h4><p>包括 Spark 的配置(SparkConf)、Spark 内置的 RPC 框架(早期使用的是 Akka)、事件总线(ListenerBus)、度量系统。</p><ol><li>SparkConf 管理 Spark 应用程序的各种配置信息。</li><li>RPC 框架使用 Netty 实现,有同步和异步之分。</li><li>事件总线是 SparkContext 内部各个组件间使用事件 - 监听器模式异步调用的实现。</li><li>度量系统由 Spark 中的多种度量源(Source)和多种度量输出(Sink)构成,完成对整个 Spark 集群中各个组件运行期状态的监控。</li></ol><h4 id="SparkContext"><a href="#SparkContext" class="headerlink" title="SparkContext"></a>SparkContext</h4><p>在正式提交应用程序之前,首先需要初始化 SparkContext。其隐藏了网络通信、分布式部署、消息通信、存储体系、计算引擎、度量系统、文件服务、WebUI 等内容。</p><h4 id="SparkEnv"><a href="#SparkEnv" class="headerlink" title="SparkEnv"></a>SparkEnv</h4><p>Spark 的执行环境,是 Spark 中 Task 运行所必需的组件。内部封装了 RPC 环境(RpcEnv)、序列化管理器、广播管理器(BroadcastManager)、Map 任务输出跟踪器(MapOutputTracker)、存储体系、度量系统(MetricSystem)、输出提交协调器(OutputCommitCoordinator)等 Task 运行所需的各种组件。</p><h4 id="存储体系"><a href="#存储体系" class="headerlink" title="存储体系"></a>存储体系</h4><p>Spark 优先考虑使用各节点的内存作为存储,当内存不足时才会考虑使用磁盘,极大地减少了磁盘 I/O。Spark 的内存空间还提供了 Tungsten 的实现,直接操作操作系统的内存。</p><h4 id="调度系统"><a href="#调度系统" class="headerlink" title="调度系统"></a>调度系统</h4><p>主要由 DAGScheduler 和 TaskScheduler 组成,都内置在 SparkContext 中。DAGSCheduler 负责创建 Job、将 DAG 中的 RDD 划分到不同的 Stage、给 Stage 创建对应的 Task、批量提交 Task 等功能。TaskScheduler 负责按照 FIFO 或者 FAIR 等调度算法对批量 Task 进行调度、给 Task 分配资源;将 Task 发送到 Executor 上由其执行。</p><h4 id="计算引擎"><a href="#计算引擎" class="headerlink" title="计算引擎"></a>计算引擎</h4><p>由内存管理器(MemoryManager)、Tungsten、任务内存管理器(TaskMemoryManager)、Task、外部排序器(ExternalSorter)、Shuffle 管理器(ShuffleManager)等组成。</p><hr><h3 id="Spark-SQL"><a href="#Spark-SQL" class="headerlink" title="Spark SQL"></a>Spark SQL</h3><h4 id="编译器-Parser"><a href="#编译器-Parser" class="headerlink" title="编译器 Parser"></a>编译器 Parser</h4><p>Spark SQL 采用 <strong>ANTLR4</strong> 作为 SQL 语法工具。它有两种遍历模式:监听器模式(Listener)和访问者模式(Visitor),Spark 主要采用的是后者,基于 ANTLR4 文件来生成词法分析器(SqlBaseLexer)、语法分析器(SqlBaseParser)和访问者类(SqlBaseVisitor 接口与 SqlBaseBaseVisitor 类)。</p><p>当面临开发新的语法支持时,首先改动 SqlBase.g4 文件,然后在 AstBuilder 等类中添加相应的访问逻辑,最后添加执行逻辑即可。</p><h4 id="逻辑计划"><a href="#逻辑计划" class="headerlink" title="逻辑计划"></a>逻辑计划</h4><p>在此阶段,SQL 语句转换为树结构形态的逻辑算子树,SQL 中包含的各种处理逻辑(过滤、裁剪等)和数据信息都会被整合在逻辑算子树的不同节点中。在实现层面被定义为 LogicalPlan 类。</p><p>从 SQL 语句经过 SparkSqlParser 解析生成 Unresolved LogicalPlan,到最后优化成为 Optimized LogicalPlan,再传递到下一个阶段用于物理执行计划的生成。</p><h4 id="物理计划"><a href="#物理计划" class="headerlink" title="物理计划"></a>物理计划</h4><p>这是 Spark SQL 整个查询过程处理流程的最后一步,与底层平台紧密相关。Spark SQL 会对生成的逻辑算子树进一步处理得到物理算子树,并将 LogicalPlan 节点及其所包含的各种信息映射成 Spark Core 计算模型的元素,如 RDD、Transformation 和 Action 等,其实现类为 SparkPlan。</p><hr><h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><ul><li><a href="http://spark.apache.org/docs/latest/">Spark官方文档</a></li><li><a href="https://github.com/apache/spark">Spark源码</a></li><li><a href="https://book.douban.com/subject/30157181/">Spark内核设计的艺术</a></li><li><a href="https://book.douban.com/subject/30296615/">Spark SQL内核剖析</a></li></ul>]]></content>
<categories>
<category>分布式系统</category>
<category>分布式计算</category>
<category>Spark</category>
</categories>
<tags>
<tag>Spark</tag>
</tags>
</entry>
<entry>
<title>分布式系统</title>
<link href="/2020/10/30/%E5%88%86%E5%B8%83%E5%BC%8F%E7%B3%BB%E7%BB%9F/"/>
<url>/2020/10/30/%E5%88%86%E5%B8%83%E5%BC%8F%E7%B3%BB%E7%BB%9F/</url>
<content type="html"><![CDATA[<h1 id="分布式系统"><a href="#分布式系统" class="headerlink" title="分布式系统"></a>分布式系统</h1><p>在介绍分布式系统之前,先说一说与之对应的集中式系统。</p><p>集中式系统有一个大型的中央处理系统,往往是一台高性能的计算机,所有数据的运算和处理都在中央计算节点上完成,然后由多个终端进行访问连接,终端只用来输入输出,不具备处理能力。像我们日常生活中见到的银行自动提款机 ATM 就是使用的集中式系统。</p><p>这类系统最大的特点就是部署简单,但是由于采用单机部署,系统会很复杂,容易发生单点故障,从而导致整个系统服务崩溃,扩展性比较差。</p><p>当然,分布式系统也有问题,系统中的各个部分彼此分开放置,这本身就带来了极大的困难,远程进程间的通信链路可能既慢又不可靠,分布式领域中的大多数研究都与“没有什么是完全可靠的”这一事实有关。</p><h2 id="背景"><a href="#背景" class="headerlink" title="背景"></a>背景</h2><p>维基百科的定义:分布式系统是一组电脑,透过网络相互连接传递消息与通信后并协调它们的行为而形成的系统。组件之间彼此进行交互以实现一个共同的目标。把需要进行大量计算的工程数据分割成小块,由多台计算机分别计算,再上传运算结果后,将结果统一合并得出数据结论的科学。</p><p>Google 以三驾马车开启了大数据领域的先河,此处简单罗列这三篇论文:</p><ol><li><p>2003 年公布的第一篇论文,这是一个可扩展的分布式文件系统,用于大型的、分布式的、对大量数据进行访问的应用。</p><div class="row"> <embed src="./gfs.pdf" width="100%" height="550" type="application/pdf"></div></li><li><p>2004 年发布的 MapReduce 基本上可以代表大数据处理思想的出现了,其核心是将任务拆解然后在多台廉价的计算机节点上进行运算,最后再将结果合并。</p><div class="row"> <embed src="./mapreduce.pdf" width="100%" height="550" type="application/pdf"></div></li><li><p>2006 年发布的 BigTable 启发了无数的 NoSQL 数据库,最典型的比如:Cassandra、HBase等等。</p><div class="row"> <embed src="./bigtable.pdf" width="100%" height="550" type="application/pdf"></div></li></ol><h2 id="概述"><a href="#概述" class="headerlink" title="概述"></a>概述</h2><p>那么,为什么人们要创建一个分布式系统呢?</p><ul><li>通过并行增加计算能力</li><li>通过复制增加容错</li><li>将计算物理上靠近外部实体,通过某种通信方式克服距离</li><li>通过隔离实现安全,解决通信协议的安全、孤立问题</li></ul><p>分布式系统虽好,但同时其复杂程度也是呈几何倍上升的,在创建和设计一个分布式系统的过程中就不可避免地产生许多问题,比如网络异常、节点故障、负载均衡、资源调度、数据的一致性、分布式事务等等。</p><p>在解决这些问题的过程中,逐渐产生了一些理论,譬如 CAP、BASE 等等,当然还有很多理论,此处暂不讨论。</p><h2 id="CAP-理论"><a href="#CAP-理论" class="headerlink" title="CAP 理论"></a>CAP 理论</h2><p>2000 年 7 月,加州大学伯克利分校的 Eric Brewer 教授在 ACM PODC 会议上提出 CAP 猜想。2 年后,麻省理工学院的 Seth Gilbert 和 Nancy Lynch 从理论上证明了 CAP。之后,CAP 理论正式成为分布式计算领域的公认定理,明确指出任何分布式系统最多可以具有以下三个属性中的两个:</p><ul><li>C:Consistency</li><li>A:Availability</li><li>P:Partition tolerance</li></ul><h3 id="一致性"><a href="#一致性" class="headerlink" title="一致性"></a>一致性</h3><p>即在写操作之后的读操作,必须返回该值。</p><p>举例说明:某条记录是 G1=a,同时这条记录有一个备份 G2=a,用户向 G1 发起写请求,将 a 改为 b,然后用户再向 G1 发起读请求得到 b,这就满足了一致性,但是用户也可能向 G2 发起读请求得到的却还是 a,这就不满足一致性。</p><p>解决方案:为了在 G2 进行读操作的时候与 G1 得到相同的结果,就要在 G1 进行写操作时,让 G1 向 G2 也发送一条信息,要求 G2 将 a 改为 b,这样用户向 G2 发起读请求时也能得到 b。</p><p>此处的一致性要区别于数据库 ACID 中的一致性,这里的一致性是指数据副本的一致性,而事务的一致性则指数据从一个状态变为另一个状态的整体是一致的。比如银行转账,甲(5元)转账给乙(10元)5 元,那么甲就一定要变为 0 元,乙一定就变为 15 元,而不是甲还有 5 元这种整体的一致性。</p><h3 id="可用性"><a href="#可用性" class="headerlink" title="可用性"></a>可用性</h3><p>即系统中非故障节点收到用户的请求后,都必须做出响应。要求系统在一个或多个节点出现故障或不可用的情况任然能够处理请求。</p><h3 id="分区容错性"><a href="#分区容错性" class="headerlink" title="分区容错性"></a>分区容错性</h3><p>从一个节点发送到另一个节点的消息允许丢失。</p><p>大多数分布式系统都存在多个子网络分区,这些分区间的通信有可能会失败,一般来说,分区容错是无法避免的,比如一台服务器在北京,一台服务器在上海,它们之间可能因为很多问题导致通信失败,所以基本可以认为分布式系统中 P 一定是成立的。</p><h3 id="冲突点"><a href="#冲突点" class="headerlink" title="冲突点"></a>冲突点</h3><p>在网络分区的情况下,我们无法实现一个同时保证可用性和一致性的分布式系统。</p><p>举例来说,如果要保证一致性,我们在向 G1 发起写操作时,需要锁定 G2 的读写操作,直到数据同步后再开放,但在锁定期间,G2 是不可用的,违背了可用性;如果要保证 G2 的可用性,那就不能锁定 G2,无法保证一致性。</p><p>因此,我们只能在提供尽力而为的可用性的同时保证强一致性,或者在提供尽力而为一致性的同时保证可用性,于是就有了下面两种系统:</p><ol><li>一致性和分区容忍系统 CP<br>CP 系统更倾向于拒绝请求,而不是提供可能不一致的数据</li><li>可用性和分区容忍系统 AP<br>AP系统则放松了一致性的要求,允许再请求期间提供可能不一致的值</li></ol><h2 id="BASE-理论"><a href="#BASE-理论" class="headerlink" title="BASE 理论"></a>BASE 理论</h2><h3 id="概述-1"><a href="#概述-1" class="headerlink" title="概述"></a>概述</h3><p>eBay 的架构师 Dan Pritchett 源于对大规模分布式系统的实践总结,在 ACM 上发表文章提出 BASE 理论,BASE 理论是对 CAP 理论的延伸,核心思想是即使无法做到强一致性(Strong Consistency,CAP 的一致性就是强一致性),但应用可以采用适合的方式达到最终一致性(Eventual Consitency)。四个字母是 Basically 
Available(基本可用)、Soft state(软状态)和 Eventually consistent(最终一致性)的简写。</p><h3 id="基本可用"><a href="#基本可用" class="headerlink" title="基本可用"></a>基本可用</h3><p>分布式系统在出现故障的时候,允许损失部分可用性,即保证核心可用。</p><p>举例来说,假设系统出现不可预知的故障:</p><ol><li>响应时间上的损失:正常情况搜索引擎 0.5s 返回结果,而基本可用的搜索引擎在 2s 内返回结果</li><li>功能上的损失:正常情况电商平台用户可以顺利下订单,但是促销时,为了保护购物的稳定性,部分消费者可能会被引导到一个降级页面。</li></ol><h3 id="软状态"><a href="#软状态" class="headerlink" title="软状态"></a>软状态</h3><p>允许系统存在中间状态,而该中间状态不会影响系统整体可用性,即允许系统在多个不同节点的数据副本存在数据延时。数据的三个副本和 MySQL Replication 的异步复制都是一种体现。</p><h3 id="最终一致性"><a href="#最终一致性" class="headerlink" title="最终一致性"></a>最终一致性</h3><p>最终一致性是指系统中的所有数据副本经过一定时间后,最终能够达到一致的状态。在实际工程实践中,有五种情况。</p><h4 id="因果一致性(Causal-consistency)"><a href="#因果一致性(Causal-consistency)" class="headerlink" title="因果一致性(Causal consistency)"></a>因果一致性(Causal consistency)</h4><p>如果进程 A 在更新完某个数据项后通知了进程 B,那么进程 B 之后对该数据项的访问都应该能够获取到进程 A 更新后的最新值,并且如果进程 B 要对该数据项进行更新操作的话,务必基于进程 A 更新后的最新值,即不能发生丢失更新情况。与此同时,与进程 A 无因果关系的进程 C 的数据访问则没有这样的限制。</p><h4 id="读己之所写(Read-your-writes)"><a href="#读己之所写(Read-your-writes)" class="headerlink" title="读己之所写(Read your writes)"></a>读己之所写(Read your writes)</h4><p>读己之所写是指,进程 A 更新一个数据项之后,他自己总是能够访问到更新过的最新值,而不会看到旧值。</p><h4 id="会话一致性(Session-consistency)"><a href="#会话一致性(Session-consistency)" class="headerlink" title="会话一致性(Session consistency)"></a>会话一致性(Session consistency)</h4><p>将对系统数据的访问过程框定在了一个会话当中:系统能保证在同一个有效地会话中实现“读己之所写”的一致性。</p><h4 id="单调读一致性(Monotonic-read-consistency)"><a href="#单调读一致性(Monotonic-read-consistency)" class="headerlink" title="单调读一致性(Monotonic read consistency)"></a>单调读一致性(Monotonic read consistency)</h4><p>如果一个进程从系统中读取出一个数据项的某个值后,那么系统对于该进程后续的任何数据访问都不应该返回更旧的值。</p><h4 id="单调写一致性(Monotonic-write-consistency)"><a href="#单调写一致性(Monotonic-write-consistency)" class="headerlink" title="单调写一致性(Monotonic write consistency)"></a>单调写一致性(Monotonic write consistency)</h4><p>一个系统需要能够保证来自同一个进程的写操作被顺序的执行。</p>]]></content>
<categories>
<category>分布式系统</category>
<category>概述</category>
</categories>
<tags>
<tag>分布式系统</tag>
</tags>
</entry>
<entry>
<title>HBase 概述</title>
<link href="/2020/10/30/HBase%E6%A6%82%E8%BF%B0/"/>
<url>/2020/10/30/HBase%E6%A6%82%E8%BF%B0/</url>
<content type="html"><--><p>行存更适合结构化数据,传统的关系型数据库基本都是行式存储,这样无论是在事务的支持还是在多表关联的场景下都能很好地发挥作用。而列存则比较适合非结构化或半结构化数据,只需要进行特定的查询,比如上表中行存有三个字段,而此时只需要查出 name 这列的数据,因此只返回一列的查询无疑效率是最高的,在数据量很大的情况下可以减少 IO,同时由于每列的数据类型都是一样的,我们还可以针对不同的数据类型进行压缩的优化,在查询时降低带宽的消耗。</p><p>HBase 在设计上有一个列簇的概念,那么当一个列簇下有多个列时,可以说此时 HBase 在逻辑存储上是行存的;若是一个列簇一个列,则可以说是列存的,但其在物理存储上都是 KV 结构,因此 HBase 其实是一种支持自动负载均衡的分布式 KV 数据库。</p><h2 id="数据模型"><a href="#数据模型" class="headerlink" title="数据模型"></a>数据模型</h2><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/DataModel-1737346423338.png"></p><p>对应的物理存储模型</p><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/PhysicalDataModel-1737346429742.png"></p><h2 id="架构图"><a href="#架构图" class="headerlink" title="架构图"></a>架构图</h2><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/jg-1737346426371.png"></p><h2 id="概念"><a href="#概念" class="headerlink" title="概念"></a>概念</h2><ul><li><p>HMaster</p></li><li><p>HRegionServer</p></li><li><p>HLog</p></li><li><p>BlockCache</p></li><li><p>Region</p></li><li><p>Store</p></li><li><p>MemStore</p></li><li><p>HFile</p></li></ul>]]></content>
<categories>
<category>分布式系统</category>
<category>分布式存储</category>
<category>HBase</category>
</categories>
<tags>
<tag>HBase</tag>
</tags>
</entry>
<entry>
<title>Flink 源码编译</title>
<link href="/2020/10/30/Flink%E6%BA%90%E7%A0%81%E7%BC%96%E8%AF%91/"/>
<url>/2020/10/30/Flink%E6%BA%90%E7%A0%81%E7%BC%96%E8%AF%91/</url>
<content type="html"><![CDATA[<h1 id="Flink-源码编译"><a href="#Flink-源码编译" class="headerlink" title="Flink 源码编译"></a>Flink 源码编译</h1><h2 id="环境"><a href="#环境" class="headerlink" title="环境"></a>环境</h2><p>仅记录编译遇到的两个问题</p><h3 id="版本"><a href="#版本" class="headerlink" title="版本"></a>版本</h3><p>flink-tag-1.11.2<br>jdk-1.8.0_251<br>scala-2.12.11<br>apache-maven-3.5.4</p><h3 id="编译"><a href="#编译" class="headerlink" title="编译"></a>编译</h3><div class="hljs code-wrapper"><pre><code class="hljs apache"><span class="hljs-attribute">git</span> clone [email protected]:apache/flink.git<span class="hljs-attribute">cd</span> flink<span class="hljs-attribute">git</span> checkout -b xxx release-<span class="hljs-number">1</span>.<span class="hljs-number">11</span>.<span class="hljs-number">2</span>-rc1<span class="hljs-attribute">mvn</span> clean package -DskipTests -e</code></pre></div><h3 id="问题"><a href="#问题" class="headerlink" title="问题"></a>问题</h3><ol><li><p>依赖包下载失败,需要重新获取</p><p> <img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/1-1737346049883.jpg"></p> <div class="note note-danger"> <p>[ERROR] Failed to execute goal on project flink-azure-fs-hadoop: Could not resolve dependencies for project org.apache.flink:flink-azure-fs-hadoop:jar:1.11.2: Failure to find io.reactivex:rxjava:jar:1.3.8 in <a href="http://192.168.0.139:8081/repository/maven-public/">http://192.168.0.139:8081/repository/maven-public/</a> was cached in the local repository, resolution will not be reattempted until the update interval of nexus_public has elapsed or updates are forced -> [Help 1]<br>org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal on project flink-azure-fs-hadoop: Could not resolve dependencies for project org.apache.flink:flink-azure-fs-hadoop:jar:1.11.2: Failure to find io.reactivex:rxjava:jar:1.3.8 in <a href="http://192.168.0.139:8081/repository/maven-public/">http://192.168.0.139:8081/repository/maven-public/</a> was cached in the local repository, resolution will not be reattempted until the update interval of nexus_public has elapsed or updates are forced<br>```</p> </div> <div class="note note-warning"> <p>flink-azure-fs-hadoop 模块的依赖 jar 包 io.reactivex:rxjava:jar:1.3.8 未下载完全,因为本地有缓存,编译时不会重新拉取,手动删除如下图的目录,重新编译下载即可。</p> </div><p> <img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/2-1737346055475.jpg"></p></li><li><p>库中缺少依赖包,需手动下载</p><p> <img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/3-1737346061603.jpg"></p> <div class="note note-danger"> <p>[ERROR] Failed to execute goal on project flink-avro-confluent-registry: Could not resolve dependencies for project org.apache.flink:flink-avro-confluent-registry:jar:1.11.2: Could not find artifact io.confluent:kafka-schema-registry-client:jar:4.1.0 in nexus_public (<a href="http://192.168.0.139:8081/repository/maven-public/">http://192.168.0.139:8081/repository/maven-public/</a>) -> [Help 1]<br>org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal on project flink-avro-confluent-registry: Could not resolve dependencies for project org.apache.flink:flink-avro-confluent-registry:jar:1.11.2: Could not find artifact io.confluent:kafka-schema-registry-client:jar:4.1.0 in nexus_public (<a href="http://192.168.0.139:8081/repository/maven-public/">http://192.168.0.139:8081/repository/maven-public/</a>)</p> </div></li></ol><p>下载 <a class="btn" href="http://packages.confluent.io/maven/io/confluent/kafka-schema-registry-client/4.1.0" title="kafka-schema-registry-client" target="_blank">Jar</a> 
后直接通过 IDEA 导入,再重新编译即可</p><p><img src="https://cdn.jsdelivr.net/gh/gleonSun/images@main/image/4-1737346065444.jpg"></p>]]></content>
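<p>针对第二个问题,除了截图中通过 IDEA 导入,也可以考虑用 Maven 把手动下载的 jar 安装到本地仓库后再重新编译(以下命令仅为示意,-Dfile 的路径需替换为实际下载位置):</p><div class="hljs code-wrapper"><pre><code class="hljs bash"># 将手动下载的 jar 安装到本地 Maven 仓库
mvn install:install-file \
  -DgroupId=io.confluent \
  -DartifactId=kafka-schema-registry-client \
  -Dversion=4.1.0 \
  -Dpackaging=jar \
  -Dfile=/path/to/kafka-schema-registry-client-4.1.0.jar

# 然后重新编译
mvn clean package -DskipTests -e</code></pre></div>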
<categories>
<category>编译</category>
</categories>
<tags>
<tag>Flink</tag>
</tags>
</entry>
</search>