Glue Workflow Lab
-----------------
Step 1:
--------
Create an S3 bucket with two folders: an input folder for the raw CSV files and an output folder for the Parquet files the Glue job writes. A sketch is shown below.
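A minimal sketch of this step with boto3; the bucket name glue-workflow-demo and the folder names are placeholders, not part of the lab (csvstorer matches the table the Step 5 job reads):

import boto3

s3 = boto3.client("s3")

# Create the bucket (regions other than us-east-1 need a LocationConstraint).
s3.create_bucket(Bucket="glue-workflow-demo")

# S3 has no real folders; zero-byte keys ending in "/" make the
# prefixes show up as folders in the console.
for prefix in ("csvstorer/", "output/"):
    s3.put_object(Bucket="glue-workflow-demo", Key=prefix)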
Step 2:
--------
Create an IAM role that the Glue service can assume, with the AWSGlueServiceRole managed policy plus read/write access to the bucket from Step 1. A sketch is shown below.
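A hedged sketch of the role creation with boto3; the role name GlueWorkflowLabRole is a placeholder:

import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the Glue service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "glue.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="GlueWorkflowLabRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Baseline Glue permissions; attach an S3 policy for your bucket as well.
iam.attach_role_policy(
    RoleName="GlueWorkflowLabRole",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole",
)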
Step 3:
--------
Create a Glue database; the job script in Step 5 expects it to be named workflowdemoyt.
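The same step via the Glue API (a sketch; the console works just as well):

import boto3

glue = boto3.client("glue")

# Must match the database name the crawlers and the Step 5 job reference.
glue.create_database(DatabaseInput={"Name": "workflowdemoyt"})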
Step 4:
--------
Create two Glue crawlers: one over the input folder, which produces the csvstorer table the job reads, and one over the output folder so the Parquet results are queryable from Athena. A sketch of the input crawler is shown below.
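A sketch of the input crawler with boto3 (names reuse the placeholders above; the output crawler is identical except for the path):

import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="input-crawler",
    Role="GlueWorkflowLabRole",    # role from Step 2
    DatabaseName="workflowdemoyt", # database from Step 3
    # The crawler names the table after the folder, yielding csvstorer.
    Targets={"S3Targets": [{"Path": "s3://glue-workflow-demo/csvstorer/"}]},
)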
Step 5:
--------
Create a Glue job with job bookmarks enabled, so each run processes only files that arrived since the previous run:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read the CSV data via the catalog table the input crawler created.
# transformation_ctx lets job bookmarks track which files were processed.
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "workflowdemoyt", table_name = "csvstorer", transformation_ctx = "datasource0")

# Write the data back to S3 as Parquet; fill in your bucket and output folder.
datasink4 = glueContext.write_dynamic_frame.from_options(frame = datasource0, connection_type = "s3",
    connection_options = {"path": "s3://{}/{}/"}, format = "parquet", transformation_ctx = "datasink4")

job.commit()
Step 6:
--------
Create the Glue workflow chaining the pieces: an on-demand start trigger, the input crawler, the job, and the output crawler. One possible wiring is sketched below.
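A sketch of the workflow with boto3; names reuse the placeholders above, and csv-to-parquet stands in for whatever the Step 5 job is called:

import boto3

glue = boto3.client("glue")

glue.create_workflow(Name="glue-workflow-lab")

# On-demand trigger starts the input crawler.
glue.create_trigger(
    Name="start", WorkflowName="glue-workflow-lab", Type="ON_DEMAND",
    Actions=[{"CrawlerName": "input-crawler"}],
)

# When the input crawler succeeds, run the job.
glue.create_trigger(
    Name="run-job", WorkflowName="glue-workflow-lab", Type="CONDITIONAL",
    StartOnCreation=True,
    Predicate={"Conditions": [{"LogicalOperator": "EQUALS",
                               "CrawlerName": "input-crawler",
                               "CrawlState": "SUCCEEDED"}]},
    Actions=[{"JobName": "csv-to-parquet"}],
)

# When the job succeeds, crawl the output so Athena sees the Parquet table.
glue.create_trigger(
    Name="crawl-output", WorkflowName="glue-workflow-lab", Type="CONDITIONAL",
    StartOnCreation=True,
    Predicate={"Conditions": [{"LogicalOperator": "EQUALS",
                               "JobName": "csv-to-parquet",
                               "State": "SUCCEEDED"}]},
    Actions=[{"CrawlerName": "output-crawler"}],
)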
Step 7:
--------
Download the first batch of data from Snowflake and upload the resulting CSV to the input folder:
select * from books where publishyear = 2002 and publishmonth = 23;
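A sketch of the export using the snowflake-connector-python package; all connection parameters and file names are placeholders:

import csv

import boto3
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="your_wh", database="your_db", schema="your_schema",
)

with conn.cursor() as cur:
    cur.execute("select * from books where publishyear = 2002 and publishmonth = 23")
    with open("books_2002.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cur.description])  # header row
        writer.writerows(cur.fetchall())
conn.close()

# Upload so the next workflow run picks the file up.
boto3.client("s3").upload_file("books_2002.csv", "glue-workflow-demo", "csvstorer/books_2002.csv")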
Step 8:
--------
Trigger the Glue workflow; the first run converts the 2002 batch. A sketch of starting it from code is shown below.
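Starting the run from code instead of the console (a sketch):

import boto3

glue = boto3.client("glue")

# Fires the on-demand trigger; crawlers and the job then chain automatically.
run = glue.start_workflow_run(Name="glue-workflow-lab")
print(run["RunId"])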
Step 9:
--------
Query the Parquet output with Athena and confirm the 2002 rows are there. A sketch is shown below.
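A sketch of the Athena check with boto3; the table name "output" and the results location are placeholders (the output crawler names the table after its folder):

import boto3

athena = boto3.client("athena")

resp = athena.start_query_execution(
    QueryString="SELECT count(*) FROM output",
    QueryExecutionContext={"Database": "workflowdemoyt"},
    ResultConfiguration={"OutputLocation": "s3://glue-workflow-demo/athena-results/"},
)
print(resp["QueryExecutionId"])  # fetch rows with get_query_results once it finishes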
Step 10:
----------
Download the second batch of data from Snowflake and upload it to the same input folder (same export steps as in Step 7):
select * from books where publishyear = 2001 and publishmonth = 1;
Step 11:
---------
Trigger the Glue workflow again; because job bookmarks are enabled, only the newly uploaded file is processed.
Step 12:
---------
Query with Athena again and confirm rows from both batches are present.