The data build tool (dbt) is an effective data transformation tool and it supports key AWS analytics services - Redshift, Glue, EMR and Athena. In the previous posts, we discussed the benefits of a common data transformation tool and the potential of dbt to cover a wide range of data projects from data warehousing to data lake to data lakehouse. Demo data projects targeting Redshift Serverless and Glue were illustrated as well. In part 3 of the dbt on AWS series, we discuss data transformation pipelines using dbt on Amazon EMR. Subsets of IMDb data are used as the source and data models are developed in multiple layers according to the dbt best practices. A list of the posts in this series can be found below.
Below shows an overview diagram of the scope of this dbt on AWS series. EMR is highlighted as it is discussed in this post.
Infrastructure
The infrastructure hosting this solution leverages an Amazon EMR cluster and an S3 bucket. We also need a VPN server so that a developer can connect to the EMR cluster in a private subnet. It is extended from a previous post and the resources covered there (VPC, subnets, auto scaling group for the VPN etc.) are not repeated. All resources are deployed using Terraform and the source can be found in the GitHub repository of this post.
EMR Cluster
The EMR 6.7.0 release is deployed with a single master node and a single core node instance. It is configured to use the AWS Glue Data Catalog as the metastore for Hive and Spark SQL, which is done by adding the corresponding configuration classifications. Also a managed scaling policy is created so that up to 4 additional task instances are added to the cluster. Note an additional security group is attached to the master and core instance groups for VPN access - the details of that security group are shown below.
# dbt-on-aws/emr-ec2/infra/emr.tf
resource "aws_emr_cluster" "emr_cluster" {
  name                              = "${local.name}-emr-cluster"
  release_label                     = local.emr.release_label # emr-6.7.0
  service_role                      = aws_iam_role.emr_service_role.arn
  autoscaling_role                  = aws_iam_role.emr_autoscaling_role.arn
  applications                      = local.emr.applications # ["Spark", "Livy", "JupyterEnterpriseGateway", "Hive"]
  ebs_root_volume_size              = local.emr.ebs_root_volume_size
  log_uri                           = "s3n://${aws_s3_bucket.default_bucket[0].id}/elasticmapreduce/"
  step_concurrency_level            = 256
  keep_job_flow_alive_when_no_steps = true
  termination_protection            = false

  ec2_attributes {
    key_name                          = aws_key_pair.emr_key_pair.key_name
    instance_profile                  = aws_iam_instance_profile.emr_ec2_instance_profile.arn
    subnet_id                         = element(tolist(module.vpc.private_subnets), 0)
    emr_managed_master_security_group = aws_security_group.emr_master.id
    emr_managed_slave_security_group  = aws_security_group.emr_slave.id
    service_access_security_group     = aws_security_group.emr_service_access.id
    additional_master_security_groups = aws_security_group.emr_vpn_access.id # grant access to VPN server
    additional_slave_security_groups  = aws_security_group.emr_vpn_access.id # grant access to VPN server
  }

  master_instance_group {
    instance_type  = local.emr.instance_type  # m5.xlarge
    instance_count = local.emr.instance_count # 1
  }

  core_instance_group {
    instance_type  = local.emr.instance_type  # m5.xlarge
    instance_count = local.emr.instance_count # 1
  }

  configurations_json = <<EOF
[
  {
    "Classification": "hive-site",
    "Properties": {
      "hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
    }
  },
  {
    "Classification": "spark-hive-site",
    "Properties": {
      "hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
    }
  }
]
EOF

  tags = local.tags

  depends_on = [module.vpc]
}

resource "aws_emr_managed_scaling_policy" "emr_scaling_policy" {
  cluster_id = aws_emr_cluster.emr_cluster.id

  compute_limits {
    unit_type              = "Instances"
    minimum_capacity_units = 1
    maximum_capacity_units = 5
  }
}
The following security group is created to enable access from the VPN server to the EMR instances. Note that the security group itself is always created, but the inbound rule is added only when the local.vpn.to_create variable is true - if the value is false, the security group has no inbound rule.
# dbt-on-aws/emr-ec2/infra/emr.tf
resource "aws_security_group" "emr_vpn_access" {
  name   = "${local.name}-emr-vpn-access"
  vpc_id = module.vpc.vpc_id

  lifecycle {
    create_before_destroy = true
  }

  tags = local.tags
}

resource "aws_security_group_rule" "emr_vpn_inbound" {
  count                    = local.vpn.to_create ? 1 : 0
  type                     = "ingress"
  description              = "VPN access"
  security_group_id        = aws_security_group.emr_vpn_access.id
  protocol                 = "tcp"
  from_port                = 0
  to_port                  = 65535
  source_security_group_id = aws_security_group.vpn[0].id
}
As in the previous post, we connect to the EMR cluster via SoftEther VPN. Instead of providing VPN-related secrets as Terraform variables, they are created internally and stored in AWS Secrets Manager. The details can be found in dbt-on-aws/emr-ec2/infra/secrets.tf, and the secret string can be retrieved as shown below.
$ aws secretsmanager get-secret-value --secret-id emr-ec2-all-secrets --query "SecretString" --output text
{
  "vpn_pre_shared_key": "<vpn-pre-shared-key>",
  "vpn_admin_password": "<vpn-admin-password>"
}
The previous post demonstrates in detail how to create a VPN user and establish a connection. An example of a successful connection is shown below.
Glue Databases
We have two Glue databases. The source tables and the tables of the staging and intermediate layers are kept in the imdb database. The tables of the marts layer are stored in the imdb_analytics database.
# glue databases
resource "aws_glue_catalog_database" "imdb_db" {
  name         = "imdb"
  location_uri = "s3://${local.default_bucket.name}/imdb"
  description  = "Database that contains IMDb staging/intermediate model datasets"
}

resource "aws_glue_catalog_database" "imdb_db_marts" {
  name         = "imdb_analytics"
  location_uri = "s3://${local.default_bucket.name}/imdb_analytics"
  description  = "Database that contains IMDb marts model datasets"
}
Project
We build a data transformation pipeline using subsets of IMDb data - seven title- and name-related datasets are provided as gzipped, tab-separated values (TSV) files. The project ends up creating three tables that can be used for reporting and analysis.
Save Data to S3
The Axel download accelerator is used to download the data files locally, followed by decompression with the gzip utility. Note that simple retry logic is added as I see download failures from time to time. Finally, the decompressed files are saved into the project S3 bucket using the S3 sync command.
# dbt-on-aws/emr-ec2/upload-data.sh
#!/usr/bin/env bash

s3_bucket=$(terraform -chdir=./infra output --raw default_bucket_name)
hostname="datasets.imdbws.com"
declare -a file_names=(
  "name.basics.tsv.gz" \
  "title.akas.tsv.gz" \
  "title.basics.tsv.gz" \
  "title.crew.tsv.gz" \
  "title.episode.tsv.gz" \
  "title.principals.tsv.gz" \
  "title.ratings.tsv.gz"
)

rm -rf imdb-data

for fn in "${file_names[@]}"
do
  download_url="https://$hostname/$fn"
  prefix=$(echo ${fn::-7} | tr '.' '_')
  echo "download imdb-data/$prefix/$fn from $download_url"
  while true;
  do
    mkdir -p imdb-data/$prefix
    axel -n 32 -a -o imdb-data/$prefix/$fn $download_url
    gzip -d imdb-data/$prefix/$fn
    num_files=$(ls imdb-data/$prefix | wc -l)
    if [ $num_files == 1 ]; then
      break
    fi
    rm -rf imdb-data/$prefix
  done
done

aws s3 sync ./imdb-data s3://$s3_bucket
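As an aside, the prefix that each file is saved under - which later doubles as the table name - is derived from the file name by dropping the 7-character .tsv.gz suffix and replacing dots with underscores. The derivation can be checked in isolation as a minimal sketch:

```shell
# derive the local/S3 prefix from an IMDb file name
# "${fn::-7}" drops the trailing ".tsv.gz" (7 characters),
# tr then replaces the remaining dots with underscores
fn="title.basics.tsv.gz"
prefix=$(echo "${fn::-7}" | tr '.' '_')
echo "$prefix" # title_basics
```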
Start Thrift JDBC/ODBC Server
The connection from dbt to the EMR cluster is made via the Thrift JDBC/ODBC server, which can be started by adding an EMR step as shown below.
$ cd emr-ec2
$ CLUSTER_ID=$(terraform -chdir=./infra output --raw emr_cluster_id)
$ aws emr add-steps \
  --cluster-id $CLUSTER_ID \
  --steps Type=CUSTOM_JAR,Name="spark thrift server",ActionOnFailure=CONTINUE,Jar=command-runner.jar,Args=[sudo,/usr/lib/spark/sbin/start-thriftserver.sh]
We can quickly check whether the Thrift server has started using the beeline JDBC client. The port is 10001 and, as the connection is made in non-secure mode, we can simply enter the default username and a blank password. When we query the databases, we see the Glue databases that were created earlier.
$ STACK_NAME=emr-ec2
$ EMR_CLUSTER_MASTER_DNS=$(terraform -chdir=./infra output --raw emr_cluster_master_dns)
$ ssh -i infra/key-pair/$STACK_NAME-emr-key.pem hadoop@$EMR_CLUSTER_MASTER_DNS
Last login: Thu Oct 13 08:59:51 2022 from ip-10-0-32-240.ap-southeast-2.compute.internal

...

[hadoop@ip-10-0-113-195 ~]$ beeline
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hive/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/tez/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hive/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/tez/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Beeline version 3.1.3-amzn-0 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10001
Connecting to jdbc:hive2://localhost:10001
Enter username for jdbc:hive2://localhost:10001: hadoop
Enter password for jdbc:hive2://localhost:10001:
Connected to: Spark SQL (version 3.2.1-amzn-0)
Driver: Hive JDBC (version 3.1.3-amzn-0)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10001> show databases;
+-------------------------------+
|           namespace           |
+-------------------------------+
| default                       |
| imdb                          |
| imdb_analytics                |
+-------------------------------+
3 rows selected (0.311 seconds)
Setup dbt Project
We use the dbt-spark adapter to work with the EMR cluster. As the connection is made via the Thrift JDBC/ODBC server, it is necessary to install the adapter with the PyHive package. I use Ubuntu 20.04 in WSL 2, where the libsasl2-dev apt package has to be installed first, as it is required by one of PyHive's dependencies (pure-sasl). After installing it, we can install the dbt packages as usual.
$ sudo apt-get install libsasl2-dev
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install --upgrade pip
$ pip install dbt-core "dbt-spark[PyHive]"
We can initialise a dbt project with the dbt init command. We are required to specify project details - project name, host, connection method, port, schema and the number of threads. Note dbt writes the project profile to .dbt/profiles.yml in the user home directory by default.
$ dbt init
21:00:16 Running with dbt=1.2.2
Enter a name for your project (letters, digits, underscore): emr_ec2
Which database would you like to use?
[1] spark

(Don't see the one you want? https://docs.getdbt.com/docs/available-adapters)

Enter a number: 1
host (yourorg.sparkhost.com): <hostname-or-ip-address-of-master-instance>
[1] odbc
[2] http
[3] thrift
Desired authentication method option (enter a number): 3
port [443]: 10001
schema (default schema that dbt will build objects in): imdb
threads (1 or more) [1]: 3
21:50:28 Profile emr_ec2 written to /home/<username>/.dbt/profiles.yml using target's profile_template.yml and your supplied values. Run 'dbt debug' to validate the connection.
21:50:28
Your new dbt project "emr_ec2" was created!

For more information on how to configure the profiles.yml file,
please consult the dbt documentation here:

  https://docs.getdbt.com/docs/configure-your-profile

One more thing:

Need help? Don't hesitate to reach out to us via GitHub issues or on Slack:

  https://community.getdbt.com/

Happy modeling!
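The resulting profile should look roughly like the following - a sketch reconstructed from the prompt answers above, not the exact generated file:

```yaml
# ~/.dbt/profiles.yml (sketch)
emr_ec2:
  target: dev
  outputs:
    dev:
      type: spark
      method: thrift
      host: <hostname-or-ip-address-of-master-instance>
      port: 10001
      schema: imdb
      threads: 3
```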
dbt initialises a project in a folder that matches the project name and generates project boilerplate as shown below. Some of the main objects are dbt_project.yml and the models folder. The former is required because dbt doesn't know that a folder is a dbt project without it, and it contains information that tells dbt how to operate on the project. The latter is for the dbt models, which are basically sets of SQL select statements. See the dbt documentation for more details.
$ tree emr-ec2/emr_ec2/ -L 1
emr-ec2/emr_ec2/
├── README.md
├── analyses
├── dbt_packages
├── dbt_project.yml
├── logs
├── macros
├── models
├── packages.yml
├── seeds
├── snapshots
├── target
└── tests
We can check connection to the EMR cluster with the dbt debug command as shown below.
$ dbt debug
21:51:38 Running with dbt=1.2.2
dbt version: 1.2.2
python version: 3.8.10
python path: <path-to-python-path>
os info: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.29
Using profiles.yml file at /home/<username>/.dbt/profiles.yml
Using dbt_project.yml file at <path-to-dbt-project>/dbt_project.yml

Configuration:
  profiles.yml file [OK found and valid]
  dbt_project.yml file [OK found and valid]

Required dependencies:
 - git [OK found]

Connection:
  host: 10.0.113.195
  port: 10001
  cluster: None
  endpoint: None
  schema: imdb
  organization: 0
  Connection test: [OK connection ok]

All checks passed!
After initialisation, the model configuration is updated. The project materialisation is specified as view, although it is the default materialisation. Also, tags are added to the entire model folder as well as folders of specific layers - staging, intermediate and marts. As shown below, tags can simplify model execution.
# emr-ec2/emr_ec2/dbt_project.yml
name: "emr_ec2"

...

models:
  emr_ec2:
    +materialized: view
    +tags:
      - "imdb"
    staging:
      +tags:
        - "staging"
    intermediate:
      +tags:
        - "intermediate"
    marts:
      +tags:
        - "marts"
While we created source tables using Glue crawlers in part 2, they are created directly from S3 by the dbt_external_tables package in this post. Also, the dbt_utils package is installed for adding tests to the final marts models. They can be installed by the dbt deps command.
# emr-ec2/emr_ec2/packages.yml
packages:
  - package: dbt-labs/dbt_external_tables
    version: 0.8.2
  - package: dbt-labs/dbt_utils
    version: 0.9.2
Create dbt Models
The models for this post are organised into three layers according to the dbt best practices - staging, intermediate and marts.
External Source
The seven tables that are loaded from S3 are dbt source tables and their details are declared in a YAML file (_imdb__sources.yml). The macros of the dbt_external_tables package parse the properties of each table and execute SQL to create each of them. By doing so, we are able to refer to the source tables with the {{ source() }} function. Also we can add tests to source tables. For example, two tests (unique, not_null) are added to the tconst column of the title_basics table below, and these tests can be executed by the dbt test command.
# emr-ec2/emr_ec2/models/staging/imdb/_imdb__sources.yml
version: 2

sources:
  - name: imdb
    description: Subsets of IMDb data, which are available for access to customers for personal and non-commercial use
    tables:
      - name: title_basics
        description: Table that contains basic information of titles
        external:
          location: "s3://<s3-bucket-name>/title_basics/"
          row_format: >
            serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
            with serdeproperties (
              'separatorChar'='\t'
            )
          table_properties: "('skip.header.line.count'='1')"
        columns:
          - name: tconst
            data_type: string
            description: alphanumeric unique identifier of the title
            tests:
              - unique
              - not_null
          - name: titletype
            data_type: string
            description: the type/format of the title (e.g. movie, short, tvseries, tvepisode, video, etc)
          - name: primarytitle
            data_type: string
            description: the more popular title / the title used by the filmmakers on promotional materials at the point of release
          - name: originaltitle
            data_type: string
            description: original title, in the original language
          - name: isadult
            data_type: string
            description: flag that indicates whether it is an adult title or not
          - name: startyear
            data_type: string
            description: represents the release year of a title. In the case of TV Series, it is the series start year
          - name: endyear
            data_type: string
            description: TV Series end year. NULL for all other title types
          - name: runtimeminutes
            data_type: string
            description: primary runtime of the title, in minutes
          - name: genres
            data_type: string
            description: includes up to three genres associated with the title
The source tables can be created by dbt run-operation stage_external_sources. Note that the following SQL is executed for the title_basics table under the hood.
create table imdb.title_basics (
  tconst string,
  titletype string,
  primarytitle string,
  originaltitle string,
  isadult string,
  startyear string,
  endyear string,
  runtimeminutes string,
  genres string
)
row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde' with serdeproperties (
  'separatorChar'='\t'
)
location 's3://<s3-bucket-name>/title_basics/'
tblproperties ('skip.header.line.count'='1')
Interestingly, the header rows of the source tables are not skipped when they are queried by Spark, while they are skipped by Athena. As Spark is the query engine here, the header rows have to be filtered out in the staging models of the dbt project.
Staging
Based on the source tables, staging models are created. They are created as views, which is the project's default materialisation. The SQL statements mainly rename columns and change data types.
-- emr-ec2/emr_ec2/models/staging/imdb/stg_imdb__title_basics.sql
with source as (

    select * from {{ source('imdb', 'title_basics') }}

),

renamed as (

    select
        tconst as title_id,
        titletype as title_type,
        primarytitle as primary_title,
        originaltitle as original_title,
        cast(isadult as boolean) as is_adult,
        cast(startyear as int) as start_year,
        cast(endyear as int) as end_year,
        cast(runtimeminutes as int) as runtime_minutes,
        case when genres = 'N' then null else genres end as genres
    from source
    where tconst <> 'tconst'

)

select * from renamed
Below shows the file tree of the staging models. The staging models can be executed using the dbt run command. As we've added tags to the staging layer models, we can limit execution to this layer with dbt run --select staging.
$ tree emr-ec2/emr_ec2/models/staging/
emr-ec2/emr_ec2/models/staging/
└── imdb
    ├── _imdb__models.yml
    ├── _imdb__sources.yml
    ├── stg_imdb__name_basics.sql
    ├── stg_imdb__title_akas.sql
    ├── stg_imdb__title_basics.sql
    ├── stg_imdb__title_crews.sql
    ├── stg_imdb__title_episodes.sql
    ├── stg_imdb__title_principals.sql
    └── stg_imdb__title_ratings.sql
Note that the model materialisation of the staging and intermediate models is view and the dbt project creates VIRTUAL_VIEW tables. Although we are able to reference those tables in other models, they cannot be queried by Athena.
$ aws glue get-tables --database imdb \
  --query "TableList[?Name=='stg_imdb__title_basics'].[Name, TableType, StorageDescriptor.Columns]" --output yaml
- - stg_imdb__title_basics
  - VIRTUAL_VIEW
  - - Name: title_id
      Type: string
    - Name: title_type
      Type: string
    - Name: primary_title
      Type: string
    - Name: original_title
      Type: string
    - Name: is_adult
      Type: boolean
    - Name: start_year
      Type: int
    - Name: end_year
      Type: int
    - Name: runtime_minutes
      Type: int
    - Name: genres
      Type: string
Instead we can query the tables with Spark SQL.
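For example, a query along the following lines can be run in a beeline session on the master node - a sketch using the column names defined in the staging model, not output captured from the actual cluster:

```sql
-- query the staging view via the Thrift server (e.g. in beeline)
select title_id, title_type, primary_title, start_year, genres
from imdb.stg_imdb__title_basics
limit 5;
```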
Intermediate
We can keep intermediate results in this layer so that the models of the final marts layer can be simplified. The source data includes columns where array values are kept as comma-separated strings. For example, the genres column of the stg_imdb__title_basics model includes up to three genre values. In total, seven columns in three models hold comma-separated strings, and it is better to flatten them in the intermediate layer. Also, in order to avoid repetition, a dbt macro (flatten_fields) is created to share the column-flattening logic.
-- emr-ec2/emr_ec2/macros/flatten_fields.sql
{% macro flatten_fields(model, field_name, id_field_name) %}
    select
        {{ id_field_name }} as id,
        explode(split({{ field_name }}, ',')) as field
    from {{ model }}
{% endmacro %}
The macro function can be added inside a common table expression (CTE) by specifying the relevant model, field name to flatten and id field name.
-- emr-ec2/emr_ec2/models/intermediate/title/int_genres_flattened_from_title_basics.sql
with flattened as (
    {{ flatten_fields(ref('stg_imdb__title_basics'), 'genres', 'title_id') }}
)

select
    id as title_id,
    field as genre
from flattened
order by id
The intermediate models are also materialised as views and we can check the array columns are flattened as expected.
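For instance, a quick sanity check on the flattened genres could look like the following - a hypothetical query; the actual counts depend on the IMDb snapshot downloaded:

```sql
-- count titles per genre after flattening the comma-separated values
select genre, count(*) as num_titles
from imdb.int_genres_flattened_from_title_basics
group by genre
order by num_titles desc
limit 10;
```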
Below shows the file tree of the intermediate models. Similar to the staging models, the intermediate models can be executed by dbt run --select intermediate.
$ tree emr-ec2/emr_ec2/models/intermediate/ emr-ec2/emr_ec2/macros/
emr-ec2/emr_ec2/models/intermediate/
├── name
│   ├── _int_name__models.yml
│   ├── int_known_for_titles_flattened_from_name_basics.sql
│   └── int_primary_profession_flattened_from_name_basics.sql
└── title
    ├── _int_title__models.yml
    ├── int_directors_flattened_from_title_crews.sql
    ├── int_genres_flattened_from_title_basics.sql
    └── int_writers_flattened_from_title_crews.sql

emr-ec2/emr_ec2/macros/
└── flatten_fields.sql
Marts
The models in the marts layer are configured to be materialised as tables in a custom schema. Their materialisation is set to table and the custom schema is specified as analytics, while parquet is taken as the file format. Note that the custom schema name becomes imdb_analytics according to the naming convention of dbt custom schemas (target schema followed by the custom schema). Models of both the staging and intermediate layers are used to create the final models to be used for reporting and analytics.
-- emr-ec2/emr_ec2/models/marts/analytics/titles.sql
{{
    config(
        schema='analytics',
        materialized='table',
        file_format='parquet'
    )
}}

with titles as (

    select * from {{ ref('stg_imdb__title_basics') }}

),

principals as (

    select
        title_id,
        count(name_id) as num_principals
    from {{ ref('stg_imdb__title_principals') }}
    group by title_id

),

names as (

    select
        title_id,
        count(name_id) as num_names
    from {{ ref('int_known_for_titles_flattened_from_name_basics') }}
    group by title_id

),

ratings as (

    select
        title_id,
        average_rating,
        num_votes
    from {{ ref('stg_imdb__title_ratings') }}

),

episodes as (

    select
        parent_title_id,
        count(title_id) as num_episodes
    from {{ ref('stg_imdb__title_episodes') }}
    group by parent_title_id

),

distributions as (

    select
        title_id,
        count(title) as num_distributions
    from {{ ref('stg_imdb__title_akas') }}
    group by title_id

),

final as (

    select
        t.title_id,
        t.title_type,
        t.primary_title,
        t.original_title,
        t.is_adult,
        t.start_year,
        t.end_year,
        t.runtime_minutes,
        t.genres,
        p.num_principals,
        n.num_names,
        r.average_rating,
        r.num_votes,
        e.num_episodes,
        d.num_distributions
    from titles as t
    left join principals as p on t.title_id = p.title_id
    left join names as n on t.title_id = n.title_id
    left join ratings as r on t.title_id = r.title_id
    left join episodes as e on t.title_id = e.parent_title_id
    left join distributions as d on t.title_id = d.title_id

)

select * from final
The details of the three models can be found in a YAML file (_analytics__models.yml). We can add tests to models, and below we see tests that compare the row counts of the marts models with those of their corresponding staging models.
# emr-ec2/emr_ec2/models/marts/analytics/_analytics__models.yml
version: 2

models:
  - name: names
    description: Table that contains all names with additional details
    tests:
      - dbt_utils.equal_rowcount:
          compare_model: ref('stg_imdb__name_basics')
  - name: titles
    description: Table that contains all titles with additional details
    tests:
      - dbt_utils.equal_rowcount:
          compare_model: ref('stg_imdb__title_basics')
  - name: genre_titles
    description: Table that contains basic title details after flattening genres
The models of the marts layer can be tested using the dbt test command as shown below.
$ dbt test --select marts
19:29:31 Running with dbt=1.2.2
19:29:31 Found 15 models, 17 tests, 0 snapshots, 0 analyses, 533 macros, 0 operations, 0 seed files, 7 sources, 0 exposures, 0 metrics
19:29:31
19:29:41 Concurrency: 3 threads (target='dev')
19:29:41
19:29:41 1 of 2 START test dbt_utils_equal_rowcount_names_ref_stg_imdb__name_basics_ .... [RUN]
19:29:41 2 of 2 START test dbt_utils_equal_rowcount_titles_ref_stg_imdb__title_basics_ .. [RUN]
19:29:54 1 of 2 PASS dbt_utils_equal_rowcount_names_ref_stg_imdb__name_basics_ .......... [PASS in 13.11s]
19:29:56 2 of 2 PASS dbt_utils_equal_rowcount_titles_ref_stg_imdb__title_basics_ ........ [PASS in 15.14s]
19:29:56
19:29:56 Finished running 2 tests in 0 hours 0 minutes and 25.54 seconds (25.54s).
19:29:56
19:29:56 Completed successfully
19:29:56
19:29:56 Done. PASS=2 WARN=0 ERROR=0 SKIP=0 TOTAL=2
Below shows the file tree of the marts models. As with the other layers, the marts models can be executed by dbt run --select marts.
$ tree emr-ec2/emr_ec2/models/marts/
emr-ec2/emr_ec2/models/marts/
└── analytics
    ├── _analytics__models.yml
    ├── genre_titles.sql
    ├── names.sql
    └── titles.sql
Build Dashboard
The models of the marts layer can be consumed by external tools such as Amazon QuickSight. Below shows an example dashboard. The pie chart on the left shows the proportion of titles by genre while the box plot on the right shows the dispersion of average rating by title type.
Generate dbt Documentation
A nice feature of dbt is documentation. It provides information about the project and the data warehouse, and it helps consumers as well as other developers to discover and understand the datasets better. We can generate the project documents and start a document server as shown below.
$ dbt docs generate
$ dbt docs serve
A very useful element of dbt documentation is data lineage, which provides an overall view of how data is transformed and consumed. Below we can see that the final titles model consumes all title-related staging models and an intermediate model derived from the name basics staging model.
Summary
In this post, we discussed how to build data transformation pipelines using dbt on Amazon EMR. Subsets of IMDb data are used as the source and data models are developed in multiple layers according to the dbt best practices. dbt can be used as an effective tool for data transformation in a wide range of data projects from data warehousing to data lake to data lakehouse, and it supports key AWS analytics services - Redshift, Glue, EMR and Athena. More examples of using dbt will be discussed in subsequent posts.