Updated the Autonomous Database 15 Minute Quick Start workshop (#309)
* Update use-delta-sharing.md

* Update use-delta-sharing.md

* Update use-delta-sharing.md

* livelabs folder

* Update adw-dcat-integrate.md

* updates

* updates to Introduction lab

* Update introduction.md

* LiveLabs workshop version and other freetier changes

* Update introduction.md

* Update introduction.md

* livelabs and freetier changes

* Update load-analyze-json.md

* Update load-analyze-rest.md

* more changes

* more updates

* more updates

* Update adw-dcat-integrate.md

* changed 'shared' to 'serverless' as the deployment type

* Update load-local-data.md

* update shared to serverless

* update adb-dcat workshop

* added new lab 12

* Update manifest.json

* added new folder with an .md file in it

Please work!

* more testing

* Delete introduction.md

* new workshop and labs wip

* update labs

* more updates

* more updates

* more updates

* updates

* more updates

* updates before review cycle

* Update endpoint.png

* Update setup-workshop-environment.md

* Update setup-workshop-environment.md

* more updates

* final update before review

* updates

* replacement code

* Update create-share-recipients.md

* Update create-share-recipients.md

* Update create-share-recipients.md

* Update create-share-recipients.md

* Update create-share-recipients.md

* Update create-share-recipients.md

* updates

* Update manifest.json

* folder rename

* added content to data studio folder

* Delete user-bucket-credential-diagram.png

* updates self-qa

* Update introduction.md

* remove extra text files

* Update introduction.md

* Update setup-workshop-environment.md

* Data Studio Workshop Changes

* changes to data studio workshop

* Update setup-workshop-environment.md

* adb changes

* Update recipient-diagram.png

* diagram change

* Update user-bucket-credential-diagram.png

* SME feedback

* Update create-share.md

* Nilay changes

* changes

* Update consume-share.md

* Anoosha's feedback

* Update consume-share.md

* updated 2 screens and a sentence

* minor changes

* deleted extra images and added doc references

* new ECPU changes

* more changes to data sharing workshops

* more changes to fork (data studio)

* more changes

* Marty's feedback

* Marty's feedback to plsql workshop too

* Update setup-workshop-environment.md

* Delete 7381.png

* workshop # 3 ADB set up

and a couple of minor typos in workshops 1 and 2

* changes to adb-dcat workshop

* more changes

* minor typos in all 4 workshops

* quarterly qa build data lake

* new lab 11 in build DL with ADW

* minor changes database actions drop-down list

* final changes to build data lake workshop

* AI updates

AI workshop updates

* ai workshop updates

* Update query-using-select-ai.md

* Update query-using-select-ai.md

* updates

* more updates

* Update query-using-select-ai.md

* more new updates to ai workshop

* Update query-using-select-ai.md

* a new screen capture

* push Marty's feedback to fork

Final changes.

* updates sandbox manifest

* updates

* restored sandbox manifest

* Update setup-environment.md

* updates after CloudWorld

* final updates to ai workshop (also new labs 4 and 5)

* marty's feedback

* incorporated feedback

* minor PR edits by Sarah

* removed steps 7 & 8 Lab 2 > Task 3 per Alexey

The customer asked to remove this as it's not a requirement for the bucket to be public.

* more changes

* more changes per Alexey

* Update load-os-data-public.md

* Quarterly QA

I added a new step per the PM's request in the Data Sharing PL/SQL workshop. I also made a minor edit (removed space) in the Data Lake workshop.

* more updates

* Quarterly QA changes

* Update consume-share.md

* minor edit based on workshop user

* quarterly qa November 2023

* Added new videos to the workshop

Replaced 3 old silent videos with new ones. Added two new videos.

* Adding important notes to the two data sharing workshops

Per the PM's request.

* folder structure only push to production

This push and the PR later is to make sure the folder structure is in the production repo before I start development. Only 1 .md file and the workshops folder.

* typos

* cloud links workshop

* UPDATES

* Update query-view.png

* update

* minor updates to chat ai workshop (Fork)

* test clones

* test pr

* Alexey's feedback

* Update data-sharing-diagram.png

* sarah's edits

* changes to Data Load UI

* removed script causing ML issue

* Update load-local-data.md

* updates: deprecated procedure and new code

* updates and test

* more updates

* minor update

* testing using a building block in a workshop

* updates

* building blocks debugging

* Update manifest.json

* fixing issues

* Update manifest.json

* delete cleanup.md from workshop folder (use common file)

* use common cleanup.md instead of local cleanup.md

* test common tasks

* update data sharing data studio workshop

* Update create-recipient.png

* PM's 1 feedback

* quarterly qa

* missing "Lab 2" from Manifest

* always free note addition

added a note

* always free change

* Update setup-environment.md

* update manage and monitor workshop

* Folder structure for new data share workshop (plus introduction.md)

* Updated Load and Analyze from clone

* Data Lake minor changes from clone

* manage and monitor workshop

* Remove the lab from the workshop per Marty's request

* mark-hornick-feedback

* used marty's setup file

* replaced notebook with a new one

* updates to lab 6 of manage and monitor

* Update adb-auto-scaling.md

* Nilay's feedback

* Update adb-auto-scaling.md

* updates to second ai workshop

* note change

* Changes to Load and Analyze workshop (other minor changes too)

* quarterly qa

* Update diagrams per Alexey (remove delta share icon)

* updated the 15-minutes workshop

* Update analyzing-movie-sales-data.md

---------

Co-authored-by: Michelle Malcher <[email protected]>
Co-authored-by: Sarah Hirschfeld <[email protected]>
3 people authored Apr 22, 2024
1 parent c7c9bd8 commit 33c6de2
Showing 14 changed files with 75 additions and 74 deletions.
shared/adb-15-minutes/analyzing-movie-sales-data/analyzing-movie-sales-data.md

@@ -1,35 +1,38 @@
-# Analyze movie sales data
+# Analyze the Movie Sales Data

 ## Introduction
-In this lab, use Oracle analytic SQL to gain a better understanding of customers.
+In this lab, you use Oracle analytic SQL to gain a better understanding of the customers.

 Estimated Time: 5 minutes

 ### Objectives

-- Understand how to use SQL Worksheet
-
-- Bin customers by recency, frequency and monetary metrics
+- Understand how to use the SQL Worksheet
+- Group customers by recency, frequency and monetary metrics

 ### Prerequisites
-- This lab requires completion of Labs 1 and 2 in the Contents menu on the left.
+
+- This lab requires the completion of **Labs 1** and **Lab 2** in the **Contents** menu on the left.

 ## Task 1: Navigate to the SQL Worksheet
-1. From the Data Load tool, navigate to the SQL Worksheet by clicking the top-left hamburger menu, navigate to **Development**, and then select **SQL**.
-
-   ![Go to SQL worksheet](images/goto-sql.png " ")
-
-2. Learn more about the SQL Worksheet features using the help tips or simply click the **X** on the top left of a tip to dismiss it. You will run queries by entering your commands in the worksheet. You can use the shortcuts [Control-Enter] or [Command-Enter] to run the command and view the Query Result (tabular format). Clear your worksheet by clicking the trash:
-
-   ![Go to SQL worksheet](images/sql-worksheet.png " ")
-
-You are now ready to start analyzing MovieStream's performance using SQL.
-
-## Task 2: Explore sales data
-
-1. Let's use a very simple query to look at sales by customer education level.
+1. From the Data Load tool, navigate to the SQL Worksheet. Click the **Selector** menu. In the **Development** section, click **SQL**.
+
+   ![Go to SQL worksheet](images/goto-sql.png =60%x*)
+
+   The SQL Worksheet is displayed.
+
+   ![The SQL worksheet is displayed](images/sql-worksheet.png " ")
+
+2. Learn more about the SQL Worksheet features using the tooltips when you hover over an icon. You will run queries by entering your commands in the worksheet. Click the **Run Statement** icon to run the command and view the output in the **Query Result** tab in tabular format. Click the **Run Script** icon to run a script and display the output in the **Script Output** tab in text format. You can clear your worksheet by clicking the trash icon.
+
+   ![Go to SQL worksheet](images/run-query.png " ")
+
+You are now ready to start analyzing MovieStream's performance using SQL.
+
+## Task 2: Explore the Sales Data
+
+1. Let's use a very simple query to look at sales by customer education level. Copy the following code and paste it in your SQL Worksheet, and then click the **Run Statement** icon in the toolbar.

 ```
 <copy>select
@@ -43,36 +46,33 @@ You are now ready to start analyzing MovieStream's performance using SQL.
 </copy>
 ```
-Copy this SQL statement into the worksheet, and press the **Run command** button to start the query. This returns a result similar to the following:
+This returns a result similar to the following:

 ![sales by education level](images/sales-by-education-level.png " ")

 As you can see, most sales are to customers with a high school education, and the fewest sales are to customers with a masters degree. It is interesting that MovieStream has more sales to customers with a doctorate degree than customers with a masters degree!

-## Task 3: Finding our most important customers
+## Task 3: Finding our Most Important Customers

 ### Overview
-Let's pivot and look at customer behavior by utilizing an RFM analysis. RFM is a very commonly used method for analyzing customer value. It is often used in general customer marketing, direct marketing, and retail sectors.
+Let's pivot and look at customer behavior by utilizing an **RFM** analysis. RFM is a very commonly used method for analyzing customer value. It is often used in general customer marketing, direct marketing, and retail sectors.

 In the following steps, the scripts will build a SQL query to identify:

-- Recency: when was the last time the customer accessed the site?
-- Frequency: what is the level of activity for that customer on the site?
-- Monetary Value: how much money has the customer spent?
+- **Recency:** when was the last time the customer accessed the site?
+- **Frequency:** what is the level of activity for that customer on the site?
+- **Monetary Value:** how much money has the customer spent?

-Customers will be categorized into 5 buckets measured (using the NTILE function) in increasing importance. For example, an RFM combined score of 551 indicates that the customer is in the highest tier of customers in terms of recent visits (R=5) and activity on the site (F=5), however the customer is in the lowest tier in terms of spend (M=1). Perhaps this is a customer that performs research on the site, but then decides to buy movies elsewhere!
+Customers will be categorized into 5 buckets measured (using the **`NTILE`** function) in increasing importance. For example, an **RFM** combined score of **551** indicates that the customer is in the highest tier of customers in terms of recent visits (**R=5**) and activity on the site (**F=5**); however the customer is in the lowest tier in terms of spend (**M=1**). Perhaps this is a customer that performs research on the site, but then decides to buy movies elsewhere!

-1. Binning customers based on behavior
-
-Use the following query to segment customer behavior into 5 distinct bins based on the recency, frequency and monetary metrics:
+1. Let's bin customers based on their behavior. Copy the following query and paste it in your SQL Worksheet, and then click the **Run Statement** icon in the toolbar. Use the query to segment customer behavior into **5 distinct bins** based on the **recency**, **frequency**, and **monetary** metrics.

 ```
 <copy>SELECT
-cust_id,
+cust_id,
 NTILE (5) OVER (ORDER BY max(day_ID)) AS rfm_recency,
 NTILE (5) OVER (ORDER BY count(1)) AS rfm_frequency,
 NTILE (5) OVER (ORDER BY SUM(actual_price)) AS rfm_monetary
@@ -81,19 +81,16 @@ Customers will be categorized into 5 buckets measured (using the NTILE function)
 ORDER BY cust_id
 FETCH FIRST 10 ROWS ONLY;</copy>
 ```
-Below is a snapshot of the result (and your result may differ):
-
-![binned customers by sales, last visit and frequency](images/t4-bin-rfm.png " ")
+Below is a snapshot of the result (your result may differ).
+
+![binned customers by sales, last visit and frequency](images/t4-bin-rfm.png =65%x*)

 The rfm\_* columns in the report shows the "bin" values based on the 5 quintiles described above.

-For more information about using the `NTILE` function, see [the SQL documentation](https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/NTILE.html#GUID-FAD7A986-AEBD-4A03-B0D2-F7F2148BA5E9).
+For more information about using the `NTILE` function, see the [SQL Language Reference](https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/NTILE.html#GUID-FAD7A986-AEBD-4A03-B0D2-F7F2148BA5E9) documentation.

-2. Add customer information to the RFM query
-
-Now we use the **`WITH`** clause to create an RFM query and then join that to attributes coming from the CUSTOMER table. In addition, the query will focus on important customers (based on spend) that are at risk:
+2. Add customer information to the RFM query. Now we use the **`WITH`** clause to create an RFM query and then join that to attributes coming from the **CUSTOMER** table. In addition, the query will focus on important customers (based on spend) that are at risk. Copy the following query and paste it in your SQL Worksheet, and then click the **Run Statement** icon in the toolbar.

 ```
 <copy>WITH rfm AS (
@@ -121,18 +118,17 @@ Customers will be categorized into 5 buckets measured (using the NTILE function)
 AND r.rfm_recency = 1
 ORDER BY r.rfm_monetary desc, r.rfm_recency desc;</copy>
 ```
-The result only shows customers who historically had significant spend (equal to 5) but have not visited the site recently (equal to 1). MovieStream does not want to lose these important customers!
+The result only shows customers who historically had significant spend (equal to 5) but have not visited the site recently (equal to 1). MovieStream does not want to lose these important customers!

 ![RFM query](images/t4-rfm.png " ")

 ## Recap

-We accomplished alot in just 15 minutes!
+We accomplished a lot in just 15 minutes!

 * Deployed an Autonomous Database instance that is optimized for data warehousing workloads
-* Used Autonomous Database tools to load object storage sources
-* Use advanced SQL to uncover issues and possibilities
+* Used Autonomous Database tools to load data from object storage sources
+* Used advanced SQL to uncover issues and possibilities

 ## Learn more
@@ -141,5 +137,7 @@ We accomplished alot in just 15 minutes!
 ## **Acknowledgements**

-- **Authors** - Marty Gubar, Oracle Autonomous Database Product Management
-- **Last Updated By/Date** - Katherine Wardhana, User Assistance Developer, February 2024
+- **Authors:**
+    * Marty Gubar, Oracle Autonomous Database Product Management
+    * Lauran K. Serhal, Consulting User Assistance Developer
+- **Last Updated By/Date:** Lauran K. Serhal, April 2024
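The hunks above collapse most of the two RFM queries. For reference, a minimal sketch of the binning query's likely full shape, reassembled from the visible fragments; the custsales source table is an assumption drawn from the MovieStream schema used in these workshops and is not shown in this commit:

```
-- Hypothetical reconstruction from the visible diff fragments; the
-- source table name (custsales) is assumed. Each NTILE(5) call assigns
-- the customer to one of 5 quintiles for that metric.
SELECT
    cust_id,
    NTILE (5) OVER (ORDER BY max(day_id))       AS rfm_recency,   -- most recent visit
    NTILE (5) OVER (ORDER BY count(1))          AS rfm_frequency, -- activity level
    NTILE (5) OVER (ORDER BY SUM(actual_price)) AS rfm_monetary   -- total spend
FROM custsales
GROUP BY cust_id
ORDER BY cust_id
FETCH FIRST 10 ROWS ONLY;
```

Read together, the three quintile columns form the combined RFM score discussed in the lab text (for example, 551).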
(Six of the changed files, presumably binary images, cannot be displayed in the diff view.)
11 changes: 6 additions & 5 deletions shared/adb-15-minutes/introduction/introduction.md

@@ -20,19 +20,20 @@ The workshop's business scenario is based on Oracle MovieStream - a fictitious m

-## Learn more
+## Learn More

 * [Oracle Autonomous Database Documentation](https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/index.html)
-* [Additional Autonomous Database Tutorials](https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/tutorials.html)
+* [Additional Autonomous Database Tutorials](https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/index.html)

 ## Acknowledgements
-* **Author** - Marty Gubar, Product Manager
-* **Last Updated By/Date** - Richard Green, Principal Developer, Database User Assistance, February 2022
+* **Author:** Marty Gubar, Product Manager
+* **Contributor:** Lauran K. Serhal, Consulting User Assistance Developer
+* **Last Updated By/Date** April 2024

 Data about movies in this workshop were sourced from **Wikipedia**.

-Copyright (C) Oracle Corporation.
+Copyright (C) 2024 Oracle Corporation.

 Permission is granted to copy, distribute and/or modify this document
 under the terms of the GNU Free Documentation License, Version 1.3
16 changes: 9 additions & 7 deletions shared/adb-15-minutes/load-data/load-data.md

@@ -1,4 +1,4 @@
-# Use Data Tools to load data
+# Use Data Tools to Load Data from Object Storage Buckets

 ## Introduction

@@ -16,22 +16,24 @@ In this lab, you will:

 - This lab requires completion of Lab 1, **Provision an ADB Instance**, in the Contents menu on the left.

-## Task 1: Navigate to Database Actions and open the Data Load utility
+## Task 1: Navigate to Database Actions and Open the Data Load Utility

 [](include:adb-goto-data-load-utility.md)

-## Task 2: Create tables and load data from files in public Object Storage buckets using Database Actions tools
+## Task 2: Create Tables and Load Data from Files in Public Object Storage Buckets

 Autonomous Database supports a variety of object stores, including Oracle Object Storage, Amazon S3, Azure Blob Storage, Google Cloud Storage and Wasabi Hot Cloud Storage ([see the documentation for more details](https://docs.oracle.com/en/cloud/paas/autonomous-database/adbsa/data-load.html#GUID-E810061A-42B3-485F-92B8-3B872D790D85)). In this step, we will use Database Actions to create Autonomous Database tables and then load those tables from csv files stored in Oracle Object Storage.

 [](include:adb-load-public-db-actions-15-min-quickstart.md)

 This completes the Data Load lab. We now have a full set of structured tables loaded into Autonomous Database instance from files on object storage. We will be working with these tables in the next lab.

-Please proceed to the next lab.
+You may now **proceed to the next lab**.

 ## Acknowledgements

-* **Author** - Mike Matthews, Autonomous Database Product Management
-* **Contributors** - Marty Gubar, Autonomous Database Product Management
-* **Last Updated By/Date** - Katherine Wardhana, User Assistance Developer, February 2024
+* **Author:** Mike Matthews, Autonomous Database Product Management
+* **Contributors:**
+    - Marty Gubar, Autonomous Database Product Management
+    - Lauran K. Serhal, Consulting User Assistance Developer
+* **Last Updated By/Date:** Lauran K. Serhal, April 2024
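The Data Load steps themselves live in the include files referenced above. For anyone scripting the equivalent load instead of using the Database Actions UI, a hedged sketch using the DBMS_CLOUD package follows; the table name, object storage URI, and format options are placeholders rather than values taken from this workshop:

```
-- Sketch only: assumes the target table already exists and the bucket
-- is public (so no credential_name is passed). The namespace, bucket,
-- and file names are placeholders, not from the workshop's include files.
BEGIN
    DBMS_CLOUD.COPY_DATA(
        table_name    => 'CUSTSALES',
        file_uri_list => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/mynamespace/b/mybucket/o/custsales.csv',
        format        => json_object('type' value 'csv', 'skipheaders' value '1')
    );
END;
/
```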
shared/adb-15-minutes/provision-database/adb-provision-moviestream.md

@@ -10,27 +10,27 @@ Estimated Time: 5 minutes

 In this lab, you will:

-- Provision a new Autonomous Database
+- Provision a new Autonomous Database

 ### Prerequisites

-- This lab requires completion of the Get Started section in the Contents menu on the left.
+- This lab requires completion of the Get Started section in the Contents menu on the left.

 ## Task 1: Choose Autonomous Database from the OCI services menu
 [](include:adb-goto-service-body.md)

-## Task 2: Create the Autonomous Database instance
+## Task 2: Create the Autonomous Database Instance
 [](include:adb-provision-body.md)

-Please proceed to the next lab.
+You may now **proceed to the next lab**.

 ## Learn more

-See the [documentation](https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/autonomous-workflow.html#GUID-5780368D-6D40-475C-8DEB-DBA14BA675C3) on the typical workflow for using Autonomous Data Warehouse.
+* [Using Oracle Autonomous Database Serverless](https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/index.html)

 ## Acknowledgements

-- **Author** - Nilay Panchal, Oracle Autonomous Database Product Management
-- **Adapted for Cloud by** - Richard Green, Principal Developer, Database User Assistance
-- **Last Updated By/Date** - Katherine Wardhana, User Assistance Developer, February 2024
+- **Authors:**
+    * Lauran K. Serhal, Consulting User Assistance Developer
+    * Nilay Panchal, Oracle Autonomous Database Product Management
+- **Last Updated By/Date:** - Lauran K. Serhal, April 2024
10 changes: 5 additions & 5 deletions shared/adb-15-minutes/workshops/freetier/manifest.json

@@ -1,5 +1,5 @@
 {
-    "workshoptitle": "Autonomous Database 15 minute quick start",
+    "workshoptitle": "Autonomous Database 15 Minute Quick Start",
     "help": "[email protected]",
     "include": {
         "adb-load-public-db-actions-15-min-quickstart.md":"/common/building-blocks/tasks/adb/load-public-db-actions-15-min-quickstart.md",
@@ -15,20 +15,20 @@
         "filename": "../../introduction/introduction.md"
     },
     {
-        "title": "Get started",
+        "title": "Get Started",
         "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login.md"
     },
     {
-        "title": "Lab 1: Provision an ADB instance",
+        "title": "Lab 1: Provision an ADB Instance",
         "type": "freetier",
         "filename": "../../provision-database/adb-provision-moviestream.md"
     },
     {
-        "title": "Lab 2: Create tables and load data",
+        "title": "Lab 2: Load Data from a Public Object Storage Bucket and Create Tables",
         "filename": "../../load-data/load-data.md"
     },
     {
-        "title": "Lab 3: Use SQL to analyze movie sales data",
+        "title": "Lab 3: Use SQL to Analyze Movie Sales Data",
         "filename": "../../analyzing-movie-sales-data/analyzing-movie-sales-data.md"
     },
     {
10 changes: 5 additions & 5 deletions shared/adb-15-minutes/workshops/livelabs/manifest.json

@@ -1,5 +1,5 @@
 {
-    "workshoptitle": "Autonomous Database 15 minute quick start",
+    "workshoptitle": "Autonomous Database 15 Minute Quick Start",
     "help": "[email protected]",
     "include": {
         "adb-load-public-db-actions-15-min-quickstart.md":"/common/building-blocks/tasks/adb/load-public-db-actions-15-min-quickstart.md",
@@ -14,20 +14,20 @@
         "filename": "../../introduction/introduction.md"
     },
     {
-        "title": "Get started",
+        "title": "Get Started",
         "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/pre-register-free-tier-account.md"
     },
     {
-        "title": "Lab 1: Provision an ADB instance",
+        "title": "Lab 1: Provision an ADB Instance",
         "type": "livelabs",
         "filename": "../../provision-database/adb-provision-moviestream.md"
     },
     {
-        "title": "Lab 2: Use Data Tools to load data",
+        "title": "Lab 2: Use Data Tools to Load Data",
         "filename": "../../load-data/load-data.md"
     },
     {
-        "title": "Lab 3: Use SQL to analyze movie sales data",
+        "title": "Lab 3: Use SQL to Analyze Movie Sales Data",
         "filename": "../../analyzing-movie-sales-data/analyzing-movie-sales-data.md"

     },
(One more changed file cannot be displayed in the diff view.)
(Changed notebook-style JSON file; the filename is not shown in this view.)

@@ -1,7 +1,7 @@
 {
     "paragraphs": [
         {
-            "text": "%md\n![MovieStream Logo](https://oracle-livelabs.github.io/adb/shared/movie-stream-story/introduction/images/moviestream.jpeg)",
+            "text": "%md\n![MovieStream Logo](https://oracle-livelabs.github.io/adb/movie-stream-story-lite/introduction/images/moviestream.jpeg)",
             "user": "MOVIESTREAM",
             "dateUpdated": "2022-01-21T14:57:32+0000",
             "progress": 0,
