diff --git a/docs/assets/images/installation/Fabric_1.png b/docs/assets/images/installation/Fabric_1.png
new file mode 100644
index 0000000000..2dc888f791
Binary files /dev/null and b/docs/assets/images/installation/Fabric_1.png differ
diff --git a/docs/assets/images/installation/Fabric_2.png b/docs/assets/images/installation/Fabric_2.png
new file mode 100644
index 0000000000..3e9ca5a289
Binary files /dev/null and b/docs/assets/images/installation/Fabric_2.png differ
diff --git a/docs/assets/images/installation/Fabric_3.png b/docs/assets/images/installation/Fabric_3.png
new file mode 100644
index 0000000000..90a8f9aa0f
Binary files /dev/null and b/docs/assets/images/installation/Fabric_3.png differ
diff --git a/docs/assets/images/installation/Fabric_4.png b/docs/assets/images/installation/Fabric_4.png
new file mode 100644
index 0000000000..73bda2487e
Binary files /dev/null and b/docs/assets/images/installation/Fabric_4.png differ
diff --git a/docs/assets/images/installation/Fabric_5.png b/docs/assets/images/installation/Fabric_5.png
new file mode 100644
index 0000000000..fb231bdc4d
Binary files /dev/null and b/docs/assets/images/installation/Fabric_5.png differ
diff --git a/docs/en/licensed_install.md b/docs/en/licensed_install.md
index bec001987e..44676f5d87 100644
--- a/docs/en/licensed_install.md
+++ b/docs/en/licensed_install.md
@@ -1589,36 +1589,36 @@ Navigate to [MS Fabric](https://app.fabric.microsoft.com/) and sign in with your
### Step 2: Create a Lakehouse
-- Go to the **Synapse Data Science** section.
+- Go to the **Data Science** section.
- Navigate to the **Create** section.
- Create a new lakehouse (for instance, let us name it `jsl_workspace`).
-
+
### Step 3: Create a Notebook
- Similarly, create a new notebook (for instance, let us name it `JSL_Notebook`).
-
+
### Step 4: Attach the Lakehouse
Attach the newly created lakehouse (`jsl_workspace`) to your notebook.
-
+
-
+
### Step 5: Upload Files
Upload the necessary `.jar` and `.whl` files to the attached lakehouse.
-
+
-
+
After uploading is complete, you can configure and run the notebook.
@@ -1631,21 +1631,30 @@ Configure the session within the notebook as follows:
%%configure -f
{
"conf": {
- "spark.hadoop.fs.s3a.access.key": {
+ "spark.jsl.settings.aws.credentials.access_key_id": {
"parameterName": "awsAccessKey",
-      "defaultValue": ""
+ "defaultValue": ""
},
- "spark.hadoop.fs.s3a.secret.key": {
+ "spark.jsl.settings.aws.credentials.secret_access_key": {
"parameterName": "awsSecretKey",
- "defaultValue": ""
+ "defaultValue": ""
},
+
"spark.yarn.appMasterEnv.SPARK_NLP_LICENSE": {
"parameterName": "sparkNlpLicense",
- "defaultValue": ""
+ "defaultValue": ""
},
"spark.jars": {
"parameterName": "sparkJars",
- "defaultValue": ","
+ "defaultValue": "abfss://&&&&&&/Files/spark-nlp-assembly-5.5.0.jar, abfss://&&&&&&/Files/spark-nlp-jsl-5.5.0.jar"
+ },
+ "spark.jsl.settings.pretrained.cache_folder": {
+ "parameterName": "cacheFolder",
+ "defaultValue": "abfss://&&&&&&/Files/unzip_files"
+ },
+ "spark.extraListeners": {
+ "parameterName": "extraListener",
+      "defaultValue": "com.johnsnowlabs.license.LicenseLifeCycleManager"
}
}
}
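Reviewer note (not part of the patch): the `%%configure` JSON above is easy to break with a key typo, since each entry must carry exactly the keys `parameterName` and `defaultValue` (a lowercase `defaultvalue` is just an unknown key and the default is silently dropped). A minimal stdlib sketch for sanity-checking the block before pasting it into the notebook; the `abfss://&&&&&&` placeholders from the patch are intentionally left blank here:

```python
import json

# Session config mirroring the patch (workspace-specific values left empty).
config = json.loads("""
{
  "conf": {
    "spark.jsl.settings.aws.credentials.access_key_id":
      {"parameterName": "awsAccessKey", "defaultValue": ""},
    "spark.jsl.settings.aws.credentials.secret_access_key":
      {"parameterName": "awsSecretKey", "defaultValue": ""},
    "spark.yarn.appMasterEnv.SPARK_NLP_LICENSE":
      {"parameterName": "sparkNlpLicense", "defaultValue": ""},
    "spark.jars":
      {"parameterName": "sparkJars", "defaultValue": ""},
    "spark.jsl.settings.pretrained.cache_folder":
      {"parameterName": "cacheFolder", "defaultValue": ""},
    "spark.extraListeners":
      {"parameterName": "extraListener",
       "defaultValue": "com.johnsnowlabs.license.LicenseLifeCycleManager"}
  }
}
""")

# Every entry must have exactly these two keys; a typo such as
# "defaultvalue" would make the default silently disappear.
for key, entry in config["conf"].items():
    missing = {"parameterName", "defaultValue"} - entry.keys()
    assert not missing, f"{key} is missing {missing}"
```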
@@ -1658,6 +1667,7 @@ Configure the session within the notebook as follows:
Install the required Spark NLP libraries using pip commands:
```bash
+%pip install
%pip install
%pip install
```
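Reviewer note (not part of the patch): the `%pip install` targets are left blank above, so after filling them in it can be worth confirming the wheels actually resolved in the notebook kernel. A small stdlib sketch; the package names in the comment are illustrative, since the patch does not name them:

```python
import importlib.util

def installed(pkg: str) -> bool:
    """Return True if `pkg` can be imported in the current kernel."""
    return importlib.util.find_spec(pkg) is not None

# After Step 6 one would check e.g. installed("sparknlp") and
# installed("sparknlp_jsl") before running the pipeline cells.
```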
@@ -1754,4 +1764,9 @@ result = pipeline.annotate(text)

+### Step 12: Run the pipeline with the `.pretrained()` method
+You can also run the pipelines without using the `.load()` or `.from_disk()` methods.
+
+
+
\ No newline at end of file
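Reviewer note (not part of the patch): Step 12 states the idea but shows no code. A hypothetical sketch of what it describes, assuming `sparknlp` is installed and the session is configured as in Step 6; the pipeline name is illustrative, not taken from the patch:

```python
# Hypothetical Step 12 sketch: .pretrained() fetches the pipeline into the
# configured cache_folder, so no explicit .load() / .from_disk() is needed.
def annotate_with_pretrained(text, name="explain_clinical_doc_era"):
    # Lazy import so the function can be defined without a Spark session.
    from sparknlp.pretrained import PretrainedPipeline

    pipeline = PretrainedPipeline(name, "en", "clinical/models")
    return pipeline.annotate(text)
```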