Activate a specified user.
@@ -147,7 +147,7 @@
Returns a user to ACTIVE status. This operation can only be performed on users that have a SUSPENDED status.
@@ -206,7 +206,7 @@
Returns all the enrolled factors for the specified user.
@@ -313,7 +313,7 @@
}
Enrolls and verifies a push factor for a specified user.
@@ -376,7 +376,7 @@
}
Removes an existing factor for the specified user, allowing the user to enroll a new factor.
@@ -415,7 +415,7 @@
Returns all user groups associated with a specified user.
@@ -545,7 +545,7 @@
}
Fetches information for a specified user. You must enter one or more parameters for the command to run.
@@ -759,7 +759,7 @@
}
Creates a new user, with the option to set a password and a recovery question and answer. This flow is common when developing a custom user registration experience.
@@ -1041,7 +1041,7 @@
}
Updates account details for a specified user. The only required parameter is username.
@@ -1256,7 +1256,7 @@
Returns event details of Okta-issued sessions for user authentication, filtered to failed user logins.
@@ -1599,7 +1599,7 @@
}
Returns event details for when a user is added to a group.
@@ -1956,7 +1956,7 @@
}
Returns event details for when a user is assigned to an application.
@@ -2320,7 +2320,7 @@
}
Returns event details for when a user attempts to sign on using SSO to an application managed in Okta.
@@ -2679,7 +2679,7 @@
}
Returns logs using specified filters.
@@ -3142,7 +3142,7 @@
}
Enumerates groups in your organization. A subset of groups can be returned that match a supported filter expression or query.
@@ -3272,7 +3272,7 @@
}
Returns members of a specified group.
@@ -3446,7 +3446,7 @@
}
This is a list of probable reasons for errors.
diff --git a/Packs/Okta/Integrations/OktaEventCollector/OktaEventCollector.yml b/Packs/Okta/Integrations/OktaEventCollector/OktaEventCollector.yml
index b2ddcd9166c6..c99544759b8e 100644
--- a/Packs/Okta/Integrations/OktaEventCollector/OktaEventCollector.yml
+++ b/Packs/Okta/Integrations/OktaEventCollector/OktaEventCollector.yml
@@ -73,7 +73,7 @@ script:
required: false
description: Manual command to fetch events and display them.
name: okta-get-events
- dockerimage: demisto/fastapi:1.0.0.87576
+ dockerimage: demisto/fastapi:0.115.4.115067
isfetchevents: true
subtype: python3
marketplaces:
diff --git a/Packs/Okta/Integrations/Okta_IAM/README.md b/Packs/Okta/Integrations/Okta_IAM/README.md
index 4d52f08f3ce3..ef1a9a65954f 100644
--- a/Packs/Okta/Integrations/Okta_IAM/README.md
+++ b/Packs/Okta/Integrations/Okta_IAM/README.md
@@ -288,9 +288,9 @@ Returns a list of Okta applications data.
##### Okta Applications (1 - 3)
|ID|Name|Label|Logo|
|---|---|---|---|
-| 0ob8zlypk6GVPRr2T0h7 | workday | Workday - Preview | ![](https://op1static.oktacdn.com/fs/bcg/4/gfsnda403rf16Qe790h7) |
-| 0oabz0ozy5dDpEKyA0h7 | workday | Workday - Prod - DryRun | ![](https://op1static.oktacdn.com/fs/bcg/4/gfsnda403rf16Qe790h7) |
-| 0oae3ioe51sQ64Aui2h7 | workday | Workday - Impl1 | ![](https://op1static.oktacdn.com/fs/bcg/4/gfsnda403rf16Qe790h7) |
+| 0ob8zlypk6GVPRr2T0h7 | workday | Workday - Preview | ![](../../doc_files/gfsnda403rf16Qe790h7) |
+| 0oabz0ozy5dDpEKyA0h7 | workday | Workday - Prod - DryRun | ![](../../doc_files/gfsnda403rf16Qe790h7) |
+| 0oae3ioe51sQ64Aui2h7 | workday | Workday - Impl1 | ![](../../doc_files/gfsnda403rf16Qe790h7) |
### okta-list-user-applications
@@ -359,7 +359,7 @@ There are no input arguments for this command.
##### Okta IAM Configuration
|ApplicationID|Instance|Label|Logo|Name|
|---|---|---|---|---|
-| 0oc8zlypk6GVPRr2G0h7 | ServiceNow IAM_instance_1 | ServiceNow | ![](https://op1static.oktacdn.com/fs/bcg/4/gfskliw1i51ScX6pf0h7) | servicenow |
+| 0oc8zlypk6GVPRr2G0h7 | ServiceNow IAM_instance_1 | ServiceNow | ![](../../doc_files/gfskliw1i51ScX6pf0h7) | servicenow |
### okta-iam-set-configuration
diff --git a/Packs/Okta/ReleaseNotes/3_3_6.md b/Packs/Okta/ReleaseNotes/3_3_6.md
new file mode 100644
index 000000000000..75d623189047
--- /dev/null
+++ b/Packs/Okta/ReleaseNotes/3_3_6.md
@@ -0,0 +1,12 @@
+
+#### Scripts
+
+##### IAMInitOktaUser
+
+
+- Updated the Docker image to: *demisto/py3-tools:1.0.0.114656*.
+#### Integrations
+
+##### Okta Event Collector
+
+- Updated the Docker image to: *demisto/fastapi:0.115.4.115067*.
diff --git a/Packs/Okta/Scripts/IAMInitOktaUser/IAMInitOktaUser.yml b/Packs/Okta/Scripts/IAMInitOktaUser/IAMInitOktaUser.yml
index bd72654a0248..36f34e39c00d 100644
--- a/Packs/Okta/Scripts/IAMInitOktaUser/IAMInitOktaUser.yml
+++ b/Packs/Okta/Scripts/IAMInitOktaUser/IAMInitOktaUser.yml
@@ -81,7 +81,7 @@ outputs:
type: String
scripttarget: 0
subtype: python3
-dockerimage: demisto/py3-tools:1.0.0.95440
+dockerimage: demisto/py3-tools:1.0.0.114656
runas: DBotWeakRole
fromversion: 6.5.0
tests:
diff --git a/Packs/Okta/doc_files/43457688-4e6ba010-94d0-11e8-9697-89a0edda4ba3.png b/Packs/Okta/doc_files/43457688-4e6ba010-94d0-11e8-9697-89a0edda4ba3.png
new file mode 100644
index 000000000000..15336312a135
Binary files /dev/null and b/Packs/Okta/doc_files/43457688-4e6ba010-94d0-11e8-9697-89a0edda4ba3.png differ
diff --git a/Packs/Okta/doc_files/43457838-c2f59d64-94d0-11e8-99d5-743216e3cf57.png b/Packs/Okta/doc_files/43457838-c2f59d64-94d0-11e8-99d5-743216e3cf57.png
new file mode 100644
index 000000000000..763f96745a39
Binary files /dev/null and b/Packs/Okta/doc_files/43457838-c2f59d64-94d0-11e8-99d5-743216e3cf57.png differ
diff --git a/Packs/Okta/doc_files/43458368-b70ff2a4-94d2-11e8-91b3-dd8467e3e076.png b/Packs/Okta/doc_files/43458368-b70ff2a4-94d2-11e8-91b3-dd8467e3e076.png
new file mode 100644
index 000000000000..c0bbb5ce9659
Binary files /dev/null and b/Packs/Okta/doc_files/43458368-b70ff2a4-94d2-11e8-91b3-dd8467e3e076.png differ
diff --git a/Packs/Okta/doc_files/43458391-cbbeda8a-94d2-11e8-99cd-7913ec999799.png b/Packs/Okta/doc_files/43458391-cbbeda8a-94d2-11e8-99cd-7913ec999799.png
new file mode 100644
index 000000000000..17e03ba33516
Binary files /dev/null and b/Packs/Okta/doc_files/43458391-cbbeda8a-94d2-11e8-99cd-7913ec999799.png differ
diff --git a/Packs/Okta/doc_files/43460338-02e11d74-94d9-11e8-86d5-058fa1e15651.png b/Packs/Okta/doc_files/43460338-02e11d74-94d9-11e8-86d5-058fa1e15651.png
new file mode 100644
index 000000000000..0309495183e9
Binary files /dev/null and b/Packs/Okta/doc_files/43460338-02e11d74-94d9-11e8-86d5-058fa1e15651.png differ
diff --git a/Packs/Okta/doc_files/43460466-632e38c4-94d9-11e8-9f2a-2187b3076b18.png b/Packs/Okta/doc_files/43460466-632e38c4-94d9-11e8-9f2a-2187b3076b18.png
new file mode 100644
index 000000000000..512d78a54fd0
Binary files /dev/null and b/Packs/Okta/doc_files/43460466-632e38c4-94d9-11e8-9f2a-2187b3076b18.png differ
diff --git a/Packs/Okta/doc_files/43460554-9fa5d1fe-94d9-11e8-8769-eddc01050544.png b/Packs/Okta/doc_files/43460554-9fa5d1fe-94d9-11e8-8769-eddc01050544.png
new file mode 100644
index 000000000000..aca5cee8cf19
Binary files /dev/null and b/Packs/Okta/doc_files/43460554-9fa5d1fe-94d9-11e8-8769-eddc01050544.png differ
diff --git a/Packs/Okta/doc_files/43461863-3629ab34-94dd-11e8-8959-58bafde323ef.png b/Packs/Okta/doc_files/43461863-3629ab34-94dd-11e8-8959-58bafde323ef.png
new file mode 100644
index 000000000000..3ca1ab7cde3b
Binary files /dev/null and b/Packs/Okta/doc_files/43461863-3629ab34-94dd-11e8-8959-58bafde323ef.png differ
diff --git a/Packs/Okta/doc_files/43462076-ccb0ee46-94dd-11e8-8bac-9a54b5315c9a.png b/Packs/Okta/doc_files/43462076-ccb0ee46-94dd-11e8-8bac-9a54b5315c9a.png
new file mode 100644
index 000000000000..53d33945a441
Binary files /dev/null and b/Packs/Okta/doc_files/43462076-ccb0ee46-94dd-11e8-8bac-9a54b5315c9a.png differ
diff --git a/Packs/Okta/doc_files/43462420-baffa5ec-94de-11e8-853f-bf02b25c9690.png b/Packs/Okta/doc_files/43462420-baffa5ec-94de-11e8-853f-bf02b25c9690.png
new file mode 100644
index 000000000000..7a8ad794d235
Binary files /dev/null and b/Packs/Okta/doc_files/43462420-baffa5ec-94de-11e8-853f-bf02b25c9690.png differ
diff --git a/Packs/Okta/doc_files/43462731-91389038-94df-11e8-9fe5-39b030db28fe.png b/Packs/Okta/doc_files/43462731-91389038-94df-11e8-9fe5-39b030db28fe.png
new file mode 100644
index 000000000000..922a79b1406b
Binary files /dev/null and b/Packs/Okta/doc_files/43462731-91389038-94df-11e8-9fe5-39b030db28fe.png differ
diff --git a/Packs/Okta/doc_files/43463068-64380af4-94e0-11e8-9824-b7f9f0d0ed43.png b/Packs/Okta/doc_files/43463068-64380af4-94e0-11e8-9824-b7f9f0d0ed43.png
new file mode 100644
index 000000000000..a7e49ab9d725
Binary files /dev/null and b/Packs/Okta/doc_files/43463068-64380af4-94e0-11e8-9824-b7f9f0d0ed43.png differ
diff --git a/Packs/Okta/doc_files/43463112-8652d6dc-94e0-11e8-88a0-e53e59dc79e4.png b/Packs/Okta/doc_files/43463112-8652d6dc-94e0-11e8-88a0-e53e59dc79e4.png
new file mode 100644
index 000000000000..2286ceda1624
Binary files /dev/null and b/Packs/Okta/doc_files/43463112-8652d6dc-94e0-11e8-88a0-e53e59dc79e4.png differ
diff --git a/Packs/Okta/doc_files/43464242-3cd6e748-94e3-11e8-974e-5d27d1b0ccbf.png b/Packs/Okta/doc_files/43464242-3cd6e748-94e3-11e8-974e-5d27d1b0ccbf.png
new file mode 100644
index 000000000000..c03e54f2bec3
Binary files /dev/null and b/Packs/Okta/doc_files/43464242-3cd6e748-94e3-11e8-974e-5d27d1b0ccbf.png differ
diff --git a/Packs/Okta/doc_files/43464335-8cb9d3ec-94e3-11e8-900b-8b27c7789dd6.png b/Packs/Okta/doc_files/43464335-8cb9d3ec-94e3-11e8-900b-8b27c7789dd6.png
new file mode 100644
index 000000000000..2b134da02599
Binary files /dev/null and b/Packs/Okta/doc_files/43464335-8cb9d3ec-94e3-11e8-900b-8b27c7789dd6.png differ
diff --git a/Packs/Okta/doc_files/44389432-8189b180-a533-11e8-8984-b29dcd5851e5.png b/Packs/Okta/doc_files/44389432-8189b180-a533-11e8-8984-b29dcd5851e5.png
new file mode 100644
index 000000000000..08631544021d
Binary files /dev/null and b/Packs/Okta/doc_files/44389432-8189b180-a533-11e8-8984-b29dcd5851e5.png differ
diff --git a/Packs/Okta/doc_files/44389475-a0884380-a533-11e8-8c0f-365e1b9bc723.png b/Packs/Okta/doc_files/44389475-a0884380-a533-11e8-8c0f-365e1b9bc723.png
new file mode 100644
index 000000000000..cbd000c8bd7c
Binary files /dev/null and b/Packs/Okta/doc_files/44389475-a0884380-a533-11e8-8c0f-365e1b9bc723.png differ
diff --git a/Packs/Okta/doc_files/49521506-3e6b0880-f8ae-11e8-9f2a-37ec8d49d5b0.png b/Packs/Okta/doc_files/49521506-3e6b0880-f8ae-11e8-9f2a-37ec8d49d5b0.png
new file mode 100644
index 000000000000..89807629bb36
Binary files /dev/null and b/Packs/Okta/doc_files/49521506-3e6b0880-f8ae-11e8-9f2a-37ec8d49d5b0.png differ
diff --git a/Packs/Okta/doc_files/49521733-c2bd8b80-f8ae-11e8-878f-e10e60a951ac.png b/Packs/Okta/doc_files/49521733-c2bd8b80-f8ae-11e8-878f-e10e60a951ac.png
new file mode 100644
index 000000000000..8094c12abbde
Binary files /dev/null and b/Packs/Okta/doc_files/49521733-c2bd8b80-f8ae-11e8-878f-e10e60a951ac.png differ
diff --git a/Packs/Okta/doc_files/49521874-00221900-f8af-11e8-8ab8-5c50575a372c.png b/Packs/Okta/doc_files/49521874-00221900-f8af-11e8-8ab8-5c50575a372c.png
new file mode 100644
index 000000000000..c3175b8b3d07
Binary files /dev/null and b/Packs/Okta/doc_files/49521874-00221900-f8af-11e8-8ab8-5c50575a372c.png differ
diff --git a/Packs/Okta/doc_files/gfskliw1i51ScX6pf0h7 b/Packs/Okta/doc_files/gfskliw1i51ScX6pf0h7
new file mode 100644
index 000000000000..3a8884cdee43
Binary files /dev/null and b/Packs/Okta/doc_files/gfskliw1i51ScX6pf0h7 differ
diff --git a/Packs/Okta/doc_files/gfsnda403rf16Qe790h7 b/Packs/Okta/doc_files/gfsnda403rf16Qe790h7
new file mode 100644
index 000000000000..eb13b1e8189c
Binary files /dev/null and b/Packs/Okta/doc_files/gfsnda403rf16Qe790h7 differ
diff --git a/Packs/Okta/pack_metadata.json b/Packs/Okta/pack_metadata.json
index 011ec14cbc54..1860f150e7f7 100644
--- a/Packs/Okta/pack_metadata.json
+++ b/Packs/Okta/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Okta",
"description": "Integration with Okta's cloud-based identity management service.",
"support": "xsoar",
- "currentVersion": "3.3.5",
+ "currentVersion": "3.3.6",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/OpenCTI/Integrations/OpenCTI/OpenCTI.yml b/Packs/OpenCTI/Integrations/OpenCTI/OpenCTI.yml
index 269d236c5927..bfa2f5bb4f21 100644
--- a/Packs/OpenCTI/Integrations/OpenCTI/OpenCTI.yml
+++ b/Packs/OpenCTI/Integrations/OpenCTI/OpenCTI.yml
@@ -293,7 +293,7 @@ script:
- contextPath: OpenCTI.MarkingDefinitions.markingsLastRun
description: The last ID of the previous fetch to use for pagination.
type: String
- dockerimage: demisto/vendors-sdk:1.0.0.110574
+ dockerimage: demisto/vendors-sdk:1.0.0.115493
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/OpenCTI/ReleaseNotes/1_0_14.md b/Packs/OpenCTI/ReleaseNotes/1_0_14.md
new file mode 100644
index 000000000000..07ae66504029
--- /dev/null
+++ b/Packs/OpenCTI/ReleaseNotes/1_0_14.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### OpenCTI
+
+
+- Updated the Docker image to: *demisto/vendors-sdk:1.0.0.115493*.
diff --git a/Packs/OpenCTI/pack_metadata.json b/Packs/OpenCTI/pack_metadata.json
index fc0d12812f74..3153de924ae6 100644
--- a/Packs/OpenCTI/pack_metadata.json
+++ b/Packs/OpenCTI/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "OpenCTI",
"description": "Manages indicators from OpenCTI.",
"support": "xsoar",
- "currentVersion": "1.0.13",
+ "currentVersion": "1.0.14",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/OpenLDAP/Integrations/OpenLDAP/OpenLDAP.yml b/Packs/OpenLDAP/Integrations/OpenLDAP/OpenLDAP.yml
index 0acb3f281ab1..ea2a3600ad75 100644
--- a/Packs/OpenLDAP/Integrations/OpenLDAP/OpenLDAP.yml
+++ b/Packs/OpenLDAP/Integrations/OpenLDAP/OpenLDAP.yml
@@ -235,7 +235,7 @@ script:
defaultValue: 50
description: A generic LDAP search command.
name: ad-entries-search
- dockerimage: demisto/py3-tools:1.0.0.102774
+ dockerimage: demisto/py3-tools:1.0.0.114656
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/OpenLDAP/Integrations/OpenLDAP/README.md b/Packs/OpenLDAP/Integrations/OpenLDAP/README.md
index 74edf937b772..6db5c26d353f 100644
--- a/Packs/OpenLDAP/Integrations/OpenLDAP/README.md
+++ b/Packs/OpenLDAP/Integrations/OpenLDAP/README.md
@@ -42,11 +42,11 @@ Use OpenLDAP or Active Directory user authentication groups to set user roles in
**Steps required for setting AD Roles Mapping** (the steps refer to an OpenLDAP server):
1. Create OpenLDAP child entry of *User Account* template under wanted *Organizational Unit* and *Posix Group*, with *uid* as part of DN:
-![user](https://user-images.githubusercontent.com/45535078/71556364-722c4980-2a40-11ea-850a-4b556f5f0f4b.png)
+![user](../../doc_files/71556364-722c4980-2a40-11ea-850a-4b556f5f0f4b.png)
2. Create OpenLDAP child entry of *Posix Group* template, with created account from step 1 as *memberUid* (see the Python sketch after step 7):
-![group](https://user-images.githubusercontent.com/45535078/71556408-04345200-2a41-11ea-8368-6eb430c1aa93.png)
+![group](../../doc_files/71556408-04345200-2a41-11ea-8368-6eb430c1aa93.png)
3. If using different attributes and class/group templates (different *objectClass*), customize the following default values in the instance configuration:
@@ -61,7 +61,7 @@ Use OpenLDAP or Active Directory user authentication groups to set user roles in
5. Choose the role.
6. Add the created group from step 2 to **AD Roles Mapping**.
-![mapping](https://user-images.githubusercontent.com/45535078/71556645-ee745c00-2a43-11ea-90da-764d0543f1ca.png)
+![mapping](../../doc_files/71556645-ee745c00-2a43-11ea-90da-764d0543f1ca.png)
7. Login to Cortex XSOAR using *uid* or full DN and password of the user created in step 1.
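
Steps 1-2 above boil down to two directory adds. A minimal sketch using the ldap3 library, assuming a `dc=example,dc=org` base, an admin bind DN, and gidNumber 5000 — all placeholders to adapt to your directory:

```python
from ldap3 import Connection, Server

conn = Connection(Server('ldap://openldap.example.com'),
                  user='cn=admin,dc=example,dc=org',
                  password='admin-password', auto_bind=True)

# Step 1: user entry with uid as part of the DN.
conn.add('uid=jdoe,ou=people,dc=example,dc=org',
         object_class=['inetOrgPerson', 'posixAccount'],
         attributes={'cn': 'John Doe', 'sn': 'Doe', 'uid': 'jdoe',
                     'uidNumber': 10001, 'gidNumber': 5000,
                     'homeDirectory': '/home/jdoe'})

# Step 2: group listing the user as memberUid; this is the group you later
# map to an XSOAR role in step 6.
conn.add('cn=xsoar-analysts,ou=groups,dc=example,dc=org',
         object_class=['posixGroup'],
         attributes={'cn': 'xsoar-analysts', 'gidNumber': 5000,
                     'memberUid': ['jdoe']})
```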
diff --git a/Packs/OpenLDAP/ReleaseNotes/2_0_15.md b/Packs/OpenLDAP/ReleaseNotes/2_0_15.md
new file mode 100644
index 000000000000..9f4cfcb14cc0
--- /dev/null
+++ b/Packs/OpenLDAP/ReleaseNotes/2_0_15.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### LDAP Authentication
+
+
+- Updated the Docker image to: *demisto/py3-tools:1.0.0.114656*.
diff --git a/Packs/OpenLDAP/pack_metadata.json b/Packs/OpenLDAP/pack_metadata.json
index b4312cf2f227..30f7b3667511 100644
--- a/Packs/OpenLDAP/pack_metadata.json
+++ b/Packs/OpenLDAP/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "LDAP Authentication",
"description": "Authenticate using Open LDAP or Active Directory",
"support": "xsoar",
- "currentVersion": "2.0.14",
+ "currentVersion": "2.0.15",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/OpenPhish/Integrations/OpenPhish_v2/OpenPhish_v2.py b/Packs/OpenPhish/Integrations/OpenPhish_v2/OpenPhish_v2.py
index 65e2004a70d3..8c7727bd673a 100644
--- a/Packs/OpenPhish/Integrations/OpenPhish_v2/OpenPhish_v2.py
+++ b/Packs/OpenPhish/Integrations/OpenPhish_v2/OpenPhish_v2.py
@@ -5,7 +5,6 @@
import urllib3
import traceback
-from typing import Dict
# Disable insecure warnings
urllib3.disable_warnings()
@@ -17,7 +16,6 @@
class Error(Exception):
"""Base class for exceptions in this module."""
- pass
class NotFoundError(Error):
@@ -69,7 +67,7 @@ def _save_urls_to_instance(client: Client):
raise Exception(f'Check server URL - {e.message}')
-def _is_reload_needed(client: Client, data: Dict) -> bool:
+def _is_reload_needed(client: Client, data: dict) -> bool:
"""
Checks if there is a need to reload the data from api to instance's memory
Args:
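
The `Dict` import removal above works because of PEP 585: since Python 3.9 the built-in `dict` is subscriptable in annotations, so `typing.Dict` is redundant. A minimal before/after sketch, with a hypothetical body since the original function's logic is not shown in the hunk:

```python
# Since Python 3.9 (PEP 585), built-in containers work directly in
# annotations, so `from typing import Dict` can be dropped.

# Before: def _is_reload_needed(client: Client, data: Dict) -> bool:
# After:  def _is_reload_needed(client: Client, data: dict) -> bool:

def _is_reload_needed_sketch(data: dict) -> bool:
    """Hypothetical stand-in: reload when the cached data is empty."""
    return not data

print(_is_reload_needed_sketch({}))          # True
print(_is_reload_needed_sketch({"url": 1}))  # False
```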
diff --git a/Packs/OpenPhish/Integrations/OpenPhish_v2/OpenPhish_v2.yml b/Packs/OpenPhish/Integrations/OpenPhish_v2/OpenPhish_v2.yml
index 2c64fcc4251d..a40ea0b292f4 100644
--- a/Packs/OpenPhish/Integrations/OpenPhish_v2/OpenPhish_v2.yml
+++ b/Packs/OpenPhish/Integrations/OpenPhish_v2/OpenPhish_v2.yml
@@ -54,7 +54,7 @@ description: OpenPhish uses proprietary Artificial Intelligence algorithms to au
name: OpenPhish_v2
display: OpenPhish v2
script:
- dockerimage: demisto/python3:3.10.13.72123
+ dockerimage: demisto/python3:3.11.10.113941
commands:
- name: url
arguments:
@@ -62,10 +62,10 @@ script:
isArray: true
required: true
default: true
- description: URL to check
+ description: URL to check.
outputs:
- contextPath: URL.Data
- description: The URL
+ description: The URL.
- contextPath: URL.Malicious.Vendor
description: The vendor reporting the URL as malicious.
- contextPath: URL.Malicious.Description
@@ -81,10 +81,10 @@ script:
description: Checks the reputation of a URL.
- name: openphish-reload
arguments: []
- description: Reload OpenPhish database
+ description: Reload OpenPhish database.
- name: openphish-status
arguments: []
- description: Show OpenPhish database status
+ description: Show OpenPhish database status.
script: '-'
subtype: python3
type: python
diff --git a/Packs/OpenPhish/Integrations/OpenPhish_v2/README.md b/Packs/OpenPhish/Integrations/OpenPhish_v2/README.md
index 2800c434ae2c..b1567b93f4df 100644
--- a/Packs/OpenPhish/Integrations/OpenPhish_v2/README.md
+++ b/Packs/OpenPhish/Integrations/OpenPhish_v2/README.md
@@ -134,4 +134,4 @@ There is no context output for this command.
#### Human Readable Output
-![image](https://user-images.githubusercontent.com/71636766/94807766-c5c92a80-03f8-11eb-9339-d8e399d895c5.png)
+![image](../../doc_files/94807766-c5c92a80-03f8-11eb-9339-d8e399d895c5.png)
diff --git a/Packs/OpenPhish/ReleaseNotes/2_0_18.md b/Packs/OpenPhish/ReleaseNotes/2_0_18.md
new file mode 100644
index 000000000000..dd408d1ff2d7
--- /dev/null
+++ b/Packs/OpenPhish/ReleaseNotes/2_0_18.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### OpenPhish v2
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/OpenPhish/ReleaseNotes/2_0_19.md b/Packs/OpenPhish/ReleaseNotes/2_0_19.md
new file mode 100644
index 000000000000..09b015dd85c2
--- /dev/null
+++ b/Packs/OpenPhish/ReleaseNotes/2_0_19.md
@@ -0,0 +1,6 @@
+
+#### Integrations
+
+##### OpenPhish v2
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/OpenPhish/pack_metadata.json b/Packs/OpenPhish/pack_metadata.json
index 352b4139cac0..82a5390c3934 100644
--- a/Packs/OpenPhish/pack_metadata.json
+++ b/Packs/OpenPhish/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "OpenPhish",
"description": "OpenPhish uses proprietary Artificial Intelligence algorithms to automatically identify zero-day phishing sites and provide comprehensive, actionable, real-time threat intelligence.",
"support": "xsoar",
- "currentVersion": "2.0.17",
+ "currentVersion": "2.0.19",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/PAN-OS/ReleaseNotes/2_2_7.md b/Packs/PAN-OS/ReleaseNotes/2_2_7.md
new file mode 100644
index 000000000000..1f4b5b94fe6b
--- /dev/null
+++ b/Packs/PAN-OS/ReleaseNotes/2_2_7.md
@@ -0,0 +1,11 @@
+
+#### Scripts
+
+##### PanoramaSecurityPolicyMatchWrapper
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### PanoramaCVECoverage
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/PAN-OS/ReleaseNotes/2_2_8.md b/Packs/PAN-OS/ReleaseNotes/2_2_8.md
new file mode 100644
index 000000000000..30794752b517
--- /dev/null
+++ b/Packs/PAN-OS/ReleaseNotes/2_2_8.md
@@ -0,0 +1,7 @@
+
+#### Scripts
+
+##### PanoramaSecurityPolicyMatchWrapper
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
diff --git a/Packs/PAN-OS/Scripts/PanoramaCVECoverage/PanoramaCVECoverage.yml b/Packs/PAN-OS/Scripts/PanoramaCVECoverage/PanoramaCVECoverage.yml
index 2d6a799e3eb1..427a4b3b0884 100644
--- a/Packs/PAN-OS/Scripts/PanoramaCVECoverage/PanoramaCVECoverage.yml
+++ b/Packs/PAN-OS/Scripts/PanoramaCVECoverage/PanoramaCVECoverage.yml
@@ -43,7 +43,7 @@ outputs:
type: string
scripttarget: 0
subtype: python3
-dockerimage: demisto/python3:3.10.13.74666
+dockerimage: demisto/python3:3.11.10.113941
runas: DBotWeakRole
fromversion: 5.0.0
tests:
diff --git a/Packs/PAN-OS/Scripts/PanoramaSecurityPolicyMatchWrapper/PanoramaSecurityPolicyMatchWrapper.yml b/Packs/PAN-OS/Scripts/PanoramaSecurityPolicyMatchWrapper/PanoramaSecurityPolicyMatchWrapper.yml
index dfeae7667b50..680d3bdf2b20 100644
--- a/Packs/PAN-OS/Scripts/PanoramaSecurityPolicyMatchWrapper/PanoramaSecurityPolicyMatchWrapper.yml
+++ b/Packs/PAN-OS/Scripts/PanoramaSecurityPolicyMatchWrapper/PanoramaSecurityPolicyMatchWrapper.yml
@@ -63,7 +63,7 @@ script: '-'
timeout: '0'
type: python
subtype: python3
-dockerimage: demisto/python3:3.10.13.74666
+dockerimage: demisto/python3:3.11.10.115186
fromversion: 6.1.0
tests:
- No tests (auto formatted)
diff --git a/Packs/PAN-OS/pack_metadata.json b/Packs/PAN-OS/pack_metadata.json
index 0803d3d5719d..92d7b450b6f2 100644
--- a/Packs/PAN-OS/pack_metadata.json
+++ b/Packs/PAN-OS/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "PAN-OS by Palo Alto Networks",
"description": "Manage Palo Alto Networks Firewall and Panorama. Use this pack to manage Prisma Access through Panorama. For more information see Panorama documentation.",
"support": "xsoar",
- "currentVersion": "2.2.6",
+ "currentVersion": "2.2.8",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/PANWComprehensiveInvestigation/ReleaseNotes/1_3_25.md b/Packs/PANWComprehensiveInvestigation/ReleaseNotes/1_3_25.md
new file mode 100644
index 000000000000..7a42e523655d
--- /dev/null
+++ b/Packs/PANWComprehensiveInvestigation/ReleaseNotes/1_3_25.md
@@ -0,0 +1,7 @@
+
+#### Scripts
+
+##### PanwIndicatorCreateQueries
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/PANWComprehensiveInvestigation/ReleaseNotes/1_3_26.md b/Packs/PANWComprehensiveInvestigation/ReleaseNotes/1_3_26.md
new file mode 100644
index 000000000000..2d55183c2bc8
--- /dev/null
+++ b/Packs/PANWComprehensiveInvestigation/ReleaseNotes/1_3_26.md
@@ -0,0 +1,7 @@
+
+#### Scripts
+
+##### PanwIndicatorCreateQueries
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
diff --git a/Packs/PANWComprehensiveInvestigation/Scripts/PanwIndicatorCreateQueries/PanwIndicatorCreateQueries.yml b/Packs/PANWComprehensiveInvestigation/Scripts/PanwIndicatorCreateQueries/PanwIndicatorCreateQueries.yml
index 5655c770ec7a..6a7ef3c90fcf 100644
--- a/Packs/PANWComprehensiveInvestigation/Scripts/PanwIndicatorCreateQueries/PanwIndicatorCreateQueries.yml
+++ b/Packs/PANWComprehensiveInvestigation/Scripts/PanwIndicatorCreateQueries/PanwIndicatorCreateQueries.yml
@@ -64,5 +64,5 @@ type: python
tests:
- No test
subtype: python3
-dockerimage: demisto/python3:3.10.12.66339
+dockerimage: demisto/python3:3.11.10.115186
fromversion: 5.0.0
diff --git a/Packs/PANWComprehensiveInvestigation/pack_metadata.json b/Packs/PANWComprehensiveInvestigation/pack_metadata.json
index e861c812bc26..20bdde007e34 100644
--- a/Packs/PANWComprehensiveInvestigation/pack_metadata.json
+++ b/Packs/PANWComprehensiveInvestigation/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Comprehensive Investigation by Palo Alto Networks",
"description": "Are you a Palo Alto Networks customer? We have just the content pack to help you orchestrate incident response across Palo Alto Networks products.",
"support": "xsoar",
- "currentVersion": "1.3.24",
+ "currentVersion": "1.3.26",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/PaloAltoNetworksAIOps/Integrations/PaloAltoNetworksAIOps/PaloAltoNetworksAIOps.yml b/Packs/PaloAltoNetworksAIOps/Integrations/PaloAltoNetworksAIOps/PaloAltoNetworksAIOps.yml
index 14e15469b50a..8d89e2cb8ee9 100644
--- a/Packs/PaloAltoNetworksAIOps/Integrations/PaloAltoNetworksAIOps/PaloAltoNetworksAIOps.yml
+++ b/Packs/PaloAltoNetworksAIOps/Integrations/PaloAltoNetworksAIOps/PaloAltoNetworksAIOps.yml
@@ -111,7 +111,7 @@ script:
script: '-'
type: python
subtype: python3
- dockerimage: demisto/python3:3.10.14.91134
+ dockerimage: demisto/python3:3.11.10.113941
fromversion: 6.9.0
tests:
- PaloAltoNetworksAIOps-Test
diff --git a/Packs/PaloAltoNetworksAIOps/ReleaseNotes/1_0_1.md b/Packs/PaloAltoNetworksAIOps/ReleaseNotes/1_0_1.md
new file mode 100644
index 000000000000..0b040b708eb4
--- /dev/null
+++ b/Packs/PaloAltoNetworksAIOps/ReleaseNotes/1_0_1.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Palo Alto Networks AIOps
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/PaloAltoNetworksAIOps/pack_metadata.json b/Packs/PaloAltoNetworksAIOps/pack_metadata.json
index 6f9b63a1d375..5af4b5da7235 100644
--- a/Packs/PaloAltoNetworksAIOps/pack_metadata.json
+++ b/Packs/PaloAltoNetworksAIOps/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Palo Alto Networks AIOps",
"description": "Palo Alto Networks Best Practice Assessment (BPA) analyzes NGFW and Panorama configurations.",
"support": "xsoar",
- "currentVersion": "1.0.0",
+ "currentVersion": "1.0.1",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/PaloAltoNetworksAutomaticSLR_Community/Integrations/PaloAltoNetworksAutomaticSLRCommunity/PaloAltoNetworksAutomaticSLRCommunity.yml b/Packs/PaloAltoNetworksAutomaticSLR_Community/Integrations/PaloAltoNetworksAutomaticSLRCommunity/PaloAltoNetworksAutomaticSLRCommunity.yml
index 92c4e4e676bc..964562cda3c7 100644
--- a/Packs/PaloAltoNetworksAutomaticSLR_Community/Integrations/PaloAltoNetworksAutomaticSLRCommunity/PaloAltoNetworksAutomaticSLRCommunity.yml
+++ b/Packs/PaloAltoNetworksAutomaticSLR_Community/Integrations/PaloAltoNetworksAutomaticSLRCommunity/PaloAltoNetworksAutomaticSLRCommunity.yml
@@ -298,7 +298,7 @@ script:
type: boolean
description: This command will dump all the non-sensitive parameters to the context, useful for debugging purposes.
execution: true
- dockerimage: demisto/xml-feed:1.0.0.75078
+ dockerimage: demisto/xml-feed:1.0.0.116765
subtype: python3
fromversion: 5.0.0
tests:
diff --git a/Packs/PaloAltoNetworksAutomaticSLR_Community/ReleaseNotes/1_0_4.md b/Packs/PaloAltoNetworksAutomaticSLR_Community/ReleaseNotes/1_0_4.md
new file mode 100644
index 000000000000..9f470d371198
--- /dev/null
+++ b/Packs/PaloAltoNetworksAutomaticSLR_Community/ReleaseNotes/1_0_4.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Palo Alto Networks Automatic SLR
+
+
+- Updated the Docker image to: *demisto/xml-feed:1.0.0.116765*.
diff --git a/Packs/PaloAltoNetworksAutomaticSLR_Community/pack_metadata.json b/Packs/PaloAltoNetworksAutomaticSLR_Community/pack_metadata.json
index c92edf265300..b4ccbf52e458 100644
--- a/Packs/PaloAltoNetworksAutomaticSLR_Community/pack_metadata.json
+++ b/Packs/PaloAltoNetworksAutomaticSLR_Community/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Automatic SLR by Palo Alto Networks",
"description": "Automate the generation of Palo Alto Networks Security Lifecycle Reviews (SLR's) using XSOAR.",
"support": "community",
- "currentVersion": "1.0.3",
+ "currentVersion": "1.0.4",
"author": "Matt Smith",
"url": "",
"email": "",
diff --git a/Packs/PaloAltoNetworks_IoT/Integrations/PaloAltoNetworks_IoT/PaloAltoNetworks_IoT.py b/Packs/PaloAltoNetworks_IoT/Integrations/PaloAltoNetworks_IoT/PaloAltoNetworks_IoT.py
index 8af68b37732f..729b6e35974f 100644
--- a/Packs/PaloAltoNetworks_IoT/Integrations/PaloAltoNetworks_IoT/PaloAltoNetworks_IoT.py
+++ b/Packs/PaloAltoNetworks_IoT/Integrations/PaloAltoNetworks_IoT/PaloAltoNetworks_IoT.py
@@ -1,7 +1,7 @@
import demistomock as demisto # noqa: F401
from CommonServerPython import * # noqa: F401
import dateparser
-from datetime import datetime, timezone
+from datetime import datetime, UTC
import json
import time
from typing import Any
@@ -221,7 +221,7 @@ def arg_to_timestamp(arg: Any, arg_name: str, required: bool = False) -> int | N
# if d is None it means dateparser failed to parse it
raise ValueError(f'Invalid date: {arg}')
- return int(date.replace(tzinfo=timezone.utc).timestamp())
+ return int(date.replace(tzinfo=UTC).timestamp())
if isinstance(arg, int | float):
# Convert to int if the input is a float
return int(arg)
@@ -476,7 +476,7 @@ def fetch_incidents(client, last_run, is_test=False):
for alert in alerts:
alert_date_epoch = datetime.strptime(
- alert['date'], "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc).timestamp()
+ alert['date'], "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=UTC).timestamp()
alert_id = alert["zb_ticketid"].replace("alert-", "")
incident = {
'name': alert['name'],
@@ -533,7 +533,7 @@ def fetch_incidents(client, last_run, is_test=False):
detected_date = detected_date[0]
vuln_date_epoch = datetime.strptime(
- detected_date, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc).timestamp()
+ detected_date, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=UTC).timestamp()
vuln_name_encoded = vuln['vulnerability_name'].replace(' ', '+')
incident = {
'name': vuln['name'],
@@ -586,7 +586,7 @@ def main():
required=False
)
if ff:
- first_fetch = datetime.fromtimestamp(ff).astimezone(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
+ first_fetch = datetime.fromtimestamp(ff).astimezone(UTC).strftime('%Y-%m-%dT%H:%M:%SZ')
except ValueError as e:
return_error(f'First fetch time is in a wrong format. Error: {str(e)}')
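
The `timezone.utc` → `UTC` swaps above rely on `datetime.UTC`, an alias of `timezone.utc` added in Python 3.11, which matches the new `demisto/python3:3.11.*` image. A minimal sketch of the parsing pattern the integration uses (the timestamp value is illustrative):

```python
from datetime import datetime, UTC  # UTC alias added in Python 3.11

# Parse an API date string into an epoch value, as fetch_incidents does.
alert_date = "2024-01-15T09:30:00.000Z"  # illustrative value
epoch = datetime.strptime(
    alert_date, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=UTC).timestamp()
print(int(epoch))  # seconds since the Unix epoch
```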
diff --git a/Packs/PaloAltoNetworks_IoT/Integrations/PaloAltoNetworks_IoT/PaloAltoNetworks_IoT.yml b/Packs/PaloAltoNetworks_IoT/Integrations/PaloAltoNetworks_IoT/PaloAltoNetworks_IoT.yml
index 4e6aa666e915..2f613f70471f 100644
--- a/Packs/PaloAltoNetworks_IoT/Integrations/PaloAltoNetworks_IoT/PaloAltoNetworks_IoT.yml
+++ b/Packs/PaloAltoNetworks_IoT/Integrations/PaloAltoNetworks_IoT/PaloAltoNetworks_IoT.yml
@@ -395,7 +395,7 @@ script:
script: '-'
type: python
subtype: python3
- dockerimage: demisto/python3:3.10.14.95956
+ dockerimage: demisto/python3:3.11.10.116439
fromversion: 5.0.0
tests:
- PaloAltoNetworks_IoT-Test
diff --git a/Packs/PaloAltoNetworks_IoT/ReleaseNotes/1_0_35.md b/Packs/PaloAltoNetworks_IoT/ReleaseNotes/1_0_35.md
new file mode 100644
index 000000000000..5b92b0255276
--- /dev/null
+++ b/Packs/PaloAltoNetworks_IoT/ReleaseNotes/1_0_35.md
@@ -0,0 +1,19 @@
+
+#### Scripts
+
+##### iot-security-vuln-post-processing
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### iot-security-check-servicenow
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### iot-security-alert-post-processing
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### iot-security-get-raci
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/PaloAltoNetworks_IoT/ReleaseNotes/1_0_36.md b/Packs/PaloAltoNetworks_IoT/ReleaseNotes/1_0_36.md
new file mode 100644
index 000000000000..3277ed27babb
--- /dev/null
+++ b/Packs/PaloAltoNetworks_IoT/ReleaseNotes/1_0_36.md
@@ -0,0 +1,15 @@
+
+#### Scripts
+
+##### iot-security-alert-post-processing
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+##### iot-security-get-raci
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+##### iot-security-check-servicenow
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
diff --git a/Packs/PaloAltoNetworks_IoT/ReleaseNotes/1_0_37.md b/Packs/PaloAltoNetworks_IoT/ReleaseNotes/1_0_37.md
new file mode 100644
index 000000000000..ac6a9984e67e
--- /dev/null
+++ b/Packs/PaloAltoNetworks_IoT/ReleaseNotes/1_0_37.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Palo Alto Networks IoT
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
diff --git a/Packs/PaloAltoNetworks_IoT/Scripts/iot_alert_post_processing/iot_alert_post_processing.yml b/Packs/PaloAltoNetworks_IoT/Scripts/iot_alert_post_processing/iot_alert_post_processing.yml
index ea5e4774dd3d..70a1cf09f09a 100644
--- a/Packs/PaloAltoNetworks_IoT/Scripts/iot_alert_post_processing/iot_alert_post_processing.yml
+++ b/Packs/PaloAltoNetworks_IoT/Scripts/iot_alert_post_processing/iot_alert_post_processing.yml
@@ -16,5 +16,5 @@ tags:
timeout: '0'
type: python
subtype: python3
-dockerimage: demisto/python3:3.10.13.86272
+dockerimage: demisto/python3:3.11.10.115186
fromversion: 5.0.0
diff --git a/Packs/PaloAltoNetworks_IoT/Scripts/iot_check_servicenow/iot_check_servicenow.yml b/Packs/PaloAltoNetworks_IoT/Scripts/iot_check_servicenow/iot_check_servicenow.yml
index f85a4d92805c..32b684d214ac 100644
--- a/Packs/PaloAltoNetworks_IoT/Scripts/iot_check_servicenow/iot_check_servicenow.yml
+++ b/Packs/PaloAltoNetworks_IoT/Scripts/iot_check_servicenow/iot_check_servicenow.yml
@@ -9,6 +9,6 @@ tags:
timeout: '0'
type: python
subtype: python3
-dockerimage: demisto/python3:3.10.13.86272
+dockerimage: demisto/python3:3.11.10.115186
runas: DBotRole
fromversion: 5.0.0
diff --git a/Packs/PaloAltoNetworks_IoT/Scripts/iot_get_raci/iot_get_raci.yml b/Packs/PaloAltoNetworks_IoT/Scripts/iot_get_raci/iot_get_raci.yml
index ed33f8a1bddd..76161ab3dc3c 100644
--- a/Packs/PaloAltoNetworks_IoT/Scripts/iot_get_raci/iot_get_raci.yml
+++ b/Packs/PaloAltoNetworks_IoT/Scripts/iot_get_raci/iot_get_raci.yml
@@ -35,7 +35,7 @@ tags:
timeout: '0'
type: python
subtype: python3
-dockerimage: demisto/python3:3.10.13.83255
+dockerimage: demisto/python3:3.11.10.115186
fromversion: 5.0.0
tests:
- No tests (auto formatted)
diff --git a/Packs/PaloAltoNetworks_IoT/Scripts/iot_vuln_post_processing/iot_vuln_post_processing.yml b/Packs/PaloAltoNetworks_IoT/Scripts/iot_vuln_post_processing/iot_vuln_post_processing.yml
index d8f088e387cb..bca937e57d3a 100644
--- a/Packs/PaloAltoNetworks_IoT/Scripts/iot_vuln_post_processing/iot_vuln_post_processing.yml
+++ b/Packs/PaloAltoNetworks_IoT/Scripts/iot_vuln_post_processing/iot_vuln_post_processing.yml
@@ -11,5 +11,5 @@ tags:
timeout: '0'
type: python
subtype: python3
-dockerimage: demisto/python3:3.10.13.86272
+dockerimage: demisto/python3:3.11.10.113941
fromversion: 5.0.0
diff --git a/Packs/PaloAltoNetworks_IoT/pack_metadata.json b/Packs/PaloAltoNetworks_IoT/pack_metadata.json
index af170597c237..799cd8a7ddbd 100644
--- a/Packs/PaloAltoNetworks_IoT/pack_metadata.json
+++ b/Packs/PaloAltoNetworks_IoT/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "IoT by Palo Alto Networks",
"description": "Palo Alto Networks IoT",
"support": "xsoar",
- "currentVersion": "1.0.34",
+ "currentVersion": "1.0.37",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/Palo_Alto_Networks_WildFire/Integrations/Palo_Alto_Networks_WildFire_v2/Palo_Alto_Networks_WildFire_v2.yml b/Packs/Palo_Alto_Networks_WildFire/Integrations/Palo_Alto_Networks_WildFire_v2/Palo_Alto_Networks_WildFire_v2.yml
index ec3c97c4d458..af6ab0b805b8 100644
--- a/Packs/Palo_Alto_Networks_WildFire/Integrations/Palo_Alto_Networks_WildFire_v2/Palo_Alto_Networks_WildFire_v2.yml
+++ b/Packs/Palo_Alto_Networks_WildFire/Integrations/Palo_Alto_Networks_WildFire_v2/Palo_Alto_Networks_WildFire_v2.yml
@@ -1264,7 +1264,7 @@ script:
- contextPath: InfoFile.Type
description: The web artifacts file type.
type: string
- dockerimage: demisto/python3:3.10.14.99865
+ dockerimage: demisto/python3:3.11.10.113941
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/Palo_Alto_Networks_WildFire/Integrations/WildFireReports/WildFireReports.yml b/Packs/Palo_Alto_Networks_WildFire/Integrations/WildFireReports/WildFireReports.yml
index 0b812ff66b3e..f51e54054d5f 100644
--- a/Packs/Palo_Alto_Networks_WildFire/Integrations/WildFireReports/WildFireReports.yml
+++ b/Packs/Palo_Alto_Networks_WildFire/Integrations/WildFireReports/WildFireReports.yml
@@ -53,7 +53,7 @@ script:
script: '-'
type: python
subtype: python3
- dockerimage: demisto/python3:3.10.12.68714
+ dockerimage: demisto/python3:3.11.10.115186
fromversion: 6.5.0
tests:
- No tests (auto formatted)
diff --git a/Packs/Palo_Alto_Networks_WildFire/ReleaseNotes/2_1_52.md b/Packs/Palo_Alto_Networks_WildFire/ReleaseNotes/2_1_52.md
new file mode 100644
index 000000000000..e4cf71020776
--- /dev/null
+++ b/Packs/Palo_Alto_Networks_WildFire/ReleaseNotes/2_1_52.md
@@ -0,0 +1,11 @@
+
+#### Integrations
+
+##### Palo Alto Networks WildFire Reports
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### Palo Alto Networks WildFire v2
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/Palo_Alto_Networks_WildFire/ReleaseNotes/2_1_53.md b/Packs/Palo_Alto_Networks_WildFire/ReleaseNotes/2_1_53.md
new file mode 100644
index 000000000000..eb4ff006b138
--- /dev/null
+++ b/Packs/Palo_Alto_Networks_WildFire/ReleaseNotes/2_1_53.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Palo Alto Networks WildFire Reports
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
diff --git a/Packs/Palo_Alto_Networks_WildFire/pack_metadata.json b/Packs/Palo_Alto_Networks_WildFire/pack_metadata.json
index f0eee6d1090c..3f4515ea4e49 100644
--- a/Packs/Palo_Alto_Networks_WildFire/pack_metadata.json
+++ b/Packs/Palo_Alto_Networks_WildFire/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "WildFire by Palo Alto Networks",
"description": "Perform malware dynamic analysis",
"support": "xsoar",
- "currentVersion": "2.1.51",
+ "currentVersion": "2.1.53",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/PcapAnalysis/ReleaseNotes/2_4_10.md b/Packs/PcapAnalysis/ReleaseNotes/2_4_10.md
new file mode 100644
index 000000000000..fe015122e751
--- /dev/null
+++ b/Packs/PcapAnalysis/ReleaseNotes/2_4_10.md
@@ -0,0 +1,22 @@
+
+#### Scripts
+
+##### PcapFileExtractor
+
+
+##### PcapConvert
+
+
+- Updated the Docker image to: *demisto/pcap-miner:1.0.0.115465*.
+##### PcapFileExtractStreams
+
+
+- Updated the Docker image to: *demisto/pcap-miner:1.0.0.115465*.
+##### PcapExtractStreams
+
+
+- Updated the Docker image to: *demisto/pcap-miner:1.0.0.115465*.
+##### PcapMinerV2
+
+
+- Updated the Docker image to: *demisto/pcap-miner:1.0.0.115465*.
diff --git a/Packs/PcapAnalysis/Scripts/PcapConvert/PcapConvert.yml b/Packs/PcapAnalysis/Scripts/PcapConvert/PcapConvert.yml
index 425954967833..1918db8612c5 100644
--- a/Packs/PcapAnalysis/Scripts/PcapConvert/PcapConvert.yml
+++ b/Packs/PcapAnalysis/Scripts/PcapConvert/PcapConvert.yml
@@ -25,7 +25,7 @@ commonfields:
contentitemexportablefields:
contentitemfields:
fromServerVersion: ''
-dockerimage: demisto/pcap-miner:1.0.0.80182
+dockerimage: demisto/pcap-miner:1.0.0.115465
enabled: true
name: PcapConvert
runas: DBotWeakRole
diff --git a/Packs/PcapAnalysis/Scripts/PcapExtractStreams/PcapExtractStreams.yml b/Packs/PcapAnalysis/Scripts/PcapExtractStreams/PcapExtractStreams.yml
index 7e6abe24c5ff..de4456759e7a 100644
--- a/Packs/PcapAnalysis/Scripts/PcapExtractStreams/PcapExtractStreams.yml
+++ b/Packs/PcapAnalysis/Scripts/PcapExtractStreams/PcapExtractStreams.yml
@@ -74,7 +74,7 @@ commonfields:
contentitemexportablefields:
contentitemfields:
fromServerVersion: ''
-dockerimage: demisto/pcap-miner:1.0.0.96695
+dockerimage: demisto/pcap-miner:1.0.0.115465
enabled: true
name: PcapExtractStreams
runas: DBotWeakRole
diff --git a/Packs/PcapAnalysis/Scripts/PcapFileExtractStreams/PcapFileExtractStreams.yml b/Packs/PcapAnalysis/Scripts/PcapFileExtractStreams/PcapFileExtractStreams.yml
index 49354caac349..cfd2e710f1c7 100644
--- a/Packs/PcapAnalysis/Scripts/PcapFileExtractStreams/PcapFileExtractStreams.yml
+++ b/Packs/PcapAnalysis/Scripts/PcapFileExtractStreams/PcapFileExtractStreams.yml
@@ -36,7 +36,7 @@ commonfields:
contentitemexportablefields:
contentitemfields:
fromServerVersion: ''
-dockerimage: demisto/pcap-miner:1.0.0.78355
+dockerimage: demisto/pcap-miner:1.0.0.115465
enabled: true
name: PcapFileExtractStreams
outputs:
diff --git a/Packs/PcapAnalysis/Scripts/PcapFileExtractor/PcapFileExtractor.yml b/Packs/PcapAnalysis/Scripts/PcapFileExtractor/PcapFileExtractor.yml
index 1cdb852bd244..f2d70959ae7c 100644
--- a/Packs/PcapAnalysis/Scripts/PcapFileExtractor/PcapFileExtractor.yml
+++ b/Packs/PcapAnalysis/Scripts/PcapFileExtractor/PcapFileExtractor.yml
@@ -91,3 +91,5 @@ dockerimage: demisto/pcap-miner:1.0.0.90440
tests:
- No tests (auto formatted)
fromversion: 5.0.0
+
+
diff --git a/Packs/PcapAnalysis/Scripts/PcapMinerV2/PcapMinerV2.yml b/Packs/PcapAnalysis/Scripts/PcapMinerV2/PcapMinerV2.yml
index e4a862ec6007..5107b1afa9fd 100644
--- a/Packs/PcapAnalysis/Scripts/PcapMinerV2/PcapMinerV2.yml
+++ b/Packs/PcapAnalysis/Scripts/PcapMinerV2/PcapMinerV2.yml
@@ -366,7 +366,7 @@ tags:
- Utility
timeout: '0'
type: python
-dockerimage: demisto/pcap-miner:1.0.0.90440
+dockerimage: demisto/pcap-miner:1.0.0.115465
runas: DBotWeakRole
runonce: true
tests:
diff --git a/Packs/PcapAnalysis/pack_metadata.json b/Packs/PcapAnalysis/pack_metadata.json
index 1e0672fa491c..6151a142d84e 100644
--- a/Packs/PcapAnalysis/pack_metadata.json
+++ b/Packs/PcapAnalysis/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "PCAP Analysis",
"description": "Don't miss out on critical forensic data! This Content Pack automates PCAP file analysis such as parsing, searching, extracting indicators, and more.",
"support": "xsoar",
- "currentVersion": "2.4.9",
+ "currentVersion": "2.4.10",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/Pcysys/Integrations/Pcysys/Pcysys.yml b/Packs/Pcysys/Integrations/Pcysys/Pcysys.yml
index b1ac811fc02b..8e99d5cef584 100644
--- a/Packs/Pcysys/Integrations/Pcysys/Pcysys.yml
+++ b/Packs/Pcysys/Integrations/Pcysys/Pcysys.yml
@@ -121,7 +121,7 @@ script:
- contextPath: Pentera.TaskRun.FullActionReport.Status
description: 'The status of the action. Can be: success, failed, canceled, no_results.'
type: String
- dockerimage: demisto/auth-utils:1.0.0.76157
+ dockerimage: demisto/auth-utils:1.0.0.115527
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/Pcysys/ReleaseNotes/1_3_9.md b/Packs/Pcysys/ReleaseNotes/1_3_9.md
new file mode 100644
index 000000000000..77438f2f9fc4
--- /dev/null
+++ b/Packs/Pcysys/ReleaseNotes/1_3_9.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Pentera
+
+
+- Updated the Docker image to: *demisto/auth-utils:1.0.0.115527*.
diff --git a/Packs/Pcysys/pack_metadata.json b/Packs/Pcysys/pack_metadata.json
index 011b06199f40..46292ebb47e9 100644
--- a/Packs/Pcysys/pack_metadata.json
+++ b/Packs/Pcysys/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Pentera",
"description": "Automate remediation actions based on Pentera, the Automated Security Validation Platform, proactively exposing high-risk vulnerabilities.",
"support": "partner",
- "currentVersion": "1.3.8",
+ "currentVersion": "1.3.9",
"author": "Pentera",
"url": "https://pentera.io",
"email": "support@pentera.io",
diff --git a/Packs/PerceptionPoint/Integrations/PerceptionPoint/PerceptionPoint.yml b/Packs/PerceptionPoint/Integrations/PerceptionPoint/PerceptionPoint.yml
index 40f482ba4a84..ec988410894a 100644
--- a/Packs/PerceptionPoint/Integrations/PerceptionPoint/PerceptionPoint.yml
+++ b/Packs/PerceptionPoint/Integrations/PerceptionPoint/PerceptionPoint.yml
@@ -59,7 +59,7 @@ script:
description: The scan ID of the released email.
type: number
description: Re-sends an email that was falsely quarantined, using the scan ID.
- dockerimage: demisto/python3:3.10.13.72123
+ dockerimage: demisto/python3:3.11.10.116439
isfetch: true
tests:
- PerceptionPoint Test
diff --git a/Packs/PerceptionPoint/ReleaseNotes/1_0_11.md b/Packs/PerceptionPoint/ReleaseNotes/1_0_11.md
new file mode 100644
index 000000000000..88504543bed1
--- /dev/null
+++ b/Packs/PerceptionPoint/ReleaseNotes/1_0_11.md
@@ -0,0 +1,10 @@
+
+#### Integrations
+
+##### PerceptionPoint
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+
diff --git a/Packs/PerceptionPoint/pack_metadata.json b/Packs/PerceptionPoint/pack_metadata.json
index 80ed5b3b6402..87e9165f4928 100644
--- a/Packs/PerceptionPoint/pack_metadata.json
+++ b/Packs/PerceptionPoint/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Perception Point",
"description": "Loads incidents from Perception Point and releases falsely quarantined emails.",
"support": "partner",
- "currentVersion": "1.0.10",
+ "currentVersion": "1.0.11",
"author": "Perception Point",
"url": "",
"email": "support@perception-point.io",
diff --git a/Packs/PerimeterX/Integrations/BotDefender/BotDefender.yml b/Packs/PerimeterX/Integrations/BotDefender/BotDefender.yml
index 7cf6b513d409..c5c959ca7f5b 100644
--- a/Packs/PerimeterX/Integrations/BotDefender/BotDefender.yml
+++ b/Packs/PerimeterX/Integrations/BotDefender/BotDefender.yml
@@ -105,6 +105,6 @@ script:
script: '-'
type: python
subtype: python3
- dockerimage: demisto/python3:3.10.13.74666
+ dockerimage: demisto/python3:3.11.10.116439
tests:
- No tests (auto formatted)
diff --git a/Packs/PerimeterX/ReleaseNotes/1_0_3.md b/Packs/PerimeterX/ReleaseNotes/1_0_3.md
new file mode 100644
index 000000000000..36e8be61363c
--- /dev/null
+++ b/Packs/PerimeterX/ReleaseNotes/1_0_3.md
@@ -0,0 +1,8 @@
+
+#### Integrations
+
+##### PerimeterX BotDefender
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
diff --git a/Packs/PerimeterX/pack_metadata.json b/Packs/PerimeterX/pack_metadata.json
index 8be1c4e898b2..d6dc28bd3bfd 100644
--- a/Packs/PerimeterX/pack_metadata.json
+++ b/Packs/PerimeterX/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "PerimeterX",
"description": "PerimeterX integration with Cortex XSOAR",
"support": "partner",
- "currentVersion": "1.0.2",
+ "currentVersion": "1.0.3",
"author": "PerimeterX",
"url": "http://www.perimeterx.com",
"email": "",
diff --git a/Packs/PhishLabs/.pack-ignore b/Packs/PhishLabs/.pack-ignore
index 92594dafb500..2165a5e62343 100644
--- a/Packs/PhishLabs/.pack-ignore
+++ b/Packs/PhishLabs/.pack-ignore
@@ -5,7 +5,7 @@ ignore=IN126,BA108,BA109
ignore=IN126,BA108,BA109
[file:PhishLabsIOC.yml]
-ignore=IN126
+ignore=IN126,RM106
[file:README.md]
-ignore=RM104
\ No newline at end of file
+ignore=RM104,RM106
\ No newline at end of file
diff --git a/Packs/PhishLabs/Integrations/PhishLabsIOC/PhishLabsIOC.py b/Packs/PhishLabs/Integrations/PhishLabsIOC/PhishLabsIOC.py
index dbf4726bba21..f2262c60ed98 100644
--- a/Packs/PhishLabs/Integrations/PhishLabsIOC/PhishLabsIOC.py
+++ b/Packs/PhishLabs/Integrations/PhishLabsIOC/PhishLabsIOC.py
@@ -5,7 +5,7 @@
import json
import requests
-from typing import Callable
+from collections.abc import Callable
# Disable insecure warnings
requests.packages.urllib3.disable_warnings() # type: ignore
@@ -61,7 +61,7 @@ def http_request(self, method: str, path: str, params: dict = None, data: dict =
return return_error(ssl_error)
except (requests.exceptions.ConnectionError, requests.exceptions.Timeout,
requests.exceptions.TooManyRedirects, requests.exceptions.RequestException) as e:
- connection_error = 'Could not connect to PhishLabs IOC Feed: {}'.format(str(e))
+ connection_error = f'Could not connect to PhishLabs IOC Feed: {str(e)}'
if RAISE_EXCEPTION_ON_ERROR:
raise Exception(connection_error)
return return_error(connection_error)
@@ -74,7 +74,7 @@ def http_request(self, method: str, path: str, params: dict = None, data: dict =
message = error_json.get('error', '')
except Exception:
pass
- error_message: str = ('Error in API call to PhishLabs IOC API, status code: {}'.format(status))
+ error_message: str = (f'Error in API call to PhishLabs IOC API, status code: {status}')
if status == 401:
error_message = 'Could not connect to PhishLabs IOC Feed: Wrong credentials'
if message:
@@ -86,7 +86,7 @@ def http_request(self, method: str, path: str, params: dict = None, data: dict =
try:
return res.json()
except Exception:
- error_message = 'Failed parsing the response from PhishLabs IOC API: {!r}'.format(res.content)
+ error_message = f'Failed parsing the response from PhishLabs IOC API: {res.content!r}'
if RAISE_EXCEPTION_ON_ERROR:
raise Exception(error_message)
else:
@@ -111,22 +111,20 @@ def populate_context(dbot_scores: list, domain_entries: list, file_entries: list
"""
context: dict = {}
if url_entries:
- context[outputPaths['url']] = createContext(list(map(lambda u: u[0], url_entries)))
- context['PhishLabs.URL(val.ID && val.ID === obj.ID)'] = createContext(list(map(lambda u: u[1], url_entries)),
+ context[outputPaths['url']] = createContext([u[0] for u in url_entries])
+ context['PhishLabs.URL(val.ID && val.ID === obj.ID)'] = createContext([u[1] for u in url_entries],
removeNull=True)
if domain_entries:
- context[outputPaths['domain']] = createContext(list(map(lambda d: d[0], domain_entries)))
- context['PhishLabs.Domain(val.ID && val.ID === obj.ID)'] = createContext(list(map(lambda d: d[1],
- domain_entries)),
+ context[outputPaths['domain']] = createContext([d[0] for d in domain_entries])
+ context['PhishLabs.Domain(val.ID && val.ID === obj.ID)'] = createContext([d[1] for d in domain_entries],
removeNull=True)
if file_entries:
- context[outputPaths['file']] = createContext(list(map(lambda f: f[0], file_entries)))
- context['PhishLabs.File(val.ID && val.ID === obj.ID)'] = createContext(list(map(lambda f: f[1], file_entries)),
+ context[outputPaths['file']] = createContext([f[0] for f in file_entries])
+ context['PhishLabs.File(val.ID && val.ID === obj.ID)'] = createContext([f[1] for f in file_entries],
removeNull=True)
if email_entries:
- context['Email'] = createContext(list(map(lambda e: e[0], email_entries)))
- context['PhishLabs.Email(val.ID && val.ID === obj.ID)'] = createContext(list(map(lambda e: e[1],
- email_entries)),
+ context['Email'] = createContext([e[0] for e in email_entries])
+ context['PhishLabs.Email(val.ID && val.ID === obj.ID)'] = createContext([e[1] for e in email_entries],
removeNull=True)
if dbot_scores:
context[outputPaths['dbotscore']] = dbot_scores
@@ -705,7 +703,7 @@ def main():
)
global RAISE_EXCEPTION_ON_ERROR
- LOG('Command being called is {}'.format(demisto.command()))
+ LOG(f'Command being called is {demisto.command()}')
handle_proxy()
command_dict = {
'test-module': test_module,
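
The refactor above makes two purely mechanical swaps: `str.format` calls become f-strings, and `list(map(lambda ...))` becomes list comprehensions. A small sketch of the equivalence, with a hypothetical entry shape since only fragments of the data appear in the hunk:

```python
# Hypothetical (indicator, raw_object) pairs, mirroring the entries lists.
url_entries = [("https://example.com/a", {"ID": "1"}),
               ("https://example.com/b", {"ID": "2"})]

# map/lambda and the comprehension build the same list; the comprehension
# is the idiomatic spelling and avoids the per-item lambda call.
assert list(map(lambda u: u[0], url_entries)) == [u[0] for u in url_entries]

# str.format and f-strings are likewise interchangeable here.
status = 401
assert 'status code: {}'.format(status) == f'status code: {status}'
```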
diff --git a/Packs/PhishLabs/Integrations/PhishLabsIOC/PhishLabsIOC.yml b/Packs/PhishLabs/Integrations/PhishLabsIOC/PhishLabsIOC.yml
index 9ee195827931..87b88ea2f973 100644
--- a/Packs/PhishLabs/Integrations/PhishLabsIOC/PhishLabsIOC.yml
+++ b/Packs/PhishLabs/Integrations/PhishLabsIOC/PhishLabsIOC.yml
@@ -266,7 +266,7 @@ script:
description: URL address.
type: String
- contextPath: PhishLabs.URL.CreatedAt
- description: URL creation time, in PhishLabs
+ description: URL creation time, in PhishLabs.
type: Date
- contextPath: PhishLabs.URL.UpdatedAt
description: URL update time, in PhishLabs.
@@ -293,7 +293,7 @@ script:
description: Description of the malicious domain.
type: String
- contextPath: PhishLabs.Domain.Name
- description: Domain name
+ description: Domain name.
type: String
- contextPath: PhishLabs.Domain.CreatedAt
description: Domain creation time, in PhishLabs.
@@ -406,7 +406,7 @@ script:
- contextPath: DBotScore.Score
description: The actual score.
type: number
- dockerimage: demisto/python3:3.10.13.86272
+ dockerimage: demisto/python3:3.11.10.115186
isfetch: true
runonce: false
script: '-'
diff --git a/Packs/PhishLabs/Integrations/PhishLabsIOC/README.md b/Packs/PhishLabs/Integrations/PhishLabsIOC/README.md
index f2cc2c366cf6..8f35d3116dcb 100644
--- a/Packs/PhishLabs/Integrations/PhishLabsIOC/README.md
+++ b/Packs/PhishLabs/Integrations/PhishLabsIOC/README.md
@@ -7,7 +7,7 @@
Use Cases
diff --git a/Packs/PhishLabs/ReleaseNotes/1_1_22.md b/Packs/PhishLabs/ReleaseNotes/1_1_22.md
new file mode 100644
index 000000000000..aa2a27c0fff5
--- /dev/null
+++ b/Packs/PhishLabs/ReleaseNotes/1_1_22.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### PhishLabs IOC
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+
diff --git a/Packs/PhishLabs/doc_files/58381182-d8408200-7fc2-11e9-8726-8056cab1feea.png b/Packs/PhishLabs/doc_files/58381182-d8408200-7fc2-11e9-8726-8056cab1feea.png
new file mode 100644
index 000000000000..2f7ad66e0801
Binary files /dev/null and b/Packs/PhishLabs/doc_files/58381182-d8408200-7fc2-11e9-8726-8056cab1feea.png differ
diff --git a/Packs/PhishLabs/pack_metadata.json b/Packs/PhishLabs/pack_metadata.json
index f6d747ae939d..3775fe0b38c2 100644
--- a/Packs/PhishLabs/pack_metadata.json
+++ b/Packs/PhishLabs/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "PhishLabs",
"description": "IOC information from PhishLabs.",
"support": "xsoar",
- "currentVersion": "1.1.21",
+ "currentVersion": "1.1.22",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/Phishing/ReleaseNotes/3_6_24.md b/Packs/Phishing/ReleaseNotes/3_6_24.md
new file mode 100644
index 000000000000..97df8a0b10d6
--- /dev/null
+++ b/Packs/Phishing/ReleaseNotes/3_6_24.md
@@ -0,0 +1,19 @@
+
+#### Scripts
+
+##### LinkToPhishingCampaign
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### DeleteReportedEmail
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### CheckEmailAuthenticity
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### GetBrandDeleteReportedEmail
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/Phishing/ReleaseNotes/3_6_25.md b/Packs/Phishing/ReleaseNotes/3_6_25.md
new file mode 100644
index 000000000000..998bf8457085
--- /dev/null
+++ b/Packs/Phishing/ReleaseNotes/3_6_25.md
@@ -0,0 +1,7 @@
+<~XSOAR_SAAS>
+#### Scripts
+
+##### LinkToPhishingCampaign
+
+- Fixed the incident links in the script result.
+</~XSOAR_SAAS>
diff --git a/Packs/Phishing/ReleaseNotes/3_6_26.md b/Packs/Phishing/ReleaseNotes/3_6_26.md
new file mode 100644
index 000000000000..963f0ad6ffb1
--- /dev/null
+++ b/Packs/Phishing/ReleaseNotes/3_6_26.md
@@ -0,0 +1,7 @@
+
+#### Scripts
+
+##### FindDuplicateEmailIncidents
+
+- Fixed an issue where the script's performance was degraded due to a bug in a Python library.
+- Updated the Docker image to: *demisto/sklearn:1.0.0.115728*.
diff --git a/Packs/Phishing/ReleaseNotes/3_6_27.md b/Packs/Phishing/ReleaseNotes/3_6_27.md
new file mode 100644
index 000000000000..e425169049d8
--- /dev/null
+++ b/Packs/Phishing/ReleaseNotes/3_6_27.md
@@ -0,0 +1,15 @@
+
+#### Scripts
+
+##### GetBrandDeleteReportedEmail
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+##### CheckEmailAuthenticity
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+##### DeleteReportedEmail
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
diff --git a/Packs/Phishing/Scripts/CheckEmailAuthenticity/CheckEmailAuthenticity.yml b/Packs/Phishing/Scripts/CheckEmailAuthenticity/CheckEmailAuthenticity.yml
index 4b46cf50ffd9..4ba3b580b6f1 100644
--- a/Packs/Phishing/Scripts/CheckEmailAuthenticity/CheckEmailAuthenticity.yml
+++ b/Packs/Phishing/Scripts/CheckEmailAuthenticity/CheckEmailAuthenticity.yml
@@ -222,6 +222,6 @@ tags:
timeout: '0'
type: python
subtype: python3
-dockerimage: demisto/python3:3.10.12.68300
+dockerimage: demisto/python3:3.11.10.115186
runas: DBotWeakRole
fromversion: 5.0.0
diff --git a/Packs/Phishing/Scripts/DeleteReportedEmail/DeleteReportedEmail.yml b/Packs/Phishing/Scripts/DeleteReportedEmail/DeleteReportedEmail.yml
index d8ce7d4275cf..b6a66e63c3cc 100644
--- a/Packs/Phishing/Scripts/DeleteReportedEmail/DeleteReportedEmail.yml
+++ b/Packs/Phishing/Scripts/DeleteReportedEmail/DeleteReportedEmail.yml
@@ -48,7 +48,7 @@ tags:
timeout: '0'
type: python
subtype: python3
-dockerimage: demisto/python3:3.10.13.87159
+dockerimage: demisto/python3:3.11.10.115186
fromversion: '6.1.0'
tests:
- No tests (auto formatted)
diff --git a/Packs/Phishing/Scripts/FindDuplicateEmailIncidents/FindDuplicateEmailIncidents.py b/Packs/Phishing/Scripts/FindDuplicateEmailIncidents/FindDuplicateEmailIncidents.py
index d29f617592e6..55c862f3ab5a 100644
--- a/Packs/Phishing/Scripts/FindDuplicateEmailIncidents/FindDuplicateEmailIncidents.py
+++ b/Packs/Phishing/Scripts/FindDuplicateEmailIncidents/FindDuplicateEmailIncidents.py
@@ -1,8 +1,13 @@
import demistomock as demisto # noqa: F401
from CommonServerPython import * # noqa: F401
-import dateutil # type: ignore
-
from CommonServerUserPython import *
+
+# Set the OpenMP thread count before importing OpenMP-backed libraries (e.g., sklearn).
+import os
+import multiprocessing
+os.environ['OMP_NUM_THREADS'] = str(multiprocessing.cpu_count()) # noqa
+
+import dateutil # type: ignore
import pandas as pd
from bs4 import BeautifulSoup
from sklearn.feature_extraction.text import CountVectorizer
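For context, the OpenMP runtime caches OMP_NUM_THREADS when it initializes, which typically happens as soon as an OpenMP-backed library such as scikit-learn is imported; setting the variable afterwards has no effect, hence the reordering above. A minimal standalone sketch of the pattern (the sample corpus is illustrative only):

import multiprocessing
import os

# Must be set before any OpenMP-backed import (sklearn, OpenMP builds of numpy, etc.).
os.environ["OMP_NUM_THREADS"] = str(multiprocessing.cpu_count())

from sklearn.feature_extraction.text import CountVectorizer  # noqa: E402

corpus = ["phishing email one", "phishing email two"]
matrix = CountVectorizer().fit_transform(corpus)
print(matrix.shape)  # (2, 4)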
diff --git a/Packs/Phishing/Scripts/FindDuplicateEmailIncidents/FindDuplicateEmailIncidents.yml b/Packs/Phishing/Scripts/FindDuplicateEmailIncidents/FindDuplicateEmailIncidents.yml
index bb5b4b6a395f..1ff06d1cf66f 100644
--- a/Packs/Phishing/Scripts/FindDuplicateEmailIncidents/FindDuplicateEmailIncidents.yml
+++ b/Packs/Phishing/Scripts/FindDuplicateEmailIncidents/FindDuplicateEmailIncidents.yml
@@ -109,7 +109,7 @@ tags:
- phishing
timeout: 600ns
type: python
-dockerimage: demisto/sklearn:1.0.0.105679
+dockerimage: demisto/sklearn:1.0.0.115728
tests:
- Detect & Manage Phishing Campaigns - Test
fromversion: 5.0.0
diff --git a/Packs/Phishing/Scripts/GetBrandDeleteReportedEmail/GetBrandDeleteReportedEmail.yml b/Packs/Phishing/Scripts/GetBrandDeleteReportedEmail/GetBrandDeleteReportedEmail.yml
index 7d086adba1b3..05fedbb99757 100644
--- a/Packs/Phishing/Scripts/GetBrandDeleteReportedEmail/GetBrandDeleteReportedEmail.yml
+++ b/Packs/Phishing/Scripts/GetBrandDeleteReportedEmail/GetBrandDeleteReportedEmail.yml
@@ -1,7 +1,7 @@
commonfields:
id: GetBrandDeleteReportedEmail
version: -1
-dockerimage: demisto/python3:3.10.12.68300
+dockerimage: demisto/python3:3.11.10.115186
enabled: true
name: GetBrandDeleteReportedEmail
runas: DBotWeakRole
diff --git a/Packs/Phishing/Scripts/LinkToPhishingCampaign/LinkToPhishingCampaign.py b/Packs/Phishing/Scripts/LinkToPhishingCampaign/LinkToPhishingCampaign.py
index da61f69e449c..8a7882f00b51 100644
--- a/Packs/Phishing/Scripts/LinkToPhishingCampaign/LinkToPhishingCampaign.py
+++ b/Packs/Phishing/Scripts/LinkToPhishingCampaign/LinkToPhishingCampaign.py
@@ -14,8 +14,10 @@
urls = demisto.demistoUrls()
server_url = urls.get('server', '')
if campaign_incident_id and campaign_incident_id != "None":
+ prefix = 'Custom' if is_xsoar_saas() else '#/Custom'
+ related_url = f"{server_url}/{prefix}/vvoh19d1ue/{campaign_incident_id}"
html = f"""
"""
else:
html = f"""
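For reference, the added prefix logic reflects the two incident-URL schemes the script must support: on XSOAR 8 (SaaS) custom pages live under /Custom/..., while XSOAR 6/7 routes them through the hash fragment as #/Custom/.... A minimal sketch of the link construction, reusing the page ID from the diff and treating it as an opaque value (the example hostname and incident ID are illustrative):

def build_campaign_link(server_url: str, page_id: str, incident_id: str, is_saas: bool) -> str:
    # XSOAR 8 (SaaS) uses a plain path; XSOAR 6/7 uses the '#'-fragment route.
    prefix = "Custom" if is_saas else "#/Custom"
    return f"{server_url}/{prefix}/{page_id}/{incident_id}"

# build_campaign_link("https://xsoar.example.com", "vvoh19d1ue", "1234", is_saas=True)
# -> "https://xsoar.example.com/Custom/vvoh19d1ue/1234"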
diff --git a/Packs/QRadar/Integrations/QRadar/README.md b/Packs/QRadar/Integrations/QRadar/README.md
Human Readable Output
-
+
2. Get an offense by offense ID
Gets the offense with the matching offense ID from QRadar.
@@ -409,7 +409,7 @@
}
Human Readable Output
-
+
3. Search QRadar using AQL (qradar-searches)
Searches in QRadar using AQL. It is highly recommended to use the 'QRadarFullSearch' playbook instead of this command; it executes the search and returns the results.
@@ -474,7 +474,7 @@
}
Human Readable Output
-
+
4. Get a search ID and state
Gets a specific search ID and state.
@@ -539,7 +539,7 @@
}
Human Readable Output
-
+
5. Get search results
Gets search results.
@@ -614,7 +614,7 @@
}
Human Readable Output
-
+
6. Update an offense
Updates an offense.
@@ -837,7 +837,7 @@
}
Human Readable Output
-
+
7. List all assets
List all assets found in the model.
@@ -965,7 +965,7 @@
}
Human Readable Output
-
+
8. Get an asset by the asset ID
Retrieves the asset by ID.
@@ -1184,7 +1184,7 @@
]
Human Readable Output
-
+
9. Get the reason an offense was closed
Get closing reasons.
@@ -1288,7 +1288,7 @@
}
Human Readable Output
-
+
10. Create a note for an offense
Creates a note on an offense.
@@ -1375,7 +1375,7 @@
}
Human Readable Output
-
+
11. Get a note for an offense
Retrieve a note for an offense.
@@ -1462,7 +1462,7 @@
}
Human Readable Output
-
+
12. Get a reference by the reference name
Information about the reference set that had data added or updated. This returns information about the set, but not the contained data. This feature is supported from version 8.1 and later.
@@ -1568,7 +1568,7 @@
}
Human Readable Output
-
+
13. Create a reference set
Creates a new reference set. If the specified name is already in use, the command will fail.
@@ -1661,7 +1661,7 @@
}
Human Readable Output
-
+
14. Delete a reference
Deletes a reference set corresponding to the name provided.
@@ -1690,7 +1690,7 @@
Command Example
!qradar-delete-reference-set ref_name=Date
Human Readable Output
-
+
15. Create a value in a reference set
Creates a value in a reference set.
@@ -1786,7 +1786,7 @@
}
Human Readable Output
-
+
16. Add or update a value in a reference set
Adds or updates a value in a reference set.
@@ -1879,7 +1879,7 @@
}
Human Readable Output
-
+
17. Delete a value from a reference set
Deletes a value from a reference set.
@@ -1970,4 +1970,4 @@
}
Human Readable Output
-
+
diff --git a/Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml b/Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml
index 3cba36cb3aa8..3b575cedda8f 100644
--- a/Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml
+++ b/Packs/QRadar/Integrations/QRadar_v3/QRadar_v3.yml
@@ -3120,7 +3120,7 @@ script:
script: '-'
type: python
subtype: python3
- dockerimage: demisto/python3:3.11.9.107902
+ dockerimage: demisto/python3:3.11.10.115186
isremotesyncin: true
longRunning: true
isFetchSamples: true
diff --git a/Packs/QRadar/ReleaseNotes/2_5_7.md b/Packs/QRadar/ReleaseNotes/2_5_7.md
new file mode 100644
index 000000000000..5f69c68cd1f2
--- /dev/null
+++ b/Packs/QRadar/ReleaseNotes/2_5_7.md
@@ -0,0 +1,34 @@
+
+#### Integrations
+
+##### IBM QRadar v3
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+
+#### Scripts
+
+##### QRadarMirroringEventsStatus
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### QRadarMagnitude
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### QRadarPrintEvents
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### QRadarCreateAQLQuery
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### QRadarPrintAssets
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### QRadarFetchedEventsSum
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/QRadar/ReleaseNotes/2_5_8.md b/Packs/QRadar/ReleaseNotes/2_5_8.md
new file mode 100644
index 000000000000..992703ba4642
--- /dev/null
+++ b/Packs/QRadar/ReleaseNotes/2_5_8.md
@@ -0,0 +1,18 @@
+
+#### Integrations
+
+##### IBM QRadar v3
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+
+#### Scripts
+
+##### QRadarMirroringEventsStatus
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+##### QRadarCreateAQLQuery
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
diff --git a/Packs/QRadar/Scripts/QRadarCreateAQLQuery/QRadarCreateAQLQuery.yml b/Packs/QRadar/Scripts/QRadarCreateAQLQuery/QRadarCreateAQLQuery.yml
index ccacbded5787..bb204b60f6ef 100644
--- a/Packs/QRadar/Scripts/QRadarCreateAQLQuery/QRadarCreateAQLQuery.yml
+++ b/Packs/QRadar/Scripts/QRadarCreateAQLQuery/QRadarCreateAQLQuery.yml
@@ -85,7 +85,7 @@ outputs:
description: The resultant AQL query based on the inputs.
type: string
subtype: python3
-dockerimage: demisto/python3:3.10.13.78623
+dockerimage: demisto/python3:3.11.10.115186
runas: DBotWeakRole
fromversion: 6.0.0
tests:
diff --git a/Packs/QRadar/Scripts/QRadarFetchedEventsSum/QRadarFetchedEventsSum.yml b/Packs/QRadar/Scripts/QRadarFetchedEventsSum/QRadarFetchedEventsSum.yml
index 3dbb51ec68b3..eb532c63c5e2 100644
--- a/Packs/QRadar/Scripts/QRadarFetchedEventsSum/QRadarFetchedEventsSum.yml
+++ b/Packs/QRadar/Scripts/QRadarFetchedEventsSum/QRadarFetchedEventsSum.yml
@@ -2,7 +2,7 @@ comment: This display the amount of fetched events vs the total amount of events
commonfields:
id: QRadarFetchedEventsSum
version: -1
-dockerimage: demisto/python3:3.10.13.89009
+dockerimage: demisto/python3:3.11.10.113941
enabled: true
name: QRadarFetchedEventsSum
runas: DBotWeakRole
diff --git a/Packs/QRadar/Scripts/QRadarMagnitude/QRadarMagnitude.yml b/Packs/QRadar/Scripts/QRadarMagnitude/QRadarMagnitude.yml
index 0c27ea0c581e..7b47e5c9c8b7 100644
--- a/Packs/QRadar/Scripts/QRadarMagnitude/QRadarMagnitude.yml
+++ b/Packs/QRadar/Scripts/QRadarMagnitude/QRadarMagnitude.yml
@@ -2,7 +2,7 @@ comment: "This enables to color the field according to the magnitude. The scale
commonfields:
id: QRadarMagnitude
version: -1
-dockerimage: demisto/python3:3.10.13.89009
+dockerimage: demisto/python3:3.11.10.113941
enabled: true
name: QRadarMagnitude
runas: DBotWeakRole
diff --git a/Packs/QRadar/Scripts/QRadarMirroringEventsStatus/QRadarMirroringEventsStatus.yml b/Packs/QRadar/Scripts/QRadarMirroringEventsStatus/QRadarMirroringEventsStatus.yml
index 8a265aa48c69..567a55879f1f 100644
--- a/Packs/QRadar/Scripts/QRadarMirroringEventsStatus/QRadarMirroringEventsStatus.yml
+++ b/Packs/QRadar/Scripts/QRadarMirroringEventsStatus/QRadarMirroringEventsStatus.yml
@@ -11,7 +11,7 @@ tags:
- dynamic-section
type: python
subtype: python3
-dockerimage: demisto/python3:3.10.13.82467
+dockerimage: demisto/python3:3.11.10.115186
fromversion: 6.0.0
tests:
- No tests
diff --git a/Packs/QRadar/Scripts/QRadarPrintAssets/QRadarPrintAssets.yml b/Packs/QRadar/Scripts/QRadarPrintAssets/QRadarPrintAssets.yml
index bce9a4a0eda8..1f6ecc9584be 100644
--- a/Packs/QRadar/Scripts/QRadarPrintAssets/QRadarPrintAssets.yml
+++ b/Packs/QRadar/Scripts/QRadarPrintAssets/QRadarPrintAssets.yml
@@ -2,7 +2,7 @@ comment: "This script prints the assets fetched from the offense in a table form
commonfields:
id: QRadarPrintAssets
version: -1
-dockerimage: demisto/python3:3.10.13.89009
+dockerimage: demisto/python3:3.11.10.113941
enabled: true
name: QRadarPrintAssets
runas: DBotWeakRole
diff --git a/Packs/QRadar/Scripts/QRadarPrintEvents/QRadarPrintEvents.yml b/Packs/QRadar/Scripts/QRadarPrintEvents/QRadarPrintEvents.yml
index e2e5746aa664..b3832461c755 100644
--- a/Packs/QRadar/Scripts/QRadarPrintEvents/QRadarPrintEvents.yml
+++ b/Packs/QRadar/Scripts/QRadarPrintEvents/QRadarPrintEvents.yml
@@ -2,7 +2,7 @@ comment: This script prints the events fetched from the offense in a table forma
commonfields:
id: QRadarPrintEvents
version: -1
-dockerimage: demisto/python3:3.10.13.89009
+dockerimage: demisto/python3:3.11.10.113941
enabled: true
name: QRadarPrintEvents
runas: DBotWeakRole
diff --git a/Packs/QRadar/doc_files/47648599-299aa700-db83-11e8-934f-bf3394767da9.png b/Packs/QRadar/doc_files/47648599-299aa700-db83-11e8-934f-bf3394767da9.png
new file mode 100644
index 000000000000..8b63ea4a3a3e
Binary files /dev/null and b/Packs/QRadar/doc_files/47648599-299aa700-db83-11e8-934f-bf3394767da9.png differ
diff --git a/Packs/QRadar/doc_files/47648802-cbba8f00-db83-11e8-8f08-bd00838990af.png b/Packs/QRadar/doc_files/47648802-cbba8f00-db83-11e8-8f08-bd00838990af.png
new file mode 100644
index 000000000000..8b93126508c7
Binary files /dev/null and b/Packs/QRadar/doc_files/47648802-cbba8f00-db83-11e8-8f08-bd00838990af.png differ
diff --git a/Packs/QRadar/doc_files/47649297-84350280-db85-11e8-8ada-f478394b3c41.png b/Packs/QRadar/doc_files/47649297-84350280-db85-11e8-8ada-f478394b3c41.png
new file mode 100644
index 000000000000..eb290f4ceaf3
Binary files /dev/null and b/Packs/QRadar/doc_files/47649297-84350280-db85-11e8-8ada-f478394b3c41.png differ
diff --git a/Packs/QRadar/doc_files/47649648-89df1800-db86-11e8-8c2f-a164c8c7517f.png b/Packs/QRadar/doc_files/47649648-89df1800-db86-11e8-8c2f-a164c8c7517f.png
new file mode 100644
index 000000000000..5f9154ed26e4
Binary files /dev/null and b/Packs/QRadar/doc_files/47649648-89df1800-db86-11e8-8c2f-a164c8c7517f.png differ
diff --git a/Packs/QRadar/doc_files/47650820-10e1bf80-db8a-11e8-9577-3c639cc88472.png b/Packs/QRadar/doc_files/47650820-10e1bf80-db8a-11e8-9577-3c639cc88472.png
new file mode 100644
index 000000000000..03d82e29fc86
Binary files /dev/null and b/Packs/QRadar/doc_files/47650820-10e1bf80-db8a-11e8-9577-3c639cc88472.png differ
diff --git a/Packs/QRadar/doc_files/47650998-a67d4f00-db8a-11e8-8cb1-9791acccdff4.png b/Packs/QRadar/doc_files/47650998-a67d4f00-db8a-11e8-8cb1-9791acccdff4.png
new file mode 100644
index 000000000000..038fb1854e66
Binary files /dev/null and b/Packs/QRadar/doc_files/47650998-a67d4f00-db8a-11e8-8cb1-9791acccdff4.png differ
diff --git a/Packs/QRadar/doc_files/47652269-278a1580-db8e-11e8-913a-f761a9338f25.png b/Packs/QRadar/doc_files/47652269-278a1580-db8e-11e8-913a-f761a9338f25.png
new file mode 100644
index 000000000000..b91c93cd4a0a
Binary files /dev/null and b/Packs/QRadar/doc_files/47652269-278a1580-db8e-11e8-913a-f761a9338f25.png differ
diff --git a/Packs/QRadar/doc_files/47652373-69b35700-db8e-11e8-80cf-e42e65bcd3c1.png b/Packs/QRadar/doc_files/47652373-69b35700-db8e-11e8-80cf-e42e65bcd3c1.png
new file mode 100644
index 000000000000..3637437e9cf3
Binary files /dev/null and b/Packs/QRadar/doc_files/47652373-69b35700-db8e-11e8-80cf-e42e65bcd3c1.png differ
diff --git a/Packs/QRadar/doc_files/47652469-b5fe9700-db8e-11e8-89ec-eab5dd6af828.png b/Packs/QRadar/doc_files/47652469-b5fe9700-db8e-11e8-89ec-eab5dd6af828.png
new file mode 100644
index 000000000000..55504938a5da
Binary files /dev/null and b/Packs/QRadar/doc_files/47652469-b5fe9700-db8e-11e8-89ec-eab5dd6af828.png differ
diff --git a/Packs/QRadar/doc_files/47652630-1d1c4b80-db8f-11e8-9dd7-52947cfa8927.png b/Packs/QRadar/doc_files/47652630-1d1c4b80-db8f-11e8-9dd7-52947cfa8927.png
new file mode 100644
index 000000000000..3b6053139e99
Binary files /dev/null and b/Packs/QRadar/doc_files/47652630-1d1c4b80-db8f-11e8-9dd7-52947cfa8927.png differ
diff --git a/Packs/QRadar/doc_files/47652885-da0ea800-db8f-11e8-9d8b-e5e46cb8cb17.png b/Packs/QRadar/doc_files/47652885-da0ea800-db8f-11e8-9d8b-e5e46cb8cb17.png
new file mode 100644
index 000000000000..5e568ac65c24
Binary files /dev/null and b/Packs/QRadar/doc_files/47652885-da0ea800-db8f-11e8-9d8b-e5e46cb8cb17.png differ
diff --git a/Packs/QRadar/doc_files/48839982-f91ae700-ed95-11e8-85f4-6bce65b27f33.png b/Packs/QRadar/doc_files/48839982-f91ae700-ed95-11e8-85f4-6bce65b27f33.png
new file mode 100644
index 000000000000..bb0d07f9625e
Binary files /dev/null and b/Packs/QRadar/doc_files/48839982-f91ae700-ed95-11e8-85f4-6bce65b27f33.png differ
diff --git a/Packs/QRadar/doc_files/49079426-345f5f00-f249-11e8-8855-aecf575781f0.png b/Packs/QRadar/doc_files/49079426-345f5f00-f249-11e8-8855-aecf575781f0.png
new file mode 100644
index 000000000000..a1b0610af60d
Binary files /dev/null and b/Packs/QRadar/doc_files/49079426-345f5f00-f249-11e8-8855-aecf575781f0.png differ
diff --git a/Packs/QRadar/doc_files/49080365-10514d00-f24c-11e8-92a1-e61367eb9fb1.png b/Packs/QRadar/doc_files/49080365-10514d00-f24c-11e8-92a1-e61367eb9fb1.png
new file mode 100644
index 000000000000..c04227e621a1
Binary files /dev/null and b/Packs/QRadar/doc_files/49080365-10514d00-f24c-11e8-92a1-e61367eb9fb1.png differ
diff --git a/Packs/QRadar/doc_files/49080464-632b0480-f24c-11e8-91cd-3560d2626c2f.png b/Packs/QRadar/doc_files/49080464-632b0480-f24c-11e8-91cd-3560d2626c2f.png
new file mode 100644
index 000000000000..89f3258bc472
Binary files /dev/null and b/Packs/QRadar/doc_files/49080464-632b0480-f24c-11e8-91cd-3560d2626c2f.png differ
diff --git a/Packs/QRadar/doc_files/49081012-1811f100-f24e-11e8-81e6-af9cdf4f1a5a.png b/Packs/QRadar/doc_files/49081012-1811f100-f24e-11e8-81e6-af9cdf4f1a5a.png
new file mode 100644
index 000000000000..877ca75d42a1
Binary files /dev/null and b/Packs/QRadar/doc_files/49081012-1811f100-f24e-11e8-81e6-af9cdf4f1a5a.png differ
diff --git a/Packs/QRadar/doc_files/49081049-37108300-f24e-11e8-9554-95b2adccd563.png b/Packs/QRadar/doc_files/49081049-37108300-f24e-11e8-9554-95b2adccd563.png
new file mode 100644
index 000000000000..14e56317f5aa
Binary files /dev/null and b/Packs/QRadar/doc_files/49081049-37108300-f24e-11e8-9554-95b2adccd563.png differ
diff --git a/Packs/QRadar/pack_metadata.json b/Packs/QRadar/pack_metadata.json
index 81c56f500950..79072b239ef8 100644
--- a/Packs/QRadar/pack_metadata.json
+++ b/Packs/QRadar/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "IBM QRadar",
"description": "Fetch offenses as incidents and search QRadar",
"support": "xsoar",
- "currentVersion": "2.5.6",
+ "currentVersion": "2.5.8",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/RSANetWitnessEndpoint/Integrations/RSANetWitnessEndpoint/RSANetWitnessEndpoint.yml b/Packs/RSANetWitnessEndpoint/Integrations/RSANetWitnessEndpoint/RSANetWitnessEndpoint.yml
index 6d131ad9159f..6fe4fdec0032 100644
--- a/Packs/RSANetWitnessEndpoint/Integrations/RSANetWitnessEndpoint/RSANetWitnessEndpoint.yml
+++ b/Packs/RSANetWitnessEndpoint/Integrations/RSANetWitnessEndpoint/RSANetWitnessEndpoint.yml
@@ -390,7 +390,7 @@ script:
- contextPath: NetWitness.Blacklist.Domains
description: Domains block listed successfully
description: 'Add a list of domains to block list '
- dockerimage: demisto/python3:3.10.13.72123
+ dockerimage: demisto/python3:3.11.10.116439
tests:
- NetWitness Endpoint Test
fromversion: 5.0.0
diff --git a/Packs/RSANetWitnessEndpoint/ReleaseNotes/1_0_14.md b/Packs/RSANetWitnessEndpoint/ReleaseNotes/1_0_14.md
new file mode 100644
index 000000000000..1e5acca2c3c7
--- /dev/null
+++ b/Packs/RSANetWitnessEndpoint/ReleaseNotes/1_0_14.md
@@ -0,0 +1,10 @@
+
+#### Integrations
+
+##### RSA NetWitness Endpoint
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+
diff --git a/Packs/RSANetWitnessEndpoint/pack_metadata.json b/Packs/RSANetWitnessEndpoint/pack_metadata.json
index 70eea2df4eff..58098866a379 100644
--- a/Packs/RSANetWitnessEndpoint/pack_metadata.json
+++ b/Packs/RSANetWitnessEndpoint/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "RSA NetWitness Endpoint",
"description": "RSA NetWitness Endpoint provides deep visibility beyond basic endpoint security solutions by monitoring and collecting activity across all of your endpoints on and off your network. The RSA Demisto integration provides access to information about endpoints, modules and indicators. ",
"support": "xsoar",
- "currentVersion": "1.0.13",
+ "currentVersion": "1.0.14",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/RSANetWitness_v11_1/ReleaseNotes/3_2_5.md b/Packs/RSANetWitness_v11_1/ReleaseNotes/3_2_5.md
new file mode 100644
index 000000000000..dcbdb76b1994
--- /dev/null
+++ b/Packs/RSANetWitness_v11_1/ReleaseNotes/3_2_5.md
@@ -0,0 +1,21 @@
+
+#### Scripts
+
+##### RSA_DisplayMetasEvents
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+##### RSA_GetRawLog
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+##### SetRSANetWitnessAlertsMD
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
diff --git a/Packs/RSANetWitness_v11_1/Scripts/RSADisplayMetasEvents/RSADisplayMetasEvents.yml b/Packs/RSANetWitness_v11_1/Scripts/RSADisplayMetasEvents/RSADisplayMetasEvents.yml
index 5b0c8d814421..d75ea2a6eafa 100644
--- a/Packs/RSANetWitness_v11_1/Scripts/RSADisplayMetasEvents/RSADisplayMetasEvents.yml
+++ b/Packs/RSANetWitness_v11_1/Scripts/RSADisplayMetasEvents/RSADisplayMetasEvents.yml
@@ -1,7 +1,7 @@
commonfields:
id: RSA_DisplayMetasEvents
version: -1
-dockerimage: demisto/python3:3.10.13.73190
+dockerimage: demisto/python3:3.11.10.116439
enabled: true
name: RSA_DisplayMetasEvents
comment: Use this script to display meta events inside the layout.
diff --git a/Packs/RSANetWitness_v11_1/Scripts/RSAGetRawLog/RSAGetRawLog.yml b/Packs/RSANetWitness_v11_1/Scripts/RSAGetRawLog/RSAGetRawLog.yml
index e4cccb301421..f4539bd4652b 100644
--- a/Packs/RSANetWitness_v11_1/Scripts/RSAGetRawLog/RSAGetRawLog.yml
+++ b/Packs/RSANetWitness_v11_1/Scripts/RSAGetRawLog/RSAGetRawLog.yml
@@ -1,7 +1,7 @@
commonfields:
id: RSA_GetRawLog
version: -1
-dockerimage: demisto/python3:3.10.13.73190
+dockerimage: demisto/python3:3.11.10.116439
enabled: true
name: RSA_GetRawLog
comment: Use this script to get RAW log.
diff --git a/Packs/RSANetWitness_v11_1/Scripts/SetRSANetWitnessAlertsMD/SetRSANetWitnessAlertsMD.yml b/Packs/RSANetWitness_v11_1/Scripts/SetRSANetWitnessAlertsMD/SetRSANetWitnessAlertsMD.yml
index 209f35e68997..e2a87a1345c6 100644
--- a/Packs/RSANetWitness_v11_1/Scripts/SetRSANetWitnessAlertsMD/SetRSANetWitnessAlertsMD.yml
+++ b/Packs/RSANetWitness_v11_1/Scripts/SetRSANetWitnessAlertsMD/SetRSANetWitnessAlertsMD.yml
@@ -10,7 +10,7 @@ comment: This automation takes several alert fields from the RSA NetWitness aler
enabled: true
scripttarget: 0
subtype: python3
-dockerimage: demisto/python3:3.10.13.73190
+dockerimage: demisto/python3:3.11.10.116439
runas: DBotWeakRole
fromversion: 6.9.0
tests:
diff --git a/Packs/RSANetWitness_v11_1/pack_metadata.json b/Packs/RSANetWitness_v11_1/pack_metadata.json
index 23b625d09011..83dbe0e3cdf8 100644
--- a/Packs/RSANetWitness_v11_1/pack_metadata.json
+++ b/Packs/RSANetWitness_v11_1/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "RSA NetWitness",
"description": "RSA NetWitness Platform provides systems Logs, Network, and endpoint visibility for real-time collection, detection, and automated response with the Demisto Enterprise platform. Providing full session analysis, customers can extract critical data and effectively operate security operations automated playbook.",
"support": "partner",
- "currentVersion": "3.2.4",
+ "currentVersion": "3.2.5",
"author": "Netwitness",
"url": "https://www.netwitness.com/services/technical-support/",
"email": "nw.paloalto.support@netwitness.com",
@@ -27,4 +27,4 @@
"marketplacev2"
],
"itemPrefix": "RSA"
-}
+}
\ No newline at end of file
diff --git a/Packs/Rapid7AppSec/Integrations/Rapid7AppSec/Rapid7AppSec.yml b/Packs/Rapid7AppSec/Integrations/Rapid7AppSec/Rapid7AppSec.yml
index 77ee8b958e50..b9760d1ea686 100644
--- a/Packs/Rapid7AppSec/Integrations/Rapid7AppSec/Rapid7AppSec.yml
+++ b/Packs/Rapid7AppSec/Integrations/Rapid7AppSec/Rapid7AppSec.yml
@@ -723,7 +723,7 @@ script:
- contextPath: Rapid7AppSec.EngineGroup.description
description: Description about the engine group.
type: String
- dockerimage: demisto/python3:3.11.9.103066
+ dockerimage: demisto/python3:3.11.10.115186
feed: false
isfetch: false
longRunning: false
diff --git a/Packs/Rapid7AppSec/ReleaseNotes/1_0_5.md b/Packs/Rapid7AppSec/ReleaseNotes/1_0_5.md
new file mode 100644
index 000000000000..9c4bf33357fe
--- /dev/null
+++ b/Packs/Rapid7AppSec/ReleaseNotes/1_0_5.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Rapid7AppSec
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
diff --git a/Packs/Rapid7AppSec/pack_metadata.json b/Packs/Rapid7AppSec/pack_metadata.json
index bd9798b8ca5a..f75722c64224 100644
--- a/Packs/Rapid7AppSec/pack_metadata.json
+++ b/Packs/Rapid7AppSec/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Rapid7 - AppSec",
"description": "Rapid7 AppSec content pack is designed to help users manage applications vulnerabilities and scans.",
"support": "xsoar",
- "currentVersion": "1.0.4",
+ "currentVersion": "1.0.5",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/Reco/Integrations/Reco/Reco.py b/Packs/Reco/Integrations/Reco/Reco.py
index c2001b43b6ca..b2bee048de15 100644
--- a/Packs/Reco/Integrations/Reco/Reco.py
+++ b/Packs/Reco/Integrations/Reco/Reco.py
@@ -8,6 +8,7 @@
from typing import Any
ENTRY_TYPE_USER = "ENTRY_TYPE_USER"
+ENTRY_TYPE_IDENTITY = "ENTRY_TYPE_IDENTITY"
LABEL_STATUS_ACTIVE = "LABEL_STATUS_ACTIVE"
@@ -35,6 +36,10 @@
STEP_INIT = "init"
+def create_filter(field: str, value: str) -> dict:
+    """Build a single stringContains field filter for a getTableRequest."""
+    return {"field": field, "stringContains": {"value": value}}
+
+
def extract_response(response: Any) -> list[dict[str, Any]]:
if response.get("getTableResponse") is None:
demisto.error(f"got bad response, {response}")
@@ -317,18 +322,68 @@ def resolve_visibility_event(self, entity_id: str, label_name: str) -> Any:
def get_risky_users(self) -> list[dict[str, Any]]:
"""Get risky users. Returns a list of risky users with analysis."""
- params = {
+ return self.get_identities(email_address=None, label=RISKY_USER)
+
+ def get_identities(self, email_address: Optional[str] = None, label: Optional[str] = None) -> list[dict[str, Any]]:
+ """
+ Get identities from Reco with specified filters.
+
+ :param email_address: Optional email substring to filter identities.
+ :param label: Optional label value to filter identities.
+        :return: A list of identity rows matching the given filters.
+ """
+ params: Dict[str, Any] = {
"getTableRequest": {
- "tableName": "RISK_MANAGEMENT_VIEW_USER_LIST",
- "pageSize": 200,
+ "tableName": "RISK_MANAGEMENT_VIEW_IDENTITIES",
+ "pageSize": 50,
"fieldSorts": {
"sorts": [
- {"sortBy": "risk_level", "sortDirection": "SORT_DIRECTION_DESC"}
+ {"sortBy": "primary_email_address", "sortDirection": "SORT_DIRECTION_ASC"}
]
},
- "fieldFilters": {},
+ "fieldFilters": {
+ "relationship": "FILTER_RELATIONSHIP_AND",
+ "fieldFilterGroups": {
+ "fieldFilters": []
+ },
+ "forceEstimateSize": True
+ },
}
}
+
+ # Add label filter if provided
+ if label is not None:
+ label_filter = {
+ "relationship": "FILTER_RELATIONSHIP_OR",
+ "filters": {
+ "filters": [
+ {
+ "field": "labels",
+ "labelNameEquals": {
+ "keys": ["identity_id"],
+ "value": [label],
+ "filterColumn": "label_name",
+ "entryTypes": ["ENTRY_TYPE_IDENTITY"]
+ }
+ }
+ ]
+ }
+ }
+ params["getTableRequest"]["fieldFilters"]["fieldFilterGroups"]["fieldFilters"].append(label_filter)
+
+ # Add email address filter if provided
+ if email_address:
+ email_filter = {
+ "relationship": "FILTER_RELATIONSHIP_OR",
+ "filters": {
+ "filters": [
+ create_filter("full_name", email_address),
+ create_filter("primary_email_address", email_address)
+ ]
+ }
+ }
+ params["getTableRequest"]["fieldFilters"]["fieldFilterGroups"]["fieldFilters"].append(email_filter)
+
try:
response = self._http_request(
method="PUT",
@@ -343,7 +398,7 @@ def get_risky_users(self) -> list[dict[str, Any]]:
def get_exposed_publicly_files_at_risk(self) -> list[dict[str, Any]]:
"""Get exposed publicly files at risk. Returns a list of exposed publicly files at risk with analysis."""
- params = {
+ params: Dict[str, Any] = {
"getTableRequest": {
"tableName": "DATA_RISK_MANAGEMENT_VIEW_BREAKDOWN_EXPOSED_PUBLICLY",
"pageSize": PAGE_SIZE,
@@ -457,7 +512,8 @@ def get_list_of_private_emails_with_access(self) -> list[dict[str, Any]]:
"relationship": "FILTER_RELATIONSHIP_AND",
"fieldFilterGroups": {
"fieldFilters": []
- }
+ },
+ "forceEstimateSize": True
}
}
}
@@ -512,7 +568,8 @@ def get_3rd_parties_risk_list(self, last_interaction_time_before_in_days: int) -
}
}
]
- }
+ },
+ "forceEstimateSize": True
}
}
}
@@ -577,7 +634,8 @@ def get_files_shared_with_3rd_parties(self,
}
}
]
- }
+ },
+ "forceEstimateSize": True
}
}
}
@@ -702,8 +760,9 @@ def get_assets_shared_externally(self, email_address: str) -> list[dict[str, Any
}
}
]
- }
- }
+ },
+ "forceEstimateSize": True
+ },
}
}
try:
@@ -722,38 +781,45 @@ def get_user_context_by_email_address(
self, email_address: str
) -> list[dict[str, Any]]:
""" Get user context by email address. Returns a dict of user context. """
- params: dict[str, Any] = {
+ identities = self.get_identities(email_address=email_address)
+ if not identities:
+ return []
+ identity_ids = []
+ for user in identities:
+ user_as_dict = parse_table_row_to_dict(user.get("cells", {}))
+ identity_id = user_as_dict.get("identity_id")
+ if identity_id:
+ identity_ids.append(identity_id)
+
+ params: Dict[str, Any] = {
"getTableRequest": {
- "tableName": "enriched_users_view",
+ "tableName": "RISK_MANAGEMENT_VIEW_IDENTITIES",
"pageSize": 1,
"fieldSorts": {
"sorts": []
},
"fieldFilters": {
"relationship": "FILTER_RELATIONSHIP_OR",
- "fieldFilterGroups": {
- "fieldFilters": [
- {
- "relationship": "FILTER_RELATIONSHIP_OR",
- "filters": {
- "filters": [
- {
- "field": "email_account",
- "stringEquals": {
- "value": f"{email_address}"
- }
- }
- ]
- }
- }
- ]
- }
- }
+ "forceEstimateSize": True,
+ "filters": {"filters": []} if identity_ids else {},
+ },
}
}
+
+ # Add filters for multiple identity_ids
+ if identity_ids:
+ identity_filters = [
+ {
+ "field": "identity_id",
+ "stringEquals": {"value": identity_id}
+ }
+ for identity_id in identity_ids
+ ]
+ params["getTableRequest"]["fieldFilters"]["filters"]["filters"] = identity_filters
+
response = self._http_request(
- method="POST",
- url_suffix="/asset-management",
+ method="PUT",
+ url_suffix="/risk-management/get-risk-management-table",
timeout=RECO_API_TIMEOUT_IN_SECONDS * 2,
data=json.dumps(params),
)
@@ -1029,10 +1095,10 @@ def get_risky_users_from_reco(reco_client: RecoClient) -> CommandResults:
readable_output=tableToMarkdown(
"Risky Users",
users,
- headers=["email_account", "risk_level", "labels", "status"],
+ headers=user_as_dict.keys(),
),
outputs_prefix="Reco.RiskyUsers",
- outputs_key_field="email_account",
+ outputs_key_field="primary_email_address",
outputs=users,
raw_response=risky_users,
)
@@ -1040,9 +1106,14 @@ def get_risky_users_from_reco(reco_client: RecoClient) -> CommandResults:
def add_risky_user_label(reco_client: RecoClient, email_address: str) -> CommandResults:
"""Add a risky user to Reco."""
- raw_response = reco_client.set_entry_label_relations(
- email_address, RISKY_USER, LABEL_STATUS_ACTIVE, ENTRY_TYPE_USER
- )
+
+ users = reco_client.get_identities(email_address)
+ for user in users:
+ user_as_dict = parse_table_row_to_dict(user.get("cells", {}))
+ raw_response = reco_client.set_entry_label_relations(
+ user_as_dict["identity_id"], RISKY_USER, LABEL_STATUS_ACTIVE, ENTRY_TYPE_IDENTITY
+ )
+
return CommandResults(
raw_response=raw_response,
readable_output=f"User {email_address} labeled as risky",
@@ -1051,9 +1122,13 @@ def add_risky_user_label(reco_client: RecoClient, email_address: str) -> Command
def add_leaving_org_user(reco_client: RecoClient, email_address: str) -> CommandResults:
"""Tag user as leaving org."""
- raw_response = reco_client.set_entry_label_relations(
- email_address, LEAVING_ORG_USER, LABEL_STATUS_ACTIVE, ENTRY_TYPE_USER
- )
+ users = reco_client.get_identities(email_address)
+ for user in users:
+ user_as_dict = parse_table_row_to_dict(user.get("cells", {}))
+ raw_response = reco_client.set_entry_label_relations(
+ user_as_dict["identity_id"], LEAVING_ORG_USER, LABEL_STATUS_ACTIVE, ENTRY_TYPE_IDENTITY
+ )
+
return CommandResults(
raw_response=raw_response,
readable_output=f"User {email_address} labeled as leaving org user",
diff --git a/Packs/Reco/Integrations/Reco/Reco.yml b/Packs/Reco/Integrations/Reco/Reco.yml
index efdf5858d42e..16a062661745 100644
--- a/Packs/Reco/Integrations/Reco/Reco.yml
+++ b/Packs/Reco/Integrations/Reco/Reco.yml
@@ -65,7 +65,7 @@ description: Reco is a Saas data security solution that protects your data from
display: Reco
name: Reco
script:
- dockerimage: demisto/python3:3.11.9.107902
+ dockerimage: demisto/python3:3.11.10.116949
isfetch: true
runonce: false
script: "-"
diff --git a/Packs/Reco/Integrations/Reco/Reco_test.py b/Packs/Reco/Integrations/Reco/Reco_test.py
index 243b780efe4e..88730ee695f3 100644
--- a/Packs/Reco/Integrations/Reco/Reco_test.py
+++ b/Packs/Reco/Integrations/Reco/Reco_test.py
@@ -292,6 +292,12 @@ def get_random_risky_users_response() -> GetIncidentTableResponse:
"John Doe".encode(ENCODING)
).decode(ENCODING),
),
+ KeyValuePair(
+ key="identity_id",
+ value=base64.b64encode(
+ f"{uuid.uuid4()}".encode(ENCODING)
+ ).decode(ENCODING),
+ ),
KeyValuePair(
key="email_account",
value=base64.b64encode(
@@ -699,6 +705,12 @@ def test_add_risky_user_label(requests_mock, reco_client: RecoClient) -> None:
requests_mock.put(
f"{DUMMY_RECO_API_DNS_NAME}/entry-label-relations", json={}, status_code=200
)
+ raw_result = get_random_risky_users_response()
+ requests_mock.put(
+ f"{DUMMY_RECO_API_DNS_NAME}/risk-management/get-risk-management-table",
+ json=raw_result,
+ status_code=200,
+ )
res = add_risky_user_label(reco_client=reco_client, email_address=label_id)
assert "labeled as risky" in res.readable_output
@@ -800,7 +812,7 @@ def test_get_private_email_list_with_access(requests_mock, reco_client: RecoClie
actual_result = get_private_email_list_with_access(
reco_client=reco_client
)
- assert 0 == len(actual_result.outputs)
+ assert len(actual_result.outputs) == 0
def test_get_assets_shared_externally_command(requests_mock, reco_client: RecoClient) -> None:
@@ -902,6 +914,11 @@ def test_get_user_context_by_email(requests_mock, reco_client: RecoClient) -> No
requests_mock.post(
f"{DUMMY_RECO_API_DNS_NAME}/asset-management", json=raw_result, status_code=200
)
+ requests_mock.put(
+ f"{DUMMY_RECO_API_DNS_NAME}/risk-management/get-risk-management-table",
+ json=raw_result,
+ status_code=200,
+ )
res = get_user_context_by_email_address(reco_client, "charles@corp.com")
assert res.outputs_prefix == "Reco.User"
assert res.outputs.get("email_account") != ""
diff --git a/Packs/Reco/ReleaseNotes/1_5_1.md b/Packs/Reco/ReleaseNotes/1_5_1.md
new file mode 100644
index 000000000000..e892653a2a11
--- /dev/null
+++ b/Packs/Reco/ReleaseNotes/1_5_1.md
@@ -0,0 +1,8 @@
+
+#### Integrations
+
+##### Reco
+
+- Added support for tagging risky users by identity rather than by account.
+- Fixed the **reco-get-risky-users** command to return the correct output.
+- Fixed the **reco-get-assets-user-has-access-to** command to return the correct output.
\ No newline at end of file
diff --git a/Packs/Reco/pack_metadata.json b/Packs/Reco/pack_metadata.json
index 9bb7b4cf2503..c2cee31ec652 100644
--- a/Packs/Reco/pack_metadata.json
+++ b/Packs/Reco/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Reco",
"description": "Reco is an identity-first SaaS security solution that empowers organizations with full visibility into every app, identity, and their actions to seamlessly prioritize and control risks in the SaaS ecosystem",
"support": "partner",
- "currentVersion": "1.5.0",
+ "currentVersion": "1.5.1",
"author": "Reco",
"url": "https://reco.ai",
"email": "support@reco.ai",
diff --git a/Packs/Recorded_Future/Integrations/Recorded_Future/README.md b/Packs/Recorded_Future/Integrations/Recorded_Future/README.md
index 639d3e80b87d..bb4a64cb904c 100644
--- a/Packs/Recorded_Future/Integrations/Recorded_Future/README.md
+++ b/Packs/Recorded_Future/Integrations/Recorded_Future/README.md
@@ -1143,7 +1143,7 @@
Human Readable Output
-
+
@@ -1330,7 +1330,7 @@
Human Readable Output
-
+
@@ -1542,7 +1542,7 @@
Human Readable Output
-
+
@@ -1687,7 +1687,7 @@
Command Example
-
![image](https://user-images.githubusercontent.com/35098543/52180293-1421c280-27ed-11e9-82ec-cbb1669b20dc.png)
+
![image](../../doc_files/52180293-1421c280-27ed-11e9-82ec-cbb1669b20dc.png)
Context Example
@@ -1719,7 +1719,7 @@
Human Readable Output
-
+
5. Get threat intelligence context for an indicator
@@ -2236,7 +2236,7 @@
Human Readable Output
-
+
6. Get hash threats
@@ -2434,7 +2434,7 @@
Human Readable Output
-
+
7. Get IP threats
@@ -2606,7 +2606,7 @@
Human Readable Output
-
+
8. Get URL threats
@@ -2768,7 +2768,7 @@
Human Readable Output
-
+
9. Get domain threats
@@ -2930,7 +2930,7 @@
Human Readable Output
-
+
10. Get vulnerability threats
@@ -3062,7 +3062,7 @@
Human Readable Output
-
+
11. Get the domain risk list
@@ -3177,7 +3177,7 @@
Human Readable Output
-
+
12. Get the URL risk list
@@ -3292,7 +3292,7 @@
Human Readable Output
-
+
13. Get the IP address risk list
@@ -3407,7 +3407,7 @@
Human Readable Output
-
+
14. Get the vulnerability risk list
@@ -3522,7 +3522,7 @@
Human Readable Output
-
+
15. Get the hash risk list
@@ -3637,7 +3637,7 @@
Human Readable Output
-
+
16. Get the domain risk rules
@@ -3782,7 +3782,7 @@
Human Readable Output
-
+
17. Get the hash risk rules
@@ -3927,7 +3927,7 @@
Human Readable Output
-
+
18. Get the IP address risk rules
@@ -4032,7 +4032,7 @@
Human Readable Output
-
+
19. Get the URL risk rules
@@ -4159,7 +4159,7 @@
Human Readable Output
-
+
20. Get the vulnerability risk rules
@@ -4264,7 +4264,7 @@
Human Readable Output
-
+
21. Get a list of alert rules
@@ -4370,7 +4370,7 @@
Human Readable Output
-
+
22. Get a list of alerts
@@ -4582,7 +4582,7 @@
Human Readable Output
-
+
@@ -4611,5 +4611,5 @@
diff --git a/Packs/Recorded_Future/doc_files/52179606-40d1dc00-27e5-11e9-90fb-8b675105c75d.png b/Packs/Recorded_Future/doc_files/52179606-40d1dc00-27e5-11e9-90fb-8b675105c75d.png
new file mode 100644
index 000000000000..ad7fec58abc1
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52179606-40d1dc00-27e5-11e9-90fb-8b675105c75d.png differ
diff --git a/Packs/Recorded_Future/doc_files/52179711-3fed7a00-27e6-11e9-9365-894b22815c11.png b/Packs/Recorded_Future/doc_files/52179711-3fed7a00-27e6-11e9-9365-894b22815c11.png
new file mode 100644
index 000000000000..bd2dd933baf1
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52179711-3fed7a00-27e6-11e9-9365-894b22815c11.png differ
diff --git a/Packs/Recorded_Future/doc_files/52179776-141ec400-27e7-11e9-91f7-fa24c1e290c2.png b/Packs/Recorded_Future/doc_files/52179776-141ec400-27e7-11e9-91f7-fa24c1e290c2.png
new file mode 100644
index 000000000000..727004652ab4
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52179776-141ec400-27e7-11e9-91f7-fa24c1e290c2.png differ
diff --git a/Packs/Recorded_Future/doc_files/52180330-af1a9c80-27ed-11e9-9be2-c8fd4ff93b58.png b/Packs/Recorded_Future/doc_files/52180330-af1a9c80-27ed-11e9-9be2-c8fd4ff93b58.png
new file mode 100644
index 000000000000..631d2d099be1
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52180330-af1a9c80-27ed-11e9-9be2-c8fd4ff93b58.png differ
diff --git a/Packs/Recorded_Future/doc_files/52180426-7d560580-27ee-11e9-9494-3867242aa829.png b/Packs/Recorded_Future/doc_files/52180426-7d560580-27ee-11e9-9494-3867242aa829.png
new file mode 100644
index 000000000000..c106f17f9459
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52180426-7d560580-27ee-11e9-9494-3867242aa829.png differ
diff --git a/Packs/Recorded_Future/doc_files/52180463-09682d00-27ef-11e9-8092-979b44295d54.png b/Packs/Recorded_Future/doc_files/52180463-09682d00-27ef-11e9-8092-979b44295d54.png
new file mode 100644
index 000000000000..d947cd73dd6b
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52180463-09682d00-27ef-11e9-8092-979b44295d54.png differ
diff --git a/Packs/Recorded_Future/doc_files/52180478-50eeb900-27ef-11e9-8860-fc088b799368.png b/Packs/Recorded_Future/doc_files/52180478-50eeb900-27ef-11e9-8860-fc088b799368.png
new file mode 100644
index 000000000000..61c00cccd881
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52180478-50eeb900-27ef-11e9-8860-fc088b799368.png differ
diff --git a/Packs/Recorded_Future/doc_files/52180512-e12cfe00-27ef-11e9-8f37-8fe5fdf33843.png b/Packs/Recorded_Future/doc_files/52180512-e12cfe00-27ef-11e9-8f37-8fe5fdf33843.png
new file mode 100644
index 000000000000..9fa0108ec841
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52180512-e12cfe00-27ef-11e9-8f37-8fe5fdf33843.png differ
diff --git a/Packs/Recorded_Future/doc_files/52180529-091c6180-27f0-11e9-8224-ad3ac75ed7dc.png b/Packs/Recorded_Future/doc_files/52180529-091c6180-27f0-11e9-8224-ad3ac75ed7dc.png
new file mode 100644
index 000000000000..ace48f5b7187
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52180529-091c6180-27f0-11e9-8224-ad3ac75ed7dc.png differ
diff --git a/Packs/Recorded_Future/doc_files/52180553-6e705280-27f0-11e9-9ef7-c1c13029eb58.png b/Packs/Recorded_Future/doc_files/52180553-6e705280-27f0-11e9-9ef7-c1c13029eb58.png
new file mode 100644
index 000000000000..7152c2177b2e
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52180553-6e705280-27f0-11e9-9ef7-c1c13029eb58.png differ
diff --git a/Packs/Recorded_Future/doc_files/52209971-3ddef600-288e-11e9-8e00-e964cea64913.png b/Packs/Recorded_Future/doc_files/52209971-3ddef600-288e-11e9-8e00-e964cea64913.png
new file mode 100644
index 000000000000..626cee146f9b
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52209971-3ddef600-288e-11e9-8e00-e964cea64913.png differ
diff --git a/Packs/Recorded_Future/doc_files/52210045-7b438380-288e-11e9-8ea5-81a8ac075eea.png b/Packs/Recorded_Future/doc_files/52210045-7b438380-288e-11e9-8ea5-81a8ac075eea.png
new file mode 100644
index 000000000000..c388d03abb8f
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52210045-7b438380-288e-11e9-8ea5-81a8ac075eea.png differ
diff --git a/Packs/Recorded_Future/doc_files/52210168-dbd2c080-288e-11e9-8be5-96f62b1d62f0.png b/Packs/Recorded_Future/doc_files/52210168-dbd2c080-288e-11e9-8be5-96f62b1d62f0.png
new file mode 100644
index 000000000000..996e1301988a
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52210168-dbd2c080-288e-11e9-8be5-96f62b1d62f0.png differ
diff --git a/Packs/Recorded_Future/doc_files/52210325-67e4e800-288f-11e9-9434-5def18e623d0.png b/Packs/Recorded_Future/doc_files/52210325-67e4e800-288f-11e9-9434-5def18e623d0.png
new file mode 100644
index 000000000000..077b3a620eda
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52210325-67e4e800-288f-11e9-9434-5def18e623d0.png differ
diff --git a/Packs/Recorded_Future/doc_files/52210449-d164f680-288f-11e9-861b-54c725025772.png b/Packs/Recorded_Future/doc_files/52210449-d164f680-288f-11e9-861b-54c725025772.png
new file mode 100644
index 000000000000..065744f5ea29
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52210449-d164f680-288f-11e9-861b-54c725025772.png differ
diff --git a/Packs/Recorded_Future/doc_files/52210580-26a10800-2890-11e9-95f3-1206fcfd4422.png b/Packs/Recorded_Future/doc_files/52210580-26a10800-2890-11e9-95f3-1206fcfd4422.png
new file mode 100644
index 000000000000..3803cc96e2d3
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52210580-26a10800-2890-11e9-95f3-1206fcfd4422.png differ
diff --git a/Packs/Recorded_Future/doc_files/52210727-84cdeb00-2890-11e9-9f2b-05575fb07f43.png b/Packs/Recorded_Future/doc_files/52210727-84cdeb00-2890-11e9-9f2b-05575fb07f43.png
new file mode 100644
index 000000000000..6ca23f8ea563
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52210727-84cdeb00-2890-11e9-9f2b-05575fb07f43.png differ
diff --git a/Packs/Recorded_Future/doc_files/52210851-d5454880-2890-11e9-8375-82058307dd3a.png b/Packs/Recorded_Future/doc_files/52210851-d5454880-2890-11e9-8375-82058307dd3a.png
new file mode 100644
index 000000000000..3a42243a5789
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52210851-d5454880-2890-11e9-8375-82058307dd3a.png differ
diff --git a/Packs/Recorded_Future/doc_files/52210941-12113f80-2891-11e9-9be2-bb944ef37462.png b/Packs/Recorded_Future/doc_files/52210941-12113f80-2891-11e9-9be2-bb944ef37462.png
new file mode 100644
index 000000000000..31eda8e16226
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52210941-12113f80-2891-11e9-9be2-bb944ef37462.png differ
diff --git a/Packs/Recorded_Future/doc_files/52211023-4ab11900-2891-11e9-92e2-05f407dfe2f2.png b/Packs/Recorded_Future/doc_files/52211023-4ab11900-2891-11e9-92e2-05f407dfe2f2.png
new file mode 100644
index 000000000000..412ff78b2213
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52211023-4ab11900-2891-11e9-92e2-05f407dfe2f2.png differ
diff --git a/Packs/Recorded_Future/doc_files/52211907-dcba2100-2893-11e9-83ce-f048959938f9.png b/Packs/Recorded_Future/doc_files/52211907-dcba2100-2893-11e9-83ce-f048959938f9.png
new file mode 100644
index 000000000000..92028eeb7ba1
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52211907-dcba2100-2893-11e9-83ce-f048959938f9.png differ
diff --git a/Packs/Recorded_Future/doc_files/52212030-30c50580-2894-11e9-84e2-d8e40d9f4c65.png b/Packs/Recorded_Future/doc_files/52212030-30c50580-2894-11e9-84e2-d8e40d9f4c65.png
new file mode 100644
index 000000000000..152fae0ae556
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52212030-30c50580-2894-11e9-84e2-d8e40d9f4c65.png differ
diff --git a/Packs/Recorded_Future/doc_files/52214134-9e276500-2899-11e9-8acd-d7166a2c24db.png b/Packs/Recorded_Future/doc_files/52214134-9e276500-2899-11e9-8acd-d7166a2c24db.png
new file mode 100644
index 000000000000..bc92f7a515ad
Binary files /dev/null and b/Packs/Recorded_Future/doc_files/52214134-9e276500-2899-11e9-8acd-d7166a2c24db.png differ
diff --git a/Packs/Respond/Integrations/RespondAnalyst/RespondAnalyst.yml b/Packs/Respond/Integrations/RespondAnalyst/RespondAnalyst.yml
index 3799b83cf51e..03940900133f 100644
--- a/Packs/Respond/Integrations/RespondAnalyst/RespondAnalyst.yml
+++ b/Packs/Respond/Integrations/RespondAnalyst/RespondAnalyst.yml
@@ -233,7 +233,7 @@ script:
name: tenant_id
description: Get escalation data associated with an incident. In Respond, an 'escalation' is a specific event derived from cybersecurity telemetry. Escalations are compiled together to form Incidents in Respond.
name: mad-get-escalations
- dockerimage: demisto/python3:3.11.9.103066
+ dockerimage: demisto/python3:3.11.10.115186
isfetch: true
isremotesyncin: true
isremotesyncout: true
diff --git a/Packs/Respond/ReleaseNotes/1_0_9.md b/Packs/Respond/ReleaseNotes/1_0_9.md
new file mode 100644
index 000000000000..ac10e5546d20
--- /dev/null
+++ b/Packs/Respond/ReleaseNotes/1_0_9.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Mandiant Automated Defense (Formerly Respond Software)
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
diff --git a/Packs/Respond/pack_metadata.json b/Packs/Respond/pack_metadata.json
index c26c9adea6ea..348f923e9bc1 100644
--- a/Packs/Respond/pack_metadata.json
+++ b/Packs/Respond/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Mandiant Automated Defense",
"description": "Mandiant Automated Defense Pack",
"support": "partner",
- "currentVersion": "1.0.8",
+ "currentVersion": "1.0.9",
"author": "Mandiant",
"url": "https://www.mandiant.com/support",
"email": "customersupport@mandiant.com",
diff --git a/Packs/ReversingLabs_A1000/Integrations/ReversingLabsA1000/README.md b/Packs/ReversingLabs_A1000/Integrations/ReversingLabsA1000/README.md
index 03190fd8efb3..d91f3305b29f 100644
--- a/Packs/ReversingLabs_A1000/Integrations/ReversingLabsA1000/README.md
+++ b/Packs/ReversingLabs_A1000/Integrations/ReversingLabsA1000/README.md
@@ -111,7 +111,7 @@
Human Readable Output
-
+
Raw Output
{
@@ -154,7 +154,7 @@
Human Readable Output
-
+
Raw Output
{
"code": 200,
@@ -193,7 +193,7 @@
Human Readable Output
-
+
Raw Output
There is no raw output for this command.
When the command runs successfully, you get a downloadable file.
@@ -232,7 +232,7 @@
Human Readable Output
-
+
Raw Output
There is no raw output for this command.
@@ -272,7 +272,7 @@
Human Readable Output
-
+
Raw Output
{
"count": 5,
@@ -358,7 +358,7 @@ ReversingLabs
Human Readable Output
-
+
Raw Output
{
"code": 200,
@@ -399,7 +399,7 @@ ReversingLabs
Human Readable Output
-
+
Context Output
diff --git a/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip0.png b/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip0.png
new file mode 100644
index 000000000000..007fb58d0646
Binary files /dev/null and b/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip0.png differ
diff --git a/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip1.png b/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip1.png
new file mode 100644
index 000000000000..537960a868e3
Binary files /dev/null and b/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip1.png differ
diff --git a/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip2.png b/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip2.png
new file mode 100644
index 000000000000..e07dced2f573
Binary files /dev/null and b/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip2.png differ
diff --git a/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip3.png b/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip3.png
new file mode 100644
index 000000000000..fc4be062697d
Binary files /dev/null and b/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip3.png differ
diff --git a/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip4.png b/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip4.png
new file mode 100644
index 000000000000..abecb7773c7f
Binary files /dev/null and b/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip4.png differ
diff --git a/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip5.png b/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip5.png
new file mode 100644
index 000000000000..463c3c5dfb39
Binary files /dev/null and b/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip5.png differ
diff --git a/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip6.png b/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip6.png
new file mode 100644
index 000000000000..23eaad62f258
Binary files /dev/null and b/Packs/ReversingLabs_A1000/doc_files/integration-ReversingLabs_A1000_mceclip6.png differ
diff --git a/Packs/ReversingLabs_Titanium_Cloud/Integrations/ReversingLabsTitaniumCloud/README.md b/Packs/ReversingLabs_Titanium_Cloud/Integrations/ReversingLabsTitaniumCloud/README.md
index 8bd550ebb674..dd1894dcca30 100644
--- a/Packs/ReversingLabs_Titanium_Cloud/Integrations/ReversingLabsTitaniumCloud/README.md
+++ b/Packs/ReversingLabs_Titanium_Cloud/Integrations/ReversingLabsTitaniumCloud/README.md
@@ -92,9 +92,9 @@
Human Readable Output (extended = false)
-
+
Human Readable Output (extended = true)
-
+
Context Output
diff --git a/Packs/ReversingLabs_Titanium_Cloud/doc_files/integration-ReversingLabs_Titanium_Cloud_mceclip0.png b/Packs/ReversingLabs_Titanium_Cloud/doc_files/integration-ReversingLabs_Titanium_Cloud_mceclip0.png
new file mode 100644
index 000000000000..67ad3637d30c
Binary files /dev/null and b/Packs/ReversingLabs_Titanium_Cloud/doc_files/integration-ReversingLabs_Titanium_Cloud_mceclip0.png differ
diff --git a/Packs/ReversingLabs_Titanium_Cloud/doc_files/integration-ReversingLabs_Titanium_Cloud_mceclip1.png b/Packs/ReversingLabs_Titanium_Cloud/doc_files/integration-ReversingLabs_Titanium_Cloud_mceclip1.png
new file mode 100644
index 000000000000..86db86824718
Binary files /dev/null and b/Packs/ReversingLabs_Titanium_Cloud/doc_files/integration-ReversingLabs_Titanium_Cloud_mceclip1.png differ
diff --git a/Packs/RoksitDNSSecurity/Integrations/RoksitDNSSecurity/RoksitDNSSecurity.yml b/Packs/RoksitDNSSecurity/Integrations/RoksitDNSSecurity/RoksitDNSSecurity.yml
index d93e9b5698f3..f9d893fd06d6 100644
--- a/Packs/RoksitDNSSecurity/Integrations/RoksitDNSSecurity/RoksitDNSSecurity.yml
+++ b/Packs/RoksitDNSSecurity/Integrations/RoksitDNSSecurity/RoksitDNSSecurity.yml
@@ -29,7 +29,7 @@ script:
required: true
description: This command adds a given domain to the Roksit blacklist.
name: Roksit-add-to-blacklist
- dockerimage: demisto/python3:3.10.13.73190
+ dockerimage: demisto/python3:3.11.10.116439
runonce: true
script: ''
subtype: python3
diff --git a/Packs/RoksitDNSSecurity/ReleaseNotes/1_0_1.md b/Packs/RoksitDNSSecurity/ReleaseNotes/1_0_1.md
new file mode 100644
index 000000000000..8aaa4aa5a17f
--- /dev/null
+++ b/Packs/RoksitDNSSecurity/ReleaseNotes/1_0_1.md
@@ -0,0 +1,9 @@
+
+#### Integrations
+
+##### Roksit DNS Security (DNSSense)
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
diff --git a/Packs/RoksitDNSSecurity/pack_metadata.json b/Packs/RoksitDNSSecurity/pack_metadata.json
index 7770601318c6..57cd2324b6dc 100644
--- a/Packs/RoksitDNSSecurity/pack_metadata.json
+++ b/Packs/RoksitDNSSecurity/pack_metadata.json
@@ -2,12 +2,12 @@
"name": "Roksit DNS Security",
"description": "This integration provides adding selected domains to the Roksit Secure DNS's Blacklisted Domain List through API .",
"support": "community",
- "currentVersion": "1.0.0",
+ "currentVersion": "1.0.1",
"author": "Asim Sarp Kurt",
"url": "",
"email": "asimsarpkurt@gmail.com",
"created": "2023-08-29T22:53:41Z",
- "categories": [
+ "categories": [
"Network Security"
],
"tags": [],
@@ -20,4 +20,4 @@
"githubUser": [
"asimsarpkurt"
]
-}
+}
\ No newline at end of file
diff --git a/Packs/SEKOIAIntelligenceCenter/Integrations/SEKOIAIntelligenceCenter/SEKOIAIntelligenceCenter.yml b/Packs/SEKOIAIntelligenceCenter/Integrations/SEKOIAIntelligenceCenter/SEKOIAIntelligenceCenter.yml
index 1553341b55b7..952b9ed6139c 100644
--- a/Packs/SEKOIAIntelligenceCenter/Integrations/SEKOIAIntelligenceCenter/SEKOIAIntelligenceCenter.yml
+++ b/Packs/SEKOIAIntelligenceCenter/Integrations/SEKOIAIntelligenceCenter/SEKOIAIntelligenceCenter.yml
@@ -1255,7 +1255,7 @@ script:
script: '-'
type: python
subtype: python3
- dockerimage: demisto/py3-tools:1.0.0.102774
+ dockerimage: demisto/py3-tools:1.0.0.114656
fromversion: 6.2.0
tests:
- No tests (auto formatted)
diff --git a/Packs/SEKOIAIntelligenceCenter/ReleaseNotes/1_2_34.md b/Packs/SEKOIAIntelligenceCenter/ReleaseNotes/1_2_34.md
new file mode 100644
index 000000000000..00e689f32044
--- /dev/null
+++ b/Packs/SEKOIAIntelligenceCenter/ReleaseNotes/1_2_34.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### SEKOIAIntelligenceCenter
+
+
+- Updated the Docker image to: *demisto/py3-tools:1.0.0.114656*.
diff --git a/Packs/SEKOIAIntelligenceCenter/pack_metadata.json b/Packs/SEKOIAIntelligenceCenter/pack_metadata.json
index 93205883d5c9..1edfd3e2d91e 100644
--- a/Packs/SEKOIAIntelligenceCenter/pack_metadata.json
+++ b/Packs/SEKOIAIntelligenceCenter/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "SEKOIAIntelligenceCenter",
"description": "Request SEKOIA.IO Intelligence Center from Cortex XSOAR",
"support": "partner",
- "currentVersion": "1.2.33",
+ "currentVersion": "1.2.34",
"author": "SEKOIA.IO",
"url": "https://www.sekoia.io/en/contact/",
"email": "contact@sekoia.io",
diff --git a/Packs/SNDBOX/Integrations/SNDBOX/README.md b/Packs/SNDBOX/Integrations/SNDBOX/README.md
index 3449b5b7e1d7..25fb8799105f 100644
--- a/Packs/SNDBOX/Integrations/SNDBOX/README.md
+++ b/Packs/SNDBOX/Integrations/SNDBOX/README.md
@@ -186,9 +186,9 @@
Command Example
!sndbox-analysis-info analysis_id="65577395-48d8-4d51-bc97-bc2486f49ca0"
Context Example
-
+
Human Readable Output
-
+
3. Submit a sample for analysis
Submit a sample for analysis.
@@ -313,9 +313,9 @@
Command Example
!sndbox-analysis-submit-sample file_id="288@670"
Context Example
-
+
Human Readable Output
-
+
4. Download a report resource
Download a resource belonging to a report. This can be the full report, dropped binaries, etc.
@@ -390,9 +390,9 @@
Command Example
!sndbox-download-report analysis_id=65577395-48d8-4d51-bc97-bc2486f49ca0 type=json
Context Example
-
+
Human Readable Output
-
+
5. (Deprecated) Detonate a file
Submit a sample for detonation. This command is deprecated.
@@ -604,4 +604,4 @@
Command Example
!sndbox-download-sample analysis_id=65577395-48d8-4d51-bc97-bc2486f49ca0
Context Example
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/Packs/SNDBOX/doc_files/49791216-a3a07d00-fd38-11e8-9b59-9dd1df80cd68.png b/Packs/SNDBOX/doc_files/49791216-a3a07d00-fd38-11e8-9b59-9dd1df80cd68.png
new file mode 100644
index 000000000000..3afea153886d
Binary files /dev/null and b/Packs/SNDBOX/doc_files/49791216-a3a07d00-fd38-11e8-9b59-9dd1df80cd68.png differ
diff --git a/Packs/SNDBOX/doc_files/49791293-d9456600-fd38-11e8-9996-a96434e70e57.png b/Packs/SNDBOX/doc_files/49791293-d9456600-fd38-11e8-9996-a96434e70e57.png
new file mode 100644
index 000000000000..528f8551185e
Binary files /dev/null and b/Packs/SNDBOX/doc_files/49791293-d9456600-fd38-11e8-9996-a96434e70e57.png differ
diff --git a/Packs/SNDBOX/doc_files/49791444-2d504a80-fd39-11e8-98fd-38b2fefe17f3.png b/Packs/SNDBOX/doc_files/49791444-2d504a80-fd39-11e8-98fd-38b2fefe17f3.png
new file mode 100644
index 000000000000..de19d5e98a54
Binary files /dev/null and b/Packs/SNDBOX/doc_files/49791444-2d504a80-fd39-11e8-98fd-38b2fefe17f3.png differ
diff --git a/Packs/SNDBOX/doc_files/49791577-72747c80-fd39-11e8-9798-43ec28f944a2.png b/Packs/SNDBOX/doc_files/49791577-72747c80-fd39-11e8-9798-43ec28f944a2.png
new file mode 100644
index 000000000000..3ecf95cb09d0
Binary files /dev/null and b/Packs/SNDBOX/doc_files/49791577-72747c80-fd39-11e8-9798-43ec28f944a2.png differ
diff --git a/Packs/SNDBOX/doc_files/49791673-ae0f4680-fd39-11e8-8655-e3aa2d84a508.png b/Packs/SNDBOX/doc_files/49791673-ae0f4680-fd39-11e8-8655-e3aa2d84a508.png
new file mode 100644
index 000000000000..e52270d27692
Binary files /dev/null and b/Packs/SNDBOX/doc_files/49791673-ae0f4680-fd39-11e8-8655-e3aa2d84a508.png differ
diff --git a/Packs/SNDBOX/doc_files/49791827-f0d11e80-fd39-11e8-9ab4-bff7177937e5.png b/Packs/SNDBOX/doc_files/49791827-f0d11e80-fd39-11e8-9ab4-bff7177937e5.png
new file mode 100644
index 000000000000..4a826e95b121
Binary files /dev/null and b/Packs/SNDBOX/doc_files/49791827-f0d11e80-fd39-11e8-9ab4-bff7177937e5.png differ
diff --git a/Packs/SafeBreach/Integrations/SafeBreach/SafeBreach.py b/Packs/SafeBreach/Integrations/SafeBreach/SafeBreach.py
index e9f329f41723..db0d18b5f070 100644
--- a/Packs/SafeBreach/Integrations/SafeBreach/SafeBreach.py
+++ b/Packs/SafeBreach/Integrations/SafeBreach/SafeBreach.py
@@ -1513,7 +1513,7 @@ def delete_schedule(self):
schedule_id = int(demisto.args().get("schedule_id"))
method = "DELETE"
- url = f"/config/v1/accounts/{account_id}/plans/{schedule_id}"
+ url = f"/config/v2/accounts/{account_id}/plans/{schedule_id}"
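+        # NOTE: the v1 plans endpoint is deprecated; v2 is the current delete-plan API
+        # (see this pack's release notes 1.4.4).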
schedule_data = self.get_response(url=url, method=method)
return schedule_data
diff --git a/Packs/SafeBreach/Integrations/SafeBreach/SafeBreach.yml b/Packs/SafeBreach/Integrations/SafeBreach/SafeBreach.yml
index d4999a5d6e50..0b42fba72db7 100644
--- a/Packs/SafeBreach/Integrations/SafeBreach/SafeBreach.yml
+++ b/Packs/SafeBreach/Integrations/SafeBreach/SafeBreach.yml
@@ -2039,7 +2039,7 @@ script:
script: '-'
type: python
subtype: python3
- dockerimage: demisto/python3:3.10.14.97374
+ dockerimage: demisto/python3:3.11.10.113941
feed: false
isfetch: false
runonce: false
diff --git a/Packs/SafeBreach/ReleaseNotes/1_4_4.md b/Packs/SafeBreach/ReleaseNotes/1_4_4.md
new file mode 100644
index 000000000000..5e49d7ffcf27
--- /dev/null
+++ b/Packs/SafeBreach/ReleaseNotes/1_4_4.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### SafeBreach
+
+- Updated the SafeBreach deprecated delete plan API.
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/SafeBreach/pack_metadata.json b/Packs/SafeBreach/pack_metadata.json
index 6ddef73aa43c..82e91ac5207c 100644
--- a/Packs/SafeBreach/pack_metadata.json
+++ b/Packs/SafeBreach/pack_metadata.json
@@ -5,7 +5,7 @@
"videos": [
"https://www.youtube.com/watch?v=Wb7q5Gbd2qo"
],
- "currentVersion": "1.4.3",
+ "currentVersion": "1.4.4",
"author": "SafeBreach",
"url": "https://www.safebreach.com",
"email": "support@safebreach.com",
diff --git a/Packs/SafeNet_Trusted_Access/Integrations/SafeNetTrustedAccess/SafeNetTrustedAccess.yml b/Packs/SafeNet_Trusted_Access/Integrations/SafeNetTrustedAccess/SafeNetTrustedAccess.yml
index 9e01e9bdc1ad..feb9d6b9365b 100644
--- a/Packs/SafeNet_Trusted_Access/Integrations/SafeNetTrustedAccess/SafeNetTrustedAccess.yml
+++ b/Packs/SafeNet_Trusted_Access/Integrations/SafeNetTrustedAccess/SafeNetTrustedAccess.yml
@@ -915,7 +915,7 @@ script:
- contextPath: STA.USER.SESSION.DELETED
description: Returns true, if all the user SSO sessions deleted successfully.
type: boolean
- dockerimage: demisto/python3:3.10.13.72123
+ dockerimage: demisto/python3:3.11.10.116439
runonce: false
script: ''
subtype: python3
diff --git a/Packs/SafeNet_Trusted_Access/ReleaseNotes/2_0_40.md b/Packs/SafeNet_Trusted_Access/ReleaseNotes/2_0_40.md
new file mode 100644
index 000000000000..108b5a3e9a2c
--- /dev/null
+++ b/Packs/SafeNet_Trusted_Access/ReleaseNotes/2_0_40.md
@@ -0,0 +1,10 @@
+
+#### Integrations
+
+##### Thales SafeNet Trusted Access
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+
diff --git a/Packs/SafeNet_Trusted_Access/pack_metadata.json b/Packs/SafeNet_Trusted_Access/pack_metadata.json
index 464aa7fbec6b..95140d2795ec 100644
--- a/Packs/SafeNet_Trusted_Access/pack_metadata.json
+++ b/Packs/SafeNet_Trusted_Access/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Thales SafeNet Trusted Access",
"description": "SafeNet Trusted Access by Thales is an access management solution that allows organizations to centrally manage and secure access to business applications.",
"support": "partner",
- "currentVersion": "2.0.39",
+ "currentVersion": "2.0.40",
"author": "Thales",
"url": "https://supportportal.gemalto.com/csm/?id=portal_home_page",
"email": "",
diff --git a/Packs/SailPointIdentityIQ/Integrations/SailPointIdentityIQ/SailPointIdentityIQ.yml b/Packs/SailPointIdentityIQ/Integrations/SailPointIdentityIQ/SailPointIdentityIQ.yml
index 92da40a93dc1..7855cd2ae39d 100644
--- a/Packs/SailPointIdentityIQ/Integrations/SailPointIdentityIQ/SailPointIdentityIQ.yml
+++ b/Packs/SailPointIdentityIQ/Integrations/SailPointIdentityIQ/SailPointIdentityIQ.yml
@@ -468,7 +468,7 @@ script:
- contextPath: IdentityIQ.Alert.application
description: List of applications that are related to this alert.
type: String
- dockerimage: demisto/python3:3.10.13.78960
+ dockerimage: demisto/python3:3.11.10.116439
isfetch: true
script: ''
subtype: python3
diff --git a/Packs/SailPointIdentityIQ/ReleaseNotes/1_0_16.md b/Packs/SailPointIdentityIQ/ReleaseNotes/1_0_16.md
new file mode 100644
index 000000000000..56a271009fe4
--- /dev/null
+++ b/Packs/SailPointIdentityIQ/ReleaseNotes/1_0_16.md
@@ -0,0 +1,9 @@
+
+#### Integrations
+
+##### SailPoint IdentityIQ
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
diff --git a/Packs/SailPointIdentityIQ/pack_metadata.json b/Packs/SailPointIdentityIQ/pack_metadata.json
index bde47ed128c8..911370b8dd29 100644
--- a/Packs/SailPointIdentityIQ/pack_metadata.json
+++ b/Packs/SailPointIdentityIQ/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "SailPoint IdentityIQ",
"description": "SailPoint IdentityIQ context pack enables XSOAR customers to utilize the deep, enriched contextual data in the SailPoint predictive identity platform to better drive identity-aware security practices.",
"support": "partner",
- "currentVersion": "1.0.15",
+ "currentVersion": "1.0.16",
"author": "SailPoint",
"url": "https://support.sailpoint.com/hc/en-us/requests/new",
"email": "support.idplusa@sailpoint.com",
diff --git a/Packs/ScreenshotMachine/Integrations/ScreenshotMachine/ScreenshotMachine.yml b/Packs/ScreenshotMachine/Integrations/ScreenshotMachine/ScreenshotMachine.yml
index bf2b9413f99e..a498c48bf24d 100644
--- a/Packs/ScreenshotMachine/Integrations/ScreenshotMachine/ScreenshotMachine.yml
+++ b/Packs/ScreenshotMachine/Integrations/ScreenshotMachine/ScreenshotMachine.yml
@@ -77,7 +77,7 @@ script:
name: md5Secret
description: Retrieve screenshot
name: screenshot-machine-get-screenshot
- dockerimage: demisto/python3:3.10.12.68714
+ dockerimage: demisto/python3:3.11.10.116439
runonce: false
script: ''
subtype: python3
diff --git a/Packs/ScreenshotMachine/ReleaseNotes/1_0_6.md b/Packs/ScreenshotMachine/ReleaseNotes/1_0_6.md
new file mode 100644
index 000000000000..408b70633842
--- /dev/null
+++ b/Packs/ScreenshotMachine/ReleaseNotes/1_0_6.md
@@ -0,0 +1,9 @@
+
+#### Integrations
+
+##### Screenshot Machine
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
diff --git a/Packs/ScreenshotMachine/pack_metadata.json b/Packs/ScreenshotMachine/pack_metadata.json
index 24eef74e9630..15a2b84983a6 100644
--- a/Packs/ScreenshotMachine/pack_metadata.json
+++ b/Packs/ScreenshotMachine/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Screenshot Machine",
"description": "This is an integration for Screenshot Machine.\nCapture any online web page with website screenshot API.",
"support": "community",
- "currentVersion": "1.0.5",
+ "currentVersion": "1.0.6",
"author": "Art Norton",
"url": "",
"email": "",
diff --git a/Packs/SecurityIntelligenceServicesFeed/Integrations/SecurityIntelligenceServicesFeed/SecurityIntelligenceServicesFeed.py b/Packs/SecurityIntelligenceServicesFeed/Integrations/SecurityIntelligenceServicesFeed/SecurityIntelligenceServicesFeed.py
index 506ca695a699..87ccc91400cf 100644
--- a/Packs/SecurityIntelligenceServicesFeed/Integrations/SecurityIntelligenceServicesFeed/SecurityIntelligenceServicesFeed.py
+++ b/Packs/SecurityIntelligenceServicesFeed/Integrations/SecurityIntelligenceServicesFeed/SecurityIntelligenceServicesFeed.py
@@ -3,7 +3,7 @@
'''IMPORTS'''
from typing import Dict, Any, List, Union, Optional, Generator
-from datetime import timezone
+from datetime import UTC
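+# NOTE: datetime.UTC is an alias of timezone.utc and is available from Python 3.11.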
import csv
import gzip
import boto3 as s3
@@ -331,7 +331,7 @@ def prepare_date_string_for_custom_fields(date_string: str) -> str:
parsed_dt = dateparser.parse(date_string)
if parsed_dt:
if parsed_dt.tzinfo is None:
- parsed_dt = parsed_dt.replace(tzinfo=timezone.utc)
+ parsed_dt = parsed_dt.replace(tzinfo=UTC)
return parsed_dt.isoformat()
return ''
@@ -357,7 +357,7 @@ def indicator_field_mapping(feed_type: str, indicator: Dict[str, Any], tags: Lis
if feed_type == 'domain':
if indicator.get('Timestamp'):
fields['firstseenbysource'] = datetime.fromtimestamp(int(indicator.get('Timestamp')), # type: ignore
- timezone.utc).isoformat()
+ UTC).isoformat()
else:
fields['threattypes'] = [{'threatcategory': feed_type.capitalize() if feed_type != 'phish' else 'Phishing'}]
if indicator.get('MatchType'):
@@ -421,7 +421,7 @@ def get_latest_key(client: Client, feed_type: str, first_fetch_interval: str,
return object_key_list[-1].get('Key', '') if object_key_list else cached_key
# Parsing first fetch time.
- date_from, now = dateparser.parse(f'{first_fetch_interval} UTC'), datetime.now(timezone.utc)
+ date_from, now = dateparser.parse(f'{first_fetch_interval} UTC'), datetime.now(UTC)
# Fetching latest object keys.
latest_key_list: List[str] = [key_dict.get('Key', '') for key_dict in
diff --git a/Packs/SecurityIntelligenceServicesFeed/Integrations/SecurityIntelligenceServicesFeed/SecurityIntelligenceServicesFeed.yml b/Packs/SecurityIntelligenceServicesFeed/Integrations/SecurityIntelligenceServicesFeed/SecurityIntelligenceServicesFeed.yml
index f1652f794aa4..5905fcd38593 100644
--- a/Packs/SecurityIntelligenceServicesFeed/Integrations/SecurityIntelligenceServicesFeed/SecurityIntelligenceServicesFeed.yml
+++ b/Packs/SecurityIntelligenceServicesFeed/Integrations/SecurityIntelligenceServicesFeed/SecurityIntelligenceServicesFeed.yml
@@ -128,7 +128,7 @@ script:
name: search
description: Gets indicators from Security Intelligence Services feed. Note- Indicators will fetch from the latest found object.
name: sis-get-indicators
- dockerimage: demisto/boto3py3:1.0.0.98661
+ dockerimage: demisto/boto3py3:1.0.0.116921
feed: true
runonce: false
script: '-'
diff --git a/Packs/SecurityIntelligenceServicesFeed/ReleaseNotes/1_0_39.md b/Packs/SecurityIntelligenceServicesFeed/ReleaseNotes/1_0_39.md
new file mode 100644
index 000000000000..34ea4ce4488c
--- /dev/null
+++ b/Packs/SecurityIntelligenceServicesFeed/ReleaseNotes/1_0_39.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Security Intelligence Services Feed
+
+
+- Updated the Docker image to: *demisto/boto3py3:1.0.0.116921*.
diff --git a/Packs/SecurityIntelligenceServicesFeed/pack_metadata.json b/Packs/SecurityIntelligenceServicesFeed/pack_metadata.json
index 4dbcb2dc3ce8..5c719bebadda 100644
--- a/Packs/SecurityIntelligenceServicesFeed/pack_metadata.json
+++ b/Packs/SecurityIntelligenceServicesFeed/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Security Intelligence Services Feed",
"description": "A PassiveTotal with Security Intelligence Services Feed can provide you newly observed Domain, Malware, Phishing, Content and Scam Blacklist.",
"support": "community",
- "currentVersion": "1.0.38",
+ "currentVersion": "1.0.39",
"author": "RiskIQ",
"url": "https://www.riskiq.com/resources/support/",
"email": "paloaltonetworks@riskiq.net",
diff --git a/Packs/SekoiaXDR/Classifiers/classifier-Sekoia_XDR_-_Incoming_Mapper.json b/Packs/SekoiaXDR/Classifiers/classifier-Sekoia_XDR_-_Incoming_Mapper.json
index 23899994b7b2..4df14a7cfe49 100644
--- a/Packs/SekoiaXDR/Classifiers/classifier-Sekoia_XDR_-_Incoming_Mapper.json
+++ b/Packs/SekoiaXDR/Classifiers/classifier-Sekoia_XDR_-_Incoming_Mapper.json
@@ -1,126 +1,141 @@
{
- "description": "Maps incoming Sekoia XDR incidents fields.\n",
- "feed": false,
- "id": "Sekoia XDR - Incoming Mapper",
- "mapping": {
- "Sekoia XDR": {
- "dontMapEventToLabels": true,
- "internalMapping": {
- "Alert Category": {
- "simple": "alert_type.category"
- },
- "Alert ID": {
- "simple": "short_id"
- },
- "Alert Name": {
- "simple": "title"
- },
- "Alert Type ID": {
- "simple": "alert_type.value"
- },
- "Description": {
- "simple": "details"
- },
- "External Link": {
- "simple": "target"
- },
- "Last Seen": {
- "simple": "last_seen_at"
- },
- "SekoiaXDR Alert Details": {
- "simple": "details"
- },
- "SekoiaXDR Alert Status": {
- "simple": "status.name"
- },
- "SekoiaXDR First Seen": {
- "simple": "first_seen_at"
- },
- "SekoiaXDR Kill Chain": {
- "complex": {
- "filters": [],
- "root": "kill_chain",
- "transformers": []
- }
- },
- "Source Create time": {
- "complex": {
- "filters": [],
- "root": "created_at",
- "transformers": [
- {
- "operator": "TimeStampToDate"
- }
- ]
- }
- },
- "Source IP": {
- "simple": "source"
- },
- "dbotMirrorInstance": {
- "simple": "mirror_instance"
- }
- }
- },
- "dbot_classification_incident_type_all": {
- "dontMapEventToLabels": false,
- "internalMapping": {
- "Alert Category": {
- "simple": "alert_type.category"
- },
- "Alert ID": {
- "simple": "short_id"
- },
- "Alert Name": {
- "simple": "title"
- },
- "Alert Type ID": {
- "simple": "alert_type.value"
- },
- "Description": {
- "simple": "details"
- },
- "Last Seen": {
- "simple": "last_seen_at"
- },
- "SekoiaXDR Alert Details": {
- "simple": "details"
- },
- "SekoiaXDR Alert Status": {
- "simple": "status.name"
- },
- "SekoiaXDR First Seen": {
- "simple": "first_seen_at"
- },
- "SekoiaXDR Kill Chain": {
- "complex": {
- "filters": [],
- "root": "kill_chain",
- "transformers": []
- }
- },
- "Source Create time": {
- "complex": {
- "filters": [],
- "root": "created_at",
- "transformers": [
- {
- "operator": "TimeStampToDate"
- }
- ]
- }
- },
- "Source IP": {
- "simple": "source"
- },
- "dbotMirrorInstance": {
- "simple": "mirror_instance"
- }
- }
- }
- },
- "name": "Sekoia XDR - Incoming Mapper",
- "type": "mapping-incoming",
+ "description": "Maps incoming Sekoia XDR incidents fields.\n",
+ "feed": false,
+ "id": "Sekoia XDR - Incoming Mapper",
+ "mapping": {
+ "Sekoia XDR": {
+ "dontMapEventToLabels": true,
+ "internalMapping": {
+ "Alert Category": {
+ "simple": "alert_type.category"
+ },
+ "Alert ID": {
+ "simple": "short_id"
+ },
+ "Alert Name": {
+ "simple": "title"
+ },
+ "Alert Type ID": {
+ "simple": "alert_type.value"
+ },
+ "Description": {
+ "simple": "details"
+ },
+ "External Link": {
+ "simple": "target"
+ },
+ "Last Seen": {
+ "simple": "last_seen_at"
+ },
+ "SekoiaXDR Alert Details": {
+ "simple": "details"
+ },
+ "SekoiaXDR Alert Status": {
+ "simple": "alert_status"
+ },
+ "SekoiaXDR First Seen": {
+ "simple": "first_seen_at"
+ },
+ "SekoiaXDR Kill Chain": {
+ "complex": {
+ "filters": [],
+ "root": "kill_chain",
+ "transformers": []
+ }
+ },
+ "SekoiaXDR MirrorOut": {
+ "simple": "mirrorOut"
+ },
+ "Source Create time": {
+ "complex": {
+ "filters": [],
+ "root": "created_at",
+ "transformers": [
+ {
+ "operator": "TimeStampToDate"
+ }
+ ]
+ }
+ },
+ "Source IP": {
+ "simple": "source"
+ },
+ "dbotMirrorDirection": {
+ "simple": "mirror_direction"
+ },
+ "dbotMirrorId": {
+ "simple": "mirrored_id"
+ },
+ "dbotMirrorInstance": {
+ "simple": "mirror_instance"
+ }
+ }
+ },
+ "dbot_classification_incident_type_all": {
+ "dontMapEventToLabels": false,
+ "internalMapping": {
+ "Alert Category": {
+ "simple": "alert_type.category"
+ },
+ "Alert ID": {
+ "simple": "short_id"
+ },
+ "Alert Name": {
+ "simple": "title"
+ },
+ "Alert Type ID": {
+ "simple": "alert_type.value"
+ },
+ "Description": {
+ "simple": "details"
+ },
+ "Last Seen": {
+ "simple": "last_seen_at"
+ },
+ "SekoiaXDR Alert Details": {
+ "simple": "details"
+ },
+ "SekoiaXDR Alert Status": {
+ "simple": "status.name"
+ },
+ "SekoiaXDR First Seen": {
+ "simple": "first_seen_at"
+ },
+ "SekoiaXDR Kill Chain": {
+ "complex": {
+ "filters": [],
+ "root": "kill_chain",
+ "transformers": []
+ }
+ },
+ "Source Create time": {
+ "complex": {
+ "filters": [],
+ "root": "created_at",
+ "transformers": [
+ {
+ "operator": "TimeStampToDate"
+ }
+ ]
+ }
+ },
+ "Source IP": {
+ "simple": "source"
+ },
+ "dbotMirrorDirection": {
+ "simple": "mirror_direction"
+ },
+ "dbotMirrorId": {
+ "simple": "mirrored_id"
+ },
+ "dbotMirrorInstance": {
+ "simple": "mirror_instance"
+ }
+ }
+ }
+ },
+ "name": "Sekoia XDR - Incoming Mapper",
+ "type": "mapping-incoming",
"version": -1,
"fromVersion": "6.10.0"
}
\ No newline at end of file
diff --git a/Packs/SekoiaXDR/IncidentFields/incident_sekoia_xdr_mirrorout_field.json b/Packs/SekoiaXDR/IncidentFields/incident_sekoia_xdr_mirrorout_field.json
new file mode 100644
index 000000000000..859a978102c7
--- /dev/null
+++ b/Packs/SekoiaXDR/IncidentFields/incident_sekoia_xdr_mirrorout_field.json
@@ -0,0 +1,30 @@
+{
+ "id": "incident_sekoiaxdrmirrorout",
+ "version": -1,
+ "modified": "2024-11-04T11:12:46.451426844Z",
+ "name": "SekoiaXDR MirrorOut",
+ "cliName": "sekoiaxdrmirrorout",
+ "type": "boolean",
+ "closeForm": false,
+ "editForm": true,
+ "required": false,
+ "neverSetAsRequired": false,
+ "isReadOnly": false,
+ "useAsKpi": false,
+ "locked": false,
+ "system": false,
+ "content": true,
+ "group": 0,
+ "hidden": false,
+ "openEnded": false,
+ "associatedTypes": [
+ "Sekoia XDR"
+ ],
+ "associatedToAll": false,
+ "unmapped": false,
+ "unsearchable": true,
+ "caseInsensitive": true,
+ "sla": 0,
+ "threshold": 72,
+ "fromVersion": "6.10.0"
+}
\ No newline at end of file
diff --git a/Packs/SekoiaXDR/Integrations/SekoiaXDR/README.md b/Packs/SekoiaXDR/Integrations/SekoiaXDR/README.md
index bf55af08538b..304d6f9f9749 100644
--- a/Packs/SekoiaXDR/Integrations/SekoiaXDR/README.md
+++ b/Packs/SekoiaXDR/Integrations/SekoiaXDR/README.md
@@ -12,7 +12,7 @@ This integration was integrated and tested with version 1.0 of Sekoia XDR.
| --- | --- | --- |
| API key | | True |
| API Key | | True |
- | Server URL (i.e. https://api.sekoia.io) | | True |
+    | Server URL | | True |
| Trust any certificate (not secure) | | False |
| Use system proxy settings | | False |
| Fetch incidents | | False |
@@ -27,7 +27,13 @@ This integration was integrated and tested with version 1.0 of Sekoia XDR.
| Replace "dots" in event field names with another character. | Replacing dots in events will make names look pretty good for users | True |
| Events fields to exclude from the events search result. | These are the names of the headers presented in the events table. If the header is not in the dropdown list write it and press enter. | False |
| Include assets information in the alerts when fetching. | When selected, it includes the assets information in the alert when fetched from Sekoia. And also If there's no max_fetch it will fetch 10 incidents by default. | False |
- | Include kill chain information in the alerts when fetching. | When selected, it includes the kill chain information in the alert when fetched from Sekoia. And also If there's no max_fetch it will fetch 10 incidents by default. | False |
+    | Include kill chain information in the alerts when fetching. | When selected, it includes the kill chain information in the alert when fetched from Sekoia. If there is no max_fetch, it will fetch 10 incidents by default. | False |
+    | Incident Mirroring Direction. | Choose the direction to mirror the incident: None \(Disable mirroring\), Incoming \(from Sekoia XDR to Cortex XSOAR\), Outgoing \(from Cortex XSOAR to Sekoia XDR\), or Incoming and Outgoing \(from/to Cortex XSOAR and Sekoia XDR\). | True |
+ | Include events in the mirroring of the alerts. | When selected, it includes the events in the mirrored alerts when an alert is updated in Sekoia. | False |
+ | Include kill chain information in the mirroring of the alerts. | When selected, it includes the kill chain information of the alert in the mirrored alerts when an alert is updated in Sekoia. | False |
+ | Reopen Mirrored Cortex XSOAR Incidents (Incoming Mirroring) | When selected, reopening the Sekoia XDR alert will reopen the Cortex XSOAR incident. | False |
+    | Close Mirrored Cortex XSOAR Incidents (Incoming Mirroring) | When selected, closing the Sekoia XDR alert with a "Closed" or "Rejected" status will close the Cortex XSOAR incident. | False |
+    | Close notes. | The closing notes that will be added to the incidents closed automatically by mirroring. | True |
    | Timezone ( TZ format ) | This will be used to present dates in the appropriate timezones, used for comment timestamps, etc. | True |
4. Click **Test** to validate the URLs, token, and connection.
@@ -524,6 +530,62 @@ Get an asset by its UUID from Sekoia XDR.
| SekoiaXDR.Asset.name | unknown | The name of the asset. |
| SekoiaXDR.Asset.uuid | unknown | The UUID of the asset. |
+### get-remote-data
+
+***
+This command gets new information about the incidents in the remote system and updates existing incidents in Cortex XSOAR.
+
+#### Base Command
+
+`get-remote-data`
+
+#### Input
+
+| **Argument Name** | **Description** | **Required** |
+| --- | --- | --- |
+| id | The remote alert ID. | Optional |
+| lastUpdate | ISO format date with timezone, e.g., 2023-03-01T16:41:30.589575+02:00. The incident is only updated if it was modified after the last update time. Default is 0. | Optional |
+
+#### Context Output
+
+There is no context output for this command.
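+
+#### Command example
+
+A hypothetical manual invocation for debugging; the alert ID below is the sample short ID used in this integration's query examples:
+
+```!get-remote-data id="ALUnyZCYZ9Ga" lastUpdate="2023-03-01T16:41:30.589575+02:00"```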
+
+### get-modified-remote-data
+
+***
+Available from Cortex XSOAR version 6.1.0. This command queries for incidents that were modified since the last update.
+
+#### Base Command
+
+`get-modified-remote-data`
+
+#### Input
+
+| **Argument Name** | **Description** | **Required** |
+| --- | --- | --- |
+| lastUpdate | ISO format date with timezone, e.g., 2023-03-01T16:41:30.589575+02:00. The incident is only returned if it was modified after the last update time. Default is 0. | Optional |
+
+#### Context Output
+
+There is no context output for this command.
+
+### get-mapping-fields
+
+***
+This command pulls the remote schema for the different incident types, and their associated incident fields, from the remote system.
+
+#### Base Command
+
+`get-mapping-fields`
+
+#### Input
+
+There are no input arguments for this command.
+
+#### Context Output
+
+There is no context output for this command.
### sekoia-xdr-list-assets
@@ -753,13 +815,35 @@ Command that performs a HTTP request to Sekoia using the integration authenticat
| **Argument Name** | **Description** | **Required** |
| --- | --- | --- |
| method | Method to use with the http request (GET,POST,etc). Default is GET. | Required |
-| url_sufix | The URL suffix after https://api.sekoia.io, i.e. /v1/sic/alerts/ or /v1/asset-management/assets/. | Required |
+| url_sufix | The URL suffix after the server URL, i.e. /v1/sic/alerts/ or /v1/asset-management/assets/. | Required |
| parameters | Query parameters, i.e. limit -> 10 , match['status_name'] -> Ongoing. | Optional |
#### Context Output
There is no context output for this command.
+## Incident Mirroring
+
+You can enable incident mirroring between Cortex XSOAR incidents and the corresponding Sekoia XDR events (available from Cortex XSOAR version 6.0.0).
+To set up the mirroring:
+
+1. Enable *Fetching incidents* in your instance configuration.
+2. In the *Mirroring Direction* integration parameter, select in which direction the incidents should be mirrored:
+
+ | **Option** | **Description** |
+ | --- | --- |
+ | None | Turns off incident mirroring. |
+ | Incoming | Any changes in Sekoia XDR events (mirroring incoming fields) will be reflected in Cortex XSOAR incidents. |
+ | Outgoing | Any changes in Cortex XSOAR incidents will be reflected in Sekoia XDR events (outgoing mirrored fields). |
+ | Incoming and Outgoing | Changes made in Sekoia will be reflected in Cortex, and vice versa, ensuring status updates are synchronized between both systems. |
+
+3. Optional: Check the *Close Mirrored Cortex XSOAR Incidents* integration parameter to close the Cortex XSOAR incident when the corresponding event is closed in Sekoia XDR.
+
+4. Optional: Check the *Reopen Mirrored Cortex XSOAR Incidents* integration parameter to reopen the Cortex XSOAR incident when the matching Sekoia XDR alert is reopened.
+
+Newly fetched incidents will be mirrored in the chosen direction. However, this selection does not affect existing incidents.
+**Important Note:** To ensure the mirroring works as expected, mappers are required, both for incoming and outgoing, to map the expected fields in Cortex XSOAR and Sekoia XDR.
+
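+Under the hood, each fetched incident is stamped with the XSOAR mirroring fields. The following is a minimal sketch of that logic, based on the `MIRROR_DIRECTION` mapping in this pack's `SekoiaXDR.py` (the helper function name is illustrative, not part of the integration):
+
+```python
+# How the configured direction becomes the dbot mirroring fields on each incident.
+MIRROR_DIRECTION = {
+    "None": None,
+    "Incoming": "In",
+    "Outgoing": None,  # outgoing sync is currently a no-op in the integration
+    "Incoming and Outgoing": "In",
+}
+
+
+def mirror_fields(mirror_direction: str, alert_short_id: str) -> dict:
+    """Return the dbot fields attached to a fetched incident."""
+    return {
+        "dbotMirrorDirection": MIRROR_DIRECTION.get(str(mirror_direction)),
+        "dbotMirrorId": alert_short_id,  # the Sekoia alert short ID
+    }
+```
+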
## Troubleshooting
To troubleshoot possible issues with the SEKOIA XDR integration, consider the following steps:
@@ -768,12 +852,38 @@ To troubleshoot possible issues with the SEKOIA XDR integration, consider the fo
- In your integration instance, enable the Debug option.
- Navigate to `Settings > About > Troubleshooting > Download logs` to download the logs. Analyzing these logs can provide valuable insights into any issues.
+- **Mirror Values**:
+ - To diagnose mirroring issues beyond what debug mode offers, you can inspect specific fields in the context data. Check if the following dbot fields are set:
+ - **dbotMirrorInstance**: Indicates the instance managing the mirroring.
+ - **dbotMirrorDirection**: Shows the direction of mirroring.
+    - **dbotMirrorId**: The remote alert short ID used to match the Cortex XSOAR incident with its Sekoia XDR alert.
+ - If these fields are not set, review the mappers to ensure that they are configured correctly.
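+  - For a quick check from the incident's War Room, a command like `!Print value=${incident.dbotMirrorDirection}` (using the built-in Print automation; the field path is an assumption based on standard XSOAR context syntax) should display the current value.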
+
+- **dbotMirrorLastSync Field**:
+ - The `dbotMirrorLastSync` field in the context data will update when the mirroring process updates an incident.
+ - You can observe these updates in the **War Room** as well, which will provide a log of the mirroring activity.
+
+By following these troubleshooting steps, you can effectively diagnose and resolve issues within the SEKOIA XDR integration.
+
+## Best Practices
+
+To make the most out of your SEKOIA XDR integration, consider the following best practices:
+
+- **Mirroring Changes**: When mirroring is enabled, please allow at least 1 minute for changes to be reflected. The mirroring process runs every 1 minute, ensuring that data between SEKOIA and Cortex is kept in sync.
+
+- **Handling Reopened Incidents**: If you have enabled the reopening option, the Cortex incident will be reopened under two specific conditions:
+ - **Reopened Alert in SEKOIA**: If an alert is reopened in SEKOIA, the corresponding incident in Cortex will also be reopened. This ensures that the incident tracking is consistent across both platforms.
+ - **Reopened Incident in Cortex**: If you reopen an incident directly in Cortex, you need to be cautious. After reopening the incident in Cortex, you should promptly change the status of the SEKOIA alert. Failing to do so might lead to the incident being automatically closed by the mirroring process.
+
+By adhering to these best practices, you can ensure a smoother and more effective synchronization between SEKOIA and your incident management platform.
+
## Additional documentation
The following documentation can be useful to understand the integration:
| Information | Description |
| --- | --- |
+| [Mirroring](https://xsoar.pan.dev/docs/integrations/mirroring_integration) | Additional information for mirroring |
| [Post process scripts](https://docs-cortex.paloaltonetworks.com/r/Cortex-XSOAR/6.5/Cortex-XSOAR-Administrator-Guide/Post-Processing-for-Incidents) | Additional information for post process scripts |
| [Sekoia XDR documentation](https://docs.sekoia.io/xdr/) | Sekoia XDR Documentation |
-| [Rest API Documentation](https://docs.sekoia.io/xdr/develop/rest_api/alert/) | Sekoia XDR API Documentation |
\ No newline at end of file
+| [Rest API Documentation](https://docs.sekoia.io/xdr/develop/rest_api/alert/) | Sekoia XDR API Documentation |
diff --git a/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR.py b/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR.py
index 47981563abaa..d79e0bc0e4c1 100644
--- a/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR.py
+++ b/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR.py
@@ -5,7 +5,7 @@
import json
import urllib3
import dateparser # type: ignore
-from typing import Any, Dict, Tuple, List, Optional, cast
+from typing import Any, cast
from datetime import datetime
import re
import pytz # type: ignore
@@ -19,6 +19,12 @@
DATE_FORMAT = "%Y-%m-%dT%H:%M:%S"
INTERVAL_SECONDS_EVENTS = 1
TIMEOUT_EVENTS = 30
+INCIDENT_TYPE_NAME = "Sekoia XDR"
+SEKOIA_INCIDENT_FIELDS = {
+ "short_id": "The ID of the alert to edit",
+ "status": "The name of the status.",
+}
+
STATUS_TRANSITIONS = {
"Ongoing": "Validate",
"Acknowledged": "Acknowledge",
@@ -26,6 +32,13 @@
"Closed": "Close",
}
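+# Maps the configured mirroring option to the XSOAR dbotMirrorDirection value.
+# "Outgoing" maps to None because update_remote_system_command is currently a
+# no-op, so only incoming mirroring is performed.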
+MIRROR_DIRECTION = {
+ "None": None,
+ "Incoming": "In",
+ "Outgoing": None,
+ "Incoming and Outgoing": "In",
+}
+
""" CLIENT CLASS """
@@ -49,15 +62,15 @@ def get_validate_resource(self) -> str:
def list_alerts(
self,
- alerts_limit: Optional[int],
- alerts_status: Optional[str],
- alerts_created_at: Optional[str],
- alerts_updated_at: Optional[str],
- alerts_urgency: Optional[str],
- alerts_type: Optional[str],
- sort_by: Optional[str],
- ) -> Dict[str, Any]:
- request_params: Dict[str, Any] = {}
+ alerts_limit: int | None,
+ alerts_status: str | None,
+ alerts_created_at: str | None,
+ alerts_updated_at: str | None,
+ alerts_urgency: str | None,
+ alerts_type: str | None,
+ sort_by: str | None,
+ ) -> dict[str, Any]:
+ request_params: dict[str, Any] = {}
""" Normal parameters"""
if alerts_limit:
@@ -83,15 +96,15 @@ def list_alerts(
method="GET", url_suffix="/v1/sic/alerts", params=request_params
)
- def get_alert(self, alert_uuid: str) -> Dict[str, Any]:
+ def get_alert(self, alert_uuid: str) -> dict[str, Any]:
return self._http_request(
method="GET", url_suffix=f"/v1/sic/alerts/{alert_uuid}"
)
def update_status_alert(
- self, alert_uuid: str, action_uuid: str, comment: Optional[str]
- ) -> Dict[str, Any]:
- request_params: Dict[str, Any] = {"action_uuid": action_uuid}
+ self, alert_uuid: str, action_uuid: str, comment: str | None
+ ) -> dict[str, Any]:
+ request_params: dict[str, Any] = {"action_uuid": action_uuid}
""" Normal parameters"""
if comment:
@@ -104,9 +117,9 @@ def update_status_alert(
)
def post_comment_alert(
- self, alert_uuid: str, content: str, author: Optional[str]
- ) -> Dict[str, Any]:
- request_params: Dict[str, Any] = {"content": content}
+ self, alert_uuid: str, content: str, author: str | None
+ ) -> dict[str, Any]:
+ request_params: dict[str, Any] = {"content": content}
""" Normal parameters"""
if author:
@@ -118,13 +131,13 @@ def post_comment_alert(
json_data=request_params,
)
- def get_comments_alert(self, alert_uuid: str) -> Dict[str, Any]:
+ def get_comments_alert(self, alert_uuid: str) -> dict[str, Any]:
return self._http_request(
method="GET",
url_suffix=f"/v1/sic/alerts/{alert_uuid}/comments",
)
- def get_workflow_alert(self, alert_uuid: str) -> Dict[str, Any]:
+ def get_workflow_alert(self, alert_uuid: str) -> dict[str, Any]:
return self._http_request(
method="GET",
url_suffix=f"/v1/sic/alerts/{alert_uuid}/workflow",
@@ -135,9 +148,9 @@ def query_events(
events_earliest_time: str,
events_latest_time: str,
events_term: str,
- max_last_events: Optional[str],
- ) -> Dict[str, Any]:
- request_params: Dict[str, Any] = {
+ max_last_events: str | None,
+ ) -> dict[str, Any]:
+ request_params: dict[str, Any] = {
"earliest_time": events_earliest_time,
"latest_time": events_latest_time,
"term": events_term,
@@ -153,22 +166,22 @@ def query_events(
json_data=request_params,
)
- def query_events_status(self, event_search_job_uuid: str) -> Dict[str, Any]:
+ def query_events_status(self, event_search_job_uuid: str) -> dict[str, Any]:
return self._http_request(
method="GET",
url_suffix=f"/v1/sic/conf/events/search/jobs/{event_search_job_uuid}",
)
- def retrieve_events(self, event_search_job_uuid: str) -> Dict[str, Any]:
+ def retrieve_events(self, event_search_job_uuid: str) -> dict[str, Any]:
return self._http_request(
method="GET",
url_suffix=f"/v1/sic/conf/events/search/jobs/{event_search_job_uuid}/events",
)
def get_cases_alert(
- self, alert_uuid: str, case_id: Optional[str]
- ) -> Dict[str, Any]:
- request_params: Dict[str, Any] = {"match[alert_uuid]": alert_uuid}
+ self, alert_uuid: str, case_id: str | None
+ ) -> dict[str, Any]:
+ request_params: dict[str, Any] = {"match[alert_uuid]": alert_uuid}
""" Matching parameters"""
if case_id:
@@ -178,16 +191,16 @@ def get_cases_alert(
method="GET", url_suffix="v1/sic/cases", params=request_params
)
- def get_asset(self, asset_uuid: str) -> Dict[str, Any]:
+ def get_asset(self, asset_uuid: str) -> dict[str, Any]:
return self._http_request(
method="GET",
url_suffix=f"/v1/asset-management/assets/{asset_uuid}",
)
def list_asset(
- self, limit: Optional[str], assets_type: Optional[str]
- ) -> Dict[str, Any]:
- request_params: Dict[str, Any] = {}
+ self, limit: str | None, assets_type: str | None
+ ) -> dict[str, Any]:
+ request_params: dict[str, Any] = {}
""" Normal parameters"""
if limit:
@@ -205,8 +218,8 @@ def list_asset(
def add_attributes_asset(
self, asset_uuid: str, name: str, value: str
- ) -> Dict[str, Any]:
- request_params: Dict[str, Any] = {"name": name, "value": value}
+ ) -> dict[str, Any]:
+ request_params: dict[str, Any] = {"name": name, "value": value}
return self._http_request(
method="POST",
@@ -214,8 +227,8 @@ def add_attributes_asset(
params=request_params,
)
- def add_keys_asset(self, asset_uuid: str, name: str, value: str) -> Dict[str, Any]:
- request_params: Dict[str, Any] = {"name": name, "value": value}
+ def add_keys_asset(self, asset_uuid: str, name: str, value: str) -> dict[str, Any]:
+ request_params: dict[str, Any] = {"name": name, "value": value}
return self._http_request(
method="POST",
@@ -225,31 +238,31 @@ def add_keys_asset(self, asset_uuid: str, name: str, value: str) -> Dict[str, An
def remove_attribute_asset(
self, asset_uuid: str, attribute_uuid: str
- ) -> List[Dict[str, Any]]:
+ ) -> list[dict[str, Any]]:
return self._http_request(
method="DELETE",
url_suffix=f"/v1/asset-management/assets/{asset_uuid}/attr/{attribute_uuid}",
resp_type="text",
)
- def remove_key_asset(self, asset_uuid: str, key_uuid: str) -> Dict[str, Any]:
+ def remove_key_asset(self, asset_uuid: str, key_uuid: str) -> dict[str, Any]:
return self._http_request(
method="DELETE",
url_suffix=f"/v1/asset-management/assets/{asset_uuid}/keys/{key_uuid}",
resp_type="text",
)
- def get_user(self, user_uuid: str) -> Dict[str, Any]:
+ def get_user(self, user_uuid: str) -> dict[str, Any]:
return self._http_request(method="GET", url_suffix=f"/v1/users/{user_uuid}")
- def get_kill_chain(self, kill_chain_uuid: str) -> Dict[str, Any]:
+ def get_kill_chain(self, kill_chain_uuid: str) -> dict[str, Any]:
return self._http_request(
method="GET", url_suffix=f"/v1/sic/kill-chains/{kill_chain_uuid}"
)
def http_request(
self, method: str, url_suffix: str, params: dict
- ) -> Dict[str, Any]:
+ ) -> dict[str, Any]:
if not params:
params = {}
@@ -460,16 +473,17 @@ def filter_dict_by_keys(input_dict: dict, keys_to_keep: list) -> dict:
def fetch_incidents(
client: Client,
- max_results: Optional[int],
- last_run: Dict[str, int],
- first_fetch_time: Optional[int],
- alert_status: Optional[str],
- alert_urgency: Optional[str],
- alert_type: Optional[str],
- fetch_mode: Optional[str],
- fetch_with_assets: Optional[bool],
- fetch_with_kill_chain: Optional[bool],
-) -> Tuple[Dict[str, int], List[dict]]:
+ max_results: int | None,
+ last_run: dict[str, int],
+ first_fetch_time: int | None,
+ alert_status: str | None,
+ alert_urgency: str | None,
+ alert_type: str | None,
+ fetch_mode: str | None,
+ mirror_direction: str | None,
+ fetch_with_assets: bool | None,
+ fetch_with_kill_chain: bool | None,
+) -> tuple[dict[str, int], list[dict]]:
"""
This function retrieves new alerts every interval (default is 1 minute).
It has to implement the logic of making sure that incidents are fetched only onces and no incidents are missed.
@@ -487,6 +501,7 @@ def fetch_incidents(
alert_urgency (str): alert urgency range to search for. Format: "MIN_urgency,MAX_urgency". i.e: 80,100.
alert_type (str): type of alerts to search for.
fetch_mode (str): If the alert will be fetched with or without the events.
+ mirror_direction (str): The direction of the mirroring can be set to None or to Incoming.
fetch_with_assets (bool): If the alert will include the assets information on the fetching.
fetch_with_kill_chain (bool): If the alert will include the kill chain information on the fetching.
Returns:
@@ -519,7 +534,7 @@ def fetch_incidents(
# Initialize an empty list of incidents to return
# Each incident is a dict with a string as a key
- incidents: List[Dict[str, Any]] = []
+ incidents: list[dict[str, Any]] = []
alerts = client.list_alerts(
alerts_limit=max_results,
alerts_status=alert_status,
@@ -598,7 +613,10 @@ def fetch_incidents(
}
# If the integration parameter is set to mirror add the appropriate fields to the incident
alert["mirror_instance"] = demisto.integrationInstance()
+ alert["mirrorOut"] = str(mirror_direction) in ["Outgoing", "Incoming and Outgoing"]
incident["rawJSON"] = json.dumps(alert)
+ incident["dbotMirrorDirection"] = MIRROR_DIRECTION.get(str(mirror_direction))
+ incident["dbotMirrorId"] = alert["short_id"]
incidents.append(incident)
# Update last run and add incident if the incident is newer than last fetch
@@ -610,7 +628,201 @@ def fetch_incidents(
return next_run, incidents
-def list_alerts_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+# =========== Mirroring Mechanism ===========
+
+
+def get_remote_data_command(
+ client: Client,
+ args: dict,
+ close_incident: bool,
+ close_note: str,
+ mirror_events: bool,
+ mirror_kill_chain: bool,
+ reopen_incident: bool,
+):
+    """get-remote-data command: Returns the updated alert and entries (if needed)
+
+ Args:
+ client (Client): Sekoia XDR client to use.
+ args (dict): The command arguments
+ close_incident (bool): Indicates whether to close the corresponding XSOAR incident if the alert
+ has been closed on Sekoia's end.
+        close_note (str): The notes to be included when the incident gets closed by mirroring.
+        mirror_events (bool): Whether the events will be included in the mirrored alerts.
+        mirror_kill_chain (bool): Whether the kill chain information from the alerts will be mirrored.
+        reopen_incident (bool): Indicates whether to reopen the corresponding XSOAR incident if the alert
+ has been reopened on Sekoia's end.
+ Returns:
+        GetRemoteDataResponse: The response containing the updated alert to mirror and the entries
+ """
+
+ demisto.debug("#### Entering MIRRORING IN - get_remote_data_command ####")
+
+ parsed_args = GetRemoteDataArgs(args)
+ alert = client.get_alert(alert_uuid=parsed_args.remote_incident_id)
+ alert_short_id, alert_status = alert["short_id"], alert["status"]["name"]
+ last_update = arg_to_timestamp(
+ arg=parsed_args.last_update, arg_name="lastUpdate", required=True
+ )
+ alert_last_update = arg_to_timestamp(
+ arg=alert.get("updated_at"), arg_name="updated_at", required=False
+ )
+
+ demisto.debug(
+ f"Alert {alert_short_id} with status {alert_status} : last_update is {last_update} , alert_last_update is {alert_last_update}" # noqa: E501
+ )
+
+ entries = []
+
+ # Add the events to the alert
+ if mirror_events and alert["status"]["name"] not in ["Closed", "Rejected"]:
+ earliest_time = alert["first_seen_at"]
+ lastest_time = "now"
+ term = f"alert_short_ids:{alert['short_id']}"
+ interval_in_seconds = INTERVAL_SECONDS_EVENTS
+ timeout_in_seconds = TIMEOUT_EVENTS
+
+ args = {
+ "earliest_time": earliest_time,
+ "lastest_time": lastest_time,
+ "query": term,
+ "interval_in_seconds": interval_in_seconds,
+ "timeout_in_seconds": timeout_in_seconds,
+ }
+ events = search_events_command(args=args, client=client)
+ alert["events"] = events.outputs # pylint: disable=E1101
+
+ # Add the kill chain information to the alert
+    if mirror_kill_chain and alert.get("kill_chain_short_id"):
+ try:
+ kill_chain = client.get_kill_chain(
+ kill_chain_uuid=alert["kill_chain_short_id"]
+ )
+ alert["kill_chain"] = kill_chain
+ except Exception as e:
+ # Handle the exception if there is any problem with the API call
+ demisto.debug(f"Error fetching kill_chain : {e}")
+
+    # Log the fully updated alert that will be mirrored into the XSOAR incident.
+ demisto.debug(
+        f"Alert {alert_short_id} with status {alert_status} has this updated info: {alert}"
+ )
+
+ investigation = demisto.investigation()
+ demisto.debug(f"The investigation information is {investigation}")
+
+ incident_id = investigation["id"]
+ incident_status = investigation["status"]
+
+ demisto.debug(
+        f"The XSOAR incident {incident_id} with status {incident_status} is being mirrored with the alert {alert_short_id}, which has the status {alert_status}."  # noqa: E501
+ )
+
+ # Close the XSOAR incident using mirroring
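+    # An investigation "status" of 1 means the XSOAR incident is already closed,
+    # so the check below only closes incidents that are still open.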
+ if (
+ (close_incident)
+ and (alert_status in ["Closed", "Rejected"])
+ and (investigation["status"] != 1)
+ ):
+ demisto.debug(
+ f"Alert {alert_short_id} with status {alert_status} was closed or rejected in Sekoia, closing incident {incident_id} in XSOAR" # noqa: E501
+ )
+ entries = [
+ {
+ "Type": EntryType.NOTE,
+ "Contents": {
+ "dbotIncidentClose": True,
+ "closeReason": f"{alert_status} - Mirror",
+ "closeNotes": close_note,
+ },
+ "ContentsFormat": EntryFormat.JSON,
+ }
+ ]
+
+ # Reopen the XSOAR incident using mirroring
+ if (
+ (reopen_incident)
+ and (alert_status not in ["Closed", "Rejected"])
+ and (investigation["status"] == 1)
+ ):
+ demisto.debug(
+ f"Alert {alert_short_id} with status {alert_status} was reopened in Sekoia, reopening incident {incident_id} in XSOAR"
+ )
+ entries = [
+ {
+ "Type": EntryType.NOTE,
+ "Contents": {"dbotIncidentReopen": True},
+ "ContentsFormat": EntryFormat.JSON,
+ }
+ ]
+
+ demisto.debug("#### Leaving MIRRORING IN - get_remote_data_command ####")
+
+    demisto.debug(f"This is the final alert state for incoming mirroring: {alert}")
+
+ return GetRemoteDataResponse(mirrored_object=alert, entries=entries)
+
+
+def get_modified_remote_data_command(client: Client, args):
+    """Gets the list of all alert ids that have changed since a given time
+
+ Args:
+ client (Client): Sekoia XDR client to use.
+        args (dict): The command arguments
+
+ Returns:
+        GetModifiedRemoteDataResponse: The response containing the list of IDs of the alerts that changed
+ """
+ modified_alert_ids = []
+ remote_args = GetModifiedRemoteDataArgs(args)
+ last_update = remote_args.last_update
+ last_update_utc = dateparser.parse(
+ last_update, settings={"TIMEZONE": "UTC"}
+ ) # converts to a UTC timestamp
+ formatted_last_update = last_update_utc.strftime("%Y-%m-%dT%H:%M:%S.%f+00:00") # type: ignore
+ converted_time = time_converter(formatted_last_update)
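+    # Build a "from,to" window for the updated_at filter; "now" leaves the upper bound open.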
+ last_update_time = f"{converted_time},now"
+
+ raw_alerts = client.list_alerts(
+ alerts_updated_at=last_update_time,
+ alerts_limit=100,
+ alerts_status=None,
+ alerts_created_at=None,
+ alerts_urgency=None,
+ alerts_type=None,
+ sort_by="updated_at",
+ )
+
+ modified_alert_ids = [item["short_id"] for item in raw_alerts["items"]]
+
+ return GetModifiedRemoteDataResponse(modified_incident_ids=modified_alert_ids)
+
+
+def update_remote_system_command(client: Client, args):
+ pass
+
+
+def get_mapping_fields_command() -> GetMappingFieldsResponse:
+ """
+    This command pulls the remote schema for the different incident types, and their associated incident fields,
+ from the remote system.
+ :return: A list of keys you want to map
+ """
+ sekoia_incident_type_scheme = SchemeTypeMapping(type_name=INCIDENT_TYPE_NAME)
+ for argument, description in SEKOIA_INCIDENT_FIELDS.items():
+ sekoia_incident_type_scheme.add_field(name=argument, description=description)
+
+ mapping_response = GetMappingFieldsResponse()
+ mapping_response.add_scheme_type(sekoia_incident_type_scheme)
+
+ return mapping_response
+
+
+# =========== Mirroring Mechanism ===========
+
+
+def list_alerts_command(client: Client, args: dict[str, Any]) -> CommandResults:
+
alerts = client.list_alerts(
alerts_limit=args.get("limit"),
alerts_status=args.get("status"),
@@ -633,7 +845,7 @@ def list_alerts_command(client: Client, args: Dict[str, Any]) -> CommandResults:
)
-def get_alert_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def get_alert_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
alert_uuid = args["id"]
@@ -659,7 +871,7 @@ def get_alert_command(client: Client, args: Dict[str, Any]) -> CommandResults:
)
-def query_events_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def query_events_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
earliest_time = args["earliest_time"]
lastest_time = args["lastest_time"]
@@ -684,7 +896,7 @@ def query_events_command(client: Client, args: Dict[str, Any]) -> CommandResults
)
-def query_events_status_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def query_events_status_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
search_job_uuid = args["uuid"]
@@ -699,7 +911,7 @@ def query_events_status_command(client: Client, args: Dict[str, Any]) -> Command
)
-def retrieve_events_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def retrieve_events_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
search_job_uuid = args["uuid"]
@@ -717,7 +929,7 @@ def retrieve_events_command(client: Client, args: Dict[str, Any]) -> CommandResu
@polling_function(name="sekoia-xdr-search-events", requires_polling_arg=False)
-def search_events_command(args: Dict[str, Any], client: Client) -> PollResult:
+def search_events_command(args: dict[str, Any], client: Client) -> PollResult:
"""Parameters"""
earliest_time = args["earliest_time"]
lastest_time = args["lastest_time"]
@@ -800,7 +1012,7 @@ def search_events_command(args: Dict[str, Any], client: Client) -> PollResult:
)
-def update_status_alert_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def update_status_alert_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
alert_uuid, updated_status, comment = (
args["id"],
@@ -831,7 +1043,7 @@ def update_status_alert_command(client: Client, args: Dict[str, Any]) -> Command
return CommandResults(readable_output=readable_output, outputs=update)
-def post_comment_alert_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def post_comment_alert_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
alert_uuid, comment, author = (
args["id"],
@@ -851,7 +1063,7 @@ def post_comment_alert_command(client: Client, args: Dict[str, Any]) -> CommandR
return CommandResults(readable_output=readable_output, outputs=response)
-def get_comments_alert_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def get_comments_alert_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
alert_uuid = args["id"]
@@ -882,7 +1094,7 @@ def get_comments_alert_command(client: Client, args: Dict[str, Any]) -> CommandR
)
-def get_workflow_alert_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def get_workflow_alert_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
alert_uuid = args["id"]
@@ -900,7 +1112,7 @@ def get_workflow_alert_command(client: Client, args: Dict[str, Any]) -> CommandR
)
-def get_cases_alert_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def get_cases_alert_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
alert_uuid, case_id = args["alert_id"], args.get("case_id")
@@ -926,7 +1138,7 @@ def get_cases_alert_command(client: Client, args: Dict[str, Any]) -> CommandResu
)
-def get_asset_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def get_asset_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
asset_uuid = args["asset_uuid"]
@@ -947,7 +1159,7 @@ def get_asset_command(client: Client, args: Dict[str, Any]) -> CommandResults:
)
-def list_asset_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def list_asset_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
limit, assets_type = args.get("limit"), args.get("assets_type")
@@ -966,7 +1178,7 @@ def list_asset_command(client: Client, args: Dict[str, Any]) -> CommandResults:
)
-def get_user_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def get_user_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
user_uuid = args["user_uuid"]
@@ -984,7 +1196,7 @@ def get_user_command(client: Client, args: Dict[str, Any]) -> CommandResults:
def add_attributes_asset_command(
- client: Client, args: Dict[str, Any]
+ client: Client, args: dict[str, Any]
) -> CommandResults:
"""Parameters"""
asset_uuid, name, value = (
@@ -1007,7 +1219,7 @@ def add_attributes_asset_command(
)
-def add_keys_asset_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def add_keys_asset_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
asset_uuid, name, value = (
args["asset_uuid"],
@@ -1028,7 +1240,7 @@ def add_keys_asset_command(client: Client, args: Dict[str, Any]) -> CommandResul
def remove_attribute_asset_command(
- client: Client, args: Dict[str, Any]
+ client: Client, args: dict[str, Any]
) -> CommandResults:
"""Parameters"""
asset_uuid, attribute_uuid = args["asset_uuid"], args["attribute_uuid"]
@@ -1041,7 +1253,7 @@ def remove_attribute_asset_command(
return CommandResults(readable_output=readable_output)
-def remove_key_asset_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def remove_key_asset_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
asset_uuid, key_uuid = args["asset_uuid"], args["key_uuid"]
@@ -1051,7 +1263,7 @@ def remove_key_asset_command(client: Client, args: Dict[str, Any]) -> CommandRes
return CommandResults(readable_output=readable_output)
-def get_kill_chain_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def get_kill_chain_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
kill_chain_uuid = args["kill_chain_uuid"]
@@ -1068,7 +1280,7 @@ def get_kill_chain_command(client: Client, args: Dict[str, Any]) -> CommandResul
)
-def http_request_command(client: Client, args: Dict[str, Any]) -> CommandResults:
+def http_request_command(client: Client, args: dict[str, Any]) -> CommandResults:
"""Parameters"""
method, url_sufix, params = (
args["method"],
@@ -1163,18 +1375,19 @@ def main() -> None:
elif command == "fetch-incidents":
# Set and define the fetch incidents command to run after activated via integration settings.
- alerts_status = ",".join(params.get("alerts_status", None))
- alerts_type = ",".join(params.get("alerts_type", None))
+ alerts_status = ",".join(params.get("alerts_status", ""))
+ alerts_type = ",".join(params.get("alerts_type", ""))
alerts_urgency = params.get("alerts_urgency", None)
fetch_mode = params.get("fetch_mode")
fetch_with_assets = params.get("fetch_with_assets")
fetch_with_kill_chain = params.get("fetch_with_kill_chain")
+ mirror_direction = params.get("mirror_direction", "None")
# Convert the argument to an int using helper function or set to MAX_INCIDENTS_TO_FETCH
max_results = arg_to_number(params["max_fetch"])
- last_run: Dict[
- str, Any
- ] = demisto.getLastRun() # getLastRun() gets the last run dict
+ last_run: dict[str, Any] = (
+ demisto.getLastRun()
+ ) # getLastRun() gets the last run dict
next_run, incidents = fetch_incidents(
client=client,
@@ -1185,6 +1398,7 @@ def main() -> None:
alert_urgency=alerts_urgency,
alert_type=alerts_type,
fetch_mode=fetch_mode,
+ mirror_direction=mirror_direction,
fetch_with_assets=fetch_with_assets,
fetch_with_kill_chain=fetch_with_kill_chain,
)
@@ -1234,6 +1448,22 @@ def main() -> None:
return_results(get_kill_chain_command(client, args))
elif command == "sekoia-xdr-http-request":
return_results(http_request_command(client, args))
+ elif command == "get-remote-data":
+ return_results(
+ get_remote_data_command(
+ client,
+ args,
+ close_incident=demisto.params().get("close_incident"), # type: ignore
+ close_note=demisto.params().get("close_notes", "Closed by Sekoia."), # type: ignore
+ mirror_events=demisto.params().get("mirror_events"), # type: ignore
+ mirror_kill_chain=demisto.params().get("mirror_kill_chain"), # type: ignore
+ reopen_incident=demisto.params().get("reopen_incident"), # type: ignore
+ )
+ )
+ elif command == "get-modified-remote-data":
+ return_results(get_modified_remote_data_command(client, args))
+ elif command == "get-mapping-fields":
+ return_results(get_mapping_fields_command())
else:
raise NotImplementedError(f"Command {command} is not implemented")
diff --git a/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR.yml b/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR.yml
index 97f7da5c88e8..e0eb7e8b3daf 100644
--- a/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR.yml
+++ b/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR.yml
@@ -7,7 +7,7 @@ category: Analytics & SIEM
sectionOrder:
- Connect
- Collect
-description: "Fetch alerts and events from SEKOIA.IO XDR.\nTo use this integration, please create an API Key with the appropriate permissions."
+description: "Fetch alerts and events from SEKOIA.IO XDR.\nTo use this integration, please create an API Key with the appropriate permissions."
configuration:
- section: Connect
display: API key
@@ -198,6 +198,57 @@ configuration:
additionalinfo: |-
When selected, it includes the kill chain information in the alert when fetched from Sekoia.
And also If there's no max_fetch it will fetch 10 incidents by default.
+- section: Collect
+ display: Incident Mirroring Direction.
+ name: mirror_direction
+ defaultvalue: None
+ type: 15
+ required: true
+  additionalinfo: 'Choose the direction to mirror the incident: None (Disable mirroring), Incoming (from Sekoia XDR to Cortex XSOAR), Outgoing (from Cortex XSOAR to Sekoia XDR), or Incoming and Outgoing (from/to Cortex XSOAR and Sekoia XDR).'
+ options:
+ - None
+ - Incoming
+ - Outgoing
+ - Incoming and Outgoing
+- section: Collect
+ advanced: true
+ display: Include events in the mirroring of the alerts.
+ name: mirror_events
+ defaultvalue: "false"
+ type: 8
+ required: false
+ additionalinfo: When selected, it includes the events in the mirrored alerts when an alert is updated in Sekoia.
+- section: Collect
+ advanced: true
+ display: Include kill chain information in the mirroring of the alerts.
+ name: mirror_kill_chain
+ defaultvalue: "false"
+ type: 8
+ required: false
+ additionalinfo: When selected, it includes the kill chain information of the alert in the mirrored alerts when an alert is updated in Sekoia.
+- section: Collect
+ advanced: true
+ display: Reopen Mirrored Cortex XSOAR Incidents (Incoming Mirroring)
+ name: reopen_incident
+ defaultvalue: "false"
+ type: 8
+ required: false
+ additionalinfo: When selected, reopening the Sekoia XDR alert will reopen the Cortex XSOAR incident.
+- section: Collect
+ advanced: true
+ display: Close Mirrored Cortex XSOAR Incidents (Incoming Mirroring)
+ name: close_incident
+ defaultvalue: "false"
+ type: 8
+ required: false
+  additionalinfo: When selected, closing the Sekoia XDR alert with a "Closed" or "Rejected" status will close the Cortex XSOAR incident.
+- section: Collect
+ display: Close notes.
+ name: close_notes
+ defaultvalue: Closed by Sekoia.
+ type: 0
+ required: true
+  additionalinfo: The closing notes that will be added to the incidents closed automatically by mirroring.
- section: Collect
display: Timezone ( TZ format )
name: timezone
@@ -382,10 +433,10 @@ script:
arguments:
- name: earliest_time
required: true
- description: "Valid formats or ISO 8601 e.g -3d, -2w, -7d, 2023-01-15T00:00:00Z."
+ description: Valid formats or ISO 8601 e.g -3d, -2w, -7d, 2023-01-15T00:00:00Z.
- name: lastest_time
required: true
- description: "Valid formats or ISO 8601 e.g +3d, +2w, now, 2023-01-15T00:00:00Z."
+ description: Valid formats or ISO 8601 e.g +3d, +2w, now, 2023-01-15T00:00:00Z.
- name: query
defaultValue: ""
description: 'The query to use, i.e: "alert_short_ids:ALUnyZCYZ9Ga".'
@@ -544,10 +595,10 @@ script:
arguments:
- name: earliest_time
required: true
- description: "Valid formats or ISO 8601 e.g -3d, -2w, -7d, 2023-01-15T00:00:00Z."
+ description: Valid formats or ISO 8601 e.g -3d, -2w, -7d, 2023-01-15T00:00:00Z.
- name: lastest_time
required: true
- description: "Valid formats or ISO 8601 e.g +3d, +2w, now, 2023-01-15T00:00:00Z."
+ description: Valid formats or ISO 8601 e.g +3d, +2w, now, 2023-01-15T00:00:00Z.
- name: query
description: 'The query to use, i.e: "alert_short_ids:ALUnyZCYZ9Ga".'
defaultValue: ""
@@ -890,13 +941,31 @@ script:
- contextPath: SekoiaXDR.Asset.uuid
description: The UUID of the asset.
description: Get an asset by its UUID from Sekoia XDR.
+ - name: get-remote-data
+ arguments:
+ - name: id
+ description: The remote alert ID.
+ - name: lastUpdate
+ description: ISO format date with timezone, e.g., 2023-03-01T16:41:30.589575+02:00. The incident is only updated if it was modified after the last update time.
+ defaultValue: "0"
+ description: This command gets new information about the incidents in the remote system and updates existing incidents in Cortex XSOAR.
+ - name: get-modified-remote-data
+ arguments:
+ - name: lastUpdate
+ description: ISO format date with timezone, e.g., 2023-03-01T16:41:30.589575+02:00. The incident is only returned if it was modified after the last update time.
+ defaultValue: "0"
+    description: Available from Cortex XSOAR version 6.1.0. This command queries for incidents that were modified since the last update.
+ - name: get-mapping-fields
+ arguments: []
+ description: This command pulls the remote schema for the different incident types, and their associated incident fields, from the remote system.
- name: sekoia-xdr-list-assets
arguments:
- name: limit
- description: Limit a number of items.
+      description: 'The maximum number of items to return.'
defaultValue: "10"
- name: assets_type
description: Type of assets to list (computer, network, etc).
+    description: Command to retrieve a list of assets from Sekoia XDR.
outputs:
- contextPath: SekoiaXDR.Assets.total
description: The total number of items in the response.
@@ -948,7 +1017,6 @@ script:
description: The name of the asset.
- contextPath: SekoiaXDR.Assets.items.0.uuid
description: The UUID of the asset.
- description: Command to retrieve a list of Assets from Sekoia XDR.
- name: sekoia-xdr-get-user
arguments:
- name: user_uuid
@@ -1043,7 +1111,7 @@ script:
      description: 'UUID of the asset to get. The UUID appears in the output of "sekoia-xdr-list-assets" if the alert has related assets, for example: "d4cc3b05-a78d-4f29-b27c-c637d86fa03a".'
- name: name
required: true
-      description: The name of attributes.
+      description: "The name of the attribute."
- name: value
required: true
      description: The value of the attribute.
@@ -1059,12 +1127,13 @@ script:
- name: value
required: true
description: The value of the key to be added.
- description: Command to add keys to an asset in Sekoia XDR.
+ description: "Command to add keys to an asset in Sekoia XDR."
- name: sekoia-xdr-get-kill-chain
arguments:
- name: kill_chain_uuid
required: true
      description: UUID or short_id of the kill chain; the UUID appears in the output of "sekoia-xdr-list-alerts".
+ description: Command to retrieve the definition of a Cyber Kill Chain Step.
outputs:
- contextPath: SekoiaXDR.KillChain.stix_name
description: The name of the STIX object.
@@ -1078,7 +1147,6 @@ script:
description: The short identifier of the STIX object.
- contextPath: SekoiaXDR.KillChain.order_id
description: The order identifier of the STIX object.
- description: Command to retrieve the definition of a Cyber Kill Chain Step.
- name: sekoia-xdr-remove-attribute-asset
arguments:
- name: asset_uuid
@@ -1105,7 +1173,7 @@ script:
defaultValue: GET
- name: url_sufix
required: true
- description: "The URL suffix after https://api.sekoia.io, i.e. /v1/sic/alerts/ or /v1/asset-management/assets/."
+ description: The URL suffix after https://api.sekoia.io, i.e. /v1/sic/alerts/ or /v1/asset-management/assets/.
- name: parameters
description: Query parameters, i.e. limit -> 10 , match['status_name'] -> Ongoing.
type: keyValue
@@ -1115,8 +1183,8 @@ script:
runonce: false
subtype: python3
isFetchSamples: true
+ ismappable: true
+ isremotesyncin: true
fromversion: 6.10.0
tests:
- No tests (auto formatted)
-defaultmapperin: Sekoia XDR - Incoming Mapper
-defaultclassifier: Sekoia XDR - Classifier
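The block above wires Sekoia XDR into the standard XSOAR mirroring contract: the new `mirror_direction` parameter, the `get-remote-data`/`get-modified-remote-data`/`get-mapping-fields` commands, and the `ismappable`/`isremotesyncin` flags. A minimal sketch of how these pieces usually fit together (the `client` helpers and alert field names are illustrative assumptions, not the pack's actual code):

```python
import demistomock as demisto  # provided by the platform at runtime
from CommonServerPython import GetRemoteDataResponse, GetModifiedRemoteDataResponse

# Typical mapping of the mirror_direction parameter to the values the server
# expects; packs commonly stamp these onto each fetched alert and let the
# incoming mapper copy them into dbotMirrorDirection / dbotMirrorInstance.
MIRROR_DIRECTION = {"None": None, "Incoming": "In",
                    "Outgoing": "Out", "Incoming and Outgoing": "Both"}


def attach_mirroring_fields(alert: dict, params: dict) -> dict:
    if direction := MIRROR_DIRECTION.get(params.get("mirror_direction", "None")):
        alert["mirror_direction"] = direction
        alert["mirror_instance"] = demisto.integrationInstance()
    return alert


def get_remote_data_command(client, args: dict) -> GetRemoteDataResponse:
    alert = client.get_alert(args["id"])  # hypothetical client helper
    entries: list = []  # e.g., a closing entry when the remote status changes
    return GetRemoteDataResponse(alert, entries)


def get_modified_remote_data_command(client, args: dict) -> GetModifiedRemoteDataResponse:
    changed = client.list_alerts(updated_since=args["lastUpdate"])  # hypothetical helper
    return GetModifiedRemoteDataResponse([a["short_id"] for a in changed])
```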
diff --git a/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR_description.md b/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR_description.md
index dab2ff00a1f8..19524201f555 100644
--- a/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR_description.md
+++ b/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR_description.md
@@ -1,21 +1,23 @@
## Sekoia Defend (XDR)
-- The mandatory fields are the API Key and the base URL of the API (i.e. https://api.sekoia.io/v1/sic/)
+
+- The required fields include the API Key and the base URL of the API.
+
---
[View documentation for the API calls](https://docs.sekoia.io/xdr/)
-### Default Parmeters
+### Default Parameters
- **Classifier**: Sekoia XDR - Classifier
- **Incoming Mapper**: Sekoia XDR - Incoming Mapper
-- **Outgoing Mapper**: Sekoia XDR - Outgoing Mapper
- **Incident Type**: Sekoia XDR
- **First fetch timestamp**: -7d
- **Maximum incidents to fetch per interval**: 10
- **Alerts status**: Pending, Acknowledge, Ongoing
+- **Close notes**: Closed by Sekoia
### API Key Creation
-Similar to other APIs, SEKOIA's API employs an authentication mechanism that involves the use of an API key. To obtain an API key and facilitate secure access to SEKOIA's services, follow these straightforward steps:
+Similar to other APIs, **Sekoia's API** employs an authentication mechanism that involves the use of an API key. To obtain an API key and facilitate secure access to **Sekoia's** services, follow these straightforward steps:
1. **Navigate to Settings**: Begin by clicking on **Settings**. You can find this option at the bottom of the navigation bar.
@@ -25,8 +27,8 @@ Similar to other APIs, SEKOIA's API employs an authentication mechanism that inv
4. **Define Your Key**: Provide a unique name for your API key for easy identification. Additionally, input a description for the key. The description should be succinct yet informative, with a length between 10 and 100 characters.
-5. **Assign Roles**: Choose one or more roles to associate with your API key. The roles determine the permissions and access level that the key will have when interacting with SEKOIA’s API.
+5. **Assign Roles**: Choose one or more roles to associate with your API key. The roles determine the permissions and access level that the key will have when interacting with **Sekoia’s** API.
6. **Save Your Key**: After filling out the necessary details and assigning roles, click on the **Save** button to generate your API key. Your new key is now ready for use in authenticating API requests.
-By following these steps, you can effortlessly create an API key that provides secure and role-based access to SEKOIA's API, enabling seamless interaction with its suite of services.
+By following these steps, you can effortlessly create an API key that provides secure and role-based access to **Sekoia’s** API, enabling seamless interaction with its suite of services.
\ No newline at end of file
diff --git a/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR_test.py b/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR_test.py
index 42ffd671d7e4..1f58fb7cbb7c 100644
--- a/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR_test.py
+++ b/Packs/SekoiaXDR/Integrations/SekoiaXDR/SekoiaXDR_test.py
@@ -3,6 +3,7 @@
import SekoiaXDR # type: ignore
from freezegun import freeze_time
+
from datetime import datetime
import pytest
import json
@@ -379,9 +380,9 @@ def test_search_events(client, requests_mock, mocker):
}
result: PollResult = SekoiaXDR.search_events_command(client=client, args=args)
- assert result.outputs[0]["action_id"] # type: ignore
- assert result.outputs[0]["action_outcome"] # type: ignore
- assert result.outputs[0]["action_name"] # type: ignore
+ assert result.outputs[0]["action_id"]
+ assert result.outputs[0]["action_outcome"]
+ assert result.outputs[0]["action_name"]
def test_list_assets(client, requests_mock):
@@ -490,9 +491,115 @@ def test_get_user(client, requests_mock):
assert result.outputs["lastname"] == "done"
+def test_modified_remote_data(client, requests_mock):
+ mock_response = util_load_json("test_data/SekoiaXDR_get_alerts.json")
+ requests_mock.get(MOCK_URL + "/v1/sic/alerts", json=mock_response)
+
+ args = {"lastUpdate": "2023-06-28T13:21:45"}
+ results = SekoiaXDR.get_modified_remote_data_command(client, args)
+
+ assert len(results.modified_incident_ids) == 2
+
+
+@pytest.mark.parametrize(
+ "close_incident, close_note, mirror_events, mirror_kill_chain, reopen_incident",
+ [
+ (False, "Closed by Sekoia.", True, True, False),
+ (False, "Closed by Sekoia.", False, False, False),
+ ],
+)
+def test_get_remote_data(
+ client,
+ mocker,
+ requests_mock,
+ close_incident,
+ close_note,
+ mirror_events,
+ mirror_kill_chain,
+ reopen_incident,
+):
+ mock_response = util_load_json("test_data/SekoiaXDR_get_alert.json")
+ mock_response_query_events = util_load_json("test_data/SekoiaXDR_query_events.json")
+ mock_response_query_events_status = util_load_json(
+ "test_data/SekoiaXDR_query_events_status.json"
+ )
+ mock_response_retrieve_events = util_load_json(
+ "test_data/SekoiaXDR_retrieve_events.json"
+ )
+ mock_response_killchain = util_load_json(
+ "test_data/SekoiaXDR_get_killchain_mirroring.json"
+ )
+ requests_mock.get(
+ MOCK_URL + "/v1/sic/kill-chains/KCXKNfnJuUUU", json=mock_response_killchain
+ )
+ requests_mock.get(MOCK_URL + "/v1/sic/alerts/ALL1A4SKUiU2", json=mock_response)
+ requests_mock.post(
+ MOCK_URL + "/v1/sic/conf/events/search/jobs", json=mock_response_query_events
+ )
+ requests_mock.get(
+ MOCK_URL + "/v1/sic/conf/events/search/jobs/df904d2e-2c57-488f",
+ json=mock_response_query_events_status,
+ )
+ requests_mock.get(
+ MOCK_URL + "/v1/sic/conf/events/search/jobs/df904d2e-2c57-488f/events",
+ json=mock_response_retrieve_events,
+ )
+
+ params = {"exclude_info_events": "False", "replace_dots_event": "_"}
+ mocker.patch.object(demisto, "params", return_value=params)
+
+    investigation = {
+ "cacheVersn": 0,
+ "category": "",
+ "closed": "0001-01-01T00:00:00Z",
+ "created": "2024-04-09T15:58:41.908148032Z",
+ "creatingUserId": "admin",
+ "details": "",
+ "entryUsers": ["admin"],
+ "highPriority": False,
+ "id": "5721bf3c-f9ef-4b9e-8942-712ac829e0b7",
+ "isDebug": False,
+ "lastOpen": "0001-01-01T00:00:00Z",
+ "mirrorAutoClose": None,
+ "mirrorTypes": None,
+ "modified": "2024-04-10T09:10:14.357270528Z",
+ "name": "Playground",
+ "rawCategory": "",
+ "reason": None,
+ "runStatus": "",
+ "sizeInBytes": 0,
+ "slackMirrorAutoClose": False,
+ "slackMirrorType": "",
+ "status": 0,
+ "systems": None,
+ "tags": None,
+ "type": 9,
+ "users": ["admin"],
+ "version": 2,
+ }
+ mocker.patch.object(demisto, "investigation", return_value=invistagation)
+
+ args = {"lastUpdate": "2023-06-28T13:21:45", "id": "ALL1A4SKUiU2"}
+ results = SekoiaXDR.get_remote_data_command(
+ client,
+ args,
+ close_incident,
+ close_note,
+ mirror_events,
+ mirror_kill_chain,
+ reopen_incident,
+ )
+
+ assert len(results.mirrored_object) > 0
+ if mirror_kill_chain:
+ assert results.mirrored_object.get("kill_chain")
+ if mirror_events:
+ assert results.mirrored_object.get("events")
+
+
@pytest.mark.parametrize(
"max_results, last_run, first_fetch_time, alert_status, alert_urgency, alert_type, fetch_mode, \
- fetch_with_assets, fetch_with_kill_chain",
+ mirror_direction, fetch_with_assets, fetch_with_kill_chain",
[
(
100,
@@ -504,6 +611,7 @@ def test_get_user(client, requests_mock):
None,
None,
None,
+ None,
),
(
100,
@@ -515,6 +623,7 @@ def test_get_user(client, requests_mock):
None,
None,
None,
+ None,
),
],
)
@@ -528,6 +637,7 @@ def test_fetch_incidents(
alert_urgency,
alert_type,
fetch_mode,
+ mirror_direction,
fetch_with_assets,
fetch_with_kill_chain,
):
@@ -543,6 +653,7 @@ def test_fetch_incidents(
alert_urgency,
alert_type,
fetch_mode,
+ mirror_direction,
fetch_with_assets,
fetch_with_kill_chain,
)
diff --git a/Packs/SekoiaXDR/Layouts/layoutscontainer-Sekoia_XDR_Layout.json b/Packs/SekoiaXDR/Layouts/layoutscontainer-Sekoia_XDR_Layout.json
index 2771f660ab27..c6b82ff97ec9 100644
--- a/Packs/SekoiaXDR/Layouts/layoutscontainer-Sekoia_XDR_Layout.json
+++ b/Packs/SekoiaXDR/Layouts/layoutscontainer-Sekoia_XDR_Layout.json
@@ -7,6 +7,18 @@
{
"fieldId": "incident_sekoiaxdralertreject",
"isVisible": true
+ },
+ {
+ "fieldId": "incident_closereason",
+ "isVisible": true
+ },
+ {
+ "fieldId": "incident_closenotes",
+ "isVisible": true
+ },
+ {
+ "fieldId": "incident_owner",
+ "isVisible": true
}
],
"isVisible": true,
@@ -141,6 +153,7 @@
"startCol": 0
}
],
+ "maxH": null,
"maxW": 3,
"minH": 1,
"moved": false,
@@ -153,6 +166,7 @@
{
"h": 2,
"i": "caseinfoid-field-changed-caseinfoid-hmim4odmnc-caseinfoid-61263cc0-98b1-11e9-97d7-ed26ef9e46c8",
+ "maxH": null,
"maxW": 3,
"minH": 1,
"moved": false,
@@ -167,6 +181,7 @@
"displayType": "ROW",
"h": 2,
"i": "caseinfoid-field-changed-caseinfoid-hmim4odmnc-caseinfoid-6aabad20-98b1-11e9-97d7-ed26ef9e46c8",
+ "maxH": null,
"maxW": 3,
"minH": 1,
"moved": false,
@@ -182,6 +197,7 @@
"h": 2,
"i": "caseinfoid-field-changed-caseinfoid-hmim4odmnc-caseinfoid-770ec200-98b1-11e9-97d7-ed26ef9e46c8",
"isVisible": true,
+ "maxH": null,
"maxW": 3,
"minH": 1,
"moved": false,
@@ -196,6 +212,7 @@
"displayType": "ROW",
"h": 2,
"i": "caseinfoid-field-changed-caseinfoid-hmim4odmnc-caseinfoid-842632c0-98b1-11e9-97d7-ed26ef9e46c8",
+ "maxH": null,
"maxW": 3,
"minH": 1,
"moved": false,
@@ -210,6 +227,7 @@
"displayType": "ROW",
"h": 2,
"i": "caseinfoid-field-changed-caseinfoid-hmim4odmnc-caseinfoid-4a31afa0-98ba-11e9-a519-93a53c759fe0",
+ "maxH": null,
"maxW": 3,
"minH": 1,
"moved": false,
@@ -225,6 +243,7 @@
"h": 2,
"hideName": false,
"i": "caseinfoid-field-changed-caseinfoid-hmim4odmnc-caseinfoid-7717e580-9bed-11e9-9a3f-8b4b2158e260",
+ "maxH": null,
"maxW": 3,
"minH": 1,
"moved": false,
@@ -307,6 +326,7 @@
"displayType": "ROW",
"h": 2,
"i": "caseinfoid-field-changed-caseinfoid-hmim4odmnc-caseinfoid-7ce69dd0-a07f-11e9-936c-5395a1acf11e",
+ "maxH": null,
"maxW": 3,
"minH": 1,
"moved": false,
@@ -381,6 +401,7 @@
"startCol": 1
}
],
+ "maxH": null,
"maxW": 3,
"minH": 1,
"moved": false,
@@ -424,6 +445,7 @@
"startCol": 0
}
],
+ "maxH": null,
"maxW": 3,
"minH": 1,
"moved": false,
@@ -449,6 +471,7 @@
"startCol": 0
}
],
+ "maxH": null,
"maxW": 3,
"minH": 1,
"moved": false,
@@ -492,6 +515,7 @@
"startCol": 0
}
],
+ "maxH": null,
"maxW": 3,
"minH": 1,
"moved": false,
@@ -514,8 +538,10 @@
"hideName": false,
"i": "xwqxiqdriy-field-changed-xwqxiqdriy-caseinfoid-xwqxiqdriy-field-changed-xwqxiqdriy-caseinfoid-xwqxiqdriy-caseinfoid-7bde8760-a95c-11ed-9145-25ddf42500b6",
"items": [],
+ "maxH": null,
"maxW": 3,
"minH": 1,
+ "minW": 3,
"moved": false,
"name": "Comments",
"query": "SekoiaXDRPrintComments",
@@ -630,8 +656,10 @@
"startCol": 0
}
],
- "maxW": 3,
+ "maxH": null,
+ "maxW": 1,
"minH": 1,
+ "minW": 1,
"moved": false,
"name": "Alert information",
"static": false,
@@ -656,8 +684,10 @@
"startCol": 0
}
],
- "maxW": 3,
+ "maxH": null,
+ "maxW": 1,
"minH": 1,
+ "minW": 1,
"moved": false,
"name": "Details",
"static": false,
@@ -670,8 +700,10 @@
"hideName": false,
"i": "xwqxiqdriy-field-changed-xwqxiqdriy-caseinfoid-xwqxiqdriy-field-changed-xwqxiqdriy-caseinfoid-xwqxiqdriy-caseinfoid-c0be8e90-ac48-11ed-bd2d-994e9b5e36b1",
"items": [],
- "maxW": 3,
+ "maxH": null,
+ "maxW": 1,
"minH": 1,
+ "minW": 1,
"moved": false,
"name": "Case Information",
"query": "SekoiaXDRPrintCase",
@@ -692,15 +724,17 @@
{
"endCol": 2,
"fieldId": "sekoiaxdrkillchain",
- "height": 106,
+ "height": 22,
"id": "d4996120-d848-11ed-a2c5-afbbb72993fe",
"index": 0,
"sectionItemType": "field",
"startCol": 0
}
],
- "maxW": 3,
+ "maxH": null,
+ "maxW": 1,
"minH": 1,
+ "minW": 1,
"moved": false,
"name": "Kill Chain",
"static": false,
@@ -713,8 +747,10 @@
"hideName": true,
"i": "xwqxiqdriy-field-changed-xwqxiqdriy-308b91c0-3d3c-11ef-bbb1-2b336ca0953c",
"items": [],
- "maxW": 3,
+ "maxH": null,
+ "maxW": 1,
"minH": 1,
+ "minW": 1,
"moved": false,
"name": "Impacted assets",
"query": "SekoiaXDRPrintAssets",
diff --git a/Packs/SekoiaXDR/ReleaseNotes/1_1_0.md b/Packs/SekoiaXDR/ReleaseNotes/1_1_0.md
new file mode 100644
index 000000000000..d9c0eb7abffc
--- /dev/null
+++ b/Packs/SekoiaXDR/ReleaseNotes/1_1_0.md
@@ -0,0 +1,34 @@
+
+#### Integrations
+
+##### Sekoia XDR
+
+- Added mirroring functionality to the integration.
+
+#### Layouts
+
+##### Sekoia XDR Layout
+
+- Updated the layout to align it with the new mirroring functionality.
+
+#### Mappers
+
+##### Sekoia XDR - Incoming Mapper
+
+- Added some new fields to support the mirroring functionality.
+
+#### Scripts
+
+##### CloseSekoiaAlert
+
+- Updated the script to align it with the new mirroring functionality.
+
+##### SekoiaXDRChangeStatus
+
+- Updated the script to align it with the new mirroring functionality.
+
+#### Incident Fields
+
+##### SekoiaXDR MirrorOut
+
+- New incident field to support the new mirroring functionality.
diff --git a/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/README.md b/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/README.md
index a3b21a8e931c..4a96141d969f 100644
--- a/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/README.md
+++ b/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/README.md
@@ -1,4 +1,4 @@
-This script changes the status of the Sekoia alert in XSOAR.
+This script changes the status of the Sekoia alert.
## Script Data
@@ -18,6 +18,7 @@ This script changes the status of the Sekoia alert in XSOAR.
| --- | --- |
| short_id | The short ID of the alert. |
| status | Status to change on the Sekoia alert. |
+| comment | The comment to add to the alert on the status change. |
## Outputs
diff --git a/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/SekoiaXDRChangeStatus.py b/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/SekoiaXDRChangeStatus.py
index e1662e6e6a15..c96c961ad997 100644
--- a/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/SekoiaXDRChangeStatus.py
+++ b/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/SekoiaXDRChangeStatus.py
@@ -2,16 +2,46 @@
from CommonServerPython import * # noqa: F401
-def update_status(new_status: str):
- execute_command("setIncident", {"sekoiaxdralertstatus": new_status})
+def get_username():
+ get_users = execute_command("getUsers", {"current": "true"})
+ username = get_users[0]["name"] # type: ignore
+ return username
+
+
+def post_comment(alert_short_id: str, comment: Optional[str], author: str): # pragma: no cover
+ try:
+ execute_command(
+ "sekoia-xdr-post-comment-alert",
+ {"id": alert_short_id, "comment": comment, "author": author},
+ )
+ except Exception as e:
+ return_error(
+ f"Failed to post comment for alert with id {alert_short_id} : {str(e)}"
+ )
+
+
+def update_status(new_status: str, mirror_status: str, is_mirror_out: bool, short_id: str):
+ if mirror_status == "In" and is_mirror_out:
+ execute_command("sekoia-xdr-update-status-alert", {"id": short_id, "status": new_status})
+ elif mirror_status is None and is_mirror_out:
+ execute_command("setIncident", {"sekoiaxdralertstatus": new_status})
+ execute_command("sekoia-xdr-update-status-alert", {"id": short_id, "status": new_status})
+ else:
+ execute_command("setIncident", {"sekoiaxdralertstatus": new_status})
def main():
+ incident = demisto.incidents()[0] # type: ignore
+ mirror_direction = incident.get("dbotMirrorDirection")
+ is_mirror_out = incident.get("CustomFields").get("sekoiaxdrmirrorout")
alert_short_id = demisto.args()["short_id"]
new_status = demisto.args()["status"]
+ comment = demisto.args().get("comment")
if new_status in ["Ongoing", "Acknowledged"]:
- update_status(new_status)
+ update_status(new_status, mirror_direction, is_mirror_out, alert_short_id)
+ if comment and is_mirror_out:
+ post_comment(alert_short_id, comment, get_username())
readable_output = f"### Status of the alert changed to:\n {new_status}"
return_results(
{
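Read as a decision table, the new branching in `update_status` (mirrored by the analogous logic in `SekoiaXDRCloseAlert` below) works out as follows; the call at the end is illustrative, with a placeholder short ID:

```python
# mirror_status == "In" and the sekoiaxdrmirrorout flag set
#   -> update the Sekoia alert only; incoming mirroring syncs the XSOAR side back.
# mirror_status is None and the sekoiaxdrmirrorout flag set
#   -> update both the XSOAR incident and the Sekoia alert directly.
# anything else
#   -> update the XSOAR incident only.
update_status("Acknowledged", "In", True, "ALxxxxxxxxxx")  # placeholder short_id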
diff --git a/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/SekoiaXDRChangeStatus.yml b/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/SekoiaXDRChangeStatus.yml
index 6bfdf887e400..b1b1f36feb0e 100644
--- a/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/SekoiaXDRChangeStatus.yml
+++ b/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/SekoiaXDRChangeStatus.yml
@@ -6,7 +6,7 @@ script: ''
type: python
tags:
- incident-action-button
-comment: This script changes the status of the Sekoia alert in XSOAR.
+comment: This script changes the status of the Sekoia alert.
enabled: true
args:
- name: short_id
@@ -19,6 +19,8 @@ args:
- Ongoing
- Acknowledged
description: Status to change on the Sekoia alert.
+- name: comment
+ description: The comment to add to the alert on the status change.
scripttarget: 0
subtype: python3
runonce: false
diff --git a/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/SekoiaXDRChangeStatus_test.py b/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/SekoiaXDRChangeStatus_test.py
index e794965a0586..b595519b72b6 100644
--- a/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/SekoiaXDRChangeStatus_test.py
+++ b/Packs/SekoiaXDR/Scripts/SekoiaXDRChangeStatus/SekoiaXDRChangeStatus_test.py
@@ -1,14 +1,41 @@
import demistomock as demisto
-from SekoiaXDRChangeStatus import main # type: ignore
+from SekoiaXDRChangeStatus import get_username, main, update_status, post_comment # type: ignore
+
+
+def test_get_username(mocker):
+ output_data = [
+ {"Type": 3, "Contents": [{"name": "admin", "PrettyRoles": "Administrator"}]}
+ ]
+ mocker.patch.object(demisto, "executeCommand", return_value=output_data)
+ assert get_username() == "admin"
+
+
+def test_post_comment(mocker):
+ output_data = [{"Type": 3, "Contents": {}}]
+ mocker.patch.object(demisto, "executeCommand", return_value=output_data)
+ assert post_comment("1", "test", "admin") is None
+
+
+def test_update_status(mocker):
+ output_data = [{"Type": 3, "Contents": {}}]
+ mocker.patch.object(demisto, "executeCommand", return_value=output_data)
+ assert update_status("Ongoing", "In", False, "1") is None
+ assert update_status("Ongoing", "In", True, "1") is None
+ assert update_status("Ongoing", None, True, "1") is None
+ assert update_status("Ongoing", None, False, "1") is None
def test_main(mocker):
+ mocker.patch.object(
+ demisto, "incidents", return_value=[{"dbotMirrorDirection": "In", "CustomFields": {"sekoiaxdrmirrorout": True}}]
+ )
mocker.patch.object(
demisto,
"args",
- return_value={"short_id": "1", "status": "Ongoing"},
+ return_value={"short_id": "1", "status": "Ongoing", "comment": "test"},
)
mocker.patch.object(demisto, "results")
+ mocker.patch("SekoiaXDRChangeStatus.get_username", return_value="admin")
main()
assert (
demisto.results.call_args[0][0]["Contents"]
diff --git a/Packs/SekoiaXDR/Scripts/SekoiaXDRCloseAlert/SekoiaXDRCloseAlert.py b/Packs/SekoiaXDR/Scripts/SekoiaXDRCloseAlert/SekoiaXDRCloseAlert.py
index a7902d842583..f7f0eae02427 100644
--- a/Packs/SekoiaXDR/Scripts/SekoiaXDRCloseAlert/SekoiaXDRCloseAlert.py
+++ b/Packs/SekoiaXDR/Scripts/SekoiaXDRCloseAlert/SekoiaXDRCloseAlert.py
@@ -7,23 +7,84 @@ def get_status_name(alert_id: str):
return get_alert["status"]["name"] # type: ignore
+def get_username(username: str):
+ user = execute_command("getUserByUsername", {"username": username})
+ return user["name"] # type: ignore
+
+
+def post_closure_comment(
+ alert_id: str,
+ close_reason: Optional[str],
+ close_notes: Optional[str],
+ username: Optional[str],
+): # pragma: no cover
+ try:
+ execute_command(
+ "sekoia-xdr-post-comment-alert",
+ {
+ "id": alert_id,
+ "comment": (
+ f"{close_reason}-{close_notes}"
+ if close_reason and close_notes
+ else None
+ ),
+ "author": get_username(username), # type: ignore
+ },
+ )
+ except Exception as e:
+ return_error(f"Failed to post comment: {str(e)}")
+
+
def close_alert(
alert_id: str,
reject: str,
close_reason: Optional[str],
close_notes: Optional[str],
username: str,
-):
+ mirror_status: str,
+ is_mirror_out: bool,
+): # pragma: no cover
readable_output = ""
alert_status = get_status_name(alert_id)
if alert_status not in ["Closed", "Rejected"]:
if reject == "false":
- execute_command("setIncident", {"sekoiaxdralertstatus": "Closed"})
+ if mirror_status == "In" and is_mirror_out:
+ execute_command(
+ "sekoia-xdr-update-status-alert",
+ {"id": alert_id, "status": "Closed"},
+ )
+ elif mirror_status is None and is_mirror_out:
+ execute_command("setIncident", {"sekoiaxdralertstatus": "Closed"})
+ execute_command(
+ "sekoia-xdr-update-status-alert",
+ {"id": alert_id, "status": "Closed"},
+ )
+ else:
+ execute_command("setIncident", {"sekoiaxdralertstatus": "Closed"})
readable_output = f"**** The alert {alert_id} has been closed. ****"
if reject == "true":
- execute_command("setIncident", {"sekoiaxdralertstatus": "Rejected"})
+ if mirror_status == "In" and is_mirror_out:
+ execute_command(
+ "sekoia-xdr-update-status-alert",
+ {"id": alert_id, "status": "Rejected"},
+ )
+ elif mirror_status is None and is_mirror_out:
+ execute_command("setIncident", {"sekoiaxdralertstatus": "Closed"})
+ execute_command(
+ "sekoia-xdr-update-status-alert",
+ {"id": alert_id, "status": "Rejected"},
+ )
+ else:
+ execute_command("setIncident", {"sekoiaxdralertstatus": "Rejected"})
readable_output = f"**** The alert {alert_id} has been rejected. ****"
+ post_closure_comment(alert_id, close_reason, close_notes, username)
+
+ else:
+ execute_command("setIncident", {"sekoiaxdralertstatus": alert_status})
+ readable_status = "closed" if alert_status.lower() == "closed" else "rejected"
+ readable_output = f"**** The alert {alert_id} has been {readable_status}. ****"
+
return_results(
{
"ContentsFormat": formats["markdown"],
@@ -33,16 +94,17 @@ def close_alert(
)
-def main():
+def main(): # pragma: no cover
incident = demisto.incidents()[0] # type: ignore
+ mirror_direction = incident.get("dbotMirrorDirection")
+ is_mirror_out = incident.get("CustomFields", {}).get("sekoiaxdrmirrorout")
alert_short_id = incident.get("CustomFields", {}).get("alertid")
reject = demisto.getArg("sekoiaxdralertreject") # type: ignore
close_reason = demisto.getArg("closeReason")
close_notes = demisto.getArg("closeNotes")
username = demisto.getArg("closingUserId") # type: ignore
-
close_alert(
- alert_short_id, reject, close_reason, close_notes, username # type: ignore
+ alert_short_id, reject, close_reason, close_notes, username, mirror_direction, is_mirror_out # type: ignore
)
diff --git a/Packs/SekoiaXDR/Scripts/SekoiaXDRCloseAlert/SekoiaXDRCloseAlert_test.py b/Packs/SekoiaXDR/Scripts/SekoiaXDRCloseAlert/SekoiaXDRCloseAlert_test.py
index 80510ed38b26..eb0edeccfc4e 100644
--- a/Packs/SekoiaXDR/Scripts/SekoiaXDRCloseAlert/SekoiaXDRCloseAlert_test.py
+++ b/Packs/SekoiaXDR/Scripts/SekoiaXDRCloseAlert/SekoiaXDRCloseAlert_test.py
@@ -1,6 +1,12 @@
import demistomock as demisto
import SekoiaXDRCloseAlert # type: ignore
-from SekoiaXDRCloseAlert import get_status_name, close_alert, main # type: ignore
+from SekoiaXDRCloseAlert import (
+ get_status_name,
+ get_username,
+ post_closure_comment,
+ close_alert,
+ main,
+) # type: ignore
def test_get_status_name(mocker):
@@ -9,28 +15,72 @@ def test_get_status_name(mocker):
assert get_status_name("1") == "Ongoing"
+def test_get_username(mocker):
+ output_data = [{"Type": 3, "Contents": {"name": "admin1"}}]
+ mocker.patch.object(demisto, "executeCommand", return_value=output_data)
+ assert get_username("admin") == "admin1"
+
+
+def test_post_closure_comment(mocker):
+ output_data = [{"Type": 3, "Contents": {"name": "admin1"}}]
+ mocker.patch.object(demisto, "executeCommand", return_value=output_data)
+ mocker.patch.object(SekoiaXDRCloseAlert, "get_username", return_value="admin1")
+ assert post_closure_comment("1", "reason", "notes", "admin") is None
+
+
def test_close_alert(mocker):
mocker.patch.object(SekoiaXDRCloseAlert, "get_status_name", return_value="Ongoing")
output_data = [{"Type": 3, "Contents": {}}]
mocker.patch.object(demisto, "executeCommand", return_value=output_data)
+ mocker.patch.object(SekoiaXDRCloseAlert, "post_closure_comment", return_value=None)
mocker.patch.object(demisto, "results")
- close_alert("1", "false", "reason", "notes", "admin")
+ close_alert("1", "false", "reason", "notes", "admin", "In", True)
assert (
demisto.results.call_args[0][0]["Contents"]
== "**** The alert 1 has been closed. ****"
)
- close_alert("1", "true", "reason", "notes", "admin")
+ close_alert("1", "false", "reason", "notes", "admin", None, True)
+ assert (
+ demisto.results.call_args[0][0]["Contents"]
+ == "**** The alert 1 has been closed. ****"
+ )
+
+ close_alert("1", "false", "reason", "notes", "admin", None, False)
+ assert (
+ demisto.results.call_args[0][0]["Contents"]
+ == "**** The alert 1 has been closed. ****"
+ )
+
+ close_alert("1", "true", "reason", "notes", "admin", "In", False)
+ assert (
+ demisto.results.call_args[0][0]["Contents"]
+ == "**** The alert 1 has been rejected. ****"
+ )
+
+ close_alert("1", "true", "reason", "notes", "admin", None, True)
+ assert (
+ demisto.results.call_args[0][0]["Contents"]
+ == "**** The alert 1 has been rejected. ****"
+ )
+
+ close_alert("1", "true", "reason", "notes", "admin", None, False)
assert (
demisto.results.call_args[0][0]["Contents"]
== "**** The alert 1 has been rejected. ****"
)
+
+def test_close_alert_closed_cond(mocker):
mocker.patch.object(SekoiaXDRCloseAlert, "get_status_name", return_value="Closed")
- try:
- close_alert("1", "false", "reason", "notes", "admin")
- except Exception as e:
- assert str(e) == "**** The alert is already closed or rejected. ****"
+ output_data = [{"Type": 3, "Contents": {}}]
+ mocker.patch.object(demisto, "executeCommand", return_value=output_data)
+ mocker.patch.object(demisto, "results")
+ close_alert("1", "true", "reason", "notes", "admin", None, True)
+ assert (
+ demisto.results.call_args[0][0]["Contents"]
+ == "**** The alert 1 has been closed. ****"
+ )
def test_main(mocker):
@@ -39,7 +89,7 @@ def test_main(mocker):
"incidents",
return_value=[
{
- "dbotMirrorDirection": "Out",
+ "dbotMirrorDirection": "In",
"CustomFields": {"alertid": "1"},
"owner": "admin",
}
diff --git a/Packs/SekoiaXDR/pack_metadata.json b/Packs/SekoiaXDR/pack_metadata.json
index b7a7ffe1d1c7..45753fbc82c6 100644
--- a/Packs/SekoiaXDR/pack_metadata.json
+++ b/Packs/SekoiaXDR/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "SekoiaXDR",
"description": "Request Sekoia Defend (XDR) from Cortex XSOAR",
"support": "partner",
- "currentVersion": "1.0.0",
+ "currentVersion": "1.1.0",
"author": "SEKOIA.IO",
"url": "https://www.sekoia.io/en/contact/",
"email": "contact@sekoia.io",
diff --git a/Packs/SendGrid/Integrations/SendGrid/SendGrid.yml b/Packs/SendGrid/Integrations/SendGrid/SendGrid.yml
index e86ee8c36ec1..31ae8885422b 100644
--- a/Packs/SendGrid/Integrations/SendGrid/SendGrid.yml
+++ b/Packs/SendGrid/Integrations/SendGrid/SendGrid.yml
@@ -345,7 +345,7 @@ script:
outputs:
- contextPath: Sendgrid.DeleteListJobId
description: Job id of the async job.
- dockerimage: demisto/py3-tools:1.0.0.102774
+ dockerimage: demisto/py3-tools:1.0.0.114656
script: ''
subtype: python3
type: python
diff --git a/Packs/SendGrid/ReleaseNotes/1_1_5.md b/Packs/SendGrid/ReleaseNotes/1_1_5.md
new file mode 100644
index 000000000000..2e77fd7200c7
--- /dev/null
+++ b/Packs/SendGrid/ReleaseNotes/1_1_5.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### SendGrid
+
+
+- Updated the Docker image to: *demisto/py3-tools:1.0.0.114656*.
diff --git a/Packs/SendGrid/pack_metadata.json b/Packs/SendGrid/pack_metadata.json
index 1dc1f57b578e..67cf16f3ff87 100644
--- a/Packs/SendGrid/pack_metadata.json
+++ b/Packs/SendGrid/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "SendGrid",
"description": "SendGrid provides a cloud-based service that assists businesses with email delivery. It allows companies to track email opens, unsubscribes, bounces, and spam reports. Our SendGrid pack utilize these SendGrid use cases to help you send and manage your emails.",
"support": "community",
- "currentVersion": "1.1.4",
+ "currentVersion": "1.1.5",
"author": "Sharat Patil",
"url": "",
"email": "",
diff --git a/Packs/ServiceDeskPlus/Integrations/ServiceDeskPlus/README.md b/Packs/ServiceDeskPlus/Integrations/ServiceDeskPlus/README.md
index 716570c7f951..0e6ccfda8fce 100644
--- a/Packs/ServiceDeskPlus/Integrations/ServiceDeskPlus/README.md
+++ b/Packs/ServiceDeskPlus/Integrations/ServiceDeskPlus/README.md
@@ -32,7 +32,7 @@ Follow the next steps to create an instance:
- In order to avoid repeating this process, the created Refresh Token should be saved for future use.
- For more details about generating a technician key please refer to the [help documentation](https://help.servicedeskplus.com/api/rest-api.html$key)
-![image](https://user-images.githubusercontent.com/61732335/86364400-cc70c600-bc80-11ea-9763-59acd31e08b7.png)
+![image](../../doc_files/86364400-cc70c600-bc80-11ea-9763-59acd31e08b7.png)
| **Parameter** | **Description** | **Required** |
| --- | --- | --- |
diff --git a/Packs/ServiceNow/Integrations/ServiceNow/README.md b/Packs/ServiceNow/Integrations/ServiceNow/README.md
index 3d95d486a055..c7b98ee00a90 100644
--- a/Packs/ServiceNow/Integrations/ServiceNow/README.md
+++ b/Packs/ServiceNow/Integrations/ServiceNow/README.md
@@ -469,7 +469,7 @@
}
Human Readable Output
-
+
2. Create a ticket
Creates a new ServiceNow ticket.
Base Command
@@ -930,7 +930,7 @@
}
Human Readable Output
-
+
3. Update a ticket
Updates a specified ServiceNow ticket.
@@ -1386,7 +1386,7 @@
}
Human Readable Output
-
+
4. Delete a ticket
Deletes a specified ServiceNow ticket.
@@ -1420,7 +1420,7 @@
Command Example
!servicenow-delete-ticket id=0c23f8d24f102300d316b63ca310c742
Human Readable Output
-
+
5. Add a link to a ticket
Adds a link to a specified ServiceNow ticket.
@@ -1470,7 +1470,7 @@
Command Example
!servicenow-add-link id="bd5d42994f82230021ae045e9310c7bd" ticket_type="incident" link="www.demisto.com" text="this is a link"
Human Readable Output
-
+
6. Add a comment to a ticket
Adds a comment to a specified ticket.
@@ -1514,7 +1514,7 @@
Command Example
!servicenow-add-comment id="0c23f8d24f102300d316b63ca310c742" ticket_type="incident" comment="This is a comment"
Human Readable Output
-
+
7. Get ticket information from a query
Retrieves ticket information via a query.
@@ -1771,7 +1771,7 @@
}
Human Readable Output
-
+
8. Upload a file to a ticket
Uploads a file to a specified ServiceNow ticket.
@@ -1867,7 +1867,7 @@
}
Human Readable Output
-
+
9. Get record information
Retrieves information for a specified record.
@@ -1953,7 +1953,7 @@
}
}
Human Readable Output
-
+
10. Query a table
Queries a specified table in ServiceNow.
@@ -2107,7 +2107,7 @@
}
Human Readable Output
-
+
11. Create a record in a table
Creates a new record in a specified ServiceNow table.
@@ -2201,7 +2201,7 @@
}
Human Readable Output
-
+
12. Update a record in a table
Updates a record in a specified ServiceNow table.
@@ -2294,7 +2294,7 @@
}
Human Readable Output
-
+
13. Delete a record from a table
Deletes a record from a specified ServiceNow table.
@@ -2328,7 +2328,7 @@
Command Example
!servicenow-delete-record id=748692114fc2230021ae045e9310c7ff table_name=incident
Human Readable Output
-
+
14. List API fields for a table
Lists API fields for a specified ServiceNow table.
@@ -2585,7 +2585,7 @@
}
Human Readable Output
-
+
15. Query computers
Queries the cmdb_ci_computer table in ServiceNow.
@@ -2824,7 +2824,7 @@
}
Human Readable Output
-
+
16. Query groups
Queries the sys_user_group table in ServiceNow.
@@ -2923,7 +2923,7 @@
}
Human Readable Output
-
+
17. Query users
Queries the sys_user table in ServiceNow.
@@ -3028,7 +3028,7 @@
}
Human Readable Output
-
+
18. Get table names
Retrieves table names by a label to use in commands.
@@ -3104,7 +3104,7 @@
}
Human Readable Output
-
+
19. Get ticket notes
Returns notes for a specified ticket.
@@ -3201,7 +3201,7 @@
}
Human Readable Output
-
+
Additional Information
The tables and fields in the ServiceNow UI are different than those in the API. Each table and field name in the UI have their representation in the API.
diff --git a/Packs/ServiceNow/Integrations/ServiceNowEventCollector/README.md b/Packs/ServiceNow/Integrations/ServiceNowEventCollector/README.md
index 80bf7a8d6797..cdc0a031bf30 100644
--- a/Packs/ServiceNow/Integrations/ServiceNowEventCollector/README.md
+++ b/Packs/ServiceNow/Integrations/ServiceNowEventCollector/README.md
@@ -1,37 +1,33 @@
-Use this integration to fetch audit logs from ServiceNow as Cortex XSIAM events.
+Use this integration to fetch audit logs and syslog transactions from ServiceNow as Cortex XSIAM events.
This integration was integrated and tested with Vancouver version of ServiceNow API.
-## Configure ServiceNow Event Collector on Cortex XSOAR
+## Configure ServiceNow Event Collector in Cortex
-1. Navigate to **Settings** > **Integrations** > **Servers & Services**.
-2. Search for ServiceNow Event Collector.
-3. Click **Add instance** to create and configure a new integration instance.
-
- | **Parameter** | **Description** | **Required** |
- | --- |------------------------------------------------------------------------------------------| --- |
- | ServiceNow URL | ServiceNow URL in the format https://company.service-now.com/ | True |
- | Username | | True |
- | Password | | True |
- | Client ID | | False |
- | Client Secret | | False |
- | ServiceNow API Version (e.g. 'v1') | | False |
- | Use OAuth Login | Select this checkbox to use OAuth 2.0 authentication. See \(?\) for more information. | False |
- | Maximum number of events per fetch | Default value is 1000 | False |
- | Events Fetch Interval | | False |
- | Trust any certificate (not secure) | | False |
- | Use system proxy settings | | False |
-
-4. Click **Test** to validate the URLs, token, and connection.
+| **Parameter** | **Description** | **Required** |
+| --- | --- | --- |
+| ServiceNow URL, in the format https://company.service-now.com/ | | True |
+| Username | | True |
+| Password | | True |
+| Client ID | | False |
+| Client Secret | | False |
+| ServiceNow API Version (e.g., 'v1') | | False |
+| Use OAuth Login | Select this checkbox to use OAuth 2.0 authentication. | False |
+| Event Types To Fetch | Event types to fetch. Defaults to 'Audit' if no type is specified. | False |
+| Maximum audit events to fetch | Maximum number of audit events per fetch. | False |
+| Maximum syslog transactions events to fetch | Maximum number of syslog transactions events per fetch. | False |
+| Events Fetch Interval | | False |
+| Trust any certificate (not secure) | | False |
+| Use system proxy settings | | False |
## Commands
-You can execute these commands from the Cortex XSIAM CLI, as part of an automation, or in a playbook.
+You can execute these commands from the CLI, as part of an automation, or in a playbook.
After you successfully execute a command, a DBot message appears in the War Room with the command details.
### service-now-get-audit-logs
***
-Returns audit logs events extracted from ServiceNow. This command is used for developing/debugging and is to be used with caution, as it can create events, leading to event duplication and exceeding the API request limitation.
+Returns events extracted from ServiceNow. This command is used for developing/debugging and is to be used with caution, as it can create events, leading to event duplication and exceeding the API request limitation.
#### Base Command
@@ -42,7 +38,39 @@ Returns audit logs events extracted from ServiceNow. This command is used for de
| **Argument Name** | **Description** | **Required** |
| --- | --- | --- |
| should_push_events | Set this argument to True in order to create events, otherwise the command will only display them. Possible values are: True, False. Default is False. | Required |
-| limit | The maximum number of events to return. Default is 1000. | Optional |
+| limit | Maximum audit events to fetch. Default is 1000. | Optional |
+| from_date | The date and time of the earliest event. The time format is "{yyyy}-{mm}-{dd} {hh}:{mm}:{ss}". Example: "2021-05-18 13:45:14" indicates May 18, 2021, 1:45PM. | Optional |
+| offset | Starting record index from which to begin retrieving records. | Optional |
+
+#### Context Output
+
+There is no context output for this command.
+
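+#### Command example
+
+A typical debugging invocation (argument values are illustrative):
+
+`!service-now-get-audit-logs should_push_events=False limit=10 from_date="2024-01-28 13:00:00"`
+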
+### Human Readable
+
+>### Audit Events
+>|_time|documentkey|fieldname|newvalue|record_checkpoint|sys_created_on|sys_id|tablename|
+>|---|---|---|---|---|---|---|---|
+>| 2024-01-28T13:21:43Z | 3 | DELETED | DELETED | -1 | 2024-01-28 13:21:43 | 3 | audit |
+>| 2024-01-28T13:21:43Z | 3 | DELETED | DELETED | -1 | 2024-01-28 13:21:43 | 3 | audit |
+>| 2024-01-28T13:21:43Z | 3 | DELETED | DELETED | -1 | 2024-01-28 13:21:43 | 3 | audit |
+>| 2024-01-28T13:21:43Z | 3 | DELETED | DELETED | -1 | 2024-01-28 13:21:43 | 3 | audit |
+
+### service-now-get-syslog-transactions
+
+***
+Returns syslog transactions events extracted from ServiceNow. This command is used for developing/debugging and is to be used with caution, as it can create events, leading to event duplication and exceeding the API request limitation.
+
+#### Base Command
+
+`service-now-get-syslog-transactions`
+
+#### Input
+
+| **Argument Name** | **Description** | **Required** |
+| --- | --- | --- |
+| should_push_events | Set this argument to True in order to create events, otherwise the command will only display them. Possible values are: True, False. Default is False. | Required |
+| max_fetch_syslog_transactions | Maximum syslog transactions events to fetch. Default is 1000. | Optional |
| from_date | The date and time of the earliest event. The time format is "{yyyy}-{mm}-{dd} {hh}:{mm}:{ss}". Example: "2021-05-18 13:45:14" indicates May 18, 2021, 1:45PM. | Optional |
| offset | Starting record index from which to begin retrieving records. | Optional |
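+
+#### Command example
+
+A typical debugging invocation (argument values are illustrative):
+
+`!service-now-get-syslog-transactions should_push_events=False max_fetch_syslog_transactions=10`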
@@ -52,10 +80,11 @@ There is no context output for this command.
### Human Readable
->### Audit Logs List:
->|Time|Documentkey|Fieldname|Newvalue|Record Checkpoint|Sys Created On|Sys Id|Tablename|
+>### Syslog Transactions Events
+>|_time|acl_time|business_rule_count|client_transaction|cpu_time|sys_created_on|sys_id|source_log_type|
>|---|---|---|---|---|---|---|---|
->| 2024-01-28T13:21:43Z | 3 | DELETED | DELETED | -1 | 2024-01-28 13:21:43 | 3 | test_table |
->| 2024-01-28T13:21:43Z | 3 | DELETED | DELETED | -1 | 2024-01-28 13:21:43 | 3 | test_table |
->| 2024-01-28T13:21:43Z | 3 | DELETED | DELETED | -1 | 2024-01-28 13:21:43 | 3 | test_table |
->| 2024-01-28T13:21:43Z | 3 | DELETED | DELETED | -1 | 2024-01-28 13:21:43 | 3 | test_table |
+>| 2024-01-28T13:21:43Z | 3 | 1 | false | 6 | 2024-01-28 13:21:43 | 3 | syslog transaction |
+>| 2024-01-28T13:21:43Z | 3 | 1 | false | 6 | 2024-01-28 13:21:43 | 3 | syslog transaction |
+>| 2024-01-28T13:21:43Z | 3 | 1 | false | 6 | 2024-01-28 13:21:43 | 3 | syslog transaction |
+>| 2024-01-28T13:21:43Z | 3 | 1 | false | 6 | 2024-01-28 13:21:43 | 3 | syslog transaction |
+
diff --git a/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector.py b/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector.py
index 2059f456ee27..81949a329a48 100644
--- a/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector.py
+++ b/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector.py
@@ -10,12 +10,17 @@
PRODUCT = "servicenow"
LOGS_DATE_FORMAT = "%Y-%m-%d %H:%M:%S" # New format for processing events
DATE_FORMAT = "%Y-%m-%dT%H:%M:%SZ" # ISO8601 format with UTC, default in XSIAM
-
+AUDIT = "audit"
+SYSLOG_TRANSACTIONS = "syslog transactions"
+URL = {AUDIT: "table/sys_audit", SYSLOG_TRANSACTIONS: "table/syslog_transaction"}
+LAST_FETCH_TIME = {AUDIT: "last_fetch_time", SYSLOG_TRANSACTIONS: "last_fetch_time_syslog"}
+PREVIOUS_RUN_IDS = {AUDIT: "previous_run_ids", SYSLOG_TRANSACTIONS: "previous_run_ids_syslog"}
""" CLIENT CLASS """
class Client:
- def __init__(self, use_oauth, credentials, client_id, client_secret, url, verify, proxy, fetch_limit, api_server_url):
+ def __init__(self, use_oauth, credentials, client_id, client_secret, url, verify, proxy, api_server_url, fetch_limit_audit,
+ fetch_limit_syslog):
self.sn_client = ServiceNowClient(
credentials=credentials,
use_oauth=use_oauth,
@@ -26,13 +31,16 @@ def __init__(self, use_oauth, credentials, client_id, client_secret, url, verify
headers={},
proxy=proxy,
)
- self.fetch_limit = fetch_limit
+ self.fetch_limit_audit = fetch_limit_audit
+ self.fetch_limit_syslog = fetch_limit_syslog
self.api_server_url = api_server_url
- def get_audit_logs(self, from_time: str, limit: Optional[int] = None, offset: int = 0):
- """Make a request to the ServiceNow REST API to retrieve audit logs"""
+ def search_events(self, from_time: str, log_type: str, limit: Optional[int] = None, offset: int = 0):
+ """Make a request to the ServiceNow REST API to retrieve audit and syslog transactions logs"""
+
if limit is None:
- limit = self.fetch_limit
+ limit = self.fetch_limit_audit if log_type == AUDIT else self.fetch_limit_syslog
+
params = {
"sysparm_limit": limit,
"sysparm_offset": offset,
@@ -40,7 +48,7 @@ def get_audit_logs(self, from_time: str, limit: Optional[int] = None, offset: in
}
res = self.sn_client.http_request(
method="GET",
- full_url=f"{self.api_server_url}table/sys_audit",
+ full_url=f"{self.api_server_url}{URL[log_type]}",
url_suffix=None,
params=remove_empty_elements(params),
)
@@ -50,31 +58,129 @@ def get_audit_logs(self, from_time: str, limit: Optional[int] = None, offset: in
""" HELPER METHODS """
-def add_time_field(events: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
- """Adds time field to the events
+def handle_log_types(event_types_to_fetch: list) -> list:
+ """
+ Args:
+ event_types_to_fetch (list of str): A list of event type titles to be converted to log types.
+
+ Raises:
+ InvalidEventTypeError: If any of the event type titles are not found in the titles_to_types mapping.
+
+ Returns:
+ list: A list of log types corresponding to the provided event type titles.
+ The list contains log types that have a matching title in the titles_to_types mapping.
+ If an event type title is not found, an exception is raised.
+ """
+ log_types = []
+ VALID_EVENT_TITLES = ['Audit', 'Syslog Transactions']
+ titles_to_types = {'Audit': AUDIT, 'Syslog Transactions': SYSLOG_TRANSACTIONS}
+ for type_title in event_types_to_fetch:
+ if log_type := titles_to_types.get(type_title):
+ log_types.append(log_type)
+ else:
+ raise DemistoException(
+ f"'{type_title}' is not valid event type, please select from the following list: {VALID_EVENT_TITLES}")
+ return log_types
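+# Example (illustrative): handle_log_types(['Audit', 'Syslog Transactions'])
+# returns [AUDIT, SYSLOG_TRANSACTIONS], i.e. ['audit', 'syslog transactions'].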
+
+
+def update_last_run(last_run: dict[str, Any], log_type: str, last_event_time: str, previous_run_ids: list) -> dict:
+ """
+ Update the last run details for a specific log type.
+
+ Args:
+ last_run (dict[str, Any]): Dictionary containing the last run details for different log types.
+ log_type (str): Type of log to update.
+        last_event_time (str): Timestamp of the last event fetched.
+ previous_run_ids (list): List of IDs from the previous fetch to track.
+
+ Returns:
+ Dict[str, Any]: Updated dictionary containing the last run details.
+ """
+ last_run[LAST_FETCH_TIME[log_type]] = last_event_time
+ last_run[PREVIOUS_RUN_IDS[log_type]] = previous_run_ids
+ return last_run
+
- :param events: List of events to add the _time field to.
+def initialize_from_date(last_run: dict[str, Any], log_type: str) -> str:
+ """
+ Initialize the start timestamp for fetching logs based on provided parameters.
+
+ Args:
+ last_run (dict[str, Any]): Dictionary containing the last fetch timestamps for different log types.
+ log_type (str): Type of log for which to initialize the start timestamp.
+
+ Returns:
+ str: The start timestamp for fetching logs.
+ """
+ start_timestamp = last_run.get(LAST_FETCH_TIME[log_type])
+ if not start_timestamp:
+ current_time = datetime.utcnow()
+ first_fetch_time = current_time - timedelta(minutes=1)
+ first_fetch_str = first_fetch_time.strftime(LOGS_DATE_FORMAT)
+ from_date = first_fetch_str
+ else:
+ from_date = start_timestamp
+
+ return from_date
+
+
+def add_time_field(events: List[Dict[str, Any]], log_type: str) -> List[Dict[str, Any]]:
+ """
+ Add a '_time' field to each event and set the source log type.
+
+ Args:
+ events (List[Dict[str, Any]]): List of events to add the '_time' field to.
+ log_type (str): Type of log to set as the 'source_log_type' for each event.
+
+ Returns:
+ List[Dict[str, Any]]: The list of events with '_time' and 'source_log_type' fields added.
"""
for event in events:
event["_time"] = datetime.strptime(event["sys_created_on"], LOGS_DATE_FORMAT).strftime(DATE_FORMAT)
+ event["source_log_type"] = log_type
+
return events
-def process_and_filter_events(events: list, previous_run_ids: set, from_date: str):
+def get_limit(args: dict, client: Client):
+ """
+ Retrieve the limit for the number of logs to fetch, with defaults based on client settings.
+
+ Args:
+ args (dict): Dictionary of arguments potentially containing a "limit" key.
+ client (Client): Client instance with attributes for default fetch limits.
+
+ Returns:
+ int: The limit for the number of logs to fetch.
+ """
+ limit = arg_to_number(args.get("limit")) or client.fetch_limit_audit or 1000
+ return limit
+
+
+def process_and_filter_events(events: list, previous_run_ids: set, from_date: str, log_type: str):
"""
- Removing duplicates and creating a set of last fetched ids with the same time.
+ Remove duplicates from events and create a set of last fetched IDs with the same timestamp.
+
+ Args:
+ events (list): List of events fetched from the API.
+ previous_run_ids (set): Set of event IDs matching the timestamp in the 'from_date' parameter.
+ from_date (str): Starting date for fetching events, based on the last run's timestamp.
+ log_type (str): Type of log to set as the 'source_log_type' for each event.
- :param events: events fetched from the API
- :param previous_run_ids: ids with time as the one in the from date param
- :param from_date: from date from last_run object
- :return: all unique events and a set of last ids of events with same time.
+ Returns:
+ tuple: A list of unique events and a set of the last fetched event IDs with the same timestamp.
"""
unique_events = []
from_date_datetime = datetime.strptime(from_date, LOGS_DATE_FORMAT)
+ duplicates_list = []
for event in events:
create_time = datetime.strptime(event.get("sys_created_on"), LOGS_DATE_FORMAT)
+ event["_time"] = create_time.strftime(DATE_FORMAT)
+ event["source_log_type"] = log_type
if event.get("sys_id") in previous_run_ids:
+ duplicates_list.append(event.get("sys_id"))
continue
+
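+        # Assumes the API returns events in ascending sys_created_on order (the
+        # watermark update below relies on this): a strictly newer timestamp opens
+        # a fresh dedup window, so only IDs sharing the newest timestamp are kept.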
if create_time > from_date_datetime:
previous_run_ids = set()
from_date_datetime = create_time
@@ -82,44 +188,45 @@ def process_and_filter_events(events: list, previous_run_ids: set, from_date: st
previous_run_ids.add(event.get("sys_id"))
unique_events.append(event)
+ demisto.debug(f"Filtered out the following event IDs due to duplication: {duplicates_list}.")
+ demisto.debug(f"Updated last_run with previous_run_ids: {previous_run_ids}.")
return unique_events, previous_run_ids
""" COMMAND METHODS """
-def get_audit_logs_command(client: Client, args: dict) -> tuple[list, CommandResults]:
+def get_events_command(client: Client, args: dict, log_type: str, last_run: dict) -> tuple[list, CommandResults]:
"""
-
Args:
        client: Client object.
        args: The command arguments, such as limit, offset, and from_date.
        log_type: The type of logs to fetch (audit or syslog transactions).
-
+        last_run: Dictionary containing the last fetch timestamps for different log types.
Returns:
        The fetched events and the command results.
"""
- limit = args.get("limit", 1000)
- offset = args.get("offset", 0)
- from_date = args.get("from_date", "")
-
- audit_logs = client.get_audit_logs(from_time=from_date, limit=limit, offset=offset)
- add_time_field(audit_logs) # Add the _time field to the events
-
- demisto.debug(f"Got a total of {len(audit_logs)} events created after {from_date}")
+ types_to_titles = {AUDIT: 'Audit', SYSLOG_TRANSACTIONS: 'Syslog Transactions'}
+ all_events = []
+ if arg_from := args.get("from_date"):
+ from_date = arg_from
+ else:
+ from_date = initialize_from_date(last_run, log_type)
- readable_output = tableToMarkdown(
- "Audit Logs List:",
- audit_logs,
- removeNull=True,
- headerTransform=lambda x: string_to_table_header(camel_case_to_underscore(x)),
- )
+ offset = args.get("offset", 0)
+ limit = get_limit(args, client)
+ logs = client.search_events(from_time=from_date, log_type=log_type, limit=limit, offset=offset)
+ add_time_field(logs, log_type)
+ demisto.debug(f"Got a total of {len(logs)} {log_type} events created after {from_date}")
+ hr = tableToMarkdown(name=f'{types_to_titles[log_type]} Events', t=logs, removeNull=True,
+ headerTransform=lambda x: string_to_table_header(camel_case_to_underscore(x)))
+ all_events.extend(logs)
- return audit_logs, CommandResults(readable_output=readable_output)
+ return all_events, CommandResults(readable_output=hr)
-def fetch_events_command(client: Client, last_run: dict):
+def fetch_events_command(client: Client, last_run: dict, log_types: list):
"""
    Fetches audit and syslog transactions logs from ServiceNow.
Args:
@@ -131,51 +238,42 @@ def fetch_events_command(client: Client, last_run: dict):
        The fetched events and the updated last_run object.
"""
- events = []
- previous_run_ids = set(last_run.get("previous_run_ids", set()))
-
- if "last_fetch_time" not in last_run:
- current_time = datetime.utcnow()
- first_fetch_time = current_time - timedelta(minutes=1)
- first_fetch_str = first_fetch_time.strftime(LOGS_DATE_FORMAT)
- from_date = first_fetch_str
- else:
- from_date = last_run.get("last_fetch_time", "")
+ collected_events = []
+ for log_type in log_types:
+ previous_run_ids = set(last_run.get(PREVIOUS_RUN_IDS[log_type], set()))
+ from_date = initialize_from_date(last_run, log_type)
+ demisto.debug(f"Getting {log_type} Logs {from_date=}.")
+ new_events = client.search_events(from_date, log_type)
- demisto.debug(f"Getting Audit Logs {from_date=}.")
- audit_logs = client.get_audit_logs(from_date)
+ if new_events:
+ demisto.debug(f"Got {len(new_events)} {log_type} events. Begin processing.")
+ events, previous_run_ids = process_and_filter_events(
+ events=new_events, previous_run_ids=previous_run_ids, from_date=from_date, log_type=log_type
+ )
- if audit_logs:
- demisto.debug(f"Got {len(audit_logs)} audit_logs. Begin processing.")
- events, previous_run_ids = process_and_filter_events(
- events=audit_logs, previous_run_ids=previous_run_ids, from_date=from_date
- )
-
- demisto.debug(f"Done processing {len(events)} audit_logs.")
- last_fetch_time = events[-1].get("sys_created_on") if events else from_date
- last_run = {
- "last_fetch_time": last_fetch_time,
- "previous_run_ids": list(previous_run_ids),
- }
- demisto.debug(f"Saving last run as {last_run}")
+ demisto.debug(f"Done processing {len(events)} {log_type} events.")
+ last_fetch_time = events[-1].get("sys_created_on") if events else from_date
+ last_run = update_last_run(last_run, log_type, last_fetch_time, list(previous_run_ids))
+ collected_events.extend(events)
- return events, last_run
+ return collected_events, last_run
-def module_of_testing(client: Client) -> str: # pragma: no cover
- """Tests API connectivity and authentication
+def module_of_testing(client: Client, log_types: list) -> str: # pragma: no cover
+ """
+ Test API connectivity and authentication.
- Returning 'ok' indicates that the integration works like it is supposed to.
- Connection to the service is successful.
- Raises exceptions if something goes wrong.
+ Returns "ok" if the connection to the service is successful and the integration functions correctly.
+ Raises exceptions if the test fails.
- :type client: ``Client``
- :param Client: client to use
+ Args:
+ client (Client): Client instance used to test connectivity.
+ log_types (list): List of log types to test fetching events.
- :return: 'ok' if test passed, anything else will fail the test.
- :rtype: ``str``
+ Returns:
+ str: "ok" if the test passed; any exception raised will indicate failure.
"""
- _, _ = fetch_events_command(client, {})
+ _, _ = fetch_events_command(client, {}, log_types=log_types)
return "ok"
@@ -191,18 +289,22 @@ def main() -> None: # pragma: no cover
verify_certificate = params.get("insecure", False)
proxy = params.get("proxy", False)
use_oauth = params.get("use_oauth", False)
- client_id = params.get("client_credentials", {}).get("identifier")
- client_secret = params.get("client_credentials", {}).get("password")
+ client_id = params.get("client_credentials", {}).get("identifier", "")
+ client_secret = params.get("client_credentials", {}).get("password", "")
credentials = params.get("credentials", {})
user_name = credentials.get("identifier")
password = credentials.get("password")
- max_fetch = arg_to_number(params.get("max_fetch")) or 10000
+ max_fetch_audit = arg_to_number(params.get("max_fetch")) or 10000
+ max_fetch_syslog = arg_to_number(params.get("max_fetch_syslog_transactions")) or 10000
+ event_types_to_fetch = argToList(params.get('event_types_to_fetch', ['Audit']))
+ log_types = handle_log_types(event_types_to_fetch)
version = params.get("api_version")
if version:
api = f"/api/now/{version}/"
else:
api = "/api/now/"
+
api_server_url = f"{server_url}{api}"
demisto.debug(f"Command being called is {command}")
@@ -216,37 +318,37 @@ def main() -> None: # pragma: no cover
url=server_url,
verify=verify_certificate,
proxy=proxy,
- fetch_limit=max_fetch,
api_server_url=api_server_url,
+ fetch_limit_audit=max_fetch_audit,
+ fetch_limit_syslog=max_fetch_syslog
)
-
+ last_run = demisto.getLastRun()
if client.sn_client.use_oauth and not get_integration_context().get("refresh_token", None):
client.sn_client.login(username=user_name, password=password)
if command == "test-module":
- return_results(module_of_testing(client))
+ return_results(module_of_testing(client, log_types))
- elif command == "service-now-get-audit-logs":
- audit_logs, results = get_audit_logs_command(client=client, args=args)
+ elif command == "service-now-get-audit-logs" or command == "service-now-get-syslog-transactions":
+ log_type = AUDIT if command == "service-now-get-audit-logs" else SYSLOG_TRANSACTIONS
+ audit_logs, results = get_events_command(client=client, args=args, log_type=log_type, last_run=last_run)
return_results(results)
- if argToBoolean(args.get("should_push_events", "true")):
+ if argToBoolean(args.get("should_push_events", True)):
send_events_to_xsiam(audit_logs, vendor=VENDOR, product=PRODUCT)
elif command == "fetch-events":
- last_run = demisto.getLastRun()
demisto.debug(f"Starting new fetch with last_run as {last_run}")
- audit_logs, new_last_run = fetch_events_command(client=client, last_run=last_run)
+ events, next_run = fetch_events_command(client=client, last_run=last_run, log_types=log_types)
demisto.debug("Done fetching events, sending to XSIAM.")
- if audit_logs:
- add_time_field(audit_logs)
- send_events_to_xsiam(audit_logs, vendor=VENDOR, product=PRODUCT)
- if new_last_run:
+ if events:
+ send_events_to_xsiam(events, vendor=VENDOR, product=PRODUCT)
+ if next_run:
# save next_run for the next time fetch-events is invoked
- demisto.debug(f"Setting new last_run to {new_last_run}")
- demisto.setLastRun(new_last_run)
+ demisto.debug(f"Setting new last_run to {next_run}")
+ demisto.setLastRun(next_run)
else:
raise NotImplementedError(f"command {command} is not implemented.")
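The fetch-events branch above follows the standard collector persistence cycle: read the saved cursor, fetch, ship, save the new cursor. A runnable sketch of that cycle; getLastRun/setLastRun are the real hooks, while the stub class and callable parameters are assumptions so the sketch stands alone:

```python
# Persistence cycle for fetch-events: read cursor, fetch, ship, save cursor.
class DemistoStub:
    """Stand-in for the XSOAR `demisto` module, for illustration only."""

    def __init__(self):
        self._last_run: dict = {}

    def getLastRun(self) -> dict:
        return self._last_run

    def setLastRun(self, next_run: dict) -> None:
        self._last_run = next_run


def run_fetch_cycle(demisto, client, log_types, fetch, send):
    last_run = demisto.getLastRun()      # cursor saved by the previous run
    events, next_run = fetch(client, last_run, log_types)
    if events:
        send(events)                     # push to the XSIAM dataset
    if next_run:
        demisto.setLastRun(next_run)     # persist cursor for the next run
```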
diff --git a/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector.yml b/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector.yml
index 8f1bc7839089..d687b8f7c036 100644
--- a/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector.yml
+++ b/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector.yml
@@ -33,11 +33,29 @@ configuration:
type: 8
required: false
section: Connect
-- defaultvalue: '10000'
- display: Maximum number of events per fetch
+- display: Event Types To Fetch
+ section: Collect
+ name: event_types_to_fetch
+ type: 16
+ required: false
+ additionalinfo: Event types to fetch. Defaults to 'Audit' if no type is specified.
+ defaultvalue: 'Audit'
+ options:
+ - Audit
+ - Syslog Transactions
+- additionalinfo: Maximum number of audit events per fetch.
+ defaultvalue: '10000'
+ display: Maximum audit events to fetch
name: max_fetch
+ type: 0
required: false
+ section: Collect
+- additionalinfo: Maximum number of syslog transaction events per fetch.
+ defaultvalue: '10000'
+ display: Maximum syslog transaction events to fetch
+ name: max_fetch_syslog_transactions
type: 0
+ required: false
section: Collect
- defaultvalue: 1
display: Events Fetch Interval
@@ -58,14 +76,13 @@ configuration:
type: 8
section: Connect
advanced: true
-description: Use this integration to fetch audit logs from ServiceNow as Cortex XSIAM events.
+description: Use this integration to fetch audit and syslog transaction logs from ServiceNow as Cortex XSIAM events.
display: ServiceNow Event Collector
name: ServiceNow Event Collector
script:
commands:
- name: service-now-get-audit-logs
- description: Returns audit logs events extracted from ServiceNow. This command is used for developing/debugging and is to be used with caution, as it can create events, leading to event duplication and exceeding the API request limitation.
- deprecated: false
+ description: Returns events extracted from ServiceNow. This command is intended for development and debugging and should be used with caution, as it can create events, leading to event duplication and exceeding the API request limit.
arguments:
- auto: PREDEFINED
defaultValue: "False"
@@ -75,11 +92,31 @@ script:
- "True"
- "False"
required: true
- - name: limit
- description: The maximum number of events to return.
+ - description: Maximum number of audit events to return. Default is 1000.
+ name: limit
+ - name: from_date
+ description: 'The date and time of the earliest event. The time format is "{yyyy}-{mm}-{dd} {hh}:{mm}:{ss}". Example: "2021-05-18 13:45:14" indicates May 18, 2021, 1:45PM.'
+ required: false
+ isArray: false
+ defaultValue: ""
+ - name: offset
+ description: Starting record index from which to begin retrieving records.
required: false
isArray: false
- defaultValue: 1000
+ defaultValue: ""
+ - name: service-now-get-syslog-transactions
+ description: Returns syslog transaction events extracted from ServiceNow. This command is intended for development and debugging and should be used with caution, as it can create events, leading to event duplication and exceeding the API request limit.
+ arguments:
+ - auto: PREDEFINED
+ defaultValue: "False"
+ description: Set this argument to True to create events; otherwise, the command only displays them.
+ name: should_push_events
+ predefined:
+ - "True"
+ - "False"
+ required: true
+ - description: Maximum number of syslog transaction events to return. Default is 1000.
+ name: limit
- name: from_date
description: 'The date and time of the earliest event. The time format is "{yyyy}-{mm}-{dd} {hh}:{mm}:{ss}". Example: "2021-05-18 13:45:14" indicates May 18, 2021, 1:45PM.'
required: false
@@ -91,7 +128,7 @@ script:
isArray: false
defaultValue: ""
outputs: []
- dockerimage: demisto/python3:3.10.14.91134
+ dockerimage: demisto/python3:3.11.10.115186
isfetchevents: true
runonce: false
script: ''
diff --git a/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector_description.md b/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector_description.md
index 6061d1b711c1..5bc218856fbc 100644
--- a/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector_description.md
+++ b/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector_description.md
@@ -1,4 +1,4 @@
-Use this integration to collect audit logs automatically from ServiceNow.
+Use this integration to collect audit and syslog transaction logs automatically from ServiceNow.
To use ServiceNow on Cortex XSIAM, ensure your user account has the rest_api_explorer and web_service_admin roles.
These roles are required to make API calls.
diff --git a/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector_test.py b/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector_test.py
index 5ef2fddbec88..ef1c0f0cfa6a 100644
--- a/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector_test.py
+++ b/Packs/ServiceNow/Integrations/ServiceNowEventCollector/ServiceNowEventCollector_test.py
@@ -1,9 +1,12 @@
import json
from datetime import datetime, timedelta
-from freezegun import freeze_time
-
+import ServiceNowEventCollector
import pytest
-from ServiceNowEventCollector import Client, LOGS_DATE_FORMAT
+from ServiceNowEventCollector import (
+ Client, LOGS_DATE_FORMAT, get_events_command, fetch_events_command, process_and_filter_events, get_limit,
+ SYSLOG_TRANSACTIONS, AUDIT, add_time_field, DATE_FORMAT, initialize_from_date, update_last_run, handle_log_types,
+ LAST_FETCH_TIME, PREVIOUS_RUN_IDS)
+from CommonServerPython import DemistoException
def util_load_json(path):
@@ -23,8 +26,9 @@ def create_client(self):
url=self.base_url,
verify=False,
proxy=False,
- fetch_limit=10,
api_server_url=f"{self.base_url}/api/now",
+ fetch_limit_audit=10,
+ fetch_limit_syslog=10
)
@staticmethod
@@ -61,93 +65,887 @@ def create_single(single_response, time, id, output):
output, id_to_start_from = create_single(single_response.copy(), new_time, id_to_start_from, output)
return output
- @pytest.mark.parametrize("logs_to_fetch", [1, 4, 6], ids=["Single", "Part", "All"])
- def test_get_max_fetch_activity_logging(self, logs_to_fetch, mocker):
+ # get_events_command
+
+ def test_get_events_command_standard(self, mocker):
"""
- Given: number of logging to fetch.
- When: running get activity logging command or fetch.
- Then: return the correct number of loggings.
+ Test get_events_command with typical arguments and a single log type.
+ Given:
+ - A log type and standard arguments for date range and limit.
+ When:
+ - Running the 'get_events_command' function to retrieve events.
+ Then:
+ - Validates that the function returns the expected events and human-readable output.
"""
- from ServiceNowEventCollector import get_audit_logs_command
- mocker.patch.object(Client, "get_audit_logs", side_effect=self.create_response_by_limit)
- res, _ = get_audit_logs_command(client=self.client, args={"limit": logs_to_fetch})
- assert len(res) == logs_to_fetch
+ args = {"from_date": "2023-01-01T00:00:00Z", "offset": 0, "limit": 10}
+ last_run = {}
+ log_type = AUDIT
+ mock_logs = [{"event_id": 1, "timestamp": "2023-01-01 01:00:00"}]
- DUPLICATED_AUDIT_LOGS = [
- (("2023-04-15 07:00:00", 5, 2, 0), 5, 2, {"last_fetch_time": "2023-04-15 07:00:00", "previous_run_ids": set()}),
- (("2023-04-15 07:00:00", 5, 0, 0), 3, 5, {"last_fetch_time": "2023-04-15 07:00:00", "previous_run_ids": {"1", "2"}}),
- ]
+ http_responses = mocker.patch.object(
+ Client,
+ "search_events",
+ return_value=mock_logs
+ )
+
+ mocker.patch("ServiceNowEventCollector.add_time_field", return_value="")
+ all_events, command_results = get_events_command(self.client, args, log_type, last_run)
- @pytest.mark.parametrize("args, len_of_audit_logs, len_of_previous, last_run", DUPLICATED_AUDIT_LOGS)
- def test_remove_duplicated_activity_logging(self, args, len_of_audit_logs, len_of_previous, last_run):
+ assert http_responses.call_args[1] == {
+ "from_time": "2023-01-01T00:00:00Z",
+ "log_type": "audit",
+ "limit": 10,
+ "offset": 0,
+ }
+ assert len(all_events) == 1
+ assert isinstance(command_results.readable_output, str)
+ assert "Audit Events" in command_results.readable_output
+ assert "Syslog Transactions Events" not in command_results.readable_output
+
+ def test_get_events_command_empty_response(self, mocker):
"""
- Given: responses with potential duplications from last fetch.
- When: running fetch command.
- Then: return last responses with the latest requestTime to check if there are duplications.
+ Test get_events_command when no logs are returned.
+ Given:
+ - A list of log types and arguments for date range and limit.
+ When:
+ - Running 'get_events_command' function and no events are returned.
+ Then:
+ - Validates that the function returns an empty list and an appropriate human-readable output.
"""
- from ServiceNowEventCollector import process_and_filter_events
+ args = {"from_date": "2023-01-01T00:00:00Z", "offset": 0, "limit": 10}
+ last_run = {}
+ log_type = AUDIT
+ http_responses = mocker.patch.object(Client, "search_events", return_value=[])
- loggings = self.create_response_with_duplicates(*args)
+ mocker.patch("ServiceNowEventCollector.add_time_field", return_value="")
+ all_events, command_results = get_events_command(self.client, args, log_type, last_run)
- activity_loggings, previous_run_ids = process_and_filter_events(
- loggings, last_run.get('previous_run_ids'), "2023-04-15 07:00:00")
- assert len(activity_loggings) == len_of_audit_logs
- assert len(previous_run_ids) == len_of_previous
+ assert len(all_events) == 0
+ assert http_responses.call_count == 1
+ assert "No entries." in command_results.readable_output
- def test_get_activity_logging_command(self, mocker):
+ def test_get_events_command_large_limit(self, mocker):
"""
- Given: params to run get_activity_logging_command
- When: running the command
- Then: Accurate response and readable output is returned.
+ Test get_events_command with a large limit value.
+
+ Given:
+ - Arguments with a large limit and a log type.
+ When:
+ - Running 'get_events_command' function.
+ Then:
+ - Validates that the function handles large limits without errors.
"""
- from ServiceNowEventCollector import get_audit_logs_command
+ args = {"from_date": "2023-01-01T00:00:00Z", "offset": 0, "limit": 1000}
+ last_run = {}
+ log_type = AUDIT
+ mock_logs = [{"event_id": i, "timestamp": "2023-01-01 01:00:00"} for i in range(1000)]
+
+ http_responses = mocker.patch.object(Client, "search_events", return_value=mock_logs)
+ mocker.patch("ServiceNowEventCollector.add_time_field", return_value="")
+ all_events, command_results = get_events_command(self.client, args, log_type, last_run)
- mocker.patch.object(Client, "get_audit_logs", side_effect=self.create_response_by_limit)
- activity_loggings, res = get_audit_logs_command(client=self.client, args={"from_date": "2023-04-15 07:00:00", "limit": 4})
- assert len(activity_loggings) == 4
- assert "Audit Logs List" in res.readable_output
+ assert len(all_events) == 1000
+ assert "Audit Events" in command_results.readable_output
+ assert http_responses.call_count == 1
- @freeze_time("2023-04-12 07:01:00")
- def test_fetch_activity_logging(self, mocker):
+ def test_get_events_command_with_last_run(self, mocker):
"""
- Tests the fetch_events function
+ Test get_events_command when a last_run parameter is provided and 'from_date' is missing in args.
Given:
- - first_fetch_time
+ - A last_run dictionary with a previous 'from_date' value and arguments without 'from_date'.
When:
- - Running the 'fetch_activity_logging' function.
+ - Running the 'get_events_command' function to retrieve events.
Then:
- - Validates that the function generates the correct API requests with the expected parameters.
- - Validates that the function returns the expected events and next_run timestamps.
+ - Validates that the function uses last_run's 'from_date' to initialize the search.
"""
- from ServiceNowEventCollector import fetch_events_command
+ args = {"offset": 0, "limit": 10}
+ last_run = {"last_fetch_time": "2023-01-01T00:00:00Z"}
+ log_type = AUDIT
+ mock_logs = [{"event_id": 2, "timestamp": "2023-01-01 02:00:00"}]
- fetched_events = util_load_json("test_data/fetch_audit_logs.json")
http_responses = mocker.patch.object(
Client,
- "get_audit_logs",
- return_value=fetched_events.get("fetch_logs"),
+ "search_events",
+ return_value=mock_logs
)
- audit_logs, new_last_run = fetch_events_command(self.client, last_run={})
+ mocker.patch("ServiceNowEventCollector.add_time_field", return_value="")
+ mock_initialize_from_date = mocker.patch(
+ "ServiceNowEventCollector.initialize_from_date",
+ wraps=ServiceNowEventCollector.initialize_from_date
+ )
+
+ all_events, command_results = get_events_command(self.client, args, log_type, last_run)
+
+ mock_initialize_from_date.assert_called_once_with(last_run, log_type)
+ assert http_responses.call_args[1] == {
+ "from_time": "2023-01-01T00:00:00Z",
+ "log_type": "audit",
+ "limit": 10,
+ "offset": 0,
+ }
+ assert len(all_events) == 1
+ assert isinstance(command_results.readable_output, str)
+ assert "Audit Events" in command_results.readable_output
+
+ def test_fetch_events_command_standard(self, mocker):
+ """
+ Test fetch_events_command with standard parameters.
+
+ Given:
+ - A last_run dictionary with valid dates and an empty list of previous IDs.
+ When:
+ - Running the 'fetch_events_command' function to retrieve new events.
+ Then:
+ - Validates that the function fetches new events, processes them, and updates last_run correctly.
+ """
+
+ log_types = ["audit"]
+ last_run = {"audit": {"previous_run_ids": []}}
+ mock_events = [{"event_id": 1, "sys_created_on": "2023-01-01 01:00:00"}]
+
+ mocker.patch("ServiceNowEventCollector.initialize_from_date", return_value="2023-01-01T00:00:00Z")
+ mocker.patch.object(self.client, "search_events", return_value=mock_events)
+ mock_process_and_filter = mocker.patch(
+ "ServiceNowEventCollector.process_and_filter_events", return_value=(mock_events, {"1"}))
+ mocker.patch("ServiceNowEventCollector.update_last_run", return_value={
+ "audit": {"previous_run_ids": ["1"], "last_fetch_time": "2023-01-01 01:00:00"}})
+
+ collected_events, updated_last_run = fetch_events_command(self.client, last_run, log_types)
+
+ assert collected_events == mock_events
+ mock_process_and_filter.assert_called_once_with(
+ events=mock_events, previous_run_ids=set(), from_date="2023-01-01T00:00:00Z", log_type="audit")
+ assert updated_last_run["audit"]["last_fetch_time"] == "2023-01-01 01:00:00"
+
+ def test_fetch_events_command_no_new_events(self, mocker):
+ """
+ Test fetch_events_command when no new events are returned.
+
+ Given:
+ - A last_run dictionary with a valid date.
+ When:
+ - Running the 'fetch_events_command' function and no new events are found.
+ Then:
+ - Validates that the function returns an empty list and does not update the last_run date.
+ """
+ log_types = ["audit"]
+ last_run = {"audit": {"previous_run_ids": []}}
+
+ mocker.patch("ServiceNowEventCollector.initialize_from_date", return_value="2023-01-01T00:00:00Z")
+ mocker.patch.object(self.client, "search_events", return_value=[])
+
+ collected_events, updated_last_run = fetch_events_command(self.client, last_run, log_types)
+
+ assert collected_events == []
+ assert updated_last_run == last_run
+
+ def test_fetch_events_command_multiple_log_types(self, mocker):
+ """
+ Test fetch_events_command with multiple log types.
+
+ Given:
+ - A last_run dictionary with two log types and valid from_date values.
+ When:
+ - Running the 'fetch_events_command' function to retrieve events for both log types.
+ Then:
+ - Validates that the function processes both log types and updates last_run accordingly.
+ """
+ log_types = [AUDIT, SYSLOG_TRANSACTIONS]
+ last_run = {
+ "previous_run_ids": [],
+ "previous_run_ids_syslog": [],
+ }
+ mock_audit_events = [{"event_id": 1, "sys_created_on": "2023-01-01 01:00:00"}]
+ mock_syslog_events = [{"event_id": 2, "sys_created_on": "2023-01-01T02:00:00Z"}]
+
+ mocker.patch("ServiceNowEventCollector.initialize_from_date",
+ side_effect=["2023-01-01T00:00:00Z", "2023-01-01T00:00:00Z"])
+ mocker.patch.object(self.client, "search_events", side_effect=[mock_audit_events, mock_syslog_events])
+ mocker.patch("ServiceNowEventCollector.process_and_filter_events",
+ side_effect=[(mock_audit_events, {"1"}), (mock_syslog_events, {"2"})])
+
+ collected_events, updated_last_run = fetch_events_command(self.client, last_run, log_types)
+
+ assert collected_events == mock_audit_events + mock_syslog_events
+ assert updated_last_run[LAST_FETCH_TIME[AUDIT]] == "2023-01-01 01:00:00"
+ assert updated_last_run[LAST_FETCH_TIME[SYSLOG_TRANSACTIONS]] == "2023-01-01T02:00:00Z"
+
+ def test_fetch_events_command_empty_log_types(self):
+ """
+ Test fetch_events_command with an empty log_types list.
+
+ Given:
+ - An empty log_types list.
+ When:
+ - Running 'fetch_events_command' function with no log types.
+ Then:
+ - Validates that the function returns an empty list and does not update last_run.
+ """
+ last_run = {"audit": {"previous_run_ids": []}}
+
+ collected_events, updated_last_run = fetch_events_command(self.client, last_run, [])
+
+ assert collected_events == []
+ assert updated_last_run == last_run
+
+
+def test_process_and_filter_events_standard_case():
+ """
+ Test process_and_filter_events with a standard set of events.
+
+ Given:
+ - A list of events with unique sys_id values.
+ When:
+ - Running the 'process_and_filter_events' function.
+ Then:
+ - Validates that all events are added to unique_events, and previous_run_ids are updated.
+ """
+
+ events = [
+ {"sys_id": "1", "sys_created_on": "2023-01-01 01:00:00"},
+ {"sys_id": "2", "sys_created_on": "2023-01-01 02:00:00"}
+ ]
+ from_date = "2023-01-01 00:00:00"
+ log_type = "audit"
+
+ unique_events, previous_run_ids = process_and_filter_events(events, set(), from_date, log_type)
+
+ assert len(unique_events) == 2
+ assert all(event["source_log_type"] == log_type for event in unique_events)
+ assert previous_run_ids == {"2"}
+
+
+def test_process_and_filter_events_duplicate_event():
+ """
+ Test process_and_filter_events with duplicate events.
+
+ Given:
+ - A list of events containing a duplicate sys_id.
+ When:
+ - Running the 'process_and_filter_events' function.
+ Then:
+ - Validates that duplicate events are excluded from unique_events and not added to previous_run_ids.
+ """
+ events = [
+ {"sys_id": "1", "sys_created_on": "2023-01-01 01:00:00"},
+ {"sys_id": "1", "sys_created_on": "2023-01-01 01:30:00"}
+ ]
+ previous_run_ids = {"1"}
+ from_date = "2023-01-01 00:00:00"
+ log_type = "audit"
+
+ unique_events, updated_previous_run_ids = process_and_filter_events(events, previous_run_ids, from_date, log_type)
+
+ assert len(unique_events) == 0
+ assert updated_previous_run_ids == {"1"}
+
+
+def test_process_and_filter_events_with_same_time():
+ """
+ Test process_and_filter_events when events have the same creation time as the from_date.
+
+ Given:
+ - A list of events with the same sys_created_on time as from_date.
+ When:
+ - Running the 'process_and_filter_events' function.
+ Then:
+ - Validates that all events are added to previous_run_ids and that every event is included in unique_events.
+ """
+ events = [
+ {"sys_id": "1", "sys_created_on": "2023-01-01 01:00:00"},
+ {"sys_id": "2", "sys_created_on": "2023-01-01 01:00:00"}
+ ]
+ from_date = "2023-01-01 01:00:00"
+ log_type = "audit"
+
+ unique_events, previous_run_ids = process_and_filter_events(events, set(), from_date, log_type)
+
+ assert len(unique_events) == 2
+ assert previous_run_ids == {"1", "2"}
+
+
+def test_process_and_filter_events_after_from_date():
+ """
+ Test process_and_filter_events with events created after the from_date.
+
+ Given:
+ - A list of events created after the from_date.
+ When:
+ - Running the 'process_and_filter_events' function.
+ Then:
+ - Validates that all events are added to unique_events and previous_run_ids is reset after finding new events.
+ """
+ events = [
+ {"sys_id": "3", "sys_created_on": "2023-01-01 02:00:00"},
+ {"sys_id": "4", "sys_created_on": "2023-01-01 02:00:00"}
+ ]
+ from_date = "2023-01-01 01:00:00"
+ log_type = "audit"
+
+ unique_events, previous_run_ids = process_and_filter_events(events, {"1", "2"}, from_date, log_type)
+
+ assert len(unique_events) == 2
+ assert previous_run_ids == {"3", "4"}
+
+
+def test_process_and_filter_events_no_events():
+ """
+ Test process_and_filter_events when the events list is empty.
+
+ Given:
+ - An empty list of events.
+ When:
+ - Running the 'process_and_filter_events' function.
+ Then:
+ - Validates that unique_events and previous_run_ids are empty.
+ """
+ events = []
+ previous_run_ids = set()
+ from_date = "2023-01-01 00:00:00"
+ log_type = "audit"
+
+ unique_events, updated_previous_run_ids = process_and_filter_events(events, previous_run_ids, from_date, log_type)
+
+ assert unique_events == []
+ assert updated_previous_run_ids == set()
+
+
+def test_process_and_filter_events_log_type_assignment():
+ """
+ Test process_and_filter_events to check log_type assignment in events.
+
+ Given:
+ - A list of events with various sys_created_on values.
+ When:
+ - Running the 'process_and_filter_events' function.
+ Then:
+ - Validates that each event has the correct 'source_log_type' value.
+ """
+ events = [
+ {"sys_id": "5", "sys_created_on": "2023-01-01 02:00:00"},
+ {"sys_id": "6", "sys_created_on": "2023-01-01 03:00:00"}
+ ]
+ from_date = "2023-01-01 00:00:00"
+ log_type = "audit"
+
+ unique_events, _ = process_and_filter_events(events, set(), from_date, log_type)
+
+ assert all(event["source_log_type"] == log_type for event in unique_events)
+
+
+def test_process_and_filter_events_handles_event_time_formatting():
+ """
+ Test process_and_filter_events to ensure proper '_time' field formatting.
+
+ Given:
+ - A list of events with sys_created_on dates.
+ When:
+ - Running the 'process_and_filter_events' function.
+ Then:
+ - Validates that each event has a correctly formatted '_time' field.
+ """
+ events = [
+ {"sys_id": "7", "sys_created_on": "2023-01-01 02:00:00"}
+ ]
+ from_date = "2023-01-01 00:00:00"
+ log_type = "audit"
+ expected_time_format = "2023-01-01T02:00:00Z"
+
+ unique_events, _ = process_and_filter_events(events, set(), from_date, log_type)
+
+ assert unique_events[0]["_time"] == expected_time_format
+
+
+def test_get_limit_with_args():
+ """
+ Test get_limit when args contains a 'limit' value.
+
+ Given:
+ - args dictionary with 'limit' set.
+ - client with fetch_limit_audit set.
+ When:
+ - Running the 'get_limit' function.
+ Then:
+ - Validates that the 'limit' from args takes precedence over the client default.
+ """
+ args = {"limit": "200"}
+ client = Client(
+ use_oauth=True,
+ credentials={"username": "test", "password": "test"},
+ client_id="test_id",
+ client_secret="test_secret",
+ url="https://test.com",
+ verify=False,
+ proxy=False,
+ api_server_url="https://test.com/api/now",
+ fetch_limit_audit=300,
+ fetch_limit_syslog=400
+ )
+
+ limit = get_limit(args, client)
+
+ assert limit == 200
+
+
+def test_get_limit_with_client_default():
+ """
+ Test get_limit when args has no limit and the client provides a default.
+
+ Given:
+ - args dictionary without 'limit'.
+ - client with fetch_limit_audit set.
+ When:
+ - Running the 'get_limit' function.
+ Then:
+ - Validates that 'fetch_limit_audit' from the client is used as the limit.
+ """
+
+ args = {}
+ client = Client(
+ use_oauth=True,
+ credentials={"username": "test", "password": "test"},
+ client_id="test_id",
+ client_secret="test_secret",
+ url="https://test.com",
+ verify=False,
+ proxy=False,
+ api_server_url="https://test.com/api/now",
+ fetch_limit_audit=300,
+ fetch_limit_syslog=400
+ )
+ limit = get_limit(args, client)
+
+ assert limit == 300
+
+
+def test_get_limit_with_no_args_or_client_default():
+ """
+ Test get_limit when neither args nor the client provides a limit.
+
+ Given:
+ - args dictionary without 'limit'.
+ - client whose fetch_limit_audit is None.
+ When:
+ - Running the 'get_limit' function.
+ Then:
+ - Validates that the default limit of 1000 is used.
+ """
+
+ args = {}
+ client = Client(
+ use_oauth=True,
+ credentials={"username": "test", "password": "test"},
+ client_id="test_id",
+ client_secret="test_secret",
+ url="https://test.com",
+ verify=False,
+ proxy=False,
+ api_server_url="https://test.com/api/now",
+ fetch_limit_audit=None,
+ fetch_limit_syslog=400
+ )
+ limit = get_limit(args, client)
+
+ assert limit == 1000
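The three tests encode a simple precedence chain. A sketch of that chain; the attribute names fetch_limit_audit/fetch_limit_syslog come from the Client constructor in this diff, while the function body and default log_type parameter are assumptions:

```python
DEFAULT_FETCH_LIMIT = 1000  # assumed constant matching the tests' fallback


def get_limit_sketch(args: dict, client, log_type: str = "audit") -> int:
    # Precedence: explicit command argument > instance parameter > hard default.
    if args.get("limit"):
        return int(args["limit"])
    client_limit = client.fetch_limit_audit if log_type == "audit" else client.fetch_limit_syslog
    return client_limit or DEFAULT_FETCH_LIMIT
```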
+
+
+def test_add_time_field_standard_case():
+ """
+ Test add_time_field with a typical list of events.
+
+ Given:
+ - A list of events with 'sys_created_on' timestamps.
+ When:
+ - Calling add_time_field function to add '_time' and 'source_log_type'.
+ Then:
+ - Ensures each event has a correctly formatted '_time' field.
+ - Ensures each event has the specified 'source_log_type'.
+ """
+ events = [
+ {"sys_created_on": "2023-01-01 12:00:00", "sys_id": "1"},
+ {"sys_created_on": "2023-01-02 15:30:00", "sys_id": "2"}
+ ]
+ log_type = "audit"
+
+ result = add_time_field(events, log_type)
+
+ assert result[0]["_time"] == datetime.strptime("2023-01-01 12:00:00", LOGS_DATE_FORMAT).strftime(DATE_FORMAT)
+ assert result[1]["_time"] == datetime.strptime("2023-01-02 15:30:00", LOGS_DATE_FORMAT).strftime(DATE_FORMAT)
+ assert result[0]["source_log_type"] == log_type
+ assert result[1]["source_log_type"] == log_type
+
+
+def test_add_time_field_empty_list():
+ """
+ Test add_time_field with an empty list of events.
+
+ Given:
+ - An empty list of events.
+ When:
+ - Calling add_time_field.
+ Then:
+ - Ensures the function returns an empty list without errors.
+ """
+ events = []
+ log_type = "syslog transactions"
+
+ result = add_time_field(events, log_type)
+ assert result == []
+
+
+def test_add_time_field_invalid_date_format():
+ """
+ Test add_time_field with events containing an invalid 'sys_created_on' date format.
+
+ Given:
+ - A list of events with an invalid date format in 'sys_created_on'.
+ When:
+ - Calling add_time_field.
+ Then:
+ - Expects a ValueError due to incorrect date format.
+ """
+ events = [
+ {"sys_created_on": "2023/01/01 12:00:00", "sys_id": "1"} # incorrect format
+ ]
+ log_type = "audit"
+
+ with pytest.raises(ValueError):
+ add_time_field(events, log_type)
+
+
+def test_add_time_field_partial_valid_dates():
+ """
+ Test add_time_field with a mix of valid and invalid dates.
+
+ Given:
+ - A list of events, where one has a valid 'sys_created_on' and the other has an invalid date format.
+ When:
+ - Calling add_time_field.
+ Then:
+ - Ensures the function raises a ValueError when it reaches the event with the invalid date.
+ """
+ events = [
+ {"sys_created_on": "2023-01-01T12:00:00Z", "sys_id": "1"},
+ {"sys_created_on": "2023/01/02 15:30:00", "sys_id": "2"} # incorrect format
+ ]
+ log_type = "audit"
+
+ with pytest.raises(ValueError):
+ add_time_field(events, log_type)
+
+
+def test_add_time_field_no_sys_created_on_field():
+ """
+ Test add_time_field with events that lack 'sys_created_on' field.
+
+ Given:
+ - A list of events missing the 'sys_created_on' key.
+ When:
+ - Calling add_time_field.
+ Then:
+ - Expects a KeyError as 'sys_created_on' is missing in the event.
+ """
+ events = [
+ {"sys_id": "1"}
+ ]
+ log_type = "audit"
+
+ with pytest.raises(KeyError):
+ add_time_field(events, log_type)
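These five tests fully determine add_time_field's behavior, including its failure modes. A matching sketch, with format constants assumed from the tests:

```python
from datetime import datetime

LOGS_DATE_FORMAT = "%Y-%m-%d %H:%M:%S"
DATE_FORMAT = "%Y-%m-%dT%H:%M:%SZ"


def add_time_field_sketch(events: list, log_type: str) -> list:
    for event in events:
        # event["sys_created_on"] raises KeyError when missing; strptime raises
        # ValueError on a malformed date; both behaviors are asserted above.
        event["_time"] = datetime.strptime(event["sys_created_on"], LOGS_DATE_FORMAT).strftime(DATE_FORMAT)
        event["source_log_type"] = log_type
    return events
```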
+
+
+def test_initialize_from_date_with_existing_timestamp():
+ """
+ Test initialize_from_date when last_run contains a last_fetch_time for the log_type.
+
+ Given:
+ - A last_run dictionary with a last_fetch_time for the specified log_type.
+ When:
+ - Calling initialize_from_date with this log_type.
+ Then:
+ - Returns the existing last_fetch_time for the log_type.
+ """
+ last_run = {
+ "last_fetch_time": "2023-01-01T00:00:00Z",
+ "last_fetch_time_syslog": "2023-01-02T00:00:00Z"
+ }
+ log_type = "audit"
+
+ result = initialize_from_date(last_run, log_type)
+ assert result == "2023-01-01T00:00:00Z"
+
+
+def test_initialize_from_date_without_existing_timestamp():
+ """
+ Test initialize_from_date when last_run does not contain a last_fetch_time for the log_type.
+
+ Given:
+ - A last_run dictionary without a last_fetch_time for the specified log_type.
+ When:
+ - Calling initialize_from_date with this log_type.
+ Then:
+ - Returns a default timestamp set to one minute before the current UTC time.
+ """
+ last_run = {}
+ log_type = "audit"
+
+ result = initialize_from_date(last_run, log_type)
+ expected_time = (datetime.utcnow() - timedelta(minutes=1)).strftime(LOGS_DATE_FORMAT)
+
+ assert abs(datetime.strptime(result, LOGS_DATE_FORMAT)
+ - datetime.strptime(expected_time, LOGS_DATE_FORMAT)) < timedelta(seconds=5)
+
+
+def test_initialize_from_date_with_different_log_type():
+ """
+ Test initialize_from_date when last_run contains a last_fetch_time for a different log_type.
+
+ Given:
+ - A last_run dictionary with a last_fetch_time only for a different log_type.
+ When:
+ - Calling initialize_from_date with a log_type that is not in last_run.
+ Then:
+ - Returns a default timestamp set to one minute before the current UTC time.
+ """
+ last_run = {
+ "syslog transactions": {"last_fetch_time": "2023-01-02T00:00:00Z"}
+ }
+ log_type = "audit"
+
+ result = initialize_from_date(last_run, log_type)
+ expected_time = (datetime.utcnow() - timedelta(minutes=1)).strftime(LOGS_DATE_FORMAT)
+ assert abs(datetime.strptime(result, LOGS_DATE_FORMAT)
+ - datetime.strptime(expected_time, LOGS_DATE_FORMAT)) < timedelta(seconds=5)
+
+
+def test_initialize_from_date_missing_last_fetch_key():
+ """
+ Test initialize_from_date when the last_run dictionary does not have a 'last_fetch_time' key for the main level.
+
+ Given:
+ - A last_run dictionary without a top-level last_fetch_time key.
+ When:
+ - Calling initialize_from_date.
+ Then:
+ - Returns a default timestamp set to one minute before the current UTC time.
+ """
+ last_run = {
+ "audit": {"some_other_field": "some_value"}
+ }
+ log_type = "audit"
+
+ result = initialize_from_date(last_run, log_type)
+ expected_time = (datetime.utcnow() - timedelta(minutes=1)).strftime(LOGS_DATE_FORMAT)
+
+ assert abs(datetime.strptime(result, LOGS_DATE_FORMAT)
+ - datetime.strptime(expected_time, LOGS_DATE_FORMAT)) < timedelta(seconds=5)
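All four initialize_from_date tests reduce to one rule: return the stored cursor for the log type if present, otherwise start one minute back. A sketch under that reading, with key names taken from the flat last_run layout used throughout these tests:

```python
from datetime import datetime, timedelta

LOGS_DATE_FORMAT = "%Y-%m-%d %H:%M:%S"
LAST_FETCH_TIME = {"audit": "last_fetch_time", "syslog transactions": "last_fetch_time_syslog"}


def initialize_from_date_sketch(last_run: dict, log_type: str) -> str:
    from_date = last_run.get(LAST_FETCH_TIME[log_type])
    if not from_date:
        # First fetch for this log type: look back one minute.
        from_date = (datetime.utcnow() - timedelta(minutes=1)).strftime(LOGS_DATE_FORMAT)
    return from_date
```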
+
+
+def test_update_existing_log_type():
+ """
+ Test update_last_run when updating an existing log type.
+
+ Given:
+ - A last_run dictionary with an existing log type entry.
+ When:
+ - Calling update_last_run with the log type, last event time, and new previous_run_ids.
+ Then:
+ - Updates the existing log type entry with new last fetch time and previous run IDs.
+ """
+ last_run = {
+ "last_fetch_time": "2023-01-01T00:00:00Z", "previous_run_ids": ["id1", "id2"]
+ }
+ log_type = "audit"
+ last_event_time = "2023-01-02T00:00:00Z"
+ previous_run_ids = ["id3", "id4"]
+
+ updated_last_run = update_last_run(last_run, log_type, last_event_time, previous_run_ids)
+
+ assert updated_last_run[LAST_FETCH_TIME[AUDIT]] == last_event_time
+ assert updated_last_run[PREVIOUS_RUN_IDS[AUDIT]] == previous_run_ids
+
+
+def test_update_new_log_type():
+ """
+ Test update_last_run when adding a new log type to last_run.
+
+ Given:
+ - A last_run dictionary without the specified log type.
+ When:
+ - Calling update_last_run with a new log type, last event time, and previous run IDs.
+ Then:
+ - Adds the new log type entry with the specified last fetch time and previous run IDs.
+ """
+ last_run = {
+ "last_fetch_time": "2023-01-01T00:00:00Z", "previous_run_ids": ["id1", "id2"]
+ }
+ log_type = "syslog transactions"
+ last_event_time = "2023-01-02T00:00:00Z"
+ previous_run_ids = ["id5", "id6"]
+
+ updated_last_run = update_last_run(last_run, log_type, last_event_time, previous_run_ids)
+
+ assert updated_last_run[LAST_FETCH_TIME[SYSLOG_TRANSACTIONS]] == last_event_time
+ assert updated_last_run[PREVIOUS_RUN_IDS[SYSLOG_TRANSACTIONS]] == previous_run_ids
+
+
+def test_update_empty_previous_run_ids():
+ """
+ Test update_last_run with an empty previous_run_ids list.
+
+ Given:
+ - A last_run dictionary and a log type.
+ - An empty list for previous_run_ids.
+ When:
+ - Calling update_last_run with an empty previous_run_ids.
+ Then:
+ - Updates the log type entry in last_run with an empty previous_run_ids list.
+ """
+ last_run = {
+ "last_fetch_time": "2023-01-01T00:00:00Z", "previous_run_ids": ["id1", "id2"]
+ }
+ log_type = "audit"
+ last_event_time = "2023-01-02T00:00:00Z"
+ previous_run_ids = []
+
+ updated_last_run = update_last_run(last_run, log_type, last_event_time, previous_run_ids)
+
+ assert updated_last_run[LAST_FETCH_TIME[AUDIT]] == last_event_time
+ assert updated_last_run[PREVIOUS_RUN_IDS[AUDIT]] == []
+
+
+def test_update_no_existing_data():
+ """
+ Test update_last_run with an initially empty last_run dictionary.
+
+ Given:
+ - An empty last_run dictionary.
+ When:
+ - Calling update_last_run with a log type, last event time, and previous run IDs.
+ Then:
+ - Creates a new entry for the log type with specified last fetch time and previous run IDs.
+ """
+ last_run = {}
+ log_type = "audit"
+ last_event_time = "2023-01-01T00:00:00Z"
+ previous_run_ids = ["id1"]
+
+ updated_last_run = update_last_run(last_run, log_type, last_event_time, previous_run_ids)
+
+ assert updated_last_run[LAST_FETCH_TIME[AUDIT]] == last_event_time
+ assert updated_last_run[PREVIOUS_RUN_IDS[AUDIT]] == previous_run_ids
+
+
+def test_update_multiple_log_types():
+ """
+ Test update_last_run when updating multiple log types sequentially.
+
+ Given:
+ - A last_run dictionary with multiple log types.
+ When:
+ - Updating the last fetch time and previous run IDs for multiple log types sequentially.
+ Then:
+ - Correctly updates each log type entry with its respective last fetch time and previous run IDs.
+ """
+ last_run = {
+ "last_fetch_time": "2023-01-01T00:00:00Z", "previous_run_ids": ["id1", "id2"],
+ "last_fetch_time_syslog": "2023-01-01T00:00:00Z", "previous_run_ids_syslog": ["id3", "id4"]
+ }
+
+ # Update audit logs
+ updated_last_run = update_last_run(last_run, "audit", "2023-01-02T00:00:00Z", ["id5", "id6"])
+ assert updated_last_run[LAST_FETCH_TIME[AUDIT]] == "2023-01-02T00:00:00Z"
+ assert updated_last_run[PREVIOUS_RUN_IDS[AUDIT]] == ["id5", "id6"]
+
+ # Update syslog transactions
+ updated_last_run = update_last_run(last_run, "syslog transactions", "2023-01-03T00:00:00Z", ["id7", "id8"])
+ assert updated_last_run[LAST_FETCH_TIME[SYSLOG_TRANSACTIONS]] == "2023-01-03T00:00:00Z"
+ assert updated_last_run[PREVIOUS_RUN_IDS[SYSLOG_TRANSACTIONS]] == ["id7", "id8"]
+
+
+def test_handle_log_types_valid_titles():
+ """
+ Test handle_log_types with valid event type titles.
+
+ Given:
+ - A list of valid event type titles.
+ When:
+ - Calling handle_log_types.
+ Then:
+ - Returns a list of corresponding log types.
+ """
+ event_types_to_fetch = ["Audit", "Syslog Transactions"]
+ expected_log_types = [AUDIT, SYSLOG_TRANSACTIONS]
+ assert handle_log_types(event_types_to_fetch) == expected_log_types
+
+
+def test_handle_log_types_single_valid_title():
+ """
+ Test handle_log_types with a single valid event type title.
+
+ Given:
+ - A list containing one valid event type title.
+ When:
+ - Calling handle_log_types.
+ Then:
+ - Returns a list with the corresponding log type.
+ """
+ event_types_to_fetch = ["Audit"]
+ expected_log_types = [AUDIT]
+ assert handle_log_types(event_types_to_fetch) == expected_log_types
+
+
+def test_handle_log_types_invalid_title():
+ """
+ Test handle_log_types with an invalid event type title.
+
+ Given:
+ - A list with an invalid event type title.
+ When:
+ - Calling handle_log_types.
+ Then:
+ - Raises a DemistoException with an appropriate error message.
+ """
+ event_types_to_fetch = ["Invalid Title"]
+ with pytest.raises(DemistoException) as excinfo:
+ handle_log_types(event_types_to_fetch)
+ assert "'Invalid Title' is not valid event type" in str(excinfo.value)
+
- assert http_responses.call_args[0][0] == "2023-04-12 07:00:00"
+def test_handle_log_types_mixed_titles():
+ """
+ Test handle_log_types with a mix of valid and invalid event type titles.
- assert audit_logs == fetched_events.get("fetched_events")
- assert new_last_run.get("last_fetch_time") == "2023-04-15 07:00:00"
- assert "2"
- assert "3" in new_last_run.get("previous_run_ids")
+ Given:
+ - A list containing both valid and invalid event type titles.
+ When:
+ - Calling handle_log_types.
+ Then:
+ - Raises a DemistoException for the invalid event type title.
+ """
+ event_types_to_fetch = ["Audit", "Invalid Title"]
+ with pytest.raises(DemistoException) as excinfo:
+ handle_log_types(event_types_to_fetch)
+ assert "'Invalid Title' is not valid event type" in str(excinfo.value)
- # assert no new results when given the last_run:
- http_responses = mocker.patch.object(Client, "get_audit_logs", return_value=fetched_events.get("fetch_loggings"))
- audit_logs, new_last_run = fetch_events_command(self.client, last_run=new_last_run)
+def test_handle_log_types_empty_list():
+ """
+ Test handle_log_types with an empty list.
- assert http_responses.call_args[0][0] == "2023-04-15 07:00:00"
- assert audit_logs == []
- assert new_last_run.get("last_fetch_time") == "2023-04-15 07:00:00"
- assert "2"
- assert "3" in new_last_run.get("previous_run_ids")
+ Given:
+ - An empty list of event type titles.
+ When:
+ - Calling handle_log_types.
+ Then:
+ - Returns an empty list as no event types are provided.
+ """
+ event_types_to_fetch = []
+ assert handle_log_types(event_types_to_fetch) == []
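handle_log_types maps the display titles from the event_types_to_fetch parameter to internal log-type keys, rejecting unknown titles. A sketch consistent with these tests; ValueError stands in for the XSOAR-specific DemistoException, and the title-to-key map is inferred from the constants used above:

```python
LOG_TYPE_BY_TITLE = {"Audit": "audit", "Syslog Transactions": "syslog transactions"}


def handle_log_types_sketch(event_types_to_fetch: list) -> list:
    log_types = []
    for title in event_types_to_fetch:
        if title not in LOG_TYPE_BY_TITLE:
            # The real code raises DemistoException with this message substring.
            raise ValueError(f"'{title}' is not valid event type")
        log_types.append(LOG_TYPE_BY_TITLE[title])
    return log_types
```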
diff --git a/Packs/ServiceNow/Integrations/ServiceNow_CMDB/ServiceNow_CMDB.yml b/Packs/ServiceNow/Integrations/ServiceNow_CMDB/ServiceNow_CMDB.yml
index 571d9c8a043b..352bd5ab73c5 100644
--- a/Packs/ServiceNow/Integrations/ServiceNow_CMDB/ServiceNow_CMDB.yml
+++ b/Packs/ServiceNow/Integrations/ServiceNow_CMDB/ServiceNow_CMDB.yml
@@ -255,7 +255,7 @@ script:
name: servicenow-cmdb-oauth-login
- description: Test the instance configuration when using OAuth authorization.
name: servicenow-cmdb-oauth-test
- dockerimage: demisto/python3:3.10.13.89873
+ dockerimage: demisto/python3:3.11.10.113941
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/ServiceNow/Integrations/ServiceNow_IAM/ServiceNow_IAM.yml b/Packs/ServiceNow/Integrations/ServiceNow_IAM/ServiceNow_IAM.yml
index d258ea6b03fa..8bd3412f4665 100644
--- a/Packs/ServiceNow/Integrations/ServiceNow_IAM/ServiceNow_IAM.yml
+++ b/Packs/ServiceNow/Integrations/ServiceNow_IAM/ServiceNow_IAM.yml
@@ -230,7 +230,7 @@ script:
type: String
- description: Retrieves a User Profile schema, which holds all of the user fields within the application. Used for outgoing-mapping through the Get Schema option.
name: get-mapping-fields
- dockerimage: demisto/python3:3.10.13.89873
+ dockerimage: demisto/python3:3.11.10.113941
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/ServiceNow/Integrations/ServiceNowv2/README.md b/Packs/ServiceNow/Integrations/ServiceNowv2/README.md
index fa9045e9c361..17e80017d370 100644
--- a/Packs/ServiceNow/Integrations/ServiceNowv2/README.md
+++ b/Packs/ServiceNow/Integrations/ServiceNowv2/README.md
@@ -46,11 +46,11 @@ These scripts are wrapped around the incident table, so to wrap them around anot
for example: “10=Design,11=Development,12=Testing”.
Also, a matching user-defined list of customized incident close reasons must be configured as a "Server configuration" in Cortex XSOAR. (Meaning each Service Now custom state label will have a matching Cortex XSOAR custom close reason with the same name). ***Not following this format will result in a server error!***
For more information about Customize Incident Close Reasons, see [this link](https://docs-cortex.paloaltonetworks.com/r/Cortex-XSOAR/6.10/Cortex-XSOAR-Administrator-Guide/Customize-Incident-Close-Reasons).
- ![image](https://raw.githubusercontent.com/demisto/content/75395ba6d9118bc3a5a399a31d95de4dc27f0911/Packs/ServiceNow/Integrations/ServiceNowv2/doc_files/closing-mirror-xsoar.png)
+ ![image](../../doc_files/closing-mirror-xsoar.png)
6. To enable mirroring to close an incident in ServiceNow, under the **Mirrored ServiceNow Ticket closure method** dropdown, select the ticket closing method,
or set the **Mirrored ServiceNow Ticket custom close state code** parameter, in order to override the default closure method with a custom state.
- ![image](https://raw.githubusercontent.com/demisto/content/75395ba6d9118bc3a5a399a31d95de4dc27f0911/Packs/ServiceNow/Integrations/ServiceNowv2/doc_files/closing-mirror-snow.png)
+ ![image](../../doc_files/closing-mirror-snow.png)
## Instance Creation Flow
This integration supports two types of authorization:
@@ -182,7 +182,7 @@ When the trigger incident is ServiceNow, you use the **ServiceNow Classifier** a
6. Under **Mapper (outgoing)**, for default mapping select ServiceNow - Outgoing Mapper. For
custom mapping, follow the instructions in STEP 3 and then select the custom mapper name.
- ![image](https://raw.githubusercontent.com/demisto/content-docs/5781a9024dc9f7e6418c82d7b7318f07d49fc863/docs/doc_imgs/integrations/snowv2-configuration-settings.png)
+ ![image](../../doc_files/snowv2-configuration-settings.png)
7. Enter the connection parameters.
- Confirm whether your organization uses basic authorization or OAuth authorization (most use basic) and enter the relevant authorization details.
@@ -196,7 +196,7 @@ custom mapping, follow the instructions in STEP 3 and then select the custom map
11. Enable the checkbox for **Use Display Value** if you want to fetch comments and work notes without using sys_journal_field table which required an elevated read only permission.
12. If **Use Display Value** is enabled, **Instance Date Format** needs to be set to the date format that matches the date format used in ServiceNow by the user account used to configure the instance.
-![image](https://user-images.githubusercontent.com/74367144/212351268-12938ccc-87d6-4f36-9c9b-ef7fcd3135a0.png)
+![image](../../doc_files/212351268-12938ccc-87d6-4f36-9c9b-ef7fcd3135a0.png)
13. Set the Timestamp field to query as part of the mirroring flow. This defines the ticket_last_update - the epoch timestamp when the ServiceNow incident was last updated. The default is sys_updated_on.
14. Enter the relevant **Comment Entry Tag**, **Work Note Entry Tag**, **File Entry Tag To ServiceNow** and **File Entry Tag From ServiceNow** values.
@@ -204,7 +204,7 @@ These values are mapped to the **dbotMirrorTags** incident field in Cortex XSOAR
**Note:**
These tags work only for mirroring comments, work notes, and files from Cortex XSOAR to ServiceNow.
-![image](https://raw.githubusercontent.com/demisto/content-docs/954dfad984230fde68dc45bd3dd50bde8338413a/docs/doc_imgs/integrations/mirror-tags.png)
+![image](../../doc_files/mirror-tags.png)
15. Configure any **Custom Fields to Mirror**. These must start with "u_". This is available for ServiceNow v2 version 2.2.10 and later.
**Note:**
@@ -221,7 +221,7 @@ Any modifications require that the mappers be cloned before any changes can be a
3. Under the **Incident Type** dropdown, select ServiceNow Create Ticket and Mirror.
4. Verify the mapper has these fields mapped. They will pull the values configured on the integration instance settings at the time of ingestion.
- ![image](https://raw.githubusercontent.com/demisto/content-docs/ad6522b9c6822f5c4f9798c8aaa1a63c353ddbe3/docs/doc_imgs/snowv2-incoming-mapper.png)
+ ![image](../../doc_files/snowv2-incoming-mapper.png)
- **dbotMirrorId** - dbotMirrorId - the field used by the third-party integration to identify the ticket. This should be the sys_id of the ServiceNow ticket. The value is mapped to incident.servicenowticketid.
- **dbotMirrorDirection** - determines whether mirroring is incoming, outgoing, or both. Default is Both. This should match the instance configuration.
@@ -249,7 +249,7 @@ match.
6. Change the mapping according to your needs, including any fields you want mapped outward to ServiceNow and any custom fields. Make sure the custom fields you want mirrored are added to the integration instance settings.
7. Save your changes.
-![image](https://raw.githubusercontent.com/demisto/content/d9bd0725e4bce1d68b949e66dcdd8f42931b1a88/Packs/ServiceNow/Integrations/ServiceNowv2/doc_files/outgoing-mapper.png)
+![image](../../doc_files/outgoing-mapper.png)
#### STEP 4 - Create an Incident in ServiceNow
@@ -260,12 +260,12 @@ In the example below, we have written *A comment from Cortex XSOAR to ServiceNow
1. Click Actions > Tags and add the comments tag.
2. Add a file to the incident and mark it with the ForServiceNow tag.
-![image](https://raw.githubusercontent.com/demisto/content/d9bd0725e4bce1d68b949e66dcdd8f42931b1a88/Packs/ServiceNow/Integrations/ServiceNowv2/doc_files/mirror-files.png)
+![image](../../doc_files/mirror-files.png)
3. Navigate back to the incident in ServiceNow and within approximately one minute, the changes will be reflected there, too.
You can make additional changes like closing the incident or changing severity and those will be reflected in both systems.
-![image](https://raw.githubusercontent.com/demisto/content/d9bd0725e4bce1d68b949e66dcdd8f42931b1a88/Packs/ServiceNow/Integrations/ServiceNowv2/doc_files/ticket-example.png)
+![image](../../doc_files/ticket-example.png)
### Configure Incident Mirroring When the Trigger Incident is **Not** ServiceNow
@@ -299,7 +299,7 @@ You can set up any source integration to create a ServiceNow ticket based on a f
13. Enable the checkbox for **Use Display Value** if you want to fetch comments and work notes without using sys_journal_field table which required an elevated read only permission.
14. If **Use Display Value** is enabled, **Instance Date Format** needs to be set to the date format that matches the date format used in ServiceNow by the user account used to configure the instance.
-![image](https://user-images.githubusercontent.com/74367144/212352242-329284d8-6936-4f6c-9a30-c741b7425ff8.png)
+![image](../../doc_files/212352242-329284d8-6936-4f6c-9a30-c741b7425ff8.png)
15. Set the **Timestamp field to query as part of the mirroring flow**. This defines the ticket_last_update - the epoch timestamp when the ServiceNow incident was last updated. The default is sys_updated_on.
16. Enter the relevant **Comment Entry Tag**, **Work Note Entry Tag**, **File Entry Tag To ServiceNow** and **File Entry Tag From ServiceNow** values.
@@ -307,7 +307,7 @@ These values are mapped to the **dbotMirrorTags** incident field in Cortex XSOAR
**Note:**
These tags work only for mirroring comments from Cortex XSOAR to ServiceNow.
-![image](https://raw.githubusercontent.com/demisto/content-docs/954dfad984230fde68dc45bd3dd50bde8338413a/docs/doc_imgs/integrations/mirror-tags.png)
+![image](../../doc_files/mirror-tags.png)
17. Configure any **Custom Fields to Mirror**. These must start with "u_". This is available for ServiceNow v2 version 2.2.10 and later.
**Note:**
@@ -323,7 +323,7 @@ Any modifications require that the mappers be cloned before any changes can be a
2. Under the **Incident Type** dropdown, select the relevant triggering incident type, for example Phishing.
3. Verify the mapper has these fields mapped. They will pull the values configured on the integration instance settings at the time of ingestion.
- ![image](https://raw.githubusercontent.com/demisto/content-docs/ad6522b9c6822f5c4f9798c8aaa1a63c353ddbe3/docs/doc_imgs/snowv2-incoming-mapper.png)
+ ![image](../../doc_files/snowv2-incoming-mapper.png)
- **dbotMirrorId** - dbotMirrorId - the field used by the third-party integration to identify the ticket. This should be the sys_id of the ServiceNow ticket. The value is mapped to incident.servicenowticketid.
- **dbotMirrorDirection** - determines whether mirroring is incoming, outgoing, or both. Default is Both. This should match the instance configuration.
@@ -351,7 +351,7 @@ match.
5. Change the mapping according to your needs, including any fields you want mapped outward to ServiceNow and any custom fields. Make sure the custom fields you want mirrored are added to the integration instance settings.
6. Save your changes.
-![image](https://raw.githubusercontent.com/demisto/content/d9bd0725e4bce1d68b949e66dcdd8f42931b1a88/Packs/ServiceNow/Integrations/ServiceNowv2/doc_files/outgoing-mapper.png)
+![image](../../doc_files/outgoing-mapper.png)
#### STEP 4 - Set up Your Source Integration
Set up your source integration so that after fetching a trigger incident a ServiceNow ticket is created and mirroring starts.
@@ -362,11 +362,11 @@ Set up your source integration so that after fetching a trigger incident a Servi
Example:
The following shows the Create New Record playbook task, which creates a ServiceNow ticket.
-![image](https://raw.githubusercontent.com/demisto/content-docs/20d4fb13f3d1c822f3f3be479cc281c45dbc5667/docs/doc_imgs/integrations/snowv2-create-new-record.png)
+![image](../../doc_files/snowv2-create-new-record.png)
The Create New Record task is followed by the Set Mirroring Fields task, which starts the mirroring capability.
-![image](https://raw.githubusercontent.com/demisto/content-docs/996d0dbad4430d325e030d7a75251d8d38ca7778/docs/doc_imgs/integrations/snowv2-set-mirroring-fields.png)
+![image](../../doc_files/snowv2-set-mirroring-fields.png)
The new ServiceNow ticket will be ingested in Cortex XSOAR in approximately one minute.
@@ -375,12 +375,12 @@ In the example below, we have written *A comment from Cortex XSOAR to ServiceNow
1. Click Actions > Tags and add the comments tag.
2. Add a file to the incident and mark it with the ForServiceNow tag.
-![image](https://raw.githubusercontent.com/demisto/content/d9bd0725e4bce1d68b949e66dcdd8f42931b1a88/Packs/ServiceNow/Integrations/ServiceNowv2/doc_files/mirror-files.png)
+![image](../../doc_files/mirror-files.png)
3. Navigate back to the incident in ServiceNow and within approximately one minute, the changes will be reflected there, too.
You can make additional changes like closing the incident or changing severity and those will be reflected in both systems.
-![image](https://raw.githubusercontent.com/demisto/content/d9bd0725e4bce1d68b949e66dcdd8f42931b1a88/Packs/ServiceNow/Integrations/ServiceNowv2/doc_files/ticket-example.png)
+![image](../../doc_files/ticket-example.png)
## Commands
You can execute these commands from the Cortex XSOAR CLI, as part of an automation, or in a playbook.
diff --git a/Packs/ServiceNow/Integrations/ServiceNowv2/ServiceNowv2.yml b/Packs/ServiceNow/Integrations/ServiceNowv2/ServiceNowv2.yml
index fc080a0309ad..de679d15cd62 100644
--- a/Packs/ServiceNow/Integrations/ServiceNowv2/ServiceNowv2.yml
+++ b/Packs/ServiceNow/Integrations/ServiceNowv2/ServiceNowv2.yml
@@ -1636,7 +1636,7 @@ script:
required: true
description: Retrieves attachments from a ticket.
name: servicenow-get-ticket-attachments
- dockerimage: demisto/python3:3.11.10.113941
+ dockerimage: demisto/python3:3.11.10.115186
isfetch: true
ismappable: true
isremotesyncin: true
diff --git a/Packs/ServiceNow/ReleaseNotes/2_6_13.md b/Packs/ServiceNow/ReleaseNotes/2_6_13.md
new file mode 100644
index 000000000000..f0e5cd284b0f
--- /dev/null
+++ b/Packs/ServiceNow/ReleaseNotes/2_6_13.md
@@ -0,0 +1,34 @@
+
+#### Integrations
+
+##### ServiceNow IAM
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### ServiceNow Event Collector
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### ServiceNow CMDB
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+
+#### Scripts
+
+##### ServiceNowIncidentStatus
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### ServiceNowQueryIncident
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### ServiceNowUpdateIncident
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### ServiceNowCreateIncident
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/ServiceNow/ReleaseNotes/2_6_14.md b/Packs/ServiceNow/ReleaseNotes/2_6_14.md
new file mode 100644
index 000000000000..dacf33fd3fc5
--- /dev/null
+++ b/Packs/ServiceNow/ReleaseNotes/2_6_14.md
@@ -0,0 +1,30 @@
+
+#### Integrations
+
+##### ServiceNow Event Collector
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+##### ServiceNow v2
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+
+#### Scripts
+
+##### ServiceNowAddComment
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+##### ServiceNowUpdateIncident
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+##### ServiceNowCreateIncident
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+##### ServiceNowQueryIncident
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
diff --git a/Packs/ServiceNow/ReleaseNotes/2_7_0.md b/Packs/ServiceNow/ReleaseNotes/2_7_0.md
new file mode 100644
index 000000000000..111c97f806d3
--- /dev/null
+++ b/Packs/ServiceNow/ReleaseNotes/2_7_0.md
@@ -0,0 +1,8 @@
+
+#### Integrations
+
+##### ServiceNow Event Collector
+
+- Added the ***service-now-get-syslog-transactions*** command to retrieve syslog transaction logs.
+- Added the *event_types_to_fetch* parameter, allowing selection of the specific log types to fetch.
+- Added support for fetching syslog transaction logs.
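As a usage example for the new command (argument names as defined in the collector YAML above; values are illustrative), from the Cortex XSIAM CLI:

!service-now-get-syslog-transactions limit=10 from_date="2021-05-18 13:45:14" should_push_events=False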
diff --git a/Packs/ServiceNow/Scripts/ServiceNowAddComment/ServiceNowAddComment.yml b/Packs/ServiceNow/Scripts/ServiceNowAddComment/ServiceNowAddComment.yml
index 8b4efc63dbbf..622c7b1e05ef 100644
--- a/Packs/ServiceNow/Scripts/ServiceNowAddComment/ServiceNowAddComment.yml
+++ b/Packs/ServiceNow/Scripts/ServiceNowAddComment/ServiceNowAddComment.yml
@@ -25,7 +25,7 @@ contentitemexportablefields:
dependson:
must:
- '|||servicenow-add-comment'
-dockerimage: demisto/python3:3.11.10.113941
+dockerimage: demisto/python3:3.11.10.115186
enabled: true
name: ServiceNowAddComment
runas: DBotWeakRole
diff --git a/Packs/ServiceNow/Scripts/ServiceNowCreateIncident/ServiceNowCreateIncident.yml b/Packs/ServiceNow/Scripts/ServiceNowCreateIncident/ServiceNowCreateIncident.yml
index 6e416864b0d5..2303e18fa83c 100644
--- a/Packs/ServiceNow/Scripts/ServiceNowCreateIncident/ServiceNowCreateIncident.yml
+++ b/Packs/ServiceNow/Scripts/ServiceNowCreateIncident/ServiceNowCreateIncident.yml
@@ -43,6 +43,6 @@ dependson:
tests:
- No test - Hibernating instance
fromversion: 5.0.0
-dockerimage: demisto/python3:3.10.13.83255
+dockerimage: demisto/python3:3.11.10.115186
skipprepare:
- script-name-incident-to-alert
diff --git a/Packs/ServiceNow/Scripts/ServiceNowIncidentStatus/ServiceNowIncidentStatus.yml b/Packs/ServiceNow/Scripts/ServiceNowIncidentStatus/ServiceNowIncidentStatus.yml
index 4c4e284e7459..2682905aabe9 100644
--- a/Packs/ServiceNow/Scripts/ServiceNowIncidentStatus/ServiceNowIncidentStatus.yml
+++ b/Packs/ServiceNow/Scripts/ServiceNowIncidentStatus/ServiceNowIncidentStatus.yml
@@ -13,7 +13,7 @@ comment: |
enabled: true
scripttarget: 0
subtype: python3
-dockerimage: demisto/python3:3.10.13.86272
+dockerimage: demisto/python3:3.11.10.113941
runas: DBotWeakRole
tests:
- No tests (auto formatted)
diff --git a/Packs/ServiceNow/Scripts/ServiceNowQueryIncident/ServiceNowQueryIncident.yml b/Packs/ServiceNow/Scripts/ServiceNowQueryIncident/ServiceNowQueryIncident.yml
index 8b786d848d76..bb0ca225bb90 100644
--- a/Packs/ServiceNow/Scripts/ServiceNowQueryIncident/ServiceNowQueryIncident.yml
+++ b/Packs/ServiceNow/Scripts/ServiceNowQueryIncident/ServiceNowQueryIncident.yml
@@ -40,6 +40,6 @@ dependson:
must:
- servicenow-query-table
fromversion: 5.0.0
-dockerimage: demisto/python3:3.10.13.83255
+dockerimage: demisto/python3:3.11.10.115186
skipprepare:
- script-name-incident-to-alert
diff --git a/Packs/ServiceNow/Scripts/ServiceNowUpdateIncident/ServiceNowUpdateIncident.yml b/Packs/ServiceNow/Scripts/ServiceNowUpdateIncident/ServiceNowUpdateIncident.yml
index ce82066ecd3e..03750130b8f2 100644
--- a/Packs/ServiceNow/Scripts/ServiceNowUpdateIncident/ServiceNowUpdateIncident.yml
+++ b/Packs/ServiceNow/Scripts/ServiceNowUpdateIncident/ServiceNowUpdateIncident.yml
@@ -48,6 +48,6 @@ dependson:
tests:
- No test - Hibernating instance
fromversion: 5.0.0
-dockerimage: demisto/python3:3.10.13.83255
+dockerimage: demisto/python3:3.11.10.115186
skipprepare:
- script-name-incident-to-alert
diff --git a/Packs/ServiceNow/doc_files/49080934-eac54300-f24d-11e8-8a68-41401ef2f37c.png b/Packs/ServiceNow/doc_files/49080934-eac54300-f24d-11e8-8a68-41401ef2f37c.png
new file mode 100644
index 000000000000..29e02eb2f09a
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49080934-eac54300-f24d-11e8-8a68-41401ef2f37c.png differ
diff --git a/Packs/ServiceNow/doc_files/49081133-7939c480-f24e-11e8-9e91-bad17c618e8d.png b/Packs/ServiceNow/doc_files/49081133-7939c480-f24e-11e8-9e91-bad17c618e8d.png
new file mode 100644
index 000000000000..2da4ef9b245a
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49081133-7939c480-f24e-11e8-9e91-bad17c618e8d.png differ
diff --git a/Packs/ServiceNow/doc_files/49081247-e4839680-f24e-11e8-848f-fda6bc14fe9e.png b/Packs/ServiceNow/doc_files/49081247-e4839680-f24e-11e8-848f-fda6bc14fe9e.png
new file mode 100644
index 000000000000..86c68735dd97
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49081247-e4839680-f24e-11e8-848f-fda6bc14fe9e.png differ
diff --git a/Packs/ServiceNow/doc_files/49081310-21e82400-f24f-11e8-9876-4d1ae0f32f1a.png b/Packs/ServiceNow/doc_files/49081310-21e82400-f24f-11e8-9876-4d1ae0f32f1a.png
new file mode 100644
index 000000000000..22b00d50de53
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49081310-21e82400-f24f-11e8-9876-4d1ae0f32f1a.png differ
diff --git a/Packs/ServiceNow/doc_files/49081488-b6eb1d00-f24f-11e8-8c33-4da0b732937a.png b/Packs/ServiceNow/doc_files/49081488-b6eb1d00-f24f-11e8-8c33-4da0b732937a.png
new file mode 100644
index 000000000000..97446598409b
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49081488-b6eb1d00-f24f-11e8-8c33-4da0b732937a.png differ
diff --git a/Packs/ServiceNow/doc_files/49081549-e13cda80-f24f-11e8-9b64-19e3f49483e8.png b/Packs/ServiceNow/doc_files/49081549-e13cda80-f24f-11e8-9b64-19e3f49483e8.png
new file mode 100644
index 000000000000..d6691d10a951
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49081549-e13cda80-f24f-11e8-9b64-19e3f49483e8.png differ
diff --git a/Packs/ServiceNow/doc_files/49081849-bf902300-f250-11e8-9786-f55cd9faa98a.png b/Packs/ServiceNow/doc_files/49081849-bf902300-f250-11e8-9786-f55cd9faa98a.png
new file mode 100644
index 000000000000..34cae916967b
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49081849-bf902300-f250-11e8-9786-f55cd9faa98a.png differ
diff --git a/Packs/ServiceNow/doc_files/49082217-e569f780-f251-11e8-8fda-a0516b297d6f.png b/Packs/ServiceNow/doc_files/49082217-e569f780-f251-11e8-8fda-a0516b297d6f.png
new file mode 100644
index 000000000000..8b63b51edd0c
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49082217-e569f780-f251-11e8-8fda-a0516b297d6f.png differ
diff --git a/Packs/ServiceNow/doc_files/49082687-17c82480-f253-11e8-9ab2-bdbb379fbdad.png b/Packs/ServiceNow/doc_files/49082687-17c82480-f253-11e8-9ab2-bdbb379fbdad.png
new file mode 100644
index 000000000000..e182411d89a2
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49082687-17c82480-f253-11e8-9ab2-bdbb379fbdad.png differ
diff --git a/Packs/ServiceNow/doc_files/49082769-480fc300-f253-11e8-93f2-c2ca049ffdbc.png b/Packs/ServiceNow/doc_files/49082769-480fc300-f253-11e8-93f2-c2ca049ffdbc.png
new file mode 100644
index 000000000000..e33521239938
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49082769-480fc300-f253-11e8-93f2-c2ca049ffdbc.png differ
diff --git a/Packs/ServiceNow/doc_files/49083041-1519ff00-f254-11e8-9af9-ac2fd755965d.png b/Packs/ServiceNow/doc_files/49083041-1519ff00-f254-11e8-9af9-ac2fd755965d.png
new file mode 100644
index 000000000000..ae98ffb600f9
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49083041-1519ff00-f254-11e8-9af9-ac2fd755965d.png differ
diff --git a/Packs/ServiceNow/doc_files/49083116-5d392180-f254-11e8-8854-6b250075eab2.png b/Packs/ServiceNow/doc_files/49083116-5d392180-f254-11e8-8854-6b250075eab2.png
new file mode 100644
index 000000000000..93e977a17e75
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49083116-5d392180-f254-11e8-8854-6b250075eab2.png differ
diff --git a/Packs/ServiceNow/doc_files/49083163-82c62b00-f254-11e8-95bd-ff220d887ee2.png b/Packs/ServiceNow/doc_files/49083163-82c62b00-f254-11e8-95bd-ff220d887ee2.png
new file mode 100644
index 000000000000..c663843a5f3d
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49083163-82c62b00-f254-11e8-95bd-ff220d887ee2.png differ
diff --git a/Packs/ServiceNow/doc_files/49084292-b35b9400-f257-11e8-98b9-40c404c2a52f.png b/Packs/ServiceNow/doc_files/49084292-b35b9400-f257-11e8-98b9-40c404c2a52f.png
new file mode 100644
index 000000000000..bb6a0cc673ed
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49084292-b35b9400-f257-11e8-98b9-40c404c2a52f.png differ
diff --git a/Packs/ServiceNow/doc_files/49084519-5dd3b700-f258-11e8-8675-47bf47bc3fba.png b/Packs/ServiceNow/doc_files/49084519-5dd3b700-f258-11e8-8675-47bf47bc3fba.png
new file mode 100644
index 000000000000..5caedf6d54f3
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49084519-5dd3b700-f258-11e8-8675-47bf47bc3fba.png differ
diff --git a/Packs/ServiceNow/doc_files/49084595-95dafa00-f258-11e8-99d0-d55a044b5b8d.png b/Packs/ServiceNow/doc_files/49084595-95dafa00-f258-11e8-99d0-d55a044b5b8d.png
new file mode 100644
index 000000000000..1438f93854fa
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49084595-95dafa00-f258-11e8-99d0-d55a044b5b8d.png differ
diff --git a/Packs/ServiceNow/doc_files/49084694-dfc3e000-f258-11e8-83d5-f5ee9b23f9c6.png b/Packs/ServiceNow/doc_files/49084694-dfc3e000-f258-11e8-83d5-f5ee9b23f9c6.png
new file mode 100644
index 000000000000..37a3d3cd0d6f
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49084694-dfc3e000-f258-11e8-83d5-f5ee9b23f9c6.png differ
diff --git a/Packs/ServiceNow/doc_files/49531657-66189b80-f8c3-11e8-8479-cba45949186c.png b/Packs/ServiceNow/doc_files/49531657-66189b80-f8c3-11e8-8479-cba45949186c.png
new file mode 100644
index 000000000000..11d75809f280
Binary files /dev/null and b/Packs/ServiceNow/doc_files/49531657-66189b80-f8c3-11e8-8479-cba45949186c.png differ
diff --git a/Packs/ServiceNow/doc_files/52177960-28f05d00-27d1-11e9-8628-9e46a3916f61.png b/Packs/ServiceNow/doc_files/52177960-28f05d00-27d1-11e9-8628-9e46a3916f61.png
new file mode 100644
index 000000000000..d2a6e249d9cd
Binary files /dev/null and b/Packs/ServiceNow/doc_files/52177960-28f05d00-27d1-11e9-8628-9e46a3916f61.png differ
diff --git a/Packs/ServiceNow/pack_metadata.json b/Packs/ServiceNow/pack_metadata.json
index b98630aa7917..cc34a142d0b9 100644
--- a/Packs/ServiceNow/pack_metadata.json
+++ b/Packs/ServiceNow/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "ServiceNow",
"description": "Use The ServiceNow IT Service Management (ITSM) solution to modernize the way you manage and deliver services to your users.",
"support": "xsoar",
- "currentVersion": "2.6.12",
+ "currentVersion": "2.7.0",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/ShiftLeft/Integrations/shiftleft/shiftleft.yml b/Packs/ShiftLeft/Integrations/shiftleft/shiftleft.yml
index b43ee137e3ad..54d01f35a941 100644
--- a/Packs/ShiftLeft/Integrations/shiftleft/shiftleft.yml
+++ b/Packs/ShiftLeft/Integrations/shiftleft/shiftleft.yml
@@ -66,7 +66,7 @@ script:
- name: entropy
description: Entropy.
description: Return list of app secrets.
- dockerimage: demisto/python3:3.10.13.72123
+ dockerimage: demisto/python3:3.11.10.116439
runonce: false
subtype: python3
fromversion: 6.0.0
diff --git a/Packs/ShiftLeft/ReleaseNotes/1_0_6.md b/Packs/ShiftLeft/ReleaseNotes/1_0_6.md
new file mode 100644
index 000000000000..9eb122624e5b
--- /dev/null
+++ b/Packs/ShiftLeft/ReleaseNotes/1_0_6.md
@@ -0,0 +1,10 @@
+
+#### Integrations
+
+##### ShiftLeft CORE
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+
diff --git a/Packs/ShiftLeft/pack_metadata.json b/Packs/ShiftLeft/pack_metadata.json
index 3af3218705a9..36bf40fbf027 100644
--- a/Packs/ShiftLeft/pack_metadata.json
+++ b/Packs/ShiftLeft/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "ShiftLeft CORE",
"description": "See high risk vulnerabilities in your application before they go into production with ShiftLeft CORE",
"support": "partner",
- "currentVersion": "1.0.5",
+ "currentVersion": "1.0.6",
"author": "ShiftLeft Inc",
"url": "https://shiftleft.io",
"email": "support@shiftleft.io",
diff --git a/Packs/Shodan/Integrations/Shodan_v2/Shodan_v2.yml b/Packs/Shodan/Integrations/Shodan_v2/Shodan_v2.yml
index 480cb609351a..1e363ac55958 100644
--- a/Packs/Shodan/Integrations/Shodan_v2/Shodan_v2.yml
+++ b/Packs/Shodan/Integrations/Shodan_v2/Shodan_v2.yml
@@ -371,7 +371,7 @@ script:
- name: max_fetch
description: The maximum amount of events to return.
defaultValue: 50000
- dockerimage: demisto/python3:3.11.10.111526
+ dockerimage: demisto/python3:3.11.10.113941
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/Shodan/ModelingRules/Shodan/Shodan.xif b/Packs/Shodan/ModelingRules/Shodan/Shodan.xif
new file mode 100644
index 000000000000..68a46ced2f63
--- /dev/null
+++ b/Packs/Shodan/ModelingRules/Shodan/Shodan.xif
@@ -0,0 +1,14 @@
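+// Modeling rule for Shodan banner alerts: nested values are wrapped with object_create so they can be merged into xdm.alert.description, and an IPv4 is extracted from the filters JSON for xdm.target.host.ipv4_addresses.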
+[MODEL:dataset = "shodan_banner_raw"]
+ alter
+ has_triggers_obj = object_create("has_triggers",has_triggers),
+ expires_obj = object_create("expires",expires),
+ expiration_obj = object_create("expiration",expiration),
+ triggers_obj = object_create("triggers",replex(triggers,"[\\\"]","")),
+ internal_ipv4 = arrayindex(regextract(json_extract(filters,"$.ip"),"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"),0)
+| alter
+ xdm.alert.name = name,
+ xdm.session_context_id = id,
+ xdm.event.description = filters,
+ xdm.alert.description = object_merge(triggers_obj,has_triggers_obj,expires_obj,expiration_obj),
+ xdm.target.host.ipv4_addresses = arraycreate(internal_ipv4),
+ xdm.event.type = "NETWORK";
\ No newline at end of file
diff --git a/Packs/Shodan/ModelingRules/Shodan/Shodan.yml b/Packs/Shodan/ModelingRules/Shodan/Shodan.yml
new file mode 100644
index 000000000000..379f1aab31a2
--- /dev/null
+++ b/Packs/Shodan/ModelingRules/Shodan/Shodan.yml
@@ -0,0 +1,6 @@
+fromversion: 8.6.0 # Will be updated with XSIAM version updates
+id: Shodan_Banner_ModelingRule
+name: Shodan Banner Modeling Rule
+rules: ''
+schema: ''
+tags: ''
\ No newline at end of file
diff --git a/Packs/Shodan/ModelingRules/Shodan/Shodan_schema.json b/Packs/Shodan/ModelingRules/Shodan/Shodan_schema.json
new file mode 100644
index 000000000000..f3cb648738ca
--- /dev/null
+++ b/Packs/Shodan/ModelingRules/Shodan/Shodan_schema.json
@@ -0,0 +1,44 @@
+{
+ "shodan_banner_raw": {
+ "name": {
+ "type": "string",
+ "is_array": false
+ },
+ "expiration": {
+ "type": "string",
+ "is_array": false
+ },
+ "id": {
+ "type": "string",
+ "is_array": false
+ },
+ "size": {
+ "type": "int",
+ "is_array": false
+ },
+ "created": {
+ "type": "datetime",
+ "is_array": false
+ },
+ "expires": {
+ "type": "int",
+ "is_array": false
+ },
+ "filters": {
+ "type": "string",
+ "is_array": false
+ },
+ "triggers": {
+ "type": "string",
+ "is_array": false
+ },
+ "notifiers": {
+ "type": "string",
+ "is_array": false
+ },
+ "has_triggers": {
+ "type": "boolean",
+ "is_array": false
+ }
+ }
+}
\ No newline at end of file
diff --git a/Packs/Shodan/README.md b/Packs/Shodan/README.md
index e69de29bb2d1..e64ad2075061 100644
--- a/Packs/Shodan/README.md
+++ b/Packs/Shodan/README.md
@@ -0,0 +1,26 @@
+# Shodan Banner
+Shodan is a search engine for Internet-connected devices. Unlike traditional search engines that index websites, Shodan indexes information about the devices themselves, such as servers, routers, webcams, and other IoT devices.
+<~XSIAM>
+
+This pack includes Cortex XSIAM content.
+
+## Configuration on Server Side
+To enable the Shodan integration, you need an API key,
+which you can get for free by creating a Shodan account at https://account.shodan.io/register.
+Once you have an API key, insert it into the **API Key** field and click **Test**.
+
+## Configuration on Cortex XSIAM
+1. Navigate to **Settings** → **Configurations** → **Automation & Feeds**.
+2. Search for Shodan v2.
+3. Click **Add instance** to create and configure a new integration instance.
+
+ | **Parameter** | **Description** | **Required** |
+ | --- | --- | --- |
+ | API Key | | False |
+ | Base URL to Shodan API | | True |
+ | Trust any certificate (not secure) | | False |
+ | Use system proxy settings | | False |
+ | Source Reliability | Reliability of the source providing the intelligence data. | False |
+ | The maximum number of events per fetch | | False |
+</~XSIAM>
diff --git a/Packs/Shodan/ReleaseNotes/1_2_1.md b/Packs/Shodan/ReleaseNotes/1_2_1.md
new file mode 100644
index 000000000000..c8db2049ff9f
--- /dev/null
+++ b/Packs/Shodan/ReleaseNotes/1_2_1.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Shodan v2
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/Shodan/ReleaseNotes/1_2_2.md b/Packs/Shodan/ReleaseNotes/1_2_2.md
new file mode 100644
index 000000000000..ccae04d448cf
--- /dev/null
+++ b/Packs/Shodan/ReleaseNotes/1_2_2.md
@@ -0,0 +1,6 @@
+
+#### Modeling Rules
+
+##### New: Shodan Banner Modeling Rule
+
+<~XSIAM>Created Modeling Rules for Shodan Banner (available from Cortex XSIAM v2.4).</~XSIAM>
diff --git a/Packs/Shodan/pack_metadata.json b/Packs/Shodan/pack_metadata.json
index 314930e98b43..a04c05952164 100644
--- a/Packs/Shodan/pack_metadata.json
+++ b/Packs/Shodan/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Shodan",
"description": "A search engine used for searching Internet-connected devices",
"support": "xsoar",
- "currentVersion": "1.2.0",
+ "currentVersion": "1.2.2",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
@@ -14,7 +14,10 @@
"Free Enricher"
],
"useCases": [],
- "keywords": [],
+ "keywords": [
+ "Shodan",
+ "Banner"
+ ],
"marketplaces": [
"xsoar",
"marketplacev2"
diff --git a/Packs/SignalSciences/Integrations/SignalSciences/SignalSciences.yml b/Packs/SignalSciences/Integrations/SignalSciences/SignalSciences.yml
index 9b1b6cb68243..67c6b500392c 100644
--- a/Packs/SignalSciences/Integrations/SignalSciences/SignalSciences.yml
+++ b/Packs/SignalSciences/Integrations/SignalSciences/SignalSciences.yml
@@ -962,7 +962,7 @@ script:
description: Retrieves a request by request ID.
isfetch: true
runonce: false
- dockerimage: demisto/python3:3.10.13.72123
+ dockerimage: demisto/python3:3.11.10.116439
fromversion: 5.0.0
tests:
- No tests (auto formatted)
diff --git a/Packs/SignalSciences/ReleaseNotes/1_0_19.md b/Packs/SignalSciences/ReleaseNotes/1_0_19.md
new file mode 100644
index 000000000000..59dd3b64a9ef
--- /dev/null
+++ b/Packs/SignalSciences/ReleaseNotes/1_0_19.md
@@ -0,0 +1,10 @@
+
+#### Integrations
+
+##### Signal Sciences WAF
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+
diff --git a/Packs/SignalSciences/pack_metadata.json b/Packs/SignalSciences/pack_metadata.json
index 74691105c99b..d875866736f7 100644
--- a/Packs/SignalSciences/pack_metadata.json
+++ b/Packs/SignalSciences/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Signal Sciences WAF",
"description": "Protect your web application using Signal Sciences.",
"support": "xsoar",
- "currentVersion": "1.0.18",
+ "currentVersion": "1.0.19",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/Silverfort/Integrations/Silverfort/Silverfort.yml b/Packs/Silverfort/Integrations/Silverfort/Silverfort.yml
index 17ed04d9b9da..99858e387407 100644
--- a/Packs/Silverfort/Integrations/Silverfort/Silverfort.yml
+++ b/Packs/Silverfort/Integrations/Silverfort/Silverfort.yml
@@ -117,7 +117,7 @@ script:
required: true
description: Update the resource entity risk.
name: silverfort-update-resource-risk
- dockerimage: demisto/auth-utils:1.0.0.91447
+ dockerimage: demisto/auth-utils:1.0.0.116752
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/Silverfort/ReleaseNotes/2_0_28.md b/Packs/Silverfort/ReleaseNotes/2_0_28.md
new file mode 100644
index 000000000000..59813660ec62
--- /dev/null
+++ b/Packs/Silverfort/ReleaseNotes/2_0_28.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Silverfort
+
+
+- Updated the Docker image to: *demisto/auth-utils:1.0.0.115527*.
diff --git a/Packs/Silverfort/ReleaseNotes/2_0_29.md b/Packs/Silverfort/ReleaseNotes/2_0_29.md
new file mode 100644
index 000000000000..c08c7137acdd
--- /dev/null
+++ b/Packs/Silverfort/ReleaseNotes/2_0_29.md
@@ -0,0 +1,9 @@
+
+#### Integrations
+
+##### Silverfort
+- Updated the Docker image to: *demisto/auth-utils:1.0.0.116752*.
+
+
+
+
diff --git a/Packs/Silverfort/pack_metadata.json b/Packs/Silverfort/pack_metadata.json
index 3a800e076f3a..3dfa319fac74 100644
--- a/Packs/Silverfort/pack_metadata.json
+++ b/Packs/Silverfort/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Silverfort",
"description": "Silverfort protects organizations from data breaches by delivering strong authentication across entire corporate networks and cloud environments, without requiring any modifications to endpoints or servers. Using patent-pending technology, Silverfort's agentless approach enables multi-factor authentication and AI-driven adaptive authentication even for systems that don’t support it today, including proprietary systems, critical infrastructure, shared folders, IoT devices, and more. Use Silverfort integration to get & update Silverfort risk severity. This integration was integrated and tested with Silverfort version 2.12.",
"support": "partner",
- "currentVersion": "2.0.27",
+ "currentVersion": "2.0.29",
"author": "Silverfort",
"url": "https://support.silverfort.com/",
"email": "support@silverfort.com",
diff --git a/Packs/SimpleAPIProxy/Integrations/SimpleAPIProxy/SimpleAPIProxy.yml b/Packs/SimpleAPIProxy/Integrations/SimpleAPIProxy/SimpleAPIProxy.yml
index 451815e27dcd..6114aaec8723 100644
--- a/Packs/SimpleAPIProxy/Integrations/SimpleAPIProxy/SimpleAPIProxy.yml
+++ b/Packs/SimpleAPIProxy/Integrations/SimpleAPIProxy/SimpleAPIProxy.yml
@@ -69,7 +69,7 @@ description: Provide a simple API proxy to restrict privileges or minimize the a
display: Simple API Proxy
name: Simple API Proxy
script:
- dockerimage: demisto/fastapi:1.0.0.76036
+ dockerimage: demisto/fastapi:0.115.4.115067
longRunning: true
longRunningPort: true
runonce: true
diff --git a/Packs/SimpleAPIProxy/ReleaseNotes/1_0_4.md b/Packs/SimpleAPIProxy/ReleaseNotes/1_0_4.md
new file mode 100644
index 000000000000..b1a95b0ef9ab
--- /dev/null
+++ b/Packs/SimpleAPIProxy/ReleaseNotes/1_0_4.md
@@ -0,0 +1,6 @@
+
+#### Integrations
+
+##### Simple API Proxy
+
+- Updated the Docker image to: *demisto/fastapi:0.115.4.115067*.
diff --git a/Packs/SimpleAPIProxy/pack_metadata.json b/Packs/SimpleAPIProxy/pack_metadata.json
index f86c857b564b..20a177994e68 100644
--- a/Packs/SimpleAPIProxy/pack_metadata.json
+++ b/Packs/SimpleAPIProxy/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Simple API Proxy",
"description": "This pack provides a simple API proxy to restrict privileges or minimize the amount of credentials issued at the API.",
"support": "community",
- "currentVersion": "1.0.3",
+ "currentVersion": "1.0.4",
"author": "thimanshu474",
"url": "",
"email": "",
diff --git a/Packs/Sixgill-Darkfeed/Integrations/Sixgill_Darkfeed/Sixgill_Darkfeed.yml b/Packs/Sixgill-Darkfeed/Integrations/Sixgill_Darkfeed/Sixgill_Darkfeed.yml
index d21346d15aab..a98285a50e62 100644
--- a/Packs/Sixgill-Darkfeed/Integrations/Sixgill_Darkfeed/Sixgill_Darkfeed.yml
+++ b/Packs/Sixgill-Darkfeed/Integrations/Sixgill_Darkfeed/Sixgill_Darkfeed.yml
@@ -124,7 +124,7 @@ script:
description: Fetching Sixgill DarkFeed indicators
execution: true
name: sixgill-get-indicators
- dockerimage: demisto/sixgill:1.0.0.108071
+ dockerimage: demisto/sixgill:1.0.0.117239
feed: true
runonce: false
subtype: python3
diff --git a/Packs/Sixgill-Darkfeed/Integrations/Sixgill_Darkfeed_Enrichment/Sixgill_Darkfeed_Enrichment.yml b/Packs/Sixgill-Darkfeed/Integrations/Sixgill_Darkfeed_Enrichment/Sixgill_Darkfeed_Enrichment.yml
index 61fe8cf2394e..715d2b22ac50 100644
--- a/Packs/Sixgill-Darkfeed/Integrations/Sixgill_Darkfeed_Enrichment/Sixgill_Darkfeed_Enrichment.yml
+++ b/Packs/Sixgill-Darkfeed/Integrations/Sixgill_Darkfeed_Enrichment/Sixgill_Darkfeed_Enrichment.yml
@@ -550,7 +550,7 @@ script:
- contextPath: SixgillDarkfeed.Postid.external_reference
description: Link to the IOC on Virustotal and an abstraction of the number of detections; MITRE ATT&CK tactics and techniques.
type: Unknown
- dockerimage: demisto/sixgill:1.0.0.100379
+ dockerimage: demisto/sixgill:1.0.0.117239
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/Sixgill-Darkfeed/ReleaseNotes/2_2_23.md b/Packs/Sixgill-Darkfeed/ReleaseNotes/2_2_23.md
new file mode 100644
index 000000000000..8e2689dd07f0
--- /dev/null
+++ b/Packs/Sixgill-Darkfeed/ReleaseNotes/2_2_23.md
@@ -0,0 +1,11 @@
+
+#### Integrations
+
+##### Sixgill DarkFeed Enrichment
+
+
+- Updated the Docker image to: *demisto/sixgill:1.0.0.117239*.
+##### Sixgill DarkFeed Threat Intelligence
+
+
+- Updated the Docker image to: *demisto/sixgill:1.0.0.117239*.
diff --git a/Packs/Sixgill-Darkfeed/pack_metadata.json b/Packs/Sixgill-Darkfeed/pack_metadata.json
index fa0fc9067a5a..b70472a62ead 100644
--- a/Packs/Sixgill-Darkfeed/pack_metadata.json
+++ b/Packs/Sixgill-Darkfeed/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Sixgill Darkfeed - Annual Subscription",
"description": "This edition of Sixgill Darkfeed is intended for customers who have a direct annual subscription to Sixgill Darkfeed.\n\nGet contextual and actionable insights to proactively block underground threats in real-time with the most comprehensive, automated stream of IOCs \n\nFor organizations who are currently Darkfeed customers.",
"support": "partner",
- "currentVersion": "2.2.22",
+ "currentVersion": "2.2.23",
"author": "Cybersixgill",
"url": "",
"email": "sales@cybersixgill.com",
diff --git a/Packs/Slack/Integrations/SlackEventCollector/SlackEventCollector.yml b/Packs/Slack/Integrations/SlackEventCollector/SlackEventCollector.yml
index 7d93d1609e9e..e4794640dd43 100644
--- a/Packs/Slack/Integrations/SlackEventCollector/SlackEventCollector.yml
+++ b/Packs/Slack/Integrations/SlackEventCollector/SlackEventCollector.yml
@@ -82,7 +82,7 @@ script:
type: python
subtype: python3
isfetchevents: true
- dockerimage: demisto/python3:3.10.14.97608
+ dockerimage: demisto/python3:3.11.10.115186
marketplaces:
- marketplacev2
fromversion: 6.8.0
diff --git a/Packs/Slack/Integrations/Slack_IAM/Slack_IAM.yml b/Packs/Slack/Integrations/Slack_IAM/Slack_IAM.yml
index 54987581b023..f89a74d73681 100644
--- a/Packs/Slack/Integrations/Slack_IAM/Slack_IAM.yml
+++ b/Packs/Slack/Integrations/Slack_IAM/Slack_IAM.yml
@@ -356,7 +356,7 @@ script:
- contextPath: UpdateGroup.errorMessage
description: Reason why the API failed.
type: String
- dockerimage: demisto/python3:3.10.14.97608
+ dockerimage: demisto/python3:3.11.10.113941
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/Slack/ReleaseNotes/3_5_1.md b/Packs/Slack/ReleaseNotes/3_5_1.md
new file mode 100644
index 000000000000..2e99a47d4b7a
--- /dev/null
+++ b/Packs/Slack/ReleaseNotes/3_5_1.md
@@ -0,0 +1,30 @@
+
+#### Integrations
+
+##### Slack Event Collector
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### Slack IAM
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+
+#### Scripts
+
+##### SlackBlockBuilder
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### GetSlackBlockBuilderResponse
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### SlackAsk
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### SlackAskV2
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/Slack/ReleaseNotes/3_5_2.md b/Packs/Slack/ReleaseNotes/3_5_2.md
new file mode 100644
index 000000000000..5fb3f47fbadf
--- /dev/null
+++ b/Packs/Slack/ReleaseNotes/3_5_2.md
@@ -0,0 +1,26 @@
+
+#### Integrations
+
+##### Slack Event Collector
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+
+#### Scripts
+
+##### SlackAskV2
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+##### GetSlackBlockBuilderResponse
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+##### SlackBlockBuilder
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+##### SlackAsk
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
diff --git a/Packs/Slack/ReleaseNotes/3_5_3.md b/Packs/Slack/ReleaseNotes/3_5_3.md
new file mode 100644
index 000000000000..0b1151af8ac7
--- /dev/null
+++ b/Packs/Slack/ReleaseNotes/3_5_3.md
@@ -0,0 +1,5 @@
+#### Scripts
+
+##### SlackBlockBuilder
+
+Documentation and metadata improvements.
diff --git a/Packs/Slack/Scripts/GetSlackBlockBuilderResponse/GetSlackBlockBuilderResponse.yml b/Packs/Slack/Scripts/GetSlackBlockBuilderResponse/GetSlackBlockBuilderResponse.yml
index 4a752c871f25..a04a3340dbef 100644
--- a/Packs/Slack/Scripts/GetSlackBlockBuilderResponse/GetSlackBlockBuilderResponse.yml
+++ b/Packs/Slack/Scripts/GetSlackBlockBuilderResponse/GetSlackBlockBuilderResponse.yml
@@ -14,7 +14,7 @@ outputs:
description: The state of the response from the user will be stored under this context path.
scripttarget: 0
subtype: python3
-dockerimage: demisto/python3:3.10.14.97608
+dockerimage: demisto/python3:3.11.10.115186
runas: DBotWeakRole
dependson:
must:
diff --git a/Packs/Slack/Scripts/SlackAsk/README.md b/Packs/Slack/Scripts/SlackAsk/README.md
index 0f94b810a4d3..2061d16665a2 100644
--- a/Packs/Slack/Scripts/SlackAsk/README.md
+++ b/Packs/Slack/Scripts/SlackAsk/README.md
@@ -88,7 +88,7 @@ reflect the answer back to Cortex XSOAR.
The automation is most useful in a playbook to determine the outcome of a conditional task - which will be one of the provided options.
It uses a mechanism that allows external users to respond in Cortex XSOAR(per investigation) with entitlement strings embedded within the message contents.
-
+
The automation can utilize the interactive capabilities of Slack to send a form with buttons -
this requires the external endpoint for interactive responses to be available for connection (See the Slack v2 integration documentation for more information).
diff --git a/Packs/Slack/Scripts/SlackAsk/SlackAsk.yml b/Packs/Slack/Scripts/SlackAsk/SlackAsk.yml
index 3bcd201d7d96..7b900e1f9ba6 100644
--- a/Packs/Slack/Scripts/SlackAsk/SlackAsk.yml
+++ b/Packs/Slack/Scripts/SlackAsk/SlackAsk.yml
@@ -53,7 +53,7 @@ tags:
- slack
timeout: '0'
type: python
-dockerimage: demisto/python3:3.10.14.97608
+dockerimage: demisto/python3:3.11.10.115186
tests:
- no test - Untestable
dependson:
diff --git a/Packs/Slack/Scripts/SlackAskV2/README.md b/Packs/Slack/Scripts/SlackAskV2/README.md
index 34becec2c20d..066adc031b9b 100644
--- a/Packs/Slack/Scripts/SlackAskV2/README.md
+++ b/Packs/Slack/Scripts/SlackAskV2/README.md
@@ -3,6 +3,7 @@ Sends a message (question) to either user (in a direct message) or to a channel.
SlackAskV2 was added to support the release of SlackV3 and is only compatible with SlackV3.
## Script Data
+
---
| **Name** | **Description** |
@@ -12,18 +13,22 @@ SlackAskV2 was added to support the release of SlackV3 is only compatible with S
| Demisto Version | 5.5.0 |
## Use Case
+
---
This automation allows you to ask users in Slack (including external to Cortex XSOAR) questions, have them respond and
reflect the answer back to Cortex XSOAR.
## Dependencies
+
---
Requires an instance of the SlackV3 integration.
This script uses the following commands and scripts.
+
* send-notification
## Inputs
+
---
| **Argument Name** | **Description** |
@@ -45,20 +50,24 @@ This script uses the following commands and scripts.
| slackVersion | The version of Slack to use. SlackV3 is configured by default. |
## Outputs
+
---
There are no outputs for this script.
## Guide
+
---
The automation is most useful in a playbook to determine the outcome of a conditional task - which will be one of the provided options.
It uses a mechanism that allows external users to respond in Cortex XSOAR (per investigation) with entitlement strings embedded within the message contents.
-![SlackAsk](https://user-images.githubusercontent.com/35098543/66044107-7de39f00-e529-11e9-8099-049502b4d62f.png)
+![SlackAsk](../../doc_files/66044107-7de39f00-e529-11e9-8099-049502b4d62f.png)
The automation can utilize the interactive capabilities of Slack to send a form with buttons -
this requires the external endpoint for interactive responses to be available for connection (See the SlackV3 integration documentation for more information).
You can also utilize threads instead, simply by specifying the `responseType` argument.
## Notes
+
---
-- When using the `replyEntriesTag` argument, the `persistent` argument must be set to `True`.
-- `SlackAskV2` will not work when run in the playbook debugger. This is because the debugger does not generate entitlements, since they must be tied to an investigation. Entitlements are needed to track the response.
+
+* When using the `replyEntriesTag` argument, the `persistent` argument must be set to `True`.
+* `SlackAskV2` will not work when run in the playbook debugger. This is because the debugger does not generate entitlements, since they must be tied to an investigation. Entitlements are needed to track the response.
diff --git a/Packs/Slack/Scripts/SlackAskV2/SlackAskV2.yml b/Packs/Slack/Scripts/SlackAskV2/SlackAskV2.yml
index fed7f24a4a0d..e6a73eb79d31 100644
--- a/Packs/Slack/Scripts/SlackAskV2/SlackAskV2.yml
+++ b/Packs/Slack/Scripts/SlackAskV2/SlackAskV2.yml
@@ -61,7 +61,7 @@ tags:
- slack
timeout: '0'
type: python
-dockerimage: demisto/python3:3.10.14.92207
+dockerimage: demisto/python3:3.11.10.115186
tests:
- no test - Untestable
dependson:
diff --git a/Packs/Slack/Scripts/SlackBlockBuilder/README.md b/Packs/Slack/Scripts/SlackBlockBuilder/README.md
index 6dfa44a3eeec..7b2e41da37ea 100644
--- a/Packs/Slack/Scripts/SlackBlockBuilder/README.md
+++ b/Packs/Slack/Scripts/SlackBlockBuilder/README.md
@@ -143,3 +143,8 @@ Verify your Slack blocks payload is valid. Try simplifying the payload. Test wit
## 3. Integrate the `GetSlackBlockBuilderResponse` Script
- After receiving the response and closing the conditional task, initiate a new task that runs the `ParseSlackResponse` script.
+
+---
+**Issue**: Running the script using the playbook debugger results in an error: "invalid_blocks_format".
+
+**Troubleshooting**: `SlackBlockBuilder` will not work when run in the playbook debugger. This is because the debugger does not generate entitlements, since they must be tied to an investigation, and entitlements are needed to track the response. The workaround is to run the playbook from an existing incident.
diff --git a/Packs/Slack/Scripts/SlackBlockBuilder/SlackBlockBuilder.yml b/Packs/Slack/Scripts/SlackBlockBuilder/SlackBlockBuilder.yml
index 158124111854..6b5760e5a14e 100644
--- a/Packs/Slack/Scripts/SlackBlockBuilder/SlackBlockBuilder.yml
+++ b/Packs/Slack/Scripts/SlackBlockBuilder/SlackBlockBuilder.yml
@@ -67,7 +67,7 @@ outputs:
description: The app ID retrieved from the message notification.
scripttarget: 0
subtype: python3
-dockerimage: demisto/python3:3.10.13.90168
+dockerimage: demisto/python3:3.11.10.115186
runas: DBotWeakRole
dependson:
must:
diff --git a/Packs/Slack/pack_metadata.json b/Packs/Slack/pack_metadata.json
index 501300032409..9c3f9552f966 100644
--- a/Packs/Slack/pack_metadata.json
+++ b/Packs/Slack/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Slack",
"description": "Interact with Slack API - collect logs, send messages and notifications to your Slack team.",
"support": "xsoar",
- "currentVersion": "3.5.0",
+ "currentVersion": "3.5.3",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/SplunkCIMFields/ReleaseNotes/1_1_1.md b/Packs/SplunkCIMFields/ReleaseNotes/1_1_1.md
new file mode 100644
index 000000000000..bdde5a25577b
--- /dev/null
+++ b/Packs/SplunkCIMFields/ReleaseNotes/1_1_1.md
@@ -0,0 +1,15 @@
+
+#### Scripts
+
+##### SplunkCIMFields
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+##### Splunk_ShortID
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
diff --git a/Packs/SplunkCIMFields/Scripts/SplunkCIMFields/SplunkCIMFields.yml b/Packs/SplunkCIMFields/Scripts/SplunkCIMFields/SplunkCIMFields.yml
index cd5b6badd2fa..f5260492d86a 100644
--- a/Packs/SplunkCIMFields/Scripts/SplunkCIMFields/SplunkCIMFields.yml
+++ b/Packs/SplunkCIMFields/Scripts/SplunkCIMFields/SplunkCIMFields.yml
@@ -12,7 +12,7 @@ commonfields:
contentitemexportablefields:
contentitemfields:
fromServerVersion: ""
-dockerimage: demisto/python3:3.10.13.75921
+dockerimage: demisto/python3:3.11.10.116439
enabled: true
name: SplunkCIMFields
runas: DBotWeakRole
diff --git a/Packs/SplunkCIMFields/Scripts/SplunkShortID/SplunkShortID.yml b/Packs/SplunkCIMFields/Scripts/SplunkShortID/SplunkShortID.yml
index a88200b3e42a..2b8ae659d8a8 100644
--- a/Packs/SplunkCIMFields/Scripts/SplunkShortID/SplunkShortID.yml
+++ b/Packs/SplunkCIMFields/Scripts/SplunkShortID/SplunkShortID.yml
@@ -14,7 +14,7 @@ dependson:
- SplunkPy|||splunk-search
should:
- SplunkPy|||splunk-search
-dockerimage: demisto/python3:3.10.13.75921
+dockerimage: demisto/python3:3.11.10.116439
enabled: true
name: Splunk_ShortID
runas: DBotWeakRole
diff --git a/Packs/SplunkCIMFields/pack_metadata.json b/Packs/SplunkCIMFields/pack_metadata.json
index c672bce020aa..a03df52b2356 100644
--- a/Packs/SplunkCIMFields/pack_metadata.json
+++ b/Packs/SplunkCIMFields/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "SplunkScripts",
"description": "Splunk helper scripts.",
"support": "community",
- "currentVersion": "1.1.0",
+ "currentVersion": "1.1.1",
"author": "Farrukh Ahmed",
"url": "",
"email": "",
diff --git a/Packs/SplunkPy/Integrations/SplunkPy/README.md b/Packs/SplunkPy/Integrations/SplunkPy/README.md
index 7aecd7f0a85c..44ae30b2acf7 100644
--- a/Packs/SplunkPy/Integrations/SplunkPy/README.md
+++ b/Packs/SplunkPy/Integrations/SplunkPy/README.md
@@ -104,24 +104,24 @@ You can use Splunk to define a user lookup table and then configure the SplunkPy
1. Define the lookup table in Splunk.
1. Under **App: Lookup Editor**, select **Lookup Editor**.
- ![image](https://raw.githubusercontent.com/demisto/content-docs/091b2154a54c6208428b84f2969836c71a36bafe/docs/doc_imgs/integrations/splunk-lookup-editor.png)
+ ![image](../../doc_files/splunk-lookup-editor.png)
2. Select **Create a New Lookup** > **KV Store lookup**.
- ![image](https://raw.githubusercontent.com/demisto/content-docs/5d64d0aa825b56327759c5c6ec1b81b1d9dcf493/docs/doc_imgs/integrations/kv-store-lookup.png)
+ ![image](../../doc_files/kv-store-lookup.png)
3. Enter the **Name** for the table. For example, **splunk_xsoar_users** is the default lookup table name defined in the SplunkPy integration settings.
4. Under **App**, select **Enterprise Security**.
5. Assign two **Key-value collection schema** fields, one for the Cortex XSOAR usernames and one for the corresponding Splunk usernames. For example, **xsoar_user** and **splunk_user** are the default field values defined in the SplunkPy integration settings.
6. Click **Create Lookup**.
- ![image](https://raw.githubusercontent.com/demisto/content-docs/ab6faddf6146cab19d13a8f3211b49ba70b166a7/docs/doc_imgs/integrations/new-lookup-table.png)
+ ![image](../../doc_files/new-lookup-table.png)
**Note:**
If the user keys are defined already in another table, you can use that table name and relevant key names in the SplunkPy integration settings.
7. Add values to the table to map Cortex XSOAR users to the Splunk users.
- ![image](https://raw.githubusercontent.com/demisto/content-docs/79c67e22ea19619f3278b2956eb41375c4d77f3f/docs/doc_imgs/integrations/add-values-to-lookup-table.png)
+ ![image](../../doc_files/add-values-to-lookup-table.png)
2. Configure the Splunk integration instance.
1. Under **Settings** > **Integrations**, search for the SplunkPy integration and create an instance.
@@ -131,7 +131,7 @@ Define the lookup table in Splunk.
3. Set the **XSOAR user key** to the field defined in the Splunk lookup table. By default it is **xsoar_user**.
4. Set the **SPLUNK user key** to the field defined in the Splunk lookup table. By default it is **splunk_user**.
- ![image](https://raw.githubusercontent.com/demisto/content-docs/9a41b8c19df9d5fd8120471bfde111de31caf033/docs/doc_imgs/integrations/user-mapping-settings-configuration.png)
+ ![image](../../doc_files/user-mapping-settings-configuration.png)
#### Troubleshooting enrichment status
@@ -393,7 +393,7 @@ There is no context output for this command.
##### Human Readable Output
-![image](https://user-images.githubusercontent.com/50324325/63268589-2fda4b00-c29d-11e9-95b5-4b9fcf6c08ee.png)
+![image](../../doc_files/63268589-2fda4b00-c29d-11e9-95b5-4b9fcf6c08ee.png)
### splunk-get-indexes
@@ -416,7 +416,7 @@ There is no context output for this command.
##### Human Readable Output
-![image](https://user-images.githubusercontent.com/50324325/63268447-d8d47600-c29c-11e9-88a4-5003971a492e.png)
+![image](../../doc_files/63268447-d8d47600-c29c-11e9-88a4-5003971a492e.png)
### splunk-notable-event-edit
@@ -447,7 +447,7 @@ There is no context output for this command.
``` !splunk-notable-event-edit eventIDs=66D21DF4-F4FD-4886-A986-82E72ADCBFE9@@notable@@a045b8acc3ec93c2c74a2b18c2caabf4 comment="Demisto"```
##### Human Readable Output
-![image](https://user-images.githubusercontent.com/50324325/63522203-914e2400-c500-11e9-949a-0b55eb2c5871.png)
+![image](../../doc_files/63522203-914e2400-c500-11e9-949a-0b55eb2c5871.png)
### splunk-job-create
@@ -484,7 +484,7 @@ Creates a new search job in Splunk.
}
```
##### Human Readable Output
-![image](https://user-images.githubusercontent.com/50324325/63269769-75981300-c29f-11e9-950a-6ca77bcf564c.png)
+![image](../../doc_files/63269769-75981300-c29f-11e9-950a-6ca77bcf564c.png)
### splunk-parse-raw
@@ -516,7 +516,8 @@ Parses the raw part of the event.
### splunk-submit-event-hec
***
-Sends events to an HTTP event collector using the Splunk platform JSON event protocol.
+Sends events to Splunk. If the `batch_event_data` or `entry_id` argument is provided, all arguments related to a single event are ignored.
+
##### Base Command
`splunk-submit-event-hec`
@@ -524,14 +525,23 @@ Sends events to an HTTP event collector using the Splunk platform JSON event pro
| **Argument Name** | **Description** | **Required** |
| --- | --- | --- |
-| event | The event payload key-value pair. An example string: "event": "Access log test message.". | Required |
+| event | The event payload key-value pair. An example string: "event": "Access log test message.". | Optional |
| fields | Fields for indexing that do not occur in the event payload itself. Accepts multiple, comma-separated, fields. | Optional |
| index | The index name. | Optional |
| host | The hostname. | Optional |
| source_type | The user-defined event source type. | Optional |
| source | The user-defined event source. | Optional |
| time | The epoch-formatted time. | Optional |
-| request_channel | A channel identifier (ID) where to send the request, must be a Globally Unique Identifier (GUID). **If the indexer acknowledgment is turned on, a channel is required.** | Optional |
+| batch_event_data | A batch of events to send to Splunk. For example, `{"event": "something happened at 14/10/2024 12:29", "fields": {"severity": "INFO", "category": "test2, test2"}, "index": "index0","sourcetype": "sourcetype0","source": "/example/something" } {"event": "something happened at 14/10/2024 13:29", "index": "index1", "sourcetype": "sourcetype1","source": "/example/something", "fields":{ "fields" : "severity: INFO, category: test2, test2"}}`. **If provided, the arguments related to a single event and the `entry_id` argument are ignored.** | Optional |
+| entry_id | The entry id in Cortex XSOAR of the file containing a batch of events. Content of the file should be valid batch event's data, as it would be provided to the `batch_event_data`. **If provided, the arguments related to a single event are ignored.** | Optional |
+
+##### Batched events description
+This command allows sending events to Splunk, either as a single event or a batch of multiple events.
+To send a single event, use the `event`, `fields`, `host`, `index`, `source`, `source_type`, and `time` arguments.
+To send a batch of events, either use the `batch_event_data` argument or the `entry_id` argument (for a file uploaded to Cortex XSOAR).
+Batch format requirements: The batch must be a single string containing valid dictionaries, each representing an event. Events should not be separated by commas. Each dictionary should include all necessary fields for an event. For example: `{"event": "event occurred at 14/10/2024 12:29", "fields": {"severity": "INFO", "category": "test1"}, "index": "index0", "sourcetype": "sourcetype0", "source": "/path/event1"} {"event": "event occurred at 14/10/2024 13:29", "index": "index1", "sourcetype": "sourcetype1", "source": "/path/event2", "fields": {"severity": "INFO", "category": "test2"}}`.
+This formatted string can be passed directly via `batch_event_data`, or, if saved in a file, the file can be uploaded to Cortex XSOAR, and the `entry_id` (e.g., ${File.[4].EntryID}) should be provided.
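+
+For illustration, a hypothetical invocation that sends a batch of two events (all argument values below are examples only):
+``` !splunk-submit-event-hec batch_event_data={"event": "demo event 1", "index": "main", "sourcetype": "demo", "source": "/tmp/demo"} {"event": "demo event 2", "index": "main", "sourcetype": "demo", "source": "/tmp/demo"} ```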
##### Context Output
@@ -573,7 +583,7 @@ Splank.JobStatus = {
```
##### Human Readable Output
-![image](https://user-images.githubusercontent.com/50324325/77630707-2b24f600-6f54-11ea-94fe-4bf6c734aa29.png)
+![image](../../doc_files/77630707-2b24f600-6f54-11ea-94fe-4bf6c734aa29.png)
### get-mapping-fields
***
diff --git a/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy.py b/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy.py
index 56c757f2b9d3..310a66f983af 100644
--- a/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy.py
+++ b/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy.py
@@ -15,6 +15,7 @@
from splunklib.binding import AuthenticationError, HTTPError, namespace
+INTEGRATION_LOG = "Splunk- "
OUTPUT_MODE_JSON = 'json' # type of response from splunk-sdk query (json/csv/xml)
# Define utf8 as default encoding
params = demisto.params()
@@ -1022,6 +1023,29 @@ def get_drilldown_timeframe(notable_data, raw) -> tuple[str, str]:
return earliest_offset, latest_offset
+def escape_invalid_chars_in_drilldown_json(drilldown_search):
+ """ Goes over the drilldown search, and replace the unescaped or invalid chars.
+
+ Args:
+ drilldown_search (str): The drilldown search.
+
+ Returns:
+ str: The escaped drilldown search.
+ """
+    # Escape quotes in substrings of the form 'some_key="value"', since unescaped " chars are invalid inside a JSON value.
+ for unescaped_val in re.findall(r'(?<==)\"[^\"]*\"', drilldown_search):
+ escaped_val = unescaped_val.replace('"', '\\"')
+ drilldown_search = drilldown_search.replace(unescaped_val, escaped_val)
+
+    # Replace newlines (\n) inside an IN (...) condition with ','.
+    # Splunk replaces the value of some multiline fields with a value containing \n
+    # due to the 'expandtoken' macro.
+ for multiline_val in re.findall(r'(?<=in|IN)\s*\([^\)]*\n[^\)]*\)', drilldown_search):
+ csv_val = multiline_val.replace('\n', ',')
+ drilldown_search = drilldown_search.replace(multiline_val, csv_val)
+ return drilldown_search
+
+
def parse_drilldown_searches(drilldown_searches: list) -> list[dict]:
""" Goes over the drilldown searches list, parses each drilldown search and converts it to a python dictionary.
@@ -1037,6 +1061,7 @@ def parse_drilldown_searches(drilldown_searches: list) -> list[dict]:
for drilldown_search in drilldown_searches:
try:
# drilldown_search may be a json list/dict represented as string
+ drilldown_search = escape_invalid_chars_in_drilldown_json(drilldown_search)
search = json.loads(drilldown_search)
if isinstance(search, list):
searches.extend(search)
@@ -2591,6 +2616,80 @@ def splunk_submit_event_command(service: client.Service, args: dict):
return_results(f'Event was created in Splunk index: {r.name}')
+def validate_indexes(indexes, service):
+ """Validates that all provided Splunk indexes exist within the Splunk service instance."""
+    real_indexes_names = {real_index.name for real_index in service.indexes}
+    return set(indexes).issubset(real_indexes_names)
+
+
+def get_events_from_file(entry_id):
+ """
+    Retrieves event data from a file in Demisto, identified by its entry ID, and returns it as a string.
+
+    Args:
+        entry_id (str): The entry ID corresponding to the file containing event data.
+
+ Returns:
+ str: The content of the file as a string.
+ """
+ get_file_path_res = demisto.getFilePath(entry_id)
+ file_path = get_file_path_res["path"]
+ with open(file_path, encoding='utf-8') as file_data:
+ return file_data.read()
+
+
+def parse_fields(fields):
+ """
+ Parses the `fields` input into a dictionary.
+
+ - If `fields` is a valid JSON string, it is converted into the corresponding dictionary.
+ - If `fields` is not valid JSON, it is wrapped as a dictionary with a single key-value pair,
+ where the key is `"fields"` and the value is the original `fields` string.
+
+ Examples:
+ 1. Input: '{"severity": "INFO", "category": "test2, test2"}'
+ Output: {"severity": "INFO", "category": "test2, test2"}
+
+ 2. Input: 'severity: INFO, category: test2, test2'
+ Output: {"fields": "severity: INFO, category: test2, test2"}
+ """
+ if fields:
+ try:
+ parsed_fields = json.loads(fields)
+ except Exception:
+ demisto.debug('Fields provided are not valid JSON; treating as a single field')
+ parsed_fields = {'fields': fields}
+ return parsed_fields
+ return None
+
+
+def convert_to_json_for_validation(events: str | dict):
+ """Converts a batch of events to a valid JSON format for two validation purposes:
+    - Ensure the batch of events is in the correct format expected by the Splunk API (which is not itself valid JSON).
+    - Enable extraction of the indexes from the string so their existence in the Splunk instance can be validated.
+ Args:
+ events (str): The batch of events to be formatted as JSON.
+ Raises:
+ DemistoException: Raised if the input does not match the format expected by the Splunk API.
+ Returns:
+ list: A list of JSON objects derived from the input events.
+ """
+ try:
+ events_str = str(events)
+
+ events_str = events_str.replace("'", '"')
+ rgx = re.compile(r"}[\s]*{")
+ valid_json_events = rgx.sub("},{", events_str)
+ valid_json_events = json.loads(f"[{valid_json_events}]")
+ return valid_json_events
+ except Exception as e:
+ raise DemistoException(f'{str(e)}\nMake sure that the events are in the correct format.')
+
+
def splunk_submit_event_hec(
hec_token: str | None,
baseurl: str,
@@ -2601,27 +2700,39 @@ def splunk_submit_event_hec(
source_type: str | None,
source: str | None,
time_: str | None,
- request_channel: str | None
+ request_channel: str | None,
+ batch_event_data: str | None,
+    entry_id: str | None,
+ service
):
if hec_token is None:
raise Exception('The HEC Token was not provided')
- parsed_fields = None
- if fields:
- try:
- parsed_fields = json.loads(fields)
- except Exception:
- parsed_fields = {'fields': fields}
+ if batch_event_data:
+ events = batch_event_data
- args = assign_params(
- event=event,
- host=host,
- fields=parsed_fields,
- index=index,
- sourcetype=source_type,
- source=source,
- time=time_
- )
+ elif entry_id:
+ demisto.debug(f'{INTEGRATION_LOG} - loading events data from file with {entry_id=}')
+ events = get_events_from_file(entry_id)
+
+ else:
+ parsed_fields = parse_fields(fields)
+
+ events = assign_params(
+ event=event,
+ host=host,
+ fields=parsed_fields,
+ index=index,
+ sourcetype=source_type,
+ source=source,
+ time=time_
+ )
+    valid_json_events = convert_to_json_for_validation(events)  # used to extract indexes and to count events for logging
+
+ indexes = [d.get('index') for d in valid_json_events if d.get('index')]
+
+ if not validate_indexes(indexes, service):
+        raise DemistoException('Index name does not exist in your Splunk instance')
headers = {
'Authorization': f'Splunk {hec_token}',
@@ -2630,15 +2741,23 @@ def splunk_submit_event_hec(
if request_channel:
headers['X-Splunk-Request-Channel'] = request_channel
+    if entry_id or batch_event_data:
+        data = events
+    else:
+        data = json.dumps(events)
+
+    demisto.debug(f'{INTEGRATION_LOG} sending {len(valid_json_events)} events')
+
return requests.post(
f'{baseurl}/services/collector/event',
- data=json.dumps(args),
+ data=data,
headers=headers,
verify=VERIFY_CERTIFICATE,
)
-def splunk_submit_event_hec_command(params: dict, args: dict):
+def splunk_submit_event_hec_command(params: dict, service, args: dict):
hec_token = params.get('cred_hec_token', {}).get('password') or params.get('hec_token')
baseurl = params.get('hec_url')
if baseurl is None:
@@ -2652,18 +2771,25 @@ def splunk_submit_event_hec_command(params: dict, args: dict):
source = args.get('source')
time_ = args.get('time')
request_channel = args.get('request_channel')
+ batch_event_data = args.get('batch_event_data')
+ entry_id = args.get('entry_id')
+
+ if not event and not batch_event_data and not entry_id:
+ raise DemistoException("Invalid input: Please specify one of the following arguments: `event`, "
+ "`batch_event_data`, or `entry_id`.")
response_info = splunk_submit_event_hec(hec_token, baseurl, event, fields, host, index, source_type, source, time_,
- request_channel)
+ request_channel, batch_event_data, entry_id, service)
if 'Success' not in response_info.text:
return_error(f"Could not send event to Splunk {response_info.text}")
else:
- response_dict = json.loads(response_info.text)
+        response_dict = json.loads(response_info.text)
if response_dict and 'ackId' in response_dict:
- return_results(f"The event was sent successfully to Splunk. AckID: {response_dict['ackId']}")
+ return_results(f"The events were sent successfully to Splunk. AckID: {response_dict['ackId']}")
else:
- return_results('The event was sent successfully to Splunk.')
+ return_results('The events were sent successfully to Splunk.')
def splunk_edit_notable_event_command(base_url: str, token: str, auth_token: str | None, args: dict) -> None:
@@ -3033,7 +3159,7 @@ def get_connection_args(params: dict) -> dict:
"""
app = params.get('app', '-')
return {
- 'host': params['host'],
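+        # Strip a scheme prefix and trailing slash; splunklib connections expect a bare hostname here.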
+ 'host': params['host'].replace('https://', '').rstrip('/'),
'port': params['port'],
'app': app or "-",
'verify': VERIFY_CERTIFICATE,
@@ -3127,7 +3253,7 @@ def main(): # pragma: no cover
token = get_auth_session_key(service)
splunk_edit_notable_event_command(base_url, token, auth_token, args)
elif command == 'splunk-submit-event-hec':
- splunk_submit_event_hec_command(params, args)
+ splunk_submit_event_hec_command(params, service, args)
elif command == 'splunk-job-status':
return_results(splunk_job_status(service, args))
elif command.startswith('splunk-kv-') and service is not None:
diff --git a/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy.yml b/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy.yml
index bed6189d8e51..211c0f7e4689 100644
--- a/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy.yml
+++ b/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy.yml
@@ -454,7 +454,7 @@ script:
Event payload key-value pair.
String example: "event": "Access log test message".
name: event
- required: true
+ required: false
- description: Fields for indexing that do not occur in the event payload itself. Accepts multiple, comma-separated, fields.
name: fields
- description: The index name.
@@ -469,6 +469,10 @@ script:
name: time
- description: A channel identifier (ID) where to send the request, must be a Globally Unique Identifier (GUID). If the indexer acknowledgment is turned on, a channel is required.
name: request_channel
+  - description: 'A batch of events to send to Splunk. For example, `{"event": "something happened at 14/10/2024 12:29", "fields": {"severity": "INFO", "category": "test2, test2"}, "index": "index0","sourcetype": "sourcetype0","source": "/example/something" } {"event": "something happened at 14/10/2024 13:29", "index": "index1", "sourcetype": "sourcetype1","source": "/example/something", "fields":{ "fields" : "severity: INFO, category: test2, test2"}}`. If provided, all arguments except `request_channel` are ignored.'
+ name: batch_event_data
+ - description: The entry ID in Cortex XSOAR of the file containing a batch of events. If provided, the arguments related to a single event are ignored.
+ name: entry_id
description: Sends events to an HTTP Event Collector using the Splunk platform JSON event protocol.
name: splunk-submit-event-hec
- arguments:
@@ -673,7 +677,7 @@ script:
- contextPath: Splunk.UserMapping.SplunkUser
description: Splunk user mapping.
type: String
- dockerimage: demisto/splunksdk-py3:1.0.0.108075
+ dockerimage: demisto/splunksdk-py3:1.0.0.115556
isfetch: true
ismappable: true
isremotesyncin: true
diff --git a/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy_description.md b/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy_description.md
index 40602510948b..0a6046a5a550 100644
--- a/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy_description.md
+++ b/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy_description.md
@@ -140,33 +140,33 @@ Palo Alto recommends that you configure Splunk to produce basic alerts that the
1. Create a summary index in Splunk. For more information, click [here](https://docs.splunk.com/Documentation/Splunk/7.3.0/Indexer/Setupmultipleindexes#Create_events_indexes_2).
2. Build a query to return relevant alerts.
-![image](https://github.com/demisto/content/raw/master/Packs/SplunkPy/doc_files/build-query.png)
+![image](../../doc_files/build-query.png)
1. Identify the fields list from the Splunk query and save it to a local file.
-![image](https://github.com/demisto/content/raw/master/Packs/SplunkPy/doc_files/identify-fields-list.png)
+![image](../../doc_files/identify-fields-list.png)
1. Define a search macro to capture the fields list that you saved locally. For more information, click [here](https://docs.splunk.com/Documentation/Splunk/7.3.0/Knowledge/Definesearchmacros).
Use the following naming convention: (demisto_fields_{type}).
-![image](https://github.com/demisto/content/raw/master/Packs/SplunkPy/doc_files/micro-name.png)
-![image](https://github.com/demisto/content/raw/master/Packs/SplunkPy/doc_files/macro.png)
+![image](../../doc_files/micro-name.png)
+![image](../../doc_files/macro.png)
1. Define a scheduled search, the results of which are stored in the summary index. For more information about scheduling searches, click [here](https://docs.splunk.com/Documentation/Splunk/7.3.0/Knowledge/Definesearchmacros).
-![image](https://github.com/demisto/content/raw/master/Packs/SplunkPy/doc_files/scheduled-search.png)
+![image](../../doc_files/scheduled-search.png)
1. In the Summary indexing section, select the summary index, and enter the {key:value} pair for Cortex XSOAR classification.
-![image](https://github.com/demisto/content/raw/master/Packs/SplunkPy/doc_files/summary-index.png)
+![image](../../doc_files/summary-index.png)
1. Configure the incident type in Cortex XSOAR by navigating to __Settings > Advanced > Incident Types.__ Note: In the example, Splunk Generic is a custom incident type.
-![image](https://github.com/demisto/content/raw/master/Packs/SplunkPy/doc_files/incident_type.png)
+![image](../../doc_files/incident_type.png)
1. Configure the classification. Make sure that your non ES incident fields are associated with your custom incident type.
1. Navigate to __Settings > Integrations > Classification & Mapping__.
2. Click your classifier.
3. Select your instance.
4. Click the fetched data.
5. Drag the value to the appropriate incident type.
-![image](https://github.com/demisto/content/raw/master/Packs/SplunkPy/doc_files/classify.png)
+![image](../../doc_files/classify.png)
1. Configure the mapping. Make sure to map your non ES fields accordingly and make sure that these incident fields are associated with their custom incident type.
1. Navigate to __Settings > Integrations > Classification & Mapping__.
2. Click your mapper.
3. Select your instance.
4. Click the __Choose data path__ link for the field you want to map.
5. Click the data from the Splunk fields to map it to Cortex XSOAR.
-![image](https://github.com/demisto/content/raw/master/Packs/SplunkPy/doc_files/mapping.png)
+![image](../../doc_files/mapping.png)
1. (Optional) Create custom fields.
2. Build a playbook and assign it as the default for this incident type.
diff --git a/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy_test.py b/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy_test.py
index ee8cc43a9c73..9b079ab5c74e 100644
--- a/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy_test.py
+++ b/Packs/SplunkPy/Integrations/SplunkPy/SplunkPy_test.py
@@ -11,6 +11,8 @@
from splunklib import results
import SplunkPy as splunk
from pytest_mock import MockerFixture
+from unittest.mock import MagicMock, patch
+
RETURN_ERROR_TARGET = 'SplunkPy.return_error'
@@ -358,7 +360,7 @@ def __init__(self, text):
mocker.patch.object(splunk, "splunk_submit_event_hec", return_value=MockRes(text))
return_error_mock = mocker.patch(RETURN_ERROR_TARGET)
- splunk.splunk_submit_event_hec_command(params={"hec_url": "mock_url"}, args={})
+ splunk.splunk_submit_event_hec_command(params={"hec_url": "mock_url"}, args={"entry_id": "some_entry"}, service=Service)
err_msg = return_error_mock.call_args[0][0]
assert err_msg == f"Could not send event to Splunk {text}"
@@ -390,13 +392,13 @@ def test_splunk_submit_event_hec_command_request_channel(mocker):
Then
- The return result object contains the correct message.
"""
- args = {"request_channel": "11111111-1111-1111-1111-111111111111"}
+ args = {"request_channel": "11111111-1111-1111-1111-111111111111", "entry_id": "some_entry"}
mocker.patch.object(splunk, "splunk_submit_event_hec", return_value=check_request_channel(args))
moc = mocker.patch.object(demisto, 'results')
splunk.splunk_submit_event_hec_command(params={"hec_url": "mock_url"},
- args=args)
+ args=args, service=Service)
readable_output = moc.call_args[0][0]
- assert readable_output == "The event was sent successfully to Splunk. AckID: 1"
+ assert readable_output == "The events were sent successfully to Splunk. AckID: 1"
def test_splunk_submit_event_hec_command_without_request_channel(mocker):
@@ -408,12 +410,12 @@ def test_splunk_submit_event_hec_command_without_request_channel(mocker):
Then
- The return result object contains the correct message.
"""
- args = {}
+ args = {"entry_id": "some_entry"}
mocker.patch.object(splunk, "splunk_submit_event_hec", return_value=check_request_channel(args))
return_error_mock = mocker.patch(RETURN_ERROR_TARGET)
splunk.splunk_submit_event_hec_command(params={"hec_url": "mock_url"},
- args=args)
+ args=args, service=Service)
err_msg = return_error_mock.call_args[0][0]
assert err_msg == 'Could not send event to Splunk {"text":"Data channel is missing","code":10}'
@@ -2534,6 +2536,28 @@ def test_empty_string_as_app_param_value(mocker):
assert connection_args.get('app') == '-'
+@pytest.mark.parametrize(argnames='host, expected_host', argvalues=[
+ ('8.8.8.8', '8.8.8.8'),
+ ('8.8.8.8/', '8.8.8.8'),
+ ('https://www.test.com', 'www.test.com'),
+ ('https://www.test.com/', 'www.test.com'),
+])
+def test_host_param(host, expected_host):
+ """
+ Given:
+ - Different host values
+ When:
+ - Run get_connection_args() function
+ Then:
+ - Ensure the host is as expected
+ """
+ params = {'host': host, 'port': '111'}
+
+ actual_host = splunk.get_connection_args(params)['host']
+
+ assert actual_host == expected_host
+
+
OWNER_MAPPING = [{'xsoar_user': 'test_xsoar', 'splunk_user': 'test_splunk', 'wait': True},
{'xsoar_user': 'test_not_full', 'splunk_user': '', 'wait': True},
{'xsoar_user': '', 'splunk_user': 'test_not_full', 'wait': True}, ]
@@ -2788,3 +2812,228 @@ def test_get_drilldown_searches(drilldown_data, expected):
"""
assert splunk.get_drilldown_searches(drilldown_data) == expected
+
+
+@pytest.mark.parametrize('drilldown_search, expected_res',
+ [('{"name":"test", "query":"|key="the value""}', 'key="the value"'),
+ ('{"name":"test", "query":"|key in (line_1\nline_2)"}', 'key in (line_1,line_2)'),
+ ('{"name":"test", "query":"search a=$a|s$ c=$c$ suffix"}', 'search a=$a|s$ c=$c$ suffix')])
+def test_escape_invalid_chars_in_drilldown_json(drilldown_search, expected_res):
+ """
+ Scenario: When extracting the drilldown search query, which is a JSON string,
+ we should escape unescaped JSON special characters.
+
+ Given:
+ - A raw search query with text like 'key="a value"'.
+ - A raw search query with text like 'key in (a\nb)', which should become 'key in (a,b)'.
+ - A raw search query that is a normal JSON string, which should not be changed by this function.
+
+ When:
+ - escape_invalid_chars_in_drilldown_json is called
+
+ Then:
+ - Return the expected result
+ """
+ import json
+
+ res = splunk.escape_invalid_chars_in_drilldown_json(drilldown_search)
+
+ assert expected_res in json.loads(res)['query']
+
+
+# Define minimal classes to simulate the service and index behavior
+class Index:
+ def __init__(self, name):
+ self.name = name
+
+
+class ServiceIndex:
+ def __init__(self, indexes):
+ self.indexes = [Index(name) for name in indexes]
+
+
+@pytest.mark.parametrize(
+ "given_indexes, service_indexes, expected",
+ [
+ # Test case: All indexes exist in the service
+ (["index1", "index2"], ["index1", "index2", "index3"], True),
+
+ # Test case: Some indexes do not exist in the service
+ (["index1", "index4"], ["index1", "index2", "index3"], False),
+
+ # Test case: Empty input indexes list
+ ([], ["index1", "index2", "index3"], True),
+ ]
+)
+def test_validate_indexes(given_indexes, service_indexes, expected):
+ """
+ Given: A list of indexes' names.
+ When: Calling validate_indexes function.
+ Then: The function returns `True` if all the given index names exist within the Splunk service instance;
+ otherwise, it returns `False`.
+ """
+ from SplunkPy import validate_indexes
+ service = ServiceIndex(service_indexes)
+ # Assert that the function returns the expected result
+ assert validate_indexes(given_indexes, service) == expected
+
+
+@pytest.mark.parametrize(
+ "fields, expected",
+ [
+ # Valid JSON input
+ ('{"key": "value"}', {"key": "value"}),
+
+ # Valid JSON with multiple key-value pairs
+ ('{"key1": "value1", "key2": 2}', {"key1": "value1", "key2": 2}),
+
+ # Invalid JSON input (non-JSON string)
+ ("not a json string", {"fields": "not a json string"}),
+
+ # Another invalid JSON input (partially structured JSON)
+ ("{'key': 'value'}", {"fields": "{'key': 'value'}"}),
+ ]
+)
+def test_parse_fields(fields, expected):
+ """
+ Given: A string representing fields, which may be a valid JSON string or a regular string.
+ When: The parse_fields function is called with the given string.
+ Then: If the string is valid JSON, the function returns a dictionary of the parsed fields. If the string is not valid JSON,
+ the function returns a dictionary with a single 'fields' key whose value is the entire input string.
+ """
+ from SplunkPy import parse_fields
+ result = parse_fields(fields)
+ assert result == expected
+
+
+@pytest.mark.parametrize(
+ "events, expected",
+ [
+ ("{'key1': 'value1'} {'key2': 'value2'}", [{"key1": "value1"}, {"key2": "value2"}]),
+ ("{'key1': 'value1'}", [{"key1": "value1"}]),
+ ({"key1": "value1", "key2": "value2"}, [{"key1": "value1", "key2": "value2"}]),
+ ({"key1": {"nestedKey": "nestedValue"}, "key2": "value2"}, [{"key1": {"nestedKey": "nestedValue"}, "key2": "value2"}]),
+ ]
+)
+def test_convert_to_json_for_validation_valid_inputs(events, expected):
+ """
+ Given: A string or dictionary representing valid JSON inputs, including single, multiple, and nested events.
+ When: Calling convert_to_json_for_validation.
+ Then: The function should return a list of dictionaries corresponding to the parsed events.
+ """
+ from SplunkPy import convert_to_json_for_validation
+ assert convert_to_json_for_validation(events) == expected
+
+
+@pytest.mark.parametrize(
+ "invalid_events",
+ [
+ "{key1: {'nestedKey': 'nestedValue'}}", # Missing double quotes on the outer key
+ "{'key1': {nestedKey: 'nestedValue'}}", # Missing double quotes on nested key
+ "{'key1': 'value1', 'key2': 'value2'", # Missing closing brace
+ "{'key1': 'value1', 'key2': 'value2'}, {'key3': 'value3'", # Missing closing brace on one event
+ "{'key1': 'value1' 'key2': 'value2'}", # Missing comma between key-value pairs
+ ]
+)
+def test_convert_to_json_for_validation_invalid_inputs(invalid_events):
+ """
+ Given: A string representing various invalid JSON formats (e.g., missing quotes, missing commas, unmatched braces).
+ When: Calling convert_to_json_for_validation.
+ Then: The function should raise a DemistoException due to invalid JSON format.
+ """
+ from SplunkPy import convert_to_json_for_validation
+ with pytest.raises(DemistoException, match=r"Make sure that the events are in the correct format"):
+ convert_to_json_for_validation(invalid_events)
+
+
+@pytest.mark.parametrize("event, batch_event_data, entry_id, expected_data", [
+ ("Somthing happened", None, None, '{"event": "Somthing happened", "fields": {"field1": "value1"}, "index": "main"}'),
+ (None, "{'event': 'some event', 'index': 'some index'} {'event': 'some event', 'index': 'some index'}", None,
+ "{'event': 'some event', 'index': 'some index'} {'event': 'some event', 'index': 'some index'}"), # Batch event data
+ (None, None, "some entry_id", "{'event': 'some event', 'index': 'some index'} {'event': 'some event', 'index': 'some index'}")
+])
+@patch("requests.post")
+@patch("SplunkPy.get_events_from_file") # Replace with the actual module
+@patch("SplunkPy.convert_to_json_for_validation")
+@patch("SplunkPy.validate_indexes")
+@patch("SplunkPy.parse_fields")
+def test_splunk_submit_event_hec(
+ mock_parse_fields,
+ mock_validate_indexes,
+ mock_convert_to_json_for_validation,
+ mock_get_events_from_file,
+ mock_post,
+ event,
+ batch_event_data,
+ entry_id,
+ expected_data
+):
+ """
+ Given: Different types of event submission (single event, batch event, entry_id).
+ When: Calling splunk_submit_event_hec.
+ Then: Ensure a POST request is sent with the correct data and headers.
+ """
+ from SplunkPy import splunk_submit_event_hec
+ # Arrange
+ hec_token = "valid_token"
+ baseurl = "https://splunk.example.com"
+ fields = '{"field1": "value1"}'
+ parsed_fields = {"field1": "value1"}
+
+ # Mocks
+ mock_parse_fields.return_value = parsed_fields
+ mock_validate_indexes.return_value = True
+
+ if event:
+ # Single event
+ mock_convert_to_json_for_validation.return_value = [{"event": event}]
+ elif batch_event_data:
+ # Batch event data
+ mock_convert_to_json_for_validation.return_value = [{'event': 'some event', 'index': 'some index'},
+ {'event': 'some event', 'index': 'some index'}]
+ elif entry_id:
+ # Entry ID
+ mock_get_events_from_file.return_value =\
+ "{'event': 'some event', 'index': 'some index'} {'event': 'some event', 'index': 'some index'}"
+ mock_convert_to_json_for_validation.return_value =\
+ [{'event': 'some event', 'index': 'some index'}, {'event': 'some event', 'index': 'some index'}]
+
+ # Act
+ splunk_submit_event_hec(
+ hec_token=hec_token,
+ baseurl=baseurl,
+ event=event,
+ fields=fields,
+ host=None,
+ index="main",
+ source_type=None,
+ source=None,
+ time_=None,
+ request_channel="test_channel",
+ batch_event_data=batch_event_data,
+ entry_id=entry_id,
+ service=MagicMock(),
+ )
+
+ mock_post.assert_called_once_with(
+ f"{baseurl}/services/collector/event",
+ data=expected_data,
+ headers={
+ "Authorization": f"Splunk {hec_token}",
+ "Content-Type": "application/json",
+ "X-Splunk-Request-Channel": "test_channel",
+ },
+ verify=True,
+ )
+
+
+def test_splunk_submit_event_hec_command_no_required_arguments():
+ """ Given: none of these arguments: 'entry_id', 'event', 'batch_event_data'
+ When: Runing splunk-submit-event-hec command
+ Then: An exception is thrown
+ """
+ from SplunkPy import splunk_submit_event_hec_command
+ with pytest.raises(DemistoException,
+ match=r"Invalid input: Please specify one of the following arguments: `event`, "
+ r"`batch_event_data`, or `entry_id`."):
+ splunk_submit_event_hec_command({'hec_url': 'hec_url'}, None, {})
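The tests above pin down the contract of `convert_to_json_for_validation`: a dict becomes a one-element list, a string of whitespace-separated (even single-quoted) objects becomes a list of dicts, and anything malformed raises `DemistoException`. One way to satisfy that contract, offered purely as an illustrative sketch (the quote normalization in particular is an assumption, not the pack's source):

```python
import json


class DemistoException(Exception):
    """Stand-in for the exception class provided by CommonServerPython."""


def convert_to_json_for_validation(events):
    """Normalize one dict, or a string of whitespace-separated JSON objects, into a list of dicts."""
    if isinstance(events, dict):
        return [events]
    try:
        # The tests accept single-quoted objects, so normalize quotes first (assumption).
        text = events.replace("'", '"').strip()
        decoder = json.JSONDecoder()
        parsed, pos = [], 0
        while pos < len(text):
            obj, end = decoder.raw_decode(text, pos)  # parse one object, report where it ends
            parsed.append(obj)
            pos = end
            while pos < len(text) and text[pos].isspace():
                pos += 1
        return parsed
    except ValueError as exc:
        raise DemistoException(f"Make sure that the events are in the correct format: {exc}")
```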
diff --git a/Packs/SplunkPy/ReleaseNotes/3_1_40.md b/Packs/SplunkPy/ReleaseNotes/3_1_40.md
new file mode 100644
index 000000000000..fea7c97b9113
--- /dev/null
+++ b/Packs/SplunkPy/ReleaseNotes/3_1_40.md
@@ -0,0 +1,23 @@
+
+#### Scripts
+
+##### SplunkShowDrilldown
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### SplunkAddComment
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### SplunkShowIdentity
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### SplunkShowAsset
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
+##### SplunkConvertCommentsToTable
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/SplunkPy/ReleaseNotes/3_1_41.md b/Packs/SplunkPy/ReleaseNotes/3_1_41.md
new file mode 100644
index 000000000000..522317c7d799
--- /dev/null
+++ b/Packs/SplunkPy/ReleaseNotes/3_1_41.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### SplunkPy
+
+- Fixed an issue where drilldown enrichment failed due to the use of JSON special characters in the query.
+- Updated the Docker image to: *demisto/splunksdk-py3:1.0.0.115556*.
diff --git a/Packs/SplunkPy/ReleaseNotes/3_1_42.md b/Packs/SplunkPy/ReleaseNotes/3_1_42.md
new file mode 100644
index 000000000000..2b3b6f02dbf1
--- /dev/null
+++ b/Packs/SplunkPy/ReleaseNotes/3_1_42.md
@@ -0,0 +1,6 @@
+
+#### Integrations
+
+##### SplunkPy
+
+- Documentation and metadata improvements.
diff --git a/Packs/SplunkPy/ReleaseNotes/3_1_43.md b/Packs/SplunkPy/ReleaseNotes/3_1_43.md
new file mode 100644
index 000000000000..9deb68673f98
--- /dev/null
+++ b/Packs/SplunkPy/ReleaseNotes/3_1_43.md
@@ -0,0 +1,6 @@
+
+#### Integrations
+
+##### SplunkPy
+
+Fixed an issue where the connection to the Splunk server failed when the *Server URL* was a full URL instead of the host.
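`test_host_param` in the diff above spells out the expected normalization: bare hosts pass through, trailing slashes are stripped, and a full URL is reduced to its host. A sketch of logic that would satisfy those cases (an illustration of the fix, not necessarily the shipped code):

```python
from urllib.parse import urlparse


def normalize_host(host: str) -> str:
    """Accept either a bare host or a full URL for the Server URL parameter."""
    host = host.strip().rstrip('/')      # '8.8.8.8/' -> '8.8.8.8'
    if '://' in host:
        return urlparse(host).netloc     # 'https://www.test.com' -> 'www.test.com'
    return host
```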
diff --git a/Packs/SplunkPy/ReleaseNotes/3_1_44.md b/Packs/SplunkPy/ReleaseNotes/3_1_44.md
new file mode 100644
index 000000000000..92262451cb06
--- /dev/null
+++ b/Packs/SplunkPy/ReleaseNotes/3_1_44.md
@@ -0,0 +1,15 @@
+
+#### Scripts
+
+##### SplunkConvertCommentsToTable
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+##### SplunkAddComment
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
+##### SplunkShowDrilldown
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
diff --git a/Packs/SplunkPy/ReleaseNotes/3_1_45.md b/Packs/SplunkPy/ReleaseNotes/3_1_45.md
new file mode 100644
index 000000000000..aa6fcf3ed868
--- /dev/null
+++ b/Packs/SplunkPy/ReleaseNotes/3_1_45.md
@@ -0,0 +1,3 @@
+#### Integrations
+##### SplunkPy
+- Added the *batch_event_data* and *entry_id* arguments to the **splunk-submit-event-hec** command.
\ No newline at end of file
diff --git a/Packs/SplunkPy/ReleaseNotes/3_1_46.md b/Packs/SplunkPy/ReleaseNotes/3_1_46.md
new file mode 100644
index 000000000000..14889e9aac68
--- /dev/null
+++ b/Packs/SplunkPy/ReleaseNotes/3_1_46.md
@@ -0,0 +1,5 @@
+#### Integrations
+
+##### SplunkPy
+
+- Documentation improvements.
\ No newline at end of file
diff --git a/Packs/SplunkPy/Scripts/SplunkAddComment/SplunkAddComment.yml b/Packs/SplunkPy/Scripts/SplunkAddComment/SplunkAddComment.yml
index ef7b5a32c8b9..a6d2ec2d7356 100644
--- a/Packs/SplunkPy/Scripts/SplunkAddComment/SplunkAddComment.yml
+++ b/Packs/SplunkPy/Scripts/SplunkAddComment/SplunkAddComment.yml
@@ -22,7 +22,7 @@ enabled: true
scripttarget: 0
subtype: python3
runonce: false
-dockerimage: demisto/python3:3.10.12.68714
+dockerimage: demisto/python3:3.11.10.115186
fromversion: 6.0.0
tests:
- No tests (auto formatted)
diff --git a/Packs/SplunkPy/Scripts/SplunkConvertCommentsToTable/SplunkConvertCommentsToTable.yml b/Packs/SplunkPy/Scripts/SplunkConvertCommentsToTable/SplunkConvertCommentsToTable.yml
index 0dbc3850e74c..d15b0701ddb4 100644
--- a/Packs/SplunkPy/Scripts/SplunkConvertCommentsToTable/SplunkConvertCommentsToTable.yml
+++ b/Packs/SplunkPy/Scripts/SplunkConvertCommentsToTable/SplunkConvertCommentsToTable.yml
@@ -12,7 +12,7 @@ timeout: '0'
type: python
subtype: python3
runas: DBotWeakRole
-dockerimage: demisto/python3:3.10.12.68714
+dockerimage: demisto/python3:3.11.10.115186
fromversion: 6.0.0
tests:
- No tests (auto formatted)
diff --git a/Packs/SplunkPy/Scripts/SplunkShowAsset/SplunkShowAsset.yml b/Packs/SplunkPy/Scripts/SplunkShowAsset/SplunkShowAsset.yml
index 5e236617a0a6..80fa6b48c416 100644
--- a/Packs/SplunkPy/Scripts/SplunkShowAsset/SplunkShowAsset.yml
+++ b/Packs/SplunkPy/Scripts/SplunkShowAsset/SplunkShowAsset.yml
@@ -4,7 +4,7 @@ commonfields:
contentitemexportablefields:
contentitemfields:
fromServerVersion: ""
-dockerimage: demisto/python3:3.10.13.89009
+dockerimage: demisto/python3:3.11.10.113941
enabled: true
name: SplunkShowAsset
runas: DBotWeakRole
diff --git a/Packs/SplunkPy/Scripts/SplunkShowDrilldown/SplunkShowDrilldown.yml b/Packs/SplunkPy/Scripts/SplunkShowDrilldown/SplunkShowDrilldown.yml
index 6a5a196c3f11..13e7a881b46a 100644
--- a/Packs/SplunkPy/Scripts/SplunkShowDrilldown/SplunkShowDrilldown.yml
+++ b/Packs/SplunkPy/Scripts/SplunkShowDrilldown/SplunkShowDrilldown.yml
@@ -13,7 +13,7 @@ type: python
contentitemexportablefields:
contentitemfields:
fromServerVersion: ''
-dockerimage: demisto/python3:3.10.14.95956
+dockerimage: demisto/python3:3.11.10.115186
runas: DBotWeakRole
tests:
- SplunkShowEnrichment
diff --git a/Packs/SplunkPy/Scripts/SplunkShowIdentity/SplunkShowIdentity.yml b/Packs/SplunkPy/Scripts/SplunkShowIdentity/SplunkShowIdentity.yml
index 42ecce982ea4..e80fe25891e2 100644
--- a/Packs/SplunkPy/Scripts/SplunkShowIdentity/SplunkShowIdentity.yml
+++ b/Packs/SplunkPy/Scripts/SplunkShowIdentity/SplunkShowIdentity.yml
@@ -4,7 +4,7 @@ commonfields:
contentitemexportablefields:
contentitemfields:
fromServerVersion: ""
-dockerimage: demisto/python3:3.10.13.89009
+dockerimage: demisto/python3:3.11.10.113941
enabled: true
name: SplunkShowIdentity
runas: DBotWeakRole
diff --git a/Packs/SplunkPy/pack_metadata.json b/Packs/SplunkPy/pack_metadata.json
index 57d07869996c..f2ad7d024f30 100644
--- a/Packs/SplunkPy/pack_metadata.json
+++ b/Packs/SplunkPy/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Splunk",
"description": "Run queries on Splunk servers.",
"support": "xsoar",
- "currentVersion": "3.1.39",
+ "currentVersion": "3.1.46",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/SplunkPyPreRelease/doc_files/classify.png b/Packs/SplunkPyPreRelease/doc_files/classify.png
deleted file mode 100644
index 671dc1f9bf4f..000000000000
--- a/Packs/SplunkPyPreRelease/doc_files/classify.png
+++ /dev/null
@@ -1,2179 +0,0 @@
- content-docs/docs/doc_imgs/integrations/classify.png at master · demisto/content-docs · GitHub
- You can’t perform that action at this time.
diff --git a/Packs/SplunkPyPreRelease/doc_files/incident_type.png b/Packs/SplunkPyPreRelease/doc_files/incident_type.png
deleted file mode 100644
index 5b416d044304..000000000000
--- a/Packs/SplunkPyPreRelease/doc_files/incident_type.png
+++ /dev/null
@@ -1,2179 +0,0 @@
- content-docs/docs/doc_imgs/integrations/incident_type.png at master · demisto/content-docs · GitHub
- You can’t perform that action at this time.
diff --git a/Packs/Stamus/Integrations/Stamus/Stamus.yml b/Packs/Stamus/Integrations/Stamus/Stamus.yml
index bdc08a43d44d..3334e9ea1389 100644
--- a/Packs/Stamus/Integrations/Stamus/Stamus.yml
+++ b/Packs/Stamus/Integrations/Stamus/Stamus.yml
@@ -188,7 +188,7 @@ script:
script: '-'
type: python
subtype: python3
- dockerimage: demisto/python3:3.10.13.72123
+ dockerimage: demisto/python3:3.11.10.116439
fromversion: 6.9.0
defaultmapperin: Stamus Networks incoming mapper
tests:
diff --git a/Packs/Stamus/ReleaseNotes/1_0_1.md b/Packs/Stamus/ReleaseNotes/1_0_1.md
new file mode 100644
index 000000000000..14157fe1da6e
--- /dev/null
+++ b/Packs/Stamus/ReleaseNotes/1_0_1.md
@@ -0,0 +1,10 @@
+
+#### Integrations
+
+##### Stamus
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+
diff --git a/Packs/Stamus/pack_metadata.json b/Packs/Stamus/pack_metadata.json
index f587e46c6333..52d07dedebbf 100644
--- a/Packs/Stamus/pack_metadata.json
+++ b/Packs/Stamus/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Stamus",
"description": "Stamus Security Platform",
"support": "partner",
- "currentVersion": "1.0.0",
+ "currentVersion": "1.0.1",
"author": "Stamus Networks",
"url": "https://www.stamus-networks.com/",
"email": "support@stamus-networks.com",
diff --git a/Packs/SumoLogic_Cloud_SIEM/Integrations/SumoLogicCloudSIEM/README.md b/Packs/SumoLogic_Cloud_SIEM/Integrations/SumoLogicCloudSIEM/README.md
index 3d2c36921714..0847031b09cd 100644
--- a/Packs/SumoLogic_Cloud_SIEM/Integrations/SumoLogicCloudSIEM/README.md
+++ b/Packs/SumoLogic_Cloud_SIEM/Integrations/SumoLogicCloudSIEM/README.md
@@ -86,7 +86,7 @@ The table below shows differences between this integration and the legacy JASK i
| jask-get-insight-comments | sumologic-sec-insight-get-comments | |
| jask-get-signal-details | sumologic-sec-signal-get-details | |
| jask-get-entity-details | sumologic-sec-entity-get-details | |
-| ~~jask-get-related-entities~~ | | Depreacted |
+| ~~jask-get-related-entities~~ | | Deprecated |
| ~~jask-get-whitelisted-entities~~ | | Deprecated - use command `sumologic-sec-entity-search` with filter `whitelisted:"true"` |
| jask-search-insights | sumologic-sec-insight-search | |
| jask-search-entities | sumologic-sec-entity-search | |
@@ -509,6 +509,7 @@ Change status of Insight
| insight_id | The insight to change status for. | Required |
| status | The desired Insight status. Possible values are: new, inprogress, closed. Default is in-progress. | Optional |
| resolution | Resolution for closing Insight. Valid values are: "Resolved", "False Positive", "No Action", "Duplicate". Possible values are: Resolved, False Positive, No Action, Duplicate. Default is Resolved. | Optional |
+| sub_resolution | Custom sub-resolution for closing an Insight. If populated, it overrides the resolution field. Make sure the value exactly matches one of your Sumo Logic resolutions. | Optional |
#### Context Output
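For example, a war-room invocation using the new argument might look like the following (the insight ID and sub-resolution value are illustrative, and the sub-resolution must already exist in your Sumo Logic tenant):

```
!sumologic-sec-insight-set-status insight_id=INSIGHT-116 status=closed sub_resolution="Resolved - Custom"
```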
diff --git a/Packs/SumoLogic_Cloud_SIEM/Integrations/SumoLogicCloudSIEM/SumoLogicCloudSIEM.py b/Packs/SumoLogic_Cloud_SIEM/Integrations/SumoLogicCloudSIEM/SumoLogicCloudSIEM.py
index e34c7dcb6e34..3f44927f64ca 100644
--- a/Packs/SumoLogic_Cloud_SIEM/Integrations/SumoLogicCloudSIEM/SumoLogicCloudSIEM.py
+++ b/Packs/SumoLogic_Cloud_SIEM/Integrations/SumoLogicCloudSIEM/SumoLogicCloudSIEM.py
@@ -4,7 +4,7 @@
"""
from datetime import datetime
-from typing import List, Any, Dict, Tuple, cast
+from typing import Any, cast
import traceback
@@ -55,13 +55,13 @@ def req(self, method, url_suffix, params=None, json_data=None, headers=None):
).get('data')
return r
- def set_extra_params(self, args: Dict[str, Any]) -> None:
+ def set_extra_params(self, args: dict[str, Any]) -> None:
'''
Set any extra params (in the form of a dictionary) for this client
'''
self.extra_params = args
- def get_extra_params(self) -> Dict[str, Any]:
+ def get_extra_params(self) -> dict[str, Any]:
'''
Set any extra params (in the form of a dictionary) for this client
'''
@@ -85,7 +85,7 @@ def translate_severity(severity):
def add_to_query(q):
if len(q) > 0:
- return '{} '.format(q) # No need for 'AND' here
+ return f'{q} ' # No need for 'AND' here
else:
return q
@@ -97,11 +97,12 @@ def arg_time_query_to_q(q, argval, timefield):
if not argval or argval == 'All time':
return q
if argval == 'Last week':
- return add_to_query(q) + '{}:NOW-7D..NOW'.format(timefield)
+ return add_to_query(q) + f'{timefield}:NOW-7D..NOW'
if argval == 'Last 48 hours':
- return add_to_query(q) + '{}:NOW-48h..NOW'.format(timefield)
+ return add_to_query(q) + f'{timefield}:NOW-48h..NOW'
if argval == 'Last 24 hours':
- return add_to_query(q) + '{}:NOW-24h..NOW'.format(timefield)
+ return add_to_query(q) + f'{timefield}:NOW-24h..NOW'
+ return None
def add_list_to_q(q, fields, args):
@@ -112,10 +113,10 @@ def add_list_to_q(q, fields, args):
arg_value = args.get(arg_field, None)
if arg_value:
if ',' in arg_value:
- quoted_values = ['"{}"'.format(v) for v in arg_value.split(',')]
+ quoted_values = [f'"{v}"' for v in arg_value.split(',')]
q = add_to_query(q) + '{}:in({})'.format(arg_field, ','.join(quoted_values))
else:
- q = add_to_query(q) + '{}:"{}"'.format(arg_field, arg_value)
+ q = add_to_query(q) + f'{arg_field}:"{arg_value}"'
return q
@@ -131,20 +132,17 @@ def insight_signal_to_readable(obj):
# Only show Entity name (Insights and Signals)
cap_obj['Entity'] = ''
- if obj.get('entity'):
- if 'name' in obj['entity']:
- cap_obj['Entity'] = obj['entity']['name']
+ if obj.get('entity') and 'name' in obj['entity']:
+ cap_obj['Entity'] = obj['entity']['name']
# Only show status displayName (Insights only)
- if obj.get('status'):
- if 'displayName' in obj['status']:
- cap_obj['Status'] = obj['status']['displayName']
+ if obj.get('status') and 'displayName' in obj['status']:
+ cap_obj['Status'] = obj['status']['displayName']
# For Assignee show username (email)
cap_obj['Assignee'] = ''
- if obj.get('assignee'):
- if 'username' in obj['assignee']:
- cap_obj['Assignee'] = obj['assignee']['username']
+ if obj.get('assignee') and 'username' in obj['assignee']:
+ cap_obj['Assignee'] = obj['assignee']['username']
# Remove some deprecated fields, replaced by "Assignee"
cap_obj.pop('AssignedTo', None)
@@ -171,9 +169,8 @@ def entity_to_readable(obj):
if len(cap_obj.get('Inventory', [])) > 0:
invdata = cap_obj['Inventory'][0]
- if 'metadata' in invdata:
- if 'operatingSystem' in invdata['metadata']:
- cap_obj['OperatingSystem'] = invdata['metadata']['operatingSystem']
+ if 'metadata' in invdata and 'operatingSystem' in invdata['metadata']:
+ cap_obj['OperatingSystem'] = invdata['metadata']['operatingSystem']
cap_obj['InventoryData'] = True
else:
cap_obj['InventoryData'] = False
@@ -230,7 +227,7 @@ def is_inmirrorable_object(readable_remote_id: str) -> bool:
can be in-mirrored into XSOAR. Note the readable_remote_id must be in readable form, not
the raw ID
'''
- return True if readable_remote_id.startswith('INSIGHT') else False
+ return bool(readable_remote_id.startswith('INSIGHT'))
''' COMMAND FUNCTIONS '''
@@ -282,7 +279,7 @@ def test_module(client: Client) -> str:
return message
-def insight_get_details(client: Client, args: Dict[str, Any]) -> CommandResults:
+def insight_get_details(client: Client, args: dict[str, Any]) -> CommandResults:
'''
Get insight details
'''
@@ -314,7 +311,7 @@ def insight_get_details(client: Client, args: Dict[str, Any]) -> CommandResults:
)
-def insight_add_comment(client: Client, args: Dict[str, Any]) -> CommandResults:
+def insight_add_comment(client: Client, args: dict[str, Any]) -> CommandResults:
'''
Add a comment to an insight
'''
@@ -322,7 +319,7 @@ def insight_add_comment(client: Client, args: Dict[str, Any]) -> CommandResults:
reqbody = {}
reqbody['body'] = args.get('comment')
- c = client.req('POST', 'sec/v1/insights/{}/comments'.format(insight_id), None, reqbody)
+ c = client.req('POST', f'sec/v1/insights/{insight_id}/comments', None, reqbody)
comment = [{'Id': c.get('id'), 'Body': c.get('body'), 'Author': c.get('author').get('username'),
'Timestamp': c.get('timestamp'), 'InsightId': insight_id}]
@@ -338,12 +335,12 @@ def insight_add_comment(client: Client, args: Dict[str, Any]) -> CommandResults:
)
-def insight_get_comments(client: Client, args: Dict[str, Any]) -> CommandResults:
+def insight_get_comments(client: Client, args: dict[str, Any]) -> CommandResults:
'''
Get comments for insight
'''
insight_id = args.get('insight_id')
- resp_json = client.req('GET', 'sec/v1/insights/{}/comments'.format(insight_id))
+ resp_json = client.req('GET', f'sec/v1/insights/{insight_id}/comments')
comments = [{'Id': c.get('id'), 'Body': c.get('body'), 'Author': c.get('author').get('username'),
'Timestamp': c.get('timestamp'), 'InsightId': insight_id} for c in resp_json.get('comments')]
readable_output = tableToMarkdown('Insight Comments:', comments,
@@ -358,7 +355,7 @@ def insight_get_comments(client: Client, args: Dict[str, Any]) -> CommandResults
)
-def signal_get_details(client: Client, args: Dict[str, Any]) -> CommandResults:
+def signal_get_details(client: Client, args: dict[str, Any]) -> CommandResults:
'''
Get signal details
'''
@@ -366,7 +363,7 @@ def signal_get_details(client: Client, args: Dict[str, Any]) -> CommandResults:
if not signal_id:
raise ValueError('signal_id not specified')
- signal = client.req('GET', 'sec/v1/signals/{}'.format(signal_id))
+ signal = client.req('GET', f'sec/v1/signals/{signal_id}')
signal.pop('allRecords', None) # don't need to display records from signal
signal = insight_signal_to_readable(signal)
signal['SumoUrl'] = craft_sumo_url(client.get_extra_params()['instance_endpoint'], 'signal', signal_id)
@@ -382,7 +379,7 @@ def signal_get_details(client: Client, args: Dict[str, Any]) -> CommandResults:
)
-def entity_get_details(client: Client, args: Dict[str, Any]) -> CommandResults:
+def entity_get_details(client: Client, args: dict[str, Any]) -> CommandResults:
'''
Get entity details
'''
@@ -390,7 +387,7 @@ def entity_get_details(client: Client, args: Dict[str, Any]) -> CommandResults:
if not entity_id:
raise ValueError('entity_id not specified')
- resp_json = client.req('GET', 'sec/v1/entities/{}'.format(entity_id), {'expand': 'inventory'})
+ resp_json = client.req('GET', f'sec/v1/entities/{entity_id}', {'expand': 'inventory'})
entity = entity_to_readable(resp_json)
readable_output = tableToMarkdown(
'Entity Details:', [entity],
@@ -405,7 +402,7 @@ def entity_get_details(client: Client, args: Dict[str, Any]) -> CommandResults:
)
-def insight_search(client: Client, args: Dict[str, Any]) -> CommandResults:
+def insight_search(client: Client, args: dict[str, Any]) -> CommandResults:
'''
Search insights using available filters
@@ -478,7 +475,7 @@ def insight_search(client: Client, args: Dict[str, Any]) -> CommandResults:
)
-def entity_search(client: Client, args: Dict[str, Any]) -> CommandResults:
+def entity_search(client: Client, args: dict[str, Any]) -> CommandResults:
'''
Search entities using the available filters
'''
@@ -509,7 +506,7 @@ def entity_search(client: Client, args: Dict[str, Any]) -> CommandResults:
)
-def signal_search(client: Client, args: Dict[str, Any]) -> CommandResults:
+def signal_search(client: Client, args: dict[str, Any]) -> CommandResults:
'''
Search signals using available filters
'''
@@ -540,7 +537,7 @@ def signal_search(client: Client, args: Dict[str, Any]) -> CommandResults:
)
-def insight_set_status(client: Client, args: Dict[str, Any]) -> CommandResults:
+def insight_set_status(client: Client, args: dict[str, Any]) -> CommandResults:
'''
Change status of insight
@@ -549,11 +546,13 @@ def insight_set_status(client: Client, args: Dict[str, Any]) -> CommandResults:
insight_id = args.get('insight_id')
reqbody = {}
reqbody['status'] = args.get('status')
- if args.get('status') == 'closed' and args.get('resolution'):
+ resolution = args.get('sub_resolution') or args.get('resolution')
+
+ if args.get('status') == 'closed' and resolution:
# resolution should only be specified when the status is set to "closed"
- reqbody['resolution'] = args['resolution']
+ reqbody['resolution'] = resolution
- resp_json = client.req('PUT', 'sec/v1/insights/{}/status'.format(insight_id), None, reqbody)
+ resp_json = client.req('PUT', f'sec/v1/insights/{insight_id}/status', None, reqbody)
for s in resp_json.get('signals'):
s.pop('allRecords', None)
@@ -573,7 +572,7 @@ def insight_set_status(client: Client, args: Dict[str, Any]) -> CommandResults:
)
-def match_list_get(client: Client, args: Dict[str, Any]) -> CommandResults:
+def match_list_get(client: Client, args: dict[str, Any]) -> CommandResults:
'''
Get match lists
'''
@@ -601,7 +600,7 @@ def match_list_get(client: Client, args: Dict[str, Any]) -> CommandResults:
)
-def match_list_update(client: Client, args: Dict[str, Any]) -> CommandResults:
+def match_list_update(client: Client, args: dict[str, Any]) -> CommandResults:
'''
Add to match list
'''
@@ -612,7 +611,7 @@ def match_list_update(client: Client, args: Dict[str, Any]) -> CommandResults:
item['expiration'] = args.get('expiration')
item['value'] = args.get('value')
- resp_json = client.req('POST', 'sec/v1/match-lists/{}/items'.format(match_list_id), None, {'items': [item]})
+ resp_json = client.req('POST', f'sec/v1/match-lists/{match_list_id}/items', None, {'items': [item]})
result = get_update_result(resp_json)
readable_output = tableToMarkdown('Result:', [result], ['Result', 'Server Response'])
@@ -623,7 +622,7 @@ def match_list_update(client: Client, args: Dict[str, Any]) -> CommandResults:
)
-def threat_intel_search_indicators(client: Client, args: Dict[str, Any]) -> CommandResults:
+def threat_intel_search_indicators(client: Client, args: dict[str, Any]) -> CommandResults:
'''
Search Threat Intel Indicators
@@ -676,7 +675,7 @@ def threat_intel_search_indicators(client: Client, args: Dict[str, Any]) -> Comm
)
-def threat_intel_get_sources(client: Client, args: Dict[str, Any]) -> CommandResults:
+def threat_intel_get_sources(client: Client, args: dict[str, Any]) -> CommandResults:
'''
Get the list of Threat Intel Sources
'''
@@ -703,7 +702,7 @@ def threat_intel_get_sources(client: Client, args: Dict[str, Any]) -> CommandRes
)
-def threat_intel_update_source(client: Client, args: Dict[str, Any]) -> CommandResults:
+def threat_intel_update_source(client: Client, args: dict[str, Any]) -> CommandResults:
'''
Add Indicator to a Threat Intel Source
'''
@@ -714,7 +713,7 @@ def threat_intel_update_source(client: Client, args: Dict[str, Any]) -> CommandR
item['expiration'] = args.get('expiration')
item['value'] = args.get('value')
- resp_json = client.req('POST', 'sec/v1/threat-intel-sources/{}/items'.format(threat_intel_source_id),
+ resp_json = client.req('POST', f'sec/v1/threat-intel-sources/{threat_intel_source_id}/items',
None, {'indicators': [item]})
result = get_update_result(resp_json)
readable_output = tableToMarkdown('Result:', [result], ['Result', 'Response'])
@@ -726,7 +725,7 @@ def threat_intel_update_source(client: Client, args: Dict[str, Any]) -> CommandR
)
-def cleanup_records(signal: Optional[Dict[str, Any]]) -> Optional[Dict[str, Any]]:
+def cleanup_records(signal: Optional[dict[str, Any]]) -> Optional[dict[str, Any]]:
'''
Function to clean up all "bro" fields of the records under a Signal object
'''
@@ -788,7 +787,7 @@ def get_remote_data_command(client: Client, args: dict, close_incident: bool):
return GetRemoteDataResponse(mirrored_object=insight, entries=entries)
-def update_remote_system_command(client: Client, args: Dict[str, Any], params: Dict[str, Any]) -> str:
+def update_remote_system_command(client: Client, args: dict[str, Any], params: dict[str, Any]) -> str:
""" Pushes changes in XSOAR incident into the corresponding Sumo Logic Insight.
Args:
@@ -863,9 +862,9 @@ def get_modified_remote_data_command(client: Client, args: Any) -> Any:
raise NotImplementedError('get-modified-remote-data not implemented')
-def fetch_incidents(client: Client, max_results: int, last_run: Dict[str, int], first_fetch_time: Optional[int],
+def fetch_incidents(client: Client, max_results: int, last_run: dict[str, int], first_fetch_time: Optional[int],
fetch_query: Optional[str], pull_signals: Optional[bool], record_summary_fields: Optional[str],
- other_args: Union[Dict[str, Any], None]) -> Tuple[Dict[str, int], List[dict]]:
+ other_args: Union[dict[str, Any], None]) -> tuple[dict[str, int], list[dict]]:
'''
Retrieve new incidents periodically based on pre-defined instance parameters
'''
@@ -876,8 +875,8 @@ def fetch_incidents(client: Client, max_results: int, last_run: Dict[str, int],
last_fetch = last_run.get('last_fetch', None)
# track last_fetch_ids to handle insights with the same timestamp
- last_fetch_ids: List[str] = cast(List[str], last_run.get('last_fetch_ids', []))
- current_fetch_ids: List[str] = []
+ last_fetch_ids: list[str] = cast(list[str], last_run.get('last_fetch_ids', []))
+ current_fetch_ids: list[str] = []
# Handle first fetch time
if last_fetch is None:
@@ -893,7 +892,7 @@ def fetch_incidents(client: Client, max_results: int, last_run: Dict[str, int],
# Initialize an empty list of incidents to return
# Each incident is a dict with a string as a key
- incidents: List[Dict[str, Any]] = []
+ incidents: list[dict[str, Any]] = []
# set query values that do not change with pagination
q = f'created:>={insight_timestamp_to_created_format(last_fetch_created_time)}'
@@ -934,9 +933,8 @@ def fetch_incidents(client: Client, max_results: int, last_run: Dict[str, int],
incident_created_time = (int)(incident_created_time_ms / 1000)
# to prevent duplicates, we are only adding incidents with creation_time >= last fetched incident
- if last_fetch:
- if incident_created_time < last_fetch:
- continue
+ if last_fetch and incident_created_time < last_fetch:
+ continue
signals = a.get('signals')
for signal in signals:
@@ -1005,7 +1003,7 @@ def fetch_incidents(client: Client, max_results: int, last_run: Dict[str, int],
# Save the next_run as a dict with the last_fetch and last_fetch_ids keys to be stored
next_run = cast(
- Dict[str, Any],
+ dict[str, Any],
{
'last_fetch': latest_created_time,
'last_fetch_ids': current_fetch_ids if len(current_fetch_ids) > 0 else last_fetch_ids
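The `sub_resolution` override in `insight_set_status` reduces to a two-line precedence rule. A standalone sketch of the resulting request body (argument values are illustrative):

```python
# Mirrors the insight_set_status change above; values are illustrative.
args = {"status": "closed", "resolution": "Resolved", "sub_resolution": "My Custom Sub-Resolution"}

reqbody = {"status": args.get("status")}
resolution = args.get("sub_resolution") or args.get("resolution")
if args.get("status") == "closed" and resolution:
    # resolution should only be sent when the status is set to "closed"
    reqbody["resolution"] = resolution

print(reqbody)  # {'status': 'closed', 'resolution': 'My Custom Sub-Resolution'}
```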
diff --git a/Packs/SumoLogic_Cloud_SIEM/Integrations/SumoLogicCloudSIEM/SumoLogicCloudSIEM_test.py b/Packs/SumoLogic_Cloud_SIEM/Integrations/SumoLogicCloudSIEM/SumoLogicCloudSIEM_test.py
index 6ca53f46ef81..da8ca1cbb8f0 100644
--- a/Packs/SumoLogic_Cloud_SIEM/Integrations/SumoLogicCloudSIEM/SumoLogicCloudSIEM_test.py
+++ b/Packs/SumoLogic_Cloud_SIEM/Integrations/SumoLogicCloudSIEM/SumoLogicCloudSIEM_test.py
@@ -8,7 +8,6 @@
from CommonServerUserPython import *
import json
-import io
from datetime import datetime
from datetime import timezone
@@ -21,7 +20,7 @@
def util_load_json(path):
- with io.open(path, mode='r', encoding='utf-8') as f:
+ with open(path, encoding='utf-8') as f:
return json.loads(f.read())
@@ -76,7 +75,7 @@ def test_insight_get_comments(requests_mock):
insight_id = 'INSIGHT-116'
comments = mock_response['data']['comments']
- requests_mock.get('{}/sec/v1/insights/{}/comments'.format(MOCK_URL, insight_id), json=mock_response)
+ requests_mock.get(f'{MOCK_URL}/sec/v1/insights/{insight_id}/comments', json=mock_response)
client = Client(
base_url=MOCK_URL,
@@ -141,7 +140,7 @@ def test_signal_get_details(requests_mock):
del signal['allRecords']
signal = insight_signal_to_readable(signal)
- requests_mock.get('{}/sec/v1/signals/{}'.format(MOCK_URL, signal_id), json=mock_response)
+ requests_mock.get(f'{MOCK_URL}/sec/v1/signals/{signal_id}', json=mock_response)
client = Client(
base_url=MOCK_URL,
@@ -172,7 +171,7 @@ def test_entity_get_details(requests_mock):
entity_id = '_hostname-win10--admin.b.test.com'
entity = entity_to_readable(mock_response.get('data'))
- requests_mock.get('{}/sec/v1/entities/{}'.format(MOCK_URL, entity_id), json=mock_response)
+ requests_mock.get(f'{MOCK_URL}/sec/v1/entities/{entity_id}', json=mock_response)
client = Client(
base_url=MOCK_URL,
@@ -203,7 +202,7 @@ def test_insight_search(requests_mock):
for insight in mock_response['data']['objects']:
insights.append(insight_signal_to_readable(insight))
- requests_mock.get('{}/sec/v1/insights?limit=2'.format(MOCK_URL), json=mock_response)
+ requests_mock.get(f'{MOCK_URL}/sec/v1/insights?limit=2', json=mock_response)
client = Client(
base_url=MOCK_URL,
@@ -234,7 +233,7 @@ def test_entity_search(requests_mock):
for entity in mock_response['data']['objects']:
entities.append(entity_to_readable(entity))
- requests_mock.get('{}/sec/v1/entities?q=hostname:matchesWildcard(\"*test*\")&limit=2'.format(MOCK_URL), json=mock_response)
+ requests_mock.get(f'{MOCK_URL}/sec/v1/entities?q=hostname:matchesWildcard(\"*test*\")&limit=2', json=mock_response)
client = Client(
base_url=MOCK_URL,
@@ -267,7 +266,7 @@ def test_signal_search(requests_mock):
del signal['allRecords']
signals.append(insight_signal_to_readable(signal))
- requests_mock.get('{}/sec/v1/signals?q=contentType:\"ANOMALY\"&limit=2'.format(MOCK_URL), json=mock_response)
+ requests_mock.get(f'{MOCK_URL}/sec/v1/signals?q=contentType:\"ANOMALY\"&limit=2', json=mock_response)
client = Client(
base_url=MOCK_URL,
@@ -301,7 +300,7 @@ def test_insight_set_status(requests_mock):
del signal['allRecords']
insight = insight_signal_to_readable(mock_response.get('data'))
- requests_mock.put('{}/sec/v1/insights/{}/status'.format(MOCK_URL, insight_id), json=mock_response)
+ requests_mock.put(f'{MOCK_URL}/sec/v1/insights/{insight_id}/status', json=mock_response)
client = Client(
base_url=MOCK_URL,
@@ -334,7 +333,7 @@ def test_match_list_get(requests_mock):
for match_list in mock_response['data']['objects']:
match_lists.append({(k[0].capitalize() + k[1:]): v for k, v in match_list.items()})
- requests_mock.get('{}/sec/v1/match-lists?limit=5'.format(MOCK_URL), json=mock_response)
+ requests_mock.get(f'{MOCK_URL}/sec/v1/match-lists?limit=5', json=mock_response)
client = Client(
base_url=MOCK_URL,
@@ -362,7 +361,7 @@ def test_match_list_update(requests_mock):
mock_response = util_load_json('test_data/update_result.json')
match_list_id = '166'
- requests_mock.post('{}/sec/v1/match-lists/{}/items'.format(MOCK_URL, match_list_id), json=mock_response)
+ requests_mock.post(f'{MOCK_URL}/sec/v1/match-lists/{match_list_id}/items', json=mock_response)
client = Client(
base_url=MOCK_URL,
@@ -396,7 +395,7 @@ def test_threat_intel_search_indicators(requests_mock):
for threat_intel_indicator in mock_response['data']['objects']:
threat_intel_indicators.append({(k[0].capitalize() + k[1:]): v for k, v in threat_intel_indicator.items()})
- requests_mock.get('{}/sec/v1/threat-intel-indicators?value=11.22.33.44&sourceIds=54'.format(MOCK_URL), json=mock_response)
+ requests_mock.get(f'{MOCK_URL}/sec/v1/threat-intel-indicators?value=11.22.33.44&sourceIds=54', json=mock_response)
client = Client(
base_url=MOCK_URL,
@@ -428,7 +427,7 @@ def test_threat_intel_get_sources(requests_mock):
for threat_intel_source in mock_response['data']['objects']:
threat_intel_sources.append({(k[0].capitalize() + k[1:]): v for k, v in threat_intel_source.items()})
- requests_mock.get('{}/sec/v1/threat-intel-sources?limit=5'.format(MOCK_URL), json=mock_response)
+ requests_mock.get(f'{MOCK_URL}/sec/v1/threat-intel-sources?limit=5', json=mock_response)
client = Client(
base_url=MOCK_URL,
@@ -456,7 +455,7 @@ def test_threat_intel_update_source(requests_mock):
mock_response = util_load_json('test_data/update_result.json')
threat_intel_source_id = '54'
- requests_mock.post('{}/sec/v1/threat-intel-sources/{}/items'.format(MOCK_URL, threat_intel_source_id), json=mock_response)
+ requests_mock.post(f'{MOCK_URL}/sec/v1/threat-intel-sources/{threat_intel_source_id}/items', json=mock_response)
client = Client(
base_url=MOCK_URL,
diff --git a/Packs/SumoLogic_Cloud_SIEM/ReleaseNotes/1_1_25.md b/Packs/SumoLogic_Cloud_SIEM/ReleaseNotes/1_1_25.md
new file mode 100644
index 000000000000..6d9cbcbf1f9f
--- /dev/null
+++ b/Packs/SumoLogic_Cloud_SIEM/ReleaseNotes/1_1_25.md
@@ -0,0 +1,5 @@
+#### Integrations
+
+##### Sumo Logic Cloud SIEM
+
+- Added the ability to enter a custom resolution when setting the insight status to closed via the **sumologic-sec-insight-set-status** command.
diff --git a/Packs/SumoLogic_Cloud_SIEM/pack_metadata.json b/Packs/SumoLogic_Cloud_SIEM/pack_metadata.json
index 71ce54e61888..57ecf486459a 100644
--- a/Packs/SumoLogic_Cloud_SIEM/pack_metadata.json
+++ b/Packs/SumoLogic_Cloud_SIEM/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Sumo Logic Cloud SIEM",
"description": "Sumo Logic Cloud SIEM provides threat detection and incident response for modern IT environments. This content pack will allow you to apply automation to perform actual SOC analyst workflows. Using this content pack you will be able to fetch Incidents via Insights, update status of an Insight, add items to match list, add Threat Intel Indicators to Threat Intel Sources, and so on.",
"support": "partner",
- "currentVersion": "1.1.24",
+ "currentVersion": "1.1.25",
"author": "Sumo Logic",
"url": "https://www.sumologic.com/solutions/cloud-siem-enterprise/",
"email": "support@sumologic.com",
diff --git a/Packs/SuspiciousDomainHunting/ReleaseNotes/1_0_7.md b/Packs/SuspiciousDomainHunting/ReleaseNotes/1_0_7.md
new file mode 100644
index 000000000000..f5892ec16e7b
--- /dev/null
+++ b/Packs/SuspiciousDomainHunting/ReleaseNotes/1_0_7.md
@@ -0,0 +1,15 @@
+
+#### Scripts
+
+##### RasterizeImageOriginal
+
+
+- Updated the Docker image to: *demisto/processing-image-file:1.0.0.115372*.
+##### imagecompare
+
+
+- Updated the Docker image to: *demisto/processing-image-file:1.0.0.115372*.
+##### RasterizeImageSuspicious
+
+
+- Updated the Docker image to: *demisto/processing-image-file:1.0.0.115372*.
diff --git a/Packs/SuspiciousDomainHunting/Scripts/Imagecompare/Imagecompare.yml b/Packs/SuspiciousDomainHunting/Scripts/Imagecompare/Imagecompare.yml
index 1ff703d26fd5..9c7f62165a8e 100644
--- a/Packs/SuspiciousDomainHunting/Scripts/Imagecompare/Imagecompare.yml
+++ b/Packs/SuspiciousDomainHunting/Scripts/Imagecompare/Imagecompare.yml
@@ -15,7 +15,7 @@ args:
scripttarget: 0
subtype: python3
runonce: false
-dockerimage: demisto/processing-image-file:1.0.0.108564
+dockerimage: demisto/processing-image-file:1.0.0.115372
runas: DBotWeakRole
engineinfo: {}
tests:
diff --git a/Packs/SuspiciousDomainHunting/Scripts/RasterizeImageOriginal/RasterizeImageOriginal.yml b/Packs/SuspiciousDomainHunting/Scripts/RasterizeImageOriginal/RasterizeImageOriginal.yml
index ced682816977..327739da2f50 100644
--- a/Packs/SuspiciousDomainHunting/Scripts/RasterizeImageOriginal/RasterizeImageOriginal.yml
+++ b/Packs/SuspiciousDomainHunting/Scripts/RasterizeImageOriginal/RasterizeImageOriginal.yml
@@ -10,7 +10,7 @@ enabled: true
scripttarget: 0
subtype: python3
runonce: false
-dockerimage: demisto/processing-image-file:1.0.0.92064
+dockerimage: demisto/processing-image-file:1.0.0.115372
runas: DBotWeakRole
engineinfo: {}
fromversion: 6.10.0
diff --git a/Packs/SuspiciousDomainHunting/Scripts/RasterizeImageSuspicious/RasterizeImageSuspicious.yml b/Packs/SuspiciousDomainHunting/Scripts/RasterizeImageSuspicious/RasterizeImageSuspicious.yml
index fee1bbbcf6b4..8cbc46cc7165 100644
--- a/Packs/SuspiciousDomainHunting/Scripts/RasterizeImageSuspicious/RasterizeImageSuspicious.yml
+++ b/Packs/SuspiciousDomainHunting/Scripts/RasterizeImageSuspicious/RasterizeImageSuspicious.yml
@@ -10,7 +10,7 @@ enabled: true
scripttarget: 0
subtype: python3
runonce: false
-dockerimage: demisto/processing-image-file:1.0.0.92064
+dockerimage: demisto/processing-image-file:1.0.0.115372
runas: DBotWeakRole
engineinfo: {}
fromversion: 6.10.0
diff --git a/Packs/SuspiciousDomainHunting/pack_metadata.json b/Packs/SuspiciousDomainHunting/pack_metadata.json
index 7119bf52fcc0..33070ad1f1ab 100644
--- a/Packs/SuspiciousDomainHunting/pack_metadata.json
+++ b/Packs/SuspiciousDomainHunting/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Suspicious Domain Hunting",
"description": "This pack provides all the necessary tools for the Suspicious Domain Hunting use case. It uses the CertStream integration to ingest new SSL certificates and alert for type-squatting domains with SSL certificate, these alerts are then analyzed and mitigated.",
"support": "community",
- "currentVersion": "1.0.6",
+ "currentVersion": "1.0.7",
"author": "Cortex XSOAR",
"url": "https://live.paloaltonetworks.com/t5/cortex-xsoar-discussions/bd-p/Cortex_XSOAR_Discussions",
"email": "",
diff --git a/Packs/SymantecBlueCoatMalwareAnalysis/Integrations/SymantecBlueCoatMalwareAnalysis/SymantecBlueCoatMalwareAnalysis.yml b/Packs/SymantecBlueCoatMalwareAnalysis/Integrations/SymantecBlueCoatMalwareAnalysis/SymantecBlueCoatMalwareAnalysis.yml
index 79c880e888b4..cfdee4e012ed 100644
--- a/Packs/SymantecBlueCoatMalwareAnalysis/Integrations/SymantecBlueCoatMalwareAnalysis/SymantecBlueCoatMalwareAnalysis.yml
+++ b/Packs/SymantecBlueCoatMalwareAnalysis/Integrations/SymantecBlueCoatMalwareAnalysis/SymantecBlueCoatMalwareAnalysis.yml
@@ -53,7 +53,7 @@ script:
required: true
description: Retrieves an analysis report.
name: symantec-cma-get-report
- dockerimage: demisto/python3:3.10.13.72123
+ dockerimage: demisto/python3:3.11.10.116439
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/SymantecBlueCoatMalwareAnalysis/ReleaseNotes/1_0_12.md b/Packs/SymantecBlueCoatMalwareAnalysis/ReleaseNotes/1_0_12.md
new file mode 100644
index 000000000000..0fe1cf8594b5
--- /dev/null
+++ b/Packs/SymantecBlueCoatMalwareAnalysis/ReleaseNotes/1_0_12.md
@@ -0,0 +1,10 @@
+
+#### Integrations
+
+##### Symantec Blue Coat Content and Malware Analysis (Beta)
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+
diff --git a/Packs/SymantecBlueCoatMalwareAnalysis/pack_metadata.json b/Packs/SymantecBlueCoatMalwareAnalysis/pack_metadata.json
index 48fe7cded3b4..91dfbf07aefd 100644
--- a/Packs/SymantecBlueCoatMalwareAnalysis/pack_metadata.json
+++ b/Packs/SymantecBlueCoatMalwareAnalysis/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Symantec Blue Coat Content and Malware Analysis (Beta)",
"description": "Symantec Blue Coat Content and Malware Analysis integration.",
"support": "xsoar",
- "currentVersion": "1.0.11",
+ "currentVersion": "1.0.12",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/SymantecEndpointProtection/Integrations/SymantecEndpointProtection_V2/README.md b/Packs/SymantecEndpointProtection/Integrations/SymantecEndpointProtection_V2/README.md
index df0cf33d0f39..89afa6a2ca39 100644
--- a/Packs/SymantecEndpointProtection/Integrations/SymantecEndpointProtection_V2/README.md
+++ b/Packs/SymantecEndpointProtection/Integrations/SymantecEndpointProtection_V2/README.md
@@ -98,7 +98,7 @@ Returns information about endpoints.
```
#### Human Readable Output
-![Human_Readable_Output_1](https://raw.githubusercontent.com/demisto/content/master/docs/images/Integrations/SymantecEndpointProtection_V2_Human_Readable_Output_1.png)
+![Human_Readable_Output_1](../../doc_files/SymantecEndpointProtection_V2_Human_Readable_Output_1.png)
### sep-groups-info
@@ -136,7 +136,7 @@ Returns information about groups.
```
#### Human Readable Output
-![Human_Readable_Output_2](https://raw.githubusercontent.com/demisto/content/master/docs/images/Integrations/SymantecEndpointProtection_V2_Human_Readable_Output_2.png)
+![Human_Readable_Output_2](../../doc_files/SymantecEndpointProtection_V2_Human_Readable_Output_2.png)
### sep-system-info
@@ -164,7 +164,7 @@ There are no input arguments for this command.
```
#### Human Readable Output
-![Human_Readable_Output_3](https://raw.githubusercontent.com/demisto/content/master/docs/images/Integrations/SymantecEndpointProtection_V2_Human_Readable_Output_3.png)
+![Human_Readable_Output_3](../../doc_files/SymantecEndpointProtection_V2_Human_Readable_Output_3.png)
### sep-command-status
@@ -196,7 +196,7 @@ Retrieves the status of a command.
```
#### Human Readable Output
-![Human_Readable_Output_4](https://raw.githubusercontent.com/demisto/content/master/docs/images/Integrations/SymantecEndpointProtection_V2_Human_Readable_Output_4.png)
+![Human_Readable_Output_4](../../doc_files/SymantecEndpointProtection_V2_Human_Readable_Output_4.png)
### sep-client-content
@@ -225,7 +225,7 @@ There are no input arguments for this command.
```
#### Human Readable Output
-![Human_Readable_Output_5](https://raw.githubusercontent.com/demisto/content/master/docs/images/Integrations/SymantecEndpointProtection_V2_Human_Readable_Output_5.png)
+![Human_Readable_Output_5](../../doc_files/SymantecEndpointProtection_V2_Human_Readable_Output_5.png)
### sep-list-policies
@@ -261,7 +261,7 @@ There are no input arguments for this command.
```
#### Human Readable Output
-![Human_Readable_Output_6](https://raw.githubusercontent.com/demisto/content/master/docs/images/Integrations/SymantecEndpointProtection_V2_Human_Readable_Output_6.png)
+![Human_Readable_Output_6](../../doc_files/SymantecEndpointProtection_V2_Human_Readable_Output_6.png)
### sep-assign-policy
@@ -292,7 +292,7 @@ There is no context output for this command.
```
#### Human Readable Output
-![Human_Readable_Output_7](https://raw.githubusercontent.com/demisto/content/master/docs/images/Integrations/SymantecEndpointProtection_V2_Human_Readable_Output_7.png)
+![Human_Readable_Output_7](../../doc_files/SymantecEndpointProtection_V2_Human_Readable_Output_7.png)
### sep-list-locations
@@ -323,7 +323,7 @@ Retrieves a list of location IDs for a specified group.
```
#### Human Readable Output
-![Human_Readable_Output_8](https://raw.githubusercontent.com/demisto/content/master/docs/images/Integrations/SymantecEndpointProtection_V2_Human_Readable_Output_8.png)
+![Human_Readable_Output_8](../../doc_files/SymantecEndpointProtection_V2_Human_Readable_Output_8.png)
### sep-endpoint-quarantine
@@ -357,7 +357,7 @@ Quarantines an endpoint according to its policy.
```
#### Human Readable Output
-![Human_Readable_Output_9](https://raw.githubusercontent.com/demisto/content/master/docs/images/Integrations/SymantecEndpointProtection_V2_Human_Readable_Output_9.png)
+![Human_Readable_Output_9](../../doc_files/SymantecEndpointProtection_V2_Human_Readable_Output_9.png)
### sep-scan-endpoint
@@ -391,7 +391,7 @@ Scans an endpoint.
```
#### Human Readable Output
-![Human_Readable_Output_10](https://raw.githubusercontent.com/demisto/content/master/docs/images/Integrations/SymantecEndpointProtection_V2_Human_Readable_Output_10.png)
+![Human_Readable_Output_10](../../doc_files/SymantecEndpointProtection_V2_Human_Readable_Output_10.png)
### sep-update-endpoint-content
@@ -423,7 +423,7 @@ Updates the content of a specified client.
```
#### Human Readable Output
-![Human_Readable_Output_11](https://raw.githubusercontent.com/demisto/content/master/docs/images/Integrations/SymantecEndpointProtection_V2_Human_Readable_Output_11.png)
+![Human_Readable_Output_11](../../doc_files/SymantecEndpointProtection_V2_Human_Readable_Output_11.png)
### sep-move-client-to-group
@@ -452,7 +452,7 @@ There is no context output for this command.
```
#### Human Readable Output
-![Human_Readable_Output_12](https://raw.githubusercontent.com/demisto/content/master/docs/images/Integrations/SymantecEndpointProtection_V2_Human_Readable_Output_112.png)
+![Human_Readable_Output_12](../../doc_files/SymantecEndpointProtection_V2_Human_Readable_Output_112.png)
### sep-identify-old-clients
@@ -486,7 +486,7 @@ There is no context output for this command.
```
#### Human Readable Output
-![Human_Readable_Output_13](https://raw.githubusercontent.com/demisto/content/master/docs/images/Integrations/SymantecEndpointProtection_V2_Human_Readable_Output_13.png)
+![Human_Readable_Output_13](../../doc_files/SymantecEndpointProtection_V2_Human_Readable_Output_13.png)
## Known Limitations
diff --git a/Packs/SymantecEndpointProtection/Integrations/SymantecEndpointProtection_V2/SymantecEndpointProtection_V2.yml b/Packs/SymantecEndpointProtection/Integrations/SymantecEndpointProtection_V2/SymantecEndpointProtection_V2.yml
index fe0caa700063..2023412e5b5a 100644
--- a/Packs/SymantecEndpointProtection/Integrations/SymantecEndpointProtection_V2/SymantecEndpointProtection_V2.yml
+++ b/Packs/SymantecEndpointProtection/Integrations/SymantecEndpointProtection_V2/SymantecEndpointProtection_V2.yml
@@ -368,7 +368,7 @@ script:
script: '-'
type: python
subtype: python3
- dockerimage: demisto/python3:3.10.13.72123
+ dockerimage: demisto/python3:3.11.10.116439
tests:
- SymantecEndpointProtection_Test
fromversion: 5.0.0
diff --git a/Packs/SymantecEndpointProtection/ReleaseNotes/1_1_13.md b/Packs/SymantecEndpointProtection/ReleaseNotes/1_1_13.md
new file mode 100644
index 000000000000..2ea3c9db8e05
--- /dev/null
+++ b/Packs/SymantecEndpointProtection/ReleaseNotes/1_1_13.md
@@ -0,0 +1,10 @@
+
+#### Integrations
+
+##### Symantec Endpoint Protection v2
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+
diff --git a/Packs/SymantecEndpointProtection/pack_metadata.json b/Packs/SymantecEndpointProtection/pack_metadata.json
index f7b48d266d4c..2ba4a1ed2dc4 100644
--- a/Packs/SymantecEndpointProtection/pack_metadata.json
+++ b/Packs/SymantecEndpointProtection/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Symantec Endpoint Protection",
"description": "Query the Symantec Endpoint Protection Manager using the official REST API.",
"support": "xsoar",
- "currentVersion": "1.1.12",
+ "currentVersion": "1.1.13",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/SymantecEndpointSecurity/.pack-ignore b/Packs/SymantecEndpointSecurity/.pack-ignore
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/Packs/SymantecEndpointSecurity/.secrets-ignore b/Packs/SymantecEndpointSecurity/.secrets-ignore
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/README.md b/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/README.md
new file mode 100644
index 000000000000..bec18affd393
--- /dev/null
+++ b/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/README.md
@@ -0,0 +1,38 @@
+Symantec Endpoint Security Event Collector for Cortex XSIAM.
+
+## Configure Symantec Endpoint Security on Cortex XSIAM
+
+1. Navigate to Settings > Configurations > Data Collection > Automations & Feed Integrations.
+2. Search for Symantec Endpoint Security.
+3. Click **Add instance** to create and configure a new integration instance.
+
+ | **Parameter** | **Required** |
+ | --- | --- |
+ | Server URL | True |
+ | OAuth credential | True |
+ | Stream ID | True |
+ | Channel ID | True |
+ | Fetch interval in seconds | True |
+ | Use system proxy settings | False |
+ | Trust any certificate (not secure) | False |
+
+4. Click **Test** to validate the URLs, token, and connection.
+
+
+### To generate a value for the ***OAuth credential*** parameter:
+
+1. Log in to the Symantec Endpoint Security console.
+2. Click **Integration** > **Client Applications**.
+3. Choose `Add Client Application`.
+4. Choose a name for the application, then click `Add`. The client application details screen will appear.
+5. Click the ellipsis (`⋮`) and select **Client Secret**.
+6. Click the `copy` icon next to `OAuth Credentials`.
+
+For more information on obtaining *OAuth Credentials*, refer to [this documentation](https://apidocs.securitycloud.symantec.com/#/doc?id=ses_auth) or watch [this video](https://youtu.be/d7LRygRfDLc?si=NNlERXtfzv4LjpsB).
+
+**Note:**
+
+- There is no need to generate a bearer token; the integration uses the provided `OAuth Credentials` to generate one.
+- The `test_module` test checks only the validity of the `OAuth credential` parameter and does not validate the `Channel ID` and `Stream ID` parameters.
+- Fetching events that occurred at a specific time may be delayed due to delays in event ingestion on Symantec's side.
\ No newline at end of file
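
For orientation, the token exchange that the collector performs on your behalf can be sketched as follows. This is a minimal example assuming the `requests` library; the endpoint, headers, and empty form body mirror the integration code added in this PR, and the credential value is hypothetical:

```python
import requests

# Hypothetical value - paste the OAuth credential copied from the console.
OAUTH_CREDENTIAL = "bXktY2xpZW50LWlkOm15LXNlY3JldA=="

# Exchange the static client credential for a short-lived bearer token.
resp = requests.post(
    "https://api.sep.securitycloud.symantec.com/v1/oauth2/tokens",
    headers={
        "accept": "application/json",
        "content-type": "application/x-www-form-urlencoded",
        "Authorization": f"Basic {OAUTH_CREDENTIAL}",
    },
    data={},
)
resp.raise_for_status()
bearer_token = resp.json()["access_token"]
# Subsequent event-stream requests send: Authorization: Bearer <bearer_token>
```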
diff --git a/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/SymantecEndpointSecurity.py b/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/SymantecEndpointSecurity.py
new file mode 100644
index 000000000000..160fc0a00d21
--- /dev/null
+++ b/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/SymantecEndpointSecurity.py
@@ -0,0 +1,474 @@
+import itertools
+import demistomock as demisto
+from CommonServerPython import * # noqa # pylint: disable=unused-wildcard-import
+from CommonServerUserPython import * # noqa
+from datetime import datetime
+import dateparser
+import time
+
+
+# CONSTANTS
+VENDOR = "symantec"
+PRODUCT = "endpoint_security"
+DEFAULT_CONNECTION_TIMEOUT = 30
+MAX_CHUNK_SIZE_TO_READ = 1024 * 1024 * 150 # 150 MB
+DATE_FORMAT = "%Y-%m-%dT%H:%M:%SZ"
+
+"""
+Sleep time between fetch attempts when an error occurs in the retrieval process,
+primarily used to avoid overloading with consecutive API calls
+if an error is received from the API.
+"""
+FETCH_INTERVAL = 60
+
+
+class UnauthorizedToken(Exception):
+ """
+ Exception raised when the authentication token is unauthorized.
+ """
+
+ ...
+
+
+class NextPointingNotAvailable(Exception):
+ """
+    Exception raised when the stream's `next` pointer is no longer available.
+ """
+
+ ...
+
+
+class Client(BaseClient):
+ def __init__(
+ self,
+ base_url: str,
+ token: str,
+ stream_id: str,
+ channel_id: str,
+ verify: bool,
+ proxy: bool,
+ ) -> None:
+
+ self.headers: dict[str, str] = {}
+ self.token = token
+ self.stream_id = stream_id
+ self.channel_id = channel_id
+
+ super().__init__(
+ base_url=base_url,
+ verify=verify,
+ proxy=proxy,
+ timeout=180,
+ )
+
+ self._update_access_token_in_headers()
+
+ def _update_access_token_in_headers(self):
+ """
+ Retrieves an access token using the `token` provided in the params, and updates `self.headers`.
+ """
+ get_token_headers: dict[str, str] = {
+ "accept": "application/json",
+ "content-type": "application/x-www-form-urlencoded",
+ "Authorization": f"Basic {self.token}",
+ }
+ try:
+ res = self._http_request(
+ "POST",
+ url_suffix="/v1/oauth2/tokens",
+ headers=get_token_headers,
+ data={},
+ )
+ except Exception as e:
+ raise DemistoException("Failed getting an access token") from e
+
+ if "access_token" not in res:
+ raise DemistoException(
+ f"The key 'access_token' does not exist in response, Response from API: {res}",
+ res=res,
+ )
+ self.headers = {
+ "Authorization": f'Bearer {res["access_token"]}',
+ "Accept": "application/x-ndjson",
+ "Content-Type": "application/json",
+ "Accept-Encoding": "gzip",
+ }
+
+ def get_events(self, payload: dict[str, str]) -> dict:
+ """
+        Streaming API call to fetch events.
+ """
+ res = self._http_request(
+ method="POST",
+ url_suffix=f"/v1/event-export/stream/{self.stream_id}/{self.channel_id}",
+ json_data=payload,
+ params={"connectionTimeout": DEFAULT_CONNECTION_TIMEOUT},
+ resp_type="text",
+ headers=self.headers,
+ )
+        # The endpoint streams NDJSON, e.g. '{"next": "h1", ...}\n{"next": "h2", ...}'.
+        # Join the chunks with "},{" (and wrap in brackets below) to form a valid JSON
+        # array of chunk objects; replacing with "," alone would merge the chunks into
+        # a single object and silently drop all but the last 'events' list.
+        res = res.replace("}\n{", "},{")
+ if not res.startswith("["):
+ res = f"[{res}]"
+ return json.loads(res)
+
+
+def sleep_if_necessary(last_run_duration: float) -> None:
+ """
+ Manages the fetch interval by sleeping if necessary.
+
+    This function compares the fetch runtime against FETCH_INTERVAL.
+ If the runtime is less than the FETCH_INTERVAL time, it will sleep
+ for the time difference between FETCH_INTERVAL and the fetch runtime.
+ Otherwise, the next fetch will occur immediately.
+ """
+ fetch_sleep = FETCH_INTERVAL - last_run_duration
+ if fetch_sleep > 0:
+ demisto.debug(f"Sleeping for {fetch_sleep} seconds")
+ time.sleep(fetch_sleep)
+ return
+
+ demisto.debug("Not sleeping, next fetch will take place immediately")
+
+
+def normalize_date_format(date_str: str) -> str:
+ """
+ Normalize the given date string by removing milliseconds.
+
+ Args:
+ date_str (str): The input date string to be normalized.
+
+ Returns:
+ str: The normalized date string without milliseconds.
+ """
+ # Parse the original date string with milliseconds
+ if not (timestamp := dateparser.parse(date_str)):
+ raise DemistoException(f"Failed to parse date string: {date_str}")
+
+ # Convert back to the desired format without milliseconds
+ return timestamp.strftime(DATE_FORMAT)
+
+
+def calculate_next_fetch(
+ filtered_events: list[dict[str, str]],
+ next_hash: str,
+ include_last_fetch_events: bool,
+ last_integration_context: dict[str, str],
+) -> None:
+ """
+ Calculate and update the integration context for the next fetch operation.
+
+ - Extracts the time of the latest event
+ - Extracts all event IDs with time matching the latest event time
+ - If the latest event time matches the latest time from the previous fetch,
+ extend the suspected duplicate IDs from the previous fetch.
+ - If a push to XSIAM fails, store all events in the `integration_context`
+ to be pushed in the next fetch.
+ - Update the integration_context
+
+ Args:
+ filtered_events (list[dict[str, str]]): A list of filtered events.
+ next_hash (str): The hash for the next fetch operation.
+ include_last_fetch_events (bool): Flag to include last fetched events in the integration context.
+ last_integration_context (dict[str, str]): The previous integration context.
+ """
+
+ if filtered_events:
+ events_suspected_duplicates = extract_events_suspected_duplicates(
+ filtered_events
+ )
+
+ # Determine the latest event time: Extract the last time of the filtered event,
+ latest_event_time = normalize_date_format(
+ max(filtered_events, key=parse_event_time)["log_time"]
+ )
+ else:
+ events_suspected_duplicates = []
+ latest_event_time = last_integration_context.get("latest_event_time", "")
+
+ if latest_event_time == last_integration_context.get("latest_event_time", ""):
+ # If the latest event time matches the previous one,
+ # extend the suspected duplicates list with events from the previous context,
+ # to control deduplication across multiple fetches.
+ demisto.debug(
+ "The latest event time equals the latest event time from the previous fetch,"
+ " adding the suspect duplicates from last time"
+ )
+ events_suspected_duplicates.extend(
+ last_integration_context.get("events_suspected_duplicates", [])
+ )
+
+ integration_context = {
+ "latest_event_time": latest_event_time,
+ "events_suspected_duplicates": events_suspected_duplicates,
+ "next_fetch": {"next": next_hash} if next_hash else {},
+ "last_fetch_events": filtered_events if include_last_fetch_events else [],
+ }
+
+ demisto.debug(f"Updating integration context with new data: {integration_context}")
+ set_integration_context(integration_context)
+
+
+def push_events(events: list[dict]):
+ """
+ Push events to XSIAM.
+ """
+ demisto.debug(f"Pushing {len(events)} to XSIAM")
+ send_events_to_xsiam(events=events, vendor=VENDOR, product=PRODUCT)
+ demisto.debug(f"Pushed {len(events)} to XSIAM successfully")
+
+
+def parse_event_time(event) -> datetime:
+ """
+ Parse the event time from the given event dict to datetime object.
+ """
+ return datetime.strptime(normalize_date_format(event["log_time"]), DATE_FORMAT)
+
+
+def extract_events_suspected_duplicates(events: list[dict]) -> list[str]:
+ """
+ Extract event IDs of potentially duplicate events.
+
+ This function identifies events with the latest timestamp and considers them as
+ potential duplicates. It returns a list of their unique identifiers (UUIDs).
+ """
+
+ # Find the maximum event time
+ latest_event_time = normalize_date_format(
+ max(events, key=parse_event_time)["log_time"]
+ )
+
+ # Filter all JSONs with the maximum event time
+ filtered_events = filter(
+ lambda event: normalize_date_format(event["log_time"]) == latest_event_time,
+ events,
+ )
+
+ # Extract the event_ids from the filtered events
+ return [event["uuid"] for event in filtered_events]
+
+
+def is_duplicate(
+ event_id: str,
+ event_time: datetime,
+ latest_event_time: datetime,
+ events_suspected_duplicates: set[str],
+) -> bool:
+ """
+ Determine if an event is a duplicate based on its time and ID.
+
+ This function checks if an event is considered a duplicate by comparing its
+ timestamp with the latest event time and checking if its ID is in the set of
+ suspected duplicates.
+
+ Args:
+ event_id (str): The unique identifier of the event.
+ event_time (datetime): The timestamp of the event.
+ latest_event_time (datetime): The timestamp of the last event from the last fetch.
+ events_suspected_duplicates (set): A set of event IDs suspected to be duplicates.
+
+ Returns:
+ bool: whether the event's time is earlier than the latest, OR
+ (its time is identical to the latest AND
+ its id is in the list of suspected duplicates)
+ """
+ if event_time < latest_event_time:
+ return True
+ elif event_time == latest_event_time and event_id in events_suspected_duplicates:
+ return True
+ return False
+
+
+def filter_duplicate_events(
+ events: list[dict[str, str]], integration_context: dict
+) -> list[dict[str, str]]:
+ """
+ Filter out duplicate events from the given list of events.
+
+ Args:
+ events (list[dict[str, str]]): A list of event dicts, each containing 'uuid' and 'log_time' keys.
+
+ Returns:
+        list[dict[str, str]]: A list of event dicts with duplicates removed.
+ """
+ events_suspected_duplicates = set(
+ integration_context.get("events_suspected_duplicates", [])
+ )
+ latest_event_time = integration_context.get(
+ "latest_event_time"
+ ) or datetime.min.strftime(DATE_FORMAT)
+
+ latest_event_time = datetime.strptime(
+ normalize_date_format(latest_event_time), DATE_FORMAT
+ )
+
+ filtered_events: list[dict[str, str]] = []
+
+ for event in events:
+ if not is_duplicate(
+ event["uuid"],
+ datetime.strptime(normalize_date_format(event["log_time"]), DATE_FORMAT),
+ latest_event_time,
+ events_suspected_duplicates,
+ ):
+ event["_time"] = event["time"]
+ filtered_events.append(event)
+
+ return filtered_events
+
+
+def get_events_command(client: Client, integration_context: dict) -> None:
+ next_fetch: dict[str, str] = integration_context.get("next_fetch", {})
+
+ try:
+ json_res = client.get_events(payload=next_fetch)
+ except DemistoException as e:
+ if e.res is not None:
+ if e.res.status_code == 401:
+ demisto.info(
+ "Unauthorized access token, trying to obtain a new access token"
+ )
+ raise UnauthorizedToken
+ if e.res.status_code == 410:
+ raise NextPointingNotAvailable
+ raise
+
+ events: list[dict] = list(
+ itertools.chain.from_iterable(chunk["events"] for chunk in json_res)
+ )
+ next_hash = json_res[0].get("next", "") if json_res else ""
+
+ if not events:
+ demisto.info("No events received")
+ return
+
+ demisto.debug(f"Starting event filtering. Initial number of events: {len(events)}")
+ filtered_events = filter_duplicate_events(events, integration_context)
+ demisto.debug(
+ f"Filtering completed. Total number of events: {len(filtered_events)}"
+ )
+
+ filtered_events.extend(integration_context.get("last_fetch_events", []))
+ demisto.debug(
+ f"Total number of events after merging with last fetch events: {len(filtered_events)}"
+ )
+
+ try:
+ push_events(filtered_events)
+ except Exception as e:
+ # If the push of events to XSIAM fails,
+ # the current fetch's events are stored in `integration_context`,
+ # ensuring they are pushed in the next fetch operation.
+ calculate_next_fetch(
+ filtered_events=filtered_events,
+ next_hash=next_hash,
+ include_last_fetch_events=True,
+ last_integration_context=integration_context,
+ )
+ raise DemistoException(
+ "Failed to push events to XSIAM, The integration_context updated"
+ ) from e
+
+ calculate_next_fetch(
+ filtered_events=filtered_events,
+ next_hash=next_hash,
+ include_last_fetch_events=False,
+ last_integration_context=integration_context,
+ )
+
+
+def perform_long_running_loop(client: Client):
+ """
+ Manages the fetch process.
+ Due to a limitation on Symantec's side,
+ the integration is configured as long-running
+ since API calls can take over 5 minutes.
+
+ Fetch process:
+ - In every iteration except the first,
+ fetch is performed with the `next_fetch` argument,
+ which acts as a pointer for Symantec.
+ - When an error is encountered from Symantec,
+ it is handled based on the error type, and before the next iteration,
+ the process enters a brief sleep period defined by `FETCH_INTERVAL`
+ to avoid overloading with API calls.
+ """
+ while True:
+ # Used to calculate the duration of the fetch run.
+ start_timestamp = time.time()
+ try:
+ integration_context = get_integration_context()
+ demisto.info(f"Starting new fetch with {integration_context=}")
+ get_events_command(client, integration_context=integration_context)
+
+ except UnauthorizedToken:
+ try:
+ client._update_access_token_in_headers()
+ except Exception as e:
+ raise DemistoException("Failed obtaining a new access token") from e
+        except NextPointingNotAvailable:
+            demisto.debug(
+                "The 'next' hash points to an older event that is no longer available for streaming. "
+                "Clearing next_fetch; the integration's dedup mechanism ensures we don't insert duplicate events. "
+                "We will eventually get a different pointer and fetching will recover from this edge case."
+            )
+            integration_context.pop("next_fetch", None)
+ set_integration_context(integration_context)
+ except Exception as e:
+ raise DemistoException("Failed to fetch logs from API") from e
+
+ # Used to calculate the duration of the fetch run.
+ end_timestamp = time.time()
+
+ sleep_if_necessary(end_timestamp - start_timestamp)
+
+
+def test_module() -> str:
+ """
+    The test is performed by obtaining the `access_token` during `Client`'s initialization,
+    avoiding the use of get_events in `test_module` due to the one-minute timeout
+    the server sets for the `test_module` command.
+ """
+ return "ok"
+
+
+def main() -> None: # pragma: no cover
+ params = demisto.params()
+
+ host = params["host"]
+ token = params["token"]["password"]
+ stream_id = params["stream_id"]
+ channel_id = params["channel_id"]
+ verify = not argToBoolean(params.get("insecure", False))
+ proxy = argToBoolean(params.get("proxy", False))
+
+ command = demisto.command()
+ try:
+ client = Client(
+ base_url=host,
+ token=token,
+ stream_id=stream_id,
+ channel_id=channel_id,
+ verify=verify,
+ proxy=proxy,
+ )
+
+ if command == "test-module":
+ return_results(test_module())
+ if command == "long-running-execution":
+ demisto.info("Starting long running execution")
+ perform_long_running_loop(client)
+ else:
+ raise NotImplementedError(f"Command {command} is not implemented.")
+
+ except Exception as e:
+ return_error(
+ f"Failed to execute {command} command. Error in Symantec Endpoint Security Integration [{e}]."
+ )
+
+
+""" ENTRY POINT """
+
+if __name__ in ("__main__", "__builtin__", "builtins"):
+ main()
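
To make the deduplication bookkeeping above concrete, here is a small standalone sketch (the event payloads are hypothetical) of how `latest_event_time` plus the suspected-duplicate UUIDs keep overlapping fetches from re-ingesting events:

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%SZ"

# State carried between fetches (mirrors the integration_context keys above).
state = {
    "latest_event_time": "2024-10-09T12:34:56Z",
    "events_suspected_duplicates": ["123", "456"],
}

# A later fetch resends an event from the boundary second plus a newer one.
batch = [
    {"uuid": "456", "log_time": "2024-10-09T12:34:56Z"},  # same second, known uuid -> duplicate
    {"uuid": "789", "log_time": "2024-10-09T12:34:57Z"},  # newer -> kept
]

latest = datetime.strptime(state["latest_event_time"], FMT)


def is_fresh(event: dict) -> bool:
    # Keep events strictly newer than the boundary, or boundary-second events
    # whose uuid was not already seen in the previous fetch.
    t = datetime.strptime(event["log_time"], FMT)
    return t > latest or (t == latest and event["uuid"] not in state["events_suspected_duplicates"])


print([e["uuid"] for e in batch if is_fresh(e)])  # ['789']
```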
diff --git a/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/SymantecEndpointSecurity.yml b/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/SymantecEndpointSecurity.yml
new file mode 100644
index 000000000000..0ecf35544252
--- /dev/null
+++ b/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/SymantecEndpointSecurity.yml
@@ -0,0 +1,60 @@
+commonfields:
+ id: Symantec Endpoint Security
+ version: -1
+name: Symantec Endpoint Security
+display: Symantec Endpoint Security
+category: Analytics & SIEM
+description: "Symantec Endpoint Security Event Collector for Cortex XSIAM."
+configuration:
+- display: Server URL
+ name: host
+ type: 0
+ defaultvalue: https://api.sep.securitycloud.symantec.com
+ required: true
+ section: Connect
+- displaypassword: OAuth credential
+ name: token
+ hiddenusername: true
+ type: 9
+ required: true
+ section: Connect
+- display: Stream ID
+ name: stream_id
+ type: 0
+ required: true
+ additionalinfo: ""
+ section: Connect
+- display: Channel ID
+ name: channel_id
+ type: 0
+ required: true
+ additionalinfo: ""
+ section: Connect
+- display: Use system proxy settings
+ name: proxy
+ required: false
+ type: 8
+ section: Connect
+- display: Trust any certificate (not secure)
+ name: insecure
+ required: false
+ type: 8
+ section: Connect
+- defaultvalue: 'true'
+ display: Long Running Instance
+ hidden: true
+ name: longRunning
+ type: 8
+ section: Connect
+script:
+ script: ""
+ type: python
+ commands: []
+ dockerimage: demisto/python3:3.11.10.113941
+ longRunning: true
+ subtype: python3
+marketplaces:
+- marketplacev2
+fromversion: 6.8.0
+tests:
+- No tests
diff --git a/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/SymantecEndpointSecurity_description.md b/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/SymantecEndpointSecurity_description.md
new file mode 100644
index 000000000000..83e0e540897b
--- /dev/null
+++ b/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/SymantecEndpointSecurity_description.md
@@ -0,0 +1,18 @@
+To generate a value for the *OAuth credential* parameter:
+
+1. Log in to the Symantec Endpoint Security console.
+2. Click **Integration** > **Client Applications**.
+3. Choose `Add Client Application`.
+4. Choose a name for the application, then click `Add`. The client application details screen will appear.
+5. Click the ellipsis (`⋮`) and select **Client Secret**.
+6. Click the `copy` icon next to `OAuth Credentials`.
+
+For more information on obtaining *OAuth Credentials*, refer to [this documentation](https://apidocs.securitycloud.symantec.com/#/doc?id=ses_auth) or watch [this video](https://youtu.be/d7LRygRfDLc?si=NNlERXtfzv4LjpsB).
+
+
+**Note:**
+
+- There is no need to generate a bearer token; the integration uses the provided `OAuth Credentials` to generate one.
+- The `test_module` test checks only the validity of the `OAuth credential` parameter and does not validate the `Channel ID` and `Stream ID` parameters.
+- Fetching events that occurred at a specific time may be delayed due to delays in event ingestion on Symantec's side.
\ No newline at end of file
diff --git a/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/SymantecEndpointSecurity_image.png b/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/SymantecEndpointSecurity_image.png
new file mode 100644
index 000000000000..f1cf202e307f
Binary files /dev/null and b/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/SymantecEndpointSecurity_image.png differ
diff --git a/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/SymantecEndpointSecurity_test.py b/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/SymantecEndpointSecurity_test.py
new file mode 100644
index 000000000000..9244a5f6ef63
--- /dev/null
+++ b/Packs/SymantecEndpointSecurity/Integrations/SymantecEndpointSecurity/SymantecEndpointSecurity_test.py
@@ -0,0 +1,336 @@
+import pytest
+from CommonServerPython import * # noqa # pylint: disable=unused-wildcard-import
+from pytest_mock import MockerFixture
+from SymantecEndpointSecurity import (
+ normalize_date_format,
+ extract_events_suspected_duplicates,
+ calculate_next_fetch,
+ filter_duplicate_events,
+ perform_long_running_loop,
+ UnauthorizedToken,
+ NextPointingNotAvailable,
+ Client,
+ test_module as _test_module,
+ get_events_command,
+ sleep_if_necessary,
+)
+
+
+def mock_client() -> Client:
+ return Client(
+ base_url="test",
+ token="test_token",
+ stream_id="test_stream_id",
+ channel_id="test_channel_id",
+ verify=True,
+ proxy=False,
+ )
+
+
+@pytest.mark.parametrize(
+ "date_str, expected_result",
+ [
+ ("2024-10-09T12:34:56.789Z", "2024-10-09T12:34:56Z"),
+ ("2024-10-09T12:34:56.789324959595959959595Z", "2024-10-09T12:34:56Z"),
+ ],
+)
+def test_normalize_date_format(date_str: str, expected_result: str):
+ """
+ Given:
+ - A date string with microseconds
+ When:
+ - The `normalize_date_format` function is called
+ Then:
+        - Ensure that a date string without microseconds is returned
+ """
+ assert normalize_date_format(date_str) == expected_result
+
+
+@pytest.mark.parametrize(
+ "events, expected_results",
+ [
+ (
+ [
+ {"uuid": "123", "log_time": "2024-10-09T12:34:56Z"},
+ {"uuid": "456", "log_time": "2024-10-09T12:34:56.789Z"},
+ {"uuid": "789", "log_time": "2024-10-09T12:34:55.789Z"},
+ ],
+ ["123", "456"],
+ )
+ ],
+)
+def test_extract_events_suspected_duplicates(
+ events: list[dict], expected_results: list[str]
+):
+ """
+    Given:
+ - A list of events with timestamps
+ When:
+ - The `extract_events_suspected_duplicates` function is called
+ Then:
+    - Ensure that a list of UUIDs for events suspected to be duplicates is returned
+ """
+ assert extract_events_suspected_duplicates(events) == expected_results
+
+
+@pytest.mark.parametrize(
+ "integration_context, events, expected_filtered_events",
+ [
+ pytest.param(
+ {
+ "events_suspected_duplicates": ["123", "456"],
+ "latest_event_time": "2024-10-09T12:34:56Z",
+ },
+ [
+ {
+ "uuid": "123",
+ "log_time": "2024-10-09T12:34:56Z",
+ "time": "2024-10-09T12:34:56Z",
+ },
+ {
+ "uuid": "456",
+ "log_time": "2024-10-09T12:34:56.789Z",
+ "time": "2024-10-09T12:34:56.789Z",
+ },
+ {
+ "uuid": "789",
+ "log_time": "2024-10-09T12:34:55.789Z",
+ "time": "2024-10-09T12:34:55.789Z",
+ },
+ ],
+ [],
+ id="Event time is equal to or less than last_event_time",
+ ),
+ pytest.param(
+ {
+ "events_suspected_duplicates": ["123"],
+ "latest_event_time": "2024-10-09T12:34:56Z",
+ },
+ [
+ {
+ "uuid": "123",
+ "log_time": "2024-10-09T12:34:56Z",
+ "time": "2024-10-09T12:34:56Z",
+ },
+ {
+ "uuid": "456",
+ "log_time": "2024-10-09T12:34:56.789Z",
+ "time": "2024-10-09T12:34:56.789Z",
+ },
+ ],
+ [
+ {
+ "uuid": "456",
+ "log_time": "2024-10-09T12:34:56.789Z",
+ "time": "2024-10-09T12:34:56.789Z",
+ "_time": "2024-10-09T12:34:56.789Z",
+ }
+ ],
+ id="Events time is equal to last_event_time but one of them not include in suspected duplicates",
+ ),
+ pytest.param(
+ {
+ "events_suspected_duplicates": ["123"],
+ "latest_event_time": "2024-10-09T12:34:56Z",
+ },
+ [
+ {
+ "uuid": "456",
+ "log_time": "2024-10-09T12:35:56.789Z",
+ "time": "2024-10-09T12:35:56.789Z",
+ },
+ ],
+ [
+ {
+ "uuid": "456",
+ "log_time": "2024-10-09T12:35:56.789Z",
+ "time": "2024-10-09T12:35:56.789Z",
+ "_time": "2024-10-09T12:35:56.789Z",
+ }
+ ],
+ id="Events time is greater than last_event_time",
+ ),
+ ],
+)
+def test_filter_duplicate_events(
+ integration_context: dict[str, str],
+ events: list[dict[str, str]],
+ expected_filtered_events: list[dict[str, str]],
+):
+ """
+ Given:
+ - A list of events with timestamps
+ When:
+ - The `filter_duplicate_events` function is called
+ Then:
+ - Ensure that a list of the events that are not duplicates is returned
+ """
+ filtered_events = filter_duplicate_events(events, integration_context)
+ assert filtered_events == expected_filtered_events
+
+
+@pytest.mark.parametrize(
+ "filtered_events, next_hash, include_last_fetch_events, last_integration_context, expected_integration_context",
+ [
+ (
+ [
+ {"uuid": "12", "log_time": "2024-10-09T12:34:56Z"},
+ {"uuid": "34", "log_time": "2024-10-09T12:34:56Z"},
+ {"uuid": "56", "log_time": "2024-10-09T12:34:56Z"},
+ ],
+ "hash_test_1",
+ False,
+ {
+ "latest_event_time": "2024-10-09T12:34:56Z",
+ "events_suspected_duplicates": ["78", "90"],
+ "next_fetch": {"next": "hash_test"},
+ "last_fetch_events": [],
+ },
+ {
+ "latest_event_time": "2024-10-09T12:34:56Z",
+ "events_suspected_duplicates": ["12", "34", "56", "78", "90"],
+ "next_fetch": {"next": "hash_test_1"},
+ "last_fetch_events": [],
+ },
+ )
+ ],
+)
+def test_calculate_next_fetch_last_latest_event_time_are_equal(
+ mocker: MockerFixture,
+ filtered_events: list[dict[str, str]],
+ next_hash: str,
+ include_last_fetch_events: bool,
+ last_integration_context: dict[str, str],
+ expected_integration_context: dict,
+):
+ """
+ Given:
+ - A set of filtered events, next hash, and last integration context
+ When:
+ - The `calculate_next_fetch` function is called
+ Then:
+    - Ensure that the 'integration_context' is updated with the new events in addition to the old ones, and with the next hash
+ """
+ mock_set_integration_context = mocker.patch(
+ "SymantecEndpointSecurity.set_integration_context"
+ )
+ calculate_next_fetch(
+ filtered_events, next_hash, include_last_fetch_events, last_integration_context
+ )
+
+ assert mock_set_integration_context.call_args[0][0] == expected_integration_context
+
+
+def test_perform_long_running_loop_unauthorized_token(mocker: MockerFixture):
+ """
+    Given:
+    - A client whose fetch raises `UnauthorizedToken`
+    When:
+    - The `perform_long_running_loop` function is called
+    Then:
+    - Ensure a new access token is obtained and the loop continues to the next fetch
+ """
+ mocker.patch(
+ "SymantecEndpointSecurity.get_events_command",
+ side_effect=[UnauthorizedToken, Exception("Stop")],
+ )
+ mock_get_token = mocker.patch.object(Client, "_update_access_token_in_headers")
+ mocker.patch("SymantecEndpointSecurity.sleep_if_necessary")
+ with pytest.raises(DemistoException, match="Failed to fetch logs from API"):
+ perform_long_running_loop(mock_client())
+ assert mock_get_token.call_count == 2
+
+
+def test_perform_long_running_loop_next_pointing_not_available(mocker: MockerFixture):
+ """
+ Given:
+ - No args for the function call
+ When:
+ - The function `perform_long_running_loop` is called
+ Then:
+    - Ensure that `next_fetch` is removed from the integration context
+ """
+ mock_integration_context = {"next_fetch": {"next": "test"}}
+ mocker.patch(
+ "SymantecEndpointSecurity.get_events_command",
+ side_effect=[NextPointingNotAvailable, Exception("Stop")],
+ )
+ mocker.patch.object(Client, "_update_access_token_in_headers")
+ mocker.patch(
+ "SymantecEndpointSecurity.get_integration_context",
+ return_value=mock_integration_context,
+ )
+ mocker.patch("SymantecEndpointSecurity.sleep_if_necessary")
+ with pytest.raises(DemistoException, match="Failed to fetch logs from API"):
+ perform_long_running_loop(mock_client())
+ assert mock_integration_context == {}
+
+
+def test_test_module(mocker: MockerFixture):
+ """
+ Given:
+ - Client
+ When:
+ - The function `test_module` is called
+ Then:
+ - Ensure there is no API call in the test_module function
+ (see the docstring in the `test_module` function).
+ """
+ mock__http_request = mocker.patch.object(Client, "_http_request")
+ assert _test_module() == "ok"
+ mock__http_request.assert_not_called()
+
+
+@pytest.mark.parametrize(
+ "mock_status_code, exception_type",
+ [
+ (500, DemistoException),
+ (401, UnauthorizedToken),
+ (410, NextPointingNotAvailable),
+ ],
+)
+def test_get_events_command_with_raises(
+ mocker: MockerFixture,
+ mock_status_code: int,
+ exception_type: type[Exception],
+):
+ """
+ Given:
+ - Client and mock_integration_context
+ When:
+ - The function `get_events_command` is called
+ Then:
+    - Ensure that the function raises the correct exception based on the status code returned from the API call
+ """
+
+ class MockException:
+ status_code = mock_status_code
+
+ mocker.patch.object(Client, "_update_access_token_in_headers")
+ mocker.patch.object(
+ Client, "get_events", side_effect=DemistoException("Test", res=MockException())
+ )
+
+ with pytest.raises(exception_type):
+ get_events_command(mock_client(), {"next_fetch": {"next": "test"}})
+
+
+@pytest.mark.parametrize(
+ "start_run, end_run, call_count",
+ [
+ pytest.param(10, 20, 1, id="The sleep function should be called once"),
+ pytest.param(10, 70, 0, id="The sleep function should not be called"),
+ ]
+)
+def test_sleep_if_necessary(mocker: MockerFixture, start_run: int, end_run: int, call_count: int):
+ """
+ Given:
+ - Mocked time passed duration
+ When:
+ - The function is called
+ Then:
+ - Ensure that the sleep function is called with the appropriate interval value or not called at all if unnecessary.
+ """
+ mock_sleep = mocker.patch("SymantecEndpointSecurity.time.sleep")
+ sleep_if_necessary(end_run - start_run)
+ assert mock_sleep.call_count == call_count
diff --git a/Packs/SymantecEndpointSecurity/README.md b/Packs/SymantecEndpointSecurity/README.md
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/Packs/SymantecEndpointSecurity/pack_metadata.json b/Packs/SymantecEndpointSecurity/pack_metadata.json
new file mode 100644
index 000000000000..38246d153d43
--- /dev/null
+++ b/Packs/SymantecEndpointSecurity/pack_metadata.json
@@ -0,0 +1,20 @@
+{
+ "name": "Symantec Endpoint Security",
+ "description": "Use Cloud Platform Connections, a cloud-based security feature in Symantec Endpoint Security to discover and protect instances of public cloud platforms, and their workloads.",
+ "support": "xsoar",
+ "currentVersion": "1.0.0",
+ "author": "Cortex XSOAR",
+ "url": "https://www.paloaltonetworks.com/cortex",
+ "email": "",
+ "categories": [
+ "Analytics & SIEM"
+ ],
+ "tags": [],
+ "useCases": [],
+ "keywords": [
+ "ses"
+ ],
+ "marketplaces": [
+ "marketplacev2"
+ ]
+}
\ No newline at end of file
diff --git a/Packs/SymantecManagementCenter/Integrations/SymantecManagementCenter/SymantecManagementCenter.yml b/Packs/SymantecManagementCenter/Integrations/SymantecManagementCenter/SymantecManagementCenter.yml
index 4d3f0c789ab0..960ee485bea4 100644
--- a/Packs/SymantecManagementCenter/Integrations/SymantecManagementCenter/SymantecManagementCenter.yml
+++ b/Packs/SymantecManagementCenter/Integrations/SymantecManagementCenter/SymantecManagementCenter.yml
@@ -521,7 +521,7 @@ script:
name: description
description: Updates content in a policy in Symantec MC.
name: symantec-mc-update-policy-content
- dockerimage: demisto/python3:3.10.13.75921
+ dockerimage: demisto/python3:3.11.10.116439
runonce: false
script: ''
type: python
diff --git a/Packs/SymantecManagementCenter/ReleaseNotes/1_0_12.md b/Packs/SymantecManagementCenter/ReleaseNotes/1_0_12.md
new file mode 100644
index 000000000000..1ae4f3845c74
--- /dev/null
+++ b/Packs/SymantecManagementCenter/ReleaseNotes/1_0_12.md
@@ -0,0 +1,9 @@
+
+#### Integrations
+
+##### Symantec Management Center
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
diff --git a/Packs/SymantecManagementCenter/pack_metadata.json b/Packs/SymantecManagementCenter/pack_metadata.json
index 92a6f9c72099..e95b361d356f 100644
--- a/Packs/SymantecManagementCenter/pack_metadata.json
+++ b/Packs/SymantecManagementCenter/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Symantec Management Center",
"description": "Symantec Management Center provides a unified management environment for the Symantec Security Platform portfolio of products.",
"support": "xsoar",
- "currentVersion": "1.0.11",
+ "currentVersion": "1.0.12",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
@@ -17,4 +17,4 @@
"xsoar",
"marketplacev2"
]
-}
+}
\ No newline at end of file
diff --git a/Packs/Synapse/Integrations/Synapse/Synapse.yml b/Packs/Synapse/Integrations/Synapse/Synapse.yml
index f1b5642f2f0b..37ffb5afe19a 100644
--- a/Packs/Synapse/Integrations/Synapse/Synapse.yml
+++ b/Packs/Synapse/Integrations/Synapse/Synapse.yml
@@ -421,7 +421,7 @@ script:
- contextPath: Synapse.Model.Valu
description: The given value of the Synapse object type.
type: String
- dockerimage: demisto/py3-tools:1.0.0.102774
+ dockerimage: demisto/py3-tools:1.0.0.114656
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/Synapse/ReleaseNotes/1_0_8.md b/Packs/Synapse/ReleaseNotes/1_0_8.md
new file mode 100644
index 000000000000..c779b87fa3df
--- /dev/null
+++ b/Packs/Synapse/ReleaseNotes/1_0_8.md
@@ -0,0 +1,6 @@
+
+#### Integrations
+
+##### Synapse
+
+- Updated the Docker image to: *demisto/py3-tools:1.0.0.114656*.
diff --git a/Packs/Synapse/pack_metadata.json b/Packs/Synapse/pack_metadata.json
index c707c3fc80b1..128ccaba43f1 100644
--- a/Packs/Synapse/pack_metadata.json
+++ b/Packs/Synapse/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Synapse",
"description": "Vertex Synapse intelligence analysis framework.",
"support": "community",
- "currentVersion": "1.0.7",
+ "currentVersion": "1.0.8",
"author": "Jordan Berry",
"url": "",
"email": "",
diff --git a/Packs/TAXIIServer/Integrations/TAXII2Server/README.md b/Packs/TAXIIServer/Integrations/TAXII2Server/README.md
index 0ede954b388b..ef11ac284b41 100644
--- a/Packs/TAXIIServer/Integrations/TAXII2Server/README.md
+++ b/Packs/TAXIIServer/Integrations/TAXII2Server/README.md
@@ -23,37 +23,44 @@ You can add a collection description as is done in *collection1_name*, or enter
## How to Access the TAXII2 Server
-(For Cortex XSOAR 6.x) Use one of the following options:
+### For Cortex XSOAR 6.x
+Use one of the following options:
- `https://<server_url>/instance/execute/<instance_name>/<taxii2_api_endpoint>/`
- `http://<server_url>:<listen_port>/<taxii2_api_endpoint>/`
-(For Cortex XSOAR 8 or Cortex XSIAM):
+### For Cortex XSOAR 8 On-prem, Cortex XSOAR 8 Cloud, or Cortex XSIAM
-- `https://ext-<server_url>/xsoar/instance/execute/<instance_name>/<taxii2_api_endpoint>/`
-NOTE: The instance name cannot be changed after saving the integration configuration.
+Use `https://ext-<server_url>/xsoar/instance/execute/<instance_name>/<taxii2_api_endpoint>/`
+
+**Note**:
+- For Cortex XSOAR 8 On-prem, you need to add the `ext-<server_url>` FQDN DNS record to map the Cortex XSOAR DNS name to the external IP address.
+ For example, `ext-xsoar.mycompany.com`.
+- The instance name cannot be changed after saving the integration configuration.
## Access the TAXII Service by Instance Name
-To access the TAXII service by instance name, make sure *Instance execute external* is enabled.
+To access the TAXII service by instance name, make sure ***Instance execute external*** is enabled.
In Cortex XSOAR 6.x:
1. Navigate to **Settings > About > Troubleshooting**.
-2. In the **Server Configuration** section, verify that the *instance.execute.external* key is set to *true*. If this key does not exist, click **+ Add Server Configuration**, add the *instance.execute.external* and set the value to *true*.
+2. In the **Server Configuration** section, verify that the ***instance.execute.external*** key is set to **true**. If this key does not exist, click **+ Add Server Configuration**, add the ***instance.execute.external*** and set the value to **true**.
-### How to use HTTPS
+## How to Use HTTPS
To use HTTPS, a certificate and private key have to be supplied in the integration configuration.
-### How to use authentication
-
-The integration allows the use of basic authentication in the requests.
-To enable basic authentication, a user and password must be supplied in the *Credentials* parameters in the integration configuration.
+## Set up Authentication
+### For Cortex XSOAR 8 Cloud Tenant or Cortex XSIAM Tenant
+The TAXII2 Server integration running on a Cortex XSOAR 8 Cloud tenant or Cortex XSIAM tenant enables using basic authentication in the requests.
+To enable basic authentication, a user and password must be supplied in the **Credentials** parameters in the integration configuration.
+The server will then authenticate the requests by the `Authorization` header, expecting basic authentication encoded in base64 to match the given credentials.
+### For Cortex XSOAR On-prem (6.x or 8) or When Using Engines
+For Cortex XSOAR On-prem (6.x or 8) or when using engines, you can set up authentication using custom certificates. For more information on setting up a custom certificate for Cortex XSOAR 8 On-prem, see [HTTPS with a signed certificate](https://docs-cortex.paloaltonetworks.com/r/Cortex-XSOAR/8.7/Cortex-XSOAR-On-prem-Documentation/HTTPS-with-a-signed-certificate). For more information on setting up a custom certificate for Cortex XSOAR 6.x, see [HTTPS with a Signed Certificate](https://docs-cortex.paloaltonetworks.com/r/Cortex-XSOAR/6.13/Cortex-XSOAR-Administrator-Guide/HTTPS-with-a-Signed-Certificate).
-The server will then authenticate the requests by the *Authorization* header, expecting basic authentication encrypted in base64 to match the given credentials.
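
As an illustration, a TAXII client request against such an instance might look like the following. This is a minimal sketch assuming the `requests` library and TAXII v2.1; the tenant URL, instance name, and credentials are hypothetical:

```python
import requests

# Hypothetical values - replace with your tenant URL, instance name, and credentials.
DISCOVERY_URL = "https://ext-mytenant.crtx.us.paloaltonetworks.com/xsoar/instance/execute/taxii2server/taxii2/"

resp = requests.get(
    DISCOVERY_URL,
    # requests encodes this pair as a base64 basic-auth Authorization header.
    auth=("taxii_user", "taxii_password"),
    headers={"Accept": "application/taxii+json;version=2.1"},
)
resp.raise_for_status()
print(resp.json())  # discovery document listing the available API roots
```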
+## TAXII v2.0 API Endpoints
-## TAXII v2.0 API Enpoints
| **URL** | **Method** | **Response** | **TAXII2 Documentation** |
| --- | --- | --- | --- |
@@ -66,7 +73,7 @@ The server will then authenticate the requests by the *Authorization* header, ex
For more information, visit [TAXII2 Documentation](http://docs.oasis-open.org/cti/taxii/v2.0/taxii-v2.0.html).
-## TAXII v2.1 API Enpoints
+## TAXII v2.1 API Endpoints
| **URL** | **Method** | **Response** | **TAXII2 Documentation** |
|---------------------------------------------------|------------|--------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------|
@@ -79,14 +86,14 @@ For more information, visit [TAXII2 Documentation](http://docs.oasis-open.org/ct
For more information, visit [TAXII2 Documentation](https://docs.oasis-open.org/cti/taxii/v2.1/taxii-v2.1.html).
-## Known limitations
+## Known Limitations
- GET objects by ID is not allowed.
- Filtering objects by ID or version not allowed.
- POST and DELETE objects are not allowed. Cannot add or delete indicators using TAXII2 Server.
-## How UUIDs work in TAXII2 XSOAR
+## How UUIDs Work for TAXII2 in Cortex XSOAR
---
@@ -157,11 +164,9 @@ TIM fields (system generated and custom). An example of these two related object
| 100,000 | 50-90 |
-## Configuration Guide
+## Microsoft Sentinel Configuration Guide
-### Microsoft Sentinel
-
-#### Configure the TAXII2 Server instance
+### Configure the TAXII2 Server Instance
1. Set **TAXII2 Server version** to **2.0** (The integration currently doesn't work with Microsoft Sentinel in TAXII Version 2.1).
@@ -170,14 +175,14 @@ TIM fields (system generated and custom). An example of these two related object
3. Set the **Listen Port** and **Collection JSON** to your liking.
-#### Find the information required for the Sentinel TAXII connector
+### Find the Information Required for the Sentinel TAXII Connector
-**For Cortex XSOAR 6.x:**
+#### For Cortex XSOAR 6.x
1. All your server info can be found by running `!taxii-server-info`, the default API root for your server will usually be - `https://<server_url>/instance/execute/<instance_name>/threatintel/`
2. You can use the `!taxii-server-list-collections` command in order to get a list of your server's collections and their IDs. You can also do it manually by running `curl https://<server_url>/instance/execute/<instance_name>/threatintel/collections/ | jq .` to get a list of the collections available on your TAXII server. From the list, copy the correct ID of the collection you want to ingest.
-**For Cortex XSOAR 8 or Cortex XSIAM**
+#### For Cortex XSOAR 8 On-prem, Cortex XSOAR 8 Cloud, or Cortex XSIAM
1. All your server info can be found by running `!taxii-server-info`, the default API root for your server will usually be - `https://ext-<tenant>.crtx.<region>.paloaltonetworks.com/xsoar/instance/execute/<instance_name>/threatintel/`
2. You can use the `!taxii-server-list-collections` command in order to get a list of your server's collections and their IDs. You can also do it manually by running `curl https://ext-<tenant>.crtx.<region>.paloaltonetworks.com/xsoar/instance/execute/<instance_name>/threatintel/collections/ | jq .` to get a list of the collections available on your TAXII server. From the list, copy the correct ID of the collection you want to ingest.
@@ -214,7 +219,7 @@ TIM fields (system generated and custom). An example of these two related object
}
```
-#### Set up the Microsoft Sentinel TAXII connector
+### Set up the Microsoft Sentinel TAXII Connector
Now that we have the API root URL and the collection ID we can configure the Threat intelligence - TAXII Connector in Microsoft Sentinel.
@@ -308,7 +313,7 @@ There are no input arguments for this command.
| TAXIIServer.ServerInfo.default | String | The default URL. |
| TAXIIServer.ServerInfo.description | String | The server description |
-#### Command example
+#### Command Example
```!taxii-server-info```
diff --git a/Packs/TAXIIServer/Integrations/TAXII2Server/TAXII2Server.py b/Packs/TAXIIServer/Integrations/TAXII2Server/TAXII2Server.py
index 8f10b03ac4c0..f82fb2299f48 100644
--- a/Packs/TAXIIServer/Integrations/TAXII2Server/TAXII2Server.py
+++ b/Packs/TAXIIServer/Integrations/TAXII2Server/TAXII2Server.py
@@ -964,7 +964,10 @@ def main(): # pragma: no cover
types_for_indicator_sdo = argToList(params.get('provide_as_indicator'))
try:
- port = int(params.get('longRunningPort'))
+ if not params.get('longRunningPort'):
+ params['longRunningPort'] = '1111'
+            # Default used by the automatic port-generation feature before a port is allocated.
+ port = int(params.get('longRunningPort', ''))
except ValueError as e:
raise ValueError(f'Invalid listen port - {e}')
diff --git a/Packs/TAXIIServer/Integrations/TAXII2Server/TAXII2Server.yml b/Packs/TAXIIServer/Integrations/TAXII2Server/TAXII2Server.yml
index 80e78614eb1a..bb7aa4582489 100644
--- a/Packs/TAXIIServer/Integrations/TAXII2Server/TAXII2Server.yml
+++ b/Packs/TAXIIServer/Integrations/TAXII2Server/TAXII2Server.yml
@@ -150,7 +150,7 @@ script:
- contextPath: TAXIIServer.ServerInfo.description
description: The server description.
type: String
- dockerimage: demisto/flask-nginx:1.0.0.110620
+ dockerimage: demisto/flask-nginx:1.0.0.115634
longRunning: true
longRunningPort: true
script: '-'
diff --git a/Packs/TAXIIServer/Integrations/TAXIIServer/README.md b/Packs/TAXIIServer/Integrations/TAXIIServer/README.md
index ed9fa418f17f..7ea44a10f845 100644
--- a/Packs/TAXIIServer/Integrations/TAXIIServer/README.md
+++ b/Packs/TAXIIServer/Integrations/TAXIIServer/README.md
@@ -16,15 +16,20 @@ The collections are defined by a JSON object in the following format:
## How to Access the TAXII Service
-(For Cortex XSOAR 6.x)
+### For Cortex XSOAR 6.x
+Use one of the following options to access the TAXII service:
+- `https://<server_url>/instance/execute/<instance_name>/taxii-discovery-service`
+- `http://<server_url>:<listen_port>/taxii-discovery-service`
-- `https://*demisto_address*/instance/execute/*instance_name*/taxii-discovery-service`
-- `http://*demisto_address*:*listen_port*/taxii-discovery-service`
+### For Cortex XSOAR 8 On-prem, Cortex XSOAR 8 Cloud, or Cortex XSIAM
+Use one of the following options to access the TAXII service:
+- `https://ext-<tenant>.crtx.<region>.paloaltonetworks.com/xsoar/instance/execute/<instance_name>/<taxii2_api_endpoint>/`
+- When using an engine: `http://<server_url>:<listen_port>/<taxii2_api_endpoint>/`
+
+**Note:**
+For Cortex XSOAR 8 On-prem, you need to add the `ext-<server_url>` FQDN DNS record to map the Cortex XSOAR DNS name to the external IP address.
+For example, `ext-xsoar.mycompany.com`.
-(For Cortex XSOAR 8 or Cortex XSIAM):
-
-- `https://ext-<tenant>.crtx.<region>.paloaltonetworks.com/xsoar/instance/execute/<instance_name>/{taxii2_api_endpoint}/`
- For running on an engine: `http://demisto_address:listen_port/{taxii2_api_endpoint}/`
## Access the TAXII Service by Instance Name
@@ -35,22 +40,27 @@ To access the TAXII service by instance name, make sure ***Instance execute exte
1. Navigate to **Settings > About > Troubleshooting**.
2. In the **Server Configuration** section, verify that the ***instance.execute.external*** key is set to *true*. If this key does not exist, click **+ Add Server Configuration** and add the *instance.execute.external* and set the value to *true*.
2. Trigger the TAXII Service URL:
- - For Cortex XSOAR 6.x: `<server_url>/instance/execute/<instance_name>`. Note that the string instance does not refer to the name of your XSOAR instance, but rather is part of the URL.
- - (For Cortex XSOAR 8 or Cortex XSIAM) `https://ext-<tenant>.crtx.<region>.paloaltonetworks.com/xsoar/instance/execute/<instance_name>`
-
-## How to use HTTPS
-
-To use HTTPS, a certificate and private key have to be provided in the integration configuration.
-
-The `HTTP Server` check box needs to be unchecked.
-
-## How to use authentication
-
-The integration allows the use of basic authentication in the requests.
-To enable basic authentication, a user and password have to be supplied in the Credentials parameters in the integration configuration.
-
-The server will then authenticate the requests by the `Authorization` header, expecting basic authentication encrypted in base64 to match the given credentials.
+ - For Cortex XSOAR 6.x:
+     `<server_url>/instance/execute/<instance_name>`.
+ For example, `https://my.demisto.live/instance/execute/taxiiserver`.
+ - For Cortex XSOAR 8 On-prem, Cortex XSOAR 8 Cloud, or Cortex XSIAM:
+     `https://ext-<tenant>.crtx.<region>.paloaltonetworks.com/xsoar/instance/execute/<instance_name>`
+ **Note**:
+ The string `instance` does not refer to the name of your Cortex XSOAR instance, but rather is part of the URL.
+
+## How to Use HTTPS
+
+To use HTTPS, a certificate and private key have to be provided in the integration configuration.
+The `HTTP Server` checkbox needs to be unchecked.
+
+## Set up Authentication
+### For Cortex XSOAR 8 Cloud Tenant or Cortex XSIAM Tenant
+The TAXII Service integration running on a Cortex XSOAR 8 Cloud tenant or Cortex XSIAM tenant enables using basic authentication in the requests.
+To enable basic authentication, a user and password have to be supplied in the **Credentials** parameters in the integration configuration.
+The server then authenticates the requests by the `Authorization` header, expecting basic authentication encoded in base64 to match the given credentials.
+### For Cortex XSOAR On-prem (6.x or 8) or When Using Engines
+For Cortex XSOAR On-prem (6.x or 8) or when using engines, you can set up authentication using custom certificates. For more information on setting up a custom certificate for Cortex XSOAR 8 On-prem, see [HTTPS with a signed certificate](https://docs-cortex.paloaltonetworks.com/r/Cortex-XSOAR/8.7/Cortex-XSOAR-On-prem-Documentation/HTTPS-with-a-signed-certificate). For more information on setting up a custom certificate for Cortex XSOAR 6.x, see [HTTPS with a Signed Certificate](https://docs-cortex.paloaltonetworks.com/r/Cortex-XSOAR/6.13/Cortex-XSOAR-Administrator-Guide/HTTPS-with-a-Signed-Certificate).
## Troubleshooting
-- If the URL address returned in the service response is wrong, you can set it in the **TAXII Service URL Address** integration parameter.
+If the URL address returned in the service response is wrong, you can set it in the **TAXII Service URL Address** integration setting.
diff --git a/Packs/TAXIIServer/ReleaseNotes/2_0_69.md b/Packs/TAXIIServer/ReleaseNotes/2_0_69.md
new file mode 100644
index 000000000000..e5352abf4f9a
--- /dev/null
+++ b/Packs/TAXIIServer/ReleaseNotes/2_0_69.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### TAXII2 Server
+
+- Updated the default value of the *Listen Port* parameter for use on XSIAM and XSOAR-SaaS.
+- Updated the Docker image to: *demisto/flask-nginx:1.0.0.115634*.
diff --git a/Packs/TAXIIServer/ReleaseNotes/2_0_70.md b/Packs/TAXIIServer/ReleaseNotes/2_0_70.md
new file mode 100644
index 000000000000..02e5f005e0cd
--- /dev/null
+++ b/Packs/TAXIIServer/ReleaseNotes/2_0_70.md
@@ -0,0 +1,6 @@
+
+#### Integrations
+
+##### TAXII2 Server
+
+- Updated the default value of the *Listen Port* parameter for use on XSIAM and XSOAR-SaaS.
diff --git a/Packs/TAXIIServer/pack_metadata.json b/Packs/TAXIIServer/pack_metadata.json
index 2a3c9d9ab26f..08d8b383a9cc 100644
--- a/Packs/TAXIIServer/pack_metadata.json
+++ b/Packs/TAXIIServer/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "TAXII Server",
"description": "This pack provides TAXII Services for system indicators (Outbound feed).",
"support": "xsoar",
- "currentVersion": "2.0.68",
+ "currentVersion": "2.0.70",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/TeamCymru/Integrations/TeamCymru/TeamCymru.yml b/Packs/TeamCymru/Integrations/TeamCymru/TeamCymru.yml
index d6d301ed8e6f..7a3148239a4b 100644
--- a/Packs/TeamCymru/Integrations/TeamCymru/TeamCymru.yml
+++ b/Packs/TeamCymru/Integrations/TeamCymru/TeamCymru.yml
@@ -157,7 +157,7 @@ script:
script: '-'
type: python
subtype: python3
- dockerimage: demisto/vendors-sdk:1.0.0.87491
+ dockerimage: demisto/vendors-sdk:1.0.0.115493
fromversion: 6.5.0
tests:
- TeamCymruTest
diff --git a/Packs/TeamCymru/ReleaseNotes/2_0_2.md b/Packs/TeamCymru/ReleaseNotes/2_0_2.md
new file mode 100644
index 000000000000..f5eb49495372
--- /dev/null
+++ b/Packs/TeamCymru/ReleaseNotes/2_0_2.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Team Cymru
+
+
+- Updated the Docker image to: *demisto/vendors-sdk:1.0.0.115493*.
diff --git a/Packs/TeamCymru/pack_metadata.json b/Packs/TeamCymru/pack_metadata.json
index 3d884339ff54..d878d4543285 100644
--- a/Packs/TeamCymru/pack_metadata.json
+++ b/Packs/TeamCymru/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Team Cymru",
"description": "Team Cymru's Scout integration provides comprehensive insights on IP addresses and domains for threat investigations.",
"support": "partner",
- "currentVersion": "2.0.1",
+ "currentVersion": "2.0.2",
"author": "Team Cymru",
"url": "",
"email": "support@cymru.com",
diff --git a/Packs/ThalesCipherTrustManager/Integrations/ThalesCipherTrustManager/ThalesCipherTrustManager.yml b/Packs/ThalesCipherTrustManager/Integrations/ThalesCipherTrustManager/ThalesCipherTrustManager.yml
index ab498ed574f4..11b708c65d19 100644
--- a/Packs/ThalesCipherTrustManager/Integrations/ThalesCipherTrustManager/ThalesCipherTrustManager.yml
+++ b/Packs/ThalesCipherTrustManager/Integrations/ThalesCipherTrustManager/ThalesCipherTrustManager.yml
@@ -2381,7 +2381,7 @@ script:
script: '-'
type: python
subtype: python3
- dockerimage: demisto/py3-tools:1.0.0.99035
+ dockerimage: demisto/py3-tools:1.0.0.114656
feed: false
isfetch: false
runonce: false
diff --git a/Packs/ThalesCipherTrustManager/ReleaseNotes/1_0_1.md b/Packs/ThalesCipherTrustManager/ReleaseNotes/1_0_1.md
new file mode 100644
index 000000000000..33d3054889b7
--- /dev/null
+++ b/Packs/ThalesCipherTrustManager/ReleaseNotes/1_0_1.md
@@ -0,0 +1,6 @@
+
+#### Integrations
+
+##### Thales CipherTrust Manager
+
+- Updated the Docker image to: *demisto/py3-tools:1.0.0.114656*.
diff --git a/Packs/ThalesCipherTrustManager/pack_metadata.json b/Packs/ThalesCipherTrustManager/pack_metadata.json
index b459ae508577..25364f9ac26e 100644
--- a/Packs/ThalesCipherTrustManager/pack_metadata.json
+++ b/Packs/ThalesCipherTrustManager/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Thales CipherTrust Manager",
"description": "Manage Secrets and Protect Sensitive Data through Thales CipherTrust security platform",
"support": "xsoar",
- "currentVersion": "1.0.0",
+ "currentVersion": "1.0.1",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/ThinkstCanary/Integrations/ThinkstCanary/README.md b/Packs/ThinkstCanary/Integrations/ThinkstCanary/README.md
index 3908b39d1009..0af8353272d9 100644
--- a/Packs/ThinkstCanary/Integrations/ThinkstCanary/README.md
+++ b/Packs/ThinkstCanary/Integrations/ThinkstCanary/README.md
@@ -170,7 +170,7 @@
Human Readable Output
-
+
2. List all Canary tokens
@@ -276,7 +276,7 @@
Human Readable Output
-
+
3. Check if an IP address is on allow list
@@ -379,7 +379,7 @@
Human Readable Output
-
+
4. Add an IP address to the allow list
@@ -482,7 +482,7 @@
Human Readable Output
-
+
5. Edit an alert status
@@ -579,7 +579,7 @@
Human Readable Output
-
+
6. Get a Canary Token file
@@ -725,5 +725,5 @@
Human Readable Output
-
+
diff --git a/Packs/ThinkstCanary/doc_files/53302684-b58aba00-3869-11e9-8a7e-67c13fc8562c.png b/Packs/ThinkstCanary/doc_files/53302684-b58aba00-3869-11e9-8a7e-67c13fc8562c.png
new file mode 100644
index 000000000000..f50e27cab705
Binary files /dev/null and b/Packs/ThinkstCanary/doc_files/53302684-b58aba00-3869-11e9-8a7e-67c13fc8562c.png differ
diff --git a/Packs/ThinkstCanary/doc_files/53302739-35b11f80-386a-11e9-95be-5af5db6e8a8a.png b/Packs/ThinkstCanary/doc_files/53302739-35b11f80-386a-11e9-95be-5af5db6e8a8a.png
new file mode 100644
index 000000000000..cc0beeebf978
Binary files /dev/null and b/Packs/ThinkstCanary/doc_files/53302739-35b11f80-386a-11e9-95be-5af5db6e8a8a.png differ
diff --git a/Packs/ThinkstCanary/doc_files/53302758-86c11380-386a-11e9-89c5-d1bd1ab46a45.png b/Packs/ThinkstCanary/doc_files/53302758-86c11380-386a-11e9-89c5-d1bd1ab46a45.png
new file mode 100644
index 000000000000..e63328dea9e0
Binary files /dev/null and b/Packs/ThinkstCanary/doc_files/53302758-86c11380-386a-11e9-89c5-d1bd1ab46a45.png differ
diff --git a/Packs/ThinkstCanary/doc_files/53302772-b5d78500-386a-11e9-816c-bb69eecfe879.png b/Packs/ThinkstCanary/doc_files/53302772-b5d78500-386a-11e9-816c-bb69eecfe879.png
new file mode 100644
index 000000000000..78e11bd13b51
Binary files /dev/null and b/Packs/ThinkstCanary/doc_files/53302772-b5d78500-386a-11e9-816c-bb69eecfe879.png differ
diff --git a/Packs/ThinkstCanary/doc_files/53302809-2f6f7300-386b-11e9-8eba-71a9d6f4ddd8.png b/Packs/ThinkstCanary/doc_files/53302809-2f6f7300-386b-11e9-8eba-71a9d6f4ddd8.png
new file mode 100644
index 000000000000..54ba3adee669
Binary files /dev/null and b/Packs/ThinkstCanary/doc_files/53302809-2f6f7300-386b-11e9-8eba-71a9d6f4ddd8.png differ
diff --git a/Packs/ThinkstCanary/doc_files/53302891-55494780-386c-11e9-8f2d-5e96faf84353.png b/Packs/ThinkstCanary/doc_files/53302891-55494780-386c-11e9-8f2d-5e96faf84353.png
new file mode 100644
index 000000000000..89758ca5fa24
Binary files /dev/null and b/Packs/ThinkstCanary/doc_files/53302891-55494780-386c-11e9-8f2d-5e96faf84353.png differ
diff --git a/Packs/ThreatConnect/Integrations/ThreatConnectV3/ThreatConnectV3.yml b/Packs/ThreatConnect/Integrations/ThreatConnectV3/ThreatConnectV3.yml
index a97bf80c5b8c..c96d7e836b6f 100644
--- a/Packs/ThreatConnect/Integrations/ThreatConnectV3/ThreatConnectV3.yml
+++ b/Packs/ThreatConnect/Integrations/ThreatConnectV3/ThreatConnectV3.yml
@@ -2963,7 +2963,7 @@ script:
- contextPath: TC.AttributeType.TC.AttributeType.validationRule.version
description: The attribute type validation rule version.
type: string
- dockerimage: demisto/python3:3.11.9.103066
+ dockerimage: demisto/python3:3.11.10.115186
isfetch: true
script: ''
subtype: python3
diff --git a/Packs/ThreatConnect/ReleaseNotes/3_1_9.md b/Packs/ThreatConnect/ReleaseNotes/3_1_9.md
new file mode 100644
index 000000000000..3eaca381ad8a
--- /dev/null
+++ b/Packs/ThreatConnect/ReleaseNotes/3_1_9.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### ThreatConnect v3
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
diff --git a/Packs/ThreatConnect/pack_metadata.json b/Packs/ThreatConnect/pack_metadata.json
index fe00b8737bc3..6bbfdc5ec412 100644
--- a/Packs/ThreatConnect/pack_metadata.json
+++ b/Packs/ThreatConnect/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "ThreatConnect",
"description": "Threat intelligence platform.",
"support": "xsoar",
- "currentVersion": "3.1.8",
+ "currentVersion": "3.1.9",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/ThreatExchange/Integrations/ThreatExchangeV2/ThreatExchangeV2.yml b/Packs/ThreatExchange/Integrations/ThreatExchangeV2/ThreatExchangeV2.yml
index 34e3a3a42f5c..ae472cc7c313 100644
--- a/Packs/ThreatExchange/Integrations/ThreatExchangeV2/ThreatExchangeV2.yml
+++ b/Packs/ThreatExchange/Integrations/ThreatExchangeV2/ThreatExchangeV2.yml
@@ -686,7 +686,7 @@ script:
- contextPath: ThreatExchange.Object.id
description: ID of a ThreatExchange object.
type: String
- dockerimage: demisto/python3:3.10.13.74666
+ dockerimage: demisto/python3:3.11.10.116439
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/ThreatExchange/ReleaseNotes/2_0_13.md b/Packs/ThreatExchange/ReleaseNotes/2_0_13.md
new file mode 100644
index 000000000000..f878e7e1d976
--- /dev/null
+++ b/Packs/ThreatExchange/ReleaseNotes/2_0_13.md
@@ -0,0 +1,8 @@
+
+#### Integrations
+
+##### ThreatExchange v2
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
diff --git a/Packs/ThreatExchange/pack_metadata.json b/Packs/ThreatExchange/pack_metadata.json
index 95e013bc39a0..357bcb6d54ca 100644
--- a/Packs/ThreatExchange/pack_metadata.json
+++ b/Packs/ThreatExchange/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "ThreatExchange",
"description": "Receive threat intelligence about applications, IP addresses, URLs and hashes, a service by Facebook",
"support": "xsoar",
- "currentVersion": "2.0.12",
+ "currentVersion": "2.0.13",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/ThreatGrid/Integrations/ThreatGrid/README.md b/Packs/ThreatGrid/Integrations/ThreatGrid/README.md
index b6e4efd5449d..21dd3bad695b 100644
--- a/Packs/ThreatGrid/Integrations/ThreatGrid/README.md
+++ b/Packs/ThreatGrid/Integrations/ThreatGrid/README.md
@@ -242,7 +242,7 @@
}
Human Readable Output
-
+
2. Get a sample by sample ID
Get a Threat Grid sample by sample ID.
@@ -344,7 +344,7 @@
}
Human Readable Output
-
+
3. Get the states of samples by sample ID
Get the Threat Grid sample state by sample ID.
@@ -406,7 +406,7 @@
}
Human Readable Output
-
+
4. Submit a sample for analysis
Submits a sample to Threat Grid for analysis.
@@ -537,7 +537,7 @@
}
Human Readable Output
-
+
5. Search submissions
Search Threat Grid submissions.
@@ -708,7 +708,7 @@
}
Human Readable Output
-
+
6. Get a sample analysis video by ID
Get the sample analysis video by ID.
@@ -774,7 +774,7 @@
}
Human Readable Output
-
+
7. Get a detailed overview of a sample
Provides a detailed overview of the dynamic and static analysis results for the sample.
@@ -942,7 +942,7 @@
{ "ThreatGrid": { "Sample": { "Id": "9798717402a40970a2d043014d9a6170" } }, "InfoFile": { "Info": "application/json", "Name": "9798717402a40970a2d043014d9a6170-analysis.json", "Extension": "json", "EntryID": "124@16", "Type": "ASCII text, with very long lines\n", "Size": 132549 }
Human Readable Output
-
+
8. Get processes by ID
Returns a JSON object that contains a timeline of all process activities as determined by the dynamic analysis engine.
@@ -1008,7 +1008,7 @@
}
Human Readable Output
-
+
9. Get a PCAP file for a sample by sample ID
Get the tcpdump PCAP file for a specific sample ID, with all the network activity of the sample.
@@ -1074,7 +1074,7 @@
}
Human Readable Output
-
+
10. Get warnings for a sample by sample ID
Returns a JSON structure that describes warnings that occurred during the analysis.
@@ -1140,7 +1140,7 @@
}
Human Readable Output
-
+
11. Get a summary analysis for a sample by sample ID
Returns summary analysis information.
@@ -1238,7 +1238,7 @@
}
Human Readable Output
-
+
12. Get a summary of threats detected during an analysis
Returns a summary of the threats detected during the analysis.
@@ -1348,7 +1348,7 @@
}
Human Readable Output
-
+
13. Get the HTML report for a sample by sample ID
Get the report.html file for a specific sample ID. This is a stand-alone file with a complete report on the sample run. It is designed to be emailed or printed.
@@ -1410,7 +1410,7 @@
}
Human Readable Output
-
+
14. Download a sample as a ZIP file
Download a sample by using its ID. The downloaded file is an archive of the sample itself, in a zip format as a form of quarantine.
@@ -1476,7 +1476,7 @@
}
Human Readable Output
-
+
15. Get a list of IOCs found during a sample run
Returns a JSON list of the Indicators of Compromise identified in this sample run.
@@ -2100,7 +2100,7 @@
}
Human Readable Output
-
+
16. Get information for the logged in user
Returns information for the logged-in user.
@@ -2156,7 +2156,7 @@
}
Human Readable Output
-
+
17. Get the rate limit for a specified user
Get the rate limit for a specific username. ThreatGrid employs a simple rate-limiting method for sample submissions: it caps the number of samples a user can submit within a given time period. Multiple rate limits can be combined to form overlapping submission limits, for example, 20 submissions per hour AND 400 per day.
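To make the overlapping-limits idea concrete, here is a minimal Python sketch that checks a new submission against several limits at once. The window sizes, function name, and in-memory history are illustrative assumptions, not part of the ThreatGrid API.

```python
import time

# Hypothetical illustration of overlapping rate limits such as
# "20 submissions per hour AND 400 per day" (not ThreatGrid API code).
LIMITS = [(20, 3600), (400, 86400)]  # (max submissions, window in seconds)

def submission_allowed(submission_times, limits=LIMITS, now=None):
    """Return True only if a new submission stays within every limit."""
    now = time.time() if now is None else now
    for max_count, window in limits:
        recent = [t for t in submission_times if now - t < window]
        if len(recent) >= max_count:
            return False  # this window is already saturated
    return True

# 20 submissions in the last minute exhaust the hourly limit,
# even though the daily limit (400) still has plenty of room.
history = [time.time() - i for i in range(20)]
print(submission_allowed(history))  # False
```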
@@ -2230,7 +2230,7 @@
}
Human Readable Output
-
+
18. Get a specific threat feed
Gets a specific threat feed.
@@ -2269,7 +2269,7 @@
Command Example
!threat-grid-get-specific-feed feed-name=rat-dns output-type=csv
Human Readable Output
-
+
19. Convert a URL to a file for detonation
Convert a URL into a file for Threat Grid file detonation.
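As a rough illustration of one common way to package a URL as a detonable file, the sketch below writes an Internet Shortcut (.url) file. Whether this command uses exactly that format is an assumption made only for illustration.

```python
# Hypothetical illustration only: an Internet Shortcut (.url) file is one
# common way to hand a URL to a sandbox as a file. This is an assumption,
# not necessarily what threat-grid-url-to-file does internally.
def url_to_file(url, path="sample.url"):
    with open(path, "w") as f:
        f.write(f"[InternetShortcut]\nURL={url}\n")
    return path

print(url_to_file("http://www.google.com"))  # writes sample.url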
@@ -2298,7 +2298,7 @@
Command Example
!threat-grid-url-to-file urls=www.google.com
Human Readable Output
-
+
20. Get rate limits for an organization
Get the rate limits applied to an organization. ThreatGrid employs a simple rate-limiting method for sample submissions: it caps the number of samples an entire organization can submit within a given time period, and/or applies limits on a per-license basis. Multiple rate limits can be combined to form overlapping submission limits, for example, 20 submissions per hour AND 400 per day.
@@ -2374,7 +2374,7 @@
}
Human Readable Output
-
+
21. Search IP addresses
Search IPs.
@@ -2448,7 +2448,7 @@
Command Example
!threat-grid-search-ips tag=malicious
Human Readable Output
-
+
22. Get annotation data for an analysis
Returns data about the annotations of the analysis.
@@ -2574,7 +2574,7 @@
}
Human Readable Output
-
+
23. Search samples
Searches samples.
@@ -2711,7 +2711,7 @@
}
Human Readable Output
-
+
24. Search URLs
Search URLs.
@@ -2865,7 +2865,7 @@
}
Human Readable Output
-
+
26. Get the threat feed for artifacts
Get the threat feed for artifacts.
@@ -2947,7 +2947,7 @@
Command Example
!threat-grid-feeds-artifacts after=2018-01-18T00:00:00 before=2018-01-18T00:02:07 confidence=75 severity=75
Human Readable Output
-
+
27. Get the threat feed for a domain
Get the threat feed for a domain.
@@ -3014,7 +3014,7 @@
Command Example
!threat-grid-feeds-domain after=2018-01-18T00:00:00 before=2018-01-18T00:10:00 confidence=75 severity=75
Human Readable Output
-
+
28. Get the threat feed for an IP address
Returns the threat feed for an IP address.
@@ -3081,7 +3081,7 @@
Command Example
!threat-grid-feeds-ip after=2018-01-18T00:00:00 before=2018-01-18T01:00:00 confidence=75 severity=75
Human Readable Output
-
+
29. Get the threat feed for a network stream
Get the network stream threat feed.
@@ -3153,7 +3153,7 @@
Command Example
!threat-grid-feeds-network-stream after=2018-01-18T00:00:00 before=2018-01-18T00:02:10 confidence=75 severity=75
Human Readable Output
-
+
30. Get the threat feed for a path
Returns the threat feed for a path.
@@ -3222,7 +3222,7 @@
Command Example
!threat-grid-feeds-path after=2018-01-18T00:00:00 before=2018-01-18T00:03:00 confidence=75 severity=75
Human Readable Output
-
+
31. Get the threat feed for URL
Returns the threat feed for a URL.
@@ -3291,7 +3291,7 @@
Command Example
!threat-grid-feeds-url after=2018-01-18T00:00:00 before=2018-01-18T00:05:00 confidence=75 severity=75
Human Readable Output
-
+
32. Get the artifact for a sample ID by artifact ID
Returns the artifact of the specified sample ID, by artifact ID.
@@ -3323,7 +3323,7 @@
Command Example
!threat-grid-get-analysis-artifact id=a6cc7ae4e3318e98d94e8a053dd72c47 aid=1
Human Readable Output
-
+
33. Get artifacts for a sample ID
Returns the artifacts of the specified sample ID.
@@ -3352,7 +3352,7 @@
Command Example
!threat-grid-get-analysis-artifacts id=a6cc7ae4e3318e98d94e8a053dd72c47
Human Readable Output
-
+
34. Get analysis data for an IOC
Returns data for the specified IOC.
@@ -3384,7 +3384,7 @@
Command Example
!threat-grid-get-analysis-ioc id=8ee72188b95b7d8f4e1a6c4842e98566 ioc=network-communications-http-get-url
Human Readable Output
-
+
35. Get metadata for an analysis
Returns metadata about the analysis.
@@ -3411,7 +3411,7 @@
Command Example
!threat-grid-get-analysis-metadata id=58e5e66b31484a8529b80a18a33e0814
Human Readable Output
-
+
36. Get data for a network stream
Returns data regarding a specific network stream.
@@ -3445,7 +3445,7 @@
Command Example
!threat-grid-get-analysis-network-stream id=a6cc7ae4e3318e98d94e8a053dd72c47 nsid=1
Human Readable Output
-
+
37. Get the analysis for a network stream
Returns the network stream analysis.
@@ -3474,7 +3474,7 @@
Command Example
!threat-grid-get-analysis-network-streams id=a6cc7ae4e3318e98d94e8a053dd72c47
Human Readable Output
-
+
38. Get data for a process ID in an analysis
Returns data regarding the specific process ID in the analysis.
@@ -3508,7 +3508,7 @@
Command Example
!threat-grid-get-analysis-process id=9798717402a40970a2d043014d9a6170 pid=4
Human Readable Output
-
+
39. Get data for an analysis process
Returns data regarding the analysis processes.
@@ -3537,7 +3537,7 @@
Command Example
!threat-grid-get-analysis-processes id=9798717402a40970a2d043014d9a6170
Human Readable Output
-
+
40. Submit URLs for analysis
Submits URLs to Threat Grid for analysis.
diff --git a/Packs/ThreatGrid/doc_files/47430682-be2b9080-d7a2-11e8-87b8-9749ee4c22f2.png b/Packs/ThreatGrid/doc_files/47430682-be2b9080-d7a2-11e8-87b8-9749ee4c22f2.png
new file mode 100644
index 000000000000..fdf98489658a
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47430682-be2b9080-d7a2-11e8-87b8-9749ee4c22f2.png differ
diff --git a/Packs/ThreatGrid/doc_files/47431026-a6084100-d7a3-11e8-93ba-595783320aed.png b/Packs/ThreatGrid/doc_files/47431026-a6084100-d7a3-11e8-93ba-595783320aed.png
new file mode 100644
index 000000000000..087dd5b84d26
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47431026-a6084100-d7a3-11e8-93ba-595783320aed.png differ
diff --git a/Packs/ThreatGrid/doc_files/47431109-d3ed8580-d7a3-11e8-8d5e-1733cfea7209.png b/Packs/ThreatGrid/doc_files/47431109-d3ed8580-d7a3-11e8-8d5e-1733cfea7209.png
new file mode 100644
index 000000000000..0594fbecce07
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47431109-d3ed8580-d7a3-11e8-8d5e-1733cfea7209.png differ
diff --git a/Packs/ThreatGrid/doc_files/47432202-48c1bf00-d7a6-11e8-8aa2-d86296c48e80.png b/Packs/ThreatGrid/doc_files/47432202-48c1bf00-d7a6-11e8-8aa2-d86296c48e80.png
new file mode 100644
index 000000000000..5d19f280ba14
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47432202-48c1bf00-d7a6-11e8-8aa2-d86296c48e80.png differ
diff --git a/Packs/ThreatGrid/doc_files/47432285-7e66a800-d7a6-11e8-9077-477d92bc0710.png b/Packs/ThreatGrid/doc_files/47432285-7e66a800-d7a6-11e8-9077-477d92bc0710.png
new file mode 100644
index 000000000000..de1fe02c83ce
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47432285-7e66a800-d7a6-11e8-9077-477d92bc0710.png differ
diff --git a/Packs/ThreatGrid/doc_files/47432696-78bd9200-d7a7-11e8-9d31-7309e77f97df.png b/Packs/ThreatGrid/doc_files/47432696-78bd9200-d7a7-11e8-9d31-7309e77f97df.png
new file mode 100644
index 000000000000..497dcb5ec530
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47432696-78bd9200-d7a7-11e8-9d31-7309e77f97df.png differ
diff --git a/Packs/ThreatGrid/doc_files/47433069-4ceedc00-d7a8-11e8-9369-91f808bd5565.png b/Packs/ThreatGrid/doc_files/47433069-4ceedc00-d7a8-11e8-9369-91f808bd5565.png
new file mode 100644
index 000000000000..5be3099e101c
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47433069-4ceedc00-d7a8-11e8-9369-91f808bd5565.png differ
diff --git a/Packs/ThreatGrid/doc_files/47433233-afe07300-d7a8-11e8-8cfb-79f8bf08941e.png b/Packs/ThreatGrid/doc_files/47433233-afe07300-d7a8-11e8-8cfb-79f8bf08941e.png
new file mode 100644
index 000000000000..6786d970354a
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47433233-afe07300-d7a8-11e8-8cfb-79f8bf08941e.png differ
diff --git a/Packs/ThreatGrid/doc_files/47433444-241b1680-d7a9-11e8-8055-c335ec4df33a.png b/Packs/ThreatGrid/doc_files/47433444-241b1680-d7a9-11e8-8055-c335ec4df33a.png
new file mode 100644
index 000000000000..e2c0d145ff68
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47433444-241b1680-d7a9-11e8-8055-c335ec4df33a.png differ
diff --git a/Packs/ThreatGrid/doc_files/47433625-7d834580-d7a9-11e8-90c5-887dc34186f3.png b/Packs/ThreatGrid/doc_files/47433625-7d834580-d7a9-11e8-90c5-887dc34186f3.png
new file mode 100644
index 000000000000..c37ec318b0df
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47433625-7d834580-d7a9-11e8-90c5-887dc34186f3.png differ
diff --git a/Packs/ThreatGrid/doc_files/47433917-131ed500-d7aa-11e8-8cd8-a2964f31d24e.png b/Packs/ThreatGrid/doc_files/47433917-131ed500-d7aa-11e8-8cd8-a2964f31d24e.png
new file mode 100644
index 000000000000..29d0a4bf75b5
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47433917-131ed500-d7aa-11e8-8cd8-a2964f31d24e.png differ
diff --git a/Packs/ThreatGrid/doc_files/47435252-e61ff180-d7ac-11e8-8565-4a41610e3693.png b/Packs/ThreatGrid/doc_files/47435252-e61ff180-d7ac-11e8-8565-4a41610e3693.png
new file mode 100644
index 000000000000..f1ddb781a34e
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47435252-e61ff180-d7ac-11e8-8565-4a41610e3693.png differ
diff --git a/Packs/ThreatGrid/doc_files/47435410-3dbe5d00-d7ad-11e8-9daf-d6d1ecb91dad.png b/Packs/ThreatGrid/doc_files/47435410-3dbe5d00-d7ad-11e8-9daf-d6d1ecb91dad.png
new file mode 100644
index 000000000000..1e7a743b392a
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47435410-3dbe5d00-d7ad-11e8-9daf-d6d1ecb91dad.png differ
diff --git a/Packs/ThreatGrid/doc_files/47436037-62670480-d7ae-11e8-8328-7f69151fff1b.png b/Packs/ThreatGrid/doc_files/47436037-62670480-d7ae-11e8-8328-7f69151fff1b.png
new file mode 100644
index 000000000000..bf50f774c2d8
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47436037-62670480-d7ae-11e8-8328-7f69151fff1b.png differ
diff --git a/Packs/ThreatGrid/doc_files/47437441-43b63d00-d7b1-11e8-916b-41bb5a119b1a.png b/Packs/ThreatGrid/doc_files/47437441-43b63d00-d7b1-11e8-916b-41bb5a119b1a.png
new file mode 100644
index 000000000000..509ccb2b14bc
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47437441-43b63d00-d7b1-11e8-916b-41bb5a119b1a.png differ
diff --git a/Packs/ThreatGrid/doc_files/47438632-82e58d80-d7b3-11e8-962b-f5ec7b8e6bf1.png b/Packs/ThreatGrid/doc_files/47438632-82e58d80-d7b3-11e8-962b-f5ec7b8e6bf1.png
new file mode 100644
index 000000000000..6bb5708f98a5
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47438632-82e58d80-d7b3-11e8-962b-f5ec7b8e6bf1.png differ
diff --git a/Packs/ThreatGrid/doc_files/47441386-b971d700-d7b8-11e8-9caa-e55eae2b0932.png b/Packs/ThreatGrid/doc_files/47441386-b971d700-d7b8-11e8-9caa-e55eae2b0932.png
new file mode 100644
index 000000000000..76f759c4b1cd
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47441386-b971d700-d7b8-11e8-9caa-e55eae2b0932.png differ
diff --git a/Packs/ThreatGrid/doc_files/47480262-82dba100-d838-11e8-9e4b-9c7d2c9bb643.png b/Packs/ThreatGrid/doc_files/47480262-82dba100-d838-11e8-9e4b-9c7d2c9bb643.png
new file mode 100644
index 000000000000..970c01f4ae84
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47480262-82dba100-d838-11e8-9e4b-9c7d2c9bb643.png differ
diff --git a/Packs/ThreatGrid/doc_files/47480951-c3d4b500-d83a-11e8-899c-908d7c24a5a4.png b/Packs/ThreatGrid/doc_files/47480951-c3d4b500-d83a-11e8-899c-908d7c24a5a4.png
new file mode 100644
index 000000000000..dd3ea28ceae1
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47480951-c3d4b500-d83a-11e8-899c-908d7c24a5a4.png differ
diff --git a/Packs/ThreatGrid/doc_files/47481818-7c9bf380-d83d-11e8-9a7b-582e42a856fd.png b/Packs/ThreatGrid/doc_files/47481818-7c9bf380-d83d-11e8-9a7b-582e42a856fd.png
new file mode 100644
index 000000000000..0fc7c1aeab04
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47481818-7c9bf380-d83d-11e8-9a7b-582e42a856fd.png differ
diff --git a/Packs/ThreatGrid/doc_files/47482519-7e66b680-d83f-11e8-81c0-1953bc691ce5.png b/Packs/ThreatGrid/doc_files/47482519-7e66b680-d83f-11e8-81c0-1953bc691ce5.png
new file mode 100644
index 000000000000..f5cb033aacd8
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47482519-7e66b680-d83f-11e8-81c0-1953bc691ce5.png differ
diff --git a/Packs/ThreatGrid/doc_files/47482806-4449e480-d840-11e8-8f0b-d73bd18a09a4.png b/Packs/ThreatGrid/doc_files/47482806-4449e480-d840-11e8-8f0b-d73bd18a09a4.png
new file mode 100644
index 000000000000..1e7f198294cd
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47482806-4449e480-d840-11e8-8f0b-d73bd18a09a4.png differ
diff --git a/Packs/ThreatGrid/doc_files/47483865-f2ef2480-d842-11e8-924c-69ff668f83c1.png b/Packs/ThreatGrid/doc_files/47483865-f2ef2480-d842-11e8-924c-69ff668f83c1.png
new file mode 100644
index 000000000000..30e9369dc939
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47483865-f2ef2480-d842-11e8-924c-69ff668f83c1.png differ
diff --git a/Packs/ThreatGrid/doc_files/47484658-faafc880-d844-11e8-805f-fbe2530872ed.png b/Packs/ThreatGrid/doc_files/47484658-faafc880-d844-11e8-805f-fbe2530872ed.png
new file mode 100644
index 000000000000..9349e42eb874
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47484658-faafc880-d844-11e8-805f-fbe2530872ed.png differ
diff --git a/Packs/ThreatGrid/doc_files/47484718-1a46f100-d845-11e8-8b11-bcbce534db95.png b/Packs/ThreatGrid/doc_files/47484718-1a46f100-d845-11e8-8b11-bcbce534db95.png
new file mode 100644
index 000000000000..859eef980fb6
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47484718-1a46f100-d845-11e8-8b11-bcbce534db95.png differ
diff --git a/Packs/ThreatGrid/doc_files/47485524-e79df800-d846-11e8-8a55-d4ae00888c44.png b/Packs/ThreatGrid/doc_files/47485524-e79df800-d846-11e8-8a55-d4ae00888c44.png
new file mode 100644
index 000000000000..c26e6395bbba
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47485524-e79df800-d846-11e8-8a55-d4ae00888c44.png differ
diff --git a/Packs/ThreatGrid/doc_files/47486140-56c81c00-d848-11e8-8a49-abe9bb7184fa.png b/Packs/ThreatGrid/doc_files/47486140-56c81c00-d848-11e8-8a49-abe9bb7184fa.png
new file mode 100644
index 000000000000..83c300c7f650
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47486140-56c81c00-d848-11e8-8a49-abe9bb7184fa.png differ
diff --git a/Packs/ThreatGrid/doc_files/47486645-7e6bb400-d849-11e8-84bc-ccf5aa070ebd.png b/Packs/ThreatGrid/doc_files/47486645-7e6bb400-d849-11e8-84bc-ccf5aa070ebd.png
new file mode 100644
index 000000000000..626355a3da1b
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47486645-7e6bb400-d849-11e8-84bc-ccf5aa070ebd.png differ
diff --git a/Packs/ThreatGrid/doc_files/47486691-98a59200-d849-11e8-8178-f8bdf5842b68.png b/Packs/ThreatGrid/doc_files/47486691-98a59200-d849-11e8-8178-f8bdf5842b68.png
new file mode 100644
index 000000000000..0a5e3a962b76
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47486691-98a59200-d849-11e8-8178-f8bdf5842b68.png differ
diff --git a/Packs/ThreatGrid/doc_files/47486756-bbd04180-d849-11e8-85a4-0622a22550fc.png b/Packs/ThreatGrid/doc_files/47486756-bbd04180-d849-11e8-85a4-0622a22550fc.png
new file mode 100644
index 000000000000..45143611ea53
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47486756-bbd04180-d849-11e8-85a4-0622a22550fc.png differ
diff --git a/Packs/ThreatGrid/doc_files/47486902-0d78cc00-d84a-11e8-9015-ad05ee52f4b3.png b/Packs/ThreatGrid/doc_files/47486902-0d78cc00-d84a-11e8-9015-ad05ee52f4b3.png
new file mode 100644
index 000000000000..648bbfa81744
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47486902-0d78cc00-d84a-11e8-9015-ad05ee52f4b3.png differ
diff --git a/Packs/ThreatGrid/doc_files/47496245-eb3d7900-d85e-11e8-9d8c-37359b7d999f.png b/Packs/ThreatGrid/doc_files/47496245-eb3d7900-d85e-11e8-9d8c-37359b7d999f.png
new file mode 100644
index 000000000000..c3ac7c9b7b1d
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47496245-eb3d7900-d85e-11e8-9d8c-37359b7d999f.png differ
diff --git a/Packs/ThreatGrid/doc_files/47497034-5b4cfe80-d861-11e8-83af-34b7cea5b4b5.png b/Packs/ThreatGrid/doc_files/47497034-5b4cfe80-d861-11e8-83af-34b7cea5b4b5.png
new file mode 100644
index 000000000000..725446961d2f
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47497034-5b4cfe80-d861-11e8-83af-34b7cea5b4b5.png differ
diff --git a/Packs/ThreatGrid/doc_files/47497188-d1516580-d861-11e8-953a-219bc6c965ef.png b/Packs/ThreatGrid/doc_files/47497188-d1516580-d861-11e8-953a-219bc6c965ef.png
new file mode 100644
index 000000000000..c7b11ae6719a
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47497188-d1516580-d861-11e8-953a-219bc6c965ef.png differ
diff --git a/Packs/ThreatGrid/doc_files/47497213-e75f2600-d861-11e8-96c5-b3257fe22e98.png b/Packs/ThreatGrid/doc_files/47497213-e75f2600-d861-11e8-96c5-b3257fe22e98.png
new file mode 100644
index 000000000000..6d040259660b
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47497213-e75f2600-d861-11e8-96c5-b3257fe22e98.png differ
diff --git a/Packs/ThreatGrid/doc_files/47497291-2beac180-d862-11e8-99c2-0c64a014bf66.png b/Packs/ThreatGrid/doc_files/47497291-2beac180-d862-11e8-99c2-0c64a014bf66.png
new file mode 100644
index 000000000000..61046fa981d2
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47497291-2beac180-d862-11e8-99c2-0c64a014bf66.png differ
diff --git a/Packs/ThreatGrid/doc_files/47497306-3e64fb00-d862-11e8-8f1b-c2aa623b33ff.png b/Packs/ThreatGrid/doc_files/47497306-3e64fb00-d862-11e8-8f1b-c2aa623b33ff.png
new file mode 100644
index 000000000000..b6d55dac74d6
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/47497306-3e64fb00-d862-11e8-8f1b-c2aa623b33ff.png differ
diff --git a/Packs/ThreatGrid/doc_files/integration-Threat_Grid_mceclip0.png b/Packs/ThreatGrid/doc_files/integration-Threat_Grid_mceclip0.png
new file mode 100644
index 000000000000..39217c33a6d6
Binary files /dev/null and b/Packs/ThreatGrid/doc_files/integration-Threat_Grid_mceclip0.png differ
diff --git a/Packs/ThreatQ/Integrations/ThreatQ_v2/README.md b/Packs/ThreatQ/Integrations/ThreatQ_v2/README.md
index 39bb8f7432f2..803769be544d 100644
--- a/Packs/ThreatQ/Integrations/ThreatQ_v2/README.md
+++ b/Packs/ThreatQ/Integrations/ThreatQ_v2/README.md
@@ -129,7 +129,7 @@
Command Example
!threatq-search-by-name name=test limit=6
Human Readable Output
-
+
2. Check an IP address
Checks the reputation of an IP address in ThreatQ.
@@ -270,7 +270,7 @@
!ip ip=91.140.64.113
Human Readable Output
-
+
3. Check a URL
Checks the reputation of a URL in ThreatQ.
@@ -412,7 +412,7 @@
!url url=https://www.paloaltonetworks.com/
Human Readable Output
-
+
4. Check a file
Checks the reputation of a file in ThreatQ.
@@ -578,7 +578,7 @@
!file file=a94a8fe5ccb19ba61c4c0873d391e987982fbbd3
Human Readable Output
-
+
5. Check an email
Checks the reputation of an email in ThreatQ.
@@ -718,7 +718,7 @@
Command Example
!email email=example.gmail.com
Human Readable Output
-
+
6. Check a domain
Checks the reputation of a domain in ThreatQ.
@@ -859,7 +859,7 @@
Command Example
!domain domain=www.testdomain.com
Human Readable Output
-
+
7. Create an indicator
Creates a new indicator in ThreatQ.
@@ -990,7 +990,7 @@
!threatq-create-indicator value=232.12.34.135 status=Review type="IP Address" attributes_names=TestAttr1,TestAttr2 attributes_values=Val1,Val2 sources=arian@demisto.com
Human Readable Output
-
+
8. Add an attribute
Adds an attribute to an object in ThreatQ.
@@ -1036,7 +1036,7 @@
!threatq-add-attribute obj_type=indicator obj_id=173317 name=TestAttr3 value=Val3
Human Readable Output
-
+
9. Modify an attribute
Modifies an attribute for an object in ThreatQ.
@@ -1079,7 +1079,7 @@
!threatq-modify-attribute attribute_id=996895 attribute_value=NewVal obj_id=173317 obj_type=indicator
Human Readable Output
-
+
10. Link two objects
Links two objects together in ThreatQ.
@@ -1122,7 +1122,7 @@
!threatq-link-objects obj1_id=173317 obj1_type=indicator obj2_id=1 obj2_type=adversary
Human Readable Output
-
+
11. Create an adversary
Creates a new adversary in ThreatQ.
@@ -1223,7 +1223,7 @@
!threatq-create-adversary name="Reut Shalem"
Human Readable Output
-
+
12. Create an event
Creates a new event in ThreatQ.
@@ -1348,7 +1348,7 @@
Command Example
!threatq-create-event date="2019-09-30 20:00:00" title="Offra Alta" type=Incident
Human Readable Output
-
+
13. Get related indicators
Retrieves related indicators for an object in ThreatQ.
@@ -1603,7 +1603,7 @@
Command Example
!threatq-get-related-indicators obj_id=1 obj_type=adversary
Human Readable Output
-
+
14. Update an indicator status
Updates an indicator status in ThreatQ.
@@ -1659,7 +1659,7 @@
!threatq-update-status id=173317 status=Whitelisted
Human Readable Output
-
+
15. Get related events
Retrieves related events of an object in ThreatQ.
@@ -1899,7 +1899,7 @@
Command Example
!threatq-get-related-events obj_id=1 obj_type=adversary
Human Readable Output
-
+
16. Get related adversaries
Retrieves related adversaries of an object in ThreatQ.
@@ -2095,7 +2095,7 @@
!threatq-get-related-adversaries obj_id=1 obj_type=adversary
Human Readable Output
-
+
17. Upload a file
Uploads a file to ThreatQ.
@@ -2226,7 +2226,7 @@
!threatq-upload-file entry_id=5379@9da8d636-cf30-42c2-8263-d09f5268be8a file_category="Generic Text" title="File Title"
Human Readable Output
-
+
18. Search by Object type and ID
Searches for an object by object type and ID.
@@ -2517,7 +2517,7 @@
!threatq-search-by-id obj_id=173317 obj_type=indicator
Human Readable Output
-
+
19. Unlink two objects
Unlinks two objects in ThreatQ.
@@ -2560,7 +2560,7 @@
!threatq-unlink-objects obj1_id=173317 obj1_type=indicator obj2_id=1 obj2_type=adversary
Human Readable Output
-
+
20. Delete an object
Deletes an object in ThreatQ.
@@ -2593,7 +2593,7 @@
!threatq-delete-object obj_id=104 obj_type=event
Human Readable Output
-
+
21. Add a source to an object
Adds a source to an object in ThreatQ.
@@ -2631,7 +2631,7 @@
!threatq-add-source obj_id=173317 obj_type=indicator source="AlienVault OTX"
Human Readable Output
-
+
22. Delete a source from an object
Deletes a source from an object in ThreatQ.
@@ -2669,7 +2669,7 @@
!threatq-delete-source obj_id=173317 obj_type=indicator source_id=3333819
Human Readable Output
-
+
23. Delete an attribute
Deletes an attribute from an object in ThreatQ.
@@ -2707,7 +2707,7 @@
!threatq-delete-attribute attribute_id=996896 obj_id=173317 obj_type=indicator
Human Readable Output
-
+
24. Edit an adversary
Updates an adversary name in ThreatQ.
@@ -2798,7 +2798,7 @@
!threatq-edit-adversary id=23 name="New Adversary Name"
Human Readable Output
-
+
25. Edit an indicator
Updates an indicator in ThreatQ.
@@ -2919,7 +2919,7 @@
!threatq-edit-indicator id=173317 description="This is a new description" type="Email Address" value=goo@test.com
Human Readable Output
-
+
26. Edit an event
Updates an event in ThreatQ.
@@ -3040,7 +3040,7 @@
!threatq-edit-event id=1 date="2019-09-30 21:00:00" description="The event will take place in Expo Tel Aviv" type="Command and Control"
Human Readable Output
-
+
27. Update a score of an indicator
Modifies an indicator's score in ThreatQ. The final indicator score is the highest of the manual and generated scores.
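Put another way, the effective score follows a simple max rule. A short Python sketch (the names are hypothetical, not the ThreatQ API):

```python
# Hypothetical illustration of the scoring rule described above:
# the effective score is whichever of the two scores is higher.
def final_indicator_score(manual_score, generated_score):
    return max(manual_score, generated_score)

print(final_indicator_score(2, 7))  # 7 - the generated score wins
print(final_indicator_score(9, 7))  # 9 - the manual score wins
```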
@@ -3151,7 +3151,7 @@
!threatq-update-score id=173317 score=2
Human Readable Output
-
+
28. Download a file to Cortex XSOAR
Downloads a file from ThreatQ to Cortex XSOAR.
@@ -3179,7 +3179,7 @@
!threatq-download-file id=88
Human Readable Output
-
+
29. Get all indicators
Retrieves all indicators in ThreatQ.
@@ -3291,7 +3291,7 @@
!threatq-get-all-indicators limit=30 page=10
Human Readable Output
-
+
30. Get a list of events
Retrieves all events in ThreatQ.
@@ -3397,7 +3397,7 @@
!threatq-get-all-events limit=30 page=10
Human Readable Output
-
+
31. Get a list of all adversaries
Returns all adversaries in ThreatQ.
@@ -3488,4 +3488,4 @@
!threatq-get-all-adversaries limit=30 page=10
Human Readable Output
-
+
diff --git a/Packs/Tripwire/Integrations/Tripwire/Tripwire.yml b/Packs/Tripwire/Integrations/Tripwire/Tripwire.yml
index 3f58edb90137..ca56e2879c30 100644
--- a/Packs/Tripwire/Integrations/Tripwire/Tripwire.yml
+++ b/Packs/Tripwire/Integrations/Tripwire/Tripwire.yml
@@ -411,7 +411,7 @@ script:
- contextPath: Tripwire.Nodes.version
description: Versions of nodes.
type: String
- dockerimage: demisto/python3:3.10.12.68714
+ dockerimage: demisto/python3:3.11.10.116439
isfetch: true
runonce: false
script: '-'
diff --git a/Packs/Tripwire/ReleaseNotes/1_0_20.md b/Packs/Tripwire/ReleaseNotes/1_0_20.md
new file mode 100644
index 000000000000..c950ac7d5dbf
--- /dev/null
+++ b/Packs/Tripwire/ReleaseNotes/1_0_20.md
@@ -0,0 +1,9 @@
+
+#### Integrations
+
+##### Tripwire
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
diff --git a/Packs/Tripwire/pack_metadata.json b/Packs/Tripwire/pack_metadata.json
index 648bf85249df..b2f771c54c95 100644
--- a/Packs/Tripwire/pack_metadata.json
+++ b/Packs/Tripwire/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Tripwire",
"description": "Tripwire is a file integrity managment(FIM),used to track files and folders on different systems and monitors their changes.",
"support": "xsoar",
- "currentVersion": "1.0.19",
+ "currentVersion": "1.0.20",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/Troubleshoot/ReleaseNotes/2_1_1.md b/Packs/Troubleshoot/ReleaseNotes/2_1_1.md
new file mode 100644
index 000000000000..880efcf5b3d5
--- /dev/null
+++ b/Packs/Troubleshoot/ReleaseNotes/2_1_1.md
@@ -0,0 +1,7 @@
+
+#### Scripts
+
+##### CertificatesTroubleshoot
+
+
+- Updated the Docker image to: *demisto/auth-utils:1.0.0.115527*.
diff --git a/Packs/Troubleshoot/ReleaseNotes/2_1_2.md b/Packs/Troubleshoot/ReleaseNotes/2_1_2.md
new file mode 100644
index 000000000000..082250c75958
--- /dev/null
+++ b/Packs/Troubleshoot/ReleaseNotes/2_1_2.md
@@ -0,0 +1,9 @@
+
+#### Scripts
+
+##### CertificatesTroubleshoot
+- Updated the Docker image to: *demisto/auth-utils:1.0.0.116752*.
+
+
+
+
diff --git a/Packs/Troubleshoot/Scripts/CertificatesTroubleshoot/CertificatesTroubleshoot.yml b/Packs/Troubleshoot/Scripts/CertificatesTroubleshoot/CertificatesTroubleshoot.yml
index 5420553c4f88..db65318e05ce 100644
--- a/Packs/Troubleshoot/Scripts/CertificatesTroubleshoot/CertificatesTroubleshoot.yml
+++ b/Packs/Troubleshoot/Scripts/CertificatesTroubleshoot/CertificatesTroubleshoot.yml
@@ -337,7 +337,7 @@ tags:
- Utility
timeout: '0'
type: python
-dockerimage: demisto/auth-utils:1.0.0.103532
+dockerimage: demisto/auth-utils:1.0.0.116752
runas: DBotWeakRole
tests:
- No tests (auto formatted)
diff --git a/Packs/Troubleshoot/pack_metadata.json b/Packs/Troubleshoot/pack_metadata.json
index 321edc4d4fbe..54b6755a7b5c 100644
--- a/Packs/Troubleshoot/pack_metadata.json
+++ b/Packs/Troubleshoot/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Troubleshoot",
"description": "Use this pack to troubleshoot your environment.",
"support": "xsoar",
- "currentVersion": "2.1.0",
+ "currentVersion": "2.1.2",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/Tufin/Integrations/Tufin/README.md b/Packs/Tufin/Integrations/Tufin/README.md
index be2ed942be1c..1436cba1384d 100644
--- a/Packs/Tufin/Integrations/Tufin/README.md
+++ b/Packs/Tufin/Integrations/Tufin/README.md
@@ -68,7 +68,7 @@ Search the Tufin Topology Map
```!tufin-search-topology destination=10.2.2.0/24 source=192.168.60.0/24```
##### Human Readable Output
-![image](https://raw.githubusercontent.com/demisto/content/02e1aa1b9ec01b73d5c6d1c15584271a4f0e3fa6/Packs/Tufin/doc_files/tufin-search-topology.png)
+![image](../../doc_files/tufin-search-topology.png)
### 2. tufin-search-topology-image
---
@@ -93,7 +93,7 @@ There is no context output for this command.
```!tufin-search-topology-image destination=10.2.2.0/24 source=192.168.60.0/24```
##### Human Readable Output
-![image](https://raw.githubusercontent.com/demisto/content/02e1aa1b9ec01b73d5c6d1c15584271a4f0e3fa6/Packs/Tufin/doc_files/tufin-search-topology-image.png)
+![image](../../doc_files/tufin-search-topology-image.png)
### 3. tufin-object-resolve
---
@@ -120,7 +120,7 @@ Resolve IP address to Network Object
```!tufin-object-resolve ip=10.3.3.3```
##### Human Readable Output
-![image](https://raw.githubusercontent.com/demisto/content/02e1aa1b9ec01b73d5c6d1c15584271a4f0e3fa6/Packs/Tufin/doc_files/tufin-resolve-object.png)
+![image](../../doc_files/tufin-resolve-object.png)
### 4. tufin-policy-search
---
@@ -147,7 +147,7 @@ Search the policies of all devices managed by Tufin
```!tufin-policy-search search="source:192.168.1.1"```
##### Human Readable Output
-![image](https://raw.githubusercontent.com/demisto/content/02e1aa1b9ec01b73d5c6d1c15584271a4f0e3fa6/Packs/Tufin/doc_files/tufin-search-policies.png)
+![image](../../doc_files/tufin-search-policies.png)
### 5. tufin-get-zone-for-ip
---
@@ -175,7 +175,7 @@ Match the IP address to the assigned Tufin Zone
```!tufin-get-zone-for-ip ip=10.10.12.1```
##### Human Readable Output
-![image](https://raw.githubusercontent.com/demisto/content/02e1aa1b9ec01b73d5c6d1c15584271a4f0e3fa6/Packs/Tufin/doc_files/tufin-get-zone-for-ip.png)
+![image](../../doc_files/tufin-get-zone-for-ip.png)
### 6. tufin-submit-change-request
---
@@ -210,7 +210,7 @@ Submit a change request to SecureChange
```!tufin-submit-change-request request-type="Decommission Request" priority=High source=192.168.1.1 subject="This host is infected with ransomware"```
##### Human Readable Output
-![image](https://raw.githubusercontent.com/demisto/content/02e1aa1b9ec01b73d5c6d1c15584271a4f0e3fa6/Packs/Tufin/doc_files/tufin-submit-change-request.png)
+![image](../../doc_files/tufin-submit-change-request.png)
### 7. tufin-search-devices
---
@@ -244,7 +244,7 @@ Search SecureTrack devices
```!tufin-search-devices vendor=Cisco```
##### Human Readable Output
-![image](https://raw.githubusercontent.com/demisto/content/02e1aa1b9ec01b73d5c6d1c15584271a4f0e3fa6/Packs/Tufin/doc_files/tufin-search-devices.png)
+![image](../../doc_files/tufin-search-devices.png)
### 8. tufin-get-change-info
---
@@ -278,7 +278,7 @@ Get information on a SecureChange Ticket (Ticket ID retrieved from Tufin UI)
```!tufin-get-change-info ticket-id=250```
##### Human Readable Output
-![image](https://raw.githubusercontent.com/demisto/content/02e1aa1b9ec01b73d5c6d1c15584271a4f0e3fa6/Packs/Tufin/doc_files/tufin-get-change-info.png)
+![image](../../doc_files/tufin-get-change-info.png)
### 9. tufin-search-applications
---
@@ -311,7 +311,7 @@ Search SecureApp applications
```!tufin-search-applications name="3Rivers"```
##### Human Readable Output
-![image](https://raw.githubusercontent.com/demisto/content/02e1aa1b9ec01b73d5c6d1c15584271a4f0e3fa6/Packs/Tufin/doc_files/tufin-search-applications.png)
+![image](../../doc_files/tufin-search-applications.png)
### 10. tufin-search-application-connections
---
@@ -350,7 +350,7 @@ Get SecureApp application connections
```!tufin-search-application-connections app_id=215```
##### Human Readable Output
-![image](https://raw.githubusercontent.com/demisto/content/02e1aa1b9ec01b73d5c6d1c15584271a4f0e3fa6/Packs/Tufin/doc_files/tufin-search-application-connections.png)
+![image](../../doc_files/tufin-search-application-connections.png)
## Troubleshooting
---
diff --git a/Packs/UBIRCH/Integrations/UBIRCH/UBIRCH.yml b/Packs/UBIRCH/Integrations/UBIRCH/UBIRCH.yml
index 901f8429cc62..bd82ff186b7f 100644
--- a/Packs/UBIRCH/Integrations/UBIRCH/UBIRCH.yml
+++ b/Packs/UBIRCH/Integrations/UBIRCH/UBIRCH.yml
@@ -42,7 +42,7 @@ script:
commands:
- description: Create a list of sample incidents.
name: create-sample-incidents
- dockerimage: demisto/py3-tools:1.0.0.102774
+ dockerimage: demisto/py3-tools:1.0.0.114656
longRunning: true
runonce: false
script: '-'
diff --git a/Packs/UBIRCH/ReleaseNotes/1_0_3.md b/Packs/UBIRCH/ReleaseNotes/1_0_3.md
new file mode 100644
index 000000000000..0a3bb07d80ee
--- /dev/null
+++ b/Packs/UBIRCH/ReleaseNotes/1_0_3.md
@@ -0,0 +1,6 @@
+
+#### Integrations
+
+##### UBIRCH
+
+- Updated the Docker image to: *demisto/py3-tools:1.0.0.114656*.
diff --git a/Packs/UBIRCH/pack_metadata.json b/Packs/UBIRCH/pack_metadata.json
index a557a3327dea..6941642f33a1 100644
--- a/Packs/UBIRCH/pack_metadata.json
+++ b/Packs/UBIRCH/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "UBIRCH",
"description": "Integration to handle Ubirch incidents.",
"support": "partner",
- "currentVersion": "1.0.2",
+ "currentVersion": "1.0.3",
"author": "Ubirch GmbH",
"url": "https://ubirch.com/helpdesk",
"email": "support@ubirch.com",
diff --git a/Packs/URLHaus/Integrations/URLHaus/URLHaus.yml b/Packs/URLHaus/Integrations/URLHaus/URLHaus.yml
index f59119cf5c8a..451183eaca98 100644
--- a/Packs/URLHaus/Integrations/URLHaus/URLHaus.yml
+++ b/Packs/URLHaus/Integrations/URLHaus/URLHaus.yml
@@ -342,7 +342,7 @@ script:
- contextPath: File.Extension
description: File extension.
type: string
- dockerimage: demisto/python3:3.10.14.99865
+ dockerimage: demisto/python3:3.11.10.115186
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/URLHaus/ReleaseNotes/1_0_30.md b/Packs/URLHaus/ReleaseNotes/1_0_30.md
new file mode 100644
index 000000000000..992fc650572f
--- /dev/null
+++ b/Packs/URLHaus/ReleaseNotes/1_0_30.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### URLhaus
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/URLHaus/ReleaseNotes/1_0_31.md b/Packs/URLHaus/ReleaseNotes/1_0_31.md
new file mode 100644
index 000000000000..3ac3ac50f761
--- /dev/null
+++ b/Packs/URLHaus/ReleaseNotes/1_0_31.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### URLhaus
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
diff --git a/Packs/URLHaus/pack_metadata.json b/Packs/URLHaus/pack_metadata.json
index fe32edc13370..cedf04f4c095 100644
--- a/Packs/URLHaus/pack_metadata.json
+++ b/Packs/URLHaus/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "URLhaus",
"description": "URLhaus has the goal of sharing malicious URLs that are being used for malware distribution.",
"support": "xsoar",
- "currentVersion": "1.0.29",
+ "currentVersion": "1.0.31",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/Uptycs/Integrations/Uptycs/Uptycs.yml b/Packs/Uptycs/Integrations/Uptycs/Uptycs.yml
index b9fd641b7805..1919d74ff651 100644
--- a/Packs/Uptycs/Integrations/Uptycs/Uptycs.yml
+++ b/Packs/Uptycs/Integrations/Uptycs/Uptycs.yml
@@ -1518,7 +1518,7 @@ script:
- name: filename
required: true
description: The name of the file being uploaded. This file should be uploaded to Cortex XSOAR in the Playground War Room using the paperclip icon next to the CLI.
- dockerimage: demisto/auth-utils:1.0.0.91932
+ dockerimage: demisto/auth-utils:1.0.0.116752
isfetch: true
runonce: true
script: '-'
diff --git a/Packs/Uptycs/ReleaseNotes/1_0_13.md b/Packs/Uptycs/ReleaseNotes/1_0_13.md
new file mode 100644
index 000000000000..a4e697a54d3f
--- /dev/null
+++ b/Packs/Uptycs/ReleaseNotes/1_0_13.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Uptycs
+
+
+- Updated the Docker image to: *demisto/auth-utils:1.0.0.115527*.
diff --git a/Packs/Uptycs/ReleaseNotes/1_0_14.md b/Packs/Uptycs/ReleaseNotes/1_0_14.md
new file mode 100644
index 000000000000..454ca3e4e319
--- /dev/null
+++ b/Packs/Uptycs/ReleaseNotes/1_0_14.md
@@ -0,0 +1,9 @@
+
+#### Integrations
+
+##### Uptycs
+- Updated the Docker image to: *demisto/auth-utils:1.0.0.116752*.
+
+
+
+
diff --git a/Packs/Uptycs/pack_metadata.json b/Packs/Uptycs/pack_metadata.json
index 9ffcf506d82a..7f08abea0199 100644
--- a/Packs/Uptycs/pack_metadata.json
+++ b/Packs/Uptycs/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Uptycs",
"description": "Fetches data from the Uptycs database.",
"support": "partner",
- "currentVersion": "1.0.12",
+ "currentVersion": "1.0.14",
"author": "Uptycs Inc.",
"url": "https://www.uptycs.com",
"email": "support@uptycs.com",
diff --git a/Packs/Use_Case_Builder/ReleaseNotes/1_0_8.md b/Packs/Use_Case_Builder/ReleaseNotes/1_0_8.md
new file mode 100644
index 000000000000..8d7996bf27d2
--- /dev/null
+++ b/Packs/Use_Case_Builder/ReleaseNotes/1_0_8.md
@@ -0,0 +1,15 @@
+
+#### Scripts
+
+##### UseCaseAdoptionMetrics
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+##### CreateUseCaseTemplateList
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
diff --git a/Packs/Use_Case_Builder/Scripts/CreateUseCaseTemplateList/CreateUseCaseTemplateList.yml b/Packs/Use_Case_Builder/Scripts/CreateUseCaseTemplateList/CreateUseCaseTemplateList.yml
index 5bd58b7fb798..f34fa115e82e 100644
--- a/Packs/Use_Case_Builder/Scripts/CreateUseCaseTemplateList/CreateUseCaseTemplateList.yml
+++ b/Packs/Use_Case_Builder/Scripts/CreateUseCaseTemplateList/CreateUseCaseTemplateList.yml
@@ -2,7 +2,7 @@ commonfields:
id: CreateUseCaseTemplateList
version: -1
comment: ''
-dockerimage: demisto/python3:3.10.12.68714
+dockerimage: demisto/python3:3.11.10.116439
enabled: true
name: CreateUseCaseTemplateList
runas: DBotWeakRole
diff --git a/Packs/Use_Case_Builder/Scripts/UseCaseAdoptionMetrics/UseCaseAdoptionMetrics.yml b/Packs/Use_Case_Builder/Scripts/UseCaseAdoptionMetrics/UseCaseAdoptionMetrics.yml
index 8b9ccac951fe..1e5245d66320 100644
--- a/Packs/Use_Case_Builder/Scripts/UseCaseAdoptionMetrics/UseCaseAdoptionMetrics.yml
+++ b/Packs/Use_Case_Builder/Scripts/UseCaseAdoptionMetrics/UseCaseAdoptionMetrics.yml
@@ -2,7 +2,7 @@ commonfields:
id: UseCaseAdoptionMetrics
version: -1
comment: ''
-dockerimage: demisto/python3:3.10.12.68300
+dockerimage: demisto/python3:3.11.10.116439
enabled: true
name: UseCaseAdoptionMetrics
runas: DBotWeakRole
diff --git a/Packs/Use_Case_Builder/pack_metadata.json b/Packs/Use_Case_Builder/pack_metadata.json
index aaa5a7f34c2e..6d6eda1bfdf6 100644
--- a/Packs/Use_Case_Builder/pack_metadata.json
+++ b/Packs/Use_Case_Builder/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Use Case Builder",
"description": "To streamline the Use Case Design process and provide tools to help you get into production faster!",
"support": "community",
- "currentVersion": "1.0.7",
+ "currentVersion": "1.0.8",
"author": "Joe Cosgrove",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "jcosgrove@paloaltonetworks.com",
diff --git a/Packs/VMwareESXi/ModelingRules/VMwareESXiModelingRules/VMwareESXiModelingRules.xif b/Packs/VMwareESXi/ModelingRules/VMwareESXiModelingRules/VMwareESXiModelingRules.xif
deleted file mode 100644
index 2f4a33e64c98..000000000000
--- a/Packs/VMwareESXi/ModelingRules/VMwareESXiModelingRules/VMwareESXiModelingRules.xif
+++ /dev/null
@@ -1,92 +0,0 @@
-[MODEL: dataset="VMware_ESXi_raw", model=Endpoint]
- filter _raw_log contains "hostd" or _raw_log contains "vcpu" or _raw_log contains "svga" or _raw_log contains "mks" or _raw_log contains "vmx"
-| filter _raw_log not contains "verbose"
-|alter request_time = arrayindex(regextract(_raw_log ,"(\d+\-\d+\-\d+T\d+\:\d+\:\d+Z?\.?\d+Z)"),0)
-|alter service_name= arrayindex(regextract(_raw_log ,"\d+\-\d+\-\d+T\d+\:\d+\:\d+\S\d+\w+\s?\w+\|?\s([^\[?\|?\]]+)") ,0)
-// extract message
-| alter message1= arrayindex(regextract(_raw_log , "\|\s\w+\:\s([^\"]+)") ,0)
-| alter message2 = arrayindex(regextract(_raw_log , "opID\=[^\s]+\s([^\"]+)"),0)
-| alter message = coalesce(message2 ,message1)
-// end extract message
-| alter opID= arrayindex(regextract(_raw_log ,"opID\=([^\]]+)") ,0)
-| alter originator=arrayindex(regextract(_raw_log ,"Originator\@(\d+)") ,0)
-// extract username
-| alter user= arrayindex(regextract(_raw_log ,"user\:\s(\w+)") ,0)
-| alter vpxuser= arrayindex(regextract(_raw_log ,"vpxuser\@(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"),0)
-| alter username = coalesce(user ,vpxuser)
-// end extract user
-| alter sub = arrayindex(regextract(_raw_log ,"sub\=([^\s]+)") ,0)
-| alter XDM.Endpoint.operation =service_name
-| alter XDM.Endpoint.operation =opID
-| alter XDM.Endpoint.Observer.unique_identifier =originator
-| alter XDM.Endpoint.original_event_sub_type= sub
-| alter XDM.Endpoint.Actor.user.identifier =username
-| alter XDM.Endpoint.event_timestamp= parse_timestamp("%Y-%m-%dT%H:%M:%E3SZ",request_time)
-| alter XDM.Endpoint.threat.description=message;
-
-
-[MODEL: dataset="VMware_ESXi_raw", model=Audit]
-filter _raw_log contains "verbose" or _raw_log contains "vpax" or _raw_log contains "scsiCorrelator" or _raw_log contains "cpu" or _raw_log contains "mark" or _raw_log contains "vmkeventd" or _raw_log contains "vmkdevmgr" or _raw_log contains "lwsmd" or _raw_log contains "smrtd" or _raw_log contains "sysboot" or _raw_log contains "ESXShell" or _raw_log contains "SSH" or _raw_log contains "mark" or _raw_log contains "esxupdate" or _raw_log contains "dhclient-um"
-// extract message
-| alter message1= arrayindex(regextract(_raw_log ,"opID\=[^\s]+\s([^\"]+)") ,0)
-| alter message2 = arrayindex(regextract(_raw_log ,"\:\s([^\&]+)"),0)
-| alter message = coalesce(message2 ,message1)
-// end extract message
-// extract timestamp part
-| alter boot_timestamp= arrayindex(regextract(_raw_log ,"\[(\d+\-\d+\-\d+\s+\d+\:\d+\:\d+\.\d+)") ,0)
- ,extract_timestamp2= arrayindex(regextract(_raw_log ,"(\d+\-\d+\-\d+T\d+\:\d+\:\d+)") ,0)
-| alter timestamp2 = concat(extract_timestamp2 ,".000000")
- ,boot_timestamp =replex(boot_timestamp , " ","T")
-| alter Timestamp = coalesce(boot_timestamp , Timestamp2)
-| alter the_timestamp = parse_timestamp("%Y-%m-%dT%H:%M:%E6S",Timestamp )
-// end extarct timestamp
-| alter Timestamp= arrayindex(regextract(_raw_log ,"(\d+\-\d+\-\d+T\d+\:\d+\:\d+Z?\.?\d+Z)") ,0)
-| alter opID= arrayindex(regextract(_raw_log ,"opID\=([^\]]+)") ,0)
-| alter originator=arrayindex(regextract(_raw_log ,"Originator\@(\d+)") ,0)
-| alter sub = arrayindex(regextract(_raw_log ," sub\=([^\]]+)") ,0)
-| alter process_name = arrayindex(regextract(_raw_log ,"\w+\:\s\[(\w+)") ,0)
-// extract severity_alert
-| alter severity_alert1= arrayindex(regextract(_raw_log ,"\d+\:\s\w+\:\s(\w+)") ,0)
-| alter severity_alert2 = arrayindex(regextract(_raw_log ,"\:\w+\)([^\:M]+)"),0)
-| alter severity_alert = coalesce(severity_alert2 ,severity_alert1)
-// end extract severity_alert
-// extract service_name
-| alter service_name2 = arrayindex(regextract(_raw_log ,"\w+\:\s\d+\:\s(\w+)\:\s") ,0)
-| alter service_name1 = arrayindex(regextract(_raw_log ,"\-\d+\-\d+T\d+\:\d+\:\d+\S+\s(\w+)"),0)
-| alter service_name3 = arrayindex(regextract(_raw_log ,"\d+\]\s(\w+)\:\s"),0)
-| alter service_name = coalesce(service_name2,service_name1,service_name3)
-// end extract service_name
-| alter source_ip= arrayindex(regextract(_raw_log ,"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})") ,0)
-| alter XDM.Audit.operation= service_name
-| alter XDM.Audit.event_timestamp = the_timestamp
-| alter XDM.Audit.audited_resource.id =opID
-| alter XDM.Audit.TriggeredBy.identity.uuid= originator
-| alter XDM.Audit.identity.sub_type= sub
-| alter XDM.Audit.TriggeredBy.identity.name= process_name
-| alter XDM.Audit.threat.description= message
-| alter XDM.Audit.threat.severity= severity_alert
-| alter XDM.Audit.TriggeredBy.ipv4= source_ip;
-
-
-[MODEL: dataset="VMware_ESXi_raw", model=Auth]
-filter _raw_log contains "vmauthd" or _raw_log contains "sshd"
-| alter Timestamp= arrayindex(regextract(_raw_log ,"(\d+\-\d+\-\d+T\d+\:\d+\:\d+Z?\.?\d+Z)") ,0)
-| alter Service_name = arrayindex(regextract(_raw_log ,"\d+\-\d+\-\d+T\d+\:\d+\:\d+\S+\s(\w+)") ,0)
-| alter pid = arrayindex(regextract(_raw_log ,"Z\s\w+\[(\d+)"),0)
-| alter message= arrayindex(regextract(_raw_log ,"Z\s\w+\[(\d+)") ,0)
-// extract source_ip
-| alter sourceip2= arrayindex(regextract(_raw_log ,"from\s(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})") ,0)
-| alter sourceip1 = arrayindex(regextract(_raw_log ,"rhost\=(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"),0)
-| alter sourceip = coalesce(sourceip1 ,sourceip2)
-// end extract source_ip
-// extract username
-| alter username1= arrayindex(regextract(_raw_log ,"user\=(\w+)") ,0)
-| alter username2 = arrayindex(regextract(_raw_log ,"ruser\=(\w+)"),0)
-| alter username = coalesce(username1 ,username2)
-// end extract username
-| alter XDM.Auth.event_timestamp =parse_timestamp("%Y-%m-%dT%H:%M:%SZ",Timestamp)
-| alter XDM.Auth.Observer.unique_identifier =service_name
-| alter XDM.Auth.Client.process.pid =To_number(pid)
-| alter XDM.Auth.threat.description =message
-| alter XDM.Auth.Client.ipv4 =sourceip
-| alter XDM.Auth.Client.user.username =username;
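For readers less familiar with XQL, the extract-then-coalesce pattern used throughout the removed rules above maps directly onto plain regular expressions. A minimal Python sketch of the Auth model's source-IP and username extraction, using a made-up sshd-style log line (the \b is added so "user=" does not also match "ruser="):

```python
import re

# Made-up sshd-style line, only to illustrate the pattern used in the
# removed [MODEL: dataset="VMware_ESXi_raw", model=Auth] rule above.
raw_log = "2023-01-01T10:00:00Z sshd[4242]: Failed password from 10.1.2.3 ruser=alice"

def first_match(pattern, text):
    """Mimics arrayindex(regextract(...), 0): first capture group or None."""
    m = re.search(pattern, text)
    return m.group(1) if m else None

# Two candidate extractions per field, combined like the XQL coalesce():
source_ip = (first_match(r"rhost=(\d{1,3}(?:\.\d{1,3}){3})", raw_log)
             or first_match(r"from\s(\d{1,3}(?:\.\d{1,3}){3})", raw_log))
username = (first_match(r"\buser=(\w+)", raw_log)
            or first_match(r"ruser=(\w+)", raw_log))

print(source_ip, username)  # 10.1.2.3 alice
```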
diff --git a/Packs/VMwareESXi/ModelingRules/VMwareESXiModelingRules/VMwareESXiModelingRules.yml b/Packs/VMwareESXi/ModelingRules/VMwareESXiModelingRules/VMwareESXiModelingRules.yml
deleted file mode 100644
index c9be666961f1..000000000000
--- a/Packs/VMwareESXi/ModelingRules/VMwareESXiModelingRules/VMwareESXiModelingRules.yml
+++ /dev/null
@@ -1,7 +0,0 @@
-fromversion: 6.8.0
-id: VMwareESXi
-name: VMware ESXi
-rules: ''
-schema: ''
-tags: VMwareESXi
-toversion: 6.9.9
diff --git a/Packs/VMwareESXi/ModelingRules/VMwareESXiModelingRules/VMwareESXiModelingRules_schema.json b/Packs/VMwareESXi/ModelingRules/VMwareESXiModelingRules/VMwareESXiModelingRules_schema.json
deleted file mode 100644
index 32608854134e..000000000000
--- a/Packs/VMwareESXi/ModelingRules/VMwareESXiModelingRules/VMwareESXiModelingRules_schema.json
+++ /dev/null
@@ -1,8 +0,0 @@
-{
- "VMware_ESXi_raw": {
- "_raw_log": {
- "type": "string",
- "is_array": false
- }
- }
-}
diff --git a/Packs/VMwareESXi/ReleaseNotes/1_0_6.md b/Packs/VMwareESXi/ReleaseNotes/1_0_6.md
new file mode 100644
index 000000000000..31541d288cfc
--- /dev/null
+++ b/Packs/VMwareESXi/ReleaseNotes/1_0_6.md
@@ -0,0 +1,5 @@
+#### Modeling Rules
+##### VMware ESXi
+<~XSIAM>
+Deprecated an outdated Modeling Rule version to ensure backend compatibility, with no impact on the current state.
+</~XSIAM>
\ No newline at end of file
diff --git a/Packs/VMwareESXi/pack_metadata.json b/Packs/VMwareESXi/pack_metadata.json
index c40d1207159e..9599d9856f09 100644
--- a/Packs/VMwareESXi/pack_metadata.json
+++ b/Packs/VMwareESXi/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "VMware ESXi",
"description": "Modeling Rules for the VMware ESXi logs collector",
"support": "xsoar",
- "currentVersion": "1.0.5",
+ "currentVersion": "1.0.6",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/VaronisSaaS/Integrations/VaronisSaaS/VaronisSaaS_image.png b/Packs/VaronisSaaS/Integrations/VaronisSaaS/VaronisSaaS_image.png
index 9b1a343dd9d3..634c150ae1d4 100644
Binary files a/Packs/VaronisSaaS/Integrations/VaronisSaaS/VaronisSaaS_image.png and b/Packs/VaronisSaaS/Integrations/VaronisSaaS/VaronisSaaS_image.png differ
diff --git a/Packs/VaronisSaaS/ReleaseNotes/1_0_8.md b/Packs/VaronisSaaS/ReleaseNotes/1_0_8.md
new file mode 100644
index 000000000000..b20f5bcfe241
--- /dev/null
+++ b/Packs/VaronisSaaS/ReleaseNotes/1_0_8.md
@@ -0,0 +1,3 @@
+#### Integrations
+##### Varonis SaaS
+- Updated the Varonis logo image for the marketplace.
diff --git a/Packs/VaronisSaaS/pack_metadata.json b/Packs/VaronisSaaS/pack_metadata.json
index 22e51b83c69f..42a70484059a 100644
--- a/Packs/VaronisSaaS/pack_metadata.json
+++ b/Packs/VaronisSaaS/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Varonis SaaS",
"description": "Streamline alerts, events and related forensic information from Varonis SaaS",
"support": "partner",
- "currentVersion": "1.0.7",
+ "currentVersion": "1.0.8",
"author": "Varonis",
"url": "https://www.varonis.com/support",
"email": "",
diff --git a/Packs/VectraXDR/ReleaseNotes/1_0_6.md b/Packs/VectraXDR/ReleaseNotes/1_0_6.md
new file mode 100644
index 000000000000..c051a3c247a9
--- /dev/null
+++ b/Packs/VectraXDR/ReleaseNotes/1_0_6.md
@@ -0,0 +1,27 @@
+
+#### Scripts
+
+##### VectraXDRDisplayEntityDetections
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+##### VectraXDRSyncEntityAssignment
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+##### VectraXDRSyncEntityDetections
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+##### VectraXDRAddNotesInLayout
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
diff --git a/Packs/VectraXDR/Scripts/VectraXDRAddNotesInLayout/VectraXDRAddNotesInLayout.yml b/Packs/VectraXDR/Scripts/VectraXDRAddNotesInLayout/VectraXDRAddNotesInLayout.yml
index 9dd5f4b9abc8..bc07143542f1 100644
--- a/Packs/VectraXDR/Scripts/VectraXDRAddNotesInLayout/VectraXDRAddNotesInLayout.yml
+++ b/Packs/VectraXDR/Scripts/VectraXDRAddNotesInLayout/VectraXDRAddNotesInLayout.yml
@@ -10,7 +10,7 @@ enabled: true
scripttarget: 0
subtype: python3
runonce: false
-dockerimage: demisto/python3:3.10.13.80593
+dockerimage: demisto/python3:3.11.10.116439
runas: DBotWeakRole
fromversion: 6.8.0
tests:
diff --git a/Packs/VectraXDR/Scripts/VectraXDRDisplayEntityDetections/VectraXDRDisplayEntityDetections.yml b/Packs/VectraXDR/Scripts/VectraXDRDisplayEntityDetections/VectraXDRDisplayEntityDetections.yml
index 9432e43831f8..b003f82452ae 100644
--- a/Packs/VectraXDR/Scripts/VectraXDRDisplayEntityDetections/VectraXDRDisplayEntityDetections.yml
+++ b/Packs/VectraXDR/Scripts/VectraXDRDisplayEntityDetections/VectraXDRDisplayEntityDetections.yml
@@ -11,7 +11,7 @@ enabled: true
scripttarget: 0
subtype: python3
runonce: false
-dockerimage: demisto/python3:3.10.13.80593
+dockerimage: demisto/python3:3.11.10.116439
runas: DBotWeakRole
fromversion: 6.8.0
tests:
diff --git a/Packs/VectraXDR/Scripts/VectraXDRSyncEntityAssignment/VectraXDRSyncEntityAssignment.yml b/Packs/VectraXDR/Scripts/VectraXDRSyncEntityAssignment/VectraXDRSyncEntityAssignment.yml
index 84e1e8071ebb..bfd8d226e0a9 100644
--- a/Packs/VectraXDR/Scripts/VectraXDRSyncEntityAssignment/VectraXDRSyncEntityAssignment.yml
+++ b/Packs/VectraXDR/Scripts/VectraXDRSyncEntityAssignment/VectraXDRSyncEntityAssignment.yml
@@ -11,7 +11,7 @@ enabled: true
scripttarget: 0
subtype: python3
runonce: false
-dockerimage: demisto/python3:3.10.13.80593
+dockerimage: demisto/python3:3.11.10.116439
runas: DBotWeakRole
fromversion: 6.8.0
tests:
diff --git a/Packs/VectraXDR/Scripts/VectraXDRSyncEntityDetections/VectraXDRSyncEntityDetections.yml b/Packs/VectraXDR/Scripts/VectraXDRSyncEntityDetections/VectraXDRSyncEntityDetections.yml
index 72d9be2f5d57..55ef49ff8cf7 100644
--- a/Packs/VectraXDR/Scripts/VectraXDRSyncEntityDetections/VectraXDRSyncEntityDetections.yml
+++ b/Packs/VectraXDR/Scripts/VectraXDRSyncEntityDetections/VectraXDRSyncEntityDetections.yml
@@ -11,7 +11,7 @@ enabled: true
scripttarget: 0
subtype: python3
runonce: false
-dockerimage: demisto/python3:3.10.13.80593
+dockerimage: demisto/python3:3.11.10.116439
runas: DBotWeakRole
fromversion: 6.8.0
tests:
diff --git a/Packs/VectraXDR/pack_metadata.json b/Packs/VectraXDR/pack_metadata.json
index 76b5ba35110a..c0728fcf6e63 100644
--- a/Packs/VectraXDR/pack_metadata.json
+++ b/Packs/VectraXDR/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Vectra XDR",
"description": "Vectra XDR pack empowers the SOC to create incidents using Vectra AI's Attack Signal Intelligence.",
"support": "partner",
- "currentVersion": "1.0.5",
+ "currentVersion": "1.0.6",
"author": "Vectra AI",
"url": "https://support.vectra.ai",
"email": "support@vectra.ai",
diff --git a/Packs/Veeam/ReleaseNotes/1_0_1.md b/Packs/Veeam/ReleaseNotes/1_0_1.md
new file mode 100644
index 000000000000..9db6ef37e085
--- /dev/null
+++ b/Packs/Veeam/ReleaseNotes/1_0_1.md
@@ -0,0 +1,22 @@
+#### XSIAM Dashboards
+
+##### New: Veeam Data Platform Monitoring
+Added the **'Veeam Data Platform Monitoring'** Cortex XSIAM dashboard for the Veeam App pack.
+##### New: Veeam Security Activities
+Added the **'Veeam Security Activities'** Cortex XSIAM dashboard for the Veeam App pack.
+
+#### XSIAM Reports
+##### New: All Veeam security events with Critical and High severity for the last 24h
+Added the **'All Veeam security events with Critical and High severity for the last 24h'** Cortex XSIAM report for the Veeam App pack.
+##### New: All Veeam malware detection events for the last 24h
+Added the **'All Veeam malware detection events for the last 24h'** Cortex XSIAM report for the Veeam App pack.
+##### New: All Veeam four-eyes authorization events for the last 24h
+Added the **'All Veeam four-eyes authorization events for the last 24h'** Cortex XSIAM report for the Veeam App pack.
+##### New: All Veeam finished jobs for the last 24h
+Added the **'All Veeam finished jobs for the last 24h'** Cortex XSIAM report for the Veeam App pack.
+##### New: All Veeam failed multi-factor authentication events for the last 24h
+Added the **'All Veeam failed multi-factor authentication events for the last 24h'** Cortex XSIAM report for the Veeam App pack.
+##### New: All Veeam security events for the last 7 days
+Added the **'All Veeam security events for the last 7 days'** Cortex XSIAM report for the Veeam App pack.
+##### New: All Veeam triggered alarms for the last 7 days
+Added the **'All Veeam triggered alarms for the last 7 days'** Cortex XSIAM report for the Veeam App pack.
\ No newline at end of file
diff --git a/Packs/Veeam/ReleaseNotes/1_0_2.md b/Packs/Veeam/ReleaseNotes/1_0_2.md
new file mode 100644
index 000000000000..3846053685e7
--- /dev/null
+++ b/Packs/Veeam/ReleaseNotes/1_0_2.md
@@ -0,0 +1 @@
+***Reverted the changes released in the previous version (1.0.1) due to technical issues.***
diff --git a/Packs/Veeam/pack_metadata.json b/Packs/Veeam/pack_metadata.json
index 0466de162d9f..bb3e25eaa6d5 100644
--- a/Packs/Veeam/pack_metadata.json
+++ b/Packs/Veeam/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Veeam App for Palo Alto Networks XSOAR",
"description": "Veeam content pack for Cortex XSOAR.",
"support": "partner",
- "currentVersion": "1.0.0",
+ "currentVersion": "1.0.2",
"author": "Veeam Software",
"url": "https://www.veeam.com/support.html",
"email": "paloaltoappsupport@veeam.com",
diff --git a/Packs/Vertica/Integrations/Vertica/Vertica.yml b/Packs/Vertica/Integrations/Vertica/Vertica.yml
index bb123e205902..db77e7a8361b 100644
--- a/Packs/Vertica/Integrations/Vertica/Vertica.yml
+++ b/Packs/Vertica/Integrations/Vertica/Vertica.yml
@@ -46,7 +46,7 @@ script:
description: The content of rows.
type: string
description: Executes a query on the Vertica database.
- dockerimage: demisto/py3-tools:1.0.0.108682
+ dockerimage: demisto/py3-tools:1.0.0.114656
subtype: python3
tests:
- Vertica Test
diff --git a/Packs/Vertica/ReleaseNotes/1_0_6.md b/Packs/Vertica/ReleaseNotes/1_0_6.md
new file mode 100644
index 000000000000..d2655d147d90
--- /dev/null
+++ b/Packs/Vertica/ReleaseNotes/1_0_6.md
@@ -0,0 +1,6 @@
+
+#### Integrations
+
+##### Vertica
+
+- Updated the Docker image to: *demisto/py3-tools:1.0.0.114656*.
diff --git a/Packs/Vertica/pack_metadata.json b/Packs/Vertica/pack_metadata.json
index 886b8c9467b0..98db8ab4fe04 100644
--- a/Packs/Vertica/pack_metadata.json
+++ b/Packs/Vertica/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Vertica",
"description": "Analytic database management software",
"support": "xsoar",
- "currentVersion": "1.0.5",
+ "currentVersion": "1.0.6",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/VirusTotal/Integrations/VirusTotalV3/README.md b/Packs/VirusTotal/Integrations/VirusTotalV3/README.md
index 1d25118031b8..fca6efeeddf7 100644
--- a/Packs/VirusTotal/Integrations/VirusTotalV3/README.md
+++ b/Packs/VirusTotal/Integrations/VirusTotalV3/README.md
@@ -39,7 +39,7 @@ The integration was integrated and tested with version v3 API of VirusTotal.
### Acquiring your API key
Your API key can be found in your VirusTotal account user menu:
-![how to get api key in virus total](https://files.readme.io/ddeb298-Screen_Shot_2019-10-17_at_3.17.04_PM.png)
+![how to get api key in virus total](../../doc_files/ddeb298-Screen_Shot_2019-10-17_at_3_17_04_PM.png)
Your API key carries all your privileges, so keep it secure and don't share it with anyone.
## DBot Score / Reputation scores
diff --git a/Packs/VirusTotal/ReleaseNotes/2_6_24.md b/Packs/VirusTotal/ReleaseNotes/2_6_24.md
new file mode 100644
index 000000000000..2ee68ab8c8e9
--- /dev/null
+++ b/Packs/VirusTotal/ReleaseNotes/2_6_24.md
@@ -0,0 +1,3 @@
+## VirusTotal
+
+- Documentation and metadata improvements.
diff --git a/Packs/VirusTotal/doc_files/ddeb298-Screen_Shot_2019-10-17_at_3_17_04_PM.png b/Packs/VirusTotal/doc_files/ddeb298-Screen_Shot_2019-10-17_at_3_17_04_PM.png
new file mode 100644
index 000000000000..a40e6ed1869e
Binary files /dev/null and b/Packs/VirusTotal/doc_files/ddeb298-Screen_Shot_2019-10-17_at_3_17_04_PM.png differ
diff --git a/Packs/VirusTotal/pack_metadata.json b/Packs/VirusTotal/pack_metadata.json
index 4f429be6e042..474f41e60f47 100644
--- a/Packs/VirusTotal/pack_metadata.json
+++ b/Packs/VirusTotal/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "VirusTotal",
"description": "Analyze suspicious hashes, URLs, domains and IP addresses",
"support": "partner",
- "currentVersion": "2.6.23",
+ "currentVersion": "2.6.24",
"author": "VirusTotal",
"url": "https://www.virustotal.com",
"email": "contact@virustotal.com",
diff --git a/Packs/WebFileRepository/Integrations/WebFileRepository/README.md b/Packs/WebFileRepository/Integrations/WebFileRepository/README.md
index 5167314b2a96..b0d37c5f13c3 100644
--- a/Packs/WebFileRepository/Integrations/WebFileRepository/README.md
+++ b/Packs/WebFileRepository/Integrations/WebFileRepository/README.md
@@ -39,14 +39,18 @@ In a web browser, go to **`http://<server>:<port>`**
To access the File Management UI by instance name, make sure ***Instance execute external*** is enabled.
1. In Cortex XSOAR 6.x:
+
1. Navigate to **Settings > About > Troubleshooting**.
2. In the **Server Configuration** section, verify that the `instance.execute.external.<INSTANCE-NAME>` key is set to `true`. If this key does not exist, click **+ Add Server Configuration**, add the `instance.execute.external.<INSTANCE-NAME>` key, and set the value to `true`. See [this documentation](https://xsoar.pan.dev/docs/reference/articles/long-running-invoke) for further information.
-2. In a web browser:
+3. In a web browser:
+
+ - For Cortex XSOAR 6.x: Go to `https://<CORTEX-XSOAR-URL>/instance/execute/<INSTANCE-NAME>`.
+ - For Cortex XSOAR 8 On-prem, Cortex XSOAR 8 Cloud, or Cortex XSIAM: Go to `https://ext-<TENANT>.crtx.<REGION>.paloaltonetworks.com/xsoar/instance/execute/<INSTANCE-NAME>`.
+ - For Multi Tenant environments: Go to `https://<CORTEX-XSOAR-URL>/acc_<TENANT-NAME>/instance/execute/<INSTANCE-NAME>`.
- - (For Cortex XSOAR 6.x) go to `https://<CORTEX-XSOAR-URL>/instance/execute/<INSTANCE-NAME>/`
- - (For Cortex XSOAR 8 or Cortex XSIAM) `https://ext-<TENANT>.crtx.<REGION>.paloaltonetworks.com/xsoar/instance/execute/<INSTANCE-NAME>`
- - (In Multi Tenant environments) `https://<CORTEX-XSOAR-URL>/acc_<TENANT-NAME>/instance/execute/<INSTANCE-NAME>/`
-
+ **Note**:
+ For Cortex XSOAR 8 On-prem, you need to add the `ext-<CORTEX-XSOAR-FQDN>` FQDN DNS record to map the Cortex XSOAR DNS name to the external IP address.
+ For example, `ext-xsoar.mycompany.com`.
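For a quick smoke test of the URL patterns above, a short script can confirm that the long-running endpoint answers through `/instance/execute/`. This is a minimal sketch, not part of the pack: the server URL and instance name are placeholders, and it assumes the `requests` package is installed and ***Instance execute external*** is already enabled.

```python
import requests

XSOAR_URL = "https://xsoar.example.com"    # placeholder: your server URL
INSTANCE_NAME = "web_file_repository_1"    # placeholder: your instance name

# When 'Instance execute external' is enabled, the File Management UI
# should answer this URL with an HTML page (HTTP 200).
resp = requests.get(f"{XSOAR_URL}/instance/execute/{INSTANCE_NAME}", timeout=10)
print(resp.status_code, resp.headers.get("Content-Type"))
```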
## Commands
diff --git a/Packs/WinRM/Integrations/WindowsRemoteManagement/WindowsRemoteManagement.yml b/Packs/WinRM/Integrations/WindowsRemoteManagement/WindowsRemoteManagement.yml
index 2c75c39db7d0..f91177d30b5d 100644
--- a/Packs/WinRM/Integrations/WindowsRemoteManagement/WindowsRemoteManagement.yml
+++ b/Packs/WinRM/Integrations/WindowsRemoteManagement/WindowsRemoteManagement.yml
@@ -338,7 +338,7 @@ script:
- contextPath: WinRM.Powershell.Status
description: Status code of the WinRM command.
description: Executes a PowerShell script on the endpoint.
- dockerimage: demisto/auth-utils:1.0.0.76157
+ dockerimage: demisto/auth-utils:1.0.0.115527
subtype: python3
fromversion: 5.0.0
tests:
diff --git a/Packs/WinRM/ReleaseNotes/1_0_5.md b/Packs/WinRM/ReleaseNotes/1_0_5.md
new file mode 100644
index 000000000000..53b62c0ef83a
--- /dev/null
+++ b/Packs/WinRM/ReleaseNotes/1_0_5.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Windows Remote Management (Beta)
+
+
+- Updated the Docker image to: *demisto/auth-utils:1.0.0.115527*.
diff --git a/Packs/WinRM/pack_metadata.json b/Packs/WinRM/pack_metadata.json
index 3df62028a7f9..48fb2571cda6 100644
--- a/Packs/WinRM/pack_metadata.json
+++ b/Packs/WinRM/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Windows Remote Management",
"description": "Uses the Python pywinrm library and commands to execute either a process or using Powershell scripts.",
"support": "xsoar",
- "currentVersion": "1.0.4",
+ "currentVersion": "1.0.5",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/XForceExchange/Integrations/XFE_v2/XFE_v2.yml b/Packs/XForceExchange/Integrations/XFE_v2/XFE_v2.yml
index 5748b530c44a..5aa2b6a2926b 100644
--- a/Packs/XForceExchange/Integrations/XFE_v2/XFE_v2.yml
+++ b/Packs/XForceExchange/Integrations/XFE_v2/XFE_v2.yml
@@ -469,7 +469,7 @@ script:
- contextPath: XFE.CVESearch.Bookmark
description: Bookmark used to page through results.
type: String
- dockerimage: demisto/python3:3.10.13.73190
+ dockerimage: demisto/python3:3.11.10.115186
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/XForceExchange/ReleaseNotes/1_1_28.md b/Packs/XForceExchange/ReleaseNotes/1_1_28.md
new file mode 100644
index 000000000000..e67bb17924f8
--- /dev/null
+++ b/Packs/XForceExchange/ReleaseNotes/1_1_28.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### IBM X-Force Exchange v2
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/XForceExchange/ReleaseNotes/1_1_29.md b/Packs/XForceExchange/ReleaseNotes/1_1_29.md
new file mode 100644
index 000000000000..78d034aa9174
--- /dev/null
+++ b/Packs/XForceExchange/ReleaseNotes/1_1_29.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### IBM X-Force Exchange v2
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.115186*.
diff --git a/Packs/XForceExchange/pack_metadata.json b/Packs/XForceExchange/pack_metadata.json
index 5b1b7b6f8b6e..f542e8a1fbfb 100644
--- a/Packs/XForceExchange/pack_metadata.json
+++ b/Packs/XForceExchange/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "IBM X-Force Exchange",
"description": "IBM X-Force Exchange lets you receive threat intelligence about applications,\n IP addresses, URls and hashes",
"support": "xsoar",
- "currentVersion": "1.1.27",
+ "currentVersion": "1.1.29",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/XMatters/Integrations/xMatters/xMatters.yml b/Packs/XMatters/Integrations/xMatters/xMatters.yml
index 2f056072babd..8ccccb13a615 100644
--- a/Packs/XMatters/Integrations/xMatters/xMatters.yml
+++ b/Packs/XMatters/Integrations/xMatters/xMatters.yml
@@ -189,7 +189,7 @@ script:
- contextPath: xMatters.GetEvent.SubmitterName
description: The user or integration that created the event
description: Get a single event from xMatters.
- dockerimage: demisto/python3:3.10.13.72123
+ dockerimage: demisto/python3:3.11.10.116439
isfetch: true
script: '-'
subtype: python3
diff --git a/Packs/XMatters/ReleaseNotes/1_0_15.md b/Packs/XMatters/ReleaseNotes/1_0_15.md
new file mode 100644
index 000000000000..35e31bfaf59f
--- /dev/null
+++ b/Packs/XMatters/ReleaseNotes/1_0_15.md
@@ -0,0 +1,10 @@
+
+#### Integrations
+
+##### xMatters
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+
diff --git a/Packs/XMatters/pack_metadata.json b/Packs/XMatters/pack_metadata.json
index f983257d7933..5f2986d2daf6 100644
--- a/Packs/XMatters/pack_metadata.json
+++ b/Packs/XMatters/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "xMatters",
"description": "Use the xMatters pack to trigger events to on-call groups or users and wait for their response. Use their response to branch and take action in XSOAR.",
"support": "partner",
- "currentVersion": "1.0.14",
+ "currentVersion": "1.0.15",
"author": "xMatters",
"url": "https://support.xmatters.com/hc/en-us/requests/new",
"email": "support@xmatters.com",
diff --git a/Packs/XSOAR-SimpleDevToProd/ReleaseNotes/1_0_8.md b/Packs/XSOAR-SimpleDevToProd/ReleaseNotes/1_0_8.md
new file mode 100644
index 000000000000..d3eddbf0d9d6
--- /dev/null
+++ b/Packs/XSOAR-SimpleDevToProd/ReleaseNotes/1_0_8.md
@@ -0,0 +1,15 @@
+
+#### Scripts
+
+##### IsDemistoRestAPIInstanceAvailable
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+##### CustomContentBundleWizardry
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
diff --git a/Packs/XSOAR-SimpleDevToProd/Scripts/CustomContentBundleWizardry/CustomContentBundleWizardry.yml b/Packs/XSOAR-SimpleDevToProd/Scripts/CustomContentBundleWizardry/CustomContentBundleWizardry.yml
index c79511abbea6..c586563b8218 100644
--- a/Packs/XSOAR-SimpleDevToProd/Scripts/CustomContentBundleWizardry/CustomContentBundleWizardry.yml
+++ b/Packs/XSOAR-SimpleDevToProd/Scripts/CustomContentBundleWizardry/CustomContentBundleWizardry.yml
@@ -16,7 +16,7 @@ comment: This automation accepts an XSOAR custom content bundle, and either retu
commonfields:
id: CustomContentBundleWizardry
version: -1
-dockerimage: demisto/python3:3.10.13.80014
+dockerimage: demisto/python3:3.11.10.116439
enabled: true
name: CustomContentBundleWizardry
outputs:
diff --git a/Packs/XSOAR-SimpleDevToProd/Scripts/IsDemistoRestAPIInstanceAvailable/IsDemistoRestAPIInstanceAvailable.yml b/Packs/XSOAR-SimpleDevToProd/Scripts/IsDemistoRestAPIInstanceAvailable/IsDemistoRestAPIInstanceAvailable.yml
index 708000b29d5b..e7643fefb7cd 100644
--- a/Packs/XSOAR-SimpleDevToProd/Scripts/IsDemistoRestAPIInstanceAvailable/IsDemistoRestAPIInstanceAvailable.yml
+++ b/Packs/XSOAR-SimpleDevToProd/Scripts/IsDemistoRestAPIInstanceAvailable/IsDemistoRestAPIInstanceAvailable.yml
@@ -7,7 +7,7 @@ commonfields:
id: IsDemistoRestAPIInstanceAvailable
id_x2: IsCoreRestAPIInstanceAvailable
version: -1
-dockerimage: demisto/python3:3.10.13.80014
+dockerimage: demisto/python3:3.11.10.116439
enabled: true
name: IsDemistoRestAPIInstanceAvailable
name_x2: IsCoreRestAPIInstanceAvailable
diff --git a/Packs/XSOAR-SimpleDevToProd/pack_metadata.json b/Packs/XSOAR-SimpleDevToProd/pack_metadata.json
index c34f87e6b8e1..ee5a130588ef 100644
--- a/Packs/XSOAR-SimpleDevToProd/pack_metadata.json
+++ b/Packs/XSOAR-SimpleDevToProd/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "XSOAR - Simple Dev to Prod",
"description": "This pack simplifies exporting custom content items between your XSOAR environments.",
"support": "community",
- "currentVersion": "1.0.7",
+ "currentVersion": "1.0.8",
"author": "Mike Beauchamp",
"url": "",
"email": "mbeauchamp@paloaltonetworks.com",
diff --git a/Packs/XSOARStorage/Integrations/XSOARStorage/XSOARStorage.yml b/Packs/XSOARStorage/Integrations/XSOARStorage/XSOARStorage.yml
index 896bbf9ba48d..c1dd73190138 100644
--- a/Packs/XSOARStorage/Integrations/XSOARStorage/XSOARStorage.yml
+++ b/Packs/XSOARStorage/Integrations/XSOARStorage/XSOARStorage.yml
@@ -45,7 +45,7 @@ script:
- contextPath: XSOAR.Store
description: The data retrieved from the key.
type: string
- dockerimage: demisto/python3:3.10.13.72123
+ dockerimage: demisto/python3:3.11.10.116439
runonce: false
script: ''
subtype: python3
diff --git a/Packs/XSOARStorage/ReleaseNotes/1_0_6.md b/Packs/XSOARStorage/ReleaseNotes/1_0_6.md
new file mode 100644
index 000000000000..8578840050eb
--- /dev/null
+++ b/Packs/XSOARStorage/ReleaseNotes/1_0_6.md
@@ -0,0 +1,10 @@
+
+#### Integrations
+
+##### XSOAR Storage
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+
diff --git a/Packs/XSOARStorage/pack_metadata.json b/Packs/XSOARStorage/pack_metadata.json
index 74f1ce32f33b..77c5f608d4a4 100644
--- a/Packs/XSOARStorage/pack_metadata.json
+++ b/Packs/XSOARStorage/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "XSOAR Storage",
"description": "XSOAR Storage provides a server-wide Key/Value store that allows values to be stored and retrieved; it supports namespaces to assist with key collisions.\n",
"support": "community",
- "currentVersion": "1.0.5",
+ "currentVersion": "1.0.6",
"author": "D Masters",
"url": "",
"email": "",
diff --git a/Packs/XSOARSummaryDashboard/ReleaseNotes/1_0_3.md b/Packs/XSOARSummaryDashboard/ReleaseNotes/1_0_3.md
new file mode 100644
index 000000000000..503b64431fb3
--- /dev/null
+++ b/Packs/XSOARSummaryDashboard/ReleaseNotes/1_0_3.md
@@ -0,0 +1,6 @@
+
+#### Scripts
+
+##### RSSWidget_LC
+
+- Updated the Docker image to: *demisto/py3-tools:1.0.0.114656*.
diff --git a/Packs/XSOARSummaryDashboard/Scripts/RSSWidgetLC/RSSWidgetLC.yml b/Packs/XSOARSummaryDashboard/Scripts/RSSWidgetLC/RSSWidgetLC.yml
index da536ba9b2c1..ae69a92d9c7d 100644
--- a/Packs/XSOARSummaryDashboard/Scripts/RSSWidgetLC/RSSWidgetLC.yml
+++ b/Packs/XSOARSummaryDashboard/Scripts/RSSWidgetLC/RSSWidgetLC.yml
@@ -9,7 +9,7 @@ commonfields:
contentitemexportablefields:
contentitemfields:
fromServerVersion: ''
-dockerimage: demisto/py3-tools:1.0.0.102774
+dockerimage: demisto/py3-tools:1.0.0.114656
enabled: true
name: RSSWidget_LC
runas: DBotWeakRole
diff --git a/Packs/XSOARSummaryDashboard/pack_metadata.json b/Packs/XSOARSummaryDashboard/pack_metadata.json
index 51780d1dde83..3aa0dc741c72 100644
--- a/Packs/XSOARSummaryDashboard/pack_metadata.json
+++ b/Packs/XSOARSummaryDashboard/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "XSOAR Summary Dashboard",
"description": "Dashboard that shows overall platform performance as well as support links and cheat sheets for reference. The dashboard also pulls the most recent XSOAR live community blog posts.",
"support": "community",
- "currentVersion": "1.0.2",
+ "currentVersion": "1.0.3",
"author": "emitchell",
"url": "",
"email": "",
diff --git a/Packs/XSOAR_EDL_Checker/Integrations/XSOAREDLChecker/XSOAREDLChecker.yml b/Packs/XSOAR_EDL_Checker/Integrations/XSOAREDLChecker/XSOAREDLChecker.yml
index a045ac74ef8b..b88f3c64bfc7 100644
--- a/Packs/XSOAR_EDL_Checker/Integrations/XSOAREDLChecker/XSOAREDLChecker.yml
+++ b/Packs/XSOAR_EDL_Checker/Integrations/XSOAREDLChecker/XSOAREDLChecker.yml
@@ -43,7 +43,7 @@ script:
description: The Response or Error from the check.
- contextPath: EDLChecker.ItemsOnList
description: The number of indicators on the list, assuming a successful response.
- dockerimage: demisto/python3:3.10.13.80014
+ dockerimage: demisto/python3:3.11.10.116439
runonce: false
script: ''
subtype: python3
diff --git a/Packs/XSOAR_EDL_Checker/ReleaseNotes/1_1_2.md b/Packs/XSOAR_EDL_Checker/ReleaseNotes/1_1_2.md
new file mode 100644
index 000000000000..20fc32877788
--- /dev/null
+++ b/Packs/XSOAR_EDL_Checker/ReleaseNotes/1_1_2.md
@@ -0,0 +1,9 @@
+
+#### Integrations
+
+##### XSOAR EDL Checker
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
diff --git a/Packs/XSOAR_EDL_Checker/pack_metadata.json b/Packs/XSOAR_EDL_Checker/pack_metadata.json
index 2b1050707484..46aba45a3457 100644
--- a/Packs/XSOAR_EDL_Checker/pack_metadata.json
+++ b/Packs/XSOAR_EDL_Checker/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "XSOAR EDL Checker",
"description": "Checks EDLs hosted by the XSOAR server to ensure they are functioning.",
"support": "community",
- "currentVersion": "1.1.1",
+ "currentVersion": "1.1.2",
"author": "Mike Beauchamp",
"url": "https://live.paloaltonetworks.com/t5/cortex-xsoar-discussions/bd-p/Cortex_XSOAR_Discussions",
"email": "",
diff --git a/Packs/XsoarWebserver/Integrations/XSOARWebServer/XSOARWebServer.yml b/Packs/XsoarWebserver/Integrations/XSOARWebServer/XSOARWebServer.yml
index 956485887b46..b37a4b662cf4 100644
--- a/Packs/XsoarWebserver/Integrations/XSOARWebServer/XSOARWebServer.yml
+++ b/Packs/XsoarWebserver/Integrations/XSOARWebServer/XSOARWebServer.yml
@@ -98,7 +98,7 @@ script:
name: xsoarproxy
description: Set up a form submission job that can take multiple values from multiple users.
name: xsoar-ws-setup-form-submission
- dockerimage: demisto/bottle:1.0.0.76007
+ dockerimage: demisto/bottle:1.0.0.117147
longRunning: true
longRunningPort: true
script: ''
diff --git a/Packs/XsoarWebserver/ReleaseNotes/1_0_4.md b/Packs/XsoarWebserver/ReleaseNotes/1_0_4.md
new file mode 100644
index 000000000000..56ca7c991d98
--- /dev/null
+++ b/Packs/XsoarWebserver/ReleaseNotes/1_0_4.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### XSOAR-Web-Server
+
+
+- Updated the Docker image to: *demisto/bottle:1.0.0.117147*.
diff --git a/Packs/XsoarWebserver/pack_metadata.json b/Packs/XsoarWebserver/pack_metadata.json
index 0597af77393f..13cfb8ef647c 100644
--- a/Packs/XsoarWebserver/pack_metadata.json
+++ b/Packs/XsoarWebserver/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Xsoar-web-server",
"description": "Contains a minimal webserver and an automation that can be used to generate predictable URLs that can be inserted into emails and the responses can be tracked. Also contains a test playbook meant to be a POC.",
"support": "community",
- "currentVersion": "1.0.3",
+ "currentVersion": "1.0.4",
"author": "Arun Narayanan",
"url": "https://live.paloaltonetworks.com/t5/cortex-xsoar-discussions/bd-p/Cortex_XSOAR_Discussions",
"email": "",
diff --git a/Packs/Zabbix/Integrations/Zabbix/Zabbix.yml b/Packs/Zabbix/Integrations/Zabbix/Zabbix.yml
index 8372519f9b77..934c6ae5ff73 100644
--- a/Packs/Zabbix/Integrations/Zabbix/Zabbix.yml
+++ b/Packs/Zabbix/Integrations/Zabbix/Zabbix.yml
@@ -455,7 +455,7 @@ script:
description: Whether the event is suppressed.
type: number
description: Get events.
- dockerimage: demisto/py3-tools:1.0.0.87415
+ dockerimage: demisto/py3-tools:1.0.0.116158
runonce: false
script: '-'
type: python
diff --git a/Packs/Zabbix/ReleaseNotes/1_0_35.md b/Packs/Zabbix/ReleaseNotes/1_0_35.md
new file mode 100644
index 000000000000..3b8f1e5193eb
--- /dev/null
+++ b/Packs/Zabbix/ReleaseNotes/1_0_35.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Zabbix
+
+
+- Updated the Docker image to: *demisto/py3-tools:1.0.0.116158*.
diff --git a/Packs/Zabbix/pack_metadata.json b/Packs/Zabbix/pack_metadata.json
index a6c132c1bfc9..06ba0c7b39f9 100644
--- a/Packs/Zabbix/pack_metadata.json
+++ b/Packs/Zabbix/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Zabbix",
"description": "Allow integration with Zabbix api.",
"support": "developer",
- "currentVersion": "1.0.34",
+ "currentVersion": "1.0.35",
"author": "Henrique Caires",
"url": "https://support.zabbix.com/secure/Dashboard.jspa",
"email": "henrique@caires.net.br",
diff --git a/Packs/Zimperium/Integrations/Zimperium/Zimperium.yml b/Packs/Zimperium/Integrations/Zimperium/Zimperium.yml
index 87418ce88165..4c31536c85be 100644
--- a/Packs/Zimperium/Integrations/Zimperium/Zimperium.yml
+++ b/Packs/Zimperium/Integrations/Zimperium/Zimperium.yml
@@ -979,7 +979,7 @@ script:
type: String
description: Checks the reputation of an app in Zimperium.
name: file
- dockerimage: demisto/python3:3.10.13.72123
+ dockerimage: demisto/python3:3.11.10.116439
isfetch: true
runonce: false
script: '-'
diff --git a/Packs/Zimperium/ReleaseNotes/2_0_6.md b/Packs/Zimperium/ReleaseNotes/2_0_6.md
new file mode 100644
index 000000000000..b512d71e1192
--- /dev/null
+++ b/Packs/Zimperium/ReleaseNotes/2_0_6.md
@@ -0,0 +1,10 @@
+
+#### Integrations
+
+##### Zimperium
+- Updated the Docker image to: *demisto/python3:3.11.10.116439*.
+
+
+
+
+
diff --git a/Packs/Zimperium/pack_metadata.json b/Packs/Zimperium/pack_metadata.json
index cf1d3f1ad430..b88d4516ef5a 100644
--- a/Packs/Zimperium/pack_metadata.json
+++ b/Packs/Zimperium/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Zimperium",
"description": "Streamline investigation and remediation of mobile alerts, generated alerts based on anomalous or unauthorized activities using the Zimperium pack.",
"support": "xsoar",
- "currentVersion": "2.0.5",
+ "currentVersion": "2.0.6",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/Zoom/Integrations/Zoom/README.md b/Packs/Zoom/Integrations/Zoom/README.md
index 123e662bcdb0..7b5eaa329e3d 100644
--- a/Packs/Zoom/Integrations/Zoom/README.md
+++ b/Packs/Zoom/Integrations/Zoom/README.md
@@ -74,7 +74,7 @@ In the Team Chat Subscription section under BOT endpoint URL add:
![enter image description here](../../doc_files/scope-premissions.png)
1. Click **Local Test** >**Add** to test your app and authorize your Cortex XSOAR app.
- ![enter image description here](https://github.com/demisto/content-assets/raw/master/Assets/Zoom/test-zoom-app.gif)
+ ![enter image description here](../../doc_files/test-zoom-app.gif)
1. **If mirroring is enabled in the integration configuration or using ZoomAsk**:
**Endpoint URL Requirements-**
@@ -100,7 +100,7 @@ In the Team Chat Subscription section under BOT endpoint URL add:
- Event notification endpoint URL: Enter the Cortex XSOAR URL of your server (`CORTEX-XSOAR-URL`/instance/execute/`INTEGRATION-INSTANCE-NAME`) where you want to receive event notifications. This URL should handle incoming event data from Zoom. Make sure it's publicly accessible.
- Validate the URL: Just after setting up/configuration of the Cortex XSOAR side you can validate the URL.
- Add Events: Click **+Add Events**. Under Event types, select **Chat Message** and then select **Chat message sent**.
-![enter image description here](https://github.com/demisto/content-assets/raw/master/Assets/Zoom/add-event.gif)
+![enter image description here](../../doc_files/add-event.gif)
## Commands
diff --git a/Packs/Zoom/Integrations/Zoom/Zoom.yml b/Packs/Zoom/Integrations/Zoom/Zoom.yml
index 4c4e362f8bf9..aa73518a71b3 100644
--- a/Packs/Zoom/Integrations/Zoom/Zoom.yml
+++ b/Packs/Zoom/Integrations/Zoom/Zoom.yml
@@ -1290,7 +1290,7 @@ script:
runonce: false
longRunning: true
longRunningPort: true
- dockerimage: demisto/fastapi:0.111.0.101615
+ dockerimage: demisto/fastapi:0.115.4.115067
script: "-"
subtype: python3
type: python
diff --git a/Packs/Zoom/Integrations/ZoomEventCollector/ZoomEventCollector.py b/Packs/Zoom/Integrations/ZoomEventCollector/ZoomEventCollector.py
index fd8af3e8ee59..09206b1116b8 100644
--- a/Packs/Zoom/Integrations/ZoomEventCollector/ZoomEventCollector.py
+++ b/Packs/Zoom/Integrations/ZoomEventCollector/ZoomEventCollector.py
@@ -2,7 +2,7 @@
from CommonServerPython import *
import urllib3
from typing import Any
-from datetime import datetime, timezone, timedelta
+from datetime import datetime, timedelta, UTC
from dateutil import relativedelta
from ZoomApiModule import *
@@ -52,8 +52,8 @@ def search_events(self, log_type: str, first_fetch_time: datetime, last_time: st
demisto.debug(f"Last run before the fetch run: {last_time} for {log_type}")
start_date = first_fetch_time if not last_time else \
- dateparser.parse(last_time).replace(tzinfo=timezone.utc) # type: ignore[union-attr]
- end_date = datetime.now(timezone.utc)
+ dateparser.parse(last_time).replace(tzinfo=UTC) # type: ignore[union-attr]
+ end_date = datetime.now(UTC)
demisto.debug(f"Starting to get logs from: {start_date} to: {end_date} for {log_type}")
@@ -108,7 +108,7 @@ def test_module(client: Client) -> str:
"""
try:
- client.search_events(log_type=next(iter(LOG_TYPES)), limit=1, first_fetch_time=datetime.now(timezone.utc))
+ client.search_events(log_type=next(iter(LOG_TYPES)), limit=1, first_fetch_time=datetime.now(UTC))
except DemistoException as e:
error_message = e.message
@@ -269,7 +269,7 @@ def main() -> None:
elif command == 'zoom-get-events':
events, results = get_events(client=client,
limit=arg_to_number(args.get("limit")) or MAX_RECORDS_PER_PAGE,
- first_fetch_time=first_fetch_datetime.replace(tzinfo=timezone.utc),
+ first_fetch_time=first_fetch_datetime.replace(tzinfo=UTC),
)
return_results(results)
@@ -280,7 +280,7 @@ def main() -> None:
last_run = demisto.getLastRun()
next_run, events = fetch_events(client=client,
last_run=last_run,
- first_fetch_time=first_fetch_datetime.replace(tzinfo=timezone.utc),
+ first_fetch_time=first_fetch_datetime.replace(tzinfo=UTC),
)
call_send_events_to_xsiam(events)
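A note on the `timezone.utc` → `UTC` swap above: `datetime.UTC` is simply an alias for `datetime.timezone.utc` that was added in Python 3.11, which is why this change ships together with the move to a Python 3.11-based Docker image. A minimal sketch of the equivalence (assuming a Python 3.11+ interpreter):

```python
from datetime import datetime, timezone, UTC  # the UTC alias requires Python 3.11+

# UTC is the very same object as timezone.utc, so the two spellings are interchangeable.
assert UTC is timezone.utc

now_old = datetime.now(timezone.utc)  # pre-migration spelling
now_new = datetime.now(UTC)           # post-migration spelling
assert now_old.tzinfo is now_new.tzinfo
print(now_new.isoformat())
```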
diff --git a/Packs/Zoom/Integrations/ZoomEventCollector/ZoomEventCollector.yml b/Packs/Zoom/Integrations/ZoomEventCollector/ZoomEventCollector.yml
index 8eb386d8d6ee..d0b36336073d 100644
--- a/Packs/Zoom/Integrations/ZoomEventCollector/ZoomEventCollector.yml
+++ b/Packs/Zoom/Integrations/ZoomEventCollector/ZoomEventCollector.yml
@@ -60,7 +60,7 @@ script:
defaultValue: 300
description: Gets events from Zoom.
name: zoom-get-events
- dockerimage: demisto/auth-utils:1.0.0.91447
+ dockerimage: demisto/auth-utils:1.0.0.116930
isfetchevents: true
script: '-'
subtype: python3
diff --git a/Packs/Zoom/Integrations/ZoomEventCollector/ZoomEventCollector_test.py b/Packs/Zoom/Integrations/ZoomEventCollector/ZoomEventCollector_test.py
index 3728637b4c3a..226c0f4bf606 100644
--- a/Packs/Zoom/Integrations/ZoomEventCollector/ZoomEventCollector_test.py
+++ b/Packs/Zoom/Integrations/ZoomEventCollector/ZoomEventCollector_test.py
@@ -1,7 +1,7 @@
import pytest
from CommonServerPython import DemistoException
import demistomock as demisto # noqa: F401
-from datetime import datetime, timezone
+from datetime import datetime, UTC
import json
from freezegun import freeze_time
@@ -93,7 +93,7 @@ def test_fetch_events(mocker):
"""
from ZoomEventCollector import fetch_events, Client
- first_fetch_time = datetime(2023, 3, 1).replace(tzinfo=timezone.utc)
+ first_fetch_time = datetime(2023, 3, 1).replace(tzinfo=UTC)
http_request_mocker = mocker.patch.object(Client, "error_handled_http_request", side_effect=[
util_load_json('test_data/fetch_events_operationlogs.json').get('fetch_events_month_before'),
@@ -107,7 +107,7 @@ def test_fetch_events(mocker):
client = Client(base_url=BASE_URL)
next_run, events = fetch_events(client, last_run={},
- first_fetch_time=datetime(2023, 2, 1).replace(tzinfo=timezone.utc))
+ first_fetch_time=datetime(2023, 2, 1).replace(tzinfo=UTC))
mock_events = util_load_json('test_data/zoom_fetch_events.json')
assert http_request_mocker.call_args_list[0][1].get("params") == {'page_size': 300, 'from': '2023-02-01',
@@ -154,7 +154,7 @@ def test_fetch_events_with_last_run(mocker):
- Ensure the events are returned as expected and the pagination is working as expected
"""
from ZoomEventCollector import fetch_events, Client
- first_fetch_time = datetime(2023, 3, 1).replace(tzinfo=timezone.utc)
+ first_fetch_time = datetime(2023, 3, 1).replace(tzinfo=UTC)
http_request_mocker = mocker.patch.object(Client, "error_handled_http_request", side_effect=[
util_load_json('test_data/fetch_events_operationlogs.json').get('fetch_events_with_token'),
@@ -214,7 +214,7 @@ def test_get_events_command(mocker):
client = Client(base_url=BASE_URL)
events, results = get_events(client, limit=2,
- first_fetch_time=datetime(2023, 3, 1).replace(tzinfo=timezone.utc))
+ first_fetch_time=datetime(2023, 3, 1).replace(tzinfo=UTC))
mock_events = util_load_json('test_data/zoom_get_events.json')
assert http_request_mocker.call_args_list[0][1].get("params") == {'page_size': 2, 'from': '2023-03-01',
@@ -228,7 +228,7 @@ def test_get_events_command(mocker):
# Test limit > MAX_RECORDS_PER_PAGE
with pytest.raises(DemistoException) as e:
get_events(client, limit=MAX_RECORDS_PER_PAGE + 1,
- first_fetch_time=datetime(2023, 3, 1).replace(tzinfo=timezone.utc))
+ first_fetch_time=datetime(2023, 3, 1).replace(tzinfo=UTC))
assert e.value.message == f"The requested limit ({MAX_RECORDS_PER_PAGE + 1}) exceeds the maximum number of " \
f"records per page ({MAX_RECORDS_PER_PAGE}). Please reduce the limit and try again."
diff --git a/Packs/Zoom/Integrations/Zoom_IAM/Zoom_IAM.yml b/Packs/Zoom/Integrations/Zoom_IAM/Zoom_IAM.yml
index d6fc9fe8cf38..0ff33b90370f 100644
--- a/Packs/Zoom/Integrations/Zoom_IAM/Zoom_IAM.yml
+++ b/Packs/Zoom/Integrations/Zoom_IAM/Zoom_IAM.yml
@@ -164,7 +164,7 @@ script:
required: true
- description: Retrieves a User Profile schema, which holds all of the user fields within the application. Used for outgoing-mapping through the Get Schema option.
name: get-mapping-fields
- dockerimage: demisto/auth-utils:1.0.0.76157
+ dockerimage: demisto/auth-utils:1.0.0.115527
ismappable: true
isremotesyncout: true
script: '-'
diff --git a/Packs/Zoom/ReleaseNotes/1_6_15.md b/Packs/Zoom/ReleaseNotes/1_6_15.md
new file mode 100644
index 000000000000..513da0ab7ffb
--- /dev/null
+++ b/Packs/Zoom/ReleaseNotes/1_6_15.md
@@ -0,0 +1,12 @@
+
+#### Integrations
+
+##### Zoom
+
+- Updated the Docker image to: *demisto/fastapi:0.115.4.115067*.
+
+#### Scripts
+
+##### ZoomAsk
+
+- Updated the Docker image to: *demisto/fastapi:0.115.4.115067*.
diff --git a/Packs/Zoom/ReleaseNotes/1_6_16.md b/Packs/Zoom/ReleaseNotes/1_6_16.md
new file mode 100644
index 000000000000..5cd4b681cbe6
--- /dev/null
+++ b/Packs/Zoom/ReleaseNotes/1_6_16.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Zoom_IAM
+
+
+- Updated the Docker image to: *demisto/auth-utils:1.0.0.115527*.
diff --git a/Packs/Zoom/ReleaseNotes/1_6_17.md b/Packs/Zoom/ReleaseNotes/1_6_17.md
new file mode 100644
index 000000000000..7246c33e2567
--- /dev/null
+++ b/Packs/Zoom/ReleaseNotes/1_6_17.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### Zoom Event Collector
+
+
+- Updated the Docker image to: *demisto/auth-utils:1.0.0.116930*.
diff --git a/Packs/Zoom/Scripts/ZoomAsk/README.md b/Packs/Zoom/Scripts/ZoomAsk/README.md
index 94f2c1e47076..058563046a6c 100644
--- a/Packs/Zoom/Scripts/ZoomAsk/README.md
+++ b/Packs/Zoom/Scripts/ZoomAsk/README.md
@@ -1,6 +1,7 @@
Sends a message (question) to either a user (in a direct message) or to a channel. The message includes predefined reply options. The response can also close a task (might be conditional) in a playbook.
## Script Data
+
---
| **Name** | **Description** |
@@ -10,11 +11,13 @@ Sends a message (question) to either a user (in a direct message) or to a channe
| Version | 5.5.0 |
## Use Case
+
---
This automation allows you to ask users in Zoom (including users who are external to Cortex XSOAR) questions, have them respond, and
reflect the answer back to Cortex XSOAR.
## Dependencies
+
---
Requires an instance of the Zoom integration with the Long Running instance option checked.
@@ -22,6 +25,7 @@ This script uses the following commands and scripts.
send-notification
## Inputs
+
---
| **Argument Name** | **Description** |
@@ -40,18 +44,21 @@ send-notification
| defaultResponse | Default response in case the question expires. |
## Outputs
+
---
There are no outputs for this script.
## Guide
+
---
The automation is most useful in a playbook to determine the outcome of a conditional task, where the outcome will be one of the provided options.
It uses a mechanism that allows external users to respond in Cortex XSOAR (per investigation) with entitlement strings embedded within the message contents.
-![SlackAsk](https://user-images.githubusercontent.com/35098543/66044107-7de39f00-e529-11e9-8099-049502b4d62f.png)
+![SlackAsk](../../doc_files/66044107-7de39f00-e529-11e9-8099-049502b4d62f.png)
The automation can utilize the interactive capabilities of Zoom to send a form with buttons.
This requires the external endpoint for interactive responses to be available for connection (see the Zoom integration documentation for more information).
You can also utilize a dropdown list instead by specifying the `responseType` argument.
## Notes
+
---
\ No newline at end of file
diff --git a/Packs/Zoom/Scripts/ZoomAsk/ZoomAsk.yml b/Packs/Zoom/Scripts/ZoomAsk/ZoomAsk.yml
index 1d2b917fe510..48028b2e6fe5 100644
--- a/Packs/Zoom/Scripts/ZoomAsk/ZoomAsk.yml
+++ b/Packs/Zoom/Scripts/ZoomAsk/ZoomAsk.yml
@@ -53,7 +53,7 @@ tags:
- zoom
timeout: '0'
type: python
-dockerimage: demisto/fastapi:1.0.0.79757
+dockerimage: demisto/fastapi:0.115.4.115067
tests:
- no test - Untestable
dependson:
diff --git a/Packs/Zoom/doc_files/add-event.gif b/Packs/Zoom/doc_files/add-event.gif
new file mode 100644
index 000000000000..02dbeaa8a9f5
Binary files /dev/null and b/Packs/Zoom/doc_files/add-event.gif differ
diff --git a/Packs/Zoom/doc_files/test-zoom-app.gif b/Packs/Zoom/doc_files/test-zoom-app.gif
new file mode 100644
index 000000000000..9be89d269f16
Binary files /dev/null and b/Packs/Zoom/doc_files/test-zoom-app.gif differ
diff --git a/Packs/Zoom/pack_metadata.json b/Packs/Zoom/pack_metadata.json
index 825ebf8207c2..3a4d8832d7bf 100644
--- a/Packs/Zoom/pack_metadata.json
+++ b/Packs/Zoom/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Zoom",
"description": "Use the Zoom integration manage your Zoom users and meetings",
"support": "xsoar",
- "currentVersion": "1.6.14",
+ "currentVersion": "1.6.17",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/Zscaler/Integrations/Zscaler/README.md b/Packs/Zscaler/Integrations/Zscaler/README.md
index 33556f7f5d30..e3367c72bda9 100644
--- a/Packs/Zscaler/Integrations/Zscaler/README.md
+++ b/Packs/Zscaler/Integrations/Zscaler/README.md
@@ -821,9 +821,9 @@ Retrieves a full or summary report of the file that was analyzed by Sandbox. The
#### Additional Information
-[![image](https://user-images.githubusercontent.com/44546251/56854828-8a921480-6945-11e9-8784-cb55e6c7d83e.png)](https://user-images.githubusercontent.com/44546251/56854828-8a921480-6945-11e9-8784-cb55e6c7d83e.png)
+[![image](../../doc_files/56854828-8a921480-6945-11e9-8784-cb55e6c7d83e.png)](../../doc_files/56854828-8a921480-6945-11e9-8784-cb55e6c7d83e.png)
-[![image](https://user-images.githubusercontent.com/44546251/56854735-291d7600-6944-11e9-8c05-b917cc25e322.png)](https://user-images.githubusercontent.com/44546251/56854735-291d7600-6944-11e9-8c05-b917cc25e322.png)
+[![image](../../doc_files/56854735-291d7600-6944-11e9-8c05-b917cc25e322.png)](../../doc_files/56854735-291d7600-6944-11e9-8c05-b917cc25e322.png)
### zscaler-login
diff --git a/Packs/ctf01/ReleaseNotes/1_0_31.md b/Packs/ctf01/ReleaseNotes/1_0_31.md
new file mode 100644
index 000000000000..797dcf2b7507
--- /dev/null
+++ b/Packs/ctf01/ReleaseNotes/1_0_31.md
@@ -0,0 +1,6 @@
+
+#### Integrations
+
+##### Cortex XDR - IR CTF
+
+- Updated the error message on the *alert_id* input when the ***core-get-cloud-original-alerts*** command fails while using the playbook debugger.
diff --git a/Packs/ctf01/pack_metadata.json b/Packs/ctf01/pack_metadata.json
index ed516887437d..41dae5937bae 100644
--- a/Packs/ctf01/pack_metadata.json
+++ b/Packs/ctf01/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Capture The Flag - 01",
"description": "XSOAR's Capture the flag (CTF)",
"support": "xsoar",
- "currentVersion": "1.0.30",
+ "currentVersion": "1.0.31",
"serverMinVersion": "8.2.0",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
diff --git a/Packs/epo/Integrations/Epo/README.md b/Packs/epo/Integrations/Epo/README.md
index 21c752b7e9ff..f54a7edb7d0b 100644
--- a/Packs/epo/Integrations/Epo/README.md
+++ b/Packs/epo/Integrations/Epo/README.md
@@ -20,9 +20,9 @@ More info about McAfee ePO's permissions model is available [here](https://docs.
Example `!epo-help` outputs with permission information:
* `!epo-help command="repository.findPackages"`:
-![](https://raw.githubusercontent.com/demisto/content/0b1cdaff3a3cd238cbe98ae25bee0c6206af11e0/Packs/epo/doc_files/epo-help-find-pkg.png)
+![](../../doc_files/epo-help-find-pkg.png)
* `!epo-help command="repository.deletePackage"`:
-![](https://raw.githubusercontent.com/demisto/content/0b1cdaff3a3cd238cbe98ae25bee0c6206af11e0/Packs/epo/doc_files/epo-help-delete-pkg.png)
+![](../../doc_files/epo-help-delete-pkg.png)
## Playbooks
* McAfee ePO Endpoint Connectivity Diagnostics - Perform a check on ePO endpoints to see if any endpoints are unmanaged or lost connectivity with ePO and take steps to return to valid state.
@@ -77,7 +77,7 @@ There is no context output for this command.
##### Human Readable Output
-[![screen shot 2018-08-26 at 10 28 00](https://user-images.githubusercontent.com/37335599/44625852-d013c300-a91a-11e8-839e-b4f139ab893d.png)](https://user-images.githubusercontent.com/37335599/44625852-d013c300-a91a-11e8-839e-b4f139ab893d.png)
+[![screen shot 2018-08-26 at 10 28 00](../../doc_files/44625852-d013c300-a91a-11e8-839e-b4f139ab893d.png)](../../doc_files/44625852-d013c300-a91a-11e8-839e-b4f139ab893d.png)
### 2. Get the latest DAT file
@@ -105,7 +105,7 @@ There is no input for this command.
##### Human Readable Output
-[![screen shot 2018-08-26 at 10 15 58](https://user-images.githubusercontent.com/37335599/44625740-2b44b600-a919-11e8-9d18-ecca5185ffef.png)](https://user-images.githubusercontent.com/37335599/44625740-2b44b600-a919-11e8-9d18-ecca5185ffef.png)
+[![screen shot 2018-08-26 at 10 15 58](../../doc_files/44625740-2b44b600-a919-11e8-9d18-ecca5185ffef.png)](../../doc_files/44625740-2b44b600-a919-11e8-9d18-ecca5185ffef.png)
### 3. Check the current DAT file version
@@ -133,7 +133,7 @@ There is no input for this command.
##### Human Readable Output
-[![screen shot 2018-08-26 at 10 18 36](https://user-images.githubusercontent.com/37335599/44625764-7bbc1380-a919-11e8-9959-1090d30f1db3.png)](https://user-images.githubusercontent.com/37335599/44625764-7bbc1380-a919-11e8-9959-1090d30f1db3.png)
+[![screen shot 2018-08-26 at 10 18 36](../../doc_files/44625764-7bbc1380-a919-11e8-9959-1090d30f1db3.png)](../../doc_files/44625764-7bbc1380-a919-11e8-9959-1090d30f1db3.png)
### 4. Update the DAT file
@@ -188,7 +188,7 @@ There is no context output for this command.
##### Human Readable Output
-[![screen shot 2018-08-26 at 10 41 04](https://user-images.githubusercontent.com/37335599/44625952-9d6aca00-a91c-11e8-92b7-2a42b2b618d6.png)](https://user-images.githubusercontent.com/37335599/44625952-9d6aca00-a91c-11e8-92b7-2a42b2b618d6.png)
+[![screen shot 2018-08-26 at 10 41 04](../../doc_files/44625952-9d6aca00-a91c-11e8-92b7-2a42b2b618d6.png)](../../doc_files/44625952-9d6aca00-a91c-11e8-92b7-2a42b2b618d6.png)
### 5. Update a repository
@@ -214,7 +214,7 @@ There is no context output for this command.
##### Human Readable Output
-[![screen shot 2018-08-26 at 10 00 40](https://user-images.githubusercontent.com/37335599/44625662-65ad5380-a917-11e8-8120-5e6211e148bd.png)](https://user-images.githubusercontent.com/37335599/44625662-65ad5380-a917-11e8-8120-5e6211e148bd.png)
+[![screen shot 2018-08-26 at 10 00 40](../../doc_files/44625662-65ad5380-a917-11e8-8120-5e6211e148bd.png)](../../doc_files/44625662-65ad5380-a917-11e8-8120-5e6211e148bd.png)
### 6. Get system tree groups
@@ -242,7 +242,7 @@ Returns system tree groups.
##### Human Readable Output
-[![screen shot 2018-08-26 at 9 59 49](https://user-images.githubusercontent.com/37335599/44625635-d0aa5a80-a916-11e8-826d-15bae934412c.png)](https://user-images.githubusercontent.com/37335599/44625635-d0aa5a80-a916-11e8-826d-15bae934412c.png)
+[![screen shot 2018-08-26 at 9 59 49](../../doc_files/44625635-d0aa5a80-a916-11e8-826d-15bae934412c.png)](../../doc_files/44625635-d0aa5a80-a916-11e8-826d-15bae934412c.png)
### 7. Find systems in the system tree
@@ -301,12 +301,12 @@ epo-command
!epo-command command=system.find searchText=10.0.0.1
-[![screen shot 2018-10-02 at 9 44 34](https://user-images.githubusercontent.com/37335599/46333148-e1da3b80-c627-11e8-82cf-40970f8e5aab.png)](https://user-images.githubusercontent.com/37335599/46333148-e1da3b80-c627-11e8-82cf-40970f8e5aab.png)
+[![screen shot 2018-10-02 at 9 44 34](../../doc_files/46333148-e1da3b80-c627-11e8-82cf-40970f8e5aab.png)](../../doc_files/46333148-e1da3b80-c627-11e8-82cf-40970f8e5aab.png)
!epo-command command=agentmgmt.listAgentHandlers
-[![screen shot 2018-10-02 at 9 46 00](https://user-images.githubusercontent.com/37335599/46333232-37164d00-c628-11e8-91a7-1be03063edb0.png)](https://user-images.githubusercontent.com/37335599/46333232-37164d00-c628-11e8-91a7-1be03063edb0.png)
+[![screen shot 2018-10-02 at 9 46 00](../../doc_files/46333232-37164d00-c628-11e8-91a7-1be03063edb0.png)](../../doc_files/46333232-37164d00-c628-11e8-91a7-1be03063edb0.png)
### 9. epo-advanced-command
@@ -336,7 +336,7 @@ To get a list of available commands, run the ''epo-help'' command. For example:
!epo-advanced-command command="clienttask.find" commandArgs="searchText:On-demand"
-[![screen shot 2018-10-29 at 13 31 53](https://user-images.githubusercontent.com/37335599/47647276-27cee480-db7f-11e8-9430-b3685d914cde.png)](https://user-images.githubusercontent.com/37335599/47647276-27cee480-db7f-11e8-9430-b3685d914cde.png)
+[![screen shot 2018-10-29 at 13 31 53](../../doc_files/47647276-27cee480-db7f-11e8-9430-b3685d914cde.png)](../../doc_files/47647276-27cee480-db7f-11e8-9430-b3685d914cde.png)
### 10. Wake up an agent
@@ -417,22 +417,22 @@ Queries an ePO table.
`!epo-query-table target=EPOLeafNode select="(select EPOLeafNode.NodeName EPOLeafNode.Tags EPOBranchNode.NodeName)" where="(hasTag EPOLeafNode.AppliedTags 4)"`
-[![screen shot 2018-10-29 at 15 17 18](https://user-images.githubusercontent.com/37335599/47652110-bf3b3400-db8d-11e8-934d-56542c178b6f.png)](https://user-images.githubusercontent.com/37335599/47652110-bf3b3400-db8d-11e8-934d-56542c178b6f.png)
+[![screen shot 2018-10-29 at 15 17 18](../../doc_files/47652110-bf3b3400-db8d-11e8-934d-56542c178b6f.png)](../../doc_files/47652110-bf3b3400-db8d-11e8-934d-56542c178b6f.png)
`!epo-query-table target=EPOLeafNode select="(select (top 3) EPOLeafNode.NodeName EPOLeafNode.Tags EPOBranchNode.NodeName)"`
-[![screen shot 2018-10-29 at 15 17 43](https://user-images.githubusercontent.com/37335599/47652140-d417c780-db8d-11e8-819b-542dcc01c925.png)](https://user-images.githubusercontent.com/37335599/47652140-d417c780-db8d-11e8-819b-542dcc01c925.png)
+[![screen shot 2018-10-29 at 15 17 43](../../doc_files/47652140-d417c780-db8d-11e8-819b-542dcc01c925.png)](../../doc_files/47652140-d417c780-db8d-11e8-819b-542dcc01c925.png)
`!epo-query-table target="EPOEvents" select="(select EPOEvents.AutoID EPOEvents.DetectedUTC EPOEvents.ReceivedUTC)" order="(order(desc EPOEvents.DetectedUTC))"`
-[![screen shot 2018-10-29 at 16 35 41](https://user-images.githubusercontent.com/37335599/47656891-b734c180-db98-11e8-9c65-1b58fd4c8268.png)](https://user-images.githubusercontent.com/37335599/47656891-b734c180-db98-11e8-9c65-1b58fd4c8268.png)
+[![screen shot 2018-10-29 at 16 35 41](../../doc_files/47656891-b734c180-db98-11e8-9c65-1b58fd4c8268.png)](../../doc_files/47656891-b734c180-db98-11e8-9c65-1b58fd4c8268.png)
`!epo-query-table target="EPExtendedEvent" select="(select (top 250) EPOEvents.ThreatName EPOEvents.AutoID EPExtendedEvent.EventAutoID EPExtendedEvent.TargetHash EPExtendedEvent.TargetPath EPOEvents.SourceHostName)" order="(order(desc EPExtendedEvent.TargetHash))" joinTables="EPOEvents"where="(where(eq EPOEvents.ThreatName "real Protect-LS!d5435f1fea5e"))"`
-[![screen shot 2018-10-31 at 10 03 49](https://user-images.githubusercontent.com/37335599/47773949-4b676b80-dcf4-11e8-9562-c67fced9176c.png)](https://user-images.githubusercontent.com/37335599/47773949-4b676b80-dcf4-11e8-9562-c67fced9176c.png)
+[![screen shot 2018-10-31 at 10 03 49](../../doc_files/47773949-4b676b80-dcf4-11e8-9562-c67fced9176c.png)](../../doc_files/47773949-4b676b80-dcf4-11e8-9562-c67fced9176c.png)
### 14. Get an ePO table
@@ -459,7 +459,7 @@ There is no context output for this command.
##### Human Readable Output
-##### [![screen shot 2018-10-29 at 15 19 13](https://user-images.githubusercontent.com/37335599/47652211-06292980-db8e-11e8-8075-87a415c92b20.png)](https://user-images.githubusercontent.com/37335599/47652211-06292980-db8e-11e8-8075-87a415c92b20.png)
+##### [![screen shot 2018-10-29 at 15 19 13](../../doc_files/47652211-06292980-db8e-11e8-8075-87a415c92b20.png)](../../doc_files/47652211-06292980-db8e-11e8-8075-87a415c92b20.png)
### 15. Get the ePO version
@@ -483,7 +483,7 @@ Gets the ePO version. This command requires global admin permissions.
!epo-get-version
-[![screen shot 2018-11-06 at 15 43 18](https://user-images.githubusercontent.com/37335599/48068154-b58f7d00-e1da-11e8-97c1-410d77954d6d.png)](https://user-images.githubusercontent.com/37335599/48068154-b58f7d00-e1da-11e8-97c1-410d77954d6d.png)
+[![screen shot 2018-11-06 at 15 43 18](../../doc_files/48068154-b58f7d00-e1da-11e8-97c1-410d77954d6d.png)](../../doc_files/48068154-b58f7d00-e1da-11e8-97c1-410d77954d6d.png)
### 16. Find systems in the system tree
@@ -530,7 +530,7 @@ Finds systems in the system tree.
!epo-find-system searchText=mar
-[![screen shot 2018-11-06 at 15 46 12](https://user-images.githubusercontent.com/37335599/48068300-1fa82200-e1db-11e8-9b4c-1df113f5934d.png)](https://user-images.githubusercontent.com/37335599/48068300-1fa82200-e1db-11e8-9b4c-1df113f5934d.png)
+[![screen shot 2018-11-06 at 15 46 12](../../doc_files/48068300-1fa82200-e1db-11e8-9b4c-1df113f5934d.png)](../../doc_files/48068300-1fa82200-e1db-11e8-9b4c-1df113f5934d.png)
### 17. Move a system to a different group
@@ -560,4 +560,4 @@ There is no context output for this command.
##### Human Readable Output
-[![Screen Shot 2019-07-31 at 11 34 28](https://user-images.githubusercontent.com/37335599/62196720-30ab4b80-b387-11e9-93e2-56f5821cd34c.png)](https://user-images.githubusercontent.com/37335599/62196720-30ab4b80-b387-11e9-93e2-56f5821cd34c.png)
\ No newline at end of file
+[![Screen Shot 2019-07-31 at 11 34 28](../../doc_files/62196720-30ab4b80-b387-11e9-93e2-56f5821cd34c.png)](../../doc_files/62196720-30ab4b80-b387-11e9-93e2-56f5821cd34c.png)
\ No newline at end of file
diff --git a/Packs/epo/Integrations/epoV2/README.md b/Packs/epo/Integrations/epoV2/README.md
index 5a28e81cfee2..182f5bcfb79b 100644
--- a/Packs/epo/Integrations/epoV2/README.md
+++ b/Packs/epo/Integrations/epoV2/README.md
@@ -9,9 +9,9 @@ More information about McAfee ePO's permissions model is available [here](https:
Example `!epo-help` outputs with permission information:
* `!epo-help command="repository.findPackages"`:
-![](https://raw.githubusercontent.com/demisto/content/0b1cdaff3a3cd238cbe98ae25bee0c6206af11e0/Packs/epo/doc_files/epo-help-find-pkg.png)
+![](../../doc_files/epo-help-find-pkg.png)
* `!epo-help command="repository.deletePackage"`:
-![](https://raw.githubusercontent.com/demisto/content/0b1cdaff3a3cd238cbe98ae25bee0c6206af11e0/Packs/epo/doc_files/epo-help-delete-pkg.png)
+![](../../doc_files/epo-help-delete-pkg.png)
## Configure McAfee ePO v2 on Cortex XSOAR
diff --git a/Packs/epo/Integrations/epoV2/epoV2_description.md b/Packs/epo/Integrations/epoV2/epoV2_description.md
index 61c739bc7d40..977d6ef28622 100644
--- a/Packs/epo/Integrations/epoV2/epoV2_description.md
+++ b/Packs/epo/Integrations/epoV2/epoV2_description.md
@@ -7,6 +7,6 @@ More info about McAfee ePO's permissions model is available [here](https://docs.
Example `!epo-help` outputs with permission information:
* `!epo-help command="repository.findPackages"`:
-![](https://raw.githubusercontent.com/demisto/content/0b1cdaff3a3cd238cbe98ae25bee0c6206af11e0/Packs/epo/doc_files/epo-help-find-pkg.png)
+![](../../doc_files/epo-help-find-pkg.png)
* `!epo-help command="repository.deletePackage"`:
-![](https://raw.githubusercontent.com/demisto/content/0b1cdaff3a3cd238cbe98ae25bee0c6206af11e0/Packs/epo/doc_files/epo-help-delete-pkg.png)
\ No newline at end of file
+![](../../doc_files/epo-help-delete-pkg.png)
\ No newline at end of file
diff --git a/Packs/epo/ReleaseNotes/2_0_37.md b/Packs/epo/ReleaseNotes/2_0_37.md
new file mode 100644
index 000000000000..5dcf9f1eeed7
--- /dev/null
+++ b/Packs/epo/ReleaseNotes/2_0_37.md
@@ -0,0 +1,6 @@
+
+#### Integrations
+
+##### McAfee ePO v2
+
+- Documentation and metadata improvements.
diff --git a/Packs/epo/pack_metadata.json b/Packs/epo/pack_metadata.json
index c26ccfc8bd81..e6a7f0861621 100644
--- a/Packs/epo/pack_metadata.json
+++ b/Packs/epo/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "McAfee ePO",
"description": "McAfee ePolicy Orchestrator",
"support": "xsoar",
- "currentVersion": "2.0.36",
+ "currentVersion": "2.0.37",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/ipinfo/Integrations/ipinfo_v2/ipinfo_v2.yml b/Packs/ipinfo/Integrations/ipinfo_v2/ipinfo_v2.yml
index 32dc60ea3111..f51989f0dec8 100644
--- a/Packs/ipinfo/Integrations/ipinfo_v2/ipinfo_v2.yml
+++ b/Packs/ipinfo/Integrations/ipinfo_v2/ipinfo_v2.yml
@@ -166,7 +166,7 @@ script:
- contextPath: DBotScore.Vendor
description: The vendor used to calculate the score.
type: String
- dockerimage: demisto/python3:3.10.14.92207
+ dockerimage: demisto/python3:3.11.10.113941
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/ipinfo/ReleaseNotes/2_1_23.md b/Packs/ipinfo/ReleaseNotes/2_1_23.md
new file mode 100644
index 000000000000..75bcee578acc
--- /dev/null
+++ b/Packs/ipinfo/ReleaseNotes/2_1_23.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### IPinfo v2
+
+
+- Updated the Docker image to: *demisto/python3:3.11.10.113941*.
diff --git a/Packs/ipinfo/pack_metadata.json b/Packs/ipinfo/pack_metadata.json
index 401b9b707bab..0e0df1ad0324 100644
--- a/Packs/ipinfo/pack_metadata.json
+++ b/Packs/ipinfo/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Ipinfo",
"description": "Use the ipinfo.io API to get data about an IP address",
"support": "xsoar",
- "currentVersion": "2.1.22",
+ "currentVersion": "2.1.23",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/jamf/Integrations/jamfV2/jamfV2.yml b/Packs/jamf/Integrations/jamfV2/jamfV2.yml
index 501dc8b55e0e..287d3c4fa56f 100644
--- a/Packs/jamf/Integrations/jamfV2/jamfV2.yml
+++ b/Packs/jamf/Integrations/jamfV2/jamfV2.yml
@@ -2592,7 +2592,7 @@ script:
- contextPath: Endpoint.Vendor
description: The integration name of the endpoint vendor.
type: String
- dockerimage: demisto/btfl-soup:1.0.1.86352
+ dockerimage: demisto/btfl-soup:1.0.1.115405
runonce: false
script: '-'
subtype: python3
diff --git a/Packs/jamf/ReleaseNotes/2_2_3.md b/Packs/jamf/ReleaseNotes/2_2_3.md
new file mode 100644
index 000000000000..d91393f2910d
--- /dev/null
+++ b/Packs/jamf/ReleaseNotes/2_2_3.md
@@ -0,0 +1,7 @@
+
+#### Integrations
+
+##### JAMF v2
+
+
+- Updated the Docker image to: *demisto/btfl-soup:1.0.1.115405*.
diff --git a/Packs/jamf/pack_metadata.json b/Packs/jamf/pack_metadata.json
index adf524cd8f14..6104e99eb153 100644
--- a/Packs/jamf/pack_metadata.json
+++ b/Packs/jamf/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Jamf",
"description": "Jamf device management",
"support": "xsoar",
- "currentVersion": "2.2.2",
+ "currentVersion": "2.2.3",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Packs/rasterize/Integrations/rasterize/rasterize.py b/Packs/rasterize/Integrations/rasterize/rasterize.py
index f3e046376e38..b8fcbb23549b 100644
--- a/Packs/rasterize/Integrations/rasterize/rasterize.py
+++ b/Packs/rasterize/Integrations/rasterize/rasterize.py
@@ -65,8 +65,9 @@
# chrome instance data keys
INSTANCE_ID = "instance_id"
CHROME_INSTANCE_OPTIONS = "chrome_options"
-RASTERIZETION_COUNT = "rasteriztion_count"
+RASTERIZATION_COUNT = "rasterization_count"
+BLOCKED_URLS = argToList(demisto.params().get('blocked_urls', '').lower())
try:
env_max_rasterizations_count = os.getenv('MAX_RASTERIZATIONS_COUNT', '500')
@@ -235,6 +236,25 @@ def network_request_will_be_sent(self, documentURL, **kwargs):
'''Triggered when a request is sent by the browser, catches mailto URLs.'''
demisto.debug(f'PychromeEventHandler.network_request_will_be_sent, {documentURL=}')
self.is_mailto = documentURL.lower().startswith('mailto:')
+
+ request_url = kwargs.get('request', {}).get('url', '')
+
+ if any(value in request_url for value in BLOCKED_URLS):
+ self.tab.Fetch.enable()
+ demisto.debug('Fetch events enabled.')
+
+ def handle_request_paused(self, **kwargs):
+ request_id = kwargs.get("requestId")
+ request_url = kwargs.get("request", {}).get("url")
+
+ # Abort the request if its URL matches a value in the blocked_urls parameter, and abort its redirect requests as well
+ if any(value in request_url for value in BLOCKED_URLS) and not self.request_id:
+ self.tab.Fetch.failRequest(requestId=request_id, errorReason="Aborted")
+ demisto.debug(f"Request paused: {request_url=} , {request_id=}")
+ self.tab.Fetch.disable()
+ demisto.debug('Fetch events disabled.')
+
+
# endregion
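For context: the blocking check added in this hunk is a plain substring match of each lowered `blocked_urls` value against the request URL, so subdomains and paths that contain a blocked value are caught as well. A minimal sketch of that check, assuming a hypothetical two-domain parameter value (the integration's default, per the `rasterize.yml` change below, is `cloudflare.com`):

```python
# Minimal sketch of the matching used by handle_request_paused above.
# The two-domain parameter value is hypothetical; the matching logic mirrors
# the integration code, where BLOCKED_URLS is lowered once at module load.
blocked_urls = [v.strip().lower() for v in "cloudflare.com,challenges.example".split(",")]


def should_block(request_url: str) -> bool:
    """Return True when any blocked value appears anywhere in the URL."""
    return any(value in request_url for value in blocked_urls)


assert should_block("https://www.cloudflare.com/cdn-cgi/challenge")
assert not should_block("https://example.org/")
```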
@@ -318,7 +338,7 @@ def increase_counter_chrome_instances_file(chrome_port: str = ''):
existing_data = read_json_file()
if chrome_port in existing_data:
- existing_data[chrome_port][RASTERIZETION_COUNT] = existing_data[chrome_port].get(RASTERIZETION_COUNT, 0) + 1
+ existing_data[chrome_port][RASTERIZATION_COUNT] = existing_data[chrome_port].get(RASTERIZATION_COUNT, 0) + 1
write_chrome_instances_file(existing_data)
else:
demisto.info(f"Chrome port '{chrome_port}' not found.")
@@ -426,7 +446,7 @@ def start_chrome_headless(chrome_port, instance_id, chrome_options, chrome_binar
chrome_port: {
INSTANCE_ID: instance_id,
CHROME_INSTANCE_OPTIONS: chrome_options,
- RASTERIZETION_COUNT: 0
+ RASTERIZATION_COUNT: 0
}
}
add_new_chrome_instance(new_chrome_instance_content=new_chrome_instance)
@@ -540,6 +560,58 @@ def chrome_manager() -> tuple[Any | None, str | None]:
return browser, chrome_port
+def chrome_manager_one_port() -> tuple[Any | None, str | None]:
+ """
+ Manages Chrome instances based on user-specified chrome options and integration instance ID.
+ Only uses ONE Chrome instance per chrome options value, until https://issues.chromium.org/issues/379034728 is fixed.
+
+ This function performs the following steps:
+ 1. Retrieves the Chrome options set by the user.
+ 2. Checks whether these Chrome options have been used previously.
+ - If the Chrome options weren't used and the file is empty, generates a new Chrome instance with
+ the specified Chrome options.
+ - If the Chrome options exist in the dictionary, reuses the existing Chrome instance.
+ - If the Chrome options weren't used and the file isn't empty, terminates all the ports in use and
+ generates a new instance with the new options.
+
+ Returns:
+ tuple[Any | None, str | None]: A tuple containing:
+ - The Browser, or None if an error occurred.
+ - The chrome port, or None if an error occurred.
+ """
+ # If instance_id or chrome_options are not set, assign the string 'None' to these variables.
+ # This way, when fetching the content from the file, if there was no instance_id or chrome_options before,
+ # the fetched 'None' string can be compared against the 'None' assigned here.
+ instance_id = demisto.callingContext.get('context', {}).get('IntegrationInstanceID', 'None') or 'None'
+ chrome_options = demisto.params().get('chrome_options', 'None')
+ chrome_instances_contents = read_json_file(CHROME_INSTANCES_FILE_PATH)
+ demisto.debug(f'chrome_manager_one_port {chrome_instances_contents=} {chrome_options=} {instance_id=}')
+ chrome_options_dict = {
+ options[CHROME_INSTANCE_OPTIONS]: {
+ 'chrome_port': port
+ }
+ for port, options in chrome_instances_contents.items()
+ }
+ chrome_port = chrome_options_dict.get(chrome_options, {}).get('chrome_port', '')
+ if not chrome_instances_contents:
+ demisto.debug('chrome_manager_one_port: chrome_instances_contents is empty')
+ return generate_new_chrome_instance(instance_id, chrome_options)
+ if chrome_options in chrome_options_dict:
+ demisto.debug(f'chrome_manager_one_port: chrome_options found in chrome_options_dict: '
+ f'{chrome_options in chrome_options_dict}')
+ browser = get_chrome_browser(chrome_port)
+ return browser, chrome_port
+ for chrome_port_ in chrome_instances_contents:
+ if chrome_port_ == 'None':
+ terminate_port_chrome_instances_file(chrome_port_)
+ demisto.debug(f"chrome_manager {chrome_port_=}, removing the port from chrome_instances file")
+ continue
+ demisto.debug(f"chrome_manager {chrome_port_=}, terminating the port")
+ terminate_chrome(chrome_port=chrome_port_)
+ return generate_new_chrome_instance(instance_id, chrome_options)
+
+
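To make the lookup above concrete: the chrome_instances file maps each running Chrome port to the options it was started with, and `chrome_manager_one_port` inverts that mapping to decide whether a port can be reused. A small sketch using the shape of the `test_data/chrome_instances.json` fixture shown later in this diff:

```python
# Sketch of the options -> port inversion in chrome_manager_one_port, using
# the shape of test_data/chrome_instances.json (one entry shown).
chrome_instances_contents = {
    "2222": {
        "instance_id": "22222222-2222-2222-2222-222222222222",
        "chrome_options": "chrome_options2",
        "rasterization_count": 1,
    },
}

chrome_options_dict = {
    options["chrome_options"]: {"chrome_port": port}
    for port, options in chrome_instances_contents.items()
}

# A caller configured with "chrome_options2" reuses port 2222; any other
# options value terminates the running ports and starts a fresh instance.
assert chrome_options_dict["chrome_options2"]["chrome_port"] == "2222"
```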
def generate_new_chrome_instance(instance_id: str, chrome_options: str) -> tuple[Any | None, str | None]:
chrome_port = generate_chrome_port()
return start_chrome_headless(chrome_port, instance_id, chrome_options)
@@ -556,7 +628,7 @@ def generate_chrome_port() -> str | None:
if len_running_chromes == 0:
# There's no Chrome listening on that port, Start a new Chrome there
- demisto.debug(f"No Chrome found on port {chrome_port}, using it.")
+ demisto.debug(f"No Chrome found on port {chrome_port}, using the port.")
return str(chrome_port)
# There's already a Chrome listening on that port, Don't use it
@@ -577,6 +649,8 @@ def setup_tab_event(browser: pychrome.Browser, tab: pychrome.Tab) -> tuple[Pychr
tab.Page.frameStartedLoading = tab_event_handler.page_frame_started_loading
tab.Page.frameStoppedLoading = tab_event_handler.page_frame_stopped_loading
+ tab.Fetch.requestPaused = tab_event_handler.handle_request_paused
+
return tab_event_handler, tab_ready_event
@@ -819,7 +893,9 @@ def perform_rasterize(path: str | list[str],
return None
demisto.debug(f"perform_rasterize, {paths=}, {rasterize_type=}")
- browser, chrome_port = chrome_manager()
+
+ # until https://issues.chromium.org/issues/379034728 is fixed, we can only use one chrome port
+ browser, chrome_port = chrome_manager_one_port()
if browser:
support_multithreading()
@@ -848,15 +924,16 @@ def perform_rasterize(path: str | list[str],
f"active tabs len: {len(browser.list_tab())}")
chrome_instances_file_content: dict = read_json_file() # CR fix name
- rasterizations_count = chrome_instances_file_content.get(chrome_port, {}).get(RASTERIZETION_COUNT, 0) + len(
+
+ rasterization_count = chrome_instances_file_content.get(chrome_port, {}).get(RASTERIZATION_COUNT, 0) + len(
rasterization_threads)
demisto.debug(f"perform_rasterize checking if the chrome in port:{chrome_port} should be deleted:"
- f"{rasterizations_count=}, {MAX_RASTERIZATIONS_COUNT=}, {len(browser.list_tab())=}")
+ f"{rasterization_count=}, {MAX_RASTERIZATIONS_COUNT=}, {len(browser.list_tab())=}")
if not chrome_port:
demisto.debug("perform_rasterize: the chrome port was not found")
- elif rasterizations_count >= MAX_RASTERIZATIONS_COUNT:
- demisto.info(f"perform_rasterize: terminating Chrome after {rasterizations_count=} rasterizations")
+ elif rasterization_count >= MAX_RASTERIZATIONS_COUNT:
+ demisto.info(f"perform_rasterize: terminating Chrome after {rasterization_count=} rasterization")
terminate_chrome(chrome_port=chrome_port)
else:
increase_counter_chrome_instances_file(chrome_port=chrome_port)
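The recycling decision in this hunk is a simple threshold check: the per-port counter stored in the chrome_instances file, plus the size of the current batch, is compared against `MAX_RASTERIZATIONS_COUNT` (which the module reads from the `MAX_RASTERIZATIONS_COUNT` environment variable with a default of `500`). A sketch:

```python
# Sketch of the port-recycling threshold used in perform_rasterize, assuming
# the default MAX_RASTERIZATIONS_COUNT of 500 (the os.getenv fallback above).
MAX_RASTERIZATIONS_COUNT = 500


def should_recycle(stored_count: int, batch_size: int) -> bool:
    """True when this batch pushes the port to or past the allowed total."""
    return stored_count + batch_size >= MAX_RASTERIZATIONS_COUNT


assert not should_recycle(498, 1)  # counter is incremented, Chrome is kept
assert should_recycle(499, 1)      # Chrome on this port is terminated
```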
@@ -1044,7 +1121,6 @@ def add_filename_suffix(file_names: list, file_extension: str):
def rasterize_command(): # pragma: no cover
urls = demisto.getArg('url')
- urls = [urls] if isinstance(urls, str) else urls
width, height = get_width_height(demisto.args())
full_screen = argToBoolean(demisto.args().get('full_screen', False))
rasterize_type = RasterizeType(demisto.args().get('type', 'png').lower())
@@ -1112,7 +1188,6 @@ def get_width_height(args: dict):
def main(): # pragma: no cover
demisto.debug(f"main, {demisto.command()=}")
demisto.debug(f'Using performance params: {MAX_CHROMES_COUNT=}, {MAX_CHROME_TABS_COUNT=}, {MAX_RASTERIZATIONS_COUNT=}')
-
threading.excepthook = excepthook_recv_loop
try:
if demisto.command() == 'test-module':
diff --git a/Packs/rasterize/Integrations/rasterize/rasterize.yml b/Packs/rasterize/Integrations/rasterize/rasterize.yml
index 097f15dd2ee1..5cf4661cfe48 100644
--- a/Packs/rasterize/Integrations/rasterize/rasterize.yml
+++ b/Packs/rasterize/Integrations/rasterize/rasterize.yml
@@ -50,6 +50,11 @@ configuration:
type: 0
additionalinfo: Deprecated.
defaultvalue:
+- name: blocked_urls
+ display: List of domains to block
+ required: false
+ defaultvalue: "cloudflare.com"
+ type: 0
- name: proxy
display: Use system proxy settings
required: false
@@ -334,7 +339,7 @@ script:
- contextPath: InfoFile.Type
description: The type of the image/pdf file.
type: string
- dockerimage: demisto/chromium:127.0.6533.105883
+ dockerimage: demisto/chromium:131.0.6778.116585
runonce: false
script: "-"
subtype: python3
diff --git a/Packs/rasterize/Integrations/rasterize/rasterize_test.py b/Packs/rasterize/Integrations/rasterize/rasterize_test.py
index bb3f9d6f6d2e..58e4d391f5ed 100644
--- a/Packs/rasterize/Integrations/rasterize/rasterize_test.py
+++ b/Packs/rasterize/Integrations/rasterize/rasterize_test.py
@@ -4,6 +4,7 @@
from CommonServerPython import entryTypes
from tempfile import NamedTemporaryFile
from pytest_mock import MockerFixture
+from unittest.mock import MagicMock
import os
import logging
import http.server
@@ -663,20 +664,20 @@ def test_increase_counter_chrome_instances_file(mocker):
When:
- Executing the increase_counter_chrome_instances_file function
Then:
- - The function writes to the correct file and increase the "rasteriztion_count" by 1
+ - The function writes to the correct file and increases the "rasterization_count" by 1.
"""
- from rasterize import increase_counter_chrome_instances_file, RASTERIZETION_COUNT
+ from rasterize import increase_counter_chrome_instances_file, RASTERIZATION_COUNT
from unittest.mock import mock_open
mocker.patch("os.path.exists", return_value=True)
mock_file_content = util_load_json("test_data/chrome_instances.json")
- expected_rasterizetion_count = mock_file_content['2222'][RASTERIZETION_COUNT] + 1
+ expected_rasterization_count = mock_file_content['2222'][RASTERIZATION_COUNT] + 1
mock_file = mock_open()
mocker.patch("builtins.open", mock_file)
mocker.patch.object(json, 'load', return_value=mock_file_content)
mocker_json = mocker.patch("json.dump")
increase_counter_chrome_instances_file(chrome_port="2222")
assert mocker_json.called
- assert expected_rasterizetion_count == mocker_json.call_args[0][0]['2222'][RASTERIZETION_COUNT]
+ assert expected_rasterization_count == mocker_json.call_args[0][0]['2222'][RASTERIZATION_COUNT]
def test_add_new_chrome_instance(mocker):
@@ -774,11 +775,114 @@ def test_rasterize_mailto(capfd, mocker):
"""
mocker_output = mocker.patch('rasterize.return_results')
- with pytest.raises(SystemExit) as excinfo:
- with capfd.disabled():
- perform_rasterize(path='mailto:some.person@gmail.com', width=250, height=250, rasterize_type=RasterizeType.PNG)
+ with pytest.raises(SystemExit) as excinfo, capfd.disabled():
+ perform_rasterize(path='mailto:some.person@gmail.com', width=250, height=250, rasterize_type=RasterizeType.PNG)
assert mocker_output.call_args.args[0].readable_output == 'URLs that start with "mailto:" cannot be rasterized.' \
'\nURL: [\'mailto:some.person@gmail.com\']'
assert excinfo.type == SystemExit
assert excinfo.value.code == 0
+
+
+def test_handle_request_paused(mocker):
+ """
+ Given:
+ - cloudflare.com as BLOCKED_URLS parameter.
+ When:
+ - Running the 'handle_request_paused' function.
+ Then:
+ - Verify that tab.Fetch.failRequest is executed with the correct requestId and the errorReason 'Aborted'.
+ """
+
+ mocker.patch('rasterize.BLOCKED_URLS', ['cloudflare.com'])
+ kwargs = {'requestId': '1', 'request': {'url': 'cloudflare.com'}}
+ mock_tab = MagicMock(spec=pychrome.Tab)
+ mock_fetch = mocker.MagicMock()
+ mock_fetch.disable = MagicMock()
+ mock_fail_request = mocker.patch.object(mock_fetch, 'failRequest', new_callable=MagicMock)
+ mock_tab.Fetch = mock_fetch
+ tab_event_handler = PychromeEventHandler(None, mock_tab, None)
+
+ tab_event_handler.handle_request_paused(**kwargs)
+
+ assert mock_fail_request.call_args[1]['requestId'] == '1'
+ assert mock_fail_request.call_args[1]['errorReason'] == 'Aborted'
+
+
+def test_chrome_manager_one_port_use_same_port(mocker):
+ """
+ Given:
+ - instance id and chrome options.
+ When:
+ - Executing the chrome_manager_one_port function
+ Then:
+ - The function selects the port that is already using the given chrome_options and reuses its browser.
+ """
+ from rasterize import chrome_manager_one_port, read_json_file
+
+ instance_id = "22222222-2222-2222-2222-222222222221" # not exist
+ chrome_options = "chrome_options2"
+
+ mock_context = {
+ 'context': {
+ 'IntegrationInstanceID': instance_id
+ }
+ }
+
+ params = {
+ 'chrome_options': chrome_options
+ }
+
+ mock_file_content = read_json_file("test_data/chrome_instances.json")
+
+ mocker.patch.object(demisto, 'callingContext', mock_context)
+ mocker.patch.object(demisto, 'params', return_value=params)
+ mocker.patch.object(rasterize, 'read_json_file', return_value=mock_file_content)
+
+ mocker.patch.object(rasterize, 'get_chrome_browser', return_value="browser_object")
+
+ browser, chrome_port = chrome_manager_one_port()
+ assert browser == "browser_object"
+ assert chrome_port == "2222"
+
+
+def test_chrome_manager_one_port_open_new_port(mocker):
+ """
+ Given:
+ - instance id and chrome options.
+ When:
+ - Executing the chrome_manager_one_port function
+ Then:
+ - The function terminates all the ports open in the chrome_instances file and opens a new Chrome port to use.
+ """
+ from rasterize import chrome_manager_one_port, read_json_file
+
+ instance_id = "22222222-2222-2222-2222-222222222221" # not exist
+ chrome_options = "new_chrome_options"
+
+ mock_context = {
+ 'context': {
+ 'IntegrationInstanceID': instance_id
+ }
+ }
+
+ params = {
+ 'chrome_options': chrome_options
+ }
+
+ mock_file_content = read_json_file("test_data/chrome_instances.json")
+
+ mocker.patch.object(demisto, 'callingContext', mock_context)
+ mocker.patch.object(demisto, 'params', return_value=params)
+ mocker.patch.object(rasterize, 'read_json_file', return_value=mock_file_content)
+
+ mocker.patch.object(rasterize, 'get_chrome_browser', return_value="browser_object")
+ terminate_chrome_mocker = mocker.patch.object(rasterize, 'terminate_chrome', return_value=None)
+ generate_new_chrome_instance_mocker = mocker.patch.object(rasterize, 'generate_new_chrome_instance',
+ return_value=["browser_object", "chrome_port"])
+
+ browser, chrome_port = chrome_manager_one_port()
+ assert terminate_chrome_mocker.call_count == 3
+ assert generate_new_chrome_instance_mocker.call_count == 1
+ assert browser == "browser_object"
+ assert chrome_port == "chrome_port"
diff --git a/Packs/rasterize/Integrations/rasterize/test_data/chrome_instances.json b/Packs/rasterize/Integrations/rasterize/test_data/chrome_instances.json
index ac3d45941085..547f7da1668b 100644
--- a/Packs/rasterize/Integrations/rasterize/test_data/chrome_instances.json
+++ b/Packs/rasterize/Integrations/rasterize/test_data/chrome_instances.json
@@ -2,16 +2,16 @@
"2222": {
"instance_id": "22222222-2222-2222-2222-222222222222",
"chrome_options": "chrome_options2",
- "rasteriztion_count": 1
+ "rasterization_count": 1
},
"3333": {
"instance_id": "33333333-3333-3333-3333-333333333333",
"chrome_options": "chrome_options3",
- "rasteriztion_count": 1
+ "rasterization_count": 1
},
"9345": {
"instance_id": "44444444-4444-4444-4444-444444444444",
"chrome_options": "chrome_options4",
- "rasteriztion_count": 1
+ "rasterization_count": 1
}
}
diff --git a/Packs/rasterize/ReleaseNotes/2_0_26.md b/Packs/rasterize/ReleaseNotes/2_0_26.md
new file mode 100644
index 000000000000..8762aac66e5d
--- /dev/null
+++ b/Packs/rasterize/ReleaseNotes/2_0_26.md
@@ -0,0 +1,8 @@
+
+#### Integrations
+
+##### Rasterize
+
+- Added the **List of domains to block** parameter, which allows blocking requests to specific domains.
+- Fixed an issue where the ***rasterize*** command failed to parse multiple URLs.
+- Updated the Docker image to: *demisto/chromium:130.0.6723.115563*.
\ No newline at end of file
diff --git a/Packs/rasterize/ReleaseNotes/2_0_27.md b/Packs/rasterize/ReleaseNotes/2_0_27.md
new file mode 100644
index 000000000000..ba799aaa9489
--- /dev/null
+++ b/Packs/rasterize/ReleaseNotes/2_0_27.md
@@ -0,0 +1,6 @@
+
+#### Integrations
+
+##### Rasterize
+
+- Updated the Docker image to: *demisto/chromium:131.0.6778.116585*.
diff --git a/Packs/rasterize/ReleaseNotes/2_0_28.md b/Packs/rasterize/ReleaseNotes/2_0_28.md
new file mode 100644
index 000000000000..4d1774f67835
--- /dev/null
+++ b/Packs/rasterize/ReleaseNotes/2_0_28.md
@@ -0,0 +1,6 @@
+
+#### Integrations
+
+##### Rasterize
+
+- Fixed an issue in the ***rasterize*** command that caused Chrome to crash when opening a new rasterize instance or using a different Chrome option.
\ No newline at end of file
diff --git a/Packs/rasterize/pack_metadata.json b/Packs/rasterize/pack_metadata.json
index ba9ee4527e93..7f904a5f9b81 100644
--- a/Packs/rasterize/pack_metadata.json
+++ b/Packs/rasterize/pack_metadata.json
@@ -2,7 +2,7 @@
"name": "Rasterize",
"description": "Converts URLs, PDF files, and emails to an image file or PDF file.",
"support": "xsoar",
- "currentVersion": "2.0.25",
+ "currentVersion": "2.0.28",
"author": "Cortex XSOAR",
"url": "https://www.paloaltonetworks.com/cortex",
"email": "",
diff --git a/Tests/conf.json b/Tests/conf.json
index 38ce2783825a..bfb99d527bf2 100644
--- a/Tests/conf.json
+++ b/Tests/conf.json
@@ -53,6 +53,7 @@
},
{
"playbookID": "Endpoint Investigation Plan - Test",
+ "integrations": "Cortex Core - IR",
"timeout": 600
},
{
@@ -114,7 +115,8 @@
"timeout": 500,
"integrations": [
"Active Directory Query v2",
- "VirusTotal (API v3)"
+ "VirusTotal (API v3)",
+ "Cortex Core - IR"
],
"instance_names": [
"active_directory_80k",
@@ -3347,16 +3349,60 @@
"playbookID": "Elasticsearch_Fetch_Custom_Indicators_Test",
"fromversion": "5.5.0"
},
+ {
+ "integrations": "ElasticsearchFeed",
+ "instance_names": "es_demisto_feed_elastic_v8",
+ "playbookID": "Elasticsearch_Fetch_Demisto_Indicators_Test",
+ "fromversion": "5.5.0"
+ },
+ {
+ "integrations": "ElasticsearchFeed",
+ "instance_names": "es_generic_feed_elastic_v8",
+ "playbookID": "Elasticsearch_Fetch_Custom_Indicators_Test",
+ "fromversion": "5.5.0"
+ },
+ {
+ "integrations": "ElasticsearchFeed",
+ "instance_names": "os_demisto_feed",
+ "playbookID": "Elasticsearch_Fetch_Demisto_Indicators_Test",
+ "fromversion": "5.5.0"
+ },
+ {
+ "integrations": "ElasticsearchFeed",
+ "instance_names": "os_generic_feed",
+ "playbookID": "Elasticsearch_Fetch_Custom_Indicators_Test",
+ "fromversion": "5.5.0"
+ },
{
"integrations": "Elasticsearch v2",
"instance_names": "es_v6",
"playbookID": "Elasticsearch_v2_test-v6"
},
+ {
+ "integrations": "Elasticsearch v2",
+ "instance_names": "os_v6",
+ "playbookID": "Elasticsearch_v2_test-v6"
+ },
{
"integrations": "Elasticsearch v2",
"instance_names": "es_v8",
"playbookID": "Elasticsearch_v2_test-v8"
},
+ {
+ "integrations": "Elasticsearch v2",
+ "instance_names": "es_v7",
+ "playbookID": "Elasticsearch_v2_test-v7-v8"
+ },
+ {
+ "integrations": "Elasticsearch v2",
+ "instance_names": "es_v8",
+ "playbookID": "Elasticsearch_v2_test-v7-v8"
+ },
+ {
+ "integrations": "Elasticsearch v2",
+ "instance_names": "os_v7",
+ "playbookID": "Elasticsearch_v2_test-v7-v8"
+ },
{
"integrations": "PolySwarm",
"playbookID": "PolySwarm-Test"
@@ -5821,8 +5867,11 @@
{
"integrations": "netskope_api_v2",
"playbookID": "Netskope_V2_Test"
+ },
+ {
+ "integrations": "DSPM",
+ "playbookID": "DSPM Test"
}
-
],
"skipped_tests": {
"ThreatCrowd - Test": "The pack is deprecated",
@@ -5963,9 +6012,6 @@
"SymantecEndpointProtection_Test": "Issue 30157",
"TestCloudflareWAFPlaybook": "No instance",
"DBot Build Phishing Classifier Test - Multiple Algorithms": "Issue 48350",
- "Elasticsearch_v2_test-v8": "CRTX-61980",
- "Elasticsearch_Fetch_Custom_Indicators_Test": "CRTX-134283",
- "Elasticsearch_Fetch_Demisto_Indicators_Test": "CRTX-134283",
"Carbon Black Enterprise Protection V2 Test": "No credentials",
"TruSTAR v2-Test": "No credentials",
"Archer v2 - Test": "Test doesn't pass in the builds because of the creds, but creds seems ok, need to debug further",
@@ -6099,7 +6145,6 @@
"icebrg": "No instance - Issue 14312 - CIAC-2006",
"Freshdesk": "No instance - Trial account expired",
"Kafka V2": "No instance - Can not connect to instance from remote",
- "KafkaV3": "No instance - Can not connect to instance from remote",
"Check Point Sandblast": "No instance - Issue 15948",
"Remedy AR": "No instance - getting 'Not Found' in test button",
"Salesforce": "No instance - Issue 15901 - CIAC-1976",
@@ -6152,7 +6197,8 @@
"fortimail": "No instance",
"JoeSecurityV2": "CIAC-9872",
"Aha": "No instance",
- "SlackV2": "Integration slackV2 is deprecated."
+ "SlackV2": "Integration slackV2 is deprecated.",
+ "DSPM": "No instance"
},
"native_nightly_packs":[
"Palo_Alto_Networks_WildFire",
diff --git a/Tests/docker_native_image_config.json b/Tests/docker_native_image_config.json
index a922df1321a1..123467f0fbc4 100644
--- a/Tests/docker_native_image_config.json
+++ b/Tests/docker_native_image_config.json
@@ -11,7 +11,6 @@
"readpdf",
"parse-emails",
"docxpy",
- "sklearn",
"pandas",
"ippysocks-py3",
"oauthlib",
@@ -44,7 +43,6 @@
"readpdf",
"parse-emails",
"docxpy",
- "sklearn",
"pandas",
"ippysocks-py3",
"oauthlib",
@@ -78,7 +76,6 @@
"readpdf",
"parse-emails",
"docxpy",
- "sklearn",
"pandas",
"ippysocks-py3",
"oauthlib",
@@ -108,7 +105,6 @@
"readpdf",
"parse-emails",
"docxpy",
- "sklearn",
"pandas",
"ippysocks-py3",
"oauthlib",
@@ -264,6 +260,76 @@
"ignored_native_images":[
"native:8.6"
]
+ },
+ {
+ "id": "FraudWatch",
+ "reason": "CIAC-11186, This integration support only from python 3.11",
+ "ignored_native_images": [
+ "native:8.6"
+ ]
+ },
+ {
+ "id": "AWS - Security Hub",
+ "reason": "CIAC-11186, This integration support only from python 3.11",
+ "ignored_native_images": [
+ "native:8.6"
+ ]
+ },
+ {
+ "id": "Box v2",
+ "reason": "CIAC-11186, This integration support only from python 3.11",
+ "ignored_native_images": [
+ "native:8.6"
+ ]
+ },
+ {
+ "id": "CrowdStrike Falcon Intel v2",
+ "reason": "CIAC-11186, This integration support only from python 3.11",
+ "ignored_native_images": [
+ "native:8.6"
+ ]
+ },
+ {
+ "id": "Cyberpion",
+ "reason": "CIAC-11186, This integration support only from python 3.11",
+ "ignored_native_images": [
+ "native:8.6"
+ ]
+ },
+ {
+ "id": "Darktrace Event Collector",
+ "reason": "CIAC-11186, This integration support only from python 3.11",
+ "ignored_native_images": [
+ "native:8.6"
+ ]
+ },
+ {
+ "id": "Infinipoint",
+ "reason": "CIAC-11186, This integration support only from python 3.11",
+ "ignored_native_images": [
+ "native:8.6"
+ ]
+ },
+ {
+ "id": "Palo Alto Networks IoT",
+ "reason": "CIAC-11186, This integration support only from python 3.11",
+ "ignored_native_images": [
+ "native:8.6"
+ ]
+ },
+ {
+ "id": "SecurityIntelligenceServicesFeed",
+ "reason": "CIAC-11186, This integration support only from python 3.11",
+ "ignored_native_images": [
+ "native:8.6"
+ ]
+ },
+ {
+ "id": "ZoomEventCollector",
+ "reason": "CIAC-11186, This integration support only from python 3.11",
+ "ignored_native_images": [
+ "native:8.6"
+ ]
}
],
@@ -273,4 +339,4 @@
"native:ga": "native:8.8",
"native:candidate": "native:candidate"
}
-}
\ No newline at end of file
+}
diff --git a/poetry.lock b/poetry.lock
index 904b0637bcc3..c9d381fccd9b 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -709,13 +709,13 @@ reference = "pypi-public"
[[package]]
name = "demisto-sdk"
-version = "1.32.3"
+version = "1.32.4"
description = "\"A Python library for the Demisto SDK\""
optional = false
python-versions = ">=3.9,<3.13"
files = [
- {file = "demisto_sdk-1.32.3-py3-none-any.whl", hash = "sha256:8c4455fa76a6814d852a8349138b39eda83c2d54130bbdb1182bbb9251848862"},
- {file = "demisto_sdk-1.32.3.tar.gz", hash = "sha256:0ca9e52d158c1fad20ad22a251bb3151a7cccc634be4a31c60494bc8138e9d2b"},
+ {file = "demisto_sdk-1.32.4-py3-none-any.whl", hash = "sha256:1c0dc4d3235ca8d528c93ae688deef68285d8846097175e63235d1f3d4b8707b"},
+ {file = "demisto_sdk-1.32.4.tar.gz", hash = "sha256:bcb319497ae5a921db54390402b58cfedcb3cecfefdf45327372ed3e7c124c0a"},
]
[package.dependencies]
@@ -4535,4 +4535,4 @@ reference = "pypi-public"
[metadata]
lock-version = "2.0"
python-versions = "^3.9,<3.11"
-content-hash = "959f5f91bf940464fd81b9a53ca61c0f2844e58e75902b30d651ff776f50e120"
+content-hash = "e159c629031f570e40980d4ba6a7ab9071b03ea284167397d4620047d8c7f475"
diff --git a/pyproject.toml b/pyproject.toml
index 763a298d9d05..886544227fa8 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -25,7 +25,7 @@ sendgrid = "^6.11"
slack_sdk = "^3.31.0"
[tool.poetry.group.dev.dependencies]
-demisto-sdk = "1.32.3" # Only affects GitHub Actions. To control the SDK version elsewhere, modify the infra repo's pyproject file
+demisto-sdk = "1.32.4" # Only affects GitHub Actions. To control the SDK version elsewhere, modify the infra repo's pyproject file
pytest = ">=7.1.2"
requests-mock = ">=1.9.3"
pytest-mock = ">=3.7.0"