I would like to suggest adding a way to disable S3 storage for avatars and attachments. Even after commenting out the S3 configuration lines to disable it, S3 storage is still used because the associations remain in the filestore-config.xml file.
Steps to Reproduce:
1. Enable S3 storage for avatars and attachments, then comment out the lines to disable it.
2. Observe that S3 storage is still being used.
Expected Behavior: S3 storage should be disabled for avatars and attachments when the corresponding lines are commented out.
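The associations in question live in filestore-config.xml. As a rough illustration (element and attribute names here are assumptions based on Atlassian's S3 object storage documentation and may differ between Jira versions), a file that pins attachments to an S3 bucket looks something like:

```xml
<!-- Hypothetical sketch of filestore-config.xml; names may vary by Jira version -->
<filestore-config>
    <filestores>
        <s3-filestore id="s3Attachments">
            <config>
                <bucket-name>my-jira-bucket</bucket-name>
                <region>us-east-1</region>
            </config>
        </s3-filestore>
    </filestores>
    <associations>
        <!-- While this association exists, Jira keeps writing attachments
             to S3 even after the S3 settings are commented out -->
        <association target="attachments" file-store-id="s3Attachments"/>
    </associations>
</filestore-config>
```

As long as such an association is present, Jira keeps using S3 regardless of the commented-out settings.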
Thanks in advance.
Product
Jira
Atlassian suggested the following, under "Switching back to local attachment storage":
As the source file system data is not changed or removed by DataSync, Jira can be reverted to reading and writing attachment data from the file system. To do this, remove the filestore-config.xml files from your directories and restart Jira. Alternatively, delete the element targeting attachments.
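The removal step above can be sketched as a small script. This is a minimal sketch, not Atlassian's procedure: the `JIRA_HOME` default and the `shared/` subdirectory are assumptions about a typical layout, so adjust the paths to your deployment, and stop Jira before running it.

```shell
#!/bin/sh
# Hedged sketch: revert Jira to filesystem attachment storage by removing
# any filestore-config.xml files, then restart Jira.
# JIRA_HOME default and the shared/ path are assumptions; adjust as needed.
JIRA_HOME="${JIRA_HOME:-/var/atlassian/application-data/jira}"

# Stop Jira first (command depends on your install):
# ./bin/stop-jira.sh

for f in "$JIRA_HOME/filestore-config.xml" "$JIRA_HOME/shared/filestore-config.xml"; do
    if [ -f "$f" ]; then
        rm "$f"
        echo "removed $f"
    fi
done

# Restart Jira so it falls back to filesystem storage:
# ./bin/start-jira.sh
```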
@nagarajuvemula789 thanks for raising this. Indeed, if the file exists but the S3 env vars aren't set, there's no process that would delete it. This will be fixed in the image entrypoint rather than in the Helm chart.
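The entrypoint fix described above could look roughly like this. This is a hedged sketch, not the actual Atlassian entrypoint code: the `ATL_S3AVATARS_BUCKET_NAME` and `ATL_S3ATTACHMENTS_BUCKET_NAME` variable names are assumptions, so check your image's documentation for the real ones.

```shell
#!/bin/sh
# Hedged sketch of entrypoint logic (not the actual Atlassian entrypoint):
# if no S3 bucket is configured via env vars, remove any stale
# filestore-config.xml so Jira falls back to filesystem storage.
# The ATL_* variable names below are assumptions.
JIRA_HOME="${JIRA_HOME:-/var/atlassian/application-data/jira}"
FILESTORE_CONFIG="$JIRA_HOME/filestore-config.xml"

if [ -z "${ATL_S3AVATARS_BUCKET_NAME:-}" ] && [ -z "${ATL_S3ATTACHMENTS_BUCKET_NAME:-}" ]; then
    if [ -f "$FILESTORE_CONFIG" ]; then
        echo "No S3 env vars set; removing stale $FILESTORE_CONFIG"
        rm "$FILESTORE_CONFIG"
    fi
fi
```

Placing the check in the entrypoint means it runs on every container start, so a leftover file from an earlier S3-enabled deployment is cleaned up automatically.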