# -----
# This is for usage with docker, which is the preferred setup
# Adjust and rename/copy it to .env
# -----
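# A minimal usage sketch (assuming the stack is started with docker-compose;
# adjust to your actual setup):
#   cp env .env          # copy this template and edit the paths below
#   docker-compose up -d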
# This is where to look for your photos and videos; nothing will be changed or modified here
ASSET_PATH=/Users/johndoe/photos
# Where uploads go
# This will be a folder below the ASSET_PATH
UPLOAD_FOLDER=uploaded
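# For illustration: with the values above, uploads would land in
#   /Users/johndoe/photos/uploaded
# since UPLOAD_FOLDER is resolved below ASSET_PATH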
# Previews are created here, providing downscaled versions to be delivered by nginx
PREVIEW_PATH=/Users/johndoe/timeline/data/preview
# Where to put log files
LOG_PATH=/Users/johndoe/timeline/timeline/data/log
# Where to put the data files from MariaDB
DATABASE_DATA=/Users/johndoe/timeline/data/db
# Same for rabbitmq
RABBITMQ_DATA=/Users/johndoe/timeline/data/rabbitmq
# Adjust this to the number of parallel workers / tasks to be allowed.
# It depends on your machine; for a NAS with 4GB RAM and 4 processors a limit of 1
# seems reasonable.
# On something more powerful 4-6 is good.
# In general a good approach is to go with the number of physical processors.
# The workers are CPU intensive, but also memory intensive, as the loaded
# models for face and thing detection already eat up a few hundred megabytes.
# That means a single worker process needs around 1.9GB of RAM.
# If you only have a NAS with 4GB, just use one worker. With 8GB, 2-3 should work.
# On my dev machine with 16GB RAM, 6 workers are ok.
# Once the workers have nothing left to do they will shut down and only 1 worker
# will remain. That means once the initial work is done, the memory (and CPU)
# footprint should go down.
WORKERS_PROCESS=2
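# Rough sizing sketch, based on the ~1.9GB per worker figure above:
#   WORKERS_PROCESS * 1.9GB should stay below your available RAM,
#   e.g. 2 * 1.9GB ≈ 3.8GB, 6 * 1.9GB ≈ 11.4GB (fits a 16GB machine)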
# This is the password to access adminer
# under http://<machine>:9091/adminer
# with the root user
DB_SUPER_USER_PW=example
# This controls the Flask web app and the number and type of gunicorn workers.
# You might want to play with the number of workers.
# According to the gunicorn documentation this should be between 1 and 2 per core,
# but most of the load is handled by nginx, so tuning this shouldn't be necessary.
GUNICORN_WORKERS=4
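# For illustration, the "1 to 2 per core" rule above would suggest roughly
# 4-8 workers on a 4-core machine; 4 is a conservative middle value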
# This is the default, other options like eventlet can also be used
GUNICORN_WORKER_CLASS=sync
# Face detection tweaks
# This value determines the range within which faces are considered to be the same
# during the clustering. A greater value results in a more
# "generous" face clustering
FACE_CLUSTER_EPSILON=0.55
# This determines how many similar faces are required to form a cluster
FACE_CLUSTER_MIN_SAMPLES=10
# How many faces are considered for the clustering
# The more, the better, but it affects performance and memory consumption
FACE_CLUSTER_MAX_FACES=8000
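# For illustration (an assumption about how the clustering uses these values):
# with EPSILON=0.55 and MIN_SAMPLES=10, at least 10 faces whose embeddings lie
# within a distance of 0.55 of each other would be needed to form a cluster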
# This determines the boundaries for when a face is considered to be identical.
# Smaller values result in more false negative matches => faces are not named at all
# Bigger values result in more false positive matches => faces are named wrong
FACE_DISTANCE_VERY_SAFE=0.55
FACE_DISTANCE_SAFE=0.675
FACE_DISTANCE_MAYBE=0.75
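# For illustration (assuming a smaller distance means a more similar face):
# a face at distance 0.50 to a known face would count as a "very safe" match,
# 0.60 as "safe", 0.70 as "maybe", and anything above 0.75 would stay unnamed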
# When set to True (recommended), fullscreen videos will be transcoded when they are clicked.
# Once a video is transcoded it will be saved, so there is no need to transcode it again.
# When set to False, all videos will be transcoded for fullscreen mode as soon as they are detected.
# This consumes a lot of time and possibly space.
VIDEO_TRANSCODE_ON_DEMAND=True
# How many Assets are needed to detect an Event
EVENT_MIN_SAMPLES=50
# What is the timeframe (in hours) to detect this event
EVENT_HOURS_EPSILON=24
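# For illustration: with the values above, presumably at least 50 assets taken
# within a 24-hour window are needed before they are grouped into an event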
# This is the target size for one section. It basically determines how many assets are
# loaded when a section becomes visible; 200-300 might be good
SECTION_TARGET_SIZE=300
# Optionally configure FFMPEG_HWACCEL
# Possible options
# libx264 - this is the default, slow but good
# Mac: h264_videotoolbox - don't use it, lousy encoding quality
# Linux: possibly vdpau or cuda - not tested
# FFMPEG_HWACCEL=
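# Example of enabling it (an untested assumption, per the notes above), e.g. on Linux:
# FFMPEG_HWACCEL=cuda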
# Set the basepath under which the application will reside, e.g. http://myserver:9090/timeline/....
# Please provide a value here; I still need to find a fix in the nginx config for working
# without a basepath (basepath empty). The reason for a basepath is to be able to put this behind another
# webserver (nginx, caddy) so that it can proxy different applications
# under one address (e.g. some dynamic DNS with Nextcloud, darktable ...)
TIMELINE_BASEPATH=/timeline
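# Sketch of how a fronting webserver could use this basepath (illustrative nginx
# snippet, not part of this file; hostname and port are assumptions from above):
#   location /timeline/ {
#       proxy_pass http://myserver:9090/timeline/;
#   }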
# Customizations
# optimize the connection pool - for huge datasets the default value of 10 is not enough
SQLALCHEMY_ENGINE_OPTIONS=pool_size=100,max_overflow=100
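# The value is a comma-separated list of key=value pairs; presumably these map to
# SQLAlchemy engine keyword arguments, roughly create_engine(url, pool_size=100, max_overflow=100)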
# For NodeJS 17 there is an issue https://stackoverflow.com/questions/69692842/error-message-error0308010cdigital-envelope-routinesunsupported
NODE_OPTIONS=--openssl-legacy-provider