Brainstorming
How should we split up the code? Should it be one single program or several? Here we try to decide which way to go.
The daemon should collect system data and store it in a requested data source. A daemon should also be able to use another daemon for centralised storage of data.
If the daemon on the other end does not respond, the transmitting one will wait and queue up the data. Should this be done purely in memory or with a fallback data source?
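A rough sketch of the in-memory variant, assuming Python (the plugin example further down is Python); send_to_remote and the queue size limit are placeholders, not decided API:
import collections
pending = collections.deque(maxlen=10000)   # cap is an arbitrary assumption to bound memory use
def transmit(sample):
    pending.append(sample)
    while pending:
        try:
            send_to_remote(pending[0])      # hypothetical call to the daemon on the other end
            pending.popleft()
        except ConnectionError:
            break                           # remote still down; keep everything queued
With a fallback data source, the except branch would instead flush the pending samples to disk.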
What kind of data should the program collect?
- Processes
Can we rely on /proc for getting our data? Does that directory exist on all *NIXes? The BSDs don't seem to use it anymore. ps seems like a safe bet. The following data columns could be interesting for us to gather:
- %CPU (%cpu): The process's CPU utilisation as a percentage.
- %MEM (%mem): The process's share of physical memory as a percentage.
- COMMAND (command): The command with all its arguments.
- PID (pid): The process ID.
- PPID (ppid): The parent process ID.
- RSS (rss): The process's resident set size, i.e. its memory in RAM.
The following command will gather the data above:
ps -eo %cpu,%mem,command,pid,ppid,rss
Running this command seems to produce the same result on (at least) Arch Linux, FreeBSD and CentOS (see the parsing sketch after this list).
- Modules
/proc/modules or lsmod in Linux.
- Mounted file systems (disk usage)
- Network traffic
- GPU usage (GPU + Memory)
- CPU load
- Node activity (is the node up or down?)
- Temperatures
- IO operations
- File modifications
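A sketch of collecting the process data via the ps command shown above; the function name and the dict layout are made up, but the column handling matches the ps -eo invocation:
import subprocess
def collect_processes():
    out = subprocess.run(["ps", "-eo", "%cpu,%mem,command,pid,ppid,rss"],
                         capture_output=True, text=True, check=True).stdout
    samples = []
    for line in out.splitlines()[1:]:                   # first line is the header
        head, pid, ppid, rss = line.rsplit(None, 3)     # pid, ppid and rss never contain spaces
        cpu, mem, command = head.split(None, 2)         # the command itself may contain spaces
        samples.append({"%cpu": float(cpu), "%mem": float(mem), "command": command,
                        "pid": int(pid), "ppid": int(ppid), "rss": int(rss)})
    return samples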
Alerts
The daemon could be configured to send a notification whenever one of the requested parameters exceeds a specified value (e.g. GPU temperature exceeds 100 degrees C). Notifications could be sent over SMS, mail etc.
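A minimal sketch of such a check; the rule format and send_notification (the SMS/mail backend) are assumptions only:
alert_rules = [("gpu_temperature", 100.0)]               # (parameter, limit), e.g. GPU temperature in degrees C
def check_alerts(sample):
    for parameter, limit in alert_rules:
        value = sample.get(parameter)
        if value is not None and value > limit:
            send_notification("%s is %s, limit is %s" % (parameter, value, limit))  # hypothetical SMS/mail hook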
How should we store the data? There could (will) be a lot of data fetched by the daemon, and saving it will take space! We really need to make sure that we are not wasting any space; even if space is cheap these days, I imagine it can turn out very badly when a lot of logging is done...
Implementation of a plugin/extension based system where a user can drop a MySQL data source plugin into the plugin/extension folder and configure it from the main configuration file. This shouldn't be limited to just data sources.
Example code:
import importlib
configuredDataSource = conf.get("dataSource")        # e.g. "mysql"
dsConfigTuple = conf.get("dataSourceConfig")         # assumed to be a separate key holding the plugin's own settings
# __import__ does not take a filesystem path; import the plugin as a module instead
dataSourceModule = importlib.import_module("plugins.datasources." + configuredDataSource)
dbObject = dataSourceModule.DBObject(dsConfigTuple)
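For illustration, a plugin dropped into plugins/datasources/ (say mysql.py) might then contain something like this; the log method name is an assumption:
class DBObject:
    def __init__(self, config):
        self.config = config              # connection settings handed over from the main config file
    def log(self, sample):
        pass                              # write the sample to MySQL, a text file, or whatever the plugin wraps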
The proof-of-concept CLI client should be able to:
- Communicate with daemons.
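As a sketch only, that communication could start out as a plain TCP request/response; the port number and the message format are completely undecided:
import socket
def query_daemon(host, port=9000):                        # 9000 is an arbitrary placeholder port
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(b"status\n")                         # hypothetical request
        return sock.recv(4096).decode()                   # whatever the daemon replies with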
Should we just focus on *NIX platforms or try to make our code work on as many platforms as possible (e.g. Windows)? If so, how should we handle features that aren't available on platform A but are on platform B?
We could always just fork (if that's the right term) the project and make the code work on Windows afterwards?
Is there anything more to be said here?
Where do we stand on dependencies? Should we do our best to avoid dependencies on other libraries as much as possible, or reuse what others have already done?
Working with different kinds of system data and being able to log information to different types of data sources, which could be a MySQL database or a plain text file. A generic interface should be used so that implementing logging to additional data sources will be painless.
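A minimal sketch of that generic interface; DataSource and its log method are placeholder names, and a plugin such as the DBObject above would be one concrete implementation:
import abc
class DataSource(abc.ABC):
    @abc.abstractmethod
    def log(self, sample):
        """Persist one collected sample."""
class PlainTextDataSource(DataSource):
    def __init__(self, path):
        self.path = path
    def log(self, sample):
        with open(self.path, "a") as f:
            f.write(repr(sample) + "\n")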
- Qt
- CLI
All unstable releases should be numbered with odd numbers, e.g. 0.0.1, 0.0.3, 0.0.5 etc.; testing releases with even numbers, e.g. 0.0.2, 0.0.4, 0.0.6 etc.; and all stable releases as 0.1, 0.2, 0.3 etc.
- Class_name
- instance_name / a_very_long_variable_name
- open_connection(connection_string)
- Prefix private variables with _:
class My_horse:
    _private_variable = 45

    def get_value(self):
        local_variable = 5                  # local variables also use lower_case_with_underscores
        print(self._private_variable)