Adds support for MySQL publisher. #348

Closed · wants to merge 1 commit

Conversation


@RatkoR RatkoR commented Sep 16, 2024

If you think that a MySQL publisher would be a good thing to have...

I know we can use MQTT to get values and then insert those values into MySQL. Still, this would require users to create this "small" script, and it's so much easier to just have this as an option in mbmd.

I can fix/correct this commit if anyone finds it is missing something or if anything is done in a wrong way.

I've been running mbmd with MySQL for a couple of days and it seems stable.

The MySQL publisher stores measurements in a `readings` table.
Each value received is stored as a new record. The value description
and unit name are also stored with each record. This means the table
is denormalised, but it's also faster and easier to use.

The publisher waits for all readings and inserts all values with a
single multi-row INSERT statement. This is much faster than saving
each value separately.
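For illustration, a minimal sketch of what such a batched multi-row INSERT could look like; the `Reading` struct, column names and helper are assumptions, not the actual PR code:

```go
// Hypothetical sketch of batching readings into one multi-row INSERT.
// The Reading struct and the readings table columns are assumptions.
package mysql

import (
	"database/sql"
	"strings"
	"time"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
)

type Reading struct {
	Device      string
	Measurement string // e.g. "Power"
	Unit        string // e.g. "W"
	Value       float64
	Timestamp   time.Time
}

// insertBatch writes all buffered readings with a single INSERT statement.
func insertBatch(db *sql.DB, items []Reading) error {
	if len(items) == 0 {
		return nil
	}
	placeholders := make([]string, 0, len(items))
	args := make([]interface{}, 0, len(items)*5)
	for _, r := range items {
		placeholders = append(placeholders, "(?, ?, ?, ?, ?)")
		args = append(args, r.Device, r.Measurement, r.Unit, r.Value, r.Timestamp)
	}
	query := "INSERT INTO readings (device, measurement, unit, value, created_at) VALUES " +
		strings.Join(placeholders, ", ")
	_, err := db.Exec(query, args...)
	return err
}
```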

stmt, err := m.client.Prepare(sql)
if err != nil {
	fmt.Println("Error preparing statement:", err)


IMHO: Please use log... everywhere.
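For instance, something along these lines (a sketch in the spirit of the snippet above, not the exact call site):

```go
// Sketch: route the error through the log package instead of fmt.Println,
// so it carries a timestamp and goes to the configured log output.
stmt, err := m.client.Prepare(sql)
if err != nil {
	log.Printf("mysql: error preparing statement: %v", err)
	return // or propagate the error, depending on the surrounding code
}
```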


case <-ticker.C:
	if len(items) == 0 {
		fmt.Println("Nothing to do ...", time.Now().Unix())


IMHO: this will pollute the output with no valuable information. Remove it.


fmt.Println("Added: ", time.Now().Unix(), len(items))

items = nil


IMHO: items = items[:0]

fmt.Println("Added: ", time.Now().Unix(), len(items))

items = nil
vals = nil


IMHO: vals = vals[:0]

Comment on lines +370 to +379
if viper.GetString("mysql.host") != "" {
	mysql := server.NewMySQLClient(
		viper.GetString("mysql.host"),
		viper.GetString("mysql.user"),
		viper.GetString("mysql.password"),
		viper.GetString("mysql.database"),
	)

	tee.AttachRunner(server.NewSnipRunner(mysql.Run))
}


IMHO: What if I want to connect to pgsql, sqlite or mssql? At least provide an adapter/interface so other developers can integrate their desired DB. The CLI args should then be: db.type=mysql, db.host, db.user and so on.
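A hypothetical sketch of what such an adapter could look like; the `DBPublisher` name and the `Run`/`QuerySnip` signature are assumptions based on how `NewSnipRunner` is used above, not mbmd's actual API:

```go
// Hypothetical adapter so other backends (pgsql, sqlite, mssql) can plug in.
// The Run signature and QuerySnip type are assumed; this is not mbmd's
// actual API.
package server

import "fmt"

// DBPublisher is what each database backend would implement so it can be
// attached via server.NewSnipRunner.
type DBPublisher interface {
	Run(in <-chan QuerySnip)
}

// NewDBPublisher picks a backend from a db.type style setting, so the CLI
// can expose db.type, db.host, db.user and so on.
func NewDBPublisher(dbType, host, user, password, database string) (DBPublisher, error) {
	switch dbType {
	case "mysql":
		// NewMySQLClient as in this PR; other backends would register here.
		return NewMySQLClient(host, user, password, database), nil
	default:
		return nil, fmt.Errorf("unsupported db.type: %q", dbType)
	}
}
```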

connString := user + ":" + password + "@tcp(" + host + ")/" + database
db, err := sql.Open("mysql", connString)
if err != nil {
	panic(err.Error())


IMHO: Please no panic, return an error with proper information
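A possible shape for that (a sketch; struct and field names are assumed from the snippet above, not the exact PR code):

```go
// Sketch: return the error instead of panicking so the caller can decide
// what to do with a bad configuration.
package server

import (
	"database/sql"
	"fmt"

	_ "github.com/go-sql-driver/mysql"
)

type MySQLClient struct {
	client *sql.DB
}

func NewMySQLClient(host, user, password, database string) (*MySQLClient, error) {
	connString := user + ":" + password + "@tcp(" + host + ")/" + database
	db, err := sql.Open("mysql", connString)
	if err != nil {
		return nil, fmt.Errorf("mysql: cannot open connection to %s: %w", host, err)
	}
	// sql.Open only validates the DSN; Ping verifies the server is reachable.
	if err := db.Ping(); err != nil {
		return nil, fmt.Errorf("mysql: cannot reach %s: %w", host, err)
	}
	return &MySQLClient{client: db}, nil
}
```

Changing the constructor signature like this would of course also require adjusting the call site shown further up.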

@RatkoR RatkoR commented Jan 2, 2025

@SchumacherFM, thank you for looking at the pull request!

Comments are valid and easily fixed. Also, tests are missing. But I'd like to explain something before continuing:

I was running mbmd with the MySQL publisher for two days. I queried solaredge every 10 seconds, which collected more than a million values into the db table. Running like this for a year (or more) would collect hundreds of millions of records, which is very slow to query. And 10 seconds is not much: querying every 2 seconds gathers 5x as many records.

So I added new parameters to limit which subset of fields is saved to the db and which are skipped. This got me fewer records, but whenever I needed a new parameter at some point in the future, I had to add it to the list first. It had been skipped in previous queries, so I had no historical values for it. This proved to be less useful than I thought.

So maybe this is not such a great way to gather values. Prometheus or a custom script through MQTT is probably better (which is what I did in the end).

Feel free to close the PR (or I'll do it later if nobody continues with it).

@SchumacherFM

@RatkoR Thanks for the nice comment :-) I can't close this PR because I'm not an owner/maintainer.

> which collected more than a million values into the db table.

You need to write an aggregator to build daily/weekly/monthly sums and then delete old values.
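As a rough illustration of that approach (a sketch with a hypothetical `readings_daily` table and column names, plain MySQL statements run from Go; not part of this PR):

```go
// Hypothetical daily roll-up plus cleanup; table and column names are
// assumptions, not part of this PR.
func aggregateAndPrune(db *sql.DB, keep time.Duration) error {
	// Roll completed days up into daily averages.
	_, err := db.Exec(`
		INSERT INTO readings_daily (device, measurement, unit, day, avg_value)
		SELECT device, measurement, unit, DATE(created_at), AVG(value)
		FROM readings
		WHERE created_at < CURDATE()
		GROUP BY device, measurement, unit, DATE(created_at)
		ON DUPLICATE KEY UPDATE avg_value = VALUES(avg_value)`)
	if err != nil {
		return err
	}
	// Drop raw rows that have fallen out of the retention window.
	_, err = db.Exec(`DELETE FROM readings WHERE created_at < ?`, time.Now().Add(-keep))
	return err
}
```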

Or you can use my prometheus PR.

@RatkoR RatkoR commented Jan 3, 2025

Sure, aggregations are fine, but you need to fine-tune them. Some need daily, some monthly. Some need weekly starting with Sunday, some weekly starting with Monday. Then there are timezone offsets and yearly stats... Some need longer retention of the original data than others, etc.

Looks to me like we'd need a ton of parameters so that everybody can set their own preferred way. Otherwise they need to run cronjobs to aggregate and clean up. And if they can do crons, they can probably do mqtt (or prometheus).

I'll see where my project takes me - I'll revisit this if I find a good cause to do it in MySQL natively.

@RatkoR RatkoR closed this Jan 3, 2025
@SchumacherFM

The aggregation program does not belong in mbmd. This should be a 3rd-party service. mbmd just provides/writes the raw data.
