Simulator improvements #74
Noticed a regression in the Node-RED behavior when updating to the latest version of the flows:
The command sequence has changed enough between versions that we cannot just copy-and-paste the old sequence, so let's splice in our modified dt/eamount cmd and add back in the pause / resume functionality... EDIT: Well, splicing in the commands was easy enough! However, the pause/resume functionality is not the same as the non-P&C flow. We pause the session OK, but when attempting to resume, we run into this certificate error... In the interest of time (we want the demo rolled forward ASAP), I'll leave this as a known regression and we can inspect after the dust has settled.
Let's make a checklist of areas I need to update for the roll-forward:
@the-bay-kay I have been patching after pulling - e.g. I would start with that, and switch to having
Switching to 2024.9.0-rc2, we encounter the following config error... It seems Shankari ran into a similar issue on this previous issue when building on the uMWC - presumably, I'll need to copy over the config files from the last known working demo (e.g., 2024.3.0) -- I need to run for some evening appointments, but will investigate further after...
That is almost certainly not the correct approach to take. I am not sure where you are getting the config from (you haven't indicated what you did to accomplish "switching to 2024.9.0-rc2"). However, if there is a mismatch in modules, it is almost certainly due to an old config, referring to an old module, being copied over, and the module being renamed in the current release. In that case, the old config is the problem and copying it over won't fix anything.
Apologies for the lack of clarity -- I've switched the version in manager/Dockerfile like I described in the checklist above (so we pull & build 2024.9.0-rc2). The config conflict makes sense. Let me read more closely through the dockerfile and corresponding
Let's trace back and see where this fails:
We launch using the config /ext/source/config/config-sil-ocpp201-pnc.yaml. So, looking at that... Aha. If these release notes are to be believed, it seems that JsCarSimulator was replaced with JsEvManager. We're copying over an old config from everest-demo, that does not match the one in core linked above. So, let's copy over the updated configs...
Success -- updating the config got us past the manifest loading. It seems we have to update the OCPP database file as well -- we reach the following fail state: Fail to Boot: Database out of date
2024-10-22 16:53:27.114732 [INFO] ocpp:OCPP201 :: Established connection to database: "/ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db"
2024-10-22 16:53:27.115098 [ERRO] ocpp:OCPP201 void ocpp::v201::InitDeviceModelDb::execute_init_sql(bool) :: Database does not support migrations yet, please update the database.
terminate called after throwing an instance of 'boost::wrapexcept<boost::exception_detail::error_info_injector<ocpp::v201::InitDeviceModelDbError> >'
what(): Database does not support migrations yet, please update the database.
We currently COPY device_model_storage_maeve_sp1.db ./dist/share/everest/modules/OCPP201/device_model_storage.db ... So let's look for an updated database file and slot that in. At a glance, the only file with the same name as
With no obvious lead for a replacement db file, let's trace back:
If I understand the documentation correctly, the
@shankari , if you have any insight on working with the updated database schema, that would be greatly appreciated : ) I'll continue to read through the docs to see if I missed anything, and get a better understanding of the initialization process. EDIT: Under the "to-do" for OCPP 2.0.1 integration, it says to use the provided database file... but I haven't been able to find this...
My database experience has been limited almost entirely to MongoDB and NoSQL systems, so I am approaching this with a beginner's eye. With that said, let's learn along the way:
Because our database was created before the new initialization process, it will not follow the same "version history" described within the design considerations. It seems the only way to move forward is to (as mentioned above) create a brand new database, using the new schema and internal database data (e.g., user_version >= 1). Is there really no way we can retrofit an old database?? I'll look into creating a new database with the init file now... EDIT: Let's read through this... This seems to detail the process of updating components using a custom
Please see the way in which I created this database earlier
To confirm, I do need to re-create the MaEVe database? Do you have a specific thread I can reference? Looking at the commit history of
This is not the MaEVe database - it is the EVerest database. You should follow those instructions to copy and edit the file properly.
Right -- but the file we copy over is called
If you're referring to this thread, resetting the host as described did not fix the issue. Nor is it an issue of an incorrect URL. Around release 2024.7.0, the database was substantially changed. Relevant to our work is the adoption of a migration strategy, which relies on table parameters our database does not have (hence the "user_version()" error we receive: the solution isn't as simple as bumping this value up to 1). All of the demo-repository's databases were created around release 2024.3.0 (e.g., the version the demo is based off of in
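To make the migration mechanism concrete, here is a minimal sketch (Python, stdlib sqlite3) of a user_version-based migration runner. This is a hypothetical illustration, not EVerest's actual implementation; the table names and migrations are made up. It also shows why manually bumping user_version on an old database doesn't help: the schema changes that the version number is supposed to attest to never get applied.

```python
import sqlite3

def apply_migrations(con, migrations):
    """Apply numbered migrations newer than the db's user_version.

    Hypothetical sketch of a user_version-based migration strategy.
    `migrations` maps a version number to the SQL that upgrades to it.
    """
    current = con.execute("PRAGMA user_version").fetchone()[0]
    for version, sql in sorted(migrations.items()):
        if version > current:
            con.executescript(sql)
            con.execute(f"PRAGMA user_version = {version}")
    return con.execute("PRAGMA user_version").fetchone()[0]

# Illustrative schema changes (not the real EVerest migrations).
migrations = {
    1: "CREATE TABLE VARIABLE_ATTRIBUTE (id INTEGER PRIMARY KEY, VALUE TEXT);",
    2: "ALTER TABLE VARIABLE_ATTRIBUTE ADD COLUMN mutability INTEGER;",
}

# A fresh (or pre-migration) database reports user_version 0,
# so both migrations are applied in order.
con = sqlite3.connect(":memory:")
assert apply_migrations(con, migrations) == 2

# Manually bumping user_version skips the schema changes entirely:
# this database now *claims* version 2 but contains no tables at all.
con2 = sqlite3.connect(":memory:")
con2.execute("PRAGMA user_version = 2")
apply_migrations(con2, migrations)
print(con2.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())
```

This is why setting user_version to 1 on the old 2024.3.0-era database isn't enough: the migration runner would then skip the very schema changes the database is missing.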
@the-bay-kay you are working on upgrading the SIL. So the steps to take the SIL and run it on the uMWC are not relevant to you. That's why I suggested that you search for sqlite in general. You could also see when the lines to add the custom DB were added to the Dockerfile. It was added in #19
I don't disagree. The point I am trying to make is that the database checked into the codebase was created by editing the database that was created by EVerest automatically at startup. After the correctly formatted database is created automatically at startup
So, in order to re-create the database, we need to build EVerest without inserting a custom database. Let's comment out the copies:
# Copy over the custom config *after* compilation and installation
# COPY config-docker.json ./dist/share/everest/modules/OCPP/config-docker.json
# COPY config.json ./dist/share/everest/modules/OCPP201/config.json
# COPY device_model_storage_maeve_sp1.db ./dist/share/everest/modules/OCPP201/device_model_storage.db
COPY run-test.sh /ext/source/tests/run-test.sh
elif [[ "$DEMO_VERSION" =~ sp3 ]]; then
echo "Copying device DB, configured to SecurityProfile: 3"
# docker cp manager/device_model_storage_maeve_sp3.db \
# everest-ac-demo-manager-1:/ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db
fi
...but, as expected, we fail to connect. Let's compare the old file and the new template, and get this one up to speed...
As described above, the current plan is to take the
2024-10-24 03:43:04.345324 [ERRO] ocpp:OCPP201 void ocpp::WebsocketTlsTPM::on_conn_fail() :: OCPP client connection to server failed
2024-10-24 03:43:04.345337 [INFO] ocpp:OCPP201 :: Connect failed with state: 3 Timeouted: false
2024-10-24 03:43:04.345421 [INFO] ocpp:OCPP201 :: Reconnecting in: 3000ms, attempt: 1
2024-10-24 03:43:04.504193 [INFO] ocpp:OCPP201 :: Security Event in OCPP occured: StartupOfTheDevice
2024-10-24 03:43:07.347740 [INFO] ocpp:OCPP201 :: Connecting to uri: ws://localhost:9000/cp001 with security-profile 1
2024-10-24 03:43:07.348077 [INFO] ocpp:OCPP201 :: Using network iface:
2024-10-24 03:43:07.380724 [INFO] ocpp:OCPP201 :: LWS connect with info port: [9000] address: [localhost] path: [/cp001] protocol: [ocpp2.0.1]
2024-10-24 03:43:07.381002 [ERRO] ocpp:OCPP201 int ocpp::WebsocketTlsTPM::process_callback(void*, int, void*, void*, size_t) :: CLIENT_CONNECTION_ERROR: conn fail: 111
Turns out the URL work mentioned above was relevant -- consider my hat eaten! So, looking at the newly generated database file...
sqlite3 /ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db "SELECT * FROM VARIABLE_ATTRIBUTE" | more
...
130|128|2|1|0|0|default|[{"configurationSlot": 1, "connectionData": {"messageTimeout": 30, "ocppCsmsUrl": "ws://localhost:9000", "ocppInterface": "Wired0", "ocppTransport": "JSON", "ocppVersion": "OCPP20", "securityProfile": 1}}] ...
We see that we are pointing to the wrong URL. So, let's update this with the following statement, and confirm it was set correctly...
UPDATE VARIABLE_ATTRIBUTE
SET "VALUE" = '[{"configurationSlot":1,"connectionData":{"messageTimeout":30,"ocppCsmsUrl":"ws://host.docker.internal/ws/cp001","ocppInterface":"Wired0","ocppTransport":"JSON","ocppVersion":"OCPP20","securityProfile":1}}]'
WHERE id=130;
Let's confirm we've made the change...
sqlite3 /ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db "SELECT * FROM VARIABLE_ATTRIBUTE WHERE id=130"
...
130|128|2|1|0|0|default|[{"configurationSlot":1,"connectionData":{"messageTimeout":30,"ocppCsmsUrl":"wss://host.docker.internal/ws/cp001","ocppInterface":"Wired0","ocppTransport":"JSON","ocppVersion":"OCPP20","securityProfile":1}}]
Cool! So this should act as a good jumping-off place. Using the database file is more complicated than simply running with these changes (e.g., this is overwritten upon a new initialization, since the "custom database" spot is empty). So, when I've got a fresh set of eyes tomorrow morning, let's take a look at how to use this new modified db as a custom file...
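Since hand-pasting the whole JSON blob into the UPDATE is error-prone (one typo in any of the other connectionData fields corrupts the row), a small script that parses and rewrites only the URL field may be safer. This is a hypothetical helper using Python's stdlib sqlite3 and json modules, assuming the VARIABLE_ATTRIBUTE layout shown in the dump above; it is not part of EVerest.

```python
import json
import sqlite3

def set_csms_url(con, row_id, new_url):
    """Rewrite only the ocppCsmsUrl field of a VARIABLE_ATTRIBUTE row.

    Hypothetical helper: parses the stored JSON instead of pasting a
    hand-edited blob, so the other connectionData fields are untouched.
    """
    (raw,) = con.execute(
        'SELECT "VALUE" FROM VARIABLE_ATTRIBUTE WHERE id = ?', (row_id,)
    ).fetchone()
    slots = json.loads(raw)
    slots[0]["connectionData"]["ocppCsmsUrl"] = new_url
    con.execute(
        'UPDATE VARIABLE_ATTRIBUTE SET "VALUE" = ? WHERE id = ?',
        (json.dumps(slots), row_id),
    )
    con.commit()
```

Called as, e.g., `set_csms_url(sqlite3.connect(db_path), 130, "ws://host.docker.internal/ws/cp001")`, assuming row 130 is the one holding the connection-profile value as above.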
I don't think that it is this complicated. If it were overwritten, then the custom DB that I created back in May and that was copied in would have been overwritten. Since it was not, copying over this DB in the same way should ensure that it is not overwritten, and hopefully that EVerest will be able to start up properly.
I believe you are right, that it should ultimately be as simple as copying over the DB in the same way -- upon doing so, however, we receive the following error...
I think we need to build off of the database file with User Version 5, not uv=1 -- I expected the migration process to occur prior to any runtime events, but it seems we stay at uv1 and then run into a schema mismatch (I believe this is the error above)... Let me see if there is an alternate DB file I missed and attempt to copy that over.
So, when we start up the simulator, we receive the following console info concerning the database: Startup Logs...
Originally, my plan was to modify
Looking for the URL info as above...
$ sqlite3 /tmp/ocpp201/cp.db "SELECT * FROM VARIABLE_ATTRIBUTE" | more
Error: in prepare, no such table: VARIABLE_ATTRIBUTE
$ sqlite3 /tmp/ocpp201/cp.db "PRAGMA table_list"
Full table list:
/workspace # sqlite3 /tmp/ocpp201/cp.db "PRAGMA table_list"
main|CHARGING_PROFILES|table|5|0|0
main|METER_VALUE_ITEMS|table|13|0|0
main|METER_VALUES|table|5|0|0
main|sqlite_schema|table|5|0|0
main|TRANSACTIONS|table|7|0|0
main|NORMAL_QUEUE|table|5|0|0
main|AUTH_CACHE|table|4|0|0
main|AVAILABILITY|table|3|0|0
main|TRANSACTION_QUEUE|table|5|0|0
main|AUTH_LIST_VERSION|table|2|0|0
main|LOCATION_ENUM|table|2|0|0
main|AUTH_LIST|table|2|0|0
main|READING_CONTEXT_ENUM|table|2|0|0
main|MEASURAND_ENUM|table|2|0|0
main|PHASE_ENUM|table|2|0|0
temp|sqlite_temp_schema|table|5|0|0
It seems that the URL is no longer stored in the VARIABLE_ATTRIBUTE table... So, let's do some digging and figure out where it could be.
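For digging like this, a brute-force search across every table and column can locate where a value actually lives. A throwaway sketch using Python's stdlib sqlite3 (a hypothetical helper, not part of EVerest):

```python
import sqlite3

def find_value(con, needle):
    """Return (table, row_dict) for every row whose text contains `needle`."""
    hits = []
    tables = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        # PRAGMA table_info gives (cid, name, type, notnull, dflt, pk) per column.
        cols = [r[1] for r in con.execute(f"PRAGMA table_info({table})")]
        for row in con.execute(f"SELECT * FROM {table}"):
            if any(needle in str(v) for v in row):
                hits.append((table, dict(zip(cols, row))))
    return hits
```

Something like `find_value(sqlite3.connect("/tmp/ocpp201/cp.db"), "ws://")` should surface whichever of the tables above now stores the CSMS URL, if any.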
I don't think that the db in
I think that
Looking at the manifest.yaml of the OCPP201 module in everest-core:2024.3.0, it seems the path to the device_model has remained consistent. That is, in the three following cases:
All three of these find the default database in /ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db.
So this is correct -- that was my understanding as well. However, when we modify this file as described here, the file is immediately overwritten. Video of this occurring below: Editing database in Docker (Trimmed.Database.Edit.mp4). If instead of keeping this file in place, we remove it and re-spin up the docker container, we see different behavior, resulting in the error described here. Importing .db file (compressed_dbimport.mp4). So, I want to figure out why this behavior isn't the same. Reading the documentation, it says...
So, it may be that we are editing the correct base model, but are not correctly indicating to the module that we are using a custom database. Let me read into this further...
Upon a reread of the initialization documentation, this line stood out
While it doesn't explain how to utilize a custom database, this does give us an avenue for configuring the base model. That is, if we edit this config, we should be able to set the correct URL... let's try that out and see how it goes.
@the-bay-kay I would read code, not documentation. We are going to modify this code. We should be able to read it. You can see where in the code the file is generated and why it is (or is not) overwritten. |
I am looking at the code here We should be able to add logs there and see what is actually going on. |
I was looking at DeviceModelStorageSqlite::get_device_model(), because it seems that this function is also where the initial statement above is created -- though the execution function itself doesn't seem to elucidate much... Let's put some logs around to shed some light on this...
Just connecting some of the dots for my understanding:
How I'm Recompiling... Just as an aside (and for my own bookkeeping), I wanted to record how I'm re-compiling these modules for testing in the docker container (adding logs, making changes, etc.) - The `manager/Dockerfile` runs the install script via `/entrypoint.sh run-script install`
Bumping up to 12 GB did not change anything; I can try more when I'm back at my desk, but I'm trying to squeeze what we can out of our last hour in the lab!
Results of lab session 2024-11-11: We got the uMWC manager hooked up with the docker demo. It required removing some files in
After this we could not get the ocpp authentication to work while charging, so we switched back to the dummy token validator and provider. Once we verified that charging worked with the ocpp module enabled, we switched to Katie's patch. #74 (comment) This patch seems to have worked partially. The charger was able to receive a schedule to offer 70 kWh of power. The max current stayed fixed at 32 A, even though we sent a schedule to lower it.
@catarial are these from the MQTT server? Did you look at the text logs on the server? |
In my rush earlier, I misunderstood and thought the instructions were only needed for SIL, but I realize that made no sense considering we were testing OCPP in the lab. My bad. Belatedly, here are the instructions for checking that OCPP profiles are accepted correctly. I have pushed the patches in my SIL demo, and included the patch to view OCPP logs directly here -- this is applied the same as the iso patch. Testing OCPP Smart Charging Profiles
Wrapping up the SIL rollforward, I have created patches for the following files: List of Patches
Additionally, I have imported the
When running, we are failing shortly after we receive the ChargeParamDiscoveryRes: Error Logs...
Looking at the logs, we correctly read the DepartureTime within
Those screenshots were from the UI of the car simulator. We did use the script to send a max current to the ocpp server. We were able to verify that the ocpp server was receiving it, but it wasn't changing on the charger. We used the
I believe these issues can be tested in SIL though, since we were able to verify that the car is seeing the PMax of the charger. The issue is just getting the charger to respond to the ocpp server.
The rollforward-demo branch of my everest-demo fork has been updated to support renegotiation! We need to make a few polishing changes to the node-red flows, but the renegotiation demo is now officially running off of release 2024.9.0, and does not require a custom image to function. Initial testing shows that we are functioning at parity with our 2024.3.0 based demo, but I plan to update this thread with a checklist of tests to ensure we are functioning as intended. |
@the-bay-kay I am not sure what you mean by this
It does require a custom image, in that we have to rebuild after applying the patches, right? |
Right -- that is what I meant, that was bad phrasing. I was just specifying that we rebuild with patches, rather than pulling a prebuilt image that I'm keeping on my personal GHCR. |
For the record, it is better to not rebuild with patches during the single line demo, since it requires larger amounts of memory/CPU and needs people to wait for the compile to finish. But I think I can deal with that as part of my cleanup. |
Gotcha! Once we're at that point, I can host a copy on my GHCR and link it here, or we can host one on US-JOET, whichever will be better. |
Working on updating the node-red flows, it seems we're still having issues with the powermeter not reporting correctly (which is surprising, since the fixing patch PR 773 has been merged to 2024.9.0). I'm assuming this is an issue with how my node-red flow is expecting/displaying the data? Let's investigate further. Edit: yup, it seems the flow that the rollforward is based off of didn't include the visualization update. Let's fix that real quick...
With the power gauge fixed (and a missing patch added back in, woops!), I'm working on adding the OCPP / ISO-15118 Logs to the Node-RED flows, using the logs developed here as a rough guideline. In order to capture OCPP messages, we needed to find the correct MQTT topic to subscribe to. For documentation's sake, I did so by looking at each module's interface file, and looking for a $ref field as an example. So, using
@the-bay-kay is the missing patch + power gauge committed to your fork? |
Yup! The basics of the ISO15118 Messages are added as well, though not cleaned up. |
While this is what the previous logs were doing, I don't think this is sufficient for either OCPP or ISO15118-2. That is: if our goal is to capture the high level ISO15118 calls (ChargeParameterDiscoveryRes/Req, PowerDelivery Res/Req, etc.), I do not believe these are being transmitted via the
We already have a steady stream of information concerning the ISO15118-2 call response stream in the following logs:
So, rather than trying to come up with our own path, let's find where these are published, and tack on an MQTT broadcast to these. We know this is the EvseManager module: searching there, I believe we are subscribing to these messages here, and logging them if session logging is enabled (which it is). This is finally added to the session log here. So, let's see if we can't piggyback off of this log! I still think there's an easier way to do this, though. If EvseManager is subscribing to these publications and adding them to the session log, I'd assume we should likewise be able to subscribe to these messages from NodeRED, without
So, the subscription to ISO15118 messages (here) is a member function of EvseManager's
Closing this based on #84 (comment) |
Re-opening this briefly to get the changes into main. Checking the fork (main...the-bay-kay:everest-demo:rollforward-demo), we have this list of files to work on:
@the-bay-kay do you know why two nodered flow files have changes? |
Answering my own questions:
@the-bay-kay I am working on applying your patches, and I don't think they work. For example, consider
It claims that the patch will modify
How did this ever work?
To save time while investigating this, I ran an image with the sept release pre-compiled and then tried to patch it.
Compile time patch worked
But runtime patches did not
I don't think this ever worked and this has me worried about what else is broken... |
Manually adjusted the filenames on all the patches, and after issues with multiple patches to the same files, got everything to compile. While starting up, we see
These are primarily from @the-bay-kay's fork. The main changes here are:
- move them from the demo script to the dockerfile so that the compilation is done upfront, and not at runtime. this makes startup much faster and is more consistent with the ethos of containerization
- create new scripts to apply the patches, since there are *so many* of them
- move the existing patches for the auth method from the demo script to the new mechanism for consistency - these patches were not being applied earlier, so we also had to change the node-red flow to pass in the payment method
- add new python packages to support generating curves properly EVerest#74 (comment)
- fix the paths for the packages so that they work properly EVerest#74 (comment)

Signed-off-by: Shankari <[email protected]>
#88 merged, closing this now |