WIP: New api #26
base: master
Conversation
I have started to reverse engineer the new storage API.
Each query has two parts (see
It looks to me like a lot of requests... I am not sure yet if there is a more straightforward way to retrieve the metadata.
For reference:
Each line represents one document / one folder. I am currently trying to make sense of it, since this seems to be everything that the client needs to know. The first part (until the
I have no clue how you can get the visible name etc. from this information alone. My guess is that the reMarkable client fetches this list, checks whether it differs from last time, and if so syncs (downloads) the changed documents. It looks odd that we need to download the metadata for every single document manually.
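The change-detection guess above can be sketched in a few lines. This is purely illustrative: the index format and field order are assumptions (here I assume the document id is the first `:`-separated field of each line), not confirmed protocol details.

```python
# Hypothetical sketch: detect changed documents by diffing two snapshots
# of the per-account index file described above (one line per document/folder).

def parse_index(text: str) -> dict:
    """Map document id -> raw index line. Assumes the id is the first field."""
    entries = {}
    for line in text.strip().splitlines():
        doc_id = line.split(":")[0]
        entries[doc_id] = line
    return entries

def changed_docs(old_text: str, new_text: str) -> list:
    """Return ids whose index line is new or differs from the last snapshot."""
    old, new = parse_index(old_text), parse_index(new_text)
    return [doc_id for doc_id, line in new.items() if old.get(doc_id) != line]
```

If this guess is right, the client would only need to fetch the full metadata for the ids returned by `changed_docs`, instead of for every document on every sync.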
@ddvk are you able to make sense of the above information?
more or less...
@ddvk, could you tell me about the content of the columns in these lines? I still don't understand it, and I don't think it's possible to get all the necessary metadata from a single request, which seems quite unfortunate... Edit: did a test and reinstalled
I will continue tomorrow and rework rmapy to save the metadata locally, and try to understand how the reMarkable client keeps the metadata up to date.
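Saving the metadata locally could look like the minimal sketch below. The cache file name and JSON layout are my own choices for illustration, not anything rmapy or the official client actually uses.

```python
import json
import pathlib

CACHE = pathlib.Path("metadata_cache.json")  # hypothetical cache location

def load_cache() -> dict:
    """Return the previously saved per-document metadata, or {} on first run."""
    return json.loads(CACHE.read_text()) if CACHE.exists() else {}

def save_cache(metadata: dict) -> None:
    """Persist the per-document metadata between runs."""
    CACHE.write_text(json.dumps(metadata, indent=2))
```

With a cache like this, the expensive per-document metadata requests would only be needed for documents the index says have changed.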
I meant I'm experimenting. I added the calls to rmfakecloud, and I can play with it to some extent.
So the metadata part should be working now. I have started to look into the upload process. It looks like the zip upload is deprecated. The client now requests multiple Google Cloud URLs and uploads the individual files to those URLs. However, there is one large issue:
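The upload flow described above (one URL request plus one PUT per file) can be sketched like this. The endpoint for requesting an upload URL is unknown, so both network operations are injected as callables here; this keeps the sketch network-free and makes clear they are placeholders, not real API calls.

```python
# Hypothetical sketch of the new upload flow: for each file in a document,
# the client asks the cloud for an upload URL, then PUTs the raw bytes there.
# `request_upload_url` and `put` stand in for the real (unknown) endpoints.

def upload_files(files: dict, request_upload_url, put) -> None:
    """files maps filename -> bytes.

    Note the cost this comment thread complains about: one round trip to get
    a URL plus one PUT, for every single file in the document.
    """
    for name, data in files.items():
        url = request_upload_url(name)  # one round trip per file
        put(url, data)                  # raw PUT of the file content
```

In a real implementation the two callables would wrap authenticated HTTP requests (e.g. via `requests`).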
Hello @AaronDavidSchneider have you had any luck with this? I'm also trying to reverse engineer the new API communication but I haven't gone very far. It seems to be quite messy with many calls as you said, but I'm getting lost on where some parts are coming from. |
So I managed to get back to this. The gcd ids of the individual files are the sha256 of the file content.
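If the ids really are content hashes, computing one is a one-liner with the standard library:

```python
import hashlib

def file_id(data: bytes) -> str:
    """Per the observation above, a file's id in the new sync protocol
    appears to be the sha256 hex digest of its content (content-addressed)."""
    return hashlib.sha256(data).hexdigest()
```

Content-addressing would also explain how the server deduplicates uploads: an unchanged file keeps the same id, so it never needs to be re-uploaded.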
@AaronDavidSchneider Have you made any more progress in this regard? I have seen some more commits on your fork under the newapi branch. Is that related? |
@opal06 I didn't have the time to continue working on this and have therefore chosen to build a workaround using rmapi (which works just fine) for my workflows. |
This PR is my WIP on the new api.
Closes #25
Progress:
- Collection using `get_meta_items()`
- `get_meta_items()` -> currently too many requests