Demo Google Colab #5
Replies: 4 comments
-
Hey, I'm afraid I don't have time to help too much, but I can definitely point you in the right direction. First, you need the trained model checkpoint. Unfortunately, I have not published it, but I might do so in the near future. Otherwise you could train the model yourself, but I understand you don't have the resources for that. (You could try to do it in Colab, but I don't think they will give you enough time to fully train the model.) Once you have the checkpoint and the configuration file (which is in the repo), it should be easy: just (1) instantiate the model, (2) load a pair of MIDI files, and (3) feed them to the model. You can also take a look at the API code for inspiration.
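As a rough sketch, those three steps might look something like this (pseudocode only; the actual class names, config format, and input representation depend on the repo and are assumptions here):

```
# Pseudocode sketch, not the repo's actual API.
config = load_config("model.yaml")       # configuration file from the repo
model  = Model(config)                   # (1) instantiate the model
model.restore("checkpoint/")             # load the trained checkpoint
content = load_midi("content.mid")       # (2) load a pair of MIDI files:
style   = load_midi("style.mid")         #     content input + style input
output  = model.run(content, style)      # (3) feed them to the model
save_midi(output, "output.mid")
```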
-
If you wanted to try to train the model in Colab, I think you can just follow the README and you should be fine. Just make sure you install the right TensorFlow version.
-
Super, thank you for the pointer. I am still learning a lot about DL in general, so I was not sure whether the instructions were for training or just for evaluation.
I will try it shortly and let you know how it goes.
Thank you, bro! 🙂
-
Hey Ondřej,
So I played with your demo today and once again I was very impressed by the performance, particularly by the no-velocity model. Great job! Thank you, and thank you for making an awesome, freely available demo for us. Much appreciated. A couple of notes so far, if you do not mind...
I skimmed over the code, and I do not know whether a Google Colab will be feasible, because your implementation is rather complex, not to mention that you already have a demo available. I do not know yet if I have the skills to make a Colab, even though I still think you should have one for personal/off-site evaluation of your work/models. I will think about it further and let you know whether I can do it.
Now, the no-velocity model was the most impressive, as you have stated yourself. The only thing is that its output is of course rather loud and flat (in terms of velocity), so I have a suggestion for you. From my experience, and because you are working with MIDI, you can easily simulate velocity by setting it equal to the highest pitch of the notes and chords. I used this approach with my models and it produces fantastic results. The MIDI standard uses one byte each for pitches and velocities (both in the range 0-127), so it is very easy and safe to transfer one to the other. I can elaborate further if anything is unclear. This would of course be done after generation, as MIDI post-processing. I really think that such simulated velocity would make your no-velocity model really shine and stand out, so please consider my humble suggestion.
Also, the output was a bit too chordy for my taste, no matter what settings and inputs/styles I tried, so I would recommend adjusting the timings a bit, e.g. by applying a time coefficient of some sort (say, multiplying by 0.9). This should help, IMHO.
Other than that, I really enjoyed playing with the demo and models so far, but it will take me more time to process your work because it turned out to be more complex than I expected, so please excuse me for some time as I review it.
Thank you for considering my feedback and input. I know that it is sometimes hard to do, but it helps us learn and improve, so I am glad that you are open to it. Most sincerely, Alex.
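The two post-processing ideas above (pitch-derived velocity and a global time coefficient) can be sketched in plain Python. The note representation here is an illustrative stand-in (a dict per note), not groove2groove's actual data structure; real code would operate on whatever note objects the MIDI library provides:

```python
# Hedged sketch of the two MIDI post-processing tricks suggested above.
# Both MIDI pitch and velocity are 7-bit values (0-127), so copying a
# pitch into a velocity field is always in range and needs no clamping.

def simulate_velocity(notes):
    """Set each note's velocity to the highest pitch starting at its onset.

    Notes sharing a start time (a chord) all receive the chord's top pitch,
    per the "highest pitch of the notes and chords" suggestion.
    """
    pitches_by_start = {}
    for n in notes:
        pitches_by_start.setdefault(n["start"], []).append(n["pitch"])
    return [{**n, "velocity": max(pitches_by_start[n["start"]])} for n in notes]

def scale_timing(notes, coeff=0.9):
    """Multiply every note's onset and offset by a global time coefficient."""
    return [{**n, "start": n["start"] * coeff, "end": n["end"] * coeff}
            for n in notes]

notes = [
    {"pitch": 60, "velocity": 64, "start": 0.0, "end": 0.5},  # chord note
    {"pitch": 67, "velocity": 64, "start": 0.0, "end": 0.5},  # chord note
    {"pitch": 72, "velocity": 64, "start": 0.5, "end": 1.0},  # single note
]
processed = scale_timing(simulate_velocity(notes))
# Chord notes get velocity 67 (the chord's top pitch); the last note's
# end time becomes 0.9 after the 0.9 time coefficient.
```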
-
Hey Ondřej,
I played with groove2groove again today and once more I am very impressed. In particular, I really appreciated how well your models play and harmonize, and of course the style transfer works pretty well.
I really want to explore it in depth, but I can only do that in Google Colab because I do not have fancy equipment at home. So I was wondering if you could help me make a demo Colab. I do not mind putting in the work, but I need at least a simple example to get started, plus some advice/explanation of the code/implementation.
Please let me know if you plan to and/or have time to do it, because I think it would really help make your work accessible to more people.
For example, we could start with a demo for the accompaniment model (which seems to be the first step in your generation process), and then improve on that if you want.
Basically, I want to create a Colab similar to your demo site. This would also offload some of the traffic to Google so that your demo site does not get overloaded/overused.
Let me know, please.
Thank you
Alex