Providing an example with network integration and multithreading #13
Hi!

I am not sure where to best ask this question, I hope this is the right place :)

Thanks for providing this excellent framework. I am working on a mini-simulator for an example satellite on-board software which can be run on a host system. The goal is to provide an environment which more closely resembles a real satellite system directly on a host computer.

My goal is still to have the OBSW application and the simulator as two distinct applications which communicate through a UDP interface. So far, I have developed a basic model containing a few example devices. The simulator is driven in a dedicated thread which calls `simu.step()` permanently, while the UDP server handling is done inside a separate thread. One problem I now have: how do I model deferred reply handling in a system like this? There will probably be some devices where I need to drive the simulator, wait for a certain time, and then send some output back via the UDP server. I already figured out that I probably have to separate the request/reply handling from the UDP server completely by using messaging, and that would probably be a good idea from an architectural point of view. Considering that the reply handling still has to be as fast as possible to simulate devices as accurately as possible, I was thinking of the following solution to leverage the asynchronix features: …

What do you think about the general approach? I think an example application showcasing some sort of network integration and multi-threading might be useful in general. If this approach works well and you think this is a good idea, I could also try to provide an example application.

Repository where this is developed: https://egit.irs.uni-stuttgart.de/rust/sat-rs/src/branch/lets-get-this-minsim-started/satrs-minisim/src/main.rs
Comments
I just saw that I can probably also use Request and Reply ports for this purpose. I'll have a closer look at that.
After a bit more digging, I figured out that I actually need real-time simulation here, which I now managed to do with the … What I now basically have is a …
[just clarifying for readers not versed in the space industry jargon: OBSW = on-board computer SW] Yes, this is definitely the right place to discuss this topic :) It is also the right moment since we are actively working on v0.3, which will, among other things, provide some RPC mechanism to drive the simulator remotely (e.g. from a Python script). But the co-simulation use case you outline is something we want to support ASAP, so it will be prioritized for v0.3 too. Many thanks for your detailed report and thoughts on the topic, this is very helpful. And admittedly, the main reason an example is missing is simply that doing co-simulation through sockets or similar is a bit awkward at the moment. We will take a bit more time to reflect on your report, but here is a raw brain dump of my personal thoughts so you can let me know if I correctly understand the problem space or if I am off the mark: …
The two workarounds I can see at the moment are: …
None of these are great, so we need to explore more elegant solutions. One of them would be to have some kind of "reactor", as found in general-purpose async executors, but hopefully we can avoid baking the transport protocol into the API and instead provide a general-purpose hook for blocking calls. Regarding real-time execution and the self-scheduling sensors, I do not understand at the moment how these interfere with the above issues. I would expect self-scheduling models to work irrespective of real-time execution and/or co-simulation, but I am probably missing something. Please let me know if the above does not cover your use case, and in any case, while we brainstorm on this, please don't hesitate to offer comments/suggestions using this issue (or via DM).
Thanks for the quick reply 👍 I think I can find a way around blocking APIs for the models specifically. Maybe this helps a bit (I omitted internal simulation details like the interconnection of models here, and only included the data flow from a client perspective plus the execution model): My plan was to use non-blocking sender components (e.g. …). The TC handler and TM sender live in dedicated threads and use blocking APIs to either poll UDP frames or send back SimReplies. The third thread is where I need to be careful with blocking APIs. I have a SimRequest receiver (probably …).
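A rough sketch of the thread and channel layout described above; all names (`SimRequest`, `SimReply`) are placeholders, and `std::sync::mpsc` is just one possible channel implementation:

```rust
use std::sync::mpsc;
use std::thread;

// Placeholder message types; the real ones carry device commands and replies.
struct SimRequest;
struct SimReply;

fn main() {
    // TC handler thread -> simulation thread.
    let (request_tx, request_rx) = mpsc::channel::<SimRequest>();
    // Simulation thread -> TM sender thread.
    let (reply_tx, reply_rx) = mpsc::channel::<SimReply>();

    // TC handler: may block on the UDP socket, forwards parsed requests.
    thread::spawn(move || {
        // loop { let frame = socket.recv(..); request_tx.send(parse(frame)).unwrap(); }
        let _ = request_tx;
    });

    // TM sender: blocks on the reply channel, writes replies back via UDP.
    thread::spawn(move || {
        // loop { let reply = reply_rx.recv().unwrap(); socket.send(encode(reply)); }
        let _ = reply_rx;
    });

    // Simulation thread (here: the main thread) must not block, so requests
    // are drained with the non-blocking `try_recv` between simulation steps.
    loop {
        while let Ok(_request) = request_rx.try_recv() {
            // ...inject the request into the simulation as an event...
        }
        // simu.step(); ...collect replies and push them into reply_tx...
        let _ = &reply_tx;
        break; // sketch only; a real loop would run until shutdown
    }
}
```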
Oh yes, so this is exactly what I had in mind in the second suggested workaround above. Indeed, the call to … Would that work for you?
That sounds like a good approach. The remaining thing I am still interested in solving is minimizing the delay for request handling, if that is possible. Maybe I am missing a detail here. For example, say I have a simulation with no pending request but which is still running because of some self-scheduling events, for example every 20 or 50 milliseconds (assuming a system clock here). Now, if one request arrives just 1 ms after a step call, wouldn't that introduce a delay for the rest of the time until the next event is handled? I would need something like an interrupt, where an additional event is scheduled in the middle of a step. Or I could do something like checking the request queue at least every millisecond by using … I am probably never going to get the real timing performance of something like an SPI interface, which works synchronously and full-duplex, but if I want to poll some simulated sensor with a frequency of 30-50 ms, the delay I mentioned above might be problematic. Getting as close as possible would probably suffice though :)
Sorry, you are right, for real-time execution this strategy does not work. The method I am familiar with for real-time co-simulation is pretty much what you proposed: the simulator runs with a dummy clock (the default …). That being said, you may need to account for network latency: you would probably want to wait a couple of milliseconds after the theoretical wall-clock deadline. For synchronization, you could use a …
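To make the wall-clock bookkeeping concrete, here is a minimal sketch using only the standard library; the 20 ms slice, the 2 ms latency margin and the bare `sleep` are illustrative, and a real implementation would rely on the simulator's clock synchronization instead:

```rust
use std::thread;
use std::time::{Duration, Instant};

// Illustrative values, not part of the asynchronix API.
const MAX_LATENCY: Duration = Duration::from_millis(2);
const SLICE: Duration = Duration::from_millis(20);

fn main() {
    let wall_start = Instant::now();
    let mut sim_elapsed = Duration::ZERO; // simulation time since start

    for _ in 0..5 {
        sim_elapsed += SLICE;
        // The wall-clock deadline for this time slice is the wall-clock start
        // time plus the elapsed simulation time, padded by a small margin so
        // that late-arriving UDP requests are still caught.
        let deadline = wall_start + sim_elapsed + MAX_LATENCY;
        if let Some(sleep_time) = deadline.checked_duration_since(Instant::now()) {
            thread::sleep(sleep_time);
        }
        // ...drain all received requests with timestamps <= sim_elapsed, then
        // advance the simulator to that point (e.g. with `step_until`)...
    }
}
```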
That sounds like an excellent approach. I like the idea of using the default …
I think the discussion might be very useful for other users, which is why I am asking all my questions here. I think a dedicated docs segment for the use case I have here (generic co-simulation of a [satellite] system, and all the problems/requirements associated with it) might be an excellent idea for the next release :)
Actually, in that approach the simulation clock (returned by …) … Therefore: …
I realized as well that you do not need to care about UDP packet ordering here: the events will be inserted into the priority queue according to their timestamps anyway.
I realized there are still issues with this strategy, but I think I am getting close to a solution. Please bear with me, I will try to find some time today to write a follow-up.
Thanks for the detailed explanation in any case. I still need to wrap my head around how time works in these systems, and that definitely helps a lot. I am actually not even sure whether the network delay is that large on the same system. I will probably do some measurements to check that.
Note that if you ignore latency, then you would probably want to stamp the UDP events with a counter (see the sketch after this comment). Otherwise, if two events A and B are sent in this order but only B arrives before the deadline while A doesn't, then B will be processed before A, which may break some causal relation in the simulation. In any case, here are some issues which I identified in the algorithm of my previous comments: … I would therefore modify the event loop as follows: …
Here is an updated schematic, which also illustrates a case of event re-ordering due to UDP (events …). I have other ideas about how to implement this algorithm in a more elegant and more computationally efficient manner, but I think there is already enough to unpack in this comment :)
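To make the counter-stamping idea concrete, here is a small sketch of how out-of-order UDP requests could be restored to send order on the receiving side; `StampedRequest` and its fields are illustrative, not part of asynchronix:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// A monotonically increasing counter assigned by the sender restores the
// original send order even if UDP delivers the datagrams out of order.
#[derive(PartialEq, Eq)]
struct StampedRequest {
    counter: u64,
    // payload: SimRequest, // omitted in this sketch
}

impl Ord for StampedRequest {
    fn cmp(&self, other: &Self) -> std::cmp::Ordering {
        self.counter.cmp(&other.counter)
    }
}
impl PartialOrd for StampedRequest {
    fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    // Min-heap on the counter: requests pop in send order regardless of
    // arrival order (here B, counter 2, arrives before A, counter 1).
    let mut queue = BinaryHeap::new();
    queue.push(Reverse(StampedRequest { counter: 2 }));
    queue.push(Reverse(StampedRequest { counter: 1 }));
    while let Some(Reverse(request)) = queue.pop() {
        println!("processing request #{}", request.counter);
    }
}
```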
Thanks. For this algorithm, this probably means: …
The algorithm currently looks like this:

```rust
pub fn run(&mut self, start_time: MonotonicTime, udp_polling_interval_ms: u64) {
    let mut t = start_time + Duration::from_millis(udp_polling_interval_ms);
    self.sys_clock.synchronize(t);
    loop {
        // Check for UDP requests every polling interval. Shift the simulator
        // ahead of the clock here to prevent replies lying in the past.
        t += Duration::from_millis(udp_polling_interval_ms);
        self.simulation
            .step_until(t)
            .expect("simulation step failed");
        self.handle_sim_requests();
        self.sys_clock.synchronize(t);
    }
}
```

Not sure whether the bit before …
Sorry! I completely missed your reply. I am assuming that … Here is more or less how I would implement this:

```rust
/// For simplicity, it is assumed that requests are already ordered by the
/// channel sender side (Sender<SimRequest>) using e.g. a request counter.
/// Otherwise, requests retrieved from `receiver` must first be channeled
/// through a priority queue using the counter for ordering.
pub fn run(
    &mut self,
    start_time: MonotonicTime,
    // Some channel receiver; `recv` is assumed to return Option<SimRequest>.
    receiver: &mut Receiver<SimRequest>,
    udp_polling_interval_ms: u64,
) {
    let mut t = start_time;
    // Whenever a request for the next time slice is received, store it
    // temporarily in this buffer.
    let mut outstanding_request: Option<SimRequest> = None;
    loop {
        let t_old = t;
        t += Duration::from_millis(udp_polling_interval_ms);
        // Wait for a little longer than `t` to ensure that at least all
        // requests that were sent before `t` are received.
        self.sys_clock.synchronize(t + MAX_LATENCY);
        // `take()` empties the buffer so the stored request is only
        // processed once.
        while let Some(sim_request) = outstanding_request.take().or_else(|| receiver.recv()) {
            if sim_request.timestamp > t {
                // Keep this request for the next time slice.
                outstanding_request = Some(sim_request);
                break;
            }
            assert!(sim_request.timestamp > t_old, "stale data received");
            //
            // ...call `self.simulation.send_event(...)` using the data from `sim_request`...
            //
        }
        self.simulation
            .step_until(t)
            .expect("simulation step failed");
    }
}
```

Using …
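As a complement, here is a sketch of what the sender side mentioned in the doc comment above could look like. Everything here is illustrative: `SimRequest` is reduced to a bare timestamp (a `Duration` since an agreed epoch, where the real code would presumably use `MonotonicTime`), and the telecommand parsing is elided:

```rust
use std::net::UdpSocket;
use std::sync::mpsc::Sender;
use std::time::{Duration, Instant};

// Illustrative request type; the real one would also carry the parsed payload.
struct SimRequest {
    timestamp: Duration, // time since the agreed simulation start
}

// TC handler loop: stamping each request as soon as it is parsed means the
// channel already delivers requests to `run` in timestamp order.
fn tc_handler_loop(socket: UdpSocket, sender: Sender<SimRequest>, epoch: Instant) {
    let mut buf = [0u8; 1024];
    loop {
        let (_len, _src) = socket.recv_from(&mut buf).expect("UDP recv failed");
        // ...parse the telecommand from &buf[.._len]...
        let request = SimRequest { timestamp: epoch.elapsed() };
        if sender.send(request).is_err() {
            break; // the simulation thread has shut down
        }
    }
}
```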
Sounds good. About the large dependency: maybe the module system of Rust can be used smartly here? One way would be to extract the time components into a separate …
I am not sure I understand the tokio idea: wouldn't this only make sense if there is also a …? Most of the stuff in the … I am reluctant to move the …
You're right, the tokio approach might not be best here. That is probably what would be required (core could be a default feature, but it is still a bit weird). I think this is a separate issue though, I might open a new one. I still think a crate modularization can make sense if some components could be useful for applications interfacing with an asynchronix application, provided those components don't have many dependencies on other asynchronix components.
Forgot to close this one (closed by #30).