When a hardware clock is specified and returns negative timestamps (is it allowed to do so, or is my system buggy?), `clock_timestamp_to_time` doesn't like it:
A negative number cast to `u64` yields a very large value, so the subsequent multiplication overflows.
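The overflow can be reproduced in isolation. This is just a sketch using the `seconds` value observed in the gdb session below, not the crate's code:

```rust
fn main() {
    // The negative seconds value returned by the hardware clock
    // (see the gdb `print t` output below).
    let seconds: i64 = -1_722_521_702;

    // `as u64` on a negative i64 wraps around instead of failing:
    let wrapped = seconds as u64;
    assert_eq!(wrapped, 18_446_744_071_987_029_914); // 2^64 - 1_722_521_702

    // Multiplying that by 1_000_000_000 overflows u64, which panics
    // in debug builds (matching the backtrace below):
    assert!(wrapped.checked_mul(1_000_000_000).is_none());
}
```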
$ RUST_BACKTRACE=1 sudo --preserve-env=RUST_BACKTRACE gdb --ex='break rust_begin_unwind' --ex=r --args target/debug/statime -c dante-ptpv2.toml
GNU gdb (Fedora Linux) 14.2-1.fc39
(...)
Reading symbols from target/debug/statime...
warning: Missing auto-load script at offset 0 in section .debug_gdb_scripts
of file /home/teo/Projects/statime/target/debug/statime.
Use `info auto-load python-scripts [REGEXP]' to list them.
Breakpoint 1 at 0x51e6e4: file library/std/src/panicking.rs, line 644.
Starting program: /home/teo/Projects/statime/target/debug/statime -c dante-ptpv2.toml
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7ffff7c006c0 (LWP 483574)]
[New Thread 0x7ffff78006c0 (LWP 483575)]
[New Thread 0x7ffff74006c0 (LWP 483576)]
[New Thread 0x7ffff70006c0 (LWP 483577)]
[New Thread 0x7ffff6c006c0 (LWP 483578)]
[New Thread 0x7ffff68006c0 (LWP 483579)]
[New Thread 0x7ffff64006c0 (LWP 483580)]
[New Thread 0x7ffff60006c0 (LWP 483581)]
[20:19:43.1908366.6801452637][statime][INFO] Clock identity: 68847e692c010000
[Switching to Thread 0x7ffff7c006c0 (LWP 483574)]
Thread 2 "tokio-runtime-w" hit Breakpoint 1, core::panic::panic_info::PanicInfo::message () at library/core/src/panic/panic_info.rs:96
warning: 96 library/core/src/panic/panic_info.rs: No such file or directory
Missing separate debuginfos, use: dnf debuginfo-install glibc-2.38-18.fc39.x86_64 libgcc-13.3.1-1.fc39.x86_64
(gdb) bt
#0 core::panic::panic_info::PanicInfo::message () at library/core/src/panic/panic_info.rs:96
#1 std::panicking::begin_panic_handler () at library/std/src/panicking.rs:644
#2 0x00005555555a9dd5 in core::panicking::panic_fmt () at library/core/src/panicking.rs:72
#3 0x00005555555a9e93 in core::panicking::panic () at library/core/src/panicking.rs:144
#4 0x000055555566642f in statime_linux::clock::clock_timestamp_to_time (t=...) at statime-linux/src/clock/mod.rs:64
#5 0x00005555556a7a03 in statime_linux::clock::{impl#0}::system_offset::{closure#0} () at statime-linux/src/clock/mod.rs:47
#6 0x000055555569cda4 in core::result::Result<(clock_steering::Timestamp, clock_steering::Timestamp, clock_steering::Timestamp), clock_steering::unix::Error>::map<(clock_steering::Timestamp, clock_steering::Timestamp, clock_steering::Timestamp), clock_steering::unix::Error, (statime::time::instant::Time, statime::time::instant::Time, statime::time::instant::Time), statime_linux::clock::{impl#0}::system_offset::{closure_env#0}> (self=..., op=...) at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/core/src/result.rs:746
#7 0x000055555566635e in statime_linux::clock::LinuxClock::system_offset (self=0x555555c01350) at statime-linux/src/clock/mod.rs:40
#8 0x00005555555eb4a8 in statime::clock_task::{async_fn#0} () at statime-linux/src/main.rs:129
#9 0x0000555555620a57 in tokio::runtime::task::core::{impl#6}::poll::{closure#0}<statime::clock_task::{async_fn_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> (ptr=0x555555c01330) at /home/teo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:328
#10 0x000055555561ebbb in tokio::loom::std::unsafe_cell::UnsafeCell<tokio::runtime::task::core::Stage<statime::clock_task::{async_fn_env#0}>>::with_mut<tokio::runtime::task::core::Stage<statime::clock_task::{async_fn_env#0}>, core::task::poll::Poll<()>, tokio::runtime::task::core::{impl#6}::poll::{closure_env#0}<statime::clock_task::{async_fn_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>> (self=0x555555c01330, f=...)
at /home/teo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/loom/std/unsafe_cell.rs:16
#11 tokio::runtime::task::core::Core<statime::clock_task::{async_fn_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>::poll<statime::clock_task::{async_fn_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> (self=0x555555c01320, cx=...)
at /home/teo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:317
#12 0x00005555555dc291 in tokio::runtime::task::harness::poll_future::{closure#0}<statime::clock_task::{async_fn_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> () at /home/teo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:485
#13 0x0000555555611b83 in core::panic::unwind_safe::{impl#23}::call_once<core::task::poll::Poll<()>, tokio::runtime::task::harness::poll_future::{closure_env#0}<statime::clock_task::{async_fn_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>> (self=...)
at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/core/src/panic/unwind_safe.rs:272
#14 0x00005555556433f5 in std::panicking::try::do_call<core::panic::unwind_safe::AssertUnwindSafe<tokio::runtime::task::harness::poll_future::{closure_env#0}<statime::clock_task::{async_fn_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>>, core::task::poll::Poll<()>> (data=0x7ffff7bfe708)
at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/panicking.rs:552
#15 0x0000555555646b1b in __rust_try ()
#16 0x000055555563e6b8 in std::panicking::try<core::task::poll::Poll<()>, core::panic::unwind_safe::AssertUnwindSafe<tokio::runtime::task::harness::poll_future::{closure_env#0}<statime::clock_task::{async_fn_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>>> (f=...)
at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/panicking.rs:516
#17 0x000055555562713a in std::panic::catch_unwind<core::panic::unwind_safe::AssertUnwindSafe<tokio::runtime::task::harness::poll_future::{closure_env#0}<statime::clock_task::{async_fn_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>>, core::task::poll::Poll<()>> (f=...)
at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/panic.rs:142
#18 0x00005555555d956e in tokio::runtime::task::harness::poll_future<statime::clock_task::{async_fn_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> (core=0x555555c01320, cx=...) at /home/teo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:473
#19 0x00005555555dd6aa in tokio::runtime::task::harness::Harness<statime::clock_task::{async_fn_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>::poll_inner<statime::clock_task::{async_fn_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> (self=0x7ffff7bfe920)
at /home/teo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:208
--Type <RET> for more, q to quit, c to continue without paging--q
Quit
(gdb) frame 4
#4 0x000055555566642f in statime_linux::clock::clock_timestamp_to_time (t=...) at statime-linux/src/clock/mod.rs:64
64 Time::from_nanos((t.seconds as u64) * 1_000_000_000 + (t.nanos as u64))
(gdb) print t
$1 = clock_steering::Timestamp {seconds: -1722521702, nanos: 761103809}
Due to an unrelated problem with bind_phc, the gdb log above comes from the last version before binding to a PHC index was introduced: 9292412. With the newest version and the bind-PHC commit reverted, the same thing happens on my machine. Disabling the hardware clock in the config file fixes it.
I've tried workarounds, but then arithmetic errors occur in the filters.
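For illustration, one way to avoid the overflow at the conversion boundary is to do the arithmetic in a wider signed type. The `Timestamp` struct here is a hypothetical stand-in mirroring the fields shown in the gdb output, not the actual `clock_steering` definition, and as noted above, downstream filters may still need to handle negative times:

```rust
// Hypothetical stand-in for clock_steering::Timestamp, with the
// fields seen in the gdb `print t` output above.
struct Timestamp {
    seconds: i64,
    nanos: u32,
}

// Doing the arithmetic in i128 keeps negative seconds intact and
// cannot overflow for any (i64, u32) input; the caller then decides
// how to treat pre-epoch values.
fn timestamp_to_nanos(t: &Timestamp) -> i128 {
    (t.seconds as i128) * 1_000_000_000 + (t.nanos as i128)
}

fn main() {
    let t = Timestamp { seconds: -1_722_521_702, nanos: 761_103_809 };
    assert_eq!(timestamp_to_nanos(&t), -1_722_521_701_238_896_191);
}
```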