
KinematicInertialObserver and EncoderObserver Discrepancy with Noisy Encoder Data #461

Saeed-Mansouri opened this issue Feb 24, 2025 · 3 comments

@Saeed-Mansouri

Hello,
@gergondet @arntanguy @mehdi-benallegue

The KinematicInertialObserver updates the posW() and velW() of the realRobot, while the EncoderObserver updates the q and alpha of the realRobot. Within the EncoderObserver, there are three modes for updating alpha:

VelUpdate::Control
VelUpdate::EncoderFiniteDifferences
VelUpdate::EncoderVelocities

When VelUpdate::EncoderVelocities is selected, the realRobot's alpha is set from the measured joint velocities, whereas with VelUpdate::EncoderFiniteDifferences, alpha is computed by finite-differencing the encoder positions.

From my understanding, the KinematicInertialObserver always updates velW() by numerically differentiating posW(), which is itself reconstructed from the encoder values. This implies that, even when using VelUpdate::EncoderVelocities in the EncoderObserver, the velW() of the realRobot is still effectively obtained by finite-differencing encoder data.
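To make the concern concrete, here is a minimal sketch (plain numpy, not mc_rtc code) of how finite-differencing a position signal that carries encoder-level noise amplifies that noise by roughly 1/dt; the 1 kHz rate and noise amplitude are illustrative assumptions.

```python
import numpy as np

dt = 0.001                                         # assumed control period: 1 kHz
t = np.arange(0.0, 1.0, dt)
pos_true = 0.1 * np.sin(2.0 * np.pi * t)           # "true" base position [m]
pos_meas = pos_true + np.random.normal(0.0, 1e-5, t.size)  # encoder-level noise

vel_true = 0.2 * np.pi * np.cos(2.0 * np.pi * t)   # analytic derivative
vel_fd = np.diff(pos_meas) / dt                    # finite-difference estimate

print("position noise std:", np.std(pos_meas - pos_true))     # ~1e-5
print("velocity noise std:", np.std(vel_fd - vel_true[:-1]))  # ~sqrt(2)*1e-5/dt
```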

Since the encoder positions are noisy while the measured joint velocities are filtered, the realRobot's alpha ends up relatively smooth while its velW() becomes noisy. This discrepancy could lead to issues on the real system.

I would greatly appreciate any insights or features in mc_rtc that could help resolve this discrepancy.

Thank you!

@mehdi-benallegue
Member

I did not develop this estimator, but what I know about it (without re-checking the code) is that it relies on a "pendulum-like" model which, given an anchor point, allows the base position to be rebuilt from the orientation of the root link. This means that the encoders alone cannot provide this velocity: you also need the orientation and the angular velocity of the base, which is why this part is computed with finite differences.

I agree that we could exploit different sources to rebuild that velocity, especially since gyrometer signals are often available and may give a cleaner estimate of the angular velocity. But in general the orientation estimate is essentially an integral of that signal (at high frequencies), so differentiating it again should not introduce too much noise.

Do you have a specific example / robot where you see an issue?

We might also propose alternative estimators, such as the Tilt estimator, that could provide good performance in your use case.

@Saeed-Mansouri
Author

Thanks for your response. My main concern is about the linear velocity of the floating base, which is currently calculated using finite differences. I'm not as concerned with the angular velocity.

For context, consider a biped robot where we aim to stabilize the DCM. We need to compute the first derivative of the CoM, similar to how Dr. Caron approached it in LIPM_Walking_Controller. First, we calculate the CoM position, then obtain its first derivative.

In a dual-loop low-level control system, position feedback comes from a low-speed encoder (attached to the load side), which carries some noise (2-3 count variations), while velocity feedback comes from a high-speed encoder (attached to the actuator side), which is much smoother.

Even though the joint velocity data is relatively smooth, the linear velocity of the floating base is still computed by finite differences. This makes the first derivative of the CoM noisy and requires an additional filter, which introduces some delay.
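As an illustration of that filter/delay trade-off (this is not the filter used in the controller, just a generic first-order low-pass with assumed parameters):

```python
import numpy as np

dt, tau = 0.001, 0.02            # assumed: 1 kHz loop, 20 ms filter time constant
alpha = dt / (tau + dt)          # discrete first-order low-pass gain

t = np.arange(0.0, 1.0, dt)
vel_noisy = np.cos(2.0 * np.pi * t) + np.random.normal(0.0, 0.5, t.size)

vel_filt = np.zeros_like(vel_noisy)
for k in range(1, t.size):
    # y[k] = y[k-1] + alpha * (u[k] - y[k-1]): smooths the noise but lags the
    # underlying signal by roughly tau seconds (about tau/dt = 20 samples here)
    vel_filt[k] = vel_filt[k - 1] + alpha * (vel_noisy[k] - vel_filt[k - 1])
```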

This discrepancy between position and velocity feedback is what I'm trying to address.

Thanks again for your insights, and I look forward to any further suggestions or potential solutions.

@mehdi-benallegue
Member

mehdi-benallegue commented Feb 26, 2025

The floating-base velocity of a robot cannot be obtained directly from joint encoders alone because the base itself is not rigidly attached to any fixed reference. Instead, the estimation relies on an instantaneous anchor point that is assumed to be momentarily fixed in the world. This constraint allows us to infer the base’s motion, but it introduces a dependency on both the base’s orientation (from an IMU) and the anchor’s motion relative to the base (from encoders). Below, we derive the velocity expression step by step, showing how each term arises and why numerical differentiation is necessary.

Derivation of the equations

(I did not check the code again to confirm that the equations are exactly the same; this explanation only describes the principles.)

To estimate the position of the floating base, we rely on an instantaneous anchor point. This is a point on the robot that is in contact with the environment and assumed to be fixed in the world frame at a given instant. We define:

  • World Frame (W): A fixed global reference frame.
  • Base Frame (B): The robot’s floating base frame. Its position in the world is ${}^{W}p^{B}(t)$ and its orientation is ${}^{W}R^{B}(t)$.
  • Anchor Point (A): A point on the robot (e.g., a foot in contact with the ground). At each instant, the chosen anchor is assumed to have zero velocity in the world.

$$ \dot{{}^{W}p^{A}}(t) = \mathbf{0}. $$

However, the specific point chosen as the anchor may change over time.

  • Anchor Position in the Base Frame ${}^{B}p^{A}(t)$: The position of the anchor relative to the base, obtained from the joint angles $\boldsymbol{q}(t)$ via forward kinematics.
  • IMU Data: Provides the orientation ${}^{W}R^{B}(t)$ (and angular velocity) of the base.

Because the anchor is part of the robot, its world position can be expressed as:

$$ {}^{W}p^{A}(t) = {}^{W}p^{B}(t) + {}^{W}R^{B}(t) {}^{B}p^{A}(t). $$

This equation states that the world position of the anchor is obtained by taking the position of the floating base and adding the transformed position of the anchor from the base frame into the world frame.
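As a small numeric illustration of this frame composition (all values are arbitrary placeholders):

```python
import numpy as np

W_p_B = np.array([0.0, 0.0, 0.8])                     # base position in the world [m]
yaw = np.deg2rad(30.0)
W_R_B = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                  [np.sin(yaw),  np.cos(yaw), 0.0],
                  [0.0,          0.0,         1.0]])   # base orientation (yaw only)
B_p_A = np.array([0.1, -0.1, -0.8])                   # anchor (foot) position in the base frame

# {}^W p^A = {}^W p^B + {}^W R^B {}^B p^A
W_p_A = W_p_B + W_R_B @ B_p_A
print(W_p_A)
```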

Since the anchor is assumed to be instantaneously fixed in the world, its position does not change:

$$ \dot{{}^{W}p^{A}}(t) = \mathbf{0}. $$

Differentiating the previous equation gives:

$$ \dot{{}^{W}p^{B}}(t) + \frac{d}{dt} \left( {}^{W}R^{B}(t) {}^{B}p^{A}(t) \right) = 0. $$

Rearranging for $\dot{{}^{W}p^{B}}(t)$, we obtain:

$$ \dot{{}^{W}p^{B}}(t) = -\frac{d}{dt} \Bigl[ {}^{W}R^{B}(t) {}^{B}p^{A}(t) \Bigr]. $$

Thus, the base’s velocity is:

$$ \dot{{}^{W}p^{B}}(t) = -\Bigl[ \dot{{}^{W}R^{B}}(t) {}^{B}p^{A}(t) + {}^{W}R^{B}(t) \dot{{}^{B}p^{A}}(t) \Bigr]. $$

Breaking Down Each Term

  1. Rotation-Derivative Term:

    The derivative of the rotation matrix is given by:

$$ \dot{{}^{W}R^{B}}(t) = {}^{W}R^{B}(t) [\boldsymbol{\omega}^{B}(t)]_\times, $$

where:

  • $\boldsymbol{\omega}^{B}(t)$ is the angular velocity of the base (expressed in the base frame).
  • $[\boldsymbol{\omega}^{B}(t)]_\times$ is the skew-symmetric matrix corresponding to $\boldsymbol{\omega}^{B}(t)$.
  2. Joint-Derivative Term:

    The term $\dot{{}^{B}p^{A}}(t)$ is the time derivative of the anchor’s position in the base frame. This derivative comes from the joint velocities $\dot{\boldsymbol{q}}(t)$ computed via forward kinematics. Even if the joint velocities are smooth, note that:

    • ${}^{B}p^{A}(t)$ is computed from the joint angles and may change abruptly when a different point is selected as the anchor.
    • Such changes can introduce additional noise when $\dot{{}^{B}p^{A}}(t)$ is computed.

Final Expression

We can factor out the rotation matrix to obtain a compact form:

$$ \dot{{}^{W}p^{B}}(t) = - {}^{W}R^{B}(t) \Bigl( [\boldsymbol{\omega}^{B}(t)]_\times {}^{B}p^{A}(t) + \dot{{}^{B}p^{A}}(t) \Bigr). $$
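A minimal numeric sketch of this final expression (not the actual KinematicInertialObserver implementation; all values are placeholders):

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]_x, so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

W_R_B = np.eye(3)                        # base orientation, from the IMU
omega_B = np.array([0.0, 0.05, 0.0])     # base angular velocity, base frame [rad/s]
B_p_A = np.array([0.1, -0.1, -0.8])      # anchor position in the base frame [m]
B_p_A_dot = np.array([0.0, 0.0, 0.01])   # its time derivative, from joint velocities
                                         # (or finite differences of B_p_A)

# dot{W_p_B} = - W_R_B ( [omega_B]_x B_p_A + dot{B_p_A} )
W_v_B = -W_R_B @ (skew(omega_B) @ B_p_A + B_p_A_dot)
print(W_v_B)
```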

Interpretation of the Terms

  • $[\boldsymbol{\omega}^{B}(t)]_\times {}^{B}p^{A}(t)$:
    This term reflects the effect of the base’s rotation on the anchor’s position in the world frame. Even if the anchor is instantaneously fixed, any rotation of the base will change the world-frame location of that anchor.

  • $\dot{{}^{B}p^{A}}(t)$:
    This term represents the change in the anchor’s position in the base frame due to the robot’s joint motions. It is computed from the encoder measurements. However, since the instantaneous anchor can switch from one moment to the next, this term might not be smooth.

Conclusion

Even with smooth joint velocities, the global linear velocity of the base, $\dot{{}^{W}p^{B}}(t)$, is obtained by combining two components:

  1. The effect of the base’s angular velocity, captured by $[\boldsymbol{\omega}^{B}(t)]_\times {}^{B}p^{A}(t)$.
  2. The time variation of the anchor’s position in the base frame, $\dot{{}^{B}p^{A}}(t)$.

The need to differentiate the reconstructed base position—and the fact that the instantaneous anchor point may change over time—introduces noise into the linear velocity estimate. This explains why, even if joint velocity data is smooth, the estimated floating-base velocity in the world frame can be noisier.
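To illustrate the anchor-switching effect numerically (a toy example, not taken from the code): a step change in ${}^{B}p^{A}$ at the instant the anchor moves from one foot to the other turns into a large one-sample spike once the term is finite-differenced.

```python
import numpy as np

dt = 0.001
B_p_A_x = np.full(200, 0.10)       # anchor x-offset in the base frame [m]
B_p_A_x[100:] = -0.10              # anchor switches to the other foot at sample 100

B_p_A_x_dot = np.diff(B_p_A_x) / dt
print(B_p_A_x_dot.min())           # -200 m/s for one sample: a large spike in the
                                   # reconstructed base velocity
```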

Tilt estimator

I discussed with colleagues about releasing an external-user-friendly version of the Tilt estimator; it is already working but requires some cleanup and documentation. This observer provides a Lyapunov-stable estimation of the tilt and the velocity, as explained in this paper: https://arxiv.org/abs/2010.04401
