Sensor data processing - MetaWear CPRO

I'm running some tests with the MetaWear CPRO. For that purpose, I decided to put it inside a rugby ball, run some throws, and get a better feel for what can be achieved with this board.

However, I'm facing some challenges with the data processing. For example, in order to get the angle at which the ball is released, I need to know the moment of release, but a simple threshold on the magnitude of the acceleration vector doesn't work that well, since different throws can produce very different acceleration profiles.

What would be some of your suggestions for solving this kind of problem and getting even more useful information out of the sensor readings?
Do you have any recommendations for similar projects to follow, or articles/tutorials, that would help me get a better understanding?

Thanks in advance

Comments

  • What are you trying to achieve with the data processing? Depending on what you want, it may be better to gather the raw sensor data and post-process it with a more advanced data processing library such as numpy.
  • Right now, I'm trying to get things like angle of release and flight time. I already stream the raw sensor data and use it to estimate the orientation of the rugby ball. 

    However, I'm trying to figure out a way to estimate the start and end of the throwing motion and get a good estimate of the moment the ball is released (without using extra sensors, such as some kind of capacitive sensor).

    Btw, for this test I'm using C# (UWP).
  • Is anyone able to help? (Sorry for the bump)
  • Can you post some data of the different throws?
  • nubnub
    Here you have the accelerometer data for two different throws:
    [two linked plots of raw accelerometer data, one per throw]

    I'm wondering how to estimate the start and end of the throw.
    To the naked eye it's obvious where the throw starts and ends, but I don't really know the best approach for estimating those points.
    (Note: I'm looking for pointers on how to solve this problem, as well as ways to get other interesting quantities out of the sensor data. Examples of similar projects, or articles on the matter, would be a great way for me to learn how to solve this kind of problem.)

    Thanks in advance
  • Hi Nub,

    I took a look at the two images that you linked in your post. It seems that you might want to find a more consistent way to gather/identify your throws by collecting or filtering your data differently. While it is obvious to me with the "naked eye" that there are peaks that would indicate a throw, the pattern of the peaks seems quite different (to me at least). For example, in one throw the blue axis is very positive while in the other throw the blue axis is negative. This tells me that the graph we're looking at isn't the magnitude of the acceleration, but something else (perhaps the raw accelerometer data).

    You said earlier that using the magnitude of the data isn't good enough for estimating the release of the ball, but I think that with a slightly more advanced algorithm, you can easily achieve your desired estimation of start/stop. In fact, we've worked on a few projects here at MbientLab for our partners that do almost exactly what you described, but for different reasons.

    At the end of the day, this is essentially a "pattern matching" problem. I would recommend the following steps as a good starting place:

    - Filter the data in such a way that the throws seem as consistent as possible. For example, using RMS instead of raw X,Y,Z, or using absolute values instead of signed data.

    - Eliminate unimportant variations in the data, e.g. with a noise-reduction algorithm (a low-pass filter comes to mind).

    - Make sure you capture the data at the correct frequency. Some patterns might seem extremely inconsistent when captured at the wrong frequency. This doesn't simply mean you should capture the data at the highest frequency setting either: if the frequency is too low, you won't always catch the important events, but if it's too high, you'll be swamped by unimportant noise in the data.

    - Estimate a threshold that will likely catch most throws. It won't be perfect at first, but you should easily be able to detect 75-90% of throws.

    - Use that threshold to determine when throw events happen, then work backwards around those events to figure out the overall "pattern" of the event by looking at the samples around it (see the sketch after this list).
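
    Since you mentioned you're working in C# (UWP), here is a minimal sketch of those steps in plain C#, assuming you already have the raw accelerometer samples in memory as X/Y/Z arrays. The class and method names, the filter constant, the threshold, and the window size are all placeholders you would tune against your own throw data.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Minimal throw-detection sketch: magnitude -> low-pass -> threshold -> window.
        public static class ThrowDetector
        {
            // Per-sample magnitude of the acceleration vector, sqrt(x^2 + y^2 + z^2).
            public static double[] Magnitude(double[] x, double[] y, double[] z) =>
                x.Select((_, i) => Math.Sqrt(x[i] * x[i] + y[i] * y[i] + z[i] * z[i])).ToArray();

            // Simple first-order low-pass filter (exponential moving average) to
            // suppress high-frequency noise before thresholding.
            public static double[] LowPass(double[] input, double alpha = 0.2)
            {
                var output = new double[input.Length];
                if (input.Length == 0) return output;
                output[0] = input[0];
                for (int i = 1; i < input.Length; i++)
                    output[i] = alpha * input[i] + (1 - alpha) * output[i - 1];
                return output;
            }

            // Report a window of samples around each upward threshold crossing so the
            // full "pattern" of the throw can be inspected afterwards.
            public static IEnumerable<(int Start, int End)> FindEvents(
                double[] filtered, double threshold, int windowSamples)
            {
                bool above = false;
                for (int i = 0; i < filtered.Length; i++)
                {
                    if (!above && filtered[i] > threshold)
                    {
                        above = true;
                        yield return (Math.Max(0, i - windowSamples),
                                      Math.Min(filtered.Length - 1, i + windowSamples));
                    }
                    else if (above && filtered[i] < threshold)
                    {
                        above = false; // re-arm for the next throw
                    }
                }
            }
        }

    Within each returned window you can then search backwards and forwards from the crossing (e.g. until the magnitude settles back near 1 g) to estimate the actual start and end of the throwing motion.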

    Hope this helps,
    Yu
  • Thanks so much for the answer.

    As you deduced, those were plots of the raw data; however, I was using the RMS in order to try to detect each throw using thresholds.

    I actually ended up using an approach similar to the one you described before checking this thread again, even though right now I'm not using any kind of "pattern matching" - I will have a look into that.

    However, since then a new question has arisen. I have been using Madgwick's open-source IMU algorithm to estimate the orientation of the ball, but he states in his paper that the algorithm is intended for small/slow movements and that fast accelerations will increase its error. This might mean it is not the best approach for this project. Do you have any suggestions for open-source fusion libraries (C#) that would work well under these circumstances?

    I intend to use this orientation estimate to obtain things like the angle of attack at which the ball is released. This raises another question: would calculating this angle at the moment the "throw event" is detected be a good approach?
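
    (As a rough illustration of what I mean, and assuming my orientation filter gives me a unit quaternion (w, x, y, z) for each sample, I'm thinking of something like the helper below to extract a pitch angle at the detected release sample. Treating pitch as the "angle of attack" is just an assumption about how the board sits inside the ball.)

        using System;

        public static class ReleaseAngle
        {
            // Pitch term of the standard quaternion-to-Euler conversion, in degrees.
            // Which Euler angle corresponds to the ball's angle of attack depends on
            // how the sensor is mounted relative to the ball's long axis.
            public static double PitchDegrees(double w, double x, double y, double z)
            {
                double sinPitch = 2.0 * (w * y - z * x);
                sinPitch = Math.Max(-1.0, Math.Min(1.0, sinPitch)); // guard against rounding
                return Math.Asin(sinPitch) * 180.0 / Math.PI;
            }
        }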

    Thanks so much for all the help! Your board is great, and every time I "play" with it I get more ideas for different applications/projects I might want to try with it.

  • You should consider picking up a MetaMotion board, as it has a sensor fusion algorithm as part of the firmware. You can stream quaternions or Euler angles directly from the board without needing a third-party library.
  • I noticed that, but since the C PRO is the one I have now, I would like to keep "playing" with this one before investing in a new one.

    Thanks for the suggestion nonetheless
  • Hi Nub,

    Unfortunately, it is a known "difficult" problem to calculate the exact orientation of an object in rapidly changing motion. This is largely due to the relative nature of the data given to you by the IMU.

    For example, the accelerometer only gives us acceleration, not position (at least not directly). The gyroscope helps quite a bit (which is where the sensor fusion comes into play), but it is still only giving a rate of rotation (degrees per second). This means that in order to know the orientation at time t2, where t2 = sample 2, we first need to know the orientation at t1 = sample 1 and integrate the rate in between.

    Now add in the additional possibility of the ball moving forwards/backwards/up/down in addition to rotation, and the calculations become extremely complex and incur more and more error as long as the ball keeps making rapidly changing motions. Usually, the system can "reset" itself using gravity/compass as reference if the device sits in the same orientation for some time.
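
    To make the dependence on the previous sample concrete, here is a tiny illustration (plain C#, not MetaWear API code) of naively integrating gyroscope output into an orientation angle; any bias or noise in a sample is carried forward into every later sample, which is where the accumulating error comes from.

        // Naive dead-reckoning of a single rotation angle from gyroscope samples.
        // gyroDegPerSec[i] is the angular rate (deg/s) at sample i, dt is the sample period (s).
        public static double[] IntegrateGyro(double[] gyroDegPerSec, double dt)
        {
            var angleDeg = new double[gyroDegPerSec.Length];
            double angle = 0.0; // the orientation at t0 must already be known (or assumed)
            for (int i = 0; i < gyroDegPerSec.Length; i++)
            {
                // Each new orientation = previous orientation + integrated rate,
                // so measurement error accumulates at every step.
                angle += gyroDegPerSec[i] * dt;
                angleDeg[i] = angle;
            }
            return angleDeg;
        }

    Sensor fusion corrects this drift by using gravity (and the compass) as an absolute reference, which is why the estimate recovers once the ball sits still but degrades during rapidly changing motion.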

    Thanks,
    Yu
This discussion has been closed.