Visualizer - Advanced features for researchers

Hi

The current visualizer included in the Sensel app is indeed very attractive as a demo, but it could be made much more useful for designers and researchers if it supported features like these:

  1. Directly visualize the data returned by the API, such as contact areas, bounding boxes, ellipses, etc. (a rough sketch follows after this list)

  2. Toggle the 3D perspective view to a flat 2D view, since the perspective visually distorts the displayed information and prevents accurate measurements

  3. Allow capturing or exporting video segments at a high frame rate, so that pressure events can be observed and studied in detail.

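For point 1 (and the flat 2D view of point 2), something like the following is what I have in mind. This is only a rough sketch assuming the Python bindings from Sensel’s sensel-api repository; the function and field names follow its example scripts, so double-check them against your version of the bindings:

```python
# Rough sketch: render one frame of API contact geometry in a flat 2D view.
# Assumes the Python bindings shipped in Sensel's sensel-api repository;
# verify the function/field names against the example scripts in your copy.
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse, Rectangle
import sensel

(err, device_list) = sensel.getDeviceList()
(err, handle) = sensel.openDeviceByID(device_list.devices[0].idx)
(err, info) = sensel.getSensorInfo(handle)      # sensor width/height in mm
sensel.setFrameContent(handle, sensel.FRAME_CONTENT_CONTACTS_MASK)
(err, frame) = sensel.allocateFrameData(handle)
sensel.startScanning(handle)

# Poll until a frame with at least one contact arrives
while True:
    sensel.readSensor(handle)
    (err, n) = sensel.getNumAvailableFrames(handle)
    for _ in range(n):
        sensel.getFrame(handle, frame)
    if frame.n_contacts > 0:
        break

fig, ax = plt.subplots()
ax.set_aspect("equal")                          # no perspective distortion
ax.set_xlim(0, info.width)
ax.set_ylim(info.height, 0)                     # y down, like the sensor
for i in range(frame.n_contacts):
    c = frame.contacts[i]
    # Bounding box exactly as reported by the API
    ax.add_patch(Rectangle((c.min_x, c.min_y),
                           c.max_x - c.min_x, c.max_y - c.min_y,
                           fill=False, linestyle="--"))
    # Contact ellipse from the reported axes and orientation
    ax.add_patch(Ellipse((c.x_pos, c.y_pos), c.major_axis, c.minor_axis,
                         angle=c.orientation, fill=False))
plt.show()

sensel.stopScanning(handle)
sensel.freeFrameData(handle, frame)
sensel.close(handle)
```

With an equal-aspect 2D axes you can take measurements directly off the plot, which is exactly what the perspective view prevents.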
I might be mistaken, but I think I saw in some demo videos that Sensel has a more advanced version of the visualizer. I don’t know exactly which features it supports, but it would be great if it could be made available.

Or if anybody has written a more advanced version of the visualizer, it would be great if they could share it.


Hello, the visualizer we have in the SenselApp is the only one we are distributing at the moment. If that changes, we will definitely let the community know.

As you said, the API gives you this information, so some customers have made interesting open-source visualizers that might give you a starting point.

Hi Alex. Thanks for the links. I did check everything on GitHub before posting, so I was aware of them. They don’t really provide the kind of features needed for detailed interaction research on the Morph. I hope you guys consider adding some of them in future releases of the visualizer.

Hi Alex

Now that I have a bit more time, let me explain what I’m doing and what I need, in case anyone is working along the same lines and can share what they’ve done.

I am currently researching early gesture detection, i.e., determining within the first 50 ms whether a contact is part of a gesture and, if so, of which type. In such a short time span the contacts are usually not stable, and a tap contact can easily be confused with a swipe contact. Geometric analysis alone is sometimes inconclusive.
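To make this concrete, here is roughly the kind of early decision I mean. It is purely illustrative: the 50 ms window, the 3 mm displacement threshold, and the sample format are made-up assumptions, not a real detector:

```python
# Illustrative only: a crude first-50ms tap-vs-swipe decision with
# made-up thresholds. Real contacts are noisier than this suggests.
from math import hypot

def classify_early(samples, window_ms=50.0, move_mm=3.0):
    """samples: time-ordered list of (t_ms, x_mm, y_mm) for one contact.
    Classifies by net displacement within the first window_ms."""
    t0, x0, y0 = samples[0]
    early = [s for s in samples if s[0] - t0 <= window_ms]
    if len(early) < 2:
        return "undecided"
    _, x1, y1 = early[-1]
    return "swipe" if hypot(x1 - x0, y1 - y0) >= move_mm else "tap"
```

The hard part is precisely that within 50 ms the wobble of an unstable tap can look like the start of a slow swipe, which is why I need recorded data from many people to tune such thresholds.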

Right now I need to record and test multiple gestures performed by different people of all ages, so I can better define the detection algorithms and later use that same data to validate their output. To make this job easier I would need to record the API data stream and play it back visually in slow motion, so to speak.
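Conceptually the tooling could be as simple as the sketch below: log timestamped contact data while scanning (using the same sensel loop as in my earlier post), then replay the log with the inter-frame delays stretched. The contact fields, the JSON format, and the slowdown factor are all just assumptions for illustration:

```python
# Sketch: record timestamped API contacts, then replay them in slow motion.
# The fields logged here (id, position, force) are just an example subset.
import json
import time

def record_frame(log, frame):
    """Append one scanned frame (from the sensel loop) to an in-memory log."""
    contacts = [{"id": c.id, "x": c.x_pos, "y": c.y_pos, "force": c.total_force}
                for c in (frame.contacts[i] for i in range(frame.n_contacts))]
    log.append((time.time(), contacts))

def save_log(log, path):
    with open(path, "w") as f:
        json.dump(log, f)

def replay(path, slowdown=10.0):
    """Step through a recording at 1/slowdown speed, e.g. 10x slow motion."""
    with open(path) as f:
        log = json.load(f)
    for (t, contacts), (t_next, _) in zip(log, log[1:]):
        print(f"{t:.4f}s  {len(contacts)} contact(s): {contacts}")
        time.sleep((t_next - t) * slowdown)  # stretch the real inter-frame gap
    if log:  # the zip above stops one short, so show the final frame too
        t, contacts = log[-1]
        print(f"{t:.4f}s  {len(contacts)} contact(s): {contacts}")
```

Swapping the print for a 2D plot like the one above would give the slow-motion visual playback I described. If anyone has already built something like this, I would love to compare notes.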