Graph Interactivity #55
Hey, some basic ideas:
I think this should depend on what sort of interaction we have in mind. If we take simple hover events, it makes sense to specify this on the Mark itself, while something like brushing seems to belong more to the Section component.
I was playing with the idea of having 'emitters' and 'listeners', which communicate through 'channels'. In these examples, the emitter broadcasts data on a channel, and the listeners then take this data and perform actions based on it, such as coloring the hovered mark.
So in this example you only need to know the index of the point that we hover over, and highlight the point with the same index. There are possibly easier ways to do this, but I think this method scales pretty nicely when you are dealing with multiple plots.
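The cross-plot highlighting described above could be sketched roughly like this. This is a minimal illustration of the emitter/listener/channel idea, not vue-gg code; all names (`Channel`, `emit`, `listen`) are hypothetical:

```javascript
// Minimal sketch of the proposed emitter/listener 'channel' idea.
// Names are hypothetical, not vue-gg API.
class Channel {
  constructor() { this.listeners = [] }
  listen(fn) { this.listeners.push(fn) }
  emit(payload) { this.listeners.forEach(fn => fn(payload)) }
}

// Two plots share one channel: hovering point i in one plot
// highlights the point with the same index in the other.
const hoverChannel = new Channel()

const plotA = { highlighted: null }
const plotB = { highlighted: null }

// Each plot listens on the channel and highlights the same index.
hoverChannel.listen(({ index }) => { plotA.highlighted = index })
hoverChannel.listen(({ index }) => { plotB.highlighted = index })

// A mark in plot A is hovered: it emits its index on the channel.
hoverChannel.emit({ index: 3 })

console.log(plotA.highlighted, plotB.highlighted) // 3 3
```

The key point is that emitters and listeners never reference each other directly, only the channel, which is what lets this scale to many plots.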
So what happens here is that hovering a point emits its index on the channel, and each listening plot highlights the point at that index. This really gives a lot of flexibility and allows for pretty advanced interactions, I think.
Ate and I ran into similar issues on different projects: adding event listeners on DOM nodes indeed isn't scalable for large numbers of marks. A better way is to use a spatial index like rbush and use that to perform collision detection. Brushing shouldn't be too much of a problem, as long as we debounce/throttle everything. Another potential bottleneck might be the updating of Marks/other elements on interaction. If we need to re-render everything on every interaction, performance will depend on how fast the virtual DOM is, but I think a more performant and scalable solution is to manually update the attributes of the elements affected by the interaction. There are probably a few options to do this; it might be a matter of experimenting.
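To make the spatial-index idea concrete, here is a toy stand-in for a library like rbush (which uses an R-tree rather than the uniform grid sketched here). A single listener on the root element does one cheap lookup per event instead of attaching a listener to every mark; everything here is illustrative, not vue-gg code:

```javascript
// Toy stand-in for a spatial index like rbush: bucket points into a
// uniform grid so a hover lookup touches only one cell instead of
// iterating over (or attaching listeners to) every mark.
class GridIndex {
  constructor(cellSize) {
    this.cellSize = cellSize
    this.cells = new Map()
  }
  key(x, y) {
    return `${Math.floor(x / this.cellSize)},${Math.floor(y / this.cellSize)}`
  }
  insert(point) { // point: { x, y, index }
    const k = this.key(point.x, point.y)
    if (!this.cells.has(k)) this.cells.set(k, [])
    this.cells.get(k).push(point)
  }
  // Find a point near the cursor, within a small radius.
  // (For simplicity this only checks the cursor's own grid cell;
  // a real index like rbush handles boundaries properly.)
  hitTest(x, y, radius = 5) {
    const candidates = this.cells.get(this.key(x, y)) || []
    return candidates.find(p => Math.hypot(p.x - x, p.y - y) <= radius) || null
  }
}

const index = new GridIndex(50)
const points = [{ x: 10, y: 10, index: 0 }, { x: 120, y: 80, index: 1 }]
points.forEach(p => index.insert(p))

console.log(index.hitTest(12, 11).index) // 0
console.log(index.hitTest(300, 300))     // null
```

A single `mousemove` handler on the root SVG would call `hitTest` (throttled), and only the returned mark gets updated.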
I think the setup described above is flexible enough to do things like this. We might, for example, allow emitting data from the components themselves.
This should be covered by this setup.
This can also be used to improve performance: instead of modifying DOM nodes manually, you can just draw another node on top. The downside is that this way you cannot (for example) modify the opacity of a node. But both modifying marks and drawing other marks on top should be supported! What does everyone think about this syntax? Can it be simpler? Am I missing out on some more complex cases?
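The 'draw another node on top' strategy could look something like the following sketch. Plain objects stand in for rendered marks, and all names are hypothetical, just to show that base marks stay untouched while only a small overlay layer changes:

```javascript
// Sketch of the 'draw another node on top' strategy: instead of
// mutating the hovered mark, render an extra highlight mark above it.
// Plain data structures stand in for rendered marks.
function render(points, hoveredIndex) {
  const base = points.map(p => ({ ...p, layer: 'marks' }))
  const overlay = hoveredIndex == null
    ? []
    : [{ ...points[hoveredIndex], layer: 'highlight', fill: 'red' }]
  // The base marks are never modified, so no per-mark DOM updates;
  // only the (tiny) overlay layer changes on interaction.
  return [...base, ...overlay]
}

const points = [{ x: 1, y: 2 }, { x: 3, y: 4 }]
console.log(render(points, 1).length)    // 3: two base marks + one highlight
console.log(render(points, null).length) // 2: no highlight
```

This also shows the limitation mentioned above: since the original mark is still drawn underneath, you can overpaint it but not, say, lower its opacity.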
Hmm. Actually, if we are just going to use the virtual DOM, we might not even need the whole 'channel' setup. Then of course there is some magic going on behind the scenes.
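The channel-free alternative would put hover state in one shared object and let marks derive their appearance from it on each render. A minimal sketch, with hypothetical names (in a real Vue component the `markProps` logic would live in a template binding):

```javascript
// Sketch of the 'just use the virtual DOM' alternative: hover state
// lives in a shared object, and marks compute their appearance from it,
// so no channel machinery is needed.
const state = { hovered: null }

// Pure function standing in for a template binding such as
// :fill="hovered === i ? 'red' : 'blue'".
function markProps(i) {
  return { fill: state.hovered === i ? 'red' : 'blue' }
}

// A native mouseover handler on mark i would simply do:
function onMouseover(i) { state.hovered = i }

onMouseover(2)
console.log(markProps(2).fill, markProps(0).fill) // red blue
```

The 'magic' is then just the framework's reactivity: changing `state.hovered` triggers a re-render, and every mark picks up its new props.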
I quite like this syntax. It's simple and elegant but can be extended quite easily. We could even just start with native events. Also, in this way, handling selections, for example, would almost be more a matter of setting up a convenient pattern/convention than of building additional functionality (although we could discuss abstracting this away in, say, the data container ultimately).
I have a question about how this extends to events like the dragging of individual marks. How might we pass along information about the cursor's position for the mark to be updated? Since any event listeners we attach will return the cursor's position only in screen coordinates, do we need to first scale them to local coordinates?
@johsi-k yes, that is probably what we will need to do. We would just attach the listener to the root SVG element and use a 'screen pixel to local coordinate' function, which would be the inverse of the scaling transformation. If we only need to consider collision detection, we could also do it another way: build up the spatial index using screen coordinates, with the corresponding data values stored in the spatial index and returned only on collision detection. But this wouldn't really work with brushing, so I think your approach is probably the best.
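The 'screen pixel to local coordinate' inversion can be sketched for the simple linear case. The domain and range here are made up for illustration, and this mirrors how scale libraries like d3-scale expose an `invert`:

```javascript
// Sketch of a 'screen pixel -> local coordinate' inversion for a linear
// scale: the inverse of the function that mapped data values to pixels.
function linearScale([d0, d1], [r0, r1]) {
  const scale = v => r0 + ((v - d0) / (d1 - d0)) * (r1 - r0)
  // Inverse mapping: screen pixels back to data values.
  scale.invert = px => d0 + ((px - r0) / (r1 - r0)) * (d1 - d0)
  return scale
}

// Data domain [0, 100] drawn into a 500px-wide plot area.
const x = linearScale([0, 100], [0, 500])
console.log(x(50))         // 250
console.log(x.invert(250)) // 50

// Dragging a mark: take the cursor's screen position from the event
// and invert it to get the new data value.
function dragTo(cursorPx) { return x.invert(cursorPx) }
console.log(dragTo(125)) // 25
```

For nested Sections, the same idea applies per coordinate system: subtract the section's offset, then invert that section's scales.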
We can continue this discussion in #80
As a summary of some preliminary discussion on interactivity that has been taking place:

How do we handle selections/listen for events?

- Attaching event listeners to every mark may not scale (e.g. 1000 data points would work out to at least 1000 event listeners per vgg-point)
- Baked-in basic interactivity (i.e. instead of changing the dataset to change the graph, is it possible to add the inverse of this relationship, such as changing x/y values in a dataset by dragging a point on a graph?)
- Coordinating selection across faceted graphs/multiple graphs, such as in a scatterplot matrix
- Selected elements can be re-rendered on a separate layer to account for situations where a mark is hidden behind other marks
- Look at how Vega does it: Signals and Event Streams would be interesting to integrate into Vue-gg