We're (@frio80 and I) trying to use Hardy for visual regression testing on a JS/CSS UI library.
Hardy gets us part of the way there by providing a tool that will 'fail' on a visual difference, but we haven't figured out how to create 'reports' on what changed.
The use case is developers iteratively building locally, and us giving them more information about what their changes affect. For example, if they visually change Component X, let's let them know if it also changes Components Y and Z. They may have expected that change, or they may not have.
It seems we'd need a couple of features to make this happen:
1. Add the ability to create a visual diff image in Hardy. (I've experimented with this in my dwabyick fork.)
2. Create a report of all failed tests from Hardy. This may just be a general Cucumber runner question; I'm new to Cucumber.
A third 'maybe' feature would be more explicit support in Hardy for creating a baseline. It's currently an implicit thing that happens the first time you run Hardy; making it explicit could make this use case more intuitive.
Are these use-cases interesting to you? Happy to help as I can.
This definitely sounds like a good avenue to explore. I started wondering about reporters just this weekend (actually after discovering the mocha nyan reporter). Along with the plugin API ideas, there's a lot to think about.
I'll start off with your ImageMagick fork and see if there's a simple way to specify the reporter in Cucumber.
What about continuing tests after failures? From what I've read, it seems you have to write a somewhat contrived Cucumber test to make that happen. Is that right?
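One generic way to avoid the contrived test is a "soft failure" pattern in the step definitions: record each visual mismatch instead of throwing immediately, then fail once at the end with a summary. This is a sketch of that pattern, not Hardy's built-in behaviour, and the helper names are hypothetical.

```javascript
// Sketch: keep a run going past a visual mismatch by recording the
// failure instead of throwing. A Cucumber After hook could then call
// summarizeFailures() and fail the scenario once with the full list,
// so one run reports Y and Z, not just the first diff it hits.
var visualFailures = [];

function recordVisualFailure(selector, changedPixels) {
  visualFailures.push({ selector: selector, changedPixels: changedPixels });
}

function summarizeFailures() {
  if (visualFailures.length === 0) { return null; }
  return visualFailures.map(function (f) {
    return f.selector + ' changed (' + f.changedPixels + ' px)';
  }).join('; ');
}
```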
I also wonder if it's possible to get the failed selector data into the JSON output. It seems possible via the callback; I'll look into that.
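Once the JSON formatter output exists (e.g. from `cucumber-js --format json > report.json`), pulling the failed steps out for a report is straightforward. The feature → elements → steps nesting below is Cucumber's standard JSON report shape; anything about selectors appearing in the step name is an assumption about how Hardy phrases its steps.

```javascript
// Sketch: extract failed steps from a Cucumber JSON report so a
// developer-facing summary can list which component checks changed.
function failedSteps(report) {
  var failures = [];
  report.forEach(function (feature) {
    (feature.elements || []).forEach(function (scenario) {
      (scenario.steps || []).forEach(function (step) {
        if (step.result && step.result.status === 'failed') {
          failures.push({
            scenario: scenario.name,
            step: step.name,
            error: step.result.error_message
          });
        }
      });
    });
  });
  return failures;
}
```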