This feature request is a follow-up to the initial discussion (which focused on correctness) started by @EgorBu in src-d/sourced-ce#165
Engine already handles Bblfsh errors well. Now the idea is to somehow expose Bblfsh UAST extraction exceptions to the Engine users.
This will help both:
Context
Even at the moderate scale of PGA, there are always many corner cases in the data that result in errors during UAST extraction. As a user of Engine on a cluster of machines, it would be nice to be able to identify and access all of those cases in one place (without grepping through the logs of every executor machine), group them by case, analyze them, etc.
Right now bblfsh errors are handled and reported well by Engine, but only in the stderr of the executors. So on a Spark cluster with many machines it is quite inconvenient to grep through all executor logs in order to first find your job and then all the bblfsh errors.
Proposal
As Engine uses the DataFrame API, it is not possible to leverage Accumulators for building error summaries the way one could with the RDD API.
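For reference, this is roughly the RDD-style accumulator pattern that is being ruled out; everything here (the files RDD, the fake extractUAST call, the error categories) is a hypothetical placeholder, not Engine code:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("uast-errors").master("local[*]").getOrCreate()
val sc = spark.sparkContext

// One accumulator per error category, incremented from inside the extraction closure.
val parseErrors = sc.longAccumulator("bblfsh parse errors")
val otherErrors = sc.longAccumulator("bblfsh other errors")

// Hypothetical stand-ins: a tiny (path, content) RDD and a fake extraction call.
val files = sc.parallelize(Seq(("ok.py", "def f(): pass"), ("bad.py", "def (:")))
def extractUAST(path: String, content: String): Array[Byte] =
  if (path.endsWith("bad.py")) throw new RuntimeException("bblfsh: parse error")
  else Array.emptyByteArray

val uasts = files.map { case (path, content) =>
  try Some(extractUAST(path, content))
  catch {
    case _: RuntimeException => parseErrors.add(1); None
    case _: Exception        => otherErrors.add(1); None
  }
}
uasts.count() // accumulators are only populated once an action runs
println(s"parse errors: ${parseErrors.value}, other: ${otherErrors.value}")
```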
Thus I can see only a few options left:
make extractUASTs() add one extra column, exceptions: Seq[String] (a rough sketch of this follows below)
somehow incorporate a summary of errors using Metrics (count of exceptions by type)
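Not actual Engine API — just a minimal sketch of what the first option could look like from the user's side, assuming extractUASTs() (or a variant of it) started emitting such an exceptions: Seq[String] column next to the UAST (the column names are illustrative):

```scala
import org.apache.spark.sql.functions._
import tech.sourced.engine._ // Engine implicits that provide extractUASTs() on DataFrames

// `filesDf` stands for the usual blobs DataFrame; the `exceptions` column is the
// hypothetical addition holding the class name / message of every bblfsh error for that row.
val extracted = filesDf.extractUASTs()

// All failed extractions in one place, instead of scattered across executor stderr.
val failed = extracted
  .filter(size(col("exceptions")) > 0)
  .select("repository_id", "path", "exceptions")

failed.show(20, truncate = false)
```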
The first approach has the benefit of covering all the cases: the user can aggregate exceptions by type themselves and then drill down to the particular cases. The second one is less invasive, but it covers only the single case of getting overall dataset statistics.
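For instance, with such an exceptions column in place, the overall statistics that the second option is after could still be recovered with a plain aggregation, and then drilled down into (again assuming the hypothetical column from the sketch above):

```scala
import org.apache.spark.sql.functions._

// Count occurrences of each exception type across the whole dataset...
val errorSummary = extracted
  .select(explode(col("exceptions")).as("exception"))
  .groupBy("exception")
  .count()
  .orderBy(desc("count"))
errorSummary.show(truncate = false)

// ...and drill down to the concrete files behind one of them.
extracted
  .filter(array_contains(col("exceptions"), "SOME_EXCEPTION_FROM_THE_SUMMARY"))
  .select("repository_id", "path")
  .show(truncate = false)
```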