I wonder how VCD would perform on the LLaVA suite of benchmarks that are not focused on hallucination, e.g., GQA, ScienceQA, TextVQA, etc. Would it incur a performance hit on these benchmarks because the VLM relies less on its language prior?
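For context on the "less language prior" point: as I understand the VCD paper, the method contrasts next-token logits conditioned on the original image against logits conditioned on a distorted (e.g., noise-corrupted) image, so tokens favored regardless of visual evidence get suppressed. A minimal sketch (function and variable names are hypothetical, not from the actual codebase):

```python
import numpy as np

def vcd_logits(logits_original, logits_distorted, alpha=1.0):
    """Contrastive adjustment of next-token logits, following the
    formulation in the VCD paper: (1 + alpha) * logits(y | v, x)
    - alpha * logits(y | v', x), where v' is a distorted image.
    Tokens driven mostly by the language prior score similarly on
    both images, so the subtraction down-weights them."""
    orig = np.asarray(logits_original, dtype=float)
    dist = np.asarray(logits_distorted, dtype=float)
    return (1 + alpha) * orig - alpha * dist

# Toy example: token 0 scores high even on the distorted image
# (prior-driven), token 1 is supported only by the original image.
orig = np.array([2.0, 1.0])
dist = np.array([2.0, 0.0])
print(vcd_logits(orig, dist, alpha=1.0))  # [2. 2.] -- the gap closes
```

So a benchmark answer that the LVLM could get right from the prior alone might indeed lose some of that shortcut, which is what my question is getting at.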
Why use MME to evaluate hallucination, in contrast to other VQA benchmarks?
Actually, MME is a general benchmark like GQA that evaluates LVLMs across different categories. Our results on MME suggest that VCD benefits perception-related general VQA, but not reasoning-related tasks.
For evaluating hallucinations, our main benchmark is POPE. However, since POPE only covers object hallucinations, we additionally adopt several MME subsets that evaluate LVLMs' perception capabilities in counting and relational recognition, as a supplement for broader hallucination evaluation.
Thanks for the awesome project!
I have a few questions: