Error Source: RUNTIME
Error Code: INVALID_STATE
Reason: Operator::getOutput failed for [operator: ValueStream, plan node ID: 0]: Error during calling Java code from native code: org.apache.gluten.memory.memtarget.ThrowOnOomMemoryTarget$OutOfMemoryException: Not enough spark off-heap execution memory. Acquired: 9.5 GiB, granted: 5.2 GiB. Try tweaking config option spark.memory.offHeap.size to get larger space to run this application.
Current config settings:
spark.gluten.memory.offHeap.size.in.bytes=40.0 GiB
spark.gluten.memory.task.offHeap.size.in.bytes=10.0 GiB
spark.gluten.memory.conservative.task.offHeap.size.in.bytes=5.0 GiB
Memory consumer stats:
Task.4: Current used bytes: 4.8 GiB, peak bytes: N/A
\- Gluten.Tree.1: Current used bytes: 4.8 GiB, peak bytes: 10.0 GiB
\- root.1: Current used bytes: 4.8 GiB, peak bytes: 10.0 GiB
+- ArrowContextInstance.1: Current used bytes: 4.8 GiB, peak bytes: 10.0 GiB
+- OverAcquire.DummyTarget.5: Current used bytes: 0.0 B, peak bytes: 0.0 B
+- WholeStageIterator.3: Current used bytes: 0.0 B, peak bytes: 0.0 B
| \- single: Current used bytes: 0.0 B, peak bytes: 0.0 B
| +- WholeStageIterator_default_leaf: Current used bytes: 0.0 B, peak bytes: 0.0 B
| \- task.Gluten_Stage_0_TID_4: Current used bytes: 0.0 B, peak bytes: 0.0 B
| +- node.2: Current used bytes: 0.0 B, peak bytes: 0.0 B
| | \- op.2.0.0.FilterProject: Current used bytes: 0.0 B, peak bytes: 0.0 B
| +- node.3: Current used bytes: 0.0 B, peak bytes: 0.0 B
| | \- op.3.0.0.Unnest: Current used bytes: 0.0 B, peak bytes: 0.0 B
| +- node.7: Current used bytes: 0.0 B, peak bytes: 0.0 B
| | \- op.7.0.0.FilterProject: Current used bytes: 0.0 B, peak bytes: 0.0 B
| +- node.6: Current used bytes: 0.0 B, peak bytes: 0.0 B
| | \- op.6.0.0.Unnest: Current used bytes: 0.0 B, peak bytes: 0.0 B
| +- node.5: Current used bytes: 0.0 B, peak bytes: 0.0 B
| | \- op.5.0.0.FilterProject: Current used bytes: 0.0 B, peak bytes: 0.0 B
| \- node.0: Current used bytes: 0.0 B, peak bytes: 0.0 B
| \- op.0.0.0.ValueStream: Current used bytes: 0.0 B, peak bytes: 0.0 B
+- OverAcquire.DummyTarget.1: Current used bytes: 0.0 B, peak bytes: 0.0 B
\- RowToColumnar.3: Current used bytes: 0.0 B, peak bytes: 0.0 B
\- single: Current used bytes: 0.0 B, peak bytes: 0.0 B
\- RowToColumnar_default_leaf: Current used bytes: 0.0 B, peak bytes: 0.0 B
at org.apache.gluten.memory.memtarget.ThrowOnOomMemoryTarget.borrow(ThrowOnOomMemoryTarget.java:90)
at org.apache.gluten.memory.arrowalloc.ManagedAllocationListener.onPreAllocation(ManagedAllocationListener.java:61)
at org.apache.gluten.shaded.org.apache.arrow.memory.BaseAllocator.buffer(BaseAllocator.java:300)
at org.apache.gluten.shaded.org.apache.arrow.memory.RootAllocator.buffer(RootAllocator.java:29)
at org.apache.gluten.shaded.org.apache.arrow.memory.BaseAllocator.buffer(BaseAllocator.java:280)
at org.apache.gluten.shaded.org.apache.arrow.memory.RootAllocator.buffer(RootAllocator.java:29)
at org.apache.gluten.execution.RowToVeloxColumnarExec$$anon$1.nativeConvert(RowToVeloxColumnarExec.scala:187)
at org.apache.gluten.execution.RowToVeloxColumnarExec$$anon$1.next(RowToVeloxColumnarExec.scala:226)
at org.apache.gluten.execution.RowToVeloxColumnarExec$$anon$1.next(RowToVeloxColumnarExec.scala:137)
at org.apache.gluten.utils.InvocationFlowProtection.next(Iterators.scala:154)
at org.apache.gluten.utils.IteratorCompleter.next(Iterators.scala:77)
at org.apache.gluten.utils.PayloadCloser.next(Iterators.scala:39)
at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:32)
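The error message itself suggests enlarging the off-heap pool. A minimal sketch of what that looks like with `spark-submit` — the 64g value is only an illustration, not a recommendation, and must fit within the executor's container memory:

```shell
# Sketch: enlarge the off-heap execution pool the error message points at.
# spark.memory.offHeap.enabled must be true for the size setting to take effect.
# 64g is an example value; size it to your executor's available memory.
spark-submit \
  --conf spark.memory.offHeap.enabled=true \
  --conf spark.memory.offHeap.size=64g \
  your-app.jar
```

Note that Gluten divides this pool among concurrent task slots (the `spark.gluten.memory.task.offHeap.size.in.bytes=10.0 GiB` line above), so the per-task budget grows only in proportion to the total.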
The root cause is a large batch size in the row-to-columnar (R2C) conversion after the Parquet scan. When each row is very large — for example, when many columns are scanned, or one column holds a very large complex datatype — a single batch can exceed the task's granted off-heap memory. The solution is to decrease the batch size.
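A sketch of the suggested fix, assuming the standard Gluten batch-size knob `spark.gluten.sql.columnar.maxBatchSize` (rows per columnar batch); the value 1024 is only an example, and halving repeatedly until the OOM disappears is a reasonable search strategy:

```shell
# Sketch: shrink the columnar batch size so each R2C batch allocates less
# off-heap memory at once. The default is 4096 rows; with very wide rows
# or large complex-typed columns, a smaller batch keeps per-batch
# allocations under the per-task off-heap budget.
spark-submit \
  --conf spark.gluten.sql.columnar.maxBatchSize=1024 \
  your-app.jar
```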