Consolidation of shuffle files from different map tasks #635
@@ -287,6 +287,7 @@ private[spark] class BlockManager(
   * never deletes (recent) items.
   */
  def getLocalFromDisk(blockId: String, serializer: Serializer): Option[Iterator[Any]] = {
+   shuffleBlockManager.closeBlock(blockId)
    diskStore.getValues(blockId, serializer).orElse(
      sys.error("Block " + blockId + " not found on disk, though it should be"))
  }

Review comment: Can you add a comment to the ShuffleBlockManager explaining that the blocks are closed here?
@@ -382,6 +383,8 @@ private[spark] class BlockManager(
    // As an optimization for map output fetches, if the block is for a shuffle, return it
    // without acquiring a lock; the disk store never deletes (recent) items so this should work
    if (ShuffleBlockManager.isShuffle(blockId)) {
+     // Close the shuffle writers for blockId
+     shuffleBlockManager.closeBlock(blockId)
      return diskStore.getBytes(blockId) match {
        case Some(bytes) =>
          Some(bytes)

Review comment: Do shuffle blocks still walk through this code path? If not, I would just throw an exception here if it is a shuffle block.

Reply: The remote shuffle blocks still go here.
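Both call sites above close the block's shuffle writers before reading it from disk, so closeBlock has to be safe to call repeatedly from concurrent read paths. A minimal sketch of that idempotent-close pattern, with hypothetical stand-in names (ToyWriter, ToyShuffleBlockManager are not Spark's actual API):

```scala
import java.util.concurrent.ConcurrentHashMap

// Hypothetical stand-in for the per-block writer state.
class ToyWriter {
  @volatile var closed = false
  def close(): Unit = { closed = true }
}

// Hypothetical stand-in for ShuffleBlockManager: read paths may call
// closeBlock any number of times; only the first call does real work.
class ToyShuffleBlockManager {
  private val writers = new ConcurrentHashMap[String, ToyWriter]()

  def writerFor(blockId: String): ToyWriter =
    writers.computeIfAbsent(blockId, _ => new ToyWriter)

  def closeBlock(blockId: String): Unit = {
    val w = writers.get(blockId)
    if (w != null && !w.closed) w.close()  // idempotent: no-op if already closed
  }
}
```

A read path can then call `closeBlock(blockId)` unconditionally before fetching bytes, without tracking whether an earlier fetch already closed the writers.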
@@ -26,21 +26,27 @@ private class DiskStore(blockManager: BlockManager, rootDirs: String)
  extends BlockStore(blockManager) with Logging {

  class DiskBlockObjectWriter(blockId: String, serializer: Serializer, bufferSize: Int)
-   extends BlockObjectWriter(blockId) {
+   extends BlockObjectWriter(blockId) with Logging {

-   private val f: File = createFile(blockId /*, allowAppendExisting */)
+   private var f: File = createFile(blockId)

    // The file channel, used for repositioning / truncating the file.
    private var channel: FileChannel = null
    private var bs: OutputStream = null
    private var objOut: SerializationStream = null
    private var lastValidPosition = 0L
+   private var initialPosition = 0L

    override def open(): DiskBlockObjectWriter = {
      val fos = new FileOutputStream(f, true)
      channel = fos.getChannel()
      bs = blockManager.wrapForCompression(blockId, new FastBufferedOutputStream(fos))
      objOut = serializer.newInstance().serializeStream(bs)

+     // Commit the possible file header
+     commit()
+     initialPosition = lastValidPosition

      this
    }
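The extra commit() in open() exists because the compression and serialization layers may write a header the moment the streams are created, so the file is non-empty before any values arrive; initialPosition records where real data begins. A minimal sketch of that effect, using GZIP as a hypothetical codec (Spark would use whatever codec wrapForCompression configures):

```scala
import java.io.{File, FileOutputStream}
import java.util.zip.GZIPOutputStream

// A compression codec can emit a header as soon as the stream is created,
// before any user data is written. GZIP, for example, writes its header
// straight to the underlying stream in the constructor.
val file = File.createTempFile("header-demo", ".gz")
val fos = new FileOutputStream(file, true)
val channel = fos.getChannel()
val bs = new GZIPOutputStream(fos)

bs.flush()                                // push any buffered header bytes down
val initialPosition = channel.position()  // nonzero: codec header, not user data
```

Truncating back past this point on revert would corrupt the stream, which is why the revert logic below compares against initialPosition rather than zero.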
@@ -59,7 +65,6 @@ private class DiskStore(blockManager: BlockManager, rootDirs: String)
    // Flush the partial writes, and set valid length to be the length of the entire file.
    // Return the number of bytes written for this commit.
    override def commit(): Long = {
-     // NOTE: Flush the serializer first and then the compressed/buffered output stream
      objOut.flush()
      bs.flush()
      val prevPos = lastValidPosition
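The commit protocol above is: flush every layer down to the file, record the channel's new position as the last valid position, and return the delta. A minimal self-contained sketch, simplified to raw bytes over a buffered stream (no SerializationStream or compression layer, so a single flush suffices):

```scala
import java.io.{BufferedOutputStream, File, FileOutputStream}

val file = File.createTempFile("commit-demo", ".bin")
val fos = new FileOutputStream(file, true)
val channel = fos.getChannel()
val bs = new BufferedOutputStream(fos)
var lastValidPosition = 0L

// commit(): flush buffered writes to the file, then advance the valid
// region to the channel's current end and return the bytes this commit added.
def commit(): Long = {
  bs.flush()
  val prevPos = lastValidPosition
  lastValidPosition = channel.position()
  lastValidPosition - prevPos
}

bs.write("hello".getBytes("UTF-8"))
val written = commit()   // 5 bytes became part of the valid region
```

Everything written after the last commit() is provisional: it is on disk, but not yet inside `[0, lastValidPosition)`, so a revert can discard it.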
@@ -68,11 +73,28 @@ private class DiskStore(blockManager: BlockManager, rootDirs: String)
    }

+   override def revertPartialWrites() {
+     // Revert by discarding current writes, except that if no values have been committed,
+     // we revert by recreating the file (otherwise there are errors when reading objects
+     // from the file later on).
+     if (lastValidPosition == initialPosition)
+       recreateFile()
+     else
+       discardWrites()
+   }
+
+   private def recreateFile() {
+     close()
+     f.delete()
+     f = createFile(blockId)
+     open()
+   }
+
+   private def discardWrites() {
+     // Discard current writes. We do this by flushing the outstanding writes and
+     // truncating the file to the last valid position.
+     objOut.flush()
+     bs.flush()
+     channel.truncate(lastValidPosition)
+     channel.position(lastValidPosition)
+   }

    override def write(value: Any) {

Review comment: initialPosition is set to the size of the file when the file is opened. Isn't it problematic that, if we open an existing file, do some writes, and then want to revert, we could delete the old file?

Reply: A shuffle file is closed when we get a shuffle request, and it should never be re-opened (if you re-open the file and append to it, you may corrupt it, since it may have a trailer). Maybe we should first delete the shuffle file if it exists when we first open it (e.g., in case we need to re-run the map tasks).

Review comment: But if we re-run the map tasks, wouldn't that wipe out the shuffle outputs for all other previous map tasks?

Reply: The shuffle file is closed when all the map tasks are done (after the shuffle request is received). If we need to re-run a map task afterwards, it still works: I actually also tried throwing random exceptions during shuffle and it works fine. Again, is there a good way to include such tests?

Review comment: Thanks for the response. It would be useful to actually put what you just wrote into the code comment block. For failure testing, can we do something similar to what FailureSuite does?
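The discardWrites path (flush outstanding writes, then truncate the file back to the last valid position) can be sketched in isolation, again with raw bytes standing in for the serializer and compression layers:

```scala
import java.io.{File, FileOutputStream}
import java.nio.charset.StandardCharsets
import java.nio.file.Files

val file = File.createTempFile("revert-demo", ".bin")
val fos = new FileOutputStream(file, true)
val channel = fos.getChannel()

// Committed region: 10 bytes of valid data.
fos.write("committed;".getBytes(StandardCharsets.UTF_8))
val lastValidPosition = channel.position()

// Provisional writes from a task that is about to fail.
fos.write("partial garbage".getBytes(StandardCharsets.UTF_8))

// Revert: drop everything after the last commit and reposition the channel
// so that subsequent writes resume exactly where the valid data ends.
channel.truncate(lastValidPosition)
channel.position(lastValidPosition)

val contents = new String(Files.readAllBytes(file.toPath), StandardCharsets.UTF_8)
```

This is why the special case in revertPartialWrites matters: when nothing has been committed beyond the stream header, truncating is not enough to leave a readable file, so the writer deletes and recreates it instead.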
Review comment: Add a comment on what the Int and Long represent.