mogupload: avoid slurping regular files into memory
This allows uploading of large files that do not fit into memory
again.  Data piped via stdin still uses a huge in-memory buffer
since server support for Content-Range/Transfer-Encoding:chunked
is inconsistent.
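
The fix below hands the regular file's size to MogileFS::Client's new_file(), so the data can be copied through in chunks rather than read whole into a scalar first. As a rough illustration only, here is a minimal sketch of that streaming pattern against the stock MogileFS::Client API; the tracker address, domain, key, class, and chunk size are placeholder assumptions, not values taken from this commit:

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use MogileFS::Client;

    # Placeholder connection settings; substitute a real tracker and domain.
    my $mogc = MogileFS::Client->new(
        domain => 'example_domain',
        hosts  => [ '127.0.0.1:7001' ],
    );

    my ($key, $class, $filename) = ('example_key', 'example_class', 'large_file.bin');

    # For a regular file the size is known up front, so it can be passed to
    # new_file() and the data copied through in fixed-size chunks instead of
    # being slurped into memory.
    my $size = -s $filename;
    open(my $in, '<', $filename) or die "Could not open $filename: $!";
    binmode $in;

    my $mf = $mogc->new_file($key, $class, $size)
        or die "Error opening MogileFS file: " . $mogc->errstr;

    # Copy in 1 MiB chunks; memory use stays flat regardless of file size.
    while (read($in, my $buf, 1024 * 1024)) {
        print {$mf} $buf;
    }
    $mf->close or die "Error writing MogileFS file: " . $mogc->errstr;
    close $in;

For data arriving on stdin the size is not known in advance, which is why that path keeps its in-memory buffer until Content-Range or chunked uploads can be relied on server-side.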
Eric Wong committed Jan 9, 2013
1 parent 601a1c5 commit b14bc2a
Showing 1 changed file with 2 additions and 6 deletions.
mogupload: 8 changes (2 additions & 6 deletions)
@@ -44,10 +44,6 @@ A local file to upload. If '-', read file from STDIN instead.
 Dormando E<lt>L<[email protected]>E<gt>
-=head1 BUGS
-mogupload must buffer the upload in memory before transferring it. This makes it difficult to upload very large files. Future versions will lift this limitation.
 =head1 LICENSE
 Licensed for use and redistribution under the same terms as Perl itself.
@@ -70,7 +66,7 @@ my $filename = $c->{file};
 die "Must specify a file to upload with --file" unless $filename;
 
 my $fh;
-my $size = 0;
+my $size;
 if ($filename eq '-') {
     $fh = *STDIN;
 } else {
@@ -79,7 +75,7 @@ if ($filename eq '-') {
     open($fh, "< $filename") or die "Could not open " . $filename;
 }
 
-my $mf = $mogc->new_file($c->{key}, $c->{class}, undef);
+my $mf = $mogc->new_file($c->{key}, $c->{class}, $size);
 if ($mogc->errcode) {
     die "Error opening MogileFS file: " . $mogc->errstr;
 }
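
For reference, a hypothetical pair of invocations after this change; --key, --class, and --file appear in the code above, while the tracker and domain switches and all values are illustrative assumptions:

    # Regular file: the size is known, so the upload no longer has to fit in memory.
    mogupload --trackers=127.0.0.1:7001 --domain=example --class=normal \
        --key='backups/dump.sql' --file=/var/backups/dump.sql

    # Piped data: the size is unknown, so this path still buffers everything in memory.
    tar -cz /var/www | mogupload --trackers=127.0.0.1:7001 --domain=example \
        --key='backups/www.tar.gz' --file=-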