Investigate why reading content from archive file uses such small chunks

Description

When importing an archive file, we read the entire archived content body. The read happens in a loop that keeps reading into a byte buffer until either the whole body is in memory or a user-defined size limit is reached. Even when the size limit is reached, we still read the rest of the content body (without keeping it in memory) so that the digest can be calculated.
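For reference, a minimal sketch of the loop as described above (names like readBody and maxInMemory are hypothetical, not the actual importer code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;

public class ArchiveBodyReader {

    // Reads the content body to EOF. Bytes are retained in memory only while
    // we are under the size limit, but every byte is fed to the digest.
    static byte[] readBody(InputStream in, long maxInMemory, MessageDigest digest)
            throws IOException {
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            digest.update(buf, 0, n);
            if (total < maxInMemory) {
                int keep = (int) Math.min(n, maxInMemory - total);
                body.write(buf, 0, keep);
            }
            total += n;
        }
        return body.toByteArray();
    }
}
```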

While debugging, I noticed that the number of bytes returned by each call to read() is rather small, on the order of 1-2 KB. I would expect the reads to happen in much bigger chunks.
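One possible explanation (an assumption, not confirmed against the importer code): read(buf) returns whatever the underlying stream has available, so a decompressing or network-backed stream may hand back only one inflated block or packet (roughly 1-2 KB) per call even when the buffer is much larger. A readFully-style loop, sketched below with a hypothetical helper, would make each effective chunk the full buffer size regardless of how the underlying stream doles out data:

```java
import java.io.IOException;
import java.io.InputStream;

final class FillingReads {

    // Keeps calling read() until the buffer is full or EOF is reached,
    // returning the number of bytes actually placed in the buffer.
    static int readFully(InputStream in, byte[] buf) throws IOException {
        int filled = 0;
        while (filled < buf.length) {
            int n = in.read(buf, filled, buf.length - filled);
            if (n == -1) {
                break; // EOF before the buffer was full
            }
            filled += n;
        }
        return filled;
    }
}
```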

Environment

None

Resolution

Obsolete

Assignee

Aaron Binns

Reporter

Aaron Binns

Labels

None

Issue Category

None

Group Assignee

None

ZendeskID

None

Estimated Difficulty

None

Actual Difficulty

None

Affects versions

Priority

Major