When importing an archive file, we read the entire archived content body. The read happens in a loop: we keep reading into a byte buffer until either the whole body is in memory or a user-defined size limit is reached. Even once the size limit is reached, we continue reading the rest of the content body (discarding the bytes rather than keeping them in memory) so that the digest can still be calculated over the full body.
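A minimal sketch of that loop, assuming a SHA-256 digest and hypothetical names (`read_body`, `size_limit`, `chunk_size` are illustrative, not the actual API):

```python
import hashlib
import io

def read_body(stream, size_limit, chunk_size=8192):
    """Read the whole content body, keeping at most size_limit bytes
    in memory while hashing everything that was read, so the digest
    covers the full body even past the in-memory limit."""
    digest = hashlib.sha256()
    kept = bytearray()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:  # end of body
            break
        digest.update(chunk)            # digest always sees every byte
        if len(kept) < size_limit:      # but we only retain up to the limit
            kept.extend(chunk[: size_limit - len(kept)])
    return bytes(kept), digest.hexdigest()

# Example: keep at most 4 KiB of a 10 KiB body in memory.
body, digest = read_body(io.BytesIO(b"x" * 10_000), size_limit=4096)
```

Note that `chunk_size` here is the size we *ask* for; the underlying stream is free to return less per call.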
While debugging, I noticed that the number of bytes returned by each call to read() is rather small, around 1-2 KB. I would expect the reads to happen in bigger chunks.