In general, you can concatenate files on the fly (without any on-disk file operations) by writing the contents of multiple io.Readers into a single io.Writer. The buffering behind these abstractions takes care of the rest for you.
The key things you need are an io.Writer to send the result to (in an HTTP handler, the http.ResponseWriter already is one) and an io.Reader per chunk (an *os.File already is one).
So, in your http.Handler, a general scheme for step 2 could be:
- Set the Content-Type header
- Determine the chunk file names that make up the whole file
- Check that all chunks are accessible
- For each chunk file, in order:
  - Open the chunk file
  - Copy its contents to the ResponseWriter w
  - Close the file
I added a pre-check for file access because you don't want to discover an error condition in the middle of sending the file; by then it's too late to set an appropriate error code.
This operation will be completely I/O-bound (either disk or network throughput will be the limiting factor), so a serial approach is likely as good as it gets, at least for a single server process on a single machine.
As @Emile Pels pointed out in the comments, an io.MultiReader lets you concatenate multiple io.Readers, so you can replace the entire for loop with:
- Create a slice of the opened files, in order
- Create io.MultiReader(files...)
- io.Copy(w, mreader)
- Close each open file
One downside I can think of is that it forces you to open all the files and keep them open for the duration of the operation; under high load, with large files and a high chunks-per-file factor, that could push your process past its open file descriptor limit.