So why would we want to process the chunks of a file individually instead of reading the whole file at once?
There are two main reasons:
- Often, we can process each chunk of the file independently.
By handling the chunks as they become available, we can process the file sooner than if we waited to read the entire file first.
- Memory is a limited resource and loading an entire file into memory may use up a large portion of a computer's memory.
If the file is especially big, it may not even be possible to load it into memory.
By processing one chunk at a time, the program's memory footprint is much smaller, as the sketch below illustrates.
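
To see the difference in practice, here is a minimal sketch (the filename `big.log` is a hypothetical stand-in) contrasting reading a whole file at once with streaming it chunk by chunk:

```js
const fs = require("fs");

// Reading the whole file buffers its entire contents in memory at once.
fs.readFile("big.log", (err, data) => {
  if (err) throw err;
  console.log(`read ${data.length} bytes in a single buffer`);
});

// Streaming holds only one chunk in memory at a time (64 KiB by default
// for fs streams), and work can begin as soon as the first chunk arrives.
let total = 0;
fs.createReadStream("big.log")
  .on("data", (chunk) => {
    total += chunk.length; // process each chunk as it arrives
  })
  .on("end", () => console.log(`streamed ${total} bytes chunk by chunk`));
```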
## Readable streams
...
Here are the most commonly used types of streams that Node.js provides:
...
### An example
By putting together Node.js's built-in streams, we can easily build some complicated programs.
Here is a complete [example](http-gunzip-pipe.js) that loads a compressed webpage over HTTPS, decompresses it with a `Gunzip` transform stream, and prints it to the standard output.
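
The linked file isn't reproduced here, but a sketch of what such a program might look like follows. The URL is a placeholder, and the server must actually reply with gzip-encoded data for `Gunzip` to succeed:

```js
const https = require("https");
const zlib = require("zlib");

// Ask the server for a gzip-compressed response.
https.get(
  "https://example.com/",
  { headers: { "accept-encoding": "gzip" } },
  (res) => {
    res                          // Readable: the compressed response body
      .pipe(zlib.createGunzip()) // Transform: gzip bytes in, plain bytes out
      .pipe(process.stdout);     // Writable: the terminal
  }
);
```

Each call to `pipe()` returns the destination stream, which is what lets the three streams chain together in a single expression.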