From 21e87f2e7c55bc1227f4c9c4d3e1abf526496b18 Mon Sep 17 00:00:00 2001
From: Caleb Sander <caleb.sander@gmail.com>
Date: Mon, 8 Feb 2021 09:21:58 -0500
Subject: [PATCH] Fix typos

---
 notes/streams/streams.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/notes/streams/streams.md b/notes/streams/streams.md
index a8acf6d..5802f2b 100644
--- a/notes/streams/streams.md
+++ b/notes/streams/streams.md
@@ -22,9 +22,9 @@ So why would we want to process the chunks of a file individually instead of rea
 There are two main reasons:
 - Often, we can process each chunk of the file independently.
   By handling the chunks as they become available, we can process the file sooner than if we waited to read the entire file first.
-- Memory is a limited resource and loading an entire into memory may use up a large portion of a computer's memory.
+- Memory is a limited resource, and loading an entire file into memory may use up a large portion of a computer's memory.
   If the file is especially big, it may not even be possible to load it into memory.
-  By processing one chunk at a time, we greatly reduce our memory footprint.
+  By processing one chunk at a time, the program keeps its memory footprint much smaller.
 
 ## Readable streams
 
@@ -235,7 +235,7 @@ Here are the most commonly used types of streams that Node.js provides:
 ### An example
 
 By putting together Node.js's builtin streams, we can easily build some complicated programs.
-Here is a complete [example](http-gunzip-pipe.js) that loads a compressed webpage over HTTPS, decompresses it with a `Gunzip` transform stream, and pipes it to the standard output.
+Here is a complete [example](http-gunzip-pipe.js) that loads a compressed webpage over HTTPS, decompresses it with a `Gunzip` transform stream, and prints it to the standard output.
 ```js
 const https = require('https')
 const zlib = require('zlib')
-- 
GitLab
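
The patched paragraph above refers to the linked `http-gunzip-pipe.js` example, whose first two lines appear in the hunk context. A minimal sketch of that kind of pipeline, assuming the standard `https` and `zlib` APIs and a placeholder `https://example.com` URL (the actual example file may differ), could look like:
```js
const https = require('https')
const zlib = require('zlib')

// Ask the server for a gzip-compressed response
const options = {headers: {'accept-encoding': 'gzip'}}
https.get('https://example.com', options, res => {
  // Decompress the response body and pipe the result to standard output,
  // assuming the server honored the accept-encoding header
  res.pipe(zlib.createGunzip()).pipe(process.stdout)
})
```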