Reading/Writing patterns in Go: one-shot vs stream vs buffered

Goal

Help you choose the right approach when reading/writing data in Go:

1) One-shot (read all / write all)

Meaning: load the whole content into memory (RAM) in one go.

Common functions: `os.ReadFile`, `os.WriteFile`, `io.ReadAll`

When to use: small files or payloads that comfortably fit in RAM, when the simplest code wins

Example

```go
b, err := os.ReadFile("data.txt")
if err != nil {
	return err
}
// use b...
```

2) Stream (unbuffered)

Meaning: read/write gradually (chunks). You do NOT load everything into RAM.

Common functions / methods: `(io.Reader).Read`, `(io.Writer).Write`, `io.Copy`

When to use: large or unknown-size data; memory use stays constant regardless of input size

Example (read chunk-by-chunk)

```go
buf := make([]byte, 4096)
for {
	n, err := f.Read(buf)
	if n > 0 {
		// process buf[:n]
	}
	if err == io.EOF {
		break
	}
	if err != nil {
		return err
	}
}
```

3) Stream (buffered with bufio)

Meaning: still stream-based, but `bufio` keeps an in-memory buffer to reduce system calls. It reads/writes larger blocks internally, while you consume/produce smaller pieces.

Common functions: `bufio.NewReader`, `bufio.NewWriter`, `(*bufio.Reader).ReadString`, `(*bufio.Writer).Flush`

When to use: many small reads/writes, or parsing that needs line- or byte-level access

Example (read lines)

```go
rd := bufio.NewReader(f)
for {
	line, err := rd.ReadString('\n')
	if len(line) > 0 {
		// use line
	}
	if err == io.EOF {
		break
	}
	if err != nil {
		return err
	}
}
```

Example (buffered writer)

```go
wr := bufio.NewWriter(f)
_, _ = wr.WriteString("hello\n")
_ = wr.Flush() // important: data stays in bufio's buffer until Flush writes it to f
```

4) Stream token/line scanning (bufio.Scanner)

Meaning: convenient tokenization (often line-by-line), great for simple text parsing.

Common functions: `bufio.NewScanner`, `(*bufio.Scanner).Scan`, `Text`, `Buffer`, `Split`

When to use: simple line-by-line (or word-by-word) reading of text input

Pitfall

`Scanner` has a default max token size (64 KiB). For very long lines, raise the limit with `Scanner.Buffer` before the first `Scan` call, or fall back to `bufio.Reader`.

Quick decision table

| Your case | Recommended |
|---|---|
| Small file, want simplest | `os.ReadFile` / `os.WriteFile` |
| Large/unknown size | Stream: `io.Copy` or a `Read` loop |
| Many small reads/writes | `bufio.Reader` / `bufio.Writer` |
| Read line-by-line simply | `bufio.Scanner` (or `Reader.ReadString`) |

Why bufio is faster (in one sentence)

Because it reduces expensive OS calls by doing fewer, bigger reads/writes under the hood.

Hard words (English)