I've been banging the same drum for years: APIs must be carefully designed.

This statement doesn't resonate the same way with everyone. In order to really understand what I mean by "careful API design", one has to have experienced both ends of the spectrum.

But there is a silver lining - once you have experienced "good design", it's really hard to go back to the other kind. Even after acknowledging that "good design" inevitably comes at a cost, whether it's cognitive load, compile times, making hiring more challenging, etc.

It's a very difficult experience to learn about something new, and understand its value, and then go back to the old way of doing things. This holds true for any topic, not just programming.

I know it's been very difficult for me. So, here's your warning: once you learn to spot design deficiencies, you can't unlearn it, and it does make it harder to just "get the job done". It's a delicate balance.

Composability

When I started playing around with Go, a few years back, I was delighted to discover some of the core interfaces.

For example, io.Reader:

Go code
type Reader interface {
    Read(p []byte) (n int, err error)
}

There is a lot to like about this interface.

For starters, it's been around since Go 1.0. It's only one function. So it's been implemented for basically anything you could think of.

It's implemented for *os.File:

Go code
// in `main.go`

package main

import (
  "io"
  "log"
  "os"
)

func main() {
  log.SetFlags(log.Lshortfile)

  f, err := os.Open("main.go")
  if err != nil {
    log.Fatalf("%+v", err)
  }
  readSome(f)
}

func readSome(r io.Reader) {
  buf := make([]byte, 4)
  n, err := r.Read(buf)
  if err != nil {
    log.Printf("got error: %+v", err)
  } else {
    log.Printf("read %v bytes: %v", n, buf[:n])
  }
}
Shell session
$ go run main.go
main.go:25: read 4 bytes: [112 97 99 107]

It's embedded in the net.Conn interface, here implemented for *net.TCPConn:

Go code
package main

import (
  "io"
  "log"
  "net"
)

func main() {
  log.SetFlags(log.Lshortfile)

  conn, err := net.Dial("tcp", "example.org:80")
  if err != nil {
    log.Fatalf("%+v", err)
  }
  _, err = conn.Write([]byte("GET / HTTP/1.1\r\n\r\n"))
  if err != nil {
    log.Fatalf("%+v", err)
  }

  readSome(conn)
}

// omitted: readSome
Shell session
$ go run main.go
main.go:30: read 4 bytes: [72 84 84 80]

It's implemented by the Body of an *http.Response:

Go code
package main

import (
  "io"
  "log"
  "net/http"
)

func main() {
  log.SetFlags(log.Lshortfile)

  resp, err := http.Get("http://example.org")
  if err != nil {
    log.Fatalf("%+v", err)
  }

  readSome(resp.Body)
}

func readSome(r io.Reader) {
  buf := make([]byte, 4)
  n, err := r.Read(buf)
  if err != nil {
    log.Printf("got error: %+v", err)
  } else {
    log.Printf("read %v bytes: %v", n, buf[:n])
  }
}
Shell session
$ go run main.go
main.go:26: read 4 bytes: [60 33 100 111]
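
It's also implemented by in-memory types - here's a quick sketch, feeding the same readSome helper from a *strings.Reader and a *bytes.Reader:

Go code
package main

import (
  "bytes"
  "io"
  "log"
  "strings"
)

func main() {
  log.SetFlags(log.Lshortfile)

  // both *strings.Reader and *bytes.Reader implement io.Reader
  readSome(strings.NewReader("hello from a string"))
  readSome(bytes.NewReader([]byte{1, 2, 3, 4}))
}

func readSome(r io.Reader) {
  buf := make([]byte, 4)
  n, err := r.Read(buf)
  if err != nil {
    log.Printf("got error: %+v", err)
  } else {
    log.Printf("read %v bytes: %v", n, buf[:n])
  }
}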

We could go on, but let's not - basically, anything that can be read in the Go ecosystem almost certainly implements io.Reader.

Let's review the interface itself:

Go code
type Reader interface {
    Read(p []byte) (n int, err error)
}

It takes a single argument, a byte slice.

That's already an important decision. The alternative here would be to have something like:

Go code
type AltReader interface {
  AltRead() ([]byte, error)
}

That AltReader interface presents several issues - let's underline them by implementing it for, say, *os.File:

Go code
package main

import (
  "log"
  "os"
)

type AltReader interface {
  AltRead() ([]byte, error)
}

func (f *os.File) AltRead() ([]byte, error) {
  buf := make([]byte, 1024)
  n, err := f.Read(buf)
  return buf[:n], err
}

func main() {
  log.SetFlags(log.Lshortfile)

  f, err := os.Open("main.go")
  if err != nil {
    log.Fatalf("%+v", err)
  }

  readSome(f)
}

func readSome(r AltReader) {
  buf, err := r.AltRead()
  if err != nil {
    log.Printf("got error: %+v", err)
  } else {
    log.Printf("read %v bytes: %v", len(buf), buf)
  }
}
Shell session
$ go run main.go
# command-line-arguments
.\main.go:12:6: cannot define new methods on non-local type os.File
.\main.go:26:10: cannot use f (type *os.File) as type AltReader in argument to readSome:
        *os.File does not implement AltReader (missing AltRead method)

Oh. We can't.

Well that's okay, we'll make a wrapper type.

Go code
package main

import (
  "io"
  "log"
  "os"
)

type AltReader interface {
  AltRead() ([]byte, error)
}

// new type:
type AltReadWrapper struct {
  inner io.Reader
}

// now the receiver is our wrapper type
func (arw *AltReadWrapper) AltRead() ([]byte, error) {
  buf := make([]byte, 1024)
  n, err := arw.inner.Read(buf)
  return buf[:n], err
}

func main() {
  log.SetFlags(log.Lshortfile)

  f, err := os.Open("main.go")
  if err != nil {
    log.Fatalf("%+v", err)
  }

  // the argument is now wrapped
  readSome(&AltReadWrapper{inner: f})
}

func readSome(r AltReader) {
  buf, err := r.AltRead()
  if err != nil {
    log.Printf("got error: %+v", err)
  } else {
    log.Printf("read %v bytes: %v", len(buf), buf)
  }
}
Shell session
$ go run main.go
main.go:42: read 705 bytes: [112 97 99 107 97 103 101 32 109 97 105 (etc.)]
Cool bear's hot tip

Notice that when we implemented AltReader, we just declared a method named AltRead() with the correct signature - there's no explicit "implements" declaration anywhere.

But how can we make sure that *AltReadWrapper implements AltReader?

Well, there's no blessed way to do it, but there sure is a workaround:

Go code
var _ AltReader = (*AltReadWrapper)(nil)

This line will error out if *AltReadWrapper does not implement AltReader.

Okay, so the AltReader interface works - but, there's no way to specify how much data you want to read. We were happy just reading four bytes earlier, but now we're dependent on whatever buffer size the implementor of AltReader chose.

Let's fix that:

Go code
type AltReader interface {
  AltRead(n int) ([]byte, error)
}

type AltReadWrapper struct {
  inner io.Reader
}

var _ AltReader = (*AltReadWrapper)(nil)

func (arw *AltReadWrapper) AltRead() ([]byte, error) {
  buf := make([]byte, 1024)
  n, err := arw.inner.Read(buf)
  return buf[:n], err
}

// omitted: everything else

Note that we haven't updated the implementation yet - it still has the old signature, and allocates a fixed-size buffer in the AltRead() call:

Go code
func (arw *AltReadWrapper) AltRead() ([]byte, error) {
  buf := make([]byte, 1024) // ⬅ here
  n, err := arw.inner.Read(buf)
  return buf[:n], err
}
Shell session
$  go run main.go
# command-line-arguments
.\main.go:17:5: cannot use (*AltReadWrapper)(nil) (type *AltReadWrapper) as type AltReader in assignment:
        *AltReadWrapper does not implement AltReader (wrong type for AltRead method)
                have AltRead() ([]byte, error)
                want AltRead(int) ([]byte, error)
.\main.go:34:11: cannot use &AltReadWrapper literal (type *AltReadWrapper) as type AltReader in argument to readSome:
        *AltReadWrapper does not implement AltReader (wrong type for AltRead method)
                have AltRead() ([]byte, error)
                want AltRead(int) ([]byte, error)
.\main.go:38:23: not enough arguments in call to r.AltRead
        have ()
        want (int)

Ah! Our "X implements Y" assertion is working.

Let's fix both the implementation for *AltReadWrapper...

Go code
func (arw *AltReadWrapper) AltRead(n int) ([]byte, error) {
  buf := make([]byte, n)
  n, err := arw.inner.Read(buf)
  return buf[:n], err
}

...and the call in readSome:

Go code
func readSome(r AltReader) {
  buf, err := r.AltRead(4)
  if err != nil {
    log.Printf("got error: %+v", err)
  } else {
    log.Printf("read %v bytes: %v", len(buf), buf)
  }
}

Now, everything compiles and runs again - and we got four bytes, just like we wanted:

Shell session
$ go run main.go
main.go:42: read 4 bytes: [112 97 99 107]

There's another thing that isn't ideal with AltReader, and it's a big one.

Every time we make a call, we have to allocate a new buffer.

Go code
func (arw *AltReadWrapper) AltRead(n int) ([]byte, error) {
  // we're about to find out if that GC really is fast
  buf := make([]byte, n)
  n, err := arw.inner.Read(buf)
  return buf[:n], err
}
Cool bear's hot tip

Or do we?

Couldn't we just have an internal buffer, and re-use that?

We could. Let's try it:

Go code
package main

import (
  "io"
  "log"
  "os"
)

type AltReader interface {
  AltRead(n int) ([]byte, error)
}

type AltReadWrapper struct {
  inner io.Reader
  buf   []byte
}

var _ AltReader = (*AltReadWrapper)(nil)

func (arw *AltReadWrapper) AltRead(n int) ([]byte, error) {
  if len(arw.buf) < n {
    log.Printf("allocating %v bytes", n)
    arw.buf = make([]byte, n)
  }
  n, err := arw.inner.Read(arw.buf)
  return arw.buf[:n], err
}

func main() {
  log.SetFlags(log.Lshortfile)

  f, err := os.Open("main.go")
  if err != nil {
    log.Fatalf("%+v", err)
  }

  arw := &AltReadWrapper{inner: f}
  for i := 0; i < 4; i++ {
    readSome(arw)
  }
}

func readSome(r AltReader) {
  buf, err := r.AltRead(4)
  if err != nil {
    log.Printf("got error: %+v", err)
  } else {
    log.Printf("read %v bytes: %v", len(buf), buf)
  }
}
Shell session
$ go run main.go
main.go:22: allocating 4 bytes
main.go:49: read 4 bytes: [112 97 99 107]
main.go:49: read 4 bytes: [97 103 101 32]
main.go:49: read 4 bytes: [109 97 105 110]
main.go:49: read 4 bytes: [10 10 105 109]

Wonderful!

You may be wondering if that for loop...

Go code
  for i := 0; i < 4; i++ {
    readSome(arw)
  }

...can be written some other way. The answer is no, why would you ask.

Of course we're not done poking holes in our AltReader interface.

The issue with returning an internal buffer is, of course, that nothing prevents the caller from keeping references to it.

Go code
package main

import (
  "io"
  "log"
  "os"

  // new!
  "github.com/davecgh/go-spew/spew"
)

// some types/functions omitted

func main() {
  log.SetFlags(log.Lshortfile)

  f, err := os.Open("main.go")
  if err != nil {
    log.Fatalf("%+v", err)
  }

  fc := readFourChunks(&AltReadWrapper{inner: f})
  spew.Dump(fc)
}

type FourChunks struct {
  one   []byte
  two   []byte
  three []byte
  four  []byte
}

func readFourChunks(r AltReader) FourChunks {
  mustRead := func() []byte {
    r, err := r.AltRead(4)
    if err != nil {
      log.Fatalf("could not read: %+v", err)
    }
    return r
  }

  return FourChunks{
    one:   mustRead(),
    two:   mustRead(),
    three: mustRead(),
    four:  mustRead(),
  }
}
Shell session
$ go run main.go
main.go:24: allocating 4 bytes
(main.FourChunks) {
 one: ([]uint8) (len=4 cap=4) {
  00000000  0a 0a 69 6d                                       |..im|
 },
 two: ([]uint8) (len=4 cap=4) {
  00000000  0a 0a 69 6d                                       |..im|
 },
 three: ([]uint8) (len=4 cap=4) {
  00000000  0a 0a 69 6d                                       |..im|
 },
 four: ([]uint8) (len=4 cap=4) {
  00000000  0a 0a 69 6d                                       |..im|
 }
}

Uh oh. All the fields of FourChunks are set to the fourth set of four bytes found in main.go.

Okay, so we can't use an internal buffer.

Go code
type AltReadWrapper struct {
  inner io.Reader
}

func (arw *AltReadWrapper) AltRead(n int) ([]byte, error) {
  buf := make([]byte, n)
  n, err := arw.inner.Read(buf)
  return buf[:n], err
}
Shell session
$ go run main.go
(main.FourChunks) {
 one: ([]uint8) (len=4 cap=4) {
  00000000  70 61 63 6b                                       |pack|
 },
 two: ([]uint8) (len=4 cap=4) {
  00000000  61 67 65 20                                       |age |
 },
 three: ([]uint8) (len=4 cap=4) {
  00000000  6d 61 69 6e                                       |main|
 },
 four: ([]uint8) (len=4 cap=4) {
  00000000  0a 0a 69 6d                                       |..im|
 }
}

Now our code is correct again.

And we're stuck with that "allocate on every read" design, because we have no idea how long a caller might hold on to the slices we return. They might hold on to them long after our AltReader-implementing type gets garbage-collected.

But io.Reader does not have that problem. By taking a single []byte argument, it satisfies both requirements:

  1. allow specifying a maximum read size (the length of the slice)
  2. avoid allocations on every read

And if we change our example to just use io.Reader, we can see that everything works fine:

Go code
package main

import (
  "io"
  "log"
  "os"

  "github.com/davecgh/go-spew/spew"
)

func main() {
  log.SetFlags(log.Lshortfile)

  f, err := os.Open("main.go")
  if err != nil {
    log.Fatalf("%+v", err)
  }

  fc := readFourChunks(f)
  spew.Dump(fc)
}

type FourChunks struct {
  one   []byte
  two   []byte
  three []byte
  four  []byte
}

func readFourChunks(r io.Reader) FourChunks {
  mustRead := func(p []byte) []byte {
    _, err := io.ReadFull(r, p)
    if err != nil {
      log.Fatalf("could not read: %+v", err)
    }
    return p
  }

  return FourChunks{
    one:   mustRead(make([]byte, 4)),
    two:   mustRead(make([]byte, 4)),
    three: mustRead(make([]byte, 4)),
    four:  mustRead(make([]byte, 4)),
  }
}
Shell session
$ go run main.go
(main.FourChunks) {
 one: ([]uint8) (len=4 cap=4) {
  00000000  70 61 63 6b                                       |pack|
 },
 two: ([]uint8) (len=4 cap=4) {
  00000000  61 67 65 20                                       |age |
 },
 three: ([]uint8) (len=4 cap=4) {
  00000000  6d 61 69 6e                                       |main|
 },
 four: ([]uint8) (len=4 cap=4) {
  00000000  0a 0a 69 6d                                       |..im|
 }
}

We can re-use the same buffer if we want to:

Go code
package main

import (
  "io"
  "log"
  "os"

  "github.com/davecgh/go-spew/spew"
)

func main() {
  log.SetFlags(log.Lshortfile)

  f, err := os.Open("main.go")
  if err != nil {
    log.Fatalf("%+v", err)
  }

  readFourTimes(f)
}

func readFourTimes(r io.Reader) {
  buf := make([]byte, 4)

  for i := 0; i < 4; i++ {
    _, err := io.ReadFull(r, buf)
    if err != nil {
      log.Fatalf("could not read: %+v", err)
    }
    spew.Dump(buf)
  }
}

Hell, we can even have it read to the middle of a buffer if we want:

Go code
func main() {
  log.SetFlags(log.Lshortfile)

  f, err := os.Open("main.go")
  if err != nil {
    log.Fatalf("%+v", err)
  }

  readToMiddle(f)
}

func readToMiddle(r io.Reader) {
  buf := []byte("..............................")

  _, err := io.ReadFull(r, buf[8:20])
  if err != nil {
    log.Fatalf("could not read: %+v", err)
  }
  spew.Dump(buf)
}
Shell session
$ go run main.go
([]uint8) (len=30 cap=30) {
 00000000  2e 2e 2e 2e 2e 2e 2e 2e  70 61 63 6b 61 67 65 20  |........package |
 00000010  6d 61 69 6e 2e 2e 2e 2e  2e 2e 2e 2e 2e 2e        |main..........|
}

At this point, I think it's fair to say that io.Reader is a better design.

Right?

Well, it's not that simple.

I haven't presented the full io.Reader interface - only its function signature. And unfortunately, that is not enough to specify the interface completely.

Cool bear's hot tip

This is a red flag.

Anytime the interface's signature itself is not enough to infer its behavior, there is trouble on the horizon.

Imagine a fire alarm that you can pull up or down. If you pull it up - the alarm goes off, people evacuate, everything is fine.

But if you pull it down, nothing happens.

Sure, eventually people will figure it out - they'll pull it down, see that nothing happens, be confused for a few seconds, then try pulling it up. Or maybe they won't, and assume it's broken.

Unless of course the person pulling the alarm can't hear anything. And maybe the building management could be convinced that this is a potential problem, and that something should be done about it.

And instead of replacing the alarm, they set up employee training. "All the fire alarms must be pulled down, not up - no, wait, the other way around".

They haven't fixed the root of the issue. This is a band-aid, at most. Anyone who hasn't yet attended training, such as a new hire, is a potential danger to the building.

But hey, management says: problem solved!

Now, cool bear, you know that analogy isn't going to work for everyone. I can see the comments from here...

Cool bear's hot tip

What comments?

Nevermind - anyway, let's be fair here: not everything is fixable. Sometimes you just gotta slap a notice on the wall that says "don't poke the bear", because there's a good reason for there to be a bear there.

Cool bear's hot tip

A bear? What bear?

Anyway - the point I was making is that we haven't yet discussed the full io.Reader interface.

Even though it's just one function, there's a bunch of moving parts. First off, this is Go, and in Go, you don't have tuples, and you don't have a Result<T> type, and you don't have exception handling - you have multi-valued return.

The problem - one of the problems - with multi-valued return is that nobody complains about stuff like this:

Go code
package main

import (
  "log"
  "os"

  "github.com/davecgh/go-spew/spew"
)

func main() {
  log.SetFlags(log.Lshortfile)

  // this is obviously a bad idea
  f, _ := os.Open("woops")

  buf := make([]byte, 32)
  // and so is this
  n, _ := f.Read(buf)
  spew.Dump(buf[:n])
}

The compiler doesn't complain, go vet doesn't complain, none of the 51 linters that golangci-lint runs complain.

And if you like Go, this part of the article is probably infuriating you right now.

Of course they don't complain, you're thinking, you just told them to shut up by using _!

And that's fair!

Except that - the problem I'm pointing out is that this is a thing.

The way to be lazy about error handling in languages that support exceptions is to just... do nothing.

JavaScript code
// @ts-check

const { readFile } = require("fs").promises;

async function main() {
  const contents = await readFile("woops", { encoding: "utf-8" });
  console.log(`Just read a file: ${contents.slice(0, 20)}...`);
}

main();

The woops file does not exist, and we definitely haven't spent any time thinking about error handling, and so, it brings down the whole program:

Shell session
$ node --unhandled-rejections=strict main.js
internal/process/promises.js:194
        triggerUncaughtException(err, true /* fromPromise */);
        ^

[Error: ENOENT: no such file or directory, open 'C:\msys64\home\amos\go\aac\woops'] {
  errno: -4058,
  code: 'ENOENT',
  syscall: 'open',
  path: 'C:\\msys64\\home\\amos\\go\\aac\\woops'
}

Same with Java, another language with exceptions:

Java code
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.nio.CharBuffer;

public class Main {
  public static void main(String args[]) throws java.lang.Exception {
    BufferedReader reader = new BufferedReader
      (new InputStreamReader(new FileInputStream("woops"), "UTF-8"));

    CharBuffer buf = CharBuffer.allocate(32);
    reader.read(buf);
    reader.close();

    System.out.println("Just read a file: " + buf.toString());
  }
}
Shell session
$ javac Main.java && java Main
Exception in thread "main" java.io.FileNotFoundException: woops (The system cannot find the file specified)
        at java.base/java.io.FileInputStream.open0(Native Method)
        at java.base/java.io.FileInputStream.open(FileInputStream.java:219)
        at java.base/java.io.FileInputStream.<init>(FileInputStream.java:157)
        at java.base/java.io.FileInputStream.<init>(FileInputStream.java:112)
        at Main.main(Main.java:8)

Same with Python, another language with exceptions:

Python
file = open("woops", "r")
print(file.read())
Shell session
$ python main.py
Traceback (most recent call last):
  File "main.py", line 2, in <module>
    file = open("woops", "r")
FileNotFoundError: [Errno 2] No such file or directory: 'woops'

In fact, you know what language is closest to Go there?

C!

C doesn't force you to check errors.

C code
#define _CRT_SECURE_NO_WARNINGS
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
  FILE *f = fopen("woops", "r");

  const size_t len = 32;
  // (!) gratuitous use of sizeof to convey intent
  // the author realizes sizeof(char) is just 1.
  char *buf = calloc(len, sizeof(char));
  fread(buf, sizeof(char), len, f);
  printf("Just read part of a file: %.*s\n", (int) len, buf);

  return 0;
}
Shell session
$ clang main.c -o main && ./main

$

Uhhh... hang on

Shell session
$ echo $?
127

Right, it crashed.

Shell session
$ lldb ./main
(lldb) target create "./main"
Current executable set to './main' (x86_64).
(lldb) r
Process 14644 launched: 'C:\msys64\home\amos\go\aac\main' (x86_64)
Process 14644 stopped
* thread #1, stop reason = Exception 0xc0000409 encountered at address 0x7ff6f10c6378
    frame #0: 0x00007ff6f10c6378 main
->  0x7ff6f10c6378: int    $0x29
    0x7ff6f10c637a: movl   $0x1, %r8d
    0x7ff6f10c6380: movl   $0xc0000417, %edx         ; imm = 0xC0000417
    0x7ff6f10c6385: leal   0x1(%r8), %ecx

But see what I did there?

C code
#define _CRT_SECURE_NO_WARNINGS

I did much more than just a few underscores. I hid some very interesting warnings on purpose.

If I remove that line, we get:

Shell session
$ clang main.c -o main
main.c:5:13: warning: 'fopen' is deprecated: This function or variable may be unsafe. Consider using fopen_s
      instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details.
      [-Wdeprecated-declarations]
  FILE *f = fopen("woops", "r");
            ^
C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\ucrt\stdio.h:207:20: note: 'fopen' has been explicitly
      marked deprecated here
    _Check_return_ _CRT_INSECURE_DEPRECATE(fopen_s)
                   ^
D:\Programs\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\include\vcruntime.h:316:55: note:
      expanded from macro '_CRT_INSECURE_DEPRECATE'
        #define _CRT_INSECURE_DEPRECATE(_Replacement) _CRT_DEPRECATE_TEXT(    \
                                                      ^
D:\Programs\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\include\vcruntime.h:306:47: note:
      expanded from macro '_CRT_DEPRECATE_TEXT'
#define _CRT_DEPRECATE_TEXT(_Text) __declspec(deprecated(_Text))
                                              ^
1 warning generated.

We can get rid of that warning by using fopen_s:

C code
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
  FILE *f;
  fopen_s(&f, "woops", "r");

  // etc.

  return 0;
}

And then... there's no other warning. Except, Microsoft (I'm writing this on Windows) has developed an annotation language for C.

The full prototype for fopen_s, as supplied by ucrt\stdio.h, is actually:

C code
        _Check_return_wat_
        _ACRTIMP errno_t __cdecl fopen_s(
            _Outptr_result_maybenull_ FILE**      _Stream,
            _In_z_                    char const* _FileName,
            _In_z_                    char const* _Mode
            );

Code analysis is not run by default when we compile with Microsoft Visual C++:

Shell session
> cl /nologo main.c
main.c
> ./main.exe
>

(This is run in PowerShell, by the way - I usually run shell sessions in msys2).

But we can enable it with just one flag:

Shell session
$ cl.exe /nologo /analyze main.c
main.c
C:\msys64\home\amos\go\aac\main.c(8) : warning C6031: Return value ignored: 'fopen_s'.
C:\msys64\home\amos\go\aac\main.c(14) : warning C6387: 'buf' could be '0':  this does not adhere to the specification for the function 'fread'. : Lines: 7, 8, 10, 13, 14
C:\msys64\home\amos\go\aac\main.c(14) : warning C6387: 'f' could be '0':  this does not adhere to the specification for the function 'fread'. : Lines: 7, 8, 10, 13, 14
Microsoft (R) Incremental Linker Version 14.26.28805.0
Copyright (C) Microsoft Corporation.  All rights reserved.

/out:main.exe
main.obj

Isn't that cool? I've never used SAL before. I like it.

And then of course, Rust doesn't have exceptions. And you can still be lazy about error handling, in a bunch of ways.

You cannot just pretend it's not there - this doesn't compile:

Rust code
fn main() {
    let s: String = std::fs::read_to_string("woops");
    println!("read a file: {}", s);
}
Shell session
$ cargo run --quiet
error[E0308]: mismatched types
 --> src\main.rs:2:21
  |
2 |     let s: String = std::fs::read_to_string("woops");
  |            ------   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `std::string::String`, found enum `std::result::Result`
  |            |
  |            expected due to this
  |
  = note: expected struct `std::string::String`
               found enum `std::result::Result<std::string::String, std::io::Error>`

And - I know we've been there before - but read_to_string doesn't return a String, it returns a Result<String, E>, where E is std::io::Error.

But you can totally be lazy, like that:

Rust code
fn main() {
    let s: String = std::fs::read_to_string("woops").unwrap();
    println!("read a file: {}", s);
}

Or like that:

Rust code
fn main() -> Result<(), std::io::Error> {
    let s: String = std::fs::read_to_string("woops")?;
    println!("read a file: {}", s);
    Ok(())
}

Either way, the program halts and catches fire:

Shell session
$ cargo run --quiet
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "The system cannot find the file specified." }', src\main.rs:2:21
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Shell session
$ cargo run --quiet
Error: Os { code: 2, kind: NotFound, message: "The system cannot find the file specified." }

But let's get back to Go - in our case, we have a function that returns both an int and an error, and we can read either while completely ignoring the other, and nobody complains.

To Go's credit, it does complain if you actually bind it to something:

Go code
package main

import (
  "log"
  "os"
)

func main() {
  log.SetFlags(log.Lshortfile)

  // this is obviously a bad idea
  f, err := os.Open("woops.go")

  log.Printf("the file's name is %v", f.Name())
}
Shell session
$ go run main.go
# command-line-arguments
.\main.go:12:5: err declared but not used

...but that's not specific to error handling. Unused variables are just an error. Which, eh, has its good and bad moments.
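
The well-known escape hatch is the blank identifier. Explicitly discarding the error compiles just fine - a sketch, not a recommendation:

Go code
  f, err := os.Open("woops.go")
  _ = err // "there might be an error, and I don't care"

  // ...and this will happily blow up at runtime if the Open failed
  log.Printf("the file's name is %v", f.Name())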

Point is, one might be tempted to use the io.Reader interface like so:

Go code
package main

import (
  "io"
  "log"
  "os"

  "github.com/davecgh/go-spew/spew"
)

func main() {
  log.SetFlags(log.Lshortfile)

  f, err := os.Open("main.go")
  if err != nil {
    log.Fatalf("%+v", err)
  }

  s := readFull(f)
  spew.Dump(len(s), s[:32])
}

func readFull(r io.Reader) []byte {
  buf := make([]byte, 16)
  var res []byte

  for {
    n, _ := r.Read(buf)
    if n == 0 {
      break
    }
    res = append(res, buf[:n]...)
  }

  return res
}

It's a bit silly, but it's not that silly, right? Just read until we get an error. We're being lazy about errors, remember?

This seems to work fine with an *os.File:

Shell session
$ go run main.go
(int) 434
([]uint8) (len=32 cap=512) {
 00000000  70 61 63 6b 61 67 65 20  6d 61 69 6e 0a 0a 69 6d  |package main..im|
 00000010  70 6f 72 74 20 28 0a 09  22 69 6f 22 0a 09 22 6c  |port (.."io".."l|
}

Except there's nothing preventing an io.Reader implementation from returning 0 for a while... and then the actual content.

Go code
package main

import (
  "io"
  "log"
  "os"

  "github.com/davecgh/go-spew/spew"
)

func main() {
  log.SetFlags(log.Lshortfile)

  f, err := os.Open("main.go")
  if err != nil {
    log.Fatalf("%+v", err)
  }

  s := readFull(&NaughtyReader{inner: f, count: 4})
  spew.Dump(len(s), s[:32])
}

// --------- naughtiness begins

type NaughtyReader struct {
  inner io.Reader
  count int
}

func (nr *NaughtyReader) Read(p []byte) (int, error) {
  if nr.count > 0 {
    nr.count -= 1
    return 0, nil
  }
  return nr.inner.Read(p)
}

// --------- naughtiness ends

func readFull(r io.Reader) []byte {
  buf := make([]byte, 16)
  var res []byte

  for {
    n, _ := r.Read(buf)
    if n == 0 {
      break
    }
    res = append(res, buf[:n]...)
  }

  return res
}
Shell session
$ go run main.go
panic: runtime error: slice bounds out of range [:32] with capacity 0

goroutine 1 [running]:
main.main()
        C:/msys64/home/amos/go/aac/main.go:20 +0x23c
exit status 2

(What happened? readFull returned a zero-byte slice, so we can't make a 32-byte-long subslice out of it)

Returning 0, nil is not forbidden. There is nothing in the language, or the interface itself, that forbids it.

It is, however, frowned upon. And should you read the actual documentation for io.Reader, you'll be given a stern talking-to:

Implementations of Read are discouraged from returning a zero byte count with a nil error, except when len(p) == 0.

But, we live in the real world, where not everyone reads the documentation all the time, and so the next paragraph reads:

Callers should treat a return of 0 and nil as indicating that nothing happened; in particular it does not indicate EOF.

("EOF" stands for "end of file" here).

Okay, fair enough - our readFull is broken. Let's fix it:

Go code
func readFull(r io.Reader) []byte {
  buf := make([]byte, 16)
  var res []byte

  for {
    n, err := r.Read(buf)
    if err == io.EOF {
      break
    }
    res = append(res, buf[:n]...)
  }

  return res
}

And let's also add some additional checks to our main function:

Go code
func main() {
  log.SetFlags(log.Lshortfile)

  f, err := os.Open("main.go")
  if err != nil {
    log.Fatalf("%+v", err)
  }
  stats, _ := f.Stat() // 🔥

  s := readFull(&NaughtyReader{inner: f, count: 4})
  log.Printf("file had %d bytes, we read %d", stats.Size(), len(s))
  spew.Dump(len(s), s[:32])
}
Shell session
$ go run main.go
main.go:21: file had 825 bytes, we read 825
(int) 825
([]uint8) (len=32 cap=1024) {
 00000000  70 61 63 6b 61 67 65 20  6d 61 69 6e 0a 0a 69 6d  |package main..im|
 00000010  70 6f 72 74 20 28 0a 09  22 69 6f 22 0a 09 22 6c  |port (.."io".."l|
}

That was easy! Now our readFull function is ready for prime time. It'll work flawlessly with all io.Reader implementations, and...

...okay, I'm messing with you. We're not quite there yet.

What about EOF?

Go code
func main() {
  log.SetFlags(log.Lshortfile)

  src := []byte("I honestly believe it is better to know nothing than to know what ain’t so.")

  s := readFull(&NaughtyReader{src})
  log.Printf("src had %d bytes, we read %d", len(src), len(s))
  spew.Dump(s[:32])
}

// --------- naughtiness begins

type NaughtyReader struct {
  src []byte
}

func (nr *NaughtyReader) Read(p []byte) (n int, err error) {
  wanted := len(p)
  avail := len(nr.src)
  n = wanted
  if n > avail {
    n = avail
    err = io.EOF
  }
  copy(nr.src[:n], p[:n])
  nr.src = nr.src[n:]

  return
}

// --------- naughtiness ends

func readFull(r io.Reader) []byte {
  buf := make([]byte, 256)
  var res []byte

  for {
    n, err := r.Read(buf)
    if err == io.EOF {
      break
    }
    res = append(res, buf[:n]...)
  }

  return res
}
Shell session
$ go run main.go
main.go:16: src had 77 bytes, we read 64
([]uint8) (len=32 cap=64) {
 00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
}

Hey, why is the output empty?

That's my bad - I mixed up the order of the arguments to copy() - no way the compiler could've seen that coming, that one's on me.

Let's fix it:

Go code
  copy(p[:n], nr.src[:n])
Shell session
$ go run main.go
main.go:16: src had 77 bytes, we read 64
([]uint8) (len=32 cap=64) {
 00000000  49 20 68 6f 6e 65 73 74  6c 79 20 62 65 6c 69 65  |I honestly belie|
 00000010  76 65 20 69 74 20 69 73  20 62 65 74 74 65 72 20  |ve it is better |
}

Okay so, this time it's not empty, but look at the counts:

main.go:16: src had 77 bytes, we read 64

We haven't read the whole thing.

Because our NaughtyReader, when it returns io.EOF, also returns some data.

If we had picked another buffer size, things might be a lot worse:

Go code
func readFull(r io.Reader) []byte {
  buf := make([]byte, 128)

  // etc.
}
Shell session
$ go run main.go
main.go:16: src had 77 bytes, we read 0
panic: runtime error: slice bounds out of range [:32] with capacity 0

goroutine 1 [running]:
main.main()
        C:/msys64/home/amos/go/aac/main.go:17 +0x251
exit status 2

That (returning some data and io.EOF) is not forbidden either.

Does the documentation talk about it? Of course it does!

When Read encounters an error or end-of-file condition after successfully reading n > 0 bytes, it returns the number of bytes read.

So far so good.

It may return the (non-nil) error from the same call or return the error (and n == 0) from a subsequent call.

Oh no. That's one situation where "there's options" is definitely a bad thing.

An instance of this general case is that a Reader returning a non-zero number of bytes at the end of the input stream may return either err == EOF or err == nil. The next Read should return 0, EOF.

So our NaughtyReader is actually not naughty at all. It's just using one of the many options at our disposal.
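
For reference, a conforming consumer has to handle that option too: keep whatever bytes came back first, and only then look at the error. Here's a sketch of what readFull would have to look like (it still swallows non-EOF errors, which is its own problem):

Go code
func readFull(r io.Reader) []byte {
  buf := make([]byte, 256)
  var res []byte

  for {
    n, err := r.Read(buf)
    // keep the data first, even if err is non-nil
    if n > 0 {
      res = append(res, buf[:n]...)
    }
    if err != nil {
      // io.EOF means "end of input" - anything else is a real error,
      // which this sketch still happily ignores
      break
    }
  }

  return res
}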

That's not the last of the io.Reader gotchas, by the way. I haven't even touched these:

Even if Read returns n < len(p), it may use all of p as scratch space during the call.

Implementations must not retain p.

...but these at least sound more reasonable. It's not like the compiler could check for things like that, right?

Right.

So it's our readFull that's the problem. Luckily, the Go standard library has one we could just, you know, use.

Go code
func main() {
  log.SetFlags(log.Lshortfile)

  src := []byte("I honestly believe it is better to know nothing than to know what ain’t so.")

  s := make([]byte, 1024)
  n, err := io.ReadFull(&NaughtyReader{src}, s)
  if err != nil {
    log.Fatalf("%+v", err)
  }
  log.Printf("src had %d bytes, we read %d", len(src), n)
  spew.Dump(s[:32])
}
Shell session
$ go run main.go
main.go:18: unexpected EOF
exit status 1

Well, that's not what we want. (What we wanted was io/ioutil.ReadAll).
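
For what it's worth, here's a minimal sketch of that, reusing the NaughtyReader from above - ioutil.ReadAll keeps the bytes that arrive together with io.EOF, and treats EOF as a normal end of input rather than an error:

Go code
  // in main(), with "io/ioutil" added to the imports
  s, err := ioutil.ReadAll(&NaughtyReader{src})
  if err != nil {
    log.Fatalf("%+v", err)
  }
  log.Printf("src had %d bytes, we read %d", len(src), len(s))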

What does io.ReadFull actually do?

func ReadFull(r Reader, buf []byte) (n int, err error)

ReadFull reads exactly len(buf) bytes from r into buf. It returns the number of bytes copied and an error if fewer bytes were read. The error is EOF only if no bytes were read. If an EOF happens after reading some but not all the bytes, ReadFull returns ErrUnexpectedEOF. On return, n == len(buf) if and only if err == nil. If r returns an error having read at least len(buf) bytes, the error is dropped.

Oh no.

For starters, "the error is dropped" is nothing short of terrifying.

But more importantly, just like the Read method from io.Reader, the multi-valued return means that there are actually four possible state combinations:

  1. n == 0, err == nil
  2. n != 0, err == nil
  3. n == 0, err != nil
  4. n != 0, err != nil

Here, the documentation states that combination one can never happen. The compiler doesn't know that. And we've seen that, in the case of io.Reader, all four of those combinations are useful - and used.

Only C has the same design deficiency. JavaScript, Java and Python only have two states - either you've successfully called read and your program continues execution, or you haven't and an exception was thrown.

Rust also has only two alternatives, either you get a Result::Ok(_), or you get a Result::Err(_):

Rust code
use std::{fs::File, io::Read};

fn main() {
    let mut f = File::open("src/main.rs").unwrap();
    let mut buf = vec![0u8; 128];
    loop {
        match f.read(&mut buf) {
            Ok(n) => match n {
                0 => {
                    println!("reached end of file!");
                    return;
                }
                n => {
                    println!("read {} bytes", n);
                }
            },
            Err(e) => {
                println!("got error {:?}", e);
                return;
            }
        }
    }
}
Shell session
$ cargo run --quiet
read 128 bytes
read 128 bytes
read 128 bytes
read 128 bytes
read 49 bytes
reached end of file!

There's not "options to choose from", because the interface is properly constrained:

As I've said before - this is an article about API design. Also, the pitfalls of the io.Reader interface are not theoretical.

Because there is incredible leniency when it comes to the behavior of io.Reader implementations, and because the compiler is completely unable to enforce any of the invariants, it is in fact non-trivial to use correctly.

Cool bear's hot tip

Which is not to say that Rust's std::io::Read interface is perfect.

I was actually surprised to find out that Ok(0) indicated EOF - so when I first drafted the code sample using it, I had an infinite loop reading 0 bytes over and over.

One might argue that it would be better as a variant of std::io::ErrorKind - although I'm sure this also has downsides.

The point throughout this article is never "this thing is 100% bad" or "this thing is 100% good" - it's to emphasize the importance of experimenting with interfaces to see how misuse-resistant we can make them.

But let's come back to Go - and move on from the io.Reader interface to other ones.

Like... the io.ReadSeeker interface.

It combines io.Reader and io.Seeker - the latter having, again, just one function:

Go code
type Seeker interface {
  Seek(offset int64, whence int) (int64, error)
}

Again, it's presented without documentation for the time being.

Let's put our code critic hats on and look at this really closely.

What can we see?

Well, first of all, it returns an int64, whereas Read returned an int.

This makes sense - in Go land - for the following reasons: file offsets can easily exceed what a 32-bit int can hold (files larger than 2 GiB are common), and Go's int is platform-dependent, so an explicitly 64-bit type is the safer choice. A single Read call, on the other hand, can never return more bytes than the length of the slice you passed it, which always fits in an int.

So, that part makes sense.

Next, we notice that...

Go code
  Seek(offset int64, whence int) (int64, error)

...the whence parameter (great name btw) is an int.

That means we're in loosely-constrained territory again, unfortunately.

Out of the billions of possible values for an int (2**32 or 2**64 of them, depending on the platform), only 3 are meaningful:

Go code
const (
  SeekStart   = 0 // seek relative to the origin of the file
  SeekCurrent = 1 // seek relative to the current offset
  SeekEnd     = 2 // seek relative to the end
)

This is inevitable in Go, but not elsewhere. Many languages have enums - even TypeScript!

But my favorite flavor is Algebraic data types.

I'm not about to drop some ML or Haskell on you, so let's stick with what I know - Rust. If we had to take a parameter with three meaningful values, we'd probably declare it as an enum:

Rust code
enum Whence {
    Start,
    Current,
    End,
}

Is this a number?

Rust code
fn main() {
    let a: usize = Whence::Start;
}
Shell session
$ cargo run --quiet
error[E0308]: mismatched types
 --> src\main.rs:8:20
  |
8 |     let a: usize = Whence::Start;
  |            -----   ^^^^^^^^^^^^^ expected `usize`, found enum `Whence`
  |            |
  |            expected due to this

No!

Can we interpret it as a number?

Rust code
enum Whence {
    Start,
    Current,
    End,
}

fn main() {
    println!("Whence::Start is {}", Whence::Start as usize);
}
Shell session
$ cargo run --quiet
Whence::Start is 0

Yes!

Can we do the reverse operation? Interpret a number as a Whence?

Rust code
enum Whence {
    Start,
    Current,
    End,
}

fn main() {
    let x: usize = 0;
    let w = x as Whence;
}
Shell session
$ cargo run --quiet
error[E0605]: non-primitive cast: `usize` as `Whence`
 --> src\main.rs:9:13
  |
9 |     let w = x as Whence;
  |             ^^^^^^^^^^^
  |
  = note: an `as` expression can only be used to convert between primitive types. Consider using the `From` trait

No! Because casting (the as operator) is an operation that cannot fail, and there are many more possible values for a usize than there are for Whence.

With a little help, we can do what we want, though:

Rust code
use core::convert::TryInto;
use derive_try_from_primitive::TryFromPrimitive;

#[derive(TryFromPrimitive, Debug)]
#[repr(usize)]
enum Whence {
    Start,
    Current,
    End,
}

fn main() {
    let x: usize = 0;
    let w: Whence = x.try_into().unwrap();
    println!("w = {:?}", w);
}
Shell session
$ cargo run --quiet
w = Start

Since try_into is an operation that can fail, here too, we're face to face with a Result<T, E> - and we can unwrap() it, if we feel lazy, to either "crash or go on".

But Go doesn't have enums. Not even lesser enums, like the ones TypeScript, Java, or C# have.

Cool bear's hot tip

C doesn't really have enums either.

It's just typedefs all the way down. Pretty much like Go. Again.

The only thing we could do to improve the io.Seeker interface would be to use a "type definition":

Go code
type Whence int

const (
  SeekStart   Whence = 0 // seek relative to the origin of the file
  SeekCurrent Whence = 1 // seek relative to the current offset
  SeekEnd     Whence = 2 // seek relative to the end
)

type Seeker interface {
  Seek(offset int64, whence Whence) (int64, error)
}

But that's not a real fix. Any int can be cast into a Whence, failing to prevent this kind of misuse:

Go code
func doStuff(s Seeker) {
  s.Seek(0, Whence(1024))
}

Anyway.

One cool thing about the io.Seeker interface is that you can tell the size of the input with this pattern:

Go code
package main

import (
  "io"
  "log"
  "os"
)

func main() {
  f, _ := os.Open("main.go")
  printSize(f)
}

func printSize(s io.Seeker) {
  oldpos, _ := s.Seek(0, io.SeekCurrent)
  size, _ := s.Seek(0, io.SeekEnd)
  log.Printf("The input is %v bytes long", size)
  s.Seek(oldpos, io.SeekStart)
}
Shell session
$ go run main.go
2020/06/27 22:44:26 The input is 288 bytes long

This is a perfectly legitimate use of io.Seeker. Nothing in there violates its contract - not even the fine print (ie. the documentation).

We only ever pass valid offsets, and valid values for whence. Doing that is one of the things io.Seeker was made for.

We've seen two interfaces so far. io.Reader and io.Seeker go well together, because both of them assume we keep track of a "current position" within the input.

They combine so well, in fact, that there's an interface that embeds both of them: io.ReadSeeker, defined simply as:

Go code
type ReadSeeker interface {
    Reader
    Seeker
}

That's not the only "combo interface" - there's also io.ReadCloser, io.ReadWriteCloser, io.ReadWriteSeeker, and io.ReadWriter.

Some types allow both reading and seeking, like *os.File. It maps quite nicely to unix file descriptors:

C code
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>

int main() {
    int fd = open("main.c", O_RDONLY);
    size_t count = 12;
    char *buf = calloc(count, sizeof(char));
    ssize_t n;

    for (int i = 0; i < 3; i++) {
        ssize_t pos = lseek(fd, 0, SEEK_CUR);
        n = read(fd, buf, count);
        printf("at %zd: '%.*s'\n", pos, (int) n, buf);
    }
}
Shell session
$ clang main.c -o main && ./main
at 0: '#include <st'
at 12: 'dlib.h>
#inc'
at 24: 'lude <sys/ty'
Cool bear's hot tip

The output above looks a bit funky, but it's expected - we're reading 3 times 12 bytes from a file that has newlines in it, so the output has newlines too.

Now we can plainly see where Go's Seek method comes from.

In fact, *os.File is a very thin abstraction over file descriptors. Even C's FILE (stdio) interface is higher-level, because it does buffering.

The following C program, even though it does many fread calls, only does one read syscall for the file's contents:

C code
#include <stdio.h>
#include <stdint.h>

int main() {
    FILE *f = fopen("main.c", "r");

    for (int i = 0; i < 16; i++) {
        uint8_t c;
        fread(&c, 1, 1, f);
        printf("%c", c);
    }
    printf("\n");
}
Shell session
$ clang main.c -o main && strace -e read ./main
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0@q\2\0\0\0\0\0"..., 832) = 832
read(3, "#include <stdio.h>\n#include <std"..., 4096) = 224
#include <stdio.
+++ exited with 0 +++

Whereas the same Go program makes a lot of small reads:

Go code
package main

import (
  "fmt"
  "os"
)

func main() {
  f, _ := os.Open("main.go")

  var buf = make([]byte, 1)
  for i := 0; i < 16; i++ {
    f.Read(buf)
    fmt.Printf("%s", string(buf))
  }
  fmt.Printf("\n")
}
Shell session
$ go build main.go && strace -e read ./main > /dev/null
read(3, "2097152\n", 20)                = 8
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=97252, si_uid=1000} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=97252, si_uid=1000} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=97252, si_uid=1000} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=97252, si_uid=1000} ---
read(3, "p", 1)                         = 1
read(3, "a", 1)                         = 1
read(3, "c", 1)                         = 1
read(3, "k", 1)                         = 1
read(3, "a", 1)                         = 1
read(3, "g", 1)                         = 1
read(3, "e", 1)                         = 1
read(3, " ", 1)                         = 1
read(3, "m", 1)                         = 1
read(3, "a", 1)                         = 1
read(3, "i", 1)                         = 1
read(3, "n", 1)                         = 1
read(3, "\n", 1)                        = 1
read(3, "\n", 1)                        = 1
read(3, "i", 1)                         = 1
read(3, "m", 1)                         = 1
+++ exited with 0 +++

That's why the bufio package exists:

Go code
package main

import (
  "bufio"
  "fmt"
  "os"
)

func main() {
  f, _ := os.Open("main.go")
  r := bufio.NewReader(f)

  var buf = make([]byte, 1)
  for i := 0; i < 16; i++ {
    r.Read(buf)
    fmt.Printf("%s", string(buf))
  }
  fmt.Printf("\n")
}
Shell session
$ go build main.go && strace -e read ./main > /dev/null
read(3, "2097152\n", 20)                = 8
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=98503, si_uid=1000} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=98503, si_uid=1000} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=98503, si_uid=1000} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=98503, si_uid=1000} ---
read(3, "package main\n\nimport (\n\t\"bufio\"\n"..., 4096) = 239
+++ exited with 0 +++

While we're on the topic of API design, I can't help but notice something: wrapping a reader into a bufio.Reader does not "consume" the initial reader.

So, there's nothing stopping us from accidentally alternating between the two, and then weird stuff happens:

Go code
package main

import (
  "bufio"
  "fmt"
  "strings"
)

func main() {
  input := "abcdefghijklmnopqrstuvwxyz0123456789"

  sr := strings.NewReader(input)
  br := bufio.NewReaderSize(sr, 4)

  var sbuf []byte
  var bbuf []byte
  buf := make([]byte, 4)
  for i := 0; i < 5; i++ {
    sr.Read(buf)
    sbuf = append(sbuf, buf...)
    br.Read(buf)
    bbuf = append(bbuf, buf...)
  }
  fmt.Printf("[sbuf] %s\n", string(sbuf))
  fmt.Printf("[bbuf] %s\n", string(bbuf))
}
Shell session
$ go build main.go && ./main
[sbuf] abcduvwxyz0123456789
[bbuf] efghijklmnopqrst6789

But again, with the design of the Go language, there's no way to forbid that.

We can't even set up code standards so this doesn't happen. We might want, for example, to shadow the inner reader, so we can't accidentally re-use it:

Go code
func main() {
  input := "abcdefghijklmnopqrstuvwxyz0123456789"

  r := strings.NewReader(input)
  r = bufio.NewReaderSize(r, 4)
}
Shell session
$ go build main.go && ./main
# command-line-arguments
./main.go:12:5: cannot use bufio.NewReaderSize(r, 4) (type *bufio.Reader) as type *strings.Reader in assignment

Actually, I was wrong - there is a way, sort of. This does work:

Go code
func main() {
  input := "abcdefghijklmnopqrstuvwxyz0123456789"

  var r io.Reader = strings.NewReader(input)
  r = bufio.NewReaderSize(r, 4)
}

...but then we can never use any methods of r except Read.
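
To make that concrete - a quick sketch: *strings.Reader has a Len() method, but through an io.Reader variable, Read is all the compiler lets us call:

Go code
func main() {
  input := "abcdefghijklmnopqrstuvwxyz0123456789"

  sr := strings.NewReader(input)
  log.Printf("%d bytes left to read", sr.Len()) // fine: *strings.Reader has Len()

  var r io.Reader = sr
  // from here on, only Read is available -
  // r.Len() would not compile (io.Reader has no method Len)
  r = bufio.NewReaderSize(r, 4)
}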

I know some people don't like these comparisons but, allow me to show you why these problems couldn't happen in Rust. In Rust, you can declare another binding with the same name, and it shadows the old one. It doesn't have to have the same type.

So, in a "Go/Rust mix", we could write this:

Rust code
let r = strings.NewReader(input)
// here, `r` is of type `*strings.Reader`, we can use any of its methods
let r = bufio.NewReader(r)
// here, `r` is of type `*bufio.Reader`, we can use any of its methods

But we wouldn't even have that problem in the first place - the "accidentally hanging onto and using the inner reader" problem.

See, Rust also has a buffered reader type. And its constructor, or "new method", looks like this:

Rust code
impl<R: Read> BufReader<R> {
    pub fn new(inner: R) -> BufReader<R> {
        BufReader::with_capacity(DEFAULT_BUF_SIZE, inner)
    }
}

Here, BufReader::new takes ownership of the inner reader. In other words, the inner reader is moved into the BufReader. You can't use it afterwards accidentally:

Rust code
use std::{fs::File, io::BufReader, io::Read};

fn main() {
    let mut f = File::open("src/main.rs").unwrap();
    let mut br = BufReader::new(f);

    let mut s = String::new();
    f.read_to_string(&mut s).unwrap();
    println!("s = {:?}", s);
}
Shell session
$ cargo run --quiet
error[E0382]: borrow of moved value: `f`
 --> src/main.rs:8:5
  |
4 |     let mut f = File::open("src/main.rs").unwrap();
  |         ----- move occurs because `f` has type `std::fs::File`, which does not implement the `Copy` trait
5 |     let mut br = BufReader::new(f);
  |                                 - value moved here
...
8 |     f.read_to_string(&mut s).unwrap();
  |     ^ value borrowed here after move

Okay.

Back to io.ReadSeeker: not all types implement io.Seeker like *os.File does. An HTTP response body, for example, or a gzip stream, don't allow seeking, so they implement io.Reader, but not io.Seeker.

This is actually good - I don't have anything to say about it.
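
We can even verify that at compile time, with the same "X implements Y" assertion trick from earlier - a quick sketch using compress/gzip:

Go code
package main

import (
  "compress/gzip"
  "io"
)

// compiles fine: *gzip.Reader has a Read method
var _ io.Reader = (*gzip.Reader)(nil)

// does not compile: *gzip.Reader has no Seek method
// var _ io.Seeker = (*gzip.Reader)(nil)

func main() {}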

Alright, still with me?

Cool bear's hot tip

yawn sure, sure.

We've got one last interface to review: io.ReaderAt.

I like that one!

Go code
type ReaderAt interface {
  ReadAt(p []byte, off int64) (n int, err error)
}

Just like io.Reader's Read, it takes a []byte slice - but also an int64 offset. The semantics of the two returned values are the same, too. But there's one big difference: it doesn't maintain a read position.

This may be the only safe I/O interface in the whole Go standard library, because it's thread-safe!

ReadAt calls don't interfere with other ReadAt calls, so you can hand out as many copies of an io.ReaderAt as you want, and they can end up in different goroutines, and everything is fine.

Which is great, because there is no concept of "taking ownership" or "moving" in Go - references get shared freely, just like Java objects.

Go code
package main

import (
  "fmt"
  "io"
  "strings"
)

func main() {
  input := "abcdefghijklmnopqrstuvwxyz0123456789"
  sr := strings.NewReader(input)

  work := func(ra io.ReaderAt) string {
    buf := make([]byte, 8)
    ra.ReadAt(buf, 0)
    return string(buf)
  }

  n := 4
  results := make(chan string, 4)
  for i := 0; i < n; i++ {
    go func() {
      results <- work(sr)
    }()
  }

  for i := 0; i < n; i++ {
    s := <-results
    fmt.Printf("%s\n", s)
  }
}
Shell session
$ go build main.go && ./main
abcdefgh
abcdefgh
abcdefgh
abcdefgh

Now that we're all on the same page, let's look at some code from another package of the Go standard library, so that we can get a concrete example of the consequences of haphazard API design.

The debug/pe package

I don't know if you knew, but the Go standard library comes with a parser for Portable Executable ("PE") files - those are Windows executables.

That parser is quite useful, too! For example, did you ever want to find out what DLLs an executable depends on, without using a tool like Dependency Walker?

Just write a bit of Go code!

Go code
package main

import (
  "debug/pe"
  "fmt"
  "os"
)

func main() {
  f, err := pe.Open(os.Args[1])
  must(err)

  libs, err := f.ImportedLibraries()
  must(err)

  fmt.Printf("found %d imported libraries\n", len(libs))
  for _, lib := range libs {
    fmt.Printf("-> %v\n", lib)
  }
}

func must(err error) {
  if err != nil {
    panic(fmt.Sprintf("%+v", err))
  }
}
Shell session
$ go build main.go && ./main ./butler.exe
found 0 imported libraries

Oh, okay so... that doesn't really work.

Shell session
$ x86_64-w64-mingw32-ldd ./butler.exe
        KERNEL32.dll => not found
        msvcrt.dll => not found
        USER32.dll => not found

I guess it must've been broken sometime between Go 1.10 and Go 1.14? I see a lot of the code has been rewritten.

How do I know that? Because I've used that package in some software of mine. That software carries a fork of it from Go 1.10, and there it still works:

Shell session
$ butler exeprops ./butler.exe | jq -C .imports
[
  "KERNEL32.dll",
  "msvcrt.dll",
  "USER32.dll"
]

Oh, yeah, I see it now:

Go code
// ImportedLibraries returns the names of all libraries
// referred to by the binary f that are expected to be
// linked with the binary at dynamic link time.
func (f *File) ImportedLibraries() ([]string, error) {
  // TODO
  // cgo -dynimport don't use this for windows PE, so just return.
  return nil, nil
}

Phew, that package is a bundle of surprises. That wasn't even planned.

Okay, let's use something that hasn't been stubbed out, maybe ImportedSymbols?

Go code
package main

import (
  "debug/pe"
  "fmt"
  "os"
)

func main() {
  f, err := pe.Open(os.Args[1])
  must(err)

  syms, err := f.ImportedSymbols()
  must(err)

  fmt.Printf("found %d imported symbols\n", len(syms))
  for i, sym := range syms {
    fmt.Printf("-> %v\n", sym)
    if i > 10 {
      fmt.Printf("etc.\n")
      break
    }
  }
}

func must(err error) {
  if err != nil {
    panic(fmt.Sprintf("%+v", err))
  }
}
Shell session
$ go build main.go && ./main ./butler.exe
found 180 imported symbols
-> AddVectoredExceptionHandler:KERNEL32.dll
-> AreFileApisANSI:KERNEL32.dll
-> CloseHandle:KERNEL32.dll
-> CreateEventA:KERNEL32.dll
-> CreateFileA:KERNEL32.dll
-> CreateFileMappingA:KERNEL32.dll
-> CreateFileMappingW:KERNEL32.dll
-> CreateFileW:KERNEL32.dll
-> CreateIoCompletionPort:KERNEL32.dll
-> CreateMutexW:KERNEL32.dll
-> CreateSemaphoreA:KERNEL32.dll
-> CreateThread:KERNEL32.dll
etc.

Okay, better!

We can even use it to reconstruct ImportedLibraries, sort of:

Go code
  syms, err := f.ImportedSymbols()
  must(err)

  var libs = make(map[string]struct{})
  for _, sym := range syms {
    libs[strings.SplitN(sym, ":", 2)[1]] = struct{}{}
  }

  fmt.Printf("found %d imported libs\n", len(libs))
  for lib := range libs {
    fmt.Printf("-> %v\n", lib)
  }
Shell session
$ go build main.go && ./main ./butler.exe
found 3 imported libs
-> KERNEL32.dll
-> msvcrt.dll
-> USER32.dll

But the debug/pe package is not particularly robust. At the time of this writing, on Go version 1.14.4, it fails on the executable for They Bleed Pixels:

Shell session
$ go build main.go && ./main ./TheyBleedPixels.exe
panic: fail to read string table: unexpected EOF

goroutine 1 [running]:
main.must(...)
        /home/amos/ftl/aac/main.go:30
main.main()
        /home/amos/ftl/aac/main.go:12 +0x487

Not only that, but it fails spectacularly.

Shell session
$ go build main.go && valgrind --tool=massif --time-unit=B --pages-as-heap=yes ./main ./TheyBleedPixels.exe
==159780== Massif, a heap profiler
==159780== Copyright (C) 2003-2017, and GNU GPL'd, by Nicholas Nethercote
==159780== Using Valgrind-3.16.0.GIT and LibVEX; rerun with -h for copyright info
==159780== Command: ./main ./TheyBleedPixels.exe
==159780== 
panic: fail to read string table: unexpected EOF

goroutine 1 [running]:
main.must(...)
        /home/amos/ftl/aac/main.go:30
main.main()
        /home/amos/ftl/aac/main.go:12 +0x487
==159780== 
$ ms_print massif.out.159780| head -30
--------------------------------------------------------------------------------
Command:            ./main ./TheyBleedPixels.exe
Massif arguments:   --time-unit=B --pages-as-heap=yes
ms_print arguments: massif.out.159780
--------------------------------------------------------------------------------


    GB
5.053^                                                                       :
     |                                                                       @
     |                                                                       @
     |                                                                       @
     |                                                                       @
     |                                                                       @
     |                                                                       @
     |                                                                       @
     |                                                                       @
     |                                        :::::::::::::::::::::::::::::::@
     |                                        :                              @
     |                                        :                              @
     |                                        :                              @
     |                                        :                              @
     |                                        :                              @
     |                                        :                              @
     |                                        :                              @
     |                                        :                              @
     |        :@@::::::::::::::::::::::::::::::                              @
     |        :@@                             :                              @
   0 +----------------------------------------------------------------------->GB
     0                                                                   5.053

Before dying, our program allocated a whopping 2 GiB of memory.

What happened? Let's try to follow debug/pe's code.

First, Open:

Go code
// Open opens the named file using os.Open and prepares it for use as a PE binary.
func Open(name string) (*File, error) {
  f, err := os.Open(name)
  if err != nil {
    return nil, err
  }
  ff, err := NewFile(f)
  if err != nil {
    f.Close()
    return nil, err
  }
  ff.closer = f
  return ff, nil
}

Pretty straightforward - it uses os.Open to open the file, and passes it to NewFile.

NewFile itself is a bit long, so let's focus on a few key areas:

Go code
// NewFile creates a new File for accessing a PE binary in an underlying reader.
func NewFile(r io.ReaderAt) (*File, error) {

It accepts an io.ReaderAt - good!

Go code
  f := new(File)

That's the *pe.File type - it carries a bunch of state with it.

For the next part, we need a little background info. When parsing a file format, it's often useful to have an io.Reader rather than an io.ReaderAt - because you end up doing a lot of sequential work: read a magic number, then a header, then a series of records, and so on.

And maintaining a position by hand is kind of a hassle in that case!
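
To illustrate, here's a throwaway program parsing a made-up toy format (nothing to do with PE - the format and function names are mine), once with an io.ReaderAt and once with an io.Reader:

Go code
package main

import (
  "bytes"
  "encoding/binary"
  "fmt"
  "io"
  "log"
)

// with an io.ReaderAt, we carry the current offset around ourselves
func parseWithReaderAt(r io.ReaderAt) error {
  var offset int64
  magic := make([]byte, 2)
  if _, err := r.ReadAt(magic, offset); err != nil {
    return err
  }
  offset += 2 // easy to forget, easy to get wrong

  sizeBuf := make([]byte, 4)
  if _, err := r.ReadAt(sizeBuf, offset); err != nil {
    return err
  }
  size := binary.LittleEndian.Uint32(sizeBuf)
  offset += 4

  fmt.Printf("magic=%q size=%d\n", magic, size)
  return nil
}

// with an io.Reader, the position advances on its own
func parseWithReader(r io.Reader) error {
  magic := make([]byte, 2)
  if _, err := io.ReadFull(r, magic); err != nil {
    return err
  }
  var size uint32
  if err := binary.Read(r, binary.LittleEndian, &size); err != nil {
    return err
  }
  fmt.Printf("magic=%q size=%d\n", magic, size)
  return nil
}

func main() {
  data := []byte{'M', 'Z', 0x10, 0, 0, 0}
  if err := parseWithReaderAt(bytes.NewReader(data)); err != nil {
    log.Fatal(err)
  }
  if err := parseWithReader(bytes.NewReader(data)); err != nil {
    log.Fatal(err)
  }
}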

So, I'm assuming they just wanted an io.Reader, something that would keep track of the position for them, to simplify the code.

And a SectionReader does that:

SectionReader implements Read, Seek, and ReadAt on a section of an underlying ReaderAt.

But there's a little problem - a SectionReader needs an offset (here, zero, since we want to read from the very start), and a "number of bytes after which to stop with EOF".
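
Here's the constructor we're talking about, straight from the io package:

Go code
// NewSectionReader returns a SectionReader that reads from r
// starting at offset off and stops with EOF after n bytes.
func NewSectionReader(r ReaderAt, off int64, n int64) *SectionReader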

And the debug/pe.NewFile function takes an io.ReaderAt - with only the ReadAt function, there's no way to tell how large the input actually is.

You could fix that, by changing the function's signature - but that would break the Go compatibility promise by changing a public interface.

Stubbing out ImportedLibraries was a-okay, though. Poor function doesn't even return an error - just the wrong result. Apparently that's good enough for the Go compatibility promise.
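
For the record, the stub looks roughly like this (paraphrased comment, same signature and behavior as the standard library's):

Go code
// ImportedLibraries is supposed to return the names of all libraries
// the binary imports - for PE, it currently returns nothing, with no error.
func (f *File) ImportedLibraries() ([]string, error) {
  // TODO: not implemented for PE files yet
  return nil, nil
}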

But I digress - point is, there's no way to know the size of the input without breaking backwards compatibility. So what did they do?

Go code
  sr := io.NewSectionReader(r, 0, 1<<63-1)

Just pass the largest possible int64 value to NewSectionReader.

There.

Problem solved!

We're going to skip over some of the next few lines, to focus on that call:

Go code
  // Read string table.
  f.StringTable, err = readStringTable(&f.FileHeader, sr)
  if err != nil {
    return nil, err
  }

That's the part that crashes. I know, because I dug. Remember, Go doesn't have exceptions - just an error interface, which is part of the "universe block":

Go code
type error interface {
    Error() string
}

Notice something? No stack traces. There are many third-party packages to work around that for your own code - my current favorite is pkg/errors (which is in the process of being deprecated, in preparation for Go 2 error handling).
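
For reference, here's the kind of thing pkg/errors gives you in your own code - a minimal sketch, where openConfig is just a made-up example function:

Go code
package main

import (
  "fmt"
  "os"

  "github.com/pkg/errors"
)

func openConfig(path string) error {
  f, err := os.Open(path)
  if err != nil {
    // Wrap attaches a message and records the call stack
    return errors.Wrap(err, "opening config")
  }
  defer f.Close()
  return nil
}

func main() {
  if err := openConfig("/does/not/exist"); err != nil {
    // the %+v verb prints the message, the cause, and the stack trace
    fmt.Printf("%+v\n", err)
  }
}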

None of those third-party packages help us here, because debug/pe is part of the standard library, which uses "plain errors", with no additional context, no stack traces, no nothing.

So, yeah, digging! Fun.

Let's look at what it does:

Go code
func readStringTable(fh *FileHeader, r io.ReadSeeker) (StringTable, error) {

That function takes an io.ReadSeeker, not an io.ReaderAt.

Go code
  // COFF string table is located right after COFF symbol table.
  if fh.PointerToSymbolTable <= 0 {
    return nil, nil
  }
  offset := fh.PointerToSymbolTable + COFFSymbolSize*fh.NumberOfSymbols

Okay, computing an offset from some members of the file header and a constant, so far so good.

The <= 0 comparison is a bit weird though: since PointerToSymbolTable is a uint32, it can never be < 0. But oh well, that's not our bug.

Go code
  _, err := r.Seek(int64(offset), seekStart)
  if err != nil {
    return nil, fmt.Errorf("fail to seek to string table: %v", err)
  }

This code is a bit silly - I'm surprised nobody caught it in review.

By that point, r is actually an *io.SectionReader with a size (or limit) of 9.2 exabytes.

There is no way this Seek can ever fail. Besides, even if the SectionReader had the correct limit, it doesn't check the upper bound when seeking:

Go code
func (s *SectionReader) Seek(offset int64, whence int) (int64, error) {
  switch whence {
  default:
    return 0, errWhence
  case SeekStart:
    offset += s.base
  case SeekCurrent:
    offset += s.off
  case SeekEnd:
    offset += s.limit
  }
  if offset < s.base {
    return 0, errOffset
  }
  s.off = offset
  return offset - s.base, nil
}

It only checks it when reading:

Go code
func (s *SectionReader) Read(p []byte) (n int, err error) {
  if s.off >= s.limit {
    return 0, EOF
  }
  if max := s.limit - s.off; int64(len(p)) > max {
    p = p[0:max]
  }
  n, err = s.r.ReadAt(p, s.off)
  s.off += int64(n)
  return
}
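
You can see both behaviors with a tiny standalone program:

Go code
package main

import (
  "fmt"
  "io"
  "strings"
)

func main() {
  // a section that is only 2 bytes long
  sr := io.NewSectionReader(strings.NewReader("hi"), 0, 2)

  // seeking far past the end "succeeds"...
  pos, err := sr.Seek(1000, io.SeekStart)
  fmt.Println(pos, err) // 1000 <nil>

  // ...and the problem only surfaces on the next Read
  n, err := sr.Read(make([]byte, 4))
  fmt.Println(n, err) // 0 EOF
}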

Moving on. Next up, it's reading a size:

Go code
  var l uint32
  err = binary.Read(r, binary.LittleEndian, &l)
  if err != nil {
    return nil, fmt.Errorf("fail to read string table length: %v", err)
  }

The PE (Portable Executable) format is a bit curious in that the length being read includes the length field itself - so it's subtracted, as it should be:

Go code
  // string table length includes itself
  if l <= 4 {
    return nil, nil
  }
  l -= 4

And then, l bytes are read:

Go code
  buf := make([]byte, l)
  _, err = io.ReadFull(r, buf)
  if err != nil {
    return nil, fmt.Errorf("fail to read string table: %v", err)
  }
  return StringTable(buf), nil
}

The problem is, for some input files, such as TheyBleedPixels.exe, the value read into l is completely off - in our case, it's 2274698020. So make([]byte, l) happily allocates about 2.1 GiB, and then io.ReadFull fails with unexpected EOF - that's the spike we saw in the massif graph earlier.

When I first found the problem, I was already using a fork of debug/pe, so I thought I'd go for a quick fix. Looking at the readStringTable function, and seeing that it took an io.ReadSeeker, I knew what to do.

First, find the size of the file:

Go code
  size, err := r.Seek(0, io.SeekEnd)
  if err != nil {
    return nil, err
  }

Then, before allocating whatever l is set to, make sure the table we're reading would actually fit in the file:

Go code
  var end int64 = int64(offset) + int64(l)
  if end > size {
    return nil, fmt.Errorf("fail to read string table: computed length out of bounds")
  }

But of course - that didn't work! Because, again, r is an *io.SectionReader with a size of 9.2 exabytes.

So, io.SectionReader only implements io.Seeker correctly if its size is set correctly. Which it isn't, in this case, because the interface of debug/pe cannot change, for backwards compatibility reasons.
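
Here's a tiny standalone demonstration of why the quick fix was doomed:

Go code
package main

import (
  "fmt"
  "io"
  "strings"
)

func main() {
  // this mirrors what debug/pe does: a SectionReader with a made-up limit
  sr := io.NewSectionReader(strings.NewReader("hi"), 0, 1<<63-1)

  // "how big is the input?" - well...
  size, _ := sr.Seek(0, io.SeekEnd)
  fmt.Println(size) // 9223372036854775807 - not 2
}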

Not only is the bug pretty bad - user input definitely can't be trusted, and parsers have to be defensive! - but on top of that, it cannot be fixed.

In my fork, I was able to fix it, by simply taking the size of the input alongside the io.ReaderAt. I can do that, because it's an internal fork - I can change the interface all I want.
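
In rough strokes, the fix looks like this - a sketch of the approach, not the fork's actual code:

Go code
package pe

import "io"

// File is a stand-in for debug/pe's File type in this sketch.
type File struct{}

// NewFile - hypothetical signature: the caller passes the input's size
// alongside the io.ReaderAt.
func NewFile(r io.ReaderAt, size int64) (*File, error) {
  // the SectionReader now has a real limit, so bounds checks
  // (like "does this string table actually fit?") become possible
  sr := io.NewSectionReader(r, 0, size)

  // ... the rest of the parsing proceeds as before, using sr ...
  _ = sr
  return &File{}, nil
}

The caller generally knows the size anyway: an os.File can Stat() itself, and a byte slice knows its own length.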

But the best the Go standard library maintainers could do is set an upper bound - maybe, just maybe, if l is above 2**30, then something went wrong.

Then again - debug/pe seems to be used by the Go compiler internally. And who knows, maybe a 1 GiB size limit for Go executables is a bit on the low side.

Closing words

This concludes our journey across unfortunate Go design decisions, for today.

There are many more examples of poor design across all of Go - and across many other languages and libraries out there.

And I want to make one thing extremely clear, because every time I post about Go, a portion of the readers dismiss the whole thing based on the notion that I'm being "a typical Rustacean".

So here goes: even though I really like the design of Rust the language, and its compiler, and its standard library to a large extent, it is not sufficient to prevent bad API design.

In the past year, I've had the opportunity to experiment with a lot of crates (allow me to shill for lib.rs, which I much prefer to crates.io) - and some of them are poorly designed!

It happens! The difference is: you can usually tell from the function and type signatures alone.

For example, this is a very suspicious Rust API:

Rust code
struct Template { /* ... */ }

impl Template {
  fn load_from_file(path: &str) -> Template { /* ... */ }
}

First of all - why does it take an &str? Rust has a Path type, I've brought that up before.

Second of all - it's clearly opening and reading a file in there. But it returns a Template, unconditionally. It doesn't return a Result<Template, E>.

So what does it do with the errors? Does it drop them? Does it panic? If I see a function signature like that, I'm not using that crate.
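
For contrast, here's roughly what a less suspicious signature could look like - hypothetical error type and names, since I'm not pointing at any particular crate:

Rust code
use std::io;
use std::path::Path;

struct Template { /* ... */ }

// a made-up error type - the point is just that failures are represented
#[derive(Debug)]
enum TemplateError {
    Io(io::Error),
    Parse(String),
}

impl Template {
    // takes a real Path, and admits that opening/reading/parsing can fail
    fn load_from_file(path: &Path) -> Result<Template, TemplateError> {
        let source = std::fs::read_to_string(path).map_err(TemplateError::Io)?;
        Self::parse(&source)
    }

    fn parse(_source: &str) -> Result<Template, TemplateError> {
        // parsing elided
        Ok(Template {})
    }
}

An even more flexible variant would take impl AsRef<Path>, like the standard library's own file functions do.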

Some design deficiencies are a bit harder to identify. For example, consider this:

Rust code
struct TemplateCompiler {}

impl TemplateCompiler {
  fn new_from_folder(folder: &Path) -> Result<Self, Error> { /* ... */ }

  fn new() -> Self { /* ... */ }
  fn add_source(&mut self, name: &str, contents: String) { /* ... */ }
  fn compile_all(&mut self) -> Result<(), Error> { /* ... */ }
}

What's the problem here? Well, there are apparently two ways to construct that template compiler: all at once, with new_from_folder, or step by step, with new(), then one add_source() call per template, then finally compile_all().

Presumably, compile_all is a separate step, because it needs to have all the sources at its disposal to compile any one of the templates, since they can reference each other, and it wants to error out if you refer to a template that doesn't exist.

This means that, most probably, between new() and compile_all(), the TemplateCompiler is in a partially-initialized state.

This is relatively easy to fix, though - by making another type:

Rust code
trait TemplateSourceSet {
  // add functions to enumerate the sources,
  // look them up by name - anything you want!
}

impl TemplateCompiler {
  fn new<S>(source_set: S) -> Result<Self, Error>
  where S: TemplateSourceSet
    { /* ... */ }
}

That's much better! You can even have a built-in FolderSourceSet struct, which is used in a new_from_folder convenience associated function. But the point is - once you hold an instance of TemplateCompiler, it's fully initialized. There's no way to accidentally call methods in the wrong order.
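
To make that concrete, here's a rough sketch - all names hypothetical, since again, I'm not pointing at a specific crate:

Rust code
use std::path::{Path, PathBuf};

struct Error;

trait TemplateSourceSet {
    // every (name, contents) pair the compiler should know about
    fn sources(&self) -> Result<Vec<(String, String)>, Error>;
}

// a built-in source set backed by a folder on disk
struct FolderSourceSet {
    root: PathBuf,
}

impl TemplateSourceSet for FolderSourceSet {
    fn sources(&self) -> Result<Vec<(String, String)>, Error> {
        // walk `self.root`, read each file... elided here
        let _ = &self.root;
        Ok(Vec::new())
    }
}

struct TemplateCompiler { /* compiled templates */ }

impl TemplateCompiler {
    // once this returns Ok, everything is compiled - no in-between state
    fn new<S: TemplateSourceSet>(source_set: S) -> Result<Self, Error> {
        let _sources = source_set.sources()?;
        // compile, resolve cross-references, error out on dangling names
        Ok(TemplateCompiler {})
    }

    // the convenience constructor is just a thin wrapper
    fn new_from_folder(folder: &Path) -> Result<Self, Error> {
        Self::new(FolderSourceSet { root: folder.to_path_buf() })
    }
}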

And that's why I cannot not compare Go and Rust - every time I look at either, I'm reminded of how many classes of bugs Rust lets me prevent, and Go doesn't.

I'm not talking exclusively about memory safety. Having a garbage collector makes Go much safer than, say, C. I'm talking about logic errors.

We've already seen the BufReader example.

But here's another one. Remember how the documentation for Go's io.Reader interface says the following:

Implementations must not retain p.

That's an invariant that the Go compiler cannot check for.
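
Here's what a violation looks like - RetainingReader is just an illustration, and it compiles without so much as a warning:

Go code
package main

import "io"

// RetainingReader stashes p away, violating io.Reader's documented contract -
// and the compiler has no opinion about it.
type RetainingReader struct {
  stash []byte
}

func (r *RetainingReader) Read(p []byte) (n int, err error) {
  r.stash = p // nothing stops us
  return 0, nil
}

// it satisfies io.Reader just fine
var _ io.Reader = (*RetainingReader)(nil)

func main() {}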

The corresponding Rust interface has the same invariant - but the documentation doesn't even mention it. It's all in the function signature.

Rust code
pub trait Read {
  fn read(&mut self, buf: &mut [u8]) -> Result<usize>;
}

Read::read doesn't take "a slice". It takes a mutable reference to a slice.

By the time it returns, it has to give it back.

And if we try to do it anyway, the compiler will stop us:

Rust code
use std::io::{self, Read};

struct Foo<'a> {
    blah: Option<&'a mut [u8]>,
}

impl<'a> Read for Foo<'a> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        self.blah = Some(buf);
        Ok(0)
    }
}
Shell session
$ cargo check
    Checking main-rs v0.1.0 (/home/amos/ftl/aac/main-rs)                                                                                                                            
error[E0495]: cannot infer an appropriate lifetime due to conflicting requirements                                                                                                  
  --> src/main.rs:9:21                       
   |                                         
9  |         self.blah = Some(buf);                                                       
   |                     ^^^^^^^^^                                                        
   |                                         
note: first, the lifetime cannot outlive the anonymous lifetime #2 defined on the method body at 8:5...                                                                             
  --> src/main.rs:8:5                        
   |                                         
8  | /     fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {                                                                                                                
9  | |         self.blah = Some(buf);                                                     
10 | |         Ok(0)                         
11 | |     }                                 
   | |_____^                                 
note: ...so that reference does not outlive borrowed content                                                                                                                        
  --> src/main.rs:9:26                       
   |                                         
9  |         self.blah = Some(buf);                                                       
   |                          ^^^                                                         
note: but, the lifetime must be valid for the lifetime `'a` as defined on the impl at 7:6...                                                                                        
  --> src/main.rs:7:6                        
   |                                         
7  | impl<'a> Read for Foo<'a> {                                                          
   |      ^^                                 
note: ...so that the expression is assignable                                             
  --> src/main.rs:9:21                       
   |                                         
9  |         self.blah = Some(buf);                                                       
   |                     ^^^^^^^^^                                                        
   = note: expected  `std::option::Option<&'a mut [u8]>`                                                                                                                            
              found  `std::option::Option<&mut [u8]>`       

The error is a mouthful - essentially, it says, you can't store buf into blah, because buf doesn't live long enough. There's a lot more to say about this - about a book's worth, maybe two.

But my point is this: even if you don't understand the error fully, disaster is averted.

Remember when I mixed up the order of the arguments in copy? There:

Go code
  copy(nr.src[:n], p[:n])

I went "source, dest" instead of "dest, source". I guess I was thinking of the command-line cp utility.

I said "that one's on me, there's no way the compiler could've seen it coming". But there would've been.. in a different language.

Here's a similar copy function:

Rust code
// T needs to be Copy so we can copy elements out of `src`
fn copy<T: Copy>(src: &[T], dst: &mut [T]) -> usize {
    let n = std::cmp::min(src.len(), dst.len());
    for i in 0..n {
        dst[i] = src[i];
    }
    n
}

It works for slices of any Copy type. And it takes the src first, just like cp does.

Now I'll never get it wrong!

Rust code
use std::io::{self, Read};

struct SliceReader<'a> {
    src: &'a [u8],
}

impl<'a> Read for SliceReader<'a> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let n = copy(buf, self.src);
        self.src = &self.src[n..];
        Ok(n)
    }
}

Oh, no, I got it wrong anyway... will the compiler catch it?

Shell session
$ cargo check
    Checking main-rs v0.1.0 (/home/amos/ftl/aac/main-rs)
error[E0308]: mismatched types
 --> src/main.rs:9:27
  |
9 |         let n = copy(buf, self.src);
  |                           ^^^^^^^^ types differ in mutability
  |
  = note: expected mutable reference `&mut [u8]`
                     found reference `&'a [u8]`

Yes! Because only dst is a mutable reference. src is an immutable reference.

Of course, this only works because we designed our SliceReader interface carefully in the first place. If we asked for more than we needed, say, if we were holding a &mut [u8]:

Rust code
struct SliceReader<'a> {
  // this wasn't `mut` before
    src: &'a mut [u8],
}

impl<'a> Read for SliceReader<'a> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let n = copy(buf, self.src);
        self.src = &self.src[n..];
        Ok(n)
    }
}

Then it would compile. Or would it?

Shell session
$ cargo check
    Checking main-rs v0.1.0 (/home/amos/ftl/aac/main-rs)
error[E0308]: mismatched types
  --> src/main.rs:10:20
   |
10 |         self.src = &self.src[n..];
   |                    ^^^^^^^^^^^^^^ types differ in mutability
   |
   = note: expected mutable reference `&'a mut [u8]`
                      found reference `&[u8]`

It wouldn't! We would have to double down on our bad decision to take a &mut [u8], and change the line to &mut self.src[n..].

And like I said earlier - there is a cost to all this. There's definitely more punctuation in the Rust code I've shown than in the Go code I've shown.

There's generics, there's traits, there's sum types, I even showed a bit of pattern matching. And Rust programs typically take longer than Go programs to compile (unless you need cgo, in which case, ehhhh).

But the payoff is so worth it. I recently wrote nearly seven thousand lines of Rust, with a few hundred dependencies, for this website - and it works.

And I didn't experience any of the dozens of gotchas I had gotten used to when writing Go code. In fact, I think I've started to forget some of them.

Rust gives you a lot of tools for careful API design. Most crate authors use them wisely. Learning it is an investment I strongly recommend.

Even if you end up using other languages, because they're a better fit for a particular task, learning Rust will make you a better API designer, and a better developer in general.

There's tons of good resources online. Heck, I even made some.

I hope your journey is as empowering as mine was. Good luck!

What did we learn?
  • There are significant design flaws in Go, both the language and the standard library, that enable entire classes of bugs that have very real consequences.

  • These flaws are not immediately obvious to everyone - which is fine - so we took a very long and detailed look at them, one by one.

  • These flaws are not unavoidable - the situation is not desperate. It doesn't have to be like this.

  • There has been progress in enabling misuse-resistant design, and I strongly encourage you to learn about it, even if it turns out you can't use those novel languages, because some of the techniques can be applied to classical languages as well.