Wave

Continuing on an HTTP Server in Rust

4censord

This post is a direct continuation of the recent Computerphile video Coding a Web Server in 25 Lines - Computerphile.

The video explains the absolute basics of the HTTP protocol and implements a very basic HTTP server in Rust.
This blog post goes into more detail about some HTTP headers and some other HTTP request methods, implementing a few of them along the way.

At the end of the video, our source code looks like this:

use std::io::BufRead;
use std::io::Write;

fn main() {
    let listener = std::net::TcpListener::bind("127.0.0.1:8081").unwrap();
    for mut stream in listener.incoming().flatten() {
        let mut reader = std::io::BufReader::new(&mut stream);


        let mut line = String::new();
        reader.read_line(&mut line).unwrap();
        match line.trim().split(' ').collect::<Vec<_>>().as_slice() {
            ["GET", resource, "HTTP/1.1"] => {
                loop {
                    let mut line = String::new();
                    reader.read_line(&mut line).unwrap();
                    if line.trim().is_empty() {
                        break;
                    }
                    print!("{line}");
                }

                let mut path = std::path::PathBuf::new();
                path.push("resources/");
                path.push(resource.trim_start_matches('/'));
                if resource.ends_with('/') {
                    path.push("index.html");
                }
                stream.write_all(b"HTTP/1.1 200 OK\r\n\r\n").unwrap();
                stream.write_all(&std::fs::read(path).unwrap()).unwrap();
            },
            _ => todo!(),
        }
    }
}

Dealing with other request methods

Currently, if the client sends anything but a GET request, our server just crashes.
The HTTP protocol specifies 8 different request methods: GET, HEAD, POST, PUT, DELETE, CONNECT, OPTIONS and TRACE.
It also specifies 2 error codes to be used when the server does not support a method:
405 Method Not Allowed: The server knows the request method, but the target resource does not allow it.
501 Not Implemented: The server does not support the functionality required to fulfill the request.
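We won't actually send 405 in this post, but for illustration, here is a sketch of what such a response could look like on the wire (the helper name is made up):

```rust
// Build the raw bytes of a hypothetical 405 response.
// Illustrative only; our server will respond with 501 instead.
fn method_not_allowed() -> Vec<u8> {
    let mut resp = Vec::new();
    resp.extend_from_slice(b"HTTP/1.1 405 Method Not Allowed\r\n");
    resp.extend_from_slice(b"Allow: GET\r\n"); // the methods the resource supports
    resp.extend_from_slice(b"Content-Length: 0\r\n");
    resp.extend_from_slice(b"\r\n"); // blank line: end of headers, no body follows
    resp
}

fn main() {
    let resp = method_not_allowed();
    assert!(resp.starts_with(b"HTTP/1.1 405"));
    assert!(resp.ends_with(b"\r\n\r\n"));
}
```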

According to the spec, because we haven't implemented these methods, we can just return 501 errors for them:

The set of methods allowed by a target resource can be listed in an
Allow header field (Section 10.2.1).

An origin server that receives a request method that is unrecognized
or not implemented SHOULD respond with the 501 (Not Implemented)
status code.
See also https://www.rfc-editor.org/rfc/rfc9110#name-overview

Therefore, we will return 501 errors for all 7 methods that we haven't implemented yet.

When returning the 501 error, we can add an Allow header to tell the client which methods we do support.
For us, this looks like Allow: GET, because we haven't implemented anything else yet.

Now we extend our match statement to explicitly match the other methods:

match line.trim().split(' ').collect::<Vec<_>>().as_slice() {
    ["GET", resource, "HTTP/1.1"] => {[...]},
    ["HEAD", _, "HTTP/1.1"]
    | ["POST", _, "HTTP/1.1"]
    | ["PUT", _, "HTTP/1.1"]
    | ["DELETE", _, "HTTP/1.1"]
    | ["CONNECT", _, "HTTP/1.1"]
    | ["OPTIONS", _, "HTTP/1.1"]
    | ["TRACE", _, "HTTP/1.1"] => {
        stream.write_all(b"HTTP/1.1 501 Not Implemented\r\n").unwrap();
        stream.write_all(b"Allow: GET\r\n\r\n").unwrap();
    }
    _ => todo!(),
}

And return the 501 error together with the Allow header.

If the client were to request anything that isn't one of these known
methods, e.g. SPECIAL / HTTP/1.1, we should also return a 501 error.
So we just extend the match to include the default case as well.

match line.trim().split(' ').collect::<Vec<_>>().as_slice() {
    ["GET", resource, "HTTP/1.1"] => {[...]},
    ["HEAD", _, "HTTP/1.1"]
    | ["POST", _, "HTTP/1.1"]
    | ["PUT", _, "HTTP/1.1"]
    | ["DELETE", _, "HTTP/1.1"]
    | ["CONNECT", _, "HTTP/1.1"]
    | ["OPTIONS", _, "HTTP/1.1"]
    | ["TRACE", _, "HTTP/1.1"]
    | _ => {
        stream.write_all(b"HTTP/1.1 501 Not Implemented\r\n").unwrap();
        stream.write_all(b"Allow: GET\r\n\r\n").unwrap();
    }
}

Our server should now respond adequately even to unexpected requests without crashing.

Handle missing files

Currently, when a client requests a file that does not exist, our server just crashes.
We can easily fix that by trying to open the file first and returning a 404 error when that fails.

match File::open(path.clone()) {
    Err(_) => {
        stream.write_all(b"HTTP/1.1 404 Not Found\r\n\r\n").unwrap();
    },
    Ok(_) => {
        stream.write_all(b"HTTP/1.1 200 OK\r\n\r\n").unwrap();
        stream.write_all(&std::fs::read(path).unwrap()).unwrap();
    },
};

Adding headers

Content-Length

Let's start with a rather simple header: Content-Length.

Looking at the spec (RFC 9110):

The “Content-Length” header field indicates the associated
representation's data length as a decimal non-negative integer number
of octets.
[...]
Content-Length indicates the selected representation's current length,
which can be used by recipients to estimate transfer time or to
compare with previously stored representations.

So Content-Length is simply the number of bytes of content that
follows the end of the headers, i.e. everything after the \r\n\r\n.
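Note that the length is counted in octets (bytes), not characters; for UTF-8 content the two can differ. A quick sketch:

```rust
fn main() {
    let body = "Hällo"; // the 'ä' encodes to two bytes in UTF-8
    assert_eq!(body.chars().count(), 5); // five characters...
    assert_eq!(body.len(), 6); // ...but six bytes; Content-Length must say 6
}
```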

So, let's just look at the file's metadata to figure out its size:

match File::open(path.clone()) {
    Err(_) => {
        stream.write_all(b"HTTP/1.1 404 Not Found\r\n").unwrap();
        stream.write_all(b"Content-Length: 0").unwrap();
        stream.write_all(b"\r\n\r\n").unwrap();
    },
    Ok(file) => {
        let len = file.metadata().unwrap().len();
        stream.write_all(b"HTTP/1.1 200 OK\r\n").unwrap();
        stream.write_all(format!("Content-Length: {len}\r\n").as_bytes()).unwrap();
        stream.write_all(b"\r\n").unwrap();
        stream.write_all(&std::fs::read(path.clone()).unwrap()).unwrap();
    },
};

And for good measure, let's add Content-Length: 0 to the 404 error as well.

Compression

To save bandwidth and to increase effective transfer speed, HTTP payloads are commonly compressed.
To facilitate this, clients may send an Accept-Encoding header, specifying which encodings they support.
For Firefox, this looks like this: Accept-Encoding: gzip, deflate, br.
To avoid unnecessary complexity, we will only support gzip compression.
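A plain substring check for "gzip" will do for this post, but strictly speaking the header value is a comma-separated list in which each entry may carry parameters such as gzip;q=0.8. A slightly more careful check could look like this (the helper name is made up):

```rust
// Check whether an Accept-Encoding value lists a given coding,
// ignoring optional ";q=..." parameters. Illustrative sketch only.
fn accepts(header_value: &str, coding: &str) -> bool {
    header_value
        .split(',')
        .map(|entry| entry.split(';').next().unwrap_or("").trim())
        .any(|name| name.eq_ignore_ascii_case(coding))
}

fn main() {
    assert!(accepts("gzip, deflate, br", "gzip"));
    assert!(accepts("deflate, gzip;q=0.8", "gzip"));
    assert!(!accepts("deflate, br", "gzip"));
}
```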

To figure out whether our client supports this, we need to start reading the headers the client sends.
So, instead of just printing all of them, let's match on them instead.

match line.trim().split(' ').collect::<Vec<_>>().as_slice() {
    ["GET", resource, "HTTP/1.1"] => {
        let mut compress = false;
        loop {
            let mut line = String::new();
            reader.read_line(&mut line).unwrap();
            if line.trim().is_empty() {
                break;
            }
            match line.trim().splitn(2, ' ').collect::<Vec<_>>().as_slice() {
                ["Accept-Encoding:", encodings] => {
                    if encodings.contains("gzip") {
                        compress = true;
                    }
                }
                _ => {
                    //print!("{:#?}", l);
                }
            }
        }
        [...]
    }
    [...]
}

Notice how we only split each header into 2 parts using .splitn(2, ' '), because the header value itself may contain spaces.
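The difference is easy to see on a header whose value itself contains spaces:

```rust
fn main() {
    let line = "Accept-Encoding: gzip, deflate, br";
    // split(' ') also breaks the value apart at every space...
    assert_eq!(line.split(' ').count(), 4);
    // ...while splitn(2, ' ') keeps the value in one piece.
    let parts: Vec<_> = line.splitn(2, ' ').collect();
    assert_eq!(parts, ["Accept-Encoding:", "gzip, deflate, br"]);
}
```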

Now we just need to compress the file before sending it when compress is set to true.

We need to remember to set the Content-Length header to the length of the content after compression; otherwise our clients will be confused (and wait for content that never arrives).
Oh, and we need to include a Content-Encoding: gzip header to tell the client that we compressed the payload.
For the compression itself, we will use the flate2 crate.

Ok(file) => {
    stream.write_all(b"HTTP/1.1 200 OK\r\n").unwrap();

    if compress {
        let mut e = GzEncoder::new(Vec::new(), Compression::default());
        e.write_all(&std::fs::read(path.clone()).unwrap()).unwrap();
        let compressed = e.finish().unwrap();

        stream
            .write_all(format!("Content-Length: {}\r\n", compressed.len()).as_bytes())
            .unwrap();
        stream.write_all(b"Content-Encoding: gzip\r\n").unwrap();
        stream.write_all(b"\r\n").unwrap();
        stream.write_all(compressed.as_slice()).unwrap();
    } else {
        let len = file.metadata().unwrap().len();
        stream
            .write_all(format!("Content-Length: {len}\r\n").as_bytes())
            .unwrap();
        stream.write_all(b"\r\n").unwrap();

        stream
            .write_all(&std::fs::read(path.clone()).unwrap())
            .unwrap();
    }
}

That's it for this post.
Technically, we'd need to implement the HEAD method to be minimally compliant, but that's a topic for another post.
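As a small preview: HEAD is specified to return the same headers a GET for the same resource would produce, just without the body. Under that assumption, a sketch of the response such an arm could build (the helper name is made up):

```rust
// Sketch: the headers a HEAD response would carry for a body of known size.
// Same status line and Content-Length as GET, but the body itself is omitted.
fn head_response(body_len: usize) -> Vec<u8> {
    let mut resp = Vec::new();
    resp.extend_from_slice(b"HTTP/1.1 200 OK\r\n");
    resp.extend_from_slice(format!("Content-Length: {body_len}\r\n").as_bytes());
    resp.extend_from_slice(b"\r\n"); // headers end here; no body follows
    resp
}

fn main() {
    let resp = head_response(42);
    assert!(resp.ends_with(b"\r\n\r\n")); // nothing after the header terminator
    assert!(String::from_utf8(resp).unwrap().contains("Content-Length: 42"));
}
```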

Our whole source code now looks like this:

use std::fs::File;
use std::io::BufRead;
use std::io::Write;

use flate2::write::GzEncoder;
use flate2::Compression;

fn main() {
    let listener = std::net::TcpListener::bind("127.0.0.1:8081").unwrap();
    for mut stream in listener.incoming().flatten() {
        let mut reader = std::io::BufReader::new(&mut stream);

        let mut line = String::new();
        reader.read_line(&mut line).unwrap();
        match line.trim().split(' ').collect::<Vec<_>>().as_slice() {
            ["GET", resource, "HTTP/1.1"] => {
                let mut compress = false;
                loop {
                    let mut line = String::new();
                    reader.read_line(&mut line).unwrap();
                    if line.trim().is_empty() {
                        break;
                    }
                    match line.trim().splitn(2, ' ').collect::<Vec<_>>().as_slice() {
                        ["Accept-Encoding:", encodings] => {
                            if encodings.contains("gzip") {
                                compress = true;
                            }
                        }
                        _ => {
                            //print!("{:#?}", l);
                        }
                    }
                }

                let mut path = std::path::PathBuf::new();
                path.push("resources/");
                path.push(resource.trim_start_matches('/'));
                if resource.ends_with('/') {
                    path.push("index.html");
                }

                match File::open(path.clone()) {
                    Err(_) => {
                        stream.write_all(b"HTTP/1.1 404 Not Found\r\n").unwrap();
                        stream.write_all(b"Content-Length: 0").unwrap();
                        stream.write_all(b"\r\n\r\n").unwrap();
                    }
                    Ok(file) => {
                        stream.write_all(b"HTTP/1.1 200 OK\r\n").unwrap();
                        if compress {
                            let mut e = GzEncoder::new(Vec::new(), Compression::default());
                            e.write_all(&std::fs::read(path.clone()).unwrap()).unwrap();
                            let compressed = e.finish().unwrap();

                            stream
                                .write_all(
                                    format!("Content-Length: {}\r\n", compressed.len()).as_bytes(),
                                )
                                .unwrap();
                            stream.write_all(b"Content-Encoding: gzip\r\n").unwrap();
                            stream.write_all(b"\r\n").unwrap();
                            stream.write_all(compressed.as_slice()).unwrap();
                        } else {
                            let len = file.metadata().unwrap().len();
                            stream
                                .write_all(format!("Content-Length: {len}\r\n").as_bytes())
                                .unwrap();
                            stream.write_all(b"\r\n").unwrap();

                            stream
                                .write_all(&std::fs::read(path.clone()).unwrap())
                                .unwrap();
                        }
                    }
                };
            }
            ["HEAD", _, "HTTP/1.1"]
            | ["POST", _, "HTTP/1.1"]
            | ["PUT", _, "HTTP/1.1"]
            | ["DELETE", _, "HTTP/1.1"]
            | ["CONNECT", _, "HTTP/1.1"]
            | ["OPTIONS", _, "HTTP/1.1"]
            | ["TRACE", _, "HTTP/1.1"]
            | _ => {
                stream
                    .write_all(b"HTTP/1.1 501 Not Implemented\r\n")
                    .unwrap();
                stream.write_all(b"Allow: GET\r\n\r\n").unwrap();
            }
        }
    }
}

About the Author

4censord

4censord studies computer science in Germany.

Reviewer: Mia Rose Winter
