Wave on Kubernetes

4censord

Kubernetes is an increasingly popular deployment option for services. Originally developed by Google, Kubernetes provides functionality for multi-host, containerized deployment of applications.

Today, I will take a look at deploying Wave onto my Kubernetes cluster.[1]
Note that this article does not explain how to interact with Kubernetes itself.

Wave depends on some other services to be fully functional, namely:

  1. A PostgreSQL server for persistent data storage
  2. A Redis server for session storage and synchronization
  3. A reverse proxy for TLS termination
  4. An SMTP server for sending email

Dependencies

Because we want to focus on running Wave itself, let's get the supporting services out of the way first.
Everything will happen in the wave-dev namespace, so let's create that first:

kubectl create namespace wave-dev
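If you prefer to keep everything declarative, the same namespace can also be created from a manifest; applying it with kubectl apply -f is equivalent to the command above:

---
apiVersion: v1
kind: Namespace
metadata:
  name: wave-dev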

PostgreSQL

Deployed via Helm from the bitnami/postgresql chart, using the following values:

# postgres.yaml
---
auth:
  username: "wave"
  password: "wavepw12345"
  database: "wave"

using

$ helm install postgres bitnami/postgresql --values postgres.yaml --namespace wave-dev 

After a successful installation, Helm informs us that our PostgreSQL server is reachable at postgres-postgresql.wave-dev.svc.cluster.local. Because Wave will be running in the same namespace (wave-dev), we can reach the database server as just postgres-postgresql.

Redis

Same story here: deployed via Helm from the bitnami/redis chart.
Values:

---
architecture: standalone
auth:
  password: "wave-redis-pw-12345"
commonConfiguration: |-
  # Enable AOF https://redis.io/topics/persistence#append-only-file
  appendonly yes
  # Disable RDB persistence, AOF persistence already enabled.
  save ""

Using

helm install redis bitnami/redis --values redis.yaml --namespace wave-dev     

Redis is now available as redis-master.wave-dev.svc.cluster.local.

Reverse proxy

We will be using standard Kubernetes ingresses, more on that later.

Email

I won't be configuring email right now, but any normal SMTP server can be used.

Wave

Wave is already provided in the form of a container image, so we don't need to do any extra work on that front.

Kubernetes provides three basic types for deploying services:

  • Deployments
    The basic type; runs 1..n copies of a pod[2]
  • StatefulSets
    Like Deployments, but with stronger consistency and ordering guarantees. Used, e.g., for databases
  • DaemonSets
    Runs one copy of a pod per Kubernetes node; mostly used for Kubernetes-internal services

We will therefore be using a Deployment.
The basic variant of a deployment looks like this:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wave-deployment
  labels:
    app: wave
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wave
  template:
    metadata:
      labels:
        app: wave
    spec:
      containers:
        - name: wave
          image: docker.io/miawinter/wave:alpha-13
          ports:
            - containerPort: 8080

This would work, but we are still missing some important things, namely:

  • Configuring Wave's access to Redis and PostgreSQL
  • Giving Wave a place to store its files

Luckily, Wave allows configuration via environment variables, so we can just add some to the container's spec:

env:
  - name: WAVE_ConnectionStrings__DefaultConnection
    value: "Host=postgres-postgresql; Username=wave; Password=wavepw12345"
  - name: WAVE_ConnectionStrings__Redis
    value: redis-master,password=wave-redis-pw-12345
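
Putting the passwords directly into the deployment manifest works, but a Kubernetes Secret keeps them out of the deployment itself. A sketch of that alternative (the secret name wave-secrets and its key names are my own choice, not from Wave's documentation):

---
apiVersion: v1
kind: Secret
metadata:
  name: wave-secrets
stringData:
  db-connection: "Host=postgres-postgresql; Username=wave; Password=wavepw12345"
  redis-connection: "redis-master,password=wave-redis-pw-12345"

The env entries then reference the secret via valueFrom.secretKeyRef instead of value:

env:
  - name: WAVE_ConnectionStrings__DefaultConnection
    valueFrom:
      secretKeyRef:
        name: wave-secrets
        key: db-connection
  - name: WAVE_ConnectionStrings__Redis
    valueFrom:
      secretKeyRef:
        name: wave-secrets
        key: redis-connection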

By default, a container's filesystem is ephemeral: anything written to it is lost when the pod restarts.
Therefore, we need to configure some persistent storage for Wave.
For this, Kubernetes provides so-called persistent volumes, commonly shortened to PV[3]. Persistent volumes can either be configured by hand (ugh), or created automatically by Kubernetes. To have Kubernetes create one automatically, we tell it what exactly we need by issuing a persistent volume claim:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wave-files
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

When applying this claim, the Kubernetes control plane prepares a storage volume for us that:

  • has at least 10Gi of free space
  • is at least ReadWriteOnce capable.
    Other modes exist, e.g., ReadOnlyMany or ReadWriteMany, but we don't need those here because we only want to run one Wave instance.

Checking a few seconds later with kubectl get pvc wave-files shows that the PVC's state is Bound.
We can now add this volume to our wave container:
# [...]
spec:
  containers:
    - name: wave
      # [...]
      volumeMounts:
        - name: wave-files
          mountPath: /app/files
  volumes:
    - name: wave-files
      persistentVolumeClaim:
        claimName: wave-files

Our deployment now looks like this:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wave-deployment
  labels:
    app: wave
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wave
  template:
    metadata:
      labels:
        app: wave
    spec:
      containers:
        - name: wave
          image: docker.io/miawinter/wave:alpha-13
          ports:
            - containerPort: 8080
          env:
            - name: WAVE_ConnectionStrings__DefaultConnection
              value: "Host=postgres-postgresql; Username=wave; Password=wavepw12345"
            - name: WAVE_ConnectionStrings__Redis
              value: redis-master,password=wave-redis-pw-12345
          volumeMounts:
            - name: wave-files
              mountPath: /app/files
              subPath: files/
      volumes:
        - name: wave-files
          persistentVolumeClaim:
            claimName: wave-files
      securityContext:
        # Wave runs as this user, ensure the fs is writable by it
        fsGroup: 1654
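
With the deployment as it stands, Kubernetes only knows whether the container process is running, not whether Wave is actually healthy. A hedged sketch of liveness and readiness probes one could add to the wave container spec; I have not verified whether Wave exposes a dedicated health endpoint, so this simply probes / on port 8080:

# [...]
spec:
  containers:
    - name: wave
      # [...]
      livenessProbe:
        httpGet:
          path: /
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 30
      readinessProbe:
        httpGet:
          path: /
          port: 8080
        periodSeconds: 10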

Only one more thing is missing (well, technically two):
we need to define an Ingress[4] and a matching Service.
Let's start with the Service:

---
apiVersion: v1
kind: Service
metadata:
  name: wave
spec:
  selector:
    app: wave
  ports:
    - name: wave-http
      protocol: TCP
      port: 8080

We basically tell Kubernetes that we want to reach Wave under the DNS name wave, and that requests to port 8080 should be forwarded to the wave container's port 8080; the selector is how the Service finds the wave pods[5].
Within Kubernetes, Wave is now accessible as http://wave:8080/.
We can test this by using kubectl's port-forward option.

$ kubectl port-forward services/wave 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

Wave should now be accessible on localhost at port 8080.

Since this works, the last thing needed is to expose Wave properly with TLS.
For this, we use an Ingress[4]. I don't want to go into detail on how ingresses work, but the gist is:
everything that arrives with the Host header set to wave.example.com gets routed to the wave Service. If the wave container is not available, the ingress presents an error page.
We also instruct the ingress to enable TLS for wave.example.com, using the letsencrypt ACME certificate issuer. This does basically the same thing certbot would do for a normal reverse proxy, but fully automated.
This of course requires proper DNS records to be set up already.

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wave-ingress
  annotations:
    # Replace this with a production issuer once you've tested it
    cert-manager.io/issuer: letsencrypt
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
    - host: "wave.example.com"
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: wave
                port:
                  number: 8080
  tls:
    - hosts:
        - wave.example.com
      secretName: wave-ingress-tls

Shortly after applying this, wave should be accessible on https://wave.example.com
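
Note that the cert-manager.io/issuer: letsencrypt annotation assumes cert-manager is installed in the cluster and that an Issuer named letsencrypt exists in the wave-dev namespace. A minimal sketch of such an ACME issuer; the email address, the account-key secret name, and the nginx solver class are placeholders you will want to adapt:

---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: nginx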

Create an account. To elevate it to the admin role, check the logs of the wave pod using kubectl logs deployments/wave-deployment. There should be a line like: There is currently no user in your installation with the admin role, go to /Admin and use the following password to self promote your account: [password]
Enter that password on https://wave.example.com/admin to promote yourself to admin.

You're done now: Wave should be running in Kubernetes.

[1]: A three node cluster deployed with kubespray
[2]: A pod is a collection of one or more containers.
[3]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
[4]: Basically a reverse proxy on steroids. Even does TLS without any further configuration needed
[5]: How the Service finds the wave pods.

About the Author

4censord

4censord studies computer science in Germany.

Mia Rose Winter (Reviewer)
