As I briefly mentioned in the previous post, I have migrated most apps I care about to my own server with a small Kubernetes "cluster" (since when do we call single nodes clusters?!) deployed using k3s. The entire cluster is managed using Pulumi with Besom. Prior to this I had very limited exposure to Kubernetes and managing the server by myself has already been a pain in the ass, but I've learned a lot.
Those apps previously lived on Fly.io, and it was quite convenient to use the flyctl
CLI to deploy the apps as part of their normal publishing Github Actions workflow. With the migration to my own server such niceties are no longer available, and I had to figure out how to trigger restarts automatically.
I could, of course, spend twenty minutes setting up one of the existing solutions for this exact problem (such as Keel), or I could spend two days fucking about with Kubernetes permissions and Github webhooks.
This post is ~1500 words long, so guess which one I went for.
TLDR
- Write and deploy a Github webhook that restarts Kubernetes deployments when a package is published to the GH registry
- Code on Github
Restarting in Kubernetes
Prior to this automation, I would just run kubectl rollout restart deployment <name> by hand, like a caveman. Contrary to existing advice, I use the :latest tag in my images and an Always image pull policy, so restarting a deployment forces a new image to be pulled if one is available. This means that releasing a new application version requires a single manual step – restarting the appropriate deployment. But even one manual step is too many manual steps, so I started looking into how to automate the restarts using the Kubernetes API.
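For reference, the relevant bit of the container definition boils down to two fields – here is a hypothetical Besom snippet (the image name is made up) showing the combination:
// Hypothetical container definition – the point is the :latest tag plus Always pull policy.
ContainerArgs(
  name = "my-app",
  image = "ghcr.io/my-org/my-app:latest",
  imagePullPolicy = "Always"
)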
Looking at the API spec with my own eyes was difficult (the spec website is objectively unusable, compared to even the free version of Swagger) and did not yield any results – there is no dedicated API for restarting deployments! At this point I was ready to give up and do the smart thing – configure someone else's superior solution to do it for me. But then I stumbled upon this excellent StackOverflow answer, which I reproduce here:
If you dig around in the kubectl source you can eventually find (k8s.io/kubectl/pkg/polymorphichelpers).defaultObjectRestarter. All that does is change an annotation:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: '2006-01-02T15:04:05Z'
Anything that changes a property of the embedded pod spec in the deployment object will cause a restart; there isn't a specific API call to do it.
It is actually that simple! With this crucial piece of the puzzle in mind, the rest of the plan starts to take shape:
- Figure out how to react to Github events
- Figure out how to patch the deployment through kubernetes API
- Write a service that will do all of it
- Figure out how to deploy the service correctly to the kubernetes cluster
Github webhooks
Thankfully this area is well-trodden ground – reacting to Github events is fundamental to CI/CD processes and to the various apps that provide services around code authoring and testing.
For any event that Github considers worth recording, it can send a payload in an HTTP POST request to an endpoint you configure in the repository/organisation settings. As far as tech goes, webhooks are pretty ancient, but very efficient and ubiquitous.
Validating payloads
As the webhook endpoint has to be accessible from the wider internet, it's important to configure the webhook with a secret, so that the application can validate deliveries and make sure they come from Github and not from a malicious actor.
Github documentation has a good page about exactly how the validation is performed, but in short: we need to compute the HMAC hex digest of the entire payload we receive and compare it to the digest sent by Github in the X-Hub-Signature-256 header. Despite there being no example for Java or any JVM language, computing the hash is simple with built-in JDK classes – the important part is rendering the hex digest correctly:
object Crypto:
  import javax.crypto.Mac
  import javax.crypto.spec.SecretKeySpec

  def hash(key: String, bytes: Array[Byte]) =
    val secretKeySpec = SecretKeySpec(key.getBytes(), "HmacSHA256")
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(secretKeySpec)
    bytesToHex(mac.doFinal(bytes))
  end hash

  def bytesToHex(hash: Array[Byte]): String =
    val hexString = new StringBuilder(2 * hash.length)
    for i <- hash.indices do
      val hex = Integer.toHexString(0xff & hash(i))
      if hex.length() == 1 then hexString.append('0')
      hexString.append(hex)
    hexString.toString()
  end bytesToHex
end Crypto
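As a sanity check, usage looks like this (the secret and payload here are made up, just to show the shape of the result):
// The digest is a lowercase hex string; Github sends the same value in the
// X-Hub-Signature-256 header, prefixed with "sha256=".
val digest = Crypto.hash("my-webhook-secret", """{"action":"published"}""".getBytes())
// digest: String, 64 hex characters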
Package published event
I publish all of my apps using self-contained multi-stage docker builds to Github Container Registry at https://ghcr.io. Every time a package is published, Github will send an event with this rough structure:
{
  "action": "published",
  "package": {
    "namespace": "<string>",
    "name": "<string>",
    // lots of data here
  },
  // lots of data here
}
For my purposes, that is all the data I am interested in – matching package names against a hardcoded list of Kubernetes labels seems like a simple enough first version. In the future I would love to look up the deployments that contain this exact image in their pod spec and only restart those.
Using Circe, the data we need can be extracted using the following case classes:
case class Package(name: String, namespace: String)
    derives io.circe.Codec.AsObject

case class PublishedEvent(action: String, `package`: Option[Package])
    derives io.circe.Codec.AsObject
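As a quick illustration (not part of the real service), decoding a trimmed-down payload picks out just the fields we care about and ignores everything else:
val sample =
  """{"action": "published", "package": {"namespace": "my-org", "name": "my-app"}}"""

// Unknown fields in the real payload are simply ignored by the decoder.
io.circe.jawn.decode[PublishedEvent](sample)
// Right(PublishedEvent("published", Some(Package("my-app", "my-org"))))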
Using http4s terminology, we can start our /webhook endpoint by collecting the body of the request and attempting to parse it as JSON:
case req @ POST -> Root / "webhook" =>
  req.body
    // Limit payload to 50KB
    .take(50 * 1024)
    .compile
    .toVector
    .map(_.toArray)
    .flatMap(bytes =>
      IO
        .fromEither(io.circe.jawn.parseByteArray(bytes))
        .map(bytes -> _)
    )
Then, using the hash function we defined above, let's compute the signature:
  .flatMap: (bytes, json) =>
    val digest = Crypto.hash(cli.secret, bytes)
    val header = req.headers
      .get(CIString("X-Hub-Signature-256"))
      .map(_.head.value)
If the header contains a valid signature, we can attempt to decode the event as PublishedEvent and invoke the handlePackage(pkg: Package): IO[Response[IO]] function that will do all the heavy lifting:
    if !header.contains("sha256=" + digest) then
      BadRequest("invalid signature")
    else
      IO.fromEither(json.as[PublishedEvent])
        .flatMap:
          case PublishedEvent("published", Some(pkg)) =>
            handlePackage(pkg)
          case PublishedEvent(other, pkg) =>
            info(s"ignoring $other, $pkg event") *> NoContent()
    end if
Development experience
The best tool for webhook development is ngrok – it tunnels requests from a stable public endpoint to a web service running locally on your machine. The public endpoint stays up even when you restart your local service, so the webhook remains reachable globally while you iterate.
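If you haven't used it before, pointing a tunnel at a locally running service is a single command (assuming the service listens on port 8080):
ngrok http 8080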
Kubernetes API
Through trial and error, I first achieved what I wanted using a simple curl request. To avoid dealing with authentication and certificates, I used the proxy subcommand of kubectl, which handles all the communication with the Kubernetes API server. Assuming the proxy runs on http://localhost:8001, restarting a deployment using the technique from the StackOverflow answer looks like this (the name of the deployment is the last segment in the URL path):
curl -XPATCH \
  http://localhost:8001/apis/apps/v1/namespaces/default/deployments/sn-bindgen-web-worker-cae19abe \
  -H "Content-Type: application/strategic-merge-patch+json" \
  --json '{"spec": {"template": {"metadata": {"annotations": {"kubectl.kubernetes.io/restartedAt": "2025-10-09T01:31:47Z"}}}}}'
In http4s and Circe terms, assuming that we have a client: Client[IO] created, the JSON patch (using a Circe json literal) looks like this:
def restartDeployment(namespace: String, name: String) =
  import io.circe.*, literal.*
  import java.time.{Instant, ZoneOffset}
  import java.time.format.DateTimeFormatter

  val annotation = "kubectl.kubernetes.io/restartedAt"
  val df = DateTimeFormatter.ISO_OFFSET_DATE_TIME
  val newValue = Instant
    .now()
    .atOffset(ZoneOffset.UTC)
    .format(df)

  val patch =
    json"""
      {
        "spec": {
          "template": {
            "metadata": {
              "annotations": {
                $annotation: $newValue
              }
            }
          }
        }
      }
    """
And to actually send the request with the correct method and content type, we use the raw Request[IO] API:
import org.http4s.headers.`Content-Type`

val request = Request[IO]()
  .withMethod(Method.PATCH)
  .withEntity(patch.noSpaces)
  .withContentType(
    `Content-Type`(
      MediaType.unsafeParse("application/strategic-merge-patch+json")
    )
  )
  .withUri(
    url / "apis" / "apps" / "v1" / "namespaces" / namespace / "deployments" / name
  )

client
  .run(request)
  .use(resp => resp.bodyText.compile.string.flatMap(Log.info(_)))
I will omit the details of looking up the deployment based on package name – the full code is available in the repository.
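That said, to make the shape of the service concrete, here is a minimal sketch of what handlePackage can look like with the hardcoded mapping mentioned earlier – all the names here are hypothetical, the real version lives in the repository:
// Hypothetical mapping from published package name to (namespace, deployment name).
val deployments = Map(
  "my-app"        -> ("default", "my-app"),
  "my-app-worker" -> ("default", "my-app-worker")
)

def handlePackage(pkg: Package): IO[Response[IO]] =
  deployments.get(pkg.name) match
    case Some((namespace, deployment)) =>
      restartDeployment(namespace, deployment) *> NoContent()
    case None =>
      info(s"no deployment configured for ${pkg.name}") *> NoContent()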
Kubernetes permissions
With the webhook running locally and correctly restarting the deployments, it was time to actually deploy it to my server. This is where things got a bit more confusing, as I had to sort out Kubernetes permissions to be able to access the API server.
After lots of web searches and consulting Claude the picture was a bit clearer:
- My pod needs to have its own service account
- The service account needs to have a role bound to it with the correct permissions for patching deployments
- The pod with the webhook service needs a sidecar container running kubectl proxy
As I use Besom for everything on the cluster, I will show the definitions in Scala – the Kubernetes docs have examples in pure YAML if you are interested.
First, let's create a Service Account:
val serviceAccount =
  ServiceAccount(
    "webhook-kube-deployer-service-account",
    args = ServiceAccountArgs(
      metadata = ObjectMetaArgs(
        labels = Map("app" -> "webhook-deployer")
      )
    )
  )
At this point it has no additional permissions, so let's create a Role specifically to host them:
val role = Role(
  "webhook-kube-deployer-role",
  args = RoleArgs(
    metadata = ObjectMetaArgs(labels = Map("app" -> "webhook-deployer")),
    rules = List(
      PolicyRuleArgs(
        apiGroups = List("apps"),
        resources = List("deployments"),
        verbs = List("update", "patch", "list", "get")
      ),
      PolicyRuleArgs(
        apiGroups = List(""),
        resources = List("pods"),
        verbs = List("list", "get")
      )
    )
  )
)
The most important part here is adding the ability to patch deployments.
To assign this particular role to our new service account, we need a RoleBinding:
val roleBinding = RoleBinding(
  "webhook-kube-deployer-role-binding",
  args = RoleBindingArgs(
    metadata = ObjectMetaArgs(labels = Map("app" -> "webhook-deployer")),
    roleRef = RoleRefArgs(
      apiGroup = "rbac.authorization.k8s.io",
      kind = "Role",
      name = role.metadata.name.getOrFail(???)
    ),
    subjects = List(
      SubjectArgs(
        kind = "ServiceAccount",
        name = serviceAccount.metadata.name.getOrFail(???),
        namespace = "default"
      )
    )
  )
)
Through roleRef and subjects we connect our previously defined role and serviceAccount.
Then we can use the name of the created service account in the serviceAccountName
field of the Pod spec.
Note that the service in its current form requires kubectl proxy to be running, which we can achieve by adding another container to the pod with our webhook deployer:
ContainerArgs(
  name = "kubectl-proxy",
  image = "bitnami/kubectl:latest",
  command =
    List("kubectl", "proxy", "--address=0.0.0.0", "--accept-hosts=^.*$")
)
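Putting it together, the pod spec for the webhook deployment then references both the service account and the sidecar – roughly like this, simplified and with a made-up image name (the Besom field names mirror the Kubernetes API):
PodSpecArgs(
  serviceAccountName = serviceAccount.metadata.name.getOrFail(???),
  containers = List(
    // The webhook service itself (hypothetical image name)
    ContainerArgs(
      name = "webhook-deployer",
      image = "ghcr.io/my-org/webhook-deployer:latest"
    ),
    // The kubectl proxy sidecar from above
    ContainerArgs(
      name = "kubectl-proxy",
      image = "bitnami/kubectl:latest",
      command =
        List("kubectl", "proxy", "--address=0.0.0.0", "--accept-hosts=^.*$")
    )
  )
)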
All that is left is to make sure that our service is correctly configured with regards to the address of the kubectl proxy and the webhook secret.
We'll add the secret to Pulumi config and create a ConfigMap containing the environment variables:
val webhookSecret = config.requireString("webhook_secret").asSecret

val webhookDeployerConfig = ConfigMap(
  "webhook-deployer-env",
  args = ConfigMapArgs(
    metadata = ObjectMetaArgs(
      labels = Map("app" -> "webhook-deployer")
    ),
    data = Map(
      "WEBHOOK_SECRET" -> webhookSecret,
      "KUBE_API" -> "http://localhost:8001"
    )
  )
)
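The webhook container can then pick these values up as environment variables via envFrom – a sketch, assuming Besom's generated EnvFromSourceArgs and ConfigMapEnvSourceArgs types, which mirror the Kubernetes API (the image name is again hypothetical):
ContainerArgs(
  name = "webhook-deployer",
  image = "ghcr.io/my-org/webhook-deployer:latest",
  envFrom = List(
    EnvFromSourceArgs(
      // Every key in the ConfigMap becomes an environment variable in the container
      configMapRef = ConfigMapEnvSourceArgs(
        name = webhookDeployerConfig.metadata.name.getOrFail(???)
      )
    )
  )
)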
For configuring my services using CLI parameters and/or environment variables, I usually use my decline-derive microlibrary:
import com.comcast.ip4s.* // Port/Host types and the port"…" / host"…" literals
import com.monovore.decline.Argument
import decline_derive.{CommandApplication, *}
import concurrent.duration.*

case class CLI(
    port: Port = port"8080",
    host: Host = host"localhost",
    @Env("WEBHOOK_SECRET", "")
    secret: String,
    @Env("KUBE_API", "")
    @Name("kube-api")
    kubeAPI: String = "http://localhost:8001",
    @Name("delay")
    delay: FiniteDuration = 10.seconds
) derives CommandApplication
And that's it! Once the webhook endpoint was added to my NGINX ingress configuration, I changed the URL from the ngrok one to my own and started receiving events from all my repositories, with deployments restarted when necessary.
Conclusion
This was a great opportunity to learn a bit more about Kubernetes and Github webhooks – in the end the process was smoother than I expected, with Kubernetes permissions being the only real thing I didn't anticipate and had to spend a bunch of time learning about.
In the future I will tweak the logic some more to avoid keeping a hardcoded mapping between packages and deployments.
I also need to introduce some back pressure into the system, as there are often multiple packages published in quick succession, and there's no point in restarting a deployment 5 times in 1 minute.
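One possible approach – purely a sketch, not what is in the repository – is to remember when each deployment was last restarted and drop restart requests that arrive within the configured delay window:
// Sketch: assumes restartDeployment from earlier returns an IO; tracks the last
// restart time per deployment in a Ref and skips restarts that arrive too soon.
def debouncedRestart(
    lastRestart: Ref[IO, Map[String, FiniteDuration]],
    delay: FiniteDuration
)(namespace: String, name: String): IO[Unit] =
  IO.monotonic.flatMap { now =>
    lastRestart.modify { timestamps =>
      timestamps.get(name) match
        case Some(previous) if now - previous < delay =>
          timestamps -> IO.unit // restarted recently, skip this one
        case _ =>
          timestamps.updated(name, now) -> restartDeployment(namespace, name).void
    }.flatten
  }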