Introduction: In cloud-native architectures, Kubernetes has become the go-to platform for managing and orchestrating containerised applications. One critical element in optimising cloud-native applications is the use of optimised binaries: they improve performance, reduce deployment times, and minimise resource consumption. In this post, we’ll explore the benefits of using precompiled binaries in Kubernetes, why compressing them with tools like UPX is essential, and how you can implement these optimisations in Go and in toolchains like GraalVM.
Why Use Binaries in Cloud-Native Applications?
1. Enhanced Performance with Faster Execution
Binaries are precompiled programs that run natively on the host operating system and CPU. Unlike applications written in interpreted languages such as Python, or JIT-compiled languages such as Java, a native binary needs no interpreter or warm-up phase, so it starts and executes faster. In Kubernetes environments, where containers are created, destroyed, and scaled dynamically, speed is crucial; using binaries ensures pods start quickly, improving the overall responsiveness of your system.
2. Reduce Overhead with Minimal Dependencies
In traditional applications, external libraries and dependencies introduce overhead. In contrast, statically compiled binaries bundle all necessary dependencies into a single file, eliminating the need to install large runtimes within the container. This reduces complexity and leads to lighter, more efficient containers.
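As a concrete illustration of static compilation in Go, a service can be built with cgo disabled and debug information stripped. The flags below are a common pattern and an assumption on our part, not necessarily the project’s actual build command:
# Build a statically linked, stripped Go binary (illustrative flags).
# CGO_ENABLED=0 avoids linking against the system C library,
# and -ldflags="-s -w" strips symbol and debug tables to shrink the file.
CGO_ENABLED=0 go build -ldflags="-s -w" -o user-service
# Verify that the binary has no runtime library dependencies.
ldd ./user-service   # expected output: "not a dynamic executable"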
3. Smaller Attack Surface
Binaries contain only the code needed to run the application, leading to less bloat and fewer vulnerabilities. This makes the container more secure by reducing the number of potential attack vectors.
4. Significantly Smaller Docker Images
By compiling your application into a single executable, you eliminate the need for large runtime environments, resulting in smaller Docker images. Smaller images mean faster deployment times, lower storage costs, and quicker transfers between registries.
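To illustrate the idea, here is a minimal multi-stage Dockerfile sketch that copies a fully static binary into an empty (scratch) base image. It is a generic example that assumes a pure-Go, cgo-free service, not the user-service Dockerfile discussed later in this post:
# Stage 1: build a static, stripped binary (assumes no cgo dependencies).
FROM golang:1.20.0 AS builder
WORKDIR /app
COPY . ./
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o app
# Stage 2: an empty base image keeps nothing but the executable itself.
# (Real services may also need CA certificates or timezone data copied in.)
FROM scratch
COPY --from=builder /app/app /app
ENTRYPOINT ["/app"]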
Why Compress Binaries?
Compressing binaries, especially in resource-constrained environments like Kubernetes, can have a huge impact:
- Smaller Containers: Compressing the binary helps to reduce the size of your container images further. This is particularly useful in scenarios where container images are pulled from remote registries multiple times a day.
- Faster Deployment: Smaller binaries translate to faster deployments. In Kubernetes, when you deploy or scale your application, smaller images are quicker to pull from the container registry, reducing overall startup times for your pods.
- Optimized Resource Usage: In cloud environments, where you are billed for the resources you use, keeping your application’s footprint small helps optimize memory and CPU usage, leading to lower operational costs.
- No Performance Penalty: Tools like UPX (Ultimate Packer for Executables) can compress binaries without incurring any significant performance penalty at runtime. The decompression happens in memory, ensuring that the compressed binary behaves just like the uncompressed one (see the short example after this list).
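As a rough sketch of what compressing a binary looks like in practice (the flag choices and binary name are illustrative, not the exact commands used in the Dockerfile later in this post):
# Compress an existing binary in place (higher ratios are possible with --ultra-brute, at the cost of build time).
upx --best --lzma ./user-service
# Test that the packed binary unpacks correctly, then check the new size.
upx -t ./user-service
ls -lh ./user-service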
Performance Optimisation Comparison: Current Approach vs UPX Compression for the Dockerised user-service
When building Docker images for Go applications, one critical aspect is optimising the container’s size and performance. Below, we compare the current (non-compressed) approach with the UPX-compressed approach for the user-service Docker container. This optimisation can improve deployment speed and resource efficiency, particularly in Kubernetes environments.
1. Current Approach: Non-Compressed Binary
Dockerfile for the Non-Compressed Binary Approach:
# Use the official golang image to create a binary.
FROM golang:1.20.0 as builder
# Create and change to the app directory.
WORKDIR /app
# Retrieve application dependencies.
COPY go.mod go.sum ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN go build -v -o user-service
# Use the official Red Hat Universal Base Image (UBI) 9 for a lean production container.
FROM registry.access.redhat.com/ubi9/ubi-minimal
WORKDIR /app
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/user-service ./user-service
# Copy template file
COPY --from=builder /app/pkg/data/template.html ./pkg/data/template.html
# Run the web service on container startup.
CMD ["./user-service"]
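To reproduce the size figure below, the image can be built and inspected with standard Docker commands (the tag is illustrative):
# Build the non-compressed variant and check the resulting image size.
docker build -t user-service:0.0.1 .
docker images user-service:0.0.1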
Results for Version 0.0.1 (Non-Compressed)
- Image Size: 201 MB (as seen in the screenshot).
- Startup Time: Fast, although the time spent pulling the 201 MB image leaves room for improvement.
- Deployment Speed: Requires pulling a larger image (201 MB), which can result in slower deployments in environments where the image needs to be frequently pulled, like Kubernetes.
2. Optimized Approach: UPX Compression Applied to the Binary
Dockerfile for the UPX Compressed Binary Approach:
# Use the official golang image to create a binary.
FROM golang:1.20.0 as builder
# Set the UPX version
ARG upx_version=4.2.4
# Install xz-utils, download, and set up UPX
RUN apt-get update && apt-get install -y --no-install-recommends xz-utils && \
curl -Ls https://github.com/upx/upx/releases/download/v${upx_version}/upx-${upx_version}-amd64_linux.tar.xz -o - | tar xvJf - -C /tmp && \
cp /tmp/upx-${upx_version}-amd64_linux/upx /usr/local/bin/ && \
chmod +x /usr/local/bin/upx && \
apt-get remove -y xz-utils && \
rm -rf /var/lib/apt/lists/*
# Create and change to the app directory.
WORKDIR /app
# Retrieve application dependencies.
COPY go.mod go.sum ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN go build -v -o user-service
# Compress the binary using UPX
RUN upx --ultra-brute -qq ./user-service && upx -t ./user-service
# Use the official Red Hat Universal Base Image (UBI) 9 for a lean production container.
FROM registry.access.redhat.com/ubi9/ubi-minimal
WORKDIR /app
# Copy the compressed binary to the production image from the builder stage.
COPY --from=builder /app/user-service ./user-service
# Copy template file
COPY --from=builder /app/pkg/data/template.html ./pkg/data/template.html
# Run the web service on container startup.
CMD ["./user-service"]
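Building the compressed variant the same way makes the difference easy to see side by side (tags are illustrative):
# Build the UPX-compressed variant and list both tags to compare their sizes.
docker build -t user-service:0.0.2 .
docker images user-service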
Results for Version 0.0.2 (Compressed with UPX)
- Image Size: 133 MB (as seen in the screenshot).
- Startup Time: Pod startup can be slightly faster in environments where the image must first be pulled, since the smaller image downloads more quickly; the in-memory decompression UPX performs at launch adds only negligible overhead.
- Deployment Speed: The compressed image is 133 MB, reducing the image size by ~68 MB compared to the non-compressed version. This leads to faster deployments, especially in scenarios where container images are frequently pulled, such as in Kubernetes clusters with auto-scaling.
Performance and Optimization Benefits:
Metric | Non-Compressed Binary (v0.0.1) | UPX Compressed Binary (v0.0.2) |
---|---|---|
Image Size | 201 MB | 133 MB |
Binary Size | Uncompressed (contributes to the 201 MB image) | UPX-compressed (image shrinks by ~68 MB) |
Startup Time | Standard startup | Slight improvement due to smaller image |
Deployment Speed | Slower due to larger image | Faster due to smaller image size |
Resource Utilization | Higher memory and CPU utilization | Lower resource consumption due to smaller binary footprint |
Compression Overhead | No compression | Minimal decompression overhead at runtime (handled in memory) |
Conclusion
In cloud-native environments like Kubernetes, optimised binaries are essential for reducing deployment times, improving startup speed, and minimising resource consumption. By using tools like UPX to compress binaries, developers can achieve significant reductions in Docker image size while maintaining high performance. Whether you’re using Go, GraalVM, or other toolchains, these optimisations are key to building lean, efficient cloud-native applications.