Deep Dive into policy logging
Policies are regular programs. As such, they often need to log information. In general, we are used to making our programs log to standard output (stdout) and standard error (stderr).
However, policies run in a confined WebAssembly environment. For this mechanism to work as usual, Kubewarden would need to set up the runtime environment so that the policy can write to the stdout and stderr file descriptors, and upon completion, Kubewarden could check them – or stream log messages as they pop up.
Given that Kubewarden uses waPC to allow intercommunication between the guest (the policy) and the host (Kubewarden – the policy-server, or kwctl if we are running policies manually), we have extended our language SDKs so that they can log messages by using waPC internally.
Kubewarden has defined a contract between policies (guests) and the host (Kubewarden) for performing policy settings validation, policy validation, policy mutation and, now, logging.
The waPC interface used for logging is part of that contract: once you have built a policy, it should remain possible to run it on future Kubewarden versions. Kubewarden keeps this contract behind the SDK of your preferred language, so you don't have to deal with the details of how logging is implemented in Kubewarden. You just use the logging library of choice for the language you are working with.
Let’s look into how to take advantage of logging with Kubewarden in specific languages!
For Policy Authors
Go
We are going to use the Go policy template as a starting point.
Our Go SDK provides integration with the onelog library. When our policy is built for the WebAssembly target, it will send the logs to the host through waPC. Otherwise, it will just print them on stderr – but this is only relevant if you happen to run your policy outside a Kubewarden runtime environment.
One of the first things our policy does in its main.go file is to initialize the logger:
var (
    logWriter = kubewarden.KubewardenLogWriter{}
    logger    = onelog.New(
        &logWriter,
        onelog.ALL, // shortcut for onelog.DEBUG|onelog.INFO|onelog.WARN|onelog.ERROR|onelog.FATAL
    )
)
We are then able to use the onelog API to produce log messages. We could, for example, perform structured logging at debug level:
logger.DebugWithFields("validating object", func(e onelog.Entry) {
    e.String("name", gjson.GetBytes(payload, "request.object.metadata.name").String())
    e.String("namespace", gjson.GetBytes(payload, "request.object.metadata.namespace").String())
})
Or, at info level:
logger.InfoWithFields("validating object", func(e onelog.Entry) {
    e.String("name", gjson.GetBytes(payload, "request.object.metadata.name").String())
    e.String("namespace", gjson.GetBytes(payload, "request.object.metadata.namespace").String())
})
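Putting the pieces together, the validate entry point could log before accepting the request. A minimal sketch, assuming the usual policy template layout and the kubewarden.AcceptRequest helper from the Go SDK:

func validate(payload []byte) ([]byte, error) {
    // log a structured event describing the object under validation
    logger.InfoWithFields("validating object", func(e onelog.Entry) {
        e.String("name", gjson.GetBytes(payload, "request.object.metadata.name").String())
    })

    // accept the request; the log events above have already been
    // handed over to the host at this point
    return kubewarden.AcceptRequest()
}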
What happens under the covers is that our Go SDK sends every log event to the Kubewarden host through waPC.
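For the curious, this is roughly what the log writer does on the WebAssembly target. This is a simplified sketch, not the SDK's actual code; the binding, namespace and operation strings are internal details of the waPC contract and are shown here only for illustration:

import wapc "github.com/wapc/wapc-guest-tinygo"

// logWriterSketch forwards every buffer it receives to the host
// as a log event by performing a waPC host call
type logWriterSketch struct{}

func (w *logWriterSketch) Write(p []byte) (int, error) {
    // "kubewarden"/"tracing"/"log" mirror what the SDK uses internally;
    // treat them as an implementation detail that may change
    if _, err := wapc.HostCall("kubewarden", "tracing", "log", p); err != nil {
        return 0, err
    }
    return len(p), nil
}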
Rust
Let’s use the Rust policy template as our guide.
Our Rust SDK implements an integration with the slog crate. This crate exposes the concept of drains, so we have to define a global drain that we will use throughout our policy code:
use kubewarden::logging;
use slog::{o, Logger};

lazy_static! {
    static ref LOG_DRAIN: Logger = Logger::root(
        logging::KubewardenDrain::new(),
        o!("some-key" => "some-value") // this key-value pair will be shared by
                                       // all logging events that use this logger
    );
}
Then, we can use the macros provided by slog to log at different levels:
use slog::{crit, debug, error, info, trace, warn};
Let's log an info-level message:
info!(
    LOG_DRAIN,
    "rejecting resource";
    "resource_name" => &resource_name
);
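For instance, a validate entry point modeled on the Rust policy template could log a structured event before accepting the request. A minimal sketch, assuming Settings is the policy's settings type and accept_request comes from the Kubewarden SDK (exact signatures may differ between SDK versions):

use guest::prelude::*; // from the wapc-guest crate, as in the policy template
use kubewarden::{accept_request, request::ValidationRequest};
use slog::info;

fn validate(payload: &[u8]) -> CallResult {
    let validation_request: ValidationRequest<Settings> = ValidationRequest::new(payload)?;

    // the structured key-value pairs travel to the host along with the message
    info!(
        LOG_DRAIN,
        "validating object";
        "name" => validation_request.request.name.as_str()
    );

    accept_request()
}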
As happens with the Go SDK implementation, our Rust implementation of the slog drain sends these logging events to the host by using waPC.
You can read more about slog here.
Swift
We will be looking at the Swift policy template for this example.
As happens with the Go and Rust SDKs, the Swift SDK is instrumented to use Swift's LogHandler from the swift-log project, so our policy only has to initialize it. In our Sources/Policy/main.swift file:
import kubewardenSdk
import Logging
LoggingSystem.bootstrap(PolicyLogHandler.init)
Then, in our policy business logic, under Sources/BusinessLogic/validate.swift, we are able to log at different levels:
import Logging

public func validate(payload: String) -> String {
    // ...
    logger.info("validating object",
        metadata: [
            "some-key": "some-value",
        ])
    // ...
}
Following the same strategy as the Go and Rust SDKs, the Swift SDK is able to push log events to the host through waPC.
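Because the handler is installed through LoggingSystem.bootstrap, every swift-log level goes through the same path. A small sketch, with an illustrative logger label and metadata:

import Logging

let logger = Logger(label: "my-policy")

// debug-level event with structured metadata
logger.debug("parsing request",
    metadata: [
        "operation": "CREATE",
    ])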
For Cluster Administrators
Being able to log from within a policy is half of the story; the other half is being able to read, and potentially collect, these logs.
As we have seen, Kubewarden policies support structured logging, which is forwarded to the component running the policy. Usually, this is kwctl if you are executing the policy manually, or policy-server if the policy is being run in a Kubernetes environment.
Both kwctl and policy-server use the tracing crate to produce log events: both the events produced by the application itself and those produced by the policies running in their WebAssembly runtime environments.
kwctl
The kwctl CLI tool takes a very straightforward approach to logging from policies: it will print them to the standard error file descriptor.
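This makes it easy to keep policy logs apart from the validation response when running a policy by hand. A sketch, where the file names are illustrative; check kwctl run --help for the exact flags of your version:

# the validation response goes to stdout; policy logs go to stderr
kwctl run --request-path request.json annotated-policy.wasm \
    > response.json \
    2> policy-logs.txt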
policy-server
The policy-server supports different log formats: json, text and otlp.
otlp? I hear you ask. It stands for OpenTelemetry Protocol. We will look into that in a bit.
If the policy-server is run with the --log-fmt argument set to json or text, the output will be printed to the standard error file descriptor in JSON or plain text format. These messages can be read using kubectl logs <policy-server-pod>.
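With the json format, each line is a self-contained JSON object, so the stream can be filtered with standard tooling. For instance, assuming Kubewarden is deployed in the kubewarden namespace:

# pretty-print the JSON log stream of a policy-server pod
kubectl logs --namespace kubewarden <policy-server-pod> | jq .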
If --log-fmt is set to otlp, the policy-server will use OpenTelemetry to report logs and traces.
OpenTelemetry
Kubewarden is instrumented with OpenTelemetry, so it's possible for the policy-server to send trace events to an OpenTelemetry collector by using the OpenTelemetry Protocol (otlp).
Our official Kubewarden Helm Chart has certain values that allow you to deploy Kubewarden with OpenTelemetry support, reporting logs and traces to, for example, a Jaeger instance:
telemetry:
  enabled: True
  tracing:
    jaeger:
      endpoint: "all-in-one-collector.jaeger.svc.cluster.local:14250"
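These values could then be applied when installing or upgrading the chart. A sketch, assuming the chart repository has been added under the name kubewarden and the values above live in telemetry-values.yaml:

helm upgrade --install kubewarden-controller kubewarden/kubewarden-controller \
    --values telemetry-values.yaml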
This functionality closes the gap on logging and tracing, given the freedom that the OpenTelemetry collector provides in terms of what to do with these logs and traces.
You can read more about Kubewarden’s integration with OpenTelemetry in our documentation.
But this is a big enough topic on its own to deserve a future blog post. Stay logged!