Overview
Standard ZooKeeper pods in Kubernetes write their logging to STDOUT, i.e. to the console. This article covers how to switch to traditional filesystem logging and how to enable DEBUG-level logging for ZooKeeper running on Kubernetes.
Relevant Versions, Tools and Integrations
All versions of Dremio
Steps to Resolve
The Apache ZooKeeper 3.8.x Docker image uses SLF4J with a logback.xml file to configure logging.
By default, log output goes to the CONSOLE appender, which you can view normally by running kubectl logs zk-0. However, this does not help with capturing historic logging or, for example, switching to DEBUG.
It is possible to configure alternate logging modes and locations using a Kubernetes ConfigMap. We already do this via the Helm charts to inject the dremio-master-0 and executor pod configurations.
To define additional logging for the ZooKeeper pods:
1. Create a ZooKeeper ConfigMap containing the logback.xml settings you wish to use. It should be created in the Helm chart dremio_v2/templates directory. For example, create a file zoo-configmap in that location with the following logback settings:
apiVersion: v1
kind: ConfigMap
metadata:
  name: zookeeper-config
  labels:
    app.kubernetes.io/managed-by: Helm
data:
  logback.xml: |-
    <configuration>
      <property name="zookeeper.console.threshold" value="INFO" />
      <property name="zookeeper.log.dir" value="/logs" />
      <property name="zookeeper.log.file" value="zookeeper.log" />
      <property name="zookeeper.log.threshold" value="DEBUG" />
      <property name="zookeeper.log.maxfilesize" value="256MB" />
      <property name="zookeeper.log.maxbackupindex" value="20" />

      <!--
        console
        Add "CONSOLE" to the root logger if you want to use this
      -->
      <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
          <pattern>%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
          <level>${zookeeper.console.threshold}</level>
        </filter>
      </appender>

      <!--
        Add "ROLLINGFILE" to the root logger to get log file output
      -->
      <appender name="ROLLINGFILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>${zookeeper.log.dir}/${zookeeper.log.file}</File>
        <encoder>
          <pattern>%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
          <level>${zookeeper.log.threshold}</level>
        </filter>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
          <maxIndex>${zookeeper.log.maxbackupindex}</maxIndex>
          <FileNamePattern>${zookeeper.log.dir}/${zookeeper.log.file}.%i</FileNamePattern>
        </rollingPolicy>
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
          <MaxFileSize>${zookeeper.log.maxfilesize}</MaxFileSize>
        </triggeringPolicy>
      </appender>

      <!--
        Add "TRACEFILE" to the root logger to get log file output
        Logs TRACE level and above messages to a log file
      -->
      <!--property name="zookeeper.tracelog.dir" value="${zookeeper.log.dir}" />
      <property name="zookeeper.tracelog.file" value="zookeeper_trace.log" />
      <appender name="TRACEFILE" class="ch.qos.logback.core.FileAppender">
        <File>${zookeeper.tracelog.dir}/${zookeeper.tracelog.file}</File>
        <encoder>
          <pattern>%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
          <level>TRACE</level>
        </filter>
      </appender-->

      <!--
        zk audit logging
      -->
      <!--property name="zookeeper.auditlog.file" value="zookeeper_audit.log" />
      <property name="zookeeper.auditlog.threshold" value="INFO" />
      <property name="audit.logger" value="INFO, RFAAUDIT" />
      <appender name="RFAAUDIT" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>${zookeeper.log.dir}/${zookeeper.auditlog.file}</File>
        <encoder>
          <pattern>%d{ISO8601} %p %c{2}: %m%n</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
          <level>${zookeeper.auditlog.threshold}</level>
        </filter>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
          <maxIndex>10</maxIndex>
          <FileNamePattern>${zookeeper.log.dir}/${zookeeper.auditlog.file}.%i</FileNamePattern>
        </rollingPolicy>
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
          <MaxFileSize>10MB</MaxFileSize>
        </triggeringPolicy>
      </appender>
      <logger name="org.apache.zookeeper.audit.Slf4jAuditLogger" additivity="false" level="${audit.logger}">
        <appender-ref ref="RFAAUDIT" />
      </logger-->

      <root level="DEBUG">
        <appender-ref ref="ROLLINGFILE" />
      </root>
    </configuration>
The property block at the top of the configuration defines a useful set of parameters that propagate through the various appender types, with the active appender selected in the final root logger block.
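For example, to keep console output alongside the rolling file, reference both appenders in the root logger (a minimal sketch using the appender names defined above):

<root level="DEBUG">
  <appender-ref ref="CONSOLE" />
  <appender-ref ref="ROLLINGFILE" />
</root>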
2. Edit the zookeeper.yaml template in the same location, and add the ConfigMap details to the volumeMounts and volumes definitions. Because you are overlaying logback.xml onto the existing configuration directory rather than mounting a new location (as with dremio/conf on other pods), use subPath so that only that specific file is mounted. For example:
volumeMounts:
  - name: datadir
    mountPath: /data
  - name: zoo-logback
    mountPath: /conf/logback.xml
    subPath: logback.xml
volumes:
  - name: zoo-logback
    configMap:
      name: zookeeper-config
Note that the configMap name in the volumes definition must match the name defined in the ConfigMap metadata.
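Once deployed, you can confirm the overlay took effect by reading the mounted file directly (this assumes the first ZooKeeper pod is named zk-0, as in the examples above):

kubectl exec zk-0 -- cat /conf/logback.xml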
3. Once you have completed the definitions, add them to your cluster. To do so, simply run your normal helm upgrade command; this will import the ConfigMap and set up the new logback configuration.
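For example (the release name dremio and values file here are placeholders; substitute your own), followed by a quick check that DEBUG output is landing in the /logs directory configured above:

helm upgrade dremio dremio_v2 -f values.yaml
kubectl exec zk-0 -- tail /logs/zookeeper.log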
Once you have file logging enabled, you can update the log level by editing the ConfigMap in place, i.e.:
kubectl edit configmap zookeeper-config -o yaml
...and then changing any of the log settings. Delete each ZooKeeper pod in turn (see the sketch below) and it will pick up the new configuration on restart. There is little point in enabling logback's dynamic configuration reload on Kubernetes pods, as you cannot edit the logback.xml directly on the pod.
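The rolling restart can be scripted; this sketch assumes a three-node ensemble with pods zk-0 through zk-2:

for i in 0 1 2; do
  kubectl delete pod zk-$i
  # give the StatefulSet controller a moment to recreate the pod
  sleep 10
  kubectl wait --for=condition=Ready pod/zk-$i --timeout=120s
done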
Bear in mind that the Helm chart itself is not updated when you edit the ConfigMap this way, so any subsequent helm deploy will overwrite those settings; to make the change permanent, edit the logback settings in the chart's zoo-configmap template and upgrade again.