OpenShift 4.5 audit logs: there are two, and the one identified is not correct

What happened:
Installed splunk-connect into an OKD 4.5 cluster
What you expected to happen: The following two audit files to be identified and sent to Splunk: `/var/log/kube-apiserver/audit.log` and `/var/log/openshift-apiserver/audit.log`
How to reproduce it (as minimally and precisely as possible): Happens 100% of the time.
Anything else we need to know?:
Environment:

- Kubernetes version (use `kubectl version`): OKD 4.5 is the upstream of OpenShift 4.5.

  ```
  $ oc version
  Client Version: 4.5.0-0.okd-2020-08-12-020541
  Server Version: 4.5.0-0.okd-2020-08-12-020541
  Kubernetes Version: v1.18.3
  ```

- Ruby version (use `ruby --version`):
- OS (e.g. `cat /etc/os-release`): OKD uses Fedora CoreOS, the upstream of Red Hat CoreOS.

  ```
  $ cat /etc/os-release
  NAME=Fedora
  VERSION="32.20200629.3.0 (CoreOS)"
  ID=fedora
  VERSION_ID=32
  VERSION_CODENAME=""
  PLATFORM_ID="platform:f32"
  PRETTY_NAME="Fedora CoreOS 32.20200629.3.0"
  ANSI_COLOR="0;34"
  LOGO=fedora-logo-icon
  CPE_NAME="cpe:/o:fedoraproject:fedora:32"
  HOME_URL="https://getfedora.org/coreos/"
  DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora-coreos/"
  SUPPORT_URL="https://github.com/coreos/fedora-coreos-tracker/"
  BUG_REPORT_URL="https://github.com/coreos/fedora-coreos-tracker/"
  REDHAT_BUGZILLA_PRODUCT="Fedora"
  REDHAT_BUGZILLA_PRODUCT_VERSION=32
  REDHAT_SUPPORT_PRODUCT="Fedora"
  REDHAT_SUPPORT_PRODUCT_VERSION=32
  PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
  VARIANT="CoreOS"
  VARIANT_ID=coreos
  OSTREE_VERSION='32.20200629.3.0'
  ```

- Splunk version: 8.0.5
- Others:
I am relaying the methodology and the findings in hopes that they can be incorporated into future releases of splunk-connect (perhaps specific to OpenShift?). NOTE: I find this release extremely useful and reliable.
However, I would like to relay a few points about the process.
- The audit log file path identified in the released values.yaml is incorrect for OpenShift; the file does not exist there.
- I needed to specify a metrics index to prevent an installation error, even though metrics collection is disabled and not being installed (see my values.yaml below).
DETAILS: In OpenShift 4.5, audit events are written to two different files on each of the three master nodes, and neither is the path specified in the released version of values.yaml:

`/var/log/kube-apiserver/audit.log`
`/var/log/openshift-apiserver/audit.log`
According to this values.yaml file: https://github.com/splunk/splunk-connect-for-kubernetes/blob/53d5b0c1e333ad04c16a556aedd137f15c95a630/helm-chart/splunk-connect-for-kubernetes/values.yaml#L321, the audit file is expected at `/var/log/kube-apiserver-audit.log`, but that file does not exist in OpenShift 4.5.
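The mismatch is easy to confirm on a master node's filesystem (e.g. after `oc debug node/<master>` and `chroot /host`). The check itself is plain shell; this is a minimal sketch that only reports presence or absence of each candidate path. On a 4.5 master, the values.yaml path should report `missing` and the two real paths `present`:

```shell
#!/bin/bash
# Candidate audit log locations: the path from the released values.yaml
# plus the two paths OpenShift 4.5 actually writes to.
CANDIDATES="/var/log/kube-apiserver-audit.log
/var/log/kube-apiserver/audit.log
/var/log/openshift-apiserver/audit.log"

for f in $CANDIDATES; do
  if [ -f "$f" ]; then
    echo "present: $f"
  else
    echo "missing: $f"
  fi
done
```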
I included my values.yaml below to override that path and also capture the openshift-apiserver audit log file. Additionally, all documentation that I could find references a Helm 2 install with Tiller; OpenShift 4.5 supports Helm 3 (with annotations), and that is the process used here.
Thank you very much for your help and once again, this is a great tool. It is much appreciated.
Please let me know if there is additional information I can provide.
Brian
```yaml
global:
  splunk:
    hec:
      host: "splunk.example.com"
      port: "1234"
      token: "REDACTED"
      indexName: "OKD-local-audit"
  kubernetes:
    clusterName: "OKD-local"
    openshift: "true"

splunk-kubernetes-logging:
  containers:
    logFormatType: "cri"
    logFormat: "%Y-%m-%dT%H:%M:%S.%N%:z"
  logs:
    kube-audit:
      from:
        file:
          path: "/var/log/kube-apiserver/audit.log"
    openshift-apiserver:
      from:
        file:
          path: "/var/log/openshift-apiserver/audit.log"
  nodeSelector:
    node-role.kubernetes.io/master: ""
  kubernetes:
    clusterName: "OKD-local"
    openshift: "true"

splunk-kubernetes-objects:
  enabled: false
  kubernetes:
    clusterName: "OKD-local"
    openshift: "true"

splunk-kubernetes-metrics:
  enabled: false
  kubernetes:
    clusterName: "OKD-local"
    openshift: "true"
  splunk:
    hec:
      indexName: "OKD-local-metrics"
```
helm3 install process:

```shell
#!/bin/bash

# FUNCTION: check_return_code_function
# Check the return code from the previously run command.
# Displays the error code and exits with 99 if the previous command
# returned anything but 0.
check_return_code_function () {
  RETURN_CODE=${?}
  if [ "$RETURN_CODE" -ne 0 ]
  then
    echo "RETURN_CODE is: ${RETURN_CODE}"
    exit 99
  fi

  echo
  echo "SUCCESS!!!"
  echo

  sleep 1
}

OCP_PROJECT="splunk-connect-ops"

wget https://github.com/splunk/splunk-connect-for-kubernetes/archive/release/1.4.3.zip
check_return_code_function

unzip 1.4.3.zip
check_return_code_function

oc adm new-project ${OCP_PROJECT} --node-selector="" --as=system:admin
check_return_code_function

oc project ${OCP_PROJECT}
check_return_code_function

helm3 install splunk-kubernetes-logging --namespace=${OCP_PROJECT} -f values.yaml splunk-connect-for-kubernetes-release-1.4.3/helm-chart/splunk-connect-for-kubernetes/
check_return_code_function

oc adm policy add-scc-to-user privileged -z splunk-kubernetes-logging -n ${OCP_PROJECT}
check_return_code_function

# To uninstall:
#helm3 uninstall splunk-kubernetes-logging --namespace=${OCP_PROJECT}
#check_return_code_function
```
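One copy/paste caveat with the script above: the `[ ... ]` test requires straight ASCII quotes; curly quotes picked up from a word processor break it. The return-code check pattern itself can be exercised without a cluster; this is a self-contained sketch of the same function with a stand-in for a successful step:

```shell
#!/bin/bash
# Same pattern as the install script: inspect ${?} from the previous
# command and abort with exit code 99 on any failure.
check_return_code_function () {
  RETURN_CODE=${?}
  if [ "$RETURN_CODE" -ne 0 ]; then
    echo "RETURN_CODE is: ${RETURN_CODE}"
    exit 99
  fi
  echo "SUCCESS!!!"
}

true                        # stands in for a step like wget or unzip
check_return_code_function  # prints SUCCESS!!!
```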
Issue Analytics
- State: closed
- Created: 3 years ago
- Comments: 9 (3 by maintainers)
@matthewmodestino Indeed true; a huge amount of data is sent to Splunk. Unfortunately, some sites have requirements to ship and store it for years.
This issue was closed because it has been inactive for 14 days since being marked as stale.