Stuck on an issue?

Lightrun Answers was designed to cut down on the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

[BUG] Sandbox support for Apple Silicon (M1)

See original GitHub issue

Describe the bug

The sandbox does not start, even though I just ran the one-line command from the README.

docker run --rm --privileged -p 30081:30081 -p 30084:30084 cr.flyte.org/flyteorg/flyte-sandbox

It gets stuck at the message Starting k3s cluster....
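
One workaround commonly tried on M1 is to force the amd64 image under Docker Desktop's QEMU emulation. A minimal sketch, assuming this tag actually publishes an amd64 manifest (emulation is slow and not guaranteed to work for k3s):

# --platform is a standard docker run flag; whether the amd64 variant of
# this image boots under emulation is an assumption, not a given.
docker run --rm --privileged --platform linux/amd64 \
  -p 30081:30081 -p 30084:30084 cr.flyte.org/flyteorg/flyte-sandbox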

Expected behavior

The sandbox should run on my laptop, serving ports 30081 and 30084.
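
Once the cluster is up, a quick way to verify those ports respond would be a probe like this (hypothetical check; the exact paths served on 30081/30084 are not specified in the issue):

# Print the HTTP status code for each published port; any response at all
# means the sandbox got past the k3s startup hang.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:30081/
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:30084/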

GitHub repo(s): flyte (we could have more as well)

[Optional] Additional context

Steps to reproduce the behavior:

  1. Environment
    • MacBook Pro (13-inch, M1, 2020)
    • OS: Big Sur 11.4
    • Docker Desktop v3.5.1
    • Docker Engine v20.10.7
  2. Attaching the detailed log from /var/log/k3s.log:

time="2021-07-16T06:29:24Z" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
time="2021-07-16T06:29:24Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/77457d0b09d8b94d6f5029bcbc70f94b7ae9c50a08b539b76612e713ea818256"
time="2021-07-16T06:29:30.410084337Z" level=info msg="Starting k3s v1.21.1+k3s1 (75dba57f)"
time="2021-07-16T06:29:30.442382629Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-07-16T06:29:30.442953754Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-07-16T06:29:30.459868920Z" level=info msg="Database tables and indexes are up to date"
time="2021-07-16T06:29:30.479353254Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-07-16T06:29:30.675448837Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1626416970: notBefore=2021-07-16 06:29:30 +0000 UTC notAfter=2022-07-16 06:29:30 +0000 UTC"
time="2021-07-16T06:29:30.680936587Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1626416970: notBefore=2021-07-16 06:29:30 +0000 UTC notAfter=2022-07-16 06:29:30 +0000 UTC"
time="2021-07-16T06:29:30.685830670Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1626416970: notBefore=2021-07-16 06:29:30 +0000 UTC notAfter=2022-07-16 06:29:30 +0000 UTC"
time="2021-07-16T06:29:30.690372670Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1626416970: notBefore=2021-07-16 06:29:30 +0000 UTC notAfter=2022-07-16 06:29:30 +0000 UTC"
time="2021-07-16T06:29:30.693984295Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1626416970: notBefore=2021-07-16 06:29:30 +0000 UTC notAfter=2022-07-16 06:29:30 +0000 UTC"
time="2021-07-16T06:29:30.698073129Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1626416970: notBefore=2021-07-16 06:29:30 +0000 UTC notAfter=2022-07-16 06:29:30 +0000 UTC"
time="2021-07-16T06:29:30.702395587Z" level=info msg="certificate CN=cloud-controller-manager signed by CN=k3s-client-ca@1626416970: notBefore=2021-07-16 06:29:30 +0000 UTC notAfter=2022-07-16 06:29:30 +0000 UTC"
time="2021-07-16T06:29:30.709898629Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1626416970: notBefore=2021-07-16 06:29:30 +0000 UTC notAfter=2022-07-16 06:29:30 +0000 UTC"
time="2021-07-16T06:29:30.716450212Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1626416970: notBefore=2021-07-16 06:29:30 +0000 UTC notAfter=2022-07-16 06:29:30 +0000 UTC"
time="2021-07-16T06:29:30.722841629Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1626416970: notBefore=2021-07-16 06:29:30 +0000 UTC notAfter=2022-07-16 06:29:30 +0000 UTC"
time="2021-07-16T06:29:30.726369546Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1626416970: notBefore=2021-07-16 06:29:30 +0000 UTC notAfter=2022-07-16 06:29:30 +0000 UTC"
time="2021-07-16T06:29:30.732747546Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1626416970: notBefore=2021-07-16 06:29:30 +0000 UTC notAfter=2022-07-16 06:29:30 +0000 UTC"
time="2021-07-16T06:29:31.429857296Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1626416970: notBefore=2021-07-16 06:29:30 +0000 UTC notAfter=2022-07-16 06:29:31 +0000 UTC"
time="2021-07-16T06:29:31.434198546Z" level=info msg="Active TLS secret (ver=) (count 7): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.17.0.2:172.17.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=12790E97CBF1D4F0032099CCB8C574CB8AFFE72A]"
time="2021-07-16T06:29:31.467758129Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I0716 06:29:31.495392 15 server.go:656] external host was not specified, using 172.17.0.2
I0716 06:29:31.499901 15 server.go:195] Version: v1.21.1+k3s1
I0716 06:29:32.851088 15 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I0716 06:29:32.865920 15 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0716 06:29:32.866023 15 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0716 06:29:32.876776 15 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0716 06:29:32.876830 15 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0716 06:29:33.030969 15 instance.go:283] Using reconciler: lease
I0716 06:29:33.160415 15 rest.go:130] the default service ipfamily for this cluster is: IPv4
W0716 06:29:34.760910 15 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0716 06:29:34.805778 15 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0716 06:29:34.822455 15 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0716 06:29:34.848923 15 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0716 06:29:34.859161 15 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0716 06:29:34.941346 15 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W0716 06:29:34.941415 15 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
I0716 06:29:34.981256 15 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0716 06:29:34.981324 15 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2021-07-16T06:29:35.051927381Z" level=info msg="Waiting for API server to become available"
time="2021-07-16T06:29:35.057545214Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2021-07-16T06:29:35.077942756Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2021-07-16T06:29:35.092829131Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2021-07-16T06:29:35.093339131Z" level=info msg="To join node to cluster: k3s agent -s https://172.17.0.2:6443 -t ${NODE_TOKEN}"
time="2021-07-16T06:29:35.102075298Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
time="2021-07-16T06:29:35.102198464Z" level=info msg="Run: k3s-server kubectl"
time="2021-07-16T06:29:35.255427089Z" level=info msg="Cluster-Http-Server 2021/07/16 06:29:35 http: TLS handshake error from 127.0.0.1:44980: remote error: tls: bad certificate"
time="2021-07-16T06:29:35.294934756Z" level=info msg="Cluster-Http-Server 2021/07/16 06:29:35 http: TLS handshake error from 127.0.0.1:44988: remote error: tls: bad certificate"
time="2021-07-16T06:29:35.366491881Z" level=info msg="certificate CN=7878eb0dfcc1 signed by CN=k3s-server-ca@1626416970: notBefore=2021-07-16 06:29:30 +0000 UTC notAfter=2022-07-16 06:29:35 +0000 UTC"
time="2021-07-16T06:29:35.383604173Z" level=info msg="certificate CN=system:node:7878eb0dfcc1,O=system:nodes signed by CN=k3s-client-ca@1626416970: notBefore=2021-07-16 06:29:30 +0000 UTC notAfter=2022-07-16 06:29:35 +0000 UTC"
time="2021-07-16T06:29:35.547477589Z" level=info msg="Module overlay was already loaded"
time="2021-07-16T06:29:35.547588214Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-07-16T06:29:35.563684339Z" level=warning msg="Failed to load kernel module br_netfilter with modprobe"
time="2021-07-16T06:29:35.577951256Z" level=warning msg="Failed to load kernel module iptable_nat with modprobe"
W0716 06:29:35.578874 15 sysinfo.go:203] Nodes topology is not available, providing CPU topology
time="2021-07-16T06:29:35.585571548Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2021-07-16T06:29:35.585863256Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2021-07-16T06:29:35.594601006Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2021-07-16T06:29:35.595207464Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2021-07-16T06:29:36.609991340Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""
time="2021-07-16T06:29:37.641434007Z" level=info msg="Containerd is now running"
time="2021-07-16T06:29:37.708044049Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-07-16T06:29:37.725201341Z" level=info msg="Handling backend connection request [7878eb0dfcc1]"
time="2021-07-16T06:29:37.729681049Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2021-07-16T06:29:37.730060216Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/77457d0b09d8b94d6f5029bcbc70f94b7ae9c50a08b539b76612e713ea818256/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=7878eb0dfcc1 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2021-07-16T06:29:37.738346299Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=7878eb0dfcc1 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
W0716 06:29:37.740831 15 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W0716 06:29:37.746369 15 proxier.go:653] Failed to read file /lib/modules/5.10.25-linuxkit/modules.builtin with error open /lib/modules/5.10.25-linuxkit/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I0716 06:29:37.759831 15 server.go:436] "Kubelet version" kubeletVersion="v1.21.1+k3s1"
W0716 06:29:37.764220 15 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0716 06:29:37.782070 15 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0716 06:29:37.801677 15 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0716 06:29:37.818006 15 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0716 06:29:37.836981 15 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
E0716 06:29:37.881617 15 node.go:161] Failed to retrieve node info: nodes "7878eb0dfcc1" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I0716 06:29:37.925349 15 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
E0716 06:29:39.038827 15 node.go:161] Failed to retrieve node info: nodes "7878eb0dfcc1" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
E0716 06:29:41.255892 15 node.go:161] Failed to retrieve node info: nodes "7878eb0dfcc1" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
time="2021-07-16T06:29:42.718670551Z" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
E0716 06:29:42.940327 15 server.go:288] "Failed to run kubelet" err="failed to run Kubelet: could not detect clock speed from output: \"processor\\t: 0\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm sb paca pacg dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\nprocessor\\t: 1\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm sb paca pacg dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\nprocessor\\t: 2\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm sb paca pacg dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\nprocessor\\t: 3\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm sb paca pacg dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\nprocessor\\t: 4\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm sb paca pacg dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\nprocessor\\t: 5\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm sb paca pacg dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\nprocessor\\t: 6\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm sb paca pacg dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\nprocessor\\t: 7\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm sb paca pacg dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\n\""
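
The fatal record is the last one: the kubelet exits because its embedded cAdvisor machine-info probe cannot parse a CPU clock speed from /proc/cpuinfo, and the arm64 cpuinfo dumped in the error carries no "cpu MHz" field at all. A quick way to confirm you are hitting the arm64 path (a sketch, assuming the image's entrypoint can be overridden and the image ships uname and grep):

# If this prints aarch64, Docker pulled the arm64 variant, whose
# /proc/cpuinfo has no "cpu MHz" line for cAdvisor to parse.
docker run --rm --entrypoint uname cr.flyte.org/flyteorg/flyte-sandbox -m
# Count the "cpu MHz" lines as seen inside the container; expect 0 on arm64.
docker run --rm --entrypoint grep cr.flyte.org/flyteorg/flyte-sandbox -c 'cpu MHz' /proc/cpuinfo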

Screenshots

(screenshot attached in the original issue)

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 10 (5 by maintainers)

Top GitHub Comments

1 reaction
avshalomman commented, Dec 23, 2021

@evalsocket Actually it was an environment issue: I was installing flyte using pip in a conda environment and the conda python doesn’t play nicely with grpc installed using pip. It worked when doing everything with venv + pip. Thanks!
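
For anyone landing here with the same symptom, a minimal sketch of the venv + pip setup the commenter describes (flytekit is Flyte's Python SDK package on PyPI; version pinning is up to you):

# Use a plain venv instead of a conda env so grpcio and everything else
# come from the same pip resolver.
python3 -m venv flyte-venv
source flyte-venv/bin/activate
pip install --upgrade pip
pip install flytekit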

1 reaction
jinserk commented, Jul 16, 2021

@sinwoobang It would be better to add the Docker version as well. 😃
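
For completeness, that version info can be captured with the standard Docker CLI:

# Full client/server report:
docker version
# Or just the two version numbers (--format takes a Go template):
docker version --format 'client={{.Client.Version}} server={{.Server.Version}}'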

Read more comments on GitHub >

Top Results From Across the Web

Configuring the macOS App Sandbox - Apple Developer
To ensure the App Sandbox is in an enabled state, launch your macOS app using Xcode. Then, open /Applications/Utilities/Activity Monitor.app and choose...
Read more >
How to Safeguard From macOS App Sandbox Vulnerability
Microsoft has covered the macOS App Sandbox vulnerability. Here is how you can safeguard your Mac from this bug.
Read more >
Covert channel in Apple's M1 is mostly harmless, but it sure is ...
Martin said that the flaw is mainly harmless because it can't be used to infect a Mac and it can't be used by...
Read more >
Universe Sandbox - Gaming on M1 Apple silicon Macs and ...
Universe Sandbox. From AppleGamingWiki, the wiki about gaming on M1 Apple silicon Macs. macOS Compatibility • Link. Method, Rating, Notes.
Read more >
Solved: Deploying Hortonworks Sandbox on Docker on MAC M1
Docker installed Mac M1 ... docker pull hortonworks/sandbox-hdp:3.0.1 --platform linux/amd64 ... An Unexpected Error has occurred.
Read more >

