Managed Server pods not starting
The Admin Server pod is up and running, however no Managed Server pods are created. I'm running on OCI on 3 instances, not provisioned using the Terraform scripts but created directly from the OCI UI. The operator is running, and the custom Domain resource reports the following:
Name:         dev-domain
Namespace:    dev-domain
Labels:       weblogic.domainName=dev-domain
              weblogic.domainUID=dev-domain
              weblogic.resourceVersion=domain-v1
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"weblogic.oracle/v1","kind":"Domain","metadata":{"annotations":{},"labels":{"weblogic.domainName":"dev-domain","weblogic.domainUID":"dev-...
API Version:  weblogic.oracle/v1
Kind:         Domain
Metadata:
  Cluster Name:
  Creation Timestamp:  2018-07-07T15:48:09Z
  Generation:          0
  Resource Version:    27951
  Self Link:           /apis/weblogic.oracle/v1/namespaces/dev-domain/domains/dev-domain
  UID:                 25ed332e-81fd-11e8-90c5-020017013c05
Spec:
  Admin Secret:
    Name:   weblogic-credentials
  As Name:  admin-server
  As Port:  7001
  Cluster Startup:
    Cluster Name:   cluster-1
    Desired State:  RUNNING
    Env:
      Name:   JAVA_OPTIONS
      Value:  -Dweblogic.StdoutDebugEnabled=false
      Name:   USER_MEM_ARGS
      Value:  -Xms64m -Xmx256m
    Replicas:  2
  Domain Name:  dev-domain
  Domain UID:   dev-domain
  Export T 3 Channels:
    T3Channel
  Image:              store/oracle/weblogic:12.2.1.3
  Image Pull Policy:  IfNotPresent
  Replicas:           1
  Server Startup:
    Desired State:  RUNNING
    Env:
      Name:   JAVA_OPTIONS
      Value:  -Dweblogic.StdoutDebugEnabled=false
      Name:   USER_MEM_ARGS
      Value:  -Xms64m -Xmx256m
    Node Port:    30701
    Server Name:  admin-server
  Startup Control:  AUTO
Status:
  Conditions:
    Last Transition Time:  2018-07-07T15:54:06.083Z
    Reason:                ServersReady
    Status:                True
    Type:                  Available
  Servers:
    Server Name:  admin-server
    State:
    Start Time:   2018-07-07T15:54:06.083Z
Events:  <none>
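(Output in this form is what kubectl produces when describing the Domain custom resource; assuming the name and namespace shown above, the command would be roughly:)

    kubectl describe domain dev-domain -n dev-domain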
The domain creation job also completed successfully, without any errors. I am not able to figure out why the Managed Server pods are not getting created. The only odd things are a connection timeout in the operator log when it tries to read the Admin Server health, even though the Admin Server is up and running, and a 403 Forbidden error in the operator log associated with that failure.
What could be happening?
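A few commands that may help narrow this down (a sketch only; the dev-domain namespace is taken from the describe output above, and these listings should show whether any Managed Server pods or services were created at all):

    # List all pods in the domain namespace to see whether any Managed Server pods exist
    kubectl get pods -n dev-domain

    # List services in the domain namespace to see which ports are exposed for the cluster and the Admin Server
    kubectl get svc -n dev-domain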
The exposed node port was already in use by another service. The issue is resolved now. Thanks.
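For anyone hitting the same symptom: the node port in question (30701, from the Node Port field in the describe output above) can be checked for a collision by listing any service that already claims it, for example:

    # Find any existing service that already uses node port 30701
    kubectl get svc --all-namespaces | grep 30701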
Were the Managed Server pods created? If so, could you upload the pod log? Otherwise, we would probably need to look at the operator log to see if it provides any clues. Thanks.
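For completeness, the logs being asked for here can be gathered with standard kubectl commands; the pod name below is a placeholder, and the operator namespace is an assumption based on the operator's default install, so adjust both for your environment:

    # Log of a Managed Server pod in the domain namespace (replace the placeholder pod name)
    kubectl logs <managed-server-pod-name> -n dev-domain

    # Log of the operator deployment (adjust the namespace to wherever the operator is installed)
    kubectl logs deployment/weblogic-operator -n weblogic-operator --tail=200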