- Liveness probe: checks the state of a container and restarts it when a problem is detected.
- Readiness probe: checks whether a Pod is ready to serve traffic; a Pod that is not ready is excluded from load balancing.
- Startup probe: checks that a container has started successfully; the liveness and readiness probes only begin running after it succeeds.
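A startup probe is not exercised in this post, but its spec looks just like the other probes. A hypothetical sketch for a slow-starting app (the `/healthz` path and port 8080 are assumptions, not from this exercise):

```yaml
# Hypothetical startupProbe: allow up to 30 * 10 = 300s for the app to come up.
# Liveness and readiness checks are held off until this probe succeeds.
startupProbe:
  httpGet:
    path: /healthz      # assumed health endpoint
    port: 8080          # assumed port
  failureThreshold: 30
  periodSeconds: 10
```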
# Creating a YAML file for the liveness probe exercise
vim exec-liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
- Save the file and exit.
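The exec probe's contract is simple: the kubelet runs `cat /tmp/healthy` inside the container and treats exit code 0 as healthy, anything else as a failure. A minimal local sketch of that rule, outside the cluster:

```shell
# Simulate the exec probe's pass/fail rule locally: exit code 0 = healthy.
tmpdir=$(mktemp -d)
touch "$tmpdir/healthy"

cat "$tmpdir/healthy" >/dev/null 2>&1
before=$?   # 0 while the file exists -> probe passes

rm -f "$tmpdir/healthy"

cat "$tmpdir/healthy" >/dev/null 2>&1
after=$?    # non-zero once the file is gone -> probe fails, container is restarted

echo "before=$before after=$after"
rm -rf "$tmpdir"
```

This mirrors why the Pod above runs cleanly for 30 seconds and then starts failing its checks.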
# Running the liveness example
kubectl create -f exec-liveness.yaml
kubectl get pod
ec2-user:~/environment/yaml $ kubectl get pod
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 0 40s
ec2-user:~/environment/yaml $
# Checking the liveness events
kubectl describe pod liveness-exec
ec2-user:~/environment/yaml $ kubectl describe pod liveness-exec
Name: liveness-exec
Namespace: default
Priority: 0
Node: ip-192-168-19-0.ap-northeast-2.compute.internal/192.168.19.0
Start Time: Tue, 05 Oct 2021 14:24:10 +0000
Labels: test=liveness
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 192.168.11.203
IPs:
IP: 192.168.11.203
Containers:
liveness:
Container ID: docker://3922b141b368cf632e25152340f8f52237e3db46cffc480f83a6ec0bce131326
Image: k8s.gcr.io/busybox
Image ID: docker-pullable://k8s.gcr.io/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
State: Running
Started: Tue, 05 Oct 2021 14:25:27 +0000
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Tue, 05 Oct 2021 14:24:12 +0000
Finished: Tue, 05 Oct 2021 14:25:26 +0000
Ready: True
Restart Count: 1
Liveness: exec [cat /tmp/healthy] delay=5s timeout=1s period=5s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-czdll (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-czdll:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-czdll
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 79s default-scheduler Successfully assigned default/liveness-exec to ip-192-168-19-0.ap-northeast-2.compute.internal
Warning Unhealthy 33s (x3 over 43s) kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Normal Killing 33s kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Container liveness failed liveness probe, will be restarted
Normal Pulling 3s (x2 over 79s) kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Pulling image "k8s.gcr.io/busybox"
Normal Pulled 2s (x2 over 77s) kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Successfully pulled image "k8s.gcr.io/busybox"
Normal Created 2s (x2 over 77s) kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Created container liveness
Normal Started 2s (x2 over 77s) kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Started container liveness
ec2-user:~/environment/yaml $ kubectl get pod
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 1 2m29s
ec2-user:~/environment/yaml $
- Watching the Pod with describe and get pod, you can see the container being killed and restarted.
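The restart timing follows directly from the probe settings shown in the describe output (`delay=5s period=5s #failure=3`): the kubelet needs three consecutive failed checks before it kills the container. A quick sketch of the arithmetic:

```shell
# Worst-case time from the first failed check to the restart decision,
# using periodSeconds from exec-liveness.yaml and the default failureThreshold.
period=5
failure_threshold=3   # Kubernetes default
max_detect=$(( period * failure_threshold ))
echo "restart after up to ${max_detect}s of consecutive failures"
```

That matches the output above: the file is removed at the 30-second mark and the container is restarted roughly 15 seconds later.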
# Second test: Define a liveness HTTP request
vim http-liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3
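For an httpGet probe, any status code from 200 up to (but not including) 400 counts as success; anything else is a failure. The k8s.gcr.io/liveness test server returns 200 from /healthz for its first 10 seconds and 500 afterwards, which is why this Pod keeps getting restarted. The pass/fail rule as a sketch:

```shell
# The HTTP probe's success rule: 200 <= status < 400 passes.
is_healthy() {
  [ "$1" -ge 200 ] && [ "$1" -lt 400 ]
}

is_healthy 200 && early=pass || early=fail   # what /healthz returns at first
is_healthy 500 && late=pass  || late=fail    # what it returns after 10s
echo "200 -> $early, 500 -> $late"
```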
kubectl create -f http-liveness.yaml
kubectl get pod
kubectl get pod -w
# The probe checks the service's health and restarts the container when it fails
ec2-user:~/environment/yaml $ kubectl get pod -w
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 3 4m35s
liveness-http 1/1 Running 0 13s
liveness-http 1/1 Running 1 21s
liveness-http 1/1 Running 2 38s
liveness-exec 1/1 Running 4 5m2s
liveness-http 1/1 Running 3 57s
liveness-http 0/1 CrashLoopBackOff 3 74s
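The growing gaps between restarts in the watch output come from the kubelet's restart back-off, which produces the CrashLoopBackOff status: the wait starts at 10 seconds, doubles on each crash, and caps at five minutes. A sketch of that schedule:

```shell
# Kubelet restart back-off: starts at 10s, doubles per restart, capped at 300s.
backoff=10
for restart in 1 2 3 4 5 6; do
  echo "restart $restart: wait ${backoff}s"
  backoff=$(( backoff * 2 ))
  [ "$backoff" -gt 300 ] && backoff=300
done
```

A container that stays up for a while resets the back-off; one that keeps failing its liveness probe, like this one, climbs the schedule.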
# Checking the Pod details with describe
kubectl describe pod liveness-http
- The detailed status of the Pod can be inspected here.
ec2-user:~/environment/yaml $ kubectl describe pod liveness-http
Name: liveness-http
Namespace: default
Priority: 0
Node: ip-192-168-19-0.ap-northeast-2.compute.internal/192.168.19.0
Start Time: Tue, 05 Oct 2021 14:28:32 +0000
Labels: test=liveness
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 192.168.3.17
IPs:
IP: 192.168.3.17
Containers:
liveness:
Container ID: docker://b474683ab481758029850583595bb4a62e0891f6dfb6b932d81f8dab3571e296
Image: k8s.gcr.io/liveness
Image ID: docker-pullable://k8s.gcr.io/liveness@sha256:1aef943db82cf1370d0504a51061fb082b4d351171b304ad194f6297c0bb726a
Port: <none>
Host Port: <none>
Args:
/server
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Tue, 05 Oct 2021 14:30:31 +0000
Finished: Tue, 05 Oct 2021 14:30:48 +0000
Ready: False
Restart Count: 5
Liveness: http-get http://:8080/healthz delay=3s timeout=1s period=3s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-czdll (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-czdll:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-czdll
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m26s default-scheduler Successfully assigned default/liveness-http to ip-192-168-19-0.ap-northeast-2.compute.internal
Normal Pulled 108s (x3 over 2m23s) kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Successfully pulled image "k8s.gcr.io/liveness"
Normal Created 108s (x3 over 2m23s) kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Created container liveness
Normal Started 108s (x3 over 2m23s) kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Started container liveness
Normal Pulling 91s (x4 over 2m25s) kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Pulling image "k8s.gcr.io/liveness"
Warning Unhealthy 91s (x9 over 2m13s) kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Liveness probe failed: HTTP probe failed with statuscode: 500
Normal Killing 91s (x3 over 2m7s) kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Container liveness failed liveness probe, will be restarted
ec2-user:~/environment/yaml $
# Define a TCP liveness probe
vim tcp-liveness-readiness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: goproxy
  labels:
    app: goproxy
spec:
  containers:
  - name: goproxy
    image: k8s.gcr.io/goproxy:0.1
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
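A tcpSocket probe simply attempts a TCP connection to the given port: if the connection opens, the check passes. A crude local analogue using bash's /dev/tcp redirection (the port here is an arbitrary one assumed to have no listener, so this check fails):

```shell
# Crude local analogue of a tcpSocket probe: try to connect within 1 second.
port=59999   # arbitrary port, assumed closed
if timeout 1 bash -c "exec 3<>/dev/tcp/127.0.0.1/$port" 2>/dev/null; then
  result="success"
else
  result="failure"
fi
echo "tcp probe on port $port: $result"
```

Against the goproxy Pod, which listens on 8080, both the readiness and liveness checks succeed, which is why the Pod stays Running with no restarts.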
kubectl apply -f https://k8s.io/examples/pods/probe/tcp-liveness-readiness.yaml
kubectl get pod
kubectl describe pod goproxy
ec2-user:~/environment/yaml $ kubectl apply -f https://k8s.io/examples/pods/probe/tcp-liveness-readiness.yaml
pod/goproxy created
ec2-user:~/environment/yaml $ kubectl get pod
NAME READY STATUS RESTARTS AGE
goproxy 0/1 Running 0 7s
liveness-exec 0/1 CrashLoopBackOff 7 14m
liveness-http 0/1 CrashLoopBackOff 7 10m
ec2-user:~/environment/yaml $ kubectl describe pod goproxy
Name: goproxy
Namespace: default
Priority: 0
Node: ip-192-168-19-0.ap-northeast-2.compute.internal/192.168.19.0
Start Time: Tue, 05 Oct 2021 14:38:49 +0000
Labels: app=goproxy
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"goproxy"},"name":"goproxy","namespace":"default"},"spec":{"c...
kubernetes.io/psp: eks.privileged
Status: Running
IP: 192.168.0.98
IPs:
IP: 192.168.0.98
Containers:
goproxy:
Container ID: docker://620722a602eca11641a7c23b7fe54ef305f8cbe1bf26a068bb4bc934848c66a7
Image: k8s.gcr.io/goproxy:0.1
Image ID: docker-pullable://k8s.gcr.io/goproxy@sha256:5334c7ad43048e3538775cb09aaf184f5e8acf4b0ea60e3bc8f1d93c209865a5
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 05 Oct 2021 14:38:52 +0000
Ready: True
Restart Count: 0
Liveness: tcp-socket :8080 delay=15s timeout=1s period=20s #success=1 #failure=3
Readiness: tcp-socket :8080 delay=5s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-czdll (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-czdll:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-czdll
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25s default-scheduler Successfully assigned default/goproxy to ip-192-168-19-0.ap-northeast-2.compute.internal
Normal Pulling 24s kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Pulling image "k8s.gcr.io/goproxy:0.1"
Normal Pulled 22s kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Successfully pulled image "k8s.gcr.io/goproxy:0.1"
Normal Created 22s kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Created container goproxy
Normal Started 22s kubelet, ip-192-168-19-0.ap-northeast-2.compute.internal Started container goproxy
ec2-user:~/environment/yaml $
- The scheduling events can be seen in the output.