
I am trying to deploy an Angular frontend app on Kubernetes, but I always get this error:

NAME                              READY   STATUS             RESTARTS   AGE
common-frontend-f74c899cc-p6tdn   0/1     CrashLoopBackOff   7          15m

When I try to view the pod's logs, it prints just an empty line, so how can I find out where the problem could be?

This is the Dockerfile; the build pipeline with this Dockerfile always passed:

### STAGE 1: Build ###

# We label our stage as 'builder'
FROM node:10.11 as builder

COPY package.json ./
COPY package-lock.json ./

RUN npm set progress=false && npm config set depth 0 && npm cache clean --force
ARG NODE_OPTIONS="--max_old_space_size=4096"
## Storing node modules on a separate layer will prevent unnecessary npm installs at each build
RUN npm i && mkdir /ng-app && cp -R ./node_modules ./ng-app

WORKDIR /ng-app

COPY . .

## Build the angular app in production mode and store the artifacts in dist folder
RUN $(npm bin)/ng build --prod --output-hashing=all

### STAGE 2: Setup ###

FROM nginx:1.13.3-alpine

## Copy our default nginx config
COPY nginx/default.conf /etc/nginx/conf.d/

## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*

## From 'builder' stage copy the artifacts in dist folder to default nginx public folder
COPY --from=builder /ng-app/dist /usr/share/nginx/html

CMD ["nginx", "-g", "daemon off;"]

and deployment.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: common-frontend
  labels:
    app: common-frontend
spec:
  type: ClusterIP
  selector:
    app: common-frontend
  ports:
  - port: 80
    targetPort: 8080

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: common-frontend
  labels:
    app: common-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: common-frontend
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 33%
  template:
    metadata:
      labels:
        app: common-frontend
    spec:
      containers:
      - name: common-frontend
        image: skunkstechnologies/common-frontend:<VERSION>
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 1

I really don't know what the problem could be. Can anyone help? Thanks!

2 Answers


It looks like Kubernetes is failing the liveness probe and restarting the pod. Try commenting out the livenessProbe section and starting it again. If that helps, correct the probe parameters -- timeout, initial delay, etc.
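For reference, a minimal sketch of relaxed probe settings. This assumes your custom nginx/default.conf actually listens on 8080 and that a /health endpoint exists; if not, probe a path nginx really serves (e.g. /), and align containerPort/targetPort with the real listen port:

```yaml
# Hypothetical relaxed liveness probe -- adjust to your app's real
# startup time and serving port (your nginx/default.conf determines
# the actual listen port inside the container).
livenessProbe:
  httpGet:
    path: /                 # use a path nginx actually serves; /health only if configured
    port: 8080              # must match the port nginx listens on
  initialDelaySeconds: 60   # give the container more time before the first check
  periodSeconds: 10         # check every 10 seconds
  timeoutSeconds: 5         # allow slower responses before counting a failure
  failureThreshold: 3       # restart only after 3 consecutive failures
```

Note that the stock nginx image listens on port 80 by default, so if your default.conf does not override the listen directive, a probe (and a Service targetPort) pointing at 8080 will always fail.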



Hmm. Your container dies and tries to restart. First of all, look at its logs and status:

kubectl logs <pod_name>
kubectl logs <pod_name> --previous   # logs from the previous, crashed run
kubectl describe pod <pod_name>      # events, probe failures, exit codes

1 Comment

Please merge this with the other answer, as separate steps, instead of keeping two separate answers.
