What is a Service?
A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is usually determined by a Label Selector.
Service role
Due to the nature of Kubernetes, Pods are ephemeral: they can be rescheduled onto other Nodes, and each time a Pod is recreated it gets a new IP address. Talking to a Pod directly can therefore result in a sudden loss of communication. You need a way to reach a changing set of Pod and Node IP addresses through a single endpoint, without the client having to know about the changes. The main role of a Service is to abstract away the existence of individual Pods and Nodes and to provide that single, stable endpoint for communicating with the Pods.
Prep
CONFIG="3nodes.yaml"
KIND_NAME="test-cluster"
CLUSTER_NAME="kind-${KIND_NAME}"
cat << EOF > ${CONFIG}
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  image: kindest/node:v1.24.1
- role: worker
  image: kindest/node:v1.24.1
- role: worker
  image: kindest/node:v1.24.1
EOF
kind create cluster --config ${CONFIG} --name ${KIND_NAME}
kubectl config get-contexts
kubectl get node
Defining a Service
A Service is defined as follows:
apiVersion: v1
kind: Service
metadata:
  name: test-app
spec:
  selector:
    app: test-app-link
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
Applying the above definition creates the Service, and because it has a selector, Kubernetes automatically creates a corresponding Endpoints object as well.
Starting Services
> kubectl apply -f test_app.yml
service/test-app created
Get Services
> kubectl get service test-app
or
> kubectl get svc
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
test-app   ClusterIP   10.96.211.114   <none>        9000/TCP   47s
Get endpoints
> kubectl get endpoints
NAME       ENDPOINTS   AGE
test-app   <none>      2m3s
When you create a Service with the above definition, a cluster IP used by the service proxy is assigned to the Service, Pods labeled app: test-app-link become the destinations, and traffic arriving at port 9000 of the Service is forwarded to port 9000 of those Pods. The ENDPOINTS column shows <none> because no Pod carrying that label is running yet.
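To actually populate those Endpoints, something has to run with the app: test-app-link label. A minimal Deployment sketch would do it; the Deployment name and image below are placeholders, and it assumes the container listens on port 9000:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-app-link
  template:
    metadata:
      labels:
        app: test-app-link      # must match the Service's selector
    spec:
      containers:
        - name: app
          image: your-app-image:latest   # placeholder: an image that listens on port 9000
          ports:
            - containerPort: 9000
Once its Pods are Ready, kubectl get endpoints test-app lists their addresses on port 9000.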
Delete services
> kubectl delete services test-app
service "test-app" deleted
Services without selectors
I wrote above that a Service defines a label selector to abstract communication to Pods, but it can also abstract communication to other kinds of backends.
Examples:
- Production wants to use a database cluster outside the cluster, but testing uses its own database
- Want to direct the Service to a Service outside the cluster
- Workloads are being migrated to Kubernetes, with some backends running outside of Kubernetes
The definition of a Service that does not use a label selector is:
apiVersion: v1
kind: Service
metadata:
  name: test-app-no-selector
spec:
  ports:
    - protocol: TCP
      port: 30050
      targetPort: 30050
The Service is created, but because there is no selector, no corresponding Endpoints object is created automatically.
Starting Services
> kubectl apply -f test_app_service_no_selector.yml
service/test-app-no-selector created
Get Services
> kubectl get service test-app-no-selector
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
test-app-no-selector   ClusterIP   10.96.183.199   <none>        30050/TCP   98s
Get endpoints
> kubectl get endpoints
No corresponding Endpoints are created
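Because there is no selector, you wire this Service to a backend yourself by creating an Endpoints object with the same name. A sketch, assuming the external backend lives at the placeholder address 192.0.2.42:
apiVersion: v1
kind: Endpoints
metadata:
  name: test-app-no-selector   # must match the Service name
subsets:
  - addresses:
      - ip: 192.0.2.42         # placeholder: address of the external backend
    ports:
      - port: 30050
After applying it, kubectl get endpoints test-app-no-selector shows 192.0.2.42:30050 and traffic sent to the Service's port 30050 is forwarded there.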
Choosing your own IP address
You can specify your own cluster IP address as part of a Service creation request. To do this, set the .spec.clusterIP field. The IP address you choose must be a valid IPv4 or IPv6 address from within the service-cluster-ip-range CIDR range configured for the API server.
apiVersion: v1
kind: Service
metadata:
  name: test-cip-service
spec:
  type: ClusterIP
  clusterIP: 10.96.211.115
  selector:
    app: test-app-link
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
Starting Services
> kubectl apply -f test_cluster.ip.yml
service/test-cip-service created
Get Services
> kubectl get service test-cip-service
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
test-cip-service   ClusterIP   10.96.211.115   <none>        9000/TCP   35s
Type NodePort
If you set the type field to NodePort, the Kubernetes control plane allocates a port from the range specified by the --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your Service. Your Service reports the allocated port in its .spec.ports[*].nodePort field.
Choosing your own port
If you want a specific port number, you can specify a value in the nodePort field. The control plane will either allocate you that port or report that the API transaction failed. This means that you need to take care of possible port collisions yourself.
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
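Applying the manifest and reading the port back should return 30007, since that nodePort was requested explicitly (the file name test_nodeport.yml is just an example, and the allocation only succeeds if the port is free):
> kubectl apply -f test_nodeport.yml
service/test-service created
> kubectl get service test-service -o jsonpath='{.spec.ports[0].nodePort}'
30007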
Custom IP address configuration for type: NodePort Services
You can set up nodes in your cluster to use a particular IP address for serving node port services. You might want to do this if each node is connected to multiple networks.
Make configuration changes
kubectl -n kube-system edit cm kube-proxy
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 192.168.20.0/24
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
...
Limit the range of IP addresses that accept NodePort traffic by setting nodePortAddresses:
...
    mode: ""
    nodePortAddresses: ["127.0.0.0/8"]
    oomScoreAdj: null
...
To apply the new settings, delete the kube-proxy Pods once so that they are recreated with the updated ConfigMap.
> kubectl get pods -A | grep proxy
kube-system kube-proxy-6tq5k 1/1 Running 0 74m
kube-system kube-proxy-ln99s 1/1 Running 0 74m
kube-system kube-proxy-tpv5b 1/1 Running 0 74m
Delete all three. The Pod names are generated, so replace them with the names shown in your own cluster.
kubectl delete pod --namespace=kube-system kube-proxy-6tq5k
kubectl delete pod --namespace=kube-system kube-proxy-ln99s
kubectl delete pod --namespace=kube-system kube-proxy-tpv5b
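Alternatively, since the kube-proxy Pods created by kubeadm/kind carry the k8s-app=kube-proxy label, you can delete all of them in one go without copying the generated names:
kubectl delete pod --namespace=kube-system -l k8s-app=kube-proxy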
Either way, Kubernetes detects the missing Pods and recreates them automatically (kube-proxy runs as a DaemonSet).
> kubectl get pods -A | grep proxy
kube-system kube-proxy-g9fkt 1/1 Running 0 10s
kube-system kube-proxy-tmr2w 1/1 Running 0 10s
With nodePortAddresses set to ["127.0.0.0/8"], kube-proxy now accepts NodePort traffic only on each node's loopback addresses, so the NodePort is no longer reachable via the nodes' other IP addresses.
Type LoadBalancer
On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your Service.
For example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  clusterIP: 10.0.171.239
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 192.0.2.127
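Once the external load balancer has been provisioned, its address is published in the Service's status and can be read back directly; on a local cluster such as kind, which ships no load balancer implementation by default, EXTERNAL-IP simply stays <pending>:
> kubectl get service my-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
192.0.2.127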
Type ExternalName
Services of type ExternalName map a Service to an external DNS name instead of to a set of Pods chosen by a label selector. You specify the target name with the spec.externalName parameter.
For example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
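Inside the cluster, the DNS server answers lookups for this Service with a CNAME record for my.database.example.com. You can check this from a throwaway Pod (busybox is used here purely as an example image):
> kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup my-service.prod
The response should contain the CNAME my.database.example.com rather than a cluster IP.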
External IPs
If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be exposed on those externalIPs.
For example:
apiVersion: v1
kind: Service
metadata:
  name: test-external-ip
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9376
  externalIPs:
    - 80.11.12.10
Starting Services
> kubectl apply -f test_external_ip.yml
Get service
> kubectl get service test-external-ip
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
test-external-ip   ClusterIP   10.96.146.84   80.11.12.10   80/TCP    31s
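If 80.11.12.10 really routes to one of your nodes (here it is only an illustrative address), clients outside the cluster can then reach the Service on that address and port:
curl http://80.11.12.10:80/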


