Cost efficiency is claimed to be one of the main benefits of cloud infrastructure. I bet some people would say: “What? Have you seen the AWS data transfer costs?”

In this article, we’ll tell you how AWS traffic costs took us by surprise on one of our recent projects, why it happened, and what the nature of the unexpected charges was. You’ll also learn how the Apiko DevOps team solved the issue, and get useful tips for AWS cost optimization.

## Why the AWS bill exceeded our expectations

When designing the underlying cloud solution for a Kubernetes cluster, you pay a lot of attention to the infrastructure configuration so that it suits the app requirements best. What’s easy to forget about is the cost of the data transferred between your servers and services. Eventually, this results in a massive bill at the end of the month, caused by data traffic between different Availability Zones (AZs) or regions.

That’s what we faced on one of our projects after migrating the database to a self-hosted solution. Besides forgetting about the AWS data charges, some of the AWS concepts and the resulting traffic costs were challenging to foresee. A daily cost of about 12 USD for nearly 1.2 TB of data traffic hit us unexpectedly.

## The origin of AWS traffic costs

As data may be shared between different regions, AZs, and architecture components, understanding all the types of data traffic and the respective costs can be frustrating. Here’s a somewhat generalized summary. The main types of AWS paid data transfers include:

- outbound traffic from the Kubernetes cluster to the internet;
- traffic between AWS regions;
- traffic between Availability Zones within one region.

You can find more pricing details in the article about AWS data transfer cost for common architectures.

## The hidden “cost generator” on our project

In our case, there are 6 nodes in the Elastic Kubernetes Service cluster and 4 database servers spread across 3 AZs. There is no multi-region infrastructure, as the app will be used only within one country. However, the application generates a large amount of data for the database servers.

Considering the data transfer pricing mentioned above, for AWS cost optimization we need to make the most use of:

- inner-AZ connections between the servers;
- private database server endpoints (private DNS hostnames and IPs).

What we didn’t foresee was that, most of the time, the Kubernetes scheduler randomly created the containers with the app (pods) across all of the nodes in different AZs. This is the normal default behavior of the Kubernetes scheduler unless one sets affinity rules. It significantly increased cross-AZ traffic, resulting in the inexplicably high bill.

> The Kubernetes scheduler is a control plane process which assigns Pods to Nodes. The scheduler determines which Nodes are valid placements for each Pod in the scheduling queue according to constraints and available resources. The scheduler then ranks each valid Node and binds the Pod to a suitable Node.
>
> Source: kubernetes.io

## Solution to the problem

To get rid of the unnecessary data usage costs, we needed to come up with a pod scheduling pattern. It had to be designed in a way that would optimize our cost usage and wouldn’t affect application availability. After some research, we found out that we could schedule our pods by using nodeAffinity and the default EKS node labels, like topology.kubernetes.io/zone=us-east-1a, which represent the node’s Availability Zone.

- “Hard” affinity - requiredDuringSchedulingIgnoredDuringExecution - means that the scheduler will place the pod only on a node that carries the specified label. If there is no such node, the pod is left in the Pending state.
- “Soft” affinity - preferredDuringSchedulingIgnoredDuringExecution - means that if the scheduler can’t find a node with the specified label, it still schedules the pod on another node.

In our case, soft affinity is the perfect choice. If an emergency occurs in one AZ, the application will stay available because the scheduler can place it on nodes in other zones.

Let’s reproduce this locally. Spin up a minikube cluster with three nodes: one master and two worker nodes.
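The original post shows this step only as a screenshot, so the exact command is an assumption. With minikube v1.10.1 or newer, a multi-node cluster can be started with the --nodes flag; the default profile names the nodes minikube, minikube-m02, and minikube-m03, which matches the node names used below:

```shell
# Start a local cluster with one control plane node and two workers
# (assumes minikube v1.10.1+, which introduced multi-node support)
$ minikube start --nodes 3
```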
Let’s mark the minikube-m02 node with the topology.kubernetes.io/zone=us-east-1a label and minikube-m03 with topology.kubernetes.io/zone=us-east-1b, mimicking the zone labels that EKS sets automatically:

```shell
$ kubectl label nodes minikube-m02 topology.kubernetes.io/zone=us-east-1a
$ kubectl label nodes minikube-m03 topology.kubernetes.io/zone=us-east-1b
```

Create a simple deployment with hard nodeAffinity first, targeting the us-east-1c zone, and check the pod status.
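The manifest appears only as an image in the original post, so the following is a reconstruction rather than the exact file: the deployment name, replica count, and nginx image are assumptions, while the affinity block is the standard hard (requiredDuringSchedulingIgnoredDuringExecution) nodeAffinity form targeting us-east-1c:

```yaml
# hard_affinity_deployment.yml -- a sketch; name, replicas, and image are assumed
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-hard-affinity
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          # Hard requirement: schedule ONLY onto nodes labeled us-east-1c.
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                      - us-east-1c
      containers:
        - name: nginx
          image: nginx:1.25
```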
```shell
$ kubectl apply -f hard_affinity_deployment.yml
```

The pod stays in the Pending state: the scheduler can’t place it because there is no node with the us-east-1c label.

Now, let’s take a look at what happens if we use the same selector, but with soft affinity (preferredDuringSchedulingIgnoredDuringExecution).
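Again, the original manifest is only shown as an image; below is a sketch of the same selector expressed as a soft preference. The deployment name comes from the kubectl output that follows, while the weight, replica count, and image are assumptions:

```yaml
# soft_affinity_deployment.yml -- a sketch; weight, replicas, and image are assumed
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-soft-affinity
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          # Soft preference: favor us-east-1c, but fall back to any
          # schedulable node when no node matches.
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                      - us-east-1c
      containers:
        - name: nginx
          image: nginx:1.25
```

This time the deployment is created, and the scheduler places the pod even though no node carries the us-east-1c label:

```shell
$ kubectl apply -f soft_affinity_deployment.yml
deployment.apps/nginx-deployment-soft-affinity created
```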