<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Multi Datacenter guides on Cozystack</title><link>https://deploy-preview-470--cozystack.netlify.app/docs/v1/operations/stretched/</link><description>Recent content in Multi Datacenter guides on Cozystack</description><generator>Hugo</generator><language>en</language><atom:link href="https://deploy-preview-470--cozystack.netlify.app/docs/v1/operations/stretched/index.xml" rel="self" type="application/rss+xml"/><item><title>Tune etcd timeouts</title><link>https://deploy-preview-470--cozystack.netlify.app/docs/v1/operations/stretched/etcd/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://deploy-preview-470--cozystack.netlify.app/docs/v1/operations/stretched/etcd/</guid><description>&lt;h2 id="potential-problems"&gt;Potential problems&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;etcd&lt;/strong&gt; is a reliable key-value store for critical data and the place where the Kubernetes API server stores its state.
A value is not considered written until a quorum of nodes confirms that it has been persisted. etcd also constantly
checks that a leader exists and that every node knows which one it is. Usually, an etcd cluster runs on nodes in the
same datacenter, where latency is low, and the default timeouts are tuned to report errors as quickly as possible. In a
stretched cluster, where nodes are in different datacenters, the RTT is much higher, and the default timeouts can raise false
alarms as if the network were partitioned. In this case the data is still consistent, but etcd enters read-only mode
to prevent split-brain. This can happen so frequently that many other Kubernetes components start to fail
with non-descriptive and unhelpful error messages.&lt;/p&gt;</description></item><item><title>Kube scheduler configuration</title><link>https://deploy-preview-470--cozystack.netlify.app/docs/v1/operations/stretched/kubeschedulerconfiguration/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://deploy-preview-470--cozystack.netlify.app/docs/v1/operations/stretched/kubeschedulerconfiguration/</guid><description>&lt;h2 id="label-nodes"&gt;Label nodes&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;kubectl label node &amp;lt;nodename&amp;gt; topology.kubernetes.io/zone&lt;span style="color:#666"&gt;=&lt;/span&gt;A
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="create-global-app-topology-spread-constraints"&gt;Create global app topology spread constraints&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;apiVersion&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;v1&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt;&lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;kind&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;ConfigMap&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt;&lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;metadata&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;cozystack-scheduling&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;namespace&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;cozy-system&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt;&lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;data&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;globalAppTopologySpreadConstraints&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;|&lt;span style="color:#4070a0;font-style:italic"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#4070a0;font-style:italic"&gt; topologySpreadConstraints:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#4070a0;font-style:italic"&gt; - maxSkew: 1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#4070a0;font-style:italic"&gt; topologyKey: topology.kubernetes.io/zone
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#4070a0;font-style:italic"&gt; whenUnsatisfiable: DoNotSchedule&lt;/span&gt;&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="configure-podtopologyspread"&gt;Configure PodTopologySpread&lt;/h2&gt;
&lt;p&gt;See the Kubernetes documentation on cluster-level default constraints:
&lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#cluster-level-default-constraints" target="_blank"&gt;https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#cluster-level-default-constraints&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;For a Talm installation, add the following to &lt;code&gt;templates/_helpers.tpl&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;cluster&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt;&lt;/span&gt;&lt;span style="color:#0e84b5;font-weight:bold"&gt;...&lt;/span&gt;&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;scheduler&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;config&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;apiVersion&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;kubescheduler.config.k8s.io/v1beta3&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;kind&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;KubeSchedulerConfiguration&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;profiles&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#062873;font-weight:bold"&gt;schedulerName&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;default-scheduler&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;pluginConfig&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#062873;font-weight:bold"&gt;name&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;PodTopologySpread&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;args&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;defaultConstraints&lt;/span&gt;:&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;- &lt;span style="color:#062873;font-weight:bold"&gt;maxSkew&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#40a070"&gt;1&lt;/span&gt;&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;topologyKey&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;topology.kubernetes.io/zone&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;whenUnsatisfiable&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;ScheduleAnyway&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#bbb"&gt; &lt;/span&gt;&lt;span style="color:#062873;font-weight:bold"&gt;defaultingType&lt;/span&gt;:&lt;span style="color:#bbb"&gt; &lt;/span&gt;List&lt;span style="color:#bbb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Apply changes:&lt;/p&gt;</description></item><item><title>Node labels</title><link>https://deploy-preview-470--cozystack.netlify.app/docs/v1/operations/stretched/labels/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://deploy-preview-470--cozystack.netlify.app/docs/v1/operations/stretched/labels/</guid><description>&lt;h2 id="how-topology-labels-work"&gt;How topology labels work&lt;/h2&gt;
&lt;p&gt;When running a Kubernetes cluster across multiple datacenters, it&amp;rsquo;s important to decide when workloads should be scheduled
close to each other (for example, a database and a backend application) and when they should be spread out (for
example, multiple frontend replicas, database replicas, or volume replicas). The first step to achieving this is
to label the Kubernetes nodes. Public clouds typically use the terms &lt;code&gt;zone&lt;/code&gt; and &lt;code&gt;region&lt;/code&gt;. In Kubernetes, the most common way
to designate a geographical location is to use &lt;code&gt;topology.kubernetes.io/zone&lt;/code&gt; and &lt;code&gt;topology.kubernetes.io/region&lt;/code&gt;
labels (or only the zone one).&lt;/p&gt;</description></item><item><title>LINSTOR DRBD Configuration</title><link>https://deploy-preview-470--cozystack.netlify.app/docs/v1/operations/stretched/drbd-tuning/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://deploy-preview-470--cozystack.netlify.app/docs/v1/operations/stretched/drbd-tuning/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;This guide explains the configuration needed to use LINSTOR storage in a stretched (distributed) Cozystack cluster.&lt;/p&gt;
&lt;p&gt;DRBD (Distributed Replicated Block Device) is a kernel-level block device replication system that works over the network.
The LINSTOR server manages DRBD volumes, including their creation, deletion, and orchestration across nodes.&lt;/p&gt;
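&lt;p&gt;In a stretched cluster it can help to relax DRBD&amp;rsquo;s network timeouts to account for the higher inter-datacenter RTT. As a rough sketch (the property keys and values below are assumptions and should be verified against your LINSTOR and DRBD versions), such options can be set globally through LINSTOR controller properties:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;# Assumed property keys; check them with: linstor controller list-properties
linstor controller set-property DrbdOptions/Net/ping-timeout 30
linstor controller set-property DrbdOptions/Net/connect-int 20
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;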
&lt;h2 id="challenges-of-using-drbd"&gt;Challenges of using DRBD&lt;/h2&gt;
&lt;p&gt;DRBD considers data written only once it has reached a quorum of nodes.
But as it presents itself as a block device to the end user, it must return an error within a given timeout if there are not enough nodes to establish a quorum.&lt;/p&gt;</description></item><item><title>Configuring a Dedicated Network for Distributed Storage with LINSTOR</title><link>https://deploy-preview-470--cozystack.netlify.app/docs/v1/operations/stretched/linstor-dedicated-network/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://deploy-preview-470--cozystack.netlify.app/docs/v1/operations/stretched/linstor-dedicated-network/</guid><description>&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;This guide explains how to improve storage reliability and performance in distributed Cozystack clusters.&lt;/p&gt;
&lt;p&gt;In hyper-converged clusters, it’s common to dedicate a network to storage traffic.
However, it’s not always possible to provision separate storage links between datacenters.&lt;/p&gt;
&lt;p&gt;If you lack dedicated inter-datacenter links for storage, you have two options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;keep the storage nodes in each datacenter isolated from the other datacenters, or&lt;/li&gt;
&lt;li&gt;let storage traffic share the existing uplinks with other workloads.&lt;/li&gt;
&lt;/ul&gt;
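&lt;p&gt;A minimal sketch of registering a dedicated in-datacenter storage interface in LINSTOR (the node name, interface name, and address are placeholders; verify the &lt;code&gt;PrefNic&lt;/code&gt; property against your LINSTOR version):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;# Register a second network interface for the node in LINSTOR
linstor node interface create node1 storage 10.10.0.1
# Prefer it for replication traffic originating from this node
linstor node set-property node1 PrefNic storage
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;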
&lt;p&gt;This guide shows how to configure LINSTOR to use a dedicated network for storage traffic within each datacenter,
while falling back to shared links between datacenters when needed.&lt;/p&gt;</description></item><item><title>SeaweedFS Multi-DC Configuration</title><link>https://deploy-preview-470--cozystack.netlify.app/docs/v1/operations/stretched/seaweedfs-multidc/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://deploy-preview-470--cozystack.netlify.app/docs/v1/operations/stretched/seaweedfs-multidc/</guid><description>&lt;p&gt;This guide explains how to deploy SeaweedFS over several data centers (&amp;ldquo;multi-DC&amp;rdquo;).
Multi-zone configuration for SeaweedFS is available since Cozystack v0.34.0.&lt;/p&gt;
&lt;h2 id="seaweedfs-multi-dc-configuration"&gt;SeaweedFS Multi-DC Configuration&lt;/h2&gt;
&lt;p&gt;To span SeaweedFS over several DCs, create a new cluster in multi-DC mode.&lt;/p&gt;
&lt;p&gt;By default, SeaweedFS runs in a single data center (DC), and a running single-DC deployment cannot be switched to multi-DC mode.
If you need to change the topology, delete the current SeaweedFS instance and create a new one with the desired mode.&lt;/p&gt;</description></item></channel></rss>