

It is possible for tasks to run late or at an inconsistent schedule. This is usually a symptom of the specific usage or scaling strategy of the cluster in question. To address these issues, tweak the Kibana Task Manager settings or the cluster scaling strategy to better suit your use case.

For details on the settings that can influence the performance and throughput of Task Manager, see Task Manager Settings.

For detailed troubleshooting guidance, see Troubleshooting.

Deployment considerations

Elasticsearch and Kibana instances use the system clock to determine the current time. To ensure schedules are triggered when expected, synchronize the clocks of all nodes in the cluster using a time service such as Network Time Protocol.

How you deploy Kibana largely depends on your use case. Predicting the throughput that a deployment might require to support Task Management is difficult, because features can schedule an unpredictable number of tasks at a variety of cadences. However, there is a relatively straightforward method you can follow to produce a rough estimate based on your expected usage.

Default scale

By default, Kibana polls for tasks at a rate of 10 tasks every 3 seconds. This means that you can expect a single Kibana instance to support up to 200 tasks per minute (200/tpm).

In practice, a Kibana instance will only achieve the upper bound of 200/tpm if the duration of task execution is below the polling rate of 3 seconds. For the most part, the duration of tasks is below that threshold, but it can vary greatly as Elasticsearch and Kibana usage grow and task complexity increases (such as alerts executing heavy queries across large datasets).

By estimating a rough throughput requirement, you can estimate the number of Kibana instances required to reliably execute tasks in a timely manner and match the deployment to the required scale. For example, if you expect roughly 500 tasks per minute, you would need at least three Kibana instances at the default rate of 200/tpm.

For details on monitoring the health of Kibana Task Manager, follow the guidance in Health monitoring.

Scaling horizontally

At times, the sustainable approach might be to expand the throughput of your cluster by provisioning additional Kibana instances.

By default, each additional Kibana instance adds another 10 tasks that your cluster can run concurrently, but you can also scale each Kibana instance vertically if your diagnosis indicates that it can handle the additional workload.

Scaling vertically

Other times, it might be preferable to increase the throughput of individual Kibana instances.

Tweak the Max Workers via the xpack.task_manager.max_workers setting, which allows each Kibana instance to pull a higher number of tasks per interval. This could impact the performance of each Kibana instance, as the workload will be higher.

Tweak the Poll Interval via the xpack.task_manager.poll_interval setting, which allows each Kibana instance to pull scheduled tasks at a higher rate. This could impact the performance of the Elasticsearch cluster, as the workload will be higher.

A minimal kibana.yml sketch of these two settings appears at the end of this section.

Choosing a scaling strategy

Each scaling strategy comes with its own considerations, and the appropriate strategy largely depends on your use case.

Scaling Kibana instances vertically causes higher resource usage in each Kibana instance, as it will perform more concurrent work.

Scaling Kibana instances horizontally requires a higher degree of coordination, which can impact overall performance.

A recommended strategy is to follow these steps:
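For reference, the following is a minimal kibana.yml sketch of the two vertical-scaling settings discussed above. The tuned values are illustrative assumptions only, not recommendations; the defaults noted in the comments (10 workers, a 3000 ms poll interval) correspond to the 10-tasks-every-3-seconds rate described under Default scale.

```yaml
# kibana.yml -- a minimal sketch of the vertical-scaling settings discussed above.
# The values below are illustrative assumptions, not recommendations.

# Default: 10. A higher value lets each Kibana instance run more tasks
# concurrently, which increases the workload on that instance.
xpack.task_manager.max_workers: 20

# Default: 3000 (milliseconds). A lower value makes each Kibana instance poll
# for scheduled tasks more often, which increases the workload on Elasticsearch.
xpack.task_manager.poll_interval: 1500
```

As a rule of thumb, raise the max workers only when the Kibana hosts have spare resources to perform the extra concurrent work, and shorten the poll interval only if the Elasticsearch cluster can absorb the additional load.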
