---
layout: classic-docs
title: "Environment Variables, Auto Scaling, and Monitoring"
category: [administration]
order: 30
---

This document is for System Administrators who are setting environment variables for installed Nomad Clients, scaling their cluster, gathering metrics for monitoring their CircleCI installation, and viewing logs:

* TOC
{:toc}

## Setting Environment Variables on Nomad Clients

Several aspects of CircleCI Nomad Client behavior can be customized by passing
environment variables into the Nomad Client process.

To set environment variables, create a file called `/etc/circle-installation-customizations`
with environment variable entries, for example, `export CIRCLE_OPTION_A=foo`.
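
For illustration, a minimal `/etc/circle-installation-customizations` file might look like the sketch below. `CIRCLE_OPTION_A` is taken from the example above and stands in for whatever variables your installation actually needs; it is not itself a documented setting:

```bash
# /etc/circle-installation-customizations
# Each entry is a plain shell export statement.

# Placeholder variable name from the example in this document:
export CIRCLE_OPTION_A=foo
```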

## System Monitoring

Enable CloudWatch by going to Replicated Admin > Settings > Monitoring > Enable CloudWatch. **Note:** CloudWatch does **not** support monitoring of macOS containers.

CloudWatch already monitors the health and basic checks for the EC2 instances, for example, CPU, memory, disk space, and basic counts with alerts. Consider upgrading the machine type for the Services instance or decreasing the number of containers per instance if CPU or memory become a bottleneck.
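
As a quick spot check outside the CloudWatch console, you can pull a single metric with the AWS CLI. This is only an illustrative sketch; the region, instance ID, and time range are placeholders to replace with your own values:

```bash
# Average CPU utilization for one Nomad Client instance, sampled in 5-minute periods.
# The region, instance ID, and timestamps below are placeholders.
aws cloudwatch get-metric-statistics \
  --region us-east-1 \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2024-01-01T08:00:00Z \
  --end-time 2024-01-01T09:00:00Z \
  --period 300 \
  --statistics Average
```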

## Auto Scaling

By default, an Auto Scaling group is created in your AWS account. Go to your EC2 Dashboard and select Auto Scaling Groups from the left side menu. Then, in the Instances tab, set the Desired and Minimum numbers to define the number of Nomad Clients to spin up and keep available. Use the Scaling Policy tab of the Auto Scaling page to scale up your group automatically only at certain times; see the best practices for defining policies below.
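
If you prefer the command line to the EC2 console, the same capacity adjustment can be made with the AWS CLI. A minimal sketch, assuming an Auto Scaling group named `circleci-nomad-clients` (substitute the group name created in your account) and example capacities:

```bash
# Keep at least 2 Nomad Clients running and target 4 during normal operation.
# The group name and capacity numbers are placeholders.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name circleci-nomad-clients \
  --min-size 2 \
  --max-size 8 \
  --desired-capacity 4
```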

Refer to the Shutting Down a Nomad Client section of the [Nomad]({{ site.baseurl }}/2.0/nomad/#shutting-down-a-nomad-client) document for instructions on draining and scaling down the Nomad Clients.

### Auto Scaling Policy Best Practices

There is a [blog post series](https://circleci.com/blog/mathematical-justification-for-not-letting-builds-queue/)
wherein CircleCI engineering ran simulations of cost savings to develop a general set of best practices for Auto Scaling. Consider the following best practices when setting up AWS Auto Scaling:

1. In general, size your cluster large enough to avoid queueing builds. That is, less than one second of queuing for most workloads and less than 10 seconds for workloads run on expensive hardware or at highest parallelism. Sizing to reduce queuing to zero is best practice because of the high cost of developer time; it is difficult to create a model in which developer time is cheap enough for under-provisioning to be cost-effective.

2. Create an Auto Scaling group with a Step Scaling policy that scales up during the normal working hours of the majority of developers and scales back down at night. Scaling up during weekday working hours and back down at night is the best practice to keep queue times down during peak development without over-provisioning at night when traffic is low. Looking at millions of builds over time, a bell curve during normal working hours emerges for most data sets. A sketch of one way to automate this schedule follows this list.
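
As an illustration of the working-hours pattern in item 2, the sketch below uses AWS scheduled scaling actions rather than a metric-driven Step Scaling policy; it is one possible setup, not the required one, and the group name, capacities, times (UTC), and cron expressions are placeholders to adapt to your own traffic pattern:

```bash
# Scale the Nomad Client group up on weekday mornings (08:00 UTC, Monday-Friday).
# Group name and capacity numbers are placeholders.
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name circleci-nomad-clients \
  --scheduled-action-name scale-up-workday \
  --recurrence "0 8 * * 1-5" \
  --min-size 4 \
  --max-size 12 \
  --desired-capacity 8

# Scale back down in the evening (20:00 UTC) when build traffic drops off.
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name circleci-nomad-clients \
  --scheduled-action-name scale-down-evening \
  --recurrence "0 20 * * 1-5" \
  --min-size 1 \
  --max-size 12 \
  --desired-capacity 2
```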