
Commit 04b8215

Merge pull request circleci#2311 from circleci/fix-up-server-docs
Fix up server docs
2 parents 316f710 + ecaaba1 commit 04b8215

4 files changed (+41 -33 lines)


jekyll/_cci2/aws.md

Lines changed: 4 additions & 2 deletions
@@ -48,7 +48,7 @@ The following additional settings are required to support using private subnets
 Have available the following information and policies before starting the Preview Release installation:
 
 * If you use network proxies, contact your Account team before attempting to install CircleCI 2.0.
-* Plan to provision at least two AWS instances, one for the Services and one for your first set of Builders. Best practice is to use an `m4.2xlarge` instance with 8 CPUs and 32GB RAM for the Services as well as Builders instances.
+* Plan to provision at least two AWS instances, one for the Services and one for your first set of Nomad Clients. Best practice is to use an `m4.2xlarge` instance with 8 vCPUs and 32GB RAM for both the Services and Nomad Client instances.
 * AWS instances must have outbound access to pull Docker containers and to verify your license.
 * In order to provision the required AWS entities with Terraform, you need an IAM User with the following permissions:
 ```
@@ -105,7 +105,9 @@ Have available the following information and policies before starting the Previe
 ## Installation with Terraform
 1. Clone the [Setup](https://github.com/circleci/enterprise-setup) repository (if you already have it cloned, make sure it is up-to-date and you are on the `master` branch: `git checkout master && git pull`).
 2. Run `make init` to initialize the `terraform.tfvars` file (your previous `terraform.tfvars`, if any, will be backed up in the same directory).
-3. Fill `terraform.tfvars` with appropriate values.
+3. Fill `terraform.tfvars` with appropriate AWS values for section 1.
+4. Specify a `circle_secret_passphrase` in section 2, replacing `...` with alphanumeric characters. The passphrase cannot be empty.
+5. Specify the instance type for your Nomad Clients. By default, the value specified in the `terraform.tfvars` file for Nomad Clients is `m4.2xlarge` (8 vCPUs, 32GB RAM). To increase the number of concurrent CircleCI jobs that each Nomad Client can run, modify section 2 of the `terraform.tfvars` file to specify a larger `nomad_client_instance_type`. Refer to the AWS [Amazon EC2 Instance Types](https://aws.amazon.com/ec2/instance-types) guide for details. **Note:** The `builder_instance_type` is only used for 1.0 and is disabled by default in section 3.
 4. Run `terraform apply` to provision.
 5. Go to the provided URL at the end of Terraform output and follow the instructions.
 6. Enter your license.
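The `terraform.tfvars` steps above can be illustrated with a hypothetical fragment; only the variable names `circle_secret_passphrase` and `nomad_client_instance_type` come from the steps themselves, and the values are illustrative placeholders:

```hcl
# Hypothetical terraform.tfvars sketch -- values are placeholders.

# Section 2: must be non-empty alphanumeric characters (replaces the default `...`).
circle_secret_passphrase = "0123456789abcdef"

# Section 2: instance type for the Nomad Clients (default is m4.2xlarge);
# a larger type increases the number of concurrent CircleCI jobs per client.
nomad_client_instance_type = "m4.4xlarge"
```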

jekyll/_cci2/high-availability.md

Lines changed: 12 additions & 13 deletions
@@ -295,9 +295,9 @@ Vault should be setup as follows:
 
 1. Pull down vault. No higher than 0.7 currently:
 
-2. Put the vault binary somewhere on $PATH (optional but makes life easier)
+2. Put the vault binary somewhere on `$PATH` as a best practice.
 
-3. Create a config file like vault.hcl with the following:
+3. Create a `vault.hcl` config file with the following:
 
 ```
 storage "file" { # Note: This can be set to consul if they are using HashiCorp's consul for HA
@@ -311,28 +311,27 @@ listener "tcp" {
 }
 ```
 
-4. Start vault : `sudo vault server -config=/path/to/vault.hcl & `
+4. Start vault by running `sudo vault server -config=/path/to/vault.hcl &`.
 
-**Note:** You'll only need to do the below if you are setting up Vault as a test instance with HTTP
+**Note:** You'll only need to do the following if you are setting up vault as a test instance with HTTP.
 
-5. `export VAULT_ADDR=http://127.0.0.1:8200`
+5. Run `export VAULT_ADDR=http://127.0.0.1:8200`.
 
-6. `sudo vault init`
+6. Run `sudo vault init`.
 
 7. Copy the unseal keys and the root key. You'll need these values.
 
-8. Unseal vault using: `sudo vault unseal` . You'll have to run this command 3 times using 3 different unseal keys.
+8. Unseal vault using `sudo vault unseal`. You'll have to run this command three times using three different unseal keys.
 
-9. Now you need to auth. Run: `sudo vault auth` The token here should be the root token that you copied earlier.
+9. Now you need to authenticate by running `sudo vault auth`. The token here should be the root token that you copied earlier.
 
-10. Once authed, you should now need to mount the transit mount: `sudo vault mount transit`
+10. Once authenticated, mount the transit backend by running `sudo vault mount transit`.
 
-11. For CircleCI, you'll need to generate a token that can be renewed. You can generate this by running the following: `sudo vault token-create -period="1h"` . Use the generated token as your vault token, that you'll need below.
+11. For CircleCI, generate a token that can be renewed by running `sudo vault token-create -period="1h"`. Use the generated token as your vault token, which you will also need below.
 
-12. Seal vault: `sudo vault seal`
-
-Now, just proceed to Configuring Replicated, and you should be almost done with setting up CircleCI in HA mode.
+12. Seal vault by running `sudo vault seal`.
 
+Proceed to Configuring Replicated to continue setting up CircleCI in HA mode.
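The test-instance steps can be collected into a single session; this is a sketch using only the commands listed in the steps above, assuming a local HTTP vault and interactive entry of the unseal keys and root token:

```shell
# Sketch of the HTTP test-instance setup (steps 4-11 above).
sudo vault server -config=/path/to/vault.hcl &   # step 4: start vault

export VAULT_ADDR=http://127.0.0.1:8200          # step 5: HTTP test instance only

sudo vault init                                  # step 6: prints unseal keys and root token
sudo vault unseal                                # step 8: run three times, each with
sudo vault unseal                                #         a different unseal key
sudo vault unseal
sudo vault auth                                  # step 9: paste the root token
sudo vault mount transit                         # step 10: mount the transit backend
sudo vault token-create -period="1h"             # step 11: renewable token for CircleCI
```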
## Configuring Replicated

jekyll/_cci2/monitoring.md

Lines changed: 15 additions & 8 deletions
@@ -1,34 +1,41 @@
 ---
 layout: classic-docs
-title: "Administrative Variables, Monitoring, and Logging"
+title: "Environment Variables, Auto Scaling, and Monitoring"
 category: [administration]
 order: 30
 ---
 
-This document is for System Administrators who are setting environment variables for installed Builders, gathering metrics for monitoring their CircleCI installation, and viewing logs:
+This document is for System Administrators who are setting environment variables for installed Nomad Clients, scaling their cluster, gathering metrics for monitoring their CircleCI installation, and viewing logs:
 
 * TOC
 {:toc}
 
-## Setting Environment Variables on Builders
+## Setting Environment Variables on Nomad Clients
 
-Several aspects of CircleCI Builder behavior can be customized by passing
+Several aspects of CircleCI Nomad Client behavior can be customized by passing
 environment variables into the builder process.
 
-If you are using the [trial]({{site.baseurl}}/2.0/single-box/) installation
-option on a single VM, then you can create a file called `/etc/circle-installation-customizations`
-with entries like `export CIRCLE_OPTION_A=foo` to set environment variables.
+To set environment variables, create a file called `/etc/circle-installation-customizations`
+with environment variable entries, for example, `export CIRCLE_OPTION_A=foo`.
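A minimal sketch of the customizations file described above; `CIRCLE_OPTION_A=foo` is the example entry from the text, and real variable names depend on your installation:

```shell
# /etc/circle-installation-customizations
# Each entry is an exported environment variable passed into the builder process.
export CIRCLE_OPTION_A=foo
```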

 ## System Monitoring
 
 Enable CloudWatch by going to Replicated Admin > Settings > Monitoring > Enable Cloudwatch. **Note:** CloudWatch does **not** support monitoring of macOS containers.
 
 CloudWatch already monitors the health and basic checks for the EC2 instances, for example, CPU, memory, disk space, and basic counts with alerts. Consider upgrading machine types for the Services instance or decreasing the number of containers per instance if CPU or memory become a bottleneck.
 
+## Auto Scaling
+
+By default, an Auto Scaling group is created on your AWS account. Go to your EC2 Dashboard and select Auto Scaling Groups from the left side menu. Then, in the Instances tab, set the Desired and Minimum numbers to define the number of Nomad Clients to spin up and keep available. Use the Scaling Policy tab of the Auto Scaling page to scale up your group automatically only at certain times; see below for best practices for defining policies.
+
+Refer to the Shutting Down a Nomad Client section of the [Nomad]({{ site.baseurl }}/2.0/nomad/#shutting-down-a-nomad-client) document for instructions on draining and scaling down the Nomad Clients.
+
+### Auto Scaling Policy Best Practices
+
 There is a [blog post series](https://circleci.com/blog/mathematical-justification-for-not-letting-builds-queue/)
 wherein CircleCI engineering spent time running simulations of cost savings for the purpose of developing a general set of best practices for Auto Scaling. Consider the following best practices when setting up AWS Auto Scaling:
 
-1. In general, size your build cluster large enough to avoid queueing builds. That is, less than one second of queuing for most workloads and less than 10 seconds for workloads run on expensive hardware or at highest parallellism. Sizing to reduce queuing to zero is best practice because of the high cost of developer time, it is difficult to create a model in which developer time is cheap enough for under-provisioning to be cost-effective.
+1. In general, size your cluster large enough to avoid queueing builds. That is, less than one second of queuing for most workloads and less than 10 seconds for workloads run on expensive hardware or at highest parallelism. Sizing to reduce queuing to zero is best practice; because of the high cost of developer time, it is difficult to create a model in which developer time is cheap enough for under-provisioning to be cost-effective.
 
 2. Create an Auto Scaling group with a Step Scaling policy that scales up during the normal working hours of the majority of developers and scales back down at night. Scaling up during the weekday normal working hours and back down at night is the best practice to keep queue times down during peak development without over-provisioning at night when traffic is low. Looking at millions of builds over time, a bell curve during normal working hours emerges for most data sets.
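The console steps in the Auto Scaling section above can also be scripted; this is a sketch using the AWS CLI, where the group name `nomad-clients-asg`, the capacities, and the cron schedules are illustrative placeholders:

```shell
# Set Minimum and Desired capacity for the Nomad Client group (name is a placeholder).
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name nomad-clients-asg \
  --min-size 2 --desired-capacity 4

# Scheduled actions approximating the best practice above: scale up on weekday
# mornings and back down at night (recurrence fields are cron syntax, in UTC).
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name nomad-clients-asg \
  --scheduled-action-name scale-up-workday \
  --recurrence "0 13 * * 1-5" --desired-capacity 4

aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name nomad-clients-asg \
  --scheduled-action-name scale-down-night \
  --recurrence "0 1 * * 2-6" --desired-capacity 1
```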

jekyll/_cci2/nomad.md

Lines changed: 10 additions & 10 deletions
@@ -63,15 +63,20 @@ Complete the following steps to get logs from the allocation of the specified jo
 
 1. Get the logs from the allocation with `nomad logs -stderr <allocation-id>`
 
+<!---
 ## Scaling the Nomad Cluster
-
 Nomad itself does not provide a scaling method for cluster, so you must implement one. This section provides basic operations regarding scaling a cluster.
+--->
 
 ### Scaling Up the Client Cluster
 
-Scaling up Nomad cluster is very straightforward. To scale up, you need to register new Nomad clients into the cluster. If a Nomad client knows the IP addresses of Nomad servers, then the client can register to the cluster automatically.
+Refer to the Auto Scaling section of the [Administrative Variables, Monitoring, and Logging](https://circleci.com/docs/2.0/monitoring/#auto-scaling) document for details about adding Nomad Client instances to an AWS auto scaling group and using a scaling policy to scale up automatically according to your requirements.
 
+<!---
+commenting until we have non-aws installations?
+Scaling up Nomad cluster is very straightforward. To scale up, you need to register new Nomad clients into the cluster. If a Nomad client knows the IP addresses of Nomad servers, then the client can register to the cluster automatically.
 HashiCorp recommends using Consul or other service discovery mechanisms to make this more robust in production. For more information, see the following pages in the official documentation for [Clustering](https://www.nomadproject.io/intro/getting-started/cluster.html), [Service Discovery](https://www.nomadproject.io/docs/service-discovery/index.html), and [Consul Integration](https://www.nomadproject.io/docs/agent/configuration/consul.html).
+--->
 
 ### Shutting Down a Nomad Client

@@ -85,15 +90,10 @@ When you want to shutdown a Nomad client, you must first set the client to `drai
 
 `nomad node-status -self`
 
-Alternatively, you can drain a remote node with `nomad node-drain -enable -yes <node-id>`
+Alternatively, you can drain a remote node with `nomad node-drain -enable -yes <node-id>`.
 
 ### Scaling Down the Client Cluster
 
-To scale your Nomad cluster properly, you need a mechanism for clients to shutdown in `drain` mode first. Then, wait for all jobs to be finished before terminating the client.
-
-While there are many ways to achieve this, here is one example of implementing such mechanism by using AWS and ASG.
+To set up a mechanism for clients to shut down in `drain` mode first and wait for all jobs to finish before terminating the client, configure an ASG Lifecycle Hook that triggers a script when scaling down instances.
 
-1. Configure ASG Lifecycle Hook that triggers a script when scaling down instances.
-2. The script makes the instance in drain mode.
-3. The script monitors running jobs on the instance and waits for them to finish.
-4. Terminate the instance.
+The script should use the above commands to put the instance in drain mode, monitor running jobs on the instance, wait for them to finish, and then terminate the instance.
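A lifecycle-hook script along these lines could be sketched as follows. The `nomad node-drain`/`node-status` commands come from this document; the hook and group names, the parsing of the status output, and the `aws autoscaling complete-lifecycle-action` step are assumptions about one possible implementation:

```shell
#!/bin/sh
# Sketch of an ASG lifecycle-hook drain script; names are placeholders.
set -eu

# 1. Put this instance into drain mode so no new jobs are scheduled on it
#    (assumes the ID field of `nomad node-status -self` output).
NODE_ID=$(nomad node-status -self | awk '/^ID/ {print $NF}')
nomad node-drain -enable -yes "$NODE_ID"

# 2. Wait until the node reports no running allocations.
while nomad node-status -self | grep -q running; do
  sleep 10
done

# 3. Tell the Auto Scaling group the instance is safe to terminate.
aws autoscaling complete-lifecycle-action \
  --lifecycle-hook-name nomad-drain-hook \
  --auto-scaling-group-name nomad-clients-asg \
  --lifecycle-action-result CONTINUE \
  --instance-id "$(curl -s http://169.254.169.254/latest/meta-data/instance-id)"
```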
