# AutoPilot Pattern MongoDB

*A robust and highly scalable implementation of MongoDB in Docker using the Autopilot Pattern*

## Architecture

A running cluster includes the following components:
- [ContainerPilot](https://www.joyent.com/containerpilot): included in our MongoDB containers to orchestrate bootstrap behavior and coordinate replica joining, using keys and checks stored in Consul from the `health` and `onChange` handlers (see the sketch after this list)
- [MongoDB](https://www.mongodb.com/community): we're using MongoDB 3.2 and setting up a [replica set](https://docs.mongodb.com/manual/replication/)
- [Consul](https://www.consul.io/): used to coordinate replication and failover
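
As a rough illustration of that coordination (not the project's actual handler code), here is a minimal Python sketch of a `health` handler that verifies the local `mongod` is answering and, if this node is currently the primary, advertises itself under an assumed `mongodb-primary` key in Consul's KV store:

```python
# Hypothetical ContainerPilot health-handler sketch; the real project's
# handler and Consul key name may differ.
import os
import sys

import consul                      # python-consul
from pymongo import MongoClient

CONSUL_HOST = os.environ.get('CONSUL', 'consul')
PRIMARY_KEY = 'mongodb-primary'    # assumed key name, for illustration only


def health():
    client = MongoClient('localhost', 27017, serverSelectionTimeoutMS=2000)
    # 'ping' raises if the local mongod isn't answering, which fails the check
    client.admin.command('ping')

    # if this node is currently the primary, advertise it in Consul
    is_master = client.admin.command('isMaster')
    if is_master.get('ismaster') and is_master.get('me'):
        consul.Consul(host=CONSUL_HOST).kv.put(PRIMARY_KEY, is_master['me'])


if __name__ == '__main__':
    try:
        health()
    except Exception as err:
        print(err, file=sys.stderr)
        sys.exit(1)   # a non-zero exit marks this instance unhealthy
```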

## Running the cluster

Starting a new cluster is easy once you have [your `_env` file set with the configuration details](#configuration):

- for Triton, just run `docker-compose up -d`
- for non-Triton, just run `docker-compose -f local-compose.yml up -d`

In a few moments you'll have a running MongoDB instance ready to form a replica set. Both the master and replicas are described as a single `docker-compose` service. During startup, [ContainerPilot](http://containerpilot.io) will ask Consul if an existing master has been created. If not, the node will initialize a new MongoDB replica set, and all future nodes will be added to the replica set by the current master. Master election is handled by [MongoDB itself](https://docs.mongodb.com/manual/core/replica-set-elections/) and the result is cached in Consul.
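
That bootstrap decision can be sketched roughly as follows, assuming a `mongodb-primary` key in Consul and pymongo; the project's actual bootstrap scripts may differ:

```python
# Hypothetical bootstrap sketch: initiate a replica set only if Consul does
# not already know about a primary. Key name, set name, and defaults are
# assumptions, not the project's actual code.
import os
import socket

import consul                      # python-consul
from pymongo import MongoClient

PRIMARY_KEY = 'mongodb-primary'    # assumed key name
cns = consul.Consul(host=os.environ.get('CONSUL', 'consul'))
_, existing = cns.kv.get(PRIMARY_KEY)

local = MongoClient('localhost', 27017)
if existing is None:
    # no primary recorded yet: this node initiates a new one-member replica set
    local.admin.command('replSetInitiate', {
        '_id': 'rs0',              # assumed replica set name
        'members': [{'_id': 0, 'host': socket.gethostname() + ':27017'}],
    })
else:
    # a primary already exists; it is responsible for adding this node to the
    # replica set (e.g. by reconfiguring the set from the primary side)
    pass
```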

**Run `docker-compose -f local-compose.yml scale mongodb=2` to add a replica (or more than one!)**. The replicas will automatically be added to the replica set on the master and will register themselves in Consul as replicas once they're ready.
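
Once the new containers are up, you can confirm that they joined; for example, this quick pymongo check (not part of the project's tooling) lists the members and their states:

```python
# Quick check: list replica set members and their states from any member.
# Adjust the host/port to point at one of your running containers.
from pymongo import MongoClient

client = MongoClient('localhost', 27017)
status = client.admin.command('replSetGetStatus')
for member in status['members']:
    print(member['name'], member['stateStr'])   # e.g. PRIMARY / SECONDARY
```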

### Configuration

Pass these variables via an `_env` file.

- `LOG_LEVEL`: controls the amount of logging from ContainerPilot
- when the primary node is sent a `SIGTERM` it will [step down](https://docs.mongodb.com/manual/reference/command/replSetStepDown/) as primary; the following variables control the timeouts for that process (see the sketch after this list)
  - `MONGO_SECONDARY_CATCHUP_PERIOD`: the number of seconds that the mongod will wait for an electable secondary to catch up to the primary
  - `MONGO_STEPDOWN_TIME`: the number of seconds to step down the primary, during which time the stepdown member is ineligible for becoming primary
  - `MONGO_ELECTION_TIMEOUT`: after the primary steps down, the number of times the node checks that a new primary has been elected before it shuts down
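
To make those three variables concrete, here is a rough sketch of what a `SIGTERM` handler might do with them; the defaults and flow shown are assumptions, not the project's actual code:

```python
# Hypothetical SIGTERM handler sketch showing how the three variables might
# be used; defaults and structure are assumptions, not the project's code.
import os
import time

from pymongo import MongoClient
from pymongo.errors import AutoReconnect

catchup = int(os.environ.get('MONGO_SECONDARY_CATCHUP_PERIOD', '8'))
stepdown = int(os.environ.get('MONGO_STEPDOWN_TIME', '60'))
election_retries = int(os.environ.get('MONGO_ELECTION_TIMEOUT', '30'))

client = MongoClient('localhost', 27017)
try:
    # ask mongod to step down; it drops client connections when it does,
    # so the AutoReconnect raised here is expected
    client.admin.command('replSetStepDown', stepdown,
                         secondaryCatchUpPeriodSecs=catchup)
except AutoReconnect:
    pass

# check up to MONGO_ELECTION_TIMEOUT times that another member has become
# primary before letting this node shut down
for _ in range(election_retries):
    status = client.admin.command('replSetGetStatus')
    if any(m['stateStr'] == 'PRIMARY' and not m.get('self')
           for m in status['members']):
        break
    time.sleep(1)
```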

Not yet implemented:
- `MANTA_URL`: the full Manta endpoint URL. (ex. `https://us-east.manta.joyent.com`)
- `MANTA_USER`: the Manta account name.
- `MANTA_SUBUSER`: the Manta subuser account name, if any.
- `MANTA_ROLE`: the Manta role name, if any.
- `MANTA_KEY_ID`: the MD5-format ssh key id for the Manta account/subuser (ex. `1a:b8:30:2e:57:ce:59:1d:16:f6:19:97:f2:60:2b:3d`); the included `setup.sh` will encode this automatically
- `MANTA_PRIVATE_KEY`: the private ssh key for the Manta account/subuser; the included `setup.sh` will encode this automatically
- `MANTA_BUCKET`: the path on Manta where backups will be stored (ex. `/myaccount/stor/triton-mysql`); the bucket must already exist and be writeable by the `MANTA_USER`/`MANTA_PRIVATE_KEY`

### Sponsors

Initial development of this project was sponsored by [Joyent](https://www.joyent.com).