# Brig

:speedboat: Brig is a simple way to declaratively provision and run containers for your apps and supporting services.

## What?

Brig is basically Foreman and Procfiles on steroids. :muscle:

You specify your environment, processes, and services:
```ruby
app 'walter'

env :development, {
  # App and Rack
  'HOST' => 'localhost',
  'PORT' => 80,
  'RACK_ENV' => 'development',

  # Puma
  'PUMA_WORKERS' => 1,
  'PUMA_MIN_THREADS' => 1,
  'PUMA_MAX_THREADS' => 1,

  # Sidekiq
  'SIDEKIQ_THREADS' => 1,

  # Redis
  'SILO_REDIS_URL' => '...',
  'SILO_REDIS_CONNECTIONS' => 1,

  # PostgreSQL
  'SILO_DATABASE_URL' => '...',
  'SILO_DATABASE_CONNECTIONS' => 1,

  # Email
  'SMTP_HOST' => '...',
  'SMTP_PORT' => '...',
  'SMTP_USER' => '...',
  'SMTP_PASSWORD' => '...',

  # Storage
  'S3_HOST' => '...',
  'S3_BUCKET' => '...',
  'S3_ACCESS_KEY' => '...',
  'S3_SECRET_ACCESS_KEY' => '...'
}

process :web, 'bundle exec puma -C config/puma.rb', expose: 80
process :worker, 'bundle exec sidekiq -C config/sidekiq.yml'

service :redis, :redis do
  databases 1
end

service :db, :postgres do
  admin 'postgres', 'postgres'
  user 'app', '1ba4e8a57ecaa892a882e37a8475e3c2'

  database 'app', encoding: :utf8 do
    owner 'app'
    grant :all, 'app'
    extensions [:hstore, :plv8]
  end
end

deploy :default, :digital_ocean do
  credentials token: ENV['DIGITALOCEAN_DEPLOY_TOKEN']

  schedule :web do
    cpus min: 4
    memory min: 2.gigabytes

    congruent :worker
    conflicts :redis, :db
  end

  schedule :worker do
    cpus min: 4
    memory min: 2.gigabytes

    congruent :web
    conflicts :redis, :db
  end

  schedule :redis do
    cpus min: 1
    memory min: 1.gigabyte

    conflicts :db
  end

  schedule :db do
    cpus min: 4
    memory min: 4.gigabytes
  end
end
```
Brig packages them all up and runs them in containers; when you're happy, you can deploy (and scale) to AWS, DigitalOcean, and Linode.
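For a sense of how a declarative file like this can be consumed, the sketch below shows one way a plain-Ruby DSL of this shape might be evaluated. It is purely illustrative and says nothing about Brig's actual internals: the `Manifest` class and its methods are assumptions, and it covers only `app`, `env`, `process`, and `service` (a real loader would also need `deploy` and numeric helpers such as `2.gigabytes`).

```ruby
# Hypothetical sketch: a minimal recorder for a Brigfile-style DSL.
# This is NOT Brig's implementation; it only illustrates the plain-Ruby
# mechanism (instance_eval) that files like the one above rely on.
class Manifest
  attr_reader :name, :envs, :processes, :services

  def initialize
    @envs      = {}
    @processes = {}
    @services  = {}
  end

  # Each top-level declaration simply records its arguments.
  def app(name)
    @name = name
  end

  def env(stage, variables)
    @envs[stage] = variables
  end

  def process(name, command, expose: nil)
    @processes[name] = { command: command, expose: expose }
  end

  def service(name, kind, &block)
    @services[name] = { kind: kind, config: block }
  end

  # Evaluate a config file in the context of a fresh manifest.
  def self.load(path)
    manifest = new
    manifest.instance_eval(File.read(path), path)
    manifest
  end
end

# Usage with inline declarations (loading a file works the same way):
manifest = Manifest.new
manifest.instance_eval do
  app 'walter'
  process :web, 'bundle exec puma -C config/puma.rb', expose: 80
end
manifest.processes[:web][:expose] # => 80
```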
## Why?

The vast majority of web applications and services never need extreme scale, so simplicity wins. Also, Docker Compose leaves me wanting.
## Is it production ready?

I dunno — roll some dice.