Using the DigitalOcean metadata service, administrators can provide instructions that allow new servers to configure themselves automatically. While this is useful, many organizations like to handle all of their infrastructure configuration within a configuration management tool like Chef or Puppet.
In this guide, we will demonstrate how to bootstrap a DigitalOcean server using the metadata service and CloudInit to connect to an existing configuration management deployment. The actual configuration of the server can then be handled by the config management service. We will demonstrate how to bootstrap both Chef and Puppet nodes.
In order to complete this guide, you will need some familiarity with the DigitalOcean metadata service. You can find out more about how to enter information into and retrieve information from the metadata service in this guide.
This guide will leverage a type of script called cloud-config
that is consumed at first boot by the CloudInit service on your Droplet in order to perform first-run configuration. You should get some basic familiarity with cloud-config
scripts, their syntax, and behavior in order to better understand how to modify the scripts presented in this guide. You can find an introduction to cloud-config scripting here. For a more practical example (along with some discussion on the limitations of the format), you can read our guide on performing some basic tasks using cloud-config here.
Using the DigitalOcean metadata service, you can easily hook your new servers into an existing Chef-controlled infrastructure with cloud-config
scripting.
To add your new server to this system, you must already have a Chef server configured that your new server can contact to receive configuration instructions. If you need help deploying a Chef server and management workstation, you can follow this guide to get started.
When a new server is brought online, it must be brought under the control of the Chef server. Typically, this is accomplished by connecting to the new server with the knife management command and using the bootstrap subcommand. This connects to the new server, installs the Chef client, and transfers the validation credentials that allow the new node to contact the Chef server. Afterwards, the Chef client connects to the server, validates itself, receives new client credentials, pulls down its configuration from the server, and performs any actions necessary to bring itself into the desired state.
In this guide, we will use a cloud-config
script to replace the manual bootstrapping step, allowing the new node to automatically connect to the Chef server, validate itself, receive client credentials, and perform an initial Chef client run. The server will do this automatically at first boot without any manual assistance from the administrator.
In order for our cloud-config
script to successfully bootstrap, it will need access to the credentials typically available to the knife
command. Specifically, we need the following pieces of information:
All of this information is available, in the correct format, in the knife configuration file on the workstation used to manage the Chef infrastructure. Inside of the Chef repo, there should be a hidden directory called .chef which contains this file.
Assuming that your Chef repo is located in your home directory on the workstation and is called chef-repo
, you can output the contents of the file by typing:
cat ~/chef-repo/.chef/knife.rb
The pieces of information you need are highlighted below:
current_dir = File.dirname(__FILE__)
log_level :info
log_location STDOUT
node_name "jellingwood"
client_key "#{current_dir}/jellingwood.pem"
validation_client_name "digitalocean-validator"
validation_key "#{current_dir}/digitalocean-validator.pem"
chef_server_url "https://your_server.com/organizations/digitalocean"
syntax_check_cache_path "#{ENV['HOME']}/.chef/syntaxcache"
cookbook_path ["#{current_dir}/../cookbooks"]
The validation name and the Chef server URL can be taken directly from the file. Copy these values so that you can use them in the cloud-config file.
The validation_key
points to the location where the actual key is kept. In the above example, this indicates that it is located in the same directory as the knife.rb
file and is called digitalocean-validator.pem
. This will likely be different for your configuration.
We need the contents of this file, so use the cat command again, modifying it to point to the location given for your validator key:
cat ~/chef-repo/.chef/digitalocean-validator.pem
You will see an RSA private key:
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEA3O60HT5pwEo6xUwcZ8WtExBUhoL3bTjlsvHVXg1JVmBUES+f
V9jLu2N00uSZEDZneCIQyHLBXnqD/UNvWEPNvPzt1ecXzmw2BytB7lPDW4/F/8tJ
vAVrKqC7B04VFGmcFY2zC8gf8BWmX8CNRDQooM7UO5OWe/H6GDGPPRIITerO3GrU
. . .
sWyRAoGBAKNc/ZUM8ljRV0UJxQ9nbdozXRZjtUaNgXMNiw+oP2HYYdHrlkKnGHYJ
Js63rvjpq8pocjE8YI+2H0v4/4uWqW8GEBfrWbLMzGsYPnRyiHR5+hgjCUU50RB3
eFoNbURwLYcq2Z/IAQZpDpJWpofz3OVMpMXtei1cIflrAAd2wtWO
-----END RSA PRIVATE KEY-----
Copy the entirety of the validation key so that you can use it in the cloud-config
script momentarily.
Once you have the data above, you can build out the script. Chef configuration can be accomplished through a dedicated cloud-config
module called chef
. The cloud-config
must contain valid YAML and must have #cloud-config
as the first line of the script.
Starting off, your script will look like this:
#cloud-config
chef:
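Because cloud-init will not treat the file as a cloud-config script unless the first line matches exactly, it can be worth scripting a quick check before submitting the file as user data. A minimal sketch, using a hypothetical scratch path:

```shell
# Write a minimal cloud-config to a scratch file (hypothetical path).
cat > /tmp/user-data.yml <<'EOF'
#cloud-config
chef:
EOF

# cloud-init identifies cloud-config user data by this exact header line.
if [ "$(head -n 1 /tmp/user-data.yml)" = "#cloud-config" ]; then
  echo "header OK"
else
  echo "missing #cloud-config header" >&2
fi
```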
The cloud-config
documentation claims to be able to install the Chef client either from a Ruby gem, a package, or using the traditional “omnibus” installation method. However, in practice, both the gem and package methods tend to fail, so we will use the “omnibus” method. Although it is usually not necessary, we will also explicitly list the location of the omnibus installer.
We will set force_install
to “false”. This way, if for some reason the Chef client is already installed on the image (for instance, if you are deploying from a snapshot), the client will not be reinstalled. So far, our script looks like this:
#cloud-config
chef:
  install_type: "omnibus"
  omnibus_url: "https://www.opscode.com/chef/install.sh"
  force_install: false
Next, we have the option of selecting a name for the new server within the Chef infrastructure using the node_name directive. If you do not set this, Chef will use the server's hostname, so it is optional. Whatever name is used, however, must be unique within your Chef environment.
Afterwards, we can add all of the connection information that we took from our Chef workstation. We will set the server_url
option to the location of the Chef server exactly as it was in the knife.rb
file. The same is true for the validation_name
option.
For the validation key, we will use the YAML pipe symbol (|
) to enter the entire validation key that we found on the workstation:
#cloud-config
chef:
  install_type: "omnibus"
  omnibus_url: "https://www.opscode.com/chef/install.sh"
  force_install: false
  node_name: "new_node"
  server_url: "https://your_server.com/organizations/digitalocean"
  validation_name: "digitalocean-validator"
  validation_key: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEowIBAAKCAQEA3O60HT5pwEo6xUwcZ8WtExBUhoL3bTjlsvHVXg1JVmBUES+f
    V9jLu2N00uSZEDZneCIQyHLBXnqD/UNvWEPNvPzt1ecXzmw2BytB7lPDW4/F/8tJ
    vAVrKqC7B04VFGmcFY2zC8gf8BWmX8CNRDQooM7UO5OWe/H6GDGPPRIITerO3GrU
    . . .
    sWyRAoGBAKNc/ZUM8ljRV0UJxQ9nbdozXRZjtUaNgXMNiw+oP2HYYdHrlkKnGHYJ
    Js63rvjpq8pocjE8YI+2H0v4/4uWqW8GEBfrWbLMzGsYPnRyiHR5+hgjCUU50RB3
    eFoNbURwLYcq2Z/IAQZpDpJWpofz3OVMpMXtei1cIflrAAd2wtWO
    -----END RSA PRIVATE KEY-----
At this point, your script has all of the authentication needed to connect to your Chef server and create client credentials.
While the above details provide enough information for the client to connect to the Chef server, we haven't given the node any information about how to actually configure itself. We can provide this information in the cloud-config script as well.
To specify the environment that the new node should be placed in, use the environment
option. If this is not set, the _default
environment will be used; this is the generic default for Chef nodes that have not been assigned another environment.
chef:
  environment: "staging"
Our run_list
can be specified as a simple list of items that the client should apply in order. These can be either recipes or roles.
chef:
  run_list:
    - "recipe[lamp]"
    - "role[backend-web]"
You can specify the new node’s initial attributes using an initial_attributes
hierarchy. This will set the initial attributes that will affect how the run_list
is applied:
chef:
  initial_attributes:
    lamp:
      apache:
        port: 80
      mysql:
        username: webclient
        pass: $#fjeaiop34S
When hooked up to the previous cloud-config
script, it might look something like this:
#cloud-config
chef:
  install_type: "omnibus"
  omnibus_url: "https://www.opscode.com/chef/install.sh"
  force_install: false
  node_name: "new_node"
  server_url: "https://your_server.com/organizations/digitalocean"
  validation_name: "digitalocean-validator"
  validation_key: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEowIBAAKCAQEA3O60HT5pwEo6xUwcZ8WtExBUhoL3bTjlsvHVXg1JVmBUES+f
    V9jLu2N00uSZEDZneCIQyHLBXnqD/UNvWEPNvPzt1ecXzmw2BytB7lPDW4/F/8tJ
    vAVrKqC7B04VFGmcFY2zC8gf8BWmX8CNRDQooM7UO5OWe/H6GDGPPRIITerO3GrU
    . . .
    sWyRAoGBAKNc/ZUM8ljRV0UJxQ9nbdozXRZjtUaNgXMNiw+oP2HYYdHrlkKnGHYJ
    Js63rvjpq8pocjE8YI+2H0v4/4uWqW8GEBfrWbLMzGsYPnRyiHR5+hgjCUU50RB3
    eFoNbURwLYcq2Z/IAQZpDpJWpofz3OVMpMXtei1cIflrAAd2wtWO
    -----END RSA PRIVATE KEY-----
  environment: "staging"
  run_list:
    - "recipe[lamp]"
    - "role[backend-web]"
  initial_attributes:
    lamp:
      apache:
        port: 80
      mysql:
        username: webclient
        pass: $#fjeaiop34S
The above script contains all of the information needed under the chef:
section. However, there are a few other things we should do using some other cloud-config
modules.
First, we should specify that we wish to redirect the output from every command and subcommand into the CloudInit process’s output log. This is located at /var/log/cloud-init-output.log
by default. We can do this with the output
module like this:
output: {all: '| tee -a /var/log/cloud-init-output.log'}
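To see what this value does, you can run the same pipe by hand; cloud-init simply feeds each stage's output through the given shell pipe, and tee -a appends to the log file while still passing the output through. A quick illustration with a hypothetical log path:

```shell
# tee -a appends its input to the named file while also echoing it to
# standard output, which is exactly how cloud-init captures stage output.
echo "sample output line" | tee -a /tmp/example-output.log
cat /tmp/example-output.log
```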
The other thing we want to do is set the Chef client up to actually run once it has been installed and configured. At the time of this writing, the omnibus installation method does not do this automatically.
We can force this behavior by waiting until the chef-client
executable is installed on the server before calling the command. Using a simple bash
loop, we will check for the existence of this file every five seconds. When it is found, we will run chef-client
in order to implement the initial configuration we have specified.
The runcmd
module can be used to issue arbitrary commands. It is the ideal location for our bash
loop:
runcmd:
  - while [ ! -e /usr/bin/chef-client ]; do sleep 5; done; chef-client
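The loop above can be tried out locally with a stand-in file; this sketch simulates the installer finishing in the background (hypothetical path, shortened sleep):

```shell
# Generic version of the wait-and-run pattern: poll until a file exists,
# then act. Here a background job stands in for the omnibus installer;
# the real script polls for /usr/bin/chef-client instead.
TARGET=/tmp/fake-chef-client
rm -f "$TARGET"
( sleep 1; touch "$TARGET" ) &   # stand-in for the installer finishing
while [ ! -e "$TARGET" ]; do sleep 1; done
echo "found $TARGET, would now run chef-client"
```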
Also, optionally, you can add another cloud-config
directive to null-route the metadata endpoint after the first boot. This is useful because we are putting a private key in our user data. Without null-routing the metadata endpoint, this would be accessible to any user on the server. Implement this by adding:
disable_ec2_metadata: true
Combining these with the script we’ve constructed thus far, we can get the complete script necessary to bootstrap our node and connect it to our Chef infrastructure:
#cloud-config
chef:
  install_type: "omnibus"
  omnibus_url: "https://www.opscode.com/chef/install.sh"
  force_install: false
  node_name: "new_node"
  server_url: "https://your_server.com/organizations/digitalocean"
  validation_name: "digitalocean-validator"
  validation_key: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEowIBAAKCAQEA3O60HT5pwEo6xUwcZ8WtExBUhoL3bTjlsvHVXg1JVmBUES+f
    V9jLu2N00uSZEDZneCIQyHLBXnqD/UNvWEPNvPzt1ecXzmw2BytB7lPDW4/F/8tJ
    vAVrKqC7B04VFGmcFY2zC8gf8BWmX8CNRDQooM7UO5OWe/H6GDGPPRIITerO3GrU
    . . .
    sWyRAoGBAKNc/ZUM8ljRV0UJxQ9nbdozXRZjtUaNgXMNiw+oP2HYYdHrlkKnGHYJ
    Js63rvjpq8pocjE8YI+2H0v4/4uWqW8GEBfrWbLMzGsYPnRyiHR5+hgjCUU50RB3
    eFoNbURwLYcq2Z/IAQZpDpJWpofz3OVMpMXtei1cIflrAAd2wtWO
    -----END RSA PRIVATE KEY-----
  environment: "staging"
  run_list:
    - "recipe[lamp]"
    - "role[backend-web]"
  initial_attributes:
    lamp:
      apache:
        port: 80
      mysql:
        username: webclient
        pass: $#fjeaiop34S
output: {all: '| tee -a /var/log/cloud-init-output.log'}
runcmd:
  - while [ ! -e /usr/bin/chef-client ]; do sleep 5; done; chef-client
disable_ec2_metadata: true
The above script can be tweaked as necessary for each new server in your infrastructure.
If your infrastructure relies on Puppet for configuration management, you can use the puppet
module instead. Like the Chef example, bootstrapping a Puppet node involves using cloud-config
to attach the new server to the existing configuration management infrastructure.
Before you get started, you should have a Puppet master server configured for your infrastructure. If you need help getting a Puppet server up and running, check out this guide.
When a new Puppet-managed server is brought online, a Puppet agent is installed so that the server can communicate with the Puppet master. This agent is responsible for receiving and applying the information that dictates the desired state of the node. To do this, the agent connects to the master, uploads data about itself, pulls down the current catalog describing its desired state, and performs the actions necessary to reach that state.
Before this happens, though, on its first run the agent must register itself with the master server. It creates a certificate signing request and sends it to the master to sign. Typically, the agent will reconnect to the master periodically until the certificate is signed, but you can configure your Puppet master to automatically sign incoming requests with certain characteristics if that is suitable for your environment (we will cover this later).
Using our cloud-config
script, we will configure our new server with the information that it needs to connect to the master for the first time. At that point, it can retrieve configuration details from the Puppet master server in the form of a catalog.
The first thing we need to do prior to building our cloud-config
file is gather the data from our Puppet master server that we will need to connect. We only need a few pieces of information.
First, you need to get the Puppet master server’s fully qualified domain name (FQDN). You can do this by typing:
hostname -f
In most cases, it should return something like this:
puppet.example.com
You can also check your Puppet master configuration file to see if the dns_alt_names
option is set:
cat /etc/puppet/puppet.conf
. . .
dns_alt_names = puppet,puppet.example.com
. . .
If your Puppet master’s SSL certificates were generated after setting these options, they may be usable as well.
The other item that we need to collect is the Puppet master’s certificate authority certificate. This can be found in either /var/lib/puppet/ssl/certs/ca.pem
or /var/lib/puppet/ssl/ca/ca_crt.pem
:
sudo cat /var/lib/puppet/ssl/certs/ca.pem
The results will look something like this:
-----BEGIN CERTIFICATE-----
MIIFXjCCA0agAwIBAgIBATANBgkqhkiG9w0BAQsFADAcMRowGAYDVQQDDBFQdXBw
ZXQgQ0E6IHB1cHBldDAeFw8xNTAyMTkxOTA0MzVaFw0yMDAyMTkxOTA0MzVaMBwx
GjAYBgNVBAMMEVB1cHBldCBDQTogcHVwcGV0MIICIjANBgkqhkiG9w0BAQEFAAOC
. . .
arsjZT5/CtIhtP33Jl3mCp7U2F6bsk4/GDGRaAsFXjJHvBbL93NzgpkZ7elf0zUP
rOcSGrDrUuzuJk8lEAtrZr/IfAgfKKXPqbyYF95V1qN3OMY+aTcrK20XTydKVWSe
l5UfYGY3S9UJFrSn9aBsZzN+10HXPkaFKo7HxpztlYyJNI8UVSatcRF4aYYqt9KR
UClnR+2WxK5v7ix0CVd4/KpYH/6YivvyTwxrhjF2AksZKg==
-----END CERTIFICATE-----
Copy the certificate in its entirety. We will be including this in our cloud-config
file so that our new servers can verify that they are connecting to the correct Puppet master.
Once you have these pieces of information, you can begin building the cloud-config
file so that the new server can plug itself into the existing Puppet infrastructure.
The cloud-config
configuration for new Puppet nodes is fairly simple. All Puppet-specific configuration is located within the puppet:
section of the file. As with every cloud-config
file, the very first line must contain #cloud-config
on its own:
#cloud-config
puppet:
Beneath this, there are only two subsections. The first is the ca_cert
key. This will use the pipe character to start a YAML text block so that the CA certificate can be given in its entirety as an indented block:
#cloud-config
puppet:
  ca_cert: |
    -----BEGIN CERTIFICATE-----
    MIIFXjCCA0agAwIBAgIBATANBgkqhkiG9w0BAQsFADAcMRowGAYDVQQDDBFQdXBw
    ZXQgQ0E6IHB1cHBldDAeFw8xNTAyMTkxOTA0MzVaFw0yMDAyMTkxOTA0MzVaMBwx
    GjAYBgNVBAMMEVB1cHBldCBDQTogcHVwcGV0MIICIjANBgkqhkiG9w0BAQEFAAOC
    . . .
    arsjZT5/CtIhtP33Jl3mCp7U2F6bsk4/GDGRaAsFXjJHvBbL93NzgpkZ7elf0zUP
    rOcSGrDrUuzuJk8lEAtrZr/IfAgfKKXPqbyYF95V1qN3OMY+aTcrK20XTydKVWSe
    l5UfYGY3S9UJFrSn9aBsZzN+10HXPkaFKo7HxpztlYyJNI8UVSatcRF4aYYqt9KR
    UClnR+2WxK5v7ix0CVd4/KpYH/6YivvyTwxrhjF2AksZKg==
    -----END CERTIFICATE-----
Be sure to include the entire certificate along with the beginning and ending markers and to indent it appropriately.
The second section under the puppet:
umbrella is the conf:
section. This is used to specify key-value pairs that will be appended to a generic puppet.conf
file. The key-value pairs should be placed under section headers as they would be in the puppet.conf
file.
For instance, at the very least, the new server will need to know the address of the Puppet master server. In the puppet.conf
file, this is found under the [agent]
section, like this:
. . .
[agent]
server = puppet.example.com
. . .
To specify this in the cloud-config
syntax, you would add this to what we have so far:
#cloud-config
puppet:
  ca_cert: |
    -----BEGIN CERTIFICATE-----
    MIIFXjCCA0agAwIBAgIBATANBgkqhkiG9w0BAQsFADAcMRowGAYDVQQDDBFQdXBw
    ZXQgQ0E6IHB1cHBldDAeFw8xNTAyMTkxOTA0MzVaFw0yMDAyMTkxOTA0MzVaMBwx
    GjAYBgNVBAMMEVB1cHBldCBDQTogcHVwcGV0MIICIjANBgkqhkiG9w0BAQEFAAOC
    . . .
    arsjZT5/CtIhtP33Jl3mCp7U2F6bsk4/GDGRaAsFXjJHvBbL93NzgpkZ7elf0zUP
    rOcSGrDrUuzuJk8lEAtrZr/IfAgfKKXPqbyYF95V1qN3OMY+aTcrK20XTydKVWSe
    l5UfYGY3S9UJFrSn9aBsZzN+10HXPkaFKo7HxpztlYyJNI8UVSatcRF4aYYqt9KR
    UClnR+2WxK5v7ix0CVd4/KpYH/6YivvyTwxrhjF2AksZKg==
    -----END CERTIFICATE-----
  conf:
    agent:
      server: "puppet.example.com"
Note that the conf: section is at the same indentation level as the ca_cert section, not a child of it. This is the bare minimum needed to connect to the Puppet master. Any additional configuration items found in puppet.conf can be added in a similar way by first creating a level for the section name and then defining the key-value pairs.
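As an illustration of that pattern, this sketch adds Puppet's runinterval setting, which controls how often the agent checks in, alongside the server key (the value shown is hypothetical):

puppet:
  . . .
  conf:
    agent:
      server: "puppet.example.com"
      runinterval: "1800"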
After this, we should redirect all future output to the cloud-init-output.log file and add a runcmd line comparable to the one we added for the Chef configuration. This will wait until the Puppet agent is installed and then enable and restart it. We can also null-route the metadata endpoint after the first run, as we did in the Chef section. These cloud-config directives should be placed outside of any other module sections:
. . .
  conf:
    agent:
      server: "puppet.example.com"
output: {all: '| tee -a /var/log/cloud-init-output.log'}
runcmd:
  - while [ ! -e /usr/bin/puppet ]; do sleep 5; done; puppet agent --enable; service puppet restart
disable_ec2_metadata: true
With this information, the new server can connect to the Puppet master server and then generate a client certificate signing request to transfer to the master. By default, client certificates must be manually signed on the Puppet master. Once this is done, at the next Puppet agent update interval (every 30 minutes by default), the node will pull down its configuration from the Puppet master. We will demonstrate a bit later how to implement a relatively secure auto-signing mechanism to avoid this delay.
One of the values that can be placed into the new server's puppet.conf file is a special case. In the cloud-config file, the certname option can substitute values from the environment if certain variables are given. The following variables are recognized:
- %i: The instance ID of the server. This is taken from http://169.254.169.254/metadata/v1/id when the server is created. It corresponds to the Droplet ID used to uniquely identify Droplets.
- %f: The FQDN of the server.

With this in mind, a common certname setting would look like this:
#cloud-config
puppet:
  . . .
  conf:
    agent:
      server: "puppet.example.com"
      certname: "%i.%f"
This would produce a certname
with a pattern similar to this:
|-Droplet ID
|
|      |-Fully Qualified Domain Name
|      |
|-----||--------------------|
123456.testnode.example.com
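If you later need to split a certname in this format back into its parts (for example, in a signing policy script), shell parameter expansion is enough; the values below are hypothetical:

```shell
certname="123456.testnode.example.com"

# Everything before the first dot is the Droplet ID...
droplet_id="${certname%%.*}"
# ...and everything after it is the FQDN.
fqdn="${certname#*.}"

echo "$droplet_id"   # 123456
echo "$fqdn"         # testnode.example.com
```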
Having the Droplet ID as part of the certname can be useful for configuring secure Puppet auto-signing, as we will see in the next section.
If you wish to implement a certificate auto-signing system to avoid the need for administrator intervention, there are a few options. You must set this up on your Puppet master server first.
In the puppet.conf
file on the Puppet master server, you can set the autosign
option under the [master]
section of the file. This can take a few different values:
- true: This tells the Puppet master server to sign every certificate request that comes in, without doing any checks. This is extremely dangerous in a real environment because any host can get a CSR signed and enter your infrastructure.
- <whitelist_filename>: The second option is to specify a file that functions as a whitelist of hosts or host regular expressions. The Puppet master checks certificate signing requests against this list to see whether the certificate should be signed. This is again not recommended, since certificate names can easily be spoofed.
- <policy_executable>: The third option is to specify a script or executable that is run to determine whether the certificate signing request should be signed. Puppet passes the certname in as an argument and the entire CSR in through standard input. If an exit status of 0 is returned, the certificate is signed; if another status is given, the certificate is not signed.

Policy-based auto-signing is the most secure way to implement automatic key signing because it allows you to be arbitrarily complex in how you distinguish between legitimate and illegitimate requests.
To demonstrate policy-based auto-signing, you can add a certname value to your cloud-config that includes the %i instance ID variable. We will use %i.%f so that it includes the selected hostname as well:
#cloud-config
puppet:
  conf:
    agent:
      server: "puppet.example.com"
      certname: "%i.%f"
  ca_cert: |
    . . .
Your complete cloud-config
may now look something like this:
#cloud-config
puppet:
  conf:
    agent:
      server: "puppet.example.com"
      certname: "%i.%f"
  ca_cert: |
    -----BEGIN CERTIFICATE-----
    MIIFXjCCA0agAwIBAgIBATANBgkqhkiG9w0BAQsFADAcMRowGAYDVQQDDBFQdXBw
    ZXQgQ0E6IHB1cHBldDAeFw8xNTAyMTkxOTA0MzVaFw0yMDAyMTkxOTA0MzVaMBwx
    GjAYBgNVBAMMEVB1cHBldCBDQTogcHVwcGV0MIICIjANBgkqhkiG9w0BAQEFAAOC
    . . .
    arsjZT5/CtIhtP33Jl3mCp7U2F6bsk4/GDGRaAsFXjJHvBbL93NzgpkZ7elf0zUP
    rOcSGrDrUuzuJk8lEAtrZr/IfAgfKKXPqbyYF95V1qN3OMY+aTcrK20XTydKVWSe
    l5UfYGY3S9UJFrSn9aBsZzN+10HXPkaFKo7HxpztlYyJNI8UVSatcRF4aYYqt9KR
    UClnR+2WxK5v7ix0CVd4/KpYH/6YivvyTwxrhjF2AksZKg==
    -----END CERTIFICATE-----
output: {all: '| tee -a /var/log/cloud-init-output.log'}
runcmd:
  - while [ ! -e /usr/bin/puppet ]; do sleep 5; done; puppet agent --enable; service puppet restart
disable_ec2_metadata: true
On the Puppet master server, we will have to set up a validation script. Since Ruby is already installed for Puppet, we can make a simple Ruby script.
Because we are using the %i.%f
format for the certname
, we can check whether the first part of the certname
(the part before the first dot) corresponds to a valid Droplet ID for our account. This is a simple check which, in practice, does not do much more than the whitelist file would. However, you can adapt this idea to be much more complex if you wish.
To do this, we will need a personal access token from the “Apps & API” section of the DigitalOcean control panel. You will also need to install one of the DigitalOcean Ruby libraries. Below, we will show you some simplified scripts that use the Barge and DropletKit DigitalOcean Ruby clients.
If you wish to use the Barge client, install the gem on your Puppet master by typing:
sudo gem install barge
The following script can be used to check whether the first portion of the certname
in the certificate signing request corresponds with a valid Droplet ID:
#!/usr/bin/env ruby
require 'barge'

TOKEN = 'YOUR_DIGITALOCEAN_API_TOKEN'

droplet_ids = []

# Puppet passes the certname of the CSR as the first argument.
certname = ARGV[0]

# Extract the portion before the first dot and convert it to an integer.
id_string = certname.slice(0...(certname.index('.')))
id_to_check = id_string.to_i

# Collect the IDs of all Droplets on the account.
client = Barge::Client.new(access_token: TOKEN)
droplets = client.droplet.all

droplets.droplets.each do |droplet|
  droplet_ids << droplet.id
end

# Exit with status 0 (sign) if the ID is valid, 1 (reject) otherwise.
Kernel.exit(droplet_ids.include?(id_to_check))
If you instead wish to use DropletKit, the official DigitalOcean Ruby client, you can install the gem by typing:
sudo gem install droplet_kit
Note that the DropletKit gem requires Ruby 2.0 or above, so this might not be a possibility when using the version of Ruby that comes with Puppet.
The script for DropletKit can be adapted like this:
#!/usr/bin/env ruby
require 'droplet_kit'

TOKEN = 'YOUR_DIGITALOCEAN_API_TOKEN'

droplet_ids = []

# Puppet passes the certname of the CSR as the first argument.
certname = ARGV[0]

# Extract the portion before the first dot and convert it to an integer.
id_string = certname.slice(0...(certname.index('.')))
id_to_check = id_string.to_i

# Collect the IDs of all Droplets on the account.
client = DropletKit::Client.new(access_token: TOKEN)
droplets = client.droplets.all

droplets.each do |droplet|
  droplet_ids << droplet.id
end

# Exit with status 0 (sign) if the ID is valid, 1 (reject) otherwise.
Kernel.exit(droplet_ids.include?(id_to_check))
You can place the script that corresponds to the gem you installed in a file called /etc/puppet/validate.rb
and mark it as executable by typing:
sudo chmod +x /etc/puppet/validate.rb
You can then add the following to your puppet.conf
file (located at /etc/puppet/puppet.conf
if using Open Source Puppet):
. . .
[master]
autosign = /etc/puppet/validate.rb
. . .
Restart the Apache service to implement the new signing policy:
sudo service apache2 restart
Now, when certificate signing requests are received by your Puppet master, it will check whether the first part of the certificate name corresponds to a valid Droplet ID in your account. This is a rough example of how you can validate requests using an executable.
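Before wiring a policy executable into puppet.conf, you can exercise the mechanics by hand. The sketch below substitutes a stand-in script with one hardcoded ID for the API-backed version above (all paths and values here are hypothetical):

```shell
# Create a stand-in policy script. Puppet passes the certname as the
# first argument and pipes the CSR to standard input; exit status 0
# means "sign", anything else means "reject".
cat > /tmp/validate-test.sh <<'EOF'
#!/bin/sh
certname="$1"
id="${certname%%.*}"
# Approve only this known Droplet ID (a real script would query the API).
[ "$id" = "123456" ]
EOF
chmod +x /tmp/validate-test.sh

# Simulate what the Puppet master does when a CSR arrives.
echo "fake-csr-contents" | /tmp/validate-test.sh 123456.testnode.example.com \
  && echo "would sign" || echo "would reject"
```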
By leveraging cloud-config
scripts, you can easily bootstrap your new servers and hand them off to your existing configuration management systems. This lets you manage your infrastructure through your existing tools from the moment of creation, preventing important changes from happening outside the scope of your management solution.