Tutorial

How To Create and Run a Service on a CoreOS Cluster

Published on September 5, 2014

This tutorial is out of date and no longer maintained.

Status: Out of Date

This article is no longer current. If you are interested in writing an update for this article, please see DigitalOcean wants to publish your tech tutorial!

Reason: On December 22, 2016, CoreOS announced that it no longer maintains fleet. CoreOS recommends using Kubernetes for all clustering needs.

See Instead: For guidance using Kubernetes on CoreOS without fleet, see the Kubernetes on CoreOS Documentation.

Introduction

One of the major benefits of CoreOS is the ability to manage services across an entire cluster from a single point. The CoreOS platform provides integrated tools to make this process simple.

In this guide, we will demonstrate a typical workflow for getting services running on your CoreOS clusters. This process will demonstrate some simple, practical ways of interacting with CoreOS’s most interesting utilities in order to set up an application.

Prerequisites and Goals

In order to get started with this guide, you should have a CoreOS cluster with a minimum of three machines configured. You can follow our guide to bootstrapping a CoreOS cluster here.

For the sake of this guide, our three nodes will be as follows:

  • coreos-1
  • coreos-2
  • coreos-3

These three nodes should be configured using their private network interface for their etcd client address and peer address, as well as the fleet address. These should be configured using the cloud-config file as demonstrated in the guide above.
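As a reminder, the relevant portion of such a cloud-config might look roughly like the sketch below; the discovery URL is a placeholder you would generate yourself, and the exact keys follow the bootstrapping guide’s conventions rather than anything defined in this tutorial:

```yaml
#cloud-config
coreos:
  etcd:
    # Generate a fresh token at https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  fleet:
    public-ip: $private_ipv4
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
```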

In this guide, we will be walking through the basic workflow of getting services running on a CoreOS cluster. For demonstration purposes, we will be setting up a simple Apache web server. We will cover setting up a containerized service environment with Docker and then we will create a systemd-style unit file to describe the service and its operational parameters.

Within a companion unit file, we will tell our service to register with etcd, which will allow other services to track its details. We will submit both of our services to fleet, where we can start and manage the services on machines throughout our cluster.

Connect to a Node and Pass your SSH Agent

The first thing we need to do to get started configuring services is connect to one of our nodes with SSH.

In order for the fleetctl tool, which we will be using to communicate with neighboring nodes, to work, we need to forward our SSH agent information while connecting.

Before you connect through SSH, you must start your SSH agent. This will allow you to forward your credentials to the server you are connecting to, so that you can log in from that machine to other nodes. To start the agent on your machine, type:

eval $(ssh-agent)

You can then add your private key to the agent’s in-memory storage by typing:

ssh-add

At this point, your SSH agent should be running and it should know about your private SSH key. The next step is to connect to one of the nodes in your cluster and forward your SSH agent information. You can do this by using the -A flag:

ssh -A core@coreos_node_public_IP

Once you are connected to one of your nodes, we can get started building out our service.

Creating the Docker Container

The first thing that we need to do is create a Docker container that will run our service. You can do this in one of two ways. You can start up a Docker container and manually configure it, or you can create a Dockerfile that describes the steps necessary to build the image you want.

For this guide, we will build an image using the first method because it is more straightforward for those who are new to Docker. Follow this link if you would like to find out more about how to build a Docker image from a Dockerfile. Our goal is to install Apache on an Ubuntu 14.04 base image within Docker.
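If you later prefer the Dockerfile route, the manual steps in this section could be expressed declaratively along these lines; this is a sketch of an equivalent build, not a file used in this guide:

```dockerfile
# Hypothetical Dockerfile equivalent of the manual steps in this section
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y apache2
RUN echo "<h1>Running from Docker on CoreOS</h1>" > /var/www/html/index.html
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
```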

Before you begin, you will need to log in to or sign up with the Docker Hub registry. To do this, type:

docker login

You will be asked to supply a username, password, and email address. If this is your first time doing this, an account will be created using the details you provided and a confirmation email will be sent to the supplied address. If you have already created an account in the past, you will be logged in with the given credentials.

To create the image, the first step is to start a Docker container with the base image we want to use. The command that we will need is:

docker run -i -t ubuntu:14.04 /bin/bash

The arguments that we used above are:

  • run: This tells Docker that we want to start up a container with the parameters that follow.
  • -i: Start the Docker container in interactive mode. This will ensure that STDIN to the container environment will be available, even if it is not attached.
  • -t: This creates a pseudo-TTY, allowing us terminal access to the container environment.
  • ubuntu:14.04: This is the repository and image combination that we want to run. In this case, we are running Ubuntu 14.04. The image is kept within the Ubuntu Docker repository at Docker Hub.
  • /bin/bash: This is the command that we want to run in the container. Since we want terminal access, we need to spawn a shell session.

The base image layers will be pulled down from the Docker Hub online Docker registry and a bash session will be started. You will be dropped into the resulting shell session.

From here, we can go ahead with creating our service environment. We want to install the Apache web server, so we should update our local package index and install through apt:

apt-get update
apt-get install apache2

After the installation is complete, we can edit the default index.html file:

echo "<h1>Running from Docker on CoreOS</h1>" > /var/www/html/index.html

When you are finished, you can exit your bash session in the conventional way:

exit

Back on your host machine, we need to get the container ID of the Docker container we just left. To do this, we can ask Docker to show the latest process information:

docker ps -l
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
cb58a2ea1f8f        ubuntu:14.04        "/bin/bash"         8 minutes ago       Exited (0) 55 seconds ago                       jovial_perlman

The column that we need is “CONTAINER ID”. In the example above, this would be cb58a2ea1f8f. In order to be able to spin up the same container later on with all of the changes that you made, you need to commit the changes to your username’s repository. You will need to select a name for the image as well.

For our purposes, we will pretend that the username is user_name, but you should substitute the Docker Hub account name you logged in with earlier. We will call our image apache. The command to commit the image changes is:

docker commit container_ID user_name/apache

This saves the image so that you can recall the current state of the container. You can verify this by typing:

docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
user_name/apache     latest              42a71fb973da        4 seconds ago       247.4 MB
ubuntu               14.04               c4ff7513909d        3 weeks ago         213 MB

Next, you should publish the image to Docker Hub so that your nodes can pull down and run the image at will. To do this, use the following command format:

docker push user_name/apache

You now have a container image configured with your Apache instance.

Creating the Apache Service Unit File

Now that we have a Docker container available, we can begin building our service files.

Fleet manages the service scheduling for the entire CoreOS cluster. It provides a centralized interface to the user, while manipulating each host’s systemd init system locally to complete the appropriate actions.

The files that define each service’s properties are slightly modified systemd unit files. If you have worked with systemd in the past, you will be very familiar with the syntax.

To start with, create a file called apache@.service in your home directory. The @ indicates that this is a template service file. We will go over what that means in a bit. The CoreOS image comes with the vim text editor:

vim apache@.service

To start the service definition, we will create a [Unit] section header and set up some metadata about this unit. We will include a description and specify dependency information. Since our unit will need to be run after both etcd and Docker are available, we need to define that requirement.

We also need to add the other service file that we will be creating as a requirement. This second service file will be responsible for updating etcd with information about our service. Requiring it here will force it into starting when this service is started. We will explain the %i in the service name later:

[Unit]
Description=Apache web server service
After=etcd.service
After=docker.service
Requires=apache-discovery@%i.service

Next, we need to tell the system what needs to happen when starting or stopping this unit. We do this in the [Service] section, since we are configuring a service.

The first thing we want to do is disable the service startup from timing out. Because our service runs in a Docker container, the first time it is started on each host, the image will have to be pulled down from the Docker Hub servers, potentially causing a longer-than-usual start-up time on the first run.

We want to set the KillMode to “none” so that systemd will allow our “stop” command to kill the Docker process. If we leave this out, systemd will think that the Docker process failed when we call our stop command.

We will also want to make sure our environment is clean prior to starting our service. This is especially important since we will be referencing our services by name and Docker only allows a single container to be running with each unique name.

We will need to kill any leftover containers with the name we want to use and then remove them. It is at this point that we actually pull down the image from Docker Hub as well. We want to source the /etc/environment file as well. This includes variables, such as the public and private IP addresses of the host that is running the service:

[Unit]
Description=Apache web server service
After=etcd.service
After=docker.service
Requires=apache-discovery@%i.service

[Service]
TimeoutStartSec=0
KillMode=none
EnvironmentFile=/etc/environment
ExecStartPre=-/usr/bin/docker kill apache%i
ExecStartPre=-/usr/bin/docker rm apache%i
ExecStartPre=/usr/bin/docker pull user_name/apache

The =- syntax for the first two ExecStartPre lines indicates that those preparation lines can fail and the unit file will still continue. Since those commands only succeed if a container with that name exists, they will fail if no container is found.

You may have noticed the %i suffix at the end of the apache container names in the above directives. The service file we are creating is actually a template unit file. This means that upon running the file, fleet will automatically substitute some information with the appropriate values. Read the information at the provided link to find out more.

In our case, the %i will be replaced, anywhere it exists within the file, with the portion of the service file’s name between the @ and the .service suffix. Our file itself is simply named apache@.service, though.

Although we will submit the file to fleetctl with apache@.service, when we load the file, we will load it as apache@PORT_NUM.service, where “PORT_NUM” will be the port that we want to start this server on. We will be labelling our service based on the port it will be running on so that we can easily differentiate them.
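As a quick illustration (using plain shell parameter expansion, not fleet itself), here is how an instance name maps to %i; the unit name below is just an example:

```shell
# Hypothetical illustration of the substitution fleet performs: for a
# template named apache@.service loaded as apache@80.service, %i becomes
# everything between the "@" and the ".service" suffix.
unit="apache@80.service"
instance="${unit#*@}"            # drop everything through the "@"
instance="${instance%.service}"  # drop the ".service" suffix
echo "$instance"                 # prints "80"
```

Here, 80 is the port number we will choose at load time, and it is what %i expands to throughout the unit file.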

Next, we need to start the actual Docker container:

[Unit]
Description=Apache web server service
After=etcd.service
After=docker.service
Requires=apache-discovery@%i.service

[Service]
TimeoutStartSec=0
KillMode=none
EnvironmentFile=/etc/environment
ExecStartPre=-/usr/bin/docker kill apache%i
ExecStartPre=-/usr/bin/docker rm apache%i
ExecStartPre=/usr/bin/docker pull user_name/apache
ExecStart=/usr/bin/docker run --name apache%i -p ${COREOS_PUBLIC_IPV4}:%i:80 user_name/apache /usr/sbin/apache2ctl -D FOREGROUND

We call the conventional docker run command and pass it some parameters. We pass it the name in the same format we were using above. We also are going to expose a port from our Docker container to our host machine’s public interface. The host machine’s port number will be taken from the %i variable, which is what actually allows us to specify the port.

We will use the COREOS_PUBLIC_IPV4 variable (taken from the environment file we sourced) to be explicit about the host interface we want to bind to. We could leave this out, but it sets us up for easy modification later if we want to change this to a private interface (if we are load balancing, for instance).

We reference the Docker container we uploaded to Docker Hub earlier. Finally, we call the command that will start our Apache service in the container environment. Since Docker containers shut down as soon as the command given to them exits, we want to run our service in the foreground instead of as a daemon. This will allow our container to continue running instead of exiting as soon as it spawns a child process successfully.

Next, we need to specify the command to call when the service needs to be stopped. We will simply stop the container; the container cleanup is handled each time the service restarts.

We also want to add a section called [X-Fleet]. This section is specifically designed to give instructions to fleet as to how to schedule the service. Here, you can add restrictions so that your service must or must not run in certain arrangements in relation to other services or machine states.

We want our service to run only on hosts that are not already running an Apache web server, since this will give us an easy way to create highly available services. We will use a wildcard to catch any of the apache service files that we might have running:

[Unit]
Description=Apache web server service
After=etcd.service
After=docker.service
Requires=apache-discovery@%i.service

[Service]
TimeoutStartSec=0
KillMode=none
EnvironmentFile=/etc/environment
ExecStartPre=-/usr/bin/docker kill apache%i
ExecStartPre=-/usr/bin/docker rm apache%i
ExecStartPre=/usr/bin/docker pull user_name/apache
ExecStart=/usr/bin/docker run --name apache%i -p ${COREOS_PUBLIC_IPV4}:%i:80 user_name/apache /usr/sbin/apache2ctl -D FOREGROUND
ExecStop=/usr/bin/docker stop apache%i

[X-Fleet]
X-Conflicts=apache@*.service

With that, we are finished with our Apache server unit file. We will now make a companion service file to register the service with etcd.

Registering Service States with Etcd

In order to record the current state of the services started on the cluster, we will want to write some entries to etcd. This is known as registering with etcd.

In order to do this, we will start up a minimal companion service that can update etcd as to when the server is available for traffic.

The new service file will be called apache-discovery@.service. Open it now:

vim apache-discovery@.service

We’ll start off with the [Unit] section, just as we did before. We will describe the purpose of the service and then we will set up a directive called BindsTo.

The BindsTo directive identifies a dependency that this service looks to for state information. If the listed service is stopped, the unit we are writing now will stop as well. We will use this so that if our web server unit fails unexpectedly, this service will update etcd to reflect that information. This solves the potential issue of having stale information in etcd, which could be erroneously used by other services:

[Unit]
Description=Announce Apache@%i service
BindsTo=apache@%i.service

For the [Service] section, we want to again source the environment file with the host’s IP address information.

For the actual start command, we want to run a simple infinite bash loop. Within the loop, we will use the etcdctl command, which is used to modify etcd values, to set a key in the etcd store at /announce/services/apache%i. The %i will be replaced with the section of the service name we will load between the @ and the .service suffix, which again will be the port number of the Apache service.

The value of this key will be set to the node’s public IP address and the port number. We will also set an expiration time of 60 seconds on the value so that the key will be removed if the service somehow dies. We will then sleep 45 seconds. This will provide an overlap with the expiration so that we are always updating the TTL (time-to-live) value prior to it reaching its timeout.

For the stopping action, we will simply remove the key with the same etcdctl utility, marking the service as unavailable:

[Unit]
Description=Announce Apache@%i service
BindsTo=apache@%i.service

[Service]
EnvironmentFile=/etc/environment
ExecStart=/bin/sh -c "while true; do etcdctl set /announce/services/apache%i ${COREOS_PUBLIC_IPV4}:%i --ttl 60; sleep 45; done"
ExecStop=/usr/bin/etcdctl rm /announce/services/apache%i

The last thing we need to do is add a condition to ensure that this service is started on the same host as the web server it is reporting on. This will ensure that if the host goes down, the etcd information will change appropriately:

[Unit]
Description=Announce Apache@%i service
BindsTo=apache@%i.service

[Service]
EnvironmentFile=/etc/environment
ExecStart=/bin/sh -c "while true; do etcdctl set /announce/services/apache%i ${COREOS_PUBLIC_IPV4}:%i --ttl 60; sleep 45; done"
ExecStop=/usr/bin/etcdctl rm /announce/services/apache%i

[X-Fleet]
X-ConditionMachineOf=apache@%i.service

You now have your sidekick service that can record the current health status of your Apache server in etcd.

Working with Unit Files and Fleet

You now have two service templates. We can submit these directly into fleetctl so that our cluster knows about them:

fleetctl submit apache@.service apache-discovery@.service

You should be able to see your new service files by typing:

fleetctl list-unit-files
UNIT				HASH	DSTATE		STATE		TMACHINE
apache-discovery@.service	26a893f	inactive	inactive	-
apache@.service			72bcc95	inactive	inactive	-

The templates now exist in our cluster-wide init system.

Since we are using templates that depend on being scheduled on specific hosts, we need to load the files next. This will allow us to specify the new name for these files with the port number. This is when fleetctl looks at the [X-Fleet] section to see what the scheduling requirements are.

Since we are not doing any load balancing, we will just run our web server on port 80. We can load each service by specifying that between the @ and the .service suffix:

fleetctl load apache@80.service
fleetctl load apache-discovery@80.service

You should get information about which host in your cluster the service is being loaded on:

Unit apache@80.service loaded on 41f4cb9a.../10.132.248.119
Unit apache-discovery@80.service loaded on 41f4cb9a.../10.132.248.119

As you can see, these services have both been loaded on the same machine, which is what we specified. Since our apache-discovery service file is bound to our Apache service, we can simply start the latter to initiate both of our services:

fleetctl start apache@80.service

Now, if you ask which units are running on our cluster, we should see the following:

fleetctl list-units
UNIT				MACHINE				ACTIVE	SUB
apache-discovery@80.service	41f4cb9a.../10.132.248.119	active	running
apache@80.service		41f4cb9a.../10.132.248.119	active	running

It appears that our web server is up and running. In our service file, we told Docker to bind to the host server’s public IP address, but the IP displayed with fleetctl is the private address (because we passed in $private_ipv4 in the cloud-config when creating this example cluster).

However, we have registered the public IP address and the port number with etcd. To get the value, you can use the etcdctl utility to query the values we have set. If you recall, the keys we set were /announce/services/apachePORT_NUM. So to get our server’s details, type:

etcdctl get /announce/services/apache80
104.131.15.192:80

If we visit this page in our web browser, we should see the very simple page we created:

CoreOS basic web page

Our service was deployed successfully. Let’s try to load up another instance using a different port. We should expect that the web server and the associated sidekick container will be scheduled on the same host. However, due to the constraint in our Apache service file, we should expect this host to be different from the one serving our port 80 service.

Let’s load up a service running on port 9999:

fleetctl load apache@9999.service apache-discovery@9999.service
Unit apache-discovery@9999.service loaded on 855f79e4.../10.132.248.120
Unit apache@9999.service loaded on 855f79e4.../10.132.248.120

We can see that both of the new services have been scheduled on the same new host. Start the web server:

fleetctl start apache@9999.service

Now, we can get the public IP address of this new host:

etcdctl get /announce/services/apache9999
104.131.15.193:9999

If we visit the specified address and port number, we should see another web server:

CoreOS basic web page

We have now deployed two web servers within our cluster.

If you stop a web server, the sidekick container should stop as well:

fleetctl stop apache@80.service
fleetctl list-units
UNIT				MACHINE				ACTIVE		SUB
apache-discovery@80.service	41f4cb9a.../10.132.248.119	inactive	dead
apache-discovery@9999.service	855f79e4.../10.132.248.120	active	running
apache@80.service		41f4cb9a.../10.132.248.119	inactive	dead
apache@9999.service		855f79e4.../10.132.248.120	active	running

You can check that the etcd key was removed as well:

etcdctl get /announce/services/apache80
Error:  100: Key not found (/announce/services/apache80) [26693]

This seems to be working exactly as expected.

Conclusion

By following along with this guide, you should now be familiar with some of the common ways of working with the CoreOS components.

We have created our own Docker container with the service we wanted to run installed inside and we have created a fleet unit file to tell CoreOS how to manage our container. We have implemented a sidekick service to keep our etcd datastore up-to-date with state information about our web server. We have managed our services with fleetctl, scheduling services on different hosts.

In later guides, we will continue to explore some of the areas we briefly touched upon in this article.



Tutorial Series: Getting Started with CoreOS

CoreOS is a powerful Linux distribution built to make large, scalable deployments on varied infrastructure simple to manage. Based on a build of Chrome OS, CoreOS maintains a lightweight host system and uses Docker containers for all applications. In this series, we will introduce you to the basics of CoreOS, teach you how to set up a CoreOS cluster, and get you started with using Docker containers with CoreOS.

About the author(s)

Justin Ellingwood

20 Comments

How do you load balance your service? Especially given the CoreOS feature of starting your service on another node if one of the nodes fail.

Digital Ocean doesn’t provide load balancing, correct?

How do you dynamically load balance services like this?

Thanks in advance.

Justin Ellingwood
DigitalOcean Employee
September 5, 2014

You can use normal application load balancing applications like HAProxy or Nginx configured within Docker containers.

The general idea is that you have your application servers register themselves with etcd. Your load balancing container should be configured to monitor etcd through the etcd HTTP API, which should be available through Docker’s network interface with the host machine.

There are a few different tools that can monitor etcd for changes. One of them is confd. This will watch a specified etcd key or directory, rebuild the configuration file for a service when the values change, and then reload the service. This allows your load balancer to update automatically when new backend targets become available or unavailable.
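For a rough idea, a confd template resource watching the keys from this tutorial might look something like this; the file names and reload command are illustrative assumptions, not configuration from the tutorial:

```toml
# Hypothetical confd template resource (confd's TOML format)
[template]
src        = "haproxy.cfg.tmpl"          # template to render
dest       = "/etc/haproxy/haproxy.cfg"  # rendered config file
keys       = ["/announce/services"]      # etcd directory to watch
reload_cmd = "/usr/sbin/service haproxy reload"
```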

More importantly…How do you load balance services between Digital Ocean datacenters? I would like to create a node1, node2, node3 at different datacenters, not in the same datacenter. This tutorial only mentioned using the private IP for all nodes which will only work within the same datacenter. Can I use the public IP interface instead?

Great series though!

Justin Ellingwood
DigitalOcean Employee
September 5, 2014

Using the public interface should work as well since CoreOS relies on the etcd discovery token to find other members. In your cloud-config that you use to bootstrap your nodes, use $public_ipv4 instead of $private_ipv4.

Let us know if you have any issues with that.

Is etcd only meant for service-discovery, or also for configuration-management?

If, for example, I need to provide credentials to my services, should those be distributed via etcd?

If so, how do I restrict access to those settings to a (set of) containers/services and how do I prevent other containers from accessing and/or overwriting those settings?

Justin Ellingwood
DigitalOcean Employee
September 5, 2014

Etcd is a truly global key-value store at the moment. It is under active development, but currently there is no access control functionality to restrict access to a subset of cluster members. You can enable SSL to authenticate to the cluster as a whole, but this does not address your question.

If you need to restrict access to certain services, you may wish to do that through other means.

If you wish to follow the plans for access control functionality, you can follow this long-standing GitHub issue.

@jellingwood Ok, clear! Something to look out for in the future. Based on that, can I conclude that (without alternative mechanisms) CoreOS / etcd is not ready for multi-tenant installations and untrusted containers? (Since containers are able to access etcd)

Apart from that, CoreOS looks like a very interesting platform. Now that it’s available on DigitalOcean, I will definitely start experimenting with it.

Thanks for the quick response and for a great tutorial.

One remark worth mentioning is that when pushing containers to Docker Hub, those containers are publicly accessible by default. Maybe add a warning that people should not include sensitive data inside their containers.

Thanks for the great tutorial. This may be user error but when I go to start the apache@80.service it does not start the other (required) apache-discovery@%i.service. When I stop the apache@80.service it stops both…Once I started apache-discovery@80.service manually everything seemed to function normally. Am I missing something?

Justin Ellingwood
DigitalOcean Employee
September 6, 2014

That will happen if you don’t load the service and discovery service beforehand. I don’t really know a way of getting around this at the moment with template unit files. The stopping happens automatically, but it seems like the service templates only process the machine designation in an awkward in-between state, leading to the call not being entirely accurate.

Is this what you’re seeing?

Nope, I’m loading everything per the tutorial. Take a look at: http://pastie.org/private/dmfztbu1f1wth6uheuz4fq

Again, I could totally be doing something wrong but I believe I’d be getting another error. It is as if it just wants me to start the second service manually.

Here are my configs: http://pastie.org/private/vnus5ted0mfvuhwklgqfyw

Really strange…

Justin Ellingwood
DigitalOcean Employee
September 6, 2014

I’m actually on my phone right now, so I’ll have to take a look a bit later.

I’m still trying to figure out the best way of running associated services myself, so I’m curious about these issues for my own sake :).

Justin Ellingwood
DigitalOcean Employee
September 6, 2014

Okay, I had a chance to run through the guide again from scratch and was unable to reproduce your issue.

I also took a look at your output and unit files. I actually diff’ed them against my own and (aside from the usernames in the configs) the only difference was that you had [UNIT] instead of [Unit] at the top of the apache@.service file.

I looked back in the article and noticed that, while not in the file listings, during the paragraph introducing that file, I had this sentence, “To start the service definition, we will create a [UNIT] section header and set up some metadata about this unit.” I’ve now changed that to [Unit] in the guide.

I’m unsure of how forgiving fleet or systemd for that matter is about capitalization, but I would see if changing that to [Unit] fixes anything.

Currently, fleetctl does nothing if you re-submit a unit file that is different from the one it has in memory, so you will have to tell fleetctl to destroy its copy so that you can re-submit. Because of the machine designations for the loading, I would actually suggest destroying both the apache and apache-discovery units and templates and re-submitting, just to be safe. The process will probably look something like this:

# Stop the services
fleetctl stop apache@80.service
fleetctl stop apache-discovery@80.service

# Destroy the unit files and templates
fleetctl destroy apache@80.service
fleetctl destroy apache@.service
fleetctl destroy apache-discovery@80.service
fleetctl destroy apache-discovery@.service

# Re-submit the templates after making the `[UNIT]` to `[Unit]` change
fleetctl submit apache@.service apache-discovery@.service

# Load the new files
fleetctl load apache@80.service
fleetctl load apache-discovery@80.service

# Start the main service
fleetctl start apache@80.service

See if that changes anything. If that doesn’t help, we can try something else.

Wow! Very finicky :).

I had actually removed everything the other night and didn’t change the UNIT to Unit and saw the same behavior. I just changed [UNIT] to [Unit] and everything worked as expected.

Needless to say, it appears that these files are case sensitive.

Thanks for the help and again, great tutorial!

Can I use etcd to store user sessions or cache data for my distributed application, instead of memcached or Redis?

Justin Ellingwood
DigitalOcean Employee
September 6, 2014

I’m not entirely sure if etcd is the best target for that, but you may want to look at something like Deis, which is a Heroku-like PaaS built on top of CoreOS that incorporates a Redis store as you described. Check out their architecture section to see the way the system is set up.

Since we just released our image, they don’t have native support yet, but I imagine that will be changing soon. They have had a provisioning workaround for DigitalOcean for awhile now.


    Hello jellingwood, I try to attach a Docker container after I start the two services apache and apache-discovery and it doesn’t work. I expect access to the container environment like running the command “docker run -i -t ubuntu:14.04 /bin/bash”. Do you have the same problem or do you have the solution? I use PuTTY to connect over SSH.

    Thanks in advance.

    Justin Ellingwood
    DigitalOcean Employee
    DigitalOcean Employee badge
    September 17, 2014

    Hi @acastellanom:

    The best way to connect to a container that is already running another process in the foreground (Apache in our case), is to use a utility called nsenter. Fortunately, the CoreOS team have included this within the distribution, so you have access to it directly.

    1. First, SSH into the CoreOS machine that is running the container that you wish to inspect:
    fleetctl ssh service_name

    2. Next, find the container ID of the container in question:
    docker ps

    CONTAINER ID        IMAGE                       COMMAND                CREATED             STATUS              PORTS                         NAMES
    47f48a33ee88        something/apache:latest   "/usr/sbin/apache2ct   13 minutes ago      Up 13 minutes       10.132.249.212:4444->80/tcp   apache.4444

    3. Now, find the PID of the container in question:
    PID=$(docker inspect --format {{.State.Pid}} 47f48a33ee88)

    4. Use this PID as an argument to nsenter to open a new shell session in the running container, allowing you to poke around, troubleshoot, and see what’s happening. You must use sudo for this command:
    sudo nsenter --target $PID --mount --uts --ipc --net --pid
    

    You should now have a new shell session started within the container.

    I have really followed exactly the instructions in this article, however after the step:

    fleetctl start apache@80.service
    

    I get (list-units reports):

    core@earth ~ $ fleetctl list-units
    UNIT                            MACHINE                         ACTIVE      SUB
    apache-discovery@80.service     117f7d8f.../10.131.234.68       inactive    dead
    apache@80.service               117f7d8f.../10.131.234.68       failed      failed
    

    Here is a nicer screenshot

    Please try

    fleetctl journal apache@80.service
    

    This should give you more information about what’s not working.

    Awesome tutorial. Couldn’t be better. Thanks

    Hi, I followed every step and read the comments, but I have an issue. The service loads fine:

    CONTAINER ID        IMAGE                 COMMAND                CREATED             STATUS              PORTS                     NAMES
    38adfa7cf7ed        climz/apache:latest   "/usr/sbin/apache2ct   5 minutes ago       Up 5 minutes        149.xx.xx.xx:80->80/tcp   apache80 
    

    But when I try to go to the website, I get nothing.

    If I try telnet, here is the result:

    telnet 149.xx.xx.xx 80
    Trying 149.xx.xx.xx...
    telnet: Unable to connect to remote host: Connection refused
    

    This shows that the host is reachable but no service is listening on that port.

    On the host I wanted to check if anything is listening on port 80, but “netstat -plantu” shows no service listening on port 80 -> http://pastebin.com/BgtSpV0C

    Any idea ?

    Thanks

    My first guess is that the service is running on a different node than the one you connected to.

    $ etcdctl get /announce/services/apache80
    

    Does it give you the 149.xx.xx.xx IP?

    Yep:

    core@coreos-e86a5c19-3185-41ed-8380-4c82c90bfe91 ~ $ etcdctl get /announce/services/apache80
    149.xx.xx.xx:80

    Could you please post the service status output here?

    $ fleetctl status apache@80.service
    
    core@coreos-e86a5c19-3185-41ed-8380-4c82c90bfe91 ~ $ fleetctl status apache@80.service
    ● apache@80.service - Apache web server service
       Loaded: loaded (/run/fleet/units/apache@80.service; linked-runtime)
       Active: active (running) since Sat 2014-10-18 08:42:24 UTC; 1 day 12h ago
      Process: 714 ExecStartPre=/usr/bin/docker pull climz/apache (code=exited, status=0/SUCCESS)
      Process: 705 ExecStartPre=/usr/bin/docker rm apache%i (code=exited, status=1/FAILURE)
      Process: 640 ExecStartPre=/usr/bin/docker kill apache%i (code=exited, status=1/FAILURE)
     Main PID: 727 (docker)
       CGroup: /system.slice/system-apache.slice/apache@80.service
               └─727 /usr/bin/docker run --name apache80 -p 149.xx.xx.xx:80:80 climz/apache /usr/sbin/apache2ctl -D FOREGROUND
    
    Oct 18 08:42:22 coreos-ccac17b2-7889-44be-b83f-597fa5c56d3a.novalocal docker[640]: Error response from daemon: No such container: apache80
    Oct 18 08:42:22 coreos-ccac17b2-7889-44be-b83f-597fa5c56d3a.novalocal docker[640]: 2014/10/18 08:42:22 Error: failed to kill one or more containers
    Oct 18 08:42:22 coreos-ccac17b2-7889-44be-b83f-597fa5c56d3a.novalocal docker[705]: Error response from daemon: No such container: apache80
    Oct 18 08:42:22 coreos-ccac17b2-7889-44be-b83f-597fa5c56d3a.novalocal docker[705]: 2014/10/18 08:42:22 Error: failed to remove one or more containers
    Oct 18 08:42:22 coreos-ccac17b2-7889-44be-b83f-597fa5c56d3a.novalocal docker[714]: Pulling repository climz/apache
    Oct 18 08:42:24 coreos-ccac17b2-7889-44be-b83f-597fa5c56d3a.novalocal systemd[1]: Started Apache web server service.
    Oct 18 08:42:25 coreos-ccac17b2-7889-44be-b83f-597fa5c56d3a.novalocal docker[727]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
    
    

    It seems correct, no?

    OK, I got it working. The thing is, my instance is in a cloud environment, so the command

    ExecStart=/usr/bin/docker run --name apache%i -p ${COREOS_PUBLIC_IPV4}:%i:80 climz/apache /usr/sbin/apache2ctl -D FOREGROUND
    

    fails because there is no interface with the address 149.xx.xx.xx to bind to; the public interface lives outside of the instance environment. Changing the value to the private IP makes it work:

    ExecStart=/usr/bin/docker run --name apache%i -p ${COREOS_PRIVATE_IPV4}:%i:80 climz/apache /usr/sbin/apache2ctl -D FOREGROUND
    

    Hello, I’m quite new to Linux systems, and I’m running the CoreOS machines on bare-metal installations. I’ve built the cluster, the unit files, etc., but my services are not starting because of the following problem:

    $ fleetctl status apache-discovery@80.service
    ● apache-discovery@80.service - Announce Apache@80 service
       Loaded: loaded (/run/fleet/units/apache-discovery@80.service; linked-runtime)
       Active: failed (Result: resources)
    
    Oct 20 19:18:19 localhost systemd[1]: Starting Announce Apache@80 service...
    Oct 20 19:18:19 localhost systemd[1]: Failed to load environment files: No such file or directory
    Oct 20 19:18:19 localhost systemd[1]: apache-discovery@80.service failed to run 'start' task: No such file or directory
    Oct 20 19:18:19 localhost systemd[1]: Failed to start Announce Apache@80 service.
    Oct 20 19:18:19 localhost systemd[1]: Unit apache-discovery@80.service entered failed state.
    
    

    Why does the system have no /etc/environment file? Thank you.

    Did you submit and load it?

    fleetctl submit apache@.service apache-discovery@.service
    
    fleetctl load apache@80.service
    fleetctl load apache-discovery@80.service
    

    Check the result of

    fleetctl list-unit-files
    

    Should return

    UNIT                HASH    DSTATE      STATE       TMACHINE
    apache-discovery@.service   26a893f inactive    inactive    -
    apache@.service         72bcc95 inactive    inactive    -
    

    This comment has been deleted

      Sure: I submitted, loaded and started the services. The result of the fleetctl list-unit* commands is:

      $ fleetctl list-unit-files
      UNIT				HASH	DSTATE		STATE		TARGET
      apache-discovery@.service	26a893f	inactive	inactive	-
      apache-discovery@80.service	26a893f	loaded		loaded		3cdcd753.../192.168.208.24
      apache@.service			77adab6	inactive	inactive	-
      apache@80.service		77adab6	launched	launched	3cdcd753.../192.168.208.24
      
      
      $ fleetctl list-units
      UNIT				MACHINE				ACTIVE		SUB
      apache-discovery@80.service	3cdcd753.../192.168.208.24	failed		failed
      apache@80.service		3cdcd753.../192.168.208.24	inactive	dead
      

      OK, I did some research on the internet: in some versions of CoreOS the /etc/environment file is not present, but it can be created via the cloud-config file. That solves the problem:

      #cloud-config
      write_files:
          - path: /etc/environment
            permissions: 0644
            content: |
              COREOS_PUBLIC_IPV4=1.1.1.1
              COREOS_PRIVATE_IPV4=2.2.2.2
      

      bye

      Great Tut! I too would be interested in Load Balancing example across DO data centres.

      I am getting this error:

      core@coreos-01 ~ $ fleetctl status apache@9999.service
      Error running remote command: SSH_AUTH_SOCK environment variable is not set. Verify ssh-agent is running. See https://github.com/coreos/fleet/blob/master/Documentation/using-the-client.md for help.

      Kamal Nasser
      DigitalOcean Employee
      DigitalOcean Employee badge
      October 23, 2014

      Did you enable SSH Agent Forwarding (-A)?

      ssh -A core@coreos-01
      

      Make sure that you’re running ssh-agent locally, have a look at the SSH Agent section of this tutorial.

      Just a heads up: the config files use =- and NOT =_. The page font doesn’t make this distinction clear. If you use =_/usr/bin/docker, CoreOS will complain that it couldn’t find the path “_/usr/bin/docker”, as of course it doesn’t exist.

      Since I didn’t find the /etc/environment file format in any of the forums, I felt it was better to mention it here. Please note that you should not use export as in .bash_profile. Instead, just assign values to the variables, as below.

      COREOS_PUBLIC_IPV4=XX.XX.XX.XX
      
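For reference, a complete /etc/environment carrying both variables would look like this (the addresses below are placeholders; substitute your machine’s real public and private IPs):

```ini
COREOS_PUBLIC_IPV4=203.0.113.10
COREOS_PRIVATE_IPV4=10.132.0.10
```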

      The lines After=etcd.service and Requires=etcd.service resulted in all sorts of error messages for me; one example:

      $ etcdctl cluster-health
      cluster may be unhealthy: failed to list members
      Error:  unexpected status code 404
      

      That happens because etcd and etcd2 were being used simultaneously. To fix it, just change etcd.service to etcd2.service.
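A minimal sketch of the corrected dependency lines in the unit file, assuming your cluster runs etcd2:

```ini
[Unit]
Description=Apache web server service
# On clusters running etcd2, depend on etcd2.service instead of etcd.service
Requires=etcd2.service
After=etcd2.service
After=docker.service
```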
