In the world of web servers, Apache HTTP Server holds a significant place due to its wide usage, open-source nature, and rich feature set. One important aspect of Apache’s functionality that often leaves beginners and even intermediate users scratching their heads is the concept of Apache “workers”. Understanding what Apache workers are and how they function is vital to optimizing your web server for your specific needs. In this mini-tutorial, we’ll break down this concept using real-life examples to make it easier to understand.
What is an Apache Worker?
The term “Apache worker” refers to a process or a thread, depending on the Multi-Processing Module (MPM) used, that handles the requests coming to an Apache server. When a client (for instance, a web browser) sends a request to an Apache server (like asking for a web page), it is the worker’s job to handle that request and send back the appropriate response.
There are two major types of MPMs that Apache uses, each having its own type of worker:
Prefork MPM: This uses a non-threaded, pre-forking web server model. Each request from a client is handled by a separate Apache child process. This is similar to a restaurant where each waiter serves one customer at a time.
Worker MPM: This is a hybrid multi-process, multi-threaded web server model. Here, multiple child processes are created, with each child process capable of handling multiple threads (requests). It’s like a restaurant where a waiter can serve multiple customers simultaneously.
Let’s dive deeper into our restaurant analogy to understand these models better.
Prefork MPM: The Single-Tasking Waiter
Imagine walking into a restaurant where each customer is assigned their own waiter. The waiter takes your order, goes back to the kitchen, waits for your meal to be prepared, and then delivers it to your table. During this time, the waiter does not attend to any other customer until your order is completely served.
In this setup, each waiter is an ‘Apache child process’, and you, the customer, represent a ‘client request’. This is how Prefork MPM operates. It’s a simple and straightforward model, but it can become inefficient when the restaurant (server) gets busy. If there are more customers (requests) than waiters (child processes), the additional customers have to wait, and because every extra waiter is a full, separate process, keeping enough of them on hand is expensive. This is why Prefork MPM consumes a lot of memory as traffic increases.
Worker MPM: The Multi-Tasking Waiter
Now, imagine a different restaurant setup. Here, each waiter is capable of handling multiple customers at the same time. They take your order, leave it with the kitchen, and while your meal is being prepared, they take orders from other tables. Once your meal is ready, they deliver it to you.
In this case, each waiter represents an ‘Apache child process’, and each customer they’re attending to at the same time represents a ‘thread’. This is how Worker MPM operates. It’s more efficient because a single waiter (child process) can serve many customers (handle multiple requests) simultaneously. However, this model can be a bit complex as it requires careful management of resources to prevent waiters from being overloaded with too many orders at once.
Choosing the Right Worker
Choosing between Prefork and Worker MPM is largely dependent on the specific needs and resources of your server. Prefork, being simpler, is less prone to issues but is more resource-intensive. Worker, on the other hand, is more efficient but requires careful tuning to prevent resource overload.
If your server needs to handle a high volume of requests and has sufficient resources, Worker MPM can be a good choice. However, if your server runs software that is not thread-safe (a common example is mod_php with certain PHP extensions), or you simply prefer the more predictable, simpler model, Prefork MPM is often the safer option.
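Before tuning anything, it helps to confirm which MPM your installation is actually using. The commands below are a quick sketch; depending on your distribution, the control binary may be called apachectl, apache2ctl, or httpd.

```bash
# Show the MPM this Apache build is configured to use
apachectl -V | grep -i mpm        # on Debian/Ubuntu: apache2ctl -V | grep -i mpm

# Or list loaded modules and filter for the active MPM module
apachectl -M | grep -i mpm        # e.g. mpm_prefork_module or mpm_worker_module
```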
Configuring Apache Workers
Configuring your Apache workers correctly is crucial to achieving optimal server performance. Let’s look at how you can configure both Prefork and Worker MPMs and increase the number of available workers.
Configuring Prefork MPM
You can configure Prefork MPM by editing the Apache configuration file, typically located at /etc/httpd/conf/httpd.conf or /etc/apache2/apache2.conf, depending on your system. Look for the <IfModule mpm_prefork_module> section.
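Here’s an example of the kind of block you might see; the values below are only illustrative and will differ between distributions and Apache versions:

```apache
<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    MaxClients             150
    MaxRequestsPerChild      0
</IfModule>
```

The key configuration directives are:

StartServers: the number of child processes created when Apache starts.
MinSpareServers and MaxSpareServers: the minimum and maximum number of idle child processes kept ready to absorb bursts of new requests.
MaxClients: the maximum number of child processes that may exist at once, and therefore the maximum number of simultaneous requests Prefork can serve (renamed MaxRequestWorkers in Apache 2.4, where the old name is still accepted as an alias).
MaxRequestsPerChild: how many requests a child process serves before it is recycled; 0 means the process is never recycled.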
To increase the number of available workers, you can raise the MaxClients value. Be cautious, though, as each child process consumes memory, and setting this value too high can lead to your server running out of memory.
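To get a rough sense of how high MaxClients can safely go, you can estimate the average memory footprint of an Apache child process and divide the memory you are willing to dedicate to Apache by that figure. The snippet below is only a rough sketch: it assumes the processes are named apache2 (use httpd on RHEL-style systems) and it ignores memory shared between processes, so it overestimates somewhat.

```bash
# Average resident memory per Apache child, in MB (rough; ignores shared pages)
ps -C apache2 -o rss= | awk '{sum += $1; n++} END { if (n) printf "%.1f MB avg over %d processes\n", sum/n/1024, n }'
```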
Configuring Worker MPM
Configuring Worker MPM is similar to Prefork, but with a few additional thread-related directives. In your Apache configuration file, look for the <IfModule mpm_worker_module> section.
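A typical block might look like the following; as before, the exact values are only illustrative:

```apache
<IfModule mpm_worker_module>
    StartServers             2
    MinSpareThreads         25
    MaxSpareThreads         75
    ThreadLimit             64
    ThreadsPerChild         25
    MaxClients             150
    MaxRequestsPerChild      0
</IfModule>
```

Here, besides the directives explained in the Prefork section, you have:

MinSpareThreads and MaxSpareThreads: the minimum and maximum number of idle threads kept available across all child processes.
ThreadsPerChild: the number of threads each child process creates.
ThreadLimit: the hard upper limit on ThreadsPerChild; changing it requires a full stop and start of the server.
Note that in the Worker model, MaxClients caps the total number of simultaneously served requests (threads) across all child processes, not the number of processes, so it should normally be a multiple of ThreadsPerChild.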
To increase the number of available workers, you can increase the MaxClients value, which represents the maximum number of simultaneous requests. You may also need to adjust ThreadsPerChild and ThreadLimit accordingly: with the illustrative values above, 6 child processes of 25 threads each provide the 150 simultaneous requests that MaxClients allows. Remember, though, that each thread consumes system resources, so these values should be tuned carefully to prevent overloading your server.
Restarting Apache
After making changes to the configuration file, don’t forget to restart the Apache server for the changes to take effect. You can do this with one of the following commands, depending on your system:
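On most modern systemd-based distributions, one of the following will apply; the service is called apache2 on Debian/Ubuntu and httpd on RHEL, CentOS, and Fedora:

```bash
# Check the configuration syntax first (optional but recommended)
sudo apachectl configtest

# Debian / Ubuntu
sudo systemctl restart apache2

# RHEL / CentOS / Fedora
sudo systemctl restart httpd

# Older, non-systemd systems
sudo service apache2 restart    # or: sudo service httpd restart
```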
With these steps, you should be able to configure your Apache workers according to your server’s requirements and performance expectations. Remember, the goal is to find a balance between resource usage and the ability to handle the desired volume of requests.