The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.
Gradio is an open-source Python library used to create machine learning (ML) and deep learning (DL) web applications. It offers a user-friendly interface that allows developers to build and deploy interactive and customizable interfaces for their machine learning models quickly, without extensive knowledge of web development.
The primary goal of Gradio is to bridge the gap between machine learning models and end-users by providing an easy-to-use interface for creating web applications. It enables users to interact with ML models in a more intuitive and accessible manner, making the deployment and usage of machine learning applications more widespread.
Gradio provides pre-built UI components that simplify the creation of ML web applications, such as input forms, sliders, image displays, and text boxes. Users can set the input and output components for their machine learning models with a few lines of code, while Gradio handles other aspects such as serving and hosting the application. This approach lets machine learning practitioners focus on the model itself and speeds up development and deployment by hiding the complexity of web development.
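For example, a minimal Gradio app is just a Python function plus the input and output components it maps between. The greet function below is purely illustrative and not part of this tutorial's application:

import gradio as gr

def greet(name):
    # Return a simple greeting for the text the user typed in
    return f"Hello, {name}!"

# Map a text input to a text output; Gradio generates the web UI automatically
demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()

Running this script starts a local web server and prints the URL of the generated interface.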
This tutorial will walk you through building machine learning web applications using Gradio on Ubuntu.
Before starting this guide, you should have:
An Ubuntu cloud GPU server with at least a fractional (1/7) GPU, 10 GB of VRAM, 2 vCPUs, and 10 GB of memory.
A root user or a user with sudo privileges. Follow our initial server setup guide for guidance.
Python and the pip package manager installed, following Step 1 of How To Install Python 3 and Set Up a Programming Environment on Ubuntu.
A domain name configured to point to your server. You can purchase one on Namecheap or get one for free on Freenom. You can learn how to point domains to DigitalOcean by following the relevant documentation on domains and DNS.
You can use the pip package manager to install Gradio along with the required model dependency packages.
pip3 install realesrgan gfpgan basicsr gradio
Here’s an explanation of the above-mentioned packages:
RealESRGAN is an image super-resolution model that aims to enhance the resolution and quality of images.
GFPGAN, or Generative Facial Prior GAN, is a GAN-based model for restoring and enhancing facial images.
BasicSR is an open-source toolbox for super-resolution tasks in computer vision. It provides tools, models, and utilities for implementing and experimenting with various super-resolution algorithms.
Gradio creates user interfaces for machine learning models.
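Optionally, you can confirm that the packages installed correctly by checking their reported versions:

pip3 show gradio realesrgan gfpgan basicsr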
Next, verify the Jinja2 version to check compatibility.
pip show jinja2
You will see the following output.
Output
Name: Jinja2
Version: 3.0.3
Summary: A very fast and expressive template engine.
Home-page: https://palletsprojects.com/p/jinja/
Author: Armin Ronacher
Author-email: armin.ronacher@active-4.com
License: BSD-3-Clause
Location: /usr/lib/python3/dist-packages
Requires:
Required-by: altair, gradio, torch
As you can see, the installed Jinja2 version is 3.0.3. However, this article uses the Pandas library, which requires Jinja2 version 3.1.2 or above. Upgrade the jinja2 package to the latest version.
pip install --upgrade jinja2
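You can re-run the earlier command to confirm that the reported version is now 3.1.2 or higher before continuing:

pip show jinja2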
Next, install other dependencies for OpenGL support on Linux systems.
apt install libgl1-mesa-glx libglib2.0-0 -y
This command will download and install the necessary OpenGL libraries on your system.
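These libraries are required because OpenCV, which the model packages depend on, cannot be imported without libGL. You can optionally confirm that the import now works:

python3 -c "import cv2; print(cv2.__version__)"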
You can follow these basic steps to create a new Gradio web application. This tutorial assumes you have a trained machine learning model you want to deploy using Gradio.
First, create a directory for your web application.
mkdir -p /opt/gradio-app/
Next, change the ownership and permissions of this directory:
chown -R root:root /opt/gradio-app/
chmod -R 775 /opt/gradio-app/
Next, navigate to the application directory and create an app.py file for your Gradio application.
cd /opt/gradio-app/
nano app.py
First, add the following code to import the required libraries.
import gradio as gr
from gfpgan import GFPGANer
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer
import numpy as np
import cv2
import requests
Here is the explanation:
gr: Gradio library for creating web interfaces for machine learning models.
GFPGANer: Class for GFPGAN model.
RRDBNet: Class for the RRDBNet architecture.
RealESRGANer: Class for the RealESRGAN model.
numpy: Library for numerical operations.
cv2: OpenCV library for computer vision.
requests: Library for making HTTP requests.
Next, define the model configuration and image enhancement function:
def enhance_image(input_image):
    # Model configuration and checkpoint locations
    arch = 'clean'
    model_name = 'GFPGANv1.4'
    gfpgan_checkpoint = 'https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth'
    realesrgan_checkpoint = 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth'

    # RRDBNet backbone used by RealESRGAN for background upsampling
    rrdbnet = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2)

    bg_upsampler = RealESRGANer(
        scale=2,
        model_path=realesrgan_checkpoint,
        model=rrdbnet,
        tile=400,
        tile_pad=10,
        pre_pad=0,
        half=True
    )

    # GFPGAN restores faces and uses RealESRGAN to upsample the background
    restorer = GFPGANer(
        model_path=gfpgan_checkpoint,
        upscale=2,
        arch=arch,
        channel_multiplier=2,
        bg_upsampler=bg_upsampler
    )

    input_image = input_image.astype(np.uint8)
    cropped_faces, restored_faces, restored_img = restorer.enhance(input_image)

    # Return the first restored face crop and the fully restored image
    return restored_faces[0], restored_img
This function (enhance_image) takes an input image and enhances it using the GFPGAN and RealESRGAN models. It configures the models, loads the checkpoints, and performs the enhancement.
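If you want to sanity-check the function before building the interface, you could temporarily append a few test lines like the following to app.py and run it once with python3 app.py. The file name sample.jpg is only a placeholder, and the model checkpoints are downloaded on the first run. Remove these lines before continuing.

# Optional, temporary test of enhance_image; remove before continuing
image = cv2.imread("sample.jpg")        # placeholder path to any local test image
face, restored = enhance_image(image)   # returns (restored_face, restored_image)
cv2.imwrite("face_restored.jpg", face)
cv2.imwrite("image_restored.jpg", restored)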
Next, define the Gradio interface configuration:
interface = gr.Interface(
    fn=enhance_image,
    inputs=gr.Image(),
    outputs=[gr.Image(), gr.Image()],
    live=True,
    title="Face Enhancement with GFPGAN",
    description="Upload an image of a face and see it enhanced using GFPGAN. Two outputs will be displayed: restored_faces and restored_img."
)
Here is the explanation:
fn: The function (enhance_image) to be used for processing inputs.
inputs: The input component (in this case, an image).
outputs: The output components (two images: restored_faces and restored_img).
live: Enables live updates in the Gradio interface.
title: Title for the Gradio interface.
description: Description for the Gradio interface.
Finally, add the following line to launch the Gradio Interface.
interface.launch(server_name="0.0.0.0", server_port=8080)
Save and close the file, then run the application to verify the model.
python3 app.py
If everything is fine, you will see the following output.
Output
warnings.warn(
Running on local URL: http://0.0.0.0:8080
To create a public link, set `share=True` in `launch()`.
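While the application is still running, you can optionally confirm from a second terminal that it responds:

curl -I http://127.0.0.1:8080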
Press CTRL + C to stop the application process.
The systemd service file is used to define how your Gradio application should be managed as a service by the system.
Use the nano editor to create a gradio.service file.
nano /etc/systemd/system/gradio.service
Add the following configuration:
[Unit]
Description=My Gradio Web Application
[Service]
ExecStart=/usr/bin/python3 /opt/gradio-app/app.py
WorkingDirectory=/opt/gradio-app/
Restart=always
User=root
Environment=PATH=/usr/bin:/usr/local/bin
Environment=PYTHONUNBUFFERED=1
[Install]
WantedBy=multi-user.target
Save the changes and close the text editor. Then, reload the systemd configuration:
systemctl daemon-reload
Finally, start the Gradio service and enable it to start on boot.
systemctl start gradio
systemctl enable gradio
Check the status of your Gradio service to ensure it’s running without errors:
systemctl status gradio
This command will display the current status and any error messages if the service is not running correctly.
● gradio.service - My Gradio Web Application
Loaded: loaded (/etc/systemd/system/gradio.service; disabled; vendor preset: enabled)
Active: active (running) since Sat 2024-01-13 02:49:18 UTC; 10s ago
Main PID: 18811 (python3)
Tasks: 9 (limit: 9410)
Memory: 350.9M
CPU: 10.239s
CGroup: /system.slice/gradio.service
└─18811 /usr/bin/python3 /opt/gradio-app/app.py
Jan 13 02:49:18 gradio systemd[1]: Started My Gradio Web Application.
Jan 13 02:49:26 gradio python3[18811]: /usr/local/lib/python3.10/dist-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.f>
Jan 13 02:49:26 gradio python3[18811]: warnings.warn(
Jan 13 02:49:27 gradio python3[18811]: Running on local URL: http://0.0.0.0:8080
Jan 13 02:49:27 gradio python3[18811]: To create a public link, set `share=True` in `launch()`.
Now your Gradio application should be running as a systemd service, and it will start automatically on system boot.
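If the service ever fails to start, you can inspect its logs to find the cause:

journalctl -u gradio -f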
You can implement Nginx as a reverse proxy to enhance the overall performance, security, and scalability of your Gradio web application. It allows you to take advantage of Nginx’s capabilities while Gradio focuses on serving interactive machine learning models.
First, install the Nginx web server packages.
apt install nginx -y
Next, create a new Nginx configuration file for your Gradio application.
nano /etc/nginx/conf.d/gradio.conf
Add the following configuration, adjusting the placeholders:
server {
    listen 80;
    server_name gradio.your_domain.com;

    location / {
        proxy_pass http://127.0.0.1:8080/;
    }
}
Replace gradio.your_domain.com with your domain name or your server’s IP address. Change the proxy_pass address to the address where Gradio is running. If Gradio runs on the same server as Nginx and uses the port configured in app.py (8080), you can leave it as is.
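Depending on your Gradio version, the interface may rely on WebSockets for live updates. If uploads or live mode stall behind the proxy, you can extend the location block to pass the upgrade headers through. This is an optional sketch and is not required in every setup:

location / {
    proxy_pass http://127.0.0.1:8080/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}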
Before restarting Nginx, it’s a good idea to test the configuration to ensure no syntax errors:
nginx -t
If everything is fine, you will see the following output.
Output
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Next, restart Nginx to apply the changes:
systemctl restart nginx
Now, Nginx is configured as a reverse proxy for your Gradio application. When users access your domain or IP, Nginx forwards the requests to the Gradio server running on the specified address and port.
Securing a Gradio web application with Let’s Encrypt SSL encrypts the data exchanged between the user’s browser and the Gradio web application. This ensures that sensitive information, such as user inputs or any confidential data, remains secure during transmission. By incorporating Let’s Encrypt SSL into your Gradio web application, you not only enhance security but also contribute to a safer and more trustworthy online environment for your users.
First, install Certbot, the client for Let’s Encrypt, on your server.
apt install -y certbot python3-certbot-nginx
Next, run the certbot command to obtain an SSL certificate for your domain. Replace gradio.your_domain.com and admin@your_domain.com with your actual domain and email address.
certbot --nginx -d gradio.your_domain.com -m admin@your_domain.com --agree-tos
This will obtain the Let’s Encrypt SSL certificate and automatically update your Nginx configuration to use it.
Output
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Would you be willing, once your first certificate is successfully issued, to
share your email address with the Electronic Frontier Foundation, a founding
partner of the Let's Encrypt project and the non-profit organization that
develops Certbot? We'd like to send you email about our work encrypting the web,
EFF news, campaigns, and ways to support digital freedom.
(Y)es/(N)o: Y
Account registered.
Requesting a certificate for gradio.your_domain.com
Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/gradio.your_domain.com/fullchain.pem
Key is saved at: /etc/letsencrypt/live/gradio.your_domain.com/privkey.pem
This certificate expires on 2024-04-12.
These files will be updated when the certificate is renewed.
Certbot has scheduled a task to renew this certificate in the background automatically.
Deploying certificate
Successfully deployed certificate for gradio.your_domain.com to /etc/nginx/conf.d/gradio.conf
Congratulations! You have successfully enabled HTTPS on https://gradio.your_domain.com
If you like Certbot, please consider supporting our work by:
* Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
* Donating to EFF: https://eff.org/donate-le
Let’s Encrypt certificates are valid for 90 days. To automate the renewal process, add a cron job that runs the renewal command.
crontab -e
Add the following line to run the renewal twice a day:
0 */12 * * * /usr/bin/certbot renew --quiet
Save the changes and exit the editor.
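You can verify that automatic renewal will work, without affecting the live certificate, by running a dry run:

certbot renew --dry-run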
Your Gradio web application should now be accessible over HTTPS with a valid Let’s Encrypt SSL certificate. Open your web browser and access it at https://gradio.your_domain.com.
You can upload any image in the input component and verify that the application uses the GPU server’s resources to produce the enhanced output images.
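If you prefer to verify the deployment from the command line instead of the browser, a short Python check such as the following confirms that the HTTPS endpoint responds (replace the URL with your own domain):

import requests

# Replace with your own domain; a 200 status code means the interface is being served
response = requests.get("https://gradio.your_domain.com")
print(response.status_code)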
In this tutorial, we explored the key steps involved in setting up a Gradio web application. We covered the installation of Gradio and demonstrated how to integrate it with popular machine learning models such as GFPGAN and RealESRGAN. We also covered enhancing security and performance by configuring Nginx as a reverse proxy with Let’s Encrypt SSL.
By leveraging Gradio, users can effortlessly upload images, input data, and visualize model outputs through a dynamic web interface. The platform’s flexibility enables integration with various machine learning frameworks and models, making it a versatile tool for a wide range of applications. For more information, visit the official Gradio web interface documentation.