PythonPro #53: FastAPI on Docker, Python-CUDA Integration with Numbast, and Concurrent Requests with httpx vs aiohttp
Welcome to a brand new issue of PythonPro!
In today’s Expert Insight we bring you an excerpt from the recently published book, FastAPI Cookbook, which explains how to deploy FastAPI apps using Docker, covering Dockerfile creation, image building, and container generation.
News Highlights: Numbast simplifies Python-CUDA C++ integration by auto-generating Numba bindings for CUDA functions; and DJ Beat Drop enhances Django’s new developer onboarding with a streamlined project initializer.
Time-Series Data Meets Blockchain: Storing Time-Series Data with Solidity, Ganache and Python⛓️
Let's Eliminate General Bewilderment • Python's LEGB Rule, Scope, and Namespaces🧩
And today’s Featured Study introduces LSS-SKAN, a Kolmogorov–Arnold Network (KAN) variant that uses a single-parameter basis function (Shifted Softplus) to deliver accuracy and speed efficiently.
Stay awesome!
Divya Anne Selvaraj
Editor-in-Chief
P.S.: Thank you to those who participated in this month's survey. With this issue, we have tried to fulfill at least one request made by each participant. Keep an eye out for next month's survey.
🐍 Python in the Tech 💻 Jungle 🌳
🗞️News
Bridging the CUDA C++ Ecosystem and Python Developers with Numbast: Numbast streamlines the integration of CUDA C++ libraries with Python by automatically generating Numba bindings for CUDA functions.
Improving the New Django Developer Experience: Introduces DJ Beat Drop as a streamlined project initializer to improve the onboarding experience for new Django developers.
💼Case Studies and Experiments🔬
Concurrent Requests in Python: httpx vs aiohttp: Describes how switching from the httpx library to aiohttp resolved high-concurrency issues and improved stability in a computer vision application (a sketch follows below).
From Python to CPU instructions: Part 1: Explains how rewriting a Python program in C exposes low-level details Python abstracts away, particularly highlighting the manual effort required for tasks like input handling.
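To make the aiohttp approach from the first item above concrete, here is a minimal, hypothetical sketch of firing many requests concurrently; the endpoint URL, payloads, and connection limit are illustrative assumptions, not taken from the article.

import asyncio
import aiohttp

URL = "https://example.com/predict"  # placeholder endpoint, not from the article

async def fetch(session: aiohttp.ClientSession, payload: dict) -> int:
    # Send one POST request and return its status code
    async with session.post(URL, json=payload) as resp:
        return resp.status

async def main() -> None:
    # Cap open connections so high concurrency does not overwhelm the server
    connector = aiohttp.TCPConnector(limit=50)
    async with aiohttp.ClientSession(connector=connector) as session:
        tasks = [fetch(session, {"frame_id": i}) for i in range(200)]
        statuses = await asyncio.gather(*tasks)
        print(statuses.count(200), "requests succeeded")

if __name__ == "__main__":
    asyncio.run(main())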
📊Analysis
Python 3.13, what didn't make the headlines: Highlights Python 3.13's understated but impactful improvements, focusing on debugging enhancements, filesystem fixes, and minor concurrency updates.
When should you upgrade to Python 3.13?: Advises waiting until December 2024 for Python 3.13 upgrades to ensure compatibility with libraries, tools, and bug-fix improvements.
🎓 Tutorials and Guides 🤓
Python Thread Safety: Using a Lock and Other Techniques: Explains how to address issues like race conditions and introduces synchronization techniques such as semaphores to ensure safe, concurrent code execution.
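As a quick illustration of the locking technique the article covers, here is a minimal sketch (ours, not the article's) showing how a threading.Lock prevents a race condition on a shared counter.

import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        # Without the lock, this read-modify-write can interleave across
        # threads and lose updates
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 every time, thanks to the lock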
Time-Series Data Meets Blockchain: Storing Time-Series Data with Solidity, Ganache and Python: Walks you through the steps to set up Ethereum locally, deploy a smart contract, and store and retrieve data points.
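To give a rough idea of the workflow, here is a hedged web3.py sketch against a local Ganache node; the contract address, ABI, and the addPoint/getPoint function names are hypothetical placeholders, not the tutorial's actual code.

from web3 import Web3

# Ganache's default local RPC endpoint
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
account = w3.eth.accounts[0]  # Ganache pre-funds and unlocks these accounts

# Placeholders: substitute the address and ABI of your deployed contract
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"
ABI = []  # paste the ABI produced when compiling your Solidity contract

contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ABI)

# Hypothetical contract methods storing and reading one time-series point
tx_hash = contract.functions.addPoint(1700000000, 42).transact({"from": account})
w3.eth.wait_for_transaction_receipt(tx_hash)
print(contract.functions.getPoint(0).call())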
Beautiful Soup: Build a Web Scraper With Python: Covers how to inspect site structure, scrape HTML content, and parse data using Requests and Beautiful Soup to build a script that extracts and displays job listings.
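For a flavour of the approach, here is a minimal sketch that fetches the demo job board used in the tutorial and prints job titles; the URL and the element and class names are assumptions about that page's structure.

import requests
from bs4 import BeautifulSoup

URL = "https://realpython.github.io/fake-jobs/"  # demo site assumed from the tutorial

response = requests.get(URL, timeout=10)
soup = BeautifulSoup(response.text, "html.parser")

# The tag and class names below are assumptions about the page structure
for card in soup.find_all("div", class_="card-content"):
    title = card.find("h2")
    company = card.find("h3")
    if title and company:
        print(title.get_text(strip=True), "at", company.get_text(strip=True))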
🎥Advanced Web Scraping Tutorial! (w/ Python Beautiful Soup Library): Covers using Requests to retrieve and parse data, especially from dynamic pages like Walmart's, with enhancements such as modified request headers.
Fuzzy regex matching in Python: Introduces the orc library to simplify fuzzy matching by providing a human-friendly interface that highlights edits and can invert changes, enhancing usability for complex text correction tasks.
Achieving Symmetrical ManyToMany Filtering in Django Admin: Covers using Django's RelatedFieldWidgetWrapper and a custom ModelForm, allowing for consistent filtering on both sides of a ManyToMany relationship.
Get started with the free-threaded build of Python 3.13: Details installation, usage in Python programs, compatibility with C extensions, and how to detect GIL status programmatically (a sketch follows below).
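On the GIL-detection point from the last item, here is a small sketch; it assumes Python 3.13, where sys._is_gil_enabled() and the Py_GIL_DISABLED build flag are available.

import sys
import sysconfig

# Was this interpreter compiled with free-threading support?
print("Free-threaded build:", bool(sysconfig.get_config_var("Py_GIL_DISABLED")))

# Is the GIL actually enabled in this running process? (Python 3.13+)
if hasattr(sys, "_is_gil_enabled"):
    print("GIL enabled at runtime:", sys._is_gil_enabled())
else:
    print("GIL status API not available on this Python version")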
🔑Best Practices and Advice🔏
Let's Eliminate General Bewilderment • Python's LEGB Rule, Scope, and Namespaces: Details how variables are resolved in local, enclosing, global, and built-in scopes, using accessible examples to clarify potential pitfalls.
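As a quick refresher on the lookup order the article walks through, here is a minimal example (ours, not the article's):

x = "global"              # Global scope

def outer():
    x = "enclosing"       # Enclosing scope

    def inner():
        x = "local"       # Local scope wins first
        print(x)          # -> local

    inner()
    print(x)              # -> enclosing

outer()
print(x)                  # -> global
print(len("built-ins are the last stop"))  # len resolves in the Built-in scope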
🎥Robust LLM pipelines (Mathematica, Python, Raku): Given the unreliable and often slow nature of LLMs, this presentation outlines methods to enhance pipeline efficiency, robustness, and usability.
A new way of Python Debugging with the Frame Evaluation API: Introduces Python's Frame Evaluation API, a tool that allows real-time monitoring and control of program execution at the frame level.
Buffers on the edge: Python and Rust: Explains how Python's buffer protocol, which enables memory sharing between objects, can lead to undefined behavior due to data races in C, and the challenges Rust faces in maintaining soundness.
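To see the memory sharing at the heart of that discussion, here is a tiny sketch (ours, not the article's): a memoryview exposes a bytearray's buffer without copying, so mutations are visible through both names, which is exactly the aliasing that makes soundness hard to guarantee.

data = bytearray(b"hello")
view = memoryview(data)   # shares data's buffer instead of copying it

data[0] = ord("J")        # mutate through the original object
print(bytes(view))        # b'Jello' - the view sees the change

view[-1] = ord("!")       # mutate through the view
print(data)               # bytearray(b'Jell!')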
Optimization of Iceberg Table In AWS Glue: Discusses how AWS Glue offers built-in optimization, but a Python-based solution using boto3 and Athena SQL scripts provides customizable, cost-effective automation (a sketch follows below).
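On the boto3-plus-Athena approach from the item above, here is a hedged sketch; the region, database, table, and S3 output location are placeholders, and the statement uses Athena's OPTIMIZE ... REWRITE DATA USING BIN_PACK compaction syntax for Iceberg tables.

import boto3

athena = boto3.client("athena", region_name="us-east-1")  # placeholder region

# Compact small files in an Iceberg table (placeholder database and table names)
query = "OPTIMIZE my_database.my_iceberg_table REWRITE DATA USING BIN_PACK"

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "my_database"},
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},  # placeholder bucket
)
print("Started query:", response["QueryExecutionId"])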
🔍Featured Study: LSS-SKAN💥
In "LSS-SKAN: Efficient Kolmogorov–Arnold Networks based on Single-Parameterized Function," Chen and Zhang from South China University of Technology present a refined Kolmogorov–Arnold Network (KAN) variant. Their study introduces an innovative design principle for neural networks, improving accuracy and computational speed while ensuring greater model interpretability.
Context
KANs are neural networks based on the Kolmogorov-Arnold theorem, which breaks down complex, multivariate functions into simpler univariate ones, aiding in better visualisation and interpretability. This makes them valuable in critical decision-making applications, where understanding a model's decision process is crucial. Unlike typical neural networks like Multilayer Perceptrons (MLPs), which rely on opaque linear and activation functions, KANs assign functions to network edges, creating a more interpretable structure. Over time, several KAN variants, such as FourierKAN and FastKAN, have emerged, each with unique basis functions to balance speed and accuracy.
LSS-SKAN builds on these advancements with the Efficient KAN Expansion (EKE) Principle, a new approach that scales networks using fewer complex basis functions, allocating parameters to the network's size instead. This principle is central to LSS-SKAN's efficiency and demonstrates how a simpler basis function can yield high accuracy with reduced computational cost.
Key Features of LSS-SKAN
EKE Principle: Scales the network by prioritising size over basis function complexity, making LSS-SKAN faster and more efficient.
Single-Parameter Basis Function: Utilises the Shifted Softplus function, requiring only one learnable parameter for each function, which simplifies the network and reduces training time (an illustrative sketch follows after this list).
Superior Accuracy: Outperforms KAN variants, showing a 1.65% improvement over Spl-KAN, 2.57% over FastKAN, 0.58% over FourierKAN, and 0.22% over WavKAN on the MNIST dataset.
Reduced Training Time: Achieves significant reductions in training time, running 502.89% faster than MLP+rKAN and 41.78% faster than MLP+fKAN.
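To make the single-parameter idea concrete, here is an illustrative PyTorch sketch of a shifted Softplus edge function with one learnable scale per edge; this is our simplified reading, not the paper's exact formulation, so refer to the LSS-SKAN code on GitHub for the real definition.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleParamShiftedSoftplusLayer(nn.Module):
    # Illustrative only: each edge applies k * (softplus(x) - ln 2), so the
    # function passes through the origin and has a single learnable parameter k
    def __init__(self, in_features: int, out_features: int) -> None:
        super().__init__()
        self.k = nn.Parameter(torch.randn(out_features, in_features) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        basis = F.softplus(x) - math.log(2.0)   # shape: (batch, in_features)
        return basis @ self.k.t()               # shape: (batch, out_features)

layer = SingleParamShiftedSoftplusLayer(28 * 28, 10)
print(layer(torch.randn(4, 28 * 28)).shape)     # torch.Size([4, 10])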
What This Means for You
For those working in machine learning or fields requiring interpretable AI, LSS-SKAN offers a practical solution to enhance neural network accuracy and speed while maintaining transparency in model decision-making. LSS-SKAN is particularly beneficial in applications involving image classification, scientific computing, or scenarios demanding high interpretability, such as medical or financial sectors where model explainability is crucial.
Examining the Details
The researchers conducted detailed experiments using the MNIST dataset to measure LSS-SKAN’s performance against other KAN variants. They tested both short-term (10-epoch) and long-term (30-epoch) training cycles, focusing on two key metrics: accuracy and execution speed.
Through these tests, LSS-SKAN consistently outperformed other KAN models in accuracy, achieving a 1.65% improvement over Spl-KAN, 2.57% over FastKAN, and 0.58% over FourierKAN, while also running 502.89% faster than MLP+rKAN and 41.78% faster than MLP+fKAN.
The LSS-SKAN Python library is available on GitHub, along with experimental code, so you can replicate and build on their findings. They recommend a learning rate between 0.0001 and 0.001 for best results, particularly due to KANs’ sensitivity to learning rate adjustments.
You can learn more by reading the entire paper and accessing LSS-SKAN.
🧠 Expert insight💥
Here’s an excerpt from “Chapter 12: Deploying and Managing FastAPI Applications” in the book, FastAPI Cookbook by Giunio De Luca, published in August 2024.
Running FastAPI applications in Docker containers
Docker is a useful tool that lets developers wrap applications with their dependencies into a container. This method ensures that the application operates reliably in different environments, avoiding the common "works on my machine" issue. In this recipe, we will see how to write a Dockerfile and run a FastAPI application inside a Docker container. By the end of this guide, you will know how to put your FastAPI application into a container, making it more flexible and simpler to deploy.
Getting ready
You will benefit from some knowledge of container technology, especially Docker, to follow the recipe better. But first, check that Docker Engine is set up properly on your machine. You can see how to do it at this link: https://docs.docker.com/engine/install/.
If you use Windows, it is better to install Docker Desktop, which is a Docker virtual machine distribution with a built-in graphical interface.
Whether you have Docker Engine or Docker Desktop, make sure the daemon is running by typing this command:
$ docker images
If you don’t see any error about the daemon, that means that Docker is installed and working on the machine. The way to start the Docker daemon depends on the installation you choose. Look at the related documentation to see how to do it.
You can use the recipe for your own applications or follow along with the Live Application that we introduced in the first recipe and are using throughout the chapter.
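If you are not following the book's Live Application, any small FastAPI app will do for this recipe. Here is a minimal placeholder of our own (not the book's code), saved as app/main.py, with fastapi listed in requirements.txt:

# app/main.py - minimal placeholder app for trying out the recipe
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def home() -> dict:
    # This is the home page response you will later see at http://localhost:8000
    return {"message": "Hello from inside a Docker container"}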
How to do it…
It is not very complicated to run a simple FastAPI application in a Docker container. The process consists of three steps:
Create the Dockerfile.
Build the image.
Generate the container.
Then, you just have to run the container to have the application working.
Creating the Dockerfile
The Dockerfile contains the instructions needed to build the image from a base operating system and the files we want to include.
It is good practice to create a separate Dockerfile for the development environment. We will name it Dockerfile.dev and place it under the project root folder.
We start the file by specifying the base image, which will be as follows:
FROM python:3.10
This will pull an image from Docker Hub that already comes with Python 3.10 included. Then, we create a folder called /code that will host our code:
WORKDIR /code
Next, we copy requirements.txt into the image and install the packages it lists:
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir -r /code/requirements.txt
The pip install command runs with the --no-cache-dir parameter to avoid pip caching operations that wouldn't be beneficial inside a container. Also, in a production environment, for larger applications, it is recommended to pin fixed versions of the packages in requirements.txt to avoid potential compatibility issues due to package upgrades.
Then, we can copy the app folder containing the application into the image with the following command:
COPY ./app /code/app
Finally, we define the server startup instruction as follows:
CMD ["fastapi", "run", "app/main.py", "--port", "80"]
This is all we need to create our Dockerfile.dev file.
Building the image
Once we have Dockerfile.dev, we can build the image by running the following from the command line at the project root folder level:
$ docker build -f Dockerfile.dev -t live-application .
Since we named our Dockerfile Dockerfile.dev, we need to specify it with the -f argument. Once the build is finished, you can check that the image has been built correctly by running the following:
$ docker images live-application
You should see the details of the image in the output, like this:
REPOSITORY TAG IMAGE ID CREATED SIZE
live-application latest 7ada80a535c2 43 seconds ago 1.06GB
With the image built, we can proceed with creating the container.
Creating the container
To create the container and run it, simply run the following:
$ docker run -p 8000:80 live-application
This will create the container and run it. We can see the container by running the following:
$ docker ps -a
Since we didn't specify a container name, Docker will automatically assign a random, fancy one. Mine, for example, is bold_robinson.
Open your browser at http://localhost:8000 and you will see the home page response of our application.
This is all you need to run a FastAPI application inside a Docker container. Combining FastAPI and Docker lets you take advantage of both technologies: you can easily scale, update, and deploy your web app with minimal configuration.
See also
The Dockerfile can be used to specify several features of the image. Check the list of commands in the official documentation:
Dockerfile reference: https://docs.docker.com/reference/dockerfile/
Docker CLI documentation: https://docs.docker.com/reference/cli/docker/
FastAPI in Containers - Docker: https://fastapi.tiangolo.com/deployment/docker/
FastAPI Cookbook was published in August 2024.
Get the eBook for $24.99 (originally $35.99)!
Get the Print Book for $30.99 (originally $44.99)!
And that’s a wrap.
We have an entire range of newsletters with focused content for tech pros. Subscribe to the ones you find the most useful here. The complete PythonPro archives can be found here.
If you have any suggestions or feedback, or would like us to find you a Python learning resource on a particular subject, just leave a comment below.