Text Summarization Using Hugging Face Transformers (Example)

 

In this tutorial, I will show you how to perform text summarization using the Hugging Face transformers library in Python.

Hugging Face is a platform where users can share machine learning models and datasets, including a large collection of pre-trained models.

It is best known for its transformers library, built for natural language processing applications, and in this tutorial, we will demonstrate how to use this library for text summarization.

Here is a quick overview of what we will cover: what text summarization is, how to set up a GPU-powered Python environment, how to install the transformers library, and how to summarize an example text.

Let’s dive into the Python code!

 

What is Text Summarization?

Text summarization is a fascinating field within natural language processing (NLP) that aims to condense lengthy pieces of text into shorter, coherent, and informative versions while retaining the most important information and the overall meaning of the original content.

This process is incredibly valuable in various applications, such as news aggregation, document indexing, and even in aiding human comprehension of extensive text materials.

There are two primary approaches to text summarization: extractive and abstractive summarization. In extractive summarization, the model identifies and selects existing sentences or phrases from the original text that are deemed the most significant.

These selected segments are then combined to create the summary. This approach is relatively straightforward, as it doesn’t generate entirely new content but rather rearranges existing information.

Extractive summarization systems typically employ techniques like sentence scoring, where sentences are ranked based on their relevance and importance, often using features such as keyword frequency or sentence length.
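For illustration, here is a minimal sketch of an extractive summarizer based on simple word-frequency sentence scoring. This is a simplified, assumed approach shown only for demonstration; it is not the method used by the Hugging Face model later in this tutorial:

# minimal extractive summarization sketch: score each sentence by the
# summed frequency of its words and keep the top-ranked sentences
import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    # naive sentence splitting on ., ! and ?
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # word frequencies over the whole text (lowercased, letters only)
    freqs = Counter(re.findall(r'[a-z]+', text.lower()))
    # score each sentence by the summed frequency of its words
    scores = {i: sum(freqs[w] for w in re.findall(r'[a-z]+', s.lower()))
              for i, s in enumerate(sentences)}
    # keep the highest-scoring sentences, in their original order
    top = sorted(sorted(scores, key=scores.get, reverse=True)[:num_sentences])
    return " ".join(sentences[i] for i in top)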

On the other hand, abstractive summarization goes a step further by generating new sentences that may not appear verbatim in the source text.

This approach requires a deeper understanding of the content and the ability to rephrase and synthesize information to produce concise yet coherent summaries.

Abstractive summarization models typically use techniques from machine translation and employ neural networks like recurrent neural networks (RNNs) or transformer-based models like GPT (Generative Pre-trained Transformer).

 

GPU-Powered Python Programming IDE

In order to download and run the pre-trained text summarization model from Hugging Face at a reasonable speed, you will want access to a GPU-powered computer.

If your computer has a GPU, then your Python programming IDE should be able to run the code. Otherwise, an easy and free-to-use IDE that gives you access to a free GPU is Google’s Colaboratory notebook.

Another option for free GPU access is Amazon SageMaker Studio Lab. But for this tutorial, we will use the Google Colab notebook.

To use the notebook, go to the Colab website, then go to “File” in the menu tab above, and then click on “New notebook”.

That will launch a new interactive notebook similar to a Jupyter notebook with code cells that can be run. You can give the notebook a title if you like.

Next, go to “Runtime” and select “Change runtime type”. Under “Hardware accelerator”, select “T4 GPU”, then click “Save”.

Now, you have access to a GPU-powered Python programming environment with roughly 12GB of RAM and 78GB of disk space.
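If you want to confirm from within Python that a GPU is actually available, you can run a quick check with PyTorch, which comes pre-installed in Colab:

# check whether a CUDA-capable GPU is visible to PyTorch
import torch

print(torch.cuda.is_available())          # True if a GPU runtime is active
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the assigned GPU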

Let us now install the transformers library.

 

Install the transformers Library

To install transformers, run the line of code below:

# install transformers
!pip install transformers

You may wonder why we prefixed “pip” with an exclamation mark (!). This enables us to run shell commands directly inside our notebook cells.
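To confirm that the installation worked, you can import the library and print its version number (the exact version you see will depend on when you run the install):

# verify the installation by importing transformers and printing its version
import transformers
print(transformers.__version__)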

So, with the transformers library installed, we can now perform text summarization. First, though, we will need a text to summarize.

 

Create Example Text Data

You can use any text data of your choice for this project. However, we will use a few paragraphs about Hugging Face copied from a TechCrunch article.

tech_crunch = """
AI startup Hugging Face has raised $235 million in a Series D funding round, as first reported by The Information, then seemingly verified by Salesforce CEO Marc Benioff on X (formerly known as Twitter). 
The tranche, which had participation from Google, Amazon, Nvidia, Intel, AMD, Qualcomm, IBM, Salesforce and Sound Ventures, values Hugging Face at $4.5 billion. 
That’s double the startup’s valuation from May 2022 and reportedly more than 100 times Hugging Face’s annualized revenue, reflecting the enormous appetite for AI and platforms to support its development.
Hugging Face offers a number of data science hosting and development tools, including a GitHub-like hub for AI code repositories, models and datasets, as well as web apps to demo AI-powered applications. 
It also provides libraries for tasks like dataset processing and evaluating models in addition to an enterprise version of the hub that supports software-as-a-service and on-premises deployments.
The company’s paid functionality includes AutoTrain, which helps to automate the task of training AI models; Inference API, which allows developers to host models without managing the underlying infrastructure; and Infinity, which is designed to increase the speed with which an in-production model processes data."""

Now that we have created the example text data, let us summarize it.
 

Summarize Text

We are going to make use of Facebook’s BART model on Hugging Face for this task. There are other pre-trained models we could use, but this one ranks highest on the leaderboard.

Therefore, to summarize the above text, run the code below:

from transformers import pipeline

# load the pre-trained BART summarization pipeline
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# summarize the example text
output = summarizer(tech_crunch, max_length=100, min_length=30, do_sample=False)

# extract the generated summary string
output[0]["summary_text"]

# Hugging Face is a cloud-based AI platform. 
# The company has raised $4.5 billion in funding since 2012. 
# It offers a number of tools for developers to build AI models.

Let’s discuss what we’ve just done. First, we imported the pipeline() function from the transformers library, which abstracts and simplifies the process of using various NLP models for specific tasks.

Then, we initialized a summarization pipeline by specifying the task as “summarization” and choosing a specific pre-trained model, in this case, “facebook/bart-large-cnn.”

Once the summarizer is set up, it is ready to generate a summary for some input text. The max_length parameter limits the length of the generated summary to 100 tokens, while min_length ensures that the summary contains at least 30 tokens.

Note that if you pass a short text to the model and set a high max_length, the model may hallucinate and generate content that does not exist in the source. So, make sure to keep max_length consistent with the length of the text you are passing to the model.
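One way to keep max_length in line with the input is to count the input tokens with the model’s tokenizer first. The sketch below uses one possible rule of thumb (capping the summary at roughly half the input length), not an official recommendation:

# sketch: derive a sensible max_length from the number of input tokens
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
n_tokens = len(tokenizer(tech_crunch)["input_ids"])
print(n_tokens)  # number of tokens in the input text

# cap the summary at roughly half the input length (arbitrary rule of thumb)
max_len = min(100, max(30, n_tokens // 2))
output = summarizer(tech_crunch, max_length=max_len, min_length=30, do_sample=False)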

The do_sample parameter is set to False, indicating that the model should not use sampling when generating the summary, which makes the output deterministic.

Finally, we extract the generated summary string from the output of the summarizer pipeline, which can then be used or displayed as needed.
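Concretely, the summarizer returns a list containing one dictionary per input text, so you can inspect the raw output and pull out the summary string like this:

# the pipeline returns a list with one dictionary per input text
print(output)  # e.g. [{'summary_text': '...'}]

# the summary itself is stored under the "summary_text" key
summary = output[0]["summary_text"]
print(summary)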

You can also try a web demo of the model on its Hugging Face model page.
 

Video, Further Resources & Summary

Do you need more explanations on how to perform text summarization using the Hugging Face transformers library in Python? Then you should have a look at the following YouTube video on the Statistics Globe YouTube channel.

In the video, we explain how to perform text summarization using the Hugging Face transformers library in Python.

 

The YouTube video will be added soon.

 

Furthermore, you could have a look at some of the other interesting AI-based tutorials on Statistics Globe.

This post has shown how to perform text summarization using the Hugging Face transformers library in Python. There are many other tasks you can perform with transformers besides text summarization. Tasks such as image classification, sentiment analysis, audio recognition, text-to-image generation, and document question answering can all be easily executed, as sketched below.
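As a small illustration, a sentiment analysis pipeline follows the same pattern as the summarization pipeline above. The example sentence is made up, and when no model is specified, the pipeline falls back to a default checkpoint for the task:

# sentiment analysis with the same pipeline() interface
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("I really enjoyed this tutorial on text summarization!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]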

I hope you enjoyed reading this tutorial! In case you have further questions, you may leave a comment below.

 

R & Python Expert Ifeanyi Idiaye

This page was created in collaboration with Ifeanyi Idiaye. You might check out Ifeanyi’s personal author page to read more about his academic background and the other articles he has written for the Statistics Globe website.

 
