Researchers at Boston University and Google have developed a method for illustrating articles with visual summaries.


Recent advances in generative modeling have opened the door to many tasks that were previously only imaginable. Generative models can be trained to learn powerful representations that are used in fields like text-to-image and image-to-text translation.

Recent releases of the Stable Diffusion API and DALL-E have generated a lot of excitement about text-to-image generative models, which can produce complex, stunning images from descriptive text input, much like running a web search.

In response to the growing interest in the reverse direction (i.e., image-to-text), several studies have attempted to generate captions from input images. Most of these methods assume a one-to-one correspondence between images and captions. In practice, however, multiple images can be paired with a single lengthy text narrative, such as the photos within a news story. Such settings call for illustrative captions, like "travel" or "vacation," rather than literal captions, like "airplane flight."
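To make the contrast concrete, here is a minimal sketch (hypothetical data structures, not the researchers' actual code) of the two pairing assumptions: the classic one-caption-per-image setup versus a news story in which several images share one long narrative:

```python
from dataclasses import dataclass, field

@dataclass
class CaptionedImage:
    """Classic captioning assumption: one literal caption per image."""
    image_path: str
    caption: str

@dataclass
class Story:
    """The setting studied here: many images share one long narrative,
    and a single illustrative theme describes them all."""
    narrative: str
    theme: str
    image_paths: list[str] = field(default_factory=list)

# One-to-one, literal caption
literal = CaptionedImage("plane.jpg", "airplane flight")

# Many-to-one: several photos illustrate a single news story,
# so a theme like "travel" fits better than any literal caption
story = Story(
    narrative="A family spends the summer visiting three countries...",
    theme="travel",
    image_paths=["plane.jpg", "beach.jpg", "market.jpg"],
)

assert len(story.image_paths) > 1  # no 1:1 image-caption pairing here
```

The point of the sketch is only structural: a method built on `CaptionedImage` pairs cannot directly handle a `Story`, where the supervision signal is a shared narrative rather than per-image labels.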
