A Google Brain and Imperial College London team have built a system -- Pre-training with Extracted Gap-sentences for Abstractive SUmmarization Sequence-to-sequence, or Pegasus -- that leverages Google's Transformer architecture combined with pretraining objectives tailored for abstractive text generation. From a report: They say it achieves state-of-the-art results on 12 summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills, and that it shows "surprising" performance on low-resource summarization, surpassing previous top results on six data sets with only 1,000 examples.

As the researchers point out, text summarization aims to generate accurate and concise summaries from input documents, in contrast to extractive techniques. Rather than merely copying fragments from the input, abstractive summarization may produce novel words and cover principal information, such that the output remains linguistically fluent.

Transformers are a type of neural architecture introduced in a paper by researchers at Google Brain, Google's AI research division. Like all deep neural networks, they contain functions (neurons) arranged in interconnected layers that transmit signals from input data and slowly adjust the synaptic strength (weights) of each connection -- that's how all AI models extract features and learn to make predictions. But Transformers uniquely have attention: every output element is connected to every input element, and the weightings between them are calculated dynamically.
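The "dynamically calculated weightings" between inputs and outputs can be illustrated with a minimal NumPy sketch of scaled dot-product self-attention. This is a simplified, hypothetical implementation for intuition only, not Google's code; the function name and toy dimensions are assumptions, and real Transformer layers add learned projections, multiple heads, and masking.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute attention: every output row is a dynamically weighted
    mixture of all value rows, with weights derived from query-key similarity.

    Q, K, V: (seq_len, d) arrays of queries, keys, and values.
    Returns (attended values, attention weight matrix).
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax: each row sums to 1
    return weights @ V, weights

# Toy example: 3 tokens with 4-dimensional embeddings (illustrative values only)
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(x, x, x)     # self-attention: Q = K = V
```

Because the weights come from the inputs themselves rather than fixed connections, every output position attends to every input position with a strength recomputed for each new sequence.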
Read more of this story at Slashdot.