Thursday 6 July 2023

University of Toronto Researchers Unveil scGPT: A Groundbreaking Model for Single-Cell Biology Powered by a Generative Pre-Trained Transformer Spanning a Vast Repository of 33 Million Cells

Natural Language Processing and Computer Vision: Unleashing the Power of Generative Pre-trained Models

In the world of artificial intelligence, generative pre-trained models have made remarkable strides in fields such as natural language processing (NLP) and computer vision. These models, which combine large-scale datasets with pre-trained transformers, have proven to be a viable strategy for building foundation models. Now, researchers are exploring how this approach can revolutionize research in cellular biology and genetics by drawing parallels between natural language and biological constructs such as genes and cells.

One groundbreaking study focuses on the development of scGPT, a foundation model for single-cell biology. By combining a generative pre-trained transformer with a repository of over 33 million cells, scGPT can efficiently extract key biological insights about genes and cells. The pre-trained model can be adapted for various applications, including gene network inference, genetic perturbation prediction, and multi-batch integration.

The growing field of single-cell RNA sequencing (scRNA-seq) has paved the way for advances in understanding cellular heterogeneity, disease pathogenesis, and personalized therapeutic approaches. However, with the exponential growth of sequencing data, it is crucial to develop methods that can effectively leverage and adapt to these new trends. Generative pre-training of foundation models has emerged as an effective strategy to tackle this challenge.

Generative pre-training has already shown tremendous success in domains such as natural language generation (NLG) and computer vision. Models like DALL-E 2 and GPT-4 are built by pre-training transformers on large-scale datasets, which makes them easily adaptable to specific downstream tasks. Moreover, these pre-trained generative models routinely outperform models custom-trained from scratch for a single task.

Researchers took cues from NLG self-supervised pre-training and applied the idea to single-cell sequencing data. They found that the self-attention transformer, a framework originally designed for modeling input tokens in text, is highly effective for modeling single-cell sequencing data as well. Leveraging generative pre-training on over 33 million cells, they developed scGPT as the first single-cell foundation model. Their approach addresses both methodological and engineering challenges, using an in-memory data structure for fast access and a modified transformer architecture that learns cell and gene representations simultaneously.
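To make the text-to-cells analogy concrete, here is a minimal, hypothetical PyTorch sketch of the core idea: each gene is treated as a token with a learned identity embedding, its (binned) expression level is embedded separately, and the two are summed before self-attention relates genes within a cell. All names, vocabulary sizes, and dimensions below are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch only -- not the scGPT implementation.
import torch
import torch.nn as nn

class SingleCellTransformer(nn.Module):
    def __init__(self, n_genes=60000, n_expr_bins=51,
                 d_model=512, n_heads=8, n_layers=2):
        super().__init__()
        # Each gene gets a learned identity embedding, analogous to a word token.
        self.gene_embed = nn.Embedding(n_genes, d_model)
        # Expression values are discretized into bins and embedded as well.
        self.expr_embed = nn.Embedding(n_expr_bins, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, gene_ids, expr_bins):
        # gene_ids, expr_bins: (batch, n_tokens) integer tensors.
        # Summing the two embeddings gives each token a gene identity and an
        # expression level; self-attention then mixes information across genes.
        x = self.gene_embed(gene_ids) + self.expr_embed(expr_bins)
        return self.encoder(x)  # (batch, n_tokens, d_model)

# One "cell" is a set of (gene, expression-bin) tokens.
model = SingleCellTransformer()
genes = torch.randint(0, 60000, (2, 1200))  # 1,200 genes per cell (toy sizes)
bins = torch.randint(0, 51, (2, 1200))
emb = model(genes, bins)                    # contextual gene embeddings
```

During pre-training, a generative objective, for example predicting held-out expression values from the rest of the cell, would drive these embeddings; the sketch omits that loss for brevity.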

One of the significant advantages of scGPT is its support for transfer learning to a variety of downstream tasks. By employing the “pre-training universally, fine-tuning on demand” approach, scGPT achieves state-of-the-art performance in cell type annotation, genetic perturbation prediction, batch correction, and multi-omic integration. Moreover, it is, according to the authors, the only foundation model capable of incorporating scATAC-seq data and other single-cell omics.
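Continuing the hypothetical sketch above, fine-tuning for a task like cell type annotation could be as simple as pooling the encoder's token outputs into one cell vector and training a small classification head on labeled cells. The label count, pooling choice, and optimizer settings here are assumptions for illustration, not the paper's exact recipe.

```python
# Illustrative fine-tuning sketch, reusing `model`, `genes`, and `bins`
# from the previous example -- not the scGPT training recipe.
import torch
import torch.nn as nn

class CellTypeClassifier(nn.Module):
    def __init__(self, encoder, d_model=512, n_cell_types=30):
        super().__init__()
        self.encoder = encoder           # the pre-trained transformer
        self.head = nn.Linear(d_model, n_cell_types)

    def forward(self, gene_ids, expr_bins):
        tokens = self.encoder(gene_ids, expr_bins)
        cell_emb = tokens.mean(dim=1)    # mean-pool tokens into one cell vector
        return self.head(cell_emb)       # logits over cell types

clf = CellTypeClassifier(model)
optimizer = torch.optim.AdamW(clf.parameters(), lr=1e-4)
labels = torch.randint(0, 30, (2,))      # toy cell type labels
loss = nn.CrossEntropyLoss()(clf(genes, bins), labels)
loss.backward()
optimizer.step()
```

Other downstream tasks would swap in a different head or objective while reusing the same pre-trained encoder weights, which is what makes the pre-train-once approach economical.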

Through extensive studies, researchers have demonstrated that using more data in the pre-training phase leads to better pre-trained embeddings and higher performance on downstream tasks. This scaling law suggests that as more sequencing data becomes available, foundation models like scGPT can continually improve and advance our understanding of cell biology.

To foster collaboration and accelerate research in the field, the scGPT models and workflow have been made publicly available. This allows researchers to strengthen and build upon the foundations established by scGPT. The model’s efficacy in deciphering single-cell data, capturing gene networks, and improving accuracy in subsequent tasks showcases its potential to drive significant advancements in single-cell biology.

In conclusion, generative pre-trained models like scGPT have immense potential in the fields of cellular biology and genetics. By combining language and biology, these models provide crucial insights into gene-gene interactions, cell types, and disease pathogenesis. The availability of pre-trained foundation models and their continuous improvement as more data becomes available will undoubtedly shape the future of research in these fields.

Editor’s Notes

Generative pre-trained models have proven their worth in various domains, and scGPT is a prime example of their potential in single-cell biology. This groundbreaking study opens up new avenues for research and offers a powerful tool for understanding gene-gene interactions and cellular heterogeneity. By making the scGPT models and workflow publicly available, researchers can collaborate and accelerate their progress in this exciting field. For more AI news and updates, visit GPT News Room.

Link to GPT News Room: [GPT News Room](https://gptnewsroom.com)
