---
license: apache-2.0
tags:
- medical
datasets:
- biomed
---
# BioMedGPT-LM-7B
In this repo, we present BioMedGPT-LM, a medical language model that is the first commercially friendly GPT model in the biomedical domain and has demonstrated superior performance over existing LLMs of the same parameter size. We are releasing BioMedGPT-LM-7B, a 7B-parameter model obtained by fine-tuning Llama2-7B-Chat on PubMed Central (PMC) abstracts and full-text papers from the S2ORC corpus.
## Training Details
The model was trained with the following hyperparameters (a hedged configuration sketch follows the list):
- Epochs: 5
- Batch size: 192
- Cutoff length: 2048
- Learning rate: 2e-5
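
Below is a hedged sketch of how these reported hyperparameters might map onto a standard Hugging Face fine-tuning setup. The per-device batch size, gradient-accumulation split, and bf16 precision are illustrative assumptions (only the aggregate values above are reported), and dataset handling is omitted; this is not the authors' actual training code.

```python
# Hedged sketch: mapping the reported hyperparameters onto Hugging Face
# TrainingArguments. The per-device/accumulation split and bf16 precision
# are assumptions; only the aggregate values in the list above are reported.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="biomedgpt-lm-7b-finetune",
    num_train_epochs=5,              # Epochs: 5
    per_device_train_batch_size=4,   # assumption: one way to reach a global
    gradient_accumulation_steps=48,  # batch of 192 (4 x 48) on a single device
    learning_rate=2e-5,              # Learning rate: 2e-5
    bf16=True,                       # assumption: mixed-precision training
)

# "Cutoff length: 2048" corresponds to truncating tokenized sequences, e.g.:
# tokenizer(text, truncation=True, max_length=2048)
```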
## Overview
Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
## Model Developers
PharMolix
## How to Use
BioMedGPT-LM-7B is part of BioMedGPT-10B, an open-source version of BioMedGPT. BioMedGPT is a multimodal generative pre-trained transformer (GPT) for biomedicine that bridges the natural language modality and diverse biomedical data modalities via a single GPT model. BioMedGPT aligns different biological modalities with the text modality through BioMedGPT-LM. The details of BioMedGPT-10B and BioMedGPT-LM-7B can be found in the technical report.
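
A minimal sketch of text generation with the released weights via Hugging Face Transformers is shown below. The repository id `PharMolix/BioMedGPT-LM-7B` and the example prompt are assumptions; substitute the id of this repo if it differs.

```python
# Minimal generation sketch using Hugging Face Transformers.
# The repo id below is an assumption; replace it with this repository's id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PharMolix/BioMedGPT-LM-7B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so a 7B model fits on one GPU
    device_map="auto",
)

prompt = "Aspirin inhibits platelet aggregation by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```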
## Intended Use Cases
BioMedGPT-LM-7B is intended for research on biomedical natural language processing, such as question answering and text generation over biomedical literature.
## Out-of-scope Uses
The model is not intended for clinical decision-making or any use that requires medically validated outputs.
## Research Paper
"BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine"
## GitHub
https://github.com/BioFM/OpenBioMed
## Limitations
As with other large language models, BioMedGPT-LM-7B may generate inaccurate, incomplete, or biased content. Its outputs have not been validated for clinical use and should not substitute for professional medical judgment.