NExT-GPT: Any-to-Any Multimodal LLM


NExT-GPT is a novel framework that connects a large language model with multimodal encoders and diffusion decoders, allowing it to accept and produce content in any combination of text, image, video, and audio. It consists of three main stages:

– Multimodal Encoding Stage:

This stage uses state-of-the-art encoders to transform inputs from different modalities, such as images, videos, or audio, into language-like representations that can be understood by the language model.

– LLM Understanding and Reasoning Stage:

This stage employs a pre-trained language model to process the encoded inputs and perform semantic understanding and reasoning. The language model outputs text tokens as well as special “modality signal” tokens that indicate what type of content to generate and how to generate it in the next stage.

– Multimodal Generation Stage:

This stage takes the modality signal tokens from the previous stage and uses them to guide the generation of multimodal content. Depending on the signal, the stage uses different decoders to produce text, images, videos, or audio that match the input and the desired output.
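
Conceptually, the three stages chain together as in the following sketch. This is a minimal, purely illustrative Python example with dummy stand-ins: none of the function or class names come from the NExT-GPT codebase, and the real system uses ImageBind features, Vicuna, and diffusion decoders in place of the stubs.

# Purely illustrative sketch of the three-stage flow; every name below is a
# hypothetical placeholder with dummy logic, not the actual NExT-GPT API.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LLMOutput:
    text: str
    # Each signal token names a target modality and carries conditioning features.
    signal_tokens: List[Dict] = field(default_factory=list)

def encoding_stage(inputs: Dict[str, bytes]) -> List[List[float]]:
    # Stage 1: map image/video/audio inputs to language-like embeddings
    # (in the real system: ImageBind features plus a learned input projection).
    return [[0.0] * 8 for _ in inputs]  # dummy embeddings

def reasoning_stage(prompt: str, embeddings: List[List[float]]) -> LLMOutput:
    # Stage 2: the LLM reads the text plus the projected embeddings and emits
    # ordinary text tokens interleaved with special modality-signal tokens.
    return LLMOutput(
        text="Here is the sunset version of your photo.",
        signal_tokens=[{"modality": "image", "condition": embeddings}],
    )

def generation_stage(out: LLMOutput) -> Dict[str, object]:
    # Stage 3: route each signal token to the matching decoder
    # (Stable Diffusion, AudioLDM, or ZeroScope in the real system).
    results: Dict[str, object] = {"text": out.text}
    for sig in out.signal_tokens:
        results[sig["modality"]] = f"<{sig['modality']} decoded from signal>"
    return results

emb = encoding_stage({"photo.jpg": b"..."})
out = reasoning_stage("Turn this photo into a sunset scene.", emb)
print(generation_stage(out))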

Environment Preparation

To get started, clone the repository and set up the necessary environment by executing these commands:

conda create -n nextgpt python=3.8

conda activate nextgpt

# CUDA 11.6
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia

git clone https://github.com/NExT-GPT/NExT-GPT.git
cd NExT-GPT

pip install -r requirements.txt
Training NExT-GPT on your own

Preparing Pre-trained Checkpoint

NExT-GPT is built on the foundations of several outstanding models. To get the checkpoints ready, please adhere to the guidelines below.

  • ImageBind is the unified image/video/audio encoder. The pre-trained checkpoint (version huge) can be downloaded from the official ImageBind repository. Afterward, put the imagebind_huge.pth file at [./ckpt/pretrained_ckpt/imagebind_ckpt/huge].
  • Vicuna: first prepare the LLaMA weights by following the instructions in the NExT-GPT repository, then put the pre-trained Vicuna model at [./ckpt/pretrained_ckpt/vicuna_ckpt/].
  • Image Diffusion is used to generate images. NExT-GPT uses Stable Diffusion version v1-5. (will be automatically downloaded)
  • Audio Diffusion is used to produce audio content. NExT-GPT employs AudioLDM version l-full. (will be automatically downloaded)
  • Video Diffusion is used for video generation. NExT-GPT employs ZeroScope version v2_576w. (will be automatically downloaded)
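
As a quick sanity check before training, a short script along these lines can confirm that the manually prepared checkpoints sit where the code expects them. The paths below simply mirror the directories listed above; the diffusion decoders are downloaded automatically and are not checked.

from pathlib import Path

# Expected locations of the manually prepared checkpoints (mirroring the list above).
expected = [
    Path("./ckpt/pretrained_ckpt/imagebind_ckpt/huge/imagebind_huge.pth"),
    Path("./ckpt/pretrained_ckpt/vicuna_ckpt/"),  # directory holding the Vicuna weights
]

for path in expected:
    status = "OK" if path.exists() else "MISSING"
    print(f"{status:7s} {path}")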

Preparing Dataset

Please download the following datasets used for model training:

A) T-X pairs data

B) Instruction data

Precomputing Embeddings

NExT-GPT uses decoding-side alignment training to bring the representations of the signal tokens and of the captions closer together. To reduce time and memory costs, NExT-GPT precomputes the text embeddings for image, audio, and video captions with the text encoder of the corresponding diffusion models.

Please run this command before training NExT-GPT; the produced embedding file will be saved at [./data/embed].

cd ./code/
python process_embeddings.py ../data/T-X_pair_data/cc3m/cc3m.json image ../data/embed/ runwayml/stable-diffusion-v1-5

Notes on the arguments:

  • args[1]: path of the caption file;
  • args[2]: modality, which can be image, video, or audio;
  • args[3]: saving path of the embedding file;
  • args[4]: name of the corresponding pre-trained diffusion model.
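
Conceptually, the precomputation loads the text encoder of the chosen diffusion model and encodes each caption once, saving the result for the later alignment training. The snippet below is not the actual process_embeddings.py, just a hedged illustration for the image modality using the Stable Diffusion v1-5 text encoder from Hugging Face; the caption list and the output file name are made up.

import torch
from transformers import CLIPTextModel, CLIPTokenizer

# Illustration only: encode a few captions with the Stable Diffusion v1-5
# text encoder, the same model name passed to process_embeddings.py above.
model_name = "runwayml/stable-diffusion-v1-5"
tokenizer = CLIPTokenizer.from_pretrained(model_name, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_name, subfolder="text_encoder").eval()

captions = ["a dog playing in the snow", "a red sports car on a highway"]  # e.g. read from cc3m.json
with torch.no_grad():
    tokens = tokenizer(captions, padding="max_length", truncation=True,
                       max_length=tokenizer.model_max_length, return_tensors="pt")
    embeddings = text_encoder(tokens.input_ids)[0]  # (batch, seq_len, hidden)

torch.save(embeddings, "../data/embed/image_caption_embeddings_demo.pt")  # hypothetical file name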

Training NExT-GPT

First, refer to the base configuration file [./code/config/base.yaml] for the basic system settings of the overall modules.

Then start the training of NExT-GPT with this script:

cd ./code
bash scripts/train.sh

The script expands to the following command:

deepspeed --include localhost:0 --master_addr 127.0.0.1 --master_port 28459 train.py \
    --model nextgpt \
    --stage 1 \
    --dataset cc3m \
    --data_path ../data/T-X_pair_data/cc3m/cc3m.json \
    --mm_root_path ../data/T-X_pair_data/cc3m/images/ \
    --embed_path ../data/embed/ \
    --save_path ../ckpt/delta_ckpt/nextgpt/7b/ \
    --log_path ../ckpt/delta_ckpt/nextgpt/7b/log/

where the key arguments are:

  • --include localhost:0: run DeepSpeed on GPU (CUDA device) 0 of the local host.
  • --stage: training stage.
  • --dataset: the dataset name used for training the model.
  • --data_path: the path of the training data file.
  • --mm_root_path: the path of the image/video/audio files.
  • --embed_path: the path of the text embedding file.
  • --save_path: the directory where the trained delta weights are saved. This directory will be created automatically.
  • --log_path: the directory where the log file is saved.

The whole NExT-GPT training involves 3 steps (a schematic sketch of what is trained and frozen at each step follows this list):

  • Step-1: Encoding-side LLM-centric Multimodal Alignment. This stage trains the input projection layer while freezing the ImageBind encoder, the LLM, and the output projection layers.

    Just run the above train.sh script by setting:

    • --stage 1
    • --dataset x, where x varies from [cc3m, webvid, audiocap]
    • --data_path ../.../xxx.json, where xxx is the file name of the data in [./data/T-X_pair_data]
    • --mm_root_path .../.../xx, where xx varies from [images, audios, videos]

    Also refer to the running config file [./code/config/stage_1.yaml] and deepspeed config file [./code/dsconfig/stage_1.yaml] for more step-wise configurations.

  • Step-2: Decoding-side Instruction-following Alignment. This stage trains the output projection layers while freezing the ImageBind encoder, the LLM, and the input projection layers.

    Just run the above train.sh script by setting:

    • --stage 2
    • --dataset x, where x varies from [cc3m, webvid, audiocap]
    • --data_path ../.../xxx.json, where xxx is the file name of the data in [./data/T-X_pair_data]
    • --mm_root_path .../.../xx, where xx varies from [images, audios, videos]

    Also refer to the running config file [./code/config/stage_2.yaml] and deepspeed config file [./code/dsconfig/stage_2.yaml] for more step-wise configurations.

  • Step-3: Instruction Tuning. This stage instruction-tunes 1) the LLM via LoRA, 2) the input projection layer, and 3) the output projection layer on the instruction dataset.

    Just run the above train.sh script by setting:

    • --stage 3
    • --dataset and --data_path pointing to the instruction data prepared in B) above

    Also refer to the running config file [./code/config/stage_3.yaml] and deepspeed config file [./code/dsconfig/stage_3.yaml] for more step-wise configurations.
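
The schematic below illustrates which components are trainable versus frozen in each of the three steps. It uses dummy nn.Linear stand-ins rather than the real NExT-GPT modules, so the names and sizes are placeholders; in the actual code, step 3 tunes the LLM through LoRA adapters rather than full fine-tuning.

import torch.nn as nn

# Placeholder modules standing in for the real components.
model = nn.ModuleDict({
    "imagebind_encoder": nn.Linear(1024, 1024),  # frozen in every step
    "input_projection":  nn.Linear(1024, 4096),  # trained in steps 1 and 3
    "llm":               nn.Linear(4096, 4096),  # frozen; LoRA-tuned in step 3
    "output_projection": nn.Linear(4096, 768),   # trained in steps 2 and 3
})

trainable_per_step = {
    1: {"input_projection"},
    2: {"output_projection"},
    3: {"input_projection", "output_projection", "llm"},  # "llm" via LoRA adapters
}

def set_trainable(step: int) -> None:
    # Freeze everything except the components trained at this step.
    for name, module in model.items():
        requires_grad = name in trainable_per_step[step]
        for p in module.parameters():
            p.requires_grad = requires_grad
        print(f"step {step}: {name:18s} trainable={requires_grad}")

set_trainable(1)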

Running NExT-GPT System

Preparing Checkpoints

First, load the pre-trained NExT-GPT system: the base checkpoints prepared above together with the trained delta weights.

Deploying Gradio Demo

Upon completion of the checkpoint loading, you can run the demo locally via:

cd ./code
bash scripts/app.sh

The key argument is:

  • --nextgpt_ckpt_path: the path to the pre-trained NExT-GPT parameters.
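
For orientation, a local Gradio demo is wired roughly as in the stub below. This is not the repository's app script: it only echoes the prompt and merely records --nextgpt_ckpt_path, whereas the real demo loads the checkpoint and returns text together with any generated image, video, or audio.

import argparse
import gradio as gr

parser = argparse.ArgumentParser()
parser.add_argument("--nextgpt_ckpt_path", default="../ckpt/delta_ckpt/nextgpt/7b/")
args = parser.parse_args()

def respond(prompt: str) -> str:
    # A real demo would load NExT-GPT from args.nextgpt_ckpt_path and run inference.
    return f"[stub, ckpt={args.nextgpt_ckpt_path}] you said: {prompt}"

gr.Interface(fn=respond, inputs="text", outputs="text",
             title="NExT-GPT demo (stub)").launch()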
What can NExT-GPT do?

Examples of NExT-GPT outputs are shown on the project page and in the demo video linked below.
Github: https://github.com/NExT-GPT/NExT-GPT
Project Page: https://next-gpt.github.io/
Paper: https://arxiv.org/pdf/2309.05519
Youtube: https://www.youtube.com/watch?v=aqw2SCWeWD0
Citation:

@article{wu2023nextgpt,
  title={NExT-GPT: Any-to-Any Multimodal LLM},
  author={Shengqiong Wu and Hao Fei and Leigang Qu and Wei Ji and Tat-Seng Chua},
  journal = {CoRR},
  volume = {abs/2309.05519},
  year={2023}
}
