InternLM is a multilingual base model with 100 billion parameters, trained on more than a trillion tokens of data. It has broad knowledge and performs well on tasks that require strong reasoning, such as reading comprehension and inference in both Chinese and English, and it also scores well on a variety of human-designed examinations. On top of this base, InternLM is tuned with high-quality human-annotated dialogue data and techniques such as RLHF, which allows it to follow complex instructions in human conversations and to generate responses that are ethical and aligned with human values.
InternLM-7B
InternLM Performance Evaluation
Developers used the open-source evaluation tool OpenCompass to assess InternLM's capabilities in five areas: discipline, language, knowledge, inference, and comprehension. Selected assessment outcomes, along with the full set of results, can be viewed on the OpenCompass leaderboard.
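Evaluations of this kind are driven by Python config files in OpenCompass. As a rough illustration, a minimal config might look like the sketch below; the exact module paths (`ceval_gen`, `hf_internlm_7b`) vary across OpenCompass versions and are assumptions here, not verified names:

```python
# Hypothetical OpenCompass config (module paths are assumptions; check
# your OpenCompass checkout for the actual dataset/model config names).
from mmengine.config import read_base

with read_base():
    # Reuse a dataset definition and a model definition shipped with OpenCompass.
    from .datasets.ceval.ceval_gen import ceval_datasets
    from .models.hf_internlm.hf_internlm_7b import models

datasets = ceval_datasets
```

A config like this is then passed to OpenCompass's `run.py` entry point, which evaluates every model in `models` against every dataset in `datasets` and aggregates the scores.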
Model Zoo
InternLM 7B and InternLM Chat 7B, both trained with the InternLM framework, have been made publicly available. The model weights are provided in two formats: the Transformers format, which can be loaded directly with the Transformers library, and the InternLM format, which can be loaded with InternLM itself for further pre-training or human-preference alignment training. The table below lists the download links, and a short download sketch follows it.
Model | InternLM Format Weight Download Link | Transformers Format Weight Download Link
---|---|---
InternLM 7B | | 🤗internlm/internlm-7b
InternLM Chat 7B | | 🤗internlm/internlm-chat-7b
InternLM Chat 7B 8k | | 🤗internlm/internlm-chat-7b-8k
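To fetch the Transformers-format weights ahead of time, one option is the `huggingface_hub` client; a minimal sketch, assuming the repository ids from the table above:

```python
# Minimal sketch: download Transformers-format weights locally.
# Swap in internlm/internlm-chat-7b or internlm/internlm-chat-7b-8k as needed.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="internlm/internlm-7b")
print(f"Weights downloaded to: {local_dir}")
```

Passing the returned local path to `from_pretrained` then avoids re-downloading the weights on each run.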
Import from Transformers
The following code snippet shows how to load the InternLM Chat 7B model with Transformers:
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).cuda()
>>> model = model.eval()
>>> response, history = model.chat(tokenizer, "hello", history=[])
>>> print(response)
Hello! How can I help you today?
>>> response, history = model.chat(tokenizer, "please provide three suggestions about time management", history=history)
>>> print(response)
Sure, here are three tips for effective time management:
1. Prioritize tasks based on importance and urgency: Make a list of all your tasks and categorize them into "important and urgent," "important but not urgent," and "not important but urgent." Focus on completing the tasks in the first category before moving on to the others.
2. Use a calendar or planner: Write down deadlines and appointments in a calendar or planner so you don't forget them. This will also help you schedule your time more effectively and avoid overbooking yourself.
3. Minimize distractions: Try to eliminate any potential distractions when working on important tasks. Turn off notifications on your phone, close unnecessary tabs on your computer, and find a quiet place to work if possible.
Remember, good time management skills take practice and patience. Start with small steps and gradually incorporate these habits into your daily routine.
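On GPUs with limited memory, the model can also be loaded in half precision rather than the default float32. A minimal variation on the snippet above (`torch_dtype` is a standard Transformers argument, not InternLM-specific):

```python
# Load the weights in float16 to roughly halve GPU memory use.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained(
    "internlm/internlm-chat-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "internlm/internlm-chat-7b",
    torch_dtype=torch.float16,  # half-precision weights
    trust_remote_code=True,
).cuda().eval()
```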
Dialogue
To interact with the InternLM Chat 7B model through a web frontend, install the dependencies and run the demo script from a clone of the InternLM repository (a simplified sketch of such a script follows the commands):
pip install streamlit==1.24.0
pip install transformers==4.30.2
streamlit run web_demo.py
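For a sense of what `web_demo.py` does, here is a stripped-down, hypothetical stand-in (not the actual script), built on the chat elements Streamlit introduced in the 1.24.0 release pinned above:

```python
# Hypothetical, simplified stand-in for web_demo.py (illustration only).
import streamlit as st
from transformers import AutoTokenizer, AutoModelForCausalLM

@st.cache_resource  # load the model once and reuse it across reruns
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(
        "internlm/internlm-chat-7b", trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        "internlm/internlm-chat-7b", trust_remote_code=True).cuda().eval()
    return tokenizer, model

tokenizer, model = load_model()

# Keep the multi-turn history in the Streamlit session state.
if "history" not in st.session_state:
    st.session_state.history = []

if prompt := st.chat_input("Ask InternLM something"):
    with st.chat_message("user"):
        st.write(prompt)
    response, st.session_state.history = model.chat(
        tokenizer, prompt, history=st.session_state.history)
    with st.chat_message("assistant"):
        st.write(response)
```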
InternLM Deployment
LMDeploy enables one-click deployment of InternLM.
- First, install LMDeploy:
python3 -m pip install lmdeploy
- For quick deployment, use the following command; the three arguments are the model name, the path to the weights, and the weight format (hf denotes Transformers-format weights):
python3 -m lmdeploy.serve.turbomind.deploy InternLM-7B /path/to/internlm-7b/model hf
- To interact with the model after deployment, connect a client to the serving endpoint with this command:
python3 -m lmdeploy.serve.client {server_ip_address}:33337