# SSH into WSL2 on Windows 11
1. Open a new port for SSH on Windows 11 (edit C:\ProgramData\ssh\sshd_config).
2. In the Windows 11 firewall, allow the port added in step 1 (see the sketch after this list).
3. On Windows 11, enter WSL2 and run ifconfig to get its IP address (the number after "inet" on the second line of the output).
4. In a Windows 11 cmd window, run:
netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=<port opened in step 1> connectaddress=<IP address from step 3> connectport=22
5. SSH to the port opened in step 1 and you are in WSL2. Caveats: (1) a WSL window must stay open on Windows 11 while connecting, otherwise the connection fails; (2) log in with the WSL2 username and the WSL2 password, but the Windows 11 machine's IP address.
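A minimal sketch of steps 1, 2 and 4 together. The port 2222 is my placeholder, pick any free port; also note that to connect from another machine, listenaddress=0.0.0.0 is needed rather than 127.0.0.1:

# C:\ProgramData\ssh\sshd_config -- add the new port
Port 2222

:: in an elevated Windows 11 cmd: firewall rule, then the port proxy
netsh advfirewall firewall add rule name="wsl2-ssh" dir=in action=allow protocol=TCP localport=2222
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=2222 connectaddress=<IP from step 3> connectport=22

:: then, from any machine:
ssh <wsl2-user>@<win11-ip> -p 2222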
https://www.zhihu.com/question/618935377
# /etc/hosts
140.82.112.4 www.github.com
After every upgrade, Linux leaves the old kernels behind; a one-liner to remove them on CentOS (--latest-limit=-2 keeps the two newest kernels):
dnf remove $(dnf repoquery --installonly --latest-limit=-2)
Data analysis: classify and summarize a pile of known data to produce statistics such as maximum, minimum, mean, and sum.
It can only operate on known data and cannot predict the characteristics of new data, hence machine learning.
Machine learning: given a set of known data with feature columns and a result column, choose an algorithm such as linear regression or logistic regression (essentially a formula) and train it. Training really means running a pile of functions and comparing results to find the pattern, i.e. to determine the values of the formula's parameters. When new data comes in, the desired result can be predicted by simply substituting the input into the formula.
Machine learning can only handle relatively simple tasks, such as predicting next month's sales or classifying a piece of text as positive or negative. For complex tasks such as dialogue, which is really predicting a plausible output text (the answer) from an input text, deep learning was introduced.
Deep learning: given a pile of data with just two text columns, such as question and answer, choose an algorithm, which is really the type of neural network, e.g. a convolutional neural network (CNN), a recurrent neural network (RNN), or a Transformer, and train it. Again, training means running a pile of functions, comparing results, and thereby determining the values of the formula's parameters.
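As a concrete illustration of "determining the values of the formula's parameters", a minimal linear regression sketch in Python (the numbers are made up):

import numpy as np

# known data: a feature column x and a result column y
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 4.2, 5.9, 8.1])

# "learning": fit the parameters a, b of the formula y = a*x + b
a, b = np.polyfit(x, y, 1)

# "prediction": substitute a new input into the formula
print(a * 5.0 + b)  # approximately 10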
The operating system is CentOS 9.
First, install the NVIDIA driver:
# switch to the text console
sudo systemctl set-default multi-user.target
sudo reboot
sh NVIDIA-Linux-x86_64-550.107.02.run
# switch back to the graphical interface
sudo systemctl set-default graphical.target
sudo reboot
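After the reboot, a standard check (my addition, not in the original) is to run nvidia-smi and confirm the driver version and both GPUs are listed:
nvidia-smi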
Install Docker:
yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
sed -i 's+https://download.docker.com+https://mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
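One step worth adding here (standard systemd commands, not in the original post): start the Docker daemon and enable it at boot before configuring the runtime below:
sudo systemctl enable --now docker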
sudo nvidia-ctk runtime configure --runtime=docker
Change the registry mirror address:
[paul@paul-pc ~]$ cat /etc/docker/daemon.json
{
    "registry-mirrors": [
        "http://xxx.xxx.xxx"
    ],
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}
Install the CUDA toolkit (note: the runfile below is the CUDA installer; the NVIDIA Container Toolkit that provides the nvidia-ctk command used above is a separate package, sketched after it):
sh cuda_12.6.0_560.28.03_linux.run
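A sketch of installing the NVIDIA Container Toolkit itself, following NVIDIA's documented repository setup (these commands are my addition, not from the original post):

curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo | \
    sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
sudo yum install -y nvidia-container-toolkit
# restart docker so the nvidia runtime from daemon.json is picked up
sudo systemctl restart docker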
Verify:
sudo docker run --rm -it --gpus all ubuntu nvidia-smi
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.107.02 Driver Version: 550.107.02 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 2080 Ti Off | 00000000:01:00.0 On | N/A |
| 62% 36C P8 4W / 260W | 256MiB / 22528MiB | 1% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 2080 Ti Off | 00000000:02:00.0 Off | N/A |
| 64% 35C P8 5W / 260W | 9MiB / 22528MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 2657 G /usr/libexec/Xorg 99MiB |
| 0 N/A N/A 2735 G /usr/bin/gnome-shell 38MiB |
| 0 N/A N/A 3502 G /usr/lib64/firefox/firefox 111MiB |
| 1 N/A N/A 2657 G /usr/libexec/Xorg 4MiB |
+-----------------------------------------------------------------------------------------+
Streamlit: Python server-side scripts that generate the HTML for you, with no JS or CSS to write; a good fit for AI projects.
Code to display text:
st.text('Fixed width text')
st.markdown('_Markdown_') # see #*
st.caption('Balloons. Hundreds of them')
st.latex(r''' e^{i\pi} + 1 = 0 ''')
st.write('Most objects') # df, err, func, keras!
st.write(['st', 'is <', 3]) # see *
st.title('My title')
st.header('My header')
st.subheader('My sub')
st.code('for i in range(8): foo()')
# * optional kwarg unsafe_allow_html = True
Form widgets:
st.button('Hit me')
st.data_editor('Edit data', data)
st.checkbox('Check me out')
st.radio('Pick one:', ['nose','ear'])
st.selectbox('Select', [1,2,3])
st.multiselect('Multiselect', [1,2,3])
st.slider('Slide me', min_value=0, max_value=10)
st.select_slider('Slide to select', options=[1,'2'])
st.text_input('Enter some text')
st.number_input('Enter a number')
st.text_area('Area for textual entry')
st.date_input('Date input')
st.time_input('Time entry')
st.file_uploader('File uploader')
st.download_button('On the dl', data)
st.camera_input("一二三,茄子!")
st.color_picker('Pick a color')
Display data in tables:
st.dataframe(my_dataframe)
st.table(data.iloc[0:10])
st.json({'foo':'bar','fu':'ba'})
st.metric(label="Temp", value="273 K", delta="1.2 K")
Progress bars and status:
# Show a spinner during a process
>>> with st.spinner(text='In progress'):
>>>     time.sleep(3)
>>>     st.success('Done')
# Show and update progress bar
>>> bar = st.progress(50)
>>> time.sleep(3)
>>> bar.progress(100)
st.balloons()
st.snow()
st.toast('Mr Stay-Puft')
st.error('Error message')
st.warning('Warning message')
st.info('Info message')
st.success('Success message')
st.exception(e)
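A minimal runnable sketch tying a few of these calls together (the file name app.py is my choice; launch it with streamlit run app.py):

import time
import pandas as pd
import streamlit as st

st.title('Demo')

# a small made-up table for st.dataframe
df = pd.DataFrame({'x': [1, 2, 3], 'y': [4, 5, 6]})
st.dataframe(df)

name = st.text_input('Enter some text')
if st.button('Hit me'):
    with st.spinner(text='In progress'):
        time.sleep(1)
    st.success(f'Done, {name}')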
These days I needed to push code to GitHub and found that the old password-based authentication has been withdrawn; it has to be replaced with an SSH key.
1. Generate an SSH key
ssh-keygen
# this produces the files ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
# if id_rsa was copied over from another machine, run chmod 400 ~/.ssh/id_rsa to fix its permissions
2. Create a new repository on GitHub
https://github.com/paulwong888/python-ai
3. Import the public key into GitHub
Open your SSH public key file, usually at ~/.ssh/id_rsa.pub, and copy its contents. Then log in to your GitHub account, go to Settings > SSH and GPG keys, click "New SSH key", paste the public key, and click "Add SSH key".
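A quick check that the key is accepted (standard GitHub test, my addition):

ssh -T git@github.com
# expected: "Hi <username>! You've successfully authenticated, but GitHub does not provide shell access."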
4. Clone the repository
git config --global user.name "John Doe"
git config --global user.email johndoe@example.com
git clone git@github.com:paulwong888/python-ai
5. Import the project into Eclipse
The clone in the previous step already created a local repository, so use Import -> Git -> Projects from Git -> Existing local repository and select the python-ai/.git folder.
From there on, everything works exactly as it did with password authentication.
The previous post ended with a merged, fine-tuned model; now we set up a chatbot so the model gets a web UI.
1. In /etc/profile, set the environment variable for ollama's model storage path:
export OLLAMA_MODELS=/root/autodl-tmp/models/ollama
2. Install ollama
curl -fsSL https://ollama.com/install.sh | sh
3. Start ollama (see the commands below)
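The install script registers a systemd service; either of these is the standard way to start the server (my addition, the original gives no command):

sudo systemctl start ollama
# or, in the foreground:
ollama serve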
4. Create the configuration file for the ollama model, Modelfile
# set the base model
FROM /root/.ollama/llamafactory-export/saves/llama3-8b/lora/docker-commnad-nlp/export
# set custom parameter values
PARAMETER temperature 1
PARAMETER num_keep 24
PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>
PARAMETER stop <|reserved_special_token
# set the model template
TEMPLATE """
{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>
"""
# set the system message
SYSTEM You are llama3 from Meta, customized and hosted @ Paul Wong (http://paulwong88.tpddns.cn).
# set Chinese lora support
#ADAPTER /root/.ollama/models/lora/ggml-adapter-model.bin
The build script, create-ollama-image-docker-command-nlp.sh:
BIN_PATH=$(cd `dirname $0`; pwd)
cd $BIN_PATH/
pwd
ollama create llama3-docker-commnad-nlp:paul -f Modelfile
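To confirm the model was registered (standard ollama command, my addition):
ollama list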
5. Run the model
ollama run llama3-docker-commnad-nlp:paul
Open-source models from the likes of Meta, such as llama3, are pre-trained on general-purpose data, which can make them a poor fit for a company that wants to use them: the model knows nothing about the company's data. That is what fine-tuning is for.
Feed the model a large amount of data, 10,000 records as a starting point, and it becomes familiar with the company's data, after which it can serve all kinds of conversational scenarios.
1. Clone and install the LLAMA FACTORY library, install-llamafactory.sh
BIN_PATH=$(cd `dirname $0`; pwd)
cd $BIN_PATH/../
pwd
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e ".[torch,metrics,bitsandbytes,modelscope]"
2. Set environment variables
export USE_MODELSCOPE_HUB=1 # use the modelscope model hub instead of huggingface
export CUDA_VISIBLE_DEVICES=0 # select which GPU to use
export HF_ENDPOINT=https://hf-mirror.com # mirror endpoint for huggingface
export MODELSCOPE_CACHE=/root/autodl-tmp/models/modelscope # where modelscope saves downloaded models
export LLAMAFACTORY_HOME=/root/autodl-tmp/LLaMA-Factory
3. Prepare the data
# add this entry to data/dataset_info.json
"docker_command_NL": {
"hf_hub_url": "MattCoddity/dockerNLcommands"
},
Put the training data, MattCoddity/dockerNLcommands.json, into the data directory.
The data format is:
[
    {
        "input": "Give me a list of containers that have the Ubuntu image as their ancestor.",
        "instruction": "translate this sentence in docker command",
        "output": "docker ps --filter 'ancestor=ubuntu'"
    },
    ...
]
4. Train the model
The training parameter file, llama3_lora_sft_docker_command.yaml:
### model
# modelscope model id
model_name_or_path: LLM-Research/Meta-Llama-3-8B-Instruct
#huggingface model id
#model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
### dataset
dataset: docker_command_NL
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: /root/autodl-tmp/my-test/saves/llama3-8b/lora/sft/docker-commnad-nlp/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 4
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
The training script, lora-train-docker-command.sh:
BIN_PATH=$(cd `dirname $0`; pwd)
cd $BIN_PATH/
pwd
cd $LLAMAFACTORY_HOME
pwd
llamafactory-cli train $BIN_PATH/conf/llama3_lora_sft_docker_command.yaml
Running this script starts the training.
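Before merging, the freshly trained adapter can be tried out interactively with llamafactory-cli's chat mode; a sketch (the yaml file name is my own invention, and the keys mirror the export config below):

# conf/llama3_lora_chat_docker_command.yaml (hypothetical)
model_name_or_path: LLM-Research/Meta-Llama-3-8B-Instruct
adapter_name_or_path: /root/autodl-tmp/my-test/saves/llama3-8b/lora/sft/docker-commnad-nlp/sft
template: llama3
finetuning_type: lora

llamafactory-cli chat conf/llama3_lora_chat_docker_command.yaml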
5. Merge the model
The parameter file for merging, llama3_lora_export_docker_command.yaml:
### model
# modelscope model id
model_name_or_path: LLM-Research/Meta-Llama-3-8B-Instruct
#huggingface model id
#model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: /root/autodl-tmp/my-test/saves/llama3-8b/lora/docker-commnad-nlp/sft
template: llama3
export_dir: /root/autodl-tmp/my-test/saves/llama3-8b/lora/docker-commnad-nlp/export
finetuning_type: lora
export_size: 2
export_device: gpu
export_legacy_format: False
The merge script, lora-export-docker-command.sh:
BIN_PATH=$(cd `dirname $0`; pwd)
cd $BIN_PATH/
pwd
llamafactory-cli export conf/llama3_lora_export_docker_command.yaml