
@bio-punk
bio-punk / ReadMe.md
Last active January 22, 2026 16:44
rdma card rename #nvidia #linux #ops

First, use `ibdev2netdev -v` to inspect the current RDMA-device-to-netdev name mapping

Check the affinity and make sure it is consistent across all nodes:

nvidia-smi topo -m

Rename the interfaces via a script

node17_mlx_rename.sh

Make it a service so the renaming runs on every boot
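The last two steps can be sketched as a systemd oneshot unit that runs the rename script at boot (the unit name and script path are assumptions, not from the gist):

```ini
# /etc/systemd/system/mlx-rename.service (hypothetical name and path)
[Unit]
Description=Rename Mellanox RDMA netdevs on node17
# Rename before the regular network stack configures the interfaces
After=network-pre.target
Before=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/node17_mlx_rename.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl daemon-reload && systemctl enable mlx-rename.service`.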

@bio-punk
bio-punk / env_create.sh
Last active January 15, 2026 14:41
vasp+vtst #build #x86 #vasp
#!/bin/bash
CONDA_ENV_NAME=py4vasp_dev260114
ALL_PREFIX=
PY4VASP_GIT=https://github.com/vasp-dev/py4vasp.git
PY4VASP_GIT_BRANCH=0.9.0
source /data/apps/miniforge/25.3.0-3/etc/profile.d/conda.sh
conda create -n ${CONDA_ENV_NAME} python=3.10 cmake ninja fftw -c conda-forge -y
@bio-punk
bio-punk / env_create.sh
Last active January 13, 2026 15:26
sunshine steam #x86 #linux
#!/bin/bash
# Install the GNOME desktop
sudo apt update
sudo apt-get install gnome-core -y
# sudo systemctl set-default graphical.target
# Install a dummy (virtual) display
sudo apt install gnome-remote-desktop -y
@bio-punk
bio-punk / build_deepmd.sh
Last active January 12, 2026 04:46
lammps+deepmd+phonopy #lammps #deepmd #build
#!/bin/bash
#SBATCH --gpus=1
#SBATCH
export all_prefix=/data/run01/scvi905/dev260110
export conda_env_name=phonopy_lammps_dev260110
CLIENT_NODE=ln08
LAMMPS_TAG=stable_22Jul2025_update2
LAMMPS_GIT=https://github.com/lammps/lammps.git
DEEPMD_TAG=v3.1.1 # v3.1.1 matches TensorFlow 2.19
@bio-punk
bio-punk / build_lammps.sh
Last active December 31, 2025 10:33
lammps with ml-pace in blackwell #lammps
#!/bin/bash
# User-defined variables
CONDA_ENV_NAME=lammps_22july2025_u2
PYTHON_VER=3.11
LAMMPS_VER=stable_22Jul2025_update2
ALL_PREFIX=$(pwd)
LAMMPS_SRC=${ALL_PREFIX}/lammps-${LAMMPS_VER}
CONDA_ENV_PATH=/root/shared-nvme/.conda/envs
@bio-punk
bio-punk / readme.md
Last active December 27, 2025 17:42
ssh over http proxy #run #linux

Forward SSH traffic through an HTTP/HTTPS proxy

Problem

  1. The machine has no direct internet access
  2. An HTTP/HTTPS proxy is available, of the form http://${proxy_username}:${proxy_password}@${proxy_addr}:${proxy_port}
  3. External SSH resources need to be reachable, e.g. git@github.com

Requirements

Installing corkscrew requires root privileges
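The corkscrew setup usually ends in an ssh_config `ProxyCommand` entry; a sketch, where `proxy.example.com 3128` stands in for the real proxy address/port and `~/.ssh/proxyauth` is an assumed auth file containing `proxy_username:proxy_password`:

```
# ~/.ssh/config
Host github.com
    User git
    # corkscrew <proxy_host> <proxy_port> <target_host> <target_port> [authfile]
    ProxyCommand corkscrew proxy.example.com 3128 %h %p ~/.ssh/proxyauth
```

With this in place, `ssh git@github.com` (and therefore git over SSH) is tunneled through the proxy's CONNECT method.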

@bio-punk
bio-punk / qwq_function_tool_test.py
Last active December 25, 2025 16:45
qwen3 function tool call test #LLM #vllm
from qwen_agent.llm import get_chat_model
import os
import random
import json
api_key = os.environ["APIKEY"]
server_addr = os.environ["SERVER_ADDR"]
server_port = os.environ["SERVER_PORT"]
model_name = os.environ["MODEL_NAME"]
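A function-tool test like this typically registers a tool schema and then dispatches the model's tool call back to a local function; a minimal self-contained sketch of the dispatch step (the `get_current_weather` tool and the message shape are illustrative assumptions, not from the gist):

```python
import json

# Illustrative local tool the model is allowed to call.
def get_current_weather(location: str, unit: str = "celsius") -> str:
    # A stub: a real implementation would query a weather API.
    return json.dumps({"location": location, "temperature": 25, "unit": unit})

# Map tool names advertised to the model onto local callables.
TOOLS = {"get_current_weather": get_current_weather}

def dispatch(message: dict) -> str:
    """Execute the function call found in an assistant message."""
    call = message["function_call"]
    func = TOOLS[call["name"]]
    args = json.loads(call["arguments"])  # arguments arrive as a JSON string
    return func(**args)

# A fake assistant reply carrying a tool call.
reply = {
    "role": "assistant",
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Beijing", "unit": "celsius"}',
    },
}
result = dispatch(reply)
print(result)
```

The tool result is then appended to the message history and the model is called again to produce the final answer.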
@bio-punk
bio-punk / run.sh
Created December 19, 2025 08:01
multinode run Megatron #slurm #megatron
#!/bin/bash
#SBATCH -J MEGATRON_LLAMA
#SBATCH -N 2
#SBATCH -p gpu
#SBATCH --qos=gpugpu
#SBATCH --gres=gpu:8
#SBATCH -o logs/slurm-%j.log
#SBATCH -e logs/slurm-%j.log
#SBATCH
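A multinode Megatron launch usually derives the rendezvous endpoint from the Slurm allocation; a sketch of that part (falls back to localhost outside Slurm; `GPUS_PER_NODE` and the port are assumptions):

```shell
# Derive torch.distributed rendezvous settings from the Slurm allocation.
GPUS_PER_NODE=8
MASTER_PORT=29500
NNODES=${SLURM_NNODES:-1}
if command -v scontrol >/dev/null 2>&1 && [ -n "${SLURM_JOB_NODELIST:-}" ]; then
    # First hostname in the allocation acts as the rendezvous master.
    MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n1)
else
    MASTER_ADDR=localhost
fi
WORLD_SIZE=$((GPUS_PER_NODE * NNODES))
echo "MASTER_ADDR=$MASTER_ADDR WORLD_SIZE=$WORLD_SIZE"
```

These values then feed the launcher, e.g. `torchrun --nnodes=$NNODES --nproc_per_node=$GPUS_PER_NODE --master_addr=$MASTER_ADDR --master_port=$MASTER_PORT pretrain_gpt.py ...`.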
@bio-punk
bio-punk / ReadMe.md
Last active December 11, 2025 13:23
lammps with nequip in blackwell #x86 #build #lammps #blackwell
  1. For setting up LAMMPS with NequIP models on an HPC cluster built from RTX 5090 GPUs
  2. Set up the environment
    bash env_create.sh
  3. Install Allegro
    sbatch build_allegro.sh
  4. Install LAMMPS
    sbatch build_lammps.sh
  5. Test
    sbatch test.sh
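Once built, the model is used from a LAMMPS input via the plugin's pair style; a fragment sketch (the model filename and element order are assumptions and must match the deployed NequIP model):

```
# in.lammps fragment (hypothetical model file and species)
units         metal
atom_style    atomic
pair_style    nequip
pair_coeff    * * deployed_model.pth C H O
```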
@bio-punk
bio-punk / build.sh
Last active December 6, 2025 17:34
abacus-develop #build #deepmd
#!/bin/bash
#SBATCH --gpus=1
#SBATCH
CONDA_ENV_NAME=suanpan_dev251206
ABACUS_SRC=~/run/dev251206/abacus-src
source /data/apps/miniforge/25.3.0-3/etc/profile.d/conda.sh
conda activate $CONDA_ENV_NAME
module load cuda/12.8