zezhishao
@zezhishao
zezhishao / VSCode Copilot代理.md
Last active March 9, 2026 09:20
VSCode Copilot proxy

Step 1: Prepare the local proxy environment

First, your local machine must have an HTTP/S proxy service running.

  • If you use a common network tool (such as v2ray, Clash, or Charles), it usually opens a local port (commonly 7890 or 1080).
  • Confirm your local proxy's port number (the rest of this guide assumes port 7890).

Step 2: Configure SSH remote forwarding

We need to configure SSH so that traffic sent to a port on the remote server is forwarded back to the local proxy port.
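As a sketch of step 2, the remote forward can be set up with a one-off command or persistently in ~/.ssh/config (the host alias myserver, the user name, and port 7890 are assumptions; substitute your own):

```shell
# One-off: while this session is open, port 7890 on the remote server
# forwards back to the local proxy listening on localhost:7890.
ssh -R 7890:localhost:7890 user@myserver

# Persistent equivalent in ~/.ssh/config:
# Host myserver
#     HostName server.example.com
#     User user
#     RemoteForward 7890 localhost:7890
```

On the server, point tools at the forwarded port, e.g. export https_proxy=http://127.0.0.1:7890.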

@zezhishao
zezhishao / CPU占用过高.py
Last active January 7, 2022 12:33
PyTorch CPU usage too high (50% or 100%) (PyTorch CPU usage high)

import torch

torch.set_num_threads(1)  # cap intra-op parallelism to a single thread
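As a related measure (an assumption, not from the gist): capping the OpenMP/MKL thread pools via environment variables before importing torch has a similar effect:

```python
import os

# Must be set before `import torch`: these cap the thread pools that
# OpenMP and MKL create, which otherwise spin up one worker per core.
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"

# After importing torch, the gist's call applies on top of this:
# import torch
# torch.set_num_threads(1)
```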
@zezhishao
zezhishao / time_count_via_decorator.py
Created October 30, 2021 16:50
Timing a function's runtime with a Python decorator
import time

def clock(func):
    def clocked(*args, **kw):
        t0 = time.perf_counter()
        result = func(*args, **kw)
        elapsed = time.perf_counter() - t0
        name = func.__name__
        arg_str = ', '.join(repr(arg) for arg in args)
        print('[%0.8fs] %s(%s)' % (elapsed, name, arg_str))
        return result
    return clocked
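A self-contained usage sketch of the decorator (the function snooze and its sleep time are illustrative):

```python
import time

def clock(func):
    """Print elapsed wall time on each call, then return the wrapped result."""
    def clocked(*args, **kw):
        t0 = time.perf_counter()
        result = func(*args, **kw)
        elapsed = time.perf_counter() - t0
        arg_str = ', '.join(repr(arg) for arg in args)
        print('[%0.8fs] %s(%s)' % (elapsed, func.__name__, arg_str))
        return result
    return clocked

@clock
def snooze(seconds):
    time.sleep(seconds)
    return seconds

value = snooze(0.01)  # prints something like [0.01012345s] snooze(0.01)
```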
@zezhishao
zezhishao / example.py
Created October 30, 2021 16:49
torch.unsqueeze increases GPU memory usage in PyTorch
import torch as th
device = th.device("cuda:0")
data1 = th.randn(207, 621).to(device)
data2 = th.randn(64, 13, 621, 32).to(device)
# situation 1
data3 = th.matmul(data1, data2) # 1175MiB / 11019MiB
# situation 2
# data4 = th.matmul(data1.unsqueeze(0), data2) # 1519MiB / 11019MiB
# equal = th.all(data3 == data4) # True
@zezhishao
zezhishao / batch_diag.md
Created October 30, 2021 16:47
Pytorch Batch Diagonal

In PyTorch, torch.diag only supports non-batched data. The following implements a batched version of diag:

import torch

def matrix_diag(diagonal):
    """
    diagonal: [batch_size, N]
    Returns: [batch_size, N, N], each slice a diagonal matrix.
    """
    N = diagonal.shape[-1]
    result = torch.zeros(*diagonal.shape, N, dtype=diagonal.dtype, device=diagonal.device)
    idx = torch.arange(N, device=diagonal.device)
    result[..., idx, idx] = diagonal
    return result
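For reference, the same scatter-on-the-diagonal idea in NumPy (an analogue chosen so the sketch runs without PyTorch; recent PyTorch versions also provide torch.diag_embed for exactly this):

```python
import numpy as np

diagonal = np.arange(6, dtype=float).reshape(2, 3)  # [batch_size=2, N=3]
N = diagonal.shape[-1]
result = np.zeros(diagonal.shape + (N,))            # [2, 3, 3], all zeros
idx = np.arange(N)
result[..., idx, idx] = diagonal                    # write each row onto a diagonal
```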
@zezhishao
zezhishao / example.md
Created October 30, 2021 16:45
Pytorch Element-Wise Matrix Product

Assume we have two matrices [A1, A2] and another two matrices [W1, W2], where each A is [N, D] and each W is [D, H]. The goal is to compute [A1·W1, A2·W2] and then concatenate them (as in, e.g., MixHop GCN). We can achieve this with torch.matmul as follows:

import torch

N = 207
D = 64
H = 32
A1 = torch.randn(N, D)
A2 = torch.randn(N, D)
W1 = torch.randn(D, H)
W2 = torch.randn(D, H)
A = torch.stack([A1, A2])   # [2, N, D]
W = torch.stack([W1, W2])   # [2, D, H]
out = torch.matmul(A, W)    # [2, N, H] == [A1 @ W1, A2 @ W2]
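The stacked-matmul trick can be sanity-checked; this NumPy version mirrors torch.matmul's batch semantics (NumPy is used here only so the check runs without PyTorch, and the shapes are scaled down):

```python
import numpy as np

N, D, H = 5, 4, 3
A = np.random.randn(2, N, D)   # stacked [A1, A2]
W = np.random.randn(2, D, H)   # stacked [W1, W2]
out = np.matmul(A, W)          # [2, N, H]: one matrix product per batch slice

# Each slice equals the corresponding individual product:
ok0 = np.allclose(out[0], A[0] @ W[0])
ok1 = np.allclose(out[1], A[1] @ W[1])
```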
@zezhishao
zezhishao / onehot_encoding.md
Created October 30, 2021 16:44
One-hot encoding in PyTorch
import torch
import torch.nn.functional as F

F.one_hot(torch.arange(0, 5) % 3, num_classes=5)
>>> tensor([[1, 0, 0, 0, 0],
        [0, 1, 0, 0, 0],
        [0, 0, 1, 0, 0],
        [1, 0, 0, 0, 0],
        [0, 1, 0, 0, 0]])
@zezhishao
zezhishao / multiplication.md
Last active October 30, 2021 16:42
Several kinds of multiplication in PyTorch

1. Multiplication in PyTorch

  • torch.mul()
  • torch.matmul()
  • torch.mm()
  • torch.bmm()

1.0. Broadcast

torch.mul and torch.matmul follow PyTorch's broadcasting rules (dimensions of size 1 are expanded to match the other operand); torch.mm and torch.bmm do not broadcast.

1.1. torch.mul(input, other, out=None)
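A sketch of the distinctions using NumPy analogues (an assumption so the example runs without PyTorch: * corresponds to torch.mul, @ on 2-D arrays to torch.mm, and np.matmul on 3-D arrays to torch.bmm):

```python
import numpy as np

a = np.arange(6.0).reshape(2, 3)
b = np.full((2, 3), 2.0)
elementwise = a * b               # element-wise, like torch.mul

m1 = np.ones((2, 3))
m2 = np.ones((3, 4))
mat = m1 @ m2                     # strict 2-D matrix product, like torch.mm

b1 = np.ones((5, 2, 3))
b2 = np.ones((5, 3, 4))
bmat = np.matmul(b1, b2)          # batched product [B,n,m] x [B,m,p], like torch.bmm
```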