@nagataka
nagataka / construct_inverted_index.py
Created January 16, 2026 18:48
A naive script to construct an inverted index
def construct_index(docs):
    term_dict = {}
    postings = {}
    for id, doc in docs.items():
        terms = list(set(doc.lower().split()))
        for term in terms:
            term_count = term_dict.get(term, 0)
            term_posting = postings.get(term, None)
            # The preview is truncated here; the rest is a guess at the remainder:
            # count document frequency and record the doc id in the postings list.
            term_dict[term] = term_count + 1
            if term_posting is None:
                postings[term] = [id]
            else:
                term_posting.append(id)
    return term_dict, postings
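
For reference, a quick usage sketch of the function above, assuming the guessed completion, with a made-up two-document corpus:

docs = {
    1: "the quick brown fox",
    2: "the lazy dog and the fox",
}
term_dict, postings = construct_index(docs)
print(term_dict["fox"])   # 2: "fox" occurs in both documents
print(postings["fox"])    # [1, 2]: IDs of the documents containing "fox"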
@nagataka
nagataka / tech_notes.md
Last active January 16, 2026 18:49
Tech notes

SAM (Segment Anything Model)

Trying out SAM (paper notes here: nagataka/Read-a-Paper#55). Using SAM3 seems to require an access request on the Hugging Face repository, so for now I'm experimenting with SAM2.

The available checkpoints are listed in the GitHub repository. This time I'll try the tiny one.

However, running the sample as-is fails with an error like the one below. It probably works fine if you work inside the sam2 directory cloned from git, but since I'm importing sam2 from a different project directory this time, it seems I need to adjust the Hydra configuration.

[Screenshot: Hydra configuration error when running the sample, 2026-01-12]
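
For reference, a minimal sketch of loading the tiny checkpoint from an outside project, assuming sam2 is installed as a package (pip install -e .); the config name and checkpoint path follow the upstream README and may differ by release:

from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Assumed paths for the tiny model, based on the sam2 README.
checkpoint = "./checkpoints/sam2.1_hiera_tiny.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_t.yaml"

# build_sam2 resolves the config via Hydra; installing sam2 as a package
# registers its config directory, so this should work outside the cloned repo.
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))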
@nagataka
nagataka / math_in_english.md
Created February 28, 2021 15:06
Mathematical expressions in English
@nagataka
nagataka / study_lstm.md
Last active February 6, 2021 01:24
Studying LSTM
@nagataka
nagataka / blocking_maze_env01.py
Last active January 15, 2021 05:33
Blocking Maze for OpenAI Gym
# OpenAI gym custom environment mimicking Blocking Maze
# See Sutton and Barto, "Reinforcement Learning: An Introduction",
# Example 8.2: Blocking Maze
from enum import Enum
import sys
import copy
import gym
from gym import error, spaces, utils
from gym.utils import seeding
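
For orientation, a minimal sketch of how such a custom environment typically continues under the old gym API; the grid size, action set, and reward scheme below are placeholders, not the gist's actual values:

class BlockingMazeEnv(gym.Env):
    """Gridworld whose wall shifts partway through training (Sutton & Barto, Example 8.2)."""
    metadata = {'render.modes': ['human']}

    def __init__(self):
        self.action_space = spaces.Discrete(4)           # up, down, left, right
        self.observation_space = spaces.Discrete(9 * 6)  # placeholder 9x6 grid
        self.state = 0

    def reset(self):
        self.state = 0   # back to the start cell
        return self.state

    def step(self, action):
        # Placeholder transition: a real implementation moves the agent,
        # rewards reaching the goal, and relocates the wall after a set number of steps.
        reward, done = 0.0, False
        return self.state, reward, done, {}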
@nagataka
nagataka / settings.json
Created September 4, 2020 22:55
VS Code settings.json
{
    "python.formatting.provider": "black",
    "python.linting.pylintEnabled": false,
    "python.linting.flake8Enabled": true,
    "python.linting.flake8Args": [
        "--ignore=E501,W503"
    ],
    "python.sortImports.args": [
        "-m 3"
    ],
@nagataka
nagataka / kelly_criterion.py
Created May 11, 2020 05:34
Experiment on a coin-flipping game
import random
import numpy as np
np.random.seed(0)
def kerri(p, b):
    """https://en.wikipedia.org/wiki/Kelly_criterion"""
    return (p*(b+1)-1)/b
N = 300
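
A sketch of how the coin-flipping experiment might continue using the function above; the win probability, payout, and starting bankroll are assumed, not taken from the gist:

p, b = 0.6, 1.0             # assumed: 60% win probability, even-money payout
f = kerri(p, b)             # Kelly fraction: p - (1 - p)/b = 0.2
bankroll = 100.0
for _ in range(N):
    stake = f * bankroll    # bet the Kelly fraction of the current bankroll
    if np.random.rand() < p:
        bankroll += stake * b
    else:
        bankroll -= stake
print(f"Kelly fraction {f:.2f}, bankroll after {N} flips: {bankroll:.2f}")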
@nagataka
nagataka / minimal_rllib.py
Created April 21, 2020 22:15
Initial example of using RLlib
import gym
import ray
from ray.rllib.agents.ppo import PPOTrainer, DEFAULT_CONFIG
import pprint as pp
#tune.run(PPOTrainer, config={"env": "Breakout-v0", "use_pytorch": True})
ray.init(num_gpus=1, ignore_reinit_error=True, log_to_driver=False)
# https://github.com/ray-project/ray/blob/master/rllib/agents/ppo/ppo.py#L15
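
A sketch of how the script presumably continues with the ray 0.8-era API; the environment name and config tweaks below are assumptions:

config = DEFAULT_CONFIG.copy()
config["num_gpus"] = 1
config["num_workers"] = 1

# Train PPO for a few iterations and print the headline metric.
trainer = PPOTrainer(config=config, env="CartPole-v0")
for i in range(3):
    result = trainer.train()
    pp.pprint({"iteration": i, "episode_reward_mean": result["episode_reward_mean"]})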
@nagataka
nagataka / notify_slack.sh
Created March 4, 2020 19:28
Send a Slack notification
#!/bin/bash
set -eu
### Incoming WebHooks URL
WEBHOOKURL="https://hooks.slack.com/services/FILL_YOUR_WEBHOOKURL"
### channel
CHANNEL=${CHANNEL:-"#notifications"}
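
The preview cuts off before the actual request; for reference, a minimal Python sketch of the same idea, POSTing JSON to an incoming-webhook URL (the message text is a placeholder):

import json
import urllib.request

webhook_url = "https://hooks.slack.com/services/FILL_YOUR_WEBHOOKURL"
payload = {"channel": "#notifications", "text": "Job finished"}  # placeholder message

req = urllib.request.Request(
    webhook_url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # Slack returns 200 with body "ok" on success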
@nagataka
nagataka / README.md
Last active November 20, 2019 19:23
README_template.md

The repository is organized as follows:

  • src: Contains the source code for all .... The code is written in Python and takes advantage of NumPy and Matplotlib. To run a simulation, use the file run_xxxx.py.

  • tools: This folder contains tools for.... With yyy.py you can reproduce the figures found in ().

  • data: All results are saved here once you run a simulation.

  • params: Here you can find all the configuration files containing the parameters (for each experiment).