Sertaç Özercan sozercan

sozercan / setup-amlfs-aks.sh
Last active March 13, 2026 03:43
Enable Azure Managed Lustre File System (AMLFS) on an existing AKS cluster with dynamic provisioning
#!/usr/bin/env bash
#
# setup-amlfs-aks.sh
#
# Enable Azure Managed Lustre File System (AMLFS) on an existing AKS cluster
# with dynamic provisioning via the Azure Lustre CSI driver.
#
# IMPORTANT: AMLFS requires a dedicated subnet separate from the AKS node subnet.
# This script automatically creates one if it doesn't exist. Using the AKS node
# subnet causes extremely slow provisioning (90+ min vs ~15 min with a dedicated subnet).
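Before running the script, the dedicated subnet can also be created manually. A minimal sketch, assuming placeholder resource group, VNet, and address-prefix values that you would replace with your own:

```shell
# Create a dedicated subnet for AMLFS in the cluster's VNet.
# Resource group, VNet name, and address prefix are placeholders --
# pick a prefix that does not overlap the AKS node subnet.
az network vnet subnet create \
  --resource-group my-aks-rg \
  --vnet-name my-aks-vnet \
  --name amlfs-subnet \
  --address-prefixes 10.1.0.0/24
```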
apiVersion: ray.io/v1
kind: RayService
metadata:
  name: qwen
spec:
  serveConfigV2: |
    applications:
    - name: llm_app
      import_path: ray.serve.llm:build_openai_app
      route_prefix: "/"
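Since `build_openai_app` exposes an OpenAI-compatible HTTP API at the configured route prefix, the deployed service can be queried with a plain chat-completions request. A minimal stdlib-only sketch, assuming the Serve endpoint is reachable at `http://localhost:8000` and the served model id is `qwen` (both are assumptions about your deployment):

```python
import json
import urllib.request


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def query(base_url: str, model: str, prompt: str) -> dict:
    # POST to the OpenAI-compatible route exposed by build_openai_app.
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    print(query("http://localhost:8000", "qwen", "Hello"))
```

The same endpoint also works with any OpenAI-compatible client library pointed at the Serve route prefix.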
apiVersion: ray.io/v1
kind: RayService
metadata:
  name: qwen-disaggregated
spec:
  serveConfigV2: |
    applications:
    - name: pd-disaggregation
      import_path: ray.serve.llm:build_pd_openai_app
      route_prefix: "/"
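`build_pd_openai_app` splits serving into separate prefill and decode deployments. A sketch of how the application's `args` might be filled in — the `prefill_config`/`decode_config` structure and the model id here are assumptions to verify against the Ray Serve LLM docs for your Ray version:

```yaml
applications:
- name: pd-disaggregation
  import_path: ray.serve.llm:build_pd_openai_app
  route_prefix: "/"
  args:
    prefill_config:
      model_loading_config:
        model_id: qwen        # placeholder model id
    decode_config:
      model_loading_config:
        model_id: qwen
```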
oss-gpt120b-bench-j9b29 Collecting ruamel-yaml~=0.18.12 (from aiperf==0.1.1)
oss-gpt120b-bench-j9b29 Downloading ruamel.yaml-0.18.15-py3-none-any.whl.metadata (25 kB)
oss-gpt120b-bench-j9b29 Collecting setproctitle~=1.3.6 (from aiperf==0.1.1)
oss-gpt120b-bench-j9b29 Downloading setproctitle-1.3.7-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl.metadata (10 kB)
oss-gpt120b-bench-j9b29 Collecting soundfile~=0.13.1 (from aiperf==0.1.1)
oss-gpt120b-bench-j9b29 Downloading soundfile-0.13.1-py2.py3-none-manylinux_2_28_x86_64.whl.metadata (16 kB)
oss-gpt120b-bench-j9b29 Collecting textual~=5.3.0 (from aiperf==0.1.1)
oss-gpt120b-bench-j9b29 Downloading textual-5.3.0-py3-none-any.whl.metadata (9.1 kB)
oss-gpt120b-bench-j9b29 Collecting tqdm>=4.67.1 (from aiperf==0.1.1)
oss-gpt120b-bench-j9b29 Downloading tqdm-4.67.1-py3-none-any.whl.metadata (57 kB)
oss-gpt120b-bench-94ldh Downloading setproctitle-1.3.7-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl (32 kB)
oss-gpt120b-bench-94ldh Downloading soundfile-0.13.1-py2.py3-none-manylinux_2_28_x86_64.whl (1.3 MB)
oss-gpt120b-bench-94ldh ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 115.8 MB/s eta 0:00:00
oss-gpt120b-bench-94ldh Downloading textual-5.3.0-py3-none-any.whl (702 kB)
oss-gpt120b-bench-94ldh ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 702.7/702.7 kB 71.6 MB/s eta 0:00:00
oss-gpt120b-bench-94ldh Downloading tqdm-4.67.1-py3-none-any.whl (78 kB)
oss-gpt120b-bench-94ldh Downloading transformers-4.57.1-py3-none-any.whl (12.0 MB)
oss-gpt120b-bench-94ldh ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.0/12.0 MB 141.6 MB/s eta 0:00:00
oss-gpt120b-bench-94ldh Downloading uvloop-0.21.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.7 MB)
oss-gpt120b-bench-94ldh ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.7/4.7 MB 146.3 MB/s eta 0:00:00
│ 175 │ │ │ │ await asyncio.sleep(0.5) │
│ 176 │ │ │
│ 177 │ │ try: │
│ ❱ 178 │ │ │ await asyncio.wait_for(_wait_for_registration(), timeout=timeout_seconds) │
│ 179 │ │ except asyncio.TimeoutError as e: │
{
"@context": "https://openvex.dev/ns/v0.2.0",
"@id": "govulncheck/vex:e47eb4a0ed7d490a5a94dfb6f85150e2244773b6977de80e8dc620dbd3d30a72",
"author": "Unknown Author",
"timestamp": "2025-09-26T20:54:41.812737311Z",
"version": 1,
"tooling": "https://pkg.go.dev/golang.org/x/vuln/cmd/govulncheck",
"statements": [
{
"vulnerability": {
package types

import (
	"time"
)

// Options contains common copacetic options.
type Options struct {
	// Core image configuration
	Image string
2025-03-07 09:54:52,381 INFO usage_lib.py:467 -- Usage stats collection is enabled by default without user confirmation because this terminal is detected to be non-interactive. To disable this, add `--disable-usage-stats` to the command that starts the cluster, or run the following command: `ray disable-usage-stats` before starting the cluster. See https://docs.ray.io/en/master/cluster/usage-stats.html for more details.
2025-03-07 09:54:52,381 INFO scripts.py:865 -- Local node IP: 10.244.2.105
2025-03-07 09:54:54,440 SUCC scripts.py:902 -- --------------------
2025-03-07 09:54:54,440 SUCC scripts.py:903 -- Ray runtime started.
2025-03-07 09:54:54,440 SUCC scripts.py:904 -- --------------------
2025-03-07 09:54:54,440 INFO scripts.py:906 -- Next steps
2025-03-07 09:54:54,440 INFO scripts.py:909 -- To add another node to this Ray cluster, run
2025-03-07 09:54:54,440 INFO scripts.py:912 -- ray start --address='10.244.2.105:6379'
2025-03-07 09:54:54,440 INFO scripts.py:921 -- To connect to this Ray cluster: