I hereby claim:
- I am dvanders on github.
- I am dvanders (https://keybase.io/dvanders) on keybase.
- I have a public key ASAJy2L7fN70X5n0vp7FyhL1m4jydu-WyRLpDc-ijtpNdAo
To claim this, I am signing this object:
#!/bin/bash
# random sleep to avoid thundering herd
sleep $(( (RANDOM % 30) + 1 ))s
if ls /sys/kernel/debug/ceph/*/caps 1> /dev/null 2>&1; then
    CAPS=$(awk '/total/ {sum += $2} END {print sum}' /sys/kernel/debug/ceph/*/caps)
else
    CAPS=0
fi
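The same aggregation — summing the `total` caps counter across every kernel CephFS mount's debugfs file — can be sketched in Python. This is a sketch, not part of the gist; the debugfs path and the `total <count>` line format are taken from the shell snippet above:

```python
import glob
import re

def total_caps(pattern="/sys/kernel/debug/ceph/*/caps"):
    """Sum the 'total' caps counters across all kernel CephFS mounts."""
    total = 0
    for path in glob.glob(pattern):
        with open(path) as f:
            for line in f:
                # lines look like: "total   12345"
                m = re.match(r"total\s+(\d+)", line)
                if m:
                    total += int(m.group(1))
    return total
```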
[Service]
ExecStart=
ExecStart=/usr/bin/numactl --interleave=all /usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph
import argparse
import json

# Define the weight calculation function: device size in TiB, floored at 0.00001
def calculate_weight(bluestore_bdev_size):
    return max(0.00001, float(bluestore_bdev_size) / float(1 << 40))

def main(input_file):
    # Read the OSD metadata (`ceph osd metadata` JSON output) from the input file
    with open(input_file, 'r') as file:
        for osd in json.load(file):
            print(osd['id'], calculate_weight(osd['bluestore_bdev_size']))

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('input_file')
    main(parser.parse_args().input_file)
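A quick sanity check of the weight formula: it divides a size in bytes by 2^40, so a 1 TiB device gets weight 1.0, a zero-sized device hits the floor, and a nominal "4 TB" drive lands around 3.64 (the function is copied here so the example is self-contained):

```python
def calculate_weight(bluestore_bdev_size):
    # CRUSH-style weight: device size in TiB, with a small positive floor
    return max(0.00001, float(bluestore_bdev_size) / float(1 << 40))

print(calculate_weight(1 << 40))                      # → 1.0 (exactly 1 TiB)
print(calculate_weight(0))                            # → 1e-05 (floor kicks in)
print(round(calculate_weight(4_000_000_000_000), 2))  # → 3.64 (nominal 4 TB drive)
```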
#!/bin/sh
# prefer cfq (el7), then mq-deadline (el8), then do nothing
if grep -q cfq /sys/block/sd*/queue/scheduler; then
    # tune SSDs to use noop scheduler and spinning disks to use cfq
    for DISK in /sys/block/sd*; do grep -q 0 ${DISK}/queue/rotational && echo noop > ${DISK}/queue/scheduler; done
    for DISK in /sys/block/sd*; do grep -q 1 ${DISK}/queue/rotational && echo cfq > ${DISK}/queue/scheduler; done
    # tune cfq not to penalize writes when reading heavily
#!/usr/bin/env python3
# The goal of this is to gradually balance a btrfs filesystem which contains DM-SMR drives.
# Such drives are described in detail at https://www.usenix.org/node/188434
# A normal drive should be able to balance a single 1GB chunk in under 30s.
# Such a stripe would normally be written directly to the shingled blocks, but in the case
# it was cached, it would take roughly 100s to clean.
# So our heuristic here is:
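The heuristic itself is cut off above. One plausible reading of the stated timings — under 30s means the chunk went straight to the shingled blocks, around 100s means it landed in the drive's cache and needed cleaning — is to balance one chunk at a time and back off when a chunk runs slow. This is a sketch under that assumption, not the gist's actual code; `next_delay` and its thresholds are hypothetical:

```python
def next_delay(elapsed_s, fast_s=30, slow_s=100):
    """Pick the pause (seconds) before balancing the next 1GB chunk.

    A fast chunk was likely written directly to the shingled blocks,
    so we can continue almost immediately; a slow chunk means the
    DM-SMR drive's cache needs time to drain.
    (Thresholds and policy are assumptions, not the gist's heuristic.)
    """
    if elapsed_s < fast_s:
        return 1                # drive is keeping up
    if elapsed_s < slow_s:
        return elapsed_s        # mild backoff
    return 10 * elapsed_s       # cache is full: let it drain
```

A driver loop would time each `btrfs balance start -dlimit=1 <mountpoint>` invocation and sleep for `next_delay(elapsed)` between chunks.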
ignoredisk --only-use=sda,sdb
clearpart --all --initlabel --drives sda,sdb
# for /boot
partition raid.01 --size 1024 --ondisk sda
partition raid.02 --size 1024 --ondisk sdb
# for /boot/efi
partition raid.11 --size 256 --ondisk sda
partition raid.12 --size 256 --ondisk sdb
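The `raid.*` members above still need matching `raid` lines to be assembled into mirrors. A sketch of what would typically follow; the device names and filesystem types here are assumptions, not part of the original kickstart:

```
# assemble the members into RAID1 devices (sketch; names and fstypes assumed)
raid /boot --level=RAID1 --device=boot --fstype=xfs raid.01 raid.02
raid /boot/efi --level=RAID1 --device=efi --fstype=efi raid.11 raid.12
```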
[global]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
fsid = xxx
debug filestore = 1
debug mon = 1
debug osd = 1
# This file managed by Puppet
global
    chroot /var/lib/haproxy
    group haproxy
    log 127.0.0.1 local0
    maxconn 2048
    pidfile /var/run/haproxy.pid
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128:AES256:AES:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK
    stats socket /var/lib/haproxy/stats level admin
    tune.ssl.default-dh-param 2048
diff --git a/src/osd/OSD.cc b/src/osd/OSD.cc
index 0562eed..1a2d397 100644
--- a/src/osd/OSD.cc
+++ b/src/osd/OSD.cc
@@ -1809,6 +1809,15 @@ int OSD::init()
   dout(2) << "boot" << dendl;
+  // initialize the daily loadavg with current 15min loadavg
+  double loadavgs[3];
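The patch fragment ends mid-hunk, but the intent stated in its comment — seed a daily load average from the current 15-minute loadavg — can be sketched in Python, since `os.getloadavg` exposes the same 1/5/15-minute triple as the C `getloadavg(3)` used in the patch. The fallback value is an assumption, not taken from the actual Ceph change:

```python
import os

def initial_daily_loadavg(default=1.0):
    """Seed a daily loadavg estimate from the current 15-minute loadavg."""
    try:
        one, five, fifteen = os.getloadavg()
    except OSError:
        # loadavgs unreadable on this platform: fall back to a neutral default
        return default
    return fifteen
```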