Denis Golovachev (WonderBeat)
WonderBeat / script.sql
Created October 15, 2025 11:36
copytraders.sql
WITH
last_week_trades AS (
    -- NB: the window is 10 days, despite the CTE name
    SELECT *
    FROM SANTIMENT__HYPERLIQUID.DEX.TRADES
    WHERE timestamp > current_timestamp() - INTERVAL '10 DAY'
),
parsed_trades AS (
    SELECT
        TIMESTAMP,
        COIN,
        MARKET_TYPE,
        BUYER_ADDRESS AS address,
WonderBeat / nebula.service
Created May 17, 2025 07:32
/etc/systemd/system/nebula.service
sudo cat /etc/systemd/system/nebula.service
[Unit]
Description=Nebula Network
Wants=basic.target
After=basic.target network.target

[Service]
ExecStart=/usr/bin/nebula -config /etc/nebula/config.yaml
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
-- HISTORICAL
SET ts_from = TIME_SLICE(1743754858::timestamp_ntz, 5, 'MINUTE', 'START');
SET ts_to = TIME_SLICE(1743927658::timestamp_ntz, 5, 'MINUTE', 'START');
SET asset_id = (SELECT METRICS_DEV.PUBLIC.GET_ASSET_ID_BY_REF(11559779572935330088));
SET metric_id = (SELECT METRICS_DEV.PUBLIC.GET_METRIC_ID_BY_NAME('transaction_volume'));
CREATE OR REPLACE TEMPORARY TABLE metrics_dev.public.INTERVALS AS (
    WITH RECURSIVE time_slices AS (
        SELECT
            $ts_from AS slice_start,
            TIME_SLICE($ts_from, 5, 'MINUTE', 'END') AS slice_end
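Snowflake's TIME_SLICE, used above, truncates a timestamp to the boundary of its N-minute bucket. A minimal Java sketch of the same START-boundary bucketing, assuming the literals are epoch seconds (class and method names are mine, not Snowflake's):

import java.time.Duration;
import java.time.Instant;

// Illustrative Java equivalent of TIME_SLICE(ts, 5, 'MINUTE', 'START'):
// truncate an epoch-seconds timestamp down to the start of its 5-minute bucket.
final class TimeSliceSketch {
    static Instant sliceStart(long epochSeconds, Duration slice) {
        long s = slice.getSeconds();
        return Instant.ofEpochSecond((epochSeconds / s) * s);
    }

    public static void main(String[] args) {
        // 1743754858 is the ts_from literal from the script above
        System.out.println(sliceStart(1743754858L, Duration.ofMinutes(5))); // 2025-04-04T08:20:00Z
    }
}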
#!/bin/sh
# https://github.com/openwrt-xiaomi/awg-openwrt/wiki/AmneziaWG-installing#%D1%83%D1%81%D1%82%D0%B0%D0%BD%D0%BE%D0%B2%D0%BA%D0%B0-amneziawg-%D0%B8-%D0%B4%D1%80%D1%83%D0%B3%D0%B8%D1%85-%D0%BD%D1%83%D0%B6%D0%BD%D1%8B%D1%85-%D1%83%D1%82%D0%B8%D0%BB%D0%B8%D1%82-%D0%BD%D0%B0-vds-%D1%81%D0%B5%D1%80%D0%B2%D0%B5%D1%80%D0%B5
# https://github.com/amnezia-vpn/amneziawg-linux-kernel-module?tab=readme-ov-file#debian
# https://habr.com/ru/companies/amnezia/articles/807539/
#
# AmneziaWG setup
#
apt install -y gnupg2 linux-headers-$(uname -r)
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 57290828
"Flink-RocksDBStateDataTransfer-thread-1" #570511 daemon prio=5 os_prio=0 cpu=40891.60ms elapsed=80173.33s tid=0x00007fb857d21000 nid=0x98839 waiting on condition [0x00007fb6b56f2000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(java.base@11.0.18/Native Method)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:441)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:318)
at org.apache.hadoop.fs.s3a.WriteOperationHelper.finalizeMultipartUpload(WriteOperationHelper.java:353)
at org.apache.hadoop.fs.s3a.WriteOperationHelper.completeMPUwithRetries(WriteOperationHelper.java:396)
at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.lambda$complete$1(S3ABlockOutputStream.java:886)
at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$$Lambda$2906/0x000000084099c440.apply(Unknown Source)
at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfInvocation(IOStatisticsBinding.java:464)
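The thread above is parked in Thread.sleep inside the S3A Invoker while it retries completion of a multipart upload. A minimal sketch of that retry-with-sleep pattern, only to illustrate why the thread shows up as TIMED_WAITING; the class name, backoff policy, and constants are assumptions, not Hadoop's actual implementation:

import java.util.concurrent.Callable;

// Illustrative retry-with-sleep loop: each failed attempt sleeps before
// retrying, which is the state (TIMED_WAITING in Thread.sleep) captured
// in the dump above. Requires maxAttempts >= 1.
final class RetryWithSleep {
    static <T> T retry(Callable<T> op, int maxAttempts, long baseSleepMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(baseSleepMillis * attempt); // linear backoff between attempts
                }
            }
        }
        throw last;
    }
}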

Job Manager

2022-12-27 13:38:47.155 StaticFileSplitEnumerator  - Assigned split to subtask 1 : FileSourceSplit: s3a://bucket/2000000/part-00001-00cb73ef-346b-4e1e-a86a-007223ddf275-c000.zstd.parquet [0, 97489087)  hosts=[localhost] ID=0000000032 position=null
2022-12-27 13:38:47.156 StaticFileSplitEnumerator  - Assigned split to subtask 9 : FileSourceSplit: s3a://bucket/2000000/part-00002-00cb73ef-346b-4e1e-a86a-007223ddf275-c000.zstd.parquet [0, 97342071)  hosts=[localhost] ID=0000000033 position=null
2022-12-27 13:38:47.156 StaticFileSplitEnumerator  - Assigned split to subtask 6 : FileSourceSplit: s3a://bucket/2000000/part-00000-00cb73ef-346b-4e1e-a86a-007223ddf275-c000.zstd.parquet [0, 97377047)  hosts=[localhost] ID=0000000031 position=null
2022-12-27 13:38:47.157 StaticFileSplitEnumerator  - Assigned split to subtask 5 : FileSourceSplit: s3a://bucket/2000000/part-00003-00cb73ef-346b-4e1e-a86a-007223ddf275-c000.zstd.parquet [0, 97406878)  hosts=[localhost] ID=0000000034 position=null
2022-12-27 1
WonderBeat / TrackingFsDataInputStream.md
Last active December 21, 2022 12:41
TrackingFsDataInputStream

TrackingFsDataInputStream batch tracking issue

org.apache.flink.connector.file.src.impl.StreamFormatAdapter.TrackingFsDataInputStream wraps the underlying InputStream to count the bytes consumed. org.apache.flink.connector.file.src.impl.StreamFormatAdapter.Reader relies on this count to split the input into batches, reading records only while the current batch still has budget left:

            while (stream.hasRemainingInBatch() && (next = reader.read()) != null) {
                result.add(next);
            }
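A minimal sketch of the wrapper idea described above: every byte pulled from the delegate stream is charged against the current batch's budget, and the read loop polls hasRemainingInBatch() to decide when the batch is full. The class name, batch-size constant, and the other method names are assumptions, not Flink's actual fields:

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative counting wrapper: reads are forwarded to the delegate and
// the number of consumed bytes is accumulated per batch.
class CountingBatchInputStream extends FilterInputStream {
    private static final long BATCH_BYTES = 1 << 20; // assumed 1 MiB budget per batch
    private long bytesInBatch;

    CountingBatchInputStream(InputStream in) {
        super(in);
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b >= 0) {
            bytesInBatch++;
        }
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0) {
            bytesInBatch += n;
        }
        return n;
    }

    // The read loop above polls this to decide when the batch is full.
    boolean hasRemainingInBatch() {
        return bytesInBatch < BATCH_BYTES;
    }

    // Reset the budget when the reader starts a new batch.
    void startNewBatch() {
        bytesInBatch = 0;
    }
}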
Letter Encryption
a cd86
b cea8
c cfc8
d c8c7
e c9fb
f ca47
g cb4e
h c446
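The table gives no decoding rule, so treating each code as an opaque string, a minimal Java lookup sketch (class and method names are mine):

import java.util.Map;
import java.util.stream.Collectors;

// Illustrative encoder built from the letter table above.
public class LetterEncoder {
    private static final Map<Character, String> CODES = Map.of(
            'a', "cd86", 'b', "cea8", 'c', "cfc8", 'd', "c8c7",
            'e', "c9fb", 'f', "ca47", 'g', "cb4e", 'h', "c446");

    public static String encode(String word) {
        return word.chars()
                .mapToObj(ch -> CODES.getOrDefault((char) ch, "?"))
                .collect(Collectors.joining(" "));
    }

    public static void main(String[] args) {
        System.out.println(encode("bead")); // cea8 c9fb cd86 c8c7
    }
}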
# Kafka Connect source connector that reads CSV files from a spool directory
connector.class=com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector
name=direct-mail-csv
topic=csv-test
tasks.max=1
# input files carry no header row, and no schema is generated
csv.first.row.as.header=false
schema.generation.enabled=false
# stop the task on the first malformed record
halt.on.error=true
# match only files ending in .csv (dot escaped so it matches a literal dot)
input.file.pattern=.*\.csv
# processed files are moved here
finished.path=/tmp/spooldir_finished
value.converter.schema.registry.url=https://ip-10-0-0-24.eu-west-1.compute.internal:8481,https://ip-10-0-0-25.eu-west-1.compute.internal:8481
public interface TokenManager {

    enum VerificationResult {
        OK, INVALID
    }

    /** Issues a new opaque token. */
    byte[] generate();

    /** Checks whether a token was issued by this manager. */
    VerificationResult verify(byte[] token);
}
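A minimal sketch of one possible implementation, assuming an HMAC-signed random token; the class name, key handling, and token layout are illustrative choices, not part of the interface contract:

import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Illustrative implementation: a token is a random nonce followed by its
// HMAC-SHA256 tag, so verify() needs no server-side storage.
public final class HmacTokenManager implements TokenManager {
    private static final int NONCE_LEN = 16;
    private final SecureRandom random = new SecureRandom();
    private final byte[] key; // secret key, supplied by the caller

    public HmacTokenManager(byte[] key) {
        this.key = key.clone();
    }

    @Override
    public byte[] generate() {
        byte[] nonce = new byte[NONCE_LEN];
        random.nextBytes(nonce);
        byte[] tag = hmac(nonce);
        byte[] token = Arrays.copyOf(nonce, NONCE_LEN + tag.length);
        System.arraycopy(tag, 0, token, NONCE_LEN, tag.length);
        return token;
    }

    @Override
    public VerificationResult verify(byte[] token) {
        if (token == null || token.length <= NONCE_LEN) {
            return VerificationResult.INVALID;
        }
        byte[] expected = hmac(Arrays.copyOfRange(token, 0, NONCE_LEN));
        byte[] actual = Arrays.copyOfRange(token, NONCE_LEN, token.length);
        // MessageDigest.isEqual is constant-time, avoiding a timing oracle.
        return MessageDigest.isEqual(expected, actual)
                ? VerificationResult.OK
                : VerificationResult.INVALID;
    }

    private byte[] hmac(byte[] data) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return mac.doFinal(data);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}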