Content
WARNING Skipping jet PUID SFs for variation: jer6_down, is_mc: True, dnn_year: 2018.0 copperhead_processor.py:2346
INFO [timing] Jet pT variations time: 53.34 seconds copperhead_processor.py:1771
INFO [timing] various region (z-peak) fill time: 0.03 seconds copperhead_processor.py:1788
INFO [timing] Zpt weights time: 0.00 seconds copperhead_processor.py:1837
INFO [timing] Weights variations time: 0.04 seconds copperhead_processor.py:1864
INFO [timing] Weights partials time: 0.05 seconds copperhead_processor.py:1873
INFO [timing] Cutflow time: 0.00 seconds copperhead_processor.py:1989
WARNING [resume] attempt 1 failed for dy_M-50_aMCatNLO[0] (FutureCancelledError: ('write-parquet-aca86cfc695ee295660a7df1f9fba0f1-finalize', 0) cancelled for reason: scheduler-connection-lost. Client lost the connection to the scheduler. Please check your connection and re-run your work.) run_stage1.py:506
ERROR [resume] write failed after 1 attempts for dy_M-50_aMCatNLO[0] run_stage1.py:515
╭───────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────────────────────────────────────────────────────────╮
│ /depot/cms/private/users/shar1172/copperheadV2_main/run_stage1.py:492 in <module> │
│ │
│ 489 │ │ │ │ │ │ │ │
│ 490 │ │ │ │ │ │ │ # to_persist = to_persist.persist() │
│ 491 │ │ │ │ │ │ │ # to_persist.to_parquet(save_path, write_metadata_file=False │
│ ❱ 492 │ │ │ │ │ │ │ to_persist.to_parquet(save_path) │
│ 493 │ │ │ │ │ │ │ │
│ 494 │ │ │ │ │ │ │ if not _parquet_dir_has_files(save_path): │
│ 495 │ │ │ │ │ │ │ │ raise RuntimeError("Parquet write produced no files.") │
│ │
│ /depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask_awkward/lib/core.py:1757 in to_parquet │
│ │
│ 1754 │ ) -> Any: │
│ 1755 │ │ from dask_awkward.lib.io.parquet import to_parquet │
│ 1756 │ │ │
│ ❱ 1757 │ │ return to_parquet(self, path, storage_options=storage_options, **kwargs) │
│ 1758 │ │
│ 1759 │ def to_dask_bag(self) -> DaskBag: │
│ 1760 │ │ from dask_awkward.lib.io.io import to_dask_bag │
│ │
│ /depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask_awkward/lib/io/parquet.py:694 in to_parquet │
│ │
│ 691 │ ) │
│ 692 │ out = new_scalar_object(graph, final_name, dtype="f8") │
│ 693 │ if compute: │
│ ❱ 694 │ │ out.compute() │
│ 695 │ │ return None │
│ 696 │ else: │
│ 697 │ │ return out │
│ │
│ /depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask/base.py:372 in compute │
│ │
│ 369 │ │ -------- │
│ 370 │ │ dask.compute │
│ 371 │ │ """ │
│ ❱ 372 │ │ (result,) = compute(self, traverse=False, **kwargs) │
│ 373 │ │ return result │
│ 374 │ │
│ 375 │ def __await__(self): │
│ │
│ /depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask/base.py:660 in compute │
│ │
│ 657 │ │ postcomputes.append(x.__dask_postcompute__()) │
│ 658 │ │
│ 659 │ with shorten_traceback(): │
│ ❱ 660 │ │ results = schedule(dsk, keys, **kwargs) │
│ 661 │ │
│ 662 │ return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)]) │
│ 663 │
│ │
│ /depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/distributed/client.py:2417 in _gather │
│ │
│ 2414 │ │ │ │ │ │ st = future._state │
│ 2415 │ │ │ │ │ │ exception = st.exception │
│ 2416 │ │ │ │ │ │ traceback = st.traceback │
│ ❱ 2417 │ │ │ │ │ │ raise exception.with_traceback(traceback) │
│ 2418 │ │ │ │ │ if errors == "skip": │
│ 2419 │ │ │ │ │ │ bad_keys.add(key) │
│ 2420 │ │ │ │ │ │ bad_data[key] = None │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
FutureCancelledError: ('write-parquet-aca86cfc695ee295660a7df1f9fba0f1-finalize', 0) cancelled for reason: scheduler-connection-lost.
Client lost the connection to the scheduler. Please check your connection and re-run your work.
Processing datasets: 15%|████████████████████████████▎ | 4/27 [12:53<1:14:09, 193.45s/it]
Traceback (most recent call last):
File "/depot/cms/private/users/shar1172/copperheadV2_main/run_stage1.py", line 492, in <module>
to_persist.to_parquet(save_path)
File "/depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask_awkward/lib/core.py", line 1757, in to_parquet
return to_parquet(self, path, storage_options=storage_options, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask_awkward/lib/io/parquet.py", line 694, in to_parquet
out.compute()
File "/depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask/base.py", line 372, in compute
(result,) = compute(self, traverse=False, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask/base.py", line 660, in compute
results = schedule(dsk, keys, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/distributed/client.py", line 2417, in _gather
raise exception.with_traceback(traceback)
distributed.client.FutureCancelledError: ('write-parquet-aca86cfc695ee295660a7df1f9fba0f1-finalize', 0) cancelled for reason: scheduler-connection-lost.
Client lost the connection to the scheduler. Please check your connection and re-run your work.
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7fe12cf42ed0>
Program FAILED on Mon Dec 1 04:52:39 CET 2025
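The run above is the first of the two distinct failure modes in this log: the Dask client lost its connection to the scheduler mid-write, so the finalize task of the parquet write was cancelled and the [resume] logic gave up after a single attempt. Below is a minimal sketch of a retry that also reconnects the client; `make_collection`, `scheduler_address`, and `save_path` are placeholders rather than names from run_stage1.py, and the reconnect-and-back-off policy is an assumption, not the repository's actual resume logic.

    # Hypothetical sketch, not run_stage1.py's actual resume code: retry the
    # parquet write and open a fresh Client when the scheduler connection drops.
    import time
    from distributed import Client
    from distributed.client import FutureCancelledError

    def write_with_reconnect(make_collection, save_path, scheduler_address, attempts=3):
        client = Client(scheduler_address)  # registers itself as the default client
        try:
            for attempt in range(1, attempts + 1):
                try:
                    collection = make_collection()    # rebuild the dask-awkward graph
                    collection.to_parquet(save_path)  # blocking write (compute=True)
                    return
                except FutureCancelledError:
                    # 'scheduler-connection-lost': the old client is unusable,
                    # so close it, back off, and connect a new one.
                    client.close()
                    time.sleep(30 * attempt)
                    client = Client(scheduler_address)
            raise RuntimeError(f"parquet write failed after {attempts} attempts")
        finally:
            client.close()

Note that persisting first (the commented-out `to_persist.persist()` visible in the traceback) would likely not help here, since persisted futures are lost together with the scheduler connection. The log then continues with the retried run below.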
INFO [timing] Weights time: 0.44 seconds copperhead_processor.py:1256
INFO [timing] some GEN event weights for syst time: 0.57 seconds copperhead_processor.py:1365
INFO [timing] Fill muon and gjet variables time: 0.29 seconds copperhead_processor.py:1743
INFO Applying jet PUID! copperhead_processor.py:2230
INFO Using puId for PUID jet.py:309
INFO [timing] Jet pT variations time: 1.02 seconds copperhead_processor.py:1771
INFO [timing] various region (z-peak) fill time: 0.03 seconds copperhead_processor.py:1788
INFO [timing] Zpt weights time: 0.00 seconds copperhead_processor.py:1837
INFO [timing] Weights variations time: 0.01 seconds copperhead_processor.py:1864
INFO [timing] Weights partials time: 0.03 seconds copperhead_processor.py:1873
INFO [timing] Cutflow time: 0.00 seconds copperhead_processor.py:1989
Processing datasets: 11%|█████████████████████▌ | 3/27 [00:20<00:01, 12.90it/s]
WARNING [resume] attempt 1 failed for dy_M-50_aMCatNLO[0] (KilledWorker: Attempted to run task ('from-uproot-cd85382e4ec3c223c8ca71677167a2da', 168) on 4 different workers, but all those workers died while running it. The last worker that attempt to run the task was tls://10.5.11.113:44583. Inspecting worker logs is often a good next step to diagnose what went wrong. For more information see https://distributed.dask.org/en/stable/killed.html.) run_stage1.py:506
ERROR [resume] write failed after 1 attempts for dy_M-50_aMCatNLO[0] run_stage1.py:515
╭───────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────────────────────────────────────────────────────────╮
│ /depot/cms/private/users/shar1172/copperheadV2_main/run_stage1.py:492 in <module> │
│ │
│ 489 │ │ │ │ │ │ │ │
│ 490 │ │ │ │ │ │ │ # to_persist = to_persist.persist() │
│ 491 │ │ │ │ │ │ │ # to_persist.to_parquet(save_path, write_metadata_file=False │
│ ❱ 492 │ │ │ │ │ │ │ to_persist.to_parquet(save_path) │
│ 493 │ │ │ │ │ │ │ │
│ 494 │ │ │ │ │ │ │ if not _parquet_dir_has_files(save_path): │
│ 495 │ │ │ │ │ │ │ │ raise RuntimeError("Parquet write produced no files.") │
│ │
│ /depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask_awkward/lib/core.py:1757 in to_parquet │
│ │
│ 1754 │ ) -> Any: │
│ 1755 │ │ from dask_awkward.lib.io.parquet import to_parquet │
│ 1756 │ │ │
│ ❱ 1757 │ │ return to_parquet(self, path, storage_options=storage_options, **kwargs) │
│ 1758 │ │
│ 1759 │ def to_dask_bag(self) -> DaskBag: │
│ 1760 │ │ from dask_awkward.lib.io.io import to_dask_bag │
│ │
│ /depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask_awkward/lib/io/parquet.py:694 in to_parquet │
│ │
│ 691 │ ) │
│ 692 │ out = new_scalar_object(graph, final_name, dtype="f8") │
│ 693 │ if compute: │
│ ❱ 694 │ │ out.compute() │
│ 695 │ │ return None │
│ 696 │ else: │
│ 697 │ │ return out │
│ │
│ /depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask/base.py:372 in compute │
│ │
│ 369 │ │ -------- │
│ 370 │ │ dask.compute │
│ 371 │ │ """ │
│ ❱ 372 │ │ (result,) = compute(self, traverse=False, **kwargs) │
│ 373 │ │ return result │
│ 374 │ │
│ 375 │ def __await__(self): │
│ │
│ /depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask/base.py:660 in compute │
│ │
│ 657 │ │ postcomputes.append(x.__dask_postcompute__()) │
│ 658 │ │
│ 659 │ with shorten_traceback(): │
│ ❱ 660 │ │ results = schedule(dsk, keys, **kwargs) │
│ 661 │ │
│ 662 │ return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)]) │
│ 663 │
│ │
│ /depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/distributed/client.py:2417 in _gather │
│ │
│ 2414 │ │ │ │ │ │ st = future._state │
│ 2415 │ │ │ │ │ │ exception = st.exception │
│ 2416 │ │ │ │ │ │ traceback = st.traceback │
│ ❱ 2417 │ │ │ │ │ │ raise exception.with_traceback(traceback) │
│ 2418 │ │ │ │ │ if errors == "skip": │
│ 2419 │ │ │ │ │ │ bad_keys.add(key) │
│ 2420 │ │ │ │ │ │ bad_data[key] = None │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
KilledWorker: Attempted to run task ('from-uproot-cd85382e4ec3c223c8ca71677167a2da', 168) on 4 different workers, but all those workers died while running it. The last worker that attempt to run the task was
tls://10.5.11.113:44583. Inspecting worker logs is often a good next step to diagnose what went wrong. For more information see https://distributed.dask.org/en/stable/killed.html.
Processing datasets: 15%|████████████████████████████▎ | 4/27 [40:22<3:52:08, 605.57s/it]
Traceback (most recent call last):
File "/depot/cms/private/users/shar1172/copperheadV2_main/run_stage1.py", line 492, in <module>
to_persist.to_parquet(save_path)
File "/depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask_awkward/lib/core.py", line 1757, in to_parquet
return to_parquet(self, path, storage_options=storage_options, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask_awkward/lib/io/parquet.py", line 694, in to_parquet
out.compute()
File "/depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask/base.py", line 372, in compute
(result,) = compute(self, traverse=False, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/dask/base.py", line 660, in compute
results = schedule(dsk, keys, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/depot/cms/users/yun79/conda_envs/yun_coffea_latest/lib/python3.11/site-packages/distributed/client.py", line 2417, in _gather
raise exception.with_traceback(traceback)
distributed.scheduler.KilledWorker: Attempted to run task ('from-uproot-cd85382e4ec3c223c8ca71677167a2da', 168) on 4 different workers, but all those workers died while running it. The last worker that attempt to run the task was tls://10.5.11.113:44583. Inspecting worker logs is often a good next step to diagnose what went wrong. For more information see https://distributed.dask.org/en/stable/killed.html.
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7f0aff84df50>
Program FAILED on Mon Dec 1 05:43:41 CET 2025
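The second failure is different: the scheduler stayed up, but a single from-uproot input task killed four workers in a row, which usually means one partition exceeds the worker memory limit (see the killed.html page linked in the message). Two generic mitigations are sketched below; the allowed-failures value of 6 and the step_size of "50 MB" are illustrative assumptions, and the file and tree names are placeholders, not anything from this analysis.

    # Hypothetical mitigations for the KilledWorker above; values are illustrative.
    import dask
    import uproot

    # 1) Let the scheduler retry a task on more workers before raising
    #    KilledWorker; "on 4 different workers" reflects the default
    #    distributed.scheduler.allowed-failures = 3.
    dask.config.set({"distributed.scheduler.allowed-failures": 6})

    # 2) Read smaller chunks per task so one 'from-uproot' partition cannot
    #    exhaust a worker's memory (assumes uproot.dask's step_size option).
    events = uproot.dask(
        {"/path/to/file.root": "Events"},  # placeholder file and tree
        step_size="50 MB",                 # smaller partitions, lower peak memory
    )

Raising allowed-failures only papers over a deterministic out-of-memory on that partition, so inspecting the worker logs, as the error message suggests, is still the right first step.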