Q: Can I log in to the node where my job runs? A: Log in to the c4p login node and run "condor_ssh_to_job <jobid>".
Q: Show me an example for Snakemake with memory limits. A:
rule prepare:
    container:
        "astro-ml:latest"
    resources:
        mem_mb=64000  # requested memory in MB (value is an example)
Q: Based on this example for a REANA serial workflow, make a workflow plotting a sin plot in green. Show only the python and reana.yaml files, no additional info.

```yaml
inputs:
  files:
    - analyze.py
    - Poster-HUGE-2025.ipynb
workflow:
  type: serial
  specification:
    steps:
      - name: getdata
        environment: 'gitlab-p4n.aip.de:5005/compute4punch/container-stacks/astro-ml:latest'
        commands:
          - curl -sO https://s3.data.aip.de:9000/pmviewer2023/particles.cache.npy
      - name: analysis
        environment: 'gitlab-p4n.aip.de:5005/compute4punch/container-stacks/astro-ml:latest'
        commands:
          - papermill Poster-HUGE-2025.ipynb Poster-HUGE-2025-output.ipynb
          - unlink particles.cache.npy
        kubernetes_memory_limit: '64Gi'
```

A:
```yaml
version: 0.2.0
inputs:
  files:
    - analyze.py
workflow:
  type: serial
  specification:
    steps:
      - name: analysis
        environment: 'gitlab-p4n.aip.de:5005/compute4punch/container-stacks/astro-ml:latest'
        commands:
          - python analyze.py
```
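The question also asks for the python file. A minimal analyze.py sketch (green sin curve; the output filename sin.png is an assumption, pick whatever your workflow expects) could be:

```python
# analyze.py -- plot sin(x) in green and save the figure
# (output filename "sin.png" is an assumption)
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for batch jobs
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
plt.plot(x, np.sin(x), color="green")
plt.xlabel("x")
plt.ylabel("sin(x)")
plt.savefig("sin.png")
```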
```
ii  ceph-common                17.2.6-pve1+3  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-fuse                  17.2.6-pve1+3  amd64  FUSE-based client for the Ceph distributed file system
ii  corosync                   3.1.7-pve3     amd64  cluster engine daemon and utilities
ii  libcephfs2                 17.2.6-pve1+3  amd64  Ceph distributed file system client library
ii  libcfg7:amd64              3.1.7-pve3     amd64  cluster engine CFG library
ii  libcmap4:amd64             3.1.7-pve3     amd64  cluster engine CMAP library
ii  libcorosync-common4:amd64  3.1.7-pve3     amd64  cluster engine common library
ii  libcpg4:amd64              3.1.7-pve3     amd64  cluster engine CP
```
```
git clone https://github.com/meshcat-dev/meshcat.git
cd meshcat
git fetch origin pull/154/head:pull_154
git switch pull_154
```
```python
# Import required libraries
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Load MNIST dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
```
```python
# Import required libraries
import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from sklearn.model_selection import train_test_split
import xgboost as xgb
```
```yaml
# Accessing the cluster from the outside world
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.entryPoints: "http,https"
    ingress.kubernetes.io/ssl-redirect: "false"
  tls:
    self_signed_cert: true
```
```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- extraPortMappings:
  - containerPort: 30443
    hostPort: 30443
    protocol: TCP
  # START
  - containerPort: 30080
    hostPort: 30080
```
```
""" Sample TensorFlow XML-to-TFRecord converter

usage: generate_tfrecord.py [-h] [-x XML_DIR] [-l LABELS_PATH] [-o OUTPUT_PATH] [-i IMAGE_DIR] [-c CSV_PATH]

optional arguments:
  -h, --help            show this help message and exit
  -x XML_DIR, --xml_dir XML_DIR
                        Path to the folder where the input .xml files are stored.
  -l LABELS_PATH, --labels_path LABELS_PATH
                        Path to the labels (.pbtxt) file.
```
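A converter like this reads Pascal VOC-style .xml annotations before writing TFRecords. A minimal sketch of just the parsing step (pure stdlib; the element names `object`, `name`, `bndbox` etc. are assumptions taken from the VOC annotation format, not from this script) could be:

```python
# Parse a Pascal VOC-style annotation into (filename, class, xmin, ymin, xmax, ymax) rows.
# Element names are assumptions from the VOC format.
import xml.etree.ElementTree as ET

def xml_to_rows(xml_text):
    root = ET.fromstring(xml_text)
    filename = root.findtext("filename")
    rows = []
    for obj in root.findall("object"):
        box = obj.find("bndbox")
        rows.append((
            filename,
            obj.findtext("name"),
            int(box.findtext("xmin")),
            int(box.findtext("ymin")),
            int(box.findtext("xmax")),
            int(box.findtext("ymax")),
        ))
    return rows

sample = """<annotation>
  <filename>img1.jpg</filename>
  <object><name>cat</name>
    <bndbox><xmin>1</xmin><ymin>2</ymin><xmax>30</xmax><ymax>40</ymax></bndbox>
  </object>
</annotation>"""
print(xml_to_rows(sample))
```

One row per annotated object; the real converter would then feed these rows into `tf.train.Example` records.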