Use the gh CLI to generate a comprehensive weekly development intelligence report that goes far beyond basic commit summaries.
Data Collection:
- All commits from the past week with full diff analysis
- Pull requests created, reviewed, and merged
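As a minimal sketch of the collection step (assuming an authenticated `gh` CLI inside a git checkout; the seven-day window and JSON fields are illustrative choices), the snippet below gathers last week's commits with full diffs plus the PRs created and merged in that window:

```python
# A minimal sketch, not the report generator itself: pull raw inputs for the
# weekly report via git and the gh CLI. Field lists and the date window are
# illustrative assumptions.
import json
import subprocess
from datetime import date, timedelta

since = (date.today() - timedelta(days=7)).isoformat()

# All commits from the past week, with full diffs for later analysis.
commits = subprocess.run(
    ["git", "log", f"--since={since}", "--patch"],
    capture_output=True, text=True, check=True,
).stdout

# PRs created in the window (any state), as structured JSON.
created = json.loads(subprocess.run(
    ["gh", "pr", "list", "--state", "all",
     "--search", f"created:>={since}",
     "--json", "number,title,author,state"],
    capture_output=True, text=True, check=True,
).stdout)

# PRs merged in the window.
merged = json.loads(subprocess.run(
    ["gh", "pr", "list", "--state", "merged",
     "--search", f"merged:>={since}",
     "--json", "number,title,mergedAt"],
    capture_output=True, text=True, check=True,
).stdout)

print(f"{len(created)} PRs opened and {len(merged)} merged since {since}")
```

Asking `gh` for `--json` output keeps the PR data machine-readable for whatever analysis and summarization the report layers on top.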
An LLM fine-tuning course and online conference for everything LLMs.
Build skills to be effective with LLMs
Course website: https://maven.com/parlance-labs/fine-tuning
```python
import requests
import time
import os
import sys
import openai
import tiktoken
from termcolor import colored

# Read the OpenAI API key from a dotfile in the home directory.
openai.api_key = open(os.path.expanduser('~/.openai')).read().strip()
```
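A hedged usage sketch for this setup, assuming the legacy (pre-1.0) `openai` SDK that the `openai.api_key = ...` line implies; the model name and prompt are illustrative:

```python
import os
import openai
import tiktoken
from termcolor import colored

# Same key setup as above.
openai.api_key = open(os.path.expanduser('~/.openai')).read().strip()

prompt = "Say hello in one short sentence."

# Count prompt tokens with tiktoken before sending the request.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
print(colored(f"prompt tokens: {len(enc.encode(prompt))}", "yellow"))

# Legacy ChatCompletion call (pre-1.0 SDK).
resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(colored(resp["choices"][0]["message"]["content"], "green"))
```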
The CUDA 12.1.1 toolkit is going to offer to install Nvidia driver 530 for us. That driver comes from the New Feature branch, so it's likely to be newer than the default Nvidia driver you would've installed via apt-get (apt would prefer to give you 525, i.e. the Production Branch).
If you're confident that you already have a new enough Nvidia driver for CUDA 12.1.1, and you'd like to keep your driver: feel free to skip this "uninstall driver" step.
But if you're not sure, or you know your driver is too old: let's uninstall it. CUDA will install a new driver for us later.
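If you're unsure which case you're in, a small check like the sketch below (assuming `nvidia-smi` is on your PATH) reads the installed driver version and compares it against the 530 branch that CUDA 12.1.1 bundles; 530 here is the reference point from this note, not NVIDIA's official minimum:

```python
# A minimal sketch: report whether the installed driver is at least as new as
# the 530 branch that the CUDA 12.1.1 installer would give you.
import subprocess

try:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        text=True,
    )
    version = out.strip().splitlines()[0]
    major = int(version.split(".")[0])
    if major >= 530:
        print(f"Driver {version} is at least the 530 branch; keeping it is an option.")
    else:
        print(f"Driver {version} is older than 530; uninstall it and let CUDA install one.")
except (FileNotFoundError, subprocess.CalledProcessError):
    print("No working nvidia-smi found; proceed with a fresh driver install.")
```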
Syncing an Ethereum node relies largely on the latency and IOPS (I/O operations per second) of the storage. Budget SSDs will struggle to an extent, and some won't be able to sync at all. IOPS can roughly be used as a proxy for, or predictor of, latency; measuring latency directly is arguably better.
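To make the proxy relationship concrete: at queue depth 1 there is only one outstanding I/O, so IOPS is simply the reciprocal of per-operation latency. A quick sketch with illustrative numbers:

```python
# Back-of-the-envelope: queue-depth-1 IOPS implies an average per-I/O latency.
def qd1_latency_ms(iops: float) -> float:
    """Average latency in milliseconds implied by a QD1 IOPS figure."""
    return 1000.0 / iops

# Illustrative figures only, not measurements of specific drives.
for iops in (20_000, 5_000, 1_000):
    print(f"{iops:>6} IOPS at QD1  ->  ~{qd1_latency_ms(iops):.2f} ms per I/O")
```

Real sync workloads rarely sit at QD1, so a directly measured latency figure, as noted above, is the better signal.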
This document aims to snapshot some known good and known bad models.
The drive lists are ordered by interface, then by capacity, and alphabetically by vendor name, not by preference. The lists are by no means exhaustive. @mwpastore linked a filterable spreadsheet in the comments that has a far greater variety of drives and their characteristics. Filter it by DRAM: yes, NAND Type: TLC, Form Factor: M.2, and the desired capacity.
For size, 4TB is a conservative choice that also supports a Fusaka "supernode". The smaller 2TB drive should last an Ethereum full node until at least sometime in 2026, with [pre-merge history expiry](http
```r
# If you are using Spark 1.4, then launch SparkR with the command
#
#   ./bin/sparkR --packages com.databricks:spark-csv_2.10:1.0.3
#
# as the `sparkPackages=` flag was only added in Spark 1.4.1.
#
# This will work in Spark 1.4.1.
sc <- sparkR.init(spark_link, sparkPackages = "com.databricks:spark-csv_2.10:1.0.3")
sqlContext <- sparkRSQL.init(sc)
flights <- read.df(sqlContext, "s3n://sparkr-data/nycflights13.csv", "com.databricks.spark.csv", header = "true")
```
Whether you're trying to give back to the open source community or collaborating on your own projects, knowing how to properly fork and generate pull requests is essential. Unfortunately, it's quite easy to make mistakes or not know what you should do when you're initially learning the process. I know that I certainly had considerable initial trouble with it, and I found a lot of the information on GitHub and around the internet to be rather piecemeal and incomplete - part of the process described here, another there, common hangups in a different place, and so on.
In an attempt to collate this information for myself and others, this short tutorial covers what I've found to be fairly standard procedure for creating a fork, doing your work, issuing a pull request, and merging that pull request back into the original project.
Just head over to the GitHub page and click the "Fork" button. It's just that simple. Once you've done that, you can use your favorite git client to clone your repo or j