Update: As of 11 January 2022, git.io no longer accepts new URLs.
Command:
curl https://git.io/ -i -F "url=https://github.com/YOUR_GITHUB_URL" -F "code=YOUR_CUSTOM_NAME"
URLs can only be created for the following domains:
https://github.com/*
https://*.github.com
#!/bin/sh
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
vncconfig -iconic &
export DESKTOP_SESSION=/usr/share/xsessions/ubuntu.desktop
export XDG_CURRENT_DESKTOP=ubuntu:GNOME
export GNOME_SHELL_SESSION_MODE=ubuntu
export XDG_DATA_DIRS=/usr/share/ubuntu:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop
dbus-launch --exit-with-session /usr/bin/gnome-session --systemd --session=ubuntu
import numpy as np
import gym
from gym.envs.mujoco import mujoco_env
from mujoco_py.generated import const
from scipy.spatial.transform import Rotation
""" Marker types in const
GEOM_PLANE = 0
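The snippet's docstring starts enumerating the marker (geom) type constants exposed via `mujoco_py.generated.const`. As a reference, here is a sketch of those values as they appear in MuJoCo's `mjtGeom` enum; the values below are assumptions based on the enum's declaration order and should be verified against your installed mujoco_py version:

```python
# Sketch of MuJoCo's mjtGeom marker-type constants (assumed values; verify
# against mujoco_py.generated.const in your installation).
GEOM_TYPES = {
    "GEOM_PLANE": 0,
    "GEOM_HFIELD": 1,
    "GEOM_SPHERE": 2,
    "GEOM_CAPSULE": 3,
    "GEOM_ELLIPSOID": 4,
    "GEOM_CYLINDER": 5,
    "GEOM_BOX": 6,
    "GEOM_MESH": 7,
}
```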
#![feature(arbitrary_self_types)]
use pyo3::prelude::*;
use pyo3::pyclass::PyClassShell;
use pyo3::types::{PyBytes, PyTuple};
use pyo3::ToPyObject;
use bincode::{deserialize, serialize};
use serde::{Deserialize, Serialize};
#[derive(Serialize, Deserialize)]
import jax
import jax.numpy as np
from jax import grad, jit
from jax.scipy.special import logsumexp
def dadashi_fig2d():
    """ Figure 2 d) of
    ''The Value Function Polytope in Reinforcement Learning''
    by Dadashi et al. (2019) https://arxiv.org/abs/1901.11524
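The value functions that make up the polytope in Dadashi et al.'s paper come from closed-form policy evaluation, V^pi = (I - gamma * P_pi)^(-1) r_pi. A minimal sketch of that computation, using a made-up 2-state, 2-action MDP (not the exact transition matrix from the paper's Figure 2(d)):

```python
import numpy as np

# Closed-form policy evaluation: V^pi = (I - gamma * P_pi)^{-1} r_pi.
# The MDP below is a hypothetical 2-state, 2-action example, NOT the
# transition matrix from Dadashi et al.'s Figure 2(d).
gamma = 0.9
P = np.array([[[0.7, 0.3], [0.2, 0.8]],     # P[s, a, s']: transition probs
              [[0.9, 0.1], [0.4, 0.6]]])
r = np.array([[-0.45, -0.1],                 # r[s, a]: immediate rewards
              [0.5, 0.3]])

def value_of_policy(pi, P, r, gamma):
    """pi[s, a]: probability of taking action a in state s."""
    P_pi = np.einsum("sa,sap->sp", pi, P)    # state-to-state transitions under pi
    r_pi = np.einsum("sa,sa->s", pi, r)      # expected one-step reward under pi
    return np.linalg.solve(np.eye(len(r_pi)) - gamma * P_pi, r_pi)

pi = np.array([[0.5, 0.5], [0.5, 0.5]])      # uniform random policy
V = value_of_policy(pi, P, r, gamma)
```

Sweeping `pi` over all deterministic and mixed policies and scatter-plotting the resulting (V[0], V[1]) pairs traces out the polytope the paper studies.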
[global_config]
window_state = maximise
handle_size = 0
title_hide_sizetext = True
title_transmit_fg_color = "#bd93f9"
title_inactive_fg_color = "#f8f8f2"
title_receive_bg_color = "#282a36"
title_transmit_bg_color = "#282a36"
title_receive_fg_color = "#8be9fd"
https://github.com/aancel/admin/wiki/VirtualGL-on-Ubuntu
https://virtualgl.org/About/Introduction
When you use ssh with X forwarding, you might have noticed that you cannot execute programs that require 3D acceleration. That's where VirtualGL comes into play.
| """ | |
| A bare bones examples of optimizing a black-box function (f) using | |
| Natural Evolution Strategies (NES), where the parameter distribution is a | |
| gaussian of fixed standard deviation. | |
| """ | |
| import numpy as np | |
| np.random.seed(0) | |
| # the function we want to optimize |
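The snippet cuts off before defining the function and the update loop. A minimal self-contained sketch of the NES scheme the docstring describes, using a simple quadratic as the black-box f (the target vector and hyperparameter values here are illustrative choices, not taken from the original snippet):

```python
import numpy as np
np.random.seed(0)

# Black-box function to maximize: a quadratic with its optimum at `solution`
# (an arbitrary illustrative target).
solution = np.array([0.5, 0.1, -0.3])
def f(w):
    return -np.sum((w - solution) ** 2)

npop = 50        # population size
sigma = 0.1      # fixed standard deviation of the parameter distribution
alpha = 0.001    # learning rate

w = np.random.randn(3)                            # initial parameter guess
for _ in range(300):
    N = np.random.randn(npop, 3)                  # Gaussian perturbations
    R = np.array([f(w + sigma * n) for n in N])   # reward for each sample
    A = (R - R.mean()) / (R.std() + 1e-8)         # standardize rewards
    # NES gradient estimate: weight each perturbation by its standardized
    # reward, then take a gradient ascent step on the distribution mean.
    w = w + alpha / (npop * sigma) * N.T @ A
```

Because only function evaluations are needed (no gradients of f), the same loop works for any black-box objective.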
# License:
# I hereby state this snippet is below "threshold of originality" where applicable (public domain).
#
# Otherwise, since initially posted on Stackoverflow, use as:
# CC-BY-SA 3.0 skyking, Glenn Maynard, Axel Huebl
# http://stackoverflow.com/a/31047259/2719194
# http://stackoverflow.com/a/4858123/2719194
import types
| """ Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """ | |
| import numpy as np | |
| import cPickle as pickle | |
| import gym | |
| # hyperparameters | |
| H = 200 # number of hidden layer neurons | |
| batch_size = 10 # every how many episodes to do a param update? | |
| learning_rate = 1e-4 | |
| gamma = 0.99 # discount factor for reward |
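The `gamma` hyperparameter feeds the discounted-return computation that turns raw per-step rewards into the advantage signal for the policy gradient. A sketch of that discounting step as done in Karpathy's gist, including the Pong-specific detail that the running sum resets whenever a point is scored (any nonzero reward):

```python
import numpy as np

gamma = 0.99  # discount factor for reward

def discount_rewards(r, gamma=0.99):
    """Compute discounted returns backwards over a reward sequence.
    The running sum is reset at any nonzero reward, which in Pong
    marks the end of a point (game boundary)."""
    discounted = np.zeros_like(r, dtype=float)
    running = 0.0
    for t in reversed(range(len(r))):
        if r[t] != 0:
            running = 0.0  # Pong-specific: a point ended, reset the sum
        running = running * gamma + r[t]
        discounted[t] = running
    return discounted

returns = discount_rewards(np.array([0.0, 0.0, 1.0]))
# returns: [0.9801, 0.99, 1.0] -- the final reward propagated backwards
```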