Install visdom on both your local system and the remote server:
pip3 install visdom
On the remote server, start the visdom server (it serves on port 8097 by default):
python -m visdom.server
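Once the server is running, a visdom client can connect to it from the training code. Below is a minimal sketch assuming the default port 8097; the server address and the logged message are placeholder values, not part of the original instructions.

import visdom

# Connect to the remote visdom server (replace the address with your server's hostname).
viz = visdom.Visdom(server="http://remote-server-address", port=8097)
viz.text("visdom connection test")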
"""
Adapted from modula's Hello GPT tutorial:
https://github.com/modula-systems/modula/blob/ede2ba72a1b9de3e1f44156db058b5c32c682941/examples/hello-gpt.ipynb

This script simply exposes the dataloader seed as a command line argument to
test training sensitivity to the seed.

Usage:
    python hello-gpt.py --seed 0
"""

Sample output:
Input shape: (12, 64)
Target shape: (12, 64)
First input sequence: [41 53 50 42 1 40 50 53 53 42] ...
First target sequence: [53 50 42 1 40 50 53 53 42 1] ...
Decoded input: cold blood no spark of honour bides.
NORTHUMBERLAND:
Be thou a
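The docstring above only says that the dataloader seed is exposed as a command line flag. A minimal sketch of how that flag might be parsed is given below; the flag name comes from the usage string, while the argparse wiring and make_dataloader are assumptions (make_dataloader is a hypothetical placeholder for wherever the data pipeline is actually built).

import argparse

parser = argparse.ArgumentParser(description="Hello GPT with a configurable dataloader seed")
parser.add_argument("--seed", type=int, default=0, help="seed for the dataloader RNG")
args = parser.parse_args()

# Pass the parsed seed to the data pipeline.
# dataloader = make_dataloader(seed=args.seed)  # make_dataloader is hypothetical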
# Copyright 2021 DeepMind Technologies Limited. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
%% This part goes in the preamble
\newcommand{\dummyfig}[1]{
  \centering
  \fbox{
    \begin{minipage}[c][0.33\textheight][c]{0.5\textwidth}
      \centering{#1}
    \end{minipage}
  }
}
Following are higher-quality versions of the TLP and TinyTLP datasets. The resolution is unchanged, but the images are much sharper.
https://drive.google.com/open?id=1mv0ULctCzGn4gzum_Sb6kwXwB2QjZwu1
# Simple neural-network implementation of Q-learning.
import gym
from gym import wrappers
import numpy as np
import tensorflow as tf

# Build the environment and wrap it with a monitor that records results to /tmp.
env = gym.make("FrozenLake-v0")
env = wrappers.Monitor(env, '/tmp/frozenlake-qlearning', force=True)
n_obv = env.observation_space.n  # number of discrete states
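The snippet stops right after reading the size of the observation space. A minimal sketch of how a single-layer Q-network could continue in the same TF 1.x style is shown below; the layer shape, learning rate, and variable names are assumptions rather than part of the original script, and it reuses env, n_obv, and tf from the block above.

n_actions = env.action_space.n

# One-hot encoded state in, one Q-value per action out (single linear layer).
state_in = tf.placeholder(shape=[1, n_obv], dtype=tf.float32)
weights = tf.Variable(tf.random_uniform([n_obv, n_actions], 0, 0.01))
q_values = tf.matmul(state_in, weights)
greedy_action = tf.argmax(q_values, axis=1)

# Squared error against a target Q vector, minimized with plain gradient descent.
q_target = tf.placeholder(shape=[1, n_actions], dtype=tf.float32)
loss = tf.reduce_sum(tf.square(q_target - q_values))
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)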
# Monte Carlo policy gradient algorithm.
# A neural network parameterizes the policy; observations and rewards are used
# to update the network parameters so as to optimize the policy.
import numpy as np
import tensorflow as tf
import gym
from gym import wrappers

# initialize constants
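The fragment ends where the constants would be initialized. A minimal sketch of plausible constants and a softmax policy network in the same TF 1.x style follows; the hyperparameter values, the choice of CartPole-v0, and all variable names are assumptions, and the sketch reuses the imports from the block above.

# Hyperparameters (example values only).
LEARNING_RATE = 0.01
GAMMA = 0.99        # discount factor for computing returns
HIDDEN_UNITS = 16

env = gym.make("CartPole-v0")
n_obs = env.observation_space.shape[0]
n_actions = env.action_space.n

# Policy network: observation -> hidden layer -> action probabilities.
obs_in = tf.placeholder(shape=[None, n_obs], dtype=tf.float32)
hidden = tf.layers.dense(obs_in, HIDDEN_UNITS, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, n_actions, activation=None)
action_probs = tf.nn.softmax(logits)

# REINFORCE objective: negative log-probability of the taken actions, weighted by returns.
actions_in = tf.placeholder(shape=[None], dtype=tf.int32)
returns_in = tf.placeholder(shape=[None], dtype=tf.float32)
neg_log_prob = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=actions_in, logits=logits)
loss = tf.reduce_mean(neg_log_prob * returns_in)
train_op = tf.train.AdamOptimizer(LEARNING_RATE).minimize(loss)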