@loristns
Created February 19, 2019 12:08
Demo of OpenAI's GPT-2 algorithm
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "GPT2.ipynb",
"version": "0.3.2",
"provenance": [],
"collapsed_sections": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
}
},
"cells": [
{
"metadata": {
"id": "Lv0Jz379pfPF",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"# GPT-2\n",
"\n",
"[Twitter thread](https://twitter.com/the_new_sky/status/1097817274310946816)"
]
},
{
"metadata": {
"id": "C1ToKLVZM3F5",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"!pip install pytorch-pretrained-bert"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "xW7eEyBzNaYa",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"import torch\n",
"from torch.nn import functional as F\n",
"from pytorch_pretrained_bert import GPT2Tokenizer, GPT2LMHeadModel\n",
"\n",
"tokenizer = GPT2Tokenizer.from_pretrained('gpt2')\n",
"model = GPT2LMHeadModel.from_pretrained('gpt2')\n",
"\n",
"while True:\n",
" text = tokenizer.encode(input('Input : '))\n",
" x, past = torch.tensor([text]), None\n",
" \n",
" for _ in range(200):\n",
" logits, past = model(x, past=past)\n",
" x = torch.multinomial(F.softmax(logits[:, -1]), 1)\n",
" text.append(x.item())\n",
" \n",
" print()\n",
" print(\"Output :\", tokenizer.decode(text))\n",
" print()"
],
"execution_count": 0,
"outputs": []
}
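,
{
"metadata": {
"id": "topk_note",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"The cell above samples each next token from the full softmax distribution. A common variation is to scale the logits by a temperature and sample only from the top-k most likely tokens, which tends to cut down on incoherent continuations. The sketch below is illustrative, not part of the original demo: it reuses the `model(x, past=past)` call and the `model`/`tokenizer` objects loaded in the previous cell, and the `sample` helper, `temperature`, `top_k`, and prompt values are assumptions chosen for the example."
]
},
{
"metadata": {
"id": "topk_code",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"def sample(prompt, length=200, temperature=0.7, top_k=40):\n",
"    # Illustrative top-k / temperature sampler built on the model and\n",
"    # tokenizer loaded in the previous cell.\n",
"    text = tokenizer.encode(prompt)\n",
"    x, past = torch.tensor([text]), None\n",
"    with torch.no_grad():\n",
"        for _ in range(length):\n",
"            logits, past = model(x, past=past)\n",
"            logits = logits[:, -1] / temperature         # scale the next-token logits\n",
"            values, indices = torch.topk(logits, top_k)  # keep only the k best tokens\n",
"            probs = F.softmax(values, dim=-1)\n",
"            pick = torch.multinomial(probs, 1)           # sample within the top-k\n",
"            x = indices.gather(-1, pick)                 # map back to vocabulary ids\n",
"            text.append(x.item())\n",
"    return tokenizer.decode(text)\n",
"\n",
"print(sample('Once upon a time'))"
],
"execution_count": 0,
"outputs": []
}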
]
}