Last active: February 25, 2026 16:22
Ollama model card template for the Obsidian Web Clipper
{
  "schemaVersion": "0.1.0",
  "name": "Ollama Model Card",
  "behavior": "create",
  "noteContentFormat": "# {{title}}\n\n**Model Summary**\n\n{{selectorHtml:#summary-content|markdown|callout}}\n\n**{{selectorHtml:div.use-panel?data-panel|first|markdown}} usage**\n\n```sh\n{{selectorHtml:div.use-panel > pre|first|markdown}}\n```\n\n**Models**\n\n{{selector:div.hidden.group a.font-medium, div.hidden.group p.col-span-2.text-neutral-500|map:v => v.trim()|table:(\"Name\",\"Size\",\"Context\",\"Input\")}}\n\n{{selectorHtml:section.flex-col > div > a|markdown}}\n\n## Description\n\n{{selectorHtml:div#readme > div > div#display|markdown}}\n",
  "properties": [
    {
      "name": "title",
      "value": "{{title}}",
      "type": "text"
    },
    {
      "name": "clipped_date",
      "value": "{{time}}",
      "type": "date"
    },
    {
      "name": "source",
      "value": "{{url}}",
      "type": "text"
    },
    {
      "name": "description",
      "value": "{{description}}",
      "type": "text"
    },
    {
      "name": "model_tags",
      "value": "{{selector:.inline-flex.items-center.rounded-md}}",
      "type": "multitext"
    }
  ],
  "triggers": [
    "https://ollama.com/library"
  ],
  "noteNameFormat": "{{title}}",
  "path": "Imports/ollama"
}
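The "Models" table in `noteContentFormat` chains Web Clipper filters: a `selector:` grabs a flat list of cell texts, `map:v => v.trim()` cleans them, and `table:("Name","Size","Context","Input")` lays them out row by row under those headers. A minimal Python sketch of what that filter chain does (the scraped values below are hypothetical sample data, not the clipper's actual API or output):

```python
# Sketch of the filter chain: selector -> map:v => v.trim() -> table:(...)
# The `scraped` list stands in for hypothetical cell text pulled by the
# `selector:` step; it is illustrative sample data only.

def map_trim(values):
    # Equivalent of the clipper's `map:v => v.trim()` filter.
    return [v.strip() for v in values]

def to_markdown_table(values, headers):
    # Equivalent of `table:("Name","Size","Context","Input")`: lay the
    # flat value list out row by row under the given headers.
    cols = len(headers)
    lines = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join("---" for _ in headers) + " |",
    ]
    for i in range(0, len(values), cols):
        row = values[i:i + cols]
        row += [""] * (cols - len(row))  # pad a short final row
        lines.append("| " + " | ".join(row) + " |")
    return "\n".join(lines)

scraped = ["  glm-ocr:latest ", "2.1GB", "8K", "image/text"]
table = to_markdown_table(map_trim(scraped),
                          ("Name", "Size", "Context", "Input"))
print(table)
```

Because the selector returns one flat list, the column count in `table:` must match the number of values scraped per model row, or the cells shift out of alignment.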
Example output (note frontmatter):

title: "glm-ocr"
clipped_date: 2026-02-25
source: "https://ollama.com/library/glm-ocr"
description: "GLM-OCR is a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture."
cover_img: "https://ollama.com/public/og.png"
skills: ["vision", "tools"]
# glm-ocr

**Model Summary**

**cli usage**

**Models**

View all →

## Description
GLM-OCR is a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture. The model integrates the CogViT visual encoder pre-trained on large-scale image–text data, a lightweight cross-modal connector with efficient token downsampling, and a GLM-0.5B language decoder.
**Usage**

**Text recognition**

**Table recognition**

**Figure recognition**

**Key features**