curl "https://api.lib.harvard.edu/v2/items.[xml | dc | json]?recordIdentifier=[Hollis ID]"
Example, Dublin Core output: curl "https://api.lib.harvard.edu/v2/items.dc?recordIdentifier=990058255550203941"
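For scripted use, the same call works from Python. A minimal sketch, assuming the requests library (the library choice and variable names are mine, not from the API docs):

# Minimal sketch: fetch a LibraryCloud item record as JSON.
# `requests` and the variable names are assumptions, not from the docs.
import requests

hollis_id = "990058255550203941"
resp = requests.get(
    "https://api.lib.harvard.edu/v2/items.json",
    params={"recordIdentifier": hollis_id},
)
resp.raise_for_status()
record = resp.json()  # parsed item record, same data as the curl call above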
#!/usr/bin/env python3
import os
import csv
from ipwhois import IPWhois

# clean-up: remove any enhanced report left over from a previous run
try:
    os.remove('.\\output\\Z39.50 Usage - IPs (last 3 months) -- enhanced.csv')
except FileNotFoundError:
    pass
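The csv and IPWhois imports imply the rest of the script enriches each IP in the raw usage report with whois data. A hedged sketch of that loop (the input file name, column layout, and RDAP field are my assumptions):

# Hypothetical continuation: look up each IP's network owner via RDAP
# and write an enhanced CSV. File names and columns are assumptions.
with open('.\\input\\Z39.50 Usage - IPs (last 3 months).csv', newline='') as infile, \
     open('.\\output\\Z39.50 Usage - IPs (last 3 months) -- enhanced.csv', 'w', newline='') as outfile:
    reader = csv.reader(infile)
    writer = csv.writer(outfile)
    for ip, count in reader:
        try:
            org = IPWhois(ip).lookup_rdap().get('asn_description', '')
        except Exception:
            org = 'lookup failed'
        writer.writerow([ip, count, org])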
curl "https://api.lib.harvard.edu/v2/items.[xml | dc | json]?recordIdentifier=[Hollis ID]"
Example, Dublin Core output: curl "https://api.lib.harvard.edu/v2/items.dc?recordIdentifier=990058255550203941"
influential
renowned
not(able|ed)
distinguished
reputable
prestigious
prominent
significant
respected
expert
| """ Adaptation of Kelly Bolding's Terms of Aggrandizement xquery script to report on aggrandizing | |
| language in archival finding aid "Biography or History" (bioghist) notes. | |
| The original script uses xquery on a directory of EAD XML files, and produces an XML report. | |
| This version uses Python to query the ArchivesSpace REST API, and produces a JSON report. | |
| """ | |
| import re | |
| import json | |
| from asnake.aspace import ASpace |
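A hedged sketch of how the rest of the adaptation might look, assuming ASnake's abstraction layer and the term list above (the repository walk and report shape are my guesses, not Bolding's original logic):

# Hypothetical sketch: flag bioghist notes containing aggrandizing terms.
# The regex mirrors the term list above; everything else is assumed.
TERMS = re.compile(
    r'\b(influential|renowned|not(?:able|ed)|distinguished|reputable|'
    r'prestigious|prominent|significant|respected|expert)\b',
    re.IGNORECASE,
)

aspace = ASpace()  # reads backend URL and credentials from .archivessnake.yml
report = []
for repo in aspace.repositories:
    for resource in repo.resources:
        for note in resource.json().get('notes', []):
            if note.get('type') != 'bioghist':
                continue
            hits = sorted({m.group(0).lower() for m in TERMS.finditer(json.dumps(note))})
            if hits:
                report.append({'uri': resource.uri, 'terms': hits})

print(json.dumps(report, indent=2))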
Each citation includes an abstract or annotation. Feel free to suggest an addition!
Davis, Robin Camille. 2015. “Git and GitHub for Librarians.” Publications and Research, January. https://academicworks.cuny.edu/jj_pubs/34.
| <data> | |
| { | |
| for $Record in /ead | |
| where $Record/archdesc/scopecontent/p[contains(., 'Mrs')] | |
| let $id := $Record/archdesc/did/unitid[1]/text() | |
| let $title := $Record/archdesc/did/unittitle | |
| let $repo := $Record/eadheader/eadid/@mainagencycode | |
| let $scopeMrs := $Record/archdesc/scopecontent/p[contains(., 'Mrs')] |
I'm a long-time fan of the graph visualization tool Gephi, and since Wikimania 2019 I have been involved with Wikidata. I was aware of the Gephi plugin "Semantic Web Importer", but when I checked it out, I only found old tutorials connecting to DBpedia, not Wikidata.
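In the absence of a current tutorial, one workaround (my own, not from the plugin docs) is to query the Wikidata SPARQL endpoint directly and hand Gephi a CSV edge list. A sketch with SPARQLWrapper, where the query (P737, "influenced by") and file name are just illustrations:

# Hypothetical workaround: pull a small Wikidata graph and write a
# Gephi-importable edge list. Query and file name are illustrative.
import csv
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
sparql.setQuery("""
SELECT ?itemLabel ?influencerLabel WHERE {
  ?item wdt:P737 ?influencer .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
} LIMIT 100
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

with open("wikidata_edges.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Source", "Target"])  # header names Gephi expects for an edge list
    for row in results["results"]["bindings"]:
        writer.writerow([row["itemLabel"]["value"], row["influencerLabel"]["value"]])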
from itertools import chain, starmap
def flatten_json_iterative_solution(dictionary):
    """Flatten a nested json file"""
    def unpack(parent_key, parent_value):
        """Unpack one level of nesting in json file"""
        # Unpack one level only!!!
        if isinstance(parent_value, dict):
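The snippet is cut off mid-function. A completion of the iterative approach it names, where the key-joining scheme and loop condition are my assumptions about the missing part:

# Assumed completion of the truncated function above: unpack dicts and
# lists one level per pass until every value is atomic.
from itertools import chain, starmap

def flatten_json_iterative_solution(dictionary):
    """Flatten a nested json file"""
    def unpack(parent_key, parent_value):
        """Unpack one level of nesting in json file"""
        if isinstance(parent_value, dict):
            for key, value in parent_value.items():
                yield parent_key + '_' + key, value
        elif isinstance(parent_value, list):
            for i, value in enumerate(parent_value):
                yield parent_key + '_' + str(i), value
        else:
            yield parent_key, parent_value
    # Keep unpacking one level per iteration until nothing nested remains
    while any(isinstance(v, (dict, list)) for v in dictionary.values()):
        dictionary = dict(chain.from_iterable(starmap(unpack, dictionary.items())))
    return dictionary

# Quick check:
print(flatten_json_iterative_solution({"a": {"b": 1, "c": [2, 3]}}))
# {'a_b': 1, 'a_c_0': 2, 'a_c_1': 3}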
// XPath CheatSheet
// To test XPath in your Chrome Debugger: $x('/html/body')
// http://www.jittuu.com/2012/2/14/Testing-XPath-In-Chrome/
// 0. XPath Examples.
// More: http://xpath.alephzarro.com/content/cheatsheet.html
'//hr[@class="edge" and position()=1]' // every first hr of 'edge' class