Artificial Intelligence

Mathematics

  • Mathematics for Machine Learning

Calculus

  • Derivatives & gradients
  • Partial derivatives
  • Chain rule
  • Gradient descent
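These ideas are easiest to internalize by computing them. Below is a minimal NumPy sketch (the function f(x, y) = x² + y² and the learning rate are illustrative choices, not taken from any particular course) that checks an analytic gradient against a finite-difference estimate and then runs a few gradient-descent steps.

```python
import numpy as np

# Illustrative function f(x, y) = x^2 + y^2 and its analytic gradient.
def f(v):
    x, y = v
    return x**2 + y**2

def grad_f(v):
    x, y = v
    return np.array([2 * x, 2 * y])

# Check the analytic gradient against a central finite-difference estimate.
def numerical_grad(func, v, eps=1e-6):
    g = np.zeros_like(v, dtype=float)
    for i in range(len(v)):
        step = np.zeros_like(v, dtype=float)
        step[i] = eps
        g[i] = (func(v + step) - func(v - step)) / (2 * eps)
    return g

v = np.array([3.0, -2.0])
print(grad_f(v), numerical_grad(f, v))  # both approximately [6, -4]

# A few gradient-descent steps: move against the gradient to shrink f.
lr = 0.1
for _ in range(50):
    v = v - lr * grad_f(v)
print(v, f(v))  # v approaches [0, 0], f(v) approaches 0
```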

Linear Algebra

  • Vectors, matrices, dot product
  • Matrix multiplication
  • Transpose, inverse, determinant
  • Eigenvalues & eigenvectors
  • Singular Value Decomposition (SVD)
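A short NumPy sketch of these operations, using a small symmetric matrix chosen only for illustration:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Matrix-vector product, transpose, determinant, inverse.
print(A @ b)
print(A.T)
print(np.linalg.det(A))   # nonzero, so A is invertible
print(np.linalg.inv(A))

# Eigen-decomposition: A v = lambda v for each eigenpair.
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)

# Singular Value Decomposition: A = U diag(S) V^T.
U, S, Vt = np.linalg.svd(A)
print(np.allclose(A, U @ np.diag(S) @ Vt))  # True: reconstruction matches A
```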

Probability & Statistics

  • Random variables
  • Mean, variance, expectation
  • Distributions (Normal, Bernoulli, etc.)
  • Bayes theorem
  • Likelihood, entropy
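A worked Bayes-theorem example helps here. The prevalence, sensitivity, and false-positive rate below are made-up numbers chosen only to show the mechanics, followed by a quick empirical check of mean and variance for a Normal distribution.

```python
# Bayes theorem on a made-up diagnostic test (all numbers are illustrative):
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_disease = 0.01            # prior: 1% prevalence
p_pos_given_disease = 0.95  # sensitivity
p_pos_given_healthy = 0.10  # false-positive rate

p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ~0.088: a positive test is far from certain

# Mean, variance, expectation on samples from a Normal distribution.
import numpy as np
rng = np.random.default_rng(0)
samples = rng.normal(loc=5.0, scale=2.0, size=100_000)
print(samples.mean(), samples.var())  # close to 5 and 4
```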

Theory of Computation

  • Computability Theory [1 ed.] - S. Barry Cooper

Linear Optimization

  • Gradient descent
  • Cost/loss functions
  • Convex vs non-convex optimization (a minimal sketch follows the reference below)

  • Operations Research [4 ed.] - Wayne L. Winston
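As a bridge between the calculus above and classical ML, here is a minimal sketch of gradient descent on a convex cost: mean squared error for a one-dimensional linear fit. The synthetic data and learning rate are illustrative assumptions.

```python
import numpy as np

# Gradient descent on a convex cost: MSE for fitting y = w*x + b.
# Synthetic data (illustrative): y = 3x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3 * x + 1 + 0.1 * rng.normal(size=200)

w, b = 0.0, 0.0   # parameters to learn
lr = 0.3
for _ in range(200):
    error = w * x + b - y
    # Gradients of MSE = mean(error^2) with respect to w and b.
    dw = 2 * np.mean(error * x)
    db = 2 * np.mean(error)
    w -= lr * dw
    b -= lr * db

print(w, b)  # close to the true values 3 and 1
```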

Classical ML

Regression, classification, decision trees, clustering, and feature engineering; a minimal scikit-learn sketch follows the tooling note below.

  • Supervised learning (linear/logistic regression, SVM)
  • Decision trees, random forests
  • Clustering (k-means)
  • Overfitting, regularization
  • Cross-validation, evaluation metrics
  • Feature scaling, encoding

  • Python (you can keep using Go for the backend; ML is mostly in Python)
  • NumPy, pandas, scikit-learn
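A minimal scikit-learn sketch of the workflow above: feature scaling and logistic regression wrapped in a pipeline and scored with 5-fold cross-validation. The built-in dataset and hyperparameters are illustrative choices, not a recommendation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# The pipeline keeps feature scaling inside each CV fold to avoid data leakage.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(scores.mean())  # typically around 0.97-0.98 on this dataset
```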

Deep Learning

Neural networks, CNNs, RNNs, and Transformers; a minimal PyTorch sketch follows the tooling note below.

  • Feedforward neural networks
  • Backpropagation & gradient descent
  • CNNs (images)
  • RNNs / LSTMs (sequences)
  • Transformers (foundation of modern AI)
  • Attention mechanisms

  • PyTorch (preferred for flexibility)
  • TensorFlow/Keras (for quicker prototyping)
  • Google Colab for free GPU compute
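A minimal PyTorch sketch of a feedforward network with one backpropagation and gradient-descent step; the layer sizes, the batch of random data, and the learning rate are all illustrative assumptions.

```python
import torch
from torch import nn

# A tiny feedforward network and one training step on random data.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 20)                # batch of 32 samples, 20 features
targets = torch.randint(0, 2, (32,))   # binary class labels

logits = model(x)
loss = loss_fn(logits, targets)
loss.backward()        # backpropagation: fills .grad on every parameter
optimizer.step()       # gradient-descent update
optimizer.zero_grad()
print(loss.item())
```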

Generative AI

Diffusion models, large language models (LLMs), fine-tuning, and prompt engineering.

  • How GPT-style LLMs are trained (transformer architecture, self-attention)
  • Fine-tuning vs LoRA vs RAG (Retrieval-Augmented Generation)
  • Diffusion models (e.g., Stable Diffusion)
  • Embeddings, tokenization, vector databases (FAISS, Chroma, Pinecone)
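The retrieval half of RAG reduces to embedding similarity search. The sketch below uses random vectors as stand-ins for real embeddings (which would normally come from an embedding model and live in a vector store such as FAISS, Chroma, or Pinecone), so only the ranking logic is real.

```python
import numpy as np

# Core of RAG retrieval: embed documents and a query, rank by cosine
# similarity, and pass the top matches to the LLM as context.
rng = np.random.default_rng(0)
doc_texts = ["doc about transformers", "doc about diffusion", "doc about Go"]
doc_embeddings = rng.normal(size=(len(doc_texts), 384))            # stand-in vectors
query_embedding = doc_embeddings[0] + 0.1 * rng.normal(size=384)   # "near" doc 0

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine(query_embedding, d) for d in doc_embeddings]
top = int(np.argmax(scores))
print(doc_texts[top])  # the most similar document becomes LLM context
```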

Applied AI / Agents

Building AI systems that use models to reason, act, plan, and interact with tools and APIs.

  • LangChain or LlamaIndex (Python)
  • Agent frameworks (e.g., OpenAI’s Assistants API, CrewAI, AutoGen)
  • Memory, context windows, tool use
  • Integration with your backend (Go service calling AI microservices)
  • Vector stores and RAG optimization
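Agent frameworks differ, but the core loop is the same: the model chooses a tool, the program executes it, and the result goes back into the context. The sketch below is framework-free; `call_llm`, `get_time`, and the message format are hypothetical placeholders, not any real SDK's API.

```python
import json

# One toy tool the "agent" can call.
def get_time(_: str) -> str:
    from datetime import datetime, timezone
    return datetime.now(timezone.utc).isoformat()

TOOLS = {"get_time": get_time}

def call_llm(messages: list[dict]) -> dict:
    # Placeholder: a real implementation would send `messages` to an LLM and
    # parse its reply into {"tool": ..., "input": ...} or {"answer": ...}.
    return {"tool": "get_time", "input": ""} if len(messages) == 1 else {"answer": "done"}

messages = [{"role": "user", "content": "What time is it (UTC)?"}]
for _ in range(5):  # cap the loop so a confused model cannot run forever
    decision = call_llm(messages)
    if "answer" in decision:
        print(decision["answer"])
        break
    result = TOOLS[decision["tool"]](decision["input"])
    messages.append({"role": "tool", "content": json.dumps({"result": result})})
```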
