What is underfitting? | | Basic Concepts |
F1 score | | Metrics |
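As a quick reference for the F1 entry above: F1 is the harmonic mean of precision and recall. A minimal sketch (function name and counts are illustrative, not from the original list):

```python
def f1_score(tp, fp, fn):
    """F1 from raw counts: true positives, false positives, false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f1_score(8, 2, 4))  # precision 0.8, recall 2/3 -> 8/11, about 0.727
```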
Why does L1 regularization lead to sparsity more than L2? | | Regularization
What is backpropagation? | | NN |
Why doesn’t MSE work with logistic regression? | | Metrics
Why is AUC desirable? | | Metrics
How to avoid overfitting? | | Basic Concepts |
L1 VS L2 regularization | | Regularization |
Generative vs Discriminative | | Basic Concepts |
Lasso and Ridge | | Regularization |
Hinge loss - SVM | | Loss |
Why is bias used in NN? | | NN |
Entropy / Cross-Entropy / Relative-entropy loss | | Loss |
Cross Validation / Stratified cross-validation | | Basic Concepts |
Confusion matrix | | Metrics |
Why does regularization use L1, L2 not others? | | Regularization |
Why can regularization reduce overfitting? | | Regularization |
Compare two classifiers | | System design |
What is overfitting? | | Basic Concepts |
Precision and recall | | Metrics |
SGD | | NN |
Initializing weights with 0 for NN? | | NN
Sigmoid function | | Activation Function |
Adam optimizer | | Training |
Learning rate | | NN |
tanh | | Activation Function |
Grid search and random search | | Metrics |
Why is non linear activation function needed? | | Activation Function |
Relation between logistic regression and neural network? | | NN |
Exploding gradient | | NN |
Does NN fit data better than logistic regression? | | System design |
Vanishing gradient | | NN |
RMSprop | | NN |
ReLU | | Activation Function |
linear regression vs logistic regression | | Basic Concepts |
Fourier transform | | Basic Concepts |
1x1 convolutional filter | | CNN |
Bias metric | | Basic Concepts |
Naive Bayes | | Basic Concepts |
Supervised, unsupervised, semi-supervised, weakly-supervised, self-supervised, and reinforcement learning | | Basic Concepts
CNNs in segmentation | | CNN |
data visualization libraries | | CV |
Model accuracy vs. model performance | | Basic Concepts
CV Framework description in Chinese | | CV |
What is attention, why attention | | NN |
Translation equivariance vs. translation invariance | | CNN
Bayes’ Theorem / How to interpret Bayes’ rule | | Basic Concepts
Residual Networks skip connections | | CNN |
Parameter calculation | | CNN |
Convolution / Kernel / Feature map dimension | | CNN |
Small convolutional kernels such as 3x3 rather than a few large ones | | CNN |
max-pooling CNN | | CNN |
Gradient Descent | | Basic Concepts |
Compare CNN and Transformer | | NN |
Transformer | | NN |
Data drift detection and its importance | | Data prepare
If one important feature is missing from a trained model, what can we do? | | Data prepare
Why use convolutional layers for images rather than fully connected layers? | | CNN
calculate convolution layer output size | | CNN |
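For the output-size entry above, the standard formula is floor((n + 2p - k) / s) + 1. A one-liner sketch (function name and example values are mine):

```python
def conv_output_size(n, k, padding=0, stride=1):
    """Output spatial size for an n x n input convolved with a k x k kernel."""
    return (n + 2 * padding - k) // stride + 1

# Example: 224x224 input, 7x7 kernel, stride 2, padding 3 (a common stem config)
print(conv_output_size(224, 7, padding=3, stride=2))  # 112
```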
RNN | | NN
Deep learning | | NN |
Autoencoder | | NN
GAN | | NN
LSTM | | NN
Turing test | | Basic Concepts
Why do we need a validation set and test set | | Data prepare |
data normalization | | Data prepare |
How to deal with imbalanced dataset | | Data prepare |
data augmentation | | Data prepare |
PCA - Principal Components Analysis | | Basic Concepts |
Dropouts | | Basic Concepts |
Reinforcement | | Basic Concepts |
Partition | | Basic Concepts |
Attribute | | NN |
Accuracy | | Basic Concepts |
Logistic Regression | | Basic Concepts |
Batch norm, mini-batch, layer norm | | NN |
Softmax Activation Function | | Activation Function |
ROC AUC | | Metrics |
word2vec vs. doc2vec | | Basic Concepts |
covariance matrix | | Metrics |
Type I error / Type II error | | Loss
What is cost function loss function | | Loss |
Receptive Field | | CNN |
confidence interval | | Basic Concepts |
Why do I have to convert "uint8" into "float32" | | Data prepare |
Log loss | | Loss |
Loss becomes Inf or NaN - reason | | Loss |
F-beta Score | | Metrics |
EM algorithm | | Basic Concepts |
How would you create a 3D model of an object from imagery and depth sensor measurements taken at all angles around the object? | | CV
How does CBIR work | | CV |
Handling Outliers in data | | Data prepare |
CNNs translation invariant | | CNN |
RMSE | | Metrics |
How do you prepare data? | | Data prepare
Bias-Variance Tradeoff | | Regularization |
Write Conv2D and Activation function from scratch | | ML Coding
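One possible from-scratch answer to the Conv2D + activation exercise, assuming a single channel and "valid" padding (a sketch, not a reference solution):

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2D cross-correlation (what DL frameworks call 'convolution')."""
    kh, kw = kernel.shape
    out_h = x.shape[0] - kh + 1
    out_w = x.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = (x[i:i + kh, j:j + kw] * kernel).sum()
    return out

def relu(z):
    """ReLU activation: elementwise max(0, z)."""
    return np.maximum(0, z)

x = np.arange(9, dtype=float).reshape(3, 3)
k = np.array([[1.0, 0], [0, 1]])  # sums each pixel with its diagonal neighbor
print(relu(conv2d(x, k)))  # [[ 4.  6.] [10. 12.]]
```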
Write Max Pooling from scratch | | ML Coding |
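A minimal NumPy sketch one could give for the max-pooling exercise (single channel, no padding; names and test values are illustrative):

```python
import numpy as np

def max_pool2d(x, size=2, stride=2):
    """Max pooling over a 2D array: take the max of each sliding window."""
    h, w = x.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.empty((out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            window = x[i * stride:i * stride + size, j * stride:j * stride + size]
            out[i, j] = window.max()
    return out

x = np.arange(16).reshape(4, 4)
print(max_pool2d(x))
# [[ 5  7]
#  [13 15]]
```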
Random Forest vs. Gradient Boosted Forest (Decision Tree) | | Basic Concepts
Maximum Likelihood Estimation (MLE) - e.g., finding the mean and standard deviation of a group of samples | | Basic Concepts
K-means | | Basic Concepts |
Compare RNN, LSTM with Transformer | | NN
L1 loss vs. L2 loss | | Loss |
Ensemble boosting bagging | | Basic Concepts |
difference between t-SNE and UMAP for dimensionality reduction | | Basic Concepts |
non maximal suppression | | CV |
Label Encoding vs. One Hot Encoding | | Basic Concepts |
Instance-Based vs Model-Based Learning | | Basic Concepts |
t-SNE sklearn | | Basic Concepts |
UMAP | | Basic Concepts |
difference between LDA and PCA for dimensionality reduction | | Basic Concepts |
PCA, t-SNE, and UMAP | | Basic Concepts |
probability vs maximum likelihood | | Basic Concepts |
How is KNN different from k-means clustering? | | NN
Create a function to compute an integral image, and create another function to get area sums from the integral image | | CV |
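For the integral-image exercise above, a common trick is to pad the table with a zero row and column so area lookups need no edge cases. A sketch (names are mine; box bounds are half-open):

```python
import numpy as np

def integral_image(img):
    """Cumulative 2D sum with a zero row/column prepended for easy lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def area_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] via four lookups in the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.ones((4, 4))
ii = integral_image(img)
print(area_sum(ii, 1, 1, 3, 3))  # 4.0 (a 2x2 patch of ones)
```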
Epoch vs. Batch vs. Iteration | | Training |
Batch Gradient Descent and Stochastic Gradient Descent | | NN |
Momentum | | NN |
Connected Component Labeling | | CV |
Pytorch CNN | | ML Coding |
Plot an image using Matplotlib pyplot | | ML Coding
Write K-means from scratch | | ML Coding |
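A plain Lloyd's-algorithm sketch for the K-means exercise (centroids initialized by sampling data points; the toy data is illustrative):

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Lloyd's algorithm: assign each point to its nearest centroid, then re-average."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iters):
        # Pairwise distances, shape (n_points, k)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster went empty
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

X = np.array([[0.0, 0], [0, 1], [10, 10], [10, 11]])
labels, _ = kmeans(X, 2)
print(labels)  # the two well-separated pairs end up in different clusters
```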
Write Gradient Descent / Activation Function from scratch | | ML Coding
PyTorch Cheatsheet | | ML Coding |
Write KNN from scratch | | ML Coding |
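For the KNN exercise, a minimal majority-vote sketch using Euclidean distance (the data here is a toy example, not from the original):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Predict by majority vote among the k nearest training points."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

X = np.array([[0.0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.5, 0.5])))  # 0
print(knn_predict(X, y, np.array([5.5, 5.5])))  # 1
```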
Python Library | | ML Coding |
Feature Crosses | | Basic Concepts |
SOTA | | SOTA |
Feature Hashing | | Basic Concepts |
One-hot encoding | | Basic Concepts |
Embedding | | Basic Concepts |
Numeric Features | | Basic Concepts |
How do we measure similarity? | | Metrics |
Mean encoding | | Basic Concepts |
GELU | | Activation Function |
Loss Functions | | Loss |
Huber loss | | Loss |
Data Generation Strategy | | Training |
Handle Imbalance Class Distribution | | Training |
Data Partitioning | | Training |
Mean square error and mean absolute error | | Loss |
Common Resampling Use Cases | | Training |
A/B Testing in Ads | | System design |
How LinkedIn Generates Data for Course Recommendation | | System design |
Conditional Random Fields (CRFs) | | Basic Concepts |
Random Number Generator | | ML Coding |
Split Train test valid | | ML Coding |
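One way to write the train/valid/test split from scratch: shuffle indices once, then slice into three disjoint partitions (the fractions and seed are arbitrary choices):

```python
import numpy as np

def train_val_test_split(X, y, val_frac=0.15, test_frac=0.15, seed=0):
    """Shuffle once, then slice into disjoint train/val/test index sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    n_val = int(len(X) * val_frac)
    test_idx = idx[:n_test]
    val_idx = idx[n_test:n_test + n_val]
    train_idx = idx[n_test + n_val:]
    return ((X[train_idx], y[train_idx]),
            (X[val_idx], y[val_idx]),
            (X[test_idx], y[test_idx]))

X = np.arange(100).reshape(100, 1)
y = np.arange(100)
train, val, test = train_val_test_split(X, y)
print(len(train[0]), len(val[0]), len(test[0]))  # 70 15 15
```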
Quantile Loss | | Loss |
Activation function | | ML Coding
parallel implementation | | ML Coding |
Pytorch Skip connection | | ML Coding |
Forecast Metrics | | Metrics
Normalized Cross Entropy | | Metrics |
Focal loss | | Loss |
Write a Perceptron Classifier for Binary Classification | | ML Coding |
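A sketch of the classic perceptron learning rule for the exercise above: update weights only on misclassified points, with labels in {-1, +1} (the toy data is illustrative):

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Perceptron rule: w += lr * y * x whenever a point is misclassified."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified (or on the boundary)
                w += lr * yi * xi
                b += lr * yi
    return w, b

X = np.array([[2.0, 2], [3, 3], [-2, -2], [-3, -3]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # all four points classified correctly
```

Note this only converges when the data is linearly separable, which is the standard caveat to mention in an interview.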
Write a Simple Image Classifier (Using package) | | ML Coding |
Write a Naive Bayes Classifier from scratch | | ML Coding |
IOU | | Metrics |
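IoU for axis-aligned boxes reduces to a few min/max operations; a sketch (the corner format (x1, y1, x2, y2) is an assumption):

```python
def iou(box_a, box_b):
    """Intersection over union for axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.143
```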
KNN | | Basic Concepts |
Activation functions | | Activation Function |
DALL·E | | NN |
Vision Transformer | | NN |
MultiModal (Text-Image) | | NN |
BERT | | NN |
What do you understand by transfer learning? Name a few commonly used transfer learning models. | | Basic Concepts |
01 ML system design introduction - bytebytego | | System design |
Write FC Layer Neural Network from scratch | | ML Coding |
Retrieval Augmented Generation | | GENAI |
AlBEF - BLIP - BLIP2 | | GENAI |
MultiModal | | GENAI |
PCA | | ML Coding |
SVM (Support Vector Machine) | | ML Coding |
Logistic Regression | | ML Coding |
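For the logistic regression coding entry, a batch-gradient-descent sketch on the log loss (toy 1-D data; hyperparameters are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, lr=0.1, epochs=1000):
    """Batch gradient descent on the log loss; gradient is X^T (p - y) / n."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * (p - y).mean()
    return w, b

X = np.array([[0.0], [1], [2], [3]])
y = np.array([0, 0, 1, 1])
w, b = fit_logreg(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print(preds)  # decision boundary settles near x = 1.5
```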