This project evaluates five NER models (LSTM-CRF, Hidden Markov Model, Brown Clustering, Decision Tree Classifier, and DistilBERT) across seven languages: English, French, Chinese, Arabic, Farsi, Finnish, and Swahili. It first establishes baseline performance on monolingual datasets, then applies few-shot learning with 5%, 10%, and 20% of the target-language training data to study transfer from high-resource to low-resource languages, offering insight into how effectively each model transfers across languages.
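As a rough illustration of the few-shot setup, the sketch below samples a fixed fraction (5%, 10%, or 20%) of a target-language training set for fine-tuning. The function and dataset here are hypothetical placeholders, not taken from the project's actual code.

```python
import random

def few_shot_subset(examples, fraction, seed=42):
    """Sample a reproducible fraction of the training examples.

    `examples` is any list of training items, e.g. (tokens, tags) pairs;
    `fraction` is 0.05, 0.10, or 0.20 as in the few-shot experiments.
    """
    rng = random.Random(seed)
    k = max(1, int(len(examples) * fraction))
    return rng.sample(examples, k)

# Hypothetical target-language training set of 1000 tagged sentences.
train = [([f"tok{i}"], ["O"]) for i in range(1000)]
for frac in (0.05, 0.10, 0.20):
    subset = few_shot_subset(train, frac)
    print(frac, len(subset))
```

Fixing the random seed keeps the sampled subsets identical across runs, so the 5%, 10%, and 20% conditions are comparable between models.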