Machine Learning class
This project was carried out as part of the ‘Data Science and Machine Learning’ course given by Professor Vlachos in the SMT master’s programme. The idea came from a Kaggle competition aimed at improving foreign language learning: build a model that predicts how difficult a French text is for English speakers.
For this project, we developed several models and compared them to identify the most effective one. Our approach involved fine-tuning both BERT and OpenAI models. Fine-tuning means taking a pre-trained model and continuing its training on data specific to our task, which improves its accuracy on that task.
After selecting the best-performing model, we built an application with Streamlit. Streamlit let us create an interactive, user-friendly interface to showcase the model, which simplified deployment and made the model usable for end users.
My Takeaways

Fine-Tuning OpenAI Models
Used the OpenAI API from Python to upload training data, manage fine-tuning jobs, and query the resulting models (a sketch of this workflow follows below).
Applied the fine-tuned models to the difficulty-prediction task, improving their accuracy compared with the base models.
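The snippet below is a minimal sketch of that workflow, assuming the openai Python package (v1 client interface) and a hypothetical JSONL training file called train_difficulty.jsonl formatted for chat fine-tuning; it is illustrative rather than the project’s exact script.

```python
from openai import OpenAI

# Each line of the (hypothetical) JSONL file pairs a French sentence with its
# difficulty label, e.g.:
# {"messages": [{"role": "user", "content": "<French sentence>"},
#               {"role": "assistant", "content": "B1"}]}
client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the training data, then launch a fine-tuning job on a base chat model.
training_file = client.files.create(
    file=open("train_difficulty.jsonl", "rb"),
    purpose="fine-tune",
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print("Fine-tuning job started:", job.id)

# Once the job finishes, the resulting model id (ft:gpt-3.5-turbo:...) can be
# used like any other chat model:
# response = client.chat.completions.create(
#     model="ft:gpt-3.5-turbo:...",
#     messages=[{"role": "user", "content": "Le chat dort sur le canapé."}],
# )
# print(response.choices[0].message.content)
```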

Fine-Tuning BERT Models
Tuned hyperparameters (e.g. learning rate, batch size, number of epochs) to optimize the fine-tuned BERT model’s performance; a sketch is shown below.
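The sketch below shows one way to run such a search with the Hugging Face transformers Trainer; the model name (camembert-base), the six CEFR labels, and the value grids are assumptions rather than the project’s exact configuration, and the train/eval datasets are expected to be tokenized already.

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "camembert-base"  # assumption: a French BERT variant

def build_trainer(learning_rate, epochs, train_ds, eval_ds):
    """Create a Trainer for one hyperparameter configuration."""
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=6  # assumption: one label per CEFR level A1-C2
    )
    args = TrainingArguments(
        output_dir=f"bert_lr{learning_rate}_ep{epochs}",
        learning_rate=learning_rate,
        num_train_epochs=epochs,
        per_device_train_batch_size=16,
    )
    return Trainer(model=model, args=args,
                   train_dataset=train_ds, eval_dataset=eval_ds)

def grid_search(train_ds, eval_ds):
    """Train one model per configuration and keep the one with the lowest
    evaluation loss (datasets are assumed to be pre-tokenized)."""
    best_loss, best_config = float("inf"), None
    for lr in (1e-5, 2e-5, 5e-5):
        for epochs in (2, 4):
            trainer = build_trainer(lr, epochs, train_ds, eval_ds)
            trainer.train()
            eval_loss = trainer.evaluate()["eval_loss"]
            if eval_loss < best_loss:
                best_loss, best_config = eval_loss, (lr, epochs)
    return best_config, best_loss
```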

Creating a Streamlit Application
Focused on designing a simple, user-friendly interface where a user can enter a French text and get back the model’s predicted difficulty (see the sketch below).
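A minimal sketch of such an interface follows; predict_difficulty is a hypothetical helper wrapping whichever fine-tuned model was selected, not part of Streamlit or the project’s actual code.

```python
import streamlit as st

def predict_difficulty(text: str) -> str:
    """Placeholder: call the selected fine-tuned model and return a label."""
    return "B1"  # stand-in value for illustration

st.title("French Text Difficulty Predictor")
st.write("Paste a French text to estimate how difficult it is for English speakers.")

text = st.text_area("French text")
if st.button("Predict difficulty") and text.strip():
    level = predict_difficulty(text)
    st.success(f"Estimated difficulty: {level}")
```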