ZoniQ is an innovative music player that uses facial recognition to detect the user's mood and plays music accordingly. By blending the power of AI and music, ZoniQ enhances your listening experience by adapting to your emotions.
- 🧠 Mood Detection: Utilizes facial recognition to analyze your mood in real-time.
- 🎶 Music Recommendation: Plays tracks that match your mood (happy, sad, angry, surprise, neutral, etc.).
- ⚡ Real-Time Processing: Detects mood and starts playback instantly.
- 📁 Customizable Music Library: Users can add their own music for different moods.
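A customizable mood-to-folder library could be sketched as below. The folder names and the `pick_track` helper are illustrative assumptions, not ZoniQ's actual layout or API:

```python
import os
import random

# Hypothetical mood-to-folder layout; ZoniQ's actual directory names may differ.
MOOD_FOLDERS = {
    "happy": "happy",
    "sad": "sad",
    "angry": "angry",
    "surprise": "surprise",
    "neutral": "neutral",
}

def pick_track(mood, library="music"):
    """Return a random audio file matching the detected mood, or None.

    Unknown moods fall back to the "neutral" folder.
    """
    folder = os.path.join(library, MOOD_FOLDERS.get(mood, "neutral"))
    if not os.path.isdir(folder):
        return None
    tracks = [f for f in os.listdir(folder) if f.endswith((".mp3", ".wav", ".ogg"))]
    return os.path.join(folder, random.choice(tracks)) if tracks else None
```

Keeping the mood-to-folder mapping in one dictionary makes it easy for users to add their own mood categories.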
Before starting, ensure you have the following installed:
- Python 3.8: download it from Python's official website.
- Libraries:
  - OpenCV: `pip install opencv-python`
  - TensorFlow & NumPy: `pip install tensorflow==2.13.0 numpy==1.24.3`
  - Pygame: `pip install pygame`
- Webcam: required for real-time mood detection.
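The libraries above can be pinned in a `requirements.txt` for the dependency-install step (a sketch; versions taken from the commands above, with OpenCV and Pygame left unpinned):

```
opencv-python
tensorflow==2.13.0
numpy==1.24.3
pygame
```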
- Clone the repository: `git clone https://github.com/himanshu-21-0/ZoniQ-Mood_Based_Music_Player.git`
- Navigate to the project directory: `cd ZoniQ-Mood_Based_Music_Player`
- Install dependencies: `pip install -r requirements.txt`
- Run the application: `python ZoniQ.py`
- Improve emotion detection accuracy with a larger dataset.
- Add a web-based interface for broader accessibility.
- Introduce advanced controls, such as volume adjustment and playlist shuffling.
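On the detection-accuracy point above: frame-by-frame emotion predictions tend to flicker, and one low-cost stabilizer (an illustrative helper, not part of ZoniQ today) is to switch tracks only after several consecutive identical predictions:

```python
from collections import deque

class MoodDebouncer:
    """Report a mood change only after `window` consecutive identical
    predictions, so a single noisy frame doesn't restart playback."""

    def __init__(self, window=5):
        self.window = window
        self.recent = deque(maxlen=window)  # rolling window of predictions
        self.current = None                 # last stable mood

    def update(self, mood):
        """Feed one per-frame prediction; return the newly stable mood,
        or None if the mood is unchanged or not yet stable."""
        self.recent.append(mood)
        stable = (len(self.recent) == self.window
                  and len(set(self.recent)) == 1)
        if stable and mood != self.current:
            self.current = mood
            return mood
        return None
```

In the main loop, playback would only be restarted when `update()` returns a non-None mood.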
This project is licensed under the MIT License.
Developed by Himanshu Singh (GitHub: `himanshu-21-0`).