A data science interview is not easy, and there is considerable uncertainty about what it will cover. Regardless of your work experience or data science certifications, the interviewer may throw a series of questions at you that you weren't expecting. During a data science interview, the interviewer asks technical questions on a wide range of topics, so the interviewee needs both strong knowledge and good communication skills.
In this note, I would like to talk about how to prepare for a data science / machine learning interview. We will sort the questions into categories, and I will share links to frequently asked questions with answers.
Traditionally, data science / machine learning interviews include the following categories of questions:
- Machine learning algorithms
- Programming skills, algorithms and data structures
- Domain knowledge
- Machine Learning Systems Design
- Culture Fit
In this post, I'll show you how to create a working prototype of a web application with a machine learning model in 50 lines of Python code. Imagine you have a cool project idea. Now you need to build an MVP (minimum viable product) and show it to your manager, partner, or investor, or just show it off to your friends.
We will be using BentoML. It is a flexible, high-performance framework that is well suited to building an MVP:
- supports multiple machine learning frameworks, including TensorFlow, PyTorch, Keras, XGBoost, and more
- supports cloud deployment with Docker, Kubernetes, AWS, Azure, and many more
- provides high-performance online serving via an API
- offers web dashboards and APIs for managing the model registry and deployments
In this post we will try to choose a logging library for Python. Logs help you record and understand what went wrong in your service. Informational messages are often written to the logs as well: for example, parameters, quality metrics, and model training progress. Here is an example fragment of a model training log:
A trained machine learning model alone will not add value for a business. The model must be integrated into the company's IT infrastructure. Let's develop a REST API microservice to classify Iris flowers. The dataset consists of the lengths and widths of the sepals and petals of Iris flowers. The target variable is the Iris variety: 0 for Setosa, 1 for Versicolor, 2 for Virginica.
Saving and loading a model
Before moving on to developing the API, we need to train and save the model. We'll take the RandomForestClassifier model, save it to a file, and load it back to make predictions. This can be done with pickle or joblib.
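The training code itself is not shown in this excerpt; here is a minimal sketch, assuming scikit-learn and a standard train/test split (the variable names `clf`, `X_test`, and `y_test` are chosen to match the save/load snippets below; the hyperparameters are assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load the Iris dataset: sepal/petal lengths and widths, 3 classes.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Train a random forest classifier on the training split.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```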
import pickle

filename = 'model.pkl'
with open(filename, 'wb') as f:
    pickle.dump(clf, f)
We’ll use pickle.load to load and validate the model.
with open(filename, 'rb') as f:
    loaded_model = pickle.load(f)
result = loaded_model.score(X_test, y_test)
The code for training, saving and loading the model is available in the repository — link
For junior Data Scientists, a CV consists of courses taken, education, and possibly not the most relevant work experience. Such resumes are not much different from those of the bulk of job seekers.
Working on a pet project is a great opportunity to improve your skills. If you add a completed pet project to your CV, it will immediately become more attractive, and you will have a topic of conversation for the interview.
So what is a pet project? A pet project is a project you do for yourself. It is created outside of work and is usually driven by personal interest, for example: sports, electronics, cooking, cars, travel, medicine, etc. Such a project will help you expand your professional skills and learn new ones that will be useful at work.
Here are some ideas for projects in Data Science that you can get started with: