Flask, Docker and inferences via a server/cloud
While my Protein Classification model was churning in the background, I focused on learning skills and technologies that would help productionize ML models for inference. This means I could host my ML models on a server, making a useful model available to more people than just me. I learnt about the Flask framework, through which one can expose Python functions over HTTP and have them return data. In the ML case, this means having Flask expose a function that performs inference and sends the results back to the user. I also learnt about Docker and how I could build and run an image that serves the model's inference through an Apache server, with the help of Flask and WSGI (a bridge between Flask and Apache). I'll update this post when I have something working.

UPDATE 1: I obtained a free $300 credit to play with Google's cloud services, and they do have an ML Engine to work with, so I decided to give it a shot be...
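As a rough idea of what "Flask exposing an inference function" looks like, here is a minimal sketch. The route name, the JSON shape, and the `predict` function are all hypothetical stand-ins; a real version would load the trained protein classifier and call it inside `predict` instead of the toy rule used here.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(sequence):
    # Hypothetical placeholder for real model inference: a trivial
    # length-based rule stands in for the actual protein classifier.
    label = "class-A" if len(sequence) % 2 == 0 else "class-B"
    return {"class": label, "length": len(sequence)}

@app.route("/predict", methods=["POST"])
def predict_endpoint():
    # Expect a JSON body like {"sequence": "MKTAYIAK..."} and return
    # the model's prediction as JSON.
    data = request.get_json()
    result = predict(data["sequence"])
    return jsonify(result)

if __name__ == "__main__":
    # Flask's built-in server is fine for local testing only; in the
    # Docker/Apache setup described above, mod_wsgi would serve `app`
    # instead of calling app.run().
    app.run(host="0.0.0.0", port=5000)
```

Locally you could exercise it with `curl -X POST -H "Content-Type: application/json" -d '{"sequence": "MKTA"}' http://localhost:5000/predict`.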