I really advise pushing your Machine Learning models to production as FAST as possible! It is easy to spend months improving your models, but the opportunity cost is very high and chances are you did not account for all the details at planning time. Personally, I always go for the simplest model that gives me a lift compared to the current process in place, push it to prod, and iterate from there.
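As an illustration, here is a minimal sketch of what that first iteration can look like: benchmark the simplest model against a stand-in for the current process and only ship if there is a lift. It uses scikit-learn on a synthetic dataset; the majority-class baseline is purely a hypothetical proxy for whatever rule is currently in place.

```python
# Minimal sketch: is a simple model already a lift over the current process?
# The "current process" is proxied here by a majority-class baseline, a
# placeholder for whatever heuristic is actually in production.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

baseline = DummyClassifier(strategy="most_frequent")  # stand-in for the current process
simple_model = LogisticRegression(max_iter=1000)      # simplest model worth shipping

baseline_auc = cross_val_score(baseline, X, y, cv=5, scoring="roc_auc").mean()
model_auc = cross_val_score(simple_model, X, y, cv=5, scoring="roc_auc").mean()

print(f"baseline AUC: {baseline_auc:.3f}, simple model AUC: {model_auc:.3f}")
if model_auc > baseline_auc:
    print("Lift confirmed -> push to prod and iterate from there.")
```

The right metric and baseline obviously depend on the problem, but the point stands: this kind of comparison takes hours, not months.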
In general, I think it is important to prioritize your work based on the potential impact and to factor in the potential cost associated with that work. Is your time really well spent testing yet another feature transformation? Establish strict timelines, focus on the low-hanging fruit, keep development to a minimum, and make sure you can get quick feedback from the production environment on the performance of your models while capturing business impact in the process.
There can be a real cost to having a model that is too complex! Inference can become too slow and have a real impact on the user experience. The infra cost of a model that is too big can become significant. All future experiments can become really slow and impair your ability to iterate fast! Maintainability is a pain point that Machine Learning developers tend to neglect: the more features, the more data quality checks need to be established. In some industries, every feature has to be audited for regulatory purposes, and every time a law changes, all the features have to be audited again! I am not even addressing the case where you need interpretability! Some readings on the subject:
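To make the maintainability point concrete, here is a minimal sketch of the kind of per-feature data quality check that every shipped feature drags along with it. The column names, ranges, and thresholds are made up for illustration.

```python
# Sketch of a per-feature data quality check; every new feature in prod means
# one more entry like this to maintain (and, in regulated industries, to audit).
import pandas as pd

def check_feature(df: pd.DataFrame, column: str, min_value: float, max_value: float,
                  max_null_rate: float = 0.01) -> list:
    """Return a list of data quality issues found for one feature column."""
    issues = []
    null_rate = df[column].isna().mean()
    if null_rate > max_null_rate:
        issues.append(f"{column}: null rate {null_rate:.2%} exceeds {max_null_rate:.2%}")
    out_of_range = ~df[column].dropna().between(min_value, max_value)
    if out_of_range.any():
        issues.append(f"{column}: {out_of_range.sum()} values outside [{min_value}, {max_value}]")
    return issues

# Hypothetical feature table with a couple of deliberately bad values.
df = pd.DataFrame({"age": [25, 41, None, 230], "income": [30e3, 55e3, 72e3, -5]})
issues = check_feature(df, "age", 0, 120) + check_feature(df, "income", 0, 1e7)
print(issues or "all checks passed")
```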
One of the heaviest costs might be the one related to tech debt: those small work items that seem insignificant at the time but, compounded over time, can have disastrous consequences. This talk by a Google engineer does a good job of outlining the different risks: