More like: there are existing mathematical/computational models that connect visual input to attention/focus, and hardware like spectacle-mounted cameras, so there may be a direction to solve this need gap.
But, as with everything in neuroscience (https://needgap.com/problems/41-human-memory-lack-of-thereof-psychology-neuroscience), this is going to be very complex. Creating context-aware algorithms is very hard, even with the current progress in machine learning.
e.g. Say you are wearing 'focus drift-alert' spectacles while programming, but you open a browser to clear some doubts or to look at an API document; would the spectacles be contextual enough to identify that you are still focused on your original task? Maybe they would, or maybe a particular context mode could be activated for each task, such as programming, cooking, reading etc.
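One crude way to picture those per-task context modes is a whitelist of apps per task, so a switch to the browser can still count as on-task. This is purely illustrative; every mode name and app label below is invented, and a real system would need far richer signals than the foreground app:

```python
# Hypothetical sketch of per-task "context modes". Each mode lists app
# contexts that still count as staying on the original task. All names
# here are made up for illustration, not from any real product.
TASK_MODES = {
    "programming": {"editor", "terminal", "browser:api_docs"},
    "cooking": {"recipe_app", "kitchen_timer"},
    "reading": {"ebook_reader"},
}

def is_on_task(mode: str, current_context: str) -> bool:
    """Return True if the current app context plausibly belongs to the task."""
    return current_context in TASK_MODES.get(mode, set())

# In "programming" mode, checking API docs in the browser is still on-task,
# while drifting to social media is flagged as a focus drift.
print(is_on_task("programming", "browser:api_docs"))   # True
print(is_on_task("programming", "browser:social"))     # False
```

Of course, the hard part the comment points at is exactly what this sketch dodges: classifying "browser:api_docs" vs "browser:social" in the first place.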
Still, a consumer-grade product is one of the best ways to accelerate scientific research.