ABSTRACT
Retinal prosthetic devices can significantly improve the ability of visually impaired individuals to live more independently. We describe a visual processing system that leverages image analysis techniques to produce visual patterns, allowing the user to perceive their environment more effectively. These patterns are used to stimulate a retinal prosthesis, enabling self-guidance and a higher degree of autonomy for the affected individual. Specifically, we describe an image processing pipeline that performs object and face localization in cluttered environments, as well as several contrast enhancement strategies for the "implanted image." Finally, we describe a real-time implementation and deployment of this system on the Argus II platform. We believe these advances can significantly improve the effectiveness of the next generation of retinal prostheses.
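As a rough illustration of the kind of pipeline described here, the following is a minimal sketch, assuming OpenCV is available: a Haar-cascade face detector and CLAHE contrast enhancement are stand-ins for whatever localization and enhancement methods the system actually uses, and the 6x10 output grid reflects the Argus II's 60-electrode array. It is illustrative only, not the authors' implementation.

```python
import cv2

def frame_to_stimulation_pattern(frame_bgr, grid_shape=(6, 10)):
    """Hypothetical sketch: locate the most prominent face, enhance its
    contrast, and downsample to one intensity per electrode. Parameter
    values are illustrative, not those of the deployed system."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Face localization with OpenCV's bundled Haar cascade
    # (a stand-in for the paper's face localization method).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) > 0:
        # Crop to the largest detected face so it fills the implant's
        # narrow field of view.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        roi = gray[y:y + h, x:x + w]
    else:
        roi = gray  # fall back to the full scene

    # Contrast enhancement via CLAHE (one plausible strategy among
    # the "various" ones the abstract mentions).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(roi)

    # Downsample to the electrode grid: one value per electrode.
    # cv2.resize takes (width, height), hence the reversed shape.
    pattern = cv2.resize(enhanced, (grid_shape[1], grid_shape[0]),
                         interpolation=cv2.INTER_AREA)
    return pattern
```

In a real-time deployment, a function like this would run per camera frame, with the resulting low-resolution pattern mapped to per-electrode stimulation amplitudes.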