
Semantic Translation of Face Image with Limited Pixels for Simulated Prosthetic Vision

Information Sciences (2022)

Abstract
Facial perception and cognition are among the most critical functions of retinal prostheses for blind people. However, owing to the limitations of the electrode array, simulated prosthetic vision can only provide images with limited pixels, which severely weakens the expression of image semantics. To improve the intelligibility of face images under limited pixels, we constructed a face semantic information transformation model, named F2Pnet (face-to-pixel networks), that transforms real faces into pixel faces based on the analogy between human and artificial intelligence. This is the first attempt at face pixelation using deep neural networks for prosthetic vision. Furthermore, we established a pixel face database designed for prosthetic vision and proposed a new training strategy for generative adversarial networks in image-to-image translation tasks, aiming to solve the problem of semantic loss under limited pixels. The results of psychophysical experiments and user studies show that the identifiability of pixel faces, in terms of characteristics and expression, is much better than that of comparable methods, which is significant for improving the social ability of blind people. Real-time operation (17.7 fps) on a Raspberry Pi 4 B shows that F2Pnet is reasonably practical.
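The abstract describes F2Pnet as a GAN-based image-to-image translation model that maps real face photos to limited-pixel "pixel faces". As a rough illustration of that general setup only, the sketch below shows a minimal adversarial translation loop in PyTorch; the network sizes, the 32x32 single-channel output resolution, and the losses are assumptions chosen for illustration and do not reproduce the paper's F2Pnet architecture, its pixel face database, or its training strategy.

```python
# Illustrative sketch only: a generic image-to-image GAN that maps a face image
# to a coarse limited-pixel representation. Architecture, resolutions, and losses
# are assumptions for illustration and are NOT the paper's F2Pnet.
import torch
import torch.nn as nn

class PixelFaceGenerator(nn.Module):
    """Encode a 128x128 RGB face and decode a 1-channel 32x32 'pixel face'."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid(),          # keep 32x32
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class Discriminator(nn.Module):
    """Judge whether a 32x32 pixel face looks like a reference pixel-face sample."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),   # 32 -> 16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, opt_g, opt_d, real_face, real_pixel_face):
    """One adversarial update: the generator tries to produce pixel faces the
    discriminator cannot tell apart from the reference pixel-face data."""
    bce = nn.BCEWithLogitsLoss()

    # Discriminator update on real pixel faces vs. generated ones
    fake = gen(real_face).detach()
    d_loss = bce(disc(real_pixel_face), torch.ones(real_pixel_face.size(0), 1)) + \
             bce(disc(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update (adversarial term only in this sketch)
    fake = gen(real_face)
    g_loss = bce(disc(fake), torch.ones(fake.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    gen, disc = PixelFaceGenerator(), Discriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    # Dummy batch: 4 RGB faces (128x128) and 4 reference pixel faces (32x32)
    faces = torch.rand(4, 3, 128, 128)
    pixel_faces = torch.rand(4, 1, 32, 32)
    print(train_step(gen, disc, opt_g, opt_d, faces, pixel_faces))
```

In practice, a translation model like the one described would also need a content or semantic-preservation term alongside the adversarial loss so the pixel face retains identity and expression cues, which is exactly the semantic-loss problem the paper's training strategy targets.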
Keywords
Simulated prosthetic vision,Image-to-image translation,Generative adversarial network,Pixel face