Teaching AI to see the world like babies

You may not remember this, but in the first few weeks of your life you likely saw only vague shapes and shades of grey. Surprisingly, scientists think this may be a good thing: slowly learning the visual world at first can help build a stronger foundation, like learning to walk before you can run! Unlike infants, most of today’s state-of-the-art artificial intelligence (AI) systems instead rely on seeing enormous amounts of data all at once, making them less efficient and more costly to train.

Researchers at Indiana University therefore asked whether AI could also benefit from “starting slow”. They trained AI models on videos recorded by head-mounted cameras worn by infants aged 2 to 12 months, presented either in developmentally chronological order or in scrambled order. Fascinatingly, the AI learned much faster and performed better on video classification tasks when it saw the videos in the same developmental sequence as human babies. Crucially, the researchers traced this advantage to the characteristic slowness and visual simplicity of the 2- to 4-month-olds’ videos. This research suggests that understanding how babies make sense of the visual world can shed light on how machines might learn more efficiently, encouraging machine “development” and challenging traditional brute-force machine learning methods.
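For readers who code, here is a minimal sketch of the curriculum idea in Python. The VideoClip structure and the clip names are purely illustrative, not the authors’ actual pipeline: the same set of clips is handed to a learner either sorted by the age of the infant who wore the camera (the developmental curriculum) or in random order (the baseline).

```python
import random
from dataclasses import dataclass


@dataclass
class VideoClip:
    infant_age_months: int  # age of the infant wearing the head camera
    clip_id: str            # illustrative identifier for the recording


def curriculum_order(clips):
    """Developmental curriculum: slow, simple videos from the youngest infants first."""
    return sorted(clips, key=lambda c: c.infant_age_months)


def shuffled_order(clips, seed=0):
    """Baseline: the same clips in a random, non-chronological order."""
    rng = random.Random(seed)
    out = list(clips)
    rng.shuffle(out)
    return out


if __name__ == "__main__":
    clips = [VideoClip(m, f"clip_{m}mo") for m in (10, 3, 7, 2, 12, 5)]
    print([c.clip_id for c in curriculum_order(clips)])
    # ['clip_2mo', 'clip_3mo', 'clip_5mo', 'clip_7mo', 'clip_10mo', 'clip_12mo']
    print([c.clip_id for c in shuffled_order(clips)])
```

In the study, it was this ordering of otherwise identical training data, not any change to the model itself, that made the difference.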


Original Article: Sheybani, S., Hansaria, H., Wood, J., Smith, L. B., & Tiganj, Z. (2023). Curriculum Learning with Infant Egocentric Videos. In Advances in Neural Information Processing Systems 36 (NeurIPS 2023).
