Visual Processing in the Brain
How the brain converts the light rays impinging on the eyes into semantically meaningful information that guides human action is a fascinating question, and a lot of great work has investigated it. Eventually this blog post will try to summarize some of the landmark work in neural visual processing, but for now it is a random assortment of papers that I find interesting.
One interesting current theme in the field is comparing neural activity against activations in artificial neural networks.
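To make this concrete, here is a minimal sketch of one common way such comparisons are done, representational similarity analysis (RSA): compute pairwise dissimilarities between stimulus-evoked response patterns for both the brain data and an ANN layer, then correlate the two dissimilarity structures. All the data below are random placeholders, not results from any of the papers discussed here.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix (condensed form):
    pairwise correlation distance between stimulus response patterns.
    responses: (n_stimuli, n_units) array."""
    return pdist(responses, metric="correlation")

# Toy data: responses of 50 voxels and one ANN layer to 20 stimuli.
rng = np.random.default_rng(0)
brain = rng.standard_normal((20, 50))       # e.g., fMRI voxel patterns
ann_layer = rng.standard_normal((20, 512))  # e.g., CNN layer activations

# RSA score: rank correlation between the two RDMs. Higher means the
# ANN layer's representational geometry matches the brain region's.
rho, _ = spearmanr(rdm(brain), rdm(ann_layer))
print(f"RSA similarity (Spearman rho): {rho:.3f}")
```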
Principal investigators to look into if you want to read more: Bruno Olshausen, Jim DiCarlo, Jack Gallant, Rebecca Saxe, Nancy Kanwisher, Aapo Hyvarinen, Eero Simoncelli, David Marr, David Van Essen, Nikos Logothetis.
Organization of high-level visual cortex in human infants
While many studies have analyzed how visual information is represented in the adult human brain, very few have looked at this organization in infants, because recording brain activity in infants is difficult.
This paper shows that 2.3- to 8.6-month-old infants have areas that are more selective for faces than for scenes. The face-selective areas, however, do not discriminate between faces and objects (video clips of children's toys). In addition to semantic features (i.e., category membership), low-level visual features (Figure 3; rectilinearity/corners) also explain activity in the face areas. Low-level features do not explain activity in the scene-selective areas.
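The kind of comparison behind such claims can be sketched as a simple encoding-model analysis: regress a region's responses on low-level features versus category labels and compare how much held-out variance each explains. Everything below (feature values, voxel responses) is simulated for illustration; this is not the paper's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_stim = 200

# Hypothetical regressors: a low-level feature (e.g., rectilinearity per
# stimulus) and a binary category label (face vs. scene).
rectilinearity = rng.standard_normal((n_stim, 1))
category = rng.integers(0, 2, (n_stim, 1)).astype(float)

# Toy voxel response driven by both feature types plus noise.
voxel = (0.6 * rectilinearity + 0.8 * category
         + rng.standard_normal((n_stim, 1))).ravel()

# Cross-validated R^2 for each feature set: how much of the region's
# response each kind of feature explains on held-out stimuli.
for name, X in [("low-level", rectilinearity),
                ("category", category),
                ("both", np.hstack([rectilinearity, category]))]:
    r2 = cross_val_score(RidgeCV(), X, voxel, cv=5, scoring="r2").mean()
    print(f"{name:>9} features: mean held-out R^2 = {r2:.2f}")
```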
Overall, while the face and scene representations resemble those of adults, the areas are not as selective as in adults. This suggests that the visual cortex starts out with a roughly adult-like structure that is refined over time to become more category-selective.
One thing I don't understand is why low-level features don't explain the scene-selective areas. Maybe the chosen features are not appropriate for scenes?