This class examines the human relationship to mechanized perception in art and architecture. Mechanical eyes, such as satellites, rovers, computer vision systems, and autonomous sensing devices, give us unprecedented access to nonhuman and superhuman views of known and unknown environments. But the technology of automatic observation alienates human observers, and its numeric or digital character can fool them into treating its output as an unemotional, inhuman point of view. The observer confronts seemingly trustworthy data that has been “flattened,” or distilled, from the real world. This face-value acceptance should be rejected: interpreters of device data should interrogate the motives, biases, and perspectives of the “artist” in this case, that is, the developer, programmer, or engineer who created the device. Despite displacing direct human observation, the mechanical eyes at work in remote sensing, LiDAR scanning, trail cameras, metagenomic sequencing, urban informatics, and hyperspectral imaging have become fundamental to spatial analysis. As these tools become standard practice, observers should also be trained to crack open the data and recover the human perspective that originally informed it. In this class, students investigate the impact of the mechanical eye on cultural and aesthetic inquiry into a specific site. They consider their conceptual role as interpreters for the machine and create a series of site-analysis experiments across a range of mediums. The experiments draw on themes of inversion, mirroring, portraiture, memory, calibration, and foregrounding to “unflatten” data into structure and form. Limited enrollment.