In this work, we posit that a user’s head pose can serve as a proxy for gaze in a VR object selection task. We describe a study in which participants were asked to describe a series of objects in a known order, providing approximate ground-truth labels for the focus of attention. Head pose was derived from the position and orientation of the headset, and the degree to which that pose aligned with the locations of known objects was computed. The resulting rankings of candidate objects were then evaluated using mean reciprocal rank. We demonstrate that a gaze estimate derived from head pose can effectively narrow the set of objects that are the target of participants’ attention.
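As a rough illustration of this idea (not the paper's exact pipeline), the sketch below ranks candidate objects by the angular distance between the headset's forward vector and the direction from the head to each object, and scores the rankings with mean reciprocal rank; the function names and the choice of angular distance as the alignment metric are assumptions for exposition.

```python
import numpy as np

def rank_objects_by_head_pose(head_position, head_forward, object_positions):
    """Rank candidate objects by the angle between the head's forward
    vector and the direction from the head to each object (assumed metric)."""
    head_forward = head_forward / np.linalg.norm(head_forward)
    angles = []
    for obj_pos in object_positions:
        to_obj = obj_pos - head_position
        to_obj = to_obj / np.linalg.norm(to_obj)
        # Clamp to avoid numerical issues just outside [-1, 1].
        angles.append(np.arccos(np.clip(np.dot(head_forward, to_obj), -1.0, 1.0)))
    # Smallest angle = closest match; return object indices in ranked order.
    return np.argsort(angles)

def mean_reciprocal_rank(ranked_lists, true_targets):
    """Mean reciprocal rank of the true target object across trials."""
    reciprocal_ranks = []
    for ranking, target in zip(ranked_lists, true_targets):
        rank = int(np.where(ranking == target)[0][0]) + 1  # 1-indexed rank
        reciprocal_ranks.append(1.0 / rank)
    return float(np.mean(reciprocal_ranks))
```

In this sketch, a trial where the described object is ranked first contributes 1.0 to the mean, second contributes 0.5, and so on, so higher MRR indicates that head pose alone places the attended object near the top of the candidate list.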