We have to remember that what we observe is not nature in itself but nature exposed to our method of questioning
                                        -- Werner Karl Heisenberg

I am always drawn to systems that learn from experience and adapt to a changing environment to fulfill a function; systems that move, act, and mimic or exhibit intelligence. My scientific journey started with studying signal processing methods for estimating the direction of arrival of spatially distributed signals and isolating them in the presence of interference in a dynamic environment. I developed a fascination for machine learning problems and robotics. This fascination was followed by frustration: no matter how hard we, scientists and engineers, try, we cannot build systems that act in uncertain environments with the ease and efficiency of living systems. To get a better handle on the problem, I decided to move from engineering to biology, more specifically to neuroscience.

The big picture

My research interests are driven by a single question: what makes biological systems different from artificial and non-biological natural systems? Pursuit of this question is essential to understanding cognition, consciousness, and the "self" -- terms that are still mysterious to us. Even the most "intelligent" robots are only crude models of their living counterparts. Lacking sufficient complexity, a robot is incapable of generating "meaning" from its experiences: it is perfectly happy sitting on the surface of Mars, digging its wheels into the sand without moving an inch, or exposing a rock surface for evidence of life (one cannot say the same for the engineers in the mission room, however). The "meaning" of the robot's actions exists only for its observers (or designers), who are outside of it. In fact, one can argue that the robot and its user together make a system that fits the criteria of "being intelligent" in the sense that a living system would. Hence, we are missing something essential and fundamental in the making of living systems.

Why should we care? At the most, our ever-growing demand for systems capable of functioning in complex environments requires engineering solutions that will most likely be very similar to biological systems in terms of their functional structure. At the least, biological systems can be a source of inspiration for building systems that generate, gather, and categorize information, and that learn, infer, and adapt in order to continue to function.

The problem described above is well known and one of the most debated issues in artificial intelligence, psychology, and related areas. A good term for it is "the grounding problem," as in symbol grounding in psychology and linguistics. Understanding how living systems build symbols that represent their experiences is one of the key steps toward closing the gap between a living system and a robot. A second step is figuring out what a living system is. Surprisingly, the first question seems easier to answer, since it lends itself to experimental questioning (see my dissertation work or my book). The second question, however, requires new approaches and much questioning of our standard assumptions. New approaches have been introduced under a variety of disciplines, such as general system theory, relational biology, and complexity theory. A single theme that emerges from these efforts is that living systems are complex. A complete description of a complex system cannot be obtained by reductionist approaches. For similar reasons, the theories of classical physics are also not sufficient (a good discussion of these issues can be found in Robert Rosen's Life Itself).

My current and future research goal is to understand the functional organization of living systems, learning more about them from the perspective of complex systems. Concepts such as emergence, self-reference, agency, adaptation, evolution, and organizational closure, and how they relate to living systems, are among my interests. Organizational closure carries particular importance, for it has been claimed to be a necessary, if not sufficient, condition for a system to be living. Interesting conclusions about the computability of living systems follow from this claim (see Life Itself). My interests also include the origin and meaning of terms such as measurement (sensing), information, anticipation, and decision, all of which are meaningful only within a living system. In addition, I put significant effort into understanding the mathematical concepts of circular hierarchy, non-well-founded set theory, and category theory, for their relation to the concepts listed above.

Research in Eye-Head Coordination and the Role of Fixational Head Movements in 3-Dimensional Visual Perception

Recent Research (2010 - current)

I am conducting this research in the Active Perception Laboratory (APLAB) at Boston University under the supervision of Dr. Michele Rucci. The project aims to investigate how head and eye movements enable 3-dimensional visual perception. Standing human subjects who maintained fixation on a distal target provided data for investigating how the spontaneous instability of the body and the head shapes the retinal input. I implemented a method that allowed high-resolution optical tracking of the head and estimation of the spatial positions of the eyes. Using these data, it was possible to reconstruct the retinal image of the fixated target, and of imaginary point-light sources positioned in the visual field, by means of Gullstrand's schematic eye model. Evaluation of motion parallax through computation of relative retinal velocities revealed that the natural instability of the body can generate sufficient motion parallax to perceive the depth of objects in a large part of the visual field. I presented the first set of results from this project at the 11th Vision Sciences Society Annual Meeting, which took place in Florida in 2011. This result can be particularly interesting for machine vision: cameras placed on mobile platforms are subject to vibrations, which may cause instability in the captured sequence of images. This inherent instability can be exploited to discriminate depth, which in turn would simplify the segmentation of the visual scene into objects.
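The core of the parallax computation can be illustrated with a minimal sketch. Assuming only a small lateral head translation and ignoring eye rotation (a simplification; the actual reconstruction used full 3D head tracking and Gullstrand's schematic eye model), the retinal angular velocity of a point scales inversely with its depth, so two points at different depths move at different retinal speeds:

```python
import numpy as np

def retinal_angular_velocity(v_lateral, depth):
    """Angular velocity (rad/s) on the retina of a point at `depth` (m),
    induced by a lateral head translation of `v_lateral` (m/s).
    Small-angle approximation; eye rotation is ignored."""
    return v_lateral / depth

def motion_parallax(v_lateral, depth_a, depth_b):
    """Relative retinal velocity (rad/s) between two points at different
    depths -- the motion-parallax depth cue."""
    return abs(retinal_angular_velocity(v_lateral, depth_a)
               - retinal_angular_velocity(v_lateral, depth_b))

# Example: 5 mm/s of postural sway, targets at 0.5 m and 1.0 m
dw = motion_parallax(0.005, 0.5, 1.0)   # relative velocity in rad/s
dw_deg = np.degrees(dw)                 # same, in deg/s
```

With these illustrative numbers, a few millimeters per second of sway already produces a relative retinal velocity of roughly a quarter of a degree per second between targets at 0.5 m and 1 m.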

An additional avenue I am following for this project involves investigating the head-eye coordination, and the resulting retinal image motion, of observers who were instructed to fixate several visual targets placed on a table in sequence. These data were collected by Dr. Julie Epelboim and her colleagues under the supervision of Dr. Robert M. Steinman at the University of Maryland several years ago. The head and the eyes were tracked in space via a revolving magnetic-field monitor (RMFM), which provided 1/60th of a degree of resolution. The head's translation was recorded by a microphone array with an accuracy under 1 mm. Using these data, I reconstruct dynamic retinal signals and quantify the motion parallax available as a depth cue.

In parallel, I am participating in the construction of a new revolving magnetic-field monitor in the APLAB. Combined with the optical motion-tracking system, it will allow us to conduct simultaneous measurements of eye and head movements.

Research in Echolocation and Spatial Perception

Recent Research (2008 - 2009)

Recently, I conducted experiments designed to understand the adaptive control of sonar vocalizations by echolocating bats as they track approaching prey in space in the presence of multiple interfering objects. These experiments were completed in the Batlab at the University of Maryland, College Park, under the supervision of Dr. Cynthia F. Moss. The time-frequency structure and the temporal patterns of the sonar vocalizations, combined with measurements of the sonar beam patterns, revealed clues about how bats build and maintain a spatial representation of their environment and control the flow of spatial information and spatial attention. A link to the report of our findings (Spatial perception and adaptive sonar behavior) can be found on my publications page.

I also participated in a project that aims to understand how echolocating bats integrate the information received from consecutive sonar vocalizations to discriminate objects.

Past Research

As a biological airborne sonar system, a bat accomplishes target detection, identification, localization, and tracking, and finally captures or avoids the target. Engineered radar/sonar systems are designed to realize only a subset of these functions, and with limited capability. Bats adapt to changing environmental conditions to a degree that engineered systems currently cannot achieve. For designing systems capable of behaving in complex environments, bats could therefore be an ideal model system to study.

My previous research in bat echolocation ran mainly in two related directions. Bats monitor their environment by making ultrasonic vocalizations and listening to the echoes reflected from objects in the scene. Localizing the position of an object is essential for a bat's survival. Echoes interact acoustically with the head and external ears in a direction-dependent manner. The transformation resulting from this interaction creates the physical cues necessary to localize the source of the echo. These direction-dependent transformations can be modeled simply as time-independent transfer functions of linear systems. The project I was involved in aimed to measure and analyze these transfer functions, known as head-related transfer functions (HRTFs), to learn more about sound localization by bats. I investigated the sound-localization cues that are likely to be used by evaluating the frequency structure of the HRTFs. Hypotheses resulting from these investigations were tested via psychoacoustic experiments involving echolocating bats. Some interesting results of this research can be found here.

I was also employing computational methods to understand how the auditory system might compute sound localization. An example of these efforts is a binaural model for sound localization based on the bat HRTF, showing that bats can use interaural level difference (ILD) cues to localize sound sources. Pursuing this avenue brought up an interesting question, which motivated the computational part of my research: how could an initially naïve animal, unfamiliar with the spatial nature of sound, learn to localize sound sources? Unlike common approaches to sound-localization modeling, which assume the availability of acoustic cues for sound location, my approach attempted to circumvent this assumption and ground the problem of auditory space learning. It employs sensorimotor contingencies for the learning of auditory space. More detail about this approach and its motivation can be found here (see also a related lay-language paper here).
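The ILD cue itself is simple to illustrate. The toy sketch below uses made-up, direction-dependent ear gains standing in for measured HRTF magnitudes (these numbers are purely illustrative, not the actual model or real bat data): each ear receives the source more strongly from its own side, the level ratio between the ears varies with azimuth, and localization amounts to inverting that mapping:

```python
import numpy as np

# Hypothetical direction-dependent ear gains (a crude stand-in for an
# HRTF magnitude at one frequency; NOT measured bat data).
def ear_gain(azimuth_deg, ear_offset_deg):
    return np.cos(np.radians(azimuth_deg - ear_offset_deg)) ** 2 + 0.1

def ild_db(azimuth_deg, ear_sep_deg=60.0):
    """Interaural level difference (left re right, dB) for a source
    at the given azimuth (0 deg = straight ahead)."""
    left = ear_gain(azimuth_deg, -ear_sep_deg / 2)
    right = ear_gain(azimuth_deg, +ear_sep_deg / 2)
    return 20 * np.log10(left / right)

def localize(observed_ild, grid=np.linspace(-90, 90, 721)):
    """Invert the ILD cue by nearest-neighbor lookup over an azimuth
    grid. Note the cue is ambiguous in the far periphery, where the
    ILD folds back toward zero -- one reason a single cue at a single
    frequency is not enough for unambiguous localization."""
    return grid[np.argmin(np.abs(ild_db(grid) - observed_ild))]
```

A source straight ahead gives zero ILD by symmetry, while a lateral source produces an ILD that this lookup maps back to its azimuth within the unambiguous range.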

The second set of studies aimed to understand the spatial properties of the bat's outgoing sonar calls. The goal was to measure the sonar beam profile across the frequency range of 20 kHz to 100 kHz from a freely behaving echolocating bat. Unlike earlier studies on this species, this study employed an approach that allows sonar beam-pattern measurements without restraining the bat or electrically stimulating the brainstem to elicit sonar vocalizations. My preliminary studies suggest that the outgoing ultrasound beamshape of a bat is not constant but varies from vocalization to vocalization. For more detail on this project, see the Measuring Sonar Beam Shape of Echolocating Bats page; a full description of the study can be found in my dissertation. Follow-up studies are focusing on how the sonar beamshape changes, whether this change is controlled by the bat, and, if so, what the implications are for echolocation.
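The basic geometry behind such a measurement can be sketched as follows. Assuming a planar microphone array with known positions and a known bat position at the moment of a call, and ignoring atmospheric absorption (which a real ultrasonic measurement must compensate for), one can back out the directional emission level from the received levels by removing the spherical spreading loss; this sketch is an illustration of the principle, not the study's actual analysis pipeline:

```python
import numpy as np

def beam_pattern(mic_positions, bat_position, levels_db):
    """Estimate an emission beam pattern from one array snapshot.

    mic_positions: (N, 2) microphone xy positions (m)
    bat_position:  (2,) bat xy position at the time of the call
    levels_db:     (N,) received levels for one call, one frequency (dB)

    Adds back the spherical spreading loss 20*log10(r) so remaining
    level differences reflect the directionality of the emitted call;
    atmospheric absorption is ignored in this sketch.
    """
    vec = mic_positions - bat_position
    r = np.linalg.norm(vec, axis=1)                    # bat-to-mic range
    azimuth = np.degrees(np.arctan2(vec[:, 1], vec[:, 0]))
    emitted_db = levels_db + 20 * np.log10(r)          # undo spreading
    emitted_db -= emitted_db.max()                     # peak = 0 dB
    return azimuth, emitted_db

# Toy example: two mics straight ahead at 1 m and 2 m, one off-axis at 1 m
mics = np.array([[1.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
levels = np.array([100.0, 100.0 - 20 * np.log10(2.0), 90.0])
az, pattern = beam_pattern(mics, np.array([0.0, 0.0]), levels)
```

In the toy example, the two on-axis microphones recover the same emitted level once spreading is removed, while the off-axis microphone reads 10 dB down, i.e., the beam is directional toward 0 degrees.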

The aforementioned work was conducted in the Batlab at the University of Maryland, College Park, under the supervision of Dr. Cynthia F. Moss.