Projects

Active Projects

Title: Abstract Reasoning and Life-Long Learning via Symbol and Rule Discovery
Funded by: Scientific and Technological Research Council of Turkey (TUBITAK, 1001)
Abstraction and abstract reasoning are among the most essential characteristics of high-level intelligence that distinguish humans from other animals. High-level cognitive skills can only be achieved through abstract concepts, symbols representing these concepts, and rules that express the relationships between symbols. This project aims to enable a robot to self-discover abstract concepts, symbols, and rules that allow complex reasoning. If robots can achieve such abstract reasoning on their own, they can perform new tasks in completely novel environments by updating their cognitive skills or by discovering new symbols and rules. If the objectives of this project are achieved, scientific foundations will be laid for robotic systems that learn long-lasting symbols and rules through self-interaction with the environment and express various sensorimotor and cognitive tasks in a single framework.


Title: Robots Understanding Their Actions by Imagining Their Effects
Acronym: IMAGINE
Duration: 01.2017 – 12.2020
Funded by: European Union, H2020-ICT
Code: 731761
Budget: 365,000 Euro
Today’s robots are good at executing programmed motions, but they do not understand their actions in the sense that they could automatically generalize them to novel situations or recover from failures. IMAGINE seeks to enable robots to understand the structure of their environment and how it is affected by their actions. The core functional element is a generative model based on an association engine and a physics simulator. “Understanding” is given by the robot’s ability to predict the effects of its actions before and during their execution. This scientific objective is pursued in the context of recycling electromechanical appliances. Current recycling practices do not automate disassembly, which exposes humans to hazardous materials, encourages illegal disposal, and creates significant threats to the environment and health, often in third countries. IMAGINE will develop a TRL-5 prototype that can autonomously disassemble prototypical classes of devices, generate and execute disassembly actions for unseen instances of similar devices, and recover from certain failures.


Title: Wearable Flexible Sensor Supported Lower Body Exoskeleton System
Duration: 02.2020 – 01.2023
Funded by: Scientific and Technological Research Council of Turkey (TUBITAK)
Code:
Bogazici Budget: 140,000 Euro
In this project, a novel wearable exoskeleton with flexible clothing will be developed for paraplegic persons who have lost their lower-extremity motor functions due to low back pain, paralysis, and similar conditions. The prototype will be designed as a rigid-link exoskeleton actuated by series-elastic actuators. The user’s physical state will be observed using soft elements attached to sensorized wearable clothing, which will be used in conjunction with the exoskeleton. Trajectory planning will be formulated as a human-to-robot skill transfer problem that minimizes metabolic cost. Stability and balance analyses will be performed, and environmental factors will be estimated, to provide safe walking support to the user through a hierarchical control structure.


Title: Developmentally and biologically realistic modeling of perspective invariant action understanding
Duration: 11.2019 – 10.2020
Funded by: International Joint Research Promotion Program, Osaka University
Code:
Bogazici Budget: 10,000 Euro
In this project, we study the novel hypothesis that action understanding can develop bottom-up with an ‘effect’-based representation of actions, without requiring any coordinate transformation. We call the computational model to be developed the ‘Effect-Based Action Understanding Model’. We adopt a computational brain-modeling approach and validate our findings on embodied systems (i.e., robots). Once this skill has developed bottom-up, we argue that action understanding has the power to facilitate the development of the perspective-transformation skill (as opposed to the general belief). In the project, the hypothesis will be tested on robots, and the models will be implemented in a biologically plausible way. This way, our model will be able to address the so-called mirror neurons, which are believed to represent the actions of others and of the self in a multimodal way.


Title: A Computational Model of Event Learning and Segmentation: Event Granularity, Sensory Reliability and Expectation
Acronym: SEGMENT
Duration: 06.2020 – 05.2021
Funded by: Bogazici University Research Fund
Budget: 54,000 TL (7,200 Euro)
Code: 16913
‘Event’ is a fuzzy term that refers to a closed spatio-temporal unit. The aim of the project is to develop a computational model that can learn event models, use them to segment ongoing activities at varying granularities, and compare its performance with that of human subjects. In doing so, we aim to clarify, through experiments with our computational model, how the reliability of sensory information and expectation affect event segmentation performance, and to develop a model that can learn, segment, and represent new events while remaining robust to noise. In addition to comparing human event segmentation performance with that of the model, we plan to design a new validation method to increase the reliability of the model’s assumptions, both for validating the psychological theory and for assessing how well the model captures human event representations. The results of our experiments and our computational model will be used to validate predictions of a psychological theory, namely Event Segmentation Theory, and to develop robotic models capable of simulating higher-level cognitive processes such as action segmentation at different granularities and the formation of concepts representing events.
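Event Segmentation Theory links event boundaries to transient spikes in prediction error: a forward model predicts the next observation, and a boundary is flagged when the error rises well above its recent level. The sketch below illustrates this mechanism only; the predictor, threshold rule, and all names are illustrative assumptions, not the project's actual model.

```python
import numpy as np

def segment_events(observations, predictor, k=2.0, window=20):
    """Flag an event boundary whenever the forward model's prediction
    error exceeds the recent mean by k standard deviations."""
    errors, boundaries = [], []
    for t in range(1, len(observations)):
        pred = predictor(observations[t - 1])          # predict next observation
        err = np.linalg.norm(observations[t] - pred)   # prediction error
        recent = errors[-window:]
        if len(recent) >= 5:
            mu, sigma = np.mean(recent), np.std(recent) + 1e-8
            if err > mu + k * sigma:                   # transient error spike
                boundaries.append(t)
        errors.append(err)
    return boundaries

# Toy usage: a piecewise-linear signal with a regime change at t=100,
# using an identity predictor (predict "no change").
obs = np.concatenate([np.linspace(0, 1, 100), np.linspace(5, 6, 100)])
print(segment_events(obs.reshape(-1, 1), predictor=lambda x: x))  # -> [100]
```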


Title: Design of Cognitive Mirroring Systems Based on Predictive Coding
Duration: 01.2019 – 03.2020
Funded by: Japanese Science and Technology Agency (JST)
Code:
Bogazici Budget: 45,000 Euro

Completed Projects

Title: Imagining Other’s Goals in Cognitive Robots
Acronym: IMAGINE-COG++
Duration: 06.2018 – 06.2019
Funded by: Bogazici University Research Fund
Budget: 36,000 TL
Code: 18A01P5
In this research project, we aim to design and implement an effective robotic system that can infer others’ goals from their incomplete action executions and help others achieve these goals. Our approach is inspired by the helping behavior observed in infants; it exploits the robot’s own sensorimotor control and affordance detection mechanisms to understand demonstrators’ actions and goals, and it has similarities with human brain processing related to the control and understanding of motor programs. Our system will follow a developmental progression similar to that of infants, whose performance in inferring the goals of others’ actions is closely linked to the development of their own sensorimotor skills. At the end of this project, we plan to verify whether our developmental goal-inference and helping strategy is effective through human-robot interaction experiments with the upper-body Baxter robot in different tasks.


Title: Affordance-Guided Complex Manipulation Learning Framework
Duration: 03.2017 – 02.2019
Funded by: TUBITAK 2232, Return Fellowship Award
Code: 117C016
Budget: 108,000 TL
In this project, we aim to build an advanced manipulation skill system supported by affordances and sensory feedback, by learning and modeling the affordances the environment offers to the robot. Since actions such as grasping, carrying, and placing are typical in such environments, we plan to transfer the corresponding movements to the robot via learning by demonstration. After learning the manipulation skills required for semi-structured environments in this way, the robot should learn how the visual and other affordances offered by the environment affect the execution of these skills.


Title: Learning in Cognitive Robots
Duration: 08.2016 – 08.2017
Funded by: Bogazici University Research Fund
Budget: 55,000 Euro
The aim of this project is to form a new cognitive and developmental robotics research group at Bogazici University, with a special emphasis on intelligent and adaptive manipulation. This start-up fund will be used to equip the laboratory with the most important and necessary setup, which includes a human-friendly robotic system for manipulation (a Baxter robot), a number of sensors for perception, and a workstation for computation and control.

Open student projects for CMPE 492

The topics are not limited to the ones below. You are free to suggest your own project description with the state-of-the-art robots (Baxter, Sawyer, NAO) in our lab!


Robot simulation and motion control for Sawyer: Operating a real robot can be cumbersome, risky, and slow, so it is often helpful to simulate the robot first. Moreover, if a robot needs to move its hand to a desired target, it should not simply follow any path from its current position, because it may hit an obstacle; it needs to plan a path from its current pose to the target pose. The objective of this project is to create a realistic kinematic, volumetric, and dynamic model of the Sawyer robot platform, adapt a number of motion planning packages to Sawyer, and finally implement a benchmark task such as a pick-and-place operation across an obstacle, as sketched below.
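As a starting point, planning could go through MoveIt's Python interface. The sketch below is minimal and uses assumed names: the planning group "right_arm", the planner ID, and the target pose are placeholders that depend on the actual Sawyer MoveIt configuration.

```python
import sys
import rospy
import moveit_commander

# Minimal collision-aware motion sketch using MoveIt's Python API.
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("sawyer_motion_demo")

group = moveit_commander.MoveGroupCommander("right_arm")   # assumed group name
group.set_planner_id("RRTConnectkConfigDefault")           # sampling-based planner
group.set_pose_target([0.6, -0.2, 0.3, 0.0, 1.0, 0.0, 0.0])  # x y z qx qy qz qw

plan = group.plan()      # plan a collision-free path (return type varies by MoveIt version)
group.go(wait=True)      # execute the planned trajectory
group.stop()
group.clear_pose_targets()
```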


Graphical User Interface for Baxter Robot: The aim of this project is to implement a GUI for controlling the Baxter robot. Through its user interface, we expect to move joints separately, move the hand to a specific position, open and close the grippers, and display sensor readings such as force/torque, camera, and depth data. A sketch of the command layer such a GUI could call into is given below.
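The following is a minimal backend sketch built on the Baxter Python SDK (`baxter_interface`); the joint offset and the wiring to a specific GUI toolkit (e.g., Qt button callbacks) are left open and are assumptions, not project requirements.

```python
import rospy
import baxter_interface

# Command layer a GUI could invoke: each function maps to one button/slider.
rospy.init_node("baxter_gui_backend")
limb = baxter_interface.Limb("right")
gripper = baxter_interface.Gripper("right")
gripper.calibrate()

def nudge_joint(joint_name, delta):
    """Move a single joint by `delta` radians (one GUI slider/button)."""
    angles = limb.joint_angles()          # current joint positions as a dict
    angles[joint_name] += delta
    limb.move_to_joint_positions(angles)

def open_gripper():
    gripper.open()

def close_gripper():
    gripper.close()

nudge_joint("right_s0", 0.1)   # example: rotate the shoulder slightly
```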

Completed Student Projects

Recovering Cost Function Behind Dexterous Manipulation Actions Using Inverse Reinforcement Learning
Completed by: Pınar Baki
Term Completed: Spring 2018 as Undergraduate Final Project
Learning complex tasks that require dexterous manipulation is very challenging in robotics, and the issue is gaining importance as robots enter human environments and industrial settings. Solving such complex tasks requires nontrivial sensorimotor skills that are hard to program manually. Specifying a reward function for such a task by hand is also difficult: there are many features to consider, and even a human expert cannot easily enumerate them exactly and assign them weights. In this project, we therefore use inverse reinforcement learning to recover a cost function from the pushing behavior of a human expert. First, we record orientation and position data from human experts as they push different objects along a specified trajectory. Then, using inverse reinforcement learning algorithms, we extract a reward function from these data, as sketched below.
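For illustration, a simplified maximum-entropy-flavored recovery of linear reward weights by feature-expectation matching might look as follows; this is a sketch of the general technique under assumed inputs, not the exact algorithm used in the project.

```python
import numpy as np

def recover_reward_weights(expert_features, sampled_features,
                           lr=0.1, iters=200):
    """Recover linear reward weights w with R(s) = w . phi(s) by matching
    expert feature expectations (MaxEnt-IRL-style gradient ascent).

    expert_features:  (N, d) features of expert trajectory states
    sampled_features: (M, d) features of states from sampled trajectories
    """
    mu_expert = expert_features.mean(axis=0)
    w = np.zeros(expert_features.shape[1])
    for _ in range(iters):
        # Reweight sampled states by their current reward (softmax) to
        # approximate expected features under the learned reward.
        logits = sampled_features @ w
        p = np.exp(logits - logits.max())
        p /= p.sum()
        mu_learner = p @ sampled_features
        w += lr * (mu_expert - mu_learner)   # gradient of the MaxEnt objective
    return w
```

Here the features phi(s) could encode, e.g., position and orientation deviations from the demonstrated pushing trajectory.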


Adapting Full Body Synergies
Completed by: Ezgi Tekdemir
Term Completed: Fall 2017 & Spring 2018 as Undergraduate Final Project
The Adapting Full Body Synergies project analyzes how humans control motor movements. Instead of controlling all degrees of freedom separately, the brain tends to use combinations of motor synergies to perform a movement, which is more efficient. After confirming this capability in humans, the synergies are extracted for further analysis of human learning, an important question for understanding how the human central nervous system adapts to difficult tasks. First, data from a periodic movement, walking, are collected as the baseline reference. Then, the same movement is performed multiple times with a constraint imposed on it. The aim of this research is to see whether the synergies extracted from the constrained movement eventually converge to the baseline synergies as the subject learns to perform the constrained version of the movement; a sketch of the synergy-extraction step is given below.
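In the synergy literature, extraction is commonly done with non-negative matrix factorization (NMF); whether this project used NMF, and the data shapes below, are assumptions. A sketch with scikit-learn:

```python
import numpy as np
from sklearn.decomposition import NMF

# NMF factorizes rectified EMG as EMG ~ W @ H, where H holds the muscle
# synergy vectors and W their time-varying activation coefficients.
emg = np.abs(np.random.randn(1000, 16))   # placeholder: 1000 samples x 16 muscles

model = NMF(n_components=4, init="nndsvd", max_iter=500)
activations = model.fit_transform(emg)    # (1000, 4) activation coefficients
synergies = model.components_             # (4, 16) synergy vectors

# Convergence toward the baseline synergies can then be tracked with,
# e.g., cosine similarity between constrained and baseline synergies.
def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```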


SERVE: See-Listen-Plan-Act
Completed by: Abdullah Muaz Ekici and Özer Biber
Term Completed: Fall 2017 & Spring 2018 as Undergraduate Final Project
In this project, we integrate state-of-the-art DNN-based object detection and classification systems for perception, existing libraries for speech recognition, a grounded conceptual knowledge base for language interpretation, a planner for reasoning, and robot actuators for achieving the given goals. We investigate the use of the YOLO real-time object detection system, PRAXICON for translating natural-language instructions into the robot knowledge base, the PRADA engine for probabilistic planning, and CaffeNet deep convolutional neural networks fine-tuned for robotic table-top settings.


HRI: Robot control and communication through speech
Completed by: Bilgehan Nal
Term Completed: Summer 2017 as Summer Internship
The aim of this project is to integrate existing speech processing and synthesis tools for communication with the Baxter robot. English and Turkish will be used in communication, both for setting tasks and for getting information from the robot. The robot’s voice-based communication skills will be reinforced with various interfaces, including emotional faces displayed on the tablet. A minimal recognition loop is sketched below.
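For instance, recognition could be built on the `speech_recognition` package; the package choice and the language codes are assumptions, not necessarily the tools used in the project. Recognized text would then be mapped to robot commands.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

def listen_once(language="en-US"):           # use "tr-TR" for Turkish
    """Capture one utterance from the microphone and return its transcript."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio, language=language)
    except sr.UnknownValueError:
        return None                           # speech was unintelligible

print(listen_once())
```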


Facial Expressions and Object Tracking for Baxter Robot
Completed by: Bilgehan Nal
Term Completed: Summer 2017 as Summer Internship
The aim of the project is to equip Baxter with a face, including a mouth, eyes, and eyebrows, for different facial expressions. In addition, the robot is enabled to track its own end effector with its eyes and head or, with the help of its sonar sensors, an object around it. The face is then controlled through an application for PC or Android phone; displaying an expression on Baxter’s screen is sketched below.
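Baxter's head screen is driven by publishing an image to the /robot/xdisplay topic (the screen is 1024x600). A minimal sketch follows; the face image file is a placeholder.

```python
import rospy
import cv2
import cv_bridge
from sensor_msgs.msg import Image

# Publish a pre-drawn facial expression to Baxter's head display.
rospy.init_node("baxter_face")
pub = rospy.Publisher("/robot/xdisplay", Image, latch=True, queue_size=1)

face = cv2.imread("happy_face.png")                     # placeholder expression image
msg = cv_bridge.CvBridge().cv2_to_imgmsg(face, encoding="bgr8")
pub.publish(msg)
rospy.sleep(1)    # give the latched message time to reach the screen
```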


NAO Robot Avatar
Completed by: M. Yunus Seker and Mehmet Ozdemir
Selected as “Best Undergraduate Project” of Spring 2017
Term Completed: Spring 2017 as Undergraduate Final Project
In this project, we implemented a system that enables seeing through NAO’s eyes and moving with NAO’s body. NAO’s motions are copied by an adapted whole-body tracking system, and the robot’s camera images are displayed on a head-mounted display. This system enables full embodiment and will be used for a very fruitful research direction: using robot avatars to understand the underlying mechanisms of human sensorimotor processes by changing different aspects of the embodiment.
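On the motion side, a retargeting loop could stream tracked joint angles to NAO through the NAOqi ALMotion API; the robot address and the tracker output format below are assumptions, not the project's actual pipeline.

```python
from naoqi import ALProxy

# Stream joint angles from a body-tracking system to NAO.
motion = ALProxy("ALMotion", "nao.local", 9559)   # placeholder robot address
motion.setStiffnesses("Body", 1.0)                # enable the joints

def retarget(tracked_angles):
    """Send one frame of tracked joint angles (name -> radians) to NAO."""
    names = list(tracked_angles.keys())
    angles = [tracked_angles[n] for n in names]
    motion.setAngles(names, angles, 0.2)          # 0.2 = fraction of max speed

retarget({"HeadYaw": 0.3, "LShoulderPitch": 1.0})  # example frame
```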