Active Projects

Robots Understanding Their Actions by Imagining Their Effects (IMAGINE)

Duration: 01.2017 - 12.2020

Funded by: European Union, H2020-ICT

Budget: ~365K Euro

Brief Information

Today's robots are good at executing programmed motions, but they do not understand their actions in the sense that they could automatically generalize them to novel situations or recover from failures. IMAGINE seeks to enable robots to understand the structure of their environment and how it is affected by their actions. The core functional element is a generative model based on an association engine and a physics simulator: "understanding" is demonstrated by the robot's ability to predict the effects of its actions, before and during their execution. This scientific objective is pursued in the context of recycling electromechanical appliances. Current recycling practices do not automate disassembly, which exposes humans to hazardous materials, encourages illegal disposal, and creates significant threats to environment and health, often in third countries. IMAGINE will develop a TRL-5 prototype that can autonomously disassemble prototypical classes of devices, generate and execute disassembly actions for unseen instances of similar devices, and recover from certain failures.
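The prediction-by-simulation idea can be illustrated with a short sketch. The snippet below is not IMAGINE code; it is a minimal, hypothetical example using the PyBullet physics engine to "imagine" the outcome of a push on a small cube before executing it on a robot.

import pybullet as p
import pybullet_data

# Hypothetical illustration: predict the effect of a push by rolling a
# physics simulator forward, instead of executing the action on the robot.
p.connect(p.DIRECT)                                   # headless simulation
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.loadURDF("plane.urdf")
cube = p.loadURDF("cube_small.urdf", basePosition=[0.0, 0.0, 0.025])

# Candidate action: a lateral push, modelled as an external force on the cube.
push_force = [5.0, 0.0, 0.0]                          # Newtons, along +x

# Roll the simulation forward for one second and read out the predicted effect.
for i in range(240):
    if i < 20:                                        # apply the push briefly at the start
        p.applyExternalForce(cube, -1, push_force, [0, 0, 0.025], p.WORLD_FRAME)
    p.stepSimulation()

predicted_pos, predicted_orn = p.getBasePositionAndOrientation(cube)
print("Predicted cube position after the push:", predicted_pos)
p.disconnect()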

Affordance-Guided Complex Manipulation Learning Framework

Duration: 02.2017 - 01.2019

Funded by: TUBITAK 2232, Return Fellowship Award

Budget: 108,000 TL

Brief Information

With this project, we aim to build an advanced manipulation skill system supported by affordances and sensory feedback, by learning and modelling the affordances that the environment offers to the robot. Since actions such as 'holding', 'carrying', and 'placing' are typical in such environments, we plan to transfer the corresponding motions to the robot through learning by demonstration. After the robot has learned the manipulation skills required for semi-structured environments in this way, it should learn how visual and other affordances offered by the environment affect the execution of these skills.

Imagining Other's Goals in Cognitive Robots (IMAGINE - COG)

Duration: 1 year

Funded by: Bogazici University Research Fund

Budget: ~50K TL

Brief Information

In this research project, we aim to design and implement an effective robotic system that can infer others' goals from their incomplete action executions and help them achieve these goals. Our approach is inspired by helping behaviour observed in infants; it exploits the robot's own sensorimotor control and affordance detection mechanisms to understand demonstrators' actions and goals, and has parallels with how the human brain controls and understands motor programs.

Learning in Cognitive Robots

Funded by: Bogazici University Research Fund

Budget: ~55K Euro

Brief Information

The aim of this project is to form a new cognitive and developmental robotics research group at Bogazici University, with a special emphasis on intelligent and adaptive manipulation. This start-up fund will be used to equip the laboratory with the essential setup: a human-friendly robotic system for manipulation (the Baxter robot), a number of sensors for perception, and a workstation for computation and control.

Active Student Projects for Undergraduate Students

Will be updated when semester starts.

Open Student Projects for Undergraduate Students

NOTE: The topics are not limited to the ones below. You are free to suggest your own project with the state-of-the-art robots (UR10, Baxter, NAO, etc.) in our lab!

Robot Simulation and Motion Control for Sawyer:

Operating a real robot can be cumbersome, risky and slow, so it is often helpful to simulate the robot first. Moreover, if a robot needs to move its hand to a desired target, it should not simply follow an arbitrary path from its current position, because it may hit an obstacle; instead, it needs to plan a path from its current pose to the target pose. The objective of this project is to create a realistic kinematic, volumetric and dynamic model of the Sawyer robot platform, to adapt a number of motion planning packages for Sawyer, and finally to implement a benchmark task such as a pick-and-place operation across an obstacle.
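As an illustration of what the motion planning part could look like, here is a minimal, hypothetical sketch using the MoveIt Python interface; the planning group name "right_arm", the frame name and the target pose are placeholders, not part of the actual project.

import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose, PoseStamped

# Hypothetical sketch: plan and execute a collision-free motion to a target pose.
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("sawyer_planning_demo")

scene = moveit_commander.PlanningSceneInterface()
group = moveit_commander.MoveGroupCommander("right_arm")   # placeholder group name

# Add a box obstacle to the planning scene so the planner routes around it.
box_pose = PoseStamped()
box_pose.header.frame_id = "base"                          # placeholder frame name
box_pose.pose.position.x = 0.6
box_pose.pose.position.z = 0.2
box_pose.pose.orientation.w = 1.0
scene.add_box("obstacle", box_pose, size=(0.1, 0.4, 0.4))

# Target pose for the end effector on the far side of the obstacle.
target = Pose()
target.position.x = 0.8
target.position.z = 0.3
target.orientation.w = 1.0

group.set_pose_target(target)
success = group.go(wait=True)                              # plan and execute in one call
group.stop()
group.clear_pose_targets()
rospy.loginfo("Motion %s", "succeeded" if success else "failed")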

Graphical User Interface for Baxter Robot:


The aim of this project is to implement a GUI to control the Baxter robot. Through its user interface, we expect to move the joints separately, move the hand to a specific position, open and close the grippers, and display sensor readings such as force/torque, camera images and depth data.
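The GUI's buttons would ultimately call into the Baxter SDK; the following is a minimal, hypothetical sketch of the kind of backend calls such an interface could wrap (joint names and angle offsets are illustrative only).

import rospy
import baxter_interface

# Hypothetical backend calls a Baxter GUI could wrap.
rospy.init_node("baxter_gui_backend")

limb = baxter_interface.Limb("right")
gripper = baxter_interface.Gripper("right")
gripper.calibrate()

# Move a single joint: read current angles, change one of them, command the limb.
angles = limb.joint_angles()
angles["right_s0"] += 0.1                  # illustrative 0.1 rad offset
limb.move_to_joint_positions(angles)

# Open and close the electric gripper.
gripper.open()
rospy.sleep(1.0)
gripper.close()

# Read the end-effector pose (could be shown in the GUI).
print(limb.endpoint_pose())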

Gripper Design and Production for Baxter:

Baxter already has two kinds of gripper, namely a suction gripper and an electric gripper. The electric gripper is a parallel-jaw gripper that can lift approximately 2.25 kg; its two parallel fingers are used for grasping and lifting objects in tasks like pick and place. The suction gripper is for attaching one or more vacuum cups. There are studies on new, 3D-printable designs for more efficient grasping (Yale Open Hand Project, Giulia et al. IROS 2015, Lionel Birglen RSS 2017, etc.). We also aim to produce a gripper for our Baxter robot using the 3D printer in our lab. We have everything but a candidate :) You can contact us for detailed information. (BM 32 or BM 36)

Previously Completed Undergraduate Student Projects

SERVE: See-Listen-Plan-Act:

Completed by: Özer Biber and Abdullah Muaz Ekici

Term Completed: Fall 2017 & Spring 2018 as Undergraduate Final Project

The See-Listen-Plan-Act project enables human-robot interaction through speech recognition and object detection. Our Baxter robot listens for commands and perceives its environment with the Kinect camera attached to its waist. According to the given command, Baxter performs an appropriate action while adapting to dynamic changes in the environment.
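The listening stage can be illustrated with a short, hypothetical sketch using the SpeechRecognition Python package; this is not the project's actual code, and the command words and their mapping to action names are placeholders.

import speech_recognition as sr

# Hypothetical command listener: recognize a spoken command and map it to an action name.
COMMANDS = {"pick": "pick_object", "place": "place_object", "stop": "stop_motion"}

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Say a command (pick / place / stop)...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio).lower()
    action = next((a for w, a in COMMANDS.items() if w in text), None)
    print("Heard:", text, "-> action:", action)
except sr.UnknownValueError:
    print("Could not understand the command.")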




Adapting Full Body Synergies:

Completed by: Ezgi Tekdemir

Term Completed: Fall 2017 & Spring 2018 as Undergraduate Final Project

The Adapting Full Body Synergies project analyses how humans control motor movements. Instead of controlling all degrees of freedom separately, the brain tends to use combinations of motor synergies to perform a movement, which is more efficient. After confirming this capability, the synergies are extracted to allow further analysis of human learning; this is an important question for understanding how the human central nervous system adapts to difficult tasks. First, data from a periodic movement, walking, is collected as the baseline reference. Then, the same movement is performed multiple times with a constraint imposed on it. The aim is to see whether the synergies extracted from the constrained movement eventually converge to the baseline synergies once the subject learns how to perform the constrained version of the movement.
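Synergy extraction of this kind is commonly done with non-negative matrix factorization; the project description does not specify the exact method, so the following is only a generic, hypothetical sketch that assumes a matrix of non-negative muscle-activation envelopes (time steps x muscles).

import numpy as np
from sklearn.decomposition import NMF

# Hypothetical synergy extraction: factor a non-negative activation matrix
# X (time x muscles) into activations over time and synergy vectors.
rng = np.random.default_rng(0)
n_time, n_muscles, n_synergies = 500, 8, 4

# Placeholder data standing in for measured muscle-activation envelopes.
X = np.abs(rng.normal(size=(n_time, n_muscles)))

model = NMF(n_components=n_synergies, init="nndsvd", max_iter=500)
activations = model.fit_transform(X)        # (time x synergies)
synergies = model.components_               # (synergies x muscles)

# Variance accounted for: how well the low-dimensional synergies reconstruct X.
reconstruction = activations @ synergies
vaf = 1.0 - np.sum((X - reconstruction) ** 2) / np.sum(X ** 2)
print("Synergy matrix shape:", synergies.shape, "VAF:", round(vaf, 3))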

Recovering Cost Function Behind Dexterous Manipulation Actions Using Inverse Reinforcement Learning

Completed by: Pınar Baki

Term Completed: Spring 2018 as Undergraduate Final Project

Learning complex tasks that require dexterous manipulation is very challenging in robotics, and the issue is gaining importance as robots enter human environments and industrial settings. Solving such tasks requires nontrivial sensorimotor skills, which are hard to program manually. Specifying a reward function by hand for such a task is also difficult: there are many features to consider, and even a human expert cannot easily specify them exactly and assign them weights. In this project, we therefore use inverse reinforcement learning to recover a cost function from the pushing behaviour of a human expert. First, we record orientation and position data from human experts while they push different objects along a specified trajectory. Then, using inverse reinforcement learning algorithms, we extract a reward function from these data.
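The project description does not name a specific IRL algorithm, so the sketch below only illustrates the general idea with maximum-entropy IRL on a toy 1-D chain MDP; the environment, feature choice and demonstrations are entirely hypothetical and stand in for the real pushing data.

import numpy as np

# Toy maximum-entropy IRL sketch on a 1-D chain MDP (hypothetical example,
# not the project's setup, which used human pushing demonstrations).
n_states, n_actions = 5, 2          # actions: 0 = move left, 1 = move right
gamma, horizon = 0.9, 10

def step(s, a):
    return max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)

features = np.eye(n_states)         # one-hot state features; reward r = features @ w

def soft_value_iteration(r):
    # Soft (log-sum-exp) value iteration; returns a stochastic policy pi[s, a].
    v = np.zeros(n_states)
    for _ in range(100):
        q = np.array([[r[s] + gamma * v[step(s, a)] for a in range(n_actions)]
                      for s in range(n_states)])
        v = np.log(np.exp(q).sum(axis=1))
    return np.exp(q - v[:, None])

def expected_feature_counts(pi, start=0):
    # Roll the start distribution forward under pi and accumulate expected features.
    d = np.zeros(n_states); d[start] = 1.0
    counts = np.zeros(n_states)
    for _ in range(horizon):
        counts += d @ features
        d_next = np.zeros(n_states)
        for s in range(n_states):
            for a in range(n_actions):
                d_next[step(s, a)] += d[s] * pi[s, a]
        d = d_next
    return counts

# Expert demonstrations: trajectories that walk to the right-most state and stay there.
expert_trajs = [[0, 1, 2, 3, 4, 4, 4, 4, 4, 4]] * 3
expert_counts = np.mean([features[traj].sum(axis=0) for traj in expert_trajs], axis=0)

# Gradient ascent on reward weights: match expert and policy feature expectations.
w = np.zeros(n_states)
for _ in range(200):
    pi = soft_value_iteration(features @ w)
    w += 0.05 * (expert_counts - expected_feature_counts(pi))

print("Recovered reward weights per state:", np.round(w, 2))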

NAO Robot Avatar:


Completed by: Yunus Seker and Mehmet Özdemir

Term Completed: Spring 2017 as Undergraduate Final Project

Selected as “Best Undergraduate Project” of Spring 2017

In this project, a system was implemented that enables seeing through NAO's eyes and moving with NAO's body. NAO's motions are copied using an adapted whole-body tracking system, and the robot's camera images are displayed on a head-mounted display. This system enables full embodiment and supports a very fruitful research direction: using robot avatars to study the underlying mechanisms of human sensorimotor processes by changing different aspects of the embodiment.
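At the robot end, avatar control of this kind typically goes through the NAOqi API; the snippet below is only a minimal, hypothetical illustration (the IP address, joint names and angles are placeholders) of fetching one camera frame and sending tracked joint angles.

from naoqi import ALProxy

ROBOT_IP, ROBOT_PORT = "192.168.1.10", 9559    # placeholder address

# Hypothetical avatar loop: read one camera frame and send tracked joint angles.
motion = ALProxy("ALMotion", ROBOT_IP, ROBOT_PORT)
video = ALProxy("ALVideoDevice", ROBOT_IP, ROBOT_PORT)

motion.wakeUp()

# Subscribe to the top camera: 320x240 (resolution=1), RGB colour space (11), 30 fps.
handle = video.subscribeCamera("avatar_view", 0, 1, 11, 30)
frame = video.getImageRemote(handle)           # [width, height, ..., raw bytes, ...]
width, height, raw = frame[0], frame[1], frame[6]
print("Received frame:", width, "x", height, "bytes:", len(raw))
video.unsubscribe(handle)

# Send a head pose coming from the (hypothetical) body-tracking system.
tracked_angles = {"HeadYaw": 0.3, "HeadPitch": -0.2}
motion.setAngles(list(tracked_angles.keys()), list(tracked_angles.values()), 0.2)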

Facial Expressions and Object Tracking for Baxter Robot:

Completed by: Bilgehan Nal

Term Completed: Summer 2017 as Summer Internship

The aim of the project is to give Baxter a face, including a mouth, eyes and eyebrows, for different facial expressions. In addition, the robot is enabled to track its own end effector with its eyes, or, with the help of its sonar sensors, an object around it. The face can then be controlled from a PC or Android application.
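On Baxter, such a face is typically shown by publishing an image to the head display topic; the following is a minimal, hypothetical sketch of that step, where the drawn face itself is just a placeholder.

import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

# Hypothetical sketch: draw a simple face and publish it to Baxter's head screen.
rospy.init_node("baxter_face_display")
pub = rospy.Publisher("/robot/xdisplay", Image, latch=True, queue_size=1)

# Baxter's screen is 1024x600; draw a placeholder face with OpenCV.
face = np.full((600, 1024, 3), 255, dtype=np.uint8)
cv2.circle(face, (362, 250), 40, (0, 0, 0), -1)                      # left eye
cv2.circle(face, (662, 250), 40, (0, 0, 0), -1)                      # right eye
cv2.ellipse(face, (512, 420), (150, 60), 0, 0, 180, (0, 0, 0), 10)   # smiling mouth

msg = CvBridge().cv2_to_imgmsg(face, encoding="bgr8")
pub.publish(msg)
rospy.sleep(1.0)   # give the latched publisher time to deliver the image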

HRI: Robot Control and Communication through Speech:


Completed by: Bilgehan Nal

Term Completed: Summer 2017 as Summer Internship

The aim of this project is to integrate existing speech processing and synthesis tools for communication with the Baxter robot. English and Turkish are used for communication, for setting tasks and for getting information from the robot. The robot's voice-based communication skills are reinforced with various interfaces, including emotional faces displayed on the tablet.
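The synthesis side can be illustrated with a short, hypothetical sketch; here the pyttsx3 package is used only as a stand-in for whichever text-to-speech tool the project actually integrated.

import pyttsx3

# Hypothetical text-to-speech stand-in for the robot's spoken replies.
engine = pyttsx3.init()
engine.setProperty("rate", 150)            # speaking rate in words per minute
engine.say("I have picked up the object.")
engine.runAndWait()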