---
permalink: /systems/
layout: single
title: "Systems"
excerpt: "ToBi Systems"
header:
  overlay_image: /assets/images/home_banner.jpg
classes: wide
gallery:
  - url: /assets/images/system/mtc_approach.jpg
    image_path: /assets/images/system/mtc_approach_th.jpg
    title: "Grasp Task"
  - url: /assets/images/system/objrec-rviz.jpg
    image_path: /assets/images/system/objrec-rviz_th.jpg
    title: "Object Recognition visualization"
sidebar:
  - image: /assets/images/system/ma-controller4x.gif
    image_alt: "Robocup at Home logo"
  - title: ""
    text: |
      - [Platforms](#platforms)
---
    
    
Our service robots employ distributed systems with multiple clients sharing information over a network. These clients host numerous software components written in different programming languages.
    
We provide a full specification of the system in our [online catalog platform](https://citkat-citec.bob.ci.cit-ec.net/distribution/tiago-noetic-nightly.xml).
    
    
### Reusable Behavior Modeling
    
    
To model robot behavior in a flexible manner, ToBI uses the [BonSAI](https://github.com/CentralLabFacilities/bonsai) framework. It is a domain-specific library built on the concept of sensors and actuators, which allows linking perception to action.
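The sensor/actuator idea can be illustrated with a minimal sketch. All names below are hypothetical and the sketch is in Python for brevity; the actual BonSAI framework is a Java library with a far richer skill and strategy model.

```python
# Minimal sketch of the sensor/actuator concept: a "skill" links a
# perception source (sensor) to a robot capability (actuator).
# All names here are invented for illustration.

class Sensor:
    """Wraps a perception source and yields readings on demand."""
    def __init__(self, read_fn):
        self._read_fn = read_fn

    def read(self):
        return self._read_fn()


class Actuator:
    """Wraps a robot capability that can be commanded."""
    def __init__(self, act_fn):
        self._act_fn = act_fn

    def act(self, command):
        return self._act_fn(command)


class Skill:
    """A reusable behavior: maps a sensor reading to an actuator command."""
    def __init__(self, sensor, actuator, policy):
        self.sensor, self.actuator, self.policy = sensor, actuator, policy

    def execute(self):
        percept = self.sensor.read()
        return self.actuator.act(self.policy(percept))


# Example: turn a detected person's position into a navigation goal.
person_sensor = Sensor(lambda: {"x": 1.0, "y": 2.0})
nav_actuator = Actuator(lambda goal: f"driving to ({goal['x']}, {goal['y']})")
approach = Skill(person_sensor, nav_actuator, policy=lambda p: p)

print(approach.execute())  # driving to (1.0, 2.0)
```

Because skills only see the sensor/actuator abstraction, the same behavior can be reused across robots whose underlying middleware differs.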
    
    
### Development and Deployment Tool-Chain
    
    
The software dependencies, from operating-system packages down to inter-component relations, are completely modeled in the description of a system distribution, which consists of a collection of so-called recipes. To foster reproducibility, traceability, and potential re-use of ToBI software components, we provide a full specification of the different systems in our [online catalog platform](https://citkat-citec.bob.ci.cit-ec.net/browse/distribution/).
    
    
{% include figure image_path="/assets/images/system/CITK.png" alt="Cognitive Interaction Toolkit" caption="Cognitive Interaction Toolkit" %}
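Since dependencies are modeled explicitly, a build order for the whole distribution is derivable from the recipes. The sketch below illustrates only that idea with invented recipe names; the real CITK tooling operates on actual recipe files, not on Python dictionaries.

```python
# Illustrative only: recipes declaring their dependencies, from which a
# build order (dependencies first) can be derived. Recipe names invented.
from graphlib import TopologicalSorter  # Python 3.9+

recipes = {
    "tobi-behaviors": {"bonsai", "clf-object-recognition"},
    "bonsai": {"ros-base"},
    "clf-object-recognition": {"ros-base"},
    "ros-base": set(),
}

order = list(TopologicalSorter(recipes).static_order())
print(order)  # 'ros-base' comes first, 'tobi-behaviors' last
```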
    
### Object Recognition and Manipulation
    
    
Our current object recognition is based on [YoloX](https://github.com/CentralLabFacilities/clf_object_recognition/tree/yolox). We augment the 2D recognition results with 3D segmentation and superquadric fitting of object primitives. For manipulation, ToBI utilizes the MoveIt Task Constructor framework, which solves manipulation tasks by defining multiple interdependent subtasks.
    
{% include gallery caption="Object Recognition visualization and resulting object primitives for grasp generation" id="gallery" %}
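The core of the 2D-to-3D augmentation step can be sketched with a standard pinhole camera model: a detection's box center plus a depth reading yields a 3D anchor point. This is a simplified illustration, not the actual clf_object_recognition pipeline; the intrinsics and box values below are assumed.

```python
# Hedged sketch: lifting a 2D detection into a 3D point via back-projection
# with a pinhole camera model (intrinsics fx, fy, cx, cy are assumed values).

def deproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) at the given depth into camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def box_center_3d(box, depth_at, intrinsics):
    """Estimate a 3D anchor from a 2D bounding box (x1, y1, x2, y2)."""
    u = (box[0] + box[2]) / 2
    v = (box[1] + box[3]) / 2
    z = depth_at(u, v)  # depth lookup at the box center
    return deproject(u, v, z, *intrinsics)

# Toy example with a constant 0.8 m depth reading.
intrinsics = (525.0, 525.0, 319.5, 239.5)   # fx, fy, cx, cy (assumed)
box = (300, 220, 340, 260)                  # detection from the 2D recognizer
point = box_center_3d(box, depth_at=lambda u, v: 0.8, intrinsics=intrinsics)
print(point)
```

In the real system the segmented 3D points are further condensed into superquadric primitives, which the grasp generation then consumes.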
    
    
<!--
## Videos

<video width="100%" controls>
    <source src="{{ site.baseurl }}/assets/videos/open.webm" type="video/webm">
    Your browser does not support the video tag.
</video>

<video width="100%" controls>
    <source src="{{ site.baseurl }}/assets/videos/rc.webm" type="video/webm">
    Your browser does not support the video tag.
</video>

<video width="100%" controls>
    <source src="{{ site.baseurl }}/assets/videos/tiago_clf.webm" type="video/webm">
    Your browser does not support the video tag.
</video>

<video width="100%" controls>
    <source src="{{ site.baseurl }}/assets/videos/tiago_clf2.webm" type="video/webm">
    Your browser does not support the video tag.
</video>
-->
    
    
# Platforms
    
<div class="entries-{{ entries_layout }}">
  {% include documents-collection.html collection="platforms" sort_by=page.sort_by sort_order="reverse" type="grid" %}
</div>