Quick-Start Guide

Reviewed for release 2.13.0.

This page describes how to get a quick demonstration up and running with your new Orca installation. Before you begin, make sure that you've installed Orca (see Download and Install Orca).

The system we will assemble is shown in the diagram below. It consists of two infrastructure applications (IceGrid Registry and IceStorm) and two Orca components (Laser2d and LaserMon).


Starting Up the Infrastructure

We'll be using the sample configuration files distributed with Orca. As a general rule, you shouldn't work in, or run programs from, the distribution directory. So we'll create a separate directory for each project (or tutorial) and copy the config files into it. We'll put all of these directories in one place: a new directory called sys in the home directory.

$ mkdir ~/sys
IceGrid Registry

The IceGrid Registry provides a Naming service: a mapping from logical interface names to physical addresses. It's currently the only way for components to find one another. We create a separate directory for it to run in, copy a sample config file, create the database directory and start it up.

$ mkdir -p ~/sys/icereg; cd ~/sys/icereg
$ cp [ORCA-SRC]/scripts/ice/icegridregistry.cfg .
$ mkdir db
$ icegridregistry --Ice.Config=icegridregistry.cfg

This starts icegridregistry on your local machine. If you're going to be using Orca a lot, it's probably a good idea to set this up so it's always running on a single machine on the network.
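If you do go that route, one hedged way to keep the registry running in the background on a dedicated machine is sketched below. The log and PID file names are examples only, not part of Orca or Ice.

```shell
# Hedged sketch: run the registry in the background on a dedicated machine.
# Log and PID file names here are examples only.
mkdir -p ~/sys/icereg && cd ~/sys/icereg
nohup icegridregistry --Ice.Config=icegridregistry.cfg > icereg.log 2>&1 &
echo $! > icereg.pid   # remember the PID so the registry can be stopped later
```

To stop it later, `kill $(cat ~/sys/icereg/icereg.pid)`. A proper init script or service manager is a better long-term solution.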

IceStorm Service
For Debian/Ubuntu users: see the entry in the Wiki FAQ on the /etc/hosts file.

IceStorm is an event service, used to decouple publishers from subscribers. Typically, there is one IceStorm service per host. We create a separate directory for it to run in, copy a sample config file, create the database directory and start it up.

If you are using a version of Ice different from the one we recommend, you will need to change the version number of IceStormService in the icebox_icestorm.cfg file (e.g. replace 32 with 31 to go back from v3.2.0 to v3.1.0).
$ mkdir -p ~/sys/icestorm; cd ~/sys/icestorm
$ cp [ORCA-SRC]/scripts/ice/icebox_icestorm.cfg .
$ mkdir stormdb
$ icebox --Ice.Config=icebox_icestorm.cfg
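The version change mentioned above can be made with a one-line sed edit. The IceBox service entry shown below is illustrative (check the actual line in your icebox_icestorm.cfg), and a stand-in file in /tmp is used here so the command can be tried safely:

```shell
# Hedged sketch: switch the IceStormService version number (e.g. 32 -> 31).
# The entry format is illustrative; a stand-in file is used for safety.
printf 'IceBox.Service.IceStorm=IceStormService,32:createIceStorm\n' > /tmp/icebox_icestorm.cfg
sed -i 's/IceStormService,32/IceStormService,31/' /tmp/icebox_icestorm.cfg
cat /tmp/icebox_icestorm.cfg
```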

Pointing Components at this Infrastructure

When an Orca component starts up, it needs to know how to find the services above. This information can go into config files for individual components.

Components in this tutorial use libOrcaIce, which looks up this information in a single configuration file. Create a file ~/.orcarc (in your home directory) and place this text into it:

# Standard Ice Configuration for Orca
Ice.Default.Locator=IceGrid/Locator:default -p 12000

Note that only one piece of information is required: the address of the Registry. You can add other global default properties to this file.
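If you prefer to create the file from the shell, a here-document does the job (same two lines as above):

```shell
# Create ~/.orcarc with the standard Ice configuration for Orca.
cat > ~/.orcarc <<'EOF'
# Standard Ice Configuration for Orca
Ice.Default.Locator=IceGrid/Locator:default -p 12000
EOF
```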

Getting Two Components Talking

Now we'll connect a fake laser component to a laser monitoring component. First, copy the default configuration files for the Laser2d and LaserMon components.

$ mkdir -p ~/sys/quickstart; cd ~/sys/quickstart
$ cp [ORCA-INSTALL]/share/orca/cfg/laser2d.cfg .
$ cp [ORCA-INSTALL]/share/orca/cfg/lasermon.cfg .
The Server

Configure the laser for fake (simulated) operation (or skip this step if you're connected to a real SICK laser). Edit laser2d.cfg, and replace the default library name '' with '', so it looks like this:

Start the Laser2d component.

$ laser2d laser2d.cfg
The Client

Start a new shell, go to the quickstart directory, and fire up the LaserMon component (a laser monitor). No modifications to its configuration file are needed. Note that the name of the configuration file is not specified: it is assumed to be lasermon.cfg. By default, every Orca component assumes componentname.cfg when no argument is given.

$ lasermon

You should see the scans scroll by on the screen. Congratulations, your first two components are talking!
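The default-config-file rule described above can be sketched as a tiny shell function. default_cfg is a hypothetical helper for illustration, not part of Orca:

```shell
# Sketch of libOrcaIce's default-config-file rule: with no file argument,
# a component named "lasermon" assumes "lasermon.cfg".
# default_cfg is a hypothetical helper, not part of Orca.
default_cfg() {
  component="$1"
  cfg="${2:-$component.cfg}"   # fall back to <componentname>.cfg
  echo "$cfg"
}
default_cfg lasermon            # prints lasermon.cfg
default_cfg lasermon other.cfg  # prints other.cfg
```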

To stop components, type Ctrl-C.

If something does not work, check out the FAQ on Orca Wiki.

Connecting Two Components across the Network

Leave the server running. Note the hostname of the computer on which it's running. On Linux, you can find out what it is by typing

$ hostname

In this example we assume that the server's hostname is alpha. You need to replace alpha with the actual name of your computer.

Now you need another computer connected to the first one over a network. Orca needs to be installed there as well. Make sure you can ping the first host. On Linux, do this quick test; you should see something like this:

$ ping alpha
PING alpha 56(84) bytes of data.
64 bytes from alpha: icmp_seq=1 ttl=64 time=2.19 ms
64 bytes from alpha: icmp_seq=2 ttl=64 time=0.378 ms
64 bytes from alpha: icmp_seq=3 ttl=64 time=0.609 ms

Now we'll get the client to connect to the server. On the second computer, create a sys/quickstart directory and copy the lasermon.cfg file as before. Modify the following line, replacing the 'local' platform name with 'alpha'.
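The edit can also be done with sed. The property name below is a stand-in for illustration (edit the matching line in your real lasermon.cfg), and a stand-in file in /tmp is used so the command can be tried safely:

```shell
# Hedged sketch: replace the 'local' platform name with 'alpha'.
# The property name is a stand-in; a stand-in file is used for safety.
printf 'LaserMon.Platform=local\n' > /tmp/lasermon.cfg
sed -i 's/local/alpha/' /tmp/lasermon.cfg
cat /tmp/lasermon.cfg   # prints LaserMon.Platform=alpha
```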


If you are having problems with remote connections and you are using Ubuntu, check out this FAQ entry on firewalls.

Exercise for the User

If you are bored, you can try the following:

Give your server a custom platform name.

In the file laser2d.cfg, replace 'local' with 'elvis'.


Now the server will register itself with elvis/laser2d instead of using the default which is the hostname. (In the unlikely event that your host is actually called elvis, you'll see no difference.)

Now you have to repoint the client to the new name, regardless of whether the client is running on the same host (the trick of using 'local' will no longer work).


Why would you want to name the platform explicitly? There are a few potential reasons. Your robot may have multiple hosts, and you may want all components to use the same platform name; this is convenient when you move components from one host to another. When you connect from outside, you usually don't care on which internal host a component is running. Another situation: to simulate a distributed system on a single host, you may need to assign different platform names to the components.

What's Next

If everything works, read the more in-depth explanation of what is actually happening here, or check out other Orca Tutorials.



Generated for Orca Robotics by  doxygen 1.4.5