This page describes how to get a quick demonstration up and running with your new Orca install. Before you begin, make sure that you've installed Orca (see Download and Install Orca).
The system which we will assemble is shown in the diagram below. It consists of two infrastructure applications (IceGrid Registry and IceStorm) and two Orca components (Laser2d and LaserMon).
We'll be using sample configuration files which are distributed with Orca. As a general rule, you shouldn't work in, or run programs from, the distribution directory. So we'll create a separate directory for each project (or tutorial) and copy the config files into it. We'll put all of these directories in one place: a new directory we'll call ~/sys.
$ mkdir ~/sys
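Each of the following steps creates its own subdirectory under ~/sys. As a sketch, the whole layout can be created up front (the names match the ones used below; db and stormdb are the database directories created in the registry and IceStorm steps):

```shell
# Create the tutorial's working directories in one go.
# BASE defaults to ~/sys, the directory created above.
BASE="${BASE:-$HOME/sys}"
mkdir -p "$BASE/icereg/db"        # IceGrid Registry and its database
mkdir -p "$BASE/icestorm/stormdb" # IceStorm and its database
mkdir -p "$BASE/quickstart"       # Laser2d and LaserMon config files
ls "$BASE"
```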
The IceGrid Registry provides a Naming service: a mapping from logical interface names to physical addresses. It's currently the only way for components to find one another. We create a separate directory for it to run in, copy a sample config file, create the database directory and start it up.
$ mkdir -p ~/sys/icereg; cd ~/sys/icereg
$ cp [ORCA-SRC]/scripts/ice/icegridregistry.cfg .
$ mkdir db
$ icegridregistry --Ice.Config=icegridregistry.cfg
IceStorm is an event service, used to decouple publishers from subscribers. Typically, there is one IceStorm service per host. We create a separate directory for it to run in, copy a sample config file, create the database directory and start it up.
$ mkdir -p ~/sys/icestorm; cd ~/sys/icestorm
$ cp [ORCA-SRC]/scripts/ice/icebox_icestorm.cfg .
$ mkdir stormdb
$ icebox --Ice.Config=icebox_icestorm.cfg
When an Orca component starts up, it needs to know how to find the services above. This information can go into config files for individual components.
Components in this tutorial use configuration files containing the following standard settings:
# Standard Ice Configuration for Orca
Ice.Default.Locator=IceGrid/Locator:default -p 12000
Note that only one piece of information is required: the address of the Registry. You can add other global default properties to this file.
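For instance, a couple of standard Ice properties could be appended to the configuration above (the extra values below are illustrative assumptions, not tutorial requirements):

```
# Standard Ice Configuration for Orca
Ice.Default.Locator=IceGrid/Locator:default -p 12000

# Optional global defaults (examples):
Ice.Trace.Network=1        # trace connection establishment and closure
Ice.Override.Timeout=5000  # apply a 5 s timeout to all invocations
```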
Now we'll connect a fake laser component to a laser monitoring component. First, copy the default configuration files for the Laser2d and LaserMon components.
$ mkdir -p ~/sys/quickstart; cd ~/sys/quickstart
$ cp [ORCA-INSTALL]/share/orca/cfg/laser2d.cfg .
$ cp [ORCA-INSTALL]/share/orca/cfg/lasermon.cfg .
Configure the laser for fake (simulated) operation (or skip this step if you're connected to a real SICK laser): edit laser2d.cfg and select the fake driver.
Start the Laser2d component.
$ laser2d laser2d.cfg
Start a new shell, go to the quickstart directory and fire up the LaserMon component (a laser monitor). No modifications are needed to its configuration file. Note that the name of the configuration file is not specified on the command line: it is assumed to be lasermon.cfg in the current directory.
You should see the scans scroll by on the screen. Congratulations, your first two components are talking!
To stop components, type Ctrl-C in the corresponding terminal.
If something does not work, check out the FAQ on Orca Wiki.
Leave the server running. Note the hostname of the computer on which it's running. On Linux, you can find out what it is by typing:
$ hostname
In this example we assume that the server's hostname is alpha.
Now you need another computer connected to the first one through a network. Orca needs to be installed there as well. Make sure you can ping the first host. On Linux, do this quick test; you should see something like this:
$ ping alpha
PING alpha.xxx.xxx.xx (xxx.xx.xxx.xxx) 56(84) bytes of data.
64 bytes from alpha.xxx.xxx.xx (xxx.xx.xxx.xxx): icmp_seq=1 ttl=64 time=2.19 ms
64 bytes from alpha.xxx.xxx.xx (xxx.xx.xxx.xxx): icmp_seq=2 ttl=64 time=0.378 ms
64 bytes from alpha.xxx.xxx.xx (xxx.xx.xxx.xxx): icmp_seq=3 ttl=64 time=0.609 ms
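The check above can be wrapped in a small shell function that reports reachability either way ('alpha' is just this tutorial's example hostname; substitute your server's name):

```shell
# check_host: report whether a given host answers a single ping.
check_host() {
    if ping -c 1 -W 2 "$1" > /dev/null 2>&1; then
        echo "$1 is reachable"
    else
        echo "$1 is NOT reachable -- check /etc/hosts, DNS and firewall settings"
    fi
}

check_host alpha
```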
Now we'll get the client to connect to the server. Create a sys/quickstart directory on the client machine and copy the lasermon.cfg file as before. Edit it so that the required interface refers to the server's platform: replace 'local' with the server's hostname (alpha in our example).
If you are having problems with remote connections and you are using Ubuntu, check out this FAQ entry on firewalls.
If you are bored, you can try the following:
Give your server a custom platform name.
In the file laser2d.cfg, set the platform name to a custom name of your choice.
Now the server will register itself with the custom platform name instead of the default.
Now you have to repoint the client to the new name, regardless of whether the client is running on the same host or not (the trick of using 'local' as a shorthand for the current host no longer works).
Why would you want to explicitly name the platform? There are a couple of potential reasons. Your big robot may have multiple hosts, and you may want all components to use the same platform name. This can be convenient when you move components from one host to another, and when you connect from the outside you usually don't care on which internal host a component is running. Another situation is simulating a distributed system on a single host, where you may need to assign different platform names to the components.
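To make the effect concrete, here is a sketch of how a registered name changes with the platform name (the interface and component names below are assumptions based on this tutorial's defaults; check your actual config files):

```
# Orca components register indirect proxies of the form:
#     interface@platform/component
# With the default platform 'local' (resolved to the hostname, here 'alpha'):
#     laserscanner2d@alpha/laser2d
# With a custom platform name, e.g. 'elvis':
#     laserscanner2d@elvis/laser2d
```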
If everything works, read the more in-depth explanation of what is actually happening here, or check out other Orca Tutorials.
Webmaster: Tobias Kaupp (tobasco at users.sourceforge.net)