Pi-puck and e-puck2 PC side development: Difference between pages

[{{fullurl:e-puck2}} e-puck2 main wiki]<br/>
=Hardware=
==Overview==
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/pipuck-overview.jpg <img width=600 src="https://projects.gctronic.com/epuck2/wiki_images/pipuck-overview-small.jpg">]</span><br/>
Features:
* Raspberry Pi Zero W or Zero 2 W connected to the robot via I2C
* interface between the robot base camera and the rPi via USB, up to 15 FPS
* 1 digital microphone and 1 speaker
* USB hub connected to the rPi with 2 free ports
* micro USB connector to the rPi UART port; can also be used for charging
* 2 chargers: 1 for the robot battery and 1 for the auxiliary battery on top of the extension
* charging contact points in front for automatic charging. External docking station available
* several extension options: 6 I2C channels, 2 ADC inputs
* several LEDs to show the status of the rPi and the power/chargers


=Robot configuration=
This section explains how to configure the robot based on the communication channel you will use for your developments; you only need to read one of the following sections, but it is worth spending a bit of time reading them all in order to have a full understanding of the available configurations.
==I2C bus==
I2C is used to let the various elements present in the robot, the Pi-puck and the extensions communicate with each other. An overall schema is shown in the following figure:<br/>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/i2c-buses.png <img width=600 src="https://projects.gctronic.com/epuck2/wiki_images/i2c-buses.png">]</span><br/>
An I2C switcher is included in the Pi-puck extension in order to support additional I2C buses (the RPi alone has only one usable I2C bus). These are needed to avoid conflicts between Time-of-Flight sensors, which have a fixed I2C address.
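To quickly check which devices are visible on a given bus from the Raspberry Pi, you can scan it from Python with the <code>smbus2</code> package. This is a generic sketch, not part of the Pi-puck system; the bus number is a placeholder to adapt to the bus you want to probe:
<pre>
from smbus2 import SMBus

BUS_NUMBER = 1   # placeholder: adapt to the I2C bus you want to probe

# Probe the usual 7-bit address range; devices that acknowledge a simple read are reported.
found = []
with SMBus(BUS_NUMBER) as bus:
    for address in range(0x03, 0x78):
        try:
            bus.read_byte(address)
            found.append(hex(address))
        except OSError:
            pass   # no device at this address
print('devices found on bus', BUS_NUMBER, ':', found)
</pre>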


=Getting started=
This introductory section explains the minimal procedures needed to work with the Raspberry Pi Zero W / Zero 2 W mounted on the Pi-puck extension board and gives a general overview of the basic demos and scripts shipped with the system flashed on the micro SD. More advanced demos are described in separate sections below (e.g. ROS), but the steps documented here are fundamental, so be sure to fully understand them.<br/>
The extension is mostly an interface between the e-puck robot and the Raspberry Pi, so you can exploit the computational power of a Linux machine to extend the robot capabilities.<br/>
In most cases the Pi-puck extension will be attached to the robot, but note that it can also be used on its own when interaction with the robot isn't required.<br/>
The following sections assume the full configuration (robot + extension), unless otherwise stated.

==USB==
The main microcontroller is initially programmed with a firmware that supports USB communication.<br/>
If the main microcontroller isn't programmed with the factory firmware, or if you want to be sure to have the latest firmware on the robot, you need to program it with the latest factory firmware by referring to section [https://www.gctronic.com/doc/index.php?title=e-puck2#Firmware_update main microcontroller firmware update].<br/>
The radio module can be programmed with either the <code>Bluetooth</code> or the <code>WiFi</code> firmware; both are compatible with USB communication:
* Bluetooth: refer to section [https://www.gctronic.com/doc/index.php?title=e-puck2#Firmware_update_2 radio module firmware update]
* WiFi: download the [https://projects.gctronic.com/epuck2/esp32-firmware-wifi_25.02.19_e2f4883.zip radio module wifi firmware (25.02.19)] and then refer to section [https://www.gctronic.com/doc/index.php?title=e-puck2#Firmware_update_2 radio module firmware update]


When you want to interact with the robot from the computer you need to place the selector in position 8 to work with USB.<br/>
==Requirements==
The robot must be programmed with a special firmware in order to communicate via the I2C bus with the Raspberry Pi mounted on the Pi-puck extension. The same I2C bus is shared by all the devices (camera, IMU, distance sensor, other extensions), the main microcontroller and the Raspberry Pi. Since the Raspberry Pi acts as the I2C master, these devices will no longer be reachable directly from the robot's main microcontroller, which will instead act as an I2C slave.


===e-puck1===
The e-puck1 robot must be programmed with the following firmware: [https://raw.githubusercontent.com/yorkrobotlab/pi-puck/master/e-puck1/pi-puck-e-puck1.hex pi-puck-e-puck1.hex].
Section [https://www.gctronic.com/doc/index.php?title=e-puck2#PC_interface PC interface] gives step by step instructions on how to connect the robot with the computer via USB.<br/>


Once you have tested the connection between the robot and the computer, you can start developing your own application by looking at the details of the communication protocol. Both the USB and Bluetooth communication channels use the same protocol, called [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#Bluetooth_and_USB advanced sercom v2]; refer to section [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#Bluetooth_and_USB_2 Communication protocol: BT and USB] for detailed information about this protocol.<br/>
===e-puck2===
The e-puck2 robot must be programmed with the following firmware: [https://projects.gctronic.com/epuck2/gumstix/e-puck2_main-processor_extension_b346841_07.06.19.elf e-puck2_main-processor_extension.elf (07.06.19)] and the selector must be placed in position 10.<br/>
The source code is available in the <code>gumstix</code> branch of the repo <code>https://github.com/e-puck2/e-puck2_main-processor</code>.


==Bluetooth==
The main microcontroller and the radio module of the robot are initially programmed with firmware that together supports Bluetooth communication.<br/>
==Turn on/off the extension==
To turn on the extension you need to press the <code>auxON</code> button as shown in the following figure; this will also turn on the robot (if not already on). Similarly, if you turn on the robot then the extension will turn on automatically as well.<br/>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/pipuck_btn_on_off.jpg <img width=250 src="https://projects.gctronic.com/epuck2/wiki_images/pipuck_btn_on_off-small.jpg">]</span><br/>


To turn off the Pi-puck you need to press and hold the <code>auxON</code> button for 2 seconds; this will initiate the power down procedure.<br>
If the main microcontroller and the radio module aren't programmed with the factory firmware, or if you want to be sure to have the latest firmware on the robot, you need to program them with the latest factory firmware:
* for the main microcontroller, refer to section [https://www.gctronic.com/doc/index.php?title=e-puck2#Firmware_update main microcontroller firmware update]
* for the radio module, refer to section [https://www.gctronic.com/doc/index.php?title=e-puck2#Firmware_update_2 radio module firmware update]


Beware that turning off the robot will not automatically turn off the extension if the extension is powered from another source, such as the micro USB cable or a secondary battery; in that case you need to use its power-off button to switch it off. If there is no other power source, then turning off the robot will also turn off the extension (not cleanly).
When you want to interact with the robot from the computer you need to place the selector in position 3 to work with Bluetooth.<br/>


Section [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#Connecting_to_the_Bluetooth Connecting to the Bluetooth] gives step by step instructions on how to accomplish your first Bluetooth connection with the robot.<br/>
==Console mode==
The Pi-puck extension board comes with a pre-configured system, ready to run without any additional configuration.<br/>
In order to access the system from a PC in console mode, the following steps must be performed:<br/>
1. connect a micro USB cable from the PC to the extension module. If needed, the drivers are available in the following link [https://www.silabs.com/products/development-tools/software/usb-to-uart-bridge-vcp-drivers USB to UART bridge drivers]<br/>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/pipuck_usb.png <img width=250 src="https://projects.gctronic.com/epuck2/wiki_images/pipuck_usb-small.png">]</span><br/>
2. execute a terminal program and configure the connection with 115200-8N1 (no flow control). The serial device is the one created when the extension is connected to the computer<br/>
3. switch on the robot (the extension will turn on automatically); now the terminal should display the Raspberry Pi booting information. If the robot isn't present, then you can directly power on the extension board with the related button<br/>
4. login with <code>user = pi</code>, <code>password = raspberry</code><br/>


Once you have tested the connection between the robot and the computer, you can start developing your own application by looking at the details of the communication protocol. Both the Bluetooth and USB communication channels use the same protocol, called [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#Bluetooth_and_USB advanced sercom v2]; refer to section [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#Bluetooth_and_USB Communication protocol: BT and USB] for detailed information about this protocol.<br/>
==Battery charge==
You can charge the robot battery, the additional battery connected to the Pi-puck extension, or both, by simply plugging in the micro USB cable.<br/>
The following figure shows the connector for the additional battery.<br/>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/pipuck_battery.jpg <img width=250 src="https://projects.gctronic.com/epuck2/wiki_images/pipuck_battery-small.jpg">]</span><br/>


The robot can also autonomously charge itself if the charging wall is available. The Pi-puck extension includes two spring contacts on the front side that let the robot easily make contact with the charging wall and charge itself. The charging wall and the spring contacts are shown in the following figures:<br/>
<span class="plainlinks">[https://www.gctronic.com/img2/shop/pipuck-charger-robot.jpg <img width=250 src="https://www.gctronic.com/img2/shop/pipuck-charger-robot-small.jpg">]</span>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/pipuck_contacts.jpg <img width=250 src="https://projects.gctronic.com/epuck2/wiki_images/pipuck_contacts-small.jpg">]</span><br/>
==WiFi==
To work with WiFi, the main microcontroller must be programmed with the factory firmware and the radio module must be programmed with a dedicated firmware (not the factory one):
* for the main microcontroller, refer to section [https://www.gctronic.com/doc/index.php?title=e-puck2#Firmware_update main microcontroller firmware update]
* [https://projects.gctronic.com/epuck2/esp32-firmware-wifi_25.02.19_e2f4883.zip radio module wifi firmware (25.02.19)]; for information on how to update the firmware refer to section [https://www.gctronic.com/doc/index.php?title=e-puck2#Firmware_update_2 radio module firmware update]
Put the selector in position 15(F).<br/>


Section [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#Connecting_to_the_WiFi Connecting to the WiFi] gives step by step instructions on how to accomplish your first WiFi connection with the robot.<br/>
==Reset button==
A button is available to reset the robot; when pressed, it resets only the robot, restarting its firmware. This is useful, for instance, during development or for specific demos in which a restart of the robot is needed. In these cases you don't need to completely turn off the robot (and consequently also the Pi-puck, if energy is supplied by the robot); instead you can simply reset the robot. The position of the reset button is shown in the following figure:<br/>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/pipuck_reset.png <img width=250 src="https://projects.gctronic.com/epuck2/wiki_images/pipuck_reset-small.png">]</span><br/>


The communication protocol is described in detail in the section [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#WiFi_2 Communication protocol: WiFi].<br/>
=How to communicate with the robot and its sensors=
==Communicate with the e-puck1==
Refer to the repo [https://github.com/yorkrobotlab/pi-puck-e-puck1 https://github.com/yorkrobotlab/pi-puck-e-puck1].


=Connecting to the Bluetooth=
==Communicate with the e-puck2==
An example showing how to exchange data between the robot and the Pi-puck extension is available in the Pi-puck repository; you can find it in the directory <code>/home/pi/Pi-puck/e-puck2/</code>.<br/>
You can build the program with the command <code>gcc e-puck2_test.c -o e-puck2_test</code>.<br/>
Now you can run the program by issuing <code>./e-puck2_test</code>; this demo will print the sensors data on the terminal and send some commands to the robot at 2 Hz.<br/>
The same example is also available in Python; you can run it by issuing <code>python3 e-puck2_test.py</code>.


The factory firmware of the radio module creates 3 Bluetooth channels using the RFcomm protocol when the robot is paired with the computer:
# Channel 1, GDB: port to connect with GDB if the programmer is in mode 1 or 3 (refer to chapter [https://www.gctronic.com/doc/index.php?title=e-puck2_programmer_development#Configuring_the_Programmer.27s_settings Configuring the Programmer's settings] for more information about these modes)
# Channel 2, UART: port to connect to the UART port of the main processor
# Channel 3, SPI: port to connect to the SPI port of the main processor (not yet implemented; currently it just echoes)

By default, the e-puck2 is not visible when you search for it in the Bluetooth utility of your computer.<br>
'''To make it visible, it is necessary to hold the USER button (also labeled "esp32" on the electronic board) while turning on the robot with the ON/OFF button.'''<br>
::<span class="plain links">[https://projects.gctronic.com/epuck2/wiki_images/e-puck2-bt-pair.png <img width=250 src="https://projects.gctronic.com/epuck2/wiki_images/e-puck2-bt-pair-small.png">]</span><br/>
Then it will be discoverable and you will be able to pair with it.<br>
Note that a prompt could ask you to confirm that the number shown on the screen is the same as the one on the e-puck2; just ignore this and accept. Otherwise, if you are asked for a PIN, insert 0000.

==Windows 7==
When you pair your computer with the e-puck2, 3 COM ports will be automatically created.
To see which COM port corresponds to which channel you need to open the properties of the paired e-puck2 robot from <code>Bluetooth devices</code>. The ports and related channels are then listed in the <code>Services</code> tab, as shown in the following figure:<br/>
<span class="plain links">[https://projects.gctronic.com/epuck2/wiki_images/BT-connection-win7.png <img width=300 src="https://projects.gctronic.com/epuck2/wiki_images/BT-connection-win7.png">]</span>

==Windows 10==
When you pair your computer with the e-puck2, 6 COM ports will be automatically created. The three ports you will use have <code>Outgoing</code> direction and are named <code>e_puck2_xxxxx-GDB</code>, <code>e_puck2_xxxxx-UART</code>, <code>e_puck2_xxxxx-SPI</code>, where <code>xxxxx</code> is the ID number of your e-puck2.<br/>
To see which COM port corresponds to which channel you need to:
# open the Bluetooth devices manager
# pair with the robot
# click on <code>More Bluetooth options</code>
# the ports and related channels are listed in the <code>COM Ports</code> tab, as shown in the following figure:<br/>
:<span class="plain links">[https://projects.gctronic.com/epuck2/wiki_images/BT-connection-win10.png <img height=300 src="https://projects.gctronic.com/epuck2/wiki_images/BT-connection-win10.png">]</span>

===Packet format===
Extension to robot packet format, 20 bytes payload (the number in parentheses represents the number of bytes for each field):
{| border="1"
| Left speed (2)
| Right speed (2)
| Speaker (1)
| LED1, LED3, LED5, LED7 (1)
| LED2 RGB (3)
| LED4 RGB (3)
| LED6 RGB (3)
| LED8 RGB (3)
| Settings (1)
| Checksum (1)
|}
* Left, right speed: [-2000 ... 2000]
* Speaker: sound id = [0, 1, 2]
* LEDs on/off flag: bit0 for LED1, bit1 for LED3, bit2 for LED5, bit3 for LED7
* RGB LEDs: [0 (off) ... 100 (max)]
* Settings:
** bit0: 1=calibrate IR proximity sensors
** bit1: 0=disable onboard obstacle avoidance; 1=enable onboard obstacle avoidance (not implemented yet)
** bit2: 0=set motors speed; 1=set motors steps (position)
* Checksum: Longitudinal Redundancy Check (XOR of bytes 0..18)

Robot to extension packet format, 47 bytes payload (the number in parentheses represents the number of bytes for each field):
{| border="1"
| 8 x Prox (16)
| 8 x Ambient (16)
| 4 x Mic (8)
| Selector + button (1)
| Left steps (2)
| Right steps (2)
| TV remote (1)
| Checksum (1)
|}
* Selector + button: selector values represented by the 4 least significant bits (bit0..bit3); button state is in bit4 (1=pressed, 0=not pressed)
* Checksum: Longitudinal Redundancy Check (XOR of bytes 0..45)
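For reference, the two payloads above can be assembled and checked with a few lines of Python. This is only an illustration derived from the field sizes listed above (it is not taken from the robot firmware) and it assumes little-endian byte order (LSB first) for the multi-byte fields:
<pre>
import struct

def make_extension_to_robot_packet(left, right, speaker=0, leds=0,
                                   led2=(0, 0, 0), led4=(0, 0, 0),
                                   led6=(0, 0, 0), led8=(0, 0, 0),
                                   settings=0):
    # 19 data bytes in the order of the table above; speeds are assumed to be
    # signed 16-bit little-endian values.
    payload = struct.pack('<hhBB3B3B3B3BB', left, right, speaker, leds,
                          *led2, *led4, *led6, *led8, settings)
    lrc = 0
    for b in payload:           # Longitudinal Redundancy Check over bytes 0..18
        lrc ^= b
    return payload + bytes([lrc])

def parse_robot_to_extension_packet(data):
    assert len(data) == 47
    lrc = 0
    for b in data[:46]:         # XOR of bytes 0..45
        lrc ^= b
    if lrc != data[46]:
        raise ValueError('checksum mismatch')
    # Proximity, ambient and microphone values are assumed unsigned, motor steps signed.
    fields = struct.unpack('<8H8H4HBhhBB', data)
    prox, ambient, mic = fields[0:8], fields[8:16], fields[16:20]
    sel_btn = fields[20]
    selector = sel_btn & 0x0F            # bits 0..3
    button = (sel_btn >> 4) & 0x01       # bit 4
    left_steps, right_steps, tv_remote = fields[21], fields[22], fields[23]
    return prox, ambient, mic, selector, button, left_steps, right_steps, tv_remote
</pre>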


==Communicate with the IMU==
===e-puck1===
An example written in C showing how to read data from the IMU (LSM330) mounted on the e-puck 1.3 is available in the Pi-puck repository; you can find it in the directory <code>/home/pi/Pi-puck/e-puck1/</code>.<br/>
You can build the program with the command <code>gcc e-puck1_imu.c -o e-puck1_imu</code>.<br/>
Now you can run the program by issuing <code>./e-puck1_imu</code> and then choose whether to get data from the accelerometer or the gyroscope; this demo will print the sensors data on the terminal.<br/>

==Linux==
Once paired with the Bluetooth manager, you need to create the port for communicating with the robot by issuing the command: <br/>
<code>sudo rfcomm bind /dev/rfcomm0 MAC_ADDR 2</code><br/>
The MAC address is visible from the Bluetooth manager. The parameter <code>2</code> indicates the channel; in this case a port for the <code>UART</code> channel is created. If you want to connect to another service you need to change this parameter accordingly (e.g. <code>1</code> for <code>GDB</code> and <code>3</code> for <code>SPI</code>). Now you can use <code>/dev/rfcomm0</code> to connect to the robot.


==Mac==
When you pair your computer with the e-puck2, 3 COM ports will be automatically created: <code>/dev/cu.e-puck2_xxxxx-GDB</code>, <code>/dev/cu.e-puck2_xxxxx-UART</code> and <code>/dev/cu.e-puck2_xxxxx-SPI</code>, where <code>xxxxx</code> is the ID number of your e-puck2.

===e-puck2===
An example showing how to read data from the IMU (MPU-9250) is available in the Pi-puck repository; you can find it in the directory <code>/home/pi/Pi-puck/e-puck2/</code>.<br/>
You can build the program with the command <code>gcc e-puck2_imu.c -o e-puck2_imu</code>.<br/>
Now you can run the program by issuing <code>./e-puck2_imu</code> and then choose whether to get data from the accelerometer or the gyroscope; this demo will print the sensors data on the terminal.<br/>
The same example is also available in Python; you can run it by issuing <code>python3 e-puck2_imu.py</code>.
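If you prefer to experiment directly from Python, the MPU-9250 can also be read with <code>smbus2</code> using its datasheet registers. The sketch below is only an illustration, not the repository example; the bus number is a placeholder, so check which Pi-puck I2C bus the IMU is attached to (e.g. with <code>i2cdetect</code>):
<pre>
from smbus2 import SMBus

I2C_BUS = 1            # placeholder: adapt to the Pi-puck bus the IMU is on
MPU9250_ADDR = 0x68    # MPU-9250 default I2C address (datasheet)
ACCEL_XOUT_H = 0x3B    # first of the 6 accelerometer data registers (datasheet)

def to_signed16(msb, lsb):
    value = (msb << 8) | lsb
    return value - 65536 if value & 0x8000 else value

with SMBus(I2C_BUS) as bus:
    raw = bus.read_i2c_block_data(MPU9250_ADDR, ACCEL_XOUT_H, 6)
    ax = to_signed16(raw[0], raw[1])   # registers store the high byte first
    ay = to_signed16(raw[2], raw[3])
    az = to_signed16(raw[4], raw[5])
    print('raw accelerometer values:', ax, ay, az)
</pre>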


==Communicate with the ToF sensor==
The Time of Flight sensor is available only on the e-puck2 robot.<br/>
==Testing the Bluetooth connection==
You need to download the PC application provided in section [https://www.gctronic.com/doc/index.php?title=e-puck2#Available_executables PC interface: available executables].<br/>
In the connection textfield you need to enter the UART channel port, for example:
* Windows 7: <code>COM258</code>
* Windows 10: <code>e_puck2_xxxxx-UART</code>
* Linux: <code>/dev/rfcomm0</code>
* Mac: <code>/dev/cu.e-puck2_xxxxx-UART</code>
and then click <code>Connect</code>. <br/>
You should start receiving sensors data and you can send commands to the robot.<br/>


Alternatively you can also use a simple terminal program (e.g. <code>realterm</code> on Windows) instead of the PC application; then you can manually issue the commands to receive sensors data or to set the actuators (once connected, type <code>h + ENTER</code> for a list of available commands).
First of all you need to verify that the VL53L0X Python package is installed with the following command: <code>python3 -c "import VL53L0X"</code>. If the command returns nothing you're ready to go; otherwise, if you receive an <code>ImportError</code>, you need to install the package with the command: <code>pip3 install git+https://github.com/gctronic/VL53L0X_rasp_python</code>.<br/>
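For a quick test once the package is installed, a minimal ranging loop looks like the sketch below. The method names follow the upstream VL53L0X_rasp_python project and may differ slightly in this fork, so treat it as a sketch rather than a reference:
<pre>
import time
import VL53L0X

# Minimal ranging loop; API names (start_ranging, get_distance, stop_ranging)
# are taken from the upstream VL53L0X_rasp_python project and may differ here.
tof = VL53L0X.VL53L0X()
tof.start_ranging(VL53L0X.VL53L0X_BETTER_ACCURACY_MODE)
for _ in range(10):
    distance_mm = tof.get_distance()   # distance in millimeters
    print(distance_mm)
    time.sleep(0.1)
tof.stop_ranging()
</pre>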


A Python example showing how to read data from the ToF sensor is available in the Pi-puck repository; you can find it in the directory <code>/home/pi/Pi-puck/e-puck2/</code>.<br/>
You can run the example by issuing <code>python3 VL53L0X_example.py</code> (this is the example that you can find in the repository [https://github.com/gctronic/VL53L0X_rasp_python/tree/master/python https://github.com/gctronic/VL53L0X_rasp_python/tree/master/python]).
==Python examples==
Here are some basic Python 3 examples that show how to get data from the robot through Bluetooth using the commands available in the [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#Bluetooth_and_USB advanced sercom v2]:
* [https://projects.gctronic.com/epuck2/printhelp.py printhelp.py]: print the list of commands available in the [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#Bluetooth_and_USB advanced sercom v2]
* [https://projects.gctronic.com/epuck2/getprox.py getprox.py]: print the values of the proximity sensors
* [https://projects.gctronic.com/epuck2/complete.py complete.py]: set all the actuators and get all the sensors data printing their values on the screen
* [https://projects.gctronic.com/epuck2/getimage.py getimage.py]: request an image and save it to disk
* [https://projects.gctronic.com/epuck2/getmagnetometer.py getmagnetometer.py]: enable the magnetometer and print its values
In all the examples you need to set the correct Bluetooth serial port related to the robot.
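As a minimal starting point for your own scripts, the following sketch (assuming <code>pyserial</code> is installed) opens the UART channel port and sends the ASCII help command of the advanced sercom, printing whatever the robot replies. Adapt the port name to your system; the baud rate is typically ignored over an RFCOMM link:
<pre>
import serial

# Adjust the port to your system (e.g. 'COM258' on Windows, '/dev/rfcomm0' on Linux,
# '/dev/cu.e-puck2_xxxxx-UART' on macOS).
PORT = '/dev/rfcomm0'

with serial.Serial(PORT, 115200, timeout=2) as ser:
    ser.write(b'h\r')            # ASCII 'help' command of the advanced sercom
    reply = ser.read(4096)       # read whatever arrives within the timeout
    print(reply.decode(errors='replace'))
</pre>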


===Connecting to multiple robots===
Here is a simple Python 3 script, [https://projects.gctronic.com/epuck2/multi-robot.py multi-robot.py], that opens a connection with 2 robots and exchanges data with them using the [https://www.gctronic.com/doc/index.php/Advanced_sercom_protocol advanced sercom protocol]. This example can be extended to connect to more than 2 robots.
==Capture an image==
The robot camera is connected to the Pi-puck extension as a USB camera, so you can access it very easily.<br/>
An example showing how to capture an image from the robot's camera using OpenCV is available in the Pi-puck repository; you can find it in the directory <code>/home/pi/Pi-puck/snapshot/</code>.<br/>
You can build the program with the command <code>g++ $(pkg-config --libs --cflags opencv) -ljpeg -o snapshot snapshot.cpp</code>.<br/>
Now you can run the program by issuing <code>./snapshot</code>; this will save a VGA image (JPEG) named <code>image01.jpg</code> to disk.<br/>
The program can accept the following parameters:<br/>
<code>-d DEVICE_ID</code> to specify the input video device from which to capture an image; the default is <code>0</code> (<code>/dev/video0</code>). This is useful when also working with the [http://www.gctronic.com/doc/index.php?title=Omnivision_Module_V3 Omnivision V3] extension, which creates another video device; in this case you need to specify <code>-d 1</code> to capture from the robot camera.<br/>
<code>-n NUM</code> to specify how many images to capture (1-99); the default is 1<br/>
<code>-v</code> to enable verbose mode (print some debug information)<br/>
Beware that in this demo the acquisition rate is fixed to 5 Hz, but the camera supports up to '''15 FPS'''.<br/>
The same example is also available in Python; you can run it by issuing <code>python snapshot.py</code>.
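For reference, capturing a single frame from Python takes only a few lines with OpenCV. This is a generic sketch, not the <code>snapshot.py</code> shipped in the repository:
<pre>
import cv2

# Capture a single frame from the robot camera (USB video device 0 by default;
# use index 1 if another extension creates an additional video device).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    cv2.imwrite('image01.jpg', frame)   # save the frame as a JPEG
else:
    print('could not grab a frame from the camera')
</pre>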


===Automotive===
An initial project in which some robots navigate a city trying to handle the crossroads using only the onboard sensors. You can download the Python 3 script from [https://projects.gctronic.com/epuck2/epuck2_automotive.py epuck2_automotive.py].<br/>
Here is a video of this demo: {{#ev:youtube|N39EDy1qt4o}}
==Communicate with the ground sensors extension==
Both the e-puck1 and e-puck2 support the [https://www.gctronic.com/doc/index.php?title=Others_Extensions#Ground_sensors ground sensors extension].<br/>
This extension is attached to the I2C bus and can be read directly from the Pi-puck.<br/>
An example written in C showing how to read data from the ground sensors extension is available in the Pi-puck repository; you can find it in the directory <code>/home/pi/Pi-puck/ground-sensor/</code>.<br/>
You can build the program with the command <code>gcc groundsensor.c -o groundsensor</code>.<br/>
Now you can run the program by issuing <code>./groundsensor</code>; this demo will print the sensors data on the terminal.<br/>
The same example is also available in Python; you can run it by issuing <code>python3 groundsensor.py</code>.


==Communicate with the range and bearing extension==
Both the e-puck1 and e-puck2 support the [https://www.gctronic.com/doc/index.php?title=Others_Extensions#Range_and_bearing range and bearing extension].<br/>
This extension is attached to the I2C bus and can be read directly from the Pi-puck.<br/>
An example written in C showing how to start playing with the range and bearing extension is available in the Pi-puck repository; you can find it in the directory <code>/home/pi/Pi-puck/randb/</code>. You need two boards: one is the transmitter (run <code>randb_tx</code>) and the other is the receiver (run <code>randb_rx</code>). The receiver will print the data received from the transmitter.<br/>
You can build the programs with the commands <code>gcc randb_tx.c -o randb_tx</code> and <code>gcc randb_rx.c -o randb_rx</code>.<br/>
The same example is also available in Python; you can run it by issuing <code>python3 randb_tx.py</code> and <code>python3 randb_rx.py</code>.

==C++ remote library==
A remote control library implemented in C++ is available to control the e-puck2 robot via a Bluetooth connection from the computer.<br/>
The remote control library is multiplatform and uses only standard C++ libraries.<br/>
You can download the library with the command <code>git clone https://github.com/e-puck2/e-puck2_cpp_remote_library</code>.<br/>
A simple example showing how to use the library is also available; you can download it with the command <code>git clone https://github.com/e-puck2/e-puck2_cpp_remote_example</code>.<br/>
Before building the example you need to build the library. Then when building the example, make sure that both the library and the example are in the same directory, that is you must end up with the following directory tree:<br>
: e-puck2_projects
::|_ e-puck2_cpp_remote_library
::|_ e-puck2_cpp_remote_example
The complete API reference is available in the following link [https://projects.gctronic.com/epuck2/e-puck2_cpp_remote_library_api_reference_rev3ac41e3.pdf e-puck2_cpp_remote_library_api_reference.pdf].


==Wireless remote control==
=Connecting to the WiFi=
The WiFi channel is used to communicate with the robot faster than with Bluetooth. At the moment a QQVGA (160x120) color image is transferred to the computer together with the sensors values at about 10 Hz; of course the robot is also able to receive commands from the computer.<br/>
In order to communicate with the robot through WiFi, you first need to configure the network parameters on the robot by connecting directly to it, since the robot is initially configured in access point mode, as explained in the following section. Once the configuration is saved on the robot, it will then connect automatically to the network and you can connect to it.
If you want to control the robot from a computer, for instance when you have an algorithm that requires heavy processing not suitable for the Pi-puck or when the computer acts as a master controlling a fleet of robots that return some information to the controller, then you have 3 options:<br/>
1) The computer establishes a WiFi connection with the Pi-puck to receive data processed by the Pi-puck (e.g. results of an image processing task); at the same time the computer establishes a Bluetooth connection directly with the e-puck2 robot to control it.
:''Disadvantages'':
:- the Bluetooth standard only allows up to seven simultaneous connections
:- doubled latency (Pi-puck <-> pc and pc <-> robot)
2) The computer establishes a WiFi connection with both the Pi-puck and the e-puck2 robot.
:''Advantages'':
:- only one connection type needed, easier to handle
:''Disadvantages'':
:- doubled latency (Pi-puck <-> pc and pc <-> robot)
3) The computer establishes a WiFi connection with the Pi-puck and then the Pi-puck is in charge of controlling the robot via I2C based on the data received from the computer controller.
:''Advantages'':
:- lower latency
:- fewer connections to handle
:- depending on your algorithm, it would be possible to initially develop the controller on the computer (easier to develop and debug) and then transfer the controller directly to the Pi-puck without the need to change anything related to the control of the robot via I2C


The following figure summarizes these 3 options:<br/>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/wireless-remote-control-options.png <img width=600 src="https://projects.gctronic.com/epuck2/wiki_images/wireless-remote-control-options.png">]</span>
The LED2 is used to indicate the state of the WiFi connection:
* red indicates that the robot is in ''access point mode'' (waiting for configuration)
* green indicates that the robot is connected to a network and has received an IP address
* blue (toggling) indicates that the robot is transferring the image to the computer
* off when the robot cannot connect to the saved configuration
::<span class="plain links">[https://projects.gctronic.com/epuck2/wiki_images/e-puck2-wifi-led.png <img width=250 src="https://projects.gctronic.com/epuck2/wiki_images/e-puck2-wifi-led-small.png">]</span><br/>


=How to work with the Pi-puck=
==Network configuration==
If there is no WiFi configuration saved in flash, the robot will be in ''access point mode'' in order to let the user connect to it and set up a WiFi connection; the LED2 is red.
==Demos and scripts update==
First of all you should update to the latest version of the demos and scripts released with the system, which you can use to start playing with the Pi-puck extension and the robot.<br/>
To update the repository follow these steps:<br/>
1. go to the directory <code>/home/pi/Pi-puck</code><br/>
2. issue the command <code>git pull</code><br/>
Then to update some configurations of the system:<br/>
1. go to the directory <code>/home/pi/Pi-puck/system</code><br/>
2. issue the command <code>./update.sh</code>; the system will reboot.<br/>
You can find the Pi-puck repository here [https://github.com/gctronic/Pi-puck https://github.com/gctronic/Pi-puck].<br/>


==Audio recording==
The access point SSID will be <code>e-puck2_0XXXX</code> where <code>XXXX</code> is the id of the robot; the password to connect to the access point is <code>e-puck2robot</code>.<br/>
Use the <code>arecord</code> utility to record audio from the onboard microphone. The following example shows how to record 2 seconds of audio (<code>-d</code> parameter) and save it to a wav file (<code>test.wav</code>):<br/>
You can use a phone, a tablet or a computer to connect to the robot's WiFi; then open a browser and enter the address <code>192.168.1.1</code>. The available networks are scanned automatically and listed in the browser page as shown in ''figure 1''. Choose the WiFi network you want the robot to establish a connection with from the generated list and enter the related password; if the password is correct you'll get a message saying that the connection is established, as shown in ''figure 2''. After pressing <code>OK</code> you will be redirected to the main page showing the network to which you're connected and the others available nearby, as shown in ''figure 3''. If you press on the connected network, you can see your IP address, as shown in ''figure 4''; <b>take note of the address since it will be needed later</b>.<br/>
<code>arecord -Dmic_mono -c1 -r16000 -fS32_LE -twav -d2 test.wav</code><br/>
You can also specify a rate of 48 kHz with <code>-r48000</code>.


==Audio play==
Use <code>aplay</code> to play <code>wav</code> files and <code>mplayer</code> to play <code>mp3</code> files.
<span class="plainlinks">
<table>
<tr>
<td align="center">[1]</td>
<td align="center">[2]</td>
<td align="center">[3]</td>
<td align="center">[4]</td>
</tr>
<tr>
<td>[https://projects.gctronic.com/epuck2/wiki_images/esp32-wifi-setup1.png <img width=150 src="https://projects.gctronic.com/epuck2/wiki_images/esp32-wifi-setup1.png">]</td>
<td>[https://projects.gctronic.com/epuck2/wiki_images/esp32-wifi-setup2.png <img width=150 src="https://projects.gctronic.com/epuck2/wiki_images/esp32-wifi-setup2.png">]</td>
<td>[https://projects.gctronic.com/epuck2/wiki_images/esp32-wifi-setup3.png <img width=150 src="https://projects.gctronic.com/epuck2/wiki_images/esp32-wifi-setup3.png">]</td>
<td>[https://projects.gctronic.com/epuck2/wiki_images/esp32-wifi-setup4.png <img width=150 src="https://projects.gctronic.com/epuck2/wiki_images/esp32-wifi-setup4.png">]</td>
</tr>
</table>
</span><br/>
Now the configuration is saved in flash; this means that when the robot is turned on it will read this configuration and try to establish the connection automatically.<br/>
Remember that you need to power cycle the robot at least once for the new configuration to become active.<br/>


Once the connection is established, the LED2 will be green.<br/>
==Battery reading==
A Python example showing how to measure both the battery of the robot and the battery of the Pi-puck extension is available in the Pi-puck repository; you can find it in the directory <code>/home/pi/Pi-puck/battery/</code>.<br/>
You can start reading the battery values by issuing <code>python read-battery.py</code>; this demo will print the battery values (given in Volts) on the terminal.


In order to reset the current configuration you need to press the user button for 2 seconds (the red LED2 will turn on), then power cycle the robot to enter ''access point mode''.
::<span class="plain links">[https://projects.gctronic.com/epuck2/wiki_images/e-puck2-wifi-reset.png <img width=250 src="https://projects.gctronic.com/epuck2/wiki_images/e-puck2-wifi-reset-small.png">]</span><br/>
==WiFi configuration==
Specify your network configuration in the file <code>/etc/wpa_supplicant/wpa_supplicant-wlan0.conf</code>.<br/>
Example:<br/>
<pre>
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=CH
network={
        ssid="MySSID"
        psk="9h74as3xWfjd"
}
</pre>
You can have more than one <code>network</code> block to support multiple networks. For more information about ''wpa_supplicant'' refer to [https://hostap.epitest.fi/wpa_supplicant/ https://hostap.epitest.fi/wpa_supplicant/].


Once the configuration is done, you can also connect to the Pi-puck with <code>SSH</code>. If you are working in Windows you can use [https://www.putty.org/ PuTTY].
==Finding the IP address==
Often the IP address assigned to the robot will remain the same when connecting to the same network, so if you took note of the IP address in section [https://www.gctronic.com/doc/index.php?title=e-puck2#Network_configuration Network configuration] you're ready to go to the next section. <br/>


Otherwise you need to connect the robot to the computer with the USB cable, open a terminal and connect to the port labeled <code>Serial Monitor</code> (see chapter [https://www.gctronic.com/doc/index.php?title=e-puck2#Finding_the_USB_serial_ports_used Finding the USB serial ports used]). Then power cycle the robot and the IP address will be shown in the terminal (together with other information), as illustrated in the following figure:<br/>
<span class="plain links">[https://projects.gctronic.com/epuck2/wiki_images/esp32-wifi-setup5.png <img width=500 src="https://projects.gctronic.com/epuck2/wiki_images/esp32-wifi-setup5.png">]</span>
===How to know your IP address===
A simple method to know your IP address is to connect the USB cable to the Pi-puck extension and issue the command <code>ip a</code>; from the command's output you will be able to get your currently assigned IP address.


If you prefer to know your IP address remotely (without connecting any cable) then you can use <code>nmap</code>.<br/>
For example you can search all connected devices in your network with the following command: <code>nmap 192.168.1.*</code>. Beware that you need to specify the subnet based on your network configuration.<br/>
In the command's output you need to look for the hostname <code>raspberrypi</code>.<br/>
If you are working in Windows you can use the [https://nmap.org/zenmap/ Zenmap] application.

==Testing the WiFi connection==
A dedicated WiFi version of the PC application was developed to communicate with the robot through the TCP protocol. You can download the executable from one of the following links:
* [https://projects.gctronic.com/epuck2/monitor_wifi_27dddd4.zip Windows executable - WiFi]
* [https://projects.gctronic.com/epuck2/monitor_mac_wifi.zip Mac OS X executable - WiFi]
* [https://projects.gctronic.com/epuck2/monitor_wifi_linux64bit_27dddd4.tar.gz Ubuntu 14.04 (or later) - 64 bit]


If you are interested in the source code, you can download it with the command <code>git clone -b wifi --recursive https://github.com/e-puck2/monitor.git</code><br/>
==File transfer==
===USB cable===
You can transfer files via USB cable between the computer and the Pi-puck extension by using the <code>zmodem</code> protocol.<br/>
The <code>lrzsz</code> package is pre-installed in the system, thus you can use the <code>sx</code> and <code>rx</code> utilities to respectively send files to the computer and receive files from the computer.<br/>
Example of sending a file to the computer using the <code>Minicom</code> terminal program:<br/>
1. in the Pi-puck console type <code>sx --zmodem filename.ext</code>. The transfer should start automatically and you'll find the file in the home directory.<br/>
<!--2. to start the transfer type the sequence <code>CTRL+A+R</code>, then chose <code>xmodem</code> and finally enter the name you want to assign to the received file. You'll find the file in the home directory.<br/>-->
Example of receiving a file from the computer using the <code>Minicom</code> terminal program:<br/>
1. in the Pi-puck console type <code>rx -Z</code><br/>
2. to start the transfer type the sequence <code>CTRL+A+S</code>, then choose <code>zmodem</code> and select the file you want to send with the <code>spacebar</code>. Finally press <code>enter</code> to start the transfer.<br/>
===WiFi===
The Pi-puck extension supports <code>SSH</code> connections.<br/>
To exchange files between the Pi-puck and the computer, the <code>scp</code> tool (secure copy) can be used. An example of transferring a file from the Pi-puck to the computer is the following:<br/>
<code>scp pi@192.168.1.20:/home/pi/example.txt example.txt</code>
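If you prefer to script the transfer from Python instead of calling <code>scp</code>, the same copy can be done with the <code>paramiko</code> package (assumed to be installed on the computer); the host, credentials and paths below are placeholders to adapt to your setup:
<pre>
import paramiko

# Placeholders: adapt the IP address, credentials and file paths to your setup.
HOST, USER, PASSWORD = '192.168.1.20', 'pi', 'raspberry'

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

sftp = client.open_sftp()
sftp.get('/home/pi/example.txt', 'example.txt')   # Pi-puck -> computer
# sftp.put('local.txt', '/home/pi/local.txt')     # computer -> Pi-puck
sftp.close()
client.close()
</pre>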


If you are working in Windows you can use [https://www.putty.org/ PuTTY].
Run the PC application, insert the IP address of the robot in the connection textfield and then click on the <code>Connect</code> button. You should start receiving sensors data and you can send commands to the robot. The blue LED2 will toggle.<br/>


==Image streaming==
==Web server==
When the robot is in ''access point mode'' you can have access to a web page showing the camera image and some buttons that you can use to move the robot; it is a basic example that you can use as a starting point to develop your own web browser interface.<br/>
You can use a phone, a tablet or a computer to connect to the robot's WiFi and then you need to open a browser and insert the address <code>192.168.1.1/monitor.html</code>.


==Python examples==
===Connecting to multiple robots===
A simple Python 3 script was developed as a starting point to open a connection with multiple robots and exchange data with them using the [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#WiFi_2 WiFi communication protocol]. The demo was tested with 10 robots but can be easily extended to connect to more robots.<br/>
You can download the script with the command <code>git clone https://github.com/e-puck2/e-puck2_python_wifi_multi.git</code>. The code was tested to work with Python 3.x.


=Communication protocol=
This section is the hardest part to understand. It outlines all the details of the communication protocols that you'll need to implement in order to communicate with the robot from the computer, so spend a bit of time reading and re-reading it in order to fully grasp all the details.
==Bluetooth LE==
An example of a ''BLE uart service'' is available in the Pi-puck repository; you can find it in the directory <code>/home/pi/Pi-puck/ble/</code>.<br/>
To start the service you need to type: <code>python uart_peripheral.py</code>.<br/>
Then you can use the ''e-puck2-android-ble app'' that you can find in chapter [https://www.gctronic.com/doc/index.php?title=e-puck2_mobile_phone_development#Connecting_to_the_BLE Connecting to the BLE] in order to connect to the Pi-puck extension via BLE. Once connected, you'll receive some dummy data for the proximity values, and by clicking on the motion buttons you'll see the related action printed on the Pi-puck side. This is a starting point that you can extend based on your needs.


=Operating system=
The system is based on Raspbian Stretch and can be downloaded from the following link: [https://projects.gctronic.com/epuck2/PiPuck/pi-puck-os_25.05.22.zip pi-puck-os_25.05.22.zip].
==Bluetooth and USB==
The communication protocol is based on the [https://www.gctronic.com/doc/index.php/Advanced_sercom_protocol advanced sercom protocol] used with the e-puck1.x robot. The <code>advanced sercom v2</code> includes all the commands available in the <code>advanced sercom</code> protocol and adds some commands to handle the new features of the e-puck2 robot. In particular, here are the new commands:
{| border="1" cellpadding="10" cellspacing="0"
!Command
!Description
!Return value / set value
|-
|<code>-0x08</code>
|Get all sensors
|<span class="plain links">[https://projects.gctronic.com/epuck2/wiki_images/packet-format-robot-to-pc.jpg <img width=1150 src="https://projects.gctronic.com/epuck2/wiki_images/packet-format-robot-to-pc.jpg">]</span>
see section [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#WiFi_2 Communication protocol: WiFi] for the content description
|-
|<code>-0x09</code>
|Set all actuators
|<span class="plain links">[https://projects.gctronic.com/epuck2/wiki_images/packet-format-pc-to-robot-bt.jpg <img width=600 src="https://projects.gctronic.com/epuck2/wiki_images/packet-format-pc-to-robot-bt.jpg">]</span>
see section [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#WiFi_2 Communication protocol: WiFi] for the content description
|-
|<code>-0x0A</code>
|Set RGB LEDs, values from 0 (off) to 100 (completely on)
|<code>[LED2_red][LED2_green][LED2_blue][LED4_red][LED4_green][LED4_blue][LED6_red][LED6_green][LED6_blue][LED8_red][LED8_green][LED8_blue]</code>
|-
|<code>-0x0B</code>
|Get button state: 0 = not pressed, 1 = pressed
|<code>[STATE]</code>
|-
|<code>-0x0C</code>
|Get all 4 microphones volumes
|<code>[MIC0_LSB][MIC0_MSB][MIC1_LSB][MIC1_MSB][MIC2_LSB][MIC2_MSB][MIC3_LSB][MIC3_MSB]</code>
|-
|<code>-0x0D</code>
|Get distance from ToF sensor (millimeters)
|<code>[DIST_LSB][DIST_MSB]</code>
|-
|<code>-0x0E</code>
|Get SD state: 0 = micro sd not connected, 1 = micro sd connected
|<code>[STATE]</code>
|-
|<code>-0x0F</code>
|Enable/disable magnetometer: 0 = disable, 1 = enable
|<code>0 = success, 1 = error</code>
|-
|<code>-0x10</code>
|Set proximity state: <br/>
0 = disable proximity sampling <br/>
1 = enable fast proximity sampling (100 hz)<br/>
2 = enable slow proximity sampling (20 hz)
|<code>0 = success, 1 = error</code>
|-
|<code>-0x11</code>
|Enable/disable time of flight sensor: 0 = disable, 1 = enable
|<code>0 = success, 1 = error</code>
|}
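As an illustration of how one of these commands could be used from Python over the UART channel, the sketch below requests the ToF distance (command 0x0D). It assumes the binary framing of the original advanced sercom protocol (commands sent as negative bytes, sequence terminated by 0x00) and a placeholder port name, so verify the framing against the protocol documentation before relying on it:
<pre>
import struct
import serial

# Placeholder port; use the UART channel port of your robot (see the Bluetooth sections above).
PORT = '/dev/rfcomm0'

with serial.Serial(PORT, 115200, timeout=2) as ser:
    # 0x0D = "Get distance from ToF sensor", sent as a negative byte; the trailing
    # 0x00 terminator follows the original advanced sercom binary framing (assumption).
    ser.write(struct.pack('bb', -0x0D, 0))
    data = ser.read(2)
    if len(data) == 2:
        distance_mm = data[0] | (data[1] << 8)   # [DIST_LSB][DIST_MSB]
        print('ToF distance:', distance_mm, 'mm')
    else:
        print('no reply from the robot')
</pre>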


==WiFi==
The communication is based on TCP; the robot creates a TCP server and waits for a connection.<br/>
When booting the first time, the first thing to do is to expand the file system in order to use all the available space on the micro SD:<br/>
1. <code>sudo raspi-config</code><br/>
2. Select <code>Advanced Options</code> and then <code>Expand Filesystem</code><br/>
3. Reboot


==e-puck2 camera configuration==
The e-puck2 camera needs to be configured through I2C before it can be used. For this reason a Python script that detects and configures the camera is called at boot. The script resides in the Pi-puck repository installed in the system (<code>/home/pi/Pi-puck/camera-configuration.py</code>), so beware not to remove it.
Each packet is identified by an ID (1 byte). The following IDs are used to send data from the robot to the computer:
* 0x00 = reserved
* 0x01 = QQVGA color image packet (only the first segment includes this id); packet size (without id) = 38400 bytes; image format = RGB565
* 0x02 = sensors packet; packet size (without id) = 104 bytes; the format of the returned values are based on the [https://www.gctronic.com/doc/index.php/Advanced_sercom_protocol advanced sercom protocol] and are compatible with e-puck1.x:


If the robot is plugged in after the boot process is completed, you need to call the Python configuration script manually before using the camera, by issuing the command <code>python3 /home/pi/Pi-puck/camera-configuration.py</code>.
In order to automatically run the script at boot, <code>/etc/rc.local</code> was modified by adding the call to the script just before the end of the file.

:<span class="plain links">[https://projects.gctronic.com/epuck2/wiki_images/packet-format-robot-to-pc.jpg <img width=1150 src="https://projects.gctronic.com/epuck2/wiki_images/packet-format-robot-to-pc.jpg">]</span><br/>
:*Acc: raw axes values (0=X LSB, 1=X MSB, 2=Y LSB, 3=Y MSB, 4=Z LSB, 5=Z MSB), between -1500 and 1500, resolution is +-2g
:*Acceleration expressed in float: acceleration magnitude <img width=70 src="https://projects.gctronic.com/epuck2/wiki_images/3dvector-magnitude.png">, between 0.0 and about 2600.0 (~3.46 g)
:*Orientation expressed in float: between 0.0 and 360.0 degrees <table><tr><td align="center">0.0 deg</td><td align="center">90.0 deg</td><td align="center">180 deg</td><td align="center">270 deg</td></tr><tr><td><img width=80 src="https://projects.gctronic.com/epuck2/wiki_images/orientation0.png"></td><td><img width=80 src="https://projects.gctronic.com/epuck2/wiki_images/orientation90.png"></td><td><img width=80 src="https://projects.gctronic.com/epuck2/wiki_images/orientation180.png"></td><td><img width=80 src="https://projects.gctronic.com/epuck2/wiki_images/orientation270.png"></td></tr></table>
:*Inclination expressed in float: between 0.0 and 90.0 degrees (when tilted in any direction)<table><tr><td align="center">0.0 deg</td><td align="center">90.0 deg</td></tr><tr><td><img width=80 src="https://projects.gctronic.com/epuck2/wiki_images/inclination0.png"></td><td><img width=80 src="https://projects.gctronic.com/epuck2/wiki_images/inclination90.png"></td></tr></table>
:*Gyro: raw axes values (0=X LSB, 1=X MSB, 2=Y LSB, 3=Y MSB, 4=Z LSB, 5=Z MSB), between -32768 and 32767, range is +-250dps
:*Magnetometer: raw axes values expressed in float, range is +-4912.0 uT (magnetic flux density expressed in micro Tesla)
:*Temp: temperature given in Celsius degrees
:*IR proximity (0=IR_0 LSB, 1=IR_0 MSB, ...): between 0 (no objects detected) and 4095 (object near the sensor)
:*IR ambient (0=IR_0 LSB, 1=IR_0 MSB, ...): between 0 (strong light) and 4095 (dark)
:*ToF distance: distance given in millimeters
:*Mic volume (0=MIC_0 LSB, 1=MIC_0 MSB, ...): between 0 and 4095
:*Motors steps: 1000 steps per wheel revolution
:*Battery:
:*uSD state: 1 if the micro sd is present and can be read/write, 0 otherwise
:*TV remote data: RC5 protocol
:*Selector position: between 0 and 15
:*Ground proximity (0=GROUND_0 LSB, 1=GROUND_0 MSB, ...): between 0 (no surface at all or not reflective surface e.g. black) and 1023 (very reflective surface e.g. white)
:*Ground ambient (0=GROUND_0 LSB, 1=GROUND_0 MSB, ...): between 0 (strong light) and 1023 (dark)
:*Button state: 1 button pressed, 0 button released
* 0x03 = empty packet (only id is sent); this is used as an acknowledgment for the commands packet when no sensors and no image is requested
The following IDs are used to send data from the computer to the robot:
* 0x80 = commands packet; packet size (without id) = 20 bytes:


:<span class="plain links">[https://projects.gctronic.com/epuck2/wiki_images/packet-format-pc-to-robot.jpg <img width=600 src="https://projects.gctronic.com/epuck2/wiki_images/packet-format-pc-to-robot.jpg">]</span><br/>
==Power button handling==
The power button press is handled by a background service (<code>systemd</code>) started automatically at boot. The service description file is located in <code>/etc/systemd/system/power_handling.service</code> and it calls the <code>/home/pi/power-handling/</code> program. Beware not to remove either of these files.<br/>
The source code of the power button handling program is available in the Pi-puck repository and is located in <code>/home/pi/Pi-puck/power-handling/power-handling.c</code>.


==Desktop mode==
The system starts in console mode; to switch to desktop (LXDE) mode issue the command <code>startx</code>.
===Camera viewer===
A camera viewer called <code>luvcview</code> is installed in the system. You can open a terminal and simply issue the command <code>luvcview</code> to see the image coming from the robot camera.
:*request:
:** bit0: 0=stop image stream; 1=start image stream
:** bit1: 0=stop sensors stream; 1=start sensors stream
:*settings:
:** bit0: 1=calibrate IR proximity sensors
:** bit1: 0=disable onboard obstacle avoidance; 1=enable onboard obstacle avoidance (not implemented yet)
:** bit2: 0=set motors speed; 1=set motors steps (position)
:*left and right: when bit2 of <code>settings</code> field is <code>0</code>, then this is the desired motors speed (-1000..1000); when <code>1</code> then this is the value that will be set as motors position (steps)
:*LEDs: 0=off; 1=on
:** bit0: 0=LED1 off; 1=LED1 on
:** bit1: 0=LED3 off; 1=LED3 on
:** bit2: 0=LED5 off; 1=LED5 on
:** bit3: 0=LED7 off; 1=LED7 on
:** bit4: 0=body LED off; 1=body LED on
:** bit5: 0=front LED off; 1=front LED on
:*RGB LEDs: for each LED, it is specified in sequence the value of red, green and blue (0...100)
:* sound id: 0x01=MARIO, 0x02=UNDERWORLD, 0x04=STARWARS, 0x08=4KHz, 0x10=10KHz, 0x20=stop sound


==VNC==
[https://www.realvnc.com/en/ VNC] is a remote desktop application that lets you connect to the Pi-puck from your computer: you will see the desktop of the Pi-puck inside a window on your computer and you'll be able to control it as though you were working on the Pi-puck itself.<br/>
VNC is installed in the system and the ''VNC server'' is automatically started at boot, thus you can connect with ''VNC Viewer'' from your computer by knowing the IP address of the Pi-puck (refer to section [https://www.gctronic.com/doc/index.php?title=Pi-puck#How_to_know_your_IP_address How to know your IP address]).<br/>
Notice that the ''VNC server'' is started also in console mode.
For example, to receive the camera image (stream) the following steps need to be followed:<br/>
1) connect to the robot through TCP<br/>
2) send the command packet:
:{| border="1"
|0x80
|0x01
|0x00
|0x00
|0x00
|0x00
|0x00
|0x00
|0x00
|0x00
|0x00
|0x00
|0x00
|0x00
|0x00
|0x00
|0x00
|0x00
|0x00
|0x00
|0x00
|}
3) read the ID (1 byte) and the QQVGA color image packet (38400 bytes)<br/>
4) go to step 3
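Putting the steps above together, a minimal Python client could look like the sketch below. The command packet bytes are exactly those listed in step 2; the TCP port is not given here, so the value below is a placeholder to adapt to your setup:
<pre>
import socket

ROBOT_IP = '192.168.1.20'   # IP address of the robot (see "Finding the IP address")
ROBOT_PORT = 1000           # placeholder: the TCP port is not listed here, check your setup

# Command packet from step 2: id 0x80, bit0 of the request byte set to start
# the image stream, all remaining bytes 0.
cmd = bytes([0x80, 0x01] + [0x00] * 19)

def recv_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('connection closed')
        buf += chunk
    return buf

with socket.create_connection((ROBOT_IP, ROBOT_PORT)) as sock:
    sock.sendall(cmd)
    while True:
        packet_id = recv_exact(sock, 1)[0]
        if packet_id == 0x01:                    # QQVGA color image, RGB565
            image = recv_exact(sock, 38400)      # 160 x 120 x 2 bytes
            print('got image of', len(image), 'bytes')
            break
        elif packet_id == 0x03:                  # empty acknowledgment, no payload
            continue
</pre>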


==I2C communication==
The communication between the Pi-puck extension and the robot is based on I2C. The system is configured to exploit the I2C hardware peripheral in order to save CPU usage, but if you need to use software I2C you can enable it by modifying the <code>/boot/config.txt</code> file and removing the <code>#</code> symbol (comment) in front of the line containing <code>dtparam=soft_i2c</code> (towards the end of the file).
=Webots=
1. Download the latest version of [https://cyberbotics.com/ Webots] for your platform and install it.<br/>
2. Program the robot with the [https://www.gctronic.com/doc/index.php?title=e-puck2#WiFi_firmware WiFi firmware] and put the selector in position 15(F). Connect the robot to your WiFi network.<br/>
3. Open the example world you can find in the Webots installation directory <code>Webots\projects\robots\gctronic\e-puck\worlds\e-puck2.wbt</code>.<br/>
4. Double click the robot, a new small window will appear: insert the IP address of the robot and click connect.<br/>
:<span class="plainlinks">[https://www.gctronic.com/doc/images/epuck2-webots.png <img width=450 src="https://www.gctronic.com/doc/images/epuck2-webots.png">]</span>
5. Now you can start the demo, the robot will be remote controlled.<br/>


For more information have a look at the [https://cyberbotics.com/doc/guide/epuck e-puck Webots guide].
==Audio output configuration==
You can enable or disable audio output by modifying the <code>config.txt</code> file in the <code>boot</code> partition.<br/>
To enable audio output insert the line: <code>gpio=22=op,dh</code><br/>
To disable audio output insert the line: <code>gpio=22=op,dl</code><br/>
If you don't need to play audio files it is suggested to disable audio output in order to save power.


=ROS=
ROS Kinetic is integrated in the Pi-puck system.<br/>
This chapter explains how to use ROS with the e-puck2 robots by connecting them via Bluetooth or WiFi to the computer that runs the ROS nodes. Basically all the sensors are exposed to ROS and you can also send commands back to the robot through ROS. Both Python and cpp versions are implemented to give the user the possibility to choose their preferred programming language. Here is a general schema:<br/>
A ROS node developed to run on the Pi-puck is available for both <code>CPP</code> and <code>Python</code>; the communication system is based on the third architecture shown in chapter [https://www.gctronic.com/doc/index.php?title=Pi-puck#Wireless_remote_control Wireless remote control]. A more detailed schema is shown below:<br/>
<span class="plainlinks">[https://www.gctronic.com/doc/images/epuck2-ros-schema.png <img width=450 src="https://www.gctronic.com/doc/images/epuck2-ros-schema-small.png">]</span>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/epuck2-ros-schema.png <img width=600 src="https://projects.gctronic.com/epuck2/wiki_images/epuck2-ros-schema.png">]</span>
''<font size="2">Click to enlarge</font>''<br/>
 
First of all you need to install and configure ROS, refer to [https://wiki.ros.org/Distributions https://wiki.ros.org/Distributions] for more information. <font style="color:red"> This tutorial is based on ROS Kinetic</font>. The same instructions work with ROS Noetic; just use <code>noetic</code> instead of <code>kinetic</code> when installing the packages.
 
Starting from the work done with the e-puck1 (see [https://www.gctronic.com/doc/index.php?title=E-Puck#ROS E-Puck ROS]), we updated the code in order to support the e-puck2 robot.


==Initial configuration==
The ROS workspace is located in <code>~/rosbots_catkin_ws/</code><br/>
The e-puck2 ROS driver is located in <code>~/rosbots_catkin_ws/src/epuck_driver_cpp/</code><br/>
Remember to follow the steps in the section [http://www.gctronic.com/doc/index.php?title=Pi-puck#Requirements Requirements] and section [https://www.gctronic.com/doc/index.php?title=Pi-puck#Demos_and_scripts_update Demos and scripts update], only once.<br/>
The PC (if used) and the Pi-puck extension are supposed to be configured in the same network.
The following steps need to be done only once, after installing ROS:
:1. If not already done, create a catkin workspace, refer to [https://wiki.ros.org/catkin/Tutorials/create_a_workspace https://wiki.ros.org/catkin/Tutorials/create_a_workspace]. Basically you need to issue the following commands:
<pre>  mkdir -p ~/catkin_ws/src
  cd ~/catkin_ws/src
  catkin_init_workspace
  cd ~/catkin_ws/
  catkin_make
  source devel/setup.bash </pre>
:2. You will need to add the line <code>source ~/catkin_ws/devel/setup.bash</code> to your <tt>.bashrc</tt> in order to automatically have access to the ROS commands when the system is started
:3. Move to <code>~/catkin_ws/src</code> and clone the ROS e-puck2 driver repo:
:* if you are working with Python (only Bluetooth communication supported at the moment): <code>git clone -b e-puck2 https://github.com/gctronic/epuck_driver</code>
:* if you are working with cpp:
:** Bluetooth communication: <code>git clone -b e-puck2 https://github.com/gctronic/epuck_driver_cpp</code>
:** WiFi communication: <code>git clone -b e-puck2_wifi https://github.com/gctronic/epuck_driver_cpp</code>
:4. Install the dependencies:
:* ROS:
:** [https://wiki.ros.org/gmapping gmapping (SLAM)] package: <code>sudo apt-get install ros-kinetic-gmapping</code>
:** [https://wiki.ros.org/rviz_imu_plugin Rviz IMU plugin] package: <code>sudo apt-get install ros-kinetic-rviz-imu-plugin</code>
:* Python:
:** The ROS e-puck2 driver is based on the e-puck2 Python library that requires some dependencies:
:*** install the Python setup tools: <code>sudo apt-get install python-setuptools</code>
:*** install the Python image library: <code>sudo apt-get install python-imaging</code>
:*** install pybluez version 0.22: <code>sudo pip install pybluez==0.22</code>
:**** install pybluez dependencies: <code>sudo apt-get install libbluetooth-dev</code>
:*** install OpenCV: <code>sudo apt-get install python3-opencv</code>
:* cpp:
:** install the library used to communicate with Bluetooth: <code>sudo apt-get install libbluetooth-dev</code>
:** install OpenCV: <code>sudo apt-get install libopencv-dev</code>
:*** if you are working with OpenCV 4, then you need to change the header include from <code>#include <opencv/cv.h></code> to <code>#include <opencv2/opencv.hpp></code>
:5. Open a terminal and go to the catkin workspace directory (<tt>~/catkin_ws</tt>) and issue the command <code>catkin_make</code>, there shouldn't be errors
:6. Program the e-puck2 robot with the [https://www.gctronic.com/doc/index.php?title=e-puck2#Factory_firmware factory firmware] and put the selector in position 3 for Bluetooth communication or in position 15(F) for WiFi Communication
:7. Program the radio module with the correct firmware:
:* Bluetooth communication: use the [https://www.gctronic.com/doc/index.php?title=e-puck2#Factory_firmware_2 factory firmware]
:* WiFi communication: use the [https://www.gctronic.com/doc/index.php?title=e-puck2#WiFi_firmware WiFi firmware]
 
==Running the Python ROS node==
First of all get the latest version of the ROS e-puck2 driver from github. Move to <code>~/catkin_ws/src</code> and issue: <code>git clone -b e-puck2 https://github.com/gctronic/epuck_driver</code>. <br/>
Then build the driver by opening a terminal and issuing the command <code>catkin_make</code> from within the catkin workspace directory (e.g. ~/catkin_ws).<br/>
Moreover make sure the node is marked as executable by opening a terminal and issuing the following command from within the catkin workspace directory (e.g. ~/catkin_ws): <code>chmod +x ./src/epuck_driver/scripts/epuck2_driver.py</code>. <br/>

Before actually starting the e-puck2 node you need to configure the e-puck2 robot as a Bluetooth device in the system; refer to section [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#Connecting_to_the_Bluetooth Connecting to the Bluetooth].<br/>
Once the robot is paired with the computer, you need to take note of its MAC address (this will be needed when launching the ROS node). To know the MAC address of a paired robot, go to <tt>System Settings</tt>, <tt>Bluetooth</tt> and select the robot; once selected you'll see the related MAC address on the right side.
 
The first thing to do before launching the script file is running <tt>roscore</tt>: open another terminal tab and issue the command <tt>roscore</tt>.

Now you can finally start the e-puck2 ROS node; for this purpose there is a launch script (based on [https://wiki.ros.org/roslaunch roslaunch]).<br/>
Open a terminal and issue the following command: <code>roslaunch epuck_driver epuck2_controller.launch epuck2_address:='B4:E6:2D:EB:9C:4F'</code>.<br/>
<tt>B4:E6:2D:EB:9C:4F</tt> is the e-puck2 Bluetooth MAC address; it needs to be changed according to your robot.

If all goes well you'll see the robot blink, meaning it is connected and ready to exchange data, and [https://wiki.ros.org/rviz/UserGuide rviz] will be opened showing the information gathered from the topics published by the e-puck2 driver node.

The launch script is also configured to run the [https://wiki.ros.org/gmapping gmapping (SLAM)] node that lets the robot construct a map of the environment; the map is visualized in real-time directly in the rviz window. The gmapping package provides laser-based SLAM (Simultaneous Localization and Mapping) and since the e-puck2 has no laser sensor, the information from the 6 proximity sensors on the front side of the robot is interpolated to get 19 laser scan points.
 
The following figures show all the topics published by the e-puck2 driver node (left) and the <code>rviz</code> interface (right): <br/>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/e-puck2_topics.png <img width=200 src="https://projects.gctronic.com/epuck2/wiki_images/e-puck2_topics_small.png">]</span>
''<font size="2">Click to enlarge</font>''
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/e-puck2-rviz.png <img width=400 src="https://projects.gctronic.com/epuck2/wiki_images/e-puck2-rviz_small.png">]</span>
''<font size="2">Click to enlarge</font>''<br/>
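If you prefer to inspect the published topics from a small script rather than with <code>rostopic echo</code>, the following minimal Python sketch (an illustration, not part of the driver) subscribes to one topic and prints the received messages; adapt the topic name to what <code>rostopic list</code> reports, and note that the message class is detected at runtime so the type does not need to be hard-coded:
<pre>
#!/usr/bin/env python
# Minimal sketch: print the messages of one e-puck2 driver topic.
# Assumptions: roscore and the driver are already running and publish /proximity0.
import rospy
import rostopic

def callback(msg):
    rospy.loginfo(msg)

if __name__ == '__main__':
    rospy.init_node('epuck2_topic_listener')
    # Detect the message class at runtime to avoid hard-coding the type.
    msg_class, real_topic, _ = rostopic.get_topic_class('/proximity0', blocking=True)
    rospy.Subscriber(real_topic, msg_class, callback)
    rospy.spin()
</pre>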
 
==Running the cpp ROS node==
There is currently a small difference between the Bluetooth and WiFi versions of the ROS node: the WiFi ROS node also supports the publication of the magnetometer data.
===Bluetooth===
First of all get the latest version of the ROS e-puck2 driver from github. Move to <code>~/catkin_ws/src</code> and issue: <code>git clone -b e-puck2 https://github.com/gctronic/epuck_driver_cpp</code>. <br/>
Then build the driver by opening a terminal and issuing the command <code>catkin_make</code> from within the catkin workspace directory (e.g. ~/catkin_ws).<br/>

Before actually starting the e-puck2 node you need to configure the e-puck2 robot as a Bluetooth device in the system; refer to section [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#Connecting_to_the_Bluetooth Connecting to the Bluetooth].<br/>
Once the robot is paired with the computer, you need to take note of its MAC address (this will be needed when launching the ROS node). To know the MAC address of a paired robot, go to <tt>System Settings</tt>, <tt>Bluetooth</tt> and select the robot; once selected you'll see the related MAC address on the right side.

The first thing to do before launching the script file is running <tt>roscore</tt>: open another terminal tab and issue the command <tt>roscore</tt>.

Now you can finally start the e-puck2 ROS node; for this purpose there is a launch script (based on [https://wiki.ros.org/roslaunch roslaunch]).<br/>
Open a terminal and issue the following command: <code>roslaunch epuck_driver_cpp epuck2_controller.launch epuck2_address:='B4:E6:2D:EB:9C:4F'</code>.<br/>
<tt>B4:E6:2D:EB:9C:4F</tt> is the e-puck2 Bluetooth MAC address; it needs to be changed according to your robot.

If all goes well the robot will be ready to exchange data and [https://wiki.ros.org/rviz/UserGuide rviz] will be opened showing the information gathered from the topics published by the e-puck2 driver node.

The launch script is also configured to run the [https://wiki.ros.org/gmapping gmapping (SLAM)] node that lets the robot construct a map of the environment; the map is visualized in real-time directly in the rviz window. The gmapping package provides laser-based SLAM (Simultaneous Localization and Mapping) and since the e-puck2 has no laser sensor, the information from the 6 proximity sensors on the front side of the robot is interpolated to get 19 laser scan points.
===WiFi===
First of all get the latest version of the ROS e-puck2 driver from github. Move to <code>~/catkin_ws/src</code> and issue: <code>git clone -b e-puck2_wifi https://github.com/gctronic/epuck_driver_cpp</code>. <br/>
Then build the driver by opening a terminal and issuing the command <code>catkin_make</code> from within the catkin workspace directory (e.g. ~/catkin_ws).<br/>


Before actually starting the e-puck2 node you need to connect the e-puck2 robot to your WiFi network; refer to section [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#Connecting_to_the_WiFi Connecting to the WiFi].<br/>

==Running roscore==
<code>roscore</code> can be launched either from the PC or directly from the Pi-puck.<br/>
Before starting roscore, open a terminal and issue the following commands:
* <code>export ROS_IP=roscore-ip</code>
* <code>export ROS_MASTER_URI=http://roscore-ip:11311</code>
where <code>roscore-ip</code> is the IP of the machine that runs <code>roscore</code><br/>
Then start <code>roscore</code> by issuing <code>roscore</code>.


==Running the ROS node==
Before starting the e-puck2 ROS node on the Pi-puck, issue the following commands:
* <code>export ROS_IP=pipuck-ip</code>
* <code>export ROS_MASTER_URI=http://roscore-ip:11311</code>
where <code>pipuck-ip</code> is the IP of the Pi-puck extension and <code>roscore-ip</code> is the IP of the machine that runs <code>roscore</code> (can be the same IP if <code>roscore</code> runs directly on the Pi-puck).

To start the e-puck2 ROS node issue the command:<br/>
<code>roslaunch epuck_driver_cpp epuck_minimal.launch debug_en:=true ros_rate:=20</code><br/>
<!--
To start the e-puck2 ROS node issue the command:<br/>
<code>roslaunch epuck_driver_cpp epuck_controller.launch epuck_id:='3000'</code><br/>
This launch file will start the e-puck2 node and the camera node.
If you are using a PC, then you can start <code>rviz</code>:
* in a terminal issue the command <code>rviz rviz</code>
* open the configuration file named <code>single_epuck_driver_rviz.rviz</code> you can find in <code>epuck_driver_cpp/config/</code> directory
-->

The following graph shows all the topics published by the e-puck2 driver node:<br/>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/ros-e-puck2_.jpg <img width=150 src="https://projects.gctronic.com/epuck2/wiki_images/ros-e-puck2_small.jpg">]</span>
''<font size="2">Click to enlarge</font>''

The first thing to do before launching the script file is running <tt>roscore</tt>: open another terminal tab and issue the command <tt>roscore</tt>.

Now you can finally start the e-puck2 ROS node; for this purpose there is a launch script (based on [https://wiki.ros.org/roslaunch roslaunch]).<br/>
Open a terminal and issue the following command: <code>roslaunch epuck_driver_cpp epuck2_controller.launch epuck2_address:='192.168.1.20'</code>.<br/>
<tt>192.168.1.20</tt> is the e-puck2 IP address; it needs to be changed according to your robot.

If all goes well the robot will be ready to exchange data and [https://wiki.ros.org/rviz/UserGuide rviz] will be opened showing the information gathered from the topics published by the e-puck2 driver node.

The launch script is also configured to run the [https://wiki.ros.org/gmapping gmapping (SLAM)] node that lets the robot construct a map of the environment; the map is visualized in real-time directly in the rviz window. The gmapping package provides laser-based SLAM (Simultaneous Localization and Mapping) and since the e-puck2 has no laser sensor, the information from the 6 proximity sensors on the front side of the robot is interpolated to get 19 laser scan points.

The refresh rate of the topics is about 11 Hz when the camera image is enabled (see [https://projects.gctronic.com/epuck2/wiki_images/e-puck2_topics_wifi_refresh_camon.pdf e-puck2_topics_wifi_refresh_camon.pdf]) and about 50 Hz when the camera image is disabled (see [https://projects.gctronic.com/epuck2/wiki_images/e-puck2_topics_wifi_refresh_camoff.pdf e-puck2_topics_wifi_refresh_camoff.pdf]). The same graphs can be created using the command <code>rosrun tf view_frames</code>.

The following figure shows all the topics published by the e-puck2 WiFi ROS node. The same graph can be created using the command <code>rqt_graph</code>. <br/>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/e-puck2_topics_wifi.png <img width=200 src="https://projects.gctronic.com/epuck2/wiki_images/e-puck2_topics_wifi.png">]</span>
''<font size="2">Click to enlarge</font>''


==Move the robot==
You have some options to move the robot.<br/>

The first one is to use the <code>rviz</code> interface: in the bottom left side of the interface there is a <code>Teleop</code> panel containing an ''interactive square'' meant to be used with differential drive robots. By clicking in this square you'll move the robot; for instance, by clicking on the top-right section the robot will move forward-right.<br/>

The second method to move the robot is using the <code>ros-kinetic-turtlebot-teleop</code> ROS package. If not already done, you can install this package by issuing <code>sudo apt-get install ros-kinetic-turtlebot-teleop</code>.<br/>
There is a launch file in the e-puck2 ROS driver that configures this package in order to be used with the e-puck2 robot. To start the launch file, issue the following command <code>roslaunch epuck_driver epuck2_teleop.launch</code>, then follow the instructions printed on the terminal to move the robot.<br/>

The third method is by directly publishing on the <code>/mobile_base/cmd_vel</code> topic; for instance, by issuing the command <code>rostopic pub -1 /mobile_base/cmd_vel geometry_msgs/Twist -- '[0.0, 0.0, 0.0]' '[0.0, 0.0, 1.0]'</code> the robot will rotate on the spot, while by issuing the command <code>rostopic pub -1 /mobile_base/cmd_vel geometry_msgs/Twist -- '[4.0, 0.0, 0.0]' '[0.0, 0.0, 0.0]'</code> the robot will move straight forward.<br/>
Beware that there shouldn't be any other node publishing on the <code>/mobile_base/cmd_vel</code> topic, otherwise your commands will be overwritten.
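The same velocity commands can also be sent from a small Python script. Here is a minimal sketch (an illustration, assuming <tt>roscore</tt> and the driver are already running) that publishes <code>geometry_msgs/Twist</code> messages on <code>/mobile_base/cmd_vel</code>:
<pre>
#!/usr/bin/env python
# Minimal sketch: publish velocity commands on /mobile_base/cmd_vel.
# Assumptions: roscore and the e-puck2 driver are already running.
import rospy
from geometry_msgs.msg import Twist

if __name__ == '__main__':
    rospy.init_node('epuck2_teleop_script')
    pub = rospy.Publisher('/mobile_base/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)  # publish at 10 Hz
    cmd = Twist()
    cmd.linear.x = 2.0   # forward speed, same units as the rostopic examples above
    cmd.angular.z = 0.5  # slight rotation
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()
</pre>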


==Control the RGB LEDs==
The general command to change the RGB LEDs colors is the following:<br/>
<code>rostopic pub -1 /mobile_base/rgb_leds std_msgs/UInt8MultiArray "{data: [LED2 red, LED2 green, LED2 blue, LED4 red, LED4 green, LED4 blue, LED6 red, LED6 green, LED6 blue, LED8 red, LED8 green, LED8 blue]}"</code><br/>
The values range from 0 (off) to 100 (completely on). Have a look at the [https://www.gctronic.com/doc/index.php?title=e-puck2#Overview e-puck2 overview] to know the position of the RGB LEDs.<br/>

For instance, to set all the RGB LEDs to red, issue the following command:<br/>
<code>rostopic pub -1 /mobile_base/rgb_leds std_msgs/UInt8MultiArray "{data: [100,0,0, 100,0,0, 100,0,0, 100,0,0]}"</code><br/>

To turn off all the RGB LEDs issue the following command:<br/>
<code>rostopic pub -1 /mobile_base/rgb_leds std_msgs/UInt8MultiArray "{data: [0,0,0, 0,0,0, 0,0,0, 0,0,0]}"</code>

==Control the LEDs==
The general command to change the LEDs state is the following:<br/>
<code>rostopic pub -1 /mobile_base/cmd_led std_msgs/UInt8MultiArray "{data: [LED1, LED3, LED5, LED7, body LED, front LED]}"</code><br/>
The values are: 0 (off), 1 (on) and 2 (toggle). Have a look at the [https://www.gctronic.com/doc/index.php?title=e-puck2#Overview e-puck2 overview] to know the position of the LEDs.<br/>

For instance, to turn on LED1, LED5, the body LED and the front LED, issue the following command:<br/>
<code>rostopic pub -1 /mobile_base/cmd_led std_msgs/UInt8MultiArray "{data: [1,0,1,0,1,1]}"</code><br/>

To toggle the state of all the LEDs issue the following command:<br/>
<code>rostopic pub -1 /mobile_base/cmd_led std_msgs/UInt8MultiArray "{data: [2,2,2,2,2,2]}"</code>
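From a script, the same two topics can be driven by publishing <code>std_msgs/UInt8MultiArray</code> messages. The following minimal Python sketch (an illustration, assuming <tt>roscore</tt> and the driver are already running) sets all the RGB LEDs to red and turns on LED1, LED5, the body LED and the front LED:
<pre>
#!/usr/bin/env python
# Minimal sketch: set the RGB LEDs and the other LEDs from a script.
# Assumptions: roscore and the e-puck2 driver are already running.
import rospy
from std_msgs.msg import UInt8MultiArray

if __name__ == '__main__':
    rospy.init_node('epuck2_led_demo')
    rgb_pub = rospy.Publisher('/mobile_base/rgb_leds', UInt8MultiArray, queue_size=1)
    led_pub = rospy.Publisher('/mobile_base/cmd_led', UInt8MultiArray, queue_size=1)
    rospy.sleep(1.0)  # give the publishers time to connect

    rgb = UInt8MultiArray()
    rgb.data = [100, 0, 0] * 4      # LED2, LED4, LED6, LED8 all red
    rgb_pub.publish(rgb)

    leds = UInt8MultiArray()
    leds.data = [1, 0, 1, 0, 1, 1]  # LED1, LED5, body LED and front LED on
    led_pub.publish(leds)
</pre>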
==Visualize the camera image==
By default the camera is disabled to avoid communication delays. In order to enable it and visualize the image through ROS you need to pass an additional parameter <code>cam_en</code> to the launch script as follows:<br/>
* Python: <code>roslaunch epuck_driver epuck2_controller.launch epuck2_address:='B4:E6:2D:EB:9C:4F' cam_en:='true'</code>
* cpp:
** Bluetooth: <code>roslaunch epuck_driver_cpp epuck2_controller.launch epuck2_address:='B4:E6:2D:EB:9C:4F' cam_en:='true'</code>
** WiFi: <code>roslaunch epuck_driver_cpp epuck2_controller.launch epuck2_address:='192.168.1.20' cam_en:='true'</code>

Then with the Python ROS node you need to open another terminal and issue the command <code>rosrun image_view image_view image:=/camera</code> that will open a window with the e-puck2 camera image.<br/>
With the cpp ROS node the image is visualized directly in the Rviz window (on the right).<br/>

When using the Bluetooth ROS node, by default the image is greyscale and its size is 160x2, but you can change the image parameters in the launch script.<br/>
Instead, when using the WiFi node, the image is RGB565 and its size is fixed to 160x120 (you can't change it).
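Besides <code>image_view</code> and rviz, the image can also be processed from your own node. Here is a minimal Python sketch (an illustration) that displays the camera frames with OpenCV; it assumes the node was launched with <code>cam_en:='true'</code>, that the image is published on <code>/camera</code> (the topic used with <code>image_view</code> above) and that <code>cv_bridge</code> is installed:
<pre>
#!/usr/bin/env python
# Minimal sketch: display the e-puck2 camera image published by the driver.
# Assumptions: cam_en:='true', image topic /camera, cv_bridge and OpenCV installed.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    cv2.imshow('e-puck2 camera', frame)
    cv2.waitKey(1)

if __name__ == '__main__':
    rospy.init_node('epuck2_camera_viewer')
    rospy.Subscriber('/camera', Image, on_image)
    rospy.spin()
</pre>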
==Multiple robots==
There is a launch script designed to run up to 4 robots simultaneously; you can find it in <code>~/catkin_ws/src/epuck_driver_cpp/launch/multi_epuck2.launch</code>. Here is an example to run 2 robots:<br/>
<code>roslaunch epuck_driver_cpp multi_epuck2.launch robot_addr0:='192.168.1.21' robot_addr1:='192.168.1.23'</code><br/>
After issuing the command, rviz will be opened showing the values of all the 4 robots; it is assumed that the robots are placed in a 20 cm square (one robot at each corner).<br/>
Beware that this launch script is available only in the WiFi branch, but it can be used as a starting point also for the Bluetooth communication.

==Troubleshooting==
===Robot state publisher===
If you get an error similar to the following when you start a node with roslaunch:
<pre>
ERROR: cannot launch node of type [robot_state_publisher/state_publisher]: Cannot locate node of type [state_publisher] in package [robot_state_publisher]. Make sure file exists in package path and permission is set to executable (chmod +x)
</pre>
Then you need to change the launch file from:
<pre>
<node name="robot_state_publisher" pkg="robot_state_publisher" type="state_publisher" />
</pre>
To:
<pre>
<node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher" />
</pre>
This is due to the fact that <code>state_publisher</code> was a deprecated alias for the node named <code>robot_state_publisher</code> (see [https://github.com/ros/robot_state_publisher/pull/87 https://github.com/ros/robot_state_publisher/pull/87]).

==Test the communication==
You can test if the communication between the robot and the computer is actually working by simply displaying the messages published by a topic, e.g.:<br/>
<code>rostopic echo /proximity0</code><br/>
You can have the list of all the topics by issuing the command: <code>rostopic list</code>.

==Get the source code==
The latest version of the e-puck2 ROS node can be downloaded from the git: <code>git clone -b pi-puck https://github.com/gctronic/epuck_driver_cpp.git</code><br/>

To update to the latest version follow these steps:
# <code>cd ~/rosbots_catkin_ws/src/</code>
# <code>rm -R -f epuck_driver_cpp</code>
# <code>git clone -b pi-puck https://github.com/gctronic/epuck_driver_cpp.git</code>
# <code>cd ~/rosbots_catkin_ws/</code>
# <code>catkin_make --only-pkg-with-deps epuck_driver_cpp</code>

==Python version==
A Python version developed by the York Robotics Lab (University of York) can be found here [https://github.com/yorkrobotlab/pi-puck-ros https://github.com/yorkrobotlab/pi-puck-ros].

=OpenCV=
OpenCV 3.4.1 is integrated in the Pi-puck system.

=York Robotics Lab Expansion Board=
The York Robotics Lab developed an expansion board for the Pi-puck extension that includes: 9-DoF IMU, 5-input navigation switch, RGB LED, XBee socket, 24-pin Raspberry Pi compatible header. For more information have a look at [https://pi-puck.readthedocs.io/en/latest/extensions/yrl-expansion/ https://pi-puck.readthedocs.io/en/latest/extensions/yrl-expansion/].<br/>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/yrl-expansion-top.jpg <img width=350 src="https://projects.gctronic.com/epuck2/wiki_images/yrl-expansion-top.jpg">]</span><br/>

An example showing how to communicate with the YRL expansion board is available in the Pi-puck repository of the York Robotics Lab:
# <code> git clone https://github.com/yorkrobotlab/pi-puck.git pi-puck_yrl</code>
# <code>cd pi-puck_yrl/python-library</code>
# <code>python3 pipuck-library-test.py -x</code> Once started, press in sequence up, down, left, right, center to continue the demo.

==Assembly==
The assembly is very simple: place the YRL expansion board on top of the Raspberry Pi and then connect them with the provided screws. Once they are connected, you can attach both on top of the Pi-puck extension.<br/>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/yrl-exp1.jpg <img width=200 src="https://projects.gctronic.com/epuck2/wiki_images/yrl-exp1.jpg">]</span>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/yrl-exp2.jpg <img width=150 src="https://projects.gctronic.com/epuck2/wiki_images/yrl-exp2.jpg">]</span>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/yrl-exp3.jpg <img width=200 src="https://projects.gctronic.com/epuck2/wiki_images/yrl-exp3.jpg">]</span><br/>

==XBee==
In this section it is explained how to send data from the Pi-puck to the computer using XBee modules Series 1.

The XBee module mounted on the YRL expansion must be programmed with the <code>XBEE 802.15.4-USB ADAPTER</code> firmware; this can be done with the [http://www.digi.com/products/wireless-wired-embedded-solutions/zigbee-rf-modules/xctu XCTU software]. With XCTU be sure to also program the same parameters on both modules so that they can communicate with each other: <code>Channel</code> (e.g. <code>C</code>), <code>PAN ID</code> (e.g. <code>3332</code>), <code>DH = 0</code>, <code>DL = 0</code>, <code>MY = 0</code>.

Some Python examples are available in the [https://github.com/yorkrobotlab/pi-puck-expansion-board YRL Expansion Board GitHub repository] that can be used to communicate with the XBee module mounted on the YRL expansion. These examples are based on the [https://github.com/digidotcom/xbee-python Digi XBee Python library] that can be installed with the command <code>pip3 install digi-xbee</code>. This library requires the XBee module to be configured in API mode; you can set up this mode following these steps:
# <code> git clone https://github.com/yorkrobotlab/pi-puck-expansion-board.git</code>
# <code>cd pi-puck-expansion-board/xbee</code>
# <code>python3 xbee-enable-api-mode.py</code>

Now connect the second module to the computer and run XCTU, select the console view and open the serial connection. Then run the [https://projects.gctronic.com/epuck2/PiPuck/xbee-send-broadcast.py xbee-send-broadcast.py] example from the Pi-puck by issuing the command: <code>python3 xbee-send-broadcast.py</code>. From the XCTU console you should receive <code>Hello Xbee World!</code>.

For more information refer to [https://pi-puck.readthedocs.io/en/latest/extensions/yrl-expansion/xbee/ https://pi-puck.readthedocs.io/en/latest/extensions/yrl-expansion/xbee/].
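A minimal broadcast script using the Digi XBee Python library mentioned above could look like the following sketch (an illustration, not the original <code>xbee-send-broadcast.py</code>); the serial port and baud rate are assumptions, adapt them to how the XBee module of the YRL expansion is exposed on your Pi-puck:
<pre>
# Minimal broadcast sketch based on the Digi XBee Python library (pip3 install digi-xbee).
# The serial port and baud rate below are assumptions; adapt them to your setup.
from digi.xbee.devices import XBeeDevice

PORT = "/dev/ttyS0"   # hypothetical port, check how the XBee is exposed on your system
BAUD_RATE = 9600

device = XBeeDevice(PORT, BAUD_RATE)
try:
    device.open()
    device.send_data_broadcast("Hello Xbee World!")  # received in the XCTU console
finally:
    if device.is_open():
        device.close()
</pre>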
=Time-of-Flight Distance Sensor add-on=
The Pi-puck extension integrates six sensor board sockets that can be used to add up to six VL53L1X-based distance sensor add-ons. The Pi-puck equipped with these add-ons is shown in the following figure:<br/>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/pi-puck-tof.jpg <img width=250 src="https://projects.gctronic.com/epuck2/wiki_images/pi-puck-tof.jpg">]</span><br/>
For more information have a look at [https://pi-puck.readthedocs.io/en/latest/extensions/tof-sensor/#time-of-flight-distance-sensor https://pi-puck.readthedocs.io/en/latest/extensions/tof-sensor/#time-of-flight-distance-sensor].

<font style="color:red"> Beware that once the socket for the ToF add-on sensor '''3''' is soldered on the Pi-puck extension, you are no longer able to connect the HDMI cable.</font>

==Communicate with the ToF sensors==
In order to communicate with the sensors you can use the <code>multiple-i2c-bus-support</code> branch of the [https://github.com/pimoroni/vl53l1x-python vl53l1x-python] library from [https://shop.pimoroni.com/ Pimoroni]. To install this library follow these steps:
# <code>git clone -b multiple-i2c-bus-support https://github.com/pimoroni/vl53l1x-python.git</code>
# <code>cd vl53l1x-python</code>
# <code>sudo python3 setup.py install</code>

A Python example showing how to read data from the ToF sensors is available in the Pi-puck repository of the York Robotics Lab:
# <code> git clone https://github.com/yorkrobotlab/pi-puck.git pi-puck_yrl</code>
# <code>cd pi-puck_yrl/python-library</code>
# <code>python3 pipuck-library-test.py -t</code>
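As a starting point, a minimal sketch using the Pimoroni library directly could look like the following (an illustration: the I2C bus number is an assumption, since each ToF socket sits behind the Pi-puck I2C switcher on its own bus, so adapt it to the socket you populated, e.g. by checking with <code>i2cdetect</code>):
<pre>
# Minimal sketch using the Pimoroni VL53L1X library installed above.
# The I2C bus number is an assumption; adapt it to the socket you populated.
import time
import VL53L1X

TOF_BUS = 3  # hypothetical bus number for one of the six sockets

tof = VL53L1X.VL53L1X(i2c_bus=TOF_BUS, i2c_address=0x29)
tof.open()
tof.start_ranging(1)  # 1 = short range mode
try:
    for _ in range(10):
        print("Distance: {} mm".format(tof.get_distance()))
        time.sleep(0.1)
finally:
    tof.stop_ranging()
</pre>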


=Tracking=
Some experiments were done with the [https://en.wikibooks.org/wiki/SwisTrack SwisTrack software] in order to be able to track the e-puck2 robots through a color marker placed on top of the robots.

The requirements are the following:
* e-puck robots equipped with a color marker attached on top of the robot; beware that there should be a white border of about 1 cm to avoid wrong detection (marker merging). The color markers were printed with a laser printer.
* USB webcam with a resolution of at least 640x480. In our tests we used the <code>Trust SpotLight Pro</code>.
* Windows OS: the SwisTrack pre-compiled package was built to run in Windows. Moreover the controller example depends on Windows libraries.<br/>''Anyway it's important to notice that SwisTrack is multiplatform and that the controller code can be ported to Linux.''
* An arena with uniform light conditions to make the detection more robust.

==Controller example==
In this example we exploit the ''SwisTrack'' blob detection feature in order to detect the color markers on top of the robots and then track these blobs with a ''Nearest Neighbour tracking'' algorithm.<br/>
The ''SwisTrack'' application gets an image from the USB camera, then applies some conversions and thresholding before applying the blob detection and finally tracks these blobs. All the data, like the blob positions, are published to the network (TCP). <br/>
The controller is a separate application that receives the data from SwisTrack through the network and opens a Bluetooth connection with each robot in order to remote control them. In the example, the information received is printed in the terminal while moving the robots around (obstacle avoidance).<br/>
The following figure shows the connection schema:<br/>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/tracking-schema.png <img width=400 src="https://projects.gctronic.com/epuck2/wiki_images/tracking-schema.png">]</span><br/>

Follow these steps to run the example:
* program all the e-puck2 robots with the last factory firmware (see section [https://www.gctronic.com/doc/index.php?title=e-puck2#Firmware_update Firmware update]) and put the selector in position 3
* pair the robots with the computer, refer to section [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#Connecting_to_the_Bluetooth Connecting to the Bluetooth]
* the controller example is based on the [https://www.gctronic.com/doc/index.php?title=e-puck2_PC_side_development#C.2B.2B_remote_library C++ remote library], so download it
* download the controller example by issuing the following command: <code>git clone https://github.com/e-puck2/e-puck2_tracking_example</code>.<br/> When building the example, make sure that both the library and the example are in the same directory
* download the pre-compiled [https://projects.gctronic.com/elisa3/SwisTrackEnvironment-10.04.13.zip SwisTrack software] and extract it. The ''SwisTrack'' executable can be found in <code>SwisTrackEnvironment/SwisTrack - Release.exe</code>
* prepare the arena: place the USB camera on the roof pointing towards the robots. Download the [https://projects.gctronic.com/epuck2/tracking/e-puck2-tracking-markers.pdf markers] and attach one of them on top of each robot.
* download the [https://projects.gctronic.com/epuck2/tracking/swistrack-conf.zip configuration files package] for ''SwisTrack'' and extract it. Run the ''SwisTrack'' executable and open the configuration file called <code>epuck2.swistrack</code>. All the components to accomplish the tracking of '''2 robots''' should be loaded automatically.<br/> If needed you can tune the various components to improve the blob detection in your environment or for tracking more robots.
* run the controller example: at the beginning you must enter the Bluetooth UART port numbers for the 2 robots. Then the robots will be moved slightly in order to identify which robot belongs to which blob. Then the controller loop is started, sending motion commands to the robots for obstacle avoidance and printing the data received from SwisTrack in the terminal.

The following image shows the example running:<br/>
<span class="plainlinks">[https://projects.gctronic.com/epuck2/wiki_images/tracking-epuck2.png <img width=250 src="https://projects.gctronic.com/epuck2/wiki_images/tracking-epuck2_small.png">]</span><br/>

=Matlab=
A Matlab interface is available in the following repository [https://github.com/gctronic/e-puck-library/tree/master/tool/ePic https://github.com/gctronic/e-puck-library/tree/master/tool/ePic]. This interface was developed for the e-puck version 1 robot but it is also compatible with the e-puck version 2 robot since it is based on the [https://www.gctronic.com/doc/index.php/Advanced_sercom_protocol advanced sercom protocol].


e-puck2 main wiki

Robot configuration

This section explains how to configure the robot based on the communication channel you will use for your developments, thus you need to read only one of the following sections, but it would be better if you spend a bit of time reading them all in order to have a full understanding of the available configurations.

USB

The main microcontroller is initially programmed with a firmware that support USB communication.

If the main microcontroller isn't programmed with the factory firmware or if you want to be sure to have the last firmware on the robot, you need to program it with the last factory firmware by referring to section main microcontroller firmware update.

The radio module can be programmed with either the Bluetooth or the WiFi firmware, both are compatible with USB communication:

When you want to interact with the robot from the computer you need to place the selector in position 8 to work with USB.

Section PC interface gives step by step instructions on how to connect the robot with the computer via USB.

Once you tested the connection with the robot and the computer, you can start developing your own application by looking at the details behind the communication protocol. Both USB and Bluetooth communication channels use the same protocol called advanced sercom v2, refer to section Communication protocol: BT and USB for detailed information about this protocol.

Bluetooth

The main microcontroller and radio module of the robot are initially programmed with firmwares that together support Bluetooth communication.

If the main microcontroller and radio module aren't programmed with the factory firmware or if you want to be sure to have the last firmwares on the robot, you need to program them with the last factory firmwares:

When you want to interact with the robot from the computer you need to place the selector in position 3 if you want to work with Bluetooth.

Section Connecting to the Bluetooth gives step by step instructions on how to accomplish your first Bluetooth connection with the robot.

Once you tested the connection with the robot and the computer, you can start developing your own application by looking at the details behind the communication protocol. Both Bluetooth and USB communication channels use the same protocol called advanced sercom v2, refer to section Communication protocol: BT and USB for detailed information about this protocol.

WiFi

For working with the WiFi, the main microcontroller must be programmed with the factory firmware and the radio module must be programmed with a dedicated firmware (not the factory one):

Put the selector in position 15(F).

Section Connecting to the WiFi gives step by step instructions on how to accomplish your first WiFi connection with the robot.

The communication protocol is described in detail in the section Communication protocol: WiFi.

Connecting to the Bluetooth

The factory firmware of the radio module creates 3 Bluetooth channels using the RFcomm protocol when the robot is paired with the computer:

  1. Channel 1, GDB: port to connect with GDB if the programmer is in mode 1 or 3 (refer to chapter Configuring the Programmer's settings for more information about these modes)
  2. Channel 2, UART: port to connect to the UART port of the main processor
  3. Channel 3, SPI: port to connect to the SPI port of the main processor (not yet implemented. Just do an echo for now)

By default, the e-puck2 is not visible when you search for it in the Bluetooth utility of your computer.
To make it visible, it is necessary to hold the USER button (also labeled "esp32" on the electronic board) while turning on the robot with the ON/OFF button.


Then it will be discoverable and you will be able to pair with it.
Note that a prompt could ask you to confirm that the number written on the screen is the same as on the e-puck; just ignore this and accept. Otherwise, if you are asked for a PIN, insert 0000.

Windows 7

When you pair your computer with the e-puck2, 3 COM ports will be automatically created. To see which COM port corresponds to which channel you need to open the properties of the paired e-puck2 robot from Bluetooth devices. Then the ports and related channels are listed in the Services tab, as shown in the following figure:

Windows 10

When you pair your computer with the e-puck2, 6 COM ports will be automatically created. The three ports you will use have Outgoing direction and are named e_puck2_xxxxx-GDB, e_puck2_xxxxx-UART, e_puck2_xxxxx-SPI. xxxxx is the ID number of your e-puck2.
To see which COM port corresponds to which channel you need to:

  1. open the Bluetooth devices manager
  2. pair with the robot
  3. click on More Bluetooth options
  4. the ports and related channels are listed in the COM Ports tab, as shown in the following figure:

Linux

Once paired with the Bluetooth manager, you need to create the port for communicating with the robot by issuing the command:
sudo rfcomm bind /dev/rfcomm0 MAC_ADDR 2
The MAC address is visible from the Bluetooth manager. The parameter 2 indicates the channel, in this case a port for the UART channel is created. If you want to connect to another service you need to change this parameter accordingly (e.g. 1 for GDB and 3 for SPI). Now you can use /dev/rfcomm0 to connect to the robot.

Mac

When you pair your computer with the e-puck2, 3 COM ports will be automatically created: /dev/cu.e-puck2_xxxxx-GDB, /dev/cu.e-puck2_xxxxx-UART and /dev/cu.e-puck2_xxxxx-SPI. xxxxx is the ID number of your e-puck2.

Testing the Bluetooth connection

You need to download the PC application provided in section PC interface: available executables.
In the connection textfield you need to enter the UART channel port, for example:

  • Windows 7: COM258
  • Windows 10: e_puck2_xxxxx-UART
  • Linux: /dev/rfcomm0
  • Mac: /dev/cu.e-puck2_xxxxx-UART

and then click Connect.
You should start receiving sensors data and you can send commands to the robot.

Alternatively you can also use a simple terminal program (e.g. realterm in Windows) instead of the PC application; then you can manually issue the commands to receive sensors data or to set the actuators (once connected, type h + ENTER for a list of available commands).

Python examples

Here are some basic Python 3 examples that show how to get data from the robot through Bluetooth using the commands available with the advanced sercom v2:

In all the examples you need to set the correct Bluetooth serial port related to the robot.
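As a minimal illustration of what such a script looks like (not one of the official examples), the following Python 3 sketch uses pyserial to open the UART port and send the 'h' help command mentioned above; adapt the port name to your system:

  # Minimal sketch (assumptions: pyserial installed, port name adapted to your system).
  # It sends the 'h' help command, as you would do manually in a terminal.
  import serial

  PORT = '/dev/rfcomm0'   # e.g. 'COM258' on Windows or '/dev/cu.e-puck2_xxxxx-UART' on Mac
  ser = serial.Serial(PORT, 115200, timeout=1)
  ser.write(b'h\r\n')     # 'h' + ENTER: ask for the list of available commands
  print(ser.read(4096).decode('ascii', errors='ignore'))
  ser.close()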

Connecting to multiple robots

Here is a simple Python 3 script, multi-robot.py, that opens a connection with 2 robots and exchanges data with them using the advanced sercom protocol. This example can be extended to connect to more than 2 robots.

Automotive

Initial project in which some robots navigate a city trying to handle the crossroads using only the onboard sensors. You can download the Python 3 script from epuck2_automotive.py.

Here is a video of this demo:

C++ remote library

A remote control library implemented in C++ is available to control the e-puck2 robot via a Bluetooth connection from the computer.
The remote control library is multiplatform and uses only standard C++ libraries.
You can download the library with the command git clone https://github.com/e-puck2/e-puck2_cpp_remote_library.
A simple example showing how to use the library is also available; you can download it with the command git clone https://github.com/e-puck2/e-puck2_cpp_remote_example.
Before building the example you need to build the library. Then when building the example, make sure that both the library and the example are in the same directory, that is you must end up with the following directory tree:

e-puck2_projects
|_ e-puck2_cpp_remote_library
|_ e-puck2_cpp_remote_example

The complete API reference is available in the following link e-puck2_cpp_remote_library_api_reference.pdf.

Connecting to the WiFi

The WiFi channel is used to communicate with the robot faster than with Bluetooth. At the moment a QQVGA (160x120) color image is transferred to the computer together with the sensors values at about 10 Hz; of course the robot is also able to receive commands from the computer.
In order to communicate with the robot through WiFi, first you need to configure the network parameters on the robot by connecting directly to it, since the robot is initially configured in access point mode, as explained in the following section. Once the configuration is saved on the robot, it will then connect automatically to the network and you can connect to it.

The LED2 is used to indicate the state of the WiFi connection:

  • red indicates that the robot is in access point mode (waiting for configuration)
  • green indicates that the robot is connected to a network and has received an IP address
  • blue (toggling) indicates that the robot is transferring the image to the computer
  • off when the robot cannot connect to the saved configuration

Network configuration

If there is no WiFi configuration saved in flash, then the robot will be in access point mode in order to let the user connect to it and setup a WiFi connection. The LED2 is red.

The access point SSID will be e-puck2_0XXXX where XXXX is the id of the robot; the password to connect to the access point is e-puck2robot.
You can use a phone, a tablet or a computer to connect to the robot's WiFi and then you need to open a browser and insert the address 192.168.1.1. The available networks are scanned automatically and listed in the browser page as shown in figure 1. Choose the WiFi signal you want the robot to establish a connection with from the generated list, and enter the related password; if the password is correct you'll get a message saying that the connection is established as shown in figure 2. After pressing OK you will be redirected to the main page showing the network to which you're connected and the others available nearby as shown in figure 3. If you press on the connected network, then you can see your IP address as shown in figure 4; take note of the address since it will be needed later.

[Figures 1-4: screenshots of the robot's web configuration pages]


Now the configuration is saved in flash, this means that when the robot is turned on it will read this configuration and try to establish a connection automatically.
Remember that you need to power cycle the robot at least once for the new configuration to be active.

Once the connection is established, the LED2 will be green.

In order to reset the current configuration you need to press the user button for 2 seconds (the LED2 red will turn on), then you need to power cycle the robot to enter access point mode.


Finding the IP address

Often the IP address assigned to the robot will remain the same when connecting to the same network, so if you took note of the IP address in section Network configuration you're ready to go to the next section.

Otherwise you need to connect the robot to the computer with the USB cable, open a terminal and connect to the port labeled Serial Monitor (see chapter Finding the USB serial ports used). Then power cycle the robot and the IP address will be shown in the terminal (together with other information), as illustrated in the following figure:

Testing the WiFi connection

A dedicated WiFi version of the PC application was developed to communicate with the robot through TCP protocol. You can download the executable from one of the following links:

If you are interested in the source code, you can download it with the command git clone -b wifi --recursive https://github.com/e-puck2/monitor.git

Run the PC application, insert the IP address of the robot in the connection textfield and then click on the Connect button. You should start receiving sensors data and you can send commands to the robot. The LED2 blue will toggle.

Web server

When the robot is in access point mode you can have access to a web page showing the camera image and some buttons that you can use to move the robot; it is a basic example that you can use as a starting point to develop your own web browser interface.
You can use a phone, a tablet or a computer to connect to the robot's WiFi and then you need to open a browser and insert the address 192.168.1.1/monitor.html.

Python examples

Connecting to multiple robots

A simple Python 3 script was developed as a starting point to open a connection with multiple robots and exchange data with them using the WiFi communication protocol. The demo was tested with 10 robots but can be easily extended to connect to more robots.
You can download the script with the command git clone https://github.com/e-puck2/e-puck2_python_wifi_multi.git. The code was tested to work with Python 3.x.

Communication protocol

This section is the hardest part to understand. It outlines all the details about the communication protocols that you'll need to implement in order to communicate with the robot from the computer. So spend a bit of time reading and re-reading this section in order to completely grasp all the details.

Bluetooth and USB

The communication protocol is based on the advanced sercom protocol, used with the e-puck1.x robot. The advanced sercom v2 includes all the commands available in the advanced sercom protocol and adds some additional commands to handle the new features of the e-puck2 robot. In particular, here are the new commands:

Command | Description | Return value / set value
-0x08 | Get all sensors | see section Communication protocol: WiFi for the content description
-0x09 | Set all actuators | see section Communication protocol: WiFi for the content description
-0x0A | Set RGB LEDs, values from 0 (off) to 100 (completely on) | [LED2_red][LED2_green][LED2_blue][LED4_red][LED4_green][LED4_blue][LED6_red][LED6_green][LED6_blue][LED8_red][LED8_green][LED8_blue]
-0x0B | Get button state: 0 = not pressed, 1 = pressed | [STATE]
-0x0C | Get all 4 microphones volumes | [MIC0_LSB][MIC0_MSB][MIC1_LSB][MIC1_MSB][MIC2_LSB][MIC2_MSB][MIC3_LSB][MIC3_MSB]
-0x0D | Get distance from ToF sensor (millimeters) | [DIST_LSB][DIST_MSB]
-0x0E | Get SD state: 0 = micro sd not connected, 1 = micro sd connected | [STATE]
-0x0F | Enable/disable magnetometer: 0 = disable, 1 = enable | 0 = success, 1 = error
-0x10 | Set proximity state: 0 = disable proximity sampling, 1 = enable fast proximity sampling (100 Hz), 2 = enable slow proximity sampling (20 Hz) | 0 = success, 1 = error
-0x11 | Enable/disable time of flight sensor: 0 = disable, 1 = enable | 0 = success, 1 = error

WiFi

The communication is based on TCP; the robot creates a TCP server and waits for a connection.

Each packet is identified by an ID (1 byte). The following IDs are used to send data from the robot to the computer:

  • 0x00 = reserved
  • 0x01 = QQVGA color image packet (only the first segment includes this id); packet size (without id) = 38400 bytes; image format = RGB565
  • 0x02 = sensors packet; packet size (without id) = 104 bytes; the format of the returned values are based on the advanced sercom protocol and are compatible with e-puck1.x:

  • Acc: raw axes values (0=X LSB, 1=X MSB, 2=Y LSB, 3=Y MSB, 4=Z LSB, 5=Z MSB), between -1500 and 1500, resolution is +-2g
  • Acceleration expressed in float: acceleration magnitude, between 0.0 and about 2600.0 (~3.46 g)
  • Orientation expressed in float: between 0.0 and 360.0 degrees
  • Inclination expressed in float: between 0.0 and 90.0 degrees (when tilted in any direction)
  • Gyro: raw axes values (0=X LSB, 1=X MSB, 2=Y LSB, 3=Y MSB, 4=Z LSB, 5=Z MSB), between -32768 and 32767, range is +-250dps
  • Magnetometer: raw axes values expressed in float, range is +-4912.0 uT (magnetic flux density expressed in micro Tesla)
  • Temp: temperature given in Celsius degrees
  • IR proximity (0=IR_0 LSB, 1=IR_0 MSB, ...): between 0 (no objects detected) and 4095 (object near the sensor)
  • IR ambient (0=IR_0 LSB, 1=IR_0 MSB, ...): between 0 (strong light) and 4095 (dark)
  • ToF distance: distance given in millimeters
  • Mic volume (0=MIC_0 LSB, 1=MIC_0 MSB, ...): between 0 and 4095
  • Motors steps: 1000 steps per wheel revolution
  • Battery:
  • uSD state: 1 if the micro sd is present and can be read/write, 0 otherwise
  • TV remote data: RC5 protocol
  • Selector position: between 0 and 15
  • Ground proximity (0=GROUND_0 LSB, 1=GROUND_0 MSB, ...): between 0 (no surface at all or not reflective surface e.g. black) and 1023 (very reflective surface e.g. white)
  • Ground ambient (0=GROUND_0 LSB, 1=GROUND_0 MSB, ...): between 0 (strong light) and 1023 (dark)
  • Button state: 1 button pressed, 0 button released
  • 0x03 = empty packet (only id is sent); this is used as an acknowledgment for the commands packet when no sensors and no image is requested

The following IDs are used to send data from the computer to the robot:

  • 0x80 = commands packet; packet size (without id) = 20 bytes:

  • request:
    • bit0: 0=stop image stream; 1=start image stream
    • bit1: 0=stop sensors stream; 1=start sensors stream
  • settings:
    • bit0: 1=calibrate IR proximity sensors
    • bit1: 0=disable onboard obstacle avoidance; 1=enable onboard obstacle avoidance (not implemented yet)
    • bit2: 0=set motors speed; 1=set motors steps (position)
  • left and right: when bit2 of settings field is 0, then this is the desired motors speed (-1000..1000); when 1 then this is the value that will be set as motors position (steps)
  • LEDs: 0=off; 1=on
    • bit0: 0=LED1 off; 1=LED1 on
    • bit1: 0=LED3 off; 1=LED3 on
    • bit2: 0=LED5 off; 1=LED5 on
    • bit3: 0=LED7 off; 1=LED7 on
    • bit4: 0=body LED off; 1=body LED on
    • bit5: 0=front LED off; 1=front LED on
  • RGB LEDs: for each LED, it is specified in sequence the value of red, green and blue (0...100)
  • sound id: 0x01=MARIO, 0x02=UNDERWORLD, 0x04=STARWARS, 0x08=4KHz, 0x10=10KHz, 0x20=stop sound

For example to receive the camera image (stream) the following steps need to be followed:
1) connect to the robot through TCP
2) send the command packet:

0x80 0x01 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00

3) read the ID (1 byte) and the QQVGA color image packet (38400 bytes)
4) go to step 3
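As an illustration of the steps above, here is a minimal Python 3 sketch that connects to the robot, requests the image stream and reads the packets. The TCP port number is an assumption since it is not given here; check the WiFi firmware documentation for the actual value:

  # Minimal sketch of the image stream loop described above (steps 1-4).
  # Assumptions: the robot IP is known; ROBOT_PORT is the port of the robot's TCP server.
  import socket

  ROBOT_IP = '192.168.1.20'   # adapt to your robot
  ROBOT_PORT = 1000           # assumption, see note above

  def recv_exactly(sock, size):
      data = b''
      while len(data) < size:
          chunk = sock.recv(size - len(data))
          if not chunk:
              raise ConnectionError('connection closed by the robot')
          data += chunk
      return data

  sock = socket.create_connection((ROBOT_IP, ROBOT_PORT))
  # Command packet: id 0x80 + 20 bytes, request bit0 set = start image stream.
  sock.sendall(bytes([0x80, 0x01] + [0x00] * 19))
  for _ in range(10):
      packet_id = recv_exactly(sock, 1)[0]
      if packet_id == 0x01:
          image = recv_exactly(sock, 38400)   # QQVGA RGB565, 160x120
          print('image received ({} bytes)'.format(len(image)))
      elif packet_id == 0x02:
          sensors = recv_exactly(sock, 104)   # sensors packet
      elif packet_id == 0x03:
          pass                                # empty acknowledgment packet
  sock.close()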

Webots

1. Download the last version of Webots for your platform and install it.
2. Program the robot with the WiFi firmware and put the selector in position 15(F). Connect the robot to your WiFi network.
3. Open the example world you can find in the Webots installation directory Webots\projects\robots\gctronic\e-puck\worlds\e-puck2.wbt.
4. Double click the robot, a new small window will appear: insert the IP address of the robot and click connect.

5. Now you can start the demo, the robot will be remote controlled.

For more information have a look at the e-puck Webots guide.

ROS

This chapter explains how to use ROS with the e-puck2 robots by connecting them via Bluetooth or WiFi to the computer that runs the ROS nodes. Basically all the sensors are exposed to ROS and you can also send commands back to the robot through ROS. Both Python and C++ versions are implemented so that users can choose their preferred programming language. Here is a general schema:
Click to enlarge

First of all you need to install and configure ROS; refer to https://wiki.ros.org/Distributions for more information. This tutorial is based on ROS Kinetic. The same instructions work with ROS Noetic; just use noetic instead of kinetic when installing the packages.

Starting from the work done with the e-puck1 (see E-Puck ROS), we updated the code in order to support the e-puck2 robot.

Initial configuration

The following steps need to be done only once, after installing ROS:

1. If not already done, create a catkin workspace, refer to https://wiki.ros.org/catkin/Tutorials/create_a_workspace. Basically you need to issue the following commands:
  mkdir -p ~/catkin_ws/src
  cd ~/catkin_ws/src
  catkin_init_workspace
  cd ~/catkin_ws/
  catkin_make
  source devel/setup.bash 
2. You will need to add the line source ~/catkin_ws/devel/setup.bash to your .bashrc in order to automatically have access to the ROS commands when the system is started
3. Move to ~/catkin_ws/src and clone the ROS e-puck2 driver repo:
4. Install the dependencies:
  • ROS:
  • Python:
    • The ROS e-puck2 driver is based on the e-puck2 Python library that requires some dependencies:
      • install the Python setup tools: sudo apt-get install python-setuptools
      • install the Python image library: sudo apt-get install python-imaging
      • install pybluez version 0.22: sudo pip install pybluez==0.22
        • install pybluez dependencies: sudo apt-get install libbluetooth-dev
      • install OpenCV: sudo apt-get install python3-opencv
  • cpp:
    • install the library used to communicate with Bluetooth: sudo apt-get install libbluetooth-dev
    • install OpenCV: sudo apt-get install libopencv-dev
      • if you are working with OpenCV 4, then you need to change the header include from #include <opencv/cv.h> to #include <opencv2/opencv.hpp>
5. Open a terminal and go to the catkin workspace directory (~/catkin_ws) and issue the command catkin_make, there shouldn't be errors
6. Program the e-puck2 robot with the factory firmware and put the selector in position 3 for Bluetooth communication or in position 15(F) for WiFi Communication
7. Program the radio module with the correct firmware:

Running the Python ROS node

First of all get the latest version of the ROS e-puck2 driver from github. Move to ~/catkin_ws/src and issue: git clone -b e-puck2 https://github.com/gctronic/epuck_driver.
Then build the driver by opening a terminal and issuing the command catkin_make from within the catkin workspace directory (e.g. ~/catkin_ws).
Moreover make sure the node is marked as executable by opening a terminal and issuing the following command from within the catkin workspace directory (e.g. ~/catkin_ws): chmod +x ./src/epuck_driver/scripts/epuck2_driver.py.

Before actually starting the e-puck2 node you need to configure the e-puck2 robot as Bluetooth device in the system, refer to section Connecting to the Bluetooth.
Once the robot is paired with the computer, you need to take note of its MAC address (this will be needed when launching the ROS node). To know the MAC address of a paired robot, go to System Settings, Bluetooth and select the robot; once selected you'll see in the right side the related MAC address.

The first thing to do before launching the script file is running the roscore: open another terminal tab and issue the command roscore.

Now you can finally start the e-puck2 ROS node; for this purpose there is a launch script (based on roslaunch).
Open a terminal and issue the following command: roslaunch epuck_driver epuck2_controller.launch epuck2_address:='B4:E6:2D:EB:9C:4F'.
B4:E6:2D:EB:9C:4F is the e-puck2 Bluetooth MAC address; it needs to be changed according to your robot.

If all goes well you'll see the robot blink, meaning it is connected and ready to exchange data, and rviz will be opened showing the information gathered from the topics published by the e-puck2 driver node.

The launch script is also configured to run the gmapping (SLAM) node that lets the robot construct a map of the environment; the map is visualized in real-time directly in the rviz window. The gmapping package provides laser-based SLAM (Simultaneous Localization and Mapping) and since the e-puck2 has no laser sensor, the information from the 6 proximity sensors on the front side of the robot is interpolated to get 19 laser scan points.

The following figures show all the topics published by the e-puck2 driver node (left) and the rviz interface (right):
Click to enlarge Click to enlarge

Running the cpp ROS node

There is a small difference at the moment between the Bluetooth and WiFi versions of the ROS node: the WiFi ROS node supports also the publication of the magnetometer data.

Bluetooth

First of all get the latest version of the ROS e-puck2 driver from github. Move to ~/catkin_ws/src and issue: git clone -b e-puck2 https://github.com/gctronic/epuck_driver_cpp.
Then build the driver by opening a terminal and issuing the command catkin_make from within the catkin workspace directory (e.g. ~/catkin_ws).

Before actually starting the e-puck2 node you need to configure the e-puck2 robot as Bluetooth device in the system, refer to section Connecting to the Bluetooth.
Once the robot is paired with the computer, you need to take note of its MAC address (this will be needed when launching the ROS node). To know the MAC address of a paired robot, go to System Settings, Bluetooth and select the robot; once selected you'll see in the right side the related MAC address.

The first thing to do before launching the script file is running the roscore: open another terminal tab and issue the command roscore.

Now you can finally start the e-puck2 ROS node; for this purpose there is a launch script (based on roslaunch).
Open a terminal and issue the following command: roslaunch epuck_driver_cpp epuck2_controller.launch epuck2_address:='B4:E6:2D:EB:9C:4F'.
B4:E6:2D:EB:9C:4F is the e-puck2 Bluetooth MAC address; it needs to be changed according to your robot.

If all goes well the robot will be ready to exchange data and rviz will be opened showing the information gathered from the topics published by the e-puck2 driver node.

The launch script is also configured to run the gmapping (SLAM) node that lets the robot construct a map of the environment; the map is visualized in real-time directly in the rviz window. The gmapping package provides laser-based SLAM (Simultaneous Localization and Mapping) and since the e-puck2 has no laser sensor, the information from the 6 proximity sensors on the front side of the robot is interpolated to get 19 laser scan points.

WiFi

First of all get the latest version of the ROS e-puck2 driver from github. Move to ~/catkin_ws/src and issue: git clone -b e-puck2_wifi https://github.com/gctronic/epuck_driver_cpp.
Then build the driver by opening a terminal and issuing the command catkin_make from within the catkin workspace directory (e.g. ~/catkin_ws).

Before actually starting the e-puck2 node you need to connect the e-puck2 robot to your WiFi network, refer to section Connecting to the WiFi.

First thing to do before launching the script file is running the roscore, open another terminal tab and issue the command roscore.

Now you can finally start the e-puck2 ROS node; for this purpose there is a launch script (based on roslaunch).
Open a terminal and issue the following command: roslaunch epuck_driver_cpp epuck2_controller.launch epuck2_address:='192.168.1.20'.
192.168.1.20 is the e-puck2 IP address and needs to be changed according to your robot.

If all goes well the robot will be ready to exchange data and rviz will open showing the information gathered from the topics published by the e-puck2 driver node.

As before, the launch script is also configured to run the gmapping (SLAM) node, which builds a map of the environment from the interpolated proximity readings (19 laser scan points) and shows it in real time in the rviz window.

The refresh rate of the topics is about 11 Hz when the camera image is enabled (see e-puck2_topics_wifi_refresh_camon.pdf) and about 50 Hz when the camera image is disabled (see e-puck2_topics_wifi_refresh_camoff.pdf). The same graphs can be created using the command rosrun tf view_frames.

The following figure shows all the topics published by the e-puck2 WiFi ROS node. The same graph can be created using the command rqt_graph.

Move the robot

There are several ways to move the robot.

The first is to use the rviz interface: in the bottom-left part of the interface there is a Teleop panel containing an interactive square meant to be used with differential drive robots. Clicking in this square moves the robot; for instance, clicking in the top-right section makes the robot move forward-right.

The second method is to use the ros-kinetic-turtlebot-teleop ROS package. If not already done, you can install this package by issuing sudo apt-get install ros-kinetic-turtlebot-teleop.
There is a launch file in the e-puck2 ROS driver that configures this package for use with the e-puck2 robot. To start the launch file, issue the following command: roslaunch epuck_driver epuck2_teleop.launch, then follow the instructions printed in the terminal to move the robot.

The third method is to publish directly on the /mobile_base/cmd_vel topic. For instance, by issuing the command rostopic pub -1 /mobile_base/cmd_vel geometry_msgs/Twist -- '[0.0, 0.0, 0.0]' '[0.0, 0.0, 1.0]' the robot will rotate on the spot, while by issuing the command rostopic pub -1 /mobile_base/cmd_vel geometry_msgs/Twist -- '[4.0, 0.0, 0.0]' '[0.0, 0.0, 0.0]' the robot will move straight forward.
Beware that no other node should be publishing on the /mobile_base/cmd_vel topic, otherwise your commands will be overwritten.
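
If you prefer to send velocity commands from a program instead of rostopic, a minimal rospy publisher could look like the following sketch; it assumes the e-puck2 driver node is already running and subscribed to /mobile_base/cmd_vel.

#!/usr/bin/env python
# Minimal sketch: make the robot rotate on the spot for a few seconds, then stop.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('epuck2_cmd_vel_example')
pub = rospy.Publisher('/mobile_base/cmd_vel', Twist, queue_size=1)
rospy.sleep(1.0)                  # give the publisher time to connect

msg = Twist()
msg.angular.z = 1.0               # rotate on the spot
stop_time = rospy.Time.now() + rospy.Duration(3.0)
rate = rospy.Rate(10)
while not rospy.is_shutdown() and rospy.Time.now() < stop_time:
    pub.publish(msg)
    rate.sleep()

pub.publish(Twist())              # all zero velocities: stop the robot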

Control the RGB LEDs

The general command to change the RGB LEDs colors is the following:
rostopic pub -1 /mobile_base/rgb_leds std_msgs/UInt8MultiArray "{data: [LED2 red, LED2 green, LED2 blue, LED4 red, LED4 green, LED4 blue, LED6 red, LED6 green, LED6 blue, LED8 red, LED8 green, LED8 blue]}"
The values range from 0 (off) to 100 (fully on). Have a look at the e-puck2 overview to know the position of the RGB LEDs.

For instance to set all the RGB LEDs to red, issue the following command:
rostopic pub -1 /mobile_base/rgb_leds std_msgs/UInt8MultiArray "{data: [100,0,0, 100,0,0, 100,0,0, 100,0,0]}"

To turn off all the RGB LEDs issue the following command:
rostopic pub -1 /mobile_base/rgb_leds std_msgs/UInt8MultiArray "{data: [0,0,0, 0,0,0, 0,0,0, 0,0,0]}"
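
The same message can also be published from a small rospy script; the following sketch simply sends the 12 values described above (it assumes the driver node is already running).

#!/usr/bin/env python
# Minimal sketch: set LED2 and LED6 to green, LED4 and LED8 to blue.
import rospy
from std_msgs.msg import UInt8MultiArray

rospy.init_node('epuck2_rgb_leds_example')
pub = rospy.Publisher('/mobile_base/rgb_leds', UInt8MultiArray, queue_size=1)
rospy.sleep(1.0)           # give the publisher time to connect

msg = UInt8MultiArray()
msg.data = [0, 100, 0,     # LED2: green
            0, 0, 100,     # LED4: blue
            0, 100, 0,     # LED6: green
            0, 0, 100]     # LED8: blue
pub.publish(msg)
rospy.sleep(0.5)           # let the message go out before the node exits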

Control the LEDs

The general command to change the LEDs state is the following:
rostopic pub -1 /mobile_base/cmd_led std_msgs/UInt8MultiArray "{data: [LED1, LED3, LED5, LED7, body LED, front LED]}"
The values are: 0 (off), 1 (on) and 2 (toggle). Have a look at the e-puck2 overview to know the position of the LEDs.

For instance to turn on LED1, LED5, body LED and front LED, issue the following command:
rostopic pub -1 /mobile_base/cmd_led std_msgs/UInt8MultiArray "{data: [1,0,1,0,1,1]}"

To toggle the state of all the LEDs issue the following command:
rostopic pub -1 /mobile_base/cmd_led std_msgs/UInt8MultiArray "{data: [2,2,2,2,2,2]}"
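
As for the RGB LEDs, the same message can be published from code; the following sketch toggles all the LEDs once per second (it assumes the driver node is already running).

#!/usr/bin/env python
# Minimal sketch: toggle all the LEDs (value 2 = toggle) once per second.
import rospy
from std_msgs.msg import UInt8MultiArray

rospy.init_node('epuck2_leds_example')
pub = rospy.Publisher('/mobile_base/cmd_led', UInt8MultiArray, queue_size=1)
rate = rospy.Rate(1)
while not rospy.is_shutdown():
    msg = UInt8MultiArray()
    msg.data = [2, 2, 2, 2, 2, 2]  # LED1, LED3, LED5, LED7, body LED, front LED
    pub.publish(msg)
    rate.sleep()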

Visualize the camera image

By default the camera is disabled to avoid communication delays. In order to enable it and visualize the image through ROS you need to pass an additional parameter cam_en to the launch script as follows:

  • Python: roslaunch epuck_driver epuck2_controller.launch epuck2_address:='B4:E6:2D:EB:9C:4F' cam_en:='true'
  • cpp:
    • Bluetooth: roslaunch epuck_driver_cpp epuck2_controller.launch epuck2_address:='B4:E6:2D:EB:9C:4F' cam_en:='true'
    • WiFi: roslaunch epuck_driver_cpp epuck2_controller.launch epuck2_address:='192.168.1.20' cam_en:='true'

Then, with the Python ROS node, you need to open another terminal and issue the command rosrun image_view image_view image:=/camera, which will open a window with the e-puck2 camera image.
With the cpp ROS node the image is visualized directly in the rviz window (on the right).

When using the Bluetooth ROS node, by default the image is greyscale and its size is 160x2, but you can change the image parameters in the launch script.
When using the WiFi node instead, the image is RGB565 with a fixed size of 160x120 (it cannot be changed).
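
Besides image_view and rviz, you can also process the camera image in your own node. The following sketch subscribes to the /camera topic and shows the frames with OpenCV; it assumes the driver publishes sensor_msgs/Image messages on /camera as described above and that cv_bridge and OpenCV are installed.

#!/usr/bin/env python
# Minimal sketch: display the e-puck2 camera stream published on /camera.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_image(msg):
    # Convert the ROS image to an OpenCV BGR image, whatever the source encoding is.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    cv2.imshow('e-puck2 camera', frame)
    cv2.waitKey(1)

rospy.init_node('epuck2_camera_example')
rospy.Subscriber('/camera', Image, on_image, queue_size=1)
rospy.spin()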

Multiple robots

There is a launch script designed to run up to 4 robots simultaneously; you can find it in ~/catkin_ws/src/epuck_driver_cpp/launch/multi_epuck2.launch. Here is an example to run 2 robots:
roslaunch epuck_driver_cpp multi_epuck2.launch robot_addr0:='192.168.1.21' robot_addr1:='192.168.1.23'
After issuing the command, rviz will open showing the values of all 4 robots; it is assumed that the robots are placed in a 20 cm square (one robot at each corner).
Beware that this launch script is available only in the WiFi branch, but it can also be used as a starting point for Bluetooth communication.

Troubleshooting

Robot state publisher

If you get an error similar to the following when you start a node with roslaunch:

ERROR: cannot launch node of type [robot_state_publisher/state_publisher]: Cannot locate node of type [state_publisher] in package [robot_state_publisher]. Make sure file exists in package path and permission is set to executable (chmod +x)

Then you need to change the launch file from:

<node name="robot_state_publisher" pkg="robot_state_publisher" type="state_publisher" />

To:

<node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher" />

This is due to the fact that state_publisher was a deprecated alias for the node named robot_state_publisher (see https://github.com/ros/robot_state_publisher/pull/87).

Tracking

Some experiments were carried out with the SwisTrack software to track the e-puck2 robots through a color marker placed on top of each robot.

The requirements are the following:

  • e-puck2 robots equipped with a color marker attached on top of the robot; beware that there should be a white border of about 1 cm around the marker to avoid wrong detections (marker merging). The color markers were printed with a laser printer.
  • a USB webcam with a resolution of at least 640x480. In our tests we used the Trust SpotLight Pro.
  • Windows OS: the pre-compiled SwisTrack package was built to run on Windows. Moreover, the controller example depends on Windows libraries.
    Note, however, that SwisTrack is multiplatform and that the controller code can be ported to Linux.
  • an arena with uniform lighting conditions to make the detection more robust.

Controller example

In this example we exploit the SwisTrack blob detection feature to detect the color markers on top of the robots and then track these blobs with a Nearest Neighbour tracking algorithm.
The SwisTrack application gets an image from the USB camera, applies some conversions and thresholding, runs the blob detection and finally tracks the blobs. All the data, such as the blob positions, are published to the network (TCP).
The controller is a separate application that receives the data from SwisTrack through the network and opens a Bluetooth connection with each robot in order to remotely control them. In the example, the information received is printed in the terminal while the robots are moved around (obstacle avoidance).
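
Just to give an idea of the controller side (this is not the actual example code), the following Python sketch connects to the SwisTrack TCP output and prints the messages it streams; the host, port and line-oriented message format are assumptions that must be adapted to the TCP server component configured in SwisTrack.

#!/usr/bin/env python
# Rough sketch of a client reading the SwisTrack TCP output stream.
# Host, port and the line-oriented message format are assumptions for this example;
# adapt them to the TCP server settings of your SwisTrack configuration.
import socket

HOST, PORT = '127.0.0.1', 3000     # assumed SwisTrack TCP server address

sock = socket.create_connection((HOST, PORT))
try:
    buffer = b''
    while True:
        chunk = sock.recv(1024)
        if not chunk:
            break                   # SwisTrack closed the connection
        buffer += chunk
        while b'\n' in buffer:
            line, buffer = buffer.split(b'\n', 1)
            # Each line carries tracking data (e.g. blob positions); a real
            # controller would parse it and send motion commands to the robots.
            print(line.decode('ascii', 'replace').strip())
finally:
    sock.close()
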
The following figure shows the connection scheme:


Follow these steps to run the example:

  • program all the e-puck2 robots with the latest factory firmware (see section Firmware update) and put the selector in position 3
  • pair the robots with the computer; refer to section Connecting to the Bluetooth
  • the controller example is based on the C++ remote library, so download it
  • download the controller example by issuing the following command: git clone https://github.com/e-puck2/e-puck2_tracking_example.
    When building the example, make sure that both the library and the example are in the same directory
  • download the pre-compiled SwisTrack software and extract it. The SwisTrack executable can be found in SwisTrackEnvironment/SwisTrack - Release.exe
  • prepare the arena: place the USB camera on the ceiling pointing towards the robots. Download the markers and attach one of them on top of each robot.
  • download the configuration files package for SwisTrack and extract it. Run the SwisTrack executable and open the configuration file called epuck2.swistrack. All the components to accomplish the tracking of 2 robots should be loaded automatically.
    If needed you can tune the various components to improve the blob detection in your environment or to track more robots.
  • Run the controller example: at the beginning you must enter the Bluetooth UART port numbers of the 2 robots. The robots are then moved slightly in order to identify which robot belongs to which blob. Finally the controller loop is started, sending motion commands to the robots for obstacle avoidance and printing the data received from SwisTrack in the terminal.

The following image shows the example running:

Matlab

A Matlab interface is available in the following repository: https://github.com/gctronic/e-puck-library/tree/master/tool/ePic. This interface was developed for the e-puck version 1 robot, but it is also compatible with the e-puck2 robot since it is based on the advanced sercom protocol.