Robotics and Modular Machines

This is a course in which we will explore various aspects of the broad field of Robotics, in theory and in practice.

We will explore fundamental questions such as "What is a robot?" and "What makes a machine a robot?", and concepts like

  • Machine & Motor Control
  • Perception through Sensors & Sensor Systems
  • Autonomy vs. Remote-control
  • Decision-making algorithms and State-machines
  • Modular design

We will build machines, or components of machines that can be assembled in different configurations (i.e. Modular Machines). We will learn how to program these machines to move according to local and remote perception, and local and remote decisions. We will learn how to make the distinct modules of such a system aware of each other's presence and state, and learn about inter-processor communication.
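
To give a first taste of the 'Decision-making algorithms and State-machines' idea, here is a minimal sketch in Python. It is purely illustrative (on the actual machines this logic would run on a microcontroller, and the read_distance() sensor stub is hypothetical): a module drives until an obstacle is reported, then stays in an avoidance state until the path is clear.

    # Minimal state-machine sketch for a driving module.
    # The read_distance() stub stands in for a real distance sensor.
    import random

    def read_distance():
        """Stub: pretend to read a distance sensor (centimetres)."""
        return random.uniform(5, 100)

    state = "DRIVE"

    for step in range(20):              # stand-in for the endless control loop
        distance = read_distance()
        if state == "DRIVE" and distance < 20:
            state = "AVOID"             # obstacle ahead: switch to avoidance
        elif state == "AVOID" and distance >= 20:
            state = "DRIVE"             # path clear again: resume driving
        print(step, state, round(distance, 1))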


Robots from Science Fiction

Czech author Karel Čapek (1890 - 1938) coined the word Robot in his 1920 play R.U.R. It derives from the Czech robota, meaning 'forced labour' or 'drudgery'. Many Science-Fiction authors wrote about robots long before they existed; most notable, of course, are the works of Isaac Asimov (1920 - 1992). Asimov first postulated the Three Laws of Robotics in his short story "Runaround" (1942). The Three Laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

This set of simple, logical rules forms an elegant and concise directive for the behaviour of robots and similar machines. However, they also open up a vast array of assumptions and implications. The ethical implications of these Laws, the obvious and not-so-obvious ones, have been explored at great length in Asimov’s robot-novels.

At this point, I would like to focus on the technological implications of the Three Laws: assumptions and implications about the abilities and capabilities of the machines that could effectively be governed by these Laws.

Starting from the bottom up, in order to comply with the 3rd Law, what must a Robot be able to do?

  • Have a sense of self
    • A model of itself
    • Internal Sensors (proprioception)
  • Be able to detect faults, diagnose them and possibly bypass or repair them
    • redundancy
    • self-repair ability
  • Be able to perceive and assess potential threats (perception)
    • External sensors
    • Sensor-data analysis and feature-extraction
    • Object recognition
      • Based on a huge database of known objects? Generalising algorithms? ???
    • Trajectory prediction (see the sketch after this list)
      • Requires knowledge of Newtonian physics (or Einsteinian physics when high velocities are involved) and the ability to estimate a perceived object's mass & velocity.
    • knowledge of its own resilience (at what expected impact-level does a potential impact cease to be harmless?)
  • Be able to avoid or avert potential threats
    • Spontaneous movement
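
To make the trajectory-prediction point concrete, here is a rough Python sketch. It assumes the robot has already estimated an object's position and velocity from its sensors (here they are simply made-up numbers) and extrapolates the path under Newtonian gravity, checking whether it comes dangerously close:

    # Illustrative trajectory prediction under Newtonian physics.
    # All numbers are invented; 2-D, metres and seconds.
    G = 9.81              # gravitational acceleration (m/s^2)
    DT = 0.05             # simulation time step (s)
    SAFETY_RADIUS = 0.5   # assumed distance at which an impact matters (m)

    def predict_threat(obj_pos, obj_vel, robot_pos, horizon=2.0):
        """Extrapolate the object's path; return the time at which it
        comes within SAFETY_RADIUS of the robot, or None if it never does."""
        x, y = obj_pos
        vx, vy = obj_vel
        t = 0.0
        while t < horizon:
            x += vx * DT
            y += vy * DT
            vy -= G * DT          # only gravity acts on the object
            if ((x - robot_pos[0])**2 + (y - robot_pos[1])**2) ** 0.5 < SAFETY_RADIUS:
                return t
            t += DT
        return None

    # A ball thrown toward a robot standing at (4, 0):
    impact = predict_threat(obj_pos=(0, 1), obj_vel=(4, 3), robot_pos=(4, 0))
    if impact is not None:
        print("predicted impact in %.2f s" % impact)
    else:
        print("no threat within the prediction horizon")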

Quite a list, and probably not exhaustive. But let’s move on.

In order to comply with the 2nd Law, a Robot must be able to:

  • Receive, interpret and understand commands.
  • Analyse and break down a complex task into sub-tasks (see the sketch after this list)
  • Execute (sub-)tasks
  • Report the results of executed tasks.
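
As a flavour of what breaking down a task might look like, here is a toy Python sketch. The command table and sub-task names are entirely invented; a real robot would have to generate such plans rather than look them up:

    # Toy task decomposition: map a high-level command to sub-tasks.
    PLANS = {
        "get milk": [
            "navigate to fridge",
            "open fridge door",
            "locate milk carton",
            "grasp carton",
            "close fridge door",
            "navigate back to user",
            "hand over carton",
        ],
    }

    def execute(subtask):
        print("executing:", subtask)
        return True               # stub: a real module reports success/failure

    def run_command(command):
        plan = PLANS.get(command)
        if plan is None:
            print("cannot interpret command:", command)
            return
        for subtask in plan:
            if not execute(subtask):
                print("failed at:", subtask)   # report back, as per the list above
                return
        print("done:", command)

    run_command("get milk")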

This may sound obvious, but the list above is just the tip of the iceberg. The purely practical question of whether the commands are received through some sort of keyboard on the machine, a remote-control device or voice commands is trivial compared to the task of really understanding commands. Compare: "move 2 meters south", "move 2 meters left", "take that bottle from the table and bring it here", "go get milk"... Obviously, depending on the given command, the machine may have to do a large amount of inferring, interpolating and extrapolating.

From your own experience, you know that, even after many years of development and refinement, present-day operating systems are certainly not capable of dealing with vague commands like "oh, please download that movie with that guy, you know... and put it on the harddisk somewhere", or "please pay my telephone bills", for example...

The desired ability for the robot to handle and manipulate objects brings the machine’s capability of perception to whole new levels of complexity. Unless of course the position of the objects relative to the robot is precisely known a priori, as in factory assembly-lines.

Finding a sequence of sub-tasks that might accomplish an overall task is also quite an interesting challenge. This requires a close link with the self-perception of the machine ("can I take this action now?") and might involve finding implementable solutions for abilities we humans know as 'intuition', 'causality' and 'improvisation'...

These topics are whole fields of study within the discipline of Artificial Intelligence.

By comparison, compliance with the 1st Law does not require that much more from the machine's capabilities. I would say mainly:

  • the ability to distinguish between humans, other 'active' agents (animals, other robots, vehicles) and passive 'objects'.
  • knowledge of what constitutes 'harm' to a human. This might possibly be accomplished by extending the capacity to detect, assess and avert harm to self (from the 3rd Law) and being able to project this 'sense of threats and self-preservation' onto perceived humans.

Autonomy

Most, if not all, of these abilities contribute toward the machine's autonomy: its ability to function independently. Independence and autonomy are quite broad and abstract concepts, so it helps if we can be more specific.

Independent movement can be divided into

  • movement of the machine's limbs and other body-parts relative to each other
  • movement of the machine as a whole

The former requires proprioception and some degree of perception; the latter requires perception of the environment and the capability to navigate within it. Other considerations for independent movement of the machine are its capabilities of communication and energy-autonomy. How will the machine receive commands and report back? Where is its power coming from? Cables vs. batteries, wired vs. wireless communication and/or audio-visual communication.
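
To make 'movement of the machine as a whole' slightly more concrete: a platform that drives two wheels independently (a differential drive, like the one we built in the workshop below) can estimate its own pose from how far each wheel has turned. A minimal odometry sketch in Python, with a made-up wheel base:

    # Differential-drive odometry sketch: estimate pose (x, y, heading)
    # from left/right wheel travel. The wheel base is an assumption.
    import math

    WHEEL_BASE = 0.30     # assumed distance between the two wheels (m)

    def update_pose(x, y, theta, d_left, d_right):
        """Advance the pose given how far each wheel rolled (metres)."""
        d_center = (d_left + d_right) / 2.0
        d_theta = (d_right - d_left) / WHEEL_BASE
        x += d_center * math.cos(theta + d_theta / 2.0)
        y += d_center * math.sin(theta + d_theta / 2.0)
        return x, y, theta + d_theta

    # Drive straight for 1 m, then arc to the left:
    pose = (0.0, 0.0, 0.0)
    pose = update_pose(*pose, d_left=1.0, d_right=1.0)
    pose = update_pose(*pose, d_left=0.2, d_right=0.4)
    print("x=%.2f m  y=%.2f m  heading=%.1f deg"
          % (pose[0], pose[1], math.degrees(pose[2])))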

Another aspect of autonomy is the capacity for independent thought: the ability to assess situations, weigh the possible outcomes of potential actions, make decisions, etc. More AI again...

Yet another interesting aspect of autonomy in machines is what might be called 'independence from maintenance and repair'. In this area, redundancy is the idea of designing a machine from smaller sub-units that partially or completely overlap each other's function. If one sub-unit fails, one or more others can take over the broken sub-unit's tasks until it is convenient to shut the machine down and have the broken unit replaced. Or maybe the machine can replace the broken sub-unit itself; then we're talking about self-repair capabilities.
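
A small Python sketch of the redundancy idea, with invented module names: when a sub-unit reports a fault, its task is handed to any healthy sub-unit whose function overlaps it:

    # Redundancy sketch: sub-units that partially overlap in function.
    # Module names and capabilities are invented for illustration.
    modules = {
        "drive-left":  {"ok": True,  "can": {"drive"}},
        "drive-right": {"ok": True,  "can": {"drive"}},
        "camera-pan":  {"ok": False, "can": {"aim-camera"}},   # faulty
        "camera-tilt": {"ok": True,  "can": {"aim-camera"}},
    }

    def assign(task):
        """Pick any healthy module whose function covers the task."""
        for name, module in modules.items():
            if module["ok"] and task in module["can"]:
                return name
        return None               # no redundancy left: time for repair

    for task in ("drive", "aim-camera"):
        print(task, "->", assign(task))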

Robot Workshop

On 22, 23 & 24 March 2012, we held our first Robotics Workshop. The goal was to build a proof-of-concept style driving platform with a moving camera.

Materials used:

  • 2 stepper-motors (1.5Nm torque) 23LC064-025-8W-F10.F10-1.5
  • 2 stepper-motor drivers M415C
  • 2 micro-servos TowerPro MG90
  • a small, light-weight webcam Sweex WC003V6
  • a 24V 6.3A power-supply, with a long spool of cable TDK Lambda SWS150-24
    • Eventually we want the platform to run on batteries, of course
  • a castor wheel, and two drive-wheels from an old foldable cart (Thanks Leslie!!) that we mounted directly onto the shafts of the stepper-motors.
  • 2 Arduino Uno boards, one controlling the drive-motors, the other controlling the camera pan & tilt servos
  • a USB-hub with a long USB-cable
    • We tried wireless control, using an Arduino BT board, but the Bluetooth connection proved unreliable.
    • We will have to look into other wireless control & communication protocols.

Laura kindly lent us her Xbox game-controller, and wrote the Python code to use the controller's two joysticks to steer the robot and the camera.
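
Laura's actual code is not reproduced here, but the general idea can be sketched as follows, using pygame to read the controller and pyserial to talk to the two Arduinos over the USB hub. The serial-port names, axis numbers and the ASCII command format are assumptions for illustration:

    # Sketch of joystick control: left stick drives the platform,
    # right stick pans/tilts the camera. Ports, axis numbers and the
    # command format are assumptions, not the protocol we actually used.
    import pygame
    import serial

    pygame.init()
    pygame.joystick.init()
    stick = pygame.joystick.Joystick(0)
    stick.init()

    drive = serial.Serial("/dev/ttyACM0", 9600)    # Arduino with the motor drivers
    camera = serial.Serial("/dev/ttyACM1", 9600)   # Arduino with the pan/tilt servos

    while True:
        pygame.event.pump()                  # refresh the joystick state
        # axes come in as -1.0 .. 1.0; scale to convenient integer ranges
        speed = int(stick.get_axis(1) * -100)      # forward/back
        turn = int(stick.get_axis(0) * 100)        # left/right
        pan = int(stick.get_axis(3) * 90)          # camera pan (degrees)
        tilt = int(stick.get_axis(4) * 90)         # camera tilt (degrees)
        drive.write(("D %d %d\n" % (speed, turn)).encode())
        camera.write(("C %d %d\n" % (pan, tilt)).encode())
        pygame.time.wait(50)                 # ~20 updates per second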

Documentation

Here is a video-collage that Quinten made.



On 6, 7, 8 and 13, 14 & 15 March 2013, we held another Robotics Workshop.

Some documents: