Robotics


   A robot can be defined as a programmable, self-controlled device consisting of electronic, electrical, or mechanical units. More generally, it is a machine that functions in place of a living agent. Robots are especially desirable for certain work functions because, unlike humans, they never get tired; they can endure physical conditions that are uncomfortable or even dangerous; they can operate in airless conditions; they do not get bored by repetition; and they cannot be distracted from the task at hand.
  The concept of robots is a very old one, yet the actual word robot was coined in the 20th century from the Czech words robota and robotnik, meaning forced labour, servitude, or serf. Robots don't have to look or act like humans, but they do need to be flexible so they can perform different tasks.
  Early industrial robots handled radioactive material in atomic labs and were called master/slave manipulators. They were connected together with mechanical linkages and steel cables. Remote arm manipulators can now be moved by push buttons, switches or joysticks.
   Current robots have advanced sensory systems that process information and appear to function as if they have brains. Their "brain" is actually a form of computerized artificial intelligence (AI). AI allows a robot to perceive conditions and decide upon a course of action based on those conditions.
A robot can include any of the following components:
effectors - "arms", "legs", "hands", "feet"
sensors - parts that act like senses, detecting objects or conditions such as heat and light and converting that information into signals the computer can understand
computer - the brain that contains instructions called algorithms to control the robot
equipment - tools and mechanical fixtures

   Characteristics that make robots different from regular machinery are that robots usually function by themselves, are sensitive to their environment, adapt to variations in the environment or to errors in prior performance, are task-oriented, and often have the ability to try different methods to accomplish a task.
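   To make these components concrete, here is a minimal sense-think-act loop in Python. It is only an illustrative sketch: the sensor, the decision rule, and the wheel commands are invented placeholders, not any particular robot's interface.

```python
import random
import time

def read_distance_sensor():
    """Hypothetical sensor: distance to the nearest obstacle, in metres."""
    return random.uniform(0.0, 2.0)

def drive(left_speed, right_speed):
    """Hypothetical effectors: command the two drive wheels."""
    print(f"wheels: left={left_speed:.1f}, right={right_speed:.1f}")

def control_step():
    """One pass of the sense-think-act cycle."""
    distance = read_distance_sensor()   # sense
    if distance < 0.5:                  # think: the 'algorithm' in the computer
        drive(-0.2, 0.2)                # act: spin away from the obstacle
    else:
        drive(0.5, 0.5)                 # act: carry on forward

if __name__ == "__main__":
    for _ in range(10):
        control_step()
        time.sleep(0.1)
```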

   Common industrial robots are generally heavy rigid devices limited to manufacturing. They operate in precisely structured environments and perform single highly repetitive tasks under pre-programmed control. There were an estimated 720,000 industrial robots in 1998.  Tele-operated robots are used in semi-structured environments such as undersea and nuclear facilities. They perform non-repetitive tasks and have limited real-time control.
  The robots in use today can be divided into three broad types: puppet robots, stationary robots, and world modelling robots.

   Puppet robots, for many, are not really robots at all, but merely remotely controlled tools. They rely entirely on human control and use tele-presence, a type of control system in which the operator is kept aware of the robot's situation by means of a video camera or other sensors.
    This is a 1960s-era manual controller developed for controlling robots operating in radioactive environments. The controller is roughly the size of a human arm. At the back of the controller are a number of black disks: electric motors. These motors provide the energy to feed forces back to the operator. The forces are proportional to the current in the robot's motors, which in turn is proportional to the forces being experienced by the robot. Placing the motors at the back of the controller provides perfect counterbalancing, and they drive the robot joints via metal tape with almost no friction. Developed 40 years ago and without any computer control, I believe this controller works as well as or better than any human-arm-scale controller available today.
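    The force-reflection idea above, feeding a torque back to the operator's hand in proportion to the current drawn by the robot's motors, can be sketched in a few lines of Python. The motor constant and scaling factor are invented values for illustration, not measurements from the original controller.

```python
# Illustrative sketch of force reflection; constants are assumed, not measured.
MOTOR_TORQUE_CONSTANT = 0.05   # N·m per ampere for the robot's motors (hypothetical)
FEEDBACK_SCALE = 0.5           # fraction of the robot torque reflected to the operator

def operator_feedback_torque(robot_motor_current_amps: float) -> float:
    """Torque the controller's motors should apply to the operator's hand.

    The current in the robot's motors is roughly proportional to the load the
    arm is pushing against, so reflecting a scaled copy of it lets the operator
    feel the remote forces.
    """
    robot_torque = MOTOR_TORQUE_CONSTANT * robot_motor_current_amps
    return FEEDBACK_SCALE * robot_torque

print(operator_feedback_torque(3.0))   # 0.075 N·m back at the handle
```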
    Stationary robots are robots placed in a structured setting where conditions change very little. Stationary robots in factories, which spray paint, carry loads, arc-weld, pour poisons, handle explosives, or place parts, are excellent employees that don't get sick or tired. They don't need coffee or bathroom breaks, and their computer brains allow them to function without human control. But factory robots are far from being totally independent: because they are effectively deaf, blind, and dumb, they have to work in static settings. If the environment changes, the robot probably won't be able to do its job.

Some examples:
    A robot welder seals the corners of a car's windscreen frame. The human welder no longer participates in the process, but a human electronics technician has to monitor the robot's operation, repair it if necessary, and stop the production line if something goes wrong. Progress may require less overall human input, but it inevitably requires some human skills.

 

   A robot equipped with the V-500iA/2DV vision sensor uses visual line tracking to palletize and depalletize cased product. Visual line tracking allows the unit to handle multiple products on the same automation line. The machine is a four-axis, modular, electric servo-driven robot with a 40 kg payload and a remote control unit. It is designed for high-speed manufacturing applications, including packaging, palletizing, material handling, machine load/unload and parts transfer.
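   As a rough idea of what visual line tracking involves, the sketch below predicts where a part spotted by the camera will be by the time the robot reaches for it, using the distance the conveyor has travelled since detection. The data structure, names and numbers are assumptions for illustration, not the actual V-500iA/2DV interface.

```python
from dataclasses import dataclass

@dataclass
class TrackedPart:
    x_at_detection: float   # position along the belt (m) when the camera saw the part
    y_offset: float         # lateral offset on the belt (m)
    product_type: str       # lets one line carry several different products

def pick_position(part: TrackedPart, encoder_now: float, encoder_at_detection: float):
    """Estimate the part's current position from conveyor encoder travel.

    The belt has moved (encoder_now - encoder_at_detection) metres since the
    vision system located the part, so the pick point shifts by the same amount.
    """
    travel = encoder_now - encoder_at_detection
    return (part.x_at_detection + travel, part.y_offset)

part = TrackedPart(x_at_detection=0.10, y_offset=0.02, product_type="case_A")
print(pick_position(part, encoder_now=1.35, encoder_at_detection=0.60))  # (0.85, 0.02)
```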


   The next type of robot, which some people feel is the only type that deserves the name "robot," is the world modelling robot, or more simply, a robot that responds to its environment. A world modelling robot is, in some respects, far superior to a puppet or stationary robot, in that it can function in, and respond to, a changing environment. In times past, the main way to program this type of robot was to have it store a model of its surroundings in its brain, hence the name "world modelling" robot. This type of programming was effective but extremely slow, because of the huge amount of data in even the simplest surroundings, so in the late 1980s Rodney Brooks, of the MIT Artificial Intelligence Laboratory, came up with a different control strategy called subsumption architecture.
   Subsumption architecture is "…a way of organizing the intelligence systems by means of layering task-achieving behaviors without recourse to world models or sensor fusion". More simply, subsumption architecture means having different behaviors triggered by different stimuli, and layering those behaviors in a hierarchy from top-level to bottom-level. This approach allows the seemingly complex behaviors exhibited by animals to be broken down into small commands, like "go forward", "lift leg" and "move back", that execute in response to particular stimuli. Yet even with these control strategies, robots are still far from achieving human-level behavior, simply because "…humans are just very good. We take for granted our own biological selves…Installing human-level equivalence in a robot is quite a challenge".
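   A toy version of that layering idea might look like the Python below: each behavior decides whether it wants control, and a higher-priority layer subsumes (overrides) the output of the layers beneath it. The behaviors, sensor names and thresholds are invented for illustration; Brooks's own implementation used networks of augmented finite-state machines rather than anything this simple.

```python
class Behavior:
    def active(self, sensors):   # does this layer want control right now?
        raise NotImplementedError
    def command(self, sensors):  # what should the robot do if it has control?
        raise NotImplementedError

class Wander(Behavior):                       # bottom layer: default behavior
    def active(self, sensors): return True
    def command(self, sensors): return "go forward"

class AvoidObstacle(Behavior):                # middle layer: overrides wandering
    def active(self, sensors): return sensors["obstacle_distance"] < 0.3
    def command(self, sensors): return "move back"

class Recharge(Behavior):                     # top layer: overrides everything
    def active(self, sensors): return sensors["battery"] < 0.1
    def command(self, sensors): return "seek charger"

LAYERS = [Recharge(), AvoidObstacle(), Wander()]   # highest priority first

def arbitrate(sensors):
    """The highest active layer subsumes all the layers below it."""
    for layer in LAYERS:
        if layer.active(sensors):
            return layer.command(sensors)

print(arbitrate({"obstacle_distance": 0.2, "battery": 0.8}))   # -> "move back"
```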

   One example of a world-modeling robot is a robot employed by the Los Angeles Museum of Art. This robot patrols the building at night, watching out for walls and obstacles while also checking for smoke, humidity, and damaging chemicals. Another example is the Help-Mate robot from Transitions Research Corporation in Connecticut, which delivers records, medicine, food trays, and supplies in a hospital, finding its way around with a pre-programmed map of the building. The last example, which is more accessible to the average household, is the Poulan/Weedeater robotic lawn mower, developed in 1994. This machine keeps away from the road and from off-limits property by means of a wire embedded around the property; when it detects an obstacle, it turns and continues on its way. The robot is solar powered and runs constantly. These types of robots are probably the most beneficial to the average household because of their low production cost and adaptability. Most likely, world-modeling/subsumptive robots will continue to advance and shape the future of robotics.

   Despite advances in artificial intelligence, sensors and mechanical devices, researchers are still a long way from realizing the guiding vision of robotics: machines that can move and work like humans, learn new tasks with little or no training, and react with sensitivity to the changing moods of their mortal masters. Instead, most robots remain human-dependent machines that can perform only specialized tasks, like welding parts in a factory, searching through the rubble of a collapsed building or vacuuming a living room. Few display what could be considered sensitivity to people, and those that do tend to be toys, like Sony's Aibo pet, that serve only to entertain.
    Robotics researchers are realizing that the journey to more autonomous, adaptable robots will require more than just improvements in mechanical, sensory and computing capabilities. Equally important, they say, is improving the way people and robots interact: after all, that may be how robots will learn, and to be truly useful, robots must be acceptable to people.
"Now that robots are beginning to come into our world, it's time to look beyond engineering and ask how people are going to react to them," said Arvin Agah, a robotics researcher at the University of Kansas.
Not all researchers believe that an all-purpose humanoid robot is a realistic goal, at least in the short term.
"I don't doubt that we will see more special-purpose machines such as robotic lawn mowers and car washers," said George Bekey, a robotics researcher at the University of Southern California. "But I do not expect the same robot to be able to vacuum the home and make coffee and take the dog for a walk."
    Nonetheless, researchers at robotics labs around the world are studying the way people and robots interact. If people are to teach machines, they ask, what would be the best way? And if machines are to serve people, washing dishes and sending faxes, what kind of robotic behavior will people be comfortable with? How should the robots appear?
    Some scientists believe that making robots seem human will smooth interaction. Shuji Hashimoto, a robotics engineer at Waseda University in Tokyo, envisions a world in which humans and humanoid robots will interact seamlessly, teaming up to carry out domestic and office tasks. "Since personal robots will have to operate in environments designed for humans, they will be better off functionally with a form like the human body," he said.
   Beyond mastering some social skills and developing the ability to learn tasks, robotic assistants would need to move in ways acceptable to users. Dr. Agah at the University of Kansas has studied people's psychological responses to a mobile robot. Working with a colleague, he asked 40 subjects how comfortable they felt in different situations around a cylindrical robot that was about a foot tall and moved on wheels.
  The researchers found that most subjects preferred that the robot move at a slower speed than normal walking pace. When the robot was mounted with a humanoid body, subjects wanted it to stay at a distance from them; in that form, the robot seemed to invade the subjects' personal space by coming too close to their faces.
   "These observations would not have been relevant back in the old days when robots were inside a cage painting cars in Detroit," Dr. Agah said. "Now they may be central to many aspects of robot design."
   At the Royal Institute of Technology in Sweden, researchers have tried to tackle the issue of distance, both physical and emotional. Led by Kerstin S. Eklundh, the researchers have built a prototype of a robot that can accomplish office tasks. Users can communicate with it by speaking to it or by clicking on a graphical interface on a computer. The researchers believed that having both modes of interaction would be important in an office, where workers might be too deeply immersed in other tasks to speak to a robot.
 

The office assistant is a doll on a mobile platform. It has no facial features but can make simple head and arm movements.  Dr. Eklundh and her colleagues chose its design to give users the sense that they were working with a reliable transportation agent. In one interaction they have programmed, a user can ask the robot to bring coffee from the kitchen. The doll tilts its head in a gesture of attention. To express its understanding of the command, the robot repeats the command as a question: "Get coffee from the kitchen?" "Yes, please," the user answers. The robot responds with "Going to get coffee from the kitchen!" and sets off. "The important thing is for the robot to give a clear indication about where it is headed and what it's going to do next," Dr. Eklundh said.
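   The confirm-then-announce exchange described above can be captured as a small dialogue routine. The sketch below is a guess at the interaction pattern only; the wording and structure are assumptions, not the Royal Institute of Technology group's actual software.

```python
def handle_request(command: str, ask_user) -> str:
    """Echo the command back as a question, await confirmation, then announce intent.

    `ask_user` is any callable that poses a question and returns the user's reply,
    for example the built-in input() on a console.
    """
    reply = ask_user(f"{command}?")                # "Get coffee from the kitchen?"
    if reply.strip().lower().startswith("yes"):
        return f"Going to {command.lower()}!"      # clear statement of what happens next
    return "Okay, cancelling that request."

# Scripted example; replace the lambda with input to try it interactively.
print(handle_request("Get coffee from the kitchen", lambda question: "Yes, please"))
```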
    As researchers plug away, trying to breathe human characteristics into circuits and metal, they parade the best of the humanoid robots before museum visitors and television cameras. Honda's Asimo, a robot that resembles an astronaut and can walk up and down stairs, does a little work as well. At the automaker's Tokyo headquarters, Asimo sometimes guides visitors to a conference room. "People really enjoy this," a company representative said. "The only problem is that when Asimo leaves the room, guests tend to follow it to see what it's going to do next."

   Robots that walk like human beings are common in science fiction but not so easy to make in real life. The most famous current example, the Honda Asimo, moves smoothly but on large, flat feet. And compared with a person, it consumes much more energy.  But researchers at Cornell University, the Massachusetts Institute of Technology (MIT) and Holland's Delft University of Technology have built robots that seem to more closely mimic the human gait -- and the Cornell robot matches human efficiency. The researchers' inspiration: simple walking toys that fascinated children in the 19th century.

   Gravity-powered walking toys work by swaying from side to side, allowing first one foot and then the other to swing forward. Human beings minimize the swaying and bend their knees to allow the moving foot to clear the ground, and two of the three new robots do the same. All three robots have arms synchronized to swing with the opposite leg for balance.
   The Cornell robot supplies power to the ankles to push off. When the forward foot hits the ground, a simple microchip controller tells the rear foot to push off. During the forward swing of each leg a small motor stretches a spring, which is finally released to provide the push.
   The Delft robot uses a pneumatic push at the hip, and the MIT robot uses electric motors that directly move the ankle. Control programs in the Cornell and Delft robots are extremely simple, while the MIT robot uses a learning program that allows the robot to teach itself to walk, which it can do in about 600 steps.
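   The Cornell rule is simple enough to caricature in a few lines: wind up an ankle spring while the leg swings, and release the trailing ankle's spring the moment the leading foot strikes the ground. The spring model and energy figures below are assumptions for illustration, not the actual microchip firmware.

```python
class AnkleSpring:
    """Spring wound up by a small motor during the leg's swing phase."""
    def __init__(self):
        self.energy_stored = 0.0
    def wind(self, joules=1.0):
        self.energy_stored += joules
    def release(self):
        push = self.energy_stored
        self.energy_stored = 0.0
        return push                      # energy delivered as push-off

left, right = AnkleSpring(), AnkleSpring()

def on_heel_strike(forward_foot: str):
    """When the forward foot hits the ground, the rear foot pushes off."""
    rear = right if forward_foot == "left" else left
    print(f"{forward_foot} heel strike -> rear ankle pushes off with {rear.release():.1f} J")

# One simulated stride: the right ankle's spring was wound earlier, then the left foot lands.
right.wind()
on_heel_strike("left")
```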
   The fact these robots can walk with a humanlike gait with very simple control programs "suggests that steady-state human walking might require only simple control as well," the researchers say in their paper. "The success of human mimicry demonstrated here … strongly suggests an intimate relationship between body architecture and control in human walking."

© Copyright pc-control.co.uk 2008